\begin{document} \title{The four-state problem and convex integration for linear differential operators} \begin{abstract} We show that the four-state problem for general linear differential operators is flexible. The only flexibility result available in this context is the one for the five-state problem for the $\curl$ operator due to B. Kirchheim and D. Preiss, \cite[Section 4.3]{KIRK}, and its generalization \cite{FS}. To build our counterexample, we extend the convex integration method introduced by S. M\"uller and V. \v Sver\'ak in \cite{SMVS} to linear operators that admit a potential, and we exploit the notion of \emph{large} $T_N$ configuration introduced by C. F\"orster and L. Sz{\'{e}}kelyhidi in \cite{FS}. \end{abstract} \par \noindent \textbf{Keywords:} convex integration, flexibility, $\mathcal{A}$-free maps, four-state problem. \par \noindent {\sc MSC (2020):} 35B99 - 35E20 - 35G35. \par \section{Introduction} In recent years, much attention has been devoted to the study of properties of $\mathcal{A}$-free maps $u$, i.e. maps $u$ that verify $$\mathcal{A}(u) = 0,$$ for a linear differential operator $\mathcal{A}$. The properties one can expect from $\mathcal{A}$-free maps are strongly related to the form of $\mathcal{A}$. If it is elliptic, for instance $$\mathcal{A}(u) = \Delta u,$$ one can show smoothness of solutions. If $\mathcal{A}$ is not elliptic, one cannot in general expect any improvement in the regularity of $u$. For example, if $\mathcal{A} = \curl$, the best one can infer about an $\mathcal{A}$-free map $u$ is that it can be locally expressed as the gradient of a map $v$. In this case, more subtle questions arise: for instance, understanding the structure of the singular part of $\curl$-free measures leads to the celebrated rank-one theorem of Alberti, \cite{ALB}, see also \cite{MV,GUIANN}. In this paper, we shall focus on non-elliptic operators.
\\ \\ Classically, the most studied non-elliptic operator is $\mathcal{A} = \curl$, and we refer the reader to \cite{DMU} for an account of the theory. A rich literature is also available in the case $\mathcal{A} = \dv$, see for instance \cite{SER,GN,PP,DRST,LR}. Typical questions in this context concern fine qualitative properties of measures $\mu$ satisfying $\mathcal{A}(\mu) = 0$, see \cite{GUIANN, RDHR}, higher order estimates and regularity, see \cite{SER,LR,HIGHER, VS, GC}, semicontinuity of functionals defined on $\mathcal{A}$-free maps, see \cite{FM,DRST,REL,WIE,RAI}, and structural results on Young measures generated by sequences of $\mathcal{A}$-free maps, see \cite{FM,GR,KR}. \\ \\ This paper is devoted to the study of the $s$-state problem for general linear operators $\mathcal{A}$, which we now state. Fix an open set $\Omega \subset \mathbb R^{m}$ and a linear differential operator $\mathcal{A}:C^{\infty}(\Omega , \mathbb R^{n}) \to C^{\infty}(\Omega , \mathbb R^{N})$ of order $k$, and consider its associated wave cone $\Lambda_\mathcal{A} \subset \mathbb R^n$, see \eqref{LAMBDA} for the definition. It is well known that if $a,b \in \mathbb R^n$ satisfy \[ a-b \in \Lambda_\mathcal{A}, \] then one can find a non-constant oscillatory solution $u$ to \[ \begin{cases} u(x) \in \{a,b\}, &\text{ a.e. on }\Omega,\\ \mathcal{A}(u) = 0, &\text{ in the sense of distributions}. \end{cases} \] This can be achieved by the so-called simple laminate construction, see the beginning of Section \ref{section:simplelaminates}. The question becomes more challenging when we add the constraint \[ a-b \notin \Lambda_\mathcal{A}. \] The $s$-state problem then asks whether there exists a non-constant solution to \begin{equation}\label{problem} \begin{cases} u(x) \in \{a_1,\dots, a_s\}, &\text{ a.e. on }\Omega,\\ \mathcal{A}(u) = 0, &\text{ in the sense of distributions},\\ a_i-a_j \notin \Lambda_\mathcal{A}, & \text{ if } i\neq j,\ 1\le i,j \le s. \end{cases} \end{equation} System \eqref{problem} has already received much attention, and we now give an account of the literature. \\ \\ Problem \eqref{problem} was first studied for $\mathcal{A} = \curl$. In that context, J.M. Ball and R.D. James in \cite{BJ} showed that if $s = 2$, the problem is \emph{rigid}, i.e. the only solutions to \eqref{problem} are the constant ones. The same rigidity holds if $s = 3$, and is sometimes attributed to K. Zhang, as in \cite{CMKB,SMA}, and sometimes to V. \v Sver\'ak, as in \cite[Section 2.4]{DMU}. Rigidity still holds for $s = 4$, as proved in \cite{CMKB} by Kirchheim and M. Chleb\'ik. Finally, for $s = 5$ the problem becomes \emph{flexible}, i.e. one can find a non-constant map $u$ that takes precisely $5$ states and solves \eqref{problem}. This construction is due to Kirchheim and Preiss and appears in \cite[Section 4.3]{KIRK}. Similar results are known also for $\mathcal{A} = \dv$. For this operator, rigidity for the $s$-state problem \eqref{problem} was proved by A. Garroni and V. Nesi in \cite{GN} and by M. Palombaro and M. Ponsiglione in \cite{PP}, in the cases $s = 2$ and $s = 3$ respectively. To the best of our knowledge, nothing is known for $s \ge 4$. Some results concerning rigidity for linear operators $\mathcal{A}$ of order one also appeared in \cite{MAB}. Finally, for general operators $\mathcal{A}$, rigidity for $s = 2$ was proved by G. De Philippis, L. Palmieri and F. Rindler in \cite{PPR}. \\ \\ Our main theorem fits in this list of results, since it asserts that the four-state problem is flexible: \begin{NTEO} There exists an operator $\mathcal{A}$ such that problem \eqref{problem} with $s = 4$ admits a non-constant solution. \end{NTEO} Our main result should be compared with \cite[Theorem 1.2(A)]{PPR}.
In particular, it states that a result in the generality of \cite[Theorem 1.2(A)]{PPR} is not possible if $s \ge 4$. The case $s = 3$ remains open. For such $s$, the only known result is the rigidity for operators of order one, which can be inferred from the rigidity result for $\mathcal{A} = \dv$ of \cite{PP}, as we will prove in Proposition \ref{rig}. Therefore, we can list here the following open questions on the problem: \begin{OQ} Is problem \eqref{problem} rigid for operators of order $2$ or higher if $s = 3$? \end{OQ} \begin{OQ} Is problem \eqref{problem} rigid for operators of order $1$ if $s = 4$? \end{OQ} Together with the study of \emph{exact} solutions to \eqref{problem}, one may consider rigidity and flexibility of \emph{approximate} solutions to \eqref{problem}, i.e. classify limit points of sequences $(u_n)_n$ equibounded in $L^\infty$ and satisfying \begin{equation}\label{prob_app} \begin{cases} \dist(u_n(x),\{a_1,\dots, a_s\}) \to 0, &\text{ strongly in }L^1 \text{ as }n\to \infty,\\ \mathcal{A}(u_n) = 0,\ \forall n, &\text{ in the sense of distributions}, \end{cases} \end{equation} coupled once again with the requirement $a_i - a_j \notin \Lambda_\mathcal{A}$ if $i \neq j$. In this case, if $\mathcal{A} = \curl$, problem \eqref{prob_app} is rigid, i.e. $(u_n)_n$ converges strongly in $L^1$ to a constant, if $s = 2,3$, see \cite{BJ,SMA}, but it is flexible if $s = 4$, due to the existence of Tartar's $T_4$, see for instance \cite[Lemma 2.6]{DMU}. In \cite[Theorem 1.2(B)]{PPR}, it is shown that the same rigidity result holds for general operators $\mathcal{A}$ if $s = 2$. This is sharp, since in \cite[Lemma 4.1]{GN} the authors show flexibility of \emph{approximate} solutions by producing a $T_3$ configuration for the operator $\mathcal{A} = \dv$. \\ \\ Let us outline the strategy we adopt to show our main theorem.
We recall that in \cite{FS}, F\"orster and Sz{\'{e}}kelyhidi gave the definition of a \emph{large} $T_5$ \emph{configuration}, which generalizes the construction of Kirchheim and Preiss in \cite{KIRK}. Every large $T_5$ configuration is a five-point set $K \subset \mathbb R^n$ that fulfills certain geometric constraints and has the property that the $5$-state problem for $K$ with $\mathcal{A} = \curl$ is non-rigid. First, we extend this notion in a natural way to general linear operators, see Definition \ref{T4} of large $\Lambda_\mathcal{A}$-$T_4$ configurations. Instead of fixing a particular operator $\mathcal{A}$ and then trying to find a large $\Lambda_\mathcal{A}$-$T_4$ configuration, we consider a set $K = \{a_1,a_2,a_3,a_4\}$ that satisfies suitable geometric constraints and then find an operator $\mathcal{A}$ such that $K$ is a large $\Lambda_\mathcal{A}$-$T_4$ configuration for that particular operator. In order for this plan to work, we need to prove that, as in \cite{FS}, large $\Lambda_\mathcal{A}$-$T_4$ configurations yield non-constant solutions of \eqref{problem}. A large part of this proof comes from the convex integration framework introduced by M\"uller and \v Sver\'ak in \cite{SMVS} in the case of the $\curl$ operator. M\"uller and \v Sver\'ak developed these methods to find counterexamples to regularity of critical points of quasiconvex energies. Since then, these techniques have been successfully applied in various contexts and for various operators, compare \cite{LSP, LINF,COR}, and they remain one of the main tools for building counterexamples, see \cite{LOP,JTR,DLDPKT}. M\"uller and \v Sver\'ak's theory is systematically developed only for the $\curl$ operator, and we extend it to homogeneous linear operators \emph{of constant rank}. One of the main ingredients we will use is the notion of \emph{potential} introduced by B. Rai{\c{t}}{\u{a}} in \cite{RAI}.
\\ \\ Let us end this introduction with an outline of the paper. In Sections \ref{not} and \ref{prel}, we introduce the notation and collect some preliminaries on linear operators. Section \ref{CI} is devoted to developing all the tools of M\"uller and \v Sver\'ak's approach to convex integration in the case of general linear operators that admit a potential. In Section \ref{fours} we define large $\Lambda_\mathcal{A}$-$T_4$ configurations and we show our main theorem by finding a counterexample to the four-state problem. Finally, in the Appendix we show the rigidity of the three-state problem for operators of order $1$. \\ \\ \textbf{Acknowledgements}. The authors wish to thank Federico Stra for suggesting the second method explained in Proposition \ref{computer}. The authors have been supported by the SNF Grant 182565. \section{Notation}\label{not} We define $\mathcal{M}(d,m)$ to be the set of multi-indexes $I = (\alpha_1,\dots,\alpha_m) \in \mathbb{N}^m$ with $\alpha_1 + \alpha_2 + \dots +\alpha_m = d$. $\mathbb{P}_H (d,m)$ denotes the vector space of homogeneous polynomials of degree $d$ in $\mathbb R^m$. With the notation above, an element $p \in \mathbb{P}_H(d,m)$ can be written as \[ p(x) = \sum_{I \in \mathcal{M}(d,m)}a_Ix^I, \] where for $x = (x_1,\dots, x_m) \in \mathbb R^m$ the notation $x^I$ means \[ x^I = x_1^{\alpha_1}x_2^{\alpha_2}\dots x_m^{\alpha_m}. \] We also introduce $\mathbb{P}(d,m)$, the vector space of polynomials of degree $d$ in $\mathbb R^m$. \\ \\ $\Omega \subset \mathbb R^m$ will always be used to denote an open bounded set.
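The multi-index notation introduced above can be illustrated with a simple instance: for $d = 2$ and $m = 2$ one has $\mathcal{M}(2,2) = \{(2,0),(1,1),(0,2)\}$, so that every $p \in \mathbb{P}_H(2,2)$ reads \[ p(x) = a_{(2,0)}x_1^2 + a_{(1,1)}x_1x_2 + a_{(0,2)}x_2^2, \] and in particular $\dim \mathbb{P}_H(2,2) = 3$.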
A function $f: \Omega \to \mathbb R$ is said to be \emph{piecewise a polynomial of degree $d$} if there exists a countable family of pairwise disjoint open sets $\{\Omega_n\}_n$ such that \[ \left|\Omega \setminus \bigcup_n \Omega_n\right| = 0 \] and, on $\Omega_n$, every component of $f$ is a polynomial of degree $d$. The definition of \emph{piecewise smooth} is analogous. Throughout the paper, $|E|$ denotes the Lebesgue measure of a measurable set $E \subset \mathbb R^m$. \\ \\ We will say that $E \subset \mathbb R^m$ is \emph{essentially open in} $\Omega$ if $\left|\partial E\cap \Omega \right| = 0$. Here, $\partial E$ is the topological boundary of $E$, and $\overline{E}$ denotes the closure of $E$. We denote by $B_{\varepsilon}(E)$ the $\varepsilon$-neighbourhood of the set $E$ and by $\co(E)$ the convex hull of $E$. For two elements $a,b \in \mathbb R^n$, we will use the notation $[a,b]$ for $\co(\{a,b\})$. \\ \\ The set of probability measures compactly supported in $U \subset \mathbb R^n$ is denoted by $\mathcal{P}(U)$. We let $\bar \nu \doteq \int_{\mathbb R^n} x \, d\nu(x)$ be the barycentre of $\nu \in \mathcal{P}(U)$. \section{Preliminaries on general linear operators}\label{prel} Let $\mathcal{A}$ be a differential operator acting on vector-valued functions $v \in C^{\infty}(\Omega ; \mathbb R^{n})$, where $\Omega \subset \mathbb R^{m}$ is an open set, namely \begin{equation} \label{d_operator} \mathcal{A} v \doteq \sum_{\ell = 1}^k\sum_{\alpha \in \mathcal{M}(\ell,m)} A_\alpha \partial^{\alpha} v + Av + C(x). \end{equation} Here, $A_\alpha, A \in \mathbb R^{N \times n}$ are constant matrices and $C \in L_{\loc}^{2} (\Omega; \mathbb R^{N})$. Note that the equation $\mathcal{A} v =0 $ is actually a {\em system of $N$ equations}.
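To fix ideas, a standard example (with $m = n = N = 3$ and $k = 1$) is the $\curl$ operator acting on vector fields, which fits \eqref{d_operator} with $A = 0$ and $C \equiv 0$: \[ \curl v = \sum_{j = 1}^{3} A_{e_j}\partial_j v, \qquad A_{e_j}\eta = e_j \times \eta, \] so that, for instance, $A_{e_1} = \left(\begin{smallmatrix} 0 & 0 & 0\\ 0 & 0 & -1\\ 0 & 1 & 0\end{smallmatrix}\right)$, and the equation $\curl v = 0$ is indeed a system of $N = 3$ scalar equations.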
We will use the notation $\text{op}(k,m,n,N)$ to denote these operators, but we will actually always consider \emph{homogeneous} differential operators, i.e. those with $C(x) \equiv 0$, $A = 0$ and $A_{\alpha} = 0$ if $|\alpha| < k$ in \eqref{d_operator}. The set of homogeneous operators will be denoted by $\op (k,m,n,N)$. \\ \\ Let $\mathcal{A} \in \op (k,m,n,N)$. For each $\xi \in \mathbb R^m$, we consider the linear map $\mathbb{A}(\xi): \mathbb R^n \to \mathbb R^N$ defined as \begin{equation}\label{ell} \mathbb{A}(\xi)(\eta) \doteq \sum_{\alpha \in \mathcal{M}(k,m)}\xi^\alpha A_\alpha\eta,\quad \forall \eta \in \mathbb R^n. \end{equation} Define the {\em wave cone} associated to $\mathcal{A}$ as \begin{equation}\label{LAMBDA} \Lambda_{\mathcal{A}} \doteq \bigcup_{\xi \in \mathbb R^m\setminus\{0\}}\Ker(\mathbb{A}(\xi)) = \{\eta \in \mathbb R^n: \exists \xi \in \mathbb R^m\setminus\{0\} \text{ s.t. } \mathbb{A} (\xi)(\eta) = 0\}. \end{equation} In what follows, we will only consider operators $\mathcal{A} \in \op (k,m,n,N)$ of \emph{constant rank}, namely such that \[ \xi \mapsto \rank(\mathbb{A}(\xi)) \text{ is constant on } \mathbb R^m\setminus\{0\}. \] This class of operators will be denoted by the symbol $\op (k,m,n,N)R$. We will exploit \cite[Theorem 1]{RAI}, which asserts that the homogeneous operator $\mathcal{A}$ is of constant rank if and only if it admits a potential (of constant rank), meaning that there exists $\mathcal{B} \in \op(k',m,n',n)$ such that \begin{equation}\label{pot} \Ker(\mathbb{A}(\xi)) = \im(\mathbb{B}(\xi)), \quad\forall \xi \in \mathbb R^m\setminus\{0\}. \end{equation} For technical reasons, in Section \ref{CI} we will need to restrict ourselves to \emph{balanced} operators, which we now introduce. \subsection{Balanced Operators} In addition to the constant rank condition, we require an additional property of the linear differential operator $\mathcal{A}$.
\begin{definition}\label{balanced} We say that the wave cone $\Lambda_\mathcal{A}$ is \emph{balanced} if \begin{equation} \spn(\Lambda_\mathcal{A}) = \mathbb R^n, \end{equation} and we say that an operator $\mathcal{A} \in \op (k,m,n,N)$ is \emph{balanced} if the associated wave cone $\Lambda_\mathcal{A}$ is balanced. \end{definition} The heuristic reason why we need to consider balanced operators is that, on $\spn(\Lambda_{\mathcal{A}})^{\perp}$, the operator is, in some sense, \emph{elliptic}, compare \cite[Equation (4)]{GRA}. This can be seen clearly in the extreme case $\spn(\Lambda_\mathcal{A}) = \{0\}$, in which one has \[ \mathcal{A}(u) = 0 \Rightarrow u \in C^\infty(\Omega, \mathbb R^n). \] Since we are interested in constructing irregular solutions via convex integration, the images of these solutions must avoid the directions contained in $\spn(\Lambda_{\mathcal{A}})^{\perp}$. However, the requirement that $\mathcal{A}$ is balanced is mainly made for simplicity of exposition and is in fact not restrictive. Indeed, we have the following simple result: \begin{prop} \label{p:balanced} Let $\mathcal{A} \in \op (k,m,n,N)$. Let $\pi \doteq \spn(\Lambda_\mathcal{A})$ and let $d \ge 1$ be its dimension. Fix an orthonormal basis $e_1,\dots, e_d$ of $\pi$. Then, if we define $\mathcal{A}' \in \op(k,m,d,N)$ as \begin{equation}\label{notres} \mathcal{A}'(u) \doteq \mathcal{A}\left(\sum_{i = 1}^du_ie_i\right), \text{ if } u = \left(\begin{array}{c}u_1\\ \vdots \\ u_d\end{array}\right), \end{equation} the following hold: \begin{itemize} \label{condition:spanbal} \item $\mathcal{A}'$ is balanced; \item $\mathcal{A}$ has constant rank if and only if $\mathcal{A}'$ does.
\end{itemize} \end{prop} The proposition tells us that we may study wild solutions of the balanced operator $\mathcal{A}'$ instead of studying those of $\mathcal{A}$, and by \eqref{notres} these will also be solutions of $\mathcal{A}(u) = 0$. We omit the proof of Proposition \ref{p:balanced}, since the verifications are simple. \\ \\ The fact that $\mathcal{A}$ is balanced yields the following: \begin{prop}\label{surj} Let $\mathcal{A} \in \op (k,m,n,N)R$ be balanced, and let $\mathcal{B} \in \op(k',m,n',n)$ be a potential for $\mathcal{A}$. Then, the map $T :(\mathbb{P}_H(k',m))^{n'} \to \mathbb R^n$ defined as $T(q) \doteq \mathcal{B}(q)$ is surjective. \end{prop} \begin{proof}[Proof of Proposition \ref{surj}] The proof is by contradiction: suppose $T$ is not surjective. Fix $\xi \in \mathbb R^m$ and $a \in \mathbb R^{n'}$. We choose the polynomial \[ p(x) \doteq \sum_{I \in \mathcal{M}(k',m)}\frac{\xi^{I}}{I!}x^I, \] where $I! \doteq \alpha_1!\dots\alpha_m!$ for $I = (\alpha_1,\dots,\alpha_m)$, and define $q(x) \doteq p(x) a \in (\mathbb{P}_H(k',m))^{n'}$. A direct computation, using $\partial^I x^I = I!$ and $\partial^I x^J = 0$ for $I \neq J$ with $|I| = |J| = k'$, shows that \[ T(q) = \mathbb{B}(\xi)(a). \] This yields \begin{equation}\label{Bpsi} \im(\mathbb{B}(\xi)) \subset \im(T),\qquad \forall \xi \in \mathbb R^m. \end{equation} In particular, since $T$ is linear and not surjective, we find a non-zero vector $v \in \mathbb R^n$ such that \[ v \perp \im(T), \] and, using \eqref{Bpsi}, \[ v \perp \im(\mathbb{B}(\xi)), \qquad \forall \xi \in \mathbb R^m. \] Since $\mathcal{B}$ is the potential of $\mathcal{A}$, \eqref{pot} holds by definition, and we reach a contradiction with $\mathcal{A}$ being balanced. \end{proof} \section{Convex integration for general differential operators of constant rank}\label{CI} Throughout the section, we will consider a fixed balanced operator $\mathcal{A} \in \op (k,m,n,N)R$, with a given potential $\mathcal{B} \in \op(k',m,n',n)$.
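A guiding example, classical and not needed in the sequel, is $\mathcal{A} = \curl$ acting on vector fields $v: \mathbb R^3 \to \mathbb R^3$. In this case $\mathbb{A}(\xi)(\eta) = \xi \times \eta$, hence \[ \Ker(\mathbb{A}(\xi)) = \mathbb R\xi \quad \text{and} \quad \rank(\mathbb{A}(\xi)) = 2, \qquad \forall \xi \in \mathbb R^3\setminus\{0\}, \] so that $\mathcal{A}$ has constant rank and $\Lambda_\mathcal{A} = \mathbb R^3$, i.e. $\mathcal{A}$ is balanced. A potential in the sense of \eqref{pot} is $\mathcal{B} = \nabla$ acting on scalar functions, i.e. $k' = 1$ and $n' = 1$: indeed $\mathbb{B}(\xi)(c) = c\,\xi$, whence $\im(\mathbb{B}(\xi)) = \mathbb R\xi = \Ker(\mathbb{A}(\xi))$ for every $\xi \neq 0$.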
\\ \\ The aim of this part of the work is to develop the convex integration scheme essentially due to M\"uller and \v Sver\'ak in the case of the $\curl$ operator, see for instance \cite[Sections 2,3]{SMVS}. The final goal is to show the existence of a non-constant solution $u \in L^\infty(\Omega,\mathbb R^n)$ to the following system: \begin{equation}\label{AINC} \begin{cases} u(x) \in K, \; \text{a.e. in }\Omega,\\ \mathcal{A}(u) = 0, \end{cases} \end{equation} where $\mathcal{A} \in \op (k,m,n,N)R$, $\Omega$ is a given open, bounded, convex set, and $K \subset \mathbb R^n$ is a compact set without $\Lambda_\mathcal{A}$-connections, i.e. for any $a,b \in K$ with $a \neq b$ we have $b-a \notin \Lambda_\mathcal{A}$. In the case of the four-state problem that we will treat in Section \ref{fours}, $K$ is the four-point set of the admissible states. In particular, our aim is to show that the existence of an $\mathcal{A}$-in-approximation $\{U_n\}_n$ of $K$, see Definition \ref{INAPP}, yields the existence of non-constant solutions (in fact, many) to \eqref{AINC}. \\ \\ Due to the technical nature of some of the proofs of this section, it is worth briefly explaining our strategy. First, in Subsection \ref{section:simplelaminates}, we introduce the building blocks of this convex integration scheme, the simple $\mathcal{A}$-laminates. Roughly speaking, these are highly oscillatory solutions of \eqref{AINC} that can be constructed starting from two vectors $a,b \in \mathbb R^n$ with $b-a \in \Lambda_\mathcal{A}$. Their properties are listed in Proposition \ref{lam+}. Subsequently, we define $\mathcal{A}$-laminates of finite order and describe their main properties, see Definition \ref{d:laminates_finite} and Proposition \ref{ind}.
Then, we move on to $\mathcal{A}$-laminates, see Subsection \ref{section:laminates}, and we quote a result of \cite{KIRK} that asserts the weak-$*$ density of $\mathcal{A}$-laminates of finite order in the space of $\mathcal{A}$-laminates, compare Theorem \ref{ann}. We will use this result in Section \ref{inappexsol} to show the preliminary Proposition \ref{usefulprop} and finally Theorem \ref{rcexact}, which asserts the existence of exact solutions to \eqref{AINC} once we are given an $\mathcal{A}$-in-approximation. \subsection{Simple laminates} \label{section:simplelaminates} The building block is given by the simple $\mathcal{A}$-laminate construction. Let $a,b \in \mathbb R^n$ and $\xi_0 \in \mathbb R^m\setminus\{0\}$ be such that $$b-a = c \in \Ker(\mathbb{A}(\xi_0)) \subset \Lambda_{\mathcal{A}}.$$ It is simple to check that for any profile $h \in L^\infty(\mathbb R)$, the map \[ v(x) \doteq h((x,\xi_0))c \] solves $\mathcal{A}(v) = 0$. Here and in the following, $(x,y)$ denotes the standard scalar product of $\mathbb R^m$. This observation can be refined as follows. Let $\lambda \in (0,1)$ be arbitrary, let $e \doteq \lambda a + (1-\lambda) b$, and choose \begin{equation}\label{h} h(t) \doteq \begin{cases}\lambda, &\text{ if } t \in [0,1-\lambda),\\ -(1-\lambda), &\text{ if } t \in [1-\lambda,1],\end{cases} \end{equation} extended $1$-periodically outside $[0,1]$.
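A direct computation explains the role of the two values of $h$: since $c = b-a$, \[ e + \lambda c = \lambda a + (1-\lambda)b + \lambda(b-a) = b, \qquad e - (1-\lambda)c = \lambda a + (1-\lambda)b - (1-\lambda)(b-a) = a, \] so the map $e + h(t)c$ takes only the values $b$ (on a set of measure $1-\lambda$ in each period) and $a$ (on a set of measure $\lambda$); moreover \[ \int_0^1 h(t)\,dt = \lambda(1-\lambda) - (1-\lambda)\lambda = 0, \] so that $e + h(t)c$ has average $e$ over each period.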
If we let \begin{equation}\label{v} v_{\varepsilon,\xi_0,a,b,\lambda}(x) \doteq e + h\left(\frac{(x,\xi_0)}{\varepsilon}\right)c, \end{equation} one can check that, given any bounded open set $\Omega \subset \mathbb R^m$, $v_{\varepsilon,\xi_0,a,b,\lambda}$ enjoys the following properties: \begin{enumerate} \item\label{1} $\mathcal{A}(v_{\varepsilon,\xi_0,a,b,\lambda}) = 0$, $\forall \varepsilon > 0$; \item\label{2} $|\{x: v_{\varepsilon,\xi_0,a,b,\lambda}(x) = a\}| \to \lambda|\Omega|$ and $|\{x: v_{\varepsilon,\xi_0,a,b,\lambda}(x) = b\}| \to (1-\lambda)|\Omega|$ as $\varepsilon \to 0^+$; \item\label{3} $v_{\varepsilon,\xi_0,a,b,\lambda} \overset{*}{\rightharpoonup} e$ in $L^\infty$ as $\varepsilon \to 0^+$. \end{enumerate} In other words, every direction $c$ of the wave cone $\Lambda_{\mathcal{A}}$ gives rise to a family of highly oscillatory solutions to the PDE defined by $\mathcal{A}$. The oscillating behaviour is due to the choice of a periodic profile $h$, and explains why these solutions do not converge strongly, as can easily be seen from \eqref{2}-\eqref{3}. \\ \\ Using the potential $\mathcal{B}$, we can find a potential for $v_{\varepsilon,\xi_0,a,b,\lambda}$. Indeed, since $\mathbb{A}(\xi_0)(c) = 0$, by \eqref{pot} there exists $c' \in \mathbb R^{n'}$ such that \begin{equation}\label{ima} \mathbb{B}(\xi_0)(c') = c. \end{equation} Furthermore, we consider the unique function $H: \mathbb R \to \mathbb R$ such that $H^{(k')}(t) = h(t)$, $\forall t \in \mathbb R$, and $H^{(\ell)}(0) = 0$, $\forall 0 \le \ell \le k' -1$, where $H^{(\ell)}$ denotes the $\ell$-th derivative of $H$ and $H^{(0)} \doteq H$. Finally, we choose any $q_e \in \mathbb{P}(k',m)^{n'}$ such that \[ \mathcal{B}(q_e) = e, \text{ everywhere on $\mathbb R^m$}. \] By Proposition \ref{surj}, there exists at least one vector of polynomials with this property.
If we define \[ V_{\varepsilon,\xi_0,a,b,\lambda}(x) \doteq q_e(x) + \varepsilon^{k'}H\left(\frac{(x,\xi_0)}{\varepsilon}\right)c', \] then we see by construction that \[ \mathcal{B}(V_{\varepsilon,\xi_0,a,b,\lambda})(x) = e + h\left(\frac{(x,\xi_0)}{\varepsilon}\right)\mathbb{B}(\xi_0)(c') = e + h\left(\frac{(x,\xi_0)}{\varepsilon}\right)c = v_{\varepsilon,\xi_0,a,b,\lambda}(x), \] almost everywhere and in the sense of distributions. Notice that, by construction, $V_{\varepsilon,\xi_0,a,b,\lambda}$ is, for every $\varepsilon > 0$, a vector of piecewise polynomials of degree $k'$. This discussion allows us to prove the following: \begin{lemma}\label{lam} Let $\Omega\subset \mathbb R^m$ be an open and bounded set. Let $a,b \in \mathbb R^n$, $b-a = c \in \Lambda_{\mathcal{A}}$ and $e = \lambda a + (1-\lambda) b$, for some $\lambda \in (0,1)$. Fix any element $q_e \in \mathbb{P}(k',m)^{n'}$ with the property that $\mathcal{B}(q_e) = e$ everywhere in $\mathbb R^m$. Then, for all $\alpha > 0$, there exist $V_\alpha \in W^{k',\infty} \cap C^{k' -1}(\overline{\Omega},\mathbb R^{n'})$ and two disjoint open sets $\Omega_\alpha^1$, $\Omega_\alpha^2$ with $|\Omega| = |\Omega_\alpha^1\cup \Omega_\alpha^2|$ such that \begin{enumerate} \item\label{f0} the $W^{k',\infty} \cap C^{k' -1}$ norm of $V_\alpha$ only depends on $\diam(\Omega),|a|,|b|$ and $|D^{k'}q_e|$; \item\label{f1} $V_\alpha = q_e$, together with all its derivatives of order $\ell < k'$, on $\partial\Omega$; \item\label{s1} every component of $V_\alpha$ is piecewise a polynomial of degree $k'$; \item\label{t1} Let $v_{\alpha}(x) \doteq \mathcal{B}(V_\alpha)(x)$.
The sets $A_\alpha = \{x \in \Omega_{\alpha}^1: v_\alpha(x) = a\}$, $B_\alpha = \{x \in \Omega_{\alpha}^1: v_\alpha(x) = b\}$, $\Omega_\alpha^1 \doteq A_\alpha \cup B_\alpha$ and $\Omega_\alpha^2 \doteq (\Omega_\alpha^1)^c$ are essentially open in $\Omega$, and \[ |A_\alpha| \ge (1-\alpha)\lambda|\Omega| \text{ and } |B_\alpha| \ge (1 - \alpha)(1-\lambda)|\Omega|. \] \item\label{ff1} $|\Omega_\alpha^2| \le \alpha|\Omega|$; \item\label{fff1} $\|V_\alpha - q_e\|_{C^{k' - 1}}\le \alpha$; \item\label{ss1} $v_{\alpha}(x) \in B_{\alpha}([a,b])$ a.e. in $\Omega$. \end{enumerate} \end{lemma} \begin{proof} Fix $\alpha > 0$. Choose an open set $\Omega'$ compactly contained in $\Omega$ with $|\Omega \setminus \Omega'| \le \frac{\alpha}{2}|\Omega|$ and $\Omega \setminus \Omega'$ essentially open in $\Omega$, and let $\varphi$ be a fixed smooth cut-off function with values in $[0,1]$ such that $\varphi(x) = 1$, $\forall x \in \Omega'$. With the notation introduced before the statement of the lemma, we define \[ W_{\varepsilon}(x) \doteq q_e(x) + \varepsilon^{k'}\varphi(x)H\left(\frac{(x,\xi_0)}{\varepsilon}\right)c'. \] We wish to take $\Omega_\alpha^1 \doteq \Omega'$, $\Omega_\alpha^2$ as the interior of $\Omega \setminus \Omega_\alpha^1$, and $V_\alpha \doteq W_{\varepsilon}$ for $\varepsilon > 0$ sufficiently small, up to a correction on the small set $\Omega_\alpha^2$ in order to make every component piecewise polynomial. With these choices, \eqref{f1} and \eqref{t1} are immediate, once $\varepsilon$ is chosen sufficiently small, and \eqref{ff1} is a consequence of the choice of $\Omega'$. As $\varepsilon \to 0$, the boundedness in $L^\infty$ of $H$ yields the strong convergence in $L^\infty$ of $W_\varepsilon$ to $q_e$.
To see that the convergence holds in the $C^{k' - 1}$ topology, it is sufficient to show equiboundedness in $W^{k',\infty}(\Omega, \mathbb R^{n'})$. To see the latter, it is sufficient to estimate the derivatives of order $k'$ of \[ W'_\varepsilon(x) \doteq \varepsilon^{k'}\varphi(x)H\left(\frac{(x,\xi_0)}{\varepsilon}\right). \] Let then $I \in \mathcal{M}(k',m)$. The derivative $\partial_IW'_\varepsilon(x)$ can be estimated by a sum of terms of the form \begin{equation}\label{prodrule} \varepsilon^\ell\partial_{I'}\varphi(x) H^{(k'-\ell)}\left(\frac{(x,\xi_0)}{\varepsilon}\right), \end{equation} where $I' \in \mathcal{M}(\ell,m)$. It is then easy to see that if $\varepsilon = \varepsilon(\alpha)$ is sufficiently small, we may estimate the latter by $\|h\|_{L^\infty}$, and hence conclude that \eqref{f0} holds. A similar computation shows \eqref{fff1}. Finally, with computations analogous to the ones of \eqref{prodrule}, one can estimate \begin{equation}\label{close} \left|\mathcal{B}(W_\varepsilon)(x) - e - \varphi(x)h\left(\frac{(x,\xi_0)}{\varepsilon}\right)c\right| \le C\varepsilon, \end{equation} for some constant $C > 0$, at a.e. $x \in \Omega$. Since \[ e + \varphi(x)h\left(\frac{(x,\xi_0)}{\varepsilon}\right)c \in [a,b], \] from \eqref{close} we further deduce \eqref{ss1}. The map $W_\varepsilon$ satisfies all the properties listed in the statement of the lemma, except for \eqref{s1}. It is simple to see, from the definition of $W_\varepsilon$, that on $\Omega_\alpha^1$ every component of $W_\varepsilon$ is piecewise a polynomial of degree $k'$ and that it is globally a piecewise smooth map. Therefore, we may subdivide $\Omega_\alpha^2$ into pairwise disjoint, compactly contained, open cubes $Q_j$ on each of which $W_\varepsilon$ is a smooth map up to the boundary.
By Lemma \ref{pp} below, we see that, on every $Q_j$, $W_\varepsilon$ can be replaced by a map $W_{\varepsilon,j}$ whose components are piecewise polynomials of degree $k'$, with $W_{\varepsilon,j} = W_\varepsilon$ on $\partial Q_j$ and $\|W_\varepsilon - W_{\varepsilon,j}\|_{C^{k'}(\overline{Q_j})}$ arbitrarily small. It is simple to check that, if this norm is taken sufficiently small, then \eqref{f0}-\eqref{f1}-\eqref{s1}-\eqref{t1}-\eqref{ff1}-\eqref{fff1} still hold for the map defined as $W_{\varepsilon,j}$ on $Q_j$ and as $W_\varepsilon$ everywhere else. This defines the map $V_\alpha$. \end{proof} We now show Lemma \ref{pp}, which was used in the previous proof. It states that any map $u \in C^k(\Omega)$ can be finely approximated by functions $v\in C^k(\Omega)$ that are piecewise polynomials of degree $k$. This was done in \cite[Proposition 3.3]{KIRK} in the case $k = 2$. \begin{lemma}\label{pp} Let $\Omega$ be open and let $u \in C^{k'}(\overline{\Omega})$. Then, for all $\varepsilon > 0$, there exists a function $v_\varepsilon \in C^{k'}(\overline{\Omega})$ such that \begin{enumerate} \item \label{pp1} $\|u - v_\varepsilon\|_{C^{k'}(\overline\Omega)} \le \varepsilon$; \item \label{pp2} $v_\varepsilon$ is piecewise a polynomial of degree $k'$; \item \label{pp3} $v_\varepsilon = u$, together with all of its derivatives of order $0\le \ell \le k'$, on $\partial \Omega$, $\forall \varepsilon > 0$. \end{enumerate} \end{lemma} \begin{proof} Fix $\varepsilon > 0$. We obtain $v_\varepsilon$ as the limit of a sequence $v_n$ defined inductively. Set $\varepsilon_n \doteq \frac{\varepsilon}{2^n}$.
We claim that, given a function $v_n$ with the following properties: \begin{enumerate}[(a)] \item\label{indpp11} $v_n$ is $C^{k'}$ up to the boundary of $\Omega$; \item\label{indpp12} $\Omega_n^1 \subset \{x: v_n \text{ is piecewise a polynomial of degree }k' \text{ in a neighborhood of $x$} \}$ and $\Omega_n^2 = (\Omega_n^1)^c$ are essentially open sets in $\Omega$ with $|\Omega_n^2| \le \prod_{j = 1}^n\varepsilon_j|\Omega|$; \item\label{indpp13} $v_n = u$ together with all of its derivatives of order $0\le \ell \le k'$ on $\partial \Omega$; \end{enumerate} it is possible to find $v_{n + 1}$ such that \begin{enumerate}[(A)] \item\label{indpp21} $v_{n+1}$ is $C^{k'}$ up to the boundary of $\Omega$; \item\label{indpp22} $\|v_{n + 1} - v_n\|_{C^{k'}} \le \varepsilon_{n + 1}$; \item\label{indpp23} $\Omega_{n + 1}^1 \subset \{x: v_{n + 1} \text{ is piecewise a polynomial of degree }k' \text{ in a neighborhood of $x$}\}$ and $\Omega_{n + 1}^2 = (\Omega_{n + 1}^1)^c$ are essentially open sets in $\Omega$ such that $|\Omega_{n + 1}^2| \le \prod_{j = 1}^{n + 1}\varepsilon_j|\Omega|$ and $\Omega_{n + 1}^2 \subset \Omega_{n}^2$; \item\label{indpp24} $v_{n + 1} = u$ together with all of its derivatives of order $0\le \ell \le k'$ on $\partial \Omega$. \end{enumerate} If this inductive step holds, then we start with $v_0 \doteq u$, working with the convention that \[ \sum_{j = 1}^0\varepsilon_j = 0 \text{ and } \prod_{j = 1}^0\varepsilon_j = 1. \] Since $\{v_n\}_n$ is a Cauchy sequence with respect to the $C^{k'}$ topology by \eqref{indpp22}, we can define $v_\varepsilon \doteq \lim_n v_n$. It is then easy to see that this function $v_\varepsilon$ has the required properties. \\ \\ To show the inductive step, we consider $\Omega_n^2$ and first subdivide it into countably many, compactly contained, pairwise disjoint open cubes $Q_r$ such that $|\Omega_n^2\setminus \bigcup_rQ_r| = 0$.
On $Q_r$, all the derivatives of $v_n$ are uniformly continuous and hence we can find $\delta > 0$ such that if $x,y \in Q_r$ and $|x-y| \le \delta$, then \begin{equation}\label{contv} \sum_{j = 0}^{k'}|D^jv_n(x) - D^jv_n(y)| \le \gamma\varepsilon_{n +1}, \end{equation} where $\gamma > 0$ is a dimensional constant that will be fixed later. Now further subdivide $Q_r$ as a finite union of cubes $Q_{r,s}$ with $\diam(Q_{r,s}) \le \delta$. Fix a compactly contained open set $S_{r,s} \subset Q_{r,s}$ with \begin{equation}\label{crs} |Q_{r,s}\setminus S_{r,s}| \le \varepsilon_{n + 1}|Q_{r,s}|. \end{equation} Finally, fix a smooth cut-off function $\psi \in C^{\infty}_c (Q_{r,s})$ such that $\psi \equiv 1$ on $S_{r,s}$, with \begin{equation}\label{estpsi} \|D^{\ell}\psi\|_{L^\infty} \le \frac{c}{\diam(Q_{r,s})^\ell}, \quad \forall \ell \ge 0, \end{equation} where $c> 0$ is a dimensional constant. We modify $v_n$ on $Q_{r,s}$ by replacing it with \[ (1-\psi(x))v_n(x) + \psi(x)P_{r,s}(x), \] where $P_{r,s}$ is the $k'$-th order Taylor polynomial of $v_n$ centred at the center of $Q_{r,s}$. This operation defines $v_{n + 1}$. Now \eqref{indpp21}-\eqref{indpp24} are immediate to check. \eqref{indpp23} follows by construction and \eqref{crs}, noticing that $\Omega_{n + 1}^1 = \Omega_n^1 \cup \bigcup_{r,s} S_{r,s} $. We only need to show \eqref{indpp22}, which we check separately on every $Q_{r,s}$. For all $x \in Q_{r,s}$ and every multi-index $I \in \mathcal{M}(\ell,m)$, $0 \le \ell \le k'$, we have: \begin{align*} |\partial_I(v_{n + 1} - v_n)(x)| = |\partial_I((1-\psi(x))v_n(x) + \psi(x)P_{r,s}(x) - v_n(x))| = |\partial_{I}(\psi(x)(P_{r,s}(x) - v_n(x)))|.
\end{align*} By the Leibniz rule and the triangle inequality, the latter can be estimated by a sum of terms of the form \[ |\partial_{I'}\psi(x)\,\partial_{I''}(P_{r,s}(x) - v_n(x))|, \] with $I' \in \mathcal{M}(\ell',m)$ and $I'' \in \mathcal{M}(\ell - \ell',m)$. Now \eqref{estpsi}, \eqref{contv} and the choice of $P_{r,s}$ yield \[ |\partial_{I'}\psi(x)\,\partial_{I''}(P_{r,s}(x) - v_n(x))| \le c\gamma\varepsilon_{n + 1}, \] and hence conclude the proof, provided we choose $\gamma$ sufficiently small depending only on $k'$, $m$ and $c$. \end{proof} The basic laminate construction of Lemma \ref{lam} has already appeared in the literature in various contexts and for various operators, see for instance \cite[Proposition 3.2]{LINF} and \cite[Lemma 3.3]{COR}. We will now refine it by showing that the map $V_\alpha$ can be chosen to take values in $B_\alpha(a) \cup B_{\alpha}(b)$ instead of $B_\alpha([a,b])$. Closely related results appeared in \cite[Proposition 3.3-3.4]{KIRK} and \cite[Lemma 2.1]{AFSZ}, when studying laminations for the $\curl$ operator in the space of symmetric matrices. \begin{prop}\label{lam+} Let $\Omega\subset \mathbb R^m$ be an open and bounded set. Let $a,b \in \mathbb R^n$, $b-a = c \in \Lambda_{\mathcal{A}}$ and $e = \lambda a + (1-\lambda) b$, for some $\lambda \in (0,1)$. Fix any element $q_e \in \mathbb{P}_\mathbb{H}L(k',m)^{n'}$ with the property that $\mathcal{B}(q_e) = e$ everywhere in $\mathbb R^m$.
Then, for all $\beta > 0$, there exists a map $V_\beta \in W^{k',\infty} \cap C^{k' -1}(\overline{\Omega},\mathbb R^{n'})$ such that \begin{enumerate} \item\label{f02} the $W^{k',\infty} \cap C^{k' -1}$ norm of $V_\beta$ only depends on $\diam(\Omega),|a|,|b|$ and $|D^{k'}q_e|$; \item\label{f2} $V_\beta = q_e$, together with all its derivatives of order $\ell < k'$, on $\partial\Omega$; \item\label{ff2} every component of $V_\beta$ is piecewise a polynomial of degree $k'$; \item\label{fff2} $\|V_\beta -q_e\|_{C^{k' - 1}(\overline{\Omega})} \le \beta$; \item\label{ss2} if $v_{\beta} \doteq \mathcal{B}(V_\beta)$, then $|\{x \in \Omega: v_\beta(x) \in B_\beta(a)\}| = \lambda|\Omega|$ and $|\{x \in \Omega: v_\beta(x) \in B_\beta(b)\}| = (1-\lambda)|\Omega|$. \end{enumerate} \end{prop} \begin{proof} Fix $0 < \beta \le \frac{1}{2}|a-b|$ and $0 <\sigma < \min\left\{\frac{\beta}{2|a-b|},\beta\right\}$. We inductively construct a sequence of maps $\{V_n\}_n$ that in the limit will give us a map $V_{\beta,\sigma}$. Let $v_{\beta,\sigma}\doteq \mathcal{B}(V_{\beta,\sigma})$. $V_{\beta,\sigma}$ will have all the required properties, except for \eqref{ss2}, which will be replaced by: \begin{equation}\label{all} |\Omega| = |\{x \in \Omega: v_{\beta,\sigma} \in B_\beta(a)\}\cup \{x \in \Omega: v_{\beta,\sigma} \in B_\beta(b)\}| \end{equation} and \begin{equation}\label{almost} |\{x \in \Omega: v_{\beta,\sigma} \in B_\beta(a)\}| \ge (1-\sigma)\lambda|\Omega| \text{ and } |\{x \in \Omega: v_{\beta,\sigma} \in B_\beta(b)\}| \ge (1-\sigma)(1-\lambda)|\Omega|. \end{equation} We will deal with \eqref{ss2} at a later stage.
\\ \\ \;\fbox{Step 1: the inductive setup.} \\ \\ At step $0$, we choose $V_0 = V_{\alpha}$ for $\alpha = \frac{\sigma}{2} < \frac{\beta}{2}$ and $\Omega_0 \doteq \Omega_2^\alpha$ as in Lemma \ref{lam}. By Lemma \ref{lam}, $\Omega_0$ is essentially open in $\Omega$. Define $\varepsilon_{n} \doteq \frac{\sigma}{2^{n + 2}}$. Suppose we are given a map $V_n \in W^{k',\infty}$ whose components are piecewise polynomials of degree $k'$ that satisfies the following properties: \begin{enumerate}[ $(a)$] \item\label{ind10} $V_n = q_e$ together with all of its derivatives of order $\ell < k'$ on $\partial\Omega$; \item\label{ind16} let $v_n \doteq \mathcal{B}(V_n)$. There exists $\Omega_n$, essentially open in $\Omega$, with $|\Omega_n| \le \varepsilon_n|\Omega|$ and such that $$\Omega_n \supseteq \{x: v_{n} \notin B_{\sum_{j}^{n}\varepsilon_j}(a)\cup B_{\sum_{j}^{n}\varepsilon_j}(b)\};$$ \item\label{ind18} $v_{n}(x) \in B_{\sum_{j}^{n}\varepsilon_j}([a,b])$. \end{enumerate} We claim it is possible to find a new map $V_{n + 1}\in W^{k',\infty}\cap C^{k'-1}$ whose components are piecewise polynomials of degree $k'$ and with $\|V_{n + 1}\|_{W^{k',\infty}\cap C^{k'-1}} \le \max\{\|V_n\|_{W^{k',\infty}\cap C^{k'-1}},L\}$, where $L$ only depends on $|a|,|b|$ and $|D^{k'}q_e|$, and fulfilling the following properties: \begin{enumerate}[ $(A)$] \item\label{ind20} $V_{n+1} = q_e$, together with all of its derivatives of order $\ell < k'$, on $\partial\Omega$; \item\label{ind26} let $v_{n + 1} \doteq \mathcal{B}(V_{n + 1})$.
There exists $\Omega_{n +1}$, essentially open in $\Omega$, with $|\Omega_{n +1}| \le \varepsilon_{n + 1}|\Omega|$ and such that $$\Omega_{n+1} \supseteq \{x: v_{n+1} \notin B_{\sum_{j}^{n+1}\varepsilon_j}(a)\cup B_{\sum_{j}^{n+1}\varepsilon_j}(b)\};$$ \item\label{ind28} $v_{n+1}(x) \in B_{\sum_{j}^{n+1}\varepsilon_j}([a,b])$; \item\label{ind22} $V_{n + 1} = V_n$ on $\Omega_n^c$; \item\label{ind25} $\|V_{n + 1} - V_n\|_{C^{k' -1}} \le \varepsilon_{n + 1}$. \end{enumerate} Suppose for a moment that the claim holds. First, Lemma \ref{lam} tells us that $V_0$ satisfies \eqref{ind10}-\eqref{ind16}-\eqref{ind18} for $n = 0$. By \eqref{ind25}, we can define the $C^{k'-1}$ limit \[ V_{\beta,\sigma} = \lim_n V_n. \] Moreover, $\|v_n\|_{L^\infty}$ is equibounded and, by the strong convergence of $V_n$ in $L^\infty$, we infer the weak-$*$ convergence in $L^\infty$ of $v_n$ to $v_{\beta,\sigma} = \mathcal{B}(V_{\beta,\sigma})$. Since $V_n$ and $V_{n + 1}$ differ only on $\Omega_n$ and $|\Omega_n| \to 0$, we see that $V_n$ and $v_n$ converge in measure to $V_{\beta,\sigma}$ and $v_{\beta,\sigma}$, respectively. Now it is easy to deduce from the properties of $V_n$ and $V_{n + 1}$ that $v_{\beta,\sigma}$ and $V_{\beta,\sigma}$ enjoy properties \eqref{f02}-\eqref{f2}-\eqref{ff2}-\eqref{fff2} listed in the statement of the proposition, together with \eqref{all}-\eqref{almost}. We now prove the inductive step. \\ \\ \;\fbox{Step 2: the inductive step.} \\ \\ Suppose we are given $V_n$ and $\Omega_n$ as above. Split $\Omega_n = \bigcup_q \Omega'_q$, with $\Omega'_q$ open, in such a way that on $\Omega'_q$, every component of $V_n$ is a polynomial of order $k'$. We modify $V_n$ on $\Omega'_q$ in the following way.
By \eqref{ind18}, we know that \[ v_{n}(x) \in B_{\sum_{j}^{n}\varepsilon_j}([a,b]),\quad \forall x \in \Omega, \] but from the definition of $\Omega_n$ we also know that \begin{equation}\label{nonballs} v_n(x) \in B_{\sum_j^n\varepsilon_j}([a,b])\setminus \left(B_{\sum_{j}^n\varepsilon_j}(a)\cup B_{\sum_{j}^n\varepsilon_j}(b)\right), \quad\forall x \in \Omega'_q. \end{equation} Observe that $v_n$ is constant on $\Omega'_q$, since $V_n$ is a vector of polynomials of order $k'$ there. We will then call $e'_q \doteq v_n(x)$, $x \in \Omega'_q$. We infer from \eqref{nonballs} that there exist $h_q$ with $|h_q| < \sum_{j = 1}^{n}\varepsilon_j$ and $\mu_q \in (0,1)$ such that \[ e'_q = h_q + \mu_q a + (1-\mu_q)b = \mu_q (a + h_q) + (1-\mu_q)(b + h_q). \] We use Lemma \ref{lam} with $a + h_q, b+ h_q,e'_q,\mu_q, P_q$ instead of $a,b,e,\lambda, q_e$, where $P_q$ is the unique element of $\mathbb{P}_\mathbb{H}L(k',m)^{n'}$ that extends $V_n|_{\Omega_q'}$, to find a map $V_{\rho,q}$ with the properties listed in the statement of Lemma \ref{lam}, for any $0 < \rho < \varepsilon_{n + 1}$. We then replace $V_n$ on $\Omega'_q$ by $V_{\rho,q}$. Call $V_{n + 1}$ the map that coincides with $V_n$ outside of $\Omega_n$ and is defined as $V_{\rho,q}$ in $\Omega'_q$. Notice that we can check the inductive step separately on each subdomain $\Omega'_q$. The fact that \[ \|V_{n + 1}\|_{W^{k',\infty}} \le \max\{\|V_n\|_{W^{k',\infty}},L\} \] stems from the definition of $V_{n + 1}$ and property \eqref{f0} of $V_{\rho,q}$ stated in Lemma \ref{lam}. Furthermore, \eqref{ind20}-\eqref{ind22} are immediate by construction and \eqref{f1} of Lemma \ref{lam}. \eqref{ind25} is a consequence of the choice $\rho < \varepsilon_{n + 1}$ and \eqref{fff1} of Lemma \ref{lam}. Using \eqref{ss1} of Lemma \ref{lam} and the estimates $\rho < \varepsilon_{n+ 1}$ and $|h_q| < \sum_{j = 1}^n\varepsilon_j$, \eqref{ind28} also follows.
Finally, exploiting again the estimates on $|h_q|$ and $\rho$, we also obtain \eqref{ind26}, by \eqref{t1}-\eqref{ff1} of Lemma \ref{lam}. This concludes the proof of the inductive step. \\ \\ \fbox{Step 3: proof of \eqref{ss2}.} \\ \\ This step is analogous to the corresponding step of \cite[Lemma 2.1]{AFSZ} and we repeat it for the convenience of the reader. Up to now, we have found a map $V_{\beta,\sigma}$ with properties \eqref{f02}-\eqref{f2}-\eqref{ff2}-\eqref{fff2} of the statement of the proposition and with \eqref{ss2} replaced by \eqref{all}-\eqref{almost}, namely: \[ |\Omega| = |\{x \in \Omega: v_{\beta,\sigma}(x) \in B_\beta(a)\}\cup\{x \in \Omega: v_{\beta,\sigma}(x) \in B_\beta(b)\}|, \] and \[ |\{x \in \Omega: v_{\beta,\sigma} \in B_\beta(a)\}| \ge (1-\sigma)\lambda|\Omega| \text{ and } |\{x \in \Omega: v_{\beta,\sigma} \in B_\beta(b)\}| \ge (1-\sigma)(1-\lambda)|\Omega|. \] Since the inductive construction works on any domain, we now work on a cube\footnote{In fact, any open set $Q$ with $|\partial Q| =0$ would serve our purpose.} $Q \subset \mathbb R^m$ instead of $\Omega$, and we will come back to the general bounded open set $\Omega$ of the statement of the proposition later on. We can suppose, without loss of generality, that \[ \lambda|Q| > |\{x \in Q: v_{\beta,\sigma} \in B_\beta(a)\}| \ge (1-\sigma)\lambda|Q|. \] Now choose any $s$ such that $ \sigma < s < \min\left\{\frac{\beta}{2|a-b|},1-\lambda\right\}$ and set \[ a' \doteq a + s(b-a). \] Let $\mu = \frac{\lambda}{1 - s} > \lambda$ and write \[ e = \mu a' + (1-\mu)b. \] Since $s < 1 -\lambda$, $\mu \in (0,1)$.
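For the reader's convenience, the identity $e = \mu a' + (1-\mu)b$ can be verified directly from the definitions $a' = a + s(b-a)$ and $\mu = \frac{\lambda}{1-s}$:
\[
\mu a' + (1-\mu)b = \mu(1-s)a + \big(\mu s + 1 - \mu\big)b = \lambda a + \big(1 - \mu(1-s)\big)b = \lambda a + (1-\lambda)b = e.
\]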
We can repeat the previous steps of the proof with $a',b,\mu$ and $q_e$ in place of $a,b,\lambda$ and $q_e$, to obtain a map $V'_{\beta,\sigma}$ with properties \eqref{f02}-\eqref{f2}-\eqref{ff2}-\eqref{fff2} of the statement of the proposition and with \eqref{ss2} replaced by \begin{equation}\label{all1} |Q| = |\{x \in Q: v'_{\beta,\sigma}(x) \in B_\frac{\beta}{2}(a')\}\cup\{x \in Q: v'_{\beta,\sigma}(x) \in B_\frac{\beta}{2}(b)\}|, \end{equation} and \begin{equation}\label{almost1} |\{x \in Q: v'_{\beta,\sigma}(x) \in B_\frac{\beta}{2}(a')\}| \ge (1-\sigma)\mu|Q| \text{ and } |\{x \in Q: v'_{\beta,\sigma}(x) \in B_\frac{\beta}{2}(b)\}| \ge (1-\sigma)(1-\mu)|Q|. \end{equation} Here, as usual, $v'_{\beta,\sigma} = \mathcal{B}(V'_{\beta,\sigma})$. Since $s < \frac{\beta}{2|a-b|}$, we see that \[ B_\frac{\beta}{2}(a') \subset B_{\beta}(a), \] and hence \eqref{almost1} implies \begin{equation}\label{almost2} |\{x \in Q: v'_{\beta,\sigma}(x) \in B_\beta(a)\}| \ge (1-\sigma)\mu|Q| \text{ and } |\{x \in Q: v'_{\beta,\sigma}(x) \in B_\beta(b)\}| \ge (1-\sigma)(1-\mu)|Q|. \end{equation} We now come back to the domain $\Omega$ of the statement of the proposition. We split $\Omega$ into two open sets $\Omega_1$ and $\Omega_2$ with $|\Omega_1| = t|\Omega|$, $|\Omega_2| =(1-t)|\Omega|$, with $t \in (0,1)$ to be fixed. We subdivide $\Omega_1$ into cubes and fill it with rescaled and translated copies of $V_{\beta,\sigma}$ of the form \[ V_{\beta,\sigma,r,x_0}(x) \doteq r^{k'}V_{\beta,\sigma}\left(\frac{x- x_0}{r}\right), \] and $\Omega_2$ with rescaled and translated copies of $V'_{\beta,\sigma}$ of the same form. The map $V_\beta$ is exactly the resulting map, for the correct choice of $t$.
Indeed, it is simple to see that $V_\beta$ inherits properties \eqref{f02}-\eqref{f2}-\eqref{ff2}-\eqref{fff2} of the proposition, and also \eqref{all}-\eqref{all1}, in the sense that \[ |\Omega| = |\{x \in \Omega: v_{\beta}(x) \in B_\beta(a)\}\cup\{x \in \Omega: v_{\beta}(x) \in B_\beta(b)\}|. \] Notice that, by our choice $\beta \le \frac{1}{2}|a-b|$, the sets $\{x \in \Omega: v_{\beta}(x) \in B_\beta(a)\}$ and $\{x \in \Omega: v_{\beta}(x) \in B_\beta(b)\}$ are disjoint, thus it suffices to check that there exists $t \in (0,1)$ such that \[ |\{x \in \Omega: v_{\beta}(x) \in B_\beta(a)\}| = \lambda|\Omega| \] to conclude the proof. To see the latter, we write \begin{align*} |\{x \in \Omega: v_\beta(x) \in B_\beta(a)\}| &= |\{x \in \Omega_1: v_{\beta}(x) \in B_\beta(a)\}| + |\{x \in \Omega_2: v_\beta(x) \in B_\beta(a)\}|\\ & = |\{x \in Q: v_{\beta,\sigma}(x) \in B_\beta(a)\}|\frac{|\Omega_1|}{|Q|} + |\{x \in Q: v_{\beta,\sigma}'(x) \in B_\beta(a)\}|\frac{|\Omega_2|}{|Q|}\\ & = t|\{x \in Q: v_{\beta,\sigma}(x) \in B_\beta(a)\}|\frac{|\Omega|}{|Q|} + (1-t)|\{x \in Q: v_{\beta,\sigma}'(x) \in B_\beta(a)\}|\frac{|\Omega|}{|Q|}. \end{align*} Since $\sigma < s$ and $\mu = \frac{\lambda}{1-s}$, \[ |\{x \in Q: v_{\beta,\sigma}(x) \in B_\beta(a)\}| < \lambda|Q|\text{ and }|\{x \in Q: v'_{\beta,\sigma}(x) \in B_\beta(a)\}| \ge (1-\sigma)\mu|Q| > \lambda |Q|, \] it is then clear that there exists $t \in (0,1)$ such that \[ t|\{x \in Q: v_{\beta,\sigma}(x) \in B_\beta(a)\}| + (1-t)|\{x \in Q: v'_{\beta,\sigma}(x) \in B_\beta(a)\}| = \lambda|Q|. \] This choice of $t$ fixes $V_\beta$ and concludes the proof. \end{proof} It is convenient to introduce some measure-theoretic concepts alongside the simple laminate construction, compare \cite[Section 2]{SMVS}, \cite[Introduction]{KIRK}.
For instance, given $a,b$ as in Lemma \ref{lam}, we consider\footnote{The measure we associate is the so-called \emph{Young measure} generated by the sequence of maps defined in Lemma \ref{lam}. We will only use particular Young measures, namely laminates, and hence we will not introduce them in full generality. For a comprehensive introduction, see for instance \cite[Chapter 3]{DMU}.} \[ \nu = \lambda\delta_{a} + (1-\lambda)\delta_b. \] Now, after having split the barycentre $e$ into $a$ and $b$ as $e = \lambda a + (1-\lambda)b$, one may split $b$ as $b = \mu A + (1-\mu)B$, for $\mu \in (0,1)$ and $B - A \in \Lambda_\mathcal{A}$. After this operation, we consider the new measure \[ \nu' = \lambda\delta_{a} + (1-\lambda)\mu\delta_A + (1-\lambda)(1-\mu)\delta_B. \] Notice that the barycentre of $\nu'$ is the same as that of $\nu$. Generalizing this simple example, we give the following: \begin{definition} \label{d:laminates_finite} Let $\nu,\nu' \in \mathcal{P}(U)$, $U \subset \mathbb R^{n}$ open. Let $\nu = \sum_{i= 1}^r\lambda_i\delta_{a_i}$. We say that $\nu'$ can be obtained via \emph{elementary splitting} from $\nu$ if for some $i \in \{1,\dots,r\}$, there exist $b,c \in U$, $\lambda \in [0,1]$ such that \[ b-c \in \Lambda_\mathcal{A} ,\quad [b,c] \subset U, \quad a_i = sb + (1-s)c, \] for some $s \in (0,1)$ and \[ \nu' = \nu +\lambda\lambda_i(-\delta_{a_i} + s\delta_b + (1-s)\delta_c). \] A measure $\nu = \sum_{i= 1}^r\lambda_i\delta_{a_i}\in \mathcal{P}(U)$ is called an $\mathcal{A}$-\emph{laminate of finite order} if there exists a finite number of measures $\nu_1,\dots,\nu_{r'} \in \mathcal{P}(U)$ such that \[ \nu_1 = \delta_X,\quad \nu_{r'} = \nu \] and $\nu_{j + 1}$ can be obtained via elementary splitting from $\nu_j$, for every $j\in \{1,\dots,r'-1\}$.
\end{definition} Using the definition of $\mathcal{A}$-laminate of finite order and a simple iterative procedure that exploits Proposition \ref{lam+} at every splitting, one may prove the following result. We refer the interested reader to \cite[Lemma 3.2]{SMVS} for a proof in the case $\mathcal{A} = \curl$. \begin{prop}\label{ind} Let $\nu = \sum_{i = 1}^r\lambda_i\delta_{a_i} \in \mathcal{P}(U)$ be an $\mathcal{A}$-laminate of finite order, and let $e = \bar \nu$. Fix any element $q_e \in \mathbb{P}_\mathbb{H}L(k',m)^{n'}$ with the property that $\mathcal{B}(q_e) = e$ everywhere in $\mathbb R^m$. Then, given an open set $\Omega$, for every $\varepsilon >0$ there exists $V_\varepsilon \in W^{k',\infty} \cap C^{k' -1}(\overline{\Omega},\mathbb R^{n'})$ enjoying the following properties: \begin{enumerate} \item\label{flof0} the $W^{k',\infty} \cap C^{k' -1}$ norm of $V_\varepsilon$ only depends on $\diam(\Omega),\max_i|a_i|$ and $|D^{k'}q_e|$; \item\label{flof} $V_\varepsilon = q_e$, together with all its derivatives of order $\ell < k'$, on $\partial\Omega$; \item\label{fflof} every component of $V_\varepsilon$ is piecewise a polynomial of degree $k'$; \item\label{ffflof} $\|V_\varepsilon -q_e\|_{C^{k' - 1}(\overline{\Omega})} \le \varepsilon$; \item\label{sslof} if $v_{\varepsilon} \doteq \mathcal{B}(V_\varepsilon)$, then $|\{x \in \Omega: v_{\varepsilon}(x) \in B_{\varepsilon}(a_i)\}| = \lambda_i|\Omega|,\ \forall i \in \{1,\dots, r\}$. \end{enumerate} \end{prop} \subsection{Laminates} \label{section:laminates} In this section we give the definition of $\mathcal{A}$-\emph{laminate}. In \cite[Section 4]{KIRK}, Kirchheim develops all the useful tools concerning $\mathcal{A}$-laminates, thus extending \cite[Section 2]{SMVS} from the case $\mathcal{A} = \curl$ to the case of general linear differential operators.
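Let us also record an elementary observation that will be used implicitly in what follows: an elementary splitting as in Definition \ref{d:laminates_finite} preserves the barycentre, since
\[
\overline{\nu'} = \bar\nu + \lambda\lambda_i\big(-a_i + s b + (1-s)c\big) = \bar\nu,
\]
by the relation $a_i = sb + (1-s)c$. In particular, an $\mathcal{A}$-laminate of finite order has the same barycentre as the Dirac mass $\delta_X$ from which the splitting procedure starts.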
In this subsection, we simply recall the definitions and the results of \cite{KIRK}. Let us point out that in \cite{KIRK} the notation $\mathcal{D}$ is used instead of $\Lambda_\mathcal{A}$ and the name $\mathcal{D}$-\emph{prelaminates} is used instead of $\mathcal{A}$-\emph{laminates of finite order}. \begin{definition} Let $O \subset \mathbb R^{ n}$ be an open set. We say that $f: O \to \mathbb R$ is $\Lambda_\mathcal{A}$-convex in $O$ if $f$ is convex on every $\Lambda_\mathcal{A}$-segment contained in $O$, i.e. $$ f(\lambda a + (1- \lambda)b ) \leq \lambda f(a) + (1- \lambda) f(b), \quad \forall \lambda \in [0,1],$$ for any $a,b \in \mathbb R^{n}$ such that $a-b \in \Lambda_\mathcal{A}$ and $[a,b] \subset O$. If $f$ is $\Lambda_\mathcal{A}$-convex in $\mathbb R^n$, we will simply say that $f$ is $\Lambda_\mathcal{A}$-convex. \end{definition} \begin{definition} Let $E \subset \mathbb R^{n}$. We say that $\nu \in \mathcal{P}(E)$ is an $\mathcal{A}$-laminate if \begin{equation} \label{jensen} \int_{\mathbb R^{n}}f(X)d\nu \ge f\left(\int_{\mathbb R^{n}}Xd\nu\right) = f(\bar \nu), \end{equation} for every $\Lambda_\mathcal{A}$-convex function $f$ in $\mathbb R^{n}$. We define $$ \mathcal{P}^{ \Lambda_\mathcal{A}}(K)\doteq \{\nu \in \mathcal{P}(K): \nu \text{ is an } \mathcal{A}\text{-laminate} \}.$$ \end{definition} We now give the definition of the $\Lambda_\mathcal{A}$-convex hull of a compact or open subset of $\mathbb R^n$. In the case $\mathcal{A} = \curl$, this is the so-called rank-one convex hull, $E^{rc}$, compare \cite[Section 6]{SMVS}. \begin{definition} Let $K \subset \mathbb R^n$ be a compact set. We define the $\Lambda_\mathcal{A}$-convex hull $K^{\Lambda_\mathcal{A}}$ as \[ K^{\Lambda_\mathcal{A}} \doteq \{X: X \text{ is the barycenter of an $\mathcal{A}$-laminate $\nu$ in $K$}\}. \] For an open set $U$, \[ U^{\Lambda_\mathcal{A}} \doteq \bigcup_{K\subset U: K \text{ compact}}K^{\Lambda_\mathcal{A}}.
\] \end{definition} We collect in the next proposition some useful properties of the objects we just introduced: \begin{prop} \label{p:lambdawavecone} The following hold: \begin{enumerate} \item\label{firstprop} For any compact set $K \subset \mathbb R^n$, $$K^{\Lambda_\mathcal{A}} = \{X: f(X) \leq 0, \text{ for every } \ \Lambda_\mathcal{A} \text{-convex $f$ with } \max_{Y\in K}f(Y)\le 0\};$$ \item\label{secondprop} If $U \subset \mathbb R^n$ is open, then $U^{\Lambda_\mathcal{A}}$ is open; \item\label{thirdprop} Let $O \subset \mathbb R^n$ be open and bounded, and let $f:O\to\mathbb R$ be $\Lambda_\mathcal{A}$-convex. Then $f$ is locally Lipschitz. \end{enumerate} \end{prop} For the proof of \eqref{firstprop} we refer the reader to \cite[Corollary 4.11]{KIRK}. \eqref{secondprop} follows from the simple fact that the translation of a laminate is still a laminate. Finally, the proof of \eqref{thirdprop} can be found in \cite[Lemma 2.3]{KKR}. \\ \\ Notice that if $\nu$ is an $\mathcal{A}$-\emph{laminate of finite order}, then \eqref{jensen} holds for every $\Lambda_\mathcal{A}$-convex function $f$. Since every $\Lambda_\mathcal{A}$-convex function is locally Lipschitz continuous, \eqref{jensen} also holds for every weak-$*$ limit of sequences $\{\nu_n\}_n$ of $\mathcal{A}$-laminates of finite order supported in a fixed bounded open set. Therefore, the weak-$*$ closure of the space of $\mathcal{A}$-laminates of finite order is contained in the space of $\mathcal{A}$-laminates. M\"uller and \v Sver\'ak actually managed to prove the converse in the case of the wave cone induced by the operator $\mathcal{A} =\curl$, compare \cite[Theorem 2.1]{SMVS}, and \cite[Theorem 4.12]{KIRK} extends this result to the case of general operators: \begin{theorem}\label{ann} Let $K \subset \mathbb R^n$ be a compact set and let $\nu \in \mathcal{P}^{\Lambda_\mathcal{A}}(K)$. Let $U$ be an open set such that $K^{\Lambda_\mathcal{A}} \subset U$.
Then there exists a sequence $\{\nu_j\}_j \subset \mathcal{P}(U)$ of $\mathcal{A}$-laminates of finite order such that $\overline{\nu_j} = \overline{\nu}$ for each $j$ and $\{\nu_j\}_j$ converges weakly-$*$ to $\nu$ in the sense of measures. \end{theorem} \subsection{In-approximations and exact solutions}\label{inappexsol} In this subsection we exploit the theory developed in Section \ref{section:simplelaminates} and Section \ref{section:laminates} to construct solutions of \eqref{AINC}, and in particular we prove Theorem \ref{rcexact}. We start with the following preliminary result. \begin{prop}\label{usefulprop} Let $U \subset \mathbb R^n$ and $\Omega\subset \mathbb R^m$ be open and bounded sets and let $W\in W^{k',\infty}\cap C^{k'-1}(\overline{\Omega},\mathbb R^{n'})$ be a map whose components are piecewise polynomials of order $k'$ such that $$\mathcal{B} (W) \in U^{\Lambda_\mathcal{A}} \text{ in } \Omega.$$ Then, for every $\delta > 0$, there exists a map $V_\delta \in W^{k',\infty} \cap C^{k' -1}(\overline{\Omega},\mathbb R^{n'})$ whose components are piecewise polynomials of order $k'$ with the following properties: \begin{enumerate} \item the $W^{k',\infty} \cap C^{k' -1}$ norm of $V_\delta$ only depends on $\diam(\Omega)$, $\diam(U)$ and $\|W\|_{C^{k'}}$; \item $V_\delta = W$, together with all of its derivatives of order $\ell < k'$, on $\partial\Omega$; \item $\|V_\delta - W\|_{C^{k'-1}} \le \delta$; \item if $v_\delta \doteq \mathcal{B}(V_\delta)$, then $v_\delta \in U$ a.e. in $\Omega$. \end{enumerate} \end{prop} \begin{proof} By definition, there exist countably many open and disjoint $\Omega_n$ such that $\Omega = \bigcup_{n}\Omega_n$ and, on $\Omega_n$, $W$ is a vector of polynomials of order $k'$. We work on each $\Omega_n$ separately, and hence fix now $n \in {\mathbb N}$.
\\ \\ By definition, since $e \doteq \mathcal{B}(W|_{\Omega_n}) \in U^{\Lambda_\mathcal{A}}$, there exists a compact set $C\subset U$ such that \[ e \in C^{\Lambda_\mathcal{A}}. \] By Proposition \ref{p:lambdawavecone}, we infer the existence of an $\mathcal{A}$-laminate $\nu$ supported in $C$ with barycentre $e$. Therefore, we can apply Theorem \ref{ann} with $U^{\Lambda_\mathcal{A}}$ instead of $U$. This is possible since $U^{\Lambda_\mathcal{A}}$ is open, see \eqref{secondprop} of Proposition \ref{p:lambdawavecone}. Thus, we can find an $\mathcal{A}$-laminate of finite order $$\mu = \sum_{i = 1}^r \lambda_i\delta_{a_i}$$ supported in $U^{\Lambda_\mathcal{A}}$ and satisfying \begin{equation}\label{alp} \mu(U) \ge \frac{1}{2}\nu(U) = \frac{1}{2}, \end{equation} the latter coming from the lower semi-continuity on open sets of the total variation of probability measures with respect to the weak-$*$ convergence, see \cite[Theorem 1.40(ii)]{EVG}. We apply Proposition \ref{ind} with $\mu$ and with $q_e \in \mathbb{P}_\mathbb{H}L(k',m)^{n'}$ chosen to be the unique extension to $\mathbb R^m$ of the polynomial $W|_{\Omega_n}$.
Hence, for fixed $\beta >0$, we can find a map $V^n_\beta\in W^{k',\infty} \cap C^{k' -1}(\overline{\Omega_n},\mathbb R^{n'})$ whose components are piecewise polynomials of degree $k'$ such that: \begin{enumerate} \item the $W^{k',\infty} \cap C^{k' -1}$ norm of $V_\beta^n$ is bounded in terms of $\diam(\Omega), \diam(U)$ and $|D^{k'}q_e| \le \|W\|_{W^{k',\infty}}$; \item $\|V^n_\beta - W\|_{C^{k' -1}(\overline{\Omega_n},\mathbb R^{n'})} \le \beta$; \item $V_\beta^n =W$, together with all of its derivatives of order $\ell < k'$, on $\partial\Omega_n$; \item if $v_\beta^n \doteq \mathcal{B}(V_\beta^n)$, then $|\{x\in\Omega_n: \dist(v_\beta^n,\{a_1,\dots, a_r\}) \ge \beta\}| = 0$; \item \label{eq:measure} $|\{x \in \Omega_n: \dist(v_\beta^n,a_i) \le \beta\}| = \lambda_i|\Omega_n|,\ \forall i \in \{1,\dots, r\}$. \end{enumerate} By \eqref{alp}, we have \[ \mu(U) = \sum_{i: a_i \in U}\lambda_i \ge \frac{1}{2}. \] We then choose $\beta>0$ so that $B_\beta(a_k) \subset U^{\Lambda_\mathcal{A}}$, $\forall k =1,\dots, r$, and, if $a_k \in U$, then also $B_\beta(a_k) \subset U$. This is possible since $U$ and $U^{\Lambda_\mathcal{A}}$ are open and we have a finite number of points $a_k$. Therefore, \eqref{eq:measure} of the previous list tells us that \begin{equation}\label{inmeas} |\{x \in \Omega_n: \mathcal{B}(V_\beta^n) \notin U\}| \le \frac{| \Omega_n|}{2} . \end{equation} Now we can define $V_1$ on $\Omega$ by setting $V_1 \doteq V_\beta^n$ on $\Omega_n$. $V_1$ is a map with the required regularity and whose components are all piecewise polynomials of order $k'$. Moreover, $V_1=W$ on $\partial \Omega$, together with all of its derivatives of order $\ell < k'$, and \begin{equation}\label{props} \| V_1- W \|_{C^{k' - 1}} \leq \beta,\quad |\{x \in \Omega: \mathcal{B}(V_1) \notin U\}| \le \frac{|\Omega|}{2}.
\end{equation} Now one iterates this reasoning, considering $V_1$ instead of $W$ and $\{x \in \Omega: \mathcal{B}(V_1) \notin U\}$ instead of $\Omega$. After this step, one gets a map $V_2$ with properties similar to those of \eqref{props}, with the last one replaced by \[ |\{x \in \Omega: \mathcal{B}(V_2) \notin U\}| \le \frac{|\{x \in \Omega: \mathcal{B}(V_1) \notin U\}|}{2} \le \frac{|\Omega|}{2^2}. \] Iterating this reasoning infinitely many times, one gets a sequence of maps $\{V_q\}_q$ that are easily seen to converge to a map $V_\delta$ with the required properties. \end{proof} We now give the definition of $\mathcal{A}$-in-approximation. \begin{definition}\label{INAPP} We say that $K \subset \mathbb R^n$ \emph{admits an} $\mathcal{A}$-\emph{in-approximation} if there exists a sequence of open and equibounded sets $U_n \subset \mathbb R^{n}$ such that \begin{equation}\label{lc} U_n \subset U_{n + 1}^{ {\Lambda_\mathcal{A}}}, \end{equation} and for every sequence $(X_n)_n$ with $X_n \in U_n$, \begin{equation}\label{lp} \{X_n\}_n\ \text{can only have limit points in } K. \end{equation} In the sequel, we will simply write $U_n \to K$ for a sequence of open sets having property \eqref{lp}. \end{definition} We are now ready to use the proposition above and the concept of $\mathcal{A}$-in-approximation to construct exact solutions of \eqref{AINC}. \begin{theorem}\label{rcexact} Let $\Omega \subset \mathbb R^m$ be an open and bounded set and $K \subset \mathbb R^{n}$ be a compact set that admits an $\mathcal{A}$-in-approximation $\{U_n\}_n$.
Then, for every $W \in C^{k'}( \overline{\Omega}, \mathbb R^{n'})$ such that \[ \mathcal{B}(W) \in U_1 \text{ in }\Omega, \] and for every $\varepsilon>0$, there exists a map $V_\varepsilon\in W^{k',\infty}\cap C^{k'-1}(\overline\Omega,\mathbb R^{n'})$ such that: \begin{enumerate} \item\label{BOUND} the $W^{k',\infty}\cap C^{k'-1}$ norm of $V_\varepsilon$ only depends on $\max_n\{\diam(U_n),\diam(K)\}$; \item\label{BOUNDARY} $V_\varepsilon = W$, together with all its derivatives of order $\ell < k'$, on $\partial\Omega$; \item\label{unif} $\|V_\varepsilon - W\|_{C^{k' -1}(\overline\Omega,\mathbb R^{n'})} \leq \varepsilon$; \item\label{EXA} $\mathcal{B}(V_\varepsilon) \in K \text{ in }\Omega$. \end{enumerate} \end{theorem} \begin{proof} Let $W \in C^{k'}(\overline{\Omega},\mathbb R^{n'})$. The first thing we do is to replace $W$ with a map $W'$ whose components are piecewise polynomials of degree $k'$, which approximates $W$ well in the $C^{k'}$ norm and has the same boundary datum. To do so, we define for every $j \in {\mathbb N}$ \[ \Omega_j \doteq \{x \in \Omega: \dist(x,\partial \Omega) \ge 2^{-j}\}. \] Up to considering $\Omega_{j + j_0}$ for some $j_0 \in {\mathbb N}$, we can assume without loss of generality that $\Omega_0$ is non-empty. Furthermore, again without loss of generality, we can assume that $$0 < \varepsilon < \min_{x \in \overline{\Omega_0}}\dist(\mathcal{B}(W)(x),\partial U_1)$$ and from now on we fix $\varepsilon > 0$. Consider a decreasing sequence $\{c_j\}_j$ of positive numbers such that $c_1 < \frac{\varepsilon}{2}$ and \begin{equation}\label{vicinov} \dist(\mathcal{B}(W)(x),\partial U_1) \ge c_{j + 1}, \quad \forall x \in \Omega_{j+1} \setminus \Omega_{j}.
\end{equation} Applying Lemma \ref{pp}, we can find a map $W' \in C^{k'}(\overline{\Omega},\mathbb R^{n'})$ whose components are piecewise polynomials of degree $k'$ such that \begin{equation}\label{vicino} \|W'-W\|_{C^{k'}(\overline{\Omega},\mathbb R^{n'})} \le \frac{\varepsilon}{2}, \quad \|W'-W\|_{C^{k'}(\overline{\Omega_{j+1} \setminus \Omega_{j}},\mathbb R^{n'})} \le \frac{c_{j+1}}{2}, \quad \forall j, \end{equation} and \begin{equation}\label{bordo} W'|_{\partial\Omega} = W. \end{equation} Now \eqref{vicinov}, \eqref{vicino}, the openness of $U_1$ and the fact that $\mathcal{B}$ is an operator of order $k'$ imply that \[ \mathcal{B}(W') \in U_1, \text{ in }\Omega. \] We now come to the main part of the proof. First, we exploit the property $U_n \subset U_{n+1}^{\Lambda_\mathcal{A}}$ inductively in the following way. We start with $V_1 = W'$. Then, we can apply Proposition \ref{usefulprop} with $U_{i +1}, V_{i}$ instead of $U,W$ and $\delta_{i+ 1}>0$ to find a map $V_{i +1} \in W^{k',\infty} \cap C^{k'-1}(\overline{\Omega},\mathbb R^{n'}) $ whose components are piecewise polynomials of degree $k'$ such that: \begin{enumerate}[\quad\;(i)] \item\label{equiboud} the $W^{k',\infty} \cap C^{k'-1}$ norms are equibounded by a constant depending only on $\max_n\{\diam(U_n),\diam(K)\}$; \item\label{boudary} $W = V_i = V_{i + 1}$, together with all the derivatives of order $\ell < k'$, on $\partial\Omega$; \item\label{deltai} $\|V_{i + 1} - V_i\|_{C^{k' -1}} \le \delta_{i+1}$; \item\label{BVi} $\mathcal{B}(V_{i +1}) \in U_{i +1}$. \end{enumerate} The sequence $\{\delta_i\}_i$ is chosen inductively: given $V_i$ and $\delta_i$, we choose $\delta_{i + 1}$ suitably, and thus also $V_{i + 1}$ by Proposition \ref{usefulprop}.
Using the notation $ \| \cdot \|_{1,i} \doteq \| \cdot \|_{L^1(\Omega_{i},\mathbb R^{n})}$, we find $0 < \varepsilon_i <\min\{2^{-i},\varepsilon_{i -1}\}$ such that \begin{equation}\label{BVstar} \|\mathcal{B}(V_i)-\mathcal{B}(V_i)\star \rho_{\varepsilon_i}\|_{1,i} \le \frac{1}{i}. \end{equation} In the last equation, we denoted by $\mathcal{B}(V_i)\star \rho_{\varepsilon}$ the mollification of $\mathcal{B}(V_i)$ with the standard even, smooth, compactly supported mollification kernel $\rho_\varepsilon$. Now we choose \begin{equation}\label{choicedeltai} \delta_{i + 1} \doteq \varepsilon\frac{\varepsilon_i}{C2^{i + 1}}, \end{equation} where $C > 1$ is a universal constant depending only on the choice of the convolution kernel $\rho$. After having made the choice \eqref{choicedeltai}, we continue the iteration. By \eqref{deltai}, we find that $\{V_i\}_i$ is a Cauchy sequence in $C^{k'-1}$. This implies that there exists a limit $V_\varepsilon = \lim_iV_i$ in the $C^{k'-1}$ topology. The fact that $V_\varepsilon$ fulfills \eqref{BOUND}-\eqref{BOUNDARY} is an immediate consequence of \eqref{equiboud}-\eqref{boudary}. Furthermore, \eqref{equiboud} and \eqref{BVi} imply that $\mathcal{B}(V_i)$ is equibounded in $L^\infty$, and thus the sequence $\mathcal{B}(V_i)$ converges weakly-$*$ in $L^\infty$ to $\mathcal{B}(V_\varepsilon)$. We will now prove that \begin{equation}\label{uinf} \mathcal{B}(V_\varepsilon) \in K, \text{ in } \Omega. \end{equation} The crucial point is that the choice of the sequence $\{\delta_i\}_i$ yields strong $L^1_{\loc}$ convergence of $\mathcal{B}(V_i)$ to $\mathcal{B}(V_\varepsilon)$. Once we show this, we can pass to a subsequence that converges pointwise a.e. and use hypothesis \eqref{lp} to conclude \eqref{uinf}.
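The summability built into the choice \eqref{choicedeltai} can be illustrated numerically. The following sketch, in Python rather than in a formal proof environment, takes an arbitrary admissible choice of the sequence $\{\varepsilon_i\}_i$ (decreasing and $\le 2^{-i}$) and of the constant $C > 1$, both of which are only constrained, not determined, by the argument above, and verifies with exact rational arithmetic the tail estimate $\frac{C}{\varepsilon_i}\sum_{j \ge i}\delta_{j+1} \le \frac{\varepsilon}{2^i}$ on truncated sums:

```python
from fractions import Fraction

# Exact-arithmetic sanity check (not part of the proof) of the series bound
# granted by delta_{i+1} = eps * eps_i / (C * 2^(i+1)), for ANY decreasing
# sequence eps_i <= 2^-i and ANY constant C > 1 (both are placeholders here).
eps = Fraction(1, 2)     # plays the role of epsilon > 0
C = Fraction(3)          # plays the role of the universal constant C > 1
N = 60                   # truncation level of the infinite sums

# an arbitrary admissible choice of the mollification parameters eps_i
eps_i = {i: Fraction(1, 2**i * (i + 1)) for i in range(1, N)}
assert all(eps_i[i] <= Fraction(1, 2**i) for i in range(1, N))
assert all(eps_i[i] > eps_i[i + 1] for i in range(1, N - 1))   # decreasing

delta = {j + 1: eps * eps_i[j] / (C * 2**(j + 1)) for j in range(1, N)}

# the tail bound (C / eps_i) * sum_{j >= i} delta_{j+1} <= eps / 2^i
for i in range(1, 20):
    tail = sum(delta[j + 1] for j in range(i, N - 1))
    assert (C / eps_i[i]) * tail <= eps / Fraction(2**i)
```

Since all quantities are rational, the assertions involve no floating-point rounding; the truncated tails are of course dominated by the full series.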
To prove strong $L^1_{\loc}$ convergence, we fix $i_0 \in {\mathbb N}$ and, for all $i > i_0$, we write: \begin{equation}\label{gradest} \|\mathcal{B}(V_{i}) - \mathcal{B}(V_\varepsilon)\|_{1,i_0} \le \|\mathcal{B} (V_i) - \mathcal{B}(V_{i})\star\rho_{{\varepsilon}_i}\|_{1,i_0} + \|\mathcal{B}(V_{i})\star\rho_{{\varepsilon}_i} - \mathcal{B}(V_\varepsilon)\star\rho_{{\varepsilon}_i}\|_{1,i_0} + \| \mathcal{B}(V_\varepsilon)-\mathcal{B}(V_\varepsilon)\star\rho_{{\varepsilon}_i}\|_{1,i_0}. \end{equation} The first term of the previous sum converges to $0$ by \eqref{BVstar}, while the third converges to $0$ since $\mathcal{B}(V_\varepsilon)$ is an $L^1$ function. It only remains to estimate the middle term of the right hand side of \eqref{gradest}. Since $\mathcal{B} $ is an operator of order $k'$, for the same constant $C > 1$ appearing in \eqref{choicedeltai}, we can write: \[ \|\mathcal{B}(V_i)\star\rho_{{\varepsilon}_i} - \mathcal{B}(V_\varepsilon)\star\rho_{{\varepsilon}_i}\|_{1,i_0} \leq C \frac{\|V_i- V_\varepsilon\|_{C^{k'-1}(\overline{\Omega},\mathbb R^{n'})}}{\varepsilon_i} \le \frac{C}{\varepsilon_i}\sum_{j = i}^{\infty}\|V_{j + 1}-V_j\|_{C^{k'-1}} \overset{\eqref{deltai}}{\leq} \frac{C}{\varepsilon_i}\sum_{j = i}^{\infty} \delta_{j+1}. \] By our choice \eqref{choicedeltai}, we estimate: \[ \frac{C}{\varepsilon_i}\sum_{j = i}^{\infty} \delta_{j + 1} \leq \frac{\varepsilon}{2^i}. \] Therefore, the right hand side of \eqref{gradest} converges to $0$ as $i \to \infty$. Since $i_0$ was arbitrary and $\Omega_{i_0} \nearrow \Omega$, \eqref{EXA} is proven and it only remains to show \eqref{unif}: \[ \|V_{\varepsilon} - W\|_{C^{k'-1}(\overline{\Omega},\mathbb R^{n'})} \le \|W' - W\|_{C^{k'-1}} + \|V_\varepsilon-W'\|_{C^{k'-1}} \overset{\eqref{vicino}}{\le} \frac{\varepsilon}{2} + \sum_{i=1}^{\infty}\|V_{i + 1} - V_i\|_{C^{k'-1}} \le \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon.
\] This concludes the proof. \end{proof} \section{The four-state problem}\label{fours} In this section, we study the inclusion \begin{equation}\label{AINCfour} \begin{cases} v(x) \in K \doteq \{a_1,a_2,a_3,a_4\} \subset \mathbb R^n,& \text{a.e. in $B_1$,}\\ \mathcal{A}(v) = 0, &\text{ in the sense of distributions}, \end{cases} \end{equation} with $v \in L^\infty(B_1,\mathbb R^{n})$, $B_1 \subset \mathbb R^m$ being the ball of radius $1$ centred at $0$, and $a_i -a_j \notin \Lambda_\mathcal{A}$ if $i \neq j$. We wish to exploit Theorem \ref{rcexact} to solve \eqref{AINCfour}. In order to do so, we need to find an $\mathcal{A}$-in-approximation for $K$. In \cite[Definition 2.6]{FS}, F\"orster and Sz{\'{e}}kelyhidi introduced the notion of large $T_5$ configuration in order to solve the analogue of \eqref{AINCfour} with five states in the case $\mathcal{A} = \curl$. We give the analogous definition in our case. \begin{definition}\label{T4} Let $S \subset \mathbb R^n$ be arbitrary. We say that an ordered set of elements $(a_1,a_2,a_3, a_4)$ \emph{is in $S$-$T_4$ configuration} if there exist $p \in \mathbb R^n$, $c_1,c_2,c_3,c_4 \in S \subset \mathbb R^n$ and $k_1,k_2,k_3,k_4 \in (1,+\infty)$ such that \begin{equation}\label{t4} \begin{cases} &a_1 = p + k_1c_1\\ &a_2 = p + c_1 + k_2c_2\\ &a_3 = p + c_1 + c_2 + k_3c_3\\ &a_4 = p + c_1 + c_2 + c_3 + k_4c_4\\ &c_1 + c_2 + c_3 + c_4 = 0. \end{cases} \end{equation} We say that $\{a_1,a_2,a_3,a_4\}$ \emph{forms a large $S$-$T_4$ configuration} if there exist $3$ distinct permutations $\sigma_1,\sigma_2,\sigma_3: \{1,2,3,4\} \to \{1,2,3,4\}$ such that the ordered set of vectors $(a_{\sigma_i(1)},a_{\sigma_i(2)},a_{\sigma_i(3)},a_{\sigma_i(4)})$ is in $S$-$T_4$ configuration, i.e.
it fulfills \begin{equation}\label{t4perms} \begin{cases} &a_{\sigma_{i}(1)} = p^{\sigma_i} +k^{\sigma_i}_{1}c^{\sigma_i}_{1}\\ &a_{\sigma_{i}(2)} = p^{\sigma_i} + c_1^{\sigma_i} + k^{\sigma_i}_{2}c^{\sigma_i}_{2}\\ &a_{\sigma_i(3)} = p^{\sigma_i} + c_1^{\sigma_i} + c^{\sigma_i}_{2} + k^{\sigma_i}_{3}c^{\sigma_i}_{3}\\ &a_{\sigma_i(4)} = p^{\sigma_i} + c_1^{\sigma_i} + c^{\sigma_i}_{2} + c^{\sigma_i}_{3}+ k^{\sigma_i}_{4}c^{\sigma_i}_{4}\\ &c_1^{\sigma_i} + c^{\sigma_i}_{2} + c^{\sigma_i}_{3}+ c^{\sigma_i}_{4} = 0 \end{cases} \end{equation} for vectors $p^{\sigma_i} \in \mathbb R^n$, $c^{\sigma_i}_{\ell} \in S$ and numbers $k^{\sigma_i}_{\ell} \in (1,+\infty)$, for $1\le i \le 3$, $1 \le \ell \le 4$, and moreover the vectors $c^{\sigma_1}_{\sigma_{1}^{-1}(\ell)},c^{\sigma_2}_{\sigma_{2}^{-1}(\ell)},c^{\sigma_3}_{\sigma_{3}^{-1}(\ell)}$ are linearly independent for every fixed $\ell \in \{1,2,3,4\}$. \end{definition} Definition \ref{T4} becomes meaningful when $S = \Lambda_\mathcal{A}$ for some $\mathcal{A} \in \op (k,m,n,N)$ with potential $\mathcal{B} \in \op(k',m,n',n)$. In this case, $\Lambda_\mathcal{A}$-$T_4$ configurations are the most studied example of sets without $\Lambda_\mathcal{A}$-connections that display \emph{flexibility} for approximate solutions. Indeed, given a $\Lambda_\mathcal{A}$-$T_4$ configuration $K = \{a_1,a_2,a_3,a_4\}$ as in \eqref{t4}, one can find a sequence of equibounded maps $\{u_n\}_n \subset L^\infty$ such that \[ \begin{cases} \dist(u_n,K) \to 0, &\text{strongly in }L^1\\ u_n \weak P, &\text{as }n \to \infty,\\ \mathcal{A}(u_n) = 0, &\forall n \in {\mathbb N}, \end{cases} \] where $P \in \mathbb{P}_\mathbb{H}L(k',m)$ is such that $\mathcal{B}(P) = p$ everywhere on $\mathbb R^m$. This stems from the fact that $p \in B_r(\{a_1,a_2,a_3,a_4\})^{\Lambda_\mathcal{A}}$ for all $r > 0$ and hence Proposition \ref{usefulprop} applies.
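To make the mechanism behind this flexibility explicit, it is worth recording the elementary splitting identity hidden in \eqref{t4} (a direct computation, stated here for the reader's convenience): setting $p_0 \doteq p$ and $p_\ell \doteq p + c_1 + \dots + c_\ell$ for $1 \le \ell \le 4$, the last condition in \eqref{t4} gives $p_4 = p_0$, and for every $1 \le \ell \le 4$
\[
p_\ell = \left(1 - \frac{1}{k_\ell}\right)p_{\ell - 1} + \frac{1}{k_\ell}\, a_\ell, \qquad p_\ell - p_{\ell - 1} = c_\ell \in \Lambda_\mathcal{A},
\]
with $\frac{1}{k_\ell} \in (0,1)$. Hence each barycenter $p_{\ell - 1}$ splits, by a simple laminate in the $\Lambda_\mathcal{A}$-direction $c_\ell$, into the state $a_\ell$ and the next barycenter $p_\ell$, and the chain closes up after four steps; iterating these splittings produces the sequence $\{u_n\}_n$ above.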
For an introduction to $\Lambda_{\curl}$-$T_4$ configurations, which are simply called $T_4$ configurations in the literature, see \cite[Lemma 2.6]{DMU}. \\ \\ While $\Lambda_\mathcal{A}$-$T_4$ configurations are related to the existence of approximate solutions, large $\Lambda_\mathcal{A}$-$T_4$ configurations yield the existence of $\mathcal{A}$-in-approximations of $K$ and hence, through Theorem \ref{rcexact}, exact solutions. This was first noticed in \cite{FS}. The fact that the existence of a large $\Lambda_\mathcal{A}$-$T_4$ configuration $\{a_1,a_2,a_3,a_4\}$ implies the existence of an $\mathcal{A}$-in-approximation is analogous to the proof of the same fact for the $\curl$ operator given in \cite{FS}, and will be sketched in Subsection \ref{larget4}. \\ \\ If $S$ is not (a subset of) a cone $\Lambda_\mathcal{A}$, Definition \ref{T4} has a purely algebraic meaning, and we chose to state it at this level of generality to improve the clarity of our exposition. Indeed, in our strategy, we will first find a set $\{a_1,a_2,a_3,a_4\}$ and write it as a large $\mathbb R^n$-$T_4$ configuration. At this level, this only means computing the values $p^{\sigma_i}, k_\ell^{\sigma_i}, c_\ell^{\sigma_i}$ for which the algebraic conditions \eqref{t4perms} are satisfied for the three permutations $\sigma_1,\sigma_2,\sigma_3$, together with the additional requirement on the linear independence of $\left\{c^{\sigma_i}_{\sigma_{i}^{-1}(\ell)}, 1\le i \le 3\right\}$ for all $1\le \ell \le 4$. Note that this is always possible. Subsequently, we find an operator $\mathcal{A}$ for which $\{c_\ell^{\sigma_i}, 1\le i \le 3, 1 \le \ell \le 4\} \subset \Lambda_\mathcal{A}$, thus proving that $\{a_1,a_2,a_3, a_4\}$ is a large $\Lambda_\mathcal{A}$-$T_4$ configuration.
More precisely, we construct an operator $\mathcal{A}$ for which \begin{equation}\label{c} \{c^{\sigma_i}_\ell: 1\le i \le 3 , 1\le \ell \le 4\} \subset \Lambda_\mathcal{A} \end{equation} and \begin{equation}\label{a-a} a_i - a_j \notin \Lambda_\mathcal{A}, \quad \forall i \neq j. \end{equation} In order to guarantee that $c^{\sigma_1}_{\sigma_{1}^{-1}(\ell)},c^{\sigma_2}_{\sigma_{2}^{-1}(\ell)},c^{\sigma_3}_{\sigma_{3}^{-1}(\ell)}$ are linearly independent, for fixed $\ell \in \{1,2,3,4\}$, we need to have $ n \ge 3$, and hence we fix $n = 3$. We then choose the following vectors: \begin{equation}\label{aaaa} a_1 =\left(\begin{array}{c} 0 \\ 0 \\ 0\end{array}\right),\; a_2 =\left(\begin{array}{c} 1 \\ 0 \\ 0\end{array}\right),\; a_3 =\left(\begin{array}{c} 0 \\ 1 \\ 0\end{array}\right),\;a_4 =\left(\begin{array}{c} 0 \\ 0 \\ 1\end{array}\right). \end{equation} The three permutations $\sigma_1,\sigma_2,\sigma_3$ and the elements $p^{\sigma_i}$, $k^{\sigma_i}_{\ell}$, $c^{\sigma_i}_{\ell}$, for $1\le \ell \le 4$, $1 \le i \le 3$, for which $\{a_1,a_2,a_3,a_4\}$ forms a large $\mathbb R^n$-$T_4$ configuration will be given in Subsection \ref{exval}. From now on, we treat all of these values as fixed explicit values. \\ \\ Let us now examine requirement \eqref{a-a}. We can rewrite \eqref{a-a} as: \[ a_i - a_j \notin \Ker(\mathbb{A}(\xi)) = \im(\mathbb{B}(\xi)), \quad \forall \xi \in \mathbb R^m \setminus \{0\}, \] if, as usual, $\mathcal{B} \in \op(k',m,n',n)$ denotes the potential of $\mathcal{A}$. Heuristically, \eqref{a-a} has more chances to be satisfied once $ \Lambda_\mathcal{A} = \bigcup_{\xi \in \mathbb R^m \setminus \{0\}}\im(\mathbb{B}(\xi))$ is chosen as small as possible. It is therefore natural to require $n',m < n = 3$, and indeed we fix $n' = 1$ and $m = 2$.
Notice, however, that \eqref{c} requires $\Lambda_\mathcal{A}$ to contain \emph{at least} the $12$ vectors $c_\ell^{\sigma_i}$, and in order to achieve this, we will use our last degree of freedom, the order $k'$. Given the constraints $n' = 1, m=2$, we have that, for $q_1,q_2,q_3 \in \mathbb{P}_\mathbb{H}(k',2)$ that will be chosen later, \begin{equation}\label{BP} \mathbb{B}(\xi) = \left(\begin{array}{c} q_1(\xi)\\ q_2(\xi) \\ q_3(\xi)\end{array}\right). \end{equation} We identify the linear map $\mathbb{B}(\xi)$ with its associated matrix. It only remains to deal with \eqref{c}. Each $q_i \in \mathbb{P}_\mathbb{H}(k',2)$ has $k'+1$ coefficients. \eqref{c} is now equivalent to asking the existence of twelve vectors $(\xi_1^{\sigma_i},\xi_2^{\sigma_i}), (\xi_3^{\sigma_i},\xi_4^{\sigma_i}), (\xi_5^{\sigma_i},\xi_6^{\sigma_i}),(\xi_7^{\sigma_i},\xi_8^{\sigma_i}) \in \mathbb R^2$, for $1\le i \le 3$, such that \begin{equation}\label{Pc} \mathbb{B}((\xi_{2\ell-1}^{\sigma_i}, \xi^{\sigma_i}_{2\ell})) = c^{\sigma_i}_\ell, \quad \forall 1\le i\le 3, \forall 1\le \ell \le 4. \end{equation} We randomly generate the vectors $(\xi^{\sigma_i}_{2\ell-1}, \xi^{\sigma_i}_{2\ell}) \in \mathbb R^2$, see Subsection \ref{exval} for the explicit values. Recall that also the $12$ vectors $c^{\sigma_i}_\ell \in \mathbb R^3$ are fixed, and thus \eqref{Pc} becomes a linear system of $36$ equations in the coefficients of $q_1,q_2,q_3$. The right number of unknowns is therefore $36$, which amounts to asking \[ 3(k' + 1) = 36, \] i.e. $k' = 11$. This last choice fixes all the degrees of freedom of $\mathcal{B}\in \op(11,2,1,3)$. Of course, we should now find an operator $\mathcal{A} \in \op(k,2,3,N)$ whose potential is $\mathcal{B}$. We define our candidate $\mathcal{A}$ by writing its symbol $\mathbb{A}(\xi)$.
First, we choose \[ k = 11 \text{ and } N=3, \] and set, for all $\xi \in \mathbb R^2$, \begin{equation}\label{Axi} \mathbb{A}(\xi) \doteq \left(\begin{array}{ccc} 0 & -q_3(\xi)& q_2(\xi)\\ -q_3(\xi) & 0 & q_1(\xi) \\ -q_2(\xi) & q_1(\xi) & 0\end{array}\right). \end{equation} In order to apply the convex integration methods of the previous section, we need the operator $\mathcal{A}$ to be of constant rank and balanced. Furthermore, we need to find a way to verify \eqref{a-a}. This will be done in Theorem \ref{check}. First, we collect our set of assumptions in the following: \begin{prop}\label{computer} Let $a_1,a_2,a_3,a_4$ be as in \eqref{aaaa}, and $p^{\sigma_i}$, $k^{\sigma_i}_{\ell}$, $c^{\sigma_i}_{\ell}$, $(\xi_{2\ell-1}^{\sigma_i}, \xi^{\sigma_i}_{2\ell})$ for $1\le \ell \le 4$, $1 \le i \le 3$ as in Subsection \ref{exval}. Then, there are unique polynomials $q_1,q_2,q_3 \in \mathbb{P}_\mathbb{H}(11,2)$ solving \eqref{Pc}. Furthermore: \begin{enumerate} \item\label{comp0} \eqref{t4perms} holds for all $\sigma_i$, $1 \le i \le 3$; \item\label{comp1} $c^{\sigma_1}_{\sigma_{1}^{-1}(\ell)},c^{\sigma_2}_{\sigma_{2}^{-1}(\ell)},c^{\sigma_3}_{\sigma_{3}^{-1}(\ell)}$ are linearly independent for every fixed $\ell \in \{1,2,3,4\}$; \item\label{comp3} $q_i$ and $q_j$ have no common zero on $\mathbb{S}^1$, for all $i \neq j$; \item\label{comp4} $q_i + q_j$ has no common zero with $q_k$ on $\mathbb{S}^1$, for all $i,j,k$ such that $\{i,j,k\} = \{1,2,3\}$. \end{enumerate} \end{prop} \begin{proof} All of the above checks have been performed in Maple 2020 by means of symbolic computation and the exact fractional representation of rational numbers; hence they are rigorously justified. We will now explain how to perform these computations on a computer in such a way that the result is rigorous, especially for \eqref{comp3} and \eqref{comp4}. \\ \\ We use coordinates $\xi = (x,y)$ in $\mathbb R^2$.
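As a preliminary remark, the purely matricial identities relating \eqref{BP} and \eqref{Axi}, which are used in the proof of Corollary \ref{c:potential} below, do not depend on the specific polynomials $q_1,q_2,q_3$ and can be verified symbolically in any computer algebra system. A sketch, here in Python/SymPy rather than Maple, with symbols $q_1,q_2,q_3$ standing for the values $q_i(\xi)$ at a fixed $\xi$:

```python
import sympy as sp

# q1, q2, q3 stand for the values q_1(xi), q_2(xi), q_3(xi) at a fixed xi.
q1, q2, q3 = sp.symbols('q1 q2 q3')

B = sp.Matrix([q1, q2, q3])            # symbol of the potential, as in (BP)
A = sp.Matrix([[0, -q3, q2],
               [-q3, 0, q1],
               [-q2, q1, 0]])          # symbol of the operator, as in (Axi)

# im(B(xi)) is contained in ker(A(xi)): A(xi) B(xi) = 0 identically.
assert (A * B).expand() == sp.zeros(3, 1)

# det A(xi) vanishes identically, so rank A(xi) <= 2 for every xi.
assert A.det().expand() == 0

# The principal 2x2 minors are -q3^2, q2^2, -q1^2: they cannot all vanish
# unless q1 = q2 = q3 = 0, which yields the constant-rank property.
minors = [A.extract(rows, rows).det() for rows in ([0, 1], [0, 2], [1, 2])]
assert minors == [-q3**2, q2**2, -q1**2]
```

Since these identities hold for abstract symbols, they hold in particular for the explicit $q_1,q_2,q_3$ determined by \eqref{Pc}.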
First, \eqref{comp0}-\eqref{comp1} are simple computations that could in principle be done by hand. Next, one checks that \eqref{Pc} has a solution. Since the solution is unique and the coefficients are particularly lengthy, we do not write them here explicitly. Notice that, since \eqref{Pc} is a linear system with rational entries, the solution is also rational. Thus, every coefficient of $q_1,q_2,q_3$ can be exactly represented as a fraction. Furthermore, using the explicit form of $q_1,q_2,q_3$, one can easily check the following, for all $1\le i,j \le 3$: \begin{equation}\label{coeff} \text{the coefficient of $x^{11}$ of $q_i(x,y)$ and of $q_{i}(x,y) + q_j(x,y)$ is non-zero.} \end{equation} Now we turn to \eqref{comp3}-\eqref{comp4}. Let us define $r_k(\xi) \doteq q_i(\xi) + q_j(\xi)$, for $i,j,k$ such that $\{i,j,k\} = \{1,2,3\}$. By homogeneity, we have, for all $i \in \{1,2,3\}$ and $y \neq 0$, \begin{equation}\label{qiri} q_i(x,y) = y^{11}q_i\left(\frac{x}{y},1\right), \quad r_i(x,y) = y^{11}r_i\left(\frac{x}{y},1\right). \end{equation} Therefore, we can associate to every $q_i$ and $r_i$ a polynomial of one variable, $Q_i(z)$ and $R_i(z)$, defined as \[ Q_i(z) \doteq q_i(z,1), \quad R_i(z) \doteq r_i(z,1). \] By \eqref{coeff}, $q_i$ and $r_i$ do not vanish at points of the form $(x_0,0)$ with $x_0 \neq 0$; hence, by \eqref{qiri}, the zeroes of $q_i$ and $r_i$ on $\mathbb{S}^1$ are in bijective correspondence with the ones of $Q_i$, $R_i$, in the sense that $q_i(x_0,y_0) = 0$ with $y_0 \neq 0$ if and only if $Q_i\left(\frac{x_0}{y_0}\right) = 0$, and analogously for $r_i$. \eqref{comp3}-\eqref{comp4} then become equivalent to the following: \begin{enumerate}[ (i)] \item\label{comp3bis} $Q_i$ and $Q_j$ have no common zero, for all $i \neq j$; \item\label{comp4bis} $R_i$ has no common zero with $Q_i$, for all $i \in \{1,2,3\}$. \end{enumerate} Given the explicit forms of the $Q_i$ and $R_i$, there are two ways to check \eqref{comp3bis}-\eqref{comp4bis}.
\\ \\ The first starts by computing the zeroes of the six polynomials $Q_1,Q_2,Q_3,R_1,R_2,R_3$ numerically. Of course, this does not yield a rigorous proof by itself, but one can then use the numerical values to find (small) intervals with rational endpoints \emph{around} those numerical zeroes, in such a way that the polynomial takes opposite signs at the two endpoints. Notice that the evaluation at the endpoints again gives an exact value, as the polynomial has rational coefficients and the endpoints have been chosen to be rational. By continuity, it follows that a zero of the polynomial lies inside this interval. Now, instead of checking that two polynomials have \emph{different} zeroes, one may simply try to find \emph{disjoint} such intervals, which suffices to show \eqref{comp3bis}-\eqref{comp4bis}. \\ \\ The second way to check \eqref{comp3bis}-\eqref{comp4bis} is much quicker in terms of computations, and it is the method we employed. It simply consists in computing the GCD of every pair $Q_i,Q_j$ and $Q_i,R_i$, for $1\le i\neq j \le 3$. If the GCD of such a pair is a non-zero constant, then clearly the two polynomials can have no common zero, and this turns out to be the case in our particular example. Since the polynomials depend only on one variable, we can use the Euclidean algorithm to compute the GCD of the pairs of polynomials we are interested in. Using the built-in \emph{gcd} function of Maple 2020, we have checked that $\gcd(Q_i,R_i) = \gcd(Q_i,Q_j) = 1$, $\forall 1\le i \neq j \le 3$, and hence \eqref{comp3bis}-\eqref{comp4bis} hold. \end{proof} \begin{corollary}\label{c:potential} Let $a_1,a_2,a_3,a_4$ be as in \eqref{aaaa}, and $p^{\sigma_i}$, $k^{\sigma_i}_{\ell}$, $c^{\sigma_i}_{\ell}$, $(\xi_{2\ell-1}^{\sigma_i}, \xi^{\sigma_i}_{2\ell})$ for $1\le \ell \le 4$, $1 \le i \le 3$ as in Subsection \ref{exval}. Finally, let $q_1,q_2,q_3$ be the unique solution of \eqref{Pc} and define the two operators $\mathcal{A}$ as in \eqref{Axi} and $\mathcal{B}$ as in \eqref{BP}.
Then: \begin{itemize} \item $a_i - a_j \notin \Lambda_\mathcal{A}, \forall 1 \le i < j \le 4$; \item $\mathcal{B}$ is a potential for $\mathcal{A}$ in the sense of \eqref{pot}; \item $\mathcal{A}$ has constant rank and is balanced. \end{itemize} \end{corollary} \begin{proof} Fix $1\le i < j \le 4$. To see that $a_i - a_j \notin \Lambda_\mathcal{A} $, we notice that, by \eqref{aaaa}, $a_i - a_j$ either has exactly two zero components, or it has one zero component while the other two are $1$ and $-1$. In the first case, i.e. when $a_i - a_j$ has two zero components, $a_i - a_j \notin \Lambda_\mathcal{A}$ stems from \eqref{comp3} of Proposition \ref{computer}, while in the second, $a_i - a_j \notin \Lambda_\mathcal{A}$ is a consequence of \eqref{comp4} of Proposition \ref{computer}. \\ \\ We show now that $\mathcal{B}$ is a potential for $\mathcal{A}$. For all $\xi \in \mathbb R^2$, it holds \[ \im(\mathbb{B}(\xi)) \subset \Ker(\mathbb{A}(\xi)), \] which shows $\rnk(\mathbb{A}(\xi)) \le 2$, since $\mathbb{B}(\xi) \neq 0, \forall \xi \in \mathbb R^2\setminus\{0\}$, by \eqref{comp3} of Proposition \ref{computer}. The principal $2\times 2$ minors of $\mathbb{A}(\xi)$ read as $-q_3^2(\xi), q_2^2(\xi), -q_1^2(\xi)$, and thus, again by property \eqref{comp3} of Proposition \ref{computer}, we find that $\rnk(\mathbb{A}(\xi)) = 2$ for all $\xi \in \mathbb R^2\setminus\{0\}$. By a dimension count, it also follows that \[ \im(\mathbb{B}(\xi)) = \Ker(\mathbb{A}(\xi)), \quad \forall \xi \in \mathbb R^2\setminus\{0\}. \] This shows that $\mathcal{A}$ has constant rank and $\mathcal{B}$ is the potential of $\mathcal{A}$ in the sense of \eqref{pot}. Finally, we show that $\mathcal{A}$ is balanced. By \eqref{Pc} and \eqref{comp1} of Proposition \ref{computer}, the union $\bigcup_{\xi \in \mathbb R^2\setminus\{0\}}\im(\mathbb{B}(\xi))$ contains three linearly independent vectors. Since $\im(\mathbb{B}(\xi)) \subset \Lambda_\mathcal{A}$ for all $\xi \in \mathbb R^2\setminus\{0\}$, it follows that $\spn(\Lambda_\mathcal{A}) = \mathbb R^3$ and the proof is finished.
\end{proof} Now we can finally prove the main result of this paper, namely the existence of a nontrivial solution of \eqref{AINCfour}. \begin{theorem}\label{check} Let $a_1,a_2,a_3,a_4$ be as in \eqref{aaaa}, $q_1,q_2,q_3 \in \mathbb{P}_\mathbb{H}(11,2)$ be defined by \eqref{Pc} for the values $(\xi_{2\ell-1}^{\sigma_i}, \xi^{\sigma_i}_{2\ell})$ and $c_{\ell}^{\sigma_i}$ given in Subsection \ref{exval}. Finally, define the two operators $\mathcal{A}$ as in \eqref{Axi} and $\mathcal{B}$ as in \eqref{BP}. Then, there exists a non-constant solution $v \in L^\infty(B_1,\mathbb R^3)$ of \[ \begin{cases} v(x) \in K \doteq \{a_1,a_2,a_3,a_4\}, & \text{a.e. in $B_1$,}\\ \mathcal{A}(v) = 0, &\text{in the sense of distributions},\\ a_i - a_j \notin \Lambda_\mathcal{A}, & \forall i \neq j. \end{cases} \] Furthermore, $v$ takes all four values of $K$, and $v$ admits a potential, i.e. there exists $V \in W^{11,\infty}\cap C^{10}(\overline{B_1})$ such that a.e. on $B_1$ \[ v = \mathcal{B}(V) \] and $V$ coincides with a polynomial of degree $11$ on $\partial B_1$. \end{theorem} \begin{proof} By \eqref{comp0}-\eqref{comp1} of Proposition \ref{computer}, we find that $\{a_1,a_2,a_3,a_4\}$ forms a large $\Lambda_\mathcal{A}$-$T_4$ configuration. Moreover, Corollary \ref{c:potential} yields that $a_i - a_j \notin \Lambda_\mathcal{A},\forall 1 \le i < j \le 4$. By Theorem \ref{existenceinapp}, there exists an $\mathcal{A}$-in-approximation $\{U_n\}_n$ of $K$. Since, by Corollary \ref{c:potential}, $\mathcal{A}$ is balanced and of constant rank with potential $\mathcal{B}$, we are in a position to apply Theorem \ref{rcexact}. Now fix any polynomial $r \in \mathbb{P}_\mathbb{H}L(11,2)$ such that $\mathcal{B}(r) \in U_1$ everywhere on $\mathbb R^2$. This exists by Proposition \ref{surj}.
Using the existence of three linearly independent directions $c_1,c_2,c_3 \in \Lambda_\mathcal{A}$, that is, \eqref{comp1} of Proposition \ref{computer}, in combination with Proposition \ref{ind}, it is not difficult to build a map $W \in C^{11}( \overline{B_1})$ such that $W(x) = r(x)$ on $\partial B_1$, $\mathcal{B}(W)(x) \in U_1$ for all $x \in B_1$, and \begin{equation} \text{the image of $\mathcal{B}(W)$ is not contained in an affine subspace of $\mathbb R^3$ of dimension $\le 2$}.\label{propfi3} \end{equation} Now, by Theorem \ref{rcexact} we find a family $V_\varepsilon$ of maps that are equibounded in $W^{11,\infty}\cap C^{10}(\overline{B_1})$ such that $V_\varepsilon = r$ on $\partial B_1$, $\mathcal{B}(V_\varepsilon) \in K$ a.e. and $V_\varepsilon \to W$ as $\varepsilon \to 0^+$. This yields the weak-$*$ convergence of $\mathcal{B}(V_\varepsilon)$ to $\mathcal{B}(W)$ in $L^\infty$, and hence the weak convergence in $L^2$. If, by contradiction, for all $\varepsilon > 0$, $\mathcal{B}(V_\varepsilon)$ belonged to a proper subset of $K$, say to $\{a_1,a_2,a_3\}$, then \[ \mathcal{B}(V_\varepsilon) \in \co(\{a_1,a_2,a_3\}), \quad \forall \varepsilon >0. \] By Mazur's lemma, we would find that $\mathcal{B}(W)(x) \in \co(\{a_1,a_2,a_3\})$ for a.e. $x \in B_1$, and this is in contradiction with \eqref{propfi3}. This concludes the proof of the Theorem. \end{proof} \subsection{Large $\Lambda_\mathcal{A}$-$T_4$ configurations and in-approximations}\label{larget4} In this subsection, we collect the main results concerning large $\Lambda_\mathcal{A}$-$T_4$ configurations that we used in the previous section. We will always work with the operator $\mathcal{A}$ defined in \eqref{Axi} and use the objects and the notation introduced in the previous section. Most of the theory immediately follows from the results of \cite{FS} with minor modifications.
Thus, some proofs will be omitted, and precise references to the corresponding results of \cite{FS} will be provided. Let us start with the following: \begin{lemma}\label{IMT} Let $\{a_1,a_2,a_3,a_4\} \subset \mathbb R^3$ be defined as in \eqref{aaaa}, and let $\sigma_i, 1\le i\le 3$, be the three permutations of \eqref{permut} for which the \emph{ordered} sets $(a_{\sigma_i(1)},a_{\sigma_i(2)},a_{\sigma_i(3)},a_{\sigma_i(4)})$ are in $\Lambda_\mathcal{A}$-$T_4$ configuration. Define $A_i \doteq (a_{\sigma_i(1)},a_{\sigma_i(2)},a_{\sigma_i(3)},a_{\sigma_i(4)}) \in (\mathbb R^{3})^4$. Then, the following hold for all $1\le i \le 3$: \begin{enumerate} \item\label{intorno} there exists $\varepsilon > 0$ such that all points $X = (x_1,x_2,x_3,x_4) \in B_\varepsilon(A_i)$ are in $\Lambda_\mathcal{A}$-$T_4$ configuration; \item \label{intornolarge} there exists $\varepsilon > 0$ such that all points $X = (x_1,x_2,x_3,x_4) \in B_\varepsilon(A_1)$ form a large $\Lambda_\mathcal{A}$-$T_4$ configuration with the same permutations \eqref{permut}; \item\label{welldef} write $X = (x_1,x_2,x_3,x_4) \in B_\varepsilon(A_1)$ as in \eqref{t4perms} for vectors $p^{\sigma_i}(X)$, $k^{\sigma_i}_{\ell}(X)$, $c^{\sigma_i}_{\ell}(X)$, for $1\le i \le 3$, $1 \le \ell \le 4$. Then the maps $\Phi^{\sigma_i}: B_\varepsilon(A_1) \to (\mathbb R^3)^{ 4}$ defined as $$\Phi^{\sigma_i}(X) \doteq (c_1^{\sigma_i}(X),c_2^{\sigma_i}(X),c_3^{\sigma_i}(X),c_4^{\sigma_i}(X))$$ are well-defined and smooth. \end{enumerate} \end{lemma} Here we cannot argue as in \cite[Lemma 2.4]{FS}, since that proof relies heavily on the characterization of $T_4$ configurations for the $\curl$ operator shown in \cite{LSR}, and hence we explain the proof in detail. \begin{proof} Clearly, \eqref{intornolarge} follows from Definition \ref{T4} and \eqref{intorno}.
To show \eqref{intorno}-\eqref{welldef}, we rely on the implicit function theorem and the inverse function theorem. Throughout the proof, we fix $1\le i \le 3$. First, we use the implicit function theorem to show that the map $\Psi_i$ defined as \[ \Psi_i(\xi_1,\xi_2,\xi_3,\xi_4,\xi_5,\xi_6,\xi_7,\xi_8) = v(\xi_1,\xi_2)+ v(\xi_3,\xi_4) + v(\xi_5,\xi_6) + v(\xi_7,\xi_8) \] has a non-degenerate set of zeroes in a neighbourhood of $\Xi^{\sigma_i} \doteq (\xi_1^{\sigma_i},\xi_2^{\sigma_i},\xi_3^{\sigma_i},\xi_4^{\sigma_i},\xi_5^{\sigma_i},\xi_6^{\sigma_i},\xi_7^{\sigma_i},\xi_8^{\sigma_i})$, for the explicit values $\xi_\ell^{\sigma_i}$ of Subsection \ref{exval}. To do so, it is sufficient to compute the determinant of the matrix \[ (\partial_2v(\xi_5^{\sigma_i},\xi_6^{\sigma_i})| \partial_1v(\xi_7^{\sigma_i},\xi_8^{\sigma_i})|\partial_2v(\xi_7^{\sigma_i},\xi_8^{\sigma_i})) \] and see that it is non-zero, for each $1 \le i \le 3$. This can be easily checked with the use of a computer. We infer that in a small neighbourhood $D$ of $\Xi^{\sigma_i}$, $$\Psi_i(\xi_1,\xi_2,\xi_3,\xi_4,\xi_5,\xi_6,\xi_7,\xi_8) = 0$$ if and only if \[ \xi_\ell = \xi_\ell(\xi_1,\xi_2,\xi_3,\xi_4,\xi_5), \quad \forall 6\le \ell \le 8, \] for some smooth maps $\xi_\ell$ defined in a small neighbourhood of $(\xi_1^{\sigma_i},\xi_2^{\sigma_i},\xi_3^{\sigma_i},\xi_4^{\sigma_i},\xi_5^{\sigma_i})$. Define $D' \doteq \pi(D)$, where $\pi$ is the projection on the first $5$ coordinates. Observe that $D'$ is open.
Now finally define \[ F_i: \mathbb R^3\times \mathbb R^4 \times D' \to \mathbb R^{12} \] as \[ F_i(p,k_1,k_2,k_3,k_4,\bar \xi) \doteq \left(\begin{array}{c} p + k_1v(\xi_1,\xi_2)\\ p + v(\xi_1,\xi_2) + k_2v(\xi_3,\xi_4) \\ p + v(\xi_1,\xi_2) + v(\xi_3,\xi_4) + k_3v(\xi_5,\xi_6(\bar \xi))\\ p + v(\xi_1,\xi_2) + v(\xi_3,\xi_4) + v(\xi_5,\xi_6(\bar \xi)) + k_4v(\xi_7(\bar \xi),\xi_8(\bar \xi))\end{array}\right), \] where we used the shorthand notation $\bar \xi \doteq (\xi_1,\xi_2,\xi_3,\xi_4,\xi_5)$. Now we apply the inverse function theorem at the point defined by the exact values of Subsection \ref{exval}: \[ (p^{\sigma_i}, k_1^{\sigma_i},k_2^{\sigma_i},k_3^{\sigma_i},k_4^{\sigma_i}, \xi_1^{\sigma_i},\xi_2^{\sigma_i},\xi_3^{\sigma_i},\xi_4^{\sigma_i},\xi_5^{\sigma_i}). \] Notice that the derivatives of $\xi_\ell(\bar \xi)$ at the point $(\xi_1^{\sigma_i},\xi_2^{\sigma_i},\xi_3^{\sigma_i},\xi_4^{\sigma_i},\xi_5^{\sigma_i})$ are explicitly provided by the implicit function theorem applied in the first part of this proof. Again with the help of a computer, one can check that \[ \det(DF_i(p^{\sigma_i}, k_1^{\sigma_i},k_2^{\sigma_i},k_3^{\sigma_i},k_4^{\sigma_i}, \xi_1^{\sigma_i},\xi_2^{\sigma_i},\xi_3^{\sigma_i},\xi_4^{\sigma_i},\xi_5^{\sigma_i})) \neq 0, \] and hence that $F_i$ is a diffeomorphism around $(p^{\sigma_i}, k_1^{\sigma_i},k_2^{\sigma_i},k_3^{\sigma_i},k_4^{\sigma_i}, \xi_1^{\sigma_i},\xi_2^{\sigma_i},\xi_3^{\sigma_i},\xi_4^{\sigma_i},\xi_5^{\sigma_i})$. This shows \eqref{intorno}-\eqref{welldef} and concludes the proof. \end{proof} The proof of the following proposition is analogous to the one of \cite[Proposition 2.7]{FS}, and uses Lemma \ref{IMT}. \begin{prop} Let $\{a_1,a_2,a_3,a_4\} \subset \mathbb R^3$ be defined as in \eqref{aaaa}, and denote $A_1 \doteq (a_1,a_2,a_3,a_4)$.
Then, there exists $\delta > 0$ and, for all $1\le \ell \le 4$, smooth maps \[ \pi_\ell : (-\delta,\delta)^3\times B_\delta(A_1) \to \mathbb R^3 \] with the following properties: \begin{itemize} \item the map $t \mapsto \pi_\ell(t,X)$ is an embedding for each $X = (x_1,x_2,x_3,x_4) \in B_\delta(A_1)$; \item $\pi_\ell(t,X) \in \{x_1,x_2,x_3, x_4\}^{\Lambda_\mathcal{A}}$ for all $t \in [0,\delta)^3$, $X = (x_1,x_2,x_3,x_4) \in B_\delta(A_1)$; \item $\pi_\ell(0,X) = x_\ell$, for all $X = (x_1,x_2,x_3,x_4) \in B_\delta(A_1)$. \end{itemize} \end{prop} With the help of the previous proposition, one can show the following result, see \cite[Theorem 2.8]{FS}. \begin{theorem}\label{existenceinapp} Let $\{a_1,a_2,a_3,a_4\} \subset \mathbb R^3$ be defined as in \eqref{aaaa}. Then, there exists an $\mathcal{A}$-in-approximation of $\{a_1,a_2,a_3,a_4\}$ for the operator $\mathcal{A}$ defined in \eqref{Axi}. \end{theorem} \subsection{Exact values}\label{exval} In this subsection, we give all the exact values needed to see that the set $\{ a_1, a_2, a_3, a_4 \}$ of \eqref{aaaa} \emph{forms a large $\mathbb R^3$-$T_4$ configuration} in the sense of Definition \ref{T4} and that \eqref{Pc} is uniquely solvable. \\ \\ The permutations $\sigma_i, 1\le i \le 3$, are: \begin{equation}\label{permut} \begin{split} &(\sigma_1(1),\sigma_1(2),\sigma_1(3),\sigma_1(4)) = (1,2,3,4),\\ &(\sigma_2(1),\sigma_2(2),\sigma_2(3),\sigma_2(4)) = (4,1,2,3), \\ &(\sigma_3(1),\sigma_3(2),\sigma_3(3),\sigma_3(4)) = (3,4,1,2). \end{split} \end{equation} \noindent The values $p^{\sigma_i}$ for $1 \leq i \leq 3$ are: \begin{align*} p^{\sigma_1}= \frac{1}{15} \left(\begin{array}{c} 2 \\ 4 \\ 8\end{array}\right) \ \ \ p^{\sigma_2}= \frac{1}{65} \left(\begin{array}{c} 18 \\ 27 \\ 8\end{array}\right) \ \ \ p^{\sigma_3}= \frac{1}{175} \left(\begin{array}{c} 64 \\ 27 \\ 36\end{array}\right).
\end{align*} \noindent The values $c_\ell^{\sigma_i}$ for $1 \leq \ell \leq 4$ and $1 \leq i \leq 3$ are: \begin{align*} c^{\sigma_1}_1= \frac{1}{15} \left(\begin{array}{c} -1 \\ -2 \\ -4 \end{array}\right) \ \ \ c^{\sigma_1}_2= \frac{1}{15} \left(\begin{array}{c} 7 \\ -1 \\ -2\end{array}\right) \ \ \ c^{\sigma_1}_3= \frac{1}{15} \left(\begin{array}{c} -4 \\ 7 \\ -1\end{array}\right) \ \ \ c^{\sigma_1}_4= \frac{1}{15} \left(\begin{array}{c} -2 \\ -4 \\ 7\end{array}\right), \end{align*} \begin{align*} c^{\sigma_2}_1= \frac{1}{65} \left(\begin{array}{c} -6 \\ -9 \\ 19 \end{array}\right) \ \ \ c^{\sigma_2}_2= \frac{1}{65} \left(\begin{array}{c} -4 \\ -6 \\ -9\end{array}\right) \ \ \ c^{\sigma_2}_3= \frac{1}{65} \left(\begin{array}{c} 19 \\ -4 \\ -6\end{array}\right) \ \ \ c^{\sigma_2}_4= \frac{1}{65} \left(\begin{array}{c} -9 \\ 19 \\ -4\end{array}\right), \end{align*} \begin{align*} c^{\sigma_3}_1= \frac{1}{175} \left(\begin{array}{c} -16 \\ 37 \\ -9 \end{array}\right) \ \ \ c^{\sigma_3}_2= \frac{1}{175} \left(\begin{array}{c} -12 \\ -16 \\ 37\end{array}\right) \ \ \ c^{\sigma_3}_3= \frac{1}{175} \left(\begin{array}{c} -9 \\ -12 \\ -16\end{array}\right) \ \ \ c^{\sigma_3}_4= \frac{1}{175} \left(\begin{array}{c} 37 \\ -9 \\ -12\end{array}\right). \end{align*} \\ \\ The values $k_\ell^{\sigma_i}$ are $k_\ell^{\sigma_i}= i+1$ for any $1 \leq \ell \leq 4$ and $1 \leq i \leq 3$. 
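\noindent As a sanity check (not part of the proof): in any $T_4$ configuration the defining loop closes, so the vectors listed above must satisfy $\sum_{\ell=1}^4 c_\ell^{\sigma_i}=0$ for each $1\leq i\leq 3$. This can be verified in exact rational arithmetic:

```python
from fractions import Fraction as F

# numerators of the vectors c_l^{sigma_i} listed above, with common denominators
c = {
    1: [(-1, -2, -4), (7, -1, -2), (-4, 7, -1), (-2, -4, 7)],
    2: [(-6, -9, 19), (-4, -6, -9), (19, -4, -6), (-9, 19, -4)],
    3: [(-16, 37, -9), (-12, -16, 37), (-9, -12, -16), (37, -9, -12)],
}
den = {1: 15, 2: 65, 3: 175}

# the loop of a T_4 configuration closes: c_1 + c_2 + c_3 + c_4 = 0
for i, vecs in c.items():
    for k in range(3):
        assert sum(F(v[k], den[i]) for v in vecs) == F(0)
```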
\\ \\ The values $(\xi^{\sigma_i}_{2\ell-1}, \xi^{\sigma_i}_{2\ell}) {\infty}n \mathbb R^2$, for $1 \leq \ell \leq 4$ and $1 \leq i \leq 3$ are given by: \begin{align*} \left(\begin{array}{c} \xi_{1}^{\sigma_1} \\ \xi_2^{\sigma_1} \end{array}\right) = \left(\begin{array}{c} -14 \\ 5 \end{array}\right) \ \ \ \left(\begin{array}{c} \xi_{3}^{\sigma_1} \\ \xi_4^{\sigma_1} \end{array}\right) = \left(\begin{array}{c} 19 \\ -8 \end{array}\right) \ \ \ \left(\begin{array}{c} \xi_{5}^{\sigma_1} \\ \xi_6^{\sigma_1} \end{array}\right) = \left(\begin{array}{c} 11 \\ -14 \end{array}\right) \ \ \ \left(\begin{array}{c} \xi_{7}^{\sigma_1} \\ \xi_8^{\sigma_1} \end{array}\right) = \left(\begin{array}{c} -4 \\ -17 \end{array}\right) , \end{align*} \begin{align*} \left(\begin{array}{c} \xi_{1}^{\sigma_2} \\ \xi_2^{\sigma_2} \end{array}\right) = \left(\begin{array}{c} -7 \\ -3 \end{array}\right) \ \ \ \left(\begin{array}{c} \xi_{3}^{\sigma_2} \\ \xi_4^{\sigma_2} \end{array}\right) = \left(\begin{array}{c} 6 \\ 16 \end{array}\right) \ \ \ \left(\begin{array}{c} \xi_{5}^{\sigma_2} \\ \xi_6^{\sigma_2} \end{array}\right) = \left(\begin{array}{c} 2 \\ -17 \end{array}\right) \ \ \ \left(\begin{array}{c} \xi_{7}^{\sigma_2} \\ \xi_8^{\sigma_2} \end{array}\right) = \left(\begin{array}{c} -18 \\ 2 \end{array}\right) , \end{align*} \begin{align*} \left(\begin{array}{c} \xi_{1}^{\sigma_3} \\ \xi_2^{\sigma_3} \end{array}\right) = \left(\begin{array}{c} -7 \\ -14 \end{array}\right) \ \ \ \left(\begin{array}{c} \xi_{3}^{\sigma_3} \\ \xi_4^{\sigma_3} \end{array}\right) = \left(\begin{array}{c} -9 \\ 19 \end{array}\right) \ \ \ \left(\begin{array}{c} \xi_{5}^{\sigma_3} \\ \xi_6^{\sigma_3} \end{array}\right) = \left(\begin{array}{c} 6 \\ 18 \end{array}\right) \ \ \ \left(\begin{array}{c} \xi_{7}^{\sigma_3} \\ \xi_8^{\sigma_3} \end{array}\right) = \left(\begin{array}{c} -20 \\ -9 \end{array}\right) . 
\end{align*} \appendix \section{The three-state problem for operators of order 1} Here we show how to infer the rigidity of the three-state problem \eqref{problem} from the rigidity of the three-state problem for the divergence operator proved in \cite{PP}. We are indebted to Guido De Philippis for making us realize that in many cases the study of operators of order one reduces to the study of the divergence operator, thus greatly simplifying our original proof of the following result. \begin{prop}\label{rig} Let $\mathcal{A} \in \op(1,m,n,N)$ and let $u$ be a solution to \eqref{problem} on the open connected set $\Omega$ for $s = 3$. Then, $u$ is constant. \end{prop} \begin{proof} Consider $v \doteq u - a_1$. Then, $v$ solves \eqref{problem} with $\{a_1,a_2,a_3\}$ replaced by $\{0,b_1,b_2\}$, where $b_1 = a_2 - a_1$ and $b_2 = a_3 - a_1$, so that $b_1, b_2, b_1-b_2 = a_2 - a_3 \notin \Lambda_{\mathcal{A}}$. Now consider the operator $\mathcal{A}' \in \op(1,m,2,N)$ defined as \[ \mathcal{A}'(z_1,z_2) \doteq \mathcal{A}(z_1 b_1 + z_2 b_2) \] for all $z_i \in L^\infty(\Omega)$, $i =1,2$. Defining $w \doteq (\chi_{E_1},\chi_{E_2})$, $E_i \doteq \{x \in \Omega: v(x) = b_i\}$ and $e_1 = (1,0), e_2 =(0,1)$, it is easy to see that $w$ solves \begin{equation}\label{probspec} \begin{cases} w(x) \in \{0,e_1,e_2\}, &\text{ a.e. on }\Omega\\ \mathcal{A}'(w) = 0, &\text{ in the sense of distributions},\\ e_1,e_2, e_1-e_2 \notin \Lambda_{\mathcal{A}'}, \end{cases} \end{equation} and that $u$ is constant if and only if $w$ is constant. Since $\mathcal{A}' \in \op(1,m,2,N)$, by Definition \ref{d_operator} it admits a representation of the form \[ \mathcal{A}'(z_1,z_2) = MDz_1 + NDz_2 = \operatorname{div}(Mz_1 + Nz_2), \] for $M,N \in \mathbb R^{N\times m}$ and all bounded $(z_1,z_2)$.
The latter and the fact that $e_1,e_2, e_1-e_2 \notin \Lambda_{\mathcal{A}'}$ easily imply that \eqref{probspec} is equivalent to the fact that $Z \doteq Mw_1 + Nw_2$ solves: \[ \begin{cases} Z(x) \in \{0,M,N\}, &\text{ a.e. on }\Omega\\ \operatorname{div}(Z) = 0, &\text{ in the sense of distributions},\\ M,N, M-N \notin \Lambda_{\operatorname{div}}. \end{cases} \] By \cite{PP}, we know that $Z$ is constant, and hence also $w$ and $u$ must be constant. \end{proof} \end{document}
\begin{document} \begin{abstract} A special Danielewski surface is an affine surface which is the total space of a principal $(\C,+)$-bundle over an affine line with a multiple origin. Using a fiber product trick introduced by Danielewski, it is known that cylinders over two such surfaces are always isomorphic provided that both bases have the same number of origins. The goal of this note is to give an explicit method to find isomorphisms between cylinders over special Danielewski surfaces. The method is based on the construction of appropriate locally nilpotent derivations. \end{abstract} \title{Isomorphisms between cylinders over Danielewski surfaces} \section{Introduction} In 1989, Danielewski exhibited a family of pairwise non-isomorphic complex affine rational surfaces $Y_n$, $n\geq1$, such that the cylinders $Y_n\times \A^1$ are all isomorphic. For every positive integer $n$, the surface $Y_n$ is the hypersurface in $\A^3$ defined by the equation $x^ny=z^2-1$. Since this result, several authors have generalized Danielewski's construction and introduced the notion of Danielewski surfaces. These are certain affine surfaces which can be realized as the total space of an $\A^1$-fibration over the affine line. Special Danielewski surfaces have the stronger property of being the total space of a principal $(\C,+)$-bundle over an affine line with a multiple origin. They were introduced in \cite{DuPo} and are those Danielewski surfaces for which Danielewski's original argument can be used to find isomorphic cylinders. However, the proof of these isomorphisms is not constructive. The main result of this article is a method to produce explicit isomorphisms between these cylinders. More precisely, Theorem~\ref{main-thm} produces, for every special Danielewski surface, an isomorphism between the cylinder over this surface and the cylinder over a classical Danielewski surface defined by an equation of the form $xy=P(z)$ in $\A^3$.
This involves the construction of an appropriate $(\C,+)$-action on the cylinder of one surface whose quotient gives the other Danielewski surface. As a corollary, one gets explicit embeddings of all special Danielewski surfaces as complete intersections in $\A^4$. The paper is organized as follows. In Section 2, we recall the construction of Danielewski surfaces and some of their important properties, due to Fieseler and Dubouloz. Then, in Section 3, we introduce three particular families of special Danielewski surfaces which are later used as examples for the main result in Section 4. Two of these families are constructed as hypersurfaces, whereas for the last family, we do not know whether its members are realizable as hypersurfaces or not. In Section 4, we establish Theorem~\ref{main-thm}, which shows how to construct an isomorphism between the cylinders of any two special Danielewski surfaces. Finally, in Section 5, we apply this result to the families of surfaces described in Section 3. In particular, we obtain in Proposition~\ref{prop-classical-DS} a very simple explicit isomorphism between the cylinders of any two classical Danielewski surfaces whose respective equations are of the form $x^ny=P(z)$ and $x^my=Q(z)$. {\bf Acknowledgments.} Part of this work was done during the first joint meeting Brazil-France in Mathematics. The second-named author gratefully acknowledges financial support from the R\'eseau Franco-Br\'esilien de Math\'ematiques (RFBM). \section{Danielewski surfaces after Danielewski, Fieseler and Dubouloz} In this section, we introduce some notations and summarize basic facts about Danielewski surfaces due to Fieseler \cite{Fi} and Dubouloz \cite{Du} (see also \cite{DuPo}).
\subsection{Construction of Danielewski surfaces} \begin{definition} A Danielewski surface is a smooth complex affine surface $S$ equipped with an $\A^1$-fibration $\pi\colon S\to\A^1=\mathrm{Spec}(\C[x])$ that restricts to a trivial $\A^1$-bundle over $\A^1_*=\mathrm{Spec}(\C[x,x^{-1}])$ such that the exceptional fiber $\pi^{-1}(0)$ is reduced and consists of a disjoint union \[\pi^{-1}(0)=\coprod_{i=1}^d \ell_i\] of $d\geq2$ curves, $\ell_1,\ldots,\ell_d$, all isomorphic to the affine line. \end{definition} For every $1\leq i\leq d$, we denote by $\Ucal_i\subset S$ the open subvariety of $S$ defined by \[\Ucal_i=S\smallsetminus\coprod_{j\neq i}\ell_j\subset S.\] Since every $\Ucal_i$ is isomorphic to the affine plane $\A^2$, every Danielewski surface can be constructed by gluing together $d\geq2$ copies of $\A^2$ along $\A^1_*\times\A^1$. More precisely, every Danielewski surface is isomorphic to a variety $S(d,\boldsymbol{\sigma})$ defined as follows. \begin{definition} Let $d\geq2$ be an integer and let \[\boldsymbol{\sigma}=\big((n_1,\sigma_1(x)),\ldots,(n_d,\sigma_d(x))\big)\in(\Z_{>0}\times\C[x])^d\] be a sequence such that the polynomials $\sigma_i$ are distinct and satisfy that $\deg(\sigma_i(x))<n_i$ for all $1\leq i\leq d$. We denote by $S(d,\boldsymbol{\sigma})$ the surface obtained by gluing together $d$ copies $\Ucal_i=\mathrm{Spec}(\C[x,u_i])$ of $\A^2$ along the open subsets \[\Ucal_i^*=\mathrm{Spec}(\C[x,x^{-1},u_i])\simeq\C^*\times\C\] via the transition functions \begin{align*} \Ucal_i^* &\to\Ucal_j^* \\ (x,u_i)&\mapsto (x,x^{n_i-n_j}u_i+\frac{\sigma_i(x)-\sigma_j(x)}{x^{n_j}}). \end{align*} \end{definition} By \cite[Proposition 1.4]{Fi}, every such surface $S=S(d,\boldsymbol{\sigma})$ is affine. 
Moreover, the inclusion $\C[x]\hookrightarrow\C[S]$ defines an $\A^1$-fibration $\pi\colon S\to\A^1$ such that $\pi^{-1}(\A^1_*)\simeq\A^1_*\times\A^1$ and such that the unique special fiber $\pi^{-1}(0)$ consists of a disjoint union of $d$ reduced copies of $\A^1$. This shows that $S(d,\boldsymbol{\sigma})$ is indeed a Danielewski surface. By construction, every Danielewski surface $S=S(d,\boldsymbol{\sigma})$ is canonically equipped with a regular function $u\in\C[S]$ whose restrictions to each of the open subsets $\Ucal_i$ are given by \[u|_{\Ucal_i} = x^{n_i} u_i +\sigma_i(x) \in \C[x, u_i].\] Note that $u$ restricts to a coordinate function on every general fiber of $\pi=\textrm{pr}_x\colon S\to\A^1$, but not on the exceptional fiber $\pi^{-1}(0)$. \subsection{Additive group actions and isomorphic cylinders.} Every Danielewski surface $S=S(d,\boldsymbol{\sigma})$ is canonically equipped with a regular $(\C,+)$-action $\delta:\C\times S\to S$ defined on each chart $\Ucal_i$ by \[\delta(\lambda,(x,u_i))=(x,u_i+\lambda x^{n-n_i}),\] where $n=\max\{n_i\mid 1\leq i\leq d\}$. Algebraically, the action $\delta$ corresponds to the locally nilpotent derivation $D\in\textrm{LND}(\C[S])$ that is defined by $D(x)=0$ and $D(u_i)=x^{n-n_i}$. Note that $D(u)=x^n$. An important property of Danielewski surfaces is the fact that the map $\pi=\textrm{pr}_x\colon S\to\A^1$ factors through a locally trivial fiber bundle $S\to Z(d)$ over the affine line $Z(d)$ with a $d$-fold origin, where the preimages of the $d$ origins are the $d$ affine lines $\ell_i$. In the case when the $(\C,+)$-action $\delta$ is free, we have moreover that $S$ is the total space of a $(\C,+)$-principal bundle over $Z(d)$. Recall (see \cite[Section 2.10]{DuPo}) that $\delta$ is free if and only if all $n_i$ are equal to each other, i.e.\ if and only if $n_i=n$ for all $1\leq i\leq d$. The latter condition is equivalent to the fact that the canonical class of $S$ is trivial.
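For instance (a sanity check of ours, not taken from the references): on the classical surface $\{xy=z^2-1\}$, which is $S(2,\boldsymbol{\sigma})$ with $n_1=n_2=1$, $\sigma_1=1$, $\sigma_2=-1$ and $u=z$, the derivation above reads $D(x)=0$, $D(z)=x$, and the defining relation forces $D(y)=2z$. Its flow $\exp(tD)$, i.e.\ the action $\delta$, preserves the defining equation; this can be checked in exact arithmetic:

```python
from fractions import Fraction as F

def rel(p):
    # defining polynomial of the surface x*y = z^2 - 1
    x, y, z = p
    return x * y - (z * z - 1)

def flow(t, p):
    # exp(tD) for D(x)=0, D(z)=x, D(y)=2z integrates to
    # (x, y, z) -> (x, y + 2tz + t^2 x, z + t x)
    x, y, z = p
    return (x, y + 2 * t * z + t * t * x, z + t * x)

# sample points on the surface (x != 0, y determined by x and z)
pts = [(F(a), F(z * z - 1, a), F(z)) for a in (1, 2, -3) for z in (0, 1, 5)]
for p in pts:
    assert rel(p) == 0
    for t in (F(1), F(-2), F(7, 3)):
        assert rel(flow(t, p)) == 0  # the action preserves the surface
```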
These Danielewski surfaces were called \emph{special} in \cite{DuPo}. Danielewski's fiber product trick then goes as follows. Take two Danielewski surfaces, say $S$ and $S'$, that are $(\C,+)$-principal bundles over the same $Z(d)$ and consider their fiber product $S\times_{Z(d)}S'$. Since every $(\C,+)$-principal bundle over an affine base is trivial, we get at once that \[S\times\A^1\simeq S\times_{Z(d)}S'\simeq S'\times\A^1,\] hence that the cylinders over $S$ and $S'$ are isomorphic to each other. \section{Examples of special Danielewski surfaces} \subsection{Classical Danielewski surfaces.} These surfaces are the ones originally considered by Danielewski. They are defined as the hypersurfaces $W_{n,P}$ in $\A^3$ of equation \[W_{n,P}\colon x^ny=P(z),\] where $n\geq1$ is a positive integer and where $P(z)=\prod_{i=1}^d(z-r_i)\in\C[z]$ is a polynomial with $d\geq2$ simple roots. Together with the restriction of the first projection $\pi=\textrm{pr}_x\colon W_{n,P}\to \A^1$, every such surface is a Danielewski surface. The special fiber $\pi^{-1}(0)$ is the union of the lines $\ell_1,\ldots,\ell_d$ given by \[\A^1\simeq\ell_i=\{(0,y,r_i)\mid y\in\C\}\subset W_{n,P}.\] Every open set $\Ucal_i=W_{n,P}\smallsetminus\coprod_{j\neq i}\ell_j$ is isomorphic to $\A^2$ and we have the isomorphisms \[\varphi_i\colon \Ucal_i\xrightarrow{\sim}\A^2, (x,y,z)\mapsto(x,u_i), \text{ where } u_i=\frac{z-r_i}{x^n}=\frac{y}{\prod_{j\neq i}(z-r_j)}\in\C[\Ucal_i].\] \subsection{Danielewski hypersurfaces.} The hypersurfaces in $\A^3$ that are defined by an equation of the form \[H_{n,Q}\colon x^ny=Q(x,z),\] where $n\geq1$ and where $Q(x,z)\in\C[x,z]$ is such that $\deg(Q(0,z))\geq2$ are called \emph{Danielewski hypersurfaces}. If moreover the polynomial $Q(0,z)\in\C[z]$ has $d\geq2$ simple roots, say $r_1,\ldots,r_d$, then $\pi=\text{pr}_x\colon H_{n,Q}\to\A^1$ defines a Danielewski surface.
Its special fiber is the union of the lines $\ell_1,\ldots,\ell_d$ given by \[\A^1\simeq\ell_i=\{(0,y,r_i)\mid y\in\C\}\subset H_{n,Q}.\] Furthermore, there exist unique polynomials $\sigma_1(x),\ldots,\sigma_d(x)\in\C[x]$ of degree strictly smaller than $n$ such that $\sigma_i(0)=r_i$ and such that the congruences \[Q(x,\sigma_i(x))\equiv 0 \mod(x^n)\] hold for all $1\leq i\leq d$. Then, every open set $\Ucal_i=H_{n,Q}\smallsetminus\coprod_{j\neq i}\ell_j$ is isomorphic to $\A^2$ and we have the isomorphisms \[\varphi_i\colon \Ucal_i\xrightarrow{\sim}\A^2, (x,y,z)\mapsto(x,u_i), \text{ where } u_i=\frac{z-\sigma_i(x)}{x^n}.\] (See \cite{DuPo} for the details.) \subsection{Iterated Danielewski hypersurfaces}\label{Section:construction-iterated-Danielewski} Introduced by Alhajjar \cite{Al}, iterated Danielewski hypersurfaces are the hypersurfaces in $\A^3$ that are defined by an equation of the form \[H_{n,Q,m,R}\colon x^mz=R(x,x^ny-Q(x,z)),\] where $n,m\geq1$ and where $Q(x,t),R(x,t)\in\C[x,t]$. If the polynomial $R(0,-Q(0,t))$ in $\C[t]$ has only $d\geq2$ simple roots, then $H_{n,Q,m,R}$ is a Danielewski surface. We now discuss a specific example in detail. Consider the surface $H\subset\A^3$ defined by \[H\colon \{xz=(xy+z^2)^2-1\}.\] The special fiber of $\pi=\textrm{pr}_x\colon H\to\A^1$ consists of the four lines $\ell_1,\ldots,\ell_4$ given by \[\ell_i=\{(0,y,\varepsilon^i)\mid y\in\C\},\] where $\varepsilon=\boldsymbol{i}$ denotes a primitive fourth root of unity. Letting $u=xy+z^2$ and $\sigma_i(x)=\varepsilon^{2i}+\frac{\varepsilon^{-i}}{2}x$ for all $1\leq i\leq 4$, we claim that every open set $\Ucal_i=H\smallsetminus\coprod_{j\neq i}\ell_j$ is isomorphic to $\A^2$ via the map \[\varphi_i\colon \Ucal_i\xrightarrow{\sim}\A^2, (x,y,z)\mapsto(x,u_i), \text{ where } u_i=\frac{u-\sigma_i(x)}{x^2}.\]
\begin{proof} First, we remark that the rational functions \[\alpha_i=\frac{u-\varepsilon^{2i}}{x}=\frac{z}{u+\varepsilon^{2i}}\quad \text{ and }\quad \beta_i=\frac{z-\varepsilon^i}{x}=\frac{z-xy^2-2yz^2}{\prod_{j\in\{1,\ldots,4\}\smallsetminus\{i\}}(z-\varepsilon^j)}\] are regular on $\Ucal_i$. It follows that $u_i$ is also an element of $\C[\Ucal_i]$, since one easily checks that \[\beta_i-\left(\alpha_i\right)^2=2\varepsilon^{2i}u_i.\] Finally, the fact that $\varphi_i$ is an isomorphism follows from the following identities in $\C[\Ucal_i]$: \begin{align*} \alpha_i &= xu_i+\frac{\varepsilon^{-i}}{2}\\ \beta_i &=2\varepsilon^{2i}u_i+\left(\alpha_i\right)^2\\ z&=\varepsilon^i+x\beta_i\\ y&= \frac{u-z^2}{x}=\frac{u-(\varepsilon^i+x\beta_i)^2}{x}=\alpha_i-2\varepsilon^i\beta_i-x(\beta_i)^2. \end{align*} \end{proof} \subsection{Double Danielewski surfaces}\label{Section:construction-double-Danielewski} In \cite{GuSe}, Gupta and Sen studied surfaces defined by two equations in $\A^4$ of the form \[S\colon \{x^ny=Q(x,z) \text{ and } x^mt=R(x,z,y)\},\] where $n,m\geq1$ and where $Q(x,z)\in\C[x,z]$ and $R(x,z,y)\in\C[x,z,y]$. They call them \emph{double Danielewski surfaces}. Indeed, if $Q(0,z)\in\C[z]$ has $d\geq2$ simple roots, say $r_1,\ldots,r_{d}$, and if every polynomial $R(0,r_i,y)\in\C[y]$ also has only simple roots, then $S$ is a Danielewski surface together with the first projection $\textrm{pr}_x\colon S\to\A^1$. Let us now study a specific example in detail, namely the surface $D\subset\A^4$ defined by \[D\colon \{xy=z^2-1 \text{ and } xt=y^2-1\}.\] It is a Danielewski surface, its special fiber $\textrm{pr}_x^{-1}(0)$ consisting of the four lines given by $\{(0,\pm1,\pm1,t)\mid t\in\C\}\subset D$. Let us introduce the following notation.
For every pair $(i,j)\in\{-1,1\}\times\{-1,1\}$, we let \[\ell_{i j}=\{(0,i,j,t)\mid t\in\C\}\] and \[\sigma_{i j}(x)=j+\frac{ij}{2}x.\] Then, every open set $\Ucal_{i j}=D\smallsetminus\coprod_{(i',j')\neq (i,j)}\ell_{i' j'}$ is isomorphic to $\A^2$ and we claim that the map \[\varphi_{i j}\colon \Ucal_{i j}\xrightarrow{\sim}\A^2, (x,y,z,t)\mapsto(x,u_{i j}) \text{ where } u_{i j}=\frac{z-\sigma_{i j}(x)}{x^2}\] is an isomorphism. \begin{proof} First, remark that $\alpha_j=\frac{z-j}{x}=\frac{y}{z+j}$ and $\beta_i=\frac{y-i}{x}=\frac{t}{y+i}$ are regular functions on $\Ucal_{i j}$. Hence, it is straightforward to check that $u_{i j}$ is a regular function on $\Ucal_{i j}$, since \[\beta_i-\left(\alpha_j\right)^2=2ju_{i j}.\] The fact that $\varphi_{i j}$ is an isomorphism follows from the following identities in $\C[\Ucal_{i j}]$: \begin{align*} \alpha_j &= xu_{i j}+\frac{ij}{2}\\ \beta_i &=2ju_{i j}+\left(\alpha_j\right)^2\\ z&=j+x\alpha_j\\ y&=i+x\beta_i\\ t&= (y+i)\beta_i. \end{align*} \end{proof} \section{Isomorphisms between cylinders}\label{Section:plan of the construction} In this section, we fix an integer $d\geq 2$ and denote by $Z(d)$ the affine line with $d$ origins. Let $P(z)=\prod_{i=1}^d(z-r_i)\in\C[z]$ be a polynomial with simple roots. We will explain how to construct, given a special Danielewski surface $S$ which is a principal bundle over $Z(d)$, an isomorphism between its cylinder $S\times\A^1$ and the cylinder $W\times\A^1$ over the classical Danielewski surface \[W=W_{1,P}\colon \{xy=P(z)=\prod_{i=1}^d(z-r_i)\} \text { in } \A^3.\] Recall that a special Danielewski surface $S$ is constructed from a data set consisting of a positive integer $n\geq1$ and of distinct polynomials $\sigma_1(x),\ldots,\sigma_d(x)\in\C[x]$ of degree strictly smaller than $n$.
More precisely, $S$ is obtained by gluing $d$ copies, $\Ucal_1,\ldots,\Ucal_d$, of $\A^2=\mathrm{Spec}(\C[x,u_i])$ along $\A^1_*\times\A^1$ by means of the transition functions \[(x,u_i)\mapsto (x,u_i+\frac{\sigma_i(x)-\sigma_j(x)}{x^{n}}).\] We also recall that the inclusion $\C[x]\hookrightarrow\C[S]$ defines an $\A^1$-fibration $\pi\colon S\to\A^1$ with a unique special fiber $\pi^{-1}(0)=\coprod_{i=1}^d \ell_i$ consisting of $d$ disjoint reduced copies of $\A^1$, and that, considering the regular function $u\in\C[S]$ whose restrictions to the open sets $\Ucal_i$ are given by \[u|_{\Ucal_i} = x^{n} u_i +\sigma_i(x) \in \C[x, u_i],\] we can define the canonical locally nilpotent derivation $D\in\mathrm{LND}(\C[S])$ by setting $D(x)=0$ and $D(u)=x^n$. Following Danielewski's original argument, we consider the fiber product $S\times_{Z(d)}W$, which we denote by $V$. We will use the following notations. We identify the ring of regular functions on $W$ with its canonical image in the ring of regular functions on $V$ and write \[\C[W]=\C[x,y,z]\subset\C[V], \quad\text{ where } xy=P(z).\] Similarly, we identify $\C[S]$ with a subring of $\C[V]$. Then, $V$ can be naturally seen as being obtained by gluing $d$ copies $\Vcal_i=\mathrm{Spec}(\C[x,u_i,z_i])$ of $\A^3$, where $z_i=(z-r_i)/x$. The open subvarieties $\Vcal_i$ are glued together along $\A^1_*\times\A^2$ via the transition functions \[(x,u_i,z_i)\mapsto(x,u_i+\frac{\sigma_i(x)-\sigma_j(x)}{x^{n}},z_i+\frac{r_i-r_j}{x}).\] In particular, we have that the regular functions $u\in\C[S]\subset\C[V]$ and $z\in\C[W]\subset\C[V]$ satisfy \[u|_{\Vcal_i} = x^{n} u_i +\sigma_i(x) \in \C[\Vcal_i]=\C[x,u_i,z_i]\] and \[z|_{\Vcal_i} = x z_i +r_i \in \C[\Vcal_i]=\C[x,u_i,z_i]\] for all $1\leq i\leq d$. {\bf Plan of the construction.} Our construction of an isomorphism between $S\times\A^1$ and $W\times\A^1$ consists of three steps.
We first find a regular function $\alpha\in \C[V]$ on the fiber product $V=S\times_{Z(d)}W$ such that \[\C[V]=\C[S][\alpha]\simeq\C[S\times\A^1].\] We then use this equality to extend the canonical derivation on $\C[S]$ to a locally nilpotent derivation $\tilde{D}$ on $\C[V]$ in such a way that \[\mathrm{Ker}(\tilde{D})=\C[x,y,z]=\C[W]\subset\C[V].\] Finally, in the last step, we construct an element $s\in\C[V]$ which is a slice for $\tilde{D}$. This gives \[\C[S\times\A^1]\simeq\C[S][\alpha]=\C[V]=\mathrm{Ker}(\tilde{D})[s]=\C[W][s]\simeq\C[W\times\A^1],\] hence the desired isomorphism between $S\times\A^1$ and $W\times\A^1$. {\bf Step 1.} Since $\ell_1,\ldots,\ell_d$ are disjoint closed subvarieties of the affine variety $S$, there exists a regular function $f\in\C[S]$ such that \begin{equation}\tag{$\star$}\label{equa*} f|_{\ell_i} = r_i \quad\text{for all } 1\leq i\leq d. \end{equation} In other words, we can choose a function $f\in\C[V]$ such that \[f|_{\Vcal_i}=r_i+x\tilde{f}_i \quad \text{for all } 1\leq i\leq d,\] where $\tilde{f}_i$ is some element in $\C[x,u_i]\subset\C[\Vcal_i]$. Therefore, the rational function \[\alpha=\frac{z-f}{x}\] is in fact a regular function on $V$, since \[(z-f)|_{\Vcal_i}=xz_i+r_i-r_i-x\tilde{f}_i=x(z_i-\tilde{f}_i)\] is divisible by $x$ for all $i$. It is then straightforward to check that $z=f+x\alpha$ and $y=x^{-1}P(f+x\alpha)$ are both elements of $\C[S][\alpha]$, hence \[\C[V]=\C[S][\alpha].\] {\bf Step 2.} Note that the image $D(f)\in\C[S]$ of $f$ under the derivation $D$ is divisible by $x$. Therefore, we can extend $D$ to a locally nilpotent derivation $\tilde{D}$ on $\C[V]=\C[S][\alpha]$ by letting \[\tilde{D}(\alpha)=-\frac{D(f)}{x}.\] With this choice, we then have that $\tilde{D}(z)=\tilde{D}(f+x\alpha)=0$ and that $\tilde{D}(y)=\tilde{D}(\frac{P(z)}{x})=0$.
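Spelling out the last two equalities (using $\tilde{D}(x)=0$ and the relation $xy=P(z)$):

```latex
\[
\tilde{D}(z)=\tilde{D}(f+x\alpha)=D(f)+x\,\tilde{D}(\alpha)
            =D(f)-x\,\frac{D(f)}{x}=0,
\qquad
x\,\tilde{D}(y)=\tilde{D}(xy)=\tilde{D}(P(z))=P'(z)\,\tilde{D}(z)=0 .
\]
```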
{\bf Step 3.} To find a slice $s$ for $\tilde{D}$, it suffices to take a polynomial $g(x,t)\in\C[x,t]$ such that the congruences \begin{equation}\tag{$\star\star$}\label{equa**} g(x,r_i+xt)\equiv \sigma_i(x) \mod(x^n) \end{equation} hold in $\C[x,t]$ for all $1\leq i\leq d$, and to define \[s=\frac{u-g(x,z)}{x^n}.\] Indeed, since every restriction \[(u-g(x,z))|_{\Vcal_i}=x^nu_i+\sigma_i(x)-g(x,r_i+xz_i)\] is divisible by $x^n$ in $\C[\Vcal_i]$, it follows that $s$ is a regular function on $V$. Moreover, it is clear that $\tilde{D}(s)=1$. In order to construct a suitable polynomial $g(x,t)$, one can proceed as follows. If we denote \[\sigma_i(x)=\sum_{j=0}^{n-1}a_{ij}x^j\quad \text{ with } a_{ij}\in\C,\] then we can define \[g(x,t)=\sum_{j=0}^{n-1}g_j(t)x^j,\] where the $g_j(t)\in\C[t]$ are Hermite interpolation polynomials such that \[g_j(r_i)=a_{ij}\quad\text{ and }\quad g_j^{(k)}(r_i)=0\] for all $1\leq i\leq d$ and all $0\leq j\leq n-1$, $1\leq k\leq n-1-j$. {\bf The isomorphism.} Finally, the above three steps have produced the desired isomorphism. We have therefore proven the following result. \begin{theorem}\label{main-thm} Let $S$ be a special Danielewski surface over $Z(d)$, and let $W$ be the hypersurface defined by the equation $XY=P(Z)$, where $P$ is a polynomial of degree $d$ whose roots are all simple. Suppose $f$ and $g$ are chosen to satisfy $($\ref{equa*}$)$ and $($\ref{equa**}$)$ above. Then the map \[\Phi\colon\C[W\times\A^1]=\C[X,Y,Z,W]/(XY-P(Z))=\C[x,y,z,w]\xrightarrow{\sim}\C[S\times\A^1]=\C[S][\alpha]\] defined by \begin{align*} \Phi(x)&=x\\ \Phi(z)&=f+x\alpha\\ \Phi(y)&=\frac{P(f+x\alpha)}{x}\\ \Phi(w)&=\frac{u-g(x,f+x\alpha)}{x^n} \end{align*} is an isomorphism. 
\end{theorem} \begin{corollary} Keeping the same notation as in the previous theorem, it follows that the special Danielewski surface $S$ is isomorphic to the surface defined by the equations \[xy=P(z) \text{ and } \Phi^{-1}(\alpha)=\lambda\] in $\A^4$, where $\lambda\in\C$ is any constant. \end{corollary} \section{Some explicit examples} \subsection{Russell's isomorphism} In \cite{SY}, the authors give an explicit isomorphism, due to Russell, between the cylinders over the Danielewski surfaces of respective equations $xy=z^2-1$ and $x^2y=z^2-1$. See also Theorem 10.1 in \cite{Fre}. With our method, we can recover this isomorphism easily. In this section, we will show how to apply the method of the previous section to treat a slightly more general case and, at the end of the section, we will specialize to the case of the Russell isomorphism. We shall use the following notations. We denote by $P(z)=\prod_{i=1}^d(z-r_i)\in\C[z]$ a polynomial with $d\geq2$ simple roots and by $W_{n,P}$ the hypersurface in $\A^3$ defined by the equation $x^ny=P(z)$, where $n\geq1$ is a positive integer. Moreover, we let \[\C[W_{n,P}]=\C[X,Y,Z]/(X^nY-P(Z))=\C[x_n,y_n,z_n]\] and \[\C[W_{n,P}\times\A^1]=\C[X,Y,Z,W]/(X^nY-P(Z))=\C[x_n,y_n,z_n,w_n],\] where $(x_n)^ny_n=P(z_n)$. In accordance with the previous section, we now construct an isomorphism between $\C[W_{1,P}\times\A^1]$ and $\C[W_{2,P}\times\A^1]$. First, note that the regular function $f=z_2\in\C[W_{2,P}]$ is equal to $r_i$ at every point of the line $\{x=0, z=r_i\}\subset W_{2,P}$. Also, in this case, $u=z_2\in\C[W_{2,P}]$ restricts to a coordinate function on every general fiber of the projection $\textrm{pr}_{x}\colon W_{2,P}\to\C$. Secondly, since $P$ has only simple roots, there exist two polynomials $U,V\in\C[z]$ such that $U(z)P'(z)+V(z)P(z)=1$ in $\C[z]$.
Then, the polynomial \[g(z)=z-P(z)U(z)\in\C[z]\] satisfies that \[g(r_i)=r_i\] and \[g'(r_i)=1-P'(r_i)U(r_i)-P(r_i)U'(r_i)=0\] for all $1\leq i\leq d$. With these choices for $f$, $u$ and $g$, we get the isomorphism \[\Phi\colon\C[W_{1,P}\times\A^1]=\C[x_1,y_1,z_1,w_1]\xrightarrow{\sim}\C[W_{2,P}\times\A^1]=\C[x_2,y_2,z_2,w_2]\] defined by \begin{align*} \Phi(x_1)&=x_2\\ \Phi(z_1)&=z_2+x_2w_2\\ \Phi(y_1)&=\frac{P(z_2+x_2w_2)}{x_2}\\ \Phi(w_1)&=\frac{z_2-g(z_2+x_2w_2)}{x_2^2}, \end{align*} whose inverse isomorphism \[\Psi\colon\C[W_{2,P}\times\A^1]=\C[x_2,y_2,z_2,w_2]\xrightarrow{\sim}\C[W_{1,P}\times\A^1]=\C[x_1,y_1,z_1,w_1]\] is defined by \begin{align*} \Psi(x_2)&=x_1\\ \Psi(z_2)&=x_1^2w_1+g(z_1)\\ \Psi(y_2)&=\frac{P(x_1^2w_1+g(z_1))}{x_1^2}\\ \Psi(w_2)&=\frac{z_1-(x_1^2w_1+g(z_1))}{x_1}. \end{align*} In the special case where $P(z)=z^2-1$, we have $g(z)=z-(z^2-1)\dfrac{z}{2}$ and we thus obtain the inverse isomorphisms \[\Phi_*\colon \{x^2y=z^2-1\}\times\A^1\to \{xy=z^2-1\}\times\A^1\] and \[\Psi_*\colon \{xy=z^2-1\}\times\A^1\to \{x^2y=z^2-1\}\times\A^1\] defined by \begin{align*} \Phi_*(x,y,z,w)&=\Big(x,\frac{P(z+xw)}{x},z+xw,\frac{z-g(z+xw)}{x^2}\Big)\\ &=\Big(x,\frac{(z+xw)^2-1}{x},z+xw,\\ &\qquad\qquad\frac{z-(z+xw)+\frac{1}{2}(z+xw)((z+xw)^2-1)}{x^2}\Big)\\ &=\Big(x,\frac{z^2-1}{x}+2zw+xw^2,z+xw,\\ &\qquad\qquad\frac{\frac{1}{2}(z+xw)(z^2-1+x^2w^2)+xw(z^2-1)+x^2w^2z}{x^2}\Big)\\ &=\Big(x,xy+2zw+xw^2,z+xw,\frac{1}{2}(z+xw)(y+w^2)+xyw+w^2z\Big)\\ &=\Big(x,xy+2zw+xw^2,z+xw,\frac{1}{2}(yz+3zw^2+3xyw+xw^3)\Big) \end{align*} and \begin{align*} \Psi_*(x,y,z,w)&=\Big(x,\frac{P(x^2w+g(z))}{x^2},x^2w+g(z),\frac{z-x^2w-g(z)}{x}\Big)\\ &=\Big(x,\frac{x^4w^2+2x^2wg(z)+(z^2-1)^2(\frac{1}{4}z^2-1)}{x^2},x^2w+g(z),\\ &\qquad\qquad-xw+\frac{1}{2}\cdot\frac{z(z^2-1)}{x}\Big)\\ &=\Big(x,x^2w^2+2wg(z)+y^2(\frac{1}{4}z^2-1),x^2w+g(z),-xw+\frac{1}{2}zy\Big). 
\end{align*} \subsection{Classical Danielewski surfaces} In light of the previous example, we obtain simple explicit isomorphisms between the cylinders over two classical Danielewski surfaces. \begin{proposition}\label{prop-classical-DS} Let $d,n,m\geq1$ be positive integers and let $P(z)=\prod_{i=1}^d(z-a_i)$ and $Q(z)=\prod_{i=1}^d(z-b_i)$ be polynomials in $\C[z]$ with simple roots. Recall that $W_{n,P}$ and $W_{m,Q}$ denote the hypersurfaces in $\A^3=\mathrm{Spec}(\C[x,y,z])$ that are defined respectively by the equation \[W_{n,P}\colon x^ny=P(z)\] and \[W_{m,Q}\colon x^my=Q(z).\] Let $f,g\in\C[z]$ be two Hermite interpolating polynomials such that \[f(b_i)=a_i\quad\text{ and }\quad f^{(k)}(b_i)=0\quad \text{ for all } 1\leq i\leq d, 1\leq k\leq n-1\] and \[g(a_i)=b_i\quad\text{ and }\quad g^{(k)}(a_i)=0\quad \text{ for all } 1\leq i\leq d, 1\leq k\leq m-1.\] Then, the maps \begin{align*} \varphi\colon &W_{n,P}\times\A^1\to W_{m,Q}\times\A^1\\ &(x,y,z,w)\mapsto(x,\frac{Q(g(z)+x^mw)}{x^m},g(z)+x^mw,\frac{z-f(g(z)+x^mw)}{x^n}) \end{align*} and \begin{align*} \psi\colon &W_{m,Q}\times\A^1\to W_{n,P}\times\A^1\\ &(x,y,z,w)\mapsto(x,\frac{P(f(z)+x^nw)}{x^n},f(z)+x^nw,\frac{z-g(f(z)+x^nw)}{x^m}) \end{align*} are regular and define inverse isomorphisms between the cylinders $W_{n,P}\times\A^1$ and $W_{m,Q}\times\A^1$. \end{proposition} \begin{proof} On the one hand, we have that \[(P\circ f)(b_i)=(P\circ f)'(b_i)=\cdots=(P\circ f)^{(n-1)}(b_i)=0\quad\text{ for all } i.\] This shows that $P(f(z))$ is divisible by $(Q(z))^n$, hence that $P(f(z))/x^n$ is a regular function on $W_{m,Q}$. Similarly, $Q(g(z))/x^m$ is a regular function on $W_{n,P}$. On the other hand, we have that \[z-g(f(z)+x^nw)=z-g(f(z))-\sum_{k=1}^{\infty}\frac{(x^nw)^k}{k!}g^{(k)}(f(z))\] is an element of the ideal $(Q(z),x^m)\C[x,z,w]$. Therefore, $x^{-m}(z-g(f(z)+x^nw))$ is a regular function on $W_{m,Q}\times\A^1=\mathrm{Spec}(\C[x,y,z,w]/(x^my-Q(z)))$. 
Similarly, $x^{-n}(z-f(g(z)+x^mw))$ is a regular function on $W_{n,P}\times\A^1$. Thus, $\varphi$ and $\psi$ are regular maps. It is moreover straightforward to check that they are inverses of each other. \end{proof} \subsection{An iterated Danielewski hypersurface} Let us look again at the iterated Danielewski hypersurface $H=\{xz=(xy+z^2)^2-1\}$ in $\A^3$ that we studied in Section \ref{Section:construction-iterated-Danielewski}. Recall that the special fiber consists of the four lines \[\ell_i=\{(0,y,\varepsilon^i)\mid y\in\C\}, 1\leq i\leq 4,\] where $\varepsilon=\boldsymbol{i}\in\C$ denotes a primitive fourth root of unity, and that the surface $H$ corresponds to the data $n=2$ and $\sigma_i(x)=\varepsilon^{2i}+\frac{\varepsilon^{-i}}{2}x$ for all $1\leq i\leq 4$. We now give an isomorphism from $H\times\A^1$ to $\{xy=z^4-1\}\times\A^1$. Keeping the notations of Section~\ref{Section:plan of the construction}, we obtain the isomorphism \begin{align*} \{xz=(xy+z^2)^2-1\}\times\A^1&\to \{xy=z^4-1\}\times\A^1\\ (x,y,z,w)&\mapsto(x,\frac{(f+xw)^4-1}{x},f+xw,\frac{u-g(x,f+xw)}{x^2}), \end{align*} where \begin{align*} r_i&=\varepsilon^i\\ f&=z\\ u&=xy+z^2\\ g(x,z)&=z^2-\frac{1}{2}z^2(z^4-1)+x\frac{1}{2}z^3. \end{align*} \subsection{A double Danielewski surface} We consider again the double Danielewski surface \[D= \{xy=z^2-1 \text{ and } xt=y^2-1\}\quad \text{ in } \A^4\] that we described in Section \ref{Section:construction-double-Danielewski}. Recall that the special fiber consists of the four lines \[\ell_{i j}=\{(0,i,j,t)\mid t\in\C\}\] and that the surface $D$ corresponds to the data $n=2$ and $\sigma_{i j}(x)=j+\frac{ij}{2}x$, where $(i,j)\in\{1,-1\}\times\{1,-1\}$.
Following the notations of Section~\ref{Section:plan of the construction}, we denote by $\varepsilon=\boldsymbol{i}\in\C$ a primitive fourth root of unity and define \begin{align*} f&=\frac{z+y}{2}+\varepsilon\frac{y-z}{2}\\ u&=z\\ g(x,z)&=\frac{1-\varepsilon}{2}z^3+\frac{1+\varepsilon}{2}z-(z^4-1)\frac{z}{4}(3\frac{1-\varepsilon}{2}z^2+\frac{1+\varepsilon}{2})+x\frac{1}{2}z^2. \end{align*} Then, we have that \begin{align*} &f|_{\ell_{11}}=1, && g(x,1)=\sigma_{1 1}(x)\\ &f|_{\ell_{1 -1}}=\varepsilon, && g(x,\varepsilon)=\sigma_{1 -1}(x)\\ &f|_{\ell_{-1 1}}=-\varepsilon, && g(x,-\varepsilon)=\sigma_{-1 1}(x)\\ &f|_{\ell_{-1 -1}}=-1, && g(x,-1)=\sigma_{-1 -1}(x) \end{align*} and \[\frac{\partial g}{\partial z}(x,\varepsilon^i)\equiv0 \mod (x)\] for all $1\leq i\leq 4$. This produces the isomorphism \begin{align*} D\times\A^1&\to \{xy=z^4-1\}\times\A^1\\ (x,y,z,t,w)&\mapsto(x,\frac{(f+xw)^4-1}{x},f+xw,\frac{u-g(x,f+xw)}{x^2}). \end{align*} \end{document}
\begin{document} \title{Convergence Analysis of Generalized ADMM with Majorization for Linearly Constrained Composite Convex Optimization} \begin{abstract} The generalized alternating direction method of multipliers (ADMM) of Xiao et al. [{\tt Math. Prog. Comput., 2018}] is designed for two-block linearly constrained composite convex programming problems in which each block is of the form ``nonsmooth + quadratic". However, when the smooth part is non-quadratic, this method may fail unless the favorable ``nonsmooth + smooth" structure is abandoned. This paper remedies this defect by using a majorization technique to approximate the augmented Lagrangian function, so that the corresponding subproblem can be decomposed into smaller problems that are then solved separately. Moreover, the recent symmetric Gauss-Seidel (sGS) decomposition theorem guarantees the equivalence between the larger subproblem and these smaller ones. This paper focuses on convergence analysis: we prove that the sequence generated by the proposed method converges globally to a Karush-Kuhn-Tucker point of the considered problem. Finally, we report numerical experiments on a class of simulated convex composite optimization problems, which illustrate that the proposed method is more efficient than the methods it is compared with. \end{abstract} {\bf Key words.} composite convex programming, alternating direction method of multipliers, majorization, symmetric Gauss-Seidel iteration, proximal point term \setcounter{equation}{0} \section{Introduction}\label{section1} Let $\mathbb{X}, \mathbb{Y}$, and $\mathbb{Z}$ be real finite dimensional Euclidean spaces, each endowed with an inner product $\langle \cdot,\cdot \rangle $ and its induced norm $\|\cdot\|$.
Let $f_1 : \mathbb{X} \rightarrow ( - \infty, + \infty)$ and $h_1 : \mathbb{Y} \rightarrow ( - \infty, + \infty)$ be convex functions with Lipschitz continuous gradients, and let $f_2 : \mathbb{X} \rightarrow ( - \infty, + \infty]$ and $h_2 : \mathbb{Y} \rightarrow ( - \infty, + \infty]$ be closed proper convex (not necessarily smooth) functions. We consider the following composite convex optimization problem \begin{equation}\label{prob1} \min_{x\in\mathbb{X},y\in\mathbb{Y}} \ \big\{ f_1(x)+f_2(x) + h_1(y)+h_2(y) \ | \ {\mathcal A}^*x + {\mathcal B}^*y = c \big\}, \end{equation} where ${\mathcal A} : \mathbb{Z} \rightarrow \mathbb{X}$ and ${\mathcal B} : \mathbb{Z} \rightarrow \mathbb{Y}$ are linear operators with adjoints ${\mathcal A}^*$ and ${\mathcal B}^*$, respectively, and $c\in\mathbb{Z}$ is a given vector. Let $\sigma\in( 0 , +\infty)$ be a penalty parameter. The augmented Lagrangian function associated with problem (\ref{prob1}) is defined, for any $(x,y,z)\in\mathbb{X}\times\mathbb{Y}\times\mathbb{Z}$, by \begin{equation}\label{alf} {\mathcal L}_{\sigma}(x,y;z) := f_1(x)+f_2(x) + h_1(y)+h_2(y) + \langle z, {\mathcal A}^*x + {\mathcal B}^*y - c\rangle +\frac{\sigma}{2}\|{\mathcal A}^*x + {\mathcal B}^*y - c\|^2, \end{equation} where $z$ is a multiplier.
One attempt to solve (\ref{prob1}) is the standard alternating direction method of multipliers (ADMM), which alternately updates the variables $(x,y)$ and the multiplier $z$ from an initial point $(x^0,y^0,z^0)\in \dom(f)\times \dom(h)\times\mathbb{Z}$, that is, \begin{equation}\label{cadmm} \left\{ \begin{array}{l} x^{k+1} = \argmin_{x\in\mathbb{X}}\big\{{\mathcal L}_{\sigma}(x,y^k;z^k)=:f_1(x)+f_2(x)+\frac{\sigma}{2}\|{\mathcal A}^*x + {\mathcal B}^*y^k - c+z^k/\sigma\|^2\big\},\\[3mm] y^{k+1} = \argmin_{y\in\mathbb{Y}}\big\{{\mathcal L}_{\sigma}(x^{k+1},y;z^k)=:h_1(y)+h_2(y)+\frac{\sigma}{2}\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y - c+z^k/\sigma\|^2\big\}, \\[3mm] z^{k+1} = z^k + \tau\sigma({\mathcal A}^*x^{k+1} + {\mathcal B}^*y^{k+1} - c), \end{array} \right. \end{equation} where $\tau\in(0, (1+\sqrt{5})/2)$ is a step-length. The convergence of the standard ADMM was established long ago by Gabay \& Mercier \cite{GMC} and Fortin \& Glowinski \cite{FGC}. In particular, Gabay \cite{ADMMDRS} showed that the standard ADMM with $\tau=1$ is exactly the Douglas-Rachford splitting method applied to the sum of two maximal monotone operators arising from the dual of (\ref{prob1}), and Eckstein \& Bertsekas \cite{GADMM} showed that the latter is an instance of the proximal point algorithm \cite{ALMPPA} applied to a specially constructed operator. For more work on ADMM, one may refer to the important note \cite{noteADMM} and the excellent survey \cite{GHC}. To improve the performance of (\ref{cadmm}) in the case of $\tau = 1$, Eckstein \& Bertsekas \cite{GADMM} also proposed a generalized variant of ADMM.
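As a minimal numeric illustration of scheme (\ref{cadmm}), consider the toy instance (our choice, not from the paper) $f_1(x)=\frac12(x-a)^2$, $h_1(y)=\frac12(y-b)^2$, $f_2=h_2=0$, ${\mathcal A}^*={\mathcal B}^*=1$ on $\mathbb{R}$, for which both subproblems have closed-form solutions:

```python
# Standard ADMM (scheme above) on the toy problem
#   min 0.5*(x-a)^2 + 0.5*(y-b)^2   s.t.  x + y = c,
# whose exact solution is x* = (a-b+c)/2, y* = (b-a+c)/2.
a, b, c = 3.0, -1.0, 2.0
sigma, tau = 1.0, 1.0                 # penalty parameter and step-length
x, y, z = 0.0, 0.0, 0.0
for _ in range(200):
    # x-subproblem: argmin_x 0.5*(x-a)^2 + 0.5*sigma*(x + y - c + z/sigma)^2
    x = (a + sigma*(c - y) - z) / (1.0 + sigma)
    # y-subproblem: argmin_y 0.5*(y-b)^2 + 0.5*sigma*(x + y - c + z/sigma)^2
    y = (b + sigma*(c - x) - z) / (1.0 + sigma)
    # multiplier update
    z = z + tau*sigma*(x + y - c)
assert abs(x - (a - b + c)/2) < 1e-8
assert abs(y - (b - a + c)/2) < 1e-8
```

For this strongly convex instance with $\sigma=\tau=1$, the error contracts by at least a factor $1/2$ per iteration.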
Subsequently, Chen \cite[Section 3.2]{CHEND} made an interesting observation and concluded that the generalized ADMM of Eckstein \& Bertsekas \cite{GADMM} is equivalent to the following iterative scheme from an initial point $\widetilde{\omega}^0:=(\widetilde{x}^0, \widetilde{y}^0, \widetilde{z}^0)\in \dom(f)\times \dom(h)\times\mathbb{Z}$: \begin{equation}\label{gadmm} \left\{ \begin{array}{l} x^{k} = \argmin_{x\in\mathbb{X}}\Big\{{\mathcal L}_{\sigma}(x,\widetilde{y}^k;\widetilde{z}^k)=:f_1(x)+f_2(x)+\frac{\sigma}{2}\|{\mathcal A}^*x + {\mathcal B}^*\widetilde{y}^k - c+\widetilde{z}^k/\sigma\|^2\Big\},\\[3mm] z^{k} = \widetilde{z}^k + \sigma({\mathcal A}^*x^{k} + {\mathcal B}^*\widetilde{y}^{k} - c),\\[3mm] y^{k} = \argmin_{y\in\mathbb{Y}}\Big\{{\mathcal L}_{\sigma}(x^{k},y;z^k)=:h_1(y)+h_2(y)+\frac{\sigma}{2}\|{\mathcal A}^*x^k + {\mathcal B}^*y - c+z^k/\sigma\|^2\Big\}, \\[3mm] \widetilde{\omega}^{k+1} = \widetilde{\omega}^k + \rho(\omega^k - \widetilde{\omega}^k), \end{array} \right. \end{equation} where $\omega^k = (x^k, y^k, z^k)$, $\widetilde{\omega}^k = (\widetilde{x}^k, \widetilde{y}^k, \widetilde{z}^k)$, and $\rho\in(0,2)$ is a relaxation factor. Clearly, for $\rho=1$, the above generalized ADMM scheme reduces to scheme (\ref{cadmm}) with $\tau=1$. We see that the efficiency of scheme (\ref{gadmm}) is mainly determined by the $x$- and $y$-subproblems.
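The relaxation step can be sketched on a toy instance (again our own example, not the paper's): with $f_1(x)=\frac12(x-a)^2$, $h_1(y)=\frac12(y-b)^2$, $f_2=h_2=0$ and ${\mathcal A}^*={\mathcal B}^*=1$, scheme (\ref{gadmm}) reads:

```python
# Generalized ADMM (relaxation factor rho in (0,2)) on the toy problem
#   min 0.5*(x-a)^2 + 0.5*(y-b)^2   s.t.  x + y = c,
# exact solution: x* = (a-b+c)/2, y* = (b-a+c)/2.
a, b, c, sigma, rho = 3.0, -1.0, 2.0, 1.0, 1.5
xt, yt, zt = 0.0, 0.0, 0.0            # the "tilde" iterates
for _ in range(200):
    x = (a + sigma*(c - yt) - zt) / (1.0 + sigma)  # x-step uses (yt, zt)
    z = zt + sigma*(x + yt - c)                    # multiplier update
    y = (b + sigma*(c - x) - z) / (1.0 + sigma)    # y-step uses (x, z)
    # relaxation: omega_tilde^{k+1} = omega_tilde^k + rho*(omega^k - omega_tilde^k)
    xt, yt, zt = xt + rho*(x - xt), yt + rho*(y - yt), zt + rho*(z - zt)
assert abs(x - (a - b + c)/2) < 1e-8
assert abs(y - (b - a + c)/2) < 1e-8
```

Setting $\rho=1$ in this sketch recovers the standard ADMM iteration with $\tau=1$.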
It is known that, if $f_1(x)$ and $h_1(y)$ are quadratic, and $f_2(x)$ and $h_2(y)$ are of the form $f_2(x)=f_2(x_1)$ and $h_2(y)=h_2(y_1)$, the $x$- and $y$-subproblems can be solved efficiently provided that a couple of suitable proximal point terms are added, that is, \begin{align} x^{k} = \argmin_{x\in\mathbb{X}}\Big\{f_1(x)+f_2(x_1)+\frac{\sigma}{2}\|{\mathcal A}^*x + {\mathcal B}^*\widetilde{y}^k - c+\widetilde{z}^k/\sigma\|^2+\frac12\|x-\widetilde{x}^k\|_{\mathcal S}^2\Big\},\label{xiaox}\\[2mm] y^{k} = \argmin_{y\in\mathbb{Y}}\Big\{h_1(y)+h_2(y_1)+\frac{\sigma}{2}\|{\mathcal A}^*x^k + {\mathcal B}^*y - c+z^k/\sigma\|^2+\frac12\|y-\widetilde{y}^k\|_{\mathcal T}^2\Big\},\label{xiaoy} \end{align} where ${\mathcal S}$ and ${\mathcal T}$ are two self-adjoint and positive semidefinite linear operators. It was shown by Li et al. \cite{spADMM,bsGs} that, if ${\mathcal S}$ and ${\mathcal T}$ are chosen properly, both subproblems can be split into some smaller ones and then solved separately in a sGS manner. However, if $f_1(x)$ and $h_1(y)$ are non-quadratic, it seems difficult to perform such a splitting, so that the favorable ``nonsmooth + smooth" structure cannot be used any more. We must emphasize that the approach of (\ref{xiaox}) and (\ref{xiaoy}) was first proposed by Xiao et al. \cite{spGADMM} and has been numerically demonstrated to be very efficient for solving doubly non-negative semidefinite programming problems to moderate accuracy. In the optimization literature, one popular way to approximate a continuously differentiable convex function by a quadratic is majorization \cite{COVA}; see, e.g., Hong et al. \cite{HONG} and Cui et al. \cite{CUIJOTA}. In particular, Li et al. \cite{miPADMM} majorized the augmented Lagrangian function involved in (\ref{cadmm}) when $f_1(x)$ and $h_1(y)$ are non-quadratic so that the subproblems become composite convex quadratic minimization problems.
Therefore, the resulting problems were solved efficiently with respect to each variable in a sGS order \cite{bsGs}. An attractive feature of using sGS is that it makes this iterative scheme fit into the framework of Fazel et al. \cite{FST}. In a similar way, Qin et al. \cite{MGADMM} also applied a majorization technique to the generalized ADMM of Eckstein \& Bertsekas \cite{GADMM} to make it more flexible and robust. Extensive numerical experiments demonstrated that the generalized ADMM with a suitable relaxation factor achieves better performance than the method of Li et al. \cite{miPADMM}. Nevertheless, the performance of the semi-proximal generalized ADMM stated in (\ref{gadmm}), (\ref{xiaox}), and (\ref{xiaoy}) has not yet been studied in this setting. Hence, one natural question is how to use the majorization technique so that the corresponding subproblems become more amenable to efficient computations when $f_1(x)$ and $h_1(y)$ are non-quadratic. The main purpose of this paper is to apply a majorization technique to the augmented Lagrangian function (\ref{alf}) to make the resulting subproblems (\ref{xiaox}) and (\ref{xiaoy}) easier to solve, and hence to enhance the capabilities of the generalized ADMM of Xiao et al. \cite{spGADMM}. At the outset, we clarify that the reason why we focus on the generalized ADMM of Xiao et al. \cite{spGADMM} is that this type of method is more efficient than some state-of-the-art algorithms according to a series of numerical experiments. Let $\mathbb{X}:=\mathbb{X}_1\times\ldots\times\mathbb{X}_s$ and $\mathbb{Y}:=\mathbb{Y}_1\times\ldots\times\mathbb{Y}_t$ with positive integers $s$ and $t$. At each iteration, we use majorized functions to replace $f_1(x)$ and $h_1(y)$ in the associated augmented Lagrangian function, and then both subproblems have ``nonsmooth + quadratic" structures.
If $f_2(x):=f_2(x_1)$ and $h_2(y):=h_2(y_1)$ are simple functions, the $x$-subproblem ({\itshape resp.} $y$-subproblem) can be solved individually in the order of $x_s\rightarrow\ldots\rightarrow x_2\rightarrow x_1\rightarrow x_2\rightarrow\ldots\rightarrow x_s$ ({\itshape resp.} $y_t\rightarrow\ldots\rightarrow y_2\rightarrow y_1\rightarrow y_2\rightarrow\ldots\rightarrow y_t$) by making full use of the structures of $f_2(x_1)$ and $h_2(y_1)$. Then, by the sGS decomposition theorem of Li et al. \cite{bsGs}, it is easy to show that this cycle is equivalent to adding proximal point terms with proper linear operators ${\mathcal S}$ and ${\mathcal T}$. We illustrate the difference in the iterative points between our method and the methods in \cite{MGADMM,miPADMM} as follows: $$ \begin{array}{rrcl} [\text{Methods in \cite{miPADMM,MGADMM} }] &\ldots\rightarrow(x^k,y^k)&\longrightarrow&(x^{k+1},y^{k+1})\rightarrow\ldots\\[2mm] [\text{Our method}] &\ldots\rightarrow(x^k,y^k)&\rightarrow(\widetilde{x}^{k+1},\widetilde{y}^{k+1})\rightarrow&(x^{k+1},y^{k+1})\rightarrow\ldots \end{array} $$ which shows that the new point $(x^{k+1},y^{k+1})$ is generated from the relaxation point $(\widetilde{x}^{k+1},\widetilde{y}^{k+1})$ and not from the previous point $(x^k,y^k)$ as in \cite{miPADMM,MGADMM}. We must emphasize that the relaxation point $(\widetilde{x}^{k},\widetilde{y}^{k})$ leads to additional technical difficulties, so that the theoretical analysis cannot be obtained by mimicking the aforementioned methods in \cite{miPADMM,MGADMM}. Most of the remainder of this paper focuses on theoretical analysis; that is, we prove that the sequence $\{(x^k,y^k)\}$ generated by our method converges to a Karush-Kuhn-Tucker (KKT) point of problem (\ref{prob1}) under some technical conditions. Finally, we report numerical experiments on a class of simulated convex composite optimization problems. The numerical results illustrate that the proposed method performs better than the M-ADMM of Li et al.
\cite{miPADMM} and the M-GADMM of Qin et al. \cite{MGADMM}. The remaining parts of this paper are organized as follows. In section \ref{section3}, we propose the majorized and generalized ADMM to solve the composite convex problem (\ref{prob1}) in the case of $f_1(x)$ and $h_1(y)$ being non-quadratic, followed by some important properties. Then, we focus on the convergence analysis of the proposed algorithm in section \ref{section4}. In section \ref{section5}, we address implementation issues to show the potential numerical efficiency of our proposed algorithm. Finally, we conclude this paper with some remarks in section \ref{section6}. \setcounter{equation}{0} \section{A generalized ADMM with majorization}\label{section3} At the beginning of this section, we give some preliminaries needed in the subsequent developments. Let $f:\mathbb{X}\rightarrow(-\infty,+\infty]$ be a closed proper convex function. The effective domain of $f$, denoted by $\dom(f)$, is defined as $\dom(f):=\{x:f(x)<+\infty\}$. A vector $x^*$ is said to be a subgradient of $f$ at a point $x$ if $f(z)\geq f(x)+\langle x^*,z-x\rangle$ for all $z\in\mathbb{X}$. The set of all subgradients of $f$ at $x$ is called the subdifferential of $f$ at $x$ and is denoted by $\partial f(x)$ or $\partial f$. It is well known from \cite{COVA} that $\partial f$ is a maximal monotone operator.
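For instance, for $f(x)=|x|$ on $\mathbb{R}$ one has $\partial f(x)=\{\mathrm{sign}(x)\}$ for $x\neq0$ and $\partial f(0)=[-1,1]$; the subgradient inequality can be checked directly (a small illustration of ours):

```python
# Subgradient inequality f(z) >= f(x) + <x*, z - x> for f(x) = |x|:
# at x != 0 the subdifferential is {sign(x)}; at x = 0 it is the interval [-1, 1].
def subgradients(x):
    return [x/abs(x)] if x != 0 else [-1.0, -0.3, 0.0, 0.7, 1.0]  # samples from [-1, 1]

for x in (-2.0, 0.0, 1.5):
    for xstar in subgradients(x):
        for zz in (-3.0, -0.5, 0.0, 0.4, 2.0):
            assert abs(zz) >= abs(x) + xstar*(zz - x) - 1e-12
```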
Because $f_1(x)$ and $h_1(y)$ are smooth convex functions with Lipschitz continuous gradients, there exist self-adjoint and positive semidefinite linear operators $\Sigma_{f_1}\preceq\widehat{\Sigma}_{f_1}$ on $\mathbb{X}$ and $\Sigma_{h_1}\preceq\widehat{\Sigma}_{h_1}$ on $\mathbb{Y}$ such that, for any $x, x' \in \mathbb{X}$ and any $y, y' \in \mathbb{Y}$, it holds that \begin{equation}\label{f1cov} \frac{1}{2}\|x-x'\|_{\Sigma_{f_1}}^2 \le f_1(x) - f_1(x') - \langle x-x',\nabla f_1(x') \rangle \le \displaystyle{\frac{1}{2}}\|x-x'\|_{\widehat{\Sigma}_{f_1}}^2, \end{equation} \begin{equation}\label{h1cov} \frac{1}{2}\|y-y'\|_{\Sigma_{h_1}}^2 \le h_1(y) - h_1(y') - \langle y-y',\nabla h_1(y') \rangle \le \displaystyle{\frac{1}{2}}\|y-y'\|_{\widehat{\Sigma}_{h_1}}^2. \end{equation} Using the majorization technique, we construct the majorized functions for $f_1(x)$ and $h_1(y)$ as \begin{equation}\label{hatf1} \hat{f}_1(x,x') := f_1(x') + \langle x-x',\nabla f_1(x') \rangle + \displaystyle{\frac{1}{2}}\|x-x'\|_{\widehat{\Sigma}_{f_1}}^2, \end{equation} \begin{equation}\label{hath1} \hat{h}_1(y,y') := h_1(y') + \langle y-y',\nabla h_1(y') \rangle + \displaystyle{\frac{1}{2}}\|y-y'\|_{\widehat{\Sigma}_{h_1}}^2. \end{equation} Then, using both majorized functions to replace the functions $f_1(x)$ and $h_1(y)$ in (\ref{alf}), we get the majorized augmented Lagrangian function \begin{equation}\label{majalf} \hat{{\mathcal L}}_{\sigma}\big(x,y;(z,x',y')\big) := \hat{f}_1(x,x') + f_2(x) + \hat{h}_1(y,y') + h_2(y) + \langle z, {\mathcal A}^*x + {\mathcal B}^*y - c\rangle +\frac{\sigma}{2}\|{\mathcal A}^*x + {\mathcal B}^*y - c\|^2. \end{equation} Clearly, it is a composite convex quadratic function except for the terms $f_2(x)$ and $h_2(y)$. For subsequent developments, we need the following constraint qualification.
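As a concrete illustration (our example, not the paper's), for $f_1(x)=\log(1+e^{x})$ on $\mathbb{R}$ one has $0\le f_1''\le\frac14$, so $\Sigma_{f_1}=0$ and $\widehat{\Sigma}_{f_1}=\frac14$ are admissible in (\ref{f1cov}), and the majorized function (\ref{hatf1}) is a global upper bound that is tight at $x=x'$:

```python
import math

# Majorization check for f1(x) = log(1 + e^x): convex with 0 <= f1'' <= 1/4, so
# Sigma_{f1} = 0 and hat-Sigma_{f1} = 1/4 are admissible, and
# hat_f1(., x') is a global upper bound on f1 that is tight at x = x'.
f1  = lambda x: math.log(1.0 + math.exp(x))
df1 = lambda x: 1.0 / (1.0 + math.exp(-x))
L   = 0.25

def hat_f1(x, xp):
    return f1(xp) + df1(xp)*(x - xp) + 0.5*L*(x - xp)**2

for xp in (-2.0, 0.0, 1.3):
    assert abs(hat_f1(xp, xp) - f1(xp)) < 1e-12              # tight at x = x'
    for x in (-3.0, -0.5, 0.0, 0.8, 2.5):
        assert f1(x) <= hat_f1(x, xp) + 1e-12                # upper bound
        assert f1(x) >= f1(xp) + df1(xp)*(x - xp) - 1e-12    # lower bound (Sigma_{f1} = 0)
```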
\begin{assumption}\label{assum} There exists $(x^0, y^0)\in ri(\dom(f_2) \times \dom (h_2)) \cap \Omega$, where $\Omega :=\{(x,y)\in \mathbb{X}\times\mathbb{Y}\mid {\mathcal A}^*x + {\mathcal B}^*y = c \}$. \end{assumption} Under Assumption \ref{assum}, it follows from \cite[Corollaries 28.2.2 and 28.3.1]{COVA} that we can obtain the optimality conditions of problem (\ref{prob1}). \begin{theorem}\label{optcond} If Assumption \ref{assum} holds, then $(\bar{x}, \bar{y})\in \mathbb{X}\times\mathbb{Y}$ is an optimal solution to problem (\ref{prob1}) if and only if there exists a Lagrangian multiplier $\bar{z}\in\mathbb{Z}$ such that $(\bar{x},\bar{y},\bar{z})$ satisfies the following KKT system: \begin{equation}\label{kktcond} 0\in \partial f_2(\bar{x}) + \nabla f_1(\bar{x}) + {\mathcal A}\bar{z},\quad 0\in \partial h_2(\bar{y}) + \nabla h_1(\bar{y}) + {\mathcal B}\bar{z},\quad {\mathcal A}^*\bar{x} + {\mathcal B}^*\bar{y} - c = 0, \end{equation} where $\bar z\in\mathbb{Z}$ is an optimal solution to the dual problem of (\ref{prob1}). \end{theorem} Because $f_2(x)$ and $h_2(y)$ are convex functions, the KKT system (\ref{kktcond}) is equivalent to finding a triple of points $(\bar{x},\bar{y},\bar{z})\in\mathbb{W} :=\mathbb{X}\times\mathbb{Y}\times\mathbb{Z}$ such that for any $(x, y)\in\mathbb{X}\times\mathbb{Y}$ the following inequality holds: \begin{equation}\label{covineq} \big(f_2(x) + h_2(y)\big) - \big(f_2(\bar{x}) + h_2(\bar{y})\big) + \big\langle x-\bar{x},\nabla f_1(\bar{x})+{\mathcal A}\bar{z}\big\rangle + \big\langle y-\bar{y}, \nabla h_1(\bar{y})+{\mathcal B}\bar{z}\big\rangle \ge 0. \end{equation} This inequality will be used frequently in the theoretical analysis. In light of the above preliminary results, we are ready to construct our algorithm.
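Before doing so, the KKT system (\ref{kktcond}) can be illustrated on the toy instance $f_1(x)=\frac12(x-a)^2$, $h_1(y)=\frac12(y-b)^2$, $f_2=h_2=0$, ${\mathcal A}^*={\mathcal B}^*=1$ (our example), where it can be solved in closed form and checked directly:

```python
# KKT system for min 0.5*(x-a)^2 + 0.5*(y-b)^2 s.t. x + y = c (scalars, A* = B* = 1):
#   x - a + z = 0,   y - b + z = 0,   x + y - c = 0
# => z = (a+b-c)/2,  x = a - z,       y = b - z.
a, b, c = 3.0, -1.0, 2.0
z = (a + b - c) / 2
x, y = a - z, b - z
assert abs((x - a) + z) < 1e-12      # first inclusion (subdifferential of f2 is {0})
assert abs((y - b) + z) < 1e-12      # second inclusion
assert abs(x + y - c) < 1e-12        # primal feasibility
```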
For convenience, we denote $$ {\mathcal F}:=\widehat{\Sigma}_{f_1} + {\mathcal S} + \sigma {\mathcal A}{\mathcal A}^* \quad \text{and} \quad {\mathcal H}:= \widehat{\Sigma}_{h_1} + {\mathcal T} + \sigma {\mathcal B}{\mathcal B}^*. $$ At the current iteration, if we use the majorized augmented Lagrangian function (\ref{majalf}) to replace its standard form (\ref{alf}), the $x$-subproblem in (\ref{xiaox}) will reduce to \begin{align*} x^{k} =& \argmin_{x}\Big\{\hat{{\mathcal L}}_{\sigma}(x,\widetilde{y}^k;(\widetilde{z}^k,\widetilde{x}^k,\widetilde{y}^k)) + \frac{1}{2}\|x - \widetilde{x}^k\|_{\mathcal S}^2\Big\}\\[3mm] =&\argmin_{x}\Big\{f_2(x) + \frac{1}{2}\langle x, {\mathcal F} x\rangle + \Big\langle \nabla f_1(\widetilde{x}^k) + \sigma {\mathcal A}({\mathcal A}^*\widetilde{x}^{k} + {\mathcal B}^*\widetilde{y}^{k} - c + \sigma^{-1}\widetilde{z}^k) - {\mathcal F} \widetilde{x}^k, x\Big\rangle\Big\}, \end{align*} and the $y$-subproblem in (\ref{xiaoy}) will reduce to \begin{align*} y^{k} =& \argmin_{y}\Big\{\hat{{\mathcal L}}_{\sigma}(x^k,y;(z^k,x^k,\widetilde{y}^k)) + \displaystyle{\frac{1}{2}}\|y - \widetilde{y}^k\|_{\mathcal T}^2\Big\}\\[3mm] =&\argmin_{y}\Big\{h_2(y) + \displaystyle{\frac{1}{2}}\langle y, {\mathcal H} y\rangle + \Big\langle \nabla h_1(\widetilde{y}^k) + \sigma {\mathcal B}({\mathcal A}^*x^{k} + {\mathcal B}^*\widetilde{y}^{k} - c + \sigma^{-1}z^{k}) - {\mathcal H} \widetilde{y}^k, y\Big\rangle\Big\}. \end{align*} Clearly, both subproblems have the favorable structure of ``nonsmooth + quadratic" so that they can be solved efficiently if the self-adjoint operators ${\mathcal S}$ and ${\mathcal T}$ are chosen properly.
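The role of ${\mathcal S}$ can be sketched on a purely quadratic toy problem (our own numerical illustration of the sGS decomposition theorem of \cite{bsGs}): for $\min_x \frac12\langle x,Q_x x\rangle+\langle r,x\rangle$ with $Q_x=U_x+\Sigma_x+U_x^{\top}$ ($U_x$ strictly upper triangular, $\Sigma_x$ the diagonal), one symmetric Gauss--Seidel cycle $x_s\rightarrow\cdots\rightarrow x_1\rightarrow\cdots\rightarrow x_s$ started at $\widetilde{x}$ coincides with the proximal solve that adds $\frac12\|x-\widetilde{x}\|_{{\mathcal S}}^2$ with ${\mathcal S}=U_x\Sigma_x^{-1}U_x^{\top}$:

```python
# sGS decomposition check: one backward-then-forward Gauss-Seidel cycle on
#   min 0.5*<x, Qx> + <r, x>
# equals argmin 0.5*<x, Qx> + <r, x> + 0.5*||x - xt||_S^2 with S = U D^{-1} U^T,
# where U is the strictly upper triangular part and D the diagonal of Q.
Q  = [[4.0, 1.0, 1.0],
      [1.0, 3.0, 1.0],
      [1.0, 1.0, 5.0]]
r  = [1.0, -2.0, 3.0]
xt = [0.5, -1.0, 2.0]
n  = 3

def gs_step(x, i):                      # minimize over coordinate i, others fixed
    s = sum(Q[i][j]*x[j] for j in range(n) if j != i)
    return -(r[i] + s) / Q[i][i]

x = xt[:]
for i in range(n - 1, -1, -1):          # backward sweep x_3 -> x_2 -> x_1
    x[i] = gs_step(x, i)
for i in range(1, n):                   # forward sweep x_2 -> x_3
    x[i] = gs_step(x, i)

# proximal characterization: (Q + S) x = S xt - r, with S = U D^{-1} U^T
S = [[sum(Q[i][k]/Q[k][k]*Q[k][j] for k in range(max(i, j) + 1, n))
      for j in range(n)] for i in range(n)]
A = [[Q[i][j] + S[i][j] for j in range(n)] for i in range(n)]
bvec = [sum(S[i][j]*xt[j] for j in range(n)) - r[i] for i in range(n)]

def solve(A, b):                        # Gaussian elimination with partial pivoting
    A = [row[:] for row in A]; b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p], b[k], b[p] = A[p], A[k], b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n): A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0]*n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j]*x[j] for j in range(i + 1, n))) / A[i][i]
    return x

xprox = solve(A, bvec)
assert max(abs(u - v) for u, v in zip(x, xprox)) < 1e-10
```

Here each block is one scalar coordinate; in the block case the divisions become solves with the diagonal blocks $\Sigma_x$.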
Taking the $x$-subproblem as an example, in the case of $f_2(x)=f_2(x_1)$ being a simple function, we denote $Q_x:=\widehat{\Sigma}_{f_1} + \sigma {\mathcal A}{\mathcal A}^*$ and decompose it as $Q_x=U_x+\Sigma_x+U^\top_x$, where $U_x$ is the strictly upper triangular part and $\Sigma_x$ is the diagonal of $Q_x$. Let ${\mathcal S}:=U_x\Sigma_x^{-1}U_x^\top$; then the $x$-subproblem can be computed in the sGS order $x_s\rightarrow\ldots\rightarrow x_2\rightarrow x_1\rightarrow x_2\rightarrow\ldots\rightarrow x_s$, which indicates that the $x$-subproblem is split into a series of smaller problems in the blocks $x_i$ that are solved separately. For more theoretical details on this iterative scheme, one may refer to \cite{spGADMM,miPADMM}. In light of the above analysis, we state the generalized ADMM with majorization (abbr. G-ADMM-M) for solving problem (\ref{prob1}) as follows. \begin{framed} \noindent {\bf Algorithm: G-ADMM-M (Generalized ADMM with Majorization).} \vskip 1.0mm \hrule \vskip 1mm \noindent \begin{itemize} \item[Step 0.] Choose $\sigma> 0$ and $\rho\in (0,2)$. Choose self-adjoint positive semidefinite linear operators ${\mathcal S}$ and ${\mathcal T}$ on $\mathbb{X}$ and $\mathbb{Y}$ such that ${\mathcal F}\succ 0$ and ${\mathcal H}\succ 0$. Input an initial point $\widetilde{\omega}^0 := (\widetilde{x}^0,\widetilde{y}^0,\widetilde{z}^0)\in \dom(f_2)\times \dom( h_2)\times \mathbb{Z}$. Let $k:=0$. \item[Step 1.]
Compute \begin{equation}\label{mipgadmm} \left\{ \begin{array}{l} x^{k} = \argmin_{x}\Big\{f_2(x) + \displaystyle{\frac{1}{2}}\langle x, {\mathcal F} x\rangle + \langle \nabla f_1(\widetilde{x}^k) + \sigma {\mathcal A}({\mathcal A}^*\widetilde{x}^{k} + {\mathcal B}^*\widetilde{y}^{k} - c + \sigma^{-1}\widetilde{z}^k) - {\mathcal F} \widetilde{x}^k, x\rangle\Big\},\\[3mm] z^{k} = \widetilde{z}^k + \sigma({\mathcal A}^*x^{k} + {\mathcal B}^*\widetilde{y}^{k} - c),\\[3mm] y^{k} = \argmin_{y}\Big\{h_2(y) + \displaystyle{\frac{1}{2}}\langle y, {\mathcal H} y\rangle + \langle \nabla h_1(\widetilde{y}^k) + \sigma {\mathcal B}({\mathcal A}^*x^{k} + {\mathcal B}^*\widetilde{y}^{k} - c + \sigma^{-1}z^{k}) - {\mathcal H} \widetilde{y}^k, y\rangle\Big\}. \end{array} \right. \end{equation} \item[Step 2.] Terminate if a termination criterion is satisfied, and output $(x^k,y^k,z^k)$. Otherwise, compute \begin{equation}\label{omegak} \widetilde{\omega}^{k+1} = \widetilde{\omega}^k + \rho(\omega^k - \widetilde{\omega}^k). \end{equation} Let $k:=k+1$ and go to Step 1. \end{itemize} \end{framed} For the convergence analysis of the G-ADMM-M, we present a couple of useful lemmas. The results are well known in the field of numerical algebra; hence, we omit their proofs here. \begin{lemma}\label{salop1} For any vectors $u$, $v$ in the same Euclidean vector space $\mathbb{U}$, and any self-adjoint positive semidefinite linear operator ${\mathcal G}: \mathbb{U}\rightarrow\mathbb{U}$, it holds that \begin{equation}\label{noneq} \|u\|_{\mathcal G}^2 + \|v\|_{\mathcal G}^2 \ge \displaystyle{\frac{1}{2}}\|u - v\|_{\mathcal G}^2, \end{equation} and \begin{equation}\label{ident1} 2\langle u, {\mathcal G} v\rangle = \|u\|_{\mathcal G}^2 + \|v\|_{\mathcal G}^2 - \|u - v\|_{\mathcal G}^2 = \|u + v\|_{\mathcal G}^2 - \|u\|_{\mathcal G}^2 - \|v\|_{\mathcal G}^2.
\end{equation} \end{lemma} \begin{lemma}\label{salop2} For any vectors $u_1$, $u_2$, $v_1$, and $v_2$ in the same Euclidean vector space $\mathbb{U}$, and any self-adjoint positive semidefinite linear operator ${\mathcal G}: \mathbb{U}\rightarrow\mathbb{U}$, we have the identity \begin{equation}\label{ident2} 2\langle u_1 - u_2, {\mathcal G} (v_1 - v_2)\rangle = \|u_1 - v_2\|_{\mathcal G}^2 + \|u_2 - v_1\|_{\mathcal G}^2 - \|u_1 - v_1\|_{\mathcal G}^2 - \|u_2 - v_2\|_{\mathcal G}^2. \end{equation} \end{lemma} Denote by $\mathbb{W}^*$ the set of solutions satisfying (\ref{kktcond}), which is nonempty under Assumption \ref{assum}, i.e., the solution set of problem (\ref{prob1}) is nonempty. For $(\bar{x},\bar{y},\bar{z})\in\mathbb{W}^*$ and any $(x,y,z)\in\mathbb{W}$, we denote $x_e := x - \bar{x}$, $y_e := y - \bar{y}$, and $z_e := z -\bar{z}$ for convenience. Using these notations, we have the following three properties, which will be used in the subsequent convergence analysis. \begin{lemma}\label{lem3} Suppose that Assumption \ref{assum} holds. Let $\{(x^k, y^k, z^k)\}$ be generated by Algorithm G-ADMM-M, and $(\bar{x},\bar{y},\bar{z})\in\mathbb{W}^*$. Then for any $\rho\in(0,2)$, $\sigma>0$ and $k\ge 0$, we have \begin{equation}\label{ident3} \begin{array}{l} \Big\langle {\mathcal A}^*x_e^{k+1} + {\mathcal B}^*y_e^k, z_e^{k+1} + \sigma(\rho - 1){\mathcal A}^*x_e^{k+1}\Big\rangle \\[3mm] =\displaystyle{\frac{1}{2\sigma\rho}}\Big(\|z_e^{k+1} + \sigma(\rho - 1){\mathcal A}^*x_e^{k+1}\|^2 - \|z_e^{k} + \sigma(\rho - 1){\mathcal A}^*x_e^{k}\|^2\Big) + \displaystyle{\frac{\sigma\rho}{2}}\|{\mathcal A}^*x_e^{k+1} + {\mathcal B}^*y_e^k\|^2.
\end{array} \end{equation} \end{lemma} \begin{proof} From the iterative schemes (\ref{mipgadmm}) and (\ref{omegak}) in Algorithm G-ADMM-M, we have \begin{equation}\label{lagmtran} \begin{aligned} z^{k+1} &= \widetilde{z}^{k+1} + \sigma({\mathcal A}^*x^{k+1} + {\mathcal B}^*\widetilde{y}^{k+1} - c)\\[3mm] &= z^k - (1-\rho)(z^k - \widetilde{z}^k) + \sigma\rho({\mathcal A}^*x^{k+1} + {\mathcal B}^*y^k - c) + \sigma(1-\rho)({\mathcal A}^*x^{k+1} + {\mathcal B}^*\widetilde{y}^k - c)\\[3mm] &= z^k + \sigma\rho({\mathcal A}^*x^{k+1} + {\mathcal B}^*y^k - c) + \sigma(\rho-1)({\mathcal A}^*x^{k} - {\mathcal A}^*x^{k+1}), \end{aligned} \end{equation} which indicates that $$ \big[z_e^{k+1} + \sigma(\rho - 1){\mathcal A}^*x_e^{k+1}\big] - \big[z_e^{k} + \sigma(\rho - 1){\mathcal A}^*x_e^{k}\big] = \sigma\rho({\mathcal A}^*x_e^{k+1} + {\mathcal B}^*y_e^k). $$ According to Lemma \ref{salop1}, we get \begin{equation}\label{ident4} \begin{array}{l} 2\sigma\rho\langle {\mathcal A}^*x_e^{k+1} + {\mathcal B}^*y_e^k, z_e^{k+1} + \sigma(\rho - 1){\mathcal A}^*x_e^{k+1}\rangle \\[3mm] =\sigma^2\rho^2\|{\mathcal A}^*x_e^{k+1} + {\mathcal B}^*y_e^k\|^2 + \|z_e^{k+1} + \sigma(\rho - 1){\mathcal A}^*x_e^{k+1}\|^2 - \|z_e^{k} + \sigma(\rho - 1){\mathcal A}^*x_e^{k}\|^2. \end{array} \end{equation} Dividing both sides of (\ref{ident4}) by $2\sigma\rho$ yields the desired result (\ref{ident3}). \end{proof} \begin{lemma}\label{lem4} Suppose that Assumption \ref{assum} holds. Let $\{(x^k, y^k, z^k)\}$ be generated by Algorithm G-ADMM-M, and $(\bar{x},\bar{y},\bar{z})\in\mathbb{W}^*$.
Then for any $\rho\in(0,2)$, $\sigma>0$ and $k\ge 0$, we have \begin{equation}\label{ident5} \begin{array}{l} \Big\langle {\mathcal B}^*y_e^k, z^{k} + \sigma({\mathcal A}^*x_e^k + {\mathcal B}^*y_e^k) - z^{k+1} - \sigma(\rho - 1){\mathcal A}^*x_e^{k+1}\Big\rangle \\[3mm] =\displaystyle{\frac{\sigma(2-\rho)}{2}}\Big(\|{\mathcal A}^*x_e^{k} + {\mathcal B}^*y_e^k\|^2 - \|{\mathcal A}^*x_e^{k}\|^2\Big) - \displaystyle{\frac{\sigma\rho}{2}}\Big(\|{\mathcal A}^*x_e^{k+1} + {\mathcal B}^*y_e^k\|^2 - \|{\mathcal A}^*x_e^{k+1}\|^2\Big). \end{array} \end{equation} \end{lemma} \begin{proof} From (\ref{lagmtran}), we have \begin{equation}\nonumber \begin{array}{ll} &z^{k} + \sigma({\mathcal A}^*x_e^k + {\mathcal B}^*y_e^k) - z^{k+1} - \sigma(\rho - 1){\mathcal A}^*x_e^{k+1} \\[3mm] =&-\sigma\rho({\mathcal A}^*x_e^{k+1} + {\mathcal B}^*y_e^k) + \sigma({\mathcal A}^*x_e^{k} + {\mathcal B}^*y_e^k) - \sigma(\rho-1){\mathcal A}^*x_e^{k}. \end{array} \end{equation} Then we have \begin{equation}\nonumber \begin{array}{rl} &\langle {\mathcal B}^*y_e^k, z^{k} + \sigma({\mathcal A}^*x_e^k + {\mathcal B}^*y_e^k) - z^{k+1} - \sigma(\rho - 1){\mathcal A}^*x_e^{k+1}\rangle \\[3mm] =&-\sigma\rho\langle {\mathcal B}^*y_e^k, {\mathcal A}^*x_e^{k+1} + {\mathcal B}^*y_e^k\rangle + \sigma\langle {\mathcal B}^*y_e^k, {\mathcal A}^*x_e^{k} + {\mathcal B}^*y_e^k\rangle - \sigma(\rho-1)\langle {\mathcal B}^*y_e^k, {\mathcal A}^*x_e^{k}\rangle.
\end{array} \end{equation} According to the equality (\ref{ident1}) in Lemma \ref{salop1}, we have \begin{equation}\nonumber \begin{array}{rl} &-\sigma\rho\langle {\mathcal B}^* y_e^k, {\mathcal A}^*x_e^{k+1} + {\mathcal B}^*y_e^k\rangle + \sigma\langle {\mathcal B}^* y_e^k, {\mathcal A}^*x_e^{k} + {\mathcal B}^*y_e^k\rangle - \sigma(\rho-1)\langle {\mathcal B}^* y_e^k, {\mathcal A}^*x_e^{k}\rangle\\[3mm] = &- \frac{\sigma\rho}{2}\Big(\|{\mathcal A}^*x_e^{k+1} + {\mathcal B}^*y_e^k\|^2 + \|{\mathcal B}^*y_e^k\|^2 - \|{\mathcal A}^*x_e^{k+1}\|^2\Big) + \displaystyle{\frac{\sigma}{2}}\Big(\|{\mathcal A}^*x_e^{k} + {\mathcal B}^*y_e^k\|^2 + \|{\mathcal B}^*y_e^k\|^2 - \|{\mathcal A}^*x_e^{k}\|^2\Big) \\[3mm] & - \frac{\sigma(\rho- 1)}{2}\Big(\|{\mathcal A}^*x_e^{k} + {\mathcal B}^*y_e^k\|^2 - \|{\mathcal B}^*y_e^k\|^2 - \|{\mathcal A}^*x_e^{k}\|^2\Big)\\[3mm] = &\frac{\sigma(2-\rho)}{2}\Big(\|{\mathcal A}^*x_e^{k} + {\mathcal B}^*y_e^k\|^2 - \|{\mathcal A}^*x_e^{k}\|^2\Big) - \displaystyle{\frac{\sigma\rho}{2}}\Big(\|{\mathcal A}^*x_e^{k+1} + {\mathcal B}^*y_e^k\|^2 - \|{\mathcal A}^*x_e^{k+1}\|^2\Big). \end{array} \end{equation} The proof is complete.
\end{proof} For notational convenience, we define \begin{equation}\label{psik} \Psi_k(\bar{x},\bar{y},\bar{z}) := \frac{1}{\sigma\rho}\|z_e^k + \sigma(\rho-1){\mathcal A}^*x_e^{k}\|^2 + \sigma(2-\rho)\|{\mathcal A}^*x_e^{k}\|^2 + \frac{1}{\rho}\|\widetilde{x}_e^{k+1}\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2 + \frac{1}{\rho}\|\widetilde{y}_e^k\|_{\widehat{\Sigma}_{h_1}+{\mathcal T}}^2, \end{equation} and \begin{equation}\label{deltak} \begin{array}{l} \delta_k := \|\widetilde{x}^{k+1} - x^{k+1}\|_{\frac{1}{2}\Sigma_{f_1}}^2 + \|\widetilde{y}^{k} - y^{k}\|_{\frac{1}{2}\Sigma_{h_1}+(2-\rho)(\widehat{\Sigma}_{h_1}+{\mathcal T})}^2 + (1-\lambda)(2-\rho)\|\widetilde{x}^{k} - x^{k}\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2\\[3mm] \qquad + \sigma(2\lambda-1)(2-\rho)\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^{k} - c\|^2 + \frac{\sigma(1-\lambda)(2-\rho)}{2}\|{\mathcal B}^*(y^k-y^{k-1})\|^2\\[3mm] \qquad + \frac{\lambda(2-\rho)^2}{\rho}\|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}+\sigma {\mathcal A}{\mathcal A}^*}^2, \end{array} \end{equation} where $\lambda>0$ will be defined later. Furthermore, we also define \begin{equation}\label{thetak} \theta_k := \|\widetilde{x}^{k+1} - x^{k+1}\|_{\frac{1}{2}\Sigma_{f_1}-\widehat{\Sigma}_{f_1}+(2-\rho)(\widehat{\Sigma}_{f_1}+{\mathcal S})}^2 + \|\widetilde{y}^{k} - y^{k}\|_{\frac{1}{2}\Sigma_{h_1}-\widehat{\Sigma}_{h_1}+(2-\rho)(\widehat{\Sigma}_{h_1}+{\mathcal T})}^2, \end{equation} \begin{equation}\label{etak} \eta_k := \Big\langle {\mathcal A}^*x_e^{k+1}, z_e^{k+1} \Big\rangle + \Big\langle {\mathcal B}^*y_e^k, z_e^k + \sigma({\mathcal A}^*x^k + {\mathcal B}^*y^k - c)\Big\rangle, \end{equation} and \begin{equation}\label{xik} \xi_k := \|\widetilde{x}^{k+1} - x^{k+1}\|_{\widehat{\Sigma}_{f_1}}^2 + \|\widetilde{y}^{k} - y^{k}\|_{\widehat{\Sigma}_{h_1}}^2.
\end{equation} The following inequality is essential for the global convergence analysis of Algorithm G-ADMM-M. \begin{lemma}\label{lem5} Suppose that Assumption \ref{assum} holds. Let $\{(x^k, y^k, z^k)\}$ be generated by Algorithm G-ADMM-M, and let $(\bar{x},\bar{y},\bar{z})\in\mathbb{W}^*$. Let $\Psi_k$ and $\theta_k$ be defined as in (\ref{psik}) and (\ref{thetak}). Then for any $\rho\in(0,2)$, $\sigma>0$ and $k\ge 0$, we have \begin{equation}\label{pivineq} \Psi_k(\bar{x},\bar{y},\bar{z}) - \Psi_{k+1}(\bar{x},\bar{y},\bar{z}) \ge \theta_k + \sigma(2-\rho)\|{\mathcal A}^*x^k + {\mathcal B}^*y^k - c\|^2. \end{equation} \end{lemma} \begin{proof} Setting $x':=\widetilde{x}^{k+1}$ in the left inequality of (\ref{f1cov}), we have \begin{equation}\label{ineqcovl} f_1(x) \ge f_1(\widetilde{x}^{k+1}) + \langle x-\widetilde{x}^{k+1},\nabla f_1(\widetilde{x}^{k+1}) \rangle + \displaystyle{\frac{1}{2}}\|x-\widetilde{x}^{k+1}\|_{\Sigma_{f_1}}^2, \end{equation} and setting $x:=x^{k+1}$, $x':=\widetilde{x}^{k+1}$ in the right inequality of (\ref{f1cov}), we have \begin{equation}\label{ineqcovr} f_1(x^{k+1}) \le f_1(\widetilde{x}^{k+1}) + \langle x^{k+1}-\widetilde{x}^{k+1},\nabla f_1(\widetilde{x}^{k+1}) \rangle + \displaystyle{\frac{1}{2}}\|x^{k+1}-\widetilde{x}^{k+1}\|_{\widehat{\Sigma}_{f_1}}^2. \end{equation} Subtracting (\ref{ineqcovr}) from (\ref{ineqcovl}), we obtain \begin{equation}\label{ineq1} f_1(x) - f_1(x^{k+1}) - \langle x-x^{k+1},\nabla f_1(\widetilde{x}^{k+1}) \rangle \ge \displaystyle{\frac{1}{2}}\|x - \widetilde{x}^{k+1}\|_{\Sigma_{f_1}}^2 - \displaystyle{\frac{1}{2}}\|\widetilde{x}^{k+1} - x^{k+1}\|_{\widehat{\Sigma}_{f_1}}^2.
\end{equation} Using the necessary optimality conditions of $x^{k+1}$ in (\ref{mipgadmm}), we have for any $x\in\mathbb{X}$ and $\xi\in\partial f_2(x^{k+1})$ that \begin{equation}\label{varineq} \Big\langle x - x^{k+1}, \xi + {\mathcal F} x^{k+1} + \nabla f_1(\widetilde{x}^{k+1}) + \sigma {\mathcal A}({\mathcal A}^*\widetilde{x}^{k+1} + {\mathcal B}^*\widetilde{y}^{k+1} - c + \sigma^{-1}\widetilde{z}^{k+1}) - {\mathcal F}\widetilde{x}^{k+1} \Big\rangle \ge 0. \end{equation} From the convexity of $f_2(x)$, we get \begin{equation}\label{convf2} \langle x - x^{k+1}, \xi\rangle \le f_2(x) - f_2(x^{k+1}). \end{equation} Then, substituting (\ref{convf2}) into (\ref{varineq}) and using the relaxation step (\ref{omegak}), we obtain \begin{align}\label{ineq2} &f_2(x) - f_2(x^{k+1}) + \Big\langle x - x^{k+1}, \nabla f_1(\widetilde{x}^{k+1}) + \sigma {\mathcal A}({\mathcal A}^*\widetilde{x}^{k+1} + {\mathcal B}^*\widetilde{y}^{k+1} - c + \sigma^{-1}\widetilde{z}^{k+1}) + {\mathcal F}(x^{k+1} - \widetilde{x}^{k+1})\Big\rangle\nonumber\\[3mm] =& f_2(x) - f_2(x^{k+1}) + \Big\langle x - x^{k+1}, \nabla f_1(\widetilde{x}^{k+1}) + {\mathcal A} z^{k+1} - (\widehat{\Sigma}_{f_1}+{\mathcal S})(\widetilde{x}^{k+1} - x^{k+1})\Big\rangle \ge 0. \end{align} Combining (\ref{ineq1}) and (\ref{ineq2}), we obtain \begin{equation}\label{ineq3} \begin{array}{l} f_1(x) - f_1(x^{k+1}) + f_2(x) - f_2(x^{k+1}) + \langle x - x^{k+1}, {\mathcal A} z^{k+1} - (\widehat{\Sigma}_{f_1}+{\mathcal S})(\widetilde{x}^{k+1} - x^{k+1})\rangle\\[3mm] \ge \frac{1}{2}\|x - \widetilde{x}^{k+1}\|_{\Sigma_{f_1}}^2 - \frac{1}{2}\|\widetilde{x}^{k+1} - x^{k+1}\|_{\widehat{\Sigma}_{f_1}}^2.
\end{array} \end{equation} Similarly, for any $y\in\mathbb{Y}$, we have \begin{equation}\label{ineq4} \begin{array}{l} h_1(y) - h_1(y^{k}) + h_2(y) - h_2(y^{k}) + \Big\langle y - y^{k}, \sigma {\mathcal B}({\mathcal A}^*x^{k} + {\mathcal B}^*y^{k} - c + \sigma^{-1}z^{k}) - (\widehat{\Sigma}_{h_1}+{\mathcal T})(\widetilde{y}^{k} - y^{k})\Big\rangle\\[3mm] \ge \frac{1}{2}\|y - \widetilde{y}^{k}\|_{\Sigma_{h_1}}^2 - \frac{1}{2}\|\widetilde{y}^{k} - y^{k}\|_{\widehat{\Sigma}_{h_1}}^2. \end{array} \end{equation} Taking $x=\bar{x}$ and $y=\bar{y}$ and combining (\ref{ineq3}) with (\ref{ineq4}), we obtain \begin{equation}\label{ineq5} \begin{array}{l} f_1(\bar{x}) - f_1(x^{k+1}) + h_1(\bar{y}) - h_1(y^{k}) + f_2(\bar{x}) - f_2(x^{k+1}) + h_2(\bar{y}) - h_2(y^{k})\\[3mm] - \Big\langle x_e^{k+1}, {\mathcal A} z^{k+1} - (\widehat{\Sigma}_{f_1}+{\mathcal S})(\widetilde{x}^{k+1} - x^{k+1})\Big\rangle - \Big\langle y_e^{k}, \sigma {\mathcal B}({\mathcal A}^*x^{k} + {\mathcal B}^*y^{k} - c + \sigma^{-1}z^{k}) - (\widehat{\Sigma}_{h_1}+{\mathcal T})(\widetilde{y}^{k} - y^{k})\Big\rangle\\[3mm] \ge \displaystyle{\frac{1}{2}}\|\widetilde{x}_e^{k+1}\|_{\Sigma_{f_1}}^2 - \displaystyle{\frac{1}{2}}\|\widetilde{x}^{k+1} - x^{k+1}\|_{\widehat{\Sigma}_{f_1}}^2 + \displaystyle{\frac{1}{2}}\|\widetilde{y}_e^{k}\|_{\Sigma_{h_1}}^2 - \displaystyle{\frac{1}{2}}\|\widetilde{y}^{k} - y^{k}\|_{\widehat{\Sigma}_{h_1}}^2.
\end{array} \end{equation} From (\ref{f1cov}) and (\ref{h1cov}), we can easily get \begin{equation}\label{ineq6} f_1(x^{k+1}) - f_1(\bar{x}) \ge \langle x_e^{k+1},\nabla f_1(\bar{x}) \rangle + \displaystyle{\frac{1}{2}}\|x_e^{k+1}\|_{\Sigma_{f_1}}^2, \end{equation} and \begin{equation}\label{ineq7} h_1(y^{k}) - h_1(\bar{y}) \ge \langle y_e^{k},\nabla h_1(\bar{y}) \rangle + \displaystyle{\frac{1}{2}}\|y_e^{k}\|_{\Sigma_{h_1}}^2. \end{equation} According to (\ref{ineq5}), (\ref{ineq6}), (\ref{ineq7}) and the inequality (\ref{noneq}), we obtain \begin{align}\label{ineq8} &f_2(\bar{x}) - f_2(x^{k+1}) + h_2(\bar{y}) - h_2(y^{k}) - \langle x_e^{k+1}, \nabla f_1(\bar{x}) + {\mathcal A}\bar{z}\rangle - \langle y_e^{k}, \nabla h_1(\bar{y}) + {\mathcal B}\bar{z}\rangle - \langle {\mathcal A}^*x_e^{k+1}, z_e^{k+1} \rangle\nonumber\\[3mm] &- \Big\langle {\mathcal B}^*y_e^k, z_e^k + \sigma({\mathcal A}^*x^k + {\mathcal B}^*y^k - c)\Big\rangle + \Big\langle x_e^{k+1}, (\widehat{\Sigma}_{f_1}+{\mathcal S})(\widetilde{x}^{k+1} - x^{k+1})\Big\rangle + \Big\langle y_e^{k}, (\widehat{\Sigma}_{h_1}+{\mathcal T})(\widetilde{y}^{k} - y^{k})\Big\rangle\nonumber\\[3mm] \ge& \frac{1}{2}\|\widetilde{x}_e^{k+1}\|_{\Sigma_{f_1}}^2 - \frac{1}{2}\|\widetilde{x}^{k+1} - x^{k+1}\|_{\widehat{\Sigma}_{f_1}}^2 + \displaystyle{\frac{1}{2}}\|\widetilde{y}_e^{k}\|_{\Sigma_{h_1}}^2 - \frac{1}{2}\|\widetilde{y}^{k} - y^{k}\|_{\widehat{\Sigma}_{h_1}}^2 + \frac{1}{2}\|x_e^{k+1}\|_{\Sigma_{f_1}}^2 + \displaystyle{\frac{1}{2}}\|y_e^{k}\|_{\Sigma_{h_1}}^2\nonumber\\[3mm] \ge& \displaystyle{\frac{1}{4}}\|\widetilde{x}^{k+1} - x^{k+1}\|_{\Sigma_{f_1}}^2 + \displaystyle{\frac{1}{4}}\|\widetilde{y}^{k} - y^{k}\|_{\Sigma_{h_1}}^2 - \displaystyle{\frac{1}{2}}\|\widetilde{x}^{k+1} - x^{k+1}\|_{\widehat{\Sigma}_{f_1}}^2 -
\displaystyle{\frac{1}{2}}\|\widetilde{y}^{k} - y^{k}\|_{\widehat{\Sigma}_{h_1}}^2. \end{align} Replacing $x$ with $x^{k+1}$ and $y$ with $y^k$ in the inequality (\ref{covineq}), we get $$ f_2(\bar{x}) - f_2(x^{k+1}) + h_2(\bar{y}) - h_2(y^{k}) - \langle x_e^{k+1}, \nabla f_1(\bar{x}) + {\mathcal A}\bar{z}\rangle - \langle y_e^{k}, \nabla h_1(\bar{y}) + {\mathcal B}\bar{z}\rangle \le 0. $$ Substituting this inequality and (\ref{etak}) into (\ref{ineq8}) yields \begin{align}\label{ineq9} &-\eta_k + \Big\langle x_e^{k+1}, (\widehat{\Sigma}_{f_1}+{\mathcal S})(\widetilde{x}^{k+1} - x^{k+1})\Big\rangle + \Big\langle y_e^{k}, (\widehat{\Sigma}_{h_1}+{\mathcal T})(\widetilde{y}^{k} - y^{k})\Big\rangle\nonumber\\[3mm] \ge& \frac{1}{4}\|\widetilde{x}^{k+1} - x^{k+1}\|_{\Sigma_{f_1}}^2 + \frac{1}{4}\|\widetilde{y}^{k} - y^{k}\|_{\Sigma_{h_1}}^2 - \frac{1}{2}\|\widetilde{x}^{k+1} - x^{k+1}\|_{\widehat{\Sigma}_{f_1}}^2 - \frac{1}{2}\|\widetilde{y}^{k} - y^{k}\|_{\widehat{\Sigma}_{h_1}}^2.
\end{align} Using Lemmas \ref{lem3} and \ref{lem4}, we have \begin{equation}\label{etakconv} \begin{array}{l} \eta_k = \Big\langle {\mathcal A}^*x_e^{k+1}, z_e^{k+1} \Big\rangle + \Big\langle {\mathcal B}^*y_e^k, z_e^k + \sigma({\mathcal A}^*x^k + {\mathcal B}^*y^k - c)\Big\rangle\\[3mm] \quad = \Big\langle {\mathcal A}^*x_e^{k+1} + {\mathcal B}^*y_e^k, z_e^{k+1} + \sigma(\rho - 1){\mathcal A}^*x_e^{k+1}\Big\rangle - \Big\langle {\mathcal B}^*y_e^k, z_e^{k+1} + \sigma(\rho - 1){\mathcal A}^*x_e^{k+1}\Big\rangle\\[3mm] \qquad - \Big\langle {\mathcal A}^*x_e^{k+1}, \sigma(\rho - 1){\mathcal A}^*x_e^{k+1}\Big\rangle + \Big\langle {\mathcal B}^*y_e^k, z_e^k + \sigma({\mathcal A}^*x^k + {\mathcal B}^*y^k - c)\Big\rangle\\[3mm] \quad = \Big\langle {\mathcal A}^*x_e^{k+1} + {\mathcal B}^*y_e^k, z_e^{k+1} + \sigma(\rho - 1){\mathcal A}^*x_e^{k+1}\Big\rangle + \Big\langle {\mathcal B}^*y_e^k, z_e^k + \sigma({\mathcal A}^*x^k + {\mathcal B}^*y^k - c) - z_e^{k+1}\\[3mm] \qquad - \sigma(\rho - 1){\mathcal A}^*x_e^{k+1}\Big\rangle - \sigma(\rho - 1)\|{\mathcal A}^*x_e^{k+1}\|^2\\[3mm] \quad = \displaystyle{\frac{1}{2\sigma\rho}}\Big(\|z_e^{k+1} + \sigma(\rho - 1){\mathcal A}^*x_e^{k+1}\|^2 - \|z_e^{k} + \sigma(\rho - 1){\mathcal A}^*x_e^{k}\|^2\Big)\\[3mm] \qquad+ \displaystyle{\frac{\sigma(2-\rho)}{2}}\Big(\|{\mathcal A}^*x^{k} + {\mathcal B}^*y^k - c\|^2 + \|{\mathcal A}^*x_e^{k+1}\|^2 - \|{\mathcal A}^*x_e^{k}\|^2\Big).\\[3mm] \end{array} \end{equation} From (\ref{omegak}) and (\ref{ident2}), we have \begin{equation}\label{xekconv} \begin{array}{l} \Big\langle x_e^{k+1}, (\widehat{\Sigma}_{f_1}+{\mathcal S})(\widetilde{x}^{k+1} - x^{k+1})\Big\rangle\\[3mm] = \displaystyle{\frac{1}{\rho}}\Big\langle x^{k+1} - \bar{x}, (\widehat{\Sigma}_{f_1}+{\mathcal S})(\widetilde{x}^{k+1} - \widetilde{x}^{k+2})\Big\rangle\\[3mm] = \displaystyle{\frac{1}{2\rho}}\Big(\|\widetilde{x}_e^{k+1}\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2 - \|\widetilde{x}_e^{k+2}\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2 - \rho(2-\rho)\|x^{k+1} - \widetilde{x}^{k+1}\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2\Big). \end{array} \end{equation} In a similar way, we get \begin{equation}\label{yekconv} \begin{array}{l} \Big\langle y_e^{k}, (\widehat{\Sigma}_{h_1}+{\mathcal T})(\widetilde{y}^{k} - y^{k})\Big\rangle\\[3mm] = \displaystyle{\frac{1}{2\rho}}\Big(\|\widetilde{y}_e^{k}\|_{\widehat{\Sigma}_{h_1}+{\mathcal T}}^2 - \|\widetilde{y}_e^{k+1}\|_{\widehat{\Sigma}_{h_1}+{\mathcal T}}^2 - \rho(2-\rho)\|y^{k} - \widetilde{y}^{k}\|_{\widehat{\Sigma}_{h_1}+{\mathcal T}}^2\Big). \end{array} \end{equation} Combining (\ref{ineq9}), (\ref{etakconv}), (\ref{xekconv}) with (\ref{yekconv}), we obtain \begin{align}\label{ineqconv} &\frac{1}{2\sigma\rho}\Big(\|z_e^{k} + \sigma(\rho - 1){\mathcal A}^*x_e^{k}\|^2 - \|z_e^{k+1} + \sigma(\rho - 1){\mathcal A}^*x_e^{k+1}\|^2\Big) - \frac{\sigma(2-\rho)}{2}\|{\mathcal A}^*x^{k} + {\mathcal B}^*y^k - c\|^2\nonumber\\[3mm] &+ \frac{\sigma(2-\rho)}{2}\Big(\|{\mathcal A}^*x_e^{k}\|^2 - \|{\mathcal A}^*x_e^{k+1}\|^2\Big) - \frac{2-\rho}{2}\|x^{k+1} - \widetilde{x}^{k+1}\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2 \nonumber\\[3mm] &+ \frac{1}{2\rho}\Big(\|\widetilde{x}_e^{k+1}\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2- \|\widetilde{x}_e^{k+2}\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2\Big) - \frac{2-\rho}{2}\|y^{k} - \widetilde{y}^{k}\|_{\widehat{\Sigma}_{h_1}+{\mathcal T}}^2 + \frac{1}{2\rho}\Big(\|\widetilde{y}_e^{k}\|_{\widehat{\Sigma}_{h_1}+{\mathcal T}}^2 -
\|\widetilde{y}_e^{k+1}\|_{\widehat{\Sigma}_{h_1}+{\mathcal T}}^2\Big)\nonumber\\[3mm] \ge & \frac{1}{4}\|\widetilde{x}^{k+1} - x^{k+1}\|_{\Sigma_{f_1}}^2 + \frac{1}{4}\|\widetilde{y}^{k} - y^{k}\|_{\Sigma_{h_1}}^2 - \frac{1}{2}\|\widetilde{x}^{k+1} - x^{k+1}\|_{\widehat{\Sigma}_{f_1}}^2 - \frac{1}{2}\|\widetilde{y}^{k} - y^{k}\|_{\widehat{\Sigma}_{h_1}}^2. \end{align} Substituting the definitions (\ref{psik}) and (\ref{thetak}) into (\ref{ineqconv}), we obtain the conclusion (\ref{pivineq}) immediately. \end{proof} \setcounter{equation}{0} \section{Convergence analysis}\label{section4} Based on the lemmas proved in the previous section, we can establish the convergence of G-ADMM-M for solving problem (\ref{prob1}). For this purpose, we first give some useful lemmas. \begin{lemma}\label{lem6} Suppose that Assumption \ref{assum} holds. Let the sequence $\{(x^k,y^k,z^k)\}$ be generated by Algorithm G-ADMM-M and $(\bar{x},\bar{y},\bar{z})\in \mathbb{W}^*$. Then, for any $\sigma > 0$, $\rho\in(0,2)$, and $k > 1$, we have \begin{align}\label{conneq} \sigma(2-\rho)\|{\mathcal A}^*x^{k} + {\mathcal B}^*y^k - c\|^2 \ge& \sigma(2-\rho)\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^k - c\|^2 + \frac{(2-\rho)^2}{\rho}\|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}+\sigma {\mathcal A}{\mathcal A}^*}^2\nonumber\\[3mm] &- (2-\rho)\|\widetilde{x}^k - x^k\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2. \end{align} \end{lemma} \begin{proof} Note that \begin{align*} &\sigma(2-\rho)\|{\mathcal A}^*x^{k} + {\mathcal B}^*y^k - c\|^2\\[3mm] = &\sigma(2-\rho)\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^k - c + {\mathcal A}^*(x^k - x^{k+1})\|^2\\[3mm] = &\sigma(2-\rho)\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^k - c\|^2 + \sigma(2-\rho)\|{\mathcal A}^*(x^k - x^{k+1})\|^2\\[3mm] &+ 2\sigma(2-\rho)\Big\langle {\mathcal A}^*x^{k+1} + {\mathcal B}^*y^k - c, {\mathcal A}^*(x^k - x^{k+1})\Big\rangle.
\end{align*} From (\ref{lagmtran}), we have \begin{align*} &2\sigma(2-\rho)\Big\langle {\mathcal A}^*x^{k+1} + {\mathcal B}^*y^k - c, {\mathcal A}^*(x^k - x^{k+1})\Big\rangle\\[3mm] =& \frac{2(2-\rho)}{\rho}\Big\langle z_e^{k+1} - z_e^k + \sigma(\rho-1)({\mathcal A}^*x_e^{k+1} - {\mathcal A}^*x_e^k), {\mathcal A}^*(x^k - x^{k+1})\Big\rangle\\[3mm] = &\frac{2(2-\rho)}{\rho}\Big\langle z_e^{k+1} - z_e^k, {\mathcal A}^*(x^k - x^{k+1})\Big\rangle - \frac{2\sigma(\rho-1)(2-\rho)}{\rho}\|{\mathcal A}^*(x^k - x^{k+1})\|^2. \end{align*} Then we have \begin{align}\label{eqcon} &\sigma(2-\rho)\|{\mathcal A}^*x^{k} + {\mathcal B}^*y^k - c\|^2\nonumber\\[3mm] = &\sigma(2-\rho)\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^k - c\|^2 +\frac{(2-\rho)^2}{\rho}\|x^{k+1} - x^{k}\|_{\sigma {\mathcal A}{\mathcal A}^*}^2 + \frac{2(2-\rho)}{\rho}\Big\langle z_e^{k+1} - z_e^k, {\mathcal A}^*(x^k - x^{k+1})\Big\rangle. \end{align} According to the first-order optimality condition of the $x$-subproblem in (\ref{mipgadmm}), we know that \begin{equation}\label{foocx} \begin{array}{l} -\nabla f_1(\widetilde{x}^k) - (\widehat{\Sigma}_{f_1} + {\mathcal S})(x^k - \widetilde{x}^k) - {\mathcal A} z^k \in \partial f_2(x^k),\\[3mm] -\nabla f_1(\widetilde{x}^{k+1}) - (\widehat{\Sigma}_{f_1} + {\mathcal S})(x^{k+1} - \widetilde{x}^{k+1}) - {\mathcal A} z^{k+1} \in \partial f_2(x^{k+1}). \end{array} \end{equation} Hence, from the convexity of $f_2(x)$, we have \begin{equation}\nonumber \Big\langle x^k - x^{k+1}, \nabla f_1(\widetilde{x}^{k+1}) - \nabla f_1(\widetilde{x}^k) + (\widehat{\Sigma}_{f_1} + {\mathcal S})[(x^{k+1} - x^k) - (\widetilde{x}^{k+1} - \widetilde{x}^k)] + {\mathcal A}(z^{k+1} - z^k)\Big\rangle\ge 0.
\end{equation} This yields \begin{align}\label{neqsubxz} &\langle z^{k+1} - z^k, {\mathcal A}^*(x^k - x^{k+1})\rangle\nonumber\\[3mm] =&\langle {\mathcal A}(z^{k+1} - z^k), x^k - x^{k+1}\rangle\nonumber\\[3mm] \ge& \Big\langle x^{k+1} - x^k, \nabla f_1(\widetilde{x}^{k+1}) - \nabla f_1(\widetilde{x}^k)\Big\rangle - \Big\langle x^{k+1} - x^k, (\widehat{\Sigma}_{f_1} + {\mathcal S})(\widetilde{x}^{k+1} - \widetilde{x}^k)\Big\rangle\nonumber\\[3mm] &+ \|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2. \end{align} Noting that $\nabla f_1$ is assumed to be globally Lipschitz continuous, it follows from Clarke's mean-value theorem \cite[Proposition 2.6.5]{OPNSA} that there exists a self-adjoint and positive semidefinite linear operator $\Gamma^k\in\partial^2 f_1(\gamma^k)$, with $\gamma^k$ being a point on the segment between $\widetilde{x}^k$ and $\widetilde{x}^{k+1}$, such that \begin{equation}\nonumber \nabla f_1(\widetilde{x}^{k+1}) - \nabla f_1(\widetilde{x}^k) = \Gamma^k(\widetilde{x}^{k+1} - \widetilde{x}^{k}).
\end{equation} Then, according to (\ref{omegak}) and the left equality in (\ref{ident1}), the inequality (\ref{neqsubxz}) can be reorganized as \begin{equation}\label{neqsubxzd} \begin{array}{l} \Big\langle z^{k+1} - z^k, {\mathcal A}^*(x^k - x^{k+1})\Big\rangle\\[3mm] \ge \Big\langle x^{k+1} - x^k, (\widehat{\Sigma}_{f_1} + {\mathcal S} - \Gamma^k)(\widetilde{x}^{k} - \widetilde{x}^{k+1})\Big\rangle + \|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2\\[3mm] = \rho \Big\langle x^{k+1} - x^k, (\widehat{\Sigma}_{f_1} + {\mathcal S} - \Gamma^k)(\widetilde{x}^{k} - x^{k})\Big\rangle + \|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2\\[3mm] = \displaystyle{\frac{\rho}{2}}\Big[\|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S} - \Gamma^k}^2 + \|\widetilde{x}^k - x^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S} - \Gamma^k}^2 - \|\widetilde{x}^k - x^{k+1}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S} - \Gamma^k}^2\Big] + \|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2. \end{array} \end{equation} Because $\widehat{\Sigma}_{f_1}\succeq \Gamma^k\succeq 0$ and $\frac{1}{2}\widehat{\Sigma}_{f_1} + {\mathcal S}\succeq 0$, we have $$ \widehat{\Sigma}_{f_1} + {\mathcal S} - \frac{1}{2}\Gamma^k = \frac{1}{2}\widehat{\Sigma}_{f_1} + {\mathcal S} + \frac{1}{2}(\widehat{\Sigma}_{f_1} - \Gamma^k)\succeq 0.
$$ Then, according to (\ref{noneq}), the inequality (\ref{neqsubxzd}) can be rewritten as \begin{equation}\label{neqsubxzdef} \begin{array}{l} \Big\langle z^{k+1} - z^k, {\mathcal A}^*(x^k - x^{k+1})\Big\rangle\\[3mm] \ge \displaystyle{\frac{\rho}{2}}\Big[\|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S} - \Gamma^k}^2 + \|\widetilde{x}^k - x^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S} - \Gamma^k}^2 - \|\widetilde{x}^k - x^{k+1}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S} - \frac{1}{2}\Gamma^k}^2\Big]\\[3mm] \quad + \|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2\\[3mm] \ge \displaystyle{\frac{\rho}{2}}\Big[\|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S} - \Gamma^k}^2 + \|\widetilde{x}^k - x^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S} - \Gamma^k}^2 - 2\|x^{k+1} - x^{k}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S} - \frac{1}{2}\Gamma^k}^2\\[3mm] \quad - 2\|\widetilde{x}^k - x^{k}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S} - \frac{1}{2}\Gamma^k}^2\Big] + \|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2\\[3mm] = \displaystyle{\frac{2-\rho}{2}}\|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 - \displaystyle{\frac{\rho}{2}}\|\widetilde{x}^k - x^{k}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2. \end{array} \end{equation} Substituting the inequality (\ref{neqsubxzdef}) into (\ref{eqcon}), it is easy to see that the conclusion (\ref{conneq}) holds. \end{proof} In light of the above lemmas, we are now ready to establish, under some technical conditions, the convergence of the G-ADMM-M algorithm for solving problem (\ref{prob1}). \begin{theorem}\label{convmipgadmm} Suppose that Assumption \ref{assum} holds.
Let the sequence $\{(x^k,y^k,z^k)\}$ be generated by Algorithm G-ADMM-M, and let $(\bar{x},\bar{y},\bar{z})\in \mathbb{W}^*$ be a solution of problem (\ref{prob1}). Let $\Psi_k$, $\delta_k$, $\theta_k$ and $\xi_k$ be defined by (\ref{psik}), (\ref{deltak}), (\ref{thetak}) and (\ref{xik}), respectively. For any $\sigma > 0$, $\rho\in(0,2)$, $\lambda\in(0,1]$, and $k > 1$, it holds that \begin{equation}\label{convneq} \begin{array}{l} \Big(\Psi_k(\bar{x},\bar{y},\bar{z}) + (2-\rho)\|\widetilde{x}^k - x^{k}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 + \sigma(1-\lambda)(2-\rho)\|{\mathcal A}^*x^{k} + {\mathcal B}^*y^{k-1} - c\|^2\Big)\\[3mm] - \Big(\Psi_{k+1}(\bar{x},\bar{y},\bar{z}) + (2-\rho)\|\widetilde{x}^{k+1} - x^{k+1}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 + \sigma(1-\lambda)(2-\rho)\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^{k} - c\|^2\Big)\\[3mm] \ge \delta_k - \xi_k. \end{array} \end{equation} Moreover, if $\lambda\in(\frac{1}{2},1]$, we also assume that \begin{equation}\label{pdefneq} \widehat{\Sigma}_{h_1} + {\mathcal T} \succ 0, \quad {\mathcal F} \succ 0, \quad {\mathcal H} \succ 0, \end{equation} and \begin{equation}\label{sumxik} \sum_{k=1}^{\infty}\xi_k < +\infty. \end{equation} Then the sequence $\{(x^k,y^k)\}$ converges to an optimal solution of the primal problem (\ref{prob1}) and $\{z^k\}$ converges to an optimal solution of the corresponding dual problem.
\end{theorem} \begin{proof} To begin with, it holds that \begin{equation}\label{constieq} \begin{array}{rl} &\|{\mathcal A}^*x^{k} + {\mathcal B}^*y^{k} - c\|^2\\[3mm] = &\|{\mathcal A}^*x^{k} + {\mathcal B}^*y^{k-1} - c + {\mathcal B}^*(y^k - y^{k-1})\|^2\\[3mm] =&\|{\mathcal A}^*x^{k} + {\mathcal B}^*y^{k-1} - c\|^2 + \|{\mathcal B}^*(y^k - y^{k-1})\|^2 + 2 \Big\langle {\mathcal A}^*x^{k} + {\mathcal B}^*y^{k-1} - c, {\mathcal B}^*(y^k - y^{k-1})\Big\rangle\\[3mm] \ge& - \|{\mathcal A}^*x^{k} + {\mathcal B}^*y^{k-1} - c\|^2 + \frac{1}{2}\|{\mathcal B}^*(y^k - y^{k-1})\|^2. \end{array} \end{equation} According to (\ref{noneq}), (\ref{conneq}) and the inequality (\ref{constieq}) above, we have, for any $\sigma > 0$, $\rho\in(0,2)$, $\lambda\in(0,1]$, and $k > 1$, that \begin{align}\label{thetaconstieq} &\theta_k + \sigma(2-\rho)\|{\mathcal A}^*x^{k} + {\mathcal B}^*y^{k} - c\|^2\nonumber\\[3mm] = &\theta_k + (1-\lambda)\sigma(2-\rho)\|{\mathcal A}^*x^{k} + {\mathcal B}^*y^{k} - c\|^2 + \lambda\sigma(2-\rho)\|{\mathcal A}^*x^{k} + {\mathcal B}^*y^{k} - c\|^2\nonumber\\[3mm] \ge& \theta_k + \lambda\Big(\sigma(2-\rho)\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^k - c\|^2 + \frac{(2-\rho)^2}{\rho}\|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}+\sigma {\mathcal A}{\mathcal A}^*}^2\nonumber\\[3mm] &- (2-\rho)\|\widetilde{x}^k - x^k\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2\Big) - (1-\lambda)\sigma(2-\rho)\Big(\|{\mathcal A}^*x^{k} + {\mathcal B}^*y^{k-1} - c\|^2 - \frac{1}{2}\|{\mathcal B}^*(y^k - y^{k-1})\|^2\Big)\nonumber\\[3mm] =& \|\widetilde{x}^{k+1} - x^{k+1}\|_{\frac{1}{2}\Sigma_{f_1}-\widehat{\Sigma}_{f_1}+(2-\rho)(\widehat{\Sigma}_{f_1}+{\mathcal S})}^2 + \|\widetilde{y}^{k} - y^{k}\|_{\frac{1}{2}\Sigma_{h_1}-\widehat{\Sigma}_{h_1}+(2-\rho)(\widehat{\Sigma}_{h_1}+{\mathcal T})}^2\nonumber\\[3mm] &+ (1-\lambda)\sigma(2-\rho)\Big(\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^k - c\|^2 - \|{\mathcal A}^*x^{k} + {\mathcal B}^*y^{k-1} - c\|^2\Big) + (2\lambda-1)\sigma(2-\rho)\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^k - c\|^2\nonumber\\[3mm] &+ \frac{\lambda(2-\rho)^2}{\rho}\|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}+\sigma {\mathcal A}{\mathcal A}^*}^2 - (2-\rho)\|\widetilde{x}^k - x^k\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2 +(1-\lambda)(2-\rho)\|\widetilde{x}^k - x^k\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2\nonumber\\[3mm] & + \frac{\sigma(1-\lambda)(2-\rho)}{2}\|{\mathcal B}^*(y^k - y^{k-1})\|^2\nonumber\\[3mm] = &\|\widetilde{x}^{k+1} - x^{k+1}\|_{(2-\rho)(\widehat{\Sigma}_{f_1}+{\mathcal S})}^2 - (2-\rho)\|\widetilde{x}^k - x^k\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2 + \sigma(1-\lambda)(2-\rho)\Big(\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^k - c\|^2\nonumber\\[3mm] &- \|{\mathcal A}^*x^{k} + {\mathcal B}^*y^{k-1} - c\|^2\Big) + \|\widetilde{x}^{k+1} - x^{k+1}\|_{\frac{1}{2}\Sigma_{f_1}-\widehat{\Sigma}_{f_1}}^2 + \|\widetilde{y}^{k} - y^{k}\|_{\frac{1}{2}\Sigma_{h_1}-\widehat{\Sigma}_{h_1}+(2-\rho)(\widehat{\Sigma}_{h_1}+{\mathcal T})}^2\nonumber\\[3mm] &+ (1-\lambda)(2-\rho)\|\widetilde{x}^k - x^k\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2 + (2\lambda-1)\sigma(2-\rho)\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^k - c\|^2\nonumber\\[3mm] & + \frac{\sigma(1-\lambda)(2-\rho)}{2}\|{\mathcal B}^*(y^k - y^{k-1})\|^2 + \frac{\lambda(2-\rho)^2}{\rho}\|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}+\sigma {\mathcal A}{\mathcal A}^*}^2. \end{align} Using Lemma \ref{lem5} and substituting (\ref{thetaconstieq}) into (\ref{pivineq}), we obtain the inequality (\ref{convneq}) easily.
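As an aside, the elementary bound $\|p+q\|^2 \ge -\|p\|^2 + \frac{1}{2}\|q\|^2$ used in the last step of (\ref{constieq}) (a consequence of $2\langle p,q\rangle \ge -2\|p\|^2 - \frac{1}{2}\|q\|^2$) can be checked numerically; the following sketch is our own illustration, with $p$ playing ${\mathcal A}^*x^{k}+{\mathcal B}^*y^{k-1}-c$ and $q$ playing ${\mathcal B}^*(y^k-y^{k-1})$:

```python
import random

# Numerical check of ||p + q||^2 >= -||p||^2 + (1/2)||q||^2 on random vectors.
random.seed(1)

def nrm2(v):
    return sum(t * t for t in v)

checked = 0
for _ in range(1000):
    n = random.randint(1, 6)
    p = [random.uniform(-5.0, 5.0) for _ in range(n)]
    q = [random.uniform(-5.0, 5.0) for _ in range(n)]
    pq = [a + b for a, b in zip(p, q)]
    assert nrm2(pq) >= -nrm2(p) + 0.5 * nrm2(q) - 1e-9
    checked += 1
```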
Noting that $\delta_k \ge 0$ in the case of $\lambda\in(\frac{1}{2},1]$, and using the conditions (\ref{pdefneq}) and (\ref{sumxik}), we obtain from (\ref{convneq}) that \begin{equation}\label{psikbneq} \begin{array}{l} 0 \le \Psi_{k+1}(\bar{x},\bar{y},\bar{z}) + (2-\rho)\|\widetilde{x}^{k+1} - x^{k+1}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 + \sigma(1-\lambda)(2-\rho)\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^{k} - c\|^2\\[3mm] \quad \le \Psi_k(\bar{x},\bar{y},\bar{z}) + (2-\rho)\|\widetilde{x}^k - x^{k}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 + \sigma(1-\lambda)(2-\rho)\|{\mathcal A}^*x^{k} + {\mathcal B}^*y^{k-1} - c\|^2 + \xi_k\\[3mm] \quad \le \Psi_1(\bar{x},\bar{y},\bar{z}) + (2-\rho)\|\widetilde{x}^1 - x^1\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 + \sigma(1-\lambda)(2-\rho)\|{\mathcal A}^*x^{1} + {\mathcal B}^*y^{0} - c\|^2 + \sum_{j=1}^k\xi_j\\[3mm] \quad < +\infty, \end{array} \end{equation} which means that the sequences $\{\Psi_k(\bar{x},\bar{y},\bar{z})\}$, $\{\|\widetilde{x}^k - x^{k}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2\}$ and $\{\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^{k} - c\|^2\}$ are bounded. According to the definition of $\Psi_k$, we know that the sequences $\{\|z_e^k + \sigma(\rho-1){\mathcal A}^*x_e^{k}\|^2\}$, $\{\|{\mathcal A}^*x_e^{k}\|^2\}$, $\{\|\widetilde{x}_e^{k+1}\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2\}$ and $\{\|\widetilde{y}_e^k\|_{\widehat{\Sigma}_{h_1}+{\mathcal T}}^2\}$ are also bounded. Moreover, from (\ref{noneq}), we get \begin{equation}\label{xekxkneq} \frac{1}{2}\|x_e^{k}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 \le \|\widetilde{x}^k - x^{k}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 + \|\widetilde{x}_e^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2.
\end{equation} Noting also that $\{\|{\mathcal A}^*x_e^{k}\|^2\}$ is bounded, we conclude that the sequence $\{\|x_e^{k}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}+\sigma {\mathcal A}{\mathcal A}^*}^2\}$ is bounded. Then, the sequence $\{\|x^k\|^2\}$ is bounded because of the condition ${\mathcal F}\succ 0$. Similarly, we can show that the sequence $\{\|z^k\|^2\}$ is also bounded. Now, we prove that the sequence $\{\|y^k\|^2\}$ is bounded, too. For any $k\ge1$, from (\ref{convneq}), we have \begin{equation}\nonumber \begin{array}{l} \sum_{j=1}^k\delta_j \le \Big(\Psi_1(\bar{x},\bar{y},\bar{z}) + (2-\rho)\|\widetilde{x}^1 - x^1\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 + \sigma(1-\lambda)(2-\rho)\|{\mathcal A}^*x^{1} + {\mathcal B}^*y^{0} - c\|^2\Big)\\[3mm] -\Big(\Psi_{k+1}(\bar{x},\bar{y},\bar{z}) + (2-\rho)\|\widetilde{x}^{k+1} - x^{k+1}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 + \sigma(1-\lambda)(2-\rho)\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^{k} - c\|^2\Big) + \sum_{j=1}^k\xi_j\\[3mm] < +\infty. \end{array} \end{equation} Hence, $\lim_{k\rightarrow+\infty}\delta_k=0$.
By the definition of $\delta_k$, when $k\rightarrow+\infty$, we have \begin{equation}\label{deltaktend0} \begin{array}{l} \|\widetilde{x}^{k+1} - x^{k+1}\|_{\frac{1}{2}\Sigma_{f_1}}^2\rightarrow 0,\quad \|\widetilde{y}^{k} - y^{k}\|_{\frac{1}{2}\Sigma_{h_1}+(2-\rho)(\widehat{\Sigma}_{h_1}+{\mathcal T})}^2\rightarrow 0,\quad\|\widetilde{x}^{k} - x^{k}\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}}^2\rightarrow 0,\\[3mm] \|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^{k} - c\|^2\rightarrow 0,\quad \|{\mathcal B}^*(y^k-y^{k-1})\|^2\rightarrow 0,\quad \|x^{k+1} - x^k\|_{\widehat{\Sigma}_{f_1}+{\mathcal S}+\sigma {\mathcal A}{\mathcal A}^*}^2\rightarrow 0.\\[3mm] \end{array} \end{equation} Then, from $\|\widetilde{y}^{k} - y^{k}\|_{\frac{1}{2}\Sigma_{h_1}+(2-\rho)(\widehat{\Sigma}_{h_1}+{\mathcal T})}^2\rightarrow 0$ and $\frac{1}{2}\Sigma_{h_1}+(2-\rho)(\widehat{\Sigma}_{h_1}+{\mathcal T})\succ 0$, we deduce that $\|\widetilde{y}^{k} - y^{k}\|\rightarrow 0$. Hence the sequence $\{\|\widetilde{y}^{k} - y^{k}\|\}$ is bounded, and the sequence $\{\|\widetilde{y}_e^k\|\}$ is bounded as well, because $\widehat{\Sigma}_{h_1}+{\mathcal T}\succ 0$. Moreover, from (\ref{noneq}), we have \begin{equation}\nonumber \frac{1}{2}\|y_e^{k}\|^2 \le \|\widetilde{y}^k - y^{k}\|^2 + \|\widetilde{y}_e^k\|^2. \end{equation} Hence, the sequence $\{\|y^k\|^2\}$ is bounded. Consequently, the sequence $\{(x^k,y^k,z^k)\}$ is bounded, and thus there exists at least one convergent subsequence $\{(x^{k_i},y^{k_i},z^{k_i})\}$ with limit, say, $(x^{*},y^{*},z^{*})$. In the following, we prove that $(x^{*},y^{*})$ is an optimal solution of the problem (\ref{prob1}) and $z^*$ is the corresponding Lagrange multiplier.
By using the first-order optimality conditions of the $x$- and $y$-subproblems in (\ref{mipgadmm}) on the subsequence $\{(x^{k_i},y^{k_i},z^{k_i})\}$, we know that \begin{equation}\label{fooconki} \left\{ \begin{array}{l} 0 \in \partial f_2(x^{k_i}) + {\mathcal F} (x^{k_i} - \widetilde{x}^{k_i}) + \nabla f_1(\widetilde{x}^{k_i}) + \sigma {\mathcal A}({\mathcal A}^*\widetilde{x}^{k_i} + {\mathcal B}^*\widetilde{y}^{k_i} - c) + {\mathcal A}\widetilde{z}^{k_i},\\[3mm] 0 \in \partial h_2(y^{k_i}) + {\mathcal H} (y^{k_i} - \widetilde{y}^{k_i}) + \nabla h_1(\widetilde{y}^{k_i}) + \sigma {\mathcal B}({\mathcal A}^*x^{k_i} + {\mathcal B}^*\widetilde{y}^{k_i} - c) + {\mathcal B} z^{k_i}. \end{array} \right. \end{equation} From (\ref{deltaktend0}) and (\ref{fooconki}), we have \begin{equation}\nonumber \left\{ \begin{array}{l} 0 \in \partial f_2(x^{*}) + \nabla f_1({x}^{*}) + {\mathcal A}{z}^{*},\\[3mm] 0 \in \partial h_2(y^{*}) + \nabla h_1({y}^{*}) + {\mathcal B} z^{*},\\[3mm] {\mathcal A}^*x^{*} + {\mathcal B}^*{y}^{*} - c = 0, \end{array} \right. \end{equation} which implies that $(x^*,y^*)$ is an optimal solution to problem (\ref{prob1}) and $z^*$ is the corresponding Lagrange multiplier. Finally, we show that $(x^*,y^*,z^*)$ is the unique limit point of the sequence $\{(x^k,y^k,z^k)\}$. Without loss of generality, let $(\bar{x},\bar{y},\bar{z}):=(x^*,y^*,z^*)$. From the definition of $\Psi_k$ and the fact that the subsequence $\{(x^{k_i},y^{k_i},z^{k_i})\}\rightarrow (x^*,y^*,z^*)$ as $i\rightarrow\infty$, we have $$ \lim_{i\rightarrow+\infty}\Big\{\Psi_{k_i}(\bar{x},\bar{y},\bar{z}) + (2-\rho)\|\widetilde{x}^{k_i} - x^{k_i}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 + \sigma(1-\lambda)(2-\rho)\|{\mathcal A}^*x^{k_i} + {\mathcal B}^*y^{k_i-1} - c\|^2\Big\} = 0.
$$ From (\ref{psikbneq}), for any $k>k_i$, we have \begin{equation}\nonumber \begin{array}{l} 0 \le \Psi_{k+1}(\bar{x},\bar{y},\bar{z}) + (2-\rho)\|\widetilde{x}^{k+1} - x^{k+1}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 + \sigma(1-\lambda)(2-\rho)\|{\mathcal A}^*x^{k+1} + {\mathcal B}^*y^{k} - c\|^2\\[3mm] \quad \le \Psi_{k_i}(\bar{x},\bar{y},\bar{z}) + (2-\rho)\|\widetilde{x}^{k_i} - x^{k_i}\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 + \sigma(1-\lambda)(2-\rho)\|{\mathcal A}^*x^{k_i} + {\mathcal B}^*y^{k_i-1} - c\|^2 + \sum_{j=k_i}^k\xi_j.\\[3mm] \end{array} \end{equation} Combining this inequality with (\ref{deltaktend0}), we get \begin{equation}\nonumber \lim_{k\rightarrow+\infty}\Psi_{k}(\bar{x},\bar{y},\bar{z}) = 0, \end{equation} which means that \begin{subequations} \begin{equation}\label{subzklim} \lim_{k\rightarrow+\infty}\|z_e^k + \sigma(\rho-1){\mathcal A}^*x_e^k\|^2 = 0, \end{equation} \begin{equation}\label{subAkxklim} \lim_{k\rightarrow+\infty}\|{\mathcal A}^*x_e^k\|^2 = 0, \end{equation} \begin{equation}\label{subwxklim} \lim_{k\rightarrow+\infty}\|\widetilde{x}_e^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 = 0, \end{equation} \begin{equation}\label{subwyklim} \lim_{k\rightarrow+\infty}\|\widetilde{y}_e^k\|_{\widehat{\Sigma}_{h_1} + {\mathcal T}}^2 = 0. \end{equation} \end{subequations} From (\ref{xekxkneq}), (\ref{deltaktend0}) and (\ref{subwxklim}), we know that $\lim_{k\rightarrow+\infty}\|x_e^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S}}^2 = 0$. Then, from (\ref{subAkxklim}), we get $$ \lim_{k\rightarrow+\infty}\|x_e^k\|_{\widehat{\Sigma}_{f_1} + {\mathcal S} + \sigma {\mathcal A}{\mathcal A}^*}^2 = 0. $$ Since $\widehat{\Sigma}_{f_1} + {\mathcal S} + \sigma {\mathcal A}{\mathcal A}^*\succ 0$, it follows that $\lim_{k\rightarrow+\infty}x^k = \bar{x}$. Analogously, we get $\lim_{k\rightarrow+\infty}y^k = \bar{y}$.
In a similar way, from (\ref{noneq}), we get \begin{equation}\nonumber \frac{1}{2}\|z_e^{k}\|^2 \le \|z_e^k + \sigma(\rho-1){\mathcal A}^*x_e^k\|^2 + \|\sigma(\rho-1){\mathcal A}^*x_e^k\|^2, \end{equation} and then, from (\ref{subzklim}) and $\lim_{k\rightarrow+\infty}x_e^k = 0$, we get $\lim_{k\rightarrow+\infty}z_e^k = 0$, and hence $\lim_{k\rightarrow+\infty}z^k = \bar{z}$. Therefore, the sequence $\{(x^k,y^k,z^k)\}$ converges to $(\bar{x},\bar{y},\bar{z})$. The proof is complete. \end{proof} \setcounter{equation}{0} \section{Numerical experiments}\label{section5} In this section, we evaluate the feasibility and efficiency of the G-ADMM-M algorithm on some simulated convex composite optimization problems. All experiments are performed under Microsoft Windows 10 and MATLAB R2020b, and run on a Lenovo laptop with an Intel Core i7-1165G7 CPU at 2.80 GHz and 8 GB of memory. We focus on the following convex composite quadratic optimization problem, which was considered in \cite{miPADMM} and \cite{MGADMM}: \begin{equation}\label{numexprb1} \min_{x\in{\mathcal R}^m,y\in{\mathcal R}^n} \ \Big\{\frac{1}{2}\langle y, {\mathcal Q} y \rangle - \langle b, y\rangle + \frac{\chi}{2}\|\Pi_{{\mathcal R}_{+}^m}({\mathcal D} (d-{\mathcal H} y))\|^2 + \mu\|y\|_1 + \delta_{{\mathcal R}_{+}^m}(x) \ : \ {\mathcal H} y + x = c \Big\}, \end{equation} where ${\mathcal Q}$ is an $n\times n$ symmetric and positive semidefinite matrix (not necessarily positive definite); $\chi$ is a nonnegative penalty parameter; $\Pi_{{\mathcal R}_{+}^m}(\cdot)$ denotes the projection onto ${\mathcal R}_{+}^m$; ${\mathcal H}\in{\mathcal R}^{m \times n}$, $b\in{\mathcal R}^n$, $c\in{\mathcal R}^m$, $\mu > 0$ and $d\le c$ are given data; and ${\mathcal D}$ is a positive definite diagonal matrix chosen to normalize each row of ${\mathcal H}$ to have a unit norm.
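To make the objective of the test problem concrete, the following minimal sketch (our own Python illustration, not the MATLAB code used in the experiments; the helper names and the toy data are hypothetical) evaluates the smooth part $h_1$ of the objective:

```python
# Sketch: h1(y) = 1/2 <y,Qy> - <b,y> + chi/2 ||Pi_{R_+^m}(D(d - Hy))||^2
# on toy data, with plain Python lists.  All names below are our own.

def matvec(M, v):
    """Product of a matrix (list of rows) with a vector."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def proj_nonneg(v):
    """Projection onto the nonnegative orthant R_+^m."""
    return [max(vi, 0.0) for vi in v]

def h1(y, Q, b, H, d, D, chi):
    """Smooth part of the objective; D is diagonal, stored as a vector."""
    quad = 0.5 * sum(yi * qi for yi, qi in zip(y, matvec(Q, y)))
    lin = sum(bi * yi for bi, yi in zip(b, y))
    Hy = matvec(H, y)
    r = proj_nonneg([Di * (di - hy) for Di, di, hy in zip(D, d, Hy)])
    return quad - lin + 0.5 * chi * sum(ri * ri for ri in r)
```

For $\chi=0$ the penalty term vanishes and $h_1$ is purely quadratic.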
Observe that problem (\ref{numexprb1}) can be expressed in the form of (\ref{prob1}) by taking \begin{equation}\nonumber \left\{ \begin{array}{l} h_1(y) = \frac{1}{2}\langle y, {\mathcal Q} y \rangle - \langle b, y\rangle + \frac{\chi}{2}\|\Pi_{{\mathcal R}_{+}^m}({\mathcal D} (d-{\mathcal H} y))\|^2, \ h_2(y) = \mu\|y\|_1,\\[3mm] f_1(x) \equiv 0, \ f_2(x) = \delta_{{\mathcal R}_{+}^m}(x), \ {\mathcal A}^* = I, \ {\mathcal B}^* = {\mathcal H}. \end{array} \right. \end{equation} It is not hard to deduce that the KKT system for problem (\ref{numexprb1}) takes the following form: \begin{equation}\label{numexpkkt} \nabla h_1(y) + {\mathcal H}^* z + v = 0, \ v\in\partial\mu\|y\|_1, \ {\mathcal H} y+x-c=0, \ x \ge 0, \ z \ge 0, \ x \circ z=0, \end{equation} where $z$ and $v$ are the corresponding dual variables, and `$\circ$' denotes the elementwise product. In the following, for performance comparisons, we also employ the M-GADMM of Qin et al. \cite{MGADMM} with $\rho = 1.9$ and the M-ADMM of Li et al. \cite{miPADMM} with the step-length parameter $\tau = 1.618$ to solve problem (\ref{numexprb1}). In our algorithm G-ADMM-M, we set the parameter $\mu=5\sqrt{n}$ and $d = c-5e$, where $e$ is the vector of all ones. Besides, we choose the zero vector as the initial point, and the penalty parameter is initialized as $\sigma = 0.8$. We stop the iterative process when the residual of the KKT system (\ref{numexpkkt}) is sufficiently small, that is, \begin{equation}\nonumber Res:= \max\Big\{\frac{\|{\mathcal H} y^{k+1} + x^{k+1} -c\|}{1+\|c\|}, \frac{\|\nabla h_1(y^{k+1}) + {\mathcal H}^*z^{k+1} + v^{k+1}\|}{1+\|b\|}\Big\}\le 10^{-5}. \end{equation} In the numerical experiments, we choose the same proximal terms and operators as in \cite{miPADMM} and \cite{MGADMM}, i.e.
\begin{equation}\nonumber \begin{array}{l} \widehat{\Sigma}_{h_1} = {\mathcal Q} + \chi {\mathcal H}^*{\mathcal D}^2{\mathcal H}, \ \Sigma_{h_1}={\mathcal Q}, \ \widehat{\Sigma}_{f_1} = \Sigma_{f_1}= 0,\\[3mm] {\mathcal T} = \lambda_{\max}(\widehat{\Sigma}_{h_1} + \sigma {\mathcal H}^*{\mathcal H} ){\mathcal I} - (\widehat{\Sigma}_{h_1} + \sigma {\mathcal H}^*{\mathcal H}), \ {\mathcal S} = 0. \end{array} \end{equation} With these choices, it is easy to verify that the conditions (\ref{pdefneq}) in Theorem \ref{convmipgadmm} are satisfied. Using these settings, we choose different pairs of $(m, n)$ to generate the data used in (\ref{numexprb1}); the numerical results derived by each algorithm are listed in Tables \ref{tab1} and \ref{tab2}, which include the number of iterations (Iter), the computing time (Time), and the KKT residual of the solution (Res). From these tables, we find that G-ADMM-M outperforms M-GADMM and M-ADMM in terms of iteration number and computation time in almost all cases, while achieving solutions of similar accuracy. In particular, our proposed algorithm is $15\%$--$35\%$ faster than the M-ADMM.
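The operator ${\mathcal T}$ above is of the standard ``shift by the largest eigenvalue'' form, which is positive semidefinite by construction. A small self-contained check (a hypothetical $2\times 2$ example of our own, not the test data of this section) illustrates this:

```python
import math

def eig2(a, b, c):
    """Eigenvalues (min, max) of the symmetric matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2.0
    disc = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return mean - disc, mean + disc

# Toy matrix M playing the role of hatSigma_{h1} + sigma * H^T H:
a, b, c = 4.0, 1.0, 2.0
lam_max = eig2(a, b, c)[1]
# T = lam_max * I - M; its smallest eigenvalue is 0, so T >= 0:
T = (lam_max - a, -b, lam_max - c)
```

The smallest eigenvalue of the shifted matrix is exactly $\lambda_{\max}-\lambda_{\max}=0$, so the proximal term is positive semidefinite, which is what the conditions (\ref{pdefneq}) require.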
\begin{table}[ht] \centering \renewcommand\tabcolsep{4.5pt} {\scriptsize \caption{\small Numerical results of M-ADMM, M-GADMM and G-ADMM-M with $\chi=0$.} \begin{tabular}{lc|ccc|ccc|ccc} \hline \multirow{2}{*}{m}& \multirow{2}{*}{n}& \multicolumn{3}{c|}{M-ADMM} & \multicolumn{3}{c|}{M-GADMM} & \multicolumn{3}{c}{G-ADMM-M} \\ \cline{3-11} & & Iter & Time & Res & Iter & Time & Res & Iter & Time & Res \\ \hline 200& 500& 5966& 1.44& 9.98e-06& 5032& 1.10& 9.98e-06& 4886& 1.16& 9.98e-06 \\ 500& 200& 576& 0.18& 1.00e-05& 581& 0.17& 9.99e-06& 423& 0.14& 9.96e-06 \\ 500& 500& 794& 1.83& 9.96e-06& 747& 1.59& 9.96e-06& 628& 1.33& 1.00e-05 \\ 1000& 500& 628& 2.80& 9.94e-06& 579& 2.50& 9.98e-06& 444& 1.88& 9.99e-06 \\ 500& 1000& 1354& 4.98& 9.97e-06& 1179& 4.27& 9.90e-06& 1121& 4.12& 9.97e-06 \\ 1000& 1000& 758& 6.39& 9.89e-06& 696& 5.72& 9.98e-06& 558& 4.64& 9.92e-06 \\ 2000& 1000& 630& 11.11& 9.90e-06& 608& 10.61& 9.93e-06& 439& 7.73& 9.96e-06 \\ 1000& 2000& 3088& 46.15& 1.00e-05& 2573& 38.23& 1.00e-05& 2667& 40.05& 1.00e-05 \\ 2000& 2000& 778& 24.39& 9.88e-06& 720& 22.18& 9.96e-06& 550& 17.31& 1.00e-05 \\ 4000& 2000& 587& 41.89& 9.95e-06& 575& 39.78& 9.97e-06& 397& 28.32& 9.88e-06 \\ 2000& 4000& 1816& 105.28& 1.00e-05& 1553& 89.32& 1.00e-05& 1531& 90.12& 9.99e-06 \\ 4000& 4000& 708& 89.75& 9.97e-06& 683& 86.87& 9.99e-06& 524& 68.35& 9.99e-06 \\ 4000& 8000& 1334& 320.39& 9.99e-06& 1103& 262.73& 9.99e-06& 1034& 254.14& 9.99e-06 \\ 8000& 4000& 681& 194.44& 9.95e-06& 708& 197.55& 9.99e-06& 487& 141.90& 9.95e-06 \\ 8000& 8000& 793& 460.87& 9.98e-06& 779& 415.07& 9.97e-06& 547& 306.90& 9.99e-06 \\ \hline \end{tabular}\label{tab1} } \end{table} \begin{table}[ht] \centering \renewcommand\tabcolsep{4.5pt} {\scriptsize \caption{\small Numerical results of M-ADMM, M-GADMM and G-ADMM-M with $\chi=2\mu$.} \begin{tabular}{lc|ccc|ccc|ccc} \hline \multirow{2}{*}{m}& \multirow{2}{*}{n}& \multicolumn{3}{c|}{M-ADMM} & \multicolumn{3}{c|}{M-GADMM} & \multicolumn{3}{c}{G-ADMM-M} \\ \cline{3-11} 
& & Iter & Time & Res & Iter & Time & Res & Iter & Time & Res \\ \hline 200& 500& 6221& 1.44& 9.99e-06& 5221& 1.18& 9.99e-06& 5004& 1.28& 1.00e-05 \\ 500& 200& 593& 0.19& 9.92e-06& 568& 0.21& 9.89e-06& 385& 0.13& 9.31e-06 \\ 500& 500& 526& 1.08& 9.87e-06& 495& 0.97& 9.86e-06& 360& 0.88& 9.58e-06 \\ 1000& 500& 458& 2.09& 1.00e-05& 445& 1.95& 9.93e-06& 296& 1.25& 9.96e-06 \\ 500& 1000& 1715& 6.49& 1.00e-05& 1423& 5.21& 9.99e-06& 1519& 5.53& 9.99e-06 \\ 1000& 1000& 430& 3.72& 9.96e-06& 409& 3.46& 9.94e-06& 303& 2.51& 9.65e-06 \\ 2000& 1000& 445& 7.86& 9.86e-06& 441& 7.76& 9.91e-06& 279& 4.95& 9.90e-06 \\ 1000& 2000& 1466& 21.58& 1.00e-05& 1249& 18.57& 9.97e-06& 1296& 19.57& 9.99e-06 \\ 2000& 2000& 341& 10.87& 9.88e-06& 325& 10.39& 9.86e-06& 237& 7.73& 9.79e-06 \\ 4000& 2000& 433& 30.80& 9.97e-06& 430& 30.06& 9.97e-06& 263& 19.15& 9.82e-06 \\ 2000& 4000& 1276& 74.11& 9.99e-06& 1075& 62.47& 9.96e-06& 1104& 65.02& 9.96e-06 \\ 4000& 4000& 312& 42.04& 9.96e-06& 306& 41.44& 9.94e-06& 211& 29.88& 9.99e-06 \\ 4000& 8000& 1028& 253.05& 1.00e-05& 841& 203.23& 9.97e-06& 903& 224.51& 9.99e-06 \\ 8000& 4000& 463& 135.72& 9.98e-06& 470& 134.67& 9.92e-06& 298& 90.87& 9.92e-06 \\ 8000& 8000& 315& 188.71& 9.91e-06& 314& 183.85& 9.99e-06& 207& 136.79& 9.85e-06 \\ \hline \end{tabular}\label{tab2} } \end{table} \section{Concluding remarks}\label{section6} It is known that the generalized ADMM of Xiao et al. \cite{spGADMM} can solve two-block linearly constrained composite convex programming problems in which the smooth term in each block must be quadratic. This paper enhanced the capability of this method by showing that, if a majorized quadratic function is used to replace the non-quadratic smooth function, the resulting subproblem in the ADMM framework can be split into smaller problems by a symmetric Gauss-Seidel iteration.
The main body of this paper is devoted to the theoretical analysis: we proved that the sequence generated by the proposed method converges to a KKT point of the considered problem. In addition, we performed some simulated experiments to illustrate the effectiveness of the majorization technique and the competitiveness of the proposed method. Finally, we note that further performance evaluation of the method on a wider range of problems deserves investigation. The convergence rate and the iteration complexity of the proposed method are also interesting topics for further research. \section*{Acknowledgments} We would like to thank Z. Ma from Henan University for her valuable comments on the proof of this paper. The work of Y. Xiao is supported by the National Natural Science Foundation of China (Grant No. 11971149). \end{document}
\begin{document} \title{On the time to absorption in $\Lambda$-coalescents} \author{G\"otz Kersting\thanks{Institut f\"ur Mathematik, Goethe Universit\"at, Frankfurt am Main, Germany \newline [email protected], [email protected] \newline Work partially supported by the DFG Priority Programme SPP 1590 ``Probabilistic Structures in Evolution''}$\ $ and Anton Wakolbinger$^*$} \date{} \maketitle \begin{abstract} We present a law of large numbers and a central limit theorem for the time to absorption of $\Lambda$-coalescents, started from $n$ blocks, as $n \to \infty$. The proofs rely on an approximation of the logarithm of the block-counting process of $\Lambda$-coalescents with a dust component by means of a drifted subordinator.\\ {\em AMS 2010 subject classification:} 60J75 (primary), 60J27, 60F05 (secondary)$^{\color{white} \big|}$\\ {\em Keywords:} coalescents, time to absorption, law of large numbers, central limit theorem, subordinator with drift \end{abstract} \section{Introduction and main results} How long does it take to trace the ancestral lineages of a large sample of individuals back to their common ancestor? For populations of constant size this turns into a question on the absorption time of a coalescent, which describes the genealogical tree of $n$ individuals by means of merging partitions. Here we consider coalescents with multiple mergers, also known as $\Lambda$-coalescents, which were introduced in 1999 by Pitman \cite{Pi} and Sagitov \cite{Sa}. If $\Lambda$ is a finite, non-zero measure on $[0,1]$, then the $\Lambda$-coalescent started with $n$ blocks is a continuous-time Markov chain $(\Pi_n(t), t \geq 0)$ taking its values in the set of partitions of $\{1, \dots, n\}$.
It has the property that whenever there are $b$ blocks, each possible transition that involves merging $k \geq 2$ of the blocks into a single block happens at rate \[ \lambda_{b,k} = \int_{[0,1]} p^{k} (1-p)^{b-k} \: \frac{\Lambda(dp)}{p^2}\ , \] and these are the only possible transitions. Let $N_n(t)$ be the number of blocks in the partition $\Pi_n(t)$, $t \ge 0$. Then \[\tau_n := \inf\{t\ge 0: N_n(t) = 1\}\] is the time of the last merger, also called the {\em absorption time} of the coalescent started in $n$ blocks. We will investigate the asymptotic distribution of $\tau_n$ as $n \to \infty$. Our first result is a law of large numbers for the times $\tau_n$. Let \[ \mu:=\int_{[0,1]} \log \frac 1{1-p} \, \frac {\Lambda(dp)}{p^2} \ , \] in particular $\mu=\infty$ in case of $\Lambda(\{0\})>0$ or $\Lambda(\{1\})>0$. \begin{Theo}\label{LLN} For any $\Lambda$-coalescent, \begin{align} \frac{\tau_n}{\log n} \to \frac 1\mu \label{lln} \end{align} in probability as $n \to \infty$. \end{Theo} This theorem says that in a $\Lambda$-coalescent the number of blocks decays at least at an exponential rate. If $\mu=\infty$, then the limit on the right-hand side is 0, and the number of blocks decreases even super-exponentially fast. The case $\mu<\infty$ is equivalently captured by the simultaneous validity of the conditions \[ \int_{[0,1]}\frac{\Lambda(dp)}p < \infty \quad \text{ and }\quad \int_{[0,1]} \log \frac 1{1-p} \, \Lambda (dp) < \infty \ . \] The first one is a requirement on $\Lambda$ in the vicinity of 0; it prohibits a swarm of small mergers (as they occur in coalescents coming down from infinity, meaning that the $\tau_n$ are bounded in probability uniformly in $n$). The second is a condition on $\Lambda$ in the vicinity of 1. It rules out the possibility of mergers which, although appearing only every now and then, are so vast that they make the coalescent collapse.
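As a numerical illustration of these rates (our own check, not part of the paper): for the Bolthausen-Sznitman coalescent, $\Lambda(dp)=dp$, the rate $\lambda_{b,k}$ reduces to the Beta integral $\int_0^1 p^{k-2}(1-p)^{b-k}\,dp=(k-2)!\,(b-k)!/(b-1)!$, which a crude midpoint quadrature reproduces:

```python
import math

# Sanity check of lambda_{b,k} for Lambda(dp) = dp (Bolthausen-Sznitman):
# lambda_{b,k} = int_0^1 p^{k-2} (1-p)^{b-k} dp = (k-2)! (b-k)! / (b-1)!

def rate_numeric(b, k, n=200_000):
    """Midpoint-rule approximation of the Beta integral above."""
    h = 1.0 / n
    return h * sum(((i + 0.5) * h) ** (k - 2) * (1 - (i + 0.5) * h) ** (b - k)
                   for i in range(n))

def rate_exact(b, k):
    """Closed form via the Beta function."""
    return math.factorial(k - 2) * math.factorial(b - k) / math.factorial(b - 1)
```

For instance, $\lambda_{4,2}=1/3$ in this case.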
A counterpart to Theorem \ref{LLN}, with $\tau_n$ in \eqref{lln} replaced by its expectation, was obtained by Herriger and M\"ohle \cite{HeMoe}. Our second result is a central limit theorem. Here we confine ourselves to coalescents with $\mu<\infty$. Then the function \begin{equation} \label{ourf} f(y):= \int_{[0,1]} \frac{1-(1-p)^{e^y} }{e^y} \, \frac{\Lambda(dp)}{p^2} \ , \ y \in \mathbb R \end{equation} is everywhere finite. Moreover, $f$ is a positive, monotonically decreasing, continuous function with $f(y)\to 0$ as $y \to \infty$. Let \[ b_n := \int_\kappa^{\log n} \frac{dy}{\mu- f(y)} \ , \] where we choose $\kappa \ge 0$ such that \[ f(y) \le \frac \mu 2 \text{ for all } y \ge \kappa \ . \] \begin{Theo}\label{CLT} Assume that $\mu< \infty$ and moreover \[ \sigma^2 := \int_{[0,1]} \Big(\log \frac 1{1-p}\Big)^2\, \frac{\Lambda(dp)}{p^2} < \infty \ . \] Then \begin{align}\label{celith} \frac{\tau_n - b_n}{\sqrt{\log n}}\ \stackrel d \to\ N(0, \sigma^2/\mu^3) \end{align} as $n \to \infty$. \end{Theo} Under the additional condition \begin{align}\label{integral} \int_{[0,1]} \log \frac 1p \, \frac{\Lambda(dp)}{p} < \infty \end{align} the CLT \eqref{celith} has been obtained by Gnedin, Iksanov and Marynych \cite{Gne}, with $b_n$ replaced by $\log n/\mu$. (Their condition (9) is equivalent to the above condition \eqref{integral}; see Remark 13 in \cite{Ke}.) Thus the question arises whether the simplified centering by $\log n/\mu$ is always feasible. The next proposition shows that this can be done under a condition that is weaker than \eqref{integral}, but not in any case. \begin{Prop} \label{Prop2} Let $0\le c < \infty$. Then \begin{align} \label{condi2} b_n= \frac{\log n}{\mu}+\frac {2c} {\mu^2}\sqrt {\log n} + o(\sqrt {\log n}) \end{align} as $n \to \infty$, if and only if \begin{align}\label{condi} \sqrt{\log \tfrac 1r}\int_{[0,r]} \frac{\Lambda(dp)}p \to c \end{align} as $r\to 0$.
\end{Prop} \paragraph{Example.} We consider for $\gamma \in \mathbb R$ the finite measures \[ \Lambda(dp) = \big(1+\log \tfrac 1p\big)^{-\gamma} \, dp\ , \ 0\le p \le 1 \ . \] For $\gamma=0$ this gives the Bolthausen-Sznitman coalescent. For $\gamma >1$ it leads to coalescents with $\mu, \sigma^2<\infty$. Note that \eqref{integral} is satisfied iff $\gamma >2$, and \eqref{condi} is fulfilled iff $\gamma >3/2$. Thus within the range $1<\gamma \le 3/2$ one has to resort to the constants $b_n$ in the central limit theorem. The law of large numbers from Theorem~\ref{LLN} holds for all $\gamma >1$. For the regime $\gamma \le 1$, Theorem~\ref{LLN} just tells us that $\tau_n= o_P(\log n)$. For $\gamma=0$, the Bolthausen-Sznitman coalescent, it is known that $\tau_n$ is already down to the order $\log\log n$ \cite{Go}. For $\gamma <0$, applying Schweinsberg's criterion \cite{Schw}, it can be shown that the coalescents come down from infinity. There remains the gap $0< \gamma \le 1$. It is tempting to conjecture that $\tau_n$ is of order $(\log n)^\gamma$ for $0<\gamma < 1$. \qed \mbox{}\\ If condition \eqref{condi} is violated, then the following approximation to $b_n$ may be useful. Starting from the identity \[ \frac{1}{\mu-f(y)} = \frac 1\mu +\frac {f(y)}{\mu^2} + \frac{f^2(y)}{\mu^3} +\cdots + \frac{f^k(y)}{\mu^{k+1}}+ \frac {f^{k+1}(y)}{\mu^{k+1}(\mu-f(y))}\] we obtain the expansion \begin{align*} b_n = \frac {\log n}\mu + \frac {1}{\mu^2} \int_0^{\log n} f(y)\, dy + \cdots + \frac 1{\mu^{k+1}}\int^{\log n}_0 f^k(y)\, dy + O\Big( \int_0^{\log n} f^{k+1}(y)\, dy\Big) \ . \end{align*} Let us now explain the method of proving Theorems \ref{LLN} and \ref{CLT}. We are mainly dealing with $\Lambda$-coalescents having a {\em dust component}. Roughly speaking, these are the coalescents for which the rate at which a single lineage merges with some others from the sample stays bounded as the sample size tends to infinity.
As is well known, this property is characterized by the condition \begin{align} \label{dustcond} \int_{[0,1]} \frac{\Lambda(dp)}{p} < \infty \ . \end{align} An established tool for the analysis of a $\Lambda$-coalescent with dust is the subordinator $S=(S_t)_{t \ge 0}$, which is used to approximate the logarithm of its block-counting process $N_n=(N_n(t))_{t \ge 0}$ (see e.g. Pitman \cite{Pi}, M\"ohle \cite{Moe}, and the above-mentioned paper by Gnedin et al \cite{Gne}). We will recall this subordinator in Sec.~3. Indeed, analogues of Theorems \ref{LLN} and \ref{CLT} are well-known for first-passage times of subordinators with finite first and second moments, respectively. However, this approximation neglects the subtlety that a coalescence of $b$ lineages results in a downward jump of size $b-1$ (and not $b$) for the process $N_n$. This effect becomes significant when many small jumps accumulate over time, as it happens close to the dustless case (and as it becomes visible in Proposition \ref{Prop2} and in the above example). Then the appropriate approximation is provided by a {\em drifted} subordinator $Y_n=(Y_n(t))_{t \ge0}$, given by the SDE \[ Y_n(t) = \log n -S_t + \int_0^t f(Y_n(s))\, ds \ , \ t \ge 0\ ,\] with initial value $Y_n(0)=\log n$. The drift compensates the just-mentioned difference between $b$ and $b-1$. In Kersting et al \cite{Ke} it is shown that \[ \sup_{t<\tau_n}\big|Y_n(t)- \log N_n(t)\big|=O_P(1)\] as $n \to \infty$, that is, these random variables are bounded in probability. In Sec.~3 we suitably strengthen this result. In Sec.~2 we provide the required limit theorems for passage times for a more general class of drifted subordinators. The above results are then proved in Sec.~4. It turns out that the regime considered by Gnedin et al \cite{Gne} is the one in which the random variables $\int_0^{\tau_n} f(Y_n(s))\, ds$ are bounded in probability uniformly in $n$.
This can be seen to be equivalent to the requirement $\int_0^\infty f(y)\, dy<\infty$, which likewise is equivalent to \eqref{integral} (see the proof of Corollary 12 in \cite{Ke}). Under this assumption Gnedin et al \cite{Gne} proved their central limit theorem also with non-normal (stable or Mittag-Leffler) limiting distributions of $\tau_n$. A similar generalization of Theorem \ref{CLT} is feasible in the general dust case, without the requirement \eqref{integral}. \section{Limit theorems for a drifted subordinator} Let $S=(S_t)_{t \ge 0}$ be a pure jump subordinator with L\'evy measure $\lambda$ on $(0,\infty)$. Recall that this requires \[ \int_0^\infty (y\wedge 1) \, \lambda(dy) < \infty \ .\] With regard to the mentioned properties of the function in \eqref{ourf}, let $f:\mathbb R \to \mathbb R$ be an arbitrary positive, non-increasing, continuous function with \[ \lim_{y\to \infty} f(y)=0\ . \] Let the process $Y^z=(Y^z_t)_{t \ge 0}$ denote the unique solution of the SDE \begin{align} \label{SDE} Y^z_t = z-S_t + \int_0^t f(Y^z_s)\, ds \end{align} with initial value $z> 0$. We will investigate the asymptotic behaviour of its passage times across $x \in \mathbb R$, \[ T^z_x := \inf\{ t \ge 0: Y^z_t < x \}\ ,\] in the limit $z \to \infty$. The first result provides a law of large numbers. Denote \begin{align} \mu := \int_{(0,\infty)} y\, \lambda(dy) \ . \label{defmu} \end{align} \begin{Prop} \label{Prop3} Assume that $\mu<\infty$. Then for any $x \in \mathbb R$ \[ \frac{1}z \, T^z_x \to \frac 1\mu \] in probability as $z\to \infty$. \end{Prop} \begin{proof} Let $z >x$. Then \[ \{T^z_x \ge t\} = \{ Y^z_s \ge x \text{ for all }s \le t \} = \Big\{S_s \le z-x + \int_0^sf(Y^z_u)\, du \text{ for all } s \le t\Big\} \ . 
\] By positivity of the function $f$ it follows that $ \mathbf P( T^z_x \ge t) \ge \mathbf P( S_t \le z-x) $, thus for any $\varepsilon >0$ \begin{equation}\label{oneplusepsilon} \mathbf P\Big(T_x^{ z}\ge (1-\varepsilon)\frac z{\mu} \Big) \ge \mathbf P\big( S_{(1-\varepsilon)z/\mu} \le z -x\big) \ . \end{equation} Now $\mu=\mathbf E[S_1]$, thus by the law of large numbers \[\frac{S_t}t \to \mu \] a.s., hence the right-hand term in \eqref{oneplusepsilon} converges to 1 as $z \to \infty$, and hence also \[\mathbf P\Big(T_x^{ z}\ge (1-\varepsilon)\frac z{\mu} \Big) \to 1 \ . \] On the other hand, \begin{align*}\{ T^z_x \ge t\} = \{ Y^z_s \ge x \text{ for all } s \le t\}=\Big\{ Y^z_s \ge x \text{ for all }s \le t\, ,\, S_t \le z -x + \int_0^t f(Y_s^z)\,ds \Big\} \ . \end{align*} Monotonicity of $f$ implies $ \mathbf P( T^z_x \ge t ) \le \mathbf P \big(S_t \le z -x + t f(x) \big) $. Therefore, since $f(x)\to 0$ as $x \to \infty$, \begin{align*} \mathbf P\Big(T_x^{ z}\ge (1+\varepsilon)\frac z{\mu} \Big) &\le \mathbf P\Big( S_{(1+\varepsilon)z/\mu } \le z -x + (1+\varepsilon)\frac z{\mu}f(x) \Big)\\&\le \mathbf P\big( S_{(1+\varepsilon)z/\mu } \le z(1+\varepsilon/2) -x\big) \end{align*} provided $x$ is sufficiently large. Now the right-hand term converges to 0, thus it follows that \[ \mathbf P\Big(T_x^{ z}\ge (1+\varepsilon)\frac z{\mu} \Big) \to 0\ . \] Note that we proved this result only for $x$ sufficiently large, depending on $\varepsilon$. However, this restriction may be removed, since for fixed $x_1<x_2$ the random variables $T_{x_1}^z-T_{x_2}^z$ are bounded in probability uniformly in $z$. Thus altogether we have for any $x$ \[ \mathbf P\Big( (1-\varepsilon) \frac z{\mu} \le T^z_x < (1+\varepsilon) \frac z{\mu} \Big)\to 1\] as $z \to \infty$, which (since $\varepsilon > 0$ was arbitrary) is our assertion. \end{proof} \mbox{}\\ Now we turn to a central limit theorem for passage times of the processes $Y^z$.
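Proposition \ref{Prop3} can be illustrated by a toy simulation. The setup below is entirely our own (it is not the coalescent subordinator of Sec.~3): $S$ is a compound Poisson subordinator of rate $1$ with Exp(1) jumps, so $\mu=\mathbf E[S_1]=1$, the drift function is $f(y)=\tfrac12 e^{-y}$, and \eqref{SDE} is discretized by a crude Euler scheme:

```python
import math, random

def passage_time(z, x=0.0, dt=0.01, seed=0):
    """Euler scheme for Y_t = z - S_t + int_0^t f(Y_s) ds, with the toy
    choices f(y) = 0.5*exp(-y) and S compound Poisson of rate 1 with
    Exp(1) jumps (so mu = E[S_1] = 1).  Returns the first time Y < x."""
    rng = random.Random(seed)
    t, y = 0.0, z
    while y >= x:
        # a jump of S occurs in [t, t+dt) with probability ~ dt
        jump = rng.expovariate(1.0) if rng.random() < dt else 0.0
        y += -jump + 0.5 * math.exp(-y) * dt   # -dS_t + f(Y_t) dt
        t += dt
    return t
```

For large $z$ the ratio $T^z_x/z$ should concentrate near $1/\mu=1$, in agreement with the proposition (up to discretization and sampling error).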
Let the function $\beta_z$, $z\ge \kappa$, be given by \begin{equation}\label{defbeta} \beta_z:= \int_\kappa^z \frac{dy}{\mu-f(y)} \ , \end{equation} where we choose $\kappa\ge 0$ so large that \[ \sup_{y \ge \kappa} f(y) \le \frac \mu 2 \ . \] \begin{Prop}\label{Prop4} Suppose that \begin{align}\label{sigma} \sigma^2 := \int_{(0,\infty)} y^2 \, \lambda (dy) < \infty\ . \end{align} Then \[ \frac{ T_x^z- \beta_z}{\sqrt z}\ \stackrel d \to\ N(0, \sigma^2/\mu^3) \] as $z \to \infty$. \end{Prop} \begin{proof} (i) Note again that for $x_1 < x_2$ the random variables $T^z_{x_1}-T^z_{x_2}$ are bounded in probability uniformly in $z$. Thus it suffices to prove our theorem for all $x \ge x_0$ for some $x_0\in \mathbb R$. Therefore we may change $f(x)$ for all $x<x_0$. We do it in such a way that $f(x)\le \mu/2$ for all $x\in \mathbb R$, without touching the other properties of $f$. Thus we assume from now on that \begin{align} \label{fmu} f(y) \le \frac \mu 2 \quad \text{ for all } y \in \mathbb R\ \end{align} and set $\kappa=0\ $ in \eqref{defbeta}. Consequently, \begin{equation}\label{betabounds} \frac z\mu \le \beta_z\le \frac{2z}\mu\ , \quad z >0\ . \end{equation} For any $z >0$ we define the function $\rho^z(t)=\rho^z_t$, $0 \le t \le \beta_z$, such that \[ \beta_{\rho^z(t)}=\beta_z-t\ , \quad 0\le t \le \beta_z\ , \] in particular $\rho^z(0)=z$ and $\rho^z(\beta_z)=0$. This means that $\rho^z$ arises by first inverting the function $\beta$ (restricted to the interval $[0, z]$), and then reversing the time parameter on its domain $[0, \beta_z]$. By differentiation we obtain \[ \dot\rho^z_t= f(\rho^z_t)-\mu\ , \] consequently $\dot \rho^z_t \le - \mu/2$ and \[ \rho^z_t = z - \mu t + \int_0^t f(\rho^z_s)\, ds\ . \] (ii) A glance at \eqref{SDE} suggests that $\rho^z$ will make a good approximation for the process $Y^z$.
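The defining identity $\beta_{\rho^z(t)}=\beta_z-t$ can be checked numerically in a toy case of our own choosing ($\mu=2$ and $f(y)=e^{-y}$, so that $f\le\mu/2$ holds on $[0,\infty)$ and we may take $\kappa=0$):

```python
import math

# Toy check of beta(rho^z(t)) = beta_z - t with mu = 2, f(y) = exp(-y).
MU = 2.0
f = lambda y: math.exp(-y)

def beta(z, n=100_000):
    """beta_z = int_0^z dy / (mu - f(y)), midpoint rule."""
    h = z / n
    return h * sum(1.0 / (MU - f((i + 0.5) * h)) for i in range(n))

def rho(z, t, dt=1e-4):
    """Solve rho' = f(rho) - mu forward from rho(0) = z up to time t."""
    y, s = z, 0.0
    while s < t:
        y += (f(y) - MU) * dt
        s += dt
    return y
```

Along $\rho^z$ one has $\frac{d}{dt}\beta(\rho^z_t)=\dot\rho^z_t/(\mu-f(\rho^z_t))=-1$, which is exactly what the numerical check confirms.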
In order to estimate their difference observe that \[ Y_t^z- \rho^z_t = -(S_t-\mu t) + \int_0^t (f(Y^z_s)-f(\rho^z_s))\, ds \ .\] For given $t>0$ define \begin{align*} u_t =\begin{cases} \sup \{ s<t: Y^z_s \le \rho^z_s\} \mbox{ on the event } Y_t^z > \rho^z_t \\ \sup \{ s<t: Y^z_s \ge \rho^z_s\} \mbox{ on the event } Y_t^z < \rho^z_t \end{cases} \end{align*} and $u_t:=t$ on the event $Y_t^z = \rho^z_t$. We have $0 \le u_t \le t$, since $Y^z_0=z= \rho_0^z$. Because $f$ is a decreasing function, the event $Y_t^z > \rho^z_t$ implies that \begin{align*}Y_t^z- \rho^z_t &\le Y_t^z- \rho^z_t - \int_{u_t}^t (f(Y^z_s)-f(\rho^z_s))\, ds - (Y^z_{u_t-}- \rho^z_{u_t-})\\ &= -(S_t- \mu t) + (S_{u_t-}-\mu u_t) \ . \end{align*} On the event $Y_t^z < \rho^z_t$ there is an analogous estimate from below, altogether \begin{align*} |Y_t^z- \rho^z_t| \le 2 M_t \quad \text{ with } M_t:= \sup_{u \le t} |S_u-\mu u| \ . \end{align*} Consequently, $Y^z_s \ge \rho^z_s-2M_s \ge \rho^z_s-2M_t$ for $s \le t$ and by means of the monotonicity of $f$ \begin{align*} \int_0^t f(Y^z_s)\, ds -\int_0^tf(\rho^z_s)\, ds \le \int_0^t f(\rho^z_s-2M_t)\, ds -\int_0^tf(\rho^z_s)\, ds \le 2M_t f(\rho^z_t-2M_t)\ . \end{align*} An analogous estimate is valid from below, and we obtain \begin{align} \Big|\int_0^t f(Y^z_s)\, ds -\int_0^tf(\rho^z_s)\, ds \Big| \le 2M_t f(\rho^z_t-2M_t)\ . \label{Mrho} \end{align} At this point we recall that, under the above assumptions on the subordinator $S$, by Donsker's invariance principle we have \[ M_t = O_P( \sqrt t) \] as $t \to \infty$. (iii) Now we derive some upper estimates of probabilities.
Given $a,x\in \mathbb R$, we have for any $c>0$ \begin{align*} \mathbf P&( T^z_x \ge \beta_z + a \sqrt z) = \mathbf P(Y^z_t \ge x \text{ for all } t \le \beta_z+a\sqrt z)\\ &= \mathbf P\Big(S_{\beta_z + a\sqrt z} \le z-x + \int_0^{\beta_z + a\sqrt z} f(Y^z_s) \, ds\, , \,Y^z_t \ge x \text{ for all } t \le \beta_z+a\sqrt z\Big)\\ &\le \mathbf P\Big(S_{\beta_z + a\sqrt z} \le z-x + f(x)(c+|a|)\sqrt z+ \int_0^{\beta_z -c\sqrt z} f(Y^z_s) \, ds\Big)\ . \end{align*} We now bring \eqref{Mrho} into play. From the definition of $\rho^z$ we have, writing $\beta(y) = \beta_y$, that \[\beta(\rho^z(\beta_z-c\sqrt z))= c\sqrt z,\] thus because of \eqref{betabounds} \[\rho^z(\beta_z-c\sqrt z) \ge \frac{c\sqrt z}{2\mu}.\] Then on the event $M_{\beta_z} \le {c\sqrt z} /{(8\mu)}$ we have \[\rho^z(\beta_z-c\sqrt z) - 2M_{\beta_z-c\sqrt z}\ge \frac{c\sqrt z}{2\mu}- \frac{c\sqrt z}{4\mu} = \frac{c\sqrt z}{4\mu}. \] Consequently, by means of \eqref{Mrho} and since $\beta_z \le 2z/\mu$ \begin{align} \mathbf P( T^z_x &\ge \beta_z + a \sqrt z) \le \mathbf P\Big(M_{2z/\mu} > \frac{c\sqrt z}{8\mu}\Big) \notag\\ & \mbox{}+ \mathbf P\Big(S_{\beta_z + a\sqrt z} \le z-x + f(x)(c+|a|)\sqrt z+ \int_0^{\beta_z} f(\rho^z_s) \, ds+ \frac{c\sqrt z}{4\mu} f\Big(\frac{c\sqrt z}{4\mu}\Big) \Big) \ . \label{Ungl1} \end{align} Moreover, by definition of $\rho^z$, \[ z+\int_0^{\beta_z} f(\rho^z_s) \, ds =\rho^z(\beta_z)+\mu \beta_z = \mu \beta_z . \] Therefore, if we fix $\varepsilon>0$, let $c$ be so large that the first right-hand probability in \eqref{Ungl1} is smaller than $\varepsilon$, then choose $z$ so large that $(c/4\mu) f(\frac{c\sqrt z}{ 4\mu}) \le \varepsilon $, and also choose $x >0$ so large that $cf(x)(c+|a|) \le \varepsilon $, then we end up with \[ \mathbf P( T^z_x \ge \beta_z + a \sqrt z) \le \varepsilon + \mathbf P\Big(S_{\beta_z + a\sqrt z} \le \mu\beta_z +2 \varepsilon\sqrt z \Big)\ .
\] Moreover, by the law of large numbers, \[ S_{\beta_z + a\sqrt z}- S_{\beta_z} \sim \mu a\sqrt z \] in probability. Therefore \[\mathbf P( T^z_x \ge \beta_z + a \sqrt z) \le 2\varepsilon+ \mathbf P\big(S_{\beta_z} \le \mu\beta_z + (-\mu a+3\varepsilon)\sqrt z \big)\ .\] Moreover $\mu \beta_z \sim z $, hence \[\mathbf P( T^z_x \ge \beta_z + a \sqrt z) \le 2 \varepsilon+ \mathbf P\big(S_{\beta_z} \le \mu\beta_z + (-\mu a+4\varepsilon)\mu^{1/2} \sqrt{\beta_z} \big)\] for large $z$. Now from assumption \eqref{sigma} and the central limit theorem it follows that \[ \frac{S_t-\mu t} {\sqrt {\sigma^2 t}} \ \stackrel d\to\ L \ , \] where $L$ denotes a standard normal random variable. Thus \[ \limsup_{z \to \infty} \mathbf P( T^z_x \ge \beta_z + a \sqrt z)\le 2\varepsilon + \mathbf P (L \le (-\mu a+4\varepsilon)\mu^{1/2} \sigma^{-1})\ . \] Note that the choice of $x$ depends on $\varepsilon$ in our proof. However, since again the differences $T^z_{x_1}-T^z_{x_2}$ are bounded in probability uniformly in $z$, this estimate generalizes to all $x$. Now letting $\varepsilon \to 0$ we obtain \[\limsup_{z \to \infty} \mathbf P \Big( \frac{T^z_x- \beta_z}{\sqrt z} \ge a\Big) \le \mathbf P(L \le -\mu^{3/2} \sigma^{-1}a) \ . \] This is the first part of our claim. (iv) For the lower estimates we first introduce the random variable \[ R_{z,x} := \sup\{ t\ge 0: Y^z_t \ge x\} - \inf \{ t \ge 0: Y^z_t < x\} \] which is the length of the time interval during which $Y^z_t-x$ changes from positive to ultimately negative sign (note that the paths of $Y^z$ are {\em not} monotone). We claim that these random variables are bounded in probability, uniformly in $z$ and $x$. Indeed, with \[ \eta_{z,x}:= \inf \{t \ge 0: Y^z_t < x\} \] we have, for $t>\eta=\eta_{z,x}$, because of $Y^z_\eta \le x$ and \eqref{fmu} \begin{align*}Y_t^z &= Y_\eta^z -(S_t-S_\eta) +\int_\eta^tf(Y^z_s)\, ds\le x -(S_t-S_\eta) + \frac \mu 2 (t-\eta) \ .
\end{align*} Thus $R_{z,x}$ is bounded from above by \[ R_{z,x}' := \sup \{u \ge 0: (S_{\eta_{z,x}+u}-S_{\eta_{z,x}}) - \mu u/2\le 0 \}\ . \] These random variables are a.s. finite. Moreover, they are identically distributed, since the $\eta_{z,x}$ are stopping times. This proves that the $R_{z,x}$ are uniformly bounded in probability. Now for the lower bounds we have for $a,b \in \mathbb R$ \begin{align*} \mathbf P(T^z_x \ge \beta_z + a \sqrt z) &\ge \mathbf P( Y^z_t \ge x \text{ for all } t \le \beta_z+a\sqrt z\, ,\, R_{z,x} \le b)\\ &= \mathbf P( Y^z_t \ge x \text{ for all } \beta_z +a\sqrt z- b\le t \le \beta_z+a\sqrt z\, ,\, R_{z,x} \le b)\ . \end{align*} For these $t$ we have \[Y_t^z = z- S_t+ \int_0^t f(Y^z_s)\, ds \ge z-S_{\beta_z+a\sqrt z}+ \int_0^{\beta_z +a\sqrt z- b} f(Y^z_s)\, ds \ , \] therefore \begin{align*} \mathbf P(T^z_x \ge \beta_z + a \sqrt z) &\ge \mathbf P\Big( S_{\beta_z+a\sqrt z} \le z- x+ \int_0^{\beta_z+a\sqrt z- b}f(Y^z_s)\, ds \, , \, R_{z,x} \le b\Big)\\ &\ge \mathbf P\Big( S_{\beta_z+a\sqrt z} \le z- x+ \int_0^{\beta_z-c\sqrt z}f(Y^z_s)\, ds\Big) - \mathbf P(R_{z,x}>b) \end{align*} for $c$ sufficiently large. We now bring, as in part (iii), \eqref{Mrho} into play. Proceeding analogously, we obtain instead of \eqref{Ungl1} the estimate \begin{align*} \mathbf P( T^z_x \ge \beta_z + a \sqrt z) \ge &- \mathbf P(R_{z,x}>b)-\mathbf P\Big(M_{2z/\mu} > \frac{c\sqrt z}{8\mu}\Big) \\ & \mbox{}+ \mathbf P\Big(S_{\beta_z + a\sqrt z} \le z-x + \int_0^{\beta_z-c\sqrt z} f(\rho^z_s) \, ds- \frac{c\sqrt z}{4\mu} f\Big(\frac{c\sqrt z}{4\mu}\Big) \Big) \ .
\end{align*} Also, since $\rho^z_{\beta_z}=0$ and $\dot \rho^z_t \le -\mu/2$, \[ \int_{\beta_z-c\sqrt z}^{\beta_z} f(\rho^z_s) \, ds \le \int_0^{c\sqrt z} f(\mu s/2) \, ds = o(\sqrt z) \ .\] Hence, for given $\varepsilon>0$ and $z$ sufficiently large, \begin{align*} \mathbf P( T^z_x \ge \beta_z + a \sqrt z) \ge &- \mathbf P(R_{z,x}>b)-\mathbf P\Big(M_{2z/\mu} > \frac{c\sqrt z}{8\mu}\Big) \\ & \mbox{}+ \mathbf P\Big(S_{\beta_z + a\sqrt z} \le z-\varepsilon \sqrt z + \int_0^{\beta_z} f(\rho^z_s) \, ds- \frac{c\sqrt z}{4\mu} f\Big(\frac{c\sqrt z}{4\mu}\Big) \Big) \ . \end{align*} Returning to the arguments of part (iii), we choose $b$, $c$ and then $z$ so large that we arrive at \[ \mathbf P( T^z_x \ge \beta_z + a \sqrt z) \ge - 2\varepsilon + \mathbf P\Big(S_{\beta_z + a\sqrt z} \le \mu \beta_z- 2\varepsilon \sqrt z \Big) \] and further at \[ \liminf_{z \to \infty} \mathbf P( T^z_x \ge \beta_z + a \sqrt z)\ge -3\varepsilon + \mathbf P (L \le (-\mu a-3\varepsilon)\mu^{1/2}\sigma^{-1} )\ . \] The limit $\varepsilon \to 0$ leads to the desired lower estimate. \end{proof} \section{Approximating the block counting process} In this section we derive a strengthening of a result of Kersting, Schweinsberg and Wakolbinger~\cite{Ke} on the approximation of the logarithm of the block counting process in the dust case. To this end, let us quickly recall the Poisson point process construction of the $\Lambda$-coalescent given in \cite{Ke}, which is a slight variation of the construction provided by Pitman in \cite{Pi}. This construction requires $\Lambda(\{0\}) = 0$, which is fulfilled for coalescents with dust. Consider a Poisson point process $\Psi$ on $(0, \infty) \times (0, 1] \times [0, 1]^n$ with intensity $$dt \times p^{-2} \Lambda(dp) \times du_1 \times \dots \times du_n\ ,$$ and let $\Pi_n(0) = \{\{1\}, \dots, \{n\}\}$ be the partition of the integers $1, \dots, n$ into singletons.
Suppose $(t, p, u_1, \dots, u_n)$ is a point of $\Psi$, and $\Pi_n(t-)$ consists of the blocks $B_1, \dots, B_b$, ranked in order by their smallest element. Then $\Pi_n(t)$ is obtained from $\Pi_n(t-)$ by merging together all of the blocks $B_i$ for which $u_i \leq p$ into a single block. These are the only times that mergers occur. This construction is well-defined because almost surely for any fixed $t' < \infty$, there are only finitely many points $(t, p, u_1, \dots, u_n)$ of $\Psi$ for which $t \leq t'$ and at least two of $u_1, \dots, u_n$ are less than or equal to $p$. The resulting process $\Pi_n = (\Pi_n(t), t \geq 0)$ is the $\Lambda$-coalescent. When $(t, p, u_1, \dots, u_n)$ is a point of $\Psi$, we say that a $p$-merger occurs at time $t$. Condition \eqref{dustcond} allows us to approximate the number of blocks in the $\Lambda$-coalescent by a subordinator. Let $\phi: (0, \infty) \times (0, 1] \times [0, 1]^n \rightarrow (0, \infty) \times (0, \infty]$ be the function defined by $$\phi(t, p, u_1, \dots, u_n) = (t, -\log(1-p)).$$ Now $\phi(\Psi)$ is a Poisson point process, and we can define a pure jump subordinator $(S(t), t \geq 0)$ having the property that $S(0) = 0$ and, if $(t, x)$ is a point of $\phi(\Psi)$, then $S(t) = S(t-) + x$. With $\lambda$ the L\'evy measure of $S$, the formulas \eqref{defmu} and \eqref{sigma} now read \[ \mu=\int_{[0,1]} \log \frac 1{1-p} \, \frac {\Lambda(dp)}{p^2} \ \text{ and }\ \sigma^2 = \int_{[0,1]} \Big(\log \frac 1{1-p}\Big)^2\, \frac{\Lambda(dp)}{p^2} \ .\] This subordinator first appeared in the work of Pitman \cite{Pi} and was used to approximate the block-counting process by Gnedin et al. \cite{Gne} and M\"ohle \cite{Moe}; the benefits of a refined approximation by a {\em drifted} subordinator were discovered in \cite{Ke}. 
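The merger rule described above is completely algorithmic, and it may help to see it in executable form. The following minimal sketch is ours and is not taken from \cite{Ke} or \cite{Pi} (the function names are hypothetical); it samples the embedded chain of partitions in the Dirac case $\Lambda = \delta_p$, where every point of $\Psi$ carries the same merger probability $p$, and the exponential holding times between mergers are ignored:

```python
import random

def merge_step(blocks, p, rng):
    # One p-merger: each block is flagged independently with probability p
    # (this is the event u_i <= p in the construction); all flagged blocks
    # coalesce into a single block.  The order of blocks is irrelevant here
    # because the flags are exchangeable.
    flagged = [i for i in range(len(blocks)) if rng.random() <= p]
    if len(flagged) < 2:
        return blocks  # fewer than two flags: the partition is unchanged
    merged = sorted(x for i in flagged for x in blocks[i])
    rest = [b for i, b in enumerate(blocks) if i not in flagged]
    return rest + [merged]

def dirac_coalescent(n, p, rng):
    # Embedded jump chain of the Lambda-coalescent with Lambda = delta_p,
    # started from the partition of {1, ..., n} into singletons and run
    # until absorption in a single block.
    blocks = [[i] for i in range(1, n + 1)]
    while len(blocks) > 1:
        blocks = merge_step(blocks, p, rng)
    return blocks

rng = random.Random(0)
print(dirac_coalescent(10, 0.5, rng))  # a single block containing 1,...,10
```

For small $p$ most proposals flag fewer than two blocks and leave the partition unchanged, which is why the block counting process decays slowly and the subordinator approximation above becomes relevant.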
We recall that the drift appears because a merging of $b$ out of $N_n(t)$ lines results in a decrease by $b-1$ and not by $b$ lines; see equation (23) in \cite{Ke} for an explanation of the form of the drift. The next result provides a refinement of Theorem 10 in \cite{Ke}. \begin{Prop}\label{Prop5} Let \[ \int_{[0,1]} \frac{\Lambda(dp)}p < \infty \ , \] let $f$ be as in \eqref{ourf}, and let $Y_n$ be the solution of \eqref{SDE} with $z:= \log n$. Then for any $\varepsilon >0$ there is an $\ell <\infty$ such that \[ \mathbf P\big( \sup_{t<\tau_n}| \log N_n(t)-Y_n(t)| \le \ell \, , \, Y_n(\tau_n) < \ell \big) \ge 1-\varepsilon \ . \] \end{Prop} \begin{proof} From \cite{Ke} we know that for given $\varepsilon >0$ there is an $r <\infty$ such that \begin{align*} \mathbf P\big( \sup_{t<\tau_n}| \log N_n(t)-Y_n(t)| \le r \big) \ge 1-\varepsilon/2 \ . \end{align*} Now we consider the size $\Delta_n$ of the last jump. Letting $(t_i,p_i)$, $i \ge 1$, be the points of the underlying Poisson point process with intensity measure $dt\, \Lambda(dp)/p^2$, the associated subordinator $S$ has jumps of size $v_i=-\log(1-p_i)$ at times $t_i$. Thus for any $c>0$ we have \begin{align*} \{ \Delta_n \le \log N_n(\tau_n-)-c\} &= \{ \tau_n=t_i \text{ and } -\log(1-p_i) \le \log N_n(t_i-)-c \text{ for some } i\ge 1\}\\ &= \Big\{ \tau_n=t_i \text{ and } p_i \le 1- \frac{e^c}{N_n(t_i-)} \text{ for some } i\ge 1\Big\}\ . \end{align*} Given $N_n(t-)$, this event occurs at time $t$ with rate \[ \nu_{n,t}= \int_{[0,1-e^c/N_n(t-)]} p^{N_n(t-)} \frac{\Lambda(dp)}{p^2} \ .\] Using the inequalities $p^b=(1-(1-p))^b\le e^{-(1-p)b} \le 1/((1-p)b)$ we get \[ \nu_{n,t} \le \int_{[0,1-e^c/N_n(t-)]} e^{-(1-p)(N_n(t-)-2)} \, \Lambda(dp)\le \int_{[0,1-e^c/N_n(t-)]}\frac {e^2}{(1-p)N_n(t-)} \, \Lambda(dp) \ .
\] It follows that \begin{align*} \mathbf E\Big[ \int_0^\infty \nu_{n,t} \, dt \Big] \le \mathbf E \Big[ \int_{[0,1]} \int_0^\infty \frac {e^2}{(1-p)N_n(t-)} I_{\{N_n(t-) \ge \lceil e^c/(1-p)\rceil \}} \, dt \, \Lambda(dp) \Big] \ . \end{align*} Lemma 14 of \cite{Ke} yields the estimate \[\mathbf E \Big[ \int_0^\infty \frac {1}{N_n(t-)} I_{\{N_n(t-) \ge \lceil e^c/(1-p)\rceil \}} \, dt\Big] \le c_1 \lceil e^c/(1-p)\rceil ^{-1}\le c_1 \frac{1-p}{e^c}\] with some $c_1>0$, hence \[ \mathbf E\Big[ \int_0^\infty \nu_{n,t} \, dt \Big] \le c_1 e^{2-c} \Lambda ([0,1])\ . \] Therefore for $c$ sufficiently large \[ \mathbf E\Big[ \int_0^\infty \nu_{n,t} \, dt \Big] \le \varepsilon/2 \ , \] which implies \[ \mathbf P\big(\Delta_n \le \log N_n(\tau_n-)-c\big) \le \mathbf E\Big[ \int_0^\infty \nu_{n,t} \, dt \Big] \le \varepsilon/2 \ .\] Altogether we obtain \[ \mathbf P\big(\sup_{t<\tau_n}| \log N_n(t)-Y_n(t)| \le r \, , \, \Delta_n > \log N_n(\tau_n-)-c\big) \ge 1-\varepsilon \ . \] The event in the previous formula implies \[ Y_n(\tau_n) = Y_n(\tau_n-)-\Delta_n < \log N_n(\tau_n-)+r -(\log N_n(\tau_n-) -c) = r+c \ ,\] and the claim of the proposition follows with $\ell=r+c$. \end{proof} \section{Proof of the main results} \begin{proof}[Proof of Theorem \ref{LLN}] Let us first assume that $\mu < \infty$. Then we have a coalescent with dust, and we may apply Proposition \ref{Prop5}. Fix $\eta >0$. Note that on the event that $Y_n(\tau_n)<\ell$ the event $\tau_n < (1- \eta)\log n/\mu$ implies the inequality $T_\ell^{\log n} < (1-\eta) \log n/\mu$. Thus in view of Proposition~\ref{Prop5} there exists for any $\varepsilon >0$ an $\ell$ such that \begin{align*} \mathbf P( \tau_n < (1- \eta)\log n/\mu) \le \mathbf P( T_\ell^{\log n} < (1-\eta) \log n/\mu)+ \varepsilon \ . \end{align*} Proposition \ref{Prop3} implies that the right-hand probability converges to 0 as $n \to \infty$.
Letting $\varepsilon \to 0$ we obtain \[ \lim_{n \to \infty} \mathbf P( \tau_n < (1- \eta)\log n/\mu) =0\ . \] Also, on the event $\sup_{t<\tau_n}| \log N_n(t)-Y_n(t)| \le \ell$, the event $\tau_n > (1+\eta)\log n/\mu $ implies $Y_n(t) \ge -\ell$ for all $t\le (1+\eta)\log n/\mu$, and consequently \[ \mathbf P(\tau_n > (1+\eta)\log n/\mu) \le \mathbf P( T_{-\ell}^{\log n} > (1+\eta) \log n/\mu)+ \varepsilon \ . \] Again the right-hand probability converges to zero in view of Proposition \ref{Prop3}, and we obtain \[\lim_{n \to \infty} \mathbf P(\tau_n > (1+\eta)\log n/\mu) =0 \ . \] Altogether our claim follows in the case $\mu < \infty$. Now assume $\mu = \infty$. If $\Lambda(\{0\})>0$, then the coalescent comes down from infinity and $\tau_n$ stays bounded in probability. The same is true if $\Lambda(\{1\})>0$, thus we may assume that $\Lambda(\{0,1\})=0$. For given $\varepsilon >0$ define the measure $\Lambda^\varepsilon$ by $\Lambda^\varepsilon(B):= \Lambda(B\cap [\varepsilon,1-\varepsilon])$. Obviously \[ \mu^\varepsilon:=\int_0^1 \log \frac 1{1-p} \, \frac{\Lambda^\varepsilon(dp)}{p^2} < \infty\ . \] Thus for the absorption times $\tau_n^\varepsilon$ of the $\Lambda^\varepsilon$-coalescent we have \[ \frac {\tau^\varepsilon_n}{\log n} \to \frac 1{\mu^\varepsilon}\] in probability as $n \to \infty$. Now we may couple the $\Lambda^\varepsilon$-coalescent to the $\Lambda$-coalescent in such a way that $N_n(t)\le N_n^\varepsilon(t)$ a.s. for all $t \ge 0$, in particular $\tau_n \le \tau_n^\varepsilon$. Hence it follows that \[ \mathbf P(\tau_n/\log n > 2 /\mu^\varepsilon) \to 0 \ . \] Because of $\Lambda(\{0,1\})=0$ we have $\mu^\varepsilon \to \mu=\infty $ as $\varepsilon \to 0$, and consequently \[ \mathbf P(\tau_n/\log n > \eta) \to 0\] for all $\eta >0$. This is our claim. \end{proof} \begin{proof}[Proof of Theorem \ref{CLT}] Because of the condition $\mu<\infty$ we again may apply Proposition \ref{Prop5}.
We follow the same lines as in the previous proof: for $\varepsilon >0$ there exists an $\ell$ such that for all $a\in \mathbb R$ \begin{align*} \mathbf P( \tau_n < b_n+ a \sqrt {\log n}) \le \mathbf P( T_\ell^{\log n} < b_n+ a \sqrt {\log n})+ \varepsilon \end{align*} and \begin{align*} \mathbf P(\tau_n > b_n+a\sqrt {\log n}) \le \mathbf P( T_{-\ell}^{\log n} > b_n+a\sqrt {\log n})+ \varepsilon \ . \end{align*} Now apply Proposition \ref{Prop4} and let $\varepsilon \to 0$. \end{proof} \begin{proof}[Proof of Proposition \ref{Prop2}] (i) Let us first assume \eqref{condi}. Because of $1-(1-p)^{1/r} \le \min (p/r,1)$ for $0<r<1$ we have for $\alpha > 0$ \begin{align} f\big(\log \tfrac 1r\big) \le \int_0^{r^\alpha} \frac{\Lambda (dp)}p + r\int_{r^\alpha}^1 \frac{ \Lambda(dp)}{p^2} \le \int_0^{r^\alpha} \frac{\Lambda (dp)}p + r^{1-\alpha }\int_0^1 \frac{\Lambda(dp)}{p} \ . \label{estimate1} \end{align} Also, because of $1-(1-p)^{1/r} \ge 1- e^{-p/r} \ge e^{-p/r}p/r$, it follows for $\beta >0$ that \begin{align} f\big(\log \tfrac 1r\big) \ge e^{-r^{\beta -1}}\int_0^{r^\beta} \frac{\Lambda (dp)}p\ . \label{estimate2} \end{align} Together with \eqref{condi} these two estimates yield for $\alpha < 1 < \beta$ \[ c \beta^{-1/2} \le \liminf_{r \to 0} f\big(\log \tfrac 1r\big) \sqrt{\log \tfrac 1r} \le \limsup_{r \to 0} f\big(\log \tfrac 1r\big) \sqrt{\log \tfrac 1r} \le c \alpha^{-1/2} \ .\] Letting $\alpha, \beta \to 1$ we arrive at $f(y)= (c+o(1))/\sqrt y$ as $y \to \infty$ and consequently \[ \int_0^{\log n} f(y) \, dy = (c+o(1))2\sqrt {\log n} \] as $n \to \infty$. Now, because of \begin{align*} \frac{1}{\mu-f(y)}= \frac 1\mu + \frac {f(y)}{\mu(\mu-f(y))} \end{align*} and $f(y)=o(1)$ as $y\to \infty$, we have \begin{align} \label{formula} \int_\kappa^z \frac{dy}{\mu-f(y)} = \frac z\mu + \frac{1+o(1)}{\mu^2} \int_0^z f(y)\, dy +O(1) \end{align} as $z\to \infty$, and consequently, as claimed, \begin{align*} b_n = \frac{\log n}\mu + \frac{2c+o(1)}{\mu^2} \, \sqrt{\log n} \ .
\end{align*} (ii) Now suppose that \eqref{condi2} is satisfied. Then, in view of \eqref{formula} with $z=\log n$, it follows that \[ \int_0^{\log n} f(y)\, dy = (2c+o(1)) \sqrt {\log n} \] as $n \to \infty$, or equivalently \[ \int_0^{z} f(y)\, dy = (2c+o(1)) \sqrt z \] as $z \to \infty$. This implies that $f(z)= (c+o(1))/ \sqrt z$ as $z\to \infty$. For $c=0$ this claim follows because $f$ is decreasing, which entails \[ zf(z) \le \int_0^z f(y)\, dy = o(\sqrt z) \ . \] For $c>0$ we use the estimate \[ \frac 1{\eta \sqrt z}\int_z^{(1+\eta) z} f(y)\, dy \le \sqrt z f(z) \le \frac 1{\eta \sqrt z} \int_{(1-\eta)z}^z f(y) \, dy \] with $\eta >0$. Taking the limit $z\to \infty$ and then $\eta \to 0$ yields $f(z)= (c+o(1))/ \sqrt z$. Now, similarly to part (i), we get from \eqref{estimate1} and \eqref{estimate2} \[c\sqrt \alpha \le \liminf_{r \to 0}\sqrt {\log \tfrac 1r}\int_{[0,r]} \frac{\Lambda(dp)}p\le \limsup_{r \to 0}\sqrt {\log \tfrac 1r}\int_{[0,r]} \frac{\Lambda(dp)}p \le c\sqrt \beta\ .\] Letting $\alpha, \beta \to 1$ we arrive at \eqref{condi}. \end{proof} \paragraph{Acknowledgement.} It is our pleasure to dedicate this work to Peter Jagers. \end{document}
\begin{document} \title{Local H\"older regularity of minimizers for nonlocal variational problems} \author{ Matteo Novaga\footnote{Universit\`a di Pisa, Largo Bruno Pontecorvo 5, 56127 Pisa, Italy. E-mail: {\tt [email protected]}}\and Fumihiko Onoue\footnote{Scuola Normale Superiore, Piazza dei Cavalieri 7, 56126 Pisa, Italy. E-mail: {\tt [email protected]}} } \date{\today} \maketitle \begin{abstract} We study the regularity of solutions to a nonlocal variational problem, which is related to the image denoising model, and we show that, in two dimensions, minimizers have the same H\"older regularity as the original image. More precisely, if the datum is (locally) $\beta$-H\"older continuous for some $\beta\in(1-s,\,1]$, where $s\in (0,1)$ is a parameter related to the nonlocal operator, we prove that the solution is also $\beta$-H\"older continuous. \end{abstract} \tableofcontents \section{Introduction} Let $K:\mathbb{R}^n\setminus\{0\} \to \mathbb{R}$ be a given function and $f:\mathbb{R}^n \to \mathbb{R}$ be a given datum. We study the minimization problem \begin{equation}\label{miniProb} \min\left\{ \mathcal{F}_{K,f}(u) \mid u \in BV_K(\mathbb{R}^n) \cap L^2(\mathbb{R}^n) \right\} \end{equation} where the functional $\mathcal{F}_{K,f}$ is defined as \begin{equation}\label{funcDenoisingProb} \mathcal{F}_{K,f}(u) \coloneqq \frac{1}{2}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n} K(x-y)\,|u(x) - u(y)|\,dx\,dy + \frac{1}{2}\int_{\mathbb{R}^n}(u(x)-f(x))^2\,dx \end{equation} for any measurable function $u:\mathbb{R}^n \to \mathbb{R}$, and the space $BV_K(\mathbb{R}^n)$ is the set of all functions such that the first term of \eqref{funcDenoisingProb} is finite (see Section \ref{preliminaries} for the details). The function $K$ is a kernel singular at the origin, and a typical example is the function $x \mapsto |x|^{-(n+s)}$ with $s \in (0,\,1)$.
If $K$ is non-negative and we understand that $\mathcal{F}_{K,\,f}(u) = +\infty$ when $u \in L^2(\mathbb{R}^n)\setminus BV_K(\mathbb{R}^n)$, then we observe that the functional $\mathcal{F}_{K,\,f}$ is strictly convex, lower semi-continuous, and coercive in $L^2(\mathbb{R}^n)$. Hence, from the general theory of functional analysis (see, for instance, \cite{Brezis}), we obtain existence and uniqueness of solutions to \eqref{miniProb}. In this paper we focus on the specific kernel $K(x) = |x|^{-(n+s)}$, with $s\in (0,1)$, and we study the regularity of the minimizers of $\mathcal{F}_{K,f}$, under some suitable conditions on the datum $f$. Our minimization problem is motivated by the classical variational problem \begin{equation}\label{classicalDenosignProb} \min\left\{ \mathcal{F}_{f}(u) \mid u \in BV(\mathbb{R}^n) \cap L^2(\mathbb{R}^n) \right\} \end{equation} where $\mathcal{F}_{f}(u)$ is defined as \begin{equation}\label{classicalTotalVariFunc} \mathcal{F}_{f}(u) \coloneqq \int_{\mathbb{R}^n}|\nabla u|\,dx + \frac{1}{2}\int_{\mathbb{R}^n} |u-f|^2\,dx. \end{equation} The minimization problem \eqref{classicalDenosignProb} has been studied by many authors since the celebrated work by Rudin, Osher, and Fatemi \cite{ROF}, and plays an important role in image denoising and restoration (see for instance \cite{CCN, Brezis02}). From the perspective of image processing, the datum $f$ in the functional $\mathcal{F}_{f}$ represents an observed image and, when the given image has poor quality, the minimizers of $\mathcal{F}_{f}$, or the solutions to the Euler-Lagrange equation associated with $\mathcal{F}_{f}$, correspond to regularized images. It is easy to show that the minimizer of \eqref{classicalTotalVariFunc} exists and is unique, as a consequence of the strict convexity, lower semi-continuity, and coercivity of the functional.
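The uniqueness part of the last claim can be recorded in one line; the following computation is our own sketch and is not spelled out in the cited works. If $u_1 \neq u_2$ were two minimizers with common minimum value $m$, the convexity of the total variation and the parallelogram identity for the quadratic fidelity term would give

```latex
\mathcal{F}_{f}\Big(\frac{u_1+u_2}{2}\Big)
  \;\le\; \frac{1}{2}\,\mathcal{F}_{f}(u_1) + \frac{1}{2}\,\mathcal{F}_{f}(u_2)
          - \frac{1}{8}\int_{\mathbb{R}^n} |u_1-u_2|^2\,dx
  \;<\; m,
```

contradicting minimality; existence follows from the direct method, using coercivity and lower semi-continuity along a minimizing sequence. The same argument applies verbatim to $\mathcal{F}_{K,f}$.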
Moreover, the minimizer turns out to be the solution, in a suitable sense, of the Euler-Lagrange equation \begin{equation}\label{classicalTotalVariEq} -\mathrm{div}\,\left( \frac{\nabla u}{|\nabla u|}\right) + u - f = 0 \quad \text{ in } \mathbb{R}^n. \end{equation} The regularity of minimizers of $\mathcal{F}_{f}$ has been studied by several authors. In particular, the global and local regularity was investigated in a series of papers by Caselles, Chambolle and Novaga (see \cite{CCN01, CCN02, CCN}), who proved that the solution of \eqref{classicalTotalVariEq} inherits the local H\"older or Lipschitz regularity of the datum $f$, when the space dimension $n$ is less than or equal to 7. In addition, if $f$ is globally H\"older or Lipschitz in a convex domain $\Omega \subset \mathbb{R}^n$, the global regularity also holds for the solution of \eqref{classicalTotalVariEq} with homogeneous Neumann boundary condition. In the recent papers \cite{Mercier, Porretta}, some of these results were extended to general dimensions. In \cite{Mercier}, Mercier proved that the continuity of $f$ implies the continuity of a solution $u$ and, in the case of convex domains, the modulus of continuity is also inherited globally by the solution. Finally, in \cite{Porretta}, Porretta was able to remove the restriction on the space dimension. For the variational problems associated with the nonlocal total variation, Aubert and Kornprobst in \cite{AuKo} and Gilboa and Osher in \cite{GiOs, GiOs02} proposed methods for approximating the solutions to \eqref{classicalDenosignProb} with a sequence of nonlocal total variations associated with non-singular smooth kernels. However, as far as we know, there are no results on the regularity of minimizers of the functional $\mathcal{F}_{K,f}$. Thus, in this paper, we consider the local H\"older regularity of the minimizers of \eqref{funcDenoisingProb} in two dimensions, in analogy with the regularity results shown in \cite{CCN01, CCN02}.
Precisely, we prove the following result: \begin{theorem}\label{mainTheorem} Let $n=2$. Assume that $K(x) = |x|^{-(2+s)}$ with $s \in (0,\,1)$ and $f \in L^2(\mathbb{R}^2) \cap L^{\infty}(\mathbb{R}^2)$. If $f$ is locally $\beta$-H\"older continuous with $\beta \in (1-s,\,1]$, then the minimizer $u$ of the functional $\mathcal{F}_{K,f}$ is also locally $\beta$-H\"older continuous in $\mathbb{R}^2$. \end{theorem} We point out that we cannot show the regularity result in every dimension, due to the possible appearance of singularities on the boundaries of the level sets of minimizers. However, the two-dimensional case is of particular interest for the application to image denoising. As discussed in the case of the denoising problem in \cite{CCN01, CCN02, CCN}, our regularity result is based on the following observation: if $u$ is a minimizer of the functional $\mathcal{F}_{K,f}$, then the super-level set $\{u > t\}$ for each $t \in \mathbb{R}$ is also a minimizer of the functional associated with the prescribed nonlocal mean curvature problem \begin{equation}\label{prescribedNonlocalMCProb} \min\left\{ \mathcal{E}_{K,f,t}(E) \mid \text{$E \subset \mathbb{R}^n$: measurable}\right\} \end{equation} where we define the functional $\mathcal{E}_{K,f,t}$ by \begin{equation}\nonumber \mathcal{E}_{K,f,t}(E) \coloneqq P_K(E) + \int_{E}(t-f(x))\,dx \end{equation} for any measurable set $E \subset \mathbb{R}^n$ and $t\in\mathbb{R}$. Here $P_K$ is the \textit{nonlocal perimeter} associated with the kernel $K$, which is given as \begin{equation}\nonumber P_K(E) \coloneqq \int_{E}\int_{E^c}K(x-y)\,dx\,dy \end{equation} for any measurable set $E \subset \mathbb{R}^n$ (see Section \ref{preliminaries} for the details).
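As a side remark (ours, not needed for the proofs below), the competition between the two terms of \eqref{prescribedNonlocalMCProb} for the kernel $K(x)=|x|^{-(n+s)}$ can be read off from the change of variables $x = \lambda x'$, $y = \lambda y'$:

```latex
P_s(\lambda E)
  = \int_{\lambda E}\int_{(\lambda E)^c} \frac{dx\,dy}{|x-y|^{n+s}}
  = \lambda^{2n-(n+s)} \int_{E}\int_{E^c} \frac{dx'\,dy'}{|x'-y'|^{n+s}}
  = \lambda^{n-s}\, P_s(E),
```

while the bulk term $\int_E (t-f)\,dx$ scales like $\lambda^n$. Hence the perimeter term dominates at small scales, which heuristically explains why it controls the geometry of the level sets there.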
If $E_t$ is a minimizer of $\mathcal{E}_{K,f,t}$ for each $t$ and $\partial E_t$ is smooth ($C^2$-regularity is sufficient), then the boundary $\partial E_t$ satisfies the following \textit{prescribed nonlocal mean curvature equation} \begin{equation}\nonumber H^K_{E_t}(x) + t - f(x) = 0 \end{equation} for any $x \in \partial E_t$. Here $H^K_{E_t}$ is the so-called \textit{nonlocal mean curvature} defined by \begin{equation}\label{definitionNonlocalMC} H^K_{E_t}(x) \coloneqq \text{p.v.}\int_{\mathbb{R}^n}K(x-y)(\chi_{E_t}(x) - \chi_{E_t}(y)) \,dy \end{equation} for any $x \in \mathbb{R}^n$, where ``p.v.'' denotes the Cauchy principal value. Note that, if $K(x) = |x|^{-(n+s)}$, then we denote the nonlocal mean curvature at $x \in \partial E$ associated with $K$ by $H^s_E(x)$. One may observe that, if $\partial E$ is of class $C^{1,\alpha}$ with $\alpha > s$, then $H^s_E$ is finite at each point of $\partial E$. The idea to show the local H\"older regularity of a minimizer $u$ is based on the observation that the boundaries of the two super-level sets $\{u>t\}$ and $\{u>t'\}$ for $t,\,t' \in \mathbb{R}$ with $t \neq t'$ should not be too close to each other. To see this, we compare the nonlocal mean curvatures of $\partial \{u>t\}$ and $\partial \{u>t'\}$ at the points where the smallest distance between the boundaries $\partial \{u>t\}$ and $\partial \{u>t'\}$ is attained. The comparison can be done thanks to the computations of the first variation of the nonlocal mean curvature shown in \cite{DdPW, JuLa}. Thus we are able to derive a local estimate which yields the local H\"older regularity of $u$ under the assumption of the local H\"older regularity of $f$. The organization of this paper is as follows: In Section \ref{preliminaries}, we will introduce the notation related to the nonlocal total variations.
In Section \ref{secELeqforEnergy}, we will show the correspondence between the minimizers of $\mathcal{F}_{K,f}$ in \eqref{funcDenoisingProb} and the solutions to the nonlocal 1-Laplace equation. In Section \ref{secComparisonMini}, we will give a comparison principle for the minimizers. As a consequence, we will show that, if a datum $f$ is bounded, then the minimizer of $\mathcal{F}_{K,f}$ is also bounded. In Section \ref{secCharacteriMinimizers}, we will show that each super-level set of a minimizer of $\mathcal{F}_{K,f}$ is also a minimizer of $\mathcal{E}_{K,f,t}$ for $t\in\mathbb{R}$. In Section \ref{secBoundednessSuperLeverlsets}, we will show the boundedness of each super-level set of the minimizer and, moreover, that these sets are uniformly bounded whenever the minimizer is bounded from below. Finally, by using all the previous results, in Section \ref{secLocalHolderConti} we prove the main theorem in this paper on the H\"older regularity of minimizers in two dimensions. \section{Notation}\label{preliminaries} In this section, we give several definitions and properties of the space of functions with finite nonlocal total variation. First of all, we define the space $BV_K(\Omega)$ of functions of bounded nonlocal variation associated with the kernel $K$ by \begin{equation}\label{functionBVwithK} BV_K(\Omega) \coloneqq \left\{u \in L^1(\Omega) \mid [u]_{K}(\Omega) <\infty \right\} \end{equation} where we set, for any measurable function $u$, \begin{equation}\label{seminormNonolocalTV} [u]_{K}(\Omega) \coloneqq \frac{1}{2}\int_{\Omega}\int_{\Omega} K(x-y)\,|u(x) - u(y)|\,dx\,dy. \end{equation} We observe that the space $BV_K(\Omega)$ coincides with the fractional Sobolev space $W^{s,\,1}(\Omega)$ when the kernel $K$ is given as $K(x)= |x|^{-(n+s)}$ with $s \in (0,\,1)$ (see, for instance, \cite{NPV}).
Secondly, if we set $\Omega = \mathbb{R}^n$ and substitute the characteristic function $\chi_E$ of a set $E\subset\mathbb{R}^n$ into \eqref{seminormNonolocalTV}, then we obtain the so-called \textit{nonlocal perimeter}. Namely, we define the nonlocal perimeter of a set $E \subset \mathbb{R}^n$ associated with the kernel $K$ by \begin{equation}\label{definitionNonlocalPeri} P_K(E) \coloneqq \int_{E}\int_{E^c}K(x-y)\,dx\,dy. \end{equation} In the case that $K(x) = |x|^{-(n+s)}$ for $s \in (0,\,1)$, we call $P_K$ the \textit{$s$-fractional perimeter}, and we denote it by $P_s$. This notion was introduced by Caffarelli, Roquejoffre, and Savin in \cite{CRS}. The work in \cite{CRS} is motivated by the structure of interphases in the presence of long-range correlations (see also \cite{CaSo, Imbert} for the study of interfaces with fractional mean curvatures). After their work, problems involving not only the $s$-fractional perimeter but also the nonlocal perimeter with a general kernel $K$ were studied by many authors. For those interested in variational problems involving the nonlocal perimeter, we refer to the short list of papers related to our problem \cite{AuKo, AGP, Brezis02, CeNo, DRM, MRT01, MRT02} and the references therein. Next we can consider a localized version of the nonlocal perimeter $P_K$ as follows: let $\Omega \subset \mathbb{R}^n$ be any domain. Then the nonlocal perimeter in $\Omega$ associated with the kernel $K$ is given by \begin{align} P_K(E;\Omega) &\coloneqq \int_{\Omega \cap E}\int_{\Omega \cap E^c} K(x-y)\,dx\,dy + \int_{\Omega \cap E}\int_{\Omega^c \cap E^c} K(x-y) \,dx\,dy \nonumber\\ &\qquad + \int_{\Omega \cap E^c}\int_{\Omega^c \cap E} K(x-y) \,dx\,dy \nonumber \end{align} for any $E\subset \mathbb{R}^n$. Finally, we give the definition of solutions to the so-called \textit{nonlocal 1-Laplace equations} associated with the kernel $K$.
\begin{definition}\label{defSolutionNonlocal1Lap} Let $F : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}$ be a measurable function in $L^2(\mathbb{R}^n \times \mathbb{R})$. We say that $u \in BV_K \cap L^2(\mathbb{R}^n)$ is a solution to the nonlocal equation \begin{equation}\label{nonlocalSemilinearEq} -\Delta^K_1 u(x) = F(x,\,u(x)) \quad \text{for a.e. $x\in \mathbb{R}^n$} \end{equation} if there exists a function $z: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ with $|z| \leq 1$ a.e. in $\mathbb{R}^n \times \mathbb{R}^n$ and $z(x,\,y) = - z(y,\,x)$ for a.e. $(x,\,y) \in \mathbb{R}^n \times \mathbb{R}^n$ such that \begin{equation}\label{ELeqforF} \frac{1}{2}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y)\,z(x,\,y)(v(x) - v(y))\,dx\,dy = \int_{\mathbb{R}^n}F(x,\,u(x))\,v(x)\,dx \end{equation} for every $v \in C^{\infty}_c(\mathbb{R}^n)$ with $[v]_K(\mathbb{R}^n) < \infty$ and \begin{equation}\nonumber z(x,\,y) \in \mathrm{sgn}\,(u(y) - u(x)) \quad \text{for a.e. $(x,\,y) \in \mathbb{R}^n \times \mathbb{R}^n$} \end{equation} where $\mathrm{sgn}\,(x)$ is a generalized sign function satisfying that \begin{equation}\nonumber \mathrm{sgn}\,(x) \in [-1,\,1], \quad \mathrm{sgn}\,(x)\,x = |x| \quad \text{for any $x \in \mathbb{R}$}. \end{equation} \end{definition} We mention that the authors in \cite{MRT01, MRT02} give a similar definition of the nonlocal 1-Laplacian associated with an integrable kernel. In the present paper, we only consider the case that $F(x,\,u(x)) = u(x) - f(x)$ for a given datum $f$. The concept of the definition is motivated by the Euler-Lagrange equation of the functional \begin{equation}\nonumber \mathcal{I}_K(u) \coloneqq \frac{1}{2}\int_{\mathbb{R}^n} \int_{\mathbb{R}^n} K(x-y)|u(x)-u(y)|\,dx\,dy. 
\end{equation} Indeed, if we assume that $u$ is a minimizer of $\mathcal{I}_K$ and consider the first variation of the functional $\mathcal{I}_K$, namely the quantity $\frac{d}{d\varepsilon}\big|_{\varepsilon=0}\mathcal{I}_K(u + \varepsilon\phi)$ for any suitable test function $\phi$, we formally obtain \begin{equation}\nonumber \frac{1}{2}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y)\frac{u(x) - u(y)}{|u(x) - u(y)|}\,(\phi(x) - \phi(y))\,dx\,dy = 0. \end{equation} However, it is problematic to give a rigorous meaning to the ratio $\frac{u(x) - u(y)}{|u(x) - u(y)|}$. Definition \ref{defSolutionNonlocal1Lap} can be regarded as one proper way to overcome this difficulty. Indeed, in Definition \ref{defSolutionNonlocal1Lap}, we may consider the condition that $z(x,\,y)(u(y) - u(x)) = |u(y) - u(x)|$ for a.e. $(x,\,y) \in \mathbb{R}^n \times \mathbb{R}^n$ with $u(x) \neq u(y)$ as a natural requirement. Note that the framework of solutions in the sense of Definition \ref{defSolutionNonlocal1Lap} was originally developed by Maz\'on, Rossi, and Toledo in \cite{MRT} and can be seen as a nonlocal counterpart of the framework given in \cite{ABCM}, \cite{ACM}, and \cite{MRD}. \section{Preliminary results} \subsection{Euler-Lagrange equation for $\mathcal{F}_{K,f}$}\label{secELeqforEnergy} In this section, we give a necessary and sufficient condition for a function to be a minimizer of the functional $\mathcal{F}_{K,f}$ in $\mathbb{R}^n$. Before stating the claim, we give some conditions on the kernel $K$ which we will assume in the sequel. \begin{itemize} \item[(K1)] $K:\mathbb{R}^n \setminus \{0\} \to \mathbb{R}$ is a non-negative measurable function. \item[(K2)] $K$ is symmetric with respect to the origin, namely $K(-x) = K(x)$ for any $x \in \mathbb{R}^n\setminus \{0\}$.
\end{itemize} We observe that a typical example of the kernel $K$ is given as $K(x)=|x|^{-(n+s)}$ with $s\in(0,\,1)$ and this function satisfies all the above assumptions. In the following lemma, we show that the minimizer of $\mathcal{F}_{K,f}$ satisfies a nonlocal 1-Laplace equation, which can be regarded as the Euler-Lagrange equation of the functional. Moreover, we show that the converse statement is also valid. \begin{lemma}\label{equivMinimizerAndSolution} Assume that the kernel $K$ satisfies (K1) and (K2) and that a given datum $f$ belongs to $L^2(\mathbb{R}^n)$. If $u \in BV_K \cap L^2(\mathbb{R}^n)$ is a minimizer of the functional $\mathcal{F}_{K,f}$, then $u$ satisfies the equation \begin{equation}\label{nonlocalEquation00} -\Delta^K_1 u = u -f \quad \text{in $\mathbb{R}^n$} \end{equation} in the sense of Definition \ref{defSolutionNonlocal1Lap}. Conversely, if $u \in BV_K \cap L^2(\mathbb{R}^n)$ is a solution of the equation \eqref{nonlocalEquation00} in the sense of Definition \ref{defSolutionNonlocal1Lap}, then $u$ is a minimizer of $\mathcal{F}_{K,f}$. \end{lemma} \begin{proof} First, recalling the definition of the functional $\mathcal{I}_{K}$ and the non-negativity of $K$, we find that $\mathcal{I}_K$ is convex, lower semi-continuous, and positively homogeneous of degree one. Then, by using the same argument as in \cite{MRT01, MRT02}, we can show the following characterization of the sub-differential of $\mathcal{I}_{K}$: \begin{align}\label{characteriSubdiffNonlocalfunc} &\partial \mathcal{I}_{K}(u) \nonumber\\ & \quad = \left\{ v \in L^2(\mathbb{R}^n) \mid \text{$-\Delta^K_1 u = v$ in the sense of Definition \ref{defSolutionNonlocal1Lap}} \right\}. \end{align} Here we recall the definition of the sub-differential $\partial \mathcal{E}(u)$ for $u \in X$ of a functional $\mathcal{E}: X \to \mathbb{R}\cup\{+\infty\}$, where $X$ is a Hilbert space with inner product $(\cdot,\cdot)_X$.
We say that $v \in X$ belongs to $\partial \mathcal{E}(u)$ for $u \in X$ if it holds that, for any $w \in X$, \begin{equation}\nonumber \mathcal{E}(w) - \mathcal{E}(u) \geq (v,\,w-u)_X. \end{equation} Note that $u \in X$ is a minimizer of $\mathcal{E}$ if and only if $0 \in \partial \mathcal{E}(u)$. Then, from the general theory of the sub-differential, we can also show the identity \begin{equation}\label{characteriSubdifferentialEnergy} \partial \mathcal{F}_{K,f}(u) = \partial \mathcal{I}_{K}(u) + u-f \end{equation} for any $u \in L^2(\mathbb{R}^n)$. Indeed, if $v \in \partial \mathcal{F}_{K,f}(u)$, then we can compute as follows: for any $w \in L^2(\mathbb{R}^n)$, \begin{align}\label{characteriSubdiffe01} \mathcal{I}_{K}(w) - \mathcal{I}_{K}(u) &= \mathcal{F}_{K,f}(w) - \mathcal{F}_{K,f}(u) + \frac{1}{2}\int_{\mathbb{R}^n}(u-f)^2\,dx - \frac{1}{2}\int_{\mathbb{R}^n}(w-f)^2\,dx \nonumber\\ &\geq \int_{\mathbb{R}^n}v(w-u)\,dx - \frac{1}{2}\int_{\mathbb{R}^n}(w-u)(w+u-2f)\,dx \nonumber\\ &= \int_{\mathbb{R}^n}(v-u+f)(w-u)\,dx + \int_{\mathbb{R}^n}(u-f)(w-u)\,dx \nonumber\\ &\qquad - \frac{1}{2}\int_{\mathbb{R}^n}(w-u)(w+u-2f)\,dx \nonumber\\ &= \int_{\mathbb{R}^n}(v-u+f)(w-u)\,dx - \frac{1}{2}\int_{\mathbb{R}^n}(w-u)^2\,dx. \end{align} Applying \eqref{characteriSubdiffe01} with $w$ replaced by $u+t(w-u)$, $t \in (0,\,1)$, and using the convexity of $\mathcal{I}_{K}$ on the left-hand side, we get \begin{equation}\nonumber \mathcal{I}_{K}(w) - \mathcal{I}_{K}(u) \geq \int_{\mathbb{R}^n}(v-u+f)(w-u)\,dx - \frac{t}{2}\int_{\mathbb{R}^n}(w-u)^2\,dx, \end{equation} and letting $t \to 0^+$ we deduce \begin{equation}\nonumber \mathcal{I}_{K}(w) - \mathcal{I}_{K}(u) \geq \int_{\mathbb{R}^n}(v-u+f)(w-u)\,dx. \end{equation} Therefore we obtain $v-u+f \in \partial \mathcal{I}_{K}(u)$.
On the other hand, if $v \in \partial \mathcal{I}_{K}(u)+u-f$, then we can compute in the following manner: for any $w \in L^2(\mathbb{R}^n)$, we have \begin{align}\label{characteriSubdiffe02} \mathcal{F}_{K,f}(w) - \mathcal{F}_{K,f}(u) &= \mathcal{I}_{K}(w) - \mathcal{I}_{K}(u) + \frac{1}{2}\int_{\mathbb{R}^n}(w-f)^2\,dx - \frac{1}{2}\int_{\mathbb{R}^n}(u-f)^2\,dx \\ &\geq \int_{\mathbb{R}^n} (v-u+f)(w-u)\,dx + \frac{1}{2}\int_{\mathbb{R}^n}(w-u)(w+u - 2f)\,dx \nonumber\\ &= \int_{\mathbb{R}^n}v(w-u)\,dx + \frac{1}{2}\int_{\mathbb{R}^n}(w-u)^2\,dx \nonumber\\ &\geq \int_{\mathbb{R}^n}v(w-u)\,dx, \nonumber \end{align} and thus we have that $v \in \partial \mathcal{F}_{K,f}(u)$. Therefore, from the computations \eqref{characteriSubdiffe01} and \eqref{characteriSubdiffe02}, we conclude that the identity \eqref{characteriSubdifferentialEnergy} is valid. Then, from \eqref{characteriSubdifferentialEnergy}, we can easily obtain the identity \begin{align}\label{identityOverall} &\partial \mathcal{F}_{K,f}(u) \nonumber\\ &\quad = \left\{ v+u-f \in L^2(\mathbb{R}^n) \mid \text{$-\Delta^K_1 u = v$ in the sense of Definition \ref{defSolutionNonlocal1Lap}} \right\}. \end{align} We can readily see that $0 \in \partial \mathcal{F}_{K,f}(u)$ whenever $u$ is a minimizer of $\mathcal{F}_{K,f}$. Therefore, we conclude that, if $u$ is a minimizer of $\mathcal{F}_{K,f}$, then $u$ is a solution of the equation \eqref{nonlocalEquation00}. Conversely, if $u$ is a solution of the equation \eqref{nonlocalEquation00}, then from \eqref{identityOverall} we have that $0$ belongs to the set on the right-hand side of \eqref{identityOverall}, and thus we obtain $0 \in \partial \mathcal{F}_{K,f}(u)$. \end{proof}
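For intuition, the equivalence just proved can be tested on its one-dimensional local analogue: for the convex functional $F(u) = \lambda|u| + \frac{1}{2}(u-f)^2$ on $\mathbb{R}$, the optimality condition $0 \in \partial F(u)$ is solved in closed form by soft-thresholding. The following Python sketch (a purely illustrative finite-dimensional toy model of ours, not part of the paper's argument) checks the closed-form solution against a brute-force grid search.

```python
import numpy as np

# Toy 1D analogue of the lemma: minimize F(u) = lam*|u| + 0.5*(u - f)^2
# over u in R. The inclusion 0 in dF(u) = lam*sign(u) + (u - f) is solved
# by soft-thresholding, the scalar counterpart of the nonlocal
# Euler-Lagrange equation -Delta^K_1 u = u - f. (Names below are ours.)

def soft_threshold(f, lam=1.0):
    # unique solution of the inclusion 0 in lam*sign(u) + (u - f)
    return float(np.sign(f) * np.maximum(np.abs(f) - lam, 0.0))

def brute_force_minimizer(f, lam=1.0):
    # direct minimization of F over a fine grid, for comparison
    grid = np.linspace(-5.0, 5.0, 200001)
    values = lam * np.abs(grid) + 0.5 * (grid - f) ** 2
    return float(grid[np.argmin(values)])

for f in [-2.3, -0.4, 0.0, 0.7, 3.1]:
    assert abs(soft_threshold(f) - brute_force_minimizer(f)) < 1e-3
```

As in Lemma \ref{equivMinimizerAndSolution}, the minimizer and the solution of the optimality condition coincide.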
Then we show that the minimizers $u_1$ and $u_2$ associated with $f_1$ and $f_2$, respectively, preserve the inequality. Precisely, we prove the following result: \begin{lemma}\label{comparisonMini} Let $f_i$ be in $L^2(\mathbb{R}^n)$ for each $i \in \{1,\,2\}$ and let $u_i \in BV_K \cap L^2(\mathbb{R}^n)$ be a minimizer of $\mathcal{F}_{K,f_i}$ for each $i\in\{1,\,2\}$. Assume that the kernel $K : \mathbb{R}^n \setminus \{0\} \to \mathbb{R}$ satisfies (K1) and (K2). If $f_1 \leq f_2$ $\mathcal{L}^n$-a.e. in $\mathbb{R}^n$, then $u_1 \leq u_2$ $\mathcal{L}^n$-a.e. in $\mathbb{R}^n$. \end{lemma} \begin{proof} Let $u_1,\,u_2 \in BV_K(\mathbb{R}^n)$ be minimizers of $\mathcal{F}_{K,f_1}$ and $\mathcal{F}_{K,f_2}$ associated with the given data $f_1,\,f_2 \in L^{2}(\mathbb{R}^n)$, respectively, and set \begin{equation}\label{maxMin} u_{+}(x) \coloneqq \max\{u_1(x),\,u_2(x)\}, \quad u_{-}(x) \coloneqq \min\{u_1(x),\,u_2(x)\} \end{equation} for any $x \in \mathbb{R}^n$. First of all, we prove the following inequality: \begin{equation}\label{submodularNonlocalTotalVari} [u_{+}]_{K}(\mathbb{R}^n) + [u_{-}]_{K}(\mathbb{R}^n) \leq [u_1]_{K}(\mathbb{R}^n) + [u_2]_{K}(\mathbb{R}^n). \end{equation} Indeed, by the co-area formula, we have that \begin{align}\label{coareaFormulaTV} [u_{i}]_{K}(\mathbb{R}^n) &= \int_{-\infty}^{\infty}\,\frac{1}{2}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y)\,|\chi_{\{u_{i}>t\}}(x) - \chi_{\{u_{i}>t\}}(y)| \,dx\,dy\,dt \nonumber\\ &= \int_{-\infty}^{\infty} P_K(\{u_{i} > t\}) \,dt \end{align} for any $i \in \{1,\,2,\,+,\,-\}$. We recall that the nonlocal perimeter $P_K$ is sub-modular, namely, it holds that \begin{equation}\label{submodularNonlocalPeri} P_K(E \cup F) + P_K(E \cap F) \leq P_K(E) + P_K(F) \end{equation} for any measurable sets $E,\,F\subset \mathbb{R}^n$. Therefore, from \eqref{submodularNonlocalPeri}, the definitions of $u_{+}$ and $u_{-}$, and the identities $\{u_{+}>t\} = \{u_1>t\} \cup \{u_2>t\}$ and $\{u_{-}>t\} = \{u_1>t\} \cap \{u_2>t\}$, we obtain the claim.
Now, since the general theory of the calculus of variations ensures that the minimizer of $\mathcal{F}_{K,f}$ is unique in $L^2(\mathbb{R}^n)$ for each datum $f$, in order to obtain the lemma it is sufficient to prove that \begin{equation}\nonumber \mathcal{F}_{K,f_2}(u_{+}) \leq \mathcal{F}_{K,f_2}(u_2), \end{equation} where $u_{+}$ is defined in \eqref{maxMin}. From a simple computation, we can easily see that the inequality \begin{equation}\label{computationMinimizers} (u_{-}- f_1)^2 + (u_{+}- f_2)^2 \leq (u_1- f_1)^2 + (u_2- f_2)^2 \end{equation} holds in $\mathbb{R}^n$. From the minimality of $u_i$ for $i\in\{1,\,2\}$, we have \begin{equation}\label{estimate01} \mathcal{F}_{K,f_1}(u_1) + \mathcal{F}_{K,f_2}(u_2) \leq \mathcal{F}_{K,f_1}(u_{-}) + \mathcal{F}_{K,f_2}(u_{+}). \end{equation} On the other hand, from \eqref{submodularNonlocalTotalVari} and \eqref{computationMinimizers}, we have \begin{align}\label{estimate02} &\mathcal{F}_{K,f_1}(u_{-}) + \mathcal{F}_{K,f_2}(u_{+}) \\ &= [u_{-}]_{K}(\mathbb{R}^n) + \frac{1}{2}\int_{\mathbb{R}^n}(u_{-}- f_1)^2\,dx + [u_{+}]_{K}(\mathbb{R}^n) + \frac{1}{2}\int_{\mathbb{R}^n}(u_{+}- f_2)^2\,dx \nonumber\\ &\leq [u_1]_{K}(\mathbb{R}^n) + \frac{1}{2}\int_{\mathbb{R}^n}(u_1- f_1)^2\,dx + [u_2]_{K}(\mathbb{R}^n) + \frac{1}{2}\int_{\mathbb{R}^n}(u_2- f_2)^2\,dx \nonumber\\ &\quad +\frac{1}{2}\int_{\mathbb{R}^n}(u_{-}- f_1)^2\,dx - \frac{1}{2}\int_{\mathbb{R}^n}(u_1- f_1)^2\,dx \nonumber\\ &\quad \quad + \frac{1}{2}\int_{\mathbb{R}^n}(u_{+}- f_2)^2\,dx - \frac{1}{2}\int_{\mathbb{R}^n}(u_2- f_2)^2\,dx \nonumber\\ &\leq \mathcal{F}_{K,f_1}(u_1) + \mathcal{F}_{K,f_2}(u_2), \nonumber \end{align} where the first inequality follows from \eqref{submodularNonlocalTotalVari} and the second from \eqref{computationMinimizers}. Thus from \eqref{estimate01} and \eqref{estimate02}, we obtain \begin{equation}\label{identityEnergies} \mathcal{F}_{K,f_1}(u_1) + \mathcal{F}_{K,f_2}(u_2) = \mathcal{F}_{K,f_1}(u_{-}) + \mathcal{F}_{K,f_2}(u_{+}). \end{equation} Now suppose by contradiction that $\mathcal{F}_{K,f_2}(u_{+}) > \mathcal{F}_{K,f_2}(u_2)$.
Then from \eqref{identityEnergies} we have \begin{equation}\nonumber \mathcal{F}_{K,f_1}(u_1) > \mathcal{F}_{K,f_1}(u_{-}), \end{equation} which contradicts the minimality of $u_1$. Thus we obtain the inequality $\mathcal{F}_{K,f_2}(u_{+}) \leq \mathcal{F}_{K,f_2}(u_2)$. Therefore, by the uniqueness of the minimizer of $\mathcal{F}_{K,f_2}$ in $L^2(\mathbb{R}^n)$, we obtain that $u_{+} = u_2$ a.e. in $\mathbb{R}^n$, which implies that $u_2 \geq u_1$ a.e. in $\mathbb{R}^n$. \end{proof} \begin{corollary}\label{nonnegativityMini} Assume that the kernel $K: \mathbb{R}^n \setminus \{0\} \to \mathbb{R}$ satisfies the assumptions (K1) and (K2) in Section \ref{secELeqforEnergy}. If a datum $f \in L^2(\mathbb{R}^n)$ is non-negative a.e. in $\mathbb{R}^n$, then the minimizer $u \in BV_K \cap L^2(\mathbb{R}^n)$ is also non-negative a.e. in $\mathbb{R}^n$. \end{corollary} \begin{proof} Since it holds that \begin{equation}\nonumber \mathcal{F}_{K,0}(0) = 0 \leq \mathcal{F}_{K,0}(v) \end{equation} for every $v \in BV_K \cap L^2(\mathbb{R}^n)$, we have that the unique solution of the problem \begin{equation}\nonumber \inf\{\mathcal{F}_{K,0}(v) \mid v \in BV_K \cap L^2(\mathbb{R}^n)\} \end{equation} is $v=0$. Hence, by applying Lemma \ref{comparisonMini} with $f_1=0$ and $f_2=f$, we obtain that $0 \leq u$ a.e. in $\mathbb{R}^n$. \end{proof} Finally, we show a comparison property of minimizers under the assumption that the datum $f$ is bounded in $\mathbb{R}^n$. We derive the following proposition not from Lemma \ref{comparisonMini} but from a simple direct computation. \begin{proposition}\label{comparisonLinfty} Let $u \in BV_K \cap L^2(\mathbb{R}^n)$ be a minimizer of $\mathcal{F}_{K,f}$ with a datum $f \in L^2(\mathbb{R}^n)$. Assume that the kernel $K:\mathbb{R}^n \setminus \{0\} \to \mathbb{R}$ is a non-negative measurable function. If there exists a constant $C>0$ such that $|f(x)| \leq C$ for a.e. $x \in \mathbb{R}^n$, then $|u(x)| \leq C$ for a.e.
$x \in \mathbb{R}^n$ with the same constant $C$. \end{proposition} \begin{proof} It is sufficient to show that, if $f \leq C$ a.e. in $\mathbb{R}^n$ with some constant $C>0$, then $u \leq C$ a.e. in $\mathbb{R}^n$ with the same constant $C$, since the bound from below follows by repeating the same argument. We define $v(x) \coloneqq \min\{u(x),\,C\}$ for $x \in \mathbb{R}^n$; then it suffices to show that $u = v$ a.e. in $\mathbb{R}^n$. From the definition, we can show that $|v(x) - v(y)| \leq |u(x) - u(y)|$ for $x,\,y \in \mathbb{R}^n$. Indeed, if $u(x),\,u(y) \leq C$ or $u(x),\,u(y) > C$, then we can readily obtain the claim. If $u(x) \leq C$ and $u(y) > C$, then we have \begin{align} |u(x) - u(y)|^2 - |v(x) - v(y)|^2 &= u^2(y) - C^2 - 2u(x)\,u(y) + 2u(x)\,C \nonumber\\ &= (u(y)-C)(u(y) + C - 2u(x)) \geq 0. \nonumber \end{align} In the same way, we can prove the claim if $u(x) > C$ and $u(y) \leq C$. Moreover, since $f \leq C$ a.e. in $\mathbb{R}^n$, we can show that $(v-f)^2 \leq (u-f)^2$ a.e. in $\mathbb{R}^n$. Therefore we can compare the values of the functional at $v$ and $u$ as follows: \begin{align} \mathcal{F}_{K,f}(v) &= \frac{1}{2}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y)\,|v(x)-v(y)|\,dx\,dy + \frac{1}{2}\int_{\mathbb{R}^n}(v-f)^2 \,dx \nonumber\\ & \leq \frac{1}{2}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y)\,|u(x)-u(y)|\,dx\,dy + \frac{1}{2}\int_{\mathbb{R}^n}(u-f)^2 \,dx \nonumber\\ &= \mathcal{F}_{K,f}(u). \nonumber \end{align} Thus, from the uniqueness of the minimizer of $\mathcal{F}_{K,f}$ in $L^2(\mathbb{R}^n)$, we obtain $v = u$ a.e. in $\mathbb{R}^n$ and this concludes the proof. \end{proof} \subsection{Characterization of minimizers for $\mathcal{F}_{K,f}$}\label{secCharacteriMinimizers} In this section, we show the following claim which gives a relation between the minimizers of $\mathcal{F}_{K,f}$ and $\mathcal{E}_{K,f,t}$ for $t \in \mathbb{R}$.
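Both the co-area formula \eqref{coareaFormulaTV} and the submodularity \eqref{submodularNonlocalPeri}, which drive the comparison argument above and reappear in this section, have exact discrete counterparts that can be verified directly. The following Python sketch (our own finite toy model on a one-dimensional point set, with an arbitrarily chosen kernel; not part of the proofs) checks both properties.

```python
import numpy as np

# Discrete toy model: points x_1,...,x_m in [0,1], kernel matrix
# K_ij = |x_i - x_j|^{-(1+s)} with s = 0.5 and zero diagonal. Then
# tv(u) = 0.5 * sum_ij K_ij |u_i - u_j| plays the role of [u]_K and
# perimeter(E) = tv(chi_E) plays the role of P_K(E).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 12)
D = np.abs(x[:, None] - x[None, :])
K = np.where(D > 0, D, 1.0) ** (-1.5)
np.fill_diagonal(K, 0.0)

def tv(u):
    return 0.5 * np.sum(K * np.abs(u[:, None] - u[None, :]))

def perimeter(chi):
    return tv(chi.astype(float))

# co-area formula: for integer-valued u with values in {0,...,3},
# tv(u) equals the sum of the perimeters of the super-level sets
u = rng.integers(0, 4, size=12).astype(float)
coarea = sum(perimeter(u > k + 0.5) for k in range(3))
assert abs(tv(u) - coarea) < 1e-8

# submodularity: P(E u F) + P(E n F) <= P(E) + P(F)
E = rng.integers(0, 2, size=12).astype(bool)
F = rng.integers(0, 2, size=12).astype(bool)
assert perimeter(E | F) + perimeter(E & F) <= perimeter(E) + perimeter(F) + 1e-8
```

The submodularity check succeeds for every pair $E,\,F$ because the inequality holds pointwise for the integrand, exactly as in the continuous case.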
Recall that $\mathcal{E}_{K,f,t}(E)$ is defined as \begin{equation}\label{nonlocalPeriEnergy} \mathcal{E}_{K,f,t}(E) \coloneqq P_K(E) + \int_{E}(t-f(x))\,dx \end{equation} for every measurable set $E \subset \mathbb{R}^n$, where $f \in L^2(\mathbb{R}^n)$ is a given datum and $t\in\mathbb{R}$ is any number. \begin{lemma}\label{relationMiniTwoEnergies} Assume that the kernel satisfies $K(x) = |x|^{-(n+s)}$ for $x \in \mathbb{R}^n \setminus \{0\}$ with $s \in (0,\,1)$ and that a datum $f \in L^2 \cap L^{\infty}(\mathbb{R}^n)$ is given. If $u \in BV_K \cap L^2(\mathbb{R}^n)$ is a minimizer of $\mathcal{F}_{K,f}$, then, for every $t\in\mathbb{R}$, the super-level set $\{x\in\mathbb{R}^n \mid u(x)>t\}$ is a minimizer of $\mathcal{E}_{K,f,t}$ among measurable sets $E \subset \mathbb{R}^n$. \end{lemma} \begin{proof} Let $F \subset \mathbb{R}^n$ be any measurable set. We may assume that $P_K(F) < \infty$; otherwise this set cannot minimize the functional $\mathcal{E}_{K,f,t}$. Moreover, we may assume that $\|\chi_F\|_{L^1} = |F| <\infty$ because of the nonlocal isoperimetric inequality. Then it suffices to show that the super-level set $\{u > t\}$ for each $t\in\mathbb{R}$ satisfies the inequality \begin{equation}\label{minimalityInequality} P_K(\{u > t\}) + \int_{\{u > t\}}(t-f(x))\,dx \leq P_K(F) + \int_{F}(t-f(x))\,dx. \end{equation} From Lemma \ref{equivMinimizerAndSolution} and the assumption that $u$ is a minimizer of the functional $\mathcal{F}_{K,f}$, we have that $u$ is also a solution of the equation \begin{equation}\label{nonlocalEq} -\Delta^K_1 u = u -f \quad \text{ in }\mathbb{R}^n.
\end{equation} Thus, from Definition \ref{defSolutionNonlocal1Lap}, there exists a function $z_u\in L^{\infty}(\mathbb{R}^n \times \mathbb{R}^n)$ with $|z_u| \leq 1$ and $z_u$ antisymmetric such that \begin{equation}\label{defSolNonlocalEq} \frac{1}{2}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y)\,z_u(x,\,y)\,(w(x) - w(y))\,dx\,dy = \int_{\mathbb{R}^n}(u-f)\,w(x)\,dx \end{equation} for any $w \in C^{\infty}_c(\mathbb{R}^n)$ with $[w]_K(\mathbb{R}^n) < \infty$ and, moreover, \begin{equation}\label{propSolNonlocalEq} z_u(x,\,y)(u(y) - u(x)) = |u(y) - u(x)| \end{equation} for a.e. $x,\,y\in \mathbb{R}^n$. From the co-area formula, we have the following two identities: \begin{equation}\label{coarea01} |u(x) - u(y)| = \int_{-\infty}^{+\infty}|\chi_{\{u > t\}}(x) - \chi_{\{u > t\}}(y)|\,dt \end{equation} and \begin{equation}\label{coarea02} u(x) - u(y) = \int_{-\infty}^{+\infty}(\chi_{\{u > t\}}(x) - \chi_{\{u > t\}}(y))\,dt \end{equation} for any measurable $u:\mathbb{R}^n \to \mathbb{R}$ and a.e. $x,\,y \in \mathbb{R}^n$. Thus, from \eqref{propSolNonlocalEq}, \eqref{coarea01}, and \eqref{coarea02}, together with the pointwise bound $z_u(x,\,y)(\chi_{\{u > t\}}(y) - \chi_{\{u > t\}}(x)) \leq |\chi_{\{u > t\}}(y) - \chi_{\{u > t\}}(x)|$, we obtain \begin{equation}\label{identityLevelset} z_u(x,\,y)(\chi_{\{u > t\}}(y) - \chi_{\{u > t\}}(x)) = |\chi_{\{u > t\}}(y) - \chi_{\{u > t\}}(x)| \end{equation} for a.e. $t \in \mathbb{R}$. Now we fix $t \in \mathbb{R}$ such that \eqref{identityLevelset} holds. From the specific choice of $K(x) = |x|^{-(n+s)}$, the function space $BV_K(\mathbb{R}^n)$ coincides with the fractional Sobolev space $W^{s,1}(\mathbb{R}^n)$. Recall that the space $C^{\infty}_c(\mathbb{R}^n)$ of smooth functions with compact support is dense in $W^{s,1}(\mathbb{R}^n)$ (see \cite{Adam} for details).
Hence, from the fact that $P_K(\{u>t\})$ and $P_K(F)$ are finite, we can choose sequences $\{\eta_{l}^u\}_{l\in\mathbb{N}}$ and $\{\eta_{l}^F\}_{l\in\mathbb{N}}$ in $C^{\infty}_c(\mathbb{R}^n)$ such that \begin{equation}\label{approximationCharacterFunc} \eta_{l}^u \xrightarrow[l \to \infty]{} \chi_{\{u>t\}}, \quad \eta_{l}^F \xrightarrow[l \to \infty]{} \chi_{F} \quad \text{in $W^{s,1}(\mathbb{R}^n)$}. \end{equation} From the choice of the approximation, we notice that the difference function $\eta_{l}^u - \eta_{l}^F$ is also in $C^{\infty}_c(\mathbb{R}^n)$ and $[\eta_{l}^u - \eta_{l}^F]_K(\mathbb{R}^n) < \infty$ for each $l \in \mathbb{N}$. Hence, from the definition of solutions to the equation \eqref{nonlocalEq}, we obtain \begin{align}\label{identitySubstitutionCompetitor} &\int_{\mathbb{R}^n}(u-f)\,(\eta_{l}^u - \eta_{l}^F)\,dx \nonumber\\ &= -\frac{1}{2} \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y)\,z_u(x,\,y)\,[(\eta_{l}^u - \eta_{l}^F)(y) - (\eta_{l}^u - \eta_{l}^F)(x)]\,dx\,dy \nonumber\\ &= -\frac{1}{2} \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y)\,z_u(x,\,y)\,(\eta_{l}^u(y) - \eta_{l}^u(x))\,dx\,dy \nonumber\\ &\qquad + \frac{1}{2} \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y)\,z_u(x,\,y)\,(\eta_{l}^F(y) - \eta_{l}^F(x))\,dx\,dy. \end{align} By applying Proposition \ref{comparisonLinfty} and from the assumption that $f \in L^{\infty}(\mathbb{R}^n)$, we have that the minimizer $u$ is also in $L^{\infty}(\mathbb{R}^n)$ and thus \begin{equation}\label{convergenceLinearTerm} \left|\int_{\mathbb{R}^n}(u-f)(\eta_{l}^u - \eta_{l}^F)\,dx - \int_{\mathbb{R}^n}(u-f)(\chi_{\{u>t\}} - \chi_{F})\,dx \right| \xrightarrow[l \to \infty]{} 0. 
\end{equation} Hence by applying the dominated convergence theorem and from \eqref{approximationCharacterFunc}, \eqref{identitySubstitutionCompetitor}, and \eqref{convergenceLinearTerm}, we obtain that \begin{align}\label{identitySubstitutionLimit} &\int_{\mathbb{R}^n}(u-f)(\chi_{\{u>t\}} - \chi_{F})\,dx \nonumber\\ &= \lim_{l \to \infty}\int_{\mathbb{R}^n}(u-f)\,(\eta_{l}^u - \eta_{l}^F) \,dx \nonumber\\ &= -\frac{1}{2} \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y)\,z_u(x,\,y)\,(\chi_{\{u>t\}}(y) - \chi_{\{u>t\}}(x))\,dx\,dy \nonumber\\ &\qquad +\frac{1}{2} \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y) \,z_u(x,\,y)\,(\chi_{F}(y) - \chi_{F}(x))\,dx\,dy. \end{align} From the bound $|z_u| \leq 1$, we have \begin{align}\label{perimeterAnySetF} &\frac{1}{2} \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y)\,z_u(x,\,y)\,(\chi_{F}(y) - \chi_{F}(x))\,dx\,dy \nonumber\\ &\leq \frac{1}{2} \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y)|\chi_{F}(x) - \chi_{F}(y)|\,dx\,dy = P_K(F). \end{align} Taking into account \eqref{identityLevelset}, \eqref{identitySubstitutionLimit}, and \eqref{perimeterAnySetF}, we obtain \begin{align}\label{periEstimateOverall01} &\int_{\mathbb{R}^n}(u-f)\,(\chi_{\{u>t\}} - \chi_{F})\,dx \nonumber\\ &\leq - \frac{1}{2} \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y)|\chi_{\{u>t\}}(x) - \chi_{\{u>t\}}(y)| \,dx\,dy + P_{K}(F). \end{align} Regarding the left-hand side of \eqref{periEstimateOverall01}, we have \begin{align}\label{estimateLHS} \int_{\mathbb{R}^n}(u-f)\,(\chi_{\{u>t\}} - \chi_{F}) \,dx &= \int_{\mathbb{R}^n}(u-t+t-f)\,(\chi_{\{u>t\}} - \chi_{F}) \,dx \nonumber\\ &\geq \int_{\{u > t\} \cap F^c} (t-f) \,dx - \int_{\{u \leq t\} \cap F} (u-f) \,dx \nonumber\\ &\geq \int_{\{u > t\} \cap F^c} (t-f) \,dx - \int_{\{u \leq t\} \cap F} (t-f) \,dx \nonumber\\ &= \int_{\mathbb{R}^n}(t-f)\,(\chi_{\{u>t\}} - \chi_{F})\,dx \end{align} for a.e. $t\in\mathbb{R}$.
Hence, from \eqref{periEstimateOverall01} and \eqref{estimateLHS}, we have \begin{align}\label{periEstimateOverall02} &P_K(\{u>t\}) + \int_{\mathbb{R}^n}(t-f)\,\chi_{\{u > t\}}\,dx \nonumber\\ &= \frac{1}{2} \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}K(x-y)|\chi_{\{u>t\}}(x) - \chi_{\{u>t\}}(y)|\,dx\,dy + \int_{\mathbb{R}^n}(t-f)\,\chi_{\{u > t\}}\,dx \nonumber\\ &\leq P_{K}(F) + \int_{\mathbb{R}^n} (t-f)\,\chi_{F}\,dx \end{align} for a.e. $t\in\mathbb{R}$. Therefore we conclude that the inequality \eqref{minimalityInequality} holds for a.e. $t\in\mathbb{R}$. Notice that, for any $t\in\mathbb{R}$ for which \eqref{identityLevelset} does not hold, we can choose a sequence $\{t_j\}_{j\in\mathbb{N}}$ such that $t_j \downarrow t$ as $j \to \infty$ and \eqref{identityLevelset} holds for every $t_j$; otherwise we could choose a constant $\delta>0$ such that $B_{\delta}(t) \subset \{t\in\mathbb{R} \mid \text{\eqref{identityLevelset} is not true}\}$. Since the condition \eqref{identityLevelset} holds for a.e. $t \in \mathbb{R}$, we would then have \begin{equation}\nonumber 0< 2\delta = |B_{\delta}(t)| \leq |\{t\in\mathbb{R} \mid \text{\eqref{identityLevelset} is not true}\}| = 0, \end{equation} which is a contradiction. Thus, from the lower semi-continuity of $P_K$ and the continuity of the map $t \mapsto |\{u>t\}|$, we conclude that \eqref{minimalityInequality} holds for every $t\in\mathbb{R}$. \end{proof} \subsection{Boundedness of super-level sets of minimizers}\label{secBoundednessSuperLeverlsets} Let $u \in BV_K \cap L^2(\mathbb{R}^n)$ be a minimizer of $\mathcal{F}_{K,f}$ with a datum $f \in L^p(\mathbb{R}^n)$, $p \in (\frac{n}{s},\,\infty]$. In this section, we show that the super-level set $\{u>t\}$ for each $t \in \mathbb{R}$ is bounded up to negligible sets.
Precisely, we prove the following lemma. \begin{lemma}\label{boundednessMinimizers} Assume that the kernel $K(x) = |x|^{-(n+s)}$ for $x \in \mathbb{R}^n \setminus \{0\}$ with $s \in (0,\,1)$ and $f \in L^p(\mathbb{R}^n)$ with $p \in (\frac{n}{s},\,\infty]$. If, for $T\in\mathbb{R}$, $E_T$ is a minimizer of $\mathcal{E}_{K,f,T}$ among sets with finite volume, then there exists a constant $R_T >0$ such that $|E_T \setminus B_{R_T}|=0$. \end{lemma} \begin{proof} We basically follow the proof shown in \cite[Proposition 3.2]{CeNo}. Suppose by contradiction that $|E_T \setminus B_r| > 0$ for any $r > 0$. Setting $\phi_T(r) \coloneqq |E_T \setminus B_r|$ for any $r >0$, we have \begin{equation}\nonumber (\phi_T)'(r) = - \mathcal{H}^{n-1}(E_T \cap \partial B_r) \end{equation} for a.e. $r>0$. We fix any $R > 1$. From the minimality of $E_T$, we have \begin{equation}\label{inequalityByMinimality00} \mathcal{E}_{K,f,T}(E_T) \leq \mathcal{E}_{K,f,T}(E_T \cap B_r). \end{equation} Since it holds that \begin{equation}\nonumber P_K(A \cup B) = P_K(A) + P_K(B) - 2 \int_{A}\int_{B} K(x-y)\,dx\,dy \end{equation} for sets $A,\,B \subset \mathbb{R}^n$ with $A\cap B = \emptyset$, we have \begin{equation}\label{inequalityByMinimality} P_K(E_T\setminus B_r) \leq 2\int_{ E_T \cap B_r}\int_{E_T\setminus B_r } K(x-y)\,dx\,dy - \int_{E_T \setminus B_r}(T - f(x))\,dx. \end{equation} From the isoperimetric inequality for the nonlocal perimeter (see for instance \cite{FFMMM}), we have the following lower bound for the left-hand side of \eqref{inequalityByMinimality}: \begin{equation}\label{isoperiNonlocalPeri} P_K(E_T\setminus B_r) \geq \frac{P_K(B_1)}{|B_1|^{\frac{n-s}{n}}}\,|E_T\setminus B_r|^{\frac{n-s}{n}} = C(n,s)\,\phi_T^{\frac{n-s}{n}}(r) \end{equation} for $r \geq R$, where we set $C(n,s) \coloneqq |B_1|^{-\frac{n-s}{n}}\,P_K(B_1)$.
Secondly, from Fubini-Tonelli's theorem and the co-area formula, we can estimate the first term of the right-hand side in \eqref{inequalityByMinimality} as follows: \begin{align}\label{estifromMini01} \int_{E_T \cap B_r}\int_{E_T\setminus B_r} K(x-y)\,dx\,dy &\leq \int_{E_T\setminus B_r}\int_{\mathbb{R}^n \setminus B_{|y|-r}(y)} \frac{1}{|x-y|^{n+s}}\,dx\,dy \nonumber\\ &= \int_{E_T\setminus B_r}|\mathbb{S}^{n-1}|\int_{|y|-r}^{\infty} \frac{1}{\rho^{1+s}}\,d\rho\,dy \nonumber\\ &= \frac{|\mathbb{S}^{n-1}|}{s}\int_{E_T \setminus B_r}(|y|-r)^{-s}\,dy \nonumber\\ &= \frac{|\mathbb{S}^{n-1}|}{s} \int_{r}^{+\infty}\frac{\mathcal{H}^{n-1}(E_T \cap \partial B_{\sigma})}{(\sigma - r)^s}\,d\sigma \nonumber\\ &= -\frac{|\mathbb{S}^{n-1}|}{s} \int_{r}^{+\infty} \frac{(\phi_T)'(\sigma)}{(\sigma - r)^s}\,d\sigma \end{align} for any $r \geq R$, where we used the fact that $|x-y| \geq |y|-r$ for $x \in B_r$ and $y \notin B_r$. Finally, regarding the second term of the right-hand side in \eqref{inequalityByMinimality}, from the assumption on $f$ and H\"older's inequality (if $p \neq \infty$), we have \begin{align}\label{estifromMini02} \int_{E_T \setminus B_r }(-T + f(x))\,dx &\leq |T|\,|E_T \setminus B_r| + \|f\|_{L^p(\mathbb{R}^n)}\,|E_T \setminus B_r|^{\frac{1}{q}} \nonumber\\ &= |T|\,\phi_T(r) + \|f\|_{L^p(\mathbb{R}^n)}\,\phi_T^{\frac{1}{q}}(r) < \infty \end{align} for any $r \geq R>1$, where $q \geq 1$ satisfies $p^{-1}+q^{-1}=1$. By combining the computations \eqref{isoperiNonlocalPeri}, \eqref{estifromMini01}, and \eqref{estifromMini02} with \eqref{inequalityByMinimality}, we obtain \begin{equation}\label{estiByMinimality} C(n,s)\,\phi_T^{\frac{n-s}{n}}(r) \leq -C_1 \int_{r}^{+\infty} \frac{(\phi_T)'(\sigma)}{(\sigma - r)^s}\,d\sigma + |T|\,\phi_T(r) + \|f\|_{L^p(\mathbb{R}^n)}\,\phi_T^{\frac{1}{q}}(r) \end{equation} for any $r \geq R$, where we set $C_1 \coloneqq \frac{2|\mathbb{S}^{n-1}|}{s}$.
Since $\phi_T(r)$ vanishes as $r \to \infty$ and $\frac{1}{q} > \frac{n-s}{n}$, we have that \begin{equation}\nonumber 2|T|\,\phi_T(r) + 2\|f\|_{L^p(\mathbb{R}^n)}\phi_T^{\frac{1}{q}}(r) \leq C(n,s)\,\phi_T^{\frac{n-s}{n}}(r) \end{equation} for sufficiently large $r \geq R$. Hence, by integrating both sides of \eqref{estiByMinimality} over $r \in (R,\infty)$, we obtain \begin{equation}\label{estiByMinimality01} \frac{C(n,s)}{2}\int_{R}^{\infty}\phi_T^{\frac{n-s}{n}}(r)\,dr \leq -C_1 \int_{R}^{\infty}\int_{r}^{+\infty} \frac{(\phi_T)'(\sigma)}{(\sigma - r)^s}\,d\sigma\,dr. \end{equation} By exchanging the order of integration, we have \begin{equation}\label{exchangeIntergration} \int_{R}^{\infty}\int_{r}^{+\infty} \frac{(\phi_T)'(\sigma)}{(\sigma - r)^s}\,d\sigma\,dr = \int_{R}^{\infty}\int_{R}^{\sigma} \frac{(\phi_T)'(\sigma)}{(\sigma - r)^s}\,dr\,d\sigma. \end{equation} Then, by employing a computation similar to that in \cite{CeNo}, we obtain \begin{equation}\nonumber \int_{R}^{\infty}\int_{R}^{\sigma} \frac{(\phi_T)'(\sigma)}{(\sigma - r)^s}\,dr\,d\sigma \geq -\frac{\phi_T(R)}{1-s} - \int_{R+1}^{\infty} \frac{\phi_T(\sigma)}{(\sigma - R)^s}\,d\sigma. \end{equation} Therefore, from \eqref{estiByMinimality01}, we have \begin{align} \frac{C(n,s)}{2}\int_{R}^{\infty}\phi_T^{\frac{n-s}{n}}(r)\,dr &\leq C_1\frac{\phi_T(R)}{1-s} + C_1\int_{R+1}^{\infty} \frac{\phi_T(\sigma)}{(\sigma - R)^s}\,d\sigma \nonumber\\ &\leq C_1\frac{\phi_T(R)}{1-s} + C_1\int_{R+1}^{\infty}\phi_T(\sigma)\,d\sigma. \nonumber \end{align} Again, by choosing $R$ sufficiently large so that the inequality \begin{equation}\nonumber C_1\int_{R+1}^{\infty}\phi_T(r)\,dr \leq \frac{C(n,s)}{4}\int_{R}^{\infty}\phi_T^{\frac{n-s}{n}}(r)\,dr \end{equation} holds, we have \begin{equation}\nonumber \int_{R}^{\infty}\phi_T^{\frac{n-s}{n}}(r)\,dr \leq \frac{4C_1}{C(n,s)(1-s)}\,\phi_T(R).
\end{equation} Then, by applying the method shown in, for instance, \cite{CNRV, CeNo}, we reach a contradiction with the assumption that $\phi_T(r)>0$ for any $r>0$. Therefore, we conclude the essential boundedness of the set $E_T$. \end{proof} We now assume that $u \in BV_K \cap L^2(\mathbb{R}^n)$ is a minimizer of the functional $\mathcal{F}_{K,f}$ and that $u$ is bounded from below by a constant $c \in \mathbb{R}$. Then, since the super-level set $\{u > c\}$ is also a minimizer of $\mathcal{E}_{K,f,c}$, we obtain from Lemma \ref{boundednessMinimizers} that there exists a constant $R_c>1$ such that $|\{u>c\} \setminus B_{R_c}| = 0$. In addition, we have the inclusion $\{u>t'\} \subset \{u>t\}$ for any $t' > t$. Thus, we conclude that the following corollary holds. \begin{corollary}\label{corUniformBoundednessMinimizer} Assume that the kernel $K(x) = |x|^{-(n+s)}$ for $x \in \mathbb{R}^n \setminus \{0\}$ with $s \in (0,\,1)$. Let $u \in BV_K \cap L^2(\mathbb{R}^n)$ be a minimizer of $\mathcal{F}_{K,f}$. If a datum $f$ is in $L^p(\mathbb{R}^n)$ with $p \in (\frac{n}{s},\,\infty]$ and $u \geq c$ a.e. in $\mathbb{R}^n$ for some $c \in \mathbb{R}$, then the super-level set $\{u>t\}$ is uniformly bounded with respect to $t \geq c$. Namely, there exists $R_c>0$, independent of $t$, such that $\{u>t\} \subset B_{R_c}$ for any $t \geq c$. \end{corollary} \section{H\"older regularity of minimizers}\label{secLocalHolderConti} First of all, we observe that, if $u$ satisfies the Euler-Lagrange equations associated with $\mathcal{E}_{K,f,t}$ and the boundary of $\{u>t\}$ is regular, then $u$ is continuous. \begin{proposition}\label{continuityMinimizersLemma} Assume that $K(x) = |x|^{-(n+s)}$ for $x \in \mathbb{R}^n \setminus \{0\}$ with $s \in (0,\,1)$ and that the datum $f$ is in $L^2 \cap L^{\infty}(\mathbb{R}^n)$. Let $u \in BV_K \cap L^2(\mathbb{R}^n)$.
Assume that $\partial \{u > t\}$ is of class $C^{1,\alpha}$ with $\alpha \in (s,\,1]$ and that $u$ satisfies the equation \begin{equation*} H^s_{\{u>t\}}(x) + t - f(x) = 0 \end{equation*} for any $x \in \partial \{u>t\}$ and $t\in\mathbb{R}$. Then $u$ is continuous in $\mathbb{R}^n$. \end{proposition} \begin{proof} Suppose by contradiction that $u$ is not continuous in $\mathbb{R}^n$. Then there exist a point $x_0 \in \mathbb{R}^n$ and constants $-\infty< t' < t < \infty$ such that $x_0 \in \partial \{u>t\} \cap \partial \{u>t'\}$. Indeed, if $u$ is not continuous at $x_0$, then it holds that $t_{+} \coloneqq \limsup_{x \to x_0}u(x) > \liminf_{x \to x_0}u(x) \eqqcolon t_{-}$. Note that $t_{+} \geq u(x_0) \geq t_{-}$ by definition. Setting $\delta \coloneqq t_{+} - t_{-}>0$ and using the definition of $t_{+}$, we can choose a sequence $\{x_k\}_{k\in\mathbb{N}}$ such that $x_k \to x_0$ and $u(x_k) > t_{+} - \frac{\delta}{2^{k}}$ for any $k\in\mathbb{N}$ with $k \geq 1$. If $u(x_0) = t_{+}$, then we have that $x_k \in \{u > u(x_0) - \frac{\delta}{2}\}$ for large $k\in\mathbb{N}$. Thus we obtain that $x_0 \in \overline{\{u > u(x_0) - \frac{\delta}{2} \}}$. However, from the definition of $\delta$, $x_0$ cannot be an interior point of $\{u > u(x_0) - \frac{\delta}{2} \}$; otherwise we could choose a sequence $\{y_k\}_{k\in\mathbb{N}}$ such that \begin{equation}\nonumber u(x_0) - \frac{\delta}{2} < u(y_k) < t_{-} + \frac{\delta}{2^{k}} \end{equation} for any large $k$. From the definition of $\delta$ and the fact that $u(x_0)=t_{+}$, we obtain a contradiction. Thus we may assume that $u(x_0) < t_{+}$. Setting $\tilde{\delta} \coloneqq t_{+} - u(x_0)>0$ and recalling that $u(x_k) > t_{+} - \frac{\delta}{2^{k}}$ for any $k\in\mathbb{N}$, we have that $u(x_k) > u(x_0) + \frac{1}{2}\tilde{\delta}$ for any $k\in\mathbb{N}$ with $2^{k} > 2\delta\,\tilde{\delta}^{-1}$, and hence that $x_k \in \{u > u(x_0) + \frac{1}{2}\tilde{\delta}\}$ for large $k\in\mathbb{N}$.
Hence, recalling that $x_k \to x_0$ as $k\to\infty$, we obtain that $x_0 \in \partial \{u > u(x_0) + \frac{1}{2}\tilde{\delta} \}$. In the same way, we can show that $x_0 \in \partial \{u > u(x_0) + \frac{3}{4}\tilde{\delta} \}$. Therefore, we conclude that, if $u$ is not continuous at $x_0$, we can find distinct constants $t,\,t'\in\mathbb{R}$ such that $x_0 \in \partial \{u>t\} \cap \partial \{u>t'\}$. From the assumptions, we obtain that the following equations hold: \begin{equation}\label{eulerLagrange01} H_{E_t}^s(x) + t - f(x) = 0 \end{equation} and \begin{equation}\label{eulerLagrange02} H_{E_{t'}}^s(x) + t' - f(x) = 0 \end{equation} for each $x \in \partial E_t \cap \partial E_{t'}$, where we set $E_\tau \coloneqq \{u>\tau\}$ for $\tau \in \mathbb{R}$. Recall that the nonlocal mean curvature associated with $K(x) = |x|^{-(n+s)}$ is well-defined at each point on $\partial E$ if $\partial E$ is at least of class $C^{1,\alpha}$ with $\alpha>s$ (see, for instance, \cite[Corollary 3.5]{Cozzi}). Now we can readily see that, if two sets $E$ and $F$ satisfy $E \subset F$ and $\partial E \cap \partial F \not= \emptyset$, then it holds that $H^s_{E} \geq H^s_{F}$ on $\partial E \cap \partial F$. Indeed, by definition, we have \begin{align}\label{nonlocalMeanCurvature} H^s_{E}(x) - H^s_{F}(x) &= \text{p.v.}\,\int_{\mathbb{R}^n}\frac{\chi_{E}(x) - \chi_{E}(y)}{|x-y|^{n+s}}\,dy \nonumber\\ &\qquad - \text{p.v.}\,\int_{\mathbb{R}^n}\frac{\chi_{F}(x) - \chi_{F}(y)}{|x-y|^{n+s}}\,dy \nonumber\\ &= \text{p.v.}\,\int_{\mathbb{R}^n}\frac{\chi_{E}(x) - \chi_{F}(x) - \chi_{E}(y) + \chi_{F}(y)}{|x-y|^{n+s}}\,dy \end{align} for any $x \in \partial E \cap \partial F$. Since $E \subset F$, it holds that $\chi_E \leq \chi_F$ in $\mathbb{R}^n$ and $\chi_E(x)=\chi_F(x)$ for any $x \in \partial E \cap \partial F$. Thus from \eqref{nonlocalMeanCurvature} and the non-negativity of $K$, we obtain the claim.
Therefore, from \eqref{eulerLagrange01}, \eqref{eulerLagrange02}, and the fact that $H^s_{E_{t}} \geq H^s_{E_{t'}}$ on $\partial E_t \cap \partial E_{t'}$ (recall that $E_t \subset E_{t'}$ since $t' < t$), we obtain \begin{equation} t' - f(x_0) \geq t - f(x_0) \nonumber \end{equation} and it turns out that $t' \geq t$. This contradicts the fact that $t' < t$. \end{proof} \subsection{Regularity of boundaries of super-level sets for minimizers} Now we show some regularity results for the boundary of the set $\{u>t\}$ for each $t$ under suitable assumptions on the datum $f$, where $u$ is a minimizer of $\mathcal{F}_{K,f}$ with $K(x)=|x|^{-(n+s)}$. From Proposition \ref{comparisonLinfty}, we have that $u \in L^{\infty}(\mathbb{R}^n)$ whenever $f \in L^2 \cap L^{\infty}(\mathbb{R}^n)$. Since $\{u>t\} = \mathbb{R}^n$ if $t < - \|u\|_{L^{\infty}}$ and $\{u>t\} = \emptyset$ if $t \geq \|u\|_{L^{\infty}}$, in the sequel we focus on the set $\{u>t\}$ only for $t \in [-\|u\|_{L^{\infty}}, \, \|u\|_{L^{\infty}})$ if $f \in L^{\infty}(\mathbb{R}^n)$. Recall that, from Corollary \ref{corUniformBoundednessMinimizer}, the super-level set $\{u>t\}$ is bounded uniformly in $t \in [-\|u\|_{L^{\infty}}, \, \|u\|_{L^{\infty}})$. To obtain our main result on the regularity of minimizers, we exploit the regularity results proved by Caputo and Guillen \cite{CaGu}; Figalli, Fusco, Maggi, Millot, and Morini \cite{FFMMM}; Savin and Valdinoci \cite{SaVa}; and Barrios, Figalli, and Valdinoci \cite{BFV}. Before recalling these results, we give the definition of ``almost'' minimizers of $P_s$ in the sense of Figalli et al.\ \cite{FFMMM}. Given $\Lambda>0$, we say that a measurable bounded set $E \subset \mathbb{R}^n$ is an {\it almost minimizer} of $P_s$ if \begin{equation}\label{definitionAlmostMinimizer} P_s(E) \leq P_s(F) + \frac{\Lambda}{1-s}|E \Delta F| \end{equation} for any measurable bounded set $F \subset \mathbb{R}^n$.
Note that the concept of almost minimality of $P_s$ was also given by Caputo and Guillen \cite{CaGu}, and their definition includes a wider variety of sets than the one by Figalli et al.\ \cite{FFMMM}. In this paper, it is sufficient to apply the definition given by Figalli et al.\ \cite{FFMMM}, and thus we do not state the definition given by Caputo and Guillen \cite{CaGu} here. First, we recall the regularity of almost minimizers of $P_s$ in the sense of \eqref{definitionAlmostMinimizer}, which was shown by Figalli, Fusco, Maggi, Millot, and Morini \cite[Corollary 3.5]{FFMMM} (see also \cite{CaGu}). This result is a nonlocal analogue of the theory of Tamanini \cite{Tamanini} on almost minimal surfaces. \begin{theorem}[\cite{FFMMM}]\label{improvementFlatnessFFMMM} If $n \geq 2$, $\Lambda > 0$, and $s_0 \in (0,\,1)$, then there exist constants $\varepsilon_0 \in (0,\,1)$, $C_0>0$, and $\alpha \in (0,\,1)$, depending only on $n$, $\Lambda$, and $s_0$, with the following property: if $E$ is an almost minimizer of $P_s$ with $s \in (s_0, \,1)$ in the sense of \eqref{definitionAlmostMinimizer}, then $\partial E$ is of class $C^{1,\alpha}$ except on a closed set of Hausdorff dimension at most $n-2$. \end{theorem} Next we recall the regularity result for fractional minimal cones in $\mathbb{R}^2$ by Savin and Valdinoci \cite{SaVa}. \begin{theorem}[\cite{SaVa}] \label{thmNonlocalCone2d} Assume that $E \subset \mathbb{R}^2$ is an $s$-fractional minimal cone, namely, an $s$-fractional minimal set satisfying $E = \lambda\,E$ for any $\lambda >0$. Then $E$ is a half-plane. \end{theorem} In particular, by combining this with the blow-up and blow-down arguments in \cite{CRS}, one may obtain that $s$-fractional minimal surfaces in $\mathbb{R}^2$ are fully $C^{1,\alpha}$-regular for any $\alpha \in (0,\,s)$.
\begin{corollary}[\cite{SaVa}] \label{corNonlocalCone2d} If $E$ is an $s$-fractional minimal set in $\Omega \subset \mathbb{R}^2$, then $\partial E \cap \Omega'$ is a $C^{1,\alpha}$-curve for any $\Omega' \Subset \Omega$. \end{corollary} Originally, the regularity of nonlocal (fractional) minimal surfaces, which are defined as the boundaries of sets minimizing the fractional perimeter, was obtained by Caffarelli, Roquejoffre, and Savin \cite{CRS}. Precisely, they proved that every fractional minimal surface is locally $C^{1,\alpha}$ except a closed set of Hausdorff dimension $n-2$. Moreover, thanks to Corollary \ref{corNonlocalCone2d}, this closed singular set of fractional minimal surfaces has Hausdorff dimension at most $n-3$. As a consequence of these regularity results, we obtain \begin{lemma}[$C^{1,\alpha}$-regularity of boundary of super-level set of minimizers]\label{holderRegualrityLemma} Let $s \in (0,\,1)$ and let $f \in L^2 \cap L^{\infty}(\mathbb{R}^n)$. Assume that $K(x) = |x|^{-(n+s)}$ for $x \in \mathbb{R}^n \setminus \{0\}$ and $u \in BV_K \cap L^2(\mathbb{R}^n)$ is a minimizer of the functional $\mathcal{F}_{K,f}$. Then, for each $t \in \mathbb{R}$, the boundary of the super-level set $\{u>t\}$ is of class $C^{1,\alpha}$ with some $0 < \alpha < 1$, except a closed set of Hausdorff dimension $n-3$. \end{lemma} \begin{proof} We fix $t \in \mathbb{R}$. Let $x_0 \in \partial \{u>t\}$ and let $r>0$ be arbitrary. First, from the assumption on $f$ and Lemma \ref{boundednessMinimizers} in Section \ref{secComparisonMini}, $u$ is non-negative and there exists a constant $R_0>0$ such that $E_t \coloneqq \{u>t\} \subset B_{\frac{R_0}{2}}$ for any $t \geq -\|u\|_{L^{\infty}}$. In order to apply Theorem \ref{improvementFlatnessFFMMM} to our case, it is sufficient to show that each set $E_t$ is an almost minimizer in the sense of \eqref{definitionAlmostMinimizer}.
From Lemma \ref{relationMiniTwoEnergies}, we know that $\{u>t\}$ is a solution to the problem \begin{equation}\nonumber \min\{\mathcal{E}_{K,f,t}(E) \mid |E| < \infty\} \end{equation} for each $t \in \mathbb{R}$. Hence, from the minimality and boundedness of $E_t$, we have that \begin{equation}\label{minimalitySublevel} \mathcal{E}_{K,f,t}(E_t) \leq \mathcal{E}_{K,f,t}(F) \end{equation} for any bounded measurable set $F \subset \mathbb{R}^n$. Hence, from \eqref{minimalitySublevel}, we can compute as follows: for any bounded measurable set $F$, we have \begin{align}\label{estimateQuasiNonlocalMini} P_{K}(E_t) - P_{K}(F) &= \mathcal{E}_{K,f,t}(E_t) - \int_{E_t}(t-f(x))\,dx \nonumber\\ &\qquad - \mathcal{E}_{K,f,t}(F) + \int_{F}(t-f(x))\,dx \nonumber\\ &\leq \int_{\mathbb{R}^n}|\chi_{E_t}- \chi_{F}|\,|t - f(x)|\,dx \nonumber\\ &= \int_{E_t \Delta F}|t - f(x)|\,dx. \end{align} Since we assume that $f \in L^{\infty}(\mathbb{R}^n)$, we have \begin{equation}\label{estimateResidue01} \int_{E_t \Delta F}|t - f(x)|\,dx \leq (|t| +\|f\|_{L^{\infty}(\mathbb{R}^n)})\,|E_t \Delta F|. \end{equation} Hence, from \eqref{estimateQuasiNonlocalMini} and \eqref{estimateResidue01}, we have \begin{equation}\nonumber P_K(E_t) \leq P_K(F) + (|t| + \|f\|_{L^{\infty}(\mathbb{R}^n)})\,|E_t \Delta F|. \end{equation} Therefore, $E_t$ satisfies \eqref{definitionAlmostMinimizer} with $\Lambda = (1-s)\,(|t| + \|f\|_{L^{\infty}(\mathbb{R}^n)}) \leq (1-s)\,(\|u\|_{L^{\infty}} + \|f\|_{L^{\infty}(\mathbb{R}^n)})$, uniformly in the relevant range of $t$, and we may apply Theorem \ref{improvementFlatnessFFMMM} and Corollary \ref{corNonlocalCone2d} to conclude that the claim is valid. \end{proof} In addition, we employ another result on the regularity of solutions to integro-differential equations via a bootstrap argument. This result is obtained by Barrios, Figalli, and Valdinoci \cite[Theorem 1.6]{BFV}. They proved the following regularity theorem on solutions to integro-differential equations. For simplicity, we do not describe the whole statement. See \cite[Theorem 1.6]{BFV} for the full statement.
\begin{theorem}\label{theoremBootstrapBFV} Let $v \in L^{\infty}(\mathbb{R}^{n-1})$ be a solution (in the viscosity sense) to the integro-differential equation \begin{equation}\nonumber \int_{\mathbb{R}^{n-1}}A_r(x',\,y')\left( v(x'+y') + v(x'-y') - 2v(x') \right) \,dy' = F(x', v(x')) \end{equation} for any $x' \in B'_r(0) \subset \mathbb{R}^{n-1}$, where $F \in C^{0,\beta}(B'_r(0))$ with $\beta \in (0,\,1]$ and $A_r$ satisfies the following assumptions: \begin{itemize} \item[(A1)] There exist constants $a_0,\,r_0>0$ and $\eta \in (0,\frac{a_0}{4})$ such that \begin{equation*} \frac{(1-s)(a_0-\eta)}{|y'|^{n+s}} \leq A_r(x',\, y') \leq \frac{(1-s)(a_0+\eta)}{|y'|^{n+s}} \end{equation*} for any $x' \in B'_r(0)$ and $y' \in B'_{r_0}(0) \setminus \{0\}$. \item[(A2)] There exists a constant $C_0>0$ such that \begin{equation*} \| A_r(\cdot,\,y') \|_{C^{0,\beta}(B'_1)} \leq \frac{C_0}{|y'|^{n+s}} \end{equation*} for any $y' \in B'_{r_0}(0) \setminus \{0\}$. \end{itemize} Then $v \in C^{1,s+\alpha}(B'_{\frac{r}{2}}(0))$ for any $\alpha < \beta$. \end{theorem} Taking into account all the above arguments, we can obtain that the boundary of the super-level set of the minimizer of $\mathcal{F}_{K,f}$ is of class $C^{2,s+\alpha-1}$ for every $\alpha \in (1-s,\,\beta)$, provided that the datum $f$ is $\beta$-H\"older continuous with $\beta \in (1-s,\,1]$. Precisely, we prove \begin{lemma}\label{improvedRegularity} Assume that $K(x) = |x|^{-(n+s)}$ for $x \in \mathbb{R}^n \setminus \{0\}$ with $s \in (0,\,1)$ and $f$ is in $L^2 \cap L^{\infty}(\mathbb{R}^n)$. Let $u \in BV_K \cap L^2(\mathbb{R}^n)$ be a minimizer of the functional $\mathcal{F}_{K,f}$. If a datum $f$ is in $C^{0,\beta}_{loc}(\mathbb{R}^n)$ with $\beta \in (1-s,\,1]$, then for each $t \in \mathbb{R}$, the boundary of the super-level set $\{u > t\}$ is of class $C^{2,s+\alpha-1}$ with $1-s < \alpha < \beta \leq 1$ except a closed set of Hausdorff dimension $n-3$.
\end{lemma} \begin{proof} From Lemma \ref{holderRegualrityLemma} and the assumption that $f \in C^{0,\beta}_{loc} \cap L^{\infty}(\mathbb{R}^n)$ with $\beta \in (1-s,\,1]$, the boundary of the set $\{u > t\}$ has full $C^{1,\alpha}$-regularity with some $\alpha \in (0,\,1)$ except a closed set $\Sigma$ of Hausdorff dimension $n-3$, and thus we can represent $\partial \{u>t\} \setminus \Sigma$ locally as the graph of a $C^{1,\alpha}$-function $v_t$ on a bounded domain $U' \subset \mathbb{R}^{n-1}$. By employing the computation shown in \cite{BFV}, we may have that $v_t$ satisfies the equation, in the viscosity sense, \begin{align} &\int_{\mathbb{R}^{n-1}} A_r(x',\,y') \left( v_t(x'+y') + v_t(x'-y') - 2v_t(x') \right)\,dy' \nonumber\\ &\quad = G(x',\,v_t(x')) + t - f(x',\,v_t(x')) \quad \text{for $x' \in U' \subset \mathbb{R}^{n-1}$} \nonumber \end{align} where $A_r$ satisfies (A1) and (A2) and $G$ is a smooth function (see \cite{BFV} for the details). Then, since $f \in C^{0,\beta}_{loc}(\mathbb{R}^n)$, we now apply Theorem \ref{theoremBootstrapBFV} several times, if necessary, to conclude that the regularity of $v_t$ can be improved up to $C^{1,s+\alpha}$ for any $\alpha \in (1-s,\,\beta)$; since $s+\alpha > 1$, this means precisely that $v_t \in C^{2,s+\alpha-1}$ with $1-s < \alpha < \beta \leq 1$. From the compactness of the boundary of $\{u>t\}$ and a standard covering argument, we obtain the $C^{2,s+\alpha-1}$-regularity of $\partial \{u>t\}$ for any $\alpha \in (1-s,\,\beta)$. \end{proof} \subsection{Proof of the main regularity result} By using Lemma \ref{improvedRegularity}, we are now ready to prove the main result of this paper. Let us briefly explain the strategy of the proof of Theorem \ref{mainTheorem}. Let $t_1, \, t_2 \in [-\|u\|_{L^{\infty}},\,\|u\|_{L^{\infty}})$ with $t_1 < t_2$ and we set $E_1 \coloneqq \{u>t_1\}$ and $E_2 \coloneqq \{u>t_2\}$. Notice that $E_2 \subset E_1$ because $t_1 < t_2$. In order to show the H\"older regularity of $u$, it is sufficient to show that the boundaries of $E_1$ and $E_2$ are not too close to each other.
Precisely, using the regularity of $f \in C^{0,\beta}$, we show the inequality \begin{equation}\label{inequalityCrucialMainTheorem} t_2 - t_1 \lesssim \left( \mathrm{dist}\,(\partial E_1, \partial E_2) \right)^{\beta}. \end{equation} To see this, we compare the nonlocal mean curvatures on the boundaries $\partial E_1$ and $\partial E_2$. Notice that one can compare the curvatures only at points that the boundaries have in common. Thus, we slide $\partial E_1$ (the slid boundary is denoted by $\partial E^{\nu}_1$) along the outer unit normal $\nu$ of $\partial E_1$ until $\partial E^{\nu}_1$ touches $\partial E_2$. At the touching point, we can now compare the curvatures of $\partial E^{\nu}_1$ and $\partial E_2$. Moreover, by employing the computation by D\'avila, del Pino, and Wei \cite{DdPW}, we can also compare the curvatures of $\partial E_1$ and $\partial E^{\nu}_1$. \begin{proof}[Proof of Theorem \ref{mainTheorem}] We set $E_t \coloneqq \{x\mid u(x)>t\}$ for any $t$ and let $d_{t} \coloneqq d_{E_t}$ be the signed distance function from $\partial E_t$, which is negative inside $E_t$. Since $n=2$, from Lemma \ref{improvedRegularity} it follows that all the points on $\partial E_t$ are regular points. Thus, the signed distance function $d_t$ is of class $C^{2,s+\alpha-1}$ in a neighborhood of $\partial E_t$ with $1-s<\alpha<\beta$ (see, for instance, \cite{Wl, DeZo01, DeZo02, Bellettini} for the relation between the distance function and the regularity of surfaces). Recall that, from the assumption on $f$ and Proposition \ref{comparisonLinfty}, we have that $\|u\|_{L^{\infty}} \leq \|f\|_{L^{\infty}} < \infty$. We now take any $t_1 \in [-\|u\|_{L^{\infty}},\,\|u\|_{L^{\infty}})$ and set $E_1 \coloneqq E_{t_1}$. Then we can choose a neighborhood $U_1 \subset \mathbb{R}^2$ of the boundary $\partial E_1$ such that $d_1 \coloneqq d_{t_1} \in C^{2,s+\alpha-1}(U_1)$.
Moreover, we take any $t_2 \in (-\|u\|_{L^{\infty}},\,\|u\|_{L^{\infty}})$ with $t_2 > t_1$ and set $E_2 \coloneqq E_{t_2}$. Then, from Corollary \ref{corUniformBoundednessMinimizer}, we obtain that there exists a constant $R_c>0$ independent of $t_1$ and $t_2$ such that $E_2 \subset E_1 \subset B_{R_c}$. We can choose points $x_1 \in \partial E_1$ and $x_2 \in \partial E_2$ such that \begin{equation}\nonumber \tilde{\delta} \coloneqq \mathrm{dist}\,(\partial E_1,\,\partial E_2) = |x_1 - x_2|. \end{equation} Since we study the local H\"older regularity of $u$, it is sufficient to consider the case that $x_2 \in U_1$. We first show that the following inequality holds: \begin{equation}\label{inequalityHolderRegularity} t_2 - t_1 \leq ([f]_{\beta} + C\,\tilde{\delta}^{1-\beta} )\,\tilde{\delta}^{\beta} \end{equation} where $\beta$ is as in Theorem \ref{mainTheorem} and $C>0$ is a constant depending only on $s$ and $d_1$. Without loss of generality, we may assume that $\widetilde{\delta} > 0$. Indeed, if $\widetilde{\delta} = 0$, then, from the definition of $\widetilde{\delta}$, we can easily see that $t_2=t_1$. This implies that the inequality \eqref{inequalityHolderRegularity} is valid. Thus, in the sequel, we always assume that $\tilde{\delta} > 0$. Now we define $E_1^{\delta}$ as \begin{equation}\nonumber E_1^{\delta} \coloneqq \{ x \in E_1 \mid \mathrm{dist}\,(x,\,\partial E_1) \geq \delta\} \end{equation} for any $\delta \in (0,\,\tilde{\delta}]$. Then, from the choice of $t_2$ and the definition of $\widetilde{\delta}$, the boundary of $E_1^{\delta}$ can be described as $\partial E_1^{\delta} = \{x-\delta \nabla d_1(x) \mid x \in \partial E_1 \}$ for any $\delta \in (0,\,\widetilde{\delta}]$, where $\nabla d_1$ coincides on $\partial E_1$ with the outer unit normal vector of $\partial E_1$.
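Before proceeding, let us record the elementary comparison principle for the nonlocal mean curvature on which the following argument relies; we state it with the sign convention used throughout the paper (cf.\ \eqref{nonlocalPeriSplit}), namely $H^K_E(x) = \mathrm{p.v.}\int_{\mathbb{R}^2}\left(\chi_{E^c}(y)-\chi_{E}(y)\right)K(x-y)\,dy$. If $A \subseteq B$ and the boundaries touch at a point $x \in \partial A \cap \partial B$, then
\begin{equation}\nonumber
H^K_{B}(x) - H^K_{A}(x) = -2\int_{B \setminus A} K(x-y)\,dy \leq 0,
\end{equation}
since the integrands defining the two curvatures differ exactly by $-2\,\chi_{B \setminus A}\,K(x-\cdot)$. This is what allows us to compare the curvatures of $E_2$ and of the inward translates of $E_1$ at touching points.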
From the definition of the nonlocal mean curvature, we can easily show that the following comparison inequality holds: \begin{equation}\label{comparisonE_1dAndE_2} H^K_{E_1^{\tilde{\delta}}}(x_2) \leq H^K_{E_2}(x_2). \end{equation} From the choice of $x_1$ and $x_2$, we have $x_2 = x_1 - \tilde{\delta}\,\nabla d_1(x_1)$. Now we compare the two nonlocal curvatures $H^{K}_{E_1^\delta}(x_2)$ and $H^{K}_{E_1}(x_1)$. To do this, we employ the computation shown by D\'avila, del Pino, and Wei in \cite{DdPW} (see also \cite{Cozzi, JuLa}). This computation concerns the first variation of the nonlocal (fractional) mean curvature. Precisely, we have that, for any set $E \subset \mathbb{R}^2$ with a smooth boundary (at least $C^2$), it holds that \begin{align}\label{variationNonlocalMC} &-\left.\frac{d}{d\delta}\right|_{\delta=0} H^K_{E_{\delta h}}(x-\delta h(x)\nabla d_{E}(x)) \nonumber\\ &= 2\int_{\partial E}\frac{h(y) - h(x)}{|y-x|^{2+s}}\,d\mathcal{H}^{n-1}(y) \nonumber \\ &\quad + 2\int_{\partial E}\frac{(\nabla d_E(y) - \nabla d_E(x)) \cdot \nabla d_E(x)}{|y-x|^{2+s}} \,d\mathcal{H}^{n-1}(y) \end{align} for $x \in \partial E$ where $h \in L^{\infty}(\partial E)$ and $h$ is as smooth as $\partial E$. Here we define $E_{\delta h}$ in such a way that its boundary is given by $\partial E_{\delta h} \coloneqq \{x-\delta\,h(x)\,\nabla d_E(x) \mid x\in \partial E\}$ for any $\delta>0$. Then, from \eqref{variationNonlocalMC} and some further computation, we obtain an estimate of the variation of the nonlocal mean curvature $H^s_{E_1^{\delta}}$ for small $\delta>0$.
Precisely, we can obtain that there exist constants $C>0$ and $\delta_0>0$, which depend on the space-dimension $n=2$, $s$, and the $L^{\infty}$-norm of $\nabla^2 d_1$ (equivalently, the second fundamental form of $\partial E_1$), such that \begin{equation}\label{estimateVariationNonlocalMC} -\frac{d}{d\delta} H^K_{E_1^{\delta}}(\Psi_{\delta}(x_1)) \leq C\,\int_{\partial E_1}\frac{|\nabla d_1(y) - \nabla d_1(x_1)|^2}{|y-x_1|^{2+s}} \,d\mathcal{H}^{n-1}(y) \end{equation} for any $\delta \in (0,\,\delta_0)$ where we set $\Psi_{\delta}(x_1) \coloneqq x_1-\delta \,\nabla d_1(x_1)$. Indeed, choosing any smooth cut-off function $\eta_{\varepsilon}$ such that $\mathrm{spt}\,\eta_{\varepsilon} \subset B^c_{\varepsilon}(0)$, $\eta_{\varepsilon} \equiv 1$ in $B^c_{2\varepsilon}(0)$, and $0 \leq \eta_{\varepsilon} \leq 1$, we can write the nonlocal curvature as follows: \begin{align}\label{nonlocalPeriSplit} &-H^s_{E_1^{\delta}}(\Psi_{\delta}(x_1)) \nonumber\\ &= \int_{\mathbb{R}^2}\frac{\chi_{E_1^{\delta}}(y) - \chi_{(E_1^{\delta})^c}(y)}{|y-\Psi_{\delta}(x_1)|^{2+s}} \eta_{\varepsilon}(y-\Psi_{\delta}(x_1))\,dy \nonumber\\ &\qquad + \int_{\mathbb{R}^2}\frac{\chi_{E_1^{\delta}}(y) - \chi_{(E_1^{\delta})^c}(y)}{|y-\Psi_{\delta}(x_1)|^{2+s}} (1-\eta_{\varepsilon}(y-\Psi_{\delta}(x_1)))\,dy \nonumber\\ &\eqqcolon A_{\varepsilon}(\delta) + B_{\varepsilon}(\delta).
\end{align} Then we can compute the derivative of $A_{\varepsilon}(\delta)$ in \eqref{nonlocalPeriSplit} for small $\delta>0$ in the following manner: setting $\widetilde{y}_{\delta} \coloneqq y-\Psi_{\delta}(x_1)$ for simplicity, we have \begin{align}\label{variationNonlocalMC01} & \frac{d}{d\delta}\int_{\mathbb{R}^2}\frac{\chi_{E_1^{\delta}}(y) - \chi_{(E_1^{\delta})^c}(y)}{|\widetilde{y}_{\delta}|^{2+s}} \eta_{\varepsilon}(\widetilde{y}_{\delta})\,dy \nonumber\\ &= \int_{\partial E_1^{\delta}} \frac{\eta_{\varepsilon}(\widetilde{y}_{\delta})}{|\widetilde{y}_{\delta}|^{2+s}} \,d\mathcal{H}^{n-1}(y) + \int_{\partial (E_1^{\delta})^c} \frac{\eta_{\varepsilon}(\widetilde{y}_{\delta})}{|\widetilde{y}_{\delta}|^{2+s}} \,d\mathcal{H}^{n-1}(y) \nonumber\\ &\qquad - (2+s)\int_{\mathbb{R}^2} \frac{\chi_{E_1^{\delta}}(y) - \chi_{(E_1^{\delta})^c}(y)}{|\widetilde{y}_{\delta}|^{4+s}} (y-x_1+ \delta \nabla d_1(x_1)) \cdot \nabla d_1(x_1) \,\eta_{\varepsilon}(\widetilde{y}_{\delta})\,dy \nonumber\\ &\qquad \quad + \int_{\mathbb{R}^2} \frac{\chi_{E_1^{\delta}}(y) - \chi_{(E_1^{\delta})^c}(y)}{|\widetilde{y}_{\delta}|^{2+s}} \nabla \eta_{\varepsilon}(\widetilde{y}_{\delta}) \cdot \nabla d_1(x_1)\,dy \end{align} for any $\delta \in (0,\,1)$ with $\Psi_{\delta}(x_1) \in U_1$. 
Then by using the Gauss-Green theorem, we have \begin{align}\label{variationNonlocalMC02} &- (2+s)\int_{\mathbb{R}^2} \frac{\chi_{E_1^{\delta}}(y) - \chi_{(E_1^{\delta})^c}(y)}{|\widetilde{y}_{\delta}|^{4+s}} (y-x_1+ \delta \nabla d_1(x_1)) \cdot \nabla d_1(x_1) \,\eta_{\varepsilon}(\widetilde{y}_{\delta})\,dy \nonumber\\ &= \int_{\mathbb{R}^2} (\chi_{E_1^{\delta}}(y) - \chi_{(E_1^{\delta})^c}(y)) \nabla_y \left(\frac{1}{|\widetilde{y}_{\delta}|^{2+s}}\right) \cdot \nabla d_1(x_1) \,\eta_{\varepsilon}(\widetilde{y}_{\delta})\,dy \nonumber\\ &= \int_{\partial E_1^{\delta}} \frac{\nabla d_1(x_1) \cdot \nabla d_{E_1^{\delta}}(y)}{|\widetilde{y}_{\delta}|^{2+s}} \eta_{\varepsilon}(\widetilde{y}_{\delta}) \,d\mathcal{H}^{n-1} \nonumber\\ &\qquad - \int_{\partial (E_1^{\delta})^c} \frac{\nabla d_1(x_1) \cdot (-\nabla d_{E_1^{\delta}}(y))}{|\widetilde{y}_{\delta}|^{2+s}} \eta_{\varepsilon}(\widetilde{y}_{\delta}) \,d\mathcal{H}^{n-1} \nonumber\\ &\qquad \quad - \int_{\mathbb{R}^2} \frac{\chi_{E_1^{\delta}}(y) - \chi_{(E_1^{\delta})^c}(y)}{|\widetilde{y}_{\delta}|^{2+s}} \nabla \eta_{\varepsilon}(\widetilde{y}_{\delta}) \cdot \nabla d_1(x_1)\,dy. \end{align} Thus from \eqref{variationNonlocalMC01} and \eqref{variationNonlocalMC02}, we obtain \begin{align} \frac{d}{d\delta}A_{\varepsilon}(\delta) &= \int_{\partial E_1^{\delta}} \frac{2- 2(\nabla d_1(x_1) \cdot \nabla d_{E_1^{\delta}}(y))}{|\widetilde{y}_{\delta}|^{2+s}}\eta_{\varepsilon}(\widetilde{y}_{\delta}) \,d\mathcal{H}^{n-1}(y) \nonumber\\ &= \int_{\partial E_1^{\delta}} \frac{|\nabla d_1(x_1) - \nabla d_{E_1^{\delta}}(y)|^2}{|\widetilde{y}_{\delta}|^{2+s}}\eta_{\varepsilon}(\widetilde{y}_{\delta}) \,d\mathcal{H}^{n-1}(y) \nonumber \end{align} for any small $\delta >0$ with $\Psi_{\delta}(x_1) \in U_1$. 
Hence, from the change of variables, we obtain \begin{align} \frac{d}{d\delta}A_{\varepsilon}(\delta) &= \int_{\partial E_1} \frac{|\nabla d_1(x_1) - \nabla d_{1}(y)|^2}{|\Psi_{\delta}(y)-\Psi_{\delta}(x_1)|^{2+s}}\eta_{\varepsilon}(\Psi_{\delta}(y)-\Psi_{\delta}(x_1)) \,J_{\partial E_1}\Psi_{\delta}(y)\,d\mathcal{H}^{n-1}(y) \nonumber \end{align} where $J_{\partial E_1}\Psi_{\delta}(y)$ is the tangential Jacobian of $\Psi_{\delta}$ along $\partial E_1$ at $y$. As is shown in \cite{DdPW}, one can prove that there exist constants $c'>0$ and $\delta'>0$, depending on the space-dimension $n=2$ and $s$ but independent of $\varepsilon >0$, such that $|\frac{d}{d\delta}B_{\varepsilon}(\delta)| \leq c'\varepsilon^{1-s}$ for any $\delta \in (0,\,\delta')$ and $\varepsilon \in (0,\,1)$. Therefore, we conclude that \begin{align} -\frac{d}{d\delta}H^s_{E_1^{\delta}}(\Psi_{\delta}(x_1)) &= \lim_{\varepsilon \downarrow 0}\left(\frac{d}{d\delta}A_{\varepsilon}(\delta) + \frac{d}{d\delta} B_{\varepsilon}(\delta) \right) \nonumber\\ &= \int_{\partial E_1} \frac{|\nabla d_1(x_1) - \nabla d_{1}(y)|^2}{|\Psi_{\delta}(y)-\Psi_{\delta}(x_1)|^{2+s}} J_{\partial E_1}\Psi_{\delta}(y) \,d\mathcal{H}^{n-1}(y) \nonumber \end{align} for any $\delta \in (0,\,\delta'_0)$ where $\delta'_0>0$ is a constant depending on the space-dimension $n=2$, $s$, and the $L^{\infty}$-norm of $\nabla^2 d_1$. From the definition of $\Psi_{\delta}$, we have that there exists a constant $C_0>0$, depending on the space-dimension $n=2$, $s$, and the $L^{\infty}$-norm of $\nabla^2 d_1$, such that \begin{equation}\nonumber \frac{J_{\partial E_1}\Psi_{\delta}(y)}{|\Psi_{\delta}(y) -\Psi_{\delta}(x_1)|^{2+s}} \leq \frac{C_0}{|y-x_1|^{2+s}} \end{equation} for any $y \in \partial E_1$ and $\delta \in (0,\,\delta'_0)$.
Therefore, we obtain that there exist constants $C>0$ and $\delta_0>0$, depending on the space-dimension $n=2$, $s$, and the second derivative of $d_1$ but independent of $\delta$, such that the inequality \eqref{estimateVariationNonlocalMC} with the constant $C$ holds for any $\delta \in (0,\,\delta_0)$. Thus, from the fundamental theorem of calculus and \eqref{estimateVariationNonlocalMC}, we obtain that \begin{align}\label{estimateNonlocalCurvatures} &-H^K_{E_1^{\delta}}(x_1-\delta \,\nabla d_1(x_1)) \nonumber\\ &= -H^K_{E_1}(x_1) - \delta\,\int_{0}^{1} \left(\frac{d}{d\sigma} H^K_{E_1^{\sigma}}(x_1- \sigma \,\nabla d_1(x_1))\right)\bigg|_{\sigma=\lambda\delta} \,d\lambda \nonumber\\ &\leq -H^K_{E_1}(x_1) + C\,\delta\,\int_{\partial E_1}\frac{|\nabla d_1(y) - \nabla d_1(x_1)|^2}{|y-x_1|^{2+s}} \,d\mathcal{H}^{n-1}(y) \end{align} for any $\delta \in (0,\,\delta_0)$. Now we show that the integral \begin{equation}\nonumber \int_{\partial E_1}\frac{|\nabla d_1(y) - \nabla d_1(x_1)|^2}{|y-x_1|^{2+s}} \,d\mathcal{H}^{n-1}(y) \end{equation} is bounded uniformly for $x_1$ in any open set $V \Subset U_1$. Indeed, we define the set $U^r_1 \coloneqq \{x \in U_1 \mid \mathrm{dist}\,(x,\,\partial U_1)>r\}$ for $r>0$; note that $B_{r}(x) \subset U_1$ for any $x \in U^r_1$ by definition.
Then we can compute the integral as follows: for any $x_1 \in U^r_1$, it holds that \begin{align}\label{estimateFracNormNormalVec} &\int_{\partial E_1}\frac{|\nabla d_1(y) - \nabla d_1(x_1)|^2}{|y-x_1|^{2+s}} \,d\mathcal{H}^{n-1}(y) \nonumber\\ &= \int_{\partial E_1 \cap B_r(x_1)}\frac{|\nabla d_1(y) - \nabla d_1(x_1)|^2}{|y-x_1|^{2+s}} \,d\mathcal{H}^{n-1}(y) \nonumber\\ &\qquad + \int_{\partial E_1 \cap B^c_r(x_1)}\frac{|\nabla d_1(y) - \nabla d_1(x_1)|^2}{|y-x_1|^{2+s}} \,d\mathcal{H}^{n-1}(y) \nonumber\\ &\leq \int_{\partial E_1 \cap B_r(x_1)}\frac{|\nabla d_1(y) - \nabla d_1(x_1)|^2}{|y-x_1|^2} \frac{1}{|y-x_1|^{n-2+s}} \,d\mathcal{H}^{n-1}(y) \nonumber\\ &\qquad + \int_{\partial E_1 \cap B^c_r(x_1)}\frac{4}{|y-x_1|^{2+s}} \,d\mathcal{H}^{n-1}(y). \end{align} From the fundamental theorem of calculus and the fact that $B_r(x_1) \subset U_1$ for any $x_1 \in U^r_1$, we have that \begin{equation}\label{estiGradientDistance} \frac{|\nabla d_1(y) - \nabla d_1(x_1)|^2}{|y-x_1|^2} \leq \|\nabla^2 d_1\|_{L^{\infty}(B_r(x_1))}^2 \end{equation} for any $y \in B_r(x_1)$. Thus from \eqref{estimateFracNormNormalVec} and \eqref{estiGradientDistance} and noticing that $x_1 \in U^r_1$ and $E_t \subset B_{R_c}$ holds uniformly in $t \geq c$ where $c \coloneqq - \|u\|_{L^{\infty}} > -\infty$, we obtain \begin{align}\label{estimateGradDistanceonSurface} \int_{\partial E_1}\frac{|\nabla d_1(y) - \nabla d_1(x_1)|^2}{|y-x_1|^{2+s}} \,d\mathcal{H}^{n-1}(y) &\leq c_1\, \|\nabla^2 d_1\|_{L^{\infty}(B_r(x_1))}^3\,r^{1-s} \nonumber\\ &\qquad + \frac{c_2\,\|\nabla^2 d_1\|_{L^{\infty}(U_1)}}{r^s} \end{align} where $c_1>0$ and $c_2>0$ are constants depending on the space-dimension $n=2$ and $s$. Since we choose any $r$ in such a way that $B_r(x_1) \subset U_1$, we conclude the claim is valid. 
Thus, from \eqref{estimateNonlocalCurvatures} and \eqref{estimateGradDistanceonSurface}, we finally obtain the inequality \begin{equation}\label{comparisonNonlocalMCSmallDiff} -H^K_{E_1^{\delta}}(x_1-\delta \,\nabla d_1(x_1)) \leq -H^K_{E_1}(x_1) + C(n,s,R_c)\,\delta \end{equation} for any $\delta \in (0,\,\delta_0)$ where $C(n,s,R_c)>0$ ($n=2$ is the space-dimension) and $\delta_0>0$ are some constants, which also depend on the $L^{\infty}$-norm of $\nabla^2 d_1$. Note that the constant $\delta_0$ can be bounded from below in terms of the inverse of the $L^{\infty}$-norm of $\nabla^2 d_1$. Thus, from \eqref{comparisonE_1dAndE_2} and \eqref{comparisonNonlocalMCSmallDiff}, we have that, for any $\delta \in (0,\,\delta_0)$, \begin{equation}\label{comparisonNonlocalCurvature} -H^K_{E_2}(x_2) \leq -H^K_{E_1}(x_1) + C(n,s,R_c)\,\delta. \end{equation} Now we consider the following two cases: \textit{Case 1}: $0< \tilde{\delta} < \delta_0$. In this case, we simply substitute $\delta = \tilde{\delta}$ into \eqref{comparisonNonlocalCurvature} and obtain \begin{equation}\nonumber -H^K_{E_2}(x_2) \leq -H^K_{E_1}(x_1) + C(n,s,R_c)\,\tilde{\delta} \end{equation} where $\tilde{\delta} = \mathrm{dist}\,(\partial E_1,\,\partial E_2)$. \textit{Case 2}: $\tilde{\delta} \geq \delta_0$. In this case, there exists a number $N \in \mathbb{N}$ such that $ \frac{\tilde{\delta}}{N} < \|\nabla^2 d_1\|_{L^{\infty}(U_1)}^{-1}$. Then, setting $\tilde{\delta}_k \coloneqq \frac{k}{N}\tilde{\delta}$ and $x^{\tilde{\delta}_k}_1 \coloneqq x_1 - \tilde{\delta}_k\,\nabla d_1(x_1)$ for each $k \in \{1,\cdots,\,N\}$ and taking into account all the above arguments, we obtain the inequality \begin{equation}\label{comparisonNonlocalCurvatureIteration} -H^{K}_{E_1^{\tilde{\delta}_k}}(x^{\tilde{\delta}_k}_1) \leq -H^{K}_{E_1^{\tilde{\delta}_{k-1}}}(x^{\tilde{\delta}_{k-1}}_1) + C(n,s,R_c)\,\frac{\tilde{\delta}}{N} \end{equation} for each $k \in \{1,\cdots,\,N\}$ where we understand the notation $x^{\tilde{\delta}_0}_1 = x_1$ and $E_1^{\tilde{\delta}_0} = E_1$.
Thus, by summing the inequality \eqref{comparisonNonlocalCurvatureIteration} over all $k \in \{1,\cdots,\,N\}$, we obtain \begin{align} -H^{K}_{E_1^{\tilde{\delta}}}(x_2) &= -H^{K}_{E_1^{\tilde{\delta}_N}}(x^{\tilde{\delta}_N}_1) \nonumber\\ &\leq -H^{K}_{E_1^{\tilde{\delta}_0}}(x^{\tilde{\delta}_0}_1) + N\,C(n,s,R_c)\,\frac{\tilde{\delta}}{N} = -H^{K}_{E_1}(x_1) + C(n,s,R_c)\,\tilde{\delta} \nonumber \end{align} where $\tilde{\delta} = \mathrm{dist}\,(\partial E_1,\,\partial E_2)$. In both cases, we finally obtain the inequality \begin{equation}\label{comparisonFractionalMeanCurvatures} - H^K_{E_2}(x_2) \leq -H^{K}_{E_1}(x_1) + C(n,s,R_c)\,\tilde{\delta}. \end{equation} Thanks to Lemma \ref{improvedRegularity}, the Euler-Lagrange equation \begin{equation} H^s_{E_t}(x) + t - f(x) = 0 \end{equation} holds for every $x \in \partial E_t$; in particular, $H^K_{E_1}(x_1) = f(x_1) - t_1$ and $H^K_{E_2}(x_2) = f(x_2) - t_2$. Then, since $E_i$ is the minimizer of $\mathcal{E}_{K,f,t_i}$ for $i \in \{1,2\}$, inserting these identities into \eqref{comparisonFractionalMeanCurvatures} yields \begin{equation}\nonumber t_2 - t_1 \leq f(x_2) - f(x_1) + C(n,s,R_c)\,\tilde{\delta}. \end{equation} Recalling the definition of $x_2$, the H\"older continuity of $f$, and the fact that $E_t \subset B_{R_c}$ for any $t \geq c$, we conclude that \begin{equation}\label{keyInequalityHolder} t_2 - t_1 \leq ([f]_{\beta}(B_{R_c}) + C(n,s,R_c)\,\tilde{\delta}^{1-\beta})\,\tilde{\delta}^\beta \end{equation} where $[f]_{\beta}(B_{R_c})$ is the H\"older constant of $f$ in $B_{R_c}$ given as \begin{equation}\nonumber [f]_{\beta}(B_{R_c}) \coloneqq \sup_{x,\,y \in B_{R_c}, \, x \neq y}\frac{|f(x) - f(y)|}{|x-y|^{\beta}} \end{equation} and the constant $\tilde{\delta}$ is defined as $\tilde{\delta} \coloneqq \mathrm{dist}\,(\partial E_1,\,\partial E_2)$. Note that the constant $C(n,s,R_c)>0$ also depends on the $L^{\infty}$-norm of $\nabla^2 d_1$. We are now ready to prove the local H\"older continuity of $u$.
Let $B_{r_0}(x_0) \subset \mathbb{R}^2$ be any open ball of radius $r_0$ with $x_0 \in \{u = t_0 \}$ for a number $t_0 \geq c \coloneqq -\|u\|_{L^{\infty}}$. We take any points $x,\,y \in B_{r_0}(x_0)$ with $x \neq y$ and set $t_1,\,t_2 \in \mathbb{R}$ as $t_1 \coloneqq u(x)$ and $t_2 \coloneqq u(y)$. We may assume that $t_1 > t_2 \geq c$ since the case $t_1 < t_2$ is analogous. In addition, we also assume that $t_1 > t_0 > t_2$. Indeed, in the case of $t_1 > t_2 \geq t_0$ or $t_0 \geq t_1 > t_2$, it is sufficient to take another point $x_0' \in B_{r_0}(x_0)$ and $t_0' \in \mathbb{R}$ such that $x_0' \in \{ u = t_0' \}$ and $t_1 > t_0' > t_2$, and to run the argument below with $x_0'$ and $t_0'$ in place of $x_0$ and $t_0$. Moreover, since we only study the local regularity of $u$, it is sufficient to consider the case that $B_{r_0}(x_0) \subset U_0$ where $U_0$ is a neighborhood of $\partial \{u > t_0\}$ such that the signed distance function from $\partial \{u > t_0\}$ is of class $C^{2, s+\alpha-1}(U_0)$ with $\alpha \in (1-s,\, 1)$. Indeed, if $x \in B_{r_0}(x_0) \setminus U_0$ and $y \in B_{r_0}(x_0)$, then, from the continuity of $u$, we can choose a point $z_0 \in B_{r_0}(x_0)$ close to $x$ such that the estimate $|u(x) - u(z_0)| \leq |x-y|^{\beta}$ holds and $t_1 = u(x) > u(z_0) \geq u(y) = t_2$. In the case of $z_0 \in U_0$, we just apply the argument below with \eqref{keyInequalityHolder} to $z_0$, $x_0$, and $y$; otherwise, we can repeat the above argument until we obtain a point belonging to $U_0$. Now we choose sufficiently small $\varepsilon > 0$ such that $t_1 - \varepsilon > t_0$ and $t_0 - \varepsilon > t_2$, and then we have that $x \in \{ u > t_1 -\varepsilon\}$, $y\in \{ u > t_2 - \varepsilon\}$, and $x_0 \in \{ u > t_0 - \varepsilon\}$.
Hence, from \eqref{keyInequalityHolder} and the fact that $x,\,y \in B_{r_0}(x_0)$, we obtain the two inequalities \begin{align}\label{estimateHolder01} u(x) - u(x_0) = t_1 - \varepsilon - (t_0 - \varepsilon) &\leq ( [f]_{\beta}(B_{R_c}) + C(n,s,R_c)\,\tilde{\delta}_1^{1-\beta})\,\tilde{\delta}_1^{\beta} \nonumber\\ &\leq ( [f]_{\beta}(B_{R_c}) + C(n,s,R_c)\,r_0^{1-\beta})\,\tilde{\delta}_1^{\beta} \end{align} and \begin{align}\label{estimateHolder02} u(x_0) - u(y) = t_0 - \varepsilon - (t_2 - \varepsilon) &\leq ( [f]_{\beta}(B_{R_c}) + C(n,s,R_c)\,\tilde{\delta}_2^{1-\beta})\,\tilde{\delta}_2^{\beta} \nonumber\\ &\leq ( [f]_{\beta}(B_{R_c}) + C(n,s,R_c)\,r_0^{1-\beta})\,\tilde{\delta}_2^{\beta} \end{align} where we set $\tilde{\delta}_1 \coloneqq \mathrm{dist}\,(\partial E_{t_0},\partial E_{t_1})$ and $\tilde{\delta}_2 \coloneqq \mathrm{dist}\,(\partial E_{t_0},\partial E_{t_2})$. Note that the constant $C(n,s,R_c)>0$ also depends on the $L^{\infty}$-norm of $\nabla^2 d_{t_0}$, which can be uniformly bounded in $B_{r_0}(x_0)$. Notice that the inequality \begin{equation}\nonumber \tilde{\delta}_1 + \tilde{\delta}_2 = \mathrm{dist}\,(\partial E_{t_0},\,\partial E_{t_1}) + \mathrm{dist}\,(\partial E_{t_0},\,\partial E_{t_2}) \leq \mathrm{dist}\,(\partial E_{t_1},\,\partial E_{t_2}) \leq |x-y| \end{equation} holds because of the fact that $E_{t_1} \subset E_{t_0} \subset E_{t_2}$. Therefore, from \eqref{estimateHolder01} and \eqref{estimateHolder02}, we obtain that there exists a constant $C=C(n,s,f,R_c,r_0,x_0)>0$ (we have assumed that the space-dimension $n$ is two) such that \begin{align} |u(x) - u(y)| &= |u(x) - u(x_0) + u(x_0) - u(y)| \nonumber\\ &\leq C\,(\tilde{\delta}_1^{\beta} + \tilde{\delta}_2^{\beta}) \leq C\,2^{1-\beta}(\tilde{\delta}_1 + \tilde{\delta}_2)^{\beta} \leq 2^{1-\beta}C\, |x-y|^{\beta}.
\nonumber \end{align} Here, in the second inequality, we have used the fact that $2^{1-\beta}(x+1)^{\beta} \geq x^{\beta} + 1$ for any $x \geq 1$ and $\beta \in (0,\,1)$, applied with $x=\widetilde{\delta}_1\,\widetilde{\delta}_2^{-1}$ if $\widetilde{\delta}_1 \geq \widetilde{\delta}_2$ and with $x=\widetilde{\delta}_2\,\widetilde{\delta}_1^{-1}$ if $\widetilde{\delta}_1 < \widetilde{\delta}_2$. \end{proof} \end{document}
\begin{document} \begin{abstract} We prove an incidence theorem for points and planes in the projective space ${\mathbb P}^3$ over any field $\mathbb F$ whose characteristic $p\neq 2.$ An incidence is viewed as an intersection along a line of a pair of two-planes from two canonical rulings of the Klein quadric. The Klein quadric can be traversed by a generic hyperplane, yielding a line-line incidence problem in a three-quadric, the Klein image of a regular line complex. This hyperplane can be chosen so that at most two lines meet. Hence, one can apply an algebraic theorem of Guth and Katz, with a constraint involving $p$ if $p>0$. This yields a bound on the number of incidences between $m$ points and $n$ planes in ${\mathbb P}^3$, with $m\geq n$, as $$O\left(m\sqrt{n}+ m k\right),$$ where $k$ is the maximum number of collinear planes, provided that $n=O(p^2)$ if $p>0$. Examples show that this bound cannot be improved without additional assumptions. This gives one a vehicle to establish geometric incidence estimates when $p>0$. For a non-collinear point set $S\subseteq \mathbb{F}^2$ and a non-degenerate symmetric or skew-symmetric bilinear form $\omega$, the number of distinct values of $\omega$ on pairs of points of $S$ is $\Omega\left[\min\left(|S|^{\frac{2}{3}},p\right)\right]$. This is also the best known bound over ${\mathbb R}$, where it follows from the Szemer\'edi-Trotter theorem. Also, a set $S\subseteq \mathbb F^3$, not supported in a single semi-isotropic plane, contains a point from which $\Omega\left[\min\left(|S|^{\frac{1}{2}},p\right)\right]$ distinct distances to other points of $S$ are attained. \end{abstract} \title{On the number of incidences between points and planes in three dimensions} \section{Introduction} Let $\mathbb{F}$ be a field of characteristic $p$ and ${\mathbb P}^d$ the $d$-dimensional projective space over $\mathbb{F}$.
Our methods do not work for $p=2$, but the results, in view of the constraints in terms of $p$, hold trivially for $p=O(1)$. As usual, we use the notation $|\cdot|$ for cardinalities of finite sets. The symbols $\ll$, $\gg$ suppress absolute constants in inequalities, as respectively do $O$ and $\Omega$. Besides, $X=\Theta(Y)$ means that $X=O(Y)$ and $X=\Omega(Y)$. The symbols $C$ and $c$ stand for absolute constants, which may change from line to line, depending on the context. When we turn to sum-products, we use the standard notation $$A+B=\{a+b:\,a\in A,\,b\in B\}$$ for the sumset $A+B$ of $A, B\subseteq \mathbb F$, and similarly for the product set $AB$. The Szemer\'edi-Trotter theorem \cite{ST} on the number of incidences between lines and points in the Euclidean plane has many applications in combinatorics. The theorem is also valid over $\mathbb{C}$; this was first proved by T\'oth \cite{T}. In positive characteristic, however, no universal point-line incidence estimate satisfactory for applications is available. The current ``world record'' for partial results in this direction for the prime residue field $\mathbb{F}_p$ is due to Jones \cite{J}. This paper shows that in three dimensions there is an incidence estimate between a set $P$ of $m$ points and a set $\Pi$ of $n$ planes in ${\mathbb P}^3$, valid for any field of characteristic $p\neq 2$. If $p>0$, there is a constraint that $\min(m,n)=O(p^2).$ Hence, the result is trivial unless $p$ is regarded as a large parameter. Still, since our geometric set-up in terms of $\alpha$- and $\beta$-planes in the Klein quadric breaks down for $p=2$, we have chosen to state that $p\neq 2$ explicitly in the formulation of the main results. Extending the results to more specific situations, where the constraint in terms of $p$ can be weakened, may not be impossible, but is well beyond the methodology herein. A few more words address this issue in the sequel.
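The sumset and product-set notation just introduced can be illustrated by a tiny computation. The following sketch is our own toy example (the sets $A$, $B$ and the modulus are arbitrary choices, not from the paper), over the prime field $\mathbb{F}_7$:

```python
# Illustration (assumed example) of the sumset A + B = {a + b : a in A, b in B}
# and the product set AB, for subsets of F_7.
p = 7
A = {1, 2, 4}
B = {3, 5}

sumset = {(a + b) % p for a in A for b in B}
prodset = {(a * b) % p for a in A for b in B}

print(sorted(sumset))    # [0, 2, 4, 5, 6]
print(sorted(prodset))   # [3, 5, 6]
```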
The set of incidences is defined as \begin{equation} I(P,\Pi) :=\{(q,\pi)\in P\times \Pi:\,q\in\pi\}. \label{ins}\end{equation} Over the reals, the point-plane incidence problem has been studied quite thoroughly throughout the past 25 years, and several tight bounds are known. In general, all the points and planes involved may be incident to a single line in space, in which case the number of incidences is trivially $mn$. To do better than that, one needs some non-degeneracy assumption regarding collinearity, and the results quoted next differ as to the exact formulation of such an assumption. In the 1990 paper of Edelsbrunner et al. \cite{EGS} it was proven (modulo slow-growing factors that can be removed, see \cite{AS}) that if no three planes are collinear in $\mathbb R^3$, \begin{equation}\label{egsest} |I(P,\Pi)| = O\left(m^{\frac{4}{5}}n^{\frac{3}{5}}+m+n\right). \end{equation} This bound was shown to be tight for a wide range of $m$ and $n$, owing to a construction by Brass and Knauer \cite{BK}. A thorough review of the state of the art by the year 2007 can be found in the paper of Apfelbaum and Sharir \cite{AS}. Elekes and T\'oth \cite{ET} weakened the non-collinearity assumption, requiring only that all planes be ``not-too-degenerate'', that is, a single line in a plane may support only a constant proportion of the incidences in that plane. They proved a bound \begin{equation}\label{etest} |I(P,\Pi)| = O\left((mn)^{\frac{3}{4}} + m\sqrt{n} + n\right) \end{equation} and presented a construction showing it to be generally tight. The constructions supporting the tightness of both latter estimates are algebraic and extend beyond the real case. More recently, research in incidence geometry over ${\mathbb R}$ has intensified after the introduction of the polynomial partitioning technique in a breakthrough paper of Guth and Katz \cite{GK}.
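The quantities appearing in the bounds above, the incidence count \eqref{ins} and the collinearity parameter $k$, can be illustrated by a brute-force toy computation of our own (not from the paper), taking for $P$ all points and for $\Pi$ all affine planes of $\mathbb{F}_3^3$:

```python
# Brute-force illustration (assumed toy example) of the incidence count
# I(P, Pi) of eq. (ins) and of the collinearity parameter k, with P = all
# points and Pi = all affine planes {x : a.x = c}, a != 0, of F_3^3.
from itertools import product

p = 3
pts = list(product(range(p), repeat=3))

def canon(a, c):
    """Scale (a, c) so that its first nonzero entry is 1: each plane
    {x : a.x = c} then gets exactly one representative."""
    v = (*a, c)
    lead = next(x for x in v if x)
    inv = pow(lead, p - 2, p)          # inverse mod the prime p
    return tuple((inv * x) % p for x in v)

planes = sorted({canon(a, c)
                 for a in product(range(p), repeat=3) if any(a)
                 for c in range(p)})

def incident(q, pl):
    return sum(ai * qi for ai, qi in zip(pl[:3], q)) % p == pl[3]

# |I(P, Pi)| as in (ins): each plane carries p^2 = 9 points.
I = sum(incident(q, pl) for q in pts for pl in planes)

# k = maximum number of planes through a common line; a plane contains a
# line iff it contains two distinct points of it.
k = max(sum(incident(q1, pl) and incident(q2, pl) for pl in planes)
        for q1 in pts for q2 in pts if q1 != q2)

print(len(pts), len(planes), I, k)    # 27 39 351 4
```

Here every line of $\mathbb{F}_3^3$ lies in exactly $p+1=4$ affine planes, so $k=4$, and $|I(P,\Pi)| = 39\cdot 9 = 351$.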
E.g., there is now a ``continuous'' generalisation of the bound \eqref{egsest} by Basit and Sheffer \cite{BS}: \begin{equation}\label{bsest} |I(P,\Pi)| = O^*\left(m^{\frac{4}{5}+\varepsilon}n^{\frac{3}{5}}k^{\frac{2}{5}} +mk+n\right), \end{equation} where $k$ is the maximum number of collinear planes. For any $\varepsilon>0$, the constant hidden in the $O^*$-symbol depends on $\varepsilon$. The proofs of the above results rely crucially on the order properties of $\mathbb R$. Some of them, say \eqref{etest}, extend over $\mathbb C$, for it is based on the Szemer\'edi-Trotter theorem. Technically harder partitioning-based works like \cite{BS} have so far defied generalisation beyond ${\mathbb R}$. This paper presents a different approach to point-plane incidences in the projective three-space ${\mathbb P}^3$. The approach appears to be robust enough to embrace, in principle, all fields $\mathbb{F}$, but for the apparently special case of characteristic $2$. When we have a specific field $\mathbb{F}$ in mind, we use the notation $\mathbb{F}{\mathbb P}$ for the projective space ${\mathbb P}$. The novelty of our approach is on its geometric side: we fetch and use extensively the classical XIX century Pl\"ucker-Klein formalism for line geometry in ${\mathbb P}^3$. This is combined with a recent algebraic incidence theorem for counting line-line intersections in three dimensions by Guth and Katz. The work of Guth and Katz, see \cite{GK} and the references contained therein for its predecessors, established two important theorems. Both rested on the polynomial Nullstellensatz principle, which was once again demonstrated to be so efficient a tool for discrete geometry problems by Dvir, who used it to resolve the finite field Kakeya conjecture \cite{D}.
The proof of the first Guth-Katz theorem, Theorem 2.10 in \cite{GK}, was in essence algebraic, using the Nullstellensatz and basic properties of ruled surfaces, which come into play due to the use of the Monge-Salmon theorem from classical XIX century geometry. See \cite{Sa} for the original exposition of the latter theorem, as well as \cite{K} (wholly dedicated to the prominent role this theorem plays in today's incidence geometry) and the Appendix in \cite{Ko}. The second Guth-Katz theorem, Theorem 2.11 in \cite{GK}, introduced the aforementioned method of polynomial partitioning of the real space, based on the Borsuk-Ulam theorem. It is the latter theorem of Guth and Katz that has recently attracted more attention and follow-ups. Since we work over any field, we cannot use polynomial partitioning. It is a variant of Theorem 2.10 from \cite{GK} that plays a key role here, and it is henceforth referred to as {\em the} Guth-Katz theorem. We share this, at least in part, with a recent work of Koll\'ar \cite{Ko} dedicated to point-line incidences in $3D$, in particular over fields with positive characteristic. \begin{theorem}[Guth-Katz] \label{gkt} Let $\mathcal L$ be a set of $n$ straight lines in ${\mathbb R}^3$. Suppose no more than two lines are concurrent. Then the number of pair-wise intersections of lines in $\mathcal L$ is $$ O\left(n^{\frac{3}{2}}+ nk\right), $$ where $k$ is the maximum number of lines contained in a plane or regulus. \end{theorem} The proof of Theorem \ref{gkt} goes through over the complex field.\footnote{For a reader not familiar with the proof of Theorem \ref{gkt}, that is Theorem 2.10 in \cite{GK}, we recommend Katz's note \cite{K} for more than an outline of the proof.
See also a post {\sf www.terrytao.wordpress.com/2014/03/28/the-cayley-salmon-theorem-via-classical-differential-geometry/} by Tao and the links contained therein.} Moreover, it extends without major changes to any algebraically closed field, under the constraint $n=O(p^2)$ in the positive characteristic case. This was spelt out by Koll\'ar, see \cite{Ko}, Corollary 40, with near-optimal values of constants. To complete the introduction, let us briefly discuss, in slightly more modern terms, the ``continuous'' Monge-Salmon theorem, brought in by Guth and Katz to discrete geometry. Suppose the field $\mathbb F$ is algebraically closed and $Z$ is a surface in $\mathbb{F}{\mathbb P}^3$, defined as the zero set of a minimal polynomial $Q$ of degree $d$. A point $x\in Z$ is called {\em flechnodal} if there is a line $l$ with at least fourth order contact with $Z$ at $x$, that is, apart from $x\in l$, at least three derivatives of $Q$ in the direction of $l$ vanish at $x$. Monge showed that flechnodal points are cut out by a homogeneous polynomial, whose degree Salmon claimed to be equal to $11d-24$ (which is sharp for $d=3$, due to the celebrated Cayley-Salmon theorem). Thus, for an irreducible $Z$, either all points are flechnodal, or the flechnodal points lie on a curve of degree $d(11d-24)$. Over the complex field, Salmon proved that if all points of $Z$ are flechnodal, then $Z$ is ruled. In positive characteristic it happens that there exist high degree non-ruled surfaces, where each point is flechnodal. But not for $d<p$. Voloch \cite{V} adapted the Monge-Salmon proof to modern terminology for $p>0$, $d<p$, and also conjectured that counterexamples may take place only if $p$ divides $d(d-1)(d-2)$. The following statement is implicit in the proof of Proposition 1 in \cite{V}.
\begin{theorem}[Salmon] \label{Salmon} An irreducible algebraic surface of degree $d$ in ${\mathbb P}^3$ over an algebraically closed field $\mathbb{F}$, containing more than $d(11d-24)$ lines, must be ruled, under the additional constraint that $d<p$ if $\mathbb{F}$ has positive characteristic $p$. \end{theorem} Once Theorem \ref{Salmon} gets invoked within the proof of Theorem \ref{gkt}, the rest of it uses basic properties of ruled surfaces; for a summary see \cite{Ko}, Section 7. We complement it with some additional background material in Section \ref{ruled}, working with the Grassmannian parameterising the set of lines in ${\mathbb P}^3$, that is, the Klein quadric. \section{Main results} The main geometric idea of this paper is to interpret incidence problems between points and planes in ${\mathbb P}^3$ as line-line incidence problems in a projective three-quadric $\mathcal{G}$. $\mathcal{G}$ is contained in the Klein quadric $\mathcal{K}$, which represents the space of lines in the ``physical space'' ${\mathbb P}^3$ within the ``phase space'' ${\mathbb P}^5$. $\mathcal{G}$ is the Klein image of a so-called {\em regular line complex} and has many well-known geometric properties. In comparison to ${\mathbb P}^3$, where the space of lines is four-dimensional, the space of lines in $\mathcal{G}$ is three-dimensional, and this enables one to satisfy the no-multiple-concurrencies hypothesis of Theorem \ref{gkt}. It will also turn out that the parameters denoted as $k$ in both the point-plane incidence estimate (\ref{bsest}) and Theorem \ref{gkt} are closely related. Our main result is as follows. \begin{theorem} \label{mish} Let $P, \Pi$ be sets of points and planes, of cardinalities respectively $m$ and $n$, in ${\mathbb P}^3$. Suppose $m\geq n$, and if $\mathbb{F}$ has positive characteristic $p$, then $p\neq 2$ and $n=O(p^2)$. Let $k$ be the maximum number of collinear planes.
Then \begin{equation}\label{pups} |I(P,\Pi)|=O\left( m\sqrt{n}+ km\right).\end{equation} \end{theorem} The statement of the theorem can be reversed in an obvious way, using duality, in the case when the number of planes is greater than the number of points. Note that the $km$ term may dominate only if $k\geq\sqrt{n}$. The estimate (\ref{pups}) of Theorem \ref{mish} is a basic universal estimate. It is weaker than the above quoted estimate (\ref{egsest}), as well as (\ref{bsest}) for small values of $k$, and slightly weaker than (\ref{etest}). Later in Section \ref{example}, for completeness' sake, we present a construction, not so dissimilar from those in \cite{BK} and \cite{ET}, showing that in the case $n=m$ and $k=m^{\frac{1}{2}}$, the estimate (\ref{pups}) is tight, for any admissible $n$. \begin{remark}\label{sharps} Let us argue that in positive characteristic, and solely under the constraint $\min(m,n)=O(p^2)$, the main term in the estimate \eqref{pups} cannot be improved. This suggests that analogues of stronger Euclidean point-plane incidence bounds like \eqref{egsest} do not extend to positive characteristic without additional assumptions stronger than those in Theorem \ref{mish}. Let $\mathbb{F}=\mathbb{F}_p$, and take the point set $P$ as a smooth cubic surface in $\mathbb{F}^3$, so $|P|=O(p^2)$. Suppose $|\Pi|>|P|$, so the roles of $m, n$ in Theorem \ref{mish} get reversed. A generic plane intersects $P$ at $\Omega(p)$ points. By the classical Cayley-Salmon theorem (which follows from the statement of Theorem \ref{Salmon} above) $P$ may contain at most $27$ lines. Delete them, still calling $P$ the remaining positive proportion of $P$. Now no more than three points in $P$ are collinear, so $k=3$. However, for a set of generic planes $\Pi$, $|I(P,\Pi)|=\Omega(|\Pi|\sqrt{|P|})$, which matches, up to constants, the bound \eqref{pups} when $|\Pi|>|P|$.
\end{remark} We also give two applications of Theorem \ref{mish} and show how it yields reasonably strong geometric incidence estimates over fields with positive characteristic. The forthcoming Theorem \ref{spr} claims that any plane set $S\subset \mathbb{F}^2$ of $N$ non-collinear points determines $\Omega\left[\min\left(N^{\frac{2}{3}},p\right)\right]$ distinct pair-wise bilinear -- i.e., wedge or dot -- products, with respect to any origin. If $S=A\times A$, $A\subseteq \mathbb{F}$, this improves to a sum-product type inequality \begin{equation} |AA+AA|=\Omega\left[\min\left(|A|^{\frac{3}{2}},p\right)\right]. \label{2spr}\end{equation} In the special case of $A$ being a multiplicative subgroup of $\mathbb{F}^*_p$, the same bound was proved by Heath-Brown and Konyagin \cite{HBK} and improved by V'jugin and Shkredov \cite{VS} (for suitably small multiplicative subgroups) to $\Omega\left(\frac{|A|^{\frac{5}{3}}} {\log^{\frac{1}{2}}|A|}\right).$ Theorem \ref{mish} becomes a vehicle to extend bounds for multiplicative subgroups to approximate subgroups. For more applications of Theorem \ref{mish} to questions of sum-product type see \cite{RRS}. Results in the latter paper include a new state of the art sum-product estimate $$\max(|A+A|,\,|AA|)\gg |A|^{\frac{6}{5}},\qquad\mbox{for }|A|<p^{\frac{5}{8}},$$ obtained from Theorem \ref{mish} in a manner similar to how the sum-product exponent $\frac{5}{4}$ gets proven over ${\mathbb R}$ using the Szemer\'edi-Trotter theorem in the well-known construction by Elekes \cite{E0}. The previously known best sum-product exponent $\frac{12}{11}-o(1)$ over $\mathbb{F}_p$ was proven by the author \cite{R}, ending a stretch of many authors' incremental contributions based on the so-called {\em additive pivot} technique introduced by Bourgain, Katz and Tao \cite{BKT}.\footnote{In a forthcoming paper with E. Aksoy, B. Murphy, and I. D.
Shkredov we present further applications of Theorem \ref{mish} to sum-product type questions in positive characteristic.} Such reasonably strong bounds in positive characteristic have been available so far only for subsets of finite fields large enough relative to the size of the field itself: see, e.g., \cite{HI}. Theorem \ref{mish} enables one to extend these bounds to small sets, and the barrier it imposes in terms of $p$ is often exactly where the two types of bounds over $\mathbb{F}_p$ meet. See \cite{RRS} for more discussion along these lines. The same can be said about our second application of Theorem \ref{mish}, Theorem \ref{erd}. It yields a new result for the Erd\H os distance problem in three dimensions in positive characteristic, which is not too far off what is known over the reals. A set $S$ of $N$ points in $\mathbb{F}^3$, not supported in a single semi-isotropic plane, contains a point from which $\Omega\left[\min\left(\sqrt{N},p\right)\right]$ distinct distances are realised. Semi-isotropic planes are planes spanned by two mutually orthogonal vectors $\boldsymbol e_1,\boldsymbol e_2$, such that $\boldsymbol e_1\cdot \boldsymbol e_1=0$, while $\boldsymbol e_2\cdot \boldsymbol e_2\neq 0$. They always exist in positive characteristic -- see \cite{HI} for explicit constructions in finite fields -- and one can have point sets with very few distinct distances within these planes. We mention in passing another application of Theorem \ref{mish}, which is Corollary \ref{intersections}, appearing midway through the paper and concerning the prime residue field $\mathbb{F}_p$. Given {\em any} family of $\Omega(p^2)$ straight lines in $G={SL}_2(\mathbb{F}_p)$, their union takes up a positive proportion of $G$. In Lie group-theoretical terminology these lines are known as generalised horocycles, that is, right cosets of one-dimensional subgroups conjugate to one of the two one-dimensional subgroups of triangular matrices with $1$'s on the main diagonal.
(See, e.g., \cite{BM} as a general reference.) A similar claim in $\mathbb{F}_p^3$ is false, for all the lines may lie in a small number of planes. Nonetheless, our Corollary \ref{intersections} is not new and follows from a result of Ellenberg and Hablicsek \cite{EH}. They extend to $\mathbb{F}_p^3$ another algebraic theorem of Guth and Katz over $\mathbb{C}$, an earlier relative of Theorem \ref{gkt}, from another breakthrough paper \cite{GKprime}. The assumption required for that in \cite{EH} is that all planes be relatively ``poor''. \begin{remark} The presence of the characteristic $p$ in the constraint of Theorem \ref{mish} and its applications makes a field $\mathbb{F}$ of positive characteristic morally just $\mathbb{F}_p$, for Theorem \ref{Salmon} is not true otherwise. Replacing this constraint by more elaborate ones in the context of finite extensions of $\mathbb{F}_p$ may be possible for $p>2$, provided that a classification of the exceptional cases of Salmon's theorem becomes available. Voloch \cite{V} conjectures that an irreducible flechnodal surface of degree $d$ may be unruled only if $p$ divides $d(d-1)(d-2)$ and gives evidence in this direction. See also \cite{EH} for examples of such {\em flexy} surfaces and a discussion from the incidence theory viewpoint.\label{rem} \end{remark} Let us give an outline of the proof of Theorem \ref{mish} to motivate the forthcoming background material in Section \ref{setup}. First off, Theorem \ref{gkt} needs to be extended to the case of pair-wise intersections between two families of $m$ and $n$ lines, respectively. The only way to do so to meet our purpose, in view of Remark \ref{sharps}, is the cheap one. If $m$ is much bigger than $n$, partition the $m$ lines into $\sim \frac{m}{n}$ groups of $\sim n$ lines each and apply a generalisation to $\mathbb{F}$ of Theorem \ref{gkt} separately to count incidences of each group with the family of $n$ lines. Let us proceed assuming $m=n$.
Let $q\in P,\,\pi\in\Pi$ be a point and a plane in ${\mathbb P}^3,$ with $q\in\pi$. Draw in the plane $\pi$ all lines incident to the point $q$. In the line geometry literature this figure is called a {\em plane pencil} of lines. It is represented by a line in the space of lines, that is, the Klein quadric ${\mathcal K}$, a four-dimensional hyperbolic projective quadric in ${\mathbb P}^5$, whose points are in one-to-one correspondence with lines in ${\mathbb P}^3$ via the so-called {\em Klein map}. If the characteristic $p$ of $\mathbb{F}$ is not $2$, the line pencil gets represented in $\mathcal{K}$ as follows. The Klein image of the family of all lines incident to $q$ is a copy of ${\mathbb P}^2$ contained in $\mathcal{K}$, a so-called $\alpha$-plane. The family of all lines contained in $\pi$ is also represented by a copy of ${\mathbb P}^2$ contained in $\mathcal{K}$, a so-called $\beta$-plane. A pair of planes of the two distinct types in ${\mathcal K}$ typically do not meet. If they do, this happens along a copy of ${\mathbb P}^1,$ a line in ${\mathcal K}$, which is the Klein image of the above line pencil, if and only if $q\in\pi$. Thus the number of incidences $|I(P,\Pi)|$ equals the number of lines along which the corresponding sets of $\alpha$- and $\beta$-planes meet in ${\mathcal K}$. One can now restrict the arrangement of planes in ${\mathcal K}$ from ${\mathbb P}^5$ to a generic hyperplane ${\mathbb P}^4$ intersecting $\mathcal{K}$ transversely. Its intersection with $\mathcal{K}$ is a three-dimensional sub-quadric $\mathcal{G}$, whose pre-image under the Klein map is called a {\em regular line complex.} There is a lot of freedom in choosing the generic subspace ${\mathbb P}^4$ to cut out $\mathcal{G}$. Or, one can fix the subspace ${\mathbb P}^4$ in the ``phase space'' ${\mathbb P}^5$ and realise this freedom to allow for certain projective transformations of the ``physical space'' ${\mathbb P}^3$ and its dual.
For there is a one-to-one correspondence between regular line complexes and so-called {\em null polarities} -- transformations from ${\mathbb P}^3$ to its dual by non-degenerate skew-symmetric matrices. See \cite{PW}, Chapter 3 for the general theory. The benefit of having gone down in dimension from $\mathcal{K}$ to $\mathcal{G}$ is that $\alpha$- and $\beta$-planes restrict to $\mathcal{G}$ as lines, which may generically meet only if they are of different type. This is because two planes of the same type intersect at one and only one point in $\mathcal{K}$. So one can choose the subspace ${\mathbb P}^4$ defining $\mathcal{G}$ in such a way that it contains none of the above finite number of points. If the field $\mathbb{F}$ is finite, the latter finite set may appear to be sizable in comparison with the size of $\mathcal{G}$ itself. However, just like in the proof of Theorem \ref{gkt}, one works in the algebraic closure of $\mathbb{F}$, which is infinite. Thus the only place where the characteristic $p$ of $\mathbb{F}$ makes a difference is within the body of the Guth-Katz theorem, to ensure the validity of Salmon's Theorem \ref{Salmon}. The corresponding constraint in terms of $p$ is stated explicitly and with constants in \cite{Ko}, Corollary 40. Having restricted the $\alpha$- and $\beta$-planes as lines in $\mathcal{G}$, we end up with two families of lines there, such that lines of the same type do not meet. The number of incidences $|I(P,\Pi)|$ equals the number of pair-wise intersections of these lines. The lines satisfy the input conditions of Theorem \ref{gkt}, the only difference being that they live in the three-quadric $\mathcal{G}\subset{\mathbb P}^4$, rather than ${\mathbb P}^3$. But one can always project a finite family of lines from higher dimension to ${\mathbb P}^3$ so that skew lines remain skew.
Thereupon we find ourselves in ${\mathbb P}^3$, and what's left for the proof of Theorem \ref{mish} has been essentially worked out by Guth-Katz and Koll\'ar. This seems a bit like a waste, for the space of lines in $\mathcal{G}$ is three, rather than four-dimensional. Yet we could not conceive a better theorem for $\mathcal{G}$, but for a chance of slightly better constants. Plus, Remark \ref{sharps} suggests that a stronger theorem about $\mathcal{G}$ must have more restrictive assumptions than Theorem \ref{mish}. To conclude this section, we briefly summarise the key steps in the beautiful proof of Guth and Katz, which we retell with small modifications as the proof of Theorem \ref{gkt2} in the main body of the paper. We could have almost got away with just citing \cite{Ko}, Sections 3 and 4 but for a few extra details, since we still need to bring the collinearity parameter $k$ into play. Assuming that there are some $C n^{\frac{3}{2}}$ pair-wise line intersections in ${\mathbb P}^3$ enables one to put all the lines, supporting more than roughly the average number of incidences per line, on a polynomial surface $Z$ of degree $d\sim \frac{ \sqrt{n} }{C},$ so most of the incidences come from within factors of $Z$. One can use induction in $n$ to effectively assume that the number of these lines is $\Omega(n)$. Then Salmon's theorem implies that $Z$ must have a ruled component, containing a vast majority of the latter lines. One should not bother about non-ruled factors by the induction hypothesis. However, a non-cone ruled factor of degree $d>2$ can only support a relatively small number of incidences. Since lines of the same type do not meet, having many incidences within planes or cones is not an option either. Hence if there are $C n^{\frac{3}{2}}$ incidences, $Z\subset {\mathbb P}^3$ must have a doubly-ruled quadric factor, containing many lines. 
Once we lift $Z$ back to $\mathcal{G}$, this means having many lines of each type in the intersection of $\mathcal{G}\subset{\mathbb P}^4$ with a ${\mathbb P}^3$. Finally, an easy argument in the forthcoming Lemma \ref{lem} shows that intersections of $\mathcal{G}$ with a ${\mathbb P}^3$ can be put into correspondence with what happens within the original arrangement of points and planes in the ``physical space''. Namely lines of the two types meeting in $\mathcal{G}\cap{\mathbb P}^3$ represent precisely incidences of the original points and planes along some line in the ${\mathbb P}^3$. This brings the collinearity parameter $k$ into the incidence estimate and completes the proof. \section{Acknowledgment} The author is grateful to Jon Selig for educating him about the Klein quadric. Special thanks to J\'ozsef Solymosi for being the first one to point out a mistake in the estimate of Theorem \ref{mish} in the original version of the paper and two anonymous Referees for their patience and attention to detail. This research was conceived in part when the author was visiting the Institute for Pure and Applied Mathematics (IPAM), which is supported by the National Science Foundation. \section{Geometric set-up} \subsection{Background} \label{setup} We begin with a brief introduction of the Klein, alias Klein-Pl\"ucker quadric ${\mathcal K}$. See \cite{PW}, Chapter 2 or \cite{JS}, Chapter 6 for a more thorough treatment. 
The space of lines in ${\mathbb P}^3$ is represented as a projective quadric, known as the {\em Klein quadric} $\mathcal K$ in ${\mathbb P}^5$, with projective coordinates $(P_{01}:P_{02}:P_{03}:P_{23}:P_{31}:P_{12})$, known as {\em Pl\"ucker coordinates.} The latter {\em Pl\"ucker vector} yields the {\em Klein image} of a line $l$ defined by a pair of points $q=(q_0:q_1:q_2:q_3)$ and $u=(u_0:u_1:u_2:u_3)$ in ${\mathbb P}^3$ that it contains, under the {\em Klein map}, defined as follows: \boldsymbol bgin{equation} P_{ij}=q_iu_j-q_ju_i,\qquad i,j=0,\ldots,3. \label{Pc}\boldsymbol nd{equation} It is easy to verify that once $\{P_{ij}\}$ are viewed as homogeneous coordinates, this definition does not depend on the particular choice of the pair of points on the ``physical line'' $l$, and there are $6=4\cdot 3/2$ independent projective Pl\"ucker coordinates $P_{ij}$. We use the capital $L\in {\mathbb P}^5$ for the Pl\"ucker vector, which is the Klein image of the line $l\subset {\mathbb P}^3$. For an affine line in $\mathbb{F}^3$, obtained by setting $q_0=u_0=1$, the Pl\"ucker coordinates acquire the meaning of a projective pair of three-vectors $(\boldsymbol \omega: \boldsymbol v)$, where $\boldsymbol \omega=(P_{01},P_{02},P_{03})$ is a vector in the direction of the line and for any point $\boldsymbol q=(q_1,q_2,q_3)$ on the line, $\boldsymbol v = (P_{23},P_{31},P_{12}) = \boldsymbol q\times\boldsymbol \omega$ is the line's moment vector\footnote{In this section we use boldface notation for three-vectors. The essentially Euclidean vector product notation is to keep the exposition as elementary as possible: in $\mathbb{F}^3$ the notation $\boldsymbol v = \boldsymbol q\times\boldsymbol \omega$ means only that $\boldsymbol v$ arises from $\boldsymbol \omega$ after multiplication on the left by the skew-symmetric matrix $T=ad({\boldsymbol q})$, with $T_{12}=-q_3,\,T_{13}=q_2,\,T_{23}=-q_1$.\label{ads}} with respect to the fixed origin. 
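The definition \eqref{Pc} and the moment-vector interpretation can be checked mechanically. The following sketch is our own illustration (the point and direction are arbitrarily chosen numeric data), over the rationals:

```python
# Sanity check (assumed numeric example) of the Klein map (Pc): for an
# affine line through q with direction ω, the Plücker pair (ω : v)
# satisfies v = q × ω, and ω·v = 0 (the equation of the Klein quadric).
from fractions import Fraction as Fr

def plucker(q, u):
    """P_ij = q_i u_j - q_j u_i for homogeneous points q, u in P^3,
    returned as the pair of three-vectors (ω, v)."""
    P = {(i, j): q[i] * u[j] - q[j] * u[i]
         for i in range(4) for j in range(4)}
    return (P[0, 1], P[0, 2], P[0, 3]), (P[2, 3], P[3, 1], P[1, 2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

q3 = (Fr(1), Fr(2), Fr(3))      # an (arbitrary) point on the line
w = (Fr(4), Fr(5), Fr(6))       # an (arbitrary) direction vector
q = (Fr(1), *q3)                                    # lift with q_0 = 1
u = (Fr(1), *(qi + wi for qi, wi in zip(q3, w)))    # second point q + ω

omega, v = plucker(q, u)
assert omega == w               # ω is the direction of the line
assert v == cross(q3, w)        # v = q × ω, the moment vector
assert dot(omega, v) == 0       # the Klein quadric relation
```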
Lines in the plane at infinity have $\boldsymbol \omega=\boldsymbol 0$. We use the boldface notation for three-vectors throughout. Conversely, one can denote $\boldsymbol \omega=(P_{01},P_{02},P_{03}),\; \boldsymbol v=(P_{23},P_{31},P_{12}),$ so that the Pl\"ucker coordinates become $(\boldsymbol \omega:\boldsymbol v)$, and treat $\boldsymbol \omega$ and $\boldsymbol v$ as vectors in $\mathbb{F}^3$, bearing in mind that as a pair they are projective quantities. The equation of the Klein quadric ${\mathcal K}$ in ${\mathbb P}^5$ is \begin{equation} P_{01}P_{23}+P_{02}P_{31}+P_{03}P_{12}=0,\;\mbox{ i.e., }\; \boldsymbol \omega\cdot\boldsymbol v=0. \label{Klein}\end{equation} More formally, equation \eqref{Klein} arises after writing out, with the notations \eqref{Pc}, the truism $$ \det\left(\begin{array}{cccc} q_0&u_0&q_0&u_0\\q_1&u_1&q_1&u_1\\q_2&u_2&q_2&u_2\\q_3&u_3&q_3&u_3\end{array}\right) = 0. $$ Two lines $l,l'$ in ${\mathbb P}^3$, with Klein images $$L=(P_{01}:P_{02}:P_{03}:P_{23}:P_{31}:P_{12}),\qquad L'=(P'_{01}:P'_{02}:P'_{03}:P'_{23}:P'_{31}:P'_{12}),$$ meet in ${\mathbb P}^3$ if and only if \begin{equation}\label{intersection} P_{01}P'_{23} + P_{02}P'_{31} + P_{03}P'_{12} + P'_{01}P_{23} + P'_{02}P_{31} + P'_{03}P_{12}\;=\;0.\end{equation} The left-hand side of \eqref{intersection} is called the {\em reciprocal product} of the two Pl\"ucker vectors.
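The criterion \eqref{intersection} lends itself to a quick numerical sanity check, again an illustration of our own with arbitrarily chosen lines, written as $(\boldsymbol \omega:\boldsymbol v)$ with $\boldsymbol v=\boldsymbol q\times\boldsymbol \omega$: the reciprocal product vanishes for a pair of concurrent lines and not for a pair of skew ones.

```python
# Sanity check (assumed example) of the intersection criterion
# (intersection): the reciprocal product, in the (ω : v) notation,
# vanishes exactly when the two lines meet.
from fractions import Fraction as Fr

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def line(q, w):
    """Plücker pair (ω, v) of the affine line through q with direction w."""
    return (w, cross(q, w))

def reciprocal(L1, L2):
    (w1, v1), (w2, v2) = L1, L2
    return dot(w1, v2) + dot(v1, w2)

one, zero = Fr(1), Fr(0)
# Two lines through the common point (1, 1, 1): reciprocal product 0.
L1 = line((one, one, one), (one, zero, zero))
L2 = line((one, one, one), (zero, one, zero))
# The x-axis and a line skew to it: nonzero reciprocal product.
L3 = line((zero, zero, zero), (one, zero, zero))
L4 = line((zero, one, one), (zero, one, zero))

assert reciprocal(L1, L2) == 0      # concurrent lines meet
assert reciprocal(L3, L4) != 0      # skew lines do not
```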
If they are viewed as $L=(\boldsymbol \omega:\boldsymbol v)$ and $L'=(\boldsymbol \omega':\boldsymbol v')$, the intersection condition becomes \begin{equation}\label{intersectionv} \boldsymbol \omega\cdot \boldsymbol v'+ \boldsymbol v\cdot \boldsymbol \omega' = 0.\end{equation} Condition \eqref{intersection} can be restated as \begin{equation} L^T\mathcal QL' = 0,\qquad \mathcal Q = \left(\begin{array}{cc} 0 & I_3\\ I_3 & 0\end{array}\right),\label{qum}\end{equation} where $I_3$ is the $3\times 3$ identity matrix. It is easy to see by \eqref{intersection}, after taking the gradient in \eqref{Klein}, that a hyperplane ${\mathbb P}^4$ in ${\mathbb P}^5$ is tangent to $\mathcal{K}$ at some point $L$ if and only if the covector defining the hyperplane is itself in the Klein quadric in the dual space. Moreover, it follows from (\ref{intersection}) that the intersection $T_L \mathcal{K}\cap \mathcal{K}$ of $\mathcal{K}$ with the tangent hyperplane through $L$ consists of the $L'\in \mathcal{K}$ which are the Klein images of all lines $l'$ in ${\mathbb P}^3$ incident to the line $l$ represented by $L$. The union of all these lines $l'$ is called a {\em singular line complex.} \subsubsection{Two rulings by planes and line complexes} The largest dimension of a projective subspace contained in $\mathcal{K}$ is two. This can be seen as follows. After the coordinate change $\boldsymbol x = \boldsymbol \omega-\boldsymbol v$, $\boldsymbol y = \boldsymbol \omega+\boldsymbol v$, the equation \eqref{Klein} becomes \begin{equation} \|\boldsymbol x\|^2= \|\boldsymbol y\|^2. \label{xform}\end{equation} This equation cannot be satisfied by a ${\mathbb P}^3$. It can be satisfied by a ${\mathbb P}^2$ if and only if $\boldsymbol y= M\boldsymbol x,$ for some orthogonal matrix $M$. We further assume that ${\rm char}(\mathbb F)\neq 2$, which is crucial.
For then there are two cases, corresponding to $\det M=\pm 1$. The two cases correspond to two ``rulings'' of $\mathcal{K}$ by planes which lie entirely in it, the fibre space of each ruling being ${\mathbb P}^3$. To characterise the two rulings, called $\alpha$- and $\beta$-planes, corresponding to $\det M=\pm1$, respectively, one returns to the original coordinates $(\boldsymbol \omega:\boldsymbol v)$. After a brief calculation, see \cite{JS}, Section 6.3, it turns out that Pl\"ucker vectors in a single $\alpha$-plane in $\mathcal{K}$ are Klein images of lines in ${\mathbb P}^3$ which are concurrent at some point $(q_0:q_1:q_2:q_3)\in {\mathbb P}^3$. If the concurrency point is $(1:\boldsymbol q)$, which is identified with $\boldsymbol q\in \mathbb F^3$, the $\alpha$-plane is a graph $\boldsymbol v = \boldsymbol q\times \boldsymbol \omega$. Otherwise, an ideal concurrency point $(0:\boldsymbol \omega)$ gets identified with some fixed $\boldsymbol \omega$, viewed as a projective vector. The corresponding $\alpha$-plane is the union of the set of parallel lines in $\mathbb{F}^3$ in the direction of $\boldsymbol \omega$, with Pl\"ucker coordinates $(\boldsymbol \omega:\boldsymbol v)$, so $\boldsymbol v\cdot\boldsymbol \omega=0,$ by \eqref{Klein}, and the set of lines in the plane at infinity incident to the ideal point $(0:\boldsymbol \omega)$. The latter lines have Pl\"ucker coordinates $(\boldsymbol 0:\boldsymbol v),$ with, once again, $\boldsymbol v\cdot\boldsymbol \omega=0$. Similarly, Pl\"ucker vectors lying in a $\beta$-plane represent co-planar lines in ${\mathbb P}^3$. A ``generic'' $\beta$-plane is a graph $\boldsymbol \omega = \boldsymbol u\times \boldsymbol v$, for some $\boldsymbol u \in \mathbb F^3$.
The case $\boldsymbol u =\boldsymbol 0$ corresponds to the plane at infinity; otherwise the equation of the co-planarity plane in $\mathbb{F}^3$ becomes \begin{equation}\boldsymbol u\cdot \boldsymbol q =-1.\label{refer}\end{equation} If $\boldsymbol u$ gets replaced by a fixed ideal point $(0:\boldsymbol v)$, the corresponding $\beta$-plane comprises lines, coplanar in planes through the origin: $\boldsymbol v\cdot \boldsymbol q = 0$. The $\beta$-plane in the Klein quadric is formed by the set of such lines with Pl\"ucker coordinates $(\boldsymbol \omega:\boldsymbol v)$, plus the set of lines through the origin in the co-planarity plane. The latter lines have Pl\"ucker coordinates $(\boldsymbol \omega:\boldsymbol 0)$. In both cases one requires $\boldsymbol \omega\cdot\boldsymbol v = 0$. Two planes of the same ruling of $\mathcal{K}$ always meet at a point, which is the line defined by the two concurrency points in the case of $\alpha$-planes. An $\alpha$- and a $\beta$-plane typically do not meet. If they do, then the concurrency point $q$ defining the $\alpha$-plane lies in the plane $\pi$ defining the $\beta$-plane. The intersection is then a line, a copy of ${\mathbb P}^1$ in ${\mathcal K}$, representing a {\em plane pencil of lines}. These are the lines in ${\mathbb P}^3$ which are co-planar in $\pi$ and concurrent at $q$. Conversely, each line in ${\mathcal K}$ identifies the pair ($\alpha$-plane, $\beta$-plane), that is, the plane pencil of lines, uniquely. Moreover, points $L,L'\in \mathcal{K}$ can be connected by a straight line in $\mathcal{K}$ if and only if the corresponding lines $l,l'$ in ${\mathbb P}^3$ meet, cf. \eqref{qum}. From the non-degeneracy of the reciprocal product it follows that the reciprocal-orthogonal projective subspace to an $\alpha$- or $\beta$-plane is the plane itself.
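Both ruling graphs can be checked by a direct computation. Here is a minimal sketch (helper names and sample data are ours; the moment convention is $\boldsymbol v=\boldsymbol q\times\boldsymbol \omega$):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

# alpha-plane: lines through a fixed point q lie on the graph v = q x w,
# and any two of them meet, so their reciprocal product vanishes
q = (2, -1, 3)
w1, w2 = (1, 0, 0), (4, 5, 6)
v1, v2 = cross(q, w1), cross(q, w2)
assert dot(w1, v1) == 0 and dot(w2, v2) == 0   # both points lie in K
assert dot(w1, v2) + dot(v1, w2) == 0          # the two lines meet (at q)

# beta-plane: the line joining two points of the plane {u . q = -1}
# lies on the graph w = u x v
u = (0, 0, -1)                        # the plane u . q = -1 is {q3 = 1}
p1, p2 = (0, 0, 1), (1, 2, 1)         # two points of that plane
w = tuple(b - a for a, b in zip(p1, p2))   # direction of the joining line
v = cross(p1, w)                      # its moment
assert cross(u, v) == w
```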
Hence, a hyperplane in ${\mathbb P}^5$ contains an $\alpha$- or $\beta$-plane if and only if it is the tangent hyperplane $T_L\mathcal{K}$ at some point $L$ lying in the plane. It follows that a singular line complex arises if and only if the equation of the hyperplane intersecting $\mathcal{K}$ is $(\boldsymbol u:\boldsymbol w)^T(\boldsymbol \omega:\boldsymbol v)=0$, with the dual vector $(\boldsymbol u:\boldsymbol w)$ itself such that $\boldsymbol u\cdot\boldsymbol w=0$. Otherwise the Klein pre-image of the intersection of the hyperplane with $\mathcal{K}$ is called a regular line complex. We remark that a geometric characterisation of a regular line complex is that it is the set of invariant lines of some {\em null polarity}, that is, a projective map from ${\mathbb P}^3$ to its dual ${\mathbb P}^{3*}$ defined via a $4\times 4$ non-degenerate skew-symmetric matrix. In particular, a null polarity assigns to each point $q\in {\mathbb P}^3$ a plane $\pi(q)$, such that $q\in \pi(q)$. See \cite{PW}, Chapter 3 for more detail. A particular example of the Klein image of a regular line complex arises if one sets $\omega_3=v_3$, i.e. $x_3=0$ in coordinates \eqref{xform}. One can identify $(-x_1:x_2:0:y_1:y_2:1)$ with $\mathbb F^4$, getting $$ x_1y_1 - x_2y_2 = 1 $$ for the affine part of $\mathcal{G}$, which can be identified with the group $SL_2(\mathbb F)$. The following lemma describes the intersection of a regular complex with a singular one. \begin{lemma} \label{lem} Let $l$ be a line in ${\mathbb P}^3$, represented by $L\in\mathcal{K}$. Then $\mathcal{K}\cap T_L\mathcal{K}$ contains $\alpha$- and $\beta$-planes, corresponding, respectively, to points on $l$ and planes containing $l$. Given two hyperplanes $S_1$, $S_2$ in ${\mathbb P}^5$, suppose $\mathcal{K}\cap S_1$ is the Klein image of a regular line complex. Consider the intersection $\mathcal{K}\cap S_1\cap S_2$.
If the field $\mathbb{F}$ is algebraically closed, then $\mathcal{K}\cap S_1\cap S_2 = \mathcal{K}\cap S_1\cap S'_2$, where $S'_2$ is tangent to $\mathcal{K}$ at some point $L$. That is, $\mathcal{K}\cap S_2'$ is the Klein image of the singular line complex of lines in ${\mathbb P}^3$ meeting the Klein pre-image $l$ of $L$. \end{lemma} \begin{proof} The first statement follows immediately from the definitions above. To prove the second statement, suppose $S_2$ is not tangent to $\mathcal{K}$. Let the two line complexes be defined by dual vectors $(\boldsymbol u:\boldsymbol w)$ and $(\boldsymbol u':\boldsymbol w')$. If $\mathbb{F}$ is algebraically closed, the line $t_1(\boldsymbol u:\boldsymbol w) + t_2 (\boldsymbol u':\boldsymbol w')$ in the dual space will then intersect the Klein quadric in the dual space, a point of intersection $L$ defining $S_2'$. Note, however, that if $S_2$ is itself tangent to $\mathcal{K}$ at $L$, then there is only one solution, $L$ itself; otherwise there are two. \end{proof} \subsubsection{Reguli} For completeness, and since reguli appear in the formulation of Theorem \ref{gkt}, we give a brief account in this section. See also the next section on ruled surfaces. The $\alpha$- and $\beta$-planes represent a degenerate case when a subspace $S= {\mathbb P}^2$ of ${\mathbb P}^5$ is contained in $\mathcal{K}$. Assume that $\mathbb{F}$ is algebraically closed; then any $S$ intersects $\mathcal{K}$. The non-degenerate situation is $S$ intersecting $\mathcal{K}$ along an irreducible conic curve. This curve in $\mathcal{K}$ is called a {\em regulus}, and the union of the lines corresponding to it in the physical space forms a single ruling of a doubly-ruled quadric surface. One uses the term regulus to refer to both the above curve in $\mathcal{K}$ and the family of lines in ${\mathbb P}^3$ this curve represents.
Choose affine coordinates, so that the equations of the two-plane $S$ can be written as $$ A\boldsymbol \omega + B\boldsymbol v = \boldsymbol 0, $$ where $A,B$ are some $3\times3$ matrices. For points in $S\cap\mathcal{K}$ which do not represent lines in the plane at infinity in ${\mathbb P}^3$, we can write $\boldsymbol v =\boldsymbol q\times \boldsymbol \omega$, where $\boldsymbol q$ is some point in $\mathbb{F}^3$ on the line with Pl\"ucker coordinates $(\boldsymbol \omega:\boldsymbol v)$, and $\boldsymbol \omega\neq \boldsymbol 0$. If $T$ denotes the skew-symmetric matrix ${\rm ad}(\boldsymbol q)$, we obtain $$ (A-BT)\boldsymbol \omega =\boldsymbol 0\qquad \Rightarrow\qquad \det(A-BT)=0. $$ This is a quadratic equation in $\boldsymbol q$, since $T$ is a $3\times 3$ skew-symmetric matrix, so $\det T=0$. If the above equation has a linear factor in $\boldsymbol q$, defining a plane $\pi\subset{\mathbb P}^3$, then $S\cap \mathcal{K}$ contains a line, which represents a pencil of lines in $\pi$. If the above quadratic polynomial in $\boldsymbol q$ is irreducible, and $\mathbb{F}$ is algebraically closed, one always gets an irreducible quadric surface in ${\mathbb P}^3$ as the union of the lines in the regulus, see Lemma \ref{rs} in the next section. In the latter case, by Lemma \ref{lem}, the two-plane $S$ in ${\mathbb P}^5$ can be obtained as the intersection of three four-planes, tangent to $\mathcal{K}$ at some three points $L_1,L_2,L_3$, corresponding to three mutually skew lines in ${\mathbb P}^3$. Thus a regulus can be redefined as the set of all lines in ${\mathbb P}^3$ meeting three given mutually skew lines $l_1,l_2,l_3$. Its Klein image is a conic. Each regulus has a reciprocal one: the Klein image of the union of all lines incident to any three lines represented in the former regulus. These lines form the second ruling of the same doubly-ruled quadric surface. See \cite{JS}, Section 6.5.1 for a coordinate description of reciprocal reguli.
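The fact that $\det(A-BT)$ is quadratic in $\boldsymbol q$ can be verified symbolically. The sketch below uses arbitrarily chosen integer matrices $A,B$, purely for illustration:

```python
import sympy as sp

q1, q2, q3 = sp.symbols('q1 q2 q3')
# T = ad(q): the skew-symmetric matrix with T*w = q x w
T = sp.Matrix([[0, -q3, q2],
               [q3, 0, -q1],
               [-q2, q1, 0]])
assert sp.expand(T.det()) == 0       # odd-size skew-symmetric matrix

# arbitrarily chosen (illustrative) matrices A, B defining the two-plane S
A = sp.Matrix([[1, 2, 0], [0, 1, 3], [4, 0, 1]])
B = sp.Matrix([[2, 0, 1], [1, 1, 0], [0, 3, 1]])

# det(A - B*T) = 0 cuts out the regulus; it is quadratic in q, because
# the cubic part equals -det(B)*det(T) = 0
P = sp.expand((A - B*T).det())
assert sp.Poly(P, q1, q2, q3).total_degree() <= 2
```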
\label{rgl} \subsubsection{Algebraic ruled surfaces} \label{ruled} Differential geometry of ruled surfaces is a rich and classical field of study. From a historical perspective, it was Pl\"ucker who essentially invented the subject in the two-volume treatise \cite{Pl}, which was completed after his death by Klein. We give the minimum background on algebraic ruled surfaces in ${\mathbb P}^3$. In this whole section the field $\mathbb{F}$ is assumed to be algebraically closed, of characteristic $p\neq 2$. See \cite{PW}, Chapter 5 for the discussion in the case $p=0$. In positive characteristic the basics of the algebraic theory of ruled surfaces are in many respects the same, and for our modest designs we need only these basics. A ruled surface is defined as a smooth projective surface over an algebraically closed field that is birationally equivalent to a surface ${\mathbb P}^1\times \mathcal C$, where $\mathcal C$ is a smooth projective curve of genus $g\geq0$. See, e.g., \cite{Ba}, \cite{Li} for the general theory of algebraic surfaces. Also Koll\'ar (see \cite{Ko}, Section 7) presents, in terms of more formal algebraic geometry, a brief account of the facts about ruled surfaces necessary for the proof of Theorem \ref{gkt}. Since he only mentions the Klein quadric implicitly, through a citation, we review these facts below. Informally, an algebraic ruled surface is a surface in $\mathbb P^3$ composed of a polynomial family of lines. We assume the viewpoint of Chapter 5 of the book by Pottmann and Wallner \cite{PW}, where an algebraic ruled surface is identified with a polynomial curve $\Gamma$ in the Klein quadric. The union of the lines that are the Klein pre-images of the points of $\Gamma$ sweeps out a surface $Z\subset \mathbb P^3$, called the {\em point set} of $\Gamma$. It is easy to show that $Z$ is then an algebraic surface, that is, a projective variety of dimension $2$.
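As a toy illustration of a polynomial curve in $\mathcal{K}$ and its point set, one can take the ruling $x=s$, $z=sy$ of the quadric $z=xy$ (the example is our choice, with the moment convention $\boldsymbol v=\boldsymbol q\times\boldsymbol \omega$):

```python
import sympy as sp

s, u = sp.symbols('s u')
q = sp.Matrix([s, 0, 0])    # a point of the generator l_s
w = sp.Matrix([0, 1, s])    # its direction, so l_s = { q + u*w }
v = q.cross(w)              # moment: (w : v) is the Pluecker curve Gamma(s)

# Gamma is a polynomial curve lying in the Klein quadric
assert sp.expand(w.dot(v)) == 0

# the point set Z of Gamma is the doubly-ruled quadric z = x*y
x, y, z = q + u*w
assert sp.expand(z - x*y) == 0
```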
A line in $Z$ which is the Klein pre-image of a point of $\Gamma$ is called a {\em generator.} A regular generator $L$, that is, a regular point of $\Gamma\subset \mathcal K$, is called {\em torsal} in the special case when the tangent vector to $\Gamma$ at $L$ is also in $\mathcal{K}$. The Klein pre-image of a regular torsal generator necessarily supports a singular surface point, called a {\em cuspidal point}. An irreducible component of $\Gamma$ is referred to as a {\em ruling} of $\Gamma$. The same term {\em ruling} is applied to the corresponding family of lines, ruling the surface $Z$. Here is a basic genericity statement about ruled surfaces. See, e.g., \cite{PW}, Chapter 5. \begin{lemma}\label{rs} Let $\Gamma$ be an algebraic curve in $\mathcal{K}$, with no irreducible component contained in the intersection of $\mathcal{K}$ with any ${\mathbb P}^2$. Let $Z$ be the point set of $\Gamma$. The subset of $Z$ which is the union of all pair-wise intersections of different rulings of $\Gamma$ and all cuspidal points is a subset of the set of singular points of $Z$. It is contained in an algebraic subvariety of dimension $\leq 1$. Besides, the curve $\Gamma$ is irreducible if and only if its point set $Z$ is irreducible. \end{lemma} We do not give a proof, apart from a few remarks. The conditions of Lemma \ref{rs} rule out the cases when $Z$ has a plane or smooth quadric component. Clearly, a plane can be the point set of many rulings of lines lying therein; a smooth quadric has two reciprocal reguli and is therefore an example where the union of the two reguli, not irreducible as a ruled surface, has an irreducible point set. Let $Z$ further denote the point set of a ruling. Suppose $Z$ contains three lines $l_{1}, l_{2}, l_3$ incident to every line in the ruling.
If, say, $l_{1}$ and $l_{2}$ meet, then the ruling lies in an $\alpha$- or a $\beta$-plane (so that $Z$ is a cone or a plane), depending on whether or not $l_3$ also meets $l_1$ and $l_2$ at the same point. If the three lines are mutually skew, then the ruling is contained in the intersection of three singular line complexes $T_1,T_2,T_3$, corresponding to the three lines. Their intersection is represented in $\mathcal{K}$ as the latter's transverse section by a ${\mathbb P}^2$ along a conic, that is, a regulus. Then $Z$ is an irreducible quadric surface, which has a reciprocal ruling: the set of lines incident to any three lines in the former ruling. See the above discussion of reguli, as well as \cite{JS}, Chapter 6 for more details. Conversely, if a ruling is contained in an $\alpha$-plane, then $Z$ is a cone: all the generators are incident at the concurrency point defining the $\alpha$-plane. If a ruling lies in a $\beta$-plane, then $Z$ is a plane. If the ruling arises as a result of a transverse intersection of a ${\mathbb P}^2$ with $\mathcal{K}$, it is either a pencil of lines or a regulus. In the former case $Z$ is a plane, in the latter case an irreducible doubly-ruled quadric. An important part of the proof of Theorem \ref{gkt} is the claim that one cannot have too many line-line incidences within a higher degree irreducible ruled surface which is not a cone. It is essentially the rest of this section that is directly relevant to Theorems \ref{gkt} and \ref{mish}. \begin{lemma}\label{rss} Let $\Gamma$ be an algebraic ruled surface of degree $d$, whose point set $Z$ has no plane component. Then the degree of $Z$ equals $d$. A generator in a ruled surface of degree $d$ which does not have a cone component meets at most $d-2$ other generators. \end{lemma} \begin{proof} By the preceding argument, the theorem is true for $d=2$, so one may assume that the conditions of Lemma \ref{rs} are satisfied.
Since $\mathbb{F}$ is algebraically closed, a generic line $l$ in ${\mathbb P}^3$ intersects $Z$ exactly $d$ times, at points meeting one generator each. It follows that for the Klein image $L$ of $l$, one has $$ L^T \mathcal Q L' = 0, $$ for $d$ distinct $L'\in \Gamma$. Thus the curve $\Gamma$ meets the hyperplane $T_L\mathcal{K}$ in ${\mathbb P}^5$ transversely $d$ times, and hence has degree $d$. If in the latter equation $L$ no longer represents a generic line in ${\mathbb P}^3$ but a generator of $\Gamma$, the equation must still have $d$ solutions, counting multiplicities. Besides, $L'=L$ has multiplicity at least $2$, since the intersection of $\Gamma$ with $T_L\mathcal{K}$ at $L$ is not transverse. \end{proof} It also follows that the point set of an irreducible ruled surface $\Gamma$ of degree $d\geq3$ cannot be a smooth projective surface. The point set of $\Gamma$ will necessarily have singular points, where two generators meet, or cuspidal points of torsal generators. It is also well known that the point set of an irreducible ruled surface of degree $d\geq3$ can support at most two non-generator {\em special lines} which intersect each generator. This is because any two special lines must be skew to each other, or else one has a plane; and if there were three or more special lines, one would have a quadric. \subsection{Point-plane incidences in ${\mathbb P}^3$ are line incidences in a three-quadric in ${\mathbb P}^4$} \label{geom} We can now start moving towards Theorem \ref{mish}. Assume that $\mathbb{F}$ is algebraically closed, or pass to the algebraic closure, still calling it $\mathbb{F}$. It is crucial for this section that $\mathbb{F}$ not have characteristic $2$. Let $\mathcal{K}\subset{\mathbb P}^5$ be the Klein quadric, and let $\mathcal{G}=\mathcal{K}\cap S$, for a four-hyperplane $S$ whose defining covector is not in the Klein quadric in ${\mathbb P}^{5*}$. E.g., $\mathcal{G}$ may be defined by the equation $P_{03}=P_{12}$.
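That $P_{03}=P_{12}$ defines a regular complex can be checked directly. The sketch below assumes the standard ordering $(\boldsymbol \omega:\boldsymbol v) = (P_{01},P_{02},P_{03}:P_{23},P_{31},P_{12})$ of Pl\"ucker coordinates, which is our choice of convention; the associated null-polarity matrix is then non-degenerate:

```python
import sympy as sp

# covector (u : w) of the hyperplane P_03 - P_12 = 0 under the assumed
# Pluecker ordering (P_01, P_02, P_03 : P_23, P_31, P_12)
u, w = (0, 0, 1), (0, 0, -1)
assert sum(a*b for a, b in zip(u, w)) != 0   # u.w != 0: not in the dual Klein quadric

# associated null-polarity matrix M, with X^T M Y = sum_{i<j} m_ij P_ij
m01, m02, m03 = u
m23, m31, m12 = w
M = sp.Matrix([[0,    m01,  m02,  m03],
               [-m01, 0,    m12, -m31],
               [-m02, -m12, 0,    m23],
               [-m03, m31, -m23,  0]])
assert M.T == -M          # skew-symmetric
assert M.det() != 0       # non-degenerate, so the complex is regular
```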
Since $\mathcal{G}$ contains no planes, each $\alpha$- or $\beta$-plane in ${\mathcal K}$ intersects ${\mathcal G}$ along a line. We therefore have two line families $L_\alpha,L_\beta$ in $\mathcal{G}$. We warn the reader against confusing lines lying in the three-quadric $\mathcal{G}\subset \mathcal{K}\subset{\mathbb P}^5$ in the ``phase space'' with lines from the regular line complex in the ``physical space'' ${\mathbb P}^3$ that $\mathcal{G}$ is the Klein image of. The following lemma states that one can assume $L_\alpha\cap L_\beta=\emptyset$, as well as that the lines within each family do not meet each other. \begin{lemma} \label{tog} Suppose $\mathbb{F}$ is algebraically closed and not of characteristic $2$. To every finite point-plane arrangement $(P,\Pi)$ in ${\mathbb P}^3$ one can associate two distinct families of lines $L_\alpha,L_\beta$, contained in some three-quadric $\mathcal{G}=\mathcal{K}\cap S$, where the four-hyperplane $S$ is not tangent to $\mathcal{K}$, with the following property. No two lines of the same family meet; $|L_\alpha|=m$, $|L_\beta|=n$, and $|I(P,\Pi)| = |I(L_\alpha,L_\beta)|$, where $I(L_\alpha,L_\beta)$ is the set of pair-wise incidences between the lines in $L_\alpha$ and $L_\beta$. Alternatively, one can regard $S$ as fixed and find a new point-plane arrangement $(P',\Pi')$ in ${\mathbb P}^3$, with the same $m,n$ and number of incidences, to which the above claim applies. Besides, if $k_m,k_n$ are the maximum numbers of, respectively, collinear points and planes in $P,\Pi$, they are now the maximum numbers of lines in the families $L_\alpha,L_\beta$, respectively, contained in the intersection of $\mathcal{G}\subset S$ with a projective three-subspace of $S$. \end{lemma} \begin{proof} Suppose we have an incidence $(p,\pi)\in P\times\Pi$.
This means that the $\alpha$-plane defined by $p\in P$ and the $\beta$-plane defined by $\pi\in \Pi$ intersect along a line in ${\mathcal K}$. There are at most $m^2+n^2$ points in ${\mathcal{K}}$ where planes of the same type meet and at most $mn$ lines along which the planes of different types may possibly intersect. We must choose $\mathcal{G}$, that is, a hyperplane $S$ in ${\mathbb P}^5$ intersecting $\mathcal{K}$ transversely, so that it supports none of the above lines or points in $\mathcal{K}$. This means avoiding a finite number of linear constraints on the dual vector $U^T\in {\mathbb P}^{5*}$ defining $S$. Since $\mathbb{F}$ is algebraically closed, it is infinite, and such an $S$ always exists, for $m,n$ are finite. The covector $U^T$ defining $S$ must (i) not lie in the Klein quadric in ${\mathbb P}^{5*}$ and (ii) satisfy $U^T L_i\neq 0$ for each of the at most $m^2+n^2 + mn$ Pl\"ucker vectors $L_i$ in question. There is a nonempty Zariski open set of such covectors in ${\mathbb P}^{5*}$. To justify the second claim of the lemma we use the fact that there is a one-to-one correspondence between so-called null polarities and regular line complexes. A null polarity is a projective transformation from ${\mathbb P}^3$ to its dual, given by a non-degenerate $4\times 4$ skew-symmetric matrix. The six above-diagonal entries of the matrix are in one-to-one correspondence with the covector defining the regular line complex; the fact that the skew-symmetric matrix is non-degenerate is precisely the condition that the covector not lie in the Klein quadric. See \cite{PW}, Chapter 3 for the general theory of line complexes. Hence the following procedure is equivalent to the above-described one of choosing the transverse hyperplane $S$ defining $\mathcal{G}$. Fix $S$ and find a null polarity whose application to the original arrangement of planes and points in ${\mathbb P}^3$ yields a new point-plane arrangement as follows.
The roles of points and planes get reversed: we now have a set of $m$ planes $\Pi'$ and a set of $n$ points $P'$, with the same number of incidences $|I(P,\Pi)|$. Take the dual arrangement, so that points become points again and planes become planes. However, no two lines of the same type arising in $\mathcal{G}\subset S$, after the procedure described in the beginning of this section is applied to the arrangement $(P',\Pi')$, will intersect. The last claim of Lemma \ref{tog} follows from Lemma \ref{lem}. \end{proof} Fixing the transverse hyperplane $S$ may be interesting for applications, when the affine part of the quadric $\mathcal{G}$ becomes, say, the Lie group ${SL}_2(\mathbb{F})$, with its standard embedding in $\mathbb{F}^4$. Suppose there are $n$ lines supported in a fixed $\mathcal{G}$. Each line in $\mathcal{G}$ is a line in $\mathcal{K}$ and therefore corresponds to a unique plane pencil of lines in the ``physical space'' ${\mathbb P}^3$, that is, a unique pair of $\alpha$- and $\beta$-planes intersecting along this line. I.e., there is a unique pair $(q,\pi(q))$, where the point $q$ lies in the plane $\pi(q)$. (Conversely, $\mathcal{G}$ viewed as a null polarity is defined by the skew-symmetric linear map $q\mapsto\pi(q)$, see \cite{PW}, Chapter 3.) Hence, given a family of $n$ lines in $\mathcal{G}$, the problem of counting their pair-wise intersections can be expressed as counting the number of incidences in $I(P,\Pi)$, where $P=\{q\}$ and $\Pi=\{\pi(q)\}$. Moreover, $|P|,|\Pi|=n$, for two different planes of the same type will never intersect $\mathcal{G}$ along the same line (that is, a null polarity is an isomorphism). Besides, if $k$ is the maximum number of lines in the intersection of $\mathcal{G}\subset {\mathbb P}^4$ with a ${\mathbb P}^3$, then the same $k$ stands for the maximum number of collinear points or planes, by Lemma \ref{lem}. We have established the following statement.
\begin{lemma}\label{convert} Suppose $\mathbb{F}$ is algebraically closed and not of characteristic $2$. Let $\mathcal L$ be a family of $n$ lines in $\mathcal{G}.$ Then there is an arrangement $(P,\Pi)$ of $n$ points and $n$ planes in ${\mathbb P}^3$, such that the number of pair-wise intersections of lines in $\mathcal L$ equals $|I(P,\Pi)|-n$. Moreover, there are two disjoint families of $n$ new lines in $\mathcal{G}$ each, such that the lines within each family are mutually skew, and the total number of incidences is $|I(P,\Pi)|-n$. \end{lemma} Note that the $-n$ comes from the fact that each $\pi(q)$ contains $q$. Lemma \ref{convert} and Theorem \ref{mish} have the following corollary. This fact also follows from the results in \cite{EH} after a projection argument. We present the proof along the lines of the exposition in this section, for it also gives an application of the formalism here. \begin{corollary}\label{intersections} The union of any $n=\Omega(p^2)$ straight lines in $G={SL}_2(\mathbb{F}_p)$ has cardinality $\Omega(p^3)$, that is, it takes up a positive proportion of $G$. \end{corollary} \begin{proof} The statement is trivial for small $p$, so let $p>2$. View lines in $G\subset \mathbb{F}_p^4$ as lines in $\mathcal{G}\subset {\mathbb P}^4$ over the algebraic closure of $\mathbb{F}_p$. Pass to a point-plane incidence problem in ${\mathbb P}^3$ using Lemma \ref{convert}, and then, by Lemma \ref{tog}, back to a line-line incidence problem in $\mathcal{G}.$ We may change $n$ to $cn$ to make Theorem \ref{mish} applicable. The value of the absolute constant $c$ may be further decreased to justify subsequent steps. By the inclusion-exclusion principle, one needs to show that the number of pair-wise intersections of lines is at most a fraction of $pn$. This would follow if one could apply the incidence bound \eqref{pups} with $m=n$ and, say, $k=\frac{p}{2}$.
By Lemma \ref{tog} the quantity $k$ is the maximum number of ``new lines'' in the intersection of $\mathcal{G}$ with a projective three-hyperplane. Observe that there are more than $\frac{p}{2}$ new lines in the intersection of $\mathcal{G}$ with a hyperplane if and only if there is the same number of ``old lines'' in the intersection of $G$ with an affine hyperplane in $\mathbb{F}_p^4$. Let us throw away from the initial set of lines in $G$ those lines contained in intersections of $G\subset \mathbb F_p^4$ with affine three-planes $H$ such that $H\cap G$ has more than $\frac{p}{2}$ lines. Either we have a positive proportion of lines left, and no more rich hyperplanes $H$, or we have had $\Omega(p)$ quadric surfaces $H\cap G$ in $G$, with at least $\frac{p}{2}$ lines in each. In the former case, if $c$ is small enough, we are done by \eqref{pups}. In the latter case, by the inclusion-exclusion principle applied within each surface, the union of the lines contained therein takes up a positive proportion of each $H\cap G$, i.e., has cardinality $\Omega(p^2)$. Since $H\cap H'\cap G$, for $H\neq H'$, is at most two lines, by the inclusion-exclusion principle the union of $\Omega(p)$ of them has cardinality $\Omega(p^3)$. \end{proof} \section{Proof of Theorem \ref{mish}} We use Lemma \ref{tog} to pass to the incidence problem between two disjoint line families $L_\alpha,L_\beta$ lying in $\mathcal{G}$, now using $m= |L_\alpha|$, $n= |L_\beta|$. Lines within each family are mutually skew. All we need on the technical side is to consider the case $m\geq n$ and adapt the strategy of the proof of Theorem \ref{gkt} to the three-quadric $\mathcal{G}$ instead of ${\mathbb P}^3$. The latter is done via a generic projection argument, and the rest of the proof follows the outline in the opening sections. We skip some easy intermediate estimates throughout the proof, since they have been worked out accurately, up to constants, in \cite{Ko}, Sections 3,4.
The key issue is that any finite line arrangement over an infinite field in higher dimension can be projected into three dimensions with the same number of incidences; this fact is also stated in \cite{Ko}. Our lines lie in ${\mathbb P}^4$, containing the quadric $\mathcal{G}$. A pair of skew lines defines a three-hyperplane $H_i$ in ${\mathbb P}^4$. This hyperplane is projected one-to-one onto a fixed three-hyperplane $H$ if and only if the centre of the projection $u\in{\mathbb P}^4$ does not lie in $H_i$. Since we are dealing with a finite number of pairs of skew lines and $\mathbb{F}$ is infinite, the set of $u$ such that the projection of the line arrangement onto the corresponding three-hyperplane $H$ acts one-to-one on the set of incidences is non-empty and Zariski open. \begin{theorem}\label{gkt2} Let $L_\alpha,L_\beta$ be two disjoint sets of, respectively, $m,n$ lines contained in the quadric $\mathcal{G}=\mathcal{K}\cap S$, where the hyperplane $S$ is not tangent to the Klein quadric $\mathcal{K}$. Suppose lines within each family are mutually skew. Assume that $m\geq n$ and that $\mathbb{F}$ is algebraically closed, with characteristic $p\neq 2$. Let $n\leq cp^2,$ for some absolute constant $c$. Then \begin{equation}\label{ibou} |I(L_\alpha,L_\beta)|=O\left( m\sqrt{n}+ km\right), \end{equation} where $k$ is the maximum number of lines in $L_\beta$ contained in the intersection of $\mathcal{G}\subset {\mathbb P}^4$ with a subspace ${\mathbb P}^3$ of ${\mathbb P}^4$. \end{theorem} \begin{proof} Following Guth and Katz, it is technically very convenient to use induction on $\min(m,n)$ and a probabilistic argument. The estimate $I=O( m\sqrt{n} )$ is true for all sufficiently small $m,n$, given a sufficiently large $O(1)$ value $C$ of the constant in the $O$-symbol, which we fix. We do not specify how large $C$ should be; however, Koll\'ar evaluates it explicitly, see \cite{Ko}.
For the induction assumption to work throughout, let us reset $n = \min(|L_\alpha|,|L_\beta|)$ and $m = \max(|L_\alpha|,|L_\beta|)$. The induction assumption will be used throughout the proof as the bound for incidences between sub-families of $(m',n')$ lines, with $n'$ sufficiently smaller than $n$, no matter what $m'$ is. This will enable us to exclude from consideration the incidences that some undesirable subsets of lines in $L_\beta$ account for, as long as they constitute a reasonably small fraction of $L_\beta$ itself. Suppose we have the smallest value of $n$ such that for some $m\geq n$ the main term in the right-hand side of \eqref{ibou} fails to do the job, that is, \begin{equation} |I(L_\alpha,L_\beta)| = Cm\sqrt{n},\label{contr}\end{equation} for some large enough constant $C$. We will show that this assumption implies the bound $I=O(km)$, independent of $C$, which will therefore finish the proof. Note that since the right-hand side of the assumption \eqref{contr} is linear in $m$, it implies, by the pigeonhole principle, that there is a subset $\tilde L_\alpha$ of $L_\alpha$ of some $\tilde m\leq m$ lines, with $\tilde m=O(n)$, such that $$|I(\tilde L_\alpha, L_\beta)| \geq C\tilde m\sqrt{n}.$$ We reset the notations $\tilde L_\alpha$ to $L_\alpha$ and $\tilde m$ back to $m$, but now $m=O(n)$, which is necessary for the next step. A large proportion of incidences must be supported on lines in $L_\alpha$ which are intersected not much less than average, say by at least $\frac{1}{4} C \sqrt{n}$ lines from $L_\beta$ each. Let us call this popular set $L'_\alpha$. We now delete lines from $L_\beta$ randomly and independently, with probability $1-\rho$, to be chosen. Let the random surviving subset of $L_\beta$ be denoted by $\tilde L_\beta$.
By the law of large numbers, the probability that an individual line in $L'_\alpha$ is met by lines from $\tilde L_\beta$ less than half the expected number of times is exponentially small in $n$, and so is $m$ times this probability, since now $m=O(n)$. Thus there is a realisation of $\tilde L_\beta \subset L_\beta$, of size close to the expected one, i.e., between $\frac{1}{2}\rho n$ and $2\rho n$, such that every line in $L'_\alpha$ meets at least, say, \begin{equation}\label{avrg}\frac{1}{8} C \rho \sqrt{n}\end{equation} lines in $\tilde L_\beta$. Our lines live in $\mathcal{G}\subset{\mathbb P}^4$, with homogeneous coordinates $(x_0:\ldots:x_4)$. By the projection argument preceding the formulation of Theorem \ref{gkt2}, the coordinates can be chosen in such a way that the lines in the union of the two families project one-to-one to lines in the $(x_1:\ldots:x_4)$-space, and skew lines remain skew. Let $Q$ be a nonzero homogeneous polynomial in $(x_1:\ldots:x_4)$ that vanishes on the projections of the lines in $\tilde L_\beta$ to the $(x_1:\ldots:x_4)$-space, so that it will also vanish on the lines in $\tilde L_\beta$. The degree $d$ of $Q$ can be taken as $O\left((\rho n)^{\frac{1}{2}}\right)$. This fact is well known, see e.g. the survey \cite{D1}. For completeness, we give a quick argument. Choose $t$ points on each of the projected lines from $\tilde L_\beta$, with or without repetitions. Let $X\subset{\mathbb P}^3$ be the corresponding set of at most $t|\tilde L_\beta|$ points. There is a nonzero homogeneous polynomial of degree $d= O\left( (t|\tilde L_\beta|)^{1/3} \right)$ vanishing on $X$. More precisely, it suffices to satisfy the inequality $\binom{d+3}{3}>|X|$ for the degree of the polynomial.
The left-hand side of the latter inequality is the dimension of the vector space of degree-$d$ homogeneous polynomials in four variables; if it is bigger than $|X|$, the evaluation map on $X$ has a nontrivial kernel, by the rank-nullity theorem. By construction of the point set $X$, the polynomial $Q$ has $t$ zeroes on each line from $\tilde L_\beta$, so in order to have it vanish identically on the union of these lines one must merely ensure that $t>d$. Hence the above claim for $d$. We choose the parameter $\rho$ so that the degree $d$ of $Q$ is smaller than the number of its zeroes on each line in $L_\alpha'$, which is at least \eqref{avrg}. I.e., $$\rho =O\left(\frac{1}{C^2}\right)<1,$$ and thus \begin{equation}\label{d1}d=O(\sqrt{\rho n}) =O\left( \frac{\sqrt{n}}{C}\right). \end{equation} Reduce $Q$ to the minimal product of irreducible factors. Denote by $\bar Z$ the zero set of the polynomial $Q$ in ${\mathbb P}^3$, defined by the $(x_1:\ldots:x_4)$ variables, and by $\bar L'_\alpha, \bar L'_\beta$ the projections of the corresponding line families. Let also $Z$ denote the zero set of the polynomial $Q$ in $\mathcal{G}\subset{\mathbb P}^4$. Recall that the projection has been chosen so that $|I(\bar L'_\alpha, \bar L'_\beta)|=|I(L'_\alpha, L'_\beta)|$ and lines in the same family still do not meet. In the sequel, when we speak of zero sets of factors of $Q$, we mean point sets in ${\mathbb P}^3$, in the $(x_1:\ldots:x_4)$ variables. It follows that all the lines in $\bar L'_\alpha$ are contained in $\bar Z$, for each supports more zeroes of $Q$ than the degree $d$. Every line from $\bar L_\beta$ that does not live in $\bar Z$ will intersect $\bar Z$ at most $d$ times.
The number of incidences these lines can create altogether is thus \begin{equation}O\left( C^{-1} n^{\frac{3}{2}} \right) = O\left( C^{-1} m\sqrt{n}\right),\label{transverse}\end{equation} which is too small in comparison with the supposedly large total number of incidences \eqref{contr}. Therefore, we may assume that, say, at least $\frac{1}{2} C m\sqrt{n}$ incidences are supported on lines in $\bar L'_\alpha$ and those lines from $\bar L_\beta$ that are also contained in $\bar Z$. Suppose the number of the latter lines is less than, say, $\frac{n}{16}$. This will contradict the induction assumption -- no matter how many lines $m'$ there are in $\bar L'_\alpha$. If $m'\geq n$, then the number of incidences, by the induction assumption, must be at most $Cm'\sqrt{n}/4$; if $m'<\frac{n}{16}$, it is at most $C n \sqrt{m'}/16<C m \sqrt{n}/16.$ Hence, there are at least $\frac{n}{16}$ lines from $\bar L_\beta$ in $\bar Z$, and we call the set of these lines $\bar L'_\beta$. To avoid taking further fractions of $n$, let us proceed assuming that $|\bar L'_\beta|=n$. We can repeat the transverse intersection incidence counting argument for the zero set of each irreducible factor of $Q$. Suppose the factor has degree $d'$. Then the number of incidences of lines in the zero set $\bar Z'$ of the factor with those not contained in $\bar Z'$ is at most $d' (m+n)$. Summing over the factors, we can use the right-hand side of \eqref{transverse} as the estimate for the total number of transverse incidences over all the irreducible factors of $Q$. We therefore proceed assuming that there are $\Omega( C m\sqrt{n})$ pairs of intersecting lines from the two families, each incidence occurring inside the zero set of some irreducible factor of $Q$.
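For the record, the count behind \eqref{transverse} is as follows: there are at most $n$ lines from $\bar L_\beta$ not contained in $\bar Z$, and each meets $\bar Z$, hence the lines of $\bar L'_\alpha$ inside it, in at most $d$ points, so by \eqref{d1} they create at most $$ nd = O\left(n\cdot \frac{\sqrt{n}}{C}\right) = O\left(C^{-1}n^{\frac{3}{2}}\right) $$ incidences; since $m=\Theta(n)$ at this stage, this is $O(C^{-1}m\sqrt{n})$, smaller by a factor of $C^{2}$ than the assumed lower bound \eqref{contr}.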
Invoking Salmon's Theorem \ref{Salmon}, we deduce that if $n>11d^2-24d$, and given that $d<p$ if the characteristic $p>0$, the zero set $\bar Z$ of the polynomial $Q$ must have a ruled factor. The latter inequality entails that almost 100\% of the lines in the $\beta$-family must lie in ruled factors. Indeed, we have $|\bar L'_\beta|=n$ lines in $Z$, and at most $11d^2=O(n/C^2)$ may lie in the union of non-ruled factors, provided that $d=O(\sqrt{n}/C)< p$, so that the constraint in Theorem \ref{Salmon} has been satisfied. Thus, by the induction assumption, we need not bother about what happens in non-ruled factors of $\bar Z$ and proceed, having redefined $n$ slightly one more time, so that now $n$ lines from $\bar L'_\beta$ lie in ruled factors of $\bar Z$. They still have to account for $\Omega(Cm\sqrt{n})$ incidences with the lines from $\bar L'_\alpha$, for all the lines in $\bar L'_\beta$ that have been disregarded so far could only account for a small percentage of the total number of incidences. A single ruled factor cannot be a cone, for no more than two of our lines meet at a point. However, a ruled factor of degree $d'>2$, which is not a cone, can contribute, by Lemma \ref{rss}, at most $n(d'-2)+2n+(m+n)d'$ incidences. These three summands come, respectively, from mutual intersections of generators, intersections of generators with special lines -- see the discussion from Lemma \ref{rss} through the end of Section \ref{ruled} -- and intersections of lines within the factor with lines outside the factor. Once again, summing over irreducible ruled factors with $d'>2$, we arrive at the right-hand side of \eqref{transverse} again -- this is too small in comparison with \eqref{contr}. Hence $Q$ must contain one or more irreducible factors $Q'$ of degree at most $2$, that is, the zero set of each such $Q'$ is an irreducible doubly-ruled quadric or a plane in ${\mathbb P}^3$.
If the union of these low degree factors contains only a small proportion of the lines from $\bar L'_\beta$, we once again invoke the induction assumption and contradict \eqref{contr}. Let us reset $n$ to its original value. The argument up to now has established that if \eqref{contr} is true, we have at least $cn$ lines from $\bar L'_\beta$ lying in the union of the zero sets of low degree -- meaning degree at most two -- factors of $Q$, creating at least $cCm\sqrt{n}$ incidences with lines from $\bar L'_\alpha$ inside these factors. By the pigeonhole principle, there is a low degree factor $Q'$, whose zero set contains at least $c\frac{n}{d}=\Omega(C\sqrt{n})$ lines from $\bar L'_\beta$. Moreover, we can disregard whatever happens inside the union of low degree factors, each containing fewer than some $cC\sqrt{n}$ lines from $\bar L'_\beta$, by the induction assumption. The contribution of plane factors of $Q$ is negligible, for each plane in ${\mathbb P}^3$ may contain only one line from each (projected) family. Thus there is a rich degree $2$ irreducible factor $Q'$, which defines a doubly ruled quadric surface $\bar Z'$ in the $(x_1:\ldots:x_4)$ variables. $\bar Z'$ supports at least two lines from $\bar L'_\alpha$ in one ruling, for otherwise the total number of incidences within all such rich quadrics would be $O(C^{-1}n^{\frac{3}{2}})$. These two lines are crossed by all lines in the second ruling, that is, by $\Omega(C\sqrt{n})$ lines from the family $\bar L'_\beta.$ It remains to bring in the parameter $k$, the maximum number of lines from $L_\beta$ per intersection of $\mathcal{G}\subset {\mathbb P}^4$ with a three-hyperplane. Let $Z'=\mathcal{G}\cap (\bar Z'\times {\mathbb P}^1)$, that is, the intersection of the quadric $\mathcal{G}$ with the quadric which is the zero set of $Q'$ in ${\mathbb P}^4$.
Lifting lines from $\bar Z'$ to $Z'$ preserves incidences, so we arrive at the following figure in $Z'\subset \mathcal{G}$: a pair of skew lines from $L_\alpha$ crossed by $\Omega(C\sqrt{n})$ lines from $L_\beta$. The two lines from $L_\alpha$ determine a three-hyperplane $H$, which also contains all the $\Omega(C\sqrt{n})$ lines in question from $L_\beta$. By the assumption of the theorem, $H$ may contain at most $k$ lines from $L_\beta$. This means $C=O\left(\frac{k}{\sqrt{n}}\right)$. Substituting this into \eqref{contr} yields the inequality $ |I(L_\alpha,L_\beta)|=O(km).$ This completes the proof of Theorem \ref{gkt2}. \end{proof} Theorem \ref{gkt2}, together with the discussion preceding it in Sections \ref{setup} and \ref{geom} and its outcomes, stated as Lemmas \ref{lem} and \ref{tog}, yields the claim of our main Theorem \ref{mish} directly. \section{Applications of Theorem \ref{mish}} This section has three main parts. First, we develop an application of Theorem \ref{mish} to the problem of counting vector products defined by a plane point set, extending to positive characteristic the estimates obtained over ${\mathbb R}$ via the Szemer\'edi-Trotter theorem. Then we use that application in a specific example to show that in a certain parameter regime Theorem \ref{mish} is tight. Finally, we use Theorem \ref{mish} to consider a pinned version of the Erd\H os distance problem on the number of distinct distances determined by a set of $N$ points in $\mathbb{F}^3$, where we also get a new bound in positive characteristic, which is not too far off the best known bound over the reals. Before we do this, we state a slightly stronger version of Theorem \ref{mish}, which is more tuned for applications.
The need for it comes from the fact that sometimes, when questions of geometric and arithmetic combinatorics are reformulated as incidence problems, there are certain geometrically identifiable subsets of the incidence set that should be excluded from the count, for they correspond to some, in a sense, ``pathological'' scenario. We encountered this in \cite{RR}, where the Guth-Katz approach to the Erd\H os distance problem was applied to Minkowski distances in the real plane. In order to get the lower bound for the number of distinct Minkowski distances, one claims an upper bound on the number of pairs of congruent, that is, equal Minkowski length, line segments with endpoints in the given plane point set. However, it is easy to construct an example where the number of pairs of zero Minkowski length segments is forbiddingly large. Hence, the analysis in \cite{RR} considered only nonzero Minkowski distances, and had to elucidate how this fact gets reflected in the corresponding incidence problem for lines in three dimensions. Discounting pairs of line segments of zero Minkowski length was equivalent to discounting pair-wise line intersections within a set of two specific planes in $3D$; these planes could violate the assumption of Theorem \ref{gkt} about the maximum number of coplanar lines. Such a restricted application of the Guth-Katz approach was further generalised in \cite{RS}, which identified more $2D$ combinatorial problems where the tandem of incidence Theorems 2.10 and 2.11 from \cite{GK} worked ``as a hammer'', if used in the restricted form, that is, discounting pairwise line intersections within certain ``bad'' planes, as well as at certain ``bad'' points.
Technically, it is Theorem 2.11 from \cite{GK} whose restricted version required most of the work in \cite{RR}; adapting Theorem 2.10 took only a few lines of argument, and this is all that is essentially needed here regarding Theorem \ref{mish}, where we wish to discount point-plane incidences supported on a certain set of forbidden lines in ${\mathbb P}^3$. Suppose we have a finite set of lines $L^*$ in ${\mathbb P}^3$. Define the restricted set of incidences between a point set $P$ and set of planes $\Pi$ as \begin{equation}\label{inss} I^*(P,\Pi) = \{(q,\pi)\in P\times \Pi: q\in \pi \mbox{ and } \forall l\in L^*, \,q\not \in l \mbox{ or } l \not\subset\pi\}. \end{equation} \addtocounter{theorem}{-10} \renewcommand{\thetheorem}{\arabic{theorem}*} \begin{theorem} \label{mishh} Let $P, \Pi$ be sets of points and planes in ${\mathbb P}^3$, of cardinalities respectively $m,n$, with $m\geq n$. If $\mathbb{F}$ has positive characteristic $p$, suppose that $p\neq 2$ and $n=O(p^2)$. For a finite set of lines $L^*$, let $k^*$ be the maximum number of planes incident to any single line not in $L^*$. Then \begin{equation}\label{pupss} |I^*(P,\Pi)|=O\left( m\sqrt{n}+ k^*m\right).\end{equation} \end{theorem} \begin{proof} We return to Section \ref{geom} to map the incidence problem between points and planes to one between line families $L_\alpha,L_\beta$ in $\mathcal G\subset {\mathbb P}^4$. By Lemmas \ref{lem}, \ref{tog} the set of lines $L^*$ now displays itself as a set ${\mathcal H}^*$ of three-hyperplanes in ${\mathbb P}^4$. One comes to Theorem \ref{gkt2}, only now aiming to claim \eqref{pupss} as the estimate for the cardinality of the restricted incidence set $I^*(L_\alpha,L_\beta)$, which discounts pair-wise line intersections within the intersections of $\mathcal{G}$ with each $h\in \mathcal H^*$, with $k^*$ replacing $k$. The proof of Theorem \ref{gkt2} is modified as follows.
Since the number of bad hyperplanes is finite, one can choose coordinates so that the intersection of each $h\in \mathcal H^*$ with $\mathcal{G}$ is defined by a quadratic polynomial $Q_h$ in $(x_1:\ldots:x_4)$. The argument of Theorem \ref{gkt2} is copied, except that one assumes \eqref{contr} about the quantity $|I^*(P,\Pi)|$ and, having reduced the problem to counting incidences only inside factors of a polynomial $Q$ of degree satisfying \eqref{d1}, does not take into account incidences in common factors of $Q$ and $\prod_{h\in \mathcal H^*} Q_h$. As a result, the modified assumption \eqref{contr} forces one to have a rich irreducible degree $2$ factor of $Q$, which is not forbidden. This corresponds, within Theorem \ref{gkt2}, to $\Omega(C\sqrt{n})$ lines from the family $L_\beta$ lying inside the intersection of $\mathcal{G}$ with some three-hyperplane $H\not\in \mathcal H^*$. In terms of Theorem \ref{mishh} this means collinearity of $\Omega(C\sqrt{n})$ planes in $\Pi$ along some line not in $L^*$. This establishes the estimate \eqref{pupss}.\end{proof} \renewcommand{\thetheorem}{\arabic{theorem}} \addtocounter{theorem}{9} Throughout the rest of the section, $\mathbb{F}$ is a field of odd characteristic $p$. \subsection{On distinct values of bilinear forms}\label{vpr} Established sum-product type inequalities over fields with positive characteristic have been weaker than over ${\mathbb R}$, where one can take advantage of the order structure and use geometric, rather than additive, combinatorics. See, e.g., \cite{E0}, \cite{So}, \cite{KR}, \cite{BJ}, \cite{KS} for some key methods and ``world records''. The closely related geometric problem discussed in this section is one of lower bounds on the cardinality of the set of values of a non-degenerate bilinear form $\omega$, evaluated on pairs of points from a set $S$ of $N$ points in the plane, not all collinear.
One may conjecture the bound $\Omega(N)$, possibly modulo factors growing slower than any power of $N$. This may clearly hold in full generality in positive characteristic only if $N=O(p)$. The problem was claimed to have been solved over ${\mathbb R}$ up to the factor of $\log N$ in \cite{IRR}, $\omega$ being the cross or dot product. However, the proof was flawed. The error came down to ignoring the presence of nontrivial weights or multiplicities, as they appear below. The best bound over ${\mathbb R},\mathbb{C}$ that the erratum \cite{IRRE} sets is $\Omega(N^{9/13})$, for a skew-symmetric $\omega$. The bound $\Omega(N^{2/3})$ for any non-degenerate form $\omega$ follows just from applying the Szemer\'edi-Trotter theorem to bound the number of realisations of any particular nonzero value of $\omega$. In this section we prove the following theorem. \begin{theorem}\label{spr} Let $\omega$ be a non-degenerate symmetric or skew-symmetric bilinear form and let the set $S\subseteq \mathbb{F}^2$ of $N$ points not be supported on a single line. Then \begin{equation}|\omega(S)| := |\{\omega(s,s'):\,s,s'\in S\}| = \Omega\left[\min \left(N^{\frac{2}{3}},p\right)\right].\label{worst}\end{equation} If $S$ has a subset $S'$ of $N'<p$ points, lying in distinct directions from the origin, then $|\omega(S)|\gg N'.$ \end{theorem} \begin{proof} From now on we assume that $S$ does not have more than $N^{\frac{2}{3}}$ points on a single line through the origin, for otherwise, since $S$ also contains a point outside this line, the estimate \eqref{worst} follows. This assumption will be seen not to affect the second claim of the theorem. Suppose also, without loss of generality, that $S$ does not contain the origin, nor does it have points on the two coordinate axes.
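Let us indicate why \eqref{worst} follows if some line $\ell$ through the origin supports $M\geq N^{\frac{2}{3}}$ points of $S$, say in the skew-symmetric case: pick $s_0\in S\setminus \ell$. If $\omega(s_0,s)=\omega(s_0,s'')$ for $s,s''\in \ell\cap S$, then $$\omega(s_0,s-s'')=0,$$ and for a non-degenerate skew-symmetric form on $\mathbb{F}^2$ this forces $s-s''$ to be proportional to $s_0$; since $s-s''$ is also proportional to the direction of $\ell$, which $s_0$ avoids, we get $s=s''$. Thus $s\mapsto \omega(s_0,s)$ is injective on $\ell\cap S$, whence $|\omega(S)|\geq M\geq N^{\frac{2}{3}}$. The symmetric case is handled similarly, distinguishing whether or not the direction of $\ell$ is isotropic.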
We may assume that $\mathbb{F}$ is algebraically closed, in which case one may take a symmetric form $\omega$ as given by the $2\times 2$ identity matrix and a skew-symmetric one by the canonical symplectic matrix. We consider the latter situation only; the former case is similar. One can also replace $S$ with its union with $S^\perp=\{(-q_2,q_1):\,(q_1,q_2)\in S\}$ and repeat the forthcoming argument. Consider the equation \begin{equation}\label{eng} \omega(s,s')=\omega(t,t')\neq 0, \qquad (s,s',t,t')\in S\times S\times S\times S. \end{equation} Assuming that $\omega$ represents wedge products, this equation can be viewed as counting the number of incidences between the set of points $P\subset {\mathbb P}^3$ with homogeneous coordinates $(s_1:s_2:t_1:t_2)$ and planes in a set $\Pi$ defined by covectors $(s_2':-s_1':-t_2':t_1')$. However, both points and planes are weighted. Namely, the weight $w(p)$ of a point $p=(s:t)$ is the number of points $(s,t)\in \mathbb{F}^4$ which are projectively equivalent, that is, lie on the same line through the origin. The same applies to planes. The total weight of both sets of points and planes is $W=N^2$. As in the case of the Szemer\'edi-Trotter theorem, the weighted variant of the estimate of Theorem \ref{mish} gets worse as the maximum possible weight grows. The number of solutions of \eqref{eng}, plus the count of quadruples yielding zero values of $\omega$, is the number of weighted incidences \begin{equation}\label{weightin} I_w := \sum_{q\in P, \pi\in \Pi} w(q)w(\pi) \delta_{q\pi}, \end{equation} where $\delta_{q\pi}$ is $1$ when $q\in \pi$ and zero otherwise.
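For concreteness, the point-plane correspondence unwinds as follows: the point $q$ with homogeneous coordinates $(s_1:s_2:t_1:t_2)$ lies on the plane $\pi$ with covector $(s_2':-s_1':-t_2':t_1')$ if and only if $$ s_1 s_2' - s_2 s_1' - t_1 t_2' + t_2 t_1' = 0, \quad \mbox{that is,} \quad \omega(s,s')=\omega(t,t'), $$ where $\omega(u,v)=u_1v_2-u_2v_1$ is the wedge product.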
Consider two cases: {\em (i) $S$ only has points in $O(N^{2/3})$ distinct directions through the origin; (ii) there exists $S'\subset S$ with exactly one point in each of $\Omega(N^{2/3})$ distinct directions.} To deal with (i) we need the following weighted version of Theorem \ref{mish}. \begin{theorem}\label{wmish} Let $P, \Pi$ be weighted sets of points and planes in ${\mathbb P}^3$, both with total weight $W$. Suppose the maximum weights are bounded by $w_0>1$. Let $k$ be the maximum number of collinear points, counted without weights. Suppose $\frac{W}{w_0}=O(p^2)$, where $p>2$ is the characteristic of $\mathbb{F}$. Then the number $I_w$ of weighted incidences is bounded as follows: \begin{equation}\label{pupsweight} I_w=O\left( W\sqrt{w_0W}+ k w_0 W\right).\end{equation} The same estimate holds for the quantity $I^*_w$, which discounts weighted incidences along a certain set $L^*$ of lines in ${\mathbb P}^3$, with the quantity $k^*$, denoting the maximum number of points incident to a line not in $L^*$, replacing $k$ in estimate \eqref{pupsweight}. \end{theorem} \begin{proof} It is a simple weight rearrangement argument, the same as, e.g., in \cite{IKRT} apropos of the Szemer\'edi-Trotter theorem. Pick a subset $P'\subseteq P$, containing the $n=O\left(\frac{W}{w_0}\right)$ richest points in terms of non-weighted incidences. Assign to each one of the points in $P'$ the weight $w_0$ and delete the rest of the points in $P$, so $P'$ now replaces $P$. The number of weighted incidences will thereby not decrease. Now of all planes pick a subset $\Pi'$ of the same number $n$ of the richest ones, in terms of their non-weighted incidences with $P'$. Assign once again the weight $w_0$ to each plane in $\Pi'$. We now replace $P,\Pi$ with $P',\Pi'$ -- the sets of respectively $n$ points and planes, for which we apply Theorem \ref{mish}, counting each incidence $w_0^2$ times.
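In detail, an application of \eqref{pups} to $P',\Pi'$, with both cardinalities equal to $n=O(W/w_0)$ and each incidence counted $w_0^2$ times, gives $$ I_w = O\left( w_0^2\left( n\sqrt{n} + kn \right)\right) = O\left( w_0^{\frac{1}{2}}W^{\frac{3}{2}} + k w_0 W \right) = O\left( W\sqrt{w_0 W} + k w_0 W\right). $$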
Note that we may still have $k$ collinear points in $P'$ or planes in $\Pi'$. This yields \eqref{pupsweight}. For the last claim of Theorem \ref{wmish} we use Theorem \ref{mishh} instead of Theorem \ref{mish}.\end{proof} Returning to the proof of Theorem \ref{spr}, suppose we are in case (i). We will apply the $I_w^*$ estimate of Theorem \ref{wmish} to the weighted arrangement of planes and points in ${\mathbb P}^3$ representing \eqref{eng}. Let us show that the quantity $k^*$ can be bounded as $O(N^{\frac{2}{3}})$, after it becomes clear what the set $L^*$ of forbidden lines is. The quantity $k$ is the maximum number of collinear points in the set $S\times S\subset \mathbb{F}^4$, viewed projectively. Suppose $k\geq N^{\frac{2}{3}}$. This means we have a two-plane through the origin in $\mathbb{F}^4$, which contains points of $S\times S$ in at least $N^{\frac{2}{3}}$ directions in this plane. If this two-plane projects on the first two coordinates in $\mathbb{F}^4$ one-to-one, then $S$ itself has points in $N^{\frac{2}{3}}$ directions. But in case (i) this is not the case. We now define the finite set $L^*$ of forbidden lines in ${\mathbb P}^3$ as the two-planes in $\mathbb{F}^4$ that are Cartesian products of pairs of lines through the origin in $\mathbb{F}^2$, each supporting a point of $S$. Hence $k^*$ is the maximum number of points incident to any other line in ${\mathbb P}^3$. If the two-plane through the origin in $\mathbb{F}^4$ projects onto each coordinate two-plane $\mathbb{F}^2$ (a copy of the plane containing $S$) as a line through the origin, it is a Cartesian product of two lines $l_1$ and $l_2$ through the origin in $\mathbb{F}^2$. Such a plane may contain a point $(s_1,s_2,t_1,t_2)\in \mathbb{F}^4$, or be incident to a three-hyperplane through the origin in $\mathbb{F}^4$ defined by the covector $(s_2', -s_1', -t_2', t_1')$, only if the lines $l_1,l_2$ contain points of $S$.
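Quantitatively, here $W=N^2$; the maximum weight is $w_0=O(N^{\frac{2}{3}})$, since the weight of a point $(s:t)$ is at most the number of points of $S$ on the line through the origin and $s$, which is $O(N^{\frac{2}{3}})$ by the assumption made at the beginning of the proof; and $k^*=O(N^{\frac{2}{3}})$, as just shown. Thus the two terms of \eqref{pupsweight} become $$ W\sqrt{w_0 W} = N^2\sqrt{O\left(N^{\frac{2}{3}}\right) N^2} = O\left(N^{\frac{10}{3}}\right), \qquad k^* w_0 W = O\left(N^{\frac{2}{3}}\cdot N^{\frac{2}{3}}\cdot N^2\right) = O\left(N^{\frac{10}{3}}\right). $$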
Applying the $I_w^*$-version of estimate \eqref{pupsweight}, we therefore obtain \begin{equation}\label{weste} I^*_w=O\left(N^{\frac{10}{3}} + N^{\frac{10}{3}}\right). \end{equation} It remains to show that point-plane incidences along the lines in $L^*$ correspond to zero values of the form $\omega$ in \eqref{eng}. By definition, a line in $L^*$ is represented by a pair $(l_1,l_2)$ of lines through the origin in $\mathbb{F}^2$. If the $\mathbb{F}^4$-point $(s,t)=(s_1,s_2,t_1,t_2)$ lies in the two-plane which is the Cartesian product $l_1\times l_2$, this means $s\in l_1$, $t\in l_2$. If a three-hyperplane through the origin in $\mathbb{F}^4$, defined by the covector $(s_2', -s_1', -t_2', t_1')$, contains both lines $l_1,l_2$, this means $s'\in l_1$, $t'\in l_2$. Hence $\omega(s,s')=\omega(t,t')=0.$ So, if case (i) takes place, the bound \eqref{worst} follows from \eqref{eng} and \eqref{weste} by the Cauchy-Schwarz inequality. Observe that Theorem \ref{wmish} applies under the constraint $N\leq cp^{\frac{3}{2}}$ for some absolute $c$. In particular, when $N= \lfloor cp^{\frac{3}{2}}\rfloor$, it yields $I_w=O(p^5)$, hence one has $\Omega(N^{\frac{2}{3}})= \Omega(p)$ distinct values of the form $\omega$. For $N\geq cp^{\frac{3}{2}}$ we do no more than retain this estimate. Finally, if case (ii) takes place, we apply Theorem \ref{mish} to the set $S'$. Now planes and points bear no weights other than $1$, and the above argument about collinear planes and points applies. Namely, one can set $k=N'$, and zero values of $\omega$ may no longer be excluded. Then equation \eqref{eng} with variables in $S'$ alone has $O({N'}^3)$ solutions, and the last claim of Theorem \ref{spr} follows by the Cauchy-Schwarz inequality. \end{proof} It is easy to adapt the proof of Theorem \ref{spr} to the special case when $S=A\times B$, for then one can set $w_0 = \min(|A|,|B|)$.
This results in the following corollary. There is also a more economical way of deriving the following statement from Theorem \ref{mish}; see \cite{RRS}, Corollary 4. \begin{corollary}\label{hbk} Let $A,B\subseteq \mathbb{F}$, with $|A|\geq|B|$. Then \begin{equation} |AB\pm AB|=\Omega\left[\min\left(|A|\sqrt{|B|},p\right)\right].\label{mebd}\end{equation} \end{corollary} \subsection{Tightness of Theorem \ref{mish}} \label{example} We use the considerations of the previous section, looking at the number of distinct dot products of pairs of vectors in the set $$ S=\{(a,b):\,a,b\in[1,\ldots, n],\; \gcd(a,b)=1\}. $$ The set can be thought of as lying in ${\mathbb R}^2$ or $\mathbb{F}_p^2$, for $p\gg n^2$. Clearly, $S$ has $N=\Theta(n^2)$ elements. But now there are no weights in excess of $1$, in the sense of the discussion in the preceding section. So we can apply the argument from case (ii) within the proof of Theorem \ref{spr} and get an $O(N^3)$ bound for the number of solutions $E$ of the equation, with the standard dot product, \begin{equation} s\cdot s' = t\cdot t', \qquad (s,s',t,t')\in S\times S\times S\times S. \label{integers}\end{equation} Note that zero dot products can only contribute $O(N^2)$. On the other hand, the same, up to constants, bound for $E$ from below follows by the Cauchy-Schwarz inequality. Indeed, $x=s\cdot s' $ in equation \eqref{integers} assumes integer values in $[1,\ldots,4n^2]$. If $n(x)$ is the number of realisations of $x$, one has $$E=\sum_x n^2(x) \geq \frac{1}{4n^2} \left(\sum_x n(x)\right)^2 \gg n^6\gg N^3.$$ \subsection{On distinct distances in $\mathbb{F}^3$} Once again, in this section $\mathbb{F}$ is an algebraically closed field of positive characteristic $p>2$.
The Erd\H os distance conjecture is open in ${\mathbb R}^3$, where it claims that a set $S$ of $N$ points determines $\Omega(N^{\frac{2}{3}})$ distinct distances\footnote{The conjecture is often formulated more cautiously, that there are $\Omega^*(N^{\frac{2}{3}})$ distinct distances, the symbol $\Omega^*$ swallowing terms growing slower than any power of $N$.}. The best known bound in ${\mathbb R}^3$ is $\Omega(N^{.5643})$, due to Solymosi and Vu \cite{SV}. We prove the bound $\Omega(\sqrt{N})$ for the positive characteristic pinned version of the problem, i.e., for the number of distinct distances attained from some point $\boldsymbol s\in S$, for $N=O(p^2)$, assuming that $S$ is not contained in a single semi-isotropic plane, as described below. Define the distance set $$\Delta(S) = \{\|\boldsymbol s-\boldsymbol t\|^2:\,\boldsymbol s,\boldsymbol t \in S\},$$ with the notation $\boldsymbol s=(s_1,s_2,s_3)$, $\|\boldsymbol s\|^2 = s_1^2+s_2^2+s_3^2.$ Let us call a pair $(\boldsymbol s,\boldsymbol t)$ a {\em null pair} if $\|\boldsymbol s-\boldsymbol t\|=0$. In positive characteristic, the space $\mathbb{F}^3$ (even if $\mathbb{F}=\mathbb{F}_p$) always has a cone of {\em isotropic directions} from the origin, that is, $\{\boldsymbol \omega\in \mathbb{F}^3:\,\boldsymbol \omega\cdot \boldsymbol \omega=0\}$, with respect to the standard dot product. See \cite{HI}, in particular Theorem 2.7 therein, for explicit calculations of isotropic vectors and their orthogonal complements over $\mathbb{F}_p$. The equation for the isotropic cone through the origin in $\mathbb{F}^3$ is clearly \begin{equation}\label{cone} x^2+y^2+z^2=0. \end{equation} It is a degree two ruled surface, whose ruling is not a regulus, see Section \ref{rgl}. If $\boldsymbol e_1$ is an isotropic vector through the origin, its orthogonal complement $ \boldsymbol e_1^\perp$ is a plane containing $\boldsymbol e_1$.
Let $\boldsymbol e_2$ be another basis vector in this plane, orthogonal to $\boldsymbol e_1$. Then $\boldsymbol e_2$ is not isotropic, for otherwise the whole plane $\boldsymbol e_1^\perp$ would be isotropic. This is impossible, for equation \eqref{cone} is irreducible. We call the plane $\boldsymbol e_1^\perp$ or its translate {\em semi-isotropic.} The fact that $\boldsymbol e_2$ is not isotropic implies that there are no {\em nontrivial null triangles}, that is, triangles with three zero length sides, unless the three vertices lie on an isotropic line. With this terminology, there exist only {\em trivial} null triangles in $\mathbb{F}^3$. In a semi-isotropic plane one can have $N=kl$ points, with $1\leq k\leq l$, with just $O(k)$ distinct pairwise distances: place $l$ points on each of $k$ parallel lines in the direction of $\boldsymbol e_1$, whose $\boldsymbol e_2$-intercepts are in arithmetic progression. To deal with zero distances we use the following lemma. \begin{lemma} \label{easy} Let $T$ be a set of $K$ points on the level set $$Z_R=\{(x,y,z):\,x^2+y^2+z^2=R\}.$$ For sufficiently large $K\gg1$, either $\Omega(K)$ points in $T$ are collinear, or a positive proportion of $(\boldsymbol t,\boldsymbol t')\in T\times T$ are not null pairs. \end{lemma} \begin{proof} First note that even if $R\neq 0$, when $Z_R$ is a doubly-ruled quadric, it may well be ruled by isotropic lines. Indeed, representing lines in $\mathbb{F}^3$ by Pl\"ucker vectors $(\boldsymbol \omega:\boldsymbol v)$ in the Klein quadric $\mathcal{K}$, defined by the relation \eqref{Klein}, i.e., $\boldsymbol \omega\cdot\boldsymbol v =0$, isotropic lines are cut out by the quadric $\boldsymbol \omega\cdot\boldsymbol \omega=0$, while a regulus is a conic curve cut out from $\mathcal{K}$ by a two-plane.
If the intersection of the three varieties in question is non-degenerate, it is at most four points, that is, there are at most four isotropic lines per regulus. However, take the two-plane as $\boldsymbol v =\lambda \boldsymbol \omega$, for some $\lambda\neq 0$. (The case $\lambda=0$ corresponds to the isotropic cone through the origin.) Write $\boldsymbol v=ad(\boldsymbol q)\,\boldsymbol \omega$, for some point $\boldsymbol q\in \mathbb{F}^3$ lying on the line in question, where $ad(\boldsymbol q)$ is a skew-symmetric matrix, see Footnote \ref{ads}. This yields the eigenvalue equation $\det(ad(\boldsymbol q) - \lambda I) =0,$ which means that $\boldsymbol q$ satisfies $$ \lambda^2+ \|\boldsymbol q\|^2=0, $$ that is, $\boldsymbol q\in Z_{-\lambda^2},$ the point set of a regulus of isotropic lines. Turning to the actual proof of the lemma, consider a simple undirected graph $G$ with the vertex set $T$, where there is an edge connecting distinct vertices $\boldsymbol t$ and $\boldsymbol t'$ if $(\boldsymbol t,\boldsymbol t')$ is a null pair. Suppose $G$ is close to the complete graph, that is, $G$ has at least $.99 K(K-1)/2$ edges. Note that $K'\leq K$ points of $T$, lying on an isotropic line, yield a clique of size $K'$ in $G$. Suppose there is no clique of size, say, $K'\geq.01K$, for otherwise we are done. Then in each clique of size $K'$ one can delete at most $(K'+1)^2/4$ edges, turning it into a bipartite graph, whereupon there are no triangles left within that clique. After that one is left with no triangles in $G$ corresponding to trivial null triangles in $Z_R$. However, if $K'\leq .01K$, the number of remaining edges is still greater than $K^2/4$, since clearly the former cliques had no edges in common. By Tur\'an's theorem there is a triangle in what is left of $G$, and it corresponds to a nontrivial null triangle in $Z_R$. This contradiction finishes the proof of Lemma \ref{easy}. \end{proof} We are now ready to prove the last theorem in this paper.
\begin{theorem}\label{erd} A set $S$ of $N$ points in $\mathbb{F}^3$, such that the points of $S$ do not all lie in a single semi-isotropic plane, determines $\Omega[\min(\sqrt{N},p)]$ distinct pinned distances, i.e., distances from some fixed $\boldsymbol s\in S$ to other points of $S$. \end{theorem} \begin{proof} First off, let us restrict $S$, if necessary, to a subset of at most $cp^2$ points, where $c$ is some small absolute constant, so as to be able to use Theorem \ref{mish} later. We keep using the notation $S$ and $N$. Furthermore, we assume that $S$ has at most $\sqrt{N}$ collinear points, or there is nothing to prove: even if $\sqrt{N}$ collinear points lie on an isotropic line, $S$ has another point $\boldsymbol s$ outside this line, such that the plane containing $\boldsymbol s$ and the line is not semi-isotropic. It is easy to see that then there are $\Omega(\sqrt{N})$ distinct distances from $\boldsymbol s$ to the points on the line. Let $E$ be the number of solutions of the equation \begin{equation}\label{energy} \|\boldsymbol s-\boldsymbol t\|^2 = \|\boldsymbol s-\boldsymbol t'\|^2\neq 0,\qquad (\boldsymbol s,\boldsymbol t,\boldsymbol t')\in S\times S\times S.\end{equation} Let us show that either $S$ contains a line with $\Omega(\sqrt{N})$ points or \begin{equation}\label{clm} E=O(N^{\frac{5}{2}}). \end{equation} We claim, by the pigeonhole principle and Lemma \ref{easy}, that assuming $E\gg N^{5/2}$ implies that either there is a line with $\Omega(\sqrt{N})$ points, or $E=O(E^*)$, where $E^*$ is the number of solutions of the equation \begin{equation}\label{energystar} \|\boldsymbol s-\boldsymbol t\|^2 = \|\boldsymbol s-\boldsymbol t'\|^2\neq 0,\qquad (\boldsymbol s,\boldsymbol t,\boldsymbol t')\in S\times S\times S:\;\|\boldsymbol t-\boldsymbol t'\|\neq 0.
\end{equation} Indeed, the quantity $E$ counts the number of equidistant pairs of points from each $\boldsymbol s\in S$ and sums over $\boldsymbol s$. Therefore, a positive proportion of $E$ is contributed by points $\boldsymbol s$ and level sets $Z_R(\boldsymbol s)=\{\boldsymbol t\in \mathbb{F}^3:\, \|\boldsymbol s-\boldsymbol t\|=R\}$, such that $Z_R(\boldsymbol s)$ supports $\Omega(\sqrt{N})$ points of $S$. By Lemma \ref{easy}, either there is a line with $\Omega(\sqrt{N})$ points, or a positive proportion of the pairs of distinct $\boldsymbol t,\boldsymbol t'\in Z_R(\boldsymbol s)$ is non-null. This establishes the claim in question. Now observe that to evaluate the quantity $E^*$, for each pair $(\boldsymbol t,\boldsymbol t')$ we take the plane through the midpoint of the segment $[\boldsymbol t\,\boldsymbol t']$, normal to the vector $\boldsymbol t-\boldsymbol t'$, and need to count the points $\boldsymbol s$ incident to this plane. The plane in question does not contain $\boldsymbol t$ or $\boldsymbol t'$. We arrive at an incidence problem $(S,\Pi)$ between $N$ points and a family of planes, but the planes have weights in the range $1,\ldots,N$, for the same plane can bisect up to $N/2$ segments $[\boldsymbol t \,\boldsymbol t']$, {\em provided that} $(\boldsymbol t,\boldsymbol t')$ is not a null pair. That is, given the plane, for each $\boldsymbol t$ there is at most one $\boldsymbol t'$ such that the plane bisects $[\boldsymbol t \,\boldsymbol t']$. Thus the number $m$ of distinct planes is $\Omega(N)$ and at most $N^2$, the maximum weight per plane is $N$, and the total weight of the planes is $W=N^2$. It is immediate to adapt the formula (\ref{pups}) to the case of planes with weights. Note that the number of distinct planes is not less than the number of points, so in the formula \eqref{pups}, the notation $m$ will now pertain to planes, $n$ to points, and $k$ to the maximum number of collinear points.
Since the estimate (\ref{pups}) is linear in $m$, the case of weighted planes and non-weighted points arises by replacing $m$ with $N^2$, $n$ with $N$, and $k$ with $\sqrt{N}$, for otherwise, once again, there is nothing to prove. Theorem \ref{mish} now applies for $N=O(p^2)$ and yields the estimate \eqref{clm}. Theorem \ref{erd} then follows from \eqref{clm} by the Cauchy-Schwarz inequality. In particular, when $N=cp^2$ for some absolute $c$, we get $\Omega(p)$ distinct pinned distances. If $N\geq cp^2$, we simply retain this estimate. \end{proof} \end{document}
\begin{document} \title{A statistical perspective of sampling scores for linear regression} \author{Siheng Chen$^{1,2}$, Rohan Varma$^1$, Aarti Singh$^2$, Jelena Kova\v{c}evi\'c$^{1,3}$ \thanks{The authors gratefully acknowledge support from the NSF through awards 1130616, 1421919, IIS-1247658, IIS-1252412 and AFOSR YIP FA9550-14-1-0285. Due to the lack of space, the full proofs of results appear in~\cite{ChenVSK:15g}. } \\ $^1$ Dept. of ECE, $^2$ Dept. of Machine Learning, $^3$ Dept. of BME, \\ Carnegie Mellon University, Pittsburgh, PA, USA} \date{} \maketitle \begin{abstract} In this paper, we consider the statistical problem of learning a linear model from noisy samples. Existing work has focused on approximating the least squares solution by using leverage-based scores as an importance sampling distribution. However, no finite-sample statistical guarantees and no computationally efficient optimal sampling strategies have been proposed. To evaluate the statistical properties of different sampling strategies, we propose a simple yet effective estimator, which is amenable to theoretical analysis and useful in multitask linear regression. We derive the exact mean square error of the proposed estimator for any given sampling scores. By minimizing the mean square error, we propose the optimal sampling scores for both the estimator and the predictor, and show that they are influenced by the noise-to-signal ratio. Numerical simulations match the theoretical analysis well. \end{abstract} \section{Introduction} In many engineering problems, it is expensive to measure and store a large set of samples.
Motivated by this, there has been a great deal of work on developing effective sampling strategies for a variety of matrix-based problems, including compressed sensing~\cite{Candes:06, Donoho:06}, adaptive sampling for signal recovery and detection~\cite{TroppLDRB:10, HauptCN:11, DavenportMNW:15}, adaptive sampling for matrix approximation and completion~\cite{KrishnamurthyS:13, PatelGDMB:15,WangS:15a}, and many others. At the same time, motivated by computational efficiency, statistically effective sampling strategies have been developed for least-squares approximation~\cite{DrineasMM:06}, least absolute deviations regression~\cite{ClarksonDMMMW:13} and low-rank matrix approximation~\cite{MahoneyD:09}. One way to develop a sampling strategy is to design a data-dependent importance sampling distribution from which to sample the data. A widely used sampling distribution for selecting rows and columns of the input matrix is given by the leverage scores, defined formally in Section 2. Typically, the leverage scores are computed approximately~\cite{DrineasMMW:12,ClarksonDMMMW:13}, or, alternatively, a random projection is used to precondition by approximately uniformizing them~\cite{AvronMT:10,AilonC:10, DrineasMMS:11,ClarksonDMMMW:13,MengSM:14}. A detailed discussion of this approach can be found in the recent review on randomized algorithms for matrices and matrix-based data problems~\cite{Mahoney:11}. Even though leverage-based sampling distributions are widely used, for many problems it is unclear why they might work well from a statistical perspective.
For the problem of low-rank matrix approximation,~\cite{WangS:15b} provided an extensive empirical evaluation of several sampling distributions and showed that iterative norm sampling outperforms leverage score based sampling empirically.~\cite{YangZJZ:15} showed that for low-rank matrix approximation, sampling based on the square root of the leverage scores always statistically outperforms uniform sampling, and outperforms leverage score based sampling when the leverage scores are nonuniform. Further, the authors proposed a constrained optimization problem resulting in optimal sampling probabilities, which lie between the leverage scores and their square roots. For the problem of linear regression, the optimal sampling strategies for estimation and prediction are known to be obtained via the $A$- and $G$-optimality criteria, respectively~\cite{Pukelsheim:06}, but the resulting sampling strategies are combinatorial. Computationally efficient optimal sampling strategies are still unclear.~\cite{MaMY:15} showed that, based on the sampled least squares, neither leverage score based sampling nor uniform sampling outperforms the other: the leveraging based estimator can suffer from a large variance when the leverage scores are nonuniform, while uniform sampling is less vulnerable to small leverage scores.~\cite{ZhuMMY:15} only analyzed the asymptotic behavior of the sampled least squares, and the corresponding optimal sampling remains unclear. In this paper, we propose an estimator with a computationally efficient sampling strategy that also comes with closed-form, finite-sample guarantees on performance. We derive the exact mean square error (MSE) of this estimator and show the optimal sampling distribution for both the estimator and the predictor.
The results of the statistical analysis can be summarized as follows: (1) the proposed estimator is unbiased for any sampling distribution; (2) the optimal sampling distribution is influenced by the noise-to-signal ratio; (3) the optimal sampling distribution involves a tradeoff between the leverage scores and the square root of the leverage scores. We further provide an empirical evaluation of the proposed algorithm on synthetic data under various settings. This empirical evaluation clearly shows the efficacy of the derived optimal sampling scores for sampling large-scale data in order to learn a linear model from noisy samples. Beyond the simple statistical analysis, the proposed sampled projection estimator is useful for multitask linear regression, because the computational cost is lower when we estimate multiple coefficient vectors based on the same data matrix. This is crucial for many applications in sensor networks and electrical systems~\cite{ChenVSK:15a}. For example, traffic speeds are time-evolving data supported on an urban traffic network. Based on the same urban traffic network, it is potentially efficient to use the proposed estimator to estimate the traffic speeds at all the intersections from a small number of samples in every period of time. The proposed estimator can also be applied to estimate corrupted measurements in sensor networks~\cite{ChenSMK:14}, uncertain voltage states in power grids~\cite{WengNI:13}, and many others. The main contributions are: (1) we derive the exact MSE of a natural modification of the least squares estimator; and (2) we derive the optimal sampling distributions for the estimator and the predictor, and show that they are influenced by the noise-to-signal ratio and related to the leverage scores.
\section{Background} We consider a simple linear model \begin{eqnarray*} \mathbf{y} \ = \ \X \beta^{(0)} + \epsilon, \end{eqnarray*} where $\mathbf{y} \in \ensuremath{\mathbb{R}}^n$ is a response vector, $\X \in \ensuremath{\mathbb{R}}^{n \times p}$ is a data matrix ($n > p$), $\beta^{(0)} \in \ensuremath{\mathbb{R}}^p$ is a coefficient vector, and $\epsilon \in \ensuremath{\mathbb{R}}^n$ is a zero-mean noise vector with independent and identically distributed entries and covariance matrix $\sigma^2 I$. Given $\X$ and $\mathbf{y}$, the task is to estimate $\beta^{(0)}$. The unknown coefficient vector $\beta^{(0)}$ is usually estimated via least squares as \begin{eqnarray} \label{eq:ls} \widehat{\beta} \ = \ \arg \min_{\beta} \left\| \mathbf{y} - \X \beta \right\|_2^2 \ = \ (\X^T \X)^{-1} \X^T \mathbf{y} \ = \ \X^{\dagger} \mathbf{y}, \end{eqnarray} where $\X^{\dagger} = (\X^T \X)^{-1} \X^T \in \ensuremath{\mathbb{R}}^{p \times n}$. The predicted response vector is then $\widehat{\mathbf{y}} = \Hm \mathbf{y}$, where $\Hm = \X \X^{\dagger}$ is the hat matrix. The diagonal elements of $\Hm$ are the leverage scores, that is, $\Hm_{i, i}$ is the leverage of the $i$th sample. The statistical leverage scores are widely used for detecting outliers and influential data~\cite{HoaglinW:78,VellemanW:81,DrineasMMW:12}. In some applications, it is expensive to sample the entire response vector. Some previous works consider sampling algorithms that compute approximate solutions to the overconstrained least squares problem~\cite{DrineasMM:06, Mahoney:11, DhillonLFU:13, MaMY:15}. These algorithms choose a small number of rows of $\X$ and the corresponding elements of $\mathbf{y}$, and use least squares on the samples to estimate $\beta^{(0)}$.
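As an aside (our illustration, not part of the paper), the leverage scores $\Hm_{i,i}$ can be computed either from the hat matrix directly or, more stably, from a thin QR decomposition of the data matrix:

```python
# Sketch: leverage scores as diag(H) with H = X (X^T X)^{-1} X^T,
# and equivalently via thin QR: if X = QR, then H = Q Q^T, so
# H_ii = ||i-th row of Q||^2. X here is generic synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))          # n x p data matrix, n > p

H = X @ np.linalg.solve(X.T @ X, X.T)      # hat matrix
lev = np.diag(H)                           # leverage scores

Q, _ = np.linalg.qr(X)                     # thin QR, Q is 100 x 5
lev_qr = np.sum(Q**2, axis=1)

assert np.allclose(lev, lev_qr)            # the two computations agree
assert np.isclose(lev.sum(), 5.0)          # trace(H) = p
```

The QR route avoids forming the $n \times n$ hat matrix explicitly.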
Formally, let $\mathcal{M} = (\mathcal{M}_1, \cdots, \mathcal{M}_{m})$ be the sequence of sampled indices, called the~\emph{sampling set}, with $|\mathcal{M}| = m$ and $\mathcal{M}_i \in \{1, \cdots, n\}$. A sampling matrix $\Psi$ is a linear mapping from $\ensuremath{\mathbb{R}}^n$ to $\ensuremath{\mathbb{R}}^m$ that describes the process of sampling with replacement, \begin{equation*} \label{eq:Psi} \Psi_{i,j} = \left\{ \begin{array}{rl} 1, & j = \mathcal{M}_i;\\ 0, & \mbox{otherwise}. \end{array} \right. \end{equation*} When the $j$th element of $\mathbf{y}$ is chosen in the $i$th random trial, then $\mathcal{M}_i = j$ and $\Psi_{i,j} = 1$. The goal is, {\bf given $\X$, to design the sampling operator $\Psi$ to obtain samples $\Psi \mathbf{y}$, and then to estimate $\beta^{(0)}$}. Here we focus on experimental design of the sampling operator, that is, the sampling strategy needs to be designed in advance of collecting any $y_i$. The choice of samples is an important degree of freedom when studying the corresponding quality of approximation. In this paper, we employ random sampling with an underlying probability distribution to choose samples. Let $\{ \pi_i \}_{i=1}^n$ be a probability distribution, where $\pi_i$ denotes the probability of choosing the $i$th sample in each random trial. We consider two choices of the probability distribution:~\emph{uniform sampling}, where the sample indices are chosen from $\{1, 2, \cdots, n\}$ independently and uniformly at random; and~\emph{sampling score-based sampling}, where the sample indices are chosen from an importance sampling distribution that is proportional to a sampling score computed from the data matrix\footnote{The terms sampling distribution and sampling scores mean the same thing in this paper.}. A widely-used sampling score is the leverage scores of the data matrix.
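The sampling operator just described can be sketched as follows (our illustration; the sizes $n=6$, $m=4$ and the response values are arbitrary):

```python
# Sketch of the sampling matrix Psi: m indices drawn with replacement
# from {0, ..., n-1} with probabilities pi; Psi_{i, M_i} = 1, so that
# Psi y simply picks out the sampled entries of y.
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 4
pi = np.full(n, 1.0 / n)                       # uniform sampling distribution
M = rng.choice(n, size=m, replace=True, p=pi)  # sampled indices M_1, ..., M_m

Psi = np.zeros((m, n))
Psi[np.arange(m), M] = 1.0

y = np.arange(10.0, 10.0 + n)                  # a stand-in response vector
assert np.array_equal(Psi @ y, y[M])           # sampling = row selection
```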
Given the samples, one way to estimate $\beta^{(0)}$ is by solving a weighted least squares problem: \begin{eqnarray} \label{eq:SampleLS} \widehat{\beta} & = & \arg \min_{\beta} \left\| \Dd \Psi \mathbf{y} - \Dd \Psi \X \beta \right\|_2^2 \nonumber \\ & = & \left( \Dd \Psi \X \right)^{\dagger} \Dd \Psi \mathbf{y}, \end{eqnarray} where $\Dd \in \ensuremath{\mathbb{R}}^{m \times m}$ is a diagonal rescaling matrix with $\Dd_{i,i} = 1/\sqrt{m \pi_j}$ when $\Psi_{i,j} = 1$. This is called~\emph{sampled least squares} (SampleLS)~\cite{MaMY:15}. The computational cost of taking the pseudo-inverse of $\Dd \Psi \X \in \ensuremath{\mathbb{R}}^{m \times p}$ is $O(2p^2m + p^3)$. The inverse term involves the sampling operator, but the computation is much cheaper than taking the pseudo-inverse of $\X \in \ensuremath{\mathbb{R}}^{n \times p}$, which takes $O(2p^2n + p^3)$ ($n \geq m \geq p$). However, it is hard to derive the exact MSE and the optimal sampling scores of SampleLS.~\cite{MaMY:15} shows that, based on SampleLS, neither leverage score based sampling nor uniform sampling dominates the other from a statistical perspective. \section{Proposed Method} To gain a deeper statistical understanding of this task, we propose a similar but simpler algorithm for the same task, for which it is easy to derive the exact MSE and the corresponding optimal sampling scores.
\subsection{Problem Formulation} Similarly to~\eqref{eq:SampleLS}, we estimate $\beta^{(0)}$ by solving the following problem: \begin{eqnarray} \label{eq:proj} \widehat{ \beta } & = & \arg \min_{\beta} \left\| \Psi^T \Dd^2 \Psi \mathbf{y} - \X \beta \right\|_2^2 \nonumber \\ & = & \X^{\dagger} \Psi^T \Dd^2 \Psi \mathbf{y}, \end{eqnarray} where $\Dd \in \ensuremath{\mathbb{R}}^{m \times m}$ is the same diagonal rescaling matrix as in~\eqref{eq:SampleLS}. We call~\eqref{eq:proj}~\emph{sampled projection} (SampleProj). This is akin to zero-filling the unobserved entries of $\mathbf{y}$ and rescaling the observed entries to make the estimator unbiased. Compared to ordinary least squares, SampleProj does not improve computational efficiency, because it still requires taking the pseudo-inverse of $\X \in \ensuremath{\mathbb{R}}^{n \times p}$ and the computation involves the factor $n$; however, it is still useful when taking samples is expensive. Compared to SampleLS, this algorithm is more appealing when we aim to estimate multiple $\beta^{(0)}$s based on the same data matrix $\X$, because the inverse term does not involve the sampling operator. \subsection{Statistical Analysis} We obtain the exact mean squared error and the optimal sampling distribution for SampleProj. Our main contributions based on SampleProj can be summarized as follows: \begin{itemize} \item the estimator is unbiased for any sampling scores (Lemma~\ref{lem:unbias}); \item a closed-form, finite-sample expression for the MSE of the estimator and the predictor (Theorem~\ref{thm:MSE}); and \item analytic optimal sampling scores for the estimator and the predictor (Theorem~\ref{thm:optimal}).
\end{itemize} Due to the lack of space, the full proofs of the results appear in~\cite{ChenVSK:15g}. \begin{myLem} \label{lem:unbias} The estimator $\widehat{ \beta}$ in SampleProj is an unbiased estimator of $\beta^{(0)}$, that is, $ \mathbb{E} \left[ \widehat{ \beta } \right] \ = \ \beta^{(0)}. $ \end{myLem} \begin{myLem} \label{lem:var} The covariance of the estimator $\widehat{ \beta}$ in SampleProj has the following property: \begin{eqnarray*} && {\rm Tr} ( {\rm Covar} \left[ \widehat{ \beta} \right] ) \ = \ \mathbb{E} \left\| \widehat{ \beta} - \mathbb{E} \left[ \widehat{ \beta} \right] \right\|^2 \\ & = & \sum_{i=1}^{p} \left( \sum_{l=1}^n \frac{1}{ m \pi_l } (\X^{\dagger})_{i,l}^2 \left( ( \X \beta^{(0)} )_l^2 + \sigma^2 \right) - \frac{1}{m} ( \beta^{(0)})_i^2 \right), \end{eqnarray*} where $\sigma^2$ is the variance of the noise. \end{myLem} Combining the results of Lemmas~\ref{lem:unbias} and~\ref{lem:var}, we obtain the exact MSE of both the estimator and the predictor. \begin{myThm} \label{thm:MSE} Let $\widehat{ \beta}$ be the solution of SampleProj with sampling distribution $\{ \pi_i \}_{i=1}^n$. The mean squared error of the estimator $\widehat{ \beta}$ is \begin{eqnarray*} \mathbb{E} \left\| \widehat{ \beta} - \beta^{(0)} \right\| ^2 \ = \ {\rm Tr} \left( \X^{\dagger} \W_{\rm C} (\X^{\dagger})^T \right) - \frac{ 1}{m} \left\| \beta^{(0)} \right\|_2^2, \end{eqnarray*} where $(\W_{\rm C})_{l,l} = \left( ( \X \beta^{(0)})_l^2 + \sigma^2 \right)/( m \pi_l )$.
The mean squared error of the predictor $\X \widehat{ \beta}$ is \begin{eqnarray*} && \mathbb{E} \left\| \X \widehat{ \beta} - \X \beta^{(0)} \right\| ^2 \\ & = & {\rm Tr} \left( \Hm \W_{\rm C} \right) - \frac{1}{m} \left\| \X \beta^{(0)} \right\|_2^2. \end{eqnarray*} \end{myThm} We next optimize the mean squared errors and obtain the optimal sampling scores. \begin{myThm} \label{thm:optimal} The optimal sampling scores to minimize the mean squared error of the estimator $\widehat{ \beta}$ are \begin{eqnarray*} \pi_l \propto \sqrt{ \left( \X (\X^T \X)^{-2} \X^T \right)_{l,l} \left( (\X \beta^{(0)})_l^2 + \sigma^2 \right) }; \end{eqnarray*} the optimal sampling scores to minimize the MSE of the predictor $\X \widehat{ \beta}$ are \begin{eqnarray*} \pi_l \propto \sqrt{ \Hm_{l,l} \left( (\X \beta^{(0)})_l^2 + \sigma^2 \right) }. \end{eqnarray*} \end{myThm} Theorem~\ref{thm:optimal} shows that the optimal sampling scores depend on the signal strength and the noise. In practice, we cannot access $\X \beta^{(0)}$ and we need to approximate the ratio between each $(\X \beta^{(0)})_l$ and $\sigma^2$. For active sampling, we can collect feedback to approximate each signal coefficient $(\X \beta^{(0)})_l$; for experimentally designed sampling, we approximate it beforehand. One way is to use the Cauchy-Schwarz inequality to bound $(\X \beta^{(0)})_l$: \begin{eqnarray*} | (\X \beta^{(0)})_l | & = & | \mathbf{x}_l^T \beta^{(0)} | \ \leq \ \left\| \mathbf{x}_l \right\|_2 \left\| \beta^{(0)} \right\|_2.
\end{eqnarray*} An upper bound on the MSE of the estimator is then \begin{eqnarray*} \mathbb{E} \left\| \widehat{ \beta} - \beta^{(0)} \right\| ^2 \ \leq \ {\rm Tr} \left( \X^{\dagger} \W_{\rm \bar{C}} (\X^{\dagger})^T \right) - \frac{ 1}{m} \left\| \beta^{(0)} \right\|_2^2, \end{eqnarray*} where $(\W_{\rm \bar{C}})_{l,l} = \left( \left\| \mathbf{x}_l \right\|_2^2 \left\| \beta^{(0)} \right\|_2^2 + \sigma^2 \right)/( m \pi_l )$. The corresponding sampling scores are \begin{eqnarray} \label{eq:oss_esti} \pi_l & \propto & \sqrt{ \left( (\X^{\dagger})^T \X^{\dagger} \right)_{l,l} \left( \left\| \mathbf{x}_l \right\|_2^2 \left\| \beta^{(0)} \right\|_2^2 + \sigma^2 \right) } \nonumber \\ \nonumber & = & \sqrt{ \left( (\X^{\dagger})^T \X^{\dagger} \right)_{l,l} \left\| \beta^{(0)} \right\|_2^2 \left( \left\| \mathbf{x}_l \right\|_2^2 + \frac{ \sigma^2 } { \left\| \beta^{(0)} \right\|_2^2 } \right) } \\ & \propto & \sqrt{ \left( \X (\X^T \X)^{-2} \X^T \right)_{l,l} \left( \left( \X\X^T \right)_{l,l} + {\rm NSR} \right) }, \end{eqnarray} where ${\rm NSR} = \sigma^2/\left\| \beta^{(0)} \right\|_2^2$. When the column vectors of $\X$ are orthonormal, $\X^T \X$ is the identity matrix, and the sampling scores~\eqref{eq:oss_esti} are between the leverage scores and the square root of the leverage scores. \begin{myCorollary} \label{cor:lev_esti} Let the column vectors of $\X$ be orthonormal. When ${\rm NSR} \rightarrow +\infty$, the sampling scores~\eqref{eq:oss_esti} are the square root of the leverage scores, that is, $\pi_l = \sqrt{\Hm_{l,l}}$, for all $l$. When ${\rm NSR} \rightarrow 0$, the sampling scores~\eqref{eq:oss_esti} are the leverage scores, that is, $\pi_l = \Hm_{l,l}$, for all $l$.
\end{myCorollary} An upper bound on the MSE of the predictor is then \begin{eqnarray*} \mathbb{E} \left\| \X \widehat{ \beta} - \X \beta^{(0)} \right\| ^2 \ \leq \ {\rm Tr} \left( \Hm \W_{\rm \bar{C}} \right) - \frac{1}{m} \left\| \X \beta^{(0)} \right\|_2^2, \end{eqnarray*} where $(\W_{\rm \bar{C}})_{l,l} = \left( \left\| \mathbf{x}_l \right\|_2^2 \left\| \beta^{(0)} \right\|_2^2 + \sigma^2 \right)/( m \pi_l )$. The corresponding optimal sampling scores are \begin{eqnarray} \label{eq:oss_pred} \pi_l & \propto & \sqrt{ \Hm_{l,l} \left( \left\| \mathbf{x}_l \right\|_2^2 \left\| \beta^{(0)} \right\|_2^2 + \sigma^2 \right) } \nonumber \\ & \propto & \sqrt{ \Hm_{l,l} \left( \left( \X\X^T \right)_{l,l} + {\rm NSR} \right) }. \end{eqnarray} When the NSR goes to infinity, the optimal sampling scores are always the square root of the leverage scores, for any $\X$. When the column vectors of $\X$ are orthonormal, the sampling scores~\eqref{eq:oss_pred} coincide with~\eqref{eq:oss_esti} and lie between the leverage scores and the square root of the leverage scores. \begin{myCorollary} \label{cor:lev_pred} When ${\rm NSR} \rightarrow +\infty$, the sampling scores~\eqref{eq:oss_pred} are the square root of the leverage scores, that is, $\pi_l = \sqrt{\Hm_{l,l}}$, for all $l$. When the column vectors of $\X$ are orthonormal and ${\rm NSR} \rightarrow 0$, the sampling scores~\eqref{eq:oss_pred} are the leverage scores, that is, $\pi_l = \Hm_{l,l}$, for all $l$. \end{myCorollary} \section{Empirical Evaluation} In this section, we validate the proposed algorithm and statistical analysis on synthetic data.
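Before turning to the simulations, the finite-sample claims can be verified exactly on a toy instance (our own sketch, separate from the paper's evaluation): applying $\X^{\dagger}$ to the zero-filled responses rescaled by $\mathrm{counts}_l/(m\pi_l)$ and enumerating all $n^m$ with-replacement outcomes reproduces the unbiasedness of Lemma~\ref{lem:unbias} and, in the noiseless case $\sigma=0$, the trace-covariance formula of Lemma~\ref{lem:var}:

```python
# Exhaustive check of Lemma unbias / Lemma var on a tiny noiseless instance:
# enumerate all n^m sampling outcomes, weight each by its probability, and
# compare E[beta_hat] and Tr(Covar[beta_hat]) with the stated formulas.
import itertools
import numpy as np

n, p, m = 4, 2, 2
rng = np.random.default_rng(0)
X = rng.standard_normal((n, p))
beta0 = np.array([1.0, -0.5])
y = X @ beta0                                     # noiseless: sigma = 0
pi = np.array([0.1, 0.2, 0.3, 0.4])               # sampling distribution
Xp = np.linalg.pinv(X)

mean = np.zeros(p)
second = 0.0
for M in itertools.product(range(n), repeat=m):   # all sampling outcomes
    prob = float(np.prod(pi[list(M)]))
    counts = np.bincount(M, minlength=n)          # sample multiplicities
    beta_hat = Xp @ (counts / (m * pi) * y)       # SampleProj estimate
    mean += prob * beta_hat
    second += prob * float(np.sum(beta_hat**2))

assert np.allclose(mean, beta0)                   # unbiased (Lemma unbias)

trace_cov = second - float(np.sum(beta0**2))      # E||b||^2 - ||E[b]||^2
lemma = sum(
    sum(Xp[i, l]**2 * (X @ beta0)[l]**2 / (m * pi[l]) for l in range(n))
    - beta0[i]**2 / m
    for i in range(p)
)
assert np.isclose(trace_cov, lemma)               # matches Lemma var
```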
\subsection{Synthetic Dataset} \label{sec:data} We generate a $1000 \times 20$ data matrix $\X$ from a multivariate $t$-distribution with 1 degree of freedom and covariance matrix $\Sigma$, whose elements are $\Sigma_{i,j} = 2 \times 0.5^{|i-j|}$. The leverage scores of this matrix are nonuniform. This is the same as $T_1$ in~\cite{MaMY:15}. We then generate a $20 \times 1$ coefficient vector $\beta^{(0)}$, whose elements are drawn from the uniform distribution $\mathcal{U}(0, 1)$. We test both the noiseless and noisy cases. In the noiseless case, the response vector is $\mathbf{y} = \X \beta^{(0)}$; in the noisy case, the response vector is $\mathbf{y} = \X \beta^{(0)} + \epsilon$, where $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$. We vary $\sigma$ over $5, 25, 50, 75, 100$. The corresponding ratios between $\left\| \epsilon \right\|_2^2$ and $\left\| \mathbf{y} \right\|_2^2$ are around $0.7\%, 15\%, 40\%, 60\%, 72\%$, and the corresponding values of the NSR, $\sigma^2/\left\| \beta^{(0)} \right\|_2^2$, are around $4, 100, 400, 800, 1600$. \subsection{Experimental Setup} \label{sec:setup} We consider the following two key questions: \begin{itemize} \item do different sampling scores make a difference? \item does noise make a difference? \end{itemize} To answer these questions, we test the proposed SampleProj with different sampling scores in both the noiseless and noisy cases. For each test, we compare $5$ sampling scores: uniform sampling as Uniform (blue), leverage scores as Lev (red), square root of the leverage scores as sql-Lev (orange), sampling scores for the estimator~\eqref{eq:oss_esti} as opt-Est (purple), and sampling scores for the predictor~\eqref{eq:oss_pred} as opt-Pred (green). For opt-Est and opt-Pred, we use the true noise-to-signal ratios. All the results are averaged over 500 independent runs.
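The interpolation stated in Corollary~\ref{cor:lev_esti} can also be checked numerically (our sketch, assuming an orthonormal $\X$ and extreme NSR values substituted into~\eqref{eq:oss_esti}):

```python
# Sketch: the normalized scores of eq:oss_esti approach the leverage scores
# as NSR -> 0 and their square roots as NSR -> infinity, for orthonormal X.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((200, 5)))   # orthonormal columns
lev = np.sum(Q**2, axis=1)                           # leverage scores H_ll

def scores_est(X, nsr):
    XtX_inv = np.linalg.inv(X.T @ X)
    G = X @ XtX_inv @ XtX_inv @ X.T                  # X (X^T X)^{-2} X^T
    s = np.sqrt(np.diag(G) * (np.diag(X @ X.T) + nsr))
    return s / s.sum()                               # normalize to a distribution

low = scores_est(Q, 1e-12)                           # NSR -> 0 regime
high = scores_est(Q, 1e12)                           # NSR -> infinity regime
assert np.allclose(low, lev / lev.sum(), atol=1e-4)
assert np.allclose(high, np.sqrt(lev) / np.sqrt(lev).sum(), atol=1e-4)
```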
We measure the quality of estimation by \begin{equation*} {\rm err_{Est}} \ = \ \frac{ \left\| \widehat{\beta} - \beta^{(0)} \right\|_2 }{ \left\| \beta^{(0)} \right\|_2 }, \end{equation*} where $\beta^{(0)} $ is the ground truth and $\widehat{\beta}$ is an estimator, and we measure the quality of prediction by \begin{equation*} {\rm err_{Pred}} \ = \ \frac{ \left\| \X \widehat{\beta} - \X \beta^{(0)} \right\|_2 }{ \left\| \X \beta^{(0)} \right\|_2 }. \end{equation*} \subsection{Results} Figure~\ref{fig:proj_t_t1_est} compares the estimation error of SampleProj in both the noiseless and noisy scenarios as a function of sample size. We see that, as expected, the sampling scores for the estimator~\eqref{eq:oss_esti} outperform all other scores in both the noiseless and noisy cases. Also, leverage scores perform well in the noiseless case, and the square root of the leverage scores performs well in the noisy case, which matches the result in Corollary~\ref{cor:lev_esti}. \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.48\columnwidth]{figures/proj_t1_n0_est.eps} & \includegraphics[width=0.48\columnwidth]{figures/proj_t1_n2_est.eps} \\ {\small (a) Noiseless.} & {\small (b) Noisy $\sigma = 40$.} \\ \end{tabular} \end{center} \caption{\label{fig:proj_t_t1_est} Comparison of estimation error for the noiseless and noisy cases of SampleProj in Task 1 as a function of sample size.} \end{figure} Figure~\ref{fig:proj_t_t1_pred} compares the prediction error for the noiseless and noisy cases of SampleProj as a function of sample size.
We see that the sampling scores for the predictor~\eqref{eq:oss_pred} result in the best performance in both the noiseless and noisy cases; leverage scores perform better than the square root of the leverage scores in the noiseless case. From Corollary~\ref{cor:lev_pred}, the square root of the leverage scores should perform better than the leverage scores in a high-noise case; thus, we suspect that the noise level here is not high enough to see the trend. \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.48\columnwidth]{figures/proj_t1_n0_pred.eps} & \includegraphics[width=0.48\columnwidth]{figures/proj_t1_n2_pred.eps} \\ {\small (a) Noiseless.} & {\small (b) Noisy $\sigma = 40$.} \\ \end{tabular} \end{center} \caption{\label{fig:proj_t_t1_pred} Comparison of prediction error for the noiseless and noisy cases of SampleProj as a function of sample size.} \end{figure} To study how noise influences the results, given 200 samples, we show the estimation errors and the prediction errors of SampleProj as a function of noise level. Figure~\ref{fig:t_t1_est} (a) shows that the sampling scores for the estimator~\eqref{eq:oss_esti} consistently outperform the other scores. Leverage scores are better than the square root of the leverage scores when the noise level is small, and the square root of the leverage scores catches up with the leverage scores as the noise level increases, which matches Corollary~\ref{cor:lev_esti}. Figure~\ref{fig:t_t1_est} (b) shows that the sampling scores for the predictor~\eqref{eq:oss_pred} consistently outperform the other scores in terms of prediction error; leverage scores are better than the square root of the leverage scores when the noise level is small, and the square root of the leverage scores catches up with the leverage scores as the noise level increases, which matches Corollary~\ref{cor:lev_pred}.
\begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.48\columnwidth]{figures/proj_t1_est.eps} & \includegraphics[width=0.48\columnwidth]{figures/proj_t1_pred.eps} \\ {\small (a) Estimation error.} & {\small (b) Prediction error.} \\ \end{tabular} \end{center} \caption{\label{fig:t_t1_est} Comparison of estimation error and prediction error of SampleProj as a function of noise level.} \end{figure} Now we can answer the two questions posed in Section~\ref{sec:setup}. Based on SampleProj, the proposed sampling scores for the estimator consistently outperform the other sampling scores in terms of the estimation error, and the proposed sampling scores for the predictor consistently outperform the other sampling scores in terms of the prediction error. Leverage-score-based sampling is better in the noiseless case, but sampling based on the square roots of the leverage scores is better in the noisy case. Note that much work in theoretical computer science does not address noise and mostly focuses on least squares~\cite{DrineasMM:06, BoutsidisD:09, AvronMT:10}; hence, from the algorithmic perspective, leverage score sampling was considered optimal. In the experiments, we use the true NSRs for the optimal sampling scores for the estimator and the predictor, which is impractical. In practice, for low-noise cases, we set the NSR to zero and sample proportionally to $ \sqrt{ \left( \X (\X^T \X)^{-2} \X^T \right)_{l,l} (\X \X^T)_{l,l} }$ for the estimator and $\sqrt{ \Hm_{l,l} (\X \X^T)_{l,l} }$ for the predictor; for high-noise cases, we should sample proportionally to $ \sqrt{ \left( \X (\X^T \X)^{-2} \X^T \right)_{l,l} }$ for the estimator and $\sqrt{ \Hm_{l,l} }$ for the predictor. In general, when $\X$ is non-orthonormal, these sampling scores are different from the leverage scores and the square root of the leverage scores.
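The practical, NSR-free surrogates for the estimator scores described above can be computed directly; a minimal NumPy sketch (function name ours):

```python
import numpy as np

def practical_estimator_scores(X, high_noise=False):
    """Surrogates for the opt-Est scores when the NSR is unknown:
    low noise  -> pi_l ~ sqrt((X (X^T X)^{-2} X^T)_ll (X X^T)_ll),
    high noise -> pi_l ~ sqrt((X (X^T X)^{-2} X^T)_ll)."""
    G = np.linalg.inv(X.T @ X)
    a = np.einsum('ij,ij->i', X @ (G @ G), X)   # (X (X^T X)^{-2} X^T)_ll
    if high_noise:
        scores = np.sqrt(a)
    else:
        b = np.einsum('ij,ij->i', X, X)         # (X X^T)_ll
        scores = np.sqrt(a * b)
    return scores / scores.sum()
```

For a design with orthonormal columns, the low-noise surrogate reduces to the leverage scores and the high-noise surrogate to their square roots, consistent with Corollary~\ref{cor:lev_esti}.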
\section{Conclusions} We consider the problem of learning a linear model from noisy samples from a statistical perspective. Existing work on sampling for least-squares approximation has focused on using leverage-based scores as an importance sampling distribution. However, it is hard to obtain the precise MSE of the sampled least squares estimator. To understand the importance sampling distributions, we propose a simple yet effective estimator, called SampleProj, to evaluate the statistical properties of sampling scores. The proposed SampleProj is appealing for theoretical analysis and multitask linear regression. The main contributions are (1) we derive the exact MSE of SampleProj with a given sampling distribution; and (2) we derive the optimal sampling scores for the estimator and the predictor, and show that they are influenced by the noise-to-signal ratio. The numerical simulations show that the empirical performance is consistent with the proposed theory. We have derived the optimal sampling strategy for a specific estimator, but identifying the optimal estimator with a computationally efficient sampling strategy remains an open direction. \bibliographystyle{IEEEbib} \bibliography{bibl_jelena} \appendix \section{Appendices} \subsection{Proof of Theorem~\ref{thm:MSE}} We first show the unbiasedness of the estimator, as stated in Lemma~\ref{lem:unbias}.
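The unbiasedness argument below can also be checked by simulation. The following sketch implements the reweighted estimator used in the proof, drawing $m$ indices i.i.d.\ from $\{\pi_l\}$ and reweighting each sampled response by $1/(m\pi_l)$; the function name is ours.

```python
import numpy as np

def sample_proj(X, y, pi, m, rng):
    """SampleProj sketch: draw m indices i.i.d. (with replacement)
    from pi and form beta_hat = sum_j Xdag[:, M_j] * y[M_j] / (m pi_{M_j})."""
    Xdag = np.linalg.pinv(X)                 # X^dagger
    idx = rng.choice(len(y), size=m, p=pi)
    w = 1.0 / (m * pi[idx])
    return Xdag[:, idx] @ (w * y[idx])
```

Averaging this estimator over many independent sampling draws in the noiseless case recovers $\beta^{(0)}$ up to Monte Carlo error, as the bias computation asserts.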
For each element in the bias term, we have \begin{eqnarray*} && \left( \mathbb{E} \left[ \widehat{ \beta} \right] \right)_i \\ & = & \mathbb{E}_{\mathcal{M}, \epsilon} \left( \sum_{\mathcal{M}_j \in \mathcal{M}} (\X^{\dagger})_{i,\mathcal{M}_j} \W_{\mathcal{M}_j,\mathcal{M}_j} (\X \beta^{(0)} + \epsilon)_{\mathcal{M}_j} \right) \\ & \stackrel{(a)}{=} & m \mathbb{E}_{l} \left( (\X^{\dagger})_{i,l} \frac{1}{m \pi_{l}} (\X \beta^{(0)})_l \right) \\ & = & m \sum_{l = 1}^n (\X^{\dagger})_{i,l} \frac{1}{m \pi_{l}} (\X \beta^{(0)})_l \pi_l \\ & = & \sum_{l =1}^n (\X^{\dagger})_{i, l} (\X \beta^{(0)} )_l \ = \ \left( \beta^{(0)} \right)_i, \end{eqnarray*} where $(a)$ follows from the fact that each sample is independently and identically drawn from $\{\pi_i\}_{i=1}^n$. We then derive the covariance of the estimator. We first split $ \widehat{ \beta} - \mathbb{E} \left[ \widehat{ \beta} \right] $ into two components, and then bound them one by one. \begin{eqnarray*} & & \left( \widehat{ \beta} - \mathbb{E} \left[ \widehat{ \beta} \right] \right)_i \\ & = & \sum_{\mathcal{M}_j \in \mathcal{M}} (\X^{\dagger})_{i,\mathcal{M}_j} \W_{\mathcal{M}_j,\mathcal{M}_j} (\X \beta^{(0)} + \epsilon)_{\mathcal{M}_j} - \left( \beta^{(0)} \right)_i \\ & = & \left( \sum_{\mathcal{M}_j \in \mathcal{M}} (\X^{\dagger})_{i,\mathcal{M}_j} \W_{\mathcal{M}_j,\mathcal{M}_j} ( \X \beta^{(0)} )_{\mathcal{M}_j} - ( \beta^{(0)})_i \right) \\ && ~ + \sum_{\mathcal{M}_j \in \mathcal{M}} (\X^{\dagger})_{i,\mathcal{M}_j} \W_{\mathcal{M}_j,\mathcal{M}_j} \epsilon_{\mathcal{M}_j} \\ & = & \Delta^{(1)}_i + \Delta^{(2)}_i.
\end{eqnarray*} The first term captures variability due to sampling, while the second term captures variability introduced by noise. Since the noise is independent of the sampling set $\mathcal{M}$, we can bound $\Delta^{(1)}$ and $\Delta^{(2)}$ separately. For $\Delta^{(1)}_i $, we have \begin{eqnarray*} & & \mathbb{E} \left\| \Delta_i^{(1)} \right\|^2 \\ & = & \mathbb{E}_{\mathcal{M}} \bigg( \sum_{\mathcal{M}_j \neq \mathcal{M}_{j'} } (\X^{\dagger})_{i,\mathcal{M}_j} (\X^{\dagger})_{i,\mathcal{M}_{j'}} \W_{\mathcal{M}_j,\mathcal{M}_j} \W_{\mathcal{M}_{j'},\mathcal{M}_{j'}} \\ && ( \X \beta^{(0)} )_{\mathcal{M}_j} ( \X \beta^{(0)} )_{\mathcal{M}_{j'}} \bigg) + \mathbb{E}_{\mathcal{M}} \bigg(\sum_{\mathcal{M}_j = \mathcal{M}_{j'} } (\X^{\dagger})_{i,\mathcal{M}_j} \\ && (\X^{\dagger})_{i,\mathcal{M}_{j'}} \W_{\mathcal{M}_j,\mathcal{M}_j} \W_{\mathcal{M}_{j'},\mathcal{M}_{j'}} \\ && ( \X \beta^{(0)} )_{\mathcal{M}_j} ( \X \beta^{(0)} )_{\mathcal{M}_{j'}} \bigg) - ( \beta^{(0)})_i^2 \end{eqnarray*} \begin{eqnarray*} & = & \mathbb{E}_{l, l'} \bigg( (\X^{\dagger})_{i,l} (\X^{\dagger})_{i,l'} \frac{ m^2-m }{m^2 \pi_l \pi_{l'} } ( \X \beta^{(0)} )_{l} ( \X \beta^{(0)} )_{l'} \bigg) \\ && + ~ m \mathbb{E}_{l} \left( (\X^{\dagger})_{i,l}^2 \frac{1}{m^2 \pi_l^2 } ( \X \beta^{(0)} )_l^2 \right) - ( \beta^{(0)})_i^2 \\ & = & \sum_{l,l' = 1}^n (\X^{\dagger})_{i,l} (\X^{\dagger})_{i,l'} \frac{m^2-m}{m^2 \pi_l \pi_{l'} } ( \X \beta^{(0)} )_{l} ( \X \beta^{(0)} )_{l'} \pi_l \pi_{l'} \\ && + ~ m \sum_{l=1}^n (\X^{\dagger})_{i,l}^2 \frac{1}{m^2 \pi_l^2 } ( \X \beta^{(0)} )_l^2 \pi_l - ( \beta^{(0)})_i^2 \\ & = & \sum_{l=1}^n \frac{1}{ m \pi_l } (\X^{\dagger})_{i,l}^2 ( \X
\beta^{(0)} )_l^2 - \frac{1}{m} ( \beta^{(0)})_i^2. \end{eqnarray*} For the noise term $\Delta^{(2)}_i $, we have \begin{eqnarray*} && \mathbb{E} \| \Delta^{(2)}_i \|^2 \ = \ \mathbb{E}_{\mathcal{M}, \epsilon} \left( \sum_{\mathcal{M}_j \in \mathcal{M}} (\X^{\dagger})_{i,\mathcal{M}_j} \W_{\mathcal{M}_j,\mathcal{M}_j} \epsilon_{\mathcal{M}_j} \right) \\ && \bigg( \sum_{\mathcal{M}_{j'} \in \mathcal{M}} (\X^{\dagger})_{i,\mathcal{M}_{j'}} \W_{\mathcal{M}_{j'},\mathcal{M}_{j'}} \epsilon_{\mathcal{M}_{j'}} \bigg) \\ & = & \mathbb{E}_{\mathcal{M}, \epsilon} \bigg( \sum_{\mathcal{M}_j, \mathcal{M}_{j'} \in \mathcal{M}} (\X^{\dagger})_{i,\mathcal{M}_j} (\X^{\dagger})_{i,\mathcal{M}_{j'}} \\ && \W_{\mathcal{M}_j,\mathcal{M}_j} \W_{\mathcal{M}_{j'},\mathcal{M}_{j'}} \epsilon_{\mathcal{M}_j} \epsilon_{\mathcal{M}_{j'}} \bigg) \\ & = & m \mathbb{E}_{l, \epsilon} \left( (\X^{\dagger})_{i,l}^2 \frac{1}{m^2 \pi_{l}^2 } \epsilon_l^2 \right) \\ & = & m \sum_{l=1}^n (\X^{\dagger})_{i,l}^2 \frac{1}{m^2 \pi_{l}^2 } \mathbb{E} \left[ \epsilon_l^2 \right] \pi_l \\ & = & \sigma^2 \sum_{l=1}^n \frac{1}{m \pi_{l} } (\X^{\dagger})_{i,l}^2. \end{eqnarray*} We then combine $\Delta^{(1)}_i $ and $\Delta^{(2)}_i $ to obtain the variance term.
\begin{eqnarray*} & & \mathbb{E} \left\| \widehat{ \beta} - \mathbb{E} \left[ \widehat{ \beta} \right] \right\|^2 \ = \ \sum_{i=1}^{p} \mathbb{E} \left( \widehat{ \beta} - \mathbb{E} \left[ \widehat{ \beta} \right] \right)_i^2 \\ & = & \sum_{i=1}^{p} \left( \mathbb{E} \left\| \Delta_i^{(1)} \right\|^2 + \mathbb{E} \left\| \Delta_i^{(2)} \right\|^2 \right) \\ & = & \sum_{i=1}^{p} \left( \sum_{l=1}^n \frac{1}{ m \pi_l } (\X^{\dagger})_{i,l}^2 \left( ( \X \beta^{(0)} )_l^2 + \sigma^2 \right) - \frac{1}{m} ( \beta^{(0)})_i^2 \right). \end{eqnarray*} Finally, we put the bias term and the variance term together to obtain the exact MSE of the estimator. \begin{eqnarray*} && \mathbb{E} \left\| \widehat{ \beta } - \beta^{(0)} \right\|^2 \\ & = & \mathbb{E} \left\| \widehat{ \beta} - \mathbb{E} \left[ \widehat{ \beta } \right] + \mathbb{E} \left[ \widehat{ \beta } \right] - \beta^{(0)} \right\|^2 \\ & = & \left\| \mathbb{E} \left[ \widehat{ \beta} \right] - \beta^{(0)} \right\|^2 + \mathbb{E} \left\| \widehat{ \beta} - \mathbb{E} \left[ \widehat{ \beta} \right] \right\|^2 \\ & = & 0 + \sum_{i=1}^{p} \left( \sum_{l=1}^n \frac{1}{ m \pi_l } (\X^{\dagger})_{i,l}^2 \left( ( \X \beta^{(0)} )_l^2 + \sigma^2 \right) - \frac{( \beta^{(0)} )_i^2}{m} \right) \\ & = & \sum_{l=1}^n \frac{ ( \X \beta^{(0)} )_l^2 + \sigma^2 }{ m \pi_l } \sum_{i=1}^{p} (\X^{\dagger})_{i,l}^2 - \frac{ 1}{m} \left\| \beta^{(0)} \right\|_2^2 \\ & = & {\rm Tr} \left( \X^{\dagger} \W_{\rm C} (\X^{\dagger})^T \right) - \frac{ 1}{m} \left\| \beta^{(0)} \right\|_2^2.
\end{eqnarray*} \hfill$\blacksquare$ \subsection{Derivation of the Optimal Sampling Scores} To obtain the optimal sampling scores for the estimator, we solve the following optimization problem: \begin{eqnarray*} && \min_{\pi_l} {\rm Tr} \left( \X^{\dagger} \W_{\rm C} (\X^{\dagger})^T \right), \\ && {\rm subject~to:}~ \sum_l \pi_l = 1, \pi_l \geq 0. \end{eqnarray*} The objective function is the sampling-dependent term of the MSE of the estimator derived in Theorem~\ref{thm:MSE}, and the constraints require $\{ \pi_l \}_{l=1}^n$ to be a valid probability distribution. The Lagrangian function is then \begin{eqnarray*} L( \pi_l, \lambda, \mu_l ) & = & \sum_{l=1}^n \frac{ (\X \beta^{(0)})_l^2 + \sigma^2 }{ m \pi_l } \sum_{i=1}^{p} (\X^{\dagger})_{i,l}^2 \\ && + \lambda \left( \sum_l \pi_l - 1 \right) + \sum_l \mu_l \pi_l. \end{eqnarray*} Setting the derivative of the Lagrangian function to zero, \begin{eqnarray*} \frac{d L}{d \pi_l} & = & - \frac{ (\X \beta^{(0)})_l^2 + \sigma^2 }{ m \pi_l^2 } \sum_{i=1}^{p} (\X^{\dagger})_{i,l}^2 + \lambda + \mu_l = 0, \end{eqnarray*} we obtain the optimal sampling distribution \begin{eqnarray*} \pi_l & \propto & \sqrt{ \left( \sum_{i=1}^{p}(\X^{\dagger})_{i,l}^2 \right) \left( (\X \beta^{(0)})_l^2 + \sigma^2 \right) } \\ & = & \sqrt{ \left( (\X^{\dagger})^T \X^{\dagger} \right)_{l,l} \left( (\X \beta^{(0)})_l^2 + \sigma^2 \right) }. \end{eqnarray*} Similarly, we minimize ${\rm Tr} \left( \Hm \W_{\rm C} \right)$ to obtain the optimal sampling distribution for the predictor, \begin{eqnarray*} \pi_l \propto \sqrt{ \Hm_{l,l} \left( (\X \beta^{(0)})_l^2 + \sigma^2 \right) }. \end{eqnarray*} \hfill$\blacksquare$ \end{document}
\begin{document} \maketitle \begin{center} \today \end{center} \begin{abstract} We formulate a quasistatic nonlinear model for nonsimple viscoelastic materials in a finite-strain setting in the Kelvin-Voigt rheology where the viscous stress tensor complies with the principle of time-continuous frame-indifference. We identify weak solutions in the nonlinear framework as limits of time-incremental problems for vanishing time increment. Moreover, we show that linearization around the identity leads to the standard system for linearized viscoelasticity and that solutions of the nonlinear system converge in a suitable sense to solutions of the linear one. The same property holds for time-discrete approximations, and we provide a corresponding commutativity result. Our main tools are the theory of gradient flows in metric spaces and $\Gamma$-convergence. \end{abstract} \section{Introduction} Neglecting inertia, a nonlinear viscoelastic material in the Kelvin-Voigt rheology obeys the following system of equations \begin{align}\label{eq:viscoel}-{\rm div}\Big(\partial_FW(\nabla y) + \partial_{\dot F}R(\nabla y,\partial_t \nabla y) \Big) = f\text{ in $[0,T] \times \Omega$.} \end{align} Here, $[0,T]$ is a process time interval with $T>0$, $\Omega\subset\Bbb R^d$ ($d=2$ or $d=3$) is a smooth bounded domain representing the reference configuration, and $y:[0,T]\times \Omega\to\Bbb R^d$ is a deformation mapping with corresponding deformation gradient $\nabla y$. Further, $W:\Bbb R^{d\times d}\to [0,\infty]$ is a stored energy density, which represents a potential of the first Piola-Kirchhoff stress tensor ${T^E}$, i.e., ${T^E}:=\partial_F W:=\partial W/\partial F$, where $F\in\Bbb R^{d\times d}$ is the placeholder of $\nabla y$.
Finally, $R: \Bbb R^{d \times d} \times \Bbb R^{d \times d} \to [0,\infty)$ denotes a (pseudo)potential of dissipative forces, where $\dot F \in \Bbb R^{d \times d}$ is the placeholder of $\partial_t \nabla y$, and $f:[0,T]\times \Omega\to\Bbb R^d$ is a volume density of external forces acting on $\Omega$. In the present contribution, we consider a version of \eqref{eq:viscoel} for nonsimple materials where the elastic stored energy density depends also on the second gradient of $y$. In this case, we get \begin{align}\label{eq:viscoel-nonsimple} -{\rm div}\Big( \partial_F W(\nabla y) + \varepsilon\mathcal{L}_{P}(\nabla^2 y) + \partial_{\dot{F}}R(\nabla y,\partial_t \nabla y) \Big) = f\text{ in $[0,T] \times \Omega$,} \end{align} where $\varepsilon>0$ is small and $\mathcal{L}_{P}$ is a first-order differential operator associated to an additional term $\int_\Omega P(\nabla^2 y)$ in the stored elastic energy; e.g., for $P(G):= \frac{1}{2} |G|^2$ with $G\in\Bbb R^{d\times d\times d}$, we get $-{\rm div}\mathcal{L}_{P}(\nabla^2 y)= \Delta^2 y$. We refer to \eqref{LP-def} for more details. Thus, we resort to so-called nonsimple materials, whose stored energy density (and hence whose first Piola-Kirchhoff stress tensor) depends also on the second gradient of the deformation. This idea was first introduced by Toupin \cite{Toupin:62,Toupin:64} and proved to be useful in mathematical elasticity, see e.g.~\cite{BCO,Batra, chen, MielkeRoubicek:16,MielkeRoubicek,Podio}, because it brings additional compactness to the problem.
The first Piola-Kirchhoff stress tensor ${T^E}$ then reads, for all $i,j\in\{1,\ldots, d\}$, \begin{align*} {T^E}_{ij}(F,G):= \partial_{F_{ij}} W(F) +\varepsilon \big(\mathcal{L}_{P}(G)\big)_{ij} = \partial_{F_{ij}} W(F) -\varepsilon\sum_{k=1}^d \partial_k \big(\partial_{G_{ijk}}P(G)\big), \end{align*} where $G\in\Bbb R^{d\times d\times d}$ is the placeholder for the second gradient of $y$. The term $\varepsilon\partial_{G}P(G)$ is usually called the hyperstress. As is standard, we assume that $W$ as well as $P$ are frame-indifferent functions, i.e., that $W(F)=W(QF)$ and $P(G)=P(QG)$ for every proper rotation $Q\in{\rm SO}(d)$, every $F\in\Bbb R^{d\times d}$, and every $G\in\Bbb R^{d\times d\times d}$. This implies that $W$ depends on $F$ only through the right Cauchy-Green strain tensor $C:=F^\top F$, see e.g.~\cite{Ciarlet}. We wish to emphasize that, in the case of nonsimple materials, no convexity properties of $W$ are needed; in particular, we do not have to assume that $W$ is polyconvex \cite{Ball:77,Ciarlet}. Moreover, it is shown in \cite{HealeyKroemer:09} that if $W$ satisfies suitable and physically relevant growth conditions (such as $W(F)\to\infty$ if ${\rm det}\, F\to 0$), then every minimizer of the elastic energy is a weak solution to the corresponding Euler-Lagrange equations. The second term on the left-hand side of \eqref{eq:viscoel} is the viscous stress tensor ${S}(F,\dot F):= \partial_{\dot F} R(F,\dot F)$, which has its origin in viscous dissipative mechanisms of the material. Notice that its potential $R$ plays a role analogous to that of $W$ in the case of purely elastic, i.e., non-dissipative processes. Naturally, we require that $R(F,\dot F)\ge R(F,0)=0$.
The viscous stress tensor must comply with the time-continuous frame-indifference principle, meaning that for all $F$ \begin{align*} {S}(F,\dot F)=F\tilde{S}(C,\dot C), \end{align*} where $\tilde{S}$ is a symmetric matrix-valued function. This condition constrains $R$ so that \cite{Antmann, Antmann:04,MOS} (see also \cite{Demoulini}) \begin{align}\label{eq:frame indifference-R} R(F,\dot F)=\tilde R(C,\dot C) \end{align} for some nonnegative function $\tilde R$. In other words, $R$ must depend on the right Cauchy-Green strain tensor $C$ and its time derivative $\dot C$. In this work, we are interested in the case of small strains, i.e., when $\nabla y - \mathbf{Id}$ is of order $\delta$ for some small $\delta >0$. Here, $y-\mathbf{id}$ is the displacement corresponding to $y$, with $\mathbf{id}$ and $\mathbf{Id}$ standing for the identity map and the identity matrix, respectively. Such a property is certainly meaningful if one considers initial values $y_0$ with $\Vert \nabla y_0 - \mathbf{Id} \Vert_{L^2(\Omega)} \le \delta$. Therefore, it is convenient to define the rescaled displacement $u = \delta^{-1}(y - \mathbf{id})$. Introducing a proper scaling in the above equation, we get \begin{align}\label{eq:viscoel-nonsimple-scaled} -{\rm div}\Big( \delta^{-1}\partial_F W(\mathbf{Id} + \delta \nabla u) + \tilde{\varepsilon}\mathcal{L}_{P}(\delta\nabla^2 u) + \delta^{-1}\partial_{\dot{F}}R(\mathbf{Id} + \delta \nabla u, \delta \partial_t \nabla u) \Big) = f\end{align} for appropriate $\tilde{\varepsilon}=\tilde{\varepsilon}(\delta)$. Note that to obtain \eqref{eq:viscoel-nonsimple-scaled} from \eqref{eq:viscoel} we write the latter equation for $f:=\delta f$ and then divide the whole equation by $\delta$.
Formally, we can pass to the limit (for $\tilde{\varepsilon} \to 0$ as $\delta \to 0$) and obtain the equation \begin{align}\label{eq:viscoel-small} -{\rm div}\Big( \Bbb C_W e(u) + \Bbb C_D e(\partial_t u) \Big) = f, \end{align} where $\Bbb C_W:=\partial^2_{F^2}W(\mathbf{Id})$ is the tensor of elastic constants, $\Bbb C_D:= \partial^2_{\dot F^2}R(\mathbf{Id},0)$ is the tensor of viscosity coefficients, and $e(u):=(\nabla u+(\nabla u)^\top)/2$ denotes the linear strain tensor. The goal of this contribution is twofold: we first show existence of solutions to the nonlinear system of equations \eqref{eq:viscoel-nonsimple-scaled}. Afterwards, we make the limit passage rigorous, i.e., we show that solutions to the nonlinear equations converge to the unique solution of the linear system as $\delta\to 0$. Interestingly, although the nonlinear viscoelasticity system is written for a nonsimple material, in the limit we obtain the standard linear equations without spatial gradients of $e(u)$. Our general strategy is to treat the system of quasistatic viscoelasticity in the abstract setting of metric gradient flows \cite{AGS}, which was, to the best of our knowledge, formulated for the first time in \cite{MOS} for simple materials (i.e., only the first gradient of $y$ is considered). However, in their setting, a passage from time-discrete problems to a time-continuous one is only possible in a specific one-dimensional case. See also \cite{BallSenguel:15} for a related approach for materials undergoing phase transitions. This, in our opinion, also supports models of nonsimple materials, as their linearization leads to the usual small-strain viscoelasticity model, which seems unreachable (or at least rather difficult to reach) in the case of simple materials.
An abstract framework for the study of metric gradient flows along a sequence of energies and metric spaces has been developed in \cite{S1,S2}. In practice, for each specific problem the challenge lies in proving that the additional conditions needed to ensure convergence of gradient flows are satisfied (we refer to \cite{S2} for some examples in that direction). Our aim is to show that the passage from nonlinear to linearized viscoelasticity can be formulated in this setting. Let us also mention that a rigorous analysis of the static, purely elastic case without viscosity goes back to \cite{DalMasoNegriPercivale:02}. Heuristically, the idea of gradient flows in metric spaces stems from the observation that, in a Hilbert space (equipped with the inner product $\langle\cdot,\cdot\rangle$), the inequality $$ |u'|^2 +2\langle u',\nabla\phi(u)\rangle+|\nabla\phi(u)|^2\ge 0 $$ becomes an equality if and only if $$ u'=-\nabla \phi(u), $$ i.e., if and only if $u$ solves the gradient flow equation. This approach can be extended to metric spaces provided we are able to find analogues of $|u'|$ and $|\nabla\phi|$ in metric spaces. These are called the metric derivative and the upper gradient (or slope), respectively. Precise definitions can be found in Section~\ref{sec: defs} below. The plan of the paper is as follows. In Section~\ref{sec:Model}, we introduce the nonlinear and linear systems of viscoelasticity in more detail and state our main results. In particular, Theorem~\ref{maintheorem1} and Theorem~\ref{maintheorem2} show the existence of solutions to the nonlinear and linear problems, respectively. These solutions can be identified with so-called \emph{curves of maximal slope} introduced in \cite{DGMT}.
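The Hilbert-space heuristic above can be made concrete on a toy quadratic energy, for which the implicit Euler (minimizing-movement) step $u_{k+1} = \operatorname{argmin}_u \, \phi(u) + |u-u_k|^2/(2\tau)$ admits a closed form. The NumPy sketch below is our own toy illustration in $\Bbb R^n$ and stands in for, without capturing, the metric setting of the paper.

```python
import numpy as np

def minimizing_movements(A, f, u0, tau, steps):
    """Minimizing-movement scheme for phi(u) = 0.5 u^T A u - f^T u:
    each step solves
        u_{k+1} = argmin_u phi(u) + |u - u_k|^2 / (2 tau),
    i.e. the implicit Euler step (A + I/tau) u_{k+1} = f + u_k / tau,
    a discretization of the gradient flow u' = -(A u - f)."""
    n = len(u0)
    M = A + np.eye(n) / tau
    u = u0.copy()
    traj = [u.copy()]
    for _ in range(steps):
        u = np.linalg.solve(M, f + u / tau)
        traj.append(u.copy())
    return np.array(traj)
```

For positive definite $A$, the iterates converge to the equilibrium $Au = f$, the zero-slope state of $\phi$.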
Proofs of existence rely on semidiscretization in time and on the theory of \emph{generalized minimizing movements} and gradient flows in metric spaces \cite{AGS}, where the underlying metric is given by a \emph{dissipation distance} suitably related to the potential $R$ (see \eqref{intro:R}). Finally, Theorem~\ref{maintheorem3} shows the relationship between the two systems. Besides convergence of solutions of \eqref{eq:viscoel-nonsimple} to solutions of \eqref{eq:viscoel-small}, we also get analogous convergences for the semidiscretized problems. Moreover, the convergences for vanishing time step and for $\delta\to 0$ commute, see Figure~\ref{diagram}. (For a related commutativity result in an abstract setting we refer to \cite{BCGS}.) Section~\ref{sec3} is devoted to the definitions of generalized minimizing movements (GMM) and curves of maximal slope. Here we also collect the necessary existence results proved in \cite{AGS}. Moreover, we present a statement similar to \cite{Ortner,S2} about sequences of curves of maximal slope and their limits, as well as a corresponding result for minimizing movements. Further, Section~\ref{sec:energy-dissipation} shows interesting properties of the dissipation distances related to our viscous dissipation. It turns out that, by frame indifference \eqref{eq:frame indifference-R}, the dissipation distances are genuinely non-convex. However, due to the presence of the higher order gradient, we are able to obtain sufficiently good convexity properties to apply the abstract theory \cite{AGS, S2}. Finally, the proofs of our results can be found in Section~\ref{sec results}.
In particular, we relate curves of maximal slope for the nonlinear system with limiting curves of maximal slope as $\delta\to 0$ and identify these configurations as weak solutions of \eqref{eq:viscoel-nonsimple} and \eqref{eq:viscoel-small}. In what follows, we use standard notation for Lebesgue spaces $L^p(\Omega)$, i.e., the spaces of measurable maps on $\Omega\subset\Bbb R^d$ that are integrable with the $p$-th power (if $1\le p<+\infty$) or essentially bounded (if $p=+\infty$). Sobolev spaces $W^{k,p}(\Omega)$ denote the linear spaces of maps which, together with their derivatives up to order $k\in\Bbb N$, belong to $L^p(\Omega)$. Further, $W^{k,p}_0(\Omega)$ contains maps from $W^{k,p}(\Omega)$ with zero boundary conditions (in the sense of traces). In order to emphasize its Hilbert structure, we write $H^1(\Omega):=W^{1,2}(\Omega)$. We also work with the dual space of $H^1_0(\Omega)$, denoted by $H^{-1}(\Omega)$. We refer to \cite{AdamsFournier:05} for more details on Sobolev spaces and their duals. If $A\in\Bbb R^{d\times d\times d\times d}$ and $e\in\Bbb R^{d\times d}$, then $Ae\in\Bbb R^{d\times d}$ is defined for $i,j\in\{1,\ldots, d\}$ by $(Ae)_{ij}:=A_{ijkl}e_{kl}$, where we use Einstein's summation convention. An analogous convention is used on similar occasions in the sequel. Finally, at many points we closely follow the notation introduced in \cite{AGS} to ease the readability of our work, because the theory developed there is one of the main tools of our analysis. \section{The model and main results}\label{sec:Model} \subsection{The nonlinear setting} We adopt the usual setting of nonlinear elasticity: consider $\Omega \subset \Bbb R^d$ open and bounded with Lipschitz boundary. Fix $\delta>0$ (small), $p>d$, and $0< \alpha<1$. The parameter $\tilde\varepsilon(\delta)$ introduced in \eqref{eq:viscoel-nonsimple-scaled} is defined as $\tilde\varepsilon(\delta):=\delta^{1-p\alpha}$.
\textbf{Stored elastic energy and body forces:} We introduce the nonlinear elastic energy $\phi_\delta: W^{2,p}(\Omega;\Bbb R^d) \to [0,\infty]$ by \begin{align}\label{nonlinear energy} \phi_\delta(y) = \frac{1}{\delta^2}\int_\Omega W(\nabla y(x))\, dx + \frac{1}{\delta^{p\alpha}}\int_\Omega P(\nabla^2 y(x)) \, dx - \frac{1}{\delta}\int_\Omega f(x)\cdot y(x) \, dx \end{align} for a \emph{deformation} $y \in W^{2,p}(\Omega;\Bbb R^d)$. Here, $W: \Bbb R^{d \times d} \to [0,\infty]$ is a single-well, frame-indifferent stored energy density satisfying the usual assumptions of nonlinear elasticity. Altogether, we suppose that there exists $c>0$ such that \begin{align}\label{assumptions-W} \begin{split} (i)& \ \ W \text{ is continuous and $C^3$ in a neighborhood of $SO(d)$},\\ (ii)& \ \ \text{frame indifference: } W(QF) = W(F) \text{ for all } F \in \Bbb R^{d \times d}, Q \in SO(d),\\ (iii)& \ \ W(F) \ge c\operatorname{dist}^2(F,SO(d)), \ W(F) = 0 \text{ iff } F \in SO(d), \end{split} \end{align} where $SO(d) = \lbrace Q\in \Bbb R^{d \times d}: Q^\top Q = \mathbf{Id}, \, \det Q=1 \rbrace$. Moreover, $P: \Bbb R^{d\times d \times d} \to [0,\infty]$ denotes a higher order perturbation satisfying \begin{align}\label{assumptions-P} \begin{split} (i)& \ \ \text{frame indifference: } P(QG) = P(G) \text{ for all } G \in \Bbb R^{d \times d \times d}, Q \in SO(d),\\ (ii)& \ \ \text{$P$ is convex and $C^1$},\\ (iii)& \ \ \text{growth condition: for all $G \in \Bbb R^{d \times d \times d}$ we have } \\& \ \ \ \ \ \ c_1 |G|^p \le P(G) \le c_2 |G|^p, \ \ \ \ \ \ |\partial_G P(G)| \le c_2 |G|^{p-1} \end{split} \end{align} for some $0<c_1<c_2$. Finally, $f \in L^\infty(\Omega;\Bbb R^d)$ denotes a volume force. From now on, we always drop the target space $\Bbb R^d$ for notational convenience when no confusion arises.
We remark that, by minor adaptations of our arguments, we can also treat potentials with an additional dependence on the material point $x \in \Omega$. We scale the energy appropriately with a (small) positive parameter $\delta$ since we will eventually be interested in the behavior in the small strain limit $\delta \to 0$.

\textbf{Dissipation potential and viscous stress:} Consider a time dependent deformation $y: [0,T] \times \Omega \to \Bbb R^d$. Viscosity is related not only to the strain $\nabla y(t,x)$, but also to the strain rate $\partial_t \nabla y(t,x)$, and can be expressed in terms of a dissipation potential $R(\nabla y, \partial_t \nabla y)$, where $R: \Bbb R^{d \times d} \times \Bbb R^{d \times d} \to [0,\infty)$. An admissible potential has to satisfy frame indifference in the sense (see \cite{Antmann, MOS})
\begin{align}\label{R: frame indiff}
R(F,\dot{F}) = R(QF,Q(\dot{F} + AF)) \ \ \ \forall Q \in SO(d), A \in {\rm Skew}(d)
\end{align}
for all $F \in GL_+(d)$ and $\dot{F} \in \Bbb R^{d \times d}$, where $GL_+(d) = \lbrace F \in \Bbb R^{d \times d}: \det F>0 \rbrace$ and ${\rm Skew}(d) = \lbrace A \in \Bbb R^{d \times d}: A=-A^\top \rbrace$. Following the discussion in \cite[Section 2.2]{MOS}, from the modeling point of view it is much more convenient to postulate the existence of a (smooth) global distance $D: GL_+(d) \times GL_+(d) \to [0,\infty)$ satisfying $D(F,F) = 0$ for all $F \in GL_+(d)$, from which an associated dissipation potential $R$ can be computed by
\begin{align}\label{intro:R}
R(F,\dot{F}) := \lim_{\varepsilon \to 0} \frac{1}{2\varepsilon^2} D^2(F+\varepsilon\dot{F},F) = \frac{1}{4} \partial^2_{F_1^2} D^2(F,F) [\dot{F},\dot{F}]
\end{align}
for $F \in GL_+(d)$, $\dot{F} \in \Bbb R^{d \times d}$, where $\partial^2_{F_1^2} D^2(F_1,F_2)$ denotes the Hessian of $D^2$ in the direction of $F_1$ at $(F_1,F_2)$, which is a fourth order tensor. We impose the following assumptions on $D$ for some $c>0$.
\begin{align}\label{eq: assumptions-D}
(i) & \ \ D(F_1,F_2)> 0 \text{ if } F_1^\top F_1 \neq F_2^\top F_2,\notag \\
(ii) & \ \ D(F_1,F_2) = D(F_2,F_1),\\
(iii) & \ \ D(F_1,F_3) \le D(F_1,F_2) + D(F_2,F_3),\notag \\
(iv) & \ \ \text{$D(\cdot,\cdot)$ is $C^3$ in a neighborhood of $SO(d) \times SO(d)$},\notag \\
(v)& \ \ \text{separate frame indifference: } D(Q_1F_1,Q_2F_2) = D(F_1,F_2)\notag \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \forall Q_1,Q_2 \in SO(d), \ \forall F_1,F_2 \in GL_+(d),\notag\\
(vi) & \ \ \text{$D(F,\mathbf{Id}) \ge c\operatorname{dist}(F,SO(d))$ for all $F \in \Bbb R^{d \times d}$ in a neighborhood of $SO(d)$}.\notag
\end{align}
Note that conditions (i)--(iii) state that $D$ is a true distance when restricted to symmetric matrices; we cannot expect more due to the separate frame indifference (v). We also note that (v) implies \eqref{R: frame indiff}, as shown in \cite[Lemma 2.1]{MOS}. Note that in our model we do not require any polyconvexity conditions on either $W$ or $D$ \cite{Ball:77}. For examples of admissible dissipation distances we refer the reader to \cite[Section 2.3]{MOS}.

\textbf{Equations of nonlinear viscoelasticity:} We will impose the boundary conditions $y(t,x) = x$ for $(t,x) \in [0,T] \times \partial \Omega$, and for convenience we define the set $W^{2,p}_\mathbf{id}(\Omega)= \lbrace y = \mathbf{id} + u \in W^{2,p}(\Omega): u \in W^{2,p}_0(\Omega) \rbrace$, where $\mathbf{id}$ denotes the identity function on $\Omega$. We remark that our results can also be extended to more general Dirichlet boundary conditions, which we do not include here for the sake of simplicity. We now introduce a differential operator associated to the perturbation $P$ (cf. \eqref{assumptions-P}).
To this end, we use the notation $(\nabla y)_{ik} = \partial_k y_i$ and $(\nabla^2 y)_{ijk} = \partial^2_{jk} y_i$ for $i,j,k \in \lbrace 1,\ldots,d\rbrace$, and define
\begin{align}\label{LP-def}
\big(\mathcal{L}_P(\nabla^2 y)\big)_{ij} = -\sum\nolimits_{k=1}^d \partial_k(\partial_GP(\nabla^2 y))_{ijk}, \ \ \ \ i,j \in \lbrace 1,\ldots, d\rbrace
\end{align}
for $y \in W^{2,p}_\mathbf{id}(\Omega)$, where the derivatives have to be understood in the sense of distributions. The equations of nonlinear viscoelasticity then read (respecting the different scalings of the terms in \eqref{nonlinear energy})
\begin{align}\label{nonlinear equation}
\begin{split}
\begin{cases}
- {\rm div}\Big( \partial_FW(\nabla y) + \delta^{2-p\alpha}\mathcal{L}_{P}(\nabla^2 y) + \partial_{\dot{F}}R(\nabla y,\partial_t \nabla y) \Big) = \delta f & \text{in } [0,\infty) \times \Omega \\
y(0,\cdot) = y_0 & \text{in } \Omega \\
y(t,\cdot) \in W^{2,p}_\mathbf{id}(\Omega) &\text{for } t\in [0,\infty)
\end{cases}
\end{split}
\end{align}
for some $y_0 \in W^{2,p}_\mathbf{id}(\Omega)$, where $\partial_FW(\nabla y(t,x))$ denotes the first \emph{Piola--Kirchhoff stress tensor} and $\partial_{\dot{F}}R(\nabla y(t,x),\partial_t \nabla y(t,x))$ the \emph{viscous stress}, with $R$ as introduced in \eqref{intro:R}. The first goal of the present contribution is to prove the existence of weak solutions to \eqref{nonlinear equation}. More precisely, we say that $y \in L^\infty([0,\infty);W^{2,p}_{\mathbf{id}}(\Omega)) \cap W^{1,2}([0,\infty);H^1(\Omega))$ is a \emph{weak solution} of \eqref{nonlinear equation} if $y(0,\cdot) = y_0$ and for a.e.
$t \ge 0$
\begin{align}\label{nonlinear equation2}
\begin{split}
& \int_\Omega \Big( \partial_FW(\nabla y(t,x)) + \partial_{\dot{F}}R(\nabla y(t,x),\partial_t \nabla y(t,x))\Big) : \nabla \varphi(x) \, dx \\
& \ \ \ \ \ \ \ \ \ + \int_\Omega\delta^{2-p\alpha} \partial_GP(\nabla^2 y(t,x)) :\nabla^2 \varphi(x) \, dx = \delta \int_\Omega f(x) \cdot \varphi(x) \, dx
\end{split}
\end{align}
for all $\varphi \in W^{2,p}_0(\Omega)$. In particular, we note that the first term in the second line is well defined for a weak solution by \eqref{assumptions-P}(iii) and H\"older's inequality.

\subsection{The linear problem}

After rescaling with $\delta^{-1}$ and introducing the rescaled displacement field $u(t,x) = \delta^{-1} (y(t,x)-x)$, the partial differential equation \eqref{nonlinear equation} can be written as
$$-{\rm div}\Big( \delta^{-1}\partial_FW(\mathbf{Id} + \delta \nabla u) + \delta^{1-p\alpha}\mathcal{L}_{P}(\delta\nabla^2 u) + \delta^{-1}\partial_{\dot{F}}R(\mathbf{Id} + \delta \nabla u, \delta \partial_t \nabla u) \Big) = f$$
with the initial datum $u_0 = \delta^{-1}(y_0 - \mathbf{id})$. For $\alpha$ small, letting $\delta \to 0$ we obtain, at least formally, the equation
\begin{align}\label{linear equation}
\begin{split}
\begin{cases}
-{\rm div}\Big( \Bbb C_W e(u) + \Bbb C_D e(\partial_t u) \Big) = f & \text{in } [0,\infty) \times \Omega \\
u(0,\cdot) = u_0 & \text{in } \Omega \\
u(t,\cdot) \in H^1_{0}(\Omega) &\text{for } t\in [0,\infty),
\end{cases}
\end{split}
\end{align}
where $\Bbb C_W := \partial^2_{F^2} W(\mathbf{Id})$ and $\Bbb C_D := \frac{1}{2}\partial^2_{F_1^2} D^2(\mathbf{Id},\mathbf{Id})$ (cf. \eqref{intro:R}).
Note that the frame indifference of the energy and of the dissipation (see \eqref{assumptions-W}(ii) and \eqref{eq: assumptions-D}(v), respectively) implies that these contributions only depend on the symmetric part of the strain, $e(u) = \frac{1}{2}( \nabla u +(\nabla u)^\top)$, and of the strain rate, $e(\partial_t u) = \frac{1}{2}( \partial_t \nabla u + \partial_t (\nabla u)^\top)$. Let us also mention that the stress tensor is related to the linearized elastic energy $\bar{\phi}_0 : H_0^1(\Omega) \to [0,\infty)$ given by
\begin{align}\label{linear energy}
\bar{\phi}_0(u) = \int_\Omega \frac{1}{2}\Bbb C_W[e(u)(x), e(u)(x)] \, dx - \int_\Omega f(x) \cdot u(x) \,dx
\end{align}
for $u \in H^1_0(\Omega)$. The goal of this article is to show that the above reasoning can be made rigorous: we will prove that \eqref{linear equation} admits a unique weak solution and that solutions of \eqref{nonlinear equation} converge to the solution of \eqref{linear equation} in a suitable sense. Here, similarly as before, we say that $u \in W^{1,2}([0,\infty); H^1_0(\Omega))$ is a \emph{weak solution} of \eqref{linear equation} if $u(0,\cdot) = u_0$ and for a.e. $t \ge 0$ and all $\varphi \in H^{1}_0(\Omega)$ we have
$$\int_\Omega ( \Bbb C_W e(u) + \Bbb C_D e(\partial_t u) ) : \nabla \varphi = \int_\Omega f\cdot \varphi.$$
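To make the tensors $\Bbb C_W$ and $\Bbb C_D$ concrete, the following numerical sketch uses the toy choices $W(F)=\frac14|F^\top F-\mathbf{Id}|^2$ and $D(F_1,F_2)=|F_1^\top F_1-F_2^\top F_2|$ (illustrative assumptions only, not the general $W$ and $D$ of this paper; both do satisfy the frame indifference conditions \eqref{assumptions-W}(ii) and \eqref{eq: assumptions-D}(v)). It checks that the rescaled stress $\delta^{-1}\partial_F W(\mathbf{Id}+\delta G)$ and the rescaled dissipation $\frac{1}{2\delta^2}D^2(\mathbf{Id}+\delta G,\mathbf{Id})$ only see the symmetric part of $G$ in the limit $\delta\to 0$:

```python
import numpy as np

# Toy choices (assumptions for illustration, not the paper's general W and D):
#   W(F)     = |F^T F - Id|^2 / 4        (frame indifferent, vanishes on SO(d))
#   D(F1,F2) = |F1^T F1 - F2^T F2|_F     (separately frame indifferent)
def dW(F):
    # Piola-Kirchhoff stress dW/dF for the toy W above: F (F^T F - Id).
    return F @ (F.T @ F - np.eye(F.shape[0]))

def D_toy(F1, F2):
    return np.linalg.norm(F1.T @ F1 - F2.T @ F2)

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3))   # plays the role of nabla u (or of F-dot)
sym = 0.5 * (G + G.T)             # symmetric part e(u)
I = np.eye(3)

for delta in [1e-2, 1e-3, 1e-4]:
    # delta^{-1} dW(Id + delta G) -> C_W G = 2 sym(G): skew parts drop out.
    stress_err = np.linalg.norm(dW(I + delta * G) / delta - 2 * sym)
    # (2 delta^2)^{-1} D^2(Id + delta G, Id) -> R(Id, G) = 2 |sym(G)|^2,
    # the limit in the definition of R via D (cf. the formula for R above).
    diss_err = abs(D_toy(I + delta * G, I) ** 2 / (2 * delta ** 2)
                   - 2 * np.linalg.norm(sym) ** 2)
    print(delta, stress_err, diss_err)
```

With these toy choices one computes $\Bbb C_W G = 2\,{\rm sym}(G)$ and $R(\mathbf{Id},G)=2|{\rm sym}(G)|^2$ by hand; the printed errors decay linearly in $\delta$, matching the observation that only $e(u)$ and $e(\partial_t u)$ enter \eqref{linear equation}.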
\subsection{Main results}

Let us introduce the \emph{global dissipation distances} between two deformations in the nonlinear and in the linear setting by
\begin{align}\label{eq: D,D0}
\begin{split}
&\mathcal{D}_\delta(y_0,y_1) = \delta^{-1}\Big(\int_\Omega D^2(\nabla y_0, \nabla y_1) \Big)^{1/2}, \\
& \bar{\mathcal{D}}_0(u_0,u_1) = \Big(\int_\Omega \Bbb C_D[\nabla u_0 - \nabla u_1,\nabla u_0 - \nabla u_1] \Big)^{1/2}
\end{split}
\end{align}
for $y_0,y_1 \in W^{2,p}_\mathbf{id}(\Omega)$ and $u_0,u_1 \in H^1_0(\Omega)$, respectively. (In much of the notation we include an overline to indicate that the notion refers to the linear setting.) We also define the sublevel sets $\mathscr{S}_\delta^M := \lbrace y\in W^{2,p}_\mathbf{id}(\Omega): \phi_\delta(y) \le M\rbrace$. (For convenience we do not include $\Omega$ in the notation.) Our general strategy will be to show that the spaces $(\mathscr{S}_\delta^M, \mathcal{D}_\delta)$ and $(H^1_0(\Omega), \bar{\mathcal{D}}_0)$ are complete metric spaces and to follow the approach in \cite{AGS} (see Theorem \ref{th: metric space} and Theorem \ref{th: metric space-lin} below). In particular, to show existence of solutions to the problems \eqref{nonlinear equation} and \eqref{linear equation}, we will apply an approximation scheme solving suitable time-incremental minimization problems and show that time-continuous limits are curves of maximal slope for the elastic energies $\phi_\delta$ and $\bar{\phi}_0$, respectively. Finally, using the fact that in Hilbert spaces curves of maximal slope can be related to gradient flows, we find solutions to \eqref{nonlinear equation} and \eqref{linear equation}. Moreover, to study the relation between the nonlinear and the linear problem, we will apply some results about limits of sequences of curves of maximal slope proved in Section \ref{sec: auxi-proofs}.
For the main definitions and notation for discrete solutions, (generalized) minimizing movements (abbreviated MM and GMM, see Definition~\ref{main def1}) and curves of maximal slope, we refer to Section \ref{sec: defs}. In particular, we define $\Phi_\delta$ and $\bar{\Phi}_0$ as in \eqref{incremental}, replacing $\phi, \mathcal{D}$ by $\phi_\delta,\mathcal{D}_\delta$ and $\bar{\phi}_0,\bar{\mathcal{D}}_0$, respectively. Moreover, we write $|\partial \phi_\delta|_{\mathcal{D}_\delta}$, $|\partial \bar{\phi}_0|_{\bar{\mathcal{D}}_0}$ for the (local) slopes and $|y'|_{\mathcal{D}_\delta}$, $|u'|_{\bar{\mathcal{D}}_0}$ for the metric derivatives, respectively (see Definition \ref{main def2}). Finally, discrete solutions for the time step $\tau > 0$ will be denoted by $\tilde{Y}^\delta_\tau$ and $\tilde{U}^0_\tau$, respectively. Our first main result addresses the existence of solutions to the nonlinear problem.

\begin{theorem}[Solutions to the nonlinear problem]\label{maintheorem1}
Let $M>0$ and $\mathscr{S}_\delta^M = \lbrace y \in W^{2,p}_\mathbf{id}(\Omega): \phi_\delta(y) \le M\rbrace$. Then for $\delta>0$ sufficiently small, depending only on $M$, the following holds:

(i) (Existence of GMM) $GMM(\Phi_\delta;y_0) \neq \emptyset$ for all $y_0 \in \mathscr{S}^M_\delta$.

(ii) (Curves of maximal slope) For all $y_0 \in \mathscr{S}^M_\delta$, each $y \in GMM(\Phi_\delta;y_0)$ is a curve of maximal slope for $\phi_\delta$ with respect to the strong upper gradient $|\partial \phi_\delta|_{\mathcal{D}_\delta}$; in particular, for all $T>0$ we have the energy identity
\begin{align}\label{slopesolution}
\frac{1}{2} \int_0^T |y'|_{\mathcal{D}_\delta}^2(t)\,dt + \frac{1}{2} \int_0^T |\partial \phi_\delta|^2_{\mathcal{D}_\delta}(y(t))\,dt + \phi_\delta(y(T)) = \phi_\delta(y_0).
\end{align}

(iii) (Relation to PDE) For all $y_0 \in \mathscr{S}^M_\delta$, each $y \in GMM(\Phi_\delta;y_0)$ is a weak solution of the partial differential equations of nonlinear viscoelasticity \eqref{nonlinear equation} in the sense of \eqref{nonlinear equation2}.
\end{theorem}

For the linearized model we obtain the following results.

\begin{theorem}[Solutions to the linear problem]\label{maintheorem2}
The limiting linear problem has the following properties.

(i) (Existence/Uniqueness of MM) For all $u_0 \in H_0^1(\Omega)$ there exists a unique $u \in MM(\bar{\Phi}_0;u_0)$.

(ii) (Curves of maximal slope) For all $u_0 \in H_0^1(\Omega)$, the minimizing movement $u \in MM(\bar{\Phi}_0;u_0)$ is the unique curve of maximal slope for $\bar{\phi}_0$ with respect to the strong upper gradient $|\partial \bar{\phi}_0|_{\bar{\mathcal{D}}_0}$.

(iii) (Relation to PDE) For all $u_0 \in H_0^1(\Omega)$, the unique $u \in MM(\bar{\Phi}_0;u_0)$ is a weak solution of the partial differential equations of linear viscoelasticity \eqref{linear equation}.
\end{theorem}

In contrast to Theorem \ref{maintheorem1}, we find that the weak solution to \eqref{linear equation} for a given initial value $u_0 \in H^1_0(\Omega)$ is uniquely determined and is a minimizing movement (and not merely a generalized one). Finally, we study the relation between the solutions to the equations \eqref{nonlinear equation} and \eqref{linear equation}.
\begin{theorem}[Relation between nonlinear and linear problems]\label{maintheorem3}
Fix a null sequence $(\delta_k)_k$ and a sequence of initial data $(y_0^k)_{k\in \Bbb N} \subset W^{2,p}_\mathbf{id}(\Omega)$ such that
$$\sup\nolimits_{k\in\Bbb N} \phi_{\delta_k}(y_0^k)<\infty, \ \ \ \ \delta_k^{-p\alpha}\int_\Omega P(\nabla^2 y_0^k) \to 0, \ \ \ \ \delta_k^{-1}(y^k_0 - \mathbf{id}) \to u_0 \in H_0^1(\Omega).$$
Let $u$ be the unique element of $MM(\bar{\Phi}_0;u_0)$. Then the following holds:

(i) (Convergence of discrete solutions) For all $\tau>0$ and all discrete solutions $\tilde{Y}_\tau^{\delta_k}$ as in \eqref{ds} below, there is a discrete solution $\tilde{U}^0_\tau$ for the linearized system such that $\delta_k^{-1}(\tilde{Y}_\tau^{\delta_k}(t) -\mathbf{id}) \to \tilde{U}^0_\tau(t)$ strongly in $H^1(\Omega)$ for all $t \in [0,\infty)$.

(ii) (Convergence of continuous solutions) Each sequence $y_k \in GMM(\Phi_{\delta_k};y_0^k)$, $k \in \Bbb N$, satisfies $\delta_k^{-1}(y_k(t) -\mathbf{id}) \to u(t)$ strongly in $H^1(\Omega)$ for all $t \in [0,\infty)$.

(iii) (Convergence at specific scales) For each null sequence $(\tau_k)_k$ and each sequence of discrete solutions $\tilde{Y}_{\tau_k}^{\delta_k}$ as in \eqref{ds}, we have $\delta_k^{-1}(\tilde{Y}_{\tau_k}^{\delta_k}(t) -\mathbf{id}) \to u(t)$ strongly in $H^1(\Omega)$ for all $t \in [0,\infty)$.
\end{theorem}

We remark that, in the terminology of \cite{Braides, BCGS}, property (iii) states that the configuration $u$ is a minimizing movement along $\phi_{\delta_k}$ at scale $\tau_k$. Let us emphasize that the convergence in Theorem \ref{maintheorem3} is with respect to the strong $H^1(\Omega)$-topology. From now on we set $f \equiv 0$ for convenience.
The general case follows with minor modifications, which are standard.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=8em,column sep=8em,minimum width=6em] { \delta_k^{-1}(\tilde{Y}_{\tau_n}^{\delta_k}(t) -\mathbf{id}) & \delta_k^{-1}(y_k -\mathbf{id}) \\ \tilde{U}^0_{\tau_n} & u \\};
\path[-stealth] (m-1-1) edge node [left] {$k \to \infty$} (m-2-1) edge node [below] {$n \to \infty$} (m-1-2) (m-2-1) edge node [below] {$n \to \infty$} (m-2-2) (m-1-2) edge node [right] {$k \to \infty$} (m-2-2) (m-1-1) edge node [below] {\hspace{-1.5cm}$n,k\to \infty$} (m-2-2) ;
\end{tikzpicture}
\caption{Illustration of the commutativity result given in Theorems \ref{maintheorem1}--\ref{maintheorem3}. The horizontal arrows are addressed in Theorem \ref{maintheorem1} and Theorem \ref{maintheorem2}, respectively. For the vertical and diagonal arrows we refer to Theorem \ref{maintheorem3}.}\label{diagram}
\end{figure}

\section{Preliminaries: Generalized minimizing movements and curves of maximal slope}\label{sec3}

In this section we first recall the relevant definitions and give a convergence result, proved in \cite{AGS}, for discrete solutions to curves of maximal slope. In Section \ref{sec: auxi-proofs} we then present a result about limits of sequences of curves of maximal slope, which is a variant of results presented in \cite{CG, S2}.

\subsection{Definitions}\label{sec: defs}

We consider a complete metric space $(\mathscr{S},\mathcal{D})$. We say that a curve $u: (a,b) \to \mathscr{S}$ is \emph{absolutely continuous} with respect to $\mathcal{D}$ if there exists $m \in L^1(a,b)$ such that
\begin{align}\label{metric-deriv}
\mathcal{D}(u(s),u(t)) \le \int_s^t m(r) \, dr \ \ \ \text{for all} \ a \le s \le t \le b.
\end{align}
The smallest function $m$ with this property, denoted by $|u'|_{\mathcal{D}}$, is called the \emph{metric derivative} of $u$ and satisfies, for a.e. $t \in (a,b)$ (see \cite[Theorem 1.1.2]{AGS} for the existence proof),
$$|u'|_{\mathcal{D}}(t) := \lim_{s \to t} \frac{\mathcal{D}(u(s),u(t))}{|s-t|}.$$
We now define the notion of a \emph{curve of maximal slope}. We only give the basic definition here and refer to \cite[Sections 1.2, 1.3]{AGS} for motivation and more details. By $h^+:=\max(h,0)$ we denote the positive part of a function $h$.

\begin{definition}[Upper gradients, slopes, curves of maximal slope]\label{main def2}
We consider a complete metric space $(\mathscr{S},\mathcal{D})$ with a functional $\phi: \mathscr{S} \to (-\infty,+\infty]$.

(i) A function $g: \mathscr{S} \to [0,\infty]$ is called a strong upper gradient for $\phi$ if for every absolutely continuous curve $v: (a,b) \to \mathscr{S}$ the function $g \circ v$ is Borel and
$$|\phi(v(t)) - \phi(v(s))| \le \int_s^t g(v(r)) |v'|_{\mathcal{D}}(r)\,dr \ \ \ \text{for all} \ a< s \le t < b.$$

(ii) For each $u \in \mathscr{S}$, the local slope of $\phi$ at $u$ is defined by
$$|\partial \phi|_{\mathcal{D}}(u): = \limsup_{w \to u} \frac{(\phi(u) - \phi(w))^+}{\mathcal{D}(u,w)}.$$

(iii) An absolutely continuous curve $u: (a,b) \to \mathscr{S}$ is called a curve of maximal slope for $\phi$ with respect to the strong upper gradient $g$ if for a.e. $t \in (a,b)$
$$\frac{\rm d}{ {\rm d} t} \phi(u(t)) \le - \frac{1}{2}|u'|^2_{\mathcal{D}}(t) - \frac{1}{2}g^2(u(t)).$$
\end{definition}

We now introduce minimizing movements. In the following we will use an approximation scheme solving suitable time-incremental minimization problems: consider a fixed time step $\tau >0$ and suppose that an initial datum $U^0_\tau$ is given.
Whenever $U_\tau^0, \ldots, U^{n-1}_\tau$ are known, $U^n_\tau$ is defined (if it exists) as
\begin{align}\label{incremental}
U_\tau^n = {\rm argmin}_{v \in \mathscr{S}} \Phi(\tau,U^{n-1}_\tau; v), \ \ \ \Phi(\tau,u; v):= \frac{1}{2\tau} \mathcal{D}(v,u)^2 + \phi(v).
\end{align}
Supposing that for a choice of $\tau$ a sequence $(U_\tau^n)_{n \in \Bbb N}$ solving \eqref{incremental} exists, we define the piecewise constant interpolation by
\begin{align}\label{ds}
\tilde{U}_\tau(0) = U^0_\tau, \ \ \ \tilde{U}_\tau(t) = U^n_\tau \ \text{for} \ t \in ( (n-1)\tau,n\tau], \ n\ge 1.
\end{align}
In the following, $\tilde{U}_\tau$ will be called a \emph{discrete solution}. Note that the existence of discrete solutions is usually guaranteed by the direct method of the calculus of variations under suitable compactness, coercivity, and lower semicontinuity assumptions. Finally, we introduce the \emph{modulus of the derivative}
\begin{align*}
|{\tilde U'_{\tau}}|_{\mathcal{D}}(t) = \frac{\mathcal{D}(U_{\tau}^n, U_{\tau}^{n-1})}{\tau} \ \text{ for } t \in ( (n-1)\tau, n\tau], \ n\ge 1.
\end{align*}

\begin{definition}[Minimizing movements]\label{main def1}
(i) We say that a curve $u: [0,\infty) \to \mathscr{S}$ is a minimizing movement for $\Phi$ as defined in \eqref{incremental}, starting from the initial datum $u_0 \in \mathscr{S}$, if for every sequence of time steps $(\tau_k)_k$ with $\tau_k \to 0$ there exist discrete solutions as defined in \eqref{ds} such that
\begin{align}\label{MM}
\begin{split}
&\lim_{k\to \infty} \phi(U^0_{\tau_k}) = \phi(u_0), \ \ \ \ \limsup\nolimits_{k \to \infty} \mathcal{D}(U^0_{\tau_k},u_0) < \infty, \\
& \lim_{k\to \infty} \mathcal{D}(\tilde{U}_{\tau_k}(t),u(t)) = 0 \ \ \ \forall t \in [0,\infty).
\end{split}
\end{align}
By $MM(\Phi;u_0)$ we denote the collection of all minimizing movements for $\Phi$ starting from $u_0$.

(ii) Likewise, we say that a curve $u: [0,\infty) \to \mathscr{S}$ is a generalized minimizing movement for $\Phi$ starting from $u_0 \in \mathscr{S}$ if there exist a sequence of time steps $(\tau_k)_k$ with $\tau_k \to 0$ and corresponding discrete solutions such that \eqref{MM} holds. The collection of all such curves is denoted by $GMM(\Phi;u_0)$.
\end{definition}

\subsection{Compactness of discrete solutions and convergence to curves of maximal slope}\label{sec: AGS-results}

Suppose again that $(\mathscr{S},\mathcal{D})$ is a complete metric space. As discussed in \cite[Remark 2.0.5]{AGS}, it is convenient to introduce a weaker topology on $\mathscr{S}$ to have more flexibility in the derivation of compactness properties. Assume that there is a Hausdorff topology $\sigma$ on $\mathscr{S}$ which is compatible with $\mathcal{D}$ in the sense that $\sigma$ is weaker than the topology induced by $\mathcal{D}$ and satisfies
\begin{align}\label{compatibility2}
\begin{split}
u_n \stackrel{\sigma}{\to} u, \ \ v_n \stackrel{\sigma}{\to} v \ \ \ \Rightarrow \ \ \ \liminf_{n \to \infty} \mathcal{D}(u_n,v_n) \ge \mathcal{D}(u,v).
\mathbf{e}nd{align} Consider a functional $\phi: \mathscr{S} \to [0,+\infty)$ with the following properties: \begin{align}\label{basic assumptions} \begin{split} (i) & \ \ \text{$u_n \stackrel{\sigma}{\to} u$, \ \ $\sup\mathbf{n}olimits_{n,m}\mathcal{D}(u_n,u_m)< \infty \ \ \Bbb Rightarrow \ \ \liminf_{n \to \infty}\phi(u_n) \ge \phi(u)$,} \\ (ii)& \ \ \text{for all $N \in\Bbb N$ there is a $\sigma$-sequentially compact set $K_N$ such that} \\ & \ \ \text{$\lbrace u \in \mathscr{S}: \phi(u) + \mathcal{D}(u,u_*) \le N \rbrace \subset K_N$ for some point $u_* \in \mathscr{S}$.} \mathbf{e}nd{split} \mathbf{e}nd{align} Note that nonnegativity of $\phi$ can be generalized to a suitable \mathbf{e}mph{coerciveness} condition, see \cite[(2.1.2b)]{AGS}, which we do not include here for the sake of simplicity. From \cite[Proposition 2.2.3, Theorem 2.3.3, Remark 2.3.4(i)]{AGS} we obtain the following compactness and convergence result. \begin{theorem}\label{th: auxiliary1} Suppose that $\phi$ satisfies \mathbf{e}qref{basic assumptions} and $v \in \mathscr{S} \mapsto |\partial \phi|_{\mathcal D}(v)$ is a strong upper gradient for $\phi$ and $\sigma$-lower semicontinuous. Then the following holds: (i) Suppose that there is a sequence of initial data $(U^0_{\tau_k})_{k \in \Bbb N}$ and $u_0 \in \mathscr{S}$ with $\sup_k \mathcal{D}(U^0_{\tau_k},u_0)<+\infty$, $U^0_{\tau_k} \stackrel{\sigma}{\to} u_0$, and $\phi(U^0_{\tau_k}) \to \phi(u_0)$. 
Then there is an absolutely continuous curve $u:[0,\infty) \to \mathscr{S}$ and a subsequence of $(\tau_k)_{k \in \Bbb N}$ (not relabeled) such that a sequence of discrete solutions $(\tilde{U}_{\tau_k})_{k \in \Bbb N}$ defined in \eqref{ds} satisfies $\tilde{U}_{\tau_k}(t) \stackrel{\sigma}{\to} u(t)$ for all $t \in [0,\infty)$.

(ii) For each $u_0 \in \mathscr{S}$, every $u \in GMM(\Phi;u_0)$ is a curve of maximal slope for $\phi$ with respect to $|\partial \phi|_{\mathcal{D}}$; in particular, $u$ satisfies the energy identity
\begin{align}\label{maximalslope}
\frac{1}{2} \int_0^T |u'|_{\mathcal{D}}^2(t) \, dt + \frac{1}{2} \int_0^T |\partial \phi|_{\mathcal{D}}^2(u(t)) \, dt + \phi(u(T)) = \phi(u_0) \ \ \forall T>0.
\end{align}
Moreover, for a sequence of discrete solutions $(\tilde{U}_{\tau_k})_{k \in \Bbb N}$ as in (i) we have
\begin{align*}
\begin{split}
&\lim_{k \to \infty} \phi(\tilde{U}_{\tau_k}(t)) = \phi(u(t)) \ \ \ \forall t \in [0,\infty),\\
& \lim_{k \to \infty} |\partial \phi|_{\mathcal{D}}({\tilde U_{\tau_k}}) = |\partial \phi|_{\mathcal{D}}(u)\ \ \text{in} \ \ L^2_{\rm loc}([0,\infty)),\\
& \lim_{k \to \infty} |{\tilde U'_{\tau_k}}|_{\mathcal{D}} = |u'|_{\mathcal{D}} \ \ \text{in} \ \ L^2_{\rm loc}([0,\infty)).
\end{split}
\end{align*}
\end{theorem}

In particular, Theorem \ref{th: auxiliary1}(i) states that the limit $u$ is a generalized minimizing movement, provided that $\sigma$ coincides with the topology induced by $\mathcal{D}$. We remark that $GMM(\Phi;u_0)$ could also be defined with respect to the weaker topology $\sigma$, see \cite[Definition 2.0.6]{AGS}. For our purposes, however, a definition in terms of $\mathcal{D}$ is more convenient.
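As a minimal concrete instance of the scheme \eqref{incremental} (a sketch under strong simplifying assumptions: $\mathscr{S}=\Bbb R^2$ with the Euclidean metric and a smooth quadratic energy $\phi(u)=\frac12\langle Au,u\rangle$, far from the function-space setting of this paper), each incremental minimization reduces to an implicit Euler step, and the discrete solutions \eqref{ds} converge to the gradient flow $u' = -Au$ as $\tau \to 0$:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
u0 = np.array([1.0, -1.0])

def phi(u):
    # Quadratic toy energy phi(u) = <Au, u>/2.
    return 0.5 * u @ A @ u

def discrete_solution(tau, T):
    # One incremental step: argmin_v |v - u|^2/(2 tau) + phi(v);
    # the optimality condition is (I + tau A) v = u, i.e. implicit Euler.
    u, n = u0.copy(), int(round(T / tau))
    M = np.linalg.inv(np.eye(2) + tau * A)
    for _ in range(n):
        u = M @ u
    return u

# Exact gradient-flow solution u(t) = exp(-tA) u0 via eigendecomposition.
w, V = np.linalg.eigh(A)
exact = V @ (np.exp(-w * 1.0) * (V.T @ u0))

for tau in [0.1, 0.01, 0.001]:
    print(tau, np.linalg.norm(discrete_solution(tau, 1.0) - exact))
```

The printed distance to the exact flow at $t=1$ decreases (linearly) with $\tau$, and the energy $\phi$ decreases along the discrete solution, a toy analogue of the energy identity in Theorem \ref{th: auxiliary1}(ii).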
The result can be considerably improved if $\Phi$ satisfies suitable convexity properties (see \cite[Theorem 4.0.4 and Theorem 4.0.7]{AGS}).

\begin{theorem}\label{th: auxiliary2}
Suppose that $\phi$ is $\mathcal{D}$-lower semicontinuous and $\phi \ge 0$. Moreover, assume that for all $\tau>0$ and all $w,v_0,v_1 \in \mathscr{S}$ there exists a curve $(\gamma_t)_{t \in [0,1]} \subset \mathscr{S}$ with $\gamma_0 = v_0$ and $\gamma_1 = v_1$ such that
$$\Phi(\tau,w;\gamma_t) \le (1-t)\Phi(\tau,w; v_0 ) + t\Phi(\tau,w; v_1 ) - \frac{t(1-t)}{2\tau} \mathcal{D}(v_0,v_1)^2 \ \ \ \forall t \in [0,1].$$
Then for each $u_0 \in \mathscr{S}$ there exists a unique $u \in MM(\Phi;u_0)$. Moreover, the assertion of Theorem \ref{th: auxiliary1} (with $\sigma$ being the topology induced by $\mathcal{D}$) holds, and for a discrete solution $\tilde{U}_{\tau}$ with $U^0_\tau = u_0$ we have $\mathcal{D}(\tilde{U}_\tau(t),u(t))^2 \le \frac{1}{2}\tau^2|\partial \phi|_{\mathcal{D}}^2(u_0)$ for all $t>0$.
\end{theorem}

Note that, in contrast to Theorem \ref{th: auxiliary1}, Theorem \ref{th: auxiliary2} also yields a uniqueness result for minimizing movements. Observe that \eqref{basic assumptions}(ii) is not necessary for Theorem \ref{th: auxiliary2}, since the solvability of the problem ${\rm argmin}_{v \in \mathscr{S}} \Phi(\tau,u; v)$ for $\tau>0$ and $u \in \mathscr{S}$ (cf. \eqref{incremental}) follows from a convexity argument. In this setting, much more refined results can be established and we refer to \cite[Section 4]{AGS} for more details.

\subsection{Limits of curves of maximal slope}\label{sec: auxi-proofs}

We now consider a set $\mathscr{S}$ and a sequence of metrics $(\mathcal{D}_n)_n$ on $\mathscr{S}$, as well as a limiting metric $\mathcal{D}$. We again assume that all these metric spaces are complete. Moreover, let $(\phi_n)_n$ be a sequence of functionals with $\phi_n: \mathscr{S} \to [0,\infty]$.
Suppose that there is a Hausdorff topology $\sigma$ on $\mathscr{S}$ which is weaker than the topology induced by each $\mathcal{D}_n$ and by $\mathcal{D}$, and which satisfies, similarly to \eqref{compatibility2},
\begin{align}\label{compatibility}
\begin{split}
u_n \stackrel{\sigma}{\to} u, &\ \ v_n \stackrel{\sigma}{\to} v \ \ \ \Rightarrow \ \ \ \liminf_{n \to \infty} \mathcal{D}_n(u_n,v_n) \ge \mathcal{D}(u,v).
\end{split}
\end{align}
Moreover, assume that $(\phi_n)_n$ satisfies \eqref{basic assumptions}(ii), i.e., for all $N \in\Bbb N$ there are a $\sigma$-sequentially compact set $K_N$ and $u_* \in \mathscr{S}$ such that for all $n \in \Bbb N$
\begin{align}\label{basic assumptions2}
\lbrace u \in \mathscr{S}: \phi_n(u) + \mathcal{D}_n(u,u_*) \le N \rbrace \subset K_N.
\end{align}
To ensure the existence of limiting curves of maximal slope, we will apply the following refined version of the Arzel\`{a}--Ascoli theorem.

\begin{theorem}\label{th: auxiliary3}
Let $T>0$, and let metrics $\mathcal{D}_n$, $\mathcal{D}$ and functionals $(\phi_n)_n$ be given such that \eqref{compatibility} holds with respect to the topology $\sigma$. Let $K \subset \mathscr{S}$ be a $\sigma$-sequentially compact set. Let $u_n:[0,T]\to \mathscr{S}$ be curves such that
\begin{align*}
u_n(t) \in K \ \forall n \in \Bbb N, t \in [0,T], \ \ \ \ \limsup_{n \to \infty}\mathcal{D}_n(u_n(s),u_n(t)) \le \omega(s,t) \ \ \ \forall s,t \in [0,T]
\end{align*}
for a symmetric function $\omega: [0,T]^2 \to [0,\infty)$ with
$$\lim_{(s,t) \to (r,r)} \omega(s,t) = 0 \ \ \ \ \forall r\in [0,T] \setminus \mathscr{C}, $$
where $\mathscr{C}$ is an at most countable subset of $[0,T]$.
Then there exist a (not relabeled) subsequence and a limiting curve $u:[0,T] \to \mathscr{S}$ such that
$$u_n(t) \stackrel{\sigma}{\to} u(t) \ \ \ \forall t\in [0,T], \ \ \ u \text{ is $\mathcal{D}$-continuous in $[0,T] \setminus \mathscr{C}$.} $$
\end{theorem}

\par\noindent{\em Proof. } We follow the proof of \cite[Proposition 3.3.1]{AGS}, with the only difference that the lower semicontinuity condition for the metric is replaced by our condition \eqref{compatibility} along the sequence of metrics. \nopagebreak\hspace*{\fill}$\Box$

Now consider also a limiting functional $\phi: \mathscr{S} \to [0,\infty]$. We suppose lower semicontinuity of the functionals and of the slopes in the following sense: for all $u \in \mathscr{S}$ and $(u_k)_k \subset \mathscr{S}$ we have
\begin{align}\label{eq: implication}
\begin{split}
u_k \stackrel{\sigma}{\to} u \ \ \ \Rightarrow \ \ \ \liminf_{k \to \infty} |\partial \phi_{k}|_{\mathcal{D}_{k}} (u_{k}) \ge |\partial \phi|_{\mathcal{D}} (u), \ \ \ \liminf_{k \to \infty} \phi_{k}(u_{k}) \ge \phi(u).
\end{split}
\end{align}
We now obtain the following result about limits of curves of maximal slope.

\begin{theorem}\label{th:abstract convergence 1}
Consider a set $\mathscr{S}$, metrics $(\mathcal{D}_n)_{n \in \Bbb N}$ and functionals $\phi_n: \mathscr{S} \to [0,\infty]$, $n \in \Bbb N$, as well as $\mathcal{D}$ and $\phi: \mathscr{S}\to [0,\infty]$. Suppose that there is a weaker topology $\sigma$ on $\mathscr{S}$ such that \eqref{compatibility}, \eqref{basic assumptions2}, and the implication \eqref{eq: implication} hold. Moreover, assume that $|\partial \phi_n|_{\mathcal{D}_n}$, $|\partial \phi|_{\mathcal{D}}$ are strong upper gradients for $\phi_n$, $\phi$ with respect to $\mathcal{D}_n$, $\mathcal{D}$, respectively. Let $T>0$ and $\bar{u} \in \mathscr{S}$.
For all $n \in \mathbb N$ let $u_n$ be a curve of maximal slope for $\phi_n$ with respect to $|\partial \phi_n|_{\mathcal{D}_n}$ such that \begin{align}\label{eq: abstract assumptions1} \begin{split} (i)& \ \ \sup_{n \in \mathbb N} \sup_{t \in [0,T]} \big( \phi_n(u_n(t)) + \mathcal{D}_n(u_n(t),\bar{u}) \big) < \infty, \\ (ii)& \ \ u_n(0) \stackrel{\sigma}{\to}\bar{u}, \ \ \ \phi_n(u_n(0)) \to \phi(\bar{u}). \end{split} \end{align} Then there exists a limiting function $u: [0,T] \to \mathscr{S}$ such that, up to a subsequence (not relabeled), $$u_n(t) \stackrel{\sigma}{\to} u(t), \ \ \ \ \ \phi_n(u_n(t)) \to \phi(u(t)) \ \ \ \forall t \in [0,T]$$ as $n \to \infty$, and $u$ is a curve of maximal slope for $\phi$ with respect to $|\partial \phi|_{\mathcal{D}}$. \end{theorem} The result is an adaptation of a statement in \cite{S2}, where condition \eqref{compatibility} is replaced by a lower bound condition on the metric derivatives along the sequence. We also refer to \cite{CG}, where a similar result is proved without the assumption that the slopes are \emph{strong} upper gradients (cf. \cite[Definition 1.2.1 and Definition 1.2.2]{AGS} for the definition of strong and weak upper gradients), which comes at the expense of imposing a suitable continuity condition along $(\phi_k)_k$ for sequences $(u_k)_k$ converging with respect to the metric. \par\noindent{\em Proof. } From the properties of a curve of maximal slope we have (cf. \eqref{maximalslope}) \begin{align}\label{abstract1} \frac{1}{2} \int_0^t |u'_n|_{\mathcal{D}_n}^2(s) \, ds + \frac{1}{2} \int_0^t |\partial \phi_n|_{\mathcal{D}_n}^2(u_n(s)) \, ds + \phi_n(u_n(t)) = \phi_n(u_n(0)) \end{align} for all $t \in [0,T]$. (Here, we have used that $|\partial \phi_n|_{\mathcal{D}_n}$ are strong upper gradients for $\phi_n$ with respect to $\mathcal{D}_n$.)
From \eqref{abstract1} and the equiboundedness of $\phi_n(u_n(t))$ (see \eqref{eq: abstract assumptions1}(i)) we get \begin{align*} \sup_{n \in \mathbb N} \int_0^T|u'_n|_{\mathcal{D}_n}^2(t) \, dt + \sup_{n \in \mathbb N} \int_0^T |\partial \phi_n|_{\mathcal{D}_n}^2(u_n(t)) \, dt < \infty. \end{align*} Consequently, there is a function $A \in L^2((0,T))$ such that $|u_n'|_{{\mathcal{D}}_{n}} \rightharpoonup A$ weakly in $L^2((0,T))$ up to a subsequence (not relabeled). In particular, this yields \begin{align}\label{abstract2} \limsup_{n \to \infty} {\mathcal{D}}_n(u_n(s),u_n(t)) \le \limsup_{n \to \infty}\int_s^t|u_n'|_{\mathcal{D}_n}(r)\, dr \le \omega(s,t):=\int_s^t A(r)\, dr \end{align} for all $0 \le s \le t \le T$ by \eqref{metric-deriv}. Using \eqref{basic assumptions2}, \eqref{eq: abstract assumptions1}(i), and \eqref{abstract2}, we can apply Theorem \ref{th: auxiliary3} and obtain a curve $u: [0,T] \to \mathscr{S}$ as well as a further subsequence (not relabeled) such that $u_n(t) \stackrel{\sigma}{\to} u(t)$ for all $t \in [0,T] $. Moreover, recalling \eqref{compatibility} we get $\mathcal{D}(u(s),u(t)) \le \int_s^t A(r)\,dr$, which shows that $u$ is absolutely continuous with $|u'|_{\mathcal{D}} \le A$ a.e. By \eqref{eq: implication} we get \begin{align*} \begin{split} |\partial \phi|_{\mathcal{D}} (u(t)) \le \liminf_{n \to \infty} |\partial \phi_n|_{\mathcal{D}_n} (u_n(t)),\ \ \ \phi(u(t)) \le \liminf_{n \to \infty} \phi_n(u_n(t)) \end{split} \end{align*} for $t\in [0,T]$.
This together with the fact that $|u_n'|_{{\mathcal{D}}_{n}} \rightharpoonup A$ weakly in $L^2((0,T))$ and $|u'|_{\mathcal{D}} \le A$ gives \begin{align*} \begin{split} &\frac{1}{2} \int_0^t |u'|_{\mathcal{D}}^2(s) \, ds + \frac{1}{2} \int_0^t |\partial \phi|_{\mathcal{D}}^2(u(s)) \, ds + \phi(u(t)) \\ &\le \frac{1}{2} \int_0^t A^2(s) \, ds + \frac{1}{2} \int_0^t \liminf_{n \to \infty} |\partial \phi_n|_{\mathcal{D}_n}^2(u_n(s)) \, ds + \liminf_{n \to \infty} \phi_n(u_n(t)) \\ &\le \liminf_{n \to \infty} \Big( \frac{1}{2} \int_0^t |u'_n|_{\mathcal{D}_n}^2(s) \, ds + \frac{1}{2} \int_0^t |\partial \phi_n|_{\mathcal{D}_n}^2(u_n(s)) \, ds + \phi_n(u_n(t))\Big) \end{split} \end{align*} for all $t \in [0,T]$, where in the second step we used Fatou's lemma and the weak lower semicontinuity of the $L^2$-norm. Using \eqref{eq: abstract assumptions1}(ii), \eqref{abstract1}, and $\bar{u} = u(0)$ we get $$\frac{1}{2} \int_0^t |u'|_{\mathcal{D}}^2(s) \, ds + \frac{1}{2} \int_0^t |\partial \phi|_{\mathcal{D}}^2(u(s)) \, ds + \phi(u(t)) \le \liminf_{n \to \infty} \phi_n(u_n(0)) = \phi(u(0)).$$ On the other hand, as $|\partial \phi|_{\mathcal{D}}$ is a strong upper gradient for $\phi$ with respect to $\mathcal{D}$, we obtain (recall Definition \ref{main def2}) \begin{align*} \phi(u(0)) \le \phi(u(t)) + \int_0^t |\partial \phi|_{\mathcal{D}}(u(s))|u'|_{\mathcal{D}}(s) \,ds. \end{align*} Therefore, combining the previous estimates and using Young's inequality, we derive $$ |u'|_{\mathcal{D}}(t) = |\partial \phi|_{\mathcal{D}}(u(t)), \ \ \ \phi(u(0))- \phi(u(t)) = \int_0^t |\partial \phi|_{\mathcal{D}}(u(s))|u'|_{\mathcal{D}}(s) \,ds $$ for a.e. $t\in[0,T]$, as well as $\lim_{n \to \infty}\phi_n(u_n(t)) = \phi(u(t))$ for all $t \in [0,T]$. It follows that $\phi \circ u$ is absolutely continuous and for a.e. $t \in [0,T]$ we have \begin{align*} \frac{\rm d}{ {\rm d} t} \phi(u(t)) = - |\partial \phi|_{\mathcal{D}}(u(t))|u'|_{\mathcal{D}}(t). \end{align*} This concludes the proof.
\nopagebreak\hspace*{\fill}$\Box$ We now study discrete solutions along the sequence of functionals $(\phi_n)_n$. \begin{theorem}\label{th:abstract convergence 2} Consider a set $\mathscr{S}$, metrics $(\mathcal{D}_n)_{n \in \mathbb N}$ and functionals $\phi_n: \mathscr{S} \to [0,\infty)$, $n \in \mathbb N$, as well as $\mathcal{D}$ and $\phi: \mathscr{S}\to [0,\infty)$. Suppose that there is a weaker topology $\sigma$ on $\mathscr{S}$ such that \eqref{compatibility}, \eqref{basic assumptions2}, and the implication \eqref{eq: implication} hold. Moreover, assume that $|\partial \phi|_{\mathcal{D}}$ is a strong upper gradient for $ \phi $ with respect to $\mathcal{D}$. Let $T>0$. Consider a null sequence $(\tau_k)_k$ and initial data $(U^0_{\tau_k})_k$, $\bar{u}$ with \begin{align*} \sup\nolimits_k \mathcal{D}_k(U^0_{\tau_k},\bar{u}) < + \infty, \ \ \ \ \ U^0_{\tau_k} \stackrel{\sigma}{\to} \bar{u} , \ \ \ \ \ \phi_k(U^0_{\tau_k}) \to \phi(\bar{u}). \end{align*} Then for each sequence of discrete solutions $(\tilde{U}_{\tau_k})_k$ starting from $(U^0_{\tau_k})_k$ there is a curve $u$ of maximal slope for $\phi$ with respect to $|\partial \phi|_\mathcal{D}$ such that, up to a subsequence (not relabeled), $\tilde{U}_{\tau_k}(t) \stackrel{\sigma}{\to} u(t)$ and $\phi_k(\tilde{U}_{\tau_k}(t)) \to \phi(u(t))$ for $t \in [0,T]$. \end{theorem} For the proof we refer to \cite[Section 2]{Ortner}. Let us also mention the recently obtained variant \cite{BCGS} where, similarly to \cite{CG}, the lower semicontinuity along the sequence $(\phi_n)_n$ (see \eqref{eq: implication}) is replaced by a continuity condition. Note that in their setting it is not necessary to require that $|\partial \phi|_{\mathcal{D}}$ is a strong upper gradient.
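The discrete solutions above are generated by the minimizing movement scheme: each time step minimizes $U \mapsto \frac{1}{2\tau}\mathcal{D}(U,U^{k-1})^2 + \phi(U)$. The following minimal numerical sketch (not part of the setting of this paper, purely illustrative) takes the Euclidean metric on $\mathbb R^d$ and the quadratic model energy $\phi(u) = \frac12 u^\top A u$, for which one step has the closed form $U^k = (I+\tau A)^{-1}U^{k-1}$, i.e. an implicit Euler step for the gradient flow $u' = -Au$:

```python
import numpy as np

def discrete_solution(A, u0, tau, T):
    """Minimizing movement scheme for phi(u) = 0.5 * u.A.u with the
    Euclidean metric: each step solves
        min_U  |U - U_prev|^2 / (2 tau) + phi(U),
    whose optimality condition is (I + tau A) U = U_prev,
    i.e. one implicit Euler step for the gradient flow u' = -A u."""
    steps = int(round(T / tau))
    M = np.eye(len(u0)) + tau * A
    U = u0.copy()
    for _ in range(steps):
        U = np.linalg.solve(M, U)
    return U

A = np.diag([1.0, 3.0])                   # model energy Hessian
u0 = np.array([1.0, 1.0])
exact = np.exp(-np.diag(A) * 1.0) * u0    # gradient-flow solution at T = 1
errs = [np.linalg.norm(discrete_solution(A, u0, tau, T=1.0) - exact)
        for tau in (0.1, 0.01, 0.001)]
print(errs)  # decreases roughly like O(tau)
```

As the step size $\tau$ tends to zero, the discrete solutions converge to the curve of maximal slope, in this toy case the solution of the linear gradient flow.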
\section{Properties of energies and dissipation distances}\label{sec:energy-dissipation} In this section we prove several properties of the energies and dissipation distances. Let $\delta>0$, $0 < \alpha < 1$, and recall the definition of the nonlinear energy in \eqref{nonlinear energy}-\eqref{assumptions-P} as well as \eqref{eq: assumptions-D}. We recall that $\mathscr{S}_\delta^M = \lbrace y \in W^{2,p}_\mathbf{id}(\Omega): \phi_\delta(y) \le M \rbrace$. In the whole section, $C\ge 1$ and $ 0 < c \le 1$ indicate generic constants, which may vary from line to line and depend on $M$, $\Omega$, the exponent $p>d$ (see \eqref{assumptions-P}), and on the constants in \eqref{assumptions-W}, \eqref{assumptions-P}, \eqref{eq: assumptions-D}, but are always independent of the small parameter $\delta$. \subsection{Basic properties} We start with some properties of the Hessians of $W$ and $D$. By $\partial^2 D^2$ we denote the Hessian of $D^2$ and by $\partial^2_{F_1^2} D^2, \partial^2_{F_2^2} D^2$ the Hessians with respect to the first and second argument of $D^2$, respectively. Moreover, we define ${\rm sym }(F) = \frac{F + F^\top}{2}$ for $F \in \mathbb R^{d \times d}$ and recall the definition of $\mathbb C_W,\mathbb C_D$ in \eqref{linear equation}. By $\mathbf{Id} \in \mathbb R^{d \times d}$ we again denote the identity matrix. \begin{lemma}[Properties of the Hessian]\label{D-lin} Let $F_1,F_2 \in \mathbb R^{d \times d}$ and let $Y \in \mathbb R^{d \times d}$ be in a neighborhood of $\mathbf{Id}$ such that $\partial^2 D^2(Y,Y)$ exists. (i) We have $\partial^2 D^2(Y,Y)[(F_1,F_2),(F_1,F_2)] = \partial^2_{F_1^2}D^2(Y,Y)[F_1-F_2,F_1-F_2] = \partial^2_{F_2^2}D^2(Y,Y)[F_1-F_2,F_1-F_2]$. (ii) We have $\partial^2 D^2(\mathbf{Id},\mathbf{Id})[(F_1,F_2),(F_1,F_2)] = \mathbb C_D[{\rm sym}(F_1-F_2), {\rm sym}(F_1-F_2)]$.
(iii) There is a constant $c>0$ independent of $F$ such that $\mathbb C_W[F,F] \ge c|{\rm sym}(F)|^2$, $\mathbb C_D[F,F] \ge c|{\rm sym}(F)|^2$. \end{lemma} \par\noindent{\em Proof. } (i) Set $H= \partial^2 D^2(Y,Y)$ for brevity. By the symmetry \eqref{eq: assumptions-D}(ii) we find two fourth-order tensors $H_1,H_2 : \mathbb R^{d\times d} \times \mathbb R^{d\times d} \to \mathbb R$ such that $H[(F_1,F_2),(F_1,F_2)] = H_1[F_1,F_1] + 2H_2[F_1,F_2] + H_1[F_2,F_2]$ and $H_2[F_1,F_2] = H_2[F_2,F_1]$. Note that $H_1 = \partial^2_{F_1^2}D^2(Y,Y) =\partial^2_{F_2^2} D^2(Y,Y)$. As $D(F,F)=0$ for all $F \in GL_+(d)$, we get $H[(F,F),(F,F)] = 0$ for all $F \in \mathbb R^{d \times d}$. Thus, we obtain $H_1[F,F] = -H_2[F,F]$ for all $F \in \mathbb R^{d \times d}$ and we compute \begin{align*} H_1[F_1-F_2,&F_1-F_2]\\& = - H_2[F_1-F_2,F_1-F_2] = -H_2[F_1,F_1] + 2H_2[F_1,F_2] - H_2[F_2,F_2] \\ &= H_1[F_1,F_1] + 2H_2[F_1,F_2] + H_1[F_2,F_2] = H[(F_1,F_2),(F_1,F_2)]. \end{align*} Property (ii) follows from the frame indifference \eqref{eq: assumptions-D}(v) by an elementary computation. Finally, the growth conditions for $\mathbb C_W$ and $\mathbb C_D$ stated in (iii) follow from \eqref{assumptions-W}(iii) and \eqref{eq: assumptions-D}(vi), respectively. \nopagebreak\hspace*{\fill}$\Box$ In the following, by $\mathbf{id}$ we again denote the identity function. \begin{lemma}[Rigidity]\label{lemma:rigidity} There is a constant $C>1$ independent of $\delta$ such that for $\delta$ sufficiently small and all $y \in \mathscr{S}_\delta^M$ we have \begin{itemize} \item[(i)] $\Vert y - \mathbf{id} \Vert_{H^1(\Omega)} \le C\Vert \operatorname{dist}(\nabla y,SO(d)) \Vert_{L^2(\Omega)}$, \item[(ii)] $\Vert \nabla y -\mathbf{Id} \Vert_{L^\infty(\Omega)}\le C\delta^{\alpha}$, \ \ \ $\Vert y -\mathbf{id} \Vert_{L^\infty(\Omega)}\le C\delta^{\alpha}$. \end{itemize} \end{lemma} \par\noindent{\em Proof.
} (i) follows from a standard geometric rigidity argument, see e.g. \cite{DalMasoNegriPercivale:02, FrieseckeJamesMueller:02}: by \cite[Theorem 3.1]{FrieseckeJamesMueller:02} and Poincar\'e's inequality we find a rotation $Q \in SO(d)$ and $b \in \mathbb R^d$ such that \begin{align}\label{rig1} \Vert y - (Q\cdot + b) \Vert_{H^1(\Omega)} \le C\Vert \operatorname{dist}(\nabla y,SO(d)) \Vert_{L^2(\Omega)}. \end{align} By a trace estimate and the fact that $y = \mathbf{id}$ on $\partial \Omega$, we get $\Vert \mathbf{id} - (Q\cdot + b) \Vert_{L^2(\partial \Omega)} \le C\Vert \operatorname{dist}(\nabla y,SO(d)) \Vert_{L^2(\Omega)}$. Using \cite[Lemma 3.3]{DalMasoNegriPercivale:02} we then find $|b| + |Q- \mathbf{Id}| \le C\Vert \mathbf{id} - (Q\cdot + b) \Vert_{L^2(\partial \Omega)}$ for a constant only depending on $\Omega$. This together with \eqref{rig1} implies (i). We now prove (ii). By the definition of $\phi_\delta$ and \eqref{assumptions-P}(iii) we get $\Vert \nabla^2 y \Vert^p_{L^p(\Omega)} \le CM\delta^{p\alpha}$ for all $y \in \mathscr{S}^M_\delta$. As $p>d$, Poincar\'e's inequality and Morrey's embedding yield some $F \in \mathbb R^{d \times d}$ and $b \in \mathbb R^d$ such that \begin{align}\label{rig2} \Vert y - (F\cdot + b)\Vert_{W^{1,\infty}(\Omega)} \le C\delta^{\alpha} \end{align} for a constant additionally depending on $\Omega$, $M$, and $p$. Using $\phi_\delta(y) \le M$, \eqref{assumptions-W}(iii), and (i) we compute \begin{align*} \Vert (F\cdot + b)- \mathbf{id} \Vert^2_{H^1(\Omega)} \le C\Vert \operatorname{dist}(\nabla y,SO(d)) \Vert^2_{L^2(\Omega)} + C|\Omega|\delta^{2\alpha} \le C\delta^2M + C|\Omega|\delta^{2\alpha} . \end{align*} Since $\alpha \le 1$, this gives $|b| + |F - \mathbf{Id}| \le C\delta^\alpha$, which together with \eqref{rig2} yields (ii).
\nopagebreak\hspace*{\fill}$\Box$ In the following we set, for shorthand, $H_Y := \frac{1}{2}\partial^2_{F_1^2} D^2(Y,Y) = \frac{1}{2}\partial^2_{F_2^2} D^2(Y,Y)$ for $Y \in GL_+(d)$, and given a deformation $y \in W^{2,p}_\mathbf{id}(\Omega)$ we also introduce the mapping $H_{\nabla y}: \Omega \to \mathbb R^{d \times d \times d \times d}$ by $H_{\nabla y}(x) = H_{\nabla y(x)}$ for $x \in \Omega$. Recall the definition of $\mathcal{D}_\delta, \bar{\mathcal{D}}_0$ in \eqref{eq: D,D0} and of $\mathbb C_W$ below \eqref{linear equation}. \begin{lemma}[Dissipation and energy]\label{lemma: metric space-properties} There are constants $0<c<1$, $C>1$ independent of $\delta$ such that for $\delta$ sufficiently small and all $y,y_0,y_1 \in \mathscr{S}_\delta^M$ we have \begin{itemize} \item[(i)] $\big|\delta^2\mathcal{D}_\delta(y_0,y_1)^2 - \int_\Omega H_{\nabla y_0}[\nabla (y_1 - y_0),\nabla (y_1 - y_0) ]\big| \le C \Vert \nabla (y_1- y_0) \Vert^3_{L^3(\Omega)}$, \item[(ii)] $c \Vert y_1 - y_0 \Vert_{H^1(\Omega)} \le \delta\mathcal{D}_\delta(y_0,y_1) \le C\Vert y_1 - y_0 \Vert_{H^1(\Omega)}$, \item[(iii)] $\big|\mathcal{D}_\delta(y_0,y_1)^2 - \bar{\mathcal{D}}_0(u_0,u_1)^2\big| \le C\delta^\alpha$, \item[(iv)] $\big|\delta^{-2} \int_\Omega W(\nabla y) - \int_\Omega \frac{1}{2}\mathbb C_W[e (u),e (u) ]\big| \le C\delta^\alpha,$ \end{itemize} where $u = \delta^{-1}(y - \mathbf{id})$ and $u_i = \delta^{-1}(y_i - \mathbf{id})$, $i=0,1$. In particular, (ii) shows that the topologies induced on $\mathscr{S}_\delta^M$ by $\mathcal{D}_\delta$ and $\Vert \cdot \Vert_{H^1(\Omega)}$ coincide. \end{lemma} \par\noindent{\em Proof. } Recall that $D^2$ is $C^3$ in a neighborhood of $(\mathbf{Id},\mathbf{Id})$.
In view of the uniform bound on $\nabla y_0, \nabla y_1 $ (see Lemma \ref{lemma:rigidity}(ii)) and a Taylor expansion of $D^2$ at $(\nabla y_0, \nabla y_0)$, we derive by Lemma \ref{D-lin} \begin{align*} \begin{split} \int_\Omega D^2(\nabla y_0,\nabla y_1) &= \int_\Omega H_{\nabla y_0}[\nabla (y_1 - y_0),\nabla (y_1 - y_0) ] + O( \Vert \nabla (y_1- y_0) \Vert^3_{L^3(\Omega)}). \end{split} \end{align*} This gives (i). By the regularity of $D$ and Lemma \ref{lemma:rigidity}(ii) we obtain $\Vert H_{\nabla y_0} - \mathbb C_D \Vert_{L^\infty(\Omega)} \le C\delta^\alpha$. This together with (i), Lemma \ref{lemma:rigidity}(ii), and Lemma \ref{D-lin} yields \begin{align}\label{eq:NNN} \begin{split} \int_\Omega D^2(\nabla y_0,\nabla y_1)& = \int_\Omega \mathbb C_D[e(y_1)-e(y_0),e(y_1)-e(y_0)]\\& \ \ \ + O(\delta^\alpha \Vert \nabla y_1- \nabla y_0 \Vert^2_{L^2(\Omega)}). \end{split} \end{align} Now by \eqref{eq:NNN}, Lemma \ref{D-lin}(iii), and Korn's inequality we derive for $\delta$ small enough \begin{align*} \int_\Omega D^2(\nabla y_0,\nabla y_1) & \ge c \Vert e(y_1)-e(y_0) \Vert^2_{L^2(\Omega)} + O( \delta^\alpha\Vert \nabla y_1- \nabla y_0 \Vert^2_{L^2(\Omega)}) \\& \ge c \Vert \nabla y_1- \nabla y_0 \Vert^2_{L^2(\Omega)}. \end{align*} Here we used that $y_1 - y_0 = 0$ on $\partial \Omega$. The first inequality in (ii) then follows from Poincar\'e's inequality. The second inequality can be seen along similar lines.
By Lemma \ref{lemma:rigidity}(i), \eqref{assumptions-W}(iii), and the fact that $y_0,y_1 \in \mathscr{S}_\delta^M$ we get \begin{align}\label{eq:remD} \Vert \nabla y_i - \mathbf{Id} \Vert^2_{L^2(\Omega)} \le C\Vert \operatorname{dist}(\nabla y_i,SO(d)) \Vert^2_{L^2(\Omega)} \le C\phi_\delta(y_i) \le CM\delta^2 \end{align} for $i=0,1$. Recalling the definitions of $\mathcal{D}_\delta, \bar{\mathcal{D}}_0$, we now obtain (iii) by \eqref{eq:NNN}. Finally, to see (iv), an argument very similar to (i), essentially relying on a Taylor expansion and Lemma \ref{lemma:rigidity}(ii), yields $$\Big|\delta^{-2} \int_\Omega W(\nabla y) - \int_\Omega \frac{1}{2}\mathbb C_W[e (u),e (u) ]\Big| \le C\delta^{\alpha-2} \Vert \nabla y - \mathbf{Id} \Vert^2_{L^2(\Omega)},$$ which together with \eqref{eq:remD} implies the claim. \nopagebreak\hspace*{\fill}$\Box$ We close this section by proving the differentiability of $\int_\Omega W(\nabla y)$. \begin{lemma}[Differentiability of $\int_\Omega W(\nabla y)$]\label{lemma:C1} For $(y_n)_n \subset \mathscr{S}_\delta^M$ and $y \in \mathscr{S}_\delta^M$ with $\mathcal{D}_\delta(y_n,y) \to 0$, we have $$\lim_{n \to \infty} \frac{\int_\Omega W(\nabla y_n) - \int_\Omega W(\nabla y) - \int_\Omega \partial_FW(\nabla y) : (\nabla y_n - \nabla y)}{\mathcal{D}_\delta(y_n,y)} = 0.$$ \end{lemma} \par\noindent{\em Proof. } By a Taylor expansion we find a universal constant $C'>0$ such that $|W(F_2) - W(F_1) - \partial_F W(F_1) : (F_2 - F_1)| \le C'|F_1 - F_2|^2$ for all $F_1,F_2$ with $|F_1 - \mathbf{Id}|,|F_2-\mathbf{Id}| \le C\delta^\alpha$, where $C$ is the constant in Lemma \ref{lemma:rigidity}(ii). This together with Lemma \ref{lemma:rigidity}(ii) and Lemma \ref{lemma: metric space-properties}(ii) gives the result.
\nopagebreak\hspace*{\fill}$\Box$ \subsection{Metric spaces and convexity}\label{sec: metric} In this section we show that $(\mathscr{S}^M_\delta, \mathcal{D}_\delta)$, $(H^1_0(\Omega),\bar{\mathcal{D}}_0)$ are complete metric spaces and derive convexity properties for the energies and dissipation distances. \begin{theorem}[Properties of $(\mathscr{S}^M_\delta, \mathcal{D}_\delta)$ and $\phi_\delta$]\label{th: metric space} For $\delta>0$ small enough we have \begin{itemize} \item[(i)] $(\mathscr{S}^M_\delta, \mathcal{D}_\delta)$ is a complete metric space. \item[(ii)] Compactness: If $(y_n)_n \subset \mathscr{S}^M_\delta$, then $(y_n)_n$ admits a subsequence converging weakly in $W^{2,p}(\Omega)$, strongly in $W^{1,\infty}(\Omega)$, and with respect to $\mathcal{D}_\delta$. \item[(iii)] Lower semicontinuity: $\mathcal{D}_\delta(y_n,y) \to 0$ \ \ $\Rightarrow$ \ \ $\liminf_{n \to \infty} \phi_\delta(y_n) \ge \phi_\delta(y)$. \end{itemize} \end{theorem} \par\noindent{\em Proof. } First, recalling \eqref{nonlinear energy} and \eqref{assumptions-P}(iii), we have $\Vert \nabla^2 y \Vert^p_{L^p(\Omega)} \le CM\delta^{p\alpha}$ for all $y \in \mathscr{S}^M_\delta$, which together with Lemma \ref{lemma:rigidity}(ii) shows $\sup_{y \in \mathscr{S}^M_\delta}\Vert y \Vert_{W^{2,p}(\Omega)} < \infty$. This implies (ii), recalling $p>d$ and also using Lemma \ref{lemma: metric space-properties}(ii). In particular, for a sequence $(y_n)_n$ converging to $y$ with respect to $\mathcal{D}_\delta$ we have $y_n \rightharpoonup y$ weakly in $W^{2,p}(\Omega)$ and $y_n \to y$ strongly in $W^{1,\infty}(\Omega)$. Then (iii) follows from Fatou's lemma and the fact that $\liminf_{n\to \infty} \int_\Omega P(\nabla^2 y_n) \ge \int_\Omega P(\nabla^2 y)$ by \eqref{assumptions-P}(ii). Finally, we show (i).
Apart from definiteness, all properties of a metric follow directly from \eqref{eq: assumptions-D} and \eqref{eq: D,D0}. To show that $\mathcal{D}_\delta(y_0,y_1)=0$ for $y_0, y_1 \in \mathscr{S}^M_\delta$ implies $y_0=y_1$, we apply Lemma \ref{lemma: metric space-properties}(ii). Finally, it remains to show that $(\mathscr{S}^M_\delta, \mathcal{D}_\delta)$ is complete. Let $(y_k)_k$ be a Cauchy sequence with respect to $\mathcal{D}_\delta$. By (ii) we find $y \in W^{2,p}(\Omega)$ and a subsequence (not relabeled) such that $y_k \to y$ in $W^{1,\infty}(\Omega)$. Then also $\lim_{k\to \infty}\mathcal{D}_\delta(y_k,y) = 0$ by Lemma \ref{lemma: metric space-properties}(ii). By (iii) we get $y \in \mathscr{S}_\delta^M$. The fact that $(y_k)_k$ is a Cauchy sequence now implies that the whole sequence converges to $y$ with respect to $\mathcal{D}_\delta$. This concludes the proof. \nopagebreak\hspace*{\fill}$\Box$ Similar properties can be derived in the linear setting. Recall the definition of $\bar{\mathcal{D}}_0$ in \eqref{eq: D,D0}. \begin{theorem}[Properties of $(H^1_0(\Omega), \bar{\mathcal{D}}_0)$ and $\bar{\phi}_0$]\label{th: metric space-lin} We have \begin{itemize} \item[(i)] $(H^1_0(\Omega), \bar{\mathcal{D}}_0)$ is a complete metric space. \item[(ii)] Continuity: $\bar{\mathcal{D}}_0(u_n, u) \to 0$ \ \ $\Rightarrow$ \ \ $\lim_{n \to \infty} \bar{\phi}_0(u_n) = \bar{\phi}_0(u)$. \end{itemize} \end{theorem} \par\noindent{\em Proof. } By Lemma \ref{D-lin}(iii) we find a constant $c>0$ such that $$\bar{\mathcal{D}}_0(u_0,u_1)^2 \ge c \Vert e(u_0) - e(u_1) \Vert_{L^2(\Omega)}^2 \ge c \Vert u_0 - u_1 \Vert^2_{H^1(\Omega)}, $$ where the last step (with a possibly smaller constant $c$) follows from Korn's and Poincar\'e's inequalities.
This shows that $(H^1_0(\Omega), \bar{\mathcal{D}}_0)$ is a complete metric space, where $\bar{\mathcal{D}}_0$ is equivalent to the metric induced by $\Vert \cdot \Vert_{H^1(\Omega)}$. Recalling \eqref{linear energy}, we find that $\bar{\phi}_0$ is continuous with respect to $\bar{\mathcal{D}}_0$. \nopagebreak\hspace*{\fill}$\Box$ The following properties are crucial in order to use the theory in \cite{AGS}. \begin{theorem}[Convexity and generalized geodesics in the nonlinear setting]\label{th: convexity} There is a constant $C \ge 1$ independent of $\delta$ such that for $\delta$ small and for all $y_0,y_1 \in \mathscr{S}^M_\delta$: \begin{align*} (i)& \ \ \mathcal{D}_\delta(y_s,y_0)^2 \le s^2\mathcal{D}_\delta(y_1,y_0)^2 (1 + C \Vert \nabla y_1 - \nabla y_0 \Vert_{L^\infty(\Omega)}), \\ (ii) & \ \ \phi_\delta(y_s) \le (1-s) \phi_\delta(y_0) + s\phi_\delta(y_1), \end{align*} where $y_s := (1-s) y_0 + sy_1$, $s \in [0,1]$. \end{theorem} Note that $y_s$ is not a geodesic in the sense of \cite[Definition 2.4.2]{AGS}, but $y_s$ can be understood as a generalized geodesic. We also refer to \cite[Section 3.2, Section 3.4]{MOS} for a discussion of generalized geodesics in a related setting. \par\noindent{\em Proof. } Let $y_s = (1-s)y_0 + sy_1$. By Lemma \ref{lemma: metric space-properties}(i) we obtain \begin{align*} \delta^2\mathcal{D}_\delta(y_1,y_0)^2 &\ge \int_\Omega H_{\nabla y_0}[\nabla (y_1-y_0),\nabla (y_1-y_0)] - C\int_\Omega|\nabla y_1 - \nabla y_0|^3. \end{align*} Likewise, we get \begin{align*} \delta^2\mathcal{D}_\delta(y_s,y_0)^2 & \le s^2\int_\Omega H_{\nabla y_0}[\nabla (y_1-y_0),\nabla (y_1-y_0)]+ Cs^3\int_\Omega|\nabla y_1 - \nabla y_0|^3.
\end{align*} Combining the two estimates, we therefore obtain \begin{align*} \mathcal{D}_\delta(y_s,y_0)^2 & \le s^2\big( \mathcal{D}_\delta(y_1,y_0)^2 + C\delta^{-2}\Vert \nabla y_1 - \nabla y_0 \Vert_{L^3(\Omega)}^3\big), \end{align*} which together with Lemma \ref{lemma: metric space-properties}(ii) shows (i). To see (ii), it suffices to show $\int_\Omega W(\nabla y_s) \le (1-s)\int_\Omega W(\nabla y_0) + s \int_\Omega W(\nabla y_1)$ since $P$ is convex (see \eqref{assumptions-P}(ii)). A Taylor expansion gives $\int_\Omega W(\nabla y) = \frac{1}{2}\int_\Omega \mathbb C_W[\nabla y,\nabla y] + \omega(\nabla y)$ for a (regular) function $\omega: \mathbb R^{d \times d} \to \mathbb R$ with $\partial_F \omega(0) = 0$ and $\partial^2_{F^2} \omega(0) =0$. We get \begin{align}\label{quadratic convexity} \begin{split} \int_\Omega \mathbb C_W[\nabla y_s,\nabla y_s] &= (1-s) \int_\Omega \mathbb C_W[\nabla y_0,\nabla y_0] + s \int_\Omega \mathbb C_W[\nabla y_1,\nabla y_1] \\ & \ \ \ - s(1-s) \int_\Omega \mathbb C_W[\nabla (y_1 - y_0),\nabla (y_1 - y_0)]. \end{split} \end{align} Denote by $B_{2C\delta^\alpha}(\mathbf{Id}) \subset \mathbb R^{d \times d}$ the ball with center $\mathbf{Id}$ and radius $2C\delta^\alpha$, with the constant $C$ from Lemma \ref{lemma:rigidity}(ii).
Since $F \mapsto \omega(F) + \frac{1}{2}\Vert \partial^2_{F^2} \omega \Vert_{L^\infty(B_{2C\delta^\alpha}(\mathbf{Id}))}|F|^2$ is convex on $B_{2C\delta^\alpha}(\mathbf{Id})$, we get by Lemma \ref{lemma:rigidity}(ii) \begin{align*} \int_\Omega \omega(\nabla y_s ) & \le (1-s)\int_\Omega \omega (\nabla y_0) + s \int_\Omega \omega(\nabla y_1 ) \\& \ \ \ + \frac{1}{2}s(1-s) \Vert \partial^2_{F^2} \omega\Vert_{L^\infty(B_{2C\delta^\alpha}(\mathbf{Id}))} \int_\Omega|\nabla y_1 - \nabla y_0|^2.\notag \end{align*} By the fact that $\partial^2_{F^2 }\omega(0) =0$ and the regularity of $\omega$ we find $\Vert \partial^2_{F^2 } \omega \Vert_{L^\infty(B_{2C\delta^\alpha}(\mathbf{Id}))} \le C\delta^\alpha$. Combining the previous three estimates and recalling that $\int_\Omega W(\nabla y) = \frac{1}{2}\int_\Omega \mathbb C_W[\nabla y,\nabla y] + \omega(\nabla y)$, we conclude \begin{align*} &\int_\Omega W(\nabla y_s) - (1-s)\int_\Omega W(\nabla y_0) - s\int_\Omega W(\nabla y_1) \\ & \le - \frac{1}{2}s(1-s) \int_\Omega \mathbb C_W[\nabla (y_1 - y_0),\nabla (y_1 - y_0)] +\frac{1}{2}s(1-s) C\delta^\alpha \int_\Omega|\nabla (y_1 - y_0)|^2 \le 0 \end{align*} for $\delta$ small enough, where the last step follows from Lemma \ref{D-lin}(iii) and Korn's inequality. \nopagebreak\hspace*{\fill}$\Box$ We note without proof that by reasoning similar to that in (ii) one can show that for given $w \in \mathscr{S}^M_\delta$ $$\mathcal{D}_\delta(y_s,w)^2 \le (1-s)\mathcal{D}_\delta(y_0,w)^2 + s\mathcal{D}_\delta(y_1,w)^2 - s(1-s)(1 - C\delta^\alpha)\mathcal{D}_\delta(y_1,y_0)^2.$$ This implies that $\mathcal{D}_\delta$ is $2(1-C\delta^\alpha)$-convex in the sense of \cite[Assumption 4.0.1]{AGS}. Note that this property is not strong enough to directly apply the results in \cite[Section 2.4, Section 4]{AGS}.
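The convexity statements above rest on the exact identity $Q(u_s - v) = (1-s)Q(u_0 - v) + sQ(u_1 - v) - s(1-s)Q(u_1 - u_0)$ for an arbitrary quadratic form $Q$ and $u_s = (1-s)u_0 + su_1$, which can be checked by expanding both sides. A quick numerical sanity check of this identity (Python, random symmetric form, purely illustrative and not part of the paper's setting):

```python
import numpy as np

# Check Q(u_s - v) = (1-s) Q(u_0 - v) + s Q(u_1 - v) - s(1-s) Q(u_1 - u_0)
# for a random symmetric quadratic form Q(x) = x.C.x and u_s = (1-s)u_0 + s u_1.
rng = np.random.default_rng(0)
C = rng.standard_normal((4, 4))
C = C.T @ C  # symmetric (positive semidefinite) quadratic form

def Q(x):
    return x @ C @ x

u0, u1, v = rng.standard_normal((3, 4))
max_dev = 0.0
for s in np.linspace(0.0, 1.0, 11):
    us = (1 - s) * u0 + s * u1
    lhs = Q(us - v)
    rhs = (1 - s) * Q(u0 - v) + s * Q(u1 - v) - s * (1 - s) * Q(u1 - u0)
    max_dev = max(max_dev, abs(lhs - rhs))
print(max_dev)  # vanishes up to rounding error
```

In the nonlinear setting the non-quadratic remainder $\omega$ perturbs this identity, which is exactly why only $2(1-C\delta^\alpha)$-convexity survives there, while the quadratic distance of the linear setting enjoys the identity, and hence $2$-convexity, exactly.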
Nevertheless, we will be able to derive representations and lower semicontinuity properties for the slopes by direct computations (see Lemma \ref{lemma: slopes} and Lemma \ref{lemma: lsc-slope} below). However, in the linear setting we obtain $2$-convexity, as the following result shows. \begin{lemma}[Convexity in the linear setting]\label{lemma: convexity2} For all $u_0,u_1 \in H^1_0(\Omega)$ and $v \in H^1_0(\Omega)$ with $u_s := (1-s) u_0 + su_1$ we have $$\bar{\mathcal{D}}_0(u_s,v)^2 \le (1-s) \bar{\mathcal{D}}_0(u_0,v)^2 + s \bar{\mathcal{D}}_0(u_1,v)^2 - s(1-s) \bar{\mathcal{D}}_0(u_1,u_0)^2.$$ \end{lemma} \par\noindent{\em Proof. } The property follows from an elementary computation as in \eqref{quadratic convexity}, taking into account that $\bar{\mathcal{D}}_0^2$ is quadratic. \nopagebreak\hspace*{\fill}$\Box$ \subsection{Properties of local slopes}\label{sec: slopes} We now derive representations and properties of the slopes corresponding to $\phi_\delta$ and $\bar{\phi}_0$. Recall Definition \ref{main def2}. \begin{lemma}[Slopes]\label{lemma: slopes} (i) For $\delta>0$ small enough the local slopes in the nonlinear setting admit the representation \begin{align*} &|\partial \phi_\delta|_{\mathcal{D}_\delta}(y) = \sup_{w \neq y} \ \frac{(\phi_\delta(y) - \phi_\delta(w))^+}{\mathcal{D}_\delta(y,w) (1 + C \Vert \nabla y - \nabla w \Vert_{L^\infty(\Omega)})^{1/2}} \ \ \ \ \forall y \in \mathscr{S}_\delta^M, \end{align*} where $C$ is the constant from Theorem \ref{th: convexity}. The slopes are lower semicontinuous with respect to both $H^1(\Omega)$ and $\mathcal{D}_\delta$ and are strong upper gradients for $\phi_\delta$.
(ii) The local slope for the linear energy $\bar{\phi}_0$ admits the representation $$ |\partial \bar{\phi}_0|_{\bar{\mathcal{D}}_0}(u) = \sup_{v \neq u} \ \frac{(\bar{\phi}_0(u) - \bar{\phi}_0(v))^+}{\bar{\mathcal{D}}_0(u,v)},$$ and is a strong upper gradient for $\bar{\phi}_0$. \end{lemma} \par\noindent{\em Proof. } Before we start with the actual proof, let us recall from \cite[Lemma 1.2.5]{AGS} that in a complete metric space $(\mathscr{S},\mathcal{D})$ with energy $\phi$ the local slope $|\partial \phi|_{\mathcal{D}}$ is a weak upper gradient for $\phi$ in the sense of \cite[Definition 1.2.2]{AGS}. We do not repeat the definition of weak upper gradients, but only mention that weak upper gradients are also strong upper gradients if for each absolutely continuous curve $z:(a,b) \to \mathscr{S}$ with $|\partial \phi|_{\mathcal{D}}(z)|z'|_{\mathcal{D}} \in L^1(a,b)$, the function $\phi \circ z$ is absolutely continuous. Moreover, \cite[Lemma 1.2.5]{AGS} also states that, if $\phi$ is $\mathcal{D}$-lower semicontinuous, then the global slope \begin{align}\label{global slope} \mathcal{S}_{\phi}(v) := \sup_{w \neq v} \frac{(\phi(v) - \phi(w))^+}{\mathcal{D}(v,w)} \end{align} is a strong (and thus also weak) upper gradient for $\phi$. We now give the proof of (i). We partially follow the proofs of Theorem 2.4.9 and Corollary 2.4.10 in \cite{AGS}.
To confirm the representation of $|\partial \phi_\delta|_{\mathcal{D}_\delta}$, we use the definition of the local slope in Definition \ref{main def2} and obtain, with $C$ being the constant from Theorem \ref{th: convexity}(i), \begin{align*} |\partial \phi_\delta|_{\mathcal{D}_\delta}(y) & = \limsup_{w \to y} \frac{(\phi_\delta(y) - \phi_\delta(w))^+}{\mathcal{D}_\delta(y,w)} = \limsup_{w \to y} \frac{(\phi_\delta(y) - \phi_\delta(w))^+}{\mathcal{D}_\delta(y,w) (1 + C \Vert \nabla y- \nabla w \Vert_{\infty})^{1/2}} \\ &\le \sup_{w \neq y} \ \frac{(\phi_\delta(y) - \phi_\delta(w))^+}{\mathcal{D}_\delta(y,w) (1 + C \Vert \nabla y - \nabla w \Vert_{\infty})^{1/2}}, \end{align*} where in the second equality we used that $w \to y$ (with respect to $\mathcal{D}_\delta$) implies $\Vert \nabla w - \nabla y\Vert_{L^ \infty(\Omega)} \to 0$ by Theorem \ref{th: metric space}(ii). To see the reverse inequality, it is not restrictive to suppose that $y \neq w$ and \begin{align}\label{proof2.1} \phi_\delta(y) - \phi_\delta(w)>0. \end{align} By Theorem \ref{th: convexity}(ii) with $y_0 = y$ and $y_1 = w$ we get \begin{align*} \frac{\phi_\delta(y) - \phi_\delta(y_s)}{\mathcal{D}_\delta(y,y_s)} \ge \frac{\phi_\delta(y) - \phi_\delta(w)}{\mathcal{D}_\delta(y,w)} \frac{s\mathcal{D}_\delta(y,w)}{\mathcal{D}_\delta(y,y_s)} \end{align*} for all $s \in [0,1]$, where $y_s = (1-s)y + s w$. Then, by \eqref{proof2.1} and Theorem \ref{th: convexity}(i), letting $s \to 0$ we derive \begin{align*} |\partial \phi_\delta|_{\mathcal{D}_\delta}(y) \ge \frac{\phi_\delta(y) - \phi_\delta(w)}{\mathcal{D}_\delta(y,w) (1+ C \Vert \nabla y - \nabla w \Vert_\infty)^{1/2}}. \end{align*} The claim now follows by taking the supremum with respect to $w$.
To confirm the lower semicontinuity, we consider $y_h \to y$ in $\mathcal{D}_\delta$ or equivalently in $H^1(\Omega)$ (see Lemma \ref{lemma: metric space-properties}(ii)). If $w \neq y$, then $w \neq y_h$ for $h$ large enough and thus \begin{align*} \liminf_{h \to \infty} |\partial \phi_\delta|_{\mathcal{D}_\delta}(y_h) & \ge \liminf_{h \to \infty} \frac{(\phi_\delta(y_h) - \phi_\delta(w))^+}{\mathcal{D}_\delta(y_h,w) (1 + C\Vert \nabla y_h - \nabla w \Vert_{\infty})^{1/2}} \\ & \ge \frac{(\phi_\delta(y) - \phi_\delta(w))^+}{\mathcal{D}_\delta(y,w) (1 + C\Vert \nabla y - \nabla w \Vert_{\infty})^{1/2}}, \end{align*} where we used Theorem \ref{th: metric space}(ii),(iii). By taking the supremum with respect to $w$ the lower semicontinuity follows. It remains to show that $|\partial \phi_\delta|_{\mathcal{D}_\delta}$ is a strong upper gradient. By Lemma \ref{lemma:rigidity}(ii), for $\delta$ small enough we find $\mathcal{S}_{\phi_\delta}(y) \le 2 |\partial \phi_\delta|_{\mathcal{D}_\delta}(y)$ with $\mathcal{S}_{\phi_\delta}$ as introduced in \eqref{global slope}. Recalling the remarks at the beginning of the proof, to show that $|\partial \phi_\delta|_{\mathcal{D}_\delta}$ is a strong upper gradient we have to check that for all absolutely continuous $z:(a,b) \to \mathscr{S}^M_\delta$ with $|\partial \phi_\delta|_{\mathcal{D}_\delta}(z)|z'|_{{\mathcal D}_\delta} \in L^1(a,b)$, the function $\phi_\delta \circ z$ is absolutely continuous. First, $\mathcal{S}_{\phi_\delta}(z)|z'|_{{\mathcal D}_\delta} \in L^1(a,b)$ follows since $\mathcal{S}_{\phi_\delta} \le 2 |\partial \phi_\delta|_{\mathcal{D}_\delta}$. Since $\phi_\delta$ is $\mathcal{D}_\delta$-lower semicontinuous, $\mathcal{S}_{\phi_\delta}$ is a strong upper gradient. Thus, we indeed get that $\phi_\delta \circ z$ is absolutely continuous, see Definition \ref{main def2}. We now concern ourselves with (ii).
The representation of the local slope follows from the convexity property in Lemma \ref{lemma: convexity2} as was shown in \cite[Theorem 2.4.9]{AGS}. Therefore, $\mathcal{S}_{\bar{\phi}_0} = |\partial \bar{\phi}_0|_{\bar{\mathcal{D}}_0}$, which is $\bar{\mathcal{D}}_0$-lower semicontinuous by Lemma \ref{th: metric space-lin}(ii), and thus $|\partial \bar{\phi}_0|_{\bar{\mathcal{D}}_0}$ is a strong upper gradient. \nopagebreak\hspace*{\fill}$\Box$ \section{Proof of the main results}\label{sec results} In this section we give the proofs of Theorems \ref{maintheorem1}--\ref{maintheorem3}. \subsection{Existence of curves of maximal slope} In this section we prove the first two parts of Theorem \ref{maintheorem1} and Theorem \ref{maintheorem2}, which essentially follow from the properties of the metric spaces established in Sections \ref{sec: metric} and \ref{sec: slopes} by applying the general results recalled in Section \ref{sec: AGS-results}. \begin{proof}[Proof of Theorem \ref{maintheorem1}(i),(ii)] First, we note that the assumptions of Theorem \ref{th: auxiliary1} are satisfied by Lemma \ref{lemma: slopes}(i) and Lemma \ref{th: metric space}(ii),(iii), where we let $\mathscr{S} = \mathscr{S}_\delta^M$ and let $\sigma$ be the topology induced by $\mathcal{D}_\delta$. (i) Fix $y_0 \in \mathscr{S}^M_\delta$. Define the initial data $U^0_\tau = y_0$ for all $\tau>0$. Applying Theorem \ref{th: auxiliary1}(i), we find a curve $y$ which is the limit of a sequence of discrete solutions with $y(0) = y_0$. Thus, in view of Definition \ref{main def1}, $y \in GMM(\Phi_{\delta};y_0)$, which is therefore nonempty. (ii) To see that generalized minimizing movements are curves of maximal slope, it suffices to apply Theorem \ref{th: auxiliary1}(ii).
\end{proof} \begin{proof}[Proof of Theorem \ref{maintheorem2}(i),(ii)] In the linear setting the convexity property given in Lemma \ref{lemma: convexity2} holds, and $\bar{\phi}_0$ is convex by \eqref{linear energy} and Lemma \ref{D-lin}(iii). Thus, Theorem \ref{th: auxiliary2} is applicable. Apart from uniqueness, the result then follows from Theorem \ref{th: auxiliary2}. It remains to show that the unique minimizing movement is also the unique curve of maximal slope for $\bar{\phi}_0$ with respect to the strong upper gradient $|\partial \bar{\phi}_0|_{\bar{\mathcal{D}}_0}$. To this end, we follow an idea used, e.g., in \cite{Gigli}. We first observe that the squared metric derivative $|u'|^2_{\bar{\mathcal{D}}_0}$ is convex. Indeed, let $u^1,u^2:[0,\infty) \to H^1_0(\Omega)$ be two curves. For $u^{3} = \frac{1}{2}(u^1 + u^2)$ we get by Young's inequality (setting $v^i = u^{i}(s) - u^{i}(t)$, $i=1,2$, for brevity) \begin{align*} \bar{\mathcal{D}}_0&(u^{3}(s), u^{3}(t))^2 = \int_\Omega \Bbb C_D[e((v^1+v^2)/2), e((v^1+v^2)/2)]\\& = \sum\nolimits_{i=1,2} \frac{1}{4}\int_\Omega \Bbb C_D[e(v^i), e(v^i)] + \frac{1}{2}\int_\Omega \Bbb C_D[e(v^1), e(v^2)] \\ & \le \sum\nolimits_{i=1,2}\frac{1}{2}\int_\Omega \Bbb C_D[e(v^i), e(v^i)] = \frac{1}{2} \bar{\mathcal{D}}_0(u^{1}(s), u^{1}(t))^2 + \frac{1}{2} \bar{\mathcal{D}}_0(u^{2}(s), u^{2}(t))^2. \end{align*} Dividing by $|s-t|^2$ and letting $s$ tend to $t$, we obtain the claim. We also anticipate from Lemma \ref{lemma: lin-slope} below that $u \mapsto |\partial \bar{\phi}_0|^2_{\bar{\mathcal{D}}_0}(u)$ is convex. Assume by contradiction that there were two different curves of maximal slope $u^1$, $u^2$ starting from $u_0$; then we find some $T$ such that $e(u^1(T)) \neq e(u^2(T))$, since otherwise the curves would coincide by Korn's inequality.
Set $u^{3} = \frac{1}{2}(u^1 + u^2)$ and compute by the strict convexity of $\Bbb C_W$ on $\Bbb R^{d \times d}_{\rm sym}$ (see Lemma \ref{D-lin}(iii)), the convexity properties of the slope and the metric derivative, and \eqref{maximalslope} \begin{align*} \bar{\phi}_0(u_0) &= \frac{1}{2}\sum_{i=1,2} \Big( \frac{1}{2} \int_0^T |(u^i)'|_{\bar{\mathcal{D}}_0}^2(t) \, dt + \frac{1}{2} \int_0^T |\partial \bar{\phi}_0|_{\bar{\mathcal{D}}_0}^2(u^i(t)) \, dt + \bar{\phi}_0(u^i(T)) \Big)\\ & > \frac{1}{2} \int_0^T |(u^{3})'|_{\bar{\mathcal{D}}_0}^2(t) \, dt + \frac{1}{2} \int_0^T |\partial \bar{\phi}_0|_{\bar{\mathcal{D}}_0}^2(u^{3}(t)) \, dt + \bar{\phi}_0(u^{3}(T)), \end{align*} which contradicts the fact that $ |\partial \bar{\phi}_0|_{\bar{\mathcal{D}}_0}$ is an upper gradient (see Definition \ref{main def2}(i) and use Young's inequality). This contradiction establishes uniqueness and concludes the proof. \end{proof} \subsection{$\Gamma$-convergence and lower semicontinuity} As a preparation for the passage to the linear problem, we recall and prove $\Gamma$-convergence results for the energies and lower semicontinuity for the slopes. In the following it is convenient to express all quantities in terms of the linear setting.
To this end, recalling \eqref{nonlinear energy} and \eqref{eq: D,D0}, for $u,v \in W^{2,p}_0(\Omega)$ and $\tau,\delta>0$ we define \begin{align*} & \bar{\phi}_{\delta}(u) = \phi_\delta(\mathbf{id} + \delta u), \ \ \bar{\phi}_{\delta,P}(u) = \delta^{-p\alpha}\int_\Omega P(\delta\nabla^2 u), \ \ \bar{\phi}_{\delta,W}(u) = \bar{\phi}_\delta(u) - \bar{\phi}_{\delta,P}(u), \\ &\bar{\mathcal{D}}_\delta(u,v) = {\mathcal{D}}_\delta(\mathbf{id}+ \delta u, \mathbf{id} + \delta v), \ \ \ \bar{\Phi}_\delta(\tau,v;u) = \bar{\phi}_\delta(u) + \frac{1}{2\tau}\bar{\mathcal{D}}_\delta(u,v)^2,\\ &|\partial \bar{\phi}_\delta|_{\bar{\mathcal{D}}_\delta}(u) = |\partial {\phi}_\delta|_{{\mathcal{D}}_\delta}(\mathbf{id} + \delta u). \end{align*} We extend $\bar{\phi}_\delta$ to a functional defined on $H^1_0(\Omega)$ by setting $\bar{\phi}_\delta(u) = + \infty$ for $u \in H^1_0(\Omega) \setminus W^{2,p}_0(\Omega)$. Likewise, we extend $\bar{\Phi}_\delta$. Moreover, we say $u \in \bar{\mathscr{S}}_\delta^M$ if $\mathbf{id} + \delta u \in \mathscr{S}_\delta^M$. We obtain the following $\Gamma$-convergence results. (For an exhaustive treatment of $\Gamma$-convergence we refer the reader to \cite{DalMaso:93}.) \begin{theorem}[$\Gamma$-convergence]\label{th: Gamma} Let $(\delta_n)_n$ be a null sequence. (i) The functionals $\bar{\phi}_{\delta_n}: H^1_0(\Omega) \to [0,\infty]$ $\Gamma$-converge to $\bar{\phi}_0$ in the weak $H^1(\Omega)$-topology. (ii) For each $\tau>0$, $M>0$, and each sequence $(\bar{v}_n)_n$ with $\bar{v}_n \in \bar{\mathscr{S}}_{\delta_n}^M$ and $\bar{v}_n \to \bar{v}$ strongly in $H^1(\Omega)$, the functionals $\bar{\Phi}_{\delta_n}(\tau,\bar{v}_n;\cdot): H^1_0(\Omega) \to [0,\infty]$ $\Gamma$-converge to $\bar{\Phi}_0(\tau,\bar{v}; \cdot)$ in the weak $H^1(\Omega)$-topology. \end{theorem} \par\noindent{\em Proof.
} (i) The result is essentially proved in \cite{DalMasoNegriPercivale:02}, and we only give a short sketch highlighting the relevant adaptations. Since $\bar{\phi}_{\delta_n,P} \ge 0$, for the lower bound it suffices to prove $\liminf_{n \to \infty} \bar{\phi}_{\delta_n,W}(u_n) \ge \bar{\phi}_0(u)$ whenever $u_n \rightharpoonup u$ weakly in $H^1(\Omega)$. This was proved under more general assumptions in \cite[Proposition 4.4]{DalMasoNegriPercivale:02}. In our setting it follows readily by using Lemma \ref{lemma: metric space-properties}(iv) and the lower semicontinuity of $\bar{\phi}_0$ (see Lemma \ref{D-lin}(iii)). By a general approximation argument in the theory of $\Gamma$-convergence it suffices to establish the upper bound for smooth functions $u$, cf. \cite[Proposition 4.1]{DalMasoNegriPercivale:02}. For such a function, setting $u_n = u$, we find $\lim_n \bar{\phi}_{\delta_n,W}(u_n) = \bar{\phi}_0(u)$ (see Lemma \ref{lemma: metric space-properties}(iv) or \cite[Proposition 4.1]{DalMasoNegriPercivale:02}), and moreover it is not hard to see that $\bar{\phi}_{\delta_n,P}(u_n) \to 0$ by the growth of $P$ and the fact that $\alpha<1$. This concludes the proof of (i). (ii) We first suppose that the sequence $(\bar{v}_n)_n$ is constantly equal to $\bar{v}$. Then $\bar{\Phi}_{\delta_n}(\tau,\bar{v};\cdot)$ $\Gamma$-converges to $\bar{\Phi}_{0}(\tau,\bar{v};\cdot)$ by repeating exactly the proof of (i), where, in addition to Lemma \ref{lemma: metric space-properties}(iv), we also use Lemma \ref{lemma: metric space-properties}(iii).
To obtain the general case, it now suffices to prove that for every sequence $(u_n)_n$ uniformly bounded in $H_0^1(\Omega)$ with $u_n \in \bar{\mathscr{S}}_{\delta_n}^M$ for some $M$ large enough we obtain $$\lim\nolimits_{n\to \infty} |\bar{\mathcal{D}}_{\delta_n}(u_n,\bar{v}_{n})^2 - \bar{\mathcal{D}}_{\delta_n}(u_n,\bar{v})^2| = 0.$$ In view of Lemma \ref{lemma: metric space-properties}(iii), it suffices to show $\lim\nolimits_{n\to \infty} |\bar{\mathcal{D}}_{0}(u_n,\bar{v}_{n})^2 - \bar{\mathcal{D}}_{0}(u_n,\bar{v})^2| = 0$. To this end, we note that (recall \eqref{eq: D,D0}) \begin{align*} \bar{\mathcal{D}}_{0}(u_n,\bar{v}_{n})^2 - \bar{\mathcal{D}}_{0}(u_n,\bar{v})^2 &= \int_\Omega \Bbb C_D[\nabla \bar{v}_{n}, \nabla \bar{v}_{n}] - \int_\Omega \Bbb C_D[\nabla \bar{v}, \nabla \bar{v}] \\& \ \ \ - 2\int_\Omega \Bbb C_D[\nabla u_n, \nabla \bar{v}_{n} - \nabla\bar{v}], \end{align*} which by the assumptions on $(\bar{v}_n)_n$ and $(u_n)_n$ converges to zero. \nopagebreak\hspace*{\fill}$\Box$ We remark that by a general result in the theory of $\Gamma$-convergence we get that (almost) minimizers associated to the sequence of functionals converge to minimizers of the limiting functional. We obtain the following strong convergence result for recovery sequences which in various settings has been derived in, e.g., \cite{DalMasoNegriPercivale:02, FriedrichSchmidt:2011, Schmidt:08}. \begin{lemma}[Strong convergence of recovery sequences]\label{lemma: energy} Suppose that the assumptions of Theorem \ref{th: Gamma} hold. Let $M>0$, and let $(u_n)_n$ be a sequence with $u_n \in \bar{\mathscr{S}}_{\delta_n}^M$.
Let $u \in H_0^1(\Omega)$ be such that $u_n\rightharpoonup u$ weakly in $H^1(\Omega)$ and $$(i) \ \ \bar{\phi}_{\delta_n}(u_n) \to \bar{\phi}_0(u) \ \ \ \text{or} \ \ \ (ii) \ \ \bar{\Phi}_{\delta_n}(\tau,\bar{v}_n; u_n) \to\bar{\Phi}_0(\tau,\bar{v}; u).$$ Then $u_n \to u$ strongly in $H^1(\Omega)$. \end{lemma} \par\noindent{\em Proof. } If $\bar{\phi}_{\delta_n}(u_n) \to \bar{\phi}_0(u)$, we find $\bar{\phi}_0(u_n) \to \bar{\phi}_0(u)$ by Lemma \ref{lemma: metric space-properties}(iv) and thus by Lemma \ref{D-lin}(iii) \begin{align*} \Vert & e(u_n-u) \Vert^2_{L^2(\Omega)} \le C\int_\Omega \Bbb C_W[e(u_n-u),e(u_n-u)] \\ & = C\Big(\int_\Omega \Bbb C_W[e(u_n),e(u_n)] + \int_\Omega \Bbb C_W[e(u),e(u)] -2\int_\Omega \Bbb C_W[e(u_n),e(u)]\Big) \to 0 \end{align*} as $n \to \infty$. The assertion of (i) then follows from Korn's inequality. The proof of (ii) is similar, where one additionally takes Lemma \ref{lemma: metric space-properties}(iii) into account. \nopagebreak\hspace*{\fill}$\Box$ We close this section with a lower semicontinuity result for the slopes. \begin{lemma}[Lower semicontinuity of slopes]\label{lemma: lsc-slope} For each sequence $(u_n)_n\subset \bar{\mathscr{S}}_{\delta_n}^M$ with $u_n \rightharpoonup u$ weakly in $H^1(\Omega)$ we have $\liminf_{n \to \infty}|\partial \bar{\phi}_{\delta_n}|_{\bar{\mathcal{D}}_{\delta_n}}(u_n) \ge |\partial \bar{\phi}_0|_{\bar{\mathcal{D}}_0}(u)$. \end{lemma} \par\noindent{\em Proof. } For $\varepsilon>0$ fix $u' \in C^\infty_c(\Omega;\Bbb R^d)$ with $\Vert u' - u \Vert_{H^1(\Omega)} \le \varepsilon$. Fix $v \in C^\infty_c(\Omega;\Bbb R^d)$, $v \neq u',u$.
We first note that with $w_n := u_n - u'+ v$ we have by Lemma \ref{lemma: slopes}(i) \begin{align*} |\partial \bar{\phi}_{\delta_n}|_{\bar{\mathcal{D}}_{\delta_n}}(u_n) & = \sup_{w \neq u_n} \ \frac{(\bar{\phi}_{\delta_n}(u_n) - \bar{\phi}_{\delta_n}(w))^+}{\bar{\mathcal{D}}_{\delta_n}(u_n,w)(1 + C \Vert \mathbf{Id} + \delta_n \nabla u_n - (\mathbf{Id} + \delta_n \nabla w) \Vert_{L^\infty(\Omega)})^{1/2}} \\ &\ge \frac{(\bar{\phi}_{\delta_n}(u_n) - \bar{\phi}_{\delta_n}(w_n ))^+}{\bar{\mathcal{D}}_{\delta_n}(u_n,w_n)(1 + C_v\delta_n)^{1/2}}, \end{align*} where $C_v$ is a constant depending also on $v$ and $u'$. Note that, since $u',v$ are smooth, we indeed get $w_n = u_n - u'+ v \in \bar{\mathscr{S}}_{\delta_n}^M$ for $n$ large enough for some possibly larger $M>0$. Consequently, by Lemma \ref{lemma: metric space-properties}(iii),(iv) we get \begin{align}\label{lsc-slope1} \liminf_{n \to \infty}|\partial \bar{\phi}_{\delta_n}|_{\bar{\mathcal{D}}_{\delta_n}}(u_n) \ge \liminf_{n \to \infty} \frac{(\bar{\phi}_{0}(u_n) - \bar{\phi}_{0}(w_n) + \bar{\phi}_{\delta_n,P}(u_n) - \bar{\phi}_{\delta_n,P}(w_n) )^+}{\bar{\mathcal{D}}_{0}(u_n,w_n)}. \end{align} Recalling \eqref{linear energy} (for $f \equiv 0 $) we obtain by a direct computation \begin{align} \lim_{n \to \infty} &\big(\bar{\phi}_0(u_n) - \bar{\phi}_0(u_n - u'+ v)\big) = \lim_{n \to \infty} \big(-\bar{\phi}_0(v-u') - 2\int_\Omega \Bbb C_W[e(u_n),e(v-u')] \big)\notag \\ &= -\bar{\phi}_0(v-u') - 2\int_\Omega \Bbb C_W[e(u),e(v-u')]\notag\\& = \bar{\phi}_0(u) - \bar{\phi}_0(v) - \bar{\phi}_0(u'-u) +2\int_\Omega \Bbb C_W[e(u'-u),e(v)].
\end{align} Moreover, by convexity of $P$ and the definition $w_n := u_n - u'+ v$ we find \begin{align}\label{lsc-slope3} \bar{\phi}_{\delta_n,P}(u_n) - \bar{\phi}_{\delta_n,P}(u_n - u'+ v) \ge \delta_n^{-p\alpha} \int_\Omega \partial_GP(\delta_n\nabla^2w_n) : \delta_n(\nabla^2 u'- \nabla^2 v), \end{align} which vanishes as $n \to \infty$ by \eqref{assumptions-P}(iii), H\"older's inequality, $1+ \alpha(p-1) - \alpha p >0$, and the fact that $\Vert \delta_n \nabla^2 w_n \Vert^p_{L^p(\Omega)} \le CM\delta_n^{p\alpha}$. (The latter follows from $w_n \in \bar{\mathscr{S}}_{\delta_n}^M$.) Combining \eqref{lsc-slope1}-\eqref{lsc-slope3}, using $\bar{\mathcal{D}}_{0}(u_n,w_n) = \bar{\mathcal{D}}_{0}(v,u')$, and recalling $u_n \rightharpoonup u$, we get after some calculations \begin{align*} \liminf_{n \to \infty}|\partial \bar{\phi}_{\delta_n}|_{\bar{\mathcal{D}}_{\delta_n}}(u_n)& \ge \frac{(\bar{\phi}_0(u) - \bar{\phi}_0(v) - \bar{\phi}_0(u'-u) +2\int_\Omega \Bbb C_W[e(u'-u),e(v)])^+}{\bar{\mathcal{D}}_{0}(v,u')}\\ & \ge \frac{(\bar{\phi}_0(u) - \bar{\phi}_0(v))^+}{\bar{\mathcal{D}}_{0}(v,u)} -C\varepsilon \end{align*} for some $C>0$ depending only on $u$, $u'$, and $v$. Letting first $\varepsilon\to 0$ and then taking the supremum with respect to $v$ we get \begin{align*} \liminf_{n \to \infty}|\partial \bar{\phi}_{\delta_n}|_{\bar{\mathcal{D}}_{\delta_n}}(u_n) \ge \sup_{v \in C_c^\infty(\Omega), v \neq u} & \frac{(\bar{\phi}_0(u) - \bar{\phi}_0(v))^+}{\bar{\mathcal{D}}_{0}(v,u)}. \end{align*} In view of Lemma \ref{lemma: slopes}(ii), the claim now follows by approximating each $v \in H^1_0(\Omega)$ by a sequence of smooth functions, noting that the right-hand side is continuous with respect to $H^1(\Omega)$-convergence.
\nopagebreak\hspace*{\fill}$\Box$ \subsection{Passage from nonlinear to linear viscoelasticity} In this section we now give the proof of Theorem \ref{maintheorem3}. For the whole section we fix a null sequence $(\delta_k)_k$ and a sequence of initial data $(y_0^k)_{k\in \Bbb N} \subset W^{2,p}_\mathbf{id}(\Omega)$ such that $\delta_k^{-1}(y^k_0 - \mathbf{id}) \to u_0 \in H_0^1(\Omega)$. Moreover, we fix $M>0$ so large that $y_0^k \in \mathscr{S}_{\delta_k}^M$ for $k \in \Bbb N$. \begin{proof}[Proof of Theorem \ref{maintheorem3}(i)] Let $\tau>0$ and let $\tilde{Y}_\tau^{\delta_k}$ as in \eqref{ds} be a discrete solution. For each $k \in \Bbb N$ we then have the sequence $(U^n_k)_{n \in \Bbb N}$ with $U^n_k= \delta_k^{-1}(\tilde{Y}_\tau^{\delta_k}(n\tau) - \mathbf{id}) \in \bar{\mathscr{S}}_{\delta_k}^M$ for $n \in \Bbb N$. We need to show that there exists a sequence $(U^n_0)_{n \in \Bbb N}$ with $U^0_0 = u_0$ such that $$(i) \ \ U^n_0 = {\rm argmin}_{v \in H^1_0(\Omega)} \bar{\Phi}_0(\tau,U^{n-1}_0; v), \ \ \ \ (ii) \ \ \text{$U^n_k \to U^n_0$ strongly in $H^1(\Omega)$} $$ for all $n \in \Bbb N$. We show this property by induction. Suppose $(U^i_0)_{i=0}^n$ have been found such that the above properties hold. In particular, we note that (ii) holds for $n=0$ by assumption. We now pass from step $n$ to $n+1$. As $U^n_k \to U^n_0$ strongly in $H^1(\Omega)$ and thus by Theorem \ref{th: Gamma}(ii) $\bar{\Phi}_{\delta_k}(\tau,U^{n}_k; \cdot)$ $\Gamma$-converges to $\bar{\Phi}_0(\tau,U^{n}_0; \cdot)$, we derive by properties of $\Gamma$-convergence that the (unique) minimizer of $\bar{\Phi}_0(\tau,U^{n}_0; \cdot)$, denoted by $U^{n+1}_0$, is the limit of minimizers of $\bar{\Phi}_{\delta_k}(\tau,U^{n}_k; \cdot)$. Consequently, we obtain $U^{n+1}_k \rightharpoonup U^{n+1}_0$ weakly in $H^1(\Omega)$ and $\bar{\Phi}_{\delta_k}(\tau,U^{n}_k; U^{n+1}_k) \to \bar{\Phi}_0(\tau,U^{n}_0; U^{n+1}_0) $.
Thus, Lemma \ref{lemma: energy} implies that the sequence even converges strongly in $H^1(\Omega)$. This concludes the induction step. \end{proof} In the following let $u$ be the unique element of $MM(\bar{\Phi}_0;u_0)$. \begin{proof}[Proof of Theorem \ref{maintheorem3}(ii)] We let $\sigma$ be the weak $H^1(\Omega)$-topology. We consider the sequence of metrics $\mathcal{D}_k = \bar{\mathcal{D}}_{\delta_k}$ on $H^1_0(\Omega)$ and the functionals $\phi_k = \bar{\phi}_{\delta_k}$ as well as the limiting objects $\bar{\mathcal{D}}_0$ and $\bar{\phi}_0$. We note that \eqref{compatibility} is satisfied due to Lemma \ref{lemma: metric space-properties}(iii) and the fact that $\bar{\mathcal{D}}_0$ is quadratic and convex (see Lemma \ref{D-lin}(iii)). Moreover, also \eqref{eq: implication} is satisfied by the $\Gamma$-liminf inequality in Lemma \ref{th: Gamma}(i) and Lemma \ref{lemma: lsc-slope}. Finally, also \eqref{basic assumptions2} holds. In fact, by the rigidity estimate in Lemma \ref{lemma:rigidity}(i) and \eqref{nonlinear energy}, \eqref{assumptions-W}(iii) we find for all $k \in \Bbb N$ and $u \in \bar{\mathscr{S}}_{\delta_k}^M$, letting $y = \mathbf{id} + \delta_k u$, \begin{align}\label{go to limit1} \begin{split} \Vert u \Vert^2_{H^1(\Omega)} &= \delta_k^{-2} \Vert y-\mathbf{id} \Vert^2_{H^1(\Omega)} \le C\delta_k^{-2} \Vert \operatorname{dist}(\nabla y,SO(d))\Vert^2_{L^2(\Omega)} \\&\le C\delta_k^{-2} \phi_{\delta_k}(y) \le CM. \end{split} \end{align} Now consider a sequence $(y_k)_k$ of generalized minimizing movements starting from $y_0^k$ with $\delta_k^{-1}(y_0^k-\mathbf{id}) \to u_0$ in $H^1(\Omega)$. For convenience we also introduce the curves $u_k = \delta_k^{-1}(y_k - \mathbf{id})$. Fix $M>0$ so large that $y_0^k \in \mathscr{S}_{\delta_k}^M$ for $k \in \Bbb N$.
As $\bar{\phi}_{\delta_k}(u_k(t)) \le \phi_{\delta_k}(y_0^k)$ for all $t \ge 0$, we get $\sup_k\sup_t (\bar{\phi}_{\delta_k}(u_k(t)) + \mathcal{D}_k(u_k(t),u_0)) < \infty$ by \eqref{go to limit1} and Lemma \ref{lemma: metric space-properties}(iii). Consequently, also \eqref{eq: abstract assumptions1}(i) holds, and \eqref{eq: abstract assumptions1}(ii) is satisfied by the assumption on the initial data and Lemma \ref{lemma: metric space-properties}(iv). Since the slopes are strong upper gradients by Lemma \ref{lemma: slopes}, we can apply Theorem \ref{th:abstract convergence 1}, and the existence of a limiting curve of maximal slope follows. As this curve is uniquely given by $u$ (see Theorem \ref{maintheorem2}(ii)), we indeed obtain $u_k(t) \rightharpoonup u(t)$ weakly in $H^1(\Omega)$ for all $t \in [0,\infty)$ up to a subsequence. Since the limit is unique, we see that the whole sequence converges to $u$ by Urysohn's subsequence principle. It remains to observe that the convergence is actually strong. This follows from the fact that $\lim_{k \to \infty}\bar{\phi}_{\delta_k}(u_k(t)) = \bar{\phi}_0 (u(t))$ for all $t \in [0,\infty)$ (see Theorem \ref{th:abstract convergence 1}) and Lemma \ref{lemma: energy}. \end{proof} \begin{proof}[Proof of Theorem \ref{maintheorem3}(iii)] Proceeding as in the previous proof, we see that all assumptions of Theorem \ref{th:abstract convergence 2} are satisfied. Therefore, we get that for any sequence of discrete solutions there is a subsequence converging pointwise weakly in $H^1(\Omega)$ to a curve of maximal slope for $\bar{\phi}_0$, which can again be identified with $u$. The strong convergence as well as the convergence of the whole sequence follow exactly as in the previous proof.
\end{proof} \subsection{Fine representation of the slopes and solutions to the equations} In this section we derive fine representations for the slopes which will allow us to relate curves of maximal slope with solutions to the equations \eqref{nonlinear equation} and \eqref{linear equation}. Recall that $\Bbb C_D$ as defined in \eqref{linear equation} is a fourth order symmetric tensor inducing a quadratic form $(F_1,F_2) \mapsto \Bbb C_D[F_1,F_2]$ which is positive definite on $\Bbb R^{d \times d}_{\rm sym}$ (cf. Lemma \ref{D-lin}). Moreover, it maps $\Bbb R^{d \times d}$ to $\Bbb R^{d \times d}_{\rm sym}$, denoted by $F \mapsto \Bbb C_D F$ in the following. More precisely, the mapping $F \mapsto \Bbb C_D F$ from $\Bbb R^{d \times d}_{\rm sym}$ to $\Bbb R^{d \times d}_{\rm sym}$ is bijective. By $\sqrt{\Bbb C_D}$ we denote its (unique) square root and by $\sqrt{\Bbb C_D}^{-1}$ the inverse of $\sqrt{\Bbb C_D}$, both mappings defined on $\Bbb R^{d \times d}_{\rm sym}$. We start with a fine representation of the slope in the linear setting. \begin{lemma}[Slope in the linear setting]\label{lemma: lin-slope} There exists a linear differential operator $\mathcal{L}_0: H^1_0(\Omega;\Bbb R^{d}) \to L^2(\Omega;\Bbb R^{d \times d}_{\rm sym})$ satisfying ${\rm div}\, \mathcal{L}_0(u) = 0$ in $H^{-1}(\Omega;\Bbb R^d)$ such that for all $u \in H^1_0(\Omega)$ we have $$|\partial \bar{\phi}_0|_{\bar{\mathcal{D}}_0}(u) = \Vert \sqrt{\Bbb C_D}^{-1}\big(\Bbb C_W e(u) + \mathcal{L}_0(u) \big) \Vert_{L^2(\Omega)}.$$ In particular, we note that $|\partial \bar{\phi}_0|^2_{\bar{\mathcal{D}}_0}$ is convex on $H^1_0(\Omega)$. \end{lemma} \par\noindent{\em Proof.
} Recalling \eqref{linear energy} (for $f \equiv 0 $), \eqref{eq: D,D0}, Definition \ref{main def2}(ii), and Lemma \ref{D-lin} we have \begin{align}\label{lin-slope1} |\partial \bar{\phi}_0|_{\bar{\mathcal{D}}_0}(u) &= \limsup_{v \to u} \frac{(\bar{\phi}_0(u) - \bar{\phi}_0(v))^+}{\bar{\mathcal{D}}_0(u,v)}\\ &= \limsup_{v \to u} \frac{ (\int_\Omega \Bbb C_W[e(u), e(u-v)] - \frac{1}{2}\Bbb C_W[e(v-u), e(v-u)])^+} {(\int_\Omega \Bbb C_D[e(u-v),e(u-v)])^{1/2}}\notag\\ &= \limsup_{v \to u} \frac{ \int_\Omega \Bbb C_W[e(u), e(u-v)]} {\Vert \sqrt{\Bbb C_D} e(u-v) \Vert_{L^2(\Omega)}} = \sup_{w \neq 0} \frac{ \int_\Omega \Bbb C_W[e(u), e(w)]} {\Vert \sqrt{\Bbb C_D} e(w) \Vert_{L^2(\Omega)}},\notag \end{align} where in the second step we used $\int_\Omega \Bbb C_W[e(v-u), e(v-u)] / \Vert \sqrt{\Bbb C_D} e(u-v) \Vert_{L^2(\Omega)} \to 0$ as $v \to u$. Let $\bar{w}$ be the unique solution to the minimization problem $$\min_{v \in H^1_0(\Omega)} \int_\Omega \Big(\frac{1}{2}|\sqrt{\Bbb C_D} e(v)|^2 - \Bbb C_W[e(u), e(v)]\Big). $$ Clearly, $\bar{w}$ necessarily satisfies $$\int_\Omega \big(\sqrt{\Bbb C_D} e(\bar{w}) : \sqrt{\Bbb C_D}e(\varphi) - \Bbb C_W [e(u),e(\varphi)]\big) = 0 $$ for all $\varphi \in H^1_0(\Omega)$. This condition can also be formulated as \begin{align}\label{lin-slope2} \mathcal{L}_0(u): e( \varphi) = 0 \ \ \forall \varphi \in H^1_0(\Omega), \ \ \text{where} \ \ \mathcal{L}_0(u) := \Bbb C_D e(\bar{w}) - \Bbb C_We(u). \end{align} As the solution $\bar{w}$ depends linearly on $u$, we also get that $\mathcal{L}_0$ is a linear operator.
By \eqref{lin-slope1} and the property of $\mathcal{L}_0$ we now find \begin{align*} |\partial \bar{\phi}_0|_{\bar{\mathcal{D}}_0}(u) &= \sup_{w \neq 0} \frac{ \int_\Omega (\Bbb C_W e(u) + \mathcal{L}_0(u)) : e(w)} {\Vert \sqrt{\Bbb C_D} e(w) \Vert_{L^2(\Omega)}} \\&= \sup_{w \neq 0} \frac{ \int_\Omega \big(\sqrt{\Bbb C_D}^{-1}(\Bbb C_We(u) + \mathcal{L}_0(u)) \big) : \sqrt{\Bbb C_D}e(w)} {\Vert \sqrt{\Bbb C_D} e(w) \Vert_{L^2(\Omega)}}\\ & \le \Vert \sqrt{\Bbb C_D}^{-1}(\Bbb C_We(u) + \mathcal{L}_0(u)) \Vert_{L^2(\Omega)}, \end{align*} where in the last step we used the Cauchy-Schwarz inequality. On the other hand, by the definition of $\mathcal{L}_0$ in \eqref{lin-slope2}, we get \begin{align*} |\partial \bar{\phi}_0|_{\bar{\mathcal{D}}_0}(u)& \ge \frac{ \int_\Omega \big(\sqrt{\Bbb C_D}^{-1}(\Bbb C_We(u) + \mathcal{L}_0(u)) \big) : \sqrt{\Bbb C_D}e(\bar{w})} {\Vert \sqrt{\Bbb C_D} e(\bar{w}) \Vert_{L^2(\Omega)}} \\ & = \Vert \sqrt{\Bbb C_D} e(\bar{w}) \Vert_{L^2(\Omega)} = \Vert \sqrt{\Bbb C_D}^{-1}(\Bbb C_We(u) + \mathcal{L}_0(u)) \Vert_{L^2(\Omega)}. \end{align*} This concludes the proof. \nopagebreak\hspace*{\fill}$\Box$ Recall the definition of the symmetric fourth order tensor $H_Y = \frac{1}{2}\partial^2_{F_1^2} D^2(Y,Y)$ for $Y \in GL_+(d)$ (see before Lemma \ref{lemma: metric space-properties}). Let $Y \in \Bbb R^{d \times d}$ be in a small neighborhood of $\mathbf{Id}$ such that $Y^{-1}$ exists. Similarly to the discussion before Lemma \ref{lemma: lin-slope}, we get that $H_Y$ induces a bijective mapping from $Y^{-\top}\Bbb R^{d \times d}_{\rm sym}$ to $Y\Bbb R^{d \times d}_{\rm sym}$ by using frame indifference \eqref{eq: assumptions-D}(v) and the growth assumption \eqref{eq: assumptions-D}(vi). We then introduce $\sqrt{H_Y}$ as a bijective mapping from $Y^{-\top}\Bbb R^{d \times d}_{\rm sym}$ to $Y\Bbb R^{d \times d}_{\rm sym}$.
In a similar fashion, we introduce the inverse $\sqrt{H_Y}^{-1}$. For a given deformation $y: \Omega \to \Bbb R^d$ we introduce a mapping $H_{\nabla y}: \Omega \to \Bbb R^{d \times d \times d \times d}$ by $H_{\nabla y}(x) = H_{\nabla y(x)}$ for $x \in \Omega$. We note by Lemma \ref{lemma:rigidity}(ii), the fact that $D \in C^3$, and a continuity argument that \begin{align}\label{continuity for H} \Vert \sqrt{H_\mathbf{Id}} - \sqrt{H_{\nabla y}} \Vert_{L^\infty(\Omega)} \le C\delta^\alpha \end{align} for all $y \in \mathscr{S}_\delta^M$ for a sufficiently large constant $C>0$. Moreover, recall the definition of the operator $\mathcal{L}_P: \lbrace \nabla^2 u: u \in W^{2,p}_\mathbf{id}(\Omega)\rbrace \to W^{-1,\frac{p}{p-1}}(\Omega;\Bbb R^{d \times d})$ in \eqref{LP-def}. We write $\beta = \delta^{2-\alpha p}$ in the following for convenience. Note that $\int_\Omega \partial_GP(\nabla^2 y) : \nabla^2 \varphi = \mathcal{L}_P(y) : \nabla \varphi $ for all $y \in W^{2,p}_\mathbf{id}(\Omega)$ and $\varphi \in W^{2,p}_0(\Omega)$, where the boundary term vanishes due to $\nabla \varphi =0$ on $\partial \Omega$. We now obtain the following result.
\begin{lemma}[Slope in the nonlinear setting]\label{lemma: nonlin-slope} There exists a differential operator $\mathcal{L}^*_P: \lbrace y \in W^{2,p}_\mathbf{id}(\Omega): {\rm div}\mathcal{L}_P(\nabla^2 y) \in H^{-1}(\Omega; \Bbb R^{d}) \rbrace \to L^2(\Omega; \Bbb R^{d\times d})$ satisfying ${\rm div}\mathcal{L}^*_P(y) = {\rm div}\mathcal{L}_P(\nabla^2 y)$ in $H^{-1}(\Omega; \Bbb R^{d})$ such that for $\delta>0$ small enough and for all $y \in \mathscr{S}_\delta^M$ we have \begin{align*} |\partial \phi_\delta|_{{\mathcal{D}}_\delta}(y) = \begin{cases} \tfrac{1}{\delta}\Vert \sqrt{H_{\nabla y}}^{-1}\big(\partial_FW(\nabla y) + \beta\mathcal{L}^*_P (y) \big) \Vert_{L^2(\Omega)} & \text{if} \ {\rm div}\mathcal{L}_P(\nabla^2 y) \in H^{-1}(\Omega),\\ + \infty & \text{else}. \end{cases} \end{align*} \end{lemma} \begin{rem}\label{rem-slope} {\normalfont We remark that the expression is well defined in the following sense: If $\nabla y(x) = Y(x)$ in the above notation, then we indeed have $\partial_FW(\nabla y(x)) + \beta\mathcal{L}^*_P (y)(x) \in Y(x) \Bbb R^{d \times d}_{\rm sym}$ for a.e. $x \in \Omega$. } \end{rem} \par\noindent{\em Proof. } We (i) first prove the lower bound in the case ${\rm div}\mathcal{L}_P(\nabla^2 y) \in H^{-1}(\Omega)$ and (ii) afterwards if ${\rm div}\mathcal{L}_P(\nabla^2 y) \notin H^{-1}(\Omega)$. Finally, (iii) we establish the upper bound. (i) Suppose that ${\rm div}\mathcal{L}_P(\nabla^2 y) \in H^{-1}(\Omega)$.
Consider the minimization problem $$\min_{w \in H^1_0(\Omega)} \int_\Omega \Big(\frac{1}{2}|\sqrt{H_{\nabla y}}\nabla w|^2 - (\partial_FW(\nabla y)+ \beta\mathcal{L}_P(\nabla^2 y)): \nabla w\Big).$$ By \eqref{continuity for H}, the fact that $\sqrt{H_\mathbf{Id}} = \sqrt{\Bbb C_D}$, Lemma \ref{D-lin}(iii), and Korn's inequality we have \begin{align*} \Vert \sqrt{H_{\nabla y}}\nabla w\Vert_{L^2(\Omega)}^2& \ge \Vert \sqrt{H_\mathbf{Id}}\nabla w\Vert_{L^2(\Omega)}^2 - C\delta^{2\alpha} \Vert \nabla w\Vert_{L^2(\Omega)}^2 \\&\ge C\Vert e(w)\Vert_{L^2(\Omega)}^2 - C\delta^{2\alpha} \Vert \nabla w\Vert_{L^2(\Omega)}^2 \ge C\Vert \nabla w\Vert_{L^2(\Omega)}^2 \end{align*} for $\delta$ sufficiently small and for all $w \in H^1_0(\Omega)$. Moreover, we have $|\int_\Omega \mathcal{L}_P(\nabla^2 y): \nabla w| \le \Vert {\rm div}\mathcal{L}_P(\nabla^2 y) \Vert_{H^{-1}(\Omega)} \Vert w \Vert_{H^1(\Omega)}$ for all $w \in H^1_0(\Omega)$. Thus, the solution $\bar{w}$ of the problem exists, is unique, and satisfies $$(\partial_FW(\nabla y) + \beta\mathcal{L}_P(\nabla^2 y) ) : \nabla \varphi = \sqrt{H_{\nabla y}} \nabla \bar{w} : \sqrt{H_{\nabla y}} \nabla \varphi = H_{\nabla y} \nabla \bar{w} : \nabla \varphi $$ for all $\varphi \in H^1_0(\Omega)$. Define $\mathcal{L}^*_P(y) := \beta^{-1}( H_{\nabla y} \nabla \bar{w} - \partial_FW(\nabla y))$ and note that \begin{align}\label{nonlin-slope2} \mathcal{L}^*_P(y) : \nabla \varphi = \mathcal{L}_P(\nabla^2 y) : \nabla \varphi \ \ \ \text{ for all $\varphi \in H^1_0(\Omega)$} \end{align} as well as $\mathcal{L}^*_P(y) \in L^2(\Omega)$.
Moreover, since $\beta\mathcal{L}^*_P(y) + \partial_FW(\nabla y)= H_{\nabla y} \nabla \bar{w}$, recalling the properties of $H_{\nabla y}$ we see that Remark \ref{rem-slope} applies. Fix $\varepsilon>0$ and choose $w_\varepsilon \in C_c^\infty(\Omega;\Bbb R^d)$ with $\Vert \bar{w} - w_\varepsilon \Vert_{H^1(\Omega)} \le \varepsilon$. Letting $w_n = y - \frac{1}{n}w_\varepsilon$ we get by a Taylor expansion \begin{align*} n\delta^2( \phi_\delta(w_n) - & \phi_\delta(y)) = n\int_\Omega \partial_FW(\nabla y) : (\nabla w_n - \nabla y) + n O(\Vert \nabla w_n- \nabla y \Vert^2_{L^2(\Omega)})\notag\\ & + n \beta \int_\Omega \partial_GP(\nabla^2 y): (\nabla^2w_n - \nabla^2 y)+ n\beta O(\Vert \nabla^2 w_n - \nabla^2 y \Vert^2_{L^2(\Omega)}) \notag\\ & \ \ \ \ \ \ = - \int_\Omega \partial_FW(\nabla y) : \nabla w_\varepsilon - \beta \partial_GP(\nabla^2 y): \nabla^2w_\varepsilon + O(1/n), \end{align*} where $O(1/n)$ depends on the choice of $w_\varepsilon$. Similarly, we get by Lemma \ref{lemma: metric space-properties}(i) \begin{align*} n^2\delta^2\mathcal{D}_\delta(y,w_n)^2 &= n^2 \int_\Omega H_{\nabla y}[\nabla (y - w_n), \nabla (y - w_n)] + n^2 O(\Vert \nabla w_n- \nabla y\Vert^3_{L^3(\Omega)}) \notag \\& = \Vert \sqrt{H_{\nabla y}} \nabla w_\varepsilon \Vert^2_{L^2(\Omega)}+ O(1/n). \end{align*} For brevity we introduce $$\Phi(w) = \Big(\int_\Omega (\partial_FW(\nabla y)+\beta\mathcal{L}_P(\nabla^2 y)) : \nabla w \Big)\Vert \sqrt{H_{\nabla y}} \nabla w \Vert_{L^2(\Omega)}^{-1}.
$$ Since $\mathcal{D}_\delta(y,w_n) \to 0$, we now obtain \begin{align*} \delta|\partial \phi_\delta|_{\mathcal{D}_\delta}(y)& \ge \limsup_{n \to \infty} \frac{\delta(\phi_\delta(y) - \phi_\delta(w_n))^+}{\mathcal{D}_\delta(y,w_n)} \\&\ge \frac{\int_\Omega \partial_FW(\nabla y) : \nabla w_\varepsilon + \int_\Omega\beta \partial_GP(\nabla^2 y) : \nabla^2 w_\varepsilon }{\Vert \sqrt{H_{\nabla y}} \nabla w_\varepsilon \Vert_{L^2(\Omega)} } = \Phi(w_\varepsilon), \end{align*} where in the last step we used the definition of $\mathcal{L}_P$ in \eqref{LP-def}. Recalling the definition of $\mathcal{L}^*_P$ and \eqref{nonlin-slope2} we now derive \begin{align*} \Phi(\bar{w}) - \Phi(w_\varepsilon) + \delta|\partial \phi_\delta|_{\mathcal{D}_\delta}(y)& \ge \Phi(\bar{w}) = \frac{\int_\Omega H_{\nabla y} \nabla \bar{w} : \nabla \bar{w} }{\Vert \sqrt{H_{\nabla y}} \nabla \bar{w} \Vert_{L^2(\Omega)} } \\& = \frac{\int_\Omega \sqrt{H_{\nabla y}} \nabla \bar{w} :\sqrt{H_{\nabla y}} \nabla \bar{w} }{\Vert \sqrt{H_{\nabla y}} \nabla \bar{w} \Vert_{L^2(\Omega)} } = \Vert \sqrt{H_{\nabla y}} \nabla \bar{w} \Vert_{L^2(\Omega)} \\ & = \Vert \sqrt{H_{\nabla y}}^{-1}\big(\partial_FW(\nabla y) + \beta\mathcal{L}^*_P (y) \big) \Vert_{L^2(\Omega)}. \end{align*} By definition of $w_\varepsilon$ we get $|\Phi(\bar{w}) - \Phi(w_\varepsilon)| \to 0$ as $\varepsilon \to 0$ and the lower bound in the case ${\rm div}\mathcal{L}_P(\nabla^2 y) \in H^{-1}(\Omega)$ follows. (ii) Now suppose that ${\rm div}\mathcal{L}_P(\nabla^2 y) \notin H^{-1}(\Omega)$. Let $(y_n)_n$ be a sequence of smooth functions converging to $y$ in $W^{2,p}(\Omega)$. Then $\mathcal{L}^*_P(y_n)$ is not bounded in $L^2(\Omega)$.
Indeed, otherwise we would get by the definition of $\mathcal{L}_P$, \eqref{assumptions-P}(iii), and \eqref{nonlin-slope2} that \begin{align*} \Big|\int_\Omega \mathcal{L}_P(\nabla^2 y) : \nabla \varphi\Big| & = \Big|\int_\Omega \partial_G P(\nabla^2 y): \nabla^2 \varphi\Big| = \lim_{n \to \infty}\Big|\int_\Omega \partial_G P(\nabla^2 y_n): \nabla^2 \varphi\Big| \\ & = \lim_{n \to \infty} \Big|\int_\Omega \mathcal{L}^*_P(y_n) : \nabla\varphi\Big| \le C \Vert \nabla \varphi\Vert_{L^2(\Omega)} \end{align*} for all $\varphi \in W^{2,p}_0(\Omega)$. This, however, contradicts the assumption ${\rm div}\mathcal{L}_P(\nabla^2 y) \notin H^{-1}(\Omega)$. As energy and dissipation are $W^{2,p}(\Omega)$-continuous (see \eqref{assumptions-W}, \eqref{assumptions-P}, Lemma \ref{lemma: metric space-properties}(ii)), we find for some fixed $\varepsilon>0$ and $n$ large enough by Lemma \ref{lemma: slopes}(i) $$\varepsilon + |\partial \phi_\delta|_{{\mathcal{D}}_\delta}(y) \ge \sup_{w \neq y_n} \frac{(\phi_\delta(y_n) - \phi_\delta(w))^+}{\mathcal{D}_\delta(y_n,w)(1 + C \Vert \nabla y_n - \nabla w \Vert_{L^\infty(\Omega)})^{1/2}} =|\partial \phi_\delta|_{{\mathcal{D}}_\delta}(y_n).$$ By the representation of the slope at $y_n$ and the fact that $\mathcal{L}^*_P(y_n)$ is not bounded in $L^2(\Omega)$, the right-hand side tends to infinity as $n \to \infty$, as desired.
(iii) For the upper bound, we first use Lemma \ref{lemma: metric space-properties}(i),(ii), and Lemma \ref{th: metric space}(ii) to get \begin{align*} 1 & = \lim_{w \to v} \frac{\mathcal{D}_\delta(v,w)^2}{\mathcal{D}_\delta(v,w)^2} \ge \limsup_{w \to v} \frac{\Vert \sqrt{H_{\nabla v}} \nabla (w-v) \Vert^2_{L^2(\Omega)} - C\Vert \nabla v - \nabla w \Vert^3_{L^3(\Omega)}}{\delta^2\mathcal{D}_\delta(v,w)^2} \\ & \ge \limsup_{w \to v} \frac{\Vert \sqrt{H_{\nabla v}} \nabla (w-v) \Vert^2_{L^2(\Omega)} }{\delta^2\mathcal{D}_\delta(v,w)^2} - C\limsup_{w \to v} \Vert \nabla v - \nabla w \Vert_{L^\infty(\Omega)}\\ & = \limsup_{w \to v} \frac{\Vert \sqrt{H_{\nabla v}} \nabla (w-v) \Vert^2_{L^2(\Omega)} }{\delta^2\mathcal{D}_\delta(v,w)^2}. \end{align*} This together with Lemma \ref{lemma:C1} and the convexity of $P$ gives \begin{align*} \delta|\partial \phi_\delta|_{\mathcal{D}_\delta}(y) & = \limsup_{w \to y}\frac{\delta^2(\phi_\delta(y) - \phi_\delta(w))^+}{\delta\mathcal{D}_\delta(y,w)} \\&\le \limsup_{w \to y} \frac{\int_\Omega \partial_FW(\nabla y) : \nabla (y - w) + \int_\Omega \beta \partial_GP(\nabla^2 y): \nabla^2 (y -w)}{\Vert \sqrt{H_{\nabla y}} (\nabla w - \nabla y) \Vert_{L^2(\Omega)} }. \end{align*} Recalling the definition of $\mathcal{L}_P$ and using \eqref{nonlin-slope2} as in the lower bound, we get \begin{align*} \delta|\partial \phi_\delta|_{\mathcal{D}_\delta}(y) & \le \limsup_{w \to y} \frac{\int_\Omega (\partial_FW(\nabla y) + \beta\mathcal{L}^*_P(y) ) : \nabla (y - w)}{\Vert \sqrt{H_{\nabla y}} (\nabla w - \nabla y) \Vert_{L^2(\Omega)} }.
\end{align*} The Cauchy--Schwarz inequality then gives \begin{align*} \delta|\partial \phi_\delta|_{\mathcal{D}_\delta}(y) & \le \limsup_{w \to y} \frac{\int_\Omega \sqrt{H_{\nabla y}}^{-1}(\partial_FW(\nabla y) + \beta\mathcal{L}^*_P(y) ) : \sqrt{H_{\nabla y}}\nabla (y - w)}{\Vert \sqrt{H_{\nabla y}} (\nabla w - \nabla y) \Vert_{L^2(\Omega)} } \\ & \le \Vert \sqrt{H_{\nabla y}}^{-1}(\partial_FW(\nabla y) + \beta\mathcal{L}^*_P(y)) \Vert_{L^2(\Omega)}. \end{align*} \nopagebreak\hspace*{\fill}$\Box$ Finally, following \cite[Section 1.4]{AGS} we relate curves of maximal slope to solutions of the equations \eqref{nonlinear equation} and \eqref{linear equation}. As in \cite[Corollary 1.4.5]{AGS}, this relies on the fact that the stored energy can be written as a sum of a convex functional and a $C^1$ functional on $H^1(\Omega)$. \begin{proof}[Proof of Theorem \ref{maintheorem1}(iii) and Theorem \ref{maintheorem2}(iii)] We only give the proof for the nonlinear equation. The proof for the linear equation is easier and follows along similar lines. First, the fact that $\phi_\delta(y(t))$ is decreasing in time together with \eqref{nonlinear energy}-\eqref{assumptions-P} gives $y \in L^\infty([0,\infty ); W^{2,p}_{\mathbf{id}}(\Omega))$. Moreover, since $|y'|_{\mathcal{D}_\delta} \in L^2([0,\infty ))$ by \eqref{slopesolution} and $\mathcal{D}_\delta$ is equivalent to the $H^1(\Omega)$-norm (see Lemma \ref{lemma: metric space-properties}(ii)), we observe that $y$ is an absolutely continuous curve in the Hilbert space $H^1(\Omega)$. By \cite[Remark 1.1.3]{AGS} this implies that $y$ is differentiable for a.e. $t$ with $\partial_t \nabla y(t) \in L^2(\Omega)$ for a.e.
$t$, that \begin{align}\label{eq: derivative in Hilbert} \nabla y(t) - \nabla y(s) = \int_s^t \partial_t \nabla y(r) \, dr \ \ \ \text{a.e. in $\Omega$ \ \ \ for all } 0 \le s < t \end{align} and that $y \in W^{1,2}([0,\infty);H^1(\Omega))$. More precisely, by Fatou's lemma and Lemma \ref{lemma: metric space-properties}(i) we get for a.e. $t$ \begin{align}\label{PDE1} \begin{split} |y'|_{\mathcal{D}_\delta}(t) & = \lim_{s \to t}\frac{\mathcal{D}_\delta(y(s),y(t))}{|s-t|} = \lim_{s \to t} \delta^{-1}\Big(\frac{\delta^2\mathcal{D}_\delta(y(s),y(t))^2}{|s-t|^2} \Big)^{1/2} \\&\ge \delta^{-1} \Big( \int_\Omega \liminf_{s\to t} \Big( H_{\nabla y(t)} \Big[\frac{\nabla y(s) - \nabla y(t)}{|s-t|}, \frac{\nabla y(s) - \nabla y(t)}{|s-t|}\Big] + |s-t|^{-2} O(|\nabla y(t) - \nabla y(s)|^3) \Big) \Big)^{1/2} \\ & =\delta^{-1} \Big(\int_\Omega H_{\nabla y(t)}[\partial_t \nabla y(t),\partial_t \nabla y(t) ]\Big)^{1/2} = \delta^{-1}\Vert \sqrt{H_{\nabla y(t)}} \partial_t \nabla y(t)\Vert_{L^2(\Omega)}. \end{split} \end{align} We now determine the derivative $\frac{{\rm d}}{{\rm d}t} \phi_\delta(y(t))$ of the absolutely continuous curve $\phi_\delta \circ y$. Fix $t$ such that $\lim_{s \to t}\frac{\mathcal{D}_\delta(y(s),y(t))}{|s-t|}$ exists, which holds for a.e. $t$.
Then by Lemma \ref{lemma:C1} we find $$\lim_{s \to t} \frac{\int_\Omega W(\nabla y(s)) - \int_\Omega W(\nabla y(t)) - \int_\Omega \partial_FW(\nabla y(t)) : (\nabla y(s) - \nabla y(t))}{s-t} = 0.$$ The previous estimate together with the convexity of $P$ yields \begin{align*} \frac{{\rm d}}{{\rm d}t} \phi_\delta(y(t)) & = \lim_{s \to t} \frac{\phi_\delta(y(s)) - \phi_\delta(y(t))}{s-t} \\&\ge \liminf_{s \to t} \frac{1}{\delta^2(s-t)}\int_\Omega \Big(\partial_FW(\nabla y(t)) : (\nabla y(s) - \nabla y(t)) + \beta \partial_GP(\nabla^2 y(t)) : (\nabla^2 y(s) - \nabla^2 y(t)) \Big)\\ & = \liminf_{s \to t} \frac{1}{\delta^2(s-t)}\int_\Omega \big(\partial_FW(\nabla y(t))+ \beta\mathcal{L}^*_P(y(t)) \big) : (\nabla y(s) - \nabla y(t)), \end{align*} where as before $\beta = \delta^{2-\alpha p}$. In the last step we integrated by parts and used ${\rm div}(\mathcal{L}^*_P(y(t))) = {\rm div}(\mathcal{L}_P(\nabla^2 y(t)))$ by Lemma \ref{lemma: nonlin-slope}. Note that the last term is well defined as $\mathcal{L}^*_P(y(t)) \in L^2(\Omega)$ for a.e. $t$ by Lemma \ref{lemma: nonlin-slope} and \eqref{slopesolution}. Now \eqref{eq: derivative in Hilbert} implies \begin{align*} \frac{{\rm d}}{{\rm d}t} \phi_\delta(y(t)) & \ge \delta^{-2}\int_\Omega \sqrt{H_{\nabla y(t)}}^{-1}\big(\partial_FW(\nabla y(t)) + \beta\mathcal{L}^*_P(y(t)) \big): \sqrt{H_{\nabla y(t)}}\partial_t\nabla y(t).
\end{align*} We find by Lemma \ref{lemma: nonlin-slope}, \eqref{PDE1}, and Young's inequality $$ \frac{{\rm d}}{{\rm d}t} \phi_\delta(y(t)) \ge - \frac{1}{2} \big(|\partial \phi_\delta|^2_{\mathcal{D}_\delta}(y(t)) + |y'|^2_{\mathcal{D}_\delta}(t)\big) \ge \frac{{\rm d}}{{\rm d}t} \phi_\delta(y(t)),$$ where the last step is a consequence of the fact that $y$ is a curve of maximal slope with respect to $\phi_\delta$. Consequently, all inequalities employed in the proof are in fact equalities and we get $$ \sqrt{H_{\nabla y(t)}}^{-1}\big(\partial_FW(\nabla y(t)) + \beta\mathcal{L}^*_P(y(t)) \big) = -\sqrt{H_{\nabla y(t)}}\partial_t\nabla y(t) $$ pointwise a.e. in $\Omega$. Equivalently, recalling $\partial_{\dot{F}}R(F,\dot{F}) = \frac{1}{2} \partial^2_{F_1^2} D^2(F,F)\dot{F} = H_{F}\dot{F}$ from \eqref{intro:R}, we obtain $$ \big(\partial_FW(\nabla y(t)) + \beta\mathcal{L}^*_P(y(t)) \big) + \partial_{\dot{F}}R(\nabla y(t),\partial_t \nabla y(t)) =0$$ pointwise a.e. in $\Omega$. Multiplying the equation by $\nabla \varphi$ for $\varphi \in W_0^{2,p}(\Omega)$, using again $\int_\Omega \mathcal{L}^*_P(y(t)) : \nabla \varphi = \int_\Omega\mathcal{L}_P(\nabla^2 y(t)) : \nabla \varphi$ by Lemma \ref{lemma: nonlin-slope}, and the definition of $\mathcal{L}_P(\nabla^2 y(t))$, we conclude that $y$ is a weak solution (see \eqref{nonlinear equation2}). \end{proof} \noindent \textbf{Acknowledgements} This work has been funded by the Vienna Science and Technology Fund (WWTF) through Project MA14-009. M.F.~acknowledges support by the Alexander von Humboldt Stiftung and is grateful for the warm hospitality at \'{U}TIA AV\v{C}R, where this project was initiated. M.K.~acknowledges support by the GA\v{C}R-FWF project 16-34894L. Both authors were also supported by the M\v{S}MT \v{C}R mobility project 7AMB16AT015.
We wish to thank Ulisse Stefanelli for drawing our attention to this problem. \typeout{References} \begin{thebibliography}{10} \bibitem{AdamsFournier:05} {\sc R.~A.~Adams, J.~J.~F.~Fournier}. \newblock {\em Sobolev Spaces}. (2nd ed.) \newblock Elsevier, Amsterdam, 2003. \bibitem{AGS} {\sc L.~Ambrosio, N.~Gigli, G.~Savar\'e}. \newblock {\em Gradient Flows in Metric Spaces and in the Space of Probability Measures}. \newblock Lectures Math.\ ETH Z\"urich, Birkh\"auser, Basel, 2005. \bibitem{Antmann} {\sc S.~S.~Antman}. \newblock {\em Physically unacceptable viscous stresses}. \newblock Z.\ Angew.\ Math.\ Phys.\ {\bf 49} (1998), 980--988. \bibitem{Antmann:04} {\sc S.~S.~Antman}. \newblock {\em Nonlinear Problems of Elasticity}. \newblock Springer, New York, 2004. \bibitem{Ball:77} {\sc J.~M.~Ball}. \newblock {\em Convexity conditions and existence theorems in nonlinear elasticity}. \newblock Arch.\ Ration.\ Mech.\ Anal.\ {\bf 63} (1977), 337--403. \bibitem{BCO} {\sc J.~M.~Ball, J.~C.~Currie, P.~L.~Olver}. \newblock {\em Null Lagrangians, weak continuity, and variational problems of arbitrary order}. \newblock J.\ Funct.\ Anal.\ {\bf 41} (1981), 135--174. \bibitem{BallSenguel:15} {\sc J.~M.~Ball, Y.~Seng\"ul}. \newblock {\em Quasistatic nonlinear viscoelasticity and gradient flows}. \newblock J.\ Dyn.\ Diff.\ Equat.\ {\bf 27} (2015), 405--442. \bibitem{Batra} {\sc R.~C.~Batra}. \newblock {\em Thermodynamics of non-simple elastic materials}. \newblock J.\ Elasticity\ {\bf 6} (1976), 451--456. \bibitem{Braides} {\sc A.~Braides}. \newblock {\em Local Minimization, Variational Evolution and $\Gamma$-convergence}.
\newblock Springer Verlag, Berlin, 2014. \bibitem{BCGS} {\sc A.~Braides, M.~Colombo, M.~Gobbino, M.~Solci}. \newblock {\em Minimizing movements along a sequence of functionals and curves of maximal slope}. \newblock Comptes Rendus Mathematique\ {\bf 354} (2016), 685--689. \bibitem{Ciarlet} {\sc P.~G.~Ciarlet}. \newblock{\em Mathematical Elasticity, Vol.~I: Three-dimensional Elasticity}. \newblock North-Holland, Amsterdam, 1988. \bibitem{chen} {\sc P.~J.~Chen, M.~E.~Gurtin, W.~O.~Williams}. \newblock{\em On the thermodynamics of non-simple elastic materials with two temperatures}. \newblock Zeit.\ Angew.\ Math.\ Phys.\ {\bf 20} (1969), 107--112. \bibitem{CG} {\sc M.~Colombo, M.~Gobbino}. \newblock {\em Passing to the limit in maximal slope curves: from a regularized Perona-Malik equation to the total variation flow}. \newblock Math.\ Models Methods Appl.\ Sci.\ {\bf 22} (2012), 1250017. \bibitem{DalMaso:93} {\sc G.~Dal Maso}. \newblock {\em An introduction to $\Gamma$-convergence}. \newblock Birkh\"auser, Boston, 1993. \bibitem{DalMasoNegriPercivale:02} {\sc G.~Dal Maso, M.~Negri, D.~Percivale}. \newblock {\em Linearized elasticity as $\Gamma$-limit of finite elasticity}. \newblock Set-valued\ Anal.\ {\bf 10} (2002), 165--183. \bibitem{DGMT} {\sc E.~De Giorgi, A.~Marino, M.~Tosques}. \newblock {\em Problems of evolution in metric spaces and maximal decreasing curve}. \newblock Atti\ Accad.\ Naz.\ Lincei Rend.\ Cl.\ Sci.\ Fis.\ Mat.\ Natur.\ {\bf 68} (1980), 180--187. \bibitem{Demoulini} {\sc S.~Demoulini}.
\newblock {\em Weak solutions for a class of nonlinear systems of viscoelasticity}. \newblock Arch.\ Ration.\ Mech.\ Anal.\ {\bf 155} (2000), 299--334. \bibitem{FriedrichSchmidt:2011} {\sc M.~Friedrich, B.~Schmidt}. \newblock {\em An atomistic-to-continuum analysis of crystal cleavage in a two-dimensional model problem}. \newblock J.\ Nonlin.\ Sci.\ {\bf 24} (2014), 145--183. \bibitem{FrieseckeJamesMueller:02} {\sc G.~Friesecke, R.~D.~James, S.~M{\"u}ller}. \newblock {\em A theorem on geometric rigidity and the derivation of nonlinear plate theory from three-dimensional elasticity}. \newblock Comm.\ Pure Appl.\ Math.\ {\bf 55} (2002), 1461--1506. \bibitem{Gigli} {\sc N.~Gigli}. \newblock {\em On the heat flow on metric measure spaces: existence, uniqueness and stability}. \newblock Calc.\ Var.\ PDE\ {\bf 39} (2010), 101--120. \bibitem{HealeyKroemer:09} {\sc T.~J.~Healey, S.~Kr\"{o}mer}. \newblock {\em Injective weak solutions in second-gradient nonlinear elasticity}. \newblock ESAIM Control Optim.\ Calc.\ Var.\ {\bf 15} (2009), 863--871. \bibitem{MOS} {\sc A.~Mielke, C.~Ortner, Y.~\c{S}eng\"ul}. \newblock {\em An approach to nonlinear viscoelasticity via metric gradient flows}. \newblock SIAM J.\ Math.\ Anal.\ {\bf 46} (2014), 1317--1347. \bibitem{MielkeRoubicek:16} {\sc A.~Mielke, T.~Roub\'{\i}\v{c}ek}. \newblock {\em Rate-independent elastoplasticity at finite strains and its numerical approximation}. \newblock Math.\ Models \& Methods in Appl.\ Sci.\ {\bf 26} (2016), 2203--2236. \bibitem{MielkeRoubicek} {\sc A.~Mielke, T.~Roub\'{\i}\v{c}ek}. \newblock {\em Thermoviscoelasticity in Kelvin-Voigt rheology at large strains}.
\newblock In preparation. \bibitem{Ortner} {\sc C.~Ortner}. \newblock {\em Two Variational Techniques for the Approximation of Curves of Maximal Slope}. \newblock Technical report NA05/10, Oxford University Computing Laboratory, Oxford, UK, 2005. \bibitem{Podio} {\sc P.~Podio-Guidugli}. \newblock {\em Contact interactions, stress, and material symmetry for nonsimple elastic materials}. \newblock Theor.\ Appl.\ Mech.\ {\bf 28--29} (2002), 261--276. \bibitem{S1} {\sc E.~Sandier, S.~Serfaty}. \newblock {\em Gamma-convergence of gradient flows with applications to Ginzburg-Landau}. \newblock Comm.\ Pure Appl.\ Math.\ {\bf 57} (2004), 1627--1672. \bibitem{S2} {\sc S.~Serfaty}. \newblock {\em Gamma-convergence of gradient flows on Hilbert and metric spaces and applications}. \newblock Discrete Contin.\ Dyn.\ Syst.\ Ser.\ A\ {\bf 31} (2011), 1427--1451. \bibitem{Schmidt:08} {\sc B.~Schmidt}. \newblock {\em Linear $\Gamma$-limits of multiwell energies in nonlinear elasticity theory}. \newblock Continuum Mech.\ Thermodyn.\ {\bf 20} (2008), 375--396. \bibitem{Toupin:62} {\sc R.~A.~Toupin}. \newblock {\em Elastic materials with couple stresses}. \newblock Arch.\ Ration.\ Mech.\ Anal.\ {\bf 11} (1962), 385--414. \bibitem{Toupin:64} {\sc R.~A.~Toupin}. \newblock {\em Theory of elasticity with couple stress}. \newblock Arch.\ Ration.\ Mech.\ Anal.\ {\bf 17} (1964), 85--112. \end{thebibliography} \end{document}
\begin{document} \title{A family of interpolation inequalities involving products of low-order derivatives} \begin{abstract} Gagliardo--Nirenberg interpolation inequalities relate Lebesgue norms of iterated derivatives of a function. We present a generalization of these inequalities in which the low-order term of the right-hand side is replaced by a Lebesgue norm of a pointwise product of derivatives of the function. \end{abstract} \section{Introduction} The symbol $\lesssim$ denotes inequalities holding up to a constant, which can depend on the parameters of the statement, but not on the unknown function. For convenience, we restrict the statements to smooth functions, although they persist in appropriate Sobolev spaces, by usual regularization arguments. \subsection{The classical Gagliardo--Nirenberg inequality} In their full generality, the following interpolation inequalities date back to Gagliardo's and Nirenberg's respective short communications at the 1958 International Congress of Mathematicians in Edinburgh, later published in \cite{gag1,gag2,nir1,nir2}. We refer to~\cite{fiorenza2021detailed} for a recent proof with a historical perspective. \begin{theorem} \label{thm:GN} Let $p,q,r \in [1,\infty]$, $0 \leq k \leq j < m \in \mathbb{N}$ and $\theta \in [\theta^*,1]$ where \begin{equation} \label{eq:theta-critic} \theta^*:= \frac{j - k}{m - k}. \end{equation} Assume that \begin{equation} \label{eq:GN-relation} \frac{1}{p}-j = \theta \left( \frac{1}{r} - m \right) + (1-\theta) \left(\frac{1}{q} - k\right). \end{equation} Then, for $u \in C^\infty_c(\mathbb{R};\mathbb{R})$, \begin{equation} \label{eq:GN-estimate} \| D^j u \|_{L^p(\mathbb{R})} \lesssim \| D^m u \|_{L^r(\mathbb{R})}^\theta \| D^k u \|_{L^q(\mathbb{R})}^{1-\theta}.
\end{equation} \end{theorem} In this 1D setting, estimate \eqref{eq:GN-estimate} had also been derived with optimal constants by Landau~\cite{landau}, Kolmogorov~\cite{kolmogoroff} and Stein~\cite{stein} for particular cases of the parameters and exponents. \begin{remark}[Role of $k$] Usually, such inequalities are stated with $k = 0$. The less frequent case $k > 0$ can of course be reduced to the standard case $k = 0$ by applying the latter to the function $D^k u$. We include it in the above statement to highlight the symmetry with our generalization in \cref{thm:main}. \end{remark} \begin{remark}[Geometric setting] \label{rk:geometry} We restrict the statements in this note to the one-dimensional case. \emph{Mutatis mutandis} in \eqref{eq:GN-relation}, \cref{thm:GN} remains valid in~$\mathbb{R}^d$, or sufficiently regular bounded domains \cite{nir1}, or even exterior domains~\cite{crispo2004interpolation}, up to exceptional cases of the parameters. In particular, the case $p = \infty$ is only valid when $d = 1$ (see \cite[Theorem 1.1 and comments below]{fiorenza2021detailed}). \end{remark} \begin{remark}[Fractional versions] \label{rk:fractional} Although historical statements only involved integer orders of the derivatives, generalizations of \cref{thm:GN} to fractional Sobolev spaces \cite{brezis2,brezis1} or Hölder spaces \cite{kufner1995interpolation} are now well understood. \end{remark} \begin{remark}[Bounded domain] \label{rk:GN-bounded} Let $0 \leq k_0 \leq k$ and $s \in [1,\infty]$. For $u \in C^\infty([0,1];\mathbb{R})$, which is not necessarily compactly supported in $(0,1)$, \begin{equation} \label{eq:GN-bounded} \| D^j u \|_{L^p(0,1)} \lesssim \| D^m u \|_{L^r(0,1)}^\theta \| D^k u \|_{L^q(0,1)}^{1-\theta} + \| D^{k_0} u \|_{L^s(0,1)}. \end{equation} The supplementary (inhomogeneous, low-order) term is necessary, as one could have $D^m u \equiv 0$ on $(0,1)$ but $D^j u \neq 0$ (see \cref{sec:GN-bounded}).
\end{remark} \subsection{Statement of the main result} Motivated by applications to nonlinear control theory (see \cref{sec:control}), our main result is the following one-dimensional generalization of \cref{thm:GN} in which the low-order term $\|D^k u\|_{L^q(\mathbb{R})}$ of the right-hand side is replaced by a Lebesgue norm of a pointwise product of low-order derivatives of the function. \begin{theorem} \label{thm:main} Let $p,q,r \in [1,\infty]$, $\kappa \in \mathbb{N}^*$ and $0 \leq k_1 \leq \dotsb \leq k_{\kappa} \leq j < m \in \mathbb{N}$. Let $\bar{k} := (k_1 + \dotsb + k_\kappa) / \kappa$. Let $\theta \in [\theta^*,1]$, where \begin{equation} \label{eq:theta*} \theta^* := \frac{j - \bar{k}}{m - \bar{k}}. \end{equation} Assume that \begin{equation} \label{eq:main-relation} \frac{1}{p}-j = \theta \left( \frac{1}{r} - m \right) + (1-\theta) \left(\frac{1}{q \kappa} - \bar{k} \right). \end{equation} Then, for $u \in C^\infty_c(\mathbb{R};\mathbb{R})$, \begin{equation} \label{eq:main-estimate} \| D^j u \|_{L^p(\mathbb{R})} \lesssim \| D^m u \|_{L^r(\mathbb{R})}^\theta \| D^{k_1} u \dotsb D^{k_\kappa} u \|_{L^{q}(\mathbb{R})}^{(1-\theta)/\kappa}. \end{equation} \end{theorem} \begin{remark} \label{rk:dkqk} Heuristically, everything behaves as if the pointwise product term was replaced by $\| D^{\bar k} u \|_{L^{q \kappa}(\mathbb{R})}$, see also \cref{sec:open}. \end{remark} \begin{remark} \label{rmk:pqr} In the critical case $\theta = \theta^*$, relation \eqref{eq:main-relation} is equivalent to \begin{equation} \label{eq:critic-pqr} \frac{1}{p} = \frac{\theta}{r} + \frac{1-\theta}{q \kappa}. \end{equation} \end{remark} \begin{corollary}[Bounded domain] \label{cor:main-bounded} Let $0 \leq k_0 \leq k_1$ and $s \in [1,\infty]$. 
For $u \in C^\infty([0,1];\mathbb{R})$, which is not necessarily compactly supported in $(0,1)$, \begin{equation} \label{eq:main-estimate-bounded} \| D^j u \|_{L^p(0,1)} \lesssim \| D^m u \|_{L^r(0,1)}^\theta \| D^{k_1} u \dotsb D^{k_\kappa} u \|_{L^{q}(0,1)}^{(1-\theta)/\kappa} + \| D^{k_0} u \|_{L^s(0,1)}. \end{equation} \end{corollary} Our proof is inspired by Nirenberg's historical one, as rewritten recently in~\cite{fiorenza2021detailed}. Compared with the usual case, we encounter two difficulties. First, the additive version of \eqref{eq:main-estimate} now involves a compactness argument (see \cref{lem:compact}). Second, and maybe more importantly, the pointwise product nature of the new term breaks the usual subdivision argument (see \cref{sec:subdivision}) since, within a small interval where $u$ is a polynomial of low degree, this term could vanish identically. To circumvent this difficulty, we introduce a notion of ``nowhere-polynomial'' function (see \cref{sec:nowhere}) and we prove that any smooth function can be approximated in this class. An illustration of these difficulties is that pointwise multiplicative inequalities of the form $|u'(x)|^2 \lesssim |u(x) u''(x)|$ usually require subtracting from $u$ a local polynomial approximation, and formulating the estimate using the Hardy--Littlewood maximal functions $Mu$, $Mu'$ and $Mu''$ instead of the raw functions (see e.g.\ \cite[Theorem 1]{js1994pointwise}).
\end{equation} \end{corollary} \begin{proof} This follows from \cref{thm:main} with $\kappa = 2$, $\bar{k} = \frac k 2$, $\theta = \theta^* = \frac{k - \bar{k}}{2k - \bar{k}} = \frac 13$, for which $\frac 16 = \frac{\theta}{\infty} + \frac{1-\theta}{2 \cdot 2}$ so that \eqref{eq:critic-pqr} holds. The estimate for non-smooth $u$ follows by standard regularization arguments. \end{proof} \begin{corollary} \label{cor:cubic} For $u \in W^{3,\infty}_0((0,1);\mathbb{R})$, \begin{equation} \| u'' \|_{L^{12}(0,1)}^{12} \lesssim \| u''' \|_{L^\infty(0,1)}^6 \| u u' u'' \|_{L^2(0,1)}^2. \end{equation} \end{corollary} \begin{proof} This follows from \cref{thm:main} with $\kappa = 3$, $\bar{k} = 1$, $\theta = \theta^* = \frac{2-1}{3-1} = \frac 12$, for which $\frac 1 {12} = \frac \theta \infty + \frac{1-\theta}{2 \cdot 3}$ so that \eqref{eq:critic-pqr} holds. The estimate for non-smooth $u$ follows by standard regularization arguments. Incidentally, this particular estimate can also be checked from the usual Gagliardo--Nirenberg estimate of \cref{thm:GN} \begin{equation} \| u'' \|_{L^{12}(\mathbb{R})} \lesssim \| u''' \|_{L^\infty(\mathbb{R})}^{\frac 12} \| u' \|_{L^6(\mathbb{R})}^{\frac 12} \end{equation} and the coercivity estimate \eqref{eq:uu'u''-mino} proved below. \end{proof} \subsection{Some open problems} \label{sec:open} As mentioned in \cref{rk:geometry}, the usual Gagliardo--Nirenberg inequalities admit generalizations in $\mathbb{R}^d$. It would be natural to investigate such generalizations of \cref{thm:main}. A difficulty in this direction might be that one has to determine the appropriate (symmetric?) generalizations of the product $D^{k_1} u \dotsb D^{k_\kappa} u$ with partial derivatives. As mentioned in \cref{rk:fractional}, the usual Gagliardo--Nirenberg inequalities admit generalizations in fractional Sobolev spaces. 
It would be natural to investigate such generalizations of \cref{thm:main}, especially since, as noted in \cref{rk:dkqk}, even for integer values of the parameters, the product $D^{k_1} u \dotsb D^{k_\kappa} u$ already behaves as a fractional Sobolev norm when $\bar{k} \notin \mathbb{N}$. Another particularly challenging problem concerns the possibility to relax the assumptions $k_i \leq j$ of \cref{thm:main}. A natural (weaker) assumption would be $\bar{k} \leq j$. In particular, one can wonder in which settings the following result holds (corresponding to $j = \bar{k}$ and $\theta = \theta^* = 0$). \begin{open} \label{open} Let $q \in [1,\infty]$, $\kappa \in \mathbb{N}^*$ and $0 \leq k_1 \leq \dotsb \leq k_{\kappa} \in \mathbb{N}$. Let $\bar{k} := (k_1 + \dotsb + k_\kappa) / \kappa$. When is it true that, for $u \in C^\infty_c(\mathbb{R};\mathbb{R})$, \begin{equation} \label{eq:dbarku} \| D^{\bar{k}} u \|_{L^{q \kappa}(\mathbb{R})} \lesssim \| D^{k_1} u \dotsb D^{k_\kappa} u \|_{L^{q}(\mathbb{R})}^{\frac 1 \kappa}, \end{equation} where the left-hand side should be interpreted as the fractional $\dot{W}^{\bar{k},q\kappa}(\mathbb{R})$ semi-norm of $u$ when $\bar{k}$ is not an integer. \end{open} As noted in \cref{rk:dkqk}, positive answers to \cref{open} imply \cref{thm:main} (up to exceptional cases) thanks to the (fractional) Gagliardo--Nirenberg inequality $\| D^j u \|_{L^p(\mathbb{R})} \lesssim \| D^m u \|_{L^r(\mathbb{R})}^\theta \| D^{\bar k} u \|_{L^{q\kappa}(\mathbb{R})}^{1-\theta}$ (see e.g. \cite{brezis2,brezis1} when $\bar{k} \notin \mathbb{N}$). Unfortunately, the proofs of \cref{sec:proof} do rely on the assumptions $k_i \leq j$. In particular, estimate \eqref{eq:lem:compact} below is false if there exists $i$ such that $k_i > j$. Nevertheless, \emph{ad hoc} arguments entail that \eqref{eq:dbarku} holds for some examples, hinting that \cref{open} might have positive answers. 
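Concretely, with $q = 2$, the three estimates of the lemma below are instances of \eqref{eq:dbarku} for the following parameter choices (a bookkeeping check, spelled out for convenience):
\begin{align*}
(k_1,k_2) = (0,1): \quad & \bar{k} = \tfrac12, \ q\kappa = 4, \quad \| u \|_{\dot{W}^{\frac12,4}(\mathbb{R})} \lesssim \| u u' \|_{L^2(\mathbb{R})}^{\frac12}, \\
(k_1,k_2) = (0,2): \quad & \bar{k} = 1, \ q\kappa = 4, \quad \| u' \|_{L^4(\mathbb{R})} \lesssim \| u u'' \|_{L^2(\mathbb{R})}^{\frac12}, \\
(k_1,k_2,k_3) = (0,1,2): \quad & \bar{k} = 1, \ q\kappa = 6, \quad \| u' \|_{L^6(\mathbb{R})} \lesssim \| u u' u'' \|_{L^2(\mathbb{R})}^{\frac13},
\end{align*}
in agreement with the heuristic of \cref{rk:dkqk}.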
\begin{lemma} For $u \in C^\infty_c(\mathbb{R};\mathbb{R})$, \begin{align} \label{eq:w12-4} \| u \|_{\dot{W}^{\frac 12, 4}(\mathbb{R})} & \lesssim \| u u' \|^{\frac 12}_{L^2(\mathbb{R})}, \\ \label{eq:u'l4} \| u' \|_{L^4(\mathbb{R})} & \lesssim \| u u'' \|_{L^2(\mathbb{R})}^{\frac 12}, \\ \label{eq:uu'u''-mino} \| u' \|_{L^6(\mathbb{R})} & \lesssim \| u u' u'' \|_{L^2(\mathbb{R})}^{\frac 13}. \end{align} \end{lemma} \begin{proof} Estimate \eqref{eq:w12-4} can be derived from the remark that $(u^2)' = 2 u u'$. Hence \begin{equation} \| u \|_{\dot{W}^{\frac 12, 4}(\mathbb{R})} \lesssim \| |u| \|_{\dot{W}^{\frac 12, 4}(\mathbb{R})} \lesssim \| u^2 \|_{\dot{H}^1(\mathbb{R})}^{\frac 12} = \| 2 u u' \|_{L^2(\mathbb{R})}^{\frac 12}, \end{equation} where the first estimate with the absolute value is derived in \cite[Théorème 2]{lemarie} and the second estimate in \cite[Section 5.4.4]{runst1996sobolev}. Estimates \eqref{eq:u'l4} and \eqref{eq:uu'u''-mino} come from straightforward integrations by parts and the Cauchy--Schwarz inequality: \begin{equation} \int_\mathbb{R} (u')^4 = - 3 \int_\mathbb{R} u (u')^2 u'' \leq 3 \left( \int_\mathbb{R} (u')^4 \right)^{\frac 12} \left( \int_\mathbb{R} (u u'')^2 \right)^{\frac 12} \end{equation} and \begin{equation} \int_\mathbb{R} (u')^6 = - 5 \int_\mathbb{R} u (u')^4 u'' \leq 5 \left( \int_\mathbb{R} (u')^6 \right)^{\frac 12} \left( \int_\mathbb{R} (u u' u'')^2 \right)^{\frac 12}, \end{equation} which entail \eqref{eq:u'l4} and \eqref{eq:uu'u''-mino}. \end{proof} Estimate \eqref{eq:u'l4} above is very classical, for example stated as Lemma~1 in~\cite{kalamajska2012some}, which contains many interesting generalizations. \section{Proofs on the real line} \label{sec:proof} \subsection{Sobolev inequalities with localized low-order terms} In this paragraph, we start by proving the natural statement that low-order terms in Sobolev inequalities can be localized in arbitrarily small subdomains. 
\begin{lemma}[Usual Sobolev embedding] \label{lem:Sobolev} Let $0 \leq j < m \in \mathbb{N}$. Let $\omega \subset (0,1)$ be a non-empty open interval. For $u \in C^\infty([0,1];\mathbb{R})$, \begin{equation} \label{eq:Sobolev} \| D^j u \|_{L^\infty(\omega)} \lesssim \| D^m u \|_{L^1(\omega)} + \| u \|_{L^1(\omega)}. \end{equation} \end{lemma} \begin{lemma}[Localized $W^{1,1} \hookrightarrow L^\infty$ embedding] \label{lem:linf-l1omega} Let $\omega \subset (0,1)$ be a non-empty open interval. For $u \in C^\infty([0,1];\mathbb{R})$, \begin{equation} \label{eq:linf-l1omega} \| u \|_{L^\infty(0,1)} \lesssim \| u' \|_{L^1(0,1)} + \| u \|_{L^1(\omega)}. \end{equation} \end{lemma} \begin{proof} We write $\omega = (x_1, x_2)$ with $0 \leq x_1 < x_2 \leq 1$. Let $u \in C^\infty([0,1];\mathbb{R})$. For any $x \in [0,1]$ and $x_0 \in (x_1,x_2)$, \begin{equation} u(x) = u(x_0) + \int_{x_0}^x u'(y) \,\mathrm{d} y. \end{equation} Hence, for any $x \in [0,1]$, averaging over $x_0 \in (x_1,x_2)$, \begin{equation} u(x) = \frac{1}{x_2-x_1} \int_{x_1}^{x_2} u(x_0) \,\mathrm{d} x_0 + \frac{1}{x_2-x_1} \int_{x_1}^{x_2} \left( \int_{x_0}^x u'(y) \,\mathrm{d} y \right) \,\mathrm{d} x_0, \end{equation} which entails \eqref{eq:linf-l1omega}. \end{proof} \begin{lemma}[Localized Sobolev embedding] \label{lem:Sobolev-loc} Let $p,q,r \in [1,\infty]$. Let $0 \leq k \leq j < m \in \mathbb{N}$. Let $\omega \subset (0,1)$ be a non-empty open interval. For $u \in C^\infty([0,1];\mathbb{R})$, \begin{equation} \label{eq:lem:omega-1} \| D^j u \|_{L^p(0,1)} \lesssim \| D^m u \|_{L^r(0,1)} + \| D^k u \|_{L^q(\omega)}. \end{equation} \end{lemma} \begin{proof} By monotonicity of the Lebesgue spaces on the bounded domain $(0,1)$, it is sufficient to prove the result for $p = \infty$ and $q = r = 1$. By \cref{lem:Sobolev}, $\|D^j u \|_{L^1(\omega)} = \|D^{j-k} D^k u\|_{L^1(\omega)} \lesssim \|D^{m-k} D^k u\|_{L^1(\omega)} + \|D^k u\|_{L^1(\omega)}$, so it is sufficient to prove the result with $k = j$.
Hence, up to working with $D^j u$ instead of $u$, it is sufficient to prove the result with $k = j = 0$ and $m \geq 1$. We will therefore prove \begin{equation} \label{eq:lem:omega-2} \| u \|_{L^\infty(0,1)} \lesssim \| D^m u \|_{L^1(0,1)} + \| u \|_{L^1(\omega)}. \end{equation} For $m = 1$, this corresponds to \cref{lem:linf-l1omega}. Take $m > 1$. By \cref{lem:linf-l1omega}, there exists $C > 0$ such that, for each $i = 0, \dotsc, m-1$, \begin{equation} \| D^i u \|_{L^\infty(0,1)} \leq C \left( \| D^{i+1} u \|_{L^1(0,1)} + \| D^i u \|_{L^1(\omega)} \right). \end{equation} Multiplying these inequalities by $C^i$ and summing over $i$ yields \begin{equation} \sum_{i=0}^{m-1} C^i \| D^i u \|_{L^\infty(0,1)} \leq C \sum_{i=0}^{m-1} C^i \left( \| D^{i+1} u \|_{L^1(0,1)} + \| D^i u \|_{L^1(\omega)} \right). \end{equation} Thus, bounding the $L^1$ norms by $L^\infty$ and cancelling the terms on both sides, \begin{equation} \| u \|_{L^\infty(0,1)} \leq C^m \| D^m u \|_{L^1(0,1)} + \sum_{i=0}^{m-1} C^{i+1} \| D^i u \|_{L^1(\omega)}. \end{equation} By \cref{lem:Sobolev}, for each $i = 0, \dotsc, m-1$, $\|D^i u\|_{L^1(\omega)} \lesssim \|u\|_{L^1(\omega)} + \|D^m u\|_{L^1(\omega)}$, which concludes the proof of \eqref{eq:lem:omega-2}. \end{proof} \subsection{Sobolev inequality involving a product of derivatives} In this paragraph, we prove the following localized additive version of \eqref{eq:main-estimate}, by induction on the length of the product and a compactness argument. \begin{proposition} \label{lem:compact} Let $p,q,r \in [1,\infty]$, $\kappa \in \mathbb{N}^*$ and $0 \leq k_1 \leq \dotsb \leq k_{\kappa} \leq j < m \in \mathbb{N}$. Let $\omega \subset (0,1)$ be a non-empty open interval. For $u \in C^\infty([0,1];\mathbb{R})$, \begin{equation} \label{eq:lem:compact} \| D^j u \|_{L^p(0,1)} \lesssim \| D^m u \|_{L^r(0,1)} + \| D^{k_1}u \dotsb D^{k_\kappa} u \|_{L^q(\omega)}^{\frac 1\kappa}. 
\end{equation} \end{proposition} \begin{proof} Without loss of generality, up to working with $D^{k_1} u$, one can assume that $k_1 = 0$. When $m = 1$, $k_1 = \dotsb = k_\kappa = j = 0$, so that the statement follows from \cref{lem:Sobolev-loc}. Hence, one can assume that $m \geq 2$. By monotonicity of the Lebesgue spaces on bounded domains, it is sufficient to prove the result with $p = \infty$ and $q = r = 1$. By \cref{lem:Sobolev-loc}, $\| D^j u \|_{L^\infty(0,1)} \lesssim \| D^m u \|_{L^1(0,1)} + \| D^j u \|_{L^1(0,1)}$. Hence, it is sufficient to prove the result with $p = 1$. We proceed by induction on $\kappa \in \mathbb{N}^*$. The case $\kappa = 1$ corresponds to \cref{lem:Sobolev-loc}. Let $\kappa > 1$. Assume by contradiction that the result holds for products of up to $\kappa - 1$ terms, but not for $\kappa$ terms. One could therefore find a sequence $u_n \in C^\infty([0,1];\mathbb{R})$ such that \begin{equation} \label{eq:compact-un-1} \| D^j u_n \|_{L^1(0,1)} > n \left( \| D^m u_n \|_{L^1(0,1)} + \| u_n D^{k_2} u_n \dotsb D^{k_\kappa} u_n \|_{L^1(\omega)}^{\frac 1\kappa} \right). \end{equation} In particular, $\| D^j u_n \|_{L^1(0,1)} > 0$. Since both sides of \eqref{eq:compact-un-1} are $1$-homogeneous in $u_n$, up to a rescaling, one can assume that \begin{equation} \label{eq:lem:compact-2} \| D^j u_n \|_{L^1(0,1)} + \| u_n \|_{L^1(0,1)} = 1. \end{equation} By \eqref{eq:compact-un-1}, this entails that $u_n$ is uniformly bounded in $W^{m,1}(0,1)$. Hence, by the Rellich--Kondrachov compact embedding theorem, there exists $\bar{u} \in W^{m-1,1}(0,1)$ such that, up to a subsequence, $u_n \to \bar{u}$ strongly in $W^{m-1,1}(0,1)$. Since the sequence converges strongly in $W^{j,1}(0,1)$, the normalization \eqref{eq:lem:compact-2} implies \begin{equation} \|D^j \bar{u}\|_{L^1(0,1)} + \| \bar{u} \|_{L^1(0,1)} = 1, \end{equation} which ensures that $\bar u \neq 0$. By the embedding $W^{1,1}(0,1) \hookrightarrow C^0([0,1])$, since $m-1\geq 1$, $\bar{u} \in C^0([0,1])$ and $u_n \to \bar{u}$ in $C^0([0,1])$.
Moreover, by \eqref{eq:compact-un-1} and \eqref{eq:lem:compact-2}, $\| D^m u_n \|_{L^1(0,1)} \to 0$, so that $D^m \bar{u} = 0$ and $\bar{u}$ is a polynomial of degree at most $m-1$. Since $\bar{u} \neq 0$, it vanishes at most finitely many times. Thus, there exist a small non-empty open interval $\omega' \subset \omega$ and $\delta \in (0,1)$ such that, for $n$ large enough, $|u_n| \geq \delta$ on $\omega'$. Hence, \begin{equation} \| u_n D^{k_2} u_n \dotsb D^{k_\kappa} u_n \|_{L^1(\omega)}^{\frac 1\kappa} \geq \delta^{\frac 1 \kappa} \| D^{k_2} u_n \dotsb D^{k_\kappa} u_n \|_{L^1(\omega')}^{\frac 1 \kappa}. \end{equation} Moreover, since $u_n$ is uniformly bounded in $W^{m,1}(0,1)$, the $D^{k_i} u_n$ are uniformly bounded in $L^\infty(0,1)$ by \cref{lem:Sobolev}. Hence there exists $0 < c \leq 1$ such that \begin{equation} \delta^{\frac 1 \kappa} \| D^{k_2} u_n \dotsb D^{k_\kappa} u_n \|_{L^1(\omega')}^{\frac 1 \kappa} \geq c \| D^{k_2} u_n \dotsb D^{k_\kappa} u_n \|_{L^1(\omega')}^{\frac{1}{\kappa-1}}. \end{equation} Hence, substituting in \eqref{eq:compact-un-1}, and applying the induction hypothesis on the interval $\omega'$, there exists $C > 0$ such that \begin{equation} \begin{split} \| D^j u_n \|_{L^1(0,1)} & > n \left( \|D^m u_n \|_{L^1(0,1)} + c \| D^{k_2} u_n \dotsb D^{k_\kappa} u_n \|_{L^1(\omega')}^{\frac{1}{\kappa-1}} \right) \\ & \geq \frac{n c}{C} \| D^j u_n \|_{L^1(0,1)}, \end{split} \end{equation} which yields a contradiction for $n$ large enough since $\| D^j u_n \|_{L^1(0,1)} > 0$. \end{proof} \begin{corollary} \label{lem:scale} Let $p,q,r \in [1,\infty]$, $\kappa \in \mathbb{N}^*$ and $0 \leq k_1 \leq \dotsb \leq k_{\kappa} \leq j < m \in \mathbb{N}$. For $u \in C^\infty([0,1];\mathbb{R})$ and $I \subset (0,1)$ a non-empty interval of length $\ell$, \begin{equation} \label{eq:lem:scale} \ell^{j - \frac 1 p} \| D^j u \|_{L^p(I)} \lesssim \ell^{m - \frac 1 r} \| D^m u \|_{L^r(I)} + \ell^{\bar{k} - \frac {1}{q\kappa}} \| D^{k_1}u \dotsb D^{k_\kappa} u \|_{L^q(I)}^{\frac 1\kappa}, \end{equation} where $\bar{k} := (k_1 + \dotsb + k_\kappa) / \kappa$. \end{corollary} \begin{proof} This is a straightforward consequence of \cref{lem:compact} by a scaling argument.
Indeed, write $I = (x_0, x_0 + \ell)$ for some $x_0 \in [0,1)$. For $u \in C^\infty([0,1];\mathbb{R})$, let $v \in C^\infty([0,1];\mathbb{R})$ be defined by $v(x) := u(x_0 + \ell x)$, so that \eqref{eq:lem:scale} follows from \eqref{eq:lem:compact} applied to $v$, with the same constant. \end{proof} \subsection{Nowhere-polynomial functions} \label{sec:nowhere} In this paragraph, we introduce a notion of ``nowhere-polynomial'' function, as well as an approximation result by this subclass of smooth functions. Our motivation is that we wish to interpret the pointwise product $D^{k_1} u \dotsb D^{k_\kappa} u$ as playing the role of the low-order term in the interpolation inequality and thus prevent it from vanishing on significant portions of the support of $u$. \begin{definition} Let $I$ be a (closed or open) non-empty interval of $\mathbb{R}$. We say that $u \in C^\infty(I;\mathbb{R})$ is \emph{nowhere-polynomial} when \begin{equation} \label{eq:nowhere} \mu \Big(\{ u \neq 0 \} \cap \left( \cup_{i \in \mathbb{N}^*} \{ D^i u = 0 \} \right) \Big) = 0, \end{equation} where $\mu$ denotes the Lebesgue measure on $\mathbb{R}$. \end{definition} \begin{lemma} \label{lem:nowhere-exists} Let $I \subset (0,1)$ be a non-empty open interval with $\bar{I} \subset (0,1)$. There exists a nowhere-polynomial $\psi \in C^\infty_c((0,1);\mathbb{R})$ such that $\psi > 0$ on $I$. \end{lemma} \begin{proof} Let $\chi(t) := e^{- \frac{1}{t(1-t)}}$ for $t \in (0,1)$, extended by $0$ on $\mathbb{R}$. It is classical that $\chi \in C^\infty(\mathbb{R})$, $\operatorname{supp} \chi = [0,1]$, $\chi > 0$ on $(0,1)$ and that, for every $i \geq 1$, $D^i\chi(t) = R_i(t) \chi(t)$ on $(0,1)$, where $R_i$ is a (non-zero) rational function. In particular, $R_i$ vanishes at most a finite number of times on $[0,1]$. Thus $(0,1) \cap \cup_{i \in \mathbb{N}^*} \{ D^i \chi = 0 \}$ is countable, so of zero Lebesgue measure. Given $I = (a,b)$ with $0 < a < b < 1$, $\psi(t) := \chi ((t-a)/(b-a))$ satisfies the conclusions of the lemma.
\end{proof} \begin{lemma} \label{lem:approx-nowhere} Let $u \in C^\infty_c((0,1);\mathbb{R})$. There exist nowhere-polynomial functions $u_n \in C^\infty_c((0,1);\mathbb{R})$ such that, for every $k \in \mathbb{N}$, $u_n \to u$ in $C^k([0,1];\mathbb{R})$. \end{lemma} \begin{proof} Let $u \in C^\infty_c((0,1);\mathbb{R})$. Let $0 < a < b < 1$ such that $\overline{\{ u \neq 0 \}} \subset (a,b)$. Let $\psi \in C^\infty_c((0,1);\mathbb{R})$ be a nowhere-polynomial function given by \cref{lem:nowhere-exists} such that $\psi > 0$ on $(a,b)$. For $\varepsilon > 0$, set $u_\varepsilon := u + \varepsilon \psi$. As $\varepsilon \to 0$, for every $k \in \mathbb{N}$, $u_\varepsilon \to u$ in $C^k([0,1];\mathbb{R})$. We claim that there exists a sequence $\varepsilon_n \to 0$ such that the $u_{\varepsilon_n}$ are nowhere-polynomial. Otherwise, by contradiction, one could find $\varepsilon^* > 0$ such that, for every $\varepsilon \in (0,\varepsilon^*)$, $u_{\varepsilon}$ is not nowhere-polynomial. Hence \begin{equation} J_\varepsilon := \{ u_\varepsilon \neq 0 \} \cap \left( \cup_{i \in \mathbb{N}^*} \{ D^i u_\varepsilon = 0 \} \right) \subset (a,b) \end{equation} satisfies $\mu(J_\varepsilon) > 0$. Let $J^i_\varepsilon := \{ D^i u_\varepsilon = 0 \} \cap (a,b)$. Since $\mu(J_\varepsilon) > 0$, there exists $i_\varepsilon \in \mathbb{N}^*$ such that $\mu(J^{i_\varepsilon}_\varepsilon) > 0$. Hence, $(0,\varepsilon^*) = \cup_{i \in \mathbb{N}^*} M_i$, where \begin{equation} M_i := \{ \varepsilon \in (0,\varepsilon^*) ; \mu(J^i_\varepsilon) > 0 \}. \end{equation} Let $i \in \mathbb{N}^*$. Let $\varepsilon \neq \varepsilon' \in (0,\varepsilon^*)$. Since $J_\varepsilon^i \cap J^i_{\varepsilon'} \subset \{ D^i \psi = 0 \} \cap \{ \psi > 0 \}$ and $\psi$ is nowhere-polynomial, one has $\mu(J^i_\varepsilon \cap J^i_{\varepsilon'}) = 0$. Hence, for every $n \in \mathbb{N}^*$, $\{ \varepsilon \in (0,\varepsilon^*) ; \mu(J^i_\varepsilon) \geq 1/n \}$ is finite. 
Thus $M_i$ is a countable union of finite sets, so is countable. Hence $\cup_{i\in \mathbb{N}^*} M_i = (0,\varepsilon^*)$ is also countable, which contradicts the fact that the interval $(0,\varepsilon^*)$ is uncountable. \end{proof} \begin{lemma} \label{lem:approx-nowhere-bounded} Let $u \in C^\infty([0,1];\mathbb{R})$. There exist nowhere-polynomial functions $u_n \in C^\infty([0,1];\mathbb{R})$ such that, for every $k \in \mathbb{N}$, $u_n \to u$ in $C^k([0,1];\mathbb{R})$. \end{lemma} \begin{proof} Let $\bar{u} \in C^\infty_c((-1,2);\mathbb{R})$ be a smooth compactly supported extension of~$u$. We apply \cref{lem:approx-nowhere} to a rescaled version of $\bar{u}$ to obtain a sequence $\bar{u}_n \in C^\infty_c((-1,2);\mathbb{R})$ of nowhere-polynomial functions such that $\bar{u}_n \to \bar{u}$ in $C^k([-1,2];\mathbb{R})$ for every $k \in \mathbb{N}$. Then the sequence of restrictions $u_n := (\bar{u}_n)_{\rvert [0,1]}$ satisfies the claimed properties. \end{proof} \subsection{Subdivision argument} \label{sec:subdivision} In this paragraph, we prove that, given a nowhere-polynomial function, we can find a subdivision of its support such that, on each interval, both terms of the right-hand side of \eqref{eq:lem:scale} are equal. The proof is inspired by \cite[Lemma~3.3]{fiorenza2021detailed} and relies on the following version of Besicovitch's covering theorem \cite{besicovitch1945general}. \begin{lemma}[Besicovitch] \label{lem:besicovitch} Let $E$ be a bounded subset of $\mathbb{R}$ and $r : E \to (0,+\infty)$. For $x \in E$, consider the non-empty open interval $I_x := (x-r_x,x+r_x)$. There exists a countable (finite or countably infinite) collection of points $x_n \in E$ such that $E \subset \cup_n I_{x_n}$ and $\sum_n \mathbf{1}_{I_{x_n}} \leq 4$ on $\mathbb{R}$. \end{lemma} \begin{proof} This statement corresponds to the one-dimensional case of \cite[Theorem~18.1c, Chapter~2]{dibenedetto2002real} (see also \cite[Section~18, Chapter~2]{dibenedetto2002real}).
\end{proof} \begin{proposition} \label{lem:subdivision} Let $q,r \in [1,\infty]$, $\kappa \in \mathbb{N}^*$ and $0 \leq k_1 \leq \dotsb \leq k_{\kappa} < m \in \mathbb{N}$. Assume that $\bar{k} < m - 1$ where $\bar{k} := (k_1 + \dotsb + k_\kappa) / \kappa$. Let $u \in C^\infty_c((0,1);\mathbb{R})$ be a nowhere-polynomial function. Then there exists a countable family $(I_n)_n$ of non-empty open intervals $I_n \subset \mathbb{R}$ such that \begin{align} \label{eq:lem:Ik-low} & 1 \leq \sum_n \mathbf{1}_{I_n} \quad \mu \text{ a.e.\ on } \{ u \neq 0 \}, \\ \label{eq:lem:Ik-high} & \sum_n \mathbf{1}_{I_n} \leq 4 \quad \text{ on } \mathbb{R}, \end{align} and, for every $n$, denoting by $\ell_n$ the length of $I_n$, \begin{equation} \label{eq:lem:Ik-2} \ell_n^{m - \frac 1 r} \| D^m u \|_{L^r(I_n)} = \ell_n^{\bar{k} - \frac {1}{q\kappa}} \| D^{k_1}u \dotsb D^{k_\kappa} u \|_{L^q(I_n)}^{\frac 1\kappa}. \end{equation} \end{proposition} \begin{proof} Let $u \in C^\infty_c((0,1);\mathbb{R})$ be nowhere-polynomial and $v := D^{k_1} u \dotsb D^{k_\kappa} u$ (we implicitly consider their respective smooth extensions by $0$ outside of $(0,1)$). Let \begin{equation} \label{eq:def-E} E := \left\{ x \in (0,1) ; u(x) \neq 0 \text{ and } v(x) \neq 0 \right\}. \end{equation} For $x \in E$ and $h > 0$, we define \begin{align} \alpha_x(h) & := h^{\bar{k} - \frac{1}{q\kappa}} \| v \|^{\frac 1\kappa}_{L^q(x-h,x+h)}, \\ \beta_x(h) & := h^{m-\frac{1}{r}} \|D^m u \|_{L^r(x-h,x+h)}. \end{align} As $h \to 0$, $\alpha_x(h) \sim |v(x)|^{\frac{1}{\kappa}} 2^{\frac{1}{q\kappa}} h^{\bar{k}}$ and $\beta_x(h) \lesssim h^{m} \|D^m u\|_{L^\infty(0,1)}$. Thus, since $m > \bar{k}$, $\beta_x(h) < \alpha_x(h)$ for $h$ small enough. Conversely, for $h \geq 1$, $\alpha_x(h) = h^{\bar{k} - \frac{1}{q\kappa}} \| v \|^{\frac 1\kappa}_{L^q(0,1)}$ and $\beta_x(h) = h^{m-\frac 1r} \|D^m u\|_{L^r(0,1)}$.
Since $m > \bar{k} + 1$, $m - \frac 1 r > \bar{k} - \frac 1 {q\kappa}$ and thus $\beta_x(h) > \alpha_x(h)$ for $h$ large enough. Hence, we can define \begin{equation} \label{eq:rx} r_x := \inf \{ h > 0 ; \alpha_x(h) \leq \beta_x(h) \} \in (0,+\infty). \end{equation} In particular, for every $x \in E$, $\alpha_x(r_x) = \beta_x(r_x)$. By \cref{lem:besicovitch}, there exists a countable collection of elements $x_n \in E$ such that $E \subset \cup_n I_n$ and $\sum_n \mathbf{1}_{I_n} \leq 4$ on $\mathbb{R}$, where $I_n = (x_n - r_{x_n}, x_n + r_{x_n})$. These intervals satisfy \eqref{eq:lem:Ik-2} by the definition of~$r_x$. Moreover, since $1 \leq \sum_n \mathbf{1}_{I_n}$ on $E$, writing \begin{equation} \{ u \neq 0 \} = \left( \{ u \neq 0 \} \cap \{ v = 0 \} \right) \cup E \end{equation} and using the fact that $u$ is nowhere-polynomial, we obtain that $1 \leq \sum_n \mathbf{1}_{I_n}$ almost everywhere on $\{ u \neq 0 \}$, which proves \eqref{eq:lem:Ik-low}. \end{proof} \subsection{Proof of the main result} \label{sec:proof-main} We start with a classical result from measure theory. \begin{lemma} \label{lem:sard} Let $1 \leq p < \infty$ and $j \in \mathbb{N}$. For $u \in C^\infty([0,1];\mathbb{R})$, \begin{equation} \| D^j u \|_{L^p(0,1)}^p = \int_0^1 |D^j u|^p \mathbf{1}_{u \neq 0}. \end{equation} \end{lemma} \begin{proof} We write \begin{equation} \| D^j u \|_{L^p(0,1)}^p = \int_0^1 |D^j u|^p \mathbf{1}_{u \neq 0} + \int_0^1 |D^j u|^p \mathbf{1}_{u = 0} \mathbf{1}_{D^j u \neq 0}. \end{equation} Thus, it is sufficient to prove that $E := \{ u = 0 \} \cap \{ D^j u \neq 0 \}$ is of zero Lebesgue measure. Let us show that $E$ is a discrete subset of $[0,1]$. Let $x \in E$. Then $u(x) = 0$. Let $1 \leq i \leq j$ be the smallest integer such that $D^i u(x) \neq 0$. As $h \to 0$, $u(x+h) = D^i u(x) h^i / i! + O(h^{i+1})$. In particular, there exists $h_0 > 0$ such that, for $|h| \leq h_0$, $u(x + h) = 0$ if and only if $h = 0$. Thus $x$ is isolated in~$E$.
Hence $E$ is discrete and $\mu(E) = 0$, which concludes the proof. \end{proof} We now prove \cref{thm:main}. Since estimate \eqref{eq:main-estimate} is invariant under translation and rescalings, one can assume that $u \in C^\infty_c((0,1);\mathbb{R})$. We start with the most important case: $\theta = \theta^*$. We postpone the generalization to $\theta \in (\theta^*,1]$ to the end of this section. \paragraph{Proof in the critical case $\theta = \theta^*$.} Assume moreover, temporarily, that $\bar{k} < m-1$ and $p, q, r < \infty$. Let $u \in C^\infty_c((0,1);\mathbb{R})$. As a first step, assume that $u$ is nowhere-polynomial. Let $v := D^{k_1}u \dotsb D^{k_\kappa} u$. Let $(I_n)_n$ be a countable collection of non-empty open intervals as in \cref{lem:subdivision}. First, using \cref{lem:sard} and \eqref{eq:lem:Ik-low}, \begin{equation} \label{eq:lp-sumislp} \| D^j u \|_{L^p(0,1)}^p = \int_0^1 |D^j u|^p \mathbf{1}_{u \neq 0} \leq \sum_n \int_0^1 |D^j u|^p \mathbf{1}_{I_n}. \end{equation} Second, using \cref{lem:scale} and \eqref{eq:lem:Ik-2}, there exists $C > 0$ (independent of $u$) such that, for each $n$, \begin{equation} \begin{split} \| D^j u \|_{L^p(I_n)}^p & \leq C^p \ell_n^{1-pj} \left( \ell_n^{m - \frac 1 r} \| D^m u \|_{L^r(I_n)} + \ell_n^{\bar{k} - \frac {1}{q\kappa}} \| v \|_{L^q(I_n)}^{\frac 1\kappa} \right)^p \\ & = C^p 2^p \ell_n^{1-pj} \left(\ell_n^{m - \frac 1 r} \| D^m u \|_{L^r(I_n)}\right)^{\theta p} \left(\ell_n^{\bar{k} - \frac {1}{q\kappa}} \| v \|_{L^q(I_n)}^{\frac 1\kappa}\right)^{(1-\theta)p} \\ & = (2C)^p \| D^m u \|_{L^r(I_n)}^{\theta p} \| v \|_{L^q(I_n)}^{\frac{p(1-\theta)}{\kappa}}, \end{split} \end{equation} since the parameters are related by \eqref{eq:main-relation}. Since $\theta = \theta^*$, the relation \eqref{eq:critic-pqr} of \cref{rmk:pqr} implies that the exponents $\alpha = \frac{r}{\theta p}$ and $\alpha' = \frac{q \kappa}{p(1-\theta)}$ satisfy $1/\alpha+1/\alpha' = 1$.
Thus, by Hölder's inequality, \begin{equation} \label{eq:proof-Holder-main} \begin{split} \sum_n \| D^j u \|_{L^p(I_n)}^p & \leq (2C)^p \sum_n \| D^m u \|_{L^r(I_n)}^{\theta p} \| v \|_{L^q(I_n)}^{\frac{p(1-\theta)}{\kappa}} \\ & \leq (2C)^p \left( \sum_n \| D^m u \|_{L^r(I_n)}^{r} \right)^{\frac{\theta p}{r}} \left( \sum_n \| v \|_{L^q(I_n)}^q \right)^{\frac{p(1-\theta)}{q \kappa}} \\ & \leq (2C)^p 4^{\frac{\theta p}{r}}4^{\frac{p(1-\theta)}{q \kappa}} \| D^m u \|_{L^r(0,1)}^{\theta p} \|v \|_{L^q(0,1)}^{\frac{p(1-\theta)}{\kappa}} \end{split} \end{equation} using \eqref{eq:lem:Ik-high}. Substituting this estimate in \eqref{eq:lp-sumislp} proves \eqref{eq:main-estimate}. If $u$ is not nowhere-polynomial, then one applies \eqref{eq:main-estimate} to the approximation sequence $u_n$ of nowhere-polynomial functions given by \cref{lem:approx-nowhere}. Since $u_n \to u$ in $C^m([0,1];\mathbb{R})$, the estimate passes to the limit. When $q = \infty$ or $r = \infty$, it suffices to replace the Hölder estimate in \eqref{eq:proof-Holder-main} involving a sum by the appropriate supremum over $n$. When $p = \infty$, one writes \begin{equation} \| D^j u \|_{L^\infty(0,1)} = \sup_n \| D^j u \|_{L^\infty(I_n)}, \end{equation} where, similarly, using \cref{lem:scale} and \eqref{eq:lem:Ik-2}, there exists $C > 0$ (independent of $u$) such that, \begin{equation} \begin{split} \| D^j u \|_{L^\infty(I_n)} & \leq C \ell_n^{-j} \left( \ell_n^{m - \frac 1 r} \| D^m u \|_{L^r(I_n)} + \ell_n^{\bar{k} - \frac {1}{q\kappa}} \| v \|_{L^q(I_n)}^{\frac 1\kappa} \right) \\ & = 2 C \| D^m u \|_{L^r(I_n)}^{\theta} \| v \|_{L^q(I_n)}^{\frac{(1-\theta)}{\kappa}}. \end{split} \end{equation} Finally, when $\bar{k} = m-1$, the assumptions $k_i \leq j < m$ entail that $k_1 = \dotsb = k_\kappa = j = m - 1$. Hence $\theta^* = 0$ and $p = q \kappa$ by \eqref{eq:critic-pqr}.
Thus \eqref{eq:main-estimate} reduces to $\|D^{m-1} u\|_{L^p(0,1)} \lesssim \| D^{m-1} u \dotsb D^{m-1} u \|_{L^q(0,1)}^{\frac 1 \kappa} = \|D^{m-1} u\|_{L^p(0,1)}$. \paragraph{Proof in the case $\theta \in (\theta^*,1]$.} When $\theta = 1$, this simply corresponds to the embedding $W^{1,1}(\mathbb{R}) \hookrightarrow L^\infty(\mathbb{R})$ for compactly supported functions. Now let $\theta \in (\theta^*,1)$ and define $p^* \in [1,\infty]$ by \begin{equation} \label{eq:p*} \frac 1 {p^*} = \frac{\theta^*}{r} + \frac{1-\theta^*}{q\kappa}. \end{equation} Thanks to the critical case $\theta = \theta^*$, we know that \begin{equation} \label{eq:casi-1} \| D^j u \|_{L^{p^*}(0,1)} \lesssim \| D^m u \|_{L^r(0,1)}^{\theta^*} \| v \|_{L^q(0,1)}^{\frac{1-\theta^*}{\kappa}}. \end{equation} Define $\alpha \in (0,1)$ by \begin{equation} \label{eq:alpha} \alpha := \frac{\theta-\theta^*}{1-\theta^*}. \end{equation} We apply the usual Gagliardo--Nirenberg inequality of \cref{thm:GN} to obtain \begin{equation} \label{eq:casi-2} \| D^j u \|_{L^p(0,1)} \lesssim \| D^m u \|_{L^r(0,1)}^\alpha \| D^j u \|_{L^{p^*}(0,1)}^{1-\alpha}. \end{equation} Combining \eqref{eq:casi-1} and \eqref{eq:casi-2} proves \eqref{eq:main-estimate}. Thus, it only remains to check that the parameters satisfy \eqref{eq:GN-relation} so that we could indeed apply \cref{thm:GN}. 
And, indeed, by \eqref{eq:p*} and \eqref{eq:alpha}, \begin{equation} \label{eq:alpha-OK} \begin{split} \alpha & \left( \frac 1 r - m \right) + (1-\alpha) \left( \frac{1}{p^*} - j \right) - \left( \frac 1 p - j \right) \\ & = \alpha \left( \frac 1 r - m - \frac{1}{p^*} + j \right) + \frac{1}{p^*} - \frac 1 p \\ & = \frac{\theta-\theta^*}{1-\theta^*} \left( \frac 1 r - m - \frac{\theta^*}{r} - \frac{1-\theta^*}{q\kappa} + j \right) + \frac{\theta^*}{r} + \frac{1-\theta^*}{q\kappa} - \frac 1 p \\ & = \theta \left( \frac{1}{r} - \frac{1}{q\kappa} - \frac{m-j}{1-\theta^*} \right) + \left(\frac{\theta^*}{1-\theta^*} (m-j) + \frac{1}{q\kappa} - \frac 1 p\right) = 0, \end{split} \end{equation} since $\theta$ satisfies \eqref{eq:main-relation}. In the last line we used that $(m-j)/(1-\theta^*) = m - \bar{k}$ and $\theta^* (m-j)/(1-\theta^*) = j - \bar{k}$, by \eqref{eq:theta*}. \section{The case of bounded domains} \label{sec:bounded} In this paragraph, we consider the case $u \in C^\infty([0,1];\mathbb{R})$, but not necessarily compactly supported in $(0,1)$, by adding a low-order term to the estimates. The proofs rely on the distinction between two cases, depending on whether $u$ is mostly ``low-frequency'' or ``high-frequency''. \subsection{A slight extension of the usual inequality} \label{sec:GN-bounded} We prove \cref{rk:GN-bounded}. Estimate \eqref{eq:GN-bounded} is classical when $k_0 = k$ (it follows by applying the usual inequality to $D^k u$, see e.g.\ \cite[item 5, p.\ 126]{nir1}). We build upon this case to give a short proof when $0 \leq k_0 < k \leq j$. Up to working with $D^{k_0} u$, it is sufficient to treat the case $k_0 = 0$. \paragraph{Case $0 < k < j$.} Define $\alpha^* \in (0,1)$ and $p_\alpha^* \in [1,\infty]$ by \begin{equation} \alpha^* := \frac{k}{j} \quad \text{and} \quad \frac{1}{p_\alpha^*} := \frac{\alpha^*}{p} + \frac{1-\alpha^*}{s}. 
\end{equation} By \cref{rk:GN-bounded} (in the classical case $k_0 = k$), one has both \begin{align} \| D^j u \|_{L^p(0,1)} & \leq C_1 \| D^m u \|_{L^r(0,1)}^\theta \| D^k u \|_{L^q(0,1)}^{1-\theta} + C_1 \| D^k u \|_{L^{p_\alpha^*}(0,1)}, \\ \| D^k u \|_{L^{p_\alpha^*}(0,1)} & \leq C_2 \| D^j u \|_{L^p(0,1)}^{\alpha^*} \| u \|_{L^s(0,1)}^{1-\alpha^*} + C_2 \| u \|_{L^s(0,1)}. \end{align} By Young's inequality for products, for $\varepsilon > 0$, \begin{equation} \| D^j u \|_{L^p(0,1)}^{\alpha^*} \| u \|_{L^s(0,1)}^{1-\alpha^*} \leq \alpha^* \varepsilon \| D^j u \|_{L^p(0,1)} + (1-\alpha^*) \varepsilon^{-\frac{\alpha^*}{1-\alpha^*}} \| u \|_{L^s(0,1)}. \end{equation} Choosing $\varepsilon < (C_1 C_2 \alpha^*)^{-1}$ and combining the three estimates proves \eqref{eq:GN-bounded}. \paragraph{Low-frequency case when $k = j$.} Let $u \in C^\infty([0,1];\mathbb{R})$. Assume that \begin{equation} \label{eq:Dmu<Dju} \| D^m u \|_{L^r(0,1)} \leq \| D^j u \|_{L^p(0,1)}. \end{equation} Define $\beta^* \in (0,1)$ and $p_\beta^* \in [1,\infty]$ by \begin{equation} \beta^* := \frac{j}{m} \quad \text{and} \quad \frac{1}{p_\beta^*} := \frac{\beta^*}{r} + \frac{1-\beta^*}{s}. \end{equation} By \cref{rk:GN-bounded} (in the classical case $k_0 = k$), one has both \begin{align} \| D^j u \|_{L^p(0,1)} & \lesssim \| D^m u \|_{L^r(0,1)}^\theta \| D^j u \|_{L^q(0,1)}^{1-\theta} + \| D^j u \|_{L^{p_\beta^*}(0,1)}, \\ \| D^j u \|_{L^{p_\beta^*}(0,1)} & \lesssim \| D^m u \|_{L^r(0,1)}^{\beta^*} \| u \|_{L^s(0,1)}^{1-\beta^*} + \| u \|_{L^s(0,1)}. \end{align} Combining both estimates with assumption \eqref{eq:Dmu<Dju} and using Young's inequality as above proves \eqref{eq:GN-bounded}. \paragraph{High-frequency case when $k=j$.} Let $u \in C^\infty([0,1];\mathbb{R})$. Assume that \begin{equation} \label{eq:Dmu>Dju} \| D^m u \|_{L^r(0,1)} \geq \| D^j u \|_{L^p(0,1)}.
\end{equation} By \cref{rk:GN-bounded} (in the classical case $k_0 = k$), \begin{equation} \label{eq:DjuDmu+Dju} \| D^j u \|_{L^p(0,1)} \lesssim \| D^m u \|_{L^r(0,1)}^\theta \| D^j u \|_{L^q(0,1)}^{1-\theta} + \| D^j u \|_{L^1(0,1)}. \end{equation} By Hölder's inequality and \eqref{eq:Dmu>Dju}, \begin{equation} \| D^j u \|_{L^1(0,1)} \leq \|D^j u\|_{L^p(0,1)}^{\theta} \| D^j u \|_{L^q(0,1)}^{1-\theta} \leq \| D^m u \|_{L^r(0,1)}^\theta \| D^j u \|_{L^q(0,1)}^{1-\theta}. \end{equation} Hence \eqref{eq:DjuDmu+Dju} entails \eqref{eq:GN-bounded}. \subsection{Proof of the main result for bounded domains} We turn to the proof of \cref{cor:main-bounded}. We start with the following modification of \cref{lem:subdivision} (which removes the compact support assumption). \begin{proposition} \label{lem:subdivision-bounded} Let $q,r \in [1,\infty]$, $\kappa \in \mathbb{N}^*$ and $0 \leq k_1 \leq \dotsb \leq k_{\kappa} < m \in \mathbb{N}$. Let $\bar{k} := (k_1 + \dotsb + k_\kappa) / \kappa$. Let $u \in C^\infty([0,1];\mathbb{R})$ be nowhere-polynomial such that \begin{equation} \label{eq:Dkv<Dmu} \| D^{k_1} u \dotsb D^{k_\kappa} u\|_{L^q(0,1)}^{\frac 1 \kappa} \leq \| D^m u \|_{L^r(0,1)}. \end{equation} There exists a countable family $(I_n)_n$ of non-empty open intervals $I_n \subset (0,1)$ satisfying \eqref{eq:lem:Ik-low}, \eqref{eq:lem:Ik-high} on $[0,1]$ and \eqref{eq:lem:Ik-2}. \end{proposition} \begin{proof} Let $u \in C^\infty([0,1];\mathbb{R})$ be nowhere-polynomial and $v := D^{k_1} u \dotsb D^{k_\kappa} u$. Let $E$ be as in \eqref{eq:def-E}. For $x \in E$ and $h > 0$, we define \begin{align} \alpha_x(h) & := |J_x(h)|^{\bar{k} - \frac{1}{q\kappa}} \| v \|^{\frac 1\kappa}_{L^q(J_x(h))}, \\ \beta_x(h) & := |J_x(h)|^{m-\frac{1}{r}} \|D^m u \|_{L^r(J_x(h))}, \end{align} where $J_x(h) := (x-h,x+h) \cap (0,1)$. Since $x \in (0,1)$, for $h$ small enough $J_x(h) = (x-h,x+h)$ and $|J_x(h)| = 2h$.
As $h \to 0$, $\alpha_x(h) \sim |v(x)|^{\frac{1}{\kappa}} 2^{\frac{1}{q\kappa}} h^{\bar{k}}$ and $\beta_x(h) \lesssim h^{m} \|D^m u\|_{L^\infty(0,1)}$. Thus, since $m > \bar{k}$, $\beta_x(h) < \alpha_x(h)$ for $h$ small enough. Conversely, for $h \geq \max \{ x, 1-x \}$, $J_x(h) = (0,1)$ and $\alpha_x(h) = \| v \|^{\frac 1\kappa}_{L^q(0,1)}$ and $\beta_x(h) = \|D^m u\|_{L^r(0,1)}$. Thus, by \eqref{eq:Dkv<Dmu}, $\alpha_x(h) \leq \beta_x(h)$. Hence, for every $x \in E$, we can define $r_x \in (0,\infty)$ as in \eqref{eq:rx}, which satisfies $\alpha_x(r_x) = \beta_x(r_x)$. By \cref{lem:besicovitch}, there exists a countable collection of elements $x_n \in E$ such that $E \subset \cup_n I_n'$ and $\sum_n \mathbf{1}_{I_n'} \leq 4$ on $\mathbb{R}$, where $I_n' = (x_n - r_{x_n}, x_n + r_{x_n})$. Let $I_n := I_n' \cap (0,1)$. The intervals $I_n$ satisfy \eqref{eq:lem:Ik-2} by the definitions of~$r_x$ and of $J_x(r_x)$. Moreover, since $1 \leq \sum_n \mathbf{1}_{I_n'}$ on $E$, writing \begin{equation} \{ u \neq 0 \} \cap (0,1) = \left( \{ u \neq 0 \} \cap \{ v = 0 \} \cap (0,1) \right) \cup E \end{equation} and using the fact that $u$ is nowhere-polynomial, we obtain that $1 \leq \sum_n \mathbf{1}_{I_n}$ almost everywhere on $\{ u \neq 0 \} \cap (0,1)$, which proves \eqref{eq:lem:Ik-low}. \end{proof} In the situation where $D^m u$ is small compared with $D^{k_1} u \dotsb D^{k_\kappa} u$, the construction of the family of intervals of the previous proposition fails. We will rely on the following estimate instead. \begin{lemma} \label{lem:low-freq} Let $p,q,r \in [1,\infty]$, $\kappa \in \mathbb{N}^*$ and $0 \leq k_1 \leq \dotsb \leq k_{\kappa} \leq j < m \in \mathbb{N}$. Let $\bar{k} := (k_1 + \dotsb + k_\kappa) / \kappa$. Let $0 \leq k_0 \leq k_1$ and $s \in [1,\infty]$.
For $u \in C^\infty([0,1];\mathbb{R})$ such that \begin{equation} \label{eq:Dmu<2Dkv} \| D^m u \|_{L^r(0,1)} \leq 2 \| D^{k_1} u \dotsb D^{k_\kappa} u\|_{L^q(0,1)}^{\frac 1 \kappa}, \end{equation} there holds \begin{align} \label{eq:Dju-Dk0u} \| D^j u \|_{L^p(0,1)} \lesssim \| D^{k_0} u \|_{L^s(0,1)}. \end{align} \end{lemma} \begin{proof} By monotonicity of the Lebesgue norms on the bounded domain $(0,1)$, it is sufficient to prove the result for $p = \infty$ and $s = 1$. Moreover, since $0 \leq k_0 \leq k_1 \leq \dotsb \leq j < m$, up to working with $D^{k_0} u$ instead of $u$, one can assume that $k_0 = 0$. By the usual Gagliardo--Nirenberg inequality on bounded domains of \cref{rk:GN-bounded} (with $k = k_0 = 0$), \begin{equation} \label{eq:DjuDmuAlpha} \| D^j u \|_{L^\infty(0,1)} \lesssim \| D^m u \|^{\alpha}_{L^r(0,1)} \| u \|_{L^1(0,1)}^{1-\alpha} + \| u \|_{L^1(0,1)} \end{equation} with $\alpha := \frac{j+1}{m+1-\frac 1r} \geq \frac jm$ since $j < m$ and $r \geq 1$. Moreover, for each $0 \leq k_i \leq j$, \begin{equation} \| D^{k_i} u \|_{L^\infty(0,1)} \lesssim \| D^j u \|_{L^\infty(0,1)} + \| u \|_{L^1(0,1)}, \end{equation} which is immediate when $k_i = j$ and follows from the usual Sobolev embedding \cref{lem:Sobolev} when $k_i < j$. Thus, by Hölder's inequality, \begin{equation} \label{eq:Dk1dKkappa-Dju+u} \begin{split} \| D^{k_1} u \dotsb D^{k_\kappa} u \|_{L^q(0,1)}^{\frac{1}{\kappa}} & \leq \| D^{k_1} u\|_{L^\infty(0,1)}^{\frac{1}{\kappa}} \dotsb \| D^{k_\kappa} u\|_{L^\infty(0,1)}^{\frac{1}{\kappa}} \\ & \lesssim \| D^j u \|_{L^\infty(0,1)} + \| u \|_{L^1(0,1)}. \end{split} \end{equation} Using \eqref{eq:Dmu<2Dkv} and substituting \eqref{eq:Dk1dKkappa-Dju+u} in \eqref{eq:DjuDmuAlpha} proves that \begin{equation} \| D^j u \|_{L^\infty(0,1)} \lesssim \| D^j u \|_{L^\infty(0,1)}^\alpha \| u \|_{L^1(0,1)}^{1-\alpha} + \| u \|_{L^1(0,1)}.
\end{equation} Since $j <m$, $\alpha < 1$ and Young's weighted inequality for products entails that \begin{equation} \label{eq:Dju-uL1} \| D^j u \|_{L^\infty(0,1)} \lesssim \| u \|_{L^1(0,1)}, \end{equation} which is indeed \eqref{eq:Dju-Dk0u} with $p = \infty$, $s = 1$ and $k_0=0$. \end{proof} We are now ready to prove \cref{cor:main-bounded}. \paragraph{Reduction to the case $D^m u$ large.} Let $u \in C^\infty([0,1];\mathbb{R})$. If $u$ satisfies \eqref{eq:Dmu<2Dkv}, then estimate \eqref{eq:Dju-Dk0u} of \cref{lem:low-freq} implies \eqref{eq:main-estimate-bounded} since the low-order term by itself is sufficient to bound the left-hand side. Hence, we can focus on the case where $D^m u$ is large. \paragraph{Proof in the critical case $\theta = \theta^*$.} Let $u \in C^\infty([0,1];\mathbb{R})$ be a nowhere-polynomial function such that \begin{equation} \label{eq:Dmu>2Dkv} \| D^m u \|_{L^r(0,1)} \geq 2 \| D^{k_1} u \dotsb D^{k_\kappa} u\|_{L^q(0,1)}^{\frac 1 \kappa}. \end{equation} In particular, assumption \eqref{eq:Dkv<Dmu} is satisfied, so \cref{lem:subdivision-bounded} applies. Thus the same argument as in \cref{sec:proof-main} can be applied and proves that \begin{equation} \label{eq:main-estimate-bounded-1} \| D^j u \|_{L^p(0,1)} \lesssim \| D^m u \|_{L^r(0,1)}^{\theta} \| D^{k_1} u \dotsb D^{k_\kappa} u\|_{L^q(0,1)}^{\frac 1 \kappa}. \end{equation} If $u$ is not nowhere-polynomial, then one applies \eqref{eq:main-estimate-bounded-1} to the approximation sequence $u_n$ of nowhere-polynomial functions given by \cref{lem:approx-nowhere-bounded}. Since $u_n \to u$ in $C^m([0,1];\mathbb{R})$ and $u$ satisfies \eqref{eq:Dmu>2Dkv}, the $u_n$ satisfy assumption \eqref{eq:Dkv<Dmu} for $n$ large enough, and the estimate passes to the limit. \paragraph{Proof in the case $\theta \in (\theta^*,1]$.} When $\theta = 1$, this simply corresponds to the embedding $W^{1,1}(0,1) \hookrightarrow L^\infty(0,1)$. 
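A sharp special case of the one-dimensional Gagliardo--Nirenberg inequalities used throughout this section can be checked numerically. The following sketch is our own illustration, not part of the proof: for $u$ vanishing at the endpoints of $(0,1)$, integration by parts and Cauchy--Schwarz give $\|u'\|_{L^2}^2 \leq \|u\|_{L^2}\|u''\|_{L^2}$, i.e. the case $j=1$, $m=2$, $p=q=r=2$, $\theta=1/2$, with constant one. The test function $u(x)=\sin^2(\pi x)$ is an arbitrary choice.

```python
import numpy as np

# Sanity check (illustration only, not part of the proof): for u vanishing at
# the endpoints of (0,1), integration by parts gives the sharp interpolation
# inequality ||u'||_2^2 <= ||u||_2 ||u''||_2, a special case (j = 1, m = 2,
# theta = 1/2) of the Gagliardo-Nirenberg estimates used above.
x = np.linspace(0.0, 1.0, 200001)
dx = x[1] - x[0]
u  = np.sin(np.pi * x) ** 2                 # u(0) = u(1) = 0
u1 = np.pi * np.sin(2 * np.pi * x)          # u'
u2 = 2 * np.pi**2 * np.cos(2 * np.pi * x)   # u''

L2 = lambda f: np.sqrt(np.sum(f**2) * dx)   # rectangle-rule L^2 norm
lhs, rhs = L2(u1) ** 2, L2(u) * L2(u2)
print(lhs <= rhs)   # the interpolation inequality holds for this u
```

For this particular $u$ the two sides can also be computed in closed form ($\|u'\|_2^2 = \pi^2/2$), which makes the margin in the inequality visible.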
Now let $\theta \in (\theta^*,1)$ and define $p^* \in [1,\infty]$ by \eqref{eq:p*}. Thanks to the critical case $\theta = \theta^*$, we have the bound \eqref{eq:casi-1} with $v := D^{k_1} u \dotsb D^{k_\kappa} u$. Define $\alpha \in (0,1)$ by \eqref{eq:alpha}. Recalling the relation between the parameters verified in \eqref{eq:alpha-OK}, we apply the usual Gagliardo--Nirenberg inequality of \cref{rk:GN-bounded} (in its general setting proved in \cref{sec:GN-bounded}) to obtain \begin{equation} \label{eq:casi-2-bounded} \| D^j u \|_{L^p(0,1)} \lesssim \| D^m u \|_{L^r(0,1)}^\alpha \| D^j u \|_{L^{p^*}(0,1)}^{1-\alpha} + \| D^{k_0} u \|_{L^s(0,1)}. \end{equation} Combined with \eqref{eq:casi-1}, this concludes the proof of \eqref{eq:main-estimate-bounded}. \section{An application to control theory} \label{sec:control} Our initial motivation concerns obstructions to small-time local controllability for nonlinear finite-dimensional scalar-input control-affine systems. It is known that such obstructions are linked with interpolation inequalities (see \cite{beauchard2022unified}). As an example, given $p \in \mathbb{N}^*$, consider the following system on $\mathbb{R}^4$: \begin{equation} \label{syst} \begin{cases} \dot{x}_1 = w \\ \dot{x}_2 = x_1 \\ \dot{x}_3 = x_2 \\ \dot{x}_4 = x_1^2 x_2^2 x_3^2 - x_1^p \end{cases} \end{equation} with initial condition $x(0) = 0$ where $w \in L^\infty((0,T);\mathbb{R})$ is the control to be chosen. We are interested in the following local property. \begin{definition} We say that system \eqref{syst} is \emph{small-time locally controllable} when, for every $T,\eta > 0$, there exists $\delta > 0$ such that, for every $x^* \in \mathbb{R}^4$ with $|x^*| \leq \delta$, there exists $w \in L^\infty((0,T);\mathbb{R})$ such that $\|w\|_{L^\infty(0,T)} \leq \eta$ and the associated solution to \eqref{syst} with initial condition $x(0) = 0$ satisfies $x(T) = x^*$. 
\end{definition} \begin{proposition} System \eqref{syst} is small-time locally controllable if and only if $p \in \{ 3, 5, 7, 8, 9, 10, 11 \}$. \end{proposition} \begin{proof} Let $T > 0$. If $w \in L^\infty((0,T);\mathbb{R})$ is a control such that $x_1(T) = x_2(T) = x_3(T) = 0$, then $u := x_3 \in W^{3,\infty}_0((0,T);\mathbb{R})$ and \begin{equation} \label{eq:x4} x_4(T) = \int_0^T (u u' u'')^2 - \int_0^T (u'')^p \end{equation} so that the possibility of reaching a target of the form $(0,0,0,\pm 1)$ is linked with functional inequalities involving products of derivatives. We study each case. \begin{itemize} \item Case $p \geq 12$. First, \begin{equation} \| u'' \|_{L^p(0,T)}^p \leq \| u'' \|_{L^\infty(0,T)}^{p-12} \| u'' \|_{L^{12}(0,T)}^{12}. \end{equation} Moreover, since $u''' = x_3''' = w$ and $u''(0) = x_1(0) = 0$, \begin{equation} \| u'' \|_{L^\infty(0,T)} \leq T \| w \|_{L^\infty(0,T)}. \end{equation} Thus, thanks to the interpolation inequality of \cref{cor:cubic}, \begin{equation} \| u'' \|_{L^p(0,T)}^p \leq T^{p-12} \| w \|_{L^\infty(0,T)}^{p-6} \int_0^T (u u' u'')^2. \end{equation} Substituting in \eqref{eq:x4} proves that $x_4(T) \geq 0$ when $T^{p-12} \| w \|_{L^\infty(0,T)}^{p-6} \leq 1$. Thus, if $0 < \eta \ll 1$ is chosen so that $T^{p-12} \eta^{p-6} \leq 1$, every control with $\|w\|_{L^\infty(0,T)} \leq \eta$ steering $x_1,x_2,x_3$ back to zero yields $x_4(T) \geq 0$, so targets of the form $(0,0,0,-\delta)$ with $\delta > 0$ are not reachable; this negates the definition of small-time local controllability. \item Case $7 \leq p \leq 11$. Let $0 \neq \chi \in C^\infty_c((0,T);\mathbb{R})$ and consider $w(t) := \varepsilon \chi'''(t)$ for $0 < \varepsilon \ll 1$. As $\varepsilon \to 0$, $w \to 0$ in $L^\infty((0,T);\mathbb{R})$. Moreover, by \eqref{eq:x4}, \begin{equation} x_4(T) = \varepsilon^6 \int_0^T (\chi \chi' \chi'')^2 + O(\varepsilon^7). \end{equation} So one can move in the direction $(0,0,0,+1)$. Conversely, set $u(t) = \varepsilon^{1+3a} \chi(t\varepsilon^{-a})$ or equivalently $w(t) := \varepsilon \chi'''(t \varepsilon^{-a})$ for $a > 0$ and $0 < \varepsilon \ll 1$. As $\varepsilon \to 0$, $w \to 0$ in $L^\infty((0,T);\mathbb{R})$.
By \eqref{eq:x4}, since $u^{(k)}(t) = \varepsilon^{1+(3-k)a} \chi^{(k)}(t\varepsilon^{-a})$ and the change of variables $s = t\varepsilon^{-a}$ contributes a factor $\varepsilon^a$, \begin{equation} x_4(T) = \varepsilon^{6+13a} \int_0^T (\chi \chi' \chi'')^2 - \varepsilon^{p(1+a)+a} \int_0^T (\chi'')^p. \end{equation} If $\int_0^T (\chi'')^p > 0$ and $p(1+a) < 6+12a$ (which is possible when $7 \leq p \leq 11$), the second term dominates as $\varepsilon \to 0$, so one can move in the direction $(0,0,0,-1)$. From these elementary movements, it is classical to conclude that \eqref{syst} is small-time locally controllable. \item Case $p = 1$. Then $\dot{x}_2 + \dot{x}_4 = (x_1 x_2 x_3)^2 \geq 0$. Hence, for every control, $(x_2 + x_4)(T) \geq 0$, so targets with $x_2^* + x_4^* < 0$ are not reachable. \item Case $p \in \{ 2, 4, 6 \}$. The system does not satisfy Stefani's necessary condition for small-time local controllability (see \cite{stefani}). \item Case $p \in \{ 3, 5 \}$. The system satisfies Hermes' sufficient condition for small-time local controllability (see \cite{hermes1982}).\qedhere \end{itemize} \end{proof} \end{document}
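The sign analysis in the case $7 \leq p \leq 11$ can be probed numerically. The sketch below is our own illustration, not from the paper: the profile $\chi(t) = \sin^4(\pi t)$ on $(0,T)$ with $T = 1$ and the exponent $p = 7$ are arbitrary choices ($\sin^4$ is not compactly supported but vanishes to high order at the endpoints, which is enough for the quadrature). It evaluates $x_4(T)$ from (eq:x4) for the control $w = \varepsilon\chi'''$ and checks that $x_4(T) > 0$ for small $\varepsilon$, i.e. that the state moves in the direction $(0,0,0,+1)$.

```python
import numpy as np

# Illustration (assumptions: T = 1, p = 7, chi(t) = sin(pi t)^4): evaluate
# x_4(T) from (eq:x4) for u = eps * chi, i.e. for the control w = eps * chi'''.
t = np.linspace(0.0, 1.0, 100001)
dt = t[1] - t[0]
s, c = np.sin(np.pi * t), np.cos(np.pi * t)
chi  = s ** 4
chi1 = 4 * np.pi * s**3 * c                   # chi'
chi2 = 4 * np.pi**2 * s**2 * (3 - 4 * s**2)   # chi''

def x4(eps, p=7):
    """x_4(T) = int (u u' u'')^2 - int (u'')^p for u = eps * chi."""
    u, u1, u2 = eps * chi, eps * chi1, eps * chi2
    return np.sum((u * u1 * u2) ** 2) * dt - np.sum(u2 ** p) * dt

# Leading order x_4(T) ~ eps^6 * int (chi chi' chi'')^2 > 0: the fourth
# coordinate moves in the +1 direction with arbitrarily small controls.
print(x4(1e-3) > 0)
```

The experiment only probes the $+1$ direction; the $-1$ direction additionally requires a profile with $\int (\chi'')^p > 0$, as in the proof.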
\begin{document} \maketitle \begin{abstract} We show that the set of Bernoulli measures of an isolated topologically mixing homoclinic class of a generic diffeomorphism is a dense subset of the set of invariant measures supported on the class. For this, we introduce the large periods property and show that this is a robust property for these classes. We also show that the whole manifold is a homoclinic class for an open and dense subset of the set of robustly transitive diffeomorphisms far away from homoclinic tangencies. In particular, using results from Abdenur and Crovisier, we obtain that every diffeomorphism in this subset is robustly topologically mixing. \end{abstract} \section{Introduction} The study of chain-recurrence classes began once Conley's Fundamental Theorem of Dynamical Systems appeared. It says that, up to collapsing these classes to points, any dynamical system looks like a gradient dynamics. Some of these classes, called homoclinic classes, gained further interest with the advent of Smale's Spectral Decomposition Theorem. Indeed, this theorem says that for Axiom A (hyperbolic) dynamics the non-wandering set splits into finitely many homoclinic classes. Moreover, each of these classes is isolated: it is the maximal invariant set of a neighbourhood of itself. Thus, these homoclinic classes are the sole chain recurrence classes of such dynamics. We recall that the homoclinic class of a periodic point $p$ is the closure of the transversal intersections of the invariant manifolds of the orbit of $p$. It is well known that such classes are transitive, i.e. they contain a point whose orbit is dense in the class. Hence, the study of homoclinic classes in non-hyperbolic situations attracted the attention of many mathematicians; see \cite{BDV} for a survey on the subject. The purpose of this article is to contribute to this study both from the measure theoretical viewpoint and from the topological viewpoint.
The dynamical systems we shall consider here are diffeomorphisms, and the topology used in the space of diffeomorphisms will be the $C^1$-topology. In ergodic theory, an important problem is to describe the set of invariant measures of a dynamical system, since these measures help to describe the dynamics. In \cite{S2}, Sigmund studied this problem in the hyperbolic case. More precisely, he proved that for any homoclinic class of an Axiom A diffeomorphism, the set of periodic measures, i.e. Dirac measures evenly distributed on a periodic orbit, is dense in the set of invariant measures. On the other hand, there is a refinement of the Spectral Decomposition Theorem, due to Bowen, which says that any such class of an Axiom A system splits into finitely many compact sets which are cyclically permuted by the dynamics, and the dynamics of each piece, at the return, is topologically mixing, i.e. given two open sets $U$ and $V$, the $n$-th iterate of $U$ meets $V$ \emph{for every} $n$ large enough. Using this, Sigmund in \cite{S1} was able to prove that the set of Bernoulli measures is dense among the invariant measures. He also proved that the weakly mixing measures contain a residual subset of the set of invariant measures. Indeed, the set of weakly mixing measures is a countable intersection of open sets. We recall that a measure is Bernoulli if the system endowed with it is measure theoretically isomorphic to a Bernoulli shift. In the non-hyperbolic case, \cite{ABC} proved that for a generic diffeomorphism any isolated homoclinic class has periodic measures dense in the set of invariant measures, thus extending the first result of Sigmund mentioned above to the generic setting. We recall that a property holds generically if it holds in a countable intersection of open and dense sets of diffeomorphisms. Our first result extends the second result of Sigmund mentioned above.
\begin{mainthm} \label{teoA} For any generic diffeomorphism $f$, if the dynamics restricted to an isolated homoclinic class is topologically mixing then the Bernoulli measures are dense in the space of invariant measures supported on the class. In particular, the set of weakly mixing measures contains a residual subset. \end{mainthm} The main tools employed here to prove Theorem \ref{teoA} are the results from \cite{ABC} mentioned above, the main theorem in \cite{AC}, and the {\it large periods property}, a tool that we devised in order to detect mixing behavior. A dynamical system has the {\it large periods property} if there are periodic points of any large enough period which are arbitrarily dense. The presence of this property implies that the system is topologically mixing. In the differentiable setting, we also define the \emph{homoclinic large periods property}, which only considers the homoclinically related periodic points. We prove that this property is robust, see Proposition \ref{r.lpp}. We recall that a property is robust if it holds in a neighbourhood of the diffeomorphism. In \cite{AC}, the authors use their main result to prove that any homoclinic class of a generic diffeomorphism has a spectral decomposition in the sense of Bowen, as discussed before. One of the motivations is that all known examples of robustly transitive diffeomorphisms are robustly topologically mixing. So, in the same article the authors ask the following questions: \begin{enumerate} \item Is \emph{every} robustly transitive diffeomorphism topologically mixing? \item Failing that, is topological mixing at least a $C^1$ open and dense condition within the space of all robustly transitive diffeomorphisms? \end{enumerate} Now, we point out that the results of section 2 of \cite{AC} give immediately the following result\footnote{We would like to thank Prof.
Sylvain Crovisier for pointing out this result to us.} (see also Remark \ref{mixing1}). \begin{theorem} \label{baranasquestion} Let $f$ be a generic diffeomorphism. If an isolated homoclinic class of $f$ is topologically mixing then it is robustly topologically mixing. \end{theorem} Actually, since the large periods property implies topological mixing, the robustness of this property could lead to another proof of the previous result, see Section 4. We now want to attack problem (2) above. For this purpose it is natural to look at the global dynamics instead of the semi-local dynamics of the previous theorem. This leads us to a question posed in \cite{BDV} (Problem 7.25, page 144): ``For an open and dense subset of robustly transitive partially hyperbolic diffeomorphisms: Is the whole manifold robustly a homoclinic class?''. Recall that, by a result of \cite{BC}, for generic transitive diffeomorphisms the whole manifold is a homoclinic class. Our next result gives a positive answer to Problem 7.25 of \cite{BDV} (quoted above) \emph{far from homoclinic tangencies}. A homoclinic tangency is a non-transversal intersection between the invariant manifolds of a hyperbolic periodic point. The result is the following: \begin{mainthm} There exists an open and dense subset among robustly transitive diffeomorphisms far from homoclinic tangencies formed by diffeomorphisms such that the whole manifold is a homoclinic class. \label{main.homoclinic}\end{mainthm} This result together with Theorem \ref{baranasquestion} gives us a partial answer to question (2) above, posed in \cite{AC}. \begin{mainthm} \label{r.mixing} There is an open and dense subset among robustly transitive diffeomorphisms far from homoclinic tangencies formed by robustly topologically mixing diffeomorphisms. \end{mainthm} These two results were previously obtained by \cite{BDU} for strongly partially hyperbolic diffeomorphisms with one dimensional center bundle, see also \cite{HHU}.
By strong partial hyperbolicity we mean partial hyperbolicity with both non-trivial extremal bundles such that the center bundle splits into one-dimensional subbundles in a dominated way. Actually, they obtain this by proving that one of the strong foliations given by the partial hyperbolicity is minimal, which is a stronger property than topological mixing. In order to obtain this minimality they used arguments involving the accessibility property. We notice however that our results hold for diffeomorphisms with higher dimensional center directions. In Section 2, we present a way to produce such examples. This paper is organized as follows. In Section 2 we give the precise definitions of the main objects we shall deal with. In Section 3 we state the known results that will be our main tools. In Section 4 we introduce the large periods property. In Section 5 we use the large periods property to prove Theorem \ref{teoA}. Finally, in Section 6 we prove Theorem \ref{main.homoclinic}. \ {\bf Acknowledgements:} A.A. and B.S. want to thank FAMAT-UFU, and T.C. wants to thank IM-UFRJ for the kind hospitality of these institutions during the preparation of this work. This work was partially supported by CNPq, CAPES, FAPERJ and FAPEMIG. \section{Precise Definitions} In this section, we give the precise definitions of the objects used in the statements of the results. In this paper $M$ will be a closed and connected Riemannian manifold of dimension $d$. Also, $cl(.)$ will denote the closure operator. \subsection{Topological dynamics} Let $f:M\rightarrow M$ be a homeomorphism. Given $x\in M$, we define the \emph{orbit} of $x$ as the set $O(x):=\{f^n(x);n\in\mathbb{Z}\}$. The forward orbit of $x$ is the set $O^{+}(x):=\{f^n(x);n\in\mathbb{N}\}$. In a similar way we define the backward orbit $O^{-}(x)$. If necessary, to emphasize the dependence on $f$, we may write $O_f(x)$.
Given $\Lambda\subset M$ we say that it is an \emph{invariant} set if $f(\Lambda)=\Lambda$. We recall the notions of transitivity and mixing. We say that $f$ is transitive if there exists a point in $M$ whose forward orbit is dense. This is equivalent to the existence of a dense backward orbit, and is also equivalent to the following condition: for every pair $U,V$ of open sets, there exists $n>0$ such that $f^n(U)\cap V\neq\emptyset$. More specifically, we say that $f$ is topologically mixing if for every pair $U,V$ of open sets there exists $N_0>0$ such that $n\geq N_0$ implies $f^n(U)\cap V\neq\emptyset$. \subsection{Hyperbolic Periodic Points} We say that $p$ is a \emph{periodic point} if $f^n(p)=p$ for some $n\geq 1$. The minimum of such $n$ is called the \emph{period} of $p$ and it is denoted by $\tau(p)$. The periodic point is \emph{hyperbolic} if the eigenvalues of $Df^{\tau(p)}(p)$ do not belong to $S^1$. As usual, $E^s(p)$ (resp. $E^u(p)$) denotes the eigenspace of the eigenvalues with norm smaller (resp. bigger) than one. This gives a $Df^{\tau(p)}$-invariant splitting of the tangent bundle over the orbit $O(p)$ of $p$. The {\it index} of a hyperbolic periodic point $p$ is the dimension of the stable direction, denoted by $I(p)$. If $p$ is a hyperbolic periodic point for $f$ then every diffeomorphism $g$, $C^1$-close to $f$, also has a hyperbolic periodic point close to $p$ with the same period and index, which is called the \emph{continuation of $p$ for $g$}, and it is denoted by $p(g)$. The local stable and unstable manifolds of a hyperbolic periodic point $p$ are defined as follows: given $\varepsilon>0$ small enough, we set $$ W^s_{loc}(p)=\{x\in M ; \ \ d(f^n(x), f^n(p))\leq \varepsilon \text{, for every }n\geq 0\} \text{ and } $$ $$W^u_{loc}(p)=\{x\in M ; \ \ d(f^{-n}(x), f^{-n}(p))\leq \varepsilon \text{, for every }n\geq 0\}.
$$ They are differentiable manifolds tangent at $p$ to $E^s(p)$ and $E^u(p)$. The stable and unstable manifolds are given by the saturations of the local manifolds. Indeed, $$W^s(p)=\bigcup_{n\geq 0}f^{-n\tau(p)}(W^s_{loc}(p))\text{ and }W^u(p)=\bigcup_{n\geq 0}f^{n\tau(p)}(W^u_{loc}(p)).$$ The stable and unstable sets of a hyperbolic periodic orbit $O(p)$ are given by: $$ W^s(O(p))=\bigcup_{j=0}^{\tau(p)-1} W^s(f^j(p)) \text{ and } W^u(O(p))=\bigcup_{j=0}^{\tau(p)-1} W^u(f^j(p)). $$ \subsection{Homoclinic Intersections} If $p$ is a hyperbolic periodic point of $f$, then its \emph{homoclinic class} $H(p)$ is the closure of the transversal intersections of the stable manifold and unstable manifold of the orbit of $p$: $$H(p)=cl\big(W^s(O(p))\pitchfork W^u(O(p))\big).$$ We say that a hyperbolic periodic point $q$ is \emph{homoclinically related} to $p$ if $W^s(O(p))\pitchfork W^u(O(q))\neq \emptyset$ and $W^u(O(p))\pitchfork W^s(O(q))\neq \emptyset$. It is well known that a homoclinic class coincides with the closure of the set of hyperbolic periodic points homoclinically related to $p$. Moreover, it is a transitive invariant set. We say that a homoclinic class $H(p)$ has a robust property if $H(p(g))$ also has this property for any diffeomorphism $g$ sufficiently close to $f$. We define {\it the period of a homoclinic class} $H(p)$ as the greatest common divisor of the periods of the hyperbolic periodic points homoclinically related to $p$, and we denote it by $l(O(p))$. We say that the homoclinic class $H(p)$ is isolated if there exists a neighbourhood $U$ of $H(p)$ such that $H(p)=\bigcap_{n\in \mathbb{Z}}f^n(U)$. On the other hand, we say that a non-transversal intersection between $W^s(O(p))$ and $W^u(O(p))$ is a \emph{homoclinic tangency}. We denote by $\mathcal{HT}(M)$ the set of diffeomorphisms exhibiting a homoclinic tangency.
We will say that a diffeomorphism $f$ is \emph{far from homoclinic tangencies} if $f\notin cl(\mathcal{HT}(M))$. Given $p$ and $q$ hyperbolic periodic points with $I(p)<I(q)$, we say that they form a {\it heterodimensional cycle} if there exists $x\in W^s(O(p))\cap W^u(O(q))$, with $\dim{(T_xW^s(O(p))\cap T_xW^u(O(q)))}=0$, and $W^u(O(p))\pitchfork W^s(O(q))\neq\emptyset$. \subsection{Invariant Measures} A probability measure $\mu$ is $f$-invariant if $\mu(f^{-1}(B))=\mu(B)$ for every measurable set $B$. An invariant measure is ergodic if the measure of any invariant set is zero or one. Let ${\mathcal M}(f)$ be the space of $f$-invariant \textit{probability measures} on $M$, and let ${\mathcal M}_e(f)$ denote the ergodic elements of ${\mathcal M}(f)$. For a periodic point $p$ of $f$ with period $\tau(p)$, we let $\mu_p$ denote the periodic measure associated to $p$, given by $$\mu_p =\frac{1}{\tau(p)} \sum_{ x\in O(p)} \delta_x $$ where $\delta_x$ is the Dirac measure at $x$. Given an invariant measure $\mu$, Oseledets' Theorem says that for almost every $x\in M$ and all $v\in T_xM$ the limits $$\lambda(x,v):=\lim_{n\to\pm\infty}\frac{1}{n}\log{\|Df^n(x)v\|}$$ exist and are equal. Moreover, one has a measurably varying splitting of the tangent bundle $TM=E_1\oplus...\oplus E_k$ and measurable invariant functions $\lambda_j:M\to\mathbb{R}$, $j=1,...,k$, such that if $v\in E_j$ then $\lambda(x,v)=\lambda_j(x)$. The number $\lambda_j(x)$ is called a \emph{Lyapunov exponent} of $f$ at $x$. Now, let us define the notion of Bernoulli measure. We first recall the so-called Bernoulli shift. It is the homeomorphism $\sigma:\{1,...,n\}^{\mathbb{Z}}\to\{1,...,n\}^{\mathbb{Z}}$ defined by $\sigma(\{x_{n}\})=\{x_{n+1}\}$. In $\{1,...,n\}^{\mathbb{Z}}$ consider $m_B$ the product measure with respect to the uniform probability on $\{1,...,n\}$. It is easy to see that $m_B$ is invariant under $\sigma$.
We say that $\mu\in{\mathcal M}(f)$ is a \emph{Bernoulli measure} if $(f,\mu)$ is measure theoretically isomorphic to $(\sigma,m_B)$. \subsection{Partial hyperbolicity} Let $\Lambda\subset M$ be invariant under a diffeomorphism $f$. Let $E,F$ be subbundles of the tangent bundle $T_{\Lambda}M$ over $\Lambda$ with trivial intersection at every $x\in\Lambda$. We say that $E$ \emph{dominates} $F$ if there exists $N\in\mathbb{N}$ such that $$\|Df^N(x)|_E\|\|Df^{-N}(f^N(x))|_F\|\leq\frac{1}{2},$$ for every $x\in\Lambda$. We say that $\Lambda$ admits a \emph{dominated splitting} if there exists a decomposition of the tangent bundle $T_{\Lambda}M=\bigoplus_{l=1}^kE_l$ such that $E_l$ dominates $E_{l+1}$. We say that an $f$-invariant subset $\Lambda$ is {\it partially hyperbolic} if it admits a dominated splitting $T_{\Lambda}M=E^s\oplus E^c_1\oplus\ldots\oplus E^c_k\oplus E^u$, with at least one of the extremal bundles being non-trivial, such that the extremal bundles have uniform contraction and expansion: there exists a constant $m\in\mathbb{N}$ such that for every $x\in\Lambda$: \begin{align*} \bullet \ &\|Df^m(x)v\|\leq 1/2 \text{ for each unit vector } v\in E^s , \\ \bullet \ &\| Df^{-m}(x)v\|\leq 1/2 \text{ for each unit vector } v\in E^u \\ \end{align*} and the other bundles, which are called center bundles, neither contract nor expand uniformly. If all center bundles are trivial, then $\Lambda$ is called a {\it hyperbolic set}. Now, we say that $\Lambda$ is {\it strongly partially hyperbolic} if both extremal bundles are non-trivial, the center bundle is non-trivial, and all center bundles are one-dimensional. In particular, a strongly partially hyperbolic set is not hyperbolic. We say that a diffeomorphism $f:M\rightarrow M$ is {\it partially hyperbolic} (resp. {\it strongly partially hyperbolic}) if $M$ is a partially hyperbolic (resp. strongly partially hyperbolic) set of $f$.
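As a concrete instance of these definitions (our illustration; the paper's examples are the deformed linear Anosov maps discussed below), the linear automorphism of the $2$-torus induced by $A=\left(\begin{smallmatrix}2&1\\1&1\end{smallmatrix}\right)$, Arnold's cat map, has derivative $A$ at every point, so the whole torus is a hyperbolic set with trivial center bundles. A short numerical check confirms the uniform contraction/expansion condition already with $m=1$, as well as the domination inequality of the text with $N=1$.

```python
import numpy as np

# Illustration (ours, not from the paper): the derivative of Arnold's cat map
# on the 2-torus is A = [[2, 1], [1, 1]] at every point, so M itself is a
# hyperbolic set (all center bundles trivial) and the map is Anosov.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)        # eig returns unit eigenvectors
v_u = eigvecs[:, np.argmax(eigvals)]       # spans E^u (eigenvalue ~ 2.618)
v_s = eigvecs[:, np.argmin(eigvals)]       # spans E^s (eigenvalue ~ 0.382)

# Uniform contraction/expansion with m = 1 (the constant 1/2 of the text):
contraction = np.linalg.norm(A @ v_s)                # = |lambda_s| ~ 0.382
expansion = np.linalg.norm(np.linalg.inv(A) @ v_u)   # = 1/lambda_u ~ 0.382
# E^s dominates E^u in the convention of the text, already with N = 1:
domination = contraction * expansion                 # ~ 0.146
print(contraction <= 0.5, expansion <= 0.5, domination <= 0.5)
```

For a map with non-trivial center directions the same check would be run on each pair of consecutive bundles of the splitting, possibly with a larger $N$.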
When $M$ is a hyperbolic set we say that $f$ is {\it Anosov}. We remark now that strongly partially hyperbolic diffeomorphisms are by definition far from homoclinic tangencies, since all central subbundles have dimension one. Examples of partially hyperbolic diffeomorphisms with higher dimensional central directions can be given by deforming some linear Anosov diffeomorphisms as in Ma\~n\'e's example. For instance, let $A$ be a linear Anosov diffeomorphism with eigenvalues $\lambda_1<\lambda_2<\lambda_3<1<\lambda_4$, such that $\lambda_2$ and $\lambda_3$ are close to 1. Then we can create a pitchfork bifurcation, producing two fixed points $p$ and $q$ with eigenvalues $\mu_1(p)<1<\mu_2(p)<\mu_3(p)<\mu_4(p)$ and $\mu_1(q)<\mu_2(q)<1<\mu_3(q)<\mu_4(q)$, such that $\mu_3(q)$ is still close to 1. Moreover, as in Ma\~n\'e's argument \cite{M} we can guarantee that this diffeomorphism is transitive. Now we can perform another pitchfork bifurcation on $q$ producing two other fixed points $q_1$ and $q_2$ with eigenvalues $\mu_1(q_1)<\mu_2(q_1)<1<\mu_3(q_1)<\mu_4(q_1)$ and $\mu_1(q_2)<\mu_2(q_2)<\mu_3(q_2)<1<\mu_4(q_2)$. Once again, this diffeomorphism is transitive. Now, since the bifurcations preserve the center unstable leaves, we can guarantee that there exists a dominated splitting $E^s\oplus E_1^c\oplus E_2^c\oplus E^u$, where $E^c_1$ is related to $\mu_2$ and $E^c_2$ is related to $\mu_3$. As in Ma\~n\'e's example, the unstable foliation will be minimal. In particular, it will also be topologically mixing. \begin{remark}\label{r.hps} If $f$ is partially hyperbolic, by Theorem 6.1 of \cite{HPS} there exist strong stable and strong unstable foliations that integrate $E^s$ and $E^u$. More precisely, for any point $x\in M$ there is a unique invariant local strong stable manifold $W_{loc}^{ss}(x)$ which is a smooth graph of a function $\phi_x: E^s\rightarrow E^c\oplus E^u$ (in local coordinates), and varies continuously with $x$.
In particular, $W_{loc}^{ss}(x)$ has uniform size for every $x\in M$. The same holds for $W_{loc}^{uu}(x)$, integrating $E^u$. Saturating these local manifolds, we obtain two foliations, which we denote by $\mathcal{F}^s$ and $\mathcal{F}^u$ respectively. Indeed, $\mathcal{F}^s(x)=\bigcup_{n\geq 0}f^{-n}(W^{ss}_{loc}(f^n(x)))$. An analogous definition holds for $\mathcal{F}^{u}$. \end{remark} \subsection{Robustness and Genericity} As mentioned before, we deal with the space $\operatorname{Diff}^1(M)$ of $C^1$ diffeomorphisms of $M$ endowed with the $C^1$-topology. This is a Baire space. Thus any residual subset, i.e. a countable intersection of open and dense sets, is dense. When a property $P$ holds for any diffeomorphism in a fixed residual subset, we will say that $P$ holds generically, or that a generic diffeomorphism exhibits the property $P$. On the other hand, we say that a property holds robustly for a diffeomorphism $f$ if there exists a neighbourhood $\mathcal{U}$ of $f$ such that the property holds for any diffeomorphism in $\mathcal{U}$. In this way, we say that a diffeomorphism $f\in\operatorname{Diff}^1(M)$ is \emph{robustly transitive} if it admits a neighborhood entirely formed by transitive diffeomorphisms. In this paper we let ${\mathcal T}(M)$ denote the open subset of $\operatorname{Diff}^1(M)$ formed by robustly transitive diffeomorphisms which are far from tangencies. Notice that being far from tangencies is, by definition, an open condition. Also, we define $\mathcal{T}_{NH}(M)$ as the interior of the set of robustly transitive strongly partially hyperbolic diffeomorphisms, which is a subset of $\mathcal{T}(M)$. When dealing with properties which involve objects defined by the diffeomorphism itself, we need to deal with the continuations of these objects.
For instance, when we say that a homoclinic class of $f$ is robustly topologically mixing, we are fixing a hyperbolic periodic point $p$ of $f$ and a neighbourhood $\mathcal{U}$ of $f$ such that for any $g\in \mathcal{U}$ the continuation $p(g)$ of $p$ is defined and the homoclinic class $H(p(g),g)$ is topologically mixing, i.e. for any open sets $U$ and $V$ of $H(p(g),g)$ there exists $N>0$ such that for any $n\geq N$ we have $g^n(U)\cap V\neq \emptyset$. Another example of a robust property is given by the following well known result, which says that partial hyperbolicity is a robust property. \begin{proposition}[p. 289 of \cite{BDV}] \label{p.phpersiste} Let $\Lambda$ be a (strongly) partially hyperbolic set for $f$. Then, there exist a neighborhood $U$ of $\Lambda$ and a $C^1$ neighborhood ${\mathcal U}$ of $f$ such that every $g$-invariant set $\Gamma\subset U$ is (strongly) partially hyperbolic, for every $g\in{\mathcal U}$. \end{proposition} \section{Some Tools} In this section, we collect some results that will be used in the proofs of the main results. \subsection{Perturbative Tools} We start with Franks' lemma \cite{F}. This lemma enables us to deal with some non-linear problems using linear arguments. \begin{theorem}[Franks' lemma] \label{l.franks} Let $f\in \operatorname{Diff}^1(M)$ and let $\mathcal{U}$ be a $C^1$-neighborhood of $f$ in $\operatorname{Diff}^1(M)$. Then, there exist a neighborhood $\mathcal{U}_0\subset \mathcal{U}$ of $f$ and $\delta>0$ such that if $g\in \mathcal{U}_0$, $S=\{p_1,\dots,p_m\}\subset M$ and $\{L_i:T_{p_i}M\to T_{g(p_{i})}M\}_{i=1}^{m}$ are linear maps satisfying $\|L_i-Dg(p_i)\|\leq\delta$ for $i=1,\dots, m$, then there exists $h\in\mathcal{U}$ coinciding with $g$ outside any prescribed neighborhood of $S$ and such that $h(p_i)=g(p_i)$ and $Dh(p_i)=L_i$.
\end{theorem} One of the main applications of Franks' lemma is to change the index of a periodic orbit, after a perturbation, if some Lyapunov exponent of the orbit is weak enough. More precisely, we can prove the following: \begin{lemma} Let $f\in \operatorname{Diff}^1(M)$ have a sequence of hyperbolic periodic points $p_n$ with index $s+1$, having negative Lyapunov exponents arbitrarily close to zero. Then, there exists $g$ arbitrarily close to $f$ having hyperbolic periodic points of indices $s$ and $s+1$. \label{change.index}\end{lemma} {\it Proof:} Given a neighborhood $\mathcal{U}$ of $f$, let us consider $\delta>0$ given for this neighborhood and $\mathcal{U}_0$ another small enough neighborhood of $f$. We will suppose that the sequence of periodic points $p_n$ is such that the weakest contracting eigenvalue $\lambda_{p_n}$ of $Df^{\tau(p_n)}$, that is, the largest one among those with absolute value smaller than 1, has multiplicity one. The argument is similar in the other cases. Our hypothesis says that $$ \frac{1}{\tau(p_n)}\log \|Df^{\tau(p_n)}|E^s(p_n)\|=\frac{1}{\tau(p_n)}\log |\lambda_{p_n}| $$ approaches zero as $n$ grows. Now, let us consider $E_n$ as the eigenspace of the eigenvalue $\lambda_{p_n}$, and $\{E_l\}$ the other eigenspaces. We can define linear maps $L_i:T_{f^i(p)}M\rightarrow T_{f^{i+1}(p)}M$, equal to $Df(f^i(p))$ on all subspaces $Df^i(p)E_l$, but on $Df^i(p)E_n$ we choose $L_i$ satisfying $\|L_i |_{Df^i(p)E_n}\|=(1+\alpha)\|Df(f^i(p))|_{Df^i(p)E_n}\|$, where $\alpha>0$ depends on $\delta>0$. Then, $L_i$ is $\delta$-close to $Df(f^i(p))$, and also preserves the eigenspace $Df^i(p)E_n$. Hence, using Franks' lemma we can find $g\in \mathcal{U}$ such that $p_n$ is still a periodic point and moreover $Dg(f^i(p))=L_i$, where $g$ depends on the periodic point $p_n$. In particular, $E_n$ is an invariant subspace of $T_{p_n}M$ for $Dg^{\tau(p_n)}$ and moreover: $$ \|Dg^{\tau(p)}(p_n)| E_n\|=(1+\alpha)^{\tau(p_n)}|\lambda_{p_n}|.
$$ Hence, since $\frac{1}{\tau(p_n)}\log \|Dg^{\tau(p_n)}|E_n(p_n)\|=\log(1+\alpha)+\frac{1}{\tau(p_n)}\log|\lambda_{p_n}|$, by hypothesis we can choose $p$ equal to some $p_n$ whose negative Lyapunov exponent is larger than $-\log(1+\alpha)$, in order to have, after the above perturbation: $$ \frac{1}{\tau(p)}\log \|Dg^{\tau(p)}|E_n(p)\|>0. $$ Since $L_i$ can be chosen such that the other Lyapunov exponents of $p$ remain unchanged, we have that $p$ has index $s$. To finish the proof, we just observe that Franks' lemma changes the initial diffeomorphism only in an arbitrarily small neighborhood of the orbit of $p$; therefore the neighborhood $\mathcal{U}$ could be chosen such that the hyperbolic periodic point $p_1$ of $f$ has a continuation, which implies that $p_1(g)$ is also a hyperbolic periodic point of $g$ with index $s+1$. $ \square$ Another result that we shall use is Hayashi's connecting lemma \cite{H}. This will be helpful to create some heterodimensional cycles. \begin{theorem}[$C^1$-connecting lemma] Let $f \in \operatorname{Diff}^1(M)$ and let $p_1,\, p_2$ be hyperbolic periodic points of $f$ such that there exist sequences $y_n\in M$ and positive integers $k_n$ such that: \begin{itemize} \item $y_n\rightarrow y \in W_{loc}^u(p_1, f)$, $y\neq p_1$; and \item $f^{k_n}(y_n)\rightarrow x \in W_{loc}^s(p_2, f)$, $x\neq p_2$. \end{itemize} Then, there exists a $C^1$ diffeomorphism $g$ $C^1$-close to $f$ such that $W^u(p_1,g)$ and $W^s(p_2,g)$ have a non-empty intersection close to $y$. \label{connecting lema}\end{theorem} As is well known, this result implies that if $f$ is a generic diffeomorphism having a non-hyperbolic homoclinic class which contains two periodic points $p$ and $q$ with different indices, then there exist arbitrarily small perturbations of $f$ such that $p$ and $q$ belong to a heterodimensional cycle. \subsection{Generic Results} We start this subsection with one of the main generic results used in this paper, a result of Abdenur and Crovisier, Theorem 3 in \cite{AC}. They prove the existence of a decomposition of any generic isolated chain-transitive set.
Since we are interested here solely in the study of isolated homoclinic classes, we quote their result only for homoclinic classes. \begin{theorem}[Theorem 3 in \cite{AC}] There exists a residual subset $\mathcal{R}\subset \operatorname{Diff}^1(M)$ such that for every $f\in{\mathcal R}$, any isolated homoclinic class $H(p,f)$ of a hyperbolic periodic point $p$ of $f$ decomposes uniquely as a finite union $H(p)=\Lambda_1\cup \ldots \cup \Lambda_l$ of disjoint compact sets, on each of which $f^l$ is topologically mixing. Moreover, $l$ is the smallest positive integer such that $W^u(f^l(p))$ has a non-empty transversal intersection with $W^s(p)$. \label{t.acmixing} \end{theorem} As an application, they obtain that generically any transitive diffeomorphism is topologically mixing. The positive integer $l$ in the previous theorem is, in fact, the period of the homoclinic class, $l(O(p))$. This number gives useful information about the intersections between stable and unstable manifolds of hyperbolic periodic points homoclinically related to $p$. More precisely, since for any two homoclinically related periodic points $p_1$ and $p_2$ the homoclinic classes $H(p_1)$ and $H(p_2)$ are equal, we can recast the following result of \cite{AC} as: \begin{proposition}[Proposition 2.1 in \cite{AC}] Consider hyperbolic periodic points $p_1$ and $p_2$ which are homoclinically related to $p$, and such that $W^u(p_1)\pitchfork W^s(p_2)\neq \emptyset$. Then $W^u(f^n(p_1))\pitchfork W^s(p_2)\neq \emptyset$ if, and only if, $n$ belongs to the group $l(O(p))\mathbb{Z}$. \label{prop.bla}\end{proposition} \begin{remark} In particular, if $\tilde{p}$ is homoclinically related to $p$, then $W^u(f^n(\tilde{p}))\pitchfork W^s(\tilde{p})\neq \emptyset$ if, and only if, $n\in l(O(p))\mathbb{Z}$.
\label{rmk.int.}\end{remark} Here, we also investigate properties of topologically mixing homoclinic classes which may not be the whole manifold. In this sense we remark the following: \begin{remark} Also as a direct consequence of Theorem \ref{t.acmixing}, we have that generically, if an isolated homoclinic class $H(p)$ is topologically mixing then $W^u(f(p))$ has a non-empty transversal intersection with $W^s(p)$. Now, since this intersection is robust, we point out that Theorem \ref{baranasquestion} is a consequence of this and Proposition 2.3 in \cite{AC}. \label{mixing1}\end{remark} The result below, due to Bonatti and Crovisier \cite{BC}, proves that a large class of transitive diffeomorphisms have the property that the whole manifold coincides with a homoclinic class. \begin{theorem}[Bonatti and Crovisier] \label{BC} There exists a residual subset $\mathcal{R}$ of $\operatorname{Diff}^1(M)$ such that for every transitive diffeomorphism $f\in{\mathcal R}$, if $p$ is a hyperbolic periodic point of $f$ then $M=H(p,f)$. \end{theorem} Another generic result is the following. \begin{theorem}[Theorem A, item (1), \cite{CMP}] \label{t.cmp} There exists a residual subset $\mathcal{R}$ of $\operatorname{Diff}^1(M)$ such that for every $f\in {\mathcal R}$, any two homoclinic classes $H(p_1,f)$ and $H(p_2,f)$ are either equal or disjoint. \end{theorem} The next result, from \cite{ABCDW}, says that generically, homoclinic classes are index complete. \begin{theorem}[Theorem 1 in \cite{ABCDW}] There is a residual subset $\mathcal{R}\subset \operatorname{Diff}^1(M)$ such that, for every $f\in \mathcal{R}$, any homoclinic class containing hyperbolic periodic points of indices $i$ and $j$ also contains hyperbolic periodic points of index $k$ for every $i\leq k\leq j$. \label{t.ABCDW}\end{theorem} The next tool we shall use is due to Abdenur, Bonatti and Crovisier in \cite{ABC}, and extends Sigmund's result \cite{S2} to the non-hyperbolic setting.
\begin{theorem}[Theorem 3.5, item (a), in \cite{ABC}] Let $\Lambda$ be an isolated non-hyperbolic transitive set of a $C^1$-generic diffeomorphism $f$. Then the set of periodic measures supported in $\Lambda$ is a dense subset of the set $M_f(\Lambda)$ of invariant measures supported in $\Lambda$. \label{ABC} \end{theorem} Crovisier, Sambarino and Yang in \cite{CSY} showed that for any diffeomorphism $f$ in an open and dense subset far from homoclinic tangencies, every homoclinic class of $f$ has a kind of strong partial hyperbolicity. More precisely, the difference is that the ``partially hyperbolic splitting'' found by them could have either one or both trivial extremal bundles. In this last scenario, by our definition the diffeomorphism would not be partially hyperbolic. However, by an abuse of notation, we will continue calling it partially hyperbolic, as in \cite{CSY}. Their result gives further important properties, such as information on the minimal and maximal indices of periodic points inside the homoclinic class. More precisely: \begin{theorem}[Theorem 1.1(2) in \cite{CSY}] There is an open and dense subset $\mathcal{A}\subset \operatorname{Diff}^1(M)\setminus cl(\mathcal{HT})$ such that for every $f\in \mathcal{A}$, any homoclinic class $H(p)$ is a partially hyperbolic set of $f$, $$ T_{H(p)}M= E^s\oplus E^c_1\oplus \ldots \oplus E^c_k\oplus E^u, $$ with $\dim E^c_i=1$, $i=1,\ldots, k$; moreover, the minimal stable dimension of the periodic points of $H(p)$ is $\dim(E^s)$ or $\dim(E^{s})+1$. Similarly, the maximal stable dimension of the periodic orbits of $H(p)$ is $\dim(E^s)+k$ or $\dim(E^s)+k-1$. For every $i$, $1\leq i\leq k$, there exist periodic points in $H(p)$ whose Lyapunov exponent along $E^c_i$ is arbitrarily close to $0$. \label{p.CSY}\end{theorem} \section{Large Periods Property}\label{lpp} In this section we introduce the large periods property, our main tool to detect mixing properties.
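As a concrete illustration (ours, not part of the paper's argument), the following Python sketch tests this property by brute force for the doubling map $x\mapsto 2x \bmod 1$ on the circle: the points fixed by the $n$-th iterate are exactly the rationals $k/(2^n-1)$, on which the map acts as multiplication by $2$ modulo $2^n-1$. The helper names are our own.

```python
# Brute-force check of the large periods property for the doubling map
# x -> 2x (mod 1).  Fixed points of the n-th iterate are k/(2^n - 1),
# and the map acts on them as multiplication by 2 mod 2^n - 1.
from fractions import Fraction

def orbit(k, n):
    """Orbit of the periodic point k/(2^n - 1) under the doubling map."""
    m = 2 ** n - 1
    pts, j = set(), k % m
    for _ in range(n):
        pts.add(Fraction(j, m))
        j = (2 * j) % m
    return sorted(pts)

def covering_radius(pts):
    """Half of the largest circular gap of the sorted points pts:
    every point of the circle lies within this distance of pts."""
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(1 + pts[0] - pts[-1])  # gap wrapping around 1 ~ 0
    return max(gaps) / 2

def find_dense_periodic_orbit(n, eps):
    """Search Fix(f^n) for a point whose orbit is eps-dense in the circle."""
    for k in range(1, 2 ** n - 1):
        if covering_radius(orbit(k, n)) <= eps:
            return k
    return None
```

For instance, for $n=10$ and $\varepsilon=0.2$ the search succeeds already at $k=3$: the orbit of $3/1023$ has covering radius $129/1023\approx 0.126\leq\varepsilon$.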
\begin{definition} Let $f:X\to X$ be a homeomorphism of a metric space. We say that $f$ has the large periods property if for any $\varepsilon>0$ there exists $N_0\in\mathbb{N}$ such that for every $n\geq N_0$ there exists $p_n\in\operatorname{Fix}(f^n)$ whose orbit under $f$ is $\varepsilon$-dense in $X$. \end{definition} A simple remark is that if $X$ has an isolated point and $f$ has the large periods property, then $X$ is a singleton. The large periods property can be used as a criterion to assure mixing, as the next result shows. \begin{lemma} \label{criteriopramixing} Every homeomorphism of a metric space with the large periods property is topologically mixing. \end{lemma} \begin{proof} Let $f:X\to X$ be a homeomorphism with the large periods property. Notice that $f$ is transitive. Indeed, given non-empty and disjoint open sets $U$ and $V$, take $\varepsilon<\min\{diam(U),diam(V)\}$. By the large periods property, there exists a point $p\in\operatorname{Per}(f)$ whose orbit is $\varepsilon$-dense in $X$. This implies that there exist a point $y\in V$ and $n>0$ such that $f^n(y)\in U$. Thus $f$ is transitive. We now prove that $f$ is topologically mixing. Let $U$ and $V$ be non-empty and disjoint open sets. By the transitivity of $f$ there exists a first iterate $n_1$ such that $f^{n_1}(U)\cap V\neq\emptyset$. In particular, $f^j(U)\cap V=\emptyset$ for every $j=1,...,n_1-1$. Take an open ball $B\subset U$ satisfying $$f^{n_1}(B)\subset f^{n_1}(U)\cap V,$$ and $\varepsilon=diam(B)/2$. Let $N_0=N_0(\varepsilon)$ be given by the large periods property. We claim that $f^n(V)\cap U\neq\emptyset$, for every $n\geq N_0$. Indeed, we know that there exists $p\in\operatorname{Fix}(f^{\tau})$, with $\tau=n+n_1$, whose orbit under $f$ is $\varepsilon$-dense in $X$. By the choice of $\varepsilon$, there is an iterate of $p$ in $B$. Since $p$ is periodic, we shall assume for simplicity that $p$ itself is in $B$.
This implies that $f^{n_1}(p)\in V$, and therefore $$f^n(f^{n_1}(p))=f^{n+n_1}(p)=f^{\tau}(p)=p\in U.$$ This proves our claim, and establishes the lemma. \end{proof} It is natural to ask whether the converse of this result holds. However, Carvalho and Kwietniak \cite{CK} gave an example of a homeomorphism of a compact metric space with the two-sided limit shadowing property, but without periodic points. Theorem B in \cite{CK} establishes that the two-sided limit shadowing property implies topological mixing. Therefore, the converse of Lemma \ref{criteriopramixing} is not true in general. We now turn our attention to the differentiable setting and the semi-local dynamics of homoclinic classes. \begin{definition} Let $f:M\to M$ be a diffeomorphism and let $H(p)$ be a homoclinic class of $f$. We say that an invariant subset $\Lambda\subset H(p)$ has the \emph{homoclinic large periods property} if for any $\varepsilon>0$ there exists $N_0\in\mathbb{N}$ such that for every $n\geq N_0$ it is possible to find a point $p_n\in\operatorname{Fix}(f^n)$ in $\Lambda$, homoclinically related with $p$, whose orbit under $f$ is $\varepsilon$-dense in $\Lambda$. \end{definition} In the sequel, we shall establish a result which produces hyperbolic horseshoes having the homoclinic large periods property when there exists a special type of homoclinic intersection. For its proof we shall need the classical shadowing lemma. \begin{definition} Let $f:X\to X$ be a homeomorphism of a metric space $X$. Given $\delta>0$, we say that a sequence $\{x_n\}$ is a $\delta$-pseudo orbit if $d(f(x_n),x_{n+1})<\delta$, for every $n$. We say that the pseudo orbit is $\varepsilon$-shadowed by a point $x\in X$, for $\varepsilon>0$, if $d(f^n(x),x_n)<\varepsilon$, for every $n$. The pseudo orbit is said to be periodic if there exists a minimal positive integer $\tau$ such that $x_{n+\tau}=x_n$, for every $n$. The number $\tau$ is called the period of the pseudo orbit.
\end{definition} \begin{theorem}[Shadowing Lemma \cite{Rob}] \label{t.shadowinglemma} Let $\Lambda$ be a locally maximal hyperbolic set. For every $\varepsilon>0$ there exists $\delta>0$ such that every periodic $\delta$-pseudo orbit can be $\varepsilon$-shadowed by a periodic orbit. Moreover, if $\tau$ is the period of the pseudo orbit, then the periodic point is a fixed point for $f^{\tau}$. \end{theorem} \begin{lemma} \label{horseshoemixing} Let $f$ be a diffeomorphism with a hyperbolic periodic point $p$ such that there exists a point of transverse intersection $q\in W^s(p)\pitchfork W^u(f(p))$. Then, for any small enough neighborhood $U$ of $O(p)\cup O(q)$, the restriction of $f$ to the maximal invariant set $\Lambda_U=\cap_{n\in\mathbb{Z}}f^n(U)$ has the homoclinic large periods property. \end{lemma} \begin{proof} For this proof, we denote by $\tau:=\tau(p)$ the period of $p$. It is a well-known result (see for instance Theorem 4.5, p. 260 in \cite{Rob}) that for any small enough neighborhood $U$ of $O(p)\cup O(q)$ the maximal invariant set $\Lambda_U=\cap_{n\in\mathbb{Z}}f^n(U)$ is a hyperbolic set. Take an arbitrary $\varepsilon>0$ and $\delta>0$ given by Theorem \ref{t.shadowinglemma}. We claim that there exists a number $N_0$ such that for every $n\geq N_0$ it is possible to construct a periodic $\delta$-pseudo orbit inside $U$, with period exactly equal to $n$, and whose Hausdorff distance to $O(p)\cup O(q)$ is smaller than $\varepsilon$. Once we have settled this, the shadowing lemma will produce periodic orbits, which are fixed points for $f^n$, and whose Hausdorff distance to $O(p)\cup O(q)$ is at most $2\varepsilon$. In particular, these orbits must be $3\varepsilon$-dense in $\Lambda_U$, with respect to the Hausdorff distance. Moreover, if $\varepsilon$ is small enough, all of these periodic orbits will be homoclinically related, by the hyperbolicity of $\Lambda_U$. Thus, we are left to show our claim.
With such goal in mind, we take a large iterate $x=f^{N\tau}(q)$ such that $$f^{-r\tau}(x)\in B(p,\delta/2),$$ for every $r=0,...,\tau-1$. Observe that $f^{-1}(x)\in W^u(p)$, since $q\in W^u(f(p))$. This implies that there exists a smallest positive integer $l\in\mathbb{N}$ such that $$f^{-l\tau-1}(x)\in B(p,\delta/2).$$ Now, we can give the number $N_0$. For each $r=1,...,\tau-1$, let $k_r=rl$ and take $L=\prod_{r=1}^{\tau-1}k_r$. We define $N_0:=L\tau$. Observe that if $n\geq N_0$ we can write $$n=(a+L)\tau+r=(a+L-k_r)\tau+k_r\tau+r,$$ for some $r\in\{1,...,\tau-1\}$ and $a\in\mathbb{N}$. To complete the proof, we shall give the pseudo orbit. It will be given by several orbit strings, with jumps at specific points. For this reason, and for the sake of clarity, we divide the construction in several steps between each jump. \begin{itemize} \item \emph{The first string:} Define $x_0=f^{-(l+r)\tau-1}(x)$, $x_{j}=f^j(x_0)$, for every $j=1,...,l\tau$. \item \emph{The second string:} Notice that $f(x_{l\tau})=f^{-r\tau}(x)\in B(p,\delta/2)$. Put $x_{l\tau+1}=f^{-(l+r-1)\tau-1}(x)\in B(p,\delta/2)$, and $x_{l\tau+1+j}=f^j(x_{l\tau+1})$, for every $j=1,...,l\tau$. \item \emph{The procedure continues inductively:} Notice again that $f(x_{2l\tau+1})=f^{-(r-1)\tau}(x)$ $\in B(p,\delta/2)$, and put $x_{2l\tau+2}=f^{-(l+r-2)\tau-1}(x)$. We proceed with the construction in an analogous way, defining $x_{jl\tau+j}:=f^{-(l+r-j)\tau-1}(x)$ and the next $l\tau$ terms of the sequence as simply the iterates of this point, for every $j<r$. In this manner we construct a sequence with $rl\tau+r-1$ terms. \item \emph{The last string:} Observe that $f(x_{rl\tau+r-1})=f^{-\tau}(x)\in B(p,\delta/2)$. Hence, we can choose $x_{rl\tau+r}=x$ and the next $(a+L-k_r)\tau-1$ terms of the sequence as simply the iterates of this point, all of which belong to $B(p,\delta/2)$.
\item \emph{The last jump:} Finally, we close the pseudo orbit by putting $x_{(a+L-k_r)\tau+k_r\tau+r}=x_0$. \end{itemize} This gives a periodic $\delta$-pseudo orbit with period $n$, as required. \end{proof} As an application, from Lemmas \ref{criteriopramixing} and \ref{horseshoemixing} we obtain the following result. \begin{proposition} \label{mixinghorseshoe} Let $f$ be a diffeomorphism with a hyperbolic periodic point $p$ having a non-empty transversal intersection between its stable manifold and the unstable manifold of $f(p)$, i.e. there exists $q\in W^s(p,f)\pitchfork W^u(f(p),f)$. Then, for any small enough neighborhood $U$ of $O(p)\cup O(q)$, the maximal invariant set $\Lambda_U$ in $U$ is a topologically mixing hyperbolic set. \end{proposition} As a byproduct of these arguments, we prove that if a homoclinic class has the homoclinic large periods property then this holds robustly. \begin{proposition} \label{r.lpp} Let $f$ be a diffeomorphism with a hyperbolic periodic point $p$ such that the homoclinic class of $p$, $H(p)$, has the homoclinic large periods property. Then, $H(p(g))$ has the homoclinic large periods property for any diffeomorphism $g$ close enough to $f$. \end{proposition} \begin{proof}[Proof of Proposition \ref{r.lpp}] Since $H(p)$ has the homoclinic large periods property, the period of this homoclinic class has to be one, $l(O(p))=1$. Indeed, unless the class reduces to a fixed point, there will be two periodic points homoclinically related to $p$ whose periods are two distinct prime numbers. Hence, by Proposition \ref{prop.bla} we have that $W^s(p)\pitchfork W^u(f(p))\neq\emptyset$.
Therefore, since this intersection is robust, we can conclude, again by Proposition \ref{prop.bla} and Remark \ref{rmk.int.}, that $W^s(\tilde{p})\pitchfork W^u(g(\tilde{p}))\neq\emptyset$ for every hyperbolic periodic point $\tilde{p}$ homoclinically related to $p(g)$, for every diffeomorphism $g$ close enough to $f$. So, take an arbitrary $\varepsilon>0$. There exists a periodic point $\tilde{p}\in H(p(g))$, homoclinically related with $p(g)$, whose orbit is $\varepsilon/2$-dense in $H(p(g))$. Now, Lemma \ref{horseshoemixing} implies that there exists $N_0$ such that for every $n\geq N_0$ we can find a periodic orbit $\gamma=O(b)$ homoclinically related to $\tilde{p}$, $b\in\operatorname{Fix}(g^n)$, which contains a subset $\varepsilon/2$-close to $O(\tilde{p})$ in the Hausdorff distance. In particular, $\gamma$ is an $\varepsilon$-dense orbit inside $H(p(g))$. This establishes that $H(p(g))$ has the homoclinic large periods property, and completes the proof. \end{proof} Observe that the above proof indeed establishes that if a homoclinic class $H(p)$ of a diffeomorphism $f$ is such that $W^s(p)\pitchfork W^u(f(p))\neq\emptyset$, then $H(p)$ has the homoclinic large periods property. Thus, combining these facts and Theorem \ref{t.acmixing}, we have the following corollary. \begin{corollary} Let $f$ be a generic diffeomorphism. An isolated homoclinic class of $f$ is topologically mixing if, and only if, it has the homoclinic large periods property robustly. \label{c.lpp}\end{corollary} \section{Topologically mixing homoclinic classes} \subsection{Denseness of Bernoulli measures: Proof of Theorem \ref{teoA}} We recall the following result of Bowen. \begin{theorem}[\cite{Bow1}, Theorem 34] \label{bowen} Let $\Lambda$ be a topologically mixing isolated hyperbolic set. Then, there exists a Bernoulli measure supported in $\Lambda$.
\end{theorem} \begin{remark} Actually, Bowen constructs a measure $\mu_B$ such that $(f|_{\Lambda},\mu_B)$ is a $K$-automorphism. But, in this case, $(f|_{\Lambda},\mu_B)$ is measure-theoretically isomorphic to a mixing Markov chain and by \cite{FO} it is isomorphic to a Bernoulli shift. \end{remark} Now, we give the proof of Theorem \ref{teoA}. \begin{proof}[Proof of Theorem \ref{teoA}] Let $H(p)$ be an isolated topologically mixing homoclinic class of a $C^1$ generic diffeomorphism $f$. Let $\mu$ be an invariant measure supported in $H(p)$ and let $\varepsilon>0$ be arbitrarily chosen. By Theorem \ref{ABC} there exists a measure $\mu_{\tilde{p}}$, supported on a hyperbolic periodic orbit $O(\tilde{p})$, with $\tilde{p}\in H(p)$, which is $\varepsilon/2$-close to $\mu$. Since $f$ is $C^1$ generic, Theorem \ref{t.cmp} implies that $H(\tilde{p})=H(p)$. In particular, we have that $H(\tilde{p})$ is topologically mixing. From Remark \ref{mixing1} we know that there exists a point $q\in W^s(\tilde{p})\pitchfork W^u(f(\tilde{p}))$. For every small neighborhood $U$ of $O(\tilde{p})\cup O(q)$, Proposition \ref{mixinghorseshoe} tells us that the maximal invariant set $\Lambda_U=\cap_{n\in\mathbb{Z}}f^n(U)$ is a topologically mixing hyperbolic set. Moreover, since $q$ is a homoclinic point of $\tilde{p}$, by choosing $U$ sufficiently small we have that the points in $\Lambda_U$ spend portions of their orbits as large as we please shadowing the orbit of $\tilde{p}$. Now, take $\nu$ to be the Bernoulli measure supported in $\Lambda_U$ which is given by Theorem \ref{bowen}. Since a typical point in the support of $\nu$ spends large portions of its orbit shadowing the orbit of $\tilde{p}$, we can choose $U$ such that $\nu$ is $\varepsilon/2$-close to $\mu_{\tilde{p}}$. Thus, $\nu$ is $\varepsilon$-close to $\mu$ and we are done.
\end{proof} \begin{remark} The techniques employed above can be used to give a new proof of Sigmund's result on the denseness of Bernoulli measures for hyperbolic topologically mixing basic sets \cite{S1}. Indeed, our use of the large periods property gives a geometric alternative to the symbolic approach of Sigmund, and a proof of his result using our techniques would proceed by the same argument as above, in the proof of Theorem \ref{teoA}, after modifying the following key ingredients: first, Sigmund's result on denseness of periodic measures in a hyperbolic basic set \cite{S2} can be used instead of Theorem \ref{ABC}; second, Bowen's proof of Smale's Spectral Decomposition Theorem (see p. 47 of \cite{Bow}) can be used instead of Theorem \ref{t.acmixing} and Remark \ref{mixing1} to show the existence of nice intersections between the stable and unstable manifolds of hyperbolic periodic points. Therefore, with these modifications, the same proof as above can be applied. \end{remark} \section{Robustly large Homoclinic class} In this section we shall prove Theorem \ref{main.homoclinic} as a consequence of the following result: \begin{theorem} Let $f\in \operatorname{Diff}^1(M)$ be a robustly transitive strong partially hyperbolic diffeomorphism, with $TM=E^s\oplus E^c_1\oplus \ldots \oplus E^c_k\oplus E^u$, having hyperbolic periodic points $p_s$ and $p_u$ of indices $s$ and $d-u$, respectively, where $s=\dim E^s$ and $u=\dim E^u$. Then, there exists an open subset $\mathcal{V}_f$ whose closure contains $f$, such that $M=H(p_s(g))=H(p_u(g))$ for every $g\in \mathcal{V}_f$. \label{r.homoclinic}\end{theorem} Before we prove Theorem \ref{r.homoclinic}, let us see how it implies Theorem \ref{main.homoclinic}.
\begin{proof}[Proof of Theorem \ref{main.homoclinic}] First we observe that it suffices to deal with the interior of non-hyperbolic robustly transitive diffeomorphisms, since in the Anosov case the whole manifold is robustly a homoclinic class, which is a consequence of the shadowing lemma. Recall that $\mathcal{T}_{NH}(M)\subset{\mathcal T}(M)$ denotes the interior of non-hyperbolic robustly transitive diffeomorphisms far from homoclinic tangencies. Hence, by Theorem \ref{BC} and Theorem \ref{p.CSY} there exists a residual subset $\mathcal{R}$ in $\mathcal{T}_{NH}(M)$ such that if $f\in \mathcal{R}$ then: \begin{itemize} \item[a)] $M$ coincides with a homoclinic class; \item[b)] $f$ is partially hyperbolic, with the central bundle admitting a splitting into one-dimensional subbundles, i.e. $TM=E^s\oplus E_1^c\oplus \ldots \oplus E^c_k\oplus E^u$; \item[c)] either there exists a hyperbolic periodic point of index $s$, or there exist hyperbolic periodic points of index $s+1$ whose $(s+1)$-th Lyapunov exponent is arbitrarily close to zero, where $s=\dim E^s$; \item[d)] either there exists a hyperbolic periodic point of index $d-u$, or there exist hyperbolic periodic points of index $d-u-1$ whose $(d-u-1)$-th Lyapunov exponent is arbitrarily close to zero, where $u=\dim E^u$. \end{itemize} According to Theorem \ref{p.CSY}, $E^s$ and/or $E^u$ could be trivial. However, this cannot happen in our situation. Indeed, we claim that both $E^s$ and $E^u$ are non-trivial. In particular, $f$ is strongly partially hyperbolic. To see this, suppose by contradiction the existence of $f\in \mathcal{R}$ with $E^s$ trivial. Hence, by item c) above, $f$ should have either a source or hyperbolic periodic points of index one whose unique negative Lyapunov exponent is arbitrarily close to zero. In the latter case, we can use Lemma \ref{change.index} to perturb $f$ in order to find a source as well.
Therefore, if $E^s$ is trivial, then we can find a diffeomorphism $g$ close to $f$ having a source, which contradicts the transitivity of $g$. Similarly, we conclude that $E^u$ is also non-trivial. Hence, item b) above can be replaced by: \begin{itemize} \item[b')] every $f\in \mathcal{R}$ is strongly partially hyperbolic. \end{itemize} Moreover, by the same argument above using Lemma \ref{change.index}, after a perturbation we can assume that $f$ has hyperbolic periodic points of indices $s$ and $d-u$. Thus, we can find a dense subset $\mathcal{R}_1$ inside $\mathcal{T}_{NH}(M)$ formed by robustly transitive strong partially hyperbolic diffeomorphisms $f$ satisfying the hypotheses of Theorem \ref{r.homoclinic}. Then, considering $\mathcal{V}_f$ given by Theorem \ref{r.homoclinic} for every $f\in \mathcal{R}_1$, we have that $$ \mathcal{A}=\bigcup_{f\in \mathcal{R}_1} \mathcal{V}_f $$ is an open and dense subset of $\mathcal{T}_{NH}(M)\subset{\mathcal T}(M)$. By Theorem \ref{r.homoclinic}, for every diffeomorphism in $\mathcal{A}$ the whole manifold $M$ coincides with a homoclinic class. This ends the proof. \end{proof} In the sequel we prove some technical results which are key steps in the proof of Theorem \ref{r.homoclinic}. The following result allows us to find open sets of diffeomorphisms for which the topological dimension of the closures of the stable (and unstable) manifolds of hyperbolic periodic points is larger than their differentiable dimension. \begin{lemma} Let $f\in \operatorname{Diff}^1(M)$ be a robustly transitive strong partially hyperbolic diffeomorphism. Suppose there are hyperbolic periodic points $p_j$, $j=i,\ i+1, \ldots, k$, with indices $I(p_j)=j$ for $f$.
Then, given any small enough neighborhood $\mathcal{U}$ of $f$ on which the continuations of the hyperbolic periodic points $p_j$ are defined, there exists an open set $\mathcal{V}\subset \mathcal{U}$ such that for every $g\in \mathcal{V}$: $$ W^s(p_k(g))\subset cl(W^s(p_{k-1}(g)))\subset \ldots \subset cl(W^s(p_{i+1}(g)))\subset cl(W^s(p_{i}(g))), $$ and $$ W^u(p_i(g))\subset cl(W^u(p_{i+1}(g)))\subset \ldots \subset cl(W^u(p_{k-1}(g)))\subset cl(W^u(p_{k}(g))). $$ \label{l.blender}\end{lemma} To prove the above lemma we will use the following result, which is a consequence of Proposition 6.14 and Lemma 6.12 in \cite{BDV}, which are results of Diaz and Rocha \cite{DR}. It is worth pointing out that this result is a consequence of the well-known blender technique, which appears by means of unfolding a codimension-one heterodimensional cycle far from homoclinic tangencies. \begin{proposition} Let $f$ be a $C^1$ diffeomorphism with a heterodimensional cycle associated to saddles $p$ and $\tilde{p}$ with indices $i$ and $i+1$, respectively. Suppose that the cycle is $C^1$-far from homoclinic tangencies. Then there exists an open set $\mathcal{V}\subset \operatorname{Diff}^1(M)$ whose closure contains $f$ such that for every $g\in \mathcal{V}$ $$ W^s(\tilde{p}(g))\subset cl(W^s(p(g))) \text{ and } W^u(p(g))\subset cl(W^u(\tilde{p}(g))). $$ \label{p.blender}\end{proposition} \begin{proof}[Proof of Lemma \ref{l.blender}] Since $f$ is a robustly transitive strong partially hyperbolic diffeomorphism, we can assume, reducing $\mathcal{U}$ if necessary, that every diffeomorphism $g\in \mathcal{U}$ is transitive and strong partially hyperbolic. In particular, $\mathcal{U}$ is far from homoclinic tangencies, $\mathcal{U}\subset (cl(\mathcal{HT}(M)))^c$.
Now, using the transitivity of $f$, there are points $x_n$ converging to the stable manifold of $p_{i+1}$ such that a sequence of iterates $f^{m_n}(x_n)$ converges to the unstable manifold of $p_{i}$. Hence, we can use Hayashi's connecting lemma to perturb the diffeomorphism $f$ to $\tilde{f}$ such that $W^u(p_i(\tilde{f}))$ intersects $W^s(p_{i+1}(\tilde{f}))$, an intersection which we may assume to be transversal after a further perturbation, if necessary, since $\dim W^u(p_i(\tilde{f}))+\dim W^s(p_{i+1}(\tilde{f}))>d$. Hence, since $\tilde{f}$ is also transitive, we can use the connecting lemma once more to find $f_1\in {\mathcal{U}}$ close to $\tilde{f}$ exhibiting a heterodimensional cycle between $p_i(f_1)$ and $p_{i+1}(f_1)$. Moreover, and in fact this is needed to apply Proposition \ref{p.blender}, the intersection between $W^s(p_i(f_1))$ and $W^u(p_{i+1}(f_1))$ can be assumed to be quasi-transversal, in the sense that $ T_qW^s(p_i(f_1))\cap T_qW^u(p_{i+1}(f_1))=\{0\}$. If this is not the case, we can perturb the diffeomorphism using Franks' lemma to get this property. Thus, since $f_1$ is far from homoclinic tangencies, we can use Proposition \ref{p.blender} to find an open set $\mathcal{V}_1\subset \mathcal{U}$ such that $$ W^s(p_{i+1}(g))\subset cl(W^s(p_i(g))) \text{ and } W^u(p_{i}(g))\subset cl(W^u(p_{i+1}(g))), $$ for every $g\in \mathcal{V}_1$. Now, since $f_1$ is also robustly transitive, we can repeat the above argument to find $f_2\in \mathcal{V}_1$ exhibiting a heterodimensional cycle between $p_{i+1}$ and $p_{i+2}$. Thus, by Proposition \ref{p.blender} there exists an open set $\mathcal{V}_2\subset \mathcal{V}_1$, such that $$ W^s(p_{i+2}(g))\subset cl(W^s(p_{i+1}(g))) \text{ and } W^u(p_{i+1}(g))\subset cl(W^u(p_{i+2}(g))), $$ for every $g\in \mathcal{V}_2$.
Repeating this argument finitely many times we will find open sets $\mathcal{V}_{k-i}\subset \mathcal{V}_{k-i-1}\subset \ldots \subset \mathcal{V}_1$ such that $$ W^s(p_{i+j}(g))\subset cl(W^s(p_{i+j-1}(g))) \text{ and } W^u(p_{i+j-1}(g))\subset cl(W^u(p_{i+j}(g))), $$ for every $g\in \mathcal{V}_{j}$ and $j=1,\dots,k-i$. Taking $\mathcal{V}=\mathcal{V}_{k-i}$ the result follows. \end{proof} The next result uses properties of a partially hyperbolic splitting to guarantee that certain dense submanifolds of $M$ must intersect each other transversally and densely in the whole manifold. \begin{lemma} Let $f$ be a partially hyperbolic diffeomorphism on $M$ with non-trivial stable bundle $E^s$, and having a hyperbolic periodic point $p$ with index $s=\dim E^s$. If $W^s(O(p))$ and $W^u(O(p))$ are dense in $M$, then $M=H(p)$. \label{transversallity}\end{lemma} \begin{proof} Let $E^s\oplus E^c\oplus E^u$ be the partially hyperbolic splitting. Using Remark \ref{r.hps} we know that the local strong stable manifolds have uniform size. For any $x\in M$, since $W^u(O(p))$ is dense, there exists $q\in W^u(O(p))$ arbitrarily close to $x$. Also, by the hypothesis on the index of $p$ and the partially hyperbolic structure, we have $T_qW^u(O(p))=E^c\oplus E^u$. Hence, by the continuity of the local strong stable manifold, $W^{ss}_{loc}(y)$ intersects $W^u(O(p))$ transversally at a point close to $q$, for any point $y$ close enough to $q$. In particular, since $W^s(O(p))$ is also dense, there exists $\tilde{q}\in W^s(O(p))$ such that $W_{loc}^{ss}(\tilde{q})$ intersects $W^u(O(p))$ transversally. However, $W_{loc}^{ss}(\tilde{q})$ is contained in $W^s(O(p))$, which implies there is a transversal intersection between $W^s(O(p))$ and $W^u(O(p))$ close to $q$, and in particular close to $x$.
\end{proof} Finally, using the above lemmas we give a proof of Theorem \ref{r.homoclinic}. \begin{proof}[Proof of Theorem \ref{r.homoclinic}] Since $p_s$ and $p_u$ are hyperbolic periodic points, we take $\mathcal{U}$ small enough so that the continuations $p_s(g)$ and $p_u(g)$ are defined for every diffeomorphism $g\in \mathcal{U}$. Shrinking $\mathcal{U}$ if necessary, we may also assume that every $g\in \mathcal{U}$ is a strong partially hyperbolic diffeomorphism whose extremal bundles have the same dimensions as in the partially hyperbolic decomposition of $TM$ for $f$; this follows from the continuity of partial hyperbolicity and the robust existence of $p_s$ and $p_u$. Now, using Theorem \ref{BC} together with Theorem \ref{t.ABCDW} we can find a residual subset $\mathcal{R}$ of $\mathcal{U}$ such that $M$ coincides with a homoclinic class for every $g\in \mathcal{R}$, and moreover $g$ has hyperbolic periodic points of every index in $[s,d-u]\cap\mathbb{N}$. We fix $g\in \mathcal{R}$, and let $p_s=p_s(g)$, $p_{s+1}$, $\ldots$, $p_{d-u}=p_u(g)$ be hyperbolic periodic points of $g$ with indices $s$, $s+1$, $\ldots$, $d-u$, respectively. Also, for each $n\in \mathbb{N}$, let $\mathcal{V}_n\subset \mathcal{U}$ be small neighborhoods of $g$ such that any sequence $g_n\in \mathcal{V}_n$ converges to $g$ in the $C^1$-topology as $n$ goes to infinity. Now, since $g$ is still a robustly transitive strong partially hyperbolic diffeomorphism having hyperbolic periodic points of all possible indices, we denote by $\tilde{\mathcal{V}}_n\subset \mathcal{V}_n$ the open sets given by Lemma \ref{l.blender} applied to $g$ and $\mathcal{V}_n$.
Hence, using the invariance of the stable manifolds of hyperbolic periodic points, by Lemma \ref{l.blender} we have the following: \begin{equation} cl(W^s(O(p_{d-u}(r))))\subset cl(W^s(O(p_{d-u-1}(r))))\subset \ldots \subset cl(W^s(O(p_{s}(r)))), \label{eq10}\end{equation} for every $r\in \tilde{\mathcal{V}}_n$. {\it Claim: $W^u(O(p_{s}(r)))$ and $W^s(O(p_{d-u}(r)))$ are dense in $M$, for every $r\in \tilde{\mathcal{V}}_n$}. Since $r$ is transitive, there exists $x\in M$ such that the forward orbit of $x$ is dense in $M$. Now, since $r$ is partially hyperbolic, by Remark \ref{r.hps} there exists a strong stable foliation integrating the direction $E^{s}$, and its leaves have uniform local size. Hence, as in the proof of Lemma \ref{transversallity}, we can take $r^j(x)$ close enough to $p_s(r)$ so that $W^{ss}(x)$, the strong stable leaf containing $x$, intersects the local unstable manifold of $p_s(r)$, $W^u_{loc}(p_s(r))$. Therefore, since points in the same strong stable leaf have the same omega limit set, $W^u(O(p_s(r)))$ is dense in the whole manifold $M$. We can repeat this argument, using the existence of a point $y$ having a dense backward orbit and the existence of the strong unstable foliation, to conclude that $W^s(O(p_{d-u}(r)))$ is also dense in $M$. Thus, by equation (\ref{eq10}) and the Claim, $W^s(O(p_{s}(r)))$ is dense in $M$. Similarly, we can show that $W^u(O(p_{d-u}(r)))$ is also dense in $M$. Since $r$ is strong partially hyperbolic, and $W^s(O(p_{i}(r)))$ and $W^u(O(p_{i}(r)))$ are dense in $M$ for $i=s$ and $d-u$, we can apply Lemma \ref{transversallity} to $r$ and $r^{-1}$ to conclude that $$ M=H(p_{s}(r))=H(p_{d-u}(r)), $$ for every $r\in \tilde{\mathcal{V}}_n$.
Hence, the proof is finished by defining $\tilde{\mathcal{V}}_g=\cup_n \tilde{\mathcal{V}}_n$, and $$ \mathcal{V}_f=\bigcup_{g\in \mathcal{R}} \tilde{\mathcal{V}}_g, $$ which is an open and dense subset of $\mathcal{U}$, and hence contains $f$ in its closure. \end{proof} \end{document}
\begin{document} \begin{abstract} In this paper we provide a local construction of a Sasakian manifold from a given K\"ahler manifold. The manifold obtained in this way we call the Sasakian lift of its K\"ahler base. The almost contact metric structure is determined by the operation of lifting vector fields, an idea similar to lifts in Ehresmann connections. We show that the Sasakian lift inherits a geometry very close to that of its K\"ahler base. In some sense the geometry of the lift is in analogy with the geometry of a hypersurface in a K\"ahler manifold. We obtain structure equations relating the corresponding Levi-Civita connections, curvatures and Ricci tensors of the lift and its base. We study lifts of symmetries of different kinds: of the complex structure, of the K\"ahler metric, and of K\"ahler structure automorphisms. In connection with $\eta$-Ricci solitons we introduce a more general class of manifolds called twisted $\eta$-Ricci solitons. As we show, the class of $\alpha$-Sasakian twisted $\eta$-Ricci solitons is invariant under a naturally defined group of structure deformations. As a corollary it is proved that the orbit of the Sasakian lift of a steady or shrinking K\"ahler-Ricci soliton contains an $\alpha$-Sasakian Ricci soliton. In the case of an expanding K\"ahler-Ricci soliton the existence of an $\alpha$-Sasakian Ricci soliton is assured provided the expansion coefficient is small enough. \end{abstract} \maketitle \section{Introduction} The relations between Sasakian and K\"ahler manifolds are now quite well understood. When the structure is regular, i.e. the characteristic (Reeb) vector field is regular, the manifold can be viewed as a line or circle bundle over a K\"ahler manifold, where the K\"ahler structure is determined by the Sasakian structure. In this paper we consider a construction which is in some sense the reverse: it allows one to create a Sasakian manifold from a given K\"ahler manifold. The construction is natural, and the Sasakian manifolds obtained in this way share many properties of K\"ahler manifolds.
The construction of the Sasakian lift is purely local, so strictly speaking we should rather consider our construction in terms of germs of structures. There is an analogy between our construction and the idea of an Ehresmann connection on a fiber bundle $\pi:\mathcal P \rightarrow \mathcal B$, $\dim \mathcal B=n$, $\dim \mathcal P=n+k$. An Ehresmann connection is an $n$-dimensional distribution $\mathcal D$ on the total space, and there is a one-to-one correspondence between vector fields on the base manifold and sections of $\mathcal D$. In terms of structure equations there is an analogy with the theory of hypersurfaces in Riemannian manifolds. Let $\iota: (\mathcal M,\bar g) \rightarrow (\mathcal{\bar M}, g)$ be an inclusion, $\bar g=\iota^* g$. The first structure equation relates the connection of the ambient manifold and the connection of the hypersurface, $\nabla_XY = \bar\nabla_XY+h(X,Y)\xi$, where $h$ is the second fundamental form and $\xi$ the normal vector field. Now the first structure equation for the lift $\pi: (\mathcal M , g) \rightarrow (\mathcal B, \bar g) $, where $\mathcal M= \mathcal B^L$ is the Sasakian lift and $\mathcal B$ its K\"ahler base, reads \begin{equation} \bar\nabla_{X^L}Y^L = (\nabla_XY)^L-\Phi(X^L,Y^L)\xi, \end{equation} $\xi$ being the Reeb vector field. So the covariant derivative on the Sasakian lift is determined by the lift of the covariant derivative on its K\"ahler base. Another possible point of view of our construction is the theory of Riemannian submersions with 1-dimensional fibers. The geometry of the Sasakian lift is very close to the geometry of its base. For example, for the Ricci tensor $\bar Ric$ of the lift, the tensor field $\bar\rho(X,Y) = \bar Ric(X,\phi Y)$ is totally skew-symmetric, i.e. a 2-form; moreover it is related to the Ricci form $\rho$ and the K\"ahler form $\omega$ of the base by \begin{equation} \bar\rho =\pi^*\rho - 2\pi^*\omega, \end{equation} and in particular this Sasakian Ricci form is closed.
In geometric terms we study relations between infinitesimal symmetries of the complex structure, Killing vector fields on the K\"ahler manifold, and a certain class of infinitesimal symmetries of the Sasakian lift. In particular there is a local map between infinitesimal symmetries of the K\"ahler structure and of the almost contact structure of the Sasakian manifold. Other results of this kind are relations between holomorphic space forms and Sasakian manifolds of constant $\phi$-sectional curvature, and between K\"ahler Einstein manifolds and Sasakian $\eta$-Einstein manifolds. We provide relations between the curvatures and Ricci tensors of a K\"ahler manifold and its Sasakian lift. The obtained results allow us to show that the Sasakian lift of a holomorphic space form is a Sasakian manifold of constant $\phi$-sectional curvature, and that the lift of a K\"ahler Einstein manifold is an $\eta$-Einstein Sasakian manifold. One of the important subjects of this paper is the study of properties of the Sasakian lift of a K\"ahler-Ricci soliton. Our main result here is that the Sasakian lift satisfies what we call the equation of a twisted $\eta$-Ricci soliton \begin{align*} & Ric +\frac{1}{2}\mathcal L_X g = \lambda g +2C_1 \alpha_X\odot\eta +C_2 \eta\otimes\eta, \\ & \alpha_X = \mathcal L_X\eta, \end{align*} where $\lambda$, $C_1$, $C_2$ are some constants and $\odot$ denotes the symmetric tensor product. In this paper an almost contact metric manifold is called an $\eta$-Ricci soliton if there is a vector field $X$ such that \begin{equation} Ric +\frac{1}{2}\mathcal L_X g = \lambda g + \mu \eta\otimes \eta, \end{equation} thus our definition is more general than that provided in \cite{ChoKimura}, where strictly speaking an $\eta$-Ricci soliton is a metric which satisfies the above equation with $X=\xi$. In the case of an $\alpha$-Sasakian manifold the Reeb vector field is Killing, therefore the manifold is such an $\eta$-Ricci soliton only if it is an $\eta$-Einstein manifold.
The condition that the soliton vector field is $X=\xi$ is rather restrictive. For example, for the very wide class of manifolds which satisfy $\mathcal L_\xi\eta=0$ and $d\Phi = 2f\eta\wedge \Phi$, for some local function $f$, the shape of the Ricci tensor of a strict $\eta$-Ricci soliton is completely determined. Namely, the Ricci tensor is necessarily of the form, with $h=\frac{1}{2}\mathcal L_\xi \phi$, \begin{equation} Ric(X,Y) = \alpha g(X,Y)+\beta g(X, h\phi Y)+ \gamma \eta(X)\eta(Y), \end{equation} where $ \alpha$, $\beta$, $\gamma$ are some functions. In the case $h=0$, the manifold is $\eta$-Einstein. For the sake of completeness the paper contains a short exposition of the geometry of a class of deformations of Sasakian manifolds. These deformations extend the well-known $\mathcal D$-homotheties. The main result here is that this kind of deformation, which we call $\mathcal D_{\alpha,\beta}$-homotheties, \begin{equation} g |_{\mathcal D}\mapsto g' |_{\mathcal D} = \alpha g|_{\mathcal D}, \quad g |_{\{\xi\}} \mapsto g'|_{\{\xi\}} = \beta^2 g|_{\{\xi\}}, \end{equation} $\alpha$, $\beta = const. > 0$, maps an $\alpha$-Sasakian manifold into another $\alpha'$-Sasakian manifold. As we will see, the action of $\mathcal D_{\alpha,\beta}$-homotheties determines three invariant classes of twisted $\alpha$-Sasakian $\eta$-Ricci solitons, where $C_1 < \frac{1}{2}$, $C_1 = \frac{1}{2}$ or $C_1> \frac{1}{2}$. The lift of a K\"ahler-Ricci soliton provides an example of a Sasakian twisted $\eta$-Ricci soliton with $C_1 < \frac{1}{2}$. This raises an existence question: do there exist $\alpha$-Sasakian twisted $\eta$-Ricci solitons with $C_1 \geqslant \frac{1}{2}$? In some cases we have a stronger result: there is a $\mathcal D_{\alpha,\beta}$-homothety such that the image is an $\alpha$-Sasakian Ricci soliton. As we will see, this is the case for lifts of steady or shrinking K\"ahler-Ricci solitons. The lift of an expanding K\"ahler-Ricci soliton can always be deformed into an $\alpha$-Sasakian $\eta$-Ricci soliton, and into a Ricci soliton if the expansion coefficient is small enough.
The notation used may confuse the reader. This particularly concerns how we use the notion of an $\alpha$-Sasakian manifold. In some parts of the paper this term is used in a wider sense: a manifold is called $\alpha$-Sasakian if there is a real constant $c > 0$ such that the covariant derivative $\nabla\phi$ satisfies $(\nabla_X\phi)Y=c(g(X,Y)\xi-\eta(Y)X)$. However, in expressions like $\frac{\beta}{\alpha}$-Sasakian, it is assumed that $c=\frac{\beta}{\alpha}$. \section{Preliminaries} In this section we recall some basic facts about almost contact metric manifolds, and in particular about Sasakian manifolds. \subsection{Almost contact metric manifolds} Let $\mathcal{M}$ be a smooth connected odd-dimensional manifold, $\dim \mathcal{M} =2n+1 \geqslant 3$. An almost contact metric structure on $\mathcal{M}$ is a quadruple of tensor fields $(\phi,\xi,\eta,g)$, where $\phi$ is a $(1,1)$-tensor field, $\xi$ a vector field, $\eta$ a 1-form, and $g$ a Riemannian metric, which satisfy \cite{Blair} \begin{align} & \phi^2X = -X + \eta(X)\xi, \quad \eta(\xi) = 1, \\ & g(\phi X,\phi Y) = g(X,Y)-\eta(X)\eta(Y), \end{align} where $X$, $Y$ are arbitrary vector fields on $\mathcal{M}$. The triple $(\phi,\xi,\eta)$ is called an almost contact structure (on $\mathcal{M}$). From the definition it follows that the tensor field $\Phi(X,Y)=g(X,\phi Y)$ is totally skew-symmetric, a 2-form on $\mathcal{M}$, called the fundamental form. In the literature the vector field $\xi$ is referred to as the characteristic vector field, or Reeb vector field. By analogy the form $\eta$ is called the characteristic form. The distribution $\mathcal{D} = \{\eta =0\}$ is called the characteristic distribution, or simply the kernel distribution, as its sections $X\in \Gamma^\infty(\mathcal{D})$ satisfy $\eta(X)=0$. As $\eta$ is nowhere zero, $\dim \mathcal{D} = 2n$. A manifold equipped with a fixed almost contact metric structure is called an almost contact metric manifold.
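Two standard consequences of these axioms, used freely below, can be obtained in a few lines; the following is a sketch of the usual argument, cf. \cite{Blair}. First, $\phi^2\xi = -\xi+\eta(\xi)\xi = 0$, hence
\begin{equation*}
0 = \phi(\phi^2\xi) = \phi^2(\phi\xi) = -\phi\xi + \eta(\phi\xi)\xi,
\end{equation*}
so $\phi\xi = \eta(\phi\xi)\xi$; applying $\phi$ once more gives $0 = \phi^2\xi = \eta(\phi\xi)\phi\xi = \eta(\phi\xi)^2\xi$, whence $\eta(\phi\xi)=0$ and $\phi\xi = 0$. Second, setting $Y=\xi$ in the compatibility condition yields $0 = g(\phi X,\phi\xi) = g(X,\xi)-\eta(X)$, that is $\eta = g(\cdot,\xi)$.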
For a $(1,1)$-tensor field $S$, let $N_S$ denote its Nijenhuis torsion, thus \begin{equation} N_S(X,Y)=S^2[X,Y]+[SX,SY]-S([SX,Y]+[X,SY]). \end{equation} An almost contact metric structure $(\phi,\xi,\eta,g)$ is said to be normal if the tensor field $N^{(1)} = N_\phi +2d\eta\otimes \xi$ vanishes everywhere. Normality is the condition of integrability of the naturally defined complex structure $J$ on the product of the real line (or a circle) and the almost contact metric manifold. There are three most widely studied classes of almost contact metric manifolds: \begin{itemize} \item[a)] Contact metric manifolds, defined by the condition \begin{equation} d\eta = \Phi; \end{equation} \item[b)] Almost Kenmotsu manifolds \begin{equation} d\eta = 0,\quad d\Phi = 2\eta\wedge\Phi; \end{equation} \item[c)] Almost cosymplectic (or almost coK\"ahler) manifolds \begin{equation} d\eta =0, \quad d\Phi =0. \end{equation} \end{itemize} If additionally the manifold is normal we obtain the following corresponding classes: \begin{itemize} \item[an)] Sasakian manifolds, i.e. contact metric and normal, \begin{equation} (\nabla_X\phi)Y = g(X,Y)\xi - \eta(Y)X; \end{equation} \item[bn)] Kenmotsu manifolds \begin{equation} (\nabla_X\phi)Y = g(\phi X,Y)\xi - \eta(Y)\phi X; \end{equation} \item[cn)] Cosymplectic manifolds \begin{equation} \nabla\phi =0. \end{equation} \end{itemize} Above we have provided characterizations of the respective manifolds in terms of the covariant derivative $\nabla\phi$ of the structure tensor, with respect to the Levi-Civita connection of $g$. For an almost contact metric manifold, a $\mathcal{D}$-homothety with coefficient $\alpha$ is a deformation of the almost contact metric structure $(\phi,\xi,\eta,g) \rightarrow (\phi',\xi',\eta',g')$, defined by \begin{align*} & \phi'=\phi,\quad \xi'=\frac{1}{\alpha}\xi,\quad \eta' = \alpha\eta, \\ & g' = \alpha g +(\alpha^2-\alpha)\eta\otimes\eta.
\end{align*} In this paper we will consider more general deformations of an almost contact metric structure, defined by real parameters $\alpha$, $\beta > 0$, and given by \begin{align*} & \phi' = \phi, \quad \xi'=\frac{1}{\beta}\xi, \quad \eta' = \beta\eta, \\ & g'= \alpha g + (\beta^2-\alpha)\eta\otimes\eta. \end{align*} We call such deformations $\mathcal D_{\alpha,\beta}$-homotheties. \subsection{Sasakian manifolds} Here we provide some very basic properties of Sasakian manifolds. On a Sasakian manifold \begin{equation} (\nabla_X\phi)Y = g(X,Y)\xi - \eta(Y)X, \end{equation} which implies that the Reeb vector field $\xi$ is a Killing vector field, $\mathcal{L}_\xi g=0$; moreover \begin{equation} \mathcal{L}_\xi\phi =0, \quad \nabla_X\xi = -\phi X,\quad \nabla_\xi\xi=0, \end{equation} and for the curvature $R_{XY}\xi$ and the Ricci tensor $Ric(X,\xi)$ we have \begin{align} \label{l:curv2} & R_{XY}\xi = \eta(Y)X-\eta(X)Y, \quad R_{X\xi}\xi = X-\eta(X)\xi, \\ & \label{l:ric:xi} Ric(X,\xi) = 2n\eta(X), \end{align} in particular $Ric(\xi,\xi)=2n$, and $R_{XY}\xi =0$ for $X$, $Y$ sections of the characteristic distribution, $X$, $Y\in \Gamma^\infty(\mathcal{D})$, see \cite{Blair}. More generally, an almost contact metric manifold is called $\alpha$-Sasakian if \begin{equation} (\nabla_X\phi)Y = \alpha(g(X,Y)\xi -\eta(Y)X), \end{equation} for some real constant $\alpha \neq 0$. As we proceed further we will obtain the following equation for the Ricci tensor $Ric$ of an $\alpha$-Sasakian manifold: \begin{equation} \label{e:e:sol} Ric +\frac{1}{2}(\mathcal{L}_Xg) = \lambda g +2C_1\alpha_X\odot\eta + C_2\eta\otimes \eta, \end{equation} where $\lambda$, $C_1$, $C_2$ are some real constants, and the vector field $X$ and 1-form $\alpha_X$ satisfy \begin{equation} \label{e:e:sol2} \eta(X) = 0, \quad (\mathcal{L}_X\eta)(Y)=\alpha_X(Y); \end{equation} in particular on an $\alpha$-Sasakian manifold $\alpha_X(\xi)=0$, as by assumption $\eta(X)=0$, hence $(\mathcal{L}_X\eta)(\xi)=2d\eta(X,\xi)=0$.
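Returning to the $\mathcal D_{\alpha,\beta}$-homotheties defined above, a direct check (a sketch, using only $\eta\circ\phi = 0$) shows that they again produce an almost contact metric structure:
\begin{align*}
g'(\phi'X,\phi'Y) &= \alpha g(\phi X,\phi Y) + (\beta^2-\alpha)\eta(\phi X)\eta(\phi Y) = \alpha\bigl(g(X,Y)-\eta(X)\eta(Y)\bigr)\\
&= \alpha g(X,Y) + (\beta^2-\alpha)\eta(X)\eta(Y) - \beta^2\eta(X)\eta(Y)\\
&= g'(X,Y)-\eta'(X)\eta'(Y),
\end{align*}
while $\eta'(\xi')=\eta(\xi)=1$ and $(\phi')^2 = \phi^2 = -Id + \eta\otimes\xi = -Id + \eta'\otimes\xi'$.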
\subsection{K\"ahler manifolds} Almost complex structure on manifold is a $(1,1)$-tensor field $J$, such that $J^2X = -Id$. Structure is said to be complex if any point admits a local chart, such that local coefficient of $J$, in this chart, are all constants. Necessary and sufficient condition, is vanishing Nijenhuis torsion of $J$. If additionally there is Riemannian metric $g$, with properties $g(JX,JY)= g(X,Y)$, and complex structure is covariant constant for Levi-Civita connection - manifold is called a K\"ahler manifold. Tensor field $\omega(X,Y)=g(X,JY)$, is a maximal rank 2-form, called K\"ahler form, as $J$ is parallel, K\"ahler form is always closed. In particular K\"ahler form determines symplectic structure on K\"ahler manifold. In dimensions $ > 2$, K\"ahler manifold of constant sectional curvature is always locally flat, thus more natural notion is holomorphic sectional curvature. This is sectional curvature of "complex" plane ie. a plane spanned by vectors $X$, and $JX$. If holomorphic curvature does not depend neither on point nor on complex plane section - K\"ahler manifold is said to have constant holomorphic curvature. Infinitesimal automorphism of complex structure is a vector field $X$, which generates local 1-parameter flow $f_t$, of biholomorphisms, that is $f_{t*}J = Jf_{t*}$. The vector field $X$ is an infinitesimal automorphism if and only if $\mathcal{L}_XJ =0$. Note that if $X$ determines infinitesimal automorphism, then also vector field $JX$ is an infinitesimal automorphism. Moreover complex vector field $X^\mathbb{C}=X-\sqrt{-1}JX$, is holomorphic. Its local coordinates in complex chart are holomorphic functions. We say that vector field $X$ is K\"ahler structure automorphism if the two of the three conditions \begin{equation} \mathcal{L}_XJ = 0,\quad \mathcal{L}_Xg =0, \quad \mathcal{L}_X\omega =0, \end{equation} are satisfied. Then the third condition is automatically satisfied. 
For example, if $X$ is a complex structure automorphism and a Killing vector field, then it preserves the K\"ahler form. In particular it is locally Hamiltonian with respect to the symplectic structure determined by the K\"ahler form, that is, locally \begin{equation} \omega(X,Y)= dH(Y), \end{equation} for some locally defined function $H$, cf. \cite{Bess},\cite{KoNo2}. \subsection{Ricci solitons} Given geometric objects on a manifold, it is important to know whether objects with some particular properties exist. One possible way to find such a particular entity is through a geometric flow; the point, of course, is how to create a proper law of evolution. An object of this kind is the so-called Ricci flow introduced by Hamilton \cite{Hamilton} \begin{equation} \frac{\partial}{\partial t}g= -2Ric(g_t), \quad t\in [0,T), \quad T > 0, \end{equation} where we search for a solution on some non-empty interval with given initial condition $g_0 = g$. At present the Ricci flow is one of the most extensively studied subjects. The goals are two-fold, analytical and geometrical: on the side of analysis there are studies of existence problems for the Ricci flow, while on the side of geometry there are many new manifolds which admit non-trivial Ricci flows \cite{ChowKnopf},\cite{ChowLuNi}. A particular class of solutions consists of flows of the form \begin{equation} g_t = c(t)f_t^*g, \end{equation} where $f_t$ is a $1$-parameter group of diffeomorphisms. Such solutions are called Ricci solitons. In some sense they represent trivial solutions of the Ricci flow: $g_0$ and $g_t$ are always isometric up to homothety. In infinitesimal terms a Ricci soliton is a Riemannian manifold (Riemannian metric) which admits a vector field $X$ such that \begin{equation} Ric +\frac{1}{2}\mathcal{L}_Xg = \lambda g, \end{equation} for some real constant $\lambda \in \mathbb{R}$, \cite{Cao1},\cite{Cao2},\cite{Ivey},\cite{SongWeink}. Depending on the sign of $\lambda$ there are shrinking Ricci solitons: $\lambda >0$, steady: $\lambda =0$, and expanding: $\lambda < 0$.
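A standard illustrative example, not taken from the constructions of this paper, is the Gaussian soliton: on flat $\mathbb{R}^n$ one has $Ric=0$, and for $H=\frac{\lambda}{2}|x|^2$ a direct computation gives
\begin{equation*}
Hess_H = \lambda g, \qquad Ric + Hess_H = \lambda g,
\end{equation*}
so the flat metric is a gradient Ricci soliton for every value of $\lambda$, realizing all three types on the same underlying metric.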
Of course in this classification only the sign of $\lambda$ counts, as the homothety $g \mapsto g' = cg$, $c > 0$, provides a Ricci soliton with soliton constant $\lambda' = \lambda/c$ and soliton vector field $X' = X/c$; indeed $Ric(cg)=Ric(g)$ and $\mathcal{L}_{X/c}(cg)=\mathcal{L}_Xg$. In particular we can always normalize the equation to $\lambda =1, 0, -1$. Assuming $X$ is a gradient, $g(X,Y)=dH(Y)$, we have $(\mathcal{L}_Xg)(Y,Z) = 2Hess_H(Y,Z)$, where as usual $Hess_H$ stands for the Hessian of the function, defined by \begin{equation} Hess_H(Y,Z)=(\nabla_Y dH)(Z), \end{equation} i.e. it is the covariant derivative of the differential of the function $H$. Therefore in the case $X=grad\, H$ we obtain \begin{equation} Ric + Hess_H = \lambda g. \end{equation} The solution to the Ricci flow equation in the case of an initial metric which is K\"ahler is a family of K\"ahler metrics. Therefore, in the particular case of a Ricci soliton, we obtain that the vector field $X$ satisfies $\mathcal L_X J=0$. Equivalently, the metric of a K\"ahler manifold is a K\"ahler-Ricci soliton if \begin{equation} \rho +\frac{1}{2}\mathcal{L}_X\omega = \lambda\omega, \quad \mathcal L_X J=0, \end{equation} where $\rho$ denotes the Ricci form \cite{Bryant}. \section{Sasakian lift of K\"ahler manifold} Let $\mathcal{N}$ be a K\"ahler manifold with K\"ahler structure $(J,g)$, and let $\omega$ be the K\"ahler form $\omega(X,Y) = g(X,JY)$. Assume there is a globally defined 1-form $\tau$ such that $\omega = d\tau$; if there is no such form, i.e. the real cohomology class $[\omega] \neq 0$, we restrict the structure to some open subset of $\mathcal{N}$ to ensure the existence of $\tau$. Then on the product $\mathcal{M} = \mathbb{R}\times\mathcal{N}$ we introduce the structure of a Sasakian manifold in terms of a lift of vector fields on $\mathcal{N}$.
Set $$ \xi = \partial_t,\quad \eta = dt + \pi_2^*\tau, $$ and for a vector field $X$ on the K\"ahler base define its lift $X \mapsto X^L$, $$X^L= -\pi_2^*\tau(X)\xi+X, $$ where $\pi_1$, $\pi_2$ are the projections $$ \pi_1: \mathcal{M} \rightarrow \mathbb{R},\quad \pi_2: \mathcal{M}\rightarrow \mathcal{N}, $$ onto the first and second product components. If $f:\mathcal{N} \rightarrow \mathbb{R}$ is a smooth function on the K\"ahler base, we set $\bar{f}:\mathcal{M}\rightarrow \mathbb{R}$, $\bar{f}=f\circ\pi_2$; then $(fX)^L = \bar{f}X^L$. Note that for $\bar{f}$ we have $\xi\bar{f}= d\bar{f}(\xi)=df(\pi_{2*}\xi)=0$. For functions $f_i$ and vector fields $X_i$, $i=1,2$, on $\mathcal{N}$ we have \begin{equation} (f_1 X_1+f_2X_2)^L = \bar{f}_1X_1^L+\bar{f}_2X_2^L, \end{equation} in particular $(\cdot)^L$ is $\mathbb{R}$-linear. With the help of the operation of the lift we now introduce on $\mathcal{M}$ a tensor field $\phi$ and a metric $g^L$, requiring \begin{equation} \phi X^L = (JX)^L, \quad \phi\xi =0, \quad g^L = \eta\otimes\eta + \pi_2^*g. \end{equation} Note the following formula for the commutator of lifts: \begin{equation} \label{e:bra:lif} [X^L,Y^L] = [X,Y]^L-2d\eta(X^L,Y^L)\xi. \end{equation} The following proposition is a simple verification with the help of the provided definitions. \begin{proposition} The tensor fields $(\phi,\xi,\eta, g^L)$ determine an almost contact metric structure on $\mathcal{M}$. \end{proposition} \begin{proof} If $(X_1,X_2,\ldots, X_{2n})$ is a local frame of vector fields on the K\"ahler base, then $(X_1^L,X_2^L,\ldots,X^L_{2n})$ is a local frame which spans the characteristic distribution $\mathcal D = \ker \eta = \{ \eta = 0 \}$; together with the Reeb vector field they create a local frame on the lift $\mathcal M$. Therefore any vector field $\bar Y$ on the lift can be given locally by \begin{equation} \bar Y =a^0\xi + \sum_{i=1}^{2n}a^i X_i^L, \end{equation} where $a^i$, $i=0,\ldots, 2n$, are some functions; therefore it is enough to verify that $\phi^2 X^L = -X^L$.
By definition \begin{equation} \phi^2 X^L = \phi (JX)^L = (J^2 X)^L = -X^L. \end{equation} Similarly, in the case of the metric, given vector fields $\bar Y$, $ \bar Z$, \begin{equation} g^L(\phi\bar Y,\phi \bar Z) = \sum_{i,j=1}^{2n} a^i b^j g^L(\phi X_i^L,\phi X_j^L), \end{equation} and from the definition we have \begin{equation} g^L(\phi X_i^L,\phi X_j^L) = g(JX_i,JX_j)\circ \pi_2; \end{equation} as the base manifold is K\"ahler, $g(JX_i,JX_j) = g(X_i,X_j)$, while on the other hand \begin{equation} g^L(X_i^L, X_j^L ) = g(X_i,X_j)\circ\pi_2, \end{equation} hence $g^L(\phi \bar Y, \phi \bar Z) = g^L(\bar Y, \bar Z)-\eta(\bar Y)\eta(\bar Z)$. \end{proof} The proof of the above Proposition says a little more. \begin{corollary} For $(t,q)\in \mathbb{R}\times \mathcal{N}$, the map $X \mapsto X^L$ establishes an isometry $(T_q\mathcal{N}, g_q)\rightarrow (\mathcal{D}_{(t,q)}, g^L|_\mathcal{D})$. In particular if $E_i$, $i=1,\ldots 2n$, is an orthonormal local frame on the K\"ahler base, the lifts $E_i^L$, $i=1,\ldots 2n$, form an orthonormal frame spanning the contact distribution. \end{corollary} In what follows we simplify notation, and instead of e.g. \begin{equation*} g^L(X^L,Y^L) = g(X,Y)\circ \pi_2, \end{equation*} we write $ g^L(X^L,Y^L) = g(X,Y), $ if it does not lead to confusion. Similarly $d\eta(X^L,Y^L)=\Phi(X^L,Y^L)=\omega(X,Y)$. For a function on the K\"ahler base, according to our simplified notation, $X^L\bar{f}=\overline{Xf}$. \begin{proposition} The manifold $\mathcal{M}$, equipped with the structure $(\phi,\xi,\eta,g^L)$, is a Sasakian manifold. \end{proposition} \begin{proof} First we note that the fundamental form of $\mathcal{M}$ is just the pullback of the K\"ahler form $\omega$, $\Phi = \pi_2^*\omega$. On the other hand $d\eta = d\pi_2^*\tau = \pi_2^*d\tau = \pi_2^*\omega = \Phi$, by the assumption on $\tau$ and our remark above. Therefore $\mathcal{M}$ is a contact metric manifold.
To end the proof we directly verify that $\mathcal{M}$ is normal, $N^{(1)} = 0$. It is enough to verify normality on vector fields of the form $X^L$, $N^{(1)}(X^L,Y^L)= 0$, as they span the module $\Gamma^\infty(\mathcal{D})$ of all sections of the contact distribution, and to verify directly that $N^{(1)}(\xi,X^L)=0$. As $N^{(1)} = N_\phi +2d\eta\otimes\xi$, with the help of (\ref{e:bra:lif}) we obtain \begin{equation} N_\phi(X^L,Y^L) = (N_J(X,Y))^L -2 d\eta(X^L,Y^L)\xi = -2 d\eta(X^L,Y^L)\xi, \end{equation} since $J$ is a complex structure, which by the Newlander-Nirenberg theorem is equivalent to the vanishing of its Nijenhuis torsion, $N_J=0$. So $$ N^{(1)}(X^L,Y^L) = -2d\eta(X^L,Y^L)\xi +2d\eta(X^L,Y^L)\xi =0. $$ The case $N^{(1)}(\xi, X^L)$ is almost evident, as for every vector field on the K\"ahler base $[\xi, X^L]=0$ and $d\eta(\xi, \cdot) =0$. \end{proof} The almost contact metric structure constructed as above we call the Sasakian lift of a K\"ahler structure. Consequently, the manifold itself we call the Sasakian lift of the K\"ahler manifold. If the particular manifold is not explicitly stated, we just use the term Sasakian lift to emphasize that the almost contact metric structure is obtained from some K\"ahler base manifold with the help of the construction described above. \subsection{Structure equations} Here we provide the fundamental relations between the Levi-Civita connections of the K\"ahler base and its Sasakian lift. Let $\bar\nabla$ denote the operator of covariant derivative of the Levi-Civita connection of the Sasakian lift metric, $\bar{\nabla} = LC(g^L)$, while $\nabla=LC(g)$ is the Levi-Civita connection of the K\"ahler base.
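Before stating the structure equations, we sketch how the commutator formula (\ref{e:bra:lif}) follows directly from the definition of the lift; here we use the convention $d\tau(X,Y)=\frac12\bigl(X\tau(Y)-Y\tau(X)-\tau([X,Y])\bigr)$ and suppress the pullbacks $\pi_2^*$:
\begin{align*}
[X^L,Y^L] &= [X-\tau(X)\xi,\; Y-\tau(Y)\xi] = [X,Y] - \bigl(X\tau(Y)-Y\tau(X)\bigr)\xi\\
&= [X,Y]^L - \bigl(X\tau(Y)-Y\tau(X)-\tau([X,Y])\bigr)\xi\\
&= [X,Y]^L - 2\,d\tau(X,Y)\,\xi = [X,Y]^L - 2\,d\eta(X^L,Y^L)\,\xi,
\end{align*}
where we used $[\xi,X]=0$ and $\xi(\tau(X))=0$ in the first line, and $d\tau=\omega$, $d\eta(X^L,Y^L)=\omega(X,Y)$ in the last.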
\begin{proposition} \label{p:streqs} For vector fields $X$, $Y$ on the K\"ahler base we have \begin{align} \label{streqs1} & \bar{\nabla}_{X^L}\xi = - \phi X^L = -(JX)^L, \\ \label{streqs2} & \bar{\nabla}_{X^L}Y^L = (\nabla_XY)^L -\Phi(X^L,Y^L)\xi. \end{align} \end{proposition} \begin{proof} The first structure equation comes from a property of any Sasakian manifold and the definition of the lift. The second structure equation is a consequence of the Koszul formula for the Levi-Civita connection, applied to both the K\"ahler base and its lift. Using the Koszul formula we need to take into account that \begin{equation*} g^L(X^L,Y^L) = g(X,Y),\quad X^Lg^L(Y^L,Z^L)=Xg(Y,Z). \end{equation*} Therefore \begin{align*} & 2g^L(\bar \nabla_{X^L}Y^L, Z^L) = X^Lg^L(Y^L,Z^L)+Y^Lg^L(X^L,Z^L) - \\ & \qquad Z^Lg^L(X^L,Y^L) + g^L([X^L, Y^L], Z^L) + g^L([Z^L,X^L],Y^L) + \\ & \qquad g^L([Z^L,Y^L],X^L) = 2 g(\nabla_XY, Z)\circ\pi_2 = 2g^L((\nabla_XY)^L, Z^L); \end{align*} on the other hand the projection of $\bar\nabla_{X^L}Y^L$ on $\xi$ is given by $g^L(\bar\nabla_{X^L}Y^L,\xi) = -g^L(\bar{\nabla}_{X^L}\xi,Y^L) = -\Phi(X^L,Y^L)$. Here we use only the fact that on a Sasakian manifold always $\nabla\xi = -\phi$. \end{proof} Note that the above formula agrees with the formula for the commutator of the lifts: \begin{align*} & [X^L,Y^L] = \bar\nabla_{X^L}Y^L-\bar\nabla_{Y^L}X^L = \\ & \qquad (\nabla_XY)^L-\Phi(X^L,Y^L)\xi - (\nabla_YX)^L +\Phi(Y^L,X^L)\xi = \\ & \qquad [X,Y]^L -2\Phi(X^L,Y^L)\xi. \end{align*} In words, the orthogonal projection of the covariant derivative of lifts $\bar\nabla_{X^L}Y^L$ on the characteristic distribution is exactly the lift of the covariant derivative on the K\"ahler base, $(\nabla_XY)^L$, while the projection in the direction of the Reeb vector field equals $-\Phi(X^L,Y^L)\xi$; note that $\Phi(X^L,Y^L) = \omega(X,Y)\circ \pi_2$, i.e. the pullback of the K\"ahler form on these vector fields. In symbolic terms we can describe this as \begin{equation} \bar \nabla = \nabla^L -(\pi_2^*\omega)\otimes\xi.
\end{equation} The structure equations in Proposition {\bf \ref{p:streqs}} are reminiscent of the structure equations for a hypersurface in a Riemannian manifold. However there is a remarkable difference: in the case of a hypersurface the second fundamental form is a symmetric tensor field, while in our case the tensor which supposedly plays the role of the second fundamental form is skew-symmetric. Note that (\ref{streqs2}) {\it is not } a definition of a connection: $\bar \nabla $ is just the Levi-Civita connection of the metric $g^L$, but in the particular case of lifts of vector fields from the K\"ahler base, (\ref{streqs2}) holds true. Having the structure equations as above, we proceed to obtain relations between the corresponding curvature tensors of a K\"ahler manifold and its Sasakian lift. \begin{proposition} \label{p:curv} The curvatures $R$ and $\bar{R}$ of the K\"ahler base and its Sasakian lift are related by \begin{align} \label{l:curv1} & \bar{R}_{X^LY^L}Z^L = (R_{XY}Z)^L + \Phi(Y^L,Z^L)\phi X^L - \Phi(X^L,Z^L)\phi Y^L - \\ &\nonumber \qquad 2\Phi(X^L,Y^L)\phi Z^L. \end{align} \end{proposition} \begin{proof} By the structure equations (\ref{streqs1}), (\ref{streqs2}), \begin{align} \label{eq1} & \bar{\nabla}_{X^L}\bar{\nabla}_{Y^L}Z^L = \bar{\nabla}_{X^L}(\nabla_YZ)^L - X^L\Phi(Y^L,Z^L)\xi + \\ & \nonumber \qquad \Phi(Y^L,Z^L)\phi X^L = (\nabla_X\nabla_YZ)^L - (\Phi(X^L,(\nabla_YZ)^L)+ \\ & \nonumber \qquad X^L\Phi(Y^L,Z^L))\xi + \Phi(Y^L,Z^L)\phi X^L, \\ \label{eq2} & \bar{\nabla}_{[X^L,Y^L]}Z^L = \bar{\nabla}_{[X,Y]^L}Z^L - 2d\eta(X^L,Y^L)\bar{\nabla}_\xi Z^L = \\ & \nonumber \qquad (\nabla_{[X,Y]}Z)^L - \Phi([X,Y]^L,Z^L)\xi - 2d\eta(X^L,Y^L)\bar{\nabla}_\xi Z^L. \end{align} For the lift of a vector field, $[\xi, Z^L] = 0$. Therefore, as the Levi-Civita connection has no torsion, we have \begin{equation} \label{eq3} \bar{\nabla}_\xi Z^L = \bar{\nabla}_{Z^L}\xi = -\phi Z^L.
\end{equation} For the curvature $$ \bar{R}_{X^LY^L}Z^L = \bar{\nabla}_{X^L}\bar{\nabla}_{Y^L}Z^L - \bar{\nabla}_{Y^L}\bar{\nabla}_{X^L}Z^L - \bar{\nabla}_{[X^L,Y^L]}Z^L, $$ in view of (\ref{eq1})-(\ref{eq3}) we obtain \begin{align*} & \bar{R}_{X^LY^L}Z^L = (R_{XY}Z)^L +\Phi(Y^L,Z^L)\phi X^L - \Phi(X^L,Z^L)\phi Y^L + \\ & \nonumber \qquad ( ( -\bar\nabla_{X^L}\Phi)(Y^L,Z^L) + (\bar\nabla_{Y^L}\Phi)(X^L,Z^L))\xi - 2\Phi(X^L,Y^L)\phi Z^L , \end{align*} and, as the manifold is Sasakian, $(\bar{\nabla}_{X^L}\Phi)(Y^L,Z^L)=(\bar{\nabla}_{Y^L}\Phi)(X^L,Z^L)=0$, so the terms along $\xi$ vanish. \end{proof} The following proposition describes the relation between the Ricci tensors of the K\"ahler base and its lift. \begin{proposition} \label{p:ric} The Ricci tensors and scalar curvatures of the K\"ahler base and its Sasakian lift are related by \begin{align} \label{r1} & \bar{R}ic(X^L,Y^L) = Ric(X,Y)-2g(X,Y), \end{align} in particular, for the scalar curvatures $s$, $\bar{s}$ of the K\"ahler base and of the Sasakian lift we have \begin{align} \label{r2} & \bar{s}= s-2n. \end{align} \end{proposition} \begin{proof} In the proof we use an adapted local orthonormal frame $(\xi, E_1^L,\ldots E_{2n}^L)$, where $(E_1,\ldots E_{2n})$ is a local orthonormal frame on the K\"ahler base. Then \begin{equation} \label{ric1} \bar{R}ic(X^L,Y^L) = \bar{R}(\xi, X^L,Y^L,\xi) + \sum\limits_{i=1}^{2n}\bar{R}(E_i^L,X^L,Y^L,E_i^L), \end{equation} and by Proposition \ref{p:curv}, (\ref{l:curv1}), and the curvature identities for Sasakian manifolds (\ref{l:curv2}), we obtain \begin{align} \label{tr:curv} & \sum\limits_{i=1}^{2n}\bar{R}(E_i^L,X^L,Y^L,E_i^L) = \sum\limits_{i=1}^{2n}R(E_i,X,Y,E_i) - \\ & \nonumber \qquad 3\sum\limits_{i=1}^{2n}\Phi(E_i^L,X^L)\Phi(E_i^L,Y^L) = Ric(X,Y)-3g(JX,JY)= \\ & \nonumber \qquad Ric(X,Y)-3g(X,Y), \\ \label{xi-sect} & \bar{R}(\xi,X^L,Y^L,\xi) = \bar{R}(X^L,\xi,\xi,Y^L) =g^L(X^L,Y^L)=g(X,Y); \end{align} now, with the help of (\ref{ric1})-(\ref{xi-sect}), we find \begin{equation*} \bar{R}ic(X^L,Y^L)=Ric(X,Y)-2g(X,Y).
\end{equation*} For the scalar curvature of the lift, $ \bar{s}= \bar{R}ic(\xi,\xi)+\sum_{i=1}^{2n}\bar{R}ic(E_i^L,E_i^L), $ and by (\ref{l:ric:xi}), (\ref{r1}), \begin{equation*} \bar{s}=2n+\sum\limits_{i=1}^{2n}(Ric(E_i,E_i)-2g(E_i,E_i)) =2n +s -4n = s-2n. \end{equation*} \end{proof} \begin{proposition} On the Sasakian lift the tensor field $\bar{\rho}(\cdot,\cdot)=\bar{R}ic(\cdot,\phi \cdot)$ is a closed 2-form; moreover, \begin{equation} \bar{\rho} = \pi_2^*\rho -2\pi_2^*\omega, \end{equation} that is, $\bar{\rho}$ is the pullback of the difference of the Ricci form and twice the K\"ahler form. \end{proposition} \begin{proof} We have $ \bar{\rho}(X^L,Y^L) = \bar{R}ic(X^L,\phi Y^L) = \bar{R}ic(X^L,(JY)^L), $ and by virtue of Proposition \ref{p:ric}, \begin{align*} & \bar{R}ic(X^L,(JY)^L) = Ric(X,JY) -2g(X,JY) = \\ & \qquad \rho(X,Y)-2\omega(X,Y). \end{align*} Clearly $\bar{\rho}(X^L,\xi)=\bar{\rho}(\xi,X^L)=0$; therefore $\bar{\rho}$ is skew-symmetric, and it is closed since both the Ricci form and the K\"ahler form are closed. \end{proof} Here are some corollaries of the results obtained. \begin{theorem} \label{th:chc} If the K\"ahler base has constant holomorphic sectional curvature $c$, then its Sasakian lift is a Sasakian manifold of constant $\phi$-sectional curvature $\bar{c} = c-3$. \end{theorem} \begin{proof} Fix a point $(t,q)\in \mathcal{M}$ and let $v\in \mathcal{D}_{(t,q)}$ be a unit vector; the $\phi$-sectional curvature $K_\phi(v)$ is the sectional curvature of the plane spanned by $(v,\phi v)$. Hence \begin{equation*} K_\phi(v) = \bar{R}(v,\phi v,\phi v,v) = g^L(\bar{R}_{v\phi v}\phi v, v). \end{equation*} As $(\cdot)^L$ is a pointwise linear isometry between $T_q\mathcal{N}$ and $\mathcal{D}_{(t,q)}$, there is a local vector field $X$ on the K\"ahler base such that $X^L= v$ at the point $(t,q)$. We may assume that $X$ is normalized. In view of Proposition \ref{p:curv}, eq.
(\ref{l:curv1}), and having in mind that $g^L(X^L,X^L)=g(X,X)$ and \begin{equation*} g^L((R_{XJX}JX)^L,X^L) = g(R_{XJX}JX,X), \end{equation*} we obtain \begin{equation*} \bar{R}(X^L,\phi X^L,\phi X^L,X^L) = R(X,JX,JX,X)-3g^2(X,X)= c-3, \end{equation*} where $c=R(X,JX,JX,X)$ is the holomorphic sectional curvature of the K\"ahler base. By assumption $c$ is constant, so in particular at the point $(t,q)$, \begin{equation*} \bar{R}(X^L,\phi X^L,\phi X^L,X^L) = K_\phi(v)= c-3. \end{equation*} As the point and the vector are arbitrary, this shows that $\mathcal M$ has constant $\phi$-sectional curvature $c-3$. \end{proof} \begin{theorem} \label{th:ein} If the K\"ahler base is a K\"ahler-Einstein manifold with Einstein constant $c$, then its Sasakian lift is an $\eta$-Einstein manifold, \begin{equation} \bar{R}ic = (c-2) \bar{g}+(2n-c+2)\eta\otimes\eta. \end{equation} In particular, the lift is Einstein if and only if $c=2n+2$. \end{theorem} A particular case occurs when the K\"ahler base has constant holomorphic curvature $c=4$. \begin{theorem} If the K\"ahler base is locally isometric to the complex projective space $\mathbb{C}P^n$, equipped with the Fubini-Study metric of constant holomorphic curvature $c=4$, then its lift is locally isometric to the unit sphere $\mathbb{S}^{2n+1}\subset\mathbb{C}^{n+1}$, equipped with its canonical Sasakian structure of constant sectional curvature $\bar{c}=1$. \end{theorem} \begin{proof} By Theorem \ref{th:chc}, the lift of a K\"ahler base of constant holomorphic curvature is a Sasakian manifold of constant $\phi$-sectional curvature $\bar{c}=c-3$. Note that the curvature operator of a manifold of constant $\phi$-sectional curvature is completely determined, cf.\ Theorem 7.19, p.~139 in \cite{Blair}. \end{proof} Note that the case of dimension three is exceptional. As every two-dimensional K\"ahler manifold is Einstein, its lift is $\eta$-Einstein, yet the coefficient $c$ is now in general a function, determined in fact by the Gaussian curvature of the 2-dimensional base.
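For completeness, we record the short verification behind Theorem \ref{th:ein}, which follows directly from Proposition \ref{p:ric}. On lifted vector fields, the K\"ahler-Einstein condition $Ric = c\,g$ gives \begin{equation*} \bar{R}ic(X^L,Y^L) = Ric(X,Y)-2g(X,Y) = (c-2)g^L(X^L,Y^L), \end{equation*} while in the direction of the Reeb vector field $\bar{R}ic(\xi,X^L)=0$ and $\bar{R}ic(\xi,\xi)=2n=(c-2)+(2n-c+2)$, in agreement with the stated $\eta$-Einstein formula.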
\section{$\mathcal D_{\alpha,\beta}$-homothety of Sasakian manifolds and $\alpha$-Sasakian manifolds} In this section we provide a detailed study of $\mathcal D_{\alpha,\beta}$-homotheties of Sasakian manifolds. The fundamental result here is that the image of a Sasakian manifold under a $\mathcal D_{\alpha,\beta}$-homothety with parameters $\alpha$, $\beta > 0$ is a $\frac{\beta}{\alpha}$-Sasakian manifold. Let $\mathcal{M}$ be a Sasakian manifold with almost contact metric structure $(\phi,\xi,\eta,g)$. For real positive parameters $\alpha$, $\beta$, let us consider a new structure $(\phi'=\phi,\xi',\eta',g')$ on $\mathcal{M}$, given by \begin{align} & \xi' = \frac{1}{\beta}\xi, \quad \eta'=\beta\eta, \\ & g' = \alpha g +(\beta^2-\alpha)\eta\otimes\eta. \end{align} It is useful to have the inverse formula for the metric explicitly: \begin{align} g = \frac{1}{\alpha}g'+(\frac{1}{\beta^2}- \frac{1}{\alpha})\eta'\otimes\eta'. \end{align} Indeed, substituting $g'$ and $\eta'=\beta\eta$ into the right-hand side gives $g+\big(\frac{\beta^2-\alpha}{\alpha}+1-\frac{\beta^2}{\alpha}\big)\eta\otimes\eta = g$. Our main goal here is to study the relations between the Levi-Civita connections $\nabla = LC(g)$ and $\nabla' = LC(g')$, the Riemann curvatures, and in particular the Ricci tensors. Note that in general the deformed structure is not contact metric; it satisfies the weaker condition \begin{equation} d\eta' = \frac{\beta}{\alpha}\Phi', \end{equation} where $\Phi'(X,Y)=g'(X,\phi Y) = \alpha \Phi(X,Y)$. \begin{proposition} Let $\nabla$ be the Levi-Civita connection of a Sasakian manifold and $\nabla'$ the Levi-Civita connection of the metric obtained by a $\mathcal D_{\alpha,\beta}$-homothety of the Sasakian metric. The connections are related by the following formula: \begin{equation} \label{e:txy} \nabla_XY= \nabla'_XY+\frac{\beta^2-\alpha}{\alpha\beta}(\eta'(X)\phi Y + \eta'(Y)\phi X). \end{equation} \end{proposition} \begin{proof} The proof is rather standard, using the Koszul formula for the Levi-Civita connection. Denote by $T_XY$ the difference tensor, $\nabla_XY=\nabla'_XY+T_XY$. As both connections are torsion-free, $T$ is symmetric, $T_XY=T_YX$.
Therefore \begin{align*} & -2g'(T_XY,Z)= 2g'(\nabla'_XY,Z) - 2g'(\nabla_XY,Z) = (\beta^2-\alpha)(X \eta(Y)\eta(Z)+ \\ & \qquad Y\eta(X)\eta(Z) - Z\eta(X)\eta(Y)) + (\beta^2-\alpha)(\eta([X,Y])\eta(Z) + \\ & \qquad \eta([Z,X])\eta(Y) + \eta([Z,Y])\eta(X)) - 2(\beta^2-\alpha)\eta(\nabla_XY)\eta(Z). \end{align*} Note that $X\eta(Y)\eta(Z) = (X\eta(Y))\eta(Z)+\eta(Y)(X\eta(Z))$ by the Leibniz rule, and moreover $\eta([X,Y])= \eta(\nabla_XY)-\eta(\nabla_YX)$. Therefore, using the Leibniz rule and regrouping, we find \begin{align*} & (\beta^2-\alpha)(X\eta(Z)-Z\eta(X)+\eta([Z,X]))\eta(Y) + \\ & \qquad (\beta^2-\alpha)(Y\eta(Z)-Z\eta(Y)+\eta([Z,Y]))\eta(X) + \\ & \qquad (\beta^2-\alpha)(X\eta(Y)-\eta(\nabla_XY))\eta(Z) + \\ & \qquad (\beta^2-\alpha)(Y\eta(X)-\eta(\nabla_YX))\eta(Z) = \\ & \qquad 2(\beta^2-\alpha)(d\eta(X,Z)\eta(Y)+d\eta(Y,Z)\eta(X)) + \\ & \qquad (\beta^2-\alpha)((\nabla_X\eta)(Y)+(\nabla_Y\eta)(X))\eta(Z); \end{align*} however, on a Sasakian manifold $(\nabla_X\eta)(Y)+(\nabla_Y\eta)(X)=0$, thus we obtain \begin{equation*} -g'(T_XY,Z)= (\beta^2-\alpha)(d\eta(X,Z)\eta(Y)+d\eta(Y,Z)\eta(X)). \end{equation*} Note that $d\eta=\frac{1}{\beta}d\eta' = \frac{1}{\alpha}\Phi'$; in terms of the deformed structure the above equation reads \begin{equation*} -g'(T_XY,Z)= \frac{\beta^2-\alpha}{\alpha\beta}( \Phi'(X,Z)\eta'(Y)+ \Phi'(Y,Z)\eta'(X)). \end{equation*} From $\Phi'(X,Y)=g'(X,\phi Y)= -g'(\phi X,Y)$ we finally obtain \begin{equation} T_XY = \frac{\beta^2-\alpha}{\alpha\beta}(\eta'(X)\phi Y +\eta'(Y)\phi X), \end{equation} i.e.\ (\ref{e:txy}). \end{proof} Of course, once we have the explicit form of the tensor $T_XY$, we can directly verify that $\nabla'g'=0$ and use the fact that the Levi-Civita connection is the unique connection which is both torsion-free and metric. Such a direct verification provides an alternative proof of the above statement. For further reference we set $c=c_{\alpha,\beta}=\frac{\beta^2-\alpha}{\alpha\beta}$.
\begin{proposition} The covariant derivative $\nabla'\phi$ is given by \begin{equation} \label{e:nabpfi} (\nabla'_X\phi)Y = \frac{\beta}{\alpha}(g'(X,Y)\xi'-\eta'(Y)X). \end{equation} \end{proposition} \begin{proof} We have \begin{align} \label{e:nab1} & (\nabla_X\phi )Y = (\nabla'_X\phi)Y +(T_X\phi)Y= (\nabla'_X\phi)Y + \\ & \nonumber\qquad T_X\phi Y - \phi T_XY = (\nabla'_X\phi)Y + c(\eta'(X)\phi^2Y - \\ & \nonumber\qquad \eta'(X)\phi^2Y-\eta'(Y)\phi^2X) = (\nabla'_X\phi)Y + c(\eta'(Y)X-\eta'(X)\eta'(Y)\xi'). \end{align} As $\mathcal{M}$ is Sasakian, \begin{align} \label{e:nab2} & (\nabla_X\phi)Y=g(X,Y)\xi -\eta(Y)X = \frac{\beta}{\alpha}g'(X,Y)\xi' - \\ &\nonumber \qquad c\eta'(X)\eta'(Y)\xi' - \frac{1}{\beta}\eta'(Y)X; \end{align} comparing (\ref{e:nab1}) and (\ref{e:nab2}), we obtain (\ref{e:nabpfi}). \end{proof} \begin{corollary} The image of a Sasakian manifold under a $\mathcal D_{\alpha,\beta}$-homothety with parameters $\alpha$, $\beta > 0$ is a $\frac{\beta}{\alpha}$-Sasakian manifold. \end{corollary} To find the relation between the corresponding curvature operators we use the following general formula: \begin{equation} \label{e:r:rp} R_{XY}Z=R'_{XY}Z +(\nabla'_XT)_YZ-(\nabla'_YT)_XZ+[T_X,T_Y]Z, \end{equation} where $[T_X,T_Y]Z=T_XT_YZ-T_YT_XZ$. For the covariant derivative $\nabla'_XT$, on the basis of the above propositions, we find \begin{align} \label{e:nabptxy} & (\nabla'_XT)_YZ =c ((\nabla'_X\eta')(Y)\phi Z + (\nabla'_X\eta')(Z)\phi Y +\\ & \nonumber\qquad \eta'(Y)(\nabla'_X\phi)Z+\eta'(Z)(\nabla'_X\phi)Y) = \\ & \nonumber\qquad \frac{c\beta}{\alpha}(\Phi'(X,Y)\phi Z+ \Phi'(X,Z)\phi Y )+ \\ & \nonumber\qquad \frac{c\beta}{\alpha}(g'(X,Z)\eta'(Y)+g'(X,Y)\eta'(Z))\xi' - \\ & \nonumber\qquad \frac{c\beta}{\alpha} 2\eta'(Y)\eta'(Z)X, \end{align} where we have used $(\nabla'_X\eta')(Y)=\frac{\beta}{\alpha}\Phi'(X,Y)$.
For $T_XT_YZ$ we obtain \begin{align} \label{e:txtyz} & T_XT_YZ = c\eta'(X)\phi T_YZ = c^2(\eta'(X)\eta'(Y)\phi^2 Z + \\ & \nonumber\qquad\eta'(X)\eta'(Z)\phi^2 Y), \end{align} as $\eta'(T_XY)=0$ for every $X$, $Y$. In view of (\ref{e:r:rp}), (\ref{e:nabptxy}), (\ref{e:txtyz}), we can establish the following result. \begin{proposition} Let $g'$ be a $\mathcal D_{\alpha,\beta}$-homothety of a Sasakian metric. Then the Riemann curvature operator $R$ and the curvature operator $R'$ of the deformed metric are related by the following formula: \begin{align} & R_{XY}Z = R'_{XY}Z + \\ & \nonumber\qquad \frac{c\beta}{\alpha}( \Phi'(X,Z)\phi Y -\Phi'(Y,Z)\phi X+2\Phi'(X,Y)\phi Z) + \\ & \nonumber\qquad \frac{c\beta}{\alpha}(g'(X,Z)\eta'(Y)-g'(Y,Z)\eta'(X))\xi' - \\ & \nonumber \qquad \frac{c\beta}{\alpha}\eta'(Z)(\eta'(Y)X-\eta'(X)Y) - \\ & \nonumber \qquad \frac{c^3\beta}{\alpha}\eta'(Z)(\eta'(Y)\phi X-\eta'(X)\phi Y). \end{align} \end{proposition} We have $Ric(Y,Z) = Tr \{ X\mapsto R_{XY}Z\}$, so as a corollary of the above proposition we obtain \begin{corollary} The Ricci tensors of a Sasakian manifold and of its $\frac{\beta}{\alpha}$-Sasakian deformation are related by \begin{equation} Ric(Y,Z)=Ric'(Y,Z)+2\frac{c\beta}{\alpha}(g'(Y,Z)-(n+1)\eta'(Y)\eta'(Z)). \end{equation} \end{corollary} As we already know, the $\mathcal D_{\alpha,\beta}$-homothety of a Sasakian manifold is a $\frac{\beta}{\alpha}$-Sasakian manifold. Performing two consecutive homotheties with parameters $(\alpha_i,\beta_i)$, $i=1,2$, we obtain a $\frac{\beta_1\beta_2}{\alpha_1\alpha_2}$-Sasakian manifold. As a conclusion we obtain the general statement that a $\mathcal D_{\alpha_1,\beta_1}$-homothety with parameters $\alpha_1$, $\beta_1$ of an $\alpha$-Sasakian manifold is $(\frac{\beta_1}{\alpha_1}\alpha)$-Sasakian.
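The composition rule for homotheties can also be checked directly on the level of the structure tensors. Applying the second homothety to $(\xi',\eta',g')$ we get $\eta''=\beta_2\eta'=\beta_1\beta_2\,\eta$, $\xi''=\frac{1}{\beta_1\beta_2}\xi$, and \begin{align*} g'' &= \alpha_2 g' +(\beta_2^2-\alpha_2)\eta'\otimes\eta' = \alpha_1\alpha_2\, g + \big(\alpha_2(\beta_1^2-\alpha_1)+(\beta_2^2-\alpha_2)\beta_1^2\big)\eta\otimes\eta \\ &= \alpha_1\alpha_2\, g + \big((\beta_1\beta_2)^2-\alpha_1\alpha_2\big)\eta\otimes\eta, \end{align*} so the composition of the two homotheties is precisely the $\mathcal D_{\alpha_1\alpha_2,\beta_1\beta_2}$-homothety of the original structure.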
\subsection{$\mathcal D_{\alpha,\beta}$-homotheties of $\alpha$-Sasakian twisted $\eta$-Ricci solitons} In this part of the paper we prove the important result that the equation defining a twisted $\eta$-Ricci soliton on $\alpha$-Sasakian manifolds is invariant under $\mathcal D_{\alpha,\beta}$-homotheties. \begin{theorem} Assume that an $\alpha$-Sasakian manifold $(\mathcal{M},\phi,\xi,\eta,g)$ is a twisted $\eta$-Ricci soliton, \begin{align} \label{e:sas:esol} & Ric +\frac{1}{2}(\mathcal{L}_Xg) = \lambda g+2C_1 \alpha_X\odot\eta + C_2\eta\otimes\eta, \\ & (\mathcal{L}_X\eta)(Y) = \alpha_X(Y),\quad \eta(X)=0. \end{align} Then its image $(\mathcal{M},\phi,\xi',\eta',g')$ under a $\mathcal D_{\alpha,\beta}$-homothety is also a twisted $\eta$-Ricci soliton, \begin{align} & Ric' +\frac{1}{2}\mathcal{L}_{X'}g' = \lambda'g' + 2C_1'\alpha'_{X'}\odot \eta' +C_2'\eta'\otimes\eta', \\ & (\mathcal{L}_{X'}\eta')(Y) = \alpha'_{X'}(Y), \quad \eta'(X')=0. \end{align} \end{theorem} \begin{proof} The proof is almost evident. Without loss of generality we may assume that the manifold is Sasakian. We only need to determine how each term in equation (\ref{e:sas:esol}) changes under a $\mathcal D_{\alpha,\beta}$-homothety.
Therefore \begin{align} & Ric = Ric'+2\frac{c\beta}{\alpha}g'- 2(n+1)\frac{c\beta}{\alpha}\eta'\otimes \eta', \\ & \mathcal{L}_Xg= \frac{1}{\alpha}\mathcal{L}_Xg' + 2(\frac{1}{\beta^2}-\frac{1}{\alpha})(\mathcal{L}_X\eta')\odot\eta' = \\ & \nonumber\qquad\mathcal{L}_{X'}g'+ 2(\frac{\alpha}{\beta^2}-1)(\mathcal{L}_{X'}\eta')\odot\eta', \\ & \alpha_X = \mathcal{L}_X\eta = \frac{\alpha}{\beta}\mathcal{L}_{X'}\eta' = \frac{\alpha}{\beta}\alpha'_{X'}, \\ & g = \frac{1}{\alpha}g'+(\frac{1}{\beta^2}-\frac{1}{\alpha})\eta'\otimes\eta', \quad \eta\otimes\eta = \frac{1}{\beta^2}\eta'\otimes\eta', \end{align} where we have rescaled the vector field $X$ by $\frac{1}{\alpha}$, $X \mapsto X'=\frac{1}{\alpha}X$. After regrouping we obtain that the $\frac{\beta}{\alpha}$-Sasakian manifold is a twisted $\eta$-Ricci soliton with constants $\lambda'$, $C_1'$, $C_2'$ given by \begin{align*} & \lambda' = \frac{1}{\alpha}(\lambda-2c\beta), \quad C_1' = \frac{\alpha}{\beta^2}(C_1-\frac{1}{2})+\frac{1}{2}, \\ & C_2' = \lambda(\frac{1}{\beta^2}-\frac{1}{\alpha}) + \frac{C_2}{\beta^2}+2(n+1)c\frac{\beta}{\alpha}, \end{align*} where $c=\frac{\beta^2-\alpha}{\alpha\beta}$. \end{proof} On the basis of the above formulas we can answer the question of whether it is possible to remove the twist from the equation, that is, whether there exists a $\mathcal D_{\alpha,\beta}$-homothety such that $C_1'=0$. From the above formulas we see that a necessary and sufficient condition is that the source structure satisfies $C_1 < \frac{1}{2}$. The behavior under deformations thus determines three classes of $\alpha$-Sasakian twisted $\eta$-Ricci solitons, according to the value of the twist coefficient $C_1$: the first class consists of manifolds with $C_1 < \frac{1}{2}$; the second, singular, class of manifolds with $C_1=\frac{1}{2}$; and the third class of manifolds with $C_1 > \frac{1}{2}$. Later, studying lifts of K\"ahler-Ricci solitons, we obtain as a corollary that the class $C_1 < \frac{1}{2}$ is always nonempty.
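Let us make the removal condition explicit. Setting $C_1'=0$ in the transformation formula for the twist coefficient gives \begin{equation*} \frac{\alpha}{\beta^2}\Big(C_1-\frac{1}{2}\Big)+\frac{1}{2}=0 \quad\Longleftrightarrow\quad C_1 = \frac{1}{2}-\frac{\beta^2}{2\alpha}, \end{equation*} and since $\frac{\beta^2}{2\alpha}$ takes every value in $(0,\infty)$ as $\alpha$, $\beta$ range over the positive reals, suitable parameters exist precisely when $C_1 < \frac{1}{2}$.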
Exactly lift of Ricci-K\"ahler soliton belongs to this class. So basically there is problem to solve about the other two classes: do exist $\alpha$-Sasakian twisted $\eta$-Ricci solitons with $C_1 \geqslant \frac{1}{2}$? Note that it is the case where it is not possible to remove twist by $\mathcal D_{\alpha,\beta}$-homothety. \section{Lifts of Killing vector fields, inifinitesimal biholomorhpisms and automorhpisms} In this we are interested in lifts of vector fields which satisfy some additional conditions. We just want to ask questions what is a lift of complex structure infinitesimal automorphism and similarly what is a lift of Killing vector field. \begin{proposition} \label{p:inaut} Let $V$ be an infinitesimal automorphism of complex structure on K\"ahler base. Then its lift $V^L$ satisfies \begin{equation} (\mathcal{L}_{V^L}\phi)X^L = 2g^L(V^L,X^L)\xi, \quad (\mathcal{L}_{V^L}\phi)\xi = 0. \end{equation} \end{proposition} \begin{proof} We have \begin{equation} \label{e:v:phi} [V^L,\phi X^L] = [V, JX]^L -2 \Phi(V^L,\phi X^L)\xi, \end{equation} \begin{equation} \label{e:phi:v} \phi[V^L,X^L] = \phi [V,X]^L = (J[V,X])^L, \end{equation} as $\Phi(V^L,\phi X^L) = -g^L(V^L,X^L)$, by (\ref{e:v:phi}), (\ref{e:phi:v}) \begin{align*} & (\mathcal{L}_{V^L}\phi)X^L = [V^L,\phi X^L] -\phi[V^L,X^L]= ((\mathcal{L}_VJ)X)^L+ \\ & \nonumber \qquad 2g^L(V^L,X^L)\xi, \end{align*} and result follows by assumption that $\mathcal{L}_VJ=0$. \end{proof} In similar way we can prove following statement considering lift of Killing vector fields from K\"ahler base. \begin{proposition} \label{p:kill} Let $V$ be a Killing vector field on K\"ahler base. Then its lift satisfies \begin{equation} (\mathcal{L}_{V^L}g^L)(X^L,Y^L) = 0, \quad (\mathcal{L}_{V^L}g^L)(\xi, X^L) = 2\Phi(V^L,X^L). \end{equation} \end{proposition} \begin{proposition} Let $V$ be an inifinitesimal automorphism of K\"ahler form. 
Its lift satisfies \begin{equation} \label{e:lie:vl:phi} \mathcal{L}_{V^L}\Phi = 0, \end{equation} in particular, the one-form $\mathcal{L}_{V^L}\eta$ is closed. \end{proposition} \begin{proof} To prove (\ref{e:lie:vl:phi}) we show that $(\mathcal{L}_{V^L}\Phi)(X^L,Y^L)=0$ and $(\mathcal{L}_{V^L}\Phi)(X^L,\xi)=0$. As the Lie derivative and the exterior derivative commute, we then have \begin{equation} 0=\mathcal{L}_{V^L}\Phi = \mathcal{L}_{V^L}d\eta = d(\mathcal{L}_{V^L}\eta), \end{equation} hence $\mathcal{L}_{V^L}\eta$ is a closed one-form. \end{proof} We now provide a result establishing a relationship between local symmetries of the K\"ahler base and certain vector fields on its Sasakian lift. \begin{theorem} Let $V$ be an infinitesimal automorphism of the K\"ahler structure of the K\"ahler base. Its lift $V^L$ satisfies \begin{equation} \mathcal{L}_{V^L}\phi = 2\alpha\otimes\xi , \quad \mathcal{L}_{V^L}g^L = 4\alpha^\phi\odot\eta, \quad \mathcal{L}_{V^L}\Phi = 0, \end{equation} where $\alpha(\cdot)=g^L(V^L,\cdot)$ and $\alpha^\phi(\cdot)=\alpha(\phi\cdot)=-g^L(\phi V^L,\cdot)$; moreover, the form $\alpha^\phi$ is closed, $d\alpha^\phi=0$. \end{theorem} \begin{proof} The only claim that still requires proof is that the form $\alpha^\phi$ is closed. However, note that we may identify $\alpha^\phi$ with the pullback of the closed form $X\mapsto \omega(V,X)$. \end{proof} Here we formulate the main result of this section. \begin{theorem} Let $V$ be an infinitesimal automorphism of the K\"ahler structure of the K\"ahler base. Then there is a locally defined function $f$ on the Sasakian lift, with $df(\xi)=0$, such that the vector field $U_V=V^L+f\xi$ is a local infinitesimal automorphism of the almost contact metric structure of the Sasakian lift.
\end{theorem} \begin{proof} From the properties of the Lie derivative, \begin{equation*} (\mathcal{L}_{U_V}\phi)Y^L=(\mathcal{L}_{V^L}\phi)Y^L + f(\mathcal{L}_\xi\phi)Y^L-df(\phi Y^L)\xi; \end{equation*} on a Sasakian manifold $\mathcal{L}_\xi\phi=0$, so in view of Proposition \ref{p:inaut} we obtain \begin{equation*} (\mathcal{L}_{U_V}\phi)Y^L = (2g^L(V^L,Y^L)-df(\phi Y^L))\xi. \end{equation*} As $V$ is an automorphism of the K\"ahler structure, it is locally Hamiltonian with respect to the K\"ahler form, $\omega(V,Y)=dH(Y)$ for a locally defined function $H$. Set $f=-2\bar{H}=-2H\circ\pi_2$; then \begin{equation*} df(\phi Y^L) = df((JY)^L) = -2dH(JY)= -2\omega(V,JY)=2g(V,Y). \end{equation*} Having determined $f$, we verify directly that \begin{equation*} (\mathcal{L}_{U_V}\phi)\xi = 0, \quad (\mathcal{L}_{U_V}g^L)(Y^L,Z^L)=0, \quad (\mathcal{L}_{U_V}g^L)(\xi,Y^L)=0. \end{equation*} \end{proof} \section{Sasakian lifts of K\"ahler-Ricci solitons and $\alpha$-Sasakian $\eta$-Ricci solitons} In this section we are particularly interested in lifts of K\"ahler-Ricci solitons. Let a vector field $X$ satisfy the Ricci soliton equation on the K\"ahler base, \begin{equation} \label{e:ric:sol} Ric+\dfrac{1}{2}\mathcal{L}_Xg = \lambda g, \end{equation} where $\lambda$ is a real constant.
By direct computation we find $(\mathcal{L}_{X^L}g^L)(Y^L,Z^L) = (\mathcal{L}_Xg)(Y,Z)$, and then by (\ref{e:ric:sol}) \begin{equation*} \frac{1}{2}(\mathcal{L}_{X^L}g^L)(Y^L,Z^L) = \frac{1}{2}(\mathcal{L}_Xg)(Y,Z)= \lambda g(Y,Z)-Ric(Y,Z). \end{equation*} On the other hand, by Proposition \ref{p:ric}, $$ \bar{R}ic(Y^L,Z^L)=Ric(Y,Z)-2g(Y,Z); $$ summing up, we find \begin{align} \label{e:ric:sol2} & \bar{R}ic(Y^L,Z^L)+\frac{1}{2}(\mathcal{L}_{X^L}g^L)(Y^L,Z^L) = Ric(Y,Z)-2g(Y,Z) + \\ & \nonumber \lambda g(Y,Z)-Ric(Y,Z) = (\lambda-2)g^L(Y^L,Z^L), \end{align} and in a similar way \begin{equation} \label{e:ric:sol3} \bar{R}ic(\xi,Y^L)+\frac{1}{2}(\mathcal{L}_{X^L}g^L)(\xi,Y^L)= \Phi(X^L,Y^L), \end{equation} \begin{equation} \label{e:ric:sol4} \bar{R}ic(\xi,\xi)+\frac{1}{2}(\mathcal{L}_{X^L}g^L)(\xi,\xi) = 2n. \end{equation} The identities (\ref{e:ric:sol2})-(\ref{e:ric:sol4}) allow us to state the following result. \begin{theorem} Let the K\"ahler base be a K\"ahler-Ricci soliton. Then the Sasakian lift is a twisted $\eta$-Ricci soliton, \begin{align} & \bar{R}ic+\frac{1}{2}\mathcal{L}_{X^L}g^L= (\lambda-2)g^L+(\mathcal L_{X^L}\eta)\odot\eta + (2n+2-\lambda)\eta\otimes \eta. \end{align} \end{theorem} \begin{corollary} The Sasakian lift of a K\"ahler-Ricci soliton admits a $\mathcal D_{\alpha,\beta}$-homothety whose $\alpha$-Sasakian image is an $\eta$-Ricci soliton. \end{corollary} \begin{proof} The Sasakian lift satisfies the equation of a twisted $\eta$-Ricci soliton. By Corollary \ref{c:tesol:esol} there is a $\mathcal D_{\alpha,\beta}$-homothety whose image is an $\alpha$-Sasakian $\eta$-Ricci soliton. \end{proof} As the Ricci tensor determines a 2-form on the Sasakian lift, the above equation can be expressed in the following form. \begin{proposition} The Sasakian lift of a K\"ahler-Ricci soliton satisfies the following equation with respect to the fundamental form and the Ricci form of the Sasakian lift: \begin{equation} \bar{\rho}+\frac{1}{2}\mathcal{L}_{X^L}\Phi = (\lambda-2)\Phi.
\end{equation} \end{proposition} The above corollary gives us plenty of examples of $\eta$-Ricci solitons. \end{document}
\begin{document} \def\spacingset#1{\renewcommand{\baselinestretch} {#1}\small\normalsize} \spacingset{1} \if11 { \title{\bf A Proper Concordance Index for Time-Varying Risk} \author{A. Gandy\\ Department of Mathematics, Imperial College London, SW7 2AZ, U.K.\\ [email protected] \\ and \\ T. J. Matcham \hspace{.2cm}\\ Department of Mathematics, Imperial College London, SW7 2AZ, U.K.\\ NIHR ARC Northwest London, SW10 9NH, U.K.\\ [email protected] } \maketitle } \fi \if01 { \begin{center} {\LARGE\bf A Proper Concordance Index for Time-Varying Risk} \end{center} } \fi \begin{abstract} Harrell's concordance index is a commonly used discrimination metric for survival models, particularly for models where the relative ordering of the risk of individuals is time-independent, such as the proportional hazards model. There are several suggestions, but no consensus, on how it could be extended to models where relative risk can vary over time, e.g.\ in case of crossing hazard rates. We show that these concordance indices are not proper, in the sense that they are not necessarily maximised in the limit by the true data-generating model. Furthermore, we show that a concordance index is proper if and only if the risk score used is concordant with the hazard rate at the first event time for each comparable pair of events. Thus, we suggest using the hazard rate as the time-varying risk score when calculating concordance. Through simulations, we demonstrate situations in which other concordance indices can lead to incorrect models being selected over a true model, justifying the use of our suggested risk prediction in both model selection and in loss functions in, e.g., deep learning models.
\end{abstract} \noindent {\it Keywords:} Survival discrimination metric; crossing hazards; survival loss function \spacingset{1.9} \section{Introduction} \label{sec:intro} Accurate patient prognosis estimation is an important clinical tool with applications including advising patients of their likely disease outcomes, informed selection of patient treatment, as well as the design and evaluation of clinical trials. There exist several approaches to quantifying the predictive accuracy of survival models \citep{harrell1996multivariable}, which can in turn be optimized for, in order to improve a given aspect of the predictions. Discrimination metrics focus on a model's ability to correctly order the predictions of the patient outcomes. This could be important, for example, in deciding the order in which a set of patients should be treated. The most significant metric of survival model discrimination is Harrell's concordance index \citep{Harrel:1984}, hereafter the C-index, which was first developed as an adaptation of the Kendall-Goodman-Kruskal-Somers type rank correlation index \citep{Goodman:1954} to right-censored survival data, similar to an adaptation of Kendall's $\tau$ by \citet{brown1973nonparametric} and \citet{schemper1984analyses}. We use the following setup. Let $(X_i, U_i, Z_i), i\in \mathbb{N}$, be independent and identically distributed with the lifetime $X_i$ and the right censoring time $U_i$ being non-negative random variables. Let the covariate $Z_i$ be an element of some space $\mathcal{Z}$. We observe $(T_i, D_i, Z_i),i=1,\dots,n$, where $T_i=\min(X_i,U_i)$ is the time at risk and $D_i=\mathbb{I}(X_i\leq U_i)$ is the event indicator. The C-index estimates the probability that the predicted risk scores of a pair of individuals are concordant with the ordering of their observed survival times.
An ordering of the outcomes is possible only for pairs of individuals $(i,j)$, with $i\neq j$, for whom the first event is not a censoring event; such a pair is called comparable. The probability that individual $i$ has such an event occurring before the event of $j$ is $P(D_i=1,T_i<T_j)$. In this work we instead use \begin{equation*} \pi_{comp} = P(D_i=1,T_i\leq T_j). \end{equation*} This has no effect in the continuous-time case, as $P(T_i=T_j)=0$; however, it is critical for the discrete case in Section \ref{sec3}, where ties are possible. Survival predictions are differentiated with functions of the covariate called risk scores. In situations where the relative risk of individuals is not changing over time, e.g., in a proportional hazards model with only time-constant covariates, this is sufficient to discriminate between individuals. However, the risk score should arguably be time-dependent in situations where the relative risk of individuals changes over time, e.g., in cases where hazards of risk groups cross \citep{Mantel1988-qh}, where a proportional hazards model has time-dependent covariates, or where the risk prediction is individual over time as in machine learning approaches to survival models \citep{Lee:2018}. Crossing can occur, for example, when surgery or a more aggressive medication incurs a high initial hazard before eventually reducing the overall risk relative to the control group \citep{james2017abiraterone, rothwell1999prediction}. Thus, the risk score we use is allowed to depend on the covariate and on time. Specifically, in a paired comparison, we compare the risk scores at the time when the first event occurs. Intuitively, this comparison gives a prediction of who was most at imminent risk of the event, given that they have survived until the first event time. This framework covers previous specific suggestions for dealing with time-varying risks \citep{Antolini:2005,blanche2019c,haider2020effective}.
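To make the crossing-hazards setting concrete, the following minimal sketch (with illustrative Weibull parameters, not taken from the cited studies) shows two hazard rates whose ordering reverses over time, so that no time-constant risk score can agree with the imminent-risk ordering at all times.

```python
def weibull_hazard(t, shape, scale=1.0):
    # Hazard rate of a Weibull lifetime: h(t) = (shape/scale) * (t/scale)^(shape-1).
    return (shape / scale) * (t / scale) ** (shape - 1)

# Group 1: decreasing hazard (shape < 1), e.g. high early treatment risk.
# Group 2: increasing hazard (shape > 1).
early, late = 0.1, 4.0
print(weibull_hazard(early, 0.5) > weibull_hazard(early, 2.0))  # True: group 1 riskier early
print(weibull_hazard(late, 0.5) < weibull_hazard(late, 2.0))    # True: group 2 riskier later
```

The two hazards cross at the time where $0.5\,t^{-1/2} = 2t$, so the group at highest imminent risk depends on the comparison time.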
Formally, the risk score is specified through a function $q:[0,\infty)\times {\cal Z}\to \mathbb{R}$ and for a given pair $(i,j)$, with $T_i\leq T_j$, we say that $i$ has a higher risk score than $j$ if $q(T_i|Z_i)>q(T_i|Z_j)$. Higher values of the risk score indicate a propensity towards earlier events. For a pair $(i,j)$, where we observe that the event of $i$ has occurred before that of $j$, i.e., $D_i=1,T_i\leq T_j$, we say that this pair is concordant if $q(T_i|Z_i)>q(T_i|Z_j)$. The probability of a pair having observed the event of $i$ before the event of $j$ and being concordant is \begin{equation} \label{piconc_notiedpred} P(D_i=1, T_i\leq T_j, q(T_i|Z_i)>q(T_i|Z_j)). \end{equation} Defining a concordance index as (\ref{piconc_notiedpred}) divided by $\pi_{comp}$ would imply the following: a perfect model that could correctly order every pair would have a concordance of 1, a model that simply guesses for each pair would have a concordance of 0.5 on average, and a model that always orders incorrectly would have a concordance of 0. A model that gives the same prediction for each individual would also only get a concordance of 0, which seems undesirable. To avoid the latter, tied risk scores are often rewarded with a score of 0.5, such that a model with the same risk score for everyone would still score 0.5 \citep{harrell1996multivariable}. Hence, in the C-index $$C_q = \frac{\pi_{conc}}{\pi_{comp}}$$ we use \begin{equation*} \pi_{conc} = P[D_i=1,T_i\leq T_j,q(T_i|Z_i)>q(T_i|Z_j)] + \frac{1}{2}P[D_i=1,T_i\leq T_j,q(T_i|Z_i)=q(T_i|Z_j)].
\end{equation*} Given a random sample $(T_i,D_i,Z_i)_{i=1}^n$ we can estimate $\pi_{conc}$ and $\pi_{comp}$ with: \begin{align*} \hat{\pi}_{conc} = \frac{1}{n(n-1)}\sum^n_{i=1}\sum^n_{j=1;j\neq i}&\{\mathbb{I}[D_i=1, T_i\leq T_j, q(T_i|Z_i)>q(T_i|Z_j)]\\&+\frac{1}{2}\mathbb{I}[D_i=1,T_i\leq T_j,q(T_i|Z_i)=q(T_i|Z_j)]\},\\ \hat{\pi}_{comp} = \frac{1}{n(n-1)}\sum^n_{i=1}\sum^n_{j=1;j\neq i}&\mathbb{I}(D_i=1,T_i\leq T_j), \end{align*} and thus estimate the C-index $C_q$ by $$c^n_q = \hat{\pi}_{conc}/\hat{\pi}_{comp}.$$ Often, the risk score $q(t|z)$ being used does not depend on the first argument $t$. For example, if a proportional hazards model with covariates $z$ is used, then often the linear predictor $q(t|z) = z\hat\beta$ is used as the risk score, where $\hat \beta$ is an estimate of the regression coefficient. For more general survival models, where we have access to a survival function $S(t|z)$ as a function of the covariates $z$, a definition of a risk score is less obvious, as there may not be a clear definition of what constitutes higher risk, for example when the underlying hazard rates of individuals cross. Several methods of computing risk scores in this setting have been considered, for example $q(t|Z) = -S(t_0|Z)$, the negative of the survival function evaluated at some fixed time $t_0>0$ \citep{blanche2019c}, or $q(t|Z)= -\inf\{t \text{ s.t.\ } S(t|Z) \leq 0.5\}$, the negative of the median survival time \citep{haider2020effective}. The negative is taken as predicted survival times have the opposite ordering to risk scores. Again, these suggestions do not depend on the first argument of $q$. In the work of \citet{Antolini:2005}, a time-dependent concordance index, $C^{td}$, is introduced. This adaptation of the C-index is developed for models with either time-varying covariates or time-varying effects, while supposing the predicted survival function is the `natural' relative risk predictor.
This leads to an event-time dependent risk score \begin{equation*} q(t|z) = -S(t|z). \end{equation*} This index is used widely in deep learning survival models, wherein the survival curves for distinct individuals are prone to crossing \citep{Zhong:2021}. A similar concordance index has seen use in loss functions for deep survival models \citep{Lee:2018}. In Sections \ref{sec4} and \ref{sec5}, we show through examples that these concordance indices do not always maximally reward correct models, and therefore could lead to selection of inferior predictive models. This is reinforced by the work of \cite{rindt2022survival}, wherein they show that $C^{td}$ and other metrics are not proper scoring rules. The main contribution of this paper (Sections \ref{sec2} and \ref{sec3}) is that using the conditional hazard rate $\alpha(t|z)$ as a time-dependent risk score for an individual with covariates $z$ does behave analogously to a proper scoring rule. Since the definition of a proper scoring rule cannot be directly applied to the concordance index, we define a proper concordance index as $C_q$ defined above with a risk score $q$ such that $$\forall \tilde{q}:[0,\infty)\times\mathcal{Z}\rightarrow\mathbb{R}: \quad C_q\geq C_{\tilde{q}}.$$ Thus we suggest using $q(t|z)=\alpha(t|z)$ as risk score in concordance indices. Finally, we demonstrate in Section \ref{sec6} the advantage of using this risk score when training deep learning models, both as an element of the loss function, as well as in model validation. \section{Continuous Event Time} \label{sec2} The following theorem shows that the estimated concordance index $c_q^n$ converges in probability to the concordance $C_q$ for any risk score $q:[0,\infty)\times\mathcal{Z}\to\mathbb{R}$ and that the concordance is maximised if and only if the risk score is concordant with the hazard rate. We assume that $X_i$ and $U_i$ are independent given $Z_i$, i.e., $X_i\perp \!\!\!
\perp U_i\mid Z_i$, that $X_i|Z_i$ has an absolutely continuous distribution, and that there exists $\alpha:[0,\infty)\times\mathcal{Z}\rightarrow[0,\infty)$ such that the hazard rate of $X_i$ given $Z_i$ is $\alpha(t| Z_i)$. We also assume that $U_i\leq {\cal T}$ for some ${\cal T}\in \mathbb{R}$, i.e.\ that we have a finite observation window. \newtheorem{theorem}{THEOREM} \begin{theorem} \label{thm:1} Under the continuous time set-up, if $\pi_{comp}>0$ then $$ c_q^n\overset{p}{\to} C_q \quad (n\to \infty). $$ Furthermore, the following equivalence holds:\\ $C_q$ is a proper concordance index, i.e., $$\forall \tilde{q}:[0,\infty)\times\mathcal{Z}\rightarrow\mathbb{R}: \quad C_q\geq C_{\tilde{q}}$$ if and only if for $i\neq j$: \begin{equation} \label{eq:riskhazconcordant} \begin{split} E\int_0^{\tau_{ij}} \{&\mathbb{I}[q(s\vert Z_i)\geq q(s\vert Z_j), \alpha(s\vert Z_i)<\alpha(s\vert Z_j)] \\ &+ \mathbb{I}[q(s\vert Z_i)\leq q(s\vert Z_j), \alpha(s\vert Z_i)>\alpha(s\vert Z_j)] \}ds=0. \end{split} \end{equation} where $\tau_{ij}=T_i \wedge T_j$. \end{theorem} Equation (\ref{eq:riskhazconcordant}) is trivially satisfied if $q=\alpha$, which is why we suggest using the hazard rate as the risk score. More generally, (\ref{eq:riskhazconcordant}) is satisfied if the risk score $q$ and the hazard rate $\alpha$ are concordant in the sense that $\forall s\in[0,\infty), z_1,z_2\in\mathcal{Z}: q(s|z_1)>q(s|z_2)\iff\alpha(s| z_1)>\alpha(s| z_2)$. To show Theorem \ref{thm:1}, we need to introduce some counting process notation. $$ N_{ij}^{comp}(t)= \mathbb{I}(T_i \leq t, D_i=1,T_i\leq T_j) $$ indicates if the event for $i$ is known to have occurred before the event for $j$ by time $t$. 
The counting process $N_{ij}^{conc,1}(t)$ indicates if additionally the risk scores are in line with $i$ occurring before $j$, i.e., $$ N_{ij}^{conc,1}(t) = N_{ij}^{comp}(t)\cdot \mathbb{I}[q(T_i|Z_i)>q(T_i|Z_j)] $$ and $N_{ij}^{conc,2}(t)$ indicates if additionally the risk scores for $i$ and $j$ are tied, i.e., $$ N_{ij}^{conc,2}(t) = N_{ij}^{comp}(t)\cdot \mathbb{I}[q(T_i|Z_i)=q(T_i|Z_j)]. $$ $N_{ij}^{conc}(t)$ adds these two together, with tied predictions instead contributing 1/2, i.e., $$ N_{ij}^{conc}(t)=N_{ij}^{conc,1}(t) +\frac{1}{2}N_{ij}^{conc,2}(t). $$ Based on the above, we now define the concordance of $n$ individuals using information up to time $t$ as \begin{equation}\label{conc16} c^n_q(t) = \frac{\sum^{n}_{i=1}\sum^n_{j=1,j\neq i}N_{ij}^{conc}(t) }{\sum_{i=1}^n\sum_{j=1,j\neq i}^nN_{ij}^{comp}(t) }. \end{equation} We have $c_q^n=c_q^n({\cal T})$. The following lemma derives the compensator of $N_{ij}^{conc}$ with respect to the filtration $({\cal F}_t)_t$, where $\mathcal{F}_t=\sigma(Z_i, \mathbb{I}(T_i\leq s), \mathbb{I}(T_i\leq s, D_i=1), i\in \mathbb{N}, 0\leq s\leq t)$ is the information observed up to time $t$. \newtheorem{lemma}{LEMMA} \begin{lemma} \label{thm:lem1} $N_{ij}^{conc}(t)$ has a unique decomposition into a martingale $M_{ij}^{conc}(t)$ and compensator $$\Lambda_{ij}^{conc}(t)=\int_0^tY_{ij}(s)[Q_{ij}^1(s)+ \frac{1}{2}Q_{ij}^2(s)]\alpha(s|Z_i)ds,$$ where $ Y_{ij}(t) = \mathbb{I}(\tau_{ij} \geq t)$, $\tau _{ij}=T_i\wedge T_j$, $ Q_{ij}^1(t) = \mathbb{I}[q^{\tau_{ij}}(t| Z_i)>q^{\tau_{ij}}(t| Z_j)]$, $ Q_{ij}^2(t) = \mathbb{I}[q^{\tau_{ij}}(t| Z_i)=q^{\tau_{ij}}(t| Z_j)]$, and $q^{\tau_{ij}}(t|\cdot)=q(t\wedge \tau_{ij}|\cdot)$. \end{lemma} The proof of this lemma can be found in the Appendix.
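The estimators $\hat{\pi}_{conc}$, $\hat{\pi}_{comp}$ and $c^n_q$ above translate directly into code. The following is a minimal sketch (function and variable names are ours, not from the paper's code repository), with the risk score supplied as a callable $q(t, i)$ so that time-dependent scores are possible:

```python
import numpy as np

def c_index(T, D, q):
    """Estimate C_q = pi_conc / pi_comp from right-censored data.

    T : event/censoring times, D : event indicators (1 = event observed),
    q : callable q(t, i) giving the risk score of individual i at time t.
    Ties in the risk score contribute 1/2, as in the definition of pi_conc.
    """
    n = len(T)
    conc = comp = 0.0
    for i in range(n):
        if D[i] != 1:
            continue  # only pairs where i's event is observed are comparable
        for j in range(n):
            if j == i or T[i] > T[j]:
                continue
            comp += 1
            qi, qj = q(T[i], i), q(T[i], j)
            if qi > qj:
                conc += 1.0
            elif qi == qj:
                conc += 0.5
    return conc / comp  # assumes at least one comparable pair

# Example: a time-constant risk score that perfectly orders three individuals.
T = np.array([1.0, 2.0, 3.0])
D = np.array([1, 1, 1])
score = [3.0, 2.0, 1.0]  # individual 0 fails first and has the highest score
print(c_index(T, D, lambda t, i: score[i]))  # -> 1.0
```

A constant risk score, `lambda t, i: 0.0`, returns 0.5 here, matching the tie-handling convention of \citet{harrell1996multivariable}.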
\begin{proof}[Proof of Theorem \ref{thm:1}] $\hat \pi_{conc}$ can be written as a U-statistic $$ \hat \pi_{conc} = \frac{1}{n(n-1)}\sum_{i=1}^{n-1}\sum_{j=i+1}^nh[(T_i,D_i,Z_i),(T_j,D_j,Z_j)] $$with the kernel \begin{equation*} h[(T_i,D_i,Z_i),(T_j,D_j,Z_j)] = N_{ij}^{conc}({\cal T})+N_{ji}^{conc}({\cal T}). \end{equation*}The kernel $h$ is bounded, implying $Eh^2[(T_i,D_i,Z_i),(T_j,D_j,Z_j)]<\infty$, and thus Theorem 12.3 of \citet{vaart:1998} shows that $\hat \pi_{conc}$ is asymptotically normal as $n\rightarrow \infty$ with mean $\frac{1}{2} E[N_{ij}^{conc}({\cal T})+N_{ji}^{conc}({\cal T})]=E[N_{ij}^{conc}({\cal T})]=\pi_{conc}$. Thus, we have $\hat \pi_{conc}\overset{p}{\to}\pi_{conc}$ as $n\to \infty$. Similarly, we can show $\hat \pi_{comp}\overset{p}{\to} \pi_{comp}$ as $n\to \infty$. Hence, by the assumption $\pi_{comp}>0$, we have $c_q^n=\hat\pi_{conc}/\hat \pi_{comp}\overset{p}{\to}\pi_{conc}/\pi_{comp}=C_q$ as $n\to \infty$. Our choice of $q$ has no influence on the denominator, so considering the numerator only we find that, using Lemma \ref{thm:lem1}, \begin{align*} 2 E[N_{ij}^{conc}(t)] =&E[N_{ij}^{conc}(t)+N_{ji}^{conc}(t)] =E[\Lambda_{ij}^{conc}(t)+\Lambda_{ji}^{conc}(t)] + E[M_{ij}^{conc}(t)+M_{ji}^{conc}(t)]\\ =&E[\Lambda_{ij}^{conc}(t)+\Lambda_{ji}^{conc}(t)] + 0 = E\int_0^tf_q(s)Y_{ij}(s)ds = E\int_0^{t\wedge\tau_{ij}} f_q(s) ds, \end{align*} where \begin{align*} f_q(s)=&\alpha(s|Z_i)\mathbb{I}[q(s|Z_i)>q(s|Z_j)]+\alpha(s|Z_j)\mathbb{I}[q(s|Z_i)<q(s|Z_j)]\\ & +0.5[\alpha(s|Z_i)+ \alpha(s|Z_j)]\mathbb{I}[q(s|Z_i)=q(s|Z_j)]. \end{align*} Let $F_q=E\int_0^{\tau_{ij}} f_q(s) ds$ and let \begin{align*} A_q(s)=&\mathbb{I}[q(s\vert Z_i)\geq q(s\vert Z_j), \alpha(s\vert Z_i)<\alpha(s\vert Z_j)] +\\ &\mathbb{I}[q(s\vert Z_i)\leq q(s\vert Z_j), \alpha(s\vert Z_i)>\alpha(s\vert Z_j)].
\end{align*} Then, for any $q$, \begin{align*} F_\alpha-F_q=E\int_0^{\tau_{ij}}[f_\alpha(s)-f_q(s)]ds =E\int_0^{\tau_{ij}}[f_\alpha(s)-f_q(s)]A_q(s)ds, \end{align*} as $f_q(s)=f_\alpha(s)$ if $A_q(s)=0$. The latter can be seen by going through the three cases $\alpha(s\vert Z_i)>\alpha(s\vert Z_j)$, $\alpha(s|Z_i)<\alpha(s|Z_j)$ and $\alpha(s|Z_i)=\alpha(s|Z_j)$. Furthermore, $A_q(s)=1$ implies $f_\alpha(s)>f_q(s)$. Thus, $F_\alpha \geq F_q$ and $F_\alpha=F_q$ if and only if $E \int_0^{\tau_{ij}} A_q(s)ds=0$. \end{proof} \section{Discrete Event Time} \label{sec3} We show that an analogous result to Theorem \ref{thm:1} holds for discrete time data. To show the result we need to treat pairs of events with tied event times ($T_i=T_j, i\neq j$) as comparable. Suppose that the possible event and censoring times $X_i$ and $U_i$ are discrete random variables over the positive integers $\mathbb{N}^+$. We denote the discrete hazard rate by $\alpha(t| Z_i)=P(X_i=t\vert X_i \geq t, Z_i)$. As before, we assume $X_i\perp\!\!\!\perp U_i\mid Z_i$ and that there is a finite observation window ensured by $U_i\leq\cal{T}$ for some $\cal{T}\in \mathbb{N}^+$. The definitions of $T_i$, $D_i$, $\pi_{comp}$, $c_q^n$ and $C_q$ are as in the previous sections and risk scores are now defined as $q:\mathbb{N}^+\times\mathcal{Z}\rightarrow[0,1]$. \begin{theorem} \label{thm:2} Under the discrete time set-up, if $\pi_{comp}>0$ then $$ c_q^n\overset{p}{\to} C_q \quad (n\to \infty). $$ Furthermore, the following equivalence holds:\\ $C_q$ is a proper concordance index, i.e., $$\forall \tilde{q}:\mathbb{N}^+\times\mathcal{Z}\rightarrow[0,1]: \quad C_q\geq C_{\tilde{q}}$$ if and only if for $i\neq j$: \begin{equation} \label{eq:riskhazconcordant_discrete} \begin{split} E\sum_{s=1}^{\tau_{ij}} \{&\mathbb{I}[q(s\vert Z_i)\geq q(s\vert Z_j), \alpha(s\vert Z_i)<\alpha(s\vert Z_j)] \\ &+ \mathbb{I}[q(s\vert Z_i)\leq q(s\vert Z_j), \alpha(s\vert Z_i)>\alpha(s\vert Z_j)] \}=0. 
\end{split} \end{equation} where $\tau_{ij}=T_i \wedge T_j$. \end{theorem} The proof is similar to that of Theorem \ref{thm:1} and can be found in the Appendix. The decision to treat pairs with tied event times as comparable pulls each concordance score towards 0.5. Also, the scores of different models are pulled closer together, while retaining the same ordering. This is because for such pairs we always have $N_{ij}^{\text{conc}}(\tau_{ij}) + N_{ji}^{\text{conc}}(\tau_{ij})=1$ and $N_{ij}^{\text{comp}}(\tau_{ij}) + N_{ji}^{\text{comp}}(\tau_{ij})=2$. We prove these statements fully in Appendix \ref{App:Ties}. \section{Demonstration of incorrect model selection} \label{sec4} We now present an experiment to compare concordance indices produced by different risk scores. The set-up is chosen to show that it is possible to favour incorrect models over the true data generating mechanism. Further studies would be needed to show how typical this situation is. We generate a data set with crossing hazards inspired by the problem discussed by \cite{Mantel1988-qh}. Let a population of 2000 be divided into two groups, with covariate $Z_i = 0$ for those in group 0 and $Z_i = 1$ for those in group 1. The data generating model $M_0$ is specified by the hazard rates $$ \alpha_{M_0}(t\vert Z_i=0) = 0.5, \quad \alpha_{M_0}(t\vert Z_i=1) = t.$$ There is independent right censoring by an exponential distribution with rate 0.05 as well as censoring for anyone who survives until $t=1.1$. Now let there be three incorrect models $M_1$, $M_2$, $M_3$, for us to compare to, which are defined by their hazard rates $\alpha_{M_1}$, $\alpha_{M_2}$, $\alpha_{M_3}$ as follows: $$ \alpha_{M_1}(t\vert Z_i=0) = 0.5, \quad \alpha_{M_1}(t\vert Z_i=1) = \begin{cases} t, & ( t\leq 0.5) \\ 10t, & (0.5<t) \end{cases}, $$ $ \alpha_{M_2}(t\vert Z_i=0) = 0.25,$ $ \alpha_{M_2}(t\vert Z_i=1) = t, $ $ \alpha_{M_3}(t\vert Z_i=0) = 0.5,$ $ \alpha_{M_3}(t\vert Z_i=1) = 0.5t.
$ The hazard and cumulative hazard rates are shown in Figure \ref{fig:1}. We use several different risk scores to calculate concordance indices. Our suggestion, the hazard at the time of the first event, uses $q(t|z)=\alpha(t|z)$ and is denoted by $C_\alpha$. Survival at the time of the first event, the suggestion of \cite{Antolini:2005}, is denoted by $C^{td}$ and uses $q(t|z)=-S(t|z)$, where $S(t|z)$ is the survivor function at time $t$ for an individual with covariate $z$. Survival at the fixed times 0.5 and 1.05 is denoted by $C_{S(0.5)}$ and $C_{S(1.05)}$ and uses $q(t|z)=-S(0.5|z)$ and $q(t|z)=-S(1.05|z)$, respectively. The quantile survival time is denoted by $C_{\mu(s)}$ and uses $q(t|z)=-\inf\{u \text{ s.t } S(u|z) \leq s\} $. We generated 100 different data sets and computed the resulting concordance indices as well as, for every concordance index, the frequency with which each model achieved the highest score. Results are presented in Table \ref{tab:1}. As anticipated by Theorem \ref{thm:1}, $C_\alpha$ almost always selects the correct model, but is unable to distinguish between $M_0$ and $M_1$ as both models have risk scores concordant with the hazard rate of the true model $M_0$. The concordance $C^{td}$ consistently selects an incorrect model in this situation. $C_{S(0.5)}$ fails to perform any model selection, giving every model an equal score in every experiment. $C_{S(1.05)}$ mostly selects an incorrect model. Finally, $C_{\mu(0.5)}$ similarly chooses an incorrect model in most iterations, while $C_{\mu(0.75)}$ mostly fails to distinguish between $M_0$ and $M_3$, showing that choosing $\mu(s)$ as the risk score can perform as well as $C_\alpha$, but is dependent on $s$ (the best choice of which will be unknown in practice). With each iteration there is a small chance that the randomly generated data will result in concordance calculation orderings that do not match the order of the expected concordances.
This has resulted in a small number of deviations in model selection from the general trend. \begin{figure} \caption{Hazard rates (top row) and cumulative hazard rates (bottom row) of models $M_0, \dots, M_3$ (left to right). Group 0: solid lines; group 1: dotted lines.} \label{fig:1} \end{figure} \begin{table} \caption{Simulation from $M_0$ as described in Section 4. Left: Average concordance scores for each model/risk score. Right: Frequency with which each model achieved the highest concordance score over 100 replications; tied highest scores counted for all tied models.} \label{tab:1} \centering \begin{tabular}{c c c c c} \hline & $M_0$ & $M_1$ & $M_2$ & $M_3$ \\ \hline $C_\alpha$ & 0.57 & 0.57 & 0.55 & 0.53 \\ $C^{td}$ & 0.53 & 0.57 & 0.57 & 0.52 \\ $C_{S(0.5)}$ & 0.52& 0.52 & 0.52 & 0.52 \\ $C_{S(1.05)}$ & 0.48 & 0.48 & 0.48 & 0.52 \\ $C_{\mu(0.5)}$ & 0.48 & 0.48 & 0.48 & 0.52 \\ $C_{\mu(0.75)}$ & 0.52 & 0.48 & 0.48 & 0.52 \\ \hline \end{tabular} \quad \begin{tabular}{c c c c c} \hline & $ M_0$ & $M_1$ & $M_2$ & $M_3$\\ \hline $C_\alpha$&98 & 98 & 2 & 0\\ $C^{td}$ &0 & 50 & 50 & 0\\ $C_{S(0.5)}$ &100 & 100 & 100 & 100\\ $C_{S(1.05)}$ &4 & 4 & 4 & 96\\ $C_{\mu(0.5)}$&4 & 4 & 4 & 96\\ $C_{\mu(0.75)}$ & 96 & 4 & 4 & 96\\ \hline \end{tabular} \end{table} \section{Comparing Kaplan-Meier Estimates} \label{sec5} For a set of right censored survival data, the maximum likelihood estimator over all valid survival distributions is given by the Kaplan-Meier estimator. The Kaplan-Meier estimate is useful when simply examining recovery rates and probable event times for groups of individuals, as well as investigating the effectiveness of a treatment. In the latter case, individuals are grouped by treatment, and survival curve estimates for each group are compared using, for example, the log-rank test to establish treatment efficacy. The quality of the fit of such estimates is also commonly evaluated using the C-index.
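The data-generating mechanism for $M_0$ above can be reproduced by inverse-transform sampling from the cumulative hazards $H(t|Z=0)=0.5t$ and $H(t|Z=1)=t^2/2$. A sketch (the function and variable names are ours, not taken from the paper's code repository):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_M0(n_per_group):
    """Draw (T, D, Z) from model M_0 with the censoring scheme of Section 4."""
    Z = np.repeat([0, 1], n_per_group)
    U = rng.uniform(size=2 * n_per_group)
    # Invert the cumulative hazards: H(t|Z=0) = 0.5 t, H(t|Z=1) = t^2 / 2.
    X = np.where(Z == 0, -np.log(U) / 0.5, np.sqrt(-2.0 * np.log(U)))
    # Exponential censoring with rate 0.05, plus administrative censoring at 1.1.
    C = np.minimum(rng.exponential(scale=1 / 0.05, size=2 * n_per_group), 1.1)
    T = np.minimum(X, C)
    D = (X <= C).astype(int)
    return T, D, Z

T, D, Z = sample_M0(1000)
print(T.max() <= 1.1, set(np.unique(D)) <= {0, 1})  # -> True True
```

The same scheme, with the piecewise cumulative hazards inverted segment by segment, generates data from $M_4$, $M_5$ and $M_6$ in the later sections.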
In the following experiment we investigate two further situations ($M_4$ and $M_5$), with crossing hazard rates between groups, for which Kaplan-Meier estimates will be evaluated by the concordance index using a range of risk scores. Let there be two groups of 2000 patients with hazard rates $$ \alpha_{M_4}(t\vert Z_i=0) = \begin{cases} 6, & ( t\leq 0.1) \\ 1, & (t>0.1) \end{cases}, \qquad \alpha_{M_4}(t\vert Z_i=1) = 1.4 $$ in the first experiment, and $$ \alpha_{M_5}(t\vert Z_i=0) = \begin{cases} 0.5, & ( t\leq 0.9) \\ 10, & (t>0.9) \end{cases},\qquad \alpha_{M_5}(t\vert Z_i=1) = \begin{cases} 2, & ( t\leq 0.9) \\ 1, & ( t> 0.9) \end{cases} $$ in the second experiment. The Kaplan-Meier estimates of the survival functions will be calculated for each group, which will then be used to produce risk scores for all event times. The risk score used in $C_\alpha$ requires an estimate of the hazard rate at each event time. When the survival time has an absolutely continuous distribution function, the hazard rate satisfies $\alpha(t) = f(t)/S(t) = -\frac{dS(t)}{dt}\big/S(t)$ \cite[Example II.4.1]{Andersen:2012}. Therefore, by smoothing and differentiating the Kaplan-Meier estimate of the survival function, we can produce an estimate of the hazard rate. In the following experiment we smooth the Kaplan-Meier survival estimate using a triangular smoothing kernel with bandwidth $b=0.05$. Furthermore, in order to prevent a bias at the beginning and end of the survival curve, we firstly extend $\hat{S}(t)$ by $b$ around $t = 0$, and secondly we report results right censored $b$ earlier than the true right-censoring time, giving a final right-censor time of 1. For full implementation details refer to Appendix \ref{appendix:code}. The survival and hazard function estimates are shown in Figures \ref{fig:2} and \ref{fig:3}. The hazard rate estimates correctly cross only once in each experiment, near the true hazard crossing times.
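The smoothing-and-differentiating step just described can be sketched as follows. The grid, helper name, and boundary handling are our simplifications; the paper's implementation additionally extends $\hat{S}$ by $b$ around $t=0$ and re-censors $b$ before the end:

```python
import numpy as np

def hazard_from_survival(t_grid, S_hat, b=0.05):
    """Estimate alpha(t) = -S'(t)/S(t) from a survival-curve estimate.

    Smooths S_hat with a triangular kernel of bandwidth b on a regular
    time grid, then differentiates numerically. Sketch only: boundary
    points (within b of either end) are biased by the zero padding.
    """
    dt = t_grid[1] - t_grid[0]
    half = int(round(b / dt))
    w = 1.0 - np.abs(np.arange(-half, half + 1)) / (half + 1)  # triangular
    w /= w.sum()
    S_smooth = np.convolve(S_hat, w, mode="same")
    dS = np.gradient(S_smooth, dt)
    return -dS / np.clip(S_smooth, 1e-12, None)

# Sanity check on an exact exponential survival curve S(t) = exp(-0.5 t):
t = np.linspace(0, 2, 401)
alpha = hazard_from_survival(t, np.exp(-0.5 * t))
print(np.round(alpha[100:300].mean(), 2))  # interior estimate approximates the true rate 0.5
```

Applied to a Kaplan-Meier step function instead of an exact curve, the same routine yields the hazard estimates plotted in Figures \ref{fig:2} and \ref{fig:3}.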
Concordance results for each risk score are reported in Table \ref{tab:2}. For model $M_4$ we find that all concordance indices, except $C_\alpha$, find that this model has almost no ability to discriminate, scoring only 0.51, whereas $C_\alpha$ scores 0.57, indicating good discrimination. For model $M_5$, $C_\alpha$ finds the model to be even stronger, whereas every other risk score reports the model to be significantly discordant, with scores of 0.44, despite the predicted hazard and survival functions matching the truth well. The estimated survival plot is produced using the lifelines Python package \cite{Davidson}, which includes $95\%$ confidence intervals. This experiment shows a clear shortcoming of calculating the C-index with other risk scores that is not experienced with $C_\alpha$, further justifying its use. Furthermore, $M_4$ may be of special interest as it reflects a treatment setting wherein the treatment group experiences an initial period of higher mortality, followed by a recovered period where they are healthier. We have shown that the other risk scores cannot be relied upon to recognise strong models in situations where hazard rates cross. These experiments also give a simple method of calculating $C_\alpha$ for the Kaplan-Meier survival function estimate. This method may be less feasible in situations with fewer data points, as this will result in poorer hazard function estimates. \begin{figure} \caption{Estimated survival and hazard functions for data from $M_4$.} \label{fig:2} \end{figure} \begin{figure} \caption{Estimated survival and hazard functions for data generated by $M_5$.
} \label{fig:3} \end{figure} \begin{table} \caption{Concordance index scores across a range of risk scores for the Kaplan-Meier estimates of models $M_4$ and $M_5$.} \label{tab:2} \centering \begin{tabular}{c c c c c c c} \hline & $C_\alpha$ & $C^{td}$ & $C_{S(0.5)}$ & $C_{\mu(0.25)}$ & $C_{\mu(0.5)}$ & $C_{\mu(0.75)}$ \\ \hline $M_4$ &0.57 & 0.51 & 0.51 & 0.51 & 0.51 & 0.51 \\ $M_5$ &0.61 & 0.44 & 0.44 & 0.44 & 0.44 & 0.44 \\ \hline \end{tabular} \end{table} \section{Deep Learning} \label{sec6} In this section we produce an experiment to test how using $C_{\alpha}$ in the loss function of a deep learning survival model may improve predictions. \citet{Lee:2018} present DeepHit, a neural network that predicts a discrete probability mass function $\hat{f}(t\vert Z_i)$ for each individual $i$. The loss function used to train DeepHit is the sum of two terms: the regular log-likelihood function \begin{equation} L_0 = \sum_{i, D_i=1}\log(\hat{f}(T_i \vert Z_i))+ \sum_{i, D_i=0}\log(1-\hat{F}(T_i\vert Z_i)), \end{equation} where $\hat{F}(t\vert Z_i) = \sum_{s\leq t}\hat{f}(s\vert Z_i)$, and a second term designed to encourage the maximisation of $C^{td}$, \begin{equation} L^{td} = \sum_{i\neq j} A_{i,j}\cdot \eta (\hat{F}(T_i\vert Z_i), \hat{F}(T_i\vert Z_j)), \end{equation} where $A_{i,j}=\mathbf{1}(T_i<T_j, D_i=1)$ indicates whether, for the pair $(i,j)$, individual $i$ is known to have experienced the event first, and $\eta(x,y)=\exp(\frac{-(x-y)}{\sigma})$ for some fixed hyper-parameter $\sigma >0$. This function appears to be chosen in place of the indicator functions of the concordance index because it is differentiable and can therefore be optimised using gradient descent.
However, we argue that a loss built to mimic the C-index should score each pair as similarly to the C-index as possible, and we therefore use the sigmoid function $s(x) = \frac{e^{x}}{e^{x}+1}$ instead of the exponential, giving $\eta(x,y)=s(\frac{-(x-y)}{\sigma})$, so that the loss incurred by each pair is limited to $[0,1]$, with risk score draws returning 0.5. Another change was that the calculation of C-indices in validation and testing was altered to match that given in Section \ref{sec3}, treating pairs with equal observation time as comparable. In our experiment we will target $C_\alpha$ by adapting $L^{td}$ to instead evaluate the ordering of the predicted hazard rate $\hat{\alpha}$ at the first event time for each pair: \begin{equation} L_{\alpha} = \sum_{i\neq j} A_{i,j}\cdot \eta (\hat{\alpha}(T_i\vert Z_i), \hat{\alpha}(T_i\vert Z_j)). \end{equation} To test this model we again produce some synthetic survival data, modelling discrete hazard rates as DeepHit produces predictions for discrete event times. We let there be two groups of 10,000 patients with discrete hazard rates $$ \alpha_{M_6}(t\vert Z_{i,1}=0) = \begin{cases} 0.05, & ( t = 1,\dots, 5) \\ 0.5, & (t = 6,\dots, 10) \end{cases}, \qquad \alpha_{M_6}(t\vert Z_{i,1}=1) = \begin{cases} 0.5, & ( t = 1,\dots, 5) \\ 0.05, & ( t = 6,\dots 10) \end{cases}. $$ Noise variables are included alongside the true covariates. For each patient, independent covariates $Z_{i,k}\sim Bernoulli(0.5)$ for $k=2,\dots, 10$ are included. We train on 80$\%$ of this data, validate during training on 4$\%$ and test with the remaining 16$\%$. The validation is accomplished by calculating $C^{td}$ for the model trained with the $L^{td}$ loss term and $C_{\alpha}$ for the model trained with $L_{\alpha}$ (since these are the metrics each model is targeting) and training stops if these scores do not improve for 5 training epochs.
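A schematic NumPy version of the pairwise term $L_{\alpha}$ above may clarify the bookkeeping. The array layout `alpha_hat[i, t-1]` for discrete times $t = 1, \dots, t_{\max}$ and the default value of the scale hyper-parameter are our assumptions, not details fixed by the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def L_alpha(T, D, alpha_hat, sigma=0.1):
    """Pairwise ranking loss targeting C_alpha (sketch, not the DeepHit code).

    alpha_hat[i, t-1] : predicted discrete hazard for individual i at time t.
    Each comparable pair contributes sigmoid(-(a_i - a_j)/sigma) in [0, 1];
    tied predictions contribute 0.5.
    """
    n = len(T)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if i == j or not (D[i] == 1 and T[i] < T[j]):
                continue  # A_{i,j} = 0: pair not comparable
            diff = alpha_hat[i, T[i] - 1] - alpha_hat[j, T[i] - 1]
            loss += sigmoid(-diff / sigma)
    return loss

# A correctly ordered pair incurs almost no loss:
T = np.array([1, 2]); D = np.array([1, 1])
alpha_hat = np.array([[0.9, 0.1], [0.1, 0.9]])
print(L_alpha(T, D, alpha_hat) < 0.5)  # -> True
```

In a training loop the double sum would be vectorised and $\hat{\alpha}$ would come from the network's output layer; the sketch only fixes which quantities are compared and at which time.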
\begin{figure} \caption{DeepHit predicted discrete cumulative hazards using $L^{td}$ and $L_{\alpha}$.} \label{fig:4} \end{figure} The models perform admirably when evaluated according to the concordance index corresponding to the loss function used. The model trained with the $L^{td}$ loss scores $C^{td}=0.69$, while the model trained with the $L_{\alpha}$ loss scores $C_{\alpha}=0.69$ on the testing data. However, comparison of concordance scores in this setting is non-informative as each model is designed to perform well for its respective metric. Since we know the form of the true hazard rates for each individual we can compare the predictions directly with the truth. In Figure \ref{fig:4} (a) and (b) all discrete cumulative hazard predictions are displayed. DeepHit trained with $L_{\alpha}$ performs as expected, predicting the cumulative hazards correctly, with some noise due to the inclusion of the nine noise variables. Conversely, when training with $L^{td}$ the cumulative hazard rates for the second group cross below those of the first group at $t=7$, several steps earlier than they should. This error can be explained as follows: suppose the network gave the true hazard rates as the predictions. The loss function based on $C^{td}$ would then evaluate across all pairs of individuals. Consider the pairs $(i,j)$ such that $Z_{i,1}=0$, $Z_{j,1} = 1$, $6\leq T_i\leq T_j$ and $D_i=1$. Despite individual $i$ actually having a higher hazard rate at $T_i$, the cumulative hazard for $j$ would be higher, so $L^{td}$ considers the pair non-concordant and would incur a loss. Such spurious losses would then in future training cycles encourage the neural network to re-weight in such a way that the cumulative hazard of $j$ would be lower at $T_i$, while the cumulative hazard for $i$ would be higher. This experiment is designed to prominently display the consequences of choosing to target $C_{\alpha}$ instead of $C^{td}$ in the loss function of deep learning survival models.
Our result shows that if the underlying process generating the data does have crossing hazard rates between individuals, then this trend may not be effectively learned by using $L^{td}$ instead of $L_{\alpha}$. In a real data setting it may be expected that the hazard rate crossing will not be as dramatic, but without knowing the true hazard functions, using $L_{\alpha}$ may be preferred. \section{Discussion} In this paper we have explored the limitations of risk scores used in the C-index, specifically when it is used to assess survival models that are capable of producing crossing hazard rate predictions. Previous work has approached solving similar problems; for example, Antolini's C-index focused on models with crossing survival curves. In our work we show that even these efforts can still suffer from rewarding incorrect survival models more highly than the truth. The main contribution of this paper is the development of the proper concordance index $C_{\alpha}$, which we prove has the desirable property of asymptotically not scoring a prediction more highly than the true survival distribution. We demonstrated the advantages of $C_{\alpha}$ in two settings: first as the metric of success for Kaplan-Meier models, and secondly in its various uses for deep learning models. The deep learning models trained with loss functions incorporating $C_{\alpha}$ outperformed those targeting $C^{td}$ in a situation with crossing hazard rates. This experiment also further demonstrated the advantages of $C_{\alpha}$ as a success metric, as well as its use as a validation metric during training. \appendix \section{Appendix} \subsection{Proofs} \label{appendix} \begin{proof}[Proof of Lemma \ref{thm:lem1}] Let $i\in \mathbb{N}$.
Consider the counting process $N_{i}(t)=D_i \mathbb{I}(T_i\leq t)$, with a unique decomposition $N_i(t)=\Lambda_i(t)+M_i(t)$ into a compensator $\Lambda_i(t)=\int_0^t\alpha(s|Z_i)Y_i(s)ds$, where $Y_i(s)=\mathbb{I}(T_i\geq s)$ is the at-risk indicator, and a finite variation local martingale $M_i$ with respect to $(\mathcal{F}_t)$. With $j\in \mathbb{N},$ $j\neq i$, let $N_{i}^{\tau_{ij}}$ be $N_i$ stopped at $\tau_{ij}$, i.e., $N_{i}^{\tau_{ij}}(t)=N_{i}(t\wedge \tau_{ij})$. Since a finite variation local martingale stopped at a stopping time is again a finite variation local martingale, $M^{\tau_{ij}}_{i}(t):=M_i(t\wedge \tau_{ij})$ is a finite variation local martingale. By uniqueness of decomposition \citep[Theorem III.16]{Protter:2010}, the compensator of $N^{\tau_{ij}}_{i}(t)$ is therefore \begin{align*} \Lambda_i^{\tau_{ij}}(t)& = \int_0^{t\wedge \tau_{ij}}\alpha(s|Z_i)Y_i(s)ds = \int_0^t\alpha(s|Z_i)Y_{ij}(s)ds, \end{align*} where $Y_{ij}(t) = \mathbb{I}(t \leq \tau_{ij} )$. Letting $Q_{ij}^1(t)=\mathbb{I}[q(t\wedge\tau_{ij}|Z_i)>q(t\wedge\tau_{ij}|Z_j)]$, we can write $N_{ij}^{conc,1}$ as \begin{align*} N_{ij}^{conc,1}(t) &= \int_0^t Q_{ij}^1(s)dN_{i}^{\tau_{ij}}(s)= \Lambda_{ij}^{conc,1}(t) + M_{ij}^{conc,1}(t), \end{align*} where $\Lambda_{ij}^{conc,1}(t):= \int_0^t Q_{ij}^1(s)d\Lambda^{\tau_{ij}}_{i}(s) $ and $M_{ij}^{conc,1}(t) := \int_0^t Q_{ij}^1(s)dM^{\tau_{ij}}_{i}(s)$. $M_{ij}^{conc,1}(t)$ is a local martingale with respect to $(\mathcal{F}_t)$ as $Q_{ij}^1$ is predictable and bounded \cite[Theorem II.3.1]{Andersen:2012}. Thus, again by uniqueness of decomposition, the compensator of $N_{ij}^{conc,1}(t)$ is given by $\Lambda_{ij}^{conc,1}(t)$, which can be rewritten as \begin{align*} \Lambda^{conc,1}_{ij}(t) &= \int_0^t\alpha(s|Z_i)Q^1_{ij}(s)Y_{ij}(s)ds, \end{align*} implying that the intensity of $N_{ij}^{conc,1}(t)$ is $\lambda_{ij}^{conc,1}(t) = \alpha(t|Z_i)Q_{ij}^1(t)Y_{ij}(t)$.
By similar arguments we can show that the process $N_{ij}^{conc,2}(t)$ has a decomposition into local martingales and compensators with intensity process \begin{align*} \lambda_{ij}^{conc,2}(t) &= \alpha(t|Z_i)Q_{ij}^2(t)Y_{ij}(t), \end{align*} where $Q_{ij}^2(t) = \mathbb{I}[q(t\wedge\tau_{ij}|Z_i)=q(t\wedge\tau_{ij}|Z_j)]$. Now we can decompose $N_{ij}^{conc}$ as \begin{align*} N_{ij}^{conc}(t)&=N_{ij}^{conc,1}(t) +N_{ij}^{conc,2}(t)/2= \Lambda_{ij}^{conc,1}(t) +\Lambda_{ij}^{conc,2}(t) /2 +M_{ij}^{conc,1}(t) + M_{ij}^{conc,2}(t)/2. \end{align*} Since the property of being a local martingale is closed under addition and scalar multiplication, the final two terms, which we call $M_{ij}^{conc}(t)$, form a local martingale. Therefore, the first two terms are the compensator of $N_{ij}^{conc}(t)$, which simplifies to \begin{align*} \Lambda_{ij}^{conc}(t) &= \int_0^t\alpha(s|Z_i)[Q_{ij}^1(s)+0.5 Q_{ij}^2(s)]Y_{ij}(s)ds. \end{align*} To show that $M_{ij}^{conc}$ is a martingale, and not just a local martingale, we use Theorem I.51 of \citet{Protter:2010}, which requires us to show \begin{equation*} E[\sup_{s\leq t}|M_{ij}^{conc}(s)|]<\infty \quad \forall t\geq 0. \end{equation*} First we write \begin{align*} E[\sup_{s\leq t}|M_{ij}^{conc}(s)|] &\leq E[\sup_{s\leq t}|N_{ij}^{conc}(s)|] + E[\sup_{s\leq t}|\Lambda_{ij}^{conc}(s)|] \leq 1 + E[\sup_{s\leq t}|\Lambda_{ij}^{conc}(s)|], \end{align*} and then we bound the second term \begin{align*} \sup_{s\leq t}|\Lambda_{ij}^{conc}(s)| &= \int_0^t(\alpha(s|Z_i)Q_{ij}^1(s) + \frac{\alpha(s|Z_i)Q_{ij}^2(s)}{2})Y_{ij}(s)ds\\ &\leq \frac{3}{2}\int_0^t\alpha(s|Z_i)Y_{ij}(s)ds \leq \frac{3}{2}\int_0^{X_i}\alpha(s|Z_i)ds = \frac{3}{2}H(X_i|Z_i), \end{align*} where $H(t|Z_i)=\int_0^t \alpha(s|Z_i) ds$ is the $i$th individual's integrated hazard rate. Suppose $Y$ is a random variable with integrated hazard rate $H$ and survival function $S$, so that $H(t)=-\log S(t)$. Then, since $S(Y)$ is uniformly distributed on $(0,1)$, $E[H(Y)]=E[-\log(S(Y))]=-\int_0^1 \log(u)du=1$.
Hence, \begin{equation*} E[\sup_{s\leq t}|M_{ij}^{conc}(s)|] \leq 1 + \frac{3}{2}E[H(X_i\vert Z_i)] = 5/2 <\infty. \end{equation*} \end{proof} \begin{proof}[Proof of Theorem \ref{thm:2}] For $t \in\mathbb{N}^+$ we define $N_{ij}^{comp}(t), N_{ij}^{conc,1}(t), N_{ij}^{conc,2}(t)$ and $N_{ij}^{conc}(t)$ as in Section \ref{sec2}, with the filtration defined as $\mathcal{F}_t=\sigma(Z_i, \mathbb{I}(T_i\leq s), \mathbb{I}(T_i\leq s, D_i=1), i\in \mathbb{N}, s=1,\dots, t)$ and $\mathcal{F}_0 = \sigma(Z_i, i \in \mathbb{N})$. We can then derive compensators of the $N_{ij}^{conc}(t)$ with respect to $\mathcal{F}_t$. Let \begin{equation*} M_{ij}(t) = N_{ij}^{conc}(t)-\Lambda_{ij}^{conc}(t) \end{equation*} where $\Lambda_{ij}^{conc}(t)=\sum_{s=1}^{t}Y_{ij}(s)[\mathbb{I}(q(s\vert Z_i) > q(s\vert Z_j)) + \frac{1}{2}\mathbb{I}(q(s\vert Z_i)=q(s\vert Z_j))]\alpha (s\vert Z_i)$ and $Y_{ij}(s) = \mathbb{I}(\tau_{ij} \geq s)$. $M_{ij}(t)$ defines a discrete-time martingale with respect to $(\mathcal{F}_t)$ because $$ E[N_{ij}^{conc}(t)-N_{ij}^{conc}(t-1) \vert \mathcal{F}_{t-1}, \tau_{ij} < t] = 0 $$ and \begin{align*} &E[N_{ij}^{conc}(t)-N_{ij}^{conc}(t-1) \vert \mathcal{F}_{t-1}, \tau_{ij} \geq t]\\ &= P(\Delta N_{ij}^{conc,1}(t) = 1 \vert \mathcal{F}_{t-1}, \tau_{ij} \geq t) + \frac{1}{2}P(\Delta N_{ij}^{conc,2}(t) = 1 \vert \mathcal{F}_{t-1}, \tau_{ij} \geq t)\\ &= \mathbb{I}(q(t\vert Z_i) > q(t\vert Z_j))\alpha(t\vert Z_i) + \frac{1}{2}\mathbb{I}(q(t\vert Z_i) = q(t\vert Z_j))\alpha(t\vert Z_i). \end{align*} Therefore, the compensator of $N_{ij}^{conc}(t)$ is $\Lambda_{ij}^{conc}(t)$. The proof from this point on is the same as that for Theorem \ref{thm:1} with integrals replaced by sums. \end{proof} \subsection{Effect of Tie Inclusion} \label{App:Ties} Suppose we have discrete event data $\{T_i, D_i, Z_i\}_{i=1}^{n}$ for which we have potential models $M_1$ and $M_2$, with corresponding discrete hazard rates $q_1(t\vert Z_i)$ and $q_2(t \vert Z_i)$.
Let \begin{align*} a &= \sum^n_{i=1}\sum^n_{j=1;j\neq i}\{\mathbb{I}[D_i=1, T_i< T_j, q_1(T_i|Z_i)>q_1(T_i|Z_j)] +\frac{1}{2}\mathbb{I}[D_i=1,T_i< T_j,q_1(T_i|Z_i)=q_1(T_i|Z_j)]\},\\ b &= \sum^n_{i=1}\sum^n_{j=1;j\neq i}\{\mathbb{I}[D_i=1, T_i< T_j, q_2(T_i|Z_i)>q_2(T_i|Z_j)] +\frac{1}{2}\mathbb{I}[D_i=1,T_i< T_j,q_2(T_i|Z_i)=q_2(T_i|Z_j)]\},\\ c &= \sum^n_{i=1}\sum^n_{j=1;j\neq i}\mathbb{I}(D_i=1,T_i< T_j),\\ \Tilde{c} &= \sum^n_{i=1}\sum^n_{j=1;j\neq i}\mathbb{I}(D_i=1,T_i= T_j),\\ w &= \frac{c}{c+2\Tilde{c}}. \end{align*} Suppose that $c>0$ and $\Tilde{c}>0$. With these definitions we have $c^n_{q_1}= \frac{a}{c}$, $c^n_{q_2}= \frac{b}{c}$ if ties are not included and $c^n_{q_1}=\frac{a+\Tilde{c}}{c+2\Tilde{c}}=w\frac{a}{c}+(1-w)\frac{1}{2}$, $c^n_{q_2}=\frac{b+\Tilde{c}}{c+2\Tilde{c}}=w\frac{b}{c}+(1-w)\frac{1}{2}$ if they are. Hence the ordering is the same in either case. We also have $$ \left|\frac{a+\Tilde{c}}{c + 2\Tilde{c}}-\frac{1}{2}\right| = w\left|\frac{a}{c}-\frac{1}{2}\right| < \left|\frac{a}{c}-\frac{1}{2}\right|, $$ showing that the inclusion of ties pulls $c_{q}^n$ closer to 0.5. Finally, the inclusion of ties pulls the estimates $c^n_{q_1}$ and $c^n_{q_2}$ closer together as $$ \left|\frac{a+\Tilde{c}}{c + 2\Tilde{c}}-\frac{b+\Tilde{c}}{c+2\Tilde{c}}\right| = \left| w\frac{a}{c}+(1-w)\frac{1}{2}-w\frac{b}{c}-(1-w)\frac{1}{2} \right|= w\left| \frac{a}{c}-\frac{b}{c} \right|< \left| \frac{a}{c}-\frac{b}{c}\right|. $$ \subsection{Code} \label{appendix:code} Code for all experiments reported in this document can be found at \newline https://github.com/tmatcham/CrossingHazardConcordance \if11 { \section*{Funding} This article presents independent research supported by the National Institute for Health Research (NIHR) under the Applied Research Collaboration (ARC) programme for Northwest London. The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.
TM was supported by the EPSRC Centre for Doctoral Training in Modern Statistics and Statistical Machine Learning (EP/S023151/1). } \fi \section{Conflict of Interest Statement} The authors report there are no competing interests to declare. \end{document}
\begin{document} \title[]{On the HJY Gap Conjecture in CR geometry vs. the SOS Conjecture for polynomials} \author{Peter Ebenfelt} \address{Department of Mathematics, University of California at San Diego, La Jolla, CA 92093-0112} \email{[email protected]} \thanks{The author was supported in part by the NSF grant DMS-1301282.} \begin{abstract} We show that the Huang-Ji-Yin (HJY) Gap Conjecture concerning CR mappings between spheres follows from a conjecture regarding Sums of Squares (SOS) of polynomials. The connection between the two problems is made by the CR Gauss equation, and the fact that the former conjecture follows from the latter relies on a recent result, due to the author, on partial rigidity of CR mappings of strictly pseudoconvex hypersurfaces into spheres. \end{abstract} \thanks{2000 {\em Mathematics Subject Classification}. 32H02, 32V30} \maketitle \section{Introduction} The purpose of this note is to explain how the Huang-Ji-Yin (HJY) Gap Conjecture concerning CR mappings between spheres \cite{HuangJiYin09} follows from a conjecture regarding Sums of Squares (SOS) of polynomials. The connection between the two problems is made by the CR Gauss equation (a well known fact) and the implication follows from a recent result, due to the author \cite{E12}, on partial rigidity (``flatness'') of CR mappings of strictly pseudoconvex hypersurfaces into spheres. The HJY Gap Conjecture concerns CR mappings $f$ of an open piece of the unit sphere $\mathbb S^n\subset\mathbb C^{n+1}$ into the unit sphere $\mathbb S^N\subset \mathbb C^{N+1}$ when the codimension $N-n$ lies in the integral interval $[0,D_n]$, where $D_n$ is a specific integer that depends on $n$ (with $D_n\sim \sqrt{2}n^{3/2}$, see below); here, we use the non-standard convention that the superscript $m$ on a real hypersurface $M^m\subset \mathbb C^{m+1}$ refers to the CR dimension, and not the real dimension (which is $2m+1$).
The mappings $f$ are assumed to be (sufficiently) smooth and, by results in \cite{Forstneric89} and \cite{CS90}, they therefore extend as rational maps without poles on $\overline{\mathbb B_{n+1}}$, where $\mathbb B_{n+1}\subset \mathbb C^{n+1}$ denotes the unit ball. In particular, there is no loss of generality in considering globally defined CR mappings $f\colon \mathbb S^{n}\to \mathbb S^N$. The conjecture asserts that there is a collection of finitely many disjoint integral subintervals $I_1,\ldots, I_{\kappa_0}\subset [0,D_n]$ with the property that if the codimension $N-n$ belongs to one of these subintervals, $N-n\in I_\kappa=[a_\kappa,b_\kappa]$, then \begin{equation}\label{f=TLf0} f=T\circ L\circ f_0, \end{equation} where $f_0$ is a CR mapping $S^n\to S^{N_0}$ for some $N_0$ with codimension $N_0-n<a_\kappa\leq N-n$ (in particular, then $N_0<N$), and where $L\colon S^{N_0}\to S^N$ is the standard linear embedding in which the last $N-N_0$ coordinates are zero and $T\colon S^N\to S^N$ is an automorphism of the target sphere $S^N$. It is well known and easy to see that the representation \eqref{f=TLf0} is equivalent to the statement that the image $f(\mathbb S^n)$ is contained in an affine complex subspace $A^{N_0+1}$ of dimension $N_0+1$. Before formulating the HJY Gap Conjecture more precisely, we must introduce the integral intervals $I_\kappa$. For $n\geq 2$, we define \begin{equation}\label{Ik} I_\kappa:=\left\{j\in \mathbb N\colon (\kappa-1)n+\kappa\leq j\leq \sum_{i=0}^{\kappa-1}(n-i)-1=n+(n-1)+\ldots+(n-\kappa+1)-1\right\}, \end{equation} for $\kappa=1,\ldots, \kappa_0$, where $\kappa_0=\kappa_0(n)$ is the largest integer $\kappa$ such that the integral interval $I_\kappa$ is non-trivial, i.e., \begin{equation}\label{k0} (\kappa-1)n+\kappa\leq \sum_{i=0}^{\kappa-1}(n-i)-1.
\end{equation} A simple calculation shows that $\kappa_0=\kappa_0(n)$ is increasing in $n$ (clearly, with $\kappa_0<n$) and grows like $\sqrt{2n}$. We have, e.g., $\kappa_0(2)=1$, $\kappa_0(4)=2$, and for $\kappa_0(n)\geq 3$, we need $n\geq 7$. For the integer $D_n$ referenced above, we can then take $$ D_n=\kappa_0n-\frac{\kappa_0(\kappa_0-1)}{2}-1=\sqrt{2}n^{3/2}-n-\sqrt{2n}+O(1). $$ Now, the conjecture made by X. Huang, S. Ji, and W. Yin in \cite{HuangJiYin09} can be formulated as follows: \begin{conjecture}[HJY Gap Conjecture]\label{HJYConj} For $n\geq 2$, let $\kappa_0$ and $I_1,\ldots, I_{\kappa_0}$ be as above and assume that $f\colon \mathbb S^n\to \mathbb S^{N}$ is a sufficiently smooth CR mapping. If the codimension $N-n\in I_\kappa$, then there exists an integer $n\leq N_0<N$ with \begin{equation}\label{No-nest} N_0-n\leq (\kappa-1)n+\kappa-1 \end{equation} and an affine complex subspace $A^{N_0+1}$ of dimension $N_0+1$ such that $f(\mathbb S^n)\subset S^N\cap A^{N_0+1}$. \end{conjecture} The $\kappa$th integral interval $I_\kappa$ with the property described in the conjecture above is referred to as the $\kappa$th {\it gap}. We note that the existence of the first gap is the statement that if $f\colon \mathbb S^n\to \mathbb S^{N}$ is a sufficiently smooth CR mapping and $1\leq N-n\leq n-1$, then $f(\mathbb S^n)\subset \mathbb S^N\cap A^{n+1}$. Since $\mathbb S^N\cap A^{n+1}$ is a sphere in the $(n+1)$-dimensional complex space $A^{n+1}$ and, thus, CR equivalent to $S^n\subset \mathbb C^{n+1}$, we can write $f=T\circ L\circ f_0$, where $T$ and $L$ are as in \eqref{f=TLf0} and $f_0$ is a map of $S^n$ to itself. By work of Poincar\'e \cite{Poincare07}, Alexander \cite{Alexander74}, and Pinchuk \cite{Pincuk74}, $f_0$ is in fact an automorphism of $\mathbb S^n$ (unless it is constant, of course) and by an appropriate choice of $T$, we can in fact make $f_0$ linear.
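As an illustrative cross-check (not part of the paper), the inequality \eqref{k0} simplifies to $\kappa(\kappa+1)/2\leq n-1$, so $\kappa_0(n)$ and $D_n$ can be computed by brute force; the function names below are our own.

```python
def kappa0(n):
    """Largest kappa with (kappa-1)*n + kappa <= sum_{i<kappa}(n-i) - 1,
    i.e. the largest kappa for which the interval I_kappa is non-trivial.
    The condition simplifies to kappa*(kappa+1)/2 <= n - 1 (assumes n >= 2)."""
    k = 1
    while k * n + (k + 1) <= sum(n - i for i in range(k + 1)) - 1:
        k += 1
    return k

def D(n):
    # D_n = kappa0*n - kappa0*(kappa0-1)/2 - 1, the top of the last interval
    k0 = kappa0(n)
    return k0 * n - k0 * (k0 - 1) // 2 - 1

# the values quoted in the text
assert [kappa0(n) for n in (2, 4, 7)] == [1, 2, 3]
assert (D(2), D(4)) == (1, 6)
# growth like sqrt(2n)
assert abs(kappa0(10**6) - (2 * 10**6) ** 0.5) < 2
```

The asserts reproduce $\kappa_0(2)=1$, $\kappa_0(4)=2$, the threshold $n\geq 7$ for $\kappa_0\geq 3$, and the $\sqrt{2n}$ growth.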
The existence of the first gap, under the assumption that $f$ is real-analytic, was established by Faran in \cite{Faran86}; the smoothness required for this was subsequently lowered to $C^{N-n}$ by Forstneric \cite{Forstneric89} and then to $C^2$ by X. Huang in \cite{Huang99}. The existence of the second gap (when $n\geq 4$) and the third gap (when $n\geq 7$) was established under the assumption of $C^3$-smoothness of $f$ in \cite{HuangJiXu06} and \cite{HuangJiYin12}, respectively. The existence of the $\kappa$th gap for $3< \kappa\leq \kappa_0$ is an open problem at this time. It is, however, known \cite{JPDLebl09} that when the codimension $N-n$ is sufficiently large, then there are no more gaps (in the sense of Conjecture \ref{HJYConj}). For the first three gaps, one can also classify the possible maps $f_0$ that appear in \eqref{f=TLf0}, as in the (very simple) Poincar\'e-Alexander-Pinchuk classification corresponding to the first gap described above; see \cite{HuangJi01}, \cite{Hamada05}, \cite{HuangJiYin12}. For the gaps beyond these, such a classification is most likely beyond what one can hope for at this time, at least for large $\kappa$. To the best of the author's knowledge, there is no conjecture as to what such ``model'' maps would be for general $\kappa$. For a CR mapping $f\colon \mathbb S^n\to \mathbb S^N$, there is a notion of the CR second fundamental form of $f$ and its covariant derivatives, and if we form the corresponding sectional curvatures (defined more precisely in the next section), then we obtain a collection of polynomials $\Omega^1(z), \ldots,\Omega^{N-n}(z)$ in the variables $z=(z^1,\ldots,z^n)\in\mathbb C^n$, whose coefficients consist of components of the second fundamental form and its covariant derivatives up to some finite order (bounded from above by the codimension $N-n$); we shall refer to the polynomial mapping $\Omega=(\Omega^1,\ldots,\Omega^{N-n})$ as the total second fundamental polynomial.
These polynomials satisfy a Sums Of Squares (SOS) identity as a consequence of a CR version of the Gauss equation. The SOS identity has the following form \begin{equation}\label{SOSid} \sum_{j=1}^{N-n}|\Omega^j(z)|^2 = A(z,\bar z)\sum_{i=1}^n|z^i|^2, \end{equation} where $A(z,\bar z)$ is a Hermitian (real-valued) polynomial in $z$ and $\bar z$. To simplify the notation, for a polynomial mapping $P(z)=(P^1(z),\ldots,P^q(z))$ we shall write $|\!| P(z)|\!|^2$ for the SOS of moduli of the components, i.e., \begin{equation} |\!| P(z)|\!|^2:=\sum_{k=1}^q|P^k(z)|^2. \end{equation} The number $q$ of terms in the norm will differ depending on the mapping in question, but will be clear from the context. Using this notation, the identity \eqref{SOSid} can be written in the following way: \begin{equation}\label{SOSid'} |\!|\Omega(z)|\!|^2=A(z,\bar z)|\!| z|\!|^2. \end{equation} The polynomial $A(z,\bar z)$ is in principle computable from $f$, but useful properties of $A$ seem difficult to extract directly in this way, and often it suffices to know that $\Omega$ satisfies an SOS identity of this form, for some Hermitian polynomial $A$. SOS identities of the form \eqref{SOSid'} appear in many different contexts, and there is an abundance of literature considering various aspects of such identities. We mention here only a few, and only ones with a connection to CR geometry and complex analysis: e.g., \cite{Quillen68}, \cite{CatlinDangelo96}, \cite{CatlinJPD99}, \cite{EHZ05}, \cite{JPDLebl09}, \cite{JPD11}, \cite{HuangY13}, \cite{GrHa13}, \cite{GruLV14}, \cite{E15}, and refer the reader to these papers for further connections and references to the literature. The reader is especially referred to the paper \cite{JPD11} by D'Angelo, which contains an excellent discussion of SOS identities and positivity conditions.
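As a toy illustration of an identity of the form \eqref{SOSid'} (our own example, not derived from any particular mapping $f$): for $n=2$ and $P(z)=(z_1^2, z_1z_2)$ one has $|\!| P(z)|\!|^2=|z_1|^4+|z_1|^2|z_2|^2=|z_1|^2|\!| z|\!|^2$, so the identity holds with $A(z,\bar z)=|z_1|^2$. A quick numeric check:

```python
import random

random.seed(1)

def rand_c():
    """Random complex number in the unit square."""
    return complex(random.uniform(-1, 1), random.uniform(-1, 1))

def residual(z1, z2):
    # ||P(z)||^2 with P(z) = (z1^2, z1*z2)
    lhs = abs(z1 * z1) ** 2 + abs(z1 * z2) ** 2
    # A(z, zbar) * ||z||^2 with A = |z1|^2
    rhs = abs(z1) ** 2 * (abs(z1) ** 2 + abs(z2) ** 2)
    return abs(lhs - rhs)

assert all(residual(rand_c(), rand_c()) < 1e-12 for _ in range(100))
```

Here the linear rank of $P$ is $r=2=n$; by Huang's Lemma, quoted in the next subsection, $r<n$ would have forced $A\equiv 0$.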
We shall here be concerned with a very specific property of polynomial maps $\Omega$ that satisfy \eqref{SOSid'}, namely the possible linear ranks that can occur. For a polynomial mapping $P(z)=(P^1(z),\ldots,P^q(z))$, we define its {\it linear rank} to be the dimension of the complex vector space $V_P$ spanned by its components in the polynomial ring $\mathbb C[z]$. The main result in this note is that the HJY Gap Conjecture will follow from the following conjecture regarding the possible linear ranks of polynomial mappings $P(z)$ that satisfy an SOS identity: \begin{conjecture}[SOS Conjecture]\label{SOSConj} Let $P(z)=(P^1(z),\ldots,P^q(z))$ be a polynomial mapping in $z=(z^1,\ldots, z^n)\in \mathbb C^n$, and assume that there exists a Hermitian polynomial $A(z,\bar z)$ such that the SOS identity \begin{equation}\label{SOSidP} |\!| P(z)|\!|^2=A(z,\bar z)|\!| z|\!|^2 \end{equation} holds. If $r$ denotes the linear rank of $P(z)$, then either \begin{equation}\label{rmax} r\geq (\kappa_0+1)n-\frac{\kappa_0(\kappa_0+1)}{2}-1, \end{equation} where $\kappa_0$ is the largest integer $\kappa$ such that \eqref{k0} holds, or there exists an integer $1\leq \kappa\leq \kappa_0<n$ such that \begin{equation}\label{e:ASOSbound} \sum_{i=0}^{\kappa-1} (n-i)=n\kappa-\frac{\kappa(\kappa-1)}{2}\leq r\leq \kappa n. \end{equation} \end{conjecture} \begin{remark} {\rm The integer $\kappa_0$ is also the integer for which the integral intervals in $\kappa$, defined by \eqref{e:ASOSbound}, start overlapping for $\kappa=\kappa_0+1$. } \end{remark} The main result in this note is that this SOS Conjecture implies the HJY Gap Conjecture: \begin{theorem}\label{MainThm} If the SOS Conjecture $\ref{SOSConj}$ holds, then the HJY Gap Conjecture $\ref{HJYConj}$ holds. \end{theorem} The connection between the two conjectures is explained in Section \ref{GaussSec}.
The conclusion of Theorem \ref{MainThm} will then be derived, in Section \ref{Proof}, as a consequence of Theorem 1.1 in \cite{E12}, reproduced here in a special case as Theorem \ref{Thm1.1}. \subsection{Results on the SOS Conjecture; reduction to an alternative SOS Conjecture} While the literature on SOS of polynomials is vast, as mentioned above, there are very few results that have a direct impact on the SOS Conjecture \ref{SOSConj}. To the best of the author's knowledge, the only general result on this conjecture is what is now known as Huang's Lemma, which first appeared in \cite{Huang99}, and which establishes the first gap in the SOS Conjecture: If $r<n$, then $A\equiv 0$ and, hence, $r=0$. Huang used this result in \cite{Huang99} to give a new proof of Faran's result regarding the existence of the first gap in the Gap Conjecture \ref{HJYConj}, and to show that it suffices to assume that the mappings are merely $C^2$-smooth. In another recent paper \cite{GrHa13} by Grundmeier and Halfpap, the SOS Conjecture \ref{SOSConj} was established in the special case where $A(z,\bar z)$ is itself an SOS, i.e., \begin{equation}\label{ASOS} A(z,\bar z)=|\!| F(z)|\!|^2, \end{equation} for some polynomial mapping $F(z)$. The integer $\kappa$ in the conjecture in this case is the linear rank of the polynomial mapping $F(z)$; it is assumed in \cite{GrHa13} that the components of $P(z)$ are homogeneous polynomials, but a simple homogenization argument can remove this assumption (cf. \cite{E15}). The Grundmeier-Halfpap result by itself does not seem to have any direct implications for the Gap Conjecture \ref{HJYConj}, as the needed information regarding the Hermitian polynomial $A(z,\bar z)$ seems difficult to glean from the mapping $f$, but it offers the opportunity to formulate an alternative, arguably simplified version of the SOS conjecture, which would imply Conjecture \ref{SOSConj} as a consequence of the Grundmeier-Halfpap result.
We shall formulate this alternative SOS Conjecture in what follows. We observe that, by standard linear algebra arguments, any Hermitian polynomial $A(z,\bar z)$ can be expressed as a difference of squared norms of polynomial mappings, \begin{equation}\label{DOS} A(z,\bar z)=|\!| F(z)|\!|^2-|\!| G(z)|\!|^2, \end{equation} where $F=(F^1,\ldots,F^{q_+})$ and $G=(G^1,\ldots, G^{q_-})$ are mappings whose components are polynomials in $z$. We may further assume that the complex vector spaces $V_F$, $V_G$ spanned by their respective components have dimensions $q_+$, $q_-$, respectively (i.e., the components of $F$ and $G$ are linearly independent, so their linear ranks are $q_+$, $q_-$, respectively), and that $V_F\cap V_G=\{0\}$. The Grundmeier-Halfpap result proves Conjecture \ref{SOSConj} in the special case where $G=0$. Thus, it suffices to prove the conjecture in the case where $G\neq 0$. In this case, the product $A(z,\bar z)|\!| z|\!|^2$ need of course not be an SOS, so this must be assumed. An optimistic view of the situation in the conjecture would be to hope that the ``gaps'' in linear ranks that are predicted in \eqref{e:ASOSbound} can only occur when $G=0$, and when $G\neq 0$, but $A(z,\bar z)|\!| z|\!|^2$ is still an SOS, the lower bound \eqref{rmax} always holds. The author has reasons to believe that this optimistic view is indeed what happens, though at this point the reasons are too vague to try to explain in this note. In any case, the following ``weak'', or alternative, form of the SOS Conjecture, if true, then implies the SOS Conjecture \ref{SOSConj}, in view of the Grundmeier-Halfpap result. \begin{conjecture}[Weak (Alternative) SOS Conjecture]\label{SOSConj2} Let $P(z)=(P^1(z),\ldots,P^q(z))$ be a polynomial mapping in $z=(z^1,\ldots, z^n)\in \mathbb C^n$, and assume that there exists a Hermitian polynomial $A(z,\bar z)$ of the form \eqref{DOS} such that the SOS identity \eqref{SOSid} holds.
If $r$ denotes the linear rank of $P(z)$ and if the polynomial mapping $G$ in \eqref{DOS} is not identically zero, then \eqref{rmax} holds. \end{conjecture} One of the main difficulties in Conjecture \ref{SOSConj2} when $G\neq 0$ comes from the fact that it seems hard to characterize when $A(z,\bar z)|\!| z|\!|^2$ is in fact an SOS of the form \eqref{SOSid}. The reader is referred to, e.g., \cite{JPDVar04}, \cite{JPD11} for discussions related to this difficulty. We can mention here that a necessary condition for an SOS identity \eqref{SOSid} to hold is that $V_{G\otimes z}\subset V_{F\otimes z}$, where the tensor product of two mappings $F\otimes H$ is defined as the mapping whose components comprise all the products of components $F^jH^k$. From this one can easily see that the linear rank $r=\dim_\mathbb C V_P$ in Conjecture \ref{SOSConj2} must satisfy \begin{equation}\label{specrk} \dim_\mathbb C V_{F\otimes z}/V_{G\otimes z}\leq r\leq \dim_\mathbb C V_{F\otimes z}. \end{equation} The lower bound can only be realized if a maximum number of ``cancellations'' occur. If we consider the 1-parameter family of Hermitian polynomials $$ A_t(z,\bar z):=|\!| F(z)|\!|^2-t|\!| G(z)|\!|^2 $$ for $0\leq t\leq 1$, where $A(z,\bar z)=A_1(z,\bar z)$ satisfies an SOS identity \eqref{SOSid}, then clearly $A_t(z,\bar z)|\!| z|\!|^2$ is an SOS for each $0\leq t\leq 1$ (since $A_t(z,\bar z)=A_1(z,\bar z)+(1-t)|\!| G(z)|\!|^2$). One can show that ``cancellations'' causing strict inequality in the upper bound in \eqref{specrk} do not occur for general $t$ in this range, and the linear rank of $A_t(z,\bar z)|\!| z|\!|^2$ for such $t$ is then $r=\dim_\mathbb C V_{F\otimes z}$. Nevertheless, for the given $A(z,\bar z)=A_1(z,\bar z)$, all we can say seems to be that the estimate \eqref{specrk} holds.
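To make the bound \eqref{specrk} concrete, here is a toy computation (our own example): when every component of a mapping is a monomial, span dimensions reduce to counting distinct product monomials. For $n=2$, $F(z)=(z_1,z_2)$ and $G(z)=(z_1)$ we get $\dim V_{F\otimes z}=3$ and $\dim V_{G\otimes z}=2$, hence $1\leq r\leq 3$.

```python
from itertools import product

def tensor_span_dim(comps, n):
    """dim_C of V_{F (x) z} when every component of F is a monomial; each
    monomial is encoded as an exponent tuple, so the span dimension is just
    the number of distinct product monomials F^j * z^i."""
    return len({tuple(e + (1 if idx == i else 0) for idx, e in enumerate(f))
                for f, i in product(comps, range(n))})

n = 2
F = [(1, 0), (0, 1)]   # F(z) = (z1, z2), as exponent tuples
G = [(1, 0)]           # G(z) = (z1,)
hi = tensor_span_dim(F, n)
# here V_{G(x)z} is contained in V_{F(x)z}, so the quotient dim is the difference
lo = hi - tensor_span_dim(G, n)
print(lo, hi)  # 1 3
```

In this example $A=|\!| F|\!|^2-|\!| G|\!|^2=|z_2|^2$ and $A|\!| z|\!|^2=|\!|(z_1z_2, z_2^2)|\!|^2$ is indeed an SOS, with linear rank $r=2$ inside the predicted range.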
\section{The second fundamental form and the Gauss equation}\label{GaussSec} We shall utilize E. Cartan's differential systems (``moving frames'') approach to CR geometry, as well as S. Webster's theory of pseudohermitian structures. We will follow the set-up and notational conventions introduced in \cite{BEH08} (see also \cite{E15} and \cite{EHZ04}). We shall summarize the notation very briefly here, but refer the reader to \cite{BEH08} (which, on occasion, refers to \cite{EHZ04}) for all details. We shall also from the beginning specialize the general set-up to the special case of CR mappings between spheres, which simplifies matters significantly due to the vanishing of the CR curvature tensor of the sphere. Thus, let $f\colon \mathbb S^n\to \mathbb S^N$ be a smooth CR mapping with $2\leq n\leq N$. For a point $p_0\in \mathbb S^n$, we may choose local adapted (to $f$), admissible (in the sense of Webster \cite{Webster78}) CR coframes $(\theta,\theta^\alpha,\theta^{\bar\alpha})$ on $\mathbb S^n$ near $p_0$ and $(\hat \theta,\hat \theta^A,\hat \theta^{\bar A})$ on $\mathbb S^N$ near $\hat p_0:=f(p_0)$, where the convention in \cite{BEH08} dictates that Greek indices, $\alpha$, etc., range over $\{1,\ldots, n\}$, capital Latin letters, $A$, etc., range over $\{1,\ldots, N\}$, and where barring an index on a previously defined object corresponds to complex conjugation, e.g., $\theta^{\bar\alpha}:=\overline{\theta^\alpha}$. Being adapted means that \begin{equation} f^*\hat\theta=\theta,\quad f^*\hat\theta^\alpha=\theta^\alpha,\quad f^*\hat\theta^a=0, \end{equation} where we have used the further convention that lower case Latin letters $a$, etc., run over the indices $\{n+1,\ldots, N\}$.
Thus, in particular, $f$ is a (local) pseudohermitian mapping between the (local) pseudohermitian structures obtained on $\mathbb S^n$ and $\mathbb S^N$ by fixing the contact forms $\theta$ and $\hat \theta$ near $p_0$ and $\hat p_0$, respectively. We denote by $g_{\alpha\bar\beta}$, $\hat g_{A\bar B}$ the respective Levi forms (which can, and later will, both be assumed to be the identity), and by $\omega_\alpha{}^\beta$, $\hat \omega_A{}^B$ the Tanaka-Webster connection forms. We shall pull all forms and tensors back to $\mathbb S^n$ by $f$, and for convenience of notation, we shall simply denote by $\hat\omega_A{}^B$ the pulled back form $f^*\hat\omega_A{}^B$, etc. Moreover, the fact that the two coframes are adapted implies that we can drop the $\hat{}$ on the pullbacks to $\mathbb S^n$ without any risk of confusion; in other words, we have, e.g., $\omega_\alpha{}^\beta=\hat\omega_\alpha{}^\beta$ and $g_{\alpha\bar\beta}=\hat g_{\alpha\bar\beta}$ (we repeat here that we refer to \cite{BEH08} and \cite{EHZ04} for the details), and of course $\omega_\alpha{}^a$, e.g., can have only one meaning. The collection of 1-forms $(\omega_{\alpha}^{\:\:\:a})$ on $\mathbb S^n$ defines the \emph{second fundamental form} of the mapping $f$, denoted $\Pi_f\colon T^{1,0}\mathbb S^n\times T^{1,0}\mathbb S^n\to T^{1,0}\mathbb S^N/f_*T^{1,0}\mathbb S^n$, as described in \cite{BEH08}. We recall from there that \begin{equation}\label{SFF1} \omega_{\alpha}^{\:\:\:a} = \omega_{\alpha \:\:\: \beta}^{\:\:\:a}\theta^{\beta}, \qquad \omega_{\alpha \:\:\: \beta}^{\:\:\:a} = \omega_{\beta \:\:\: \alpha}^{\:\:\:a}.
\end{equation} If we identify the CR-normal space $T_{f(p)}^{1,0}\mathbb S^N/f_*T_{p}^{1,0}\mathbb S^n$, also denoted by $N_{p}^{1,0}{\mathbb S^n}$, with $\mathbb{C}^{N-n}$, then we may identify $\Pi_f$ with the $\mathbb C^{N-n}$-valued, symmetric $n\times n$ matrix $(\omega_{\alpha}{}^a{}_\beta)_{a=n+1}^{N}$. We shall not be so concerned with the matrix structure of this object, and consider $\Pi_f$ as the collection, indexed by $\alpha, \beta$, of its component vectors $(\omega_{\alpha}{}^a{}_\beta)_{a=n+1}^{N}$ in $\mathbb{C}^{N-n}$. By viewing the second fundamental form as a section over $\mathbb S^n$ of the bundle $(T^*)^{1,0}\mathbb S^n\otimes N^{1,0}{\mathbb S^n} \otimes (T^*)^{1,0}\mathbb S^n$, we may use the pseudohermitian connections on $\mathbb S^n$ and $\mathbb S^N$ to define the covariant differential \begin{equation*} \nabla \omega_{\alpha\:\:\beta}^{\:\:a} = d\omega_{\alpha\:\:\beta}^{\:\:a} - \omega_{\mu\:\:\beta}^{\:\:a}\omega_{\alpha}^{\:\:\mu} + \omega_{\alpha\:\:\beta}^{\:\:b}\omega_{b}^{\:\:a} - \omega_{\alpha\:\:\mu}^{\:\:a}\omega_{\beta}^{\:\:\mu}. \end{equation*} We write $\omega_{\alpha\:\:\beta ; \gamma}^{\:\:a}$ to denote the component in the direction $\theta^{\gamma}$ and define higher order derivatives inductively as: \begin{equation*} \nabla \omega_{\gamma_{1}\:\:\gamma_{2};\gamma_{3}\ldots\gamma_{j}}^{\:\:a} = d\omega_{\gamma_{1}\:\:\gamma_{2};\gamma_{3}\ldots\gamma_{j}}^{\:\:a} + \omega_{\gamma_{1}\:\:\gamma_{2};\gamma_{3}\ldots\gamma_{j}}^{\:\:b}\omega_{b}^{\:\:a} - \sum_{l=1}^{j}\omega_{\gamma_{1}\:\:\gamma_{2};\gamma_{3}\ldots\gamma_{l-1}\mu \gamma_{l+1}\ldots\gamma_{j}}^{\:\:a}\omega_{\gamma_{l}}^{\:\:\mu}.
\end{equation*} A tensor $T_{\alpha_1\ldots\alpha_r\bar\beta_1\ldots\bar\beta_s}{}^{a_1\ldots a_t\bar b_1\ldots\bar b_q}$, with $r,s\geq1$, is called {\em conformally flat} if it is a linear combination of $g_{\alpha_i\bar\beta_j}$ for $i=1,\ldots,r$, $j=1,\ldots,s$, i.e. \begin{equation} T_{\alpha_1\ldots\alpha_r\bar\beta_1\ldots\bar\beta_s}{}^{a_1\ldots a_t\bar b_1\ldots\bar b_q}=\sum_{i=1}^r\sum_{j=1}^s g_{\alpha_i\bar\beta_j} (T_{ij})_{\alpha_1\ldots\widehat{\alpha_i}\ldots \alpha_r\bar\beta_1\ldots\widehat{\bar\beta_j}\ldots\bar\beta_s}{}^{a_1\ldots a_t\bar b_1\ldots\bar b_q}, \end{equation} where e.g.\ $\widehat{\alpha}$ means omission of that factor. (A similar definition can be made for tensors with different orderings of indices.) The following observation gives a motivation for this definition. Let $T_{\alpha_1\ldots\alpha_r\bar\beta_1\ldots\bar\beta_s}{}^{a_1\ldots a_t\bar b_1\ldots\bar b_q}$ be a tensor, symmetric in $\alpha_1,\ldots,\alpha_r$ as well as in $\beta_1,\ldots,\beta_s$, and form the homogeneous vector-valued polynomial of bi-degree $(r,s)$ whose components are given by $$T^{a_1\ldots a_t\bar b_1\ldots\bar b_q}(z,\bar z):= T_{\alpha_1\ldots\alpha_r\bar\beta_1\ldots\bar\beta_s}{}^{a_1\ldots a_t\bar b_1\ldots\bar b_q}z^{\alpha_1}\ldots z^{\alpha_r}\overline{z^{\beta_1}}\ldots\overline{z^{\beta_s}}, $$ where $z=(z^1,\ldots,z^n)$ and the usual summation convention is used. Then, the reader can check that the tensor is conformally flat if and only if all the polynomials $T^{a_1\ldots a_t\bar b_1\ldots\bar b_q}(z,\bar z)$ are divisible by the Hermitian form $g(z,\bar z):=g_{\alpha\bar\beta}z^\alpha\overline{z^\beta}$.
Moreover, and importantly, a conformally flat tensor has the property that its covariant derivatives are again conformally flat, since one of the defining properties of the pseudohermitian connection is that $\nabla g_{\alpha\bar\beta}=0$. We shall use the terminology that $T_{\alpha_1\ldots\alpha_r\bar\beta_1\ldots\bar\beta_s}{}^{a_1\ldots a_t\bar b_1\ldots\bar b_q}\equiv 0 \mod \CFT$ if the tensor is conformally flat. Now, the Gauss equation for the second fundamental form of a CR mapping $f\colon \mathbb S^n\to \mathbb S^N$ takes the following simple form (since the CR curvature tensors of $\mathbb S^n$ and $\mathbb S^N$ vanish): \begin{equation}\label{Gauss} g_{a\bar b}\omega_{\alpha}{}^a{}_{\nu}\omega_{\bar\beta}{}^{\bar b}{}_{\bar\mu}\equiv 0\mod\CFT. \end{equation} We proceed as in the proof of Theorem 5.1 in \cite{BEH08} and take repeated covariant derivatives in $\theta^{\gamma_r}$ and $\theta^{\bar \lambda_s}$ in the Gauss equation. By using the fact that $\omega_\alpha{}^a{}_{\beta;\bar\mu}$ is conformally flat (Lemma 4.1 in \cite{BEH08}) and the commutation formula in Lemma 4.2 in \cite{BEH08}, we obtain the full family of Gauss equations, for any $r,s\geq 2$: \begin{equation}\label{GaussFull} g_{a\bar b}\omega_{\gamma_1}{}^a{}_{\gamma_2;\ldots\gamma_r}\omega_{\bar\lambda_1}{}^{\bar b}{}_{\bar\lambda_2;\ldots\bar\lambda_s}\equiv 0\mod\CFT.
\end{equation} We now consider also the component vectors of higher order derivatives of $\Pi_f$ as elements of $\mathbb{C}^{N-n}\cong N_p^{1,0}\mathbb S^n$ and define an increasing sequence of vector spaces \begin{equation*} E_{2}(p) \subseteq \ldots \subseteq E_{l}(p) \subseteq \ldots \subseteq \mathbb{C}^{N-n}\cong N_p^{1,0}\mathbb S^n \end{equation*} by letting $E_{l}(p)$ be the span of the vectors \begin{equation}\label{Eldef} (\omega_{\gamma_{1}\:\:\gamma_{2};\gamma_{3}\ldots\gamma_{j}}^{\:\:a})_{a=n+1}^{N}, \qquad \forall\, 2 \leq j \leq l, \gamma_{j}\in \{1,\ldots,n\}, \end{equation} evaluated at $p \in \mathbb S^n$. We let $d_l(p)$ be the dimension of $E_l(p)$, and for convenience we set $d_1(p)=0$. As is mentioned in \cite{E12}, it is shown in \cite{EHZ04} that $d_l(p)$ defined in this way coincides with the $d_l(p)$ defined by (1.3) in \cite{E12}. By moving to a nearby point $p_0$ if necessary, we may assume that all $d_l=d_l(p)$ are locally constant near $p_0$ and \begin{equation}\label{dimstab} 0=d_1<d_2<\ldots<d_{l_0}=d_{l_0+1}=\ldots\leq N-n \end{equation} for some $1\leq l_0\leq N-n+1$ (with $l_0=1$ if $d_2=0$ near such generic $p_0$). The mapping $f$ is said to be constantly $l_0$-degenerate of rank $d:=d_{l_0}\leq N-n$ at $p_0$; the codimension $N-n-d$ is called the degeneracy and if the degeneracy is $0$, then the mapping is also said to be $l_0$-nondegenerate.
For each integer $l\geq 2$, we form the $\mathbb C^{N-n}$-valued, homogeneous polynomial $\Omega_{(l)}=(\Omega^1_{(l)},\ldots,\Omega^{N-n}_{(l)})$ in $z=(z^1,\ldots,z^n)\in \mathbb C^n$ as follows: \begin{equation}\label{Omega(l)} \Omega^j_{(l)}(z):= \omega_{\gamma_{1}\:\:\gamma_{2};\gamma_{3}\ldots\gamma_{l}}^{\:\:a}z^{\gamma_1}\ldots z^{\gamma_l}, \quad a=n+j, \end{equation} and we define the {\it total second fundamental polynomial} $\Omega=(\Omega^1,\ldots,\Omega^{N-n})$ of $f$ near $p_0$ as follows: \begin{equation} \Omega^j(z):=\sum_{l=2}^{l_0}\Omega^j_{(l)}(z), \end{equation} where $l_0$ is the integer, defined above, where the dimensions $d_l$ stabilize. The following proposition is easily proved by using the fact that the rank of a matrix equals that of its transpose; the details are left to the reader. \begin{proposition}\label{Omegarank} The rank $d=d_{l_0}$ of the $l_0$-degeneracy is also the linear rank of the polynomial mapping $\Omega(z)$, i.e., the dimension of the vector space in $\mathbb C[z]$ spanned by the polynomials $\Omega^1(z),\ldots, \Omega^{N-n}(z)$. \end{proposition} We now recall, as mentioned above, that we may choose the adapted, admissible CR coframes (near $p_0$ and $\hat p_0=f(p_0)$) in such a way that the Levi forms of $\mathbb S^n$ and $\mathbb S^N$ both equal the identity matrix. Let us now insist on such a choice of coframes. We then notice that the full family of Gauss equations in \eqref{GaussFull} for $r,s\leq l_0$ can be summarized in the following Sum-Of-Squares identity for the total second fundamental polynomial.
\begin{lemma}[Total polynomial Gauss equation]\label{GaussLemma} There exists a Hermitian polynomial $A(z,\bar z)$ such that \begin{equation}\label{GaussLemmaId} |\!|\Omega(z)|\!|^2=A(z,\bar z)|\!| z|\!|^2, \end{equation} where the notation $|\!|\Omega(z)|\!|^2:=\sum_{j=1}^{N-n} |\Omega^j(z)|^2$ introduced in the introduction has been used. \end{lemma} \begin{proof} The proof consists of multiplying the identities \eqref{GaussFull} by $z^{\gamma_1}\ldots z^{\gamma_r}\overline{z^{\lambda_1}\ldots z^{\lambda_s}}$ and summing according to the summation convention. The conformally flat tensors on the right hand sides all contain a factor of $|\!| z|\!|^2$. The proof is then completed by comparing the polynomial identities obtained in this way to the result of expanding the left hand side of \eqref{GaussLemmaId} and collecting terms of a fixed bidegree $(r,s)$. The details are left to the reader. \end{proof} \section{Proof of Theorem \ref{MainThm}}\label{Proof} We shall prove Conjecture \ref{HJYConj} under the assumption that the conclusion of Conjecture \ref{SOSConj} holds. We quote first Theorem 1.1 in \cite{E12}, in the special case of CR mappings $f\colon \mathbb S^n\to \mathbb S^N$: \begin{theorem}[\cite{E12}]\label{Thm1.1} Let $f\colon \mathbb S^n \to \mathbb S^N$ be a smooth CR mapping and the dimensions $d_l(p)$ be as defined in Section $\ref{GaussSec}$. Let $U$ be an open subset of $\mathbb S^n$ on which $f$ is constantly $l_0$-degenerate, and such that $d_l=d_l(p)$, for $2\leq l\leq l_0$, are constant on $U$ and \eqref{dimstab} holds. Assume that there are integers $0\leq k_2, k_3,\ldots,k_{l_0}\leq n-1$, such that: \begin{equation} \begin{aligned}\label{conds0} d_l-d_{l-1} <&\sum_{j=0}^{k_l}(n-j),\quad l=2,\ldots, l_0,\quad (d_1=0)\\ k:=&\sum_{l=2}^{l_0}k_l<n.
\end{aligned} \end{equation} Then $f(\mathbb S^n)$ is contained in a complex affine subspace $A^{n+d+k+1}$ of dimension $n+d+k+1$, where $k$ is defined in \eqref{conds0} and $d:=d_{l_0}$ is the rank of the $l_0$-degeneracy. \end{theorem} \begin{remark} {\rm The integers $k_2,\ldots, k_{l_0}$ become invariants of the mapping $f$ if we require them to be minimal in an obvious way. The invariant $k_2$ was introduced in \cite{Huang03} and called there the geometric rank of $f$. This geometric rank plays an important role in \cite{Huang03}, \cite{HuangJiXu06}, and \cite{HuangJiYin12}. } \end{remark} \begin{proof}[Proof of Theorem $\ref{MainThm}$] We assume now that there is a mapping $f\colon \mathbb S^n \to \mathbb S^N$ with codimension $N-n\in I_\kappa$ for some $\kappa\leq \kappa_0<n$. Thus, we have $$ N-n\leq \sum_{i=0}^{\kappa-1}(n-i)-1. $$ We consider an open subset $U\subset \mathbb S^n$ as in Theorem \ref{Thm1.1}. Since the rank of the $l_0$-degeneracy satisfies $d\leq N-n$, we then have \begin{equation}\label{dest} d\leq \sum_{i=0}^{\kappa-1}(n-i)-1, \end{equation} in Theorem \ref{Thm1.1}. By Proposition \ref{Omegarank}, $d$ is also the linear rank of the total second fundamental polynomial $\Omega(z)$, and by Lemma \ref{GaussLemma}, an SOS identity of the form \eqref{GaussLemmaId} holds. If we now assume that the SOS Conjecture \ref{SOSConj} holds, then \eqref{dest} implies that in fact \begin{equation}\label{SOScons} d\leq (\kappa-1)n. \end{equation} It is also clear from \eqref{dest} that there exist integers $0\leq k_l\leq \kappa-1$ such that the first inequality in \eqref{conds0} holds. We shall choose the $k_l$ minimal, so that in addition we have \begin{equation}\label{klmin} d_l-d_{l-1}\geq \sum_{j=0}^{k_l-1}(n-j), \end{equation} where the right hand side is understood to be $0$ if $k_l=0$. We claim that \begin{equation}\label{kclaim} k:=\sum_{l=2}^{l_0}k_l \leq \kappa-1.
\end{equation} If we can prove this claim, then it follows from Theorem \ref{Thm1.1}, since $\kappa\leq \kappa_0<n$, that $f(\mathbb S^n)$ is contained in a complex affine subspace $A^{N_0+1}$ of dimension $N_0+1$, where $N_0=n+d+k$, and the codimension satisfies, by \eqref{SOScons} and \eqref{kclaim}, $$ N_0-n=d+k\leq (\kappa-1)n+\kappa-1, $$ which is precisely the desired conclusion in the Gap Conjecture \ref{HJYConj}. Thus, we proceed to prove \eqref{kclaim}. Let us denote by $g(j)$ the non-increasing function \begin{equation} g(j)=\left\{ \begin{aligned} n-j,&\quad 0\leq j< n\\ 0,&\quad j\geq n. \end{aligned} \right. \end{equation} Using the fact that we have set $d_1=0$, we can telescope $d$ as follows: \begin{equation} d=(d_{l_0}-d_{l_0-1})+\ldots+(d_2-d_1)=\sum_{l=2}^{l_0}(d_l-d_{l-1}), \end{equation} and deduce from \eqref{klmin} that \begin{equation}\label{dest2} d\geq \sum_{l=2}^{l_0}\sum_{j=0}^{k_l-1}(n-j)=\sum_{l=2}^{l_0}\sum_{j=0}^{k_l-1}g(j). \end{equation} Since $g(j)$ is non-increasing, we can estimate \begin{equation}\label{shift} \sum_{l=2}^{l_0}\sum_{j=0}^{k_l-1}g(j)\geq \sum_{l=2}^{l_0}\sum_{j=0}^{k_l-1}g\left(j+m_l\right), \end{equation} where we have set $m_2=0$ and, for $3\leq l\leq l_0$, \begin{equation} m_l:=\sum_{i=2}^{l-1}k_{i}. \end{equation} Substituting $i=j+m_l$ in \eqref{shift}, we deduce from \eqref{dest2} \begin{equation} d\geq \sum_{l=2}^{l_0}\sum_{i=m_l}^{m_l+k_l-1}g(i)=\sum_{l=2}^{l_0}\sum_{i=m_l}^{m_{l+1}-1}g(i)= \sum_{i=0}^{m_{l_0+1}-1}g(i). \end{equation} Since $m_{l_0+1}=k$, we conclude that \begin{equation} d\geq \sum_{i=0}^{k-1}g(i), \end{equation} and since $k<n$, we also have $g(i)=n-i$ for $i=0,\ldots, k-1$, and therefore we can write \begin{equation} \sum_{i=0}^{k-1}(n-i)\leq d. \end{equation} By comparing this with \eqref{dest}, we conclude that $k<\kappa$, i.e.\ $k\leq \kappa-1$, which establishes the claim \eqref{kclaim}.
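To illustrate the counting argument with concrete numbers (a hypothetical example, not part of the argument itself), take $n=10$, $l_0=3$ and $k_2=k_3=1$, so that $m_2=0$, $m_3=1$ and $k=2$; then \eqref{klmin} and the shift estimate \eqref{shift} give
\[
d\;\geq\; g(0)+g(0)\;=\;20\;\geq\; g(0)+g(1)\;=\;19\;=\;\sum_{i=0}^{k-1}(n-i),
\]
in agreement with the estimate $\sum_{i=0}^{k-1}(n-i)\leq d$ obtained above.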
This completes the proof of Theorem \ref{MainThm}. \end{proof}

\def\cprime{$'$}
\begin{thebibliography}{10}

\bibitem{Alexander74} H.~Alexander.
\newblock Holomorphic mappings from the ball and polydisc.
\newblock {\em Math. Ann.}, 209:249--256, 1974.

\bibitem{BEH08} M.~Salah Baouendi, Peter Ebenfelt, and Xiaojun Huang.
\newblock Super-rigidity for {CR} embeddings of real hypersurfaces into hyperquadrics.
\newblock {\em Adv. Math.}, 219(5):1427--1445, 2008.

\bibitem{CatlinDangelo96} D.~Catlin and J.~D'Angelo.
\newblock A stabilization theorem for {H}ermitian forms and applications to holomorphic mappings.
\newblock {\em Math. Res. Lett.}, 3:149--166, 1996.

\bibitem{CatlinJPD99} David~W. Catlin and John~P. D'Angelo.
\newblock An isometric imbedding theorem for holomorphic bundles.
\newblock {\em Math. Res. Lett.}, 6(1):43--60, 1999.

\bibitem{CS90} J.~A. Cima and T.~J. Suffridge.
\newblock Boundary behavior of rational proper maps.
\newblock {\em Duke Math. J.}, 60(1):135--138, 1990.

\bibitem{JPD11} John~P. D'Angelo.
\newblock Hermitian analogues of {H}ilbert's 17-th problem.
\newblock {\em Adv. Math.}, 226(5):4607--4637, 2011.

\bibitem{JPDLebl09} John~P. D'Angelo and Ji{\v{r}}{\'{\i}} Lebl.
\newblock Complexity results for {CR} mappings between spheres.
\newblock {\em Internat. J. Math.}, 20(2):149--166, 2009.

\bibitem{JPDVar04} John~P. D'Angelo and Dror Varolin.
\newblock Positivity conditions for {H}ermitian symmetric functions.
\newblock {\em Asian J. Math.}, 8(2):215--231, 2004.

\bibitem{E15} Peter Ebenfelt.
\newblock Local holomorphic isometries of a modified projective space into a standard projective space; rational conformal factors.
\newblock {\em Math. Ann.}, to appear.

\bibitem{E12} Peter Ebenfelt.
\newblock Partial rigidity of degenerate {CR} embeddings into spheres.
\newblock {\em Adv. Math.}, 239:72--96, 2013.

\bibitem{EHZ04} Peter Ebenfelt, Xiaojun Huang, and Dmitry Zaitsev.
\newblock Rigidity of {CR}-immersions into spheres.
\newblock {\em Comm. Anal. Geom.}, 12(3):631--670, 2004.

\bibitem{EHZ05} Peter Ebenfelt, Xiaojun Huang, and Dmitry Zaitsev.
\newblock The equivalence problem and rigidity for hypersurfaces embedded into hyperquadrics.
\newblock {\em Amer. J. Math.}, 127(1):169--191, 2005.

\bibitem{Faran86} James~J. Faran.
\newblock The linearity of proper holomorphic maps between balls in the low codimension case.
\newblock {\em J. Differential Geom.}, 24(1):15--17, 1986.

\bibitem{Forstneric89} Franc Forstneri{\v{c}}.
\newblock Extending proper holomorphic mappings of positive codimension.
\newblock {\em Invent. Math.}, 95(1):31--61, 1989.

\bibitem{GrHa13} Dusty Grundmeier and Jennifer Halfpap~Kacmarcik.
\newblock An application of {M}acaulay's estimate to sums of squares problems in several complex variables.
\newblock {\em Proc. Amer. Math. Soc.}, 143(4):1411--1422, 2015.

\bibitem{GruLV14} Dusty Grundmeier, Ji{\v{r}}{\'{\i}} Lebl, and Liz Vivas.
\newblock Bounding the rank of {H}ermitian forms and rigidity for {CR} mappings of hyperquadrics.
\newblock {\em Math. Ann.}, 358(3-4):1059--1089, 2014.

\bibitem{Hamada05} Hidetaka Hamada.
\newblock Rational proper holomorphic maps from {$\bold B^n$} into {$\bold B^{2n}$}.
\newblock {\em Math. Ann.}, 331(3):693--711, 2005.

\bibitem{Huang99} Xiaojun Huang.
\newblock On a linearity problem for proper holomorphic maps between balls in complex spaces of different dimensions.
\newblock {\em J. Differential Geom.}, 51:13--33, 1999.

\bibitem{Huang03} Xiaojun Huang.
\newblock On a semi-rigidity property for holomorphic maps.
\newblock {\em Asian J. Math.}, 7(4):463--492, 2003.

\bibitem{HuangJi01} Xiaojun Huang and Shanyu Ji.
\newblock Mapping $\mathbb{B}^n$ into $\mathbb{B}^{2n-1}$.
\newblock {\em Invent. Math.}, 145:219--250, 2001.
\newblock doi:10.1007/s002220100140.

\bibitem{HuangJiXu06} Xiaojun Huang, Shanyu Ji, and Dekang Xu.
\newblock A new gap phenomenon for proper holomorphic mappings from {$B^n$} into {$B^N$}.
\newblock {\em Math. Res. Lett.}, 13(4):515--529, 2006.

\bibitem{HuangJiYin09} Xiaojun Huang, Shanyu Ji, and Wanke Yin.
\newblock Recent progress on two problems in several complex variables.
\newblock {\em Proceedings of the ICCM 2007, International Press}, Vol. I:563--575, 2009.

\bibitem{HuangJiYin12} Xiaojun Huang, Shanyu Ji, and Wanke Yin.
\newblock On the third gap for proper holomorphic maps between balls.
\newblock {\em Math. Ann.}, 358(1-2):115--142, 2014.

\bibitem{HuangY13} Xiaojun Huang and Yuan Yuan.
\newblock Holomorphic isometry from a {K}\"ahler manifold into a product of complex projective manifolds.
\newblock {\em Geom. Funct. Anal.}, 24(3):854--886, 2014.

\bibitem{Pincuk74} S.~I. Pin\v{c}uk.
\newblock On proper holomorphic mappings of strictly pseudoconvex domains.
\newblock {\em Siberian Math. J.}, 15:909--917, 1974.

\bibitem{Poincare07} H.~Poincar{\'e}.
\newblock Les fonctions analytiques de deux variables et la repr{\'e}sentation conforme.
\newblock {\em Rend. Circ. Mat. Palermo}, 23(2):185--220, 1907.

\bibitem{Quillen68} Daniel~G. Quillen.
\newblock On the representation of {H}ermitian forms as sums of squares.
\newblock {\em Invent. Math.}, 5:237--242, 1968.

\bibitem{Webster78} S.~M. Webster.
\newblock Pseudo-{H}ermitian structures on a real hypersurface.
\newblock {\em J. Differential Geom.}, 13(1):25--41, 1978.

\end{thebibliography}
\end{document}
\begin{document}

\renewcommand{\nomname}{Glossary of Notation}
\renewcommand{\pagedeclaration}[1]{, #1}
\setlength{\nomitemsep}{0pt}

\def\e#1\e{\begin{equation}#1\end{equation}}
\def\ea#1\ea{\begin{align}#1\end{align}}
\def\eq#1{{\rm(\ref{#1})}}

\theoremstyle{plain}
\newtheorem{thm}{Theorem}[section]
\newtheorem{prop}[thm]{Proposition}
\newtheorem{lem}[thm]{Lemma}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{quest}[thm]{Question}
\newtheorem{ass}[thm]{Assumption}
\theoremstyle{definition}
\newtheorem{dfn}[thm]{Definition}
\newtheorem{ex}[thm]{Example}
\newtheorem{rem}[thm]{Remark}
\newtheorem{conj}[thm]{Conjecture}
\numberwithin{equation}{section}

\title{{\bf{Generalized Donaldson--Thomas theory \\ over fields ${\mathbb K} \neq {\mathbb C}$}}}
\author{\smallskip\\ Vittoria Bussi \\
\small{The Mathematical Institute,}\\
\small{Andrew Wiles Building, Radcliffe Observatory Quarter,}\\
\small{Woodstock Road, Oxford, OX1 3LB, U.K.} \\
\small{E-mail: \tt [email protected]}\\
\smallskip}
\date{\small{}}
\maketitle

\begin{abstract}
{\it Generalized Donaldson--Thomas invariants\/} $\bar{DT}{}^\alpha(\tau)$ defined by Joyce and Song \cite{JoSo} are rational numbers which `count' both $\tau$-stable and $\tau$-semistable coherent sheaves with Chern character $\alpha$ on a Calabi--Yau 3-fold $X$, where $\tau$ denotes Gieseker stability for some ample line bundle on $X$.
The $\bar{DT}{}^\alpha(\tau)$ are defined for all classes $\alpha$, and are equal to the classical $DT^\alpha(\tau)$ defined by Thomas \cite{Thom} whenever the latter is defined. They are unchanged under deformations of $X$, and transform by a wall-crossing formula under change of stability condition~$\tau$. Joyce and Song use gauge theory and transcendental complex analytic methods, so their theory of generalized Donaldson--Thomas invariants is valid only in the complex case. This also forces them to put constraints on the Calabi--Yau $3$-folds for which they can define generalized Donaldson--Thomas invariants.
\smallskip

This paper proposes a new algebraic method extending the theory to algebraically closed fields $\mathbb{K}$ of characteristic zero, and partly to triangulated categories and to not necessarily compact Calabi--Yau 3-folds under suitable hypotheses.
\smallskip

It describes the local structure of the moduli stack $\mathfrak{M}$ of (complexes of) coherent sheaves on $X$, showing that an atlas for $\mathfrak{M}$ carries the structure of a $\mathop{\rm GL}(n,\mathbb{K})$-invariant d-critical locus in the sense of \cite{Joyc2}, and may thus be written locally as the zero locus of a regular function defined on an \'etale neighborhood in the tangent space of $\mathfrak{M}$; this is used to deduce identities on the Behrend function $\nu_{\mathfrak{M}}$.
\smallskip

Moreover, when $\mathbb{K} = \mathbb{C}$, \cite[Thm.
4.9]{JoSo} uses the integral Hodge conjecture result of Voisin for Calabi--Yau $3$-folds over $\mathbb{C}$ to show that the numerical Grothendieck group $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is unchanged under deformations of $X$. This is important even for the result that the $\bar{DT}{}^\alpha(\tau)$ for $\alpha \in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ are invariant under deformations of $X$ to make sense. We will provide an algebraic proof of that result, characterizing the numerical Grothendieck group of a Calabi--Yau $3$-fold in terms of a globally constant lattice described using the Picard scheme.
\end{abstract}
\tableofcontents

\section*{Introduction}
\markboth{Introduction}{Introduction}
\addcontentsline{toc}{section}{Introduction}
\label{dt1}
In what follows we summarize some background material on Donaldson--Thomas theory, which allows us to place our problem in context and to state the main result. After that, we outline the contents of the sections. Expert readers can skip this introductory part.
\begin{center}
\textbf{Notations and conventions}
\end{center}
Let $\mathbb{K}$ be an algebraically closed field of characteristic zero. A {\it Calabi--Yau $3$-fold\/} is a smooth projective 3-fold $X$ over $\mathbb{C}$ or $\mathbb{K}$, with trivial canonical bundle\index{canonical bundle} $K_X$ and $H^1(\mathcal{O}_X)=0$. Fix a very ample line bundle $\mathcal{O}_X(1)$ on $X$, and let $\tau$ be Gieseker stability on the abelian category $\mathop{\rm coh}\nolimits(X)$ of coherent sheaves on $X$ with respect to $\mathcal{O}_X(1)$. If $E$ is a coherent sheaf on $X$, then the class $[E] \in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is in effect the Chern character $\mathop{\rm ch}\nolimits(E)$ of $E$ in the Chow ring $A^*(X)_{\mathbb{Q}}$, as in \cite{Fult}.
For a class $\alpha$ in the numerical Grothendieck group $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$, write $\mathcal{M}_{\rm ss}^\alpha(\tau),\mathcal{M}_{\rm st}^\alpha(\tau)$ for the coarse moduli schemes of $\tau$-(semi)stable sheaves $E$ with class $[E]=\alpha$. Then $\mathcal{M}_{\rm ss}^\alpha(\tau)$ is a projective $\mathbb{C}$- or $\mathbb{K}$-scheme whose points correspond to S-equivalence classes of $\tau$-semistable sheaves, and $\mathcal{M}_{\rm st}^\alpha(\tau)$ is an open subscheme of $\mathcal{M}_{\rm ss}^\alpha(\tau)$ whose points correspond to isomorphism classes of $\tau$-stable sheaves. Write $\mathfrak{M}$ for the moduli stack of coherent sheaves $E$ on $X$. It is an Artin $\mathbb{C}$- or $\mathbb{K}$-stack, locally of finite type, with affine geometric stabilizers. For $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$, write $\mathfrak{M}^\alpha$ for the open and closed substack of $E$ with $[E]=\alpha$ in $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$. Write $\mathfrak{M}_{\rm ss}^\alpha(\tau), \mathfrak{M}_{\rm st}^\alpha(\tau)$ for the substacks of $\tau$-(semi)stable sheaves $E$ in class $[E]=\alpha$, which are finite type open substacks of $\mathfrak{M}^\alpha$.
\index{Artin stack!affine geometric stabilizers} \index{Artin stack!locally of finite type}
\nomenclature[M al st]{$\mathcal{M}_{\rm st}^\alpha(\tau)$}{coarse moduli scheme of $\tau$-stable sheaves $E$ with class $[E]=\alpha$}
\nomenclature[M al rss]{$\mathcal{M}_{\rm ss}^\alpha(\tau)$}{coarse moduli scheme of $\tau$-semistable sheaves $E$ with class $[E]=\alpha$}
\index{Gieseker stability}
\nomenclature[Knum]{$K^{\rm num}(\mathop{\rm coh}\nolimits(X))$}{numerical Grothendieck group of the abelian category $\mathop{\rm coh}\nolimits(X)$}
\index{Grothendieck group!numerical}
\begin{center}
\textbf{Historical overview}
\end{center}
\index{Donaldson--Thomas invariants!original $DT^\alpha(\tau)$}\nomenclature[DTa]{$DT^\alpha(\tau)$}{original Donaldson--Thomas invariants defined in \cite{Thom}}
In 1998, Thomas \cite{Thom}, following his proposal with Donaldson \cite{DoTh}, motivated a {\it holomorphic Casson invariant} and defined the {\it Donaldson--Thomas invariants\/} $DT^\alpha(\tau)$, integers `counting' $\tau$-stable coherent sheaves with Chern character $\alpha$ on a Calabi--Yau 3-fold $X$ over $\mathbb{K}$, where $\tau$ denotes Gieseker stability for some ample line bundle on $X$. Mathematically, and in `modern' terms, he found that $\mathcal{M}_{\rm st}^\alpha(\tau)$ is endowed with a symmetric obstruction theory and defined
\begin{equation*}
DT^\alpha(\tau)\quad = \displaystyle \int\limits_{\small{[\mathcal{M}_{\rm st}^\alpha(\tau)]^{\rm vir}}}\!\!\!
1
\end{equation*}
which is the mathematical reflection of the heuristic that views $\mathcal{M}_{\rm st}^\alpha(\tau)$ as the critical locus of the {\it holomorphic Chern--Simons functional} and as the shadow of a deeper `derived' geometry. A crucial result is that the invariants are unchanged under deformations of the underlying geometry of $X$. Finally, we remark that the conventional definition of Thomas \cite{Thom} works only for classes $\alpha$ containing no strictly $\tau$-semistable sheaves; this permits one to work just with schemes rather than stacks, as the stable moduli scheme itself already encodes all the information about the $\mathop{\rm Ext}\nolimits$ groups, and thus about the tangent-obstruction complex of the moduli functor.
\smallskip

In 2005, Behrend \cite{Behr} proved a {\it virtual Gauss--Bonnet theorem}, which in particular yields that Donaldson--Thomas type invariants can be written as a weighted Euler characteristic
$$DT^\alpha(\tau)=\chi\bigl(\mathcal{M}_{\rm st}^\alpha(\tau), \nu_{\mathcal{M}_{\rm st}^\alpha(\tau)}\bigr)$$
of the stable moduli scheme $\mathcal{M}_{\rm st}^\alpha(\tau)$ \nomenclature[M al tau]{$\mathcal{M}_{\rm st}^\alpha(\tau)$}{coarse moduli scheme of $\tau$-stable objects in class $\alpha$} weighted by a constructible function $\nu_{\mathcal{M}_{\rm st}^\alpha(\tau)}$, known in the literature as the {\it Behrend function}.
It depends only on the scheme structure of $\mathcal{M}_{\rm st}^\alpha(\tau)$, and it is convenient to think of it as a multiplicity function. An important moral is that it is better to `count' points in a moduli scheme by the weighted Euler characteristic rather than the unweighted one, as the former often gives answers unchanged under deformations of the underlying geometry. It is worth pointing out that this equation is local and `motivic', and makes sense even for non-proper finite type $\mathbb{K}$-schemes. However, using this formula to generalize the classical picture by defining the Donaldson--Thomas invariants as $\chi\bigl(\mathcal{M}_{\rm ss}^\alpha(\tau), \nu_{\mathcal{M}_{\rm ss}^\alpha(\tau)}\bigr)$ when $\mathcal{M}_{\rm ss}^\alpha(\tau)\ne\mathcal{M}_{\rm st}^\alpha(\tau)$ is not a good idea: when there are strictly $\tau$-semistable sheaves, the moduli scheme $\mathcal{M}_{\rm ss}^\alpha(\tau)$ is no longer a good model, which suggests that schemes are no longer `enough' to extend the theory.
\smallskip

The crucial work of Behrend \cite{Behr} suggests that Donaldson--Thomas invariants can be written as motivic invariants, like those studied by Joyce in \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6}, and so raises the possibility that one can extend the results of \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6} to Donaldson--Thomas invariants by including Behrend functions as weights.
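To make the role of the Behrend function concrete, we record the simplest case, a standard fact from \cite{Behr} stated here only as an illustration: for a smooth moduli scheme the weighted Euler characteristic reduces to a signed ordinary Euler characteristic.

```latex
% Smooth case (illustration): if M is a smooth scheme of dimension n, its
% Behrend function is the constant (-1)^n, so the weighted count is a
% signed ordinary Euler characteristic:
\[
  \nu_M \equiv (-1)^{\dim M},
  \qquad
  \chi\bigl(M,\nu_M\bigr) = (-1)^{\dim M}\,\chi(M).
\]
% For example, M = \mathbb{P}^n gives \chi(M,\nu_M) = (-1)^n (n+1).
```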
\smallskip
\index{Donaldson--Thomas invariants!generalized $\bar{DT}{}^\alpha(\tau)$}
\nomenclature[DTb]{$\bar{DT}{}^\alpha(\tau)$}{generalized Donaldson--Thomas invariants defined in \cite{JoSo}}

Thus, in 2005, Joyce and Song \cite{JoSo} proposed a theory of {\it generalized Donaldson--Thomas invariants\/} $\bar{DT}{}^\alpha(\tau)$. They are rational numbers which `count' both $\tau$-stable and $\tau$-semistable coherent sheaves with Chern character $\alpha$ on a compact Calabi--Yau 3-fold $X$ over $\mathbb{C}$; strictly $\tau$-semistable sheaves must be counted with complicated rational weights. The $\bar{DT}{}^\alpha(\tau)$ are defined for all classes $\alpha$, and are equal to $DT^\alpha(\tau)$ whenever the latter is defined. They are unchanged under deformations of $X$, and transform by a wall-crossing formula under change of stability condition~$\tau$. The theory is also valid for compactly supported coherent sheaves on {\it compactly embeddable} noncompact Calabi--Yau 3-folds in the complex analytic topology.
\smallskip

To prove all this, they study the local structure of the moduli stack\index{moduli stack!local structure} $\mathfrak{M}$ of coherent sheaves on $X$. They first show that $\mathfrak{M}$ is Zariski locally isomorphic to the moduli stack $\mathfrak{Vect}$ of algebraic vector bundles on $X$.
Then they use {\it gauge theory} on complex vector bundles and transcendental complex analytic methods to show that an atlas for $\mathfrak{M}$ may be written locally in the complex analytic topology\index{analytic topology} as $\mathop{\rm Crit}(f)$ for $f:U\rightarrow\mathbb{C}$ a holomorphic function on a complex manifold $U$. They use this to deduce identities on the Behrend function $\nu_{\mathfrak{M}}$ through the Milnor fibre description of Behrend functions. These identities
\begin{equation*}
\nu_{\mathfrak{M}}(E_1\oplus E_2)=(-1)^{\bar\chi([E_1],[E_2])} \nu_{\mathfrak{M}}(E_1)\nu_{\mathfrak{M}}(E_2),
\end{equation*}
\smallskip
\begin{equation*}
\displaystyle \int\limits_{\small{\begin{subarray}{l} [\lambda]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1)):\\ \lambda\; \leftrightarrow\; 0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0\end{subarray}}}\!\!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! \nu_{\mathfrak{M}}(F)\,{\rm d}\chi \quad - \!\!\!\! \!\!\!\! \!\!\!\! \displaystyle \int\limits_{\small{\begin{subarray}{l}[\mu]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2)):\\ \mu\; \leftrightarrow\; 0\rightarrow E_2\rightarrow D\rightarrow E_1\rightarrow 0\end{subarray}}}\!\!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\!
\nu_{\mathfrak{M}}(D)\,{\rm d}\chi \;\; = \;\; (e_{21}-e_{12})\;\; \nu_{\mathfrak{M}}(E_1\oplus E_2),
\end{equation*}
where $e_{21}=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_2,E_1)$ and $e_{12}=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_1,E_2)$ for $E_1,E_2\in\mathop{\rm coh}\nolimits(X)$, are crucial for the whole program of Joyce and Song, which is based on the idea that Behrend's approach should be integrated with Joyce's theory \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6}. As the proof uses gauge theory and transcendental methods, it works only over~$\mathbb{C}$ and forces them to put constraints on the Calabi--Yau 3-folds for which they can define generalized Donaldson--Thomas invariants. Finally, in \cite[\S 4.5]{JoSo}, when $\mathbb{K}=\mathbb{C}$, the Chern character embeds $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ in $H^{\rm even}(X;\mathbb{Q})$, and Voisin's integral Hodge conjecture result \cite{Vois} for Calabi--Yau 3-folds over $\mathbb{C}$ completely characterizes its image. They use this to show that $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is unchanged under deformations of $X$. This is important even for the statement that the $\bar{DT}{}^\alpha(\tau)$ with $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ are invariant under deformations of $X$ to make sense. \index{deformation-invariance}\index{Chern character}
\smallskip

In 2008 and 2010, in two subsequent papers \cite{KoSo1,KoSo}, Kontsevich and Soibelman also studied generalizations of Donaldson--Thomas invariants, both in the direction of motivic and of categorified Donaldson--Thomas invariants.
\smallskip

In \cite{KoSo1}, they proposed a very general version of the theory which, very roughly speaking, can be outlined as follows: supposing for the sake of simplicity that $\mathcal{M}_{\rm st}^\alpha(\tau)={\mathcal{M}}_{\rm ss}^\alpha(\tau)$, their (oversimplified) idea is to define {\it motivic Donaldson--Thomas invariants}
$$DT^\alpha_{\textrm{mot}}=\Upsilon\bigl(\mathcal{M}_{\rm st}^\alpha(\tau),\nu_{\textrm{mot}}\bigr),$$
where $\nu_{\textrm{mot}}$ is a complicated constructible function which we may refer to as the {\it motivic Behrend function} for a general motivic invariant $\Upsilon$. Their construction is closely related to Joyce and Song's, even though they work in a more general context: they consider derived categories of coherent sheaves, Bridgeland stability conditions, and general motivic invariants, whereas Joyce and Song work with abelian categories of coherent sheaves, Gieseker stability, and the Euler characteristic. However, the price of working in this more general context is that most results depend on conjectures (motivic Behrend function identities, existence of orientation data, absence of poles).
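The simplest specialization of this motivic setup, recorded here only as orientation for the reader and not as a statement from \cite{KoSo1}: taking $\Upsilon$ to be the topological Euler characteristic and $\nu_{\textrm{mot}}$ the ordinary Behrend function recovers the weighted Euler characteristic considered above.

```latex
% Specialization (illustration): with Upsilon = chi, the topological Euler
% characteristic, and nu_mot the ordinary Behrend function, the motivic
% invariant reduces to the weighted Euler characteristic:
\[
  DT^\alpha_{\textrm{mot}}
  = \chi\bigl(\mathcal{M}_{\rm st}^\alpha(\tau),
      \nu_{\mathcal{M}_{\rm st}^\alpha(\tau)}\bigr)
  = DT^\alpha(\tau).
\]
```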
In particular, Kontsevich and Soibelman's passages parallel to Joyce and Song's proof of the Behrend function identities \cite[\S 4.4 \& \S 6.3]{KoSo1} work over a field $\mathbb{K}$ of characteristic zero, and say that the formal completion $\hat{\mathfrak{M}}_{[E]}$ of $\mathfrak{M}$ at $[E]$ can be written in terms of $\mathop{\rm Crit}(f)$ for $f$ a formal power series on $\smash{\mathop{\rm Ext}\nolimits^1(E,E)}$, with no convergence criteria. Their analogue, \cite[Conj.\,4]{KoSo1}, concerns the {\it motivic Milnor fibre} of the formal power series $f$. So the Behrend function identities are related to a conjecture of Kontsevich and Soibelman \cite[Conj.\,4]{KoSo1} and its application in \cite[\S 6.3]{KoSo1}, and could probably be deduced from it. However, Joyce and Song's approach \cite{JoSo} is not wholly algebro-geometric -- it uses gauge theory and transcendental complex analytic geometry methods. Therefore this method will not suffice to prove the parallel conjectures of Kontsevich and Soibelman \cite[Conj.\,4]{KoSo1}, which are supposed to hold for general fields $\mathbb{K}$ as well as $\mathbb{C}$, and for general motivic invariants\index{motivic invariant} of algebraic $\mathbb{K}$-schemes as well as for the topological Euler characteristic. Recently, in 2012, Le Quy Thuong \cite{Thuong} provided a proof of this conjecture using some deep technical results from motivic integration.
\smallskip

In \cite{KoSo}, Kontsevich and Soibelman expounded the categorified version of Donaldson--Thomas theory.
To fix ideas, suppose again that $\mathcal{M}_{\rm st}^\alpha(\tau)=\mathcal{M}_{\rm ss}^\alpha(\tau)$. Following Thomas' argument \cite{Thom}, one can heuristically think of $\nu_{\mathcal{M}_{\rm st}^\alpha(\tau)}$ as the Euler characteristic of the {\it perverse sheaf of vanishing cycles} $\cal P$ of the holomorphic Chern--Simons functional. Following the philosophy in which perverse sheaves are categorifications of constructible functions, the hypercohomology
$$\mathbb{H}^*\bigl(\mathcal{M}_{\rm st}^\alpha(\tau);{\cal P}\vert_{\mathcal{M}_{\rm st}^\alpha(\tau)}\bigr)$$
would be a natural cohomology group of $\mathcal{M}_{\rm st}^\alpha(\tau)$ whose Euler characteristic is the Donaldson--Thomas invariant. Thus, the basic idea of Kontsevich and Soibelman's paper is to define a kind of `generalized cohomology' for the moduli stack $\mathfrak{M}$, as a kind of Ringel--Hall algebra.
\smallskip

In 2013, a sequence of five papers \cite{Joyc2,BBJ,BBDJS,BJM,BBBJ} developed the theory of d-critical loci, a new class of geometric objects introduced by Joyce, and used it to apply powerful results of derived algebraic geometry, as in \cite{Toen,Toen2,Toen3,Toen4,ToVe1,ToVe2,PTVV}, to Donaldson--Thomas theory. It is shown there that the moduli stack of (complexes of) coherent sheaves on a Calabi--Yau 3-fold carries the structure of an algebraic d-critical stack and is given locally in the Zariski topology as the critical locus of a regular function.
Moreover, using the notion of {\it orientation data}, these papers construct a natural perverse sheaf and a natural motive on the moduli stacks, thus answering a long-standing question in the problem of categorification. See \S\ref{ourpapers} for a detailed discussion.
\begin{center}
\textbf{The main result and its implications}
\end{center}
Following Joyce and Song's proposal, the aim of this paper is to provide an extension of the theory of generalized Donaldson--Thomas invariants of \cite{JoSo} to algebraically closed fields $\mathbb{K}$ of characteristic zero. Our argument provides the algebraic analogue of \cite[Thm.\,5.5]{JoSo}, \cite[Thm.\,5.11]{JoSo} and \cite[Cor.\,5.28]{JoSo}, which are enough to extend \cite{JoSo} at least to compact Calabi--Yau 3-folds. Unfortunately, to extend the whole project to complexes of sheaves and to compactly supported sheaves on a noncompact quasi-projective Calabi--Yau 3-fold, we would also need further results from derived algebraic geometry which we do not have at present. We hope to come back to this point in future work.
\smallskip

We will show that an atlas for $\mathfrak{M}$ near $[E]\in\mathfrak{M}(\mathbb{K})$ may be written locally in the \'etale topology\index{\'etale topology} as the zero locus ${\rm d} f^{-1}(0)$ of a $G$-invariant regular function $f$ defined on an \'etale neighborhood of $0\in U(\mathbb{K})$ in the affine $\mathbb{K}$-space $\mathop{\rm Ext}\nolimits^1(E,E)$, where $G$ is a maximal torus of $\mathop{\rm Aut}(E)$. \index{reductive group!maximal}
\smallskip

Based on this picture, we give an algebraic proof of the Behrend function identities.
We point out that our approach is in fact valid much more generally, for any stack which is locally a global quotient; we do not use any particular properties of coherent sheaves on Calabi--Yau 3-folds. In the past, the author tried a picture in which the moduli stack of coherent sheaves was locally described as the zero locus of an algebraic almost closed 1-form in the sense of \cite{Behr}, which later turned out to be a wrong direction to follow.
\smallskip

Finally, we will study the deformation invariance properties of $\bar{DT}{}^\alpha(\tau)$ under changes of the underlying geometry of $X$, characterizing a globally constant lattice containing the image under the Chern character of $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$, in which the classes $\alpha$ vary.
\smallskip

The implications are quite exciting and far-reaching. Our algebraic method could lead to the extension of generalized Donaldson--Thomas theory to the derived categorical context. The plan to extend the theory of Joyce and Song \cite{JoSo} from abelian to derived categories starts by reinterpreting the series of papers by Joyce \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7,Joyc.8} in this new general setup. In particular:
\smallskip
\begin{itemize}
\item[$(a)$] Defining configurations in triangulated categories $\mathcal{T}$ requires replacing exact sequences by distinguished triangles.
\smallskip
\item[$(b)$] Constructing moduli stacks of objects and configurations in $\mathcal{T}$. Again, the theory of derived algebraic geometry \cite{Toen,Toen2,Toen3,Toen4,ToVe1,ToVe2,PTVV} can give a satisfactory answer.
\smallskip
\item[$(c)$] Defining stability conditions on triangulated categories can be approached using Bridgeland's results, and their extension by Gorodentscev et al., which combines Bridgeland's idea with Rudakov's definition for abelian categories \cite{Rud}.
Since Joyce's stability conditions \cite{Joyc.3} are based on Rudakov's, the modifications should be straightforward.
\end{itemize}
\begin{itemize}
\item[$(d)$] The `nonfunctoriality of the cone' in triangulated categories means that the triangulated category versions of some operations on configurations are defined up to isomorphism, but not canonically, so that the corresponding diagrams may be commutative, but not Cartesian as in the abelian case. In particular, one loses the associativity of the Ringel--Hall algebra of stack functions, which is a crucial object in the Joyce--Song framework. We expect that the derived Hall algebra approach of To\"en \cite{Toen3} resolves this issue. See also \cite{PL}.
\end{itemize}
\smallskip

We expect that a well-behaved theory of invariants counting $\tau$-semistable objects in triangulated categories, in the style of Joyce's theory, exists, and we hope to come back to it in future work.
\begin{center}
\textbf{Outstanding problems and recent research}
\end{center}
Donaldson--Thomas theory as depicted in this picture is promising, and the literature based on the milestones sketched above \cite{Thom,Behr,JoSo,KoSo1,KoSo} is vast. Although several interesting developments have been achieved, many outstanding problems remain, and a complete final picture overcoming these problems and the related conjectures is still far off.
\smallskip

In 2003, Maulik, Nekrasov, Okounkov and Pandharipande \cite{MNOP1,MNOP2} stated the celebrated {\it MNOP conjecture}, in which Donaldson--Thomas invariants for sheaves of rank one are conjectured to have deep connections with the Gromov--Witten theory of Calabi--Yau 3-folds, but also with Gopakumar--Vafa invariants and Pandharipande--Thomas invariants \cite{PaTh}.
Even if there are some results on this conjectural equivalence of theories of curve counting invariants (Bridgeland \cite{Brid2,Brid3}, Stoppa and Thomas \cite{StTh}, Toda \cite{Toda}), the MNOP conjecture is still unproved. Moreover, very little is known about the `meaning' of higher rank Donaldson--Thomas invariants. In the same work, \cite[Conj.\,1]{MNOP1}, they formulated a conjecture on the values of the virtual count of $\mathop{\rm Hilb}\nolimits^d X$ (Donaldson--Thomas counting of dimension zero sheaves), which has now been established; different proofs are given by Behrend and Fantechi \cite{BeFa2}, Levine and Pandharipande~\cite{LePa}, and Li \cite{Li}.
\smallskip

In \cite[Questions 4.18, 5.7, 5.10, 5.12, 6.29]{JoSo}, Joyce and Song pointed out some outstanding problems of their theory and suggested new methods to deal with them. Some of those questions have been answered with new methods, as in \cite{Joyc2,BBJ,BBDJS,BJM,BBBJ}. However, the main limitation of Joyce and Song's approach is that they work using gauge theory and transcendental complex analytic methods, which means the theory is valid only over the complex numbers and puts restrictions on the Calabi--Yau 3-folds for which they can define the theory; moreover, they deal with abelian rather than triangulated categories. This limits the usefulness of their theory since, for many applications, especially to physics, one needs triangulated categories.
Moreover, in \cite[\S 6]{JoSo}, Joyce and Song, following Kontsevich and Soibelman \cite[\S 2.5 \& \S 7.1]{KoSo1}, and using ideas similar to the Aspinwall--Morrison computation for a Calabi--Yau 3-fold, defined the {\it BPS invariants} $\hat{DT}{}^\alpha(\tau),$ also generalizations of Donaldson--Thomas invariants, and conjectured them to be integers for certain $\tau.$ There is some evidence for this \cite[\S 6]{JoSo}, but the problem is still open. Finally, in \cite[\S 7]{JoSo}, they extended their generalized Donaldson--Thomas theory to abelian categories of representations of a quiver $Q$ with relations coming from a superpotential on $Q$, and connected their ideas with the already existing literature on noncommutative Donaldson--Thomas invariants and on invariants counting quiver representations (to cite just some names: Bryan, Ginzburg, Hanany, Nagao, Nakajima, Reineke, Szendr\H oi, and Young). This is an active area of research in representation theory. \smallskip There is a large and active area of research which aims to extend Donaldson--Thomas theory to the derived categorical framework. For a long time there was the problem of proving that the moduli space of complexes of sheaves can be presented as a critical locus, as the moduli space of sheaves can. In 2006, Behrend and Getzler \cite{BeGe} announced a development in this direction, to which various papers in the literature refer (e.g. Toda \cite{Toda,Toda2}), but the paper has not yet been published. It says that the formal potential function $f$ for the cyclic dg Lie algebra $L$ coming from the Schur objects in the derived category of coherent sheaves on a Calabi--Yau 3-fold can be made convergent over a local neighbourhood of the origin.
In \cite[Conj.\,1.2]{Toda2}, Toda formulates the derived categorical analogue of \cite[Thm.\,5.5]{JoSo}, and Hua announced in \cite{Hua} a joint work with Behrend \cite{BeHua} about the construction of the derived moduli space of complexes of coherent sheaves. In \cite{Hua}, Hua gives a construction of global Chern--Simons functions for toric Calabi--Yau stacks of dimension three using strong exceptional collections. The moduli spaces of sheaves on such stacks can be identified with critical loci of these functions. Still in the derived categorical direction, Chang and Li \cite{ChangLi} recently defined a semi-perfect obstruction theory and used it to construct virtual cycles of moduli of derived objects on Calabi--Yau 3-folds. In another paper, with Kiem \cite{KiLi2}, Li studied stable objects in the derived category using a `$\mathbb{C}^*$-intrinsic blowup' strategy. Finally, in 2013, the author et al.\ in \cite{BBBJ} completely settled the issue of presenting the moduli stack as a critical locus, and this now opens the question of extending the whole project of \cite{JoSo} to triangulated categories, the main difficulty of which would be to generalize the wall-crossing formulae from abelian to triangulated categories, in the style of Joyce \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7,Joyc.8}. \smallskip This discussion highlights the fact that behind this theory there is some deeper `derived' geometry: as the deformation theory of coherent sheaves concerns the $\mathop{\rm Ext}\nolimits$ groups, one way to talk about different geometric structures on moduli spaces is to ask what information they store about the $\mathop{\rm Ext}\nolimits$ groups.
For instance, in Kontsevich and Soibelman's context, an interesting problem, among others, is to find what kind of geometric structure on moduli spaces of coherent sheaves on a Calabi--Yau 3-fold $X$ would be the most appropriate for doing motivic and categorified Donaldson--Thomas theory. As a consequence, a natural question is whether derived algebraic geometry has more to say about a theory of Donaldson--Thomas invariants for Calabi--Yau $m$-folds for $m>3,$ and what is the most suitable geometric structure for developing the theory, see Corollary \ref{da5cor2}. \smallskip Finally, due to both its many unproved conjectures and its exciting results, Kontsevich and Soibelman's motivic and categorified theory brings to life a lively area of research (to cite just some investigators: Behrend, Bryan, Davison, Dimca, Mozgovoy, Nagao and Szendr\H oi). In the present work, we will not discuss this area much further, but we will come back to Kontsevich and Soibelman's theory later. \begin{center} \textbf{Outline of the paper} \end{center} The paper begins with a section of background material on obstruction theories and the conventional definition of Donaldson--Thomas theory, Behrend functions and Behrend's approach to Donaldson--Thomas theory and, finally, Joyce and Song's and Kontsevich and Soibelman's generalizations of Donaldson--Thomas theory. This mainly aims to provide a gentle introduction to Donaldson--Thomas theory, and more specifically to Joyce's theory and the scenery in which the following sections take place. \medskip Subsection \ref{dt2} will briefly recall material from \cite{BeFa}, \cite{LiTi} and then \cite{Thom}. This should provide a general picture of obstruction theories and the classical Donaldson--Thomas invariants.
To say that a scheme $X$ has an {\it obstruction theory} \index{obstruction theory} means, very roughly speaking, that one is given locally on $X$ an equivalence class of morphisms of vector bundles such that at each point the kernel of the induced linear map of vector spaces is the tangent space to $X$, and the cokernel is a space of obstructions. Following Donaldson and Thomas \cite[\S 3]{DoTh}, Thomas \cite{Thom} motivates a holomorphic Casson invariant \index{holomorphic Casson invariant} counting bundles on a Calabi--Yau 3-fold. He develops the deformation theory necessary to obtain the virtual moduli cycles in moduli spaces of stable sheaves whose higher obstruction groups vanish, which allows him to define the holomorphic Casson invariant of a Calabi--Yau 3-fold $X$ and prove it is deformation invariant. Heuristically, the Donaldson--Thomas moduli space is the critical set of the holomorphic Chern--Simons functional \index{holomorphic Chern--Simons functional} and the Donaldson--Thomas invariant is a holomorphic analogue of the Casson invariant. \medskip Subsection \ref{dt3} provides a more eclectic presentation of the Behrend function\index{Behrend function}. The first part will review the microlocal approach to defining it, with a discussion of the attempt to categorify Donaldson--Thomas theory. In particular, the section describes the bridge between perverse sheaves and vanishing cycles on one hand, and Milnor fibres and Behrend functions on the other. Thus, if $\mathfrak{M}$ is the Donaldson--Thomas moduli space of stable sheaves, one can, heuristically, think of $\nu_{\mathfrak{M}}$ as the Euler characteristic of the perverse sheaf of vanishing cycles of the holomorphic Chern--Simons functional.
Following this philosophy, in which perverse sheaves are categorifications of constructible functions, the section outlines the categorification program for Donaldson--Thomas theory. Then, in the second part, the Euler characteristic weighted by the Behrend function is compared to the unweighted Euler characteristic, motivating the necessity of introducing the Behrend function as a multiplicity function. Finally, some properties are listed, in particular Behrend's approach to the Donaldson--Thomas invariants as weighted Euler characteristics and the formula, in the complex setting, for the Behrend function through {\it linking numbers}, which guarantees a more useful expression also in the case where it is not known whether a scheme admitting a symmetric obstruction theory can locally be written as the critical locus of a regular function on a smooth scheme. This is done by introducing the definition of almost closed $1$-forms. We point out that Pandharipande and Thomas \cite{PaTh} give examples which are zeroes of almost closed $1$-forms, but are not locally critical loci, and this is the main indication that almost closed $1$-forms are not `enough' to develop our whole program. \medskip Subsection \ref{dt4} combines some results of Joyce's series of papers \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6} with Behrend's approach to Donaldson--Thomas theory and describes how Joyce and Song developed the theory of generalized Donaldson--Thomas invariants in \cite{JoSo}. The idea behind the entire project is that one should insert the Behrend function $\nu_{\mathfrak{M}}$ of the moduli stack $\mathfrak{M}$ of coherent sheaves as a weight in Joyce's program. A good introduction to the book is provided by Joyce in \cite{Joyc.8}. Then, a concluding remark presents a sketch of Kontsevich and Soibelman's generalization of Donaldson--Thomas theory.
As the present paper concentrates mainly on Joyce and Song's approach, the remark focuses on analogies and differences between the two theories rather than going into a detailed explanation of Kontsevich and Soibelman's program, both because the latter is beyond the author's competence and because it is not directly involved in the results presented here. \medskip Sections \ref{dcr}--\ref{ourpapers} briefly present the main applications in Donaldson--Thomas theory coming from the vast project developed in the series of papers \cite{Joyc2,BBJ,BBDJS,BJM,BBBJ}. We first summarize the theory of d-critical schemes and stacks introduced by Joyce \cite{Joyc2}. They form a new class of spaces, which should be regarded as classical truncations of the $-1$-shifted symplectic derived schemes of \cite{PTVV}. They are simpler than their derived analogues. In \cite{BBJ}, we prove a Darboux theorem for derived schemes with symplectic forms of degree $k<0$, in the sense of Pantev, To\"en, Vaqui\'e and Vezzosi \cite{PTVV}. More precisely, we show that a derived scheme ${\bs X}$ with symplectic form $\tilde\omega$ of degree $k$ is locally equivalent to $(\mathop{\boldsymbol{\rm Spec}}\nolimits A,\omega)$ for $\mathop{\boldsymbol{\rm Spec}}\nolimits A$ an affine derived scheme in which the cdga $A$ has Darboux-like coordinates with respect to which the symplectic form $\omega$ is standard, and in which the differential in $A$ is given by a Poisson bracket with a Hamiltonian function $H$ of degree $k+1$. When $k=-1$, this implies that a $-1$-shifted symplectic derived scheme $({\bs X},\tilde\omega)$ is Zariski locally equivalent to the derived critical locus $\boldsymbol\mathop{\rm Crit}(H)$ of a regular function $H:U\rightarrow\mathbb{A}^1$ on a smooth scheme $U$. We use this to show that the classical scheme $X=t_0({\bs X})$ has the structure of an {\it algebraic d-critical locus}, in the sense of Joyce~\cite{Joyc2}.
In the sequels \cite{BBBJ,BBDJS,BJM} we extend these results to (derived) Artin stacks, and discuss applications to categorified and motivic Donaldson--Thomas theory of Calabi--Yau 3-folds. \medskip Section \ref{main.1} states our main results, including the description of the local structure of the moduli stack of coherent sheaves on a Calabi--Yau 3-fold, the Behrend function identities and the deformation invariance of the theory. The section explains why and where Joyce and Song use the restriction $\mathbb{K}=\mathbb{C}$ in \cite{JoSo} and how our results overcome this restriction: \S\ref{dt.1} provides algebraic analogues of \cite[Thm.~5.5]{JoSo} and \cite[Thm.~5.11]{JoSo}. Finally, \S\ref{def} provides the analogue of \cite[Cor.~5.28]{JoSo}, which yields the deformation invariance over $\mathbb{K}$ of the generalized Donaldson--Thomas invariants $\bar{DT}{}^\alpha(\tau)$, defined for classes $\alpha$ varying in a deformation invariant lattice $\Lambda_X$, described below, into which the numerical Grothendieck group injects through the Chern character map. The section culminates in Theorem \ref{mainthm}, which summarizes all these ideas.
\medskip Subsection \ref{dt.1} proves the Behrend function identities \index{Behrend function!Behrend identities} above using the existence of a $T$-equivariant d-critical chart in the sense of \cite{Joyc2} for each given point $E$ of $\mathfrak{M},$ where $T\subset G$ is a maximal torus of $G=\mathop{\rm Aut}(E).$ This gives the local description of the stack as the critical locus of a $T$-invariant regular function $f$ defined on a smooth scheme $U\subset \mathop{\rm Ext}\nolimits^1(E,E).$ This method is valid for every stack which is locally a global quotient, and in particular it provides the required local description of the moduli stack (Theorem \ref{dt5thm2}). \index{moduli stack!local structure} Note that we would not actually need the assumption of the local quotient structure if we wanted to restrict just to sheaves on Calabi--Yau 3-folds, as it would follow from the standard method for constructing coarse moduli schemes of semistable coherent sheaves, as in Huybrechts and Lehn \cite{HuLe2}.
More precisely, one can find a `good' local atlas for $\mathfrak{M}$ which is a $G$-invariant, locally closed $\mathbb{K}$-subscheme of Grothendieck's Quot scheme\index{Quot scheme} $\mathop{\rm Quot}\nolimits_X\bigl(\mathbb{K}^{P(n)}\otimes\mathcal{O}_X(-n),P\bigr)$, explained in \cite[\S 2.2]{HuLe2}, which parametrizes quotients $\mathbb{K}^{P(n)}\otimes\mathcal{O}_X(-n)\allowbreak\twoheadrightarrow E'$, where $E'$ has Hilbert polynomial $P,$ and which is acted on by the $\mathbb{K}$-group $\mathop{\rm GL}(P(n),\mathbb{K}).$ From \cite{JoSo} it turns out that the proof of the first Behrend identity reduces to an identity between the Behrend function of the zero locus of ${\rm d} f$, which is a $\mathbb{K}^*$-scheme, and the Behrend function of the fixed part of this zero locus, that is, $$ \nu_{{\rm d} f^{-1}(0)}(p)=(-1)^{\dim(T_p\, {\rm d} f^{-1}(0))-\dim(T_p\, ({\rm d} f^{-1}(0))^T)}\,\nu_{({\rm d} f^{-1}(0))^T}(p), $$ where $p$ is a point in the $\mathbb{K}^*$-fixed point locus $({\rm d} f^{-1}(0))^T.$ This relation generalizes the result in \cite{BeFa2} to the case where $p$ is not necessarily an isolated fixed point of the action and $\mathbb{K}$ is a general algebraically closed field of characteristic zero. This argument is a different approach from the one suggested in the work of Li--Qin \cite{LiQin}, where they use some properties of Thom classes of vector bundles.
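As a toy illustration of this identity (our own elementary example, not taken from the sources above), consider $U=\mathbb{A}^2$ with $f(x,y)=x^2$ and $\mathbb{K}^*$ acting with weights $(0,1)$, so that $f$ is invariant:

```latex
% Toy check of the fixed-point identity for f(x,y) = x^2 on U = A^2,
% with K^* acting by t.(x,y) = (x, ty), so f is invariant.
\begin{align*}
Z &:= {\rm d} f^{-1}(0) = \{2x = 0\} = \{x = 0\},
   &&\text{the $y$-axis, smooth of dimension $1$,}\\
Z^T &= Z \cap \{y = 0\} = \{(0,0)\},
   &&\text{a reduced point.}
\end{align*}
% Since Z and Z^T are smooth, their Behrend functions are the signs (-1)^{dim}:
\[
\nu_Z \equiv (-1)^1 = -1, \qquad \nu_{Z^T} \equiv (-1)^0 = 1,
\]
% and at p = (0,0) one has dim T_p Z - dim T_p Z^T = 1 - 0 = 1, so indeed
\[
\nu_Z(p) \;=\; -1 \;=\; (-1)^{1}\cdot\nu_{Z^T}(p).
\]
```

Here the sign in the exponent is genuinely nontrivial, unlike in the even simpler invariant example $f(x,y)=xy$ with weights $(1,-1)$, where both sides equal $1$.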
The first Behrend function identity over algebraically closed fields $\mathbb{K}$ of characteristic zero follows from a trick in the argument of the proof of the second Behrend function identity, which is proved directly over $\mathbb{K},$ and is based on Theorem \ref{blowup}, the algebraic version of \cite[Thm.~4.11]{JoSo}. \medskip Subsection \ref{def} shows that it is possible to extend \cite[Cor.~5.28]{JoSo} on the deformation invariance of the generalized Donaldson--Thomas invariants in the compact case to algebraically closed fields $\mathbb{K}$ of characteristic zero. First of all, using existence results of Grothendieck and Artin, and smoothness and properness properties of the {\it relative Picard scheme} in a family of Calabi--Yau 3-folds, one proves that the Picard groups form a local system. Moreover, it is a local system with finite monodromy, so it can be trivialized after passing to a finite \'etale cover of the base scheme, as formulated in the Theorem which is the algebraic generalization of \cite[Thm.~4.21]{JoSo}, and which studies the monodromy of the Picard scheme in a family instead of the numerical Grothendieck group. Finally, Theorem \ref{definv}, a substitute for \cite[Thm.~4.19]{JoSo}, which does not need the integral Hodge conjecture result of \cite{Vois} for Calabi--Yau 3-folds and which is valid over $\mathbb{K},$ characterizes the numerical Grothendieck group of a Calabi--Yau 3-fold in terms of a globally constant lattice described using the Picard scheme: \begin{equation*} \Lambda_X=\textstyle\bigl\{ (\lambda_0,\lambda_1,\lambda_2,\lambda_3) \textrm{ where } \lambda_0,\lambda_3\in\mathbb{Q}, \; \lambda_1\in\mathop{\rm Pic}\nolimits(X)\otimes_{\mathbb{Z}}\mathbb{Q}, \; \lambda_2\in \mathop{\rm Hom}\nolimits(\mathop{\rm Pic}\nolimits(X),\mathbb{Q}) \textrm{ such that } \end{equation*} \begin{equation*} \lambda_0\in \mathbb{Z},\;\> \lambda_1\in \mathop{\rm Pic}\nolimits(X)/_{\textrm{torsion}}, \; \lambda_2-\tfrac{1}{2}\lambda_1^2\in \mathop{\rm Hom}\nolimits(\mathop{\rm Pic}\nolimits(X),\mathbb{Z}),\;\> \lambda_3+\tfrac{1}{12}\lambda_1 c_2(TX)\in \mathbb{Z}\bigr\}, \end{equation*} where $\lambda_1^2$ is defined as the map $\alpha\in\mathop{\rm Pic}\nolimits(X)\mapsto \frac{1}{2}c_1(\lambda_1)\cdot c_1(\lambda_1)\cdot c_1(\alpha)\in A^3(X)_{\mathbb{Q}}\cong \mathbb{Q},$ and $\frac{1}{12}\lambda_1 c_2(TX)$ is defined as $\frac{1}{12}c_1(\lambda_1)\cdot c_2(TX)\in A^3(X)_{\mathbb{Q}}\cong\mathbb{Q}.$ Theorem \ref{definv} proves that $\Lambda_X$ is deformation invariant and that the Chern character gives an injective morphism $\mathop{\rm ch}\nolimits:K^{\rm num}(\mathop{\rm coh}\nolimits(X))\hookrightarrow\Lambda_X$.
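As a sanity check on the integrality conditions (a standard Riemann--Roch computation, added here for illustration), one can verify that the Chern character of a line bundle $L$ on $X$ lands in such a lattice:

```latex
% On a Calabi-Yau 3-fold c_1(TX) = 0, so the Todd class is
%   td(X) = 1 + c_2(TX)/12  (the degree 6 term c_1 c_2 / 24 vanishes).
% For a line bundle L, write
\[
\mathop{\rm ch}\nolimits(L)
 = \bigl(1,\; c_1(L),\; \tfrac12 c_1(L)^2,\; \tfrac16 c_1(L)^3\bigr)
 =: (\lambda_0,\lambda_1,\lambda_2,\lambda_3).
\]
% Then \lambda_0 = 1 and \lambda_1 = c_1(L) are integral, the pairing
% \lambda_2 matches (1/2)\lambda_1^2 exactly, and by Hirzebruch-Riemann-Roch
\[
\lambda_3 + \tfrac{1}{12}\,\lambda_1 c_2(TX)
 \;=\; \tfrac{1}{6}\,c_1(L)^3 + \tfrac{1}{12}\,c_1(L)\cdot c_2(TX)
 \;=\; \chi(L) \;\in\; \mathbb{Z}.
\]
```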
Our $\bar{DT}{}^\alpha(\tau)$ will be defined for classes $\alpha\in\Lambda_X.$ \medskip Section \ref{dt7} sketches some implications of the theory and proposes new ideas for further research, in particular in the direction of the derived categorical framework, trying to establish a theory of generalized Donaldson--Thomas invariants for objects in the derived category of coherent sheaves, and for not necessarily compact Calabi--Yau 3-folds. \noindent{\bf Acknowledgements.} I would like to thank Tom Bridgeland, Daniel Huybrechts, Frances Kirwan, Jun Li, Balazs Szendr\H oi, Richard Thomas and Bertrand To\"en for useful discussions, and especially my supervisor Dominic Joyce for his continuous support, for many enlightening suggestions and valuable discussions. This research is part of my D.Phil.\ project funded by an EPSRC Studentship. \section{Donaldson--Thomas theory: background material} This section should be read as the background picture into which the subsequent sections fit. The competent reader can skip directly to \S\ref{main.1}. \subsection{Obstruction theories and Donaldson--Thomas type invariants} \label{dt2} This section will briefly recall material from \cite{BeFa}, \cite{LiTi} and then \cite{Thom}, which provides both important notions used in the sequel and a hopefully interesting picture of Donaldson--Thomas theory. \subsubsection{Obstruction theories} \label{dt2.1} \index{obstruction theory|(} Suppose that $X$ is a subscheme of a smooth $n$-dimensional scheme $M,$ cut out by a section $s$ of a rank $r$ vector bundle $E\rightarrow M.$ Then the {\it expected dimension}, or virtual dimension, of $X$ is $n-r,$ the dimension it would have if the section $s$ were transverse.
If it is not transverse, one wants to produce a correct $(n-r)$-cycle on $X$ anyway. As the section $s$ induces a cone in $E_{|_X},$ one may then intersect this cone with the zero section of $E$ over $X$ to get a cycle of the expected dimension on $X.$ The key observation is that one works entirely on $X$ and not in the ambient scheme $M.$ The deformation theory of the moduli problem is often encoded by the infinitesimal version of $s:M\rightarrow E$ on $X,$ namely the linearization of $s,$ yielding the following exact sequence: \begin{equation*} \xymatrix@C=20pt@R=10pt{0\ar[r] & TX \ar[r] & TM_{|_X} \ar[r]^{{\rm d} s} & E_{|_X} \ar[r] & Ob \ar[r] & 0, } \end{equation*} for some cokernel $Ob,$ which in the moduli problem becomes the {\it obstruction sheaf}.\index{obstruction sheaf} \smallskip Moduli spaces in algebraic geometry often have an expected dimension \index{virtual dimension} at each point, which is a lower bound for the dimension at that point. Sometimes it may not coincide with the actual dimension of the moduli space, and sometimes it is not possible to get a space of the expected dimension. When one has a moduli space $X$, one obtains {\it numerical invariants}\index{numerical invariants} by integrating certain cohomology classes over the virtual moduli cycle, a class of the expected dimension in its Chow ring. \smallskip One example is the moduli space of torsion-free, semistable vector bundles on a surface, which yields {\it Donaldson theory} \index{Donaldson invariants} and provides a set of differential invariants of 4-manifolds.
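Returning to the section picture above, a minimal example (ours, chosen for illustration) where the expected and actual dimensions disagree is the identically zero section:

```latex
% Take M = P^1 and E = T P^1 = O_{P^1}(2), so n = 1, r = 1, and let s = 0.
% Then X = Z(s) = P^1 has dimension 1, while the expected dimension is
% n - r = 0. The cone induced by s is the zero section of E|_X, and
% intersecting it with the zero section gives the Euler class:
\[
[X]^{\rm vir} \;=\; e(E)\cap[X]
  \;=\; c_1\bigl(\mathcal{O}_{\mathbb{P}^1}(2)\bigr)\cap[\mathbb{P}^1],
\qquad \deg\,[X]^{\rm vir} \;=\; 2,
\]
% matching the transverse count: a generic section of T P^1 vanishes at
% 2 points. Since ds = 0 here, the exact sequence above collapses to
% TX = TM|_X and Ob = E|_X = O(2).
```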
Another example is the moduli space of stable maps from curves of genus $g$ to a fixed projective variety, which yields the {\it Gromov--Witten invariants}\index{Gromov--Witten invariants}, a kind of generalization of the classical enumerative invariants which count the number of algebraic curves with appropriate constraints in a variety. In both cases, these invariants are intersection theories on the moduli spaces, respectively, of vector bundles over the surface, and of stable maps from curves to a variety. The fundamental class is the core of an intersection theory. However, for Gromov--Witten invariants for example, one cannot take the fundamental class of the whole moduli space directly. The virtual moduli cycle, roughly speaking, plays the role of the fundamental class in an appropriate ``good'' intersection theory. \smallskip \index{cycle!of correct dimension}\index{excess intersection theory} A nice picture to start with is the following situation: when the expected dimension does not coincide with the actual dimension of the moduli space, one may view this as if the moduli space were a subspace of an `ambient' space cut out by a set of `equations' whose vanishing loci do not meet transversely. Such a situation is well understood in the following setting, described in the Introduction of \cite{LiTi}: let $X$, $Y$ and $W$ be smooth varieties with embeddings $X,Y\rightarrow W$, and let $Z=X\times_W Y.$ Then $[X]\cdot [Y]$, the intersection of the cycles $[X]$ and $[Y]$, is a cycle in $A_* W$ of dimension $\dim X+\dim Y-\dim W$. When $\dim Z=\dim X+\dim Y-\dim W$, then $[Z]=[X]\cdot [Y]$. Otherwise, $[Z]$ may not be $[X]\cdot[Y]$. {\it Excess intersection theory} gives that one can find a cycle in $A_* Z$ which represents $[X]\cdot[Y]$.
One may view this cycle as the virtual cycle of $Z$ representing $[X]\cdot[Y]$. Following Fulton--MacPherson's normal cone \index{normal cone!Fulton--MacPherson's construction} construction (in \cite{Fult,FuMacP1,FuMacP2}), this cycle is the image of the cycle of the normal cone to $Z$ in $X$, denoted by $C_{Z/X}$, under the Gysin homomorphism $s^*: A_*( C_{Y/W}\times _YZ) \rightarrow A_* Z$, where $s: Z\rightarrow C_{Y/W}\times_YZ$ is the zero section. This theory does not apply directly to moduli schemes since, except for some isolated cases, it is impossible to find pairs $X\rightarrow W$ and $Y\rightarrow W$ with $X,Y$ and $W$ smooth so that $X\times_WY$ is the moduli space and $[X]\cdot[Y]$ so defined is the virtual moduli cycle one needs. \smallskip Behrend and Fantechi \cite{BeFa} and Li and Tian \cite{LiTi} give two different approaches to dealing with this. Very briefly, the strategy of Li and Tian's approach in \cite{LiTi} is that, rather than trying to find an embedding of the moduli space into some ambient space, they construct a cone in a vector bundle directly, say $C\subset V$, over the moduli space and then define the virtual moduli cycle to be $s^*[C]$, where $s$ is the zero section of $V$. The pair $C\subset V$ is constructed based on a choice of the {\it tangent-obstruction complex} \index{tangent-obstruction complex} of the moduli functor. The construction commutes with Gysin maps and carries a good invariance property. \smallskip In \cite{BeFa} Behrend and Fantechi introduce the notion of {\it cone stacks} \index{cone!cone stack} over a scheme $X$ (or more generally over a Deligne--Mumford stack). These are \index{Deligne--Mumford stack} Artin stacks which are locally the quotient of a cone by a vector bundle acting on it.
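Returning to the excess intersection picture above, a standard instance (recalled here for illustration, not a construction of \cite{LiTi}) is the self-intersection of a smooth curve in a surface, where $Z$ has excess dimension $1$:

```latex
% Let W be a smooth surface and X = Y a smooth curve in W, so that
% Z = X x_W Y = X has dimension 1, while dim X + dim Y - dim W = 0.
% Here C_{Y/W} is the normal bundle N_{X/W}, and the Gysin map of its
% zero section yields the excess intersection class
\[
[X]\cdot[X] \;=\; s^*\bigl[\,N_{X/W}\,\bigr]
 \;=\; e\bigl(N_{X/W}\bigr)\cap[X] \;\in\; A_0(X),
\qquad \deg\bigl([X]\cdot[X]\bigr) \;=\; X^2,
\]
% the self-intersection number. For a line L in P^2 one gets
% N_{L/P^2} = O_L(1), of degree 1, agreeing with the transverse count:
% two distinct lines meet in exactly one point.
```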
They call a cone {\it abelian} \index{cone!abelian} if it is defined as \nomenclature[SpecSym]{$\mathop{\rm Spec}\nolimits\mathop{\rm Sym}\nolimits \mathscr{F}$}{abelian cone associated to a coherent sheaf $\mathscr{F}$} $\mathop{\rm Spec}\nolimits\mathop{\rm Sym}\nolimits \mathscr{F}$, where $\mathscr{F}$ is a coherent sheaf on $X$. Every cone is contained as a closed subcone in a minimal abelian one, which is called its {\it abelian hull}. \index{cone!abelian hull} The notions of being abelian and of abelian hull generalize immediately to cone stacks. Then, for a complex $E^\bullet$ in the derived category $D(X)$ of quasicoherent sheaves on $X$ \nomenclature[DX]{$D(X)$}{derived category of quasicoherent sheaves on $X$} which satisfies some suitable assumptions (denoted by ($*$), see Definition \ref{dt1def1}), there is an associated abelian cone stack $h^1/h^0((E^\bullet)^\vee)$. \nomenclature[hh]{$h^1/h^0((E^\bullet)^\vee)$}{abelian cone stack associated to a complex $E^\bullet$} In particular, the {\it cotangent complex} \index{cotangent complex} $L_X^\bullet$ of $X$ constructed by Illusie \cite{Illu1} (a helpful review is given in Illusie \cite[\S 1]{Illu2}) satisfies condition ($*$), so one can define the abelian cone stack ${\mathfrak N}_X:=h^1/h^0((L_X^\bullet)^\vee)$, the {\it intrinsic normal sheaf}.
\index{intrinsic normal sheaf} \nomenclature[N X]{${\mathfrak N}_X$}{intrinsic normal sheaf over a scheme $X$} More directly, ${\mathfrak N}_X$ is constructed as follows: \'etale locally on $X$, embed an open set $U$ of $X$ in a smooth scheme $W$, and take the stack quotient of the {\it normal sheaf} \index{normal sheaf} (viewed as an abelian cone) $N_{U/W}$ by the natural action of $TW_{|_{U}}$. One can glue these abelian cone stacks together to get ${\mathfrak N}_X$. The {\it intrinsic normal cone} \index{intrinsic normal cone} ${\mathfrak C}_X$ is the closed \nomenclature[C X]{${\mathfrak C}_X$}{intrinsic normal cone associated to a scheme $X$} subcone stack of ${\mathfrak N}_X$ defined by replacing $N_{U/W}$ by the {\it normal cone} \index{normal cone} $C_{U/W}$ in the previous construction. In particular, the intrinsic normal sheaf ${\mathfrak N}_X$ of $X$ carries the obstructions for deformations of affine $X$-schemes. With this motivation, they introduce the notion of an {\it obstruction theory} \index{obstruction theory!definition} for $X$. To say that $X$ has an obstruction theory means, very roughly speaking, that one is given locally on $X$ an equivalence class of morphisms of vector bundles such that at each point the kernel of the induced linear map of vector spaces is the tangent space to $X$, and the cokernel is a space of obstructions. That is, it is an object $E^\bullet$ in the derived category together with a morphism $E^\bullet\rightarrow L_X^\bullet$, satisfying condition ($*$) and such that the induced map ${\mathfrak N}_X\rightarrow h^1/h^0((E^\bullet)^\vee)$ is a closed immersion. One denotes the sheaf $h^1({E^\bullet}^\vee)$ by $Ob,$ the obstruction sheaf of the obstruction theory.
It contains the obstructions to the smoothness of $X.$ When an obstruction theory $E^\bullet$ is {\it perfect}, \index{obstruction theory!perfect} ${\mathfrak E}=h^1/h^0((E^\bullet)^\vee)$ is a vector bundle stack. Once an obstruction theory is given, with the additional technical assumption that it admits a global resolution, one can define a virtual fundamental class of the expected dimension: one has a vector bundle stack ${\mathfrak E}$ with a closed subcone stack ${\mathfrak C}_X$, and to define the virtual fundamental class of $X$ with respect to $E^\bullet$ one simply intersects ${\mathfrak C}_X$ with the zero section of ${\mathfrak E}$. To get around the problem of dealing with Chow groups for Artin stacks, Behrend and Fantechi choose to assume that $E^\bullet$ is globally given by a homomorphism of vector bundles $F^{-1}\rightarrow F^0$. Then ${\mathfrak C}_X$ gives rise to a cone $C$ in $F_1={F^{-1}}^\vee$ and one intersects $C$ with the zero section of $F_1$ (see \cite{Kre} for a statement without this assumption). \smallskip So, recall the following definitions from Behrend and Fantechi~\cite{Behr,BeFa,BeFa2}: \begin{dfn} Let $Y$ be a $\mathbb{K}$-scheme, and $D(Y)$ the derived category of quasicoherent sheaves on $Y$. \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] A complex $E^\bullet\in D(Y)$ is {\it perfect of perfect amplitude contained in\/} $[a,b]$ if, \'{e}tale locally on $Y$, $E^\bullet$ is quasi-isomorphic to a complex of locally free sheaves of finite rank in degrees $a,a+1,\ldots,b$.
\item[{\bf(b)}] A complex $E^\bullet\in D(Y)$ {\it satisfies condition\/} $(*)$ if \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[(i)] $h^i(E^\bullet)=0$ for all $i>0$, \item[(ii)] $h^i(E^\bullet)$ is coherent for $i=0,-1$. \end{itemize} \item[{\bf(c)}] An {\it obstruction theory\/}\index{obstruction theory!definition} for $Y$ is a morphism $\varphi:E^\bullet\rightarrow L_Y$ in $D(Y)$, where $L_Y=L_{Y/\mathop{\rm Spec}\nolimits\mathbb{K}}$ is the cotangent complex of $Y$, such that $E^\bullet$ satisfies condition $(*)$, $h^0(\varphi)$ is an isomorphism, and $h^{-1}(\varphi)$ is an epimorphism. \item[{\bf(d)}] An obstruction theory $\varphi:E^\bullet\rightarrow L_Y$ is called {\it perfect\/}\index{obstruction theory!perfect} if $E^\bullet$ is perfect of perfect amplitude contained in $[-1,0]$. \item[{\bf(e)}] A perfect obstruction theory $\varphi:E^\bullet\rightarrow L_Y$ on $Y$ is called {\it symmetric\/}\index{obstruction theory!symmetric}\index{symmetric obstruction theory!definition} if there exists an isomorphism $\vartheta:E^\bullet\rightarrow E^{\bullet\vee}[1]$ such that $\vartheta^{\vee}[1]=\vartheta$. Here $E^{\bullet\vee}\!=\!R\mathbin{\mathcal{H}om}(E^\bullet,\mathcal{O}_Y)$ is the {\it dual\/} of $E^\bullet$, and $\vartheta^\vee$ the dual morphism of~$\vartheta$. \item[{\bf(f)}] If moreover $Y$ is a scheme with a $G$-action, where $G$ is an algebraic group, an {\it equivariant\/} perfect obstruction theory \index{obstruction theory!equivariant} is a morphism $E^\bullet\rightarrow L_Y$ in \index{algebraic group} the category $D(Y)^G$ which is a perfect obstruction theory as a \nomenclature[DG]{$D(X)^G$}{derived category of equivariant quasicoherent $\mathcal{O}_X$-modules} morphism in $D(Y)$ (this definition is originally due to Graber--Pandharipande~\cite{GP}).
Here $D(Y)^G$ denotes the derived category of the abelian category of $G$-equivariant quasicoherent ${\mathbin{\cal O}}_Y$-modules. \item[{\bf(g)}] A {\it symmetric equivariant }obstruction theory (or an {\it equivariant symmetric }obstruction theory) {\rm \kern.05em ind}ex{obstruction theory!equivariant symmetric} is a pair $(E^\bullet\rightarrow L_Y,E^\bullet\rightarrow E^{\bullet\vee}[1])$ of morphisms in the category $D(Y)^G$, such that $E^\bullet\rightarrow L_Y$ is an equivariant perfect obstruction theory and $\vartheta:E^\bullet\rightarrow E^{\bullet\vee}[1]$ is an isomorphism satisfying $\vartheta^\vee[1]=\vartheta$ in $D(Y)^G.$ Note that this is more than requiring that the obstruction theory be equivariant and symmetric, separately, as said in \circte{BeFa2}. \end{itemize} If instead $Y{\rm st}ackrel{\psi}{\longrightarrow}U$ is a morphism of ${\mathbin{\mathbb K}}$-schemes, so $Y$ is a $U$-scheme, we define {\it relative} {\rm \kern.05em ind}ex{obstruction theory!relative} perfect obstruction theories $\phi:E^\bullet\rightarrow L_{Y/U}$ in the obvious way. \lambdabel{dt1def1} \end{dfn} Behrend and Fantechi \circte[Th.~4.5]{BeFa} prove the following theorem, which both explains the term obstruction theory and provides a criterion for verification in practice: \betagin{thm} The following two conditions are equivalent for $E^\bullet \in D(Y)$ satisfying condition $(*)$. \betagin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] The morphism $\phi: E^\bullet \rightarrow L_Y$ is an obstruction theory. \item[{\bf(b)}] Suppose that we are given a square-zero extension $\overline{T}$ of $T$ with ideal sheaf $J$, with $T,\overline T$ affine, and a morphism $g:T \rightarrow Y.$ The morphism $\phi$ induces an element $\phi^*(\omega(g))\in \mathop{\rm Ext}\nolimits^1(g^*E^\bullet, J)$ from $\omega(g)\in\mathop{\rm Ext}\nolimits^1(g^*L_Y, J)$ by composition. 
Then $\phi^*(\omega(g))$ vanishes if and only if there exists an extension $\overline{g}$ of\/ $g$. If it vanishes, then the set of extensions forms a torsor under~$\mathop{\rm Ho}\nolimitsm(g^*E^\bullet,J)$. \end{itemize} \end{thm} Some examples can be found in \circte{BeFa2}: Lagrangian intersections, sheaves on Calabi--Yau $3$-folds, stable maps to Calabi--Yau $3$-folds. The next section concentrates on the Donaldson--Thomas obstruction theory as in \circte{Thom}. {\rm \kern.05em ind}ex{obstruction theory|)} \subsubsection{Donaldson--Thomas invariants of Calabi--Yau 3-folds} \lambdabel{dt2.2}{\rm \kern.05em ind}ex{Calabi--Yau 3-fold|(}{\rm \kern.05em ind}ex{Donaldson--Thomas invariants!original $DT^\alpha(\tildemesau)$|(} {\it Donaldson--Thomas invariants\/} $DT^\alpha(\tildemesau)$ were defined by Richard Thomas \circte{Thom}, following a proposal of Donaldson and Thomas~\circte[\S 3]{DoTh}. They are the virtual counts of stable sheaves on Calabi--Yau 3-folds $X.$ Starting from the formal picture in which a Calabi--Yau $n$-fold is the complex analogue of an oriented real $n$-manifold, and a Fano with a fixed smooth anticanonical divisor is the analogue of a manifold with boundary, Thomas motivates a holomorphic Casson invariant counting bundles on a Calabi--Yau 3-fold. He develops the deformation theory necessary to obtain virtual moduli cycles in moduli spaces of stable sheaves whose higher obstruction groups vanish; this allows him to define the holomorphic Casson invariant of a Calabi--Yau 3-fold $X$, prove that it is deformation-invariant, and compute it explicitly in some examples. Thus, heuristically, the Donaldson--Thomas moduli space is the critical set of the holomorphic Chern--Simons functional and the Donaldson--Thomas invariant is a holomorphic analogue of the Casson invariant.
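\setminusallskip To make the heuristic slightly more concrete, recall in outline, and up to normalization, the shape of the holomorphic Chern--Simons functional from \circte{Thom} (the following is only a sketch): fix a complex vector bundle $E$ on $X$, a reference holomorphic structure $\bar\partial_0$ on $E$, and a holomorphic volume form $\Omega$ on $X$. On $(0,1)$-forms $a$ with values in the endomorphisms of $E$ one sets
\betagin{equation*}
CS(a)=\frac{1}{4\pi^2}\int_X \mathop{\rm tr}\nolimits\Big(\frac{1}{2}\,\bar\partial_0 a\wedge a+\frac{1}{3}\,a\wedge a\wedge a\Big)\wedge\Omega.
\end{equation*}
Its critical points are the operators $\bar\partial_0+a$ with $(\bar\partial_0+a)^2=0$, that is, holomorphic structures on $E$; this is the sense in which the moduli space is heuristically a critical locus.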
{\rm \kern.05em ind}ex{holomorphic Chern--Simons functional} {\rm \kern.05em ind}ex{holomorphic Casson invariant} \setminusallskip Mathematically, Donaldson--Thomas invariants are constructed as follows. Deformation theory gives rise to a perfect obstruction theory \circte{BeFa} (or a tangent-obstruction complex {\rm \kern.05em ind}ex{obstruction theory} in the language of \circte{LiTi}) on the moduli space of stable sheaves ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau).$ Recall that Thomas supposes ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)={{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm ss}^\alpha(\tildemesau),$ that is, there are no strictly semistable sheaves\/ $E$ in class\/ $\alpha,$ which implies the properness of ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau).$ As Thomas points out in \circte{Thom}, the obstruction sheaf is equal to ${\mathbin{\cal O}}mega_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)}$, the sheaf of K\"ahler differentials, and hence the tangents $T_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)}$ are dual to the obstructions. This expresses a certain symmetry of the obstruction theory on ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)$ and is a mathematical reflection of the heuristic that views ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)$ as the critical locus of a holomorphic functional. 
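\setminusallskip This symmetry is an instance of Serre duality. Ignoring for simplicity the distinction between the Ext groups and their traceless parts, the tangent and obstruction spaces at a stable sheaf $E$ are $\mathop{\rm Ext}\nolimits^1(E,E)$ and $\mathop{\rm Ext}\nolimits^2(E,E)$, and Serre duality on the Calabi--Yau $3$-fold $X$ gives
\betagin{equation*}
\mathop{\rm Ext}\nolimits^2(E,E)\cong\mathop{\rm Ext}\nolimits^1(E,E\otimes K_X)^*\cong\mathop{\rm Ext}\nolimits^1(E,E)^*,
\end{equation*}
using $K_X\cong{\mathbin{\cal O}}_X$. In particular the virtual dimension $\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E,E)-\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^2(E,E)$ vanishes.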
Associated to the perfect obstruction theory is the virtual fundamental class{\rm \kern.05em ind}ex{virtual moduli cycle}, an element of the Chow group $A_*({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau))$ of {\rm \kern.05em ind}ex{Chow group} algebraic cycles modulo rational equivalence on ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau).$ One implication {\rm \kern.05em ind}ex{algebraic cycles} of the symmetry of the obstruction theory is the fact that the virtual fundamental class $[{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)]^{\rm vir}$ is of degree zero. It can hence be integrated over the proper space of stable sheaves to an integer, the Donaldson--Thomas invariant or `virtual count' of ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)$ \e DT^\alpha(\tildemesau)\quad=\tildemesextstyle{\rm d}isplaystyle \int_{\setminusall{[{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)]^{\rm vir}}}1. \lambdabel{dt2eq1} \e In fact Thomas did not define invariants $DT^\alpha(\tildemesau)$ counting sheaves with fixed class $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$, but coarser invariants $DT^P(\tildemesau)$ \nomenclature[DTP]{$DT^P(\tildemesau)$}{Donaldson--Thomas invariants counting sheaves with fixed Hilbert polynomial} counting sheaves with fixed Hilbert polynomial{\rm \kern.05em ind}ex{Hilbert polynomial} $P(t)\in{\mathbin{\mathbb Q}}[t]$. 
\nomenclature[PHilb]{$P(t)$}{Hilbert polynomial $\in {\mathbin{\mathbb Q}}[t]$} Thus the relationship with Joyce and Song's version $DT^\alpha(\tildemesau)$, reviewed in \S\ref{dt4}, is $$ {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm ss}^P(\tildemesau)\;\; = {\rm d}isplaystyle\coprod_{\setminusall{\alpha:P_\alpha=P}}{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm ss}^\alpha(\tildemesau) \quad\leqslant\nobreakadsto\quad DT^P(\tildemesau) \quad= \!\!\!\!\!\!\!\!\! \tildemesextstyle{\rm d}isplaystyle \sum_{\setminusall{\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X)):P_\alpha=P}} \!\!\!\!\!\!\!\!\! DT^\alpha(\tildemesau), $$ where the sum on the right hand side has only finitely many nonzero terms. Here is Thomas' main result \circte[\S 3]{Thom}: {\rm \kern.05em ind}ex{deformation-invariance} \betagin{thm} For each Hilbert polynomial $P(t),$ the invariant\/ $DT^P(\tildemesau)$ is unchanged by continuous deformations of the underlying Calabi--Yau $3$-fold~$X$ over ${\mathbin{\mathbb K}}.$ \lambdabel{dt2thm1} \end{thm} The same proof shows that $DT^\alpha(\tildemesau)$ for $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is deformation-invariant, {\it provided\/} it is known that the group $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is deformation-invariant, so that this statement makes sense. This issue is discussed in \circte[\S 4.5]{JoSo}. There, it is shown that when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$ one can describe $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ in terms of cohomology groups {\rm \kern.05em ind}ex{Grothendieck group!numerical} $H^*(X;{\mathbin{\mathbb Z}}),$ $H^*(X;{\mathbin{\mathbb Q}})$, so that $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is manifestly deformation-invariant, and therefore $DT^\alpha(\tildemesau)$ is also deformation-invariant. {\rm \kern.05em ind}ex{deformation-invariance} Theorem \circte[Thm.
4.19]{JoSo} crucially uses the integral Hodge conjecture result of \circte{Vois} for Calabi--Yau 3-folds over ${\mathbin{\mathbb C}}.$ {\rm \kern.05em ind}ex{Hodge conjecture}{\rm \kern.05em ind}ex{Picard scheme} In \circte[Rmk 4.20(e)]{JoSo}, Joyce and Song propose to extend that description over an algebraically closed base field ${\mathbin{\mathbb K}}$ of characteristic zero by replacing $H^*(X;{\mathbin{\mathbb Q}})$ by the {\it algebraic de Rham cohomology\/}{\rm \kern.05em ind}ex{algebraic de Rham cohomology} $H^*_{\rm dR}(X)$ \nomenclature[HalgDR]{$H^*_{\rm dR}(X)$}{algebraic de Rham cohomology of a smooth projective ${\mathbin{\mathbb K}}$-scheme $X$} of Hartshorne \circte{Hart1}. For $X$ a smooth projective ${\mathbin{\mathbb K}}$-scheme, $H^*_{\rm dR}(X)$ is a finite-dimensional vector space over ${\mathbin{\mathbb K}}$. There is a Chern character map {\rm \kern.05em ind}ex{Chern character} ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimits:K^{\rm num}(\mathop{\rm coh}\nolimits(X)){{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra H^{\rm even}_{\rm dR}(X)$. In \circte[\S 4]{Hart1}, Hartshorne considers how $H^*_{\rm dR}(X_t)$ varies in families $X_t:t\in T$, and defines a {\it Gauss--Manin connection}, which {\rm \kern.05em ind}ex{Gauss--Manin connection} makes sense of $H^*_{\rm dR}(X_t)$ being locally constant in~$t$. In \S\ref{def} we will use another idea to characterize the numerical Grothendieck group of a Calabi--Yau 3-fold in terms of a globally constant lattice described using the Picard scheme. {\rm \kern.05em ind}ex{Picard scheme} \setminusallskip The next section introduces the Behrend function and the work done by Behrend in \circte{Behr}, which has been crucial for the development of Donaldson--Thomas theory.
{\rm \kern.05em ind}ex{Calabi--Yau 3-fold|)}{\rm \kern.05em ind}ex{Donaldson--Thomas invariants!original $DT^\alpha(\tildemesau)$|)} \subsection{Microlocal geometry and the Behrend function} \lambdabel{dt3} {\rm \kern.05em ind}ex{microlocal geometry|(}{\rm \kern.05em ind}ex{Behrend function|(} This section briefly explains Behrend's approach \circte{Behr} to Donaldson--Thomas invariants as Euler characteristics of moduli schemes weighted by the Behrend function. The Behrend function was introduced by Behrend \circte{Behr} for finite type ${\mathbin{\mathbb C}}$-schemes $X$; in \circte[\S 4.1]{JoSo} it has been generalized to Artin ${\mathbin{\mathbb K}}$-stacks. Behrend functions are also defined for complex analytic spaces $X_{\rm an}$, and the Behrend function of a ${\mathbin{\mathbb C}}$-scheme $X$ coincides with that of the underlying complex analytic space~$X_{\rm an}$. The theory is also valid for ${\mathbin{\mathbb K}}$-schemes acted on by a reductive linear algebraic group. Good references for this section, other than the original paper by Behrend \circte{Behr}, are \circte[\S4]{JoSo} and, for the equivariant version, \circte{O}. \subsubsection{Microlocal approach to the Behrend function} \lambdabel{dt3.1} In \circte{Behr}, Behrend suggests a microlocal approach to the problem. The first part of the discussion describes how the Behrend function is defined, while the second part, although not detailed and not directly involved in the rest of the paper, aims to give a more complete picture. \paragraph{The definition of the Behrend function.} \lambdabel{dt3.1.1} Let ${\mathbin{\mathbb K}}$ be an algebraically closed field of characteristic zero, and $X$ a finite type ${\mathbin{\mathbb K}}$-scheme. Suppose $X{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra M$ is an embedding of $X$ as a closed subscheme of a smooth ${\mathbin{\mathbb K}}$-scheme $M$.
Then one has a commutative diagram \e \betagin{gathered} \xymatrix@C=70pt@R=20pt{ Z_*(X) \ar[dr]_{c_0^M} \ar[r]_\cong^{\mathbin{\mathfrak m}}athop{\rm Eu}\nolimits & {\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}(X) \ar[d]^{c_0^{SM}} \ar[r]_\cong^{{\mathbin{\mathfrak m}}athop{\rm Ch}} & {\mathbin{\cal L}}_X(M) \ar[dl]^{0^!}\\ & A_0(X) & } \lambdabel{dt3eq1} \end{gathered} \e{\rm \kern.05em ind}ex{algebraic cycles}\nomenclature[CF]{${\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}(X)$}{group of ${\mathbin{\mathbb Z}}$-valued constructible functions on $X$ as in \circte{Joyc.1}}{\rm \kern.05em ind}ex{constructible function}{\rm \kern.05em ind}ex{local Euler obstruction}\nomenclature[Eu]{${\mathbin{\mathfrak m}}athop{\rm Eu}\nolimits$}{the `local Euler obstruction', an isomorphism $Z_*(X)\rightarrow{\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}(X)$} where the two horizontal arrows are isomorphisms. Here $Z_*(X)$ denotes the group of {\it algebraic cycles\/} on $X$, as in Fulton \circte{Fult}, and ${\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}(X)$ the group of ${\mathbin{\mathbb Z}}$-valued constructible functions on $X$ in the sense of \circte{Joyc.1}. The {\it local Euler obstruction\/} is a group isomorphism ${\mathbin{\mathfrak m}}athop{\rm Eu}\nolimits:Z_*(X)\rightarrow{\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}(X)$. The local Euler obstruction was first defined by MacPherson \circte{MacP}, using complex {\rm \kern.05em ind}ex{Deligne--Grothendieck conjecture} analysis, to solve the problem of the existence of covariantly functorial Chern classes, thus answering a conjecture of Deligne and Grothendieck when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$; Gonzalez--Sprinberg \circte{GS} provides an alternative algebraic definition which works over any algebraically closed field ${\mathbin{\mathbb K}}$ of characteristic zero.
It is the obstruction to extending a certain section of the tautological bundle on the {\it Nash blowup}. {\rm \kern.05em ind}ex{Nash blowup} More precisely, if $V$ is a prime cycle on $X$, the constructible function ${\mathbin{\mathfrak m}}athop{\rm Eu}\nolimits(V)$ is given by \betagin{equation*} \tildemesextstyle{\mathbin{\mathfrak m}}athop{\rm Eu}\nolimits(V):x\longmapsto{\rm d}isplaystyle \int_{{\mathbin{\mathfrak m}}u^{-1}(x)}c(\tildemesi T)\cap s({\mathbin{\mathfrak m}}u^{-1}(x),\tildemesi V), \end{equation*} where ${\mathbin{\mathfrak m}}u:\tildemesi V\rightarrow V$ is the Nash blowup of $V$, $\tildemesi T$ the dual of the universal quotient bundle, $c$ the total Chern class and $s$ the Segre class of the normal cone to a closed immersion. Kennedy \circte[Lem. 4]{Kenn} proves that ${\mathbin{\mathfrak m}}athop{\rm Eu}\nolimits(V)$ is constructible.{\rm \kern.05em ind}ex{constructible function} \setminusallskip As pointed out in the next section, it is worth observing that independently, at about the same time, Kashiwara proved an {\it index theorem} over ${\mathbin{\mathbb C}}$ for a holonomic ${\mathbin{\mathfrak m}}athcal{D}$-module relating its local Euler characteristic and the local Euler obstruction with respect to an appropriate stratification (see \circte{Gin} for details). Kashiwara's local Euler obstruction coincides with the one defined above, and this is equivalent to saying that the diagram \eq{dt3eq2} below commutes. {\rm \kern.05em ind}ex{index theorem!microlocal index theorem}{\rm \kern.05em ind}ex{Euler characteristic} \setminusallskip Observe that this part of the diagram exists without the embedding into $M$ and is sufficient to give the definition of the Behrend function as follows. Let $C_{X/M}$ be the {\it normal cone\/} of $X$ in $M$, as in \circte[p.73]{Fult}, and $\pi:C_{X/M}\rightarrow X$ the projection.
As in \circte[\S 1.1]{Behr}, define a cycle ${{\mathbin{\mathfrak m}}athfrak C}_{X/M}\in Z_*(X)$ by $$ {{\mathbin{\mathfrak m}}athfrak C}_{X/M}=\tildemesextstyle{\rm d}isplaystyle\sum_{C'}(-1)^{\mathop{\rm dim}\nolimits\pi(C')}{\rm mult}(C')\pi(C'), $$ where the sum is over all irreducible components $C'$ of $C_{X/M}$. It turns out that ${{\mathbin{\mathfrak m}}athfrak C}_{X/M}$ depends only on $X$, and not on the embedding $X{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra M$. Behrend \circte[Prop. 1.1]{Behr} proves that given a finite type ${\mathbin{\mathbb K}}$-scheme $X$, there exists a unique cycle ${{\mathbin{\mathfrak m}}athfrak C}_X\in Z_*(X)$, such that for any \'etale map $\varphi:U\rightarrow X$ for a ${\mathbin{\mathbb K}}$-scheme $U$ and any closed embedding $U{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra M$ into a smooth ${\mathbin{\mathbb K}}$-scheme $M$, one has $\varphi^*({{\mathbin{\mathfrak m}}athfrak C}_X)={{\mathbin{\mathfrak m}}athfrak C}_{U/M}$ in $Z_*(U)$. If $X$ is a subscheme of a smooth $M$ one takes $U=X$ and gets ${{\mathbin{\mathfrak m}}athfrak C}_X={{\mathbin{\mathfrak m}}athfrak C}_{X/M}$. Behrend calls ${{\mathbin{\mathfrak m}}athfrak C}_X$ the {\it signed support of the intrinsic normal cone}, or the {\it distinguished cycle} of~$X$.
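\setminusallskip A minimal example, a standard computation carried out directly from the definition: let $X=V(x^2)\subset M={\mathbin{\mathbb A}}^1$ be the fat point of length two, with ideal $I=(x^2)$. Then $I^n/I^{n+1}$ is a free ${\mathbin{\cal O}}_X$-module of rank one for all $n\geq 0$, so the normal cone $C_{X/M}$ is a line over $X$; it has a single irreducible component $C'$, of multiplicity $2$ (the length of ${\mathbin{\cal O}}_X$ at the generic point), with $\pi(C')=X_{\rm red}$ a point. Hence
\betagin{equation*}
{{\mathbin{\mathfrak m}}athfrak C}_X={{\mathbin{\mathfrak m}}athfrak C}_{X/M}=(-1)^0\cdot 2\cdot[X_{\rm red}]=2\,[X_{\rm red}].
\end{equation*}
By contrast, if $X$ is smooth one may take $M=X$, so that $C_{X/M}$ is $X$ itself with multiplicity one, and ${{\mathbin{\mathfrak m}}athfrak C}_X=(-1)^{\mathop{\rm dim}\nolimits X}[X]$.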
For each finite type ${\mathbin{\mathbb K}}$-scheme $X$, define the {\it Behrend function} $\nu_X$ in ${\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{{\mathbin{\mathbb Z}}}(X)$ by $\nu_X={\mathbin{\mathfrak m}}athop{\rm Eu}\nolimits({{\mathbin{\mathfrak m}}athfrak C}_X)$, as in Behrend~\circte[\S 1.2]{Behr}.{\rm \kern.05em ind}ex{Behrend function!definition}{\rm \kern.05em ind}ex{intrinsic normal cone!signed support}{\rm \kern.05em ind}ex{distinguished cycle} {\mathbin{\mathfrak m}}edskip{\rm \kern.05em ind}ex{conormal bundle} {\rm \kern.05em ind}ex{characteristic cycle map} \nomenclature[char]{${\mathbin{\mathfrak m}}athop{\rm Ch}$}{the characteristic cycle map ${\mathbin{\mathfrak m}}athop{\rm Ch}:{\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{{\mathbin{\mathbb Z}}}(U)\rightarrow {\mathbin{\cal L}}(U)$} For completeness, the section now describes the other side of the diagram \eq{dt3eq1}, which yields another possible way to define the Behrend function. Write ${\mathbin{\cal L}}_X(M)$ for the free abelian {\rm \kern.05em ind}ex{Lagrangian cycle!conical Lagrangian cycle} \nomenclature[L]{${\mathbin{\cal L}}_X(M)$}{free abelian group generated by closed, irreducible, reduced, conical Lagrangian, ${\mathbin{\mathbb K}}$-subvariety in $T^*M$ lying over cycles contained in $X$} group generated by closed, irreducible, reduced, conical Lagrangian, ${\mathbin{\mathbb K}}$-subvariety in ${\mathbin{\cal O}}mega_M$ lying over cycles contained in $X.$ The isomorphism ${\mathbin{\mathfrak m}}athop{\rm Ch}:{\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{{\mathbin{\mathbb Z}}}(X)\rightarrow{\mathbin{\cal L}}_X(M)$ maps a constructible function to its characteristic cycle, which is a conic Lagrangian cycle on {\rm \kern.05em ind}ex{characteristic cycle map} ${\mathbin{\cal O}}mega_M$ supported inside $X$ defined in the following way. 
Consider the commutative diagram of group isomorphisms that fits in the diagram \eq{dt3eq1}: \e \xymatrix@R=10pt@C=50pt{ Z_*(M) \ar[r]^{\mathbin{\mathfrak m}}athop{\rm Eu}\nolimits \ar@/_.7pc/[rr]_L & {\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{{\mathbin{\mathbb Z}}}(M) \ar[r]^{\mathbin{\mathfrak m}}athop{\rm Ch} & {\mathbin{\cal L}}(M).} \lambdabel{dt6eqq1} \e Here $L:Z_*(M)\rightarrow {\mathbin{\cal L}}(M)$ is defined on any prime cycle $V$ by $L:V\longmapsto (-1)^{\mathop{\rm dim}\nolimits(V)}{\mathbin{\ell\kern .08em}}l(V),$ where ${\mathbin{\ell\kern .08em}}l(V)$ is the closure of the conormal bundle of any nonsingular dense open subset of $V.$ Then ${\mathbin{\mathfrak m}}athop{\rm Eu}\nolimits,$ $L$ are isomorphisms, and the {\it characteristic cycle map} ${\mathbin{\mathfrak m}}athop{\rm Ch}:{\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{{\mathbin{\mathbb Z}}}(M)\rightarrow {\mathbin{\cal L}}(M)\subset Z_{\mathop{\rm dim}\nolimits M}({\mathbin{\cal O}}m_M)$ is defined to be the unique isomorphism making (\ref{dt6eqq1}) commute. In the complex case Ginsburg \circte{Gin} describes the inverse of this map as an {\it intersection multiplicity} between two conical Lagrangian cycles. This formula is crucial in \circte[\S 4.3]{Behr}, where Behrend gives an expression for the Behrend function in terms of linking numbers, which {\rm \kern.05em ind}ex{linking number} remains valid even when it is not known whether a scheme admitting a symmetric obstruction theory can locally be written as the critical locus of a regular function on a smooth scheme (Theorem \ref{dt6prop1}). See also \circte[Ex.
19.2.4]{Fult}.{\rm \kern.05em ind}ex{characteristic cycle map!inverse}{\rm \kern.05em ind}ex{intersection multiplicity} {\mathbin{\mathfrak m}}edskip{\rm \kern.05em ind}ex{Chern-Mather class} {\rm \kern.05em ind}ex{Schwartz-MacPherson Chern class} \nomenclature[cm]{$c^M(V)$}{Mather class of an algebraic cycle $V$} {\rm \kern.05em ind}ex{algebraic cycles} The maps to $A_0(X)$ are the degree zero {\it Chern-Mather class}, the degree zero {\it Schwartz-MacPherson Chern class}, and the intersection with the zero section, respectively. The Mather class is a homomorphism $c^M:Z_*(X)\rightarrow A_*(X),$ whose definition is a globalization of the construction of the local Euler obstruction. One has $c^M(V)={\mathbin{\mathfrak m}}u_*\big(c(\wedgeidetilde T)\cap[\wedgeidetilde V]\big)\,,$ for a prime cycle $V$ of dimension $p$ on $X$ with the same notation as above. For the expression in terms of normal cones, see for example \circte[\S 1]{Sabb}. Applying $c^M$ to the cycle ${{\mathbin{\mathfrak m}}athfrak C}_X$, one obtains the {\it Aluffi class} {\rm \kern.05em ind}ex{Aluffi class} $\alphapha_X=c^M({{\mathbin{\mathfrak m}}athfrak C}_X)\in A_*(X)$ defined in \circte{Alu}.
\nomenclature[11]{$\alpha_X$}{Aluffi class of $X$} If $X$ is smooth, its Aluffi class equals $\alphapha_X=c({\mathbin{\cal O}}mega_X)\cap[X]\,.$ {\mathbin{\mathfrak m}}edskip Now given a symmetric obstruction theory on $X$, the {\it cone of curvilinear obstructions} $cv{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookrightarrow ob={\mathbin{\cal O}}mega_X$, pulls back to a cone in {\rm \kern.05em ind}ex{cone!of curvilinear obstructions} \nomenclature[cv]{cv}{cone of curvilinear obstructions} ${\mathbin{\cal O}}mega_{M_{|_{X}}}$ via the epimorphism ${\mathbin{\cal O}}mega_{M_{|_{X}}}\rightarrow {\mathbin{\cal O}}mega_X.$ Via the embedding ${\mathbin{\cal O}}mega_{M_{|_{X}}}{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookrightarrow {\mathbin{\cal O}}mega_M$ one obtains a conic subscheme $C{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookrightarrow {\mathbin{\cal O}}mega_M$, the {\it obstruction cone }for {\rm \kern.05em ind}ex{cone!obstruction cone} the embedding $X{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookrightarrow M$. Behrend proves that the virtual fundamental class is $[X]^{\rm vir}=0^![C]$. \nomenclature[csm]{$c^{SM}(f)$}{Schwartz-MacPherson Chern class of a constructible function $f$} The key fact is that $C$ is {\it Lagrangian}. Because of this, there exists a unique constructible function $\nu_X$ on $X$ such that ${\mathbin{\mathfrak m}}athop{\rm Ch}(\nu_X)=[C]$ and $c_0^{SM}(\nu_X)=[X]^{\rm vir}$. Then Theorem \ref{dt3thm4} below follows as an application of MacPherson's theorem~\circte{MacP} (or equivalently from the microlocal index theorem of {\rm \kern.05em ind}ex{index theorem!microlocal index theorem} Kashiwara~\circte{KaSc}), which one can think of as a kind of generalization of the {\it Gauss--Bonnet theorem} to singular schemes. 
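\setminusallskip In outline, the application runs as follows. MacPherson's theorem provides a transformation $f\mapsto c^{SM}(f)$ from constructible functions to cycle classes, natural with respect to proper pushforward, with $c^{SM}({\bf 1}_X)$ the Schwartz-MacPherson Chern class of $X$. Pushing forward to a point, for $X$ proper one obtains
\betagin{equation*}
\int_X c^{SM}(f)=\chi(X,f):=\sum_{n\in{\mathbin{\mathbb Z}}}n\,\chi\big(f^{-1}(n)\big),\qquad f\in{\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{{\mathbin{\mathbb Z}}}(X).
\end{equation*}
Applied to $f=\nu_X$, together with $c_0^{SM}(\nu_X)=[X]^{\rm vir}$, this identifies the degree of the virtual fundamental class with the Euler characteristic of $X$ weighted by the Behrend function.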
{\rm \kern.05em ind}ex{Gauss--Bonnet theorem} See Theorem \ref{dt3thm4} below for its validity over ${\mathbin{\mathbb K}}.$ The cycle ${{\mathbin{\mathfrak m}}athfrak C}_X$ such that ${\mathbin{\mathfrak m}}athop{\rm Eu}\nolimits({{\mathbin{\mathfrak m}}athfrak C}_X)=\nu_X$ is as defined above, the (signed) support of the intrinsic normal cone of $X$. The Aluffi class $\alphapha_X=c^M({{\mathbin{\mathfrak m}}athfrak C}_X)=c^{SM}(\nu_X)$ has thus the property that its degree zero component is the virtual fundamental class of any symmetric obstruction theory on $X.$ {\rm \kern.05em ind}ex{virtual moduli cycle} {\rm \kern.05em ind}ex{obstruction theory!symmetric} {\mathbin{\mathfrak m}}edskip{\rm \kern.05em ind}ex{Zariski topology}\nomenclature[Xan]{$X_{\rm an}$}{complex analytic space $X_{\rm an}$ underlying $X$} {\rm \kern.05em ind}ex{Behrend function!algebraic} {\rm \kern.05em ind}ex{Behrend function!analytic} In the case ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$, using MacPherson's complex analytic definition of the local Euler obstruction \circte{MacP}, the definition of $\nu_X$ makes sense in the framework of complex analytic geometry, and so Behrend functions can be defined for complex analytic spaces $X_{\rm an}$.{\rm \kern.05em ind}ex{complex analytic space} Thus, as in \circte[Prop. 4.2]{JoSo} one has that if $X$ is a finite type ${\mathbin{\mathbb K}}$-scheme, then the Behrend function $\nu_X$ is a well-defined\/ ${\mathbin{\mathbb Z}}$-valued constructible function on $X,$ in the Zariski topology. If $Y$ is a complex analytic space then the Behrend function $\nu_Y$ is a well-defined\/ ${\mathbin{\mathbb Z}}$-valued locally constructible function on $Y,$ in the analytic topology. 
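\setminusallskip For orientation, the two simplest cases over ${\mathbin{\mathbb C}}$ are as follows (see \circte{Behr}; the signs depend on conventions). If $X$ is smooth of dimension $n$ then $\nu_X\equiv(-1)^n$. If $X$ is the critical locus of a holomorphic function $f$ on a complex manifold $M$, then
\betagin{equation*}
\nu_X(x)=(-1)^{\mathop{\rm dim}\nolimits M}\big(1-\chi(MF_f(x))\big),
\end{equation*}
where $MF_f(x)$ is the Milnor fibre of $f$ at $x$. For instance, for the fat point $X=V(x^2)$ in ${\mathbin{\mathbb C}}$, the critical locus of $x^3/3$, the Milnor fibre consists of three points, so $\nu_X(0)=(-1)(1-3)=2$.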
{\rm \kern.05em ind}ex{analytic topology} Finally, if $X$ is a finite type ${\mathbin{\mathbb C}}$-scheme, with underlying complex analytic space $X_{\rm an},$ then the algebraic Behrend function $\nu_X$ and the analytic Behrend function $\nu_{\setminusash{X_{\rm an}}}$ coincide. In particular, $\nu_X$ depends only on the complex analytic space $X_{\rm an}$ underlying $X,$ locally in the analytic topology. More generally, the definition of Behrend functions is valid over ${\mathbin{\mathbb K}}$-schemes, algebraic ${\mathbin{\mathbb K}}$-spaces and Artin ${\mathbin{\mathbb K}}$-stacks, locally of finite type (see \circte[Prop. 4.4]{JoSo}). {\rm \kern.05em ind}ex{Behrend function!of Artin stack} \paragraph{Categorifying the theory.} \lambdabel{dt3.1.2} {\rm \kern.05em ind}ex{categorification|(} What follows will not be needed to understand the rest of the paper. We include this material both for completeness, as it underlies the theory of Behrend functions, and also because it is one of the main applications of the whole program \circte{Joyc2,BBJ,BBDJS,BJM,BBBJ} explained in \S\ref{ourpapers}. {\mathbin{\mathfrak m}}edskip For this paragraph, restrict to ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$ for simplicity.
There exists a sophisticated modern theory of linear partial differential equations on a smooth complex algebraic variety $X,$ sometimes called {\it microlocal analysis}, because it involves analysis on the cotangent bundle $T^* X; $ this yields a theory which is invariant with respect to the action of the whole group of canonical transformations of $T^* X$ while the usual theory is only invariant under the subgroup induced by diffeomorphisms of $X.$ It is sometimes called ${\mathbin{\mathfrak m}}athcal{D}$-{\it module theory,} {\rm \kern.05em ind}ex{sheaf of rings of holomorphic linear partial differential operators of finite order} because it involves sheaves of modules ${\mathbin{\mathfrak m}}athcal{M}$ over the sheaf of rings of holomorphic linear partial differential operators of finite order ${\mathbin{\mathfrak m}}athcal{D}={\mathbin{\mathfrak m}}athcal{D}_X;$ \nomenclature[D]{${\mathbin{\mathfrak m}}athcal{D}_X$}{sheaf of rings of holomorphic linear partial differential operators of finite order} these rings are noncommutative, left and right Noetherian, and have finite global homological dimension. It is also sometimes called {\it algebraic analysis} because it involves such algebraic constructions as $\mathop{\rm Ext}\nolimits^i_{\mathbin{\mathfrak m}}athcal{D}({\mathbin{\mathfrak m}}athcal{M},{\mathbin{\mathfrak m}}athcal{N}).$ The theory as it is known today grew out of the work done in the 1960s by the school of Mikio Sato in Japan. During the 1970s, one of the central themes in ${\mathbin{\mathfrak m}}athcal{D}$-module theory was David Hilbert's twenty-first problem, now called the {\rm \kern.05em ind}ex{Riemann-Hilbert problem} {\it Riemann-Hilbert problem.} A generalization of it may be stated as the problem of establishing the {\it Riemann-Hilbert correspondence,} which, roughly speaking, {\rm \kern.05em ind}ex{Riemann-Hilbert correspondence} describes the nature of the correspondence between a system of differential equations and its solutions.
A comprehensive reference is the book of Kashiwara and Schapira \circte{KaSc}, while an interesting, eclectic view of the subject is provided by Ginsburg \circte{Gin}. One has the following commutative diagram: \e \betagin{gathered} \xymatrix@C=60pt@R=25pt{ \tildemesextrm{(perverse) constructible sheaves} \ar[d]_{{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi} & \ar[l]_\sigmam^{DR} \tildemesextrm{(regular) holonomic modules} \ar[d]^{SS}\\ \tildemesextrm{constructible functions} \ar[r]_\sigmam^{{\mathbin{\mathfrak m}}athop{\rm Ch}} & \tildemesextrm{Lagrangian cycles in }T^* X.} \lambdabel{dt3eq2} \end{gathered} \e {\rm \kern.05em ind}ex{constructible sheaf!perverse} {\rm \kern.05em ind}ex{holonomic modules!regular} {\rm \kern.05em ind}ex{characteristic cycle map} \nomenclature[SS]{$SS$}{characteristic cycle map} {\rm \kern.05em ind}ex{characteristic cycle} {\rm \kern.05em ind}ex{characteristic variety} Recall that here $SS$ denotes the {\it characteristic cycle map} which to a ${\mathbin{\mathfrak m}}athcal{D}$-module ${\mathbin{\mathfrak m}}athcal{M}$ associates its {\it characteristic cycle.} It is the formal linear combination of irreducible components of the {\it characteristic variety} (the support of the graded sheaf ${\rm gr}\,{\mathbin{\mathfrak m}}athcal{M}$ associated to ${\mathbin{\mathfrak m}}athcal{M}$) counted with their multiplicities.
It looks like $$SS({\mathbin{\mathfrak m}}athcal{M})=\sum m_\alpha({\mathbin{\mathfrak m}}athcal{M})\cdot \overlineerline{T^*_{X_\alpha}X}$$ for a stratification $\{X_\alpha \}$ of $X,$ where $m_\alpha({\mathbin{\mathfrak m}}athcal{M})$ are positive integers and $ \overlineerline{T^*_{X_\alpha}X}$ is the closure of the conormal bundle $T^*_{X_\alpha}X.$ Each component of the characteristic variety has dimension at least $\mathop{\rm dim}\nolimits(X).$ A ${\mathbin{\mathfrak m}}athcal{D}$-module ${\mathbin{\mathfrak m}}athcal{M}$ is called {\it holonomic} if its characteristic variety is pure of dimension $\mathop{\rm dim}\nolimits(X).$ To have also {\it regular singularities} means, very roughly speaking, that the system is determined by its principal symbol. \setminusallskip So, to a holonomic system we have associated an object of a microlocal nature, the characteristic cycle. On the other side, the Riemann-Hilbert correspondence associates to a holonomic system ${\mathbin{\mathfrak m}}athcal{M}$ its {\it De Rham complex,} {\rm \kern.05em ind}ex{De Rham complex} \nomenclature[DRM]{$\tildemesextrm{DR}{\mathbin{\mathfrak m}}athcal{M}$}{De Rham complex of a holonomic system} $$ \xymatrix@C=40pt@R=10pt{\tildemesextrm{DR}({\mathbin{\mathfrak m}}athcal{M}): \; 0 \ar[r] & {\mathbin{\cal O}}m^0({\mathbin{\mathfrak m}}athcal{M}) \ar[r]^{d\;} & {\mathbin{\cal O}}m^1({\mathbin{\mathfrak m}}athcal{M}) \ar[r]^{\;d} & \ldots \ar[r]^{d\;\;\;\;\;\;\;\;} & {\mathbin{\cal O}}m^{\mathop{\rm dim}\nolimits(X)}({\mathbin{\mathfrak m}}athcal{M}) \ar[r]^{\;\;\;\;\;\;\; d} & 0,} $$ where $ {\mathbin{\cal O}}m^p({\mathbin{\mathfrak m}}athcal{M})$ is the sheaf of ${\mathbin{\mathfrak m}}athcal{M}$-valued $p$-forms on $X$ and $d$ is the differential defined by the Cartan formula.
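\setminusallskip A minimal example on $X={\mathbin{\mathbb C}}$ may help fix ideas (standard; see \circte{KaSc}, and the shifts depend on conventions). The module ${\mathbin{\mathfrak m}}athcal{M}={\mathbin{\mathfrak m}}athcal{D}_X/{\mathbin{\mathfrak m}}athcal{D}_X\partial\cong{\mathbin{\cal O}}_X$, encoding the equation $\partial u=0$, has characteristic variety the zero locus of the symbol of $\partial$, the zero section, so
\betagin{equation*}
SS({\mathbin{\mathfrak m}}athcal{M})=[T^*_X X],
\end{equation*}
and its De Rham complex is the shifted constant sheaf ${\mathbin{\mathbb C}}_X[1]$. The module ${\mathbin{\mathfrak m}}athcal{N}={\mathbin{\mathfrak m}}athcal{D}_X/{\mathbin{\mathfrak m}}athcal{D}_X x$, the module of the $\delta$-function at the origin, has $SS({\mathbin{\mathfrak m}}athcal{N})=[T^*_0 X]$, the conormal to the origin, and De Rham complex the skyscraper ${\mathbin{\mathbb C}}_0$ (up to shift). Both characteristic varieties are Lagrangian of dimension $1=\mathop{\rm dim}\nolimits(X)$, so both modules are holonomic (and regular).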
As an object in the derived category it can be expressed as $\tildemesextrm{DR}({\mathbin{\mathfrak m}}athcal{M})= \tildemesextrm{R}\mathbin{\mathcal{H}om}_{{\mathbin{\mathfrak m}}athcal{D}_X}({\mathbin{\cal O}}_X,{\mathbin{\mathfrak m}}athcal{M})[\mathop{\rm dim}\nolimits(X)].$ If ${\mathbin{\mathfrak m}}athcal{M}$ is holonomic, $\tildemesextrm{DR}({\mathbin{\mathfrak m}}athcal{M})$ is constructible and determines ${\mathbin{\mathfrak m}}athcal{M}$ provided that the latter has regular singularities. Recall the following definition (see also \circte[\S 4]{JoSo}): \betagin{dfn} Let $X$ be a complex analytic space. Consider sheaves of ${\mathbin{\mathbb Q}}$-modules $\cal C$ on $X$. Note that these are {\it not\/} coherent sheaves, which are sheaves of ${\mathbin{\cal O}}_X$-modules. A sheaf $\cal C$ is called {\it constructible\/}{\rm \kern.05em ind}ex{sheaf!constructible}{\rm \kern.05em ind}ex{constructible sheaf} if there is a locally finite stratification $X=\bigcup_{j\in J}X_j$ of $X$ in the complex analytic topology, such that ${\cal C}\vert_{X_j}$ is a ${\mathbin{\mathbb Q}}$-local system for all $j\in J$, and all the {\rm \kern.05em ind}ex{analytic topology} stalks ${\cal C}_x$ for $x\in X$ are finite-dimensional ${\mathbin{\mathbb Q}}$-vector spaces. A complex ${\cal C}^\bullet$ of sheaves of ${\mathbin{\mathbb Q}}$-modules on $X$ is called {\it constructible\/} if all its cohomology sheaves $H^i({\cal C}^\bullet)$ for $i\in{\mathbin{\mathbb Z}}$ are constructible. {\rm \kern.05em ind}ex{constructible complex} Write $D^b_{\scriptscriptstyle{\rm Con}}(X)$\nomenclature[DbCon(X)]{$D^b_{\scriptscriptstyle{\rm Con}}(X)$}{bounded derived category of constructible complexes on $X$} for the bounded derived category of constructible complexes on $X$. It is a triangulated category. By \circte[Thm. 
4.1.5]{Dimc}, $D^b_{\scriptscriptstyle{\rm Con}}(X)$ is closed under Grothendieck's ``six operations on sheaves''{\rm \kern.05em ind}ex{sheaf!Grothendieck's six operations} $R\varphi_*,R\varphi_!,\varphi^*,\varphi^!,{\cal RH}om,\setminusash{{\mathbin{\mathfrak m}}athop{\otimesimes}\limits^{\scriptscriptstyle L}}$. The {\it perverse sheaves\/} on $X$ are a particular abelian subcategory ${\mathbin{\mathfrak m}}athop{\rm Per}(X)$\nomenclature[Per(X)]{${\mathbin{\mathfrak m}}athop{\rm Per}(X)$}{abelian category of perverse sheaves on $X$} in $D^b_{\scriptscriptstyle{\rm Con}}(X)$, which is the heart of a t-structure on $D^b_{\scriptscriptstyle{\rm Con}}(X)$. So perverse sheaves are actually complexes of sheaves, not sheaves, on $X$. The category ${\mathbin{\mathfrak m}}athop{\rm Per}(X)$ is noetherian{\rm \kern.05em ind}ex{noetherian}{\rm \kern.05em ind}ex{abelian category!noetherian} and locally artinian, and is artinian{\rm \kern.05em ind}ex{artinian}{\rm \kern.05em ind}ex{abelian category!artinian} if $X$ is of finite type, so every perverse sheaf {\rm \kern.05em ind}ex{perverse sheaf} has (locally) a unique filtration whose quotients are {\it simple} {\rm \kern.05em ind}ex{perverse sheaf!simple} perverse sheaves; and the simple perverse sheaves can be described completely in terms of irreducible local systems on irreducible subvarieties in~$X$. 
\lambdabel{dt3def1} \end{dfn} Now, given a constructible sheaf ${\cal C}^\bullet$ there is associated a constructible function on $X$: define a map \nomenclature[1wchX]{${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi_X$}{constructible function on $X$ associated to a constructible sheaf} ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi_X: {\mathbin{\mathfrak m}}athop{\rm Obj\kern .1em}\nolimits(D^b_{\scriptscriptstyle{\rm Con}}(X))\rightarrow{\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}(X)$ by taking Euler characteristics of the cohomology of stalks of complexes, given by \betagin{equation*} {\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi_X({\cal C}^\bullet):x\longmapsto \tildemesextstyle{\rm d}isplaystyle \sum_{k\in{\mathbin{\mathbb Z}}}(-1)^k\mathop{\rm dim}\nolimits{\cal H}^k({\cal C}^\bullet)_x. \end{equation*} Since distinguished triangles in $D^b_{\scriptscriptstyle{\rm Con}}(X)$ give long exact sequences on cohomology of stalks ${\cal H}^k(-)_x$, this ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi_X$ is additive over distinguished triangles, and so descends to a group morphism ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi_X:K_0(D^b_{\scriptscriptstyle{\rm Con}}(X))\rightarrow {\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}(X)$. 
These maps ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi_X:{\mathbin{\mathfrak m}}athop{\rm Obj\kern .1em}\nolimits(D^b_{\scriptscriptstyle{\rm Con}}(X))\rightarrow{\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}(X)$ and ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi_X: K_0(D^b_{\scriptscriptstyle{\rm Con}}(X))\rightarrow {\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}(X)$ are surjective, since ${\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}(X)$ is spanned by the characteristic functions of closed analytic cycles $Y$ in $X$, and each such $Y$ lifts to a perverse sheaf in $D^b_{\scriptscriptstyle{\rm Con}}(X)$. In category-theoretic terms, $X{\mathbin{\mathfrak m}}apsto D^b_{\scriptscriptstyle{\rm Con}}(X)$ is a functor $D^b_{\scriptscriptstyle{\rm Con}}$ from complex analytic spaces to triangulated categories, and $X{\mathbin{\mathfrak m}}apsto {\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}(X)$ is a functor ${\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}$ from complex analytic spaces to abelian groups, and $X{\mathbin{\mathfrak m}}apsto{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi_X$ is a natural transformation ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi$ from $D^b_{\scriptscriptstyle{\rm Con}}$ {\rm \kern.05em ind}ex{natural transformation} to~${\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}$. 
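As an elementary illustration of ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi_X$ (a standard computation, included for convenience): if $\underline{{\mathbin{\mathbb Q}}}_X$ denotes the constant sheaf placed in degree zero, every stalk is ${\mathbin{\mathbb Q}}$, so ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi_X(\underline{{\mathbin{\mathbb Q}}}_X)\equiv 1$, and shifting changes the sign, ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi_X(\underline{{\mathbin{\mathbb Q}}}_X[n])\equiv(-1)^n$. Moreover, if $j:U{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookrightarrow X$ is open with closed complement $i:Z{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookrightarrow X$, the distinguished triangle $j_!\underline{{\mathbin{\mathbb Q}}}_U\rightarrow\underline{{\mathbin{\mathbb Q}}}_X\rightarrow i_*\underline{{\mathbin{\mathbb Q}}}_Z$ gives $$1_X=1_U+1_Z \qquad\tildemesext{in }{\mathbin{\mathfrak m}}athop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}(X),$$ the simplest instance of the additivity over distinguished triangles used above.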
\setminusallskip For a holonomic ${\mathbin{\mathfrak m}}athcal{D}$-module ${\mathbin{\mathfrak m}}athcal{M}$ one sets ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(x,{\mathbin{\mathfrak m}}athcal{M})={\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(x,\tildemesextrm{DR}({\mathbin{\mathfrak m}}athcal{M})).$ Thus, if ${\mathbin{\mathfrak m}}athcal{M}$ is a regular holonomic ${\mathbin{\mathfrak m}}athcal{D}$-module on $X\subset M,$ with $M$ smooth, whose characteristic cycle is $[C_{X/M}]$, then $$\nu_X(P)=\sum_i (-1)^i \mathop{\rm dim}\nolimits_{\mathbin{\mathbb C}} H^i_{\{P\}}(X,{\mathbin{\mathfrak m}}athcal{M}_{DR})\,,$$ for any point $P\in M$. Here $H^i_{\{P\}}$ denotes cohomology with supports in the subscheme $\{P\}{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookrightarrow M$ and ${\mathbin{\mathfrak m}}athcal{M}_{DR}$ denotes the perverse sheaf associated to \nomenclature[MDR]{${\mathbin{\mathfrak m}}athcal{M}_{DR}$}{perverse sheaf associated to a regular holonomic ${\mathbin{\mathfrak m}}athcal{D}$-module ${\mathbin{\mathfrak m}}athcal{M}$ via the Riemann-Hilbert correspondence} ${\mathbin{\mathfrak m}}athcal{M}$ via the Riemann-Hilbert correspondence, as incarnated, for example, by the De~Rham complex $\tildemesextrm{DR}({\mathbin{\mathfrak m}}athcal{M}).$ At the moment, Kai Behrend is attempting to give explicit constructions in some cases (see \circte{GeBaViLa}). \setminusallskip In the case $X$ is the critical scheme of a regular function $f$ on a smooth scheme $M,$ Behrend \circte{Behr} gives the following expression for the Behrend function due to Parusi\'nski and Pragacz \circte{PaPr}. This formula has been crucial in \circte{JoSo}. 
For the definition of the {\it Milnor fibre\/} {\rm \kern.05em ind}ex{Milnor fibre} for holomorphic functions on complex {\rm \kern.05em ind}ex{vanishing cycle} analytic spaces, and for a review of {\it vanishing cycles,\/} a survey paper on the subject is Massey \circte{Mass}, and three books are Kashiwara and Schapira \circte{KaSc}, Dimca \circte{Dimc}, and Sch\"urmann \circte{Schur}. Over the field ${\mathbin{\mathbb C}}$, Saito's theory of {\it mixed Hodge modules\/}{\rm \kern.05em ind}ex{mixed Hodge module} \circte{Sait} provides a generalization of the theory of perverse sheaves with more structure, which may also be a context in which to generalize Donaldson--Thomas theory. \betagin{thm} Let\/ $U$ be a complex manifold of dimension $n,$ and\/ $f:U\rightarrow{\mathbin{\mathbb C}}$ a holomorphic function, and define $X$ to be the complex analytic space $\mathop{\rm Crit}(f)$ contained in $U_0=f^{-1}(\{0\}).$ Then the Behrend function $\nu_X$ of\/ $X$ is given by \e \nu_X(x)=(-1)^{\mathop{\rm dim}\nolimits U}\bigl(1-{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(MF_f(x))\bigr) \qquad\tildemesext{for $x\in X$.} \lambdabel{dt3eq3} \e Moreover, the perverse sheaf of vanishing cycles{\rm \kern.05em ind}ex{vanishing cycle!perverse sheaf}{\rm \kern.05em ind}ex{perverse sheaf!of vanishing cycles} $\phi_f(\underline{{\mathbin{\mathbb Q}}}[n-1])$ on $U_0$ is supported on $X,$ and \nomenclature[1vphif]{$\phi_f$}{vanishing cycle functor on derived category of constructible sheaves} \nomenclature[MF]{$MF_f(x)$}{Milnor fibre of a holomorphic function $f$ at point $x$} \e {\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi_{U_0}\bigl(\phi_f(\underline{{\mathbin{\mathbb Q}}}[n-1])\bigr)(x)=\betagin{cases} \nu_X(x), & x\in X, \\ 0, & x\in U_0\setminus X, \end{cases} \lambdabel{dt3eq4} \e where $\nu_X$ is the Behrend function of the complex analytic space~$X$.
\lambdabel{dt3thm1} \end{thm} Thus, if $X$ is the Donaldson--Thomas moduli space of stable sheaves, one can, heuristically, think of $\nu_X$ as the {\it Euler characteristic of the perverse sheaf of vanishing cycles of the holomorphic Chern-Simons functional.} {\mathbin{\mathfrak m}}edskip In \circte[Question 4.18, 5.7]{JoSo}, Joyce and Song ask the following question. \betagin{quest}{\bf(a)} Let\/ $X$ be a Calabi--Yau\/ $3$-fold over\/ ${\mathbin{\mathbb C}},$ and write\/ ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm si}$ \nomenclature[M al si]{${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm si}$}{coarse moduli space of simple coherent sheaves on $X$} {\rm \kern.05em ind}ex{coherent sheaf!simple} for the coarse moduli space of simple coherent sheaves on\/ $X$. Does there exist a natural perverse {\rm \kern.05em ind}ex{perverse sheaf} sheaf\/ $\cal P$ on ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm si},$ with\/ ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm si}}({\cal P})=\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm si}},$ which is locally isomorphic to $\phi_f(\underline{{\mathbin{\mathbb Q}}}[\mathop{\rm dim}\nolimits U-1])$ for $f,U$ as in \circte[Thm. 5.4]{JoSo}? \setminusallskip \noindent{\bf(b)} Is there also some Artin stack version of\/ $\cal P$ in\/ {\bf(a)} for the moduli stack\/ ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}},$ locally isomorphic to $\phi_f(\underline{{\mathbin{\mathbb Q}}}[\mathop{\rm dim}\nolimits U-1])$ for $f,U$ as in Theorem {\rm\ref{dt5thm1}} below? 
\noindent{\bf(c)} Let $M$ be a complex manifold, $\omega$ an almost closed holomorphic $(1,0)$-form on $M$ as defined below, and $X = \omega^{-1}(0)$ as a complex analytic subspace of $M.$ Can one define a natural perverse sheaf $\cal P$ supported on $X$, with ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi_X({\cal P})=\nu_X$, such that ${\cal P}\cong\phi_f(\underline{{\mathbin{\mathbb Q}}}[\mathop{\rm dim}\nolimits U-1])$ when $\omega={\rm d} f$ for $f:M\rightarrow {\mathbin{\mathbb C}}$ holomorphic? Are there generalizations to the algebraic setting? \lambdabel{dt5quest1} \end{quest} One can also ask Question \ref{dt5quest1} for Saito's mixed Hodge modules~\circte{Sait}.{\rm \kern.05em ind}ex{mixed Hodge module} If the answer to Question \ref{dt5quest1}(a) is yes, it would provide a way of {\it categorifying\/} (conventional) Donaldson--Thomas invariants $DT^\alpha(\tildemesau)$. That is, ${{\mathbin{\mathfrak m}}athbb H}^*\bigl({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau);{\cal P}\vert_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)}\bigr)$ would be a natural cohomology {\rm \kern.05em ind}ex{perverse sheaf!hypercohomology} \nomenclature[HPM]{${{\mathbin{\mathfrak m}}athbb H}^*\bigl(\cal M; \cal P\bigr)$}{hypercohomology of a perverse sheaf $\cal P$ on a scheme $\cal M$} group of the stable moduli scheme ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)$ whose Euler characteristic is the Donaldson--Thomas invariant. This question is also crucial for the programme of Kontsevich--Soibelman \circte{KoSo1} to extend Donaldson--Thomas invariants of Calabi--Yau 3-folds to other motivic invariants,{\rm \kern.05em ind}ex{motivic invariant} as discussed in \circte[Remark 5.8]{JoSo}.{\rm \kern.05em ind}ex{categorification|)} We will explain in \S\ref{ourpapers} how this question has been resolved.
\subsubsection{The Behrend function and its characterization} \lambdabel{dt3.2} Here we point out some important remarks on, and properties of, the Behrend function. \paragraph{Behrend function as a multiplicity function in the weighted Euler characteristic.} \lambdabel{dt3.2.1} {\rm \kern.05em ind}ex{Behrend function!multiplicity function|(} {\rm \kern.05em ind}ex{Euler characteristic} It is worth reporting here \circte[\S 1.2]{JoSo}, which provides a good way to think of Behrend functions as {\it multiplicity functions}. If $X$ is a finite type ${\mathbin{\mathbb C}}$-scheme then the Euler characteristic ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X)$ `counts' points without multiplicity, so that each point of $X({\mathbin{\mathbb C}})$ contributes 1 to ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X)$. \nomenclature[xred]{$X^{\rm red}$}{underlying reduced ${\mathbin{\mathbb C}}$-scheme of the ${\mathbin{\mathbb C}}$-scheme $X$} If $X^{\rm red}$ is the underlying reduced ${\mathbin{\mathbb C}}$-scheme then $X^{\rm red}({\mathbin{\mathbb C}})=X({\mathbin{\mathbb C}})$, so ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X^{\rm red})={\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X)$, and ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X)$ does not see non-reduced behaviour in $X$. However, the {\it weighted Euler characteristic} {\rm \kern.05em ind}ex{Euler characteristic!weighted} ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X,\nu_X)$ `counts' each $x\in X({\mathbin{\mathbb C}})$ weighted by its multiplicity $\nu_X(x)$. The Behrend function $\nu_X$ detects non-reduced behaviour, so in general ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X,\nu_X)\ne{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X^{\rm red},\nu_{X^{\rm red}})$. For example, let $X$ be the $k$-fold point ${\mathbin{\mathfrak m}}athop{\rm Sp}\nolimitsec\bigl({\mathbin{\mathbb C}}[z]/(z^k)\bigr)$ for $k\geqslant 1$.
Then $X({\mathbin{\mathbb C}})$ is a single point $x$ with $\nu_X(x)=k$, so ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X)=1={\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X^{\rm red},\nu_{X^{\rm red}})$, but~${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X,\nu_X)=k$. \setminusallskip An important moral of \circte{Behr} is that (at least in moduli problems with symmetric obstruction theories, such as Donaldson--Thomas theory) it is better to `count' points in a moduli scheme ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}$ by the weighted Euler characteristic ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}},\nu_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}})$ than by the unweighted Euler characteristic ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}})$. One reason is that ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}},\nu_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}})$ often gives answers unchanged under deformations of the underlying geometry, but ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}})$ does not. For example, consider the family of ${\mathbin{\mathbb C}}$-schemes $X_t={\mathbin{\mathfrak m}}athop{\rm Sp}\nolimitsec\bigl({\mathbin{\mathbb C}}[z]/(z^2-t^2)\bigr)$ for $t\in{\mathbin{\mathbb C}}$. Then $X_t$ is two reduced points $\pm t$ for $t\ne 0$, and a double point when $t=0$. So as above we find that ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X_t,\nu_{X_t})=2$ for all $t$, which is deformation-invariant, but ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X_t)$ is 2 for $t\ne 0$ and 1 for $t=0$, which is not deformation-invariant. 
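One can check the value $\nu_X(x)=k$ for the $k$-fold point against Theorem \ref{dt3thm1} (an elementary verification, included for illustration). Take $U={\mathbin{\mathbb C}}$ and $f(z)=z^{k+1}$, so that $\mathop{\rm Crit}(f)={\mathbin{\mathfrak m}}athop{\rm Sp}\nolimitsec\bigl({\mathbin{\mathbb C}}[z]/(z^k)\bigr)=X$. The Milnor fibre of $f$ at $x=0$ consists of the $k+1$ solutions of $z^{k+1}=\delta$ for small $\delta\ne 0$, so ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(MF_f(0))=k+1$, and \eq{dt3eq3} gives $$\nu_X(0)=(-1)^{\mathop{\rm dim}\nolimits U}\bigl(1-{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(MF_f(0))\bigr)=(-1)\bigl(1-(k+1)\bigr)=k,$$ in agreement with the multiplicity above.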
{\rm \kern.05em ind}ex{deformation-invariance} {\rm \kern.05em ind}ex{Behrend function!multiplicity function|)} \paragraph{Properties of the Behrend function.} \lambdabel{dt3.2.2} Here are some important properties of Behrend functions. They are proved by Behrend \circte[\S 1.2 \& Prop. 1.5]{Behr} when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$, but his proof is valid for general~${\mathbin{\mathbb K}}$. \betagin{thm} Let $X,Y$ be Artin ${\mathbin{\mathbb K}}$-stacks locally of finite type. Then: \betagin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\rm(i)}] If\/ $X$ is smooth of dimension\/ $n$ then~$\nu_X\equiv(-1)^n$. \item[{\rm(ii)}] If\/ $\varphi:X\!\rightarrow\! Y$ is smooth with relative dimension $n$ then\/~$\nu_X\!\equiv\!(-1)^n\varphi^*(\nu_Y)$. \item[{\rm(iii)}] $\nu_{X\tildemesimes Y}\equiv\nu_X\boxdot\nu_Y,$ where $(\nu_X\boxdot\nu_Y)(x,y)=\nu_X(x)\nu_Y(y)$. \end{itemize} \lambdabel{dt3thm3} \end{thm} Let us recall \circte[Thm 4.11]{JoSo}. It is stated using the Milnor fibre, but its proof works algebraically over ${\mathbin{\mathbb K}}$. \betagin{thm} Let\/ $U$ be a smooth ${\mathbin{\mathbb K}}$-variety, $f:U\rightarrow {\mathbin{\mathbb A}}^1_{{\mathbin{\mathbb K}}}$ a regular function over $U,$ and $V$ a smooth ${\mathbin{\mathbb K}}$-subvariety of $U,$ and\/ $v\in V\cap\mathop{\rm Crit}(f)$. Define $\tildemesi U$ to be the blowup of\/ $U$ along\/ $V,$ with blowup map $\pi:\tildemesi U\rightarrow U,$ and set\/ $\tildemesi f=f\circ\pi:\tildemesi U\rightarrow {\mathbin{\mathbb A}}^1_{\mathbin{\mathbb K}}$. 
Then $\pi^{-1}(v)={{\mathbin{\mathfrak m}}athbb P}(T_vU/T_vV)$ is contained in $\mathop{\rm Crit}(\tildemesi f),$ and $$ \nu_{\mathop{\rm Crit}(f)}(v)\quad =\,{\rm d}isplaystyle \int\limits_{w\in {{\mathbin{\mathfrak m}}athbb P}(T_vU/T_vV)}\nu_{\mathop{\rm Crit}(\tildemesi f)}(w)\,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi\quad+\quad (-1)^{\mathop{\rm dim}\nolimits U-\mathop{\rm dim}\nolimits V}\bigl(1-\mathop{\rm dim}\nolimits U+\mathop{\rm dim}\nolimits V\bigr)\nu_{\mathop{\rm Crit}(f\vert_V)}(v), $$ where $w{\mathbin{\mathfrak m}}apsto\nu_{\mathop{\rm Crit}(\tildemesi f)}(w)$ is a constructible function{\rm \kern.05em ind}ex{constructible function} on ${{\mathbin{\mathfrak m}}athbb P}(T_vU/T_vV),$ and the integral is the Euler characteristic of\/ ${{\mathbin{\mathfrak m}}athbb P}(T_vU/T_vV)$ weighted by this. \lambdabel{blowup} \end{thm} One can see the next result as a kind of {\it virtual Gauss--Bonnet formula}. It is crucial for Donaldson--Thomas theory. It is proved by Behrend \circte[Th. 4.18]{Behr} when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$, but his proof is valid for general~${\mathbin{\mathbb K}}$. It depends crucially on \circte[Prop.\,1.12]{Behr}, which in turn depends on an application of MacPherson's theorem \circte{MacP} over ${\mathbin{\mathbb C}}$, but valid over general ${\mathbin{\mathbb K}}$ thanks to Kennedy \circte{Kenn} and the definition of the Euler characteristic over an algebraically closed field ${\mathbin{\mathbb K}}$ of characteristic zero given by Joyce \circte{Joyc.1}. See also an independent construction of the Schwartz--MacPherson Chern class given by Aluffi \circte{Alu1}. \betagin{thm} Let $X$ be a proper ${\mathbin{\mathbb K}}$-scheme with a symmetric obstruction theory, and\/ $[X]^{\rm vir}\in A_0(X)$ the corresponding virtual class.
Then \betagin{equation*} \tildemesextstyle{\rm d}isplaystyle \int_{[X]^{\rm vir}}1={\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X,\nu_X)\in{\mathbin{\mathbb Z}}, \end{equation*} where ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(X,\nu_X)=\int_{X({\mathbin{\mathbb K}})}\nu_X\,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi$ is the Euler characteristic of\/ $X$ weighted by the Behrend function $\nu_X$ of\/ $X$. In particular, $ \int_{[X]^{\rm vir}}1$ depends only on the\/ ${\mathbin{\mathbb K}}$-scheme structure of\/ $X,$ not on the choice of symmetric obstruction theory. \lambdabel{dt3thm4} \end{thm} Theorem \ref{dt3thm4} implies that $DT^\alpha(\tildemesau)$ in \eq{dt2eq1} is given by {\rm \kern.05em ind}ex{Donaldson--Thomas invariants!original $DT^\alpha(\tildemesau)$} \e DT^\alpha(\tildemesau)={\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi\bigl({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau),\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)}\bigr). \lambdabel{dt3eq5} \e There is a big difference between the two equations \eq{dt2eq1} and \eq{dt3eq5} defining Donaldson--Thomas invariants. Equation \eq{dt2eq1} is non-local, and non-motivic, and makes sense only if ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)$ is a proper ${\mathbin{\mathbb K}}$-scheme. But \eq{dt3eq5} is local, and (in a sense) motivic, and makes sense for arbitrary finite type ${\mathbin{\mathbb K}}$-schemes ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)$. 
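As a quick illustration of \eq{dt3eq5} (our remark, obtained by combining it with Theorem \ref{dt3thm3}(i) below): if ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)$ happens to be smooth of dimension $n$, then $\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)}\equiv(-1)^n$, so $$DT^\alpha(\tildemesau)=(-1)^n\,{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi\bigl({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)\bigr),$$ the signed unweighted Euler characteristic, which is the expected answer in the unobstructed case.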
In fact, one could take \eq{dt3eq5} to be the definition of Donaldson--Thomas invariants even when ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm ss}^\alpha(\tildemesau)\ne{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)$, but in \circte[\S 6.5]{JoSo} Joyce and Song argued that this is not a good idea, as then $DT^\alpha(\tildemesau)$ would not be unchanged under deformations of~$X$. In \circte[\S 6.5]{JoSo} Joyce and Song say: \betagin{quotation} `Equation \eq{dt3eq5} was the inspiration for this book. It shows that Donaldson--Thomas invariants $DT^\alpha(\tildemesau)$ can be written as {\it motivic\/} invariants,{\rm \kern.05em ind}ex{motivic invariant} like those studied in \circte{Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7}, and so it raises the possibility that we can extend the results of \circte{Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7} to Donaldson--Thomas invariants by including Behrend functions as weights.' \end{quotation} \paragraph{Almost closed 1-forms.} \lambdabel{dt3.2.3} {\rm \kern.05em ind}ex{almost closed $1$-form|(} In \circte{PaTh} Pandharipande and Thomas give a counterexample to the idea that every scheme admitting a symmetric obstruction theory can locally be written as the critical locus of a regular function on a smooth scheme. This limits the usefulness of the above formula for $\nu_X(x)$ in terms of the Milnor fibre. Here is the more general approach due to Behrend \circte{Behr}, which the author tried to use to give a strictly algebraic proof of the Behrend function identities, but this proof later turned out not to be completely correct. \betagin{dfn} Let ${\mathbin{\mathbb K}}$ be an algebraically closed field, and $M$ a smooth ${\mathbin{\mathbb K}}$-scheme. Let $\omega$ be an algebraic 1-form on $M$, that is, $\omega\in H^0(T^*M)$.
Call $\omega$ {\it almost closed\/} if ${\rm d}\omega$ is a section of $I_\omega\cdot\Lambda^2T^*M$, where $I_\omega$ is the ideal sheaf of the zero locus $\omega^{-1}(0)$ of $\omega$. Equivalently, ${\rm d}\omega\vert_{\omega^{-1}(0)}$ is zero as a section of $\Lambda^2T^*M\vert_{\omega^{-1}(0)}$. In (\'etale) local coordinates $(z_1,\ldots,z_n)$ on $M$, if $$\omega=f_1{\rm d} z_1+\cdots+f_n{\rm d} z_n,$$ then $\omega$ is almost closed provided \betagin{equation*} {\rm fr}ac{\partial f_j}{\partial z_k}\equiv{\rm fr}ac{\partial f_k}{\partial z_j} \;\>{\mathbin{\mathfrak m}}od (f_1,\ldots,f_n). \end{equation*} \lambdabel{dt3def2} \end{dfn} Let $M$ be a smooth Deligne--Mumford stack and $\omegaega$ an almost {\rm \kern.05em ind}ex{Deligne--Mumford stack} closed 1-form on $M$ with zero locus $X=Z(\omegaega)$. It is a general principle, that a section of a vector bundle defines a perfect obstruction theory for the zero locus of the section. This obstruction theory is given by \e \betagin{gathered} \xymatrix@C=30pt@R=30pt{ [T_{M_{|_{X}}} \ar[r]^{d\circrc\omegaega^\vee}\ar[d]_{\omegaega^\vee} & {\mathbin{\cal O}}mega_{M_{|_{X}}}]\ar[d]^1\\ [I/I^2\ar[r]^{d} & {\mathbin{\cal O}}mega_{M_{|_{X}}}]} \lambdabel{dt3example} \end{gathered} \e This obstruction theory is symmetric, in a canonical way, because {\rm \kern.05em ind}ex{obstruction theory!symmetric} under the assumption that $\omegaega$ is almost closed one has that $d\circrc\omegaega^\vee$ is self-dual, as a homomorphism of vector bundles over $X$. \setminusallskip Behrend \circte[Prop. 3.14]{Behr} proves a kind of converse of that, by a proof valid for general~${\mathbin{\mathbb K}}$, which says that, at least locally, every symmetric obstruction theory is given in this way by an almost closed $1$-form. \betagin{prop} Let\/ ${\mathbin{\mathbb K}}$ be an algebraically closed field, and\/ $X$ a ${\mathbin{\mathbb K}}$-scheme with a symmetric obstruction theory. 
Then $X$ may be covered by Zariski open sets $Y\subseteq X$ such that there exists a smooth\/ ${\mathbin{\mathbb K}}$-scheme $M,$ an almost closed\/ $1$-form $\omega$ on $M,$ and an isomorphism of\/ ${\mathbin{\mathbb K}}$-schemes\/~$Y\cong\omega^{-1}(0)$. \lambdabel{dt3prop5} \end{prop} Restricting to ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$, Behrend \circte[Prop. 4.22]{Behr} gives an expression for the Behrend function of the zero locus of an almost closed 1-form as a {\it linking number}{\rm \kern.05em ind}ex{linking number}. It is possible to use it to give an algebraic proof of the first Behrend identity over ${\mathbin{\mathbb C}}.$ \betagin{prop} Let\/ $M$ be a smooth scheme and\/ $\omega$ an almost closed $1$-form on $M,$ and let\/ $Y=\omega^{-1}(0)$ be the scheme-theoretic zero locus of\/ $\omega$. Fix\/ $p$ a closed point in $Y$, choose \'etale coordinates $(x_1,\ldots,x_n)$ on $M$ around $p$ with\/ $(x_1,\ldots,x_n,p_1,\ldots, p_n)$ the associated canonical coordinates for $T^*M.$ Write $\omega={\rm d}isplaystyle\sum_{i=1}^{n}f_i {\rm d} x_i$ in these coordinates. One can identify $T^*M$ near $p$ with\/~${\mathbin{\mathbb C}}^{2n}$. 
Then for all\/ $\eta\in{\mathbin{\mathbb C}}$ and\/ $\epsilon\in{\mathbin{\mathbb R}}$ with\/ $0<{\mathbin{\mathfrak m}}d{\eta}\ll\epsilon\ll 1$ one has \e \nu_Y(p)=L_{{\cal S}_\epsilon}\bigl(\Gamma_{\eta^{-1}\omega}\cap{\cal S}_\epsilon, \Delta\cap {\cal S}_\epsilon\bigr), \lambdabel{dt6eq8} \e where \nomenclature[sphere]{${\cal S}_\epsilon$}{sphere of radius $\epsilon$ in ${\mathbin{\mathbb C}}^{2n}$} \nomenclature[1b]{$\Gamma_{\eta^{-1}\omega}$}{graph of $\eta^{-1}\omega$ for $\omega$ almost closed $1$-form and $\eta\in{\mathbin{\mathbb C}}$} \nomenclature[1c]{$\Delta$}{graph of the section given by the square of the distance function} \nomenclature[LS]{$L_{{\cal S}_\epsilon}(\,,\,)$}{linking number of two disjoint, closed, oriented $(n-1)$-submanifolds in ${\cal S}_\epsilon$} \betagin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item ${\cal S}_\epsilon\!=\!\bigl\{(x_1,\ldots,p_n)\!\in\!{\mathbin{\mathbb C}}^{2n}: {\mathbin{\mathfrak m}}s{x_1}\!+\!\cdots\!+\!{\mathbin{\mathfrak m}}s{p_n}\!=\!\epsilon^2\bigr\}$ is the sphere of radius $\epsilon$ in ${\mathbin{\mathbb C}}^{2n},$ \item $\Gamma_{\eta^{-1}\omega}$ is the graph of\/ $\eta^{-1}\omega$ regarded locally as a complex submanifold of\/ ${\mathbin{\mathbb C}}^{2n}$ of real dimension $2n$ oriented so that $M\longrightarrow {\mathbin{\cal O}}mega_M$ is orientation preserving and defined by the equations $\{\eta p_i=f_i(x)\},$ \item $\Delta=\bigl\{(x_1,\ldots,p_n)\!\in\!{\mathbin{\mathbb C}}^{2n}:p_j\!=\!\bar x_j,$ $j\!=\!1,\ldots,n\bigr\},$ i.e. 
the image of the smooth map $M\longrightarrow{\mathbin{\cal O}}mega_M$ given by the section ${\rm d}\varrho$ of ${\mathbin{\cal O}}mega_M,$ with $$\varrho={\rm d}isplaystyle \sum_{i} x_{i} \bar x_{i}+{\rm d}isplaystyle \sum_{i} p_{i} \bar p_{i}$$ the square of the distance function defined on ${\mathbin{\cal O}}mega_M$ by the choice of coordinates of real dimension $2n,$ \item $L_{{\cal S}_\epsilon}(\,,\,)$ is the linking number of two disjoint, closed, oriented\/ $(n\!-\!1)$-submanifolds in~${\cal S}_\epsilon$. \end{itemize} \lambdabel{dt6prop1} \end{prop} We remark here that $\Deltalta$ is not a complex submanifold, but only a real submanifold. Thus, there are no good generalizations of $\Deltalta$ to other fields ${\mathbin{\mathbb K}}.$ {\rm \kern.05em ind}ex{almost closed $1$-form|)} {\rm \kern.05em ind}ex{microlocal geometry|)}{\rm \kern.05em ind}ex{Behrend function|)} \subsection{Generalizations of Donaldson--Thomas theory} \lambdabel{dt4} Next it will be briefly reviewed how the theory of generalized Donaldson--Thomas invariants has been developed, starting from the series of papers \circte{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7} about constructible functions, stack functions, Ringel--Hall algebras, counting invariants for Calabi--Yau 3-folds, and wall-crossing and then summarizing the main results in \circte{JoSo} including the definition of generalized Donaldson--Thomas invariants $\bar{DT}{}^\alpha(\tildemesau) \in{\mathbin{\mathbb Q}}$, their deformation-invariance, and wall-crossing formulae under change of stability condition~$\tildemesau$. In the sequel, there are two paragraphs on statements and a sketch of proofs of the theorems \circte[Thm 5.5]{JoSo} and \circte[Thm 5.11]{JoSo} on which this paper is concentrated. 
We conclude with a brief and rough remark on Kontsevich and Soibelman's parallel approach to Donaldson--Thomas theory \circte{KoSo1}, focusing on analogies and differences with Joyce and Song's construction \circte{JoSo} rather than going into a detailed exposition. This choice is due to the fact that the present paper does not need it. {\rm \kern.05em ind}ex{constructible function} {\rm \kern.05em ind}ex{stack function} {\rm \kern.05em ind}ex{Ringel--Hall algebra} {\rm \kern.05em ind}ex{counting invariants for Calabi--Yau 3-folds} {\rm \kern.05em ind}ex{wall crossing} {\rm \kern.05em ind}ex{deformation-invariance} {\rm \kern.05em ind}ex{configurations} \subsubsection[Brief sketch of background from $\tildemesext{\circte{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7}}$]{Brief sketch of background from \circte{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7}} \lambdabel{dt4.1} Here we recall a few important ideas from \circte{Joyc.1,Joyc.2,Joyc.3,Joyc.4, Joyc.5,Joyc.6,Joyc.7}. They deal with {\it Artin stacks} rather than coarse moduli {\rm \kern.05em ind}ex{Artin stack} schemes,{\rm \kern.05em ind}ex{coarse moduli scheme}{\rm \kern.05em ind}ex{moduli scheme!coarse} as in \circte{Thom}. Let $X$ be a Calabi--Yau 3-fold over ${\mathbin{\mathbb C}}$, and write ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ for the moduli stack of all coherent sheaves $E$ on $X$. It is an Artin ${\mathbin{\mathbb C}}$-stack.
\setminusallskip {\rm \kern.05em ind}ex{stack function} \nomenclature[SFfF]{${\mathbin{\mathfrak m}}athop{\rm SF}\nolimits({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak F}})$}{vector space of `stack functions' on an Artin stack ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak F}}$, defined using representable 1-morphisms} The ring of {\it stack functions} ${\mathbin{\mathfrak m}}athop{\rm SF}\nolimits({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$ in \circte{Joyc.2} is basically the Grothendieck group $K_0({\mathbin{\mathfrak m}}athop{\rm Sta}_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$ of the {\rm \kern.05em ind}ex{2-category} {\rm \kern.05em ind}ex{Grothendieck group} 2-category ${\mathbin{\mathfrak m}}athop{\rm Sta}_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ of stacks over ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$. That is, \nomenclature[Sta]{${\mathbin{\mathfrak m}}athop{\rm Sta}_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$}{2-category of stacks over ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$} ${\mathbin{\mathfrak m}}athop{\rm SF}\nolimits({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$ is generated by isomorphism classes $[({\mathbin{\mathfrak R}},\rho)]$ of representable 1-morphisms $\rho:{\mathbin{\mathfrak R}}\rightarrow{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ for ${\mathbin{\mathfrak R}}$ a finite type Artin ${\mathbin{\mathbb C}}$-stack, with the relation {\rm \kern.05em ind}ex{1-morphism!representable} {\rm \kern.05em ind}ex{Artin stack!of finite type} \betagin{equation*} [({\mathbin{\mathfrak R}},\rho)]=[({\mathbin{\mathfrak S}},\rho\vert_{\mathbin{\mathfrak S}})]+[({\mathbin{\mathfrak R}}\setminus{\mathbin{\mathfrak S}},\rho\vert_{{\mathbin{\mathfrak R}}\setminus{\mathbin{\mathfrak S}}})] \end{equation*} when ${\mathbin{\mathfrak S}}$ is 
a closed ${\mathbin{\mathbb C}}$-substack of ${\mathbin{\mathfrak R}}$. In \circte{Joyc.2} Joyce studies different kinds of stack function spaces with other choices of generators and relations, and operations on these spaces. These include projections \nomenclature[1pivi]{$\Pi^{\rm vi}_n$}{projection to stack functions of `virtual rank $n$'} $\Pi^{\rm vi}_n:{\mathbin{\mathfrak m}}athop{\rm SF}\nolimits({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})\rightarrow{\mathbin{\mathfrak m}}athop{\rm SF}\nolimits({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$ to stack functions of {\it virtual rank} $n$, which act on $[({\mathbin{\mathfrak R}},\rho)]$ by modifying ${\mathbin{\mathfrak R}}$ depending on its stabilizer groups. {\rm \kern.05em ind}ex{stabilizer group} {\rm \kern.05em ind}ex{virtual rank}{\rm \kern.05em ind}ex{stack function!virtual rank} \nomenclature[SFai]{${\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsai({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$}{Lie subalgebra of ${\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsa({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$ of stack functions `supported on virtual indecomposables'} {\rm \kern.05em ind}ex{virtual indecomposable}{\rm \kern.05em ind}ex{stack function!supported on virtual indecomposables} In \circte[\S 5.2]{Joyc.4} he defines a {\it Ringel--Hall} type algebra{\rm \kern.05em ind}ex{Ringel--Hall algebra} ${\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsa({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$ of stack \nomenclature[SFal]{${\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsa({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$}{Ringel--Hall algebra of stack functions with algebra stabilizers} functions{\rm \kern.05em ind}ex{stack function!with algebra stabilizers} {\it with algebra stabilizers} on ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$, with an associative,
non-commutative multiplication $*$ and in \circte[\S 5.2]{Joyc.4} he defines a Lie subalgebra ${\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsai({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$ of stack functions {\it supported on virtual indecomposables}. In \circte[\S 6.5]{Joyc.4} he defines an explicit Lie algebra $L(X)$ to be the \nomenclature[LX]{$L(X)$}{Lie algebra depending on a Calabi--Yau $3$-fold $X$} \nomenclature[1laal]{$\lambda^\alpha$}{basis element of Lie algebra $L(X)$} ${\mathbin{\mathbb Q}}$-vector space with basis of symbols $\lambda^\alpha$ for $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$, with Lie bracket \e [\lambda^\alpha,\lambda^\beta]=\bar{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(\alpha,\beta)\lambda^{\alpha+\beta}, \lambdabel{dt4eq1} \e for $\alpha,\beta\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$, where $\bar{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(\,,\,)$ is the \nomenclature[1wchbar]{$\bar{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(\,,\,)$}{Euler form on $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$} {\it Euler form} {\rm \kern.05em ind}ex{Euler form} on $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ defined as follows: \e \bar{{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi}([E],[F])={\rm d}isplaystyle\sum_{i\geqslantq 0} (-1)^i \mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^i(E,F) \lambdabel{eu} \e for all $E,F\in \mathop{\rm coh}\nolimits(X).$ As $X$ is a Calabi--Yau 3-fold, $\bar{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi$ is antisymmetric, so \eq{dt4eq1} satisfies the Jacobi identity and makes $L(X)$ into an infinite-dimensional Lie algebra over~${\mathbin{\mathbb Q}}$. 
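The two claims at the end of this paragraph can be checked directly. Here is a sketch, written in standard notation rather than the macros of the text:

```latex
% Antisymmetry of the Euler form via Serre duality on a Calabi--Yau 3-fold:
% K_X \cong O_X gives Ext^i(E,F) \cong Ext^{3-i}(F,E)^*, so substituting j = 3-i,
\bar\chi([E],[F]) = \sum_{i=0}^{3}(-1)^i \dim \operatorname{Ext}^i(E,F)
                  = \sum_{j=0}^{3}(-1)^{3-j}\dim \operatorname{Ext}^{j}(F,E)
                  = -\bar\chi([F],[E]).
% Jacobi identity for the bracket \eqref{dt4eq1}: writing
% a = \bar\chi(\alpha,\beta), b = \bar\chi(\beta,\gamma), c = \bar\chi(\gamma,\alpha),
% bilinearity and antisymmetry give
[[\lambda^\alpha,\lambda^\beta],\lambda^\gamma]
   = a\,\bar\chi(\alpha+\beta,\gamma)\,\lambda^{\alpha+\beta+\gamma}
   = a(b-c)\,\lambda^{\alpha+\beta+\gamma},
% and the cyclic sum a(b-c) + b(c-a) + c(a-b) = 0.
```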
Then in \circte[\S 6.6]{Joyc.4} Joyce defines a \nomenclature[1vPsSfL]{$\Psi$}{Lie algebra morphism $\Psi:{\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsai({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})\rightarrow L(X)$} {\it Lie algebra morphism\/} $\Psi:{\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsai({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})\rightarrow L(X)$, which, roughly speaking, is of the form \e \Psi(f)\;\;\;\;=\tildemesextstyle{\rm d}isplaystyle \sum_{\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi^{\rm stk} \bigl(f\vert_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}^\alpha}\bigr)\lambda^{\alpha}, \lambdabel{dt4eq2} \e where $f=\sum_{i=1}^mc_i[({\mathbin{\mathfrak R}}_i,\rho_i)]$ is a stack function on ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$, and ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}^\alpha$ is the substack in ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ of sheaves $E$ with \nomenclature[Mal]{${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}^\alpha$}{the substack in ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ of sheaves $E$ with class $\alpha$} class $\alpha$, and ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi^{\rm stk}$ is a kind of stack-theoretic Euler \nomenclature[1wchistk]{${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi^{\rm stk}$}{stack-theoretic Euler characteristic} {\rm \kern.05em ind}ex{Euler characteristic!stack-theoretic} characteristic.
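To indicate why the stack-theoretic Euler characteristic is delicate, recall (this is a heuristic sketch in standard notation, assuming Joyce's convention for quotients by special algebraic groups) that one wants:

```latex
% For a quotient stack [X/G] with G a special algebraic group, one wants
\chi^{\rm stk}\bigl([X/G]\bigr) = \chi(X)/\chi(G).
% But \chi(\mathbb{G}_m) = \chi(\mathbb{C}^*) = 0, so already
\chi^{\rm stk}\bigl([\,{\rm pt}/\mathbb{G}_m]\bigr) = 1/0
% is undefined. Every sheaf E has \mathbb{G}_m \subset \operatorname{Aut}(E)
% (the scalars \lambda\,{\rm id}_E), so this problem occurs at every point of
% the moduli stack; the projections \Pi^{\rm vi}_n and the notion of
% `virtual indecomposable' are designed to extract a finite answer.
```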
But in fact the definition of $\Psi$, and the proof that $\Psi$ is a Lie algebra morphism, are highly nontrivial, and use many ideas from \circte{Joyc.1,Joyc.2,Joyc.4}, including those of `virtual rank' and `virtual indecomposable'.{\rm \kern.05em ind}ex{virtual indecomposable} The problem is that the obvious definition of ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi^{\rm stk}$ usually involves dividing by zero, so defining \eq{dt4eq2} in a way that makes sense is quite subtle. The proof that $\Psi$ is a Lie algebra morphism uses {\it Serre duality} and the {\rm \kern.05em ind}ex{Serre duality} assumption that $X$ is a Calabi--Yau 3-fold. \setminusallskip \nomenclature[M bal ssalt]{${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}_{\rm ss}^\alpha(\tildemesau)$}{open, finite type substack in ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ of $\tildemesau$-semistable sheaves $E$ in class $\alpha$, for all $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$} \nomenclature[M bal stalt]{${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}_{\rm st}^\alpha(\tildemesau)$}{open, finite type substack in ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ of $\tildemesau$-stable sheaves $E$ in class $\alpha$, for all $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$} Now let $\tildemesau$ be a stability condition on $\mathop{\rm coh}\nolimits(X)$, such as Gieseker stability. Then one has open, finite type substacks {\rm \kern.05em ind}ex{Gieseker stability} ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}_{\rm ss}^\alpha(\tildemesau),{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}_{\rm st}^\alpha(\tildemesau)$ in ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ of $\tildemesau$-(semi)stable sheaves $E$ in class $\alpha$, for all $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$. 
Write $\bar{\rm d}elta_{\rm ss}^\alpha(\tildemesau)$ for the {\rm \kern.05em ind}ex{stack function!characteristic function} \nomenclature[1cc]{$\bar{\rm d}elta_{\rm ss}^\alpha(\tildemesau)$}{element of the Ringel--Hall Lie algebra ${\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsa({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$ that ‘counts’ $\tildemesau$- semistable objects in class $\alpha$} characteristic function of ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}_{\rm ss}^\alpha(\tildemesau)$, in the sense of stack functions \circte{Joyc.2}. Then $\bar{\rm d}elta_{\rm ss}^\alpha(\tildemesau)\in{\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsa({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$. In \circte[\S 8]{Joyc.5}, Joyce defines elements $\bar\epsilon^\alpha(\tildemesau)$ in ${\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsa({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$ by \nomenclature[1ccc]{$\bar\epsilon^\alpha(\tildemesau)$}{element of the Ringel--Hall Lie algebra ${\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsai({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$ that ‘counts’ $\tildemesau$- semistable objects in class $\alpha$} \e \bar\epsilon^\alpha(\tildemesau)\quad= \!\!\!\!\!\!\! \sum_{\betagin{subarray}{l}\setminusall{n\geqslant 1,\;\alpha_1,\ldots,\alpha_n\in K^{\rm num}(\mathop{\rm coh}\nolimits(X)):}\\ \setminusall{\alpha_1+\cdots+\alpha_n=\alpha,\; \tildemesau(\alpha_i)=\tildemesau(\alpha),\tildemesext{ all $i$}}\end{subarray}} \!\!\! {\rm fr}ac{(-1)^{n-1}}{n}\,\,\bar{\rm d}elta_{\rm ss}^{\alpha_1}(\tildemesau)*\bar {\rm d}elta_{\rm ss}^{\alpha_2}(\tildemesau)* \cdots*\bar{\rm d}elta_{\rm ss}^{\alpha_n}(\tildemesau), \lambdabel{dt4eq3} \e where $*$ is the Ringel--Hall multiplication in ${\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsa({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$. Then \circte[Thm. 
8.7]{Joyc.5} shows that $\bar\epsilon^\alpha(\tildemesau)$ lies in the Lie subalgebra ${\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsai({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$, a nontrivial result. Thus one can apply the Lie algebra morphism $\Psi$ to $\bar\epsilon^\alpha(\tildemesau)$. In \circte[\S 6.6]{Joyc.6} he defines invariants $J^\alpha(\tildemesau)\in{\mathbin{\mathbb Q}}$ for all $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ by \nomenclature[Jalt]{$J^\alpha(\tildemesau)$}{invariant counting $\tildemesau$-semistable sheaves in class $\alpha$ on a Calabi–Yau $3$-fold, introduced in \circte{Joyc.7}} \e \Psi\bigl(\bar\epsilon^\alpha(\tildemesau)\bigr)=J^\alpha(\tildemesau)\lambda^\alpha. \lambdabel{dt4eq4} \e \setminusallskip These $J^\alpha(\tildemesau)$ are rational numbers `counting' {\rm \kern.05em ind}ex{invariant $J^\alpha(\tildemesau)$} $\tildemesau$-semistable sheaves $E$ in class $\alpha$. When ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm ss}^\alpha(\tildemesau)={{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)$ then $J^\alpha(\tildemesau)={\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau))$, that is, $J^\alpha(\tildemesau)$ is the na\"\i ve Euler characteristic of the moduli space {\rm \kern.05em ind}ex{Euler characteristic!na\"\i ve} ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)$. This is {\it not\/} weighted by the Behrend function $\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)}$, and so in general does not coincide with the Donaldson--Thomas invariant $DT^\alpha(\tildemesau)$ in~\eq{dt4eq1}. 
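Formula \eq{dt4eq3} is formally a logarithm with respect to the Ringel--Hall product $*$; under the same constraint $\tildemesau(\alpha_i)=\tildemesau(\alpha)$ it is inverted by the corresponding exponential, which (up to conventions; compare \circte{Joyc.5}) reads:

```latex
\bar\delta_{\rm ss}^\alpha(\tau) \;=\;
\sum_{\substack{n\geq 1,\ \alpha_1,\dots,\alpha_n:\\
      \alpha_1+\cdots+\alpha_n=\alpha,\ \tau(\alpha_i)=\tau(\alpha)\ \forall i}}
\frac{1}{n!}\;\bar\epsilon^{\alpha_1}(\tau)*\cdots*\bar\epsilon^{\alpha_n}(\tau),
% mirroring the formal power series identities
% \log(1+x) = \sum_{n\geq 1} \tfrac{(-1)^{n-1}}{n}\,x^n,
% \qquad \exp(y)-1 = \sum_{n\geq 1} \tfrac{1}{n!}\,y^n.
```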
{\rm \kern.05em ind}ex{Donaldson--Thomas invariants!original $DT^\alpha(\tildemesau)$} \nomenclature[1wchinaive]{${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi^{\tildemesextrm{na}}(C)$}{na\"\i ve Euler characteristic of a constructible set $C$ in a stack as in \circte{Joyc.1}} As the $J^\alpha(\tildemesau)$ do not include Behrend functions, they do not count semistable sheaves with multiplicity, and so they will not in general be unchanged under deformations of the underlying Calabi--Yau 3-fold, as Donaldson--Thomas invariants are. However, the $J^\alpha(\tildemesau)$ do have very good properties under change of stability condition. In \circte{Joyc.6} Joyce shows that if $\tildemesau,\tildemesi\tildemesau$ are two stability conditions on $\mathop{\rm coh}\nolimits(X)$, then it is possible to write $\bar\epsilon^\alpha(\tildemesi\tildemesau)$ in terms of a (complicated) explicit formula involving the $\bar\epsilon^\beta(\tildemesau)$ for $\beta\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ and the Lie bracket in~${\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsai({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})$. Applying the Lie algebra morphism $\Psi$ shows that $J^\alpha(\tildemesi\tildemesau)\lambda^\alpha$ may be written in terms of the $J^\beta(\tildemesau)\lambda^\beta$ and the Lie bracket in $L(X)$, and hence \circte[Thm. 6.28]{Joyc.6} yields an explicit transformation law for the $J^\alpha(\tildemesau)$ under change of stability condition. In \circte{Joyc.7} he shows how to encode invariants $J^\alpha(\tildemesau)$ satisfying a transformation law in generating functions on a complex manifold of stability conditions, which are both holomorphic and continuous, despite the discontinuous wall-crossing behaviour of the $J^\alpha(\tildemesau)$. 
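Schematically, with the combinatorial coefficients suppressed (the $c_{\beta,\gamma}$ below are placeholders for the explicit coefficients defined in \circte{Joyc.6}, not notation from that paper), the transformation law has the shape:

```latex
J^\alpha(\tilde\tau)\;=\;J^\alpha(\tau)
 \;+\;\sum_{\beta+\gamma=\alpha} c_{\beta,\gamma}(\tau,\tilde\tau)\,
 \bar\chi(\beta,\gamma)\,J^\beta(\tau)\,J^\gamma(\tau)\;+\;\cdots,
% with higher terms built from iterated Lie brackets
% [\lambda^{\beta_1},[\lambda^{\beta_2},\dots]] in L(X); the coefficients
% c_{\beta,\gamma} jump discontinuously as \tilde\tau crosses walls, which
% is the source of the wall-crossing behaviour mentioned above.
```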
\subsubsection[Summary of the main results from $\tildemesext{\circte{JoSo}}$]{Summary of the main results from \circte{JoSo}} {\rm \kern.05em ind}ex{Donaldson--Thomas invariants!generalized $\bar{DT}{}^\alpha(\tildemesau)$|(} \lambdabel{dt4.2} The basic idea behind the project developed in \circte{JoSo} is that the Behrend function $\nu_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ of the moduli stack ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ of coherent {\rm \kern.05em ind}ex{Behrend function} sheaves in $X$ should be inserted as a weight in the programme of \circte{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7} summarized in \S\ref{dt4.1}. Thus one will obtain weighted versions $\tildemesi\Psi$ of the Lie algebra morphism $\Psi$ of \eq{dt4eq2}, and $\bar{DT}{}^\alpha(\tildemesau)$ of the counting invariant $J^\alpha(\tildemesau)\in{\mathbin{\mathbb Q}}$ in \eq{dt4eq4}. Here is how this is worked out in \circte{JoSo}. Joyce and Song define a modification $\tildemesi L(X)$ of the Lie algebra $L(X)$ above, \nomenclature[LXt]{$\tildemesi L(X)$}{Lie algebra depending on a Calabi--Yau $3$-fold $X$, variant of $L(X)$} the ${\mathbin{\mathbb Q}}$-vector space with basis of symbols $\tildemesi \lambda^\alpha$ for \nomenclature[1lati]{$\tildemesi \lambda^\alpha$}{basis element of Lie algebra $\tildemesi L(X)$} $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$, with Lie bracket \betagin{equation*} [\tildemesi\lambda^\alpha,\tildemesi\lambda^\beta]=(-1)^{\bar{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(\alpha,\beta)} \bar{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(\alpha,\beta)\tildemesi \lambda^{\alpha+\beta}, \end{equation*} which is \eq{dt4eq1} with a sign change. Then they define a {\it Lie algebra morphism\/} $\tildemesi\Psi:{\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsai({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})\rightarrow\tildemesi L(X)$.
Roughly speaking this is of the form \nomenclature[1vPsiti]{$\tildemesi\Psi$}{Lie algebra morphism $\tildemesi\Psi:{\mathbin{\mathfrak m}}athop{\rm SF}\nolimitsai({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}})\rightarrow\tildemesi L(X)$} \e \tildemesi\Psi(f)\;\;\;\;=\tildemesextstyle{\rm d}isplaystyle \sum_{\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi^{\rm stk} \bigl(f\vert_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}^\alpha},\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}\bigr)\tildemesi \lambda^{\alpha}, \lambdabel{dt4eq5} \e that is, in \eq{dt4eq2} we replace the stack-theoretic Euler characteristic ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi^{\rm stk}$ with a stack-theoretic Euler {\rm \kern.05em ind}ex{Euler characteristic!stack-theoretic} {\rm \kern.05em ind}ex{Euler characteristic!weighted} characteristic weighted by the Behrend function $\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}$. The proof that $\tildemesi\Psi$ is a Lie algebra morphism combines the proof in \circte{Joyc.4} that $\Psi$ is a Lie algebra morphism with the two {\it Behrend function identities} {\rm \kern.05em ind}ex{Behrend function!Behrend identities} \eq{dt6eq1}--\eq{dt6eq2} proved in \circte[thm. 5.11]{JoSo} and reported below. Proving \eq{dt6eq1}--\eq{dt6eq2} requires a deep understanding of the local structure of the moduli stack ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$, which is of interest {\rm \kern.05em ind}ex{moduli stack!local structure} in itself. 
First they show using a composition of {\it Seidel--Thomas twists} by ${\mathbin{\cal O}}_X(-n)$ for $n\gg 0$ that ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ is {\rm \kern.05em ind}ex{Seidel--Thomas twist} locally 1-isomorphic to the moduli stack ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak V}}ect$ of vector bundles {\rm \kern.05em ind}ex{vector bundle!moduli stack} on $X$. Then they prove that near $[E]\in{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak V}}ect({\mathbin{\mathbb C}})$, an atlas for ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak V}}ect$ can be written locally in the complex analytic {\rm \kern.05em ind}ex{analytic topology} topology in the form $\mathop{\rm Crit}(f)$ for $f:U\rightarrow{\mathbin{\mathbb C}}$ a holomorphic function on an open set $U$ in $\mathop{\rm Ext}\nolimits^1(E,E)$. These $U,f$ are {\it not algebraic}, they are constructed using gauge theory on the {\rm \kern.05em ind}ex{gauge theory} complex vector bundle $E$ over $X$ and transcendental methods. Finally, they deduce \eq{dt6eq1}--\eq{dt6eq2} using the Milnor fibre expression \eq{dt3eq3} for Behrend functions {\rm \kern.05em ind}ex{Milnor fibre} applied to these~$U,f$. \setminusallskip Before continuing with the review of Joyce and Song's programme, it is worth pausing over some details of \circte[Thm.\ 5.5]{JoSo} and \circte[Thm.\ 5.11]{JoSo}: the statements of the theorems and how they are proved. This will be useful later in \S\ref{dt5}. \paragraph[Gauge theory and transcendental complex analysis from $\tildemesext{\circte{JoSo}}$]{Gauge theory and transcendental complex analytic geometry from \circte{JoSo}.} \lambdabel{dt5.1} {\rm \kern.05em ind}ex{gauge theory} In \circte[Thm.
5.5]{JoSo} Joyce and Song give a local characterization of an atlas{\rm \kern.05em ind}ex{Artin stack!atlas} for the moduli stack ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ as the critical points of a holomorphic function on a complex manifold. The statement and a sketch of its proof are reported below. Some background references are Kobayashi \circte[\S VII.3]{Koba}, L\"ubke and Teleman \circte[\S 4.1 \& \S 4.3]{LuTe}, Friedman and Morgan \circte[\S 4.1--\S 4.2]{FrMo} and Miyajima \circte{Miya}. {\rm \kern.05em ind}ex{moduli stack} \betagin{thm} Let\/ $X$ be a Calabi--Yau $3$-fold over\/ ${\mathbin{\mathbb C}},$ and\/ ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ the moduli stack of coherent sheaves on\/ $X$. Suppose\/ $E$ is a coherent sheaf on\/ $X,$ so that\/ $[E]\in{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}({\mathbin{\mathbb C}})$. Let\/ $G$ be a maximal reductive subgroup{\rm \kern.05em ind}ex{reductive group!maximal} in $\mathop{\rm Aut}(E),$ and\/ $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$ its complexification. Then\/ $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$ is an algebraic ${\mathbin{\mathbb C}}$-subgroup of\/ $\mathop{\rm Aut}(E),$ a maximal reductive subgroup,{\rm \kern.05em ind}ex{reductive group!maximal} and\/ $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}=\mathop{\rm Aut}(E)$ if and only if\/ $\mathop{\rm Aut}(E)$ is reductive. 
There exists a quasiprojective ${\mathbin{\mathbb C}}$-scheme $S,$ an action of\/ $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$ on $S,$ a point\/ $s\in S({\mathbin{\mathbb C}})$ fixed by $G^{\scriptscriptstyle{\mathbin{\mathbb C}}},$ and a $1$-morphism of Artin ${\mathbin{\mathbb C}}$-stacks $\Phi:[S/G^{\scriptscriptstyle{\mathbin{\mathbb C}}}]\rightarrow{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}},$ {\rm \kern.05em ind}ex{1-morphism} which is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)-\mathop{\rm dim}\nolimits G^{\scriptscriptstyle{\mathbin{\mathbb C}}},$ {\rm \kern.05em ind}ex{quotient stack} where $[S/G^{\scriptscriptstyle{\mathbin{\mathbb C}}}]$ is the quotient stack, such that\/ $\Phi(s\,G^{\scriptscriptstyle{\mathbin{\mathbb C}}})=[E],$ the induced morphism on stabilizer groups {\rm \kern.05em ind}ex{stabilizer group} $\Phi_*:{\mathbin{\mathfrak m}}athop{\rm Iso}\nolimits_{[S/G^{\scriptscriptstyle{\mathbin{\mathbb C}}}]}(s\,G^{\scriptscriptstyle{\mathbin{\mathbb C}}})\rightarrow{\mathbin{\mathfrak m}}athop{\rm Iso}\nolimits_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}([E])$ is the natural morphism $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra\mathop{\rm Aut}(E)\cong{\mathbin{\mathfrak m}}athop{\rm Iso}\nolimits_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}([E]),$ and\/ ${\rm d}\Phi\vert_{s\,G^{\scriptscriptstyle{\mathbin{\mathbb C}}}}:T_sS\cong T_{s\,G^{\scriptscriptstyle{\mathbin{\mathbb C}}}} [S/G^{\scriptscriptstyle{\mathbin{\mathbb C}}}]\rightarrow T_{[E]}{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}\cong \mathop{\rm Ext}\nolimits^1(E,E)$ is an isomorphism. 
{\rm \kern.05em ind}ex{versal family} Furthermore, $S$ parametrizes a formally versal family $(S,{\cal D})$ of coherent sheaves on $X,$ equivariant under the action of\/ $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$ on $S,$ with fibre\/ ${\cal D}_s\cong E$ at\/ $s$. If\/ $\mathop{\rm Aut}(E)$ is reductive then $\Phi$ is \'etale. Write $S_{\rm an}$ for the complex analytic space {\rm \kern.05em ind}ex{complex analytic space} underlying the ${\mathbin{\mathbb C}}$-scheme $S$. Then there exists an open neighbourhood\/ $U$ of\/ $0$ in\/ $\mathop{\rm Ext}\nolimits^1(E,E)$ in the analytic topology, a holomorphic {\rm \kern.05em ind}ex{analytic topology} function $f:U\rightarrow{\mathbin{\mathbb C}}$ with\/ $f(0)={\rm d} f\vert_0=0,$ an open neighbourhood\/ $V$ of\/ $s$ in $S_{\rm an},$ and an isomorphism of complex analytic spaces $\Xi:\mathop{\rm Crit}(f)\rightarrow V,$ such that\/ $\Xi(0)=s$ and\/ ${\rm d}\Xi\vert_0:T_0\mathop{\rm Crit}(f)\rightarrow T_sV$ is the inverse of\/ ${\rm d}\Phi\vert_{s\,G^{\scriptscriptstyle{\mathbin{\mathbb C}}}}:T_sS\rightarrow\mathop{\rm Ext}\nolimits^1(E,E)$. Moreover we can choose $U,f,V$ to be $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$-invariant, and\/ $\Xi$ to be $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$-equivariant. \lambdabel{dt5thm1} \end{thm} In \circte{JoSo}, Theorem \ref{dt5thm1} gives Joyce and Song the possibility to use the Milnor fibre{\rm \kern.05em ind}ex{Milnor fibre} formula \eq{dt3eq3} for the Behrend function of $\mathop{\rm Crit}(f)$ to study the Behrend function $\nu_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$, crucially used in proving Behrend identities. {\rm \kern.05em ind}ex{Behrend function!Behrend identities} The proof of Theorem \ref{dt5thm1} comes in two parts. 
First it is shown in \circte[\S 8]{JoSo} that ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ near $[E]$ is locally isomorphic, as an Artin ${\mathbin{\mathbb C}}$-stack, to the moduli stack ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak V}}ect$ of {\it algebraic vector bundles\/}{\rm \kern.05em ind}ex{vector bundle!algebraic} on $X$ near $[E']$ for some vector bundle $E'\rightarrow X$. The proof uses algebraic geometry, and is valid for $X$ a Calabi--Yau $m$-fold for any $m>0$ over any algebraically closed field ${\mathbin{\mathbb K}}$. The local morphism ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}\rightarrow{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak V}}ect$ is the composition of shifts and $m$ {\it Seidel--Thomas twists\/}{\rm \kern.05em ind}ex{Seidel--Thomas twist} by ${\mathbin{\cal O}}_X(-n)$ for~$n\gg 0$. Thus, it is enough to prove Theorem \ref{dt5thm1} with ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak V}}ect$ in place of ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$. This is done in \circte[\S 9]{JoSo} using gauge theory on vector bundles over $X$. An interesting motivation for this approach can be found in \circte[\S 3]{DoTh} and \circte[\S 2]{Thom}. Let $E\rightarrow X$ be a fixed complex (not holomorphic) vector bundle over $X$.
Write ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr A}}$ for the infinite-dimensional {\rm \kern.05em ind}ex{semiconnection} \nomenclature[As]{${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr A}}$}{affine space of smooth semiconnections on a vector bundle} affine space of smooth {\it semiconnections} (${\bar\partial}$-operators) on $E$, and ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr G}}$ for the infinite-dimensional Lie group of {\rm \kern.05em ind}ex{gauge theory!gauge group} \nomenclature[G]{${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr G}}$}{gauge group of smooth gauge transformations of a vector bundle} {\it smooth gauge transformations} of $E$. Then ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr G}}$ acts on ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr A}}$, and ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr B}}={{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr A}}/{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr G}}$ \nomenclature[B]{${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr B}}$}{space of gauge-equivalence classes of semiconnections on a vector bundle} is the space of gauge-equivalence classes of semiconnections on~$E$. Fix ${\bar\partial}_E$ in ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr A}}$ coming from a holomorphic vector bundle structure on $E$. Then points in ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr A}}$ are of the form ${\bar\partial}_E+A$ for $A\in C^\infty\bigl({\mathbin{\mathfrak m}}athop{\rm End}\nolimits(E)\otimes_{\mathbin{\mathbb C}} \Lambda^{0,1}T^*X\bigr)$, and ${\bar\partial}_E+A$ makes $E$ into a holomorphic vector bundle if $F_A^{0,2}={\bar\partial}_EA+A\wedge A$ is zero in $\setminusash{C^\infty\bigl({\mathbin{\mathfrak m}}athop{\rm End}\nolimits(E)\otimes_{\mathbin{\mathbb C}} \Lambda^{0,2}T^*X\bigr)}$. 
Thus, the moduli space (stack) of holomorphic vector bundle structures on $E$ is isomorphic to $\{{\bar\partial}_E+A\in{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr A}}: F_A^{0,2}=0\}/{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr G}}$. In \circte{Thom}, it is observed that when $X$ is a Calabi--Yau 3-fold, there is a natural holomorphic function $CS:{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr A}}\rightarrow{\mathbin{\mathbb C}}$ called the {\it holomorphic Chern--Simons functional}, invariant under \nomenclature[CS]{$CS$}{holomorphic Chern--Simons functional} ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr G}}$ up to addition of constants, such that $\{{\bar\partial}_E+A\in{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr A}}:F_A^{0,2}=0\}$ is the critical locus of $CS$. Thus, ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak V}}ect$ is (informally) locally the critical points of a holomorphic function $CS$ on an infinite-dimensional complex stack ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr B}}={{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr A}}/{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr G}}$. To prove Theorem \ref{dt5thm1} Joyce and Song show that one can find a finite-dimensional complex submanifold $U$ in ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr A}}$ and a finite-dimensional complex Lie subgroup $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$ in ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athscr G}}$ preserving $U$ such that the theorem holds with~$f=CS\vert_U$. These $U,f$ are {\it not algebraic}, they are constructed using gauge theory on the complex vector bundle $E$ over $X$ and transcendental methods. 
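Up to normalization conventions, the holomorphic Chern--Simons functional of \circte{Thom} can be written explicitly. Here is a sketch in standard notation, with $\Omega$ a fixed holomorphic volume form on $X$:

```latex
CS(\bar\partial_E + A) \;=\; \frac{1}{4\pi^2}\int_X
 \operatorname{tr}\Bigl(\tfrac{1}{2}\,\bar\partial_E A\wedge A
 + \tfrac{1}{3}\,A\wedge A\wedge A\Bigr)\wedge\Omega.
% Varying A in a direction a, and integrating by parts against the closed
% form \Omega, gives
% \delta CS(a) = \frac{1}{4\pi^2}\int_X
%   \operatorname{tr}\bigl(F_A^{0,2}\wedge a\bigr)\wedge\Omega,
% so Crit(CS) = \{\bar\partial_E + A : F_A^{0,2} = 0\}, as claimed above.
```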
\paragraph[The Behrend function identities from $\tildemesext{\circte{JoSo}}$]{The Behrend function identities from \circte{JoSo}.} \lambdabel{dtBehid} {\rm \kern.05em ind}ex{Behrend function!Behrend identities|(} In \circte[Thm. 5.11]{JoSo} Behrend function identities are proven: they are the crucial step to define the Lie algebra morphism $\tildemesi\Psi$ below and then the generalized Donaldson--Thomas invariants: \betagin{thm} Let\/ $X$ be a Calabi--Yau $3$-fold over\/ ${\mathbin{\mathbb C}},$ and\/ ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ the moduli stack of coherent sheaves on\/ $X$. The Behrend function $\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}: {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}({\mathbin{\mathbb C}})\rightarrow{\mathbin{\mathbb Z}}$ is a natural locally constructible function on ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$. For all\/ $E_1,E_2\in\mathop{\rm coh}\nolimits(X),$ it satisfies: \betagin{equation} \nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_1\oplus E_2)=(-1)^{\bar{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi([E_1],[E_2])} \nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_1)\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_2), \lambdabel{dt6eq1} \end{equation} \setminusallskip \betagin{equation} {\rm d}isplaystyle \int\limits_{\setminusall{\betagin{subarray}{l} [\lambda]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1)):\\ \lambda\; {\mathop{\text{\rm Lis-\'et}}\nolimits}ftrightarrow\; 0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0\end{subarray}}}\!\!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! \nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(F)\,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi \quad - \!\!\!\! \!\!\!\! \!\!\!\! 
{\rm d}isplaystyle \int\limits_{\setminusall{\betagin{subarray}{l}[{\mathbin{\mathfrak m}}u]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2)):\\ {\mathbin{\mathfrak m}}u\; {\mathop{\text{\rm Lis-\'et}}\nolimits}ftrightarrow\; 0\rightarrow E_2\rightarrow D\rightarrow E_1\rightarrow 0\end{subarray}}}\!\!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! \nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(D)\,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi \;\; = \;\; (e_{21}-e_{12})\;\; \nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_1\oplus E_2), \lambdabel{dt6eq2} \end{equation} where $e_{21}=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_2,E_1)$ and $e_{12}=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_1,E_2)$ for $E_1,E_2\in\mathop{\rm coh}\nolimits(X).$ Here\/ $\bar{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi([E_1],[E_2])$ in \eq{dt6eq1} is the Euler form as in \eq{eu}, {\rm \kern.05em ind}ex{Euler form} and in \eq{dt6eq2} the correspondence between\/ $[\lambda]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))$ and\/ $F\in\mathop{\rm coh}\nolimits(X)$ is that\/ $[\lambda]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))$ lifts to some\/ $0\ne\lambda\in\mathop{\rm Ext}\nolimits^1(E_2,E_1),$ which corresponds to a short exact sequence\/ $0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0$ in\/ $\mathop{\rm coh}\nolimits(X)$ in the usual way. The function $[\lambda]{\mathbin{\mathfrak m}}apsto\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(F)$ is a constructible function\/ ${\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))\rightarrow{\mathbin{\mathbb Z}},$ and the integrals in \eq{dt6eq2} are integrals of constructible functions using the Euler characteristic as measure. 
{\rm \kern.05em ind}ex{constructible function} {\rm \kern.05em ind}ex{Euler characteristic} \lambdabel{dt6thm1} \end{thm} Joyce and Song prove Theorem \ref{dt6thm1} using Theorem \ref{dt5thm1} and the Milnor fibre description of Behrend functions from ~\S\ref{dt4}. They apply Theorem \ref{dt5thm1} to $E=E_1\oplus E_2$, and take the maximal reductive subgroup $G$ of $\mathop{\rm Aut}(E)$ to contain {\rm \kern.05em ind}ex{reductive group!maximal} the subgroup $\bigl\{{\mathop{\rm id}\nolimits}_{E_1}+\lambda{\mathop{\rm id}\nolimits}_{E_2}: \lambda\in{\mathbin{\mathfrak m}}athop{\tildemesextstyle\rm U}(1)\bigr\}$, so that $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$ contains $\bigl\{{\mathop{\rm id}\nolimits}_{E_1}+\lambda{\mathop{\rm id}\nolimits}_{E_2}:\lambda\in {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb G}}_m\bigr\}$. Equations \eq{dt6eq1} and \eq{dt6eq2} are proved by a kind of localization using this ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb G}}_m$-action on~$\mathop{\rm Ext}\nolimits^1(E_1\oplus E_2,E_1\oplus E_2)$. More precisely, Theorem \ref{dt5thm1} gives an atlas for ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ near $E$ as $\mathop{\rm Crit}(f)$ near $0$, {\rm \kern.05em ind}ex{Milnor fibre} where $f$ is a holomorphic function defined near $0$ on ~$\mathop{\rm Ext}\nolimits^1(E_1\oplus E_2,E_1\oplus E_2)$ and $f$ is invariant under the action of $T=\bigl\{{\mathop{\rm id}\nolimits}_{E_1}+\lambda{\mathop{\rm id}\nolimits}_{E_2}: \lambda\in{\mathbin{\mathfrak m}}athop{\tildemesextstyle\rm U}(1)\bigr\}$ on $\mathop{\rm Ext}\nolimits^1(E_1\oplus E_2,E_1\oplus E_2)$ by conjugation. 
The fixed points of $T$ on $\mathop{\rm Ext}\nolimits^1(E_1\oplus E_2,E_1\oplus E_2)$ are $\mathop{\rm Ext}\nolimits^1(E_1,E_1)\oplus\mathop{\rm Ext}\nolimits^1(E_2,E_2)$ and heuristically one can say that the restriction of $f$ to these fixed points is $f_1 + f_2,$ where $f_j$ is defined near $0$ in $\mathop{\rm Ext}\nolimits^1(E_j,E_j)$ and $\mathop{\rm Crit}(f_j)$ is an atlas for ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ near $E_j$. {\rm \kern.05em ind}ex{moduli stack!atlas} The Milnor fibre $MF_f(0)$ is invariant under $T$, so by localization one has $${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi (MF_f(0))={\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(MF_f(0)^T)={\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(MF_{f_1 + f_2}(0)).$$ A product property of Behrend functions, which may be seen as a kind of {\it Thom-Sebastiani theorem}, gives {\rm \kern.05em ind}ex{Thom-Sebastiani theorem} $$1-{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(MF_{f_1 + f_2}(0))=(1-{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(MF_{f_1}(0)))(1-{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(MF_{f_2}(0))).$$ Then the identity \eq{dt6eq1} follows from Theorem \ref{dt3thm1}: $$\nu_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}} (E)=(-1)^{ \mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E,E)-\mathop{\rm dim}\nolimits\mathop{\rm Ho}\nolimitsm(E,E)}(1-{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi (MF_f(0))),$$ and the analogues for $E_1$ and $E_2$. Equation \eq{dt6eq2} uses a more involved argument involving the Milnor fibres of $f$ at non-fixed points of the $U(1)$-action. The proof of Theorem \ref{dt6thm1} uses gauge theory and transcendental complex analytic {\rm \kern.05em ind}ex{gauge theory} geometry methods, and is valid only over~${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$.
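As an elementary illustration of this Thom-Sebastiani property (a standard example, not taken from \circte{JoSo}), take $f_1(x)=x^2$ and $f_2(y)=y^2$ near $0$ in ${\mathbin{\mathbb C}}$. For small $\delta\ne 0$ the Milnor fibre $MF_{f_1}(0)=\{x:x^2=\delta\}$ consists of two points, so ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(MF_{f_1}(0))=2$, and likewise for $f_2$, while $MF_{f_1+f_2}(0)=\{(x,y):x^2+y^2=\delta\}$ is a smooth conic isomorphic to ${\mathbin{\mathbb C}}^*$, with Euler characteristic $0$. Thus indeed $$1-{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(MF_{f_1+f_2}(0))=1-0=(1-2)(1-2)=(1-{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(MF_{f_1}(0)))(1-{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(MF_{f_2}(0))).$$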
However, as pointed out in \circte[Question 5.12]{JoSo}, Theorem \ref{dt6thm1} makes sense as a statement in algebraic geometry, for Calabi--Yau 3-folds over {\rm \kern.05em ind}ex{field ${\mathbin{\mathbb K}}$} ${\mathbin{\mathbb K}}$. {\rm \kern.05em ind}ex{constructible function} {\rm \kern.05em ind}ex{conical Lagrangian cycle} {\rm \kern.05em ind}ex{Behrend function!Behrend identities|)} {\mathbin{\mathfrak m}}edskip In \circte[\S 5]{JoSo}, Joyce and Song then define {\it generalized Donaldson--Thomas invariants\/} $\bar{DT}{}^\alpha(\tildemesau)\in{\mathbin{\mathbb Q}}$ by{\rm \kern.05em ind}ex{Donaldson--Thomas invariants!generalized $\bar{DT}{}^\alpha(\tildemesau)$} \e \tildemesi\Psi\bigl(\bar\epsilon^\alpha(\tildemesau)\bigr)=-\bar{DT}{}^\alpha(\tildemesau)\tildemesi \lambda^\alpha, \lambdabel{dt4eq8} \e as in \eq{dt4eq4}. When ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm ss}^\alpha(\tildemesau)={{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)$ then $\bar\epsilon^\alpha(\tildemesau)=\bar{\rm d}elta_{\rm ss}^\alpha(\tildemesau)$, and \eq{dt4eq5} gives \e \tildemesi\Psi\bigl(\bar\epsilon^\alpha(\tildemesau)\bigr)={\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi^{\rm stk}\bigl( {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}_{\rm st}^\alpha (\tildemesau),\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}_{\rm st}^\alpha(\tildemesau)}\bigr)\tildemesi\lambda^\alpha. 
\lambdabel{dt4eq9} \e The projection $\pi:{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}_{\rm st}^\alpha(\tildemesau)\rightarrow{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)$ from the moduli stack to the coarse moduli scheme{\rm \kern.05em ind}ex{coarse moduli scheme}{\rm \kern.05em ind}ex{moduli scheme!coarse} is smooth of relative dimension $-1$, so $\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}_{\rm st}^\alpha(\tildemesau)}=-\pi^*(\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)})$ by (ii) in \S\ref{dt3.2.2}, and comparing \eq{dt3eq5}, \eq{dt4eq8}, \eq{dt4eq9} shows that $\bar{DT}{}^\alpha(\tildemesau)=DT^\alpha(\tildemesau)$. But the new invariants $\bar{DT}{}^\alpha(\tildemesau)$ are also defined for $\alpha$ with ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm ss}^\alpha(\tildemesau)\ne{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm st}^\alpha(\tildemesau)$, when conventional Donaldson--Thomas invariants $DT^\alpha(\tildemesau)$ are not defined. {\rm \kern.05em ind}ex{Donaldson--Thomas invariants!original $DT^\alpha(\tildemesau)$} \setminusallskip Thanks to Theorems \ref{dt5thm1} and \ref{dt6thm1}, $\tildemesi\Psi$ is a Lie algebra morphism \circte[\S 5.3]{JoSo}, so the change of stability condition formula for the $\bar\epsilon^\alpha(\tildemesau)$ in \circte{Joyc.6} implies a formula for the elements $-\bar{DT}{}^\alpha(\tildemesau)\tildemesi \lambda^\alpha$ in $\tildemesi L(X)$, and hence a transformation law for the invariants $\bar{DT}{}^\alpha(\tildemesau)$, {\rm \kern.05em ind}ex{wall crossing} using combinatorial coefficients.
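Schematically, and suppressing the precise combinatorial data (see \circte{Joyc.6,JoSo} for the exact statement), this transformation law has the shape $$\bar{DT}{}^\alpha(\tildemesi\tildemesau)=\sum_{\setminusall{\betagin{subarray}{l} \tildemesext{finite sets } I \tildemesext{ and graphs } \Gamma,\\ \kappa:I\rightarrow K^{\rm num}(\mathop{\rm coh}\nolimits(X)):\ \sum_{i\in I}\kappa(i)=\alpha\end{subarray}}} V(I,\Gamma,\kappa;\tildemesau,\tildemesi\tildemesau)\cdot\prod_{i\in I}\bar{DT}{}^{\kappa(i)}(\tildemesau),$$ together with signs and Euler-form factors attached to the edges of $\Gamma$, which we suppress here.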
{\rm \kern.05em ind}ex{Euler form} \nomenclature[V]{$V(I,\Gamma,\kappa;\tildemesau,\tildemesi\tildemesau)$}{combinatorial coefficient used in wall-crossing formulae} \setminusallskip\nomenclature[PI]{$PI^{\alpha,n}(\tildemesau')$}{invariants counting stable pairs $s:{\mathbin{\cal O}}_X(-n)\rightarrow E$} \nomenclature[PT]{$PT_{n,\betata}$}{Pandharipande--Thomas invariants}{\rm \kern.05em ind}ex{Pandharipande--Thomas invariants}{\rm \kern.05em ind}ex{stable pair} \nomenclature[M al stp]{${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm stp}^{\alpha,n}(\tildemesau')$}{the moduli space of stable pairs $s:{\mathbin{\cal O}}_X(-n)\rightarrow E$ with $[E]=\alpha$} To study the new invariants $\bar{DT}{}^\alpha(\tildemesau)$, it is helpful to introduce another family of invariants $PI^{\alpha,n}(\tildemesau')$,{\rm \kern.05em ind}ex{stable pair invariants $PI^{\alpha,n}(\tildemesau')$} similar to the Pandharipande--Thomas invariants \circte{PaTh}. Let $n\gg 0$ be fixed. A {\it stable pair} is a nonzero morphism $s:{\mathbin{\cal O}}_{X}(-n)\rightarrow E$ in $\mathop{\rm coh}\nolimits(X)$ such that $E$ is $\tildemesau$-semistable, and if $\mathop{\rm Im} s\subset E'\subset E$ with $E'\ne E$ then $\tildemesau([E'])<\tildemesau([E])$.
For $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ and $n\gg 0$, the moduli space ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm stp}^{\alpha,n}(\tildemesau')$ of stable pairs $s:{\mathbin{\cal O}}_X(-n)\rightarrow E$ with $[E]=\alpha$ is a fine moduli scheme,{\rm \kern.05em ind}ex{fine moduli scheme}{\rm \kern.05em ind}ex{moduli scheme!fine} which is proper and has a symmetric obstruction theory.{\rm \kern.05em ind}ex{symmetric obstruction theory}{\rm \kern.05em ind}ex{obstruction theory!symmetric} Joyce and Song define \e \tildemesextstyle PI^{\alpha,n}(\tildemesau')\;\;\;\;={\rm d}isplaystyle \int_{\setminusall{[{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm stp}^{\alpha,n}(\tildemesau')]^{\rm vir}}}1\;\;\;\;=\;\;\;\; {\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi\bigl( {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm stp}^{\alpha,n}(\tildemesau'),\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm stp}^{\alpha,n}(\tildemesau')} \bigr)\in{\mathbin{\mathbb Z}}, \lambdabel{dt4eq11} \e where the second equality follows from Theorem \ref{dt3thm4}. By a proof similar to that for Donaldson--Thomas invariants in \circte{Thom}, Joyce and Song find that $PI^{\alpha,n}(\tildemesau')$ is unchanged under deformations of the underlying Calabi--Yau 3-fold~$X$. {\rm \kern.05em ind}ex{deformation-invariance} By a wall-crossing proof similar to that for $\bar{DT}{}^\alpha(\tildemesau)$, they show that $PI^{\alpha,n}(\tildemesau')$ can be written in terms of the $\bar{DT}{}^\beta(\tildemesau).$ As $PI^{\alpha,n}(\tildemesau')$ is deformation-invariant, one deduces from this relation, by induction on $\mathop{\rm rank}\nolimits\alpha$ with $\mathop{\rm dim}\nolimits\alpha$ fixed, that $\bar{DT}{}^\alpha(\tildemesau)$ is also deformation-invariant.
{\rm \kern.05em ind}ex{deformation-invariance} \setminusallskip The pair invariants $PI^{\alpha,n}(\tildemesau')$ are a useful tool for computing the $\bar{DT}{}^\alpha(\tildemesau)$ in the examples of \circte[\S 6]{JoSo}. The method is to describe the moduli spaces ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athcal M}}_{\rm stp}^{\alpha,n}(\tildemesau')$ explicitly, then use \eq{dt4eq11} to compute $PI^{\alpha,n}(\tildemesau')$, and finally use their relation with $\bar{DT}{}^\alpha(\tildemesau)$ to deduce the values of $\bar{DT}{}^\alpha(\tildemesau)$. Their point of view is that the $\bar{DT}{}^\alpha(\tildemesau)$ are of primary interest, and the $PI^{\alpha,n}(\tildemesau')$ are secondary invariants, of less interest in themselves. \paragraph[Motivic Donaldson--Thomas invariants: Kontsevich and Soibelman's approach from $\tildemesext{\circte{KoSo1}}$]{Motivic Donaldson--Thomas invariants: Kontsevich and Soibelman's approach from \circte{KoSo1}.} {\rm \kern.05em ind}ex{Donaldson--Thomas invariants!motivic|(} \lambdabel{KoSo} Kontsevich and Soibelman in \circte{KoSo1} also studied generalizations of Donaldson--Thomas invariants. They work in a more general context, but their results are in large part based on conjectures. They consider derived categories of coherent sheaves, Bridgeland stability conditions \circte{Brid1}, and general motivic invariants, whereas Joyce and Song work with abelian categories of coherent sheaves, Gieseker stability, and the Euler characteristic. Kontsevich and Soibelman's motivic functions in the equivariant setting \circte[\S 4.2]{KoSo1}, motivic Hall algebra \circte[\S 6.1]{KoSo1}, motivic quantum torus \circte[\S 6.2]{KoSo1} and their algebra morphism to define Donaldson--Thomas invariants \circte[Thm.\,8]{KoSo1} all have analogues in Joyce and Song's program. {\mathbin{\mathfrak m}}edskip It is worth noting some points here (see \circte[\S 1.6]{JoSo} for the entire discussion).
\betagin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] Joyce was probably the first to approach Donaldson--Thomas type invariants in an abstract categorical setting. He developed the technique of motivic stack functions and understood the relevance of motives to the counting problem \circte{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6}. The main limitation of his approach was that he worked with abelian rather than triangulated categories. For many applications, especially to physics, one needs triangulated categories. The more recent theory of Joyce and Song \circte{JoSo} fixes some of these gaps and fits well with the general philosophy of \circte{KoSo1} (and indeed Joyce and Song use some ideas from Kontsevich and Soibelman). They deal with concrete examples of categories (e.g. the category of coherent sheaves) and construct numerical invariants via the Behrend approach. It is difficult to prove that these are in fact invariants of the triangulated category, a property which is manifest in \circte{KoSo1}. \item[{\bf(b)}] Kontsevich and Soibelman write their wall-crossing formulae in terms of products in a pro-nilpotent Lie group, while Joyce and Song's formulae are written in terms of combinatorial coefficients. \item[{\bf(c)}] Equations \eq{dt6eq1}--\eq{dt6eq2} are related to a conjecture of Kontsevich and Soibelman \circte[Conj.\,4]{KoSo1} and its application in \circte[\S 6.3]{KoSo1}, and could probably be deduced from it. Joyce and Song got the idea of proving \eq{dt6eq1}--\eq{dt6eq2} by localization using the ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb G}}_m$-action on $\mathop{\rm Ext}\nolimits^1(E_1\oplus E_2, E_1\oplus E_2)$ from \circte{KoSo1}. However, Kontsevich and Soibelman approach \circte[Conj.\,4]{KoSo1} via formal power series and non-Archimedean geometry. Their analogue concerns the `motivic Milnor fibre' {\rm \kern.05em ind}ex{motivic Milnor fibre} of the formal power series $f$.
Instead, in Theorem \ref{dt5thm1} Joyce and Song in effect first prove that they can choose the formal power series to be convergent, and then use ordinary differential geometry and Milnor fibres. \item[{\bf(d)}] While Joyce's series of papers \circte{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6} develops the difficult ideas of `virtual rank' and `virtual indecomposables', Kontsevich and Soibelman have no analogue of these. They come up against the problem (specialization from the virtual Poincar\'e polynomial to the Euler characteristic) that this technology was designed to solve in the `absence of poles conjecture' \circte[\S7]{KoSo1}. \end{itemize}{\rm \kern.05em ind}ex{non-Archimedean geometry}{\rm \kern.05em ind}ex{formal power series} Section \ref{dt7} proposes new ideas for further research, also in the direction of Kontsevich and Soibelman's paper \circte{KoSo1}. {\rm \kern.05em ind}ex{Donaldson--Thomas invariants!motivic|)} {\rm \kern.05em ind}ex{Donaldson--Thomas invariants!generalized $\bar{DT}{}^\alpha(\tildemesau)$|)} \section{D-critical loci} \lambdabel{dcr} We summarize the theory of d-critical schemes and stacks introduced by Joyce \circte{Joyc.2}. There are two versions of the theory, complex analytic and algebraic d-critical loci; sometimes we give results for both versions simultaneously, and otherwise we just briefly indicate the differences between the two, referring to \circte{Joyc.2} for details. \subsection{D-critical schemes} \lambdabel{dcr.1} Let $X$ be a complex analytic space or a ${\mathbin{\mathbb K}}$-scheme. Then \circte[Th.~2.1 \& Prop.~2.3]{Joyc.2} associates a natural sheaf ${\mathbin{\cal S}}_X$ to $X$ such that, very briefly, sections of ${\mathbin{\cal S}}_X$ parametrize different ways of writing $X$ as $\mathop{\rm Crit}(f)$ for $U$ a complex manifold or smooth ${\mathbin{\mathbb K}}$-scheme and $f:U\rightarrow{\mathbin{\mathbb C}}$ holomorphic or $f:U\rightarrow{\mathbin{\mathbb A}}^1$ regular.
We refer to \circte[Th.~2.1 \& Prop.~2.3]{Joyc.2} for details. To give a slightly clearer idea, we point out the following: \betagin{rem} Suppose $U$ is a complex manifold, $f:U\rightarrow{\mathbin{\mathbb C}}$ is holomorphic, and $X=\mathop{\rm Crit}(f)$, as a closed complex analytic subspace of $U$. Write $i:X{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra U$ for the inclusion, and $I_{X,U}\subseteq i^{-1}({\mathbin{\cal O}}_U)$ for the sheaf of ideals vanishing on $X\subseteq U$. Then we obtain a natural section $s\in H^0({\mathbin{\cal S}}_X)$. Essentially $s=f+I_{{\rm d} f}^2$, where $I_{{\rm d} f}\subseteq{\mathbin{\cal O}}_U$ is the ideal generated by ${\rm d} f$. Note that $f\vert_X=f+I_{{\rm d} f}$, so $s$ determines $f\vert_X$. Basically, $s$ remembers all of the information about $f$ which makes sense intrinsically on $X$, rather than on the ambient space~$U$. \lambdabel{dc2ex1} \end{rem} Following \circte[Def.~2.5]{Joyc.2} we define algebraic d-critical loci: \betagin{dfn} An ({\it algebraic\/}) {\it d-critical locus\/} over a field ${\mathbin{\mathbb K}}$ is a pair $(X,s)$, where $X$ is a ${\mathbin{\mathbb K}}$-scheme and $s\in H^0({\mathbin{\cal S}}z_X)$, such that for each $x\in X$, there exists a Zariski open neighbourhood $R$ of $x$ in $X$, a smooth ${\mathbin{\mathbb K}}$-scheme $U$, a regular function $f:U\rightarrow{\mathbin{\mathbb A}}^1={\mathbin{\mathbb K}}$, and a closed embedding $i:R{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra U$, such that $i(R)=\mathop{\rm Crit}(f)$ as ${\mathbin{\mathbb K}}$-subschemes of $U$, and $\iota_{R,U}(s\vert_R)=i^{-1}(f)+I_{R,U}^2$. We call the quadruple $(R,U,f,i)$ a {\it critical chart\/} on~$(X,s)$.
If $U'\subseteq U$ is a Zariski open subset, and $R'=i^{-1}(U')\subseteq R$, $i'=i\vert_{R'}:R'{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra U'$, and $f'=f\vert_{U'}$, then $(R',U',f',i')$ is a critical chart on $(X,s)$, and we call it a {\it subchart\/} of $(R,U,f,i)$, and we write~$(R',U',f',i')\subseteq (R,U,f,i)$. \setminusallskip Let $(R,U,f,i),(S,V,g,j)$ be critical charts on $(X,s)$, with $R\subseteq S\subseteq X$. An {\it embedding\/} of $(R,U,f,i)$ in $(S,V,g,j)$ is a locally closed embedding $\Phi:U{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra V$ such that $\Phi\circ i=j\vert_R$ and $f=g\circ\Phi$. As a shorthand we write $\Phi: (R,U,f,i){{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra(S,V,g,j)$. If $\Phi:(R,U,f,i){{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra (S,V,g,j)$ and $\Psi:(S,V,g,j){{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra(T,W,h,k)$ are embeddings, then $\Psi\circ\Phi:(R,U,f,i){{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra(T,W,h,k)$ is also an embedding. \setminusallskip A {\it morphism\/} $\phi:(X,s)\rightarrow (Y,t)$ of d-critical loci $(X,s),(Y,t)$ is a ${\mathbin{\mathbb K}}$-scheme morphism $\phi:X\rightarrow Y$ with $\phi^{\rm st}ar(t)=s$. This makes d-critical loci into a category. \lambdabel{sa3def1} \end{dfn} \betagin{rem} {\bf(a)} For $(X,s)$ to be a (complex analytic or algebraic) d-critical locus places strong local restrictions on the singularities of $X$. For example, Behrend \circte{Behr} notes that if $X$ has reduced local complete intersection singularities then locally it cannot be the zeroes of an almost closed 1-form on a smooth space, and hence is not locally a critical locus, and Pandharipande and Thomas \circte{PaTh} give examples which are zeroes of almost closed 1-forms, but are not locally critical loci.
\setminusallskip \noindent{\bf(b)} If $X=\mathop{\rm Crit}(f)$ for holomorphic $f:U\rightarrow{\mathbin{\mathbb C}}$, then $f\vert_{X^{\rm red}}$ is locally constant, and we can write $f=f^0+c$ uniquely near $X$ in $U$ for $f^0:U\rightarrow{\mathbin{\mathbb C}}$ holomorphic with $\mathop{\rm Crit}(f^0)=X=\mathop{\rm Crit}(f)$, $f^0\vert_{X^{\rm red}}=0$, and $c:U\rightarrow{\mathbin{\mathbb C}}$ locally constant with~$c\vert_{X^{\rm red}}=f\vert_{X^{\rm red}}$. Defining d-critical loci using $s\in H^0({\mathbin{\cal S}}z_X)$ corresponds to remembering only the function $f^0$ near $X$ in $U$, and forgetting the locally constant function $f\vert_{X^{\rm red}}:X^{\rm red}\rightarrow{\mathbin{\mathbb C}}$. \setminusallskip \noindent{\bf(c)} In \circte[Ex.~2.16]{Joyc.2}, Joyce shows a case in which the algebraic d-critical locus remembers more information, locally, than the symmetric obstruction theory. In \circte[Ex.~2.17]{Joyc.2}, Joyce shows that the (symmetric) obstruction theory remembers global, non-local information which is forgotten by the algebraic d-critical locus. \setminusallskip \noindent{\bf(d)} One can think of critical charts as analogous to Kuranishi neighbourhoods on a topological space, and of embeddings as analogous to coordinate changes between Kuranishi neighbourhoods. \end{rem} Here are~\circte[Props.~2.8, 2.30, Thms.~2.20, 2.28, Def.~2.31, Rem.~2.32 \& Cor.~2.33]{Joyc.2}: \betagin{prop} Let\/ $\phi:X\rightarrow Y$ be a smooth morphism of\/ ${\mathbin{\mathbb K}}$-schemes. Suppose $t\in H^0({\mathbin{\cal S}}z_Y),$ and set\/ $s:=\phi^{\rm st}ar(t)\in H^0({\mathbin{\cal S}}z_X)$. If\/ $(Y,t)$ is a d-critical locus, then\/ $(X,s)$ is a d-critical locus, and\/ $\phi:(X,s)\rightarrow(Y,t)$ is a morphism of d-critical loci. Conversely, if also $\phi:X\rightarrow Y$ is surjective, then $(X,s)$ a d-critical locus implies that $(Y,t)$ is a d-critical locus.
\lambdabel{sa3prop1} \end{prop} \betagin{thm} Suppose\/ $(X,s)$ is an algebraic d-critical locus, and\/ $(R,U,f,i),\allowbreak(S,V,g,j)$ are critical charts on $(X,s)$. Then for each\/ $x\in R\cap S\subseteq X$ there exist subcharts $(R',U',f',i')\subseteq(R,U,f,i),$ $(S',V',g',j')\subseteq (S,V,g,j)$ with\/ $x\in R'\cap S'\subseteq X,$ a critical chart\/ $(T,W,h,k)$ on $(X,s),$ and embeddings $\Phi:(R',U',f',i'){{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra (T,W,h,k),$ $\Psi:(S',V',g',j'){{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra(T,W,h,k)$. \lambdabel{sa3thm2} \end{thm} \betagin{thm} Let\/ $(X,s)$ be an algebraic d-critical locus, and\/ $X^{\rm red}\subseteq X$ the associated reduced\/ ${\mathbin{\mathbb K}}$-subscheme. Then there exists a line bundle $K_{X,s}$ on $X^{\rm red}$ which we call the \betagin{bfseries}canonical bundle\end{bfseries} of\/ $(X,s),$ which is natural up to canonical isomorphism, and is characterized by the following properties: \betagin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] For each $x\in X^{\rm red},$ there is a canonical isomorphism \e \kappa_x:K_{X,s}\vert_x\,{\bulletildrel\cong\overlineer\longrightarrow}\, \bigl(\Lambda^{\rm top}T_x^*X\bigr){}^{\otimes^2}, \lambdabel{sa3eq3} \e where $T_xX$ is the Zariski tangent space of\/ $X$ at\/~$x$. \item[{\bf(b)}] If\/ $(R,U,f,i)$ is a critical chart on $(X,s),$ there is a natural isomorphism \e \iota_{R,U,f,i}:K_{X,s}\vert_{R^{\rm red}}\longrightarrow i^*\bigl(K_U^{\otimes^2}\bigr)\vert_{R^{\rm red}}, \lambdabel{sa3eq4} \e where $K_U=\Lambda^{\mathop{\rm dim}\nolimits U}T^*U$ is the canonical bundle of\/ $U$ in the usual sense. \item[{\bf(c)}] In the situation of\/ {\bf(b)\rm,} let\/ $x\in R$. 
Then we have an exact sequence \e \betagin{gathered} {}\!\!\!\!\xymatrix@C=22pt@R=15pt{ 0 \ar[r] & T_xX \ar[r]^(0.4){{\rm d} i\vert_x} & T_{i(x)}U \ar[rr]^(0.53){\mathop{\rm Hess}\nolimits_{i(x)}f} && T_{i(x)}^*U \ar[r]^{{\rm d} i\vert_x^*} & T_x^*X \ar[r] & 0, } \end{gathered} \lambdabel{sa3eq5} \e and the following diagram commutes: \betagin{equation*} \xymatrix@C=150pt@R=13pt{ *+[r]{K_{X,s}\vert_x} \ar[dr]_{\iota_{R,U,f,i}\vert_x} \ar[r]_(0.55){\kappa_x} & *+[l]{\bigl(\Lambda^{\rm top}T_x^*X\bigr){}^{\otimes^2}} \ar[d]_(0.45){\alpha_{x,R,U,f,i}} \\ & *+[l]{K_U\vert_{i(x)}^{\otimes^2},\!\!\!} } \end{equation*} where $\alpha_{x,R,U,f,i}$ is induced by taking top exterior powers in\/~\eq{sa3eq5}. \end{itemize} \lambdabel{sa3thm3} \end{thm} \betagin{prop} Suppose $\phi:(X,s)\rightarrow(Y,t)$ is a morphism of d-critical loci with\/ $\phi:X\rightarrow Y$ smooth, as in Proposition\/ {\rm\ref{sa3prop1}}. The \betagin{bfseries}relative cotangent bundle\end{bfseries} $T^*_{X/Y}$ is a vector bundle of mixed rank on $X$ in the exact sequence of coherent sheaves on $X\!:$ \e \xymatrix@C=35pt{0 \ar[r] & \phi^*(T^*Y) \ar[r]^(0.55){{\rm d}\phi^*} & T^*X \ar[r] & T^*_{X/Y} \ar[r] & 0. 
} \lambdabel{sa3eq6} \e There is a natural isomorphism of line bundles on $X^{\rm red}\!:$ \e {\mathbin{\mathfrak m}}athop{\tildemesextstyle\rm U}p_\phi:\phi\vert_{X^{\rm red}}^* (K_{Y,t})\otimes\bigl(\Lambda^{\rm top}T^*_{X/Y}\bigr)\big\vert_{X^{\rm red}}^{\otimes^2} \,{\bulletildrel\cong\overlineer\longrightarrow}\,K_{X,s}, \lambdabel{sa3eq7} \e such that for each\/ $x\in X^{\rm red}$ the following diagram of isomorphisms commutes: \e \betagin{gathered} \xymatrix@C=160pt@R=17pt{ *+[r]{K_{Y,t} \vert_{\phi(x)}\otimes\bigl(\Lambda^{\rm top}T^*_{X/Y}\vert_x\bigr)^{\otimes^2}} \ar[r]_(0.7){{\mathbin{\mathfrak m}}athop{\tildemesextstyle\rm U}p_\phi\vert_x} \ar[d]^{\kappa_{\phi(x)}\otimes{\mathop{\rm id}\nolimits}} & *+[l]{K_{X,s}\vert_x} \ar[d]_{\kappa_x} \\ *+[r]{\bigl(\Lambda^{\rm top}T_{\phi(x)}^*Y\bigr)^{\otimes^2}\otimes \bigl(\Lambda^{\rm top}T^*_{X/Y}\vert_x\bigr)^{\otimes^2}} \ar[r]^(0.7){\upsilon_x^{\otimes^2}} & *+[l]{\bigl(\Lambda^{\rm top}T_x^*X\bigr)^{\otimes^2},\!\!{}} } \end{gathered} \lambdabel{sa3eq8} \e where $\kappa_x,\kappa_{\phi(x)}$ are as in {\rm\eq{sa3eq3},} and\/ $\upsilon_x:\Lambda^{\rm top}T_{\phi(x)}^*Y\otimes \Lambda^{\rm top}T^*_{X/Y} \vert_x\rightarrow\Lambda^{\rm top}T_x^*X$ is obtained by restricting \eq{sa3eq6} to $x$ and taking top exterior powers. \lambdabel{sa3prop2} \end{prop} \betagin{dfn} Let $(X,s)$ be an algebraic d-critical locus, and $K_{X,s}$ its canonical bundle from Theorem \ref{sa3thm3}. An {\it orientation\/} on $(X,s)$ is a choice of square root line bundle $K_{X,s}^{1/2}$ for $K_{X,s}$ on $X^{\rm red}$. That is, an orientation is a line bundle $L$ on $X^{\rm red}$, together with an isomorphism $L^{\otimes^2}=L\otimes L\cong K_{X,s}$. A d-critical locus with an orientation will be called an {\it oriented d-critical locus}. 
\lambdabel{sa3def2} \end{dfn} \betagin{rem} In view of equation \eq{sa3eq3}, one might hope to define a canonical orientation $K_{X,s}^{1/2}$ for a d-critical locus $(X,s)$ by $K_{X,s}^{1/2}\big\vert_x=\Lambda^{\rm top}T_x^*X$ for $x\in X^{\rm red}$. However, {\it this does not work}, as the spaces $\Lambda^{\rm top}T_x^*X$ do not vary continuously with $x\in X^{\rm red}$ if $X$ is not smooth. An example in \circte[Ex.~2.39]{Joyc.2} shows that d-critical loci need not admit orientations. \lambdabel{sa3rem1} \end{rem} In the situation of Proposition \ref{sa3prop2}, the factor $(\Lambda^{\rm top}T^*_{X/Y})\vert_{X^{\rm red}}^{\otimes^2}$ in \eq{sa3eq7} has a natural square root $(\Lambda^{\rm top}T^*_{X/Y})\vert_{X^{\rm red}}$. Thus we deduce: \betagin{cor} Let\/ $\phi:(X,s)\rightarrow(Y,t)$ be a morphism of d-critical loci with\/ $\phi:X\rightarrow Y$ smooth. Then each orientation $K_{Y,t}^{1/2}$ for\/ $(Y,t)$ lifts to a natural orientation $K_{X,s}^{1/2}=\phi\vert_{X^{\rm red}}^*(K_{Y,t}^{1/2})\otimes(\Lambda^{\rm top}T^*_{X/Y}) \vert_{X^{\rm red}}$ for~$(X,s)$. \lambdabel{sa3cor1} \end{cor} \subsection{D-critical stacks} \lambdabel{dcr.2} In \circte[\S 2.7--\S 2.8]{Joyc.2} Joyce extends the material of \S\ref{dcr.1} from ${\mathbin{\mathbb K}}$-schemes to Artin ${\mathbin{\mathbb K}}$-stacks. We work in the context of the theory of {\it sheaves on Artin stacks} by Laumon and Moret-Bailly \circte{LaMo}. \betagin{prop}[Laumon and Moret-Bailly \circte{LaMo}] Let\/ $X$ be an Artin\/ ${\mathbin{\mathbb K}}$-stack. 
The category of sheaves of sets on $X$ in the lisse-\'etale topology is equivalent to the category ${\mathop{\rm Sh}\nolimits}(X)$ defined as follows: \setminusallskip \noindent{\bf(A)} Objects ${\mathbin{\cal A}}$ of\/ ${\mathop{\rm Sh}\nolimits}(X)$ comprise the following data: \betagin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] For each\/ ${\mathbin{\mathbb K}}$-scheme $T$ and smooth\/ $1$-morphism $t:T\rightarrow X$ in $\mathop{\bf Art}\nolimits_{\mathbin{\mathbb K}},$ we are given a sheaf of sets ${\mathbin{\cal A}}(T,t)$ on $T,$ in the \'etale topology. \item[{\bf(b)}] For each\/ $2$-commutative diagram in $\mathop{\bf Art}\nolimits_{\mathbin{\mathbb K}}\!:$ \e \betagin{gathered} \xymatrix@C=50pt@R=6pt{ & U \ar[ddr]^u \\ \rrtwocell_{}\omegait^{}\omegait{^{\eta}} && \\ T \ar[uur]^{\phi} \ar[rr]_t && X, } \end{gathered} \lambdabel{sa3eq9} \e where $T,U$ are schemes and\/ $t: T\rightarrow X,$ $u:U\rightarrow X$ are smooth\/ $1$-morphisms in $\mathop{\bf Art}\nolimits_{\mathbin{\mathbb K}},$ we are given a morphism ${\mathbin{\cal A}}(\phi,\eta):\phi^{-1} ({\mathbin{\cal A}}(U,u)) \rightarrow{\mathbin{\cal A}}(T,t)$ of \'etale sheaves of sets on $T$. \end{itemize} This data must satisfy the following conditions: \betagin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(i)}] If\/ $\phi:T\rightarrow U$ in {\bf(b)} is \'etale, then ${\mathbin{\cal A}}(\phi,\eta)$ is an isomorphism. 
\item[{\bf(ii)}] For each\/ $2$-commutative diagram in $\mathop{\bf Art}\nolimits_{\mathbin{\mathbb K}}\!:$ \betagin{equation*} \xymatrix@C=70pt@R=6pt{ & V \ar[ddr]^v \\ \rrtwocell_{}\omegait^{}\omegait{^{\zeta}} && \\ U \ar[uur]^{\psi} \ar[rr]_(0.3)u && X, \\ \urrtwocell_{}\omegait^{}\omegait{^{\eta}} && \\ T \ar[uu]_{\phi} \ar@/_/[uurr]_t } \end{equation*} with $T,U,V$ schemes and\/ $t,u,v$ smooth, we must have \betagin{align*} {\mathbin{\cal A}}\bigl(\psi\circ\phi,(\zeta*{\mathop{\rm id}\nolimits}_{\phi})\odot\eta\bigr) &={\mathbin{\cal A}}(\phi,\eta)\circ\phi^{-1}({\mathbin{\cal A}}(\psi,\zeta))\quad\tildemesext{as morphisms}\\ (\psi\circ\phi)^{-1}({\mathbin{\cal A}}(V,v))&=\phi^{-1}\circ \psi^{-1}({\mathbin{\cal A}}(V,v))\longrightarrow{\mathbin{\cal A}}(T,t). \end{align*} \end{itemize} \noindent{\bf(B)} Morphisms $\alpha:{\mathbin{\cal A}}\rightarrow{\mathbin{\cal B}}$ of\/ ${\mathop{\rm Sh}\nolimits}(X)$ comprise a morphism $\alpha(T,t):{\mathbin{\cal A}}(T,t)\rightarrow{\mathbin{\cal B}}(T,t)$ of \'etale sheaves of sets on a scheme $T$ for all smooth\/ $1$-morphisms $t:T\rightarrow X,$ such that for each diagram \eq{sa3eq9} in {\bf(b)} the following commutes: \betagin{equation*} \xymatrix@C=120pt@R=25pt{*+[r]{\phi^{-1}({\mathbin{\cal A}}(U,u))} \ar[d]^{\phi^{-1}(\alpha(U,u))} \ar[r]_(0.55){{\mathbin{\cal A}}(\phi,\eta)} & *+[l]{{\mathbin{\cal A}}(T,t)} \ar[d]_{\alpha(T,t)} \\ *+[r]{\phi^{-1}({\mathbin{\cal B}}(U,u))} \ar[r]^(0.55){{\mathbin{\cal B}}(\phi,\eta)} & *+[l]{{\mathbin{\cal B}}(T,t).\!{}} } \end{equation*} \noindent{\bf(C)} Composition of morphisms ${\mathbin{\cal A}}\,{\bulletildrel\alpha \overlineer\longrightarrow}\,{\mathbin{\cal B}}\,{\bulletildrel\beta\overlineer\longrightarrow}\,{\mathbin{\cal C}}$ in ${\mathop{\rm Sh}\nolimits}(X)$ is $(\beta\circ\alpha)(T,t)=\allowbreak\beta(T,t)\allowbreak\circ\allowbreak\alpha(T,t)$. 
Identity morphisms ${\mathop{\rm id}\nolimits}_{\mathbin{\cal A}}:{\mathbin{\cal A}}\rightarrow{\mathbin{\cal A}}$ are ${\mathop{\rm id}\nolimits}_{\mathbin{\cal A}}(T,t)={\mathop{\rm id}\nolimits}_{{\mathbin{\cal A}}(T,t)}$. \setminusallskip The analogue of all the above also holds for (\'etale) sheaves of\/ ${\mathbin{\mathbb K}}$-vector spaces, sheaves of\/ ${\mathbin{\mathbb K}}$-algebras, and so on, in place of (\'etale) sheaves of sets. Furthermore, the analogue of all the above holds for quasi-coherent sheaves, (or coherent sheaves, or vector bundles, or line bundles) on $X,$ where in {\bf(a)} ${\mathbin{\cal A}}(T,t)$ becomes a quasi-coherent sheaf (or coherent sheaf, or vector bundle, or line bundle) on $T,$ in {\bf(b)} we replace $\phi^{-1}({\mathbin{\cal A}}(U,u))$ by the pullback\/ $\phi^*({\mathbin{\cal A}}(U,u))$ of quasi-coherent sheaves (etc.), and\/ ${\mathbin{\cal A}}(\phi,\eta),\allowbreak\alpha(T,t)$ become morphisms of quasi-coherent sheaves (etc.) on\/~$T$. \setminusallskip We can also describe \betagin{bfseries}global sections\end{bfseries} of sheaves on Artin ${\mathbin{\mathbb K}}$-stacks in the above framework: a global section $s\in H^0({\mathbin{\cal A}})$ of\/ ${\mathbin{\cal A}}$ in part\/ {\bf(A)} assigns a global section $s(T,t)\in H^0({\mathbin{\cal A}}(T,t))$ of\/ ${\mathbin{\cal A}}(T,t)$ on\/ $T$ for all smooth\/ $t:T\rightarrow X$ from a scheme $T,$ such that\/ ${\mathbin{\cal A}}(\phi,\eta)^*(s(U,u))=s(T,t)$ in $H^0({\mathbin{\cal A}}(T,t))$ for all\/ $2$-commutative diagrams \eq{sa3eq9} with\/ $t,u$ smooth. 
\lambdabel{sa3prop3} \end{prop} In \circte[Cor.~2.52]{Joyc.2} Joyce generalizes the sheaves ${\mathbin{\cal S}}_X,{\mathbin{\cal S}}z_X$ in \S\ref{dcr.1} to Artin ${\mathbin{\mathbb K}}$-stacks: \betagin{prop} Let\/ $X$ be an Artin ${\mathbin{\mathbb K}}$-stack, and write\/ ${\mathop{\rm Sh}\nolimits}(X)_{\mathbin{\mathbb K}}alg$ and\/ ${\mathop{\rm Sh}\nolimits}(X)_{\mathbin{\mathbb K}}vect$ for the categories of sheaves of\/ ${\mathbin{\mathbb K}}$-algebras and\/ ${\mathbin{\mathbb K}}$-vector spaces on $X$ defined in Proposition\/ {\rm\ref{sa3prop3}}. Then: \betagin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] We may define canonical objects\/ ${\mathbin{\cal S}}_X$ in both\/ ${\mathop{\rm Sh}\nolimits}(X)_{\mathbin{\mathbb K}}alg$ and\/ ${\mathop{\rm Sh}\nolimits}(X)_{\mathbin{\mathbb K}}vect$ by ${\mathbin{\cal S}}_X(T,t):={\mathbin{\cal S}}_T$ for all smooth morphisms $t:T\rightarrow X$ for $T\in\mathop{\rm Sch}\nolimits_{\mathbin{\mathbb K}},$ for ${\mathbin{\cal S}}_T$ as in\/ {\rm\S\ref{dcr.1}} taken to be a sheaf of\/ ${\mathbin{\mathbb K}}$-algebras (or ${\mathbin{\mathbb K}}$-vector spaces) on $T$ in the \'etale topology, and\/ ${\mathbin{\cal S}}_X(\phi,\eta) :=\phi^{\rm st}ar:\phi^{-1}({\mathbin{\cal S}}_X(U,u))=\phi^{-1}({\mathbin{\cal S}}_U)\rightarrow{\mathbin{\cal S}}_T ={\mathbin{\cal S}}_X(T,t)$ for all\/ $2$-commutative diagrams \eq{sa3eq9} in $\mathop{\bf Art}\nolimits_{\mathbin{\mathbb K}}$ with\/ $t,u$ smooth, where $\phi^{\rm st}ar$ is as in {\rm\S\ref{dcr.1}}. 
\item[{\bf(b)}] There is a natural decomposition ${\mathbin{\cal S}}_X\!=\!{\mathbin{\mathbb K}}_X\!\oplus\!{\mathbin{\cal S}}z_X$ in ${\mathop{\rm Sh}\nolimits}(X)_{\mathbin{\mathbb K}}vect$ induced by the splitting ${\mathbin{\cal S}}_X(T,t)\!=\!{\mathbin{\cal S}}_T\!=\!{\mathbin{\mathbb K}}_T\oplus{\mathbin{\cal S}}z_T$ in {\rm\S\ref{dcr.1},} where ${\mathbin{\mathbb K}}_X$ is a sheaf of\/ ${\mathbin{\mathbb K}}$-subalgebras of\/ ${\mathbin{\cal S}}_X$ in ${\mathop{\rm Sh}\nolimits}(X)_{\mathbin{\mathbb K}}alg,$ and\/ ${\mathbin{\cal S}}z_X$ a sheaf of ideals~in\/~${\mathbin{\cal S}}_X$. \end{itemize} \lambdabel{sa3prop4} \end{prop} Here \circte[Def. 2.53]{Joyc.2} is the generalization of Definition \ref{sa3def1} to Artin stacks. \betagin{dfn} A {\it d-critical stack\/} $(X,s)$ is an Artin ${\mathbin{\mathbb K}}$-stack $X$ and a global section $s\in H^0({\mathbin{\cal S}}z_X)$, where ${\mathbin{\cal S}}z_X$ is as in Proposition \ref{sa3prop4}, such that $\bigl(T,s(T,t)\bigr)$ is an algebraic d-critical locus in the sense of Definition \ref{sa3def1} for all smooth morphisms $t:T\rightarrow X$ with~$T\in\mathop{\rm Sch}\nolimits_{\mathbin{\mathbb K}}$. \lambdabel{sa3def3} \end{dfn} Here is a convenient way to understand d-critical stacks $(X,s)$ in terms of d-critical structures on an atlas $t:T\rightarrow X$ for~$X$ from \circte[Prop.~2.54]{Joyc.2}. \betagin{prop} Let\/ $X$ be an Artin ${\mathbin{\mathbb K}}$-stack, and\/ $t: T\rightarrow X$ a smooth atlas for $X$. 
Then $T\tildemes_{t,X,t}T$ is equivalent to a ${\mathbin{\mathbb K}}$-scheme $U$ as $t$ is representable and\/ $T$ a scheme, so we have a $2$-Cartesian diagram \e \betagin{gathered} \xymatrix@C=90pt@R=21pt{ *+[r]{U} \ar[d]^{\pi_1} \ar[r]_(0.3){\pi_2} {\rm d}rtwocell_{}\omegait^{}\omegait{^{\eta}} & *+[l]{ T} \ar[d]_t \\ *+[r]{T} \ar[r]^(0.7)t & *+[l]{X} } \end{gathered} \lambdabel{sa3eq10} \e in $\mathop{\bf Art}\nolimits_{\mathbin{\mathbb K}},$ with\/ $\pi_1,\pi_2:U\rightarrow T$ smooth morphisms in $\mathop{\rm Sch}\nolimits_{\mathbin{\mathbb K}}$. Also $T,U,\pi_1,\pi_2$ can be naturally completed to a smooth groupoid in $\mathop{\rm Sch}\nolimits_{\mathbin{\mathbb K}},$ and\/ $X$ is equivalent in $\mathop{\bf Art}\nolimits_{\mathbin{\mathbb K}}$ to the associated groupoid stack\/ $[U\rightrightarrows T]$. \betagin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(i)}] Let\/ ${\mathbin{\cal S}}_X$ be as in Proposition\/ {\rm\ref{sa3prop4},} and\/ ${\mathbin{\cal S}}_T,{\mathbin{\cal S}}_U$ be as in {\rm\S\ref{dcr.1},} regarded as sheaves on $T,U$ in the \'etale topology, and define $\pi_i^{\rm st}ar:\pi_i^{-1}({\mathbin{\cal S}}_T)\rightarrow {\mathbin{\cal S}}_U$ as in {\rm\S\ref{dcr.1}} for $i=1,2$. Consider the map $t^*:H^0({\mathbin{\cal S}}_X)\rightarrow H^0({\mathbin{\cal S}}_T)$ mapping $t^*:s{\mathbin{\mathfrak m}}apsto s(T,t)$. This is injective, and induces a bijection \e t^*:H^0({\mathbin{\cal S}}_X)\,{\bulletildrel\cong\overlineer\longrightarrow}\,\bigl\{s'\in H^0({\mathbin{\cal S}}_T):\tildemesext{$\pi_1^{\rm st}ar(s')= \pi_2^{\rm st}ar(s')$ in $H^0({\mathbin{\cal S}}_U)$}\bigr\}. \lambdabel{sa3eq11} \e The analogue holds for ${\mathbin{\cal S}}z_X,{\mathbin{\cal S}}z_T,{\mathbin{\cal S}}z_U$. \item[{\bf(ii)}] Suppose $s\in H^0({\mathbin{\cal S}}z_X),$ so that\/ $t^*(s)\in H^0({\mathbin{\cal S}}z_T)$ with\/ $\pi_1^{\rm st}ar\circ t^*(s)=\pi_2^{\rm st}ar\circ t^*(s)$. 
Then $(X,s)$ is a d-critical stack if and only if\/ $\bigl(T,t^*(s)\bigr)$ is an algebraic d-critical locus, and then\/ $\bigl(U,\pi_1^{\rm st}ar\circ t^*(s)\bigr)$ is also an algebraic d-critical locus. \end{itemize} \lambdabel{sa3prop5} \end{prop} In \circte[Ex.~2.55]{Joyc.2} we consider quotient stacks $X=[T/G]$. \betagin{ex} Suppose an algebraic ${\mathbin{\mathbb K}}$-group $G$ acts on a ${\mathbin{\mathbb K}}$-scheme $T$ with action ${\mathbin{\mathfrak m}}u:G\tildemes T\rightarrow T$, and write $X$ for the quotient Artin ${\mathbin{\mathbb K}}$-stack $[T/G]$. Then as in \eq{sa3eq10} there is a natural 2-Cartesian diagram \betagin{equation*} \xymatrix@C=110pt@R=21pt{ *+[r]{G\tildemes T} \ar[d]^{\pi_T} \ar[r]_(0.3){{\mathbin{\mathfrak m}}u} {\rm d}rtwocell_{}\omegait^{}\omegait{^{\eta}} & *+[l]{T} \ar[d]_t \\ *+[r]{T} \ar[r]^(0.6)t & *+[l]{X=[T/G],\!{}} } \end{equation*} where $t:T\rightarrow X$ is a smooth atlas for $X$. If $s'\in H^0({\mathbin{\cal S}}z_T)$ then $\pi_1^{\rm st}ar(s')=\pi_2^{\rm st}ar(s')$ in \eq{sa3eq11} becomes $\pi_T^{\rm st}ar(s')={\mathbin{\mathfrak m}}u^{\rm st}ar(s')$ on $G\tildemes T$, that is, $s'$ is $G$-invariant. Hence, Proposition \ref{sa3prop5} shows that d-critical structures $s$ on $X=[T/G]$ are in 1-1 correspondence with $G$-invariant d-critical structures $s'$ on~$T$. \lambdabel{sa3ex} \end{ex} Here \circte[Th.~2.56]{Joyc.2} is an analogue of Theorem~\ref{sa3thm3}. \betagin{thm} Let\/ $(X,s)$ be a d-critical stack. 
Using the description of quasi-coherent sheaves on $X^{\rm red}$ in Proposition {\rm\ref{sa3prop3}} there is a line bundle $K_{X,s}$ on the reduced\/ ${\mathbin{\mathbb K}}$-substack\/ $X^{\rm red}$ of\/ $X$ called the \betagin{bfseries}canonical bundle\end{bfseries} of\/ $(X,s),$ unique up to canonical isomorphism, such that: \betagin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] For each point\/ $x\in X^{\rm red}\subseteq X$ we have a canonical isomorphism \e \kappa_x:K_{X,s}\vert_x\,{\bulletildrel\cong\overlineer\longrightarrow}\, \bigl(\Lambda^{\rm top}T_x^*X\bigr)^{\otimes^2} \otimes\bigl(\Lambda^{\rm top}{\mathop{\mathfrak{Iso}}\nolimits}_x(X)\bigr)^{\otimes^2}, \lambdabel{sa3eq12} \e where $T_x^*X$ is the Zariski cotangent space of\/ $X$ at\/ $x,$ and\/ ${\mathop{\mathfrak{Iso}}\nolimits}_x(X)$ the Lie algebra of the isotropy group (stabilizer group) ${\mathbin{\mathfrak m}}athop{\rm Iso}\nolimits_x(X)$ of\/ $X$ at\/~$x$. \item[{\bf(b)}] If\/ $T$ is a ${\mathbin{\mathbb K}}$-scheme and\/ $t: T\rightarrow X$ a smooth $1$-morphism, so that\/ $t^{\rm red}:T^{\rm red}\rightarrow X^{\rm red}$ is also smooth, then there is a natural isomorphism of line bundles on $T^{\rm red}\!:$ \e \Gamma_{T,t}:K_{X,s}(T^{\rm red},t^{\rm red})\,{\bulletildrel\cong\overlineer\longrightarrow}\, K_{T,s(T,t)}\otimes \bigl(\Lambda^{\rm top}T^*_{ T/X}\bigr)\big\vert_{T^{\rm red}}^{\otimes^{-2}}. \lambdabel{sa3eq13} \e Here $\bigl(T,s(T,t)\bigr)$ is an algebraic d-critical locus by Definition\/ {\rm\ref{sa3def3},} and\/ $K_{T,s(T,t)}\rightarrow T^{\rm red}$ is its canonical bundle from Theorem\/~{\rm\ref{sa3thm3}}. 
\item[{\bf(c)}] If\/ $t:T\rightarrow X$ is a smooth $1$-morphism, we have a distinguished triangle in~$D_{{\rm qcoh}}(T)\!:$ \e \xymatrix@C=40pt{ t^*({\mathbin{\mathbb L}}_X) \ar[r]^(0.55){{\mathbin{\mathbb L}}_t} & {\mathbin{\mathbb L}}_T \ar[r] & T^*_{T/X} \ar[r] & t^*({\mathbin{\mathbb L}}_X)[1], } \lambdabel{sa3eq14} \e where ${\mathbin{\mathbb L}}_T,{\mathbin{\mathbb L}}_X$ are the cotangent complexes of\/ $T,X,$ and\/ $T^*_{T/X}$ the relative cotangent bundle of\/ $t: T\rightarrow X,$ a vector bundle of mixed rank on\/ $T$. Let\/ $p\in T^{\rm red}\subseteq T,$ so that\/ $t(p):=t\circ p\in X$. Taking the long exact cohomology sequence of\/ \eq{sa3eq14} and restricting to $p\in T$ gives an exact sequence \e 0 \longrightarrow T^*_{t(p)}X \longrightarrow T^*_pT \longrightarrow T^*_{ T/X}\vert_p \longrightarrow {\mathop{\mathfrak{Iso}}\nolimits}_{t(p)}(X)^* \longrightarrow 0. \lambdabel{sa3eq15} \e Then the following diagram commutes: \betagin{equation*} \xymatrix@!0@C=108pt@R=50pt{*+[r]{K_{X,s}\vert_{t(p)}} \ar[d]^{\kappa_{t(p)}} \ar@{=}[r] & K_{X,s}(T^{\rm red},t^{\rm red})\vert_p \ar[rr]_(0.3){\Gamma_{T,t}\vert_p} && *+[l]{K_{T,s(T,t)}\vert_p\otimes \bigl(\Lambda^{\rm top}T^*_{\setminusash{ T/X}}\bigr)\big\vert_p^{\otimes^{-2}}} \ar[d]_{\kappa_p\otimes{\mathop{\rm id}\nolimits}} \\ *+[r]{\bigl(\Lambda^{\rm top}T_{t(p)}^*X\bigr)^{\otimes^2}\!\!\otimes\!\bigl(\Lambda^{\rm top}{\mathop{\mathfrak{Iso}}\nolimits}_{t(p)}(X)\bigr)^{\otimes^2}} \ar[rrr]^(0.54){\alpha_p^2} &&& *+[l]{\bigl(\Lambda^{\rm top}T^*_pT\bigr)^{\otimes^2}\!\!\otimes\! 
\bigl(\Lambda^{\rm top}T^*_{T/X}\bigr) \big\vert_p^{\otimes^{-2}},} } \end{equation*} where $\kappa_p,\kappa_{t(p)},\Gamma_{T,t}$ are as in {\rm\eq{sa3eq3}, \eq{sa3eq12}} and\/ {\rm\eq{sa3eq13},} respectively, and\/ $$\alpha_p:\Lambda^{\rm top}T_{t(p)}^*X\otimes\Lambda^{\rm top}{\mathop{\mathfrak{Iso}}\nolimits}_{t(p)}(X)\,{\bulletildrel\cong\overlineer\longrightarrow}\,\Lambda^{\rm top}T^*_pT\otimes\Lambda^{\rm top}T^*_{T/X}\vert^{-1}_p$$ is induced by taking top exterior powers in\/~\eq{sa3eq15}. \end{itemize} \lambdabel{sa3thm5} \end{thm} Here \circte[Def.~2.57]{Joyc.2} is the analogue of Definition~\ref{sa3def2}: \betagin{dfn} Let $(X,s)$ be a d-critical stack, and $K_{X,s}$ its canonical bundle from Theorem \ref{sa3thm5}. An {\it orientation\/} on $(X,s)$ is a choice of square root line bundle $K_{X,s}^{1/2}$ for $K_{X,s}$ on $X^{\rm red}$. That is, an orientation is a line bundle $L$ on $X^{\rm red}$, together with an isomorphism $L^{\otimes^2}=L\otimes L\cong K_{X,s}$. A d-critical stack with an orientation will be called an {\it oriented d-critical stack}. \lambdabel{sa3def4} \end{dfn} Let $(X,s)$ be an oriented d-critical stack. Then for each smooth $t:T\rightarrow X$ we have a square root $K_{X,s}^{1/2} (T^{\rm red},t^{\rm red})$. Thus by \eq{sa3eq13}, $K_{X,s}^{1/2}(T^{\rm red},t^{\rm red})\otimes (\Lambda^{\rm top}{\mathbin{\mathbb L}}_{\setminusash{T/X}})\vert_{T^{\rm red}}$ is a square root for $K_{T,s(T,t)}$. This proves~\circte[Lem.~2.58]{Joyc.2}: \betagin{lem} Let\/ $(X,s)$ be a d-critical stack. Then an orientation $K_{X,s}^{1/2}$ for $(X,s)$ determines a canonical orientation\/ $K_{T,s(T,t)}^{1/2}$ for the algebraic d-critical locus $\bigl(T,s(T,t)\bigr),$ for all smooth\/ $t: T\rightarrow X$ with\/ $T$ a ${\mathbin{\mathbb K}}$-scheme. \lambdabel{sa3lem1} \end{lem} \subsection{Equivariant d-critical loci} \lambdabel{eqdcr} Here we summarize some results about group actions on algebraic d-critical loci from \circte{Joyc2}.
\betagin{dfn} Let $(X,s)$ be an algebraic d-critical locus over ${\mathbin{\mathbb K}}$, and ${\mathbin{\mathfrak m}}u:G\tildemes X\rightarrow X$ an action of an algebraic ${\mathbin{\mathbb K}}$-group $G$ on the ${\mathbin{\mathbb K}}$-scheme $X$. We also write the action as ${\mathbin{\mathfrak m}}u(\gamma):X\rightarrow X$ for $\gamma\in G$. We say that $(X,s)$ is $G$-{\it invariant\/} if ${\mathbin{\mathfrak m}}u(\gamma)^{\rm st}ar(s)=s$ for all $\gamma\in G$, or equivalently, if ${\mathbin{\mathfrak m}}u^{\rm st}ar(s)=\pi_X^{\rm st}ar(s)$ in $H^0({\mathbin{\cal S}}z_{G\tildemes X})$, where $\pi_X:G\tildemes X\rightarrow X$ is the projection. \setminusallskip Let ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi:G\rightarrow{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb G}}_m$ be a morphism of algebraic ${\mathbin{\mathbb K}}$-groups, that is, a character of $G$, where ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb G}}_m={\mathbin{\mathbb K}}\setminus\{0\}$ is the multiplicative group. We say that $(X,s)$ is $G$-{\it equivariant, with character\/} ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi,$ if ${\mathbin{\mathfrak m}}u(\gamma)^{\rm st}ar(s)={\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(\gamma)\cdot s$ for all $\gamma\in G$, or equivalently, if ${\mathbin{\mathfrak m}}u^{\rm st}ar(s)=({\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi\circ\pi_G)\cdot(\pi_X^{\rm st}ar(s))$ in $H^0({\mathbin{\cal S}}z_{G\tildemes X})$, where $H^0({\mathbin{\cal O}}_G)\ni{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi$ acts on $H^0({\mathbin{\cal S}}z_{G\tildemes X})$ by multiplication, as $G$ is a smooth ${\mathbin{\mathbb K}}$-scheme. \setminusallskip Suppose $(X,s)$ is $G$-invariant or $G$-equivariant, with ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi=1$ in the $G$-invariant case. 
We call a critical chart $(R,U,f,i)$ on $(X,s)$ with a $G$-action $\rho:G\tildemes U\rightarrow U$ a $G$-{\it equivariant critical chart\/} if $R\subseteq X$ is a $G$-invariant open subscheme, and $i:R{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra U$, $f:U\rightarrow{\mathbin{\mathbb A}}^1$ are equivariant with respect to the actions ${\mathbin{\mathfrak m}}u\vert_{G\tildemes R},\rho,{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi$ of $G$ on $R,U,{\mathbin{\mathbb A}}^1$, respectively. \setminusallskip We call a subchart $(R',U',f',i')\subseteq(R,U,f,i)$ a $G$-{\it equivariant subchart\/} if $R'\subseteq R$ and $U'\subseteq U$ are $G$-invariant open subschemes. Then $(R',U',f',i'),\rho'$ is a $G$-equivariant critical chart, where~$\rho'=\rho\vert_{G\tildemes U'}$. \lambdabel{dc2def7} \end{dfn} Note that $X$ may not be covered by $G$-equivariant critical charts without extra assumptions on~$X,G$. We will restrict to the case when $G$ is a torus, with a `good' action on~$X$: \betagin{dfn} Let $X$ be a ${\mathbin{\mathbb K}}$-scheme, $G$ an algebraic ${\mathbin{\mathbb K}}$-torus, and ${\mathbin{\mathfrak m}}u:G\tildemes X\rightarrow X$ an action of $G$ on $X$. We call ${\mathbin{\mathfrak m}}u$ a {\it good action\/} if $X$ admits a Zariski open cover by $G$-invariant affine open ${\mathbin{\mathbb K}}$-subschemes $U\subseteq X$. \lambdabel{dc2def8} \end{dfn} A torus-equivariant d-critical locus $(X,s)$ admits an open cover by equivariant critical charts if and only if the torus action is good: \betagin{prop} Let\/ $(X,s)$ be an algebraic d-critical locus which is invariant or equivariant under the action ${\mathbin{\mathfrak m}}u:G\tildemes X\rightarrow X$ of an algebraic torus $G$. 
\setminusallskip \noindent{\bf(a)} If\/ ${\mathbin{\mathfrak m}}u$ is good then for all\/ $x\in X$ there exists a $G$-equivariant critical chart\/ $(R,U,f,i),\rho$ on $(X,s)$ with\/ $x\in R,$ and we may take $\mathop{\rm dim}\nolimits U=\mathop{\rm dim}\nolimits T_xX$. \setminusallskip \noindent{\bf(b)} Conversely, if for all\/ $x\in X$ there exists a $G$-equivariant critical chart\/ $(R,U,f,i),\rho$ on $(X,s)$ with\/ $x\in R,$ then ${\mathbin{\mathfrak m}}u$ is good. \lambdabel{dc2prop14} \end{prop} \section{Derived symplectic structures in Donaldson--Thomas theory} \lambdabel{ourpapers} We now use derived algebraic geometry from \circte{PTVV} and summarize the main results from the series of papers \circte{BBJ,BBDJS,BJM,BBBJ} and their consequences in Donaldson--Thomas theory. Some of them will not be used to prove our main results stated in \S\ref{main.1}, but we present them here as they contribute to a complete picture of the theory. \subsection{Symplectic derived schemes and critical loci} Here we summarize the main results from \circte{BBJ}. The following is \circte[Thm. 5.18]{BBJ}. \betagin{thm} Let\/ ${\bs X}$ be a derived\/ ${\mathbin{\mathbb K}}$-scheme with\/ $k$-shifted symplectic form $\tildemesi\omega$ for $k<0,$ and\/ $x\in{\bs X}$.
Then there exists a standard form cdga $A$ over ${\mathbin{\mathbb K}}$ which is minimal at\/ $p\in{\mathbin{\mathfrak m}}athop{\rm Sp}\nolimitsec H^0(A)$ in the sense of \circte[\S 4]{BBJ}, a $k$-shifted symplectic form\/ $\omega$ on $\mathop{\boldsymbol{\rm Spec}}\nolimits A,$ and a morphism $\boldsymbol f:{\bs U}=\mathop{\boldsymbol{\rm Spec}}\nolimits A\rightarrow{\bs X}$ with\/ $\boldsymbol f(p)=x$ and\/ $\boldsymbol f^*(\tildemesi\omega)\sigmam\omega,$ such that if\/ $k$ is odd or divisible by $4,$ then $\boldsymbol f$ is a Zariski open inclusion, and\/ $A,\omega$ are in Darboux form, and if\/ $k\equiv 2{\mathbin{\mathfrak m}}od 4,$ then $\boldsymbol f$ is \'etale, and\/ $A,\omega$ are in strong Darboux form, as in \circte[\S 5]{BBJ}. \lambdabel{sa2thm3} \end{thm} Let $Y$ be a Calabi--Yau $m$-fold over ${\mathbin{\mathbb K}}$, that is, a smooth projective ${\mathbin{\mathbb K}}$-scheme with $H^i({\mathbin{\cal O}}_Y)={\mathbin{\mathbb K}}$ for $i=0,m$ and $H^i({\mathbin{\cal O}}_Y)=0$ for $0<i<m$. Suppose ${\mathbin{\cal M}}$ is a classical moduli ${\mathbin{\mathbb K}}$-scheme of simple coherent sheaves in $\mathop{\rm coh}\nolimits(Y)$, where we call $F\in\mathop{\rm coh}\nolimits(Y)$ {\it simple\/} if $\mathop{\rm Ho}\nolimitsm(F,F)={\mathbin{\mathbb K}}$. More generally, suppose ${\mathbin{\cal M}}$ is a moduli ${\mathbin{\mathbb K}}$-scheme of simple complexes of coherent sheaves in $D^b\mathop{\rm coh}\nolimits(Y)$, where we call $F^\bullet\in D^b\mathop{\rm coh}\nolimits(Y)$ {\it simple\/} if $\mathop{\rm Ho}\nolimitsm(F^\bullet,F^\bullet)={\mathbin{\mathbb K}}$ and ${\mathbin{\mathfrak m}}athop{\rm Ext}^{<0}(F^\bullet,F^\bullet)=0$. Such moduli spaces ${\mathbin{\cal M}}$ are only known to be algebraic ${\mathbin{\mathbb K}}$-spaces in general, but we assume ${\mathbin{\cal M}}$ is a ${\mathbin{\mathbb K}}$-scheme. 
Then ${\mathbin{\cal M}}=t_0(\boldsymbol{\mathbin{\cal M}})$, for $\boldsymbol{\mathbin{\cal M}}$ the corresponding derived moduli ${\mathbin{\mathbb K}}$-scheme. To make ${\mathbin{\cal M}},\boldsymbol{\mathbin{\cal M}}$ into schemes rather than stacks, we consider moduli of sheaves or complexes with fixed determinant. Then Pantev et al.\ \circte[\S 2.1]{PTVV} prove $\boldsymbol{\mathbin{\cal M}}$ has a $(2-m)$-shifted symplectic structure $\omega$, so Theorem \ref{sa2thm3} shows that $(\boldsymbol{\mathbin{\cal M}},\omega)$ is Zariski locally modelled on $(\mathop{\boldsymbol{\rm Spec}}\nolimits A,\omega)$, and ${\mathbin{\cal M}}$ is Zariski locally modelled on ${\mathbin{\mathfrak m}}athop{\rm Sp}\nolimitsec H^0(A)$. In the case $m=3$, so that $k=-1$, we get \circte[Cor. 5.19]{BBJ}: \betagin{cor} Suppose $Y$ is a Calabi--Yau\/ $3$-fold over a field\/ ${\mathbin{\mathbb K}},$ and\/ ${\mathbin{\cal M}}$ is a classical moduli ${\mathbin{\mathbb K}}$-scheme of simple coherent sheaves, or simple complexes of coherent sheaves, on $Y$. Then for each\/ $[F]\in{\mathbin{\cal M}},$ there exist a smooth\/ ${\mathbin{\mathbb K}}$-scheme $U$ with\/ $\mathop{\rm dim}\nolimits U=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(F,F),$ a regular function $f:U\rightarrow{\mathbin{\mathbb A}}^1,$ and an isomorphism from $\mathop{\rm Crit}(f)\subseteq U$ to a Zariski open neighbourhood of\/ $[F]$ in\/~${\mathbin{\cal M}}$. \lambdabel{da5cor1} \end{cor} Here $\mathop{\rm dim}\nolimits U=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(F,F)$ comes from $A$ minimal at $p$ and $\boldsymbol f(p)=[F]$ in Theorem \ref{sa2thm3}. This is a new result in Donaldson--Thomas theory. 
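The role of minimality here can already be seen in the simplest charts. The following toy computation is ours, not taken from \circte{BBJ}, over a field ${\mathbin{\mathbb K}}$ of characteristic zero:

```latex
% Toy example (ours, not from the cited papers): minimality of a critical
% chart on U = A^1 with coordinate x, over a field of characteristic zero.
\begin{align*}
f(x)=x^3:&& \mathop{\rm Crit}(f)&=\operatorname{Spec}{\mathbin{\mathbb K}}[x]/(x^2), &
\dim T_0\mathop{\rm Crit}(f)&=1=\dim U \ \ \text{(minimal at $0$)},\\
f(x)=x^2:&& \mathop{\rm Crit}(f)&=\operatorname{Spec}{\mathbin{\mathbb K}}[x]/(x), &
\dim T_0\mathop{\rm Crit}(f)&=0<\dim U \ \ \text{(not minimal at $0$)}.
\end{align*}
```

In the second case a minimal chart at the origin is the zero function on a point. In Corollary \ref{da5cor1} it is precisely minimality of $A$ at $p$ that forces $\mathop{\rm dim}\nolimits U=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(F,F)$, the dimension of the Zariski tangent space of ${\mathbin{\cal M}}$ at $[F]$.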
We already explained that when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$ and ${\mathbin{\cal M}}$ is a moduli space of simple coherent sheaves on $Y$, using gauge theory and transcendental complex methods, Joyce and Song \circte[Th.~5.4]{JoSo} prove that the underlying complex analytic space ${\mathbin{\cal M}}^{\rm an}$ of ${\mathbin{\cal M}}$ is locally of the form $\mathop{\rm Crit}(f)$ for $U$ a complex manifold and $f:U\rightarrow{\mathbin{\mathbb C}}$ a holomorphic function. Behrend and Getzler announced the analogue of \circte[Th.~5.4]{JoSo} for moduli of complexes in $D^b\mathop{\rm coh}\nolimits(Y)$, but the proof has not yet appeared. Over general ${\mathbin{\mathbb K}}$, as in Kontsevich and Soibelman \circte[\S 3.3]{KoSo1} the formal neighbourhood ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}at{\mathbin{\cal M}}_{[F]}$ of ${\mathbin{\cal M}}$ at any $[F]\in{\mathbin{\cal M}}$ is isomorphic to the critical locus $\mathop{\rm Crit}({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}at f)$ of a formal power series ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}at f$ on $\mathop{\rm Ext}\nolimits^1(F,F)$ with only cubic and higher terms. \setminusallskip Here are \circte[Thm. 6.6 \& Cor. 6.7]{BBJ}: \betagin{thm} Suppose\/ $({\bs X},\tildemesi\omega)$ is a $-1$-shifted symplectic derived\/ ${\mathbin{\mathbb K}}$-scheme, and let\/ $X=t_0({\bs X})$ be the associated classical\/ ${\mathbin{\mathbb K}}$-scheme of\/ ${{\bs X}}$. 
Then $X$ extends uniquely to an algebraic d-critical locus\/ $(X,s),$ with the property that whenever\/ $(\mathop{\boldsymbol{\rm Spec}}\nolimits A,\omega)$ is a $-1$-shifted symplectic derived\/ ${\mathbin{\mathbb K}}$-scheme in Darboux form with Hamiltonian $H\in A(0),$ as in \circte[Ex.s 5.8 \& 5.15]{BBJ}, and\/ $\boldsymbol f:\mathop{\boldsymbol{\rm Spec}}\nolimits A\rightarrow{\bs X}$ is an equivalence in $\mathop{\bf dSch}\nolimits_{\mathbin{\mathbb K}}$ with a Zariski open derived\/ ${\mathbin{\mathbb K}}$-subscheme\/ ${\bs R}\subseteq{\bs X}$ with\/ $\boldsymbol f^*(\tildemesi\omega)\sigmam\omega,$ writing\/ $U={\mathbin{\mathfrak m}}athop{\rm Sp}\nolimitsec A(0),$ $R=t_0({\bs R}),$ $f=t_0(\boldsymbol f)$ so that\/ $H:U\rightarrow{\mathbin{\mathbb A}}^1$ is regular and\/ $f:\mathop{\rm Crit}(H)\rightarrow R$ is an isomorphism, for $\mathop{\rm Crit}(H)\subseteq U$ the classical critical locus of\/ $H,$ then $(R,U,H,f^{-1})$ is a critical chart on\/~$(X,s)$. The canonical bundle $K_{X,s}$ from Theorem\/ {\rm\ref{sa3thm3}} is naturally isomorphic to the determinant line bundle ${\rm d}eltat({\mathbin{\mathbb L}}_{{\bs X}})\vert_{X^{\rm red}}$ of the cotangent complex\/ ${\mathbin{\mathbb L}}_{{\bs X}}$ of\/~${\bs X}$. \lambdabel{da6thm4} \end{thm} We can think of Theorem \ref{da6thm4} as defining a {\it truncation functor} \e \betagin{split} F:\bigl\{&\tildemesext{category of $-1$-shifted symplectic derived ${\mathbin{\mathbb K}}$-schemes $({\bs X},\omega)$}\bigr\}\\ &\longrightarrow\bigl\{\tildemesext{category of algebraic d-critical loci $(X,s)$ over ${\mathbin{\mathbb K}}$}\bigr\}, \end{split} \lambdabel{da6eq3} \e where the morphisms $\boldsymbol f:({\bs X},\omega)\rightarrow({\bs Y},\omega')$ in the first line are (homotopy classes of) \'etale maps $\boldsymbol f:{\bs X}\rightarrow{\bs Y}$ with $\boldsymbol f^*(\omega')\sigmam\omega$, and the morphisms $f:(X,s)\rightarrow (Y,t)$ in the second line are \'etale maps $f:X\rightarrow Y$ with~$f^*(t)=s$. 
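To keep in mind what the d-critical structure $s$ of Theorem \ref{da6thm4} looks like, here is a sketch of the local picture of \S\ref{dcr.1}, in our paraphrase (we write $\mathcal{S}^0_X$ for the sheaf denoted ${\mathbin{\cal S}}z_X$ above):

```latex
% Sketch (our paraphrase of the local model of the sheaf S_X; not verbatim
% from the sources). Embed X = Crit(H) in the smooth affine scheme
% U = Spec A(0) with ideal sheaf I, and suppose H|_{X^red} = 0. Then on the
% critical chart (R, U, H, f^{-1}) the d-critical structure is locally the
% class of H modulo I^2:
\begin{equation*}
s \;=\; H + I^2 \;\in\; H^0(\mathcal{S}^0_X),
\qquad
I \;=\; \bigl(\partial H/\partial x_1,\dots,\partial H/\partial x_n\bigr),
\end{equation*}
% so s records H up to second order along X, but forgets the particular
% embedding X in U, which is what allows s to glue over different charts.
```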
In \circte[Ex.~2.17]{Joyc2} Joyce gives an example of $-1$-shifted symplectic derived schemes $({\bs X},\omega),({\bs Y},\omega')$, both global critical loci, such that ${\bs X},{\bs Y}$ are not equivalent as derived ${\mathbin{\mathbb K}}$-schemes, but their truncations $F({\bs X},\omega),F({\bs Y},\omega')$ are isomorphic as algebraic d-critical loci. Thus, the functor $F$ in \eq{da6eq3} is not full. \setminusallskip Suppose again $Y$ is a Calabi--Yau 3-fold over ${\mathbin{\mathbb K}}$ and ${\mathbin{\cal M}}$ a classical moduli ${\mathbin{\mathbb K}}$-scheme of simple coherent sheaves in $\mathop{\rm coh}\nolimits(Y)$. Then Thomas \circte{Thom} defined a natural {\it perfect obstruction theory\/} $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ on ${\mathbin{\cal M}}$ in the sense of Behrend and Fantechi \circte{BeFa}, and Behrend \circte{Behr} showed that $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ can be made into a {\it symmetric obstruction theory}. More generally, if ${\mathbin{\cal M}}$ is a moduli ${\mathbin{\mathbb K}}$-scheme of simple complexes of coherent sheaves in $D^b\mathop{\rm coh}\nolimits(Y)$, then Huybrechts and Thomas \circte{HuTh} defined a natural symmetric obstruction theory on~${\mathbin{\cal M}}$. Now in derived algebraic geometry ${\mathbin{\cal M}}=t_0(\boldsymbol{\mathbin{\cal M}})$ for $\boldsymbol{\mathbin{\cal M}}$ the corresponding derived moduli ${\mathbin{\mathbb K}}$-scheme, and the obstruction theory $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ from \circte{HuTh,Thom} is ${\mathbin{\mathbb L}}_{t_0}:{\mathbin{\mathbb L}}_{\boldsymbol{\mathbin{\cal M}}}\vert_{\mathbin{\cal M}}\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$. 
Pantev et al.\ \circte[\S 2.1]{PTVV} prove $\boldsymbol{\mathbin{\cal M}}$ has a $-1$-shifted symplectic structure $\omega$, and the symmetric structure on $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ from \circte{Behr} is $\omega^0\vert_{\mathbin{\cal M}}$. So as for Corollary \ref{da5cor1}, Theorem \ref{da6thm4} implies: \betagin{cor} Suppose $Y$ is a Calabi--Yau\/ $3$-fold over\/ ${\mathbin{\mathbb K}},$ and\/ ${\mathbin{\cal M}}$ is a classical moduli\/ ${\mathbin{\mathbb K}}$-scheme of simple coherent sheaves in $\mathop{\rm coh}\nolimits(Y),$ or simple complexes of coherent sheaves in $D^b\mathop{\rm coh}\nolimits(Y),$ with perfect obstruction theory\/ $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ as in Thomas\/ {\rm\circte{Thom}} or Huybrechts and Thomas\/ {\rm\circte{HuTh}}. Then ${\mathbin{\cal M}}$ extends naturally to an algebraic d-critical locus $({\mathbin{\cal M}},s)$. The canonical bundle $K_{{\mathbin{\cal M}},s}$ from Theorem\/ {\rm\ref{sa3thm3}} is naturally isomorphic to ${\rm d}eltat({\mathbin{\cal E}}^\bullet)\vert_{{\mathbin{\cal M}}^{\rm red}}$. \lambdabel{da6cor1} \end{cor} \subsection{Categorification using perverse sheaves and motives} Here we summarize the main results from \circte{BBDJS}. This subsection is not used in what follows, but it completes the discussion started in \S\ref{dt3.1.2}. The following theorems are \circte[Cor. 6.10 \& Cor. 6.11]{BBDJS}: \betagin{thm} Let\/ $(\boldsymbol X,\omega)$ be a $-1$-shifted symplectic derived scheme over\/ ${\mathbin{\mathbb C}}$ in the sense of Pantev et al.\ {\rm\circte{PTVV},} and\/ $X=t_0(\boldsymbol X)$ the associated classical\/ ${\mathbin{\mathbb C}}$-scheme. Suppose we are given a square root\/ $\setminusash{{\rm d}eltat({\mathbin{\mathbb L}}_{\boldsymbol X})\vert_X^{1/2}}$ for ${\rm d}eltat({\mathbin{\mathbb L}}_{\boldsymbol X})\vert_X$.
Then we may define $P_{\boldsymbol X,\omega}^\bullet\in{\mathbin{\mathfrak m}}athop{\rm Per}v(X),$ uniquely up to canonical isomorphism, and isomorphisms $\Sigma_{\boldsymbol X,\omega}:P_{\boldsymbol X,\omega}^\bullet\rightarrow {\mathbin{\mathbb D}}_X(P_{\boldsymbol X,\omega}^\bullet),$ ${\rm T}_{\boldsymbol X,\omega}:P_{\boldsymbol X,\omega}^\bullet\rightarrow P_{\boldsymbol X,\omega}^\bullet$. The same applies for ${\mathbin{\scr D}}$-modules and mixed Hodge modules on $X,$ and for $l$-adic perverse sheaves and\/ ${\mathbin{\scr D}}$-modules on $X$ if\/ $\boldsymbol X$ is over ${\mathbin{\mathbb K}}$ with\/~${\mathbin{\mathfrak m}}athop{\rm char}{\mathbin{\mathbb K}}=0$. \lambdabel{sm6cor2} \end{thm} \betagin{thm} Let\/ $Y$ be a Calabi--Yau\/ $3$-fold over\/ ${\mathbin{\mathbb C}},$ and\/ ${\mathbin{\cal M}}$ a classical moduli\/ ${\mathbin{\mathbb K}}$-scheme of simple coherent sheaves in $\mathop{\rm coh}\nolimits(Y),$ or simple complexes of coherent sheaves in $D^b\mathop{\rm coh}\nolimits(Y),$ with natural (symmetric) obstruction theory\/ $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ as in Behrend\/ {\rm\circte{Behr},} Thomas\/ {\rm\circte{Thom},} or Huybrechts and Thomas\/ {\rm\circte{HuTh}}. Suppose we are given a square root\/ ${\rm d}eltat({\mathbin{\cal E}}^\bullet)^{1/2}$ for ${\rm d}eltat({\mathbin{\cal E}}^\bullet)$. Then we may define $P_{\mathbin{\cal M}}^\bullet\in{\mathbin{\mathfrak m}}athop{\rm Per}v({\mathbin{\cal M}}),$ uniquely up to canonical isomorphism, and isomorphisms $\Sigma_{\mathbin{\cal M}}:P_{\mathbin{\cal M}}^\bullet\rightarrow {\mathbin{\mathbb D}}_{\mathbin{\cal M}}(P_{\mathbin{\cal M}}^\bullet),$ ${\rm T}_{\mathbin{\cal M}}:P_{\mathbin{\cal M}}^\bullet\rightarrow P_{\mathbin{\cal M}}^\bullet$. 
The same applies for ${\mathbin{\scr D}}$-modules and mixed Hodge modules on ${\mathbin{\cal M}},$ and for $l$-adic perverse sheaves and\/ ${\mathbin{\scr D}}$-modules on ${\mathbin{\cal M}}$ if\/ $Y,{\mathbin{\cal M}}$ are over ${\mathbin{\mathbb K}}$ with\/~${\mathbin{\mathfrak m}}athop{\rm char}{\mathbin{\mathbb K}}=0$. \lambdabel{sm6cor3} \end{thm} Theorem \ref{sm6cor3} is relevant to the {\it categorification\/} of Donaldson--Thomas theory as discussed in \S\ref{dt3.1.2}. As in \circte[\S 1.2]{Behr}, the perverse sheaf $P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)}^\bullet$ has pointwise Euler characteristic ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)}^\bullet\bigr)=\nu$. This implies that when $A$ is a field, say $A={\mathbin{\mathbb Q}}$, the (compactly-supported) hypercohomologies ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb H}}^*\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)}^\bullet\bigr), {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb H}}^*_{\rm cs}\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)}^\bullet\bigr)$ satisfy \betagin{align*} \tildemesextstyle\sum\limits_{k\in{\mathbin{\mathbb Z}}}(-1)^k\mathop{\rm dim}\nolimits {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb H}}^k\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)}^\bullet\bigr)&= \tildemesextstyle\sum\limits_{k\in{\mathbin{\mathbb Z}}}(-1)^k\mathop{\rm dim}\nolimits {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb H}}^k_{\rm cs}\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)}^\bullet\bigr)={\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi\bigl({\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau),\nu\bigr)=DT^\alpha(\tildemesau), \end{align*} where ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb H}}^k\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)}^\bullet\bigr) \cong {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb 
H}}^{-k}_{\rm cs}\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)}^\bullet\bigr){}^*$ by Verdier duality. That is, we have produced a natural graded ${\mathbin{\mathbb Q}}$-vector space ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb H}}^*\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)}^\bullet\bigr)$, thought of as some kind of generalized cohomology of ${\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)$, whose Euler characteristic is $DT^\alpha(\tildemesau)$. This gives a new interpretation of the Donaldson--Thomas invariant~$DT^\alpha(\tildemesau)$. \setminusallskip In fact, as discussed at length in \circte[\S 3]{Szen}, the first natural ``refinement'' or ``quantization'' direction of a Donaldson--Thomas invariant $DT^\alpha(\tildemesau)\in{\mathbin{\mathbb Z}}$ is not the Poincar\'e polynomial of this cohomology, but its weight polynomial \betagin{equation*} w\bigl({{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb H}}^*(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)}^\bullet), t\bigr) \in{\mathbin{\mathbb Z}}\bigl[t^{\pm{\rm fr}ac{1}{2}}\bigr], \end{equation*} defined using the mixed Hodge structure on the cohomology of the mixed Hodge module version of $P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)}^\bullet$, which exists assuming that ${\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)$ is projective. \setminusallskip The material above is related to work by other authors. The idea of categorifying Donaldson--Thomas invariants using perverse sheaves or ${\mathbin{\scr D}}$-modules is probably first due to Behrend \circte{Behr}, and it is discussed for Hilbert schemes ${\mathbin{\mathfrak m}}athop{\rm Hilb}^n(Y)$ of a Calabi--Yau 3-fold $Y$ by Dimca and Szendr\H oi \circte{DiSz} and by Behrend, Bryan and Szendr\H oi \circte[\S 3.4]{BBS}, using mixed Hodge modules. Theorem \ref{sm6cor3} answers a question of Joyce and Song~\circte[Question~5.7(a)]{JoSo}.
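The Euler-characteristic identity displayed above can be checked by hand in the simplest model. The following computation is ours, over a field of characteristic zero, with the moduli space replaced by the critical locus of a single monomial:

```latex
% Toy check (ours): the pointwise Euler characteristic of the perverse sheaf
% matches the Behrend function nu. Take U = A^1 and f(x) = x^{n+1}, so the
% "moduli space" is M = Crit(f) = Spec K[x]/(x^n), a single point. The Milnor
% fibre of f at 0 consists of n+1 points, hence
\begin{align*}
\nu(0)&=(-1)^{\dim U}\bigl(1-\chi(MF_f(0))\bigr)
       =(-1)\bigl(1-(n+1)\bigr)=n,\\
\sum_{k\in\mathbb{Z}}(-1)^k\dim\mathbb{H}^k\bigl(P^\bullet\bigr)
      &=\chi(M,\nu)=n,
\end{align*}
% since here P^bullet is the perverse sheaf of vanishing cycles of f, a
% skyscraper of rank n supported at 0.
```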
\setminusallskip As in \circte{JoSo,KoSo1}, representations of {\it quivers with superpotentials\/} $(Q,W)$ give 3-Calabi--Yau triangulated categories, and one can define Donaldson--Thomas type invariants $DT^\alpha_{Q,W}(\tildemesau)$ `counting' such representations, which are simple algebraic `toy models' for Donaldson--Thomas invariants of Calabi--Yau 3-folds. Kontsevich and Soibelman \circte{KoSo2} explain how to categorify these quiver invariants $DT^\alpha_{Q,W}(\tildemesau)$, and define an associative multiplication on the categorification to make a {\it Cohomological Hall Algebra}. This paper was motivated by the aim of extending \circte{KoSo2} to define Cohomological Hall Algebras for Calabi--Yau 3-folds. \setminusallskip The square root ${\rm d}eltat({\mathbin{\cal E}}^\bullet)^{1/2}$ required in Theorem \ref{sm6cor3} corresponds roughly to {\it orientation data\/} in the work of Kontsevich and Soibelman \circte[\S 5]{KoSo1}, \circte{KoSo2}. \setminusallskip Finally, we point out that Kiem and Li \circte{KiLi} have recently proved an analogue of Theorem \ref{sm6cor3} by complex analytic methods, beginning from Joyce and Song's result \circte[Th.~5.4]{JoSo}, proved using gauge theory, that ${\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)$ is locally isomorphic to $\mathop{\rm Crit}(f)\subseteq V$ as a complex analytic space, for $V$ a complex manifold and $f:V\rightarrow{\mathbin{\mathbb C}}$ holomorphic. {\mathbin{\mathfrak m}}edskip We now summarize the main results from \circte{BJM}. The following theorems are \circte[Cor. 5.12 \& Cor. 5.13]{BJM}: \betagin{thm} Let\/ $(\boldsymbol X,\omega)$ be a $-1$-shifted symplectic derived scheme over\/ ${\mathbin{\mathbb K}}$ in the sense of Pantev et al.\ {\rm\circte{PTVV},} and\/ $X=t_0(\boldsymbol X)$ the associated classical\/ ${\mathbin{\mathbb K}}$-scheme, assumed of finite type.
Suppose we are given a square root\/ $\setminusash{{\rm d}eltat({\mathbin{\mathbb L}}_{\boldsymbol X})\vert_X^{1/2}}$ for ${\rm d}eltat({\mathbin{\mathbb L}}_{\boldsymbol X})\vert_X$. Then we may define a natural motive $MF_{\boldsymbol X,\omega}\in {\mathbin{\setminusash{\,\,\overlineerline{\!\!\mathcal M\!}\,}}}^{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}at{\mathbin{\mathfrak m}}u}_X$. \lambdabel{mo5cor3} \end{thm} \betagin{thm} Suppose $Y$ is a Calabi--Yau\/ $3$-fold over\/ ${\mathbin{\mathbb K}},$ and\/ ${\mathbin{\cal M}}$ is a finite type moduli\/ ${\mathbin{\mathbb K}}$-scheme of simple coherent sheaves in $\mathop{\rm coh}\nolimits(Y),$ or simple complexes of coherent sheaves in $D^b\mathop{\rm coh}\nolimits(Y),$ with obstruction theory\/ $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ as in Thomas\/ {\rm\circte{Thom}} or Huybrechts and Thomas\/ {\rm\circte{HuTh}}. Suppose we are given a square root\/ ${\rm d}eltat({\mathbin{\cal E}}^\bullet)^{1/2}$ for ${\rm d}eltat({\mathbin{\cal E}}^\bullet)$. Then we may define a natural motive $MF_{\mathbin{\cal M}}\in{\mathbin{\setminusash{\,\,\overlineerline{\!\!\mathcal M\!}\,}}}^{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}at{\mathbin{\mathfrak m}}u}_{\mathbin{\cal M}}.$ \lambdabel{mo5cor4} \end{thm} Kontsevich and Soibelman define a motive over ${\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)$, by associating a formal power series to each (not necessarily closed) point, and taking its motivic Milnor fibre. The question of how these formal power series and motivic Milnor fibres vary in families over the base ${\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)$ is not really addressed in \circte{KoSo1}. 
Theorem \ref{mo5cor4} answers this question, showing that Zariski locally in ${\mathbin{\cal M}}_{\rm st}^\alpha(\tildemesau)$ we can take the formal power series and motivic Milnor fibres to all come from a regular function $f:U\rightarrow{\mathbin{\mathbb A}}^1$ on a smooth ${\mathbin{\mathbb K}}$-scheme~$U$. As before, the square root ${\rm d}eltat({\mathbin{\cal E}}^\bullet)^{1/2}$ required in Theorem \ref{mo5cor4} corresponds roughly to {\it orientation data\/} in Kontsevich and Soibelman \circte[\S 5]{KoSo1}, \circte{KoSo2}. \subsection{Generalization to symplectic derived stacks} Here we summarize the main results from \circte{BBBJ}. The following theorems are \circte[Cor. 2.11 \& Cor. 2.12]{BBBJ}: \betagin{thm} Let\/ $({\bs X},\omega_{\bs X})$ be a $-1$-shifted symplectic derived Artin ${\mathbin{\mathbb K}}$-stack, and\/ $X=t_0({\bs X})$ the corresponding classical Artin ${\mathbin{\mathbb K}}$-stack. Then for each\/ $p\in X$ there exist a smooth\/ ${\mathbin{\mathbb K}}$-scheme $U$ of dimension $\mathop{\rm dim}\nolimits H^0\bigl({\mathbin{\mathbb L}}_X\vert_p\bigr),$ a point\/ $t\in U,$ a regular function $f:U\rightarrow{\mathbin{\mathbb A}}^1$ with\/ ${\rm d}_{dR} f\vert_t=0,$ so that\/ $T:=\mathop{\rm Crit}(f)\subseteq U$ is a closed\/ ${\mathbin{\mathbb K}}$-subscheme with\/ $t\in T,$ and a morphism $\varphi:T\rightarrow X$ which is smooth of relative dimension $\mathop{\rm dim}\nolimits H^1\bigl({\mathbin{\mathbb L}}_X\vert_p\bigr),$ with\/ $\varphi(t)=p$. We may take\/~$f\vert_{T^{\rm red}}=0$. \lambdabel{sa2cor1} \end{thm} Thus, the underlying classical stack $X$ of a $-1$-shifted symplectic derived stack $({\bs X},\omega_{\bs X})$ admits an atlas consisting of critical loci of regular functions on smooth schemes.
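\setminusallskip As a simple sanity check on Theorem \ref{sa2cor1} (a sketch, not part of its proof): if ${\bs X}=\boldsymbol\mathop{\rm Crit}(f:U\rightarrow{\mathbin{\mathbb A}}^1)$ is itself a derived critical locus, then at any point $p$ at which the Hessian of $f$ vanishes, so that $\mathop{\rm dim}\nolimits H^0\bigl({\mathbin{\mathbb L}}_X\vert_p\bigr)=\mathop{\rm dim}\nolimits U$, one may take the tautological chart
\betagin{equation*}
T=\mathop{\rm Crit}(f)\subseteq U,\qquad \varphi={\rm id}:T\longrightarrow X=t_0({\bs X})=\mathop{\rm Crit}(f),
\end{equation*}
which is smooth of relative dimension $0$; here $H^1\bigl({\mathbin{\mathbb L}}_X\vert_p\bigr)=0$ as $X$ is a scheme rather than a stack, consistent with the dimension count in the theorem.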
\setminusallskip Now let $Y$ be a Calabi--Yau 3-fold over ${\mathbin{\mathbb K}}$, and ${\mathbin{\cal M}}$ a classical moduli stack of coherent sheaves $F$ on $Y$, or complexes $F^\bullet$ in $D^b\mathop{\rm coh}\nolimits(Y)$ with $\mathop{\rm Ext}\nolimits^{<0}(F^\bullet,F^\bullet)=0$. Then ${\mathbin{\cal M}}=t_0(\boldsymbol{\mathbin{\cal M}})$, for $\boldsymbol{\mathbin{\cal M}}$ the corresponding derived moduli stack. The (open) condition $\mathop{\rm Ext}\nolimits^{<0}(F^\bullet,F^\bullet)=0$ is needed to make $\boldsymbol{\mathbin{\cal M}}$ $1$-geometric and $1$-truncated (that is, a derived Artin stack, in our terminology); without it, ${\mathbin{\cal M}},\boldsymbol{\mathbin{\cal M}}$ would be a higher derived stack. Pantev et al.\ \circte[\S 2.1]{PTVV} prove $\boldsymbol{\mathbin{\cal M}}$ has a $-1$-shifted symplectic structure $\omega_{\boldsymbol{\mathbin{\cal M}}}$. Applying Theorem \ref{sa2cor1} and using $H^i\bigl({\mathbin{\mathbb L}}_{\boldsymbol{\mathbin{\cal M}}}\vert_{[F]}\bigr)\cong \mathop{\rm Ext}\nolimits^{1-i}(F,F)^*$ yields a new result on classical 3-Calabi--Yau moduli stacks, the statement of which involves no derived geometry: \betagin{cor} Suppose $Y$ is a Calabi--Yau\/ $3$-fold over\/ ${\mathbin{\mathbb K}},$ and\/ ${\mathbin{\cal M}}$ a classical moduli ${\mathbin{\mathbb K}}$-stack of coherent sheaves $F,$ or more generally of complexes $F^\bullet$ in $D^b\mathop{\rm coh}\nolimits(Y)$ with $\mathop{\rm Ext}\nolimits^{<0}(F^\bullet,F^\bullet)=0$. 
Then for each\/ $[F]\in{\mathbin{\cal M}},$ there exist a smooth\/ ${\mathbin{\mathbb K}}$-scheme $U$ with\/ $\mathop{\rm dim}\nolimits U=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(F,F),$ a point\/ $u\in U,$ a regular function $f:U\rightarrow{\mathbin{\mathbb A}}^1$ with\/ ${\rm d}_{dR} f\vert_u=0,$ and a morphism $\varphi:\mathop{\rm Crit}(f)\rightarrow{\mathbin{\cal M}}$ which is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Ho}\nolimitsm(F,F),$ with\/~$\varphi(u)=[F]$. \lambdabel{sa2cor2} \end{cor} This is an analogue of \circte[Cor.~5.19]{BBJ}. When ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$, a related result for coherent sheaves only, with $U$ a complex manifold and $f$ a holomorphic function, was proved by Joyce and Song \circte[Th.~5.5]{JoSo} using gauge theory and transcendental complex methods. \setminusallskip Here is \circte[Thm. 3.18]{BBBJ}, a stack version of Theorem \ref{da6thm4}. \betagin{thm} Let\/ ${\mathbin{\mathbb K}}$ be an algebraically closed field of characteristic zero, $({\bs X},\omega_{\bs X})$ a $-1$-shifted symplectic derived Artin ${\mathbin{\mathbb K}}$-stack, and\/ $X=t_0({\bs X})$ the corresponding classical Artin ${\mathbin{\mathbb K}}$-stack. Then there exists a unique d-critical structure $s\in H^0({\mathbin{\cal S}}z_X)$ on $X,$ making $(X,s)$ into a d-critical stack, with the following properties: \betagin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] Let\/ $U,$ $f:U\rightarrow{\mathbin{\mathbb A}}^1,$ $T=\mathop{\rm Crit}(f)$ and\/ $\varphi:T\rightarrow X$ be as in Corollary\/ {\rm\ref{sa2cor1},} with\/ $f\vert_{T^{\rm red}}=0$. There is a unique $s_T\in H^0({\mathbin{\cal S}}z_T)$ on $T$ with\/ $\iota_{T,U}(s_T)=i^{-1}(f)+I_{T,U}^2,$ and\/ $(T,s_T)$ is an algebraic d-critical locus. Then $s(T,\varphi)=s_T$ in $H^0({\mathbin{\cal S}}z_T)$. 
\item[{\bf(b)}] The canonical bundle $K_{X,s}$ of\/ $(X,s)$ from Theorem\/ {\rm\ref{sa3thm5}} is naturally isomorphic to the restriction ${\rm d}eltat({\mathbin{\mathbb L}}_{\bs X})\vert_{X^{\rm red}}$ to $X^{\rm red}\subseteq X\subseteq{\bs X}$ of the determinant line bundle ${\rm d}eltat({\mathbin{\mathbb L}}_{\bs X})$ of the cotangent complex\/ ${\mathbin{\mathbb L}}_{\bs X}$ of\/~${\bs X}$. \end{itemize} \lambdabel{sa3thm6} \end{thm} We can think of Theorem \ref{sa3thm6} as defining a {\it truncation functor} \betagin{align*} F:\bigl\{&\tildemesext{$\infty$-category of $-1$-shifted symplectic derived Artin ${\mathbin{\mathbb K}}$-stacks $({\bs X},\omega_{\bs X})$}\bigr\}\\ &\longrightarrow\bigl\{\tildemesext{2-category of d-critical stacks $(X,s)$ over ${\mathbin{\mathbb K}}$}\bigr\}. \end{align*} Let $Y$ be a Calabi--Yau 3-fold over ${\mathbin{\mathbb K}}$, and ${\mathbin{\cal M}}$ a classical moduli ${\mathbin{\mathbb K}}$-stack of coherent sheaves in $\mathop{\rm coh}\nolimits(Y)$, or complexes of coherent sheaves in $D^b\mathop{\rm coh}\nolimits(Y)$. There is a natural obstruction theory $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ on ${\mathbin{\cal M}}$, where ${\mathbin{\cal E}}^\bullet\in D_{{\rm qcoh}}({\mathbin{\cal M}})$ is perfect in the interval $[-2,1]$, and $h^i({\mathbin{\cal E}}^\bullet)\vert_F\cong\mathop{\rm Ext}\nolimits^{1-i}(F,F)^*$ for each ${\mathbin{\mathbb K}}$-point $F\in{\mathbin{\cal M}}$, regarding $F$ as an object in $\mathop{\rm coh}\nolimits(Y)$ or $D^b\mathop{\rm coh}\nolimits(Y)$. 
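Concretely, combining the identification $h^i({\mathbin{\cal E}}^\bullet)\vert_F\cong\mathop{\rm Ext}\nolimits^{1-i}(F,F)^*$ with Serre duality $\mathop{\rm Ext}\nolimits^i(F,F)^*\cong\mathop{\rm Ext}\nolimits^{3-i}(F,F)$ on the Calabi--Yau $3$-fold $Y$ gives, for each ${\mathbin{\mathbb K}}$-point $F$,
\betagin{align*}
h^{-2}({\mathbin{\cal E}}^\bullet)\vert_F&\cong\mathop{\rm Ho}\nolimitsm(F,F), & h^{-1}({\mathbin{\cal E}}^\bullet)\vert_F&\cong\mathop{\rm Ext}\nolimits^1(F,F),\\
h^{0}({\mathbin{\cal E}}^\bullet)\vert_F&\cong\mathop{\rm Ext}\nolimits^2(F,F), & h^{1}({\mathbin{\cal E}}^\bullet)\vert_F&\cong\mathop{\rm Ext}\nolimits^3(F,F),
\end{align*}
so the cohomologies of ${\mathbin{\cal E}}^\bullet$ in degrees $-2,1$ (respectively $-1,0$) are dual to one another, reflecting the $-1$-shifted symplectic structure of the corresponding derived moduli stack.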
Now in derived algebraic geometry ${\mathbin{\cal M}}=t_0({\mathbin{\bs{\cal M}}})$ for ${\mathbin{\bs{\cal M}}}$ the corresponding derived moduli ${\mathbin{\mathbb K}}$-stack, and $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ is ${\mathbin{\mathbb L}}_{t_0}:{\mathbin{\mathbb L}}_{{\mathbin{\bs{\cal M}}}} \vert_{\mathbin{\cal M}}\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$. Pantev et al.\ \circte[\S 2.1]{PTVV} prove ${\mathbin{\bs{\cal M}}}$ has a $-1$-shifted symplectic structure $\omega$. Thus Theorem \ref{sa3thm6} implies \circte[Cor. 3.19]{BBBJ}: \betagin{cor} Suppose $Y$ is a Calabi--Yau\/ $3$-fold over\/ ${\mathbin{\mathbb K}}$ of characteristic zero, and\/ ${\mathbin{\cal M}}$ a classical moduli\/ ${\mathbin{\mathbb K}}$-stack of coherent sheaves $F$ in $\mathop{\rm coh}\nolimits(Y),$ or complexes of coherent sheaves $F^\bullet$ in $D^b\mathop{\rm coh}\nolimits(Y)$ with $\mathop{\rm Ext}\nolimits^{<0}(F^\bullet,F^\bullet)=0,$ with obstruction theory\/ $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$. Then ${\mathbin{\cal M}}$ extends naturally to an algebraic d-critical locus $({\mathbin{\cal M}},s)$. The canonical bundle $K_{{\mathbin{\cal M}},s}$ from Theorem\/ {\rm\ref{sa3thm5}} is naturally isomorphic to ${\rm d}eltat({\mathbin{\cal E}}^\bullet)\vert_{{\mathbin{\cal M}}^{\rm red}}$. \lambdabel{sa3cor2} \end{cor} Here is \circte[Cor. 4.13]{BBBJ}, the stack version of Theorem \ref{sm6cor2}: \betagin{thm} Let\/ ${\mathbin{\mathbb K}}$ be an algebraically closed field of characteristic zero, $({\bs X},\omega)$ a $-1$-shifted symplectic derived Artin ${\mathbin{\mathbb K}}$-stack, and\/ $X=t_0({\bs X})$ the associated classical Artin\/ ${\mathbin{\mathbb K}}$-stack. Suppose we are given a square root\/ $\setminusash{{\rm d}eltat({\mathbin{\mathbb L}}_{\bs X})\vert_X^{1/2}}$. 
Then working in $l$-adic perverse sheaves on stacks \circte[\S 4]{BBBJ} we may define a perverse sheaf\/ ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck P_{{\bs X},\omega}^\bullet$ on $X$ uniquely up to canonical isomorphism, and Verdier duality and monodromy isomorphisms ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck\Sigma_{{\bs X},\omega}:{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck P_{{\bs X},\omega}^\bullet\rightarrow {\mathbin{\mathbb D}}_X({\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck P_{{\bs X},\omega}^\bullet)$ and\/ ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck{\rm T}_{{\bs X},\omega}:{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck P_{{\bs X},\omega}^\bullet\rightarrow{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck P_{{\bs X},\omega}^\bullet$. These are characterized by the fact that given a diagram \betagin{equation*} \xymatrix@C=60pt{ {\bs U}=\boldsymbol\mathop{\rm Crit}(f:U\rightarrow{\mathbin{\mathbb A}}^1) & {\bs V} \ar[l]_(0.3){\boldsymbol i} \ar[r]^{\boldsymbol\varphi} & {\bs X} } \end{equation*} such that\/ $U$ is a smooth\/ ${\mathbin{\mathbb K}}$-scheme, $\boldsymbol\varphi$ smooth of dimension $n,$ ${\mathbin{\mathbb L}}_{{\bs V}/{\bs U}} \sigmameq {\mathbin{\mathbb T}}_{{\bs V}/{\bs X}}[2],$ $\boldsymbol\varphi^*(\omega_{\bs X})\sigmam \boldsymbol i^*(\omega_{\bs U})$ for $\omega_{\bs U}$ the natural\/ $-1$-shifted symplectic structure on ${\bs U}=\boldsymbol\mathop{\rm Crit}(f:U\rightarrow{\mathbin{\mathbb A}}^1),$ and\/ $\varphi^*({\rm d}eltat({\mathbin{\mathbb L}}_{\bs X})\vert_X^{1/2})\!\cong\! 
i^*(K_U) \otimes \Lambda^n{\mathbin{\mathbb T}}_{{\bs V}/{\bs X}},$ then $\varphi^*({\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck P_{{\bs X},\omega}^\bullet)[n],$ $\varphi^*({\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck\Sigma_{{\bs X},\omega}^\bullet)[n],$ $\varphi^*({\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck{\rm T}_{{\bs X},\omega}^\bullet)[n]$ are canonically isomorphic to $i^*({\mathbin{\cal{PV}}}_{U,f}),$ $i^*(\sigma_{U,f}),$ $i^*(\tildemesau_{U,f}),$ for\/ ${\mathbin{\cal{PV}}}_{U,f},\sigma_{U,f},\tildemesau_{U,f}$ as in \circte{BBBJ}. The same applies in the other theories of perverse sheaves and\/ ${\mathbin{\scr D}}$-modules on stacks. \lambdabel{sa4cor1} \end{thm} Here is \circte[Cor. 4.14]{BBBJ}, the stack version of Theorem \ref{sm6cor3}: \betagin{thm} Let\/ $Y$ be a Calabi--Yau\/ $3$-fold over an algebraically closed field\/ ${\mathbin{\mathbb K}}$ of characteristic zero, and\/ ${\mathbin{\cal M}}$ a classical moduli\/ ${\mathbin{\mathbb K}}$-stack of coherent sheaves $F$ in $\mathop{\rm coh}\nolimits(Y),$ or of complexes $F^\bullet$ in $D^b\mathop{\rm coh}\nolimits(Y)$ with\/ $\mathop{\rm Ext}\nolimits^{<0}(F^\bullet,F^\bullet)=0,$ with obstruction theory\/ $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$. Suppose we are given a square root\/ ${\rm d}eltat({\mathbin{\cal E}}^\bullet)^{1/2}$.
Then working in $l$-adic perverse sheaves on stacks \circte[\S 4]{BBBJ}, we may define a natural perverse sheaf\/ ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck P_{\mathbin{\cal M}}^\bullet\in{\mathbin{\mathfrak m}}athop{\rm Per}v({\mathbin{\cal M}}),$ and Verdier duality and monodromy isomorphisms ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck\Sigma_{\mathbin{\cal M}}:{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck P_{\mathbin{\cal M}}^\bullet\rightarrow{\mathbin{\mathbb D}}_{\mathbin{\cal M}}({\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck P_{\mathbin{\cal M}}^\bullet)$ and\/ ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck{\rm T}_{\mathbin{\cal M}}:{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck P_{\mathbin{\cal M}}^\bullet\rightarrow{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck P_{\mathbin{\cal M}}^\bullet$. The pointwise Euler characteristic of\/ ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck P_{\mathbin{\cal M}}^\bullet$ is the Behrend function $\nu_{\mathbin{\cal M}}$ of\/ ${\mathbin{\cal M}}$ from Joyce and Song {\rm\circte[\S 4]{JoSo},} so that\/ ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitseck P_{\mathbin{\cal M}}^\bullet$ is in effect a categorification of the Donaldson--Thomas theory of ${\mathbin{\cal M}}$. The same applies in the other theories of perverse sheaves and\/ ${\mathbin{\scr D}}$-modules on stacks. \lambdabel{sa4cor2} \end{thm} Here is \circte[Cor. 5.16]{BBBJ}, the stack version of Theorem \ref{mo5cor3}: \betagin{thm} Let\/ $({\bs X},\omega)$ be a $-1$-shifted symplectic derived Artin ${\mathbin{\mathbb K}}$-stack in the sense of Pantev et al.\ {\rm\circte{PTVV},} and\/ $X=t_0({\bs X})$ the associated classical Artin\/ ${\mathbin{\mathbb K}}$-stack, assumed of finite type and locally a global quotient. Suppose we are given a square root\/ ${\rm d}eltat({\mathbin{\mathbb L}}_{\bs X})\vert_X^{1/2}$ for ${\rm d}eltat({\mathbin{\mathbb L}}_{\bs X}) \vert_X$. 
Then we may define a natural motive $MF_{{\bs X},\omega}\in{\mathbin{\setminusash{\,\,\overlineerline{\!\!\mathcal M\!}\,}}}^{\rm st}m_X,$ which is characterized by the fact that given a diagram \betagin{equation*} \xymatrix@C=60pt{ {\bs U}=\boldsymbol\mathop{\rm Crit}(f:U\rightarrow{\mathbin{\mathbb A}}^1) & {\bs V} \ar[l]_(0.3){\boldsymbol i} \ar[r]^{\boldsymbol\varphi} & {\bs X} } \end{equation*} such that\/ $U$ is a smooth\/ ${\mathbin{\mathbb K}}$-scheme, $\boldsymbol\varphi$ is smooth of dimension $n,$ ${\mathbin{\mathbb L}}_{{\bs V}/{\bs U}} \sigmameq {\mathbin{\mathbb T}}_{{\bs V}/{\bs X}}[2],$ $\boldsymbol\varphi^*(\omega_{\bs X})\sigmam \boldsymbol i^*(\omega_{\bs U})$ for $\omega_{\bs U}$ the natural\/ $-1$-shifted symplectic structure on ${\bs U}=\boldsymbol\mathop{\rm Crit}(f:U\rightarrow{\mathbin{\mathbb A}}^1),$ and\/ $\varphi^*({\rm d}eltat({\mathbin{\mathbb L}}_{\bs X})\vert_X^{1/2})\cong i^*(K_U)\otimes\Lambda^n{\mathbin{\mathbb T}}_{{\bs V}/{\bs X}},$ then~$\varphi^*(MF_{{\bs X},\omega})={\mathbin{\mathbb L}}^{n/2}\odot i^*(MF^{{\rm mot},\phi}_{U,f})$ in ${\mathbin{\setminusash{\,\,\overlineerline{\!\!\mathcal M\!}\,}}}^{\rm st}m_V$. \lambdabel{sa5cor1} \end{thm} Here is \circte[Cor. 5.17]{BBBJ}, the stack version of Theorem \ref{mo5cor4}: \betagin{thm} Let\/ $Y$ be a Calabi--Yau\/ $3$-fold over\/ ${\mathbin{\mathbb K}},$ and\/ ${\mathbin{\cal M}}$ a finite type classical moduli\/ ${\mathbin{\mathbb K}}$-stack of coherent sheaves in $\mathop{\rm coh}\nolimits(Y),$ with natural obstruction theory\/ $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$. Suppose we are given a square root\/ ${\rm d}eltat({\mathbin{\cal E}}^\bullet)^{1/2}$ for ${\rm d}eltat({\mathbin{\cal E}}^\bullet)$. Then we may define a natural motive $MF_{\mathbin{\cal M}}\in{\mathbin{\setminusash{\,\,\overlineerline{\!\!\mathcal M\!}\,}}}^{\rm st}m_{\mathbin{\cal M}}$. 
\lambdabel{sa5cor2} \end{thm} Theorem \ref{sa5cor2} is relevant to Kontsevich and Soibelman's theory of {\it motivic Donaldson--Thomas invariants\/} \circte{KoSo1}. Again, our square root ${\rm d}eltat({\mathbin{\cal E}}^\bullet)^{1/2}$ roughly coincides with their {\it orientation data\/} \circte[\S 5]{KoSo1}. In \circte[\S 6.2]{KoSo1}, given a finite type moduli stack ${\mathbin{\cal M}}$ of coherent sheaves on a Calabi--Yau 3-fold $Y$ with orientation data, they define a motive $\int_{\mathbin{\cal M}} 1$ in a ring $D^{\mathbin{\mathfrak m}}u$ isomorphic to our ${\mathbin{\setminusash{\,\,\overlineerline{\!\!\mathcal M\!}\,}}}^{\rm st}m_{\mathbin{\mathbb K}}$. We expect this should agree with $\pi_*(MF_{\mathbin{\cal M}})$ in our notation, with $\pi:{\mathbin{\cal M}}\rightarrow{\mathbin{\mathfrak m}}athop{\rm Sp}\nolimitsec{\mathbin{\mathbb K}}$ the projection. This $\int_{\mathbin{\cal M}} 1$ is roughly the motivic Donaldson--Thomas invariant of ${\mathbin{\cal M}}$. Their construction involves expressing ${\mathbin{\cal M}}$ near each point in terms of the critical locus of a formal power series. Kontsevich and Soibelman's constructions were partly conjectural, and our results may fill some gaps in their theory. \section{The main results} {\mathbin{\mathfrak m}}arkboth{Statements of main results}{Statements of main results} \lambdabel{main.1} We will prove and use the algebraic analogue of Theorem \ref{dt5thm1}, which we can state as follows: \betagin{thm} Let\/ $X$ be a Calabi--Yau $3$-fold over ${\mathbin{\mathbb K}},$ and write ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ for the moduli stack of coherent sheaves on $X$. 
Then for each\/ $[E]\in{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}({\mathbin{\mathbb K}}),$ there exist a smooth affine ${\mathbin{\mathbb K}}$-scheme $U,$ a point\/ $p\in U({\mathbin{\mathbb K}}),$ an \'etale morphism $u:U\rightarrow\mathop{\rm Ext}\nolimits^1(E,E)$ with $u(p)=0,$ a regular function $f:U\rightarrow{\mathbin{\mathbb A}}^1$ with\/ $f\vert_p=\partial f\vert_p=0,$ and a $1$-morphism $\xi:\mathop{\rm Crit}(f)\rightarrow{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E),$ with $\xi(p)=[E]\in{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}({\mathbin{\mathbb K}}),$ such that if $\iota:\mathop{\rm Ext}\nolimits^1(E,E)\rightarrow T_{[E]}{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ is the natural isomorphism, then ${\rm d}\xi\vert_p=\iota\circ{\rm d} u\vert_p:T_pU\rightarrow T_{[E]}{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$. Moreover, let $G$ be a maximal algebraic torus in $\mathop{\rm Aut}(E)$, acting on $\mathop{\rm Ext}\nolimits^1(E,E)$ by $\gamma:\epsilon{\mathbin{\mathfrak m}}apsto\gamma\circ\epsilon\circ\gamma^{-1}$. Then we can choose $U,p,u,f,\xi$ and a $G$-action on $U$ such that $u$ is $G$-equivariant and $p,f$ are $G$-invariant, so that $\mathop{\rm Crit}(f)$ is $G$-invariant, and\/ $\xi:\mathop{\rm Crit}(f) \rightarrow{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ factors through the projection $\mathop{\rm Crit}(f)\rightarrow[\mathop{\rm Crit}(f)/G]$.
\lambdabel{dt5thm2} \end{thm} {\rm \kern.05em ind}ex{reductive group!maximal} {\rm \kern.05em ind}ex{almost closed $1$-form} {\rm \kern.05em ind}ex{Zariski topology} \nomenclature[1oi]{$\xi$}{the $1$-morphism $\xi:\omega^{-1}(0)\rightarrow{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$}{\rm \kern.05em ind}ex{coarse moduli scheme}{\rm \kern.05em ind}ex{moduli scheme!coarse} Note that one can regard $u:U\rightarrow\mathop{\rm Ext}\nolimits^1(E,E)$ as an \'etale open neighbourhood of $0$ in $\mathop{\rm Ext}\nolimits^1(E,E).$ Theorem \ref{dt5thm2} will be proved in \S\ref{localdes}, using \S\ref{dcr}. Next, we will use this to prove the algebraic analogue of Theorem \ref{dt6thm1}: \betagin{thm} Let\/ $X$ be a Calabi--Yau $3$-fold over ${\mathbin{\mathbb K}},$ and ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ the moduli stack of coherent sheaves on\/ $X$. The Behrend function $\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}: {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}({\mathbin{\mathbb K}})\rightarrow{\mathbin{\mathbb Z}}$ is a natural locally constructible function on ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$.
For all\/ $E_1,E_2\in\mathop{\rm coh}\nolimits(X),$ it satisfies: \betagin{equation} \nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_1\oplus E_2)=(-1)^{\bar{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi([E_1],[E_2])} \nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_1)\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_2), \lambdabel{dt6eq1.1} \end{equation} \setminusallskip \betagin{equation} {\rm d}isplaystyle \int\limits_{\setminusall{\betagin{subarray}{l} [\lambda]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1)):\\ \lambda\; {\mathop{\text{\rm Lis-\'et}}\nolimits}ftrightarrow\; 0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0\end{subarray}}}\!\!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! \nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(F)\,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi \quad - \!\!\!\! \!\!\!\! \!\!\!\! {\rm d}isplaystyle \int\limits_{\setminusall{\betagin{subarray}{l}[{\mathbin{\mathfrak m}}u]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2)):\\ {\mathbin{\mathfrak m}}u\; {\mathop{\text{\rm Lis-\'et}}\nolimits}ftrightarrow\; 0\rightarrow E_2\rightarrow D\rightarrow E_1\rightarrow 0\end{subarray}}}\!\!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! 
\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(D)\,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi \;\; = \;\; (e_{21}-e_{12})\;\; \nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_1\oplus E_2), \lambdabel{dt6eq2.1} \end{equation} where $e_{21}=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_2,E_1)$ and $e_{12}=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_1,E_2)$ for $E_1,E_2\in\mathop{\rm coh}\nolimits(X).$ Here\/ $\bar{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi([E_1],[E_2])$ in \eq{dt6eq1.1} is the Euler form as in \eq{eu}, {\rm \kern.05em ind}ex{Euler form} and in \eq{dt6eq2.1} the correspondence between\/ $[\lambda]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))$ and\/ $F\in\mathop{\rm coh}\nolimits(X)$ is that\/ $[\lambda]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))$ lifts to some\/ $0\ne\lambda\in\mathop{\rm Ext}\nolimits^1(E_2,E_1),$ which corresponds to a short exact sequence\/ $0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0$ in\/ $\mathop{\rm coh}\nolimits(X)$ in the usual way. The function $[\lambda]{\mathbin{\mathfrak m}}apsto\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(F)$ is a constructible function\/ ${\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))\rightarrow{\mathbin{\mathbb Z}},$ and the integrals in \eq{dt6eq2.1} are integrals of constructible functions using the Euler characteristic as measure. {\rm \kern.05em ind}ex{constructible function} {\rm \kern.05em ind}ex{Euler characteristic} \lambdabel{dt6thm1.1} \end{thm} As in \S\ref{dt4}, the identities \eq{dt6eq1.1}--\eq{dt6eq2.1} are crucial for the whole program in \circte{JoSo}, and will be proved in \S\ref{dt.1}. 
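\setminusallskip Two elementary consistency checks on \eq{dt6eq2.1} (sketches, independent of the proof in \S\ref{dt.1}): if $\mathop{\rm Ext}\nolimits^1(E_2,E_1)=\mathop{\rm Ext}\nolimits^1(E_1,E_2)=0$, then both integrals run over empty sets and $e_{21}=e_{12}=0$, so both sides vanish. More generally, exchanging $E_1\leftrightarrow E_2$ swaps the two integrals and the dimensions $e_{21},e_{12}$, while $\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_1\oplus E_2)$ is unchanged, so that both sides of \eq{dt6eq2.1} change sign:
\betagin{equation*}
(e_{12}-e_{21})\,\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_2\oplus E_1)=-(e_{21}-e_{12})\,\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_1\oplus E_2).
\end{equation*}
Thus \eq{dt6eq2.1} is at least compatible with this antisymmetry.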
{\mathbin{\mathfrak m}}edskip In the next theorem, the condition that $\mathop{\rm Ext}\nolimits^{<0}(E^\bullet,E^\bullet)=0$ is necessary for $\tildemesi{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ to be an Artin stack, rather than a higher stack. Note that this condition is automatically satisfied by complexes $E^\bullet$ which are semistable with respect to any stability condition, for example Bridgeland stability conditions \circte{Brid1}. Therefore to prove wall-crossing formulae for Donaldson--Thomas invariants in the derived category $D^b\mathop{\rm coh}\nolimits(X)$ under change of stability condition by the ``dominant stability condition'' method of \circte{Joyc.4,Joyc.5,Joyc.6,Joyc.7,KaSc}, it is enough to know the Behrend function identities \eq{dt6eq1.1}--\eq{dt6eq2.1} for complexes $E^\bullet$ with $\mathop{\rm Ext}\nolimits^{<0}(E^\bullet,E^\bullet)=0$, and we do not need to deal with complexes $E^\bullet$ with $\mathop{\rm Ext}\nolimits^{<0}(E^\bullet,E^\bullet)\ne 0$, or with higher stacks. \betagin{thm} Let\/ $X$ be a Calabi--Yau 3-fold over ${\mathbin{\mathbb K}},$ and write $\tildemesi{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ for the moduli stack of complexes $E^\bullet$ in $D^b\mathop{\rm coh}\nolimits(X)$ with $\mathop{\rm Ext}\nolimits^{<0}(E^\bullet,E^\bullet)=0$. This is an Artin stack by \circte{HuTh}. Let\/ $[E^\bullet]\in\tildemesi{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}({\mathbin{\mathbb K}}),$ and suppose that a Zariski open neighbourhood of $[E^\bullet]$ in $\tildemesi{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}({\mathbin{\mathbb K}})$ is equivalent to a global quotient $[S/\mathop{\rm GL}(n,{\mathbin{\mathbb K}})]$ for $S$ a ${\mathbin{\mathbb K}}$-scheme with a $\mathop{\rm GL}(n,{\mathbin{\mathbb K}})$-action.
Then the analogues of Theorems \ref{dt5thm2} and \ref{dt6thm1.1} hold with $\tildemesi{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}},E^\bullet$ in place of\/ ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}},E$. \lambdabel{dt6thm1.1.bis} \end{thm} The condition on $\tildemesi{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ that it should be {\it locally a global quotient\/} is known for the moduli stack of coherent sheaves ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ using Quot schemes. A proof can be found in \circte[\S 9.3]{JoSo}, where Joyce and Song use the standard method for constructing coarse moduli schemes{\rm \kern.05em ind}ex{coarse moduli scheme}{\rm \kern.05em ind}ex{moduli scheme!coarse} of semistable coherent sheaves in Huybrechts and Lehn \circte{HuLe2}, adapting it for Artin stacks, and an argument similar to parts of the proof of Luna's Etale Slice Theorem~\circte[\S III]{Luna}.{\rm \kern.05em ind}ex{Luna's Etale Slice Theorem} However, this is not known for the moduli stack of complexes. The author expects Theorem \ref{dt6thm1.1.bis} to hold without this technical assumption, but currently cannot prove it. \setminusallskip The proof of Theorem \ref{dt6thm1.1.bis} is the same as the proof of Theorem \ref{dt6thm1.1}, replacing sheaves by complexes of sheaves and making the obvious modifications. {\mathbin{\mathfrak m}}edskip Finally, in \S\ref{def} we will characterize the numerical Grothendieck group of a Calabi--Yau 3-fold in terms of a deformation-invariant lattice described using the Picard group. First, using existence results and smoothness and properness properties of the relative Picard scheme in a family of Calabi--Yau 3-folds, one proves that the Picard groups form a local system.
Actually, it is a local system with finite monodromy, so it can be made trivial after passing to a finite \'etale cover of the base scheme, as formulated in the analogue of \circte[Thm. 4.21]{JoSo}, which studies the monodromy of the Picard scheme instead of the numerical Grothendieck group in a family. Then, Theorem \ref{definv}, a substitute for \circte[Thm. 4.19]{JoSo}, which does not need the integral Hodge conjecture result by Voisin \circte{Vois} for Calabi--Yau 3-folds over ${\mathbin{\mathbb C}}$ and which is valid over ${\mathbin{\mathbb K}},$ characterizes the numerical Grothendieck group of a Calabi--Yau 3-fold in terms of a globally constant lattice described using the Picard scheme: \betagin{thm} Let\/ $X$ be a Calabi--Yau $3$-fold over ${\mathbin{\mathbb K}}$ with\/ $H^1({\mathbin{\cal O}}_X)\!=\!0$. Define\nomenclature[1l]{$\Lambda_X$}{lattice associated to a Calabi--Yau 3-fold $X$} \betagin{equation*} \Lambda_X=\tildemesextstyle\bigl\{ (\lambda_0,\lambda_1,\lambda_2,\lambda_3) \tildemesextrm{ where } \lambda_0,\lambda_3\in{\mathbin{\mathbb Q}}, \; \lambda_1\in{\mathbin{\mathfrak m}}athop{\rm Pic}(X)\otimes_{{\mathbin{\mathbb Z}}}{\mathbin{\mathbb Q}}, \; \lambda_2\in \mathop{\rm Ho}\nolimitsm({\mathbin{\mathfrak m}}athop{\rm Pic}(X),{\mathbin{\mathbb Q}}) \tildemesextrm{ such that } \end{equation*} \betagin{equation*} \lambda_0\in {\mathbin{\mathbb Z}},\;\> \lambda_1\in {\mathbin{\mathfrak m}}athop{\rm Pic}(X)/ {\tildemesextrm{torsion}}, \; \lambda_2-{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}a\lambda_1^2\in \mathop{\rm Ho}\nolimitsm({\mathbin{\mathfrak m}}athop{\rm Pic}(X),{\mathbin{\mathbb Z}}),\;\> \lambda_3+\tildemesextstyle{\rm fr}ac{1}{12}\lambda_1 c_2(TX)\in {\mathbin{\mathbb Z}}\bigr\}, \end{equation*} \setminusallskip where $\lambda_1^2$ is defined as the map $\alpha\in{\mathbin{\mathfrak m}}athop{\rm Pic}(X)\rightarrow {\rm fr}ac{1}{2}c_1(\lambda_1)\cdot c_1(\lambda_1)\cdot c_1(\alpha)\in A^3(X)_{{\mathbin{\mathbb 
Q}}}\cong {\mathbin{\mathbb Q}},$ and ${\rm fr}ac{1}{12}\lambda_1 c_2(TX)$ is defined as ${\rm fr}ac{1}{12}c_1(\lambda_1)\cdot c_2(TX)\in A^3(X)_{{\mathbin{\mathbb Q}}}\cong{\mathbin{\mathbb Q}}.$ Then\/ for any family of Calabi--Yau 3-folds $\pi : {\cal X} \rightarrow S$ over a connected base $S$ with $X=\pi^{-1}(s_0),$ the lattices $\Lambda_{X_s}$ form a local system of abelian groups over $S$ with fibre $\Lambda_X$. Furthermore, the monodromy of this system lies in a finite subgroup of $\mathop{\rm Aut}(\Lambda_X)$, so after passing to an \'etale cover $\tildemesi S\rightarrow S$ of $S$, we can take the local system to be trivial, and coherently identify $\Lambda_{X_{\tildemesi s}}\cong \Lambda_X$ for all $\tildemesi s\in \tildemesi S$. Finally, the Chern character gives an injective morphism ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimits:K^{\rm num}(\mathop{\rm coh}\nolimits(X))\!{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra\!\Lambda_X$. \lambdabel{definv} \end{thm} Following \circte{JoSo}, this yields \betagin{thm} The generalized Donaldson--Thomas invariants $\bar{DT}{}^\alpha(\tildemesau)$ over ${\mathbin{\mathbb K}}$ for $\alpha\in\Lambdambda_X$ are unchanged under deformations of the underlying Calabi--Yau 3-fold $X$, by which we mean the following: let ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}{\rm st}ackrel{\varphi}{\longrightarrow}T$ be a smooth projective morphism of algebraic ${\mathbin{\mathbb K}}$-varieties ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}},T$, with $T$ connected. Let ${\mathbin{\cal O}}_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}(1)$ be a relative very ample line bundle for ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}{\rm st}ackrel{\varphi}{\longrightarrow}T$.
For each $t\in T({\mathbin{\mathbb K}})$, write $X_t$ for the fibre ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}\tildemesimes_{\varphi,T,t}{\mathbin{\mathfrak m}}athop{\rm Sp}\nolimitsec{\mathbin{\mathbb K}}$ of $\varphi$ over $t$, and ${\mathbin{\cal O}}_{X_t}(1)$ for ${\mathbin{\cal O}}_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}(1)\vert_{X_t}$. Suppose that $X_t$ is a smooth Calabi--Yau 3-fold over ${\mathbin{\mathbb K}}$ for all $t\in T({\mathbin{\mathbb K}})$, with $H^1({\mathbin{\cal O}}_{X_t})= 0$. Then the generalized Donaldson--Thomas invariants $\bar{DT}{}^\alpha(\tildemesau)_t$ are independent of $t\in T({\mathbin{\mathbb K}}).$ \lambdabel{defthm} \end{thm} \setminusallskip More precisely, the isomorphism $\Lambda_{X_t} = \Lambda_X$ is canonical up to the action of a finite group $\Gammamma,$ the monodromy on $T,$ and $DT^\alpha(\tildemesau)_t$ is independent of the action of $\Gammamma$ on $\alphapha,$ so whichever identification $\Lambda_{X_t}=\Lambda_X$ is chosen, it remains true that $DT^\alpha(\tildemesau)_t$ is independent of $t.$ {\mathbin{\mathfrak m}}edskip Now, recall that in \circte{JoSo} Joyce and Song used the assumption that the base field is the field of complex numbers ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$ for the Calabi--Yau 3-fold $X$ in three main ways:{\rm \kern.05em ind}ex{field ${\mathbin{\mathbb K}}$|(} {\rm \kern.05em ind}ex{field ${\mathbin{\mathbb K}}$!algebraically closed} \betagin{itemize}{\rm \kern.05em ind}ex{gauge theory}{\rm \kern.05em ind}ex{Behrend function!Behrend identities} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[\bf(a)] Theorem \ref{dt5thm1} in \S\ref{dt4} is proved using gauge theory and transcendental complex analytic methods, which work only over ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$.
It is used to prove the Behrend function identities \eq{dt6eq1}--\eq{dt6eq2}, which are vital for many of their results, including the wall crossing formula for the $\bar{DT}{}^\alpha(\tildemesau)$, and the relation between $PI^{\alpha,n}(\tildemesau')$ and $\bar{DT}{}^\alpha(\tildemesau)$. \item[\bf(b)] In \circte[\S 4.5]{JoSo}, when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$ the Chern character{\rm \kern.05em ind}ex{Chern character} embeds $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ in $H^{\rm even}(X;{\mathbin{\mathbb Q}})$, and they use this to show $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is unchanged under deformations of $X$. This is needed even for the statements that $\bar{DT}{}^\alpha(\tildemesau)$ and $PI^{\alpha,n}(\tildemesau')$ for $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ are invariant under deformations of $X$ to make sense. \item[\bf(c)] Their notion of `compactly embeddable' noncompact Calabi--Yau 3-folds {\rm \kern.05em ind}ex{compactly embeddable} in \circte[\S 6.7]{JoSo} is complex analytic and does not make sense for general~${\mathbin{\mathbb K}}$. This constrains the noncompact Calabi--Yau 3-folds they can define generalized Donaldson--Thomas invariants for. \end{itemize} Now Theorems \ref{dt5thm2} and \ref{dt6thm1.1} extend the results in {\bf(a)} to an algebraically closed field ${\mathbin{\mathbb K}}$ of characteristic zero.
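We remark that the last integrality condition defining $\Lambda_X$ in Theorem \ref{definv}, used in {\bf(b)}, can be motivated by a short Hirzebruch--Riemann--Roch computation; the following is only a sketch of where the condition comes from. For a Calabi--Yau 3-fold $X$ we have $c_1(TX)=0$, so the Todd class reduces to $1+\tildemesextstyle{\rm fr}ac{1}{12}c_2(TX)$ in the relevant degrees, and for a coherent sheaf $E$ on $X$ with ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimits(E)=(\lambda_0,\lambda_1,\lambda_2,\lambda_3)$ one gets \betagin{equation*} {\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(E)=\int_X {\mathbin{\mathfrak m}}athop{\rm ch}\nolimits(E)\cdot {\rm td}(TX)=\lambda_3+\tildemesextstyle{\rm fr}ac{1}{12}\lambda_1 c_2(TX). \end{equation*} Thus the condition $\lambda_3+\tildemesextstyle{\rm fr}ac{1}{12}\lambda_1 c_2(TX)\in{\mathbin{\mathbb Z}}$ expresses the integrality of the Euler characteristic ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi(E),$ while $\lambda_0=\mathop{\rm rank}\nolimits(E)\in{\mathbin{\mathbb Z}}$ and $\lambda_1=c_1(E)$ modulo torsion account for the first two conditions.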
As noted in \circte{Joyc.1}, constructible functions{\rm \kern.05em ind}ex{constructible function!in positive characteristic} methods fail for ${\mathbin{\mathbb K}}$ of positive characteristic.{\rm \kern.05em ind}ex{field ${\mathbin{\mathbb K}}$!positive characteristic} Because of this, the alternative descriptions \eq{dt3eq5} and \eq{dt4eq11}, for $DT^\alpha(\tildemesau)$ and $PI^{\alpha,n}(\tildemesau')$ as weighted Euler characteristics, and the definition of $\bar{DT}{}^\alpha(\tildemesau)$ in \S\ref{dt4}, cannot work in positive characteristic, so working over an algebraically closed field of characteristic zero is about as general as is reasonable. \setminusallskip Point {\bf(a)} above also has consequences for {\bf(c)}, because Joyce and Song need the notion of `compactly embeddable' only because their complex analytic proof of \eq{dt6eq1}--\eq{dt6eq2} requires $X$ to be compact. Unfortunately, the algebraic version of \eq{dt6eq1}--\eq{dt6eq2} given in Theorem \ref{dt6thm1.1} uses results from derived algebraic geometry, and the author does not know whether they also apply to compactly supported sheaves {\rm \kern.05em ind}ex{coherent sheaf!compactly supported} on a noncompact~$X$.{\rm \kern.05em ind}ex{field ${\mathbin{\mathbb K}}$|)} {\rm \kern.05em ind}ex{Calabi--Yau 3-fold!noncompact} We prove a version of it under some technical assumptions, as stated in \S\ref{dt7}. Observe also that in the noncompact case one cannot expect deformation invariance, except in particular cases in which the moduli space is proper. The extension of {\bf(b)} to general ${\mathbin{\mathbb K}}$ is given in Section \ref{def}, which yields Theorem \ref{defthm}; this makes it possible to extend \circte[Cor. 5.28]{JoSo}, on the deformation invariance of the generalized Donaldson--Thomas invariants in the compact case, to algebraically closed fields ${\mathbin{\mathbb K}}$ of characteristic zero.
Thus, this proves our main theorem: \betagin{thm}The theory of generalized Donaldson--Thomas invariants defined in \circte{JoSo} is valid over algebraically closed fields of characteristic zero. \lambdabel{mainthm} \end{thm} Next, we will respectively prove Theorems \ref{dt5thm2}, \ref{dt6thm1.1} and \ref{definv} in \S\ref{localdes}, \S\ref{dt.1} and \S\ref{def}. \subsection{Local description of the Donaldson--Thomas moduli space} \lambdabel{localdes} Let us fix a moduli stack ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ which is locally a global quotient. In particular, ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ can be the moduli stack of coherent sheaves over a Calabi--Yau 3-fold $X$, so that the theory presented in \S\ref{dcr} and \S\ref{ourpapers} applies. \setminusallskip The first step in proving Theorem \ref{dt5thm2} is to show the existence of a quasiprojective ${\mathbin{\mathbb K}}$-scheme $S,$ an action of\/ $G$ on $S,$ a point\/ $s\in S({\mathbin{\mathbb K}})$ fixed by $G,$ and a $1$-morphism of Artin ${\mathbin{\mathbb K}}$-stacks $\xi:[S/G]\rightarrow{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}},$ which is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)-\mathop{\rm dim}\nolimits G,$ {\rm \kern.05em ind}ex{Artin stack}{\rm \kern.05em ind}ex{quotient stack}{\rm \kern.05em ind}ex{stabilizer group} where $[S/G]$ is the quotient stack, such that\/ $\xi(s\,G)=[E],$ the induced morphism on stabilizer groups $\xi_*:{\mathbin{\mathfrak m}}athop{\rm Iso}\nolimits_{[S/G]}(s\,G)\rightarrow{\mathbin{\mathfrak m}}athop{\rm Iso}\nolimits_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}([E])$ is the natural morphism $G{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra\mathop{\rm Aut}(E)\cong{\mathbin{\mathfrak m}}athop{\rm Iso}\nolimits_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}([E]),$ and\/ ${\rm
d}\xi\vert_{s\,G}:T_sS\cong T_{s\,G} [S/G]\rightarrow T_{[E]}{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}\cong \mathop{\rm Ext}\nolimits^1(E,E)$ is an isomorphism. \setminusallskip As ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ is locally a global quotient, let's say ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ is locally $[Q/H]$ with $H=\mathop{\rm GL}(n,{\mathbin{\mathbb K}}),$ and a ${\mathbin{\mathbb K}}$-scheme $Q$ which is $H$-invariant, so that the projection $[Q/H]\rightarrow{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ is a 1-isomorphism with an open ${\mathbin{\mathbb K}}$-substack ${{\mathbin{\mathfrak m}}athfrak Q}$ of ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$. This $1$-isomorphism identifies the stabilizer groups ${\mathbin{\mathfrak m}}athop{\rm Iso}\nolimits_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}([E])=\mathop{\rm Aut}(E)$ and\/ ${\mathbin{\mathfrak m}}athop{\rm Iso}\nolimits_{[Q/H]}(sH)={\mathbin{\mathfrak m}}athop{\rm Stab}\nolimits_H(s),$ and the Zariski tangent spaces $T_{[E]}{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}\cong\mathop{\rm Ext}\nolimits^1(E,E)$ and\/ $T_{sH}[Q/H]\cong T_sQ/T_s(sH),$ so one has natural isomorphisms $\mathop{\rm Aut}(E)\cong{\mathbin{\mathfrak m}}athop{\rm Stab}\nolimits_H(s)$ and\/~$\mathop{\rm Ext}\nolimits^1(E,E)\cong T_sQ/T_s(sH)$, and $G$ is identified as a subgroup of $H.$ \setminusallskip To obtain the 1-morphism with the required properties, following \circte[\S 9.3]{JoSo} and Luna's Etale Slice Theorem~\circte[\S III]{Luna}, we obtain an atlas $S$ as a $G$-invariant, locally closed\/ ${\mathbin{\mathbb K}}$-subscheme in $Q$ with\/ $s\in S({\mathbin{\mathbb K}}),$ such that\/ $T_sQ=T_sS\oplus T_s(sH),$ and the morphism ${\mathbin{\mathfrak m}}u: S\tildemesimes H\rightarrow Q$ induced by the inclusion $S{{\mathbin{\mathfrak 
m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra Q$ and the $H$-action on $Q$ is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E).$ Here $s\in Q({\mathbin{\mathbb K}})$ projects to the point\/ $sH$ in ${{\mathbin{\mathfrak m}}athfrak Q}({\mathbin{\mathbb K}}),$ identified with\/ $[E]\in{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}({\mathbin{\mathbb K}})$ under the $1$-isomorphism\/ ${{\mathbin{\mathfrak m}}athfrak Q}\cong [Q/H],$ and $G$, a ${\mathbin{\mathbb K}}$-subgroup of the ${\mathbin{\mathbb K}}$-group $H$, is as in the statement of Theorem \ref{dt5thm2}, that is, a maximal torus in $\mathop{\rm Aut}(E).$ Since $S$ is invariant under the ${\mathbin{\mathbb K}}$-subgroup $G$ of the ${\mathbin{\mathbb K}}$-group $H$ acting on $Q$, the inclusion $i:S{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra Q$ induces a representable 1-morphism of quotient stacks $i_*:[S/G]\rightarrow [Q/H]$. In \circte{JoSo}, Joyce and Song showed that $i_*$ is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)-\mathop{\rm dim}\nolimits G$. Combining the 1-morphism $i_*:[S/G]\rightarrow [Q/H]$, the 1-isomorphism ${{\mathbin{\mathfrak m}}athfrak Q}\cong [Q/H]$, and the open inclusion ${{\mathbin{\mathfrak m}}athfrak Q}{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ yields a 1-morphism $\xi:[S/G]\rightarrow{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$, as required for Theorem \ref{dt5thm2}. This $\xi$ is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)-\mathop{\rm dim}\nolimits G$, as $i_*$ is.
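The relative dimension of $i_*$ can also be recovered by a pointwise dimension count; the following is only a sketch, assuming the spaces involved have the expected dimensions near the relevant points. Smoothness of ${\mathbin{\mathfrak m}}u: S\tildemesimes H\rightarrow Q$ of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)$ gives \betagin{equation*} \mathop{\rm dim}\nolimits S+\mathop{\rm dim}\nolimits H-\mathop{\rm dim}\nolimits Q=\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E), \end{equation*} so that \betagin{equation*} \mathop{\rm dim}\nolimits [S/G]-\mathop{\rm dim}\nolimits [Q/H]=\bigl(\mathop{\rm dim}\nolimits S-\mathop{\rm dim}\nolimits G\bigr)-\bigl(\mathop{\rm dim}\nolimits Q-\mathop{\rm dim}\nolimits H\bigr)=\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)-\mathop{\rm dim}\nolimits G, \end{equation*} which agrees with the relative dimension of $i_*:[S/G]\rightarrow [Q/H]$ above.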
If $\mathop{\rm Aut}(E)$ is reductive, so that $G=\mathop{\rm Aut}(E)$, then $\xi$ is smooth of relative dimension $0$, that is, $\xi$ is \'etale.{\rm \kern.05em ind}ex{etale morphism@\'etale morphism} The conditions that $\xi(s\,G)=[E]$ and that $\xi_*:{\mathbin{\mathfrak m}}athop{\rm Iso}\nolimits_{[S/G]}(s\,G)\rightarrow{\mathbin{\mathfrak m}}athop{\rm Iso}\nolimits_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}([E])$ is the natural $G{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra\mathop{\rm Aut}(E) \cong{\mathbin{\mathfrak m}}athop{\rm Iso}\nolimits_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}([E])$ in Theorem \ref{dt5thm1} are immediate from the construction. That ${\rm d}\xi\vert_{s\,G}:T_sS\cong T_{s\,G} [S/G]\rightarrow T_{[E]}{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}\cong \mathop{\rm Ext}\nolimits^1(E,E)$ is an isomorphism follows from $T_{[E]}{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}\cong T_{sH}[Q/H]\cong T_sQ/T_s(sH)$ and $T_sQ=T_sS\oplus T_s(sH)$.{\rm \kern.05em ind}ex{Artin stack!atlas|)}{\rm \kern.05em ind}ex{moduli stack!atlas|)} \setminusallskip In conclusion, we can summarize as follows: given a point $[E]\in {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}({\mathbin{\mathbb K}})$, that is, an equivalence class of (complexes of) coherent sheaves, we will denote by $G$ a maximal torus in $\mathop{\rm Aut}(E).$ As ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ is locally a global quotient, there exists an atlas $S$, which is a scheme over ${\mathbin{\mathbb K}}$, and a smooth morphism $\pi : S \rightarrow {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$, with $\pi$ smooth of relative dimension $\mathop{\rm dim}\nolimits G.$ If $x \in S$ is the point corresponding to $E\in {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}({\mathbin{\mathbb K}}),$ then $\pi$ smooth of relative dimension $\mathop{\rm dim}\nolimits G$ means that $\pi$ has {\it minimal dimension} near $E,$ that is, $T_x S = \mathop{\rm Ext}\nolimits^1(E,E).$ Moreover, the atlas $S$ is endowed with a $G$-action, so that $\pi$ descends to a morphism $[S/G] \rightarrow {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}.$ \setminusallskip Note next that the maximal torus $G$ acts on $S$ fixing $x.$ By replacing $S$ by a $G$-equivariant \'etale open neighbourhood $S'$ of $x,$ we can suppose $S$ is affine. Then, from material in \S\ref{dcr} and \S\ref{ourpapers} we deduce that the atlas $S'$ in the sense of Theorems \ref{sa2cor1} and \ref{sa3thm6} for the moduli stack ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ carries a d-critical locus structure $(S',s_{S'})$ which is $\mathop{\rm GL}(n,{\mathbin{\mathbb K}})$-equivariant in the sense of \S\ref{eqdcr}. \setminusallskip Using Proposition \ref{dc2prop14}, there exists a $G$-invariant critical chart $(R,U,f,i)$ in the sense of \S\ref{dcr} for $(S,s)$ with $x$ in $R,$ and with $\mathop{\rm dim}\nolimits U$ minimal, so that $T_{i(x)}U = T_x R = \mathop{\rm Ext}\nolimits^1(E,E)$. \setminusallskip Making $U$ smaller if necessary, we can choose $G$-equivariant \'etale coordinates $U \rightarrow {\mathbin{\mathbb A}}^n = \mathop{\rm Ext}\nolimits^1(E,E)$ near $i(x),$ sending $i(x)$ to $0,$ and with $T_{i(x)}U = \mathop{\rm Ext}\nolimits^1(E,E)$ the given identification. Then we can regard $U \rightarrow \mathop{\rm Ext}\nolimits^1(E,E)$ as a $G$-equivariant \'etale open neighbourhood of $0$ in $\mathop{\rm Ext}\nolimits^1(E,E),$ which concludes the proof of Theorem \ref{dt5thm2}. \subsection{Behrend function identities} \lambdabel{dt.1} Now we are ready to prove Theorem \ref{dt6thm1.1}.
Let $X$ be a Calabi--Yau $3$-fold over an algebraically closed field ${\mathbin{\mathbb K}}$ of characteristic zero, ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ the moduli stack of coherent sheaves on $X$, and $E_1,E_2$ be coherent sheaves on $X$. Set $E=E_1\oplus E_2$. Using the splitting \e \mathop{\rm Ext}\nolimits^1(E,E)\!=\!\mathop{\rm Ext}\nolimits^1(E_1,E_1)\!\oplus\!\mathop{\rm Ext}\nolimits^1(E_2,E_2)\!\oplus\!\mathop{\rm Ext}\nolimits^1(E_1,E_2) \!\oplus\!\mathop{\rm Ext}\nolimits^1(E_2,E_1), \lambdabel{dt6eq3} \e write elements of $\mathop{\rm Ext}\nolimits^1(E,E)$ as $(\epsilon_{11},\epsilon_{22},\epsilon_{12},\epsilon_{21})$ with $\epsilon_{ij}\in\mathop{\rm Ext}\nolimits^1(E_i,E_j)$. For simplicity, we will write $e_{ij}=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_i,E_j).$ Choose a maximal torus {\rm \kern.05em ind}ex{ reductive group!maximal} $G$ of $\mathop{\rm Aut}(E)$ which contains the subgroup $T=\bigl\{{\mathop{\rm id}\nolimits}_{E_1}+\lambda{\mathop{\rm id}\nolimits}_{E_2}: \lambda\in{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb G}}_m\bigr\}$, which acts on $\mathop{\rm Ext}\nolimits^1(E,E)$ by \e \lambda:(\epsilon_{11},\epsilon_{22},\epsilon_{12},\epsilon_{21}) {\mathbin{\mathfrak m}}apsto(\epsilon_{11},\epsilon_{22},\lambda^{-1}\epsilon_{12},\lambda\epsilon_{21}). \lambdabel{dt6eq4} \e Apply Theorem \ref{dt5thm2} with these $E$ and $G$. 
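Note for later use that the fixed point locus of the $T$-action \eq{dt6eq4} is easy to identify: a point $(\epsilon_{11},\epsilon_{22},\epsilon_{12},\epsilon_{21})$ is fixed by every $\lambda\in{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb G}}_m$ if and only if $\lambda^{-1}\epsilon_{12}=\epsilon_{12}$ and $\lambda\epsilon_{21}=\epsilon_{21}$ for all $\lambda,$ that is, if and only if $\epsilon_{12}=\epsilon_{21}=0,$ so that \betagin{equation*} \mathop{\rm Ext}\nolimits^1(E,E)^T=\mathop{\rm Ext}\nolimits^1(E_1,E_1)\oplus\mathop{\rm Ext}\nolimits^1(E_2,E_2). \end{equation*} Since $T\subseteq G,$ the fixed point locus $\mathop{\rm Ext}\nolimits^1(E,E)^G$ is contained in this subspace.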
This gives an \'etale morphism $u:U\rightarrow \mathop{\rm Ext}\nolimits^1(E,E)$ with $U$ a smooth affine ${\mathbin{\mathbb K}}$-scheme, and $u(p)=0,$ for $p\in U({\mathbin{\mathbb K}}),$ a $G$-invariant regular function $f:U\rightarrow {\mathbin{\mathbb A}}^1_{\mathbin{\mathbb K}}$ on $U$ with $f\vert_p=\partial f\vert_p=0,$ an open neighbourhood\/ $V$ of\/ $s$ in $S,$ and a $1$-morphism $\xi:\mathop{\rm Crit}(f) \rightarrow{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E),$ with\/ $\xi(p)=[E]\in{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}({\mathbin{\mathbb K}})$ and\/ ${\rm d}\xi\vert_p:T_p(\mathop{\rm Crit}(f))=\mathop{\rm Ext}\nolimits^1(E,E)\rightarrow T_{[E]}{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ the natural isomorphism. Then the Behrend function $\nu_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}$ at $[E]=[E_1\oplus E_2]$ satisfies {\rm \kern.05em ind}ex{moduli stack!local structure} \e \nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_1\oplus E_2)=(-1)^{\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)} \nu_{\mathop{\rm Crit}(f)}(0), \lambdabel{dt6eq5} \e where one uses that $\xi$ is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E),$ and Theorem \ref{dt3thm3} to say that $$\nu_{\mathop{\rm Crit}(f)}=(-1)^{\mathop{\rm dim}\nolimits(\mathop{\rm Aut}(E))}\xi^*(\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}).$$ On the other hand, the last part of the proof of \eq{dt6eq1.1} in \circte[Section 10.1]{JoSo} uses algebraic methods and gives \e \nu_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}(E_1)\nu_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}(E_2)=\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}\tildemesimes{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak 
M}}}(E_1,E_2)=(-1)^{\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E_1)+\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E_2)}\nu_{Crit(f^G)}(0), \lambdabel{dt6eq6} \e where $\nu_{Crit(f^G)}(0)=\nu_{\mathop{\rm Crit}(f)^G}(0)=\nu_{\mathop{\rm Crit}(f\vert_{U\cap\mathop{\rm Ext}\nolimits^1(E,E)^G})}(0),$ $U$ is as in Theorem \ref{dt5thm2}, and $\mathop{\rm Ext}\nolimits^1(E,E)^G$ denotes the fixed point locus of $\mathop{\rm Ext}\nolimits^1(E,E)$ for the $G$-action. Thus, what remains to be proved in order to establish identity \eq{dt6eq1.1} is \e \nu_{\mathop{\rm Crit}(f)}(0)=(-1)^{\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_1,E_2)+\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_2,E_1)}\nu_{Crit(f^G)}(0). \lambdabel{dt6eq7} \e This is a generalization of a result in \circte{BeFa2} over ${\mathbin{\mathbb C}}$ in the case of an isolated ${\mathbin{\mathbb C}}^*$-fixed point. Combining equations \eq{dt6eq5}, \eq{dt6eq6} and \eq{dt6eq7} and sorting out the signs as in \circte[Section 10.1]{JoSo} proves equation \eq{dt6eq1.1}. Equation \eq{dt6eq7} will also be crucial for the proof of the second Behrend identity \eq{dt6eq2.1}. {\mathbin{\mathfrak m}}edskip Let us start by recalling an easy result similar to \circte[Prop. 10.1]{JoSo}, but now in the \'etale topology. Let $u:U \rightarrow \mathop{\rm Ext}\nolimits^1(E,E)$ be the \'etale map as in \S\ref{localdes}, and let $p\in U$ be such that $u(p)=0.$ We will regard points $(0,0,\epsilon_{12},0),(0,0,0,\epsilon_{21})\in \mathop{\rm Ext}\nolimits^1(E,E)$ essentially as points of $U$. This is because there is a unique lift $\alpha(\epsilon_{12})$ of $(0,0,\epsilon_{12},0)\in \mathop{\rm Ext}\nolimits^1(E,E)$ to $U$, such that $u(\alpha(\epsilon_{12}))=(0,0,\epsilon_{12},0)$ and $\lim_{\lambda \rightarrow 0} \lambda . \alpha(\epsilon_{12}) =p,$ using that $\lim_{\lambda\rightarrow 0} (0,0,\lambda^{-1}\epsilon_{12},0) = (0,0,0,0),$ and similarly for $(0,0,0,\epsilon_{21}).$ So we can state the following result, whose proof follows \circte[Prop. 10.1]{JoSo} with the obvious modifications, working in the \'etale topology. \betagin{prop} Let\/ $\epsilon_{12}\in\mathop{\rm Ext}\nolimits^1(E_1,E_2)$ and\/ $\epsilon_{21}\in\mathop{\rm Ext}\nolimits^1(E_2,E_1)$. Then \betagin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(i)}] $(0,0,\epsilon_{12},0),(0,0,0,\epsilon_{21})\in\mathop{\rm Crit}(f) \subseteq U\subseteq\mathop{\rm Ext}\nolimits^1(E,E),$ and\/ $(0,0,\epsilon_{12},0),\allowbreak (0,\allowbreak 0,\allowbreak 0,\allowbreak\epsilon_{21})\in V\subseteq S({\mathbin{\mathbb K}})\subseteq\mathop{\rm Ext}\nolimits^1(E,E);$ \item[{\bf(ii)}] $\xi$ maps $(0,0,\epsilon_{12},0) {\mathbin{\mathfrak m}}apsto (0,0,\epsilon_{12},0)$ and\/ $(0,0,0,\epsilon_{21}){\mathbin{\mathfrak m}}apsto(0,0,0,\epsilon_{21});$ and \item[{\bf(iii)}] the induced morphism on closed points $[S/\mathop{\rm Aut}(E)] ({\mathbin{\mathbb K}})\rightarrow{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}({\mathbin{\mathbb K}})$ maps $[(0,\allowbreak 0,\allowbreak 0,\allowbreak\epsilon_{21})]{\mathbin{\mathfrak m}}apsto[F]$ and\/ $[(0,0,\epsilon_{12},0)]{\mathbin{\mathfrak m}}apsto [F'],$ where the exact sequences $0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0$ and\/ $0\rightarrow E_2\rightarrow F'\rightarrow E_1\rightarrow 0$ in $\mathop{\rm coh}\nolimits(X)$ correspond to $\epsilon_{21}\in\mathop{\rm Ext}\nolimits^1(E_2,E_1)$ and\/ $\epsilon_{12}\in\mathop{\rm Ext}\nolimits^1(E_1,E_2),$ respectively. \end{itemize} \lambdabel{dt10prop2} \end{prop} Now use the idea in \circte[\S10.2]{JoSo}.
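Before proving \eq{dt6eq7}, let us record, as a sketch with the conventions of \circte{JoSo}, how the signs in \eq{dt6eq5}, \eq{dt6eq6} and \eq{dt6eq7} combine. Write $h_{ij}=\mathop{\rm dim}\nolimits\mathop{\rm Hom}\nolimits(E_i,E_j).$ Since $\mathop{\rm Aut}(E)$ is open in $\mathop{\rm Hom}\nolimits(E,E),$ we have $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)=h_{11}+h_{22}+h_{12}+h_{21},$ and similarly $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E_i)=h_{ii}.$ Granting \eq{dt6eq7}, equations \eq{dt6eq5} and \eq{dt6eq6} give \betagin{equation*} \nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_1\oplus E_2)=(-1)^{\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)+e_{12}+e_{21}}\nu_{Crit(f^G)}(0)=(-1)^{h_{12}+h_{21}+e_{12}+e_{21}}\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_1)\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_2), \end{equation*} since $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)+\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E_1)+\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E_2)\equiv h_{12}+h_{21}$ mod $2.$ By Serre duality on the Calabi--Yau 3-fold $X,$ $\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^2(E_1,E_2)=e_{21}$ and $\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^3(E_1,E_2)=h_{21},$ so the exponent $h_{12}+h_{21}+e_{12}+e_{21}$ is congruent mod $2$ to the Euler form $\bar\chi(E_1,E_2)=h_{12}-e_{12}+e_{21}-h_{21},$ which is the sign appearing in the Behrend function identities of \circte{JoSo}.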
Set $U'=\bigl\{(\epsilon_{11},\epsilon_{22},\epsilon_{12},\epsilon_{21})\in U:\epsilon_{21}\ne 0\bigr\}$, an open set in $U$, and write $V'$ for the submanifold of $(\epsilon_{11},\epsilon_{22},\epsilon_{12},\epsilon_{21})\in U'$ with $\epsilon_{12}=0$. Let $\tildemesi U'$ be the blowup of $U'$ along $V'$, with projection $\pi':\tildemesi U'\rightarrow U'$. Points of $\tildemesi U'$ may be written $(\epsilon_{11},\epsilon_{22},[\epsilon_{12}],\lambda\epsilon_{12},\epsilon_{21})$, where $[\epsilon_{12}]\in{\mathbin{\mathfrak m}}athbb{P} (\mathop{\rm Ext}\nolimits^1(E_1,E_2))$, and $\lambda\in{\mathbin{\mathbb K}}$, and $\epsilon_{21}\ne 0$. Write $f'=f\vert_{U'}$ and $\tildemesi f'=f'\circ\pi'$. Then applying Theorem \ref{blowup} to $U',V',f',\tildemesi U',\pi',\tildemesi f'$ at the point $(0,0,0,\epsilon_{21})\in U'$ gives \e \betagin{split} \nu_{\mathop{\rm Crit}(f)}(0,0,0,\epsilon_{21}) \quad={\rm d}isplaystyle\int\limits_{\setminusall{[\epsilon_{12}] \in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))}}\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(\tildemesi f')}(0,0,[\epsilon_{12}],0,\epsilon_{21})\,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi + (-1)^{e_{12}}\bigl(1- e_{12}\bigr) \nu_{\mathop{\rm Crit}(f\vert_{V'})}(0,0,0,\epsilon_{21}). \lambdabel{dt6eq26} \end{split} \e Here $\nu_{\mathop{\rm Crit}(f)}(0,0,0,\epsilon_{21})$ is independent of the choice of $\epsilon_{21}$ representing the point $[\epsilon_{21}]\in{\mathbin{\mathfrak m}}athbb{P} (\mathop{\rm Ext}\nolimits^1(E_2,E_1))$, and is a constructible function{\rm \kern.05em ind}ex{constructible function} of $[\epsilon_{21}]$, so the integrals in \eq{dt6eq26} are well-defined. 
Note that $\nu_{\mathop{\rm Crit}(f)}$ and the other Behrend functions in the sequel are nonzero just on the zero loci of the corresponding functions, so here and in the sequel the integrals over the whole ${\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(\ldots))$ actually are just over the points that lie in these zero loci. Adopt this convention for the whole section. \setminusallskip Similarly consider the analogous situation exchanging the role of $\epsilon_{12}$ and $\epsilon_{21}.$ Set $U''=\bigl\{(\epsilon_{11},\epsilon_{22},\epsilon_{12},\epsilon_{21})\in U :\epsilon_{12}\ne 0\bigr\}$, an open set in $U$, and write $V'' =\bigl\{(\epsilon_{11},\epsilon_{22},\epsilon_{12},\epsilon_{21})\in U'' : \epsilon_{21}=0\bigr\}$. Let $\tildemesi U''$ be the blowup of $U''$ along $V''$, with projection $\pi'':\tildemesi U''\rightarrow U''$. Points of $\tildemesi U''$ may be written $(\epsilon_{11},\epsilon_{22},\epsilon_{12},[\epsilon_{21}],\lambda\epsilon_{21})$, where $[\epsilon_{21}]\in{\mathbin{\mathfrak m}}athbb{P} (\mathop{\rm Ext}\nolimits^1(E_2,E_1))$, and $\lambda\in{\mathbin{\mathbb K}}$, and $\epsilon_{12}\ne 0$. Write $f''=f\vert_{U''}$ and $\tildemesi f''=f''\circ\pi''$. Similarly to the previous situation, we can apply Theorem \ref{blowup} to $U'',V'',f'',\tildemesi U'',\pi'',\tildemesi f''$ at the point $(0,0,\epsilon_{12},0)\in U''$ which gives \e \betagin{split} \nu_{\mathop{\rm Crit}(f)}(0,0,\epsilon_{12},0)\quad ={\rm d}isplaystyle\int\limits_{\setminusall{[\epsilon_{21}] \in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))}}\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(\tildemesi f'')}(0,0,\epsilon_{12},0,[\epsilon_{21}])\,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi + (-1)^{e_{21}}\bigl(1- e_{21}\bigr) \nu_{\mathop{\rm Crit}(f\vert_{V''})}(0,0,\epsilon_{12},0). 
\lambdabel{dt6eq27} \end{split} \e Let $L_{12}\rightarrow{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))$ and $L_{21}\rightarrow{\mathbin{\mathfrak m}}athbb{P} (\mathop{\rm Ext}\nolimits^1(E_2,E_1))$ be the tautological line bundles, so that the fibre of $L_{12}$ over a point $[\epsilon_{12}]$ in ${\mathbin{\mathfrak m}}athbb{P} (\mathop{\rm Ext}\nolimits^1(E_1,E_2))$ is the 1-dimensional subspace $\{\lambda\,\epsilon_{12}: \lambda\in{\mathbin{\mathbb K}}\}$ in $\mathop{\rm Ext}\nolimits^1(E_1,E_2)$. Consider the fibre product $$ \xymatrix@C=70pt@R=25pt{Z\; \ar[r]^{\tildemesextrm{\'etale}\quad\quad\quad\quad\quad\quad\quad\quad\quad} \ar[d] & \mathop{\rm Ext}\nolimits^1(E_1,E_1)\tildemesimes \mathop{\rm Ext}\nolimits^1(E_2,E_2)\tildemesimes (L_{12}\oplus L_{21}) \ar[d] \\ U \; \ar[r]^{\tildemesextrm{\'etale}\quad} & \mathop{\rm Ext}\nolimits^1(E,E)} $$ where the horizontal maps are \'etale morphisms. Informally, this defines $Z\subseteq\mathop{\rm Ext}\nolimits^1(E_1,E_1)\tildemesimes \mathop{\rm Ext}\nolimits^1(E_2,E_2)\tildemesimes (L_{12}\oplus L_{21})$ to be the \'etale open subset of points $\bigl(\epsilon_{11},\epsilon_{22},[\epsilon_{12}],\lambda_{1} \, \epsilon_{12}, [\epsilon_{21}],\lambda_{2} \, \epsilon_{21}\bigr)$ for $\lambda_i \in {\mathbin{\mathbb K}},$ for which $(\epsilon_{11},\epsilon_{22},\lambda_{1}\,\epsilon_{12},\lambda_{2} \, \epsilon_{21})$ lies in $U.$ Observe that $Z$ contains both $\tildemesi U'$ and $\tildemesi U'',$ which contain the subschemes $\mathop{\rm Crit}(\tildemesi f')$ and $\mathop{\rm Crit}(\tildemesi f''),$ respectively. \setminusallskip Define also an \'etale open set of points $W\subseteq\mathop{\rm Ext}\nolimits^1(E_1,E_1)\tildemesimes \mathop{\rm Ext}\nolimits^1(E_2,E_2)\tildemesimes (L_{12}\otimes L_{21})$ fitting into the following cartesian square: $$ \xymatrix@C=70pt@R=25pt{Z\; \ar[r]^{\tildemesextrm{\'etale}\quad\quad\quad\quad\quad\quad\quad\quad\quad} \ar[d]_\Pi & \mathop{\rm Ext}\nolimits^1(E_1,E_1)
\tildemesimes \mathop{\rm Ext}\nolimits^1(E_2,E_2)\tildemesimes (L_{12}\oplus L_{21}) \ar[d]^{\Pi'} \\ W \; \ar[r]^{\tildemesextrm{\'etale}\quad\quad\quad\quad\quad\quad\quad\quad\quad} & \mathop{\rm Ext}\nolimits^1(E_1,E_1)\tildemesimes \mathop{\rm Ext}\nolimits^1(E_2,E_2)\tildemesimes (L_{12}\otimes L_{21}) } $$ where the line bundle $$L_{12}\otimes L_{21}\rightarrow{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))\tildemesimes {\mathbin{\mathfrak m}}athbb{P} (\mathop{\rm Ext}\nolimits^1(E_2,E_1))$$ has fibre $\{\lambda\,\epsilon_{12}\otimes\epsilon_{21}: \lambda\in{\mathbin{\mathbb K}}\}$ over $([\epsilon_{12}],[\epsilon_{21}])$. Write points of the total space of $L_{12}\otimes L_{21}$ as $\bigl([\epsilon_{12}],[\epsilon_{21}],\lambda\,\epsilon_{12}\otimes\epsilon_{21}\bigr)$. Informally, $W$ is defined as the open subset of points $\bigl(\epsilon_{11},\epsilon_{22},[\epsilon_{12}], [\epsilon_{21}],\lambda\,\epsilon_{12}\otimes\epsilon_{21}\bigr)$ for which $(\epsilon_{11},\epsilon_{22},\lambda\,\epsilon_{12},\epsilon_{21})$ lies in $U$. Since $U$ is $G$-invariant, this definition is independent of the choice of representatives $\epsilon_{12},\epsilon_{21}$ for $[\epsilon_{12}], [\epsilon_{21}]$, since any other choice would replace $(\epsilon_{11},\epsilon_{22},\lambda\,\epsilon_{12},\epsilon_{21})$ by $(\epsilon_{11},\epsilon_{22}, \lambda{\mathbin{\mathfrak m}}u\,\epsilon_{12},{\mathbin{\mathfrak m}}u^{-1}\epsilon_{21})$ for some ${\mathbin{\mathfrak m}}u\in{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb G}}_m$.
The map $\Pi: Z\rightarrow W$ is \'etale equivalent to $$\Pi':(\epsilon_{11},\epsilon_{22},[\epsilon_{12}],\lambda_{1} \, \epsilon_{12}, [\epsilon_{21}],\lambda_{2} \, \epsilon_{21}){\mathbin{\mathfrak m}}apsto(\epsilon_{11},\epsilon_{22},\allowbreak [\epsilon_{12}],\allowbreak[\epsilon_{21}],\lambda_{1}\lambda_{2}\,\epsilon_{12}\otimes\epsilon_{21}),$$ which is a smooth projection of relative dimension $1$ except at the points such that $\lambda_{1} =\lambda_{2} =0.$ However, it is smooth at $(0 ,\lambda_{2} )$ with $\lambda_{2} \neq 0$ and similarly at $(\lambda_{1},0 )$ with $\lambda_{1} \neq 0,$ that is, the two restrictions of $\Pi$ to $\tildemesi U'$ and $\tildemesi U''$ are both smooth of relative dimension $1.$ {\rm \kern.05em ind}ex{almost closed $1$-form!invariant} \setminusallskip Here is the crucial point: $\mathop{\rm Crit}(\tildemesi f')\subset \tildemesi U'$ and $\mathop{\rm Crit}(\tildemesi f'')\subset \tildemesi U''$ are ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb G}}_m$-invariant subschemes, so there exists a subscheme $Q$ of $W$ such that $\mathop{\rm Crit}(\tildemesi f')=\Pi^{-1}(Q)\cap\tildemesi U'$ and $\mathop{\rm Crit}(\tildemesi f'')=\Pi^{-1}(Q)\cap\tildemesi U''$ and both $\Pi: \mathop{\rm Crit}(\tildemesi f')\rightarrow Q$ and $\Pi: \mathop{\rm Crit}(\tildemesi f'')\rightarrow Q$ are smooth of relative dimension $1.$ Thus Theorem \ref{dt3thm3} yields that $\nu_{\mathop{\rm Crit}(\tildemesi f')}=-\Pi^*(\nu_Q)$ and $\nu_{\mathop{\rm Crit}(\tildemesi f'')}=-\Pi^*(\nu_Q)$ and then \e \nu_{\mathop{\rm Crit}(\tildemesi f')}(0,0,[\epsilon_{12}],0,\epsilon_{21})=-\nu_{Q}(0,0,[\epsilon_{12}],[\epsilon_{21}],0)=\nu_{\mathop{\rm Crit}(\tildemesi f'')}(0,0,\epsilon_{12},0,[\epsilon_{21}]), \lambdabel{dt6eq29} \e where the sign comes from the fact that the map $\Pi$ is smooth of relative dimension $1.$ Moreover observe that \e \nu_{\mathop{\rm Crit}(f\vert_{V'})}(0,0,0,\epsilon_{21})=(-1)^{ e_{21}}\nu_{\mathop{\rm Crit}(f)^G}(0,0,0,0).
\lambdabel{dt6eq30} \e This is because the $T$-invariance of $f$ implies that its values on $(\epsilon_{11},\epsilon_{22},0,\epsilon_{21})$ and $(\epsilon_{11},\epsilon_{22},0,0)$ are the same and the projection $\mathop{\rm Crit}(f\vert_{V'}) \rightarrow\mathop{\rm Crit}(f\vert_{U^T})$ is smooth of relative dimension $e_{21}.$ For the same reason, one has \e \nu_{\mathop{\rm Crit}(f\vert_{V''})}(0,0,\epsilon_{12},0)=(-1)^{e_{12}}\nu_{\mathop{\rm Crit}(f)^G}(0,0,0,0). \lambdabel{dt6eq31} \e \setminusallskip Now, substitute equations (\ref{dt6eq29}), (\ref{dt6eq30}) and (\ref{dt6eq31}) into (\ref{dt6eq26}) and (\ref{dt6eq27}). One gets \e \betagin{split} \nu_{\mathop{\rm Crit}(f)}(0,0,0,\epsilon_{21})\quad=\quad-{\rm d}isplaystyle\int\limits_{\setminusall{[\epsilon_{12}] \in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))}}\!\!\!\!\!\!\nu_{Q}(0,0,[\epsilon_{12}],[\epsilon_{21}],0) \,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi +(-1)^{e_{12}+e_{21}}\bigl(1-e_{12}\bigr) \nu_{\mathop{\rm Crit}(f)^G}(0,0,0,0), \end{split} \lambdabel{dt6eq32} \e \e \betagin{split} \nu_{\mathop{\rm Crit}(f)}(0,0,\epsilon_{12},0)\quad=\quad-{\rm d}isplaystyle\int\limits_{\setminusall{[\epsilon_{21}] \in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))}}\!\!\!\!\!\!\nu_{Q}(0,0,[\epsilon_{12}],[\epsilon_{21}],0) \,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi +(-1)^{e_{12}+e_{21}}\bigl(1-e_{21}\bigr) \nu_{\mathop{\rm Crit}(f)^G}(0,0,0,0).
\end{split} \lambdabel{dt6eq33} \e \setminusallskip Finally integrating \eq{dt6eq32} over $[\epsilon_{21}]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))$ and \eq{dt6eq33} over $[\epsilon_{12}] \in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2)),$ yields \e \betagin{split} \int\limits_{\setminusall{[\epsilon_{21}]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))}} & \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(f)}(0,0,0,\epsilon_{21}) \,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi\quad =\quad - \int\limits_{\setminusall{([\epsilon_{12}],[\epsilon_{21}])\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))\tildemesimes {\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))}} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\nu_{Q}(0,0,[\epsilon_{12}],[\epsilon_{21}],0) \,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi \\ & +(-1)^{e_{12}+e_{21}}\bigl(1-e_{12}\bigr) e_{21} \nu_{\mathop{\rm Crit}(f)^G}(0), \lambdabel{dt6eq34} \end{split} \e \e \betagin{split} \int\limits_{\setminusall{[\epsilon_{12}]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))}} & \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(f)}(0,0,\epsilon_{12},0) \,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi\quad=\quad - \int\limits_{\setminusall{([\epsilon_{12}],[\epsilon_{21}])\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))\tildemesimes {\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))}} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\nu_{Q}(0,0,[\epsilon_{12}],[\epsilon_{21}],0) \,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi \\& +(-1)^{e_{12}+e_{21}}\bigl(1-e_{21}\bigr) e_{12} \nu_{\mathop{\rm Crit}(f)^G}(0), \lambdabel{dt6eq35} \end{split} \e since ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi\bigl({\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))\bigr)=e_{21}$ and 
${\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi\bigl({\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))\bigr)=e_{12}.$ Subtracting \eq{dt6eq35} from \eq{dt6eq34} gives \e \betagin{split} \int\limits_{\setminusall{[\epsilon_{21}]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))}} & \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(f)}(0,0,0,\epsilon_{21}) \,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi\quad - \int\limits_{\setminusall{[\epsilon_{12}]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))}} \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(f)}(0,0,\epsilon_{12},0) \,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi= \\ & (-1)^{e_{12}+e_{21}}\bigl(e_{21}-e_{12}\bigr) \nu_{\mathop{\rm Crit}(f)^G}(0). \end{split} \lambdabel{dt6eq25} \e Consider now equation \eq{dt6eq25} with ${\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1)\oplus{\mathbin{\mathbb K}})$ substituted for ${\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1)).$ This adds one dimension to $\mathop{\rm Ext}\nolimits^1(E,E).$ Denote by $\tildemesi{\tildemesi f}$ the lift of $f$ to $\mathop{\rm Ext}\nolimits^1(E,E)\oplus{\mathbin{\mathbb K}}.$ In this case equation \eq{dt6eq25} becomes \e \betagin{split} \int\limits_{\setminusall{[\epsilon_{21}]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1)\oplus{\mathbin{\mathbb K}})}} & \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}({\tildemesi{\tildemesi f}})}(0,0,0,\epsilon_{21}\oplus\lambda) \,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi\quad - \int\limits_{\setminusall{[\epsilon_{12}]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))}} \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}({\tildemesi{\tildemesi f}})}(0,0,\epsilon_{12},0) \,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi= \\ &(-1)^{1+e_{12}+e_{21}}\bigl(1+e_{21}-e_{12}\bigr) \nu_{\mathop{\rm Crit}({\tildemesi{\tildemesi f}})^G}(0), \end{split}
\lambdabel{dt6eq25.1.1} \e Now, observe that $\nu_{\mathop{\rm Crit}(f)}=-\nu_{\mathop{\rm Crit}({\tildemesi{\tildemesi f}})}$ from Theorem \ref{dt3thm3} and $\nu_{\mathop{\rm Crit}({\tildemesi{\tildemesi f}})^G}(0)=\nu_{\mathop{\rm Crit}(f)^G}(0)$ as $(\mathop{\rm Ext}\nolimits^1(E,E)\oplus{\mathbin{\mathbb K}})^G=\mathop{\rm Ext}\nolimits^1(E,E)^G\oplus 0$ and the map $\mathop{\rm Crit}({\tildemesi{\tildemesi f}})^G \rightarrow \mathop{\rm Crit}(f)^G$ is \'etale. Thus \e \betagin{split} -\!\!\!\!\!\!\!\!\!\int\limits_{\setminusall{[\epsilon_{21}]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))}} & \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(f)}(0,0,0,\epsilon_{21}) \,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi \; -\; \nu_{\mathop{\rm Crit}(f)}(0,0,0,0) \quad + \int\limits_{\setminusall{[\epsilon_{12}]\in{\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))}} \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(f)}(0,0,\epsilon_{12},0) \,{\rm d}{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi= \\ & (-1)^{1+e_{12}+e_{21}}\bigl(1+e_{21}-e_{12}\bigr) \nu_{\mathop{\rm Crit}(f)^G}(0). \end{split} \lambdabel{dt6eq25.1} \e \setminusallskip Here, $\nu_{\mathop{\rm Crit}(f)}(0)$ on the l.h.s. comes from the fact that the ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb G}}_m$-action over ${\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1)\oplus{\mathbin{\mathbb K}})$ fixes ${\mathbin{\mathfrak m}}athbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))$ and the point $[0,1];$ the free orbits of the ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athbb G}}_m$-action contribute zero to the weighted Euler characteristic. Then one uses that $\nu_{\mathop{\rm Crit}({\tildemesi{\tildemesi f}})}$ evaluated at the point $[0,1]$ is equal to $-\nu_{\mathop{\rm Crit}(f)}(0).$ Adding \eq{dt6eq25} and \eq{dt6eq25.1} yields \eq{dt6eq7}, which concludes the proof of identity \eq{dt6eq1.1}.
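For the reader's convenience, we spell out the sign bookkeeping in this last step. Adding \eq{dt6eq25} and \eq{dt6eq25.1}, the two pairs of integral terms on the left hand sides cancel, while the right hand sides combine via the elementary identity
$$(-1)^{e_{12}+e_{21}}\bigl(e_{21}-e_{12}\bigr)+(-1)^{1+e_{12}+e_{21}}\bigl(1+e_{21}-e_{12}\bigr)=-(-1)^{e_{12}+e_{21}},$$
so that one is left with
$$\nu_{\mathop{\rm Crit}(f)}(0,0,0,0)=(-1)^{e_{12}+e_{21}}\,\nu_{\mathop{\rm Crit}(f)^G}(0).$$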
{\mathbin{\mathfrak m}}edskip The conclusion of the proof of identity \eq{dt6eq2.1} is now easy. Let $0\ne\epsilon_{21}\in\mathop{\rm Ext}\nolimits^1(E_2,E_1)$ correspond to the short exact sequence $0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0$ in $\mathop{\rm coh}\nolimits(X)$. Then \e \nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(F)=(-1)^{\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)}\nu_{\mathop{\rm Crit}(f)}(0,0,0,\epsilon_{21}) \lambdabel{dt6eq24} \e using $\xi_*:[(0,0,0,\epsilon_{21})]{\mathbin{\mathfrak m}}apsto[F]$ from Proposition \ref{dt10prop2}, the fact that $\xi$ is smooth of relative dimension $\mathop{\rm dim}\nolimits(\mathop{\rm Aut}(E)),$ and the properties of the Behrend function in Theorem \ref{dt3thm3}. Substituting \eq{dt6eq24} and its analogue for $D$ in the place of $F$ into \eq{dt6eq2.1}, using equation \eq{dt6eq5} and identity \eq{dt6eq7} to substitute for $\nu_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}}(E_1\oplus E_2)$, and cancelling factors of $(-1)^{\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)} $, one gets that \eq{dt6eq2.1} is equivalent to \eq{dt6eq25}, which concludes the proof. \subsection{Deformation invariance issue} {\mathbin{\mathfrak m}}arkboth{Deformation invariance issue}{Deformation invariance issue} \lambdabel{def} {\rm \kern.05em ind}ex{Picard scheme}{\rm \kern.05em ind}ex{Picard scheme!relative} Thomas' original definition \eq{dt2eq1} of {\rm \kern.05em ind}ex{Donaldson--Thomas invariants!original $DT^\alpha(\tildemesau)$} $DT^\alpha(\tildemesau)$, and Joyce and Song's definition \eq{dt4eq11} of the pair {\rm \kern.05em ind}ex{stable pair invariants $PI^{\alpha,n}(\tildemesau')$} invariants $PI^{\alpha,n}(\tildemesau')$, are both valid over ${\mathbin{\mathbb K}}$.
To solve problem (b) in \S\ref{main.1}, Joyce and Song suggest in \circte[Rmk 4.20 (e)]{JoSo} to replace $H^*(X;{\mathbin{\mathbb Q}})$ by the {\it algebraic de Rham cohomology\/}{\rm \kern.05em ind}ex{algebraic de Rham cohomology} $H^*_{\rm dR}(X)$ of Hartshorne \circte{Hart1}. {\rm \kern.05em ind}ex{cohomology} Here we suggest another argument which is based on the theory of {\it Picard schemes} by Grothendieck \circte{Grot4,Grot5}. Other references are \circte{Art,Kle}. Although our argument will not prove that the numerical Grothendieck groups are deformation invariant, as this last fact depends deeply on the integral Hodge conjecture type result \circte{Vois}, which we are not able to prove in this more general context, we will however find a deformation invariant lattice $\Lambdambda_{X_t}$ containing the image of the numerical Grothendieck group under the Chern character map, and define $\bar{DT}{}^\alpha(\tildemesau)_t$ for $\alpha\in \Lambdambda_{X_t},$ which will be deformation invariant. \nomenclature[Pic]{${\mathbin{\mathfrak m}}athop{\rm Pic}_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}/T}$}{relative Picard scheme of a family ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}\rightarrow T$} \nomenclature[Picc]{${\mathbin{\mathfrak m}}athop{\rm Pic}(X)$}{Picard scheme of a ${\mathbin{\mathbb K}}$-scheme} {\mathbin{\mathfrak m}}edskip To prove deformation-invariance we need to work not with a single Calabi--Yau 3-fold $X$ over ${\mathbin{\mathbb K}}$, but with a {\it family\/} of Calabi--Yau 3-folds ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}{\rm st}ackrel{\varphi}{\longrightarrow}T$ over a base ${\mathbin{\mathbb K}}$-scheme $T$. Taking $T={\mathbin{\mathfrak m}}athop{\rm Sp}\nolimitsec{\mathbin{\mathbb K}}$ recovers the case of one Calabi--Yau 3-fold. Here are our assumptions and notation for such families.
Let ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}{\rm st}ackrel{\varphi}{\longrightarrow}T$ be a smooth projective morphism of algebraic ${\mathbin{\mathbb K}}$-varieties ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}},T$, with $T$ connected. Let ${\mathbin{\cal O}}_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}(1)$ be a relative very ample line bundle for ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}{\rm st}ackrel{\varphi}{\longrightarrow}T$. For each $t\in T({\mathbin{\mathbb K}})$, write $X_t$ for the fibre ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}\tildemesimes_{\varphi,T,t}{\mathbin{\mathfrak m}}athop{\rm Sp}\nolimitsec{\mathbin{\mathbb K}}$ of $\varphi$ over $t$, and ${\mathbin{\cal O}}_{X_t}(1)$ for ${\mathbin{\cal O}}_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}(1)\vert_{X_t}$. Suppose that $X_t$ is a smooth Calabi--Yau 3-fold over ${\mathbin{\mathbb K}}$ for all $t\in T({\mathbin{\mathbb K}})$, with $H^1({\mathbin{\cal O}}_{X_t})= 0$. {\mathbin{\mathfrak m}}edskip There are some important existence theorems which refine Grothendieck's original theorem \circte[Thm. 3.1]{Grot4}. In \circte[Thm. 7.3]{Art}, Artin proves that if $f:X\rightarrow S$ is a flat, proper, finitely presented map of algebraic spaces, cohomologically flat in dimension zero, then the relative Picard scheme ${\mathbin{\mathfrak m}}athop{\rm Pic}_{X/S}$ exists as an algebraic space which is locally of finite presentation over $S$. Its fibres are the Picard schemes ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_s)$ of the fibres. They form a family whose total space is ${\mathbin{\mathfrak m}}athop{\rm Pic}_{X/S}.$ In \circte[Prop.
2.10]{Grot5} Grothendieck shows that if $H^2(X_s,{\mathbin{\cal O}}_{X_{s}})=0$ for some $s\in S,$ there exists a neighborhood $U$ of $s$ such that the scheme ${\mathbin{\mathfrak m}}athop{\rm Pic}_{{X/S}_{|_U}}$ is smooth, and in this case $\mathop{\rm dim}\nolimits({\mathbin{\mathfrak m}}athop{\rm Pic}(X_s))=\mathop{\rm dim}\nolimits(H^1(X_s,{\mathbin{\cal O}}_{X_s})).$ {\mathbin{\mathfrak m}}edskip In our case, ${\mathbin{\mathfrak m}}athop{\rm Pic}_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}/T}$ exists and is smooth with $0$-dimensional fibres which are the Picard schemes ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_t).$ Moreover the morphism $(\pi,P): {\mathbin{\mathfrak m}}athop{\rm Pic}_{{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}/T}\longrightarrow T\tildemesimes {\mathbin{\mathbb Q}}[s],$ where $\pi$ is the projection to the base scheme and $P$ assigns to an isomorphism class of a line bundle $[L]$ its Hilbert polynomial $P_L(s)$ with respect to ${\mathbin{\cal O}}_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}(1),$ is proper. This implies an upper semicontinuity result for $t{\mathbin{\mathfrak m}}apsto\mathop{\rm dim}\nolimits({\mathbin{\mathfrak m}}athop{\rm Pic}(X_t))$ \circte[Cor. 2.7]{Grot5}. These results yield that the Picard schemes ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_t)$ for $t\in T({\mathbin{\mathbb K}})$ are canonically isomorphic {\it locally} in $T({\mathbin{\mathbb K}})$. Observe that at the moment we do not have canonical isomorphisms ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_t)\cong {\mathbin{\mathfrak m}}athop{\rm Pic}(X)$ for all $t\in T({\mathbin{\mathbb K}})$ (this would mean canonical isomorphisms {\it globally} in $T({\mathbin{\mathbb K}})$). Instead, we mean that the groups ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_t)$ for $t\in T({\mathbin{\mathbb K}})$ form a {\it local system of abelian groups\/} over $T({\mathbin{\mathbb K}})$, with fibre ${\mathbin{\mathfrak m}}athop{\rm Pic}(X)$.
{\mathbin{\mathfrak m}}edskip When ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$, Joyce and Song proved \circte[\S 4]{JoSo} that $K^{\rm num}(\mathop{\rm coh}\nolimits(X_t))$ form a local system of abelian groups over $T({\mathbin{\mathbb K}})$, with fibre $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$. This means that in simply-connected regions of $T({\mathbin{\mathbb C}})$ in the complex analytic topology the $K^{\rm num}(\mathop{\rm coh}\nolimits(X_t))$ are all canonically isomorphic, and isomorphic to $K(\mathop{\rm coh}\nolimits(X))$. But around loops in $T({\mathbin{\mathbb C}})$, this isomorphism with $K(\mathop{\rm coh}\nolimits(X))$ can change by {\it monodromy},{\rm \kern.05em ind}ex{monodromy} by an automorphism ${\mathbin{\mathfrak m}}u:K(\mathop{\rm coh}\nolimits(X))\rightarrow K(\mathop{\rm coh}\nolimits(X))$ of $K(\mathop{\rm coh}\nolimits(X))$. In \circte[Thm 4.21]{JoSo} they showed that the group of such monodromies ${\mathbin{\mathfrak m}}u$ is finite, and so it is possible to make it trivial by passing to a finite cover $\tildemesi T$ of $T$. If they worked instead with invariants $PI^{P,n}(\tildemesau')$ counting pairs $s:{\mathbin{\cal O}}_{X}(-n)\rightarrow E$ in which $E$ has fixed Hilbert polynomial{\rm \kern.05em ind}ex{Hilbert polynomial} $P$, rather than fixed class $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$, as in Thomas' original definition of Donaldson--Thomas invariants \circte{Thom}, then they could drop the assumption on $K^{\rm num}(\mathop{\rm coh}\nolimits(X_t))$ in Theorem \circte[Thm. 5.25]{JoSo}. {\mathbin{\mathfrak m}}edskip Similarly, we now study monodromy phenomena for ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_t)$ in families of smooth ${\mathbin{\mathbb K}}$-schemes ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}\rightarrow T$ following the idea of \circte[Thm. 4.21]{JoSo}. We find that we can always eliminate such monodromy by passing to a finite cover $\tildemesi T$ of $T$. 
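To fix conventions for the theorem below (a standard reformulation): the local system $t\mapsto \mathop{\rm Pic}\nolimits(X_t)$ on $T$ with fibre $\mathop{\rm Pic}\nolimits(X_s)$ is classified by its monodromy representation
$$\rho:\pi_1\bigl(T,s\bigr)\longrightarrow \mathop{\rm Aut}\nolimits\bigl(\mathop{\rm Pic}\nolimits(X_s)\bigr),$$
where $\pi_1$ denotes the topological fundamental group when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$ and the \'etale fundamental group in general, and the monodromy group of $\mathop{\rm Pic}\nolimits(X_s)$ is the image $\Gamma=\rho\bigl(\pi_1(T,s)\bigr)$.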
This is crucial to prove deformation-invariance of the~$\bar{DT}{}^\alpha(\tildemesau),PI^{\alpha,n}(\tildemesau')$ in \circte[\S 12]{JoSo}. \betagin{thm} Let\/ ${\mathbin{\mathbb K}}$ be an algebraically closed field of characteristic zero, $\varphi:{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}\rightarrow T$ a smooth projective morphism of\/ ${\mathbin{\mathbb K}}$-schemes with\/ $T$ connected, and\/ ${\mathbin{\cal O}}_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}(1)$ a relative very ample line bundle on ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}},$ so that for each\/ $t\in T({\mathbin{\mathbb K}}),$ the fibre $X_t$ of\/ $\varphi$ is a smooth projective ${\mathbin{\mathbb K}}$-scheme with very ample line bundle\/ ${\mathbin{\cal O}}_{X_t}(1)$. Suppose the Picard schemes ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_t)$ are locally constant in $T({\mathbin{\mathbb K}}),$ so that\/ $t{\mathbin{\mathfrak m}}apsto {\mathbin{\mathfrak m}}athop{\rm Pic}(X_t)$ is a local system of abelian groups on\/~$T$. Fix a base point\/ $s\in T({\mathbin{\mathbb K}}),$ and let\/ $\Gamma$ be the monodromy group of ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_s).$ Then $\Gamma$ is a finite group. 
There exists a finite \'etale cover $\pi:\tildemesi T\rightarrow T$ of degree ${\mathbin{\mathfrak m}}d{\Gamma},$ with\/ $\tildemesi T$ a connected ${\mathbin{\mathbb K}}$-scheme, such that writing $\tildemesi {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}={{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}\tildemesimes_T\tildemesi T$ and $\tildemesi\varphi:\tildemesi {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}\rightarrow\tildemesi T$ for the natural projection, with fibre $\tildemesi X_{\tildemesi t}$ at $\tildemesi t\in\tildemesi T({\mathbin{\mathbb K}}),$ then ${\mathbin{\mathfrak m}}athop{\rm Pic}(\tildemesi X_{\tildemesi t})$ for all $\tildemesi t\in\tildemesi T({\mathbin{\mathbb K}})$ are all globally canonically isomorphic to ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_s)$. That is, the local system $\tildemesi t{\mathbin{\mathfrak m}}apsto {\mathbin{\mathfrak m}}athop{\rm Pic}(\tildemesi X_{\tildemesi t})$ on $\tildemesi T$ is trivial. \lambdabel{finmon} \end{thm} \betagin{proof} As ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_s)$ is finitely generated, one can choose classes $[L_1],\ldots, [L_k]\in {\mathbin{\mathfrak m}}athop{\rm Pic}(X_s)$ as generators. Let $P_1,\ldots, P_k$ be the Hilbert polynomials respectively of $[L_1],\ldots, [L_k]$ with respect to ${\mathbin{\cal O}}_{X_s}(1)$. Let $\gamma\in\Gamma$, and consider the images $\gamma\cdot [L_i] \in {\mathbin{\mathfrak m}}athop{\rm Pic}(X_s)$ for $i=1,\ldots,k$. As we assume ${\mathbin{\cal O}}_{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}}(1)$ is globally defined on $T$ and does not change under monodromy, it follows that the Hilbert polynomials $P_1,\ldots, P_k$ do not change under monodromy. Hence $\gamma\cdot [L_i]$ has Hilbert polynomial $P_i$. 
Again one uses properness to show that, for each $i=1,\ldots, k,$ the set ${\mathbin{\mathfrak m}}athop{\rm Pic}^{P_i}(X_s)$ consisting of isomorphism classes of line bundles in ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_s)$ with Hilbert polynomial $P_i$ is finite, that is, every $P_i$ is the Hilbert polynomial of only finitely many classes $[R_1],\ldots,[R_{n_i}]$ in ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_s).$ It follows that for each $\gamma\in\Gamma$ we have $\gamma\cdot [L_i]\in\{[R_1],\ldots,[R_{n_i}]\}$. So there are at most $n_1\cdots n_k$ possibilities for $(\gamma\cdot [L_1],\ldots, \gamma\cdot [L_k])$. But $(\gamma\cdot [L_1],\ldots,\gamma\cdot [L_k])$ determines $\gamma$ as $[L_1],\ldots, [L_k]$ generate ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_s)$. Hence ${\mathbin{\mathfrak m}}d{\Gamma}\leqslant\nobreak n_1\cdots n_k$, and $\Gamma$ is finite. We can now construct an \'etale cover $\pi:\tildemesi T\rightarrow T$ which is a principal $\Gamma$-bundle, and so has degree ${\mathbin{\mathfrak m}}d{\Gamma}$, such that the ${\mathbin{\mathbb K}}$-points of $\tildemesi T$ are pairs $(t,\iota)$ where $t\in T({\mathbin{\mathbb K}})$ and $\iota:{\mathbin{\mathfrak m}}athop{\rm Pic}(X_t)\rightarrow {\mathbin{\mathfrak m}}athop{\rm Pic}(X_s)$ is an isomorphism arising from the properness and smoothness argument above, and $\Gamma$ acts freely on $\tildemesi T({\mathbin{\mathbb K}})$ by $\gamma:(t,\iota){\mathbin{\mathfrak m}}apsto (t,\gamma\circ\iota)$, so that the $\Gamma$-orbits correspond to points $t\in T({\mathbin{\mathbb K}})$. Then for $\tildemesi t=(t,\iota)$ we have $\tildemesi X_{\tildemesi t}=X_t$, with canonical isomorphism~$\iota:{\mathbin{\mathfrak m}}athop{\rm Pic}(\tildemesi X_{\tildemesi t})\rightarrow {\mathbin{\mathfrak m}}athop{\rm Pic}(X_s)$.
\end{proof} {\rm \kern.05em ind}ex{Grothendieck group!numerical}{\rm \kern.05em ind}ex{monodromy} {\rm \kern.05em ind}ex{cohomology}{\rm \kern.05em ind}ex{Picard scheme} So the conclusion is that, by the properness and smoothness arguments above, the ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_t)$ are canonically isomorphic locally in $T({\mathbin{\mathbb K}})$. But by Theorem \ref{finmon}, one can pass to a finite cover $\tildemesi T$ of $T$, so that the ${\mathbin{\mathfrak m}}athop{\rm Pic}(\tildemesi X_{\tildemesi t})$ are canonically isomorphic globally in $\tildemesi T({\mathbin{\mathbb K}})$. So, replacing ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}},T$ by $\tildemesi {{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak X}},\tildemesi T$, we will assume from here on that the Picard schemes ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_t)$ for $t\in T({\mathbin{\mathbb K}})$ are all canonically isomorphic globally in $T({\mathbin{\mathbb K}})$, and we write ${\mathbin{\mathfrak m}}athop{\rm Pic}(X)$ for this group ${\mathbin{\mathfrak m}}athop{\rm Pic}(X_t)$ up to canonical isomorphism. {\mathbin{\mathfrak m}}edskip In \circte[Thm. 4.19]{JoSo} Joyce and Song showed that when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$ and $H^1({\mathbin{\cal O}}_X)=0$ the numerical Grothendieck group $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is unchanged under small deformations of $X$ up to canonical isomorphism. As we said, here we will not prove this result. So, the idea is to construct a globally constant lattice $\Lambdambda_{X}$ using the global constancy of the Picard schemes, such that there exists an inclusion $K^{\rm num}(\mathop{\rm coh}\nolimits(X)){{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra\Lambdambda_{X}.$ It could happen that the image of the numerical Grothendieck group varies with $t$ as it has to do with the integral Hodge conjecture as in \circte[Thm.
4.19]{JoSo}, but this does not affect the deformation invariance of $\bar{DT}{}^\alpha(\tildemesau)$, since for them to be deformation invariant it is enough to find a deformation invariant lattice in which the classes $\alpha$ vary. Next, we describe such a lattice $\Lambdambda_X$ and explain how the numerical Grothendieck group $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is contained in it. Our idea follows \circte[Thm. 4.19]{JoSo}. {\mathbin{\mathfrak m}}edskip{\rm \kern.05em ind}ex{Chern character} Let $X$ be a Calabi--Yau 3-fold over ${\mathbin{\mathbb K}}$, with $H^1({\mathbin{\cal O}}_X)=0$, and consider the {\it Chern character}, as in Hartshorne \circte{Hart2}: for each $E\in\mathop{\rm coh}\nolimits(X)$ we have the rank $r(E)\in A^0(X)\cong {\mathbin{\mathbb Z}},$ and the Chern classes $c_i(E)\in A^{i}(X)$ for $i=1,2,3$. It is useful to organize these into the Chern character ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimits(E)$\nomenclature[ch(E)]{${\mathbin{\mathfrak m}}athop{\rm ch}\nolimits(E)$}{Chern character of a coherent sheaf $E$} \nomenclature[chi(E)]{${\mathbin{\mathfrak m}}athop{\rm ch}\nolimits_i(E)$}{$i^{\rm th}$ component of Chern character of $E$} in $A^{*}(X)_{\mathbin{\mathbb Q}}$, \nomenclature[Heven(X)]{$H_{dR}^{\rm even}(X)$}{even cohomology of a ${\mathbin{\mathbb K}}$-scheme $X$} where ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimits(E)={\mathbin{\mathfrak m}}athop{\rm ch}\nolimits_0(E)+{\mathbin{\mathfrak m}}athop{\rm ch}\nolimits_1(E)+{\mathbin{\mathfrak m}}athop{\rm ch}\nolimits_2(E)+{\mathbin{\mathfrak m}}athop{\rm ch}\nolimits_3(E)$ with ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimits_i(E)\in A^{i}(X)_{\mathbin{\mathbb Q}}:$ \e {\mathbin{\mathfrak m}}athop{\rm ch}\nolimits_0(E)=r(E),\quad {\mathbin{\mathfrak m}}athop{\rm ch}\nolimits_1(E)=c_1(E),\quad {\mathbin{\mathfrak m}}athop{\rm ch}\nolimits_2(E)=\tildemesextstyle{\rm fr}ac{1}{2}\bigl(c_1(E)^2-2c_2(E)\bigr),\quad {\mathbin{\mathfrak m}}athop{\rm ch}\nolimits_3(E)=\tildemesextstyle{\rm fr}ac{1}{6}\bigl(c_1(E)^3-3c_1(E)c_2(E)+3c_3(E)\bigr). \lambdabel{chernch} \e By the Hirzebruch--Riemann--Roch Theorem{\rm \kern.05em ind}ex{Hirzebruch--Riemann--Roch Theorem} \circte[Th.~A.4.1]{Hart2}, the Euler form{\rm \kern.05em ind}ex{Euler form} on coherent sheaves $E,F$ is given in terms of their Chern characters by \e \bar{\mathbin{\mathfrak m}}athop{\rm ch}\nolimitsi\bigl([E],[F]\bigr)={\rm d}eltag\bigl({\mathbin{\mathfrak m}}athop{\rm ch}\nolimits(E)^\vee\cdot{\mathbin{\mathfrak m}}athop{\rm ch}\nolimits(F)\cdot{\rm td}(TX)\bigr){}_3, \lambdabel{euform} \e where $(\cdot)_3$ denotes the component of degree $3$ in $A^*(X)_{\mathbin{\mathbb Q}}$ and where ${\rm td}(TX)$\nomenclature[td(TX)]{${\rm td}(TX)$}{Todd class of $TX$ in $H_{dR}^{\rm even}(X)$} is the {\it Todd class\/}{\rm \kern.05em ind}ex{Todd class} of $TX$, which is $1+{\rm fr}ac{1}{12}c_2(TX)$ as $X$ is a Calabi--Yau 3-fold, and $(\lambda_0,\lambda_1,\lambda_2,\lambda_3)^\vee=(\lambda_0,-\lambda_1,\lambda_2,-\lambda_3)$, writing $(\lambda_0,\ldots,\lambda_3)\in A^{*}(X)$ with $\lambda_i\in A^{i}(X)$.
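For the reader's convenience, we record a standard consequence of Hirzebruch--Riemann--Roch on a Calabi--Yau 3-fold. Since $c_1(TX)=0$, the Todd class truncates as above, and for any $E\in\mathop{\rm coh}\nolimits(X)$
$$\chi(X,E)=\deg\bigl(\mathop{\rm ch}\nolimits(E)\cdot{\rm td}(TX)\bigr)_3=\mathop{\rm ch}\nolimits_3(E)+\frac{1}{12}\,c_1(E)\,c_2(TX)\in\mathbb{Z}.$$
In particular, for a line bundle $L$ this reads
$$\chi(X,L)=\frac{1}{6}\,c_1(L)^3+\frac{1}{12}\,c_1(L)\,c_2(TX)\in\mathbb{Z},$$
which is the source of the integrality condition on $\lambda_3$ in the definition of the lattice below.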
Define: \betagin{equation*} \Lambda_X=\tildemesextstyle\bigl\{ (\lambda_0,\lambda_1,\lambda_2,\lambda_3) \tildemesextrm{ where } \lambda_0,\lambda_3\in{\mathbin{\mathbb Q}}, \; \lambda_1\in{\mathbin{\mathfrak m}}athop{\rm Pic}(X)\otimes_{{\mathbin{\mathbb Z}}}{\mathbin{\mathbb Q}}, \; \lambda_2\in \mathop{\rm Ho}\nolimitsm({\mathbin{\mathfrak m}}athop{\rm Pic}(X),{\mathbin{\mathbb Q}}) \tildemesextrm{ such that } \end{equation*} \betagin{equation*} \lambda_0\in {\mathbin{\mathbb Z}},\;\> \lambda_1\in {\mathbin{\mathfrak m}}athop{\rm Pic}(X)/ {\tildemesextrm{torsion}}, \; \lambda_2-{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}a\lambda_1^2\in \mathop{\rm Ho}\nolimitsm({\mathbin{\mathfrak m}}athop{\rm Pic}(X),{\mathbin{\mathbb Z}}),\;\> \lambda_3+\tildemesextstyle{\rm fr}ac{1}{12}\lambda_1 c_2(TX)\in {\mathbin{\mathbb Z}}\bigr\}, \end{equation*} \setminusallskip where $\lambda_1^2$ is defined as the map $\alpha\in{\mathbin{\mathfrak m}}athop{\rm Pic}(X)\rightarrow {\rm fr}ac{1}{2}c_1(\lambda_1)\cdot c_1(\lambda_1)\cdot c_1(\alpha)\in A^3(X)_{{\mathbin{\mathbb Q}}}\cong {\mathbin{\mathbb Q}},$ and ${\rm fr}ac{1}{12}\lambda_1 c_2(TX)$ is defined as ${\rm fr}ac{1}{12}c_1(\lambda_1)\cdot c_2(TX)\in A^3(X)_{{\mathbin{\mathbb Q}}}\cong{\mathbin{\mathbb Q}}.$ Theorem \ref{definv} states that $\Lambda_X$ is deformation invariant and the Chern character gives an injective morphism ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimits:K^{\rm num}(\mathop{\rm coh}\nolimits(X))\!{{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak h}}ookra\!\Lambda_X$. {\rm \kern.05em ind}ex{Picard scheme} {\rm \kern.05em ind}ex{deformation invariance} The proof of Theorem \ref{definv} is straightforward: \betagin{proof} The proof follows exactly as in \circte[Thm. 4.19]{JoSo}: the fact that the Picard scheme ${\mathbin{\mathfrak m}}athop{\rm Pic}(X)$ is globally constant in families, by the argument above, yields that the lattice $\Lambdambda_X$ is deformation invariant.
Moreover, the proof that ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimits\bigl(K^{\rm num}(\mathop{\rm coh}\nolimits(X))\bigr)\subseteq\Lambda_X$ is again as in \circte[Thm. 4.19]{JoSo}. Observe that we do not prove that ${\mathbin{\mathfrak m}}athop{\rm ch}\nolimits\bigl(K^{\rm num}(\mathop{\rm coh}\nolimits(X))\bigr)=\Lambda_X,$ a fact which relies on Voisin's proof of the integral Hodge conjecture for Calabi--Yau 3-folds over ${\mathbin{\mathbb C}}$ \circte{Vois}. \end{proof} \betagin{quest} Does Voisin's result \circte{Vois} work over ${\mathbin{\mathbb K}}$ in terms of $\mathop{\rm Ho}\nolimitsm({\mathbin{\mathfrak m}}athop{\rm Pic}(X),{\mathbin{\mathbb Z}})$? \end{quest} This concludes the discussion of problem (b) in \S\ref{main.1} and yields the deformation-invariance of $DT^\alpha(\tildemesau),$ $PI^{\alpha,n}(\tildemesau')$ over ${\mathbin{\mathbb K}}.$ \section{Implications and conjectures} {\mathbin{\mathfrak m}}arkboth{Implications and conjectures}{Implications and conjectures} \lambdabel{dt7}{\rm \kern.05em ind}ex{non-Archimedean geometry}{\rm \kern.05em ind}ex{formal power series} In this final section we sketch some exciting and far-reaching implications of the theory and propose new ideas for further research. One proposal is in the direction of extending Donaldson--Thomas invariants to compactly supported coherent sheaves on noncompact quasi-projective Calabi--Yau 3-folds. A second idea, in the derived categorical framework, is to try to establish a theory of generalized Donaldson--Thomas invariants for objects in the derived category of coherent sheaves. Here we present the problems and illustrate some possible approaches, when known. \subsection{Noncompact Calabi--Yau 3-folds} We start by recalling the following definition from \circte[Def.
6.27]{JoSo}: \betagin{dfn} Let $X$ be a noncompact Calabi-Yau $3$-fold over ${\mathbin{\mathbb C}}.$ We call $X$ compactly embeddable if whenever $K \subset X$ is a compact subset, in the analytic topology, there exists an open neighbourhood $U$ of $K$ in $X$ in the analytic topology, a compact Calabi-Yau 3-fold $Y$ over ${\mathbin{\mathbb C}}$ with $H^1({\mathbin{\cal O}}_Y ) = 0,$ an open subset $V$ of $Y$ in the analytic topology, and an isomorphism of complex manifolds $\varphi : U \rightarrow V.$ \end{dfn} Joyce and Song only need the notion of `compactly embeddable' as their complex analytic proof of \eq{dt6eq1}--\eq{dt6eq2}, recalled in \S\ref{dtBehid}, requires $X$ compact; but unfortunately the algebraic version of \eq{dt6eq1}--\eq{dt6eq2} given in Theorem \ref{dt6thm1.1} uses results from derived algebraic geometry \circte{ToVe1,ToVe2,PTVV,Toen,Toen2,Toen3,Toen4}, and the author does not know whether they also apply to compactly supported sheaves {\rm \kern.05em ind}ex{coherent sheaf!compactly supported} on a noncompact~$X$.{\rm \kern.05em ind}ex{field ${\mathbin{\mathbb K}}$|)} {\rm \kern.05em ind}ex{Calabi--Yau 3-fold!noncompact} {\mathbin{\mathfrak m}}edskip More precisely, in \circte{PTVV} it is shown that if $X$ is a projective Calabi-Yau $m$-fold then the derived moduli stack ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}_{{\mathbin{\mathfrak m}}athop{\rm Per}f(X)}$ of perfect complexes of coherent sheaves on $X$ is $(2-m)$-shifted symplectic. It is not obvious that if $X$ is a quasi-projective Calabi-Yau $m$-fold, possibly noncompact, then the derived moduli stack ${{\mathbin{\mathfrak m}}athbin{{\mathbin{\mathfrak m}}athfrak M}}_{{\mathbin{\mathfrak m}}athop{\rm Per}f_{cs}(X)}$ of perfect complexes on $X$ with compactly-supported cohomology is also $(2-m)$-shifted symplectic. {\mathbin{\mathfrak m}}edskip At present, we can state the following result. We thank Bertrand To\"en for explaining this to us.
\begin{thm} Suppose $Z$ is smooth projective of dimension $m,$ and $s \in H^0(K_Z^{-1}),$ and $X \subset Z$ is Zariski open with $s$ nonvanishing on $X,$ so that $X$ is a (generally noncompact) quasi-projective Calabi--Yau $m$-fold. Then the derived moduli stack ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}_{cs}}(X)$ of compactly-supported coherent sheaves on $X,$ or of perfect complexes on $X$ with compactly-supported cohomology, is $(2-m)$-shifted symplectic. \label{noncpt} \end{thm} \begin{proof} Let $Z$ be smooth and projective of dimension $m,$ and $s$ be any section of $K_Z^{-1}$. Let $Y$ be the derived scheme of zeros of $s$ and $X=Z\setminus Y.$ Then $Y$ is equipped with a canonical $O$-orientation in the sense of \cite{PTVV} of dimension $m-1,$ so ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}}(Y)$ is $(2-(m-1))$-shifted symplectic, even if $Y$ is not smooth. The restriction map ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}}(Z) \rightarrow {\mathbin{\mathfrak M}}_{\mathop{\rm Perf}}(Y)$ is moreover Lagrangian.
The map $\ast \rightarrow {\mathbin{\mathfrak M}}_{\mathop{\rm Perf}}(Y),$ corresponding to the zero object, is \'etale, and thus its pull-back along the restriction map provides a Lagrangian map ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}_{cs}}(X) \rightarrow\ast$, or, equivalently, a $(2-m)$-shifted symplectic structure on ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}_{cs}}(X),$ as the intersection of two Lagrangians lowers the shift by one. Now if $X'$ is open in $X,$ then ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}_{cs}}(X') \rightarrow {\mathbin{\mathfrak M}}_{\mathop{\rm Perf}_{cs}}(X)$ is an open immersion, so ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}_{cs}}(X')$ is also $(2-m)$-shifted symplectic. \end{proof} We remark the following: \begin{itemize} \item[$(a)$] We point out that the condition of Theorem \ref{noncpt} is similar to the compactly-embeddable condition in \cite[Def. 6.27]{JoSo}, but more general, as we do not require $Z$ to be a Calabi--Yau. \smallskip \item[$(b)$] Observe that in the noncompact case we cannot expect the deformation invariance property, except in some particular cases in which the moduli space is proper. \smallskip \item[$(c)$] Note that we need the noncompact Calabi--Yau to be quasi-projective in order to have a quasi-projective Quot scheme \cite[Thm. 6.3]{NN}. \end{itemize} We conclude the section with the following: \begin{conj}The theory of generalized Donaldson--Thomas invariants defined in \cite{JoSo} is valid over algebraically closed fields of characteristic zero for compactly supported coherent sheaves on noncompact quasi-projective Calabi--Yau 3-folds.
In this last case, one can define $\bar{DT}{}^\alpha(\tau),$ prove the wall-crossing formulae, and show that the relation with $PI^{\alpha,n}(\tau')$ is still valid, while one loses the deformation invariance property and the properness of moduli spaces. \end{conj} \subsection{Derived categorical framework} \label{dercat} Our algebraic method could lead to the extension of generalized Donaldson--Thomas theory to the derived categorical context. The plan to extend the theory of Joyce and Song \cite{JoSo} from abelian to derived categories starts by reinterpreting the series of papers by Joyce \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7,Joyc.8} in this new general setup. In particular: \smallskip \begin{itemize} \item[$(a)$] Defining configurations in triangulated categories $\mathcal{T}$ requires replacing the exact sequences by distinguished triangles. \smallskip \item[$(b)$] Constructing moduli stacks of objects and configurations in $\mathcal{T}$. Again, the theory of derived algebraic geometry \cite{Toen,Toen2,Toen3,Toen4,ToVe1,ToVe2,PTVV} can give us a satisfactory answer. \smallskip \item[$(c)$] Defining stability conditions on triangulated categories can be approached using Bridgeland's results, and their extension by Gorodentsev et al., which combines Bridgeland's idea with Rudakov's definition for abelian categories \cite{Rud}. Since Joyce's stability conditions \cite{Joyc.3} are based on Rudakov's, the modifications should be straightforward. \smallskip \item[$(d)$] The `nonfunctoriality of the cone' in triangulated categories means that the triangulated category versions of some operations on configurations are defined up to isomorphism, but not canonically, so that the corresponding diagrams may be commutative, but not Cartesian as in the abelian case.
In particular, one loses the associativity of the Ringel--Hall algebra of stack functions, which is a crucial object in the Joyce--Song framework. We expect that the derived Hall algebra approach of To\"en \cite{Toen3} resolves this issue. See also \cite{PL}. \end{itemize} \smallskip The items in the list above do not represent a serious difficulty. The main issues actually are: proving the existence of Bridgeland stability conditions (or stability conditions of another type) on the derived category; proving that semistable moduli schemes and stacks are of finite type (permissible); and proving that two stability conditions can be joined by a path of permissible stability conditions. \smallskip Theorem \ref{dt6thm1.1.bis} is just one of the steps in developing this program. The author thus expects that a well-behaved theory of invariants counting $\tau$-semistable objects in triangulated categories, in the style of Joyce's theory, exists; that is, Theorem \ref{mainthm} should be valid also in the derived categorical context: \begin{conj} The theory of generalized Donaldson--Thomas invariants defined in \cite{JoSo} is valid for complexes of coherent sheaves on Calabi--Yau $3$-folds over algebraically closed fields of characteristic zero. \end{conj} \small{ \begin{thebibliography}{99} \addcontentsline{toc}{section}{References} \bibitem{Alu} P. Aluffi, {\it Weighted Chern-Mather classes and Milnor classes of hypersurfaces}, in Singularities--Sapporo 1998, volume~29 of Adv. Stud. Pure Math., pages 1--20. Kinokuniya, Tokyo, 2000. \bibitem{Alu1} P. Aluffi, {\it Limits of Chow groups, and a new construction of Chern--Schwartz--MacPherson classes,} Pure Appl. Math. Q. 2 (2006), no. 4, part 2, 915--941. \bibitem{Art} M. Artin, {\it Algebraization of formal moduli: I}, in ``Global Analysis, papers in honor of K. Kodaira,'' Princeton U. Press, 1969, pp. 21--71. \bibitem{Behr} K. Behrend, {\it Donaldson--Thomas type invariants via microlocal geometry}, Ann. of Math. 170 (2009), 1307--1338.
math.AG/0507523. \bibitem{BeFa} K. Behrend and B. Fantechi, {\it The intrinsic normal cone}, Invent. Math. 128 (1997), 45--88. alg-geom/9601010. \bibitem{BeFa2} K. Behrend and B. Fantechi, {\it Symmetric obstruction theories and Hilbert schemes of points on threefolds}, Algebra and Number Theory 2 (2008), 313--345, math.AG/0512556. \bibitem{BBS} K. Behrend, J. Bryan and B. Szendr\H oi, {\it Motivic degree zero Donaldson--Thomas invariants}, Invent. Math. 192 (2013), 111--160. arXiv:0909.5088. \bibitem{GeBaViLa} K. Behrend and B. Fantechi, {\it Gerstenhaber and Batalin--Vilkovisky structures on Lagrangian intersections}, appeared in the Manin Festschrift. \bibitem{BeGe} K. Behrend and E. Getzler, {\it On holomorphic Chern-Simons functional}, preprint. \bibitem{BeHua} K. Behrend and Zheng Hua, {\it Derived moduli space of complexes of coherent sheaves}, not yet appeared. \bibitem{BBBJ} O. Ben-Bassat, C. Brav, V. Bussi, and D. Joyce, {\it A `Darboux Theorem' for shifted symplectic structures on derived Artin stacks, with applications}, arXiv:1312.0090, 2013. \bibitem{BBDJS} C. Brav, V. Bussi, D. Dupont, D. Joyce, and B. Szendr\H oi, {\it Symmetries and stabilization for sheaves of vanishing cycles}, arXiv:1211.3259, 2012. \bibitem{BBJ} C. Brav, V. Bussi and D. Joyce, {\it A Darboux theorem for derived schemes with shifted symplectic structure}, arXiv:1305.6302, 2013. \bibitem{Brid1} T. Bridgeland, {\it Stability conditions on triangulated categories}, Ann. of Math. 166 (2007), 317--345. math.AG/0212237. \bibitem{Brid2} T. Bridgeland, {\it Hall algebras and curve-counting invariants}, arXiv:1002.4374, 2010. \bibitem{Brid3} T. Bridgeland, {\it An introduction to motivic Hall algebras}, arXiv:1002.4372, 2010. \bibitem{BJM} V. Bussi, D. Joyce and S. Meinhardt, {\it On motivic vanishing cycles of critical loci}, arXiv:1305.6428, 2013. \bibitem{Buss} V.
Bussi, {\it Categorification of Lagrangian intersections on complex symplectic manifolds using perverse sheaves of vanishing cycles}, in preparation, 2013. \bibitem{ChangLi} H-L. Chang and J. Li, {\it Semi-Perfect Obstruction theory and DT Invariants of Derived Objects}, arXiv:1105.3261, 2011. \bibitem{DeLo} J. Denef, F. Loeser, {\it Geometry on arc spaces of algebraic varieties}, arXiv:math/0006050. \bibitem{Dimc} A. Dimca, {\it Sheaves in Topology}, Universitext, Springer-Verlag, Berlin, 2004. \bibitem{DiSz} A. Dimca and B. Szendr\H oi, {\it The Milnor fibre of the Pfaffian and the Hilbert scheme of four points on ${\mathbin{\mathbb C}}^3$}, Math. Res. Lett. 16 (2009) 1037--1055. arXiv:0904.2419. \bibitem{DoTh} S.K. Donaldson and R.P. Thomas, {\it Gauge Theory in Higher Dimensions}, Chapter 3 in S.A. Huggett, L.J. Mason, K.P. Tod, S.T. Tsou and N.M.J. Woodhouse, editors, {\it The Geometric Universe}, Oxford University Press, Oxford, 1998. \bibitem{Du1} A. Dubson, {\it Classes caract\'eristiques des vari\'et\'es singuli\'eres,} C. R. Acad. Sci. Paris S\'er. A--B 287 (1978), no. 4, A237--A240. \bibitem{Du2} A. Dubson, {\it Formule pour l'indice des complexes constructibles et D-modules holonomes}, C. R. Acad. Sci. Paris 298 (1984), 113--116. \bibitem{Du3} A. Dubson, {\it Calcul des invariants num\'eriques des singularit\'es et applications}, preprint, Univ. of Bonn (1981). \bibitem{EG} D.Edidin, W.Graham, {\it Equivariant intersection theory,} Invent. Math. 131 (1998), no. 3, 595--634. \bibitem{Eise} D. Eisenbud, {\it Commutative algebra}, Graduate Texts in Math. 150, Springer-Verlag, New York, 1995. \bibitem{FrMo} R. Friedman and J.W. Morgan, {\it Smooth four-manifolds and complex surfaces}, Ergeb. der Math. und ihrer Grenzgebiete 27, Springer-Verlag, 1994. \bibitem{Fult} W. Fulton, {\it Intersection Theory}, Ergeb. Math. und ihrer Grenzgebiete 2, Springer-Verlag, Berlin, 1984. \bibitem{FuMacP1}W. Fulton and R. 
MacPherson, {\it Intersecting cycles on an algebraic variety}, Real and complex singularities (Proc. Ninth Nordic Summer School/NAVF Sympos. Math., Oslo, 1976) ed. P. Holm, Sijthoff and Noordhoff, Alphen aan den Rijn, 1977, pp. 179--197. \bibitem{FuMacP2}W. Fulton and R. MacPherson, {\it Defining algebraic intersections}, in Algebraic Geometry, Proceedings, Tromso, Norway, 1977, Springer Lecture Notes 687 (1978), 1--30. \bibitem{FuMacP3}W. Fulton and R. MacPherson, {\it Categorical Framework for the Study of Singular Spaces }, Memoirs of the American Mathematical Society 243, 1981. \bibitem{GeMa} S.I. Gelfand and Y.I. Manin, {\it Methods of Homological Algebra}, second edition, Springer-Verlag, Berlin, 2003. \bibitem{Gil} W.D. Gillam, {\it Deformation of quotients on a product}, http://www.math.brown.edu/~wgillam/quot.pdf \bibitem{Gin} V. Ginsburg, {\it Characteristic varieties and vanishing cycles}, Invent. Math., 84; p.327--402, 1986. \bibitem{Gome} T.L. G\'omez, {\it Algebraic stacks}, Proc. Indian Acad. Sci. Math. Sci. 111 (2001), 1--31, math.AG/9911199. \bibitem{GS} G. Gonzalez--Sprinberg, {\it L’obstruction locale d’Euler et le th\'eor\'eme de MacPherson}, Ast\'erisque 82--83 (1981), 7--32. \bibitem{GP} T. Graber and R. Pandharipande, {\it Localization of virtual classes}, Invent. Math., 135(2): 487--518, 1999. \bibitem{GrHa} P. Griffiths and J. Harris, {\it Principles of Algebraic Geometry}, Wiley-Interscience, New York, 1978. \bibitem{Grot1} A. Grothendieck, {\it Elements de G\'eom\'etrie Alg\'ebrique}, part I Publ. Math. IHES 4 (1960), part II Publ. Math. IHES 8 (1961), part III Publ. Math. IHES 11 (1960) and 17 (1963), and part IV Publ. Math. IHES 20 (1964), 24 (1965), 28 (1966) and 32 (1967). \bibitem{Grot2} A. Grothendieck, {\it Rev\^etements \'Etales et Groupe Fondamental (SGA1)}, Springer Lecture Notes 224, Springer-Verlag, Berlin, 1971. \bibitem{Grot3} A. 
Grothendieck, {\it Techniques de construction et th\'eor\'emes d’existence en g\'eom\'etrie alg\'ebrique IV: Les sch\'emas de Hilbert}, S\'eminaire Bourbaki, 1960/61, no. 221. \bibitem{Grot4} A. Grothendieck, {\it Technique de descente et th\'eor\'emes d'existence en g\'eom\'etrie alg\'ebrique V : Les sch\'emas de Picard. Th\'eor\'emes d'existence}, S\'eminaire Bourbaki, 1961/62, no. 232. \bibitem{Grot5} A. Grothendieck, {\it Technique de descente et th\'eor\'emes d'existence en g\'eom\'etrie alg\'ebrique VI: Les sch\'emas de Picard. Propri\'et\'es g\'en\'erales}, S\'eminaire Bourbaki, 1961/62, no. 236. \bibitem{Hart1} R. Hartshorne, {\it On the de Rham cohomology of algebraic varieties}, Publ. Math. I.H.E.S. 45 (1975), 5--99. \bibitem{Hart2} R. Hartshorne, {\it Algebraic Geometry}, Graduate Texts in Math. 52, Springer, New York, 1977. \bibitem{Hua} Z. Hua, {\it Chern--Simons functions on toric Calabi--Yau threefolds and Donaldson--Thomas theory}, arXiv:1103.1921, 2011 \bibitem{Huyb} D. Huybrechts, {\it Fourier--Mukai transforms in Algebraic Geometry}, Oxford University Press, Oxford, 2006. \bibitem{HuLe2} D. Huybrechts and M. Lehn, {\it The geometry of moduli spaces of sheaves}, Aspects of Math. E31, Vieweg, Braunschweig/Wiesbaden, 1997. \bibitem{HuTh} D. Huybrechts and R.P. Thomas, {\it Deformation-obstruction theory for complexes via Atiyah and Kodaira--Spencer classes}, arXiv:0805.3527, 2008. \bibitem{Illu1} L. Illusie, {\it Complexe cotangent et d\'eformations. I}, Springer Lecture Notes in Math. 239, Springer-Verlag, Berlin, 1971. \bibitem{Illu2} L. Illusie, {\it Cotangent complex and deformations of torsors and group schemes}, pages 159--189 in Springer Lecture Notes in Math. 274, Springer-Verlag, Berlin, 1972. \bibitem{Joyc.1} D. Joyce, {\it Constructible functions on Artin stacks}, J. L.M.S. 74 (2006), 583--606, math.AG/0403305. \bibitem{Joyc.2} D. Joyce, {\it Motivic invariants of Artin stacks and `stack functions'}, Quart. J. Math. 
58 (2007), 345--392, math.AG/0509722. \bibitem{Joyc.3} D. Joyce, {\it Configurations in abelian categories. I. Basic properties and moduli stacks}, Adv. Math. 203 (2006), 194--255, math.AG/0312190. \bibitem{Joyc.4} D. Joyce, {\it Configurations in abelian categories. II. Ringel--Hall algebras}, Adv. Math. 210 (2007), 635--706, math.AG/0503029. \bibitem{Joyc.5} D. Joyce, {\it Configurations in abelian categories. III. Stability conditions and identities}, Adv. Math. 215 (2007), 153--219, math.AG/0410267. \bibitem{Joyc.6} D. Joyce, {\it Configurations in abelian categories. IV. Invariants and changing stability conditions}, Adv. Math. 217 (2008), 125--204, math.AG/0410268. \bibitem{Joyc.7} D. Joyce, {\it Holomorphic generating functions for invariants counting coherent sheaves on Calabi--Yau $3$-folds}, Geometry and Topology 11 (2007), 667--725, hep-th/0607039. \bibitem{Joyc.8} D. Joyce, {\it Generalized Donaldson--Thomas invariants}, arXiv:0910.0105, 2009. \bibitem{Joyc2} D. Joyce, {\it A classical model for derived critical loci}, arXiv:1304.4508, 2013. \bibitem{JoSo} D. Joyce and Y. Song, {\it A theory of generalized Donaldson--Thomas invariants}, Mem. Amer. Math. Soc. 217 (2012), no. 1020. arXiv:0810.5645. \bibitem{KaSc} M. Kashiwara and P. Schapira, {\it Sheaves on Manifolds}, Grundlehren der math. Wiss. 292, Springer-Verlag, 1990. \bibitem{KL} N.M. Katz and G. Laumon, {\it Transformation de Fourier et majoration des sommes exponentielles,} Publ. Math. I.H.E.S. 62 (1985), 145--202. \bibitem{Kem} G. Kemper, {\it Separating invariants}, J. Symbolic Comput. 44 (2009), no. 9, 1212--1222, https://www.ricam.oeaw.ac.at/mega2007/electronic/37.pdf \bibitem{Kenn} G. Kennedy, {\it MacPherson's Chern classes of singular algebraic varieties}, Comm. Algebra 18 (1990), 2821--2839. \bibitem{KiLi1} Y. H. Kiem and J. Li,
{\it Localizing virtual cycles by cosections}, Preprint, arXiv:1007.3085. \bibitem{KiLi2} Y. H. Kiem and J. Li, {\it A wall crossing formula of Donaldson--Thomas invariants without Chern-Simons functional}, arXiv:0905.4770. \bibitem{KiLi} Y.-H. Kiem and J. Li, {\it Categorification of Donaldson--Thomas invariants via perverse sheaves}, arXiv:1212.6444, 2012. \bibitem{Kle}S. L. Kleiman, {\it The Picard scheme}, In Fundamental algebraic geometry, volume 123 of Math. Surveys Monogr., pages 235--321. Amer. Math. Soc., Providence, RI, 2005, http://arxiv.org/pdf/math/0504020v1. \bibitem{Koba} S. Kobayashi, {\it Differential geometry of complex vector bundles}, Publ. Math. Soc. Japan 15, Princeton University Press, Princeton, NJ, 1987. \bibitem{KoSo1} M. Kontsevich and Y. Soibelman, {\it Stability structures, motivic Donaldson--Thomas invariants and cluster transformations}, Mirror symmetry and tropical geometry, 55--89, Contemp. Math., 527, Amer. Math. Soc., Providence, RI, 2010, arXiv:0811.2435, 2008. \bibitem{KoSo} M. Kontsevich and Y. Soibelman, {\it Cohomological Hall algebra, exponential Hodge structures and motivic Donaldson--Thomas invariants}, arXiv:1006.2706v2. \bibitem{KoSo2} M. Kontsevich and Y. Soibelman, {\it Motivic Donaldson--Thomas invariants: summary of results}, arXiv:0910.4315, 2009. \bibitem{Kre} A. Kresch, {\it Cycle groups for Artin stacks}, Invent. Math. 138 (1999), 495--536. math.AG/9810166. \bibitem{LaMo} G. Laumon and L. Moret-Bailly, {\it Champs alg\'ebriques}, Ergeb. der Math. und ihrer Grenzgebiete 39, Springer-Verlag, Berlin, 2000. \bibitem{LeTe}D.T. Le and B. Teissier, {\it Vari\'et\'es polaires et classes de Chern des vari\'et\'es singuli\'eres}, Ann. of Math. 1981. \bibitem{LePa} M. Levine and R. Pandharipande, {\it Algebraic cobordism revisited}, math.AG/0605196. \bibitem{Li} J. Li, {\it Zero dimensional Donaldson--Thomas invariants of threefolds}, Geometry and Topology 10 (2006), 2117--2171, math.AG/0604490. \bibitem{LiTi} J. Li and G. 
Tian, {\it Virtual moduli cycles and Gromov-Witten invariants of algebraic varieties}, Journal of the American Mathematical Society 11 (1998), 119--174. math.AG/9602007. \bibitem{LiQin} W.P. Li, Z. Qin, {\it Donaldson--Thomas invariants of certain Calabi--Yau 3-folds}, arXiv:1002.4080, 2010. \bibitem{Lie} M. Lieblich, {\it Moduli of complexes on a proper morphism}, J. of Algebraic Geometry 15 (2006), 175--206. \bibitem{Loo} E. Looijenga, {\it Motivic measures}, S\'eminaire Bourbaki, Ast\'erisque 42 (1999-2000), Expos\'e No. 874. \bibitem{PL} P. Lowrey, {\it The moduli stack and motivic Hall algebra for the bounded derived category}, http://arxiv.org/pdf/1110.5117.pdf. \bibitem{LuTe} M. L\"ubke and A. Teleman, {\it The Kobayashi--Hitchin correspondence}, World Scientific, Singapore, 1995. \bibitem{Luna} D. Luna, {\it Slices \'etales}, Bull. Soc. Math. France, M\'emoire 33 (1973), 81--105. \bibitem{MacL} S. Mac Lane, {\it Categories for the working mathematician}, Graduate texts in mathematics 5, Springer, 1998. \bibitem{MacP} R.D. MacPherson, {\it Chern classes for singular algebraic varieties}, Ann. Math. 100 (1974), 423--432. \bibitem{Mass} D.B. Massey, {\it Notes on perverse sheaves and vanishing cycles}, arXiv:math/9908107, 1999. \bibitem{Mats} H. Matsumura, {\it Commutative ring theory}, Cambridge University Press, Cambridge, 1986. \bibitem{Miya} K. Miyajima, {\it Kuranishi family of vector bundles and algebraic description of Einstein--Hermitian connections}, Publ. RIMS, Kyoto Univ. 25 (1989), 301--320. \bibitem{MNOP1} D. Maulik, N. Nekrasov, A. Okounkov, and R. Pandharipande, {\it Gromov-Witten theory and Donaldson--Thomas theory. I}, Compos. Math. 142 (2006), 1263--1285. math.AG/0312059. \bibitem{MNOP2} D. Maulik, N. Nekrasov, A. Okounkov, and R. Pandharipande, {\it Gromov-Witten theory and Donaldson--Thomas theory. II}, Compos. Math. 142 (2006), 1286--1304. math.AG/0406092. \bibitem{MPT} D. Maulik, R. Pandharipande, R. P.
Thomas, {\it Curves on K3 surfaces and modular forms}, J. Topology (2010), 937--996. arXiv:1001.2719. \bibitem{NaAz} V. Navarro Aznar, {\it Sur les multiplicit\'es de Schubert locales des faisceaux alg\'ebriques coh\'erents}, Compositio Mathematica, tome 48, no. 3 (1983), p. 311--326. \bibitem{NN} N. Nitsure, {\it Construction of Hilbert and Quot schemes}, Fundamental algebraic geometry, 105--137, Math. Surveys Monogr., 123, Amer. Math. Soc., Providence, RI, 2005. \bibitem{NN1} N. Nitsure, {\it Deformation Theory for Vector Bundles}, in `Moduli spaces and vector bundles', LMS lecture note series 359 (2009), in honour of Peter Newstead. \bibitem{O} T. Ohmoto, {\it Equivariant Chern classes of singular algebraic varieties with group actions,} Math. Proc. Cambridge Philos. Soc. 140 (2006), no. 1, 115--134. \bibitem{PaTh.1} R. Pandharipande and R.P. Thomas, {\it Curve counting via stable pairs in the derived category}, Invent. Math. 178 (2009), 407--447. arXiv:0707.2348. \bibitem{PaTh} R. Pandharipande and R.P. Thomas, {\it Almost closed\/ $1$-forms}, arXiv:1204.3958, 2012. \bibitem{PTVV} T. Pantev, B. To\"en, M. Vaqui\'e and G. Vezzosi, {\it Shifted symplectic structures}, Publ. Math. I.H.E.S. 117 (2013), 271--328. arXiv:1111.3209. \bibitem{PaPr} A. Parusi\'nski and P. Pragacz, {\it Characteristic classes of hypersurfaces and characteristic cycles}, J. Alg. Geom. 10 (2001), 63--79. math.AG/9801102. \bibitem{Ram} C. P. Ramanujam, {\it On a geometric interpretation of multiplicity}, Inventiones Mathematicae, Volume 22, Number 1, 63--67. \bibitem{Ra} S. Raskin, {\it The cotangent stack}, seminar note, www.math.harvard.edu \bibitem{Rud} A. Rudakov, {\it Stability for an Abelian Category}, Journal of Algebra 197 (1997), 231--245. \bibitem{Sabb} C. Sabbah, {\it Quelques remarques sur la g\'eometrie des espaces conormaux}, Asterisque 130 (1985), 161--192. \bibitem{Sabb1} C.
Sabbah, {\it Espaces conormaux bivariants}, Th\'ese, Universit\'e Paris VII, 1986. \bibitem{Sait} M. Saito, {\it Mixed Hodge modules}, Publ. Res. Inst. Math. Sci. 26 (1990), 221--333. \bibitem{Sh} P. Schapira, {\it Cycles Lagrangiens, fonctions constructibles et applications}, S\'eminaire EDP, Publ. Ecole Polytechnique (1988/89). \bibitem{Schur} J. Sch\"urmann, {\it Topology of singular spaces and constructible sheaves}, Monografie Matematyczne 63, Birkh\"auser, Basel, 2003. \bibitem{Schur1} J. Sch\"urmann, {\it Lectures on characteristic classes of constructible functions. Notes by Piotr Pragacz and Andrzej Weber}, Trends Math., Topics in cohomological studies of algebraic varieties, 175--201, Birkh\"auser, Basel, 2005. \bibitem{Schur2} J. Sch\"urmann, {\it A general intersection formula for Lagrangian cycles}, Compositio Math. 140 (2004), 1037--1052. \bibitem{SY} J. Sch\"urmann and S. Yokura, {\it A survey of characteristic classes of singular spaces}, Singularity theory, 865--952, World Sci. Publ., Hackensack, NJ, 2007. arXiv:math/0511175. \bibitem{SeTh} P. Seidel and R.P. Thomas, {\it Braid group actions on derived categories of coherent sheaves}, Duke Math. J. 108 (2001), 37--108. math.AG/0001043. \bibitem{Serr} J.P. Serre, {\it G\'eom\'etrie alg\'ebrique et g\'eom\'etrie analytique}, Ann. Inst. Fourier 6 (1956), 1--42. \bibitem{Sos} P. Sosna, {\it Linearisations of triangulated categories with respect to finite group actions,} arXiv:1108.2144v1. \bibitem{StTh} J. Stoppa and R.P. Thomas, {\it Hilbert schemes and stable pairs: GIT and derived category wall crossings}, arXiv:0903.1444, 2009. \bibitem{Szen} B. Szendr\H oi, {\it Nekrasov's partition function and refined Donaldson--Thomas theory: the rank one case}, SIGMA (2012) 088, 16pp. arXiv:1210.5181. \bibitem{Thom} R.P. Thomas, {\it A holomorphic Casson invariant for Calabi--Yau $3$-folds, and bundles on $K3$ fibrations}, J. Diff. Geom. 54 (2000), 367--438.
math.AG/9806111. \bibitem{Thuong} Le Quy Thuong, {\it Proofs of the integral identity conjecture over algebraically closed fields}, arXiv:1206.5334. \bibitem{Toda} Y. Toda, {\it Curve counting theories via stable objects I. DT/PT correspondence}, arXiv:0902.4371, 2009. \bibitem{Toda2} Y. Toda, {\it Curve counting theories via stable objects II: DT/ncDT flop formula}, arXiv:0909.5129, 2011. \bibitem{Toen2} B. To\"en, {\it Moduli of objects in dg-categories}, arXiv:math/0503269, 2007. \bibitem{Toen3} B. To\"en, {\it Derived Hall Algebras}, arXiv:math/0501343, 2005. \bibitem{Toen4} B. To\"en, G. Vezzosi, {\it From HAG to DAG: derived moduli stacks}, arXiv:math/0210407, 2002. \bibitem{Toen} B. To\"en, {\it Higher and derived stacks: a global overview}, pages 435--487 in {\it Algebraic Geometry --- Seattle 2005}, Proc. Symp. Pure Math. 80, Part 1, A.M.S., Providence, RI, 2009. math.AG/0604504. \bibitem{ToVe1} B. To\"en and G. Vezzosi, {\it Homotopical Algebraic Geometry II: Geometric Stacks and Applications}, Mem. Amer. Math. Soc. 193 (2008), no. 902. math.AG/0404373. \bibitem{ToVe2} B. To\"en and G. Vezzosi, {\it From HAG to DAG: derived moduli stacks}, pages 173--216 in {\it Axiomatic, enriched and motivic homotopy theory}, NATO Sci. Ser. II Math. Phys. Chem., 131, Kluwer, Dordrecht, 2004. math.AG/0210407. \bibitem{Totaro} B. Totaro, {\it The Chow ring of a classifying space,} Algebraic K-theory (Seattle, WA, 1997), 249--281, Proc. Sympos. Pure Math., 67, Amer. Math. Soc., Providence, RI, 1999. \bibitem{Verd} J.-L. Verdier, {\it Sp\'ecialisation des classes de Chern}, Exp.\ 7 in {\it Caract\'eristique d'Euler--Poincar\'e}, Seminaire E.N.S. 1978-9, Ast\'erisque 82--83 (1981). \bibitem{Vist} A. Vistoli, {\it Intersection theory on algebraic stacks and on their moduli spaces}, Invent. Math., 97, 613--670, 1989. \bibitem{Vois} C.
Voisin, {\it On integral Hodge classes on uniruled or Calabi--Yau threefolds}, pages 43--73 in S. Mukai, Y. Miyaoka, S. Mori, A. Moriwaki and I. Nakamura, editors, {\it Moduli Spaces and Arithmetic Geometry (Kyoto 2004)}, Advanced Studies in Pure Math. 45, Math. Soc. Japan, Tokyo, 2006. math.AG/0412279. \bibitem{Yo}S. Yokura, {\it Bivariant Theories of Constructible Functions and Grothendieck Transformations}, Topology and Its Applications, Vol. 123, (2002), pp. 283--296. \bibitem{Zhang}Z. Zhang, {\it Moduli spaces of sheaves on K3 surfaces and symplectic stacks,} arXiv:1111.6294v1. \bibitem{Z} J. Zhou, {\it Classes de Chern en th\'eorie bivariante}, in Th\'ese, Universit\'e Aix-Marseille II (1995). \end{thebibliography} } \end{document}
\begin{document} \title[Waring decompositions for a polynomial vector]{On the number of Waring decompositions for a generic polynomial vector\\ } \author[E. Angelini, F. Galuppi, M. Mella]{Elena Angelini, Francesco Galuppi, Massimiliano Mella} \address[E.Angelini, F. Galuppi, M. Mella]{ Dipartimento di Matematica e Informatica\\ Universit\`a di Ferrara\\ Via Machiavelli 35\\ 44121 Ferrara, Italia} \email{[email protected], [email protected], [email protected]} \author[G. Ottaviani]{Giorgio Ottaviani} \address[G. Ottaviani]{Dipartimento di Matematica e Informatica 'Ulisse Dini'\\ Universit\`{a} di Firenze \\ Viale Morgagni 67/A \\ Firenze, Italia} \email{[email protected]} \begin{abstract} We prove that a general polynomial vector $(f_1, f_2, f_3)$ in three homogeneous variables of degrees $(3,3,4)$ has a unique Waring decomposition of rank 7. This is the first new case we are aware of, and likely the last one, after the five examples known since the 19th century and the binary case. We prove that there are no identifiable cases among pairs $(f_1, f_2)$ in three homogeneous variables of degrees $(a, a+1)$, unless $a=2$, and we give a lower bound on the number of decompositions. The new example was discovered with Numerical Algebraic Geometry, while its proof needs Nonabelian Apolarity. \end{abstract} \maketitle \section{Introduction}\label{sec:intr} Let $f_1$, $f_2$ be two general quadratic forms in $n+1$ variables over $\mathbb{C}$. A well-known theorem, which goes back to Jacobi and Weierstrass, says that $f_1$, $f_2$ can be simultaneously diagonalized. More precisely, there exist linear forms $l_0,\ldots, l_n$ and scalars $\lambda_0,\ldots, \lambda_n$ such that \begin{equation}\label{eq:2quadrics} \left\{\begin{array}{rcl}f_1&=&\sum_{i=0}^nl_i^2\\ \hspace{0.2cm} \\ f_2&=&\sum_{i=0}^n\lambda_il_i^2\end{array}\right.
\end{equation} An important feature is that the forms $l_i$ are unique (up to order) and their equivalence classes, up to scalar multiples, depend only on the pencil $\left< f_1, f_2\right>$; hence the $\lambda_i$ are also uniquely determined once $f_1$, $f_2$ have been chosen in this order. The canonical form (\ref{eq:2quadrics}) allows one to write easily the basic invariants of the pencil, like the discriminant, which takes the form $\prod_{i<j}(\lambda_i-\lambda_j)^2$. We call (\ref{eq:2quadrics}) a (simultaneous) Waring decomposition of the pair $(f_1, f_2)$. The pencil $(f_1,f_2)$ has a unique Waring decomposition with $n+1$ summands if and only if its discriminant does not vanish. In the tensor terminology, $(f_1, f_2)$ is {\it generically identifiable}. We now generalize the decomposition (\ref{eq:2quadrics}) to $r$ general forms, even allowing different degrees. For symmetry reasons, it is convenient not to distinguish $f_1$ from the other $f_j$'s, so we will allow scalars $\lambda^j_i$ in the decomposition of each $f_j$, including $f_1$. To be precise, let $ f=(f_1, \ldots, f_r) $ be a vector of general homogeneous forms of degrees $ a_1, \ldots, a_r $ in $n+1$ variables over the complex field $ \mathbb{C} $, i.e. $ f_i \in {\mathrm Sym}^{a_i} \mathbb{C}^{n+1} $ for all $ i \in \{1, \ldots, r\} $. Let us assume that $ 2 \leq a_{1} \leq \ldots \leq a_{r} $.
\begin{defn}\label{simdec} A \emph{Waring decomposition} of $ f=(f_1, \ldots, f_r) $ is given by linear forms $ \ell_1, \ldots, \ell_k \in \mathbb{P}((\mathbb{C}^{n+1})^\vee) $ and scalars $ (\lambda_{1}^{j}, \ldots, \lambda_k^{j}) \in \mathbb{C}^{k}-\{\underline{0}\} $ with $ j \in \{1, \ldots, r\} $ such that \begin{equation}\label{eq:dec} f_{j} = \lambda_{1}^{j}\ell_{1}^{a_{j}}+ \ldots + \lambda_{k}^{j}\ell_{k}^{a_{j}} \end{equation} for all $ j \in \{1, \ldots, r\} $, or in vector notation \begin{equation}\label{eq:simwaring} f=\sum_{i=1}^k\left(\lambda_{i}^{1}\ell_{i}^{a_{1}},\ldots, \lambda_{i}^{r}\ell_{i}^{a_{r}}\right) \end{equation} The geometric argument in \S \ref{subsec:projbundle} shows that every $f$ has a Waring decomposition. We consider two Waring decompositions of $f$ as in (\ref{eq:simwaring}) to be equal if they differ just by the order of the $k$ summands. The {\it rank} of $f$ is the minimum number $k$ of summands appearing in (\ref{eq:simwaring}); this definition coincides with the classical one in the case $r=1$ (the vector $f$ given by a single polynomial). \end{defn} Due to the presence of the scalars $\lambda^j_i$, each form $\ell_{i}$ essentially depends only on $n$ parameters. So the decomposition (\ref{eq:dec}) may be thought of as a nonlinear system with $\sum_{i=1}^r{{a_i+n}\choose n}$ data (given by the $f_{j}$) and $k(r+n)$ unknowns (given by the $kr$ scalars $\lambda^j_i$ and the $k$ forms $\ell_{i}$). This is a very classical subject, see for example \cite{Re, Lon, Ro, Sco, Te2}, although in most of the classical papers the degrees $a_i$ were assumed equal, with the notable exception of \cite{Ro}. \begin{defn}\label{d:perfectcases} Let $ a_1,\ldots, a_r, n$ be as above. \noindent The space $ {\mathrm Sym}^{a_1} \mathbb{C}^{n+1}\oplus\ldots\oplus {\mathrm Sym}^{a_r} \mathbb{C}^{n+1}$ is called \emph{perfect} if there exists $k$ such that \begin{equation}\label{eq:perfect} \sum_{i=1}^r{{a_i+n}\choose n} = k(r+n) \end{equation} i.e.
when (\ref{eq:dec}) corresponds to a square polynomial system. \end{defn} The arithmetic condition (\ref{eq:perfect}) means that $\sum_{i=1}^r{{a_i+n}\choose n}$ is divisible by $(r+n)$, in other terms the number of summands $k$ in the system (\ref{eq:dec}) is uniquely determined. The case with two quadratic forms described in (\ref{eq:2quadrics}) corresponds to $r=2$, $a_1=a_2=2$, $k=n+1$ and it is perfect. The perfect cases are important because, by the above dimensional count, we expect finitely many Waring decompositions for the generic polynomial vector in a perfect space $ {\mathrm Sym}^{a_1} \mathbb{C}^{n+1}\oplus\ldots\oplus {\mathrm Sym}^{a_r} \mathbb{C}^{n+1}$. It may happen that general elements in perfect spaces have no decompositions with the expected number $k$ of summands, the first example, beside the one of plane conics, was found by Clebsch in the XIXth century and regards ternary quartics, where $r=1$, $a_1=4$ and $n=2$. Equation (\ref{eq:perfect}) gives $k=5$ but in this case the system (\ref{eq:dec}) has no solutions and indeed $6$ summands are needed to find a Waring decomposition of the general ternary quartic. It is well known that all the perfect cases with $r=1$ when the system (\ref{eq:dec}) has no solutions have been determined by Alexander and Hirschowitz, while more cases for $r\ge 2$ have been found in \cite{CaCh}, where a collection of classical and modern interesting examples is listed. Still, perfectness is a necessary condition to have finitely many Waring decompositions. So two natural questions, of increasing difficulty, arise. \vskip 0.4cm {\bf Question 1} Are there other perfect cases for $a_1,\ldots, a_r, n$, beyond (\ref{eq:2quadrics}), where a unique Waring decomposition (\ref{eq:simwaring}) exists for generic $f$, namely where we have generic identifiability ? \vskip 0.4cm {\bf Question 2} Compute the number of Waring decompositions (up to order of summands) for a generic $f$ in any perfect case. 
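The perfectness condition (\ref{eq:perfect}) is elementary to test; a minimal sketch in Python (the helper `perfect_k` is ours, not from the paper) checks the divisibility and returns the expected number of summands $k$:

```python
from math import comb

def perfect_k(n, degrees):
    """Return k with sum_i binom(a_i + n, n) = k (r + n) if the space
    Sym^{a_1} C^{n+1} + ... + Sym^{a_r} C^{n+1} is perfect, else None."""
    N = sum(comb(a + n, n) for a in degrees)
    r = len(degrees)
    return N // (r + n) if N % (r + n) == 0 else None

print(perfect_k(2, [2, 2]))     # 3 = n + 1: pairs of quadrics, as in (eq:2quadrics)
print(perfect_k(2, [4]))        # 5: Clebsch's ternary quartics (yet 6 summands are needed)
print(perfect_k(2, [3, 3, 4]))  # 7: the new identifiable case of Theorem thm:main334
print(perfect_k(2, [3, 4]))     # None: Sym^a + Sym^{a+1} on C^3 is perfect only for a even
```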
\vskip 0.4cm The above two questions are probably quite difficult, but we feel it is worthwhile to state them as guiding problems. These two questions are open even in the case $r=1$ of a single polynomial. In the case $r=1$, Question 1 has a conjectural answer due to the third author, who proved many cases of this conjecture in \cite{Me, Me1}. The birational technique used in these papers has been generalized to our setting in \S \ref{sec:ternary} of this paper. Still in the case $r=1$, the number of decompositions for small $a_1$ and $n$ has been computed (with high probability) in \cite{HOOS} by homotopy continuation techniques, with the numerical software Bertini \cite{Be}. In this paper we contribute to the above two questions. Before stating our conclusions, we still need to recall other known results on this topic. In the case $n=1$ (binary forms) there is a result by Ciliberto and Russo \cite{CR} which completely answers our Question 1. \begin{thm}[Ciliberto-Russo]\label{thm:binforms0} Let $n=1$. In all the perfect cases there is a unique Waring decomposition for generic $f\in {\mathrm Sym}^{a_1}\mathbb{C}^2\oplus\ldots\oplus {\mathrm Sym}^{a_r}\mathbb{C}^2$ if and only if $ a_{1}+1 \geq \frac{\sum_{i=1}^r(a_i+1)}{r+1} $. (Note that the fraction $\frac{\sum_{i=1}^r(a_i+1)}{r+1}$ equals the number $k$ of summands.) \end{thm} We will provide an alternative proof of Theorem \ref{thm:binforms0} by using Apolarity, see Theorem \ref{thm:nonabelian_applied2}. As widely expected, for $n>1$ generic identifiability is quite a rare phenomenon.
Identifiable cases were extensively investigated in the XIX$^{\rm th}$ century and at the beginning of the XX$^{\rm th}$ century, and the following are the only such cases that we are aware of: \begin{equation}\label{eq:classiclist} \left\{\begin{array}{ll} (i) &({\mathrm Sym}^2\mathbb{C}^n)^{\oplus 2}, \textrm{rank\ } n,\textrm{Weierstrass \cite{We}, as in (\ref{eq:2quadrics})},\\ (ii) & {\mathrm Sym}^5\mathbb{C}^3, \textrm{rank }7, \textrm{Hilbert \cite{Hi}, see also \cite{Ri} and \cite{Pa}},\\ (iii)&{\mathrm Sym}^3\mathbb{C}^4, \textrm{rank } 5,\textrm{Sylvester Pentahedral Theorem \cite{Sy}},\\ (iv)& ({\mathrm Sym}^2\mathbb{C}^3)^{\oplus 4}, \textrm{rank\ } 4,\\ (v)&{\mathrm Sym}^2\mathbb{C}^3\oplus {\mathrm Sym}^3\mathbb{C}^3,\textrm{rank\ }4,\textrm{Roberts \cite{Ro}.} \end{array}\right. \end{equation} The interest in Waring decompositions was revived by Mukai's work on 3-folds, \cite{Mu}\cite{Mu1}. Since then many authors have devoted their energies to understanding, interpreting and expanding the theory. Cases $(ii)$ and $(iii)$ in (\ref{eq:classiclist}) were explained by Ranestad and Schreyer in \cite{RS} by using syzygies, see also \cite{MM} for an approach via projective geometry and \cite{OO} for a vector bundle approach (called in this paper ``Nonabelian Apolarity'', see \S\ref{sec:Nonabelian}). Case $(v)$ was reviewed in \cite{OS} in the setting of L\"uroth quartics. Case $(iv)$ is a classical and ``easy'' result: there is a unique Waring decomposition of a general 4-tuple of ternary quadrics. There is a very nice geometric interpretation of this latter case: four points in $\p^5$ define a $\p^3$ that cuts the Veronese surface in 4 points, giving the required unique decomposition. See Remark \ref{rem:d^n} for a generalization to arbitrary $(d,n)$. Our main contribution with respect to unique decompositions is the following new case.
\begin{thm} \label{thm:main334} A general $f\in {\mathrm Sym}^3\mathbb{C}^3\oplus {\mathrm Sym}^3\mathbb{C}^3\oplus {\mathrm Sym}^4\mathbb{C}^3$ has a unique Waring decomposition of rank 7, namely it is identifiable. \end{thm} The theorem will be proved in the general setting of Theorem \ref{thm:nonabelian_applied2}. Besides the new example itself, we think it is important to stress the way it arose. We adapted the methods of \cite{HOOS} to our setting, by using the software Bertini \cite{Be} and also the package {\it Numerical Algebraic Geometry} \cite{KL} in Macaulay2 \cite{M2}, with the generous help of Jon Hauenstein and Anton Leykin, who assisted us in writing our first scripts. The computational analysis of perfect cases of forms on $\mathbb{C}^3$ suggested that for ${\mathrm Sym}^3\mathbb{C}^3\oplus {\mathrm Sym}^3\mathbb{C}^3\oplus {\mathrm Sym}^4\mathbb{C}^3$ the Waring decomposition is unique. Then we proved it via Nonabelian Apolarity with the choice of a suitable vector bundle. Another novelty of this paper is a unified proof of almost all cases with a unique Waring decomposition via Nonabelian Apolarity with the choice of a vector bundle $E$, see Theorem \ref{thm:nonabelian_applied2}. Finally we borrowed a construction from \cite{MM} to prove, see Theorem \ref{thm:unirat}, that whenever we have uniqueness for rank $k$, the variety parametrizing Waring decompositions of higher rank is unirational. Take $r=2$ and $n=2$: the space ${\mathrm Sym}^a\mathbb{C}^3\oplus{\mathrm Sym}^{a+1}\mathbb{C}^3$ is perfect if and only if $a=2t$ is even. All the numerical computations we performed suggested that identifiability holds only for $a=2$ (by Roberts' Theorem, see (\ref{eq:classiclist}) $(v)$). Once again this pushed us to prove the non-uniqueness for these pencils of plane curves. Our main contribution to Question 2 regards this case and is the following.
\begin{thm} \label{th:main_identifi_intro} A general $f\in{\mathrm Sym}^{a}\mathbb{C}^3\oplus{\mathrm Sym}^{a+1}\mathbb{C}^3$ is identifiable if and only if $a=2$, corresponding to (v) in the list (\ref{eq:classiclist}). Moreover $f$ has finitely many Waring decompositions if and only if $a=2t$ and in this case the number of decompositions is at least $$ \frac{(3t-2)(t-1)}2+1. $$ \end{thm} We know by (\ref{eq:classiclist})(v) that the bound is sharp for $t=1$, and we verified with high probability, using \cite{Be}, that it is attained also for $t=2$. On the other hand we do not expect it to be sharp in general. Theorem \ref{th:main_identifi_intro} is proved in \S \ref{sec:ternary}. The main idea, borrowed from \cite{Me}, is to bound the number of decompositions by the degree of a tangential projection, see Theorem \ref{th:birational_tangent_proj}. To bound the latter we use a degeneration argument, see Lemma \ref{lem:birational_degeneration}, that reduces the computation needed to an intersection calculation in the plane. \end{ack} \section{The Secant construction}\label{sec:secant} \subsection{Secant Varieties} Let us now recall the main definitions and results concerning secant varieties. Let $\Gr_k=\Gr(k,N)$ be the Grassmannian of $k$-linear spaces in $\p^N$. Let $X\subset\p^{N}$ be an irreducible variety of dimension $n$ and let $$\Gamma_{k+1}(X)\subset X\times\cdots\times X\times\Gr_k$$ be the closure of the graph of $$\alpha:(X\times\cdots\times X)\setminus\Delta\to \Gr_k,$$ taking a $(k+1)$-tuple of distinct points $(x_0,\ldots,x_{k})$ to $[\langle x_0,\ldots,x_{k}\rangle]$. Observe that $\Gamma_{k+1}(X)$ is irreducible of dimension $(k+1)n$. Let $\pi_2:\Gamma_{k+1}(X)\to\Gr_k$ be the natural projection. Denote by $$S_{k+1}(X):=\pi_2(\Gamma_{k+1}(X))\subset\Gr_k.$$ Again $S_{k+1}(X)$ is irreducible of dimension $(k+1)n$. Finally let $$I_{k+1}=\{(x,[\Lambda])\,|\, x\in \Lambda\}\subset\p^{N}\times\Gr_k,$$ with natural projections $\pi_i$ onto the factors. Observe that $\pi_2:I_{k+1}\to\Gr_k$ is a $\p^{k}$-bundle over $\Gr_k$. \begin{defn}\label{def:secant} Let $X\subset\p^{N}$ be an irreducible variety. The {\it abstract $k$-Secant variety} is $$\sec_k(X):=\pi_2^{-1}(S_k(X))\subset I_k,$$ while the {\it $k$-Secant variety} is $$\Sec_k(X):=\pi_1(\sec_k(X))\subset\p^N.$$ It is immediate that $\sec_k(X)$ is a $(kn+k-1)$-dimensional variety with a $\p^{k-1}$-bundle structure over $S_k(X)$. One says that $X$ is $k$-\emph{defective} if $$\dim\Sec_k(X)<\min\{\dim\sec_k(X),N\}$$ and calls the $ k $-\emph{defect} the number $$\delta_{k}=\min\{\dim\sec_k(X),N\}-\dim\Sec_k(X).$$ \end{defn} \begin{rem} Let us stress that in our definition $\Sec_1(X)=X$. A simple but useful feature of the above definition is the following. Let $\Lambda_1$ and $\Lambda_2$ be two distinct $k$-secant $(k-1)$-linear spaces to $X\subset\p^{N}$. Let $\lambda_1$ and $\lambda_2$ be the corresponding projective $(k-1)$-spaces in $\sec_k(X)$. Then we have $\lambda_1\cap \lambda_2=\emptyset$. \label{re:vuoto} \end{rem} Here is the main result we use about secant varieties. \begin{thm}[Terracini Lemma \cite{Te}\cite{ChCi}] \label{th:terracini} Let $X\subset\p^{N}$ be an irreducible, projective variety. If $p_1,\ldots, p_{k}\in X$ are general points and $z\in\langle p_1,\ldots, p_{k}\rangle$ is a general point, then the embedded tangent space at $z$ is $$\T_z\Sec_k(X) = \langle \T_{p_1}X,\ldots, \T_{p_{k}}X\rangle.$$ If $X$ is $k$-defective, then the general hyperplane $H$ containing $\T_{z}\Sec_k(X)$ is tangent to $X$ along a variety $\Sigma(p_1,\ldots, p_{k})$ of pure, positive dimension, containing $p_1,\ldots, p_{k}$.
\end{thm} \subsection{Secants to a projective bundle}\label{subsec:projbundle} We give a geometric interpretation of the decomposition (\ref{eq:dec}) by considering the $k$-secant variety of the projective bundle (see \cite[II, \S 7]{Har}) $$ X=\mathbb{P}(\mathcal{O}_{\p^n}(a_{1}) \oplus \ldots \oplus \mathcal{O}_{\p^n}(a_{r}) ) \subset \mathbb{P}\left(H^0\left(\oplus_i \mathcal{O}_{\p^n}(a_{i})\right)\right) = \mathbb{P}^{N-1}, $$ where $N=\sum_{i=1}^r{{a_i+n}\choose n}$. We denote by $\pi\colon X\to\p^n$ the bundle projection. Note that $\dim X=r+n-1$ and the embedding in $\mathbb{P}^{N-1}$ corresponds to the canonical invertible sheaf $\o_X(1)$ constructed on $X$ (\cite[II, \S 7]{Har}). Indeed $X$ is parametrized by $\left(\lambda^{(1)}\ell^{a_{1}}, \ldots, \lambda^{(r)}\ell^{a_{r}} \right)\in \oplus_{i=1}^rH^0\left(\mathcal{O}_{\p^n}(a_{i})\right)$, where $\ell\in\mathbb{C}^{n+1}$ and the $\lambda^{(i)}$ are scalars. Thus $X$ coincides with the locus of polynomial vectors of rank $1$, as defined in the Introduction. It follows that the $k$-secant variety of $X$ is parametrized by $\displaystyle{\sum_{i=1}^{k}}\left(\lambda_{i}^{1}\ell_{i}^{a_{1}}, \ldots, \lambda_{i}^{r}\ell_{i}^{a_{r}} \right), $ where the $\lambda_i^j$ are scalars and $\ell_i\in\mathbb{C}^{n+1}$. In the case $a_i=i$ for $i=1,\ldots, d$, this construction appears already in \cite{CQU}. Since $X$ is not contained in any hyperplane, it follows that every polynomial vector has a Waring decomposition as in (\ref{eq:simwaring}). Thus, the number of decompositions of $ f_{1}, \ldots, f_{r} $ by means of $ k $ linear forms equals the $k$-\emph{secant degree} of $X$. If $ a_{i} = a $ for all $ i \in \{1, \ldots, r\} $, then we deal with $ \mathbb{P}^{r-1}\times\p^n $ embedded through the Segre-Veronese map with $ \o(1,a) $, as we can see in Proposition 1.3 of \cite{DF} or in \cite{BBCC}.
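The claim $\dim X=r+n-1$ can be checked on the parametrization above: the Jacobian of the affine parametrization of the cone over $X$ has rank $r+n$ at a generic point, the one redundant parameter corresponding to rescaling $\ell$ while adjusting the scalars. A sketch in Python with sympy, for the sample case $r=2$, $(a_1,a_2)=(2,3)$, $n=2$ (the numeric values are our arbitrary choice):

```python
import itertools
import sympy as sp

x, y, z, t1, t2, a, b, c = sp.symbols('x y z t1 t2 a b c')
ell = a*x + b*y + c*z

def coeff_vector(expr, deg):
    # coefficients of a homogeneous polynomial wrt all degree-`deg` monomials
    p = sp.Poly(sp.expand(expr), x, y, z)
    exps = [e for e in itertools.product(range(deg + 1), repeat=3) if sum(e) == deg]
    return [p.coeff_monomial(x**i * y**j * z**k) for (i, j, k) in exps]

# Affine cone over X = P(O(2) + O(3)) on P^2 (r = 2, (a_1, a_2) = (2, 3), n = 2):
# (t1, t2, a, b, c) |-> (t1*ell^2, t2*ell^3), in coordinates (6 + 10 = 16).
phi = sp.Matrix(coeff_vector(t1 * ell**2, 2) + coeff_vector(t2 * ell**3, 3))
J = phi.jacobian(sp.Matrix([t1, t2, a, b, c]))        # 16 x 5

rank = J.subs({t1: 2, t2: 3, a: 1, b: 5, c: 7}).rank()
print(rank)   # 4 = r + n, so dim X = r + n - 1 = 3
```

The kernel of `J` is spanned by the infinitesimal rescaling $(-a_1 t_1, -a_2 t_2, a, b, c)$, which is why only $n$ conditions per form $\ell_i$ count in the dimension count of \S 1.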
\\ Moreover, we remark that being in a perfect case in the sense of Definition \ref{d:perfectcases} is equivalent to the fact that $ \mathbb{P}(\mathcal{O}_{\p^n}(a_{1}) \oplus \ldots \oplus \mathcal{O}_{\p^n}(a_{r}) ) $ is a \emph{perfect} variety, i.e. $(n+r)|N$. \\ {Theorem \ref{thm:binforms0} has the following reformulation (compare with Claim $5.3$ and Proposition 1.14 of \cite{CR}):} \begin{cor}\label{c:identifiability} If (\ref{eq:perfect}) and $ a_{1}+1 \geq k $ hold, then $ \mathbb{P}(\mathcal{O}_{\p^1}(a_{1}) \oplus \ldots \oplus \mathcal{O}_{\p^1}(a_{r}) ) $ is $ k $-identifiable, i.e. its $k$-secant degree is equal to $ 1 $. \end{cor} \begin{rem} A formula for the dimension of the $k$-secant variety of the rational normal scroll $X$ for $n=1$ has been given in \cite[pag. 359]{CaJo} (with a sign mistake, corrected in \cite[Prop. 1.14]{CR}). \end{rem} \begin{rem}\label{rem:d^n} We may consider the Veronese variety $V:=V_{d,n}\subset\p^{{{d+n}\choose{n}}-1}$. Let $s-1={\rm codim}\, V$; then $s$ general points determine a unique $\p^{s-1}$ that intersects $V$ in $d^n$ points. The $d^n$ points are linearly independent only if $d^n=s$, that is, either $n=1$ or $(d,n)=(2,2)$. This shows that a general vector $f=(f_1,\ldots,f_s)$ of forms of degree $d$ admits ${{d^n}\choose{s}}$ decompositions, see the table at the end of \S\ref{sec:compapp} for some numerical examples. On the other hand, from a different perspective, dropping the requirement that the linear forms giving the decompositions are linearly independent, this shows that there is a unique set of $d^n$ linear forms that decompose the general vector $f$. Note that this time only the forms, and not the coefficients, are uniquely determined. We will not dwell on this point of view here and leave it for a forthcoming paper. \end{rem} \section{Nonabelian Apolarity and Identifiability} \label{sec:binary}\label{sec:Nonabelian} Let $f\in {\mathrm Sym}^dV$.
For any $e\in{\mathbb Z}$, Sylvester constructed the catalecticant map $C_f\colon {\mathrm Sym}^eV^*\to {\mathrm Sym}^{d-e}V$, which is the contraction by $f$. Its main property is the inequality $\mathrm{rk\ }C_f\le\mathrm{rk\ } f$, where the rank on the left-hand side is the rank of a linear map, while the rank on the right-hand side has been defined in the Introduction. In particular the $(k+1)$-minors of $C_f$ vanish on the variety of polynomials with rank bounded by $k$, which is $\Sec_k(V_{d,n})$. {The catalecticant map interacts well with polynomial vectors. If $f\in \oplus_{i=1}^r{\mathrm Sym}^{a_i} V $, for any $e\in{\mathbb Z}$ we define the catalecticant map $C_f\colon {\mathrm Sym}^eV^*\to \oplus_{i=1}^r{\mathrm Sym}^{a_i-e}V$, which is again the contraction by $f$. If $f$ has rank one, there exist $\ell\in V$ and scalars $\lambda^{(i)}$ such that $f=\left(\lambda^{(1)}\ell^{a_{1}}, \ldots, \lambda^{(r)}\ell^{a_{r}} \right)$. It follows that $\mathrm{rk\ }C_f\le 1$, since the image of $C_f$ is generated by $\left(\lambda^{(1)}\ell^{a_{1}-e}, \ldots, \lambda^{(r)}\ell^{a_{r}-e} \right)$, which is zero if and only if $a_r<e$. By linearity we obtain the basic inequality $$\mathrm{rk\ }C_f\le\mathrm{rk\ } f.$$ Again the $(k+1)$-minors of $C_f$ vanish on the variety of polynomial vectors with rank bounded by $k$, which is $\Sec_k(X)$, where $X$ is the projective bundle defined in \S\ref{subsec:projbundle}. A classical example is the following. Assume $V=\mathbb{C}^3$. London showed in \cite{Lon} (see also \cite{Sco}) that a pencil of ternary cubics $f=(f_1,f_2)\in {\mathrm Sym}^3V\oplus{\mathrm Sym}^3V$ has border rank $5$ if and only if $\det C_f=0$, where $C_f\colon {\mathrm Sym}^2V^*\to V\oplus V$ is represented by a $6\times 6$ matrix (see \cite[Remark 4.2]{CaCh} for a modern reference). Indeed $\det C_f$ is the equation of $\Sec_5(X)$ where $X$ is the Segre-Veronese variety $\left(\p^1\times\p^2,\o_X(1,3)\right)$.
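London's criterion can be tested symbolically. The sketch below (Python with sympy; the random integer coefficients are our choice and `catalecticant` is a hypothetical helper, not code from the paper) builds the $6\times 6$ matrix of $C_f$ for a pencil of ternary cubics and checks that $\det C_f$ vanishes on a rank-$5$ pencil but not on a random one:

```python
import random
import sympy as sp

x, y, z = sp.symbols('x y z')
random.seed(1)
rnd = lambda: random.randint(-9, 9)

def catalecticant(f1, f2):
    # C_f : Sym^2 V^* -> V + V, g |-> (g(d)f1, g(d)f2), as a 6x6 matrix:
    # rows = second-order derivatives, columns = coefficients of the two
    # image linear forms.
    quad = [(2, 0, 0), (0, 2, 0), (0, 0, 2), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
    rows = []
    for e in quad:
        g1, g2 = f1, f2
        for v, m in zip((x, y, z), e):
            g1, g2 = sp.diff(g1, v, m), sp.diff(g2, v, m)
        rows.append([sp.expand(g1).coeff(w) for w in (x, y, z)]
                    + [sp.expand(g2).coeff(w) for w in (x, y, z)])
    return sp.Matrix(rows)

# A pencil of rank 5, f_j = sum_{i=1}^5 lambda_i^j l_i^3: here det C_f = 0.
ls = [rnd()*x + rnd()*y + rnd()*z for _ in range(5)]
f1 = sp.expand(sum(rnd() * l**3 for l in ls))
f2 = sp.expand(sum(rnd() * l**3 for l in ls))
det5 = catalecticant(f1, f2).det()
print(det5)                  # 0, since rk C_f <= rk f = 5 < 6

# A random pencil should have det C_f != 0: the generic rank is 6.
mono = [x**i * y**j * z**(3 - i - j) for i in range(4) for j in range(4 - i)]
det6 = catalecticant(sum(rnd()*m for m in mono), sum(rnd()*m for m in mono)).det()
print(det6 != 0)
```

The first vanishing is exact (the image of $C_f$ lies in the span of the five vectors $(\lambda_i^1 l_i,\lambda_i^2 l_i)$); the second check is probabilistic, in line with the generic $5$-defectivity of $X$ discussed next.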
Note that $X$ is $5$-defective according to Definition \ref{def:secant}, and this phenomenon is pretty similar to the case of Clebsch quartics recalled in the Introduction. The following result goes back to Sylvester. \begin{prop}[Classical Apolarity] Let $f=\sum_{i=1}^kl_i^d\in {\mathrm Sym}^dV$ and let $Z=\{l_1,\ldots, l_k\}\subset V$. Let $C_f\colon {\mathrm Sym}^eV^*\to {\mathrm Sym}^{d-e}V$ be the contraction by $f$. Assume that the rank of $C_f$ equals $k$. Then $${\mathrm BaseLocus}\ker \left(C_f\right)\supseteq Z.$$ \end{prop} \begin{proof} The Apolarity Lemma (see \cite{RS}) says that $I_Z\subset f^{\perp}$, which reads in degree $e$ as $H^0(I_Z(e))\subset \ker C_f$. Look at the subspaces in this inclusion as subspaces of $H^0(\p^n,\o(e))$. The assumption on the rank implies that (compare with the proof of \cite[ Prop. 4.3]{OO}) $${\mathrm codim\ }H^0(I_Z(e))\le k={\mathrm rk\ }C_f = {\mathrm codim\ } \ker C_f,$$ hence we have the equality $H^0(I_Z(e))=\ker C_f$. It follows that $${\mathrm BaseLocus}\ker \left(C_f\right) ={\mathrm BaseLocus}H^0(I_Z(e))\supseteq Z.$$\end{proof} Classical Apolarity is a powerful tool to recover $Z$ from $f$, hence to write down a minimal Waring decomposition of $f$. The following Proposition \ref{prop:nonabelian} is a further generalization, and it reduces to Classical Apolarity when $(X,L)=(\p V,{\o}(d))$ and $E={\o}(e)$ is a line bundle. The vector bundle $E$ may have larger rank, which explains the name Nonabelian Apolarity. We recall that the natural map $H^0(E)\otimes H^0(E^*\otimes L)\to H^0(L)$ induces the linear map $H^0(E)\otimes H^0(L)^*\to H^0(E^*\otimes L)^*$; then for any $f\in H^0(L)^*$ we have the contraction map $A_f\colon H^0(E)\to H^0(E^*\otimes L)^*$. \begin{prop}[Nonabelian Apolarity]\label{prop:nonabelian}\cite[ Prop. 4.3]{OO} Let $X$ be a variety, $L\in Pic(X)$ a very ample line bundle which gives the embedding $X\subset \p\left(H^0(X,L)^*\right)=\p W$. Let $E$ be a vector bundle on $X$.
Let $f=\sum_{i=1}^k w_i\in W$ with $z_i=[w_i]\in\p W$, and let $Z=\{z_1,\ldots, z_k\} \subset\p W$. This induces $A_f\colon H^0(E)\to H^0(E^*\otimes L)^*$. Assume that $\mathrm{rk}A_f=k\cdot \mathrm{rk}E$. Then ${\mathrm {BaseLocus}}\ker \left(A_f\right)\supseteq Z$. \end{prop} In all the cases where we apply this result we will compute $\mathrm{rk}A_f$ separately. Nonabelian Apolarity enhances the power of Classical Apolarity and may detect a minimal Waring decomposition of a polynomial in some cases where Classical Apolarity fails, see Proposition \ref{prop:nonabelian_applied} below. Our main examples start with the quotient bundle $Q$ on $\p^n=\p(V)$; it has rank $n$ and is defined by the Euler exact sequence $$0\rig{}\o(-1)\rig{}\o\otimes V^*\rig{}Q\rig{}0.$$ Let $L=\o(d)$ and $E=Q(e)$. Any $f\in {\mathrm Sym}^d V$ induces the contraction map \begin{equation}\label{eq:contractionq} A_f\colon H^0(Q(e))\to H^0(Q^*(d-e))^*\simeq H^0(Q(d-e-1))^*. \end{equation} The following is the argument used in \cite{OO} to prove cases (ii) and (iii) of (\ref{eq:classiclist}). \begin{prop} \label{prop:nonabelian_applied} Let $X$ be a variety, $L\in Pic(X)$ a very ample line bundle and $E$ a vector bundle on $X$ with ${\rm rk}E=\dim X$. Let $[f]\in \p(H^0(L)^*)$ be a general point, $k= \frac{h^0(X,L)}{\dim X+1}$, and $A_f\colon H^0(E)\to H^0(E^*\otimes L)^*$ the {contraction} map. Assume that $\mathrm{rk}A_f=k\cdot \mathrm{rk}E$, and $c_{\rm rkE}(E)=k$. Assume moreover that for a specific $f$ the base locus of $\ker A_f$ is given by $k$ points. Then the $k$-secant map $$\pi_k:\sec_k(X)\to\p(H^0(L)^*)$$ is birational. The assumptions are verified in the following cases, corresponding to (ii) and (iii) of (\ref{eq:classiclist}).
$$\begin{array}{c|c|c|c} (X,L) &H^0(L)& \textrm{rank\ }&E\\ \hline\\ (\p^2,\o(5))& {\mathrm Sym}^5\mathbb{C}^3& 7&Q_{\p^2}(2)\\ (\p^3,\o(3))&{\mathrm Sym}^3\mathbb{C}^4& 5&Q_{\p^3}^*(2)\\ \end{array}$$ \end{prop} Specific $f$'s in the statement may be produced as random polynomials in Macaulay2 \cite{M2}. In order to prove also cases (iv) and (v) of (\ref{eq:classiclist}), and moreover our Theorem \ref{thm:main334}, we need to extend this result as follows. \begin{thm} \label{thm:nonabelian_applied2} Let $X\rig{\pi} Y$ be a projective bundle, $L\in Pic(X)$ a very ample line bundle and $F$ a vector bundle on $Y$; we denote $E=\pi^*F$. Let $[f]\in \p(H^0(L)^*)$ be a general point, $k= \frac{h^0(X,L)}{\dim X+1}$, and $A_f\colon H^0(E)\to H^0(E^*\otimes L)^*$ the {contraction} map. Let $a=\dim\ker A_f$. Assume that $\mathrm{rk}A_f=k\cdot \mathrm{rk}E$ and that $(c_{\mathrm{rk }F}F)^{a}=k$. Assume moreover that for a specific $f$ the base locus of $\ker A_f$ is given by $k$ fibers of $\pi$. Then the $k$-secant map $$\pi_k:\sec_k(X)\to\p(H^0(L)^*)$$ is birational. The assumptions are verified in the following cases.
{\footnotesize $$ \begin{array}{l|l|c|c|l} (X,L) &H^0(L)& \textrm{rank\ }&F&\dim\ker A_f\\ \hline\\ \left(\p\left(\oplus_{i=1}^r\o_{\p^1}(a_i)\right),\o_X(1)\right)&\oplus_{i=1}^r {\mathrm Sym}^{a_i}\mathbb{C}^2& {\tiny k:=\frac{\sum_{i=1}^r a_i+1}{r+1}} &\o_{\p^1}(k)&1 (\textrm{if\ }k\le a_1+1)\\ \left(\p\left(\o_{\p^2}(2)^4\right),\o_X(1)\right)& ({\mathrm Sym}^2\mathbb{C}^3)^{\oplus 4}& 4&\o_{\p^2}(2)&2\\ \left(\p\left(\o_{\p^2}(2)\oplus\o_{\p^2}(3)\right),\o_X(1)\right)&{\mathrm Sym}^2\mathbb{C}^3\oplus {\mathrm Sym}^3\mathbb{C}^3&4&\o_{\p^2}(2)&2 \\ \left(\p\left(\o_{\p^2}(3)^2\oplus\o_{\p^2}(4)\right),\o_X(1)\right)&\left({\mathrm Sym}^3\mathbb{C}^3\right)^{\oplus{2}}\oplus{\mathrm Sym}^4\mathbb{C}^3&7&Q_{\p^2}(2) &1 \end{array}$$} \end{thm} \begin{proof} By Proposition \ref{prop:nonabelian} we have $Z\subset {\mathrm BaseLocus}(\ker A_f)$, where the base locus is the common zero locus of sections $s_1,\ldots, s_a$ of $E$ which span $\ker A_f$. Since $E=\pi^*F$ and $H^0(X,E)$ is naturally isomorphic to $H^0(Y,F)$, the zero locus of each section of $E$ corresponds to the pullback through $\pi$ of the zero locus of the corresponding section of $F$. By the assumption on the top Chern class of $F$ we expect that the base locus of $\ker A_f$ contains $k=\mathrm{length\ }(Z)$ fibers of the projective bundle $X$. The hypothesis guarantees that this expectation is realized for a specific polynomial vector $f$; by semicontinuity, it is realized for the generic $f$. This determines the forms $l_i$ in (\ref{eq:simwaring}) for a generic polynomial vector $f$. It follows that $f$ is in the linear span of the fibers $\pi^{-1}(l_i)$, where $Z=\{l_1,\ldots, l_k\}$. Fix representatives for the forms $l_i$ for $i=1,\ldots, k$. Now the scalars $\lambda_i^j$ in (\ref{eq:simwaring}) are found by solving a linear system. Our assumptions imply that $X$ is not $k$-defective, otherwise the base locus of $\ker A_f$ would be positive dimensional.
In particular the tangent spaces at the points of $Z$, which are general, are independent by the Terracini Lemma. Since each $\pi$-fiber is contained in the corresponding tangent space, it follows that the fibers $\pi^{-1}(l_i)$ corresponding to $l_i\in Z$ are independent. It follows that the scalars $\lambda_i^j$ in (\ref{eq:simwaring}) are uniquely determined and we have generic identifiability. The check that the assumptions are verified in the cases listed has been performed with random polynomials with the aid of Macaulay2 \cite{M2}. In all these cases, by the projection formula we have the natural isomorphism $H^0(X,E^*\otimes L)\simeq H^0(Y,F\otimes\pi_*L)$. \end{proof} Note that the first case in the list of Theorem \ref{thm:nonabelian_applied2} corresponds to the Ciliberto-Russo Theorem \ref{thm:binforms0}; in this case $H^0(E)={\mathrm Sym}^k\mathbb{C}^2$ has dimension $k+1$, $H^0(E^*\otimes L)={\mathrm Sym}^{a_1-k}\mathbb{C}^2\oplus\ldots\oplus {\mathrm Sym}^{a_r-k}\mathbb{C}^2$ has dimension $\sum_{i=1}^r(a_i-k+1)= k$ (if $k\le a_1+1$), and the contraction map $A_f$ has rank $k$, with one-dimensional kernel. The last case in the list of Theorem \ref{thm:nonabelian_applied2} corresponds to Theorem \ref{thm:main334}. A general vector $f\in ({\mathrm Sym}^3\mathbb{C}^3)^{\oplus 2}\oplus {\mathrm Sym}^4\mathbb{C}^3$ induces the contraction $A_f\colon H^0(Q(2))\to H^0(Q)\oplus H^0(Q)\oplus H^0(Q(1))$ with one-dimensional kernel. Each element in the kernel vanishes at $7$ points, which give the seven Waring summands of $f$. } Note also that $\left(\p\left(\o_{\p^2}(2)^4\right),\o_X(1)\right)$ coincides with the Segre-Veronese variety $(\p^3\times\p^2,\o(1,2))$. \begin{rem}\label{rem:catalecticant} The assumption $ a_{1}+1 \geq k $ in Theorem \ref{thm:binforms0} is equivalent to $\frac{1}{r+1}\sum_{i=1}^r(a_i+1)\le a_1+1$, which means that the $a_i$ are ``balanced''.
\end{rem} We conclude this section by showing how the existence of a unique decomposition determines the birational geometry of the varieties parametrizing higher rank decompositions. The following is just a slight generalization of \cite[Theorem 4.4]{MM}. \begin{thm} \label{thm:unirat} Let $X\subset\p^N$ be such that the $k$-secant map $\pi_k:\sec_k(X)\to \p^N$ is birational. Assume that $X$ is unirational. Then for $p\in \p^N$ general the variety $\pi_h^{-1}(p)$ is unirational for any $h\geq k$; in particular it is irreducible. \end{thm} \begin{proof} Let $p\in \p^N$ be a general point; then for $h>k$ we have $\dim \pi_h^{-1}(p)=h(\dim X+1)-1-N=(h-k)(\dim X+1)$. Note that, for $q\in \p^N$ general, a general point $x\in\pi^{-1}_h(q)$ is uniquely associated to a set of $h$ points $\{x_1,\ldots,x_h\}\subset X$ and an $h$-tuple $(\lambda_1,\ldots,\lambda_h)\in{\mathbb{C}^h}$ with the requirement that $$q=\sum\lambda_ix_i .$$ Therefore the birationality of $\pi_k$ allows one to associate to a general point $q\in \p^N$ its unique decomposition as a sum of $k$ factors. That is, $\pi_k^{-1}(q)=(q,[\Lambda_k(q)])$ for a general point $q\in\p^N$. Via this identification we may define a map $$\psi_h:(X\times\p^1)^{h-k}\map\pi^{-1}_h(p) $$ given by \begin{eqnarray*}(x_1,\lambda_1,\ldots,x_{h-k},\lambda_{h-k})\mapsto (p,[\langle x_1,\ldots,x_{h-k},\Lambda_k(p-\lambda_1x_1-\ldots-\lambda_{h-k}x_{h-k})\rangle]). \end{eqnarray*} The map $\psi_h$ is clearly generically finite, of degree ${{h}\choose{n+1}}$, and dominant. This is sufficient to show the claim. \end{proof} Theorem \ref{thm:unirat} applies to all the cases where a unique minimal Waring decomposition exists. \begin{cor} Let $f=(f_1,\ldots,f_r)$ be a vector of general homogeneous forms. If $f$ has a unique Waring decomposition of rank $k$, then the set of decompositions of rank $h>k$ is parametrized by a unirational variety.
\end{cor} \begin{rem} Let us go back to our starting example (\ref{eq:2quadrics}) and specialize $f_1=\sum_{i=0}^nx_i^2$ to the euclidean quadratic form. Then any minimal Waring decomposition of $f_1$ consists of $n+1$ orthogonal summands, with respect to the euclidean form. It follows that the decomposition (\ref{eq:2quadrics}) is equivalent to the diagonalization of $f_2$ with orthogonal summands. Over the reals, this is possible for any $f_2$ by the Spectral Theorem. Also Roberts' Theorem, see (v) of (\ref{eq:classiclist}), has a similar interpretation. If $f_1=x_0^2+x_1^2+x_2^2$ and $f_2\in{\mathrm Sym}^3\mathbb{C}^3$ is general, the unique Waring decomposition of the pair $(f_1,f_2)$ consists in choosing four representatives of lines $\{l_1,\ldots, l_4\}$ and scalars $\lambda_1,\ldots, \lambda_4$ such that \begin{equation}\label{eq:robertortho} \left\{\begin{array}{rcl}f_1&=&\sum_{i=1}^4l_i^2\\ \hspace{0.2cm} \\ f_2&=&\sum_{i=1}^4\lambda_il_i^3\end{array}\right. \end{equation} Denote by $L$ the $3\times 4$ matrix whose $i$-th column is given by the coefficients of $l_i$. Then the first condition in (\ref{eq:robertortho}) is equivalent to the equation \begin{equation}\label{eq:llt}LL^t=I.\end{equation} This equation generalizes orthonormal bases, and the columns of $L$ form a {\it Parseval} frame, according to \cite{CMS} \S 2.1. So Roberts' Theorem states that the general ternary cubic has a unique decomposition consisting of a Parseval frame. In general a Parseval frame for a field $F$ is given by $\{l_1,\ldots, l_n\}\subset F^d$ such that the corresponding $d\times n$ matrix $L$ satisfies the condition $LL^t=I$. This is equivalent to the equation $\sum_{i=1}^n(\sum_{j=1}^d l_{ji}x_j)^2=\sum_{i=1}^dx_i^2$, so again to a Waring decomposition with $n$ summands of the euclidean form in $F^d$. This connects our paper with \cite{ORS}, which studies frames in the setting of secant varieties and tensor decomposition.
For example, equation (7) in \cite{ORS} defines a solution to (\ref{eq:llt}) with the additional condition that the four columns have unit norm. Note that equation (8) in \cite{ORS} defines a Waring decomposition of the pair $(f_1, T)$. Unfortunately the additional unit-norm condition does not allow us to transfer directly the results of \cite{ORS} to our setting, but we believe this connection deserves to be pushed further. It is interesting to notice that the decomposition of the moments $M_2$ and $M_3$ in \cite[\S 3]{AGHKT} is a (simultaneous) Waring decomposition of the quadric $M_2$ and the cubic $M_3$. \end{rem} \section{Computational approach}\label{sec:compapp} In this section we describe how to approach Question 1 and Question 2, introduced in \S\ref{sec:intr}, from the computational point of view.\\ \indent With the aid of the Bertini \cite{Be}, \cite{BHSW} and Macaulay2 \cite{M2} software systems, we can construct algorithms, based on homotopy continuation techniques and monodromy loops, that, in the spirit of \cite{HOOS}, yield with high probability the number of Waring decompositions of a generic polynomial vector $ f = (f_{1}, \ldots, f_{r}) \in {\mathrm Sym}^{a_1} \mathbb{C}^{n+1} \oplus \ldots \oplus {\mathrm Sym}^{a_r} \mathbb{C}^{n+1} $. Precisely, given $ n,r,a_{1}, \ldots, a_{r}, k \in \mathbb{N} $ satisfying (\ref{eq:perfect}) and coordinates $ x_{0}, \ldots, x_{n} $, we focus on the polynomial system \begin{equation}\label{eq:polsys} \left\{ \begin{array}{l} f_{1} = \lambda_{1}^{1}\ell_{1}^{a_{1}}+ \ldots + \lambda_{k}^{1}\ell_{k}^{a_{1}} \\ \quad \, \, \, \vdots \\ f_{r} = \lambda_{1}^{r}\ell_{1}^{a_{r}}+ \ldots + \lambda_{k}^{r}\ell_{k}^{a_{r}}\\ \end{array} \right. \end{equation} where $ f_{j} \in {\mathrm Sym}^{a_j} \mathbb{C}^{n+1} $ is a fixed general element, while $ \ell_{i} = x_{0}+ \sum_{h=1}^{n}l_{h}^{i}x_{h} \in \mathbb{P}(\mathbb{C}^{\vee}) $ and $ \lambda_{i}^{j} \in \mathbb{C} $ are unknown.
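In a toy perfect case the system (\ref{eq:polsys}) can be written down explicitly. The sketch below (Python with sympy; the sample values in `start` are our arbitrary choice) sets it up for $n=1$, $r=2$, $(a_1,a_2)=(2,2)$, $k=2$, where matching coefficients yields $k(r+n)=6$ equations in $6$ unknowns, and verifies that the startpoint used to generate $\overline{f}$ solves it:

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1')
u1, u2, m11, m21, m12, m22 = sp.symbols('u1 u2 m11 m21 m12 m22')
ell = [x0 + u1*x1, x0 + u2*x1]      # monic unknown linear forms ell_1, ell_2

# Hypothetical start parameters: a chosen decomposition, from which the
# fixed general pair (f1bar, f2bar) is computed.
start = {u1: 2, u2: -3, m11: 1, m21: 4, m12: 5, m22: -2}
f1bar = sp.expand((m11*ell[0]**2 + m21*ell[1]**2).subs(start))
f2bar = sp.expand((m12*ell[0]**2 + m22*ell[1]**2).subs(start))

def coeffs(p):
    # identity principle: each binary quadric gives binom(a_j + n, n) = 3 conditions
    p = sp.expand(p)
    return [p.coeff(x0, 2 - i).coeff(x1, i) for i in range(3)]

# The square system F: k(r + n) = 6 equations in the 6 unknowns u_i, m_i^j.
F = coeffs(m11*ell[0]**2 + m21*ell[1]**2 - f1bar) \
  + coeffs(m12*ell[0]**2 + m22*ell[1]**2 - f2bar)
residual = [sp.expand(e.subs(start)) for e in F]
print(len(F), residual)             # 6 [0, 0, 0, 0, 0, 0]
```

Since $a_1+1\geq k$ here, Theorem \ref{thm:binforms0} predicts that any other solution of the system differs from `start` only by swapping the two summands.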
By expanding the expressions on the right-hand side of (\ref{eq:polsys}) and applying the identity principle for polynomials, the $j$-th equation of (\ref{eq:polsys}) splits into $ {{a_{j}+n} \choose {n}} $ conditions. Our aim is to compute the number of solutions of $ F_{(f_{1}, \ldots, f_{r})}([l_{1}^{1}, \ldots, l_{n}^{1}, \lambda_{1}^{1}, \ldots, \lambda_{1}^{r}], \ldots \ldots, [l_{1}^{k}, \ldots, l_{n}^{k}, \lambda_{k}^{1}, \ldots, \lambda_{k}^{r}]) $, the square nonlinear system of order $ k(r+n)$ arising from the equivalent version of (\ref{eq:polsys}) in which, in each equation, all the terms are moved to one side of the equality. In practice, to work with general $ f_{j} $'s, we assign random complex values $ \overline{l}_{h}^{i} $, $ \overline{\lambda}_{i}^{j} $ to $ l_{h}^{i} $, $ \lambda_{i}^{j} $ and, by means of $ F_{(f_{1}, \ldots, f_{r})} $, we compute the corresponding $ \overline{f}_{1}, \ldots, \overline{f}_{r} $, the coefficients of which are so-called \emph{start parameters}. In this way, we know a solution $ ([\overline{l}_{1}^{1}, \ldots,\overline{l}_{n}^{1}, \overline{\lambda}_{1}^{1}, \ldots, \overline{\lambda}_{1}^{r}], \ldots \ldots, [\overline{l}_{1}^{k}, \ldots,\overline{l}_{n}^{k}, \overline{\lambda}_{k}^{1}, \ldots, \overline{\lambda}_{k}^{r}]) \in \mathbb{C}^{k(r+n)}$ of $ F_{(\overline{f}_{1}, \ldots, \overline{f}_{r})} $, i.e. a Waring decomposition of $ \overline{f} = (\overline{f}_{1}, \ldots, \overline{f}_{r}) $, which is called a \emph{startpoint}. Then we consider $ F_{1} $ and $ F_{2} $, two square polynomial systems of order $ k(n+r) $ obtained from $ F_{(\overline{f}_{1}, \ldots, \overline{f}_{r})} $ by replacing the constant terms with random complex values.
We therefore construct three segment homotopies $$ H_{i} : \mathbb{C}^{k(r+n)} \times [0,1] \to \mathbb{C}^{k(r+n)} $$ for $ i \in \{0,1,2\} $: $H_{0}$ between $ F_{(\overline{f}_{1}, \ldots, \overline{f}_{r})} $ and $ F_{1} $, $ H_{1} $ between $ F_{1} $ and $ F_{2} $, $ H_{2} $ between $ F_{2} $ and $ F_{(\overline{f}_{1}, \ldots, \overline{f}_{r})} $. Through $ H_{0} $, we get a \emph{path} connecting the startpoint to a solution of $ F_{1} $, called an \emph{endpoint}, which therefore becomes a startpoint for the second step given by $ H_{1} $, and so on. At the end of this loop, we check whether the output is a Waring decomposition of $ \overline{f} $ different from the starting one. If this is not the case, this procedure suggests that the case under investigation is identifiable; otherwise we iterate this technique with these two \emph{startpoints}, and so on. If at a certain point the number of solutions of $ F_{(\overline{f}_{1}, \ldots, \overline{f}_{r})} $ stabilizes, then, with high probability, we know the number of Waring decompositions of a generic polynomial vector in $ {\mathrm Sym}^{a_1} \mathbb{C}^{n+1} \oplus \ldots \oplus {\mathrm Sym}^{a_r} \mathbb{C}^{n+1} $. \\ We have implemented the homotopy continuation technique both in the software Bertini \cite{Be}, suitably coordinated with Matlab, and in the software Macaulay2, with the aid of the package \emph{Numerical Algebraic Geometry} \cite{KL}. \\ \indent Before starting with this computational analysis, we need to check that the variety $ \mathbb{P}(\mathcal{O}_{\mathbb{P}^{n}}(a_{1}) \oplus \ldots \oplus \mathcal{O}_{\mathbb{P}^{n}}(a_{r})) $, introduced in \S\ref{sec:secant}, is not $ k $-defective; if it is, (\ref{eq:polsys}) has no solutions.
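The loop of segment homotopies can be illustrated in miniature. The following self-contained Python sketch (ours, not the Bertini/Macaulay2 implementation, and only for a univariate toy family) tracks a solution of $z^3=c$ along a segment homotopy in the parameter $c$ with a Newton corrector at each step; composing three such trackings realizes one monodromy loop, which may return a different solution of the start system.

```python
import random

def track(root, c0, c1, steps=200):
    """Follow a root of F_t(z) = z**3 - ((1-t)*c0 + t*c1) along the
    segment homotopy from t=0 to t=1, correcting with a few Newton
    iterations at every step."""
    z = root
    for i in range(1, steps + 1):
        c = c0 + (c1 - c0) * i / steps
        for _ in range(5):                       # Newton corrector
            z = z - (z**3 - c) / (3 * z**2)
    return z

random.seed(1)
c1 = complex(random.random(), random.random())
c2 = complex(random.random(), random.random())
start = 1.0 + 0j                                 # a known root of z^3 - 1
# one monodromy loop: start system -> F1 -> F2 -> start system
loop = track(track(track(start, 1, c1), c1, c2), c2, 1)
# the endpoint is again a cube root of 1, possibly a different one:
# this is how monodromy loops discover new solutions
assert abs(loop**3 - 1) < 1e-8
```

In the actual computation the same loop is run on the square system $F_{(\overline{f}_1,\ldots,\overline{f}_r)}$ in $\mathbb{C}^{k(r+n)}$, with the constant terms playing the role of the parameter $c$.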
In order to do that, by using Macaulay2, we can construct a probabilistic algorithm, based on Theorem \ref{th:terracini}, that computes the dimension of the span of the affine tangent spaces to $ \mathbb{P}(\mathcal{O}_{\mathbb{P}^{n}}(a_{1}) \oplus \ldots \oplus \mathcal{O}_{\mathbb{P}^{n}}(a_{r})) $ at $ k $ random points; we can then apply semicontinuity properties. \\ \indent In the following table we summarize the results we are able to obtain combining numerical and theoretical approaches. Our technique is as follows. We first apply the probabilistic algorithm described above, checking $ k $-defectivity. If this suggests a positive $k$-defect $ \delta_{k} $, we do not pursue the computational approach. When $ \delta_{k} $ is zero, we apply the homotopy continuation technique. If the number of decompositions (up to order of summands) stabilizes to a number, $ \symbol{35}_{k} $, we indicate it. If the homotopy technique does not stabilize to a fixed number, we apply degeneration techniques as in \S\ref{sec:ternary} to get a lower bound. If everything fails, we put a question mark. Bold degrees are the ones obtained via theoretical arguments.\\ \begin{longtable}{|c|c|l|c|c|c|} \hline \multicolumn{1}{|c|}{$r$} & \multicolumn{1}{|c|}{$n$}& \multicolumn{1}{|l|}{ $(a_{1},\ldots,a_{r})$}& \multicolumn{1}{|c|}{$k$}& \multicolumn{1}{|c|}{$\delta_k$}& \multicolumn{1}{|c|}{$\symbol{35}_{k}$}\\ \hline \endhead \hline \endfoot $2$ &$2$ &$(4,5)$ & $ 9 $ &0 &$ 3 $ \\ $2$ &$2$ &$(6,6)$ & $ 14 $ &0 & $ \geq 2 $ \\ $2$ &$2$ &$(6,7)$ & $ 16 $ &0& $ \geq 8 $\\ $2$ &$3$ &$(2,4)$ & $ 9$ &$2$& \\ $3$ &$2$ &$(2,2,6)$ & $ 8 $ & $4$& \\ $3$ &$2$ &$(3,3,4)$ & $ 7 $ & 0&${\bf 1} $ \\ $3$ &$2$ & $(3,4,4)$ & $ 8 $ & 0&$ 4 $ \\ $3$ &$2$ & $(5,5,6)$ & $ 14 $ & 0&$ 205$ \\ $3$ &$3$ & $(3,3,3)$ & $ 10 $ & 0&$ 56 $ \\ $4$ &$2$ & $(2,2,4,4)$ & $ 7 $ & $2$& \\ $4$ &$2$ & $(2,3,3,3)$ & $ 6 $ & 0&$ 2$ \\ $4$ &$2$ & $(4,\ldots,4)$ & $ 10 $ &0 & $ ?$ \\ $5$ &$2$ & $(5,\ldots,5,6)$ & $ 16 $ &0 & $ ?
$\\ $6$ &$2$ & $(2,\ldots,2,3)$& $ 5 $ & $3$ & \\ $6$ &$4$ & $(2,\ldots,2)$ & $ 9 $ & 0&$ 45 $ \\ $7$&$3$&$(2,\ldots,2)$&$7$&0&$\bf 8$\\ $8$&$2$&$(3,\ldots,3)$&$8$&$0$&$\bf 9$\\ $8$&$2$&$(2,\ldots,2,6)$&$7$&$ 7$& \\ $11$&$4$&$(2,\ldots,2)$&$11$&$0$& ${\bf 4368}$\\ $13$&$2$&$(4,\ldots,4)$&$13$&$0$& ${\bf 560}$\\ $15$&$2$&$(4,\ldots,4,6)$&$14$& $6$& \\ $17$&$3$&$(3,\ldots,3)$&$17$& $0$& $ {\bf 8436285}$\\ $19$&$2$&$(5,\ldots,5)$&$19$& $0$& ${\bf 177100}$\\ $26$&$2$&$(6,\ldots,6)$&$26$& $0$& ${\bf 254186856}$ \\ \end{longtable} \section{Identifiability of pairs of ternary forms}\label{sec:ternary} In this section we aim to study the identifiability of pairs of ternary forms. In particular we study the special case of two forms of degree $a$ and $a+1$. Our main result is the following \begin{thm} \label{th:main_identifi} Let $a$ be a positive integer. Then a general pair of ternary forms of degree $a$ and $a+1$ is identifiable if and only if $a=2$. Moreover there are finitely many decompositions if and only if $a=2t$ is even, and for such an $a$ the number of decompositions is at least $$ \frac{(3t-2)(t-1)}2+1. $$ \end{thm} The theorem has two directions: on the one hand we need to prove that for $a=2$ a general pair is identifiable; on the other we need to show that for $a>2$ a general pair is never identifiable. The former is a classical result we already recalled in (iii) of (\ref{eq:classiclist}) and in Theorem \ref{thm:nonabelian_applied2}. For the latter observe that $\dim\sec_k(X)=4k-1$; therefore if either $4k-1<N$ or $4k-1>N$ the general pair is never identifiable. We are left to consider the perfect case $N=4k-1$. Under this assumption we may further assume that $X$ is not $k$-defective (otherwise non-identifiability is immediate); we will prove in Remark~\ref{rem:non_defective} that this is always the case. Hence the core of the question is to study generically finite maps $$\pi_k:\sec_k(X)\to\p^N,$$ with $4k=(a+2)^2$. This yields our last numerical constraint, namely that $a=2t$ needs to be even.
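The numerical constraints just derived can be verified mechanically; the following Python sketch (function names are ours) checks that $4k=(a+2)^2$ has an integer solution precisely for even $a=2t$, evaluates the lower bound of Theorem~\ref{th:main_identifi} for small $t$, and confirms the bookkeeping $b+c=k-1$ for the point configurations used later in this section.

```python
from math import comb

def perfect_k(a):
    """Number of summands k in the perfect case for a pair of ternary
    forms of degrees a and a+1: 4k = C(a+2,2) + C(a+3,2) = (a+2)**2."""
    total = comb(a + 2, 2) + comb(a + 3, 2)
    assert total == (a + 2) ** 2
    return total // 4 if total % 4 == 0 else None

for a in range(1, 20):
    # 4k = (a+2)^2 has an integer solution iff a = 2t is even
    assert (perfect_k(a) is not None) == (a % 2 == 0)

def lower_bound(t):
    return (3 * t - 2) * (t - 1) // 2 + 1

assert lower_bound(1) == 1   # a = 2: consistent with identifiability
assert lower_bound(2) == 3   # a = 4: matches the entry 3 for (4,5)
assert lower_bound(3) == 8   # a = 6: matches the bound >= 8 for (6,7)

# configuration sizes b, c used in the degeneration argument below
for t in range(1, 10):
    k = perfect_k(2 * t)
    b, c = t * (t + 3) // 2, t * (t + 1) // 2
    assert b + c == k - 1
    assert (2 * t + 1) ** 2 - 4 * b - c == (3 * t - 2) * (t - 1) // 2
```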
The first step is borrowed from \cite{Me, Me1}, and it is a slight generalization of \cite[Theorem 2.1]{Me}, see also \cite{CR}. \begin{thm}\label{th:birational_tangent_proj} Let $X\subset\p^N$ be an irreducible variety of dimension $n$. Assume that the natural map $\sigma:\sec_{k}(X)\to\p^N$ is dominant and generically finite of degree $d$. Let $z\in\Sec_{k-1}(X)$ be a general point. Consider $\f:\p^N\dasharrow\p^n$ the projection from the embedded tangent space $\T_z\Sec_{k-1}(X)$. Then $\f_{|X}:X\dasharrow\p^n$ is dominant and generically finite of degree at most $d$. \label{th:proj} \end{thm} \begin{proof} Choose a general point $z$ on a general $(k-1)$-secant linear space $\langle p_1,\ldots,p_{k-1}\rangle$. Let $f:Y\to \p^N$ be the blow up of $\sec_{k-1}(X)$ with exceptional divisor $E$, and fiber $F_z=f^{-1}(z)$. Let $y\in F_z$ be a general point. This point uniquely determines a linear space $\Pi$ of dimension $(k-1)(n+1)$ that contains $\T_z\sec_{k-1}(X)$. Then the projection $\f_{|X}:X\dasharrow \p^n$ is generically finite of degree $d$ if and only if $(\Pi\setminus \T_z\sec_{k-1}(X))\cap X$ consists of just $d$ points. Assume that $\{x_1,\ldots, x_a\}\subset (\Pi\setminus \T_z\sec_{k-1}(X))\cap X$. By the Terracini Lemma, Theorem \ref{th:terracini}, $$\T_z\sec_{k-1}(X)= \langle \T_{p_1}X,\ldots, \T_{p_{k-1}}X\rangle.$$ Consider the linear spaces $\Lambda_i=\langle x_i, p_1,\ldots,p_{k-1}\rangle$. The Trisecant Lemma, see for instance \cite[Proposition 2.6]{ChCi}, yields $\Lambda_i\neq\Lambda_j$, for $i\neq j$. Let $\Lambda_i^Y$ and $\Pi^Y$ be the strict transforms on $Y$. Since $z\in\langle p_1,\ldots,p_{k-1}\rangle$ and $y=\Pi^Y\cap F_z$, the strict transform $\Lambda_i^Y$ contains the point $y\in F_z$. In particular we have $$\Lambda_i^Y\cap\Lambda_j^Y\neq\emptyset.$$ Let $\pi_1:\Sec_k(X)\to\p^N$ be the morphism from the abstract secant variety, and $\mu:U\to Y$ the induced morphism. That is, $U=\Sec_k(X)\times_{\p^N} Y$.
Then there exists a commutative diagram $$\diagram U\dto_{p}\rto^{\mu}&Y\dto^{f}\\ \Sec_k(X)\rto^{\pi_1}&\p^N\enddiagram$$ Let $\lambda_i$ and $\Lambda^U_i$ be the strict transforms of $\Lambda_i$ in $\Sec_k(X)$ and $U$ respectively. By Remark \ref{re:vuoto} $\lambda_i\cap \lambda_j=\emptyset$, so that $$\Lambda^U_i\cap \Lambda^U_j=\emptyset.$$ This proves that $\sharp{\mu^{-1}(y)}\geq a$. But $y$ is a general point of a divisor in the normal variety $Y$. Therefore $\deg\mu$, and hence $\deg\pi_1$, is at least $a$. \end{proof} To apply Theorem~\ref{th:birational_tangent_proj} we need to better understand $X$ and its tangential projections. By definition we have $$X\simeq\p((\o_{\p^2}(-1)\oplus\o_{\p^2})\otimes\pi^*\o_{\p^2}(a+1))$$ then $X\subset\p^N$ can be seen as the embedding of $\p^3$ blown up in one point $q$, embedded by monoids of degree $a+1$ with vertex $q$. That is, let $\L=|\I_{q^a}(a+1)|\subset|\o_{\p^3}(a+1)|$, and $Y=Bl_q\p^3$; then $$X=\f_\L(Y)\subset\p^N.$$ It is now easy, via the Terracini Lemma, to realize that the restriction of the tangential projection $\f_{|X}:X\dasharrow\p^3$ is given by the linear system $$\mathcal{H}=|\I_{q^a\cup p_1^2\cup\ldots\cup p_{k-1}^2}(a+1)|\subset|\o_{\p^3}(a+1)|.$$ We already assumed that $X$ is not $k$-defective, that is, we work under the condition \noindent$(\dag)$\hspace{5cm} $\dim\mathcal{H}=3.$ \begin{rem} It is interesting to note that for $a=2$ the map $\f_{|X}$ is the standard Cremona transformation of $\p^3$ given by $(x_0,\ldots,x_3)\mapsto (1/x_0,\ldots,1/x_3)$. \end{rem} Let us work out a preliminary lemma, which we prove for lack of an adequate reference. \begin{lem} \label{lem:birational_degeneration} Let $\Delta$ be a complex disk around the origin, $X$ a variety and $\o_X(1)$ a base point free line bundle. Consider the product $V=X\times \Delta$, with the natural projections, $\pi_1$ and $\pi_2$. Let $V_t=X\times\{t\}$ and $\o_V(d)=\pi_1^*(\o_{X}(d))$.
Fix a configuration $p_1,\ldots,p_l$ of $l$ points on $V_0$ and let $\sigma_i:\Delta\to V$ be sections such that $\sigma_i(0)=p_i$ and $\{\sigma_i(t)\}_{i=1,\ldots,l}$ are general points of $V_t$ for $t\neq 0$. Let $P=\cup_{i=1}^l\sigma_i(\Delta)$, and $P_t=P\cap V_t$. Consider the linear system $\mathcal{H}=|\o_{V}(d)\otimes\I_{P^2}|$ on $V$, with $\mathcal{H}_t:=\mathcal{H}_{|V_t}$. Assume that $\dim\mathcal{H}_0=\dim\mathcal{H}_t=\dim X$, for $t\in\Delta$. Let $d(t)$ be the degree of the map induced by $\mathcal{H}_t$. Then $d(0)\leq d(t)$. \end{lem} \begin{proof} If, for $t\neq 0$, $\f_{\mathcal{H}_t}$ is not dominant the claim is clear. Assume that $\f_{\mathcal{H}_t}$ is dominant for $t\neq 0$. Then $\f_{\mathcal{H}_t}$ is generically finite and $\deg\f_{\mathcal{H}_t}(X)=1$, for $t\neq 0$. Let $\mu:Z\times\Delta\to V$ be a resolution of the base locus, $V_{Zt}=\mu^*V_t$, and $\mathcal{H}_{Z}=\mu^{-1}_*\mathcal{H}$ the strict transform linear system on $Z$. Then $V_{Zt}$ is a blow up of $V_t=X$, for $t\neq 0$, and $V_{Z0}=\mu^{-1}_*V_{0}+R$, for some effective, possibly trivial, residual divisor $R$. By hypothesis $\mathcal{H}_0$ is the flat limit of $\mathcal{H}_t$, for $t\neq 0$. Hence flatness forces $$d(t)=\mathcal{H}_Z^{\dim X}\cdot V_{Zt}=\mathcal{H}_Z^{\dim X}\cdot(\mu^{-1}_*V_{0}+R)\geq \mathcal{H}_Z^{\dim X}\cdot \mu^{-1}_*V_{0}=d(0).$$ \end{proof} Lemma~\ref{lem:birational_degeneration} allows us to work on a degenerate configuration to study the degree of the map induced by $|\I_{q^a\cup p_1^2\cup\ldots\cup p_{k-1}^2}(a+1)|\subset|\o_{\p^3}(a+1)|$. \begin{lem} \label{lem:degeneration_ok} Let $H\subset \p^3\setminus\{q\}$ be a plane, $B:=\{p_1,\ldots,p_b\}\subset H$ a set of $b:=\frac{1}{2}t(t+3)$ general points, and $C:=\{x_1,\ldots,x_c\}\subset\p^3\setminus(\{q\}\cup H)$ a set of $c:=\frac{1}{2}t(t+1)$ general points.
Let $a=2t$ and let $$\mathcal{H}:=|\I_{q^a\cup C^2\cup B^2}(a+1)|\subset|\o_{\p^3}(a+1)|$$ be the linear system of monoids with vertex $q$ and double points along $B\cup C$, and $\f_\mathcal{H}$ the associated map. Then $\dim \mathcal{H}=3$ and $$\deg\f_{\mathcal{H}}> \frac{(3t-2)(t-1)}2.$$ \end{lem} \begin{proof} Note that by construction the lines $\Span{q,p_i}$ and $\Span{q,x_i}$ are contained in the base locus of $\mathcal{H}$. Let us start by computing $\dim\mathcal{H}$. First we prove that there is a unique element in $\mathcal{H}$ containing the plane $H$. \begin{claim}\label{cl:1} $|\mathcal{H}-H|=0$. \end{claim} \begin{proof}[Proof of the Claim] Let $D\in\mathcal{H}$ be such that $D=H+R$ for a residual divisor $R\in|\o_{\p^3}(a)|$. Then $R$ is a cone with vertex $q$ over a plane curve $\Gamma\subset H$. Moreover $R$ is singular along $C$ and has to contain $B$. This forces $\Gamma$ to contain $B$ and to be singular at the points $\Span{q,x_i}\cap H$, for $i=1,\ldots,c$. In other words $\Gamma$ is a plane curve of degree $2t$ with $c= \frac{1}{2}t(t+1)$ general double points and passing through $b= \frac{1}{2}t(t+3)$ general points. Note that $$\binom{2t+2}2-3c-b=1.$$ It is well known, see for instance \cite{AH}, that the $c$ points impose independent conditions on plane curves of degree $2t$. Clearly the latter $b$ simple points do the same; therefore there is a unique plane curve $\Gamma$ satisfying the requirements. This shows that $R$ is unique and in conclusion the claim is proved. \end{proof} We are ready to compute the dimension of $\mathcal{H}$. \begin{claim}\label{cl:2} $\dim\mathcal{H}=3$. \end{claim} \begin{proof}[Proof of the Claim] The expected dimension of $\mathcal{H}$ is 3. Then by Claim~\ref{cl:1} it is enough to show that $\dim\mathcal{H}_{|H}=2$. To do this observe that $\mathcal{H}_{|H}$ is a linear system of plane curves of degree $2t+1$ with $b$ general double points and $c$ simple general points.
As in the proof of Claim~\ref{cl:1} we compute the expected dimension $$\binom{2t+3}2-3b-c=3,$$ and conclude by \cite{AH}. \end{proof} Next we want to determine the base locus scheme of $\mathcal{H}_{|H}$. Let $\epsilon:S\to H$ be the blow up of $B$ and of the points $\Span{q,x_i}\cap H$, with $\mathcal{H}_S$ the strict transform linear system. We will first prove the following. \begin{claim} The scheme base locus of $|\I_{B^2}(2t+1)|\subset|\o_{\p^2}(2t+1)|$ is $B^2$. \end{claim} \begin{proof} Let $\mathcal{L}_{ij}:=|\I_{B\setminus\{p_i,p_j\}}(t)|\subset|\o_{\p^2}(t)|$, then $$\dim \mathcal{L}_{ij}=\binom{t+2}2-(b-2)-1=2.$$ By the Trisecant Lemma, see for instance \cite[Proposition 2.6]{ChCi}, we conclude that $$\Bs\mathcal{L}_{ij}=B\setminus\{p_i,p_j\}.$$ Let $\Gamma_i, \Gamma_j\in\mathcal{L}_{ij}$ be such that $\Gamma_i\ni p_i$ and $\Gamma_j\ni p_j$. Then by construction we have $$D_{ij}:=\Gamma_i+\Gamma_j+\Span{p_i,p_j}\in|\I_{B^2}(2t+1)|.$$ Let $D_{ijS}$, $\mathcal{L}_{ijS}$ be the strict transforms on $S$. Note that $\Gamma_h$ belongs to a pencil of curves in $\mathcal{L}_{hk}$ for any $k$. These pencils do not have a common base locus outside $B$ since $\mathcal{L}_{ijS}$ is base point free and $\dim \mathcal{L}_{ij}=2$. Therefore the $D_{ijS}$ have no common base locus. \end{proof} \begin{claim} $\mathcal{H}_{S}$ is base point free. \end{claim} \begin{proof} To prove the Claim it is enough to prove that the simple base points associated to $C$ impose independent conditions. Since $C\subset\p^3$ is general this is again implied by the Trisecant Lemma.
\end{proof} Then we have $$\deg\f_{\mathcal{H}_{S}}=\mathcal{H}_{S}^2=(2t+1)^2-4b-c=\frac{(3t-2)(t-1)}2.$$ To conclude observe that, by the same argument as in the claims, we can prove that $\f_{\mathcal{H}|R}$ is generically finite; therefore $$\deg\f_{\mathcal{H}}>\deg\f_{\mathcal{H}|H}=\deg\f_{\mathcal{H}_{S}}=(2t+1)^2-4b-c=\frac{(3t-2)(t-1)}2. $$ \end{proof} \begin{rem}\label{rem:non_defective} Lemma~\ref{lem:degeneration_ok} proves that $\deg\f_\mathcal{H}$ is finite. Hence as a byproduct we get that condition $(\dag)$ is always satisfied in our range. That is, $X$ is not $k$-defective for $a=2t$. \end{rem} \begin{proof}[Proof of Theorem~\ref{th:main_identifi}] We already know that the number of decompositions is finite only if $a=2t$. By Remark~\ref{rem:non_defective} we conclude that the number is indeed finite when $a=2t$. Let $d$ be the number of decompositions of a general pair. Then by Theorem~\ref{th:birational_tangent_proj} we know that $d\geq \deg\f$, where $\f:X\dasharrow \p^3$ is the tangential projection. The required bound is obtained combining Lemma~\ref{lem:birational_degeneration} and Lemma~\ref{lem:degeneration_ok}. \end{proof} \end{document}
\begin{document} \title[Population models at stochastic times]{Population models at stochastic times} \author{Enzo Orsingher} \email{[email protected]} \author{Costantino Ricciuti} \email{[email protected]} \author{Bruno Toaldo} \email{[email protected]} \address{Department of Statistical Sciences, Sapienza - University of Rome} \keywords{Non-linear birth processes, sublinear and linear death processes, sojourn times, fractional birth processes, random time} \date{\today} \subjclass[2010]{60G22; 60G55} \begin{abstract} In this article, we consider time-changed models of population evolution $\mathcal{X}^f(t)=\mathcal{X}(H^f(t))$, where $\mathcal{X}$ is a counting process and $H^f$ is a subordinator with Laplace exponent $f$. In the case where $\mathcal{X}$ is a pure birth process, we study the form of the distribution, the intertimes between successive jumps and the condition of explosion (also in the case of killed subordinators). We also investigate the case where $\mathcal{X}$ represents a death process (linear or sublinear) and study the extinction probabilities as a function of the initial population size $n_0$. Finally, the subordinated linear birth-death process is considered. Special attention is devoted to the case where birth and death rates coincide; the sojourn times are also analysed. \end{abstract} \maketitle \section{Introduction} Birth and death processes can be applied in modelling many dynamical systems, such as cosmic showers, fragmentation processes, queueing systems, epidemics, population growth and aftershocks in earthquakes. The time-changed version of such processes has also been analysed, since it is useful to describe the dynamics of various systems when the underlying environmental conditions randomly change. For example, the fractional birth and death processes, studied in \citet{orspolber3, orspolber4, orspolber1, orspolber2}, are time-changed processes where the distribution of the time is related to fractional diffusion equations.
On this point consult \citet{cahoy, cahoypol} for some applications and simulations. In this paper, we consider the case where the random time is a subordinator. Actually, subordinated Markov processes have been extensively studied since the Fifties. The case of birth and death processes, however, merits further investigation, and this is the aim of the present paper. We consider here compositions of point processes $\mathcal{X}(t)$, $t>0$, with an arbitrary subordinator $H^f(t)$ related to a Bern\v{s}tein function $f$. We denote such processes as $\mathcal{X}^f(t)= \mathcal{X}(H^f(t))$. The general form of $f$ is as follows: \begin{align} f(x)= \alpha+\beta x+\int _0^\infty (1-e^{-xs}) \nu(ds) \qquad \alpha \geq 0, \beta \geq 0, \label{Bernstein} \end{align} where $\nu$ is the L\'evy measure satisfying \begin{align} \int_0^\infty (s \wedge 1 ) \nu(ds) < \infty. \label{misura di Levy} \end{align} In this paper we refer to the case $\alpha=\beta=0$, unless explicitly stated. The structure of the paper is as follows: section 2 treats the subordinated non-linear birth process; section 3 deals with the subordinated linear and sublinear death processes; section 4 analyses the linear birth-death process, with particular attention to the case where birth and death rates coincide. In all three cases, we compute directly the state probabilities by means of the composition formula \begin{align} \Pr \ll \mathcal{X}^f(t)=k \rr = \int_0^{\infty} \Pr \ll \mathcal{X}(s)=k \rr \Pr \ll H^f(t) \in ds \rr. \end{align} Although most subordinators do not possess an explicit form of the probability density function, the distribution of $\mathcal{X}(H^f(t))$ always admits a closed form in terms of the Laplace exponent $f$. We also study the transition probabilities, both for finite and infinitesimal time intervals.
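The computations based on the composition formula rest on the identity $\mathbb{E}\, e^{-\mu H^f(t)}=e^{-t f(\mu)}$. As a quick numerical illustration (a sketch with our own function names, not part of the paper's derivations), one can check it for the gamma subordinator, whose L\'evy measure $\nu(ds)=e^{-\alpha s}s^{-1}\,ds$ gives, by the Frullani integral, $f(x)=\log(1+x/\alpha)$, and whose marginal law is Gamma with shape $t$ and rate $\alpha$.

```python
from math import exp, gamma, log

def lt_gamma_subordinator(mu, t, alpha, h=1e-4, smax=40.0):
    """Numerically compute E[exp(-mu * H(t))] for the gamma subordinator,
    whose marginal density is alpha**t * s**(t-1) * exp(-alpha*s) / Gamma(t),
    by a plain Riemann sum."""
    coef = alpha**t / gamma(t)
    total, s = 0.0, h
    while s < smax:
        total += exp(-mu * s) * coef * s**(t - 1) * exp(-alpha * s) * h
        s += h
    return total

def f(x, alpha):
    """Laplace exponent of the gamma subordinator, from
    nu(ds) = exp(-alpha*s)/s ds: f(x) = log(1 + x/alpha)."""
    return log(1 + x / alpha)

t, alpha, mu = 2.0, 1.5, 0.7
assert abs(lt_gamma_subordinator(mu, t, alpha) - exp(-t * f(mu, alpha))) < 1e-3
```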
We emphasize that the subordinated point processes have a fundamental difference with respect to the classical ones, in that they perform upward or downward jumps of arbitrary size. For infinitesimal time intervals, we provide a direct and simple proof of the following fact: \begin{align} \Pr \ll \mathcal{X}^f(t+dt)=k | \mathcal{X}^f(t)=r \rr = dt \int_0^{\infty} \Pr \ll \mathcal{X}(s)=k | \mathcal{X}(0)=r \rr \nu(ds), \qquad k \neq r, \label{14b} \end{align} which is related to Bochner subordination (see \cite{phillips}).\\ The first case taken into account is that of a non-linear birth process with birth rates $\lambda_k$, $k\geq 1$, which is denoted by $\mathcal{N}(t)$. The subordinated process $\mathcal{N}^f(t)$ does not explode if and only if the following condition is fulfilled: \begin{align} \sum_{j=1}^\infty \frac{1}{\lambda_j} \, = \, \infty. \end{align} This is the same non-explosion condition as in the classical case. Such a condition ceases to be true if we consider a Laplace exponent with $\alpha \neq 0$, which corresponds to the so-called killed subordinator. In this case, indeed, the process $\mathcal{N}^f(t)$ can explode in a finite time, even if $\mathcal{N}(t)$ does not; more precisely \begin{align} \Pr \ll \mathcal{N}^f(t)= \infty \rr = 1-e^{-\alpha t}. \end{align} We note that $\mathcal{N}^f(t)$ can be regarded as a process where upward jumps are separated by exponentially distributed time intervals $Y_k$ such that \begin{align} \Pr \ll Y_k>t | \mathcal{N}^f(T_{k-1})=r \rr= e^{-f(\lambda_r) t}, \end{align} where $T_{k-1}$ is the instant of the ($k-1$)-th jump. In section 3 we study the subordinated linear and sublinear death processes, which we denote respectively by $M^f(t)$ and $\mathbb{M}^f(t)$, with an initial number of components $n_0$. We emphasize that in the sublinear case the annihilation is initially slower, then accelerates when few survivors remain.
So, although $M^f(t)$ and $\mathbb{M}^f(t)$ have different state probabilities, we observe that the extinction probabilities coincide, and we prove that they decrease for increasing values of $n_0$. In section 4, the subordinated linear birth-death process $L^f(t)$ is considered. If the birth and death rates coincide and $H^f$ is a stable subordinator, we compute the mean sojourn time in each state and find, in some particular cases, the distribution of the intertimes between successive jumps. We finally study the probability density of the sojourn times by giving a sketch of the derivation of their Laplace transforms. \section{Subordinated non-linear birth process} We consider in this section the process $ \mathcal{N}^f(t)= \mathcal{N}(H^f(t))$, where $\mathcal{N}$ is a non-linear birth process with one progenitor and rates $\lambda _k$, $k \geq 1 $, and $H^f(t)$ is a subordinator independent of $\mathcal{N}(t)$. It is well known that the state probabilities of $\mathcal{N}(t)$ read \begin{align} \Pr \ll\mathcal{N}(t)=k| \mathcal{N}(0)=1 \rr = \, & \begin{cases} \prod_{j=1}^{k-1} \lambda _ j\sum_{m=1}^k \frac{e^{-\lambda_m t }}{ \prod_{l=1 , l \neq m}^k (\lambda_l-\lambda_m )}, \qquad & k>1, \\ e^{-\lambda_1 t }, & k=1. \end{cases} \end{align} The subordinated process $\mathcal{N}^f(t)$ thus possesses the following distribution: \begin{align} \Pr \ll \mathcal{N}^f(t)=k|\mathcal{N}^f(0)=1 \rr \, = \, & \int_0^\infty \Pr \ll \mathcal{N}(s)=k | \mathcal{N}(0)=1\rr \Pr \ll H^f(t) \in ds \rr \notag \\ \, = \, & \begin{cases} \prod_{j=1}^{k-1} \lambda _ j\sum_{m=1}^k \frac{e^{-t \, f(\lambda _m )}}{ \prod_{l=1 , l \neq m}^k (\lambda_l-\lambda_m )}, \qquad & k>1, \\ e^{-t f(\lambda_1) }, & k=1.
\end{cases} \label{non linear birth 1 progenitor} \end{align} The distribution \eqref{non linear birth 1 progenitor} can be easily generalised to the case of $r$ progenitors and reads \begin{align} \Pr \ll \mathcal{N}^f(t)=r+k | \mathcal{N}^f(0)=r \rr \, = \,\begin{cases} \prod_{j=r}^{r+k-1} \lambda _ j\sum_{m=r}^{r+k} \frac{e^{-tf(\lambda _m) }}{ \prod_{l=r , l \neq m}^{r+k} (\lambda_l-\lambda_m )}, \quad & k>0, \\ e^{-tf(\lambda_r) }, & k=0. \end{cases} \label{23} \end{align} The subordinated process $\mathcal{N}^f(t)$ is time-homogeneous and Markovian. So, the last formula permits us to write \begin{align} &\Pr \ll \mathcal{N}^f(t+dt)=r+k | \mathcal{N}^f(t)=r \rr \notag \\ = \, & \begin{cases} \prod_{j=r}^{r+k-1} \lambda _ j\sum_{m=r}^{r+k} \frac{1-dt f(\lambda_m )}{ \prod_{l=r , l \neq m}^{r+k} (\lambda_l-\lambda_m )}, \quad & k>0, \\ 1-dt f(\lambda_r), & k=0. \label{rprog} \end{cases} \end{align} To find an alternative expression for the transition probabilities we need the following lemma. \begin{lem} For any $k \geq 1$ and any sequence of $k+1$ distinct positive numbers $\lambda_r, \lambda_{r+1}, \ldots, \lambda _{r+k}$, the following relationship holds: \begin{align} c_{r,k}=\sum_{m=r}^{r+k} \frac{1}{ \prod_{l=r , l \neq m}^{r+k} (\lambda_l-\lambda_m )}=0. \label{Vandermonde} \end{align} \end{lem} \begin{proof} It is a consequence of \eqref{23} by letting $t \to 0$. An alternative proof can be obtained by suitably adapting the calculation in Theorem 2.1 of \cite{orspolber4}. \end{proof} We are now able to state the following theorem.
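(Before doing so, here is a quick numerical sanity check of the identity \eqref{Vandermonde}; the code and its names are ours, and the rates are arbitrary distinct positive numbers.)

```python
import random

def c_rk(lams):
    """Evaluate sum_m 1 / prod_{l != m} (lam_l - lam_m) for a list of
    distinct rates; by the Vandermonde-type identity this vanishes
    whenever the list has at least two entries."""
    total = 0.0
    for m, lam_m in enumerate(lams):
        prod = 1.0
        for l, lam_l in enumerate(lams):
            if l != m:
                prod *= (lam_l - lam_m)
        total += 1.0 / prod
    return total

random.seed(0)
for k in range(1, 6):
    lams = random.sample(range(1, 100), k + 1)   # k+1 distinct rates
    assert abs(c_rk(lams)) < 1e-9
```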
\begin{te} For $k>r$ the transition probability takes the form \begin{align} \Pr \ll \mathcal{N}^f(t+dt)=k| \mathcal{N}^f(t)=r \rr = dt\, \int _0 ^{\infty} \Pr \ll \mathcal{N}(s)=k|\mathcal{N}(0)=r \rr \nu (ds). \end{align} \end{te} \begin{proof} By repeatedly using both \eqref{Vandermonde} and the representation \eqref{Bernstein} of the Bern\v{s}tein function $f$, we have that \begin{align} \Pr \ll \, \mathcal{N}^f(t+dt)=k | \mathcal{N}^f(t)=r \rr \, &= \, \prod_{j=r}^{k-1} \lambda_ j\sum_{m=r}^{k} \frac{1-dt f(\lambda _m )}{ \prod_{l=r , l \neq m}^{k} (\lambda_l-\lambda_m )} \notag \\ &= \, -dt\prod_{j=r}^{k-1} \lambda_ j\sum_{m=r}^{k} \frac{f(\lambda _m )}{ \prod_{l=r , l \neq m}^{k} (\lambda_l-\lambda_m )}\notag \\ & =- dt \int _0^\infty \prod_{j=r}^{k-1} \lambda _ j\sum_{m=r}^{k} \frac{1-e^{-\lambda_m s }}{ \prod_{l=r , l \neq m}^{k} (\lambda_l-\lambda_m )} \nu (ds) \notag \\ &= dt \int _0^\infty \prod_{j=r}^{k-1} \lambda _ j\sum_{m=r}^{k} \frac{e^{-\lambda_m s }}{ \prod_{l=r , l \neq m}^{k} (\lambda_l-\lambda_m )} \nu (ds) \label{ccc}. \end{align} In light of \eqref{Vandermonde}, the integrand in \eqref{ccc} is $ \mathcal{O}( s)$ for $s \to 0$. Recalling \eqref{misura di Levy}, this ensures the convergence of \eqref{ccc}, and the proof is thus complete. \end{proof} \begin{os} For the sake of completeness, we observe that in the case of no jump ($k=r$) we have \begin{align} \Pr \ll \, \mathcal{N}^f(t+dt)=r | \mathcal{N}^f(t)=r \rr \, = \, & 1-dt f(\lambda _r) \notag \\ = \, & 1-dt \int _0^\infty (1-e^{-\lambda _r s}) \nu(ds)\\ = \, & 1-dt \int _0^\infty (1-\Pr \ll \mathcal{N}(s)=r | \mathcal{N}(0)=r \rr) \nu(ds). \end{align} \end{os} \begin{os} The subordinated non-linear birth process performs jumps of arbitrary height, as does the subordinated Poisson process (see, for example, \citet{orstoa}). Thus, in view of the Markov property, we can write the governing equations for the state probabilities $p_k^f(t)= \Pr \ll \mathcal{N}^f(t)=k | \mathcal{N}^f(0) =1 \rr$.
For $k>1$ we have that \begin{align} &\frac{d}{dt} p_k^f(t)\, = \, -f(\lambda_k) p_k^f(t)+ \sum_{r=1}^{k-1} p_r^f(t) \int _0^\infty \prod_{j=r}^{k-1} \lambda_j \sum_{m=r}^{k} \frac{e^{-\lambda_m s }}{ \prod_{l=r , l \neq m}^{k} (\lambda_l-\lambda_m )} \nu (ds), \end{align} while for $k=1$ \begin{align} &\frac{d}{dt} p_1^f(t) \, = \, - f(\lambda_1) p_1^f(t). \end{align} \end{os} \begin{os} The process $\mathcal{N}(H^f(t))$ presents positive and integer-valued jumps occurring at random times $T_1, T_2, \ldots$. The inter-arrival times $Y_1, Y_2, \ldots$ are defined as \begin{align} Y_k= T_k-T_{k-1}. \end{align} It is easy to prove that \begin{align} \Pr \ll Y_k >t |\mathcal{N}^f(T_{k-1})=r \rr= e^{-f(\lambda_r)t}. \end{align} This can be justified by considering that in the time interval $[T_{k-1}, T_{k-1}+t]$ no new offspring appears in the population and thus, by \eqref{rprog}, we have \begin{align} \Pr \ll Y_k >t |\mathcal{N}^f(T_{k-1})=r \rr= \Pr \ll \mathcal{N}^f(t+T_{k-1})=r|\mathcal{N}^f(T_{k-1})=r \rr= e^{-f(\lambda_r)t}. \end{align} \end{os} \subsection{Condition of explosion for the subordinated non-linear birth process} We note that the explosion of the process $\mathcal{N}^f(t)$, $t>0$, in a finite time is avoided if and only if \begin{align} T_{\infty}= Y_1+Y_2+ \cdots = \infty \end{align} almost surely, where $Y_j$, $j \geq 1$, are the intertimes between successive jumps (see \cite{grimmet}, p. 252). For the non-linear classical process we have that \begin{align} \mathbb{E} e^{-T_{\infty}}\, = \, & \mathbb{E}e^{- \sum _{j=1}^{\infty}Y_j}= \lim_{n \to \infty} \prod_{j=1}^{n} \mathbb{E} e^{-Y_j}= \lim_{n \to \infty} \prod _{j=1}^n \frac{\lambda _j}{1+\lambda _j} \notag \\ = \, & \prod _{j=1}^{\infty} \frac{1}{1+\frac{1}{\lambda _j}}= \frac{1}{1+ \sum _{j=1}^{\infty}\frac{1}{\lambda _j}+ \cdots}. \end{align} So, if $\sum_{j=1}^\infty \frac{1}{\lambda_j}= \infty$ we have $e^{-T_{\infty}}=0$ a.s., that is $T_{\infty}=\infty$.
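The dichotomy in the infinite product above is easy to explore numerically; in the sketch below (names ours), linear rates $\lambda_j=j$ make the partial products vanish, while quadratic rates $\lambda_j=j^2$ keep them bounded away from zero.

```python
def esc(rates):
    """Partial product prod_j lambda_j / (1 + lambda_j), i.e. the value
    of E[exp(-T_inf)] for independent Y_j ~ Exp(lambda_j)."""
    p = 1.0
    for lam in rates:
        p *= lam / (1.0 + lam)
    return p

# lambda_j = j: sum 1/lambda_j diverges, the product tends to 0,
# hence T_inf = infinity a.s. and there is no explosion
assert esc(range(1, 20000)) < 1e-3

# lambda_j = j**2: sum 1/lambda_j converges, the product stays
# bounded away from 0, so explosion occurs with positive probability
assert esc(j * j for j in range(1, 20000)) > 0.1
```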
Therefore, for the subordinated non-linear birth process we have that \begin{align} \Pr \ll \mathcal{N}^f(t) < \infty \rr \, = \, & \int _0 ^{\infty} \sum _{k=1}^{\infty} \Pr \ll \mathcal{N}(s)=k \rr \Pr\ll {H^f(t) \in ds}\rr \notag \\ = \, & \int_0^{\infty} \Pr\ll {H^f(t) \in ds}\rr =1, \qquad \forall t>0. \end{align} Instead, if $\sum _{j=1}^{\infty} \frac{1}{\lambda _j} <\infty$, we get $\sum _{k=1}^{\infty} \Pr \ll \mathcal{N}(s)=k \rr < 1 $ for all $s>0$, and this implies that $\Pr \ll \mathcal{N}^f(t)<\infty \rr <1$. We can now consider the case of killed subordinators $\mathcal{H}^g(t)$, defined as \begin{align} \mathcal{H}^g(t)= \begin{cases} H^f(t), &\qquad t< T,\\ \infty, &\qquad t \geq T, \end{cases} \end{align} where $T\sim \textrm{Exp}(\alpha)$ and $H^f(t)$ is an ordinary subordinator related to the function $f(x)= \int _0^{\infty}(1-e^{-sx}) \nu(ds)$. It is well-known that $\mathcal{H}^g(t)$ is related to the Bern\v{s}tein function \begin{align} g(x)= \alpha+f(x). \end{align} In this case, even if $\sum_{j=1}^\infty \frac{1}{\lambda_j}= \infty$, the probability of explosion for $\mathcal{N}(\mathcal{H}^g(t))$ is positive and equal to \begin{align} \Pr \ll \mathcal{N}(\mathcal{H}^g(t))= \infty \rr = 1- e^{-\alpha t}. \end{align} This can be proven by observing that \begin{align} \Pr \ll \mathcal{N}(\mathcal{H}^g(t)) < \infty \rr \, = \, & \int _0 ^{\infty} \sum _{k=1}^{\infty} \Pr \ll \mathcal{N}(s)=k \rr \Pr\ll {\mathcal{H}^g(t) \in ds}\rr \notag \\ = \, & \int _0^{\infty} \Pr\ll {\mathcal{H}^g(t) \in ds}\rr = \int _0^{\infty} e^{- \mu s} \Pr\ll {\mathcal{H}^g(t) \in ds}\rr\bigg|_{\mu =0} \notag \\ = \, & e^{- \alpha t- f(\mu) t}\bigg|_{\mu =0}= e^{-\alpha t}. \end{align} If, instead, $\sum_{j=1}^\infty \frac{1}{\lambda_j}< \infty$, we have $\sum _{k=1}^{\infty} \Pr \ll \mathcal{N}(s)=k \rr <1$ and, a fortiori, $\Pr \ll \mathcal{N}(\mathcal{H}^g(t))<\infty \rr < e ^{-\alpha t}$.
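The role of the killing time can be illustrated by a tiny Monte Carlo sketch (ours; the helper name is hypothetical): the killed subordinator is infinite precisely when the exponential killing time $T$ has already elapsed, so the empirical frequency of that event reproduces the explosion probability $1-e^{-\alpha t}$.

```python
import math
import random

def killed_explodes(t, alpha, rng):
    """The killed subordinator equals +infinity as soon as the killing
    time T ~ Exp(alpha) has passed, so at time t the time-changed
    process is exploded iff t >= T."""
    return t >= rng.expovariate(alpha)

rng = random.Random(42)
t, alpha, n = 0.7, 1.0, 100_000
freq = sum(killed_explodes(t, alpha, rng) for _ in range(n)) / n
# compare with the exact explosion probability 1 - exp(-alpha * t)
assert abs(freq - (1 - math.exp(-alpha * t))) < 0.01
```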
\subsection{Subordinated linear birth process} The subordinated Yule-Furry process $N^f(t)$ with one initial progenitor possesses the following distribution \begin{align} p_k^f(t) \, = \, & \int _0 ^ {\infty} e^{- \lambda s } (1-e^{- \lambda s })^{k-1} \Pr \lbrace H^ {f}(t) \in ds \rbrace \notag \\ = \, & \int _0 ^ {\infty} e^{- \lambda s }\sum _{j=0}^{k-1} \binom{k-1}{j}(-1)^j e^{- \lambda sj }\Pr \lbrace H^ {f}(t) \in ds \rbrace\notag \\ = \, & \sum _{j=0}^{k-1}\binom{k-1}{j} (-1)^j \int _0 ^{\infty} e^{- s(\lambda + \lambda j) } \Pr \lbrace H^ {f}(t) \in ds \rbrace \notag \\ = \, & \sum _{j=0}^{k-1}\binom{k-1}{j} (-1)^j e^{-t\, f(\lambda (j+1))}. \end{align} Of course, this is obtainable from the distribution of $\mathcal{N}^f(t)$ by assuming that $\lambda _j=\lambda j$. We now compute the factorial moments of the subordinated linear birth process. The probability generating function is \begin{align} G^f(u,t)= \sum_{k=1}^\infty u^k \int_0^\infty e^{-\lambda s} (1-e^{-\lambda s})^{k-1} \Pr (H^f(t) \in ds). \end{align} The $r$-th order factorial moments are \begin{align} & \frac{\partial^r}{\partial u^r} G^f(u,t)\bigg|_{u=1} \notag \\ = \, & \sum_{k=r}^\infty k(k-1)\cdots(k-r+1) \int_0^\infty e^{-\lambda s} (1-e^{-\lambda s})^{k-1} \Pr \ll H^f(t) \in ds\rr \notag \\ = \, & \sum_{k=r}^\infty k(k-1)\cdots(k-r+1) \int_0^\infty e^{-\lambda s} (1-e^{-\lambda s})^{k-r} (1-e^{-\lambda s})^{r-1}\Pr \ll H^f(t) \in ds \rr \end{align} and since \begin{align} \sum_{k=r}^\infty k(k-1)\cdots(k-r+1)(1-p)^{k-r} = (-1)^r\frac{d^r}{dp^r} \sum_{k=0}^\infty (1-p)^k = (-1)^r\frac{d^r}{dp^r} \frac{1}{p}= \frac{r!}{p^{r+1}} \end{align} we have that \begin{align} \frac{\partial^r}{\partial u^r} G^f(u,t)\bigg|_{u=1} \, = \, & r!\int_0^\infty e^{\lambda r s }(1-e^{-\lambda s})^{r-1} \Pr \ll H^f(t) \in ds \rr \notag \\ = \, & r! \sum_{m=0} ^{r-1} \begin{pmatrix} r-1 \\ m \end{pmatrix}(-1)^m \int_0^\infty e^{- \lambda s(m-r)} \Pr \ll H^f(t) \in ds\rr \\ = \, & r!
\sum_{m=0} ^{r-1} \binom{r-1}{m}(-1)^m e^{-t f(\lambda (m-r))}. \end{align} By $f(-x)$, $x>0$, we mean the extended Bern\v{s}tein function, having representation \begin{align} f(-x)=\int_0^{\infty} (1-e^{sx}) \nu(ds), \qquad x>0, \label{extended bernstein} \end{align} provided that the integral in \eqref{extended bernstein} is convergent. In particular, we infer that \begin{align} \mathbb{E}(N^f(t))=e^{-tf(-\lambda)} \end{align} and \begin{align} \textrm{Var} ( N^f(t) )= 2e^{-tf(-2\lambda)}- e^{-tf(-\lambda)}-e^{-2tf(-\lambda)}. \end{align} For a stable subordinator, that is with L\'evy measure $\nu(ds)= \frac{\alpha s^{-\alpha -1}}{\Gamma(1-\alpha)}ds$, $\alpha \in (0,1)$, all the factorial moments are infinite. Instead, for a tempered stable subordinator, where $\nu(ds)= \frac{\alpha e^{-\theta s} s^{-\alpha -1}}{\Gamma(1-\alpha)}ds$, $\alpha \in (0,1)$ and $\theta>0$, only the factorial moments of order $r$ such that $r< \frac{\theta}{\lambda}$ are finite. If we then consider the Gamma subordinator, with $\nu(ds)= \frac{e^{-\alpha s }}{s}ds$, only the factorial moments of order $r$ such that $r< \frac{\alpha}{\lambda}$ are finite. \subsection{Fractional subordinated non-linear birth process} The fractional non-linear birth process has state probabilities $ p_k^{\nu}(t)$ solving the fractional differential equation \begin{align} \frac{d^{\nu}p_k^{\nu}(t)}{dt^{\nu}}= -\lambda _k p_k^{\nu}(t) +\lambda _{k-1}p_{k-1}^{\nu}(t), \qquad \nu \in (0,1), \; k \geq 1, \end{align} with initial condition \begin{align} p_k^{\nu}(0)= \begin{cases} 1, \qquad &k=1, \\ 0, & k>1.
\end{cases} \end{align} The state probabilities read (see Orsingher and Polito \cite{orspolber1}) \begin{align} p_k^{\nu}(t)= \Pr \ll \mathcal{N}^{\nu}(t)=k | \mathcal{N}^{\nu}(0)=1 \rr= \prod_{j=1}^{k-1} \lambda_j \sum_{m=1}^{k} \frac{E_{\nu,1}(-\lambda_m t^{\nu} )}{ \prod_{l=1 , l \neq m}^{k} (\lambda_l-\lambda_m )} \qquad \nu \in (0,1), \end{align} where \begin{align} E_{\nu,1}(-\eta t^{\nu})= \frac{\sin (\nu \pi) }{\pi} \int _0 ^{\infty} \frac{r^{\nu -1 } e^{-r \eta ^{\frac{1}{\nu}}t}}{r^{2 \nu}+2r^{\nu}\cos(\nu \pi)+1}dr \end{align} is the Mittag-Leffler function (see formula (7.3) in \citet{saxena}). So, the subordinated non-linear fractional birth process has distribution \begin{align} & \Pr \ll \mathcal{N}^{\nu}(H^f(t))=k | \mathcal{N}^{\nu}(0)=1 \rr \notag \\ = \, & \prod_{j=1}^{k-1} \lambda_j \sum_{m=1}^{k} \frac{1}{ \prod_{l=1 , l \neq m}^{k} (\lambda_l-\lambda_m )}\frac{\sin (\nu \pi) }{\pi} \int _0 ^{\infty} \frac{r^{\nu -1 } e^{-tf(r \lambda_m ^{\frac{1}{\nu}})}}{r^{2 \nu}+2r^{\nu}\cos(\nu \pi)+1}dr. \end{align} \section{Subordinated death processes} We now consider the process $M^f(t)= M(H^f(t))$, where $M$ is a linear death process with $n_0$ progenitors. The state probabilities read \begin{align} &\Pr \ll M^f(t)=k | M^f(0) = n_0 \rr = \int _0^{\infty} \binom{n_0}{k} e^{-\mu ks}(1-e^{-\mu s}) ^{n_0-k} \Pr \ll H^f(t) \in ds \rr \notag \\ = \, & \begin{pmatrix} n_0 \\ k \end{pmatrix} \sum _ {j=0} ^ {n_0-k}\begin{pmatrix} n_0-k \\ j \end{pmatrix}(-1)^j \int _0 ^ {\infty}e^{- (\mu k + \mu j)s }\Pr \ll H^f(t) \in ds \rr \notag \\ = \, & \begin{pmatrix} n_0 \\ k \end{pmatrix} \sum _ {j=0} ^ {n_0-k}\begin{pmatrix} n_0-k \\ j \end{pmatrix}(-1)^j e^ {-tf(\mu k + \mu j)}. 
\end{align} In particular, the extinction probability is \begin{align} \Pr \ll M^f(t)=0 | M^f(0) = n_0 \rr \,= & \sum _ {j=0} ^ {n_0}\begin{pmatrix} n_0 \\ j \end{pmatrix}(-1)^j e^ {-tf(\mu j)}\notag \\ = \, & 1+ \sum _ {j=1} ^ {n_0}\begin{pmatrix} n_0 \\ j \end{pmatrix}(-1)^j e^ {-tf(\mu j)} \end{align} and converges to $1$ exponentially fast with rate $f(\mu)$. \begin{os} We observe that the extinction probability is a decreasing function of $n_0$ for any choice of the subordinator $H^f(t)$. This can be shown by observing that \begin{align} &\Pr \ll M^f(t)=0 |M^f(0)=n_0 \rr -\Pr \ll M^f(t)=0 |M^f(0)=n_0-1 \rr \notag \\ = \, & \sum _ {j=1} ^ {n_0}\begin{pmatrix} n_0 \\ j \end{pmatrix}(-1)^j e^ {-tf(\mu j)} -\sum _ {j=1} ^ {n_0-1}\begin{pmatrix} n_0-1 \\ j \end{pmatrix}(-1)^j e^ {-tf(\mu j)} \notag \\ = \, & \sum _ {j=1} ^ {n_0-1}\begin{pmatrix} n_0-1 \\ j-1 \end{pmatrix}(-1)^j e^ {-tf(\mu j)}+(-1)^{n_0}e^{-tf(\mu n_0)}\notag \\ = \, & \sum _ {j=1} ^ {n_0}\begin{pmatrix} n_0-1 \\ j-1 \end{pmatrix}(-1)^j e^ {-tf(\mu j)} \notag \\ = \, & -\sum _ {j=0} ^ {n_0-1}\begin{pmatrix} n_0-1 \\ j \end{pmatrix}(-1)^j e^ {-tf(\mu (j+1))} \notag \\ = \, &-\int_0 ^{\infty} \sum _ {j=0} ^ {n_0-1}\begin{pmatrix} n_0-1 \\ j \end{pmatrix}(-1)^j e^ {-s\mu (j+1)} \Pr \ll H^f(t) \in ds \rr \notag \\ = \, &-\int_0^\infty e^{-\mu s}(1-e^{-\mu s})^{n_0-1}\Pr \ll H^f(t) \in ds \rr < 0. \end{align} This also permits us to establish the following upper bound, which is valid for all values of $n_0$: \begin{align} \Pr \ll M^f(t)=0 |M^f(0)=n_0 \rr < \Pr \ll M^f(t)=0 |M^f(0)=1 \rr = 1-e^{-tf(\mu)}. \end{align} We also infer that \begin{align*} & \Pr \ll M^f(t)= k|M^f(0)=n_0 \rr = \\ & \Pr \ll M^f(t)=k|M^f(0)=n_0 -1 \rr - \frac{1}{n_0} \Pr \ll M^f(t)=1|M^f(0)=n_0 \rr, \qquad \forall k< n_0. \end{align*} \end{os} \begin{os} The probability generating function of the subordinated linear death process is \begin{align} G(u,t)= \int_0^\infty (ue^{- \mu s}+1-e^{- \mu s})^{n_0} \Pr \ll H^f(t) \in ds \rr.
\end{align} We now compute the factorial moments of order $r$ for the process $M^f(t)$: \begin{align} &\mathbb{E} \bigl ( M^f(t) (M^f(t)-1)(M^f(t)-2) \cdots (M^f(t)-r+1) \bigr ) \notag \\ = \, &\int_0^\infty \frac{\partial^r}{\partial u^r} (ue^{- \mu s}+1-e^{- \mu s})^{n_0}|_{u=1} \Pr \ll H^f(t) \in ds \rr \notag \\ = \, & n_0(n_0-1)(n_0-2)\cdots(n_0-r+1) \int_0^\infty e^ {-\mu r s } \Pr \ll H^f(t) \in ds \rr \notag \\ = \, & n_0(n_0-1)(n_0-2)\cdots(n_0-r+1) e^{-tf(\mu r )} \notag \\ = \, & r! \binom{n_0}{r} e^{-tf(\mu r )} \qquad \qquad \textrm{for } r \leq n_0. \end{align} In particular, we extract the expressions \begin{align} \mathbb{E} \, M^f(t)= n_0 e^ {-t \, f(\mu)} \end{align} and \begin{align} \textrm{Var} \,M^f(t)= n_0 e^{-tf(\mu)}-n_0 e^{-tf(2 \mu)} + n_0^2 e^{-t f(2 \mu)} -n_0 ^2 e^{-2t f(\mu)}. \end{align} The variance can also be obtained as \begin{align} \textrm{Var} \, M^f(t)= \, & \mathbb{E} \ll \textrm{Var} \, (M(H^f(t))|H^f(t)) \rr + \textrm{Var} \, \ll \mathbb{E}(M(H^f(t))|H^f(t)) \rr \notag \\ = \, & \mathbb{E}\bigl ( n_0 e^{-\mu H^f(t)}(1-e^{-\mu H^f(t)}) \bigr )+ \textrm{Var}\, ( n_0 e ^{-\mu H^f(t)})\notag \\ = \, & n_0 e^{-tf(\mu)}-n_0 e^{-tf(2 \mu)} + n_0^2 e^{-t f(2 \mu)} -n_0 ^2 e^{-2t f(\mu)}.
\end{align} \end{os} \begin{os} The transition probabilities \begin{align} \Pr \ll M^f(t_0+t)=k|M^f(t_0)=r \rr = \binom{r}{k} \sum_{j=0}^{r-k}\binom{r-k}{j} (-1)^j e^{-tf(\mu k + \mu j)} \end{align} permit us to write, for a small time interval $[t_0,t_0+dt)$, \begin{align} & \Pr \ll M^f(t_0+dt)=k | M^f(t_0)=r \rr \notag \\ = \, & \binom{r}{k} \sum_{j=0}^{r-k}\binom{r-k}{j}(-1)^j (1-dt \,f(\mu k + \mu j) ) \notag \\ = \, & -dt \binom{r}{k} \sum_{j=0}^{r-k}\binom{r-k}{j}(-1)^j \, f(\mu k + \mu j) \notag \\ = \, & -dt \binom{r}{k} \sum_{j=0}^{r-k}\binom{r-k}{j}(-1)^j \, \int _0 ^\infty (1-e^{-(\mu k + \mu j)s}) \nu (ds) \notag \\ = \, & dt \binom{r}{k} \int _0 ^\infty \sum _ {j=0} ^ {r-k}\binom{r-k}{j}(-1)^j \, e^{- \mu js} e^{-\mu k s} \nu (ds) \notag \\ = \, & dt \int _0 ^\infty \binom{r}{k} (1-e^{- \mu s})^{r-k} e^{-\mu k s} \nu (ds)\notag \\ = \, & dt \int_0^\infty \Pr \ll M(s) = k | M(0) = r \rr \nu(ds), \qquad 0 \leq k <r \leq n_0, \label{trans prob death process} \end{align} where, in the second step, we used the fact that $\sum_{j=0}^{r-k}(-1)^j\binom{r-k}{j}=0$ since $k<r$. It follows that the subordinated death process decreases with downward jumps of arbitrary size. Formula \eqref{trans prob death process} is a special case of \eqref{14b} for the linear death process. \end{os} \begin{os} If $M^f(t_0)=r$, the probability that the number of individuals does not change during a time interval of length $t$ is \begin{align} \Pr \ll M^f(t_0+t)=r| M^f(t_0)=r \rr \, = e^{-tf(r \mu)}. \label{312} \end{align} As a consequence, the random time between two successive jumps has exponential distribution with rate $f(\mu r)$, i.e. \begin{align} T_r \sim \textrm{Exp}(f(\mu r)). \end{align} From \eqref{312} we also have that \begin{align} \Pr \ll M^f(t+dt) = r | M^f(t) = r \rr \, = \, 1-dtf(\mu r).
\end{align} \end{os} \begin{os} In view of \eqref{trans prob death process} we can write the governing equations for the transition probabilities $p_k^f(t)= \Pr \ll M^f(t)=k|M^f(0)=n_0 \rr$, for $0 \leq k \leq n_0$: \begin{align} \frac{d}{dt} p_k^f(t)= -p_k^f(t)f(\mu k)+ \sum_{j=k+1}^{n_0} p_j^f(t) \int _0 ^\infty \binom{j}{k} (1-e^{- \mu s})^{j-k} e^{-\mu k s} \nu (ds). \end{align} \end{os} \subsection{The subordinated sublinear death process} In the sublinear death process we have that, for $1 \leq k \leq n_0$, \begin{align} \Pr \ll \mathbb{M}(t+dt)=k-1| \mathbb{M}(t)=k, \mathbb{M}(0)=n_{0}\rr= \mu (n_0-k+1)dt+o(dt), \end{align} so that the probability that a particle disappears in $[t,t+dt)$ is proportional to one plus the number of deaths that occurred in $[0,t)$. It is well-known that \begin{align} \Pr \ll \mathbb{M}(t)=k| \mathbb{M}(0)=n_0\rr \, = \, \begin{cases} e^{-\mu t}(1-e^{-\mu t })^{n_0-k}, \qquad &k=1,2, \dots, n_0, \\ (1-e^{-\mu t})^{n_0} , & k=0. \end{cases} \end{align} So, the probability law of the subordinated process immediately follows: \begin{align} &\Pr \ll \mathbb{M}^f(t)=k| \mathbb{M}^f(0)=n_0\rr \notag \\ = \, & \begin{cases} \sum_{j=0}^{n_0-k} \binom{n_0-k}{j} (-1)^j e^{- tf(\mu (j+1))}, \qquad &k=1,2, \dots, n_0, \\ \sum _{j=0}^{n_0} \binom{n_0}{j} (-1)^j e^{-t f( \mu j) }, &k=0. \end{cases} \end{align} The extinction probability is a decreasing function of $n_0$, as for the subordinated linear death process. Furthermore, we observe that the extinction probabilities for the subordinated linear and sublinear death processes coincide. \section{Subordinated linear birth-death processes} In this section we consider the linear birth-death process $L(t)$, with one progenitor, evaluated at the random time $H^f(t)$.
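Before recalling the relevant classical formulas, here is a brief numerical sanity check (not part of the original text) of the subordinated linear death law obtained above: for any Bern\v{s}tein function $f$ with $f(0)=0$, the probabilities $\Pr\ll M^f(t)=k\rr$ must sum to one and reproduce the mean $n_0e^{-tf(\mu)}$. The choice $f(x)=\sqrt{x}$, corresponding to a $1/2$-stable subordinator, is an illustrative assumption.

```python
import math
from math import comb, exp, sqrt

def f(x):
    # Bernstein function of a 1/2-stable subordinator (illustrative choice)
    return sqrt(x)

def death_pmf(k, t, mu, n0):
    # Pr{M^f(t) = k | M^f(0) = n0} from the binomial expansion in the text
    return comb(n0, k) * sum(
        (-1) ** j * comb(n0 - k, j) * exp(-t * f(mu * (k + j)))
        for j in range(n0 - k + 1)
    )

t, mu, n0 = 1.7, 0.9, 6
pmf = [death_pmf(k, t, mu, n0) for k in range(n0 + 1)]
total = sum(pmf)                          # should be 1 for any Bernstein f
mean = sum(k * p for k, p in enumerate(pmf))
print(total, mean, n0 * exp(-t * f(mu)))  # mean should equal n0 e^{-t f(mu)}
```

Both identities hold exactly at the level of the binomial sums, so the agreement is limited only by floating-point rounding.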
We recall that, for $k\geq 1$ (see \citet{bailey}, page 90), \begin{align} \Pr \ll L(t)=k | L(0) =1 \rr = \begin{cases} \frac{(\lambda-\mu)^2e^{-(\lambda-\mu)t}(\lambda(1-e^{-(\lambda-\mu)t}))^{k-1}}{(\lambda-\mu e^{-(\lambda-\mu)t})^{k+1}}, \qquad & \lambda > \mu, \\ \frac{(\mu-\lambda)^2 e^{-(\mu-\lambda)t}(\lambda(1-e^{-(\mu-\lambda)t}))^{k-1}}{(\mu-\lambda e^{-(\mu-\lambda)t})^{k+1}}, & \lambda < \mu, \\ \frac{(\lambda t)^{k-1}}{(1+\lambda t)^{k+1}}, & \lambda = \mu, \end{cases} \end{align} while the extinction probabilities have the form \begin{align} \Pr \ll L(t)=0 |L(0) = 1 \rr = \begin{cases} \frac{\mu-\mu e^{-t(\lambda-\mu)}}{\lambda-\mu e ^{-t(\lambda- \mu)}}, \qquad & \lambda > \mu,\\ \frac{\mu-\mu e^{-t(\mu- \lambda)}}{\mu-\lambda e ^{-t(\mu- \lambda)}}, & \mu > \lambda, \\ \frac{\lambda t}{1+\lambda t}, & \lambda= \mu. \end{cases} \end{align} We now study the subordinated process $L^f(t)=L(H^f(t))$. When $\lambda \neq \mu$, after a series expansion we easily obtain that \begin{align} &\Pr \ll L^f(t)=k |L^f(0) =1 \rr \notag \\ = \, & \begin{cases} \l \frac{\lambda - \mu }{\lambda} \r^2 \sum_{l=0}^\infty \binom{l+k}{l} \l \frac{\mu }{\lambda} \r^l \sum_{r=0}^{k-1} (-1)^r \binom{k-1}{r} e^{-t f \l \l \lambda - \mu \r \l l+r+1 \r \r} , \qquad &\lambda >\mu, \\ \l \frac{\mu -\lambda}{\mu} \r^2 \l \frac{\lambda}{\mu} \r^{k-1} \sum_{l=0}^\infty \binom{l+k}{l} \l \frac{\lambda }{\mu} \r^l \sum_{r=0}^{k-1} (-1)^r \binom{k-1}{r} e^{-t f\l \l \mu - \lambda \r \l l+r+1 \r \r}, & \lambda <\mu, \end{cases} \end{align} provided that $k \geq 1$. Moreover, the extinction probabilities have the following form \begin{align} \Pr \ll L^f(t)=0 \rr \, = \, \begin{cases} \frac{\mu - \lambda}{\lambda} \l \sum_{m=1}^\infty \l \frac{\mu}{\lambda} \r^m e^{-tf\l (\lambda - \mu)m \r} \r+\frac{\mu}{\lambda}, \qquad &\lambda>\mu, \\ 1- \l \frac{\mu -\lambda}{\lambda} \r \sum_{m=1}^\infty \l \frac{\lambda}{\mu} \r^m e^{-tf \l \l \mu - \lambda \r m \r}, &\lambda <\mu.
\end{cases} \end{align} Similarly to the classical process, we have \begin{align} \lim_{t \to \infty} \Pr \ll L^f(t)=0 \rr = \begin{cases} \frac{\mu}{\lambda}, \qquad &\lambda > \mu, \\ 1, & \lambda < \mu. \end{cases} \end{align} \subsection{Processes with equal birth and death rates} We now concentrate on the case $\lambda= \mu$, which leads to some interesting results. The extinction probability reads \begin{align} \Pr \ll L^f(t) = 0 | L^f(0) =1 \rr \, = \, & \int_0^\infty \frac{\lambda s}{1+\lambda s} \Pr \ll H^f(t) \in ds \rr \notag \\ = \, &1- \int_0^\infty \frac{1}{1+\lambda s} \Pr \ll H^f(t) \in ds \rr \notag \\ = \, & 1-\int_0^\infty \Pr \ll H^f(t) \in ds \rr \int_0^\infty dw \, e^{-w\lambda s} \, e^{-w} \notag \\ = \, &1-\int_0^\infty dw \, e^{-w} e^{-tf(\lambda w)} \label{exctinction probability}. \end{align} We note that \begin{align} \lim_{t \to \infty} \Pr \ll L^f(t)=0 |L^f(0) = 1 \rr = 1 \end{align} as in the classical case. From \eqref{exctinction probability} we infer that the distribution of the extinction time $T_0^f = \inf \ll t \geq 0 : L^f(t) = 0 \rr$ has the following form \begin{align} \Pr \ll T_0^f \in dt \rr / dt \, = \, \int_0^\infty e^{-w} f(\lambda w) e^{-tf(\lambda w)} dw. \end{align} We now observe that all the state probabilities of the process $L(t)$ depend on the extinction probability (see \cite{orspolber1}): \begin{align} \Pr \ll L(t) = k |L(0) =1 \rr \, = \, & \frac{(\lambda t)^{k-1}}{\l 1+\lambda t \r^{k+1}} \qquad \qquad \qquad \qquad \qquad \qquad \qquad k\geq 1 \notag \\ = \, & \frac{(-1)^{k-1} \lambda^{k-1}}{k!} \frac{d^k}{d\lambda^k} \l \frac{\lambda}{1+\lambda t} \r \notag \\ = \, & \frac{(-1)^{k-1} \lambda^{k-1}}{k!} \frac{d^k}{d\lambda^k} \l \lambda \l 1-\Pr \ll L(t)=0\rr \r \r.
\label{state prob for L(t)} \end{align} Hence, the state probabilities of $L^f(t)$ can be written, for $k \geq 1$, as \begin{align} &\Pr \ll L^f(t) = k |L^f(0) =1 \rr \, \notag \\ = \, &\frac{(-1)^{k-1} \lambda^{k-1}}{k!} \frac{d^k}{d\lambda^k}\left[ \lambda \int_0^\infty \l 1-\Pr \ll L(s)=0 \rr \r \Pr \ll H^f(t) \in ds \rr \right] \notag \\ = \, & \frac{(-1)^{k-1} \lambda^{k-1}}{k!} \frac{d^k}{d\lambda^k} \left[ \lambda \l 1- \Pr \ll L^f(t) = 0 \rr \r \right] \notag \\ =\, &\frac{(-1)^{k-1} \lambda^{k-1}}{k!} \frac{d^k}{d\lambda^k} \left[ \lambda \int_0^\infty dw \, e^{-w} e^{-tf(\lambda w)} \right]. \label{state probabilities L^f(t)} \end{align} \subsection{Transition probabilities} To compute the transition probabilities of $L^f(t)$, we recall that the linear birth-death process with $r$ progenitors has the following probability law (see \cite{bailey}, page 94, formula 8.47): \begin{align} \Pr \ll L(t)= n| L(0)=r \rr = \sum _{j=0}^{\min(r,n)} \binom{r}{j} \binom{r+n-j-1}{r-1} \alpha ^{r-j} \beta ^{n-j}(1-\alpha-\beta)^{j}, \label{transition prob birth death} \end{align} where $n \geq 0$ and \begin{align} \alpha \,= \, \frac{\mu(e^{(\lambda-\mu)t}-1)}{\lambda e^{(\lambda-\mu)t}-\mu } \qquad \textrm{and} \qquad \beta \,= \, \frac{\lambda(e^{(\lambda-\mu)t}-1)}{\lambda e^{(\lambda-\mu)t}-\mu }. \end{align} In the case $\lambda= \mu$ we have \begin{align} \lim_{\mu \to \lambda} \alpha \, = \, \lim_{\mu \to \lambda} \beta = \frac{\lambda t}{1+\lambda t} \end{align} so that \begin{align} &\Pr \ll L(t)=n|L(0)=r \rr \notag \\ = \, & \sum _{j=0}^{\min(r,n)} \binom{r}{j} \binom{r+n-j-1}{r-1} \biggl ( \frac{\lambda t}{1+\lambda t}\biggr )^{r+n-2j}\biggl (1-2 \frac{\lambda t}{1+ \lambda t}\biggr)^{j} \notag \\ = \, & \sum _{j=0}^{\min(r,n)} \sum_{k=0}^j \binom{r}{j} \binom{r+n-j-1}{r-1} \binom{j}{k}(-2)^k \biggl ( \frac{\lambda t}{1+\lambda t}\biggr )^{r+n-2j+k}.
\label{mi serve per phillips nascita e morte 2} \end{align} One can check that for $r=1$ the last formula reduces to \begin{align} \Pr \ll L(t)=n |L(0) =1 \rr= \frac{(\lambda t)^{n-1}}{(1+\lambda t)^{n+1}}. \end{align} The transition probabilities related to the subordinated process $L^f(t)$ can be written in an elegant form, as shown in the following theorem. \begin{te} In the subordinated linear birth-death process $L^f(t)$, when $\lambda=\mu$, $n\geq 0$, $r\geq 1$, $n \neq r$, we have that \begin{align} &\Pr \ll L^f(t+t_0)=n |L^f(t_0)=r \rr \notag \\= \, &\sum _{j=0}^{\min(r,n)} \sum_{k=0}^j \binom{r}{j} \binom{r+n-j-1}{r-1} \binom{j}{k}2^k \frac{ (-1)^{r+n-1} \lambda ^ {r+n+k-2j}}{(r+n-2j+k-1)!} \notag \\ & \times \frac{d^{r+n-2j+k-1}}{d \lambda ^{r+n-2j+k-1}} \biggl [ \frac{1}{\lambda}- \frac{1}{\lambda}\int_0^\infty dw \, e^{-w} e^{-tf(\lambda w)} \biggr ]. \label{probabilità di transizione processo nascita morte subordinato} \end{align} \end{te} \begin{proof} By subordination we have \begin{align} \Pr \ll L^f(t)=n | L^f(0)=r \rr \, = \, & \int _0^{\infty} \Pr \ll L(s)=n |L(0)=r \rr \Pr \ll H^f(t) \in ds \rr \notag \\ = \, & \sum _{j=0}^{\min(r,n)} \sum_{k=0}^j \binom{r}{j} \binom{r+n-j-1}{r-1} \binom{j}{k}(-2)^k \notag \\ & \times \int_0 ^ {\infty} \Pr \ll H^f(t) \in ds \rr \biggl ( \frac{\lambda s}{1+\lambda s}\biggr )^{r+n-2j+k}. \end{align} To compute the last integral, we preliminarily observe that \begin{align} \frac{d^m}{d \lambda ^m} \frac{1}{1+\lambda s}= (-1)^m m!\, s^m \frac{1}{(1+\lambda s)^{m+1}} \end{align} and consequently \begin{align} \biggl ( \frac{\lambda s}{1+ \lambda s } \biggr )^m = \frac{(-1)^{m-1} s \, \lambda ^m}{(m-1)!}\frac{d^{m-1}}{d \lambda ^{m-1}} \frac{1}{1+\lambda s}.
\label{mi serve per phillips in nascita e morte 1} \end{align} So, we have \begin{align} &\Pr \ll L^f(t)=n|L^f(0)=r \rr \notag \\ = \, & \sum _{j=0}^{\min(r,n)} \sum_{k=0}^j \binom{r}{j} \binom{r+n-j-1}{r-1} \binom{j}{k}2^k \frac{ (-1)^{r+n-1} \lambda^{r+n-2j+k}}{(r+n-2j+k-1)!} \notag \\ & \times \frac{d^{r+n-2j+k-1}}{d \lambda ^{r+n-2j+k-1}} \int _0 ^{\infty} \frac{s}{1+\lambda s} \Pr \ll H^f(t) \in ds \rr \end{align} where, by using \eqref{exctinction probability}, we write \begin{align} \int _0 ^{\infty} \frac{s}{1+\lambda s} \Pr \ll H^f(t) \in ds \rr &= \frac{1}{\lambda} \int _0 ^{\infty} \frac{\lambda s}{1+\lambda s} \Pr \ll H^f(t) \in ds \rr \notag \\ &= \frac{1}{\lambda} \biggl [1-\int_0^\infty dw \, e^{-w} e^{-tf(\lambda w)} \biggr ] \end{align} and the desired result immediately follows. \end{proof} \begin{os} For a small time interval $dt$, the quantity in square brackets in (\ref{probabilità di transizione processo nascita morte subordinato}) can be written as \begin{align*} & \frac{1}{\lambda}- \frac{1}{\lambda} \int_0^{\infty} dw \, e^{-w}(1-dt f(\lambda w)) \\ & =dt \, \frac{1}{\lambda} \int_0^{\infty} dw \, e^{-w} \int_0^{\infty} \nu(ds) (1-e^{-\lambda w s})\\ & = dt \int_0^{\infty} \nu(ds) \frac{s}{1+\lambda s}. \end{align*} Then, by using (\ref{mi serve per phillips in nascita e morte 1}) and (\ref{mi serve per phillips nascita e morte 2}), formula (\ref{probabilità di transizione processo nascita morte subordinato}) reduces to \begin{align*} \Pr \ll L^f(t_0+dt)=n| L^f(t_0)=k \rr = dt\int_0^{\infty} \nu(ds) \Pr \ll L(s)=n | L(0)=k \rr, \end{align*} thus proving relation (\ref{14b}) for subordinated birth-death processes.
\end{os} \begin{os} If $L^f(0)=1$, from (\ref{state probabilities L^f(t)}) we have that the probability that the number of individuals does not change during a time interval of length $dt$ is \begin{align*} \Pr \ll L^f(dt)=1|L^f(0)=1 \rr= 1-dt\, \frac{d}{d \lambda} \bigl (\lambda\int _0^{\infty}dw \, e^{-w}f(\lambda w) \bigr ). \end{align*} Thus the waiting time for the first jump, i.e. \begin{align*} T_1= \inf \ll t>0: L^f(t) \neq 1 \rr , \end{align*} has the following distribution \begin{align} \Pr \ll T_1>t \rr = e^{-t \frac{d}{d \lambda} \bigl(\lambda\int _0^{\infty}dw \, e^{-w}f(\lambda w)\bigr)}. \end{align} For example, in the case where $H^f(t)$ is a stable subordinator with index $\alpha \in (0,1)$, $T_1$ has an exponential distribution with parameter $ \lambda ^{\alpha} \Gamma (\alpha +2)$. \end{os} \subsection{Mean sojourn times} Let $V_k(t)$, $k\geq 1$, be the total amount of time that the process $L(t)$ spends in the state $k$ up to time $t$, i.e. \begin{align} V_k(t)= \int_0 ^t I_k(L(s))\, ds, \end{align} where $I_k(\cdot)$ is the indicator function of the state $k$. The mean sojourn time up to time $t$ is given by \begin{align} \mathbb{E}V_k(t)=\int_0 ^t \Pr \ll L(s)=k |L(0) =1 \rr ds. \end{align} By means of \eqref{state prob for L(t)} we have that \begin{align} \mathbb{E}V_k(t) \, = \, & \int_0^t \Pr \ll L(s)=k |L(0)=1 \rr ds \notag \\ = \, & \frac{(-1)^{k-1} \lambda^{k-1}}{k!} \frac{d^k}{d\lambda^k} \l \lambda \l t-\int_0^t \Pr \ll L(s)=0\rr ds \r \r \notag \\ = \, & \frac{(-1)^{k-1} \lambda^{k-1}}{k!} \frac{d^k}{d\lambda^k} \l \lambda \l t-\int_0^t \frac{\lambda s}{1+\lambda s}ds \r \r \notag \\ = \, & \frac{(-1)^{k-1} \lambda^{k-1}}{k!} \frac{d^k}{d\lambda^k} \log (1+\lambda t) \notag \\ = \, & \frac{1}{\lambda k} \l \frac{\lambda t}{1+\lambda t} \r^k \end{align} and the mean asymptotic sojourn time is therefore given by \begin{align} \mathbb{E}V_k(\infty)=\frac{1}{\lambda k}.
\label{mean classic sojourn time} \end{align} In view of \eqref{state probabilities L^f(t)}, for the sojourn time $V_k^f(t)$ of the subordinated process $L^f(t)$ we have that \begin{align} \mathbb{E}V_k^f(t) \, = \, &\int_0^t \Pr \ll L^f(s)=k |L^f(0) =1 \rr ds \notag \\ = \, & \frac{(-1)^{k-1} \lambda^{k-1}}{k!} \frac{d^k}{d\lambda^k} \left[ \lambda \int_0^\infty dw \, e^{-w} \frac{1}{f(\lambda w)} \l 1- e^{-tf(\lambda w)} \r \right] \end{align} and the mean asymptotic sojourn time is given by \begin{align} \mathbb{E}V_k^f(\infty)= \frac{(-1)^{k-1} \lambda^{k-1}}{k!} \frac{d^k}{d\lambda^k} \left[ \lambda \int_0^\infty dw \, e^{-w} \frac{1}{f(\lambda w)} \right]. \end{align} It is possible to obtain an explicit expression for $ \mathbb{E}V_k^f(\infty)$ in the case of a stable subordinator, when $f(x)=x^{\alpha}$, $\alpha \in (0,1)$, i.e. \begin{align} \mathbb{E}V_k^f(\infty) \, = \, & \frac{(-1)^{k-1} \lambda^{k-1}}{k!} \frac{d^k}{d\lambda^k} \left[ \lambda \int_0^\infty dw \, e^{-w} \frac{1}{\lambda ^{\alpha}w^{\alpha}} \right] \notag \\ = \, & \frac{(-1)^{k-1} \lambda^{k-1}\Gamma (1-\alpha)}{k!} \frac{d^k}{d \lambda ^k} \lambda ^{1-\alpha} \notag\\ = \, & \frac{(-1)^{k-1} \lambda^{k-1}\Gamma (1-\alpha)}{k!} (1-\alpha)(-\alpha)(-\alpha-1)\cdots (-\alpha -k+2) \lambda ^{-\alpha -k+1} \notag\\ = \, & \frac{\Gamma(2-\alpha)\Gamma(\alpha+k-1)}{k!\,\Gamma(\alpha)\lambda^{\alpha}}\notag \\ = \, & \frac{B(2-\alpha, k+ \alpha-1)}{\Gamma(\alpha) \lambda^{\alpha}}, \qquad \textrm{for } k \geq 1. \label{tempo medio di soggiorno stabile} \end{align} In the case $\alpha= \frac{1}{2}$, by using the duplication formula for the Gamma function and the Stirling formula, the quantity in \eqref{tempo medio di soggiorno stabile} can be estimated, for large values of $k$, in the following way: \begin{align} \mathbb{E} V_k^f(\infty)=\frac{\Gamma(k-\frac{1}{2})}{2\, k! \sqrt{\lambda}} = \frac{\Gamma(\frac{1}{2})2^{2-2k}\Gamma(2k-1)}{2\, k! \sqrt{\lambda}\Gamma(k)} \simeq \frac{1}{2k\sqrt{\lambda k}} \end{align} which is somehow related to \eqref{mean classic sojourn time}. We finally note that \begin{align} \frac{1}{(\alpha + k)(\alpha+k-1) \Gamma(\alpha) \lambda^\alpha} < \mathbb{E} V_k^f(\infty) < \frac{1}{(\alpha+k-1)\Gamma(\alpha)\lambda^{\alpha}}, \qquad \forall k \geq 1, \end{align} since \begin{align} \frac{1}{(\alpha +k)(\alpha+k-1)} < B(2-\alpha, k+\alpha-1)< \frac{1}{\alpha+k-1}. \end{align} \subsection{On the distribution of the sojourn times} Let $L^f_k(t)$ be the subordinated linear birth-death process with $k$ progenitors. We now study the distribution of the sojourn time \begin{align} V_{k}(t) \, = \, \int_0^t I_k \l L^f_k(s) \r ds \end{align} which represents the total amount of time that the process spends in the state $k$ up to time $t$. We now define the Laplace transform \begin{align} r_k(\mu) \, = \, \int_0^\infty e^{-\mu t} \Pr \ll L^f_k(t) = k \rr dt. \end{align} The hitting time \begin{align} V_k^{-1}(t) \, = \, \inf \ll w > 0 : V_k(w) > t \rr \end{align} is such that \begin{align} \mathbb{E} \int_0^\infty e^{-\mu V_k^{-1}(t)} dt \, = \, & \mathbb{E} \int_0^\infty e^{-\mu t} dV_k(t) \notag \\ = \, & \mathbb{E} \int_0^\infty e^{-\mu t} I_k\l L_k^f(t) \r dt \notag \\ = \, &r_k(\mu). \label{14} \end{align} By Proposition 3.17, chapter V, of \cite{getoor} we have \begin{align} \mathbb{E} e^{-\mu V_k^{-1}(t)} \, = \, e^{-t\frac{1}{r_k(\mu)}}.\label{sss} \end{align} Now we resort to the fact that \begin{align} \Pr \ll V_k(t) > x \rr \, = \, \Pr \ll V_k^{-1}(x) < t \rr \end{align} and thus we can write \begin{align} \Pr \ll V_k(t) \in dx \rr /dx\, = \, -\frac{\partial}{\partial x} \int_0^t \Pr \ll V_k^{-1}(x) \in dw \rr.
\label{inverso} \end{align} We therefore have that \begin{align} \frac{1}{dx}\int_0^\infty e^{-\mu t}\Pr \ll V_k(t) \in dx \rr dt \, & = -\frac{d}{dx} \int_0^{\infty} dt \, e^{-\mu t} \int_0^{t} \Pr \ll V_k^{-1}(x) \in dw\rr \notag \\ &= - \frac{d}{dx} \int_0^{\infty} \Pr \ll V_k^{-1}(x) \in dw\rr \int _w^{\infty} dt \, e^{-\mu t} \notag \\ &= -\frac{1}{\mu} \frac{d}{dx} \int_0^{\infty} e^{- \mu w} \Pr \ll V_k^{-1}(x) \in dw\rr \notag \\ & =-\frac{1}{\mu} \frac{d}{d x} e^{-x \frac{1}{r_k(\mu)}} \notag \\ &= \frac{1}{\mu \, r_k(\mu)}e^{-x \frac{1}{r_k(\mu)}}. \end{align} If $r_k(0)<\infty$, from \eqref{sss} it emerges that $\Pr \ll V_k^{-1}(t) < \infty \rr<1$; so the sample paths of $V_k(t)$ become constant after a random time with positive probability. This is related to the fact that the subordinated birth and death process extinguishes with probability one in a finite time when $\lambda = \mu$. We finally observe that in the case $k=1$, by \eqref{state probabilities L^f(t)}, we have \begin{align} r_1(\mu) \, = \, \int_0^{\infty} e^{-\mu t}\Pr \ll L^f(t)=1 \rr dt \, = \, \frac{d}{d\lambda}\left[ \lambda \int_0^{\infty}dw \, e^{-w}\frac{1}{\mu+f(\lambda w)}\right], \end{align} provided that Fubini's theorem can be applied. \end{document}
\begin{document} \openup 1.0\jot \date{}\title{{\Large \bf On the first Banhatti-Sombor index}\thanks{Supported by the National Natural Science Foundation of China (Nos. 12071411 and 11771443).}} \begin{abstract} Let $d_v$ be the degree of the vertex $v$ in a connected graph $G$. The first Banhatti-Sombor index of $G$ is defined as $BSO(G) =\sum_{uv\in E(G)}\sqrt{\frac{1}{d^2_u}+\frac{1}{d^2_v}}$, which is a new vertex-degree-based topological index introduced by Kulli. In this paper, the mathematical relations between the first Banhatti-Sombor index and some other well-known vertex-degree-based topological indices are established. In addition, the trees extremal with respect to the first Banhatti-Sombor index on trees and chemical trees are characterized, respectively. \bigskip \noindent {\bf MSC Classification:} 05C05, 05C07, 05C09, 05C92 \noindent {\bf Keywords:} The first Banhatti-Sombor index; Degree; Tree \end{abstract} \baselineskip 20pt \section{\large Introduction} Let $G$ be a simple undirected connected graph with vertex set $V(G)$ and edge set $E(G)$. The numbers of vertices and edges of $G$ are called its order and size, respectively. Denote by $\overline{G}$ the complement of $G$. For $v\in V(G)$, $d_v$ denotes the degree of vertex $v$ in $G$. The minimum and the maximum degree of $G$ are denoted by $\delta(G)$ and $\Delta(G)$, or simply $\delta$ and $\Delta$, respectively. A pendant vertex of $G$ is a vertex of degree $1$. A graph $G$ is called $(\Delta, \delta)$-semiregular if $\{d_u, d_v\} = \{\Delta, \delta\}$ holds for all edges $uv\in E(G)$. Denote by $K_n$, $C_n$, $P_n$ and $K_{1,\,n-1}$ the complete graph, cycle, path and star with $n$ vertices, respectively.
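To make the definition concrete, here is a small illustrative sketch (not part of the paper) that evaluates $BSO$ directly from an edge list. For the cycle $C_n$ every edge contributes $\sqrt{1/4+1/4}$, so $BSO(C_n)=n/\sqrt{2}$, while each of the $n-1$ edges of the star $K_{1,n-1}$ contributes $\sqrt{1+1/(n-1)^2}$, so $BSO(K_{1,n-1})=\sqrt{(n-1)^2+1}$.

```python
import math
from collections import Counter

def bso(edges):
    # First Banhatti-Sombor index computed from an edge list
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(math.sqrt(1 / deg[u] ** 2 + 1 / deg[v] ** 2) for u, v in edges)

n = 8
cycle = [(i, (i + 1) % n) for i in range(n)]     # C_n, all degrees 2
star = [(0, i) for i in range(1, n)]             # K_{1,n-1}
print(bso(cycle), n / math.sqrt(2))              # cycle: n / sqrt(2)
print(bso(star), math.sqrt((n - 1) ** 2 + 1))    # star: sqrt((n-1)^2 + 1)
```

The cycle value $n/\sqrt{2}$ is exactly the regular-graph value appearing repeatedly in the bounds below.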
The study of topological indices of various graph structures has been of interest to chemists, mathematicians, and scientists from related fields, due to the fact that topological indices play a significant role in mathematical chemistry, especially in QSPR/QSAR modeling. The Randi\'{c} index, introduced by Randi\'{c} \cite{R} in 1975, is the most important and widely applied one. It is defined as $$R(G)=\sum\limits_{uv\in E(G)}\frac{1}{\sqrt{d_ud_v}}.$$ The modified second Zagreb index of a graph $G$, introduced by Nikoli\'{c} et al. \cite{NKMT}, is defined as $$M_2^{*}(G)=\sum\limits_{uv\in E(G)}\frac{1}{d_ud_v}.$$ The harmonic index and the inverse degree index of a graph $G$, proposed by Fajtlowicz \cite{F}, are two of the oldest vertex-degree-based topological indices. They are respectively defined as $$H(G)=\sum\limits_{uv\in E(G)}\frac{2}{d_u+d_v}, \quad ID(G)=\sum\limits_{uv\in E(G)}\left(\frac{1}{d_u^2}+\frac{1}{d_v^2}\right).$$ The symmetric division deg index, the inverse sum indeg index and the geometric-arithmetic index of a graph $G$, introduced by Vuki\v{c}evi\'{c} \cite{V, VG, VF}, Ga\v{s}perov \cite{VG} and Furtula \cite{VF}, are respectively defined as $$SDD(G)=\sum\limits_{uv\in E(G)}\frac{d_u^2+d_v^2}{2d_ud_v}, \quad ISI(G)=\sum\limits_{uv\in E(G)}\frac{d_ud_v}{d_u+d_v}, \quad GA(G)=\sum\limits_{uv\in E(G)}\frac{2\sqrt{d_ud_v}}{d_u+d_v}.$$ The forgotten topological index, introduced by Furtula and Gutman \cite{FG}, is defined as $$F(G)=\sum\limits_{uv\in E(G)}\left(d_u^2+d_v^2\right).$$ In 2021, the Sombor index of a graph $G$ was defined as $$SO(G) =\sum_{uv\in E(G)}\sqrt{d_u^2+d_v^2},$$ which is a novel vertex-degree-based molecular structure descriptor proposed by Gutman \cite{G}. The investigation of the Sombor index of graphs has quickly received much attention.
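As an illustration (not part of the paper), the following sketch evaluates a few of the indices just defined on the path $P_4$, whose three edges have degree pairs $(1,2)$, $(2,2)$, $(2,1)$; it also checks the termwise identity $\sqrt{1/d_u^2+1/d_v^2}=\sqrt{d_u^2+d_v^2}/(d_ud_v)$ linking $BSO$ to the Sombor-type summands.

```python
import math

def indices(edges, deg):
    R   = sum(1 / math.sqrt(deg[u] * deg[v]) for u, v in edges)       # Randic
    H   = sum(2 / (deg[u] + deg[v]) for u, v in edges)                # harmonic
    SO  = sum(math.sqrt(deg[u] ** 2 + deg[v] ** 2) for u, v in edges) # Sombor
    BSO = sum(math.sqrt(1 / deg[u] ** 2 + 1 / deg[v] ** 2) for u, v in edges)
    return R, H, SO, BSO

# path P_4: vertices 0-1-2-3
edges = [(0, 1), (1, 2), (2, 3)]
deg = {0: 1, 1: 2, 2: 2, 3: 1}
R, H, SO, BSO = indices(edges, deg)
print(R, H, SO, BSO)
```

By direct computation, $R(P_4)=\sqrt{2}+1/2$, $H(P_4)=11/6$, $SO(P_4)=2\sqrt{5}+2\sqrt{2}$ and $BSO(P_4)=\sqrt{5}+\sqrt{2}/2$.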
In particular, Red\v{z}epovi\'{c} \cite{R2} showed that the Sombor index may be used successfully in modeling thermodynamic properties of compounds, due to the fact that the Sombor index has satisfactory prediction potential for the entropy and the enthalpy of vaporization of alkanes. Das et al. \cite{DCC} and Wang et al. \cite{WMLF} gave mathematical relations between the Sombor index and some other well-known vertex-degree-based topological indices. For other related results, one may refer to \cite{CGR, DTW, LMZ, KG, MMM, RDA} and the references therein. Inspired by the work on the Sombor index, the first Banhatti-Sombor index of a connected graph $G$ was introduced by Kulli \cite{K} very recently and is defined as $$BSO(G) =\sum_{uv\in E(G)}\sqrt{\frac{1}{d^2_u}+\frac{1}{d^2_v}}.$$ We find that the new index is closely related to numerous well-known vertex-degree-based topological indices. Moreover, the trees with the maximum and minimum first Banhatti-Sombor index among the set of trees with $n$ vertices are determined, respectively. In particular, the extremal values of the first Banhatti-Sombor index for chemical trees are characterized. \section{\large Preliminaries} \begin{lemma}\label{le2,1} For any edge $uv\in E(G)$, $d_u^2+d_v^2$ or $\frac{1}{d_u^2}+\frac{1}{d_v^2}$ is a constant if and only if $G$ is a regular graph (when $G$ is non-bipartite) or $G$ is a $(\Delta, \delta)$-semiregular bipartite graph (when $G$ is bipartite). \end{lemma} \begin{lemma}\label{le2,2} For any positive real numbers $a$ and $b$, we have $$\frac{2\sqrt{2}(a^2+b^2+ab)}{3(a+b)}\leq \sqrt{a^2+b^2}\leq\frac{\sqrt{2}(a^2+b^2)}{a+b}$$ with equality if and only if $a=b$.
\end{lemma} \begin{lemma}{\bf (\cite{R1})}\label{le2,3} If $a_i>0$, $b_i>0$, $p>0$, $i=1, 2, \ldots, n$, then the following inequality holds: $$\sum\limits_{i=1}^{n}\frac{a_i^{p+1}}{b_i^p}\geq \frac{\left(\sum\limits_{i=1}^{n}a_i\right)^{p+1}}{\left(\sum\limits_{i=1}^{n}b_i\right)^{p}}$$ with equality if and only if $\frac{a_1}{b_1}=\frac{a_2}{b_2}=\cdots=\frac{a_n}{b_n}$. \end{lemma} \begin{lemma}{\bf (\cite{DM})}\label{le2,4} Let $a_1, a_2, \ldots, a_n$ and $b_1, b_2, \ldots, b_n$ be real numbers such that $q\leq \frac{b_i}{a_i}\leq Q$ and $a_i\neq 0$ for $i=1, 2, \ldots, n$. Then there holds $$\sum\limits_{i=1}^{n}b_i^2+Qq\sum\limits_{i=1}^{n}a_i^2\leq (Q+q)\sum\limits_{i=1}^{n}a_ib_i$$ with equality if and only if $b_i=qa_i$ or $b_i=Qa_i$ for at least one $i$, $i=1, 2, \ldots, n$. \end{lemma} \begin{lemma}{\bf (\cite{D})}\label{le2,5} If $a=(a_1, a_2, \ldots, a_n)$, $b=(b_1, b_2, \ldots, b_n)$ are sequences of real numbers and $c=(c_1, c_2, \ldots, c_n)$, $d=(d_1, d_2, \ldots, d_n)$ are nonnegative, then $$\sum\limits_{i=1}^{n}d_i\sum\limits_{i=1}^{n}c_ia_i^2+\sum\limits_{i=1}^{n}c_i\sum\limits_{i=1}^{n}d_ib_i^2\geq 2\sum\limits_{i=1}^{n}c_ia_i\sum\limits_{i=1}^{n}d_ib_i$$ with equality if and only if $a=b=(k, k, \ldots, k)$ is a constant sequence for positive $c_i$ and $d_i$, $i=1, 2, \ldots, n$. \end{lemma} \begin{lemma}{\bf (\cite{LMR})}\label{le2,6} Let $a_1, a_2, \ldots, a_n$ and $b_1, b_2, \ldots, b_n$ be real numbers such that $a\leq a_i\leq A$ and $b\leq b_i\leq B$ for $i=1, 2, \ldots, n$.
Then $$\left\lvert \frac{1}{n}\sum\limits_{i=1}^{n}a_ib_i-\frac{1}{n}\sum\limits_{i=1}^{n}a_i\cdot\frac{1}{n}\sum\limits_{i=1}^{n}b_i\right\rvert\leq \frac{1}{n}\left\lfloor\frac{n}{2}\right\rfloor\left(1-\frac{1}{n}\left\lfloor\frac{n}{2}\right\rfloor\right)(A-a)(B-b),$$ where $\lfloor x\rfloor$ denotes the integer part of $x$. \end{lemma} \section{\large On relations between the first Banhatti-Sombor index and other degree-based indices} \subsection{\large Bounds in terms of order, size and degree} \begin{theorem}\label{th3,1} Let $G$ be a connected graph of order $n$ and size $m$ with the minimum degree $\delta$. Then $$\frac{n}{\sqrt{2}}\leq BSO(G)\leq \frac{\sqrt{2}m}{\delta}$$ with equality if and only if $G$ is a regular graph. \end{theorem} \begin{proof} Note that $$BSO(G)=\sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}}\leq \sum\limits_{uv\in E(G)}\sqrt{\frac{1}{\delta^2}+\frac{1}{\delta^2}}=\frac{\sqrt{2}m}{\delta}$$ with equality if and only if $d_u=\delta$ for every vertex $u$, that is, $G$ is a regular graph. By the Cauchy-Schwarz inequality, we have $$BSO(G)=\sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}}\geq \sum\limits_{uv\in E(G)}\frac{1}{\sqrt{2}}\left(\frac{1}{d_u}+\frac{1}{d_v}\right)=\frac{n}{\sqrt{2}}$$ with equality if and only if $d_u=d_v$ for every edge $uv$, that is, $G$ is a regular graph. \end{proof} \begin{corollary}\label{cor3,1} Let $G$ be a regular connected graph with $n$ vertices. Then $$BSO(G)= \frac{n}{\sqrt{2}}.$$ \end{corollary} \begin{remark} This shows that $BSO(G)$ does not necessarily increase with the number of edges of $G$. Clearly, $BSO(K_n)=BSO(C_n)$.
\end{remark} \begin{corollary}\label{cor3,2} Let $U_n$ be a unicyclic graph with $n$ vertices. Then $$BSO(U_n)\geq \frac{n}{\sqrt{2}}$$ with equality if and only if $U_n\cong C_n$. \end{corollary} \begin{corollary}\label{cor3,3} Let $G$ be a connected graph of order $n$ and size $m$ with the maximum degree $\Delta$ and the minimum degree $\delta$. Then $$\sqrt{2} n\leq BSO(G)+BSO(\overline{G})\leq \sqrt{2}\left(\frac{m}{\delta}+\frac{n(n-1)-2m}{2(n-1-\Delta)}\right)$$ with equality if and only if $G$ is a regular graph. \end{corollary} \begin{theorem}\label{th3,2} Let $G$ be a connected graph of order $n$ and size $m$ with the maximum degree $\Delta$. Then $$BSO(G)\leq n-m(2-\sqrt{2})\frac{1}{\Delta}$$ with equality if and only if $G$ is a regular graph. \end{theorem} \begin{proof} Without loss of generality, we suppose that $d_u\geq d_v$. Then we have \begin{eqnarray*} BSO(G) & = & \sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}} \leq \sum\limits_{uv\in E(G)}\left(\frac{1}{d_v}+(\sqrt{2}-1)\frac{1}{d_u}\right)\\ & \leq & \sum\limits_{uv\in E(G)}\left(\frac{1}{d_v}+\frac{1}{d_u}\right)+m(\sqrt{2}-2)\frac{1}{\Delta} = n-m(2-\sqrt{2})\frac{1}{\Delta} \end{eqnarray*} with equality if and only if $G$ is a regular graph. \end{proof} \begin{corollary}\label{cor3,4} Let $G$ be a connected graph of order $n$ and size $m$ with the maximum degree $\Delta$ and the minimum degree $\delta$. Then $$\sqrt{2} n\leq BSO(G)+BSO(\overline{G})\leq 2n-(2-\sqrt{2})\left(\frac{m}{\Delta}+\frac{n(n-1)-2m}{2(n-1-\delta)}\right)$$ with equality if and only if $G$ is a regular graph.
\end{corollary} \subsection{\large Bounds in terms of the Randi\'{c} index, the modified second Zagreb index and the inverse degree index} \begin{theorem}\label{th3,3} Let $G$ be a connected graph with the maximum degree $\Delta$. Then $$\sqrt{2}R(G)\leq BSO(G)\leq \sqrt{2}\Delta M_2^{*}(G)$$ with equality if and only if $G$ is a regular graph. \end{theorem} \begin{proof} By the arithmetic-geometric mean inequality, we have $$BSO(G)=\sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}}\geq \sum\limits_{uv\in E(G)}\sqrt{\frac{2}{d_ud_v}}=\sqrt{2}R(G)$$ with equality if and only if $d_u=d_v$ for every edge $uv$, that is, $G$ is a regular graph. It is easy to see that $$BSO(G)=\sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}}\leq \sum\limits_{uv\in E(G)}\frac{\sqrt{2\Delta^2}}{d_ud_v}=\sqrt{2}\Delta M_2^{*}(G)$$ with equality if and only if $d_u=d_v=\Delta$ for every edge $uv$, that is, $G$ is a regular graph. \end{proof} \begin{theorem}\label{th3,4} Let $G$ be a connected graph with the maximum degree $\Delta$ and the minimum degree $\delta$. Then $$BSO(G)\leq \sqrt{mID(G)}$$ with equality if and only if $G$ is a regular graph (when $G$ is non-bipartite) or $G$ is a $(\Delta, \delta)$-semiregular bipartite graph (when $G$ is bipartite). \end{theorem} \begin{proof} By the Cauchy-Schwarz inequality, we have $$BSO(G)= \sum\limits_{uv\in E(G)}1\cdot \sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}}\leq \sqrt{\sum\limits_{uv\in E(G)}1^2\sum\limits_{uv\in E(G)}\left(\frac{1}{d_u^2}+\frac{1}{d_v^2}\right)}=\sqrt{mID(G)},$$ with equality if and only if $\frac{1}{d_u^2}+\frac{1}{d_v^2}$ is a constant for any edge $uv$ in a connected graph $G$.
By Lemma \ref{le2,1}, $G$ is a regular graph (when $G$ is non-bipartite) or $G$ is a $(\Delta, \delta)$-semiregular bipartite graph (when $G$ is bipartite). \end{proof} \subsection{\large Bounds in terms of the harmonic index, the symmetric division deg index and the modified second Zagreb index} \begin{theorem}\label{th3,5} Let $G$ be a connected graph with the maximum degree $\Delta$ and the minimum degree $\delta$. Then $$\sqrt{2}H(G)\leq BSO(G)\leq \frac{1}{\sqrt{2}}\left(\frac{\Delta}{\delta}+\frac{\delta}{\Delta}\right)H(G)$$ with equality if and only if $G$ is a regular graph. \end{theorem} \begin{proof} By Lemma \ref{le2,2}, we have \begin{eqnarray*} BSO(G) & = & \sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}}\leq \sum\limits_{uv\in E(G)}\frac{\sqrt{2}\left(\frac{d_v}{d_u}+\frac{d_u}{d_v}\right)}{d_u+d_v}\\ & \leq & \sum\limits_{uv\in E(G)}\frac{1}{\sqrt{2}}\left(\frac{\Delta}{\delta}+\frac{\delta}{\Delta}\right)\frac{2}{d_u+d_v} = \frac{1}{\sqrt{2}}\left(\frac{\Delta}{\delta}+\frac{\delta}{\Delta}\right)H(G) \end{eqnarray*} with equality if and only if $d_u=d_v$ for every edge $uv$, that is, $G$ is a regular graph. By Lemma \ref{le2,2}, we have \begin{eqnarray*} BSO(G) & = & \sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}}\geq \sum\limits_{uv\in E(G)}\frac{2\sqrt{2}\left(\frac{d_v}{d_u}+\frac{d_u}{d_v}+1\right)}{3(d_u+d_v)}\\ & \geq & \sum\limits_{uv\in E(G)}\frac{2\sqrt{2}(2+1)}{3(d_u+d_v)}= \sqrt{2}H(G) \end{eqnarray*} with equality if and only if $d_u=d_v$ for every edge $uv$, that is, $G$ is a regular graph. \end{proof} \begin{theorem}\label{th3,6} Let $G$ be a connected graph with the maximum degree $\Delta$ and the minimum degree $\delta$.
Then $$\frac{2\sqrt{2}}{3\Delta}SDD(G)+\frac{\sqrt{2}}{3}H(G)\leq BSO(G)\leq \frac{\sqrt{2}}{\delta}SDD(G)$$ with equality if and only if $G$ is a regular graph. \end{theorem} \begin{proof} By Lemma \ref{le2,2}, we have \begin{eqnarray*} BSO(G) & = & \sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}}\leq \sum\limits_{uv\in E(G)}\frac{\sqrt{2}\left(\frac{d_v}{d_u}+\frac{d_u}{d_v}\right)}{d_u+d_v}\\ & \leq & \sum\limits_{uv\in E(G)}\frac{\sqrt{2}\left(\frac{d_v}{d_u}+\frac{d_u}{d_v}\right)}{\delta+\delta} = \frac{\sqrt{2}}{\delta}SDD(G) \end{eqnarray*} with equality if and only if $d_u=d_v=\delta$ for every edge $uv$, that is, $G$ is a regular graph. By Lemma \ref{le2,2}, we have \begin{eqnarray*} BSO(G) & = & \sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}} \geq \sum\limits_{uv\in E(G)}\frac{2\sqrt{2}\left(\frac{d_v}{d_u}+\frac{d_u}{d_v}+1\right)}{3(d_u+d_v)}\\ & = & \sum\limits_{uv\in E(G)}\frac{2\sqrt{2}\left(\frac{d_v}{d_u}+\frac{d_u}{d_v}\right)}{3(d_u+d_v)}+\sum\limits_{uv\in E(G)}\frac{2\sqrt{2}}{3(d_u+d_v)} \geq \frac{2\sqrt{2}}{3\Delta}SDD(G)+\frac{\sqrt{2}}{3}H(G) \end{eqnarray*} with equality if and only if $d_u=d_v=\Delta$ for every edge $uv$, that is, $G$ is a regular graph. \end{proof} \begin{theorem}\label{th3,7} Let $G$ be a connected graph with $n$ vertices. Then $$BSO(G)\leq \sqrt{2M_2^{*}(G)SDD(G)}$$ with equality if and only if $G$ is a regular graph (when $G$ is non-bipartite) or $G$ is a $(\Delta, \delta)$-semiregular bipartite graph (when $G$ is bipartite). \end{theorem} \begin{proof} Let $p=1$, $a_i=\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}}$ and $b_i=\frac{1}{d_ud_v}$ in Lemma \ref{le2,3}.
Then we have $$\frac{\left(\sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}}\right)^2}{\sum\limits_{uv\in E(G)}\frac{1}{d_ud_v}}\leq \sum\limits_{uv\in E(G)}\frac{\frac{1}{d_u^2}+\frac{1}{d_v^2}}{\frac{1}{d_ud_v}}=\sum\limits_{uv\in E(G)}\left(\frac{d_v}{d_u}+\frac{d_u}{d_v}\right),$$ that is, $$BSO(G)\leq \sqrt{2M_2^{*}(G)SDD(G)}$$ with equality if and only if $\sqrt{d_u^2+d_v^2}$ is a constant for any edge $uv$ in $G$; by Lemma \ref{le2,1}, $G$ is a regular graph (when $G$ is non-bipartite) or $G$ is a $(\Delta, \delta)$-semiregular bipartite graph (when $G$ is bipartite). \end{proof} \begin{corollary}\label{cor3,5} Let $G$ be a connected graph of order $n$ and size $m$ with the maximum degree $\Delta$ and the minimum degree $\delta$. Then $$BSO(G)\leq \sqrt{mM_2^{*}(G)\left(\frac{\Delta}{\delta}+\frac{\delta}{\Delta}\right)}$$ with equality if and only if $G$ is a regular graph or a $(\Delta, \delta)$-semiregular bipartite graph. \end{corollary} \begin{proof} Without loss of generality, we assume that $d_u\geq d_v$. By the proof of Theorem \ref{th3,7}, we have $$\frac{\left(\sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}}\right)^2}{\sum\limits_{uv\in E(G)}\frac{1}{d_ud_v}}\leq \sum\limits_{uv\in E(G)}\left(\frac{d_v}{d_u}+\frac{d_u}{d_v}\right)\leq \left(\frac{\Delta}{\delta}+\frac{\delta}{\Delta}\right)m$$ with equality if and only if $d_u=\Delta$ and $d_v=\delta$ for any edge $uv$. This implies that $G$ is a regular graph or a $(\Delta, \delta)$-semiregular bipartite graph. Conversely, it is easy to check that equality holds in Corollary \ref{cor3,5} when $G$ is a regular graph or a $(\Delta, \delta)$-semiregular bipartite graph.
\end{proof} \subsection{\large Bounds in terms of the forgotten index} \begin{theorem}\label{th3,8} Let $G$ be a connected graph of order $n$ and size $m$ with the maximum degree $\Delta$ and the minimum degree $\delta$. Then $$BSO(G)\geq \frac{\sqrt{2}}{\Delta^3+\delta^3}\left(\frac{m\delta^3}{\Delta}+\frac{F(G)}{2}\right)$$ with equality if and only if $G$ is a regular graph. \end{theorem} \begin{proof} Let $a_i=\sqrt{d_u^2+d_v^2}$ and $b_i=\frac{1}{d_ud_v}$ in Lemma \ref{le2,4}. Then $q=\frac{1}{\sqrt{2}\Delta^3}$ and $Q=\frac{1}{\sqrt{2}\delta^3}$. By Lemma \ref{le2,4}, we have $$\sum\limits_{uv\in E(G)}\frac{1}{d_u^2d_v^2}+\frac{1}{2\Delta^3\delta^3}\sum\limits_{uv\in E(G)}(d_u^2+d_v^2)\leq \frac{1}{\sqrt{2}}\left(\frac{1}{\Delta^3}+\frac{1}{\delta^3}\right)\sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}},$$ that is, $$\frac{m}{\Delta^4}+\frac{1}{2\Delta^3\delta^3}F(G)\leq \frac{1}{\sqrt{2}}\left(\frac{1}{\Delta^3}+\frac{1}{\delta^3}\right)BSO(G),$$ that is, $$BSO(G)\geq \frac{\sqrt{2}}{\Delta^3+\delta^3}\left(\frac{m\delta^3}{\Delta}+\frac{F(G)}{2}\right)$$ with equality if and only if $d_u=d_v=\Delta$ for any edge $uv$, that is, $G$ is a regular graph. \end{proof} \begin{theorem}\label{th3,9} Let $G$ be a connected graph of order $n$ and size $m$ with the maximum degree $\Delta$ and the minimum degree $\delta$. Then $$BSO(G)\leq \frac{2mSDD(G)+M_2^{*}(G)F(G)}{2SO(G)}$$ with equality if and only if $G$ is a regular graph (when $G$ is non-bipartite) or $G$ is a $(\Delta, \delta)$-semiregular bipartite graph (when $G$ is bipartite). \end{theorem} \begin{proof} Let $a_i=b_i=\sqrt{d_u^2+d_v^2}$, $c_i=\frac{1}{d_ud_v}$ and $d_i=1$ in Lemma \ref{le2,5}.
Then we have $$m\sum\limits_{uv\in E(G)}\frac{d_u^2+d_v^2}{d_ud_v}+\sum\limits_{uv\in E(G)}\frac{1}{d_ud_v}\sum\limits_{uv\in E(G)}(d_u^2+d_v^2)\geq 2\sum\limits_{uv\in E(G)}\frac{\sqrt{d_u^2+d_v^2}}{d_ud_v}\sum\limits_{uv\in E(G)}\sqrt{d_u^2+d_v^2},$$ that is, $$2mSDD(G)+M_2^{*}(G)F(G)\geq 2BSO(G)SO(G),$$ that is, $$BSO(G)\leq \frac{2mSDD(G)+M_2^{*}(G)F(G)}{2SO(G)}$$ with equality if and only if $a_i=b_i=\sqrt{d_u^2+d_v^2}$ is a constant for any edge $uv$ in $G$, that is, $d_u^2+d_v^2$ is a constant for any edge $uv$ in $G$; by Lemma \ref{le2,1}, $G$ is a regular graph (when $G$ is non-bipartite) or $G$ is a $(\Delta, \delta)$-semiregular bipartite graph (when $G$ is bipartite). \end{proof} \begin{corollary}\label{cor3,6} Let $G$ be a connected graph of size $m$ with the maximum degree $\Delta$ and the minimum degree $\delta$. Then $$BSO(G)\leq \frac{m(\Delta^2\delta+\delta^3)+\Delta F(G)}{2\sqrt{2}\Delta\delta^3}$$ with equality if and only if $G$ is a regular graph. \end{corollary} \begin{corollary}\label{cor3,7} Let $G$ be a connected graph of size $m$ with the maximum degree $\Delta$ and the minimum degree $\delta$. Then $$BSO(G)\leq \frac{m^2(2\Delta^3+\Delta^2\delta+\delta^3)}{2\Delta\delta^2SO(G)}$$ with equality if and only if $G$ is a regular graph. \end{corollary} \subsection{\large Bounds in terms of the inverse sum indeg index and the geometric-arithmetic index} \begin{theorem}\label{th3,10} Let $G$ be a connected graph of order $n$ and size $m$ with the maximum degree $\Delta$ and the minimum degree $\delta$. Then $$BSO(G)\leq \frac{H(G)SDD(G)+2M_2^{*}(G)ISI(G)}{\sqrt{2}GA(G)}$$ with equality if and only if $G$ is a regular graph.
\end{theorem} \begin{proof} Let $a_i=\sqrt{d_u^2+d_v^2}$, $b_i=\sqrt{2d_ud_v}$, $c_i=\frac{1}{d_ud_v}$ and $d_i=\frac{1}{d_u+d_v}$ in Lemma \ref{le2,5}. Then we have \begin{align*} & \sum\limits_{uv\in E(G)}\frac{1}{d_u+d_v}\sum\limits_{uv\in E(G)}\frac{d_u^2+d_v^2}{d_ud_v}+\sum\limits_{uv\in E(G)}\frac{1}{d_ud_v}\sum\limits_{uv\in E(G)}\frac{2d_ud_v}{d_u+d_v}\\ \geq {}& 2\sum\limits_{uv\in E(G)}\frac{\sqrt{d_u^2+d_v^2}}{d_ud_v}\sum\limits_{uv\in E(G)}\frac{\sqrt{2d_ud_v}}{d_u+d_v}, \end{align*} that is, $$H(G)SDD(G)+2M_2^{*}(G)ISI(G)\geq \sqrt{2}BSO(G)GA(G),$$ that is, $$BSO(G)\leq \frac{H(G)SDD(G)+2M_2^{*}(G)ISI(G)}{\sqrt{2}GA(G)}$$ with equality if and only if $\sqrt{d_u^2+d_v^2}=\sqrt{2d_ud_v}$ for any edge $uv$, that is, $G$ is a regular graph. \end{proof} \begin{corollary}\label{cor3,8} Let $G$ be a connected graph of size $m$ with the maximum degree $\Delta$ and the minimum degree $\delta$. Then $$BSO(G)\leq \frac{m^2\Delta^2+m^2\delta^2+4m\Delta ISI(G)}{2\sqrt{2}\Delta\delta^2GA(G)}$$ with equality if and only if $G$ is a regular graph. \end{corollary} \subsection{\large Bounds in terms of the Sombor index and the modified second Zagreb index} \begin{theorem}\label{th3,11} Let $G$ be a connected graph of size $m$ with the maximum degree $\Delta$ and the minimum degree $\delta$. Then $$\frac{2m^2}{SO(G)}\leq BSO(G)\leq \frac{1}{\delta^2}SO(G)$$ with equality if and only if $G$ is a regular graph. \end{theorem} \begin{proof} It is easy to see that $$BSO(G)=\sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}}\leq \frac{1}{\delta^2}\sum\limits_{uv\in E(G)}\sqrt{d_u^2+d_v^2}=\frac{1}{\delta^2}SO(G)$$ with equality if and only if $d_u=d_v=\delta$ for every edge $uv$, that is, $G$ is a regular graph.
Let $a_i=b_i=\frac{1}{\sqrt{d_ud_v}}$ and $c_i=d_i=\sqrt{d_u^2+d_v^2}$ in Lemma \ref{le2,5}. Then $$2\sum\limits_{uv\in E(G)}\sqrt{d_u^2+d_v^2}\sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}}\geq 2\left(\sum\limits_{uv\in E(G)}\sqrt{\frac{d_u^2+d_v^2}{d_ud_v}}\right)^2\geq 4m^2,$$ that is, $$2SO(G)BSO(G)\geq 4m^2,$$ that is, $$BSO(G)\geq \frac{2m^2}{SO(G)}$$ with equality if and only if $G$ is a regular graph. \end{proof} \begin{theorem}\label{th3,12} Let $G$ be a connected graph of order $n$ and size $m$ with the maximum degree $\Delta$ and the minimum degree $\delta$. Then $$\left\lvert \frac{1}{m}BSO(G)-\frac{1}{m^2}SO(G)M_2^{*}(G)\right\rvert\leq \xi(m)\frac{\sqrt{2}(\Delta+\delta)(\Delta-\delta)^2}{\Delta^2\delta^2},$$ where $$\xi(m)=\frac{1}{4}\left(1-\frac{1+(-1)^{m+1}}{2m^2}\right).$$ \end{theorem} \begin{proof} Let $a_i=\sqrt{d_u^2+d_v^2}$ and $b_i=\frac{1}{d_ud_v}$ in Lemma \ref{le2,6}. Then $a=\sqrt{2}\delta$, $A=\sqrt{2}\Delta$, $b=\frac{1}{\Delta^2}$ and $B=\frac{1}{\delta^2}$.
By Lemma \ref{le2,6}, we have \begin{align*} & \left\lvert \frac{1}{m}\sum\limits_{uv\in E(G)}\sqrt{\frac{1}{d_u^2}+\frac{1}{d_v^2}}-\frac{1}{m^2}\sum\limits_{uv\in E(G)}\sqrt{d_u^2+d_v^2}\sum\limits_{uv\in E(G)}\frac{1}{d_ud_v}\right\rvert \\ \leq {}& \frac{1}{m}\left\lfloor\frac{m}{2}\right\rfloor\left(1-\frac{1}{m}\left\lfloor\frac{m}{2}\right\rfloor\right) \sqrt{2}(\Delta-\delta)\left(\frac{1}{\delta^2}-\frac{1}{\Delta^2}\right), \end{align*} that is, $$\left\lvert \frac{1}{m}BSO(G)-\frac{1}{m^2}SO(G)M_2^{*}(G)\right\rvert\leq \xi(m)\frac{\sqrt{2}(\Delta+\delta)(\Delta-\delta)^2}{\Delta^2\delta^2},$$ where $$\xi(m)=\frac{1}{m}\left\lfloor\frac{m}{2}\right\rfloor\left(1-\frac{1}{m}\left\lfloor\frac{m}{2}\right\rfloor\right) =\frac{1}{4}\left(1-\frac{1+(-1)^{m+1}}{2m^2}\right).$$ \end{proof} \section{\large The first Banhatti-Sombor index of trees} In this section, we determine the trees with the maximum and the minimum first Banhatti-Sombor index among all trees of order $n$, respectively. For a tree $T_n$ of order $n$ with maximum degree $\Delta$, denote by $n_i$ the number of vertices of degree $i$ in $T_n$ for $1\leq i\leq\Delta$, and by $m_{i,j}$ the number of edges of $T_n$ joining a vertex of degree $i$ to a vertex of degree $j$, where $1\leq i\leq j\leq\Delta$. Note that $T_n$ is connected, so $m_{1,1}=0$ for $n\geq 3$. Let $N=\{(i,j)\in\mathbb{N}\times\mathbb{N}:1\leq i\leq j\leq\Delta\}$.
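For example, for the two extreme trees on $5$ vertices we have $$P_5:\ \Delta=2,\ n_1=2,\ n_2=3,\ m_{1,2}=2,\ m_{2,2}=2; \qquad K_{1,4}:\ \Delta=4,\ n_1=4,\ n_4=1,\ m_{1,4}=4,$$ and these two trees are precisely the extremal trees appearing in Theorem \ref{th4,1} below.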
Then clearly the following relations hold: \begin{equation}\label{eq1} |V(T_n)|=n=\sum_{i=1}^{\Delta}n_i, \end{equation} \begin{equation}\label{eq2} |E(T_n)|=n-1=\sum_{(i,j)\in N}m_{i,j}, \end{equation} and \begin{equation}\label{eq3} \left\{ \begin{aligned} &2m_{1,1}+m_{1,2}+\ldots+m_{1,\Delta}=n_1,\\ &m_{1,2}+2m_{2,2}+\ldots+m_{2,\Delta}=2n_2,\\ &\ldots\\ &m_{1,\Delta}+m_{2,\Delta}+\ldots+2m_{\Delta,\Delta}=\Delta n_\Delta. \end{aligned} \right. \end{equation} It follows easily from (\ref{eq1}) and (\ref{eq3}) that \begin{equation}\label{eq4} n=\sum_{(i,j)\in N}\frac{i+j}{ij}m_{i,j}. \end{equation} Moreover, the definition of the first Banhatti-Sombor index is equivalent to \begin{equation}\label{eq5} BSO(T_n)=\sum_{(i,j)\in N}\sqrt{\frac{1}{i^2}+\frac{1}{j^2}}m_{i,j}. \end{equation} \begin{theorem}\label{th4,1} Let $T_n$ be a tree with $n$ vertices. Then $$\frac{\sqrt{2}(n-3)}{2}+\sqrt{5}\leq BSO(T_n)\leq \sqrt{1+(n-1)^2}.$$ The equality on the left-hand side holds if and only if $T_n\cong P_n$, and the equality on the right-hand side holds if and only if $T_n\cong K_{1,\,n-1}$. \end{theorem} \begin{proof} First, we consider the equality on the left-hand side. Let $N_1=\Big\{(i,j)\in N:(i,j)\neq(1,1),(i,j)\neq(1,2),(i,j)\neq(2,2)\Big\}$.
By equation (\ref{eq4}), we have $$3m_{1,2}+2m_{2,2}=2n-\sum_{(i,j)\in N_1}\frac{2(i+j)}{ij}m_{i,j},$$ and by equation (\ref{eq2}), we have $$m_{1,2}+m_{2,2}=n-1-\sum_{(i,j)\in N_1}m_{i,j}.$$ Then we obtain the following expressions for $m_{1,2}$ and $m_{2,2}$: $$m_{1,2}=2+\sum_{(i,j)\in N_1}\Big[2-\frac{2(i+j)}{ij}\Big]m_{i,j},$$ $$m_{2,2}=n-3+\sum_{(i,j)\in N_1}\Big[\frac{2(i+j)}{ij}-3\Big]m_{i,j}.$$ According to the expression (\ref{eq5}), we have \begin{eqnarray*} BSO(T_n)& = & m_{1,2}\sqrt{\frac{1}{4}+1}+m_{2,2}\sqrt{\frac{1}{4}+\frac{1}{4}}+\sum_{(i,j)\in N_1}\!\sqrt{\frac{1}{i^2}+\frac{1}{j^2}}m_{i,j}\\ & = & \sqrt{5}\Big[1+\sum_{(i,j)\in N_1}\Big(1-\frac{i+j}{ij}\Big)m_{i,j}\Big]+\frac{\sqrt{2}}{2}\Big\{n-3+\sum_{(i,j)\in N_1}\Big[\frac{2(i+j)}{ij}-3\Big]m_{i,j}\Big\}\\ & &+\sum_{(i,j)\in N_1}\sqrt{\frac{1}{i^2}+\frac{1}{j^2}}m_{i,j}\\ & = & \frac{\sqrt{2}}{2}(n-3)+\sqrt{5}+\sum_{(i,j)\in N_1}\Big[\sqrt{\frac{1}{i^2}+\frac{1}{j^2}}+({\sqrt{2}-\sqrt{5}})\frac{i+j}{ij}+\sqrt{5}-\frac{3\sqrt{2}}{2}\Big]m_{i,j}. \end{eqnarray*} Let $f(x,y)=\sqrt{\frac{1}{x^2}+\frac{1}{y^2}}+({\sqrt{2}-\sqrt{5}})\frac{x+y}{xy}+\sqrt{5}-\frac{3\sqrt{2}}{2}$, where $(x,y)\in N$. It is easy to see that $f(1,2)=0$, $f(2,2)=0$ and $f(x,y)> 0$ for $(x,y)\in N_1$. Therefore, $BSO(T_n)=\frac{\sqrt{2}}{2}(n-3)+\sqrt{5}$ if and only if $m_{i,j}=0$ for all $(i,j)\in N_1$, and this occurs if and only if $T_n\cong P_n$. Conversely, if $T_n\cong P_n$, by (\ref{eq5}), we obtain $$BSO(P_n)=2\sqrt{\frac{1}{4}+1}+(n-3)\sqrt{\frac{1}{4}+\frac{1}{4}}=\frac{\sqrt{2}}{2}(n-3)+\sqrt{5}.$$ Thus, we have $BSO(T_n)\geq BSO(P_n)$ with equality if and only if $T_n\cong P_n$. Now, we consider the equality on the right-hand side. Let $N_2=\Big\{(i,j)\in N:(i,j)\neq(1,1),(i,j)\neq(1,\Delta),(i,j)\neq(\Delta,\Delta)\Big\}$.
Similar to the proof above, by equation (\ref{eq4}), we have $$(\Delta+1)m_{1,\Delta}+2m_{\Delta,\Delta}=\Delta n-\sum_{(i,j)\in N_2}\Delta\frac{i+j}{ij}m_{i,j},$$ and by equation (\ref{eq2}), we have $$m_{1,\Delta}+m_{\Delta,\Delta}=n-1-\sum_{(i,j)\in N_2}m_{i,j}.$$ Then we obtain the following expressions for $m_{1,\Delta}$ and $m_{\Delta,\Delta}$: $$(\Delta-1)m_{1,\Delta}=(\Delta-2)n+2-\sum_{(i,j)\in N_2}\Big(\Delta\frac{i+j}{ij}-2\Big)m_{i,j},$$ $$(\Delta-1)m_{\Delta,\Delta}=n-(\Delta+1)+\sum_{(i,j)\in N_2}\Big(\Delta\frac{i+j}{ij}-(\Delta+1)\Big)m_{i,j}.$$ According to the expression (\ref{eq5}), we have \begin{eqnarray*} BSO(T_n) & = & m_{1,\Delta}\sqrt{\frac{1}{\Delta^2}+1}+m_{\Delta,\Delta}\sqrt{\frac{1}{\Delta^2}+\frac{1}{\Delta^2}}+\sum_{(i,j)\in N_2}\sqrt{\frac{1}{i^2}+\frac{1}{j^2}}m_{i,j}\\ & = & \frac{\sqrt{\Delta^2+1}}{\Delta(\Delta-1)}\Big[(\Delta\!-\!2)n+2-\sum_{(i,j)\in N_2}\Big(\Delta\frac{i+j}{ij}-2\Big)m_{i,j}\Big]\\ & & +\frac{\sqrt{2}}{\Delta(\Delta-1)}\Big[n\!-\!(\Delta+1)+\!\sum_{(i,j)\in N_2}\!\Big(\Delta\frac{i+j}{ij}\!-\!(\Delta+1)\Big)m_{i,j}\Big]\\ & & +\!\sum_{(i,j)\in N_2}\!\sqrt{\frac{1}{i^2}+\frac{1}{j^2}}m_{i,j}\\ & = & \frac{(\Delta-2)n\sqrt{\Delta^2+1}+\sqrt{2}(n-\Delta-1)+2\sqrt{\Delta^2+1}}{\Delta(\Delta-1)}\\ & & +\sum_{(i,j)\in N_2}\Big[\sqrt{\frac{1}{i^2}+\frac{1}{j^2}}+\frac{\sqrt{2}-\sqrt{\Delta^2+1}}{\Delta-1}\frac{i+j}{ij}+\frac{2\sqrt{\Delta^2+1}-\sqrt{2}(\Delta+1)} {\Delta(\Delta-1)}\Big]m_{i,j}.
\end{eqnarray*} Let $g(x,y)=\sqrt{\frac{1}{x^2}+\frac{1}{y^2}}+\frac{\sqrt{2}-\sqrt{\Delta^2+1}}{\Delta-1}\frac{x+y}{xy}+\frac{2\sqrt{\Delta^2+1}-\sqrt{2}(\Delta+1)}{\Delta(\Delta-1)}$, where $(x,y)\in N$. It is easy to see that $g(1,\Delta)=0$, $g(\Delta,\Delta)=0$ and $g(x,y)< 0$ for $(x,y)\in N_2$. Therefore, $BSO(T_n)=\frac{(\Delta-2)n\sqrt{\Delta^2+1}+\sqrt{2}(n-\Delta-1)+2\sqrt{\Delta^2+1}}{\Delta(\Delta-1)}$ if and only if $m_{i,j}=0$ for all $(i,j)\in N_2$, and this occurs if and only if $n_2=n_3=\ldots=n_{\Delta-1}=0$. Let $h(x)=\frac{(x-2)n\sqrt{x^2+1}+\sqrt{2}(n-x-1)+2\sqrt{x^2+1}}{x(x-1)}$. By taking the derivative, we see that $h(x)$ is an increasing function on $[2, +\infty)$. Thus $$h(\Delta)\leq h(n-1)=\sqrt{1+(n-1)^2}.$$ Conversely, $BSO(K_{1,\,n-1})=\sqrt{1+(n-1)^2}$. Thus, we have $BSO(T_n)\leq BSO(K_{1,\,n-1})$ with equality if and only if $T_n\cong K_{1,\,n-1}$. \end{proof} Similar to the method used in the proof of Theorem \ref{th4,1}, we now state an upper bound for chemical trees without proof. \begin{theorem}\label{th4,2} Let $T_n$ be a chemical tree with $n$ vertices. If $n-2\equiv 0 \pmod{3}$, then $$BSO(T_n)\leq \frac{2\sqrt{17}(n+1)+\sqrt{2}(n-5)}{12}$$ with equality if and only if $n_2=n_3=0$. \end{theorem} \small { \begin{thebibliography}{99} \bibitem{CGR} R. Cruz, I. Gutman, J. Rada, Sombor index of chemical graphs, Appl. Math. Comput. 399 (2021) 126018. \bibitem{D} S.S. Dragomir, On some inequalities (Romanian), ``Caiete Metodico \c{S}tiin\c{t}ifice'', No. 13, 1984, pp. 20. Faculty of Mathematics, Timi\c{s}oara University, Romania. \bibitem{DCC} K.Ch. Das, A.S. \c{C}evik, I.N. Cangul, Y. Shang, On Sombor index, Symmetry 13 (2021) 140. \bibitem{DM} J.B. Diaz, F.T. Metcalf, Stronger forms of a class of inequalities of G. P\'{o}lya-G. Szeg\H{o} and L.V. Kantorovich, Bull. Amer. Math. Soc. 69 (1963) 415-418.
\bibitem{DTW} H. Deng, Z. Tang, R. Wu, Molecular trees with extremal values of Sombor indices, Int. J. Quantum Chem. DOI: 10.1002/qua.26622. \bibitem{F} S. Fajtlowicz, On conjectures of Graffiti-II, Congr. Numer. 60 (1987) 187-197. \bibitem{FG} B. Furtula, I. Gutman, A forgotten topological index, J. Math. Chem. 53 (2015) 1184-1190. \bibitem{G} I. Gutman, Geometric approach to degree-based topological indices: Sombor indices, MATCH Commun. Math. Comput. Chem. 86 (2021) 11-16. \bibitem{K} V.R. Kulli, On Banhatti-Sombor indices, International Journal of Applied Chemistry 8 (2021) 21-25. \bibitem{KG} V.R. Kulli, I. Gutman, Computation of Sombor indices of certain networks, International Journal of Applied Chemistry 8 (2021) 1-5. \bibitem{KS} J. Karamata, Sur une in\'{e}galit\'{e} relative aux fonctions convexes, Publ. Math. Univ. Belgrade 1 (1932) 145-148. \bibitem{LMR} X. Li, R.N. Mohapatra, R.S. Rodriguez, Gr\"{u}ss-type inequalities, J. Math. Anal. Appl. 267 (2002) 434-443. \bibitem{LMZ} Z. Lin, L. Miao, T. Zhou, On the spectral radius, energy and Estrada index of the Sombor matrix of graphs, arXiv:2102.03960. \bibitem{MMM} I. Milovanovi\'{c}, E. Milovanovi\'{c}, M. Mateji\'{c}, On some mathematical properties of Sombor indices, Bull. Int. Math. Virtual Inst. 11 (2021) 341-353. \bibitem{NKMT} S. Nikoli\'{c}, G. Kova\v{c}evi\'{c}, A. Mili\v{c}evi\'{c}, N. Trinajsti\'{c}, The Zagreb indices 30 years after, Croat. Chem. Acta 76 (2003) 113-124. \bibitem{R} M. Randi\'{c}, On characterization of molecular branching, J. Am. Chem. Soc. 97 (1975) 6609-6615. \bibitem{R1} J. Radon, \"{U}ber die absolut additiven Mengenfunktionen, Wiener-Sitzungsber. (IIa) 122 (1913) 1295-1438. \bibitem{R2} I. Red\v{z}epovi\'{c}, Chemical applicability of Sombor indices, J. Serb. Chem. Soc. https://doi.org/10.2298/JSC201215006R. \bibitem{RDA} T. R\'{e}ti, T. Do\v{s}li\'{c}, A. Ali, On the Sombor index of graphs, Contrib. Math. 3 (2021) 11-18.
\bibitem{V} D. Vuki\v{c}evi\'{c}, Bond additive modelling 2. Mathematical properties of max-min rodeg index, Croat. Chem. Acta 83 (2010) 261-273. \bibitem{VF} D. Vuki\v{c}evi\'{c}, B. Furtula, Topological index based on the ratios of geometrical and arithmetical means of end-vertex degrees of edges, J. Math. Chem. 46 (2009) 1369-1376. \bibitem{VG} D. Vuki\v{c}evi\'{c}, M. Ga\v{s}perov, Bond additive modelling 1. Adriatic indices, Croat. Chem. Acta 83 (2010) 243-260. \bibitem{WMLF} Z. Wang, Y. Mao, Y. Li, B. Furtula, On relations between Sombor and other degree-based indices, J. Appl. Math. Comput. https://doi.org/10.1007/s12190-021-01516-x. \end{thebibliography} } \end{document}
\begin{document} \title{Cherlin's conjecture for almost simple groups of Lie rank $1$} \author{Nick Gill} \address{ Department of Mathematics, University of South Wales, Treforest, CF37 1DL, U.K.} \email{[email protected]} \author{Francis Hunt} \address{ Department of Mathematics, University of South Wales, Treforest, CF37 1DL, U.K.} \email{[email protected]} \author{Pablo Spiga} \address{Dipartimento di Matematica e Applicazioni, University of Milano-Bicocca, Via Cozzi 55, 20125 Milano, Italy} \email{[email protected]} \begin{abstract} We prove Cherlin's conjecture, concerning binary primitive permutation groups, for those groups with socle isomorphic to $\mathrm{PSL}_2(q)$, $\mathop{{^2\mathrm{B}_2}(q)}$, $\mathop{{^2\mathrm{G}_2}(q)}$ or $\mathrm{PSU}_3(q)$. Our method uses the notion of a ``strongly non-binary action''. \end{abstract} \maketitle \section{Introduction} All groups in this paper are finite. In this note our main result is the following. \begin{thm}\label{t: psl2} Let $G$ be an almost simple primitive permutation group on the set $\Omega$ with socle isomorphic to a linear group $\mathrm{PSL}_2(q)$, or to a Suzuki group $\mathop{{^2\mathrm{B}_2}(q)}$, or to a Ree group $\mathop{{^2\mathrm{G}_2}(q)}$, or to a unitary group $\mathrm{PSU}_3(q)$. Then, either $G$ is not binary, or $G=\mathop{\mathrm{Sym}}(\Omega)\cong \mathop{\mathrm{Sym}}(5)\cong \mathrm{P\Gamma L}_2(4)\cong \mathrm{PGL}_2(5)$, or $G=\mathop{\mathrm{Sym}}(\Omega)\cong \mathop{\mathrm{Sym}}(6)\cong \mathrm{P}\Sigma\mathrm{L}_2(9)$. \end{thm} Theorem~\ref{t: psl2} is a contribution towards a proof of a conjecture of Cherlin \cite{cherlin1}. This conjecture asserts that a primitive binary permutation group lies on a short explicit list of known actions. The precise definition of ``binary'' and ``binary action'' is given in Section~\ref{s: bin back} below.
An equivalent definition, couched in terms of ``relational structures'', can be found in \cite{cherlin2}; the connection between this conjecture and Lachlan's theory of sporadic structures can be found in \cite{cherlin1}. It is this connection that really enlivens the study of binary permutation groups, and provides motivation to work towards a proof of Cherlin's conjecture. Let us briefly describe the status of this conjecture. By work of Cherlin \cite{cherlin2} and Wiscons \cite{wiscons}, this very general conjecture has been reduced to the following statement concerning almost simple groups. \begin{conj}\label{conj: cherlin} If $G$ is a binary almost simple primitive permutation group on the set $\Omega$, then $G=\mathrm{Sym}(\Omega)$. \end{conj} One sees immediately that Theorem~\ref{t: psl2} settles Conjecture~\ref{conj: cherlin} for almost simple primitive permutation groups with socle isomorphic to $\mathrm{PSL}_2(q)$, or $\mathop{{^2\mathrm{B}_2}(q)}$, or $\mathop{{^2\mathrm{G}_2}(q)}$, or $\mathrm{PSU}_3(q)$, that is, for each Lie type group of twisted Lie rank $1$. Theorem~\ref{t: psl2} is the third recent result of this type; in recent work, the first and third authors settled Conjecture~\ref{conj: cherlin} for groups with alternating socle, and for the $\mathcal{C}_1$ primitive actions of groups with classical socle \cite{gs_binary}. A brief word about our methods: the aforementioned work on groups with alternating or classical socle was based on the study of so-called ``beautiful subsets''. These objects are defined below, and their usefulness is explained by Lemma~\ref{l: forbidden} and Example~\ref{ex: snba1} below, which together imply that whenever an action admits a beautiful subset the action is not binary. In the current note our approach is different, because the family of actions under consideration -- the primitive actions of almost simple groups with socle a Lie group of Lie rank $1$ -- very often do not have beautiful subsets.
To deal with this situation we need to develop a more general theory: Suppose that we have a group $G$ acting on a set $\Omega$, and we want to show that this action is non-binary. The key property of beautiful subsets that makes them useful is that they allow us to argue ``inductively'', in the sense that if we can find a subset $\Lambda$ of $\Omega$ that is ``beautiful'', then the full action of $G$ on $\Omega$ is non-binary. In order to deal with the absence of beautiful subsets, we have studied this inductive property more formally via the notion of a ``strongly non-binary subset''. The theory of such subsets is developed in \S\ref{s: bin back} and allows us to apply an inductive argument in a more general setting. The advantages of Theorem~\ref{t: psl2} and of this theory are several: firstly, Theorem~\ref{t: psl2} is a material advance towards a proof of Conjecture~\ref{conj: cherlin}; secondly, it demonstrates the possibility of obtaining results in situations where one cannot use the notion of a beautiful subset, as in~\cite{gs_binary}; thirdly, it turns out that the rank $1$ groups tend to be a sticking point when making general arguments concerning binary groups. We hope, therefore, that by disposing of this case here, we will be able to deal more easily with the remaining cases required for a proof of Cherlin's conjecture. Investigation in this direction is in progress, see~\cite{gls_binary}. \subsection{Structure of the paper} The proof of Theorem~\ref{t: psl2} is split into several parts. First, in \S\ref{s: bin back}, after giving a number of definitions, we prove some general results about binary actions; in particular Lemma~\ref{l: forbidden} is vital. 
In \S\ref{s: structure} we give some basic information concerning groups with socle isomorphic to $\mathrm{PSL}_2(q)$; then in \S\ref{s: fp} we calculate the size of the fixed set for various elements of $\mathrm{P}\Gamma\mathrm{L}_2(q)$ in various primitive actions; these results are then used to prove Lemmas~\ref{l: handy q odd} and \ref{l: handy q even}; it is worth remarking that these fixed point calculations yield the required conclusions almost immediately for the groups $\mathrm{PSL}_2(q)$ and $\mathrm{PGL}_2(q)$, whereas a finer analysis is required to deal with those almost simple groups that contain field automorphisms. The three lemmas just mentioned -- Lemmas~\ref{l: forbidden}, \ref{l: handy q odd} and \ref{l: handy q even} -- directly imply Theorem~\ref{t: psl2} for $\mathrm{PSL}_2(q)$ when $q\geq 9$. The remaining small cases, when $q\in\{4,5,7,8\}$, can be verified directly using GAP \cite{GAP} or by referencing the calculations of Wiscons \cite{wiscons2}. In \S\ref{s: suzuki}, \S\ref{s: ree} and \S\ref{s: psu}, we give a proof of Theorem~\ref{t: psl2} for groups with socle $\mathop{{^2\mathrm{B}_2}(q)}$, $\mathop{{^2\mathrm{G}_2}(q)}$ and $\mathrm{PSU}_3(q)$, respectively. In the first two cases the theorems are easy consequences of propositions asserting that the primitive actions in question admit strongly non-binary subsets (see \S\ref{s: bin back} for the definition of a strongly non-binary subset). The final case -- socle $\mathrm{PSU}_3(q)$ -- is dealt with somewhat differently. \section{Binary actions and strongly non-binary actions}\label{s: bin back} Throughout this section $G$ is a finite group acting (not necessarily faithfully) on a set $\Omega$ of cardinality $t$. Here, our job is to give a definition of ``binary action'', and of ``strongly non-binary action'', and to connect these definitions to earlier work on ``beautiful sets''.
Given a subset $\Lambda$ of $\Omega$, we write $G_\Lambda:=\{g\in G\mid \lambda^g\in\Lambda,\forall \lambda\in \Lambda\}$ for the set-wise stabilizer of $\Lambda$, $G_{(\Lambda)}:=\{g\in G\mid \lambda^g=\lambda, \forall\lambda\in \Lambda\}$ for the point-wise stabilizer of $\Lambda$, and $G^\Lambda$ for the permutation group induced on $\Lambda$ by the action of $G_\Lambda$. In particular, $G^\Lambda\cong G_\Lambda/G_{(\Lambda)}$. Given a positive integer $r$, the group $G$ is called \textit{$r$-subtuple complete} with respect to the pair of $n$-tuples $I, J \in \Omega^n$, if it contains elements that map every subtuple of size $r$ in $I$ to the corresponding subtuple in $J$ i.e. $$\textrm{for every } k_1, k_2, \dots, k_r\in\{ 1, \ldots, n\}, \textrm{ there exists } h \in G \textrm{ with }I_{k_i}^h=J_{k_i}, \textrm{ for every }i \in\{ 1, \ldots, r\}.$$ Here $I_k$ denotes the $k^{\text{th}}$ element of tuple $I$ and $I^g$ denotes the image of $I$ under the action of $g$. Note that $n$-subtuple completeness simply requires the existence of an element of $G$ mapping $I$ to $J$. The group $G$ is said to be of {\it arity $r$} if, for all $n\in\mathbb{N}$ with $n\geq r$ and for all $n$-tuples $I, J \in \Omega^n$, $r$-subtuple completeness (with respect to $I$ and $J$) implies $n$-subtuple completeness (with respect to $I$ and $J$). When $G$ has arity 2, we say that $G$ is {\it binary}. A pair $(I,J)$ of $n$-tuples of $\Omega$ is called a {\it non-binary witness for the action of $G$ on $\Omega$}, if $G$ is $2$-subtuple complete with respect to $I$ and $J$, but not $n$-subtuple complete, that is, $I$ and $J$ are not $G$-conjugate. To show that the action of $G$ on $\Omega$ is non-binary it is sufficient to find a non-binary witness $(I,J)$. We say that the action of $G$ on $\Omega$ is \emph{strongly non-binary} if there exists a non-binary witness $(I,J)$ such that \begin{itemize} \item $I$ and $J$ are $t$-tuples where $|\Omega|=t$; \item the entries of $I$ (resp. 
$J$) are distinct entries of $\Omega$. \end{itemize} \begin{example}\label{ex: snba1}{\rm If $G$ acts $2$-transitively on $\Omega$ with kernel $K$ and $G/K\cong G^\Omega\not\cong\mathop{\mathrm{Sym}}(\Omega)$, then $G$ is strongly non-binary. Indeed, by $2$-transitivity, any pair $(I,J)$ of $t$-tuples of distinct elements from $\Omega$ is $2$-subtuple complete. Since $G/K\cong G^\Omega\not\cong\mathop{\mathrm{Sym}}(\Omega)$, we can choose $I$ and $J$ in distinct $G$-orbits. Thus $(I,J)$ is a non-binary witness.} \end{example} \begin{example}\label{ex: snba2}{\rm Let $G$ be a subgroup of $\mathop{\mathrm{Sym}}(\Omega)$, let $g_1, g_2,\ldots,g_r$ be elements of $G$, and let $\tau,\eta_1,\ldots,\eta_r$ be elements of $\mathop{\mathrm{Sym}}(\Omega)$ with \[ g_1=\tau\eta_1,\,\,g_2=\tau\eta_2,\,\,\ldots,\,\,g_r=\tau\eta_r. \] Suppose that, for every $i\in \{1,\ldots,r\}$, the support of $\tau$ is disjoint from the support of $\eta_i$; moreover, suppose that, for each $\omega\in\Omega$, there exists $i\in\{1,\ldots,r\}$ (which may depend upon $\omega$) with $\omega^{\eta_i}=\omega$. Suppose, in addition, $\tau\notin G$. Now, writing $\Omega=\{\omega_1,\dots, \omega_t\}$, observe that \[ ((\omega_1,\omega_2,\dots, \omega_t), (\omega_1^{\tau},\omega_2^{\tau}, \ldots,\omega_t^{\tau})) \] is a non-binary witness. Thus the action of $G$ on $\Omega$ is strongly non-binary.} \end{example} The notion of a strongly non-binary action allows us to ``argue inductively'' using suitably chosen set-stabilizers. The following lemma (which was first stated in~\cite{gs_binary} and which, in any case, is virtually self-evident) clarifies what we mean by this. \begin{lem}\label{l: forbidden} Suppose that there exists a subset $\Lambda \subseteq \Omega$ such that $G^\Lambda$ is strongly non-binary. Then $G$ is not binary. \end{lem} In what follows a \emph{strongly non-binary subset} is a subset $\Lambda$ of $\Omega$ such that $G^\Lambda$ is strongly non-binary. 
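The definitions above are amenable to brute-force verification on small examples. The following sketch (ours, purely illustrative; all identifiers are our own) checks $2$-subtuple completeness directly from the definition and searches for a strongly non-binary witness for $\mathop{\mathrm{Alt}}(4)$ in its natural $2$-transitive action on four points, in the spirit of Example~\ref{ex: snba1}.

```python
from itertools import combinations, permutations

def sign(p):
    """Sign of a permutation given as a tuple (point i maps to p[i])."""
    s = 1
    for i, j in combinations(range(len(p)), 2):
        if p[i] > p[j]:
            s = -s
    return s

# Alt(4) acting naturally on {0,1,2,3}: the twelve even permutations.
G = [p for p in permutations(range(4)) if sign(p) == 1]

def image(tup, g):
    """Image of a tuple of points under the permutation g."""
    return tuple(g[x] for x in tup)

def two_subtuple_complete(I, J, G):
    """Is G 2-subtuple complete with respect to the pair (I, J)?"""
    return all(
        any(g[I[j]] == J[j] and g[I[k]] == J[k] for g in G)
        for j, k in combinations(range(len(I)), 2)
    )

# Search for a strongly non-binary witness: a pair of 4-tuples with
# distinct entries that is 2-subtuple complete but not G-conjugate.
tuples = list(permutations(range(4)))
witness = next(
    (I, J) for I in tuples for J in tuples
    if two_subtuple_complete(I, J, G) and all(image(I, g) != J for g in G)
)
print(witness)
```

Any pair $(I,I^\sigma)$ with $\sigma$ an odd permutation works here: $2$-transitivity gives $2$-subtuple completeness, while tuples with all four entries distinct correspond bijectively to permutations, so $I$ and $I^\sigma$ cannot be $\mathop{\mathrm{Alt}}(4)$-conjugate.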
We are ready for the third main concept of this section, that of a ``beautiful subset''; this is closely related to the example of a strongly non-binary action given in Example~\ref{ex: snba1}. Specifically, we say that a subset $\Lambda\subseteq \Omega$ is a \emph{$G$-beautiful subset} if $G^\Lambda$ is a $2$-transitive subgroup of $\mathrm{Sym}(\Lambda)$ which is neither $\mathrm{Alt}(\Lambda)$ nor $\mathrm{Sym}(\Lambda)$. Note that we will tend to drop the ``$G$'' in $G$-beautiful, so long as the context is clear (for instance, when $G$ is a permutation group on $\Omega$, that is, $G\le\mathop{\mathrm{Sym}}(\Omega)$). In the light of Example~\ref{ex: snba1}, the curious reader may be wondering why the definition of a beautiful subset excludes also the possibility that $G^\Lambda=\mathrm{Alt}(\Lambda)$. This exclusion is explained by the following lemma, which is~\cite[Corollary 2.3]{gs_binary}. \begin{lem}\label{l: beautiful} Suppose that $G$ is almost simple with socle $S$. If $\Omega$ contains an $S$-beautiful subset, then $G$ is not binary. \end{lem} In what follows we will see a number of examples of strongly non-binary actions of the types given in Examples~\ref{ex: snba1} and \ref{ex: snba2}, as well as examples of beautiful subsets. To study these examples we will make use of the fact that the finite faithful 2-transitive actions are all known thanks to the Classification of Finite Simple Groups. One naturally wonders whether other examples of strongly non-binary witnesses exist. This is indeed the case and the existence of a strongly non-binary witness is related to the classic concept of $2$-closure introduced by Wielandt~\cite{Wielandt}. 
Given a permutation group $G$ on $\Omega$, the \emph{$2$-closure of $G$} is the set $$G^{(2)}:=\{\sigma\in \mathop{\mathrm{Sym}}(\Omega)\mid \forall (\omega_1,\omega_2)\in \Omega\times \Omega, \textrm{ there exists }g_{\omega_1\omega_2}\in G \textrm{ with }\omega_1^\sigma=\omega_1^{g_{\omega_1\omega_2}}, \omega_2^\sigma=\omega_2^{g_{\omega_1\omega_2}}\},$$ that is, $G^{(2)}$ is the largest subgroup of $\mathop{\mathrm{Sym}}(\Omega)$ having the same orbitals as $G$. The group $G$ is said to be \emph{$2$-closed} if $G=G^{(2)}$. We claim that $G$ is not $2$-closed if and only if $G$ has a strongly non-binary witness. Write $\Omega:=\{\omega_1,\ldots,\omega_t\}$. If $G$ is not $2$-closed, then there exists $\sigma\in G^{(2)}\setminus G$. Now, it is easy to verify that $I:=(\omega_1,\ldots,\omega_t)$ and $J:=I^\sigma=(\omega_1^\sigma,\ldots,\omega_t^\sigma)$ are $2$-subtuple complete (because $\sigma\in G^{(2)}$) and are not $G$-conjugate (because $\sigma\notin G$). Thus $(I,J)$ is a strongly non-binary witness. The converse is similar. \section{Groups with socle isomorphic to \texorpdfstring{$\mathrm{PSL}_2(q)$}{PSL2(q)}}\label{s: structure} In this section we start by studying some of the basic properties of involutions and Klein $4$-subgroups of the almost simple groups $G$ with socle $\mathrm{PSL}_2(q)$. (In particular, $\mathrm{PSL}_2(q)\le G\le \mathrm{P}\Gamma\mathrm{L}_2(q)$.) All of these properties are well-known and/or easy to verify by direct calculation. We also set up some basic notation for what follows. For a group $J$, write $m_2(J)$ for the \emph{$2$-rank} of $J$, i.e.\ the maximum rank of an elementary abelian $2$-subgroup of $J$. If $q$ is odd and $J$ is a section of $G$ (i.e.\ a quotient of a subgroup of $G$), then $m_2(J)\leq 3$. What is more, $m_2(J)\leq 2$ unless $q$ is a square and $G$ contains a field automorphism of order $2$.
\begin{lem}\label{l: quotients split} Let $L$ be a subgroup of $\mathrm{PGL}_2(q)$ with $q$ odd, and let $K$ be a subgroup of $\nor {\mathrm{PGL}_2(q)} L $ with $K$ isomorphic to a Klein $4$-group and with $K\cap L=1$. Then $|L|$ is odd. \end{lem} \begin{proof} Let $P$ be a Sylow $2$-subgroup of $\langle K,L\rangle=K\ltimes L$ containing $K$. Then $P=K\ltimes Q$, for some Sylow $2$-subgroup $Q$ of $L$. If $Q\ne 1$, then $K$ centralises a non-identity element of $Q$ and hence $m_2(\mathrm{PGL}_2(q))\ge m_2(P)=m_2(K\ltimes Q)\ge m_2(K)+m_2(\cent Q K)\ge 2+1=3$, a contradiction. \end{proof} Suppose that $q$ is odd. There is exactly one $\mathrm{PGL}_2(q)$-conjugacy class of Klein $4$-subgroups of $\mathrm{PSL}_2(q)$, and one can check directly that $\cent{\mathrm{PGL}_2(q)} K =K$ for each Klein $4$-subgroup of $\mathrm{PSL}_2(q)$. When $q\equiv \pm 3\pmod 8$, a Sylow $2$-subgroup of $\mathrm{PSL}_2(q)$ is a Klein $4$-subgroup and, by Sylow's theorems, there is exactly one $\mathrm{PSL}_2(q)$-conjugacy class of Klein $4$-subgroups of $\mathrm{PSL}_2(q)$; in this case $\nor {\mathrm{PSL}_2(q)} K \cong \mathop{\mathrm{Alt}}(4)$. When $q\equiv \pm 1\pmod 8$, there are two $\mathrm{PSL}_2(q)$-conjugacy classes of Klein $4$-subgroups of $\mathrm{PSL}_2(q)$ and these are fused in $\mathrm{PGL}_2(q)$; in this case $\nor {\mathrm{PSL}_2(q)} K \cong\mathop{\mathrm{Sym}}(4)$. We need information concerning involutions in $\mathrm{P}\Gamma\mathrm{L}_2(q)\setminus\mathrm{PGL}_2(q)$ -- such involutions must be field automorphisms, as defined in~\cite{gls3}. The following result is a special case of \cite[Prop. 4.9.1]{gls3}. \begin{lem}\label{l: fields} Let $f_1,f_2\in\mathrm{P}\Gamma\mathrm{L}_2(q)\setminus\mathrm{PGL}_2(q)$ be of order $t$ for some prime $t$, and suppose that $f_1\mathrm{PGL}_2(q)=f_2\mathrm{PGL}_2(q)$. Then $f_1$ and $f_2$ are $\mathrm{PGL}_2(q)$-conjugate.
\end{lem} \subsection{Fixed point calculations}\label{s: fp} We let $G$ be a group having socle $S$ with $S\cong \mathrm{PSL}_2(q)$. Using the classification of the maximal subgroups of $G$ (see for example~\cite{bhr}), one observes that for every maximal subgroup $M$ of $G$ there exists a maximal subgroup $H$ of $S$ with $M=\nor G H$; in particular, this allows us to identify (up to permutation isomorphism) each primitive $G$-set $\Omega$ with the set of $G$-conjugates of some maximal subgroup $H$ of $S$. Therefore, we let $H$ be a maximal subgroup of $S$ with $\nor G H $ maximal in $G$, and set $\Omega$ to be $H^G:=\{H^g\mid g\in G\}$, the set of all conjugates of $H$ in $G$. All possibilities for $H$ and $|\Omega|$ are given in the first and in the third column of Tables~\ref{t: inv q odd} and \ref{t: inv q even}, where in Table~\ref{t: inv q odd} the symbol $\zeta$ is defined by \begin{equation}\label{e: zeta} \zeta:=\begin{cases} 2 & \textrm{if }G\not\le \mathrm{P}\Sigma\mathrm{L}_2(q) \textrm{ and }q \textrm{ is odd}, \textrm{ or }q\textrm{ is even},\\ 1 & \textrm{if }G\le \mathrm{P}\Sigma\mathrm{L}_2(q) \textrm{ and }q \textrm{ is odd}. \end{cases} \end{equation} (See \cite{bhr} to verify this. The conditions that are listed in Table~\ref{t: inv q odd} are necessary for the action of $G$ on $\Omega$ to be primitive, but they are not necessarily sufficient.) Finally, we write $\mathcal{P}(H)$ for the power set of $H$. In what follows, we calculate the number of fixed points of an involution $g\in S$, and (when $q$ is odd) of a Klein $4$-subgroup $K\leq S$, for the action of $G$ on $\Omega$. (Given a subset $Y$ of a permutation group $X$ on $\Omega$, we write ${\rm Fix}_\Omega(Y):=\{\omega\in \Omega\mid \omega^y=\omega,\forall y\in Y\}$ and simply ${\rm Fix}_\Omega(y)$ when the set $Y$ consists of the single element $y$.)
To calculate the number of fixed points of $g$ and of $K$, we make use of the well-known formulas (see for instance~\cite[Lemma~$2.5$]{LiebeckSaxl}) \begin{equation}\label{e: fora} |{\rm Fix}_\Omega(g)| = \frac{|\Omega|\cdot |H\cap g^G|}{|g^G|},\qquad |{\rm Fix}_\Omega(K)| = \frac{|\Omega|\cdot |\mathcal{P}(H)\cap K^G|}{|K^G|}. \end{equation} Given an involution $g\in S$, from~\cite{gls3} we obtain \[ |g^G|=\begin{cases} \frac12q(q-1), & \textrm{if } q\equiv 3\pmod 4,\\ \frac12q(q+1), & \textrm{if }q\equiv 1\pmod 4, \\ q^2-1, &\textrm{if } q \textrm{ even}. \end{cases} \] Using this information, Eq.~\eqref{e: fora} and the fact that $\mathrm{PSL}_2(q)$ has a unique conjugacy class of involutions, it is a straightforward computation to verify the fourth and fifth column in Table~\ref{t: inv q odd} and the third and fourth column in Table~\ref{t: inv q even}. \begin{table} \begin{adjustbox}{angle=90} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $H$ & Conditions & $|\Omega|$ & $|H\cap g^G|$ & $|{\rm Fix}_\Omega(g)|$&$|\mathcal{P}(H)\cap K^G|$&$|{\rm Fix}_\Omega(K)|$\\ \hline $[q]:(\frac{q-1}{2})$ & None & $q+1$ & $\begin{cases} 0, & q\equiv 3(4) \\ q, & q\equiv 1(4) \end{cases}$ & $\begin{cases} 0, & q\equiv 3(4) \\ 2, & q\equiv 1(4) \end{cases}$ &$0$&$0$\\ $D_{q-1}$ & None & $\frac{q(q+1)}{2}$ & $\begin{cases} \frac{q-1}{2}, & q\equiv 3(4) \\ \frac{q+1}{2}, & q\equiv 1(4) \end{cases}$ & $\frac{q+1}{2}$&$\begin{cases}0,&q\equiv 3(8)\\ \frac{q-1}{4},&q\equiv 5(8)\\\frac{\zeta(q+1)}{8},&q\equiv 1(8)\\0,&q\equiv 7(8)\end{cases}$&$\begin{cases}0,&q\equiv 3(8)\\ 3,&q\equiv 5(8)\\3,&q\equiv 1(8)\\0,&q\equiv 7(8)\end{cases}$\\ $D_{q+1}$ & None & $\frac{q(q-1)}{2}$ & $\begin{cases} \frac{q+3}{2}, & q\equiv 3(4) \\ \frac{q+1}{2}, & q\equiv 1(4) \end{cases}$ & $\begin{cases} \frac{q+3}{2}, & q\equiv 3(4) \\ \frac{q-1}{2}, & q\equiv 1(4) \end{cases}$ &$\begin{cases}\frac{q+1}{4},&q\equiv 3(8)\\0,&q\equiv 5(8)\\0,&q\equiv 1(8)\\\frac{\zeta(q+1)}{8},&q\equiv 
7(8)\end{cases}$&$\begin{cases}3,&q\equiv 3(8)\\ 0,&q\equiv 5(8)\\0,&q\equiv 1(8)\\3,&q\equiv 7(8)\end{cases}$\\ $\mathrm{PSL}_2(q_0)$ & $q=q_0^a$, $a$\textrm{ odd } & $\frac{q(q^2-1)}{q_0(q_0^2-1)}$ & $\begin{cases} \frac{q_0(q_0-1)}{2}, & q_0\equiv 3(4)\\ \frac{q_0(q_0+1)}{2}, & q_0\equiv 1(4) \\ \end{cases}$ & $\begin{cases} \frac{q+1}{q_0+1}, & q_0\equiv 3(4)\\ \frac{q-1}{q_0-1}, & q_0\equiv 1(4) \\ \end{cases}$ &$\begin{cases}\frac{q_0(q_0^2-1)}{24},&q\equiv \pm 3(8)\\\frac{\zeta q_0(q_0^2-1)}{48},&q\equiv \pm 1(8)\end{cases}$&$1$\\ $\mathcal{PG}L_2(q_0)$ & $q=q_0^2$, $\zeta=1$ & $\frac{\sqrt{q}(q+1)}{2}$ & $q$ & $\sqrt{q}$ &$\frac{q_0(q_0^2-1)}{24}$ or $\frac{q_0(q_0^2-1)}{8}$&$1$ or $3$\\ $\mathop{\mathrm{Alt}}(4)$ & $q=p\equiv \pm 3(8)$ & $\frac{q(q^2-1)}{24}$ & $3$ & $\begin{cases} \frac{q+1}{4}, & q\equiv 3(8) \\ \frac{q-1}{4}, & q\equiv 5(8) \end{cases}$ &$1$&$1$\\ $\mathop{\mathrm{Sym}}(4)$ & $q=p\equiv \pm 1(8)$, $\zeta=1$ & $\frac{q(q^2-1)}{48}$ & $9$ & $\begin{cases} \frac{3(q+1)}{8}, & q\equiv 7(8) \\ \frac{3(q-1)}{8}, & q\equiv 1(8) \end{cases}$ &$1$ or $3$&$1$ or $3$\\ $\mathop{\mathrm{Alt}}(5)$ & $\begin{array}{l}q=p, q\equiv \pm 1 (10), \textrm{ or}\\ q=p^2, p\equiv \pm 3(10) \end{array}$ & $\frac{\zeta q(q^2-1)}{120}$ & $15$ & $\begin{cases} \frac{\zeta(q+1)}{4}, & q\equiv 3(4) \\ \frac{\zeta(q-1)}{4}, & q\equiv 1(4) \end{cases}$&$5$&$\begin{cases}\zeta,&q\equiv \pm 3(8)\\2,&q\equiv \pm 1(8)\end{cases}$ \\ \hline \end{tabular} \end{adjustbox} \caption{Fixed points of involutions in $S$ and of a Klein $4$-subgroup of $S$, for $q$ odd. 
The symbol $\zeta$ is defined in~\eqref{e: zeta}.}\label{t: inv q odd} \end{table} \begin{table} \begin{tabular}{|c|c|c|c|} \hline $H$ & $|\Omega|$ & $|H\cap g^G|$ & $|{\rm Fix}_\Omega(g)|$\\ \hline $[q]:(q-1)$ & $q+1$ & $q-1$ & $1$ \\ $D_{2(q-1)}$ & $\frac12q(q+1)$ & $q-1$ & $\frac12q$ \\ $D_{2(q+1)}$ & $\frac12q(q-1)$ & $q+1$ & $\frac12q$ \\ $\mathrm{SL}_2(q_0)$ & $\frac{q(q^2-1)}{q_0(q_0^2-1)}$ & $q_0^2-1$ & $\frac{q}{q_0}$ \\ \hline \end{tabular} \caption{Fixed points of involutions in $S$ for $q$ even.}\label{t: inv q even} \end{table} Suppose that $q\equiv \pm 3\pmod 8$ and let $K$ be a Klein $4$-subgroup of $S$. As we mentioned above, $K$ is a Sylow $2$-subgroup of $S$, all Klein $4$-subgroups of $S$ are conjugate and $\nor {\mathrm{PSL}_2(q)}K\cong \mathop{\mathrm{Alt}}(4)$. Therefore $|K^G|=\frac{1}{24}q(q^2-1)$. Using this and Eq.~\eqref{e: fora}, it is easy to confirm (when $q\equiv \pm 3\pmod 8$) the veracity of the sixth and seventh column in Table~\ref{t: inv q odd}. (Note that the $\mathrm{PGL}_2(q_0)$ and $\mathop{\mathrm{Sym}}(4)$ rows do not apply when $q\equiv \pm3\pmod 8$.) Suppose now that $q\equiv \pm 1\pmod 8$ and let $K$ be a Klein $4$-subgroup of $S$. In this case, there are two $S$-conjugacy classes of Klein $4$-subgroups and, regardless of the $S$-conjugacy class in which $K$ lies, we have $\nor {\mathrm{PSL}_2(q)} K\cong \mathop{\mathrm{Sym}}(4)$. In particular, \[ |K^G|=\frac{1}{48}\zeta q(q^2-1), \] where $\zeta$ is the parameter that was defined in \eqref{e: zeta}. As above, using this and Eq.~\eqref{e: fora}, it is easy to confirm (when $q\equiv \pm 1\pmod 8$) the veracity of the sixth and seventh column in Table~\ref{t: inv q odd}. (Note that the $\mathop{\mathrm{Alt}}(4)$ row does not apply when $q\equiv \pm1\pmod 8$.) For the proof of Theorem~\ref{t: psl2}, we also need the number of fixed points of field involutions of $G$, but only for certain primitive actions with $q$ odd: this information is tabulated in Table~\ref{t: f field}.
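Formula \eqref{e: fora} is an instance of the general fixed-point count for a transitive action on cosets, $|{\rm Fix}_{G/M}(g)|=|G/M|\cdot|g^G\cap M|/|g^G|$. As a sanity check, the sketch below (ours, purely illustrative, and unrelated to $\mathrm{PSL}_2(q)$) verifies this count by brute force for $G=\mathop{\mathrm{Sym}}(4)$ acting on the twelve right cosets of a subgroup of order $2$.

```python
from itertools import permutations

def mul(p, q):
    """Compose permutations (as tuples): apply p first, then q."""
    return tuple(q[i] for i in p)

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

G = [tuple(p) for p in permutations(range(4))]   # Sym(4)
e = tuple(range(4))
t = (1, 0, 2, 3)                                 # the transposition (0 1)
M = [e, t]                                       # a subgroup of order 2
g = t                                            # the element whose fixed points we count

# Omega: the right cosets Mx, with G acting by right multiplication.
Omega = {frozenset(mul(m, x) for m in M) for x in G}
fixed = sum(1 for c in Omega if frozenset(mul(p, g) for p in c) == c)

# Right-hand side of the fixed-point formula |Omega| * |M n g^G| / |g^G|.
g_class = {mul(mul(inv(x), g), x) for x in G}
rhs = len(Omega) * len(g_class.intersection(M)) // len(g_class)

print(fixed, rhs)   # both equal 2
```

Both counts agree: the two fixed cosets are exactly those represented by elements of the centraliser of $t$, as the formula predicts.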
Of course, here we assume that $q$ is a square and that $G$ does contain a field automorphism of order $2$. Now observe that \[ |f^G|= \begin{cases}\frac{\zeta}{2}\sqrt{q}(q+1),&\textrm{if }q \textrm{ is odd},\\ \sqrt{q}(q+1),&\textrm{if }q \textrm{ is even}. \end{cases} \] From this and~\eqref{e: fora}, the veracity of Table~\ref{t: f field} follows from easy calculations (which we omit). Note that Lemma~\ref{l: fields} means that it is convenient to assume that $G\geq \mathrm{PGL}_2(q)$ where this makes no difference; however, for the final action in Table~\ref{t: f field}, we must assume that $G$ does {\bf not} contain $\mathrm{PGL}_2(q)$ since otherwise the action is not primitive. To make this clear we state the assumed value of $\zeta$ in the ``Conditions'' column in each case. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline $H$ & Conditions & $|\Omega|$ & $|\nor GH\cap f^G|$ & $|{\rm Fix}_\Omega(f)|$\\ \hline $D_{q-1}$ & $\zeta=2$ & $\frac12q(q+1)$ & $2\sqrt{q}$ & $q$ \\ $D_{q+1}$ & $\zeta=2$ & $\frac12q(q-1)$ & 0 & 0 \\ $\mathrm{PSL}_2(q_0)$ & $q=q_0^a$, $a$ \textrm{odd}, $\zeta=2$ & $\frac{q(q^2-1)}{q_0(q_0^2-1)}$ & $\sqrt{q_0}(q_0+1)$ & $\frac{\sqrt{q}(q-1)}{\sqrt{q_0}(q_0-1)}$ \\ $\mathop{\mathrm{Alt}}(5)$ & $q=p^2\equiv \pm1\pmod{10}$, $\zeta=1$ & $\frac1{120}q(q^2-1)$ & $10$ & $\frac{1}{6} \sqrt{q}(q-1)$ \\ \hline \end{tabular} \caption{Fixed points of field automorphisms of order $2$ for selected primitive actions of $G$ with $q$ odd.}\label{t: f field} \end{table} We are now ready to prove the two lemmas that together yield Theorem~\ref{t: psl2} for groups with socle $\mathrm{PSL}_2(q)$. \begin{lem}\label{l: handy q odd} Let $G$ be an almost simple primitive permutation group on the set $\Omega$ with socle isomorphic to $\mathrm{PSL}_2(q)$ with $q$ odd. If $q>9$, then $\Omega$ contains a strongly non-binary subset. \end{lem} \begin{proof} Our notation here is consistent with that established above.
For instance, we identify $\Omega$ with the set of $G$-conjugates of $H$. We must consider the actions corresponding to the first column of Table~\ref{t: inv q odd}. \noindent\textsc{Line 1: $H$ is a Borel subgroup of $S$}. In this case $G$ acts $2$-transitively on $\Omega$, but $\mathrm{Alt}(\Omega)\nleq G$. Thus $\Omega$ itself is a beautiful subset and hence strongly non-binary (of the type given in Example~\ref{ex: snba1}). \noindent\textsc{Line 5: $H\cong \mathrm{PGL}_2(q_0)$ where $q=q_0^2$}. We regard $S$ as the projective image of those elements of $\mathrm{GL}_2(q)$ that have square determinant, and we may assume that $H$ consists of the projective image of those elements in $\mathrm{GL}_2(q_0)$ whose entries are all in $\mathbb{F}_{q_0}$. Let $T$ be the set of diagonal elements in $S$; let $T_0:=H\cap T$, a maximal split torus in $H$; let $\alpha$ be an element of $\mathbb{F}_q$ that does not lie in any proper subfield of $\mathbb{F}_q$; and define \[ N_0:=\left\{\begin{pmatrix} 1 & \alpha b \\ 0 & 1 \end{pmatrix} \mid b\in\mathbb{F}_{q_0}\right\}. \] Clearly, $N_0$ is a subgroup of $G$, $T_0$ normalizes $N_0$ and $T_0\cap N_0=\{1\}$. Thus we can form the semidirect product $X:=N_0\rtimes T_0$ and we observe that $X\cap H=T_0$. Let $\Lambda$ be the orbit of $H$ under the group $X$. One obtains immediately that $\Lambda$ is a set of size $q_0$ on which $X$ acts $2$-transitively. If $q_0>5$, then $G$ does not contain a section isomorphic to $\mathrm{Alt}(q_0)$ and we conclude that $\Lambda$ is a beautiful subset for the action of $G$ on $\Omega$. If $q_0=5$, then $q=5^2$ and we can check the result directly using \texttt{magma}~\cite{magma}. \noindent\textsc{Line 4: $H\cong \mathrm{PSL}_2(q_0)$ where $q=q_0^a$ for some odd prime $a$}. We consider first the special situation where $q$ is a square.
We consider $S$ as before, with $H$ the projective image of those elements in $\mathrm{GL}_2(q_0)$ whose entries are all in $\mathbb{F}_{q_0}$ and which have square determinant in $\mathbb{F}_{q_0}$; moreover, we know that $H$ has a subgroup $H_1$ isomorphic to $\mathrm{PGL}_2(\sqrt{q_0})$ (since $q_0$ is a square by assumption). We take $H_1$ to be the projective image of those elements in $\mathrm{GL}_2(\sqrt{q_0})$ whose entries are all in $\mathbb{F}_{\sqrt{q_0}}$. Let $T$ be the set of diagonal elements in $S$; let $T_0:=H_1\cap T$, a maximal split torus in $H_1$; let $\alpha$ be an element of $\mathbb{F}_q$ that does not lie in any proper subfield of $\mathbb{F}_q$; and define \[ N_0:=\left\{\begin{pmatrix} 1 & \alpha b \\ 0 & 1 \end{pmatrix} \mid b\in\mathbb{F}_{\sqrt{q_0}}\right\}. \] As above, $N_0$ is a subgroup of $G$, $T_0$ normalizes $N_0$ and $T_0\cap N_0=\{1\}$. Thus we can form the semidirect product $X:=N_0\rtimes T_0$ and we observe that $X\cap H=T_0$. Let $\Lambda$ be the orbit of $H$ under the group $X$. One obtains immediately that $\Lambda$ is a set of size $\sqrt{q_0}$ on which $X$ acts $2$-transitively. If $\sqrt{q_0}>5$, then $G$ does not contain a section isomorphic to $\mathrm{Alt}(\sqrt{q_0})$ and we conclude that $\Lambda$ is a beautiful subset for the action of $G$ on $\Omega$. The outstanding cases (that is, $\sqrt{q_0}\le 5$ or $q$ is not a square) will be dealt with below. \noindent\textsc{Lines 2,3,4,6,7,8}. Here we will show that in every case we can find a strongly non-binary subset $\Lambda$ for which $G^\Lambda$ is as in Example~\ref{ex: snba2}. We let $g$ be an involution in $S$ and $h\in g^G$ with $K:=\langle g,h\rangle$ a Klein $4$-subgroup of $S$, and we let \[ \Lambda={\rm Fix}(g)\cup{\rm Fix}(h)\cup{\rm Fix}(gh). \] Observe that $\Lambda$, ${\rm Fix}(g)$, ${\rm Fix}(h)$ and ${\rm Fix}(gh)$ are $g$-invariant and $h$-invariant.
Write $\tau_1$ for the permutation induced by $g$ on ${\rm Fix}(gh)$ and $\tau_2$ for the permutation induced by $g$ on ${\rm Fix}(h)$, and observe that the supports of $\tau_1$ and $\tau_2$ are disjoint, and that $g$ induces the permutation $\tau_1\tau_2$ on $\Lambda$. Observe, furthermore, that $h$ induces the permutation $\tau_1$ on ${\rm Fix}(gh)$; now write $\tau_3$ for the involution induced by $h$ on ${\rm Fix}(g)$, and observe that the supports of $\tau_1$ and $\tau_3$ are disjoint, and that $h$ induces the permutation $\tau_1\tau_3$ on $\Lambda$. Observe, finally, that the supports of $\tau_2$ and $\tau_3$ are disjoint and that, since $g,h$ and $gh$ are conjugate, the permutations $\tau_1,\tau_2$ and $\tau_3$ all have support of equal size. Comparing the entries in the fifth and seventh column of Table~\ref{t: inv q odd}, we see that $|{\rm Fix}(g)|\geq |{\rm Fix}(K)|+2$. (Here we are using our assumption that $q>9$.) This implies, in particular, that $\tau_1,\tau_2$ and $\tau_3$ are non-trivial permutations of order $2$. Observe that either there exists $f\in G_\Lambda$ inducing the permutation $\tau_1$ on $\Lambda$ or else $\Lambda$ is a strongly non-binary subset of $\Omega$ (it corresponds to Example~\ref{ex: snba2}). Suppose that $G$ does not contain a field automorphism of order $2$, and suppose that $f\in G_\Lambda$ induces the permutation $\tau_1$ on $\Lambda$. This would imply that $G_{\Lambda}$ contained an elementary-abelian subgroup of order $8$. But, as we observed earlier, $m_2(Q)\leq 2$ for any section $Q$ in $G$, which is a contradiction. We conclude that $\Lambda$ is a strongly non-binary subset of $\Omega$. Note that this argument disposes of Lines 6 and 7 of Table~\ref{t: inv q odd}. It also deals with one of the outstanding cases for Line 4, namely the situation where $q$ is not a square. Suppose from here on that $G$ contains a field automorphism of order $2$. In particular $q$ is a square and $q\equiv 1\pmod 8$. 
Now, the previous argument implies that $G^\Lambda$ is strongly non-binary unless $G_\Lambda$ contains a field automorphism that induces the element $\tau_1$, so assume that this is the case. There are two possibilities: \begin{enumerate} \item[(a)] there is a field automorphism $f$ of order $2$ that induces the element $\tau_1$ on $\Lambda$; \item[(b)] there is a field automorphism $f$ of order $2$ that fixes $\Lambda$ point-wise (and some element of $G_\Lambda\cap (G\setminus \mathrm{PGL}_2(q))$ of order divisible by $4$ induces the element $\tau_1$ on $\Lambda$). \end{enumerate} Note first that Line 3 of Table~\ref{t: inv q odd} is immediately excluded since field automorphisms of order $2$ have no fixed points in this action (see Table~\ref{t: f field}). We are left only with Lines 2 and 8, as well as Line 4 with $q_0\in\{9,25\}$. Assume that Case~(a) holds. Observe that the action on $\Lambda$ gives a natural homomorphism $\langle S_{(\Lambda)},f,g,h\rangle \to \mathop{\mathrm{Sym}}(\Lambda)$ whose image is elementary abelian of order $8$, and whose kernel is $S_{(\Lambda)}$. What is more, by Lemma~\ref{l: quotients split}, $S_{(\Lambda)}$ has odd order, and we conclude that $\langle f,g,h\rangle$ is elementary abelian of order $8$. Since $f$ centralizes $\langle g,h\rangle$ we may consider the action of $\langle g,h\rangle$ on ${\rm Fix}(f)$. Observe that if $\gamma\not\in{\rm Fix}(g)\cup {\rm Fix}(h)\cup{\rm Fix}(gh)$, then $\gamma^g\neq \gamma^h$; hence $\langle g, h\rangle$ acts semi-regularly on ${\rm Fix}(f)\setminus({\rm Fix}(f)\cap\Lambda)$, and so \[ |{\rm Fix}(f)\setminus({\rm Fix}(f)\cap\Lambda)|\equiv 0 \pmod 4. \] Now in this case ${\rm Fix}(f)\cap \Lambda = {\rm Fix}(g)\cup{\rm Fix}(h)$ and we conclude that \begin{equation}\label{eq: f} |{\rm Fix}(f)|-2|{\rm Fix}(g)|+|{\rm Fix}(K)|\equiv 0\pmod 4. \end{equation} Let us consider the remaining actions, one by one. \noindent\textsc{Line 2: $H\cong D_{q-1}$}.
In this case \eqref{eq: f} implies that \[ |{\rm Fix}(f)|-2|{\rm Fix}(g)|+|{\rm Fix}(K)|=q-(q+1)+3\equiv 0\pmod 4, \] which is a contradiction, since the left-hand side equals $2$. \noindent\textsc{Line 4: $H\cong \mathrm{PSL}_2(q_0)$ with $q=q_0^a$ and $a$ an odd prime}. Note first that we may assume that $q_0\in\{9,25\}$, with $p=\sqrt{q_0}$. Choose $g\in S$ to be an element of order $p$; an easy calculation using \eqref{e: fora} confirms that $g$ fixes $\frac{q}{q_0}$ points of $\Omega$. Now choose $h\in S$ to be an element of order $p$ (hence also fixing the same number of points of $\Omega$) such that $\langle g,h\rangle$ is an elementary-abelian group of order $q_0$. We require, moreover, that $\langle g,h\rangle$ fixes no points of $\Omega$: for this we just make sure that $\langle g,h\rangle$ is not conjugate to a Sylow $p$-subgroup of $H$. As usual we set $\Lambda={\rm Fix}(g)\cup{\rm Fix}(h)\cup {\rm Fix}(gh)$. We define $\tau_1, \tau_2, \tau_3$ exactly as in the argument for \textsc{Lines 2,3,4,6,7,8}. Now if $f$ is an element inducing the permutation $\tau_1$, then $f$ has order divisible by $p$, and $f$ fixes at least $\frac{2q}{q_0}$ elements of $\Omega$. This implies immediately that $f\not\in S$, and we conclude that $a=p$. Now, referring to Lemma~\ref{l: fields}, we see that $f$ must be a field automorphism of order $a=p$ and an easy calculation with \eqref{e: fora} implies that such an element fixes $\frac12p(p^2+1)$ points of $\Omega$ and so cannot induce the permutation $\tau_1$. Now, referring to Example~\ref{ex: snba2}, we conclude that $\Lambda$ is a strongly non-binary subset. \noindent\textsc{Line 8: $H\cong A_5$.} In this case we assume that $G\leq {\rm P\Sigma L}_2(q)$, otherwise the action on $\Omega$ is not primitive. In particular $\zeta=1$ and \eqref{eq: f} implies that \[ |{\rm Fix}(f)|-2|{\rm Fix}(g)|+|{\rm Fix}(K)|=\frac{1}{6}\sqrt{q}(q-1)-\frac{1}{2}(q-1)+2\equiv 0\pmod 4, \] which is a contradiction. We are left with Case~(b).
Note in this case that $q=p^a$ where $a$ is divisible by $4$. This immediately excludes Line 8 of the table (since $q=p^2$ here) as well as the remaining cases for Line $4$ (since here $q=9^a$ or $25^a$ where $a$ is an odd prime). Thus the only line left to consider is Line 2. But note that, for Case (b) to hold, ${\rm Fix}(f)$ must contain $\Lambda$ and so \[ |{\rm Fix}(f)|\geq 3|{\rm Fix}(g)|-2|{\rm Fix}(K)|. \] But Tables~\ref{t: inv q odd} and \ref{t: f field} then give that \[ q\geq \frac{3}{2}(q+1)-6. \] This is a contradiction for $q>9$ and we are done. \end{proof} \begin{lem}\label{l: handy q even} Let $G$ be an almost simple primitive permutation group on the set $\Omega$ with socle isomorphic to $\mathrm{PSL}_2(q)$ with $q=2^a$. If $a>3$, then $\Omega$ contains a strongly non-binary subset. \end{lem} \begin{proof} Our notation here is consistent with that established above. We must consider the actions corresponding to the first column of Table~\ref{t: inv q even}. \noindent\textsc{Line 1: $H$ is a Borel subgroup of $S$}. In this case $G$ acts 2-transitively on $\Omega$, but $\mathrm{Alt}(\Omega)\nleq G$. Thus $\Omega$ itself is a beautiful subset (and hence strongly non-binary). \noindent\textsc{Line 2: $H\cong D_{2(q-1)}$}. We may assume that $H$ contains $T$, the set of diagonal elements in $S$. We define \[ N:=\left\{\begin{pmatrix} 1 & \alpha \\ 0 & 1 \end{pmatrix} \mid \alpha\in\mathbb{F}_{q}\right\}. \] Now it is clear that $T$ normalizes $N$ and that $T\cap N=\{1\}$. Thus we can form the semidirect product $X=N\rtimes T$ and we observe that $X\cap H=T$. Using the identification of $\Omega$ with the set of $G$-conjugates of $H$, we let $\Lambda$ be the orbit of $H$ under the group $X$. One obtains immediately that $\Lambda$ is a set of size $q\geq 8$ on which $X$ acts 2-transitively.
Since $G$ does not contain a section isomorphic to $\mathrm{Alt}(q)$ we conclude that $\Lambda$ is a beautiful subset for the action of $G$ on $\Omega$ and we are done. \noindent\textsc{Line 3: $H\cong D_{2(q+1)}$}. We proceed similarly to the case where $q$ is odd in Lemma~\ref{l: handy q odd}: let $g$ be an involution in $S$ and $h\in g^G$ with $K:=\langle g,h\rangle$ a Klein $4$-subgroup of $S$ and we let \[ \Lambda={\rm Fix}(g)\cup{\rm Fix}(h)\cup{\rm Fix}(gh). \] Observe that $\Lambda$, ${\rm Fix}(g)$, ${\rm Fix}(h)$ and ${\rm Fix}(gh)$ are $g$-invariant and $h$-invariant. Observe, furthermore, that ${\rm Fix}(g)$, ${\rm Fix}(h)$ and ${\rm Fix}(gh)$ are all disjoint and, by Table~\ref{t: inv q even}, are of size $\frac12 q$. Write $\tau_1$ for the permutation induced by $g$ on ${\rm Fix}(gh)$, $\tau_2$ for the permutation induced by $g$ on ${\rm Fix}(h)$, and $\tau_3$ for the permutation induced by $h$ on ${\rm Fix}(g)$. Then $g$ induces the permutation $\tau_1\tau_2$ on $\Lambda$, while $h$ induces the permutation $\tau_1\tau_3$ on $\Lambda$. Then $\Lambda$ is a strongly non-binary subset provided there is no element $f\in G_\Lambda$ that induces the permutation $\tau_1$. Such an element must have even order and must fix at least $q$ elements of $\Omega$. Now Table~\ref{t: inv q even} implies that $f\not\in S$. On the other hand, if $f$ is a field automorphism of order $2^c$, then it does not fix any elements of $\Omega$. We conclude that $\Lambda$ is a strongly non-binary subset and we are done. \noindent\textsc{Line 4: $H\cong \mathrm{SL}_2(q_0)$ where $q=q_0^b$ for some prime $b$.} Note that, using \cite{bhr}, we can exclude the possibility that $q_0=2$. Suppose first that $q_0>4$, and take $\beta \in \mathbb{F}_q\setminus \mathbb{F}_{q_0}$. We may assume that $H$ consists of those elements in $S=\mathrm{SL}_2(q)$ whose entries are all in $\mathbb{F}_{q_0}$.
Let $T$ be the set of diagonal elements in $S$; let $T_0=H\cap T$, a maximal split torus in $H$; and define \[ N_0:=\left\{\begin{pmatrix} 1 & \beta \alpha \\ 0 & 1 \end{pmatrix} \mid \alpha\in\mathbb{F}_{q_0}\right\}. \] Now it is clear that $T_0$ normalizes $N_0$ and that $T_0\cap N_0=\{1\}$. Thus we can form the semidirect product $X=N_0\rtimes T_0$ and we observe that $X\cap H=T_0$. Using the identification of $\Omega$ with the set of $G$-conjugates of $H$, we let $\Lambda$ be the orbit of $H$ under the group $X$. One obtains immediately that $\Lambda$ is a set of size $q_0\geq 8$ on which $X$ acts 2-transitively. Since $G$ does not contain a section isomorphic to $\mathrm{Alt}(q_0)$ we conclude that $\Lambda$ is a beautiful subset for the action of $G$ on $\Omega$ and we are done. The only remaining case is when $q_0=4$. As $q=2^a$, we have $a=2b$. In this case we make use of the fact that the number of $S$-conjugacy classes of subgroups of $S$ isomorphic to a Klein $4$-group is $(q+2)/6$. Since $H$ contains a unique conjugacy class of Klein $4$-subgroups, there exists a Klein $4$-subgroup $K:=\langle g,h\rangle$ of $S$ with $K\nleq H^s$ for every $s\in S$; that is, ${\rm Fix}(K)=\emptyset$ for the action on cosets of $H$. Observe that $q$ is a square. We choose $K$ so that, not only does it not lie in a conjugate of $H$, it also does not lie in a conjugate of $\mathrm{SL}_2(\sqrt{q})=\mathrm{SL}_2(2^b)$, the centralizer of a field automorphism of order $2$. Define \[ \Lambda={\rm Fix}(g)\cup{\rm Fix}(h)\cup{\rm Fix}(gh). \] Observe that $g$ acts on $\Lambda$, and on ${\rm Fix}(h)$, and on ${\rm Fix}(gh)$. Write $\tau_1$ for the involution induced by $g$ on ${\rm Fix}(gh)$ and $\tau_2$ for the permutation induced by $g$ on ${\rm Fix}(h)$, and observe that the supports of $\tau_1$ and $\tau_2$ are disjoint, and that $g$ induces the permutation $\tau_1\tau_2$ on $\Lambda$.
Exactly as in the case when $H\cong D_{2(q+1)}$, $\Lambda$ is either strongly non-binary (and we are done), or else there exists $f\in G_\Lambda$ such that $f$ induces the permutation $\tau_1$ on $\Lambda$. Suppose that this latter possibility occurs, and observe that ${\rm Fix}(f)$ contains ${\rm Fix}(g)\cup{\rm Fix}(h)$ and so $|{\rm Fix}(f)|\geq \frac{q}{2}$. If $f\in S$, then $f$ is conjugate to $g$ and $|{\rm Fix}(g)|=\frac{q}{4}$, so we have a contradiction. Suppose that $f\not\in S$. The subgroup structure of $\mathrm{SL}_2(q)$ implies that if a subgroup $X$ is normalized by a Klein $4$-group, then $X$ is elementary abelian of even order. Thus $S_{(\Lambda)}$ is elementary abelian of even order. But if $S_{(\Lambda)}$ is non-trivial, then an involution fixes at least $\frac{3q}{4}$ points, which is a contradiction. Thus $S_{(\Lambda)}$ is trivial. Now, note that since $q=4^b$, where $b$ is prime, either $q=16$, or else we may assume that $f$ is a field automorphism of order $2$. Thus $\langle K, f\rangle$ is elementary-abelian. But, since $K$ does not lie in a conjugate of $\mathrm{SL}_2(\sqrt{q})$ we have a contradiction here. In the case $q=16$, a moment's thought shows that either $f$ is a field automorphism of order $2$, or else there is a field automorphism of order $2$ that fixes $\Lambda$ point-wise. Either way one concludes that there is a field automorphism $f_1$ such that $\langle K, f_1\rangle$ is elementary-abelian and, again, we have a contradiction. Thus in all cases we have a strongly non-binary subset of the type given in Example~\ref{ex: snba2} and we are done. \end{proof} We remark again that Theorem~\ref{t: psl2} for groups with socle $\mathrm{PSL}_2(q)$ is an immediate consequence of Lemmas~\ref{l: forbidden}, \ref{l: handy q odd} and \ref{l: handy q even}.
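The beautiful subsets produced in Lines 2 and 4 of the last proof arise from the action of $X=N\rtimes T$ (respectively $X=N_0\rtimes T_0$) on an orbit of size $q$ (respectively $q_0$), which is modelled by the sharply $2$-transitive affine action $x\mapsto ax+b$ on the field. As an illustrative aside (a minimal sketch, not part of the proof), this sharp $2$-transitivity can be verified by brute force for $q=8$, realising $\mathbb{F}_8$ as $\mathbb{F}_2[x]/(x^3+x+1)$ with elements encoded as bitmasks:

```python
# Brute-force check that the affine group {x -> a*x + b : a in F_8^*, b in F_8}
# is sharply 2-transitive on F_8 (addition in characteristic 2 is XOR).

MOD = 0b1011  # x^3 + x + 1, irreducible over F_2

def gf8_mul(u, v):
    """Multiply two elements of F_8, reducing modulo x^3 + x + 1."""
    r = 0
    while v:
        if v & 1:
            r ^= u
        u <<= 1
        if u & 0b1000:
            u ^= MOD
        v >>= 1
    return r

F = list(range(8))                            # the eight field elements
maps = [(a, b) for a in F[1:] for b in F]     # the 56 affine maps x -> a*x + b

def apply(m, x):
    a, b = m
    return gf8_mul(a, x) ^ b

pairs = [(x1, x2) for x1 in F for x2 in F if x1 != x2]
for p1 in pairs:
    for p2 in pairs:
        witnesses = [m for m in maps
                     if apply(m, p1[0]) == p2[0] and apply(m, p1[1]) == p2[1]]
        assert len(witnesses) == 1            # sharply 2-transitive
print("AGL(1,8) is sharply 2-transitive on F_8")
```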
\section{Groups with socle isomorphic to \texorpdfstring{$^{2}B_2(q)$}{2B2(q)}}\label{s: suzuki} In this section we prove Theorem~\ref{t: psl2} for groups with socle $\mathop{{^2\mathrm{B}_2}(q)}$. This theorem follows, {\it \`a la} the other main results, from Lemma~\ref{l: suzuki} combined with Lemma~\ref{l: forbidden}. In what follows $G$ is an almost simple group with socle $S\cong\mathop{{^2\mathrm{B}_2}(q)}$, where $q=2^a$ and $a$ is an odd integer with $a\geq 3$. We write $r:=2^{\frac{a+1}{2}}$ and define $\theta$ to be the following field automorphism of $\mathbb{F}_q$: \[ \theta: \mathbb{F}_q\to \mathbb{F}_q, \,\,\, x \mapsto x^r. \] We need some basic facts, all of which can be found in \cite{suzuki}. First, ${\rm Out}(S)$ is a cyclic group of odd order $a$. Second, $G$ contains a single conjugacy class of involutions; writing $g$ for one of these involutions we note that \[ |g^G|=(q^2+1)(q-1). \] Third, the maximal subgroups of $S$ fall into three families: Borel subgroups, normalizers of maximal tori, and subfield subgroups. For the second of these families, we need some fixed point calculations, and these are given in Table~\ref{t: suz} (making use of \eqref{e: fora}). Each line of this table corresponds to a conjugacy class of maximal tori in $S$; we write $H$ for a maximal subgroup of $S$ and $\Omega$ for the set of right cosets of $H$ in $S$; in the final column we write $K$ for a Klein $4$-subgroup of $S$. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline $H$ & $|\Omega|$ & $|H\cap g^G|$ & $|{\rm Fix}_\Omega(g)|$ & $|\mathcal{P}(H)\cap K^S|$&$|{\rm Fix}_\Omega(K)|$\\ \hline $D_{2(q-1)}$ & $\frac12q^2(q^2+1)$ & $q-1$ & $\frac12q^2$&$0$ & $0$\\ $(q+r+1)\rtimes 4$ & $\frac14q^2(q-1)(q-r+1)$ & $q+r+1$ & $\frac14q^2$ &$0$ & $0$\\ $(q-r+1)\rtimes 4$ & $\frac14q^2(q-1)(q+r+1)$ & $q-r+1$ & $\frac14q^2$ &$0$ &$0$\\ \hline \end{tabular} \caption{Fixed points of involutions and Klein $4$-subgroups for selected primitive actions of almost simple Suzuki groups.}\label{t: suz} \end{table} \begin{lem}\label{l: suzuki} Let $G$ be an almost simple primitive permutation group on the set $\Omega$ with socle $S\cong\mathop{{^2\mathrm{B}_2}(q)}$. Then $\Omega$ contains a strongly non-binary subset. \end{lem} \begin{proof} Note that $|S|$ is not divisible by $3$ and hence $G$ does not contain a section isomorphic to an alternating group $\mathop{\mathrm{Alt}}(n)$ with $n\geq 3$. Referring to \cite{suzuki}, we see that a maximal subgroup of $G$ is necessarily the normalizer in $G$ of a maximal subgroup $H$ of $S$. Thus we can identify $\Omega$ with the set of right cosets of $H$ in $S$. We split into three families, as per the discussion above. First, if $H$ is a Borel subgroup, then the action of $G$ on $\Omega$ is $2$-transitive and, since $G$ contains no alternating sections, we obtain immediately that $\Omega$ itself is a beautiful subset. Second, if $H$ is the normalizer in $S$ of a maximal torus, then we set $K$ to be a Klein 4-subgroup of $S$, and we let $g,h$ be distinct involutions in $K$. Referring to Table~\ref{t: suz}, we see that $g$ and $h$ fix at least $16$ points of $\Omega$, while $K$ fixes none. We set $\lambda_3$ to be one of the fixed points of $g$ and write $\lambda_4$ for the point $\lambda_3^h$. Similarly $\lambda_5\in{\rm Fix}(h)$ and $\lambda_6=\lambda_5^g$. 
Finally pick $\lambda_1\in{\rm Fix}(gh)$ and let $\lambda_2=\lambda_1^g$; observe that $\lambda_2=\lambda_1^h$. Now let $\Lambda=\{\lambda_1,\lambda_2,\lambda_3,\lambda_4,\lambda_5,\lambda_6\}$ and observe that $K$ acts on this set with the element $g$ inducing the permutation $(\lambda_1,\lambda_2)(\lambda_5,\lambda_6)$ while the element $h$ induces the permutation $(\lambda_1,\lambda_2)(\lambda_3,\lambda_4)$. Suppose that $f\in G_\Lambda$ induces the permutation $(\lambda_1,\lambda_2)$ on $\Lambda$. This would imply that $f$ and $g$ fix the point $\lambda_3$ and so the stabilizer of $\lambda_3$ must contain a section isomorphic to a Klein 4-subgroup. This is impossible: the Sylow $2$-subgroups of $H$ are cyclic of order $2$ or $4$ and, since $|{\rm Out}(S)|$ is odd, this is true of the stabilizer in $G$ of $\lambda_3$. Therefore $\Lambda$ is a strongly non-binary subset of $\Omega$: it corresponds to Example~\ref{ex: snba2}. Third, suppose that $H$ is a subfield subgroup of $S$. It is convenient to take $S$ to be the set of $4\times 4$ matrices over $\mathbb{F}_q$ described on \cite[p.133]{suzuki}; then we take $H$ to be the subgroup of $S$ consisting of matrices with entries over $\mathbb{F}_{q_0}$ with $q=q_0^b$ for some prime $b$, and $q_0>2$. The following set forms a Sylow $2$-subgroup of $S$: \[ P_2(q):= \left\{\begin{pmatrix} 1 & 0 & 0 & 0 \\ \alpha & 1 & 0 & 0 \\ \alpha^{1+\theta}+\beta & \alpha^\theta & 1 & 0 \\ \alpha^{2+\theta}+\alpha\beta+\beta^\theta & \beta&\alpha & 1 \end{pmatrix} \mid \alpha, \beta \in \mathbb{F}_q \right\}. \] The subgroup $P_2(q)$ is normalized by the following subgroup of $S$, \[ K(q):= \left\{\begin{pmatrix} \zeta_1 & 0 & 0 & 0 \\ 0 & \zeta_2 & 0 & 0 \\ 0 & 0 & \zeta_3 & 0 \\ 0 & 0 & 0 & \zeta_4 \end{pmatrix} \mid \exists\kappa \in \mathbb{F}_q\setminus\{0\}, \zeta_1^\theta = \kappa^{1+\theta}, \zeta_2^\theta=\kappa, \zeta_3=\zeta_2^{-1}, \zeta_4=\zeta_1^{-1} \right\}. 
\] The group $P_2(q)\rtimes K(q)$ is a maximal Borel subgroup of $S$, while $P_2(q_0)\rtimes K(q_0)$ is a maximal Borel subgroup of $H$. Observe that the center $\Zent {P_2(q)}$ of $P_2(q)$ consists of those matrices for which $\alpha=0$. Let $\zeta\in\mathbb{F}_q\setminus\mathbb{F}_{q_0}$ and consider the group \[ ZP_2(\zeta,q_0):= \left\{\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \zeta\beta & 0 & 1 & 0 \\ (\zeta\beta)^\theta & \zeta\beta & 0 & 1 \end{pmatrix} \mid \beta \in \mathbb{F}_{q_0} \right\}. \] Observe that $K(q_0)$ normalizes $ZP_2(\zeta,q_0)$, that $K(q_0)<H$, that $ZP_2(\zeta,q_0)\cap H=\{1\}$ and that $K(q_0)$ acts fixed-point-freely on $ZP_2(\zeta,q_0)$. Let $X:=ZP_2(\zeta,q_0)\rtimes K(q_0)$; identifying $\Omega$ with the set of $G$-conjugates of $H$, we let $\Lambda$ be the orbit of $H$ under the group $X$. One obtains immediately that $\Lambda$ is a set of size $q_0\geq 8$ on which $X$ acts 2-transitively. The absence of alternating sections implies that $\Lambda$ is a beautiful subset. \end{proof} \section{Groups with socle isomorphic to \texorpdfstring{$^{2}G_2(q)$}{2G2(q)}}\label{s: ree} In this section we prove Theorem~\ref{t: psl2} for groups with socle $\mathop{{^2\mathrm{G}_2}(q)}$. This theorem follows, {\it \`a la} the other main results, from Lemma~\ref{l: ree} combined with Lemma~\ref{l: forbidden}. In what follows $G$ is an almost simple group with socle $S\cong\mathop{{^2\mathrm{G}_2}(q)}$, where $q=3^a$ and $a$ is an odd integer with $a\geq 3$. We write $r:=3^{\frac{a+1}{2}}$ and define $\theta$ to be the following field automorphism of $\mathbb{F}_q$: \[ \theta: \mathbb{F}_q\to \mathbb{F}_q, \,\,\, x \mapsto x^r. \] We need some basic facts, all of which can be found in \cite{kleidman}. First, ${\rm Out}(S)$ is a cyclic group of odd order $a$. Second, $G$ contains a single conjugacy class of involutions; writing $g$ for one of these involutions we note that \[ |g^G|=q^2(q^2-q+1). 
\] Third, the order of $S$ is not divisible by $5$, and so $G$ does not contain a section isomorphic to $\mathrm{Alt}(n)$ with $n\geq 5$. Fourth, the maximal subgroups of $G$ fall into four families: Borel subgroups, normalizers of maximal tori, involution centralizers, and subfield subgroups. For all but the first of these families, we need some fixed point calculations, and these are given in Table~\ref{t: ree} (making use of \eqref{e: fora}). In each line of this table we write $H$ for a maximal subgroup of $S$ and $\Omega$ for the set of right cosets of $H$ in $S$; in the final column we write $K$ for a Klein $4$-subgroup of $S$. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline $H$ & $|\Omega|$ & $|H\cap g^G|$ & $|{\rm Fix}_\Omega(g)|$ & $|K^G\cap \mathcal{P}(H)|$ & $|{\rm Fix}_\Omega(K)|$\\ \hline $(2^2\times D_{\frac{q+1}{2}})\rtimes 3$ & $\frac{q^3(q^2-q+1)(q-1)}{6}$ & $q+4$ & $\frac{q(q-1)(q+4)}{6}$ & $\frac{3q+5}{2}$ & $\frac{3q+5}{2}$\\ $(q+r+1)\rtimes 6$ & $\frac{q^3(q^2-1)(q-r+1)}{6}$ & $q+r+1$ & $\frac{q(q^2-1)}{6}$ & $0$ & $0$ \\ $(q-r+1)\rtimes 6$ & $\frac{q^3(q^2-1)(q+r+1)}{6}$ & $q-r+1$ & $\frac{q(q^2-1)}{6}$ & $0$ & $0$ \\ $2\times \mathrm{PSL}_2(q)$ & $q^2(q^2-q+1)$ & $q^2-q+1$ & $q^2-q+1$ & $\frac{(q+4)q(q-1)}{6}$ & $q+4$\\ ${^2G_2}(q_0)$ & $\frac{q^3(q^3+1)(q-1)}{q_0^3(q_0^3+1)(q_0-1)}$ & $q_0^2(q_0^2-q_0+1)$ & $\frac{q(q^2-1)}{q_0(q_0^2-1)}$ & $\frac16q_0^3(q_0^2-q_0+1)(q_0-1)$ & $\frac{q+1}{q_0+1}$ \\ \hline \end{tabular} \caption{Fixed points of involutions and Klein $4$-subgroups for selected primitive actions of almost simple Ree groups.}\label{t: ree} \end{table} The calculations given in Table~\ref{t: ree} make use of the fact there is a unique class of involutions and a unique class of Klein $4$-subgroups in $S$; their normalizers are maximal subgroups. 
In particular the normalizer of a Klein $4$-subgroup in $S$ is the group $H$ in the first line of Table~\ref{t: ree}; combined with the fact that a Sylow $2$-subgroup of $S$ is elementary abelian of order $8$, we are able to complete the final entry in that first row. The other entries in the table follow from easy calculations. \begin{lem}\label{l: ree} Let $G$ be an almost simple primitive permutation group on the set $\Omega$ with socle $S\cong\mathop{{^2\mathrm{G}_2}(q)}$. Then $\Omega$ contains a strongly non-binary subset. \end{lem} \begin{proof} Referring to \cite{kleidman}, we see that a maximal subgroup of $G$ is necessarily the normalizer in $G$ of a maximal subgroup $H$ of $S$. Thus we can identify $\Omega$ with the set of right cosets of $H$ in $S$. We split into two cases. First, if $H$ is a Borel subgroup, then the action of $G$ on $\Omega$ is $2$-transitive and, since $G$ contains no sections isomorphic to $\mathrm{Alt}(n)$ with $n\geq 5$, we obtain immediately that $\Omega$ itself is a beautiful subset. Second, if $H$ is not a Borel subgroup, then we set $K$ to be a Klein 4-subgroup of $S$, we let $g,h$ be distinct involutions in $K$, and we let \[ \Lambda={\rm Fix}(g)\cup{\rm Fix}(h)\cup{\rm Fix}(gh). \] Observe that $\Lambda$, ${\rm Fix}(g)$, ${\rm Fix}(h)$ and ${\rm Fix}(gh)$ are $g$-invariant and $h$-invariant. Write $\tau_1$ for the involution induced by $g$ on ${\rm Fix}(gh)$ and $\tau_2$ for the permutation induced by $g$ on ${\rm Fix}(h)$, and observe that the supports of $\tau_1$ and $\tau_2$ are disjoint, and that $g$ induces the permutation $\tau_1\tau_2$ on $\Lambda$. Observe, furthermore, that $h$ induces the permutation $\tau_1$ on ${\rm Fix}(gh)$; now write $\tau_3$ for the involution induced by $h$ on ${\rm Fix}(g)$, and observe that the supports of $\tau_1$ and $\tau_3$ are disjoint, and that $h$ induces the permutation $\tau_1\tau_3$ on $\Lambda$. Observe, finally, that the supports of $\tau_2$ and $\tau_3$ are disjoint. 
Now, suppose that $f\in G_\Lambda$ induces the permutation $\tau_1$ on $\Lambda$. This would imply that $f$ fixes every point of ${\rm Fix}(g)\cup{\rm Fix}(h)$, hence strictly more points than $g$. Since $f$ has even order and all involutions in $G$ are conjugate, some power of $f$ is an involution conjugate to $g$; this involution fixes at least as many points as $f$, and therefore more points than $g$, which is a contradiction. Thus $\Lambda$ is a strongly non-binary subset of $\Omega$ (it corresponds to Example~\ref{ex: snba2}). \end{proof} \section{Groups with socle isomorphic to \texorpdfstring{$\mathrm{PSU}_3(q)$}{PSU(3,q)}}\label{s: psu} In this section we prove Theorem~\ref{t: psl2} for groups with socle $\mathrm{PSU}_3(q)$. Strictly speaking, this theorem does not follow {\it \`a la} the other main results. First, we do not prove the existence of beautiful subsets or of strongly non-binary subsets: we simply prove that the primitive groups under consideration are not binary. Second, for some primitive actions we make use of computer-aided computations. The basic ideas for these computations are inspired by a deeper analysis in~\cite{DV_NG_PS}, where Conjecture~\ref{conj: cherlin} is proved for most almost simple groups with socle a sporadic simple group. The following lemmas are taken from~\cite{DV_NG_PS} and are stated in a form tailored to our needs in this paper. \begin{lem}\label{l: again0}Let $G$ be a transitive group on a set $\Omega$, let $\alpha$ be a point of $\Omega$ and let $\Lambda\subseteq \Omega$ be a $G_\alpha$-orbit. If $G$ is binary, then $G_\alpha^\Lambda$ is binary. \end{lem} \begin{proof}Assume that $G$ is binary. Let $\ell\in\mathbb{N}$ and let $I:=(\lambda_1,\lambda_2,\ldots,\lambda_\ell)$ and $J:=(\lambda_1',\lambda_2',\ldots,\lambda_\ell')$ be two tuples in $\Lambda^\ell$ which are $2$-subtuple complete for the action of $G_\alpha$ on $\Lambda$.
Clearly, $I_0:=(\alpha,\lambda_1,\lambda_2,\ldots,\lambda_\ell)$ and $J_0:=(\alpha,\lambda_1',\lambda_2',\ldots,\lambda'_\ell)$ are $2$-subtuple complete for the action of $G$ on $\Omega$; as $G$ is binary, $I_0$ and $J_0$ are in the same $G$-orbit; hence $I$ and $J$ are in the same $G_\alpha$-orbit. From this we deduce that $G_\alpha^\Lambda$ is binary. \end{proof} We caution the reader that in the next lemma when we write $\Lambda$ we \emph{are not} referring to a subset of $\Omega$ -- here the set $\Lambda$ is allowed to be any set whatsoever that satisfies the listed suppositions. \begin{lem}\label{l: again} Let $G$ be a primitive group on a set $\Omega$, let $\alpha$ be a point of $\Omega$, let $M$ be the stabilizer of $\alpha$ in $G$ and let $d$ be an integer. Suppose $M\ne 1$ and, for each transitive action of $M$ on a set $\Lambda$ satisfying: \begin{enumerate} \item $|\Lambda|>1$, and \item every composition factor of $M$ is isomorphic to some section of $M^\Lambda$, and \item $M$ is binary in its action on $\Lambda$, \end{enumerate} we have that $d$ divides $|\Lambda|$. Then either $d$ divides $|\Omega|-1$ or $G$ is not binary. \end{lem} \begin{proof} Suppose that $G$ is binary. Since $\{\beta\in\Omega\mid \beta^m=\beta,\forall m\in M\}$ is a block of imprimitivity for $G$ and since $G$ is primitive, we obtain that either $M$ fixes each point of $\Omega$ or $\alpha$ is the only point fixed by $M$. The former possibility is excluded because $M\neq 1$ by hypothesis. Therefore $\alpha$ is the only point fixed by $M$. Let $\Lambda\subseteq\Omega\setminus\{\alpha\}$ be an $M$-orbit. Thus $|\Lambda|>1$ and (1) holds. Since $G$ is a primitive group on $\Omega$, from~\cite[Theorem~3.2C]{dixon_mortimer}, we obtain that every composition factor of $M$ is isomorphic to some section of $M^\Lambda$ and hence (2) holds. From Lemma~\ref{l: again0}, the action of $M$ on $\Lambda$ is binary and hence (3) also holds. 
Therefore, $d$ divides $|\Lambda|$ and hence each orbit of $M$ on $\Omega\setminus\{\alpha\}$ has cardinality divisible by $d$. Thus $|\Omega|-1$ is divisible by $d$. \end{proof} \begin{proof}[Proof of Theorem~$\ref{t: psl2}$ for almost simple groups with socle $\mathrm{PSU}_3(q)$.] Let $G$ be an almost simple primitive group on the set $\Omega$ with socle $S$ isomorphic to $\mathrm{PSU}_3(q)$. Observe that $q\ge 3$ because $\mathrm{PSU}_3(2)$ is soluble. When $q\le 9$, we can check directly with \texttt{magma} the veracity of our statement by constructing all the primitive actions under consideration and checking one-by-one that none is binary (in each case we are able to exhibit a non-binary witness). For the rest of the proof we assume that $q>9$, that is, $q\ge 11$: among other things, this will allow us to exclude some ``novelties'' in dealing with the maximal subgroups of $G$. Moreover, we let $V:=\mathbb{F}_{q^2}^3$ be the natural $3$-dimensional Hermitian space over the field $\mathbb{F}_{q^2}$ of cardinality $q^2$ for the appropriate covering group of $G$. Let $\alpha\in \Omega$ and let $M:=G_\alpha$ be the stabilizer in $G$ of the point $\alpha$. We subdivide the proof according to the structure of $M$ as described in~\cite[Section~8, Tables~8.5,~8.6]{bhr}. In this proof we use~\cite{bhr} as a crib. \noindent\textsc{The group $M$ is in the Aschbacher class $\mathcal{C}_1$. }This case is completely settled in~\cite{gs_binary}, where the authors have proved Cherlin's conjecture for almost simple classical groups acting on the cosets of a maximal subgroup in the Aschbacher class $\mathcal{C}_1$. 
\noindent\textsc{The group $M$ is in the Aschbacher class $\mathcal{C}_2$.} From~\cite{bhr}, we get that the action of $G$ on $\Omega$ is permutation equivalent to the natural action of $G$ on \[ \{\{V_1,V_2,V_3\}\mid \dim_{\mathbb{F}_{q^2}}(V_1)= \dim_{\mathbb{F}_{q^2}}(V_2)=\dim_{\mathbb{F}_{q^2}}(V_3)=1, V=V_1\perp V_2\perp V_3, V_1,V_2,V_3 \textrm{ non isotropic}\}. \] Therefore we identify $\Omega$ with the latter set. Let $e_1,e_2,e_3$ be the canonical basis of $V$ and, replacing $G$ by a suitable conjugate, we may assume that the matrix associated to the Hermitian form on $V$ with respect to the basis $e_1,e_2,e_3$ is the identity matrix. Thus $\omega_0:=\{\langle e_1\rangle,\langle e_2\rangle,\langle e_3\rangle\}\in\Omega$. Consider $\Omega_0:=\{\{V_1,V_2,V_3\}\in \Omega\mid V_1=\langle e_1\rangle\}$. Clearly, $G_{\Omega_0}=G_{\langle e_1\rangle}$, $G_{\langle e_1\rangle}$ is a classical group, $G_{\Omega_0}/\Zent {G_{\Omega_0}}$ is almost simple with socle isomorphic to $\mathrm{PSL}_2(q)$ (here we are using $q>3$), and the action of $G_{\Omega_0}$ on $\Omega_0$ is permutation equivalent to the action of $G_{\langle e_1\rangle}$ on $\Omega_0':=\{\{W_1,W_2\}\mid \dim(W_1)=\dim(W_2), \langle e_1\rangle^\perp=W_1\perp W_2, W_1,W_2 \textrm{ non degenerate}\}$. Therefore $G^{\Omega_0}$ is an almost simple primitive group with socle isomorphic to $\mathrm{PSL}_2(q)$ and having degree $|\Omega_0|=q(q-1)/2$. Applying Theorem~\ref{t: psl2} to $G^{\Omega_0}$, we obtain that $G^{\Omega_0}$ is not binary and hence there exist two $\ell$-tuples $(\{W_{1,1},W_{1,2}\},\ldots,\{W_{\ell,1},W_{\ell,2}\})$ and $(\{W'_{1,1},W'_{1,2}\},\ldots,\{W'_{\ell,1},W'_{\ell,2}\})$ in $\Omega_0^\ell$ which are $2$-subtuple complete for the action of $G_{\Omega_0}$ but not in the same $G_{\Omega_0}$-orbit. 
By construction the two $\ell$-tuples \begin{align*} I& :=(\{\langle e_1\rangle,W_{1,1},W_{1,2}\},\{\langle e_1\rangle,W_{2,1},W_{2,2}\},\ldots,\{\langle e_1\rangle,W_{\ell,1},W_{\ell,2}\}), \\ J&:=(\{\langle e_1\rangle,W'_{1,1},W'_{1,2}\},\{\langle e_1\rangle,W'_{2,1},W'_{2,2}\},\ldots,\{\langle e_1\rangle,W'_{\ell,1},W'_{\ell,2}\}) \end{align*} are in $\Omega^\ell$ and are $2$-subtuple complete. Moreover, a moment's thought yields that $I$ and $J$ are not in the same $G$-orbit. Thus $G$ is not binary. \noindent\textsc{The group $M$ is in the Aschbacher class $\mathcal{C}_3$. }Here $M$ is the normalizer in $G$ of a maximal non-split torus $T$ of $S$ of order $(q^2-q+1)/\gcd(q+1,3)$. From~\cite{bhr}, we infer that $\nor S T$ is a split extension of $T$ by a cyclic group $\langle x\rangle$ of order $3$ (arising from an element of order $3$ in the Weyl group of $S$), thus $M=C\rtimes K$, with $C=\langle c\rangle$ cyclic such that $C\cap S=T$ and with $K$ abelian. (The group $K$ is the direct product of a cyclic group of order $3$ and a cyclic group of order $|G:G\cap\mathrm{PGU}_3(q)|$.) An inspection of the maximal subgroups of $\mathrm{PSU}_3(q)$ reveals that there exists $g\in \nor S K\setminus M$. Set $\beta:=\alpha^g$. Since $g\notin M$, we get $\beta\neq \alpha$ and, since $g\in \nor S K$, we get $G_\alpha\cap G_\beta=M\cap M^g\ge K$. Therefore $G_\alpha\cap G_\beta=C'\rtimes K$, for some cyclic subgroup $C'$ of $C$. Set $\Lambda:=\beta^M$. Now, the action induced by $M$ on the $M$-orbit $\Lambda$ is permutation isomorphic to the action of $M=C\rtimes K$ on the right cosets of $M\cap M^g=C'\rtimes K$. We use the ``bar'' notation and denote by $\bar{M}$ the group $M^\Lambda$. 
Thus $\bar{M}=\langle\bar{c}\rangle\rtimes \bar{K}$ and the action of $\bar{M}$ on $\Lambda$ is permutation isomorphic to the natural action of $\langle\bar{c}\rangle\rtimes\bar{K}$ on $\langle\bar{c}\rangle$: with $\langle\bar{c}\rangle$ acting on $\langle\bar{c}\rangle$ via its regular representation and with $\bar{K}\cong K$ acting on $\langle\bar{c}\rangle$ via conjugation. Now, $\bar{c}^{\bar{x}}=\bar{c}^\kappa$, for some $\kappa\in\mathbb{Z}$ with $\kappa^3\equiv 1\pmod {|\bar{c}|}$ and $\kappa\not\equiv 1\pmod {|\bar{c}|}$. Consider the two triples $I:=(1,\bar{c},\bar{c}^{1+\kappa^2})$ and $J:=(1,\bar{c},\bar{c}^{1+\kappa})$. Now $(1,\bar{c})^{id_{\bar{M}}}=(1,\bar{c})$, $(1,\bar{c}^{1+\kappa^2})^{\bar{x}}=(1,\bar{c}^{\kappa+\kappa^3})=(1,\bar{c}^{\kappa+1})$ and $$(\bar{c},\bar{c}^{1+\kappa^2})^{\bar{c}^{-1}\bar{x}^2\bar{c}}=(\bar{c}^{\bar{c}^{-1}\bar{x}^2\bar{c}},(\bar{c}^{1+\kappa^2})^{\bar{c}^{-1}\bar{x}^2\bar{c}})=(\bar{c},\bar{c}^{\kappa^4+1})=(\bar{c},\bar{c}^{\kappa+1}).$$ Thus $I$ and $J$ are $2$-subtuple complete for the action of $\bar{M}$ on $\langle\bar{c}\rangle$. Observe that $I$ and $J$ are not in the same $\bar{M}$-orbit because the only element of $\bar{M}$ fixing $1$ and the generator $\bar{c}$ of $\langle\bar{c}\rangle$ is the identity, but $\bar{c}^{1+\kappa^2}\ne \bar{c}^{1+\kappa}$ because $\kappa\not\equiv 1\pmod {|\bar{c}|}$. Therefore $\bar{M}$ is not binary. From Lemma~\ref{l: again0}, we deduce that $G$ is not binary. \noindent\textsc{The group $M$ is in the Aschbacher class $\mathcal{C}_5$.} Let $H$ be the stabilizer in $M$ of a non-isotropic $1$-dimensional subspace $\langle v\rangle$ of $V$ and let $K$ be the stabilizer of $\langle v\rangle$ in $G$. Thus $K$ is a maximal subgroup of $G$ in the Aschbacher class $\mathcal{C}_1$; moreover, using~\cite{bhr} and $q>8$, we see that there exists $g\in \Zent K\setminus M$. Set $\beta:=\alpha^g$. 
Since $g\notin M$, we get $\beta\neq \alpha$ and, since $g\in \Zent K$, we get $G_\alpha\cap G_\beta=M\cap M^g\ge M\cap K=H$. Since $H$ is maximal in $M$, we obtain $G_\alpha\cap G_\beta=H$ and hence the action induced by $G_\alpha=M$ on the $G_\alpha$-orbit $\beta^{G_\alpha}$ is permutation isomorphic to the action of $M$ on the right cosets of $H$. By construction, this latter action is the natural action of the classical group $M$ on the cosets of a maximal subgroup in its $\mathcal{C}_1$-Aschbacher class. From~\cite[Theorem~B]{gs_binary}, this action is not binary. Therefore, $G$ is not binary by Lemma~\ref{l: again0}. \noindent\textsc{The group $M$ is in the Aschbacher class $\mathcal{C}_6$ or in the Aschbacher class $\mathcal{S}$.} Now the isomorphism class of $M$ is explicitly given in~\cite{bhr}. Since $|M|$ is very small (actually $|M|\le 720$), with the invaluable help of the computer algebra system \texttt{magma}, we compute all the transitive $M$-sets and we select the $M$-sets $\Lambda$ with $|\Lambda|>1$, with every composition factor of $M$ isomorphic to some section of $M^\Lambda$ and with $M^\Lambda$ binary. In all cases, we see that $|\Lambda|$ is even. Therefore, applying Lemma~\ref{l: again}, we obtain that either $G$ is not binary or $|\Omega|-1$ is even. In the latter case, $|\Omega|$ is odd and hence $M$ contains a Sylow $2$-subgroup of $G$. From~\cite{bhr}, we see that a Sylow $2$-subgroup of $M\cap S$ has size $8$, but this is a contradiction because $|\mathrm{PSU}_3(q)|$ is always divisible by $16$. \end{proof} \end{document}
\begin{document} \begin{abstract} Let $S$ be a connected orientable surface of finite topological type. We prove that there is an exhaustion of the curve complex $\mathcal C(S)$ by a sequence of finite rigid sets. \end{abstract} \title{Exhausting curve complexes by finite rigid sets} \section{Introduction} The curve complex $\mathcal C(S)$ of a surface $S$ is a simplicial complex whose $k$-simplices correspond to sets of $k+1$ distinct isotopy classes of essential simple closed curves on $S$ with pairwise disjoint representatives. The extended mapping class group $\mathrm{Mod}^{\pm}(S)$ of $S$ acts on $\mathcal C(S)$ by simplicial automorphisms, and a well-known theorem due to Ivanov \cite{Ivanov}, Korkmaz \cite{Korkmaz} and Luo \cite{Luo} asserts that $\mathcal C(S)$ is {\em simplicially rigid} for $S\ne S_{1,2}$. More concretely, the natural homomorphism \[\mathrm{Mod}^{\pm}(S) \to \mathrm{Aut}(\mathcal C(S))\] is surjective unless $S=S_{1,2}$; in the case $S=S_{1,2}$ there is an automorphism of $\mathcal C(S)$ that sends a separating curve on $S$ to a non-separating one and thus cannot be induced by an element of $\mathrm{Mod}^{\pm}(S)$, see \cite{Luo}. In \cite{AL} we extended this picture and showed that curve complexes are {\em finitely rigid}. Specifically, for $S\ne S_{1,2}$ we identified a finite subcomplex $\mathfrak X(S) \subset \mathcal C(S)$ with the property that every locally injective map $\mathfrak X(S) \to \mathcal C(S)$ is the restriction of an element of $\mathrm{Mod}^{\pm}(S)$; in the case of $S_{1,2}$ a similar statement can be made, this time using the group $\mathrm{Aut}(\mathcal C(S))$ instead of $\mathrm{Mod}^{\pm}(S)$. We refer to such a subset $\mathfrak X(S)$ as a {\em rigid} set. The rigid sets constructed in \cite{AL} enjoy some curious properties. For instance, if $S=S_{0,n}$ is a sphere with $n$ punctures then $\mathfrak X(S)$ is homeomorphic to an $(n-4)$-dimensional sphere.
Since $\mathcal C(S)$ has dimension $n-4$, it follows that $\mathfrak X(S)$ represents a non-trivial element of $H_{n-4}(\mathcal C(S), \mathbb Z)$ which, by a result of Harer \cite{Harer}, is the only non-trivial homology group of $\mathcal C(S)$. In fact, Broaddus \cite{Broaddus} and Birman-Broaddus-Menasco \cite{BBM} have recently proved that $\mathfrak X(S)$ is a generator of $H_{n-4}(\mathcal C(S), \mathbb Z)$, when viewed as a $\mathrm{Mod}^{\pm}(S)$-module; in the case when $S$ has genus $\ge 2$ and at least one puncture, they prove that $\mathfrak X(S)$ {\em contains} a generator for the homology of $\mathcal C(S)$. The rigid sets identified in \cite{AL} all have diameter 2 in $\mathcal C(S)$, and a natural question is whether there exist finite rigid sets in $\mathcal C(S)$ of arbitrarily large diameter; see Question 1 of \cite{AL}. In this paper we prove that, in fact, there exists an exhaustion of $\mathcal C(S)$ by finite rigid sets: \begin{theorem} Let $S\ne S_{1,2}$ be a connected orientable surface of finite topological type. There exists a sequence $\mathfrak X_1 \subset \mathfrak X_2 \subset \ldots \subset \mathcal C(S)$ such that: \begin{enumerate} \item $\mathfrak X_i$ is a finite rigid set for all $i\ge 1$, \item $\mathfrak X_i$ has trivial pointwise stabilizer in $\mathrm{Mod}^{\pm}(S)$, for all $i\ge 1$, and \item $\bigcup_{i\ge 1} \mathfrak X_i = \mathcal C(S)$. \end{enumerate} \label{main} \end{theorem} \begin{remark} A similar statement can be made for $S= S_{1,2}$, by replacing $\mathrm{Mod}^{\pm}(S)$ by $\mathrm{Aut}(\mathcal C(S))$ in the definition of rigid set above. \end{remark} \begin{remark} We stress that Theorem \ref{main} above does not follow from the main result in \cite{AL}. Indeed, a subset of $\mathcal C(S)$ containing a rigid set need not be itself rigid; compare with Proposition \ref{onecurve} below.
\end{remark} As a consequence of Theorem \ref{main} we will obtain a ``finitistic'' proof of the aforementioned result of Ivanov-Korkmaz-Luo \cite{Ivanov,Korkmaz,Luo} on the simplicial rigidity of the curve complex. In fact, we will deduce the following stronger form due to Shackleton \cite{Shackleton}: \begin{corollary}\label{C:Ivanov} Let $S\ne S_{1,2}$ be a connected orientable surface of finite topological type. If $\phi:\mathcal C(S) \to \mathcal C(S)$ is a locally injective simplicial map, then there exists $h\in \mathrm{Mod}^{\pm}(S)$ such that $h = \phi$. \end{corollary} The first author and Souto \cite{AS} proved that if $\mathfrak X\subset \mathcal C(S)$ is a rigid set satisfying some extra conditions, then every (weakly) injective homomorphism from the right-angled Artin group $\mathbb A(\mathfrak X)$ into $\mathrm{Mod}^{\pm}(S)$ is obtained, up to conjugation, by taking powers of roots of Dehn twists in the vertices of $\mathfrak X$. Since the finite rigid sets $\mathfrak X_i$ of Theorem \ref{main} all satisfy the conditions of \cite{AS}, we obtain the following result; here, $T_\gamma$ denotes the Dehn twist about $\gamma$: \begin{corollary} Let $S\ne S_{1,2}$ be a connected orientable surface of finite topological type, and consider the sequence $\mathfrak X_1 \subset \mathfrak X_2 \subset \ldots \subset \mathcal C(S)$ of finite rigid sets given by Theorem \ref{main}. If $\rho_i:\mathbb A(\mathfrak X_i) \to \mathrm{Mod}^{\pm}(S)$ is an injective homomorphism, then there exist functions $a,b:\mathcal C^{(0)}(S)\to \mathbb Z \setminus\{0\}$ and $f_i \in \mathrm{Mod}^{\pm} (S)$ such that \[\rho_i(\gamma^{a(\gamma)}) = f_i T_\gamma^{b(\gamma)} f_i^{-1},\] for every vertex $\gamma$ of $\mathfrak X_i$.
\end{corollary} We remark that Kim-Koberda \cite{KK1} had previously shown the existence of injective homomorphisms $\mathbb A(Y_i) \to \mathrm{Mod}^{\pm}(S)$ for sequences $Y_1 \subset Y_2 \subset \ldots$ of subsets of $\mathcal C(S)$. Such homomorphisms may in fact be obtained by sending a generator of $\mathbb A(Y_i)$ to a sufficiently high power of a Dehn {\em multi-twist}, see \cite{KK2}. \noindent{\bf Plan of the paper.} In Section 2 we recall some necessary definitions and basic results from our previous paper \cite{AL}. Section 3 deals with the problem of enlarging a rigid set in such a way that it remains rigid. As was the case in \cite{AL}, the techniques used in the proof of our main result differ depending on the genus of $S$. As a result, we will prove Theorem \ref{main} for surfaces of genus $0$, $g\ge 2$, and $1$, in Sections 4, 5 and 6, respectively. \noindent{\bf Acknowledgements.} We thank Brian Bowditch for conversations and for his continued interest in this work. We also thank Juan Souto for conversations. Parts of this paper were completed during the conference ``Mapping class groups and Teichm\"uller Theory''; we would like to express our gratitude to Michah Sageev and the Technion for their hospitality and financial support. \section{Definitions} Let $S=S_{g,n}$ be an orientable surface of genus $g$ with $n$ punctures and/or marked points. We define the {\em complexity} of $S$ as $\xi(S) =3g-3+n$. We say that a simple closed curve on $S$ is {\em essential} if it does not bound a disk or a once-punctured disk on $S$. An {\em essential subsurface} of $S$ is a properly embedded subsurface $N \subset S$ for which each boundary component is an essential curve in $S$. The {\em curve complex} $\mathcal C(S)$ of $S$ is a simplicial complex whose $k$-simplices correspond to sets of $k+1$ isotopy classes of essential simple closed curves on $S$ with pairwise disjoint representatives.
In order to simplify the notation, a set of isotopy classes of simple closed curves will be confused with its representative curves, the corresponding vertices of $\mathcal C(S)$, and the subcomplex of $\mathcal C(S)$ spanned by the vertices. We also assume that representatives of isotopy classes of curves and subsurfaces intersect {\em minimally} (that is, transversely and in the minimal number of components), and denote by $i(\alpha,\beta)$ their intersection number. If $\xi(S) > 1$, then $\mathcal C(S)$ is a connected complex of dimension $\xi(S)-1$. If $\xi(S)\le 0$ and $S\ne S_{1,0}$, then $\mathcal C(S)$ is empty. If $\xi(S)=1$ or $S=S_{1,0}$, then $\mathcal C(S)$ is a countable set of vertices; in order to obtain a connected complex, we modify the definition of $\mathcal C(S)$ by declaring $\alpha, \beta\in \mathcal C^{(0)}(S)$ to be adjacent in $\mathcal C(S)$ whenever $i(\alpha,\beta)=1$ if $S=S_{1,1}$ or $S=S_{1,0}$, and whenever $i(\alpha,\beta)=2$ if $S=S_{0,4}$. Furthermore, we add triangles to make $\mathcal C(S)$ into a flag complex. In all three cases, the complex $\mathcal C(S)$ so obtained is isomorphic to the well-known {\em Farey complex}. We recall some definitions and results from \cite{AL} that we will need later. \begin{definition} [Detectable intersection] Let $S$ be a surface and $Y\subset \mathcal C(S)$ a subcomplex. If $\alpha,\beta \in Y$ are curves with $i(\alpha,\beta) \neq 0$, then we say that their intersection is {\em $Y$--detectable} (or simply {\em detectable} if $Y$ is understood) if there are two pants decompositions $P_\alpha,P_\beta \subset Y$ such that \begin{equation} \label{E:detectable} \alpha \in P_\alpha, \, \beta \in P_\beta, \mbox{ and } P_\alpha - \alpha = P_\beta - \beta. \end{equation} \label{D:detectable} \end{definition} We note that if $\alpha,\beta$ have detectable intersection, then they must fill a $\xi=1$ (essential) subsurface, which we denote $ N(\alpha\cup\beta) \subset S$. 
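The Farey complexes appearing above admit a concrete description via slopes; we record the following standard fact purely as an illustration (it is not needed in the sequel). Writing $\alpha_{p/q}$ for the curve of slope $p/q \in \mathbb Q \cup \{\infty\}$, one has
\[
i\bigl(\alpha_{p/q},\alpha_{r/s}\bigr) = |ps-qr| \ \text{ on } S_{1,1}, \qquad i\bigl(\alpha_{p/q},\alpha_{r/s}\bigr) = 2\,|ps-qr| \ \text{ on } S_{0,4},
\]
so in both cases the adjacency condition above ($i=1$, respectively $i=2$) is exactly the Farey condition $|ps-qr| = 1$.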
For notational purposes, we call $P = P_\alpha - \alpha = P_\beta - \beta$ a {\em pants decomposition of $S - N(\alpha\cup\beta)$}, even though it includes the boundary components of $N(\alpha\cup\beta)$. The following is Lemma 2.3 in \cite{AL}: \begin{lemma} \label{L:detectable2detectable} Let $Y \subset \mathcal C(S)$ be a subcomplex, and $\alpha,\beta \in Y$ intersecting curves with $Y$--detectable intersection. If $\phi:Y \to \mathcal C(S)$ is a locally injective simplicial map, then $\phi(\alpha),\phi(\beta)$ have $\phi(Y)$--detectable intersection, and hence fill a $\xi=1$ subsurface. \end{lemma} \subsection{Farey neighbors} A large part of our arguments will rely on being able to recognize when two curves are {\em Farey neighbors}, as we now define: \begin{definition}[Farey neighbors] Let $\alpha$ and $\beta$ be curves on $S$ which fill a $\xi=1$ subsurface $N \subset S$. We say $\alpha$ and $\beta$ are {\em Farey neighbors} if they are adjacent in $\mathcal C(N)$. \end{definition} The following result is a useful tool for recognizing Farey neighbors, and is a rephrasing of Lemma 2.4 in \cite{AL} (see also the comment immediately after it): \begin{lemma} \label{l:fareydetect} Suppose $\alpha_1,\alpha_2,\alpha_3,\alpha_4$ are curves on $S$ such that: \begin{enumerate} \item $\alpha_2,\alpha_3$ together fill a $\xi=1$ subsurface $N\subset S$, \item $i(\alpha_i,\alpha_j) = 0 \Leftrightarrow |i-j| > 1$ for all $i \neq j$, \item $\alpha_1$ and $\alpha_4$ have non-zero intersection number with exactly one component of $\partial N$. \end{enumerate} Then $\alpha_2,\alpha_3$ are Farey neighbors. \end{lemma} \section{Enlarging rigid sets} \label{s:3} In this section we discuss the problem of enlarging rigid sets of the curve complex. We recall the definition of rigid set from \cite{AL}: \begin{definition}[Rigid set] Suppose $S\ne S_{1,2}$.
We say that $Y \subset \mathcal C(S)$ is {\em rigid} if for every locally injective simplicial map $\phi:Y \to \mathcal C(S)$ there exists $h\in \mathrm{Mod}^{\pm}(S)$ with $h|_{Y} = \phi$, unique up to the pointwise stabilizer of $Y$ in $\mathrm{Mod}^{\pm}(S)$. \end{definition} \begin{remark} The definition above may seem somewhat different from the one used in \cite{AL}, where we used the group $\mathrm{Aut}(\mathcal C(S))$ instead of $\mathrm{Mod}^{\pm}(S)$. Nevertheless, in the light of the results of Ivanov \cite{Ivanov}, Korkmaz \cite{Korkmaz} and Luo \cite{Luo} mentioned in the introduction, the two definitions are essentially the same for $S\ne S_{1,2}$. For $S=S_{1,2}$, however, we will use the group $\mathrm{Aut}(\mathcal C(S))$ instead of $\mathrm{Mod}^{\pm}(S)$, due to the existence of {\em non-geometric} automorphisms of $\mathcal C(S)$. \end{remark} The main step in the proof of Theorem \ref{main} is to enlarge the rigid sets constructed in \cite{AL} in such a way that the sets we obtain remain rigid. As we mentioned in the introduction, while one might be tempted to guess that a set that contains a rigid set is necessarily rigid, this is far from true, as the next result shows: \begin{proposition} Let $S=S_{0,n}$, with $n\ge 5$, and let $\mathfrak X$ be the finite rigid set identified in \cite{AL} (defined in Section \ref{S:punctured spheres}). For every curve $\alpha\in \mathcal C(S) \setminus \mathfrak X$, the set $\mathfrak X \cup \{\alpha\}$ is not rigid. \label{onecurve} \end{proposition} \begin{proof} Let $S_{\alpha}$ be the smallest subsurface of $S$ containing those curves in $\mathfrak X$ which are disjoint from $\alpha$, and $S_\alpha'$ the connected component of $S\setminus S_\alpha$ that contains $\alpha$; from the construction in \cite{AL}, every component of $\partial S_\alpha'$ which is essential in $S$ is an element of $\mathfrak X$.
Consider a mapping class $f\in \mathrm{Mod}(S)$ that is pseudo-Anosov on $S'_\alpha$ and the identity on $S_\alpha$. Define a map $\phi:\mathfrak X\cup \{\alpha\}\to \mathcal C(S)$ by $\phi(\beta) = \beta$ for all $\beta \ne \alpha$, and $\phi(\alpha) = f(\alpha)$. By construction, the map $\phi$ is locally injective and simplicial, but cannot be the restriction of an element of $\mathrm{Mod}^{\pm}(S)$. \end{proof} While Proposition \ref{onecurve} serves to highlight the obstacles for enlarging a rigid set to a set that is also rigid, we now explain two procedures for doing so. First, we recall the following definition from \cite{AL}: \begin{definition} Let $A$ be a set of curves in $S$. \begin{enumerate} \item $A$ is {\em almost filling (in $S$)} if the set \[ B = \{ \beta \in \mathcal C^{(0)}(S) \setminus A \mid i(\alpha,\beta) = 0 \, \forall \alpha \in A \} \] is finite. In this case, we call $B$ the {\em set of curves determined by $A$}. \item If $A$ is almost filling (in $S$), and $B = \{ \beta \}$ is a single curve, then we say that $\beta$ is {\em uniquely determined} by $A$. \end{enumerate} \end{definition} An immediate consequence of the definition is the following. \begin{lemma} Let $Y$ be a rigid set of curves, and $A \subset Y$ an almost filling set in $S$. If $\beta$ is uniquely determined by $A$, then $Y \cup \{ \beta \}$ is rigid. \label{determine} \end{lemma} \begin{proof} Given any locally injective simplicial map $\phi \colon Y \cup \{ \beta \} \to \mathcal C(S)$, we let $f \in \mathrm{Mod}^{\pm}(S)$ be such that $f|_Y = \phi$. Then $f(\beta)$ is the unique curve determined by $f(A) = \phi(A)$. On the other hand, $\phi(\beta)$ is connected by an edge to every vertex in $\phi(A)$, since $\phi$ is simplicial. Since $\phi$ is injective on the star of $\beta$, it is injective on $\{\beta\} \cup A$, and so $\phi(\beta) \not \in A$.
It follows that $\phi(\beta)$ is the curve uniquely determined by $\phi(A)$, and hence $f(\beta) = \phi(\beta)$. \end{proof} In particular, this gives rise to one method for enlarging a rigid set which we formalize as follows. Given a subset $Y \subset \mathcal C(S)$ define \[ Y' = Y \cup \{ \beta \mid \beta \mbox{ is uniquely determined by some almost filling set } A \subset Y \}. \] From this we recursively define $Y^{r} = (Y^{r-1})'$ for all $r > 0$, where $Y = Y^{0}$. Observe that, as an immediate consequence of Lemma \ref{determine}, we obtain: \begin{proposition} If $Y \subset \mathcal C(S)$ is a rigid set, then so is $Y^r$ for all $r \geq 0$. \label{p:prime} \end{proposition} Next, we give a sufficient condition for the union of two rigid sets to be rigid. Before doing so, we need the following definition: \begin{definition}[Weakly rigid set] We say that a set $Y\subset \mathcal C(S)$ is {\em weakly rigid} if, whenever $h,h'\in \mathrm{Mod}^{\pm}(S)$ satisfy $h|_Y = h'|_Y$, then $h = h'$. \end{definition} Alternatively, $Y$ is weakly rigid if the pointwise stabilizer of $Y$ in $\mathrm{Mod}^\pm(S)$ is trivial. Note that if $Y$ is a weakly rigid set, then so is every set containing $Y$. \begin{lemma} Let $Y_1, Y_2 \subset \mathcal C(S)$ be rigid sets. If $Y_1\cap Y_2$ is weakly rigid then $Y_1\cup Y_2$ is rigid. \label{L:glue} \end{lemma} \begin{proof} Let $\phi:Y_1\cup Y_2\to \mathcal C(S)$ be a locally injective simplicial map. Since $Y_i$ is rigid and has trivial pointwise stabilizer in $\mathrm{Mod}^{\pm}(S)$ (because $Y_1 \cap Y_2$ does), there exists a unique $h_i\in\mathrm{Mod}^{\pm}(S)$ such that $h_i|_{Y_i} = \phi|_{Y_i}$. Finally, since $Y_1\cap Y_2$ is weakly rigid we have $h_1 = h_2=h$. Therefore $h|_{Y_1\cup Y_2} = \phi$, and the result follows. \end{proof} We now proceed to describe our second method for enlarging a rigid set. We start with some definitions and notation. We write $T_\alpha$ for the Dehn twist along a curve $\alpha$.
Recall that the half twist $H_\alpha$ about a curve $\alpha$ is defined if and only if the curve cuts off a pair of pants containing two punctures of $S$. Furthermore, there is exactly one half twist about $\alpha$ if in addition $S$ is not a four-holed sphere. \begin{definition} Say that Farey neighbors $\alpha,\beta$ are {\em twistable} if either: \begin{enumerate} \item $N(\alpha \cup \beta)$ is a one-holed torus, or \item $N(\alpha \cup \beta)$ is a four-holed sphere and $H_\alpha,H_\beta$ are both defined and unique. \end{enumerate} In this situation we define $f_\alpha = T_\alpha$ and $f_\beta= T_\beta$ in the first case, and $f_\alpha = H_\alpha$ and $f_\beta = H_\beta$ in the second. We call $f_\alpha,f_\beta$ the {\em twisting pair} for $\alpha,\beta$. In case (1) we call $\alpha,\beta$ {\em toroidal} and in case (2) we call them {\em spherical}. \end{definition} We note that whether twistable Farey neighbors $\alpha,\beta$ are toroidal or spherical can be distinguished (i) by $i(\alpha,\beta)$ (whether it is $1$ or $2$), (ii) by the homeomorphism types of $\alpha$ and $\beta$ (whether they are nonseparating curves or they cut off a pair of pants), or (iii) by the homeomorphism type of $N(\alpha \cup \beta)$ (whether it is a one-holed torus or a four-holed sphere). The following well-known fact describes the common feature of these two situations. \begin{proposition} \label{P:twistable_Farey_neighbors} Suppose $\alpha,\beta$ are twistable Farey neighbors and that $f_\alpha,f_\beta$ is their twisting pair. Then \[ f_\alpha(\beta) = f_\beta^{-1}(\alpha) \mbox{ and } f_\alpha^{-1}(\beta) = f_\beta(\alpha), \] and these are the unique common Farey neighbors of both $\alpha$ and $\beta$. \end{proposition} Sets of twistable Farey neighbors which interact with each other frequently occur in rigid sets.
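As a concrete instance of Proposition~\ref{P:twistable_Farey_neighbors}, included here only as an illustration: in the toroidal case, identify $\mathcal C(N(\alpha\cup\beta))$ with the Farey graph on $\mathbb Q\cup\{\infty\}$ so that $\alpha = 0/1$ and $\beta = 1/0$. Then, with a suitable orientation convention,
\[
T_\alpha(\beta) = T_\beta^{-1}(\alpha) = 1/1, \qquad T_\alpha^{-1}(\beta) = T_\beta(\alpha) = -1/1,
\]
and $\pm 1/1$ are precisely the two vertices of the Farey graph adjacent to both $0/1$ and $1/0$.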
We distinguish one particular type of such set in the following definition: \begin{definition} \label{D:strings} Suppose $Y$ is a rigid subset of $\mathcal C(S)$ and $ A = \{ \alpha_1,\ldots,\alpha_k \} \subset Y$. We say that $A$ is a {\em closed string of twistable Farey neighbors} in $Y$ provided the following conditions are satisfied, counting indices modulo $k$: \begin{enumerate} \item The curves $\alpha_i,\alpha_{i+1}$ are twistable Farey neighbors with twisting pair $f_{\alpha_i},f_{\alpha_{i+1}}$. \item $i(\alpha_i,\alpha_{i+1}) \neq 0$ is $Y$--detectable. \item $i(\alpha_i,\alpha_j) = 0$ if $i-j \neq \pm 1$ modulo $k$. \item $\alpha_i,\alpha_{i+1},\alpha_{i+2},\alpha_{i+3}$ satisfy the hypotheses of Lemma~\ref{l:fareydetect}. \end{enumerate} Given a closed string of twistable Farey neighbors $ A \subset Y$, we define $$Y_A= Y \cup \{ f_{\alpha_i}^{\pm 1}(\alpha_j) \}_{i,j = 1}^k.$$ \end{definition} \begin{remark} Two comments are in order: \begin{enumerate} \item There is a priori some ambiguity in the notation, as $f_{\alpha_i}$ can be defined as part of the twisting pair for $\alpha_i,\alpha_{i+1}$ as well as for $\alpha_{i-1},\alpha_i$. However, if $\alpha_i$ is part of two pairs of different twistable Farey neighbors in $Y$, then they must both be toroidal or both spherical, as this is determined by the homeomorphism type of $\alpha_i$. Consequently, the mapping class $f_{\alpha_i}$ is independent of which twistable pair it is included in. \item The set $Y_A$ has a more descriptive definition, given condition (3) of Definition~\ref{D:strings}. Namely, \[ Y_A = Y \cup \{ f_{\alpha_i}^{\pm 1}(\alpha_j) \mid i - j = \pm 1 \mbox{ modulo } k \}. \] \end{enumerate} \end{remark} See Figure \ref{F:twistable} for an example of a closed string of twistable Farey neighbors and two of their images under the twisting pair.
\begin{figure} \caption{The set $Y = A = \{\alpha_1, \ldots, \alpha_5\}$, a closed string of twistable Farey neighbors, together with two of their images under the twisting pair.} \label{F:twistable} \end{figure} The situation in the next proposition arises in multiple settings, and provides a way to extend a rigid set to a larger set which is nearly rigid. \begin{proposition} \label{P:strings_prop1} Let $Y$ be a rigid subset of $\mathcal C(S)$ and $A = \{ \alpha_1,\ldots,\alpha_k\} \subset Y$ a closed string of twistable Farey neighbors in $Y$. Then, counting indices modulo $k$: \begin{enumerate} \item $f_{\alpha_i}^{\pm 1}(\alpha_{i+1}) = f_{\alpha_{i+1}}^{\mp 1}(\alpha_i)$ are the unique common Farey neighbors of $\alpha_i$ and $\alpha_{i+1}$, \item $i(f_{\alpha_i}^{\pm 1}(\alpha_j),\alpha_{j'}) \neq 0$ for all $i$ and all $(j,j') \in \{(i+1,i),(i+1,i+1),(i-1,i),(i-1,i-1)\}$. Furthermore, these intersections are $Y_A$--detectable. \item For any locally injective simplicial map $\phi \colon Y_A \to \mathcal C(S)$, \[ \phi(f_{\alpha_i}(\alpha_{i+1})) = \phi(f_{\alpha_{i+1}}^{-1}(\alpha_i))\mbox{ and } \phi(f_{\alpha_i}^{-1}(\alpha_{i+1})) = \phi(f_{\alpha_{i+1}}(\alpha_i)) \] are the unique common Farey neighbors of $\phi(\alpha_i)$ and $\phi(\alpha_{i+1})$. \end{enumerate} \label{l:twistable} \end{proposition} \begin{proof} Conclusion (1) follows immediately from Definition~\ref{D:strings} part (1) and Proposition~\ref{P:twistable_Farey_neighbors}. Next we prove conclusion (2). Fix $(j,j')$ as in the proposition. Then since $i(\alpha_i,\alpha_j) \neq 0$, it follows that $f_{\alpha_i}(\alpha_j)$ nontrivially intersects both $\alpha_i$ and $\alpha_j$. Since $\alpha_{j'}$ is one of these latter two curves, the first statement follows. By part (2) of Definition~\ref{D:strings}, $i(\alpha_i,\alpha_j) \neq 0$ is $Y$--detectable. Let $P_{\alpha_i},P_{\alpha_j} \subset Y$ be pants decompositions containing $\alpha_i$ and $\alpha_j$, respectively, as in Definition~\ref{D:detectable}, and set $P = P_{\alpha_i} - \alpha_i = P_{\alpha_j} - \alpha_j$.
Then since $f_{\alpha_i}$ is supported in $N(\alpha_i \cup \alpha_j)$, which is contained in the complement of $P$, we can define two more pants decompositions \[ P_{f_{\alpha_i}^{\pm 1}(\alpha_j)} = P \cup f_{\alpha_i}^{\pm 1}(\alpha_j) \subset Y_A. \] Together with $P_{\alpha_i}$ and $P_{\alpha_j}$ these are sufficient to detect all the intersections claimed. In all cases, $P \subset Y$ is the pants decomposition of the complement of $N(\alpha_i \cup \alpha_j)$, as required. For conclusion (3), we explain why $\phi(f_{\alpha_i}(\alpha_{i+1}))$ and $\phi(\alpha_i)$ are Farey neighbors. The other three cases are similar. For this, we consider the set \[\{ \, \, \phi(f_{\alpha_i}(\alpha_{i-1})) \, \, , \, \, \phi(\alpha_i) = \phi(f_{\alpha_i}(\alpha_i)) \, \, , \, \, \phi(f_{\alpha_i}(\alpha_{i+1})) \, \, , \, \, \phi(\alpha_{i+2}) =\phi(f_{\alpha_i}(\alpha_{i+2})) \, \, \}. \] The equalities here follow from the disjointness property (3) of Definition~\ref{D:strings}, since a Dehn twist or half-twist has no effect on a curve that is disjoint from the curve supporting the twist. The goal is to prove that all three conditions of Lemma~\ref{l:fareydetect} are satisfied. By part (2) of the proposition and Lemma~\ref{L:detectable2detectable}, it follows that any two consecutive curves in this set have $\phi(Y_A)$--detectable intersections, and fill a $\xi=1$ subsurface. Therefore condition (1) of Lemma~\ref{l:fareydetect} is satisfied for this set of curves. Since $\alpha_{i-1},\alpha_i,\alpha_{i+1},\alpha_{i+2}$ satisfy condition (2) of Lemma~\ref{l:fareydetect} and the given set is the image of these under the simplicial map $\phi \circ f_{\alpha_i}$, these curves also satisfy condition (2) of Lemma~\ref{l:fareydetect}. Finally, we wish to verify that condition (3) of Lemma~\ref{l:fareydetect} is satisfied. Since $Y$ is rigid, there exists $f \in \mathrm{Mod}^{\pm}(S)$ inducing $\phi|_{Y}$.
We also note that $N = N(\alpha_i \cup \alpha_{i+1}) = N(\alpha_i \cup f_{\alpha_i}(\alpha_{i+1}))$ has only one boundary component---all other holes of this subsurface (if any) must be punctures of $S$. Since the pants decomposition of the complement of $N$ is \[ P = P_{\alpha_i} - \alpha_i = P_{\alpha_{i+1}} - \alpha_{i+1} = P_{f_{\alpha_i}(\alpha_{i+1})} - f_{\alpha_i}(\alpha_{i+1})\] and is contained in $Y$, we see that $\phi(P) = f(P)$. Since this is used in the $\phi(Y_A)$--detection of both $i(\phi(\alpha_i),\phi(\alpha_{i+1})) \neq 0$ and $i(\phi(\alpha_i),\phi(f_{\alpha_i}(\alpha_{i+1}))) \neq 0$, we have \[ f(N) = N(f(\alpha_i) \cup f(\alpha_{i+1})) = N(\phi(\alpha_i) \cup \phi(\alpha_{i+1})) = N(\phi(\alpha_i) \cup \phi(f_{\alpha_i}(\alpha_{i+1}))).\] Consequently, this surface has only one boundary component, and so condition (3) of Lemma~\ref{l:fareydetect} is satisfied. \end{proof} \begin{proposition} \label{P:closed_string} If $Y$ is a rigid subset of $\mathcal C(S)$ and $A = \{ \alpha_1,\ldots,\alpha_k \}$ is a closed string of twistable Farey neighbors in $Y$, then any locally injective simplicial map $\phi \colon Y_A \to \mathcal C(S)$ which is the identity on $Y$ satisfies $\phi(Y_A) = Y_A$. Furthermore, the subgroup of the automorphism group of $Y_A$ fixing $Y$ pointwise has order at most $2$. If this subgroup is nontrivial, then it is generated by the involution $\sigma \colon Y_A \to Y_A$ given by $\sigma(f_{\alpha_i}(\alpha_j)) = f_{\alpha_i}^{-1}(\alpha_j)$ for all $i,j$ (or equivalently, for all $i,j$ with $i-j = \pm 1$ (modulo $k$)).
\end{proposition} \begin{proof} Since $f_{\alpha_i}^{\pm 1}(\alpha_{i+1})$ is the unique pair of common Farey neighbors of $\alpha_i,\alpha_{i+1}$, and since $\phi(\alpha_i) = \alpha_i$, $\phi(\alpha_{i+1}) = \alpha_{i+1}$, Proposition~\ref{P:strings_prop1} implies that for every $i$ and $j$ with $i-j = \pm 1$ (modulo $k$), we have \[ \{ \phi(f_{\alpha_i}(\alpha_j)),\phi(f_{\alpha_i}^{-1}(\alpha_j)) \} = \{ f_{\alpha_i}(\alpha_j),f_{\alpha_i}^{-1}(\alpha_j) \}, \] and so the first claim of the proposition follows. Next we suppose $\phi$ is any automorphism of $Y_A$ that restricts to the identity on $Y$. We claim that if there is some $i,j$ with $i-j = \pm 1$ (modulo $k$) so that $\phi(f_{\alpha_i}(\alpha_j)) = f_{\alpha_i}^{-1}(\alpha_j)$, then this is true for every $i,j$ with $i-j = \pm 1$ (modulo $k$). To this end, suppose that $\phi(f_{\alpha_i}(\alpha_{i+1})) = f_{\alpha_i}^{-1}(\alpha_{i+1})$ for some index $i$ (the case $\phi(f_{\alpha_i}(\alpha_{i-1})) = f_{\alpha_i}^{-1}(\alpha_{i-1})$ is similar). Then note that \[ i(f_{\alpha_i}(\alpha_{i-1}),f_{\alpha_i}(\alpha_{i+1})) = 0 = i(f_{\alpha_i}^{-1}(\alpha_{i-1}),f_{\alpha_i}^{-1}(\alpha_{i+1})) \] while \[ i(f_{\alpha_i}(\alpha_{i-1}),f_{\alpha_i}^{-1}(\alpha_{i+1})) \neq 0 \neq i(f_{\alpha_i}^{-1}(\alpha_{i-1}),f_{\alpha_i}(\alpha_{i+1})). \] Since $\phi$ is simplicial and locally injective, we must have $\phi(f_{\alpha_i}(\alpha_{i-1})) = f_{\alpha_i}^{-1}(\alpha_{i-1})$ and $\phi(f_{\alpha_i}^{-1}(\alpha_{i-1})) = f_{\alpha_i}(\alpha_{i-1})$. Consequently, \[ \phi(f_{\alpha_{i-1}}(\alpha_i)) = \phi(f_{\alpha_i}^{-1}(\alpha_{i-1})) = f_{\alpha_i}(\alpha_{i-1}) = f_{\alpha_{i-1}}^{-1}(\alpha_i).\] Repeating this argument, it follows that for all $i$, $\phi(f_{\alpha_i}(\alpha_{i+1})) = f_{\alpha_i}^{-1}(\alpha_{i+1})$, as required. Thus, in this case, $\phi$ is given by $\sigma$ as in the statement of the proposition.
If we are not in the situation of the previous paragraph, then it follows that $\phi$ is the identity, completing the proof. \end{proof} After this discussion we are in a position to explain how to obtain an exhaustion of $\mathcal C(S)$ by finite rigid sets. Here, $\mathrm{Mod}(S)$ denotes the index 2 subgroup of $\mathrm{Mod}^{\pm}(S)$ consisting of those mapping classes that preserve orientation. \begin{proposition} Let $Y\subset \mathcal C(S)$ be a finite rigid set such that $\mathrm{Mod}(S)\cdot Y = \mathcal C(S)$. Suppose there exists $G\subset Y$ such that: \begin{enumerate} \item The set $\{f_\alpha \mid \alpha \in G\}$ generates $\mathrm{Mod}(S)$; \item $Y \cap f_\alpha(Y)$ is weakly rigid, for all $\alpha \in G$. \end{enumerate} Then there exists a sequence $Y= Y_1 \subset Y_2 \subset \ldots \subset Y_n \subset \ldots$ such that $Y_i$ is a finite rigid set, has trivial pointwise stabilizer in $\mathrm{Mod}^{\pm}(S)$ for all $i$, and $$\bigcup_{i\in \mathbb N} Y_i = \mathcal C(S).$$ \label{P:finish} \end{proposition} \begin{proof} First, the fact that $Y$ is rigid implies that $f_\alpha(Y)$ is rigid for all $\alpha \in G$. Therefore, the set $Y_2:=Y \cup f_G(Y)$, where $f_G(Y) = \bigcup_{\alpha\in G} f_\alpha(Y)$, is also rigid by assumption (2) and repeated application of Lemma~\ref{L:glue}. We now define, for all $n\ge 2$, $$Y_{n+1}:= Y_n \cup f_G (Y_n).$$ By induction, we see that $Y_n$ is rigid for all $n$, and so the first claim follows. Next, the pointwise stabilizer of $Y$ in $\mathrm{Mod}^{\pm}(S)$ is trivial because $Y \cap f_\alpha(Y)$ is weakly rigid. Therefore, $Y_n$ has trivial pointwise stabilizer in $\mathrm{Mod}^{\pm}(S)$, as $Y\subset Y_n$ for all $n$. Finally, since $\{f_\alpha \mid \alpha \in G\}$ generates $\mathrm{Mod}(S)$ and $\mathrm{Mod}(S)\cdot Y = \mathcal C(S)$, it follows that $$\bigcup_{i\in \mathbb N} Y_i = \mathcal C(S),$$ which completes the proof.
\end{proof} We end this section by explaining how Theorem \ref{main} implies that curve complexes are simplicially rigid: \begin{proof}[Proof of Corollary \ref{C:Ivanov}] Let $S\ne S_{1,2}$, and let $\phi:\mathcal C(S)\to \mathcal C(S)$ be a locally injective simplicial map. Let $\mathfrak X_1\subset \mathfrak X_2 \subset \ldots $ be the exhaustion of $\mathcal C(S)$ provided by Theorem \ref{main}. Since $\mathfrak X_i$ is rigid and has trivial pointwise stabilizer in $\mathrm{Mod}^{\pm}(S)$, there exists a unique mapping class $h_i\in \mathrm{Mod}^{\pm}(S)$ such that $h_i|_{\mathfrak X_i} = \phi|_{\mathfrak X_i}$. Finally, Lemma \ref{L:glue} implies that $h_i= h_j$ for all $i,j$, and thus the result follows. \end{proof} \section{Punctured spheres} \label{S:punctured spheres} In this section we prove Theorem \ref{main} for $S=S_{0,n}$. If $n\le 3$ then $\mathcal C(S)$ is empty and thus the result is trivially true. The case $n=4$ is dealt with at the end of this section, as it needs special treatment. Thus, from now on we assume that $n\ge 5$. As in \cite{AL} we represent $S$ as the double of an $n$-gon $\Delta$ with vertices removed, and define $\mathfrak X$ as the set of curves on $S$ obtained by connecting every non-adjacent pair of sides of $\Delta$ by a straight line segment and then doubling; see Figure \ref{F:octagonarcs} for the case $n = 8$: \begin{figure} \caption{Octagon and arcs for $S_{0,8}$.} \label{F:octagonarcs} \end{figure} Note that the pointwise stabilizer of $\mathfrak X$ in $\mathrm{Mod}^{\pm}(S)$ has order two, and is generated by an orientation-reversing involution $i:S\to S$ that interchanges the two copies of $\Delta$.
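As a quick sanity check (this computation is ours and is not taken from \cite{AL}): since $\mathfrak X$ contains one curve for each non-adjacent pair of sides of $\Delta$, we have
\[
|\mathfrak X^{(0)}| \;=\; \binom{n}{2} - n \;=\; \frac{n(n-3)}{2}.
\]
For $n=5$ this gives five curves $\alpha_1,\ldots,\alpha_5$, with $\alpha_j$ disjoint from $\alpha_{j\pm 2}$ and intersecting $\alpha_{j\pm 1}$ (indices mod $5$), so $\mathfrak X$ is a $5$--cycle; this agrees with the statement in the introduction that $\mathfrak X(S_{0,n})$ is homeomorphic to $S^{n-4}$.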
The rigidity of the set $\mathfrak X$, which was established in \cite{AL}, may be rephrased as follows: \begin{theorem}[\cite{AL}]\label{T:sphere} For any locally injective simplicial map $\phi:\mathfrak X \to \mathcal C(S)$, there exists $h \in \mathrm{Mod}^{\pm}(S)$ such that $h|_{\mathfrak X} = \phi$, unique up to precomposing with $i$. \label{t:sphere} \end{theorem} We are going to enlarge the set $\mathfrak X$ in the fashion described in Section \ref{s:3}. We number the sides of $\Delta$ in a cyclic order, and denote by $\alpha_j$ the curve defined by the arc on $\Delta$ that connects the sides with labels $j$ and $j+2$ mod $n$. Let $A= \{\alpha_1, \ldots, \alpha_n\}$; in the terminology of \cite{AL}, $A$ is the set of {\em chain curves} of $\mathfrak X$. Observe that every element of $A$ bounds a disk containing exactly two punctures of $S$, and that if two elements of $A$ have non-zero intersection number then they are Farey neighbors in $\mathfrak X$. Thus we see that $A$ is a closed string of $n$ twistable Farey neighbors, and we may consider the set $\mathfrak X_A$ from Definition \ref{D:strings}. As a first step towards proving Theorem \ref{main} for $S_{0,n}$, we show that $\mathfrak X_A$ is rigid. Since the pointwise stabilizer of $\mathfrak X_A$ is trivial, this amounts to the following statement: \begin{theorem} \label{T:sphere-larger} For any locally injective simplicial map $\phi:\mathfrak X_A \to \mathcal C(S)$, there exists a unique $g \in \mathrm{Mod}^\pm(S)$ such that $g|_{\mathfrak X_A} = \phi$. \end{theorem} \begin{proof} Let $\phi:\mathfrak X_A \to \mathcal C(S)$ be a locally injective simplicial map. By Theorem \ref{t:sphere}, there exists $h \in \mathrm{Mod}^{\pm}(S)$ such that $h|_{\mathfrak X} = \phi|_{\mathfrak X}$, unique up to precomposing with the involution $i$. Since $i$ fixes every element of $\mathfrak X$, after replacing $\phi$ by $h^{-1}\circ \phi$ we may assume that $\phi|_{\mathfrak X}$ is the identity map.
By Proposition \ref{P:closed_string}, $\phi(\mathfrak X_A) =\mathfrak X_A$; moreover, the automorphism group of $\mathfrak X_A$ fixing $\mathfrak X$ pointwise has order two, generated by the involution $\sigma: \mathfrak X_A \to \mathfrak X_A$ that interchanges $f_{\alpha_i}(\alpha_{i+1})$ and $f_{\alpha_i}^{-1}(\alpha_{i+1})$ for all $i$. Since $ i|_{\mathfrak X_A}=\sigma$, up to precomposing $\phi$ with $i$, we deduce that $\phi|_{\mathfrak X_A}$ is the identity, as we wanted to prove. \end{proof} We now prove Theorem \ref{main} for spheres with punctures: \begin{proof}[Proof of Theorem \ref{main} for $S=S_{0,n}$, $n\ge 5$] Let $\mathfrak X_A$ be the set constructed above, which is rigid and has trivial pointwise stabilizer in $\mathrm{Mod}^{\pm}(S)$, by Theorem \ref{T:sphere-larger}. The set $\{H_\alpha \mid \alpha \in A\}$ generates $\mathrm{Mod}(S)$; see, for instance, Corollary 4.15 of \cite{Farb-Margalit}. In addition, $\mathfrak X_A \cap H_\alpha(\mathfrak X_A)$ is weakly rigid, for all $\alpha \in A$, as it contains $A$ and $H_{\alpha_i}(\alpha_j)$ for any $\alpha_i,\alpha_j$ disjoint from $\alpha$. Finally, by inspection we see that $\mathrm{Mod}(S)\cdot \mathfrak X_A = \mathcal C(S)$. Therefore, we may apply Proposition \ref{P:finish} to the sets $Y=\mathfrak X_A$ and $G= A$ to obtain the desired sequence $\mathfrak X_A= Y_1 \subset Y_2 \subset \ldots \subset Y_n \subset \ldots$ of finite rigid sets. \end{proof} \subsection{Proof of Theorem \ref{main} for $S=S_{0,4}$} As mentioned in the introduction, in this case $\mathcal C(S)$ is isomorphic to the Farey complex. It is easy to see, and is otherwise explicitly stated in \cite{AL}, that any triangle in $\mathcal C(S)$ is rigid. From this, plus the fact that any edge in $\mathcal C(S)$ is contained in exactly two triangles, it follows that any subcomplex of $\mathcal C(S)$ that is homeomorphic to a disk is also rigid.
Consider the dual graph of $\mathcal C(S)$ (which is in fact a trivalent tree $T$), equipped with the natural path-metric. Let $Y_1$ be a triangle in $\mathcal C(S)$, and define $Y_n$ to be the union of all triangles of $\mathcal C(S)$ whose corresponding vertices in $T$ are at distance at most $n$ from the vertex corresponding to $Y_1$. Then the sequence $(Y_n)_{n\in \mathbb N}$ gives the desired exhaustion of $\mathcal C(S)$. \section{Closed and punctured surfaces of genus $g \ge 2$} In this section we consider the case of a surface $S$ of genus $g\ge 2$ with $n\ge 0$ marked points. First observe that if $g = 2$ and $n = 0$, then since $\mathcal C(S_{2,0}) \cong \mathcal C(S_{0,6})$ \cite{Luo}, the main theorem for $S_{2,0}$ follows from the case $S_{0,6}$, already proved in Section~\ref{S:punctured spheres}. We therefore assume that $n \geq 1$ if $g = 2$. We let $\mathfrak X \subset \mathcal C(S)$ denote the finite rigid set constructed in \cite{AL}. The definition of the set $\mathfrak X$ is somewhat involved and we will not recall it in full detail. Instead, we first note that $\mathfrak X$ contains the set of {\em chain curves} \[ \mathcal{CC} = \{\alpha_0^0,\ldots,\alpha_0^n,\alpha_1,\ldots,\alpha_{2g+1} \} \] depicted in Figure \ref{F:genus4chain}. For notational purposes we also write $\alpha_0 = \alpha_0^1$ (and in case $n = 0$, $\alpha_0 = \alpha_0^0$). In addition to these curves, $\mathfrak X$ contains every curve which occurs as the boundary component of a subsurface of $S$ filled by a subset $A \subset \mathcal{CC}$, provided its union is connected in $S$ and has one of the following forms: \begin{enumerate} \item $A = \{ \alpha_0^i,\alpha_0^j,\alpha_k \}$ where $0 \leq i \leq j \leq n$ and $k = 1$ or $2g+1$. \item $A = \{ \alpha_0^i, \alpha_0^j, \alpha_k,\alpha_{k+1} \}$ where $0 \leq i \leq j \leq n$ and $k = 1$ or $2g$.
\item $A = \{ \alpha_i \mid i \in I \}$ where $I \subset \{0,\ldots,2g+1\}$ is an interval (modulo $2g+2$). If $n >0$ and $A$ has an odd number of curves, then we additionally require the first and last numbers in the interval $I$ to be even. \end{enumerate} See Figure \ref{F:examples} for some key examples. \begin{figure} \caption{Chain curves $\mathcal{CC}$ on a genus $4$ surface with $2$ marked points.} \label{F:genus4chain} \end{figure} \begin{figure} \caption{Examples of subsets of $\mathcal{CC}$ (in blue), together with the boundary components (in red) of the subsurface filled by them. The red curves are in $\mathfrak X$.} \label{F:examples} \end{figure} The pointwise stabilizer of $\mathfrak X$ in $\mathrm{Mod}^{\pm}(S)$ is trivial. Thus the rigidity of the set $\mathfrak X$, established in \cite{AL}, may be rephrased as follows: \begin{theorem}[\cite{AL}] Let $S = S_{g,n}$ with $g \geq 2$ and $n \geq 0$ (and $n \geq 1$ if $g = 2$). For any locally injective simplicial map $\phi:\mathfrak X \to \mathcal C(S)$, there exists a unique $h \in \mathrm{Mod}^{\pm}(S)$ such that $h|_{\mathfrak X} = \phi$. \label{t:rigidhigher} \end{theorem} It will be necessary to refer to some of the curves in $\mathfrak X$ by name, so we briefly describe the naming convention in those cases, along the lines of \cite{AL}. We have already described the names of the elements of $\mathcal{CC}$. For $0 < i < j \leq n$ we let $\epsilon^{ij}$ be the boundary component of the subsurface $N(\alpha_1 \cup \alpha_0^{i-1} \cup \alpha_0^j)$ that also bounds a $(j-i+1)$--punctured disk in $S$ (containing the $i^{th}$ through $j^{th}$ punctures). We call the curves $\epsilon^{ij}$ {\em outer curves}; see Figure~\ref{F:examples}. For $0 < i \leq j \leq n$, we also consider the other boundary component of $N(\alpha_1 \cup \alpha_0^{i-1} \cup \alpha_0^j)$; this is a separating curve dividing the surface into two (punctured) subsurfaces of genus $1$ and $g-1$ respectively.
We denote this curve $\sigma^{ij}$. One more curve in $\mathfrak X$ that we refer to as $\beta$ is shown in Figure~\ref{F:examples}, and is a component of the boundary of the subsurface $N(\alpha_{2g-2} \cup \alpha_{2g-1} \cup \alpha_{2g})$. The strategy for proving Theorem \ref{main} for surfaces of genus $g\ge 2$ is similar in spirit to the one for punctured spheres, although considerably more involved. The main idea is to produce successive rigid enlargements of the rigid set $\mathfrak X$ identified in \cite{AL}, until we are in a position to apply Proposition \ref{P:finish}. We begin by replacing $\mathfrak X$ with $\mathfrak X'$, which is rigid by Proposition \ref{p:prime}. For every $0 < j \leq n$, let \[ A_j = \{ \sigma^{i j} \mid 0 < i \leq j \} \cup \{ \sigma^{j i} \mid j \leq i \leq n \} \cup \{\alpha_1, \alpha_3, \alpha_4, \alpha_5, \ldots, \alpha_{2g+1} \}.\] The set $A_j$ is almost filling and uniquely determines a curve denoted $\alpha_1^j$; see Figure~\ref{F:alpha1j}. The naming is suggestive, as all the $\alpha_1^j$ become homotopic to $\alpha_1$ upon filling in the punctures. \begin{figure} \caption{The surface on the left contains the set $A_2$, which uniquely determines $\alpha_1^2$.} \label{F:alpha1j} \end{figure} We can similarly find a subset $A_0$ (shown on the left of Figure~\ref{F:alpha10}) which is almost filling and uniquely determines a curve denoted $\alpha_1^0$ (shown on the right of Figure~\ref{F:alpha10}), which bounds a disk enclosing every puncture of $S$.
Consequently, $\alpha_1^j \in \mathfrak X'$ for all $j = 0,\ldots,n$. \begin{figure} \caption{The curves $A_0 \subset \mathfrak X$ (left) and the curve $\alpha_1^0 \in \mathfrak X'$ (right).} \label{F:alpha10} \end{figure} \subsection{Punctured surface promotion} One issue that arises only in the case $n > 0$ is that for intervals $I \subset \{0,\ldots,2g+1 \}$ (modulo $2g+2$) of odd length, the boundary curves of the neighborhood of the subsurface filled by $A= \{ \alpha_i \mid i \in I\}$ are only contained in $\mathfrak X$ when $I$ starts and ends with even indexed curves. Passing to the set $\mathfrak X'$ allows us to easily enlarge further to a set which rectifies this problem. Specifically, we define $\mathfrak X_1$ to be the union of $\mathfrak X'$ together with the boundary components of subsurfaces filled by sets $A = \{ \alpha_i,\alpha_{i+1},\ldots,\alpha_j \}$ where $0 < i \leq j \leq 2g-1$ and $i,j$ are both odd. See Figure \ref{F:Aodd} for examples. Let $\mathfrak B_o$ be the set of all curves defined by such sets $A$. Before we proceed, we describe this set in more detail. Cutting $S$ open along $\alpha_1 \cup \alpha_3 \cup \ldots \cup \alpha_{2g-1} \cup \alpha_{2g+1}$ we obtain two components $\Theta_o^+$ and $\Theta_o^-$. These are each spheres with holes: $\Theta_o^+$ is the sphere in ``front'' in Figure~\ref{F:genus4chain}, which is a $(g+n+1)$--holed sphere containing the $n$ punctures of $S$, while $\Theta_o^-$ is the $(g+1)$--holed sphere in the ``back'' in Figure~\ref{F:genus4chain}. For every $A = \{ \alpha_i,\alpha_{i+1},\ldots,\alpha_j \}$ where $0 < i < j \leq 2g-1$ and $i,j$ are both odd, the boundary of the subsurface filled by $A$ has exactly two components $\beta_A^\pm$ with $\beta_A^+ \subset \Theta_o^+$ and $\beta_A^- \subset \Theta_o^-$ (possibly peripheral in $\Theta_o^\pm$, depending on $A$).
Furthermore, for every such set $A$, there is a ``complementary'' set $A' \subset \mathfrak X$ so that $A \cup A'$ is almost filling, and so that $\{\beta_A^\pm\}$ is the set determined by $A \cup A'$. See Figure~\ref{F:Aodd}. \begin{figure} \caption{The sets $A = \{ \alpha_1,\ldots,\alpha_5 \}$ and a complementary set $A'$, which together determine the curves $\beta_A^\pm$.} \label{F:Aodd} \end{figure} \begin{lemma} For all $g \geq 2$ and $n \geq 1$, the set $\mathfrak X_1$ is rigid and has trivial pointwise stabilizer in $\mathrm{Mod}^{\pm}(S_{g,n})$. \label{L:X1} \end{lemma} \begin{proof} First, $\mathfrak X_1$ has trivial pointwise stabilizer since $\mathfrak X$ does. Given any locally injective simplicial map $\phi \colon \mathfrak X_1 \to \mathcal C(S)$, there exists a unique $h \in \mathrm{Mod}^{\pm}(S)$ so that $\phi|_{\mathfrak X'} = h|_{\mathfrak X'}$, by Theorem \ref{t:rigidhigher} and Proposition~\ref{p:prime}. Composing with the inverse of $h$ if necessary, we can assume $\phi$ is the identity on $\mathfrak X'$. So we need only show that $\phi(\gamma) = \gamma$ for all $\gamma \in \mathfrak X_1 - \mathfrak X'$. With respect to the notation above, any such curve is $\beta_A^{\pm}$ for $A = \{ \alpha_i,\alpha_{i+1},\ldots,\alpha_j \}$, where $0 < i \leq j \leq 2g-1$ and $i,j$ are both odd. Since $A\cup A'$ is almost filling, $\phi(\{\beta_A^{\pm}\}) = \{\beta_A^{\pm}\}$. Now, for $A=\{\alpha_1,\alpha_2, \alpha_3\}$, we have $i(\beta_A^+, \alpha_0^1) \ne 0$ and $i(\beta_A^-, \alpha_0^1) = 0$; here, $\alpha_0^1$ is the curve depicted in Figure \ref{F:alpha10}. Therefore $\phi(\beta_A^+) =\beta_A^+$, as $\phi$ is locally injective and simplicial. Finally, an easy connectivity argument involving the set of curves $\{\beta_A^{\pm}\}_A$ yields the desired result. \end{proof} \subsection{Half the proof and the case of one or fewer punctures.} We now enlarge the set $\mathfrak X_1 \subset \mathcal C(S)$ from Lemma \ref{L:X1} to $\mathfrak X_1^2=(\mathfrak X_1')' \subset \mathcal C(S)$.
According to Proposition~\ref{p:prime}, $\mathfrak X_1^2$ is rigid, and since the pointwise stabilizer of $\mathfrak X$ is trivial, so is the pointwise stabilizer of $\mathfrak X_1^2$. We will need the following lemma; see Figure~\ref{F:genus4chain} for the labeling of the curves: \begin{lemma} For any $g \geq 2$ and $n \geq 0$ (with $n \geq 1$ if $g = 2$), we have $T_{\alpha_{2g}}(\alpha_{2g-1}) \in \mathfrak X_1^2$. \label{L:determine_chain} \end{lemma} \begin{proof} This requires a series of pictures, slightly different in the cases $g \geq 3$ and $g = 2$. \noindent{\bf Case 1: $g \geq 3$.} We refer the reader to Figure~\ref{F:chain_g>2}: although we have only drawn the figures for $g=3$ and $n=2$, it is straightforward to extend them to all $g \geq 3$ and $n \geq 0$. The upper left figure shows an almost filling set of curves contained in $\mathfrak X_1$, uniquely determining the curve in the upper right figure, which is thus in $\mathfrak X_1'$. This curve is then used to produce an almost filling set, depicted in the lower left figure, that uniquely determines $T_{\alpha_{2g}}(\alpha_{2g-1})$, shown on the right. Thus we see that $T_{\alpha_{2g}}(\alpha_{2g-1})\in \mathfrak X^2_1$, as claimed. \noindent{\bf Case 2: $g=2$ and $n\ge 1$.} In this case a different set of pictures is required; see Figure \ref{F:2n_chain_chain}. The upper left figure shows an almost filling set of curves that is contained in $\mathfrak X_1$ and uniquely determines the curve shown on the upper right. This curve is then used to produce an almost filling set, depicted in the middle left picture, which is contained in $\mathfrak X'_1$ and uniquely determines the curve in the middle right figure. We now use this new curve to produce an almost filling set (lower left) that is contained in $\mathfrak X_1^2$ and uniquely determines $T_{\alpha_{2g}}(\alpha_{2g-1})$ (lower right).
\end{proof} \begin{figure} \caption{Illustrating $T_{\alpha_{2g}}(\alpha_{2g-1}) \in \mathfrak X_1^2$ in the case $g \geq 3$.} \label{F:chain_g>2} \end{figure} \begin{figure} \caption{Illustrating $T_{\alpha_{2g}}(\alpha_{2g-1}) \in \mathfrak X_1^2$ in the case $g = 2$.} \label{F:2n_chain_chain} \end{figure} We claim that the set $\mathfrak X_1^2 \cup T_{\mathcal{CC}}(\mathcal{CC})$ is rigid. More concretely: \begin{lemma} Let $\phi: \mathfrak X_1^2 \cup T_{\mathcal{CC}}(\mathcal{CC})\to \mathcal C(S)$ be a locally injective simplicial map. Then there exists a unique $h\in \mathrm{Mod}^{\pm}(S)$ such that $h|_{\mathfrak X_1^2 \cup T_{\mathcal{CC}}(\mathcal{CC})} =\phi$. \label{L:enlarge1} \end{lemma} \begin{proof} Let $\phi: \mathfrak X_1^2 \cup T_{\mathcal{CC}}(\mathcal{CC}) \to \mathcal C(S)$ be a locally injective simplicial map. Since $\mathfrak X_1^2$ is rigid and its pointwise stabilizer in $\mathrm{Mod}^{\pm}(S)$ is trivial, there exists a unique $h\in \mathrm{Mod}^{\pm}(S)$ such that $h|_{\mathfrak X_1^2} =\phi|_{\mathfrak X_1^2} $. Precomposing $\phi$ with $h^{-1}$, we may assume that in fact $\phi|_{\mathfrak X_1^2} $ is the identity map. For $i=0, \ldots, n$, $\mathcal{CC}_i=\{\alpha_0^i, \alpha_1, \ldots, \alpha_{2g+1}\}$ is a closed string of twistable Farey neighbors in $\mathfrak X_1^2$ (the fact that the nonzero intersection numbers between these curves are $\mathfrak X$--detectable, hence $\mathfrak X_1^2$--detectable, is shown in the proofs of Theorems 5.1 and 6.1 in \cite{AL}). Consider the set $\mathfrak X_1^2 \cup T_{\mathcal{CC}_i}(\mathcal{CC}_i)$, and observe that, in the terminology of Definition \ref{D:strings}, it equals $Y_A$ for $Y= \mathfrak X_1^2$ and $A = \mathcal{CC}_i$. By Proposition \ref{P:closed_string}, $$\phi(\mathfrak X_1^2 \cup T_{\mathcal{CC}_i}(\mathcal{CC}_i)) = \mathfrak X_1^2 \cup T_{\mathcal{CC}_i}(\mathcal{CC}_i);$$ moreover, the automorphism group of $\mathfrak X_1^2 \cup T_{\mathcal{CC}_i}(\mathcal{CC}_i)$ fixing $\mathfrak X_1^2$ pointwise has order at most two.
But, by Lemma \ref{L:determine_chain}, $T_{\alpha_{2g}}(\alpha_{2g-1}) \in \mathfrak X_1^2$, and thus this group is trivial. In other words, we have shown that the set $\mathfrak X_1^2 \cup T_{\mathcal{CC}_i}(\mathcal{CC}_i)$ is rigid. Now, $\mathfrak X_1^2 \cup T_{\mathcal{CC}_0}(\mathcal{CC}_0) \cup T_{\mathcal{CC}_1}(\mathcal{CC}_1)$ is also rigid by Lemma \ref{L:glue}, since $(\mathfrak X_1^2 \cup T_{\mathcal{CC}_0}(\mathcal{CC}_0) ) \cap (\mathfrak X_1^2 \cup T_{\mathcal{CC}_1}(\mathcal{CC}_1))$ is weakly rigid, as it contains $\mathfrak X_1^2$. Since $ T_{\mathcal{CC}}(\mathcal{CC}) = \bigcup_{i=0}^n T_{\mathcal{CC}_i}(\mathcal{CC}_i)$, we may repeat essentially this same argument $n-1$ more times to conclude that $\mathfrak X_1^2 \cup T_{\mathcal{CC}}(\mathcal{CC})$ is rigid, as required. \end{proof} Next, we provide a further enlargement of our rigid set. Let $\beta$ be the curve depicted in Figure \ref{F:examples}, which is one of the boundary components of the surface $N(\alpha_{2g-2} \cup \alpha_{2g-1} \cup \alpha_{2g})$. We claim: \begin{lemma} The set $\mathfrak X_1^2 \cup T_{\mathcal{CC}}(\mathcal{CC}) \cup T_\beta(\mathcal{CC})$ is rigid. \label{L:enlarge2} \end{lemma} \begin{proof} Let $\phi: \mathfrak X_1^2 \cup T_{\mathcal{CC}}(\mathcal{CC}) \cup T_\beta(\mathcal{CC}) \to \mathcal C(S)$ be a locally injective simplicial map. By Lemma \ref{L:enlarge1}, $\mathfrak X_1^2 \cup T_{\mathcal{CC}}(\mathcal{CC})$ is rigid and thus, up to precomposing $\phi$ with an element of $\mathrm{Mod}^{\pm}(S)$, we may assume that $\phi|_{\mathfrak X_1^2 \cup T_{\mathcal{CC}}(\mathcal{CC})}$ is the identity. The set $$A= \{\alpha_{2g}, \alpha_{2g-1}, \alpha_{2g-2}, \beta, \alpha_{2g+1}\}\subset \mathfrak X_1^2$$ is a closed string of twistable Farey neighbors in $\mathfrak X_1^2$ (again, detectability of the nonzero intersection numbers is shown in \cite{AL}).
Therefore, we may apply Proposition \ref{P:closed_string} to $\mathfrak X= \mathfrak X_1^2 \cup T_{\mathcal{CC}}(\mathcal{CC})$ and $A$ to deduce that $\phi(\mathfrak X_A) = \mathfrak X_A$; observe that $\mathfrak X_A = \mathfrak X_1^2 \cup T_{\mathcal{CC}}(\mathcal{CC}) \cup T_\beta(\mathcal{CC})$. Moreover, the automorphism group of $\mathfrak X_A$ fixing $\mathfrak X$ pointwise is trivial, by Lemma \ref{L:enlarge1}, and thus the result follows. \end{proof} \begin{proof}[Proof of Theorem~\ref{main} for $g \geq 2$ and $n \leq 1$] Let $Y = \mathfrak X_1^2 \cup T_{\mathcal{CC}}(\mathcal{CC}) \cup T_{\beta}(\mathcal{CC})$. When $S$ is closed or has one puncture, the Dehn twists about the chain curves and the Dehn twist about the curve $\beta$ generate $\mathrm{Mod}(S)$; see, for example, Corollary 4.15 of \cite{Farb-Margalit}. For $\gamma \in \mathcal{CC} \cup \{ \beta \}$, the set \[ T_\gamma(Y) \cap Y \] contains $\mathcal{CC}$, together with $T_\alpha(\alpha')$ for any $\alpha,\alpha' \in \mathcal{CC}$ which are disjoint from $\gamma$. In particular, this set is weakly rigid. By inspection, the $\mathrm{Mod}(S)$--orbit of $Y$ is all of $\mathcal C(S)$, and so by Proposition~\ref{P:finish}, the set $Y$ suffices to prove the theorem. \end{proof} \subsection{Multiple punctures.} When $S_{g,n}$ has $n \geq 2$ (and $g \geq 2$), the twists about the curves in $\mathcal{CC} \cup \{\beta\}$ do not generate the entire mapping class group. In this case, one needs to add the half-twists about the outer curves $\epsilon^{i \, i+1}$ bounding twice-punctured disks; see again Corollary 4.15 of \cite{Farb-Margalit}. Because of this, and in light of Proposition~\ref{P:finish}, when $n \geq 2$ we would like to enlarge the rigid set from the previous subsection by adding half-twists of chain curves about the outer curves $\epsilon^{i \, i+1}$.
In fact, denoting this set of outer curves by $\mathfrak O_P = \{\epsilon^{i \, i+1}\}_{i=1}^{n-1}$, we shall show that the half-twists of chain curves about them already lie in $\mathfrak X_1^2$. Specifically, we prove: \begin{lemma} We have $H_{\mathfrak O_P}(\mathcal{CC}) \subset \mathfrak X_1^2$. \label{L:enlarge3} \end{lemma} \begin{proof} If $\alpha \in \mathcal{CC}$ and $\epsilon^{j \, j+1} \in \mathfrak O_P$, then we must show that $H_{\epsilon^{j \, j+1}}(\alpha) \in \mathfrak X_1^2$ for each $j = 1,\ldots, n-1$. This is clear if $i(\alpha,\epsilon^{j \, j+1}) = 0$, since then $H_{\epsilon^{j \, j+1}}(\alpha) = \alpha$. The intersection number is nonzero only when $\alpha = \alpha_0^j$, so it suffices to consider only this case. To prove $H_{\epsilon^{j \, j+1}}(\alpha_0^j) \in \mathfrak X_1^2$, we need only exhibit the almost filling sets from $\mathfrak X_1'$ uniquely determining this curve. This in turn requires an almost filling set from $\mathfrak X_1$. As before, we provide the necessary curves in a sequence of two figures. First, the almost filling set on the left of Figure \ref{F:half1} is contained in $\mathfrak X'$, and hence in $\mathfrak X_1$ (compare with Figure \ref{F:alpha1j}), and uniquely determines the curve $\gamma_1$ depicted on the right of the same figure. Therefore, $\gamma_1 \in \mathfrak X_1'$. The left side of Figure \ref{F:half2} then shows an almost filling set in $\mathfrak X_1'$, which uniquely determines the curve on the right of the same figure. This curve is $H_{\epsilon^{j \, j+1}}(\alpha_0^j)$, and so this completes the proof. \end{proof} \begin{figure} \caption{Determining the curve $\gamma_1$.} \label{F:half1} \end{figure} \begin{figure} \caption{Determining the curve $H_{\epsilon^{j \, j+1}}(\alpha_0^j)$.} \label{F:half2} \end{figure} We are finally in a position to prove Theorem \ref{main} for surfaces of genus $g\ge 2$ and $n \geq 2$: \begin{proof}[Proof of Theorem \ref{main} for $S=S_{g,n}$, $g\ge 2$, $n \geq 2$.]
The set $Y= \mathfrak X_1^2 \cup T_{\mathcal{CC}}(\mathcal{CC}) \cup T_\beta(\mathcal{CC})$ is rigid by Lemma \ref{L:enlarge2}, and has trivial pointwise stabilizer in $\mathrm{Mod}^{\pm}(S)$ since $\mathfrak X$ does. Moreover $\mathrm{Mod}(S) \cdot Y = \mathcal C(S)$, by inspection. Consider the subset $ G = \mathcal{CC} \cup \{\beta\} \cup \mathfrak O_P$; as mentioned before, the (half) twists about elements of $G$ generate $\mathrm{Mod}(S)$, and the needed half-twist images lie in $Y$ by Lemma \ref{L:enlarge3}. In addition, for every $\alpha \in G$, $Y \cap f_\alpha(Y)$ is weakly rigid. Thus we can apply Proposition \ref{P:finish} to $Y$ and $ G$, hence obtaining the desired exhaustion of $\mathcal C(S)$. \end{proof} \section{Tori} In this section we will prove Theorem \ref{main} for $S=S_{1,n}$, for $n\ge 0$. First, if $n \le 1$ then $\mathcal C(S)$ is isomorphic to the Farey complex, and thus the result follows as in the case of $S_{0,4}$; see Section \ref{S:punctured spheres}. For $n=2$, Theorem \ref{main} is not true as stated, due to the existence of {\em non-geometric} automorphisms of $\mathcal C(S)$, as mentioned in the introduction. However, in light of the isomorphism $\mathcal C(S_{0,5}) \cong \mathcal C(S_{1,2})$ \cite{Luo}, the same statement holds after replacing the group $\mathrm{Mod}^{\pm}(S)$ by $\mathrm{Aut}(\mathcal C(S))$ in the definition of rigid set, by the results of Section \ref{S:punctured spheres}. Therefore, from now on we assume $n\ge 3$. In \cite{AL}, we constructed a finite rigid set $\mathfrak X$ described as follows. View $S_{1,n}$ as a unit square with $n$ punctures along the horizontal midline and the sides identified.
The set $\mathfrak X$ contains a subset $\mathcal{CC} \subset \mathfrak X$ of $n+1$ {\em chain curves} \[ \mathcal{CC} = \{ \alpha_1,\ldots,\alpha_n\} \cup \{\beta\} \] where $\alpha_1,\ldots,\alpha_n$ are distinct curves which appear as vertical lines in the square and $\beta$ is the curve which appears as a horizontal line; see Figure \ref{F:original torus curves}. We assume that the indices on the $\alpha_i$ are ordered cyclically around the torus, and that the punctures are labelled so that the $i^{th}$ puncture lies between $\alpha_i$ and $\alpha_{i+1}$. The boundaries of the subsurfaces filled by connected unions of these chain curves form a collection of curves, denoted $\mathfrak O$, which we refer to as {\em outer curves}. Then $$\mathfrak X = \mathcal{CC} \cup \mathfrak O.$$ This set has a nontrivial pointwise stabilizer in $ \mathrm{Mod}^\pm(S_{1,n})$, which can be realized as the (descent to $S_{1,n}$ of the) horizontal reflection of the square through the midline containing the punctures. Denoting this involution $r \colon S_{1,n} \to S_{1,n}$, we can summarize the result of \cite{AL} in the following: \begin{figure} \caption{Chain curves on the left, and some examples of outer curves on the right, in $S_{1,5}$.} \label{F:original torus curves} \end{figure} \begin{theorem} \cite{AL} For any locally injective simplicial map $\phi \colon \mathfrak X \to \mathcal C(S_{1,n})$ there exists $h \in \mathrm{Mod}^\pm(S_{1,n})$ such that $h|_{\mathfrak X} = \phi$. Moreover, $h$ is unique up to precomposing with $r$. \label{T:AL_torus} \end{theorem} The strategy of proof is again similar to that of previous sections, although the technicalities are different, and boils down to producing an enlargement of the set $\mathfrak X$ so that Proposition \ref{P:finish} can be applied. We begin by enlarging the set $\mathfrak X$ as follows. We let $\delta_i$ be the curve coming from the vertical line through the $i^{th}$ puncture in the square.
For every $1 \leq i \leq n$, let $\beta_i^+$ be the curve obtained from $\beta$ by pushing it up over the $i^{th}$ puncture. More precisely, we consider the point-pushing homeomorphism $f_i \colon S_{1,n} \to S_{1,n}$ that pushes the $i^{th}$ puncture up and around $\delta_i$, and then let $\beta_i^+ = f_i(\beta)$. We similarly define $\beta_i^- = f_i^{-1}(\beta)$, and set $\beta_{i,i+1}^\pm = f_{i+1}^{\pm 1}f_i^{\pm 1}(\beta)$, where the subscripts are taken modulo $n$. See Figure~\ref{F:torus beta curves}. \begin{figure} \caption{The curves $\beta_2^-$, $\beta_3^+$, and $\beta_{4,5}^+$ in $S_{1,5}$.} \label{F:torus beta curves} \end{figure} Let \[ \mathfrak X_1 = \mathfrak X \cup \{\beta_i^\pm \mid 1 \leq i \leq n \} \cup \{ \beta_{i,i+1}^\pm \mid 1 \leq i \leq n\} \] with indices in the last set taken modulo $n$. We first prove that this set is rigid; since the pointwise stabilizer of $\mathfrak X_1$ in $\mathrm{Mod}^{\pm}(S_{1,n})$ is trivial, this amounts to the following: \begin{proposition} For any locally injective simplicial map $\phi \colon \mathfrak X_1 \to \mathcal C(S_{1,n})$, there exists a unique $h \in \mathrm{Mod}^\pm(S_{1,n})$ so that $h|_{\mathfrak X_1} = \phi$. \label{P:torus_large} \end{proposition} The proof of this proposition will require repeated applications of Lemma \ref{l:fareydetect}, and as such, we must verify that certain quadruples of curves satisfy the hypotheses of that lemma. We will need to refer to the outer curves by name. To this end, note that since any outer curve surrounds a set of (cyclically) consecutive punctures, we can determine an outer curve by specifying the first and last puncture surrounded. Consequently, we let $\epsilon^{i \, j}$ denote the outer curve surrounding all punctures from the $i^{th}$ to the $j^{th}$, with all indices taken modulo $n$. Observe that since the set of punctures is cyclically ordered, we do not need to assume that $i<j$ in the definition of $\epsilon^{i \, j}$.
We will need the following lemma: \begin{figure} \caption{The curves $\beta_{2 \, 3}^-$, $\epsilon^{4 \, 3}$, $\beta_3^-$, and $\epsilon^{2 \, 3}$ (left), and the four-holed sphere $N$ (right).} \label{F:torusfareynb2} \end{figure} \begin{lemma} \label{L:quads_satisfying_Lfareydetect} For each $1 \leq i \leq n$, consider the following four quadruples of curves in $\mathfrak X_1$, with indices taken modulo $n$: \begin{itemize} \item $\beta_{i-1 \, i}^\pm,\epsilon^{i+1 \, i},\beta_i^\pm,\epsilon^{i-1 \, i}$, \item $\beta_{i \, i+1}^\pm,\epsilon^{i \, i-1}, \beta_i^\pm,\epsilon^{i \, i+1}$. \end{itemize} Each of these satisfies the hypotheses of Lemma~\ref{l:fareydetect}. Furthermore, the nonzero intersections are all $\mathfrak X_1$--detectable. Consequently, $\epsilon^{i+1 \, i}$ and $\epsilon^{i \, i-1}$ are the unique Farey neighbors of $\beta_i^-$ and $\beta_i^+$. \end{lemma} \begin{proof} The fact that the four quadruples of curves each satisfy the hypotheses of Lemma~\ref{l:fareydetect} is clear by inspection. See the left side of Figure~\ref{F:torusfareynb2} for the case $\beta_{i-1 \, i}^-,\epsilon^{i+1 \, i},\beta_i^-,\epsilon^{i-1 \, i}$. The four-holed sphere $N$ filled by the Farey neighbors $\epsilon^{i+1 \, i},\beta_i^\pm$ and $\epsilon^{i \, i-1}, \beta_i^\pm$ has holes corresponding to the $i^{th}$ puncture, and the curves $\beta$ and $\epsilon^{i+1 \, i-1}$; see the right side of Figure~\ref{F:torusfareynb2}. Only $\epsilon^{i+1 \, i-1}$ intersects $\beta_{i-1 \, i}^\pm, \beta_{i \, i+1}^\pm, \epsilon^{i-1 \, i},\epsilon^{i \, i+1}$ nontrivially, as required for Lemma~\ref{l:fareydetect}. To see that all the intersections are $\mathfrak X_1$--detectable, we need only exhibit the necessary curves in $\mathfrak X_1$ determining a pants decomposition of $S - N$. See Figure~\ref{F:torusdetectfareynb1} for the curves necessary to detect $i(\beta_i^-,\epsilon^{i-1 \, i}) \neq 0$. We leave the other cases to the reader.
\begin{figure} \caption{We use $\{\beta, \beta_{2 \, 3}^-, \dots\}$ to detect $i(\beta_3^-,\epsilon^{2 \, 3}) \neq 0$.} \label{F:torusdetectfareynb1} \end{figure} \end{proof} We are now in a position to prove Proposition \ref{P:torus_large}: \begin{proof}[Proof of Proposition \ref{P:torus_large}] Let $\phi \colon \mathfrak X_1 \to \mathcal C(S_{1,n})$ be a locally injective simplicial map. By Theorem \ref{T:AL_torus}, there exists $f \in \mathrm{Mod}^\pm(S_{1,n})$ such that $f|_{\mathfrak X} = \phi|_{\mathfrak X}$, unique up to precomposing with $r$. In fact, after precomposing $\phi$ with $f^{-1}$ we may as well assume that $\phi|_{\mathfrak X}$ is the identity. According to Lemma~\ref{L:quads_satisfying_Lfareydetect}, for all $i$, $\phi(\epsilon^{i+1 \, i}) = \epsilon^{i+1 \, i}$ and $\phi(\epsilon^{i \, i-1}) = \epsilon^{i \, i-1}$ are the unique Farey neighbors of $\phi(\beta_i^-)$ and $\phi(\beta_i^+)$ (with indices taken modulo $n$). Consequently, $\phi(\{\beta_i^{\pm}\}) = \{ \beta_i^\pm\}$ for all $i$. Notice that $i(\beta_i^+,\beta_j^-) = 0$ for all $i,j$, while $i(\beta_i^+,\beta_j^+) = i(\beta_i^-,\beta_j^-) = 2$ for all $i \neq j$. It follows that if $\phi(\beta_i^-) = \beta_i^+$ for some $i$, then this is true for all $i$. Composing with $r$ if necessary, we deduce that $\phi(\beta_i^\pm) = \beta_i^\pm$ for all $i$. All that remains is to see that $\phi(\beta_{i \, i+1}^\pm) = \beta_{i \, i+1}^\pm$ for all $i$. To prove this we need only show that $\beta_{i \, i+1}^\pm \in (\mathfrak X \cup \{\beta_j^\pm \mid 1 \leq j \leq n \})'$, and then we can apply Proposition~\ref{p:prime}. First note that when $n = 3$, then $\beta_{i \, i+1}^\pm = \beta_{i+2}^\mp$, so there is nothing to prove in this case. In general, one readily checks that $\beta_{i \, i+1}^+$ is uniquely determined by the almost filling set \[ \{ \beta,\beta_1^-,\beta_2^-,\ldots,\beta_n^- \} \setminus \{ \beta_i^-,\beta_{i+1}^- \}.\] This completes the proof.
\end{proof} Let $\mathfrak O_P = \{\epsilon^{i \, i+1}\}_{i=1}^n$, with indices counted modulo $n$. For $n \geq 5$, this is a closed string of twistable Farey neighbors in $\mathfrak X_1$, and in this case we could appeal to Proposition~\ref{P:closed_string} to add the half-twists of the curves of $\mathfrak O_P$ about the other curves in $\mathfrak O_P$. However, we can provide a single argument for all $n \geq 3$. \begin{lemma} \label{L:half_twist_tori} For all $\epsilon,\epsilon' \in \mathfrak O_P$, $H_{\epsilon}^{\pm 1}(\epsilon') \in \mathfrak X_1'$. Consequently, $H_{\epsilon}^{\pm 1}(\mathfrak X_1') \cup \mathfrak X_1'$ is rigid. \end{lemma} \begin{proof} We start with the proof of the first statement. If $i(\epsilon,\epsilon') = 0$, then there is nothing to prove. Otherwise, up to a homeomorphism we may assume that $\epsilon = \epsilon^{i \, i+1}$ and $\epsilon' = \epsilon^{i+1 \, i+2}$. Then we note that $H_{\epsilon^{i \, i+1}}(\epsilon^{i+1 \, i+2})$ is the curve uniquely determined by the almost filling set of curves \[ \{ \beta_{i+1}^+ \} \cup \{\alpha_1,\ldots,\alpha_n \} \setminus \{ \alpha_{i+1},\alpha_{i+2} \},\] completing the proof of the first statement. For the second statement, we note that $H_{\epsilon^{i \, i+1}}(\mathfrak X_1') \cap \mathfrak X_1'$ contains the weakly rigid set $\mathfrak O_P \cup \{\beta_{i+2}^+ \}$, for example. Therefore, since $\mathfrak X_1$ is rigid by Proposition~\ref{P:torus_large}, so is $\mathfrak X_1'$ by Proposition~\ref{p:prime}, and hence by Lemma~\ref{L:glue} it follows that $H_{\epsilon^{i \, i+1}}(\mathfrak X_1') \cup \mathfrak X_1'$ is rigid, as required. A similar argument proves the statement for $H_{\epsilon^{i \, i+1}}^{-1}$. \end{proof} We also need to consider Dehn twists about $\alpha_i$ and $\beta$.
To deal with these, we first define $\mathfrak X_2 = \mathfrak X_1' \cup H_{\mathfrak O_P}(\mathfrak X_1')$, where $H_{\mathfrak O_P}(\mathfrak X_1')$ is the union of $H_{\epsilon}^{\pm 1}(\mathfrak X_1')$ over all $\epsilon \in \mathfrak O_P$. By Lemma~\ref{L:half_twist_tori}, $\mathfrak X_2$ is rigid. \begin{lemma} For all $i = 1,\ldots,n$, we have $T_{\alpha_i}^{\pm 1}(\beta) = T_{\beta}^{\mp 1}(\alpha_i) \in \mathfrak X_1^2 \subset \mathfrak X_2^2$. Consequently, $T_{\alpha_i}^{\pm 1}(\mathfrak X_2^2) \cup \mathfrak X_2^2$ and $T_\beta^{\pm 1}(\mathfrak X_2^2) \cup \mathfrak X_2^2$ are rigid. \end{lemma} \begin{proof} As in previous arguments, we exhibit a series of pictures that will yield the desired result; see Figure \ref{F:torustwist}. It is straightforward to modify these pictures to treat the case of an arbitrary $n\ge 3$. The top left picture shows an almost filling set in $\mathfrak X_1$ that uniquely determines the curve in $\mathfrak X_1'$ shown on the top right. Then the lower left is an almost filling set in $\mathfrak X_1'$ that uniquely determines a curve in $(\mathfrak X_1')' = \mathfrak X_1^2$. This curve is precisely $T_\beta(\alpha_i) = T_{\alpha_i}^{-1}(\beta)$. Similarly, $T_{\alpha_i}(\beta) = T_\beta^{-1}(\alpha_i) \in \mathfrak X_1^2$. Finally, we easily observe that $\mathfrak X_2^2 \cap T_{\alpha_i}(\mathfrak X_2^2)$ is weakly rigid, as it contains $\mathcal{CC} \cup H_{\epsilon^{i-1 \, i}}(\alpha_{i-1})$, which is weakly rigid. Appealing to Lemma~\ref{L:glue}, it follows that $\mathfrak X_2^2 \cup T_{\alpha_i}(\mathfrak X_2^2)$ is rigid. The other cases follow similarly. \end{proof} \begin{figure} \caption{Illustrating $T_\beta(\alpha_2) \in \mathfrak X_2^2$ on $S_{1,5}$.} \label{F:torustwist} \end{figure} Finally, we prove our main result for surfaces of genus 1. \begin{proof}[Proof of Theorem \ref{main} for $S=S_{1,n}$] Since $\mathfrak X_2$ is rigid, by Proposition \ref{p:prime}, the set $Y=\mathfrak X_2^2$ is rigid.
Moreover, $\mathrm{Mod}(S_{1,n}) \cdot Y = \mathcal C(S_{1,n})$, by inspection. A generating set for $\mathrm{Mod}(S_{1,n})$ is given by the Dehn twists $f_\alpha$ about the elements $\alpha \in \mathcal{CC}$ and the half-twists $f_\epsilon$ about the elements $\epsilon \in A= \{\epsilon^{i \, i+1}\}$ (see Section 4.4 of \cite{Farb-Margalit}, for instance). Let $G= \mathcal{CC} \cup A$ and note that, for each $\alpha \in G$, the set $Y \cup f_\alpha(Y)$ is rigid. Therefore, we may apply Proposition \ref{P:finish} to obtain the desired exhaustion of $\mathcal C(S_{1,n})$ by finite rigid sets. \end{proof} \end{document}
\begin{document} \title{Connections between discriminants and the root distribution of polynomials with rational generating function} \author{Khang Tran\\ Department of Mathematics and Computer Science\\ Truman State University} \maketitle \begin{abstract} Let $H_{m}(z)$ be a sequence of polynomials whose generating function $\sum_{m=0}^{\infty}H_{m}(z)t^{m}$ is the reciprocal of a bivariate polynomial $D(t,z)$. We show that in the three cases $D(t,z)=1+B(z)t+A(z)t^{2}$, $D(t,z)=1+B(z)t+A(z)t^{3}$ and $D(t,z)=1+B(z)t+A(z)t^{4}$, where $A(z)$ and $B(z)$ are any polynomials in $z$ with complex coefficients, the roots of $H_{m}(z)$ lie on a portion of a real algebraic curve whose equation is explicitly given. The proofs involve the $q$-analogue of the discriminant, a concept introduced by Mourad Ismail. \end{abstract} \footnote{The author acknowledges support from NSF grant DMS-0838434 \textquotedblleft{}EMSW21MCTP: Research Experience for Graduate Students\textquotedblright{} from the University of Illinois at Urbana-Champaign. } \section{Introduction} In this paper we study the root distribution of a sequence of polynomials satisfying one of the following three-term recurrences: \begin{eqnarray*} H_{m}(z)+B(z)H_{m-1}(z)+A(z)H_{m-2}(z) & = & 0,\\ H_{m}(z)+B(z)H_{m-1}(z)+A(z)H_{m-3}(z) & = & 0,\\ H_{m}(z)+B(z)H_{m-1}(z)+A(z)H_{m-4}(z) & = & 0, \end{eqnarray*} with certain initial conditions and $A(z),B(z)$ polynomials in $z$ with complex coefficients. For the study of the root distribution of other sequences of polynomials that satisfy three-term recurrences, see \cite{cc} and \cite{hs}. In particular, we choose the initial conditions so that the generating function is \[ \sum_{m=0}^{\infty}H_{m}(z)t^{m}=\frac{1}{D(t,z)} \] where $D(t,z)=1+B(z)t+A(z)t^{2}$, $D(t,z)=1+B(z)t+A(z)t^{3}$, or $D(t,z)=1+B(z)t+A(z)t^{4}$. We notice that the root distribution of $H_{m}(z)$ will be the same if we replace $1$ in the numerator by any monomial $N(t,z)$.
If $N(t,z)$ is not a monomial, the root distribution will be different. The quadratic case $D(t,z)=1+B(z)t+A(z)t^{2}$ is not difficult and it is also mentioned in \cite{tz}. We present this case in Section 2 because it gives some directions to our main cases, the cubic and quartic denominators $D(t,z)$, in Sections 3 and 4. Our approach uses the concept of $q$-analogue of the discriminant ($q$-discriminant) introduced by Ismail \cite{ismail}. The $q$-discriminant of a polynomial $P_{n}(x)$ of degree $n$ and leading coefficient $p$ is \begin{equation} \mathrm{Disc}_{x}(P;q)=p^{2n-2}q^{n(n-1)/2}\prod_{1\le i<j\le n}(q^{-1/2}x_{i}-q^{1/2}x_{j})(q^{1/2}x_{i}-q^{-1/2}x_{j})\label{eq:qdisc} \end{equation} where $x_{i}$, $1\le i\le n,$ are the roots of $P_{n}(x)$. This $q$-discriminant is $0$ if and only if a quotient of roots $x_{i}/x_{j}$ equals $q$. As $q\rightarrow1$, this $q$-discriminant becomes the ordinary discriminant which is denoted by $\mathrm{Disc}_{x}P(x)$. For the study of resultants and ordinary discriminants and their various formulas, see \cite{aar}, \cite{apostal}, \cite{dilcherstolarsky}, and \cite{gisheismail}. We will see that the concept of $q$-discriminant is useful in proving connections between the root distribution of a sequence of polynomials $H_{m}(z)$ and the discriminant of the denominator of its generating function $\mathrm{Disc}_{t}D(t,z)$. We will show in the three cases mentioned above that the roots of $H_{m}(z)$ lie on a portion of a real algebraic curve (see Theorem \ref{quadratic}, Theorem \ref{cubic}, and Theorem \ref{quartic}). For the study of sequences of polynomials whose roots approach fixed curves, see \cite{boyergoh,boyergoh-1,boyergoh-2}. Other studies of the limits of zeros of polynomials satisfying a linear homogeneous recursion whose coefficients are polynomials in $z$ are given in \cite{bkw,bkw-1}. The $q$-discriminant will appear as the quotient $q$ of roots in $t$ of $D(t,z)$. 
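As a concrete, hedged illustration of \eqref{eq:qdisc} (our own sketch, not the paper's code): for a quadratic denominator the product formula can be evaluated directly from the roots, and one can check both that it vanishes when $q$ is a quotient of two roots and that at $q=1$ it reduces to the ordinary discriminant $B^{2}(z)-4A(z)$. The test point $z$ and the value $a$ below are arbitrary.

```python
import cmath

# Our own sketch (not the paper's code): evaluate the q-discriminant directly
# from the product formula, for P(x) = p * prod_i (x - x_i).
def q_disc(p, roots, q):
    n = len(roots)
    val = p**(2*n - 2) * q**(n*(n - 1)//2)
    r = cmath.sqrt(q)
    for i in range(n):
        for j in range(i + 1, n):
            val *= (roots[i]/r - r*roots[j]) * (r*roots[i] - roots[j]/r)
    return val

# Quadratic example D(t,z) = A(z) t^2 + B(z) t + 1 at an arbitrary test point.
z, a = 1.3 - 0.4j, 1.5
A, B = z**2, z**2 - 2*z + a
t1 = (-B + cmath.sqrt(B**2 - 4*A)) / (2*A)
t2 = (-B - cmath.sqrt(B**2 - 4*A)) / (2*A)

# The q-discriminant vanishes exactly when q is a quotient of two roots,
assert abs(q_disc(A, [t1, t2], t1/t2)) < 1e-8
# and at q = 1 it reduces to the ordinary discriminant B^2 - 4A.
assert abs(q_disc(A, [t1, t2], 1 + 0j) - (B**2 - 4*A)) < 1e-8
```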
One advantage of looking at the quotients of roots is that, at least in the three cases above, although the roots of $H_{m}(z)$ lie on a curve depending on $A(z)$ and $B(z)$, the quotients of roots $t=t(z)$ of $D(t,z)$ lie on a fixed curve independent of these two polynomials. We will show that this independent curve is the unit circle in the quadratic case and two peculiar curves (see Figures 1 and 2 in Sections 3 and 4) in the cubic and quartic cases. From computer experiments, this curve looks more complicated in the quintic case $D(t,z)=1+B(z)t+A(z)t^{5}$ (see Figure 3 in Section 4). As an application of these theorems, we will consider an example where $D(t,z)=1+(z^{2}-2z+a)t+z^{2}t^{2}$ and $a\in\mathbb{R}$. We will see that the roots of $H_{m}(z)$ lie either on portions of the circle of radius $\sqrt{a}$ or on real intervals, depending on how $a$ compares to the critical values $0$ and $4$. Also, the endpoints of the curves where the roots of $H_{m}(z)$ lie are roots of $\mathrm{Disc}_{t}D(t,z)$. Interestingly, the critical values $0$ and $4$ are roots of the double discriminant $\mathrm{Disc}_{z}\mathrm{Disc}_{t}D(t,z)=4096a^{3}(a-4).$ \section{The quadratic denominator} In this section, we will consider the root distribution of $H_{m}(z)$ when the denominator of the generating function is $D(t,z)=1+B(z)t+A(z)t^{2}$. \begin{theorem}\label{quadratic} Let $H_{m}(z)$ be a sequence of polynomials whose generating function is \[ \sum H_{m}(z)t^{m}=\frac{1}{1+B(z)t+A(z)t^{2}} \] where $A(z)$ and $B(z)$ are polynomials in $z$ with complex coefficients. The roots of $H_{m}(z)$ which satisfy $A(z)\ne0$ lie on the curve $\mathcal{C}_{2}$ defined by \[ \Im\frac{B^{2}(z)}{A(z)}=0\qquad\mbox{and}\qquad0\le\Re\frac{B^{2}(z)}{A(z)}\le4, \] and are dense there as $m\rightarrow\infty$. \end{theorem} \begin{proof} Suppose $z_{0}$ is a root of $H_{m}(z)$ which satisfies $A(z_{0})\ne0$. Let $t_{1}=t_{1}(z_{0})$ and $t_{2}=t_{2}(z_{0})$ be the roots of $D(t,z_{0})$.
If $t_{1}=t_{2}$ then $\mathrm{Disc}_{t}D(t,z_{0})=B^{2}(z_{0})-4A(z_{0})=0$. In this case $z_{0}$ belongs to $\mathcal{C}_{2}$, and we only need to consider the case $t_{1}\ne t_{2}$. By partial fractions, we have \begin{eqnarray} \frac{1}{D(t,z_{0})} & = & \frac{1}{A(z_{0})(t-t_{1})(t-t_{2})}\nonumber \\ & = & \frac{1}{A(z_{0})(t_{1}-t_{2})}\left(\frac{1}{t-t_{1}}-\frac{1}{t-t_{2}}\right)\nonumber \\ & = & \frac{1}{A(z_{0})}\sum_{m=0}^{\infty}\frac{t_{1}^{m+1}-t_{2}^{m+1}}{(t_{1}-t_{2})t_{1}^{m+1}t_{2}^{m+1}}t^{m}.\label{eq:quadraticT} \end{eqnarray} Thus if we let $t_{1}=qt_{2}$ then $q$ is an $(m+1)$-st root of unity and $q\ne1$. By the definition of $q$-discriminant in \eqref{eq:qdisc}, $q$ is a root of $\mathrm{Disc}_{t}(D(t,z_{0});q)$ which equals \[ q\left(B^{2}(z_{0})-(q+q^{-1}+2)A(z_{0})\right). \] This implies that \[ \frac{B^{2}(z_{0})}{A(z_{0})}=q+q^{-1}+2. \] Thus $z_{0}\in\mathcal{C}_{2}$ since $q$ is an $(m+1)$-st root of unity. The map $z\mapsto B^{2}(z)/A(z)$ maps an open neighborhood $U$ of a point on $\mathcal{C}_{2}$ onto an open set which contains a point $2\Re q+2$, where $q$ is an $(m+1)$-st root of unity, when $m$ is large. From \eqref{eq:quadraticT}, there is a root of $H_{m}(z)$ in $U$. The density of the roots of $H_{m}(z)$ follows. \end{proof} \begin{example} We consider an example in which the generating function of $H_{m}(z)$ is given by \[ \frac{1}{z^{2}t^{2}+(z^{2}-2z+a)t+1}=\sum_{m=0}^{\infty}H_{m}(z)t^{m} \] where $a\in\mathbb{R}$. Let $z=x+iy$. We exhibit the three possible cases for the root distribution of $H_{m}(z)$ depending on $a$: \begin{enumerate} \item If $a\le0$, the roots of $H_{m}(z)$ lie on the two real intervals defined by \[ (x^{2}+a)(x^{2}-4x+a)\le0. \] \item If $0<a\le4$, the roots of $H_{m}(z)$ can lie either on the half circle $x^{2}+y^{2}=a$, $x\ge0$, or on the real interval defined by $x^{2}-4x+a\le0$.
\item If $a>4$, the roots of $H_{m}(z)$ lie on two parts of the circle $x^{2}+y^{2}=a$ restricted by $0\le x\le2$. \end{enumerate} Indeed, by complex expansion, we have \[ \Im\frac{B^{2}(z)}{A(z)}=\frac{2y(x^{2}+y^{2}-a)P}{(x^{2}+y^{2})^{2}}\qquad\mbox{and}\qquad\mbox{\ensuremath{\Re}}\frac{B^{2}(z)}{A(z)}=\frac{P^{2}-Q^{2}}{(x^{2}+y^{2})^{2}}, \] where \[ P=ax-2x^{2}+x^{3}-2y^{2}+xy^{2}\qquad\mbox{and}\qquad Q=y(x^{2}+y^{2}-a). \] Theorem \ref{quadratic} yields three cases: $y=0$, $x^{2}+y^{2}-a=0$ or $P=0$. Since $\Re\left(B^{2}(z)/A(z)\right)\ge0$, all these cases give $Q=0$. We note that if $x^{2}+y^{2}-a=0$ then the condition $\Re\left(B^{2}(z)/A(z)\right)\le4$ reduces to \begin{equation} x(a+x^{2}+y^{2})(ax-4x^{2}+x^{3}-4y^{2}+xy^{2})=4a^{2}x(x-2)\le0.\label{eq:circle} \end{equation} Suppose $a\le0$. Then the condition $Q=0$ implies that the roots of $H_{m}(z)$ are real. The condition $\Re\left(B^{2}(z)/A(z)\right)\le4$ becomes \begin{equation} (x^{3}-2x^{2}+ax)^{2}-4x^{4}=x^{2}(x^{2}+a)(x^{2}-4x+a)\le0.\label{eq:realline} \end{equation} Suppose $0<a\le4$. The roots of $H_{m}(z)$ lie either on the half circle $x^{2}+y^{2}-a=0$, $x\ge0$ (from the inequality \eqref{eq:circle}), or on the real interval given by $x^{2}-4x+a\le0$ (from the inequality \eqref{eq:realline}). If $a>4$ then the roots of $H_{m}(z)$ lie on the two parts of the circle $x^{2}+y^{2}-a=0$ restricted by $0\le x\le2$ (from the inequality \eqref{eq:circle}). We notice that in this example, the inequality $\Re\left(B^{2}(z)/A(z)\right)\le4$ gives the endpoints of the curves where the roots of $H_{m}(z)$ lie. Thus, these endpoints are roots of $\mathrm{Disc}_{t}(1+B(z)t+A(z)t^{2})=B^{2}(z)-4A(z)$. Moreover the critical values of $a$, which are $0$ and $4$, are roots of the double discriminant of the denominator \[ \mathrm{Disc}_{z}\mathrm{Disc}_{t}(1+(z^{2}-2z+a)t+z^{2}t^{2})=4096a^{3}(a-4). 
\] This comes from the fact that the endpoints of the fixed curves containing the roots of $H_{m}(z)$ are the roots of $\mathrm{Disc}_{t}(1+(z^{2}-2z+a)t+z^{2}t^{2})$. When this discriminant has a double root as a polynomial in $z$, some two endpoints of the fixed curves coincide. That explains the change in the shape of the root distribution. \end{example} \section{The cubic denominator} In this section we show that in the cubic case $D(t,z)=1+B(z)t+A(z)t^{3}$, the roots of $H_{m}(z)$ lie on a portion of a real algebraic curve. As we see in the proof of Theorem \ref{quadratic}, we can first consider the distribution of the quotients of roots $q=t_{i}/t_{j}$ of $D(t,z)$ and then relate it to the root distribution of $H_{m}(z)$ using the $q$-discriminant. While in the previous section this quotient lies on the unit circle, in this section we show that it lies on the curve in Figure 1. \begin{lemma}\label{cubicqlemma} Suppose $\zeta_{1},\zeta_{2}\ne0$ are complex numbers such that $1/\zeta_{1}+1/\zeta_{2}+1=0$ and \begin{equation} \frac{\zeta_{1}^{m+1}-1}{\zeta_{1}-1}=\frac{\zeta_{2}^{m+1}-1}{\zeta_{2}-1}.\label{eq:cubic-q} \end{equation} Then $\zeta_{1}$ and $\zeta_{2}$ lie on the union $C_{1}\cup C_{2}\cup C_{3}$, where the Cartesian equations of $C_{1}$, $C_{2}$ and $C_{3}$ are given by \begin{eqnarray*} C_{1} & : & (x+1)^{2}+y^{2}=1,x\le-\frac{1}{2},\\ C_{2} & : & x=-\frac{1}{2},-\frac{\sqrt{3}}{2}\le y\le\frac{\sqrt{3}}{2},\\ C_{3} & : & x^{2}+y^{2}=1,x\ge-\frac{1}{2}, \end{eqnarray*} and the set of such points is dense there as $m\rightarrow\infty$. \end{lemma} \begin{figure} \caption{Distribution of the quotients of the roots of the cubic denominator} \label{quotientcubicfig} \end{figure} \begin{proof} We can rewrite \eqref{eq:cubic-q} as \[ \sum_{k=0}^{m}\zeta_{1}^{k}=\sum_{k=0}^{m}\zeta_{2}^{k} \] where we can replace $\zeta_{2}$ by $-\zeta_{1}/(\zeta_{1}+1)$.
By multiplying both sides by $(\zeta_{1}+1)^{m}$, we note that there are at most $2m-2$ solutions $\zeta=\zeta_{1}\ne0,-2$ counting multiplicity. Let $m=3n+k$ where $k=1,2,3$. From implicit differentiation, we can check that the equation \eqref{eq:cubic-q} has roots at $e^{2\pi i/3},e^{4\pi i/3}$ with multiplicity $k-1$. After subtracting this number of roots from $2m-2$, we conclude that there are at most $6n$ roots $\zeta\ne0,-2,e^{2\pi i/3},e^{4\pi i/3}$. We first show that if $\zeta\ne-2$ is a root, then so is $-\zeta-1$. From the two equations in the hypothesis, we note that $\zeta\ne0,-1$ and \[ \sum_{k=0}^{m}\zeta^{k}=\sum_{k=0}^{m}\left(-\frac{\zeta}{\zeta+1}\right)^{k}. \] Subtracting $1$, then dividing by $\zeta$ and multiplying both sides by $(\zeta+1)^{m}$ , we obtain \begin{eqnarray*} 0 & = & \sum_{k=0}^{m-1}\zeta^{k}(\zeta+1)^{m}+\sum_{k=0}^{m-1}(\zeta+1)^{m-k-1}(-\zeta)^{k}\\ & = & \sum_{k=0}^{m-1}\zeta^{k}(\zeta+1)^{m-k-1}\left((\zeta+1)^{k+1}-(-1)^{k+1}\right)\\ & = & (\zeta+2)\sum_{k=0}^{m-1}\zeta^{k}(\zeta+1)^{m-k-1}\sum_{i=0}^{k}(\zeta+1)^{k-i}(-1)^{i}\\ & = & (\zeta+2)\sum_{k=0}^{m-1}\sum_{i=0}^{k}\zeta^{k}(-\zeta-1)^{m-1-i}. \end{eqnarray*} By interchanging the summation and reversing the index of summation we obtain \begin{eqnarray*} \sum_{k=0}^{m-1}\sum_{i=0}^{k}\zeta^{k}(-\zeta-1)^{m-1-i} & = & \sum_{i=0}^{m-1}\sum_{k=i}^{m-1}\zeta^{k}(-\zeta-1)^{m-1-i}\\ & = & \sum_{i=0}^{m-1}\sum_{k=0}^{i}\zeta^{m-1-k}(-\zeta-1)^{i}. \end{eqnarray*} Hence we have symmetry between $\zeta$ and $-1-\zeta$ in the two double summations. Our goal is to show that the number of roots $\zeta\ne0,-2,e^{2\pi i/3},e^{4\pi i/3}$ on $C_{1}\cup C_{2}\cup C_{3}$ is at least $6n$, counting multiplicities. Then all roots will lie on $C_{1}\cup C_{2}\cup C_{3}$ since we have at most $6n$ roots $\zeta\ne0,-2,e^{2\pi i/3},e^{4\pi i/3}$. 
By the symmetry of roots mentioned above, if $\zeta\ne-2$ is a solution in $C_{1}$ then $(-1-1/\zeta,-\zeta-1)$ is a solution in $C_{2}\times C_{3}$. Hence there is a bijection between roots in $C_{1}\backslash\{-2\}$, $C_{2}$ and $C_{3}$. Thus if $C_{1}\backslash\{e^{2\pi i/3},e^{4\pi i/3}\}$ contains at least $2n+1$ roots then all of the roots lie on $C_{1}\cup C_{2}\cup C_{3}$. Let $\zeta=\zeta_{1}$ be a root on $C_{1}\backslash\{e^{2\pi i/3},e^{4\pi i/3}\}$. Then the equation $1/\zeta_{1}+1/\zeta_{2}+1=0$ gives $\zeta_{2}=\bar{\zeta}$. Thus \eqref{eq:cubic-q} gives \[ \Im\frac{\zeta^{m+1}-1}{\zeta-1}=0. \] Write $\zeta=re^{i\theta}$ where $r=-2\cos\theta$, $\cos\theta\le-1/2$. Then complex expansion yields \[ r^{m+2}\sin m\theta-r^{m+1}\sin(m+1)\theta+r\sin\theta=0. \] Divide by $r$, replace $r$ by $-2\cos\theta$ and combine the first two terms to obtain \begin{eqnarray*} 0 & = & (-1)^{m+1}2^{m}\cos^{m}\theta\left(2\sin m\theta\cos\theta+\sin(m+1)\theta\right)+\sin\theta\\ & = & (-1)^{m+1}2^{m}\cos^{m}\theta\left(2\sin(m+1)\theta-2\cos m\theta\sin\theta+\sin(m+1)\theta\right)+\sin\theta\\ & = & (-1)^{m+1}2^{m}\cos^{m}\theta\left(2\sin(m+1)\theta+\sin m\theta\cos\theta-\cos m\theta\sin\theta\right)+\sin\theta\\ & = & (-1)^{m+1}2^{m}\cos^{m}\theta\left(2\sin(m+1)\theta+\sin(m-1)\theta\right)+\sin\theta. \end{eqnarray*} We note that the right side has different signs if $\sin(m+1)\theta=1$ and $\sin(m+1)\theta=-1$. Thus we can apply the Intermediate Value Theorem on several intervals whose boundaries are the solutions of $\sin(m+1)\theta=\pm1$. The equations $\sin(m+1)\theta=\pm1$ give \[ (m+1)\theta=\pm\frac{\pi}{2}+2j\pi. \] The condition $2\pi/3<\theta<4\pi/3$ and the fact that $m=3n+k$, $k=1,2,3$, yield \[ n+\frac{k+1}{3}\pm\frac{1}{4}<j<2n+\frac{2(k+1)}{3}\pm\frac{1}{4}. \] If $k=1$, we have at least $2n+1$ roots coming from $2n+1$ intervals formed by the $2n+2$ points \[ \frac{2j\pi\pm\pi/2}{m+1}, \] where $n<j\le2n+1$.
If $k=2$, we have at least $2n+1$ roots coming from $2n+1$ intervals formed by the $2n+2$ points \[ \left\{ \frac{2j\pi-\pi/2}{m+1}:n+1\le j<2n+2\right\} \cup\left\{ \frac{2j\pi+\pi/2}{m+1}:n+1<j\le2n+2\right\} . \] If $k=3$, we have at least $2n+1$ roots coming from $2n+1$ intervals formed by the $2n+2$ points \[ \frac{2j\pi\pm\pi/2}{m+1}, \] where $n+1<j<2n+2$. The density follows from the distribution of $2n+1$ roots mentioned above. The lemma follows. \end{proof} \begin{theorem}\label{cubic} Let $H_{m}(z)$ be a sequence of polynomials whose generating function is \[ \sum H_{m}(z)t^{m}=\frac{1}{1+B(z)t+A(z)t^{3}} \] where $A(z)$ and $B(z)$ are polynomials in $z$ with complex coefficients. The roots of $H_{m}(z)$ which satisfy $A(z)\ne0$ lie on the curve $\mathcal{C}_{3}$ defined by \[ \Im\frac{B^{3}(z)}{A(z)}=0\qquad\mbox{and}\qquad0\le-\Re\frac{B^{3}(z)}{A(z)}\le\frac{3^{3}}{2^{2}}, \] and are dense there as $m\rightarrow\infty$. \end{theorem} \begin{proof} For a little simplification, we consider the roots of $H_{m-1}(z)$. Let $z_{0}$ be a root of $H_{m-1}(z)$ which satisfies $A(z_{0})\ne0$. Let $t_{1}=t_{1}(z_{0})$, $t_{2}=t_{2}(z_{0})$ and $t_{3}=t_{3}(z_{0})$ be the roots of $D(t,z_{0})=1+B(z_{0})t+A(z_{0})t^{3}$. It suffices to consider $\mathrm{Disc}_{t}(D(t,z_{0}))=-4A(z_{0})B^{3}(z_{0})-27A^{2}(z_{0})\ne0$. By partial fractions, the function $1/D(t,z_{0})$ is \[ \frac{1}{A(z_{0})(t_{1}-t_{2})(t_{1}-t_{3})(t-t_{1})}+\frac{1}{A(z_{0})(t_{2}-t_{1})(t_{2}-t_{3})(t-t_{2})}+\frac{1}{A(z_{0})(t_{3}-t_{1})(t_{3}-t_{2})(t-t_{3})}. \] We expand $1/(t-t_{i})$ using geometric series and write the expression above as \[ \sum_{m=1}^{\infty}\frac{t_{1}^{m+1}t_{2}^{m}-t_{1}^{m}t_{2}^{m+1}-t_{1}^{m+1}t_{3}^{m}+t_{2}^{m+1}t_{3}^{m}+t_{1}^{m}t_{3}^{m+1}-t_{2}^{m}t_{3}^{m+1}}{A(z_{0})t_{1}^{m}t_{2}^{m}t_{3}^{m}(t_{1}-t_{2})(t_{1}-t_{3})(t_{2}-t_{3})}t^{m-1}.
\] Since $z_{0}$ is a root of $H_{m-1}(z)$, we have \[ t_{1}^{m+1}t_{2}^{m}-t_{1}^{m}t_{2}^{m+1}-t_{1}^{m+1}t_{3}^{m}+t_{2}^{m+1}t_{3}^{m}+t_{1}^{m}t_{3}^{m+1}-t_{2}^{m}t_{3}^{m+1}=0. \] We divide this equation by $t_{3}^{2m+1}$ and let $q_{1}=t_{1}/t_{3}$, $q_{2}=t_{2}/t_{3}$ to obtain \[ q_{1}^{m+1}q_{2}^{m}-q_{1}^{m}q_{2}^{m+1}-q_{1}^{m+1}+q_{2}^{m+1}+q_{1}^{m}-q_{2}^{m}=0 \] where $q_{1}+q_{2}+1=0$ since $t_{1}+t_{2}+t_{3}=0$. The equation can be written as \[ q_{1}^{m}q_{2}^{m}(q_{1}-q_{2})-q_{1}^{m}(q_{1}-1)+q_{2}^{m}(q_{2}-1)=0. \] Since $q_{1}^{m}q_{2}^{m}(q_{1}-q_{2})=q_{1}^{m}q_{2}^{m}(q_{1}-1)-q_{1}^{m}q_{2}^{m}(q_{2}-1)$ and $q_{1},q_{2}\ne0,1$, this equation becomes \[ \frac{q_{1}^{m}-1}{q_{1}^{m}(q_{1}-1)}=\frac{q_{2}^{m}-1}{q_{2}^{m}(q_{2}-1)}. \] Let $\zeta_{1}=1/q_{1}$ and $\zeta_{2}=1/q_{2}$ and add 1 to both sides. Then \[ \frac{\zeta_{1}^{m+1}-1}{\zeta_{1}-1}=\frac{\zeta_{2}^{m+1}-1}{\zeta_{2}-1}. \] Thus $\zeta_{1}$ and $\zeta_{2}$ (and also $q_{1}$ and $q_{2}$) lie on the curve given in Lemma \ref{cubicqlemma}. Since $q_{1}$ and $q_{2}$ are given by quotients of two roots, they are roots of the $q$-discriminant given by \[ \mbox{Disc}_{t}(D(t,z_{0});q)=-B^{3}(z_{0})A(z_{0})q^{2}(1+q)^{2}-A^{2}(z_{0})(1+q+q^{2})^{3}. \] This gives \[ \frac{B^{3}(z_{0})}{A(z_{0})}=-\frac{(1+q+q^{2})^{3}}{q^{2}(1+q)^{2}}. \] It remains to show that the map \[ f(q)=-\frac{(1+q+q^{2})^{3}}{q^{2}(1+q)^{2}} \] maps the curve in Figure \ref{quotientcubicfig} to the real interval $[-27/4,0]$. Let $q$ be a point on this curve. We note that \[ f(q)=f(-1-q)=-\frac{(q^{-1}+1+q)^{3}}{q^{-1}+2+q}. \] Since $q$ lies on the curve in Figure \ref{quotientcubicfig}, we have the three possible cases $\bar{q}=-1-q$, $|q|=1$ or $|-1-q|=1$. In the first case, $\Im f(q)=0$ since $f(q)=\overline{f(q)}$. In the second and third cases, $\Im f(q)=0$ since $q+q^{-1}\in\mathbb{R}$ and $f(q)=f(-1-q)$.
Furthermore, $f(q)$ attains its minimum and maximum when $q=1$ and $q=e^{2\pi i/3}$ respectively. The density of the roots of $H_{m}(z)$ follows from arguments similar to those in the proof of Theorem \ref{quadratic}. \end{proof} \section{The quartic denominator} In this section, we will show that in the case $D(t,z)=1+B(z)t+A(z)t^{4}$ the roots of $H_{m}(z)$ lie on a portion of a real algebraic curve. Similar to the approach in the previous sections, we first consider the distribution of the quotients of roots of $D(t,z)$. Before looking at these quotients, let us recall that the Chebyshev polynomial of the second kind $U_{m}(z)$ is \[ U_{m}(z)=\frac{\sin(m+1)\theta}{\sin\theta} \] where \[ z=\cos\theta. \] Suppose $z_{1},z_{2}\in\mathbb{C}$ such that $|z_{1}|=|z_{2}|$. Let $e^{2i\theta}=z_{1}/z_{2}$ and $z=\cos\theta$. If $k$ is a positive integer then \begin{eqnarray} \frac{z_{1}^{k}-z_{2}^{k}}{z_{1}-z_{2}} & = & (z_{1}z_{2})^{(k-1)/2}\frac{(z_{1}/z_{2})^{k/2}-(z_{2}/z_{1})^{k/2}}{(z_{1}/z_{2})^{1/2}-(z_{2}/z_{1})^{1/2}}\nonumber \\ & = & (z_{1}z_{2})^{(k-1)/2}U_{k-1}(z).\label{eq:ChebyshevU} \end{eqnarray} By analytic continuation, we can extend this identity to any pair of complex numbers $z_{1}$ and $z_{2}$ with \[ 2z=\left(\frac{z_{1}}{z_{2}}\right)^{1/2}+\left(\frac{z_{2}}{z_{1}}\right)^{1/2}. \] \begin{lemma}\label{quarticqlemma} Suppose $z_{0}$ is a root of $H_{m}(z)$ and $q=q(z_{0})$ is a quotient of two roots in $t$ of $1+B(z_{0})t+A(z_{0})t^{4}$. Then the set of all such quotients belongs to the curve depicted in Figure 2, where the Cartesian equation of the quartic curve on the left is \[ 1+2x+2x^{2}+2x^{3}+x^{4}-2y^{2}+2xy^{2}+2x^{2}y^{2}+y^{4}=0, \] and the curve on the right is the unit circle with real part at least $-1/3$. All such quotients are dense on this curve as $m\rightarrow\infty$.
\end{lemma} \begin{figure} \caption{Distribution of the quotients of the roots of the quartic denominator} \label{quotientquarticfig} \end{figure} \begin{proof} For each $z_{0}\in\mathbb{C}$, let $t_{1}=t_{1}(z_{0})$, $t_{2}=t_{2}(z_{0})$, $t_{3}=t_{3}(z_{0})$, and $t_{4}=t_{4}(z_{0})$ be the roots of the denominator $1+B(z_{0})t+A(z_{0})t^{4}$. By partial fractions, we have \begin{eqnarray*} \frac{1}{1+B(z_{0})t+A(z_{0})t^{4}} & = & \frac{1}{A(z_{0})(t-t_{1})(t-t_{2})(t-t_{3})(t-t_{4})}\\ & = & \sum_{m=0}^{\infty}H_{m}(z_{0})t^{m}, \end{eqnarray*} where \begin{eqnarray*} A(z_{0})H_{m}(z_{0}) & = & \frac{1}{t_{1}^{m+1}(t_{1}-t_{2})(t_{1}-t_{3})(t_{1}-t_{4})}+\frac{1}{t_{2}^{m+1}(t_{2}-t_{1})(t_{2}-t_{3})(t_{2}-t_{4})}\\ & & +\frac{1}{t_{3}^{m+1}(t_{3}-t_{1})(t_{3}-t_{2})(t_{3}-t_{4})}+\frac{1}{t_{4}^{m+1}(t_{4}-t_{1})(t_{4}-t_{2})(t_{4}-t_{3})}. \end{eqnarray*} Let $q_{1}=t_{1}/t_{4}$, $q_{2}=t_{2}/t_{4}$, $q_{3}=t_{3}/t_{4}$. For a little reduction in the powers of $q_{i}$, $1\le i\le3$, we will consider the roots of the polynomial $H_{m-2}(z)$. We put all terms of $A(z_{0})H_{m-2}(z_{0})$ over a common denominator and then divide the numerator by $t_{4}^{3m}$. The condition $H_{m-2}(z_{0})=0$ implies \begin{eqnarray} 0 & = & q_{1}^{m+1}(-q_{2}^{m-1}q_{3}^{m-1}(q_{2}-q_{3})+q_{2}^{m}-q_{3}^{m}-q_{2}^{m-1}+q_{3}^{m-1})\nonumber \\ & & +q_{1}^{m}(q_{2}^{m-1}q_{3}^{m-1}(q_{2}^{2}-q_{3}^{2})-q_{2}^{m+1}+q_{3}^{m+1}+q_{2}^{m-1}-q_{3}^{m-1})\nonumber \\ & & +q_{1}^{m-1}(-q_{2}^{m}q_{3}^{m}(q_{2}-q_{3})+q_{2}^{m+1}-q_{3}^{m+1}-q_{2}^{m}+q_{3}^{m})\nonumber \\ & & +q_{2}^{m-1}q_{3}^{m-1}(q_{2}-q_{3})-q_{2}^{m-1}q_{3}^{m-1}(q_{2}^{2}-q_{3}^{2})+q_{2}^{m}q_{3}^{m}(q_{2}-q_{3}).\label{eq:quartic-q} \end{eqnarray} The fact \begin{eqnarray*} t_{1}+t_{2}+t_{3}+t_{4} & = & 0\\ t_{1}t_{2}+t_{1}t_{3}+t_{1}t_{4}+t_{2}t_{3}+t_{2}t_{4}+t_{3}t_{4} & = & 0 \end{eqnarray*} gives \[ q_{2}+q_{3}=-1-q_{1}\qquad\mbox{and}\qquad q_{2}q_{3}=q_{1}^{2}+q_{1}+1.
\] From the symmetric reductions, the right side of \eqref{eq:quartic-q}, after being divided by $q_{2}-q_{3}$, is a polynomial in $q_{1}$ of degree $3m-1$. We used a computer algebra system to check the root distribution of this polynomial in the case $m\le5$. We now assume that $m\ge6$. We will show that the number of roots $q_{1}$ lying on the two curves in Figure \ref{quotientquarticfig} is at least $3m-1$. The first step is to show that if the set of $q_{1}$ belongs to the unit circle with $\Re q_{1}\ge-1/3$ and is dense there as $m\rightarrow\infty$ then the set of $q_{2}$ and $q_{3}$ belongs to the quartic curve given in the lemma and is dense on this quartic curve as $m\rightarrow\infty$. Then we will find the number of roots $q_{1}$ on the unit circle with $\Re q_{1}\ge-1/3$. Suppose $q_{1}=e^{i\theta}$ lies on the unit circle and $1\ge\cos\theta\ge-1/3$. We note that $q_{2}$ and $q_{3}$ are the two roots of the equation \[ f(q):=q^{2}+(1+q_{1})q+q_{1}^{2}+q_{1}+1=0. \] Thus the quadratic formula gives \[ q=\frac{-1-e^{i\theta}\pm ie^{i\theta/2}\sqrt{6\cos\theta+2}}{2}. \] Splitting this expression into real and imaginary parts, we leave it to the reader to check that it maps the interval $1\ge\cos\theta\ge-1/3$ to the quartic curve \[ 1+2x+2x^{2}+2x^{3}+x^{4}-2y^{2}+2xy^{2}+2x^{2}y^{2}+y^{4}=0. \] We now compute the number of roots $q_{1}=e^{i\theta}$ with $\cos\theta\ge-1/3$. We first consider $q_{1}\ne\pm i,1$. Let \[ 2\zeta=\left(\frac{q_{2}}{q_{3}}\right)^{1/2}+\left(\frac{q_{3}}{q_{2}}\right)^{1/2}.
\] Equation \eqref{eq:ChebyshevU} gives \begin{eqnarray*} \frac{q_{2}^{m}-q_{3}^{m}}{q_{2}-q_{3}} & = & (q_{2}q_{3})^{(m-1)/2}U_{m-1}\left(\zeta\right) \end{eqnarray*} where \begin{equation} \zeta^{2}=\frac{1}{4}\frac{(q_{2}+q_{3})^{2}}{q_{2}q_{3}}=\frac{(q_{1}+1)^{2}}{4(q_{1}^{2}+q_{1}+1)}=\frac{1}{4(2\cos\theta+1)}+\frac{1}{4}\in\mathbb{R}.\label{eq:zetaform} \end{equation} We divide \eqref{eq:quartic-q} by $q_{2}-q_{3}$ and rewrite it in terms of Chebyshev polynomials: \begin{eqnarray*} 0 & = & U_{m}(\zeta)(-q_{1}^{m}+q_{1}^{m-1})(q_{1}^{2}+q_{1}+1)^{m/2}\\ & & +U_{m-1}(\zeta)(q_{1}^{m+1}-q_{1}^{m-1})(q_{1}^{2}+q_{1}+1)^{(m-1)/2}\\ & & +U_{m-2}(\zeta)(-q_{1}^{m+1}+q_{1}^{m})(q_{1}^{2}+q_{1}+1)^{(m-2)/2}\\ & & +(q_{1}^{2}+q_{1}+1)^{m-1}(-3q_{1}^{m+1}-2q_{1}^{m}-q_{1}^{m-1}+q_{1}^{2}+2q_{1}+3). \end{eqnarray*} We divide this equation by $q_{1}^{(3m-2)/2}(1-q_{1})(q_{1}+q_{1}^{-1}+1)^{m/2}$ and write $(q_{1}^{m}-1)/(q_{1}-1)$ in terms of Chebyshev polynomials. We obtain \begin{eqnarray*} 0 & = & U_{m}(\zeta)+2\zeta U_{m-1}(\zeta)+U_{m-2}(\zeta)/(q_{1}+q_{1}^{-1}+1)\\ & & +(q_{1}+q_{1}^{-1}+1)^{m/2-1}\left(3U_{m}\left(\xi\right)+2U_{m-2}(\xi)+U_{m-4}(\xi)\right), \end{eqnarray*} where \begin{equation} \xi^{2}=\frac{(q_{1}+1)^{2}}{4q_{1}}=\frac{2\cos\theta+2}{4}.\label{eq:xiform} \end{equation} Finally, from \eqref{eq:zetaform} we can replace $1/(q_{1}+q_{1}^{-1}+1)$ by $(4\zeta^{2}-1)$ and use the recurrence definition of the Chebyshev polynomials to rewrite this equation in the symmetric form below: \begin{eqnarray} 0 & = & (4\zeta^{2}-1)^{(m-2)/4}\left(3U_{m}\left(\zeta\right)+2U_{m-2}(\zeta)+U_{m-4}(\zeta)\right)\nonumber \\ & & +(4\xi^{2}-1)^{(m-2)/4}\left(3U_{m}\left(\xi\right)+2U_{m-2}(\xi)+U_{m-4}(\xi)\right).\label{eq:quartic-real} \end{eqnarray} From this symmetric form, the right expression remains the same if we interchange $\zeta$ and $\xi$ or if we interchange $\cos\theta$ and $-\cos\theta/(2\cos\theta+1)$ (from \eqref{eq:zetaform} and 
\eqref{eq:xiform}). Thus the numbers of roots $q_{1}$ are the same in the two cases $0<\cos\theta<1$ and $-1/3<\cos\theta<0$. It is sufficient to count the number of roots $0<\cos\theta<1$ or $1/2<\xi^{2}<1$. Let $\cos\alpha=\xi$ and $U_{m}(\xi)=\sin(m+1)\alpha/\sin\alpha$ where $-\pi/4<\alpha<\pi/4$, $\alpha\ne0$. The idea is to show that in this case the summand \begin{equation} (4\xi^{2}-1)^{(m-2)/4}\left(3U_{m}\left(\xi\right)+2U_{m-2}(\xi)+U_{m-4}(\xi)\right)\label{eq:dominantterm} \end{equation} dominates the right expression of \eqref{eq:quartic-real}. Since $\zeta^{2}$ and $\xi^{2}$ in \eqref{eq:quartic-real} are real numbers and the Chebyshev polynomials in this equation are either even or odd, we can apply the Intermediate Value Theorem. We note that \eqref{eq:dominantterm} has different signs when $\sin(m+1)\alpha=1$ and when $\sin(m+1)\alpha=-1$. Suppose $\sin(m+1)\alpha=\pm1$ and $-\pi/4<\alpha<\pi/4$. Since \[ 4\zeta^{2}-1=\frac{1}{1+2\cos\theta}<1, \] it suffices to show \begin{equation} \left|(4\xi^{2}-1)^{(m-2)/2}\left(3U_{m}\left(\xi\right)+2U_{m-2}(\xi)+U_{m-4}(\xi)\right)\right|\ge\left|3U_{m}\left(\zeta\right)+2U_{m-2}(\zeta)+U_{m-4}(\zeta)\right|.\label{eq:quartic-Chevinq} \end{equation} Let $\zeta=\cos\beta$. Using the fact that $1/3<\zeta^{2}<1/2$ and $U_{m}(\zeta)=\sin(m+1)\beta/\sin\beta$, we obtain the following upper bound for the right hand side of \eqref{eq:quartic-Chevinq}: \begin{eqnarray*} \left|3U_{m}\left(\zeta\right)+2U_{m-2}(\zeta)+U_{m-4}(\zeta)\right| & \le & 6\sqrt{2}. \end{eqnarray*} Since \begin{equation} \alpha=\frac{\pi}{4}\frac{(4k\pm2)}{m+1}\label{eq:alphaform} \end{equation} where $k\in\mathbb{Z}$ and $-\pi/4<\alpha<\pi/4$, we have \[ |\alpha|\le\frac{\pi}{4}\left(1-\frac{1}{m+1}\right). \] Thus \[ \cos\alpha\ge\frac{\sqrt{2}}{2}\left(\cos\frac{\pi}{4(m+1)}+\sin\frac{\pi}{4(m+1)}\right). 
\] This inequality and \eqref{eq:xiform} give \begin{equation} 2\cos\theta=4\cos^{2}\alpha-2\ge4\sin\frac{\pi}{2(m+1)}.\label{eq:cosrestriction} \end{equation} From the definition of the Chebyshev polynomial, we have \begin{eqnarray*} U_{m-2}(\xi) & = & \frac{\sin(m-1)\alpha}{\sin\alpha}\\ & = & \frac{\sin(m+1)\alpha\cos2\alpha-\cos(m+1)\alpha\sin2\alpha}{\sin\alpha}\\ & = & \frac{\sin(m+1)\alpha\cos2\alpha}{\sin\alpha}. \end{eqnarray*} With similar computations for $U_{m-4}(\xi)$, we obtain \[ |3U_{m}\left(\xi\right)+2U_{m-2}(\xi)+U_{m-4}(\xi)|=\frac{|\sin(m+1)\alpha||3+2\cos2\alpha+\cos4\alpha|}{|\sin\alpha|}. \] Since $\sin(m+1)\alpha=\pm1$ and $\cos2\alpha\ge0$, the right side is at least $2\sqrt{2}$. We combine this with \eqref{eq:cosrestriction} to have \begin{eqnarray*} \left|(2\cos\theta+1)^{(m-2)/2}\left(3U_{m}\left(\xi\right)+2U_{m-2}(\xi)+U_{m-4}(\xi)\right)\right| & \ge & \left(1+4\sin\frac{\pi}{2(m+1)}\right)^{(m-2)/2}2\sqrt{2}\\ & \ge & 6\sqrt{2} \end{eqnarray*} when $m\ge6.$ The inequality \eqref{eq:quartic-Chevinq} follows. By the Intermediate Value Theorem, we have at least one root when $\sin(m+1)\alpha$ changes between $-1$ and $1$ with $-\pi/4<\alpha<\pi/4$. From the formula \eqref{eq:alphaform}, the number of roots $q_{1}$ when $0<\cos\theta<1$ is at least $2(\left\lfloor (m-2)/4\right\rfloor )$. By symmetry, the number of roots $q_{1}\ne\pm i,1$ with $\Re q_{1}>-1/3$ on the unit circle is at least $4(\left\lfloor (m-2)/4\right\rfloor )$. Note that each of these roots gives two more roots $q_{2}$ and $q_{3}$ on the quartic curve. It remains to check the multiplicities of $q_{1}=\pm i,1$ in the equation \eqref{eq:quartic-q}. We note that this equation has a root $q_{1}=1$ with multiplicity at least 1. In the case $q_{1}=1$ we obtain four more roots $q_{2}$, $q_{2}^{-1}$, $q_{3}$, and $q_{3}^{-1}$. We now consider the case $q_{1}=\pm i$. 
The equation $q^{2}+(1+q_{1})q+q_{1}^{2}+q_{1}+1=0$ where $q=q_{2},q_{3}$ gives $(q_{2},q_{3})=(-1,i)$ or $(q_{2},q_{3})=(i,-1)$ when $q_{1}=i$ and $(q_{2},q_{3})=(-1,-i)$ or $(q_{2},q_{3})=(-i,-1)$ when $q_{1}=-i$ . Hence each of the roots $q_{1}=\pm i$ gives us another root at $-1$ with the same multiplicity. To check the multiplicities at $q_{1}=\pm i$, we need to differentiate the equation \eqref{eq:quartic-q} with respect to $q_{1}$. We obtain its derivatives by applying implicit differentiation to the equation $q^{2}+(1+q_{1})q+q_{1}^{2}+q_{1}+1=0$. After substituting $q_{1}=\pm i$ in \eqref{eq:quartic-q} and its derivatives, we see that the multiplicity of $\pm i$ is \[ \begin{cases} 2 & \mbox{if }m=4k\\ 3 & \mbox{if }m=4k+1\\ 0 & \mbox{if }m=4k+2\\ 1 & \mbox{if }m=4k+3 \end{cases}. \] The table below tabulates the $3m-1$ roots of \eqref{eq:quartic-q}. \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & $m=4k$ & $m=4k+1$ & $m=4k+2$ & $m=4k+3$\tabularnewline \hline \hline $q_{1}=e^{i\theta}$,$\Re q_{1}>-1/3$, $q_{1}\ne\pm i,1$ & $3(4k-4)$ & $3(4k-4)$ & $12k$ & $12k$\tabularnewline \hline $q_{1}=1$ & $5$ & $5$ & $5$ & $5$\tabularnewline \hline $q_{1}=\pm i$ & $6$ & $9$ & $0$ & $3$\tabularnewline \hline Total & $12k-1$ & $12k+2$ & $12k+5$ & $12k+8$\tabularnewline \hline \end{tabular} \par\end{center} All the roots counted on the table lie on the curves given in the lemma. The number of roots counted equals the number of possible roots which is $3m-1$. Also, as a consequence of the Intermediate Value Theorem applied to the intervals formed by $\sin(m+1)\alpha=\pm1$, the roots $q_{1}$ are dense on the portion of the unit circle with real part at least $-1/3$. The lemma follows. \end{proof} \begin{theorem}\label{quartic} Let $H_{m}(z)$ be a sequence of polynomials whose generating function is \[ \sum H_{m}(z)t^{m}=\frac{1}{1+B(z)t+A(z)t^{4}} \] where $A(z)$ and $B(z)$ are polynomials in $z$ with complex coefficients. 
The roots of $H_{m}(z)$ which satisfy $A(z)\ne0$ lie on the curve $\mathcal{C}_{4}$ defined by \[ \Im\frac{B^{4}(z)}{A(z)}=0\qquad\mbox{and}\qquad0\le\Re\frac{B^{4}(z)}{A(z)}\le\frac{4^{4}}{3^{3}}, \] and are dense there as $m\rightarrow\infty$. \end{theorem} \begin{proof} From the definition of $q$-discriminant in \eqref{eq:qdisc}, we have \[ \mathrm{Disc}_{t}(1+B(z)t+A(z)t^{4};q)=-A^{2}(z)B^{4}(z)q^{3}(1+q+q^{2})^{3}+A^{3}(z)(1+q+q^{2}+q^{3})^{4}. \] If $q$ is a quotient of two roots of $1+B(z)t+A(z)t^{4}$, then \begin{eqnarray*} \frac{B^{4}(z)}{A(z)} & = & \frac{(1+q+q^{2}+q^{3})^{4}}{q^{3}(1+q+q^{2})^{3}}. \end{eqnarray*} Let $f(q)$ be the function on the right side. We note that $f(q)$ maps $q_{1}=e^{i\theta}$ with $\Re q_{1}\ge-1/3$ to the real interval $[0,4^{4}/3^{3}]$ since \[ f(q_{1})=\frac{(q_{1}^{3/2}+q_{1}^{-3/2}+q_{1}^{1/2}+q_{1}^{-1/2})^{4}}{(q_{1}+q_{1}^{-1}+1)^{3}}. \] If $q$ is a point on the quartic curve in Lemma \ref{quarticqlemma} then $q$ and $q_{1}$ are related by \[ q_{1}^{2}+q^{2}+q_{1}q+q_{1}+q+1=0. \] Multiplying this equation by $q_{1}-q$, we obtain \[ q_{1}^{3}+q_{1}^{2}+q_{1}=q^{3}+q^{2}+q. \] Thus by the definition of $f(q)$, we have $f(q)=f(q_{1})$. Since \[ \mathrm{Disc}_{t}(1+B(z)t+A(z)t^{4})=-3^{3}A^{2}(z)B^{4}(z)+4^{4}A^{3}(z), \] the roots of $H_{m}(z)$ lie on the curve $\mathcal{C}_{4}$. The density of these roots follows from arguments similar to those in the proof of Theorem \ref{quadratic}. \end{proof} \textbf{Remark: }One may try to find the root distribution of $H_{m}(z)$ in the case $D(t,z)=1+B(z)t+A(z)t^{5}$. From computer experiments, the distribution of the quotients of roots of $D(t,z)$ in the case $m=50$ is given in the figure below. \begin{figure} \caption{Distribution of the quotients of roots of the quintic denominator} \end{figure} We end this paper with the following conjecture. 
\begin{conjecture} Let $H_{m}(z)$ be a sequence of polynomials whose generating function is \[ \sum H_{m}(z)t^{m}=\frac{1}{1+B(z)t+A(z)t^{n}} \] where $A(z)$ and $B(z)$ are polynomials in $z$ with complex coefficients. The roots of $H_{m}(z)$ which satisfy $A(z)\ne0$ lie on the curve $\mathcal{C}_{n}$ defined by \[ \Im\frac{B^{n}(z)}{A(z)}=0\qquad\mbox{and}\qquad0\le(-1)^{n}\Re\frac{B^{n}(z)}{A(z)}\le\frac{n^{n}}{(n-1)^{n-1}}, \] and are dense there as $m\rightarrow\infty$. \end{conjecture} \end{document}
\begin{document} \title{Support of Non-separable Multivariate Scaling Functions} \author{ {\bf Irina Maximenko} \thanks{Supported by RFFI, grant \# 06-01-00457.} } \date{} \maketitle \abstract{ We estimate the support of a multivariate scaling function for an arbitrary dilation matrix. Using the knowledge of the size of the support, we give a method for calculating the values of the scaling function on a dense set. } \section{Introduction} \label{ss0} For the construction of wavelet bases it is often convenient to use a scaling function, i.e.\ a function which satisfies the following functional equation: \begin{equation} \varphi (x) = 2 \sum \limits_{ q \in {\mathbb Z}}{c_{q} \varphi (2x-q)}, \ \ x \in {\mathbb R}. \label{01} \end{equation} The equation (\ref{01}) is called a scaling equation and the sequence of coefficients $\{ c_q \}$ is called a mask. The scaling function can be built from the mask. Masks with finite support are of the most interest because they generate wavelet functions with compact support. In particular, knowledge of the support allows one to apply an algorithm that constructs the scaling function on a dense set. In the one-dimensional case, if a function $\varphi$ satisfies the equation \begin{equation} \varphi (x) = 2 \sum \limits_{q=N_1}^{N_2} {c_{q} \varphi (2x-q)}, \ \ x \in {\mathbb R}, \label{02} \end{equation} then the support of the scaling function is contained in $[N_1,N_2]$ (see, for example, \cite{D1}, \cite{Ch1}). In the $d$-dimensional case the dilation coefficient in the scaling equation is a matrix which satisfies some natural requirements. A dilation matrix is a $d \times d$ integer matrix all of whose eigenvalues have modulus greater than one. We define the norm of a matrix as the operator norm from ${\mathbb R}^d$ to ${\mathbb R}^d$: $$ \|M\| = \max_j \sqrt {\lambda_j(MM^*)}, $$ where $M^*$ is the adjoint of the matrix $M$ and $\lambda_j(MM^*)$ are the (nonnegative) eigenvalues of $MM^*$.
For dilation matrices we have \begin{equation} \lim_{n\to+\infty}\|M^{-n}\|=0 \ \ \ \ \mbox{and} \ \ \ \ \lim_{n\to+\infty} |M^{n} x|=+\infty, \ \ \ \forall x \in {\mathbb R}^d , \ \ \ x \ne 0 . \label{00} \end{equation} Hence the set $ \{ M^{-j} k \}_{j \in {\mathbb Z}, \ k \in {\mathbb Z}^{d}} $ is dense in ${\mathbb R}^d$. Note that the norm of the matrix $M^{-1}$ need not be less than one. For example, the matrix $$ A= \left( \begin{array}{cc} 0 & 1\\ 3 & 1 \end{array} \right) $$ is a dilation matrix with eigenvalues $\approx 2.3028$ and $\approx -1.3028$; nevertheless $\|A^{-1}\| \approx 1.0599$, so $\|A^{-1}\|> 1$. Let \ $ m:=|\det M| $ ($\det M$ is the determinant of the matrix $M$); then $m$ is an integer greater than one. The equation \begin{equation} \label{1} \varphi (x) = m \sum \limits_{ q \in {\mathbb Z}^{d} }{c_q \varphi (Mx-q) } , \ \ x \in {\mathbb R}^d \end{equation} is called a scaling equation and the function $ \varphi $ is called an $ (M,c)$ scaling function. The cascade operator $ T_c $ is the linear operator given by \begin{equation} (T_c f)(x):= m \sum \limits_{ q \in {\mathbb Z}^{d}} {c_q f (Mx-q)}. \label{4} \end{equation} The iteration scheme $T_c^n f=T_c(T_c^{n-1} f) , \ \ n=1,2,\dots $ is called a cascade algorithm. As an initial function $F_0$ we can take any piecewise continuous function with compact support which satisfies the following conditions: \begin{equation} \sum \limits_{q \in {\mathbb Z}^{d}} {F_0(x+q)}=1, \ \ \ \widehat{F_0}({\mathbf0}) =1 \ \ \ {\mbox{and}} \ \ \ \widehat{F_0}(u) \ \ \ {\mbox {is continuous at the origin}}. \label{S1a} \end{equation} For example, as $F_0$ we can take the characteristic function $F_0 := \chi_{[-1/2,1/2)^d}$ or $F_0= \prod \limits_{j=1}^{d}B(x_j)$, where $B(x_j)=1-|x_j|$ if $|x_j|<1$ and $B(x_j)=0$ otherwise.
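The example matrix $A$ above can be checked in a few lines of code (an illustration of ours, not part of the paper): its eigenvalues exceed one in modulus, so it is a dilation matrix, while the norm of $A^{-1}$, computed per the definition $\|M\|=\max_j\sqrt{\lambda_j(MM^{*})}$, is nevertheless greater than one.

```python
import numpy as np

# The dilation matrix A = [[0, 1], [3, 1]] from the text: all eigenvalues have
# modulus > 1, yet the operator norm of its inverse also exceeds 1.
A = np.array([[0.0, 1.0], [3.0, 1.0]])
eig_moduli = np.abs(np.linalg.eigvals(A))  # moduli ~ {2.3028, 1.3028}

inv = np.linalg.inv(A)
# Operator norm per the definition: max_j sqrt(lambda_j(M M^*)); A is real, so M^* = M^T.
inv_norm = np.sqrt(np.linalg.eigvalsh(inv @ inv.T).max())  # ~ 1.0599 > 1
```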
Let us define the function $m_0$, which is $1$-periodic in each variable: $$ m_0 (u):= \sum \limits_{q \in {\mathbb Z}^{d}}{c_qe^{-2 \pi i(q,u)}} , \ \ \ \ \ \ u \in {\mathbb R}^d. $$ Obviously, if $ \{ c_q \} $ has finite support then the function $ m_0$ is a trigonometric polynomial. The scaling equation in the frequency domain is $$ \widehat{\varphi}(u)=m_0({M^*}^{-1}u) \widehat {\varphi}({M^*}^{-1}u). $$ Iterating this relation (and assuming that the Fourier transform of the scaling function is continuous at the origin and $\widehat \varphi ({\mathbf0} )=1$), we get \begin{equation} \widehat \varphi (u)=\prod \limits_{j=1}^{\infty}{m_0({M^*}^{-j}u)}. \label{IP} \end{equation} For the convergence of the infinite product it is necessary that $m_0({\mathbf0})=1$, or, equivalently, \begin{equation} \sum \limits_{k \in {\mathbb Z}^{d}}{c_k}=1. \label{M31} \end{equation} If $m_0$ is a trigonometric polynomial then the infinite product converges uniformly on compact sets. Let the space $S'$ be the set of tempered distributions defined on the set of test functions $S=S({\mathbb R}^d)$ (infinitely differentiable functions which, together with all their derivatives, decay at infinity faster than any power function). A sequence of distributions (or functions) $f_n$ converges in $S'$ to a distribution $f$ if $\lim \limits_{n \to \infty}(f_n, \ g) =(f, \ g)$ for all functions $g \in S$. A distribution $f$ equals zero in a neighborhood $U$ of a point $x_0$ if $(f, \ g) =0$ for every test function $g$ which vanishes outside $U$ (see, for example, \cite{GSH}). If the distribution $f$ is not equal to zero in any neighborhood $U$ of the point $x_0$, then $x_0$ is called essential for the functional $f$. The set of all essential points is called the support of the distribution $f$. The support of a distribution corresponding to a usual continuous or piecewise continuous function $f$ is the closure of the set on which $f(x) \ne 0$.
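As an illustration of the infinite product (\ref{IP}) (our example, not from the text): in dimension $d=1$ with $M=2$ and the Haar mask $c_{0}=c_{1}=1/2$, the truncated product reproduces the Fourier transform of the box function $\chi_{[0,1)}$, namely $\widehat{\varphi}(u)=(1-e^{-2\pi iu})/(2\pi iu)$.

```python
import cmath

# Haar mask symbol m0(u) = (1 + e^{-2 pi i u}) / 2; the truncated product (IP)
# converges to (1 - e^{-2 pi i u}) / (2 pi i u), the transform of the box function.
def m0(u):
    return (1 + cmath.exp(-2j * cmath.pi * u)) / 2

def phi_hat(u, terms=50):
    prod = 1.0 + 0j
    for j in range(1, terms + 1):
        prod *= m0(u / 2 ** j)
    return prod

u = 0.3
closed_form = (1 - cmath.exp(-2j * cmath.pi * u)) / (2j * cmath.pi * u)
err = abs(phi_hat(u) - closed_form)  # round-off level after 50 factors
```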
In the book \cite{NPS} it is proved that the condition (\ref{M31}) is sufficient for the convergence of the cascade algorithm in $S'$. Thus we have the following. \begin{theo} \cite{NPS}. Let $\{ c_q \} $ be a finite mask satisfying (\ref{M31}). Then the scaling equation (\ref{1}) has a unique, up to a constant factor, solution $\varphi \in S'$ with compact support. This solution is given by (\ref{IP}). Moreover, for any distribution $f \in S'$ with compact support the sequence $f_n=T_c^n f$ converges in $S'$ to the function $c \cdot \varphi$, where the factor $c$ equals $\widehat {f}(0)$. \label{DSE} \end{theo} In Section \ref{ss3} the support of a multivariate scaling function is estimated for an arbitrary dilation matrix and for masks which provide the convergence of the cascade algorithm in $S'$. The estimate then obviously remains true for other types of convergence. In theory, knowing the mask we can find the scaling function and the wavelet function, since we can take the inverse Fourier transform in (\ref{IP}). In practice, however, a simpler method of constructing $\varphi$ is available. In Section \ref{ss1} we show how the knowledge of the size of the support can be used for calculating the values of a scaling function on a dense set (in the one-dimensional case this method is described, for example, in \cite{Ch1}). \section{The Estimation of the Scaling Function Support} \label{ss3} \begin{lem} Let $F_1, \dots , F_n, \dots$ be piecewise continuous functions with compact supports on ${\mathbb R}^d$, and let the sequence $F_1, \dots , F_n, \dots$ converge in $S'$. Suppose that $ {\rm supp} \ F_n \subset A_n$ for every $n$, where the sets $A_1, \ A_2, \dots, A_n, \dots \subset {\mathbb R}^d$ are closed balls with a common center. Let the sequence of balls $A_n$ converge, which means that there exists a ball $A$ whose radius is the limit of the radii of the balls $A_n$.
Then the support of the limit function $F$ is contained in $A$. \label{supp} \end{lem} Proof. We show that no point outside $A$ belongs to the support of the distribution $F$. Fix an arbitrary point $a_0 \in {\mathbb R}^d \setminus A$. Since the set $A$ is closed, there exists a neighborhood $U$ of the point $a_0$ which is separated from the set $A$. Since $ \lim A_n =A$, there exists a number $N$ such that all the balls $A_n$, \ $n>N$, are separated from $U$. Let a test function $g \in S$ vanish outside the neighborhood $U$. Then $(F_n , \ g)=0$ for all $n>N$ and hence $(F , \ g)=0$. This means that the point $a_0$ does not belong to the support of the distribution $F$. \ \ \ \ \ \ \rule [-5pt]{5pt}{5pt} Note. The lemma remains true if instead of balls we consider rectangular parallelepipeds with faces parallel to the coordinate hyperplanes and with a common center. The convergence of the parallelepipeds here means the convergence of their edges. Denote by $\Omega$ the set of indices $q$ such that $c_q$ is not zero. The mask $c_q$ has finite support, hence $\Omega$ is a finite set. Recall the definition of the cascade operator (as an initial function $F_0$ we take a piecewise continuous function with compact support which satisfies (\ref{S1a})): \begin{equation} F_n(x)= (T_c F_{n-1})(x):= m \sum \limits_{ q \in \Omega} {c_{q} F_{n-1} (Mx-q)}. \label{4C} \end{equation} First we estimate the support of the limit (scaling) function when $\| M^{-1} \| < 1$. \begin{theo} Let $M$ be a dilation matrix with $\| M^{-1} \| < 1$. Suppose that the mask $\{ c_q \} $ has finite support and satisfies (\ref{M31}). Then the support of the limit function $\varphi$ admits the following estimate: $$ {\rm supp} \ \varphi \subset \{ x \in {\mathbb R}^d : |x| \le \frac{Q \ \| M^{-1} \|}{1 - \| M^{-1} \|} \}, $$ where $Q:=\max \limits_{q \in \Omega}|q|$. \label{norma} \end{theo} Proof.
Fix an initial function $F_0$, a piecewise continuous function with compact support that satisfies (\ref{S1a}), and denote its support by $\Omega_0$. The support of $F_0$ is a compact set, hence it is contained in the closed ball of radius $R:= \max \{ |x| : \ x \in \Omega_0 \}$. Applying the cascade operator (\ref{4C}) to the function $F_0$ we get the function $F_1$. The function $F_1(x)$ can be nonzero only if $M x-q \in \Omega_0$ for some $q \in \Omega$, that is, $x \in M^{-1} (\Omega_0+q)$. Denote the support of the function $F_1$ by $\Omega_1$. Then for $x \in \Omega_1$ we have \begin{equation} | x | \le \| M^{-1}\| R+\| M^{-1}\| Q. \label{Q2} \end{equation} Denote the support of the function $F_n$ by $\Omega_n$. Let us prove by induction that the function $F_n$ equals zero outside the set of $x$ with \begin{equation} |x| \le \| M^{-1}\|^n R+Q ( \| M^{-1}\| + \| M^{-1}\|^2+\dots+\| M^{-1}\|^n). \label{Q1} \end{equation} For $n=1$ the assertion is true by (\ref{Q2}). Suppose that the estimate (\ref{Q1}) is true for $n$; we prove it for $n+1$. Applying the cascade operator (\ref{4C}) to the function $F_n$, we see that $F_{n+1}(x)$ can be nonzero only if $M x-q \in \Omega_n$ for some $q$, that is, $x \in M^{-1} (\Omega_n+q)$. Then for $x \in \Omega_{n+1}$ we have $$ |x| \le \| M^{-1}\| Q+ \| M^{-1}\| (\| M^{-1}\|^n R+Q ( \| M^{-1}\| + \| M^{-1}\|^2+\dots+\| M^{-1}\|^n))= $$ $$ =\| M^{-1}\|^{n+1}R+Q ( \| M^{-1}\| + \| M^{-1}\|^2+\dots+\| M^{-1}\|^{n+1}). $$ The induction is complete. By Theorem \ref{DSE} the cascade algorithm converges in $S'$. Since $\| M^{-1}\| < 1$, the limit of the right-hand side of (\ref{Q1}) exists and is finite. Using Lemma \ref{supp} and letting $n$ tend to infinity, we obtain $$ |x| \le \frac{Q \ \| M^{-1} \|}{1 - \| M^{-1} \|}, \ \ \ \ x \in {\rm supp} \ \varphi . \ \ \ \ \ \ \rule [-5pt]{5pt}{5pt} $$ Note.
The support of the scaling function $\varphi$ does not depend on the size of the support of the initial function $F_0$. \begin{coro} Let $d=1$. Then the dilation matrix equals $m, \ \ m \in {\mathbb Z}, \ \ |m|>1$. Suppose that the mask $\{ c_q \} $ has finite support and satisfies (\ref{M31}). Then $$ {\rm supp} \ \varphi \subset \{ x \in {\mathbb R} : \ \ |x| \le \frac{Q}{|m| - 1} \}. $$ \label{odn} \end{coro} \begin{coro} Let $M$ be a diagonal $d \times d$ dilation matrix with diagonal entries $\lambda_1,\dots,\lambda_d$. Suppose that the mask $\{ c_q \} $ has finite support and satisfies (\ref{M31}). Then $$ {\rm supp} \ \varphi \subset \{ x= (x_1,\dots,x_d) \in {\mathbb R}^d : \ \ |x_k| \le \frac{Q}{|\lambda_k| - 1}, \ \ \ \ k=1,\dots,d \}. $$ \label{diag} \end{coro} Let now the norm of the matrix $M^{-1}$ be not necessarily less than one. Consider first the case where the matrix $M$ is a Jordan block of size $s$ with eigenvalue $\lambda, \ \ |\lambda|>1$: \begin{equation} Mx=\left( \begin{array}{ccccc} \lambda & 0 & \ldots & 0 & 0 \\ 1 &\lambda & \ldots & 0 & 0 \\ 0 & 1 & \ldots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots &\vdots \\ 0 & 0 & \ldots & 1 & \lambda \\ \end{array} \right) \left( \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_s \\ \end{array} \right) = \left( \begin{array}{c} \lambda x_1 \\ x_1 + \lambda x_2 \\ x_2 + \lambda x_3 \\ \vdots \\ x_{s-1} +\lambda x_s \\ \end{array} \right) . \label{J1} \end{equation} \begin{lem} Let the dilation matrix $M$ be a Jordan block of size $s$ with eigenvalue $\lambda $. Suppose that the mask $\{ c_q \} $ has finite support and satisfies (\ref{M31}).
Then for each coordinate of the points of the support of the limit function $\varphi$ we have $$ \mbox{if} \ \ \ \ |\lambda| \ne 2, \ \ \ \ \mbox{then} \ \ \ \ |x_k| \le \frac {Q}{|\lambda|-2} \left(1- \frac{1}{(|\lambda| - 1)^k} \right) , \ \ \ \ k=1,\dots,s; $$ $$ \mbox{if} \ \ \ \ |\lambda| = 2, \ \ \ \ \mbox{then} \ \ \ \ |x_k| \le Qk , \ \ \ \ k=1,\dots,s. $$ \label{jordan} \end{lem} Proof. The matrix $M$ is a dilation matrix, hence $| \lambda | > 1$. Fix an initial function $F_0$, a piecewise continuous function with compact support which satisfies (\ref{S1a}). Its support $\Omega_0$ is contained in the ball of radius $R:= \max \{ |x| : \ x \in \Omega_0 \}$. Applying the cascade operator (\ref{4C}) to the function $F_0$ we get the function $F_1$. The function $F_1(x)$ can be nonzero only if $M x-q \in \Omega_0$ for some $q \in \Omega$, that is, $x \in M^{-1} (\Omega_0+q)$. As above, denote by $\Omega_1$ the support of the function $F_1$. Then $|M x| \le R+|q| \le R +Q$ for $ x \in \Omega_1.$ The first coordinate of the vector $M x$ equals $\lambda x_1$, hence $ |\lambda x_1 | \le |Mx| \le R +Q. $ When we multiply the matrix $M$ by a vector, the first coordinate does not depend on the others. Then, by the same reasoning as in Theorem \ref{norma}, we have \begin{equation} |x_1^{(n)}| \le \frac{R}{|\lambda|^n}+Q \left( \frac{1}{|\lambda|} + \frac{1}{|\lambda|^2}+\dots+\frac{1}{|\lambda|^n} \right)=:A_{n1}, \ \ n=1,2,3,\dots . \label{suppn1} \end{equation} Using Lemma \ref{supp} and letting $n$ tend to infinity, we obtain \begin{equation} |x_1^{(\infty)}| \le Q \left( \frac{1}{|\lambda|} + \frac{1}{|\lambda|^2}+\dots+\frac{1}{|\lambda|^n}+\dots\right)= \frac{Q}{|\lambda|-1}=:A_{\infty \ 1}.
\label{suppinf1} \end{equation} Let us prove by induction that for the support of the function $F_1$ we have, for each coordinate, the estimate \begin{equation} |x_k^{(1)}| \le \left( Q+R \right) \left(\frac{1}{|\lambda|}+\frac{1}{|\lambda|^2}+\dots+ \frac{1}{|\lambda|^k}\right)=:A_{1k} , \ \ \ k=1,\dots,s. \label{supp1k} \end{equation} For $k=1$ the assertion follows from (\ref{suppn1}): $$ |x_1^{(1)}| \le \left( Q+R \right) \frac{1}{|\lambda|} . $$ Suppose that the estimate (\ref{supp1k}) is true for $k$ \ ($k=1,\dots,s-1$); we prove it for $k+1$. We have $$ |x_k^{(1)}+ \lambda x_{k+1}^{(1)}| \le |Mx| \le Q+R . $$ Using the induction hypothesis, we get $$ |x_{k+1}^{(1)}| \le \frac{1}{|\lambda|} \left( Q+R + A_{1k} \right)= \frac{1}{|\lambda|} \left( Q+R + \left( Q+R \right) \left(\frac{1}{|\lambda|}+\frac{1}{|\lambda|^2}+\dots+ \frac{1}{|\lambda|^k}\right) \right)= \left( Q+R\right) \left(\frac{1}{|\lambda|}+\frac{1}{|\lambda|^2}+\dots+ \frac{1}{|\lambda|^{k+1}}\right) . $$ The induction is complete, and the estimate for the support of the function $F_1$ is proved. We obtain the function $F_n$ from the formula (\ref{4C}); hence for $x \in \Omega_n$ we have $$ |x_{k-1}^{(n)}+\lambda x_k^{(n)}| \le Q+A_{n-1 \ k} , $$ and therefore \begin{equation} |x_k^{(n)}| \le \frac{1}{|\lambda|} \left( Q+A_{n-1 \ k} +A_{n \ k-1} \right)=:A_{nk}, \ \ k=2,\dots,s, \ \ n=1,2,3,\dots \label{suppnk} \end{equation} We thus get the recurrence $$ A_{nk}=\frac{1}{|\lambda|} \left( Q+A_{n-1 \ k} +A_{n \ k-1} \right) . $$ Letting $n$ tend to infinity, we get the estimate for the support of the limit function $\varphi$: $$ A_{\infty \ k}= \frac{1}{|\lambda|} \left( Q+A_{\infty \ k} +A_{\infty \ k-1} \right), \ \ k=2,\dots,s. $$ Solving this equation for $A_{\infty \ k}$, we have \begin{equation} A_{\infty \ k} = \frac{1}{|\lambda|-1} \left( Q+A_{\infty \ k-1} \right).
\label{suppinfk} \end{equation} Let us prove by induction on $k$ that \begin{equation} A_{\infty \ k}= Q \left( \frac{1}{|\lambda|-1} + \frac{1}{(|\lambda|-1)^2}+\dots+ \frac{1}{(|\lambda|-1)^k}\right) , \ \ k=1,\dots,s. \label{suppphik} \end{equation} For $k=1$ the assertion is true by (\ref{suppinf1}). Suppose now that the formula (\ref{suppphik}) is true for $k$ \ ($k=1,\dots,s-1$); we prove it for $k+1$: $$ A_{\infty \ {k+1}} = \frac{1}{|\lambda|-1} \left( Q+A_{\infty \ k} \right)= \frac{1}{|\lambda|-1} \left( Q+Q \left( \frac{1}{|\lambda|-1} + \frac{1}{(|\lambda|-1)^2}+\dots+ \frac{1}{(|\lambda|-1)^k}\right) \right)= Q \left( \frac{1}{|\lambda|-1} + \frac{1}{(|\lambda|-1)^2}+\dots+ \frac{1}{(|\lambda|-1)^{k+1}} \right), $$ which completes the induction. Summing the finite geometric progression, for $|\lambda| \ne 2$ we have \begin{equation} A_{\infty \ k}=\frac{Q}{|\lambda|-2} \left(1-\frac{1}{(|\lambda|-1)^k} \right) , \ \ \ k=1,\dots,s, \label{otvet} \end{equation} and for $|\lambda| = 2$ we have \begin{equation} A_{\infty \ k}=Qk, \ \ \ k=1,\dots,s. \ \ \ \rule [-5pt]{5pt}{5pt} \label{otvet1} \end{equation} Note. In proving Lemma \ref{jordan} we did not use the fact that the $q$ are integers. An arbitrary non-degenerate matrix $M$ can be represented as $M=CGC^{-1}$, where $G$ is a Jordan matrix (consisting of Jordan blocks) and the matrix $C$ is invertible. Each of the Jordan blocks (including blocks of size $1 \times 1$) generates a subspace invariant under multiplication by the matrix $G$.
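The closed forms (\ref{otvet}) and (\ref{otvet1}) can be checked directly against the recurrence $A_{\infty \ k}=(Q+A_{\infty \ k-1})/(|\lambda|-1)$ with $A_{\infty \ 0}=0$ (the snippet below is an illustration of ours, not part of the paper):

```python
# Compare the recurrence A_{inf,k} = (Q + A_{inf,k-1}) / (|lambda| - 1), A_{inf,0} = 0,
# with the closed forms Q/(|lambda|-2) * (1 - (|lambda|-1)^{-k}) for |lambda| != 2
# and Q*k for |lambda| = 2.
def a_inf_recursive(Q, lam_abs, k):
    a = 0.0
    for _ in range(k):
        a = (Q + a) / (lam_abs - 1)
    return a

def a_inf_closed(Q, lam_abs, k):
    if lam_abs == 2:
        return Q * k
    return Q / (lam_abs - 2) * (1 - (lam_abs - 1) ** (-k))
```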
Define the rectangular parallelepiped $P \subset {\mathbb R}^d$ as follows: \ \ if row $p$ of the matrix $G$ contains a simple eigenvalue $\lambda_p$, then $|x_p| \le \frac{Q}{|\lambda_p| - 1}$; \ \ if row $p$ of the matrix $G$ contains the beginning of a Jordan block of size $s$ which corresponds to an eigenvalue with $|\lambda_p| \ne 2$, then $ |x_{p+k-1}| \le \frac {Q}{|\lambda_p|-2} \left(1- \frac{1}{(|\lambda_p| - 1)^k} \right), \ \ k=1,\dots,s; $ \ \ if row $p$ of the matrix $G$ contains the beginning of a Jordan block of size $s$ which corresponds to an eigenvalue with $|\lambda_p| = 2$, then $ |x_{p+k-1}| \le Qk, \ \ k=1,\dots,s.$ \begin{theo} Let $M$ be a dilation matrix all of whose eigenvalues $\lambda_1,\dots,\lambda_d$ are real. Suppose that the mask $\{ c_q \} $ has finite support and satisfies (\ref{M31}). Then the support of the limit function $\varphi$ is contained in $C P$, where the matrix $C$ and the set $P$ are defined above. \label{diff} \end{theo} Proof. The matrix $M$ has real eigenvalues, hence in the formula $M=CGC^{-1}$ the matrices $G$ and $C$ are real. Applying the cascade operator (\ref{4C}) to the function $F_n$ we get \begin{equation} F_{n+1} (x) = m \sum \limits_{ q \in \Omega}{c_q F_n (CGC^{-1}x-q)}, \ \ x \in {\mathbb R}^d. \label{s1} \end{equation} Making the change of variables $x=Ct$, we get \begin{equation} F_{n+1} (Ct) = m \sum \limits_{ q\in \Omega}{c_q F_n (CGt-q)}, \ \ t \in {\mathbb R}^d . \label{s2} \end{equation} Denoting $F_n^1 (t)= F_n (Ct)$, the formula (\ref{s2}) becomes \begin{equation} F_{n+1}^1 (t) = m \sum \limits_{ q\in \Omega}{c_q F_n^1 (Gt-C^{-1}q)}, \ \ t \in {\mathbb R}^d. \label{s3} \end{equation} Letting $n$ tend to infinity, we obtain \begin{equation} \varphi_1 (t) =m \sum \limits_{ q \in \Omega} {c_{q} \varphi_1 (Gt-C^{-1}q)}, \ \ t \in {\mathbb R}^d, \label{s5} \end{equation} where $\varphi_1 (t)=\varphi (Ct) $. The matrix $G$ consists of Jordan blocks. Each of them corresponds to a subspace invariant under multiplication by the matrix $G$.
Hence, using the note after Lemma \ref{jordan}, we can apply Lemma \ref{jordan} to the function $\varphi_1 $, to each Jordan block separately. If the corresponding eigenvalue is simple, we apply Corollary \ref{odn}. In this way we obtain the rectangular parallelepiped which we denote by $P$. Finally, applying the matrix $C$ to the bounds of the support of the function $\varphi_1 $, we get the set $CP$, which contains the support of the function $\varphi$. $\ \ \ \rule [-5pt]{5pt}{5pt}$ \section{Values of the Scaling Function on a Dense Set} \label{ss1} Let the coefficients $c_q$ be nonzero on the set $\Omega$, where $\Omega$ is a finite system of integer $d$-dimensional vectors. If we know the bounds of the support of the scaling function $\varphi$, then, using the scaling equation (\ref{1}), we can find the values of the function $\varphi$ on the dense set $ \{ M^{-j} k \}_{j \ge 0, \ k \in {\mathbb Z}^{d}} $. If we know the values of the scaling function $\varphi$ at the integer points, then the equations $$ \varphi (M^{-1}x) = m \sum \limits_{ q \in \Omega }{c_q \varphi (x-q) } , $$ $$ \varphi (M^{-2}x) =m \sum \limits_{ q \in \Omega }{c_q \varphi (M^{-1}x-q) } , $$ $$ \cdots $$ uniquely define the values of the function $\varphi$ at the points $M^{-j} k, \ \ j \ge 0, \ k \in {\mathbb Z}^{d} $. To find the values of the function $\varphi$ at the integer points we use the scaling equation (\ref{1}) once more. Suppose that $\varphi (k_1),\dots,\varphi (k_N)$ are its only possibly nonzero values at integer points. Changing the summation index on the right side of (\ref{1}), we get $$ \varphi (k) = m \sum \limits_{ q \in \Omega }{c_q \varphi (Mk-q) } = m \sum \limits_{ p \in \Omega_1 }{c_{Mk-p} \varphi (p) }. $$ Denote $r=( \varphi (k_1),\dots,\varphi (k_N))^T$ and denote by $B$ the operator generated by the matrix $(c_{Mk-p})$.
Then the matrix of the operator $B$ is $$ B= m \left( \begin{array}{cccc} c_{Mk_1-k_1} & c_{Mk_1-k_2} & \ldots & c_{Mk_1-k_N}\\ c_{Mk_2-k_1} & c_{Mk_2-k_2} & \ldots & c_{Mk_2-k_N}\\ \vdots & \vdots & \ddots & \vdots \\ c_{Mk_N-k_1} & c_{Mk_N-k_2} & \ldots & c_{Mk_N-k_N} \end{array} \right) . $$ In matrix form we have $r=Br.$ The values of the function $\varphi$ at the integer points are the coordinates of the eigenvector $r$ which corresponds to the eigenvalue $1$ of the matrix $B$. The vector $r$ is normalized so that the function $\varphi$ satisfies the partition of unity: $$ \varphi(k_1)+\varphi(k_2)+\dots+\varphi(k_N)=1. $$ If the cascade algorithm converges strongly, then the eigenvector of the matrix $B$ corresponding to the eigenvalue $1$ is unique up to a constant factor. \begin{thebibliography}{99} \bibitem{Ch1} {\sc Chui, Ch.} An Introduction to Wavelets. Texas A \& M University, College Station, 1992. \bibitem{D1} {\sc Daubechies, I.} Ten Lectures on Wavelets. Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania, 1992. \bibitem{GSH} {\sc Gelfand, I.M., Shilov, G.E.} Distributions and Operations on Them. Moscow: Phismatlit, 1959, 470 pp. \bibitem{NPS} {\sc Novikov, I.Ya., Protassov, V.Yu., Skopina, M.A.} Wavelet Theory. Moscow: Phismatlit, 2005, 616 pp. \end{thebibliography} \end{document}
\begin{document} \title{Analytic computable structure theory and $L^p$-spaces part 2} \author{Tyler A. Brown} \address{Department of Mathematics\\ Iowa State University\\ Ames, Iowa 50011} \email{[email protected]} \author{Timothy H. McNicholl} \address{Department of Mathematics\\ Iowa State University\\ Ames, Iowa 50011} \email{[email protected]} \begin{abstract} Suppose $p \geq 1$ is a computable real. We extend previous work of Clanin, Stull, and McNicholl by determining the degrees of categoricity of the separable $L^p$ spaces whose underlying measure spaces are atomic but not purely atomic. In addition, we ascertain the complexity of associated projection maps. \end{abstract} \thanks{The second author was supported in part by Simons Foundation Grant \# 317870.} \maketitle \section{Introduction}\label{sec:intro} We continue here the program, recently initiated by Melnikov and Nies (see \cite{Melnikov.Nies.2013}, \cite{Melnikov.2013}), of utilizing the tools of computable analysis to investigate the effective structure theory of metric structures, in particular $L^p$ spaces where $p \geq 1$ is computable. Specifically, we seek to classify the $L^p$ spaces that are computably categorical in that they have exactly one computable presentation up to computable isometric isomorphism. We also seek to determine the degrees of categoricity of those $L^p$ spaces that are not computably categorical; the degree of categoricity of a space is the least powerful Turing degree that computes an isometric isomorphism between any two computable presentations of the space. Recall that when $\Omega$ is a measure space, an \emph{atom} of $\Omega$ is a non-null measurable set $A$ so that $\mu(B) = \mu(A)$ for every non-null measurable set $B \subseteq A$. A measure space with no atoms is \emph{non-atomic}, and a measure space is \emph{purely atomic} if its $\sigma$-algebra is generated by its atoms.
Recall also that with every measure space $\Omega$ there is an associated pseudo-metric $D_\Omega$ on its finitely measurable sets. Namely, $D_\Omega(A,B)$ is the measure of the symmetric difference of $A$ and $B$. The space $\Omega$ is said to be \emph{separable} if this associated pseudo-metric space is separable. By means of convergence in measure, it is possible to show that an $L^p$ space is separable if and only if its underlying measure space is separable. Suppose $p \geq 1$ is computable. It is essentially shown in \cite{Pour-El.Richards.1989} that every separable $L^2$ space is computably categorical. In \cite{McNicholl.2015}, the second author showed that $\ell^p$ is computably categorical only when $p = 2$. Moreover, in \cite{McNicholl.2017} he showed that $\ell^p_n$ is computably categorical and that the degree of categoricity of $\ell^p$ is $\mathbf{0''}$. Together, these results determine the degrees of categoricity of separable spaces of the form $L^p(\Omega)$ when $\Omega$ is purely atomic. In the preceding paper, Clanin, McNicholl, and Stull showed that $L^p(\Omega)$ is computably categorical when $\Omega$ is separable and nonatomic \cite{Clanin.McNicholl.Stull.2019}. Here, we complete the picture by determining the degrees of categoricity of separable $L^p$ spaces whose underlying measure spaces are atomic but not purely atomic. Specifically, we show the following. \begin{theorem}\label{thm:main} Suppose $\Omega$ is a separable measure space that is atomic but not purely atomic, and suppose $p$ is a computable real so that $p \geq 1$ and $p \neq 2$. Assume $L^p(\Omega)$ is nonzero. \begin{enumerate} \item If $\Omega$ has finitely many atoms, then the degree of categoricity of $L^p(\Omega)$ is $\mathbf{0'}$. \label{thm:main::itm:finite} \item If $\Omega$ has infinitely many atoms, then the degree of categoricity of $L^p(\Omega)$ is $\mathbf{0''}$.
\label{thm:main::itm:infinite} \end{enumerate} \end{theorem} Suppose $\Omega$ is a separable measure space that is not purely atomic, and suppose $L^p(\Omega)$ is nonzero where $1 \leq p < \infty$. It follows from the Carath\'eodory classification of separable measure spaces that $L^p(\Omega)$ is isometrically isomorphic to $L^p[0,1]$ if $\Omega$ has no atoms and that $L^p(\Omega)$ is isometrically isomorphic to $\ell^p_n \oplus L^p[0,1]$ if $\Omega$ has $n \geq 1$ atoms (see e.g. \cite{Cembranos.Mendoza.1997}). It also follows that if $\Omega$ has infinitely many atoms, then $L^p(\Omega)$ is isometrically isomorphic to $\ell^p \oplus L^p[0,1]$. Degrees of categoricity are preserved by isometric isomorphism. We thus have the following. \begin{corollary}\label{cor:lpnLp01} Suppose $p$ is a computable real so that $p \geq 1$ and $p \neq 2$. Then, the degree of categoricity of $\ell^p_n \oplus L^p[0,1]$ is $\mathbf{0'}$, and the degree of categoricity of $\ell^p \oplus L^p[0,1]$ is $\mathbf{0''}$. \end{corollary} These results are somewhat surprising in that one might suspect that the spaces $\ell^p_n\oplus L^p[0,1]$ and $\ell^p\oplus L^p[0,1]$ do not have structure much different from their constituent (summand) spaces. It turns out that allowing these summand spaces to ``work together'' indeed produces a few complications in terms of their hybridized structure, particularly while establishing lower bounds for degrees of computable categoricity. For example, in Section \ref{sec:lower}, we construct a computable presentation of $\ell^p_n\oplus L^p[0,1]$ so that projections of vectors into each of the summand spaces are incomputable. However, as we will demonstrate in Section \ref{sec:upper}, it can be fruitful to dissect the hybrid case and consider the constituent spaces in tandem.
In fact, our results regarding the upper bounds for the degree of categoricity of $\ell^p_n\oplus L^p[0,1]$ and $\ell^p\oplus L^p[0,1]$ can be considered in this manner and largely piggyback on the results in \cite{Clanin.McNicholl.Stull.2019} and \cite{McNicholl.2017}. Another consequence of these findings is that when $p \geq 1$ is computable, every separable $L^p$ space is $\mathbf{0''}$-categorical. The paper is organized as follows. Background and preliminaries are covered in Sections \ref{sec:back} and \ref{sec:prelim}. In Section \ref{sec:projection}, we present results on the complexity of the natural projection maps for spaces of the form $\ell^p_n \oplus L^p[0,1]$ or $\ell^p \oplus L^p[0,1]$. We derive lower bounds on degrees of categoricity in Section \ref{sec:lower} and corresponding upper bounds in Section \ref{sec:upper}. These proofs utilize our results on projection maps in Section \ref{sec:projection}. Finally in Section \ref{sec:conclusion} we summarize our findings and pose questions for further investigation. \section{Background}\label{sec:back} Here, we cover pertinent notions regarding external and internal direct sums of Banach spaces and the notion of complemented subspaces of an internal direct sum of Banach spaces. We also summarize additional background material from \cite{Clanin.McNicholl.Stull.2019}. We will assume our field of scalars consists of the complex numbers, but all results hold for the field of real numbers as well. When $S \subseteq \mathbb{N}^*$, let $S\downarrow$ denote the downset of $S$; i.e. the set of all $\nu \in \mathbb{N}^*$ so that $\nu \subseteq \mu$ for some $\mu \in S$. Suppose $1 \leq p < \infty$. If $\mathcal{B}_0$, $\ldots$, $\mathcal{B}_n$ are Banach spaces, their \emph{$L^p$-sum} consists of the vector space $\mathcal{B}_0 \times \ldots \times \mathcal{B}_n$ together with the norm \[ \norm{(u_0, \ldots, u_n)}_p = \left( \sum_{j = 0}^n \norm{u_j}_{\mathcal{B}_j}^p \right)^{1/p}. 
\] Thus, the external direct sum of two $L^p$ spaces is their $L^p$-sum. If $\mathcal{B}_j$ is a Banach space for each $j \in \mathbb{N}$, then the $L^p$-sum of $\{\mathcal{B}_j\}_{j \in \mathbb{N}}$ consists of all $f \in \prod_j \mathcal{B}_j$ so that $\sum_j \norm{f(j)}_{\mathcal{B}_j}^p < \infty$. This is easily seen to be a Banach space under the norm \[ \norm{f}_p = \left( \sum_j \norm{f(j)}_{\mathcal{B}_j}^p \right)^{1/p}. \] Suppose $\mathcal{B}$ is a Banach space and $\mathcal{M}$ and $\mathcal{N}$ are subspaces of $\mathcal{B}$. Recall that $\mathcal{B}$ is the \emph{internal direct sum} of $\mathcal{M}$ and $\mathcal{N}$ if $\mathcal{M} \cap \mathcal{N} = \{\mathbf{0}\}$ and $\mathcal{B} = \mathcal{M} + \mathcal{N}$. In this case, $\mathcal{M}$ is said to be \emph{complemented} and $\mathcal{N}$ is said to be the \emph{complement} of $\mathcal{M}$. When $\mathcal{M}$ is a complemented subspace of $\mathcal{B}$, let $P_{\mathcal{M}}$ denote the associated projection map. That is, $P_{\mathcal{M}}$ is the unique linear map of $\mathcal{B}$ onto $\mathcal{M}$ so that $P_{\mathcal{M}}(f) = f$ for all $f \in \mathcal{M}$ and $P_{\mathcal{M}}(f) = \mathbf{0}$ for all $f \in \mathcal{N}$. Note that if $T$ is an isometric isomorphism of $\mathcal{B}_0$ onto $\mathcal{B}_1$, and if $\mathcal{M}$ is a complemented subspace of $\mathcal{B}_0$, then $T[\mathcal{M}]$ is a complemented subspace of $\mathcal{B}_1$ and $P_{T[\mathcal{M}]} = TP_{\mathcal{M}}T^{-1}$. Suppose $f,g$ are vectors in an $L^p$ space. We say that $f$ and $g$ are \emph{disjointly supported} if the intersection of their supports is null; equivalently, if $f \cdot g = \mathbf{0}$. We say that $f$ is a \emph{subvector} of $g$ if there is a measurable set $A$ so that $f = g \cdot \chi_A$ (where $\chi_A$ is the characteristic function of $A$); equivalently, if $g - f$ and $f$ are disjointly supported. We write $f \preceq g$ if $f$ is a subvector of $g$. It is readily seen that $\preceq$ is a partial order.
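These order-theoretic notions are concrete. In the sketch below (ours, purely illustrative), simple functions on finitely many atoms of equal mass are modeled as vectors: disjoint supports mean a pointwise product of zero, and $f \preceq g$ means $f$ and $g - f$ are disjointly supported.

```python
import numpy as np

# Simple functions on n atoms, modeled as vectors: f is a subvector of g
# iff f = g * chi_A for some index set A, equivalently f and g - f are
# disjointly supported (their pointwise product vanishes).
def disjointly_supported(f, g):
    return not np.any(f * g)          # supports intersect in a null set

def is_subvector(f, g):               # the relation f <= g from the text
    return disjointly_supported(f, g - f)

g = np.array([2.0, -1.0, 0.0, 3.0])
f = g * np.array([1, 0, 0, 1])        # f = g * chi_A with A = {0, 3}
```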
Also, $f$ is an atom of $\preceq$ if and only if $\operatorname{supp}(f)$ is an atom of the underlying measure space. A subset $X$ of a Banach space $\mathcal{B}$ is \emph{linearly dense} if its linear span is dense in $\mathcal{B}$. Suppose $S \subseteq \mathbb{N}^*$ is a tree and $\phi : S \rightarrow L^p(\Omega)$. We say that $\phi$ is \emph{summative} if for every nonterminal node $\nu$ of $S$, $\phi(\nu) = \sum_{\nu'} \phi(\nu')$ where $\nu'$ ranges over the children of $\nu$ in $S$. We say that $\phi$ is \emph{separating} if $\phi(\nu)$ and $\phi(\nu')$ are disjointly supported whenever $\nu, \nu' \in S$ are incomparable. We then say that $\phi$ is a \emph{disintegration} if its range is linearly dense, and if it is injective, non-vanishing, summative, and separating. Fix a disintegration $\phi : S \rightarrow L^p(\Omega)$. A non-root node $\nu$ of $S$ is an \emph{almost norm-maximizing} child of its parent if \[ \norm{\phi(\nu')}_p^p \leq \norm{\phi(\nu)}_p^p + 2^{-|\nu|} \] whenever $\nu' \in S$ is a sibling of $\nu$. A chain $C \subseteq S$ is \emph{almost norm-maximizing} if for every $\nu \in C$, if $\nu$ has a child in $S$, then $C$ contains an almost norm-maximizing child of $\nu$. Suppose $\mathcal{B}$ is a Banach space. A \emph{structure} on $\mathcal{B}$ is a surjection of the natural numbers onto a linearly dense subset of $\mathcal{B}$. A \emph{presentation of $\mathcal{B}$} is a pair $(\mathcal{B}, R)$ where $R$ is a structure on $\mathcal{B}$. Among all presentations of a Banach space $\mathcal{B}$, one may be designated as \emph{standard}; in this case, we will identify $\mathcal{B}$ with its standard presentation. In particular, if $p \geq 1$ is a computable real, and if $D$ is a standard map of $\mathbb{N}$ onto the set of characteristic functions of dyadic subintervals of $[0,1]$, then $(L^p[0,1], D)$ is the standard presentation of $L^p[0,1]$. 
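The dyadic generators just described fit the preceding definitions: the map sending a binary string to the characteristic function of the corresponding dyadic subinterval of $[0,1]$ is summative (each interval is the union of the two child intervals) and separating (incomparable strings yield disjoint interiors), and its norms are computable since $\norm{\chi_I}_p^p = |I|$. A small sketch, under our own encoding of nodes as bit strings (the helper names are hypothetical):

```python
from fractions import Fraction

def interval(nu):
    # dyadic subinterval of [0,1] coded by the bit string nu:
    # "" -> [0,1], nu+"0" -> left half of nu, nu+"1" -> right half of nu
    left, length = Fraction(0), Fraction(1)
    for bit in nu:
        length /= 2
        if bit == "1":
            left += length
    return left, left + length

def norm_pow(nu):
    # ||chi_I||_p^p equals the length of I, independently of p
    a, b = interval(nu)
    return b - a
```

The endpoint arithmetic below checks summativity at the node "10": its two children tile it exactly.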
If $R(n) = 1$ for all $n \in \mathbb{N}$, then $(\mathbb{C}, R)$ is the standard presentation of $\mathbb{C}$ as a Banach space over itself. The standard presentations of $\ell^p$ and $\ell^p_n$ are given by the standard bases for these spaces. The standard presentations of $\ell^p \oplus L^p[0,1]$ and $\ell^p_n \oplus L^p[0,1]$ are defined in the obvious way. Fix a presentation $\mathcal{B}^\# = (\mathcal{B}, R)$ of a Banach space $\mathcal{B}$. By a \emph{rational vector} of $\mathcal{B}^\#$ we mean a vector of the form $\sum_{j \leq M} \alpha_j R(j)$ where $\alpha_0, \ldots, \alpha_M \in \mathbb{Q}(i)$. We say $\mathcal{B}^\#$ is \emph{computable} if the norm function is computable on the rational vectors of $\mathcal{B}^\#$. That is, if there is an algorithm that given $\alpha_0, \ldots, \alpha_M \in \mathbb{Q}(i)$ and $k \in \mathbb{N}$ produces a rational number $q$ so that $|\norm{\sum_{j \leq M} \alpha_j R(j)} - q| < 2^{-k}$. The standard presentations just described are all easily seen to be computable. A Banach space is \emph{computably presentable} if it has a computable presentation. With a presentation $\mathcal{B}^\#$ of a Banach space, there are associated classes of computable vectors and sequences. With a pair $(\mathcal{B}_0^\#, \mathcal{B}_1^\#)$ of presentations of Banach spaces, there is an associated class of computable functions from $\mathcal{B}_0^\#$ into $\mathcal{B}_1^\#$. Definitions of concepts such as these have become fairly well-known; we refer the reader who is unfamiliar with them to Section 2.2.2 of \cite{Clanin.McNicholl.Stull.2019}. We will make frequent use of the following result from \cite{Clanin.McNicholl.Stull.2019}. \begin{theorem}\label{thm:disint.comp} Suppose $p \geq 1$ is a computable real so that $p \neq 2$. Then, every computable presentation of a nonzero $L^p$ space has a computable disintegration. \end{theorem} The following is essentially proven in \cite{McNicholl.2017}.
\begin{theorem}\label{thm:anm.chains} Suppose $p \geq 1$ is a computable real, and suppose $\mathcal{B}^\#$ is a computable presentation of an $L^p$ space. If $\phi$ is a computable disintegration of $\mathcal{B}^\#$, then there is a partition $\{C_n\}_{n < \kappa}$ (where $\kappa \leq \omega$) of $\operatorname{dom}(\phi)$ into uniformly c.e. almost norm-maximizing chains. \end{theorem} Degrees of categoricity for countable structures were introduced in \cite{Fokina.Kalimullin.Miller.2010}. Since then, the study of these degrees has given rise to a number of surprising results and very challenging questions; see, for example, \cite{Csima.Franklin.Shore.2013} and \cite{Anderson.Csima.2016}. Any notion of classical computable structure theory can be adapted to the setting of Banach spaces by replacing `isomorphism' with `isometric isomorphism'. Thus, we arrive at the previously given definition of the degree of categoricity for a computably presentable Banach space. \section{Preliminaries}\label{sec:prelim} \subsection{Preliminaries from functional analysis}\label{sec:prelim::subsec:FA} Here we establish several preliminary lemmas and theorems from classical functional analysis that will be used later to prove Theorem \ref{thm:main}. We first establish the results needed to locate the $\preceq$-atoms of $L^p(\Omega)$ via the use of almost norm-maximizing chains. We then conclude this section with results regarding disintegrations on complemented subspaces of $L^p(\Omega)$. The proof of the following is essentially the same as the proof of Proposition 4.1 of \cite{McNicholl.2017}. \begin{proposition}\label{prop:subvectorLimitsExist} If $g_0 \preceq g_1 \preceq \cdots$ are vectors in $L^p(\Omega)$, then $\lim_n g_n$ exists in the $L^p$-norm and is the $\preceq$-infimum of $\{g_0, g_1, \ldots\}$. \end{proposition} The following generalizes Theorem 3.4 of \cite{McNicholl.2017}.
\begin{theorem}\label{thm:limitsAreAtoms} Suppose $\Omega$ is a measure space and $\phi: S \rightarrow L^p(\Omega)$ is a disintegration.\\ \begin{enumerate} \item If $C \subseteq S$ is an almost norm-maximizing chain, then the $\preceq$-infimum of $\phi[C]$ exists and is either $\mathbf{0}$ or an atom of $\preceq$. Furthermore, $\inf\phi[C]$ is the limit in the $L^p$ norm of $\phi(\nu)$ as $\nu$ traverses the nodes in $C$ in increasing order.\label{thm:limitsAreAtoms::itm:inf} \item If $\{C_n\}_{n=0}^\infty$ is a partition of $S$ into almost norm-maximizing chains, then $\inf\phi[C_0], \inf\phi[C_1], \ldots$ are disjointly supported. Furthermore, if $A$ is an atom of $\Omega$, then there exists a unique $n$ so that $A$ is the support of $\inf \phi[C_n]$. \label{thm:limitsAreAtoms::itm:unique} \end{enumerate} \end{theorem} \begin{proof} (\ref{thm:limitsAreAtoms::itm:inf}): Suppose $C \subseteq S$ is an almost norm-maximizing chain. By Proposition \ref{prop:subvectorLimitsExist}, $g:=\inf \phi[C]$ exists and is the limit in the $L^p$-norm of $\phi(\nu)$ as $\nu$ traverses the nodes in $C$ in increasing order. We claim that $g$ is an atom if it is nonzero. For, suppose $h \preceq g$. Let $\delta = \min\{\norm{g - h}_p^p, \norm{h}_p^p\}$, and let $\epsilon > 0$. Since the range of $\phi$ is linearly dense, there is a finite $S_1 \subseteq S$ and a family of scalars $\{\alpha_\nu\}_{\nu \in S_1}$ so that \[ \norm{\sum_{\nu \in S_1} \alpha_\nu \phi(\nu) - h}_p < \frac{\epsilon}{2}. \] Let $f = \sum_{\nu \in S_1} \alpha_\nu \phi(\nu)$. Then, \begin{eqnarray*} \norm{f - h}_p^p & \geq & \norm{(f - h) \cdot \chi_{\operatorname{supp}(g)}}_p^p \\ & = & \norm{f \cdot \chi_{\operatorname{supp}(g)} - h}_p^p. \end{eqnarray*} Let $S_1^0 = \{\nu \in S_1\ :\ g \preceq \phi(\nu)\}$, and let $\beta = \sum_{\nu \in S_1^0} \alpha_\nu$. Then, since $\phi$ is separating, $f \cdot \chi_{\operatorname{supp}(g)} = \beta g$.
However, since $g - h$ and $h$ are disjointly supported, \begin{eqnarray*} \norm{\beta g - h}_p^p & = & |\beta|^p\norm{g - h}_p^p + |\beta - 1|^p \norm{h}_p^p \\ & \geq & (|\beta|^p + |\beta -1|^p) \delta \\ & \geq & (|\beta|^p + ||\beta| - 1|^p) \delta\\ & \geq & \max\{|\beta|^p, ||\beta| - 1|^p\} \delta\\ & \geq & 2^{-p} \delta. \end{eqnarray*} Thus, $\delta < \epsilon^p$ for every $\epsilon > 0$. Therefore, $\delta = 0$ and so either $g = h$ or $h = \mathbf{0}$. Thus, $g$ is an atom. \\ (\ref{thm:limitsAreAtoms::itm:unique}): Suppose $C_0,C_1,\ldots$ is a partition of $S$ into almost norm-maximizing chains. By the above, $\inf \phi[C_k]$ exists for each $k$, and so we set $h_k := \inf \phi[C_k]$. We first claim that $h_0, h_1, \ldots$ are disjointly supported vectors. Suppose $k \neq k'$; it suffices to prove that there are incomparable nodes $\nu_0,\nu_1$ such that $\nu_0 \in C_k$ and $\nu_1 \in C_{k'}$. We do this in two cases. First, suppose there exist $\nu \in C_k, \nu' \in C_{k'}$ such that $|\nu| = |\nu'|$. Since the chains $C_0,C_1,\ldots$ partition $S$, $\nu \neq \nu'$. Thus, $\nu_0:=\nu$ and $\nu_1:=\nu'$ are incomparable. Now suppose $|\nu| \neq |\nu'|$ whenever $\nu \in C_k$ and $\nu' \in C_{k'}$. Let $\nu$ be the $\subseteq$-minimal node in $C_k$ and let $\nu'$ be the $\subseteq$-minimal node in $C_{k'}$. Without loss of generality, assume $|\nu|<|\nu'|$. Then, since no node of $C_k$ has length $|\nu'|$, $C_k$ must contain a terminal node $\tau_k$ of $S$ with $|\tau_k|<|\nu'|$. Let $\mu\in S$ be the ancestor of $\nu'$ such that $|\mu|=|\tau_k|$. Note that $\mu \notin C_{k'}$ since $|\mu| < |\nu'|$. Furthermore, $\mu \notin C_k$ either, for $\tau_k$ is terminal in $S$. Therefore, since $|\mu|=|\tau_k|$, $\mu$ and $\tau_k$ are incomparable. From this it follows that $\nu_0:=\tau_k$ and $\nu_1:=\nu'$ are incomparable. Now let $A$ be an atom of $\Omega$. If there is a $\preceq$-atom $g$ in $\operatorname{ran}(\phi)$ whose support includes $A$ then there is nothing to show.
So suppose that there is no atom in $\operatorname{ran}(\phi)$ whose support includes $A$. We claim that for each $n \in \mathbb{N}$ there is a $\nu \in S$ so that $|\nu|=n$ and $A \subseteq \operatorname{supp}(\phi(\nu))$. For, suppose otherwise. Since $\phi$ is summative and separating and $A$ is an atom, it follows that $A \not \subseteq \operatorname{supp}(\phi(\nu))$ for all $\nu \in S$. Let $\mu$ denote the measure of $\Omega$. Then, for any $g \in \operatorname{ran}(\phi)$, $\mu(A \cap \operatorname{supp}(g)) = 0$. Thus $\mu(A) \leq \norm{f - \chi_A}_p^p$ whenever $f$ belongs to the linear span of $\operatorname{ran}(\phi)$, a contradiction since the range of $\phi$ is linearly dense. Now let $\nu_s$ denote the node of length $s$ so that $A\subseteq \operatorname{supp}(\phi(\nu_s))$. Let $f = \phi(\emptyset) \cdot \chi_A$. Then, $f \preceq \phi(\nu_s)$ for all $s$. For each $s$, let $k_s$ denote the $k$ so that $\nu_s\in C_k$. We claim that $\lim_s k_s$ exists. To see this, suppose otherwise. Then we may let $s_0<s_1<\ldots$ be the increasing enumeration of all values of $s$ so that $k_s\neq k_{s+1}$. Since for all $m$, $\nu_{s_m+1} \supset \nu_{s_m}$, $\nu_{s_m}$ is a nonterminal node in $S$. Thus, since $C_{k_{s_m}}$ is almost norm-maximizing, it must contain an almost norm-maximizing child of $\nu_{s_m}$; denote this child by $\mu_m$. Then, $\phi(\mu_m)\preceq \phi(\nu_{s_m})$ and $\phi(\mu_m)$ and $\phi(\nu_{s_m+1})$ are disjointly supported. Also, since $\mu_m$ is an almost norm-maximizing child of $\nu_{s_m}$, $\norm{\phi(\nu_{s_m+1})}^p_p \leq \norm{\phi(\mu_m)}_p^p+2^{-s_m}$. Since $\phi(\mu_{m+r})\preceq \phi(\nu_{s_{m+r}})\preceq \phi(\nu_{s_m+1})$, $\phi(\mu_m)$ and $\phi(\mu_{m+r})$ are disjointly supported if $r>0$.
Thus by the above inequality and the summativity of $\phi$ we have \begin{align*} \sum_m\norm{\phi(\nu_{s_m+1})}_p^p &\leq \sum_m \norm{\phi(\mu_m)}_p^p + \sum_{m}2^{-s_m} \\ &=\norm{\sum_m \phi(\mu_m)}_p^p + \sum_{m}2^{-s_m}\\ &\leq \norm{\phi(\emptyset)}_p^p+ \sum_{m}2^{-s_m}\\ &<\infty. \end{align*} But since $f \preceq \phi(\nu_{s_m+1})$ for all $m$, $\norm{\phi(\nu_{s_m + 1})}_p^p \geq \norm{f}_p^p > 0$ for all $m$, a contradiction. Therefore, $k:=\lim_s k_s$ exists. Since the chains partition $S$, $C_k$ is the only chain so that $A \subseteq \operatorname{supp}(\phi(\nu))$ for all $\nu \in C_k$. It follows immediately from part (\ref{thm:limitsAreAtoms::itm:inf}) that $A$ is the support of $\inf \phi[C_k]$. The result now follows. \end{proof} We say that subspaces $\mathcal{M}$, $\mathcal{N}$ of $L^p(\Omega)$ are \emph{disjointly supported} if $f$, $g$ are disjointly supported whenever $f \in \mathcal{M}$ and $g \in \mathcal{N}$. \begin{lemma}\label{lm:proj.disint} Suppose $\phi$ is a disintegration of $L^p(\Omega)$ and that $\mathcal{M}$ is a complemented subspace of $L^p(\Omega)$. Suppose also that $\mathcal{M}$ and its complement are disjointly supported. Then, $P_{\mathcal{M}} \phi$ is summative and separating, and its range is linearly dense in $\mathcal{M}$. \end{lemma} \begin{proof} Let $\mathcal{N}$ denote the complement of $\mathcal{M}$, let $P = P_{\mathcal{M}}$, and let $\psi = P\phi$. Since $P$ is linear, it follows that $\psi$ is summative. Since $\mathcal{M}$ and $\mathcal{N}$ are disjointly supported, it also follows that $\psi(\nu)$ is a subvector of $\phi(\nu)$ for each $\nu \in \operatorname{dom}(\phi)$. We can then infer that $\psi$ is separating. We now show that the range of $P\phi$ is linearly dense in $\mathcal{M}$. Let $\epsilon>0$.
By the linear density of $\operatorname{ran}(\phi)$ and the disjointness of support of $\mathcal{M}$ and $\mathcal{N}$, for any $f\in\mathcal{M}$ there is a collection of scalars $\{\alpha_\nu\}_{\nu\in S}$, all but finitely many of which are zero, such that \begin{align*} \epsilon^p&>\norm{f-\sum_{\nu\in S}\alpha_\nu \phi(\nu)}_p^p\\ &=\norm{f-\sum_{\nu\in S}\alpha_\nu P(\phi(\nu))-\sum_{\nu\in S}\alpha_\nu P_{\mathcal{N}}(\phi(\nu))}_p^p\\ &=\norm{f-\sum_{\nu\in S}\alpha_\nu P(\phi(\nu))}^p_p+\norm{\sum_{\nu\in S}\alpha_\nu P_{\mathcal{N}}(\phi(\nu))}_p^p\\ &\geq \norm{f-\sum_{\nu\in S}\alpha_\nu P(\phi(\nu))}^p_p. \end{align*} Thus we have that the range of $P \phi$ is linearly dense in $\mathcal{M}$. \end{proof} \begin{lemma}\label{lm:proj.disint.Lp} Suppose $\phi$ is a disintegration of $\mathcal{M} \oplus L^p[0,1]$ where $\mathcal{M}$ is either $\ell^p$ or $\ell^p_n$. Suppose $\{C_n\}_{n \in \mathbb{N}}$ is a partition of $\operatorname{dom}(\phi)$ into almost norm-maximizing chains and that $g_n = \inf \phi[C_n]$ for all $n$. Then, for each $\nu \in \operatorname{dom}(\phi)$, \[ P_{\{\mathbf{0}\} \oplus L^p[0,1]} \phi(\nu) = \phi(\nu) - \sum_{g_n \preceq \phi(\nu)} g_n. \] \end{lemma} \begin{proof} Let $\mathcal{B} = \mathcal{M} \oplus L^p[0,1]$, and let $P = P_{\{\mathbf{0}\} \oplus L^p[0,1]}$. For each $f \in \mathcal{B}$, let $\mathcal{A}_f$ denote the set of all atoms $g$ of $\mathcal{B}$ so that $g \preceq f$. Thus, $P(f) = f - \sum_{g \in \mathcal{A}_f} g$. Suppose $g \in \mathcal{A}_{\phi(\nu)}$. Then, $\operatorname{supp}(g)$ is an atom. So, by Theorem \ref{thm:limitsAreAtoms}, $\operatorname{supp}(g) = \operatorname{supp}(g_n)$ for some $n$. We claim that $\nu \in C_n\downarrow$. For, suppose $\nu \not \in C_n\downarrow$. Let $\nu'$ be the largest node in $C_n\downarrow$ so that $\nu' \subseteq \nu$. Thus, $\nu' \neq \nu$, and so $\nu'$ has a child in $\operatorname{dom}(\phi)$. Therefore, $\nu'$ has a child $\nu''$ in $C_n$ since $C_n$ is almost norm-maximizing. Thus, $\nu''$ and $\nu$ are incomparable.
It follows that $g_n$ and $g$ are disjointly supported, a contradiction. Since $\nu \in C_n\downarrow$, it follows that $g,g_n \preceq \phi(\nu)$. Thus, since $\operatorname{supp}(g) = \operatorname{supp}(g_n)$, $g = g_n$. \end{proof} \subsection{Preliminaries from computable analysis}\label{sec:prelim::subsec:CA} This subsection essentially effectivizes the notions of the previous subsection and makes explicit the computable presentations we will employ in the proofs of our main theorem and its corollary. The following is from \cite{Pour-El.Richards.1989}. \begin{theorem}\label{thm:seq.comp.map} Suppose $\mathcal{B}_0^\#$ and $\mathcal{B}_1^\#$ are computable presentations of Banach spaces $\mathcal{B}_0$ and $\mathcal{B}_1$ respectively and that $T : \mathcal{B}_0^\# \rightarrow \mathcal{B}_1^\#$ is bounded and linear. Then, $T$ is computable if and only if $T$ maps a linearly dense computable sequence of $\mathcal{B}_0^\#$ to a computable sequence of $\mathcal{B}_1^\#$. \end{theorem} \begin{definition}\label{def:compu.comp} Suppose $\mathcal{B}^\#$ is a computable presentation of a Banach space $\mathcal{B}$, and suppose $\mathcal{M}$ is a complemented subspace of $\mathcal{B}$. We say $\mathcal{M}$ is a \emph{computably complemented subspace of $\mathcal{B}$} if $P_\mathcal{M}$ is a computable map of $\mathcal{B}^\#$ into $\mathcal{B}^\#$. \end{definition} We relativize this notion in the obvious way. \begin{proposition}\label{prop:proj.comp} Suppose $\mathcal{B}_j^\#$ is a computable presentation of a Banach space $\mathcal{B}_j$ for each $j \in \{0,1\}$, and suppose $T$ is an $X$-computable isometric isomorphism of $\mathcal{B}_0^\#$ onto $\mathcal{B}_1^\#$. If $\mathcal{M}$ is a computably complemented subspace of $\mathcal{B}_0^\#$, then $T[\mathcal{M}]$ is an $X$-computably complemented subspace of $\mathcal{B}_1^\#$. \end{proposition} \begin{proof} This is clear from the fact that $P_{T[\mathcal{M}]} = TP_{\mathcal{M}}T^{-1}$.
\end{proof} \begin{lemma}\label{lm:disintsArePresentations} Let $p \geq 1$ be computable. Suppose $S$ is a tree, and suppose $\phi : S \rightarrow L^p(\Omega)$ is summative and separating. Suppose also that $\operatorname{ran}(\phi)$ is linearly dense and that $\nu \mapsto \norm{\phi(\nu)}_p$ is computable. Let $R = \phi h$ where $h$ is a computable surjection of $\mathbb{N}$ onto $S$. Then, $(L^p(\Omega), R)$ is a computable presentation of $L^p(\Omega)$. \end{lemma} \begin{proof} Since $\operatorname{ran}(\phi)$ is linearly dense, it follows that $R$ is a structure on $L^p(\Omega)$ and that $L^p(\Omega)^\# := (L^p(\Omega),R)$ is a presentation of $L^p(\Omega)$. Now we must demonstrate that this presentation is computable. That is, we must show that the norm function is computable on the rational vectors of $L^p(\Omega)^\#$. So, suppose $\alpha_0, \ldots, \alpha_M \in \mathbb{Q}(i)$ are given, and let $f = \sum_{j \leq M} \alpha_j R(j)$. Compute a finite tree $F \subseteq S$ so that $h(j) \in F$ for each $j \leq M$. For each $\nu \in F$, let $\alpha_\nu = \sum_{h(j) = \nu} \alpha_j$. Thus, $\sum_{j \leq M} \alpha_j R(j) = \sum_{\nu \in F} \alpha_\nu \phi(\nu)$. Let $\beta_0, \ldots, \beta_k$ denote the leaf nodes of $F$. Thus, $\operatorname{supp}(f) \subseteq \bigcup_j \operatorname{supp}(\phi(\beta_j))$. Therefore, \begin{eqnarray*} \norm{f}_p^p & = & \sum_j \norm{f \cdot \chi_{\operatorname{supp}(\phi(\beta_j))}}_p^p\\ & = & \sum_j \norm{ \left(\sum_{\nu \subseteq \beta_j} \alpha_\nu\right) \phi(\beta_j)}_p^p\\ & = & \sum_j \left| \sum_{\nu \subseteq \beta_j} \alpha_\nu \right|^p \norm{\phi(\beta_j)}_p^p. \end{eqnarray*} Since $\nu \mapsto \norm{\phi(\nu)}_p$ is computable, it follows that $\norm{f}_p$ can be computed from $\alpha_0, \ldots, \alpha_M$. \end{proof} \section{Complexity of projection maps}\label{sec:projection} Here we establish the complexity of projection maps on the spaces $\ell^p_n\oplus L^p[0,1]$ and $\ell^p\oplus L^p[0,1]$ respectively.
The main theorem of this section is the core of our argument yielding the upper bounds for each of the aforementioned spaces. \begin{theorem}\label{thm:proj.Lp.comp} Let $p \geq 1$ be a computable real other than $2$. Suppose $\mathcal{M}$ is either $\ell^p_n$ or $\ell^p$, and suppose $(\mathcal{M} \oplus L^p[0,1])^\#$ is a computable presentation of $\mathcal{M} \oplus L^p[0,1]$. \begin{enumerate} \item If $\mathcal{M} = \ell^p_n$, then $P_{\{\mathbf{0}\} \oplus L^p[0,1]}$ is a $\emptyset'$-computable map of $(\mathcal{M} \oplus L^p[0,1])^\#$ into $(\mathcal{M} \oplus L^p[0,1])^\#$. \item If $\mathcal{M} = \ell^p$, then $P_{\{\mathbf{0}\}\oplus L^p[0,1]}$ is a $\emptyset''$-computable map of $(\mathcal{M} \oplus L^p[0,1])^\#$ into $(\mathcal{M} \oplus L^p[0,1])^\#$. \end{enumerate} \end{theorem} \begin{proof} Let $\mathcal{B} = \mathcal{M} \oplus L^p[0,1]$, and let $\phi$ be a computable disintegration of $\mathcal{B}^\#$ (one exists by Theorem \ref{thm:disint.comp}). Set $S = \operatorname{dom}(\phi)$. Abbreviate $P_{\{\mathbf{0}\} \oplus L^p[0,1]}$ by $P$. Let $\mathcal{B}^\# = (\mathcal{B}, R)$. Fix a computable surjection $h$ of $\mathbb{N}$ onto $S$, and set $R'(j) = \phi(h(j))$. Let $\mathcal{B}^+ = (\mathcal{B}, R')$. By Lemma \ref{lm:disintsArePresentations}, $\mathcal{B}^+$ is a computable presentation of $\mathcal{B}$. Furthermore, since $R'$ is a computable sequence of $\mathcal{B}^\#$, it follows from Theorem \ref{thm:seq.comp.map} that $\mathcal{B}^\#$ is computably isometrically isomorphic to $\mathcal{B}^+$ (namely, by the identity map). By Theorem \ref{thm:anm.chains}, there is a partition $\{C_j\}_{j \in \mathbb{N}}$ of $S$ into uniformly c.e. almost norm-maximizing chains (the index set is all of $\mathbb{N}$ since $S$ is infinite). Let $g_j = \inf \phi[C_j]$. By Lemma \ref{lm:proj.disint.Lp}, \[ P(\phi(\nu)) = \phi(\nu) - \sum_{g_j \preceq \phi(\nu)} g_j. \] Let $U_\nu = \{j \in \mathbb{N}\ :\ \nu \in C_j\downarrow\}$. We first claim that \[ \sum_{g_j \preceq \phi(\nu)} g_j = \sum_{j \in U_\nu} g_j. \] For, if $j \in U_\nu$, then $g_j \preceq \phi(\nu)$.
Suppose $j \not \in U_\nu$ and $g_j \preceq \phi(\nu)$. Let $\mu_0$ be the maximal element of $C_j\downarrow$ so that $\mu_0 \subseteq \nu$. Thus, $\mu_0 \subset \nu$, and so $\mu_0$ has a child in $S$. Therefore, $\mu_0$ has a child $\mu'$ in $C_j$ since $C_j$ is almost norm-maximizing, and $\mu'$ and $\nu$ are incomparable. Therefore $g_j \preceq \phi(\mu')$ and $g_j \preceq \phi(\nu)$. Since $\phi$ is separating, it follows that $g_j = \mathbf{0}$. Now, suppose $\mathcal{M} = \ell^p_n$. We obtain from Theorem \ref{thm:limitsAreAtoms} that there are exactly $n$ values of $j$ so that $g_j$ is nonzero. So, let $D = \{j\ :\ g_j \neq \mathbf{0}\}$. Then, $\phi(\nu) - P(\phi(\nu)) = \sum_{j \in U_\nu \cap D} g_j$. It then follows from Theorem \ref{thm:limitsAreAtoms} that $\{g_j\}_{j \in \mathbb{N}}$ is a $\emptyset'$-computable sequence of $\mathcal{B}^\#$. Thus, $\{P(R'(j))\}_{j \in \mathbb{N}}$ is a $\emptyset'$-computable sequence of $\mathcal{B}^+$. Therefore, by the relativization of Theorem \ref{thm:seq.comp.map}, $P$ is a $\emptyset'$-computable map of $\mathcal{B}^+$ into $\mathcal{B}^+$. But, since $\mathcal{B}^\#$ is computably isometrically isomorphic to $\mathcal{B}^+$, $P$ is also a $\emptyset'$-computable map of $\mathcal{B}^\#$ into $\mathcal{B}^\#$. Now, suppose $\mathcal{M} = \ell^p$. For each $\nu \in S$, let $h_\nu = \phi(\nu) - P(\phi(\nu))$. Since $\{g_j\}_{j \in U_\nu}$ is a summable sequence of disjointly supported vectors, $\sum_{j \in U_\nu} \norm{g_j}_p^p < \infty$. Moreover, since $\{g_j\}_{j \in U_\nu}$ is a $\emptyset'$-computable sequence of $\mathcal{B}^\#$, it follows that $\sum_{j \in U_\nu} \norm{g_j}_p^p$ is $\emptyset''$-computable uniformly in $\nu$. Observe that for each $N \in \mathbb{N}$, \begin{eqnarray*} \norm{\sum_{j \in U_\nu \cap [0,N]} g_j - h_\nu}_p^p & = & \norm{\sum_{j \in U_\nu \cap [N+1, \infty)} g_j}_p^p\\ & = & \sum_{j \in U_\nu \cap [N+1, \infty)} \norm{g_j}_p^p. \end{eqnarray*} From this we obtain that $h_\nu$ is a $\emptyset''$-computable vector of $\mathcal{B}^\#$ uniformly in $\nu$.
It then follows that $\{P(R'(j))\}_{j \in \mathbb{N}}$ is a $\emptyset''$-computable sequence of $\mathcal{B}^+$ and so $P$ is a $\emptyset''$-computable map of $\mathcal{B}^+$ into $\mathcal{B}^+$. As in the previous case, it follows that $P$ is also a $\emptyset''$-computable map of $\mathcal{B}^\#$ into $\mathcal{B}^\#$. \end{proof} The sharpness of the bounds in Theorem \ref{thm:proj.Lp.comp} will be demonstrated in Section \ref{sec:lower}. \section{Upper bound results}\label{sec:upper} Here we will use the complexity of projection maps described in the previous section to produce the upper bounds for the degree of categoricity of $\ell^p_n\oplus L^p[0,1]$ and $\ell^p\oplus L^p[0,1]$ respectively. \begin{theorem}\label{thm:upper} Suppose $p \geq 1$ is a computable real so that $p \neq 2$. Then, $\ell^p_n \oplus L^p[0,1]$ is $\emptyset'$-categorical, and $\ell^p \oplus L^p[0,1]$ is $\emptyset''$-categorical. \end{theorem} \begin{proof} Suppose $\mathcal{A}$ is either $\ell^p_n$ or $\ell^p$, and let $\mathcal{B} = \mathcal{A} \oplus L^p[0,1]$. Let $\mathcal{B}^\#$ be a computable presentation of $\mathcal{B}$, and let $\phi$ be a computable disintegration of $\mathcal{B}^\#$ (one exists by Theorem \ref{thm:disint.comp}). Let $S = \operatorname{dom}(\phi)$. Since $\mathcal{B}$ is infinite-dimensional, $S$ is infinite. Fix a computable surjection $h$ of $\mathbb{N}$ onto $S$. Let $\mathcal{M} = \mathcal{A} \oplus \{\mathbf{0}\}$, and let $\mathcal{N} = \{\mathbf{0}\} \oplus L^p[0,1]$. In addition, let $P = P_{\mathcal{N}}$. We first claim that there is a $\emptyset'$-computable map $T_1 : \mathcal{A} \rightarrow \mathcal{B}^\#$ so that $\operatorname{ran}(T_1) = \mathcal{M}$. For, suppose $\mathcal{A} = \ell^p_n$. Then, by Theorem \ref{thm:proj.Lp.comp}, $P$ is a $\emptyset'$-computable map of $\mathcal{B}^\#$ into $\mathcal{B}^\#$. Let $\mathcal{M}^\# = (\mathcal{M}, (I-P)\phi h)$. By the relativization of Lemma \ref{lm:disintsArePresentations}, $\mathcal{M}^\#$ is a $\emptyset'$-computable presentation of $\mathcal{M}$. In Section 6 of \cite{McNicholl.2017}, it is shown that $\ell^p_n$ is computably categorical.
So, by relativizing this result, there is a $\emptyset'$-computable isometric isomorphism $T_1$ of $\ell^p_n$ onto $\mathcal{M}^\#$. Since $(I - P)\phi h$ is a $\emptyset'$-computable sequence of $\mathcal{B}^\#$, by the relativization of Theorem \ref{thm:seq.comp.map}, $T_1$ is a $\emptyset'$-computable map of $\ell^p_n$ into $\mathcal{B}^\#$. Now, suppose $\mathcal{A} = \ell^p$. By Theorem \ref{thm:anm.chains}, there is a partition $\{C_n\}_{n < \kappa}$ of $S$ into uniformly c.e. almost norm-maximizing chains; since $S$ is infinite, it follows that $\kappa = \omega$. Let $g_n = \inf \phi[C_n]$. Then, there is a $\emptyset'$-computable one-to-one enumeration $\{n_k\}_{k = 0}^\infty$ of all $n$ so that $g_n$ is nonzero. By Theorem \ref{thm:limitsAreAtoms}, for each $j \in \mathbb{N}$, there is a unique $k$ so that $\{j\} = \operatorname{supp}(g_{n_k})$. Let $T_1$ be the unique linear map of $\ell^p$ into $\mathcal{M}$ so that $T_1(e_k) = \norm{g_{n_k}}_p^{-1} g_{n_k}$ for all $k$. Since the $g_{n_k}$'s are disjointly supported, it follows that $T_1$ is isometric. It follows from the relativization of Theorem \ref{thm:seq.comp.map} that $T_1$ is a $\emptyset'$-computable map of $\ell^p$ into $\mathcal{B}^\#$. We now claim that if $\mathcal{A} = \ell^p_n$, then there is a $\emptyset'$-computable map $T_2$ of $L^p[0,1]$ into $\mathcal{B}^\#$ so that $\operatorname{ran}(T_2) = \mathcal{N}$. For, in this case, by Theorem \ref{thm:proj.Lp.comp}, $P$ is a $\emptyset'$-computable map of $\mathcal{B}^\#$ into $\mathcal{B}^\#$. Thus, by Lemma \ref{lm:proj.disint} and the relativization of Lemma \ref{lm:disintsArePresentations}, $\mathcal{N}^\# = (\mathcal{N}, P \phi h)$ is a $\emptyset'$-computable presentation of $\mathcal{N}$. So by the relativization of Theorem 1.1 of \cite{Clanin.McNicholl.Stull.2019}, there is a $\emptyset'$-computable isometric isomorphism $T_2$ of $L^p[0,1]$ onto $\mathcal{N}^\#$.
Since $P \phi h$ is a $\emptyset'$-computable sequence of $\mathcal{B}^\#$, $T_2$ is a $\emptyset'$-computable map of $L^p[0,1]$ into $\mathcal{B}^\#$ by the relativization of Theorem \ref{thm:seq.comp.map}. It similarly follows that when $\mathcal{A} = \ell^p$, there is a $\emptyset''$-computable map $T_2$ of $L^p[0,1]$ into $\mathcal{B}^\#$ so that $\operatorname{ran}(T_2) = \mathcal{N}$. We now form a map $T$ by gluing the maps $T_1$ and $T_2$ together. Namely, when $v \in \mathcal{A}$ and $f \in L^p[0,1]$, let $T(v,f) = T_1(v) + T_2(f)$. Thus, $T$ is an isometric automorphism of $\mathcal{B}$. If $\mathcal{A} = \ell^p_n$, then $T$ is a $\emptyset'$-computable map of the standard presentation of $\mathcal{B}$ onto $\mathcal{B}^\#$; otherwise it is a $\emptyset''$-computable map between these presentations. \end{proof} \section{Lower bound results}\label{sec:lower} In each of the following subsections we construct ill-behaved computable presentations of $\ell^p_n\oplus L^p[0,1]$ and $\ell^p\oplus L^p[0,1]$ respectively. We then show that any oracle that computes a linear isometric isomorphism between each constructed presentation and its standard copy must also compute $\mathbf{d}$, where $\mathbf{d}$ is an arbitrary c.e. degree in the $\ell^p_n\oplus L^p[0,1]$ case and the degree of $\emptyset''$ in the $\ell^p\oplus L^p[0,1]$ case. \subsection{The finitely atomic case}\label{sec:lower::subsec:lpn} We complete our proof of Theorem \ref{thm:main}.\ref{thm:main::itm:finite} by establishing the following. \begin{theorem}\label{thm:lpn.lower} Suppose $p \geq 1$ is computable and $p \neq 2$. Let $\mathbf{d}$ be a c.e. degree. Then, there is a computable presentation $(\ell^p_n \oplus L^p[0,1])^\#$ of $\ell^p_n \oplus L^p[0,1]$ so that every degree that computes an isometric isomorphism of $\ell^p_n \oplus L^p[0,1]$ onto $(\ell^p_n \oplus L^p[0,1])^\#$ also computes $\mathbf{d}$. \end{theorem} Let $\mathcal{B} = \ell^p_n \oplus L^p[0,1]$.
We construct $\mathcal{B}^\#$ as follows. We first construct a disintegration $\phi$ of $\mathcal{B}$. Let $\gamma\in(0,1)$ be a left-c.e. real so that the left Dedekind cut of $\gamma$ has Turing degree $\mathbf{d}$. Let $\{q_j\}$ be a computable and increasing sequence of positive rational numbers so that $\lim_j q_j = \gamma$. Let $c = 1 - \gamma + q_0$. Define \begin{eqnarray*} a((1)) & = & 1 - c\\ b((1)) & = & 1\\ a({(0)^{j+1}} ^\frown(1)) & = & \gamma - q_{j+1}\\ b({(0)^{j+1}}^\frown(1)) & = & \gamma - q_j. \end{eqnarray*} Assuming $a(\nu)$ and $b(\nu)$ have been defined, set $a(\nu^\frown(0)) = a(\nu)$, $a(\nu^\frown(1)) = b(\nu^\frown(0)) = \frac{1}{2} (a(\nu) + b(\nu))$, and $b(\nu^\frown(1)) = b(\nu)$. Now, let: \begin{eqnarray*} \phi(\emptyset) & = & ((1 - \gamma)^{1/p} e_0 + e_1 + \ldots + e_{n-1}, \chi_{[0, 1 - c]} + c^{-1/p}\chi_{[1-c,1]})\\ \phi((0)^{j+1}) & = & ((1 - \gamma)^{1/p}e_0, \chi_{[0, \gamma - q_j]})\\ \phi((j)) & = & (e_{j-1}, \mathbf{0})\ \mbox{if $2 \leq j \leq n$}\\ \phi(\mu) & = & c^{-1/p}(\mathbf{0}, \chi_{[a(\mu), b(\mu)]})\ \mbox{if $(1) \subseteq \mu$}\\ \phi(\mu) & = & (\mathbf{0}, \chi_{[a(\mu), b(\mu)]})\ \mbox{if ${(0)^{j+1}}^\frown(1) \subseteq \mu$} \end{eqnarray*} \begin{lemma}\label{lm:phi.disint} $\phi$ is a disintegration of $\mathcal{B}$. \end{lemma} \begin{proof} By construction, $\phi$ is summative, separating, injective, and never zero. It only remains to show that $\operatorname{ran}(\phi)$ is linearly dense. By construction, $(e_j, \mathbf{0}) \in \operatorname{ran}(\phi)$ when $1 \leq j \leq n - 1$. So, it is enough to show that $(e_0,\mathbf{0}) \in \langle \operatorname{ran}(\phi) \rangle$ and that $(\mathbf{0},\chi_I) \in \langle \operatorname{ran}(\phi)\rangle$ for every closed interval $I \subseteq [0,1]$. Let $\epsilon >0$ be given. There is a $K\in \mathbb{N}$ so that $|\gamma-q_K|/|1 - \gamma| < \epsilon^p$.
Furthermore, \begin{align*} \norm{(e_0, \mathbf{0}) - \frac{1}{(1-\gamma)^{1/p}} \phi((0)^{K+1})}_p^p&=\norm{(\mathbf{0},-\frac{1}{(1-\gamma)^{1/p}}\chi_{[0,\gamma - q_K]})}_p^p\\ &=\frac{|\gamma-q_K|}{|1-\gamma|}\\ &<\epsilon^p. \end{align*} Therefore $(e_0,\mathbf{0}) \in \langle \operatorname{ran}(\phi)\rangle$. Let $\mathcal{M} = \{\mathbf{0}\} \oplus L^p[0,1]$, and let $E(f) = \operatorname{supp}(P_\mathcal{M}(f))$. By construction, for each $f \in \operatorname{ran}(\phi)$, $\chi_{E(f)}$ belongs to the linear span of $\operatorname{ran}(\phi)$. By induction, \[ \bigcup_{|\nu| = j} E(\phi(\nu)) = [0,1]. \] Since $\phi$ is separating, $\{E(\phi(\nu))\}_{|\nu| = j}$ is a partition of $[0,1]$. Let $L_\nu$ denote the length of $E(\phi(\nu))$. Then, by construction, $\lim_j \max_{|\nu| = j} L_\nu = 0$. It follows that if $I \subseteq [0,1]$ is a closed interval, then $\chi_I$ belongs to the closed linear span of $\operatorname{ran}(\phi)$. \end{proof} \begin{lemma}\label{lm:norm.phi.comp} $\nu \mapsto \norm{\phi(\nu)}_p$ is computable. \end{lemma} \begin{proof} We have: \begin{eqnarray*} \norm{\phi(\emptyset)}_p^p & = & n+1 - q_0\\ \norm{\phi((0)^{j+1})}_p^p& = & 1 - q_j \end{eqnarray*} If $2 \leq j \leq n$, then $\norm{\phi((j))}_p^p = 1$. If $(1) \subseteq \mu$, then $\norm{\phi(\mu)}_p^p = c^{-1} (b(\mu) - a(\mu)) = 2^{-|\mu| + 1}$. Moreover, if ${(0)^{j+1}}^\frown(1) \subseteq \mu$, then $\norm{\phi(\mu)}_p^p = b(\mu) - a(\mu) = 2^{-|\mu| + j + 2}(q_{j+1} - q_j)$. Thus, $\nu \mapsto \norm{\phi(\nu)}_p$ is computable. \end{proof} Let $h$ be a computable surjection of $\mathbb{N}$ onto the domain of $\phi$, and let $\mathcal{B}^\# = (\mathcal{B}, \phi h)$. Therefore, $\mathcal{B}^\#$ is a computable presentation of $\mathcal{B}$ by Lemma \ref{lm:disintsArePresentations}. \begin{lemma}\label{lm:proj} If the projection $P_{\langle e_0 \rangle \oplus \{\mathbf{0}\}}$ is an $X$-computable map of $\mathcal{B}^\#$ into $\mathcal{B}^\#$, then $X$ computes $\mathbf{d}$. \end{lemma} \begin{proof} Let $P = P_{\langle e_0 \rangle \oplus \{\mathbf{0}\}}$.
Suppose $P$ is an $X$-computable map of $\mathcal{B}^\#$ into $\mathcal{B}^\#$. Let $f = \phi((0))$. Thus, $f$ is a computable vector of $\mathcal{B}^\#$, and so $X$ computes $(1 - \gamma)^{1/p} = \norm{P(f)}_p$. Therefore, $X$ computes $\gamma$ and so $X$ computes $\mathbf{d}$. \end{proof} Suppose now that $X$ computes an isometric isomorphism $T$ of $\mathcal{B}$ onto $\mathcal{B}^\#$. Since $T$ preserves the subvector ordering, there is a $j_0$ so that $T((e_{j_0}, \mathbf{0}))$ is a nonzero scalar multiple of $(e_0, \mathbf{0})$. Let $\mathcal{M} = \langle (e_{j_0}, \mathbf{0}) \rangle$. Then, $T[\mathcal{M}] = \langle (e_0, \mathbf{0}) \rangle = \langle e_0 \rangle \oplus \{\mathbf{0}\}$. By Theorem \ref{thm:proj.Lp.comp}, $P_{T[\mathcal{M}]}$ is an $X$-computable map of $\mathcal{B}^\#$ into itself. Hence, $X$ computes $\mathbf{d}$ by Lemma \ref{lm:proj}. Note that we have also established the sharpness of the bound in Theorem \ref{thm:proj.Lp.comp}. \subsection{The infinitely atomic case}\label{sec:lower::subsec:lp} We complete our proof of Theorem \ref{thm:main} by proving the following. \begin{theorem}\label{thm:lpLp01.lower} Suppose $p \geq 1$ is computable and $p \neq 2$. There is a computable presentation $\mathcal{B}^\#$ of $\ell^p \oplus L^p[0,1]$ so that every oracle that computes an isometric isomorphism of $\ell^p \oplus L^p[0,1]$ onto $\mathcal{B}^\#$ also computes $\emptyset''$. \end{theorem} We construct $\mathcal{B}$ as follows. Let \[ m_e = \left\{\begin{array}{ll} \# W_e & \mbox{\ if $e \in \operatorname{Fin}$;}\\ \omega & \mbox{\ otherwise.}\\ \end{array} \right. \] For each $e \in \mathbb{N}$, let \[ \mathcal{B}_e = \left\{ \begin{array}{ll} \ell^p_{2^{m_e}} & \mbox{\ if $e \in \operatorname{Fin}$;}\\ L^p[0,1] & \mbox{\ otherwise.}\\ \end{array} \right.
\] Let $\mathcal{B}$ be the $L^p$ sum of $\{\mathcal{B}_e\}_{e \in \mathbb{N}}$. Let $\iota_e$ be the natural injection of $\mathcal{B}_e$ into $\mathcal{B}$. We now build a presentation of $\mathcal{B}$ via the construction of a disintegration $\phi$ of $\mathcal{B}$. Let \[ S = \omega^{\leq 1}\ \cup\ \{(e)^\frown\alpha\ :\ \alpha \in \{0,1\}^{<\omega},\ |\alpha| \leq m_e\}. \] Thus, $S$ is c.e. Let \[ g_e = \left\{\begin{array}{ll} 2^{-m_e/p} \sum_{j < 2^{m_e}} e_j & \mbox{\ if $e \in \operatorname{Fin}$;}\\ \chi_{[0,1]} & \mbox{\ otherwise.}\\ \end{array} \right. \] Let $f_e = \iota_e(g_e)$. For each $e$ we let $\phi((e)) = 2^{-(e+1)}f_e$. For each $\nu \in S - \{\emptyset\}$, we recursively define a set $I(\nu)$ as follows. For each $e \in \mathbb{N}$, let \[ I((e)) = \left\{ \begin{array}{ll} \{0, \ldots, 2^{m_e} - 1\} & \mbox{\ if $e \in \operatorname{Fin}$;}\\ \relax [0,1] & \mbox{\ otherwise.}\\ \end{array} \right. \] Suppose $\nu \in S$ and $I(\nu)$ has been defined. Let $a(\nu) = \min I(\nu)$, and let $b(\nu) = \max I(\nu)$. Let $e = \nu(0)$. If $e \not \in \operatorname{Fin}$, let: \begin{eqnarray*} I(\nu^\frown(0)) & = & [a(\nu), 2^{-1}(a(\nu) + b(\nu))]\\ I(\nu ^\frown(1)) & = & [2^{-1}(a(\nu) + b(\nu)), b(\nu)] \end{eqnarray*} If $e \in \operatorname{Fin}$, and if $|\nu| \leq m_e$, let: \begin{eqnarray*} I(\nu^\frown(0)) & = & \{a(\nu), \ldots, a(\nu) + \frac{1}{2}\#I(\nu) - 1\}\\ I(\nu^\frown(1)) & = & \{a(\nu) + \frac{1}{2}\#I(\nu), \ldots, b(\nu)\} \end{eqnarray*} When $\nu \in S$, let \[ \phi(\nu) = \left\{ \begin{array}{ll} \sum_e 2^{-(e+1)} f_e & \mbox{\ if $\nu = \emptyset$;}\\ 2^{-(\nu(0)+1)}f_{\nu(0)} \cdot \chi_{I(\nu)} & \mbox{\ otherwise.}\\ \end{array} \right. \] Let $h$ be a computable surjection of $\mathbb{N}$ onto $S$, and let $\mathcal{B}^\# = (\mathcal{B}, \phi h)$. We divide the verification of our construction into the following lemmas.
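Before the lemmas, a quick sanity check (a sketch of ours, not part of the construction; the function names are hypothetical): for a finite block $\mathcal{B}_e = \ell^p_{2^m}$, the recursive halving of the index sets $I(\nu)$ yields, at each depth, a partition of $\{0, \ldots, 2^m - 1\}$ into sets of equal size, which is what makes $\nu \mapsto \norm{\phi(\nu)}_{\mathcal{B}}$ computable below.

```python
# Sketch (ours, not part of the paper): the dyadic splitting of the index
# sets I(nu) inside one finite block B_e = l^p_{2^m}, with m = m_e finite.

def split(I):
    """Split a contiguous index set into its left and right halves."""
    half = len(I) // 2
    return I[:half], I[half:]

def level(m, depth):
    """The index sets I(nu) for all nu at the given depth below the root (e)."""
    sets = [list(range(2 ** m))]
    for _ in range(depth):
        sets = [part for I in sets for part in split(I)]
    return sets

m = 4
for d in range(m + 1):
    parts = level(m, d)
    # at each depth, the sets I(nu) partition {0, ..., 2^m - 1},
    # mirroring the summative and separating properties of phi
    assert sorted(x for I in parts for x in I) == list(range(2 ** m))
    # each split halves the number of indices, hence halves ||phi(nu)||^p
    assert all(len(I) == 2 ** (m - d) for I in parts)
```

At depth $m$ the parts are singletons, matching the atoms of $\ell^p_{2^m}$.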
Let $U = \sum_{e \in \operatorname{Fin}} \iota_e(\mathcal{B}_e)$, and let $V = \sum_{e \not \in \operatorname{Fin}} \iota_e(\mathcal{B}_e)$. \begin{lemma}\label{lm:isom.Lp} $\mathcal{B}$ is isometrically isomorphic to $\ell^p \oplus L^p[0,1]$. \end{lemma} \begin{proof} Note that $\mathcal{B} = U + V$. If $e \in \operatorname{Fin}$, then $\mathcal{B}_e$ is a finite-dimensional $L^p$ space. So, $U$ is isometrically isomorphic to $\ell^p$. If $e \not \in \operatorname{Fin}$, then $\mathcal{B}_e = L^p[0,1]$. So, $V$ is the $L^p$-sum of $L^p[0,1]$ with itself $\aleph_0$ times. However, this sum is isometrically isomorphic to $L^p(\Omega)$, where $\Omega$ is the product of the Lebesgue measure space $[0,1]$ with itself $\aleph_0$ many times. As discussed in the introduction, this implies that $V$ is isometrically isomorphic to $L^p[0,1]$. \end{proof} It follows from the construction that $\phi$ is a disintegration of $\mathcal{B}$. \begin{lemma}\label{cl:norm.phi.comp} $\nu \mapsto \norm{\phi(\nu)}_{\mathcal{B}}$ is computable. \end{lemma} \begin{proof} By construction, $\norm{\phi((e))}_{\mathcal{B}} = 2^{-(e+1)}$ for each $e$. Thus, since $\phi$ is summative, $\norm{\phi(\emptyset)}_{\mathcal{B}} = \big( \sum_e 2^{-(e+1)p} \big)^{1/p}$, which is computable. If $\nu'$ is a child of $\nu$ in $S$ and $\nu \neq \emptyset$, then by construction $\norm{\phi(\nu')}_{\mathcal{B}}^p = \frac{1}{2} \norm{\phi(\nu)}_{\mathcal{B}}^p$. It follows that $\nu \mapsto \norm{\phi(\nu)}_{\mathcal{B}}$ is computable. \end{proof} It now follows from Lemma \ref{lm:disintsArePresentations} that $\mathcal{B}^\#$ is a computable presentation of $\mathcal{B}$. \begin{lemma}\label{lm:decomp} If $T$ is an isometric isomorphism of $\ell^p \oplus L^p[0,1]$ onto $\mathcal{B}$, then $T[\ell^p \oplus \{\mathbf{0}\}] =U$ and $T[\{\mathbf{0}\} \oplus L^p[0,1]] = V$. \end{lemma} \begin{proof} Suppose $T$ is an isometric isomorphism of $\ell^p \oplus L^p[0,1]$ onto $\mathcal{B}$. Let $U' = T[\ell^p \oplus \{\mathbf{0}\}]$, and let $V' = T[\{\mathbf{0}\} \oplus L^p[0,1]]$.
Thus, $\mathcal{B}$ is the internal direct sum of $U'$ and $V'$. Suppose $j \in \mathbb{N}$, and let $T((e_j, \mathbf{0})) = f + g$ where $f \in U$ and $g \in V$. Since $T((e_j, \mathbf{0}))$ is an atom of $\mathcal{B}$, and since there are no atoms in $V$, it follows that $g = 0$ and so $T((e_j, \mathbf{0})) \in U$. We can then conclude that $U' \subseteq U$. Conversely, suppose $e \in \operatorname{Fin}$ and $h = \iota_e(e_j)$. Then, $T^{-1}(h)$ is an atom of $\ell^p \oplus L^p[0,1]$ and so $T^{-1}(h) \in \ell^p \oplus \{\mathbf{0}\}$. It follows that $\mathcal{B}_e \subseteq U'$ and so $U \subseteq U'$. Since $\mathcal{B}$ is the internal direct sum of $U$ and $V$, it now follows that $V = V'$. \end{proof} Let $P = P_V$. \begin{lemma}\label{lm:comp.P} If $P$ is an $X$-computable map from $\mathcal{B}^\#$ into $\mathcal{B}^\#$, then $X$ computes $\operatorname{Fin}$. \end{lemma} \begin{proof} Suppose $X$ computes $P$ from $\mathcal{B}^\#$ into $\mathcal{B}^\#$. If $\nu=(e)$, note that \[ P(\phi(\nu)) = \left\{\begin{array}{ll} 2^{-(e+1)}f_e & \mbox{\ if $e\not \in \operatorname{Fin}$;}\\ \mathbf{0} & \mbox{\ otherwise.}\\ \end{array} \right. \] and \[ \norm{P(\phi(\nu))}^p_\mathcal{B} = \left\{\begin{array}{ll} 2^{-(e+1)} & \mbox{\ if $e \not \in \operatorname{Fin}$;}\\ 0 & \mbox{\ otherwise.}\\ \end{array} \right. \] Given $e \in \mathbb{N}$, we can compute with oracle $X$ a rational number $q$ so that $| \norm{P(\phi((e)))}_{\mathcal{B}}^p - q| < 2^{-(e+3)}$. If $|q| < 2^{-(e+2)}$, then $\norm{P(\phi((e)))}_{\mathcal{B}}^p < 2^{-(e+1)}$ and so $e \in \operatorname{Fin}$. Otherwise, $\norm{P(\phi(\nu))}_{\mathcal{B}}^p \neq 0$ and so $e \not \in \operatorname{Fin}$. \end{proof} Theorem \ref{thm:lpLp01.lower} now follows from Proposition \ref{prop:proj.comp}. Note that we have also demonstrated the sharpness of the bounds in Theorem \ref{thm:proj.Lp.comp}. \section{Conclusion}\label{sec:conclusion} Suppose $p \geq 1$ is a computable real with $p \neq 2$. 
We have now classified the computably categorical $L^p$ spaces and determined the degrees of categoricity of those that are not computably categorical. Our results relate the degree of categoricity of an $L^p$ space to the structure of the underlying measure space. We have also determined the complexity of the natural projection operators on these spaces as well as their relationship to the degrees of categoricity. In addition, we have provided the first example of a $\emptyset''$-categorical Banach space that is not $\emptyset'$-categorical. This result leads to the following. \begin{question} If $n \in \mathbb{N}$ and $n \geq 2$, is there a $\emptyset^{(n+1)}$-categorical Banach space that is not $\emptyset^{(n)}$-categorical? \end{question} We note that Melnikov and Nies have shown that each compact computable metric space is $\emptyset''$-categorical and that there is a compact computable Polish space that is not $\emptyset'$-categorical \cite{Melnikov.Nies.2013}. We have shown that the degrees of $\emptyset$, $\emptyset'$, and $\emptyset''$ are degrees of categoricity of Banach spaces. These results lead to the following. \begin{question} Is every hyperarithmetical degree the degree of categoricity of a Banach space? \end{question} \begin{question} Is there a Banach space that does not have a degree of categoricity? \end{question} \end{document}
\begin{document} \title{\textsc{ A Strong Law of Large Numbers for Positive Random Variables} \thanks{~ We are deeply grateful to J\'anos \textsc{Koml\'os}, who went over the entire manuscript with the magnifying glass and offered line-by-line criticism and wisdom. We thank Daniel \textsc{Ocone}, Albert \textsc{Shiryaev} for invaluable advice; and Richard \textsc{Groenewald}, Tomoyuki \textsc{Ichiba}, Kostas \textsc{Kardaras}, Tze-Leung \textsc{Lai}, Kasper \textsc{Larsen}, Ayeong \textsc{Lee}, Emily \textsc{Sergel}, Nathan \textsc{Soedjak} for careful readings and suggestions.} } \author{ \textsc{Ioannis Karatzas} \thanks{~ Department of Mathematics, Columbia University, New York, NY 10027 (e-mail: {\it [email protected]}). Support from the National Science Foundation under Grant DMS-20-04977 is gratefully acknowledged. } \and \textsc{Walter Schachermayer} \thanks{~ Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria (email: {\it [email protected]}). Support from the Austrian Science Fund (FWF) under grant P-28861, and by the Vienna Science and Technology Fund (WWTF) through project MA16-021, is gratefully acknowledged. } } \maketitle \begin{abstract} \noindent In the spirit of the famous \textsc{Koml\'os} (1967) theorem, every sequence of nonnegative, measurable functions $\{ f_n \}_{n \in \mathbb N}$ on a probability space, contains a subsequence which---along with all its subsequences---converges a.e.\,\,in \textsc{Ces\`aro} mean to some measurable $f_* : \Omega \to [0, \infty]$. This result of \textsc{von\,Weizs\"acker} (2004) is proved here using a new methodology and elementary tools; these sharpen also a theorem of \textsc{Delbaen \& Schachermayer} (1994), replacing general convex combinations by \textsc{Ces\`aro} means. \end{abstract} \noindent {\sl AMS 2020 Subject Classification:} Primary 60A10, 60F15; Secondary 60G42, 60G46. 
\noindent {\sl Keywords:} Strong law of large numbers, hereditary convergence, partition of unity \section{Introduction} \label{sec1} On a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, consider real-valued measurable functions $f_1, f_2, \cdots \,.$ If these are independent and have the same distribution with $\mathbb E ( | f_1|) < \infty\,$, the celebrated \textsc{Kolmogorov} strong law of large numbers (\cite{K2};\,\cite{KAN};\,\cite{Du}, p.\,73) states that the ``sample average" $ \, ( f_1 + \cdots + f_N) / N\,$ converges $\mathbb{P}-$a.e.\,to the ``ensemble average" $\,\mathbb E (f_1) = \int_\Omega f_1 \, \mathrm{d} \mathbb{P}\,,$ as $ N \to \infty$. More generally, if $f_n (\omega) = f \big( T^{n-1}(\omega) \big), \, n \ge 2, \, \omega \in \Omega$ are the images of an integrable function $f_1: \Omega \to \mathbb R$ along the orbit of successive actions of a measure-preserving transformation $T: \Omega \to \Omega\,,$ then the above sample average converges $\mathbb{P}-$a.e.\,to the conditional expectation $f_* = \mathbb E ( f_1 |{\cal I})$ of $f_1$ given the $\sigma-$algebra ${\cal I}$ of $T-$invariant sets, by the \textsc{Birkhoff} pointwise ergodic theorem (\cite{Du}, p.\,333). 
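As a toy numerical illustration of the first of these dicta (a sketch of ours, not part of the mathematics; the sample size and seed are arbitrary), the sample averages of i.i.d.\ Uniform$[0,1]$ draws settle near the ensemble average $\mathbb E(f_1) = 1/2$:

```python
import random

# Toy illustration (ours): for i.i.d. integrable draws, the sample average
# (f_1 + ... + f_N)/N approaches the ensemble average E(f_1), which equals
# 1/2 for Uniform[0,1] variables, as in the Kolmogorov strong law.
random.seed(0)
N = 200_000
sample_average = sum(random.random() for _ in range(N)) / N
assert abs(sample_average - 0.5) < 0.01
```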
A deep result of \textsc{Koml\'os} \cite{K}, already 55 years old but always very striking, says that such ``stabilization via averaging" occurs within {\it any} sequence $f_1, f_2, \cdots \,$ of measurable, real-valued functions with $\, \sup_{n \in \mathbb N} \mathbb E ( | f_n|) < \infty\,.$ More precisely, there exist then an integrable function $f_*$ and a subsequence $ \{ f_{n_k} \}_{k \in \mathbb N} $ such that $ \, ( f_{n_1} + \cdots + f_{n_K}) / K\,$ converges to $f_*\,$, $\,\mathbb{P}-$a.e.\,\,as $ K \to \infty$; and the same is true for any further subsequence of this $ \{ f_{n_k} \}_{k \in \mathbb N}\,.$ This result inspired further path-breaking work in probability theory (\cite{G},\,\cite{Ch2},\,\cite{Ch3}) culminating with \textsc{Aldous} (1977), where exchangeability plays a crucial r\^ole. It, and its ramifications \cite{DS1},\,\cite{DS2} involving forward convex combinations, have been very useful in the field of convex optimization; more generally, when one seeks objects with specific properties, and tries to ascertain their existence using weak compactness arguments. Stochastic control, optimal stopping and hypothesis testing are examples of the former (e.g.,\,\cite{KS},\,\cite{KW},\,\cite{CK},\,\cite{KZ},\,\cite{LZ}); the \textsc{Doob-Meyer} and \textsc{Bichteler-Dellacherie} theorems in stochastic analysis provide instances of the latter (e.g.,\,\cite{J},\,\cite{BSV1},\,\cite{BSV2}). We develop here a very simple argument for the \textsc{Koml\'os} theorem, in the important special case of nonnegative $f_1, f_2, \cdots \,$ treated by \textsc{von\,Weizs\"acker} (2004). The argument dispenses with boundedness in $ \mathbb{L}^1$, at the cost of allowing the function $f_*$ to take infinite values. \section{Background} \label{sec2} We place ourselves on a given, fixed probability space $(\Omega, \mathcal{F}, \mathbb{P})$, and consider a sequence $f_1, f_2, \cdots \,$ of measurable, real-valued functions defined on it.
We say that this sequence {\it converges hereditarily in \textsc{Ces\`aro} mean} to some measurable $f_*:\Omega \to \mathbb{R} \cup \{\pm \infty\}$, and write $ f_n \xrightarrow[n \to \infty]{hC}f_*\,,~~ \mathbb{P} -\hbox{a.e.,} $ if, for {\it every} subsequence $\big\{f_{n_k} \big\}_{k \in \mathbb{N}}$ of the original sequence, we have \begin{equation} \label{1} \lim_{K\to \infty} \frac{1}{K} \sum_{k=1}^K f_{n_k} = f_*\,,\qquad \mathbb{P} -\hbox{a.e.} \end{equation} Clearly then, every other such sequence $g_1, g_2, \cdots \,$ which is {\it equivalent} to $f_1, f_2, \cdots \,,$ in the sense of $\, \sum_{n \in \mathbb N} \mathbb{P}( f_n \neq g_n) < \infty\,$ (cf.\,\cite{KAN}), also has this property. In 1967, \textsc{Koml\'os} proved the following remarkable result. The argument in \cite{K} is very clear, but also long and quite involved. Simpler proofs and extensions have appeared since (e.g.,\,\cite{S},\,\cite{T};\,\cite{B}). \begin{theorem} [\textsc{Koml\'os} (1967)] \label{Kom} If the sequence $\{f_n \}_{n \in \mathbb{N}}$ is bounded in $ \mathbb{L}^1,$ i.e., $\sup_{n \in \mathbb{N}} \mathbb{E} (|f_n|) < \infty\,$ holds, there exist an integrable $f_*:\Omega \to \mathbb{R}$ and a subsequence $\big\{f_{n_k}\big\}_{k \in \mathbb{N}}$ of $\{f_n \}_{n \in \mathbb{N}}\,,$ which converges hereditarily in \textsc{Ces\`aro} mean to $f_*\,:$ \begin{equation} \label{02} f_{n_k} \xrightarrow[k \to \infty]{hC}f_*\,, \qquad \mathbb{P}-\hbox{a.e.} \end{equation} \end{theorem} This result was motivated by an earlier one, Theorem \ref{Rev} right below. For the convenience of the reader, we provide in \S \,\ref{sec5f} a simple proof (in the manner of\,\cite{Ch}, pp.\,137-141) of that precursor result, which proceeds by extracting a {\it martingale difference} subsequence. This crucial idea, which establishes a powerful link to martingale theory and simplifies the arguments, appears in this context for the first time in \cite{K} (for related results, see \cite{K1}). 
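Before stating the theorem, it is worth noting how much the word ``every'' in \eqref{1} adds: plain \textsc{Ces\`aro} convergence of the full sequence is not hereditary, which is why a subsequence must be extracted. A deterministic sketch (ours, not from the paper):

```python
# Illustration (ours): ordinary Cesàro convergence is not hereditary.
# The sequence 1, 0, 1, 0, ... has Cesàro means tending to 1/2, yet its
# subsequence consisting of all the 1's has Cesàro means identically 1.
def cesaro_mean(seq):
    """Average of the first len(seq) terms."""
    return sum(seq) / len(seq)

f = [1, 0] * 50_000           # f_1, f_2, ... = 1, 0, 1, 0, ...
assert cesaro_mean(f) == 0.5  # full-sequence Cesàro mean
ones = f[::2]                 # the subsequence f_1, f_3, f_5, ... of 1's
assert cesaro_mean(ones) == 1.0
```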
\begin{theorem} [\textsc{R\'ev\'esz} (1965)] \label{Rev} If the sequence $ \{f_n \}_{n \in \mathbb{N}}$ satisfies $\,\sup_{n \in \mathbb{N}} \mathbb{E} (f_n^2) < \infty\,,$ there exist a function $g \in \mathbb{L}^2$ and a subsequence $ \{f_{n_k} \}_{k \in \mathbb{N}}\,,$ such that $ \, \sum_{k \in \mathbb{N}} a_k \big(f_{n_k} -g \big)\, $ converges $\,\mathbb{P}-$a.e., for any sequence $ \{a_k \}_{k \in \mathbb{N}} \subset \mathbb R$ with $\,\sum_{k \in \mathbb{N}} a^2_k < \infty$. \end{theorem} It is clear that this property of the subsequence $ \{f_{n_k} \}_{k \in \mathbb{N}} $ is inherited by all {\it its} subsequences (just ``stretch out" the $a_k$'s accordingly, and fill out the gaps with zeroes). In a related development, \textsc{Delbaen \& Schachermayer} (\cite{DS1},\,Lemma A1.1;\,\cite{DS2}) showed with very simple arguments that, from every sequence $\{f_n \}_{n \in \mathbb{N}}$ of nonnegative, measurable functions, a sequence of convex combinations $\,g_n \in \text{conv}(f_n, f_{n+1}, \cdots ), ~ n \in \mathbb N\,$ of its elements can be extracted, which converges $\mathbb{P}-$a.e.\,to a measurable $f_* : \Omega \to [0, \infty]$. This result was called ``a somewhat vulgar version of \textsc{Koml\'os}'s theorem" in \cite{DS2}, and is implied by Theorem \ref{theorem3} below. Indeed, convergence for \textsc{Ces\`aro} averages is much more precise than for unspecified forward convex combinations. In several contexts, including optimization treated via convex duality, nonnegativity is often no restriction at all, but rather the natural setting (e.g.,\,\cite{KS};\,\cite{LZ}; \cite{KS2};\,\cite{KK}, Chapter 3 and Appendix). 
Then, in the presence of convexity, Lemma A1.1 in \cite{DS1}, or Theorem \ref{theorem3} here, are very useful analogues of Theorem \ref{Kom}: they lead to limit functions $f_*$ in convex sets (such as the positive orthant in $ \mathbb{L}^0$, or the unit ball in $ \mathbb{L}^1$) which are not compact in the usual sense, but {\it are} ``convexly compact" as in \textsc{\v Zitkovi\'c} \cite{Z}. \section{Result} \label{sec3} The purpose of this note is to prove with new and elementary tools the following version of Theorem \ref{Kom}, due to \textsc{von\,Weizs\"acker} \cite{vW} and studied further in \cite{Ta}, \S\,5.2.3 of \cite{KS2}. \begin{theorem} \label{theorem3} Given a sequence $ \{f_n \}_{n \in \mathbb{N}}$ of {\rm nonnegative}, measurable functions on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, there exist a measurable function $f_*:\Omega \to [0, \infty]$ and a subsequence $\big\{f_{n_k}\big\}_{k \in \mathbb{N}}$ of the original sequence, such that \eqref{02} holds. \end{theorem} Our proof appears in Section \ref{sec5}; it is, we believe, not without methodological/pedagogical merit. We observe that the result imposes no restriction whatsoever on the functions $f_1, f_2, \cdots$, apart from measurability and nonnegativity. This comes at a price: the function $f_*\,$, constructed here carefully in \eqref{5}--\eqref{8} below, can take the value $+\infty$ on a set of positive measure. \section{Preparation} \label{sec4} We place ourselves in the setting of Theorem \ref{theorem3}. The arguments that follow often necessitate passing to subsequences, and to diagonal subsequences, of a given $ \{f_n \}_{n \in \mathbb{N}}$. To simplify typography, we denote frequently such subsequences by the same symbols, $ \{f_n \}_{n \in \mathbb{N}}$. 
For each integer $k \in \mathbb{N}$, we introduce now the truncated functions \begin{equation} \label{3} f_n^{(k)}\, :=\, f_n \cdot \mathbf{ 1}_{ \{ k-1 \le f_n < k \} }\,, \qquad n \in \mathbb{N} \end{equation} and note the partition of unity $\, \sum_{k \in \mathbb{N}} f^{(k)}_n = f_n\,, ~ \forall ~ n \in \mathbb{N}\,. $ \begin{lemma} \label{lemma04} For the sequence of functions $ \{f_n \}_{n \in \mathbb{N}}$ in Theorem \ref{theorem3}, there exists a subsequence, denoted by the same symbols and such that, for every $k\in \mathbb{N}$, the functions of \eqref{3} converge to an appropriate measurable function $f^{(k)}:\Omega \to [0,\infty)\,,$ in the sense \begin{equation} \label{4} f_n^{(k)} \, \xrightarrow[n \to \infty]{hC} \, f^{(k)}, \qquad \mathbb{P}-\text{a.e.} \end{equation} For each fixed $k \in \mathbb N,$ this convergence holds also in $\mathbb{L}^1.$ \end{lemma} \noindent {\it Proof} (after \cite{Ch}, pp.\,145--146): For arbitrary, fixed $k\in \mathbb{N}\,,$ the sequence $ \big\{f_n^{(k)} \big\}_{n \in \mathbb{N}}$ of \eqref{3} is bounded in $\mathbb{L}^\infty,$ thus also in $\mathbb{L}^2$. Theorem \ref{Rev} provides a function $ f^{(k)} \in \mathbb{L}^2$ and a subsequence $ \{ f_{n_j}^{(k)} \}_{j \in \mathbb{N}}$ of $ \{f_n^{(k)} \}_{n \in \mathbb{N}}\,$, such that $\,\sum_{j \in \mathbb{N}} (f_{n_j}^{(k)} -f^{(k)} )/ j\,$ converges $\,\mathbb{P}-$a.e.; and as mentioned right after Theorem \ref{Rev}, this is inherited by all subsequences of $ \{ f_{n_j}^{(k)} \}_{j \in \mathbb{N}}$, and the \textsc{Kronecker} Lemma (\cite{Du}, p.\,81) gives $$ 0= \lim_{J\to \infty} \frac{1}{J} \sum_{j=1}^J \big( f_{n_j}^{(k)} - f^{(k)} \big)=\lim_{ J \to \infty} \frac{1}{J} \sum_{j=1}^J f_{n_j}^{(k)} - f^{(k)} ,\qquad \mathbb{P} -\hbox{a.e.} $$ We pass now to a diagonal subsequence, denoted $ \big\{f_n \big\}_{n \in \mathbb{N}}\,$ again, and such that \eqref{4} holds for {\it every} $k \in \mathbb N\,.$ The last claim follows by the dominated convergence theorem. 
\qed With these ingredients, we introduce the measurable function $f:\Omega \to [0,\infty]$ via \begin{equation} \label{5} f \,:=\, \sum_{k \in \mathbb{N}} f^{(k)}, \qquad \text{and consider the set} \quad A_\infty \,:= \,\{f=\infty\}. \end{equation} With the help of \textsc{Fatou}'s Lemma, and the notation of (\ref{3})--(\ref{5}), Lemma \ref{lemma04} gives then \begin{equation} \label{6} \varliminf_{N\to \infty} \frac{1}{N} \sum^{N}_{n=1} f_n \geq f\,, \qquad \mathbb{P} -\text{a.e.} \end{equation} \begin{equation} \label{7} \lim_{N\to \infty} \frac{1}{N} \sum^{N}_{n=1} f_n = \infty =f\,, \qquad \mathbb{P} -\text{a.e.} \quad \text{on} \quad A_\infty \end{equation} for a suitable subsequence (denoted by the same symbols) of the original sequence $ \{f_n \}_{n \in \mathbb{N}}\,,$ and for all further subsequences of this subsequence. The inequality in \eqref{6} can easily be strict. Consider, for instance, $f_n \equiv n\,,$ so that $f_n^{(k)} =0$ holds in \eqref{3} for every fixed $k \in \mathbb{N}$ and all $n \in \mathbb{N}$ sufficiently large. We obtain $f^{(k)} =0$ in \eqref{4}, thus $f=0$ in \eqref{5}; and yet $\frac{1}{N} \sum^N_{n=1} f_n \to \infty$ as $N\to \infty$. This preparation allows us to formulate a more technical and precise version of Theorem \ref{theorem3}, Proposition \ref{prop05} below, which implies it. The convention $ \,\infty \cdot 0=0$ is employed here, and throughout. \begin{proposition} \label{prop05} Fix a sequence $\{ f_n \}_{n \in \mathbb N}$ of nonnegative, measurable functions on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$, and recall the notation \eqref{3}--\eqref{5}. 
There exist then a subsequence, denoted again $\{f_n\}_{n\in \mathbb{N}}\,$, and a set $A \supseteq A_\infty\,,$ such that \begin{equation} \label{8} f_n \, \xrightarrow[n \to \infty]{hC} \,f_* := \max \big(f, \, \infty \cdot \mathbf{1}_{A } \big)\,, \qquad \mathbb{P} -\text{a.e.} \end{equation} We have $A =A_\infty \,, $ thus also $f_* \equiv f ,$ when $\,\lim_{K\to \infty} \varlimsup_{n \to \infty} \, \mathbb{P} \big(f_n \ge K , f < \infty\big) =0\,$. \end{proposition} This last condition holds if $\, \big\{ f_n \,\mathbf{1}_{ \{ f < \infty\} } \big\}_{n \in \mathbb N}\,$ is bounded in $\mathbb{L}^0$, i.e., $\,\lim_{K\to \infty} \sup_{n \in \mathbb N} \, {\mathbb P} (f_n \ge K, f < \infty ) = 0\, .$ A bit more stringently, if not only $\{ f_n \}_{n \in \mathbb N}$ but also its solid, convex hull in $\mathbb{L}^0_+,$ is bounded in $\mathbb{L}^0, $ then $\{ f_n \}_{n \in \mathbb N}$ is bounded in $\mathbb{L} ^1(\mathbb{Q})$ under some probability measure $ \mathbb{Q} \sim \mathbb{P},$ and thus $\mathbb{P} ( f < \infty) =1$ (e.g., Proposition A.11 in \cite{KK}). Whereas, if $\{ f_n \}_{n \in \mathbb N}$ is bounded in $\mathbb{L}^1(\mathbb{P})$, i.e., $ \kappa := \sup_{n \in \mathbb{N}} \mathbb{E} (f_n)< \infty , $ then $f$ in \eqref{5} is integrable, since $\,\mathbb{E} (f ) \le \kappa $ holds from \eqref{6} and \textsc{Fatou}. \section{Proofs} \label{sec5} We shall need a couple of auxiliary results. First, and always with the notation of (\ref{3})--(\ref{5}), we note the following consequence of monotone and dominated convergence. 
\begin{lemma} \label{lemma_2} Suppose a set $\,D\subseteq \Omega \backslash A_\infty = \{f < \infty \}\,$ satisfies $\,\mathbb{E}\big(f \, \mathbf{1}_D\big) < \infty\,.$ Then, for any given $\varepsilon \in (0,1),$ there exist $K \in \mathbb N $ and a subsequence of the given sequence $\{ f_n \}_{n \in \mathbb N}$ such that for it, and for any of its subsequences $($denoted again $\{ f_n \}_{n \in \mathbb N}),$ we have for arbitrary integers $L >K:$ \begin{equation} \label{10} \lim_{n \to \infty} \mathbb{E} \big[\,f_n \, \mathbf{ 1}_{ \{ K \le f_n <L \}\cap D }\, \big]=:\lim_{n \to \infty} \mathbb{E} \Big[\,f_n^{\,[K,L)}\, \mathbf{ 1}_D\, \Big] < \varepsilon\,. \end{equation} \end{lemma} We are using throughout the notation \begin{equation} \label{11} f_n^{\,[K,L)} \,:= \sum^L_{k=K+1} f_n^{(k)} = f_n \, \mathbf{1}_{[K,L)} (f_n)\,, \quad ~~~f_n^{\,[K,\infty)} \,:=\, \sum_{k \geq K+1} f_n^{(k)} = f_n \, \mathbf{1}_{[K, \infty)} (f_n) \,; \end{equation} in an analogous manner $\, f^{\,[K,L)} := \sum^L_{k=K+1} f^{(k)}\,, ~ \,f^{\,[K,\infty)} := \sum_{k \geq K+1} f^{(k)} \,,$ and Lemma \ref{lemma04} gives \begin{equation} \label{11a} f_n^{\,[K,L)} \, \xrightarrow[n \to \infty]{hC} \, f^{\,[K,L)} \,, \qquad \hbox{both $\,\mathbb{P}-$a.e. and in $\,\mathbb{L}^1$.} \end{equation} Secondly, we recall \eqref{7} and observe the following dichotomy. \begin{lemma} \label{lemma_3} In the setting of Proposition \ref{prop05}, consider any measurable set $B \supseteq \{f=\infty\} $ such that the property $\,f_n \xrightarrow[n \to \infty]{hC} \infty\,$ of \eqref{7} holds $\,\mathbb{P}-$a.e.\,on $B$. 
Then, either \noindent (i) there exist a set $\,C \supseteq B$ with $\mathbb{P} (C) > \mathbb{P} (B)$ and a subsequence, still denoted $ \{f_n \}_{n \in \mathbb{N}}\,,$ with \begin{equation} \label{9} f_n \xrightarrow[n \to \infty]{hC} \infty \qquad \text{valid} \quad \mathbb{P} -\text{a.e.} ~ \text{on} ~ \,C\,; \qquad \text{or,} \end{equation} (ii) the \textsc{Ces\`aro} convergence $~f_n \xrightarrow[n \to \infty]{hC} f < \infty ~~ \text{holds}~~ \mathbb{P} -a.e. ~ \text{on} ~\, \Omega \setminus B \subseteq \{f< \infty\} \,.$ \end{lemma} Under {\it Case\,(ii),} the set $B\supseteq A_\infty = \{f = \infty \}$ is maximal for the $\,\mathbb{P} -$a.e.\,\,property $f_n \stackrel{hC}{\longrightarrow} \infty\,$: it cannot be ``inflated" to a set $C\supseteq B,$ which satisfies \eqref{9} and has bigger measure. This leads eventually to Proposition \ref{prop05}, and thence to Theorem \ref{theorem3}. Before proving these two results, we dispense with the proof of Theorem \ref{Rev}; this is completely self-contained, and has nothing to do with Lemma \ref{lemma_2} or Lemma \ref{lemma_3}. \subsection{Proof of Theorem \ref{Rev}} \label{sec5f} Because $\{f_n\}_{n \in \mathbb N}$ is bounded in $\mathbb{L}^2$, we can extract a subsequence that converges to some $g \in \mathbb{L}^2$ weakly in $\mathbb{L}^2$. Thus, it suffices to prove the result for a sequence $\{g_n\}_{n \in \mathbb N}$ bounded in $\mathbb{L}^2$, and with $g_n \to 0$ weakly in $\mathbb{L}^2$. We take such a sequence, then, and approximate each $g_n$ by a {\it simple} function $h_n \in \mathbb{L}^2$ with $\|g_n -h_n \|_2 \le 2^{-n}, ~\forall \,n \in \mathbb N.$ This gives, in particular, \begin{equation} \label{R1} \sum_{n \in \mathbb N} \, \big| g_n - h_n \big| < \infty\,, \quad \mathbb{P} -\hbox{a.e.\,;}\qquad h_n \to 0 \quad \hbox{weakly in } \mathbb{L}^2. 
\end{equation} We construct now, by induction, a sequence $1=n_1 < n_2 < \cdots\,$ of integers, such that \begin{equation} \label{R2} \big| \vartheta_k \big| < 2^{-k} \quad \text{holds } ~ \mathbb{P} -\text{a.e., for } ~ ~\vartheta_k:= \mathbb E \big( h_{n_k} \, \big| \, h_{n_1} , \cdots, h_{n_{k-1}} \big)\,,~~k=2,3, \cdots , \end{equation} as follows: The function $h_{n_1} = h_1$ is simple, thus so is $ \,\mathbb E ( h_n | h_1) = \sum_{j=1}^J \gamma^{(n)}_j \mathbf{ 1}_{A_j} \,$ with $A_1, \cdots, A_J$ a partition of the space, and ${\mathbb P} (A_j)>0$, $\gamma^{(n)}_j := \big(1 / {\mathbb P} (A_j) \big) \cdot \mathbb E \big( h_n \,\mathbf{ 1}_{A_j} \big)\,.$ This last expectation tends to zero as $n \to \infty$ from \eqref{R1}, for every fixed $j$; so we can choose $n_2 > n_1 =1$ with $ \big|\gamma^{(n_2)}_j \big| < 2^{-2},$ for \noindent $ j=1, \cdots, J$; i.e., $\big|\vartheta_2 \big|< 2^{-2},$ ${\mathbb P}-$a.e. Clearly, we can keep repeating this argument since, at each stage, $\big( h_{n_1} , \cdots, h_{n_{k-1}} \big)$ generates a finite partition of the space; and this way we arrive at \eqref{R2}. The sequence $\{ h_n \}_{n \in \mathbb N}$ is bounded in $\mathbb{L}^2$, thus so is the martingale $ X_n := \sum_{k=0}^n a_k \big(h_{n_k} - \vartheta_k \big)\,, ~ n \in \mathbb N_0\,, $ for any $ \{a_n \}_{n \in \mathbb{N}_0} \subset \mathbb R $ with $\sum_{n \in \mathbb{N}} a^2_n < \infty$. Martingale convergence theory (\cite{Du}, p.\,236) shows that the series $\sum_{k \in \mathbb N} a_k \big(h_{n_k} - \vartheta_k \big)$ converges ${\mathbb P}-$a.e. But we have also $\sum_{k \in \mathbb N} \big( \big| \vartheta_k \big| + \big| g_{n_k} - h_{n_k} \big| \big)< \infty, $ $\,{\mathbb P}-$a.e.\,\,from \eqref{R1}--\eqref{R2}, and deduce that $ \sum_{k \in \mathbb{N}} a_k \,g_{n_k} $ converges $\,\mathbb{P}-$a.e., the claim of the theorem. 
\qed \subsection{Proof of Lemma \ref{lemma_2}} \label{sec5a} Let us call {\it ``Lemma \ref{lemma_2}$^{\,\dagger}$"} the same statement as that of Lemma \ref{lemma_2}, except that \eqref{10} is now replaced by \begin{equation} \label{10a} \forall~ L=K+1, K+2, \cdots \,:~~~ \mathbb{E} \Big[\,f_n^{\,[K,L)}\, \mathbf{ 1}_D\, \Big] < \varepsilon\,, ~\text{for all but finitely many}~ n \in \mathbb N . \end{equation} {\it Claim: Lemma \ref{lemma_2}$^{\,\dagger}$ implies Lemma \ref{lemma_2}.} Let a subsequence of the original $\{ f_n \}_{n \in \mathbb N}$ be given (denoted $\{ f_n \}_{n \in \mathbb N}$ again), along with arbitrary $\varepsilon \in (0,1)$. Lemma \ref{lemma_2}$^{\,\dagger}$, applied with $\varepsilon / 2$ in place of $\varepsilon$, guarantees the existence of $K \in \mathbb N$, depending on $\varepsilon$ and the subsequence, such that \eqref{10a} holds with $\varepsilon / 2$ for all integers $L \ge K+1$. Choose $L=K+1$ first. From Lemma \ref{lemma_2}$^{\,\dagger}$ and \textsc{Bolzano-Weierstrass}, (the current) $\{ f_n \}_{n \in \mathbb N}$ has a subsequence for which the expectation in \eqref{10a} converges, with limit $\le \varepsilon / 2.$ Now choose $L=K+2$ and a subsequence of the last subsequence, for which the expectation in \eqref{10a} converges and has limit $\le \varepsilon / 2.$ Continuing in this manner, then diagonalizing, we obtain a subsequence that satisfies \eqref{10}. \noindent {\it Proof of Lemma \ref{lemma_2}$^{\,\dagger}$.} We argue by contradiction, assuming that $\{ f_n \}_{n \in \mathbb N}$ has a subsequence for which Lemma \ref{lemma_2}$^{\,\dagger}$ fails. Then there exists an $\varepsilon \in (0,1)$ with the property that, for every subsequence of $\{ f_n \}_{n \in \mathbb N}$ and every $K \in \mathbb{N}$, there exists an integer $L>K$ such that \begin{equation} \label{11b} \mathbb{E} \bigg[\sum^L_{k=K+1} f^{(k)}_n \, \mathbf{1}_D \bigg] = \mathbb{E} \Big(f_n^{[K,L)} \, \mathbf{1}_D \Big) \geq \varepsilon \end{equation} holds for infinitely many integers $n \in \mathbb{N}$.
But this means that there is a subsequence, again denoted by $\{f_n\}_{n \in \mathbb{N}}\,,$ {\it along which we have \eqref{11b} for every $n \in \mathbb{N};$} and, as a result, also \begin{equation} \label{11c} \mathbb{E} \bigg[\sum^L_{k=K+1} \Big(\frac{1}{N} \sum^N_{n=1} f_n^{(k)}\Big) \, \mathbf{1}_D \bigg] \geq \varepsilon\,, \qquad \forall ~ N \in \mathbb N\,. \end{equation} Now all the truncated functions $f_n^{(k)}$ as in \eqref{3}, for $k=K+1, \dots, L$ and $n \in \mathbb{N}$, take values on the ``Procrustean bed'' $\{ 0 \} \cup [K,L)$; and $\, \lim_{N\to\infty} \frac{1}{N} \sum^N_{n=1} f_n^{(k)} = f^{(k)}\,$ holds $\mathbb{P} -$a.e., for the selected subsequence and all its subsequences, on account of Lemma \ref{lemma04}. Thus, $\, \mathbb{E} \big[\sum^L_{k=K+1} f^{(k)} \, \mathbf{1}_D \big] \geq \varepsilon \,$ from bounded convergence and \eqref{11c}; and the nonnegativity of these $f^{(k)}$'s implies also \begin{equation}\label{11d} \mathbb{E} \bigg(\sum_{k\geq K+1} f^{(k)} \, \mathbf{1}_D \bigg) = \mathbb{E} \Big(f^{\,[K, \infty)} \, \mathbf{1}_D \Big) \geq \varepsilon\,, \qquad \forall ~~K \in \mathbb{N}\,. \end{equation} The nonnegativity gives also $\,\lim_{K\to \infty} \uparrow \sum^K_{k=1} f^{(k)}\, \mathbf{1}_D =f\, \mathbf{1}_D\,$, both $\mathbb{P} -$a.e.\,and in $\mathbb{L}^1$. Since $ \mathbb{E}\big(f \, \mathbf{1}_D\big) < \infty $ by assumption, $ \mathbb{E} \big[f^{\,[K, \infty)} \, \mathbf{1}_D\big] < \varepsilon/2\,$ holds for all $K \in \mathbb{N}$ large enough. But this contradicts \eqref{11d}, and we are done.
\qed \subsection{Proof of Lemma \ref{lemma_3}} \label{sec5b} We start by fixing $j \in \mathbb{N}$ and distinguishing two contingencies, with the definitions \begin{equation} \label{12} D_j := \{f \leq j\} \backslash B \,, \qquad E_n^{[K, \infty)} \,:=\, \big\{f_n^{\,[K,\infty)} \geq K \big\} \cap D_j \,=\, \big\{f_n \geq K \big\} \cap D_j \,, \end{equation} \begin{equation} \label{12too} \alpha \,:=\, \lim_{K\to \infty} \varlimsup_{n \to \infty} \mathbb{P} \big(E_n^{[K, \infty)}\big) \,: \end{equation} \noindent {\it Contingency ~I:} $~\alpha > 0\,.$ \noindent {\it Contingency II:} $~\alpha = 0\,.$ \noindent $\bullet~$ Under {\bf Contingency\,I}\,, we pass to a subsequence $\{f_n\}_{n \in \mathbb{N}}$ with $\mathbb{P} \big(E_n^{\,[n^2, \infty)}\big) \geq \alpha /2 \,,$ $\forall ~n \in \mathbb{N}\,$; and consider indicators $\,g_n := \mathbf{1}_{ E_n^{\,[n^2, \infty)}}, ~n \in \mathbb{N}\,,$ all of them supported on the set $\Omega \setminus B$. Arguing as in Lemma \ref{lemma04} we obtain a subsequence, still denoted $\{g_n\} _{n \in \mathbb{N}}$, with $ \, g_n \xrightarrow[n \to \infty]{hC} g\,,$ $ \mathbb{P} -\hbox{a.e.,} $ for some $g:\Omega \to [0,1]\,$ with $\{g >0\} \subseteq \Omega \backslash B \,$ and $\mathbb{E}(g) \geq \alpha /2$ by bounded convergence. Thus, $f_n \xrightarrow[n \to \infty]{hC} \infty\, $ holds $\, \mathbb{P} -$a.e. on $\,\{g >0\}\,$. This set has $\,\mathbb{P} \big( g>0\big) = \mathbb{E} [\mathbf{1}_{\{g>0\}} ] \geq \mathbb{E}(g) \geq \alpha/2 \,;$ we are under {\it Case (i)} of Lemma \ref{lemma_3}, with $C := \{g >0\} \cup B$ and $\mathbb{P}(C) > \mathbb{P}(B)$. \noindent $\bullet~$ Now we pass to {\bf Contingency\,II}\,. 
We fix $\varepsilon >0,$ $D_j=\{f \leq j\} \backslash B $, and apply Lemma \ref{lemma_2} with this $D_j$ to construct inductively a subsequence $\big\{ n_m \big\}_{m \in \mathbb N}\,,$ along with sequences $\big\{ K_m \big\}_{m \in \mathbb N}\,, $ $\big\{ L_m \big\}_{m \in \mathbb N}\, $ of integers increasing to infinity and such that \begin{equation} \label{A.8} \mathbb{P} \big(E_{n_m}^{\,[L_{m }, \infty ) }\big) = \mathbb{P} \big(\big\{ f_{n_m} \ge L_m \big\} \cap D_j \big) < 2^{- m } \end{equation} \begin{equation} \label{A.9} \mathbb{E} \Big[\, f_{n_p}^{\,[K_m, L_p)} \, \mathbf{ 1}_{D_j} \, \Big] < 2^{- m }\,,\qquad \forall ~~p = m, m+1, \cdots \end{equation} hold for every $ m \in \mathbb N$. With the choice (\ref{A.8}), the sequences $\, \big\{ f_{n_m} \cdot \mathbf{ 1}_{D_j} \big\}_{m \in \mathbb N}\,$ and $\, \big\{ f_{n_m}^{\,[0, L_m)} \cdot \mathbf{ 1}_{D_j} \big\}_{m \in \mathbb N}\,$ are equivalent in the sense introduced in section \ref{sec2}, as the probability of their respective general terms being different is bounded from above by $2^{- m }$. We claim that \begin{equation} \label{A.14} f_{n_m} \cdot \mathbf{ 1}_{D_j}\, \xrightarrow[m \to \infty]{hC} \,f \cdot \mathbf{ 1}_{D_j}\,,\quad \mathbb{P} -\hbox{a.e.;} \end{equation} and in view of the previous statement, this amounts to \begin{equation} \label{A.10} f_{n_m}^{\,[0, L_m)} \cdot \mathbf{ 1}_{D_j}\, \xrightarrow[m \to \infty]{hC} \,f \cdot \mathbf{ 1}_{D_j}\,,\quad \mathbb{P} -\hbox{a.e.} \end{equation} To prove (\ref{A.10}), we start by observing that the sequence $\, \big\{ f_{n_m}^{\,[0, L_m)} \cdot \mathbf{ 1}_{D_j} \big\}_{m \in \mathbb N}\,$ is {\it uniformly integrable}, thus bounded in $\mathbb{L}^1$, as $$ \sup_{p \in \mathbb N \atop p \ge m} \, \mathbb{E} \Big[\, f_{n_p}^{\,[0, L_p)} \, \mathbf{ 1}_{D_j} \cdot \mathbf{ 1}_{ \big\{ f_{n_p}^{\,[0, L_p)} \ge K_m \big\} }\, \Big] < 2^{- m } $$ holds on account of (\ref{A.9}) for every $m \in \mathbb N\,$. 
Theorem \ref{Kom} gives an integrable function $h : \Omega \to [0, \infty)\,$ with \begin{equation} \label{A.11} f_{n_m}^{\,[0, L_m)} \cdot \mathbf{ 1}_{D_j}\, \xrightarrow[m \to \infty]{hC} \,h \cdot \mathbf{ 1}_{D_j}\,,\quad \mathbb{P} -\hbox{a.e.,} \end{equation} and we need to argue that this $h$ agrees with $f$ from (\ref{5}), $\mathbb{P}-$a.e.\,on $D_j$. Indeed, for every $K \in \mathbb N$ and all $m$ large enough, $\, \sum_{k=1}^K \, f_{n_m} \, \mathbf{ 1}_{ \{ k-1 \le f_{n_m} < k \} } \,=\, f_{n_m}^{\,[0,K)} \,\le \, f_{n_m}^{\,[0,L_m)}\, $ holds, therefore $\, \sum_{k=1}^K f^{(k)} \cdot \mathbf{ 1}_{D_j} \le h \cdot \mathbf{ 1}_{D_j}\,$ by letting $m \to \infty$, on account of (\ref{A.11}) and Lemma \ref{lemma04}. Passing now to the limit as $K \to \infty$ and recalling (4.3), we arrive at \begin{equation} \label{A.13} f \cdot \mathbf{ 1}_{D_j}\, \le \, h \cdot \mathbf{ 1}_{D_j}\,,\quad \mathbb{P} -\hbox{a.e.} \end{equation} To obtain the inequality in the reverse direction, we take expectations. 
From (\ref{A.11}) and uniform integrability, we have $\, \mathbb{E} \Big[\, f_{n_m}^{\,[0, L_m)} \cdot \mathbf{ 1}_{D_j} \, \Big] \, \xrightarrow[m \to \infty]{hC}\, \mathbb{E} \big[\, h \cdot \mathbf{ 1}_{D_j} \, \big]\,,$ therefore also $$ \mathbb{E} \big[\, h \cdot \mathbf{ 1}_{D_j} \, \big]\,= \lim_{M \to \infty \atop K \to \infty} \frac{1}{M}\, \mathbb{E} \bigg[\, \sum_{m=1}^M f_{n_m}^{\,[0, L_m \wedge K)} \cdot \mathbf{ 1}_{D_j} \, \bigg] \,= \lim_{M \to \infty \atop K \to \infty} \frac{1}{M}\, \mathbb{E} \bigg[\, \sum_{m=1}^M \sum_{k=1}^{L_m \wedge K} f_{n_m}^{(k)} \cdot \mathbf{ 1}_{D_j} \, \bigg]~~~~~~~~~~~~ $$ $$ ~~~~~~~~~~~~~~ \le \lim_{M \to \infty \atop K \to \infty} \frac{1}{M}\, \mathbb{E} \bigg[\, \sum_{m=1}^M \bigg( \sum_{k=1}^{ K} f_{n_m}^{(k)} \bigg) \cdot \mathbf{ 1}_{D_j} \, \bigg]\,=\,\lim_{K \to \infty } \, \mathbb{E} \bigg[\, \sum_{k=1}^{K } \, f^{(k)} \cdot \mathbf{ 1}_{D_j} \, \bigg]\,\le\, \mathbb{E} \big[\, f \cdot \mathbf{ 1}_{D_j} \, \big] $$ \noindent from Lemma \ref{lemma04}. In conjunction with (\ref{A.13}), this shows $\,f \cdot \mathbf{ 1}_{D_j}\, =\, h \cdot \mathbf{ 1}_{D_j}\,,~~ \mathbb{P} -\hbox{a.e.},$ as claimed; and on account of (\ref{A.11}) it establishes (\ref{A.10}), thus (\ref{A.14}) as well. The final step is to let $ \,j\to \infty\,$: we do this again by extracting subsequences, successively for each $j \in \mathbb N\,$, then passing to a diagonal subsequence. We obtain then \eqref{A.14} with $D_j$ replaced by the set $D:= \bigcup_{j \in \mathbb N}D_j =\{ f < \infty \} \backslash B ,$ and deduce that we are in {\it Case\,(ii)} of Lemma \ref{lemma_3}. 
\qed \subsection{Proofs of Proposition \ref{prop05} and Theorem \ref{theorem3}} \label{sec5c} On the strength of Lemma \ref{lemma_3} we construct, by exhaustion or transfinite induction arguments and as long as we are under the dispensation of its {\it Case\,(i),} an increasing sequence $ B\subseteq B_1 \subseteq B_2 \subseteq \dots$ of sets as postulated there, whose union $B_\infty := \bigcup_{j \in \mathbb{N}} B_j \supseteq B \supseteq \{f=\infty\}$ is maximal with the property \eqref{9} for an appropriate subsequence. But maximality means that, on the complement $\Omega \backslash B_\infty$ of this set, we must be in the realm of {\it Case\,(ii)} in Lemma \ref{lemma_3}. This establishes the first claim of Proposition \ref{prop05} with $A = B_\infty \supseteq \{f=\infty\} \,$, thus also Theorem \ref{theorem3}. For the second claim of the Proposition, we note that equality holds right above, that is, $ B_\infty = \{f=\infty\} ,$ if we are under Contingency II (i.e., $\alpha =0$) in \S\,\ref{sec5b} (proof of Lemma \ref{lemma_3}) and with $B = \{f=\infty\}\,$ in (\ref{12}); a sufficient condition for this, is $\, \lim_{K\to \infty} \varlimsup_{n \to \infty} \mathbb{P} (f_n \ge K, f < \infty ) =0$. The claim now follows. \qed \end{document}
\begin{document} \title[Equicontinuity of minimal sets for amenable group actions on dendrites]{Equicontinuity of minimal sets for\\ amenable group actions on dendrites} \author{Enhui Shi\ \ \&\ \ Xiangdong Ye} \address[E.H. Shi]{School of Mathematical Sciences, Soochow University, Suzhou 215006, P. R. China} \email{[email protected]} \address[X. Ye] {Wu Wen-Tsun Key Laboratory of Mathematics, USTC, Chinese Academy of Sciences and Department of Mathematics, University of Science and Technology of China, Hefei, Anhui 230026, China} \email{[email protected]} \begin{abstract} We show that if $G$ is an amenable group acting on a dendrite $X$, then the restriction of $G$ to any minimal set $K$ is equicontinuous, and $K$ is either finite or homeomorphic to the Cantor set. \end{abstract} \keywords{Equicontinuity, amenable group, minimal sets} \subjclass[2010]{54H20, 37B25, 37B05, 37B40} \maketitle \section{Introduction} It is well known that every \tw{continuous} action of \tw{a topological group}~$G$ on \tw{a compact metric space}~$X$ must have a minimal set~$K$. A natural question is \tw{to ask} what can \tw{be said} about the topology of~$K$, and the dynamics of the subsystem~$(K, G)$. The answer to this question \tw{certainly} depends on the topology of~$X$ and \tw{involves} the algebraic structure of~$G$. \tw{We assume throughout that groups are topological groups, and that the actions are continuous.} In the case of \tw{an orientation-preserving} group action on the circle~$\mathbb S^1$, the topology of minimal sets and the dynamics on them are well understood. In fact, for any action of \tw{a topological} group~$G$ on~$\mathbb S^1$, the minimal set~$K$ can only be a finite set, a Cantor set, or the whole circle (see, \tw{for example,}~\cite{Nav}). 
\tw{The interaction between the topology of~$K$ and the algebraic structure of~$G$ arises as follows.} \begin{itemize} \item \tw{If}~$K$ is a Cantor set, then~$(K, G)$ is \tw{semi-conjugate} to a minimal action \tw{of~$G$} on~$\mathbb S^1$. \item If~$K=\mathbb S^1$, then~$(K, G)$ is either equicontinuous, or~$(K,G)$ is~$\epsilon$-strongly proximal for some~$\epsilon>0$, and~$G$ contains a free non-commutative subgroup (\tw{so, in particular},~$G$ cannot be amenable; see \cite{Ma}). \end{itemize} The classes of minimal group actions on the circle \tw{up to topological conjugacy} \tw{have been} classified by Ghys using bounded Euler class (see~\cite{Ghy, Gh}). Recently, there \tw{has been} considerable progress in \tw{the study of} group actions on dendrites. Minimal group actions on dendrites appear naturally in the theory of~$3$-dimensional hyperbolic geometry (see, for example,~\cite{Bo, Mi}). Shi proved that every minimal group action on \tw{a dendrite} is strongly proximal, and the acting group cannot be amenable (see~\cite{Sh, SWZ}). Based on the results obtained by Marzougui and Naghmouchi in~\cite{MN}, Shi and Ye showed that \tw{an} amenable group action on \tw{a dendrite} always has a minimal set consisting of~$1$ or~$2$ points (see~\cite{SY}), which is also implied by the work of Malyutin and Duchesne--Monod (see~\cite{Mal, DM}). For group actions on dendrites with no finite orbits, Glasner and Megrelishvili showed the extreme proximality of minimal subsystems and the strong proximality of the whole system; for amenable group actions on dendrites, they showed that every infinite minimal subsystem is almost automorphic (see~\cite{GM}). For~$\mathbb Z$ actions on dendrites, Naghmouchi proved that every minimal set is either finite or an adding machine (see~\cite{Nag}). 
We \tw{prove} the following theorem in this paper, which extends the corresponding result for~$\mathbb Z$ actions in~\cite{Nag}, and \tw{answers} a question proposed by Glasner and Megrelishvili in~\cite{GM}. \begin{thm}\label{theoremwas1-1} Let~$G$ be an amenable group acting on a dendrite~$X$, \tw{and suppose that}~$K$ is a minimal set \tw{for the action}. Then~$(K, G)$ is equicontinuous, and~$K$ is either finite or homeomorphic to the Cantor set. \end{thm} Recently, Shi and Ye have shown that every amenable group action on uniquely arcwise connected continua (without the assumption of local connectedness) must have a minimal set consisting of~$1$ or~$2$ points (see~\cite{SY1}). We end this \tw{introduction} with the following general question: \begin{quote} {\it What results holding for group actions on dendrites can be extended to actions on uniquely arcwise connected continua?} \end{quote} In the following, we assume all the groups \tw{appearing} in this paper are countable. \section{Preliminaries} \subsection{Group actions} Let~$X$ be a compact metric space,~${\rm Homeo}(X)$ \tw{its homeomorphism group}, \tw{and let}~$G$ be a group. A group homomorphism~$\phi: G\rightarrow {\rm Homeo}(X)$ is called an \emph{action} of $G$ on $X$; we \tw{also write}~$(X, G)$ to denote \tw{an} action of~$G$ on~$X$. For brevity, we usually \tw{write}~$gx$ or~$g(x)$ instead of~$\phi(g)(x)$. The \emph{orbit} of~$x\in X$ under the action of~$G$ is the set \[ Gx=\{gx\mid g\in G\}. \] For a subset~$A\subseteq X$, set~$GA=\bigcup_{x\in A}Gx$; \tw{a set}~$A$ is said to be~$G$-\emph{invariant} if~$GA=A$; \tw{finally, a point}~$x\in X$ is called a \emph{fixed point} of \tw{the action} if~$Gx=\{x\}$. If~$A$ is a~$G$-invariant closed subset of~$X$ and~$\overline{Gx}=A$ for every~$x\in A$ \tw{(that is, the orbit of each point is dense)}, then~$A$ is called a \emph{minimal set for the action}. 
\tw{In this setting every action has a minimal set by Zorn's lemma.} A Borel probability measure~$\mu$ on~$X$ is called~$G$-\emph{invariant} if~$\mu(g(A))=\mu(A)$ for every Borel set~$A\subset X$ and every~$g\in G$. The following lemma follows directly from the~$G$-invariance of \tw{the support}~${\rm supp(\mu)}$ \tw{(which is automatic)}. \begin{lem}\label{lemmawas2-1} If~$(X, G)$ is minimal and~$\mu$ is a~$G$-invariant Borel probability measure on~$X$, then~${\rm supp(\mu)}=X$. \end{lem} \begin{lem}\label{lemmawas2-2} \tw{Suppose that a group}~$G$ acts on a compact metric space~$X$, \tw{ and that}~$K$ is a minimal set in~$X$ \tw{carrying a}~$G$-invariant Borel probability measure~$\mu$. If~$U$ and~$V$ are open sets in~$X$ such that~$V\supset U$ and~$g(V\cap K)\subset U\cap K$ for some~$g\in G$, then~$K\cap (V\setminus {\overline U})=\emptyset$. \end{lem} \begin{proof} Assume to the contrary that there is some~$u\in K\cap (V\setminus {\overline U})$. Then there is \tw{an} open neighborhood~$W\ni u$ with~$W\subset V\setminus {\overline U}$. By Lemma~\ref{lemmawas2-1}, \tw{we have}~$\mu(W\cap K)>0$. \tw{This then implies that}~$\mu(V\cap K)=\mu (g(V\cap K))\leq \mu(U\cap K)<\mu(V\cap K)$, a contradiction. \end{proof} \subsection{Amenable groups} Amenability was first introduced by von Neumann. Recall that a countable group $G$ is \tw{said to be} \emph{amenable} if there is a sequence of finite sets~$F_i$ ($i=1, 2, 3,\ \dots$) such that \[ \lim\limits_{i\to\infty}\frac{|gF_i\bigtriangleup F_i|}{|F_i|}=0 \] for every~$g\in G$, where~$|F_i|$ is the number of elements in~$F_i$. The \tw{sequence}~$(F_i)$ is called a \tw{\emph{F{\o}lner sequence}} \tw{and each~$F_i$ a F{\o}lner set}. It is well known that solvable groups and finite groups are amenable \tw{ and that }any group containing a free \tw{non-commutative} subgroup is not amenable. One may consult \tw{the monograph of Paterson}~\cite{Pa} for the proofs of the following lemmas. 
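Before turning to these lemmas, the F{\o}lner condition can be made concrete in the simplest example: for $G=\mathbb Z$ acting on itself by translation, the intervals $F_i=\{0,1,\dots,i-1\}$ satisfy $|gF_i\bigtriangleup F_i|/|F_i|=2|g|/i$ whenever $i>|g|$, which tends to $0$ for every fixed $g$. The following Python sketch (a toy illustration of ours, not part of the text) computes these ratios directly.

```python
def folner_ratio(g: int, i: int) -> float:
    """|gF_i (symmetric difference) F_i| / |F_i| for F_i = {0, ..., i-1} in Z."""
    F = set(range(i))
    gF = {g + n for n in F}            # translation action n -> g + n
    return len(gF ^ F) / len(F)

# For fixed g the ratio equals 2|g|/i once i > |g|, so it tends to 0:
# the intervals F_i form a Folner sequence, witnessing the amenability of Z.
print(folner_ratio(1, 100))    # 0.02
print(folner_ratio(3, 1000))   # 0.006
print(folner_ratio(-2, 100))   # 0.04
```

By contrast, no sequence of finite sets can make these ratios vanish simultaneously for both generators of a free non-commutative group, which is one way to see the non-amenability of such groups mentioned above.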
\begin{lem}\label{lemmawas2-3} Every subgroup of an amenable group is amenable. \end{lem} \begin{lem}\label{lemmawas2-4} A group $G$ is amenable if and only if every action of $G$ on a compact metric space~$X$ has a~$G$-invariant Borel probability measure on~$X$. \end{lem} \subsection{Dendrites} A \emph{continuum} is a \tw{non-empty} connected compact metric space. A continuum is \tw{said to be} \emph{non-degenerate} if it is not a single point. An \emph{arc} is a continuum which is homeomorphic to the closed interval~$[0, 1]$. A continuum~$X$ is \emph{uniquely arcwise connected} if for any two points~$x\not=y\in X$ there is a unique arc~$[x, y]$ in~$X$ \tw{connecting}~$x$ and~$y$. A \emph{dendrite}~$X$ is a locally connected, uniquely arcwise connected, continuum. If~$Y$ is a subcontinuum of \tw{a dendrite}~$X$, then~$Y$ is called a \emph{subdendrite} of~$X$. For a dendrite~$X$ and a point~$c\in X$, if~$X\setminus\{c\}$ is not connected, then $c$ is called a \emph{cut point} of~$X$; if~$X\setminus\{c\}$ has at least~$3$ components, then~$c$ is called a \emph{branch point} of~$X$. \tw{Lemmas~\ref{lemmawas2-5} to~\ref{lemmawas2-8} are taken} from~\cite{Na}. \begin{lem}\label{lemmawas2-5} Let~$X$ be a dendrite with metric~$d$. Then, for every~$\epsilon>0$, there is a~$\delta>0$ such that~${\rm diam}([x, y])<\epsilon$ whenever~$d(x, y)<\delta$. \end{lem} \begin{lem}\label{lemmawas2-6} Let~$X$ be a dendrite. If~$A_i\ (i=1, 2, 3, \dots)$ is a sequence of mutually disjoint sub-dendrites of~$X$, then~${\rm diam}(A_i)\rightarrow 0$ as~$i\rightarrow\infty$. \end{lem} \begin{lem}\label{lemmawas2-7} Let~$X$ be a dendrite. Then~$X$ has at most countably many branch points. If~$X$ is nondegenerate, then the cut point set of~$X$ is uncountable. \end{lem} \begin{lem}\label{lemmawas2-8} Let~$X$ be a dendrite and~$c\in X$. Then each component~$U$ of~$X\setminus\{c\}$ is open in~$X$, and~$\overline U=U\cup \{c\}$. \end{lem} Now we give a proof of the following technical lemma. 
\begin{lem}\label{lemmawas2-9} Let~$X$ be a dendrite and let~$f:X\rightarrow X$ be a homeomorphism. Suppose~$o$ is a fixed point of~$f$, \tw{and let}~$c_1, c_2$ be cut points of~$X$ different from~$o$. Suppose \tw{that}~$U$ is a component of~$X\setminus \{c_1\}$ not \tw{containing}~$o$, \tw{that}~$V$ is a component of~$X\setminus \{c_2\}$ not \tw{containing}~$o$, \tw{and that}~$f(c_1)\in V$. Then~$f(U)\subset V$. \end{lem} \begin{proof} Assume to the contrary that there is some~$u\in U$ with~$f(u)\notin V$. Since~$c_2$ is a cut point,~$f(c_1)\in V$,~$o\notin V$, and~$f(o)=o$, we have~$c_2\in [f(o), f(c_1)]$ and~$c_2\in [f(u), f(c_1)]$. This implies \tw{that}~$f^{-1}(c_2)\in [o, c_1]\cap [u, c_1]=\{c_1\}$ since~$o\notin U$. Thus~$f(c_1)=c_2$, which contradicts \tw{the assumption that}~$f(c_1)\in V$. \end{proof} If~$[a, b]$ is an arc in a dendrite~$X$, denote by~$[a, b)$,~$(a,b]$, and~$(a, b)$ the sets~$[a,b]\setminus\{b\}$,~$[a,b]\setminus\{a\}$, and~$[a,b]\setminus\{a, b\}$, respectively. \subsection{Equicontinuity} Let~$X$ be a compact metric space with metric~$d$, and let~$G$ be a group acting on~$X$. Two points~$x, y\in X$ are said to be \emph{regionally proximal} if there are sequences~$(x_i)$,~$(y_i)$ in~$X$ and~$(g_i)$ in~$G$ such that~$x_i\rightarrow x$ \tw{and}~$y_i\rightarrow y$ as~$i\rightarrow\infty$, and~$\lim g_ix_i=\lim g_iy_i=w$ for some~$w\in X$. If~$x,y$ are regionally proximal and~$x\not=y$, then $\{x, y\}$ \tw{is} said to be a \emph{non-trivial regionally proximal pair}. The action~$(X, G)$ is \emph{equicontinuous} if, for every~$\epsilon>0$, there is a~$\delta>0$ such that~$d(gx, gy)<\epsilon$ \tw{for all~$g\in G$} whenever~$d(x, y)<\delta$. The following lemma can be \tw{found} in~\cite{Au}. \begin{lem}\label{lemmawas2-10} Suppose~$(X,G)$ is a group action. Then~$(X,G)$ is equicontinuous if and only if it contains no non-trivial regionally proximal pair. 
\end{lem} \section{Proof of the main theorem} In this section we are going to show our main result. Before doing this we state two simple lemmas. \begin{lem}\label{lemmawas3-1} Suppose a group~$G$ acts on the closed interval~$[0, 1]$. If~$K\subset [0, 1]$ is minimal, then~$K$ contains at most~$2$ points. \end{lem} \begin{proof} Let~$x=\inf{K}$ and~$y=\sup{K}$. Then~$G$ preserves the set~$\{x, y\}$, so~$K=\{x,y\}$ by the minimality of~$K$. \end{proof} \begin{lem}[\tw{See}~\cite{SY}]\label{lemmawas3-2} Let~$G$ be an amenable group acting on a dendrite~$X$. Then there is a~$G$-invariant set consisting of~$1$ or~$2$ points. \end{lem} Now we are ready to prove the main result. \begin{proof}[Proof of Theorem~\ref{theoremwas1-1}] We first show that~$(K,G)$ is equicontinuous. Assume to the contrary that~$(K, G)$ is not equicontinuous. Then \tw{by} Lemma~\ref{lemmawas2-10}, there are~$u\not=v\in K$ such that~$u,v$ are regionally proximal; that is, there are sequences~$(u_i), (v_i)$ in~$X$ and~$(g_i)$ in~$G$ with \begin{equation}\label{equationwas3-1} u_i\rightarrow u, v_i\rightarrow v,\ \lim g_iu_i=\lim g_iv_i=w \end{equation} \tw{as~$i\to\infty$} for some~$w\in K$. \tw{By} Lemma~\ref{lemmawas3-2}, there are~$o_1, o_2\in X$ such that~$\{o_1, o_2\}$ is a~$G$-invariant set. Then~$[o_1, o_2]$ is~$G$-invariant by the \tw{unique} arcwise connectedness of~$X$. From the assumption,~$K$ is infinite (equicontinuity is automatic for an action on a finite set), so~$K\cap [o_1, o_2]=\emptyset$ by Lemma~\ref{lemmawas3-1}. Without loss of generality, we may suppose \tw{that}~$o_1=o_2$ \tw{and denote this common point by}~$o$; otherwise, we need only collapse~$[o_1, o_2]$ to one point. Then~$o$ is a fixed point \tw{for the action}. \noindent {\bf Case 1.} $[u, o]\cap [v, o]=\{o\}$ (see Fig.1(1)). By Lemma~\ref{lemmawas2-7}, we can \tw{choose} cut points~$c_1\in (u, o)$ and~$c_2\in (v, o)$. Let~$D_u$ be the component of~$X\setminus \{c_1\}$ which contains~$u$; let~$D_v$ be the component of~$X\setminus \{c_2\}$ which contains~$v$.
From minimality and Lemma~\ref{lemmawas2-8}, there is some~$g'\in G$ with~$g'w\in D_u$. From~\eqref{equationwas3-1} and Lemma~\ref{lemmawas2-5}, we have \begin{equation}\label{equationwas3-2} u_i\in D_u, v_i\in D_v \ {\mbox {and}}\ g'g_i[u_i, v_i]\subset D_u \end{equation} \tw{for large enough~$i$.} Write~$g=g'g_i$. Then~$o\in [u_i, v_i]$ and~$g(o)\in D_u$. This is a contradiction, since~$o$ is fixed by~$G$. \noindent {\bf Case 2.} $[u, o]\cap [v, o]=[z, o]$ for some~$z\not=o$. \noindent {\bf Subcase 2.1.} $z=v$ (see Fig.1(2)). Then~$u\not=z$ and~$z\in K$. Take a cut point~$c_1\in (u, z)$ \tw{and let}~$D_u$ be the component of~$X\setminus \{c_1\}$ which contains~$u$. Then~$v\notin D_u$, and there is some~$g\in G$ with~$gz\in D_u$ by the minimality of~$K$. Take a cut point~$c_2\in (z, o)$ which is sufficiently close to~$z$ \tw{to ensure} that~$g(c_2)\in D_u$. Let~$D_z$ be the component of~$X\setminus \{c_2\}$ which contains~$z$. By Lemma~\ref{lemmawas2-4}, there is a~$G$-invariant Borel probability measure on~$K$. Applying Lemma~\ref{lemmawas2-9}, we get~$g(D_z)\subset D_u$, \tw{which} contradicts Lemma~\ref{lemmawas2-2}, since~$z\in D_z\setminus {\overline D_u}$. \noindent {\bf Subcase 2.2.} $z=u$. \tw{In this case we can deduce a contradiction along the lines of the} argument in Subcase~2.1. \noindent {\bf Subcase 2.3.} $z\not=u$ and~$z\not=v$ (see Fig.1(3)). Take a cut point~$c_1\in (u, z)$. Let~$D_u$ be the component of~$X\setminus \{c_1\}$, which contains~$u$. Similar to the argument in Case~1, there is some~$g\in G$ with~$g(z)\in D_u$. Take a cut point~$c_2\in (z, o)$ which is sufficiently close to~$z$ \tw{to ensure} that~$g(c_2)\in D_u$. Let~$D_z$ be the component of~$X\setminus \{c_2\}$, which contains~$z$. Then~$g(D_z)\subset D_u$ by Lemma~\ref{lemmawas2-9}. This contradicts Lemma~\ref{lemmawas2-2} since~$v\in D_z\setminus {\overline D_u}$. Now we prove that if~$K$ is not finite, then~$K$ is homeomorphic to the Cantor set. 
\tw{If not, then} there is some non-degenerate connected component~$Y$ of~$K$. Clearly, for any~$g, g'\in G$, either~$g(Y)=g'(Y)$ or~$g(Y)\cap g'(Y)=\emptyset$. This, together with Lemma~\ref{lemmawas2-6} and the equicontinuity of~$(K, G)$, implies that the subgroup~$H=\{g\in G: g(Y)=Y\}$ has finite index in~$G$. \tw{It follows that}~$(Y, H)$ is minimal. This contradicts Lemma~\ref{lemmawas3-2} and Lemma~\ref{lemmawas2-3}, since~$Y$ is a non-degenerate dendrite. \end{proof} \end{document}
\begin{document} \title{A co-analytic maximal set of orthogonal measures} \author{Vera Fischer} \address{Kurt G\"odel Research Center, University of Vienna, W\"ahringer Strasse 25, 1090 Vienna, Austria} \email{[email protected]} \author{Asger T\"ornquist} \address{Kurt G\"odel Research Center, University of Vienna, W\"ahringer Strasse 25, 1090 Vienna, Austria} \email{[email protected]} \thanks{The authors wish to thank the Austrian Science Fund FWF for post-doctoral support through grant no. P 19375-N18 (T\"ornquist) and P 20835-N13 (Fischer).} \subjclass[2000]{03E15} \keywords{Descriptive set theory; constructible sets} \begin{abstract} We prove that if $V=L$ then there is a $\Pi^1_1$ maximal orthogonal (i.e. mutually singular) set of measures on Cantor space. This provides a natural counterpoint to the well-known Theorem of Preiss and Rataj \cite{preissrataj85} that no analytic set of measures can be maximal orthogonal. \end{abstract} \maketitle \section{Introduction} Let $X$ be a Polish space and let $P(X)$ be the associated Polish space of Borel probability measures on $X$ (see e.g. \cite[17.E]{kechris95}). Recall that $\mu,\nu\in P(X)$ are said to be {\it orthogonal} (or {\it mutually singular}) if there is a Borel set $B\subseteq X$ such that $\mu(B)=1$ and $\nu(B)=0$. We will write $\mu\perp\nu$. Preiss and Rataj proved in \cite{preissrataj85} that if $X$ is an uncountable Polish space then no analytic set of measures can be maximal orthogonal, answering a question raised by Mauldin. Later Kechris and Sofronidis \cite{kecsof01} gave a new proof of this result using Hjorth's theory of turbulence. The purpose of this paper is to prove that Preiss and Rataj's result is in some sense optimal. Specifically, we will prove: \begin{theorem} If $V=L$ then there is a $\Pi^1_1$ maximal set of orthogonal measures in $P(2^\omega)$. \label{mainthmv1} \end{theorem} The assumption that $V=L$ can of course be replaced by the assumption that all reals are constructible. 
Also, the proof easily relativizes to a parameter $x\in2^\omega$: If $V=L[x]$ then there is a $\Pi^1_1(x)$ maximal orthogonal set of measures in $P(2^\omega)$. Theorem \ref{mainthmv1} belongs to a line of results starting with A.W. Miller's paper \cite{miller89}. Miller proved, among several other results, that assuming $V=L$ there is a $\Pi^1_1$ maximal almost disjoint family in $\mathcal P(\omega)$, there is a $\Pi^1_1$ Hamel basis for ${\mathbb R}$ over ${\mathbb Q}$, and there is a $\Pi^1_1$ set meeting every line in ${\mathbb R}^2$ exactly twice. More recently, Miller's technique has found use in the study of maximal cofinitary subgroups of the infinite symmetric group $S_\infty$: Gao and Zhang showed in \cite{gaozha05} that if $V=L$ then there is a maximal cofinitary group generated by a $\Pi^1_1$ subset of $S_\infty$, and Kastermans in \cite{kastermans09} improved this by showing that if $V=L$ then there is a $\Pi^1_1$ maximal cofinitary subgroup of $S_\infty$. The present paper is organized into four sections. In \S 2 we introduce the basic effective descriptive set-theoretic notions related to the space $P(2^\omega)$, in particular, we introduce a natural notion of a code for a measure on $2^\omega$. We also revisit a product measures construction due to Kechris and Sofronidis. Theorem \ref{mainthmv1} is proved in \S 3. The proof hinges on a method for coding a given real into a non-atomic measure while keeping the {\it measure class} of the original measure intact in the process; this is the content of the ``Coding Lemma'' \ref{thecodinglemma}. Finally in \S 4 we show that a maximal orthogonal family of continuous measures always has size continuum, and that if there is a Cohen real over $L$ in $V$ then there is no $\Pi^1_1$ maximal set of orthogonal measures. 
{\it Remark.} In the present paper we have attempted to give a completely elementary account of Miller's technique as it applies to Theorem \ref{mainthmv1} above, and to provide the details of the argument while relying only on standard methods that can be found in places such as \cite[\S 13]{kanamori97} or \cite[Ch. 5]{drake74}. A somewhat different exposition of the details of Miller's technique can be found in Kastermans' thesis \cite{kastermans06}. \section{Preliminaries} For $s\in 2^{<\omega}$, let $$ N_s=\{x\in2^\omega: s\subseteq x\}, $$ the basic neighbourhood defined by $s$. Define $$ p(2^\omega)=\{f:2^{<\omega}\rightarrow[0,1]: f(\emptyset)=1\wedge (\forall s\in 2^{<\omega})f(s)=f(s^\smallfrown 0)+ f(s^\smallfrown 1)\}. $$ Then $p(2^\omega)\subseteq [0,1]^{2^{<\omega}}$ is closed, and an easy application of Kolmogorov's consistency Theorem shows that for each $f\in p(2^\omega)$ there is a unique $\mu_f\in P(2^\omega)$ such that $\mu_f(N_s)=f(s)$ for all $s\in 2^{<\omega}$, see \cite[17.17]{kechris95}. Conversely, if $\mu\in P(2^\omega)$ then $f(s)=\mu(N_s)$ defines $f\in p(2^\omega)$ such that $\mu_f=\mu$, thus $f\mapsto\mu_f$ is a bijection. We will call the element $f\in p(2^\omega)$ the {\it code} for $\mu_f$. Note that if $s_n$ enumerates $2^{<\omega}$ and we let $f_n:2^\omega\rightarrow \mathbb{R}$ be defined as follows: $$ f_n(x) = \left\{ \begin{array}{ll} 1 & \mbox{if $s_n\subseteq x$}\\ 0 & \mbox{otherwise},\end{array} \right. $$ then the metric on $P(2^\omega)$ defined by $$ \delta(\mu,\nu)=\sum_{n=0}^{\infty} 2^{-n-1}\dfrac{|\int f_n d\mu-\int f_n d\nu|}{\|f_n\|_\infty} $$ given in \cite[17.19]{kechris95} makes the map $f\mapsto \mu_f$ an isometric bijection if we equip $p(2^\omega)$ with the metric $$ d(f,g)=\sum_{n=0}^\infty 2^{-n-1}|f(s_n)-g(s_n)|. $$ Let $(q_i:i\in\omega)$ be a recursive enumeration of $$ \{q:2^n\to \mathbb Q\cap[0,1]: n\in\omega\wedge \sum_{s\in\dom(q)} q(s)=1\}. 
$$ For each $q_i$, let $\hat q_i\in p(2^\omega)$ be the unique element of $p(2^\omega)$ such that $$ (\forall s\in \dom(q_i))(\forall k)\hat q_i(s\smallfrown 0^k)=q_i(s), $$ where $0^k$ denotes a sequence of zeros of length $k$. Clearly the sequence $(\hat q_i: i\in\omega)$ is dense in $p(2^\omega)$, and it is routine to see that the relations $P,Q\subseteq\omega^4$ defined by $$ P(i,j,m,k)\iff d(\hat q_i,\hat q_j)\leq\frac m {k+1} $$ $$ Q(i,j,m,k)\iff d(\hat q_i,\hat q_j)<\frac m {k+1} $$ are recursive. Thus $(\hat q_i:i\in\omega)$ provides a recursive presentation (in the sense of \cite[3B]{moschovakis80}) of $p(2^\omega)$, and so $(\mu_{\hat q_i}: i\in\omega)$ provides a recursive presentation of $P(2^\omega)$. The map $f\mapsto\mu_f$ is then a recursive isomorphism between $p(2^\omega)$ and $P(2^\omega)$. So from a descriptive set-theoretic point of view there is really no difference between working with $P(2^\omega)$ or $p(2^\omega)$. In particular, it doesn't matter in hierarchy complexity calculations if we deal with the codes for measures, or with the measures themselves. \begin{remark} Although we could easily have given $P(2^\omega)$ a recursive presentation directly without the detour via $p(2^\omega)$, the space $p(2^\omega)$ will still be useful to us. Namely, elements of $P(2^\omega)$ are formally functions $\mu:\mathcal B(2^\omega)\to [0,1]$ defined on the Borel sets $\mathcal B(2^\omega)$, and so formally $\mu\notin L_\delta$ for any $\delta<\omega_1$. However, since codes are simply functions from $2^{<\omega}$ to $[0,1]$, the code for $\mu$ may be in $L_\delta$ for some $\delta<\omega_1$, even if $\mu\notin L_\delta$. \end{remark} Recall from real analysis that if $\mu,\nu\in P(2^\omega)$, then $\mu$ is {\it absolutely continuous} with respect to $\nu$, written $\mu \ll \nu$, if for every Borel set $B\subseteq 2^\omega$, $\nu(B)=0$ implies $\mu(B)=0$.
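The coding machinery above is concrete enough to compute with at finite depth. The Python sketch below (the helper names are ours, and the enumeration of $2^{<\omega}$ by length and then lexicographically is just one admissible choice of $(s_n)$) verifies the additivity condition defining $p(2^\omega)$ and approximates the metric $d$ for two simple codes: the coin-tossing measure, $f(s)=2^{-\mathrm{lh}(s)}$, and the point mass at $0^\omega$.

```python
from itertools import product

def strings(max_len):
    """Initial segment of an enumeration (s_n) of 2^{<omega}: by length, then lexicographically."""
    yield ""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def uniform(s):
    # code of the coin-tossing measure: mu(N_s) = 2^{-lh(s)}
    return 2.0 ** (-len(s))

def dirac_zero(s):
    # code of the point mass at 000...: mu(N_s) = 1 iff s is a string of zeros
    return 1.0 if all(b == "0" for b in s) else 0.0

def is_code(f, depth):
    """Check f(empty) = 1 and f(s) = f(s0) + f(s1), down to the given depth."""
    return abs(f("") - 1.0) < 1e-12 and all(
        abs(f(s) - f(s + "0") - f(s + "1")) < 1e-12 for s in strings(depth - 1)
    )

def d_approx(f, g, depth):
    """Truncation of d(f,g) = sum_n 2^{-n-1} |f(s_n) - g(s_n)|."""
    return sum(2.0 ** (-(n + 1)) * abs(f(s) - g(s)) for n, s in enumerate(strings(depth)))

print(is_code(uniform, 8), is_code(dirac_zero, 8))   # True True
print(d_approx(uniform, dirac_zero, 10))             # strictly between 0 and 1
```

Since each omitted term of $d$ is bounded by $2^{-n-1}$, the truncation error after the first $N$ terms is below $2^{-N}$, so the approximation converges geometrically in the depth.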
We say that $\mu,\nu\in P(2^\omega)$ are {\it absolutely equivalent}, written $\mu\approx\nu$, if $\mu\ll\nu$ and $\nu\ll\mu$; and we say that $\mu,\nu$ are {\it orthogonal} (or mutually singular), written $\mu\bot\nu$, if there is a Borel set $B\subseteq 2^\omega$ such that $\mu(B)=0$ and $\nu(2^\omega\setminus B)=0$. \begin{lemma} (a) The relations $\ll$, $\approx$ and $\bot$ are arithmetical. (b) The set $$ P_c(2^\omega)=\{\mu\in P(2^\omega):\mu\;\hbox{is non-atomic}\} $$ is arithmetical. \end{lemma} \begin{proof} (a) To see that $\ll$ and $\approx$ are arithmetical, note that by \cite[p. 105]{kechris95} \begin{align*} \mu\ll\nu\iff(\forall\epsilon>0)(\exists \delta>0)(\forall B\subseteq 2^\omega\;\hbox{Borel}) (\nu(B)<\delta \rightarrow \mu(B)<\epsilon) \end{align*} and so using \cite[17.10]{kechris95} this is equivalent to \begin{align*} \mu\ll\nu\iff (\forall\epsilon>0)(\exists\delta>0)(\forall s_1,\dots, s_n\in 2^{<\omega})&(\nu(\bigcup_{i=1}^n N_{s_i})<\delta\longrightarrow\\ &\mu(\bigcup_{i=1}^n N_{s_i})<\epsilon). \end{align*} To see that $\bot$ is arithmetical, note that \begin{align*} \mu\bot\nu\iff &(\forall\epsilon>0)(\exists s_1,\dots, s_n\in 2^{<\omega})(\mu(\bigcup_{i=1}^n N_{s_i})<\epsilon\wedge \nu(\bigcup_{i=1}^n N_{s_i})>1-\epsilon). \end{align*} (b) We claim that $$ \mu\in P_c(2^\omega)\iff (\forall\epsilon >0)(\exists n)(\forall s\in 2^{<\omega})(\hbox{lh}(s)=n\rightarrow \mu(N_s)<\epsilon). $$ The implication from right to left is clear. To see the reverse implication, note that the tree $$ \{s\in 2^{<\omega}:\mu(N_s)>\epsilon\} $$ is finite branching, so by K\"{o}nig's Lemma it either has finite height or it has an infinite branch. In the latter case the infinite branch $x$ satisfies $\mu(\{x\})\geq\epsilon$, and so $\mu$ has an atom. \end{proof} \begin{remark} We let $$ p_c(2^\omega)=\{f\in p(2^\omega):\mu_f\text{ is non-atomic}\}, $$ which is arithmetical by the above. \end{remark} We now recall a construction due to Kechris and Sofronidis \cite[p. 1463f]{kecsof01}, which is based on a result of Kakutani \cite{kakutani48} regarding the equivalence of product measures.
For $x\in 2^\omega$, define $\alpha^x\in [0,1]^\omega$ by $$ \alpha^x(n) = \left\{ \begin{array}{ll} \frac{1}{4}(1+ \frac{1}{\sqrt{n+1}}) & \mbox{if $x(n)=1$}\\ \frac{1}{4} & \mbox{if $x(n)=0$}.\end{array} \right. $$ Then we let $\mu^x\in P(2^\omega)$ be the product measure on $2^\omega$ defined by $$ \mu^x=\prod_{n=0}^\infty [\alpha^x(n)\delta_0+(1-\alpha^x(n))\delta_1] $$ where $\delta_0,\delta_1$ are the point measures on $2=\{0,1\}$. The function $x\mapsto\mu^x$ is continuous. The corresponding map $2^\omega \to p(2^\omega):x\mapsto f^x$ such that $\mu^x=\mu_{f^x}$ for all $x$, is given by $$ f^x(s)=\prod_{k=0}^{\hbox{lh}(s)-1}[(1-s(k))\alpha^x(k)+s(k)(1-\alpha^x(k))], $$ and is clearly recursive. For $x,x^\prime\in 2^\omega$, let $$ xE_I x^\prime\iff \sum_{n=0}^\infty\frac{|x(n)-x^\prime(n)|}{n+1}<\infty. $$ From Kakutani's theorem we obtain that if $xE_Ix^\prime$ then $\mu^x\approx \mu^{x^\prime}$ and if $\neg{(x{E_I} x^\prime)}$ then $\mu^x\bot\mu^{x^\prime}$. (See \cite[p. 1463]{kecsof01}.) For the next lemma it is worth recalling that $\mu,\nu\in P(2^\omega)$ are orthogonal if and only if $$ \neg(\exists\eta\in P(2^\omega))(\eta\ll\mu\wedge\eta\ll\nu). $$ Moreover, $\ll$ has the {\it ccc below} property: For any $\mu\in P(2^\omega)$, any family of pairwise orthogonal measures which are $\ll$ below $\mu$ is countable, see e.g. the proof of \cite[Theorem 3.1]{kecsof01}. \begin{lemma} (a) If $\mu\in P(2^\omega)$ then $$ \{x\in2^\omega:\mu^x\perp\mu\} $$ is comeagre. (b) If $(\mu_n)$ is a finite or countable sequence of measures on $2^\omega$ then there is $\nu\in P(2^\omega)$ such that $$ (\forall n) \nu\perp\mu_n $$ and $\nu$ is arithmetical in $(\mu_n)$. \label{ortholemma} \end{lemma} \begin{proof} (a) Since $$ \{x\in 2^\omega:\neg\mu^x\bot \mu\} $$ is Borel and clearly $E_I$ invariant, and every $E_I$ class is dense, if it is non-meagre it must be comeagre by the topological $0$--$1$ law.
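The specific coefficient $\frac{1}{\sqrt{n+1}}$ is chosen so that $E_I$ matches Kakutani's dichotomy for product measures. The following second order computation (our addition; only a sketch of the estimate behind the quoted fact) indicates why: for Bernoulli measures with parameters $\alpha,\alpha'$ the Hellinger affinity is $\sqrt{\alpha\alpha'}+\sqrt{(1-\alpha)(1-\alpha')}$, and Kakutani's theorem gives equivalence of the products if the sum over the coordinates of $1-(\text{affinity})$ converges, and orthogonality otherwise.

```latex
% Sketch (our addition). In a coordinate $n$ with $x(n)\neq x'(n)$, the two
% factors are Bernoulli with parameters $\alpha=\tfrac14$ and
% $\alpha'=\tfrac14(1+\epsilon)$, where $\epsilon=\tfrac{1}{\sqrt{n+1}}$,
% so that $1-\alpha'=\tfrac34(1-\tfrac{\epsilon}{3})$. Expanding the
% Hellinger affinity to second order:
\sqrt{\alpha\alpha'}+\sqrt{(1-\alpha)(1-\alpha')}
  =\tfrac14\sqrt{1+\epsilon}+\tfrac34\sqrt{1-\tfrac{\epsilon}{3}}
  =1-\tfrac{\epsilon^{2}}{24}+O(\epsilon^{3}),
% hence the $n$-th Kakutani summand is
1-\Big(\sqrt{\alpha\alpha'}+\sqrt{(1-\alpha)(1-\alpha')}\Big)
  =\frac{1}{24(n+1)}+O\big((n+1)^{-3/2}\big),
% and the Kakutani series converges exactly when
% $\sum_{n}\tfrac{|x(n)-x'(n)|}{n+1}<\infty$, i.e. when $x E_I x'$.
```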
But then there must be an uncountable sequence $x_\alpha\in 2^\omega$, $\alpha<\omega_1$, such that if $\alpha\neq \beta$ then $\neg x_\alpha E_I x_\beta$ and $\mu^{x_\alpha}\not\perp \mu$. Choosing $\eta_\alpha\in P(2^\omega)$ with $\eta_\alpha\ll\mu^{x_\alpha}$ and $\eta_\alpha\ll\mu$ then gives an uncountable family of pairwise orthogonal measures $\ll$ below $\mu$, contradicting the ccc below property of $\ll$. (b) The set $\{x\in 2^\omega:(\forall n)(\mu^x\bot \mu_n)\}$ is arithmetical in $(\mu_n)$, and by (a) it is comeagre. Thus the second claim follows from \cite[4.1.4]{kechris73}. \end{proof} \section{Proof of the main theorem} In this section we prove Theorem \ref{mainthmv1}. It is clearly enough to establish the following: \begin{theorem} If $V=L$ then there is a $\Pi^1_1$ maximal set of orthogonal measures in $P_c(2^\omega)$.\label{mainthmv2} \end{theorem} Then Theorem \ref{mainthmv1} follows by taking the union of a $\Pi^1_1$ maximal set of orthogonal measures in $P_c(2^\omega)$ with the set of all point measures (Dirac measures), which clearly is a $\Pi^0_1$ set. Our notation follows that of \cite[p. 167ff.]{kanamori97}, with very few differences. For convenience we recall the definitions and facts that are most important for the present paper. The canonical wellordering of $L$ will be denoted $<_L$. The language of set theory (LOST) is denoted $\mathcal L_\epsilon$. If $x\in2^\omega$ then we define a binary relation $\epsilon_x$ on $\omega$ by $$ m\inx{x}n\iff x(\langle m,n\rangle)=1, $$ where $\langle \cdot,\cdot\rangle$ refers to some standard G\"odel pairing function coding a pair of integers by a single integer. We let $$ M_x=(\omega,\epsilon_x), $$ the $\mathcal L_\epsilon$ structure coded by $x$. If $M_x$ is wellfounded and extensional then we denote by $\tr(M_x)$ the transitive collapse of $M_x$, and by $\pi_x:M_x\to\tr(M_x)$ the corresponding isomorphism. The following proposition encapsulates the basic descriptive set-theoretic correspondences between $x$, $M_x$ and the satisfaction relation. We refer to \cite[13.8]{kanamori97} and the remarks immediately thereafter for a proof.
\begin{prop} (a) If $\varphi(v_0,\ldots,v_{k-1})$ is a LOST formula with all free variables shown then $$ \{(x,n_0,\ldots, n_{k-1})\in2^\omega\times\omega\times\cdots\times\omega: M_x\models \varphi[n_0,\ldots,n_{k-1}]\} $$ is arithmetical. (b) For $x\in2^\omega$ such that $M_x$ is wellfounded and extensional, the relation $$ \{(m,f)\in\omega\times p(2^\omega): \pi_x(m)=f\} $$ is arithmetical in $x$. The same holds if we replace $p(2^\omega)$ with $\omega^\omega$, $2^\omega$, or other reasonable Polish product spaces. (c) There is a LOST sentence $\sigma_0$ such that if $M_x\models\sigma_0$ and $M_x$ is wellfounded and extensional, then $M_x\simeq L_\delta$ for some limit ordinal $\delta<\omega_1$. (d) There is a LOST formula $\varphi_0(v_0,v_1)$ which defines the canonical wellordering of $L_\delta$ for all $\delta>\omega$.\label{basicprop} \end{prop} {\it Remark.} For $x\in2^\omega$ and $n_0,n_1\in\omega$ it will be convenient to write $n_0<_{\varphi_0}^x n_1$ as an abbreviation of $M_x\models \varphi_0[n_0,n_1]$. By (a) in the previous proposition $n_0<_{\varphi_0}^x n_1$ is arithmetical uniformly in $x$. As motivation for the proof of Theorem \ref{mainthmv1}, we first prove the following easier result: \begin{prop} If $V=L$ then there is a $\Delta^1_2$ maximal set of orthogonal measures.\label{easyprop} \end{prop} \begin{proof} Work in $L$. The construction is done by induction on $\omega_1$. We choose a sequence $\langle\mu_\beta:\beta<\omega_1\rangle$ such that at each $\alpha<\omega_1$ it is the case that $\mu_\alpha$ is the $<_L$-least measure such that $\mu_\alpha\perp\mu_\beta$ for all $\beta<\alpha$. That $\mu_\alpha$ always exists follows from Lemma \ref{ortholemma}. Then it is easy to see that $$ A=\{\mu_\alpha:\alpha<\omega_1\} $$ is a maximal orthogonal set of measures.
To see that $A$ is $\Delta^1_2$, define a relation $P\subseteq p(2^\omega)^{\leq\omega}\times 2^\omega$ by letting $P(s,x)$ if and only if \begin{enumerate} \item $M_x$ is wellfounded and extensional, $M_x\models\sigma_0$, and for some $m\in\omega$ we have $\pi_x(m)=s$. \item $\{\mu_{s(n)}:n<\lh(s)\}$ is a set of orthogonal measures. \item For all $n<\lh(s)$ it holds that $s(n)$ is the $<_L$-smallest code for a measure orthogonal to all $s(k)$ for which $s(k)<_L s(n)$. \end{enumerate} Condition (1) is clearly $\Pi^1_1$, and (2) is arithmetical. Finally, if (1) and (2) hold then (3) may be expressed by saying \begin{align*} &(\forall n<\lh(s))(\forall f\in p(2^\omega))(\forall n_1)[(\exists n_0) n_0<_{\varphi_0}^x n_1\wedge \pi_x(n_0)=f\wedge\pi_x(n_1)=s(n)]\\ &\longrightarrow [(\exists l)(\exists l')\neg s(l)\perp f\wedge \pi_x(l')=s(l)\wedge l'<_{\varphi_0}^x n_1]. \end{align*} Note that $P(s,x)$ holds if and only if $s$ is a sequence of codes for the measures in some initial segment $\{\mu_\alpha:\alpha<\beta\}$, and that the inductive construction of this initial segment is witnessed in $L_\delta\simeq M_x$, for some limit $\delta<\omega_1$. It then follows that \begin{align*} \mu\in A \iff& (\exists s)(\exists x) [P(s,x)\wedge (\exists n) \mu_{s(n)}=\mu]\\ \iff& (\forall f)(\forall s)(\forall x)[(P(s,x)\wedge \mu=\mu_f)\longrightarrow \\ &((\forall l) s(l)<_L f\vee (\exists l) s(l)=f)]. \end{align*} Since the reference to $<_L$ can be replaced by $<_{\varphi_0}^x$, this shows that $A$ is $\Delta^1_2$. \end{proof} To prove Theorem \ref{mainthmv2} we will use the technique developed by A.W. Miller in \cite{miller89}.
The idea is to replace $P$ in the previous proof with a $\Pi^1_1$ relation $\hat P\subseteq p_c(2^\omega)\times2^\omega$ with the property that for all $f\in p_c(2^\omega)$ if $$ (\exists x) \hat P(f,x) $$ then $f$ ``codes'' some witness $x\in2^\omega$ to this fact; more precisely, we will have $$ (\exists x)\hat P(f,x)\iff (\exists x\in\Delta^1_1(f)) \hat P(f,x). $$ Our maximal orthogonal set of measures will then be $$ \hat A=\{\mu_f\in P_c(2^\omega): (\exists x) \hat P(f,x)\}=\{\mu_f\in P_c(2^\omega): (\exists x\in\Delta^1_1(f)) \hat P(f,x)\}, $$ which will be $\Pi^1_1$ since $(\exists x\in\Delta^1_1(f))$ may be replaced by a universal quantification, see e.g. \cite[4.19]{manwei85}. What $f\in p_c(2^\omega)$ specifically will code is on the one hand the part of the inductive construction witnessing that $\mu_f\in\hat A$, and on the other hand $x\in2^\omega$ such that $M_x\simeq L_\delta$ for some limit $\delta<\omega_1$ in which the inductive construction takes place. We need the following facts for which B. Kastermans' thesis \cite{kastermans06} is an excellent reference; see also \cite[\S 3]{kasstezha08}. \begin{lemma} (a) There are unboundedly many limit ordinals $\delta<\omega_1$ such that there is $x\in L_{\delta+\omega}\cap2^\omega$ such that $M_x\simeq L_\delta$. (b) If $M_x\simeq L_\delta$ for some limit $\delta<\omega_1$ then there is $x'\in\Delta^1_1(x)$ such that $M_{x'}\simeq L_{\delta+\omega}$.\label{kastlem} \end{lemma} {\it Remark.} (a) follows from \cite[Lemma 3.6]{kasstezha08}, (b) from \cite[Lemma 3.5]{kasstezha08}. \noindent{\it Coding a real into a measure.} We now describe a way of coding a given real $z\in2^\omega$ into a measure $\mu\in P_c(2^\omega)$. Given $\mu\in P_c(2^\omega)$ and $s\in 2^{<\omega}$ we let $t(s,\mu)$ be the lexicographically least $t\in 2^{<\omega}$ such that $s\subseteq t$, $\mu(N_{t^\smallfrown 0})>0$ and $\mu(N_{t^\smallfrown 1})>0$, if such a $t$ exists, and otherwise we let $t(s,\mu)=\emptyset$.
Define inductively $t^\mu_n\in 2^{<\omega}$ by letting $t^\mu_0=\emptyset$ and $$ t^\mu_{n+1}=t({t^\mu_n}^\smallfrown 0,\mu). $$ Note that since $\mu$ is non-atomic we have that $\lh(t_{n+1}^\mu)>\lh(t_n^\mu)$; we let $t_\infty^{\mu}=\bigcup_{n=0}^\infty t_n^\mu$. For $f\in p_c(2^\omega)$ and $n\in\omega\cup\{\infty\}$ we will write $t^f_n$ for $t^{\mu_f}_n$. Clearly the sequence $(t_n^f:n\in\omega)$ is recursive in $f$. Define $R\subseteq p_c(2^\omega)\times 2^\omega$ as follows: \begin{align*} R(f,z) \iff (\forall n\in\omega)\, &[z(n)=1 \leftrightarrow (f({t_n^f}^\smallfrown 0)=\frac{2}{3}f(t^f_n) \wedge f({t_n^f}^\smallfrown 1)=\frac{1}{3}f(t_n^f))] \\ \wedge\, &[z(n)=0 \leftrightarrow (f({t_n^f}^\smallfrown 0)=\frac{1}{3} f(t_n^f)\wedge f({t_n^f}^\smallfrown 1)=\frac{2}{3} f(t_n^f))]. \end{align*} \begin{codinglemma} Given $z\in 2^\omega$ and $f\in p_c(2^\omega)$ there is $g\in p_c(2^\omega)$ such that $\mu_f\approx\mu_g$ and $R(g,z)$. Moreover, $g$ may be found in a recursive way given $f$ and $z$: There is a recursive function $G:p_c(2^\omega)\times 2^\omega\to p_c(2^\omega)$ such that $\mu_{G(f,z)}\approx \mu_f$ and $R(G(f,z),z)$ for all $f\in p_c(2^\omega)$, $z\in 2^\omega$.\label{thecodinglemma} \end{codinglemma} \begin{proof} We define $G(f,z)$ inductively, starting with $G(f,z)(\emptyset)=1$. Suppose $G(f,z)\rest 2^{< n}$ has been defined. Then for $s\in 2^n$ we let $$ G(f,z)(s^\smallfrown i)= \left\{ \begin{array}{ll} \frac{2}{3}G(f,z)(s) & \text{if $s=t_k^{f}$ for some $k\in\omega$, and } \\ & z(k)=1, i=0;\\ \frac{1}{3} G(f,z)(s) & \text{if $s=t_k^{f}$ for some $k\in\omega$, and }\\ & z(k)=1, i=1;\\ \frac{1}{3} G(f,z)(s) & \text{if $s=t_k^{f}$ for some $k\in\omega$, and }\\ & z(k)=0, i=0;\\ \frac{2}{3} G(f,z)(s) & \text{if $s=t_k^{f}$ for some $k\in\omega$, and }\\ & z(k)=0, i=1;\\ 0 & \text{if } f(s)=0;\\ \dfrac{G(f,z)(s)}{f(s)} f(s^\smallfrown i) & \mbox{otherwise}.\end{array} \right.
$$ Define for $s\in 2^{<\omega}$ $$ \theta(s)=\frac{G(f,z)(s)}{f(s)} $$ whenever $f(s)\neq 0$, and let $\theta(s)=0$ otherwise. Note that if $x\neq t_\infty^f$ and $s\in 2^{<\omega}$ is the longest sequence such that $s\subseteq x$ and $s\subseteq t_\infty^f$ then $\theta(x\upharpoonright n)$ is constant for $n>\lh(s)$. Let $(s_i:i\in\omega)$ enumerate the set $$ \{ s \in 2^{<\omega}: s\nsubseteq t_\infty^f\wedge s\upharpoonright(\lh(s)-1)\subseteq t_\infty^f\}. $$ Then for any Borel set $B$, $$ \mu_{G(f,z)}(B)=\sum_{i=0}^\infty \theta(s_i) \mu_f(B\cap N_{s_i}), $$ and since $\theta(s_i)=0$ if and only if $f(s_i)=0$ this shows that $\mu_{G(f,z)}(B)=0$ if and only if $\mu_f(B)=0$. Thus $\mu_{G(f,z)}\approx\mu_f$, as required. In fact, if we define $$ \hat\theta(x)=\lim_{n\to\infty} \theta(x\upharpoonright n) $$ if $x\neq t_\infty^f$ and $\hat\theta(t_\infty^f)=0$ then $$ \frac{d\mu_{G(f,z)}}{d\mu_f}=\hat\theta. $$ Finally, it is clear from the definition of $G$ that $R(G(f,z),z)$, and that $G$ is recursive. \end{proof} \begin{remark} The relation $R(f,z)$ may be read as ``$f$ codes $z$''. The set $$ \dom(R)=\{f\in p_c(2^\omega): (\exists z) R(f,z)\} $$ is $\Pi^0_1$ since deciding whether a given $f\in p_c(2^\omega)$ codes {\it some} $z\in2^\omega$ only requires us to check for all $n$ that either $f({t^f_n}^\smallfrown i)=\frac{1}{3}f(t^f_n)$ or $f({t^f_n}^\smallfrown i)=\frac{2}{3}f(t^f_n)$ holds, for $i=0$ and $i=1$. Thus we may define a $\Pi^0_1$ function $r$ on $\dom(R)$ by $$ r(f)=z\iff R(f,z). $$ \end{remark} \begin{proof}[Proof of Theorem \ref{mainthmv2}] Work in $L$. We first define a maximal set of orthogonal measures by induction on $\omega_1$, and then subsequently see that this set is $\Pi^1_1$. Let $\langle\mu_\alpha:\alpha<\omega_1\rangle$ be the sequence defined in the proof of Proposition \ref{easyprop}.
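As a worked instance of the Coding Lemma (our example, not part of the original text), take $f$ to be the code of the uniform measure, $f(s)=2^{-\lh(s)}$. Then $t(s,\mu_f)=s$ for every $s$, so the splitting nodes are $t^f_n=0^n$ and $t^f_\infty=000\cdots$. For a real $z$ with $z(0)=1$ and $z(1)=0$, the first levels of $g=G(f,z)$ are as follows.

```latex
% Worked example (our addition): $f(s)=2^{-\mathrm{lh}(s)}$, $t^f_n=0^n$,
% $z=\langle 1,0,\ldots\rangle$.
g(\emptyset)=1,\qquad
g(\langle 0\rangle)=\tfrac23,\quad g(\langle 1\rangle)=\tfrac13
  \qquad(\text{node } t^f_0=\emptyset,\ z(0)=1),
\\
g(\langle 0,0\rangle)=\tfrac13\cdot\tfrac23=\tfrac29,\quad
g(\langle 0,1\rangle)=\tfrac23\cdot\tfrac23=\tfrac49
  \qquad(\text{node } t^f_1=\langle 0\rangle,\ z(1)=0).
\\
% Off the spine $t^f_\infty$ the mass is split in the same proportions as $f$:
g(\langle 1\rangle^\smallfrown i)
  =\frac{g(\langle 1\rangle)}{f(\langle 1\rangle)}\,f(\langle 1\rangle^\smallfrown i)
  =\frac{1/3}{1/2}\cdot\frac14=\tfrac16 \qquad(i=0,1),
% so $\theta=g/f$ is constant (here $=\tfrac23$) below $\langle 1\rangle$,
% in accordance with the proof above.
```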
We will define a new sequence of measures $\langle\nu_\alpha:\alpha<\omega_1\rangle$ such that $\mu_\alpha\approx\nu_\alpha$, but where the resulting maximal orthogonal set $$ \hat A=\{\nu_\alpha:\alpha<\omega_1\} $$ is $\Pi^1_1$. Suppose $\langle\nu_\alpha\in P_c(2^\omega):\alpha<\beta\rangle$ has been defined for some $\beta<\omega_1$. Let $s_0\in p_c(2^\omega)^{\leq\omega}$ be $<_L$ least such that $$ \{\mu_\alpha:\alpha\leq\beta\}=\{\mu_{s_0(n)}:n<\lh(s_0)\}, $$ and $s_0(0)$ is the $<_L$ largest element of $\{s_0(n):n<\lh(s_0)\}$. Let $x_0\in2^\omega$ be $<_L$ least such that $M_{x_0}\simeq L_\delta$ for some limit $\delta<\omega_1$ and $s_0\in L_\delta$, and $x_0\in L_{\delta+\omega}$. That $x_0$ exists follows from Lemma \ref{kastlem}. We let $\nu_\beta=\mu_{G(s_0(0),\langle s_0,x_0\rangle)}$, where $\langle\cdot,\cdot\rangle$ denotes some (fixed) reasonable recursive way of coding a pair $(s,x)\in p_c(2^\omega)^{\leq\omega}\times 2^\omega$ as a single element of $2^\omega$. Note that $G(s_0(0),\langle s_0,x_0\rangle)\in L_{\delta+\omega}$, since $G$ is recursive. It is clear that $$ \hat A=\{\nu_\alpha:\alpha<\omega_1\} $$ is a maximal orthogonal set of measures. Thus it remains only to see that $\hat A$ is $\Pi^1_1$. We first define a relation $Q\subseteq p_c(2^\omega)^{\leq\omega}\times2^\omega$, similar to $P$ in the proof of Proposition \ref{easyprop}. We let $Q(s,x)$ if and only if \begin{enumerate}[\hspace{1.6em}(a)] \item $M_x$ is wellfounded and extensional, $M_x\models\sigma_0$, and for some $m\in\omega$ we have $\pi_x(m)=s$. \item $\{\mu_{s(n)}:n<\lh(s)\}$ is a set of orthogonal continuous measures. \item For all $n<\lh(s)$ it holds that $s(n)$ is the $<_L$-smallest code for a continuous measure orthogonal to all $s(k)$ for which $s(k)<_L s(n)$. \item $(\forall k>0) s(k)<_L s(0)$. \end{enumerate} That the relation $Q$ is $\Pi^1_1$ follows as in the proof of Proposition \ref{easyprop}.
Now define a relation $\hat P\subseteq p_c(2^\omega)\times2^\omega$ by letting $\hat P(f,x)$ if and only if \begin{enumerate} \item $M_x$ is wellfounded and extensional, $M_x\models\sigma_0$, and for some $m\in\omega$ we have $\pi_x(m)=f$. \item $f\in\dom(R)$, $r(f)=\langle s,w\rangle$ for some $(s,w)\in p_c(2^\omega)^{\leq\omega}\times2^\omega$, and $Q(s,w)$. \item $(\forall s'\in p_c(2^\omega)^{\leq\omega})([\{s(n):n<\lh(s)\}=\{s'(n):n<\lh(s')\}\wedge s(0)=s'(0)]\longrightarrow \neg (s'<_L s))$. \item If $M_w\simeq L_{\delta'}$ then $w\in L_{\delta'+\omega}$, and $w$ is $<_L$ least such that this holds and for some $m\in\omega$, $\pi_w(m)=s$. \item $f=G(s(0),\langle s,w\rangle)$. \end{enumerate} Conditions (1) and (2) are $\Pi^1_1$ and (5) is $\Pi^0_1$. If (1) and (2) hold then (3) is equivalent to \begin{align*} (\forall s'\in p_c(2^\omega)^{\leq\omega})((\forall l)(\exists l') s(l)=s'(l')\wedge (\forall l)(\exists l') s'(l)=s(l')\wedge s(0)=s'(0))\\ \longrightarrow ((\forall n_0)(\forall n_1) (\pi_x(n_0)=s\wedge \pi_x(n_1)=s'\longrightarrow \neg n_1<^x_{\varphi_0} n_0)), \end{align*} which is a $\Pi^1_1$ predicate. To verify that (4) is a $\Pi^1_1$ condition, define as in \cite[p. 170]{kanamori97} the restriction $M_x\upharpoonright k$, for $x\in2^\omega$ and $k\in\omega$, to be the $\mathcal L_{\epsilon}$ structure $$ M_x\upharpoonright k=(\{n: n\inx x k\},\epsilon_x).
$$ Assuming that (1)--(3) hold, (4) is equivalent to the conjunction of the following two conditions: \begin{align*} (\forall k)&((M_x\upharpoonright k\models\sigma_0\wedge (\exists l) l\inx x k\wedge M_x\upharpoonright l\simeq M_w)\longrightarrow\\ &((\exists n_0, n_1) n_0\inx x k\wedge n_1\inx x k\wedge \pi_x(n_0)=s\wedge \pi_x(n_1)=w)) \end{align*} and \begin{align*} (\forall w')(\forall k)&((M_{w'}\simeq M_x\upharpoonright k\wedge M_{w'}\models \sigma_0\wedge w'<_L w)\longrightarrow\\ &(\forall n_0, n_1)((n_0\inx x k\wedge n_1\inx x k)\longrightarrow \pi_x(n_0)\neq s\wedge \pi_x(n_1)\neq w')), \end{align*} where $\simeq$ denotes isomorphism between $\mathcal L_\epsilon$ structures. Since $\simeq$, which is $\Sigma^1_1$, only occurs on the left-hand side of the above implications, and since $<_L$ may be replaced by using $<_{\varphi_0}^x$ as before, this shows that (4) may be replaced by a $\Pi^1_1$ predicate, which proves that $\hat P$ is a $\Pi^1_1$ relation. It is clear from the definition of $\hat P$ that $$ \hat A=\{\mu_f\in P_c(2^\omega): (\exists x)\hat P(f,x)\}. $$ To see that $\hat A$ is $\Pi^1_1$, suppose that $\mu_f\in \hat A$. Then $\hat P(f,x)$ for some $x$, and so $f\in\dom(R)$ and $r(f)=\langle s,w\rangle$ where $M_w\simeq L_{\delta'}$. By Lemma \ref{kastlem} there is $w'\in\Delta^1_1(w)$ such that $M_{w'}\simeq L_{\delta'+\omega}$ and by condition (4) above $w\in L_{\delta'+\omega}$. Since $r(f)=\langle s,w\rangle$, the real $w$ is arithmetical in $f$, and hence $w'\in\Delta^1_1(f)$; but then $\hat P(f,w')$ holds. Thus we have shown that $$ (\exists x) \hat P(f,x)\iff (\exists x\in\Delta^1_1(f))\hat P(f,x), $$ which proves that $\hat A$ is $\Pi^1_1$. \end{proof} \section{Final remarks} In this final section we consider two natural questions: What is the cardinality of a maximal orthogonal family of measures? And is it consistent that there is no co-analytic maximal orthogonal family of measures? Both questions can be answered using the product measure construction of Kechris and Sofronidis described in \S 2.
Any maximal orthogonal family in $P(2^\omega)$ must have size $\mathfrak c$ since there are $\mathfrak c$ many point measures. But even if we only consider non-atomic measures we reach the same conclusion: \begin{prop} Let $A\subseteq P_c(2^\omega)$, $|A|<\mathfrak{c}$, be a set of pairwise orthogonal measures. Then $A$ is not maximal orthogonal in $P_c(2^\omega)$. In fact, there is a product measure which is orthogonal to all elements of $A$. \end{prop} \begin{proof} Suppose $A=\{\nu_\alpha: \alpha<\kappa\}$, where $\kappa<\mathfrak{c}$, is an orthogonal family. Since $E_I$ (as defined in \S 2) has meagre classes, it follows by Mycielski's Theorem (see e.g. \cite{gao08}) that there are perfectly many $E_I$ classes, and so we can find a sequence $(\mu_\alpha :\alpha<\mathfrak{c})$ of pairwise orthogonal product measures. For each $\nu_\alpha$ there can be at most countably many $\beta<\mathfrak c$ such that $$ \mu_{\beta}\not\perp\nu_\alpha $$ since $\ll$ is ccc below $\nu_\alpha$ (see e.g. the proof of \cite[Theorem 3.1]{kecsof01}). Since $\kappa<\mathfrak c$ it follows that there must be some $\beta<\mathfrak c$ such that $\mu_\beta$ is orthogonal to all elements of $A$. \end{proof} \begin{prop} If there is a Cohen real over $L$ then there is no $\Pi^1_1$ maximal orthogonal set of measures.\label{cohen} \end{prop} \begin{proof} We will use the following result of Judah and Shelah \cite{judshe89}: If there is a Cohen real over $L$ then every $\Delta^1_2$ set of reals is Baire measurable. Suppose $A\subseteq P(2^\omega)$ is a $\Pi^1_1$ maximal orthogonal set of measures. Then define a relation $Q \subseteq 2^\omega\times P(2^\omega)^\omega$ by $$ Q(x,(\nu_n))\iff (\forall n) (\nu_n\in A\wedge\nu_n\not\perp\mu^x)\wedge (\forall\mu)(\mu\not\perp\mu^x\longrightarrow (\exists n) \nu_n\not\perp\mu). $$ Then for each $x\in2^\omega$ the section $Q_x$ is non-empty since $A$ is maximal.
Using $\Pi^1_1$ uniformization, we obtain a function $f:2^\omega\to P(2^\omega)^\omega$ having a $\Pi^1_1$ graph and such that $$ (\forall x) Q(x,f(x)). $$ Now if $U\subseteq P(2^\omega)^\omega$ is a basic open set then \begin{align*} x\in f^{-1}(U)&\iff (\exists (\nu_n)\in P(2^\omega)^\omega)( f(x)=(\nu_n)\wedge (\nu_n)\in U)\\ &\iff (\forall (\nu_n)\in P(2^\omega)^\omega)( f(x)\neq(\nu_n)\vee (\nu_n)\in U). \end{align*} Thus $f^{-1}(U)$ is $\Delta^1_2$, and so if there is a Cohen real over $L$ then it has the property of Baire. It follows that $f$ is a function with the Baire property. But then we may argue just as in Kechris and Sofronidis' proof that no analytic set of measures is maximal orthogonal and arrive at a contradiction: Indeed, $f$ is an $E_I$-invariant assignment of countable subsets of $P(2^\omega)$, and $E_I$ is a turbulent equivalence relation, and so we must have that $$ x\mapsto A(x)=\{f(x)(n):n\in\omega\} $$ is constant on a comeagre set. This contradicts that $$ (\forall x,x'\in2^\omega) \neg (x E_I x')\implies \mu^x\perp\mu^{x'} $$ and the ccc-below property of $\ll$. \end{proof} The natural relativization of Proposition \ref{cohen} gives us that if for every $x\in2^\omega$ there is a Cohen real over $L[x]$ then there is no co-analytic (i.e., boldface $\mathbf\Pi^1_1$) maximal set of orthogonal measures. We do not know what happens with the complexity of maximal orthogonal sets if we add other types of reals. In fact, we do not know the answer to the following: \begin{question} If there is a $\Pi^1_1$ maximal orthogonal set of measures, are all reals constructible? \end{question} \end{document}
\begin{document} \begin{abstract} We describe a local model for any Singular Riemannian Foliation in a neighbourhood of a closed saturated submanifold of a regular stratum. Moreover, we construct a Lie groupoid which controls the transverse geometry of the linear approximation of the Singular Riemannian Foliation around these submanifolds. We also discuss the closure of this Lie groupoid and its Lie algebroid. \end{abstract} \maketitle \section{Introduction} In the theory of regular foliations a crucial role is played by the \emph{holonomy groupoid} of the foliation, which gives a complete description of the geometry of the foliation transverse to its leaves. The holonomy groupoid of a foliation should be thought of as an atlas for the singular (and badly behaved) leaf space of the foliation. For example, if the leaf space of a foliation $\mathcal{F}$ on a manifold $M$ is a manifold (or an orbifold), then the geometric structures it admits are in correspondence with geometric structures on the normal bundle $\nu(\mathcal{F}) = TM/T\mathcal{F}$ which are invariant under the natural action of the holonomy groupoid of $\mathcal{F}$ (see for example \cite{Haefliger}). Moreover, when the holonomy groupoid of a foliation is well behaved (e.g., a proper/compact Lie groupoid) one obtains, via the Reeb Stability Theorem, a simple explicit model for the foliation in a neighbourhood of a leaf (see \cite{Moerdijk-Mrcun,Reeb}, or \cite{Crainic-Struchiner}). It is therefore natural to try to extend the construction of the holonomy groupoid to the case of singular foliations. The main attempt, so far, to obtain this generalisation has been made in \cite{Androulidakis-Skandalis}, where a singular foliation is defined in terms of the module of vector fields which generates the foliation; see also \cite{Androulidakis-Zambon} and \cite{Garmendia-Zambon}.
However, the groupoid constructed is rarely a Lie groupoid, and at this level of generality, it is possible that there does not exist a smooth groupoid describing the holonomy of the singular foliation. In this paper we focus on a special case of singular foliations known as Riemannian foliations. We also consider a ``more geometric'' approach to the definition of a singular foliation as a partition of the ambient manifold into leaves instead of fixing the module of vector fields chosen to generate it. Our main purpose is two-fold: on the one hand we describe a local model for any singular Riemannian foliation in a neighbourhood of a closed saturated submanifold of a stratum, and on the other hand we obtain a holonomy groupoid for the linearization of a singular Riemannian foliation in a neighbourhood of such submanifolds. The holonomy groupoid that we obtain does not solve the problem of obtaining a Lie groupoid describing the transverse geometry of an arbitrary singular Riemannian foliation. This is still an open problem. However, our groupoid fits conceptually into this framework. Every Lie groupoid has a first order approximation around a saturated submanifold: the transformation groupoid associated to the representation, on the normal bundle of the submanifold, of the restriction of the original groupoid to the submanifold \cite{Crainic-Mestre-Struchiner,Fernandes-Hoyo}. Even though the possible existence of a Lie groupoid describing the original singular Riemannian foliation is an open problem, what we obtain is a candidate for its first order approximation in a neighbourhood of any closed saturated submanifold of a regular stratum. We now explain in more detail the results of this paper, but before we do so, we feel it is necessary to warn the reader that there are two distinct notions of holonomy that appear in this paper. The first one, which already appeared above, is the \emph{leafwise holonomy of a regular foliation}.
The second one is the \emph{holonomy of a connection}, obtained by parallel translations along paths for a fixed connection on a vector bundle (or a principal bundle). We hope that with this warning we will avoid unnecessary confusion, and we will try to minimize this possibility by writing $\nabla$-holonomy for the second concept whenever it is not clear from the context. \subsection*{Semi-Local Models for Singular Riemannian Foliations} Given a Riemannian manifold $(M, \ensuremath{ \mathrm{g} })$, a partition $\ensuremath{\mathcal{F}} = \{L\}$ of $M$ into complete connected submanifolds (the \emph{leaves of $\ensuremath{\mathcal{F}}$}) is called a \emph{singular foliation} if every vector tangent to a leaf can be locally extended to a vector field everywhere tangent to the leaves (see \cite{Sussmann}). A singular foliation $\ensuremath{\mathcal{F}}$ is called \emph{Riemannian} (SRF for short) if each geodesic starting perpendicular to a leaf stays perpendicular to all leaves it meets. A typical example of a SRF is the decomposition of a Riemannian manifold $M$ into the orbits of an isometric group action on $M$. Such a foliation is called \emph{homogeneous}. Another relevant example of a SRF is the \emph{$\nabla$-holonomy foliation} (presented below in Example \ref{holonomy-foliation}), which is related to other important types of foliations, like polar foliations \cite{Toeben} or Wilking's dual foliation to the Sharafutdinov projection \cite{Wilking}. \begin{example}[$\nabla$-Holonomy foliation] \label{holonomy-foliation} Let $B$ be a complete Riemannian manifold, $\mathbb{R}^{k} \to E \stackrel{\pi}{\rightarrow} B$ a Euclidean vector bundle (i.e. a vector bundle with a fiberwise metric) and $\nabla^{\tau}: \mathfrak{X} (B) \times \Gamma (E) \longrightarrow \Gamma (E)$ a linear connection which is compatible with the fiberwise metric of $E$.
Denote by $C^{\infty}([0, 1], B)$ the set of piecewise smooth curves in $B$ and by $\text{Hol}^{\tau}$ the $\nabla$-holonomy groupoid of $\nabla^{\tau}$ (i.e. the groupoid generated by all parallel transports along curves in $B$). Then the partition $\ensuremath{\mathcal{F}}^{\tau} = \{\text{Hol}^{\tau}(v)\}_{v \in E}$ where \begin{equation*} \text{Hol}^{\tau}(v) := \left\{ \mathcal{P}_{\alpha} (v) \in E : \alpha \in C^{\infty}([0, 1], B), \; \pi(v) = \alpha(0) \right\} \end{equation*} is a SRF with respect to the \emph{Sasaki metric} on $E$ (i.e. if we denote by $\mathcal{T}$ the linear horizontal distribution on $E$ determined by $\nabla^\tau$, then the Sasaki metric is the metric which makes $T_v E = E_{\pi(v)} \oplus \mathcal{T} |_v$ an orthogonal decomposition, preserving the fiberwise metric of $E$ and the metric induced by the isomorphism $d \pi |_{\mathcal{T}}$ on $\mathcal{T}$). We should stress three simple geometrical aspects of the $\nabla$-holonomy foliation. First, considering the representation of $\text{Hol}^{\tau}$ on $E$ it is possible to see that the holonomy foliation is in fact given by the orbits of a groupoid, more precisely by the orbits of the transformation groupoid $\text{Hol}^{\tau} \ltimes E$ (see the definition in Section \ref{section-facts-groupoids}). Second, the intersections of the holonomy leaves with the fibers of $E$ are the orbits of holonomy groups. This property is particularly special since there are infinitely many examples of non-homogeneous SRFs on Euclidean spaces (see Radeschi \cite{Radeschi-clifford}). Third, $\mathcal{T}$ is tangent to $\ensuremath{\mathcal{F}}^{\tau}$ and $\mathcal{T} = T \ensuremath{\mathcal{F}}^{\tau}$ iff the connection $\nabla^{\tau}$ is flat (i.e., when $\ensuremath{\mathcal{F}}^{\tau}$ is a regular foliation).
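A minimal concrete instance (our illustration, not taken from the text) is the suspension-type example: let $B=S^1$ and $E=S^1\times\mathbb{R}^2$ with the flat metric connection whose parallel transport along the arc of angle $\varphi$ is the rotation $R_{\varphi\theta/2\pi}$, for a fixed angle $\theta$ with $\theta/2\pi$ irrational.

```latex
% Illustration (our addition, under the stated assumptions).
% The leaf of $\mathcal{F}^{\tau}$ through $(1,v)$ with $v \neq 0$ is the curve
\mathrm{Hol}^{\tau}(v)
  =\{(e^{i\varphi},\,R_{\varphi\theta/2\pi}\,v)\,:\,\varphi\in\mathbb{R}\},
% which is one dimensional, like the closed leaf given by the zero section;
% this is consistent with $\mathcal{T}=T\mathcal{F}^{\tau}$ for a flat
% connection, so $\mathcal{F}^{\tau}$ is a regular foliation here.
% Since $\theta/2\pi$ is irrational, the leaf through $v\neq 0$ is dense in
% the torus
\{(b,w)\in E\,:\,|w|=|v|\},
% so the leaf closures are these tori together with the zero section,
% a singular Riemannian foliation by leaf closures.
```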
\end{example} \begin{figure} \caption{Two possible illustrated schemes for the holonomy foliation.} \label{holonomy} \end{figure} In \cite{Alexandrino-Radeschi-Molinosconjecture}, the first author and Radeschi proved the so-called Molino conjecture, which states that \emph{given $(M, \ensuremath{\mathcal{F}})$ a SRF on a complete manifold $M$, the partition $\overline{\ensuremath{\mathcal{F}}} = \{ \overline{L} \mid L\in \ensuremath{\mathcal{F}}\}$ of $M$ into the closures of the leaves of $\ensuremath{\mathcal{F}}$ is also a SRF}. In order to prove Molino's conjecture, they defined two foliations on an $\epsilon$-tubular neighbourhood $U$ around the closure of a fixed leaf $L$. The first foliation was the so-called \emph{linearized foliation} $\ensuremath{\mathcal{F}}lin$ of $\ensuremath{\mathcal{F}}$ in $U$. It is a subfoliation of $\ensuremath{\mathcal{F}}$ spanned by the first order approximations, around $\overline{L}$, of the vector fields tangent to $\ensuremath{\mathcal{F}}$. The second foliation, denoted by $\widehat{\ensuremath{\mathcal{F}}}^\ell$, was then obtained from $\ensuremath{\mathcal{F}}lin$ by taking the ``local closure'' of the leaves of $\ensuremath{\mathcal{F}}lin$. Roughly speaking, both foliations describe the semi-local dynamical behavior of the foliation $\ensuremath{\mathcal{F}}$ (see Section \ref{preliminaries-foliation} for the definitions). Let us now illustrate $\ensuremath{\mathcal{F}},\,\ensuremath{\mathcal{F}}lin,\,\widehat{\ensuremath{\mathcal{F}}}^\ell$ in the prototypical examples (described below) which are convenient generalisations of Example \ref{holonomy-foliation}. \begin{example} \label{intermediate-holonomy-foliation} Just like in Example \ref{holonomy-foliation}, let $B$ be a complete Riemannian manifold, $\mathbb{R}^{k}\to E \stackrel{\pi}{\rightarrow} B$ a Euclidean vector bundle and $\nabla^{\tau}$ a connection which is compatible with the fiberwise metric of $E$.
In addition to the data of Example \ref{holonomy-foliation}, consider \begin{itemize} \item $\ensuremath{\mathcal{F}}^E = \{L^{E}_{v}\}_{v \in E}$ a singular foliation on $E$ such that each fiber $E_{b}$ is saturated and $\ensuremath{\mathcal{F}}_{b}:= \ensuremath{\mathcal{F}}^{E}|_{E_{b}}$ is an \emph{infinitesimal foliation} (i.e. $\ensuremath{\mathcal{F}}_b$ is a SRF on a vector space $E_{b}$ with $\{0_b\}$ as a leaf). Assume that $\ensuremath{\mathcal{F}}^E$ is $\text{Hol}^\tau$-invariant (i.e. the parallel transport sends leaves into leaves). \end{itemize} Denote by $K^{0}_{b}$ the maximal connected Lie subgroup of isometries of $E_{b}$ that fixes each leaf of $\ensuremath{\mathcal{F}}_{b}$ and by $\overline{K^{0}_{b}}$ the closure of $K_{b}^{0}$ for each $b \in B$ (see the discussion in Example \ref{example-linearization}). Then the following three partitions of $E$ are in fact smooth singular foliations: \begin{enumerate} \item[(a)] $\ensuremath{\mathcal{F}} = \{ \text{Hol}^\tau (L^{E}_{v}) \}_{v \in E}$; \item[(b)] $\ensuremath{\mathcal{F}}lin = \{ \text{Hol}^\tau (K^{0}_{\pi(v)} (v)) \}_{v \in E}$; \item[(c)] $\widehat{\ensuremath{\mathcal{F}}}^\ell = \{ \text{Hol}^\tau (\overline{K_{\pi(v)}^{0}}(v)) \}_{v \in E}$. \end{enumerate} For the Sasaki metric on $E$ (described in Example \ref{holonomy-foliation}), the singular foliation $\ensuremath{\mathcal{F}}$ turns out to be a SRF, $\ensuremath{\mathcal{F}}lin$ becomes its linearized foliation, and $\widehat{\ensuremath{\mathcal{F}}}^\ell$ becomes the local closure of $\ensuremath{\mathcal{F}}lin$. \end{example} Example \ref{intermediate-holonomy-foliation} is in fact the semi-local model of a SRF in an $\epsilon$-tubular neighborhood of a closed leaf $L = B$ (see Theorem \ref{theorem-semi-local-model}). In order to obtain a semi-local model around more general closed saturated submanifolds of a stratum, one must also take into account the restriction of the original foliation to the submanifold.
This leads us to the following generalisation of the previous example. \begin{example}[Generalized holonomy foliation] \label{generalized-holonomy-foliation} As in the previous example, let $B$ be a complete Riemannian manifold, and $\mathbb{R}^{k}\to E \stackrel{\pi}{\rightarrow} B$ a Euclidean vector bundle endowed with a singular foliation $\ensuremath{\mathcal{F}}^E = \{L^{E}_{v}\}_{v \in E}$ such that each fiber $E_{b}$ is saturated and $\ensuremath{\mathcal{F}}_{b}:= \ensuremath{\mathcal{F}}^{E}|_{E_{b}}$ is an infinitesimal foliation. This time we consider also a (regular) Riemannian foliation $\ensuremath{\mathcal{F}}_B$ on $B$ and we take $\nabla^\tau$ to be an $\ensuremath{\mathcal{F}}_B$-partial connection on $E$ which is compatible with the fiberwise metric on $E$, and such that $\ensuremath{\mathcal{F}}^E$ is invariant under parallel translation with respect to $\nabla^\tau$. If we denote by $K^{0}_{b}$ the maximal connected group of isometries of $E_{b}$ that fixes each leaf of $\ensuremath{\mathcal{F}}_{b}$ and by $\overline{K^{0}_{b}}$ the closure of $K_{b}^{0}$ for each $b \in B$, then we obtain three foliations on $E$: \begin{enumerate} \item[(a)] $\ensuremath{\mathcal{F}} = \{ \text{Hol}^{\tau} (L^{E}_{v}) \}_{v \in E}$ where $L^{E}_{v}\in \ensuremath{\mathcal{F}}^{E}$; \item[(b)] $\ensuremath{\mathcal{F}}lin = \{ \text{Hol}^{\tau} (K^{0}_{\pi(v)} (v)) \}_{v \in E}$; \item[(c)] $\widehat{\ensuremath{\mathcal{F}}}^\ell = \{ \text{Hol}^{\tau} (\overline{K_{\pi(v)}^{0}}(v)) \}_{v \in E}$, \end{enumerate} where $\text{Hol}^\tau$ is generated by parallel translations along paths in the leaves of $\ensuremath{\mathcal{F}}_B$, with respect to $\nabla^\tau$. It follows that there is a Sasaki metric on $E$ such that the foliation $\ensuremath{\mathcal{F}}$ turns out to be a SRF, $\ensuremath{\mathcal{F}}lin$ becomes its linearized foliation, and $\widehat{\ensuremath{\mathcal{F}}}^\ell$ becomes the local closure of $\ensuremath{\mathcal{F}}lin$.
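Two degenerate choices of the data may help to orient the reader. If $\ensuremath{\mathcal{F}}_{B}=\{B\}$ is the foliation with a single leaf, then the $\ensuremath{\mathcal{F}}_B$-partial connection is a full connection and we recover Example \ref{intermediate-holonomy-foliation}. If instead $\ensuremath{\mathcal{F}}^{E}$ is the foliation of $E$ by points, then each group $K^{0}_{b}$ is trivial and the three foliations collapse to the pure holonomy foliation: \[ \ensuremath{\mathcal{F}} \;=\; \ensuremath{\mathcal{F}}lin \;=\; \widehat{\ensuremath{\mathcal{F}}}^{\ell} \;=\; \{\text{Hol}^{\tau}(v)\}_{v \in E}. \]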
\end{example} Our first theorem states that the example above is in fact a local model for any SRF in a small tubular neighbourhood of any closed saturated submanifold of a stratum. \begin{theorem} \label{theorem-semi-local-model} Let $\ensuremath{\mathcal{F}}$ be a singular Riemannian foliation on a complete manifold $(M,g)$ and $B$ be a closed saturated submanifold contained in a stratum of the foliation. Then there exists a saturated $\epsilon$-tubular neighborhood $U$ of $B$ in $M$ such that the foliations $\ensuremath{\mathcal{F}},\,\ensuremath{\mathcal{F}}lin,\,\widehat{\ensuremath{\mathcal{F}}}^\ell$ restricted to $U$ are foliated diffeomorphic to the foliations described in Example \ref{generalized-holonomy-foliation}, where the Euclidean vector bundle $E$ is the normal bundle of $B$. \end{theorem} The main ingredients of the proof were already presented in \cite{Alexandrino-Radeschi-Molinosconjecture}. Here we put these ingredients together with the help of Proposition \ref{lemma-linearized-vector-linearfoliation}, stressing its semi-local description (see also the discussion in Mendes and Radeschi \cite{Mendes-Radeschi} for the case where $B$ is a closed leaf). \subsection*{Lie Groupoids and Singular Riemannian Foliations} Lie groupoids are structures which generalize smooth manifolds, Lie groups and Lie group actions (see \cite{Moerdijk-Mrcun} or Section \ref{section-facts-groupoids} below). Every Lie groupoid determines a singular foliation on its base manifold by taking (the connected components of) its orbits. It is not yet known how to characterise the singular foliations which arise as orbits of a Lie groupoid. Moreover, even when a singular foliation arises in this fashion, there does not exist in general a canonical Lie groupoid which describes it.
In contrast, to any regular foliation one can associate two canonical Lie groupoids which describe it, namely, the monodromy and the holonomy groupoids of the foliation (see Section \ref{section-facts-groupoids}). For a given regular foliation $\mathcal{F}$ on $M$, the monodromy groupoid $\Pi_1(\mathcal{F})$ is the unique source simply connected Lie groupoid which integrates the Lie algebroid $T\mathcal{F}$, while the holonomy groupoid $\mathrm{Hol}(\mathcal{F})$ is the terminal object in the category of source connected Lie groupoids which have $\mathcal{F}$ as their orbit foliation (see \cite{Crainic-Moerdijk-foliation}). The orbit foliation associated to a Lie groupoid is not in general a SRF. However, if the groupoid is proper, or more generally if it admits a compatible Riemannian metric (a Riemannian groupoid), then the orbit foliation is in fact a SRF (see \cite{Fernandes-Hoyo,Hessel-Pflaum-Tang}). An important feature of Riemannian groupoids is that they can be linearized around saturated submanifolds \cite{Fernandes-Hoyo}. In a nutshell, this means that the restriction of a Lie groupoid $\mathcal{G}\rightrightarrows M$ to a small tubular neighbourhood of a saturated submanifold $B$ of $M$ is locally isomorphic to a transformation groupoid associated to a representation (i.e., a linear action) of the restriction of $\mathcal{G}$ to $B$ on the normal bundle $E$ of $B$ in $M$. It is therefore natural to expect that if there exists a holonomy groupoid associated to a SRF, then it should be linearizable. In this paper we construct a canonical linear groupoid associated to the linearized foliation $\mathcal{F}^{\ell}$ in a tubular neighbourhood of a closed saturated submanifold contained in a regular stratum. This is the content of the following theorem.
\begin{theorem} \label{theorem-linear-holonomy-groupoid} Let $\ensuremath{\mathcal{F}}$ be a singular Riemannian foliation on a complete manifold $(M,g)$ and $B$ be a closed saturated submanifold contained in a stratum of the foliation. Then there exists a saturated $\epsilon$-tubular neighborhood $U$ of $B$ in $M$ such that the leaves of the foliations $\ensuremath{\mathcal{F}}lin,\,\widehat{\ensuremath{\mathcal{F}}}^\ell$ restricted to $U$ are orbits of a canonical transformation groupoid associated to a representation of a regular groupoid $\mathcal{G} \rightrightarrows B$ on the normal bundle $E$ of $B$ in $M$. \end{theorem} We argue that the transformation groupoid whose orbits are the leaves of $\ensuremath{\mathcal{F}}^{\ell}$ obtained in the theorem above should be thought of as the linearization of the holonomy groupoid of the SRF $\ensuremath{\mathcal{F}}$ around $B$ (even though we do not know if such a holonomy groupoid exists). For this reason, we call this groupoid the \emph{Linear Holonomy Groupoid} of $\ensuremath{\mathcal{F}}$ at $U$. The construction of the Linear Holonomy Groupoid of $\ensuremath{\mathcal{F}}$ at $U$ relies on the fact that we can ``lift'' the foliation $\ensuremath{\mathcal{F}}lin$ to an $O(n)$-invariant regular foliation on the orthogonal frame bundle of the normal bundle $E$ of $B$ in $M$. The (usual) holonomy groupoid of this (regular) lifted foliation comes with a free action of $O(n)$ by automorphisms, and the quotient groupoid comes with a canonical representation on $E$. It is possible to check that $\overline{\ensuremath{\mathcal{F}}^{\ell}}=(\overline{\ensuremath{\mathcal{F}}})^\ell$ and hence to conclude that the leaves of the SRF $\overline{\ensuremath{\mathcal{F}}^{\ell}}$ are also orbits of a Lie groupoid. It is then natural to ask how the Linear Holonomy Groupoid $\mathcal{G}^{\ell}$ that describes $\ensuremath{\mathcal{F}}^{\ell}$ is related to this ``bigger'' groupoid.
\begin{theorem} \label{prop:subgrupoids} Let $\ensuremath{\mathcal{F}}$ be a singular Riemannian foliation on a complete manifold $(M,g)$ and $B = \overline{L}$. Then there exists a proper Lie groupoid $\overline{\mathcal{G}^{\ell}}$ over a saturated $\epsilon$-tubular neighborhood $U$ of $B$ whose orbits are the leaves of $\overline{\ensuremath{\mathcal{F}}^{\ell}}$. In addition, $\mathcal{G}^{\ell}$ is a dense Lie subgroupoid of $\overline{\mathcal{G}^{\ell}}$. \end{theorem} \begin{remark} Let $\ensuremath{\mathcal{F}}$ be a Riemannian foliation on $B$. Then the fact that $B=\overline{L_{q}}$ implies that for all $b\in B$ we have that $B=\overline{L_b}$, i.e., that the foliation is dense. \end{remark} \subsection*{$\ensuremath{\mathcal{F}}$-partial connections and Lie algebroids} In the particular case when the regular foliation $\ensuremath{\mathcal{F}}_B$ is \emph{a dense foliation} (each leaf $L\in \ensuremath{\mathcal{F}}_B$ is dense) and $U$ is a saturated $\epsilon$-tubular neighborhood around $B$, we have a second subfoliation $\ensuremath{\mathcal{F}}^{\tau}$ so that $$\ensuremath{\mathcal{F}}^{\tau} \subset \ensuremath{\mathcal{F}}^{\ell} \subset \ensuremath{\mathcal{F}}_U.$$ Roughly speaking, the subfoliation $\ensuremath{\mathcal{F}}^{\tau}$ can be thought of as the foliation produced just by taking parallel transports of normal vectors with respect to some $\ensuremath{\mathcal{F}}$-partial connection. In other words, if we consider Example \ref{generalized-holonomy-foliation} applied to the particular case where $\ensuremath{\mathcal{F}}_B$ is a dense foliation, $E$ is the normal bundle of $B$, and $\ensuremath{\mathcal{F}}^E$ is the foliation of $E$ by points, then the partition $\ensuremath{\mathcal{F}}^{\tau} = \{L^{\tau}_{v}\}_{v \in E}$ which has leaves $L^{\tau}_{v} := \text{Hol}^\tau (v)$ for $v \in E$ is a singular (smooth) subfoliation of $\ensuremath{\mathcal{F}}$.
The next result assures that the leaves of $\ensuremath{\mathcal{F}}^{\tau}$ are orbits of a Lie groupoid. More generally, we can consider a foliation $\ensuremath{\mathcal{F}}^{\tau}$ induced by a partial connection $\nabla^{\tau}$, starting just with a dense foliation $\ensuremath{\mathcal{F}}_B$, without assuming that it is a Riemannian foliation. \begin{theorem} \label{foliated-Ambrose-and-Singer} Let $\mathbb{R}^{n}\to E\to B$ be a Euclidean vector bundle. Assume that there exists a dense foliation $\ensuremath{\mathcal{F}}_{B}$ on the base $B$ (each leaf $L\in \ensuremath{\mathcal{F}}_B$ is dense) and a partial linear connection $\nabla^{\tau}: \mathfrak{X}(\ensuremath{\mathcal{F}}_{B})\times \Gamma(E)\to \Gamma(E)$ compatible with the metric of $E$. Consider the singular partition $\ensuremath{\mathcal{F}}^{\tau} = \{L^{\tau}_{v}\}_{v \in E}$, with leaves $L^{\tau}_{v} = \text{Hol}^{\,\tau} (v)$ where $\text{Hol}^{\,\tau}$ is the holonomy groupoid of $\nabla^{\tau}$. Then the leaves of the singular foliation $\ensuremath{\mathcal{F}}^{\tau}$ are orbits of a transformation groupoid associated to a representation of a Lie groupoid over $B$ on $E$. \end{theorem} The Lie groupoid over $B$ of the previous theorem is constructed only in terms of the $\ensuremath{\mathcal{F}}$-partial connection $\nabla^{\tau}$. It is a Lie subgroupoid of the Gauge Groupoid associated to the frame bundle of $E$. In this sense, the theorem above may be thought of as a generalisation of the Ambrose-Singer reduction theorem to the context of foliated connections. We will explore this point of view even further in Section \ref{section-rotate-translate-groupoid} when we describe the infinitesimal objects associated to the groupoids of this paper, i.e., the \emph{Lie algebroids} (see Section \ref{section-facts-algebroids}).
This will also give an alternative way of obtaining the Lie groupoids via integration of their algebroids (see Definition \ref{definition-algebroid} and Propositions \ref{proposition-algebroid-puro} and \ref{proposition-algebroid-SRF}). \subsection*{Organization of the Paper} This paper is organized as follows. In Section \ref{preliminaries-foliation} we review preliminary facts needed in the rest of the paper. In particular we review the main ingredients of the theory of SRF that will allow us to prove Theorem \ref{theorem-semi-local-model} (see Section \ref{section-semi-local-modelsofSRF}). In Section \ref{section-lie-groupoid-structure} we discuss the Lie groupoid structure needed to prove Theorem \ref{theorem-linear-holonomy-groupoid} and Theorem \ref{foliated-Ambrose-and-Singer}. In these proofs, the frame bundles associated to the foliation are used in a natural way. The proof indicates that orthogonal frame bundles can play an important role in the study of SRF, as they have been playing in the study of (regular) Riemannian foliations (see Molino's book \cite{Molino}). We give in Section \ref{section-subgroupoid-closure} the proof of Theorem \ref{prop:subgrupoids}. Finally, in Section \ref{section-rotate-translate-groupoid} we remark that the existence of the Lie groupoid structure in Example \ref{generalized-holonomy-foliation} does not require the hypothesis that $\ensuremath{\mathcal{F}}_{B}$ is a Riemannian foliation on $B$, and we stress its Lie algebroid structure (Propositions \ref{proposition-algebroid-puro} and \ref{proposition-algebroid-SRF}). By moving from the concrete case of SRF to abstract considerations in Section \ref{section-rotate-translate-groupoid}, we hope to provide the proper motivation for readers who are not experts in the theory of Lie groupoids.
\section{Preliminaries} \label{preliminaries-foliation} \subsection{A few facts about SRF} \label{section-facts-SRF} In this section we briefly review a few facts on a SRF $(M, \ensuremath{\mathcal{F}})$ extracted from \cite{Alexandrino-Radeschi-Molinosconjecture} (see also \cite{Mendes-Radeschi} and \cite{Molino}). \subsubsection{Linearization of vector fields} Let $B$ be a closed saturated submanifold contained in a stratum, e.g., the closure of a leaf ($B = \overline{L}$) or the minimal stratum of $\ensuremath{\mathcal{F}}$, and let $U\subset M$ be an $\epsilon$-tubular neighbourhood of $B$ with metric projection $\ensuremath{\mathsf{p}}: U \to B$. For each smooth vector field $\vec{V}$ in $U$, we can associate a smooth vector field $\vec{V}^\ell$, called the \emph{linearization of $\vec{V}$ with respect to $B$}, as follows: \[ \vec{V}^\ell = \lim_{\lambda \to 0} \vec{V}_{\lambda} \] where $\vec{V}_{\lambda} |_{q} := (h_{\lambda}^{-1})_*(\vec{V} |_{h_{\lambda}(q)})$ and $h_\lambda: U \to U$ denotes the homothetic transformation around $B$, i.e., the map given by $h_{\lambda} (\exp(v)) := \exp(\lambda v)$ for each $v \in \nu^\epsilon (B)$ and $\lambda \in (0, 1]$. Since the plaques of $\ensuremath{\mathcal{F}}$ are invariant under homothetic transformation, one can conclude that if $\vec{X}$ is a vector field tangent to $\ensuremath{\mathcal{F}}$, then $\vec{X}^{\ell}$ is still tangent to the foliation $\ensuremath{\mathcal{F}}$. \begin{example}[Infinitesimal foliation] \label{example-linearization} Consider a SRF $\ensuremath{\mathcal{F}}$ on $\mathbb{R}^n$, where $B=\{0\}$ is a closed leaf. 
Given a smooth vector field $\vec{X}$ tangent to the leaves, the associated linearized vector field is given by \[ \vec{X}^{\ell}_v = \lim_{\lambda\to 0}(h_{\lambda}^{-1})_* \vec{X}_{ h_{\lambda}(v)} = \lim_{\lambda\to 0}\frac{1}{\lambda} \vec{X}_{\lambda v} = \left(\nabla_{v} \vec{X}\right)_{0}. \] Note that the linear vector field $\vec{X}^{\ell}_{(\cdot)}=(\nabla_{(\cdot)} \vec{X})_{0}$ is determined by a skew-symmetric matrix and hence $\vec{X}^{\ell}$ is a Killing vector field. In fact, since $\vec{X}^{\ell}$ is tangent to the leaves, it is tangent to the distance spheres around $0$, and therefore $$0 = \langle \vec{X}^{\ell}_v,v \rangle = \left\langle \left(\nabla_{v} \vec{X}\right)_0, v \right\rangle,$$ for all unitary $v$. We define $\mathfrak{k}$ as the maximal Lie subalgebra of $\mathfrak{o}(n)$ that induces such Killing vector fields and $K^{0}$ as the connected subgroup of $SO(n)$ with Lie algebra $\mathfrak{k}$. It is not difficult to check that $K^0$ is the maximal connected Lie subgroup of $SO(n)$ that fixes each leaf of $\ensuremath{\mathcal{F}}$. In addition, when the leaves of $\ensuremath{\mathcal{F}}$ are compact, $K^{0}$ is compact. \end{example} \subsubsection{The linearized foliation $\ensuremath{\mathcal{F}}lin$} \label{subsection-linearized foliation} Let $\mathsf{D}$ be the pseudogroup of local diffeomorphisms of $U$ generated by the flows of linearized vector fields tangent to $\ensuremath{\mathcal{F}}$. Then the partition of $U$ into the orbits of diffeomorphisms in $\mathsf{D}$ is called the \emph{linearized foliation of $\ensuremath{\mathcal{F}}$ (with respect to $B$)} and is denoted by $(U,\ensuremath{\mathcal{F}}lin).$ Since the linearizations of vector fields tangent to $\ensuremath{\mathcal{F}}$ are still vector fields tangent to $\ensuremath{\mathcal{F}}$, one concludes that $\ensuremath{\mathcal{F}}^{\ell}$ is a subfoliation of $\ensuremath{\mathcal{F}}$.
In other words, the leaves of $\ensuremath{\mathcal{F}}^{\ell}$ are contained in the leaves of $\ensuremath{\mathcal{F}}$ and $\ensuremath{\mathcal{F}}^{\ell}|_{B}$ coincides with $\ensuremath{\mathcal{F}}|_{B}$. We now recall that $\ensuremath{\mathcal{F}}^{\ell}$ is the \emph{maximal infinitesimally homogeneous subfoliation of $\ensuremath{\mathcal{F}}$}. In fact, given a point $b \in B$, denote $U_b := \ensuremath{\mathsf{p}}^{-1}(b) \subset U$ and let $\ensuremath{\mathcal{F}}_b$ (resp. $\ensuremath{\mathcal{F}}^{\ell}_{b}$) denote the partition of $U_b$ into the connected components of $\ensuremath{\mathcal{F}} \cap U_b$ (resp. $\ensuremath{\mathcal{F}}^{\ell}\cap U_b$). It was proved in \cite[Proposition 6.5]{Molino} that $\ensuremath{\mathcal{F}}_b$ turns out to be a SRF on the Euclidean space $(U_b, \ensuremath{ \mathrm{g} }_b)$, called the \emph{(reduced) infinitesimal foliation at $b$}, once we identify $U_b$ (via the exponential map) with an open set of $\nu_b (B)$ with the flat metric $\ensuremath{ \mathrm{g} }_b$. In addition, the foliation $(U_b, \ensuremath{\mathcal{F}}^{\ell}_{b})$ is homogeneous; more precisely, the leaves of $\ensuremath{\mathcal{F}}^{\ell}_{b}$ are orbits of $K^{0}_{b}$, where $K^{0}_{b}$ denotes the maximal connected Lie group of isometries of $\nu_b (B)$ that fixes each leaf of $\ensuremath{\mathcal{F}}_b$ (cf. Example \ref{example-linearization}). More generally, given a vector field $\vec{X}$ tangent to the leaves, the flow $\varphi_t$ of the linearized vector field $\vec{X}^{\ell}$ induces an isometry between $(U_b, \ensuremath{ \mathrm{g} }_b)$ and $(U_{\varphi_t (b)}, \ensuremath{ \mathrm{g} }_{\varphi_t (b)})$. \subsubsection{The distributions $\mathcal{K}$, $\mathcal{T}$ and $\mathcal{N}$} \label{subsection-3-distribution} Let us now review the definitions of the three distributions necessary to understand the semi-local model of $\ensuremath{\mathcal{F}}$.
\begin{itemize} \item $\mathcal{K}=\ker\ensuremath{\mathsf{p}}_*$ where $\ensuremath{\mathsf{p}}: U \to B$. \item There exists a distribution $\widehat{\mathcal{T}} \subset T U$ tangent to the leaves of $\ensuremath{\mathcal{F}}$ such that $\widehat{\mathcal{T}} |_B = T \ensuremath{\mathcal{F}} |_{B}$ (for a construction of such a distribution see \cite[Proposition 3.1]{Alexandrino-desingularization}). The \emph{distribution $\mathcal{T}$ is the linearization of $\widehat{\mathcal{T}}$} with respect to $B$. \newline We point out that $\mathcal{T}$ is tangent to $\ensuremath{\mathcal{F}}$, since it is the linearization of a distribution tangent to the leaves, and the ranks of $\widehat{\mathcal{T}}$ and $\mathcal{T}$ are both equal to $\dim \ensuremath{\mathcal{F}}|_B$. \item Let $b = \ensuremath{\mathsf{p}}(q)$ and let $\widehat{\mathcal{N}}_q$ be the subspace of $T_q S_b$ which is $g_b$-orthogonal to $\mathcal{K}_q$, where $S_b$ is the slice of $\ensuremath{\mathcal{F}}$ at $b$. The \emph{distribution $\mathcal{N}$ is the linearization of $\widehat{\mathcal{N}}$ with respect to $B$}. \end{itemize} Note that all three distributions are homothetic invariant and $TU = \mathcal{K} \oplus H$ with $H := \mathcal{T}\oplus\mathcal{N}$ (see Figure \ref{TNK}). We conclude that $\mathcal{K}$, $\mathcal{T}$ and $\mathcal{N}$ can be identified with homothetic invariant distributions on the normal bundle $\nu(B)$; in particular, $H$ is identified with a horizontal distribution in $\nu(B)$. \begin{figure} \caption{Illustrated scheme of the distributions $\mathcal{K}$, $\mathcal{T}$ and $\mathcal{N}$.} \label{TNK} \end{figure} The next two results are consequences of a discussion presented in Section 5 of \cite{Alexandrino-Radeschi-Molinosconjecture}.
\begin{proposition} \label{proposition-connections-tau-affine} The homothetic invariant distribution $\mathcal{T} \oplus \mathcal{N}$ induces a linear connection $\nabla: \mathfrak{X}(B)\times \Gamma(\nu(B))\to \Gamma(\nu(B))$ that, when restricted to $\ensuremath{\mathcal{F}}_{B}$, induces a partial linear connection $\nabla^{\tau}: \mathfrak{X}(\ensuremath{\mathcal{F}}_{B})\times \Gamma(\nu(B))\to \Gamma(\nu(B))$ compatible with the metric of $\nu(B)$. \end{proposition} \begin{example}[Regular case around a closed leaf]\label{regular-case-1} If $\ensuremath{\mathcal{F}}$ is regular and $B = L$ is a closed leaf of $\ensuremath{\mathcal{F}}$ then, by counting dimensions, we conclude that $\mathcal{N}$ must be the zero distribution and $\mathcal{T}$ must be the linearization of $T \ensuremath{\mathcal{F}}$. In this case the induced partial connection $\nabla^{\tau}$ is in fact a (total) connection. More precisely, $\nabla^{\tau}$ is the restriction to $L$ of the Bott connection of $\ensuremath{\mathcal{F}}$, since both have the same horizontal distribution. \end{example} The distributions $\widehat{\mathcal N}$ and $\mathcal N$ satisfy the following property: \begin{proposition}[\cite{Alexandrino-Radeschi-Molinosconjecture}] \label{proposition-transverlinearfield} For every smooth $\ensuremath{\mathcal{F}}_B$-basic vector field $\vec{Y}_{0}$ along a plaque $P$ in $B$ there exists a smooth extension $\vec{Y}_{0}$ to an open set of $U$ such that \begin{enumerate} \item $\vec{Y}_{0}$ is foliated and tangent to $\widehat{\mathcal N}$. \item The linearization $\vec{Y}:=\vec{Y}_{0}^{\ell}$ of $\vec{Y}_{0}$ with respect to $B$ is tangent to $\mathcal{N}$, and it is foliated with respect to both $\ensuremath{\mathcal{F}}$ and $\ensuremath{\mathcal{F}}lin$. \end{enumerate} \end{proposition} We also need a simple but important observation.
\begin{corollary} \label{corollary-isomorphismK} Let $\vec{Y}$ be the vector field defined in Proposition \ref{proposition-transverlinearfield}, $\varphi^{Y}_{s}$ the local flow associated to $\vec{Y}$, and $K^{0}_{b}$ the Lie group defined in Section \ref{subsection-linearized foliation}. Then $\varphi_{s}^{Y}$ induces an isomorphism $\widehat{\varphi}: K^{0}_{b} \to K^{0}_{\tilde{b}}$ where $\tilde{b} = \varphi_{s}^{Y}(b)$. \end{corollary} \begin{proof} First we claim that \emph{if the flow of a vector field $\vec{Y}$ sends a vector field $\vec{X}^{1}$ to $\vec{X}^{2}$, then the flow of the linearization of $\vec{Y}$ sends the linearization of $\vec{X}^{1}$ to the linearization of $\vec{X}^{2}$}. In fact, let $\hat{\varphi}_{s}$, $\varphi^{1}_{t}$ and $\varphi^{2}_{t}$ be the flows of the vector fields $\vec{Y}$, $\vec{X}^{1}$ and $\vec{X}^{2}$. From the hypothesis we have that $\varphi_{t}^{2} = \hat{\varphi}_{s} \circ \varphi^{1}_{t} \circ \hat{\varphi}_{-s}$. Note that the flows of $\vec{Y}_{\lambda}$, $\vec{X}^{i}_{\lambda}$ (for $i = 1, 2$) are $\hat{\varphi}_{s,\lambda} = h_{\lambda}^{-1} \circ \hat{\varphi}_{s} \circ h_{\lambda}$ and $\varphi_{t,\lambda}^{i} = h_{\lambda}^{-1} \circ \varphi_{t}^{i} \circ h_{\lambda}$ respectively. Therefore $\hat{\varphi}_{s,\lambda} \circ \varphi^{1}_{t,\lambda} \circ \hat{\varphi}_{-s,\lambda} = \varphi^{2}_{t,\lambda}$. The claim now follows by taking the limit as $\lambda$ goes to zero.
Therefore, if $t \to g_{t}$ is a flow of a linearized foliated vector field on $U_b$ that preserves $\ensuremath{\mathcal{F}}_b$, i.e., $g_{t} \in K_{b}^{0}$, then $t \to \hat{\varphi}(g_{t}) := \varphi_{s}^{Y}|_{U_b} \circ g_{t} \circ \big(\varphi_{s}^{Y}|_{U_b}\big)^{-1}$ is also a flow of a linearized foliated vector field on $U_{\tilde{b}}$ fixing $\ensuremath{\mathcal{F}}_{\tilde{b}}.$ Therefore $t \to \hat{\varphi}(g_{t})$ is a flow of a Killing vector field that preserves $\ensuremath{\mathcal{F}}_{\tilde{b}}$, i.e., it is contained in $K_{\tilde{b}}^{0}$. \end{proof} In what follows we say that a (partial) connection $\nabla^{\tau}$ compatible with the metric of $E=\nu(B)$ is an \emph{$\ensuremath{\mathcal{F}}$-compatible metric connection} if for each curve $\alpha\subset L\subset B$ and each parallel field $t\to\xi(t)\in \Gamma(\alpha^{*}E)$ along $\alpha$, we have that $\xi(t)\in L_{\xi(0)}.$ \begin{lemma} \label{lemma-total-connection} Assume that $B=\overline{L_q}$, i.e., $B$ is the closure of a leaf $L_q$. Then there exists an $\overline{\ensuremath{\mathcal{F}}}$-compatible metric connection $\nabla^{\overline{\tau}}:\mathfrak{X}(B)\times\Gamma(E)\to \Gamma(E)$ that extends the $\ensuremath{\mathcal{F}}$-partial compatible metric connection $\nabla^{\tau}$. \end{lemma} \begin{proof} Our goal is to find a linear horizontal distribution $\overline{\mathcal{T}}$ (and hence $TE= \mathcal{K}\oplus \overline{\mathcal{T}}$) so that: \begin{enumerate} \item[(a)] $\overline{\mathcal{T}}\subset T\overline{\ensuremath{\mathcal{F}}}$, \item[(b)] $\mathcal{T}\subset\overline{\mathcal{T}}$. \end{enumerate} The fact that $\overline{\ensuremath{\mathcal{F}}}=\{\overline{L}\}$ is a SRF with closed leaves (see \cite{Alexandrino-Radeschi-Molinosconjecture}) and item (a) will assure that $\overline{\mathcal{T}}$ induces the $\overline{\ensuremath{\mathcal{F}}}$-compatible metric connection $\nabla^{\overline{\tau}}$ (recall Example \ref{holonomy-foliation}).
Item (b) will imply that $\nabla^{\overline{\tau}}$ is an extension of the $\ensuremath{\mathcal{F}}$-partial compatible metric connection $\nabla^{\tau}.$ Let us consider an open covering $\{U_{\alpha}^{b}\}$ of $B$ where the $U_{\alpha}^{b}$ are precompact open sets of $B$. Define $U_{\alpha}=\mathrm{Tub}_{\epsilon}(U_{\alpha}^{b})=\ensuremath{\mathsf{p}}^{-1}(U_{\alpha}^{b})$, where $\ensuremath{\mathsf{p}}: \mathrm{Tub}_{\epsilon}(B) \to B$ is the metric projection. Therefore $\mathrm{Tub}_{\epsilon}(B)=\cup_{\alpha} U_{\alpha}$ and each $U_{\alpha}$ is homothetic invariant. We claim that \emph{there exists a regular homothetic invariant distribution $\mathcal{N}_\alpha$ on $U_\alpha$ so that \begin{itemize} \item $\mathcal{N}_{\alpha}\subset T\overline{\ensuremath{\mathcal{F}}},$ \item $TE= \mathcal{T} \oplus \mathcal{N}_{\alpha} \oplus \mathcal{K},$ \item $\mathcal{N}_{\alpha}|_{B}=\nu(\ensuremath{\mathcal{F}}|_{B})$. \end{itemize}} In fact, let $\{ X_{\alpha}^{i} \}$ be an orthonormal frame of $\nu(\ensuremath{\mathcal{F}}|_{U_\alpha})$. By \cite{Alexandrino-Radeschi-Molinosconjecture}, we can extend these vector fields to vector fields tangent to $\overline{\ensuremath{\mathcal{F}}}$ and then linearize them to produce linearly independent vector fields $\{(X_{\alpha}^{i})^{\ell}\}.$ This process can be done at least in a smaller tubular neighborhood $\mathrm{Tub}_{\epsilon_0}(U_{\alpha}^{b})$ for $\epsilon_0<\epsilon$. But since linearized vector fields are homothetic invariant, and the homothetic transformations $h_\lambda$ are diffeomorphisms, we can extend $\{(X_{\alpha}^{i})^{\ell}\}$ to linearly independent vector fields defined on $\mathrm{Tub}_{\epsilon}(B)$ and let $\mathcal{N}_{\alpha}$ be the distribution generated by these vector fields. Since the homothetic transformations preserve the distributions $\mathcal{K}$, $\mathcal{T}$ and send leaf closures to leaf closures, we conclude that $\mathcal{N}_\alpha$ satisfies the desired properties.
Now we define, for each $\alpha$, a metric $g_\alpha$ on $U_\alpha$ so that: \begin{enumerate} \item[(i)] $\mathcal{K}$ is orthogonal (with respect to $g_\alpha$) to the distribution $\mathcal{T}\oplus \mathcal{N}_\alpha$, which is a regular distribution contained in $T\overline{\ensuremath{\mathcal{F}}}$; \item[(ii)] $g|_{\mathcal{K}}=g_{\alpha}|_{\mathcal{K}}.$ \end{enumerate} Property (i) implies that $\nu(\overline{\ensuremath{\mathcal{F}}})$ is contained in $\mathcal{K}$ and property (ii) that the vector space $\nu(\overline{L})$ does not depend on $\alpha$. Finally we define $\tilde{g}=\sum_{\alpha} \varphi_{\alpha} g_{\alpha},$ where $\{\varphi_\alpha\}$ is a partition of unity subordinate to $\{U_\alpha\}$. Let $\widetilde{\mathcal{\overline{T}}}$ be the orthogonal complement (with respect to $\tilde{g}$) of $\mathcal{K}$. From property (i) and the definition of $\tilde{g}$, we infer that $\mathcal{T}$ is $\tilde{g}$-orthogonal to $\mathcal{K}$ and hence \begin{equation} \label{eq-1-lemma-total-connection} \mathcal{T}\subset \widetilde{\mathcal{\overline{T}}}. \end{equation} As we remarked before, the normal distribution of $\overline{\ensuremath{\mathcal{F}}}$ does not depend on $\alpha$ and is contained in $\mathcal{K}$, and hence the normal distribution $\nu(\overline{\ensuremath{\mathcal{F}}})$ (with respect to $\tilde{g}$) is contained in $\mathcal{K}$. The facts that $\nu(\overline{\ensuremath{\mathcal{F}}}) \subset \mathcal{K}$ and that $\mathcal{K}$ is the orthogonal complement (with respect to $\tilde{g}$) of $\widetilde{\mathcal{\overline{T}}}$ imply that \begin{equation} \label{eq-2-lemma-total-connection} \widetilde{\mathcal{\overline{T}}}\subset T\overline{\ensuremath{\mathcal{F}}}.
\end{equation} Set $\mathcal{\overline{T}}:=(\widetilde{\mathcal{\overline{T}}})^{\ell}.$ Equations \eqref{eq-1-lemma-total-connection} and \eqref{eq-2-lemma-total-connection}, together with the facts that $(T\overline{\ensuremath{\mathcal{F}}})^{\ell}\subset T\overline{\ensuremath{\mathcal{F}}}$ and $\mathcal{T}^{\ell}=\mathcal{T}$, imply $$\mathcal{T}=\mathcal{T}^{\ell}\subset (\widetilde{\mathcal{\overline{T}}})^{\ell}= \overline{\mathcal{T}} \subset (T\overline{\ensuremath{\mathcal{F}}})^{\ell}\subset T\overline{\ensuremath{\mathcal{F}}},$$ and hence items (a) and (b) are fulfilled. \end{proof} \begin{remark} In Lemma \ref{lemma-total-connection}, the distribution $\mathcal{N}$ of the triple $(\mathcal{K},\mathcal{T},\mathcal{N})$ associated to $\ensuremath{\mathcal{F}}$ can be chosen to be a distribution tangent to the leaves of $\overline{\ensuremath{\mathcal{F}}}$. It is defined as the distribution in $\overline{\mathcal{T}}$ so that $\overline{\mathcal{T}}=\mathcal{T}\oplus\mathcal{N}$ and $\ensuremath{\mathsf{p}}_{*}(\mathcal{N})=\nu(\ensuremath{\mathcal{F}}|_{B}).$ \end{remark} \subsubsection{The local closure foliation} Let us now recall the construction of the foliation $\widehat{\ensuremath{\mathcal{F}}}^\ell$, which has the following properties: $\ensuremath{\mathcal{F}}lin\subset \widehat{\ensuremath{\mathcal{F}}}^\ell \subset \overline{\ensuremath{\mathcal{F}}lin}$ and the restriction $\widehat{\ensuremath{\mathcal{F}}}^\ell_b$ to each $\ensuremath{\mathsf{p}}$-fiber $U_b$ is homogeneous and closed, since it is formed by the orbits of $\overline{K_{b}^{0}}$.
A leaf $\widehat{L}_q$ of $\widehat{\ensuremath{\mathcal{F}}}^\ell$ through $q$ is defined as: \[ \widehat{L}_q = \left\{ \Phi(k \cdot q) \mid \Phi \in \mathsf{D}, \, k \in \overline{K_{b}^{0}} \right\}. \] The foliations $\ensuremath{\mathcal{F}}lin$ and $\widehat{\ensuremath{\mathcal{F}}}^\ell$ are SRFs with respect to the metric which makes $\mathcal{K}$ orthogonal to $\mathcal{T} \oplus \mathcal{N}$ while preserving the metric of each component, i.e., the flat metric induced by the exponential map on $\mathcal{K}$ and the metric $\ensuremath{\mathsf{p}}^\ast g_B$ on $\mathcal{N} \oplus \mathcal{T}$, where $g_B$ is the restriction of $g$ to $B$ (see Figure \ref{TNK}). More precisely, $\widehat{\ensuremath{\mathcal{F}}}^\ell$ is an \emph{orbit-like foliation}, that is, an SRF such that each infinitesimal foliation is an SRF given by orbits of a compact subgroup of $O(n)$ (see \cite{Alexandrino-Radeschi-Molinosconjecture} for the definition and properties). \begin{example} An illustration of the concept of orbit-like foliation can be extracted from Example \ref{holonomy-foliation}. Consider in this example a holonomy foliation $\ensuremath{\mathcal{F}}^\tau$ determined by a linear connection $\nabla^\tau: \mathfrak{X}(B) \times \Gamma(E) \to \Gamma(E)$ such that the holonomy groups are all compact (e.g., a Riemannian connection on a Riemannian manifold, or the normal connection of a submanifold embedded in a Euclidean space, as presented in \cite{Berndt-Console-Olmos}). Since the intersections of the leaves of $\ensuremath{\mathcal{F}}^\tau$ with the fibers of $E$ are given by orbits of the holonomy groups, in this particular case the holonomy foliation is already an orbit-like foliation, from which it follows that $\ensuremath{\mathcal{F}}^\tau = (\ensuremath{\mathcal{F}}^\tau)^\ell = (\widehat{\ensuremath{\mathcal{F}}^\tau})^\ell$.
\end{example} \subsection{A few facts about Lie groupoids} \label{section-facts-groupoids} Recall that a Lie groupoid is composed of two manifolds $\mathcal{G}_1$ and $\mathcal{G}_0$, where elements of $\mathcal{G}_1$ are thought of as arrows between elements of $\mathcal{G}_0$. The maps $s,t: \mathcal{G}_1 \to \mathcal{G}_0$, which associate to an arrow $g \in \mathcal{G}_1$ its source and its target, are required to be surjective submersions. It then follows that the space \[\mathcal{G}_2 = \{(g,h) \in \mathcal{G}_1\times \mathcal{G}_1: s(g)=t(h)\}\] of composable arrows is a manifold and there is a smooth multiplication map \[m: \mathcal{G}_2 \longrightarrow \mathcal{G}_1, \quad m(g,h) = gh,\] which is associative and satisfies $s(gh) = s(h)$ and $t(gh) = t(g)$. A Lie groupoid also comes equipped with a smooth embedding of $\mathcal{G}_0$ into $\mathcal{G}_1$, \[u: \mathcal{G}_0 \to \mathcal{G}_1, \quad u(x) = 1_x,\] which allows us to view each $x \in \mathcal{G}_0$ as an identity arrow $1_x$ whose source and target are $x$. Needless to say, this arrow acts as an identity: \[1_xh = h, \text{ and } g1_x = g \text{ for all } h \in t^{-1}(x), \text{ and } g \in s^{-1}(x).\] Finally, there is a diffeomorphism $i: \mathcal{G}_1 \to \mathcal{G}_1$ which associates to each arrow $g \in \mathcal{G}_1$ its inverse arrow $i(g) = g^{-1}$, and which satisfies \[s(g^{-1}) = t(g),\quad t(g^{-1}) = s(g), \quad gg^{-1} = 1_{t(g)}, \quad g^{-1}g = 1_{s(g)}.\] We will denote a Lie groupoid by $\mathcal{G}_1 \rightrightarrows \mathcal{G}_0$. Any Lie groupoid $\mathcal{G} = \mathcal{G}_1 \rightrightarrows \mathcal{G}_0$ induces a foliation on $\mathcal{G}_0$ whose leaves are the connected components of \emph{the orbits of} $\mathcal{G}$, $\mathcal{O}_x = t(s^{-1}(x))$. Some examples of Lie groupoids will be important for us in this paper. We present them here.
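Before turning to these examples, the axioms above can be tested on the simplest case of all, the pair groupoid (a standard illustration, included here only as a sketch): take $\mathcal{G}_1 = M \times M$ and $\mathcal{G}_0 = M$ with \[ s(x,y)=y, \quad t(x,y)=x, \quad (x,y)\cdot(y,z)=(x,z), \quad 1_x=(x,x), \quad (x,y)^{-1}=(y,x). \] All the identities listed above are immediate; note that $\mathcal{O}_x = t(s^{-1}(x)) = M$, so the induced foliation has a single leaf when $M$ is connected.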
\begin{example} An example of a Lie groupoid that will be important throughout this paper is the \emph{holonomy groupoid} $\mathrm{Hol}(\ensuremath{\mathcal{F}}) \rightrightarrows M$ of a regular foliation $\ensuremath{\mathcal{F}}$ on $M$. An arrow in $\mathrm{Hol}(\ensuremath{\mathcal{F}})$ is an equivalence class of paths contained in a leaf of $\ensuremath{\mathcal{F}}$, where two leafwise paths with the same endpoints are identified when they induce the same germ of diffeomorphism sliding transversals along the paths; in particular, leafwise homotopic paths are identified. The source of $[\alpha] \in \mathrm{Hol}(\ensuremath{\mathcal{F}})$ is $\alpha(0)$, the starting point of $\alpha$, and the target of $[\alpha]$ is its endpoint $\alpha(1)$. Multiplication is given by concatenation of paths. The identity arrow at $x \in M$ is the class of the constant path at $x$, and inversion is given by \[[\alpha]^{-1} = [\bar{\alpha}], \text{ where } \bar{\alpha}(t) = \alpha(1-t).\] The orbits of $\mathrm{Hol}(\ensuremath{\mathcal{F}})$ are precisely the leaves of $\ensuremath{\mathcal{F}}$. \end{example} \begin{example} A Lie groupoid over a point is just a Lie group. More generally, a bundle of Lie groups is just a Lie groupoid for which $s = t$. In this case, each $s$-fiber inherits the structure of a Lie group and we can view $\mathcal{G}_1$ as a smooth family of Lie groups parameterized by $\mathcal{G}_0$. \end{example} \begin{example}\label{example:action-groupoid} Let $\mu:G\times M\to M$ be an action. Then the \emph{action groupoid or transformation groupoid} is defined as $\mathcal{G}_1=G\times M$ and $\mathcal{G}_0=M$ with source map $s(g,x)=x$, target map $t(g,x)=\mu(g,x)$ and unit map $u(x)=(e,x)$.
The product map is the composition, i.e., $m\big((h_2,y),(h_1,x)\big)=(h_{2}h_{1},x)$, and the inverse map is $i(h_1,x)=(h_{1}^{-1},\mu(h_{1},x)).$ \end{example} \begin{example} Associated to each $G$-principal bundle $\pi: P\to B$ there exists a transitive Lie groupoid with isotropy groups isomorphic to $G$, called the \emph{gauge groupoid} of $P$. This groupoid can be realized as the quotient of $P\times P$ by the diagonal action of $G$, i.e., $\mathcal{G}_1:=(P\times P)/G$ and $\mathcal{G}_0:=B$, with structure determined by \[s([p,q]) = \pi(q), \quad t([p,q]) = \pi(p), \quad [p,q]\cdot[q,r] = [p,r].\] \end{example} \begin{example}\label{ex:gl-groupoid} Similarly to vector spaces, a vector bundle $E\to B$ has a \emph{general linear groupoid} $GL(E)\rightrightarrows B$ whose arrows are the linear isomorphisms between the fibers. This groupoid can be realized as the gauge groupoid of the frame bundle $F(E)\to B$. In this paper we usually assume that $E$ is a Euclidean vector bundle. In this case we reduce the general linear groupoid of $E$ to obtain the \emph{orthogonal linear groupoid} $\mathcal{O}(E)\rightrightarrows B$, whose arrows are the linear isometries between the fibers of $E$. Equivalently, $\mathcal{O}(E)$ can be realized as the gauge groupoid associated to the orthogonal frame bundle of $E$. \end{example} In this paper we will be interested in applying two general constructions for Lie groupoids to the specific setting coming from the study of SRFs; namely, we will need to take the quotient of a Lie groupoid by a free and proper action of a Lie group $G$ through automorphisms, and we will need to consider the transformation groupoid associated to a representation of a Lie groupoid on a vector bundle. We now explain these constructions. Let $G$ be a Lie group and $\mathcal{G} = \mathcal{G}_1 \rightrightarrows \mathcal{G}_0$ be a Lie groupoid.
An \emph{action of $G$ on $\mathcal{G}$ by Lie groupoid automorphisms} is an action of $G$ on $\mathcal{G}_1$ and on $\mathcal{G}_0$ such that for each $a \in G$ the map \[\xymatrix{ \mathcal{G}_1 \ar@<0.25pc>[d] \ar@<-0.25pc>[d] \ar[rr]^{\Psi_a} & & \mathcal{G}_1 \ar@<0.25pc>[d] \ar@<-0.25pc>[d] \\ \mathcal{G}_0 \ar[rr]_{\psi_a} & & \mathcal{G}_0} \] is a Lie groupoid morphism, i.e., commutes with all of the structure maps. The action is said to be \emph{free and proper} if the action on $\mathcal{G}_1$ is free and proper. We remark that since $s: \mathcal{G}_1 \to \mathcal{G}_0$ is a surjective submersion with a $G$-equivariant global section $u: \mathcal{G}_0 \to \mathcal{G}_1$, it follows that the action of $G$ on $\mathcal{G}_0$ is also free and proper. In this paper the action that will appear naturally is a right action. \begin{proposition} \label{prop:quotient} Let $G$ be a Lie group which acts freely and properly on a Lie groupoid $\mathcal{G}_1 \rightrightarrows \mathcal{G}_0$ through automorphisms. Then $\mathcal{G}_1/G \rightrightarrows \mathcal{G}_0/G$ has an induced structure of a Lie groupoid. \end{proposition} \begin{proof} We define the source and target of $\mathcal{G}/G$ in the obvious way: \[\bar{s}[g] = [s(g)], \quad \bar{t}[g] = [t(g)].\] The maps $\bar{s}$ and $\bar{t}$ are well defined because of the $G$-equivariance of $s$ and $t$. Moreover, they are smooth surjective submersions because $s$, $t$ and the projection $p: \mathcal{G} \to \mathcal{G}/G$ are smooth surjective submersions. Next we define the multiplication of $\mathcal{G}/G$. A pair $([g], [h])$ of arrows of $\mathcal{G}/G$ is composable if and only if $[s(g)] = [t(h)]$. In this case, there exists a unique $a \in G$ such that $s(g) = t(h)a = t(ha)$. We define \[[g][h] = [g(ha)].\] The reader can easily check that the multiplication map is well defined and associative.
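For the reader's convenience, we sketch the well-definedness check (the elements $b, c \in G$ are auxiliary and appear only in this computation). If $g'=gb$ and $h'=hc$ are other representatives, then, since the action is by automorphisms, $s(g')=s(g)b=t(h)ab$ while $t(h'a')=t(h)ca'$, so by freeness the element associated to the pair $([g'],[h'])$ is $a'=c^{-1}ab$, and \[ g'(h'a') = (gb)\bigl((hc)(c^{-1}ab)\bigr) = (gb)\bigl((ha)b\bigr) = \bigl(g(ha)\bigr)b, \] whence $[g'(h'a')]=[g(ha)]=[g][h]$.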
Moreover, we note that the multiplication $\bar{m}: (\mathcal{G}/G)_2 \to \mathcal{G}/G$ sits in the following commutative diagram \[\xymatrix{ \mathcal{G}_2 \ar[d]_{p_2} \ar[r]^m & \mathcal{G}\ar[d]_p \\ (\mathcal{G}/G)_2 \ar[r]_{\bar{m}} & \mathcal{G}/G,} \] where \[p_2: \mathcal{G}_2 \to (\mathcal{G}/G)_2, \quad p_2(g,h) = ([g],[h])\] is a surjective submersion. It follows that $\bar{m}$ is smooth. The other structure maps are defined similarly and are clearly smooth. They are given by \[[g]^{-1} = [g^{-1}], \quad \bar{u}([x]) = [1_x].\] Finally, we note that the units and inverses of the quotient groupoid indeed satisfy the properties in the definition of a Lie groupoid. For example, if $\bar{s}([g]) = [x]$, then $s(g) = xa$ for some $a \in G$, and \[[g][1_x] =[g (1_x a)] = [g1_{xa}] = [g].\] The other properties are proven similarly by direct computations. \end{proof} We next describe the transformation groupoid associated to a representation of a Lie groupoid $\mathcal{G}_1\rightrightarrows \mathcal{G}_0$ on a vector bundle $\pi:E \to \mathcal{G}_0$. Recall that a \emph{representation of $\mathcal{G}_1\rightrightarrows \mathcal{G}_0$ on a vector bundle $\pi: E \to \mathcal{G}_0$} is a smooth map \[\psi: \mathcal{G}_1 \times_{\mathcal{G}_0} E \longrightarrow E, \quad \psi(g,e) = ge,\] where $\mathcal{G}_1 \times_{\mathcal{G}_0} E = \{(g, v) \in \mathcal{G}_1 \times E : s(g) = \pi(v)\},$ satisfying the following properties: \begin{itemize} \item $\psi(g, \cdot) = \psi_g: E_{s(g)} \to E_{t(g)}$ is a linear isomorphism for all $g \in \mathcal{G}_1$; \item $1_{\pi(v)}v = v$ for all $v \in E$; \item $g(hv) = (gh)v$ for all $(g,h) \in \mathcal{G}_2$ and all $v \in E_{s(h)}$. \end{itemize} Equivalently, a representation of $\mathcal{G}$ on $E$ can be recast as a Lie groupoid morphism $\mathcal{G} \to \mathrm{GL}(E)$.
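As a quick illustration of this definition (a sketch, assuming a Lie group representation $\rho: G \to GL(V)$, which is not part of the constructions above), the action groupoid $G \times M \rightrightarrows M$ of Example \ref{example:action-groupoid} acts on the trivial bundle $E = M \times V$ by \[ \psi\bigl((g,x),(x,v)\bigr) = \bigl(\mu(g,x), \rho(g)v\bigr). \] Each $\psi_{(g,x)}$ is a linear isomorphism between the fibers over $x$ and $\mu(g,x)$, the units act as the identity since $\rho(e)=\mathrm{id}_V$, and the compatibility $g(hv)=(gh)v$ reduces to $\rho(h_2)\rho(h_1)=\rho(h_2h_1)$.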
Given a representation of $\mathcal{G}_1\rightrightarrows \mathcal{G}_0$ on a vector bundle $\pi: E \to \mathcal{G}_0$, one obtains a new groupoid $\mathcal{G} \ltimes E$, called the \emph{transformation Lie groupoid of the representation}. Its structure is described below. The manifolds of arrows and objects are \[(\mathcal{G} \ltimes E)_1 = \mathcal{G}_1 \times_{\mathcal{G}_0} E, \quad (\mathcal{G} \ltimes E)_0 = E. \] Its source, target and multiplication are given by \[s(g,v) = v, \quad t(g,v) = gv, \quad (g, hv)(h,v) = (gh, v).\] Finally, its units and inverses are given by \[1_v = (1_{\pi(v)},v), \quad (g,v)^{-1} = (g^{-1}, gv).\] An example relevant to the theory of SRFs will be given in the next section. \begin{remark} For any saturated submanifold $B$ of $\mathcal{G}_0$, there is a canonical representation of the restriction groupoid $\mathcal{G}_B \rightrightarrows B$ on the normal bundle $\nu(B)$, called the \emph{normal representation}. The transformation groupoid of the normal representation is a \emph{local linear model} for $\mathcal{G}$ around $B$. There are linearization results identifying $\mathcal{G}$ around $B$ with the local linear model, for instance when $\mathcal{G}$ is proper \cite{Crainic-Struchiner,Fernandes-Hoyo}. Going back to SRFs, we will show in Proposition \ref{proposition-groupoid-linear-foliation} that the linearized foliation $\ensuremath{\mathcal{F}}lin$ is the orbit foliation of a representation, and this may be interpreted as the local linear model for a possible groupoid presenting $\ensuremath{\mathcal{F}}$. \end{remark} \subsection{A few facts about Lie algebroids} \label{section-facts-algebroids} In Section \ref{section-rotate-translate-groupoid} we will also make use of the infinitesimal object associated to a Lie groupoid, known as a \emph{Lie algebroid}, and of some elements of its integration theory, which we recall here.
A Lie algebroid is a vector bundle $\pi: A \to M$ endowed with: \begin{itemize} \item a Lie bracket $[\cdot,\cdot]$ on the space $\Gamma(A)$ of sections of $A$; \item a vector bundle map $\rho: A \to TM$, known as the \emph{anchor of the Lie algebroid}, \end{itemize} such that the following Leibniz identity holds for all sections $\alpha, \beta \in \Gamma(A)$ and all functions $f \in \mathrm{C}^\infty(M)$: \[[\alpha, f\beta] = f[\alpha, \beta] + \rho(\alpha)(f)\beta.\] Every Lie groupoid $\mathcal{G}_1 \rightrightarrows \mathcal{G}_0$ determines a Lie algebroid $A \to \mathcal{G}_0$. The construction of $A$ is analogous to the construction of the Lie algebra of a Lie group, but taking into account that on a Lie groupoid there are many identity elements, and that right translation by an element $g \in \mathcal{G}_1$ is only defined on the source fiber of $t(g)$. Explicitly, one takes the vector bundle $A \to \mathcal{G}_0$ to be the pullback by $u$ of the kernel of $ds$. It then follows that the sections of $A$ identify with the space of right invariant vector fields on $\mathcal{G}_1$, and this identification induces a Lie bracket on $\Gamma(A)$. The anchor map is given by the restriction of $dt$ to $A$. A simple verification shows that one obtains in this way a Lie algebroid out of a Lie groupoid. \begin{example} Let $\mu:G\times M\to M$ be an action and let $\mathcal{G}_1=G\times M \rightrightarrows M = \mathcal{G}_0$ be the action groupoid of Example \ref{example:action-groupoid}. Following the previous discussion, we conclude that the associated Lie algebroid must be $A=\mathfrak{g}\times M\to M$. Here it is clear that the Lie bracket on $\Gamma(A)$ is induced by the right invariant vector fields on $G$ and their infinitesimal action on $M$.
Moreover, the anchor map $d t:A\to TM$, defined as $ dt (\phi_x)=d\mu_x \phi=\frac{d}{d t}\mu(\exp(t\phi),x)|_{t=0}$, induces a Lie algebra morphism from $\Gamma(A)$ to the \emph{fundamental vector fields} $x\to\phi^{\#}(x):=d\mu_x \phi \in \Gamma(TM)$, see Figure \ref{holonomy}. \end{example} \begin{figure} \caption{An illustration of the action groupoid and its respective Lie algebroid. Here $z = h g \, x$, $y = g \, x$ and $X \in \Gamma(A)$.} \label{holonomy} \end{figure} Here are some other examples of Lie algebroids which will be relevant for us in this paper: \begin{example} The tangent distribution of a regular foliation $\ensuremath{\mathcal{F}}$ gives rise to a subbundle $T\ensuremath{\mathcal{F}}$ of $TM$. Since this distribution is involutive, the bracket of sections of $T\ensuremath{\mathcal{F}}$ is again a section of $T\ensuremath{\mathcal{F}}$. This bracket, together with the inclusion $T\ensuremath{\mathcal{F}} \to TM$ as anchor, endows $T\ensuremath{\mathcal{F}}$ with the structure of a Lie algebroid. \end{example} \begin{example} Another example of a Lie algebroid that will be used in this paper is that of a bundle of Lie algebras, i.e., a Lie algebroid $A\to M$ for which the anchor satisfies $\rho\equiv 0$. In this case, each fiber inherits the structure of a Lie algebra and we can view $A$ as a smooth family of Lie algebras parameterized by $M$. \end{example} \begin{example} The Lie algebroid of the general linear groupoid of a vector bundle $E \to M$ (Example \ref{ex:gl-groupoid}) is called the \emph{general linear algebroid} of $E$, and is denoted by $\mathfrak{gl}(E)$.
As a vector bundle, $\mathfrak{gl}(E)$ fits into a short exact sequence \begin{equation}\label{eq:exact-sequence}0\longrightarrow E^{*}\otimes E\longrightarrow \mathfrak{gl}(E) \longrightarrow TM \longrightarrow 0.\end{equation} Its space of sections can be identified with the space of degree 1 derivations of the vector bundle $E$, i.e., the space of linear operators $D:\Gamma(E)\to \Gamma(E)$ such that there exists a vector field $X_D$ on $M$ satisfying $D(fs)=X_D(f)s + fD(s)$ for all sections $s \in \Gamma(E)$ and functions $f \in \mathrm{C}^\infty(M)$. The Lie bracket of two derivations is the commutator bracket. We observe that the vector bundle splittings of the exact sequence \eqref{eq:exact-sequence} are in 1-1 correspondence with linear connections on $E$. In fact, a connection $\nabla$ produces the splitting $\sigma(X)=\nabla_X$. When $E$ is a Euclidean vector bundle, an analogous construction can be made so that the splittings correspond bijectively to connections compatible with the fiberwise metric. The general linear algebroid of $E$ can also be realized as the \emph{Atiyah algebroid} of the frame bundle of $E$. We recall that the Atiyah algebroid of a principal $G$-bundle $\pi:P \to M$ is $A = \frac{TP}{G}$ as a vector bundle (over $M$). Its Lie bracket on the space of sections is obtained by identifying sections of $A$ with $G$-invariant vector fields on $P$, and its anchor is induced by $d\pi: TP \to TM$. For more details see \cite{Crainic-Moerdijk}. \end{example} A Lie algebroid is called \emph{integrable} if it is isomorphic to the Lie algebroid of a Lie groupoid. In contrast to the usual Lie theory for Lie groups, not every Lie algebroid is integrable, but the obstructions to integrability are well known (see \cite{Crainic-Fernandes}). In this paper we will only need the fact that every Lie subalgebroid of an integrable Lie algebroid is itself integrable.
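A minimal instance of this fact, assembled from the examples above (a sketch; the notation $\mathrm{Lie}(\cdot)$ for the Lie algebroid of a Lie groupoid is introduced only here): the tangent algebroid $T\ensuremath{\mathcal{F}}$ of a regular foliation is a Lie subalgebroid of $TM$, and $TM$ is integrable, being the Lie algebroid of the pair groupoid $M \times M \rightrightarrows M$; an integration of $T\ensuremath{\mathcal{F}}$ is then provided by the holonomy groupoid of Section \ref{section-facts-groupoids}: \[ T\ensuremath{\mathcal{F}} \subset TM = \mathrm{Lie}(M \times M \rightrightarrows M), \qquad T\ensuremath{\mathcal{F}} = \mathrm{Lie}\bigl(\mathrm{Hol}(\ensuremath{\mathcal{F}}) \rightrightarrows M\bigr). \]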
Even though this result follows from the general obstruction theorem of \cite{Crainic-Fernandes}, we will use here the approach of \cite{Moerdijk-Mrcun-subalgebroid}, where an explicit description of the integrating groupoid is given as follows. Let $A \to M$ be a Lie subalgebroid of $A' \to M$, and let $\mathcal{G} = \mathcal{G}_1\rightrightarrows \mathcal{G}_0 = M$ be a Lie groupoid which integrates $A'$. Then for each $g \in \mathcal{G}_1$ we may consider the subspace of $\ker d_g s$ obtained by right translating $A_{t(g)} \subset \ker d_{1_{t(g)}}s$ to $\ker d_gs$. In this way, one obtains an $s$-vertical involutive distribution on $\mathcal{G}_1$, i.e., a regular foliation $\ensuremath{\mathcal{F}}^A$ on $\mathcal{G}_1$. It then follows that the holonomy groupoid of this foliation is a Lie groupoid $\mathrm{Hol}(\ensuremath{\mathcal{F}}^A) \rightrightarrows \mathcal{G}_1$. Moreover, $\mathcal{G}$ acts on $\mathrm{Hol}(\ensuremath{\mathcal{F}}^A)$ by automorphisms and the quotient $\mathcal{H} \rightrightarrows \mathcal{G}_0$ is a Lie groupoid integrating $A$. This Lie groupoid comes equipped with a Lie groupoid immersion $i: \mathcal{H} \to \mathcal{G}$ whose derivative restricts to the inclusion of $A$ into $A'$. For further details we refer to \cite{Moerdijk-Mrcun-subalgebroid}. \begin{remark} It is also proven in \cite{Moerdijk-Mrcun-subalgebroid} that the groupoid morphism $i: \mathcal{H} \to \mathcal{G}$ is injective if and only if the holonomy of the foliation $\ensuremath{\mathcal{F}}^A$ is trivial. \end{remark} \section{Semi-local Models of SRF} \label{section-semi-local-modelsofSRF} In this section we prove Theorem \ref{theorem-semi-local-model}. Using the results in Section \ref{section-facts-SRF}, it suffices to prove the proposition below. Before we do so, let us fix some notation.
First, denote the generalized holonomy foliation defined in Example \ref{generalized-holonomy-foliation} by $\ensuremath{\mathcal{F}} (\nabla^{\tau},\ensuremath{\mathcal{F}}^{E})$. Now recall that, given an SRF $(M, g, \ensuremath{\mathcal{F}})$ and a closed saturated submanifold $B \subset M$ contained in a stratum, it is possible to consider the linearized foliation $\ensuremath{\mathcal{F}}lin$ on an $\epsilon$-tubular neighborhood $U$ around $B$, and for each $b \in B$ we denote by $\ensuremath{\mathcal{F}}_b$ the infinitesimal foliation on $\nu_b B$ obtained by homothetic extension of the foliation $(\exp^\nu)^{-1}(\ensuremath{\mathcal{F}} \cap U_b)$. By an abuse of notation we will denote by $\ensuremath{\mathcal{F}}lin$ the homothetic extension of $(\exp^{\nu})^{-1}(\ensuremath{\mathcal{F}}lin)$ to $\nu B$. \begin{proposition} \label{lemma-linearized-vector-linearfoliation} Let $(M, g, \ensuremath{\mathcal{F}})$ be an SRF, $B \subset M$ a closed saturated submanifold contained in a stratum, $\mathcal{T}$ the distribution on $E = \nu B$ described in Section \ref{subsection-3-distribution} and $\ensuremath{\mathcal{F}}^{E}$ the singular foliation on $E$ fiberwise determined by the infinitesimal foliations of $\ensuremath{\mathcal{F}}$. Then \begin{enumerate} \item[(a)] $\ensuremath{\mathcal{F}} (\nabla^{\tau}, \ensuremath{\mathcal{F}}^{E}) = \ensuremath{\mathcal{F}}$. \item[(b)]$\ensuremath{\mathcal{F}} (\nabla^{\tau}, \ensuremath{\mathcal{F}}^{E})^\ell = \ensuremath{\mathcal{F}}lin$.
\end{enumerate} \end{proposition} \begin{proof} \ \begin{enumerate} \item[(a)] The equality $\ensuremath{\mathcal{F}} (\nabla^{\tau}, \ensuremath{\mathcal{F}}^{E}) = \ensuremath{\mathcal{F}}$ follows directly from the definition of $\ensuremath{\mathcal{F}}^{E}$ (i.e., $\ensuremath{\mathcal{F}}^{E}$ is the foliation whose leaves are the leaves of $\ensuremath{\mathcal{F}}_{b}$, $\forall b\in B$) and the fact that $\nabla^{\tau}$ is an $\ensuremath{\mathcal{F}}$-invariant connection (i.e., for any $v\in E$ and any $\ensuremath{\mathcal{F}}_{B}$-leafwise path $\alpha$, the $\mathcal{T}$-horizontal curve $t\to \mathcal{P}_{\alpha}^{t}(v)$ is contained in $L_v$). \item[(b)] Since the distribution $\mathcal{T}$ is tangent to $\ensuremath{\mathcal{F}}lin$ and the orbits of $K_{b}^{0}$ are contained in $\ensuremath{\mathcal{F}}lin$, we conclude that $\ensuremath{\mathcal{F}} (\nabla^{\tau}, \ensuremath{\mathcal{F}}^{E})^\ell \subset \ensuremath{\mathcal{F}}lin$. We now want to prove that $\ensuremath{\mathcal{F}}lin \subset \ensuremath{\mathcal{F}} (\nabla^{\tau}, \ensuremath{\mathcal{F}}^{E})^\ell$, i.e., that the flows of linearized vector fields are contained in $\ensuremath{\mathcal{F}} (\nabla^{\tau}, \ensuremath{\mathcal{F}}^{E})^\ell$. Let $\varphi_{t}$ be the flow of a linearized vector field tangent to $\ensuremath{\mathcal{F}}$, let $b_0 \in B$ and $v_{0} \in E_{b_0}$, and let $P_{b_0}$ be a plaque through $b_0$ in a normal (geodesic) coordinate system, so that to each $b \in P_{b_0}$ one can associate a unique geodesic segment $\gamma_{b_0, b} \subset P_{b_0}$ connecting $b$ to $b_0$. Consider the parallel transport $\mathcal{P}_{\gamma_{b_0, b_t}}: E_{b_t} \to E_{b_0}$ along $\gamma_{b_0, b_t}$, where $b_t = \pi (\varphi_{t}(v_0))$ and $\pi: E|_{P_{b_0}} \to P_{b_0}$ is the base point projection. Then $t \to \mathcal{P}_{\gamma_{b_0, b_t}} \circ \varphi_{t} =: k_t$ is a curve of isometries of $E_{b_0}$ that preserves $\ensuremath{\mathcal{F}}_{b_0}$ and such that $k_0 = \text{Id}$.
Therefore $t \to k_t$ is a curve in $K_{b_0}^{0}$ starting at the identity. It follows that $\varphi_{t}(v_0) = \mathcal{P}_{(\gamma_{b_0, b_t})^{-1}} (k_t (v_0)) \in \ensuremath{\mathcal{F}} (\nabla^{\tau}, \ensuremath{\mathcal{F}}^{E})^\ell$. This concludes the proof. \end{enumerate} \end{proof} \section{Lie Groupoid Structures} \label{section-lie-groupoid-structure} In this section we construct and discuss the Lie groupoid whose orbits are the leaves of the foliations $\ensuremath{\mathcal{F}}^{\ell}$ and $\widehat{\ensuremath{\mathcal{F}}}^\ell$, and in particular we prove Theorem \ref{theorem-linear-holonomy-groupoid} (this is done in Theorem \ref{proposition-groupoid-linear-foliation} below). After the proof we exemplify our construction in two extreme cases: regular foliations around a closed leaf, and around a fixed point of a singular foliation. We hope that these examples will help the reader in understanding the main theorem. We also present here the proof of Theorem \ref{foliated-Ambrose-and-Singer}. \subsection{The Holonomy Groupoid of $\ensuremath{\mathcal{F}}lin$} \begin{theorem} \label{proposition-groupoid-linear-foliation} Let $\ensuremath{\mathcal{F}}$ be a singular Riemannian foliation on a complete manifold $(M,g)$, let $B$ be a closed saturated submanifold contained in a stratum and let $U$ be a saturated $\epsilon$-tubular neighbourhood of $B$. Then the leaves of the foliations $\ensuremath{\mathcal{F}}lin$ and $\widehat{\ensuremath{\mathcal{F}}}^\ell$ are the orbits of a Lie groupoid. \end{theorem} \begin{proof} Let us prove that the leaves of $\ensuremath{\mathcal{F}}lin$ are orbits of a Lie groupoid. A similar proof holds for the foliation $\widehat{\ensuremath{\mathcal{F}}}^\ell$. We denote by $E$ the normal bundle $\nu(B)$ of $B$. Let $\ensuremath{\mathcal{F}}^{\ell}_{0}$ be the homothetic extension of $(\exp^{\nu})^{-1}(\ensuremath{\mathcal{F}}^{\ell})$ to $E$.
Recall that to each linearized flow on $U$ tangent to $\ensuremath{\mathcal{F}}$ we can associate a flow $t\to \varphi_{t}$ on $E$ so that $\varphi_{t}: E_{b} \to E_{\varphi_{t}(b)}$ is an isometry (which will also be called the \emph{linearized flow}) for each $b \in B$ where $\varphi_t(b)$ makes sense. The leaves of the singular foliation $\ensuremath{\mathcal{F}}^{\ell}_{0}$ are the orbits of the pseudogroup generated by these flows. Let $O(E)$ be the orthogonal frame bundle associated to $E\to B$. Note that each linearized flow $t\to \varphi_{t}$ on $E$ induces a flow on $O (E)$, which we denote by the same symbol $t \to \varphi_{t}$. Let $\widetilde{\ensuremath{\mathcal{F}}} = \{\widetilde{L}_{\xi_b}\}_{\xi_b \in O (E)}$ be the singular foliation obtained by compositions of these lifted flows. Let $\mathcal{H}^{\tau}$ be the horizontal distribution on $O(E)$ along $T \ensuremath{\mathcal{F}}_B$ induced by the partial linear connection $\nabla^{\tau}$ described in Proposition \ref{proposition-connections-tau-affine}. Then for each frame $\xi_{b}\in O(E)_{b}$ we have: \begin{equation} \label{eq-1-lemma-groupoid-linear-foliation} T_{\xi_{b}}\widetilde{L}_{\xi_{b}}=\mathcal{H}^{\tau}_{\xi_{b}}\oplus T_{\xi_{b}}\big( O(E)_{b}\cap \widetilde{L}_{\xi_{b}} \big). \end{equation} Since for each $b\in B$ the group $K_{b}^{0}$ (defined in Section \ref{preliminaries-foliation}) acts effectively on $E_{b}$, the group $K_{b}^{0}$ induces a free action on $O(E)_{b}$, and the orbits of this action coincide with the intersections of the leaves of $\widetilde{\ensuremath{\mathcal{F}}}$ with $O(E)_{b}$. In particular: \begin{equation} \label{eq-2-lemma-groupoid-linear-foliation} \dim T_{\xi_{b}}\big( O(E)_{b}\cap \widetilde{L}_{\xi_{b}} \big)=\dim K_{b}^{0}.
\end{equation} Since $\dim K_{b}^{0}$ does not depend on $b\in B$ (recall Corollary \ref{corollary-isomorphismK}), we infer from equations \eqref{eq-1-lemma-groupoid-linear-foliation} and \eqref{eq-2-lemma-groupoid-linear-foliation} that the foliation $\widetilde{\ensuremath{\mathcal{F}}}$ is a (regular) foliation, i.e., the dimensions of the leaves are constant. Let $\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}}) \rightrightarrows O(E)$ be the holonomy groupoid of the foliation $\widetilde{\ensuremath{\mathcal{F}}}$. Note that the lifted flows act on $O(E)$ through bundle automorphisms, and, as such, they are $O(n)$-equivariant. It follows that the foliation $\widetilde{\ensuremath{\mathcal{F}}}$ is invariant under the right $O(n)$-action on $O(E)$. This action maps leafwise curves to leafwise curves and transversal submanifolds to transversal submanifolds. Therefore, the action can be lifted to an action on the holonomy groupoid, and it is easy to check that this gives a free and proper action by automorphisms on $\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})\rightrightarrows O(E)$. By taking the quotient under the $O(n)$-action we obtain the following commutative diagram \[\xymatrix{ \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}}) \ar@<0.25pc>[r] \ar@<-0.25pc>[r] \ar[d] & O(E) \ar[d] \\ \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})/O(n) \ar@<0.25pc>[r] \ar@<-0.25pc>[r] & O(E)/O(n) = B. }\] We conclude that the groupoid $\mathcal{G} = \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})/O(n) \rightrightarrows B$ is a Lie groupoid (recall Proposition \ref{prop:quotient}). We remark that $\mathcal{G}$ comes with a canonical representation on $E$. If we identify $E$ with the associated bundle $O(E)\times_{O(n)} \mathbb{R}^n$, then the representation is given by \[\bar{g}\cdot [\xi, v] := [t(g),v],\] where $g$ is the unique representative of $\bar{g}$ in $\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})$ such that $s(g) = \xi$.
Note also that the orbits of this representation coincide with the leaves of $\ensuremath{\mathcal{F}}^\ell$. In fact, any leafwise curve of $\widetilde{\ensuremath{\mathcal{F}}}$ is homotopic to a concatenation of linearized flows, so we can represent the arrows of $\mathrm{Hol}({\widetilde{\ensuremath{\mathcal{F}}}})$ as concatenations of linearized flows; then the arrows of the transformation groupoid $\mathcal{G}\ltimes E$ are products of arrows of the form $[(\tilde{\varphi}_{t},\xi, v)]$, with source $[(\xi,v)]$ and target $[(\varphi_t(\xi),v)]=\varphi_t([(\xi,v)])$, generated by linearized flows. Therefore, by defining $\mathcal{G}^{\ell}=\mathcal{G}\ltimes E$ to be the transformation Lie groupoid of the representation, we obtain a Lie groupoid whose orbits are the leaves of the foliation $\ensuremath{\mathcal{F}}lin$. A similar construction holds for $\widehat{\ensuremath{\mathcal{F}}}^\ell$. \end{proof} The Lie groupoid $\mathcal{G}^\ell \rightrightarrows E$ constructed in the previous theorem will be called the \emph{Linear Holonomy Groupoid of $\ensuremath{\mathcal{F}}$ around $B$}. \begin{remark} We remark that the Lie groupoid $\mathcal{G} \rightrightarrows B$ comes with a canonical representation on the normal bundle of $\ensuremath{\mathcal{F}}$ restricted to $B$. In fact, even though the normal bundle $\nu(\ensuremath{\mathcal{F}}) = TM/T\ensuremath{\mathcal{F}}$ is not a smooth vector bundle, when we restrict it to $B$ we obtain an honest vector bundle $\nu(\ensuremath{\mathcal{F}})|_B = E \oplus \nu(\ensuremath{\mathcal{F}}_B)$, where $\nu(\ensuremath{\mathcal{F}}_B) = TB/T\ensuremath{\mathcal{F}}_B$. On the one hand, $\mathcal{G}$ has the representation on $E$ discussed in the previous theorem: it is the action which gives rise to the linearized foliation on $E$.
On the other hand, since $\mathcal{G}$ is a source connected regular Lie groupoid with orbit foliation $\ensuremath{\mathcal{F}}_B$, it follows that there exists a morphism of Lie groupoids $\mathcal{G} \to \mathrm{Hol}(\ensuremath{\mathcal{F}}_B)$ which is a surjective submersion (see \cite{Crainic-Moerdijk-foliation}). By composing this morphism with the canonical action of $\mathrm{Hol}(\ensuremath{\mathcal{F}}_B)$ on $\nu(\ensuremath{\mathcal{F}}_B)$ we obtain a representation of $\mathcal{G}$ on $\nu(\ensuremath{\mathcal{F}}_B)$. \end{remark} \begin{remark}[Monodromy groupoid] \label{rmk:monodromy} We note that a similar construction can be made using the monodromy groupoid $\Pi_1(\widetilde{\ensuremath{\mathcal{F}}}) \rightrightarrows O(E)$ of the lifted foliation $\widetilde{\ensuremath{\mathcal{F}}}$ instead of the holonomy groupoid. The groupoid obtained after taking the quotient by the $O(n)$-action has the same orbits as the groupoid constructed in the proof above. In fact, there is a natural groupoid covering map $\Pi_1(\widetilde{\ensuremath{\mathcal{F}}})/O(n) \to \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})/O(n)$, induced by the covering map $\Pi_1(\widetilde{\ensuremath{\mathcal{F}}}) \to \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})$ that exists before taking the quotient by the $O(n)$-action. \end{remark} \begin{remark}[Regular case]\label{ex:regular-case} In \cite{Molino}, Molino studied the structure of a (regular) Riemannian foliation $\ensuremath{\mathcal{F}}$ on a complete Riemannian manifold $B$ by considering its lift to the associated orthogonal frame bundle. His construction can be considered a particular case of the construction above.
More precisely, in this case we can take the vector bundle $E$ to be the normal bundle $\nu (\ensuremath{\mathcal{F}})$ of the foliation, with the induced metric and the partial Bott connection $\nabla^{\tau}$ on $\nu(\ensuremath{\mathcal{F}})$, where \[\nabla^{\tau}_X (Y \text{ mod } \ensuremath{\mathcal{F}}) = [X,Y] \text{ mod } \ensuremath{\mathcal{F}}.\] Since the Bott connection is locally flat (its leafwise curvature vanishes by the Jacobi identity), one sees that the lifted foliation $\widetilde{\ensuremath{\mathcal{F}}}$ on $O(E)$ is also regular. \end{remark} In order to get a better feeling for the Linear Holonomy Groupoid, we present below two extreme examples where the groupoid can be explicitly described. \begin{example}[Regular case around a closed leaf]\label{regular-case-2} When $\ensuremath{\mathcal{F}}$ is regular and $B = L$ is a closed leaf of $\ensuremath{\mathcal{F}}$, we point out that $\mathcal{G}^\ell$ is in fact the linearization (see \cite{Crainic-Struchiner}) of the (usual) holonomy groupoid of the foliation $\ensuremath{\mathcal{F}}$ around $L$. In other words, $\mathcal{G}^\ell$ is isomorphic to the transformation groupoid $\mathrm{Hol}(\ensuremath{\mathcal{F}})|_{L} \ltimes \nu L \rightrightarrows \nu L$, where $\mathrm{Hol}(\ensuremath{\mathcal{F}})|_{L} \rightrightarrows L$ is the restriction of the holonomy groupoid to $L$. In order to show this, we recall that $E$ is just the normal bundle $\nu(L)$, and the connection $\nabla^\tau$ is the Bott connection of the foliation (see Remark \ref{ex:regular-case}). Moreover, by counting dimensions we see that the distribution $\mathcal{N}$ must be trivial, or equivalently, the Lie group $K^0_b$ is trivial for all $b \in B$. From this it follows that the lifted foliation $\widetilde{\ensuremath{\mathcal{F}}}$ on the orthogonal frame bundle $O(E)$ is the horizontal foliation determined by the (flat!) Bott connection. With this description we can compute the Lie groupoid $\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}}) \rightrightarrows O(E)$ explicitly.
One obtains that \[\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}}) = O(E)\times_L\mathrm{Hol}(\ensuremath{\mathcal{F}})|_L = \{(\xi, [\alpha]): \pi(\xi) = \alpha(0)\},\] where $\pi: O(E) \to L$ denotes the frame bundle projection. The source of $(\xi, [\alpha])$ is $\xi$ and its target is $\widetilde{\alpha}_\xi(1)$, where $\widetilde{\alpha}_\xi$ is the horizontal lift of $\alpha$ to $O(E)$ starting at the frame $\xi$. The product, defined when $\xi_2 = (\widetilde{\alpha_1})_{\xi_1}(1)$, is \[(\xi_2, [\alpha_2])\cdot (\xi_1, [\alpha_1]) = (\xi_1, [\alpha_2 \ast \alpha_1]),\] where $\alpha_2 \ast \alpha_1$ denotes the concatenation of $\alpha_1$ with $\alpha_2$. It is then clear that $\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}}) / O(n)$ is isomorphic to $\mathrm{Hol}(\ensuremath{\mathcal{F}})|_L$ and that, under this identification, both representations on $\nu(L)=E$ coincide. \end{example} \begin{example}[Around a Fixed Point] The example above is extreme because of the triviality of the group $K^0$. We now explain the other extreme case, where the connection is trivial. Let $\ensuremath{\mathcal{F}}$ be a SRF on $M$ with a fixed point $x$, i.e., such that $L_x = \{x\}$, and let $B = \{x\}$. In this case, $E = T_xM$ is a vector space with an inner product, and the linearized foliation is given by the orbits of the subgroup $K^0$ of the orthogonal group of the vector space $T_xM$. The frame bundle in this case is just the orthogonal group itself, and the lifted foliation is $\widetilde{\ensuremath{\mathcal{F}}} = \{K^0a: a \in O(T_xM)\}$. It is easy to check that this foliation has trivial holonomy groups, and therefore its holonomy groupoid is given by \[\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}}) = \{(ka,a): a \in O(T_xM) \text{ and } k \in K^0\}.\] The source of an arrow $(ka,a)$ is $a$ while its target is $ka$, and the product is given by $(k'ka, ka)\cdot(ka, a) = (k'ka, a)$.
In other words, we can identify $\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})$ with the action groupoid of the action of $K^0$ on $O(T_xM)$. It now follows easily that $\mathcal{G} = \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})/O(T_xM) = K^0$ (seen as a Lie groupoid over $\{x\}$), and that $\mathcal{G}^\ell = K^0 \ltimes T_xM \rightrightarrows T_xM$ is the action groupoid associated to the representation of $K^0$ on $T_xM$. \end{example} \subsection{Proof of Theorem \ref{foliated-Ambrose-and-Singer}} \label{Sec-foliated-Ambrose-and-Singer} As in Theorem \ref{proposition-groupoid-linear-foliation}, let $O(E)$ be the orthogonal frame bundle associated to $E \to B$ and $\mathcal{H}^{\tau}$ the horizontal distribution on $O(E)$ along $T \ensuremath{\mathcal{F}}_B$ induced by the partial linear connection $\nabla^{\tau}$ on $E$. Note that parallel transport along a regular curve $\alpha\subset L$ (i.e., a curve that is an integral curve of a vector field tangent to the leaves of $\ensuremath{\mathcal{F}}_{B}$) can be described by a linearized flow $t \mapsto \varphi_{t}$, which can be lifted to a flow $t \mapsto \widetilde{\varphi}_{t}$ on $O(E)$. Let $\widetilde{\ensuremath{\mathcal{F}}} = \{\widetilde{L}_{\xi_b}\}_{\xi_b \in O(E)}$ be the singular foliation obtained as the orbits of the pseudogroup generated by these lifted flows. Our goal is to prove that $\widetilde{\ensuremath{\mathcal{F}}}$ is a regular foliation. Once we have proved this, we can follow the same argument as in the proof of Theorem \ref{proposition-groupoid-linear-foliation}: we define $\widetilde{\mathcal{G}} = \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}}) \rightrightarrows O(E)$ as the holonomy Lie groupoid of $\widetilde{\ensuremath{\mathcal{F}}}$, set $\mathcal{G}$ to be the Lie groupoid $\widetilde{\mathcal{G}} / O(n) \rightrightarrows B$, and the desired Lie groupoid $\mathcal{G}^{\tau}$ will be $\mathcal{G} \ltimes E$.
Note that for each $b \in B$ the holonomy group $\text{Hol}^{\tau}_{b}$ (with respect to $\nabla^{\tau}$) acts effectively and freely on $O(E)_b$, and the orbits of this action coincide with the intersections of the leaves of $\widetilde{\ensuremath{\mathcal{F}}}$ with $O(E)_b$. In particular \begin{equation} \label{eq-1-proof-theorem-holonomy} \dim T_{\xi_{b}} \big( O(E)_b\cap \widetilde{L}_{\xi_{b}} \big) = \dim \mathfrak{hol}^{\tau}_{b}, \end{equation} where $\mathfrak{hol}^{\tau}_{b}$ denotes the Lie algebra of the holonomy group $\text{Hol}^{\tau}_{b}$. Equation \eqref{eq-1-lemma-groupoid-linear-foliation} (which also holds here), Equation \eqref{eq-1-proof-theorem-holonomy}, and the fact that the dimension of $\widetilde{\ensuremath{\mathcal{F}}}$ is lower semi-continuous allow us to conclude that for a slice $S_{b}$ of $\ensuremath{\mathcal{F}}_{B}$ at $b$ and each $y\in S_{b}$ close to $b$ \begin{equation} \label{eq1-holonomy-constant} \dim \mathfrak{hol}^{\tau}_{y}\geq \dim \mathfrak{hol}^{\tau}_{b}. \end{equation} Since $\ensuremath{\mathcal{F}}_{B}$ is dense in $B$, we can find a sequence $(b_{n})_{n \in \mathbb{N}}$ of points in $S_{b}$ such that $b_{n} \to y$ and $b_{n} \in L_{b}$; from $b_{n} \in L_{b}$ we conclude that \begin{equation} \label{eq2-holonomy-constant} \dim \mathfrak{hol}^{\tau}_{b_{n}}=\dim \mathfrak{hol}^{\tau}_{b}. \end{equation} On the other hand, replacing $b$ with $y$ and $y$ with $b_{n}$ in Equation \eqref{eq1-holonomy-constant}, we have that \begin{equation} \label{eq3-holonomy-constant} \dim \mathfrak{hol}^{\tau}_{b_{n}}\geq \dim \mathfrak{hol}^{\tau}_{y}. \end{equation} These equations together imply $$\dim \mathfrak{hol}^{\tau}_{b} = \dim \mathfrak{hol}^{\tau}_{b_{n}} \geq \dim \mathfrak{hol}^{\tau}_{y} \geq \dim \mathfrak{hol}^{\tau}_{b}.$$ Therefore $\dim \mathfrak{hol}^{\tau}_{y}=\dim \mathfrak{hol}^{\tau}_{b}$ for $y\in S_{b}$ near $b$.
This fact, together with Equations \eqref{eq-1-lemma-groupoid-linear-foliation} and \eqref{eq-1-proof-theorem-holonomy}, implies that the foliation $\widetilde{\ensuremath{\mathcal{F}}}$ is a regular foliation. \section{Lie groupoid structure of $\overline{\ensuremath{\mathcal{F}}^{\ell}}$} \label{section-subgroupoid-closure} In this section we show that if $B=\overline{L}$ is the closure of a leaf $L$ in $M$, then the leaf closure foliation $\overline{\ensuremath{\mathcal{F}}^{\ell}}$ comes from a proper Lie groupoid. Moreover, the linear holonomy groupoid $\mathcal{G}^{\ell}$ constructed in Section \ref{section-lie-groupoid-structure} is a subgroupoid of the groupoid $\overline{\mathcal{G}^{\ell}}$ describing $\overline{\ensuremath{\mathcal{F}}^{\ell}}$, and in fact, $\mathcal{G}^{\ell}$ is dense in $\overline{\mathcal{G}^{\ell}}$. The proof of this result will rely on a similar statement for regular Riemannian foliations, which can be seen as an extension of Molino's theorem about leaf closures \cite{Molino}. We include the proof of this particular case in Section \ref{subsec-regular-case} and proceed to the singular case in Section \ref{around-leaf-closure}. \subsection{Closure in the regular case}\label{subsec-regular-case} The structure theory for regular Riemannian foliations developed by Molino (\cite{Molino}) has as a consequence that the foliation $\overline{\ensuremath{\mathcal{F}}}$ given by the leaf closures of a regular Riemannian foliation $\ensuremath{\mathcal{F}}$ is itself a (possibly singular) Riemannian foliation. However, a bit more is true: the leaves of $\overline{\ensuremath{\mathcal{F}}}$ are the orbits of a proper Lie groupoid. This fact seems to be well known to the community working in the intersection of Lie groupoid theory and Riemannian foliations (see for example \cite{Wang}). Here we present a proof of this result in the spirit of the previous constructions of this paper.
\begin{proposition}\label{closure-regular-case} Let $\ensuremath{\mathcal{F}}$ be a regular Riemannian foliation on a complete manifold $(M,g)$. Then the leaves of the singular foliation $\overline{\ensuremath{\mathcal{F}}}$ are the orbits of a proper Lie groupoid $\overline{\mathrm{Hol}({\ensuremath{\mathcal{F}}})}\rightrightarrows M$. Moreover, $\mathrm{Hol}(\ensuremath{\mathcal{F}})$ is a dense subgroupoid of $\overline{\mathrm{Hol}(\ensuremath{\mathcal{F}})}$. \end{proposition} \begin{proof} Let $\ensuremath{\mathcal{F}}$ be a (regular) Riemannian foliation, denote by $E$ the normal bundle of $\ensuremath{\mathcal{F}}$, and by $\widetilde{\ensuremath{\mathcal{F}}}$ the lifted foliation on $O(E)$ obtained using the Bott connection (see Remark \ref{ex:regular-case}). This foliation is transversally parallelizable, which implies that $\overline{\widetilde{\ensuremath{\mathcal{F}}}}$ is a simple regular foliation \cite{Molino}. Since $\overline{\widetilde{\ensuremath{\mathcal{F}}}}$ is a simple foliation and the restriction of $\widetilde{\ensuremath{\mathcal{F}}}$ to a leaf closure is a Lie foliation, we conclude that both $\overline{\widetilde{\ensuremath{\mathcal{F}}}}$ and $\widetilde{\ensuremath{\mathcal{F}}}$ have trivial leafwise holonomy. In other words, for each pair of points in a leaf there is a single holonomy class of paths connecting them; therefore the holonomy groupoid of $\overline{\widetilde{\ensuremath{\mathcal{F}}}}$ is just the set of pairs of points on the same leaf, and hence is a proper Lie groupoid. Moreover, the homomorphism $\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})\to\mathrm{Hol}(\overline{\widetilde{\ensuremath{\mathcal{F}}}})$ is injective. We now show that $\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})$ is dense in $\mathrm{Hol}(\overline{\widetilde{\ensuremath{\mathcal{F}}}})$.
Let $x\xrightarrow{g} y$ be an arrow of $\mathrm{Hol}(\overline{\widetilde{\ensuremath{\mathcal{F}}}})$; this means that $x$ and $y$ lie in the same leaf closure $\overline{\widetilde{L}}$. There exist sequences $x_n, y_n$ in $\widetilde{L}$ converging to $x$ and $y$ respectively, and a sequence of path holonomies $x_n\xrightarrow{h_n} y_n$. Using the fact that $\mathrm{Hol}(\overline{\widetilde{\ensuremath{\mathcal{F}}}})$ is proper and has trivial isotropies, we conclude that $h_n$ converges to $g$. The action of $O(n)$ on $O(E)$ preserves the foliation $\widetilde{\ensuremath{\mathcal{F}}}$, and consequently preserves $\overline{\widetilde{\ensuremath{\mathcal{F}}}}$. It extends to a free and proper action on $\mathrm{Hol}(\overline{\widetilde{\ensuremath{\mathcal{F}}}})\rightrightarrows O(E)$ by automorphisms. It follows that we obtain a commutative diagram of Lie groupoids \[\xymatrix{ \mathrm{Hol}(\overline{\widetilde{\ensuremath{\mathcal{F}}}}) \ar@<0.25pc>[r] \ar@<-0.25pc>[r] \ar[d] & O(E) \ar[d] \\ \mathrm{Hol}(\overline{\widetilde{\ensuremath{\mathcal{F}} }})/O(n) \ar@<0.25pc>[r] \ar@<-0.25pc>[r] & M . }\] Let $\overline{\mathrm{Hol}(\ensuremath{\mathcal{F}})}$ be the Lie groupoid $\mathrm{Hol}(\overline{\widetilde{\ensuremath{\mathcal{F}} }})/O(n) \rightrightarrows M.$ Since the leaves of the lifted foliation $\widetilde{\ensuremath{\mathcal{F}}}$ project onto leaves of $\ensuremath{\mathcal{F}}$, and the projection $O(E) \xrightarrow{\pi} M$ is closed, it follows that $\overline{L} = \pi(\overline{\widetilde{L}})$, showing that the orbits of $\overline{\mathrm{Hol}(\ensuremath{\mathcal{F}})}$ are the leaf closures of $\ensuremath{\mathcal{F}}$. Moreover, since $O(n)$ is a compact Lie group, it follows also that $\overline{\mathrm{Hol}(\ensuremath{\mathcal{F}})}$ is proper.
We deduce that $\mathrm{Hol}(\ensuremath{\mathcal{F}})$ is a dense subgroupoid of $\overline{\mathrm{Hol}(\ensuremath{\mathcal{F}})}$ from the fact that $\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})$ is a dense subgroupoid of $\mathrm{Hol}(\overline{\widetilde{\ensuremath{\mathcal{F}}}})$. \end{proof} \begin{remark}\label{rmk:foliation-with-symmetries} If we assume moreover that $M\curvearrowleft K$ is a right action by isometries which preserves $\ensuremath{\mathcal{F}}$, then $K$ acts on the right of $\overline{\mathrm{Hol}(\ensuremath{\mathcal{F}})}\rightrightarrows M$ by automorphisms. In fact, if $\ensuremath{\mathcal{F}}$ is invariant under the isometric action of $K$ on $M$, then for each $k$ in $K$ we have an isometry $E_x \xrightarrow{d R_{k}} E_{x\cdot k}$. This defines a right action $O(E)\curvearrowleft K$ which sends a frame $\mathbb{R}^{n}\xrightarrow{\xi_x} E_x$ to \[ \xymatrix{ \mathbb{R}^{n}\ar[r]^{\xi_x}\ar@/^1.2pc/[rr]^{d R_{k}\circ \xi_x} & E_x \ar[r]^{d R_{k}} & E_{x\cdot k}. } \] Since $K$ preserves $\ensuremath{\mathcal{F}}$, the above action preserves $\widetilde{\ensuremath{\mathcal{F}}}$. Consequently this action sends leaf closures to leaf closures, and therefore $K$ acts by automorphisms on $\mathrm{Hol}(\overline{\widetilde{\ensuremath{\mathcal{F}}}})$. Note also that for $k$ in $K$ the map $k:O(E)\to O(E)$ is $O(n)$-equivariant. So, for $\xi$ in $O(E)$ and $A$ in $O(n)$, we have $$ (\xi A)\cdot k= d R_{k}\circ (\xi \circ A) = (d R_{k}\circ \xi)\circ A = (\xi \cdot k)A. $$ Therefore these actions commute, and the action of $K$ on $\mathrm{Hol}(\overline{\widetilde{\ensuremath{\mathcal{F}}}})$ induces a $K$-action on $\overline{\mathrm{Hol}(\ensuremath{\mathcal{F}})}$ by automorphisms. \end{remark} \subsection{Around the closure of a leaf}\label{around-leaf-closure} We now use the results of Section \ref{subsec-regular-case} to generalize Proposition \ref{closure-regular-case} to the linearization of a SRF around the closure of a leaf.
\begin{theorem} Let $\ensuremath{\mathcal{F}}$ be a singular Riemannian foliation on a complete manifold $(M,g)$ and let $B = \overline{L}$. Then there exists a proper Lie groupoid $\overline{\mathcal{G}^{\ell}}$ over a saturated $\epsilon$-tubular neighborhood $U$ of $B$ whose orbits are the leaves of $\overline{\ensuremath{\mathcal{F}}^{\ell}}$. In addition, $\mathcal{G}^{\ell}$ is a dense Lie subgroupoid of $\overline{\mathcal{G}^{\ell}}$. \end{theorem} \begin{proof} Since the foliation $\ensuremath{\mathcal{F}}_B$ is dense, we can use Lemma \ref{lemma-total-connection} to produce a Riemannian metric on $O(E)$ such that the lifted foliation $\widetilde{\ensuremath{\mathcal{F}}}$ is a regular Riemannian foliation. Since $\widetilde{\ensuremath{\mathcal{F}}}$ is a Riemannian foliation, Proposition \ref{closure-regular-case} implies that there exists a proper Lie groupoid $\overline{\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})}$ whose orbit foliation is $\overline{\widetilde{\ensuremath{\mathcal{F}}}}$ and such that $\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})$ is a dense subgroupoid.
By taking the product of both of these Lie groupoids with the trivial Lie groupoid $\mathbb{R}^n\rightrightarrows \mathbb{R}^n$ we obtain an injective morphism of Lie groupoids \[\xymatrix{ \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}}) \times \mathbb{R}^n \ar@<0.25pc>[dr] \ar@<-0.25pc>[dr] \ar[rr] & & \overline{\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})} \times \mathbb{R}^n \ar@<0.25pc>[dl] \ar@<-0.25pc>[dl] \\ & O(E)\times \mathbb{R}^n .&} \] From Remark \ref{rmk:foliation-with-symmetries} and Proposition \ref{prop:quotient} we can take the quotient by $O(n)$ to obtain \[\xymatrix{ \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})\times{\mathbb{R}^n} \ar[r] \ar[d] & \overline{\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})} \times \mathbb{R}^n \ar[d]^{\pi} \\ \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})/O(n)\ltimes E \ar[r] & \overline{\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})}/O(n)\ltimes E. }\] This shows that $\mathcal{G}^{\ell}=\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})/O(n)\ltimes E $ is a subgroupoid of $\overline{\mathcal{G}^{\ell}}:=\overline{\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})}/O(n)\ltimes E$. Moreover, since $\overline{\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})}$ is proper and the transformation groupoid of an action of a proper groupoid is again proper, it follows that $\overline{\mathcal{G}^{\ell}}$ is a proper Lie groupoid. We will now show that $\mathcal{G}^{\ell}$ is dense in $\overline{\mathcal{G}^{\ell}}$. Let $g$ be an arrow of $\overline{\mathcal{G}^{\ell}}$. Fix an arrow $(\widetilde{g},v)$ in $\overline{\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})}\times{\mathbb{R}^n}$ with $\pi(\widetilde{g},v)=g$. By density there exists a sequence $\widetilde{h}_n$ in $\mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})$ converging to $\widetilde{g}$. 
Setting $h_n=\pi(\widetilde{h}_n,v)$ we get a sequence in $\mathcal{G}^{\ell}$ converging to $g$, showing that $\mathcal{G}^{\ell}$ is dense in $\overline{\mathcal{G}^{\ell}}$. Finally, in order to show that the orbits of $\overline{\mathcal{G}^{\ell}}$ are the leaf closures of the orbits of $\mathcal{G}^{\ell}$, we identify $E$ with the associated bundle $O(E) \times_{O(n)} \mathbb{R}^n$ and note that the natural projection map $\pi: O(E) \times \mathbb{R}^n \to E$ is closed. Since each leaf $L$ of $\ensuremath{\mathcal{F}}^{\ell}$ can be written as $L=\pi(\widetilde{L}\times\{v\})$ for a leaf $\widetilde{L}$ of $\widetilde{\ensuremath{\mathcal{F}}}$, it follows that $\overline{L} = \pi(\overline{\widetilde{L}}\times \{v\})$, and hence the orbit foliation of $\overline{\mathcal{G}^{\ell}}$ is $\overline{\ensuremath{\mathcal{F}}^{\ell}}$. \end{proof} \section{The Lie Algebroid Associated to the Infinitesimal Data} \label{section-rotate-translate-groupoid} In this section we discuss the Lie algebroid of the linear holonomy groupoid of a SRF; see Definition \ref{definition-algebroid}. The construction holds in a slightly more general setting, where we drop any metric condition on the data. In what we present below, we avoid a direct use of the SRF by using only the infinitesimal data that one obtains from the semi-local model of a SRF.
The main ingredients of the construction are as follows: \begin{enumerate} \item[(a)] A rank $n$ vector bundle $E \to B$; \item[(b)] a (regular) foliation $\ensuremath{\mathcal{F}}_B$ on $B$; \item[(c)] an $\ensuremath{\mathcal{F}}_B$-partial linear connection $\nabla^{\tau}: \mathfrak{X}(\ensuremath{\mathcal{F}}_{B})\times \Gamma(E)\to \Gamma(E)$; \item[(d)] a bundle of Lie algebras $\mathfrak{k} \to B$ such that: \begin{enumerate} \item[(d1)] $\mathfrak{hol}^{\tau}_{b} \subset \mathfrak{k}_{b}\subset \mathrm{End} (E_b)$ for all $b \in B$, where $\mathfrak{hol}^{\tau}_{b}$ is the (leafwise) holonomy Lie algebra of $\nabla^{\tau}$, $\mathfrak{k}_b$ is the fiber of $\mathfrak{k}$ over $b \in B$, and $\mathrm{End}(E_b)$ denotes the Lie algebra of endomorphisms of $E_b$; \item[(d2)] $\Gamma(\mathfrak{k})$ is invariant under the operators $\nabla^\tau_X$ with respect to the commutator bracket of $\Gamma(\mathrm{End}(E))$, i.e., \[\nabla_X^\tau \phi = \nabla_X^\tau \circ \phi - \phi \circ \nabla_X^\tau \in \Gamma(\mathfrak{k})\] for all $X \in \mathfrak{X}(\ensuremath{\mathcal{F}}_B)$ and $\phi \in \Gamma(\mathfrak{k})$. \end{enumerate} \end{enumerate} Using these ingredients we build an integrable Lie algebroid $A(\nabla^{\tau},\mathfrak{k}) \to B$ which satisfies the following properties: \begin{itemize} \item $A(\nabla^{\tau},\mathfrak{k})$ fits into an exact sequence of Lie algebroids \[0 \longrightarrow \mathfrak{k} \longrightarrow A(\nabla^{\tau},\mathfrak{k}) \longrightarrow T\ensuremath{\mathcal{F}}_B \longrightarrow 0;\] \item $A(\nabla^{\tau},\mathfrak{k})$ is a Lie subalgebroid of the Atiyah algebroid $\mathfrak{gl}(E)$ of the frame bundle of $E$.
\end{itemize} \begin{remark} If we start with a SRF $\ensuremath{\mathcal{F}}$ on $M$ and a closed saturated submanifold $B$ contained in a stratum of $M$, then we obtain the infinitesimal data from the semi-local model of $\ensuremath{\mathcal{F}}$ around $B$. In this case, $E$ is the normal bundle to $B$ in $M$, $\nabla^\tau$ is the $\ensuremath{\mathcal{F}}_B$-partial connection defined in Proposition \ref{proposition-connections-tau-affine}, and $\mathfrak{k}_b$ is the Lie algebra of the Lie group $K^0_b$. For this particular example it follows that $A(\nabla^{\tau},\mathfrak{k})$ is the Lie algebroid of the Lie groupoid $\mathcal{G} = \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})/O(n) \rightrightarrows B$, and the inclusion of $A(\nabla^{\tau},\mathfrak{k})$ into $\mathfrak{gl}(E)$ is the restriction of the differential of the representation map $\mathcal{G} \to \mathrm{GL}(E)$. It then follows that the Lie algebroid of the linear holonomy groupoid $\mathcal{G}^\ell$ is the action algebroid associated to the representation $A(\nabla^{\tau},\mathfrak{k}) \to \mathfrak{gl}(E)$. \end{remark} \begin{remark} Before we present the formal construction of our Lie algebroid, let us briefly give an intuition of how it appears in the case of $\mathcal{G} = \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})/O(n) \rightrightarrows B$. As explained in Section \ref{section-lie-groupoid-structure}, each leaf of the foliation $\widetilde{\ensuremath{\mathcal{F}}}$ on $O(E)$ is invariant under the action of $K^{0}$ and the basic distribution $\mathcal{H}^{\tau}$ is tangent to it. These facts allow us to identify $T\widetilde{\ensuremath{\mathcal{F}}}$ with $\mathcal{H}^{\tau}\oplus\mathfrak{k}$. With this identification, we can compute the Lie bracket of $O(n)$-invariant vector fields tangent to $\widetilde{\ensuremath{\mathcal{F}}}$ in terms of their components.
Passing to the quotient, we obtain a Lie bracket on the vector bundle $$T\ensuremath{\mathcal{F}}_{B}\oplus \mathfrak{k}=(\mathcal{H}^{\tau}\oplus\mathfrak{k})/O(n)=T\widetilde{\ensuremath{\mathcal{F}}}/O(n)\to O(E)/O(n)=B.$$ \end{remark} \begin{remark} Our construction can be thought of as a foliated version of the Ambrose-Singer reduction theorem. In fact, one can restate the classical theorem as follows: if $\nabla$ is a linear connection on a rank $n$ vector bundle $E \to B$, then for any Lie algebra $\mathfrak{k}$ such that $\mathfrak{hol}^\nabla \subset \mathfrak{k} \subset \mathfrak{gl}(n)$, there exists a transitive Lie subalgebroid $A(\nabla, \mathfrak{k})$ of the Atiyah algebroid $\mathfrak{gl}(E)$ such that the isotropy Lie algebras of $A(\nabla, \mathfrak{k})$ are all isomorphic to $\mathfrak{k}$. In the case of a foliated connection we must replace the Lie algebra above by a bundle of Lie algebras $\mathfrak{k}$ containing the possibly singular bundle of Lie algebras $\mathfrak{hol}^\tau$. \end{remark} The first step in the construction of $A(\nabla^{\tau},\mathfrak{k})$ is a foliated version of the Atiyah algebroid $\mathfrak{gl}(E)$. For a vector bundle $E$ over a foliated manifold $B$ we define the \emph{foliated general linear algebroid} $\mathfrak{gl}(E,\ensuremath{\mathcal{F}}_B) \to B$ as follows.
As a vector bundle, $\mathfrak{gl}(E,\ensuremath{\mathcal{F}}_B)$ is the fibered product of $T\ensuremath{\mathcal{F}}_B$ with $\mathfrak{gl}(E)$ with respect to the anchor $\rho$, i.e., \[\mathfrak{gl}(E,\ensuremath{\mathcal{F}}_B) = \{(X, D) \in T\ensuremath{\mathcal{F}}_B \oplus \mathfrak{gl}(E): \rho(D) = X\}.\] We remark that $\mathfrak{gl}(E, \ensuremath{\mathcal{F}}_B)$ sits in a short exact sequence \begin{equation}\label{Atiyah-sequence} 0 \longrightarrow \mathrm{End}(E) \longrightarrow \mathfrak{gl}(E, \ensuremath{\mathcal{F}}_B) \longrightarrow T\ensuremath{\mathcal{F}}_B \longrightarrow 0, \end{equation} and therefore $\mathfrak{gl}(E, \ensuremath{\mathcal{F}}_B)$ is a (smooth) vector bundle. The space of sections of $\mathfrak{gl}(E, \ensuremath{\mathcal{F}}_B)$ identifies with the space of \emph{$\ensuremath{\mathcal{F}}_B$-compatible derivations of $E$}: $\mathbb{R}$-linear operators $D: \Gamma(E) \to \Gamma(E)$ for which there exists a vector field $X_D \in \mathfrak{X}(\ensuremath{\mathcal{F}}_B)$ such that \[D(fs)=fD(s)+X_D(f)s\] for all $f$ in $C^{\infty}(B)$ and $s$ in $\Gamma(E)$. The Lie bracket of $\Gamma(\mathfrak{gl}(E, \ensuremath{\mathcal{F}}_B))$ is the commutator bracket of derivations. Finally, the anchor of $\mathfrak{gl}(E, \ensuremath{\mathcal{F}}_B)$ is $\rho(X,D) = X$. The main purpose of considering the foliated general linear algebroid is that there is a one-to-one correspondence between splittings $\tau: T\ensuremath{\mathcal{F}}_B \to \mathfrak{gl}(E, \ensuremath{\mathcal{F}}_B)$ of the foliated Atiyah sequence \eqref{Atiyah-sequence} and $\ensuremath{\mathcal{F}}_B$-partial connections on $E$, given by \[\nabla^\tau_Xs = \tau(X)(s).\] It follows that a choice of an $\ensuremath{\mathcal{F}}_B$-partial connection induces an identification of vector bundles $\mathfrak{gl}(E, \ensuremath{\mathcal{F}}_B) \simeq T\ensuremath{\mathcal{F}}_B \oplus \mathrm{End}(E)$.
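To make the identification concrete, note (a routine verification, implicit in the statement above) that a compatible derivation $D$ covering $X_D$ differs from $\nabla^{\tau}_{X_D}$ by a $C^{\infty}(B)$-linear operator, since the Leibniz terms cancel. The identification and its inverse are therefore
\[ (X,D) \longmapsto \big(X,\; D-\nabla^{\tau}_{X}\big), \qquad (X,\phi) \longmapsto \big(X,\; \nabla^{\tau}_{X}+\phi\big), \]
where $D-\nabla^{\tau}_{X}$ lies in $\Gamma(\mathrm{End}(E))$ because $(D-\nabla^{\tau}_{X})(fs) = f\,(D-\nabla^{\tau}_{X})(s)$ for all $f \in C^{\infty}(B)$ and $s \in \Gamma(E)$.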
Under this identification we can re-express the anchor of $\mathfrak{gl}(E, \ensuremath{\mathcal{F}}_B)$ as $\rho(X, \phi) = X$. The Lie bracket on the space of sections can also be re-expressed as \[[(X_1,\phi_1), (X_2, \phi_2)] = ([X_1,X_2]_{\mathfrak{X}(\ensuremath{\mathcal{F}}_B)}, [\phi_1,\phi_2]_{\mathrm{End}(E)} + \nabla^\tau_{X_1} \phi_2 - \nabla^\tau_{X_2} \phi_1 - R^{\nabla^\tau}(X_1,X_2)),\] for all $(X_1,\phi_1), (X_2,\phi_2) \in \Gamma(T\ensuremath{\mathcal{F}}_B \oplus \mathrm{End}(E))$, where \[R^{\nabla^\tau} (X_1,X_2)= \nabla^\tau_{[X_1,X_2]} - [\nabla^\tau_{X_1},\nabla^\tau_{X_2}] \in \Gamma(\mathrm{End}(E))\] denotes the curvature of $\nabla^\tau$, and \[\nabla_X^\tau \phi = \nabla_X^\tau \circ \phi - \phi \circ \nabla_X^\tau \in \Gamma(\mathrm{End}(E))\] is the induced $\ensuremath{\mathcal{F}}_B$-partial linear connection on $\mathrm{End}(E)$. Indeed, this formula is obtained by expanding the commutator of the derivations $\nabla^\tau_{X_i} + \phi_i$ and using the definition of $R^{\nabla^\tau}$. It is by now clear how to construct the Lie subalgebroid $A(\nabla^{\tau},\mathfrak{k})$ of $\mathfrak{gl}(E)$. \begin{definition} \label{definition-algebroid} As a vector bundle we take $A(\nabla^{\tau},\mathfrak{k}) = T\ensuremath{\mathcal{F}}_B \oplus \mathfrak{k}$. Its anchor map is the projection onto the first factor, and its bracket is given by the restriction of the bracket on $\mathfrak{gl}(E, \ensuremath{\mathcal{F}}_B)$, i.e., \[[(X_1,\phi_1), (X_2, \phi_2)] = ([X_1,X_2]_{\mathfrak{X}(\ensuremath{\mathcal{F}}_B)}, [\phi_1,\phi_2]_{\mathfrak{k}} + \nabla^\tau_{X_1} \phi_2 - \nabla^\tau_{X_2} \phi_1 - R^{\nabla^\tau}(X_1,X_2)),\] for all $(X_1,\phi_1), (X_2,\phi_2) \in \Gamma(T\ensuremath{\mathcal{F}}_B \oplus \mathfrak{k})$. \end{definition} \begin{proposition} \label{proposition-algebroid-puro} $A(\nabla^{\tau},\mathfrak{k})$ is a Lie subalgebroid of $\mathfrak{gl}(E)$.
\end{proposition} \begin{proof} Since $\mathfrak{gl}(E,\ensuremath{\mathcal{F}}_B)$ is a Lie subalgebroid of $\mathfrak{gl}(E)$, it suffices to show that $A(\nabla^{\tau},\mathfrak{k})$ is a Lie subalgebroid of $\mathfrak{gl}(E,\ensuremath{\mathcal{F}}_B)$. Therefore we must show that $\Gamma(A(\nabla^{\tau},\mathfrak{k}))$ is closed under the Lie bracket of $\Gamma(\mathfrak{gl}(E,\ensuremath{\mathcal{F}}_B))$. This boils down to the following facts: \begin{itemize} \item $\mathfrak{k}$ is a subbundle of Lie algebras of $\mathrm{End}(E)$: this is part of condition (d1); \item $R^{\nabla^\tau}(X_1, X_2)$ takes values in $\mathfrak{k}$ for all $X_1, X_2 \in \mathfrak{X}(\ensuremath{\mathcal{F}}_B)$: this follows from the fact that $\mathfrak{hol}^\tau_b \subset \mathfrak{k}_b$ for all $b \in B$, which is also part of condition (d1); \item $\nabla^\tau_X\phi$ belongs to $\Gamma(\mathfrak{k})$ for all $X\in \mathfrak{X}(\ensuremath{\mathcal{F}}_B)$ and $\phi \in \Gamma(\mathfrak{k})$: this is condition (d2). \end{itemize} \end{proof} \begin{remark} In the case where $E$ is a Euclidean vector bundle, it is common to consider the Lie subalgebroid $\mathfrak{o}(E)$ instead of $\mathfrak{gl}(E)$, where $\mathfrak{o}(E)$ is the Lie subalgebroid whose space of sections is the subspace of derivations of $E$ which satisfy \[X_D\langle s_1, s_2 \rangle = \langle D(s_1), s_2 \rangle + \langle s_1, D(s_2) \rangle \text{ for all } s_1, s_2 \in \Gamma(E).\] If we use the fiberwise metric on $E$ to identify $\mathrm{End}(E)$ with $E^* \otimes E^*$, then $\mathfrak{o}(E)$ is the Lie subalgebroid which fits into the exact sequence \[0 \longrightarrow E^*\wedge E^* \longrightarrow \mathfrak{o}(E) \longrightarrow TB \longrightarrow 0,\] and splittings of this sequence are in one-to-one correspondence with linear connections which are compatible with the fiberwise metric on $E$.
The Lie subalgebroid $\mathfrak{o}(E)$ is the Lie algebroid of the Lie groupoid $\mathcal{O}(E) \rightrightarrows B$ whose arrows consist of linear isometries between the fibers of $E$ (see Example \ref{ex:gl-groupoid}). When $B$ is a foliated manifold we may construct a Lie subalgebroid $\mathfrak{o}(E, \ensuremath{\mathcal{F}}_B)$ which is analogous to $\mathfrak{gl}(E, \ensuremath{\mathcal{F}}_B)$. Splittings of the corresponding short exact sequence are in one-to-one correspondence with $\ensuremath{\mathcal{F}}_B$-partial connections compatible with the fiberwise metric on $E$. Finally, we remark that the infinitesimal data associated to a SRF $\ensuremath{\mathcal{F}}$ around a closed saturated submanifold $B$ of a regular stratum satisfies a stronger version of conditions (a) through (d) from the beginning of this section. In this case $\nabla^\tau$ is compatible with the metric on $E$, and $\mathfrak{k}$ is a subbundle of Lie algebras of $E^*\wedge E^*$. It then follows that $A(\nabla^\tau, \mathfrak{k})$ is a Lie subalgebroid of $\mathfrak{o}(E)$. \end{remark} \begin{proposition} \label{proposition-algebroid-SRF} When the infinitesimal data (a)-(d) comes from a SRF $\ensuremath{\mathcal{F}}$ around a closed saturated submanifold $B$ of a regular stratum, $A(\nabla^\tau, \mathfrak{k})$ is the Lie algebroid of the Lie groupoid $\mathcal{G} = \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})/O(n) \rightrightarrows B$ constructed in Section \ref{section-lie-groupoid-structure}. \end{proposition} \begin{proof} We consider $A(\nabla^\tau, \mathfrak{k})$ as a Lie subalgebroid of $\mathfrak{o}(E)$ and follow the integration scheme for Lie subalgebroids developed in \cite{Moerdijk-Mrcun-subalgebroid} and described at the end of Section \ref{section-facts-algebroids} of this paper.
The source fibers of $\mathcal{O}(E)$ identify with the orthogonal frame bundle $O(E)$, and under this identification the right invariant foliation $\ensuremath{\mathcal{F}}^{A(\nabla^\tau, \mathfrak{k})}$ on $\mathcal{O}(E)$ is mapped to the lifted foliation $\widetilde{\ensuremath{\mathcal{F}}}$ on $O(E)$. It then follows that the Lie groupoid integrating $A(\nabla^\tau, \mathfrak{k})$ obtained by taking $\mathrm{Hol}(\ensuremath{\mathcal{F}}^{A(\nabla^\tau, \mathfrak{k})})/ \mathcal{O}(E)$ is isomorphic to $\mathcal{G} = \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})/O(n)$. \end{proof} \begin{remark} With the explicit description developed here, it is now a simple computation to check that the inclusion of $A(\nabla^\tau, \mathfrak{k})$ in $\mathfrak{gl}(E)$ is the restriction of the differential of the Lie groupoid morphism $\mathcal{G} = \mathrm{Hol}(\widetilde{\ensuremath{\mathcal{F}}})/ O(n) \to \mathrm{GL}(E)$ induced from the representation of $\mathcal{G}$ on $E$. It then follows that the Lie algebroid of the linear holonomy groupoid $\mathcal{G}^\ell \rightrightarrows E$ is the action algebroid $A(\nabla^\tau, \mathfrak{k})\ltimes E \to E$. \end{remark} \end{document}
\begin{document} \title{Towards the Heisenberg Limit in Magnetometry with Parametric Down-Converted Photons} \author{Aziz Kolkiran} \author{G. S. Agarwal} \affiliation{Department of Physics, Oklahoma State University, Stillwater, OK - 74078, USA} \date{\today} \begin{abstract} Recent theoretical and experimental papers have shown how one can achieve Heisenberg-limited measurements by using entangled photons. Here we show how the photons in the non-collinear down conversion process can be used for improving the sensitivity of magneto-optical rotation by a factor of four, which takes us towards the Heisenberg limit. Our results apply to sources with arbitrary pumping. We also present several generalizations of earlier results for the collinear geometry. The sensitivity depends on whether the two-photon or four-photon coincidence detection is used. \end{abstract} \pacs{00000} \maketitle \section{Introduction} Parametric down conversion is a process that is used to produce light possessing strong quantum features. Photon pairs generated by this process show entanglement with respect to different physical attributes such as time of arrival \cite{Friberg 1985} and states of polarization \cite{Kwiat 1995}. They are increasingly being utilized for very basic experiments to test the foundations of quantum mechanics and to do quantum information processing \cite{Mandel 1995, Zeilinger 1999,Kwiat 1995}. It is also recognized that entangled photon pairs could be useful in many practical applications in precision metrology involving e.g. interferometry \cite{Caves 1981,Yurke 1986,Dowling 1998,Holland 1993}, imaging \cite{Abouraddy 2001,Pittman 1995}, lithography \cite{Boto 2000,Agarwal 2001,Bjork 2001,Dangelo 2001} and spectroscopy \cite{Agarwal 2003}. There is a proposal \cite{Lee 2002} to use electromagnetic fields in $NOON$ states to improve the sensitivity of measurements by a factor of $N$. Some implementations of this proposal exist \cite{Walther 2004}.
In particular, the use of photon pairs in interferometers allows phases to be measured to the precision of the Heisenberg limit, where the uncertainty scales as $1/N$ \cite{Giovannetti 2004}, as compared to the shot noise limit, where it scales as $1/\sqrt{N}$. This means that for a large number of particles, a dramatic improvement in measurement resolution should be possible. In this paper we present an analysis of how parametric down converted photons could be very useful in getting better spectroscopic information about the medium. We demonstrate how the improvement in magneto-optical rotation (MOR) of light could be realized by employing two different schemes with collinear and non-collinear down conversion geometry, as compared to the use of coherent light. We calculate the resolution that can be achieved in the MOR both by the use of coherent light and by the use of down converted light. We argue that the Heisenberg limit \cite{Ou 1997} could be reached in magnetometry by the use of down converted light. \section{MOR using coherent light source} Consider single-mode coherent light travelling in the z-direction and a linear isotropic medium made anisotropic by the application of the magnetic field $\mathbf{B}$ in the z-direction. The incident field can be written in the form \begin{equation} {\mathbf E}(z,t)=\exp(-i\omega t+ikz)(\hat{x}\varepsilon_x+\hat{y}\varepsilon_y)+c.c. \end{equation} The medium is described by the frequency and magnetic field dependent susceptibilities $\chi_{\!\pm}(\omega)$. This means that the horizontally and vertically polarized components of the incident light rotate upon travelling through the medium of length $l$, and the field at the exit can be written as \begin{equation} {\bf E}(l,t)=\exp(-i\omega t+ikl)(\hat{x}\varepsilon_{xl}+\hat{y}\varepsilon_{yl})+c.c.
\end{equation} The rotation of the horizontal and vertical components can be expressed by the relations \begin{equation} \left(\begin{array}{c}\varepsilon_{xl}\\\varepsilon_{yl}\end{array}\right)=R \left(\begin{array}{c}\varepsilon_{x}\\\varepsilon_{y}\end{array}\right) \end{equation} where \begin{eqnarray}\label{rotation} R &=& e^{i\theta_{\!+}} e^{i\frac{\theta}{2}}\left(\begin{array}{cc}\cos\frac{\theta}{2} & -\sin\frac{\theta}{2} \\ \sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{array}\right), \\ \theta &=& kl(\chi_{\!+} - \chi_{\!-}),\label{mor}\\ \theta_+\!\!\!&=& kl\chi_{\!+}. \end{eqnarray} The corresponding quantum mechanical description can be obtained by replacing the classical amplitudes $\varepsilon_x$ and $\varepsilon_y$ by the annihilation operators $a_x$ and $a_y$ respectively. For measurements with coherent sources one can look at the intensities of the $x$ and $y$ components of the output when the input is $x$ polarized with coherent state amplitude $\alpha_x$ (see Fig. \ref{setup}(a)). Then the measured quantities will be \begin{eqnarray} I_{xl}&=&\langle {a_{xl}}^{\dag}a_{xl}\rangle=|\alpha_x|^2 \cos^2\frac{\theta}{2},\label{coherent1}\\ I_{yl}&=&\langle {a_{yl}}^{\dag}a_{yl}\rangle=|\alpha_x|^2 \sin^2\frac{\theta}{2}.\label{coherent2} \end{eqnarray} One can estimate the minimum detectable rotation angle $\theta_m$ by looking at the fluctuations $\Delta N_d$ in the photon number difference between horizontal and vertical photons, where the number difference operator is given as $N_d={a_{yl}}^{\dag}a_{yl}-{a_{xl}}^{\dag}a_{xl}$. This expression is calculated to be $(\Delta N_d)^2=|\alpha_x|^2\sin^2\theta$, and setting $\Delta N_d\sim 1$ we obtain $\theta_m\approx 1/\sqrt{\langle N\rangle}$, where $\langle N\rangle$ is the mean number of input photons, which is equal to $|\alpha_x|^2$.
\begin{figure} \caption{\label{setup} Schematic of the measurement schemes: (a) coherent light, (b) collinear type-II PDC, and (c) non-collinear type-II PDC.} \end{figure} \section{MOR using collinear type-II PDC and two-photon coincidence} We now discuss how the results (\ref{coherent1}) and (\ref{coherent2}) are modified if we work with down-converted photons. We first consider the collinear case shown in Fig. \ref{setup}(b). The state produced in collinear PDC can be written as \begin{equation} |\psi_{col}\rangle=\frac{1}{\cosh r}\sum_{n=0}^{\infty}(-e^{i\phi}\tanh r)^n |n\rangle_H |n\rangle_V. \end{equation} The parameter $r$ and the phase $\phi$ are determined by the pump amplitude and by the coupling constant between the electromagnetic field and the nonlinear crystal used in the down conversion process. Note that the state $|\psi_{col}\rangle$ is a superposition of $n$ photon pairs of horizontally and vertically polarized modes. Inside the medium, these modes rotate with the same rotation matrix $R$ given in Eq. (\ref{rotation}): \begin{equation} \left(\begin{array}{c}a_{Hl}\\a_{Vl}\end{array}\right)=R \left(\begin{array}{c}a_H\\a_V\end{array}\right). \end{equation} One can measure the intensity of each mode: \begin{eqnarray} I_H&\equiv&\langle {a_{Hl}}^{\dag}a_{Hl}\rangle=\sinh^2r\nonumber\\ &=&\langle {a_{Vl}}^{\dag}a_{Vl}\rangle\equiv I_V. \end{eqnarray} The two-photon coincidence count is: \begin{eqnarray}\label{co-PDC} I_{HV}&\equiv &\langle {a_{Hl}}^{\dag}{a_{Vl}}^{\dag}a_{Hl}a_{Vl}\rangle\nonumber\\ &=&\cos^2\theta\sinh^2r\cosh^2r+\sinh^4r \end{eqnarray} Note the difference between Eqs. (\ref{coherent1}) and (\ref{co-PDC}). With collinearly down-converted photons we measure a rotation angle that is twice as large compared with the angle for a coherent input. For $r\ll 1$ we obtain the same result as given in \cite{Agarwal 2003}. The fringe pattern and the visibility are shown in Figs. \ref{collinear-twophoton} and \ref{visibility}.
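Eq. (\ref{co-PDC}) is easy to cross-check numerically on a truncated two-mode squeezed state. Since $a_{Hl}$ and $a_{Vl}$ are combinations of annihilation operators only, the coincidence count equals $\|a_{Hl}a_{Vl}|\psi_{col}\rangle\|^2$. A sketch in plain NumPy, with illustrative values of $r$ and $\theta$ and with $\phi=0$:

```python
import numpy as np

d = 25                                        # tanh^2 r ~ 0.44 decays fast, so d = 25 suffices
a = np.diag(np.sqrt(np.arange(1, d)), k=1)
Id = np.eye(d)
aH, aV = np.kron(a, Id), np.kron(Id, a)

r, theta = 0.8, 0.6
psi = np.zeros(d * d)                         # |psi_col> truncated, phi = 0
for n in range(d):
    psi[n * d + n] = (-np.tanh(r))**n / np.cosh(r)

c, s = np.cos(theta / 2), np.sin(theta / 2)
aHl, aVl = c * aH - s * aV, s * aH + c * aV   # rotated by R; global phases cancel here
I_HV = np.linalg.norm(aHl @ aVl @ psi)**2     # <aHl† aVl† aHl aVl>
formula = np.cos(theta)**2 * np.sinh(r)**2 * np.cosh(r)**2 + np.sinh(r)**4
```

At $\theta=0$ this reduces to $\langle n_Hn_V\rangle=\sinh^2r+2\sinh^4r$, the second moment of the pair distribution.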
One can calculate the minimum detectable rotation angle again by looking at the fluctuations in the photon number difference $N_d$. This is given by $(\Delta N_d)^2=4\sinh^2r\cosh^2r\sin^2\theta=(2+\langle N\rangle)\langle N\rangle\sin^2\theta\approx{\langle N\rangle}^2\sin^2\theta$ for large $\langle N\rangle$, where $\langle N\rangle=2\sinh^2r$. Making $(\Delta N_d)\sim 1$ \cite{Ou 1997} we get $\theta_m\approx 1/\langle N\rangle$. Note that the sensitivity is thus improved by a factor of $\sqrt{\langle N\rangle}$ over the coherent case. \begin{figure} \caption{\label{collinear-twophoton} Two-photon coincidence fringes for collinear type-II PDC.} \end{figure} \begin{figure} \caption{\label{visibility} Visibility of the two-photon and four-photon coincidence fringes.} \end{figure} \section{MOR using non-collinear type-II PDC and four-photon coincidence} Next, we discuss the non-collinear PDC case. We have found an arrangement, shown in Fig. \ref{setup}(c), which is especially attractive for improving sensitivity. The entangled photons arrive in two different spatial modes, $a$ and $b$. While one mode (say $a$) propagates parallel to $\mathbf{B}$ inside the medium, the other propagates anti-parallel to it. At the exit we separate the $H$ and $V$ modes by polarizing beam splitters. The state of the input photons can be written in the form \cite{Kok 2000} \begin{equation} |\psi_{non}\rangle=\frac{1}{\cosh^2r}\sum_{n=0}^{\infty}\sqrt{n+1}(\tanh r)^n|\psi_n\rangle, \end{equation} where \begin{equation}\label{state2b} |\psi_n\rangle=\frac{1}{\sqrt{n+1}} \sum_{m=0}^n (-1)^m |n-m\rangle_{a_H}|m\rangle_{a_V}|m\rangle_{b_H}|n-m\rangle_{b_V}. \end{equation} Here $|m\rangle_{a_V}$ represents $m$ vertically polarized photons in mode $a$. Inside the medium, the $``+"$ and $``-"$ polarization components of the modes $a$ and $b$ gain phases $kl\chi_{\!+}$ and $kl\chi_{\!-}$ respectively.
Thus we can write an effective Hamiltonian for the evolution of the state $|\psi_{non}\rangle$ inside the medium as follows: \begin{equation}\label{medium} H_{med}=\chi_{\!+}{a_{+}}^{\dag}a_{+}+ \chi_{\!-}{a_{-}}^{\dag}a_{-}- \chi_{\!+}{b_{+}}^{\dag}b_{+}- \chi_{\!-}{b_{-}}^{\dag}b_{-}, \end{equation} where \begin{equation} a_{\pm}=\frac{1}{\sqrt{2}}(a_H\pm ia_V),\quad b_{\pm}=\frac{1}{\sqrt{2}}(b_H\pm ib_V). \end{equation} \begin{figure} \caption{\label{noncollinear-4photon} Four-photon counting probability for non-collinear PDC: (a) fringe pattern with respect to $\theta$; (b) probability distribution with respect to $r$.} \end{figure} \begin{figure} \caption{\label{collinear-4photon_a} Four-photon probability for collinear PDC: (a) normalized fringe pattern with respect to $\theta$; (b) envelope of the probability with respect to $r$.} \end{figure} The minus sign in front of the $b_{\pm}$ modes comes from the fact that they are travelling anti-parallel to the $\mathbf B$ field inside the medium. Then one can calculate the probability of detecting four photons in each mode as: \begin{eqnarray}\label{4photon1} P_{non}&=&|\langle 1_{a_H}1_{a_V}1_{b_H}1_{b_V}|\exp(-itH_{med})|\psi_{non}\rangle|^2\nonumber\\ &=&\frac{\tanh^4r}{\cosh^4r}\cos^2(2\theta), \end{eqnarray} where $t$ is the duration for the state to evolve inside the medium. Note that this four-photon probability contains a rotation angle that is four times as large as the angle for a coherent input. The fringe pattern with respect to $\theta$ and the probability distribution with respect to $r$ are shown in Fig. \ref{noncollinear-4photon} (a) and (b). Next we also examine the four-photon probability in the collinear case. The probability of finding two $H$-photons and two $V$-photons at the exit ports of the polarizing beam splitter is given by: \begin{eqnarray}\label{4photon2} P_{col}&=&|\langle2_{a_H}2_{a_V}|\exp(-itH_{med})|\psi_{col}\rangle|^2\nonumber\\ &=&\frac{\tanh^4r}{\cosh^2r}\frac{1}{16}[1+3\cos(2\theta)]^2, \end{eqnarray} where we take $H_{med}=\chi_{\!+}{a_{+}}^{\dag}a_{+}+ \chi_{\!-}{a_{-}}^{\dag}a_{-}$ because of the collinear geometry.
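Eq. (\ref{4photon1}) can be cross-checked by brute-force numerics in a truncated Fock space: build $H_{med}$ of Eq. (\ref{medium}) from the circular modes, exponentiate, and project the evolved $|\psi_{non}\rangle$ onto $|1_{a_H}1_{a_V}1_{b_H}1_{b_V}\rangle$. In the sketch below (plain NumPy; $\theta$, $r$, and the truncation are illustrative choices), truncating $|\psi_{non}\rangle$ at $n=2$ is exact for this amplitude because the evolution conserves the photon number of each spatial pair.

```python
import numpy as np

d = 3                                         # 0..2 photons per mode suffices here
a = np.diag(np.sqrt(np.arange(1, d)), k=1)
Id = np.eye(d)

def embed(m, k):                              # operator m on mode k of (aH, aV, bH, bV)
    out = np.array([[1.0 + 0j]])
    for j in range(4):
        out = np.kron(out, m if j == k else Id)
    return out

aH, aV, bH, bV = (embed(a, k) for k in range(4))
theta, r = 0.37, 0.8
chi_p, chi_m = theta, 0.0                     # so that (chi_+ - chi_-) t = theta with t = 1

def h_pair(x, y, sign):                       # chi_+ n_+ + chi_- n_- for modes (x ± i y)/sqrt(2)
    xp, xm = (x + 1j * y) / np.sqrt(2), (x - 1j * y) / np.sqrt(2)
    return sign * (chi_p * xp.conj().T @ xp + chi_m * xm.conj().T @ xm)

H = h_pair(aH, aV, +1) + h_pair(bH, bV, -1)   # Eq. (medium): the b pair runs against B
w, V = np.linalg.eigh(H)                      # H is Hermitian: exact unitary via eigh
U = V @ np.diag(np.exp(-1j * w)) @ V.conj().T

def fock(n):                                  # basis vector |n_aH n_aV n_bH n_bV>
    v = np.zeros(d**4, dtype=complex)
    v[((n[0] * d + n[1]) * d + n[2]) * d + n[3]] = 1.0
    return v

psi = sum(np.tanh(r)**n / np.cosh(r)**2 * (-1)**m * fock((n - m, m, m, n - m))
          for n in range(3) for m in range(n + 1))

P_non = abs(fock((1, 1, 1, 1)).conj() @ U @ psi)**2
target = np.tanh(r)**4 / np.cosh(r)**4 * np.cos(2 * theta)**2
```

The opposite sign of the $b$-pair Hamiltonian is essential: with equal signs the $\cos^2(2\theta)$ fringes disappear.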
The normalized plot of $P_{col}$ with respect to the magneto-optical rotation angle $\theta$ and the envelope of the probability with respect to $r$ are shown in Fig. \ref{collinear-4photon_a} (a) and (b). On the other hand, one can also calculate the coincidence counts of four photons, two at each detector, as given by Glauber's higher-order correlation functions: \begin{widetext} \begin{eqnarray} I_{HHVV}&=&\langle {a_{Hl}}^{\dag 2}{a_{Vl}}^{\dag 2}{a_{Hl}}^2{a_{Vl}}^2\rangle\nonumber\\ &=&(3\cos^2\theta-1)^2\sinh^4r\cosh^4r +4(3\cos^2\theta+1)\sinh^6r\cosh^2r+4\sinh^8r.\label{4photon in detector} \end{eqnarray} \end{widetext} The plot of this quantity for different values of the interaction parameter $r$ and the visibility are shown in Figs. \ref{collinear-4photon} and \ref{visibility}. \begin{figure} \caption{\label{collinear-4photon} Four-photon coincidence counts $I_{HHVV}$ for different values of the interaction parameter $r$.} \end{figure} Note the distinction between Eqs. (\ref{4photon2}) and (\ref{4photon in detector}), which is a reflection of what the detector is set to measure, as we explain now. The former is the probability of the state $|\psi_{col}(t)\rangle$ to be projected onto the particular four-photon subspace $|22\rangle$, i.e. $Tr[|22\rangle\langle22|\rho_{col}(t)]$, where $\rho_{col}(t)=U|\psi_{col}\rangle\langle\psi_{col}|U^{\dag}$ and $U$ is the unitary operator that represents the evolution of the state by the Hamiltonian $H_{med}$ in the collinear geometry. On the other hand, coincidence counting of four photons at the detectors $D_H$ and $D_V$ (see Fig. \ref{setup}(b)) is represented by the expectation value $\langle{a_{H}}^{\dag 2}{a_{V}}^{\dag 2}{a_{H}}^2{a_{V}}^2\rangle= Tr[{a_{H}}^{\dag 2}{a_{V}}^{\dag 2}{a_{H}}^2{a_{V}}^2\rho_{col}(t)]$.
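Eq. (\ref{4photon in detector}) can likewise be checked numerically: since $a_{Hl}$ and $a_{Vl}$ contain annihilation operators only, $I_{HHVV}=\|a_{Hl}^2a_{Vl}^2|\psi_{col}\rangle\|^2$ on a truncated $|\psi_{col}\rangle$. A sketch with illustrative parameters (not taken from the text); the larger truncation is needed because the fourth moments weigh the tail of the pair distribution:

```python
import numpy as np

d = 40                                        # fourth moments need a deeper truncation
a = np.diag(np.sqrt(np.arange(1, d)), k=1)
Id = np.eye(d)
aH, aV = np.kron(a, Id), np.kron(Id, a)

r, theta = 0.8, 0.6
psi = np.zeros(d * d)                         # |psi_col> truncated, phi = 0
for n in range(d):
    psi[n * d + n] = (-np.tanh(r))**n / np.cosh(r)

c, s = np.cos(theta / 2), np.sin(theta / 2)
aHl, aVl = c * aH - s * aV, s * aH + c * aV
A = aHl @ aHl @ aVl @ aVl                     # aHl^2 aVl^2 (the operators commute)
I_HHVV = np.linalg.norm(A @ psi)**2           # <aHl†2 aVl†2 aHl^2 aVl^2>
S, C = np.sinh(r)**2, np.cosh(r)**2
f = ((3 * np.cos(theta)**2 - 1)**2 * S**2 * C**2
     + 4 * (3 * np.cos(theta)**2 + 1) * S**3 * C + 4 * S**4)
```

At $\theta=0$ both sides reduce to $\langle n^2(n-1)^2\rangle = 24\sinh^8r+24\sinh^6r+4\sinh^4r$ over the pair distribution.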
Note here that the operator ${a_{H}}^{\dag 2}{a_{V}}^{\dag 2}{a_{H}}^2{a_{V}}^2$ has the spectral decomposition $\sum_{n,m=0}^{\infty}C_{nm}|nm\rangle\langle nm|$ and obviously it contains the projectors of all $(n+m)$-photon subspaces with nonzero coefficients $C_{nm}$. Therefore the four-photon counting process at the detectors includes not only $|22\rangle$ but all other states $|nm\rangle$ in $|\psi_{col}(t)\rangle$. Here the state $|nm\rangle$ represents $n$ and $m$ photons in the $a_H$ and $a_V$ modes respectively. \section{Conclusion} We showed that the use of non-collinear type-II PDC light in MOR measurements increases the sensitivity by a factor of four in comparison to coherent light. We also argued that the minimum rotation uncertainty approaches the Heisenberg limit by the use of down-converted photons. It should be noted that the Heisenberg limit should be understood as an approximate limit at a large mean photon number, that is, the rotation uncertainty approaches the order of $1/\langle N\rangle$ for large $\langle N\rangle$ \cite{Holland 1993}. The regime with an interaction parameter value of $r=1.3$ has already been reached in experiment \cite{Eisenberg 2004}, demonstrating entanglement of $12$ photons, and evidence was also given for entanglement of up to 100 photons. \appendix \section{Four-photon probability} In this appendix we show the details of the calculation leading to the result given in Eq. (\ref{4photon1}). One can obtain the result by solving the Schr\"odinger equation for the state $|1_{a_H}1_{a_V}1_{b_H}1_{b_V}\rangle$ in the four-photon subspace of the electromagnetic field and then taking the inner product with the state $|\psi_{non}\rangle$. Since the parts of the Hamiltonian involving the $a$ and $b$ modes commute, we can solve the Schr\"odinger equation for the states $|1_{a_H}1_{a_V}\rangle$ and $|1_{b_H}1_{b_V}\rangle$ separately.
Let us start with a general time-dependent state in the $a_H$ and $a_V$ modes containing two photons in total; \begin{equation} |\phi(t)\rangle=c(t)|20\rangle+d(t)|02\rangle+f(t)|11\rangle \end{equation} with the initial condition $|\phi(0)\rangle=|11\rangle$. Solving the Schr\"odinger equation by using the effective Hamiltonian $H=\chi_{\!+}{a_{+}}^{\dag}a_{+}+ \chi_{\!-}{a_{-}}^{\dag}a_{-}$ gives us the result \begin{eqnarray} |\phi(t)\rangle&=&e^{-it\chi}[\frac{1}{\sqrt{2}}\sin(\Omega t)|20\rangle-\frac{1}{\sqrt{2}}\sin(\Omega t)|02\rangle \nonumber \\ &+& \cos(\Omega t)|11\rangle] \end{eqnarray} where $\chi=\chi_{\!+} + \chi_{\!-}$ and $\Omega=\chi_{\!+} - \chi_{\!-}$. For a medium of length $l$, the angle $\Omega t$ corresponds to the MOR angle $\theta$ which is given in Eq. (\ref{mor}). The solution for the state $|1_{b_H}1_{b_V}\rangle$ can be obtained just by replacing $\theta$ by $-\theta$ because the direction of propagation of the $b$ modes is opposite to that of the $a$ modes inside the medium. This is the reason that the part of the effective Hamiltonian for the $b_{\pm}$ modes takes a minus sign in Eq. (\ref{medium}). Consequently we can write the solution of the Schr\"odinger equation for the state $|1_{a_H}1_{a_V}1_{b_H}1_{b_V}\rangle$ as: \begin{widetext} \begin{eqnarray} \exp(-itH_{med})|1_{a_H}1_{a_V}1_{b_H}1_{b_V}\rangle &=& \exp(-it\chi)\left[\frac{1}{\sqrt{2}}\sin\theta|20\rangle- \frac{1}{\sqrt{2}}\sin\theta|02\rangle+\cos\theta|11\rangle \right]\nonumber\\ &\otimes& \exp(-it\chi)\left[-\frac{1}{\sqrt{2}}\sin\theta|20\rangle+ \frac{1}{\sqrt{2}}\sin\theta|02\rangle+\cos\theta|11\rangle \right]. \end{eqnarray} \end{widetext} Taking the inner product of this with the state $|\psi_{non}\rangle$ and taking the absolute square gives us the result given in Eq. (\ref{4photon1}). The result given in Eq. (\ref{4photon2}) can be obtained by following the same method given above. \end{document}
\begin{document} \begin{frontmatter} \title{Hamiltonian and reversible systems \\ with smooth families of invariant tori} \author{Mikhail B. Sevryuk} \ead{[email protected], [email protected]} \address{V.L.~Tal'rose Institute for Energy Problems of Chemical Physics, N.N.~Sem\"enov Federal Research Center of Chemical Physics, Russian Academy of Sciences, 38 Leninski\u{\i} Prospect, Bld.~2, Moscow 119334, Russia} \begin{abstract} For various values of $n$, $d$, and the phase space dimension, we construct simple examples of Hamiltonian and reversible systems possessing smooth $d$-parameter families of invariant $n$-tori carrying conditionally periodic motions. In the Hamiltonian case, these tori can be isotropic, coisotropic, or atropic (neither isotropic nor coisotropic). The cases of non-compact and compact phase spaces are considered. In particular, for any $N\geq 3$ and any vector $\omega\in\mR^N$, we present an example of an analytic Hamiltonian system with $N$ degrees of freedom and with an isolated (and even unique) invariant $N$-torus carrying conditionally periodic motions with frequency vector $\omega$ (but this torus is atropic rather than Lagrangian and the symplectic form is not exact). Examples of isolated atropic invariant tori carrying conditionally periodic motions are given in the paper for the first time. The paper can also be used as an introduction to the problem of the isolatedness of invariant tori in Hamiltonian and reversible systems. 
\end{abstract} \begin{keyword} Hamiltonian systems; Reversible systems; Kronecker torus; Isolatedness; Uniqueness; Families of tori; Lagrangian torus; Isotropic torus; Coisotropic torus; Atropic torus; Symmetric torus; KAM theory \MSC[2020] 70K43 \sep 70H12 \sep 70H33 \sep 70H08 \end{keyword} \end{frontmatter} \section{Introduction and overview}\label{introduction} \subsection{Kronecker tori}\label{Kronecker} Finite-dimensional invariant tori carrying conditionally periodic motions are among the key elements of the structure of smooth dynamical systems with continuous time. The importance and ubiquity of such tori stem, in the long run, from the fact that any finite-dimensional connected compact Abelian Lie group is a torus \cite{A1969,DK2000,S2007}. By definition, given a certain flow on a certain manifold, an invariant $n$-torus carrying \emph{conditionally periodic} motions ($n$ being a non-negative integer) is an invariant submanifold $\cT$ diffeomorphic to the standard $n$-torus $\mT^n=(\mR/2\pi\mZ)^n$ and such that the induced dynamics on $\cT$ in a suitable angular coordinate $\varp\in\mT^n$ has the form $\dot{\varp}=\omega$ where $\omega\in\mR^n$ is a constant vector (called the \emph{frequency vector}). Flows on $\mT^n$ afforded by equations $\dot{\varp}\equiv\omega$ are also said to be linear, parallel, rotational, translational, or Kronecker, and invariant tori carrying conditionally periodic motions are therefore sometimes called \emph{Kronecker tori} \cite{KP2003,MP643,P707,S415,T2012}. A Kronecker flow $g^t$ on $\mT^n$ with any frequency vector $\omega\in\mR^n$ possesses the \emph{uniform recurrence property}: for any $T>0$ and $\vare>0$ there exists $\Theta\geq T$ such that for any $\varp\in\mT^n$ the distance between $\varp$ and $g^\Theta(\varp) = \varp+\Theta\omega$ (e.g., with respect to some fixed Riemannian metric) is smaller than $\vare$. Recall the almost obvious proof of this fact.
First of all, there is $\delta>0$ such that the distance between $\varp$ and $\varp+\Delta$ is smaller than $\vare$ whenever $\varp\in\mT^n$ and $\Delta\in\mR^n$, $|\Delta|<\delta$ (here and henceforth, given $c\in\mR^n$, the symbols $|c|$ denote the $\Fl_1$-norm $|c_1|+\cdots+|c_n|$ of $c$). Second, there are positive integers $\Fm_2>\Fm_1$ and a vector $\Delta\in\mR^n$, $|\Delta|<\delta$ such that $\Fm_2T\omega = \Fm_1T\omega+\Delta$. Finally, it is sufficient to set $\Theta=(\Fm_2-\Fm_1)T$, because $\varp+(\Fm_2-\Fm_1)T\omega = \varp+\Delta$ for any $\varp\in\mT^n$. The frequency vector $\omega=(\omega_1,\ldots,\omega_n) \in \mR^n$ of a Kronecker $n$-torus and the torus itself are said to be \emph{non-resonant} if the frequencies $\omega_1,\ldots,\omega_n$ are incommensurable (linearly independent over rationals) and are said to be \emph{resonant} otherwise. Conditionally periodic motions with non-resonant frequency vectors are usually called \emph{quasi-periodic} motions. Each phase curve on a non-resonant Kronecker torus fills it up densely. If the frequencies $\omega_1,\ldots,\omega_n$ of a resonant Kronecker $n$-torus $\cT$ satisfy $r$ independent resonance relations $\bigl\langle j^{(\iota)},\omega \bigr\rangle=0$, $j^{(\iota)}\in\mZ^n\setminus\{0\}$, $1\leq\iota\leq r$, $1\leq r\leq n$ (here and henceforth, the angle brackets $\langle{\cdot},{\cdot}\rangle$ denote the standard inner product), then $\cT$ is foliated by non-resonant Kronecker $(n-r)$-tori, the frequency vector of all these tori being the same. The occurrence of a resonant Kronecker torus in a certain dynamical system usually indicates some degeneracy (for instance, the presence of many first integrals). Normally, dynamical systems exhibit (smooth or Cantor-like) families of non-resonant Kronecker tori, the dimension of all the tori in a given family being the same.
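The recurrence property proved above can also be observed numerically: one scans the multiples of $T$ until the orbit point of the Kronecker flow returns $\vare$-close (in the $\Fl_1$-distance used above) to its starting point; the pigeonhole argument guarantees that the scan terminates. A minimal Python sketch, with an arbitrary frequency vector and tolerance:

```python
import numpy as np

def recurrence_time(omega, T, eps, max_mult=200000):
    """Smallest Theta = m*T (m >= 1) whose l1-distance from Theta*omega to the
    lattice (2*pi*Z)^n is below eps; termination is guaranteed by pigeonhole
    once the search bound exceeds the number of eps/2-cells covering the torus."""
    omega = np.asarray(omega, dtype=float)
    for m in range(1, max_mult + 1):
        # per-coordinate distance of m*T*omega to the nearest multiple of 2*pi
        dist = np.abs((m * T * omega + np.pi) % (2.0 * np.pi) - np.pi)
        if dist.sum() < eps:
            return m * T
    raise RuntimeError("search bound too small")

Theta = recurrence_time([1.0, np.sqrt(2.0)], T=1.0, eps=0.1)
```

Shrinking $\vare$ lengthens the search roughly like $\vare^{-n}$, in line with the pigeonhole count.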
Moreover, the frequency vectors of these tori are not merely non-resonant but \emph{strongly non-resonant} (for instance, Diophantine), i.e., badly approximable by resonant vectors. Recall that a vector $\omega\in\mR^n$ is said to be \emph{Diophantine} if there exist constants $\tau\geq n-1$ and $\gamma>0$ such that $\bigl| \langle j,\omega\rangle \bigr| \geq \gamma|j|^{-\tau}$ for any $j\in\mZ^n\setminus\{0\}$ (vectors that are not Diophantine are said to be \emph{Liouville}). Families of Kronecker tori with strongly incommensurable frequencies are the subject of the \emph{KAM} (Kolmogorov--Arnold--Moser) theory. The reader is referred to e.g.\ the monographs \cite{BHS1996,BHTB1990,KP2003}, \S\S~6.2--6.4 of the monograph \cite{AKN2006}, and the survey or tutorial papers \cite{BS249,dlL175,S1113,S137,S603} for the main ideas, methods, and results of the (mainly finite-dimensional) KAM theory and the bibliography, as well as for a precise definition of a Cantor-like family of Kronecker tori. The book \cite{D2014} presents a brilliant semi-popular introduction to the KAM theory. The ``core'' of the theory, namely, families of Kronecker $N$-tori in Hamiltonian systems with $N$ degrees of freedom, is treated in detail in e.g.\ the articles \cite{B42,B21,EFK1733,MP643,P707,S351,T12851}. The papers dealing with various special aspects of the KAM theory are exemplified by \cite{BHN355,BH191,F1521,H79,H989,H49,K259,P380,QS757,S435,S599,S415}. Some open problems in the theory are listed and discussed in the works \cite{FK1905,H797,St177,S6215}. Typical finite-dimensional autonomous dissipative systems (with no special structure on the phase space the system is assumed to preserve) possess equilibria (Kronecker $0$-tori) and closed trajectories (Kronecker $1$-tori with a nonzero frequency). 
Typical smooth families of dissipative systems depending on $\Fr\geq 1$ external parameters $\mu_1,\ldots,\mu_{\Fr}$ also exhibit Cantor-like $\Fr$-parameter families of strongly non-resonant Kronecker $n$-tori in the product of the phase space and the parameter space $\{\mu\}$, the dimension $n$ ranging between $2$ and the phase space dimension \cite{BHN355,BHS1996,BHTB1990}. Here and henceforth, the word \emph{``typical''} means that the systems (or the families of systems) with the properties indicated constitute an open set (to be more precise, a set with non-empty interior) in the appropriate functional space. On the other hand, finite-dimensional autonomous Hamiltonian and reversible systems typically admit many Cantor-like families of strongly non-resonant Kronecker $n$-tori, and these families (with different dimensions $n$) constitute complicated hierarchical structures. \subsection{Review and the main result:\ Hamiltonian systems}\label{gamiltonovy} In this section and henceforth, we will employ the following useful notation. Given a non-negative integer $a$, the combination of symbols $\mR^a_w$ will denote the Euclidean space $\mR^a$ with coordinates $(w_1,\ldots,w_a)$, and the combination of symbols $\mT^a_\varp$ will denote the torus $\mT^a$ with angular coordinates $(\varp_1,\ldots,\varp_a)$. The properties of Kronecker tori in Hamiltonian systems very much depend on the ``relations'' of the torus in question with the symplectic $2$-form. 
Recall that a submanifold $\cL$ of a $2N$-dimensional symplectic manifold is said to be \emph{isotropic} if the tangent space $T_\Lambda\cL$ to $\cL$ at any point $\Lambda\in\cL$ is contained in its skew-orthogonal complement: $T_\Lambda\cL\subset(T_\Lambda\cL)^\bot$ (in other words, if the restriction of the symplectic form to $\cL$ vanishes), and is said to be \emph{coisotropic} if the tangent space $T_\Lambda\cL$ to $\cL$ at any point $\Lambda\in\cL$ contains its skew-orthogonal complement: $(T_\Lambda\cL)^\bot\subset T_\Lambda\cL$. If $\cL$ is isotropic then $\dim\cL\leq N$, and if $\cL$ is coisotropic then $\dim\cL\geq N$. A submanifold $\cL$ that is both isotropic and coisotropic is said to be \emph{Lagrangian}, in which case $\dim\cL=N$. In the sequel, it will be convenient to call isotropic submanifolds $\cL$ with $\dim\cL<N$ \emph{strictly isotropic} and to call coisotropic submanifolds $\cL$ with $\dim\cL>N$ \emph{strictly coisotropic}. In other words, strictly isotropic submanifolds are isotropic submanifolds that are not Lagrangian, and strictly coisotropic submanifolds are coisotropic submanifolds that are not Lagrangian. \begin{rem}\label{strictly} In the literature, the terms ``strictly isotropic'' and ``strictly coisotropic'' are also used with a different meaning, see e.g.\ \cite{B2005,BZ365}. In the $h$-principle theory, one speaks of subcritical isotropic immersions and embeddings in symplectic and contact manifolds where the meaning of the words ``subcritical isotropic'' is close to ``non-Lagrangian isotropic'' (see e.g.\ the tutorial \cite{EM2002}). \end{rem} It is clear that if $\dim\cL$ is equal to $0$ or $1$ then $\cL$ is necessarily isotropic, and if $\codim\cL$ is equal to $0$ or $1$ then $\cL$ is necessarily coisotropic. Now suppose that $\cL$ is invariant under a Hamiltonian flow with Hamilton function $H$, $H|_{\cL}$ is a constant, and almost all the points of $\cL$ are not equilibria. 
Then $\cL$ is isotropic if $\dim\cL=2$ and is coisotropic if $\codim\cL=2$. Recall the simple proof of this fact. Denote the symplectic form by $\Omega$. Let a point $\Lambda\in\cL$ be not an equilibrium, and let $X\in T_\Lambda\cL\setminus\{0\}$ be the vector of the Hamiltonian vector field in question at $\Lambda$. Let $\dim\cL=2$. For any vector $Y\in T_\Lambda\cL$ one has $\Omega(Y,X) = dH(Y) = 0$ since $H$ is a constant on $\cL$. Consequently, $\cL$ is isotropic. On the other hand, let $\codim\cL=2$, and let $\fH\supset\cL$ be the level hypersurface of $H$ containing $\cL$. The space of the tangent vectors to the ambient symplectic manifold at $\Lambda$ that are skew-orthogonal to $X$ is just $T_\Lambda\fH \supset T_\Lambda\cL$, in particular, $X\in(T_\Lambda\cL)^\bot$. Let $X,Y$ be a basis of $(T_\Lambda\cL)^\bot$. Then $Y\in T_\Lambda\fH$ since $X\in T_\Lambda\cL$ and therefore $\Omega(Y,X)=0$. If $Y\in T_\Lambda\fH\setminus T_\Lambda\cL$, then $Y$ would be skew-orthogonal to the whole space $T_\Lambda\fH$ because $\Omega(Y,Y)=0$ and $\dim T_\Lambda\fH-\dim T_\Lambda\cL = 1$, so that $\dim(T_\Lambda\fH)^\bot \geq 2$ in this hypothetical case. Thus, $Y\in T_\Lambda\cL$. Consequently, $\cL$ is coisotropic. One of the key facts in the Hamiltonian KAM theory is the \emph{Herman lemma} which states that any non-resonant Kronecker torus of a Hamiltonian system is isotropic provided that the symplectic form is exact (see e.g.\ \cite{BHS1996,F1521,S1113} for a proof and \cite{BS249,S137} for a discussion; these works also contain references to the original papers by M.R.~Herman). A Hamiltonian system on a symplectic manifold with a non-exact symplectic form may admit strictly coisotropic non-resonant Kronecker tori as well as non-resonant Kronecker tori that are neither isotropic nor coisotropic. Tori of the latter type are said to be \emph{atropic} \cite{BS249,S1113,St177}. 
According to what was explained in the previous paragraph, the dimension of an atropic non-resonant Kronecker torus always lies between $3$ and $2N-3$ where $N$ is the number of degrees of freedom. Now the main ``informal'' conclusion of the Hamiltonian KAM theory can be stated as follows. Typical Hamiltonian systems with $N\geq 1$ degrees of freedom admit $n$-parameter families of isotropic strongly non-resonant Kronecker $n$-tori for each $0\leq n\leq N$. These families are smooth for $n=0$ and $1$ and are Cantor-like for $n\geq 2$. If $N\geq 2$ and the symplectic form is not exact (and meets certain Diophantine-like conditions), typical Hamiltonian systems also possess $(2N-n)$-parameter families of strictly coisotropic strongly non-resonant Kronecker $n$-tori for each $N+1\leq n\leq 2N-1$ (see the works \cite{BHS1996,BS249,H989,H49,P380,S1113,St177} and references therein). These families are smooth for $n=2N-1$ and are Cantor-like for $n\leq 2N-2$. Finally, if $N\geq 3$ and the symplectic form is not exact (and meets certain Diophantine-like conditions), typical Hamiltonian systems also exhibit Cantor-like $\kappa$-parameter families of atropic strongly non-resonant Kronecker $n$-tori for any $3\leq n\leq 2N-3$ and $1\leq\kappa\leq\min(n-2, \, 2N-n-2)$ such that $n+\kappa$ is even (see the works \cite{BS249,S1113,St177} and references therein). \begin{rem}\label{dichotomyHam} One sees that if $\kappa$ is the number of parameters in typical families of Kronecker $n$-tori in Hamiltonian systems with $N$ degrees of freedom, then $\kappa+n=2N$ for coisotropic (Lagrangian or strictly coisotropic) Kronecker tori (so that the Lebesgue measure of the union of the tori is positive) and $\kappa+n\leq 2N-2$ for non-coisotropic (strictly isotropic or atropic) Kronecker tori (so that the union of the tori is of measure zero). \end{rem} Until 1984, the Hamiltonian KAM theory only dealt with isotropic Kronecker tori. 
The non-isotropic Hamiltonian KAM theory was founded by I.O.~Parasyuk \cite{P380}. Strictly isotropic Kronecker tori in Hamiltonian systems are often said to be \emph{lower dimensional}. For strictly coisotropic Kronecker tori in Hamiltonian systems, the term ``higher dimensional'' is also used but much more rarely. Of course, of all the Kronecker $n$-tori in Hamiltonian systems with $N$ degrees of freedom, Lagrangian Kronecker $N$-tori are best studied. A generic Lagrangian non-resonant Kronecker torus $\cT$ in a Hamiltonian system (with any number $N$ of degrees of freedom) is \emph{KAM stable}: in any neighborhood of $\cT$, there is a family of other Lagrangian Kronecker tori, their union having positive Lebesgue measure and density one at $\cT$ (all these tori constitute an $N$-parameter family which is, generally speaking, Cantor-like for $N\geq 2$). To be more precise, the KAM stability of $\cT$ is implied by the so-called Kolmogorov non-degeneracy of $\cT$ \cite{B42}. No arithmetic conditions (like strong incommensurability) on the frequencies of $\cT$ are needed in this remarkable result, and it is valid in the $C^\ell$ smoothness class with any finite sufficiently large $\ell$ (not to mention the $C^\infty$, Gevrey, and analytic categories). On the other hand, generic Lagrangian resonant Kronecker tori in Hamiltonian systems with $N\geq 2$ degrees of freedom in the $C^\ell$ smoothness classes, $\ell\geq 2$, are not KAM stable \cite{B21}. For some previous results concerning density points of quasi-periodicity (not necessarily in the Hamiltonian realm), see e.g.\ the works \cite{BHN355,BHS1996,BHTB1990,BS249,EFK1733}. 
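As a concrete illustration of the arithmetic conditions mentioned above, one can minimize $|\langle j,\omega\rangle|\,|j|^{\tau}$ over nonzero integer vectors with bounded entries: for the badly approximable vector $\omega=(1,(1+\sqrt{5})/2)$ the minimum stays bounded away from zero (a Diophantine constant), while a near-resonant vector makes it collapse. A small sketch (the cut-off $J$ is an arbitrary choice):

```python
import numpy as np
from itertools import product

def dioph_min(omega, tau, J):
    """min of |<j, omega>| * |j|_1^tau over nonzero integer vectors with |j_i| <= J."""
    best = np.inf
    for j in product(range(-J, J + 1), repeat=len(omega)):
        if any(j):
            best = min(best, abs(float(np.dot(j, omega))) * sum(map(abs, j))**tau)
    return best

golden = (1 + np.sqrt(5.0)) / 2
gamma = dioph_min([1.0, golden], tau=1, J=60)              # bounded away from zero
near_res = dioph_min([1.0, 1.0 + 1e-9], tau=1, J=60)       # near-resonant: tiny
```

For $\omega=(1,\varphi)$ the minimum is attained at $j=(\pm 1,0)$; all vectors with $j_2\neq 0$ give larger products, reflecting the bad approximability of the golden mean.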
\begin{rem}\label{stability} The term ``KAM stable'' is sometimes understood in a quite different sense (see e.g.\ \cite{K259,S351}): an unperturbed system (or an unperturbed Hamilton function) possessing a smooth family of Kronecker tori is said to be KAM stable if any perturbed system admits a Cantor-like family of Kronecker tori close to the unperturbed ones (provided that the perturbation lies in a suitable functional class and is sufficiently small). \end{rem} Since Kronecker tori in Hamiltonian systems tend to be organized into (Cantor-like) families, the natural question arises whether such tori can be isolated. The isolatedness of a torus can be understood in different ways. \begin{dfn}\label{isolated} A Kronecker $n$-torus $\cT$ of a dynamical system is said to be \emph{isolated} if it is not included in a (Cantor-like) family of Kronecker $n$-tori. A torus $\cT$ is said to be \emph{strongly isolated} if there exists a neighborhood $\cO$ of $\cT$ in the phase space such that there are no Kronecker tori (of any dimension) entirely contained in $\cO\setminus\cT$. A torus $\cT$ is said to be \emph{unique} if there are no Kronecker tori (of any dimension) outside $\cT$ in the whole phase space. \end{dfn} In particular, a strongly isolated torus $\cT$ is unique in the neighborhood $\cO$ mentioned in Definition~\ref{isolated}. Of course, a generic equilibrium in a Hamiltonian system is always isolated; to be more precise, an equilibrium $O$ is isolated whenever none of the eigenvalues $\lambda_{\Fi}$ of the linearization of the vector field at $O$ is zero. If $O$ is \emph{hyperbolic} (i.e., if all the eigenvalues $\lambda_{\Fi}$ have nonzero real parts), then it is strongly isolated. Examples of unique equilibria in Euclidean phase spaces are also straightforward: if the equilibrium $0$ of a system with a quadratic Hamilton function in $\mR^{2N}$ is hyperbolic then it is unique.
On the other hand, the question of whether an \emph{elliptic} equilibrium of a Hamiltonian system (i.e., an equilibrium for which all the eigenvalues $\lambda_{\Fi}$ are nonzero and lie on the imaginary axis) can be strongly isolated (or at least can fail to be accumulated by a set of Lagrangian Kronecker tori of positive measure) is very far from being easy. So is the question of whether such an equilibrium can be Lyapunov unstable. We will not discuss this problem here and confine ourselves to citing the papers and preprints \cite{F09059,FK1905,FS67,T12851} (see also the references therein). In general, the instability of an equilibrium is a more delicate topic than that of, say, a Lagrangian Kronecker torus \cite{FS67}. Surprisingly, it seems that the question of whether strictly isotropic Kronecker tori of dimensions from $1$ to $N-1$ in Hamiltonian systems with $N\geq 2$ degrees of freedom can be isolated was never considered until 2017. In December 2017 and January 2018, the author and the user Khanickus of MathOverflow \cite{K2018} constructed independently two very similar explicit (and exceedingly simple) examples of Hamiltonian systems in $\mR^3\times\mT^1$ with a unique periodic orbit. Subsequently, for any integers $n\geq 1$ and $N\geq n+1$ and for any vector $\omega\in\mR^n$, the author \cite{S415} proposed an example of a Hamiltonian system with $N$ degrees of freedom, with the phase space $\mR^{2N-n}_w\times\mT^n_\varp$, with the exact symplectic form
\[
\sum_{i=1}^n dw_i\wedge d\varp_i + \sum_{\nu=1}^{N-n} dw_{n+\nu}\wedge dw_{N+\nu},
\]
and with a Hamilton function independent of $\varp$, polynomial in $w$, and such that $\{w=0\}$ is a unique Kronecker $n$-torus (in the sense of Definition~\ref{isolated}), the frequency vector of $\{w=0\}$ being $\omega$.
The paper \cite{S415} also contains an example of a Hamiltonian system with $N$ degrees of freedom, with the compact phase space $\mT^{2N-n}_w\times\mT^n_\varp$, with the symplectic form given by the same formula (but no longer exact), and with a trigonometric polynomial Hamilton function independent of $\varp$ and such that $\{w=0\}$ is a strongly isolated Kronecker $n$-torus (in the sense of Definition~\ref{isolated}), the frequency vector of $\{w=0\}$ being $\omega$. Thus, the problem of the possible isolatedness of strictly isotropic Kronecker tori in Hamiltonian systems has been completely solved by now. It is clear that periodic orbits (Kronecker $1$-tori with a nonzero frequency) of Hamiltonian systems with one degree of freedom are always included in smooth one-parameter families (each periodic orbit being a connected component of an energy level line). The question of whether Lagrangian Kronecker tori in Hamiltonian systems with $N\geq 2$ degrees of freedom can be isolated has turned out to be highly nontrivial. To the best of the author's knowledge, this question is still open, even in the case where the frequency vector is Liouville (or even resonant) and the Hamilton function is only $C^\infty$ smooth (see \cite{B42,B21,FF01575}). It is proven in the landmark paper \cite{EFK1733} that a Lagrangian Kronecker $N$-torus $\cT$ with a \emph{Diophantine} frequency vector is never isolated in the \emph{analytic} category (where the symplectic form, the Hamilton function, and the torus itself are analytic), however degenerate the Hamilton function is at $\cT$. Such a torus is always accumulated by other Lagrangian Kronecker tori (with Diophantine frequency vectors), i.e., is always included in a (Cantor-like) $\Fr$-parameter family of Lagrangian Kronecker tori with $\Fr\geq 1$. 
Nevertheless, it is not known whether $\Fr$ is always equal to $N$, i.e., whether the union of Lagrangian Kronecker tori in any neighborhood of $\cT$ always has positive measure (Herman conjectured the affirmative answer in the model problem of fixed points of analytic symplectomorphisms \cite{H797}). For $N=2$, however, the equality $\Fr=2$ is always valid even in the $C^\infty$ category \cite{EFK1733}. For Liouville frequency vectors or non-analytic Hamilton functions, there are several examples in the literature of a Lagrangian Kronecker $N$-torus $\cT$ that is accumulated by other Lagrangian Kronecker tori, but the union of these tori is of measure zero. The phase space in all these examples is $\mR^N_u\times\mT^N_\varp$, the symplectic form is $\sum_{i=1}^N du_i\wedge d\varp_i$, the Hamilton function is $H(u,\varp) = \langle u,\omega\rangle + O\bigl( |u|^2 \bigr)$, and $\cT=\{u=0\}$, where $\omega\in\mR^N$ is the frequency vector of $\cT$. It is well known that locally, in some neighborhood of a Lagrangian Kronecker torus, this setup can always be achieved (up to an additive constant in $H$) \cite{B42,B21,MP643}; in fact, this is an immediate consequence of A.~Weinstein's equivalence theorem for Lagrangian submanifolds \cite{W329}. For any $N\geq 2$ and any resonant vector $\omega$, a very simple example with an analytic (and even quadratic in $u$) Hamilton function $H$ is presented in the paper \cite{B21}. In this example, the torus $\cT$ is accumulated by a continuous $N$-parameter family of isotropic Kronecker $(N-1)$-tori. The article \cite{EFK1733} contains examples for any $N\geq 4$ and \emph{any} vector $\omega$ with $C^\infty$ as well as Gevrey regular (with any exponent $\sigma>1$) Hamilton functions $H$. It is pointed out in the paper \cite{FS67} that for $C^\infty$ Hamilton functions $H$, the construction of \cite{EFK1733} can be extended to $N=3$ and any vector $\omega$ (and an analog for elliptic equilibria in $\mR^6$ is described). 
The paper \cite{FS67} also presents an analog for elliptic equilibria in $\mR^4$ but for Liouville frequencies. Finally, for any $N\geq 3$ and non-resonant but ``sufficiently Liouville'' vectors $\omega$, G.~Farr\'e and B.~Fayad \cite{FF01575} constructed examples with \emph{analytic} Hamilton functions $H$. The words ``sufficiently Liouville'' mean that if $\tilde{\omega} = (\omega_1,\ldots,\omega_{N-1}) \in \mR^{N-1}$, then the infimum of the set of the ratios \[ \frac{\ln\bigl| \langle j,\tilde{\omega}\rangle \bigr|}{|j|}, \quad j\in\mZ^{N-1}\setminus\{0\} \] is $-\infty$. In the examples of \cite{EFK1733} and \cite{FF01575}, the hypersurface $\{u_N=0\}$ is foliated by Lagrangian Kronecker tori with frequency vector $\omega$. As far as the author knows, the question of whether strictly coisotropic or atropic Kronecker tori in Hamiltonian systems can be isolated has never been raised. The present paper gives an exhaustive answer to this question in the case of \emph{atropic} tori. Here is our main result. \begin{thm}\label{mainHam} For any integers $N$, $n$, $d$ in the ranges $N\geq 2$, $1\leq n\leq N-1$, $0\leq d\leq 2N-n$ and for any vector $\omega\in\mR^n$, there exist an exact symplectic form $\Omega$ on the manifold $\cM = \mR^{2N-n}_w\times\mT^n_\varp$ with constant coefficients and a Hamilton function $H:\cM\to\mR$ independent of $\varp$, polynomial in $w$, and such that the corresponding Hamiltonian system on $\cM$ admits a $d$-parameter analytic family of \emph{strictly isotropic} Kronecker $n$-tori of the form $\{w=\const\}$. There are no Kronecker tori (of any dimension) outside this family. The $n$-torus $\{w=0\}$ belongs to this family, its frequency vector is equal to $\omega$, and if $d=0$ then this torus is unique in the sense of Definition~\ref{isolated}. If $d=2N-n$ then the family in question makes up the whole phase space. 
For any integers $N$, $n$, $d$ in the ranges $N\geq 3$, $3\leq n\leq 2N-3$, $0\leq d\leq 2N-n$ and for any vector $\omega\in\mR^n$, there exist a non-exact symplectic form $\Omega$ on the manifold $\cM = \mR^{2N-n}_w\times\mT^n_\varp$ with constant coefficients and a Hamilton function $H:\cM\to\mR$ independent of $\varp$, polynomial in $w$, and such that the corresponding Hamiltonian system on $\cM$ admits a $d$-parameter analytic family of \emph{atropic} Kronecker $n$-tori of the form $\{w=\const\}$. There are no Kronecker tori (of any dimension) outside this family. The $n$-torus $\{w=0\}$ belongs to this family and its frequency vector is equal to $\omega$, and if $d=0$ then this torus is unique. If $d=2N-n$ then the family in question makes up the whole phase space. Similar statements hold \emph{mutatis mutandis} for the compact manifold $\widehat{\cM} = \mT^{2N-n}_w\times\mT^n_\varp$, the modifications being as follows. First, the symplectic form $\Omega$ is not exact in the case of strictly isotropic Kronecker $n$-tori either. Second, the Hamilton function $H$ is now trigonometric polynomial in $w$. Third, it is no longer valid that there are no Kronecker tori (of any dimension) outside the family under consideration. Fourth, if $d=0$ then the $n$-torus $\{w=0\}$ is strongly isolated (rather than unique) in the sense of Definition~\ref{isolated}. \end{thm} This theorem is proven in Sections~\ref{symplectic}--\ref{compact} by constructing explicit examples which generalize the examples of the note \cite{S415}. The problem of whether strictly coisotropic Kronecker tori in Hamiltonian systems can be isolated remains open. There is little doubt that this problem is as difficult as the analogous problem (discussed above) for Lagrangian Kronecker tori. 
Thus, the isolatedness question for Kronecker tori in Hamiltonian systems is very hard for coisotropic (Lagrangian or strictly coisotropic) Kronecker $n$-tori (for $n\geq 2$) and is rather easy for non-coisotropic (strictly isotropic or atropic) ones. This dichotomy surprisingly coincides with the other dichotomy pointed out in Remark~\ref{dichotomyHam}. Setting $n=N\geq 3$ and $d=0$ in Theorem~\ref{mainHam}, we obtain a Hamilton function $H: \mR^n_w\times\mT^n_\varp \to \mR$ independent of $\varp$, polynomial in $w$, and such that the $n$-torus $\{w=0\}$ is a unique Kronecker torus; the frequency vector of this torus can be any prescribed vector in $\mR^n$. However, this astonishing picture is marred by the fact that the corresponding symplectic form on $\mR^n_w\times\mT^n_\varp$ is not standard and not even exact, and the torus $\{w=0\}$ is atropic rather than Lagrangian. In the examples of Sections~\ref{symplectic}--\ref{compact}, one deals with coisotropic and non-coisotropic Kronecker tori in a unified way. However, coisotropic Kronecker $n$-tori in our examples for $N\geq 1$ degrees of freedom ($N\leq n\leq 2N-1$) are always organized into $(2N-n)$-parameter analytic families. Most probably, non-resonant coisotropic Kronecker $(2N-1)$-tori in Hamiltonian systems with $N$ degrees of freedom cannot be isolated for any $N\geq 2$, cf.\ \cite{BHS1996,H989,H49}. \subsection{Review and the main result:\ reversible systems}\label{obratimye} While speaking of Kronecker tori (and, more generally, any invariant submanifolds) in reversible systems, one usually only considers \emph{symmetric} invariant submanifolds, i.e., invariant submanifolds that are also invariant under the reversing involution $G$ of the phase space. The dynamics of $G$-reversible systems and the properties of symmetric invariant submanifolds in such systems very much depend on the structure of the fixed point set $\Fix G$ of the involution $G$.
This set is a submanifold of the phase space of the same smoothness class as the involution $G$ itself. However, the manifold $\Fix G$ may well be empty or consist of several connected components of different dimensions even if the phase space is connected (see e.g.\ simple examples in the papers \cite{BDP223,DP3119,PF280,QS757} and references therein; in fact, the literature on the structure of the fixed point sets of involutions of various manifolds is by now immense). It is well known that in any symmetric non-resonant Kronecker $n$-torus $\cT$ (with a frequency vector $\omega\in\mR^n$) of a $G$-reversible system, one can choose an angular coordinate $\varp\in\mT^n$ such that the dynamics on $\cT$ takes the form $\dot{\varp}=\omega$ and the restriction of the reversing involution $G$ to $\cT$ takes the form $G|_{\cT}: \varp\mapsto-\varp$ (in particular, this implies that the set $(\Fix G)\cap\cT = \Fix\bigl( G|_{\cT} \bigr)$ consists of $2^n$ points). This very easy but fundamental \emph{standard reflection lemma} is proven in e.g.\ the works \cite{BHS1996,S435} (see also the papers \cite{S137,S599} for a discussion). We will confine ourselves to the case where the fixed point set $\Fix G$ of the reversing involution $G$ is non-empty and all its connected components are of the same dimension, so that $\dim\Fix G$ is well defined (in fact, this is so for almost all the reversible systems encountered in practice). We will say that an involution $G$ satisfying this condition is of \emph{type $(\fL,m)$} if $\dim\Fix G=m$ and $\codim\Fix G=\fL$ (cf.\ \cite{BHN355,BH191,BHS1996,QS757}). It follows from the standard reflection lemma that if a system reversible with respect to an involution of type $(\fL,m)$ admits a symmetric non-resonant Kronecker $n$-torus then $n\leq\fL$.
Therefore, in the reversible KAM theory \cite{BHN355,BH191,BHS1996,QS757,St177,S435,S137,S599,S603,S415}, it only makes sense to consider symmetric Kronecker $n$-tori in systems reversible with respect to involutions of types $(n+l,m)$ with $l\geq 0$. Now the main ``informal'' conclusion of the KAM theory for \emph{individual} reversible systems (not for reversible systems depending on external parameters) can be stated as follows. For $n=0$ and $1$, typical systems reversible with respect to an involution of type $(n+l,m)$ with $m\geq l\geq 0$ admit smooth $(m-l)$-parameter families of symmetric non-resonant Kronecker $n$-tori. For each $n\geq 2$, typical systems reversible with respect to an involution of type $(n+l,m)$ with $l\geq 0$ and $m\geq l+1$ admit Cantor-like $(m-l)$-parameter families of symmetric strongly non-resonant Kronecker $n$-tori. \begin{rem}\label{dichotomyrev} One sees that if $\kappa=m-l$ is the number of parameters in typical families of symmetric Kronecker $n$-tori in systems reversible with respect to involutions of types $(n+l,m)$, then $\kappa+n=m+n+l$ (i.e., $\kappa=m+l$) for $l=0$ (so that the Lebesgue measure of the union of the tori is positive) and $\kappa+n<m+n+l$ for $l\geq 1$ (so that the union of the tori is of measure zero). Of course, here we suppose that $m\geq l$ for $n\leq 1$ and $m>l$ for $n\geq 2$. If $m$ and $l$ do not meet these conditions, one needs external parameters $\mu_{\Fj}$ to obtain persistent families of symmetric Kronecker $n$-tori. However, the measure of the union of the tori in the product of the phase space and the parameter space $\{\mu\}$ is typically positive for $l=0$ and zero for $l\geq 1$ in this case as well (see \cite{S435,S137,S599,S603}). \end{rem} By analogy with Hamiltonian systems, one may ask under what conditions symmetric Kronecker tori in reversible systems can be isolated or unique. 
When we speak of the isolatedness, strong isolatedness, and unicity of such tori, we still interpret these concepts in strict accordance with Definition~\ref{isolated}: we have in view the absence of other Kronecker tori whatsoever, and not just the absence of other symmetric Kronecker tori. For any $n\geq 0$ and $l\geq m\geq 0$, it is very easy to construct a system that is reversible with respect to an involution of type $(n+l,m)$ and admits a unique symmetric Kronecker $n$-torus with any prescribed frequency vector $\omega\in\mR^n$. Indeed, let the phase space be $\mR^m_u\times\mT^n_\varp\times\mR^m_v\times\mR^{l-m}_q$ and the reversing involution be \[ G: (u,\varp,v,q) \mapsto (u,-\varp,-v,-q). \] Then $G$ is an involution of type $(n+l,m)$. The system \[ \dot{u}=v, \quad \dot{\varp}=\omega, \quad \dot{v}=u, \quad \dot{q}_\nu=q_\nu^2 \] ($1\leq\nu\leq l-m$) is reversible with respect to $G$, and $\{u=0, \; v=0, \; q=0\}$ is a unique Kronecker $n$-torus of this system. This Kronecker torus is symmetric, and its frequency vector is $\omega$. For any integers $n\geq 0$, $m\geq 0$, $l\geq 1$ and for any vector $\omega\in\mR^n$, the note \cite{S415} considers the manifold \begin{equation} \cK = \mR^m_u\times\mT^n_\varp\times\mR^l_q \label{cK} \end{equation} equipped with the involution \begin{equation} G: (u,\varp,q) \mapsto (u,-\varp,-q) \label{inv} \end{equation} of type $(n+l,m)$ and presents an example of a $G$-reversible system on $\cK$ with the right-hand side independent of $\varp$, polynomial in $(u,q)$, and such that $\{u=0, \; q=0\}$ is a unique symmetric Kronecker $n$-torus, its frequency vector being $\omega$. 
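The reversibility of the explicit system above, and the monotonicity of $q_\nu$ that rules out recurrence (and hence Kronecker tori) off the torus $\{u=0,\;v=0,\;q=0\}$, can be checked mechanically. A minimal numerical sketch (Python with NumPy; the dimension choices $m=1$, $n=1$, $l=2$, so that $q\in\mR^1$, and the frequency $\omega=\sqrt{2}$ are illustrative, not from the text):

```python
import numpy as np

# Phase space R^1_u x T^1_phi x R^1_v x R^1_q; G: (u,phi,v,q) -> (u,-phi,-v,-q)
omega = np.sqrt(2.0)

def X(x):
    """Right-hand side: u' = v, phi' = omega, v' = u, q' = q^2."""
    u, phi, v, q = x
    return np.array([v, omega, u, q**2])

def G(x):
    u, phi, v, q = x
    return np.array([u, -phi, -v, -q])

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.normal(size=4)
    # Reversibility: DG . X = -X o G (here DG = diag(1, -1, -1, -1))
    assert np.allclose(np.array([1, -1, -1, -1]) * X(x), -X(G(x)))
    # q is non-decreasing along every orbit and strictly increasing off {q = 0},
    # so no recurrent (conditionally periodic) motion can have q != 0
    assert X(x)[3] >= 0
```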
For the compact manifold \begin{equation} \widehat{\cK} = \mT^m_u\times\mT^n_\varp\times\mT^l_q \label{hatcK} \end{equation} equipped with the involution $G$ given by the same formula \eqref{inv} and having the same type, the paper \cite{S415} contains an example of a $G$-reversible system on $\widehat{\cK}$ with the right-hand side independent of $\varp$, trigonometric polynomial in $(u,q)$, and such that $\{u=0, \; q=0\}$ is a strongly isolated symmetric Kronecker $n$-torus, its frequency vector being $\omega$. In the present paper, we generalize these examples of the note \cite{S415}. Here is our second result. \begin{thm}\label{mainrev} For any integers $n$, $m$, $l$, $d_\ast$, $d$ in the ranges $n\geq 0$, $m\geq 0$, $l\geq 1$, $0\leq d_\ast\leq m$, $d_\ast\leq d\leq d_\ast+l$ and for any vector $\omega\in\mR^n$, there exists a system of ordinary differential equations on \eqref{cK} reversible with respect to the involution \eqref{inv} of type $(n+l,m)$ and possessing the following properties. The right-hand side of this system is independent of $\varp$ and polynomial in $(u,q)$. The system admits a $d$-parameter analytic family of Kronecker $n$-tori of the form $\{u=\const, \; q=\const\}$. There are no Kronecker tori (of any dimension) outside this family. The family includes a $d_\ast$-parameter analytic subfamily of symmetric Kronecker $n$-tori of the form $\{u=\const, \; q=0\}$. The $n$-torus $\{u=0, \; q=0\}$ belongs to this subfamily, its frequency vector is equal to $\omega$, and if $d_\ast=d=0$ then this torus is unique in the sense of Definition~\ref{isolated}. If $d_\ast=m$ and $d=m+l$ then the $d$-parameter family in question makes up the whole phase space. Similar statements hold \emph{mutatis mutandis} for the compact phase space \eqref{hatcK}, the modifications being as follows. First, the right-hand side of the system is now trigonometric polynomial in $(u,q)$. 
Second, it is no longer valid that there are no Kronecker tori (of any dimension) outside the family under consideration. Third, the symmetric Kronecker $n$-tori have the form $\{u=\const, \; q=q^0\}$, where each component of $q^0$ is equal to either $0$ or $\pi$. Fourth, if $d_\ast=d=0$ then the $n$-torus $\{u=0, \; q=0\}$ is strongly isolated (rather than unique) in the sense of Definition~\ref{isolated}. \end{thm} This theorem is proven in Sections~\ref{reversible}--\ref{comprev}. In the examples of Sections~\ref{reversible}--\ref{comprev}, one deals with the cases $l=0$ and $l\geq 1$ in a unified way. However, for $l=0$ the whole phase space $\mR^m\times\mT^n$ or $\mT^m\times\mT^n$ in our examples is foliated by symmetric Kronecker $n$-tori. The only case not covered by the examples above is that of symmetric Kronecker $n$-tori in systems reversible with respect to involutions of types $(n,m)$ with $m\geq 1$. If $n=0$ or $1$ then symmetric Kronecker $n$-tori in such systems are always organized into smooth $m$-parameter families and cannot be isolated \cite{S415}. To the best of the author's knowledge, the question of whether symmetric Kronecker $n$-tori in systems reversible with respect to involutions of types $(n,m)$ can be isolated for $m\geq 1$ and $n\geq 2$ has never been raised and is open. One may conjecture that such tori with Diophantine frequency vectors are never isolated in the analytic category (where the involution, the vector field, and the torus itself are analytic), similarly to Lagrangian Kronecker tori in Hamiltonian systems \cite{EFK1733} (see Section~\ref{gamiltonovy}). Most probably, this question is very hard. To summarize, the problem of the possible isolatedness of symmetric Kronecker $n$-tori in systems reversible with respect to involutions of types $(n+l,m)$ is rather easy (and has been solved) for $l\geq 1$ and is probably highly nontrivial for $l=0$ (if $n\geq 2$ and $m\geq 1$). 
Like in the Hamiltonian realm, this dichotomy coincides with the dichotomy of Remark~\ref{dichotomyrev}. \section{Preliminaries}\label{symplectic} Given non-negative integers $a$ and $b$, we designate the identity $a\times a$ matrix as $I_a$ and the zero $a\times b$ matrix as $0_{a\times b}$. In fact, the symbols $I_0$, $0_{0\times b}$, and $0_{a\times 0}$ correspond to no actual objects and will only be used for unifying the notation. Let $s\geq 1$, $k$, and $l$ be non-negative integers and consider a skew-symmetric $(2s+2k)\times (2s+2k)$ matrix $J$ of the form \[ J = \begin{pmatrix} 0_{s\times s} & -Z^{\rt} \\ Z & L \end{pmatrix}, \] where $Z$ is an $(s+2k)\times s$ matrix of rank $s$ and $L$ is a skew-symmetric $(s+2k)\times (s+2k)$ matrix (the superscript ``t'' denotes transposing). If $k=0$ then the matrix $J$ is always non-singular ($\det J = (\det Z)^2$). If $k\geq 1$ then for any fixed matrix $Z$, the matrix $J$ may be non-singular or singular depending on the matrix $L$. Indeed, we can suppose without loss of generality that the last $s$ rows of $Z$ constitute a non-singular $s\times s$ matrix $Z_\sharp$. Let the matrix $L$ have the form \[ L = \begin{pmatrix} L_\sharp & 0_{2k\times s} \\ 0_{s\times 2k} & 0_{s\times s} \end{pmatrix}, \] where $L_\sharp$ is a skew-symmetric $2k\times 2k$ matrix, then $\det J = \det L_\sharp(\det Z_\sharp)^2 \neq 0$ if and only if $\det L_\sharp \neq 0$. 
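The two determinant claims above ($\det J=(\det Z)^2$ for $k=0$, and $\det J = \det L_\sharp\,(\det Z_\sharp)^2$ for $k\geq 1$ with the block-diagonal choice of $L$) are easy to confirm numerically. A small sketch (Python with NumPy; the sizes $s=2$, $k=2$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
s, k = 2, 2  # illustrative sizes

# Case k >= 1 with L = diag(L_sharp, 0): det J = det(L_sharp) * (det Z_sharp)^2
Z = rng.normal(size=(s + 2*k, s))       # generic, so its last s rows are non-singular
Z_sharp = Z[-s:, :]
L_sharp = rng.normal(size=(2*k, 2*k))
L_sharp = L_sharp - L_sharp.T           # skew-symmetric 2k x 2k block
L = np.zeros((s + 2*k, s + 2*k))
L[:2*k, :2*k] = L_sharp
J = np.block([[np.zeros((s, s)), -Z.T], [Z, L]])
assert np.isclose(np.linalg.det(J),
                  np.linalg.det(L_sharp) * np.linalg.det(Z_sharp)**2)

# Case k = 0: det J = (det Z)^2 for any skew-symmetric L
Z0 = rng.normal(size=(s, s))
L0 = rng.normal(size=(s, s))
L0 = L0 - L0.T
J0 = np.block([[np.zeros((s, s)), -Z0.T], [Z0, L0]])
assert np.isclose(np.linalg.det(J0), np.linalg.det(Z0)**2)
```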
In the sequel, we will assume the matrix $J$ to be non-singular, so that the skew-symmetric $(2s+2k+2l)\times (2s+2k+2l)$ matrix \begin{equation} \cJ = \begin{pmatrix} \begin{matrix} 0_{s\times s} & -Z^{\rt} \\ Z & L \end{matrix} & 0_{(2s+2k)\times 2l} \\ 0_{2l\times (2s+2k)} & \begin{matrix} 0_{l\times l} & -I_l \\ I_l & 0_{l\times l} \end{matrix} \end{pmatrix} \label{cJ} \end{equation} is also non-singular and can be treated as the \emph{structure matrix} (the matrix of the Poisson brackets $\{{\cdot},{\cdot}\}$ of the coordinate functions, see e.g.\ \cite{B1,HLW2006,O1993,St177}) of a certain symplectic form $\Omega$ (with constant coefficients) on the manifold \begin{equation} \cM = \mR^s_u\times\mT^{s+2k}_\varp\times\mR^l_p\times\mR^l_q \label{cM} \end{equation} (cf.\ Lemma~1 in \cite{St177}). A Hamilton function $H:\cM\to\mR$ affords the equations of motion \cite{B1,HLW2006,O1993} \begin{equation} \begin{pmatrix} \dot{u} \\ \dot{\varp} \\ \dot{p} \\ \dot{q} \end{pmatrix} = \cJ\frac{\partial H}{\partial(u,\varp,p,q)} = \begin{pmatrix} -Z^{\rt}\partial H/\partial\varp \\ Z\partial H/\partial u + L\partial H/\partial\varp \\ -\partial H/\partial q \\ \partial H/\partial p \end{pmatrix}. \label{XH} \end{equation} This is an autonomous Hamiltonian system with $N=s+k+l$ degrees of freedom. For any $u^0\in\mR^s$, $p^0\in\mR^l$, $q^0\in\mR^l$, consider the $(s+2k)$-torus \begin{equation} \cT_{u^0,p^0,q^0} = \bigl\{ (u^0,\varp,p^0,q^0) \bigm| \varp\in\mT^{s+2k} \bigr\}. \label{cT} \end{equation} For any $\varp^0\in\mT^{s+2k}$, the skew-orthogonal complement $T^\bot$ (with respect to $\Omega$) of the tangent space $T$ to $\cT_{u^0,p^0,q^0}$ at the point $(u^0,\varp^0,p^0,q^0)$ consists of all the vectors of the form $\psi\partial/\partial\varp + P\partial/\partial p + Q\partial/\partial q$, where $P\in\mR^l$, $Q\in\mR^l$, $\psi\in\cZ$, and $\cZ$ is the $s$-dimensional subspace of $\mR^{s+2k}$ spanned by the columns of the matrix $Z$. 
Indeed, the space of all such vectors is of dimension $s+2l = \dim\cM-(s+2k)$. It is therefore sufficient to verify that $\Omega(V,W) = 0$ for any vector $V\in T$ (i.e., any vector $V = \Phi\partial/\partial\varp$ with $\Phi\in\mR^{s+2k}$) and any vector $W = (ZU)\partial/\partial\varp + P\partial/\partial p + Q\partial/\partial q$ with $U\in\mR^s$, $P\in\mR^l$, $Q\in\mR^l$. According to \eqref{XH}, the linear Hamilton function $H = H(u,p,q) = \langle U,u\rangle-\langle P,q\rangle+\langle Q,p\rangle$ on $\cM$ affords the constant Hamiltonian vector field equal to $W$. Thus, $\Omega(V,W) = dH(V) = 0$. We arrive at the conclusion that the $(s+2k)$-tori \eqref{cT} are isotropic for $k=0$ ($T\subset T^\bot$ at any point), are coisotropic for $l=0$ ($T^\bot\subset T$ at any point), and are therefore Lagrangian for $k=l=0$. For $kl>0$, these tori are atropic. Note that $\dim(T\cap T^\bot)=s$ in all the cases. It is clear that the symplectic form $\Omega$ is exact if and only if its coordinate representation does not contain terms $c_{\alpha\beta}d\varp_\alpha\wedge d\varp_\beta$, $1\leq\alpha<\beta\leq s+2k$, i.e., if the tori \eqref{cT} are isotropic. Thus, $\Omega$ is exact for $k=0$ and is not exact for $k\geq 1$. \section{The main construction}\label{center} \subsection{The system}\label{system} Now let $\zeta_1,\ldots,\zeta_s$, $\xi_1,\ldots,\xi_l$, $\eta_1,\ldots,\eta_l$ be arbitrary non-negative real constants and let $h:\mR^s\to\mR$ be an arbitrary smooth function. Consider the Hamilton function \begin{equation} H(u,p,q) = h(u) + lp_1 \sum_{i=1}^s \zeta_iu_i^2 + \sum_{\nu=1}^l (\xi_\nu p_\nu q_\nu^2+\eta_\nu p_\nu^3/3) \label{ourH} \end{equation} on the symplectic manifold \eqref{cM}. The term $lp_1 \sum_{i=1}^s \zeta_iu_i^2$ is automatically absent for $l=0$. 
According to \eqref{XH}, the equations of motion afforded by $H$ take the form \begin{equation} \begin{aligned} \dot{u}_i &= 0, \\ \dot{\varp}_\alpha &= \sum_{i=1}^s Z_{\alpha i}\left( \frac{\partial h(u)}{\partial u_i} + 2l\zeta_iu_ip_1 \right), \\ \dot{p}_\nu &= -2\xi_\nu p_\nu q_\nu, \\ \dot{q}_\nu &= \xi_\nu q_\nu^2 + \eta_\nu p_\nu^2 + \delta_{1\nu}l \sum_{i=1}^s \zeta_iu_i^2, \end{aligned} \label{XourH} \end{equation} where $1\leq i\leq s$, $1\leq\alpha\leq s+2k$, $1\leq\nu\leq l$, and $\delta_{1\nu}$ is the Kronecker delta. The fundamental property of this system is that $\dot{q}_\nu\geq 0$ everywhere in the phase space $\cM$, $1\leq\nu\leq l$. In the note \cite{S415}, we considered the particular case of the Hamilton function \eqref{ourH} and the system \eqref{XourH} where $k=0$, $Z=I_s$, $L=0_{s\times s}$, $h(u)=\langle u,\omega\rangle$ ($\omega\in\mR^s$), $l\geq 1$, $\zeta_i=1/l$ for all $1\leq i\leq s$, and $\xi_\nu=\eta_\nu=1$ for all $1\leq\nu\leq l$ (in the notation of \cite{S415}, $s=n$ and $l=m+1$ where $n\geq 1$ and $m\geq 0$). All the conditionally periodic motions of the system \eqref{XourH} fill up the manifold \[ \fM = \bigl\{ (u,\varp,p,q) \bigm| l\zeta_iu_i=0 \; \forall i, \;\; \eta_\nu p_\nu=0 \; \forall\nu, \;\; \xi_\nu q_\nu=0 \; \forall\nu \bigr\} \] foliated by Kronecker $(s+2k)$-tori of the form \eqref{cT}. Of course, always $\cT_{0,0,0}\subset\fM$. The frequency vector of a torus $\cT_{u^0,p^0,q^0} \subset \fM$ is $\omega(u^0) = Z\partial h(u^0)/\partial u \in \cZ$ (recall that $\cZ$ is the $s$-dimensional subspace of $\mR^{s+2k}$ spanned by the columns of the matrix $Z$). If $l=0$ then $\fM=\cM$. The system \eqref{XourH} admits no conditionally periodic motions outside $\fM$. Indeed, if $(u,\varp,p,q)\in\fM$ then $\dot{u}=0$, $\dot{\varp}=Z\partial h(u)/\partial u$, $\dot{p}=0$, $\dot{q}=0$. 
On the other hand, since $\dot{q}_\nu\geq 0$ everywhere in $\cM$, the recurrence property of conditionally periodic motions implies that $\dot{q}_\nu\equiv 0$ on Kronecker tori, $1\leq\nu\leq l$. Consequently, a point $(u,\varp,p,q)\notin\fM$ does not belong to any Kronecker torus of \eqref{XourH} (of any dimension) because $\dot{q}_1>0$ whenever $l\zeta_iu_i\neq 0$ for at least one $i$ and $\dot{q}_\nu>0$ whenever $\eta_\nu p_\nu\neq 0$ or $\xi_\nu q_\nu\neq 0$, $1\leq\nu\leq l$. The Kronecker $(s+2k)$-tori $\cT_{u^0,p^0,q^0} \subset \fM$ constitute an analytic $d$-parameter family where $d = \dim\fM-(s+2k)$. If $l=0$ then $d=s$. If $l\geq 1$ (i.e., if the tori \eqref{cT} are not coisotropic) then $d$ can take any integer value between $0$ and $s+2l$; to be more precise, $d$ is the number of zero constants among $\zeta_i$, $\xi_\nu$, $\eta_\nu$ ($1\leq i\leq s$, $1\leq\nu\leq l$). The equality $d=0$ holds if and only if all the numbers $\zeta_i$, $\xi_\nu$, $\eta_\nu$ are positive in which case $\fM=\cT_{0,0,0}$, and $\cT_{0,0,0}$ is a unique Kronecker torus of the system \eqref{XourH}. The equality $d=s+2l$ occurs if and only if all the numbers $\zeta_i$, $\xi_\nu$, $\eta_\nu$ are equal to zero in which case $\fM=\cM$. If a torus $\cT_{u^0,p^0,q^0}$ lies in $\fM$ then its frequency vector $\omega(u^0) = Z\partial h(u^0)/\partial u$ can be made equal to any prescribed vector in $\mR^{s+2k}$ by a suitable choice of the matrix $Z$ and the function $h$ (one can even choose $h$ to be linear). The construction just described can be formally carried out for $s=0$ as well, but for $s=0$ the frequency vector of each invariant $2k$-torus of the form \eqref{cT} is zero: such a torus consists of equilibria. The $s+l+\delta_{0l}$ functions \[ H, \qquad u_i \;\; (1\leq i\leq s), \qquad \xi_\nu p_\nu q_\nu^2+\eta_\nu p_\nu^3/3 \;\; (2\leq\nu\leq l) \] are first integrals of the system \eqref{XourH} which are pairwise in involution. 
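These conservation laws, together with the key inequality $\dot{q}_\nu\geq 0$, can be verified at random points of the phase space directly from the right-hand side \eqref{XourH}. A sketch (Python with NumPy; the sizes $s=2$, $k=1$, $l=2$, the linear choice $h(u)=\langle u,\omega\rangle$, and the random positive constants are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
s, k, l = 2, 1, 2                        # illustrative sizes
omega = rng.normal(size=s)               # h(u) = <u, omega>
zeta = rng.uniform(0.1, 1.0, size=s)     # non-negative constants zeta_i
xi = rng.uniform(0.1, 1.0, size=l)       # xi_nu
eta = rng.uniform(0.1, 1.0, size=l)      # eta_nu
Z = rng.normal(size=(s + 2*k, s))

def field(u, p, q):
    """Right-hand side of the equations of motion afforded by H."""
    du = np.zeros(s)
    dphi = Z @ (omega + 2*l*zeta*u*p[0])
    dp = -2*xi*p*q
    dq = xi*q**2 + eta*p**2
    dq[0] += l*np.sum(zeta*u**2)         # the Kronecker-delta term, nu = 1
    return du, dphi, dp, dq

for _ in range(100):
    u, p, q = rng.normal(size=s), rng.normal(size=l), rng.normal(size=l)
    du, dphi, dp, dq = field(u, p, q)
    # dq_nu/dt >= 0 everywhere in the phase space
    assert np.all(dq >= 0)
    # dH/dt = 0, using the exact gradients of H (H does not depend on phi)
    grad_u = omega + 2*l*zeta*u*p[0]
    grad_p = xi*q**2 + eta*p**2
    grad_p[0] += l*np.sum(zeta*u**2)
    grad_q = 2*xi*p*q
    assert abs(grad_u @ du + grad_p @ dp + grad_q @ dq) < 1e-9
    # d/dt (xi_nu p_nu q_nu^2 + eta_nu p_nu^3 / 3) = 0 for nu >= 2
    dF = xi[1:]*(dp[1:]*q[1:]**2 + 2*p[1:]*q[1:]*dq[1:]) + eta[1:]*p[1:]**2*dp[1:]
    assert np.allclose(dF, 0)
```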
In fact, this system always admits $s+l$ first integrals that are pairwise in involution and are functionally independent almost everywhere. Indeed, let $f_\nu(p_\nu,q_\nu) = \xi_\nu p_\nu q_\nu^2+\eta_\nu p_\nu^3/3$ if $\xi_\nu+\eta_\nu>0$, and let $f_\nu$ be any smooth function in $p_\nu,q_\nu$ with differential nonzero almost everywhere if $\xi_\nu=\eta_\nu=0$. In the case where $l\geq 1$ and $\sum_{i=1}^s \zeta_i > 0$, the functions \[ H, \qquad u_i \;\; (1\leq i\leq s), \qquad f_\nu(p_\nu,q_\nu) \;\; (2\leq\nu\leq l) \] are the desired $s+l$ first integrals. In the opposite case where $l \sum_{i=1}^s \zeta_i = 0$, one can choose the $s+l$ first integrals in question to be equal to \[ u_i \;\; (1\leq i\leq s), \qquad f_\nu(p_\nu,q_\nu) \;\; (1\leq\nu\leq l). \] \subsection{The analysis}\label{analysis} The dimension $n=s+2k$ of the tori \eqref{cT} can be smaller than, equal to, or greater than the number $N=s+k+l$ of degrees of freedom: $n-N=k-l$. The maximal possible value $s+2l$ of the quantity $d$ is always equal to $2N-n$. If $l=0$ (the case of coisotropic tori \eqref{cT}) then $d=s=2N-n$ and $k=n-N$. These equalities determine integers $s\geq 1$ and $k\geq 0$ if and only if $N\geq 1$ and $N\leq n\leq 2N-1$. If $l\geq 1$ then $n=s+2k=2N-s-2l\leq 2N-3$ and $N\geq 2$. It is easy to see that for any integers $N\geq 2$ and $n$ in the range $1\leq n\leq 2N-3$, one can choose integers $s\geq 1$, $k\geq 0$, $l\geq 1$ such that $N=s+k+l$ and $n=s+2k$. Indeed, if $1\leq n\leq N-1$ then it suffices to set $s=n$, $k=0$, $l=N-n$. In this case, the tori \eqref{cT} are strictly isotropic. Of course, the converse is also true: if $l\geq 1$ and $k=0$ then $1\leq n=s\leq N-1=s+l-1$. On the other hand, if $N\leq n\leq 2N-3$ (so that $N\geq 3$) then it suffices to set $s=2N-n-2$, $k=n-N+1$, $l=1$. In this case, the tori \eqref{cT} are atropic. If $l\geq 1$ and $k\geq 1$ (so that the tori \eqref{cT} are atropic) then $n=s+2k\geq 3$ and $N=s+k+l\geq 3$.
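This elementary counting, and its continuation in the next paragraph, can be checked exhaustively for small $N$. A short sketch (Python; the helper `decompositions` is ad hoc, introduced only for this check):

```python
def decompositions(N, n, k_min=0):
    """All triples (s, k, l) with s >= 1, k >= k_min, l >= 1, s+k+l = N, s+2k = n."""
    return [(n - 2*k, k, N - n + k)
            for k in range(k_min, n // 2 + 1)
            if n - 2*k >= 1 and N - n + k >= 1]

# With l >= 1 and k >= 0: solvable exactly for N >= 2 and 1 <= n <= 2N - 3
for N in range(2, 8):
    for n in range(1, 2*N - 2):
        assert decompositions(N, n), (N, n)
    assert not decompositions(N, 2*N - 2)   # n = 2N - 2 is out of reach

# With s, k, l all >= 1 (atropic tori): solvable for N >= 3 and 3 <= n <= 2N - 3
for N in range(3, 8):
    for n in range(3, 2*N - 2):
        assert decompositions(N, n, k_min=1), (N, n)
```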
One easily sees that for any integers $N\geq 3$ and $n$ in the range $3\leq n\leq 2N-3$, one can choose positive integers $s$, $k$, $l$ such that $N=s+k+l$ and $n=s+2k$. In the previous paragraph, we verified this for $N\leq n\leq 2N-3$. On the other hand, if $3\leq n\leq N-1$ (so that $N\geq 4$) then it suffices to set $s=n-2$, $k=1$, $l=N-n+1$. The case where $n=N\geq 3$, $1\leq k=l\leq\bigl\lfloor (N-1)/2 \bigr\rfloor$ (here $\lfloor{\cdot}\rfloor$ denotes the floor function), $s=N-2l$, and $d=0$ is probably the most interesting one. In this case we obtain the unique Kronecker $N$-torus $\cT_{0,0,0}$ of the Hamiltonian system \eqref{XourH} with $N$ degrees of freedom, and the frequency vector of this torus can be any vector in $\mR^N$ (but this torus is atropic rather than Lagrangian). The case where $n=N\geq 3$, $1\leq k=l\leq\bigl\lfloor (N-1)/2 \bigr\rfloor$, $s=N-2l$, and $d=N$ is also very interesting. In this case the whole phase space of the Hamiltonian system \eqref{XourH} with $N$ degrees of freedom is smoothly foliated by Kronecker $N$-tori \eqref{cT}. However, this system is not \emph{Liouville integrable} (completely integrable): the Kronecker tori in question are atropic rather than Lagrangian, and the $N$ first integrals $u_1,\ldots,u_{N-2l}$, $p_1,\ldots,p_l$, $q_1,\ldots,q_l$ are \emph{not} pairwise in involution: $\{q_\nu,p_\nu\}\equiv 1$, $1\leq\nu\leq l$. In fact, for any $s\geq 1$, $k$, and $l$, the whole phase space $\cM$ of the Hamiltonian system \eqref{XourH} is smoothly foliated by Kronecker tori \eqref{cT} whenever $d=s+2l$ (as was already pointed out in Section~\ref{system}), in which case the $s+2l$ functions $u_1,\ldots,u_s$, $p_1,\ldots,p_l$, $q_1,\ldots,q_l$ are independent first integrals of the system. 
The matrix of the Poisson brackets of these functions is \[ \cP = \begin{pmatrix} 0_{s\times s} & 0_{s\times l} & 0_{s\times l} \\ 0_{l\times s} & 0_{l\times l} & -I_l \\ 0_{l\times s} & I_l & 0_{l\times l} \end{pmatrix}, \] and they are not pairwise in involution for $l\geq 1$. If $l>k$ then the number of the first integrals in question exceeds the number $N=s+k+l$ of degrees of freedom. However, one cannot call the system \eqref{XourH} superintegrable for $l>k\geq 1$ and $d=s+2l$. Besides the existence of $M>N$ independent first integrals, the definition of a \emph{superintegrable} Hamiltonian system with $N$ degrees of freedom (see the papers \cite{F93,H79,KS811} and references therein; superintegrable systems are also known as properly degenerate or non-commutatively integrable systems) includes other requirements, for instance, that there be $N$ integrals pairwise in involution among the $M$ integrals under consideration (we have only $s+l<N$ integrals in involution, e.g., $u_1,\ldots,u_s$, $p_1,\ldots,p_l$), or that the rank of the matrix of the Poisson brackets of the integrals be equal to $2(M-N)$ almost everywhere (in our case the rank of $\cP$ is $2l>2(s+2l-N)=2(l-k)$), or that the common level surfaces of the integrals be isotropic (in our case the tori \eqref{cT} are atropic). Note that the tori \eqref{cT} are isotropic if and only if the symplectic form $\Omega$ on $\cM$ is exact (both the properties in our setup are equivalent to the equality $k=0$). This observation is consistent with the Herman lemma (see Section~\ref{gamiltonovy}). \section{Compact phase spaces}\label{compact} Like in the setting of our note \cite{S415}, the general construction of Sections~\ref{symplectic} and~\ref{center} admits an analogue with a compact phase space. Consider the symplectic manifold \begin{equation} \widehat{\cM} = \mT^s_u\times\mT^{s+2k}_\varp\times\mT^l_p\times\mT^l_q \label{hatcM} \end{equation} with the same structure matrix \eqref{cJ}. 
Of course, now the corresponding symplectic form $\Omega$ is always non-exact. The $(s+2k)$-tori \eqref{cT} with $u^0\in\mT^s$, $p^0\in\mT^l$, $q^0\in\mT^l$ are again isotropic for $k=0$, are coisotropic for $l=0$, and are atropic for $kl>0$. For any angular variable $z$ introduce the notation $\tilde{z}=\sin z$ (cf.\ \cite{S415}). Consider the Hamilton function \[ \widehat{H}(u,p,q) = h(u) + l\tilde{p}_1 \sum_{i=1}^s \zeta_i\tilde{u}_i^2 + \sum_{\nu=1}^l (\xi_\nu\tilde{p}_\nu\tilde{q}_\nu^2+\eta_\nu\tilde{p}_\nu^3/3) \] on \eqref{hatcM}, where again $\zeta_1,\ldots,\zeta_s$, $\xi_1,\ldots,\xi_l$, $\eta_1,\ldots,\eta_l$ are arbitrary non-negative real constants and $h:\mT^s\to\mR$ is an arbitrary smooth function. The Hamilton function $\widehat{H}$ affords the equations of motion \begin{equation} \begin{aligned} \dot{u}_i &= 0, \\ \dot{\varp}_\alpha &= \sum_{i=1}^s Z_{\alpha i}\left( \frac{\partial h(u)}{\partial u_i} + l\zeta_i\sin 2u_i\tilde{p}_1 \right), \\ \dot{p}_\nu &= -\xi_\nu\tilde{p}_\nu\sin 2q_\nu, \\ \dot{q}_\nu &= (\xi_\nu\tilde{q}_\nu^2 + \eta_\nu\tilde{p}_\nu^2)\cos p_\nu + \delta_{1\nu}l\left( \sum_{i=1}^s \zeta_i\tilde{u}_i^2 \right)\cos p_1, \end{aligned} \label{XourhatH} \end{equation} where $1\leq i\leq s$, $1\leq\alpha\leq s+2k$, $1\leq\nu\leq l$. The manifold \[ \widehat{\fM} = \bigl\{ (u,\varp,p,q) \bigm| l\zeta_i\tilde{u}_i=0 \; \forall i, \;\; \eta_\nu\tilde{p}_\nu=0 \; \forall\nu, \;\; \xi_\nu\tilde{q}_\nu=0 \; \forall\nu \bigr\} \] is again foliated by Kronecker $(s+2k)$-tori of the form \eqref{cT} (with $u^0\in\mT^s$, $p^0\in\mT^l$, $q^0\in\mT^l$), $\cT_{0,0,0}\subset\widehat{\fM}$ in all the cases, and the frequency vector of a torus $\cT_{u^0,p^0,q^0} \subset \widehat{\fM}$ is $\omega(u^0) = Z\partial h(u^0)/\partial u \in \cZ$. 
This frequency vector can again be made equal to any prescribed vector in $\mR^{s+2k}$ by a suitable choice of the matrix $Z$ and the function $h$; one can choose $h$ to be of the form $\sum_{i=1}^s c_i\sin(u_i-u^0_i)$. The dimension $s+2k+d$ of the manifold $\widehat{\fM}$ is determined in exactly the same way as that of the manifold $\fM$ in Section~\ref{system}. In particular, if $l=0$ then $\widehat{\fM}=\widehat{\cM}$. In contrast to the case of the system \eqref{XourH}, it is, generally speaking, \emph{not} true that the system \eqref{XourhatH} for $l\geq 1$ possesses no conditionally periodic motions outside $\widehat{\fM}$. Indeed, suppose that $\sum_{i=1}^s \zeta_i > 0$ and choose an arbitrary point $u^0\in\mT^s$ such that $\chi = \sum_{i=1}^s \zeta_i\sin^2u^0_i > 0$. Consider the $(s+2k+1)$-torus \begin{equation} \bigl\{ (u^0,\varp,0,q) \bigm| q_2=\cdots=q_l=0 \bigr\} \not\subset \widehat{\fM}. \label{exception} \end{equation} This torus is invariant under the flow of \eqref{XourhatH} with the induced dynamics \[ \dot{\varp} = \omega(u^0), \qquad \dot{q}_1 = \xi_1\sin^2q_1 + l\chi. \] It is clear that the motion on the torus \eqref{exception} is conditionally periodic. The frequencies of this motion are equal to $\omega_1(u^0),\ldots,\omega_{s+2k}(u^0),\varpi$ where \[ \varpi = 2\pi\left( \int_0^{2\pi}\frac{d\Fq}{\xi_1\sin^2\Fq+l\chi} \right)^{-1} = \bigl[ l\chi(l\chi+\xi_1) \bigr]^{1/2}. \] Nevertheless, for any fixed $q^\star\in\mT^l$, no point $(u,\varp,p,q)\notin\widehat{\fM}$ belongs to a Kronecker torus of \eqref{XourhatH} (of any dimension) entirely contained in the domain \[ \fD^+_{q^\star} = \bigl\{ (u,\varp,p,q) \bigm| p_\nu\in(-\pi/2,\pi/2)\bmod 2\pi \; \forall\nu, \;\; q_\nu\neq q^\star_\nu \; \forall\nu \bigr\}. \] Indeed, $\dot{q}_\nu\geq 0$ everywhere in the domain $\fD^+_{q^\star}$, $1\leq\nu\leq l$, and $\sum_{\nu=1}^l \dot{q}_\nu > 0$ everywhere in $\fD^+_{q^\star} \setminus \widehat{\fM}$. 
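The closed-form value of $\varpi$ above can be cross-checked numerically; the sketch below (ours, with arbitrary sample values for $\xi_1$ and $l\chi$) compares the defining quadrature with the stated square root.

```python
import math

# Cross-check of varpi = sqrt(l*chi*(l*chi + xi_1)) against the defining
# quadrature  2*pi / \int_0^{2*pi} dq / (xi_1*sin(q)**2 + l*chi),
# using the midpoint rule (spectrally accurate for smooth periodic data).

def varpi_quadrature(xi1, lchi, n=20000):
    h = 2.0 * math.pi / n
    total = 0.0
    for j in range(n):
        q = (j + 0.5) * h
        total += h / (xi1 * math.sin(q) ** 2 + lchi)
    return 2.0 * math.pi / total

def varpi_closed(xi1, lchi):
    return math.sqrt(lchi * (lchi + xi1))

assert abs(varpi_quadrature(2.0, 3.0) - varpi_closed(2.0, 3.0)) < 1e-8
assert abs(varpi_quadrature(0.5, 1.0) - varpi_closed(0.5, 1.0)) < 1e-8
```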
If for some $\nu$ a function $q_\nu: \mR \to \mT^1\setminus\{q^\star_\nu\}$ satisfies the conditions that $\dot{q}_\nu(t)\geq 0$ for all $t$ and $\dot{q}_\nu(0)>0$, then $q_\nu(t)$ tends to a certain point $q_\nu^{\lim}\neq q_\nu(0)$ as $t\to+\infty$ and the recurrence property fails. Similarly, no point $(u,\varp,p,q)\notin\widehat{\fM}$ belongs to a Kronecker torus of \eqref{XourhatH} (of any dimension) entirely contained in the domain \[ \fD^-_{q^\star} = \bigl\{ (u,\varp,p,q) \bigm| p_\nu\in(\pi/2,3\pi/2)\bmod 2\pi \; \forall\nu, \;\; q_\nu\neq q^\star_\nu \; \forall\nu \bigr\}. \] One may even fix any sequence of numbers $\vare_1,\ldots,\vare_l$, where $\vare_\nu=\pm 1$ for all $\nu$, and replace $\fD^+_{q^\star}$ or $\fD^-_{q^\star}$ with the domain \[ \fD^\vare_{q^\star} = \bigl\{ (u,\varp,p,q) \bigm| p_\nu\in\fI_{\vare_\nu}\bmod 2\pi \; \forall\nu, \;\; q_\nu\neq q^\star_\nu \; \forall\nu \bigr\}, \] where $\fI_1 = (-\pi/2,\pi/2)$ and $\fI_{-1} = (\pi/2,3\pi/2)$, so that $\vare_\nu\dot{q}_\nu\geq 0$ everywhere in $\fD^\vare_{q^\star}$, $1\leq\nu\leq l$. If $l\geq 1$ and all the constants $\zeta_1,\ldots,\zeta_s$, $\xi_1,\ldots,\xi_l$, $\eta_1,\ldots,\eta_l$ are positive (so that $d=0$), then $\cT_{0,0,0}$ is the only Kronecker torus of \eqref{XourhatH} entirely contained in the domain \[ \bigl\{ (u,\varp,p,q) \bigm| u_i\neq\pi \; \forall i, \;\; p_\nu\in(-\pi/2,\pi/2)\bmod 2\pi \; \forall\nu, \;\; q_\nu\neq\pi \; \forall\nu \bigr\}. \] So, in this case $\cT_{0,0,0}$ is strongly isolated. \section{Reversible analogues}\label{reversible} Both the Hamiltonian systems \eqref{XourH} and \eqref{XourhatH} are reversible with respect to the phase space involution \[ \widetilde{G}: (u,\varp,p,q) \mapsto (u,-\varp,p,-q) \] of type $(s+2k+l,s+l)$, so that $\dim\Fix\widetilde{G}=s+l \geq 1$, $\codim\Fix\widetilde{G}=s+2k+l \geq \dim\Fix\widetilde{G}$, and $\codim\Fix\widetilde{G}-n=l < \dim\Fix\widetilde{G}$, where $n=s+2k$. 
However, $\widetilde{G}\bigl( \cT_{u^0,p^0,q^0} \bigr) = \cT_{u^0,p^0,-q^0}$, so that not all the $n$-tori \eqref{cT} are invariant under $\widetilde{G}$. In the case of the system \eqref{XourH}, a torus $\cT_{u^0,p^0,q^0}$ is invariant under $\widetilde{G}$ if and only if $q^0=0$. Consequently, the statement ``each torus $\cT_{u^0,p^0,q^0} \subset \fM$ is symmetric'' is valid if and only if all the numbers $\xi_1,\ldots,\xi_l$ are positive. In the case of the system \eqref{XourhatH}, a torus $\cT_{u^0,p^0,q^0}$ is invariant under $\widetilde{G}$ if and only if $q^0=-q^0$, i.e., if each component of $q^0$ is equal to either $0$ or $\pi$. Again, the statement ``each torus $\cT_{u^0,p^0,q^0} \subset \widehat{\fM}$ is symmetric'' holds if and only if all the numbers $\xi_1,\ldots,\xi_l$ are positive. It is easy to construct a $G$-reversible counterpart of the system \eqref{XourH} for any non-negative integer values of $n$, $\dim\Fix G$, and $\codim\Fix G-n$, where $n$ is the dimension of symmetric Kronecker tori. Let $m$, $n$, $l$ be non-negative integers and consider the manifold \eqref{cK} equipped with the involution \eqref{inv} of type $(n+l,m)$. For any $u^0\in\mR^m$ and $q^0\in\mR^l$, consider the $n$-torus \begin{equation} \cT_{u^0,q^0} = \bigl\{ (u^0,\varp,q^0) \bigm| \varp\in\mT^n \bigr\}. \label{TG} \end{equation} Since $G\bigl( \cT_{u^0,q^0} \bigr) = \cT_{u^0,-q^0}$, a torus $\cT_{u^0,q^0}$ is invariant under $G$ if and only if $q^0=0$. Now let $\zeta_1,\ldots,\zeta_m$, $\xi_1,\ldots,\xi_l$ be arbitrary non-negative real constants and let $h:\mR^m\to\mR^n$ be an arbitrary smooth mapping. The system \begin{equation} \begin{aligned} \dot{u}_i &= 0, \\ \dot{\varp}_\alpha &= h_\alpha(u), \\ \dot{q}_\nu &= \xi_\nu q_\nu^2 + \delta_{1\nu}l \sum_{i=1}^m \zeta_iu_i^2 \end{aligned} \label{Xrev} \end{equation} (where $1\leq i\leq m$, $1\leq\alpha\leq n$, $1\leq\nu\leq l$) is reversible with respect to $G$. 
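The reversibility of \eqref{Xrev} can be checked pointwise: a vector field $F$ is reversible with respect to an involution $G$ exactly when $DG\cdot F(x)=-F(G(x))$ at every point $x$. A minimal sketch (ours), assuming the involution \eqref{inv} reads $G(u,\varp,q)=(u,-\varp,-q)$, which is consistent with its type $(n+l,m)$ and with $G\bigl(\cT_{u^0,q^0}\bigr)=\cT_{u^0,-q^0}$; the sizes, constants, and the mapping $h$ are arbitrary sample choices.

```python
import random

# Pointwise reversibility check for the system (Xrev):
#   u' = 0,  phi' = h(u),  q'_nu = xi_nu*q_nu**2 + delta_{1,nu}*l*sum(zeta_i*u_i**2).
# With G(u, phi, q) = (u, -phi, -q) we have DG = diag(I, -I, -I), and the
# condition DG . F(x) = -F(G(x)) must hold at every point x.

m, n, l = 2, 3, 2
zeta = [0.5, 1.2]
xi = [0.0, 0.8]
h = lambda u: [u[0] + 2 * u[1], u[0] * u[1], 1.0]   # arbitrary smooth h

def F(u, phi, q):
    du = [0.0] * m
    dphi = h(u)
    coupling = l * sum(zeta[i] * u[i] ** 2 for i in range(m))
    # delta_{1,nu} in the paper's 1-based indexing is nu == 0 here
    dq = [xi[nu] * q[nu] ** 2 + (coupling if nu == 0 else 0.0) for nu in range(l)]
    return du, dphi, dq

def G(u, phi, q):
    return u, [-a for a in phi], [-a for a in q]

random.seed(0)
for _ in range(100):
    u = [random.uniform(-2, 2) for _ in range(m)]
    phi = [random.uniform(-2, 2) for _ in range(n)]
    q = [random.uniform(-2, 2) for _ in range(l)]
    du, dphi, dq = F(u, phi, q)
    du2, dphi2, dq2 = F(*G(u, phi, q))
    lhs = du + [-a for a in dphi] + [-a for a in dq]   # DG . F(x)
    rhs = [-a for a in (du2 + dphi2 + dq2)]            # -F(G(x))
    assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```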
The term $l \sum_{i=1}^m \zeta_iu_i^2$ automatically vanishes for $l=0$. The key property of the system \eqref{Xrev} is that $\dot{q}_\nu\geq 0$ everywhere in the phase space $\cK$, $1\leq\nu\leq l$. In the note \cite{S415}, we considered a similar system with $h(u)\equiv\omega\in\mR^n$, with $l\geq 1$, and with the equation for $\dot{q}_\nu$ of the form \[ \dot{q}_\nu = \delta_{1\nu}\left( \sum_{\mu=1}^l q_\mu^2 + \sum_{i=1}^m u_i^2 \right), \] $1\leq\nu\leq l$. Our variables $m$ and $l$ play the roles of $\ell$ and $m+1$ in \cite{S415}, respectively. All the conditionally periodic motions of the system \eqref{Xrev} fill up the manifold \[ \fK = \bigl\{ (u,\varp,q) \bigm| l\zeta_iu_i=0 \; \forall i, \;\; \xi_\nu q_\nu=0 \; \forall\nu \bigr\} \] foliated by Kronecker $n$-tori of the form \eqref{TG}. Of course, always $\cT_{0,0}\subset\fK$, and the torus $\cT_{0,0}$ is symmetric. The frequency vector of a torus $\cT_{u^0,q^0} \subset \fK$ is $h(u^0)$, and this vector can be made equal to any prescribed vector $\omega\in\mR^n$ just by setting $h(u)\equiv\omega$. If $l=0$ then $\fK=\cK$. The system \eqref{Xrev} admits no conditionally periodic motions outside $\fK$. These features of $\fK$ can be verified in exactly the same way as in Section~\ref{system}. If $(u,\varp,q)\in\fK$ then $\dot{u}=0$, $\dot{\varp}=h(u)$, $\dot{q}=0$. On the other hand, since $\dot{q}_\nu\geq 0$ everywhere in $\cK$, the recurrence property of conditionally periodic motions implies that $\dot{q}_\nu\equiv 0$ on Kronecker tori, $1\leq\nu\leq l$. Consequently, a point $(u,\varp,q)\notin\fK$ does not belong to any Kronecker torus of \eqref{Xrev} (symmetric or not and of any dimension) because $\dot{q}_1>0$ whenever $l\zeta_iu_i\neq 0$ for at least one $i$ and $\dot{q}_\nu>0$ whenever $\xi_\nu q_\nu\neq 0$, $1\leq\nu\leq l$. The Kronecker $n$-tori $\cT_{u^0,q^0} \subset \fK$ constitute an analytic $d$-parameter family where $d = \dim\fK-n$. If $l=0$ then $d=m$. 
If $l\geq 1$ (i.e., if $\codim\Fix G>n$) then $d$ can take any integer value between $0$ and $m+l$; to be more precise, $d$ is the number of zero constants among $\zeta_i$, $\xi_\nu$ ($1\leq i\leq m$, $1\leq\nu\leq l$). The equality $d=0$ holds if and only if all the numbers $\zeta_i$, $\xi_\nu$ are positive in which case $\fK=\cT_{0,0}$, and $\cT_{0,0}$ is a unique Kronecker torus of the system \eqref{Xrev}. The equality $d=m+l$ occurs if and only if all the numbers $\zeta_i$, $\xi_\nu$ are equal to zero in which case $\fK=\cK$. The symmetric Kronecker $n$-tori $\cT_{u^0,0}$ of the system \eqref{Xrev} are characterized by the condition $l\zeta_iu^0_i=0 \; \forall i$ and constitute an analytic $d_\ast$-parameter family where $d_\ast$ is determined as follows. If $l=0$ then $d_\ast=m$, and all the Kronecker $n$-tori constituting $\fK=\cK$ are symmetric. If $l\geq 1$ then $d_\ast$ is the number of zero constants among $\zeta_i$ ($1\leq i\leq m$) and can therefore take any integer value between $0$ and $m$. In all the cases, $d-d_\ast\leq l$. \section{Compactified reversible analogues}\label{comprev} The system \eqref{Xrev} can be compactified in the same way as the system \eqref{XourH}, cf.\ \cite{S415}. Consider the manifold \eqref{hatcK} equipped with the involution $G$ given by the same formula \eqref{inv} and having the same type $(n+l,m)$. For any $u^0\in\mT^m$ and $q^0\in\mT^l$, consider the $n$-torus $\cT_{u^0,q^0}$ given by the same expression \eqref{TG}. Since $G\bigl( \cT_{u^0,q^0} \bigr) = \cT_{u^0,-q^0}$, a torus $\cT_{u^0,q^0}$ is invariant under $G$ if and only if $q^0=-q^0$, i.e., if each component of $q^0$ is equal to either $0$ or $\pi$. Now let again $\zeta_1,\ldots,\zeta_m$, $\xi_1,\ldots,\xi_l$ be arbitrary non-negative real constants and let $h:\mT^m\to\mR^n$ be an arbitrary smooth mapping. 
The system \begin{equation} \begin{aligned} \dot{u}_i &= 0, \\ \dot{\varp}_\alpha &= h_\alpha(u), \\ \dot{q}_\nu &= \xi_\nu\tilde{q}_\nu^2 + \delta_{1\nu}l \sum_{i=1}^m \zeta_i\tilde{u}_i^2 \end{aligned} \label{Xrevhat} \end{equation} (where $1\leq i\leq m$, $1\leq\alpha\leq n$, $1\leq\nu\leq l$, and the notation $\tilde{z}=\sin z$ is used) is reversible with respect to $G$, and $\dot{q}_\nu\geq 0$ everywhere in the phase space $\widehat{\cK}$, $1\leq\nu\leq l$. The manifold \[ \widehat{\fK} = \bigl\{ (u,\varp,q) \bigm| l\zeta_i\tilde{u}_i=0 \; \forall i, \;\; \xi_\nu\tilde{q}_\nu=0 \; \forall\nu \bigr\} \] is again foliated by Kronecker $n$-tori of the form \eqref{TG} (with $u^0\in\mT^m$ and $q^0\in\mT^l$), $\cT_{0,0}\subset\widehat{\fK}$ in all the cases, and the torus $\cT_{0,0}$ is symmetric. The frequency vector of a torus $\cT_{u^0,q^0} \subset \widehat{\fK}$ is $h(u^0)$, and this vector can be made equal to any prescribed vector $\omega\in\mR^n$ just by setting $h(u)\equiv\omega$. The dimension $n+d$ of the manifold $\widehat{\fK}$ is determined in exactly the same way as that of the manifold $\fK$ in Section~\ref{reversible}. In particular, if $l=0$ then $\widehat{\fK}=\widehat{\cK}$. If $l\geq 1$ then $\sum_{\nu=1}^l \dot{q}_\nu > 0$ everywhere in $\widehat{\cK} \setminus \widehat{\fK}$. Like in Section~\ref{compact} and in contrast to the case of the system \eqref{Xrev}, it is, generally speaking, \emph{not} true that the system \eqref{Xrevhat} for $l\geq 1$ possesses no conditionally periodic motions outside $\widehat{\fK}$. Indeed, similarly to the example in Section~\ref{compact}, suppose that $m\geq 1$, $\sum_{i=1}^m \zeta_i > 0$ and choose an arbitrary point $u^0\in\mT^m$ such that $\chi = \sum_{i=1}^m \zeta_i\sin^2u^0_i > 0$. Consider the $(n+1)$-torus \[ \bigl\{ (u^0,\varp,q) \bigm| q_2=\cdots=q_l=0 \bigr\} \not\subset \widehat{\fK}. 
\] This is a symmetric Kronecker torus of the system \eqref{Xrevhat} with the frequencies $h_1(u^0),\ldots,h_n(u^0),\varpi$, where $\varpi = \bigl[ l\chi(l\chi+\xi_1) \bigr]^{1/2}$. Nevertheless, for any fixed $q^\star\in\mT^l$, no point $(u,\varp,q)\notin\widehat{\fK}$ belongs to a Kronecker torus of \eqref{Xrevhat} (symmetric or not and of any dimension) entirely contained in the domain \[ \bigl\{ (u,\varp,q) \bigm| q_\nu\neq q^\star_\nu \; \forall\nu \bigr\}. \] This may be verified in exactly the same way as in Section~\ref{compact}. If $l\geq 1$ and all the constants $\zeta_1,\ldots,\zeta_m$, $\xi_1,\ldots,\xi_l$ are positive (so that $d=0$), then $\cT_{0,0}$ is the only Kronecker torus of \eqref{Xrevhat} entirely contained in the domain \[ \bigl\{ (u,\varp,q) \bigm| u_i\neq\pi \; \forall i, \;\; q_\nu\neq\pi \; \forall\nu \bigr\}. \] So, in this case $\cT_{0,0}$ is strongly isolated. The symmetric Kronecker $n$-tori $\cT_{u^0,q^0}$ of the system \eqref{Xrevhat} make up an $(n+d_\ast)$-dimensional submanifold of the manifold $\widehat{\fK}$, where $d_\ast$ is determined in exactly the same way as in Section~\ref{reversible}. \section*{Declaration of interest} Declarations of interest: none. \section*{Acknowledgments} I am grateful to B.~Fayad for fruitful correspondence and sending me the breakthrough preprint \cite{FF01575} prior to submission to arXiv. \end{document}
\begin{document} \title{Higher K-theory of Toric stacks} \author{Roy Joshua and Amalendu Krishna} \thanks{The first author was supported by a grant from the National Science Foundation. The second author was supported by the Swarnajayanti fellowship, Govt. of India, 2011.} \address{Department of Mathematics, Ohio State University, Columbus, Ohio, 43210, USA} \address{School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai, India} \email{[email protected]} \email{[email protected]} \baselineskip=10pt \keywords{toric, fan, stack, K-theory} \subjclass[2010]{19L47, 14M25} \begin{abstract} In this paper, we develop several techniques for computing the higher G-theory and K-theory of quotient stacks. Our main results for computing these groups are in terms of spectral sequences. We show that these spectral sequences degenerate in the case of many toric stacks, thereby providing an efficient computation of their higher K-theory. We apply our main results to give an explicit description of the higher K-theory of many smooth toric stacks. As another application, we describe the higher K-theory of toric stack bundles over smooth base schemes. \end{abstract} \maketitle \section{Introduction}\label{section:Intro} Toric varieties form a good testing ground for verifying many conjectures in algebraic geometry. This becomes particularly apparent when one wants to understand cohomology theories for algebraic varieties. Computations of cohomology rings of smooth toric varieties such as the Grothendieck ring of vector bundles, the Chow ring and the singular cohomology ring have been well-understood for many years. These computations facilitate predictions on the structure of various cohomology rings of a general algebraic variety. Just like toric varieties, one would like to have a class of algebraic stacks on which various cohomological problems about stacks can be tested.
The class of {\sl toric stacks}, first introduced and studied in \cite{BCS} by Borisov, Chen and Smith, in terms of combinatorial data called {\sl stacky fans}, is precisely such a class of algebraic stacks. These stacks are expected to be the toy models for understanding cohomology theories of algebraic stacks, a problem which is still very complicated in general. Such a point of view probably accounts for the recent flurry of activity in this area with several groups considering various forms of toric stacks. (See for example, \cite{FMN}, \cite{Laf}, \cite{GSI} in addition to \cite{BCS}.) In \cite{FMN}, Fantechi, Mann and Nironi study the structure of toric Deligne-Mumford stacks in detail. Recently, in \cite{GSI}, Geraschenko and Satriano consider in detail {\sl toric stacks} which may not be Deligne-Mumford. A class of toric stacks of this kind and their cohomology were earlier considered by Lafforgue \cite{Laf} in the study of geometric Langlands correspondence. All examples of toric stacks studied before, including those in \cite{BCS}, \cite{FMN} and \cite{Laf}, are shown to be special cases of the stacks introduced in \cite{GSI}. These stacks appear naturally while solving certain moduli problems, and the computation of their cohomological invariants allows us to understand these invariants for many moduli spaces. In \cite{BCS}, Borisov, Chen and Smith computed the rational Chow ring and orbifold Chow ring of toric Deligne-Mumford stacks. The integral version of this result for certain types of toric Deligne-Mumford stacks is due to Jiang and Tseng \cite{JTseng1}, and Iwanari \cite{Iwanari}. Jiang and Tseng also extend some of these results to certain toric stack bundles in \cite{Jiang} and \cite{JTseng2}. Borisov and Horja \cite{BH} computed the integral Grothendieck ring $\bdK _0(X)$ of a toric Deligne-Mumford stack $X$. See also \cite{Sm}.
On the other hand, almost nothing has been worked out until now regarding the higher K-groups of toric stacks, even when they are Deligne-Mumford stacks, though the higher K-groups of toric varieties have been well understood ({\sl cf.} \cite{VV}) and the higher Chow groups of toric varieties have been computed recently in \cite{Krishna}. Furthermore, we still do not know how to compute even the Grothendieck K-theory ring of a general smooth toric stack. One goal of this paper is to develop general techniques for computing the (integral) higher K-theory of smooth toric stacks. In fact, our results apply to a much bigger class of stacks than just toric stacks. In particular, these results can be used to describe the higher equivariant K-theory of many spherical varieties. Our general results are in terms of spectral sequences which we show degenerate in various cases of interest. This allows us to give an explicit description of the higher K-theory of toric stacks. As a consequence of this degeneration of spectral sequences, we show how one can recover (the integral versions of) and generalize all the previously known computations of the Grothendieck group of toric Deligne-Mumford stacks. As further applications of the main results, we completely describe the (integral) higher K-theory of weighted projective spaces. As another application, we give a complete description of the higher K-theory of toric stack bundles over a smooth base scheme. \subsection{Overview of the main results} The following is an overview of our main results. We shall fix a base field $k$ throughout this text. A {\sl scheme} in this paper will mean a separated and reduced scheme of finite type over $k$. A {\sl linear algebraic group} $G$ over $k$ will mean a smooth and affine group scheme over $k$. By a closed subgroup $H$ of an algebraic group $G$, we shall mean a morphism $H \to G$ of algebraic groups over $k$ which is a closed immersion of $k$-schemes.
In particular, a closed subgroup of a linear algebraic group will be of the same type and hence smooth. An algebraic group $G$ will be called {\sl diagonalizable} if it is a product of a split torus over $k$ and a finite abelian group of order prime to the characteristic of $k$. In particular, we shall be dealing with only those tori which are split over $k$. Unless mentioned otherwise, all products of schemes will be taken over $k$. A $G$-scheme will mean a scheme with an action of the algebraic group $G$. For a $G$-scheme $X$, let $\bdG ^G(X)$ (resp. $\bdK ^G(X)$) denote the spectrum of the K-theory of $G$-equivariant coherent sheaves (resp. vector bundles) on $X$. Let $R(G)$ denote the representation ring of $G$. This is canonically identified with $\bdK ^G_0(k)$. If ${\mathfrak X}$ denotes an algebraic stack, we let $\bdK ({\mathfrak X})$ (resp. $\bdG ({\mathfrak X})$) denote the Quillen K-theory (resp. G-theory) of the exact category of vector bundles (resp. coherent sheaves) on the stack ${\mathfrak X}$. For a quotient stack $\mathfrak{X} = [X/G]$, the spectrum $\bdK (\mathfrak{X})$ (resp. $\bdG (\mathfrak{X})$) is canonically weakly equivalent to the equivariant K-theory $\bdK ^G(X)$ (resp. G-theory $\bdG ^G(X)$) of $X$. See \S ~\ref{subsection:K-thry} for more details. Recall (see below) that a {\sl generically stacky} toric stack $\mathfrak{X}$ is of the form $[X/G]$ where $X$ is a toric variety with dense torus $T$ and $G$ is a diagonalizable group with a given morphism $\phi: G \to T$. \vskip .3cm Our first result is the construction of a spectral sequence which allows one to compute the higher K-theory of the stack $[X/G]$ from the K-theory of the stack $[X/T]$, whenever a torus $T$ acts on a scheme $X$ and $\phi : G \to T$ is a morphism of diagonalizable groups.
This is related to the spectral sequence of Merkurjev (\cite[Theorem 5.3]{Merk}), whose $E_2$-terms are expressed in terms of ${\bdG}_*([X/T])$ and which converges to ${\bdG}_*(X)$ (see also \cite{Lev} for related constructions). \begin{comment} This is closely related to the spectral sequence of Merkurjev \cite[Theorem 5.3]{Merk} whose $E_2$-terms are expressed in terms of the equivariant G-theory of a scheme with a group action and converging to the non-equivariant G-theory of the same scheme. (See also \cite{Lev} for related constructions.) Since we restrict to actions by diagonalizable groups, the class of group actions we consider is definitely more restrictive than what was considered in \cite{Merk}. Nevertheless, for this smaller class, our approach provides a generalization of the Merkurjev spectral sequence: the $E_2$-terms in our spectral sequence are given in terms of the $T$-equivariant G-theory of a $T$-scheme and converge to the equivariant G-theory, equivariant with respect to a diagonalizable group mapping into the torus. \end{comment} We also prove the degeneration of our spectral sequences in many cases, which provides an efficient tool for computing the higher K-theory of many quotient stacks, including toric stacks. \begin{thm} \label{thm:main-thm-1} Let $T$ be a split torus acting on a scheme $X$ and let $\phi: G \to T$ be a morphism of diagonalizable groups so that $G$ acts on $X$ via $\phi$. Then, there is a spectral sequence: \begin{equation}\label{eqn:gen.weak.eq1} E^{s,t}_2 = \mathrm{Tor}_{s}^{R(T)}(R(G), \bdG _t([X/T])) \Rightarrow \bdG _{s+t}([X/G]). \end{equation} Moreover, the edge map $\bdG _0([X/T]) {\underset {R(T)} \otimes} R(G) \to \bdG _0([X/G])$ is an isomorphism.
The spectral sequence ~\eqref{eqn:gen.weak.eq1} degenerates at the $E_2$-terms if $X$ is a smooth toric variety with dense torus $T$ such that $\bdK _0([X/T])$ is a projective $R(T)$-module, and we obtain the ring isomorphism: \begin{equation}\label{eqn:gen.weak.eq2} \bdK _*([X/T]) {\underset {R(T)} \otimes} R(G) \xrightarrow{\cong} \bdK _*([X/G]). \end{equation} In particular, this isomorphism holds when $X$ is a smooth and projective toric variety. \end{thm} If $\mathfrak{X} = [X/G]$ is a generically stacky toric stack associated to the data $\underline{X} = (X, G \xrightarrow{\phi} T)$, then the above results apply to the G-theory and K-theory of $\mathfrak{X}$. We shall apply Theorem~\ref{thm:main-thm-1} in Subsection~\ref{subsection:BHR} to give an explicit presentation of the Grothendieck K-theory ring of a smooth toric stack. If we specialize to the case of smooth toric Deligne-Mumford stacks, this recovers the main result of Borisov--Horja \cite{BH}. Another useful application of Theorem~\ref{thm:main-thm-1} is that it tells us how we can read off the $T'$-equivariant G-groups of a $T$-scheme $X$ in terms of its $T$-equivariant G-groups, whenever $T'$ is a closed subgroup of $T$. The special case of the isomorphism $\bdG _0([X/T]) {\underset {R(T)} \otimes} R(G) \xrightarrow{\cong} \bdG _0([X/G])$, when $G$ is the trivial group and $X$ is a smooth toric variety, recovers the main result of \cite{Moreli}. We should also observe that Theorem~\ref{thm:main-thm-1} applies to a bigger class of schemes than just toric varieties. In particular, one can use it to compute the equivariant $K$-theory of many spherical varieties. Another special case of the isomorphism $\bdG _0([X/T]) {\underset {R(T)} \otimes} R(G) \xrightarrow{\cong} \bdG _0([X/G])$, when $G$ is the trivial group and $X$ is a spherical variety, recovers the main result of \cite{Takeda}.
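To see the degeneration mechanism in the simplest possible case (this illustration is ours and uses only the standard fact that representations of a diagonalizable group over $k$ split into characters), take $X = {\rm Spec}\, k$ with the trivial action:

```latex
% Toy case of the spectral sequence \eqref{eqn:gen.weak.eq1}: X = Spec(k).
% Representations of the split torus T decompose into characters, so
\[
\bdG_t([{\rm Spec}\, k/T]) \;=\; \bdK^T_t(k) \;\cong\; R(T)\otimes_{\mathbb Z}\bdK_t(k),
\]
% which is free, hence flat, as an R(T)-module. Therefore
\[
\mathrm{Tor}^{R(T)}_s\bigl(R(G),\,\bdG_t([{\rm Spec}\, k/T])\bigr)=0
\quad\text{for } s>0,
\]
% so the spectral sequence collapses at E_2 and the edge map yields
\[
\bdG_*([{\rm Spec}\, k/G]) \;\cong\; R(G)\otimes_{\mathbb Z}\bdK_*(k).
\]
```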
\vskip .3cm \begin{thm}\label{thm:main-thm-2} Let $T$ be a split torus acting on a smooth and projective scheme $X$ which is $T$-equivariantly linear ({\sl cf.} Definition~\ref{defn:linear}). Let $\phi: G \to T$ be a morphism of diagonalizable groups so that $G$ acts on $X$ via $\phi$. Then the map \begin{equation}\label{eqn:main-2-0} \rho: \bdK _0([X/G]){\underset {{\mathbb Z}} \otimes} \bdK _*(k) \cong \bdK _0([X/G]){\underset {R(G)} \otimes} \bdK ^G_*(k) \to \bdK _*([X/G]) \end{equation} is a ring isomorphism. \end{thm} It turns out that all smooth and projective spherical varieties ({\sl cf.} \S~\ref{subsubsection:Spherical}) satisfy the hypothesis of Theorem~\ref{thm:main-thm-2}. When $X$ is a smooth projective toric variety and $G = T$, the above theorem recovers a result of Vezzosi--Vistoli (\cite[Theorem~6.9]{VV}). \vspace*{1cm} As an illustration of how the spectral sequence ~\eqref{eqn:gen.weak.eq1} degenerates in the cases not covered by Theorems~\ref{thm:main-thm-1} and ~\ref{thm:main-thm-2}, we prove the following result, which describes the higher K-theory of toric stack bundles. \begin{thm}\label{thm:main-thm-3} Let $B$ be a smooth scheme over a perfect field $k$ and let $[X/G]$ be a toric stack where $X$ is smooth and projective. Let $R_G\left(\bdK _*(B), \Delta \right)$ denote the Stanley-Reisner algebra ({\sl cf.} Definition~\ref{defn:RING}) over $\bdK _*(B)$ associated to a closed subgroup $G$ of $T$. Let $\pi: \mathfrak{X} \to B$ be a toric stack bundle with fiber $[X/G]$. Then there is a ring isomorphism \begin{equation}\label{eqn:vanish4*0} \Phi_G : R_G\left(\bdK _*(B), \Delta \right) \xrightarrow{\cong} \bdK _*(\mathfrak{X}). \end{equation} \end{thm} When $G$ is the trivial group, the Grothendieck group $\bdK _0(\mathfrak{X})$ was computed in \cite[Theorem~1.2(iii)]{SU}. When $[X/G]$ is a Deligne-Mumford stack, a computation of $\bdK _0(\mathfrak{X})$ appears in \cite{JTseng2}.
The focus of this paper is to describe the higher K-theory of toric stacks. A similar description of the motivic cohomology (higher Chow groups) of such stacks will appear in \cite{JK}. \vskip .3cm Here is an {\it outline of the paper}. The second section is a review of toric stacks and their K-theory. In \S~\ref{section:ELin}, we define the notion of equivariantly linear schemes and study their G-theory. In \S~\ref{section:Gen}, we prove Theorem~\ref{thm:main-thm-1}, which is the most general result of this paper. We conclude this section with a detailed description of the Grothendieck K-theory ring of general smooth toric stacks. In \S~\ref{section:Kunneth}, we prove a derived K{\"u}nneth formula and deduce Theorem~\ref{thm:main-thm-2} as a consequence. We conclude this section by working out the higher K-theory of (stacky) weighted projective spaces. We study the K-theory of toric stack bundles over smooth base schemes in the last two sections and conclude by providing a complete determination of their K-theory. \section{A review of toric stacks and their K-theory} \label{section:T-stacks} In this section, we review the concept of toric stacks from \cite{GSI} and set up the notation for the G-theory and K-theory of such stacks. This is done in some detail for the convenience of the reader. In what follows, we shall fix a base field $k$, and all schemes and algebraic groups will be defined over $k$. Let ${{\mathcal V}}_k$ denote the category of $k$-schemes and let ${\mathcal V}^S_k$ denote the full subcategory of smooth $k$-schemes. If $G$ is an algebraic group over $k$, we shall denote the category of $G$-schemes with $G$-equivariant maps by ${\mathcal V}_G$. The full subcategory of smooth $G$-schemes will be denoted by ${\mathcal V}^S_G$. \subsection{Toric stacks}\label{subsection:TStacks-def} \begin{defn} \label{toric.stacks.def} Let $T$ be a torus and let $X$ be a toric variety with dense torus $T$.
According to \cite{GSI}, a {\sl toric stack} $\mathfrak{X}$ is an Artin stack of the form $[X/G]$ where $G$ is a subgroup of $T$. A {\sl generically stacky toric stack} is an Artin stack of the form $[X/G]$ where $G$ is a diagonalizable group with a morphism $\phi: G \to T$. In this case, the stack $\mathfrak{X}$ has an open substack of the form $[T/G]$ which acts on it. The action of $\mathfrak{T} = [T/G]$ on $\mathfrak{X}$ is induced from the torus action on $X$. The stack $\mathfrak{T}$ is often called the {\sl stacky} dense torus of $\mathfrak{X}$. A generically stacky toric stack $[X/G]$ as above will often be described by the data $\underline{X} = (X, G \xrightarrow{\phi} T)$. \end{defn} \begin{exms} Generically stacky toric stacks arise naturally when one studies toric stacks. This is because a toric variety $X$ with dense torus $T$ has many $T$-invariant subvarieties which are toric varieties and whose dense tori are quotients of $T$. If $Z \subsetneq X$ is such a subvariety, and $G$ is a diagonalizable subgroup of the torus $T$, then $[Z/G]$ is in general not a toric stack but only a generically stacky toric stack. A (generically stacky) toric stack $\mathfrak{X}$ is called a {\sl toric Deligne-Mumford stack} if it is a Deligne-Mumford stack after forgetting the toric structure. It is called smooth if $X$ is a smooth scheme. As pointed out in the introduction, Deligne-Mumford toric stacks were introduced for the first time in \cite{BCS} using the notion of stacky fans. A geometric description of the stacks considered in \cite{BCS} was given in \cite{FMN}, where many nice properties of such stacks were proven. It turns out that all these stacks are special cases of the ones defined above. One extreme case of a toric stack is when $G$ is the trivial group, in which case $\mathfrak{X}$ is just a toric variety. The other extreme case is when $G$ is all of $T$: clearly such toric stacks are Artin.
Toric stacks of this form were considered before by Lafforgue \cite{Laf}. In general, a toric stack occupies a place between these two extreme cases. If $\mathfrak{X}$ is a toric Deligne-Mumford stack, then the stacky torus $\mathfrak{T}$ is of the form $T' \times \mathfrak{B}_{\mu}$, where $T'$ is a torus and $\mathfrak{B}_{\mu}$ is the classifying stack of a finite abelian group $\mu$. In general, every generically stacky toric stack can be written in the form $\mathfrak{X}' \times \mathfrak{B}_G$, where $\mathfrak{X}'$ is a toric stack and $\mathfrak{B}_G$ is the classifying stack of a diagonalizable group $G$. This decomposition often reduces the study of the cohomology theories of generically stacky toric stacks to the study of the cohomology theories of toric stacks and the classifying stacks of diagonalizable groups. The coarse moduli space $\overline{X}$ of a Deligne-Mumford toric stack $\pi: \mathfrak{X} \to \overline{X}$ is a simplicial toric variety whose dense torus is the moduli space of $\mathfrak{T}$. Conversely, every simplicial toric variety is the coarse moduli space of a canonically defined toric Deligne-Mumford stack ({\sl cf.} \cite[\S~4.2]{FMN}). \end{exms} \subsection{Toric stacks via stacky fans}\label{subsection:Fan} In \cite{GSI}, Geraschenko and Satriano showed that all (generically stacky) toric stacks are obtained from {\sl stacky fans} in much the same way toric varieties are obtained from fans. They describe in detail the dictionary between toric stacks and stacky fans. Associated to the toric variety $X$ is a fan $\Sigma$ on the lattice of 1-parameter subgroups of $T$, $L={\rm Hom}_{\rm gp}({\mathbb G}_m,T)$ (see \cite[\S 1.4]{fulton} or \cite[\S 3.1]{cls}). The surjection of tori $T\to T/G$ corresponds to the homomorphism of lattices of 1-parameter subgroups, $\beta\colon L\to N={\rm Hom}_{\rm gp}({\mathbb G}_m,T/G)$.
The dual homomorphism, $\beta^*\colon {\rm Hom}(N,{\mathbb Z})\to {\rm Hom}(L,{\mathbb Z})$, is the induced homomorphism of characters. Since $T\to T/G$ is surjective, $\beta^*$ is injective, and the image of $\beta$ has finite index. Therefore, one may define a \emph{stacky fan} as a pair $(\Sigma,\beta)$, where $\Sigma$ is a fan on a lattice $L$, and $\beta\colon L\to N$ is a homomorphism to a lattice $N$ such that $\beta(L)$ has finite index in $N$. Conversely, any stacky fan $(\Sigma,\beta)$ gives rise to a toric stack as follows. Let $X_\Sigma$ be the toric variety associated to $\Sigma$. The dual of $\beta$, $\beta^*\colon N^{\vee} \to L^{\vee}$, induces a homomorphism of tori $T_\beta\colon T_L\to T_N$, naturally identifying $\beta$ with the induced map on lattices of 1-parameter subgroups. Since $\beta(L)$ is of finite index in $N$, $\beta^*$ is injective, so $T_\beta$ is surjective. Let $G_\beta=\ker(T_\beta)$. Note that $T_L$ is the torus of $X_\Sigma$ and $G_\beta\subseteq T_L$ is a subgroup. If $(\Sigma,\beta)$ is a stacky fan, the associated toric stack $\mathfrak{X}_{\Sigma,\beta}$ is defined to be $[X_\Sigma/G_\beta]$, with the torus $T_N=T_L/G_\beta$. A \emph{generically stacky fan} is a pair $(\Sigma,\beta)$, where $\Sigma$ is a fan on a lattice $L$, and $\beta \colon L\to N$ is a homomorphism to a finitely generated abelian group. If $(\Sigma, \beta)$ is a generically stacky fan, the associated generically stacky toric stack $\mathfrak{X}_{\Sigma,\beta}$ is defined to be $[X_\Sigma/G_\beta]$, where the action of $G_\beta$ on $X_\Sigma$ is induced by the homomorphism $G_\beta\to D(L^*)=T_L$. One can give a more explicit description of $\mathfrak{X}_{\Sigma,\beta}$ considered above which will show that it is a generically stacky toric stack. Let $(\Sigma,\beta\colon L\to N)$ be a generically stacky fan and let $C(\beta)$ denote the complex $L \xrightarrow{\beta} N$.
Let \[ {\mathbb Z}^s\xrightarrow Q {\mathbb Z}^r\to N\to 0 \] be a presentation of $N$, and let $B\colon L\to {\mathbb Z}^r$ be a lift of $\beta$ (which exists because $L$ is a free abelian group). One defines the fan $\Sigma'$ on $L\oplus {\mathbb Z}^s$ as follows. Let $\tau$ be the cone generated by $e_1,\dots, e_s\in {\mathbb Z}^s$. For each $\sigma\in \Sigma$, let $\sigma'$ be the cone spanned by $\sigma$ and $\tau$ in $L\oplus {\mathbb Z}^s$. Let $\Sigma'$ be the fan generated by all the $\sigma'$. Corresponding to the cone $\tau$, we have the closed subvariety $Y\subseteq X_{\Sigma'}$, which is isomorphic to $X_\Sigma$ since $\Sigma$ is the \emph{star} (sometimes called the \emph{link}) of $\tau$ \cite[Proposition 3.2.7]{cls}. One defines \[ \xymatrix@R-2pc @C-1pc{ \llap{$\beta'=B\oplus Q\colon\,$}L\oplus {\mathbb Z}^s\ar[r] & {\mathbb Z}^r\\ (l,a)\ar@{|->}[r] & B(l)+Q(a).} \] Then $(\Sigma',\beta')$ is a generically stacky fan and we see that $\mathfrak{X}_{\Sigma,\beta}\cong [Y/G_{\beta'}]$. Note that $C(\beta')$ is quasi-isomorphic to $C(\beta)$, so $G_{\beta'}\cong G_\beta$. Toric stacks and generically stacky toric stacks arise naturally, especially in the solution of certain moduli problems. Any toric variety naturally gives rise to a toric stack. In fact, it is shown in \cite[Theorem~6.1]{GSII} that if $k$ is an algebraically closed field of characteristic zero, then every Artin stack with a dense open torus substack is a toric stack under certain fairly general conditions. We refer the reader to \cite{GSI} where many examples of toric and generically stacky toric stacks are discussed. \vskip .3cm {\sl In the rest of this paper, a {\sl toric stack} will always mean any generically stacky toric stack. A toric stack as in Definition ~\ref{toric.stacks.def} will be called a reduced toric stack or a toric orbifold}.
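\vskip .3cm For concreteness, consider the simplest non-trivial example of this construction. Take $L = N = {\mathbb Z}$, let $\Sigma$ be the fan of ${\mathbb A}^1$ (the zero cone together with the ray spanned by $1$), and let $\beta\colon L \to N$ be multiplication by an integer $n \ge 1$. Then $\beta(L) = n{\mathbb Z}$ has finite index in $N$, so $(\Sigma, \beta)$ is a stacky fan. The map $T_\beta\colon {\mathbb G}_m \to {\mathbb G}_m$ is $t \mapsto t^n$, so $G_\beta = \ker(T_\beta) = \mu_n$ and
\[
\mathfrak{X}_{\Sigma,\beta} = [{\mathbb A}^1/{\mu_n}], \qquad T_N = T_L/G_\beta \cong {\mathbb G}_m.
\]
For $n = 1$ this is just the toric variety ${\mathbb A}^1$.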
\vskip .3cm \subsection{K-theory of quotient stacks}\label{subsection:K-thry} Let $G$ be a linear algebraic group acting on a scheme $X$. The spectrum of the K-theory of $G$-equivariant coherent sheaves (resp. vector bundles) on $X$ is denoted by $\bdG ^G(X)$ (resp. $\bdK ^G(X)$). We will let $\bdK ^G$ denote $\bdK ^G({\rm Spec \,}(k))$. The direct sums of the homotopy groups of these spectra are denoted by $\bdG ^G_*(X)$ and $\bdK ^G_*(X)$. The latter is a graded ring. The natural map $\bdK ^G(X) \to \bdG ^G(X)$ is a weak equivalence if $X$ is smooth. For a quotient stack $\mathfrak{X}$ of the form $[X/G]$, one writes $\bdK ^G(X)$ and $\bdK (\mathfrak{X})$ interchangeably. The ring $\bdK ^G_0(k)$ will be denoted by $R(G)$. This is the same as the representation ring of $G$. The functor $X \mapsto \bdG ^G(X)$ on ${\mathcal V}_G$ is covariant for proper maps and contravariant for flat maps. It also satisfies the localization sequence and the projection formula. It satisfies the homotopy invariance property in the sense that if $f: V \to X$ is a $G$-equivariant vector bundle, then the map $f^*: \bdG ^G(X) \to \bdG ^G(V)$ is a weak equivalence. The functor $X \mapsto \bdK ^G(X)$ on ${\mathcal V}_G$ is a contravariant functor with values in commutative graded rings. For any $G$-equivariant morphism $f: X \to Y$, $\bdG ^G(X)$ is a module spectrum over the ring spectrum $\bdK ^G(Y)$. In particular, $\bdG ^G_*(X)$ is an $R(G)$-module. We refer to \cite[\S~1]{Thomason1} for proofs of the above properties. \section{Equivariant G-theory of linear schemes}\label{section:ELin} We will prove Theorem~\ref{thm:main-thm-1} as a consequence of a more general result (Theorem~\ref{thm:main-thm-1*}) on the equivariant G-theory of schemes with a group action. In this section, we study the equivariant G-theory of a certain class of schemes which we call equivariantly linear. Such schemes in the non-equivariant set-up were earlier considered by Jannsen \cite{Jan} and Totaro \cite{Totaro1}.
The G-theory of such schemes in the non-equivariant set-up was studied in \cite{J01}. We end this section with a proof of Theorem~\ref{thm:main-thm-1*} for equivariantly linear schemes. \begin{defn}\label{defn:linear} Let $G$ be a linear algebraic group over $k$ and let $X \in {\mathcal V}_G$. \begin{enumerate} \item We will say $X$ is $G$-equivariantly $0$-linear if it is either empty or isomorphic to ${\rm Spec \,}({\rm Sym}(V^*))$ where $V$ is a finite-dimensional rational representation of $G$. \item For a positive integer $n$, we will say that $X$ is $G$-equivariantly $n$-linear if there exists a family of objects $\{U, Y, Z\}$ in ${\mathcal V}_G$ such that $Z \subseteq Y$ is a $G$-invariant closed immersion with $U$ its complement, $Z$ and one of the schemes $U$ or $Y$ are $G$-equivariantly $(n-1)$-linear and $X$ is the other member of the family $\{U, Y, Z\}$. \item We will say that $X$ is {\it $G$-equivariantly linear (or simply, $G$-linear)} if it is $G$-equivariantly $n$-linear for some $n \ge 0$. \end{enumerate} \end{defn} It is immediate from the above definition that if $G \to G'$ is a morphism of algebraic groups then every $G'$-equivariantly linear scheme is also $G$-equivariantly linear. \begin{defn}\label{defn:T-CELL} Let $G$ be a linear algebraic group over $k$. A scheme $X \in {\mathcal V}_G$ is called {\sl $G$-equivariantly cellular} (or, $G$-cellular) if there is a filtration \[ \emptyset = X_{n+1} \subsetneq X_n \subsetneq \cdots \subsetneq X_1 \subsetneq X_0 = X \] by $G$-invariant closed subschemes such that each $X_i \setminus X_{i+1}$ is isomorphic to a rational representation $V_i$ of $G$. These representations of $G$ are called the (affine) $G$-{\em cells} of $X$. \end{defn} It is obvious that a $G$-equivariantly cellular scheme is cellular in the usual sense ({\sl cf.} \cite[Example~1.9.1]{Fulton1}). 
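For concreteness, $X = {\mathbb P}^n$ with the coordinate-wise action of $T = ({\mathbb G}_m)^{n+1}$ on the homogeneous coordinates $[x_0 : \cdots : x_n]$ is $T$-cellular: setting $X_i = \{x_0 = \cdots = x_{i-1} = 0\} \cong {\mathbb P}^{n-i}$, we obtain a filtration by $T$-invariant closed subschemes
\[
\emptyset = X_{n+1} \subsetneq X_n \subsetneq \cdots \subsetneq X_1 \subsetneq X_0 = {\mathbb P}^n, \qquad X_i \setminus X_{i+1} = X_i \cap \{x_i \neq 0\} \cong {\mathbb A}^{n-i},
\]
where the cell $X_i \setminus X_{i+1}$ has coordinates $x_j/x_i$ for $j > i$, on which $T$ acts by characters. Hence each cell is the affine space of a rational representation of $T$, and ${\mathbb P}^n$ is $T$-linear by Proposition~\ref{prop:lin-elem} below.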
Before we collect examples of equivariantly linear schemes, we state the following two elementary results which will be used throughout this paper. \begin{lem}\label{lem:elem} Let $G$ be a diagonalizable group over $k$ and let $H \subseteq G$ be a closed subgroup. Then $H$ is also defined over $k$ and is diagonalizable. If $T$ is a split torus over $k$, then all subtori and quotients of $T$ are defined over $k$ and are split over $k$. \end{lem} \begin{proof} The first statement follows from \cite[Proposition~8.2]{Borel}. If $T$ is a split torus over $k$, then any of its subgroups is defined over $k$ and is split by the first assertion. In particular, all quotients of $T$ are defined over $k$. Furthermore, all such quotients are split over $k$ by \cite[Corollary~8.2]{Borel}. \end{proof} \begin{lem}\label{lem:Open-orbit} Let $T$ be a split torus acting on a scheme $X$ with finitely many orbits. Then: \begin{enumerate} \item Any $T$-orbit in $X$ of minimal dimension is closed. \item Any $T$-orbit in $X$ of maximal dimension is open. \end{enumerate} \end{lem} \begin{proof} The first assertion is well known and can be found in \cite[Proposition~1.8]{Borel}. We prove the second assertion. Let $f:S \to X$ be the inertia group scheme over $X$ for the $T$-action and let $\gamma: X \to S$ denote the unit section. Then for a point $x \in X$, the fiber $S_x$ of the map $f$ is the stabilizer subgroup of $x$ and the dimension of $S_x$ is its dimension at the point $\gamma(x)$. For any $s \ge 0$, let $X_{\le s}$ denote the set of points $x \in X$ such that $\dim(S_x) \le s$. It follows from Chevalley's theorem ({\sl cf.} \cite[\S~13.1.3]{EGAIV}) that each $X_{\le s}$ is open in $X$ (see also \cite[\S~2.2]{VV}). Let $U \subseteq X$ denote a $T$-orbit of maximal dimension (say, $d$) and let $x \in U$. Set $s = \dim(S_x)$. Notice that all points in a $T$-orbit have the same stabilizer subgroup because $T$ is abelian.
We claim that there is no point on $X$ whose stabilizer subgroup has dimension less than $s$. If there is such a point $y \in X$, then $Ty$ is a $T$-orbit of $X$ of dimension bigger than the dimension of $U$, contradicting our choice of $U$. This proves the claim. It follows from this claim that $X_{\le s}$ is a $T$-invariant open subscheme of $X$ which is a disjoint union of its $T$-orbits such that the stabilizer subgroups of all points of $X_{\le s}$ have dimension $s$. In particular, all $T$-orbits in $X_{\le s}$ have dimension $d$. We conclude from the first assertion of the lemma that all $T$-orbits (of closed points) in $X_{\le s}$ (including $U$) are closed in $X_{\le s}$. Since there are only finitely many orbits in $X$, the same is true for $X_{\le s}$. We conclude that $X_{\le s}$ is a finite disjoint union of its closed orbits. Hence these orbits must also be open in $X_{\le s}$. In particular, $U$ is open in $X_{\le s}$ and hence in $X$. \end{proof} \begin{remk}\label{remk:Gen-diag} The reader can verify that Lemma~\ref{lem:Open-orbit} is true for the action of any diagonalizable group. But we do not need this general case. \end{remk} The following result yields many examples of equivariantly linear schemes. \begin{prop}\label{prop:lin-elem} Let $T$ be a split torus over $k$ and let $T'$ be a quotient of $T$. Let $T$ act on $T'$ via the quotient map. Then the following hold. \begin{enumerate} \item $T'$ is $T$-linear. \item A toric variety with dense torus $T$ is $T$-linear. \item A $T$-cellular scheme is $T$-linear. \item If $k$ is algebraically closed, then every $T$-scheme with finitely many $T$-orbits is $T$-linear. \end{enumerate} \end{prop} \begin{proof} We first prove $(1)$. It follows from Lemma~\ref{lem:elem} that $T'$ is a split torus. Hence, by the remark following Definition~\ref{defn:linear}, it is enough to show that a split torus $T$ is $T$-linear under the multiplication action.
We can write $T = ({\mathbb G}_m)^n$ and consider ${\mathbb A}^n$ as the toric variety with the dense torus $T$ via the coordinate-wise multiplication so that the complement of $T$ is the union of the coordinate hyperplanes. Since ${\mathbb A}^n$ is $T$-linear, it suffices to show that the union of the coordinate hyperplanes is $T$-linear. We shall prove by induction on the rank of $T$ that any union of the coordinate hyperplanes in ${\mathbb A}^n$ is $T$-linear. If $n =1$, then this is obvious. So let us assume that $n > 1$ and let $Y$ be a union of some coordinate hyperplanes in ${\mathbb A}^n$. After permuting the coordinates, we can write $Y$ as $Y^{n}_{\{1, \cdots, m\}} = H_1 \cup \cdots \cup H_m$ where $H_i = \{(x_1, \cdots , x_n) \in {\mathbb A}^n | x_i = 0\}$. If $m =1$, then $Y^{n}_{\{1\}}$ is $T$-equivariantly $0$-linear. So we assume by an induction on $m$ that $Y^{n}_{\{2, \cdots , m\}}$ is $T$-linear. Set $U = Y^{n}_{\{1, \cdots, m\}} \setminus Y^{n}_{\{2, \cdots , m\}}$. Then $U$ is the complement of a union of hyperplanes $W^{n-1}_{\{2, \cdots ,m\}}$ in $H_1 \cong {\mathbb A}^{n-1}$. Notice that $T$ acts on $H_1$ through the product $T_1$ of its last $(n-1)$ factors. By induction on $n$, we conclude that $W^{n-1}_{\{2, \cdots ,m\}}$ is $T_1$-linear. Since $H_1$ is clearly $T_1$-linear, we conclude that $U$ is $T_1$-linear and hence $T$-linear. Thus we have concluded that both $Y^{n}_{\{2, \cdots , m\}}$ and $U$ are $T$-linear. It follows from this that $Y^{n}_{\{1, \cdots, m\}}$ is $T$-linear too. The assertion $(2)$ easily follows from $(1)$ and an induction on the number of $T$-orbits in a toric variety. The assertion (3) is immediate from the definitions, using an induction on the length of the filtration of a $T$-cellular scheme. To prove $(4)$, let $X$ be a $T$-scheme with only finitely many $T$-orbits. It follows from Lemma~\ref{lem:Open-orbit} that $X$ has an open $T$-orbit $U$. 
Since $k$ is algebraically closed, such an open $T$-orbit must be isomorphic to a quotient of $T$. In particular, it is $T$-linear by the first assertion. An induction on the number of $T$-orbits implies that $X \setminus U$ is $T$-linear. We conclude that $X$ is also $T$-linear. \end{proof} \subsubsection{Spherical varieties}\label{subsubsection:Spherical} Recall that if $G$ is a connected reductive group over $k$, then a normal variety $X \in {\mathcal V}_G$ is called {\sl spherical} if a Borel subgroup $B$ of $G$ has a dense open orbit in $X$. The spherical varieties constitute a large class of varieties with group actions, including toric varieties, flag varieties and all symmetric varieties. It is known that a spherical variety $X$ has only finitely many fixed points for the $T$-action, where $T$ is a maximal torus of $G$ contained in $B$. It follows from a theorem of Bialynicki-Birula \cite{BB} (generalized to the case of non-algebraically closed fields by Hesselink \cite{Hessel}) that if $T$ is a split torus over $k$ and if $X$ is a smooth projective variety with a $T$-action such that the fixed point locus $X^T$ is finite, then $X$ is $T$-equivariantly cellular. We conclude that a smooth and projective spherical variety is $T$-cellular and hence $T$-linear. We do not know if all spherical varieties are $T$-linear. \vskip .4cm \subsection{Equivariant G-theory of equivariantly linear schemes} \label{subsection:Equiv-linear} Recall that if a linear algebraic group $G$ acts on a scheme $X$, then the G-theory and K-theory of the quotient stack $[X/G]$ are the same as the equivariant G-theory and K-theory of $X$ for the action of $G$. We shall use this identification throughout this text without further mention. The following result from \cite[\S~1.9]{Thomason1} will be used repeatedly in this text. \begin{thm}\label{thm:Morita} Let $G$ be a linear algebraic group over $k$ and let $H \subseteq G$ be a closed subgroup of $G$.
Then for any $X \in {\mathcal V}_H$, the map of spectra \[ \bdG ([(X \stackrel{H}{\times} G)/G]) \to \bdG ([X/H]) \] is a weak equivalence. In particular, the map of spectra \[ \bdG ([(X \times G/H)/G]) \to \bdG ([X/H]) \] is a weak equivalence if $X \in {\mathcal V}_G$. These are weak equivalences of ring spectra if $X$ is smooth. \end{thm} \vskip .3cm Recall that for a stack $\mathfrak{X}$, the K-theory spectrum $\bdK (\mathfrak{X})$ is a ring spectrum and $\bdG (\mathfrak{X})$ is a module spectrum over $\bdK (\mathfrak{X})$. In the following results, we make essential use of the derived smash products of module spectra over ring spectra. This is the derived functor of the smash product of spectra in their homotopy category. We refer to \cite{SS} (see also \cite{EKMM} and \cite[\S~3]{J01}) for basic results in this direction. In general, if $R$ is a ring spectrum and $M, N$ are module spectra over $R$, the derived smash product of $M$ and $N$ over $R$ will be denoted by $M{\overset L {\underset {R} \wedge}} N$. We shall now prove the following special case of Theorem~\ref{thm:main-thm-1*}. The proof follows a trick used in \cite[Theorem~4.1]{J01} in a different context. \begin{prop}\label{prop:Linear-case} Let $T$ be a split torus and let $X \in {\mathcal V}_T$ be $T$-linear. Let $\phi: G \to T$ be a morphism of diagonalizable groups such that $G$ acts on $X$ via $\phi$. Then the natural map of spectra \begin{equation}\label{eqn:gen.weak.eq*0} \bdK ([{{\rm Spec \,}(k)}/G]) {\overset L {\underset {\bdK ([{{\rm Spec \,}(k)}/T])} \wedge}} \bdG ([X/T]) \to \bdG ([X/G]) \end{equation} is a weak equivalence. \end{prop} \begin{proof} We assume that $X$ is $T$-equivariantly $n$-linear for some $n \ge 0$. We shall prove our result by an ascending induction on $n$. If $n = 0$, then $X$ is either empty (in which case the statement is trivial) or isomorphic to an affine space, and hence by the homotopy invariance, we can assume that $X = {\rm Spec \,}(k)$, and the result is immediate in this case.
We now assume that $n > 0$. By the definition of $T$-linearity, there are two cases to consider: \begin{enumerate} \item There exists a $T$-invariant closed subscheme $Y$ of $X$ with complement $U$ such that $Y$ and $U$ are $T$-equivariantly $(n-1)$-linear. \item There exists a $T$-scheme $Z$ which contains $X$ as a $T$-invariant open subscheme such that $Z$ and $Y = Z \setminus X$ are $T$-equivariantly $(n-1)$-linear. \end{enumerate} In the first case, the localization fiber sequence in equivariant G-theory gives us a commutative diagram of fiber sequences in the homotopy category of spectra\footnote {For spectra, this is the same as a cofiber sequence}: \[ \xymatrix@C2pc{ {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([Y/T])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([X/T])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([U/T])} \ar@<-1ex>[d] \\ {\bdG ([Y/G])} \ar@<1ex>[r] & {\bdG ([X/G])} \ar@<1ex>[r] &{\bdG ([U/G])}.} \] The left and the right vertical maps are weak equivalences by induction. We conclude that the middle vertical map is a weak equivalence too. In the second case, we obtain, as before, a commutative diagram of fiber sequences in the homotopy category of spectra: \[ \xymatrix@C2pc{ {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([Y/T])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([Z/T])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([X/T])} \ar@<-1ex>[d] \\ {\bdG ([Y/G])} \ar@<1ex>[r] & {\bdG ([Z/G])} \ar@<1ex>[r] &{\bdG ([X/G])}.} \] The first two vertical maps are weak equivalences by induction and hence the last vertical map must also be a weak equivalence. This completes the proof of the proposition.
\end{proof} We end this section with the following (rather technical) result which will be used in the proof of Theorem~\ref{thm:main-thm-1*}. Taking $V$ to be ${\rm Spec \,}(k)$, this becomes a special case of Proposition~\ref{prop:Linear-case}. \begin{lem}\label{lem:BC-I} Let $T$ be a split torus over $k$ and let $T'$ be a quotient of $T$. Let $T$ act on $T'$ via the quotient map and let it act trivially on an affine scheme $V$. Consider the scheme $X = V \times T'$ where $T$ acts diagonally. Let $\phi: G \to T$ be a morphism of diagonalizable groups such that $G$ acts on any $T$-scheme via $\phi$. Then the map of spectra \begin{equation}\label{eqn:BC-I0} \bdK ([{{\rm Spec \,}(k)}/G]) {\overset L {\underset {\bdK ([{{\rm Spec \,}(k)}/T])} \wedge}} \bdG ([X/T]) \to \bdG ([X/G]) \end{equation} is a weak equivalence. \end{lem} \begin{proof} Let $H$ denote the image of $G$ in $T'$ under the composite map $G \xrightarrow{\phi} T \twoheadrightarrow T'$ and let $H' = {T'}/H$. Notice that $T'$ is a split torus by Lemma~\ref{lem:elem}. Since $T$ (and hence $G$) acts trivially on the scheme $V$, it follows that $T$ and $G$ act on $X$ via their quotients $T'$ and $H$, respectively. Since $X$ is affine and all the underlying groups are diagonalizable, it follows from \cite[Lemma~5.6]{Thomason1} that the maps of spectra \begin{equation}\label{eqn:BC-I1} \bdG ([X/{T'}]) {\overset L {\underset {\bdK ^{T'}} \wedge}} \bdK ^T \to \bdG ([X/T]) ; \end{equation} \[ \bdG ([X/{H}]) {\overset L {\underset {\bdK ^{H}} \wedge}} \bdK ^G \to \bdG ([X/G]) \] are weak equivalences.
Using the first weak equivalence, we obtain \begin{equation}\label{eqn:BC-I2} \begin{array}{lll} \bdG ([X/T]) {\overset L {\underset {\bdK ^{T}} \wedge}} \bdK ^G & {\cong} & \left(\bdG ([X/{T'}]) {\overset L {\underset {\bdK ^{T'}} \wedge}} \bdK ^T\right) {\overset L {\underset {\bdK ^{T}} \wedge}} \bdK ^G \\ & {\cong} & \bdG ([X/{T'}]) {\overset L {\underset {\bdK ^{T'}} \wedge}} \bdK ^G \\ & {\cong} & \bdG ([X/{T'}]) {\overset L {\underset {\bdK ^{T'}} \wedge}} \left(\bdK ^H {\overset L {\underset {\bdK ^{H}} \wedge}} \bdK ^G\right) \\ & {\cong} & \left( \bdG ([X/{T'}]) {\overset L {\underset {\bdK ^{T'}} \wedge}} \bdK ^H\right) {\overset L {\underset {\bdK ^{H}} \wedge}} \bdK ^G. \end{array} \end{equation} On the other hand, we have \begin{equation}\label{eqn:BC-I3} \begin{array}{lll} \bdG ([X/{T'}]) {\overset L {\underset {\bdK ^{T'}} \wedge}} \bdK ^H & {\cong}^{1} & \bdG ([X/{T'}]) {\overset L {\underset {\bdK ^{T'}} \wedge}} \bdG ([{H'}/{T'}]) \\ & {\cong}^{2} & \bdG ([(X \times H')/{T'}])\\ & {\cong}^{3} & \bdG ([X/H]), \end{array} \end{equation} where the isomorphisms ${\cong}^{1}$ and ${\cong}^{3}$ follow from Theorem~\ref{thm:Morita}. The isomorphism ${\cong}^{2}$ follows from Propositions~\ref{prop:lin-elem} and ~\ref{prop:Kunneth-Linear-case}. Combining ~\eqref{eqn:BC-I1}, ~\eqref{eqn:BC-I2} and ~\eqref{eqn:BC-I3}, we get the weak equivalences \[ \bdG ([X/T]) {\overset L {\underset {\bdK ^{T}} \wedge}} \bdK ^G \cong \bdG ([X/H]) {\overset L {\underset {\bdK ^{H}} \wedge}} \bdK ^G \cong \bdG ([X/G]) \] and this proves the lemma. \end{proof} \section{G-theory of general toric stacks}\label{section:Gen} This section is devoted to the determination of the G-theory of a general (generically stacky) toric stack.
We prove our main results in a much more general set-up where the underlying scheme with a $T$-action need not be a toric variety. Our first result is a spectral sequence that computes the $G$-equivariant G-theory of a $T$-scheme $X$ in terms of its $T$-equivariant G-theory and the representation ring of $G$ whenever there is a morphism of diagonalizable groups $\phi: G \to T$. When the underlying scheme is assumed to be smooth, these conclusions may be stated in terms of K-theory instead of G-theory. This result specializes to the case of all (generically stacky) toric stacks when $X$ is assumed to be a toric variety. We conclude this section with an explicit presentation of the Grothendieck K-theory ring of a smooth toric stack which may not necessarily be Deligne-Mumford. \vskip .3cm We now prove the following main result of this section and derive its consequences. \begin{thm}\label{thm:main-thm-1*} Let $T$ be a split torus acting on a scheme $X$ and let $\phi: G \to T$ be a morphism of diagonalizable groups such that $G$ acts on $X$ via $\phi$. Then the natural map of spectra \begin{equation}\label{eqn:gen.weak.eq*0*} \bdK ([{{\rm Spec \,}(k)}/G]) {\overset L {\underset {\bdK ([{{\rm Spec \,}(k)}/T])} \wedge}} \bdG ([X/T]) \to \bdG ([X/G]) \end{equation} is a weak equivalence. In particular, one obtains a spectral sequence: \begin{equation}\label{eqn:gen.weak.eq*1} E^2_{s,t} = {\rm Tor}_{s,t}^{\bdK ^T_*(k)}(\bdK ^G_*(k), \bdG _*([X/T])) \Rightarrow \bdG _{s+t}([X/G]). \end{equation} \end{thm} \begin{proof} We shall prove the theorem by noetherian induction on $T$-schemes. The statement of the theorem is obvious if $X$ is the empty scheme, in which case both sides of ~\eqref{eqn:gen.weak.eq*0*} are contractible. Suppose $X$ is any $T$-scheme such that ~\eqref{eqn:gen.weak.eq*0*} holds when $X$ is replaced by any of its proper $T$-invariant closed subschemes. We show that ~\eqref{eqn:gen.weak.eq*0*} holds for $X$.
This will prove the theorem. By Thomason's generic slice theorem \cite[Proposition~4.10]{Thomason2}, there exists a $T$-invariant dense open subset $U \subseteq X$ which is affine. Moreover, $T$ acts on $U$ via its quotient $T'$ which in turn acts freely on $U$ with affine geometric quotient $U/T$ such that there is a $T$-equivariant isomorphism $U \cong (U/T) \times T'$. Here, $T$ acts trivially on $U/T$, via the quotient map on $T'$ and diagonally on $U$. The weak equivalence of ~\eqref{eqn:gen.weak.eq*0*} holds for $U$ by Lemma~\ref{lem:BC-I}. We now set $Y = X \setminus U$. Then $Y$ is a proper $T$-invariant closed subscheme of $X$. The localization sequence induces the commutative diagram of fiber sequences in the homotopy category of spectra: \[ \xymatrix@C2pc{ {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([Y/T])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([X/T])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([U/T])} \ar@<-1ex>[d] \\ {\bdG ([Y/G])} \ar@<1ex>[r] & {\bdG ([X/G])} \ar@<1ex>[r] &{\bdG ([U/G])}.} \] We have shown above that the right vertical map is a weak equivalence. The left vertical map is a weak equivalence by the noetherian induction. We conclude that the middle vertical map is a weak equivalence too. The existence of the spectral sequence now follows along standard lines (see, for example, \cite[Theorem~IV.4.1]{EKMM}). \end{proof} \vskip .3cm {\bf{Proof of Theorem~\ref{thm:main-thm-1}:}} To obtain the spectral sequence ~\eqref{eqn:gen.weak.eq1}, it is enough to identify this spectral sequence with the one in ~\eqref{eqn:gen.weak.eq*1}.
To see this, we recall from \cite[Lemma~5.6]{Thomason1} that the maps $R(T){\underset {{\mathbb Z}} \otimes} \bdK _*(k) \to \bdK _*([{{\rm Spec \,}(k)}/T])$ and $R(G){\underset {{\mathbb Z}} \otimes} \bdK _*(k) \to \bdK _*([{{\rm Spec \,}(k)}/G])$ are ring isomorphisms. Since $R(T)$ and $R(G)$ are flat ${\mathbb Z}$-modules, these isomorphisms can be written as \begin{equation}\label{eqn:main-thm-1*0} R(T) {\overset L {\underset {{\mathbb Z}} \otimes}} \bdK _*(k) \xrightarrow{\cong} \bdK _*([{{\rm Spec \,}(k)}/T]) \ {\rm and} \ \ R(G) {\overset L {\underset {{\mathbb Z}} \otimes}} \bdK _*(k) \xrightarrow{\cong} \bdK _*([{{\rm Spec \,}(k)}/G]), \end{equation} where ${\overset L \otimes}$ denotes the derived tensor product. Let $M^{\bullet} \xrightarrow{\sim} R(G)$ be a flat resolution of $R(G)$ as an $R(T)$-module. Since $R(T)$ is a flat ${\mathbb Z}$-module, we see that $M^{\bullet} \xrightarrow{\sim} R(G)$ is a flat resolution of $R(G)$ also as a ${\mathbb Z}$-module.
In particular, we obtain \begin{equation}\label{eqn:main-thm-1*1} \begin{array}{lll} \bdK ^G_*(k) {\overset L {\underset {\bdK ^T_*(k)} \otimes}} \bdG _*([X/T]) & \cong & \left(R(G) {\overset L {\underset {{\mathbb Z}} \otimes}} \bdK _*(k) \right) {\overset L {\underset {R(T) {\underset {{\mathbb Z}} \otimes} \bdK _*(k)} \otimes}} \bdG _*([X/T]) \\ & \cong & \left(M^{\bullet}{\overset L {\underset {{\mathbb Z}} \otimes}} \bdK _*(k) \right) {\overset L {\underset {R(T) {\underset {{\mathbb Z}} \otimes} \bdK _*(k)} \otimes}} \bdG _*([X/T]) \\ & {\cong}^{1} & \left(M^{\bullet} {\underset {{\mathbb Z}} \otimes} \bdK _*(k) \right) {\overset L {\underset{R(T) {\underset {{\mathbb Z}} \otimes} \bdK _*(k)} \otimes}} \bdG _*([X/T]) \\ & {\cong}^{2} & \left(M^{\bullet} {\underset {{\mathbb Z}} \otimes} \bdK _*(k) \right) {\underset {R(T) {\underset {{\mathbb Z}} \otimes} \bdK _*(k)} \otimes} \bdG _*([X/T]) \\ & {\cong} & M^{\bullet} {\underset {R(T)} \otimes} \left(R(T) {\underset {{\mathbb Z}} \otimes} \bdK _*(k)\right) {\underset {R(T) {\underset {{\mathbb Z}} \otimes} \bdK _*(k)} \otimes} \bdG _*([X/T]) \\ & {\cong} & M^{\bullet} {\underset {R(T)} \otimes} \bdG _*([X/T]) \\ & {\cong}^{3} & M^{\bullet}{\overset L {\underset {R(T)} \otimes}} \bdG _*([X/T]) \\ & {\cong} & R(G) {\overset L {\underset {R(T)} \otimes}} \bdG _*([X/T]), \end{array} \end{equation} where the isomorphism ${\cong}^{1}$ follows because $M^{\bullet}$ is a complex of flat ${\mathbb Z}$-modules, ${\cong}^{2}$ follows because $M^{\bullet} {\underset {{\mathbb Z}} \otimes} \bdK _*(k)$ is a complex of flat $R(T) {\underset {{\mathbb Z}} \otimes} \bdK _*(k)$-modules and the isomorphism ${\cong}^{3}$ follows because $M^{\bullet}$ is a complex of flat $R(T)$-modules.
Taking homology groups on both sides, we obtain \[ {\rm Tor}_{s,t}^{\bdK ^T_*(k)}(\bdK ^G_*(k), \bdG _*([X/T])) \cong {\rm Tor}^{R(T)}_{s,t}(R(G), \bdG _*([X/T])), \] which yields the spectral sequence ~\eqref{eqn:gen.weak.eq1}. The isomorphism of the edge map $\bdG _0([X/T]) {\underset{R(T)}\otimes} R(G) \to \bdG _0([X/G])$ follows immediately from ~\eqref{eqn:gen.weak.eq1} and the fact that the equivariant G-theory spectra appearing in Theorem ~\ref{thm:main-thm-1} are all connective (they have no negative homotopy groups). Let us now assume that $X$ is a smooth toric variety with dense torus $T$ such that $\bdK _0([X/T])$ is a projective $R(T)$-module. In this case, we can identify the G-theory with the K-theory. To show the degeneration of the spectral sequence ~\eqref{eqn:gen.weak.eq1}, it suffices to show that the map \begin{equation}\label{eqn:Degen1} \bdG _*([X/T]) {\underset{R(T)}\otimes} R(G) \to \bdG _*([X/T]) {\overset{L}{\underset{R(T)}\otimes}} R(G) \end{equation} is an isomorphism.
However, we have \[ \begin{array}{lll} \bdG _*([X/T]) {\overset{L}{\underset{R(T)}\otimes}} R(G) & {\cong}^{0} & \left(\bdK ^T_*(k) {\underset{R(T)}\otimes} \bdG _0([X/T])\right) {\overset{L}{\underset{R(T)}\otimes}} R(G) \\ & {\cong}^{1} & \left(\bdK ^T_*(k) {\overset{L}{\underset{R(T)}\otimes}} \bdG _0([X/T])\right) {\overset{L}{\underset{R(T)}\otimes}} R(G) \\ & {\cong}^{2} & \left(\bdK _*(k) {\overset{L}{\underset{{\mathbb Z}}\otimes}} R(T)\right) {\overset{L}{\underset{R(T)}\otimes}} \left(\bdG _0([X/T]) {\overset{L}{\underset{R(T)}\otimes}} R(G)\right) \\ & {\cong}^{3} & \left(\bdK _*(k) {\overset{L}{\underset{{\mathbb Z}}\otimes}} R(T)\right) {\overset{L}{\underset{R(T)}\otimes}} \left(\bdG _0([X/T]) {\underset{R(T)}\otimes} R(G)\right) \\ & {\cong}^{4} & \bdK _*(k) {\overset{L}{\underset{{\mathbb Z}}\otimes}} \left(\bdG _0([X/T]) {\underset{R(T)}\otimes} R(G)\right) \\ & {\cong}^{5} & \bdK _*(k) {\underset{{\mathbb Z}}\otimes} \left(\bdG _0([X/T]) {\underset{R(T)}\otimes} R(G)\right) \\ & {\cong}^{6} & \left(\bdK _*(k) {\underset{{\mathbb Z}}\otimes} \bdG _0([X/T])\right) {\underset{R(T)}\otimes} R(G) \\ & {\cong}^{7} & \bdG _*([X/T]) {\underset{R(T)}\otimes} R(G). \end{array} \] The isomorphism ${\cong}^{0}$ follows from \cite[Proposition~6.4]{VV} in general, and also from Theorem~\ref{thm:main-thm-2} when $X$ is projective. The isomorphism ${\cong}^{1}$ holds because $\bdG _0([X/T])$ is a projective $R(T)$-module. The isomorphism ${\cong}^{2}$ follows from \cite[Lemma~5.6]{Thomason1} because $R(T)$ is a flat ${\mathbb Z}$-module. The isomorphism ${\cong}^{3}$ follows again from the projectivity of $\bdG _0([X/T])$ as an $R(T)$-module.
The isomorphisms ${\cong}^{4}$ and ${\cong}^{6}$ are the associativity of the ordinary and derived tensor products. The isomorphism ${\cong}^{5}$ holds because $R(G)$ is a free ${\mathbb Z}$-module and $\bdG _0([X/T]) {\underset{R(T)}\otimes} R(G)$ is a projective $R(G)$-module, hence flat as a ${\mathbb Z}$-module. The isomorphism ${\cong}^{7}$ follows again from \cite[Proposition~6.4]{VV} in general, and also from Theorem~\ref{thm:main-thm-2} when $X$ is projective. This proves ~\eqref{eqn:Degen1}. The projectivity of $\bdG _0([X/T])$ as an $R(T)$-module when $X$ is a smooth and projective toric variety is shown in \cite[Proposition~6.9]{VV} (see also Lemma~\ref{lem:linear-P}). The proof of Theorem~\ref{thm:main-thm-1} is now complete. $\hspace*{14.4cm} \hfil\square$ \begin{remk}\label{remk:SS} The spectral sequence ~\eqref{eqn:gen.weak.eq1} is essentially an Eilenberg-Moore type spectral sequence. A spectral sequence similar to the one in ~\eqref{eqn:gen.weak.eq1} was constructed by Merkurjev \cite{Merk} in the special case when $G$ is the trivial group. The construction of that spectral sequence is considerably more involved. This special case ($G = \{e\}$) of the above construction thus yields a completely different and simpler proof of Merkurjev's theorem in the setting of schemes with the action of split tori. \end{remk} \begin{remk}\label{remk:Baggio} It was shown by Baggio \cite{Bag} that there are examples of non-projective smooth toric varieties $X$ such that $\bdG _0([X/T])$ is a projective $R(T)$-module. This shows that there are smooth non-projective toric varieties for which the spectral sequence in Theorem ~\ref{thm:main-thm-1} degenerates. In all these cases, one obtains a complete description of the K-theory of the toric stack $[X/G]$.
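To make this description explicit (a sketch; the example of ${\mathbb P}^n_k$ below is ours and is not needed elsewhere): whenever the spectral sequence degenerates, it collapses to the isomorphism \[ \bdG _*([X/T]) {\underset{R(T)}\otimes} R(G) \xrightarrow{\ \cong\ } \bdG _*([X/G]). \] For instance, for $X = {\mathbb P}^n_k$ with its dense torus $T$, the projective bundle formula gives $\bdK _0([X/T]) \cong {R(T)[t]}/{\left((t-1)^{n+1}\right)}$, which is a free (in particular, projective) $R(T)$-module; hence the degeneration hypothesis of Theorem~\ref{thm:main-thm-1} holds and the displayed isomorphism computes $\bdK _*([{\mathbb P}^n_k/G])$ for every diagonalizable group $G$ mapping to $T$.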
We shall see in Section~\ref{subsection:WPS} that there are examples where the spectral sequence of Theorem ~\ref{thm:main-thm-1} degenerates even if $\bdG _0([X/T])$ is not a projective $R(T)$-module. \end{remk} \subsection{Grothendieck group of toric stacks}\label{subsection:BHR} In \cite{BH}, Borisov and Horja computed the Grothendieck ring $\bdK _0([X/G])$ when $[X/G]$ is a smooth toric Deligne-Mumford stack. Recall from \S~\ref{section:T-stacks} that the dense stacky torus of a Deligne-Mumford stack is of the form $T' \times \mathfrak{B}_{\mu}$, where $T'$ is a torus and $\mu$ is a finite abelian group. The following consequence of Theorem~\ref{thm:main-thm-1} generalizes the result of \cite{BH} to all smooth toric stacks, not necessarily Deligne-Mumford. Even in the Deligne-Mumford case, our proof is simpler. \begin{thm}\label{thm:BH} Let $\mathfrak{X} = [X/G]$ be a smooth and reduced toric stack associated to the data $\underline{X} = (X, G \xrightarrow{\phi} T)$. Let $\Delta$ be the fan defining $X$ and let $d$ be the number of rays in $\Delta$. Let $I^G_{\Delta}$ denote the ideal of the Laurent polynomial algebra ${\mathbb Z}[t^{\pm 1}_1, \cdots , t^{\pm 1}_d]$ generated by the relations: \begin{enumerate} \item $(t_{j_1}-1) \cdots (t_{j_l}-1), \ 1 \le j_p \le d$, such that the rays $\rho_{j_1}, \cdots , \rho_{j_l}$ do not span a cone of $\Delta$. \item $\left(\prod_{j=1}^{d} t_j^{\langle -\chi, v_j\rangle}\right) - 1, \ \chi \in (T/G)^{\vee}$. \end{enumerate} Then there is a ring isomorphism \begin{equation}\label{eqn:BH0} \phi: \frac{{\mathbb Z}[t^{\pm 1}_1, \cdots , t^{\pm 1}_d]}{I^G_{\Delta}} \xrightarrow{\cong} \bdK _0(\mathfrak{X}). \end{equation} \end{thm} \begin{proof} It follows from Theorem~\ref{thm:main-thm-1} that the map $\bdK _0([X/T]) {\underset{R(T)}\otimes} R(G) \xrightarrow{\cong} \bdK _0([X/G])$ is a ring isomorphism.
Since $G$ is a diagonalizable subgroup of $T$ ($[X/G]$ being reduced), the ring $R(G)$ is the quotient of $R(T)$ by the ideal $J^G_{\Delta} = \left(\chi - 1, \ \chi \in (T/G)^{\vee}\right)$ ({\sl cf.} Lemma~\ref{lem:Groupring}). This implies that \begin{equation}\label{eqn:BH1} \bdK _0([X/G]) \cong \frac{\bdK _0([X/T])}{J^G_{\Delta}\bdK _0([X/T])}. \end{equation} If we let $\Delta(1) = \{\rho_1, \cdots , \rho_d\}$, then for each $1 \le j \le d$, there is a unique $T$-equivariant line bundle $L_j$ on $X$ which has a $T$-equivariant section $s_{j} : X \to L_{j}$ whose zero locus is the orbit closure $V_j = \overline{O_{\rho_j}}$. Moreover, every character $\chi \in T^{\vee}$ acts on $\bdK _0([X/T])$ by multiplication with the element $\prod_{j=1}^{d} ([L_{j}])^{\langle \chi, v_j\rangle}$ ({\sl cf.} \cite[Proposition~4.3]{SU}). We conclude that there is a ring isomorphism \begin{equation}\label{eqn:BH2} \frac{\bdK _0([X/T])} {\left(\prod_{j=1}^{d} ([L^{\vee}_{j}])^{\langle -\chi, v_j\rangle} - 1, \ \chi \in (T/G)^{\vee} \right)} \xrightarrow{\cong} \bdK _0([X/G]). \end{equation} If $I^T_{\Delta}$ denotes the ideal of ${\mathbb Z}[t^{\pm 1}_1, \cdots , t^{\pm 1}_d]$ generated by the relations (1) above, then it follows from \cite[Theorem~6.4]{VV} that there is a ring isomorphism \begin{equation}\label{eqn:BH3} \frac{{\mathbb Z}[t^{\pm 1}_1, \cdots , t^{\pm 1}_d]}{I^T_{\Delta}} \xrightarrow{\cong} \bdK _0([X/T]). \end{equation} Setting $\phi(t_j) = [L^{\vee}_{j}]$, we obtain the isomorphism ~\eqref{eqn:BH0} by combining ~\eqref{eqn:BH2} and ~\eqref{eqn:BH3}. \end{proof} \begin{remk}\label{remk:non-red} If $[X/G]$ is not a reduced stack and there is an exact sequence \[ 0 \to H \to G \to F \to 0 \] with $F = {\rm Im}(\phi)$, then the stack $[X/G]$ is isomorphic to $[X/F] \times \mathfrak{B}_H$.
In this case, one obtains an isomorphism $\bdK _*([X/G]) \cong \bdK _*([X/F]) {\underset{R(F)}\otimes} R(G)$ ({\sl cf.} \cite[Lemma~5.6]{Thomason1}). In particular, if $H$ is a torus, one obtains $\bdK _*([X/G]) \cong \bdK _*([X/F]) {\underset{{\mathbb Z}}\otimes} R(H)$. Thus, the calculation of the K-theory of a (generically stacky) toric stack reduces easily to the case of reduced stacks. \end{remk} \section{A K{\"u}nneth formula and its consequences} \label{section:Kunneth} Our goal in this section is to prove Theorem~\ref{thm:main-thm-2} and give applications. We shall deduce this theorem from a K{\"u}nneth spectral sequence for the equivariant K-theory of actions of diagonalizable groups. A similar spectral sequence for topological K-theory was constructed a long time ago by Hodgkin \cite{Hodgkin} and Snaith \cite{Snaith}. A spectral sequence of this kind in the non-equivariant setting was constructed by the first author in \cite[Theorem~4.1]{J01}. \subsection{K{\"u}nneth formula}\label{subsection:KFormula} Suppose that $X$ and $X'$ are schemes acted upon by a linear algebraic group $G$. The flatness of $X$ and $X'$ over $k$ implies that the spectra $\bdG ([X/G])$ and $\bdG ([{X'}/G])$ are module spectra over the ring spectrum $\bdK ([{{\rm Spec \,}(k)}/G])$. This flatness also ensures that the external tensor product of coherent ${\mathcal O}$-modules induces a pairing $\bdG ([X/G]) \wedge \bdG ([{X'}/G]) \to \bdG ([(X{\times}{X'})/G])$, where the action of $G$ on $X{\times}{X'}$ is the diagonal action. This pairing is compatible with the structure of the above spectra as module spectra over the ring spectrum $\bdK ([{{\rm Spec \,}(k)}/G])$, so that one obtains the induced pairing: \[ p_1^* \wedge p_2^*: \bdG ([X/G]) {\overset{L}{\underset{\bdK ([{{\rm Spec \,}(k)}/ G])}\wedge}} \bdG ([{X'}/G]) \to \bdG ([(X \times {X'})/ G]). \] This is a map of ring spectra if $X$ and $X'$ are smooth.
\begin{prop}\label{prop:Kunneth-Linear-case} Let $T$ be a split torus and let $X, X'$ be in ${\mathcal V}_T$ such that $X$ is $T$-linear. Let $\phi: G \to T$ be a morphism of diagonalizable groups such that $G$ acts on $X$ and $X'$ via $\phi$. Then the natural map of spectra \begin{equation}\label{eqn:gen.weak.eq*0} \bdG ([X/G]) {\overset{L}{\underset{\bdK ([{{\rm Spec \,}(k)}/G])}\wedge}} \bdG ([X'/G]) \to \bdG ([(X\times X')/G]) \end{equation} is a weak equivalence. In particular, there exists a first quadrant spectral sequence \begin{equation}\label{eqn:MT2*1} E^2_{s,t} = {\rm Tor}^{\bdK ^G_*(k)}_{s,t}(\bdG _*([X/G]), \bdG _*([{X'}/G])) \Rightarrow \bdG _{s+t}([(X \times X')/G]). \end{equation} \end{prop} \begin{proof} We assume that $X$ is $T$-equivariantly $n$-linear for some $n \ge 0$. The proposition is proved by an ascending induction on $n$, along the same lines as the proof of Proposition~\ref{prop:Linear-case}. We sketch the argument. If $n = 0$, then $X$ is isomorphic to an affine space, and hence by the homotopy invariance we can assume that $X = {\rm Spec \,}(k)$, in which case the result is immediate. We now assume that $n > 0$. By the definition of $T$-linearity, there are two cases to consider: \begin{enumerate} \item There exists a $T$-invariant closed subscheme $Y$ of $X$ with complement $U$ such that $Y$ and $U$ are $T$-equivariantly $(n-1)$-linear. \item There exists a $T$-scheme $Z$ which contains $X$ as a $T$-invariant open subscheme such that $Z$ and $Y = Z \setminus X$ are $T$-equivariantly $(n-1)$-linear.
\end{enumerate} In the first case, the localization fiber sequence in equivariant G-theory gives us a commutative diagram of fiber sequences in the homotopy category of spectra: \[ \xymatrix{ {\bdG ([X'/G]){\overset{L}{\underset{\bdK ^G}\wedge}} \bdG ([Y/G])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdG ([X'/G]){\overset{L}{\underset{\bdK ^G}\wedge}} \bdG ([X/G])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdG ([X'/G]){\overset{L}{\underset{\bdK ^G}\wedge}} \bdG ([U/G])} \ar@<-1ex>[d] \\ {\bdG ([(Y \times X')/G])} \ar@<1ex>[r] & {\bdG ([(X \times X')/G])} \ar@<1ex>[r] & {\bdG ([(U \times X')/G])}.} \] The left and the right vertical maps are weak equivalences by the induction on $n$. We conclude that the middle vertical map is a weak equivalence. The second case is proved in the same way, where we now use induction on $Y$ and $Z$ (see the proof of Proposition~\ref{prop:Linear-case}). The existence of the spectral sequence now follows along standard lines (see for example, \cite[Theorem IV.4.1]{EKMM}). \end{proof} \begin{comment} In the second case, we obtain as before a commutative diagram of fiber sequences in the homotopy category of spectra \[ \xymatrix{ {\bdG ([X'/G]){\overset{L}{\underset{\bdK ^G}\wedge}} \bdG ([Y/G])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdG ([X'/G]){\overset{L}{\underset{\bdK ^G}\wedge}} \bdG ([Z/G])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdG ([X'/G]){\overset{L}{\underset{\bdK ^G}\wedge}} \bdG ([X/G])} \ar@<-1ex>[d] \\ {\bdG ([(Y \times X')/G])} \ar@<1ex>[r] & {\bdG ([(Z \times X')/G])} \ar@<1ex>[r] & {\bdG ([(X \times X')/G])}.} \] The first two vertical maps are weak equivalences by induction on $n$ and hence the last vertical map must also be a weak equivalence.
\end{comment} \begin{remk}\label{remk:subtorus-case} As an application of Proposition~\ref{prop:Kunneth-Linear-case}, one can obtain another proof of the special case of the spectral sequence ~\eqref{eqn:gen.weak.eq1} in which $G$ is a closed subgroup of $T$. This is done by taking $G = T$ and $X' = T/G$ in ~\eqref{eqn:MT2*1} and using the Morita weak equivalences $\bdG ([{X'}/T]) \cong \bdG ([{{\rm Spec \,}(k)}/G])$ and $\bdG ([(X \times X')/T]) \cong \bdG ([X/G])$. Notice that $X' = T/G$ is $T$-linear by Proposition~\ref{prop:lin-elem}. \end{remk} \begin{cor}[K{\"u}nneth decomposition]\label{cor:Kunneth-dec} Let $T$ be a split torus over $k$, let $X$ be a $T$-linear scheme, and let $G$ be a diagonalizable group acting on $X$ via a morphism $\phi: G \to T$. Then the class of the diagonal $[\Delta] \in \bdG _0([(X \times X)/G])$ admits a strong K{\"u}nneth decomposition, i.e., it may be written as $\sum_{i=1}^{n} p_1^*(\alpha_i) \otimes p_2^*(\beta_i)$, where $\alpha_i, \beta_i \in \bdG _0([X/G])$. \end{cor} \begin{proof} The spectral sequence of Proposition~\ref{prop:Kunneth-Linear-case} shows in general that \begin{equation}\label{eqn:Ku-0} \bdG _0([(X \times X') / G]) \cong \bdG _0([X/G]){\underset{R(G)}\otimes} \bdG _0([{X'}/G]). \end{equation} The K{\"u}nneth decomposition now follows by taking $X = X'$. \end{proof} \vskip .3cm {\bf{Proof of Theorem~\ref{thm:main-thm-2}:}} Let $X$ be a smooth and projective $T$-linear scheme. Since the group $G$ is diagonalizable, we can apply \cite[Lemma~5.6]{Thomason1} to obtain the isomorphism \begin{equation}\label{eqn:main-2*0} R(G) {\underset{{\mathbb Z}}\otimes} \bdK _*(k) \xrightarrow{\cong} \bdK ^G_*(k), \end{equation} which provides the first isomorphism of ~\eqref{eqn:main-2-0}. Since $X$ is smooth, we can identify $\bdG _*([X/G])$ with $\bdK _*([X/G])$. Let $[x] \in \bdK _*([X/G])$. Then $[x]= p_{1*} (\Delta \circ p_2^*([x]))$.
Now we use the K{\"u}nneth decomposition for $\Delta$ obtained in Corollary~\ref{cor:Kunneth-dec} and the projection formula (since $X$ is projective) to identify the last term with $\sum_{i=1}^{n} \alpha _i \circ p_{1*}p_2^*(\beta _i\circ [x])$. The Cartesian square \[ \xymatrix@C2pc{ X\times X \ar[r]^>>>>>>{p_2} \ar[d]_{p_1} & X \ar[d]^{p'_1} \\ X \ar[r]_<<<<<<{p'_2} & {\rm Spec \,}(k)} \] and the flat base-change for the equivariant G-theory show that $p_{1*}( p_2^*(\beta_i \circ [x]))$ identifies with ${p_{2}'}^*{p_1'}_*(\beta _i \circ [x])$, so that \begin{equation}\label{eqn:surj.diag} [x]= \sum_{i=1}^{n} \alpha _i \circ {p_2'}^*({p_1'}_*(\beta _i \circ [x])). \end{equation} The class ${p_1'}_*(\beta _i \circ [x])$ lies in $\bdG ([{{\rm Spec \,}(k)}/G])$. It follows that the classes $\{\alpha_i \}$ generate $\bdG _*([X/G])$ as a module over $\bdK ^G_*(k)$. This shows that the map in question is surjective. Next we prove the injectivity of the map $\rho$. The key is the following diagram: \begin{equation}\label{eqn:inj.dig} \xymatrix@C2pc{ \bdK _*([X/G]) \ar@<1ex>[dr]_{\mu} & {\bdK _0([X/G]) {\underset{\bdK ^G_0(k)}\otimes} \bdK ^G_*(k)} \ar[l]_<<<<<<{\rho} \ar@<1ex>^{\alpha}[d] \\ & {{{\rm Hom}}_{\bdK ^G_0(k)}(\bdK _0([X/G]), \bdK ^G_*(k))}} \end{equation} where $\alpha(x \otimes y)$ is the map $x' \mapsto f_*(x' \circ x) \circ y$ and $\mu(x)$, for $x \in \bdK _*([X/G])$, is the map $x' \mapsto f_*(x' \circ x)$. Here, $f$ denotes the projection map $X \to {\rm Spec \,}(k)$ and $x' \circ x$ denotes the product in the ring $\bdK _*([X/G])$. The commutativity of the above diagram is an immediate consequence of the projection formula: observe that $\rho(x \otimes y) = x \circ f^*(y)$. Therefore, to show that $\rho$ is injective, it suffices to show that the map $\alpha$ is injective.
For this, we define a map $\beta$ which is a splitting for $\alpha$, as follows. If $\phi \in {{\rm Hom}}_{\bdK ^G_0(k)}(\bdK _0([X/G]), \bdK ^G_*(k))$, we let $\beta(\phi) = \sum_{i=1}^{n} \alpha_{i} \otimes \phi(\beta_{i})$. Observe that \[ \beta (\alpha (x \otimes y)) = \beta \left(x' \mapsto f_*(x' \circ x) \circ y\right) = \left(\sum_{i=1}^{n} \alpha_{i} \otimes f_*(\beta_{i} \circ x)\right) \circ y. \] We next observe that $f_*( \beta_{i} \circ x) \in \bdK ^G_0(k)$, so that we may write the last term as $\left(\sum_{i=1}^{n} \alpha_{i} \circ f^*f_*( \beta_{i} \circ x)\right) \otimes y$. By ~\eqref{eqn:surj.diag}, the last term equals $x \otimes y$. This proves that $\alpha$ is injective, and hence so is $\rho$. This completes the proof. $\hspace*{3cm} \hfil \square$ \vskip .4cm The following result generalizes ~\eqref{eqn:gen.weak.eq2} to a bigger class of schemes. \begin{cor}\label{cor:Base-change} Let $T$ be a split torus over $k$ and let $X$ be a smooth and projective $T$-linear scheme. Let $\phi: G \to T$ be a morphism of diagonalizable groups such that $G$ acts on $X$ via $\phi$. Then the map \[ \bdK _*([X/T]) {\underset{R(T)}\otimes} R(G) \to \bdK _*([X/G]) \] is an isomorphism. In particular, $\bdK _0([X/G])$ is a free $R(G)$-module $($and hence a free ${\mathbb Z}$-module$)$ if $X$ is $T$-cellular.
\end{cor} \begin{proof} To prove the first part of the corollary, we trace through the sequence of isomorphisms: \[ \begin{array}{lll} \bdK _*([X/T]) {\underset{R(T)}\otimes} R(G) & {\cong} & \left(\bdK ^T_*(k) {\underset{R(T)}\otimes} \bdK _0([X/T])\right) {\underset{R(T)}\otimes} R(G) \\ & {\cong} & \left(\bdK _*(k) {\underset{{\mathbb Z}}\otimes} R(T)\right) {\underset{R(T)}\otimes} \left( \bdK _0([X/T]) {\underset{R(T)}\otimes} R(G)\right) \\ & {\cong}^{\dag} & \bdK _*(k) {\underset{{\mathbb Z}}\otimes} \bdK _0([X/G]) \\ & {\cong} & \bdK _*([X/G]). \end{array} \] The first and the last isomorphisms in this sequence follow from Theorem~\ref{thm:main-thm-2}, and the isomorphism ${\cong}^{\dag}$ follows from Theorem~\ref{thm:main-thm-1}. This proves the first part of the corollary. If $X$ is $T$-cellular, the freeness of $\bdK _0([X/G])$ as an $R(G)$-module follows from Lemma~\ref{lem:linear-P}. \end{proof} \begin{remk}\label{remk:freeness} In the special case when $[X/G]$ is a smooth toric Deligne-Mumford stack (with $X$ projective), the freeness of $\bdK _0([X/G])$ as a ${\mathbb Z}$-module was earlier shown in \cite[Theorem~2.2]{Hua} and independently in \cite{GHHKK} using symplectic methods. It is known ({\sl cf.} \cite[Example~4.1]{Hua}) that the freeness property may fail if $X$ is not projective. \end{remk} \subsection{K-theory of weighted projective spaces} \label{subsection:WPS} In the past, there have been many attempts to study the K-theory and Chow rings of weighted projective spaces. However, there are only a few explicit computations in this regard. We end this section with an explicit description of the integral higher K-theory of stacky weighted projective spaces. These are examples of toric stacks where the spectral sequence ~\eqref{eqn:gen.weak.eq1} degenerates even though $\bdK_0([X/T])$ is not a projective $R(T)$-module.
We also describe the rational higher G-theory of weighted projective schemes as another application of Theorem~\ref{thm:main-thm-2}. \subsubsection{Weighted projective spaces} Let $\underline{q} = \{q_0, \cdots , q_n\}$ be an ordered set of positive integers and let $d = \gcd(q_0, \cdots , q_n)$. This ordered set gives rise to a morphism of tori $\phi: {\mathbb G}_m \to ({\mathbb G}_m)^{n+1}$ given by $\phi(\lambda) = (\lambda^{q_0}, \cdots, \lambda^{q_n})$. The (stacky) weighted projective space ${\mathbb P}(q_0, \cdots ,q_n)$ is the stack $[{({\mathbb A}^{n+1}_k \setminus \{0\})}/{{\mathbb G}_m}]$, where ${\mathbb G}_m$ acts on ${\mathbb A}^{n+1}_k$ by $\lambda \cdot (a_0, \cdots , a_n) = (\lambda^{q_0}a_0, \cdots , \lambda^{q_n} a_n)$. Notice that ${\mathbb A}^{n+1} \setminus \{0\}$ is a toric variety with dense torus $T = ({\mathbb G}_m)^{n+1}$ acting by coordinate-wise multiplication. We see that ${\mathbb P}(\underline{q})$ is the toric stack associated to the data $(({\mathbb A}^{n+1}_k \setminus \{0\}), {\mathbb G}_m \xrightarrow{\phi} T)$. It is known that ${\mathbb P}(\underline{q})$ is a Deligne-Mumford toric stack, and that it is reduced (an orbifold) if and only if $d = 1$. \subsubsection{{\rm K}-theory of ${\mathbb P}(\underline{q})$} To describe the higher K-theory of ${\mathbb P}(\underline{q})$, we consider ${\mathbb A}^{n+1}$ as the toric variety with dense torus $T = ({\mathbb G}_m)^{n+1}$ acting by coordinate-wise multiplication. Let $V$ be the $(n+1)$-dimensional representation of $T$ which represents ${\mathbb A}^{n+1}$ as the toric variety. Let $\iota: {\rm Spec \,}(k) \to {\mathbb A}^{n+1}$ and $j: U \to {\mathbb A}^{n+1}$ be the $T$-invariant closed and open inclusions, where we set $U = {\mathbb A}^{n+1} \setminus \{0\}$. Observe that $V$ is the $T$-equivariant normal bundle of ${\rm Spec \,}(k)$ sitting inside ${\mathbb A}^{n+1}$ as the origin.
We have the localization exact sequence \begin{equation}\label{eqn:WPS0} \cdots \to \bdK_i([{{\rm Spec \,}(k)}/{{\mathbb G}_m}]) \xrightarrow{\iota_*} \bdK_i([{{\mathbb A}^{n+1}}/{{\mathbb G}_m}]) \xrightarrow{j^*} \bdK_i([{U}/{{\mathbb G}_m}]) \to \cdots . \end{equation} Our first claim is that this sequence splits into short exact sequences \begin{equation}\label{eqn:WPS1} 0 \to \bdK_i([{{\rm Spec \,}(k)}/{{\mathbb G}_m}]) \xrightarrow{\iota_*} \bdK_i([{{\mathbb A}^{n+1}}/{{\mathbb G}_m}]) \xrightarrow{j^*} \bdK_i([{U}/{{\mathbb G}_m}]) \to 0 \end{equation} for each $i \ge 0$. By \cite[Proposition~4.3]{VV}, it suffices to show that $\lambda_{-1}(V) = \sum_{i=0}^{n+1} (-1)^i[\wedge^i(V)]$ is not a zero-divisor in the ring $\bdK_*([{{\rm Spec \,}(k)}/{{\mathbb G}_m}])$. However, we can write $V = \oplus_{i=0}^{n} V_i$, where ${\mathbb G}_m$ acts on $V_i \cong k$ by $\lambda \cdot v = \lambda^{q_i}v$. Since each $q_i$ is positive, no irreducible factor of $V$ is trivial. It follows from \cite[Lemma~4.2]{VV} that $\lambda_{-1}(V)$ is not a zero-divisor in the ring $\bdK_*([{{\rm Spec \,}(k)}/{{\mathbb G}_m}])$, and hence ~\eqref{eqn:WPS1} is exact. This proves the claim. We can now use ~\eqref{eqn:WPS1} to compute $\bdK_*([U/{{\mathbb G}_m}])$. We first observe that the map $\bdK_*([{{\rm Spec \,}(k)}/{{\mathbb G}_m}]) \to \bdK_*([{{\mathbb A}^{n+1}}/{{\mathbb G}_m}])$ induced by the structure map is an isomorphism by the homotopy invariance. So we can identify the middle term of ~\eqref{eqn:WPS1} with $\bdK_i([{{\rm Spec \,}(k)}/{{\mathbb G}_m}])$. Furthermore, it follows from the self-intersection formula (\cite[Theorem~2.1]{VV}) that the map $\iota_*$ is multiplication by $\lambda_{-1}(V)$ under this identification. Since $V = \oplus_{i=0}^{n} V_i$, we get $\lambda_{-1}(V) = \prod_{i=0}^{n} \lambda_{-1}(V_i)$.
Furthermore, since the class of $V_i$ in $R({\mathbb G}_m) = {\mathbb Z}[t^{\pm 1}]$ is $t^{q_i}$, we see that $\lambda_{-1}(V_i) = 1 - t^{q_i}$. We conclude that $\lambda_{-1}(V) = \prod_{i=0}^{n} (1 - t^{q_i})$. We have thus proven: \begin{thm}\label{thm:WPS-main} There is a ring isomorphism \[ \frac{\bdK_*(k)[t^{\pm 1}]}{\left(\prod_{i=0}^{n} (1 - t^{q_i})\right)} \xrightarrow{\cong} \bdK_*({\mathbb P}(\underline{q})). \] \end{thm} \begin{remk}\label{remk:Non-proj} In the above calculations, we can replace ${\mathbb G}_m$ by the dense torus $T$ to get a similar formula. In this case, the exact sequence ~\eqref{eqn:WPS1} shows that $\bdK_0([({\mathbb A}^{n+1} \setminus \{0\})/T])$ is a quotient of $R(T)$ and hence is not a projective $R(T)$-module. \end{remk} \subsubsection{{\rm G}-theory of the weighted projective scheme} The weighted projective scheme is the scheme-theoretic quotient of ${\mathbb A}^{n+1} \setminus \{0\}$ by the above action of ${\mathbb G}_m$. This is the coarse moduli scheme of ${\mathbb P}(\underline{q})$, which we shall denote by $\widetilde{{\mathbb P}(\underline{q})}$. It is known that this is a normal (but in general singular) projective scheme. No computation of the higher G-theory or K-theory of this schematic weighted projective space was available before. As an application of Theorem~\ref{thm:main-thm-2}, we now give a simple description of the rational higher G-theory of $\widetilde{{\mathbb P}(\underline{q})}$. We still do not know how to compute its K-theory. In order to describe the higher G-theory of $\widetilde{{\mathbb P}(\underline{q})}$, we shall use the following presentation of this scheme, which allows us to use our main results. We assume that the characteristic of $k$ does not divide any $q_i$.
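As a quick consistency check for Theorem~\ref{thm:WPS-main} (our remark, not needed in what follows): take $\underline{q} = (1, \cdots, 1)$, so that ${\mathbb P}(\underline{q})$ is the ordinary projective space ${\mathbb P}^n_k$. The theorem then gives \[ \bdK_*({\mathbb P}^n_k) \cong \frac{\bdK_*(k)[t^{\pm 1}]}{\left((1 - t)^{n+1}\right)} \cong \frac{\bdK_*(k)[t]}{\left((1 - t)^{n+1}\right)}, \] where the second isomorphism holds because $t = 1 - (1-t)$ is already invertible modulo $\left((1-t)^{n+1}\right)$, the element $1-t$ being nilpotent there. This is the answer predicted by the projective bundle formula.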
The torus $T = {\mathbb G}^n_m$ acts on ${\mathbb P}^n_k$ as the dense open torus by $(\lambda_1, \cdots , \lambda_n) \star [z_0, \cdots , z_n] = [z_0, \lambda_1 z_1, \cdots , \lambda_n z_n]$. Let $G = \mu_{q_0} \times \cdots \times \mu_{q_n}$ be the product of finite cyclic groups. Then $G$ acts on ${\mathbb P}^n_k$ by $(a_0, \cdots , a_n) \bullet [z_0, \cdots , z_n] = [a_0z_0, \cdots , a_nz_n]$. It is then easy to see that $\widetilde{{\mathbb P}(\underline{q})}$ is isomorphic to the scheme ${{\mathbb P}^n_k}/G$. Define $\phi : G \to T$ by $\phi(a_0, \cdots , a_n) = ({a_1}/{a_0}, \cdots , {a_n}/{a_0})$. Then one checks that \[ \begin{array}{lll} H := {\rm Ker}(\phi) & = & \{(a_0, \cdots , a_n) \in G \ | \ a_0 = \cdots = a_n\} \\ & = & \{\lambda \in {\mathbb G}_m \ | \ \lambda^{q_0} = 1 = \cdots = \lambda^{q_n}\} \\ & = & \{\lambda \in {\mathbb G}_m \ | \ \lambda^d = 1\} \\ & \cong & \mu_d. \end{array} \] Moreover, it is easy to see that \[ \begin{array}{lll} (a_0, \cdots , a_n) \bullet [z_0, \cdots , z_n] & = & [a_0z_0, \cdots , a_nz_n] \\ & = & [a^{-1}_0(a_0z_0), \cdots , a^{-1}_0(a_nz_n)] \\ & = & [z_0, ({a_1}/{a_0})z_1, \cdots , ({a_n}/{a_0})z_n] \\ & = & \phi(a_0, \cdots , a_n) \star [z_0, \cdots , z_n]. \end{array} \] In particular, $G$ acts on ${\mathbb P}^n_k$ through $\phi$. We conclude that $\mathfrak{X} = [{\mathbb P}^{n}_k/G]$ is a smooth toric Deligne-Mumford stack associated to the data $({\mathbb P}^n_k, G \xrightarrow{\phi} T)$ and there is an isomorphism $\mathfrak{X} \cong [{{\mathbb P}^n_k}/F] \times {\mathfrak{B}}_{\mu_d}$, where $F = {\rm Im}(\phi)$. \begin{thm}\label{thm:WPS-K-th} There is a ring isomorphism \begin{equation}\label{eqn:WPS-K0} \bdK _*(k) {\underset{{\mathbb Z}}\otimes} \frac{{\mathbb Z}[t, t_0, \cdots , t_n]} {((t-1)^{n+1}, t^{q_0}_0-1, \cdots , t^{q_n}_n-1)} \xrightarrow{\cong} \bdK _*(\mathfrak{X}).
\end{equation} \end{thm} \begin{proof} It follows from Corollary~\ref{cor:Base-change} and Theorem~\ref{thm:main-thm-2} that there is a ring isomorphism \[ \bdK _*(k) {\underset{{\mathbb Z}}\otimes} \bdK _0([{{\mathbb P}^n_k}/T]) {\underset{R(T)}\otimes} R(G) \xrightarrow{\cong} \bdK _*(\mathfrak{X}). \] On the other hand, the projective bundle formula implies that the left side of this isomorphism is the same as $\bdK _*(k) {\underset{{\mathbb Z}}\otimes} \frac{R(T)[t]}{((t-1)^{n+1})} {\underset{R(T)}\otimes} R(G)$, which in turn is isomorphic to $\bdK _*(k) {\underset{{\mathbb Z}}\otimes} \frac{R(G)[t]}{((t-1)^{n+1})}$. The theorem now follows from the isomorphism $R(G) \cong \frac{{\mathbb Z}[t_0, \cdots , t_n]}{(t^{q_0}_0-1, \cdots , t^{q_n}_n-1)}$. \end{proof} \begin{cor} There is an isomorphism \[ \frac{\bdG_*(k)[t]}{((t-1)^{n+1})} \xrightarrow{\cong} \bdG_*\left(\widetilde{{\mathbb P}(\underline{q})}\right) \] with rational coefficients. \end{cor} \begin{proof} All the groups in this proof will be considered with rational coefficients. Let $\pi: {\mathbb P}^{n}_k \to \widetilde{{\mathbb P}(\underline{q})}$ be the quotient map. The assignment ${\mathcal F} \mapsto \left(\pi_*({\mathcal F})\right)^G$ defines a covariant functor from the category of $G$-equivariant coherent sheaves on ${\mathbb P}^{n}_k$ to the category of ordinary coherent sheaves on $\widetilde{{\mathbb P}(\underline{q})}$. Since the characteristic of $k$ does not divide the order of $G$, this functor is exact and gives a push-forward map $\pi_*: {\bdG}^G_*({\mathbb P}^{n}_k) \to \bdG_*\left(\widetilde{{\mathbb P}(\underline{q})}\right)$. Let ${\rm CH}^G_*({\mathbb P}^{n}_k)$ denote the equivariant higher Chow groups of ${\mathbb P}^{n}_k$ (\cite{EG1}).
By \cite[Theorem~3]{EG1}, there is a push-forward map $\overline{\pi}_*: {\rm CH}^G_*({\mathbb P}^{n}_k) \to {\rm CH}_*\left(\widetilde{{\mathbb P}(\underline{q})}\right)$ which is an isomorphism. It follows from \cite[Theorem~9.8, Lemma~9.1]{Krishna0} (see also \cite[Theorem~3.1]{EG2}) that there is a commutative diagram \[ \xymatrix@C3pc{ {\bdG}^G_*({\mathbb P}^{n}_k) {\underset{R(G)}\otimes} {\mathbb Q} \ar[r]^>>>>>>{\tau_G} \ar[d]_{\pi_*} & {\rm CH}^G_*({\mathbb P}^{n}_k) \ar[d]^{\overline{\pi}_*}_{\cong} \\ \bdG_*\left(\widetilde{{\mathbb P}(\underline{q})}\right) \ar[r]_{\tau} & {\rm CH}_*\left(\widetilde{{\mathbb P}(\underline{q})}\right),} \] where the horizontal arrows are the Riemann-Roch maps, which are isomorphisms (\cite[Theorem~8.6]{Krishna0}). It follows that the left vertical arrow is an isomorphism. The corollary now follows by combining this isomorphism with Theorem~\ref{thm:WPS-K-th}. \end{proof} \section{Toric stack bundles and the stacky Leray-Hirsch theorem} \label{section:TS-bundle} Toric bundle schemes and their cohomology were first studied by Sankaran and Uma in \cite{SU}, who computed the Grothendieck group of a toric bundle over a smooth base scheme. Jiang \cite{Jiang} studied smooth and simplicial Deligne-Mumford toric stack bundles over schemes and computed their Chow rings. These bundles are relative analogues of toric Deligne-Mumford stacks. A description of the Grothendieck group of toric Deligne-Mumford stack bundles was given by Jiang and Tseng in \cite{JTseng2}. In this section, we give a general definition of toric stack bundles over a base scheme in such a way that every fiber of the bundle is a (generically stacky) toric stack in the sense of \cite{GSI}. We prove a stacky version of the Leray-Hirsch theorem for the algebraic K-theory of stack bundles. This Leray-Hirsch theorem will be used in the next section to describe the higher K-theory of toric stack bundles.
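Before giving the general definition, it may help to keep in mind the most classical instance (an illustration only; the line bundle $L$ below is our notation): take $T = {\mathbb G}_m$, let $X = {\mathbb P}^1_k$ with the standard $T$-action, let $G$ be trivial, and let $p: E \to B$ be the principal ${\mathbb G}_m$-bundle obtained by removing the zero section from a line bundle $L$ on $B$. The associated bundle $E \stackrel{T}{\times} {\mathbb P}^1_k \to B$ is then (with suitable conventions) the ${\mathbb P}^1$-bundle ${\mathbb P}({\mathcal O}_B \oplus L) \to B$; allowing a nontrivial diagonalizable group $G \to T$ replaces each fiber ${\mathbb P}^1_k$ of this bundle by the quotient stack $[{\mathbb P}^1_k/G]$.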
\subsection{Toric stack bundles}\label{subsection:TS-bun-def} Let $T$ be a split torus of rank $n$ and let $X$ be a scheme with a $T$-action. Let $G$ be a diagonalizable group over $k$ and let $\phi: G \to T$ be a morphism of algebraic groups over $k$. Let $p: E \to B$ be a principal $T$-bundle over a scheme $B$. Let $G$ act on $E \times X$ by $g(e,x) = (e,gx):= (e, \phi(g)x)$ and let $T$ act on $E \times X$ via the diagonal action. It is easy to see that these two actions commute and that the projection map $E \times X \to E$ is equivariant with respect to these actions. The commutativity of the actions ensures that the $G$-action descends to the quotients $E(X) : = E \stackrel{T}{\times} X$ and $E/T = B$ such that the induced map of quotients $\overline{p} : E(X) \to B$ is $G$-equivariant. Since $E$ has trivial $G$-action, so does $B$, and we see that $G$ acts on $E(X)$ fiberwise and the map $\overline{p}$ canonically factors through the stack quotient $\pi: [E(X)/G] \to B$. Notice that $E$ is a Zariski locally trivial $T$-bundle and so are $E(X) \to B$ and $[E(X)/G] \to B$. Setting $\mathfrak{X} = [E(X)/G]$, we conclude that the map $\pi: \mathfrak{X} \to B$ is a Zariski locally trivial fibration each of whose fibers is the stack $[X/G]$. The morphism $\pi$ will be called a {\sl stack bundle} over $B$. If $X$ is a toric variety with dense torus $T$, then $\pi: \mathfrak{X} \to B$ will be called a {\sl toric stack bundle} over $B$. In this case, each fiber of $\pi$ is the toric stack $[X/G]$ in the sense of \cite{GSI}. If $[X/G]$ is a Deligne-Mumford stack, this construction recovers the notion of toric stack bundles used in \cite{Jiang} and \cite{JTseng2}. \subsection{Leray-Hirsch Theorem for stack bundles} \label{subsection:LRT} First we prove the following lemma.
\begin{lem}\label{lem:linear-P} Let $X$ be a $T$-equivariantly cellular scheme with the $T$-equivariant cellular decomposition \begin{equation}\label{eqn:linear-P*} \emptyset = X_{n+1} \subsetneq X_n \subsetneq \cdots \subsetneq X_1 \subsetneq X_0 = X \end{equation} and let $U_i = X\setminus X_i$ for $0 \le i \le n+1$. Let $G$ be a diagonalizable group provided with a morphism of algebraic groups $\phi: G \to T$. Then for any $0 \le i \le n$, the sequence \begin{equation}\label{eqn:linear-P0} 0 \to \bdG ^G_*\left(U_{i+1} \setminus U_i\right) \to \bdG ^G_*(U_{i+1}) \to \bdG ^G_*(U_i) \to 0 \end{equation} is exact. In particular, $\bdG ^G_0(X)$ is a free $R(G)$-module of rank equal to the number of $T$-invariant affine cells in $X$, with basis given by the classes of the structure sheaves of the cell closures. \end{lem} \begin{proof} To prove the exactness part of the lemma, we first make the following claim. Suppose $X$ is a $G$-scheme and $j:U \hookrightarrow X$ is a $G$-invariant open inclusion with complement $Y$. Suppose that $U$ is isomorphic to a representation of $G$. Then the localization sequence \begin{equation}\label{eqn:linear-P1} 0 \to \bdG ^G_*(Y) \to \bdG ^G_*(X) \xrightarrow{j^*} \bdG ^G_*(U) \to 0 \end{equation} is (split) short exact. To prove the claim, let $\alpha: X \to {\rm Spec \,}(k)$ and $\beta: U \to {\rm Spec \,}(k)$ be the structure maps (which are $G$-equivariant) so that $\beta = \alpha \circ j$. The homotopy invariance of equivariant K-theory shows that $\beta^*$ is an isomorphism. Let $\gamma = \alpha^* \circ (\beta^*)^{-1}$. Then one checks that $\gamma$ is a section of $j^*$, and hence the localization sequence splits into short exact sequences. This proves the claim. We shall prove ~\eqref{eqn:linear-P0} by induction on the number of $T$-invariant affine cells in $X$. For $i = 0$, ~\eqref{eqn:linear-P0} is immediate.
So we assume $i \ge 1$ and consider the commutative diagram: \begin{equation}\label{eqn:linear-P2} \xymatrix@C1pc{ & 0 \ar[d] & 0 \ar[d] & 0 \ar[d] & \\ 0 \ar[r] & \bdG ^G_*(X_i \setminus X_{i+1}) \ar[r] \ar@{=}[d] & \bdG ^G_*(X_1 \setminus X_{i+1}) \ar[r] \ar[d] & \bdG ^G_*(X_1 \setminus X_{i}) \ar[r] \ar[d] & 0 \\ 0 \ar[r] & \bdG ^G_*(X_i \setminus X_{i+1}) \ar[r] & \bdG ^G_*(X \setminus X_{i+1}) \ar[r] \ar[d] & \bdG ^G_*(X \setminus X_{i}) \ar[r] \ar[d] & 0 \\ & & \bdG ^G_*(X \setminus X_1) \ar@{=}[r] \ar[d] & \bdG ^G_*(X \setminus X_1) \ar[r] \ar[d] & 0 \\ & & 0 & 0.} \end{equation} The top row is exact by induction on the number of affine cells, since $X_1$ is $T$-equivariantly cellular with fewer cells. The two columns are exact by the above claim. It follows that the middle row is exact, which proves ~\eqref{eqn:linear-P0}. To prove the last (freeness) assertion, we apply ~\eqref{eqn:linear-P1} to the inclusion $X_1 \subset X$ and see that $\bdG ^G_0(X) \cong \bdG ^G_0(X_1) \oplus R(G)$. An induction on the number of affine cells now finishes the proof. \end{proof} \begin{prop}\label{prop:linear} Let $X$ be a $T$-equivariantly cellular scheme and let $B$ be any scheme with trivial $T$-action. Then the external product map \begin{equation}\label{eqn:linear1} \bdG _*(B) {\otimes}_{{\mathbb Z}} \bdG ^G_0(X) \to \bdG ^G_*(B \times X) \end{equation} is an isomorphism. In particular, the natural map $\bdK _*(k) {\otimes}_{{\mathbb Z}} \bdG ^G_0(X) \to \bdG ^G_*(X)$ is an isomorphism. \end{prop} \begin{proof} Since the map \begin{equation}\label{eqn:linear1*} \bdG _*(B) \otimes_{{\mathbb Z}} R(G) \xrightarrow{\cong} \bdG ^G_*(B) \end{equation} is an isomorphism ({\sl cf.} \cite[Lemma~5.6]{Thomason1}), the proposition is equivalent to the assertion that the map \begin{equation}\label{eqn:linear1*0} \bdG ^G_*(B) {\otimes}_{R(G)} \bdG ^G_0(X) \to \bdG ^G_*(B \times X) \end{equation} is an isomorphism.
Consider the cellular decomposition of $X$ as in Lemma~\ref{lem:linear-P}. Then each $U_i = X\setminus X_i$ is also a $T$-equivariantly cellular scheme. It suffices to show by induction on $i \ge 0$ that ~\eqref{eqn:linear1*0} holds when $X$ is any of these $U_i$'s. There is nothing to prove for $i = 0$, and the case $i = 1$ follows by homotopy invariance since $U_1$ is an affine space. To prove the general case, we use the short exact sequence \begin{equation}\label{eqn:linear2} 0 \to \bdG ^G_0\left(U_{i+1} \setminus U_i\right) \to \bdG ^G_0(U_{i+1}) \to \bdG ^G_0(U_i) \to 0 \end{equation} given by Lemma~\ref{lem:linear-P}. This sequence splits, since each $\bdG ^G_0(U_i)$ was shown to be free over $R(G)$ in Lemma~\ref{lem:linear-P}. Tensoring this with $\bdG ^G_*(B)$ over $R(G)$, we obtain a commutative diagram \[ \xymatrix@C1pc{ 0 \ar[r] & \bdG ^G_*(B) {\otimes} \bdG ^G_0\left(U_{i+1} \setminus U_i\right) \ar[r] \ar[d] & \bdG ^G_*(B) {\otimes} \bdG ^G_0(U_{i+1}) \ar[r] \ar[d] & \bdG ^G_*(B) {\otimes} \bdG ^G_0(U_i) \ar[r] \ar[d] & 0 \\ \ar[r] & \bdG ^G_*(B \times (U_{i+1} \setminus U_i)) \ar[r]_{\ \ \ i_*} & \bdG ^G_*(B \times U_{i+1}) \ar[r]_{j^*} & \bdG ^G_*(B \times U_i) \ar[r] &} \] where the top row remains exact since the short exact sequence in ~\eqref{eqn:linear2} is split. The bottom row is the localization exact sequence. The left vertical arrow is an isomorphism by homotopy invariance and the right vertical arrow is an isomorphism by induction. In particular, $j^*$ is surjective in all indices. We conclude that $i_*$ is injective in all indices and the middle vertical arrow is an isomorphism. \end{proof} \begin{thm}$($Stacky Leray-Hirsch theorem$)$\label{thm:LHT} Suppose that $k$ is a perfect field and $B$ is a smooth scheme over $k$. Let $X$ be a $T$-equivariantly cellular scheme.
Let $\mathfrak{F} \xrightarrow{i} \mathfrak{X} \xrightarrow{\pi} B$ be a Zariski locally trivial stack bundle (\S~\ref{subsection:TS-bun-def}) each of whose fibers is a smooth stack $\mathfrak{F}$ of the form $[X/G]$. Assume that there are elements $\{e_1, \cdots , e_r\}$ in $\bdK _0(\mathfrak{X})$ such that $\{f_1 = i^*(e_1), \cdots , f_r = i^*(e_r)\}$ is an $R(G)$-basis of $\bdK _0(\mathfrak{X}_b)$ for each fiber $\mathfrak{X}_b = \mathfrak{F}$ of the fibration. Then the map \begin{equation}\label{eqn:LHT**} \Phi : \bdK _0(\mathfrak{F}) {\underset{R(G)}\otimes} \bdK ^G_*(B) \to \bdK _*(\mathfrak{X}) \end{equation} \[ \Phi\left({\underset{1 \le i \le r}\sum} \ f_i \otimes b_i\right) = {\underset{1 \le i \le r}\sum} {\pi}^*(b_i) e_i \] is an isomorphism of $R(G)$-modules. In particular, $\bdK _*(\mathfrak{X})$ is a free $\bdK ^G_*(B)$-module and the map $\pi^*: \bdK ^G_*(B) \to \bdK _*(\mathfrak{X})$ is injective. \end{thm} \begin{proof} Since $k$ is perfect and since the fibration $\pi$ is Zariski locally trivial, we can find a filtration \begin{equation}\label{eqn:LHT-fil} \emptyset = B_{n+1} \subsetneq B_n \subsetneq \cdots \subsetneq B_1 \subsetneq B_0 = B \end{equation} of $B$ by closed subschemes such that for each $0 \le i \le n$, the scheme $B_i \setminus B_{i+1}$ is smooth and the given fibration is trivial over it. We set $U_i = B \setminus B_i$ and $V_i = U_i \setminus U_{i-1} = B_{i-1} \setminus B_i$. Observe then that each of the $U_i$'s and $V_i$'s is smooth. Set $\mathfrak{X}_i = {\pi}^{-1}(U_i)$ and $\mathfrak{W}_i = {\pi}^{-1}(V_i) = V_i \times \mathfrak{F}$. Let $\eta_i : \mathfrak{X}_i \hookrightarrow \mathfrak{X}$ and $\iota_i: \mathfrak{W}_i \hookrightarrow\mathfrak{X}$ be the inclusion maps. We prove by induction on $i$ that the map $\bdK _0(\mathfrak{F}) {\underset{R(G)}\otimes} \bdK ^G_*(U_i) \to \bdK _*(\mathfrak{X}_i)$ is an isomorphism, which will prove the theorem.
Since $U_0 = \emptyset$ and $\mathfrak{X}_1 = U_1 \times \mathfrak{F}$, the desired isomorphism for $i \le 1$ follows from Proposition~\ref{prop:linear} and the isomorphism $U_1 \times \mathfrak{F} \cong [(U_1 \times X)/G]$. We now consider the commutative diagram: \begin{equation}\label{eqn:LHT&} \xymatrix@C1pc{ {\begin{array}{c} \bdK ^G_*(U_{i}) \\ {\otimes} \\ \bdK _0(\mathfrak{F}) \end{array}} \ar[r] \ar[d] & {\begin{array}{c} \bdK ^G_*(V_{i+1}) \\ {\otimes} \\ \bdK _0(\mathfrak{F}) \end{array}} \ar[d] \ar[r] & {\begin{array}{c} \bdK ^G_*(U_{i+1}) \\ {\otimes} \\ \bdK _0(\mathfrak{F}) \end{array}} \ar[d] \ar[r] & {\begin{array}{c} \bdK ^G_*(U_{i}) \\ {\otimes} \\ \bdK _0(\mathfrak{F}) \end{array}} \ar[r] \ar[d] & {\begin{array}{c} \bdK ^G_*(V_{i+1}) \\ {\otimes} \\ \bdK _0(\mathfrak{F}) \end{array}} \ar[d] \\ \bdK _*(\mathfrak{X}_i) \ar[r] & \bdK _*(\mathfrak{W}_{i+1}) \ar[r] & \bdK _*(\mathfrak{X}_{i+1}) \ar[r] & \bdK _*(\mathfrak{X}_i) \ar[r] & \bdK _*(\mathfrak{W}_{i+1}).} \end{equation} The top row in this diagram is obtained by tensoring the K-theory long exact localization sequence with $\bdK _0(\mathfrak{F})$ over $R(G)$, and the bottom row is just the localization exact sequence. Since $\bdK _0(\mathfrak{F})$ is a free $R(G)$-module ({\sl cf.} Lemma~\ref{lem:linear-P}), the top row is also exact. It is easily checked that the second and the third squares commute, using the commutativity property of the push-forward and pull-back maps of K-theory of coherent sheaves in a Cartesian diagram of proper and flat maps. We show that the other squares also commute. It is enough to show that the first square commutes, as the fourth one is the same as the first. Let $\delta$ denote the connecting homomorphism in a long exact localization sequence for higher K-theory.
If we start with an element $b \otimes i^*(e_j) \in \bdK _*(U_{i}) {\otimes} \bdK _0(\mathfrak{F})$ and map this horizontally, we obtain $\delta b \otimes i^*(e_j)$, which maps vertically down to ${\pi}^*(\delta b) \cdot \iota^*_{i+1}(e_j)$. On the other hand, if we first map vertically, we obtain ${\pi}^*(b) \cdot \eta^*_i(e_j)$, which maps horizontally to $\delta \left({\pi}^*(b) \cdot \eta^*_i(e_j) \right)$. Now, we recall that these elements in the higher K-theory of coherent sheaves are represented by elements in the higher homotopy groups of various infinite loop spaces. Moreover, if we have a closed immersion of smooth stacks $\mathfrak{F} \hookrightarrow \mathfrak{X}$ with open complement $\mathfrak{U}$, then we have a fibration sequence of ring spectra \begin{equation}\label{eqn:Ring-sp} \bdK (\mathfrak{F}) \to \bdK (\mathfrak{X}) \to \bdK (\mathfrak{U}). \end{equation} The homotopy groups of these ring spectra form graded rings, and the connecting homomorphism in the long exact sequence of homotopy groups associated to the above fibration sequence satisfies the Leibniz rule (e.g., see \cite[Appendix~A]{Brown} and \cite[\S~2.4]{Panin}). Applying this Leibniz rule, we see that the term $\delta \left({\pi}^*(b) \cdot \eta^*_i(e_j) \right)$ is the same as $\delta {\pi}^*(b) \cdot \iota^*_{i+1}(e_j) = {\pi}^*\left(\delta b\right) \cdot \iota^*_{i+1}(e_j)$ since $\delta (\eta^*_i(e_j)) = 0$. We have shown that the above diagram commutes. The first and the fourth vertical arrows in ~\eqref{eqn:LHT&} are isomorphisms by induction. The second and the fifth vertical arrows are isomorphisms by Proposition~\ref{prop:linear}. Hence the middle vertical arrow is also an isomorphism by the five lemma. To show that $\pi^*$ is injective, consider the $T$-invariant filtration of $X$ as in ~\eqref{eqn:linear-P*} and let $j: [E(U_1)/G] = \mathfrak{X}_1 \to \mathfrak{X}$ be the open inclusion.
If we apply ~\eqref{eqn:LHT**} to the map $ \mathfrak{X}_1 \to B$, we see that the composite map $\bdK ^G_*(B) \to \bdK _*(\mathfrak{X}) \to \bdK _*(\mathfrak{X}_1)$ is an isomorphism (since $U_1$ is a $T$-invariant cell of $X$). We conclude that $\pi^*$ is split injective. \end{proof} \begin{comment} \begin{cor}\label{cor:LHT-Toric} Let $B$ be a smooth scheme over $k$. Let $\mathfrak{X} = [E(X)/G] \xrightarrow{p} B$ be a toric stack bundle as in ~\eqref{subsection:TS-bun-def} where $G$ is a diagonalizable subgroup of $T$. Assume that there are elements $\{e_1, \cdots , e_r\}$ in $\bdK _0(\mathfrak{X})$ such that for every point $b \in B$, $\{f_1 = i^*(e_1), \cdots , f_r = i^*(e_r)\}$ forms an $R(G) = \bdK ^G_0\left(k(b)\right)$-basis of $\bdK _0(\mathfrak{X}_b)$ for the fiber $\mathfrak{X}_b$. Then $\bdK _*(\mathfrak{X})$ is a free $\bdK ^G_*(B)$-module with basis $\{e_1, \cdots , e_r\}$. \end{cor} \end{comment} \section{Higher K-theory of toric stack bundles} \label{section:K-Chow-TSB} In this section, we give explicit descriptions of the higher K-theory of toric stack bundles in terms of the higher K-theory of the base scheme. Let $T$ be a split torus of rank $n$. Let $N = {\rm Hom}({\mathbb G}_m, T)$ be the lattice of one-parameter subgroups of $T$ and let $M = {\rm Hom}(T, {\mathbb G}_m) = N^{\vee}$ be its character group. Let $X = X(\Delta)$ be a smooth projective toric variety associated to a fan $\Delta$ in $N_{{\mathbb R}}$. Let \begin{equation}\label{eqn:reduced} 0 \to G \to T \to T' \to 0 \end{equation} be an exact sequence of diagonalizable groups. This yields the exact sequence of character groups \begin{equation}\label{eqn:reduced-dual} 0 \to T'^{\vee} \to T^{\vee} \to G^{\vee} \to 0.
\end{equation} \subsection{The Stanley-Reisner algebra associated to a subgroup of $T$}\label{subsection:SRA} We fix an ordering $\{\sigma_1, \cdots , \sigma_m\}$ of $\Delta_{\rm max}$ and let $\tau_i \subset \sigma_i$ be the cone which is the intersection of $\sigma_i$ with all those $\sigma_j$ such that $j \ge i$ and which intersect $\sigma_i$ in dimension $n-1$. Let $\tau'_i \subset \sigma_i$ be the cone such that $\tau_i \cap \tau'_i = \{0\}$ and ${\rm dim}(\tau_i) + {\rm dim}(\tau'_i) = n$ for $1 \le i \le m$. It is easy to see that $\tau'_i$ is the intersection of $\sigma_i$ with all those $\sigma_j$ such that $j \le i$ and which intersect $\sigma_i$ in dimension $n-1$. Since $X$ is smooth and projective, it is well known that we can choose the above ordering of $\Delta_{\rm max}$ such that \begin{equation}\label{eqn:order} \tau_i \subset \sigma_j \Rightarrow \ i \le j \ \ {\rm and} \ \ \tau'_i \subset \sigma_j \Rightarrow \ j \le i. \end{equation} Let $\Delta_1 = \{\rho_1, \cdots , \rho_d\}$ be the set of one-dimensional cones in $\Delta$ and let $\{v_1, \cdots , v_d\}$ be the associated primitive elements of $N$. We choose $\{\rho_1, \cdots , \rho_n\}$ to be a set of one-dimensional faces of $\sigma_m$ such that $\{v_1, \cdots , v_n\}$ is a basis of $N$. Let $\{\chi_1, \cdots , \chi_n\}$ be the dual basis of $M$. Let $\{\chi'_1, \cdots , \chi'_r\}$ be a chosen basis of $T'^{\vee} = M'$. We write the group operation in all of these lattices additively. \begin{defn}\label{defn:RING} Let $A$ be a commutative ring with unit and let $\{r_1, \cdots , r_r\}$ be a set of invertible elements in $A$.
Let $I^T_{\Delta}$ denote the ideal of the Laurent polynomial algebra $A[t^{\pm 1}_1, \cdots , t^{\pm 1}_d]$ generated by the elements \begin{equation}\label{eqn:Reln1} (t_{j_1}-1) \cdots (t_{j_l}-1), \ 1 \le j_p \le d \end{equation} such that $\rho_{j_1}, \cdots , \rho_{j_l}$ do not span a cone of $\Delta$. Let $J^G_{\Delta}$ denote the ideal of $A[t^{\pm 1}_1, \cdots , t^{\pm 1}_d]$ generated by the relations \begin{equation}\label{eqn:Reln2} s_i := \left(\stackrel{d}{\underset{j = 1}\prod} (t_j)^{\langle -\chi'_i, v_j\rangle}\right) - r_i, \ 1 \le i \le r. \end{equation} We define the $A$-algebras $R_T(A, \Delta)$ and $R_G(A, \Delta)$ to be the quotients of ${A[t^{\pm 1}_1, \cdots , t^{\pm 1}_d]}$ by the ideals $I^T_{\Delta}$ and $I^G_{\Delta} = I^T_{\Delta} + J^G_{\Delta}$, respectively. \end{defn} The ring $R_G(A, \Delta)$ will be called the {\sl Stanley-Reisner} algebra over $A$ associated to the subgroup $G$. Every character $\chi \in M$ acts on $R_T(A, \Delta)$ via multiplication by the element $t_\chi = \left(\stackrel{d}{\underset{j = 1}\prod} (t_j)^{\langle -\chi, v_j\rangle}\right)$ and this makes $R_T(A, \Delta)$ (and hence $R_G(A, \Delta)$) an $\left(A {\underset{{\mathbb Z}}\otimes} R(T)\right)$-algebra. \vskip .3cm \begin{comment} {\bf{Notations:}} For a $T$-equivariant (resp. ordinary) line bundle $L$ on a smooth scheme $X$, let $c^T_1(L)$ (resp. $c_1(L)$) denote the class $1- [L^{\vee}]$ in $\bdK ^T_0(X)$ (resp. $\bdK _0(X)$). For elements $\alpha, \beta \in \bdK ^T_0(X)$, let $\alpha +_F \beta$ denote the element $\alpha + \beta - \alpha \beta$. Notice that $c_1(L \otimes M) = c_1(L) +_F c_1(M)$. For a non-negative integer $n$, the term $n_Fc_1(L)$ will denote $c_1(L) +_F \cdots +_F c_1(L) = c_1(L^{\otimes n})$. For $n < 0$, $n_Fc_1(L)$ will mean $(-n)_Fc_1(L^{\vee})$.
\end{comment} \subsection{The K-theory of toric stack bundles} \label{subsection:Main-formula} Let $T$ be a split torus over a perfect field $k$ and let $G$ be a closed subgroup of $T$ (which may not necessarily be a torus). Let $X$ be a smooth projective toric variety with dense torus $T$ and let $\pi:\mathfrak{X} = [E(X)/G] \to B$ be a toric stack bundle over a smooth $k$-scheme $B$ associated to a principal $T$-bundle $p: E \to B$. We wish to describe the K-theory of $\mathfrak{X}$ in terms of the K-theory of $B$. \vskip .3cm Any $T$-equivariant line bundle $L \to X$ uniquely defines a $G$-equivariant line bundle $E(L) = E \stackrel{T}{\times} L$ on $E(X)$, where the $G$-action on $E(L)$ is given exactly as on $E(X)$. Every $\rho \in \Delta_1$ defines a unique $T$-equivariant line bundle $L_{\rho}$ on $X$ with a $T$-equivariant section $s_{\rho} : X \to L_{\rho}$ which is transverse to the zero-section and whose zero locus is the orbit closure $V_\rho = \overline{O_\rho}$. For any $\sigma \in \Delta$, let $u_{\sigma}$ denote the fundamental class $[{\mathcal O}_{V_\sigma}]$ of the $T$-invariant subscheme $V_{\sigma}$ in $\bdK ^T_0(X)$ and let $y_{\sigma}$ denote the fundamental class of $E\left(V_\sigma\right)$ in $\bdK ^G_0(E(X)) = \bdK _0(\mathfrak{X})$. Notice that ${\overline{p}}_{\sigma}: E\left(V_\sigma\right) \to B$ is a $G$-equivariant smooth projective toric sub-bundle of $\overline{p}:E(X) \to B$ with fiber $V_\sigma$. In particular, $\pi_{\sigma} :[E\left(V_\sigma\right)/G] \to B$ is a toric stack sub-bundle of $\pi: \mathfrak{X} \to B$ with fiber $[V_\sigma/G]$. We set $\mathfrak{X}_{\sigma} = [E\left(V_\sigma\right)/G]$. Suppose that $\rho_{j_1}, \cdots , \rho_{j_l}$ do not span a cone in $\Delta$.
Then $s = (s_{j_1}, \cdots , s_{j_l})$ yields a $G$-equivariant nowhere vanishing section of $E(L_{\rho_{j_1}}) \oplus \cdots \oplus E(L_{\rho_{j_l}})$ and hence the Whitney sum formula for Chern classes in K-theory implies that \begin{equation}\label{eqn:vanish1} y_{\rho_{j_1}} \cdots y_{\rho_{j_l}} = 0 \ {\rm in} \ \bdK ^G_0\left(E(X)\right). \end{equation} We now consider the commutative diagram \begin{equation}\label{eqn:vanish2} \xymatrix@C1pc{ X_l \ar[r]^{\iota} \ar[d]_{\pi_l} & E(X) \ar[d]_{\overline{p}} & E \times X \ar[d]^{p_E} \ar[r]^>>>>{p_X} \ar[l]_{p'} & X \ar[d]^{\pi_X} \\ {\rm Spec}(l) \ar[r] & B & E \ar[l]^{p} \ar[r]_<<<<{\pi_E} & {\rm Spec}(k),} \end{equation} where ${\rm Spec}(l)$ is any point of $B$. It is clear that all the squares are Cartesian and all the maps in the right square are $T$-equivariant. We define $(T \times G)$-actions on any $T$-invariant subscheme $Y \subseteq X$ and on $E$ by $(t,g)\cdot y = tg \cdot y$ and $(t,g)\cdot e = t\cdot e$, respectively. An action of $(T \times G)$ on $E \times X$ is defined by $(t,g) \cdot (e,x) = (t\cdot e, tg \cdot x)$. It is clear that these are group actions such that the square on the right in ~\eqref{eqn:vanish2} is $(T\times G)$-equivariant. This implies that the middle square is also $(T\times G)$-equivariant and that the map $\overline{p}$ is $G$-equivariant with respect to the trivial action of $G$ on $B$. The square on the left is $G$-equivariant. Let $L_{\chi}$ denote the $T$-equivariant line bundle on ${\rm Spec}(k)$ associated to a character $\chi$ of $T$. Let $(T \times G)$ act on $L_{\chi}$ by $(t,g)\cdot v = \chi(t) \chi(g) \cdot v$. If $\chi \in M' = T'^{\vee}$, then $G$ acts trivially on $L_{\chi}$ and hence it acts trivially on $\pi^*_E(L_{\chi})$. Recall that $(T \times G)$ acts on $E$ via $T$. Hence $\pi^*_E(L_{\chi}) \to E$ is a $(T \times G)$-equivariant line bundle on which $G$ acts trivially.
Since the $T$-equivariant line bundles on $E$ are the same as ordinary line bundles on $B$, we find that for every $\chi \in M'$, there is a unique ordinary line bundle $\zeta_{\chi}$ on $B$ such that $\pi^*_E(L_{\chi}) = p^*(\zeta_{\chi})$. Since $G$ acts trivially on $B$, there is a canonical ring homomorphism $c_B: \bdK _*(B) \to \bdK ^G_*(B)$ such that the composite $\bdK _*(B) \xrightarrow{c_B} \bdK ^G_*(B) \to \bdK _*(B)$ is the identity. These maps are simply the maps $\bdK _*(B) \xrightarrow{c_B} \bdK ^G_*(B) = \bdK _*(B)\otimes_{{\mathbb Z}} R(G) \to \bdK _*(B)$. Since $p^*_X \circ \pi^*_X (L_{\chi}) = p^*_E \circ \pi^*_E (L_{\chi})$ and since the $(T \times G)$-equivariant vector bundles on $E \times X$ are the same as $G$-equivariant vector bundles on $E(X)$, we conclude that for every $\chi \in M'$, there is a unique ordinary line bundle $\zeta_{\chi}$ on $B$ such that \begin{equation}\label{eqn:vanish3} E(\pi^*_X(L_{\chi})) = {\overline{p}}^*(\zeta_{\chi}) = {\overline{p}}^*\left(c_B(\zeta_{\chi})\right). \end{equation} Notice also that on each open subset of $B$ where the bundle $p$ is trivial, the restriction of $\zeta_{\chi}$ is the trivial line bundle, since $\zeta_{\chi}$ is obtained from the $T$-line bundle $L_{\chi}$ on ${\rm Spec \,}(k)$. We define a homomorphism of $\bdK _*(B)$-algebras $\bdK _*(B)[t^{\pm 1}_1, \cdots , t^{\pm 1}_d] \to \bdK _*(\mathfrak{X})$ by the assignment $t_i \mapsto [{E(L^{\vee}_{\rho_i})}/G]$ for $1 \le i \le d$. If we let $r_i = \zeta_{\chi'_i}$ for $1 \le i \le r$ ({\sl cf.} \S~\ref{subsection:SRA}), then it follows from ~\eqref{eqn:vanish1} and ~\eqref{eqn:vanish3} that this homomorphism descends to a $\bdK _*(B)$-algebra homomorphism \begin{equation}\label{eqn:vanish4} \Phi_G : R_G\left(\bdK _*(B), \Delta \right) \to \bdK _*(\mathfrak{X}).
\end{equation} \vskip .3cm Given a sequence $\gamma = (i_1, \cdots, i_d)$ of integers, set $E(\gamma) = E\left((L^{\vee}_{\rho_1})^{i_1} \otimes \cdots \otimes (L^{\vee}_{\rho_d})^{i_d}\right)$. We then see that for a monomial $\gamma(\underline{t}) = t^{i_1}_1\cdots t^{i_d}_d$, we have \begin{equation}\label{eqn:vanish4*} \Phi_G(\gamma(\underline{t})) = [{E(\gamma)}/G]. \end{equation} The following result describes the higher K-theory of the toric stack bundle $\pi: \mathfrak{X} \to B$. \begin{thm}\label{thm:CTB} The homomorphism $\Phi_G$ is an isomorphism. \end{thm} Before we prove this theorem, we consider some special cases which will be used in the final proof. The following observations will be used throughout the proofs. The first observation is that the cell closures of $X$ are the $T$-equivariant subschemes $V_{\tau_i}$, so the classes of ${\mathcal O}_{V_{\tau_i}}$ form an $R(G)$-basis of $\bdK ^G_0(X)$ by Lemma~\ref{lem:linear-P}. Since $\iota^*(y_{\tau_i}) = [{\mathcal O}_{V_{\tau_i}}]$, we see that Theorem~\ref{thm:LHT} applies to the toric stack bundle $\pi:\mathfrak{X} \to B$. The second observation is that $G$ is a diagonalizable group which acts trivially on $B$. Hence the map $\bdK _*(B) {\underset{{\mathbb Z}}\otimes} R(G) \to \bdK ^G_*(B)$ is a ring isomorphism by \cite[Lemma~3.6]{Thomason1}. This identification will be used without further mention. Since any character $\chi \in M$ acts on $R_T(\bdK _*(B), \Delta)$ and $\bdK ^G_*(E(X))$ via multiplication by $t_{\chi}$ and $\Phi_G(t_{\chi})$ respectively ({\sl cf.} \cite[Proposition~4.3]{SU}), we observe that the composite map $R_T(\bdK _*(B), \Delta) \to R_G(\bdK _*(B), \Delta) \to \bdK ^G_*(E(X))$ is $\bdK ^T_*(B)$-linear. \begin{remk}\label{remk:Thomason} We remark that the result of Thomason in \cite[Lemma~3.6]{Thomason1} is stated for affine schemes, but his proof works for all schemes.
Another way to deduce the general case from the affine case is to get a stratification of $B$ by affine subschemes as in ~\eqref{eqn:LHT-fil}, use induction on the number of affine strata, the localization sequence and the fact that $R(G)$ is free over ${\mathbb Z}$. \end{remk} \vskip .3cm \begin{lem}\label{lem:T-case} The homomorphism $\Phi_G$ is an isomorphism when $G = T$. \end{lem} \begin{proof} In this case, we first notice that the map $R_T({\mathbb Z}, \Delta) \xrightarrow{\phi} \bdK ^T_0(X)$, which takes $t_i$ to $[L^{\vee}_{\rho_i}]$, is an isomorphism of $R(T)$-algebras by \cite[Theorem~6.4]{VV}. On the other hand, we have the maps \begin{equation}\label{eqn:CTB0*} \bdK _*(B) {\underset{{\mathbb Z}}\otimes} R_T({\mathbb Z}, \Delta) \xrightarrow{\cong} R_T(\bdK _*(B), \Delta) \xrightarrow{\Phi_T} \bdK ^T_*(E(X)), \end{equation} where the first map takes $\alpha \otimes t_i$ to $\alpha \cdot t_i$ for $1 \le i \le d$. This map is clearly an isomorphism (see ~\eqref{eqn:Reln1}). It is clear from the definition of $\Phi_T$ that the composite map is the same as the map $\Phi$ in ~\eqref{eqn:LHT**} (with $G = T$). It follows from Theorem~\ref{thm:LHT} that the composite map in ~\eqref{eqn:CTB0*} is an isomorphism. We conclude that $\Phi_T$ is an isomorphism. \end{proof} \begin{cor}\label{cor:T-case*} For any closed subgroup $G \subseteq T$, the ring $R_G(\bdK _*(B), \Delta)$ is a free $\bdK _*(B)$-module. \end{cor} \begin{proof} We have seen above that the image of a character $\chi \in M$ in $R_T(\bdK _*(B), \Delta)$ is $t_{\chi}$. If we let $J^G$ denote the ideal $\left(\chi'_1 - \zeta_{\chi'_1}, \cdots , \chi'_r - \zeta_{\chi'_r}\right)$ in $\bdK ^T_*(B)$, then it follows from ~\eqref{eqn:Reln2} that $J^G_{\Delta} = J^GR_T(\bdK _*(B), \Delta)$ under the map $\bdK ^T_*(B) \to R_T(\bdK _*(B), \Delta)$.
It follows from Lemma~\ref{lem:T-case} and Theorem~\ref{thm:LHT} (with $G=T$) that $R_T(\bdK _*(B), \Delta)$ is a free $\bdK ^T_*(B)$-module. This implies that $R_G(\bdK _*(B), \Delta) = {R_T(\bdK _*(B), \Delta)}/{J^G_{\Delta}}$ is a free ${\bdK ^T_*(B)}/{J^G}$-module. Thus, it suffices to show that ${\bdK ^T_*(B)}/{J^G}$ is a free $\bdK _*(B)$-module. Since $\bdK ^T_*(B)$ is isomorphic to a Laurent polynomial ring $\bdK _*(B)[x^{\pm 1}_1, \cdots, x^{\pm 1}_n]$ and since each character $\chi \in M'$ is a monomial in this ring, the desired freeness follows from Lemma~\ref{lem:easy}. \end{proof} \begin{lem}\label{lem:Trvial-bundle-case} The homomorphism $\Phi_G$ is an isomorphism when $p: E \to B$ is a trivial principal bundle. \end{lem} \begin{proof} Since $p: E \to B$ is a trivial bundle, we have observed before that $\zeta_{\chi'_i} = 1$ for each $1 \le i \le r$. In particular, the map ${\bdK ^T_*(B)}/{J^G} \to \bdK ^G_*(B)$ is an isomorphism by Lemma~\ref{lem:Groupring}, where $J^G$ is as in Corollary~\ref{cor:T-case*}. It follows from Theorem~\ref{thm:LHT} and Lemma~\ref{lem:T-case} that $\Phi_T$ is an isomorphism of free $\bdK ^T_*(B)$-modules. This implies that $R_G(\bdK _*(B), \Delta) = {R_T(\bdK _*(B), \Delta)}/{J^G_{\Delta}}$ is a free ${\bdK ^T_*(B)}/{J^G} = \bdK ^G_*(B)$-module. It follows from this and Theorem~\ref{thm:LHT} that $\Phi_G$ is a basis preserving homomorphism of free $\bdK ^G_*(B)$-modules of the same rank. Hence, it must be an isomorphism. \end{proof} \begin{lem}\label{lem:easy} Let $S = A[x^{\pm 1}_1, \cdots, x^{\pm 1}_n]$ be a Laurent polynomial ring over a commutative ring $A$ with unit. Let $\{t_1, \cdots , t_r\}$ be a set of monomials in $S$ and let $\{u_1, \cdots , u_r\}$ be a set of units in $A$. Then the ring $\frac{S}{(t_1 - u_1, \cdots , t_r-u_r)}$ is free over $A$.
\end{lem} \begin{proof} This is left as an easy exercise, using the fact that $S$ is a free $A$-module on the monomials. \end{proof} \begin{lem}\label{lem:Groupring} Let $A$ be a commutative ring with unit and let \[ 0 \to L \to M \to N \to 0 \] be a short exact sequence of finitely generated abelian groups. Let $I_L$ be the ideal of the group ring $A[M]$ generated by the set $\{s-1 \mid s \in S\}$, where $S$ is a generating set of $L$. Then the map of group rings $\frac{A[M]}{I_L} \to A[N]$ is an isomorphism. \end{lem} \begin{proof} This is an elementary exercise and a proof can be found in \cite[Proposition~2]{May}. \end{proof} \vskip .5cm {\bf{Proof of Theorem~\ref{thm:CTB}}:} We shall prove this theorem along the same lines as the proof of Theorem~\ref{thm:LHT}. Recall that our base field $k$ is perfect. We consider the stratification of $B$ by smooth locally closed subschemes as in ~\eqref{eqn:LHT-fil}. We shall follow the notations used in the proof of Theorem~\ref{thm:LHT}. It suffices to show by induction on $i$ that the theorem is true when $B$ is replaced by each $U_i$. Since $U_0 = \emptyset$ and since $E \xrightarrow{p} B$ is trivial over $U_1$, the desired isomorphism for $i \le 1$ follows from Lemma~\ref{lem:Trvial-bundle-case}. Given a smooth locally closed subscheme $j: U \hookrightarrow B$, let $\zeta^U_i = j^*(\zeta_{\chi'_i}) \in \bdK _*(U)$ for $1 \le i \le r$ and set $J^G_U = \left(\chi'_1 - \zeta^U_{1}, \cdots , \chi'_r - \zeta^U_{r}\right)$. We have seen in the proof of Lemma~\ref{lem:T-case} that for any such inclusion $U \subseteq B$, $R_T(\bdK _*(U), \Delta)$ is the same as $\bdK ^T_0(X) {\underset{{\mathbb Z}}\otimes}\bdK_*(U)$.
Moreover, the maps \begin{equation}\label{eqn:CTB12*} R_G(\bdK _*(B), \Delta) {\underset{\bdK _*(B)}\otimes} \bdK _*(U) \cong \frac{R_T(\bdK _*(B), \Delta)}{J^GR_T(\bdK _*(B), \Delta)} {\underset{\bdK _*(B)}\otimes} \bdK _*(U) \to \frac{R_T(\bdK _*(U), \Delta)}{J^G_UR_T(\bdK _*(U), \Delta)} \end{equation} \[ \hspace*{10cm} \to R_G(\bdK _*(U), \Delta) \] are all isomorphisms. We now consider the diagram: \begin{equation}\label{eqn:LHT&CT} \xymatrix@C1pc{ R_G(\bdK _*(U_i), \Delta) \ar[r] \ar[d]_{\Phi^{U_i}_G} & R_G(\bdK _*(V_{i+1}), \Delta) \ar[r] \ar[d]_{\Phi^{V_{i+1}}_G} & R_G(\bdK _*(U_{i+1}), \Delta) \ar[r] \ar[d]_{\Phi^{U_{i+1}}_G} & R_G(\bdK _*(U_i), \Delta) \ar[r] \ar[d]_{\Phi^{U_i}_G} & R_G(\bdK _*(V_{i+1}), \Delta) \ar[d]_{\Phi^{V_{i+1}}_G} \\ \bdK _*(\mathfrak{X}_i) \ar[r] & \bdK _*(\mathfrak{W}_{i+1}) \ar[r] & \bdK _*(\mathfrak{X}_{i+1}) \ar[r] & \bdK _*(\mathfrak{X}_i) \ar[r] & \bdK _*(\mathfrak{W}_{i+1}).} \end{equation} Using ~\eqref{eqn:CTB12*}, we see that the top row of ~\eqref{eqn:LHT&CT} is obtained by tensoring the localization exact sequence \[ \cdots \to \bdK _*(U_{i}) \to \bdK _*(V_{i+1}) \to \bdK _*(U_{i+1}) \to \bdK _*(U_{i}) \to \bdK _*(V_{i+1}) \to \cdots \] of $\bdK _*(B)$-modules with $R_G(\bdK _*(B), \Delta)$. Hence, this row is exact by Corollary~\ref{cor:T-case*}. The bottom row is anyway a localization exact sequence. We now show that the diagram ~\eqref{eqn:LHT&CT} commutes. It is clear that the third square commutes and the fourth square is the same as the first. So we need to check that the first two squares commute. Let $\alpha: V_{i+1} \hookrightarrow U_{i+1}$ and $\beta: \mathfrak{W}_{i+1} \hookrightarrow \mathfrak{X}_{i+1}$ be the closed immersions of smooth schemes and stacks.
Following the notations in the proof of Theorem~\ref{thm:LHT}, we see that for any $u \in \bdK _*(U_i)$ and for any monomial $\gamma(\underline{t}) = t^{i_1}_1\cdots t^{i_d}_d$, \begin{equation}\label{eqn:CTB13*} \begin{array}{lll} \delta \circ \Phi^{U_i}_G(u \otimes \gamma(\underline{t})) & = & \delta\left(\pi^*_{U_i}(u) \cdot \eta^*_i\left([{E(\gamma)}/G]\right)\right) \\ & {=} & \delta(\pi^*_{U_i}(u)) \cdot \iota^*_{i+1}\left([{E(\gamma)}/G]\right) \\ & = & \pi^*_{V_{i+1}}(\delta(u)) \cdot \iota^*_{i+1}\left([{E(\gamma)}/G]\right) \\ & = & \Phi^{V_{i+1}}_G \left(\delta(u)\otimes \gamma(\underline{t})\right) \\ & = & \Phi^{V_{i+1}}_G \circ \delta(u \otimes \gamma(\underline{t})), \end{array} \end{equation} where $E(\gamma) \in \bdK _*(\mathfrak{X})$ is as in ~\eqref{eqn:vanish4*}. The second equality follows from the Leibniz rule and the third equality follows from the commutativity of ~\eqref{eqn:LHT&}. This shows that the first (and the last) square commutes. To show the commutativity of the second square, let $v \in \bdK _*(V_{i+1})$. 
We then have \begin{equation}\label{eqn:CTB14*} \begin{array}{lll} {\beta}_* \circ \Phi^{V_{i+1}}_G (v \otimes \gamma(\underline{t})) & = & {\beta}_* \left(\pi^*_{V_{i+1}}(v) \cdot \iota^*_{i+1}\left([{E(\gamma)}/G]\right)\right) \\ & = & {\beta}_* \left(\pi^*_{V_{i+1}}(v) \cdot \beta^* \circ \eta^*_{i+1}\left([{E(\gamma)}/G]\right)\right) \\ & = & {\beta}_*(\pi^*_{V_{i+1}}(v)) \cdot \eta^*_{i+1}\left([{E(\gamma)}/G]\right) \\ & = & \pi^*_{U_{i+1}}(\alpha_*(v)) \cdot \eta^*_{i+1}\left([{E(\gamma)}/G]\right) \\ & = & \Phi^{U_{i+1}}_G (\alpha_*(v) \otimes\gamma(\underline{t}) ) \\ & = & \Phi^{U_{i+1}}_G \circ \alpha_* (v \otimes \gamma(\underline{t})), \end{array} \end{equation} where the third equality follows from the projection formula and the fourth equality follows from the commutativity of ~\eqref{eqn:LHT&}. This shows that the second square commutes. The first and the fourth vertical arrows are isomorphisms by induction. 
The second and the fifth vertical arrows are isomorphisms by Lemma~\ref{lem:Trvial-bundle-case}. Hence the middle vertical arrow is also an isomorphism by the 5-lemma. This concludes the proof of Theorem~\ref{thm:CTB}. $\hspace*{12.5cm} \hfil\square$ \vskip .3cm \begin{remk}\label{remk:Non-reduced-bundle} It was assumed in Theorem~\ref{thm:CTB} that $G$ is a subgroup of $T$. Since $\mathfrak{X}$ is just the toric stack $[{E(X)}/G]$ associated to the data $(E(X), G \xrightarrow{\phi} T)$, the general case can always be reduced to the case of Theorem~\ref{thm:CTB}. We refer to Remark~\ref{remk:non-red} for how this can be done. \end{remk} \vskip .3cm \noindent\emph{Acknowledgments.} Parts of this work were carried out while the first author was visiting the Tata Institute of Fundamental Research, while the second author was visiting the Mathematics department of the Ohio State University, Columbus, and also while both the authors were visiting the Mathematics department of the Harish-Chandra Research Institute, Allahabad. The first author was also supported by an adjunct professorship at the same institute. They would like to thank these departments for the invitation and financial support during these visits. They also would like to thank Hsian-Hua Tseng for helpful comments on an earlier version of this paper. \enlargethispage*{75pt} \end{document}
\begin{document} \maketitle \begin{abstract} \noindent We introduce a theory of cyclic Kummer extensions of commutative rings for partial Galois extensions with finite groups, extending some of the well-known results of the theory of Kummer extensions of commutative rings developed by A. Z. Borevich. In particular, we provide necessary and sufficient conditions to determine when a partial $n$-kummerian extension is equivalent to either a radical or an $I$-radical extension, for some subgroup $I$ of the cyclic group $C_n$. \end{abstract} \noindent \textbf{2010 AMS Subject Classification:} Primary 13B05. Secondary 13A50, 16W22.\\ \noindent \textbf{Key Words:} Partial Kummer extensions, Galois extensions, cocycles, coboundary. \section{Introduction} The theory of Kummer extensions of commutative rings was introduced by A. Z. Borevich in \cite{B}, where it is proved that every Kummer extension of a ring $R$ with group $G$ has a decomposition into a direct sum of $R$-submodules, which are images of homomorphisms defined in terms of characters of $G$. Further, the author introduced the notion of a radical extension and showed that every cyclic Kummer extension is equivalent to a radical one. Explicitly, it is proved in \cite[Theorem 2, section 8]{B} that every cyclic extension $T$ of a Kummerian ring $R$ with Galois group $G$ is $G$-equivalent to the radical extension $S_{Q, \varphi, \chi}$, for some $R$-module $Q$ of rank one. On the other hand, the theory of partial Galois extensions of commutative rings was introduced and studied in \cite{DFP}, extending some of the well-known results given in the celebrated paper by Chase, Harrison and Rosenberg \cite{CHR}. 
In particular, given a partial Galois extension $R\subseteq S,$ in \cite[Theorem 5.1]{DFP} the authors established a one to one correspondence between the subgroups of $G$ and the separable $R$-subalgebras $T$ of $S$ which are $\alpha$-strong and such that $H_T$, the subgroup of $G$ whose elements leave $T$ fixed under the partial action $\alpha$, is a subgroup of $G$. In \cite{BCMP} the authors complement \cite[Theorem 5.1]{DFP} by showing that for any normal subgroup $H$ of $G$ the subring $S^{\alpha_H}$ of $S$ is a partial Galois extension of $R$ with Galois group $G/H.$ In the case that $G$ is abelian, this leads to the construction of the inverse semigroup of equivalence classes of partial Galois abelian extensions of $R$ with the same group $G$, called the Harrison inverse semigroup and denoted by $\mathcal{H}_{\rm par}(G, R)$; this semigroup contains the Harrison group defined in \cite{H}. Moreover, as in the classical case, the study of $\mathcal{H}_{\rm par}(G, R)$ is reduced to the cyclic case. Starting from the partial Galois theory for abelian groups, it is possible to obtain a partial cyclic Kummer theory. To this end, our principal goal in this work is to generalize the results of \cite[section 2]{B}, \cite[Theorem 1, section 3]{B} and \cite[Theorem 2, section 8]{B} to the partial context. Thus we connect invertible modules with one-dimensional partial cocycles; our new ingredients include saturated sets and $I$-radical extensions, which can be seen as subalgebras of Borevich's radical extensions. The paper is organized as follows. After the introduction, in Section 2 we present some preliminary facts on partial actions and partial Galois cohomology of groups. In Section 3 we connect invertible modules with one-dimensional partial cocycles (see Section \ref{coho}). In Section 4 we present a partial Kummer theory. 
In the first part we show that a partial $n$-kummerian extension is a sum of invertible modules described in Section \ref{IMCC}, where the sum runs over all the characters of $G$. In the second part, given $m\in \mathbb{N}$ and $I\subseteq \{0,\dots, m-1\},$ we introduce the notion of Borevich's $I$-radical extension, and in Proposition \ref{iradd} we give necessary and sufficient conditions to determine when these extensions have an algebra structure. In the third part we determine which partial cyclic Kummer extensions can be parametrized by $I$-radical extensions, and show that the study of this kind of partial action can be reduced to the global case. Throughout this work the word ring means an associative ring with an identity element. For a commutative ring $R$ we say that an $R$-module is f.g.p if it is finitely generated and projective, and faithfully projective if it is faithful and f.g.p. Moreover, unadorned $\otimes$ means $\otimes_R.$ The Picard group of $R$ is denoted by \textbf{Pic}($R$) and it consists of all $R$-isomorphism classes of f.g.p $R$-modules of rank 1, with binary operation given by $[P][Q]=[P\otimes Q]$. Recall that its identity is $[R]$ and the inverse of $[P]$ in \textbf{Pic}($R$) is $[P^*]$, where $P^*=Hom_R(P, R)$. Finally, for a ring $A$ and $X,Y\subseteq A,$ we write $\mathcal{U}(A)$ for the group of invertible elements of $A,$ and $XY$ denotes the set of finite sums of elements of the form $xy$ (or $yx$) for $x \in X$ and $y \in Y.$ \section{Preliminaries} In this section we recall some basic notions which will be used in the paper. \subsection{Partial Actions of groups} Let $k$ be a commutative ring and $G$ be a group. 
Following \cite{DE}, we say that a \textit{partial action} $\alpha$ of $G$ on a $k$-algebra $S$ is a family of $k$-algebra isomorphisms $\alpha=\{\alpha_g \colon S_{g^{-1}}\to S_g\}_{g\in G}$, which will be denoted by $(S,\alpha)$, where, for each $g\in G$, $S_g$ is an ideal of $S$ such that \begin{itemize} \item[(i)] $S_1=S,\, \alpha_1=id_S$, where $1$ is the identity element of the group $G,$ \item[(ii)] $\alpha_g(S_{g^{-1}}\cap S_h)=S_g\cap S_{gh}, \, \forall g,h \in G$, \item[(iii)] $\alpha_g \circ \alpha_h(x)=\alpha_{gh}(x),$ $ \forall x\in S_{h^{-1}}\cap S_{{(gh)}^{-1}},\, \forall g,h \in G$. \end{itemize} Conditions (ii) and (iii) are equivalent to the fact that $\alpha_{gh}$ is an extension of $\alpha_g \circ\alpha_h;$ moreover, we say that $\alpha$ is {\it global} if $\alpha_{gh}=\alpha_g \circ \alpha_h,$ for all $g,h\in G.$ Two classical examples of partial actions are the following. \begin{exe} (Induced partial action) Let $\beta$ be a global action of $G$ on a ring $T$ and $S$ a unital ideal of $T.$ For $g\in G$ we set $S_g=S\cap \beta_g(S)$ and $\alpha_g=\beta_g\mid_{S_{g^{-1}}};$ then the family $\alpha=\{\alpha_g \colon S_{g^{-1}}\to S_g\}_{g\in G}$ is a partial action of $G$ on $S.$ \end{exe} \begin{exe}\label{ext0} (Extension by zero) Let $G$ be a group, $S$ a ring and $H$ a subgroup of $G$ acting (globally) on $S$ with action $\beta.$ Set $S_g=\{0\}$ for all $g\in G\setminus H.$ Then $\beta^0=\{\beta_g \colon S_{g^{-1}}\to S_g\}_{g\in G}$ is a partial action of $G$ on $S,$ called the extension by zero of $\beta.$ \end{exe} \begin{defn}\label{equiva} Let $S$ and $S'$ be two rings with partial actions $\alpha$ and $\alpha',$ respectively. 
We say that $(S, \alpha)$ and $(S',\alpha')$ are \emph{$G$-isomorphic}, which is denoted by $(S,\alpha)\overset{par}{\sim} (S',\alpha')$, if there is a $k$-algebra isomorphism $f: S\rightarrow S'$ such that for all $g \in G$: \begin{enumerate} \item [(i)] $f(S_g)=S'_g$, \item [(ii)]$ f \circ \alpha_g= \alpha'_g \circ f$ in ${S_{g^{-1}}}$. \end{enumerate} \end{defn} For our purposes we will assume hereafter that $G$ is finite and $\alpha$ is unital, that is, every ideal $S_g$ is unital, with its identity element denoted by $1_g;$ by \cite[Theorem 4.5]{DE} this condition is equivalent to saying that $\alpha$ possesses a globalization (see \cite[p. 79]{DFP} for more details). The {\it ring of subinvariants of $S$} is the set $S^{\alpha}=\{a \in S\mid \alpha_{g}(a1_{g^{-1}})=a1_g,\, \forall g\in G\};$ if $\alpha$ is global, then $S^\alpha$ is denoted by $S^G.$ Let $R$ be a unital subring of $S$ with $1_R=1_S.$ Then, following \cite{DFP}, we say that $S\supseteq R$ is a \textit{partial Galois extension} if \begin{enumerate} \item[(i)] $R=S^\alpha$; \item[(ii)] For some $m\in \mathbb{N}$ there exist elements $x_i,y_i\in S, 1\leq i\leq m$, such that \begin{equation*}\label{G2} \sum_{i=1}^mx_i\alpha_{g}(y_i1_{g^{-1}})=\delta_{1, g},\, \text{for each}\, g \in G. \end{equation*} \end{enumerate} The elements $x_i,y_i$ in (ii) are called \textit{partial Galois coordinates} of $S$ over $R$. \begin{rem}\label{isogal} Let $(T, \beta)$ be a globalization of $(S, \alpha).$ Then by \cite[Theorem 3.3]{DFP}, $S\supseteq R$ is a partial Galois extension if and only if $T\supseteq T^G$ is a Galois extension; moreover, by \cite[Proposition 2.3]{DFP} there is a $T^G$-bilinear map $\psi\colon T\to T $ such that $\psi\mid_R\colon R\to T^G$ is a ring isomorphism whose inverse is given by $x\mapsto x1_S,$ for all $x\in T^G.$ \end{rem} We give the following. 
\begin{lem} Let $\alpha$ be a unital partial action of $G$ on $S$ and $R$ a subring of $S.$ Then $S/R$ is a partial Galois extension if and only if: \begin{itemize} \item $R=S^\alpha;$ \item for some $m\in \mathbb{N}$ there exist elements $x_i,y_i\in S, 1\leq i\leq m$, such that \end{itemize} \begin{equation}\label{galequiv}\sum_{i=1}^m\alpha_{g}(x_i1_{g^{-1}})\alpha_{h}(y_i1_{h^{-1}})=\delta_{g, h}1_g,\, \text{for all}\,\, g, h \in G.\end{equation} \end{lem} \proof $(\Rightarrow)$ Suppose that $S/R$ is a partial Galois extension; then $S^\alpha=R.$ Moreover, there are $m\in \mathbb{N}$ and $x_i,y_i\in S, 1\leq i\leq m$, such that $\sum_{i=1}^mx_i\alpha_{l}(y_i1_{l^{-1}})=\delta_{1, l},$ for all $l\in G$. Applying $\alpha_g$ we obtain $\sum\limits_{i=1}^m\alpha_g(x_i1_{g^{-1}})\alpha_{gl}(y_i1_{(gl)^{-1}})=\alpha_g(\delta_{1,l}1_{g^{-1}})$. In particular, taking $l=g^{-1}h$ we get $$\sum\limits_{i=1}^m\alpha_{g}(x_i1_{g^{-1}})\alpha_{h}(y_i1_{h^{-1}})=\alpha_{g}(\delta_{1,g^{-1}h}1_{g^{-1}})=\delta_{g,h}1_g,$$ as desired. For the part $(\Leftarrow)$ take $g=1.$ \endproof \subsubsection{Partial cohomology of groups}\label{coho} Now we recall from \cite{DK} some notions about partial cohomology of groups. \begin{defn} Let $(S,\alpha)$ be a partial action of $G.$ Given $n\in \mathbb{N}$, an $n$-cochain of $G$ with values in $S$ is a function $f:G^n\to S$ such that $f(g_1,\dots,g_n)\in \mathcal{U}(S1_{g_1}1_{g_1g_2}\cdots 1_{g_1g_2\cdots g_n})$. A $0$-cochain is an element of $\mathcal{U}(S)$. \end{defn} \begin{rem} Let $C^n(G,\alpha,S)$ denote the set of all $n$-cochains. This set is an abelian group whose identity is the map $(g_1,\dots, g_n)\mapsto 1_{g_1}1_{g_1g_2}\cdots 1_{g_1g_2\cdots g_n}$, and the inverse of $f\in C^n(G,\alpha,S)$ is $f^{-1}(g_1,\dots, g_n)=f(g_1,\dots, g_n)^{-1}$, where $f(g_1,\dots, g_n)^{-1}$ is the inverse of $f(g_1,\dots, g_n)$ in $S1_{g_1}1_{g_1g_2}\cdots 1_{g_1g_2\cdots g_n}$ for each $g_1,\dots, g_n\in G$. 
\end{rem} \begin{defn}[The coboundary homomorphism] Let $n\in \mathbb{N}, n>0, \, f\in C^n(G,\alpha, S)$ and $g_1,\dots , g_{n+1}\in G$, and set \small \begin{align*} (\delta^nf)(g_1,\dots , g_{n+1})=&\alpha_{g_1}\left(f(g_2,\dots , g_{n+1})1_{g_1^{-1}}\right)\prod_{i=1}^nf(g_1,\dots, g_ig_{i+1},\dots , g_{n+1})^{(-1)^i}\\ &f(g_1,\dots , g_n)^{(-1)^{n+1}}. \end{align*} \normalsize \end{defn} By \cite[Proposition 1.5]{DK} the map $\delta^n: C^n(G,\alpha, S) \to C^{n+1}(G,\alpha, S)$ is a group homomorphism such that $$(\delta^{n+1}\delta^nf)(g_1,g_2,\dots, g_{n+2})=1_{g_1}1_{g_1g_2}\cdots 1_{g_1g_2\cdots g_{n+2}},$$ for any $n\in \mathbb{N}, \, f\in C^n(G,\alpha, S)$ and $g_1,g_2,\dots , g_{n+2}\in G$. \begin{defn} Let $n\geq 1$ be a natural number. We define the groups $Z^n(G,\alpha,S):=\ker \delta^n$ of partial $n$-cocycles, $B^n(G,\alpha,S):={\rm Im}\, \delta^{n-1}$ of partial $n$-coboundaries, and $H^n(G,\alpha,S)=\frac{\ker \delta^n}{{\rm Im}\, \delta^{n-1}}$ of partial $n$-cohomologies of $G$ with values in $S$. \end{defn} \begin{exe} \begin{align*} B^1(G,\alpha, S)&=\{f\in C^1(G,\alpha, S)\mid f(g)=\alpha_g(t1_{g^{-1}})t^{-1}, \, \text{for some}\quad t\in \mathcal{U}(S)\};\\ Z^1(G,\alpha, S)&=\{f\in C^1(G,\alpha, S)\mid f(gh)1_g=f(g)\alpha_g(f(h)1_{g^{-1}}), \, \forall g,h \in G\}. \end{align*} \end{exe} Two cocycles $f,f'\in Z^n(G,\alpha, S)$ are called \textit{cohomologous} if they differ by an $n$-coboundary.\\ Notice that for $f\in Z^1(G,\alpha, S)$ we get that \begin{equation}\label{invf} f^{-1}(gh)1_g=\alpha_g(f^{-1}(h)1_{g^{-1}})f^{-1}(g), \end{equation} for all $g,h\in G.$ \section{Invertible Modules Connected with $Z^1(G,\alpha, S)$}\label{IMCC} In this section we connect invertible modules with one-dimensional cocycles (see Section \ref{coho}) in the partial context. Thus we extend the results of \cite[Section 2]{B} to the frame of partial actions. 
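Before proceeding, it may help to record how the one-dimensional groups above specialize when the partial action is global; the identities below are a routine check from the definitions, recorded here as an orienting aside rather than a statement taken from \cite{DK}.

```latex
% Aside (routine check, assuming \alpha is a global action, so 1_g = 1_S
% for every g \in G): the conditions defining B^1 and Z^1 reduce to
\begin{align*}
Z^1(G,\alpha, S)&=\{f\colon G\to \mathcal{U}(S)\mid f(gh)=f(g)\alpha_g(f(h)),\ \forall g,h\in G\},\\
B^1(G,\alpha, S)&=\{f\colon G\to \mathcal{U}(S)\mid f(g)=\alpha_g(t)t^{-1}\ \text{for some } t\in \mathcal{U}(S)\},
\end{align*}
% i.e. the classical crossed homomorphisms and principal crossed
% homomorphisms, so H^1(G,\alpha,S) recovers the usual H^1(G,\mathcal{U}(S)).
```

In particular, the partial theory contains the classical cohomology of the unit group $\mathcal{U}(S)$ as the special case in which every ideal $S_g$ equals $S$.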
From now on in this work $S$ will denote a commutative algebra over $k,$ $G$ an abelian group, $(S, \alpha)$ a unital partial action of $G$ on $S,$ and $R$ a subring of $S$ such that $S\supseteq R$ is a partial Galois extension with coordinate system $\{x_i,y_i\in S, 1\leq i\leq m\},$ for some $m\in \mathbb{N}.$ An $R$-submodule $X$ of $S$ is called invertible if there exists a submodule $Y$ of $S$ such that $XY=R.$ We denote by ${\rm Inv}_R(S)$ the group consisting of the invertible $R$-submodules of $S.$ The trace map $tr_{S/R}\colon S\to R$ is defined by $tr_{S/R}(s)=\sum_{g \in G}\alpha_{g}(s1_{g^{-1}}),$ for all $s\in S.$ By \cite[Remark 3.4]{DFP}, there exists $w\in S$ such that \begin{equation}\label{tr1}tr_{S/R}(w)=1.\end{equation} Given $w\in S$ as in \eqref{tr1}, we associate to each one-dimensional cocycle $f\in Z^1(G,\alpha, S)$ the element $\widehat{f}\in End_R(S)$ defined by setting \begin{equation}\label{hat}\widehat{f}(x)=\sum_{g \in G}f^{-1}(g)\alpha_{g}(wx1_{g^{-1}}), \quad x\in S.\end{equation} \begin{prop} The map $\widehat{f}$ satisfies $\widehat{f}\circ \widehat{f}=\widehat{f}$ and $Q_f:=Im(\widehat{f})$ is a f.g.p $R$-module. \end{prop} \begin{proof} It is clear that $\widehat{f}\in End_R(S).$ Now, let $x\in S$. 
Then, \begin{align*} \widehat{f}\circ \widehat{f}(x)&=\widehat{f}\left(\sum_{g \in G}f^{-1}(g)\alpha_{g}(wx1_{g^{-1}})\right)\\ &=\sum_{h \in G}f^{-1}(h)\alpha_{h}\left(w\sum_{g \in G}f^{-1}(g)\alpha_{g}(wx1_{g^{-1}})1_{h^{-1}}\right)\\ &=\sum_{g, h}f^{-1}(h)\alpha_{h}(w1_{h^{-1}})\alpha_{h}(f^{-1}(g)1_{h^{-1}})\alpha_{h}[\alpha_{g}(wx1_{g^{-1}})1_{h^{-1}}]\\ &=\sum_{g, h}f(h)^{-1}\alpha_{h}(w1_{h^{-1}})\alpha_{h}(f(g)^{-1}1_{h^{-1}})\alpha_{hg}(wx1_{(hg)^{-1}})1_{h}\\ &=\sum_{g, h}f(h)^{-1}\alpha_{h}(f(g)^{-1}1_{h^{-1}})\alpha_{h}(w1_{h^{-1}})\alpha_{hg}(wx1_{(hg)^{-1}})\\ &\stackrel{\eqref{invf}}=\sum_{g, h}f^{-1}(hg)\alpha_{h}(w1_{h^{-1}})\alpha_{hg}(wx1_{(hg)^{-1}})\\ &\stackrel{l=hg}=\sum_{h, l}f^{-1}(l)\alpha_{h}(w1_{h^{-1}})\alpha_{l}(wx1_{l^{-1}})\\ &=\left(\sum_{h}\alpha_{h}(w1_{h^{-1}})\right)\left(\sum_{l}f^{-1}(l)\alpha_{l}(wx1_{l^{-1}})\right)\\ &\stackrel{\eqref{tr1}}=\widehat{f}(x). \end{align*} The other assertion follows directly. \end{proof} \begin{prop}\label{cond} Let $f\in Z^1(G,\alpha, S).$ Then $$Q_f=\{a\in S\mid\alpha_{g}(a1_{g^{-1}})=f(g)a,\ \forall g\in G \}.$$ In particular, if $e_p:G\to S$ is defined by $g\mapsto 1_g,$ for all $g\in G,$ then $Q_{e_p}=R.$ \end{prop} \begin{proof} Let $a\in Q_f=Im(\widehat{f})$. Then $a=\sum\limits_{h\in G}f(h)^{-1}\alpha_{h}(wx1_{h^{-1}}),$ for some $x\in S$. Then, \begin{align*} \alpha_{g}(a1_{g^{-1}})&=\sum_{h\in G}\alpha_{g}(f(h)^{-1}1_{g^{-1}})\alpha_{g}(\alpha_{h}(wx1_{h^{-1}})1_{g^{-1}})\\ &\stackrel{\eqref{invf}}=\sum_{h\in G}f^{-1}(gh)f(g)\alpha_{gh}(wx1_{(gh)^{-1}})1_g\\ &\stackrel{f(g)\in S_g}=f(g)\sum_{h}f^{-1}(gh)\alpha_{gh}(wx1_{(gh)^{-1}})\\ &=f(g)a. \end{align*} Conversely, assume that $f(g)a=\alpha_{g}(a1_{g^{-1}}),$ for all $g\in G$. 
Thus, \begin{align*} a1_g&=f^{-1}(g)\alpha_{g}(a1_{g^{-1}})\\ &=f^{-1}(g)\alpha_{g}(a1_S1_{g^{-1}})\\ &=f^{-1}(g)\alpha_{g}\left(a\sum_{h \in G}\alpha_{h}(w1_{h^{-1}})1_{g^{-1}}\right)\\ &=f^{-1}(g)\alpha_{g}\left(\sum_{h \in G}\alpha_{h}(w1_{h^{-1}})\alpha_{h}(\alpha_{h^{-1}}(a1_{h}))1_{g^{-1}}\right)\\ &=\sum_{h \in G}f^{-1}(g)\alpha_{g}[\alpha_{h}(w\alpha_{h^{-1}}(a1_{h})1_{h^{-1}})1_{g^{-1}}]\\ &=\sum_{h \in G}f^{-1}(g)\alpha_{gh}(w\alpha_{h^{-1}}(a1_{h})1_{(gh)^{-1}})1_{g}. \end{align*} Again by \eqref{invf} we get $$f^{-1}(g)1_g=f^{-1}((gh)h^{-1})1_{g}=f^{-1}(gh)\alpha_{gh}(f^{-1}(h^{-1})1_{gh}).$$ Hence, \begin{align*} a1_{g} &=\sum_{h \in G}f^{-1}(gh)\alpha_{gh}(wf^{-1}(h^{-1})\alpha_{h^{-1}}(a1_{h})1_{(gh)^{-1}}), \end{align*} for all $g\in G.$ In particular, taking $g=1$ we get that $a\in Q_f=Im(\widehat{f})$. \end{proof} \begin{prop}\label{free} If $f\in B^1(G,\alpha, S),$ that is, $f(g)=\alpha_{g}(u1_{g^{-1}})u^{-1} $ for some $u\in \mathcal{U}(S),$ then $Q_f=Ru.$ Moreover, if the cocycles $f,f'\in Z^1(G,\alpha, S)$ are cohomologous, i.e., $f(g)=f'(g)\alpha_{g}(u1_{g^{-1}})u^{-1}$ for some $u\in \mathcal{U}(S),$ then $Q_f=Q_{f'}u$. \end{prop} \begin{proof} Since $f\in B^1(G,\alpha, S),$ there is $u\in \mathcal{U}(S)$ such that $f(g)=\alpha_{g}(u1_{g^{-1}})u^{-1} $ for all $g\in G.$ We shall prove that $Q_f=Ru.$ Let $a\in Q_f;$ then $au^{-1} \in R.$ Indeed, for $g\in G,$ we have by Proposition \ref{cond} that $$\alpha_g(au^{-1}1_{g^{-1}})=\alpha_g(a1_{g^{-1}})\alpha_g(u^{-1}1_{g^{-1}})=f(g)af^{-1}(g)u^{-1}=au^{-1}1_g,$$ so that $au^{-1} \in R,$ that is, $a\in Ru.$ For the other inclusion, since $f(g)u=\alpha_{g}(u1_{g^{-1}})$ we have $u\in Q_f$ and thus $Ru\subseteq Q_f$. 
Now take $f,f'\in Z^1(G,\alpha, S)$ cohomologous and $a \in Q_f;$ then for any $g\in G$ \begin{align*} au^{-1} f'(g)&=af(g)\alpha_{g}(u^{-1}1_{g^{-1}})=\alpha_g(au^{-1}1_{g^{-1}}), \end{align*} so that $au^{-1} \in Q_{f'}.$ From this we get that $Q_f=Q_{f'}u,$ as desired. \end{proof} Now we prove that the $R$-module $Q_f$ does not depend on the choice of an element $w$ with trace 1. Indeed, consider the $R$-homomorphism $\widetilde{f}\in End_R(S)$ defined by \begin{equation}\label{wildef}\widetilde{f}(x)=\sum_{g \in G}f^{-1}(g)\alpha_{g}(x1_{g^{-1}}),\end{equation} for all $x\in S.$ Then we have the following. \begin{prop} \label{equal} Let $\widetilde{f}$ be defined by \eqref{wildef}. Then $Im(\widetilde{f})=Q_f=Im(\widehat{f})$. \end{prop} \begin{proof} First of all we prove that $Im(\widetilde{f})\subseteq Im(\widehat{f})$. Indeed, for $x\in S$ we have that \begin{align*} \widehat{f}\circ \widetilde{f}(x)&=\widehat{f}\left(\sum_{g \in G}f^{-1}(g)\alpha_{g}(x1_{g^{-1}})\right)\\ &=\sum_{h \in G}f(h)^{-1}\alpha_{h}\left(w\sum_{g \in G}f^{-1}(g)\alpha_{g}(x1_{g^{-1}})1_{h^{-1}}\right)\\ &=\sum_{g, h}f(h)^{-1}\alpha_{h}(w1_{h^{-1}})\alpha_{h}(f^{-1}(g)\alpha_{g}(x1_{g^{-1}})1_{h^{-1}})\\ &=\sum_{g, h}\alpha_{h}(w1_{h^{-1}})f(h)^{-1}\alpha_{h}(f^{-1}(g)1_{h^{-1}})\alpha_{h}(\alpha_{g}(x1_{g^{-1}})1_{h^{-1}})\\ &=\sum_{g, h}\alpha_{h}(w1_{h^{-1}})f^{-1}(h g)\alpha_{hg}(x1_{(hg)^{-1}})1_{h}\\ &\stackrel{l=hg}=\sum_{h, l}\alpha_{h}(w1_{h^{-1}})f^{-1}(l)\alpha_{l}(x1_{l^{-1}})1_{h}\\ &=\left(\sum_{h \in G}\alpha_{h}(w1_{h^{-1}})\right)\left(\sum_{l \in G}f^{-1}(l)\alpha_{l}(x1_{l^{-1}})\right)\\ &=\sum_{l \in G}f^{-1}(l)\alpha_{l}(x1_{l^{-1}})\\ &=\widetilde{f}(x). \end{align*} The other inclusion follows from the fact that $\widehat{f}(x)=\widetilde{f}(wx)$ for all $x\in S.$ \end{proof} \begin{prop}\label{ann} The equality $Q_fS=S$ holds. 
Furthermore, \begin{equation} \label{equal1}\sum_{i=1}^m\widehat{f}(x_i)\widetilde{f^{-1}}(y_i)=1,\end{equation} where $x_1,\dots, x_m; y_1,\dots, y_m\in S$ is a system of partial Galois coordinates of $S$ over $R.$ \end{prop} \begin{proof} Let $x_1,\dots, x_m; y_1,\dots, y_m\in S$ be partial Galois coordinates of $S$ over $R;$ then \begin{align*} \sum_{i=1}^m\widehat{f}(x_i)\widetilde{f^{-1}}(y_i)&=\sum_{i=1}^m\left(\sum_{g \in G}f(g)^{-1}\alpha_{g}(wx_i1_{g^{-1}})\right)\left(\sum_{h \in G}f(h)\alpha_{h}(y_i1_{h^{-1}})\right)\\ &=\sum_{g,h}\sum_{i=1}^mf^{-1}(g)f(h)\alpha_{g}(wx_i1_{g^{-1}})\alpha_{h}(y_i1_{h^{-1}})\\ &=\sum_{g,h}f^{-1}(g)\alpha_{g}(w1_{g^{-1}})f(h)\sum_{i=1}^m\alpha_{g}(x_i1_{g^{-1}})\alpha_{h}(y_i1_{h^{-1}})\\ &\stackrel{\eqref{galequiv}}=\sum_{g,h}f^{-1}(g)\alpha_{g}(w1_{g^{-1}})f(h)\delta_{g,h}\\ &=\sum_{g}f^{-1}(g)\alpha_{g}(w1_{g^{-1}})f(g)\\ &=\sum_{g}\alpha_{g}(w1_{g^{-1}})=1. \end{align*} With respect to the equality $Q_fS= S,$ we have that \begin{equation*} Q_fS\subseteq S=\sum_{i=1}^m\widehat{f}(x_i)\widetilde{f^{-1}}(y_i)S\subseteq \sum_{i=1}^m\widehat{f}(x_i)S\subseteq Q_fS. \end{equation*} \end{proof} The following is a consequence of Proposition \ref{ann}. \begin{cor}\label{faithful} $Q_f$ is a faithful $R$-module (in particular, $Q_f\ne 0$). \end{cor} \begin{prop}\label{6} Let $f,f'\in Z^1(G,\alpha, S)$. Then $Q_fQ_{f'}=Q_{ff'}$. \end{prop} \begin{proof} The fact that $Q_fQ_{f'}\subseteq Q_{ff'}$ follows from Proposition \ref{cond}. For the other inclusion, if $c=\widetilde{ff'}(z), z\in S$, by Proposition \ref{ann} we have that $z=\sum_is_iv_i, $ for some $s_i\in S$ and $ v_i\in Q_{f'}.$ Then \begin{align*} c=\widetilde{ff'}(z)&=\sum_{g, i}f(g)^{-1}f'(g)^{-1}\alpha_{g}(s_i1_{g^{-1}})\alpha_{g}(v_i1_{g^{-1}})\\ &\stackrel{Prop. 
\ref{cond}}=\sum_{g, i}f(g)^{-1}f'(g)^{-1}\alpha_{g}(s_i1_{g^{-1}})f'(g)v_i\\ &=\sum_{g, i}f(g)^{-1}\alpha_{g}(s_i1_{g^{-1}})v_i\\ &=\sum_i\widetilde{f}(s_i)v_i. \end{align*} Note that $\widetilde{f}(s_i)\in Q_f$ and $v_i\in Q_{f'},$ and thus $c\in Q_fQ_{f'}$. \end{proof} \begin{prop} Let $f\in Z^1(G,\alpha,S)$. Then $Q_f\otimes_R Q_{f^{-1}}=R\xi\simeq R$ as $R$-modules, where $\xi=\sum\limits_{i=1}^m\widehat{f}(x_i)\otimes_R \widetilde{f^{-1}}(y_i)$. \end{prop} \begin{proof} Note that $R\xi\subseteq Q_f\otimes_R Q_{f^{-1}}$. For the other inclusion, as $S$ is a partial Galois extension of $R$, the partial Galois coordinates $x_1,\dots , x_m; y_1,\dots, y_m$ generate the $R$-module $S$. Thus, $\widehat{f}(x_1),\dots, \widehat{f}(x_m)$ and $\widetilde{f^{-1}}(y_1),\dots, \widetilde{f^{-1}}(y_m)$ generate the $R$-modules $Q_f$ and $Q_{f^{-1}}$ respectively, and then $Q_f\otimes_R Q_{f^{-1}}$ is generated by $\{\widehat{f}(x_i)\otimes_R \widetilde{f^{-1}}(y_j)\mid 1\leq i,j\leq m\}.$ Hence it is enough to prove that $\widehat{f}(x_i)\otimes_R \widetilde{f^{-1}}(y_j)\in R\xi,$ for all $1\leq i,j\leq m$. Since $ff^{-1}(g)=1_g,$ for each $g\in G,$ we get by Proposition \ref{6} and Proposition \ref{free} that \begin{equation}\label{inv}Q_fQ_{f^{-1}}=Q_{ff^{-1}}=R,\end{equation} hence \begin{align*} \widehat{f}(x_i)\otimes_R \widetilde{f^{-1}}(y_j)&\stackrel{\eqref{equal1}}=\widehat{f}(x_i)\otimes_R \sum_{k=1}^m\widetilde{f^{-1}}(y_j)\widehat{f}(x_k)\widetilde{f^{-1}}(y_k)\\ &=\sum_{k=1}^m \widehat{f}(x_i)\widetilde{f^{-1}}(y_j)\widehat{f}(x_k)\otimes_R \widetilde{f^{-1}}(y_k)\\ &= \widehat{f}(x_i)\widetilde{f^{-1}}(y_j)\left(\sum_{k=1}^m\widehat{f}(x_k)\otimes_R \widetilde{f^{-1}}(y_k)\right)\\ &=\widehat{f}(x_i)\widetilde{f^{-1}}(y_j)\xi. 
\end{align*} The module $R\xi=Q_f\otimes_R Q_{f^{-1}}$ is faithfully projective of rank one, being the tensor product of two faithfully projective modules of rank one; since it is also cyclic, it is free of rank one, that is, $R\xi\simeq R.$ \end{proof} \begin{rem}\label{r1} It follows from the equality \eqref{inv} that $Q_f \in {\rm Inv}_R(S),$ for all $f\in Z^1(G,\alpha, S).$ Then $[Q_f] \in {\textbf{ Pic}}(R),$ and thus $Q_{f^{-1}}$ is isomorphic to the $R$-module $Q_f^*=Hom_R(Q_f,R)$. \end{rem} Using the fact that $Q_f$ is faithfully projective and the method of localization, we get the following. \begin{prop}\label{8} Let $M$ be an $R$-submodule of $S$ and $f\in Z^1(G,\alpha,S).$ Then the $R$-homomorphism $\varphi:Q_f\otimes_R M\to Q_fM$, defined by $a\otimes x\mapsto ax$ ($a\in Q_f, \, x\in M$), is an isomorphism. In particular, the map $Z^1(G,\alpha,S)\ni f\mapsto [Q_f]\in {\bf Pic}(R)$ is a group homomorphism. \end{prop} The following result is a consequence of Propositions \ref{8} and \ref{6}. \begin{thm}\label{iso} Let $f,g$ be one-dimensional cocycles in $Z^1(G,\alpha,S)$. Then the $R$-homomorphism $\psi:Q_f\otimes_R Q_g\to Q_{fg}$ defined by $a\otimes b\mapsto ab$ for all $a\in Q_f,\, b\in Q_g$ is an isomorphism. \end{thm} By Proposition \ref{free} and Theorem \ref{iso} we have a group homomorphism $$\lambda : H^1(G,\alpha,S) \ni {\rm cls}(f) \mapsto [Q_f] \in \textbf{Pic}(R).$$ We finish this section by giving a relation between $\lambda$ and the monomorphism $\varphi_1: H^1(G,\alpha,S) \to \textbf{Pic}(R)$ which is at the head of the seven-term exact sequence associated to a partial Galois extension of commutative rings (see \cite{DPP, DPPR}). The map $\varphi_1$ sends ${\rm cls}(f)$ to the $R$-isomorphism class $[S_f^G],$ where $$S_f^G=\{x\in S\mid f(g)\alpha_g(x1_{g^{-1}})=x1_g,\ \forall g\in G \}.$$ Therefore $S_f^G= Q_{f^{-1}}$ thanks to Proposition \ref{cond}, and it follows that $\lambda({\rm cls}(f) )=\varphi_1({\rm cls}(f^{-1}))$; we conclude that $\lambda$ is a monomorphism. 
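The injectivity of $\lambda$ has a concrete reformulation, which we record as an aside; it is a routine consequence of the results above, not a statement taken from \cite{B} or \cite{DPP}.

```latex
% Aside (routine consequence of the injectivity of \lambda): the module
% Q_f detects coboundaries.
\begin{rem}
Since $\lambda$ is injective, for $f\in Z^1(G,\alpha,S)$ the $R$-module
$Q_f$ is free of rank one if and only if $f\in B^1(G,\alpha,S)$;
compare Proposition~\ref{free}, which exhibits the explicit generator,
$Q_f=Ru$, when $f(g)=\alpha_g(u1_{g^{-1}})u^{-1}$.
\end{rem}
```

Thus the class $[Q_f]\in\textbf{Pic}(R)$ is a complete invariant of the cohomology class of $f$.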
\section{Partial actions and Kummer Extensions} In this section we present a partial Kummer theory; one of the main results of this section is that any partial $n$-kummerian ring extension is a sum of invertible modules induced by one-dimensional cocycles. First, we recall the following. \begin{defn}\label{kummer}Let $n\geq 2$ be a natural number. A commutative ring $R$ is called $n$-kummerian if there exists an element $\omega\in \mathcal{U}(R)$ such that: \begin{itemize} \item [a)] $\omega^n=1.$ \item [b)] $1-\omega^i\in\mathcal{U}(R), $ for all $i\in \{1, \cdots, n-1\}.$ \end{itemize} \end{defn} \begin{rem}\label{K} In $R$ we have that: $$\sum\limits_{i=0}^{n-1}\omega^i=0\hspace{2cm}\text{and}\hspace{2cm} \prod\limits_{i=1}^{n-1}(1-\omega^i)=n1_R.$$ \end{rem} Let $R$ be an $n$-kummerian ring and $\omega$ as in Definition \ref{kummer}; any group homomorphism $\chi: G\to \langle \omega \rangle$ is called a character of the group $G$. Let $\hat G=Hom(G,\langle \omega \rangle)$ be the set of all characters of $G$ in $\langle \omega \rangle$. We define a group structure on $\hat G$ as follows: for $\chi_1,\chi_2\in\hat G$, their product $\chi_1\chi_2$ is defined by $$(\chi_1\chi_2)(g)=\chi_1(g)\chi_2(g),\quad g\in G.$$ With this product $\hat G$ is a group isomorphic to $G$. \begin{defn}\label{parkum} The partial Galois extension $S\supseteq R$ with group $G$ is called partial $n$-kummerian if $G$ is an abelian Galois group of order $n$ and $R$ is an $n$-kummerian ring. \end{defn} For $\chi\in \hat G$ we set \begin{equation}\label{chip} \chi_p: G\ni g\mapsto \chi(g)1_g\in S. \end{equation} \begin{rem}Let $\omega\in R$ be as in Definition \ref{kummer} and $\Omega$ be the cyclic group generated by $\omega.$ Note that $ Im \,\chi_p\subseteq \bigcup_{g\in G}\Omega1_g$, which is an inverse semigroup. \end{rem} \begin{rem}\label{kglob} Let $(T, \beta)$ be a globalization of $(S, \alpha)$ and $\psi$ be the ring isomorphism given in Remark \ref{isogal}. 
Then $\omega'=\psi(\omega)\in T^G$ satisfies a) and b) in Definition \ref{kummer}; in particular, $T\supseteq T^G$ is an $n$-kummerian ring extension. Moreover, for any $\chi\in \hat G,$ we have that $\psi_\chi:=\psi\circ \chi\in Hom(G, \langle \omega' \rangle)$ and any element of $ Hom(G, \langle \omega' \rangle)$ is of this form. \end{rem} We have the following. \begin{lem}\label{summa} Let $S\supseteq R$ be a partial $n$-kummerian ring extension. Then \begin{enumerate} \item For any $g\in G, g\neq e,$ we have \begin{equation}\label{equal0}\sum\limits_{\chi \in \hat G}\chi(g)=0, \end{equation} \item The set $\hat{G}_{\rm par}=\{\chi_p \mid \chi \in \hat G\}$ is a subgroup of $Z^1(G,\alpha,S).$ Moreover \begin{enumerate} \item The map $\mu_p:\hat G\ni \chi \mapsto \chi_p\in \hat{G}_{\rm par}$ is a group epimorphism with $\ker \mu_p=\{\chi\in\hat G\mid \chi(g)=1 \text{ if }\, 1_g\neq 0 \}$. In particular, if $1_g\neq 0$ for all $g\in G$ the groups $\hat G$ and $\hat{G}_{\rm par}$ are isomorphic. \item The map $\hat G_{\rm par}\ni \chi_p\mapsto Q_{\chi_p}\in{\rm Inv}_R(S)$ is a group homomorphism. \end{enumerate} \item One has \begin{equation}\label{suma} S=\sum\limits_{\chi\in\hat G} Q_{\chi_p},\end{equation} and the sum in \eqref{suma} is direct if and only if $S\supseteq R$ is a (global) Galois extension. \item For any $\chi\in \hat G$ we have $Q_{\psi_\chi}1_S \subseteq Q_{\chi_p}.$ \end{enumerate} \end{lem} \proof 1) By Remark \ref{kglob} and the proof of \cite[Section 3, Theorem 1]{B}, we get $\sum\limits_{\chi \in \hat G}\psi _\chi(g)=0$ for any $g\in G, g\neq e;$ the fact that $\psi$ is a ring isomorphism implies the result. \noindent 2) It is clear that $\hat{G}_{\rm par}$ is a group.
Now take $g,h\in G$ and $\chi_p\in \hat{G}_{\rm par};$ then $$ \chi_p (gh)1_g=\chi(g)1_g\chi(h)1_{gh}=\chi(g)1_g\alpha_g (\chi(h)1_h1_{g^{-1}}) =\chi_p (g)\alpha_g(\chi_p (h)1_{g^{-1}}), $$ and $\chi_p\in Z^1(G,\alpha,S).$ Now, for part a), it is clear that the map $\mu_p$ is an epimorphism. Since $R$ is $n$-kummerian we have \begin{align*} \chi\in \ker \mu_p &\Longleftrightarrow \chi(g)1_g=1_g, \ \forall g\in G \\&\Longleftrightarrow 1_g(1-\chi(g))=0, \ \forall g\in G \\&\Longleftrightarrow 1_g= 0 \vee \chi(g)=1, \ \forall g\in G. \end{align*} Finally, part b) is a consequence of Proposition \ref{6} and Remark \ref{r1}. \noindent 3) Let $x\in S.$ Then \begin{align*} \sum_{\chi_p}\widetilde{\chi}_p(x)&=\sum_{\chi_p,g}\chi^{-1}_p(g)\alpha_g( x1_{g^{-1}}) \\&=\sum_{\chi,g}\chi^{-1}(g)\alpha_g( x1_{g^{-1}})\\ &=\sum_g\left(\sum_{\chi}\chi^{-1}(g)\alpha_g(x1_{g^{-1}})\right) \\ &= \sum_g\left(\alpha_g(x1_{g^{-1}})\sum_{\chi}\chi^{-1}(g)\right)\\ &\stackrel{\eqref{equal0}}{=} nx, \end{align*} and we get that $nx\in \sum\limits_{\chi_p} {\rm Im}\,\widetilde{\chi}_p=\sum\limits_{\chi_p} Q_{\chi_p},$ where the last equality follows from Proposition \ref{equal}. Since $(n1_R)^{-1} \in R$ and each $Q_{\chi_p}$ is an $R$-module, we get $x\in\sum\limits_{\chi_p}Q_{\chi_p},$ and $S=\sum\limits_{\chi\in\hat G} Q_{\chi_p}.$ \noindent Now, if $S/R$ is a Galois extension, then $\chi_p=\chi,$ for all $\chi \in \hat G,$ and the sum in \eqref{suma} is direct thanks to \cite[Section 3, Theorem 1]{B}. Conversely, suppose that the sum in \eqref{suma} is direct. By Remark \ref{r1} we have that ${\rm rk}(Q_{\chi_p})=1,$ for all $\chi \in \hat G.$ Then ${\rm rk} (S)={\rm rk}\left(\bigoplus\limits_{\chi\in \hat G} Q_{\chi_p}\right )=n,$ and the result follows from \cite[Corollary 4.6]{DFP}.
\noindent 4) Let $b\in Q_{\psi_\chi}.$ Then \begin{align*} \alpha_g((b1_S) 1_{g^{-1}})&=\beta_g(b)\beta_g(1_S)1_g=\beta_g(b)1_g=\psi_\chi(g)b1_g =\chi(g)b1_g=\chi_p(g)(b1_S), \end{align*} and we get $b1_S\in Q_{\chi_p}$ thanks to Proposition \ref{cond}.\endproof \begin{rem} It follows from equation \eqref{suma} and Proposition \ref{6} that $S$ is a strong $\hat G$-system (see \cite[Definition 16]{NY}). Moreover, by 4) of Lemma \ref{summa} it follows that the map $T=\bigoplus\limits_{\chi \in \hat G}Q_{\psi_\chi} \ni t\mapsto t1_S\in \sum\limits_{\chi\in\hat G} Q_{\chi_p}=S$ is a ring epimorphism that preserves the homogeneous components. \end{rem} \subsection{On Borevich's radical extensions}\label{Brad} Here we recall the notion of Borevich's radical extensions. Let $R$ be a commutative ring. For a non-zero $R$-module $Q$ and a natural number $i$ we denote $Q^{{\otimes} ^i}=\underbrace{Q\otimes \cdots \otimes Q}_{i\ {\rm times}},$ where $Q^{{\otimes} ^0}=R.$ Suppose that there is $m\in \mathbb{N}$ and an $R$-module homomorphism $\varphi\colon Q^{{\otimes} ^m}\to R.$ Let $S_{Q, \varphi}=\bigoplus\limits_{i=0}^{m-1}Q^{{\otimes} ^i}.$ We recall the construction of a product in $S_{Q, \varphi}$; this process is the so-called factorization by $\varphi$ (see \cite[Section 5]{B} for details).
\\ Consider the tensor $R$-algebra $R[Q]=\bigoplus\limits_{i=0}^\infty Q^{{\otimes} ^i},$ and define recursively the $R$-module homomorphism $\tilde \varphi: R[Q]\to S_{Q, \varphi}$ as follows: \\ For $x\in Q^{{\otimes} ^i}\subseteq R[Q]$ we set $ \tilde\varphi(x)=x$ if $0\leq i\leq m-1.$ Now if $x=a_1\otimes \cdots \otimes a_i$ with $ i\geq m$ and $\tilde\varphi|_{Q^{{\otimes} ^k}}$ is defined for $k<i,$ we set \begin{align*}\tilde \varphi(a_1\otimes \cdots \otimes a_i)&=\tilde\varphi( \varphi (a_1\otimes \cdots \otimes a_{m})\otimes a_{m+1}\otimes \cdots \otimes a_i) \\&=\tilde\varphi( \varphi (a_1\otimes \cdots \otimes a_{m}) a_{m+1}\otimes \cdots \otimes a_i). \end{align*} For elements $x,y\in S_{Q, \varphi}$ one defines $$x\bullet y=\tilde\varphi(x\otimes y).$$ With this product $S_{Q, \varphi}$ is a commutative $R$-algebra containing $R$ as a unital subring. Notice that $S_{Q, \varphi}$ is graded by the cyclic group $C_m,$ and is strongly graded in the case that $\varphi$ is an isomorphism. The $R$-algebra $S_{Q, \varphi}$ constructed above is called a \textit{radical extension of $R.$} \begin{defn}\label{iraddef} Let $m\in \mathbb{N}$ and $I\subseteq \{0, \cdots, m-1\}.$ The $I$-radical extension of $R$ is the $R$-submodule of $S_{Q, \varphi}$ given by $S_{Q, \varphi, I}=\bigoplus\limits_{i\in I}Q^{{\otimes} ^i}.$ Moreover, we say that $I$ is $m$-saturated if it is closed under the addition in $C_m;$ that is, $I$ is $m$-saturated, if and only if, $I$ viewed as a subset of $C_m$ is a subgroup. \end{defn} Now we give a criterion to determine when $S_{Q, \varphi, I}$ is an $R$-algebra. \begin{prop}\label{iradd} Let $I\subseteq \{0, \cdots, m-1\}$ and suppose that the map $\varphi$ as above is an isomorphism.
Then the following statements hold: \begin{enumerate} \item $S_{Q, \varphi, I}$ is a commutative $R$-subalgebra of $S_{Q, \varphi},$ if and only if, $I$ is $m$-saturated. In this case $S_{Q, \varphi, I}$ is an $I$-graded $R$-algebra and ${\rm rk}_R(S_{Q, \varphi, I})$ divides $m.$ \item Let $I\subseteq \{0,\cdots, m-1 \}$ be $m$-saturated and consider the $I$-radical extension $S_{Q, \varphi, I};$ then there exist an f.g.p. $R$-module $Q'$ with ${\rm rk}(Q')=1$ and an $R$-algebra isomorphism $S_{Q, \varphi, I}\simeq S_{Q', \varphi'}.$ \end{enumerate} \end{prop} \proof 1) Part ($\Leftarrow$) is clear. To prove ($\Rightarrow$), suppose that $S_{Q, \varphi, I}$ is a commutative $R$-subalgebra of $S_{Q, \varphi}.$ Let $i,j\in I;$ to show that $i+_m j\in I$ it is enough to show that $Q^{{\otimes} ^{i+_mj}}\subseteq S_{Q, \varphi, I},$ where $+_m$ denotes the addition in $C_m.$ Since $\varphi$ is an isomorphism, $S_{Q, \varphi}$ is a strongly graded $R$-algebra, and thus $Q^{{\otimes} ^{i+_mj}}=Q^{{\otimes} ^i}\bullet Q^{{\otimes} ^j}\subseteq S_{Q, \varphi, I},$ as desired. \noindent 2) Take $n\in \{0,\cdots, m-1 \} $ such that $I=\langle n \rangle$ and write $Q'=Q^{\otimes ^n};$ then ${\rm rk}_R(Q')=1.$ Now consider the $R$-module homomorphism $\varphi: Q^{\otimes ^m}\to R;$ then there is an $R$-module homomorphism $\varphi': Q'^{\otimes ^{m'}}\to R,$ where $m'=\frac {m}{{\rm gcd}(m,n)}$ is the cardinality of $I.$ Then the isomorphism $S_{Q, \varphi, I}\simeq S_{Q', \varphi'}$ is given by the identity. \endproof Consider the $I$-radical extension $S_{Q, \varphi, I}$ of $R$ and define an action on $S_{Q, \varphi, I}$ as follows: \begin{equation*}\label{gactionI}\mu_g(x)=\chi^i(g)x,\text{ for all }g\in G,\, x\in Q^{{\otimes} ^i},\,\, i\in I.\end{equation*} Then by Proposition \ref{iradd} we get the following. \begin{cor} Let $I\subseteq \{0,\cdots, m-1 \}$ be $m$-saturated.
Then there is a radical extension $S_{Q', \varphi'}$ of $R$ such that $S_{Q, \varphi, I}$ and $S_{Q', \varphi'}$ are $G$-isomorphic. In particular, the ring of invariants of $S_{Q, \varphi, I}$ is $R.$ \end{cor} \subsection{On Partial Cyclic Kummer Extensions} It is observed in \cite[Section 5.2]{BCMP} that the study of partial Galois extensions of finite abelian groups can be reduced to the cyclic case, so from now on we assume that $G=\langle g \rangle$ is a cyclic group of order $n.$ Moreover, we also assume that $R$ is an $n$-kummerian ring for some $n\geq 2$ and $\omega\in R$ verifies the hypotheses of Definition \ref{kummer}. Further, we fix a generator $\chi\in \hat G$ with $\chi(g)=\omega.$ Since $\chi$ has order $n$, Proposition \ref{8} implies that $[Q_{\chi}]\in {\bf Pic}_n(R),$ where ${\bf Pic}_n(R)$ denotes the subgroup of elements of ${\bf Pic}(R)$ whose order divides $n.$ Take an $R$-module isomorphism $\varphi: {Q^{{\otimes}^n}_{\chi}}\to R.$ Then, according to Section \ref{Brad}, we construct the radical extension $S_{Q, \varphi, \chi}.$ By \cite[Section 8, Theorem 1]{B} we have that $S_{Q, \varphi, \chi}\supseteq R$ is a (global) cyclic Kummer extension of $R$ with Galois group $G,$ where the action is defined by \begin{equation*}\label{gaction}\mu_g(x)=\chi^i(g)x,\text{ for all }g\in G,\, x\in Q^{{\otimes} ^i},\,\, 0\leq i\leq n-1.\end{equation*} Moreover, by \cite[Section 8, Theorem 2]{B} every cyclic Kummer extension of $R$ with Galois group $G$ is $G$-isomorphic to a radical extension of $R.$ Thus, it is natural to ask which partial Kummer extensions are equivalent to either a radical or an $I$-radical extension of $R.$ In view of \eqref{suma} a necessary condition for this is that $S=\displaystyle\bigoplus\limits_{i\in X} Q_{\chi^i_p},$ for some $X\subseteq \{0,\cdots, n-1 \}.$ \begin{prop}\label{isaX} Let $X\subseteq \{0,\cdots, n-1 \}$ be such that $S=\displaystyle\bigoplus\limits_{i\in X} Q_{\chi^i_p};$ then $S$ is an
$R$-epimorphic image of the extension $S_{Q_{\chi_p}, \varphi}.$ In particular, there is an $R$-module isomorphism between $S$ and $S_{Q_{\chi_p}, \varphi, X}=\displaystyle\bigoplus\limits_{i\in X}Q_{\chi_p}^{{\otimes} ^i}.$ \end{prop} \proof Consider the radical extension $S_{Q_{\chi_p}, \varphi}=\displaystyle\bigoplus\limits_{i=0}^{n-1}Q_{\chi_p}^{{\otimes} ^i}.$ By Theorem \ref{iso} the map $\lambda^i\colon Q_{\chi_p}^{{\otimes} ^i}\to Q_{\chi^i_p}$ given by $$a_1\otimes \cdots \otimes a_i \mapsto a_1\cdots a_i , \,\,\text{for all}\,\, i\in \{0,\cdots, n-1 \},$$ is a well-defined $R$-module isomorphism. Now let $\lambda\colon \displaystyle\bigoplus\limits_{i=0}^{n-1}Q_{\chi_p}^{{\otimes} ^i} \to S $ be defined by $\lambda=\displaystyle\sum_{i=0}^{n-1}\tilde\lambda^i,$ where $\tilde\lambda^i=\lambda^i$ if $i\in X$ and zero otherwise; then $\lambda$ is an $R$-module epimorphism. The fact that $S=\displaystyle\bigoplus\limits_{i\in X} Q_{\chi^i_p}$ is a direct sum implies that the kernel of $\lambda$ is $\displaystyle\bigoplus\limits_{i\notin X}Q_{\chi_p}^{{\otimes} ^i},$ and thus $S$ and $S_{Q_{\chi_p}, \varphi, X}$ are isomorphic as $R$-modules; the isomorphism is given by $\lambda_X=\displaystyle\sum_{i\in X}\tilde\lambda^i.$ \endproof Now we are interested in knowing whether $S$ and $S_{Q_{\chi_p}, \varphi, X}=\displaystyle\bigoplus\limits_{i\in X}Q_{\chi_p}^{{\otimes} ^i} $ are isomorphic as $R$-algebras, but for this question to make sense we have to require, according to Proposition \ref{iradd}, that $X$ be an $n$-saturated set. We have the following. \begin{prop}\label{isat} Suppose that $S=\displaystyle\bigoplus\limits_{i\in I} Q_{\chi^i_p},$ where $I$ is an $n$-saturated set; then $\lambda_I$ gives an $I$-graded $R$-algebra isomorphism between $S$ and $S_{Q_{\chi_p}, \varphi, I}$.
Conversely, if $\lambda_I$ is an $R$-algebra isomorphism, then $S=\displaystyle\bigoplus\limits_{i\in I} Q_{\chi^i_p}.$ \end{prop} \proof We shall check that the map $\lambda_I^{-1}$ preserves products, where $\lambda_I$ is as in the proof of Proposition \ref{isaX}. Let $x=a_1\cdots a_i\in Q_{\chi^i_p}$ and $y=a_{i+1}\cdots a_{i+j}\in Q_{\chi^j_p},$ with $i,j\in I$ and $a_l\in Q_{\chi_p},$ for all $l\in\{1,\cdots, i+j\}.$ We consider two cases. {\bf Case 1.} $i+j < n.$ In this case $\lambda_I^{-1}(xy)=a_1\otimes \cdots \otimes a_{i+j}=\lambda_I^{-1}(x)\bullet \lambda_I^{-1}(y).$ {\bf Case 2.} $i+j \geq n.$ Here \begin{align*}\lambda_I^{-1}(xy)&=\lambda_I^{-1}(\underbrace{a_1\cdots a_i\cdots a_n}_{\in R} a_{n+1}\cdots a_{i+j})\\ &=a_1\cdots a_i\cdots a_n\lambda_I^{-1}(a_{n+1}\cdots a_{i+j})\\ &=a_1\cdots a_i\cdots a_n a_{n+1}\otimes \cdots \otimes a_{i+j}\\ &=(a_1\otimes \cdots \otimes a_i)\bullet (a_{i+1}\otimes \cdots \otimes a_{i+j})\\ &=\lambda_I^{-1}(x)\bullet \lambda_I^{-1}(y), \end{align*} as desired. The converse is clear. \endproof Let $H$ be a subgroup of $G;$ then $H$ acts partially on $S$ with partial action $\alpha_H=\{\alpha_h: S_{h^{-1}}\to S_h\}_{h\in H},$ and $S\supseteq S^{\alpha_H}$ is a partial Galois extension. Notice that in general $R \subseteq S^{\alpha_H}$ for any subgroup $H$ of $G.$ The following fact shows that the equality holds exactly when $\alpha$ is an extension by zero of $\alpha_H$ (see Example \ref{ext0}).
\begin{prop}\label{casiglob} Let $S\supseteq R$ be a partial Galois extension and $H$ a subgroup of $G$ acting globally on $S$ with action $\beta.$ Then $R=S^H,$ if and only if, $\alpha$ is an extension by zero of $\beta.$ \end{prop} \begin{proof} It is clear that if $\alpha$ is an extension by zero of $\beta,$ then $R=S^H.$ Conversely, suppose that $R=S^H.$ Then by \cite[Theorem 4.1 iv)]{DFP} there are $S$-module isomorphisms $\prod_{g\in G}S_g\simeq S\otimes S\simeq \prod_{h\in H}S_h$. We shall show that $S_g=0$ for all $g\in G\setminus H.$ For this, let $\mathfrak{p}$ be a prime ideal of $R;$ then $$\sum_{h \in H}{\rm rk}_{R_\mathfrak{p}}((S_h)_\mathfrak{p})+\sum_{g \in G\setminus H}{\rm rk}_{R_\mathfrak{p}}((S_g)_{\mathfrak{p}})=\sum_{h \in H}{\rm rk}_{R_\mathfrak{p}}((S_h)_\mathfrak{p}).$$ Thus, for $g \in G\setminus H$ we have that $(S_g)_{\mathfrak{p}}=0,$ which implies $S_g=0,$ as desired. \end{proof} Now we give the main result of this work. \begin{thm}\label{partoglob} Let $S\supseteq R$ be a partial $n$-kummerian extension. Then there is an $n$-saturated set $I$ of $\{0,\cdots, n-1\}$ such that $S=\displaystyle\bigoplus\limits_{i\in I} Q_{\chi^i_p},$ if and only if, there is a subgroup $H$ of $G$ of order $m$ such that $S\supseteq R$ is a global $m$-kummerian extension with Galois group $H$ and global action $\alpha_H.$ In this case $\alpha$ is the extension by zero of $\alpha_H.$ \end{thm} \begin{proof} Suppose that $S=\displaystyle\bigoplus\limits_{i\in I} Q_{\chi^i_p},$ where $I$ is an $n$-saturated set.
Let $i_0\in\{1,\cdots, n-1\}$ be such that $I=\langle i_0\rangle$ and write $H=\langle g^{i_0}\rangle;$ then $S=\displaystyle\bigoplus\limits_{i=0}^{m-1} Q_{\tilde\chi^i_p},$ where $m$ is the order of $H$ and $\tilde\chi=\chi^{i_0}.$ Then $S\supseteq S^{\alpha_H}$ is a partial Galois extension, and by Proposition \ref{cond} we have \begin{equation}\label{equall}R=Q_{\chi^0_p}=Q_{\tilde\chi^0_p}=S^{\alpha_H}.\end{equation} Finally, since ${\rm rk}_RS={\rm rk}_R\left(\displaystyle\bigoplus\limits_{i=0}^{m-1} Q_{\tilde\chi^i_p}\right)=m=|H|$ we get that $H$ acts globally on $S,$ thanks to \cite[Corollary 4.6]{DFP}. Conversely, suppose that there exists a subgroup $H$ of $G$ such that $S\supseteq R$ is a global $m$-kummerian extension with Galois group $H.$ Write $H=\langle g^{i_0}\rangle;$ then by 3) of Lemma \ref{summa} we have that $S=\displaystyle\bigoplus\limits_{i=0}^{m-1} Q_{\tilde\chi^i_p},$ where $\tilde\chi=\chi^{i_0}.$ Finally, taking $I=\langle i_0\rangle,$ an $n$-saturated subset of $\{0,\cdots, n-1\},$ we obtain $S=\displaystyle\bigoplus\limits_{i\in I} Q_{\chi^i_p}.$ The final assertion follows from Proposition \ref{casiglob} and \eqref{equall}. \end{proof} \begin{rem}\label{para} It follows from Proposition \ref{isat} and Theorem \ref{partoglob} that the study of partial Kummer extensions which are parametrized by $I$-radical extensions can be reduced to the global case. \end{rem} \subsection{Some final examples and remarks} As observed in Remark \ref{para}, there are partial kummerian extensions that are not equivalent to radical extensions. We give two examples of them. \begin{exe}\label{e1}\cite[Example 6.1]{DFP} Let $G=\langle g \mid g^4=1 \rangle,$ and put $S=\mathbb{C}e_1\oplus \mathbb{C}e_2\oplus \mathbb{C}e_3,$ where $e_1,e_2$ and $e_3$ are orthogonal idempotents with sum $1_S.$ Then there is a partial action $\alpha$ of $G$ on $S$ obtained by setting
$$S_e=S,\,\,\,\, S_g= \mathbb{C}e_1\oplus \mathbb{C}e_2,\,\,\, S_{g^2}= \mathbb{C}e_1\oplus \mathbb{C}e_3\,\,\,\, \text{and}\,\,\,\,S_{g^3}= \mathbb{C}e_2\oplus \mathbb{C}e_3,$$ and defining $\alpha_e={\rm id}_S,$ and $$\alpha_g(e_2)=e_1,\,\,\,\alpha_g(e_3)=e_2, \,\,\,\alpha_{g^2}(e_1)=e_3,\,\,\,\alpha_{g^2}(e_3)=e_1\,\,\,\text{and}\,\,\, \alpha_{g^3}=\alpha^{-1}_g.$$ Then $\mathbb{C} \simeq \{(z,z,z)\mid z\in \mathbb{C}\}$ is 4-kummerian with $\omega=i,$ and $S/\mathbb{C}$ is a partial $4$-kummerian extension. Moreover, $\hat G$ is generated by $\chi,$ where $\chi(g)=i.$ Then by Proposition \ref{cond} we have that \begin{itemize} \item $Q_{e_g}= \{(r,r,r)\mid r\in \mathbb{C}\}=\langle e_1+ e_2+ e_3\rangle;$ \item $Q_{\chi_p}=\{(r, ir,-r)\mid r\in \mathbb{C}\}=\langle e_1+ ie_2 -e_3\rangle;$ \item $Q_{\chi_p^2}=Q_{\chi_p}Q_{\chi_p}=\{(r, -r, r)\mid r\in \mathbb{C}\}=\langle e_1-e_2 +e_3\rangle;$ \item $Q_{\chi_p^3}=\{(r, -ir,r)\mid r\in \mathbb{C}\}=\langle e_1-ie_2 +e_3\rangle,$ \end{itemize} and we have that \begin{align*} S&=\sum_{\chi \in \hat G}Q_{\chi_p}=Q_{e_g}\oplus Q_{\chi_p}\oplus Q_{\chi_p^2}=Q_{e_g}\oplus Q_{\chi_p}\oplus Q_{\chi_p^3}=Q_{\chi_p}\oplus Q_{\chi_p^2}\oplus Q_{\chi_p^3}. \end{align*} Moreover, the sum $Q_{e_g}+ Q_{\chi_p^2}+ Q_{\chi_p^3}$ is not direct. \end{exe} \begin{exe}\label{e2}\cite[Example 6.2]{DFP} Let $G=\langle g \mid g^5=1 \rangle.$ Then there is a partial action of $G$ on $S=\mathbb{C}e_1\oplus \mathbb{C}e_2\oplus \mathbb{C}e_3\oplus \mathbb{C}e_4,$ where $e_1,e_2,e_3$ and $e_4$ are orthogonal idempotents with sum $1_S,$ such that $S/R$ is a partial Galois extension, where $R=\{(r,r,s,s)\mid r,s\in \mathbb{C}\}$ is a 5-kummerian ring with $w$ a primitive fifth root of unity.
Then by Proposition \ref{cond} we have \begin{itemize} \item $Q_{e_g}= R=\langle e_1+ e_2, e_3 + e_4\rangle;$ \item $Q_{\chi_p}=\{(r, wr,s, ws)\mid r,s\in \mathbb{C}\}=\langle e_1+ we_2, e_3 + we_4\rangle;$ \item $Q_{\chi_p^2}=\{(r, w^2r,s, w^2s)\mid r,s\in \mathbb{C}\}=\langle e_1+ w^2e_2, e_3 + w^2e_4\rangle;$ \item $Q_{\chi_p^3}=\{(r, w^3r,s, w^3s)\mid r,s\in \mathbb{C}\}=\langle e_1+ w^3e_2, e_3 + w^3e_4\rangle; $ \item $Q_{\chi_p^4}=\{(r, w^4r,s, w^4s)\mid r,s\in \mathbb{C}\}=\langle e_1+ w^4e_2, e_3 + w^4e_4\rangle, $ \end{itemize} and we have that $S=\sum_{\chi \in \hat G}Q_{\chi_p}=Q_{\chi^i_p}\oplus Q_{\chi^j_p},$ for all $1\leq i,j\leq 5,\ i\neq j.$ \end{exe} Inspired by Examples \ref{e1} and \ref{e2}, we finish this work with the following. \\ \noindent {\bf Question:} Let $S\supseteq R$ be a partial $n$-kummerian extension with cyclic Galois group $G.$ Write $S=\sum\limits_{i=0}^{n-1} Q_{\chi^i_p},$ where $\hat G=\langle \chi \rangle.$ Is there a subset $X$ of $\{0,1,\cdots, n-1\}$ such that $S=\bigoplus\limits_{i\in X} Q_{\chi^i_p}$? Notice that the answer to the question above is affirmative for all partial kummerian extensions that can be parametrized by radical extensions; but, as observed in Examples \ref{e1} and \ref{e2}, there are other partial kummerian extensions for which the answer is also affirmative. In particular, these partial Galois extensions $S\supseteq R$ are such that ${\rm rk}_R(S)$ is well defined. \end{document}
\begin{document} \title{On almost self-centered graphs and almost peripheral graphs\footnote{E-mail addresses: {\tt [email protected]}(Y.Hu), {\tt [email protected]}(X.Zhan).}} \author{\hskip -10mm Yanan Hu and Xingzhi Zhan\thanks{Corresponding author.}\\ {\hskip -10mm \small Department of Mathematics, East China Normal University, Shanghai 200241, China}}\maketitle \begin{abstract} An almost self-centered graph is a connected graph of order $n$ with exactly $n-2$ central vertices, and an almost peripheral graph is a connected graph of order $n$ with exactly $n-1$ peripheral vertices. We determine (1) the maximum girth of an almost self-centered graph of order $n;$ (2) the maximum independence number of an almost self-centered graph of order $n$ and radius $r;$ (3) the minimum order of a $k$-regular almost self-centered graph; (4) the maximum size of an almost peripheral graph of order $n;$ (5) which numbers are possible for the maximum degree of an almost peripheral graph of order $n;$ (6) the maximum number of vertices of maximum degree in an almost peripheral graph of order $n$ whose maximum degree is the second largest possible. Whenever the extremal graphs have a neat form, we also describe them. \end{abstract} {\bf Key words.} Almost self-centered graph; almost peripheral graph; girth; independence number {\bf Mathematics Subject Classification.} 05C35, 05C07, 05C69 \section{Introduction} We consider finite simple graphs. The {\it order} of a graph is its number of vertices, and the {\it size} its number of edges. We denote by $V(G)$ and $E(G)$ the vertex set and edge set of a graph $G$ respectively.
Denote by $d_{G}(u,v)$ the distance between two vertices $u$ and $v$ in $G.$ The {\it eccentricity}, denoted by $ecc_G(v),$ of a vertex $v$ in a graph $G$ is the distance to a vertex farthest from $v.$ Thus $ecc_G(v)={\rm max}\{d_G(v,u)| u\in V(G)\}.$ If the graph $G$ is clear from the context, we omit the subscript $G.$ If $ecc(v)=d(v,x),$ then the vertex $x$ is called an {\it eccentric vertex} of $v.$ The {\it radius} of a graph $G,$ denoted ${\rm rad}(G),$ is the minimum eccentricity of all the vertices in $V(G),$ whereas the {\it diameter} of $G,$ denoted ${\rm diam}(G),$ is the maximum eccentricity. A vertex $v$ is a {\it central vertex} of $G$ if $ecc(v)={\rm rad}(G).$ The {\it center} of a graph $G,$ denoted $C(G),$ is the set of all central vertices of $G.$ A vertex $u$ is a {\it peripheral vertex} of $G$ if $ecc(u)={\rm diam}(G).$ The {\it periphery} of $G$ is the set of all peripheral vertices of $G.$ A graph with a finite radius or diameter is necessarily connected. If ${\rm rad}(G)={\rm diam}(G),$ then the graph $G$ is called {\it self-centered.} Thus, a self-centered graph is a graph in which every vertex is a central vertex. This class of graphs has been extensively studied. See [2] and the references therein. Since a nontrivial graph has at least two peripheral vertices, a connected non-self-centered graph of order $n$ has at most $n-2$ central vertices. The following concept was introduced in [4]. {\bf Definition 1.} A connected graph of order $n$ is called {\it almost self-centered} if it has exactly $n-2$ central vertices. Since every graph has at least one central vertex, a connected graph of order $n$ has at most $n-1$ peripheral vertices. The following concept was introduced in [5]. {\bf Definition 2.} A connected graph of order $n$ is called {\it almost peripheral} if it has exactly $n-1$ peripheral vertices. In this paper we investigate several extremal problems on these two classes of graphs.
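The notions just defined are easy to experiment with. The following minimal Python sketch (all names ad hoc, not from the paper) computes eccentricities by breadth-first search and verifies that the graph obtained from the cycle $C_6$ by attaching one pendant edge, a graph of order $n=7$, has exactly $n-2=5$ central vertices and $2$ peripheral vertices, i.e., it is almost self-centered.

```python
from collections import deque

def eccentricities(adj):
    """BFS from every vertex of a connected graph; returns {v: ecc(v)}."""
    ecc = {}
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        ecc[s] = max(dist.values())
    return ecc

# C_6 (vertices 0..5) with one pendant edge attached at vertex 0 (pendant vertex 6).
n_cycle = 6
adj = {v: {(v - 1) % n_cycle, (v + 1) % n_cycle} for v in range(n_cycle)}
adj[6] = {0}
adj[0].add(6)

ecc = eccentricities(adj)
rad, diam = min(ecc.values()), max(ecc.values())
center = [v for v in ecc if ecc[v] == rad]
periphery = [v for v in ecc if ecc[v] == diam]
print(rad, diam, len(center), len(periphery))  # → 3 4 5 2
```

Note that radius and diameter differ by one here, as they must for any almost self-centered graph.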
In particular, we determine (1) the maximum girth of an almost self-centered graph of order $n;$ (2) the maximum independence number of an almost self-centered graph of order $n$ and radius $r;$ (3) the minimum order of a $k$-regular almost self-centered graph; (4) the maximum size of an almost peripheral graph of order $n;$ (5) which numbers are possible for the maximum degree of an almost peripheral graph of order $n;$ (6) the maximum number of vertices of maximum degree in an almost peripheral graph of order $n$ whose maximum degree is the second largest possible. Whenever the extremal graphs have a neat form, we also describe them. In Section 2 we treat almost self-centered graphs, and in Section 3 we treat almost peripheral graphs. For graphs $G$ and $H,$ the notation $G+H$ means the disjoint union of $G$ and $H.$ A {\it dominating vertex} in a graph of order $n$ is a vertex of degree $n-1.$ Two vertices $u$ and $v$ on a cycle $C$ of length $n$ are called {\it antipodal vertices} if $d_C(u,v)=\lfloor n/2\rfloor.$ An {\it $(x,y)$-path} is a path with endpoints $x$ and $y.$ A {\it diametral path} in a graph $G$ is a shortest $(x,y)$-path of length ${\rm diam}(G).$ We list some notations which will be used: \newline\indent $C_n$: the cycle of order $n,$ $P_n$: the path of order $n,$ $K_n$: the complete graph of order $n,$ \newline\indent $\overline{G}$: the complement of the graph $G,$ \,\, $e(G)$: the size of the graph $G,$ \newline\indent $\delta(G)$: the minimum degree of vertices of the graph $G,$ \newline\indent $\Delta(G)$: the maximum degree of vertices of the graph $G,$ \newline\indent $\alpha(G)$: the independence number of the graph $G,$ \,\, $g(G)$: the girth of the graph $G,$ \newline\indent ${\rm deg}(v)$: the degree of the vertex $v,$ \,\, $N(v)$: the neighborhood of the vertex $v,$ \newline\indent $N[v]$: the closed neighborhood of the vertex $v;$ i.e., $N[v]=N(v)\cup \{v\},$ \newline\indent $N_i(v)$: the $i$-th neighborhood of the vertex $v;$
i.e., $N_i(v)=\{x\in V(G)|\, d(v,x)=i\}.$ It is known [6, p.288] that if $G$ is a connected graph satisfying ${\rm diam}(G)\ge {\rm rad}(G)+2,$ then every integer $k$ with ${\rm rad}(G)<k<{\rm diam}(G)$ is the eccentricity of some vertex. Thus if $G$ is an almost self-centered graph or an almost peripheral graph, then the vertices of $G$ have only two distinct eccentricities and hence ${\rm diam}(G)={\rm rad}(G)+1.$ \section{Almost self-centered graphs} A {\it binocle} is a graph that consists of two cycles $C,\,D$ and a $(u,v)$-path $P$ such that $V(P)\cap V(C)=\{u\},$ $V(P)\cap V(D)=\{v\}$ and $V(C)\cap V(D)\subseteq V(P).$ Here we allow the possibility that $P$ has length $0;$ i.e., $P$ is a vertex. Note also that if $P$ is nontrivial, then $C$ and $D$ are vertex-disjoint. A {\it theta} (or {\it theta graph}) is a graph that consists of three internally vertex-disjoint paths sharing the same two endpoints. $\theta_{a,b,c}$ will denote the theta consisting of three paths with lengths $a,$ $b$ and $c$ respectively. A binocle and a theta are depicted in Figure 1. \vskip 3mm \par \centerline{\includegraphics[width=4.5in]{Fig1.jpg}} \par We make the convention that the girth of an acyclic graph is undefined. Thus whenever we talk about the girth of a graph, the graph is not acyclic. A connected graph is said to be {\it unicyclic} if it contains exactly one cycle. Recall that a connected graph of order $n$ is unicyclic if and only if it has size $n$ [7, p.77]. {\bf Lemma 1.} {\it Let $G$ be a unicyclic graph of order $n\ge 6.$ Then $G$ is almost self-centered if and only if $n$ is odd and $G$ is the graph obtained from $C_{n-1}$ by attaching one edge.} {\bf Proof.} Suppose $G$ is almost self-centered. We have $e(G)=n$ and $G\neq C_n.$ It is known [3] that the center of any connected graph lies within one block. Let $B$ be the block of $G$ in which $C(G)$ lies.
Then $B$ is unicyclic and $\delta(B)=2.$ Thus $B$ is a cycle of order $n-2$ or $n-1.$ If $B=C_{n-2},$ let $V(G)\setminus V(B)=\{x,y\}.$ Then $x$ and $y$ are leaves. Since $x$ and $y$ are the only two peripheral vertices, their neighbors are a pair of antipodal vertices of the cycle $B.$ But then $B$ contains a vertex whose eccentricity in $G$ is ${\rm diam}(G)-2,$ contradicting the assumption that $G$ is almost self-centered. Hence $B=C_{n-1}$ and $G$ is the graph obtained from $C_{n-1}$ by attaching one edge. Since $G$ is almost self-centered, $n$ is odd. Conversely, it is easy to verify that this graph is almost self-centered. $\Box$ {\bf Lemma 2.} {\it If $G$ is a connected graph of order $n$ and size $n+1$ with $\delta(G)=2,$ then $G$ is either a binocle or a theta.} {\bf Proof.} Since $\sum_{x\in V(G)}{\rm deg}(x)=2n+2$ and $\delta(G)=2,$ the degree sequence of $G$ is $(2,2,\ldots,2,4)$ or $(2,2,\ldots,2,3,3).$ In the former case, $G$ is a graph consisting of two cycles sharing a common vertex, which is a binocle, while in the latter case, $G$ is a theta. $\Box$ Lemma 2 can also be proved easily using induction on the order. {\bf Lemma 3.} {\it Let $a,b,c$ be positive integers with $a\le b\le c.$ Then ${\rm rad}(\theta_{a,b,c})=\lfloor (a+c)/2\rfloor$ and ${\rm diam}(\theta_{a,b,c})=\lfloor (b+c)/2\rfloor.$ Consequently $\theta_{a,b,c}$ is self-centered if and only if $b=a$ if $a+c$ is odd and $b\le a+1$ if $a+c$ is even. Also ${\rm diam}(\theta_{a,b,c})={\rm rad}(\theta_{a,b,c})+1$ if and only if $a+1\le b\le a+2$ if $a+c$ is odd and $a+2\le b\le a+3$ if $a+c$ is even. } {\bf Proof.} Easy verification. $\Box$ {\bf Lemma 4.} {\it Let $G$ be a connected graph of order $n$ and size $n+1$ with $\delta (G)=2.$ Then $G$ is almost self-centered if and only if $n$ is even and $G=\theta_{1,2,n-2}.$ } {\bf Proof.} By Lemma 2, $G$ is either a binocle or a theta. Suppose that $G$ is almost self-centered. 
It is easy to see that an almost self-centered graph with minimum degree $2$ is $2$-connected. Since a binocle has connectivity $1,$ we deduce that $G$ is a theta. Let $G=\theta_{a,b,c}$ with $a\le b\le c.$ Since $G$ is almost self-centered, ${\rm diam}(G)={\rm rad}(G)+1.$ By Lemma 3, $a+1\le b\le a+2$ if $a+c$ is odd and $a+2\le b\le a+3$ if $a+c$ is even. First suppose that $a+c$ is odd. We assert that $a=1.$ To the contrary, assume $a\ge 2.$ Then $b\ge a+1\ge 3.$ Let $G=\theta_{a,b,c}$ consist of the three $(x,y)$-paths $P_1,\, P_2,\, P_3$ of lengths $a,\, b,\, c$ respectively. Denote $r={\rm rad}(G)$ and $d=r+1={\rm diam}(G).$ Note that $x$ and $y$ are central vertices of $G;$ i.e., $ecc(x)=ecc(y)=r.$ Let $w_1$ be the neighbor of $x$ on $P_2$ and let $w_2$ be the neighbor of $y$ on $P_2.$ Let $x_1$ and $x_2$ be the two antipodal vertices of $x$ on the odd cycle $C=P_1\cup P_3$ where $d_C(x_2,y)=d_C(x_1,y)+1.$ Then $d_G(w_1,x_2)\ge r+1=d.$ Thus both $w_1$ and $x_2$ are peripheral vertices. Similarly, $w_2$ is a peripheral vertex. But then $G$ contains at least three peripheral vertices, a contradiction. The case when $a+c$ is even can be treated similarly. Hence $a=1.$ Lemma 3 implies that $b\ge 2$ if $a+c$ is odd and $b\ge 3$ if $a+c$ is even. If $b\ge 3,$ using the above argument we obtain contradictions. Thus $a+c=1+c$ is odd and $b=2.$ It follows that $n=a+b+c-1=1+(1+c)$ is even and $G=\theta_{1,2,n-2}.$ Conversely, it is easy to verify that if $n$ is even then the theta $\theta_{1,2,n-2}$ is almost self-centered. $\Box$ Now we are ready to state and prove the first main result. {\bf Theorem 5.} {\it Let $g(n)$ denote the maximum girth of an almost self-centered graph of order $n$ with $n\ge 5.$ Then $$ g(n)=\begin{cases} n-1 \quad {\rm if}\,\,\,n\,\,\,{\rm is}\,\,\,{\rm odd},\\ 4\lfloor n/6\rfloor \quad {\rm if}\,\,\,n\,\,\,{\rm is}\,\,\,{\rm even}\,\,\,{\rm and}\,\,\,n\neq 10,\\ 5 \quad {\rm if}\,\,\,n=10. 
\end{cases} $$ Furthermore, if $n\ge 12$ and $6$ divides $n,$ then $g(n)$ is attained uniquely by the graph obtained from $\theta_{n/3,n/3,n/3}$ by attaching an edge to a vertex of degree three. } {\bf Proof.} Let $G$ be an almost self-centered graph of order $n\ge 5.$ Clearly $G\neq C_n.$ Hence $g(n)\le n-1.$ On the other hand, if $n$ is odd, then the graph obtained from $C_{n-1}$ by attaching an edge is almost self-centered and has girth $n-1.$ Hence $g(n)=n-1$ if $n$ is odd. Now suppose that $n$ is even. Note that adding edges to a graph does not increase its girth. The cases $n\le 16$ can be verified by a computer search. Using Lemma 1 and the fact [1, p.195] that a graph of order $n$ and size $n+3$ has girth at most $\lfloor 4(n+3)/9\rfloor,$ we need only check the sizes $n+1$ and $n+2$ for a graph of order $n\le 16.$ Next suppose that $n$ is even and $n\ge 18.$ We first show that $g(G)\le 4\lfloor n/6\rfloor.$ It is known [1, p.195] that a graph of order $n$ and size $n+2$ has girth at most $\lfloor n/2\rfloor +1.$ The inequality $\lfloor n/2\rfloor +1\le 4\lfloor n/6\rfloor$ for $n\ge 18$ implies that if $e(G)\ge n+2,$ then $g(G)\le 4\lfloor n/6\rfloor.$ Also, Lemma 1 excludes the possibility that $e(G)=n.$ It remains to consider the case when $e(G)=n+1,$ and from now on we make this assumption. It is known [3] that the center of any connected graph lies within one block. Let $B$ be the block of $G$ in which $C(G)$ lies. Since $|C(G)|=n-2$ and $e(G)=n+1,$ the size of $B$ equals its order plus one. Since $n\ge 18,$ $B$ is $2$-connected and $\delta(B)=2.$ By Lemma 2, $B$ is a theta. Let $B=\theta_{a,b,c},$ which consists of three $(x,y)$-paths $P_1,\,P_2,\,P_3$ whose lengths are $a,\,b,\,c$ respectively with $a\le b\le c.$ Since the eccentricities of two adjacent vertices differ by at most one, every leaf of $G$ is a peripheral vertex. Hence $G$ has at most two leaves. We first exclude the possibility of two leaves. 
To the contrary, assume that $G$ has two distinct leaves $u$ and $v$ whose neighbors are $s$ and $t$ respectively. Denote $d={\rm diam}(B)$ and $f={\rm diam}(G).$ Then $d+1\le f\le d+2.$ Clearly it is impossible that $f<d.$ It is also impossible that $f=d,$ since otherwise $G$ would have at least four peripheral vertices, a contradiction. Hence $f\ge d+1.$ The inequality $f\le d+2$ follows from the fact that adding two leaves to $B$ can increase its diameter by at most $2.$ We distinguish two cases. Case 1. $f=d+2.$ In this case ${\rm rad}(G)=d+1.$ Since adding leaves to $B$ can increase the eccentricity of any vertex of $B$ by at most $1,$ we deduce that $B$ is self-centered. Clearly $d_B(s,t)=d\ge 2.$ Let $w$ be an internal vertex on a shortest $(s,t)$-path in $B.$ Then $ecc_G(w)=d,$ contradicting ${\rm rad}(G)=d+1.$ Case 2. $f=d+1.$ Since ${\rm rad}(G)=d,$ we have ${\rm rad}(B)\ge d-1.$ We further consider two subcases. Subcase 2.1. ${\rm rad}(B)=d;$ i.e., $B$ is self-centered. Let $s^{\prime}$ be an eccentric vertex of $s$ in $B.$ Then $d_B(s^{\prime},s)=d,$ implying that $d_G(s^{\prime},u)=d+1=f.$ But then $G$ has at least three peripheral vertices $u,\,v,\, s^{\prime},$ a contradiction. Subcase 2.2. ${\rm rad}(B)=d-1.$ By Lemma 3, $a+1\le b\le a+2$ if $a+c$ is odd and $a+2\le b\le a+3$ if $a+c$ is even. It suffices to consider the two cases: $b\ge a+2;$ $b=a+1$ and $a+c$ is odd. First suppose $b\ge a+2.$ Since $ecc_B(x)={\rm rad}(B)=d-1$ and ${\rm rad}(G)=d,$ one of the two leaves, say $u,$ must be an eccentric vertex of $x$ in $G.$ Let $p$ be the neighbor of $x$ on $P_2.$ Using the structure of the theta $B$ and the condition $b\ge a+2,$ we deduce that $d_G(p,u)\ge d+1.$ Consequently $G$ has at least three peripheral vertices $u,\,v,\, p,$ which is a contradiction. Next suppose that $b=a+1$ and $a+c$ is odd. 
If in $G,$ $x$ and $y$ have a common eccentric vertex (which must be one of the two leaves), say $u,$ then $s$ is a common eccentric vertex of $x$ and $y$ in $B.$ Note that now $s$ lies in $P_3.$ Such a situation occurs only if $a=1,$ and hence $b=2.$ Let $q$ be the internal vertex of $P_2.$ Then $d_G(q,u)=d+1.$ Thus $G$ has at least three peripheral vertices $u,\,v,\, q,$ which is a contradiction. If in $G,$ $x$ and $y$ do not have a common eccentric vertex, then one of $u$ and $v$ is an eccentric vertex of $x$ and the other is an eccentric vertex of $y.$ The conditions $a+b+c=n-1,$ $b=a+1,$ and $n\ge 18$ imply that $a+c\ge 8.$ Hence $d-1=\lfloor (a+c)/2\rfloor\ge 4.$ Since $d_G(u,v)=d+1,$ we have $d_B(s,t)=d-1\ge 4.$ Note that $s$ and $t$ lie in $P_3.$ Choose two adjacent vertices $v_1$ and $v_2$ on $P_3$ between $s$ and $t.$ Since the cycle $P_1\cup P_3$ is odd, $v_1$ and $v_2$ have a common antipodal vertex $z$ on $P_1.$ It is easy to verify that $ecc_G(z)=d-1,$ contradicting the fact that ${\rm rad}(G)=d.$ If $G$ has no leaf, by Lemma 4 $G=\theta_{1,2,n-2}.$ Thus $g(G)=3< 4\lfloor n/6\rfloor.$ Finally we consider the case when $G$ has exactly one leaf. Let $u$ be the leaf and let $s$ be its neighbor. Note that $s\in V(B),$ since otherwise the vertices of $G$ would have at least three distinct eccentricities, contradicting the assumption that $G$ is almost self-centered. We continue using the notations $d={\rm diam}(B)$ and $f={\rm diam}(G).$ If $f=d,$ then $G$ would have at least three peripheral vertices, a contradiction. 
It is also impossible that $f\ge d+2,$ since adding a leaf increases the eccentricity of any vertex by at most $1.$ Hence $f=d+1.$ Clearly ${\rm rad}(B)\ge d-1.$ We assert that $B$ is self-centered; i.e., ${\rm rad}(B)=d.$ To the contrary, suppose ${\rm rad}(B)=d-1.$ Then $ecc_B(x)=ecc_B(y)=d-1.$ Since ${\rm rad}(G)=d,$ we deduce that $u$ must be the common eccentric vertex of $x$ and $y,$ implying that $s$ is a common antipodal vertex of $x$ and $y$ on the cycle $P_1\cup P_3.$ As argued above, $a+c$ is odd and $a=1.$ By Lemma 3, $b=2$ or $b=3.$ If $b=2,$ let $z$ be a neighbor of $s$ on $P_3.$ Then $ecc_G(z)=d-1,$ contradicting the fact that ${\rm rad}(G)=d.$ If $b=3,$ let $w_1$ be the neighbor of $x$ on $P_2$ and let $w_2$ be the neighbor of $y$ on $P_2.$ Then it is easy to check that $G$ has at least three peripheral vertices $u,\,w_1,\, w_2,$ which is a contradiction again. Thus $B$ is self-centered. By Lemma 3, $b=a$ if $a+c$ is odd and $b\le a+1$ if $a+c$ is even. We have $a+b+c=n,$ and clearly $g(G)=a+b.$ There are two possibilities: (1) $b=a$; (2) $b=a+1$ and $a+c$ is even. Denote $k=\lfloor n/6\rfloor.$ Suppose $b=a.$ We have $3a\le n,$ implying that $a\le n/3.$ If $n=6k$ or $n=6k+2,$ we obtain $g(G)=2a\le 4k.$ If $n=6k+4,$ we have $a\le 2k+1.$ The case $a=2k+1$ will be excluded. Assume $a=2k+1.$ Then $b=2k+1$ and $c=2k+2.$ Thus $a+c=b+c$ is odd. But then $G$ has at least three peripheral vertices, a contradiction. Hence $a\le 2k$ and $g(G)=2a\le 4k.$ Suppose $b=a+1$ and $a+c$ is even. We have $a\le (n-1)/3\le 2k+1,$ where in the second inequality we have used $n\le 6k+4.$ But it is impossible that $a=2k+1,$ since otherwise $c=2k+1<2k+2=b,$ contradicting our assumption that $b\le c.$ The conditions $a+b+c=n,$ $b=a+1$ and that both $n$ and $a+c$ are even imply that $a$ is odd. Thus $a=2k$ is also impossible. 
It follows that $a\le 2k-1$ and consequently $g(G)=a+b=2a+1\le 4k-1.$ Finally we prove that the upper bound $4\lfloor n/6\rfloor$ can be attained and when $6$ divides $n,$ the extremal graph is unique. Denote $k=\lfloor n/6\rfloor.$ Let $G$ be the graph obtained from $\theta_{2k,2k,n-4k}$ by attaching an edge to one of the two vertices of degree three. Then $G$ is an almost self-centered graph of order $n$ with girth $4\lfloor n/6\rfloor.$ Suppose $G$ is an almost self-centered graph of order $n=6k\ge 18$ with girth $4k.$ Then the above analysis shows that $G$ is a graph obtained from $\theta_{a,b,c}$ by attaching an edge where $a=b.$ Since $g(G)=a+b=2a=4k,$ we have $a=b=2k.$ The condition $a+b+c=n$ further implies $c=2k.$ Thus the theta is $\theta_{2k,2k,2k}.$ There is only one way to attach an edge to this theta so that the resulting graph is almost self-centered; i.e., attach the edge to a vertex of degree three. This shows that the extremal graph is unique. The proof is complete. $\Box$ One conclusion in Theorem 5 states that if $n\ge 12$ and $6$ divides $n,$ then the extremal graph for $g(n)$ is unique. We remark that if $n$ is even with $n\ge 14$ and $6$ does not divide $n,$ then there are at least three extremal graphs for $g(n).$ This can be seen as follows. Using the notations in the proof of Theorem 5, we may attach an edge to any vertex on $P_1$ of the theta $\theta_{2k,2k,n-4k}$ to obtain an extremal graph. Next we consider the independence number. There is only one almost self-centered graph of order $n$ and radius $1;$ i.e., the graph obtained from $K_n$ by deleting an edge.
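The extremal construction in Theorem 5 can be checked mechanically. The following sketch (plain Python with breadth-first search; not part of the original argument) builds $\theta_{2k,2k,2k}$ with a pendant edge for $k=2$, i.e. $n=12$, and confirms that it is almost self-centered (two peripheral vertices, ${\rm diam}={\rm rad}+1$) with girth $4\lfloor n/6\rfloor = 8$.

```python
from collections import deque

def bfs_dist(adj, s, skip_edge=None):
    """Breadth-first distances from s, optionally ignoring one edge."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if skip_edge and {u, v} == set(skip_edge):
                continue
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def theta_plus_pendant(k):
    """theta_{2k,2k,2k} plus a pendant edge at a degree-three vertex (order 6k)."""
    adj, nid = {}, 0
    def add(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for _ in range(3):                  # three internally disjoint (x,y)-paths
        prev = 'x'
        for _ in range(2 * k - 1):
            add(prev, nid); prev = nid; nid += 1
        add(prev, 'y')
    add('x', 'leaf')                    # the attached edge
    return adj

adj = theta_plus_pendant(2)             # n = 12, k = 2
ecc = {v: max(bfs_dist(adj, v).values()) for v in adj}
rad, diam = min(ecc.values()), max(ecc.values())
periph = [v for v in adj if ecc[v] == diam]
# girth: for each edge, shortest alternative route + 1 (inf for the bridge)
girth = min(bfs_dist(adj, u, skip_edge=(u, v)).get(v, float('inf')) + 1
            for u in adj for v in adj[u])
print(rad, diam, sorted(map(str, periph)), girth)   # 4 5 ['leaf', 'y'] 8
```

Here the pendant edge and the opposite branch vertex are the only two peripheral vertices, matching the uniqueness statement.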
{\bf Theorem 6.} {\it The maximum independence number of an almost self-centered graph of order $n$ and radius $r$ with $r\ge 2$ is $n-r.$ } {\bf Proof.} Let $G$ be an almost self-centered graph of order $n$ and radius $r.$ First recall that ${\rm diam}(G)={\rm rad}(G)+1=r+1.$ Let $P$ be a diametral path of $G.$ If $r=2,$ $P$ has order $4.$ Any independent set can contain at most $2$ of the four vertices on $P.$ Thus $\alpha (G)\le n-2.$ Suppose $r\ge 3.$ Let $x$ be a central vertex of the path $P.$ Now $P$ has order at least $5$ and any vertex on $P$ is not an eccentric vertex of $x.$ Let $y$ be an eccentric vertex of $x.$ It is known [3] that the center of any connected graph lies within one block. Let $B$ be the block of $G$ in which $C(G)$ lies. Then $x,\,y\in V(B).$ By Menger's theorem [7, p.167], there are two internally disjoint $(x,y)$-paths $Q_1$ and $Q_2.$ Denote by $k$ the length of the cycle $D=Q_1\cup Q_2.$ Then $k\ge 2r.$ Any independent set can contain at most $\lfloor k/2\rfloor$ vertices on $D.$ Thus $\alpha (G)\le \lfloor k/2\rfloor+(n-k)=n-\lceil k/2\rceil\le n-(k/2)\le n-r.$ Conversely we construct a graph to show that the upper bound $n-r$ can be attained. Attaching an edge to the cycle $v_1,\, v_2,\ldots,v_{2r}$ at the vertex $v_1$ we obtain a graph $H.$ Adding $n-2r-1$ new vertices to $H$ such that each of them has $v_1$ and $v_3$ as neighbors, we obtain the graph $Z(n,r).$ It is easy to see that $Z(n,r)$ is an almost self-centered graph of order $n$ and radius $r$ with independence number $n-r.$ The graph $Z(12,4)$ is depicted in Figure 2. 
$\Box$ \vskip 3mm \par \centerline{\includegraphics[width=2.4in]{Fig2.jpg}} \par {\bf Corollary 7.} {\it The maximum independence number of an almost self-centered graph of order $n$ with $n\ge 5$ is $n-2,$ and there are exactly two extremal graphs.} {\bf Proof.} By Theorem 6 and the fact that the almost self-centered graph of order $n$ and radius $1$ has independence number $2,$ we deduce that the maximum independence number is $n-2.$ Suppose $G$ is an almost self-centered graph of order $n$ whose independence number is $n-2.$ By Theorem 6, ${\rm rad}(G)=2$ and consequently ${\rm diam}(G)=3.$ Let $P:\,x_1,x_2,x_3,x_4$ be a diametral path of $G.$ Then $x_1$ and $x_4$ are the two peripheral vertices of $G.$ Denote $S=V(G)\setminus\{x_1,x_2,x_3,x_4\}.$ $G$ has only one maximum independent set; i.e., $S\cup T$ where $T$ consists of two vertices from $P.$ There are three possible choices for $T:$ $\{x_1,\,x_3\},$ $\{x_2,\,x_4\}$ and $\{x_1,\,x_4\},$ the first two of which yield isomorphic graphs. Since every leaf of an almost self-centered graph is a peripheral vertex, every vertex in $S$ has degree at least $2.$ If $T=\{x_1,\,x_3\},$ then every vertex in $S$ has $x_2$ and $x_4$ as neighbors; if $T=\{x_1,\,x_4\},$ then every vertex in $S$ has $x_2$ and $x_3$ as neighbors. Conversely, it is easy to see that these two graphs satisfy all the requirements. $\Box$ Now we consider regular almost self-centered graphs. {\bf Theorem 8.} {\it Let $r(k)$ denote the minimum order of a $k$-regular almost self-centered graph. Then $$ r(k)=\begin{cases} 12 \quad {\rm if}\,\,\,k=3,\\ 2k+2 \quad {\rm if}\,\,\,k\ge 4. \end{cases} $$} {\bf Proof.} Let $G$ be a $k$-regular almost self-centered graph of order $n$, and let $x$ and $y$ be the two peripheral vertices of $G.$ There is only one almost self-centered graph of order $n$ and diameter at most $2;$ i.e., the graph obtained from $K_n$ by deleting an edge.
Thus ${\rm diam}(G)\ge 3,$ implying that $N[x]\cap N[y]=\phi.$ It follows that $$ n\ge |N[x]|+|N[y]|=(k+1)+(k+1)=2k+2. $$ We first show $r(3)=12.$ Suppose $k=3.$ Then $n$ is even and $n\ge 2\times 3+2=8.$ We will exclude the two orders $8$ and $10.$ If $n=8,$ then ${\rm diam}(G)=3$ and $G-\{x,\,y\}$ is a $2$-regular graph of order $6,$ which must be $C_6$ or $2C_3.$ In each case, $G$ has at least four peripheral vertices, a contradiction. If $n=10,$ we deduce that ${\rm diam}(G)=3,$ since otherwise either $G$ has a vertex of degree at least $4$ or $G$ has three peripheral vertices. Recall that $N_i(x)=\{v\in V(G)|\, d(x,\,v)=i\}.$ We have $|N_1(x)|=3$ and $|N_3(x)|=1,$ implying $|N_2(x)|=5.$ Here we have used the fact that $G$ has exactly two peripheral vertices. Note that each vertex in $N_2(x)$ has at least one neighbor in $N_1(x).$ Analyzing possible adjacency relations in $G-\{x,\,y\},$ we deduce that $G$ has at least four peripheral vertices, a contradiction. Thus we have proved that $n\ge 12.$ On the other hand, the graph depicted in Figure 3 is a $3$-regular almost self-centered graph of order $12.$ This shows $r(3)=12.$ \vskip 3mm \par \centerline{\includegraphics[width=3.2in]{Fig3.jpg}} \par Next suppose $k\ge 4.$ We have proved above that any $k$-regular almost self-centered graph has order at least $2k+2.$ To show $r(k)=2k+2,$ it suffices to construct such a graph $R$ of order $2k+2.$ Let $V(R)=\{x_0,\,y_0\}\cup A\cup B$ where $A=\{x_1,x_2,\dots,x_k\}$ and $B=\{y_1,y_2,\dots,y_k\}.$ We use the notation $u\leftrightarrow v$ to mean that the two vertices $u$ and $v$ are adjacent. 
If $k$ is even, the adjacency of $R$ is defined as follows: \begin{align*} N(x_0)=A,\,\,\, N(y_0)=B,\,\,\, N(x_i)=\{x_{i+\frac{k}{2}},y_i,y_{i+1},\ldots,y_{i+k-3}\}\,\,\, {\rm if}\,\,\,1\le i\le k/2,\quad\quad\\ N(x_j)=\{x_{j-\frac{k}{2}},y_j,y_{j+1},\ldots,y_{j+k-3}\}\,\,\, {\rm if}\,\,\,k/2+1\le j\le k, \quad y_i\leftrightarrow y_{i+\frac{k}{2}}, \,\,\,{\rm if}\,\,\,1\le i\le k/2. \end{align*} If $k$ is odd, the adjacency of $R$ is defined as follows: \begin{align*} N(x_0)=A,\,\,\, N(y_0)=B,\,\,\, N(x_1)=\{x_2,x_3,\ldots,x_{\frac{k+1}{2}}\}\cup \{y_1,y_2,\ldots,y_{\frac{k-1}{2}}\},\\ N(x_i)=\{x_1\}\cup (B\setminus\{y_{i-1},y_k\})\,\,\, {\rm if}\,\,\,2\le i\le (k+1)/2,\quad\quad\quad\quad\\ N(x_j)=\{x_1\}\cup (B\setminus\{y_{j-\frac{k+1}{2}}\})\,\,\, {\rm if}\,\,\,(k+3)/2\le j\le k,\quad\quad\quad\quad\\ N(y_k)=\{x_{\frac{k+3}{2}},\ldots,x_k\}\cup\{y_1,y_2,\ldots,y_{\frac{k-1}{2}}\}.\quad\quad\quad\quad\quad\quad \end{align*} Here the subscripts of the vertices are taken modulo $k.$ It is easy to verify that $R$ is a $k$-regular almost self-centered graph of order $2k+2$ with periphery $\{x_0,\, y_0\}$ and center $A\cup B.$ $\Box$ \section{Almost peripheral graphs} {\bf Theorem 9.} {\it The maximum size of an almost peripheral graph of order $n$ is $\lfloor (n-1)^2/2\rfloor.$ If $n$ is odd, this maximum size is attained uniquely by the graph $\overline{K_1+((n-1)/2)K_2};$ if $n$ is even, this maximum size is attained uniquely by the graph $\overline{K_1+((n-4)/2)K_2+P_3}.$ } {\bf Proof.} Use the fact that an almost peripheral graph can have at most one dominating vertex and the degree sum formula. $\Box$ In the following result we determine which numbers are possible for the maximum degree of an almost peripheral graph with a given order. 
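The size bound of Theorem 9 is easy to verify on the odd-order extremal graph. A minimal Python sketch (not part of the paper; it reads the `$+$' in $\overline{K_1+((n-1)/2)K_2}$ as disjoint union, which is an assumption on the paper's notation): for $n=7$, the complement of $K_1+3K_2$ has size $\lfloor (n-1)^2/2\rfloor=18$, radius $1$, diameter $2$, and a unique central vertex, so it is almost peripheral.

```python
from collections import deque

n = 7
matching = [{1, 2}, {3, 4}, {5, 6}]     # the (n-1)/2 copies of K_2
# Complement of the disjoint union K_1 + 3 K_2 on vertices 0..6:
# every pair is an edge except the three matching pairs.
adj = {u: {v for v in range(n) if v != u and {u, v} not in matching}
       for u in range(n)}

def ecc(s):
    """Eccentricity of s via breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

size = sum(len(adj[u]) for u in adj) // 2
eccs = {v: ecc(v) for v in adj}
rad, diam = min(eccs.values()), max(eccs.values())
center = [v for v in eccs if eccs[v] == rad]
print(size, rad, diam, center)          # 18 1 2 [0]
```

Vertex $0$ (the $K_1$) is the unique dominating, hence central, vertex; every other vertex misses exactly its matching partner and has eccentricity $2$.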
{\bf Theorem 10.} {\it There exists an almost peripheral graph of order $n\ge 7$ with maximum degree $\Delta$ if and only if $\Delta\in \{3,\,4,\ldots,n-4,\,n-1\}.$ } {\bf Proof.} Suppose that $G$ is an almost peripheral graph of order $n\ge 7$ with maximum degree $\Delta.$ Clearly $3\le \Delta\le n-1.$ We first exclude the two values $n-2$ and $n-3$ for $\Delta.$ To the contrary suppose $\Delta=n-2$ or $n-3.$ Note that ${\rm rad}(G)\ge 2$ and ${\rm diam}(G)={\rm rad}(G)+1\ge 3.$ Let $x\in V(G)$ with ${\rm deg}(x)=\Delta.$ There exists a vertex $y$ with $y\not\in N[x]$ such that $y$ and $x$ have a common neighbor $w.$ If $\Delta=n-2,$ then both $x$ and $w$ have eccentricity at most $2,$ implying that they are central vertices, a contradiction. Suppose $\Delta=n-3.$ Let $z$ be the vertex outside $N[x]\cup\{y\}.$ We always have ${\rm rad}(G)=2$ and hence ${\rm diam}(G)={\rm rad}(G)+1=3.$ If $z$ and $y$ are adjacent, then $w$ is the central vertex. Since $ecc(x)=3$ and $z$ is the only possible eccentric vertex of $x,$ we deduce that $z$ is nonadjacent to any vertex in $N[x].$ It follows that $z$ is a leaf. Since $ecc(z)=3,$ $y$ is another central vertex, a contradiction. If $z$ and $y$ are nonadjacent, then $N(x)\cap N(z)\neq\phi.$ In this case, $x$ is the central vertex. Since ${\rm diam}(G)=3,$ there exists a $(y,z)$-path $P$ of length $2$ or $3.$ Then any internal vertex of $P$ is a central vertex different from $x,$ a contradiction. Conversely we will show that every number in $\{3,\,4,\ldots,n-4,\,n-1\}$ can be attained. The star of order $n$ is an almost peripheral graph with maximum degree $n-1.$ Next, for each $\Delta$ with $3\le\Delta\le n-4$ we construct an almost peripheral graph $G(n,\Delta)$ of order $n$ with maximum degree $\Delta.$ We will first construct all $G(n,3)$ for $n=7,8,\ldots,$ and then inductively construct the remaining $G(n,\Delta)$ with $\Delta\ge 4.$ $G(7,3),$ $G(8,3),$ $G(9,3),$ and $G(10,3)$ are depicted in Figure 4. 
\vskip 3mm \par \centerline{\includegraphics[width=4.2in]{Fig4.jpg}} \par We will need the four preliminary graphs in Figure 5. \vskip 3mm \par \centerline{\includegraphics[width=4.5in]{Fig5.jpg}} \par Now let $n\ge 11$ and denote $k=\lfloor (n+5)/4\rfloor.$ If $n\equiv 3\,\,\,{\rm mod}\,\,\,4,$ $G(n,3)$ is obtained from the graph in Figure 5 (1) by replacing the edges $ab$ and $bd$ by a path of length $k-1,$ and replacing the edges $ac$ and $cd$ by a path of length $k-2;$ if $n\equiv 0\,\,\,{\rm mod}\,\,\,4,$ $G(n,3)$ is obtained from the graph in Figure 5 (2) by replacing the edges $ef$ and $fh$ by a path of length $k-1,$ and replacing the edges $eg$ and $gh$ by a path of length $k-2;$ if $n\equiv 1\,\,\,{\rm mod}\,\,\,4,$ $G(n,3)$ is obtained from the graph in Figure 5 (3) by replacing the edges $ik,$ $kp,$ $sq$ and $qp$ by a path of length $k-2,$ $k-1,$ $k-3$ and $k-2$ respectively; if $n\equiv 2\,\,\,{\rm mod}\,\,\,4,$ $G(n,3)$ is obtained from the graph in Figure 5 (4) by replacing the edges $tu,$ $uy,$ $zw$ and $wy$ by a path of length $k-3,$ $k-1,$ $k-3$ and $k-2$ respectively. 
For a vertex $v$ in a graph, the operation of {\it duplicating $v$} means adding a new vertex $x$ together with edges incident to $x$ such that $N(x)=N(v).$ Note that for $n=7,$ $n-4=3$ and that every $G(n,3)$ constructed above contains a vertex of degree $3$ that has a non-central neighbor of degree $2.$ Now suppose that we have constructed $G(n,3),$ $G(n,4),$ $\ldots,$ $G(n,n-4),$ where $G(n,\Delta)$ contains a vertex of degree $\Delta$ that has a non-central neighbor $x_{\Delta}$ of degree $2,$ $\Delta=3,\ldots, n-4.$ Then in $G(n,\Delta),$ duplicate the vertex $x_{\Delta}$ to obtain a new graph which we denote by $G(n+1,\Delta+1).$ Thus we can construct $G(n+1,3),$ $G(n+1,4),$ $\ldots,$ $G(n+1,n-3),$ which satisfy all the requirements and the additional condition of containing a vertex of maximum degree that has a non-central neighbor of degree $2.$ Thus the inductive steps can continue. $\Box$ Finally we consider the maximum number of vertices of maximum degree in an almost peripheral graph. {\it Blowing up a vertex $v$ in a graph into the complete graph $K_t$} is the operation of replacing $v$ by $K_t$ and adding edges joining each vertex in $N(v)$ to each vertex in $K_t.$ {\bf Definition 3.} A vertex $v$ in a graph $G$ is called a {\it top vertex} if ${\rm deg}(v)=\Delta(G).$ {\bf Theorem 11.} {\it The maximum number of top vertices in an almost peripheral graph of order $n\ge 8$ with maximum degree $n-4$ is $n-5,$ and this maximum number is uniquely attained by the graph obtained from the graph of order $7$ in Figure 4 by blowing up a non-central vertex of degree $3$ into $K_{n-6}.$ } {\bf Proof.} First, it is easy to verify that the extremal graph given in Theorem 11 is an almost peripheral graph of order $n$ with maximum degree $n-4$ that has $n-5$ top vertices.
Let $G$ be an almost peripheral graph of order $n\ge 8$ with maximum degree $n-4.$ We may suppose that $G$ has at least three top vertices, since otherwise the number of top vertices in $G$ is less than $n-5.$ Recall that ${\rm diam}(G)={\rm rad}(G)+1\ge 2.$ Let $x$ be a peripheral vertex of degree $n-4.$ Then there are only three vertices outside $N[x].$ We will use the fact that every vertex in $N_i(x)$ has at least one neighbor in $N_{i-1}(x)$ for $1\le i\le {\rm diam}(G).$ The proof consists of a series of claims. {\bf Claim 1.} ${\rm diam}(G)=3.$ Clearly ${\rm diam}(G)=ecc(x)\le 4.$ If $ecc(x)=4,$ let $x,r,s,p,q$ be a diametral path. Then $ecc(r)\le 3$ and $ecc(s)\le 3,$ implying that both $r$ and $s$ are central vertices, a contradiction. Thus ${\rm diam}(G)\le 3.$ On the other hand, it is impossible that ${\rm diam}(G)=2,$ since otherwise ${\rm rad}(G)=1,$ implying that $\Delta(G)=n-1,$ a contradiction. Hence ${\rm diam}(G)=3.$ {\bf Claim 2.} The vertex $x$ has only one eccentric vertex, which is not a leaf. If $|N_3(x)|=2,$ then $|N_2(x)|=1.$ Now the vertex in $N_2(x)$ and its neighbors in $N(x)$ are central vertices, a contradiction. Thus $x$ has only one eccentric vertex, which we denote by $w.$ Let $N_2(x)=\{u,\,v\}.$ If $w$ is a leaf, without loss of generality, suppose $u$ is the neighbor of $w.$ Since $ecc(w)\le 3,$ we deduce that $ecc(u)\le 2;$ i.e., $u$ is a central vertex. If $u$ and $v$ are adjacent, then every neighbor of $u$ in $N(x)$ is also a central vertex, a contradiction; if $u$ and $v$ are nonadjacent, then $d(u,\,v)=2,$ implying that $u$ and $v$ have a common neighbor $y$ in $N(x).$ But then $y$ is also a central vertex, a contradiction again. Claim 2 shows that $N(w)=\{u,\,v\}.$ {\bf Claim 3.} $u$ and $v$ have at most one common neighbor in $N(x).$ This holds since every common neighbor of $u$ and $v$ in $N(x)$ is a central vertex. {\bf Claim 4.} $u$ and $v$ are nonadjacent. To the contrary, assume that $u$ and $v$ are adjacent. 
Then any neighbor of either $u$ or $v$ in $N(x)$ is a central vertex. It follows that $u$ and $v$ have a common neighbor $y$ in $N(x)$ and $y$ is their only neighbor in $N(x).$ Now any vertex $z\in N(x)\setminus\{y\}$ must be adjacent to $y,$ since $d(z,\,w)\le 3.$ Consequently ${\rm deg}(y)=n-2>n-4=\Delta(G),$ a contradiction. {\bf Claim 5.} Neither $u$ nor $v$ is a top vertex. To the contrary, assume ${\rm deg}(u)=n-4.$ By Claim 4, $|N(u)\cap N(x)|=n-5.$ Let $z\in N(x)$ be the nonneighbor of $u.$ If $d(u,z)=2,$ then $u$ is a central vertex and $u,\,v$ have no common neighbor in $N(x).$ Hence $v$ is adjacent to $z.$ Let $y$ be a common neighbor of $u$ and $z.$ Then $y$ is also a central vertex, a contradiction. Hence $d(u,z)=3.$ The condition $d(z,w)\le 3$ implies that $z$ is adjacent to $v.$ If $u$ and $v$ have no common neighbor in $N(x),$ then $G$ is self-centered, a contradiction. If $u$ and $v$ have a common neighbor in $N(x),$ then $x$ and $u$ are the only two vertices with degree $n-4,$ contradicting our assumption that $G$ has at least three top vertices. Similarly we can prove that ${\rm deg}(v)<n-4.$ {\bf Claim 6.} Each of $u$ and $v$ has at least two neighbors in $N(x).$ To the contrary, assume that $N(u)\cap N(x)=\{y\}.$ Then for any vertex $z\in N(x)\setminus\{y\},$ $z$ cannot be adjacent to both $y$ and $v$, since otherwise $y$ and $z$ are central vertices. Considering $d(z, u)$ and $d(z, w)$ we deduce that $z$ is a peripheral vertex. If $y$ and $v$ are nonadjacent, then $G$ is self-centered, a contradiction. If $y$ and $v$ are adjacent, then $y$ is the central vertex. Since $d(z, w)\le 3,$ we obtain $d(z, v)\le 2.$ It follows that $v$ is also a central vertex, a contradiction. Similarly we can prove that $v$ has at least two neighbors in $N(x).$ {\bf Claim 7.} $G$ has at most $n-5$ top vertices and the extremal graph is unique. 
By Claim 3 and Claim 6, $u$ has a neighbor $y$ in $N(x)$ that is nonadjacent to $v,$ and $v$ has a neighbor $z$ in $N(x)$ that is nonadjacent to $u.$ Note that if $f$ is a neighbor of $u$ in $N(x)$ and $g$ is a neighbor of $v$ in $N(x)$ with $f\neq g,$ then $f$ and $g$ are nonadjacent, since otherwise $f$ and $g$ are central vertices. Using Claim 6 again we deduce that neither $y$ nor $z$ has maximum degree. Thus $G$ has at least the five vertices $y,z,u,v,w$ with degrees less than $n-4.$ It follows that $G$ has at most $n-5$ top vertices. Conversely, suppose $G$ has $n-5$ top vertices. Then the above analysis shows that (1) each of $u$ and $v$ has exactly two neighbors in $N(x);$ (2) $u$ and $v$ have exactly one common neighbor $h$ in $N(x);$ and (3) the closed neighborhood of every vertex in $N(x)\setminus \{y,z,h\}$ is equal to $N[x].$ Consequently $G$ is the graph obtained from the graph of order $7$ in Figure 4 by blowing up a non-central vertex of degree $3$ into $K_{n-6}.$ This completes the proof. $\Box$ The extremal graph of order $10$ in Theorem 11 is depicted in Figure 6. \vskip 3mm \par \centerline{\includegraphics[width=3.2in]{Fig6.jpg}} \par \vskip 5mm {\bf Acknowledgement.} This research was supported by the NSFC grants 11671148 and 11771148 and Science and Technology Commission of Shanghai Municipality (STCSM) grant 18dz2271000. \end{document}
\begin{document} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{rem}[thm]{Remark} \newtheorem{eg}[thm]{Example} \newtheorem{defn}[thm]{Definition} \newtheorem{assum}[thm]{Assumption} \renewcommand{\theequation}{\arabic{equation}} \def\thesection{\arabic{section}} \title{\bf An Elementary Proof for the Structure of Wasserstein Derivatives} \author{ Cong Wu \thanks{\noindent Department of Mathematics, University of Southern California, Los Angeles, CA 90089. E-mail: [email protected].} ~ and ~ Jianfeng Zhang \thanks{\noindent Department of Mathematics, University of Southern California, Los Angeles, CA 90089. E-mail: [email protected]. This author is supported in part by NSF grant \#1413717.} } \date{} \maketitle \begin{abstract} Let $F: \mathbb{L}^2(\Omega, \mathbb{R})\footnote{The space $\mathbb{R}$ can be replaced with the general $\mathbb{R}^d$. We assume $d=1$ here for simplicity.} \to \mathbb{R}$ be a law invariant and continuously Fr\'echet differentiable mapping. Based on Lions \cite{Lions}, Cardaliaguet \cite{Cardaliaguet} (Theorems 6.2 and 6.5) proved that \begin{equation} \label{Derivative} D F (\xi) = g(\xi), \end{equation} where $g: \mathbb{R}\to \mathbb{R}$ is a deterministic function which depends only on the law of $\xi$. See also Carmona \& Delarue \cite{CD}, Section 5.2, and Gangbo \& Tudorascu \cite{GT}. In this short note we provide an elementary proof of this well-known result. This note is part of our accompanying paper \cite{WZ}, which deals with a more general situation. \end{abstract} \bigskip \noindent Let ${\cal P}_2(\mathbb{R})$ denote the set of square integrable probability measures on $\mathbb{R}$, and consider a mapping $f: {\cal P}_2(\mathbb{R}) \to \mathbb{R}$.
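For intuition, here is a small numerical illustration (a Python sketch, not part of the note; the test function $f$ is chosen purely for illustration): take $f(\mu) = (\int x\,\mu(dx))^2$, whose lift is $F(\xi) = (\mathbb{E}[\xi])^2$ with Fr\'echet derivative $DF(\xi) = 2\mathbb{E}[\xi]$, a constant random variable. Difference quotients of $f$ that move a single atom of a discrete law recover exactly this constant, so the resulting $g$ depends on $\xi$ only through its law.

```python
# Illustration: f(mu) = (integral of x dmu)^2 lifts to F(xi) = E[xi]^2,
# whose derivative is DF(xi) = 2*E[xi].  We check this for a discrete law
# by difference quotients that move one atom x_i to x_i + eps.
xs = [0.0, 1.0, 3.0]        # atoms of an illustrative discrete law
ps = [0.2, 0.5, 0.3]        # their probabilities

def f(points):
    """f evaluated at a discrete measure given as (atom, weight) pairs."""
    return sum(p * x for x, p in points) ** 2

def g_fd(i, eps=1e-6):
    """Difference quotient moving atom i by eps, normalized by eps * p_i."""
    base = list(zip(xs, ps))
    pert = [(x + eps if j == i else x, p) for j, (x, p) in enumerate(base)]
    return (f(pert) - f(base)) / (eps * ps[i])

mean = sum(p * x for x, p in zip(xs, ps))
print([round(g_fd(i), 3) for i in range(3)], 2 * mean)
```

All three quotients agree with $2\mathbb{E}[\xi]$ up to the finite-difference error, regardless of which atom is perturbed.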
As in the standard literature, we lift $f$ to a function $F: \mathbb{L}^2(\Omega, \mathbb{R})\to \mathbb{R}$ by $F(\xi):=f({\cal L}_\xi)$, where $(\Omega,{\cal F},\mathbb{P})$ is an atomless Polish probability space and ${\cal L}_\xi$ denotes the law of $\xi$. If $F$ is Fr\'echet differentiable, then $DF(\xi)$ can be identified with an element of $\mathbb{L}^2(\Omega, \mathbb{R})$: \begin{equation} \label{Frechet} \mathbb{E}\big[ DF(\xi)\, \eta\big] = \lim_{\varepsilon\to 0} {F(\xi + \varepsilon \eta) - F(\xi) \over \varepsilon},\quad \mbox{for all}~\eta \in \mathbb{L}^2(\Omega, \mathbb{R}). \end{equation} We start with the simple case that $\xi$ is discrete. Let $\delta_x$ denote the Dirac measure at $x$. \begin{prop} \label{prop-discrete} Assume $\xi$ is discrete: $\mathbb{P}(\xi = x_i) = p_i$, $i\ge 1$. If $F$ is Fr\'echet differentiable at $\xi$, then \reff{Derivative} holds with \begin{equation} \label{formula1} g(x_i) := \lim_{\varepsilon\to 0} {f(\sum_{j\neq i} p_j \delta_{x_j} + p_i \delta_{x_i+\varepsilon}) - f(\sum_{j\ge 1} p_j \delta_{x_j})\over \varepsilon p_i}, \quad i\ge 1. \end{equation} \end{prop} To prove the proposition, we need the following result. \begin{lem} \label{lem-elem} Let $X\in \mathbb{L}^2(\Omega, \mathbb{R})$. Assume $A\in {\cal F}$ with $\mathbb{P}(A)>0$ satisfies \begin{equation} \label{elem} \mathbb{E}[X {\bf 1}_{A_1}]=\mathbb{E}[X {\bf 1}_{A_2}], \quad\mbox{for all}~ A_1,A_2\subset A~ \mbox{such that}~\mathbb{P}(A_1) = \mathbb{P}(A_2). \end{equation} Then $X$ is a constant, $\mathbb{P}$-a.s. on $A$. \end{lem} \proof This result is elementary; we nevertheless provide a proof for completeness. Assume the result is not true.
Denote $c := {\mathbb{E}[X{\bf 1}_A]\over \mathbb{P}(A)}$ and $A_1 := \{X <c\}\cap A$, $A_2:= \{X>c\}\cap A$. Then $\mathbb{P}(A_1)>0$, $\mathbb{P}(A_2) >0$. Assume without loss of generality that $\mathbb{P}(A_1) \le \mathbb{P}(A_2)$. Since $(\Omega, {\cal F}, \mathbb{P})$ is atomless, there is a random variable $U$ with uniform distribution on $[0, 1]$. Denote $A_{2,x} := A_2 \cap \{U \le x\}$, $x\in [0, 1]$. Clearly there exists $x_0$ such that $\mathbb{P}(A_{2,x_0}) = \mathbb{P}(A_1)$. Applying \reff{elem} to $A_1$ and $A_{2,x_0}$ we obtain the desired contradiction. \hfill $\Box$ \begin{rem} \label{rem-elem} {\rm Lemma \ref{lem-elem} may not hold if $(\Omega, {\cal F}, \mathbb{P})$ has atoms. Indeed, consider $\Omega := \{\omega_1, \omega_2\}$ with $\mathbb{P}(\omega_1) = {1\over 3}, \mathbb{P}(\omega_2) = {2\over 3}$. Set $A:= \Omega$ and let $X$ be an arbitrary random variable. Then \reff{elem} holds trivially because $\mathbb{P}(A_1) \neq \mathbb{P}(A_2)$ whenever $A_1\neq A_2$. However, $X$ may not be a constant. \hfill $\Box$} \end{rem} \noindent{\bf Proof of Proposition \ref{prop-discrete}.} Fix $i\ge 1$. For an arbitrary $A_1 \subset A:= \{\xi = x_i\}$, set $\eta := {\bf 1}_{A_1}$. Note that, for any $\varepsilon > 0$, we have \[ {\cal L}_{\xi + \varepsilon \eta} = \sum_{j\neq i} p_j \delta_{x_j} + \mathbb{P}(A_1) \delta_{x_i + \varepsilon} + [p_i - \mathbb{P}(A_1)] \delta_{x_i}, \] which depends only on ${\cal L}_\xi$ and $\mathbb{P}(A_1)$.
By \reff{Frechet}, \begin{equation} \label{formula0} \mathbb{E}\big[DF(\xi) {\bf 1}_{A_1}\big] = \lim_{\varepsilon\to 0} {f\big( \sum_{j\neq i} p_j \delta_{x_j} + \mathbb{P}(A_1) \delta_{x_i + \varepsilon} + [p_i - \mathbb{P}(A_1)] \delta_{x_i}\big) - f(\sum_{j\ge 1} p_j \delta_{x_j})\over \varepsilon}. \end{equation} In particular, $\mathbb{E}\big[DF(\xi) {\bf 1}_{A_1}\big]$ depends only on $\mathbb{P}(A_1)$ for $A_1\subset \{\xi = x_i\}$. Applying Lemma \ref{lem-elem}, we see that $DF(\xi)$ is a constant, $\mathbb{P}$-a.s. on $\{\xi=x_i\}$. Now setting $A_1 := \{\xi=x_i\}$ in \reff{formula0}, we obtain \reff{formula1} immediately. \hfill $\Box$ We now consider the general case. \begin{thm} \label{thm-general} If $F$ is continuously Fr\'echet differentiable, then \reff{Derivative} holds with $g$ depending only on ${\cal L}_\xi$ but not on the particular choice of $\xi$. \end{thm} \proof For each $n\ge 1$, denote $x^n_i := i 2^{-n}$, $i \in \mathbb{Z}$, and $\xi_n := \sum_{i=-\infty}^\infty x^n_i {\bf 1}_{\{x^n_i \le \xi < x^n_{i+1}\}}$. Since $\xi_n$ is discrete, by Proposition \ref{prop-discrete} we have $DF(\xi_n) = g_n(\xi_n)= \tilde g_n(\xi)$, where $g_n$ is defined on $\{x^n_i, i\in \mathbb{Z}\}$ by \reff{formula1} (with $g_n(x^n_i) :=0$ when $\mathbb{P}(\xi_n = x^n_i)=0$) and $\tilde g_n(x) := g_n(x^n_i)$ for $x\in [x^n_i, x^n_{i+1})$. Clearly $\lim_{n\to \infty}\mathbb{E}[|\xi_n-\xi|^2] = 0$. Then by the continuous differentiability of $F$ we see that $\lim_{n\to\infty} \mathbb{E}[|\tilde g_n(\xi) - DF(\xi)|^2] = 0$. Thus there exists a subsequence $\{n_k\}_{k\ge 1}$ such that $\tilde g_{n_k} (\xi) \to D F(\xi)$, $\mathbb{P}$-a.s.
Denote $K := \{x: \limsup_{k\to\infty} \tilde g_{n_k}(x) = \liminf_{k\to\infty} \tilde g_{n_k}(x)\}$, and $g(x) := \lim_{k\to \infty} \tilde g_{n_k}(x){\bf 1}_K(x)$. Then $\mathbb{P}(\xi\in K)=1$ and $D F(\xi) = g(\xi)$, $\mathbb{P}$-a.s. Moreover, let $\xi'$ be another random variable such that ${\cal L}_{\xi'}={\cal L}_\xi$. Define $\xi_n'$ similarly. Then $D F(\xi'_n) = \tilde g_n(\xi')$ for the same function $\tilde g_n$. Note that $\mathbb{P}(\xi'\in K) = \mathbb{P}(\xi\in K) =1$, so $\lim_{k\to\infty} \tilde g_{n_k}(\xi') = g(\xi')$, $\mathbb{P}$-a.s. On the other hand, $D F(\xi'_{n_k}) \to DF (\xi')$ in $\mathbb{L}^2$. So $D F(\xi') = g(\xi')$, and thus $g$ does not depend on the choice of $\xi$. \qed \begin{rem} \label{rem-joint} {\rm One may also write $D F(\xi) = g({\cal L}_\xi, \xi)$, where $g: {\cal P}_2(\mathbb{R}) \times \mathbb{R} \to \mathbb{R}$. When $DF$ is uniformly continuous, one may easily construct $g$ jointly measurable in $(\mu, x) \in {\cal P}_2(\mathbb{R}) \times \mathbb{R}$. One may also extend the result to the case where $F$ is a function of processes. We leave the details to \cite{WZ}. \qed } \end{rem} \begin{thebibliography}{1} \bibitem{Cardaliaguet} Cardaliaguet, P. (2013), {\it Notes on Mean Field Games (from P.-L. Lions' lectures at Coll\`ege de France)}, preprint, www.ceremade.dauphine.fr/$\sim$cardalia/MFG100629.pdf. \bibitem{CD} Carmona, R. and Delarue, F. (2017), {\sl Probabilistic Theory of Mean Field Games I -- Mean Field FBSDEs, Control, and Games}, Springer Verlag. \bibitem{GT} Gangbo, W. and Tudorascu, A.
{\it On differentiability in the Wasserstein space and well-posedness for Hamilton-Jacobi equations}. Technical report, 2017. \bibitem{Lions} Lions, P.-L. {\sl Cours au Coll\`ege de France}. www.college-de-france.fr. \bibitem{WZ} Wu, C. and Zhang, J. {\it Viscosity Solutions to Parabolic Master Equations and McKean-Vlasov SDEs with Closed-loop Controls}, preprint, arXiv:1805.02639. \end{thebibliography} \end{document}
\begin{document} \title{Sparse Quadrature for High-Dimensional Integration with Gaussian Measure\thanks{This work is supported by DARPA's EQUiPS program under contract number W911NF-15-2-0121.}} \slugger{sisc}{xxxx}{xx}{x}{x--x} \begin{abstract} In this work we analyze the dimension-independent convergence property of an abstract sparse quadrature scheme for numerical integration of functions of high-dimensional parameters with Gaussian measure. Under certain assumptions on the exactness and the boundedness of univariate quadrature rules as well as on the regularity of the parametric functions with respect to the parameters, we obtain the convergence rate $O(N^{-s})$, where $N$ is the number of indices and $s$ is independent of the number of the parameter dimensions. Moreover, we propose both an a-priori and an a-posteriori scheme for the construction of a practical sparse quadrature rule and perform numerical experiments to demonstrate their dimension-independent convergence rates. \end{abstract} \begin{keywords} uncertainty quantification, high-dimensional integration, curse of dimensionality, convergence analysis, Gaussian measure, sparse grids, a-priori construction, a-posteriori construction \end{keywords} \begin{AMS} 65C20, 65D30, 65D32, 65N12, 65N15, 65N21 \end{AMS} \pagestyle{myheadings} \thispagestyle{plain} \markboth{Sparse Quadrature for High-dimensional Integration with Gaussian Measure}{P. Chen} \section{Introduction} In the mathematical modelling of a physical system, uncertainties may arise from various sources of the system input, such as material properties, initial/boundary conditions, and computational geometries. These uncertainties lead to discrepancies between experimental/observational data and the output of mathematical models in many computational science and engineering fields.
How to propagate the uncertainties through the mathematical models and how to calibrate them with given data are known as uncertainty quantification (UQ) problems \cite{ghanem2003stochastic, le2010introduction, xiu2010numerical, smith2013uncertainty}. One of the central tasks of UQ is to compute the integral of some quantity of interest related to the solution with respect to the probability law of the uncertain input. When the uncertain input is approximated by many or a countably infinite number of random variables or parameters, e.g., by Karhunen--Lo\`eve expansion \cite{Schwab2006Karhunen}, one faces high/infinite-dimensional integration problems. Since the integral with respect to the parameters cannot be computed analytically in general, numerical integration based on certain quadrature rules has to be employed. However, it is highly challenging to perform high-dimensional numerical integration, as the computational complexity grows exponentially fast with respect to the number of the parameter dimensions for most deterministic quadratures, which is widely known as the ``curse of dimensionality''. On the other hand, probabilistic quadrature rules, in particular Monte Carlo \cite{caflisch1998monte}, are best known for breaking the curse of dimensionality. However, the convergence of these quadrature rules is often very slow; e.g., the convergence rate of Monte Carlo quadrature is $O(M^{-1/2})$ with $M$ samples, even for functions smoothly depending on low-dimensional parameters. Recent years have seen a great development of sparse quadrature -- numerical integration based on sparse grids \cite{gerstner1998numerical, gerstner2003dimension, xiu2005high, bungartz2004sparse, nobile2008sparse, babuvska2010stochastic, Schillings2013, beck2014quasi, chen2015new} -- to efficiently deal with high-dimensional integration problems.
Ample numerical evidence shows that the curse of dimensionality can be alleviated and/or broken by adaptive allocation of the quadrature points in different dimensions \cite{gerstner1998numerical, gerstner2003dimension, griebel2010dimension, Schillings2013, nobile2014convergence, chen2015sparse, chen2016sparse}, which is also observed for interpolation problems by the same or similar dimension-adaptive algorithms \cite{nobile2008anisotropic, ma2009adaptive, chkifa2014high, chkifa2015breaking}. The dimension-independent convergence rate of the sparse quadrature for infinite-dimensional integration with respect to uniformly distributed parameters was proved in \cite{Schillings2013, Schillings2014}, based on the dimension-independent convergence of Legendre/Taylor polynomial chaos approximations of stochastic problems in \cite{cohen2010convergence, cohen2011analytic, chkifa2015breaking}. Different approximation methods for stochastic problems with (lognormal) Gaussian random parameters have been studied in \cite{li2007probabilistic, lin2009efficient, gittelson2010stochastic, schillings2011efficient, charrier2012strong, chen2013weightedrbm, ernst2014stochastic, graham2015quasi, kuo2015multilevel, nobile2016adaptive}. More recently, a dimension-independent convergence rate of the polynomial chaos (based on Hermite polynomials) approximation for an elliptic problem with lognormal coefficients was obtained in \cite{hoang2014n}, and this rate was improved in \cite{bachmayr2017sparse}. A convergence result based on \cite{bachmayr2017sparse} was obtained in \cite{ernst2016convergence} for a sparse collocation method. In this work, we show the dimension-independent convergence rate of an abstract sparse quadrature scheme for infinite-dimensional integration problems with i.i.d. standard Gaussian distributed parameters.
The result holds under certain assumptions on the exactness and the boundedness of univariate quadrature rules, and certain regularity assumptions on the parametric functions with respect to the parameters. In particular, only finitely many weighted derivatives are required to exist, as in \cite{bachmayr2017sparse}, compared to an analytic regularity requirement for the result with uniform distribution in \cite{Schillings2013}. Two examples are provided to illustrate the regularity assumptions, including an infinite-dimensional nonlinear parametric function and an elliptic PDE with nonlinear parametric lognormal coefficients. The key of the proof relies on three results: (1) the exactness and the boundedness of the sparse quadrature in an arbitrary number of dimensions; (2) the bound of the sparse quadrature error by a weighted sum of the Hermite coefficients; (3) the summability of a weighted sequence of the coefficients arising from the regularity assumptions on the parametric function. Based on the proof, we propose an a-priori construction of the sparse quadrature, whose error is guaranteed to converge with a dimension-independent convergence rate with respect to the number of indices. We also present a goal-oriented a-posteriori construction of the sparse quadrature, which turns out to be more accurate for the test examples. Both the a-priori and the a-posteriori construction schemes are built on several univariate quadrature rules, including the non-nested Gauss--Hermite quadrature rule \cite{gil2007numerical}, the nested transformed Gauss--Kronrod--Patterson (or Gauss--Patterson) quadrature rule \cite{gerstner1998numerical}, and the nested Genz--Keister quadrature rule \cite{genz1996fully}. We will investigate and compare the convergence properties of the construction schemes with different quadrature rules in high dimensions.
Numerical experiments on the sparse quadrature for a nonlinear parametric function and an elliptic parametric PDE are performed to demonstrate the dimension-independent convergence rate, and to compare the a-priori and the a-posteriori construction schemes with different quadrature rules. The rest of the paper is organized as follows. In Section \ref{sec:ASQ} we present the sparse quadrature. Several univariate quadrature rules are introduced in hierarchical representation in Section \ref{subsec:UnivQuad}, followed by the presentation of tensorization of these rules in Section \ref{subsec:TensQuad} and of the sparse quadrature in Section \ref{subsec:ASQ}. Section \ref{sec:ConvAnal} is devoted to a convergence analysis of the sparse quadrature, with a dimension-independent convergence rate obtained in the main theorem in Section \ref{subsec:DimenIndeCon} and two examples shown to satisfy the regularity assumptions in Section \ref{subsec:Examples}. In Section \ref{sec:construction} we introduce an a-priori scheme (in Section \ref{sec:aprioricons}) and an a-posteriori scheme (in Section \ref{sec:aposteriori}) for the construction of the sparse quadrature. We present two sets of numerical experiments in Section \ref{sec:Numerics}, one on the sparse quadrature for numerical integration of an infinite-dimensional parametric function in Section \ref{subsec:function} and the other on numerical integration of two quantities of interest related to the solution of an elliptic parametric PDE in Section \ref{subsec:PDE}. In the last Section \ref{sec:Conclusion} we conclude with some further research perspectives. \section{Sparse quadrature with Gaussian measure} In this section, we present a sparse quadrature for numerical integration of a function of high/infinite-dimensional parameters with Gaussian measure. First, we formulate a hierarchical representation of a univariate quadrature with three different quadrature rules.
Then a tensor-product quadrature is constructed by tensorization of the univariate quadrature. The sparse quadrature is then defined by a sum of the tensorized univariate quadrature over an admissible index set. \label{sec:ASQ} \subsection{Univariate quadrature}\label{subsec:UnivQuad} Let $f: {\mathbb{R}} \to \mathcal{S}$ be a univariate function of a random variable with standard Gaussian (or normal) distribution $N(0,1)$, which takes values in some Banach space $\mathcal{S}$. Let $I$ denote an \emph{integral operator} defined as \begin{equation} I(f) = \int_{{\mathbb{R}}} f(y) d\gamma(y), \end{equation} where $\gamma$ is the Gaussian measure with the probability density function $\rho(y)$ given by \begin{equation} \rho(y) = \frac{1}{\sqrt{2\pi}} e^{-y^2/2}. \end{equation} We introduce a sequence of \emph{quadrature operators} $\{\mathcal{Q}_l\}_{l\geq 0}$ indexed by \emph{level} $l \in {\mathbb{N}}$, defined as \begin{equation}\label{eq:UnivQuad} \mathcal{Q}_l(f) = \sum_{k = 0}^{m_l-1} w_k^l f(y_k^l), \quad l \geq 0, \end{equation} where $y_k^l \in {\mathbb{R}}$ and $w_k^l \in {\mathbb{R}}$, $k = 0, \dots, m_l-1$, represent quadrature points and weights; $m_l$ is the number of the quadrature points at level $l$, which satisfies $ m_0 = 1 \text{ and } m_l < m_{l+1}. $ We consider two classical choices of $m_l$ \cite{genz1996fully, klimke2006uncertainty, babuvska2010stochastic} -- adding one point or doubling the number of points from level $l$ to $l+1$, i.e., $ m_l = l + 1 \text{ or } m_l = 2^{l+1}-1. $ Let $\{\triangle_l\}_{l \geq 0}$ denote a set of \emph{difference quadrature operators}, which are defined as \begin{equation}\label{eq:DiffOper} \triangle_l = \mathcal{Q}_l - \mathcal{Q}_{l-1}, \quad l \geq 0\;, \end{equation} where we set $\mathcal{Q}_{-1} = 0$ by convention, i.e., $\mathcal{Q}_{-1} (f) = 0$.
Then we obtain a hierarchical representation of $\mathcal{Q}_l$ through a telescopic sum of $\triangle_i$, $i = 0, \dots, l$, i.e., \begin{equation} \mathcal{Q}_l = \sum_{i = 0}^l \triangle_i\;. \end{equation} As for the quadrature points and weights in \eqref{eq:UnivQuad} as well as the specific number of points in each level, we consider the following ones. \begin{enumerate} \item \textbf{Gauss--Hermite (GH) quadrature.} A Gauss quadrature is used for the approximation of the integral with the density $\rho$ as the weight function \cite{gil2007numerical}, where $y_0^0 = 0$ and $w_0^0 = 1$ for $l = 0$, and for $l \geq 1$, $y_k^l$, $k = 0, \dots, m_l-1$, are the roots of the orthonormal (with respect to $\rho$) Hermite polynomial $H_n$ for $n = m_l$, where \begin{equation}\label{eq:Hermite} H_{n}(y) = \frac{(-1)^{n}}{\sqrt{n!}} \frac{\rho^{(n)}(y)}{\rho(y)}, \quad n \geq 0 \;, \end{equation} and the weights $w_k^l$, $k = 0, 1, \dots, m_l-1$, are given by \begin{equation}\label{eq:HermiteWeight} w_k^l = \frac{1}{m_l(H_{m_l-1}(y_k^l))^2}\;. \end{equation} Note that this quadrature rule is provided for the weight function $\rho(y)$ instead of $e^{-y^2}$ in the classical formula \cite[\S 5.3]{gil2007numerical}. It is exact with $m_l$ points for polynomials of degree up to $2m_l - 1$, the maximum possible exactness. However, the quadrature points are not nested in the sense that $\{y^l_k\}$ are not included in $\{y^{l'}_k\}$ for $l' > l$ (except for $l = 0$ and $m_{l'}$ odd, which share the point $y = 0$), so that we need to evaluate the function at all the quadrature points at each level $l$. As for the number of points $m_l$ at each level $l$, we consider $m_l = l+1$ (denoted as GH1) and $m_l = 2^{l+1}-1$ (GH2).
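As a concrete illustration, the following minimal Python/NumPy sketch (the names \texttt{gh\_rule}, \texttt{Q}, and \texttt{delta} are ours, not the paper's) builds the univariate Gauss--Hermite rule for the density $\rho$, the difference operators $\triangle_l$, and checks the telescoping identity $\mathcal{Q}_l = \sum_{i=0}^{l}\triangle_i$:

```python
import numpy as np

def gh_rule(m):
    """m-point Gauss-Hermite rule for the standard normal density rho.

    numpy's hermgauss targets the weight e^{-x^2}; the change of
    variables y = sqrt(2) x and w -> w / sqrt(pi) adapts it to rho.
    """
    x, w = np.polynomial.hermite.hermgauss(m)
    return np.sqrt(2.0) * x, w / np.sqrt(np.pi)

def Q(l, f, m_of_l=lambda l: l + 1):
    """Quadrature operator Q_l (GH1 choice m_l = l + 1 by default)."""
    if l < 0:                       # convention Q_{-1} = 0
        return 0.0
    y, w = gh_rule(m_of_l(l))
    return np.dot(w, f(y))

def delta(l, f):
    """Difference operator  triangle_l = Q_l - Q_{l-1}."""
    return Q(l, f) - Q(l - 1, f)

# Q_l with m_l = l+1 points is exact up to degree 2*m_l - 1:
# E[y^2] = 1 and E[y^4] = 3 under the standard normal.
assert abs(Q(1, lambda y: y**2) - 1.0) < 1e-12
assert abs(Q(2, lambda y: y**4) - 3.0) < 1e-12

# telescoping:  Q_l = sum_{i=0}^{l} triangle_i
lhs = Q(5, np.cos)
rhs = sum(delta(i, np.cos) for i in range(6))
assert abs(lhs - rhs) < 1e-12
```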
\item \textbf{Transformed Gauss--Kronrod--Patterson (tGKP) quadrature.} In \cite{kronrod1965nodes}, Kronrod presented a method to add $m+1$ points to an $m$-point Gauss--Legendre quadrature rule for integration with constant weight and showed the optimality of such a nested construction in integrating polynomials. Patterson \cite{patterson1968optimum} extended this construction iteratively and obtained a nested quadrature rule with $m_l = 2^{l+1}-1$ points at level $l$ (denoted as GKP). Then for integration with a more general weight, e.g., the normal weight $\rho$ in our problem, we can make a change of variables, e.g., by the following map \begin{equation} x = F_\rho(y)\;, \end{equation} where $F_\rho$ is the cumulative distribution function given by $F_\rho(y) = \int_{-\infty}^y \rho(t)\,dt$, so that $dx = \rho(y)dy$ and the integration with weight $\rho$ can be transformed as \begin{equation} \int_{\mathbb{R}} f(y)\rho(y)dy = \int_{0}^1 f(F_\rho^{-1}(x)) dx \approx \sum_{k = 0}^{m_l-1} f(F_\rho^{-1}(x_k^l)) w_k^l\;, \end{equation} where $F_\rho^{-1}$ is the inverse of $F_\rho$, and $x^l_k$ and $w_k^l$ are the GKP points and weights at level $l$. This transformed GKP (tGKP) rule has been used, e.g., in \cite{gerstner1998numerical}. \item \textbf{Genz--Keister (GK) quadrature.} In \cite{genz1996fully}, Genz and Keister extended the GKP construction for the uniform distribution to one for the normal distribution. However, the construction does not follow that of GKP, since the quadrature points obtained by Kronrod's method at level $l = 2$ are not real valued and thus cannot be used as quadrature points. Instead, Genz and Keister showed that, among several extensions, $1, 2, 6, 10, 16$ points can be added, resulting in $m_l = 1, 3, 9, 19, 35$ points at level $l = 0, 1, 2, 3, 4$. Further extension to higher levels is limited by the construction error due to ill-conditioned matrix equations, see details in \cite{genz1996fully}.
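The inverse-CDF change of variables used by the tGKP rule above can be sketched in a few lines of Python (a minimal illustration only: we substitute a Gauss--Legendre rule on $[0,1]$ for the nested GKP rule, and use the standard-library \texttt{statistics.NormalDist} for $F_\rho^{-1}$; the name \texttt{transformed\_rule} is ours):

```python
import numpy as np
from statistics import NormalDist

def transformed_rule(m):
    """Map an m-point rule on [0,1] to a rule for the normal density rho
    via y = F_rho^{-1}(x) (Gauss-Legendre stands in for GKP here)."""
    x, w = np.polynomial.legendre.leggauss(m)   # rule on [-1, 1]
    x01 = 0.5 * (x + 1.0)                       # shift to [0, 1]
    w01 = 0.5 * w
    inv_cdf = np.vectorize(NormalDist().inv_cdf)
    return inv_cdf(x01), w01

y, w = transformed_rule(200)
# int y^2 dgamma = 1; the transformed integrand has mild (logarithmic)
# endpoint singularities, so convergence is slower than for GH.
assert abs(np.dot(w, y**2) - 1.0) < 1e-2
```

The slow endpoint convergence visible here is why nested rules tailored to the transformed integrand (GKP/GK) are preferred in practice.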
\end{enumerate} \subsection{Tensor-product quadrature} \label{subsec:TensQuad} For a given function $f: Y \to \mathcal{S}$, where $Y = {\mathbb{R}}^J$, $J \in {\mathbb{N}}$ for finite dimensions or $J = \infty$ for infinite dimensions, we consider the product measure space $(Y, \mathcal{B}(Y), {\boldsymbol{\gamma}})$ as in \cite{bachmayr2017sparse}, where $\mathcal{B}(Y)$ is the $\sigma$-algebra generated by the Borel cylinders and ${\boldsymbol{\gamma}}$ is a tensorized Gaussian probability measure. The task is to compute the integral \begin{equation}\label{eq:MultiIntegral} I(f) = \int_{Y} f({\boldsymbol{y}}) d{\boldsymbol{\gamma}}({\boldsymbol{y}})\;. \end{equation} In order to approximate \eqref{eq:MultiIntegral}, we define a tensor-product quadrature as follows. By $\mathcal{F}$ we denote a set of multi-indices ${\boldsymbol{\nu}} = (\nu_1, \dots, \nu_J) $, which is defined as \begin{equation} \mathcal{F} = \{{\boldsymbol{\nu}} \in {\mathbb{N}}^J: |{\boldsymbol{\nu}}|_1 < \infty\}, \end{equation} where $|{\boldsymbol{\nu}}|_1 = \nu_1 + \cdots + \nu_J$. Note that each ${\boldsymbol{\nu}} \in \mathcal{F}$ is finitely supported and we denote its finite support set as \begin{equation} {\mathbb{J}}_{\boldsymbol{\nu}} = \{j \in {\mathbb{N}}: \nu_j \neq 0\}.
\end{equation} Given ${\boldsymbol{\nu}} \in \mathcal{F}$, we define a multivariate quadrature operator $\mathcal{Q}_{\boldsymbol{\nu}}$ as tensorization of the univariate quadrature operators on the tensor-product grids $G_{\boldsymbol{\nu}} =\{{\boldsymbol{y}}^{\boldsymbol{\nu}}_{\boldsymbol{k}}: k_{j} = 0, \dots, m_{\nu_{j}} - 1, j \in {\mathbb{J}}_{\boldsymbol{\nu}}\}$, i.e., \begin{equation}\label{eq:TensorQuad} \mathcal{Q}_{\boldsymbol{\nu}} (f) = \bigotimes_{j \in {\mathbb{J}}_{\boldsymbol{\nu}}} \mathcal{Q}_{\nu_j}(f) \equiv \sum_{k_{j_1}=0}^{m_{\nu_{j_1}}-1} \cdots \sum_{k_{j_d}=0}^{m_{\nu_{j_d}}-1} w^{\nu_{j_1}}_{k_{j_1}} \cdots w^{\nu_{j_d}}_{k_{j_d}} f\left(y^{\nu_{j_1}}_{k_{j_1}}, \dots, y^{\nu_{j_d}}_{k_{j_d}}\right)\;, \end{equation} where we suppose ${\mathbb{J}}_{\boldsymbol{\nu}}$ is explicitly given as ${\mathbb{J}}_{\boldsymbol{\nu}} = \{j_1, \dots, j_d\}$ for some $d\in {\mathbb{N}}$, and we set $y_j = 0$ for all $j \not \in {\mathbb{J}}_{\boldsymbol{\nu}}$ and omit their appearance in the arguments of $f$ by slight abuse of notation. \emph{A full tensor-product quadrature} for approximation of \eqref{eq:MultiIntegral} is defined as $\mathcal{Q}_{{\boldsymbol{\nu}}}(f)$ for ${\boldsymbol{\nu}} = \boldsymbol{l}$, i.e., $\nu_j = l$ for each $j = 1, \dots, J$ at given $l \in {\mathbb{N}}$. However, the total computational cost of $(m_l)^J$ function evaluations grows exponentially with respect to the dimension $J$, known as \emph{curse of dimensionality}, rendering this quadrature rule computationally prohibitive for large $J$, especially when evaluation of $f$ is expensive. \subsection{Sparse quadrature} \label{subsec:ASQ} In order to alleviate the curse of dimensionality, we turn to a \emph{sparse quadrature}, which breaks the restriction of taking $\nu_j = l$ in each dimension and allows free choice of ${\boldsymbol{\nu}}\in \mathcal{F}$. 
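Before turning to the sparse construction, the tensor-product operator \eqref{eq:TensorQuad} can be illustrated by a minimal Python/NumPy sketch (the names \texttt{gh\_rule} and \texttt{tensor\_quad} are ours; dimensions outside the support are fixed at $y_j = 0$, mirroring the convention above):

```python
import itertools
import numpy as np

def gh_rule(m):
    # m-point Gauss-Hermite rule rescaled to the normal density rho
    x, w = np.polynomial.hermite.hermgauss(m)
    return np.sqrt(2.0) * x, w / np.sqrt(np.pi)

def tensor_quad(nu, f, m_of_l=lambda l: l + 1):
    """Q_nu(f): tensorization of univariate GH rules over the support of nu.

    nu: dict {dimension j: level nu_j}; dimensions not in nu are fixed
    at 0, mirroring the convention y_j = 0 outside the support set.
    """
    dims = sorted(nu)
    rules = [gh_rule(m_of_l(nu[j])) for j in dims]
    total = 0.0
    for combo in itertools.product(*(range(len(r[0])) for r in rules)):
        y = {j: rules[i][0][k] for i, (j, k) in enumerate(zip(dims, combo))}
        w = np.prod([rules[i][1][k] for i, k in enumerate(combo)])
        total += w * f(y)
    return total

# E[y_1^2 * y_2^4] = 1 * 3 = 3 for independent standard normals;
# levels (1, 2) give (2, 3) points, enough for degrees (2, 4).
f = lambda y: y.get(1, 0.0)**2 * y.get(2, 0.0)**4
assert abs(tensor_quad({1: 1, 2: 2}, f) - 3.0) < 1e-12
```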
For each ${\boldsymbol{\nu}} \in \mathcal{F}$ with support ${\mathbb{J}}_{{\boldsymbol{\nu}}}$ in $d$ dimensions, we define a multivariate difference quadrature operator as \begin{equation}\label{eq:TensorDiff} \triangle_{\boldsymbol{\nu}} (f) = \bigotimes_{j \in {\mathbb{J}}_{\boldsymbol{\nu}}} \triangle_{\nu_j} (f) \equiv \bigotimes_{j \in {\mathbb{J}}_{\boldsymbol{\nu}}} (\mathcal{Q}_{\nu_j} - \mathcal{Q}_{\nu_j-1}) (f)\;, \end{equation} which can be computed through \eqref{eq:TensorQuad} with $2^d$ terms. If the quadrature points are nested, this computation involves only $\prod_{j \in {\mathbb{J}}_{\boldsymbol{\nu}}} m_{\nu_j}$ evaluations of the function $f$; otherwise, the number becomes $\prod_{j \in {\mathbb{J}}_{\boldsymbol{\nu}}}(m_{\nu_{j}}+m_{\nu_{j}-1})$. Both costs are feasible for small $d$. By $\Lambda$ we denote an \emph{admissible} index set \cite{gerstner2003dimension}, also called \emph{downward closed} or \emph{monotonic} index set \cite{chkifa2014high, Schillings2013}, which is defined such that \begin{equation} \text{for any } {\boldsymbol{\nu}} \in \mathcal{F}, \text{ if } {\boldsymbol{\nu}} \in \Lambda, \text{ then } {\boldsymbol{\mu}} \in \Lambda \text{ for all } {\boldsymbol{\mu}} \preceq{\boldsymbol{\nu}} \; (\text{i.e., } \mu_j \leq \nu_j \; \forall j \geq 1)\;. \end{equation} Then we can define a \emph{sparse quadrature operator} on the grids $G_{\Lambda} = \cup_{{\boldsymbol{\nu}} \in \Lambda} G_{\boldsymbol{\nu}}$ as \begin{equation}\label{eq:QuadLambdaIntegral} \mathcal{Q}_{\Lambda} (f) = \sum_{{\boldsymbol{\nu}} \in \Lambda} \triangle_{\boldsymbol{\nu}}(f)\;.
\end{equation} Note that both the full tensor-product quadrature and the Smolyak quadrature \cite{smolyak1963quadrature, gerstner1998numerical} can be represented as the sparse quadrature with $\Lambda := \{{\boldsymbol{\nu}} \in \mathcal{F}, |{\boldsymbol{\nu}}|_\infty \leq l\}$ for the former, where $|{\boldsymbol{\nu}}|_\infty := \max_{j\geq 1} \nu_j$, and $\Lambda := \{{\boldsymbol{\nu}} \in \mathcal{F}, |{\boldsymbol{\nu}}|_1 \leq l\}$ for the latter. A more general sparse quadrature is the anisotropic sparse quadrature of \cite{gerstner2003dimension, nobile2008anisotropic}, where the maximum level of the index $\nu_j$ is allowed to vary for different $j$. The index set $\Lambda$ and the corresponding quadrature points $G_{\Lambda}$ for the full tensor-product quadrature, the isotropic Smolyak sparse quadrature, and the anisotropic sparse quadrature are shown for GK with $l = 4$ in Fig. \ref{fig:sparsegrid} in two dimensions, from which we can observe a successive, substantial reduction of the number of quadrature points. \begin{figure} \caption{The admissible index sets (top) and the corresponding GK quadrature points (bottom). Left: tensor-product grids; middle: isotropic Smolyak sparse grids; right: anisotropic sparse grids.} \label{fig:sparsegrid} \end{figure} \section{Convergence analysis} \label{sec:ConvAnal} Let $N$ be the cardinality of an admissible index set $\Lambda$, which we denote by $\Lambda_N$ to reflect its cardinality. In this section we provide sufficient conditions for the existence of a sparse quadrature $\mathcal{Q}_{\Lambda_N}$ whose quadrature error $||I(f) - \mathcal{Q}_{\Lambda_N}(f)||_\mathcal{S}$ does not depend on the dimension $J$, thus breaking the curse of dimensionality. Moreover, we analyze the convergence rate of this error with respect to $N$ under certain assumptions on the regularity of the function $f$ with respect to ${\boldsymbol{y}}$. We provide two specific examples for which such assumptions are illustrated.
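The sparse quadrature \eqref{eq:QuadLambdaIntegral} over a Smolyak index set can be sketched as follows (a minimal Python/NumPy illustration with the GH1 rule, not the paper's implementation; all function names are ours), expanding each difference operator \eqref{eq:TensorDiff} into its $2^d$ signed tensor-product terms:

```python
import itertools
import numpy as np

def gh_rule(m):
    # m-point Gauss-Hermite rule rescaled to the normal density rho
    x, w = np.polynomial.hermite.hermgauss(m)
    return np.sqrt(2.0) * x, w / np.sqrt(np.pi)

def tensor_quad(nu, f):
    # Q_nu with the GH1 choice m_l = l + 1; nu: dict {dimension: level}
    dims = sorted(nu)
    rules = [gh_rule(nu[j] + 1) for j in dims]
    total = 0.0
    for combo in itertools.product(*(range(len(r[0])) for r in rules)):
        y = {j: rules[i][0][k] for i, (j, k) in enumerate(zip(dims, combo))}
        w = np.prod([rules[i][1][k] for i, k in enumerate(combo)])
        total += w * f(y)
    return total

def delta_nu(nu, f):
    """triangle_nu = tensor over j of (Q_{nu_j} - Q_{nu_j - 1}),
    expanded into 2^d signed tensor-product terms."""
    dims = sorted(nu)
    total = 0.0
    for e in itertools.product([0, 1], repeat=len(dims)):
        mu = {j: nu[j] - e_j for j, e_j in zip(dims, e)}
        if any(v < 0 for v in mu.values()):
            continue                      # Q_{-1} = 0 by convention
        total += (-1) ** sum(e) * tensor_quad(mu, f)
    return total

def sparse_quad(Lambda, f):
    """Q_Lambda = sum over nu in the admissible set of triangle_nu."""
    return sum(delta_nu(nu, f) for nu in Lambda)

# Smolyak set {|nu|_1 <= 2} in two dimensions (downward closed).
smolyak = [{1: a, 2: b} for a in range(3) for b in range(3) if a + b <= 2]
# f(y) = y_1^2 + y_2^2 has E[f] = 2 and each monomial lies in P_Lambda,
# so it is integrated exactly.
f = lambda y: y.get(1, 0.0)**2 + y.get(2, 0.0)**2
assert abs(sparse_quad(smolyak, f) - 2.0) < 1e-12
```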
\subsection{Convergence analysis} \label{subsec:DimenIndeCon} In general, we consider the function $f$ to have finite second moment, i.e., \begin{equation} ||f||_{L^2_{\boldsymbol{\gamma}}(Y, \mathcal{S})} = \left(\int_{Y}||f({\boldsymbol{y}})||^2_\mathcal{S} d{\boldsymbol{\gamma}}({\boldsymbol{y}})\right)^{1/2} < \infty\;. \end{equation} In this situation, $f$ admits an expansion in the Hermite polynomial series \cite{bachmayr2017sparse}, i.e. \begin{equation}\label{eq:HermiteExpansion} f({\boldsymbol{y}}) = \sum_{{\boldsymbol{\nu}} \in \mathcal{F}} f_{{\boldsymbol{\nu}}} H_{{\boldsymbol{\nu}}}({\boldsymbol{y}})\;, \end{equation} where the multivariate Hermite polynomials $H_{\boldsymbol{\nu}}({\boldsymbol{y}})$ and the coefficients $f_{{\boldsymbol{\nu}}}$ read \begin{equation} H_{\boldsymbol{\nu}}({\boldsymbol{y}}) = \prod_{j \geq 1} H_{\nu_j}(y_j), \text{ and } f_{\boldsymbol{\nu}} = \int_{Y} f({\boldsymbol{y}})H_{{\boldsymbol{\nu}}}({\boldsymbol{y}})d{\boldsymbol{\gamma}}({\boldsymbol{y}}) \;. \end{equation} Here and in what follows we consider $J=\infty$ ($J \in {\mathbb{N}}$ is a special case where $y_j = 0$ for $j > J$). The univariate Hermite polynomials $\{H_n\}_{n\geq 0}$, as given in \eqref{eq:Hermite}, are orthonormal. Due to this orthonormality, we have Parseval's identity \begin{equation} ||f||_{L^2_{\boldsymbol{\gamma}}(Y,\mathcal{S})}^2 = \sum_{{\boldsymbol{\nu}} \in \mathcal{F}} ||f_{\boldsymbol{\nu}}||_\mathcal{S}^2\;, \end{equation} i.e., $\{||f_{\boldsymbol{\nu}}||_\mathcal{S}\}_{{\boldsymbol{\nu}} \in \mathcal{F}} \in \ell^2(\mathcal{F})$, a necessary and sufficient condition for $f \in L^2_{\boldsymbol{\gamma}}(Y,\mathcal{S})$.
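The orthonormal Hermite family underlying \eqref{eq:HermiteExpansion} can be generated by the standard three-term recurrence $\sqrt{k+1}\,H_{k+1}(y) = y H_k(y) - \sqrt{k}\,H_{k-1}(y)$, which follows from \eqref{eq:Hermite}. The sketch below (Python/NumPy; the function name is ours) verifies the orthonormality numerically with a Gauss--Hermite rule of sufficient exactness:

```python
import numpy as np

def hermite_orthonormal(n, y):
    """Probabilists' orthonormal Hermite polynomial H_n (eq:Hermite),
    evaluated via sqrt(k+1) H_{k+1} = y H_k - sqrt(k) H_{k-1}."""
    h_prev = np.zeros_like(y, dtype=float)
    h = np.ones_like(y, dtype=float)
    for k in range(n):
        h_prev, h = h, (y * h - np.sqrt(k) * h_prev) / np.sqrt(k + 1)
    return h

# orthonormality under the standard normal, checked with a GH rule that
# is exact for the products H_m * H_n (degree m + n <= 2*npts - 1)
x, w = np.polynomial.hermite.hermgauss(12)
y, w = np.sqrt(2.0) * x, w / np.sqrt(np.pi)
G = np.array([[np.dot(w, hermite_orthonormal(m, y) * hermite_orthonormal(n, y))
               for n in range(6)] for m in range(6)])
assert np.allclose(G, np.eye(6), atol=1e-10)
```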
\begin{assumption}\label{ass:Quadrature} We make the following assumptions on the properties of the univariate quadrature operators $\{\mathcal{Q}_l\}_{l\geq 0}$: \begin{enumerate} \item[A.1] The quadrature at level $l$ is exact for all the functions $f \in {\mathbb{P}}_l \otimes \mathcal{S}$, where ${\mathbb{P}}_l = \text{span}\{y^i: i = 0, \dots, l\}$, i.e. \begin{equation} I(f) = \mathcal{Q}_l(f) \quad \forall f \in {\mathbb{P}}_l \otimes \mathcal{S}\;. \end{equation} In particular, $I(H_n) = \mathcal{Q}_l(H_n)$ for Hermite polynomials $H_n$, $n = 0, \dots, l$. \item[A.2] The quadrature $\mathcal{Q}_l(H_n)$ for $H_n$ with $n > l$ is bounded by $2$, i.e. \begin{equation} |\mathcal{Q}_l(H_n)| < 2, \quad \forall n > l \geq 0\;. \end{equation} \end{enumerate} \end{assumption} Both the Gauss--Hermite (GH) quadrature and the Genz--Keister (GK) quadrature satisfy assumption A.1 for $m_l \geq l+1$, see \cite{gil2007numerical} and \cite{genz1996fully}, while it does not hold for the transformed Gauss--Kronrod--Patterson (tGKP) quadrature. As for assumption A.2, we can verify it for the GH quadrature in the following lemma. \begin{lemma}\label{prop:HermiteBound} For the Gauss--Hermite quadrature with $m_l = l + 1$ quadrature points at any level $l \geq 0$, see Sec. \ref{subsec:UnivQuad}, we have the bound \begin{equation} |\mathcal{Q}_l(H_n)| < 2, \quad \forall n \geq 0\;, \end{equation} for the orthonormal Hermite polynomials $H_n$, $n \geq 0$, defined in \eqref{eq:Hermite}. \end{lemma} The proof is based on the Cram\'er inequality, e.g., in \cite{abramowitz1966handbook}, which we became aware of from \cite[Lemma 14]{ernst2016convergence}, and on Markoff's theorem, e.g., in \cite{szeg1939orthogonal}. \begin{proof} For the (physicists') orthogonal Hermite polynomials $\tilde{H}_n$, $n = 0, 1, \dots $, defined as \cite[Chap. 22, p.
776]{abramowitz1966handbook} \begin{equation} \tilde{H}_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} e^{-x^2}, \text{ with } \int_{-\infty}^\infty \tilde{H}_n(x) \tilde{H}_m(x) e^{-x^2} dx = \sqrt{\pi} 2^n n! \delta_{nm}\;, \end{equation} we have the Cram\'er inequality \cite[Chap. 22, p. 787]{abramowitz1966handbook} \begin{equation} |\tilde{H}_n(x)| < c 2^{n/2} \sqrt{n!} e^{x^2/2}, \text{ with } c \approx 1.086435\;. \end{equation} Consequently, with proper rescaling for the (probabilists') orthonormal Hermite polynomials defined in \eqref{eq:Hermite}, i.e., $H_n(x) = (2^{n/2}\sqrt{n!})^{-1} \tilde{H}_n(x/\sqrt{2})$, we have \begin{equation}\label{eq:Hnfn} |H_n(x)| < c e^{x^2/4}\;. \end{equation} For the smooth function $f(x) = e^{x^2/4}$, by Markoff's theorem \cite[Chap. 16, p. 378]{szeg1939orthogonal} (note there $n = m_l = l + 1$ for our $l$ here) there exists $\xi \in {\mathbb{R}}$ s.t. \begin{equation}\label{eq:fxQlf} \int_{{\mathbb{R}}} f(x) \rho(x)dx = \mathcal{Q}_l(f) + \frac{f^{(2l+2)}(\xi)}{(2l+2)!}k_{l+1}^{-2}\;, \end{equation} where $k_{l+1}$ is the leading coefficient of the Hermite polynomial $H_{l+1}(x)$. As any even-order derivative of $f$ is non-negative (see \cite[Lemma 4]{nevai1980mean}), from \eqref{eq:fxQlf} we have \begin{equation} \mathcal{Q}_l(f) \leq \int_{{\mathbb{R}}} f(x) \rho(x)dx = \frac{1}{\sqrt{2\pi}} \int_{{\mathbb{R}}} e^{x^2/4} e^{-x^2/2} dx = \sqrt{2}\;. \end{equation} Hence, we obtain \begin{equation} |\mathcal{Q}_l(H_n)| \leq \mathcal{Q}_l(|H_n|) \leq c \mathcal{Q}_l(f) \leq \sqrt{2} c \approx 1.536451 < 2\;, \end{equation} where the first inequality is due to the positivity of the quadrature weights \eqref{eq:HermiteWeight}, and the second one is due to the bound \eqref{eq:Hnfn}. \end{proof} As for the GK quadrature and the tGKP quadrature, no theoretical result is known to us for assumption A.2.
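The bound of Lemma \ref{prop:HermiteBound} is easy to check numerically. The sketch below (Python/NumPy; \texttt{gh\_rule} and \texttt{hermite\_orthonormal} are our helper names) evaluates $|\mathcal{Q}_l(H_n)|$ for the GH1 rule at $l = 3$ over a range of degrees $n$:

```python
import numpy as np

def gh_rule(m):
    # m-point Gauss-Hermite rule rescaled to the normal density rho
    x, w = np.polynomial.hermite.hermgauss(m)
    return np.sqrt(2.0) * x, w / np.sqrt(np.pi)

def hermite_orthonormal(n, y):
    # orthonormal probabilists' Hermite polynomial via the recurrence
    # sqrt(k+1) H_{k+1} = y H_k - sqrt(k) H_{k-1}
    h_prev, h = np.zeros_like(y), np.ones_like(y)
    for k in range(n):
        h_prev, h = h, (y * h - np.sqrt(k) * h_prev) / np.sqrt(k + 1)
    return h

# |Q_l(H_n)| for the GH1 rule at l = 3 (m_l = 4 points), n up to 60;
# the lemma guarantees the bound sqrt(2) * c < 2 for every n.
l = 3
y, w = gh_rule(l + 1)
vals = [abs(np.dot(w, hermite_orthonormal(n, y))) for n in range(61)]
assert max(vals) < 2.0
```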
Numerically, we compute $\mathcal{Q}_l(H_n)$ by all three types of quadrature rules with all possible levels $l$ and degrees $n$ of the Hermite polynomials up to machine precision. The results show that A.2 holds in all cases with a sharper bound $|\mathcal{Q}_l(H_n)| \leq 1$. The left of Fig. \ref{fig:QuadAccu} displays the numerical values $|\mathcal{Q}_l(H_n)|$ for the three quadrature rules with $l = 3$ and $n = 0, \dots, 150$ (the polynomial degree $n$ cannot be larger due to machine precision); the right of Fig. \ref{fig:QuadAccu} shows $|\mathcal{Q}_l(H_n)| \leq 1$ by the GH2 (GH with $m_l = 2^{l+1}-1$) quadrature at $l = 0, 1, 2, 3, 4, 5$ and $n = 0, \dots, 150$. Moreover, from the left figure we can also see that GH2 (with $m_3 = 15$ points) is exact (with machine precision) for $I(H_n)$ for $n = 0, \dots, 29$, and GK (with $m_3= 19$ points) is exact for $n = 0, \dots, 29$, which satisfy assumption A.1. \vspace*{0.2cm} \begin{figure} \caption{Left: the numerical values $|\mathcal{Q}_3(H_n)|$ for the three quadrature rules, $n = 0, \dots, 150$; right: the values $|\mathcal{Q}_l(H_n)|$ for the GH2 quadrature at $l = 0, 1, \dots, 5$ and $n = 0, \dots, 150$.} \label{fig:QuadAccu} \end{figure} Assumption \ref{ass:Quadrature} implies the exactness and the boundedness of the sparse quadrature $\mathcal{Q}_{\Lambda}$ in multiple dimensions as presented in the following lemma. Similar results have been obtained on the exactness of the sparse quadrature for integration with respect to uniform measure, see, e.g., \cite{back2011stochastic, Schillings2013}. \begin{lemma} Under Assumption \ref{ass:Quadrature}, for any admissible index set $\Lambda \subset \mathcal{F}$, we have \begin{equation}\label{eq:ExactLambda} I(f) = \mathcal{Q}_\Lambda(f), \quad \forall f \in {\mathbb{P}}_\Lambda \otimes \mathcal{S}\;, \end{equation} where ${\mathbb{P}}_{\Lambda} = \text{span}\{\prod_{j\geq 1}y_j^{\nu_j}, {\boldsymbol{\nu}} \in \Lambda\}$.
In particular, as $H_{\boldsymbol{\nu}} \in {\mathbb{P}}_\Lambda$, we have \begin{equation}\label{eq:QuadLambda} I(H_{\boldsymbol{\nu}}) = \mathcal{Q}_\Lambda(H_{\boldsymbol{\nu}}) , \quad \forall {\boldsymbol{\nu}} \in \Lambda\;. \end{equation} Moreover, for any ${\boldsymbol{\nu}} \in \mathcal{F} \setminus \{{\boldsymbol 0}\}$, we have \begin{equation}\label{eq:QuadBound} |\mathcal{Q}_{\Lambda \cap \mathcal{R}_{\boldsymbol{\nu}}} (H_{{\boldsymbol{\nu}}})| \leq \prod_{j \in {\mathbb{J}}_{\boldsymbol{\nu}}} (1+\nu_j)^3\;, \end{equation} where the index set $\mathcal{R}_{\boldsymbol{\nu}} := \{{\boldsymbol{\mu}} \in \mathcal{F}: {\boldsymbol{\mu}} \preceq {\boldsymbol{\nu}}\}$, and ${\mathbb{J}}_{\boldsymbol{\nu}}$ is the support set of ${\boldsymbol{\nu}}$. \end{lemma} \begin{proof} The result \eqref{eq:ExactLambda} can be obtained by induction based on the assumption A.1, e.g., as in \cite[Theorem 4.2]{Schillings2013} for the uniform measure. Here, we provide a different proof for the Gaussian measure. First, for $\Lambda = \{{\boldsymbol{0}}\}$, i.e., $f({\boldsymbol{y}}) = u_{\boldsymbol{0}}$ with some function $u_{\boldsymbol{0}} \in \mathcal{S}$ for all ${\boldsymbol{y}} \in Y$, we have $I(f) = u_{\boldsymbol{0}}$ and $\mathcal{Q}_{\boldsymbol{0}}(f) = f({\boldsymbol{0}}) = u_{\boldsymbol{0}}$, which verifies \eqref{eq:ExactLambda}. Suppose \eqref{eq:ExactLambda} holds for an admissible set $\Lambda$; then we only need to verify that \eqref{eq:ExactLambda} also holds for the admissible set $\Lambda_{+k} = \Lambda \cup \{{\boldsymbol{\nu}}_{+k}\}$ for all possible $k \in {\mathbb{N}}$, where ${\boldsymbol{\nu}}_{+k} = {\boldsymbol{\nu}}^*+{\boldsymbol{e}}_k$ for some ${\boldsymbol{\nu}}^* \in \Lambda$ such that $({\boldsymbol{\nu}}^*)_k \geq \nu_k$ for all ${\boldsymbol{\nu}} \in \Lambda$. Here ${\boldsymbol{e}}_k\in \mathcal{F}$ is the index whose $k$-th element is one and all other elements are zero.
In fact, the function $f \in {\mathbb{P}}_{\Lambda_{+k}} \otimes \mathcal{S}$ can be decomposed as \begin{equation} f({\boldsymbol{y}}) = \sum_{{\boldsymbol{\nu}} \in \Lambda_{+k} } {\boldsymbol{y}}^{{\boldsymbol{\nu}}} u_{\boldsymbol{\nu}} = \sum_{{\boldsymbol{\nu}} \in \Lambda } {\boldsymbol{y}}^{{\boldsymbol{\nu}}} u_{\boldsymbol{\nu}} +{\boldsymbol{y}}^{{\boldsymbol{\nu}}_{+k}} u_{{\boldsymbol{\nu}}_{+k}} \equiv f_{\Lambda}({\boldsymbol{y}}) + f_{+k}({\boldsymbol{y}}), \end{equation} where we have denoted $f_{\Lambda}({\boldsymbol{y}}) = \sum_{{\boldsymbol{\nu}} \in \Lambda } {\boldsymbol{y}}^{{\boldsymbol{\nu}}} u_{\boldsymbol{\nu}} $ and $f_{+k}({\boldsymbol{y}}) = {\boldsymbol{y}}^{{\boldsymbol{\nu}}_{+k}} u_{{\boldsymbol{\nu}}_{+k}} $. Then by the definition \eqref{eq:QuadLambdaIntegral} of the sparse quadrature operator, we have \begin{equation} \mathcal{Q}_{\Lambda_{+k}}(f_{\Lambda}) = \mathcal{Q}_{\Lambda}(f_{\Lambda}) + \triangle_{{\boldsymbol{\nu}}_{+k}} (f_{\Lambda})\;, \end{equation} where the first term $\mathcal{Q}_{\Lambda}(f_{\Lambda}) = I(f_{\Lambda})$ by the induction hypothesis, and the second term, by the definition \eqref{eq:TensorDiff}, can be explicitly written as \begin{equation} \triangle_{{\boldsymbol{\nu}}_{+k}} (f_{\Lambda}) = \sum_{{\boldsymbol{\nu}} \in \Lambda} u_{{\boldsymbol{\nu}}} \bigotimes_{j \in {\mathbb{J}}_{{\boldsymbol{\nu}}_{+k}}} (\mathcal{Q}_{({\boldsymbol{\nu}}_{+k})_j} - \mathcal{Q}_{({\boldsymbol{\nu}}_{+k})_j - 1}) (y_j^{\nu_j})\;.
\end{equation} By A.1 and the fact $\nu_k \leq ({\boldsymbol{\nu}}^*)_k$ for all ${\boldsymbol{\nu}} \in \Lambda$ and ${\boldsymbol{\nu}}_{+k} = {\boldsymbol{\nu}}^* + {\boldsymbol{e}}_k$, we have \begin{equation}\label{eq:cQnuk} (\mathcal{Q}_{({\boldsymbol{\nu}}_{+k})_k} - \mathcal{Q}_{({\boldsymbol{\nu}}_{+k})_k - 1}) (y_k^{\nu_k}) = (\mathcal{Q}_{({\boldsymbol{\nu}}^*)_k+1} - \mathcal{Q}_{({\boldsymbol{\nu}}^*)_k }) (y_k^{\nu_k})= I(y_k^{\nu_k}) - I(y_k^{\nu_k}) = 0\;, \end{equation} which implies that $\triangle_{{\boldsymbol{\nu}}_{+k}}(f_{\Lambda}) = 0$, thus $\mathcal{Q}_{\Lambda_{+k}}(f_{\Lambda}) = I(f_{\Lambda})$. As for $f_{+k}$, we have \begin{equation} \mathcal{Q}_{\Lambda_{+k}}(f_{+k}) = \sum_{{\boldsymbol{\nu}} \in \Lambda_{+k}} \triangle_{\boldsymbol{\nu}} (f_{+k}) = \sum_{{\boldsymbol{\nu}} \in \mathcal{R}_{{\boldsymbol{\nu}}_{+k}}} \triangle_{\boldsymbol{\nu}} (f_{+k}) + \sum_{{\boldsymbol{\nu}} \in \Lambda_{+k} \setminus \mathcal{R}_{{\boldsymbol{\nu}}_{+k}}} \triangle_{\boldsymbol{\nu}} (f_{+k})\;, \end{equation} where we recall that $\mathcal{R}_{{\boldsymbol{\nu}}_{+k}} = \{{\boldsymbol{\mu}} \in \mathcal{F}: {\boldsymbol{\mu}} \preceq {\boldsymbol{\nu}}_{+k} \}$. 
Then by A.1, the first term gives \begin{equation} \sum_{{\boldsymbol{\nu}} \in \mathcal{R}_{{\boldsymbol{\nu}}_{+k}}} \triangle_{\boldsymbol{\nu}} (f_{+k}) = \mathcal{Q}_{\mathcal{R}_{{\boldsymbol{\nu}}_{+k}}} (f_{+k}) = u_{{\boldsymbol{\nu}}_{+k}} \bigotimes_{j\geq 1} \mathcal{Q}_{({\boldsymbol{\nu}}_{+k})_j}\left(y_j^{({\boldsymbol{\nu}}_{+k})_j}\right) = I(f_{+k})\;, \end{equation} and $\triangle_{\boldsymbol{\nu}}(f_{+k})$ vanishes for each ${\boldsymbol{\nu}} \in \Lambda_{+k} \setminus \mathcal{R}_{{\boldsymbol{\nu}}_{+k}}$ by the same reasoning as in \eqref{eq:cQnuk}: since ${\boldsymbol{\nu}} \not\preceq {\boldsymbol{\nu}}_{+k}$, there exists $j \in {\mathbb{J}}_{{\boldsymbol{\nu}}}$ such that $\nu_j > ({\boldsymbol{\nu}}_{+k})_j$, so that $(\mathcal{Q}_{\nu_j} - \mathcal{Q}_{\nu_j - 1})\left(y_j^{({\boldsymbol{\nu}}_{+k})_j}\right)= 0$. Therefore, we also have $\mathcal{Q}_{\Lambda_{+k}}(f_{+k}) = I(f_{+k})$, so that $\mathcal{Q}_{\Lambda_{+k}}(f) = I(f)$ for any $f \in {\mathbb{P}}_{\Lambda_{+k}} \otimes \mathcal{S}$. This completes the induction and establishes the equality \eqref{eq:ExactLambda}. To check \eqref{eq:QuadBound}, by the definition of the sparse quadrature in \eqref{eq:QuadLambdaIntegral} we have \begin{equation} |\mathcal{Q}_{\Lambda \cap \mathcal{R}_{\boldsymbol{\nu}}} (H_{{\boldsymbol{\nu}}})| = \Big | \sum_{{\boldsymbol{\mu}} \in \Lambda \cap \mathcal{R}_{\boldsymbol{\nu}}} \triangle_{\boldsymbol{\mu}}(H_{\boldsymbol{\nu}}) \Big | \leq \sum_{{\boldsymbol{\mu}} \in \Lambda \cap \mathcal{R}_{\boldsymbol{\nu}}} |\triangle_{\boldsymbol{\mu}}(H_{\boldsymbol{\nu}})| \leq \sum_{{\boldsymbol{\mu}} \in \mathcal{R}_{\boldsymbol{\nu}}} |\triangle_{\boldsymbol{\mu}}(H_{\boldsymbol{\nu}})|\;.
\end{equation} By the definition of $\triangle_{\boldsymbol{\mu}}$ in \eqref{eq:TensorDiff}, we have \begin{equation} |\triangle_{\boldsymbol{\mu}}(H_{\boldsymbol{\nu}})| \leq \prod_{j \in {\mathbb{J}}_{{\boldsymbol{\mu}}}} |\mathcal{Q}_{\mu_j}(H_{\nu_j}) - \mathcal{Q}_{\mu_j - 1}(H_{\nu_j})| \leq \prod_{j \in {\mathbb{J}}_{{\boldsymbol{\mu}}}}4 = 4^{|{\mathbb{J}}_{\boldsymbol{\mu}}|}\;, \end{equation} where the second bound is due to assumption A.2. Therefore, we have \begin{equation} \sum_{{\boldsymbol{\mu}} \in \mathcal{R}_{\boldsymbol{\nu}}} |\triangle_{\boldsymbol{\mu}}(H_{\boldsymbol{\nu}})| \leq \sum_{{\boldsymbol{\mu}} \in \mathcal{R}_{\boldsymbol{\nu}}} 4^{|{\mathbb{J}}_{\boldsymbol{\mu}}|} \leq \sum_{{\boldsymbol{\mu}} \in \mathcal{R}_{\boldsymbol{\nu}}} 4^{|{\mathbb{J}}_{\boldsymbol{\nu}}|} = \prod_{j \in {\mathbb{J}}_{\boldsymbol{\nu}}} 4(1+\nu_j) \leq \prod_{j \in {\mathbb{J}}_{\boldsymbol{\nu}}} (1+\nu_j)^3\;, \end{equation} where for the equality we have used $\sum_{{\boldsymbol{\mu}} \in \mathcal{R}_{\boldsymbol{\nu}}} 1 = \prod_{j \in {\mathbb{J}}_{\boldsymbol{\nu}}} (1+\nu_j)$ and for the last inequality we have used $4(1+n) \leq (1+n)^3$ for $n \geq 1$, which completes the proof. \end{proof} The following lemma bounds the quadrature error $||I(f) - \mathcal{Q}_{\Lambda_N}(f)||_\mathcal{S}$ in terms of a weighted $\ell^1$-norm of the Hermite coefficients $\{||f_{{\boldsymbol{\nu}}}||_\mathcal{S}\}_{{\boldsymbol{\nu}} \in \mathcal{F}\setminus \Lambda_N}$. Similar results using the Legendre polynomial expansion and the triangle inequality can be found in \cite[Lemma 4.2]{chkifa2014high} and \cite[Lemma 4.5]{Schillings2013} for interpolation and integration with the uniform measure. Instead of relying on the Lebesgue constant as in these papers, we use the orthogonality of the Hermite polynomials and the bound in assumption A.2.
\begin{lemma}\label{lemma:WeightedSum} Under Assumption \ref{ass:Quadrature}, for any $f\in L^2_{{\boldsymbol{\gamma}}}(Y,\mathcal{S})$, we have that for any $N \in {\mathbb{N}}$, there exists an admissible index set $\Lambda_N \subset \mathcal{F}$ with $|\Lambda_N| = N$, such that \begin{equation}\label{eq:weightedSum} ||I(f) - \mathcal{Q}_{\Lambda_N}(f)||_\mathcal{S} \leq \sum_{{\boldsymbol{\nu}} \in \mathcal{F}\setminus \Lambda_N}c_{\boldsymbol{\nu}} ||f_{\boldsymbol{\nu}}||_\mathcal{S} \;, \end{equation} where $c_{\boldsymbol{\nu}} := \prod_{j\in {\mathbb{J}}_{\boldsymbol{\nu}}}(1+\nu_j)^3$, the upper bound obtained in \eqref{eq:QuadBound}. \end{lemma} \begin{proof} As $f\in L^2_{{\boldsymbol{\gamma}}}(Y,\mathcal{S})$, we have the expansion of $f$ in the Hermite series as in \eqref{eq:HermiteExpansion}, so that \begin{equation} \mathcal{Q}_{\Lambda_N} (f) = \mathcal{Q}_{\Lambda_N} \left( \sum_{{\boldsymbol{\nu}} \in \mathcal{F}} f_{{\boldsymbol{\nu}}} H_{{\boldsymbol{\nu}}} \right) = \sum_{{\boldsymbol{\nu}} \in \Lambda_N} f_{{\boldsymbol{\nu}}} \mathcal{Q}_{\Lambda_N} (H_{{\boldsymbol{\nu}}}) + \sum_{{\boldsymbol{\nu}} \in \mathcal{F} \setminus \Lambda_N} f_{{\boldsymbol{\nu}}} \mathcal{Q}_{\Lambda_N} (H_{{\boldsymbol{\nu}}})\;. \end{equation} Therefore, by the identity \eqref{eq:QuadLambda} we obtain \begin{equation}\label{eq:IQError} ||I(f) - \mathcal{Q}_{\Lambda_N} (f)||_\mathcal{S} \leq \sum_{{\boldsymbol{\nu}} \in \mathcal{F} \setminus \Lambda_N} ||f_{{\boldsymbol{\nu}}}||_\mathcal{S} |(I-\mathcal{Q}_{\Lambda_N}) (H_{{\boldsymbol{\nu}}})|\;. \end{equation} For any ${\boldsymbol{\nu}} \in \mathcal{F}\setminus {\boldsymbol 0}$, there exists $j \in {\mathbb{N}}$ such that $\nu_j \neq 0$, for which we have $I_j(H_{\nu_j}) = 0$ due to the orthogonality of $H_{\nu_j}$, hence \begin{equation} I(H_{\boldsymbol{\nu}}) = \prod_{j\geq 1} I_j(H_{\nu_j}) = 0\;.
\end{equation} Moreover, for any ${\boldsymbol{\nu}} \in \mathcal{F}$, we have \begin{equation} \begin{split} \mathcal{Q}_{\Lambda_N}(H_{\boldsymbol{\nu}})& = \sum_{{\boldsymbol{\mu}} \in \Lambda_N} \triangle_{\boldsymbol{\mu}}(H_{\boldsymbol{\nu}}) \\ &= \sum_{{\boldsymbol{\mu}} \in \Lambda_N} \prod_{j \geq 1} (\mathcal{Q}_{\mu_j}(H_{\nu_j}) - \mathcal{Q}_{\mu_j-1}(H_{\nu_j}))\\ & = \sum_{{\boldsymbol{\mu}} \in \Lambda_N \cap \mathcal{R}_{\boldsymbol{\nu}}} \prod_{j \geq 1} (\mathcal{Q}_{\mu_j}(H_{\nu_j}) - \mathcal{Q}_{\mu_j-1}(H_{\nu_j}))\\ &= \sum_{{\boldsymbol{\mu}} \in \Lambda_N \cap \mathcal{R}_{\boldsymbol{\nu}}} \triangle_{\boldsymbol{\mu}}(H_{\boldsymbol{\nu}}) = \mathcal{Q}_{\Lambda_N \cap \mathcal{R}_{\boldsymbol{\nu}}}(H_{\boldsymbol{\nu}}) \;, \end{split} \end{equation} where the third equality is due to the assumption A.1. As a result, \eqref{eq:IQError} becomes \begin{equation} ||I(f) - \mathcal{Q}_{\Lambda_N} (f)||_\mathcal{S} \leq \sum_{{\boldsymbol{\nu}} \in \mathcal{F} \setminus \Lambda_N} ||f_{{\boldsymbol{\nu}}}||_\mathcal{S} |\mathcal{Q}_{\Lambda_N \cap \mathcal{R}_{\boldsymbol{\nu}}}(H_{\boldsymbol{\nu}})| \leq \sum_{{\boldsymbol{\nu}} \in \mathcal{F} \setminus \Lambda_N} c_{\boldsymbol{\nu}} ||f_{{\boldsymbol{\nu}}}||_\mathcal{S}\;, \end{equation} which completes the proof by using the bound \eqref{eq:QuadBound}. \end{proof} In order to control the quadrature error, which is bounded by a weighted sum of the Hermite coefficients as above, we make the following assumptions from \cite[Theorem 3.3]{bachmayr2017sparse} on the derivatives of the function $f$ with respect to the parameter ${\boldsymbol{y}}$. \begin{assumption}\label{ass:DeriBound} \begin{enumerate} \item[B.1] Let $0 < q < 2$, and $(\tau_j)_{j\geq 1}$ be a positive sequence such that \begin{equation}\label{eq:lqtauj} (\tau_j^{-1})_{j\geq 1} \in \ell^q({\mathbb{N}})\;.
\end{equation} \item[B.2] Let $r$ be the smallest integer such that $r > 14/q$; we assume that $\partial^{\boldsymbol{\mu}}_{\boldsymbol{y}} f \in L^2_{{\boldsymbol{\gamma}}}(Y,\mathcal{S})$ and that there holds \begin{equation}\label{eq:boundedinteg} \sum_{|{\boldsymbol{\mu}}|_\infty \leq r} \frac{\boldsymbol{\tau}^{2{\boldsymbol{\mu}}}}{{\boldsymbol{\mu}}!} \int_{Y} ||\partial^{\boldsymbol{\mu}}_{\boldsymbol{y}} f({\boldsymbol{y}})||^2_\mathcal{S} d{\boldsymbol{\gamma}}({\boldsymbol{y}}) < \infty\;, \end{equation} where $\boldsymbol{\tau}^{2{\boldsymbol{\mu}}} = \prod_{j\geq 1} \tau_j^{2\mu_j}$, ${\boldsymbol{\mu}} ! = \prod_{j \geq 1} \mu_j!$, and $\partial_{\boldsymbol{y}}^{\boldsymbol{\mu}} f({\boldsymbol{y}}) = \left(\prod_{j \geq 1} \partial_{y_j}^{\mu_j} \right) f({\boldsymbol{y}})$. \end{enumerate} \end{assumption} \begin{remark} Assumption \ref{ass:DeriBound} characterizes the relation between the regularity of the function $f$ with respect to the parameter ${\boldsymbol{y}}$ and the sparsity of the parametrization, i.e., the anisotropy of the function with respect to the different dimensions. The smaller $q$ is, the faster $\tau_j$ grows and the faster $\partial_{\boldsymbol{y}}^{{\boldsymbol{\mu}}}f({\boldsymbol{y}})$ decays with respect to $j$; at the same time, $r > 14/q$ becomes larger, so higher-order derivatives are needed. We will present two examples in Section \ref{subsec:Examples} to verify Assumption \ref{ass:DeriBound} and illustrate this discussion. \end{remark} The following result establishes the equivalence between the weighted summability of the integrals of the mixed derivatives and the weighted summability of the Hermite coefficients, which is the key to translating the sparsity of the parametrization into the dimension-independent convergence rate.
\begin{proposition}\label{prop:EquivFourierDeri} \cite[Theorem 3.3, Lemma 5.1]{bachmayr2017sparse} Under Assumption \ref{ass:DeriBound}, we have \begin{equation} \sum_{|{\boldsymbol{\mu}}|_\infty \leq r} \frac{\boldsymbol{\tau}^{2{\boldsymbol{\mu}}}}{{\boldsymbol{\mu}}!} \int_{Y} ||\partial^{\boldsymbol{\mu}}_{\boldsymbol{y}} f({\boldsymbol{y}})||^2_\mathcal{S} d{\boldsymbol{\gamma}}({\boldsymbol{y}}) = \sum_{{\boldsymbol{\nu}} \in \mathcal{F}} b_{{\boldsymbol{\nu}}} ||f_{\boldsymbol{\nu}}||_\mathcal{S}^2\;, \end{equation} where the weights $b_{\boldsymbol{\nu}}$, given by \begin{equation}\label{eq:bnu} b_{\boldsymbol{\nu}} = \sum_{|{\boldsymbol{\mu}}|_\infty\leq r} \left( \begin{array}{cc} {\boldsymbol{\nu}} \\ {\boldsymbol{\mu}} \end{array} \right) \boldsymbol{\tau}^{2{\boldsymbol{\mu}}}, \quad \text{ with } \left( \begin{array}{cc} {\boldsymbol{\nu}} \\ {\boldsymbol{\mu}} \end{array} \right) = \prod_{j \geq 1} \left( \begin{array}{cc} \nu_j \\ \mu_j \end{array} \right)\;, \end{equation} satisfy the summability condition \begin{equation}\label{eq:bsnuq2} \sum_{{\boldsymbol{\nu}} \in \mathcal{F}} b_{\boldsymbol{\nu}}^{-q/2} < \infty \end{equation} for any integer $r$ such that $r > 2/q$. \end{proposition} Based on the summability \eqref{eq:bsnuq2} and its proof, we obtain the following result. \begin{lemma}\label{lemm:cnubnu} Under Assumption \ref{ass:DeriBound}, for any $\eta \geq q/4$, we have \begin{equation}\label{eq:bnucnu} \sum_{{\boldsymbol{\nu}} \in \mathcal{F}} \left(\frac{b_{\boldsymbol{\nu}}}{c_{\boldsymbol{\nu}}^{1/\eta}}\right)^{-2\eta} < \infty\;. \end{equation} \end{lemma} \begin{proof} By the definition of $b_{\boldsymbol{\nu}} $ in \eqref{eq:bnu}, we can rewrite it as \begin{equation} b_{\boldsymbol{\nu}} = \prod_{j\geq 1} \left( \sum_{l = 0}^r \left( \begin{array}{cc} \nu_j \\ l \end{array} \right) \tau_j^{2l} \right)\;.
\end{equation} Then the left hand side of \eqref{eq:bnucnu} can be written in the factorized form as \begin{equation} \begin{split} \sum_{{\boldsymbol{\nu}} \in \mathcal{F}} \left(\frac{b_{\boldsymbol{\nu}}}{c_{\boldsymbol{\nu}}^{1/\eta}}\right)^{-2\eta}& = \sum_{{\boldsymbol{\nu}} \in \mathcal{F}} \prod_{j\geq 1} \left( \sum_{l=0}^r \left( \begin{array}{cc} \nu_j \\ l \end{array} \right) \tau_j^{2l} \frac{1}{(1+\nu_j)^{3/\eta}} \right)^{-2\eta}\\ & = \prod_{j\geq 1} \sum_{n\geq 0} \left( \sum_{l=0}^r \left( \begin{array}{cc} n \\ l \end{array} \right) \tau_j^{2l} \frac{1}{(1+n)^{3/\eta}} \right)^{-2\eta}\;, \end{split} \end{equation} as long as we can show that the product on the right hand side is finite. Now we have \begin{equation}\label{eq:subWeightSumD} \begin{split} &\sum_{n\geq 0} \left( \sum_{l=0}^r \left( \begin{array}{cc} n \\ l \end{array} \right) \tau_j^{2l} \frac{1}{(1+n)^{3/\eta}} \right)^{-2\eta}\\ &\leq \sum_{n \geq 0} \left( \left( \begin{array}{cc} n \\ n \wedge r \end{array} \right) \tau_j^{2(n\wedge r)} \frac{1}{(1+n)^{3/\eta}} \right)^{-2\eta}\\ & \leq 1 + 2^{6} \tau_j^{-4\eta} + \cdots +r^{6} \tau_j^{-4\eta (r-1)} + C_{r,\eta} \tau_j^{- 4\eta r} =: d_j(r,\eta,\tau_j)\;, \end{split} \end{equation} where in the first inequality we have kept only the term $l = n \wedge r =\min\{n, r\}$, and the constant $C_{r,\eta}$ is defined as \begin{equation} C_{r,\eta} = \sum_{n\geq r} \left( \left( \begin{array}{cc} n\\ r \end{array} \right) \frac{1}{(1+n)^{3/\eta}} \right)^{-2\eta}= (r!)^{2\eta} \sum_{n\geq 0} \left( \frac{(n+1)\cdots(n+r)}{(1+n+r)^{3/\eta}} \right)^{-2\eta}\;. \end{equation} Since the term in the big parentheses grows as $n^{r-3/\eta}$ when $n \to \infty$, and $2\eta (r-3/\eta) > 1$ for any $\eta \geq q/4$ when $r > 14/q$, we have $C_{r,\eta} < \infty$.
Since $(\tau_j^{-1})_{j \geq 1} \in \ell^q({\mathbb{N}})$ by Assumption \ref{ass:DeriBound}, we have $\tau_j \to \infty$ as $j \to \infty$, so that there exists $J_\tau < \infty$ such that $\tau_j > 1$ for all $j > J_\tau$. For $j > J_\tau$, we can bound the right hand side of \eqref{eq:subWeightSumD} by \begin{equation} d_j(r,\eta,\tau_j) \leq 1+ (2^{6} + \cdots + r^{6} + C_{r,\eta}) \tau_j^{-4\eta}\;. \end{equation} Consequently, by setting $D_{r,\eta} = 2^{6} + \cdots + r^{6} + C_{r,\eta}$, we have \begin{equation}\label{eq:bncneta2} \sum_{{\boldsymbol{\nu}} \in \mathcal{F}} \left(\frac{b_{\boldsymbol{\nu}}}{c_{\boldsymbol{\nu}}^{1/\eta}}\right)^{-2\eta} \leq \prod_{j\geq 1} d_j(r,\eta,\tau_j) \leq \prod_{1\leq j \leq J_\tau} d_j(r,\eta,\tau_j) \prod_{j > J_\tau} (1+ D_{r,\eta} \tau_j^{-4\eta})\;, \end{equation} where the first term is bounded as $J_\tau < \infty$. The second term can be written as \begin{equation} \prod_{j > J_\tau} (1+ D_{r,\eta} \tau_j^{-4\eta}) = \exp\left(\sum_{j > J_\tau}\log\left(1+D_{r,\eta}\tau_j^{-4\eta}\right) \right) \;, \end{equation} which, by using $\log(1+x) \leq x$ for all $x> -1$, can be bounded by \begin{equation}\label{eq:expDrq} \exp\left(\sum_{j > J_\tau}\log\left(1+D_{r,\eta}\tau_j^{-4\eta}\right) \right) \leq \exp\left(D_{r,\eta} \sum_{j > J_\tau} \tau_j^{-4\eta}\right), \end{equation} which is finite when $\eta \geq q/4$ since $(\tau_j^{-1})_{j\geq 1} \in \ell^q({\mathbb{N}})$ in Assumption \ref{ass:DeriBound}. Hence, \eqref{eq:bnucnu} is concluded by \eqref{eq:bncneta2} and \eqref{eq:expDrq}. \end{proof} We are now in a position to state and prove the main theorem. The idea behind the proof comes from the short discussion in \cite[Remark 5.1]{bachmayr2017sparse} and the result \cite[Lemma 2.9]{ZS17_723}.
\begin{theorem}\label{thm:N-termConv} Under Assumptions \ref{ass:Quadrature} and \ref{ass:DeriBound}, there exists an admissible index set $\Lambda_N \subset \mathcal{F}$, a set of indices corresponding to the $N$ smallest values of $b_{\boldsymbol{\nu}}$ defined in \eqref{eq:bnu}, such that the sparse quadrature error is bounded by \begin{equation}\label{eq:quaderror} ||I(f)- \mathcal{Q}_{\Lambda_N}(f)||_{\mathcal{S}} \leq C (N+1)^{-s}, \quad s = \frac{1}{q} - \frac{1}{2}\;, \end{equation} where the constant $C$ is independent of $N$. \end{theorem} \begin{proof} We consider the right hand side of \eqref{eq:weightedSum} in Lemma \ref{lemma:WeightedSum}, which we can bound by multiplying and dividing by $b_{\boldsymbol{\nu}}^{-1/2+\eta}$ with $\eta \geq q/4$ as \begin{equation}\label{eq:cnufnubound} \sum_{{\boldsymbol{\nu}} \in \mathcal{F}\setminus \Lambda_N}c_{\boldsymbol{\nu}} ||f_{\boldsymbol{\nu}}||_\mathcal{S} \leq \sup_{{\boldsymbol{\nu}} \in \mathcal{F}\setminus \Lambda_N} b_{\boldsymbol{\nu}}^{-1/2+\eta} \sum_{{\boldsymbol{\nu}} \in \mathcal{F}\setminus \Lambda_N} \frac{c_{\boldsymbol{\nu}}}{b_{\boldsymbol{\nu}}^{\eta}} b_{\boldsymbol{\nu}}^{1/2} ||f_{\boldsymbol{\nu}}||_\mathcal{S}\;, \end{equation} where the second term can be bounded by using the Cauchy--Schwarz inequality as \begin{equation} \begin{split} \sum_{{\boldsymbol{\nu}} \in \mathcal{F}\setminus \Lambda_N} \frac{c_{\boldsymbol{\nu}}}{b_{\boldsymbol{\nu}}^{\eta}} b_{\boldsymbol{\nu}}^{1/2} ||f_{\boldsymbol{\nu}}||_\mathcal{S} &\leq \left( \sum_{{\boldsymbol{\nu}} \in \mathcal{F}\setminus \Lambda_N} \left(\frac{c_{\boldsymbol{\nu}}}{b_{\boldsymbol{\nu}}^{\eta}}\right)^2 \right)^{1/2} \left( \sum_{{\boldsymbol{\nu}} \in \mathcal{F}\setminus \Lambda_N} b_{\boldsymbol{\nu}} ||f_{\boldsymbol{\nu}}||_\mathcal{S}^2 \right)^{1/2}\\ & \leq \left( \sum_{{\boldsymbol{\nu}} \in \mathcal{F}} \left(\frac{b_{\boldsymbol{\nu}}}{c_{\boldsymbol{\nu}}^{1/\eta}}\right)^{-2\eta} \right)^{1/2} \left( \sum_{{\boldsymbol{\nu}}
\in \mathcal{F}} b_{\boldsymbol{\nu}} ||f_{\boldsymbol{\nu}}||_\mathcal{S}^2 \right)^{1/2}\;, \end{split} \end{equation} which is finite as a result of Lemma \ref{lemm:cnubnu} for the first term and of Assumption \ref{ass:DeriBound} and Proposition \ref{prop:EquivFourierDeri} for the second. Consider the increasing rearrangement of the sequence $(b_{\boldsymbol{\nu}})_{{\boldsymbol{\nu}} \in \mathcal{F}}$, which for $\eta < 1/2$ is equivalent to a decreasing rearrangement of $(b_{\boldsymbol{\nu}}^{-1/2+\eta})_{{\boldsymbol{\nu}} \in \mathcal{F}}$, denoted by $(d_n)_{n \geq 1}$; then the first term on the right hand side of \eqref{eq:cnufnubound} becomes \begin{equation} \sup_{{\boldsymbol{\nu}} \in \mathcal{F}\setminus \Lambda_N} b_{\boldsymbol{\nu}}^{-(1-2\eta)/2} = d_{N+1}\;. \end{equation} Since $(b_{\boldsymbol{\nu}}^{-1/2})_{{\boldsymbol{\nu}} \in \mathcal{F}} \in \ell^{q}(\mathcal{F})$ as given in Proposition \ref{prop:EquivFourierDeri}, we have $(d_n)_{n \geq 1} \in \ell^{q/(1-2\eta)}({\mathbb{N}})$. As a result, by taking $\eta = q/4$, the smallest value for $\eta$, we have $(d_n)_{n \geq 1} \in \ell^{\tilde{q}}({\mathbb{N}})$, where $\tilde{q} = 2q/(2-q) \in (0, \infty)$ for $q \in (0, 2)$.
As $(d_n)_{n \geq 1}$ is monotonically decreasing, when $\tilde{q} \in (0, 1)$, by H\"older's inequality for $s = \frac{1}{\tilde{q}}$ and its conjugate $t = \frac{1}{1-\tilde{q}}$ we obtain \begin{equation} d_{n}^{\tilde{q}^2} \leq \frac{1}{n} \sum_{i = 1}^n d_i^{\tilde{q}^2} \leq \frac{1}{n} \left(\sum_{i = 1}^n \left(d_i^{\tilde{q}^2}\right)^{s}\right)^{\frac{1}{s}} \left(\sum_{i = 1}^n 1^{t}\right)^{\frac{1}{t}} = \left(\sum_{i = 1}^n d_i^{\tilde{q}} \right)^{\tilde{q}} n^{-\tilde{q}}, \end{equation} so that \begin{equation} d_{n} \leq \left(\sum_{i = 1}^n d_i^{\tilde{q}}\right)^{\frac{1}{\tilde{q}}} n^{-s} \leq \left(\sum_{i = 1}^\infty d_i^{\tilde{q}}\right)^{\frac{1}{\tilde{q}}} n^{-s}. \end{equation} For $\tilde{q} \geq 1$, again by H\"older's inequality for $\tilde{q}$ and its conjugate $t = \frac{1}{1-s}$ where $s = \frac{1}{\tilde{q}}$ we have \begin{equation} d_n \leq n^{-1} \sum_{i = 1}^n d_i \leq n^{-1} \left(\sum_{i=1}^n d_i^{\tilde{q}} \right)^{\frac{1}{\tilde{q}}} \left(\sum_{i=1}^n 1^{t} \right)^{1- s} \leq \left(\sum_{i=1}^\infty d_i^{\tilde{q}} \right)^{\frac{1}{\tilde{q}}} n^{-s}. \end{equation} Consequently, the main result \eqref{eq:quaderror} holds with the constant \begin{equation} C = \left(\sum_{i=1}^\infty d_i^{\tilde{q}} \right)^{\frac{1}{\tilde{q}}} \left( \sum_{{\boldsymbol{\nu}} \in \mathcal{F}} \left(\frac{b_{\boldsymbol{\nu}}}{c_{\boldsymbol{\nu}}^{1/\eta}}\right)^{-2\eta} \right)^{1/2} \left( \sum_{{\boldsymbol{\nu}} \in \mathcal{F}} b_{\boldsymbol{\nu}} ||f_{\boldsymbol{\nu}}||_\mathcal{S}^2 \right)^{1/2} < \infty, \end{equation} which is independent of $N$. To conclude the proof, we need to show that the index set $\Lambda_N$ can be taken such that it is admissible, for which we only need to verify that for any $k \in {\mathbb{N}}$ and ${\boldsymbol{\nu}} \in \mathcal{F}$, we have \begin{equation}\label{eq:bnuekgeqbnu} b_{{\boldsymbol{\nu}}+{\boldsymbol{e}}_k} \geq b_{{\boldsymbol{\nu}}}\;.
\end{equation} This holds by the definition of $b_{{\boldsymbol{\nu}}}$ in \eqref{eq:bnu}: with the Kronecker delta $\delta_{jk}$, \begin{equation} b_{{\boldsymbol{\nu}}+{\boldsymbol{e}}_k} = \sum_{|{\boldsymbol{\mu}}|_\infty\leq r} \prod_{j \geq 1} \left( \begin{array}{cc} \nu_j +\delta_{jk} \\ \mu_j \end{array} \right) \tau_j^{2\mu_j} \geq \sum_{|{\boldsymbol{\mu}}|_\infty\leq r} \prod_{j \geq 1} \left( \begin{array}{cc} \nu_j \\ \mu_j \end{array} \right) \tau_j^{2\mu_j} = b_{\boldsymbol{\nu}}\;. \end{equation} \end{proof} \begin{remark} The convergence of the quadrature error with respect to the number of indices does not depend on the number of parameter dimensions, thus breaking the curse of dimensionality. It only depends on the summability parameter $q$, which measures the sparsity of the parametric function with respect to the parameters: the smaller $q$ is, the sparser $f$ is and the faster the convergence becomes. \end{remark} \begin{remark}\label{rmk:ConvRate} For any parametric function satisfying Assumption \ref{ass:DeriBound}, our theorem implies that we can construct the admissible index set completely based on the definition of $b_{\boldsymbol{\nu}}$ in \eqref{eq:bnu} in order to achieve the convergence rate $N^{-s}$ with $s = 1/q - 1/2$. This convergence rate is obtained as an upper bound, which is not necessarily optimal. In fact, our numerical tests indicate that it could be improved. \end{remark} The convergence rate is obtained with respect to the number of indices $N$ in the index set $\Lambda_N$, which is not necessarily the same as the number of quadrature points. The following corollary provides a convergence rate with respect to the number of quadrature points in the case of Gauss--Hermite quadrature with $m_l = l + 1$.
\begin{corollary}\label{cor:index2point} As a result of Theorem \ref{thm:N-termConv}, for the case of Gauss--Hermite quadrature with $m_l = l + 1$, the sparse quadrature error is bounded by \begin{equation} ||I(f)- \mathcal{Q}_{\Lambda_N}(f)||_{\mathcal{S}} \leq C N_p^{-s/2}, \quad s = \frac{1}{q} - \frac{1}{2}\;, \end{equation} where $C$ is independent of the number of quadrature points $N_p$ corresponding to $\Lambda_N$. \end{corollary} \begin{proof} The bound is a result of \cite[Proposition 18]{ernst2016convergence}, which states that there exists a constant $c$ such that $N_p \leq c N^2$. \end{proof} \begin{remark} Similar convergence rates are observed in practice for GH1, GH2, and GK with respect to the number of quadrature points as with respect to the number of indices, as shown in our numerical tests. The reason might be that $\mathcal{Q}_l$, which uses $m_l$ quadrature points, is exact at least for $P_{m_l-1}$ (in fact it is exact for $P_{2m_l-1}$ by GH quadrature), which is much richer than $P_{l}$. \end{remark} \subsection{Examples} \label{subsec:Examples} The dimension-independent convergence rate relies on the assumption on the derivatives of the function $f({\boldsymbol{y}})$ with respect to the parameter ${\boldsymbol{y}}$ as stated in Assumption \ref{ass:DeriBound}. Here we provide two examples that satisfy this assumption. For both examples, we assume the common structure that the function $f$ depends on ${\boldsymbol{y}}$ through $\kappa({\boldsymbol{y}})$ as $f(\kappa({\boldsymbol{y}}))$, where $\kappa$ is given by \begin{equation}\label{eq:kappa} \kappa({\boldsymbol{y}}) = \sum_{j \geq 1} y_j \psi_j\;, \end{equation} where we assume $\max_{j\geq 1}||\psi_j|| < \infty$, e.g., $||\psi_j|| = |\psi_j|$ if $\psi_j \in {\mathbb{R}}$ and $||\psi_j|| = ||\psi_j||_{L^\infty(D)}$ if $\psi_j$ is a function in a physical domain $D$.
\subsubsection{Example 1 -- A nonlinear parametric function} \label{sec:ex1} We first consider a function that does not depend on the physical coordinate $x$, where we set $\psi_j = j^{-\alpha}$ in $\kappa$, in particular, \begin{equation}\label{eq:fbsy} f({\boldsymbol{y}}) = f(\kappa({\boldsymbol{y}})) = \exp\left(\sum_{j \geq 1} y_j j^{-\alpha} \right), \quad \alpha > 1\;. \end{equation} To verify Assumption \ref{ass:DeriBound}, we compute \begin{equation} \sum_{|{\boldsymbol{\mu}}|_\infty \leq r} \frac{\boldsymbol{\tau}^{2{\boldsymbol{\mu}}}}{{\boldsymbol{\mu}}!} \int_{Y} |\partial^{\boldsymbol{\mu}}_{\boldsymbol{y}} f({\boldsymbol{y}})|^2 d{\boldsymbol{\gamma}}({\boldsymbol{y}}) = \int_Y f^2({\boldsymbol{y}}) d{\boldsymbol{\gamma}}({\boldsymbol{y}}) \sum_{|{\boldsymbol{\mu}}|_\infty \leq r} \frac{\boldsymbol{\tau}^{2{\boldsymbol{\mu}}}}{{\boldsymbol{\mu}}!} \prod_{j \geq 1}j^{-2\alpha \mu_j} \;, \end{equation} where, for $\alpha > 1$, we have \begin{equation} \int_Y f^2({\boldsymbol{y}}) d{\boldsymbol{\gamma}}({\boldsymbol{y}}) = \prod_{j\geq 1} \int_{{\mathbb{R}}} e^{2j^{-\alpha}y_j} \rho(y_j)dy_j = \exp\left(2\sum_{j\geq 1} j^{-2\alpha}\right) < \infty\;. \end{equation} Moreover, we have the bound (by using $1+x+\cdots + x^r/r! < e^x$ for any $x > 0$) \begin{equation} \sum_{|{\boldsymbol{\mu}}|_\infty \leq r} \frac{\boldsymbol{\tau}^{2{\boldsymbol{\mu}}}}{{\boldsymbol{\mu}}!} \prod_{j \geq 1}j^{-2\alpha \mu_j} = \prod_{j\geq 1} \left(\sum_{l=0}^r \frac{(\tau_j^{2} j^{-2\alpha})^l}{l!}\right) \leq \exp\left(\sum_{j\geq 1} \tau_j^{2} j^{-2\alpha}\right)\;, \end{equation} which is finite provided $\tau_j \lesssim j^{\alpha - 1/2 -\varepsilon}$ for some $\varepsilon > 0$, so that $(\tau_j^{-1})_{j \geq 1} \in \ell^q({\mathbb{N}})$ for $q \geq 1/(\alpha - 1/2-\varepsilon)$. By Theorem \ref{thm:N-termConv}, we obtain the convergence rate $N^{-s}$ for $s = 1/q - 1/2 \leq \alpha - 1 -\varepsilon$.
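For a quick numerical sanity check of Example 1 (a hedged sketch of our own; the helper names \texttt{tensor\_gh\_integral} and \texttt{exact\_integral} are hypothetical, not from the paper), one can compare a full-tensor Gauss--Hermite quadrature of the integrand, truncated to finitely many dimensions, against the closed-form value $\exp(\frac{1}{2}\sum_{j=1}^{J} j^{-2\alpha})$ of the truncated integral, which follows from $\mathbb{E}[e^{cy}] = e^{c^2/2}$ for $y \sim N(0,1)$:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite

def tensor_gh_integral(alpha=2.0, dims=4, m=10):
    """Full-tensor Gauss-Hermite quadrature, w.r.t. N(0,1) in each of
    `dims` coordinates, of f(y) = exp(sum_j y_j * j^(-alpha))."""
    x, w = hermegauss(m)
    w = w / w.sum()                  # normalize: N(0,1) weights sum to 1
    val = 1.0
    for j in range(1, dims + 1):     # the integral factorizes over dimensions
        val *= float(w @ np.exp(x * j ** (-alpha)))
    return val

def exact_integral(alpha=2.0, dims=4):
    """Closed form of the truncated integral, using E[e^{cy}] = e^{c^2/2}."""
    return float(np.exp(0.5 * sum(j ** (-2 * alpha) for j in range(1, dims + 1))))
```

With $\alpha = 2$ and a handful of points per dimension the two values agree to many digits, consistent with the fast convergence expected for large $\alpha$.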
Note that the case $\alpha \leq 1$ is not covered by the theorem. \subsubsection{Example 2 -- PDE solution as a nonlinear map} \label{sec:ex3} We consider the solution (nonlinear with respect to $\kappa$) of the diffusion equation: find $u({\boldsymbol{y}})\in H^1_0(D)$ such that \begin{equation} - \text{div}(e^{\kappa({\boldsymbol{y}})} \nabla u({\boldsymbol{y}})) = g, \text{ in } D\;, \end{equation} with homogeneous Dirichlet boundary condition, and $g \in H^{-1}(D)$. This example is studied in detail in \cite{bachmayr2017sparse}. Under the parametrization \eqref{eq:kappa}, for $(\tau_j)_{j\geq 1}$ such that \begin{equation}\label{eq:taupsi} \sup_{x\in D} \sum_{j \geq 1} \tau_j |\psi_j(x)| < \frac{\ln 2}{\sqrt{r}}\;, \end{equation} and $(\tau_j^{-1})_{j\geq 1} \in \ell^q({\mathbb{N}})$ for any $0 < q < \infty$, the authors proved the bound \cite[Theorem 4.2]{bachmayr2017sparse} \begin{equation}\label{eq:DiffusionBound} \sum_{|{\boldsymbol{\mu}}|_\infty \leq r} \frac{\boldsymbol{\tau}^{2{\boldsymbol{\mu}}}}{{\boldsymbol{\mu}}!} \int_{Y} ||\partial^{\boldsymbol{\mu}}_{\boldsymbol{y}} u({\boldsymbol{y}})||^2_\mathcal{S} d{\boldsymbol{\gamma}}({\boldsymbol{y}}) \leq C\int_{Y} \exp(4||\kappa({\boldsymbol{y}})||_{L^{\infty}(D)}) d{\boldsymbol{\gamma}}({\boldsymbol{y}}) < \infty\;, \end{equation} where $\mathcal{S} = H^1_0(D)$ and $C$ is a constant independent of ${\boldsymbol{y}}$. The first inequality is ensured by \eqref{eq:taupsi} from a careful estimate of the partial derivatives of $u$ with respect to ${\boldsymbol{y}}$ and the sum of their integrals, while the second inequality is ensured by $(\tau_j^{-1})_{j\geq 1} \in \ell^q({\mathbb{N}})$. Then the convergence rate $N^{-s}$ with $s = 1/q - 1/2$ in Theorem \ref{thm:N-termConv} is established for $f({\boldsymbol{y}}) = u({\boldsymbol{y}})$.
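For intuition, the following is a minimal one-dimensional finite-difference sketch of the parametric diffusion problem (our own illustration under the assumptions $D = (0,1)$, $g \equiv 1$ and $\psi_j(x) = j^{-\alpha}\sin(j\pi x)$; it is not the discretization used in the paper):

```python
import numpy as np

def solve_diffusion_1d(y, alpha=2.0, n=100):
    """Solve -(e^kappa u')' = 1 on (0,1), u(0) = u(1) = 0, by central
    finite differences, with kappa(x, y) = sum_j y_j j^(-alpha) sin(j pi x)."""
    h = 1.0 / n
    xm = (np.arange(n) + 0.5) * h          # cell midpoints x_{i+1/2}
    kappa = sum(yj * (j + 1) ** (-alpha) * np.sin((j + 1) * np.pi * xm)
                for j, yj in enumerate(y))
    a = np.exp(kappa)                      # diffusion coefficient at midpoints
    # tridiagonal system for the interior nodes x_1, ..., x_{n-1}
    main = (a[:-1] + a[1:]) / h ** 2
    off = -a[1:-1] / h ** 2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, np.ones(n - 1))   # u at the interior nodes
```

For ${\boldsymbol{y}} = {\boldsymbol 0}$ the coefficient is $1$ and the discrete solution reproduces $u(x) = x(1-x)/2$ exactly, a convenient check; each parameter sample then corresponds to one such solve inside the sparse quadrature.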
Note that in \cite{bachmayr2017sparse} only $r > 2/q$ is needed for the convergence result on the Hermite polynomial approximation error, while we need $r > 14/q$ for the convergence of the sparse quadrature error due to the proof of Lemma \ref{lemm:cnubnu}. Here, the solution $u({\boldsymbol{y}})$ can be replaced by a bounded linear functional of it, $f({\boldsymbol{y}}) = f(u({\boldsymbol{y}}))$, and the inequality \eqref{eq:DiffusionBound} can be verified for $f$ due to \begin{equation}\label{eq:linearf} |\partial^{\boldsymbol{\mu}}_{\boldsymbol{y}} f({\boldsymbol{y}})|^2 \leq ||f||^2_{\mathcal{S}'} ||\partial^{\boldsymbol{\mu}}_{\boldsymbol{y}} u({\boldsymbol{y}})||^2_\mathcal{S}. \end{equation} \section{Construction of the sparse quadrature} \label{sec:construction} We present two algorithms for the construction of the sparse quadrature: one is an a-priori construction that guarantees the dimension-independent convergence rate in Theorem \ref{thm:N-termConv}; the other is a goal-oriented a-posteriori construction based on an a-posteriori error indicator, namely the difference quadrature $\triangle_{\boldsymbol{\nu}}(f)$ in \eqref{eq:TensorDiff}, which depends on each specific function $f$. The latter cannot guarantee the dimension-independent convergence rate in theory, but achieves it in our numerical experiments in Section \ref{sec:numerics}. \subsection{A-priori construction} \label{sec:aprioricons} A-priori construction of sparse grids has been considered in the literature, e.g., in \cite{ma2009adaptive, beck2014quasi}. In our setting, from Theorem \ref{thm:N-termConv} we observe that the dimension-independent convergence rate of the sparse quadrature can be achieved by choosing the admissible index set $\Lambda_N$ with indices ${\boldsymbol{\nu}} \in \mathcal{F}$ corresponding to the smallest values of $b_{{\boldsymbol{\nu}}}$.
While we can compute $b_{\boldsymbol{\nu}}$ for all the indices ${\boldsymbol{\nu}} \in \mathcal{F}_{r,J}$ where \begin{equation} \mathcal{F}_{r,J}= \{{\boldsymbol{\nu}} \in \mathcal{F}: |{\boldsymbol{\nu}}|_{\infty} \leq r, \text{ and } \nu_j = 0 \text{ for } j > J \}\;, \end{equation} this is expensive or infeasible if $r$ and $J$ are very large or infinite. For a feasible construction, we first arrange $(\tau_j)_{j\geq 1}$ to be in increasing order. Then, thanks to the monotonic increasing property of $b_{\boldsymbol{\nu}}$ in \eqref{eq:bnuekgeqbnu}, we can adaptively construct the admissible index set $\Lambda_N$ by Algorithm \ref{alg:SparseQuad} (with candidate indices from a forward neighbor index set, see \eqref{eq:ForwNeib} ahead). Note that even for indices that are not comparable in the componentwise partial order, e.g., ${\boldsymbol{\nu}} = (2, 1)$ and ${\boldsymbol{\mu}} = (1, 2)$, we have $b_{\boldsymbol{\nu}} < b_{\boldsymbol{\mu}}$ thanks to the reordering just introduced. This implies that the a-priori construction, which iteratively explores the variables one after the other (see again \eqref{eq:ForwNeib} ahead), never misses the index with the smallest $b_{\boldsymbol{\nu}}$ not yet included in the set, which guarantees that the convergence rate predicted by the theory is attained. We explain this algorithm in detail in the next section. We remark that this a-priori construction depends only on the parameters $q$, $\boldsymbol{\tau}$ and $r$ in Assumption \ref{ass:DeriBound} for any function satisfying such assumption. However, it is not always straightforward or possible to verify this assumption, especially for functions that depend nonlinearly on the parameter, as in Example 2. In this situation, and in the common parametrization as in \eqref{eq:kappa}, we use $\tau_j = j^{\alpha - 1}$ when $||\psi_j||$ decays as $j^{-\alpha}$, as demonstrated in Section 5.2 (see Fig.
\ref{fig:CompareTau}), and choose $r = \lfloor 14(\alpha-1) \rfloor + 1$, the smallest integer larger than $14(\alpha-1)$, according to Assumption \ref{ass:DeriBound}. Alternatively, we turn to a goal-oriented a-posteriori construction that does not need $q$, $\boldsymbol{\tau}$ and $r$. \subsection{Goal-oriented a-posteriori construction} \label{sec:aposteriori} We present a goal-oriented a-posteriori construction of the sparse quadrature based on the dimension-adaptive tensor-product quadrature initially developed in \cite{gerstner2003dimension}, which we call \emph{adaptive sparse quadrature}; the associated grid $G_{\Lambda}$ is called an \emph{adaptive sparse grid}. The construction is based on the following adaptive process: given an admissible index set $\Lambda$, we search for an index ${\boldsymbol{\nu}} \in \mathcal{F}$ among the forward neighbors of $\Lambda$ (${\boldsymbol{\nu}} \in \mathcal{F}$ is called a forward neighbor of $\Lambda$ if $\Lambda \cup \{{\boldsymbol{\nu}}\}$ is still admissible) at which $||\triangle_{\boldsymbol{\nu}}||_\mathcal{S}$ is maximized, and add this index to the index set, $\Lambda = \Lambda \cup \{{\boldsymbol{\nu}}\}$. As the number of forward neighbors depends on the dimension $J$ (in fact, the forward neighbors of ${\boldsymbol 0}$ are ${\boldsymbol{e}}_j$ for all $j$), it is not feasible to search over all the forward neighbors in high or infinite dimensions. In such cases, it is usually reasonable to assume that the dimensions with small indices are more important than those with large indices, as determined, e.g., by the decaying eigenvalues of the Karhunen--Lo\`eve representation of a random field.
Therefore, we can explore the forward neighbors dimension by dimension in the set (see, e.g., \cite{Schillings2013, chkifa2014high}) \begin{equation}\label{eq:ForwNeib} \mathcal{N}(\Lambda) := \{{\boldsymbol{\nu}} \not \in \Lambda: {\boldsymbol{\nu}} - {\boldsymbol{e}}_j \in \Lambda, \forall j \in {\mathbb{J}}_{\boldsymbol{\nu}} \text{ and } \nu_j = 0\;, \forall j > j(\Lambda)+1\}, \end{equation} where ${\mathbb{J}}_{\boldsymbol{\nu}} = \{j: \nu_j \neq 0\}$, and $j(\Lambda)$ is the smallest $j$ such that $\nu_i = 0$ for all $i > j$ and all ${\boldsymbol{\nu}} \in \Lambda$. More generally, $j(\Lambda) + K$ for some $K \geq 1$ can be used, see \cite{nobile2016adaptive}. The adaptive sparse quadrature can be constructed by the basic greedy algorithm proposed in \cite{gerstner2003dimension}, whose data structures were improved in \cite{klimke2006uncertainty} to cope with very high dimensions (e.g., up to $10^4$ dimensions on a personal laptop with $16$\,GB memory). We present the goal-oriented a-posteriori construction also in Algorithm \ref{alg:SparseQuad}.
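To make the set \eqref{eq:ForwNeib} concrete, the following Python sketch (ours, for illustration only; all names are hypothetical and this is not the paper's implementation) computes the forward neighbors of a downward-closed index set, representing indices as tuples with trailing zeros stripped:

```python
def forward_neighbors(Lam):
    """Forward neighbors N(Lam) of a downward-closed index set Lam,
    following eq. (ForwNeib): candidates nu with nu - e_j in Lam for
    every active j, exploring at most one dimension beyond j(Lam).
    Indices are tuples with trailing zeros stripped."""
    Lam = {tuple(nu) for nu in Lam}
    dim = max((len(nu) for nu in Lam), default=0)  # j(Lam): last active dimension
    out = set()
    for nu in Lam:
        for j in range(dim + 1):                   # dimensions 1, ..., j(Lam)+1
            cand = list(nu) + [0] * (dim + 1 - len(nu))
            cand[j] += 1
            cand = tuple(cand)
            while cand and cand[-1] == 0:          # canonical form: strip trailing zeros
                cand = cand[:-1]
            if cand in Lam:
                continue
            # admissibility: cand - e_k must lie in Lam for every active k
            admissible = True
            for k, ck in enumerate(cand):
                if ck == 0:
                    continue
                back = list(cand)
                back[k] -= 1
                b = tuple(back)
                while b and b[-1] == 0:
                    b = b[:-1]
                if b not in Lam:
                    admissible = False
                    break
            if admissible:
                out.add(cand)
    return out
```

For instance, the only forward neighbor of $\{{\boldsymbol 0}\}$ is ${\boldsymbol{e}}_1$, since only the first new dimension is explored, in agreement with \eqref{eq:ForwNeib}.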
\begin{algorithm} \caption{Adaptive sparse quadrature} \label{alg:SparseQuad} \begin{algorithmic}[1] \STATE{\textbf{Input: } maximum number of indices $N_{\text{max}}$, function $f$.} \STATE{\textbf{Output: } the admissible index set $\Lambda_N$, quadrature $\mathcal{Q}_{\Lambda_N}(f)$.} \STATE{Set $N= 1$, $\Lambda_N = \{{\boldsymbol 0}\}$, evaluate $f({\boldsymbol 0})$ and set $ \mathcal{Q}_{\Lambda_N}(f)=f({\boldsymbol 0})$.} \WHILE{$N < N_{\text{max}}$} \STATE{Construct the forward neighbor set $\mathcal{N}(\Lambda_N)$ by \eqref{eq:ForwNeib}.} \IF {a-priori construction} \STATE{Compute $b_{{\boldsymbol{\mu}}}$ for all ${\boldsymbol{\mu}} \in \mathcal{N}(\Lambda_N)$ by \eqref{eq:bnu}.} \STATE{Take ${\boldsymbol{\nu}} = \operatornamewithlimits{argmin}_{{\boldsymbol{\mu}} \in \mathcal{N}(\Lambda_N)} b_{\boldsymbol{\mu}}$.} \ELSE \STATE{Compute $\triangle_{\boldsymbol{\mu}}(f)$ for all ${\boldsymbol{\mu}} \in \mathcal{N}(\Lambda_N)$ by \eqref{eq:TensorQuad}.} \STATE{Take ${\boldsymbol{\nu}} = \operatornamewithlimits{argmax}_{{\boldsymbol{\mu}} \in \mathcal{N}(\Lambda_N)} ||\triangle_{{\boldsymbol{\mu}}}(f)||_\mathcal{S}$.} \ENDIF \STATE{Enrich the index set $\Lambda_{N+1} = \Lambda_N \cup \{{\boldsymbol{\nu}}\}$.} \STATE{Set $\mathcal{Q}_{\Lambda_{N+1}}(f) = \mathcal{Q}_{\Lambda_N}(f) + \triangle_{{\boldsymbol{\nu}}}(f)$.} \STATE{Set $ N \leftarrow N + 1$.} \ENDWHILE \end{algorithmic} \end{algorithm} \begin{remark} Instead of using the maximum number of indices as the stopping criterion, we can use others, such as the maximum number of points, a heuristic error indicator $||\sum_{{\boldsymbol{\mu}} \in \mathcal{N}(\Lambda_N)}\triangle_{{\boldsymbol{\mu}}}(f)||_\mathcal{S}$, or $b_{{\boldsymbol{\nu}}}^{-(2-q)/(4q)}$ for the a-priori construction.
Moreover, for the a-posteriori construction, it is also common practice to choose ${\boldsymbol{\nu}}$ as ${\boldsymbol{\nu}} = \operatornamewithlimits{argmax}_{{\boldsymbol{\mu}} \in \mathcal{N}(\Lambda_N)} ||\triangle_{{\boldsymbol{\mu}}}(f)||_\mathcal{S} /|G_{\boldsymbol{\mu}}|$ to balance error and work, see, e.g., \cite{gerstner2003dimension, nobile2016adaptive}. We caution that these heuristic error indicators are not rigorous and may lead to premature termination of the algorithm when $||\triangle_{{\boldsymbol{\mu}}}(f)||_\mathcal{S}$ happens to be critically small for all ${\boldsymbol{\mu}}$ in $\mathcal{N}(\Lambda_N)$; this can possibly be addressed by a verification process \cite{chen2015new}. \end{remark} \begin{remark} Note that to construct $\Lambda_N$ by the a-posteriori construction, we need to evaluate the function $f$ at all quadrature points corresponding to $\mathcal{N}(\Lambda_N)$, so that the total number of function evaluations is larger than the number of points in $\Lambda_N$ as presented in Corollary \ref{cor:index2point}. We will therefore also investigate the convergence rate with respect to the total number of quadrature points in the numerical experiments. \end{remark} \section{Numerical experiments} \label{sec:numerics} In this section, we present two numerical experiments, for a parametric function and a parametric PDE, to demonstrate the convergence properties of the sparse quadrature using different univariate quadrature rules and different construction schemes, in comparison with the Monte Carlo quadrature. \subsection{A parametric function} \label{subsec:function} We first consider the nonlinear parametric function presented in Example 1, Sec.\ \ref{sec:ex1}. The expectation of the function is known analytically: \begin{equation} I(f) = \exp\left( \frac{1}{2} \zeta(2\alpha) \right)\;, \end{equation} where $ \zeta(2\alpha) = \sum_{j\geq 1} j^{-2\alpha}$ is the Riemann zeta function evaluated at $2\alpha$.
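As a quick sanity check, the truncated reference value can be computed directly; this is an illustrative sketch of ours, not the paper's code:

```python
import math

def reference_value(alpha, J=10**4):
    """Reference I(f) = exp(zeta(2*alpha)/2), with zeta(2*alpha)
    approximated by the truncated sum over j = 1, ..., J."""
    zeta_trunc = sum(j ** (-2.0 * alpha) for j in range(1, J + 1))
    return math.exp(0.5 * zeta_trunc)

# For alpha = 1, zeta(2) = pi^2/6, so I(f) is close to exp(pi^2/12).
```

The truncation error of the zeta sum decays like $J^{1-2\alpha}$, so $J = 10^4$ already gives many accurate digits for $\alpha \geq 2$.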
We compute it by truncating the sum at $10^4$ terms and use the result as the reference value. We run Algorithm \ref{alg:SparseQuad} for the construction of the sparse quadrature with both the a-priori construction of Sec. \ref{sec:aprioricons} and the goal-oriented a-posteriori construction of Sec. \ref{sec:aposteriori}. For the former, we use $\tau_j = j^{\alpha - 1/2}$, as obtained in Example 1, for the computation of $b_{\boldsymbol{\nu}}$ in \eqref{eq:bnu}. We set the maximum number of sparse grid points to $10^5$. The forward neighbor index set \eqref{eq:ForwNeib} is used since $\tau_j$ is monotonically increasing. We test four quadrature rules: 1) the Gauss--Hermite rule with $m_l = l+1$ (GH1 for short); 2) the Gauss--Hermite rule with $m_l = 2^{l+1}-1$ (GH2); 3) the transformed Gauss--Kronrod--Patterson rule (tGKP) with maximum level $l = 6$; 4) the Genz--Keister rule (GK) with maximum level $l = 4$. \begin{figure} \caption{Decay of quadrature errors $|I(f) - \mathcal{Q}_{\Lambda}(f)|$ with respect to the number of indices $|\Lambda|$ and the number of points $|G_\Lambda|$.} \label{fig:OldPrioriPosteriori} \end{figure} Figure \ref{fig:OldPrioriPosteriori} displays the decay of the quadrature errors with respect to the number of indices $|\Lambda|$ and the number of sparse grid points (function evaluations) $|G_\Lambda|$ in $\Lambda$. We observe a dimension-independent convergence rate of the quadrature error, not only with respect to the number of indices, as predicted by Theorem \ref{thm:N-termConv}, but also with respect to the number of points. Note that the convergence rate obtained is indeed dimension-independent, since only a fraction of the available dimensions is activated, as observed in Fig. \ref{fig:level}: in other words, had we considered even more than the current $10^4$ random variables, possibly countably many, we would have observed the same convergence curve.
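For reference, the univariate Gauss--Hermite building block used by GH1 and GH2 can be generated from NumPy's probabilists' Hermite module; this is a sketch of ours (not the paper's code), normalizing the weights so they sum to one for the standard Gaussian measure:

```python
import numpy as np

def gauss_hermite_rule(m):
    """m-point Gauss-Hermite rule for N(0,1): nodes x and weights w with
    sum(w) = 1, exact for polynomials up to degree 2m - 1."""
    x, w = np.polynomial.hermite_e.hermegauss(m)  # weight function exp(-x^2/2)
    return x, w / np.sqrt(2.0 * np.pi)            # normalize to a probability measure

# GH1 uses m_l = l + 1 points at level l; GH2 uses m_l = 2^(l+1) - 1.
```

With $m = 5$ points the rule reproduces the Gaussian moments ${\mathbb{E}}[Y^2] = 1$ and ${\mathbb{E}}[Y^4] = 3$ exactly (up to rounding), since it is exact up to degree $9$.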
It is evident from the comparison that the a-priori and the a-posteriori construction schemes lead to very close convergence rates for the quadrature rules GH1, GH2 and GK, while the a-posteriori construction gives smaller quadrature errors for the same number of indices/points for all four quadrature rules. The numerical convergence rate with respect to the number of indices is about $N^{-s}$ for GH1, GH2, and GK, with $s = 2$ for $\alpha = 2$, which is faster than the rate $s = \alpha -1$ predicted by Theorem \ref{thm:N-termConv}. This indicates that the convergence rate obtained in Theorem \ref{thm:N-termConv} is possibly not optimal. Note that the convergence is slightly slower than $N^{-2}$ with respect to the number of points, which is due to the number of points being larger than the number of indices. The performances of GH1, GH2, and GK are very close: the errors of GH2 and GK overlap with respect to the number of indices, while the latter are smaller than the former with respect to the number of points, because GK points are nested while GH2 (and GH1) points are not. On the other hand, tGKP does not converge as fast as the other three rules and stagnates for large numbers of indices and points. This is due to the fact that the degree of exactness of tGKP is much smaller than that of the others; in particular, it does not satisfy A.1 of Assumption \ref{ass:Quadrature}, as shown in Fig.\ \ref{fig:QuadAccu}. \begin{figure} \caption{Maximum level ($\max_{{\boldsymbol{\nu}} \in \Lambda} \nu_j$) in each dimension $j$.} \label{fig:level} \end{figure} The sparse grid level $l$ for the two construction schemes with the four quadrature rules is displayed in Fig. \ref{fig:level}. Note that we have set the maximum level for GH2 and tGKP to $6$, and for GK to $4$, due to the availability of the quadrature points (for tGKP and GK).
For GH1, GH2, and GK, the a-priori construction tends to use higher levels in the first few dimensions than the a-posteriori construction, which gives rise to a larger number of points that are wasted, given the high degree of exactness of the GH and GK quadrature rules (see the early divergence of the error curves in the right part of Fig. \ref{fig:OldPrioriPosteriori}). The a-posteriori construction, by contrast, detects and benefits from this high exactness. On the other hand, the low exactness of tGKP is not seen by the a-priori construction but is by the a-posteriori one; see the different levels for tGKP in Fig. \ref{fig:level}. Moreover, the a-priori construction leads to less accurate quadrature results compared to the a-posteriori construction, especially for GH2, GK, and tGKP, as the number of quadrature points of these rules doubles from one level to the next. For GH1, the a-priori construction is very close to the a-posteriori construction in terms of accuracy. This is because only one quadrature point is added from one level to the next, so that the number of indices and the number of quadrature points are closer than for the other three quadrature rules. Note that the a-priori construction is based entirely on the quantity $b_{\boldsymbol{\nu}}$ in \eqref{eq:bnu}, which depends only on the index for fixed $(\tau_j)_{j\geq 1}$, regardless of how many quadrature points are used for the same index set. The convergence rates have been investigated with respect to the number of indices and points in $\Lambda$ to demonstrate the results of Theorem \ref{thm:N-termConv}. However, in order to construct $\Lambda$, the indices in its forward neighbor set $\mathcal{N}(\Lambda)$ (see the definition \eqref{eq:ForwNeib}) have to be searched over. Hence, we need to evaluate the function at each quadrature point corresponding to $\mathcal{N}(\Lambda)$ for the a-posteriori construction, or to evaluate $b_{{\boldsymbol{\nu}}}$ (defined in \eqref{eq:bnu}) for the a-priori construction.
Here we emphasize that the computational cost of evaluating $b_{{\boldsymbol{\nu}}}$ could be negligible compared to that of a function evaluation, which requires, e.g., a PDE solve, so that the a-priori construction is potentially more efficient than the a-posteriori one. For instance, here $30601$ function evaluations are performed out of $100500$ points (the remaining points belong to the forward neighbor set $\mathcal{N}(\Lambda)$) for the GH1 quadrature rule. To investigate the convergence rate with respect to the total number of indices and points in $\bar{\Lambda} = \Lambda \cup \mathcal{N}(\Lambda)$, which represents the total computational cost, we compute the quadrature error $|I(f) - \mathcal{Q}_{\bar{\Lambda}}(f)|$ for the GK rule with $\alpha = 1, 2, 3$. We also compute the Monte Carlo quadrature error as an average over $100$ trials for all $\alpha$ in $10^3$ dimensions. The quadrature errors are reported in Fig. \ref{fig:AllPrioriPosteriori}. \begin{figure} \caption{Decay of quadrature errors $|I(f) - \mathcal{Q}_{\bar{\Lambda}}(f)|$ with respect to the total number of indices and points in $\bar{\Lambda}$.} \label{fig:AllPrioriPosteriori} \end{figure} We observe that the convergence rates of the quadrature errors with respect to both the total number of indices and the total number of points in the union set $\bar{\Lambda}$ are about $N^{-s}$, where $s = \alpha - 1/2$, for all $\alpha = 1, 2, 3$ and for both the a-priori and the a-posteriori construction schemes. Meanwhile, the average of the Monte Carlo (MC) quadrature errors decays as $N^{-1/2}$ for all $\alpha$, which is much slower than the sparse quadrature errors for $\alpha = 2, 3$. In the case $\alpha = 1$, the sparse quadrature still achieves a convergence rate very close to the $N^{-1/2}$ of MC, with smaller errors in this test example; see the right part of Fig. \ref{fig:AllPrioriPosteriori}.
Note that the MC quadrature error is measured in average/expectation, and could be much less accurate depending on the trial, while the sparse quadrature error is deterministically bounded. \subsection{A parametric PDE} \label{subsec:PDE} In this section, we consider the parametric PDE of Example 2 in Sec. \ref{sec:ex3}, where the coefficient $\kappa$ is a Gaussian random field admitting the Karhunen--Lo\`eve expansion \begin{equation}\label{eq:KLExpansion} \kappa = \kappa_0 + \sum_{j \geq 1} \sqrt{\lambda_j} \phi_j y_j\;, \end{equation} where $(\lambda_j, \phi_j)_{j\geq 1}$ are the eigenpairs of $(-\delta \triangle)^{-\alpha}$, $\delta, \alpha > 0$, with homogeneous Dirichlet boundary conditions on the boundary $\partial D$ of the domain $D \subset {\mathbb{R}}^d$, and $(y_j)_{j \geq 1}$ are i.i.d. standard Gaussian random variables. For the simple case $D = (0,1)$ and $\delta = 1/\pi^2$, we have \begin{equation} \lambda_j = j^{-2\alpha}, \text{ and } \phi_j = \sin(\pi j x)\;. \end{equation} This one-dimensional PDE problem under the above parametrization is well-posed under the condition $\alpha > 1/2$, see \cite[Assumption 3.1]{charrier2012strong}. In the numerical test, we set $\kappa_0 = 0$, the forcing term $g = 1$, and prescribe zero Dirichlet boundary conditions at $x=0, 1$. A uniform mesh with mesh size $h = 1/2^{10}$ is used for the discretization of the domain $D$; accordingly, we truncate the parametrization \eqref{eq:KLExpansion} at $J = 1023$ dimensions. We use a finite element method with piecewise linear elements to solve the elliptic PDE. Under the parametrization \eqref{eq:KLExpansion}, our quantity of interest is the average value of $u$ over $D$, and we compute its first two moments, i.e., we compute ${\mathbb{E}}[f_1]$ and ${\mathbb{E}}[f_2]$, where \begin{equation} f_1({\boldsymbol{y}}) = Q(u({\boldsymbol{y}})) \text{ and } f_2({\boldsymbol{y}}) = Q^2(u({\boldsymbol{y}})), \text{ with } Q(u({\boldsymbol{y}})) = \int_D u({\boldsymbol{y}}) dx\;.
\end{equation} We construct the sparse quadrature by both the a-priori and the a-posteriori construction schemes presented in Algorithm \ref{alg:SparseQuad}. For the a-priori construction, to satisfy the condition \eqref{eq:taupsi} with $\psi_j = \sqrt{\lambda_j} \phi_j = j^{-\alpha} \sin(\pi j x)$, a choice of $\tau_j \propto j^{\alpha - 1 - \varepsilon}$ for arbitrarily small $\varepsilon > 0$ is sufficient, since \begin{equation} \sup_{x \in D} \sum_{j \geq 1} \tau_j |\psi_j(x)| \leq \sum_{j\geq 1} \tau_j ||\psi_j||_{L^\infty(D)} = \sum_{j\geq 1} \tau_j j^{-\alpha}\;. \end{equation} Here, we set $\tau_j = j^{\alpha - 1}$ with $\alpha = 2$. To run Algorithm \ref{alg:SparseQuad}, we set the maximum number of sparse grid points to $10^5$. \begin{figure} \caption{Decay of quadrature errors $|I(f) - \mathcal{Q}_{\Lambda}(f)|$ for the two moments ${\mathbb{E}}[f_1]$ and ${\mathbb{E}}[f_2]$ with respect to the number of indices and points in $\Lambda$.} \label{fig:OldPrioriPosterioriPDE} \end{figure} Fig. \ref{fig:OldPrioriPosterioriPDE} displays the convergence of the quadrature errors of the two moments ${\mathbb{E}}[f_1]$ and ${\mathbb{E}}[f_2]$ with respect to the number of indices and points in the index set $\Lambda$, where we compute the error as \begin{equation} |I(f) - \mathcal{Q}_{\Lambda}(f)| \approx |\mathcal{Q}_{\bar{\Lambda}_{\text{max}}}^{\text{GK}}(f) - \mathcal{Q}_{\Lambda}(f)|\;. \end{equation} Here $\mathcal{Q}_{\bar{\Lambda}_{\text{max}}}^{\text{GK}}(f)$ is the approximation of $I(f)$ by the a-posteriori GK quadrature at the largest index set $\bar{\Lambda}_{\text{max}} = \Lambda_{\text{max}} \cup \mathcal{N}(\Lambda_{\text{max}})$, with about $10^5$ quadrature points. The GK quadrature is used since it is the most accurate for this test example, as shown in Fig. \ref{fig:OldPrioriPosterioriPDE}.
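To illustrate the forward model behind each function evaluation, the following sketch solves a one-dimensional diffusion problem of this type with piecewise linear finite elements on a uniform mesh and returns $Q(u)$. It is our illustration only: the lognormal coefficient $e^{\kappa}$ and the midpoint evaluation of the coefficient are assumptions (Example 2 is specified elsewhere in the paper), while $g = 1$ and the zero Dirichlet conditions follow the text:

```python
import numpy as np

def solve_diffusion_1d(y, alpha=2.0, n=2**10):
    """Sketch: piecewise-linear FEM for -(a u')' = 1 on (0,1), u(0)=u(1)=0,
    with a hypothetical lognormal coefficient a = exp(kappa), where
    kappa(x) = sum_j j^(-alpha) sin(pi j x) y_j (truncated KL expansion).
    Returns Q(u) = int_0^1 u dx (trapezoidal rule on the nodal values)."""
    h = 1.0 / n
    xm = (np.arange(n) + 0.5) * h                  # element midpoints
    kappa = np.zeros(n)
    for j, yj in enumerate(y, start=1):
        kappa += j ** (-alpha) * np.sin(np.pi * j * xm) * yj
    a = np.exp(kappa)                              # one coefficient value per element
    # tridiagonal stiffness matrix for the interior nodes
    main = (a[:-1] + a[1:]) / h                    # diagonal entries
    off = -a[1:-1] / h                             # off-diagonal entries
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    b = np.full(n - 1, h)                          # load vector for g = 1
    u = np.linalg.solve(A, b)
    return h * u.sum()                             # boundary values are zero
```

For $\boldsymbol{y} = \boldsymbol{0}$ the coefficient is $a \equiv 1$, so $u = x(1-x)/2$ and $Q(u) = 1/12$, which the sketch reproduces up to the quadrature error $O(h^2)$.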
Moreover, for all quadrature rules, the number of dimensions activated in $\Lambda$ (those for which the maximum grid level is larger than $1$ in $\Lambda \cup \mathcal{N}(\Lambda)$) is smaller than the number of the full dimensions, and in particular smaller than the number of dimensions activated by the a-posteriori GK construction in $\bar{\Lambda}_{\text{max}}$, see Fig. \ref{fig:levelPDE}, which indicates that the quadrature errors computed for the indices and points in $\Lambda$ are unbiased and that the convergence rate is dimension-independent. From the decay of the quadrature errors, we observe a dimension-independent convergence rate of about $N^{-s}$ with $s = 2$, with respect to the number of both indices and points in $\Lambda$, for both quantities of interest $f_1$ and $f_2$. Again, the GK quadrature turns out to be the most accurate, and tGKP the least, for the same number of quadrature points. The a-priori construction gives less accurate quadrature results than the a-posteriori construction, in particular for GH2, tGKP, and GK, as explained in the last section. We remark that the same index set is constructed for both $f_1$ and $f_2$ by the a-priori construction, while the a-posteriori construction produces different index sets for the two quantities. This is illustrated by Fig. \ref{fig:levelPDE}, where the maximum level in each dimension is the same for $f_1$ and $f_2$ under the a-priori construction and different under the a-posteriori one; see the comparison of GH1 and GK for the two quantities. Therefore, once constructed by the a-priori scheme, the same index set can be reused for different quantities of interest (with the same $(\tau_j)_{j\geq 1}$). The a-posteriori scheme, on the other hand, requires a complete reconstruction of the index set for each new quantity of interest.
\begin{figure} \caption{Maximum level ($\max_{{\boldsymbol{\nu}} \in \Lambda} \nu_j$) in each dimension $j$ for the parametric PDE problem.} \label{fig:levelPDE} \end{figure} \begin{figure} \caption{Left: maximum level ($\max_{{\boldsymbol{\nu}} \in \Lambda} \nu_j$) in each dimension $j$; right: decay of the quadrature errors, for the three choices of $\tau_j$.} \label{fig:CompareTau} \end{figure} Note that with $\tau_j = j^{\alpha - 1}$, i.e., $(\tau^{-1}_j)_{j\geq 1} \in \ell^q({\mathbb{N}})$ for $q > 1/(\alpha - 1)$, the numerical convergence rate of about $N^{-s}$ with $s = 2$ is faster than the rate $N^{-s}$ with $s = 1/q - 1/2 < \alpha - 3/2 = 1/2$ predicted by Theorem \ref{thm:N-termConv}. However, the choice $\tau_j = j^{\alpha - 1}$ might be only a sufficient condition for Assumption \ref{ass:DeriBound}, so we may relax it numerically. Here we also test $\tau_j = j^{\alpha - 1/2}$ and $\tau_j = j^{\alpha}$. The maximum level in each dimension and the convergence of the quadrature errors are shown in Fig. \ref{fig:CompareTau} for the a-priori construction with GH1. We can see that the three choices of $\tau_j$ produce very close convergence rates of about $N^{-s}$ with $s = 2$, though $\tau_j = j^{\alpha - 1}$ leads to more accurate quadrature than $\tau_j = j^{\alpha - 1/2}$ and $\tau_j = j^{\alpha}$. The maximum levels for the three choices are also the same except in a small number of dimensions. \begin{figure} \caption{Decay of quadrature errors $|I(f) - \mathcal{Q}_{\bar{\Lambda}}(f)|$ with respect to the total number of indices and points in $\bar{\Lambda}$.} \label{fig:AllPrioriPosterioriPDE} \end{figure} Finally, in Fig. \ref{fig:AllPrioriPosterioriPDE} we report the decay of the sparse quadrature errors for both ${\mathbb{E}}[f_1]$ and ${\mathbb{E}}[f_2]$ with respect to both the number of indices and the number of points in the union set $\bar{\Lambda} = \Lambda \cup \mathcal{N}(\Lambda)$, which corresponds to the total computational cost. We use the most accurate GK quadrature rule and test $\alpha = 1, 2, 3$.
The convergence rate of about $N^{-s}$ with $s = \alpha - 1/2$ can be observed for all $\alpha$ and for both the a-priori and the a-posteriori constructions, which indicates that the convergence rate depends only on the sparsity parameter $\alpha$; it is much higher than the Monte Carlo convergence rate $N^{-1/2}$ for $\alpha = 2, 3$. In the case $\alpha = 1$, the sparse quadrature errors converge at a rate of about $N^{-1/2}$ and are smaller than the Monte Carlo quadrature errors, which are computed as the average of $100$ trials. \section{Conclusion} \label{sec:Conclusion} In this work, we analyzed the dimension-independent convergence of an abstract sparse quadrature for high-dimensional integration with Gaussian measure, under certain assumptions on the univariate quadrature rules and on the regularity of the parametric function with respect to the parameters. This establishes the foundation of efficient algorithms that break the curse of dimensionality faced by a class of high- and infinite-dimensional integration problems. We presented both a-priori and a-posteriori construction schemes for the numerical integration, investigated them with four different univariate quadrature rules, and studied their convergence properties through numerical experiments on a nonlinear parametric function and a nonlinear parametric PDE. The numerical results demonstrate that the convergence rates of the quadrature errors do not depend on the number of dimensions but only on a parameter related to the regularity of the parametric function.
This conclusion holds not only for the convergence of the quadrature errors with respect to the number of indices in the admissible index set, as stated in the main theorem, but also with respect to the total number of quadrature points corresponding to the union of the admissible index set and its forward neighbor set, i.e., with respect to the total number of function evaluations or PDE solves. The convergence of the sparse quadrature errors (with rate $N^{-s}$) is faster than that of the Monte Carlo quadrature (i.e., $s > 1/2$) in all the numerical examples with sufficiently large $\alpha$ (or small $q$), which quantifies the regularity of the parametric function. The numerical convergence rates in the examples are larger than the theoretical prediction of the main theorem, which indicates that the latter may not be optimal; how to improve the theoretical convergence rate is worth investigating. Further work on the development and application of the sparse quadrature to high-dimensional integration problems in different areas, such as Bayesian inverse problems \cite{chen2016hessian} and optimization under uncertainty, is interesting and promising. Moreover, a comparison of the sparse quadrature with quasi-Monte Carlo quadrature \cite{graham2015quasi, kuo2015multilevel} for high-dimensional integration with Gaussian measure would also be of interest. \end{document}
\begin{document} \title{Sums and Products of Regular Polytopes' Squared Chord Lengths} \author{Jessica N. Copher} \address{Department of Mathematics\br Box 8205\br North Carolina State University\br Raleigh, NC 27695-8205} \email{[email protected]} \begin{abstract} Although previous research has found several facts concerning chord lengths of regular polytopes, none of these investigations has considered whether any of these facts define relationships that might generalize to the chord lengths of \emph{all} regular polytopes. Consequently, this paper explores whether four findings of previous studies\textemdash viz., the four facts relating to the sums and products of squared chord lengths of regular polygons inscribed in unit circles\textemdash can be generalized to all regular ($n$-dimensional) polytopes (inscribed in unit $n$-spheres). We show that (a) one of these four facts actually does generalize to all regular polytopes (of dimension $n\geq 2$), (b) one generalizes to all regular polytopes except most simplices, (c) one generalizes only to the family of crosspolytopes, and (d) one generalizes only to the crosspolytopes and 24-cell. We also discover several corollaries (due to reciprocation) and some theorems specific to the three-dimensional regular polytopes along the way. \end{abstract} \subjclass{Primary 51M20; Secondary 52B12, 52B11, 52B10} \keywords{Regular convex polytopes, chords, diagonals, faces} \maketitle \section{Introduction} Several discoveries have been made concerning the chord lengths of certain types of regular polytopes (most often, the regular two-dimensional polytopes, i.e., regular polygons; e.g., see \cite{FontaineHurley,Kappraff2002,SasaneChapman,Steinbach}). However, no single rule has been found to apply to the chord lengths of \emph{all} regular polytopes. 
To remedy this situation, we consider whether any of the four facts discovered in one area of previous investigation (regarding the chord lengths of two-dimensional regular polytopes) can be generalized to apply to the chord lengths of \emph{all} $n$-dimensional regular polytopes where $n\geq 2$.\footnote{Although the 0-dimensional polytope (i.e., a single point) and the 1-dimensional polytope (i.e., a line segment) are regular, we exclude them as being trivial.} (In this paper, we will call an $n$-dimensional polytope an ``$n$-polytope,'' an $n$-dimensional sphere an ``$n$-sphere,'' etc.). \subsection{The Four Known Facts} First, recall that a \emph{chord} of a regular 2-polytope (i.e., regular polygon) is a line segment whose two endpoints are vertices of the polygon. Let $\mathcal{P}$ be a regular polygon with $E$ edges that is inscribed in a unit circle (i.e., ``2-sphere''). \begin{enumerate} \item The sum of the squared chord lengths of $\mathcal{P}$ equals $E^{2}$. \item The sum of the squared \emph{distinct} chord lengths of $\mathcal{P}$ equals: \begin{enumerate} \item $E$ (when $E$ is odd) \cite{MorleyHarding} \item some integer (when $E$ is even) \cite[pp. 490-491]{Kappraff2002}. \end{enumerate} \item The product of the squared chord lengths of $\mathcal{P}$ equals $E^{E}$. \item The product of the squared \emph{distinct }chord lengths of $\mathcal{P}$ equals: \begin{enumerate} \item $E$ (when $E$ is odd) \item some integer (when $E$ is even) \cite[pp. 490-491]{Kappraff2002}. \end{enumerate} \end{enumerate} (Facts 1 and 3 are given in an unpublished 2013 article\footnote{Mustonen's unpublished article is titled ``Lengths of edges and diagonals and sums of them in regular polygons as roots of algebraic equations'' and can be accessed at http://www.survo.fi/papers/Roots2013.pdf as of 16 March 2019. Although Mustonen states Facts 1 and 3 without mathematical proof, a proof of Fact 1 may be obtained by substituting $R=1$ in Solution 6.73 of \cite{Prasolov}. 
(An English version of \cite{Prasolov}, edited and translated by Dimitry Leites, is in preparation with the title, \emph{Encyclopedia of Problems in Plane and Solid Geometry}). Likewise, Fact 3 can be derived from a statement proven in \cite[pp. 161-162]{Honsberger}, namely, that if $\mathcal{P}$ is a regular polygon with $E$ edges inscribed in a unit circle, the product $p$ of the lengths of the chords of $\mathcal{P}$ emanating from a given vertex $\mathbf{P}$ equals $E$. Denoting the vertices of $\mathcal{P}$ by $\mathbf{P_i}$ for $i=1,2,...,V$ and their associated products by $p_i$, we have $p_1p_2...p_V=p^V=E^{V}$. Since each chord's length occurs twice in $p_1p_2...p_V$, this is also equal to the product of the \emph{squared} chord lengths of $\mathcal{P}$. Since, for regular polygons, $V=E$ (see \cite{CoxeterGreitzer}), we obtain Fact 3.} by S. Mustonen). \subsection{Overview of Procedure} To test the generalization of these four 2-polytope facts to all dimensions (greater than or equal to 2), we apply the following four steps: \begin{enumerate} \item Consider a regular $n$-polytope inscribed in a unit $n$-sphere ($n \geq 2$). \item Compute both the sum and product of (a) the polytope\textquoteright s squared chord lengths and (b) the polytope\textquoteright s squared distinct chord lengths. \item Compare the results obtained in Step 2 with the values (which are usually some variant of $E$) given by the relevant 2-polytope fact. \item If the compared values in Step 3 are different, compare the results obtained in Step 2 with other values related to the $n$-polytope (e.g., other $j$-face cardinalities). \end{enumerate} The next section contains a review of some basic definitions and properties necessary for carrying out these steps. In the four remaining sections, we carry out these steps for each of the four facts in turn. \section{Preliminaries} In this paper, all $n$-polytopes being considered are \emph{finite convex} regions of Euclidean $n$-space.
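The four polygon facts stated above are easy to verify numerically for small polygons. The following Python sketch (ours, for illustration only) checks them for odd $E$, using the fact that the distinct squared chord lengths of a regular $E$-gon inscribed in the unit circle are $4\sin^2(\pi k/E)$ for $k = 1, \ldots, (E-1)/2$:

```python
import cmath
import math
from itertools import combinations

def squared_chords(E):
    """Squared lengths of all chords of a regular E-gon inscribed in the
    unit circle (one value per unordered pair of vertices)."""
    verts = [cmath.exp(2j * math.pi * k / E) for k in range(E)]
    return [abs(v - w) ** 2 for v, w in combinations(verts, 2)]

def check_facts(E):
    """Check Facts 1-4 for odd E, where the distinct-chord facts give exactly E."""
    chords = squared_chords(E)
    distinct = [4 * math.sin(math.pi * k / E) ** 2 for k in range(1, (E - 1) // 2 + 1)]
    assert abs(sum(chords) - E ** 2) < 1e-9                 # Fact 1
    assert abs(sum(distinct) - E) < 1e-9                    # Fact 2(a)
    assert abs(math.prod(chords) - E ** E) < 1e-6 * E ** E  # Fact 3
    assert abs(math.prod(distinct) - E) < 1e-9              # Fact 4(a)
```

Facts 1 and 3 hold for even $E$ as well; for instance, the square has four sides of squared length $2$ and two diagonals of squared length $4$, summing to $16 = E^2$.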
For a full definition, see \cite[pp.126-127]{Coxeter1973}. An $n$-polytope $\mathcal{P}$ has one or more \emph{$j$-faces} where $j \in \left \{-1,0,1,2,...,n \right \}$ (cf. \cite{Matousek}). For $j=0,1,...,n-1$, each of these $j$-faces is the $j$-dimensional intersection of $\mathcal{P}$ and $\mathcal{H}$ (where $\mathcal{H}$ is an $\left(n-1\right)$-plane such that $\mathcal{P}$ lies in one of the closed half-spaces determined by $\mathcal{H}$ and has a nonempty intersection with $\mathcal{H}$) \cite{Matousek}. For $j=n$, the sole $j$-face is just the $n$-polytope $\mathcal{P}$ itself \cite{Matousek}, and, for $j=-1$, the sole $j$-face is the empty set \cite{Matousek}. For any $n$-polytope, the 0-faces are called \emph{vertices}, the 1-faces are called \emph{edges}, the $(n-2)$-faces are called \emph{ridges}, and the $(n-1)$-faces are called \emph{facets} \cite{Matousek}. \sloppy An $n$-polytope is called \emph{regular} if: \begin{enumerate} \item Whenever one of its $\left(j-2\right)$-faces is incident with one of its $j$-faces, there are exactly two\emph{ }$\left(j-1\right)$-faces that are incident with both the $\left(j-2\right)$-face and the $j$-face (for $j=1,2,...,n$), and \item For a given $j\in\left\{ 0,1,2,...,n\right\} $, all of the $j$-faces are equidistant from a point $\mathbf{O}$ called the polytope's \emph{center} \cite{Coxeter1965}. (That is, for each $j\in\left\{0,1,2,..., n\right\} $, there is an $n$-sphere centered at $\mathbf{O}$ that passes through all the $j$-faces' centers). Without loss of generality, all polytopes will be considered as being centered at the origin. \end{enumerate} One useful consequence of regularity is that, for a given $j$, any of a polytope\textquoteright s $j$-faces can be interchanged with any of the polytope's other $j$-faces via one of its \emph{symmetries} \cite{Grunbaum}. 
A second useful consequence of regularity is that, when a regular $n$-polytope is circumscribed about an $n$-sphere, $n\geq2$, it may be reciprocated with respect to the $n$-sphere to form a new regular $n$-polytope inscribed inside the $n$-sphere (this maps the original polytope's $\left(j-1\right)$-faces to the new polytope's $\left(n-j\right)$-faces for $j=0,1,...,n$) \cite{Coxeter1973}. There are exactly five regular 3-polytopes, six regular 4-polytopes, and three regular $n$-polytopes in each dimension $n\geq5$ \cite{Coxeter1973}. Table \ref{tab:j-Face-Cardinality-and-Shape} shows the number and shape of the $j$-faces of each of these polytopes. \begingroup \makeatletter \renewcommand\@makefntext[1] {\noindent\makebox[1.8em][r]{\myfnmark}#1} \makeatother \renewcommand{\thefootnote}{\alph{footnote}} \begin{longtable}{ccccccc} \caption{\label{tab:j-Face-Cardinality-and-Shape}Cardinality and Shape of $j$-Faces of Regular Polytopes \cite{Coxeter1973}.} \tabularnewline \hline \midrule \textbf{Polytope} & \multicolumn{4}{c}{\textbf{$j$-Face Cardinality}} & \multicolumn{2}{c}{\textbf{Shape of $j$-Face}\footnotemark[1]}\tabularnewline \endfirsthead \tabularnewline \midrule \textbf{3-polytopes} & \textbf{0-face} & \textbf{1-face} & \textbf{2-face} & & \textbf{2-face} & \tabularnewline \midrule \emph{tetrahedron} & 4 & 6 & 4 & & \small triangle & \tabularnewline \midrule \emph{octahedron} & 6 & 12 & 8 & & \small triangle & \tabularnewline \midrule \emph{cube} & 8 & 12 & 6 & & \small square & \tabularnewline \midrule \emph{icosahedron} & 12 & 30 & 20 & & \small triangle & \tabularnewline \midrule \emph{dodecahedron} & 20 & 30 & 12 & & \small pentagon & \tabularnewline \midrule \textbf{4-polytopes} & \textbf{0-face} & \textbf{1-face} & \textbf{2-face} & \textbf{3-face} & \textbf{2-face} & \textbf{3-face}\tabularnewline \midrule \emph{5-cell} & 5 & 10 & 10 & 5 & \small triangle & \small tetrahedron\tabularnewline \midrule \emph{16-cell} & 8 & 24 & 32 & 16 & \small
triangle & \small tetrahedron\tabularnewline \midrule \emph{8-cell} & 16 & 32 & 24 & 8 & \small square & \small cube\tabularnewline \midrule \emph{24-cell} & 24 & 96 & 96 & 24 & \small triangle & \small octahedron\tabularnewline \midrule \emph{600-cell} & 120 & 720 & 1200 & 600 & \small triangle & \small tetrahedron\tabularnewline \midrule \emph{120-cell} & 600 & 1200 & 720 & 120 & \small pentagon & \small dodecahedron\tabularnewline \midrule \textbf{$n$-polytopes} & \multicolumn{4}{c}{\textbf{any $j$-face}} & \multicolumn{2}{c}{\textbf{any $j$-face}}\tabularnewline \midrule \emph{$n$-simplex} & \multicolumn{4}{c}{$\binom{n+1}{j+1}$} & \multicolumn{2}{c}{\small $j$-simplex}\tabularnewline \midrule \parbox{2cm}{\centering\emph{$n$-crosspolytope}} & \multicolumn{4}{c}{$2^{j+1}\binom{n}{j+1}$} & \multicolumn{2}{c}{\small $j$-simplex}\tabularnewline \midrule \emph{$n$-cube} & \multicolumn{4}{c}{$2^{n-j}\binom{n}{j}$} & \multicolumn{2}{c}{\small $j$-cube}\tabularnewline \bottomrule \end{longtable} \begin{center} \begin{minipage}{.95\linewidth} \renewcommand{\footnoterule}{} \footnotetext[1]{\myfnmark{a} The shape of a 0-face is a point; the shape of a 1-face is a line segment.} \end{minipage} \end{center} \endgroup \noindent Lastly, a \emph{chord} of a regular $n$-polytope is defined, in this paper, as a line segment whose endpoints are vertices (i.e., 0-faces) of the polytope. Thus, the chords of a regular polytope consist of all of its edges and diagonals. \section{FACT 1: Sum of Squared Chords} The first 2-polytope fact stated that, for any regular 2-polytope inscribed in a unit 2-sphere, the sum of its squared chord lengths equals $E^{2}.$ To test generalization of this fact to all regular $n$-polytopes ($n\geq 2$), we will make use of the following three lemmas and definition.
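Before turning to the general case, the 2-polytope fact itself is easy to confirm numerically; the following Python sketch (an aside, not part of the proofs below) places the vertices of a regular $E$-gon on the unit circle and sums the squared chord lengths over all pairs of distinct vertices.

```python
import itertools
import math

def sum_squared_chords_polygon(E):
    """Sum of squared chord lengths of a regular E-gon inscribed in the unit circle."""
    verts = [(math.cos(2 * math.pi * i / E), math.sin(2 * math.pi * i / E))
             for i in range(E)]
    # A chord is determined by an unordered pair of distinct vertices.
    return sum((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
               for a, b in itertools.combinations(verts, 2))

# Fact 1: the sum equals E^2 for every regular E-gon.
for E in range(3, 13):
    assert abs(sum_squared_chords_polygon(E) - E ** 2) < 1e-9
```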
\subsection{\label{Sect:3.1}Preliminaries} \begin{lem} \label{Lemma:edge-length}Let $\mathcal{P}$ be a regular $n$-polytope inscribed in a unit $n$-sphere, $n\geq2$, and let $x$ denote the ratio of the $n$-sphere's radius $_{0}R$ to the polytope's half-edge length $l$. Then the edge length $e$ of $\mathcal{P}$ is given by the formula $e=\frac{2}{x}$. \end{lem} \begin{proof} From the hypotheses, we have $x=\frac{_{0}R}{l}$. To find the edge length $e$ (which is equal to $2l$), we first solve for $2l$ and then replace $_{0}R$ with $1$ (since the $n$-sphere has a radius of 1). We obtain: $e=2l=\frac{2_{0}R}{x}=\frac{2}{x}$. \end{proof} \begin{rem} To use Lemma \ref{Lemma:edge-length} in our proofs, we will substitute values (or formulas) for $x$ obtained from \cite{Coxeter1973} for select polytopes. For the regular 3- and 4-polytopes, we will substitute the values of $x$ that are listed in the tables of ``regular polyhedra in ordinary space'' and ``regular polytopes in four dimensions'' in \cite{Coxeter1973} (i.e., Table Ii-ii on pp. 292-293)\footnote{The values of $x$ are referred to as the values of $\frac{_{0}R}{l}$ in \cite{Coxeter1973}.} and are also given in the cells marked with an asterisk in our Table \ref{tab:Edge-Length}. For the regular $n$-simplices, $n$-crosspolytopes, and $n$-hypercubes, we will first substitute $j=0$ into the formulas for $\frac{_{j}R}{l}$ (the ratio of the \textquotedblleft radius of $n$-sphere intersecting each $j$-face\textquoteright s center\textquotedblright{} to \textquotedblleft half-edge length\textquotedblright ) listed in the table of ``regular polytopes in $n$ dimensions'' in \cite{Coxeter1973} (i.e., Table Iiii on pp. 294-295) to obtain formulas for $\frac{_{0}R}{l}$ (i.e., formulas for $x$) that we can then substitute into Lemma \ref{Lemma:edge-length} to obtain the edge length $e$ for each of these three $n$-polytopes. Each of these steps is shown in the last three rows of Table \ref{tab:Edge-Length}.
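For instance, for the $n$-simplex the substitution $j=0$ reads
\[
x=\left.\frac{_{j}R}{l}\right|_{j=0}=\sqrt{\frac{2}{0+1}-\frac{2}{n+1}}=\sqrt{2-\frac{2}{n+1}},\qquad e=\frac{2}{x}=\frac{2}{\sqrt{2-\frac{2}{n+1}}},
\]
as recorded in the corresponding row of Table \ref{tab:Edge-Length}.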
\end{rem} \begingroup \makeatletter \renewcommand\@makefntext[1] {\noindent\makebox[1.8em][r]{\myfnmark}#1} \makeatother \renewcommand{\thefootnote}{\alph{footnote}} \begin{longtable}{ccccc} \caption{\label{tab:Edge-Length}Edge Length, $e$, of Select Regular Polytopes.} \tabularnewline \hline \midrule & \textbf{Polytope} & $\frac{_{j}R}{l}$ & $\frac{_{0}R}{l}$ ($=x$) & $e$\tabularnewline \midrule \endfirsthead {\multirow{3}{*}{\textbf{3-polytopes}\footnotemark[1]}} & \emph{cube} & NA & $\sqrt{3}${*} & $\frac{2}{\sqrt{3}}$\tabularnewline \cmidrule{2-5} & \emph{icosahedron} & NA & $\sqrt{\tau\sqrt{5}}${*} & $\frac{2}{\sqrt{\tau\sqrt{5}}}$\tabularnewline \cmidrule{2-5} & \emph{dodecahedron} & NA & $\tau\sqrt{3}${*} & $\frac{2}{\tau\sqrt{3}}$\tabularnewline \midrule \textbf{4-polytope} & \emph{24-cell} & NA & $2${*} & $1$\tabularnewline \midrule \multirow{3}{*}{\textbf{$n$-polytopes}\footnotemark[2]} & \emph{$n$-simplex} & $\sqrt{\frac{2}{j+1}-\frac{2}{n+1}}$ & $\sqrt{2-\frac{2}{n+1}}$ & $\frac{2}{\sqrt{2-\frac{2}{n+1}}}$\tabularnewline \cmidrule{2-5} & \emph{$n$-crosspolytope} & $\sqrt{\frac{2}{j+1}}$ & $\sqrt{2}$ & $\sqrt{2}$\tabularnewline \cmidrule{2-5} & \emph{$n$-cube} & $\sqrt{n-j}$ & $\sqrt{n}$ & $\frac{2}{\sqrt{n}}$\tabularnewline \bottomrule \end{longtable} \begin{center} \begin{minipage}{.95\linewidth} \renewcommand{\footnoterule}{} \footnotetext[1]{\myfnmark{a} In these rows (and throughout the paper), $\tau$ denotes the golden ratio $\frac{1+\sqrt{5}}{2}$.} \footnotetext[2]{\myfnmark{b} Although Coxeter \cite{Coxeter1973} only states that his formulas for $\frac{_{j}R}{l}$ are valid for $n\geq5$ (see his Table Iiii), it is evident from his development of these formulas (see pp.
133-134, 158-159) that they are, in fact, valid for $n\geq2$.} \end{minipage} \end{center} \endgroup \begin{lem} \label{Lemma:mV/2}Let $\mathcal{P}$ be a regular polytope with $V$ vertices, and let $m_{i}$ be the number of chords of length $d_{i}$ emanating from a given vertex of $\mathcal{P}$. Then the total number of chords of length $d_{i}$ is given by $N_{i}=\frac{m_{i}V}{2}$. \end{lem} \begin{proof} Let $\mathbf{P_{1},P_{2},\ldots,P_{V}}$ be the vertices of $\mathcal{P}$. Without loss of generality, consider the vertex $\mathbf{P_{1}}$. If vertex $\mathbf{P_{1}}$ has $m_i$ chords of length $d_{i}$ emanating from it, then, by the mappings induced by regularity, each of the $V$ vertices of the polytope also has $m_i$ chords of length $d_{i}$ emanating from it. However, since the chord $\mathbf{P_{j}P_{j'}}$ is equal to the chord $\mathbf{P_{j'}P_{j}}$ for all distinct $j,j'\in\left\{ 1,2,\ldots,V\right\} $, this yields a total of only $\frac{m_iV}{2}$ chords of length $d_{i}$. \end{proof} \begin{lem} \label{lem:N=00003DV(V-1)/2}Let $\mathcal{P}$ be a regular polytope with $V$ vertices and $N$ total chords. Then $N=\frac{V\left(V-1\right)}{2}$. \end{lem} \begin{proof} Since each chord in $\mathcal{P}$ corresponds to a single pair of distinct vertices, the total number of chords $N$ is equal to the total number of pairs of distinct vertices, i.e., $N=\binom{V}{2}=\frac{V\left(V-1\right)}{2}$. \end{proof} \begin{defn} An $n$-polytope is called \emph{centrally symmetric} if there exists a symmetry of the polytope that transforms each point $\left(x_{1},\;x_{2},\;\ldots\,,\;x_{n}\right)$ of the polytope into the point $\left(-x_{1},\;-x_{2},\;\ldots\,,\;-x_{n}\right)$ (cf. \cite{Coxeter1973,Grunbaum}). \end{defn} \noindent A useful property of centrally-symmetric polytopes is that, for each vertex $\mathbf{P}$, there is a unique vertex $\mathbf{P}'$ lying diametrically opposite to it (cf. \cite{Coxeter1973}).
All regular $n$-polytopes, $n\geq2$, are centrally symmetric except for the odd-edged polygons and simplices \cite{Coxeter1973}. \subsection{\label{subsec:Generalizing-Fact-1}Generalizing Fact 1} \begin{thm} \label{Thm:sum-of-all-chords-Vsquared}Let $\mathcal{P}$ be any regular $n$-polytope inscribed in a unit $n$-sphere (where $n \geq 2$). Then \[\sum_{i=1}^{N}c_{i}^{2}=V^{2}\] where $c_{i}$ is the length of each $i$\textsuperscript{th} chord of $\mathcal{P}$, $N$ is the total number of chords, and $V$ is the total number of vertices. \end{thm} \begin{proof} A regular $n$-polytope, where $n \geq 2$, is either centrally symmetric or \emph{not} centrally symmetric (in the latter case, it is either an odd-edged polygon or an $n$-simplex). Thus, we consider three cases: (1) $\mathcal{P}$ is centrally symmetric, (2) $\mathcal{P}$ is an odd-edged polygon, or (3) $\mathcal{P}$ is an $n$-simplex. \emph{Case 1 ($\mathcal{P}$ is centrally symmetric).} Let $k$ be the number of \emph{distinct} chord lengths of $\mathcal{P}$. Without loss of generality, choose a vertex \textbf{$\mathbf{P}_{\mathbf{0}}$} of the polytope. Let \textbf{$\mathbf{P_{i}}$} for $i=1,2,...,k$ be $k$ other vertices of the polytope such that $\mathbf{P}_{0}$ and $\mathbf{P_{i}}$ form the endpoints of $k$ chords each having a distinct length $d_{i}$ and $d_{1}<d_{2}<\cdots<d_{k}$. We now find a formula (or value) for each of these $k$ distinct lengths. Observe that\textemdash except in the case where both $\mathbf{P}_{0}$ and $\mathbf{P_{i}}$ are collinear with the center $\mathbf{O}$ of the unit $n$-sphere\textemdash each chord $\mathbf{P_0P_i}$ is the side of a triangle $\triangle \mathbf{P}_{0} \mathbf{O} \mathbf{P_{i}}$. Moreover, the other two sides of the triangle ($\mathbf{OP_{0}}$ and $\mathbf{OP_{i}}$) form radii of the $n$-sphere and so have length 1.
Applying the Law of Cosines, we have $d_{i}^{2}=1^{2}+1^{2}-2\left(1\right)\left(1\right)\cos\theta_{i}=2-2\cos\theta_{i}$ where $\theta_{i}$ is the angle between the two unit-long sides of the triangle. When both $\mathbf{P}_{0}$ and $\mathbf{P_{i}}$ are collinear with $\mathbf{O}$, the chord $\mathbf{P_{0}P_{i}}$ is a diameter of the unit $n$-sphere and so has length 2. Since a diameter is the \emph{longest} possible chord of an $n$-sphere, $d_{k}=2$. Furthermore, only one diameter can emanate from $\mathbf{P_{0}}$, so Lemma \ref{Lemma:mV/2} (applied with $m_{k}=1$) reveals that $\mathcal{P}$ has a total of $\frac{V}{2}$ chords of length $d_{k}$. Letting $N_{i}$ be the number of chords of length $d_{i}$ ($i=1,2,...,k$), we have \[ \sum_{i=1}^{N}c_{i}^{2}=\sum_{i=1}^{k}N_{i}d_{i}^{2}=\sum_{i=1}^{k-1}N_{i}\left(2-2\cos\theta_{i}\right)+N_{k}d_{k}^{2}= \] \[ 2\sum_{i=1}^{k-1}N_{i}-2\sum_{i=1}^{k-1}N_{i}\cos\theta_{i}+\frac{V}{2}\left(2^{2}\right)= 2\left(\sum_{i=1}^{k}N_{i}-\frac{V}{2}\right)-2\sum_{i=1}^{k-1}N_{i}\cos\theta_{i}+2V. \] Since $\sum_{i=1}^{k}N_{i}$ must equal the total number of chords (i.e., $N$) and since $N=\frac{V\left(V-1\right)}{2}$ by Lemma \ref{lem:N=00003DV(V-1)/2}, we have: \[ \sum_{i=1}^{N}c_{i}^{2}=2\left[\frac{V\left(V-1\right)}{2}-\frac{V}{2}\right]-2\sum_{i=1}^{k-1}N_{i}\cos\theta_{i}+2V=V^{2}-2\sum_{i=1}^{k-1}N_{i}\cos\theta_{i}. \] Next, we show that $\sum_{i=1}^{k-1}N_{i}\cos\theta_{i}=0$. Since $\mathcal{P}$ is centrally symmetric, each vertex $\mathbf{P}_{\mathbf{i}}$ has an opposite vertex $\mathbf{P_{i}'}$ for $i=0,1,...,k$. Thus, for $i\in\left\{1,2,...,k-1\right\}$, $\mathbf{P_{0}P_{k}}$ and $\mathbf{P_{i}P_{i}'}$ intersect at $\mathbf{O}$ to form vertical angles $\angle\mathbf{P_{0}OP_{i}}$ and $\angle\mathbf{P_{k}OP_{i}'}$. Hence, these two angles are congruent. Furthermore, observe that $\angle\mathbf{P_{0}OP_{i}'}$ is the supplement of $\angle\mathbf{P_{0}OP_{i}}$ (which has measure $\theta_{i}$).
Thus, $\angle\mathbf{P_{0}OP_{i}'}$ has measure $\pi-\theta_{i}$. However, the line segment \textbf{$\mathbf{P_{0}P_{i}'}$} (lying opposite to the angle $\angle\mathbf{P_{0}OP_{i}'}$) is a side of the triangle $\triangle\mathbf{P_{0}OP_{i}'}$ (whose other sides, $\mathbf{OP_{0}}$ and $\mathbf{OP_{i}'}$, are radii of length 1) and, moreover, must be the same length as one of the original chords $\mathbf{P_{0}P_{i}}$ for $i=1,2,...,k-1$ (because these $k-1$ chords realize all of the possible distinct chord lengths\textemdash except for that of the unique diameter emanating from $\mathbf{P_{0}}$\textemdash and we know $\mathbf{P_{0}P_{i}'}\neq\mathbf{P_{0}P_{k}}$). Hence, by side-side-side, $\triangle\mathbf{P_{0}OP_{i}'}$ is congruent to one of the ``original triangles $\triangle\mathbf{P_{0}OP_{i}}$.'' Thus, the ``original $\mathbf{P_{0}P_{i}}$'' (i.e., the one that is the same length as $\mathbf{P_{0}P_{i}'}$) must lie opposite to one of the ``original angles $\angle\mathbf{P_{0}OP_{i}}$,'' \emph{which must have the same measure as $\angle\mathbf{P_{0}OP_{i}'}$ (i.e., $\pi-\theta_i$)}. Thus, we have shown that, for each angle of measure $\theta_{i}$ contained in the set $\left\{ \angle\mathbf{P_{0}OP_{i}}:\:i=1,2,...,k-1\right\} $, there is an angle (not necessarily distinct from the angle of measure $\theta_{i}$) that is also contained in this set and has measure $\pi-\theta_{i}$. Moreover, since we have shown that, for every chord of length $d_{i}$ (corresponding to an angle of measure $\theta_{i}$) that emanates from $\mathbf{P_{0}}$, there is a chord of length $d_{i}'$ (corresponding to an angle of measure $\pi-\theta_{i}$) that also emanates from $\mathbf{P_{0}}$, Lemma \ref{Lemma:mV/2} tells us that there must be the same number $N_{i}$ of chords of length $d_{i}$ as chords of length $d_{i}'$. Call the latter number $N_{i}'$.
Returning to $\sum_{i=1}^{k-1}N_{i}\cos\theta_{i}$, this shows that, if the angle $\angle\mathbf{P_{0}OP_{i}}$ of measure $\theta_{i}$ and its supplement (the one contained in the set $\left\{ \angle\mathbf{P_{0}OP_{i}}:\:i=1,2,...,k-1\right\} $) are distinct angles, we have $N_{i}\cos\theta_{i}+N_{i}'\cos\left(\pi-\theta_{i}\right)=N_{i}\cos\theta_{i}+N_{i}\cos\left(\pi-\theta_{i}\right)=N_{i}\left(\cos\theta_{i}-\cos\theta_{i}\right)=0$ and these two angles contribute nothing to $\sum_{i=1}^{k-1}N_{i}\cos\theta_{i}$. If the angle of measure $\theta_{i}$ and its supplement are actually the same angle, then $\theta_{i}=\frac{\pi}{2}$ and $N_{i}\cos\left(\frac{\pi}{2}\right)=0$. Thus, this angle also contributes nothing to $\sum_{i=1}^{k-1}N_{i}\cos\theta_{i}$. Therefore, we see that $\sum_{i=1}^{k-1}N_{i}\cos\theta_{i}=0$ and $\sum_{i=1}^{N}c_{i}^{2}=V^{2}.$ \emph{Case 2 ($\mathcal{P}$ is an odd-edged polygon).} Since $\mathcal{P}$ is a regular polygon, Fact 1 tells us $\sum_{i=1}^{N}c_{i}^{2}=E^{2}$. Since the number of vertices, $V$, of an $E$-edged polygon is equal to $E$ \cite{CoxeterGreitzer}, we have: \emph{$\sum_{i=1}^{N}c_{i}^{2}=V^{2}$.} \emph{Case 3 ($\mathcal{P}$ is an $n$-simplex).} A simplex has no diagonals (since all of its vertices are connected by edges; cf. \cite{Coxeter1973}); thus, all of the chords of $\mathcal{P}$ are edges. An edge is a 1-face, so the number of edges $E$ is obtained by letting $j=1$ in the cardinality formula for the $j$-faces of $n$-simplices given in Table \ref{tab:j-Face-Cardinality-and-Shape}: \[ E=\binom{n+1}{j+1}=\binom{n+1}{2}=\frac{\left(n+1\right)n}{2}. \] The edge length $e$ of an $n$-simplex is $2\left(2-\frac{2}{n+1}\right)^{-1/2}$ (see Table \ref{tab:Edge-Length}). We have: $$ \sum_{i=1}^{N}c_{i}^{2}=Ee^{2}=\frac{\left(n+1\right)n}{2}\left(\frac{4}{2-\frac{2}{n+1}}\right)=\left(n+1\right)^{2}.
$$ Since the vertices are 0-faces, the number of vertices $V$ of $\mathcal{P}$ can be seen to be $n+1$ from Table \ref{tab:j-Face-Cardinality-and-Shape}. Therefore, $\sum_{i=1}^{N}c_{i}^{2}=(n+1)^{2}=V^{2}$. \end{proof} \begin{cor} \label{DualCor:sum-of-all-chords}Let $\mathcal{Q}$ be any regular $n$-polytope circumscribed about a unit $n$-sphere (where $n \geq 2$). Then $\sum_{i=1}^{N}s_{i}^{2}=F^{2}$ where $s_{i}$ is the length of each $i$\textsuperscript{th} line segment whose endpoints are centers of facets of $\mathcal{Q}$, $N$ is the number of these line segments, and $F$ is the number of facets of $\mathcal{Q}$. \end{cor} \begin{proof} The centers of the facets of $\mathcal{Q}$ are also the vertices of a reciprocal regular $n$-polytope $\mathcal{P}$ of $\mathcal{Q}$ with respect to the unit $n$-sphere about which $\mathcal{Q}$ is circumscribed (cf. \cite{Coxeter1973}). Since the vertices of $\mathcal{P}$ are the centers of the facets of $\mathcal{Q}$, the chords of $\mathcal{P}$ are the line segments whose endpoints are the centers of facets of $\mathcal{Q}$. Hence, the lengths of these line segments ($s_{i}$ for $i=1,2,\ldots,N$) are also the lengths of the chords of $\mathcal{P}$. Thus, by Theorem \ref{Thm:sum-of-all-chords-Vsquared}, we have $\sum_{i=1}^{N}s_{i}^{2}=V^{2}$ where $V$ is the number of vertices of $\mathcal{P}$. Moreover, since every facet of $\mathcal{Q}$ corresponds to a vertex of $\mathcal{P}$ (and vice versa), the number of facets $F$ of $\mathcal{Q}$ is the same as the number of vertices $V$ of $\mathcal{P}$. Therefore, we have $\sum_{i=1}^{N}s_{i}^{2}=F^{2}$ for $\mathcal{Q}$.
\end{proof} \subsection{Discussion} Theorem \ref{Thm:sum-of-all-chords-Vsquared} shows that Fact 1 does, in fact, apply to \emph{all} regular $n$-polytopes where $n \geq 2$\textemdash \emph{if the \textquotedblleft number of edges\textquotedblright{} ($E$) mentioned in Fact 1 is reinterpreted as \textquotedblleft number of vertices\textquotedblright{} ($V$).} This reinterpretation is valid since, for 2-polytopes, $E=V$. \section{FACT 2: Sum of Squared Distinct Chords} The second 2-polytope fact stated that, for any regular 2-polytope inscribed in a unit 2-sphere, the sum of its squared distinct chord lengths equals (a) $E$ (when $E$ is odd) and (b) some integer (when $E$ is even). To test generalization of this fact to all regular $n$-polytopes ($n \geq 2$), we use the following lemmas. \subsection{Preliminaries} \begin{lem} \label{Lemma-=0000233}Let $\mathcal{P}$ be a regular 2-polytope with $E$ edges and $k$ chords of distinct lengths. If $E$ is odd, then $E=2k+1$. \end{lem} \begin{proof} Let $\mathbf{O}$ be the center of $\mathcal{P}$, let $\mathbf{P_{0}}$ be a vertex of $\mathcal{P}$, and let $\mathbf{P_{i}}$ for $i=1,2,...,k$ be $k$ other vertices of $\mathcal{P}$ such that $\mathbf{P}_{0}$ and $\mathbf{P_{i}}$ form the endpoints of $k$ chords each having a distinct length $d_{i}$. Since $E$ is odd, the line $l$ that passes through $\mathbf{O}$ and $\mathbf{P_{0}}$ is an axis of symmetry of $\mathcal{P}$ (cf. \cite{AgricolaFriendrich}). Furthermore, only one of the vertices of $\mathcal{P}$ lies on $l$ (i.e., $\mathbf{P_{0}};$ cf. \cite{AgricolaFriendrich}). If we reflect $\mathcal{P}$ across the line $l$, we see that, for each $\mathbf{P_{i}}$ where $i \neq 0$, there is another vertex, call it $\mathbf{P_{i}'}$, that is the same distance $d_{i}$ from $\mathbf{P_{0}}$ (because it is now in the same location as $\mathbf{P_{i}}$ had been before reflecting $\mathcal{P}$ and $\mathbf{P_{0}}$ has not moved). 
Since there are no other (non-identity) symmetries of $\mathcal{P}$ that hold $\mathbf{P_{0}}$ constant, this shows that these two vertices, \textbf{$\mathbf{P_{i}}$} and $\mathbf{P_{i}'}$, are the only two vertices that are a distance of $d_{i}$ away from $\mathbf{P_{0}}$ for each $i\in\left\{ 1,2,...,k\right\} $. Thus, $\mathcal{P}$ has $2k$ vertices besides $\mathbf{P_{0}}$ and the total number of vertices $V$ of $\mathcal{P}$ is $2k+1$. Since $E=V$ for 2-polytopes, we have $E=2k+1$. \end{proof} \begin{lem} \label{lem:odd-edged-non-simplex-dim1or2}Let $\mathcal{P}$ be a regular $n$-polytope, $n \geq 2$, that is \emph{not} a simplex of dimension $n\geq3$. If $\mathcal{P}$ has an odd number of edges $E$, then $n=2$. \end{lem} \begin{proof} From Table \ref{tab:j-Face-Cardinality-and-Shape}, we see that all the regular 3- and 4-polytopes have even $E$. Likewise, substituting $j=1$ into the $j$-face cardinality formulas for the $n$-crosspolytopes and $n$-cubes in Table \ref{tab:j-Face-Cardinality-and-Shape} yields $E=2^{2}\binom{n}{2}=2n(n-1)$ and $E=2^{n-1}\binom{n}{1}=2^{n-1}n$, respectively, both of which are even. Thus, every regular polytope of dimension $n\geq3$ that is not a simplex has an even number of edges. Since $\mathcal{P}$ has an odd number of edges and, by assumption, is not a simplex of dimension $n\geq3$, $\mathcal{P}$ cannot be a polytope of dimension $n\geq3$. Therefore, $n=2$. \end{proof} \subsection{\label{subsec:Generalizing-Fact-2}Generalizing Fact 2} \begin{thm} \label{Thm:sum-of-distinct-chords}Let $\mathcal{P}$ be a regular $n$-polytope inscribed in a unit $n$-sphere ($n \geq 2$). Let $\mathcal{P}$ have $V$ vertices, $E$ edges, and $k$ chords of distinct lengths, and let $d_{i}$ denote the $i$\textsuperscript{th} distinct chord length of $\mathcal{P}$. \begin{enumerate} \item If $\mathcal{P}$ is a simplex of dimension $n\geq3$, then $\sum_{i=1}^{k}d_{i}^{2}$ is a non-integral rational number. \item If $\mathcal{P}$ is \emph{not }a simplex of dimension $n\geq3$, then $\sum_{i=1}^{k}d_{i}^{2}$ is an integer.
Specifically: \begin{enumerate} \item If $E$ is odd, this integer is $2k+1$. \item If $E$ is even, this integer is $2k+2$. \end{enumerate} \end{enumerate} \end{thm} \begin{proof} We prove Part 1, Part 2a, and then Part 2b. \emph{Part 1.} Let $\mathcal{P}$ be a regular $n$-simplex inscribed in a unit $n$-sphere where $n\geq3$. Recall that a simplex has only one type of chord (i.e., its edges), all of which have length $e=2\left(2-\frac{2}{n+1}\right)^{-1/2}$. Thus, we have \[ \sum_{i=1}^{k}d_{i}^{2}=e^{2}=\frac{4}{2-\frac{2}{n+1}}=\frac{2\left(n+1\right)}{n}. \] Observe that both the numerator and denominator of $\frac{2\left(n+1\right)}{n}$ are integers (and that $n\neq0$). Thus, $\frac{2\left(n+1\right)}{n}$ is a rational number. Moreover, since $n\geq3$, $n$ does not divide 2. On the other hand, since $n$ and $n+1$ are consecutive integers, they are relatively prime. Hence, since $n\neq1$, $n$ does not divide $n+1$. We conclude that $\frac{2\left(n+1\right)}{n}$ is not an integer but is a rational number. \emph{Part 2a.} Let $\mathcal{P}$ be an \emph{odd}-edged regular $n$-polytope (inscribed in a unit $n$-sphere) of dimension $n \geq 2$ that is \emph{not} a simplex of dimension $n\geq3$. Then, by Lemma \ref{lem:odd-edged-non-simplex-dim1or2}, $\mathcal{P}$ is 2-dimensional. Hence, by Fact 2, we have $\sum_{i=1}^{k}d_{i}^{2}=E$. Since $E$ is odd, $\sum_{i=1}^{k}d_{i}^{2}=E=2k+1$ by Lemma \ref{Lemma-=0000233}. \emph{Part 2b.} Let $\mathcal{P}$ be an \emph{even}-edged regular $n$-polytope (inscribed in a unit $n$-sphere) of dimension $n \geq 2$ that is \emph{not} a simplex of dimension $n\geq3$. Thus, $\mathcal{P}$ is neither an odd-edged polygon nor a simplex of dimension $n \geq 2$. Hence, $\mathcal{P}$ is centrally symmetric. 
Choose a vertex \textbf{$\mathbf{P}_{\mathbf{0}}$} of $\mathcal{P}$ and let \textbf{$\mathbf{P_{i}}$} for $i=1,2,...,k$ be $k$ other vertices of the polytope such that $\mathbf{P}_{0}$ and $\mathbf{P_{i}}$ form the endpoints of $k$ different chords each having a distinct length $d_{i}$ and $d_{1}<d_{2}<\cdots<d_{k}$ (just as in Case 1 of the proof of Theorem \ref{Thm:sum-of-all-chords-Vsquared}). From the proof of Theorem \ref{Thm:sum-of-all-chords-Vsquared}, we know that $d_{i}^{2}=2-2\cos\theta_{i}$ for $i=1,2,...,k-1$ (where $\theta_{i}$ is the measure of the angle $\angle\mathbf{P_{0}OP_{i}}$) and $d_{k}^{2}=2^{2}.$ Therefore, we have \[ \sum_{i=1}^{k}d_{i}^{2}=\sum_{i=1}^{k-1}\left(2-2\cos\theta_{i}\right)+d_{k}^{2}=2\left(k-1\right)-2\sum_{i=1}^{k-1}\cos\theta_{i}+4. \] Next, we show that $\sum_{i=1}^{k-1}\cos\theta_{i}=0$. Recall from the proof of Theorem \ref{Thm:sum-of-all-chords-Vsquared} that, for each angle of measure $\theta_{i}$ contained in the set $\left\{ \angle\mathbf{P_{0}OP_{i}}:\:i=1,2,...,k-1\right\} $, there is an angle (not necessarily distinct from the angle of measure $\theta_{i}$) that is supplementary to the angle of measure $\theta_{i}$ and is also contained in the set. Thus, if the angle of measure $\theta_{i}$ and its supplement are distinct angles, it follows that $\cos\theta_{i}+\cos\left(\pi-\theta_{i}\right)=\cos\theta_{i}-\cos\theta_{i}=0$ and these two angles contribute nothing to $\sum_{i=1}^{k-1}\cos\theta_{i}$. If the angle of measure $\theta_{i}$ and its supplement are actually the same angle, then $\theta_{i}=\frac{\pi}{2}$ and $\cos\left(\frac{\pi}{2}\right)=0$. Therefore, we see that $\sum_{i=1}^{k-1}\cos\theta_{i}=0$ and $\sum_{i=1}^{k}d_{i}^{2}=2k+2.$ \end{proof} \begin{cor} \label{Cor:dual-to-sum-of-distinct}Let $\mathcal{Q}$ be a regular $n$-polytope circumscribed about a unit $n$-sphere ($n \geq 2$) having $E$ edges and $F$ facets.
Let $t_{i}$ for $i=1,2,...,k$ be the distinct lengths of the line segments whose endpoints are centers of facets. \begin{enumerate} \item If $\mathcal{Q}$ is a simplex of dimension $n\geq3$, then $\sum_{i=1}^{k}t_{i}^{2}$ is a non-integral rational number. \item If $\mathcal{Q}$ is \emph{not }a simplex of dimension $n\geq3$, then $\sum_{i=1}^{k}t_{i}^{2}$ is an integer. Specifically: \begin{enumerate} \item If $E$ is odd, this integer is $2k+1$. \item If $E$ is even, this integer is $2k+2$. \end{enumerate} \end{enumerate} \end{cor} \begin{proof} This corollary follows by much the same reasoning as Corollary \ref{DualCor:sum-of-all-chords}. Note that the reciprocal of a regular simplex of dimension $n\geq3$ is a regular simplex of dimension $n\geq3$, the reciprocal of an odd-edged regular polygon is an odd-edged regular polygon, and the reciprocal of a centrally-symmetric regular polytope is a centrally-symmetric regular polytope (cf. \cite{Coxeter1973}). \end{proof} \subsubsection{Special Cases (the regular 3-polytopes)} \begin{thm} \label{Cor:special-case-sum-of-distinct}Let $\mathcal{P}$ be a regular $3$-polytope inscribed in a unit 3-sphere with $V$ vertices and $k$ distinct chord lengths. Let $d_{i}$ be the $i$\textsuperscript{th} distinct chord length of $\mathcal{P}$. \begin{enumerate} \item For the self-dual tetrahedron, $\sum_{i=1}^{k}d_{i}^{2}\in\mathbb{Q}$. \item For the dual pair of octahedron and cube, $\sum_{i=1}^{k}d_{i}^{2}=V$. \item For the dual pair of icosahedron and dodecahedron, $\sum_{i=1}^{k}d_{i}^{2}=2k+2$. \end{enumerate} \end{thm} \begin{proof}Parts 1 and 3 follow directly from Parts 1 and 2b of Theorem \ref{Thm:sum-of-distinct-chords}, respectively (since the icosahedron and dodecahedron have an even number of edges). For Part 2, recall that the regular octahedron and cube have an even number of edges. Hence, for each of them, Part 2b of Theorem \ref{Thm:sum-of-distinct-chords} guarantees that $\sum_{i=1}^{k}d_{i}^{2}=2k+2$.
We determine $k$ for the octahedron and cube. \emph{Octahedron.} The octahedron is just the regular 3-crosspolytope, so consider a regular $n$-crosspolytope of edge length $e$ inscribed in a unit $n$-sphere (where $n\geq2$). It can be constructed by (a) creating a Cartesian cross ($n$ mutually orthogonal lines through a point $\mathbf{O}$; see \cite{Coxeter1973}), (b) finding the points on each of these lines that are all a distance $e\sqrt{0.5}$ away from the point $\mathbf{O}$, and (c) connecting all pairs of these points that do not lie on the same line of the Cartesian cross by line segments (cf. the construction in \cite{Sequin}). We now have a regular $n$-crosspolytope whose vertices are the $2n$ points found in part ``b'' and whose edges are all the line segments constructed in part ``c.'' (These edges do have length $e$, as can be checked by applying the Pythagorean Theorem to a triangle whose vertices consist of any one of the pairs of points mentioned in part ``c'' and the point $\mathbf{O}$). Now we determine $k$. From part ``c'' of our construction, it is readily apparent that any vertex $\mathbf{P}$ of the crosspolytope is connected to all other vertices by edges except for the vertex $\mathbf{P}'$ lying on the same ``Cartesian cross'' line as $\mathbf{P}$. Thus, the chords of the crosspolytope come in two types: (a) edges and (b) inner diagonals $\mathbf{PP'}$ that are portions of the ``Cartesian cross'' lines. Thus, $k=2$ and $\sum_{i=1}^{k}d_{i}^{2}=2k+2=6$, which is the number of vertices. \emph{Cube.} The cube is the 3-cube, so consider an $n$-cube. An $n$-cube\textquoteright s $j$-faces are $j$-cubes (see Table \ref{tab:j-Face-Cardinality-and-Shape}) and its chords thus come in $n$ types: (a) edges and (b) $n-1$ types of diagonals (each of which is the \textquotedblleft longest diagonal\textquotedblright{} of a $j$-face, where $2\leq j\leq n$, of the $n$-cube). Thus, $k=n$ for a regular $n$-cube and $k=3$ for the 3-cube. 
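As a direct check of this count (not needed for the argument), note that the longest diagonal of a $j$-cube of edge length $e$ has length $e\sqrt{j}$, and $e=\frac{2}{\sqrt{n}}$ for the unit-sphere $n$-cube (Table \ref{tab:Edge-Length}), so that
\[
\sum_{i=1}^{k}d_{i}^{2}=\sum_{j=1}^{n}\left(e\sqrt{j}\right)^{2}=\frac{4}{n}\sum_{j=1}^{n}j=\frac{4}{n}\cdot\frac{n\left(n+1\right)}{2}=2\left(n+1\right)=2k+2,
\]
since $k=n$.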
Therefore, $\sum_{i=1}^{k}d_{i}^{2}=2k+2=8$, which is the number of vertices. \end{proof} \begin{cor} \label{DualCor-to-special-case-of-sum-of-distinct}Let $\mathcal{Q}$ be a regular 3-polytope circumscribed about a unit 3-sphere with $F$ facets. Let $t_{i}$, for $i=1,2,...,k$, be the distinct lengths of the line segments whose endpoints are centers of facets of $\mathcal{Q}$. Then \begin{enumerate} \item For the self-dual tetrahedron, $\sum_{i=1}^{k}t_{i}^{2}\in\mathbb{Q}$. \item For the dual pair of octahedron and cube, $\sum_{i=1}^{k}t_{i}^{2}=F$. \item For the dual pair of icosahedron and dodecahedron, $\sum_{i=1}^{k}t_{i}^{2}=2k+2$. \end{enumerate} \end{cor} \subsection{Discussion} Theorem \ref{Thm:sum-of-distinct-chords} (combined with Lemma \ref{lem:odd-edged-non-simplex-dim1or2}) shows that Fact 2 does, in fact, apply to \emph{all} regular $n$-polytopes (where $n \geq 2$) \emph{except} most $n$-simplices. Moreover, this theorem shows that the ``$E$'' in Fact 2 can be re-characterized as $2k+1$ (where $k$ is the number of distinct chord lengths) and specifies the ``some integer'' in Fact 2 as $2k+2$. Thus, these two integers are remarkably similar. \section{FACT 3: Product of Squared Chords} The third 2-polytope fact stated that, for any regular 2-polytope inscribed in a unit 2-sphere, the product of its squared chord lengths equals $E^{E}.$ We test generalization of this fact to (a) the regular $n$-crosspolytopes (where $n \geq 2$), (b) the regular 24-cell, and (c) the other regular polytopes. \subsection{Preliminaries} No additional information is needed to test generalization of Fact 3 to the $n$-crosspolytopes. However, in testing generalization to the 24-cell, we will make use of the following definition, and, in testing generalization to the other regular polytopes, we will make use of the cardinalities in Table \ref{tab:number-of-vertices-incident-to-edges-vs}.
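As with Fact 1, the 2-polytope version of Fact 3 is easy to confirm numerically before attempting any generalization; a Python sketch (an aside, not used in the proofs below):

```python
import itertools
import math

def product_squared_chords_polygon(E):
    """Product of squared chord lengths of a regular E-gon inscribed in the unit circle."""
    verts = [(math.cos(2 * math.pi * i / E), math.sin(2 * math.pi * i / E))
             for i in range(E)]
    prod = 1.0
    # Multiply the squared length of every chord (unordered vertex pair).
    for a, b in itertools.combinations(verts, 2):
        prod *= (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return prod

# Fact 3: the product equals E^E for every regular E-gon.
for E in range(3, 10):
    assert abs(product_squared_chords_polygon(E) / E ** E - 1) < 1e-9
```

For instance, the square ($E=4$) has four sides of squared length $2$ and two diagonals of squared length $4$, giving $2^{4}\cdot4^{2}=256=4^{4}$.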
\begin{defn} A \emph{section} of a regular $n$-polytope $\mathcal{P}$ is a nonempty intersection of $\mathcal{P}$ and an $(n-1)$-plane $\mathcal{H}$ (cf. \cite{Coxeter1973}). This intersection is a $j$-polytope for some $j \in \left \{0, 1, ..., n-1 \right \}$ (cf. \cite{Coxeter1973}). \end{defn} \noindent Of particular interest to us are the sections known as \emph{simplified sections}. These are the sections (a) that are formed when the $(n-1)$-plane $\mathcal{H}$ is orthogonal to a line $l$ that passes through the $n$-polytope's center $\mathbf{O}$ and a given, fixed vertex $\mathbf{P_{0}}$ and (b) that form $j$-polytopes whose vertices are all vertices of the original $n$-polytope $\mathcal{P}$. For these sections, which we will denote by $\mathcal{S}_{i}$ for $i=0,1,2,...,k$, the distances between the fixed vertex $\mathbf{P_{0}}$ and each one of the vertices in a given section ($\mathcal{S}_{i}$ for some particular $i$) are always the same length (for a regular polytope; cf. \cite{Coxeter1973}). Hence, if we let $\mathbf{P_{i}}$ be one of the vertices in the section $\mathcal{S}_{i}$ of a given regular polytope, the lengths of the chords $\mathbf{P_{0}P_{i}}$ and $\mathbf{P_{0}P_{j}}$ are distinct if $i\neq j$. 
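To illustrate how simplified sections organize the chords by length, the distinct chord lengths of the 24-cell can also be computed directly from explicit coordinates; the Python sketch below uses the standard vertex coordinates of the 24-cell, all permutations of $(\pm1,\pm1,0,0)$, rescaled to the unit 4-sphere (these coordinates are a standard fact, cf. \cite{Coxeter1973}, and are not derived in this paper):

```python
import itertools
import math
from collections import Counter

# 24-cell vertices: all permutations of (+-1, +-1, 0, 0), rescaled by
# 1/sqrt(2) so that every vertex lies on the unit 4-sphere.
verts = set()
for pos in itertools.combinations(range(4), 2):
    for s0, s1 in itertools.product((1.0, -1.0), repeat=2):
        v = [0.0, 0.0, 0.0, 0.0]
        v[pos[0]], v[pos[1]] = s0 / math.sqrt(2), s1 / math.sqrt(2)
        verts.add(tuple(v))
verts = sorted(verts)
assert len(verts) == 24

# Squared distances from one fixed vertex P0 to every other vertex;
# the distinct values are the squared distinct chord lengths d_i^2.
p0 = verts[0]
sq = [round(sum((a - b) ** 2 for a, b in zip(p0, v)), 9)
      for v in verts[1:]]
cnt = Counter(sq)

assert sorted(cnt) == [1.0, 2.0, 3.0, 4.0]            # k = 4 distinct lengths
assert [cnt[s] for s in sorted(cnt)] == [8, 6, 8, 1]  # m_i per fixed vertex
```

Via Lemma \ref{Lemma:mV/2}, the multiplicities $m_{i}=8,6,8,1$ together with $V=24$ give $N_{i}=96,72,96,12$, totaling $276=\binom{24}{2}$ chords, in agreement with Lemma \ref{lem:N=00003DV(V-1)/2}.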
\begin{table} \caption{\label{tab:number-of-vertices-incident-to-edges-vs}Number of ``Vertices Incident to Any Edge'' (\textbf{$\nu$}) and Number of ``Edges Incident to Any Vertex'' (\textbf{$\varepsilon$}) for the Regular 3-Polytopes \cite{Coxeter1973}.} {} \centering{} \begin{tabular}{ccc} \toprule \textbf{3-Polytope} & \multicolumn{1}{c}{\textbf{$\nu$}} & \multicolumn{1}{c}{\textbf{$\varepsilon$}}\tabularnewline \midrule \midrule tetrahedron & 2 & 3\tabularnewline \midrule octahedron & 2 & 4\tabularnewline \midrule cube & 2 & 3\tabularnewline \midrule icosahedron & 2 & 5\tabularnewline \midrule dodecahedron & 2 & 3\tabularnewline \bottomrule \end{tabular} \end{table} \subsection{\label{subsec:Generalizing-Fact-3}Generalizing Fact 3 to Crosspolytopes} \begin{thm} \label{Thm:product-of-all-chords}Let $\mathcal{P}$ be a regular $n$-crosspolytope with $V$ vertices and $F$ facets inscribed in a unit $n$-sphere (where $n \geq 2$). Then \[ \prod_{i=1}^{N}c_{i}^{2}=F^{V} \] where $N$ is the total number of chords of $\mathcal{P}$ and $c_{i}$ is the length of each $i$\textsuperscript{th} chord. \end{thm} \begin{proof} Recall from the proof of Theorem \ref{Cor:special-case-sum-of-distinct} that the chords of an $n$-crosspolytope ($n\geq2$) come in two types: (a) edges and (b) inner diagonals. Let the lengths of the latter be denoted by $d$. An edge is a 1-face, so the number of edges $E$, given by Table \ref{tab:j-Face-Cardinality-and-Shape}, is \[ E=2^{j+1}\binom{n}{j+1}=2^{2}\binom{n}{2}=2n(n-1). \] The number of inner diagonals, $N_{d}$, is the total number of chords (i.e., $N$) minus the number of edges. By Lemma \ref{lem:N=00003DV(V-1)/2}, we have: \[ N_{d}=\frac{V\left(V-1\right)}{2}-E. \] Recall from the proof of Theorem \ref{Cor:special-case-sum-of-distinct} that an $n$-crosspolytope has $2n$ vertices ($n \geq 2)$. Thus, substituting the values for $V$ and $E$ yields \[ N_{d}=\frac{(2n)(2n-1)}{2}-2n(n-1)=n. 
\] Now that we know the number of chords of length $e$ and of length $d$, we want to calculate these lengths. From Table \ref{tab:Edge-Length}, we see $e=2^{1/2}$. From the proof of Theorem \ref{Cor:special-case-sum-of-distinct}, we see that $d$ is twice the distance from the polytope's center $\mathbf{O}$ to any one of its vertices and that this distance is $e\sqrt{0.5}$. Thus, $d=2e\sqrt{0.5}=2$. Therefore, we have: \[ \prod_{i=1}^{N}c_{i}^{2}=(e^{2})^{E}(d^{2})^{N_{d}}=2^{2n(n-1)}(2)^{2n}=\left(2^{n}\right)^{2n}=(2^{n})^{V}. \] From Table \ref{tab:j-Face-Cardinality-and-Shape}, the number of facets is $F=2^{n}\binom{n}{n}=2^{n}$. We conclude that $\prod_{i=1}^{N}c_{i}^{2}=(2^{n})^{V}=F^{V}$. \end{proof} \begin{cor} \label{DualCor:product-of-all}Let $\mathcal{Q}$ be an $n$-cube with $V$ vertices and $F$ facets that circumscribes a unit $n$-sphere ($n\geq2$). Then $\prod_{i=1}^{N}s_{i}^{2}=V^{F}$ where $s_{i}$ is the length of the $i$\textsuperscript{th} line segment whose endpoints are centers of facets of $\mathcal{Q}$ and $N$ is the total number of these line segments. \end{cor} \begin{proof} The reciprocal of a regular $n$-crosspolytope is an $n$-cube \cite{Coxeter1973}. \end{proof} \subsection{Generalizing Fact 3 to the 24-cell} \begin{thm} \label{Thm:24-cell-product-of-all-chords}Let $\mathcal{P}$ be a regular 24-cell with $E$ edges and $R$ ridges inscribed in a unit 4-sphere. Then \[ \prod_{i=1}^{N}c_{i}^{2}=6^{E}=6^{R} \] where $c_{i}$ is the length of the $i$\textsuperscript{th} chord of $\mathcal{P}$ and $N$ is the total number of chords. \end{thm} \begin{proof} According to Table \ref{tab:Edge-Length}, the edge length of $\mathcal{P}$ is 1. To find its other distinct chord lengths, we consider the distance $d_{i}'$ from a given, fixed vertex $\mathbf{P_{0}}$ of an arbitrary 24-cell to some vertex $\mathbf{P_{i}}$ in its simplified section $\mathcal{S}_{i}$ (where $i=1,2,...,k$).
(Note that\textemdash since these $k$ distances are distinct\textemdash they will actually be the polytope's distinct chord lengths). In Table V(i) in \cite{Coxeter1973}, there is a list of the values of $a_{i}$, which is the distance $d_{i}'$ divided by the edge length $e$ of an arbitrary 24-cell (so $d_{i}'=a_{i}e$).\footnote{Instead of ``$a_{i}$,'' Coxeter \cite{Coxeter1973} actually uses ``$a$.''} Thus, to obtain the distinct chord lengths $d_{i}$ for $\mathcal{P}$, we multiply each of the values of $a_{i}$ by the edge length of $\mathcal{P}$ (i.e., by 1). These values are listed in the second column of Table \ref{tab:24-cell-simplified-sects-1}. (For convenience' sake, the values of $d_{i}^{2}$ are listed in the third column of Table \ref{tab:24-cell-simplified-sects-1}). To find the total number of chords of each distinct length, we will use the total number of vertices $V_i$ in each simplified section $\mathcal{S}_{i}$ as listed in Table V(i) of \cite{Coxeter1973}. Obviously, the number of chords $m_{i}$ of a given length $d_{i}$ emanating from $\mathbf{P_{0}}$ is the same as $V_i$ (see the fourth column of Table \ref{tab:24-cell-simplified-sects-1}). Thus, the total number of chords of length $d_{i}$ is $\frac{m_{i}V}{2}$ by Lemma \ref{Lemma:mV/2}. The total number of vertices $V$ of a 24-cell is $24$ (see Table \ref{tab:j-Face-Cardinality-and-Shape}). Substituting the values of $m_{i}$ and $V$ into $\frac{m_{i}V}{2}$ yields the total number of chords of length $d_{i}$ for $i=1,2,...,k$ (as listed in the last column of Table \ref{tab:24-cell-simplified-sects-1}). 
\begin{longtable}{ccccc} \caption{\label{tab:24-cell-simplified-sects-1}Length and Number of Distinct Chords in Regular 24-Cell Inscribed in Unit 4-Sphere (second and fourth columns due to \cite{Coxeter1973}).} \tabularnewline \toprule \textbf{ $\mathbf{\mathcal{S}}_{\mathbf{i}}$} & $a_{i}$ ($=d_{i}'=d_{i}$) & $d_{i}^{2}$ & \# of vertices ($m_{i}$) & \# of chords of length $d_{i}$\tabularnewline \midrule \endfirsthead \textbf{$\mathbf{\mathcal{S}_{1}}$} & $1$ & $1$ & 8 & 96\tabularnewline \midrule \textbf{$\mathbf{\mathcal{S}_{2}}$} & $2^{1/2}$ & $2$ & 6 & 72\tabularnewline \midrule \textbf{$\mathbf{\mathcal{S}_{3}}$} & $3^{1/2}$ & $3$ & 8 & 96\tabularnewline \midrule \textbf{$\mathbf{\mathcal{S}_{4}}$} & $2$ & $4$ & 1 & 12\tabularnewline \bottomrule \end{longtable} \noindent Therefore, we have $\prod_{i=1}^{N}c_{i}^{2}=\left(1\right)^{96}\left(2\right)^{72}\left(3\right)^{96}\left(4\right)^{12}=6^{96}$. Table \ref{tab:j-Face-Cardinality-and-Shape} reveals that, for the 24-cell, the number of edges $E$ and the number of ridges $R$ are both equal to 96. We conclude: $\prod_{i=1}^{N}c_{i}^{2}=6^{E}=6^{R}$. \end{proof} \subsection{Generalizing Fact 3 to Other Regular Polytopes} \begin{thm} \label{Thm:3D-product-of-all}Let $\mathcal{P}$ be a regular 3-polytope with $E$ edges and $V$ vertices inscribed in a unit 3-sphere.
Then \[ \prod_{i=1}^{N}c_{i}^{2}=\frac{\nu^{a}}{\varepsilon^{b}} \] where $c_{i}$ is the length of each $i$\textsuperscript{th} chord of $\mathcal{P}$, $\nu$ is the number of vertices incident to any edge, $\varepsilon$ is the number of edges incident to any vertex, and $a,\,b$ are positive integers such that $E$ divides $b$ and \begin{enumerate} \item for the self-dual tetrahedron, \[ a\equiv0\:(\textrm{mod }E)\textrm{ and }a\equiv E\:(\textrm{mod }V) \] \item for the dual pair of octahedron and cube, \[ a\equiv V\:(\textrm{mod }E)\textrm{ and }a\equiv E\:(\textrm{mod }V)\textrm{, and} \] \item for the dual pair of icosahedron and dodecahedron, \[ a\equiv V\:(\textrm{mod }E)\textrm{ and }a\equiv0\:(\textrm{mod }V). \] \end{enumerate} In addition, the number $\varepsilon$ in the equation above may be replaced by \[ \frac{E}{\left(E,V\right)}=\frac{\left[E,V\right]}{V} \] where $\left(E,V\right)$ is the greatest common divisor of $E$ and $V$ and $\left[E,V\right]$ is the least common multiple of $E$ and $V$. \end{thm} \begin{proof}We consider each of these five cases in turn: (a) the tetrahedron, (b) the octahedron, (c) the cube, (d) the icosahedron, and (e) the dodecahedron. \emph{Tetrahedron.} Recall from the proof of Theorem \ref{Thm:sum-of-all-chords-Vsquared} that a regular $n$-simplex inscribed in a unit $n$-sphere has $\frac{\left(n+1\right)n}{2}$ edges of length $2\left(2-\frac{2}{n+1}\right)^{-1/2}$ and no diagonals. Substituting $n=3$ into these expressions shows that, for a regular 3-simplex (i.e., the tetrahedron) inscribed in a unit 3-sphere, $E=6$ and the edge length $e$ is $\frac{2\sqrt{6}}{3}$. 
Therefore, we have \begin{equation} \prod_{i=1}^{N}c_{i}^{2}=\left(e^{2}\right)^{E}=\left(\left(\frac{2\sqrt{6}}{3}\right)^{2}\right)^{6}=\frac{2^{18}}{3^{6}}.\label{eq:3d-product-of-all-tetra} \end{equation} Observe that we have $2$ as the numerator\textquoteright s base and $3$ as the denominator\textquoteright s base (which are $\nu$ and $\varepsilon$ for the tetrahedron, respectively), as desired. Next, we consider the exponents. Recall that, for a tetrahedron, $E=6$ and $V=4$. For the denominator\textquoteright s exponent, observe that $E$ divides 6. For the numerator\textquoteright s exponent, observe that (a) $18\equiv0\:(\mathrm{mod}\,E)$ and (b) $18\equiv2\equiv6\equiv E\:(\mathrm{mod\,}V)$ (as desired). Finally, the identity $EV=(E,V)\cdot[E,V]$ implies $\frac{E}{\left(E,V\right)}=\frac{\left[E,V\right]}{V}$. Substituting into the left-hand side of this equation yields $\frac{6}{\left(6,4\right)}=3$, which is the denominator's base (as desired). \emph{Octahedron.} Recall from the proof of Theorem \ref{Thm:product-of-all-chords} that the chords of a regular $n$-crosspolytope inscribed in a unit $n$-sphere consist of $2n(n-1)$ edges of length $2^{1/2}$ and $n$ inner diagonals of length $2$. Substituting $n=3$ into these expressions shows that a regular 3-crosspolytope (i.e., the octahedron) inscribed in a unit 3-sphere has 12 edges of length $2^{1/2}$ (which we denote by $e$) and three inner diagonals of length $2$ (which we denote by $d$). Therefore: \begin{equation} \prod_{i=1}^{N}c_{i}^{2}=\left(e^{2}\right)^{12}\left(d^{2}\right)^{3}=\left(2\right)^{12}\left(2^{2}\right)^{3}=2^{18}=\frac{2^{18+24q}}{2^{24q}}=\frac{2^{18+24q}}{4^{12q}}\label{eq:3d-product-of-all-octa} \end{equation} where $q$ is some integer. Observe that we have $2$ as the numerator\textquoteright s base and $4$ as the denominator\textquoteright s base (which are $\nu$ and $\varepsilon$ for the octahedron, respectively), as desired. Next, we consider the exponents.
Recall that $E=12$ and $V=6$ for the octahedron. For the denominator\textquoteright s exponent, observe that $E$ divides $12q$ (as desired). For the numerator\textquoteright s exponent, observe that (a) $18+24q\equiv0\equiv12\equiv E\:(\mathrm{mod\,}V)$ and (b) $18+24q\equiv6\equiv V\:(\mathrm{mod\,}E)$ (as desired). Finally, $\frac{12}{\left(12,6\right)}=2$, which is the denominator's base (as desired). \emph{Cube.} First, we determine the distinct chord lengths of a cube inscribed in a unit 3-sphere. Since a cube has eight vertices, we know there must be seven chords (not necessarily distinct) emanating from a given vertex $\mathbf{P}$. From Table \ref{tab:number-of-vertices-incident-to-edges-vs}, we see that there are three edges emanating from $\mathbf{P}$ (three of the seven chords). Table \ref{tab:Edge-Length} reveals that the cube's edges have length $3^{-1/2}2$. For the next distinct chord length, observe that three of the cube's square faces meet at $\mathbf{P}$ so that each pair of faces shares an edge emanating from $\mathbf{P}$ (cf. \cite{Coxeter1973}). Since a square has four vertices, each of these three faces has one vertex that is \emph{not} joined to $\mathbf{P}$ by an edge. Thus, each face has one diagonal emanating from $\mathbf{P}$ and each of these three diagonals forms an (outer) diagonal of the cube. Using the Pythagorean Theorem on the right triangle formed by one of these diagonals and two adjacent edges shows that these outer diagonals have length $\sqrt{\left(3^{-1/2}2\right)^{2}+\left(3^{-1/2}2\right)^{2}}=\frac{2\sqrt{6}}{3}$. We have now determined the length of six of the seven chords emanating from $\mathbf{P}$. For the seventh, recall that a cube is centrally-symmetric and, hence, $\mathbf{P}$ has a vertex $\mathbf{P'}$ diametrically opposite to it. Hence, the chord $\mathbf{PP'}$ is a diameter. Thus, it has length 2 (and is the seventh chord). Next, we determine the total number of chords of these distinct lengths.
By Lemma \ref{Lemma:mV/2}, viz., $N_{i}=\frac{m_{i}V}{2}$, the cube has $\frac{3\left(8\right)}{2}=12$ chords of length $3^{-1/2}2$, $\frac{3\left(8\right)}{2}=12$ chords of length $\frac{2\sqrt{6}}{3}$, and $\frac{1\left(8\right)}{2}=4$ chords of length 2. Therefore, we have \begin{equation} \prod_{i=1}^{N}c_{i}^{2}=\left(\left(3^{-1/2}2\right)^{2}\right)^{12}\left(\left(\frac{2\sqrt{6}}{3}\right)^{2}\right)^{12}\left(\left(2\right)^{2}\right)^{4}=\frac{2^{68}}{3^{24}}.\label{eq:product-all-chords-cube} \end{equation} Observe that 2 is the numerator\textquoteright s base and 3 is the denominator\textquoteright s base ($\nu$ and $\varepsilon$ for the cube, respectively), as desired. Next, we consider the exponents. Recall that $E=12$ and $V=8$ for the cube. For the denominator\textquoteright s exponent, observe that $E$ divides 24 (as desired). For the numerator\textquoteright s exponent, observe that (a) $68\equiv12\equiv E\:(\mathrm{mod}\,V)$ and (b) $68\equiv8\equiv V\:(\mathrm{mod}\,E)$ (as desired). Finally, $\frac{12}{\left(12,8\right)}=3$, which is the denominator's base. \emph{Icosahedron.} The edge length of a regular icosahedron inscribed in a unit 3-sphere is $5^{-1/4}\tau^{-1/2}2$ (see Table \ref{tab:Edge-Length}). The coordinates for a regular icosahedron of edge length 2 are given in \cite{Coxeter1973}. To find the distinct chord lengths of a regular icosahedron of edge length $5^{-1/4}\tau^{-1/2}2$, we: \begin{enumerate} \item Select the coordinates of an arbitrary vertex (of a regular icosahedron of edge length 2). \item Compute the distance between this vertex and the other vertices (of the regular icosahedron of edge length 2). \item Divide each one of the $k$ distinct distances found in step 2 by the edge length $2$. \item Multiply each result in step 3 by the new edge length $5^{-1/4}\tau^{-1/2}2$. \end{enumerate} Comparing with \cite[p. 238, para. 3]{Coxeter1973} shows the validity of steps 3 and 4.
We choose the vertex $\left\langle 0, \tau, 1 \right\rangle$ and compute the distance between it and each of the other vertices. To do this, we make repeated use of three identities, namely, $\tau^{2}=\tau+1$, $\tau^{-1}=\tau-1$, and $\tau^{-2}=-\tau+2$ (given in \cite{Dunlap}):
\begin{align*}
\left\Vert \left\langle 0, \tau, 1 \right\rangle -\left\langle 0, \tau, -1 \right\rangle \right\Vert &= \left\Vert \left\langle 0, 0, 2 \right\rangle \right\Vert =2\\
\left\Vert \left\langle 0, \tau, 1 \right\rangle -\left\langle 0, -\tau, 1 \right\rangle \right\Vert &= \left\Vert \left\langle 0, 2\tau, 0 \right\rangle \right\Vert =2\tau\\
\left\Vert \left\langle 0, \tau, 1 \right\rangle -\left\langle 0, -\tau, -1 \right\rangle \right\Vert &= \left\Vert \left\langle 0, 2\tau, 2 \right\rangle \right\Vert =2\sqrt{\tau+2}\\
\left\Vert \left\langle 0, \tau, 1 \right\rangle -\left\langle 1, 0, \tau \right\rangle \right\Vert &= \left\Vert \left\langle -1, \tau, 1-\tau \right\rangle \right\Vert =2\\
\left\Vert \left\langle 0, \tau, 1 \right\rangle -\left\langle 1, 0, -\tau \right\rangle \right\Vert &= \left\Vert \left\langle -1, \tau, 1+\tau \right\rangle \right\Vert =2\tau\\
\left\Vert \left\langle 0, \tau, 1 \right\rangle -\left\langle -1, 0, \tau \right\rangle \right\Vert &= \left\Vert \left\langle 1, \tau, 1-\tau \right\rangle \right\Vert =2\\
\left\Vert \left\langle 0, \tau, 1 \right\rangle -\left\langle -1, 0, -\tau \right\rangle \right\Vert &= \left\Vert \left\langle 1, \tau, 1+\tau \right\rangle \right\Vert =2\tau\\
\left\Vert \left\langle 0, \tau, 1 \right\rangle -\left\langle \tau, 1, 0 \right\rangle \right\Vert &= \left\Vert \left\langle -\tau, \tau-1, 1 \right\rangle \right\Vert =2\\
\left\Vert \left\langle 0, \tau, 1 \right\rangle -\left\langle \tau, -1, 0 \right\rangle \right\Vert &= \left\Vert \left\langle -\tau, \tau+1, 1 \right\rangle \right\Vert =2\tau\\
\left\Vert \left\langle 0, \tau, 1 \right\rangle -\left\langle -\tau, 1, 0 \right\rangle \right\Vert &= \left\Vert \left\langle \tau, \tau-1, 1 \right\rangle \right\Vert =2\\
\left\Vert \left\langle 0, \tau, 1 \right\rangle -\left\langle -\tau, -1, 0 \right\rangle \right\Vert &= \left\Vert \left\langle \tau, \tau+1, 1 \right\rangle \right\Vert =2\tau
\end{align*}
Thus, there are 3 distinct chord lengths: 2, $2\tau$, and $2\sqrt{\tau+2}$. Dividing these by 2 yields 1, $\tau$, and $\sqrt{\tau+2}$, and multiplying them by $5^{-1/4}\tau^{-1/2}2$ yields $5^{-1/4}\tau^{-1/2}2$, $5^{-1/4}\tau^{1/2}2$, and $5^{-1/4}\tau^{-1/2}2\sqrt{\tau+2}$. Since $\tau+2=\sqrt{5}\,\tau$, we have $\sqrt{\tau+2}=5^{1/4}\tau^{1/2}$, and so the last of these equals $2$. From our computations, we see that, emanating from the vertex $\left\langle 0, \tau, 1 \right\rangle$, there are 5 chords of length 2, 5 chords of length $2\tau$, and one chord of length $2\sqrt{\tau+2}$. Thus, for the regular icosahedron of edge length $5^{-1/4}\tau^{-1/2}2$, there are 5 chords of length $5^{-1/4}\tau^{-1/2}2$, 5 chords of length $5^{-1/4}\tau^{1/2}2$, and one chord of length $2$.
Since a regular icosahedron has $12$ vertices, Lemma \ref{Lemma:mV/2}, viz., $N_{i}=\frac{m_{i}V}{2}$, shows that there are a total of 30 chords of length $5^{-1/4}\tau^{-1/2}2$, 30 chords of length $5^{-1/4}\tau^{1/2}2$, and six chords of length $2$. Therefore, we have \begin{equation} \prod_{i=1}^{N}c_{i}^{2}=\left(\left(5^{-1/4}\tau^{-1/2}2\right)^{2}\right)^{30}\left(\left(5^{-1/4}\tau^{1/2}2\right)^{2}\right)^{30}\left(\left(2\right)^{2}\right)^{6}=\frac{2^{132}}{5^{30}}.\label{eq:3D-product-of-all-ico} \end{equation} Observe that 2 is the numerator\textquoteright s base and 5 is the denominator\textquoteright s base ($\nu$ and $\varepsilon$ for the icosahedron, respectively). Next, we consider the exponents. Recall that $E=30$ and $V=12$ for the icosahedron. For the denominator\textquoteright s exponent, observe that $E$ divides 30 (as desired). For the numerator\textquoteright s exponent, observe that (a) $132=12(11)=V(11)$ and therefore $132\equiv0\:(\mathrm{mod}\,V)$ and that (b) $132=30(4)+12=E(4)+V$ and therefore $132\equiv V\:(\mathrm{mod}\,E)$ as desired. Finally, $\frac{30}{\left(30,12\right)}=5$, which is the denominator's base. \emph{Dodecahedron.} The edge length of a regular dodecahedron inscribed in a unit 3-sphere is $3^{-1/2}\tau^{-1}2$ (see Table \ref{tab:Edge-Length}). The coordinates for a regular dodecahedron of edge length $2\tau^{-1}$ are given in \cite{Coxeter1973}. To find the distinct chord lengths of a regular dodecahedron of edge length $3^{-1/2}\tau^{-1}2$, we apply the four steps analogous to those listed in the ``Icosahedron'' case (replacing $5^{-1/4}\tau^{-1/2}2$ with $3^{-1/2}\tau^{-1}2$ and 2 with $2\tau^{-1}$).
We choose the vertex $\left\langle 0, \tau^{-1}, \tau \right\rangle$ (which we will denote by $\mathbf{P_0}$) and compute the distance between this vertex and each of the other vertices:
\begin{align*}
\left\Vert \mathbf{P_0} -\left\langle 0, \tau^{-1}, -\tau \right\rangle \right\Vert &= \left\Vert \left\langle 0, 0, 2\tau \right\rangle \right\Vert =2\tau\\
\left\Vert \mathbf{P_0} -\left\langle 0, -\tau^{-1}, \tau \right\rangle \right\Vert &= \left\Vert \left\langle 0, 2\tau^{-1}, 0 \right\rangle \right\Vert =2\tau^{-1}\\
\left\Vert \mathbf{P_0} -\left\langle 0, -\tau^{-1}, -\tau \right\rangle \right\Vert &= \left\Vert \left\langle 0, 2\tau^{-1}, 2\tau \right\rangle \right\Vert =2\sqrt{3}\\
\left\Vert \mathbf{P_0} -\left\langle \tau, 0, \tau^{-1} \right\rangle \right\Vert &= \left\Vert \left\langle -\tau, \tau^{-1}, \tau-\tau^{-1} \right\rangle \right\Vert =2\\
\left\Vert \mathbf{P_0} -\left\langle \tau, 0, -\tau^{-1} \right\rangle \right\Vert &= \left\Vert \left\langle -\tau, \tau^{-1}, \tau+\tau^{-1} \right\rangle \right\Vert =2\sqrt{2}\\
\left\Vert \mathbf{P_0} -\left\langle -\tau, 0, \tau^{-1} \right\rangle \right\Vert &= \left\Vert \left\langle \tau, \tau^{-1}, \tau-\tau^{-1} \right\rangle \right\Vert =2\\
\left\Vert \mathbf{P_0} -\left\langle -\tau, 0, -\tau^{-1} \right\rangle \right\Vert &= \left\Vert \left\langle \tau, \tau^{-1}, \tau+\tau^{-1} \right\rangle \right\Vert =2\sqrt{2}\\
\left\Vert \mathbf{P_0} -\left\langle \tau^{-1}, \tau, 0 \right\rangle \right\Vert &= \left\Vert \left\langle -\tau^{-1}, \tau^{-1}-\tau, \tau \right\rangle \right\Vert =2\\
\left\Vert \mathbf{P_0} -\left\langle \tau^{-1}, -\tau, 0 \right\rangle \right\Vert &= \left\Vert \left\langle -\tau^{-1}, \tau^{-1}+\tau, \tau \right\rangle \right\Vert =2\sqrt{2}\\
\left\Vert \mathbf{P_0} -\left\langle -\tau^{-1}, \tau, 0 \right\rangle \right\Vert &= \left\Vert \left\langle \tau^{-1}, \tau^{-1}-\tau, \tau \right\rangle \right\Vert =2\\
\left\Vert \mathbf{P_0} -\left\langle -\tau^{-1}, -\tau, 0 \right\rangle \right\Vert &= \left\Vert \left\langle \tau^{-1}, \tau^{-1}+\tau, \tau \right\rangle \right\Vert =2\sqrt{2}\\
\left\Vert \mathbf{P_0} -\left\langle 1, 1, 1 \right\rangle \right\Vert &= \left\Vert \left\langle -1, \tau^{-1}-1, \tau-1 \right\rangle \right\Vert =2\tau^{-1}\\
\left\Vert \mathbf{P_0} -\left\langle 1, 1, -1 \right\rangle \right\Vert &= \left\Vert \left\langle -1, \tau^{-1}-1, \tau+1 \right\rangle \right\Vert =2\sqrt{2}\\
\left\Vert \mathbf{P_0} -\left\langle 1, -1, 1 \right\rangle \right\Vert &= \left\Vert \left\langle -1, \tau^{-1}+1, \tau-1 \right\rangle \right\Vert =2\\
\left\Vert \mathbf{P_0} -\left\langle -1, 1, 1 \right\rangle \right\Vert &= \left\Vert \left\langle 1, \tau^{-1}-1, \tau-1 \right\rangle \right\Vert =2\tau^{-1}\\
\left\Vert \mathbf{P_0} -\left\langle 1, -1, -1 \right\rangle \right\Vert &= \left\Vert \left\langle -1, \tau^{-1}+1, \tau+1 \right\rangle \right\Vert =2\tau\\
\left\Vert \mathbf{P_0} -\left\langle -1, -1, 1 \right\rangle \right\Vert &= \left\Vert \left\langle 1, \tau^{-1}+1, \tau-1 \right\rangle \right\Vert =2\\
\left\Vert \mathbf{P_0} -\left\langle -1, 1, -1 \right\rangle \right\Vert &= \left\Vert \left\langle 1, \tau^{-1}-1, \tau+1 \right\rangle \right\Vert =2\sqrt{2}\\
\left\Vert \mathbf{P_0} -\left\langle -1, -1, -1 \right\rangle \right\Vert &= \left\Vert \left\langle 1, \tau^{-1}+1, \tau+1 \right\rangle \right\Vert =2\tau
\end{align*}
Thus, there are 5 distinct chord lengths: $2\tau$, $2\tau^{-1}$, $2\sqrt{3}$, 2, and $2\sqrt{2}$. Dividing these by $2\tau^{-1}$ yields $\tau^{2}$, $1$, $\tau\sqrt{3}$, $\tau$, and $\tau\sqrt{2}$, and multiplying them by $3^{-1/2}\tau^{-1}2$ yields $3^{-1/2}\tau2$, $3^{-1/2}\tau^{-1}2$, $2$, $3^{-1/2}2$, and $3^{-1/2}2\sqrt{2}$. From our computations, we see that, emanating from the vertex $\left\langle 0, \tau^{-1}, \tau \right\rangle$, there are three chords of length $2\tau$, three chords of length $2\tau^{-1}$, one chord of length $2\sqrt{3}$, six chords of length 2, and six chords of length $2\sqrt{2}$. Thus, for the regular dodecahedron of edge length $3^{-1/2}\tau^{-1}2$, there are three chords of length $3^{-1/2}\tau2$, three chords of length $3^{-1/2}\tau^{-1}2$, one chord of length $2$, six chords of length $3^{-1/2}2$, and six chords of length $3^{-1/2}2\sqrt{2}$. Since a regular dodecahedron has $20$ vertices, Lemma \ref{Lemma:mV/2}, viz., $N_{i}=\frac{m_{i}V}{2}$, shows that there are a total of 30 chords of length $3^{-1/2}\tau2$, 30 chords of length $3^{-1/2}\tau^{-1}2$, 10 chords of length $2$, 60 chords of length $3^{-1/2}2$, and 60 chords of length $3^{-1/2}2\sqrt{2}$. Therefore, we have \begin{equation} \prod_{i=1}^{N}c_{i}^{2}=\left(\left(\frac{2\tau}{\sqrt{3}}\right)^{2}\right)^{30}\left(\left(\frac{2}{\tau\sqrt{3}}\right)^{2}\right)^{30}\left(2^{2}\right)^{10}\left(\left(\frac{2}{\sqrt{3}}\right)^{2}\right)^{60}\left(\left(2\sqrt{\frac{2}{3}}\right)^{2}\right)^{60}=\frac{2^{440}}{3^{180}}.\label{eq:product-of-all-dodeca} \end{equation} Observe that 2 is the numerator\textquoteright s base and 3 is the denominator\textquoteright s base ($\nu$ and $\varepsilon$ for the dodecahedron, respectively).
Next, we consider the exponents. Recall that $E=30$ and $V=20$ for the dodecahedron. For the denominator\textquoteright s exponent, observe that $E$ divides 180 as desired. For the numerator\textquoteright s exponent, observe that (a) $440=20(22)=V(22)$ and therefore $440\equiv0\:(\mathrm{mod}\,V)$ and that (b) $440=30(14)+20=E(14)+V$ and therefore $440\equiv V\:(\mathrm{mod}\,E)$ as desired. Finally, $\frac{30}{\left(30,20\right)}=3$, which is the denominator's base. \end{proof} \subsection{Discussion} Theorem \ref{Thm:product-of-all-chords} shows that Fact 3 does, in fact, apply to at least one family of regular $n$-polytopes (where $n \geq 2$)\textemdash i.e., the family of regular $n$-crosspolytopes\textemdash \emph{if the ``number of edges'' ($E$) in the base is replaced by ``number of facets'' ($F$) and the ``number of edges'' ($E$) in the exponent is replaced by ``number of vertices'' ($V$).} These replacements are valid since, for 2-polytopes, the facets are, in fact, the edges and $E=V$. Theorem \ref{Thm:24-cell-product-of-all-chords} shows that Fact 3 does \emph{not }generalize to the 24-cell but does show that, for the 24-cell (inscribed in a unit 4-sphere), the product of the squared chord lengths is a ``very nice'' integer ($6^{E}$, which is equal to $6^{R}$). Finally, our proof of Theorem \ref{Thm:3D-product-of-all} shows that Fact 3 does \emph{not} generalize to the regular 3-simplex, 3-cube, icosahedron, or dodecahedron and therefore \emph{cannot} generalize to all regular $n$-simplices or to all $n$-cubes (where $n \geq 2$) and \emph{probably} does not generalize to the remaining regular $n$-simplices or $n$-cubes (i.e., those of dimension $n>3$) or to the 600-cell (i.e., the ``hypericosahedron'') or the 120-cell (i.e., the ``hyperdodecahedron'').
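Theorem \ref{Thm:product-of-all-chords} is also easy to check numerically. The following Python sketch (ours; an informal sanity check, not a substitute for the proof) builds the $n$-crosspolytope with vertices $\pm\mathbf{e}_{i}$, which is already inscribed in the unit $n$-sphere, takes the product of its squared chord lengths, and compares it with $F^{V}=(2^{n})^{2n}$:

```python
from itertools import combinations

def crosspolytope_chord_product(n):
    """Product of squared chord lengths of the regular n-crosspolytope
    inscribed in the unit n-sphere (vertices +/- e_i)."""
    vertices = []
    for i in range(n):
        for s in (1, -1):
            v = [0] * n
            v[i] = s
            vertices.append(tuple(v))
    result = 1
    for p, q in combinations(vertices, 2):
        # Squared distance is exactly the integer 2 (edge) or 4 (diagonal).
        result *= sum((a - b) ** 2 for a, b in zip(p, q))
    return result

# The theorem asserts the product equals F^V with F = 2^n facets, V = 2n vertices.
for n in range(2, 7):
    F, V = 2 ** n, 2 * n
    assert crosspolytope_chord_product(n) == F ** V
print("verified for n = 2..6")
```

Since all coordinates are integers, the squared distances are exact, so the comparison is exact integer arithmetic rather than a floating-point approximation.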
\section{FACT 4: Product of Squared Distinct Chords} The fourth 2-polytope fact stated that, for any regular 2-polytope inscribed in a unit 2-sphere, the product of its squared distinct chord lengths equals (a) $E$ (when $E$ is odd) and (b) some integer (when $E$ is even). We test generalization of this fact to (a) the regular $n$-crosspolytopes ($n \geq 2$), (b) the 24-cell, and (c) the other regular polytopes. We need no additional information. \subsection{Generalizing Fact 4 to Crosspolytopes} \begin{thm} \label{Thm:crossPT-product-of-distinct} Let $\mathcal{P}$ be a regular $n$-crosspolytope inscribed in a unit $n$-sphere ($n \geq 2$) with $k$ distinct chord lengths. Let $d_{i}$ be the $i$\textsuperscript{th} distinct chord length. Then $\prod_{i=1}^{k}d_{i}^{2} = 8$. \end{thm} \begin{proof} Recall from the proof of Theorem \ref{Thm:product-of-all-chords} that the chords of a regular $n$-crosspolytope inscribed in a unit $n$-sphere ($n\geq2$) consist of edges of length $2^{1/2}$ and inner diagonals of length $2$. Hence, $\prod_{i=1}^{k}d_{i}^{2}=\left(2^{1/2}\right)^{2}(2)^{2}=8$. \end{proof} \begin{cor} \label{Cor:crossPTs-product-of-distinct}Let $\mathcal{Q}$ be a regular $n$-cube circumscribed about a unit $n$-sphere (where $n \geq 2$). Let $t_{i}$, for $i=1,2,...,k$, be the distinct lengths of the line segments whose endpoints are centers of facets. Then $\prod_{i=1}^{k}t_{i}^{2} = 8$. \end{cor} \begin{proof} The reciprocal of a regular $n$-crosspolytope is an $n$-cube \cite{Coxeter1973}. \end{proof} \subsection{Generalizing Fact 4 to the 24-cell} \begin{thm} \label{Thm:24-cell-product-of-distinct}Let $\mathcal{P}$ be a 24-cell that is inscribed in a unit $4$-sphere and has $V$ vertices, $F$ facets, and $k$ distinct chord lengths. Let $d_{i}$ denote the $i$\textsuperscript{th} distinct chord length of $\mathcal{P}$. Then \[\prod_{i=1}^{k}d_{i}^{2}=F=V.\] \end{thm} \begin{proof} Recall from the proof of Theorem \ref{Thm:24-cell-product-of-all-chords} that a regular $24$-cell inscribed in a unit $4$-sphere has four distinct chord lengths: 1, $2^{1/2}$, $3^{1/2},$ and 2.
Squaring these yields 1, 2, 3, and 4. Thus, $\prod_{i=1}^{k}d_{i}^{2}=4!=24$. Table \ref{tab:j-Face-Cardinality-and-Shape} reveals that the 24-cell has 24 vertices and 24 facets. Therefore: $\prod_{i=1}^{k}d_{i}^{2}=F=V$. \end{proof} \begin{cor} \label{DualCor:product-of-distinct}Let $\mathcal{Q}$ be a 24-cell \emph{circumscribed} about a unit $4$-sphere with $V$ vertices and $F$ facets. Let $t_{i}$, for $i=1,2,...,k$, be the distinct lengths of the line segments whose endpoints are centers of facets. Then $\prod_{i=1}^{k}t{}_{i}^{2}=V=F$. \end{cor} \begin{proof} Note that a regular 24-cell is self-reciprocal \cite{Coxeter1973}. \end{proof} \subsection{Generalizing Fact 4 to Other Regular Polytopes} \begin{thm} \label{3DThm:product-of-distinct}Let $\mathcal{P}$ be a regular 3-polytope with $E$ edges and $V$ vertices inscribed in a unit 3-sphere, and let $a$ and $b$ be the exponents in Theorem \ref{Thm:3D-product-of-all}. Then \[ \prod_{i=1}^{k}d_{i}^{2}=\frac{\nu^{c}}{\varepsilon^{d}} \] where $d_{i}$ is the length of each $i$\textsuperscript{th} distinct chord of $\mathcal{P}$, $\nu$ is the number of vertices incident to any edge, $\varepsilon$ is the number of edges incident to any vertex, and $c,d\in\mathbb{Z}$ such that \begin{enumerate} \item for the self-dual tetrahedron, $b=dE$, \item for the cube and icosahedron, $b=dE$, and \item for the octahedron and dodecahedron, $a=cVm$ (where $m=1$ or $2$, respectively). \end{enumerate} In addition, the number $\varepsilon$ in the equation above may be replaced by either \[ \frac{E}{\left(E,V\right)}=\frac{\left[E,V\right]}{V}. \] \end{thm} \begin{proof} We consider each of these five cases in turn: (a) the tetrahedron, (b) the cube, (c) the icosahedron, (d) the octahedron, and (e) the dodecahedron. \emph{Tetrahedron.} Recall that a regular tetrahedron inscribed in a unit 3-sphere has one type of chord: edges of length $\frac{2\sqrt{6}}{3}$. 
Therefore, we have \begin{equation} \prod_{i=1}^{k}d_{i}^{2}=\left(\frac{2\sqrt{6}}{3}\right)^{2}=\frac{2^{3}}{3}.\label{eq:product-of-distinct-tetra} \end{equation} Observe that 2 is the numerator\textquoteright s base and 3 is the denominator\textquoteright s base ($\nu$ and $\varepsilon$, respectively), as desired. For the denominator\textquoteright s exponent, recall from the proof of Theorem \ref{Thm:3D-product-of-all} that $E=6$ and $b=6$ for the tetrahedron. Observe that $d=1=\frac{6}{6}=\frac{b}{E}$, so $b=dE$ (as desired). Finally, recall from the proof of Theorem \ref{Thm:3D-product-of-all} that, for the tetrahedron, $\frac{E}{\left(E,V\right)}=\frac{\left[E,V\right]}{V}=3$, which is the denominator's base. \emph{Cube.} Recall from the proof of Theorem \ref{Thm:3D-product-of-all} that a cube inscribed in a unit 3-sphere has three types of chords: edges of length $3^{-1/2}2$, outer diagonals of length $\frac{2\sqrt{6}}{3}$, and inner diagonals of length 2. Therefore, we have \begin{equation} \prod_{i=1}^{k}d_{i}^{2}=\left(3^{-1/2}2\right)^{2}\left(\frac{2\sqrt{6}}{3}\right)^{2}(2)^{2}=\frac{2^{7}}{3^{2}}.\label{eq:product-of-distinct-cube} \end{equation} Observe that the numerator\textquoteright s base is 2 and the denominator\textquoteright s base is 3 ($\nu$ and $\varepsilon$, respectively). For the denominator\textquoteright s exponent, recall from the proof of Theorem \ref{Thm:3D-product-of-all} that $E=12$ and $b=24$ for the cube. Observe that $d=2=\frac{24}{12}=\frac{b}{E}$, so $b=dE$. Finally, recall from the proof of Theorem \ref{Thm:3D-product-of-all} that, for the cube, $\frac{E}{\left(E,V\right)}=\frac{\left[E,V\right]}{V}=3$, which is the denominator's base. \emph{Icosahedron.} Recall that a regular icosahedron inscribed in a unit 3-sphere has three distinct chord lengths: $\frac{2}{5^{1/4}\tau^{1/2}}$, $\frac{2\tau^{1/2}}{5^{1/4}}$, and 2. 
Therefore: \begin{equation} \prod_{i=1}^{k}d_{i}^{2}=\left(\frac{2}{5^{1/4}\tau^{1/2}}\right)^{2}\left(\frac{2\tau^{1/2}}{5^{1/4}}\right)^{2}(2)^{2}=\frac{2^{6}}{5}.\label{eq:product-of-distinct-ico} \end{equation} Observe that the numerator\textquoteright s base is 2 and the denominator\textquoteright s base is 5 ($\nu$ and $\varepsilon$, respectively). For the denominator\textquoteright s exponent, recall from the proof of Theorem \ref{Thm:3D-product-of-all} that $E=30$ and $b=30$ for the icosahedron. Observe that $d=1=\frac{30}{30}=\frac{b}{E}$, so $b=dE$ (as desired). Finally, recall from the proof of Theorem \ref{Thm:3D-product-of-all} that, for the icosahedron, $\frac{E}{\left(E,V\right)}=\frac{\left[E,V\right]}{V}=5$, which is the denominator's base. \emph{Octahedron.} Recall from the proof of Theorem \ref{Thm:3D-product-of-all} that a regular octahedron inscribed in a unit 3-sphere has two types of chords: edges of length $\sqrt{2}$ and inner diagonals of length 2. Therefore, we have \begin{equation} \prod_{i=1}^{k}d_{i}^{2}=\left(\sqrt{2}\right)^{2}(2)^{2}=\frac{2^{3+4q}}{2^{4q}}=\frac{2^{3+4q}}{4^{2q}},\label{eq:product-of-distinct-octa} \end{equation} where $q$ is the same integer as in the proof of Theorem \ref{Thm:3D-product-of-all}. Observe that the numerator\textquoteright s base is 2 and the denominator\textquoteright s base is 4 ($\nu$ and $\varepsilon$, respectively). For the numerator\textquoteright s exponent, recall from the proof of Theorem \ref{Thm:3D-product-of-all} that $V=6$ and $a=18+24q$ for the octahedron. Observe that $c=3+4q=\frac{18+24q}{6}=\frac{a}{V}$, so $a=cV\cdot1=cVm$. Finally, recall from the proof of Theorem \ref{Thm:3D-product-of-all} that, for the octahedron, $\frac{E}{\left(E,V\right)}=\frac{\left[E,V\right]}{V}=2$, which is the base of the denominator in $\frac{2^{3+4q}}{2^{4q}}$.
\emph{Dodecahedron.} Recall that a regular dodecahedron inscribed in a unit 3-sphere has five distinct chord lengths: $\frac{2\tau}{\sqrt{3}}$, $\frac{2}{\sqrt{3}\tau}$, 2, $\frac{2}{\sqrt{3}}$, and $\frac{2\sqrt{2}}{\sqrt{3}}$. Therefore: \begin{equation} \prod_{i=1}^{k}d_{i}^{2}=\left(\frac{2\tau}{\sqrt{3}}\right)^{2}\left(\frac{2}{\sqrt{3}\tau}\right)^{2}\left(2\right)^{2}\left(\frac{2}{\sqrt{3}}\right)^{2}\left(\frac{2\sqrt{2}}{\sqrt{3}}\right)^{2}=\frac{2^{11}}{3^{4}}.\label{eq:product-of-distinct-dodeca} \end{equation} Observe that the numerator\textquoteright s base is 2 and the denominator's base is 3 ($\nu$ and $\varepsilon$, respectively). For the numerator\textquoteright s exponent, recall from the proof of Theorem \ref{Thm:3D-product-of-all} that $V=20$ and $a=440$ for the dodecahedron. Observe that $c=11=\frac{440}{40}=\frac{a}{2V}$, so $a=cV\cdot2=cVm$. Finally, recall from the proof of Theorem \ref{Thm:3D-product-of-all} that, for the dodecahedron, $\frac{E}{\left(E,V\right)}=\frac{\left[E,V\right]}{V}=3$, which is the denominator's base. \end{proof} \subsection{Discussion} Theorem \ref{Thm:crossPT-product-of-distinct} shows that Fact 4 does, in fact, apply to the regular $n$-crosspolytopes (where $n\geq2$) since they have \emph{even} $E$ and, thus, fall under the ``even $E$'' case of Fact 4. Theorem \ref{Thm:24-cell-product-of-distinct} shows that Fact 4 also applies to the 24-cell (which has even $E$) and shows, for the 24-cell, that the ``some integer'' is the ``number of facets'' ($F$) or, equivalently, the ``number of vertices'' ($V$). 
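For the reader who wants to double-check the squared-chord products used in the proof above, here is a short numerical verification (illustrative only and not part of the proof; $\tau$ denotes the golden ratio):

```python
import math

tau = (1 + math.sqrt(5)) / 2  # golden ratio

def sq_product(chords):
    """Product of the squared chord lengths."""
    p = 1.0
    for d in chords:
        p *= d * d
    return p

# Distinct chord lengths of each solid inscribed in a unit 3-sphere.
tetra = [2 * math.sqrt(6) / 3]
cube = [2 / math.sqrt(3), 2 * math.sqrt(6) / 3, 2]
octa = [math.sqrt(2), 2]
icosa = [2 / (5 ** 0.25 * tau ** 0.5), 2 * tau ** 0.5 / 5 ** 0.25, 2]
dodeca = [2 * tau / math.sqrt(3), 2 / (math.sqrt(3) * tau), 2,
          2 / math.sqrt(3), 2 * math.sqrt(2) / math.sqrt(3)]

assert math.isclose(sq_product(tetra), 2 ** 3 / 3)
assert math.isclose(sq_product(cube), 2 ** 7 / 3 ** 2)
assert math.isclose(sq_product(octa), 2 ** 3)
assert math.isclose(sq_product(icosa), 2 ** 6 / 5)
assert math.isclose(sq_product(dodeca), 2 ** 11 / 3 ** 4)
```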
Finally, our proof of Theorem \ref{3DThm:product-of-distinct} shows that Fact 4 does \emph{not} generalize to the regular 3-simplex, 3-cube, icosahedron, or dodecahedron and therefore \emph{cannot} generalize to all regular $n$-simplices or all $n$-cubes ($n \geq 2$) and \emph{probably} does not generalize to the remaining regular $n$-simplices or $n$-cubes (i.e., those of dimension $n>3$) or to the 600-cell (i.e., the ``hypericosahedron'') or the 120-cell (i.e., the ``hyperdodecahedron''). \section{Conclusion} We have succeeded in finding a single rule that applies to the chord lengths of \emph{all} regular $n$-polytopes, $n \geq 2$ (namely, the rule given by Theorem \ref{Thm:sum-of-all-chords-Vsquared}). Facts 2--4 generalized to several types of regular polytopes. Given that the literature contained little information regarding the chord lengths of regular $n$-polytopes, $n\geq3$, it is clear that this investigation has taken the study of regular polytopes' chord lengths to a new level (and higher dimensions). \subsection*{Acknowledgment} The author thanks Dr. Kenneth Wantz of Regent University for his thorough critiques. \end{document}
\begin{document} \begin{frontmatter} \title{On the Use of Random Forest for Two-Sample Testing} \author{{\large Simon Hediger$^{b}$ \hspace*{2mm}} {\large Loris Michel$^{a}$ \hspace*{2mm}} {\large Jeffrey N{\"a}f}$^{a}$ \footnote{\textit{\small Corresponding Author. E-mail address: [email protected], Address: ETH Z\"urich, HG G 10.1, R\"amistrasse 101, 8092 Z\"urich.}} \\[3mm] $^{a}$\textit{\small Seminar for Statistics, ETH Z\"{u}rich, Switzerland} \\ $^{b}$\textit{\small Department of Banking and Finance, University of Zurich, Switzerland}\\ } \begin{abstract} Following the line of classification-based two-sample testing, tests based on the Random Forest classifier are proposed. The developed tests are easy to use, require almost no tuning, and are applicable for \emph{any} distribution on $\ensuremath{{\mathbb R}}^d$. Furthermore, the built-in variable importance measure of the Random Forest gives potential insights into which variables drive the difference in distribution. An asymptotic power analysis for the proposed tests is developed. Finally, two real-world applications illustrate the usefulness of the introduced methodology. To simplify the use of the method, the R-package ``hypoRF'' is provided. \end{abstract} \begin{keyword} Random Forest, Distribution Testing, Classification, Kernel Two-Sample Test, MMD, Total Variation Distance, U-statistics \end{keyword} \end{frontmatter} \section{Introduction} Two-sample testing via classification methods is an old idea tracing back to the work of \cite{Friedman2004OnMG}. Generally speaking, one adapts the output of a classifier to construct a two-sample test. Let $\mathbf{X}_1, \ldots, \mathbf{X}_{n_0}$ and $\mathbf{Y}_1, \ldots, \mathbf{Y}_{n_1}$ be a collection of $\ensuremath{{\mathbb R}}^d$-valued random vectors, such that $\mathbf{X}_i \stackrel{iid}{\sim} P$ and $\mathbf{Y}_i \stackrel{iid}{\sim} Q$, where $P$ and $Q$ are Borel probability measures on $\ensuremath{{\mathbb R}}^d$. 
The goal is to test \begin{align} \label{H0} H_0: P=Q, \ \ \ H_A:P \neq Q. \end{align} Given these iid samples of vectors, we define labels $\ell_i=0$ for each $\mathbf{X}_i$ and $\ell_i=1$ for each $\mathbf{Y}_i$ to obtain the data $\left(\mathbf{Z}_j, \ell_j \right)$, $j=1,\ldots, N$, for $N=n_0+n_1$, and $\mathbf{Z}_j=\mathbf{X}_i$ or $\mathbf{Z}_j=\mathbf{Y}_i$. On this data, we train a classifier $\hat{g}: \ensuremath{{\mathbb R}}^d \to \{0,1 \}$. If $\hat{g}$ is able to ``accurately'' predict $\ell$ on some test sample, this is taken as evidence against $H_0$. In this work, we assume the data is generated from a mixture distribution \[ \mathbf{Z}_j \stackrel{iid}{\sim} (1-\pi) P + \pi Q, \] such that $n_1 \sim \mbox{Bin}(N, \pi)$, where $\mbox{Bin}$ denotes the Binomial distribution. While our exposition will be valid for general classifiers, we specifically target the use of the Random Forest (RF) classifier in this work. Random Forest is a powerful and flexible method developed by \cite{Breiman2001}, known to have a remarkably stable performance in applications (see e.g. the extensive work of \cite{RFsuperiority}). This approach to testing has been used in scientific applications, especially in the field of neuroscience. We refer to \cite{DPLBpublished} for an excellent literature overview. More recently, a lot of additional work has been produced in this direction in the statistical literature, see e.g., \cite{DPLBpublished}; \cite{Rosen2016}; \cite{facebookguys}; \cite{BORJI2019}; \cite{gagnon-bartsch2019}; \cite{kim2019}; \cite{Cal2020}. The closest relation to our work appears to be the extensive recent work of \cite{DPLBpublished}. Our first out-of-sample test in Section \ref{outofsampletest}, though derived independently, is very closely related to their test in Section 9.1. Moreover, \cite[Proposition 9.1]{DPLBpublished} provide a consistency result for general classifiers under mild assumptions. 
We add to this discussion by showing that, under imbalance, these assumptions break down even for the Bayes classifier, so that a test based on this classifier is not consistent. \cite{DPLBpublished} also provide a rule of thumb on when to use classification-based tests, as opposed to more fine-tuned statistical tests designed for a specific problem. We extend this discussion by adding a recommendation when to use the RF-based test, as opposed to kernel-based tests, as for instance proposed in \cite{Twosampletest}, \cite{NIPS2012_4727}, \cite{NIPS2015_5685} and \cite{NIPS2016_6148}. These tests are natural competitors to classification-based tests and our work indicates that: \begin{itemize} \item[1.] If the differences between $P$, $Q$ can be found in the marginal distributions, even sparsely so, the RF-based test tends to perform very well. We demonstrate in Section \ref{contamin} that the RF-based test succeeds in an example with marginal differences that is difficult for kernel-based tests. \item[2.] If the change is mostly found in the dependency structure, or copula, kernel tests like MMD may be preferable. As demonstrated in \ref{simusec}, the RF-based test still has power, but less so than the kernel-based tests. \end{itemize} In addition, the Random Forest classifier brings two features to the two-sample testing problem: the out-of-bag (OOB) statistics and the variable importance measures. The former is used to increase sample efficiency, compared to a test based on a holdout sample, while the latter provides insights into the source of distributional differences. Our work also shares similarities with \cite{Rosen2016}, \cite{gagnon-bartsch2019} and \cite{kim2019}. The work of \cite{gagnon-bartsch2019} focuses on the use of the in-sample classification error as a test statistic in the balanced case. \cite{Rosen2016} focuses attention on the power of different classifier-based test statistics for specific alternatives. 
They also seem to be the first to propose the use of bootstrap-based classification tests. The work of \cite{kim2019} presents a different approach based on regression and focuses on local testing, i.e. determining where the distributional difference appears. The next two subsections list our contributions and demonstrate the advantages of our method with a small toy example. Section \ref{framework} introduces the two tests used, the first based on out-of-sample observations and the second on the OOB statistics. It closes with a theoretical insight into the consistency of classifier-based tests. Section \ref{U-stats-Test} extends this theoretical insight into an asymptotic power analysis for a version of the OOB error-based test, using U-statistics theory. Finally, Section \ref{Application} discusses the role of the variable importance measure of the Random Forest and demonstrates the power of our tests with simulated as well as two real-world data sets. \begin{figure} \caption{\textbf{(Intro)}}\label{fig:intro} \end{figure} \subsection{Contributions} Our work differentiates itself from the existing literature in several aspects: \begin{itemize} \item[-] The out-of-sample test based on the class-wise errors in Proposition \ref{Prob1}, though similar to the one in \cite[Proposition 1]{DPLBpublished}, requires fewer assumptions to conserve the level asymptotically (though \cite{DPLBpublished} focus on a setting where both the number of observations $N \to \infty$ and the dimension $d \to \infty$; in our work, $d$ is assumed to be fixed). \item[-] We show that no test based on the Bayes classifier is consistent for $\pi \neq 1/2$ in Lemma \ref{consistencylemma0}, but that a simple change in the classifier's ``cutoff'' restores consistency. \item[-] We utilize the OOB error and variable importance measure in this context to both increase the power of the test and extract more meaning in practice. 
As shown in simulations, the increase in power with the OOB test is substantial. \item[-] We analyze the asymptotic normality of an OOB error-based test statistic using U-statistics theory and use it to derive an expression for the approximate power of the test in Section \ref{U-stats-Test}. \item[-] We provide empirical evidence in Section \ref{contamin}, and in \ref{simusec}, that our test constitutes an important complementary method to powerful kernel-based tests, leading to improved performance in some traditionally difficult examples. \item[-] Finally, we provide the \texttt{R}-package hypoRF available on CRAN, with an implementation of the method. \end{itemize} \subsection{Motivational example} \label{motexamplesec} We consider a toy example to demonstrate the proposed Random Forest two-sample testing methodology. We choose $P$ and $Q$ to be five-dimensional multivariate Gaussian probability distributions. The covariance matrix of $P$ is the identity and the distribution $Q$ only differs from $P$ in the last two components, between which a positive correlation of $0.8$ is imposed. The OOB statistics-based two-sample test correctly rejects with a $p$-value of $0.0099$ (details are given in Section \ref{finaltest}). Figure \ref{fig:intro} presents a visual summary of the test. The right plot displays the last two components of the sampled points. On the top left, the estimated means, by component and class, indicate that no distributional difference is visible in the margins. The bottom left plot shows the variable importance measure for each component (as presented in Section \ref{varimportance}). We can see that the last two components are picked up as relevant variables, according to the threshold prescribed by the dotted red line. Thus our method correctly rejects in this example and moreover delivers a hint as to which components might be responsible for the perceived difference in distribution. 
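For concreteness, the data-generating mechanism of this toy example can be sketched as follows (a pure-Python stand-in; the function names `sample` and `corr` are illustrative and the actual test uses the Random Forest OOB statistic described later):

```python
import math
import random

def sample(n, correlated, rng):
    """Draw n points from a 5-dim Gaussian; if `correlated`, the last two
    components get correlation 0.8 via a 2x2 Cholesky factor."""
    data = []
    for _ in range(n):
        g = [rng.gauss(0, 1) for _ in range(5)]
        if correlated:
            g[4] = 0.8 * g[3] + 0.6 * g[4]  # 0.6 = sqrt(1 - 0.8**2)
        data.append(g)
    return data

def corr(xs, ys):
    """Sample correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

rng = random.Random(1)
P_sample = sample(2000, False, rng)  # class 0: identity covariance
Q_sample = sample(2000, True, rng)   # class 1: correlated last two components

# The margins agree; only the dependence in the last two components differs.
assert abs(corr([z[3] for z in Q_sample], [z[4] for z in Q_sample]) - 0.8) < 0.1
assert abs(corr([z[3] for z in P_sample], [z[4] for z in P_sample])) < 0.1
```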
\section{Framework} \label{framework} Let $\mathbf{Z}_1,\ldots, \mathbf{Z}_{N}$ be random vectors with values in $\ensuremath{{\mathcal X}} \subset \ensuremath{{\mathbb R}}^d$ and $l_1, \ldots, l_N$ corresponding labels in $\{0,1 \}$, collected in a dataset $D_{N}=\{(\mathbf{Z}_i,l_i)\}^{N}_{i=1}$ with \[ \mathbf{Z}_i \stackrel{iid}{\sim} (1-\pi) P + \pi Q. \] A sample $\mathbf{Z}_i$ coming from the mixture component $P$ (respectively $Q$) is labeled $l_i=0$ (respectively $l_i=1$). Let $\hat{g}(\mathbf{Z}):=g(\mathbf{Z}, D_{N_{train}})$ be a classifier trained on a subset $D_{N_{train}}$ of size $N_{train} < N $ of the observed data. Given the setting above, we now present two tests based on the discriminative ability of $\hat{g}$. The first such test uses an independent test set and is very similar to the test proposed by \cite{DPLBpublished}. The second test in Section \ref{finaltest} is entirely new and uses the OOB error to obtain its decision rule. \subsection{Out-of-sample test}\label{outofsampletest} Let $N_{test}=N-N_{train}$ be the number of test points. Moreover, $n_{0,j}$ is the number of observations coming from class 0, and $n_{1,j}$ the number of observations from class 1, for $j \in \{train, test \}$. We assume throughout the paper that $n_{0,j} \geq 1, n_{1,j} \geq 1$. If there is no difference in the distribution of the two groups, it clearly holds that \[ \ensuremath{{\mathbb P}}(\ell_i =1 | \mathbf{Z}_i)=\ensuremath{{\mathbb P}}(\ell_i=1)=\pi, \] in other words, $\ell_i$ is independent of $\mathbf{Z}_i$. If $\pi=1/2$, a test can be constructed by considering the overall out-of-sample classification error, \begin{align*} \hat{L}^{(\hat{g})}= \frac{1}{N_{test}} \sum_{i=1}^{N_{test}} \ensuremath{{\mathbb I}}\{ \hat{g}(\mathbf{Z}_i) \neq \ell_i \}, \end{align*} which under the null hypothesis of equal distributions has $N_{test} \hat{L}^{(\hat{g})} \sim \mbox{Bin}(N_{test}, 1/2)$. 
Here, $\ensuremath{{\mathbb I}} \{ \hat{g}(\mathbf{Z}_i) \neq \ell_i \} $ takes the value 1 if $\hat{g}(\mathbf{Z}_i) \neq \ell_i$ and 0 otherwise. In an effort to extend this principle to general $\pi$, we instead use an approach based on the class-wise errors \[ \hat{L}_0^{(\hat{g})}= \frac{1}{n_{0, test}} \sum_{\{i: \ell_i=0 \}}\ensuremath{{\mathbb I}}\{ \hat{g}(\mathbf{Z}_i) \neq 0 \}, \ \ \hat{L}_1^{(\hat{g})}= \frac{1}{n_{1, test}} \sum_{\{i: \ell_i=1 \}} \ensuremath{{\mathbb I}}\{ \hat{g}(\mathbf{Z}_i) \neq 1 \}, \] similar to \cite{DPLBpublished}. Define, for $j \in \{0,1\}$, the true class-wise loss for a given classifier $\hat{g}$ as $L_j^{(\hat{g})} = \ensuremath{{\mathbb P}}(\hat{g}(\mathbf{Z}) \neq j| D_{N_{train}}, \ell = j)$. As shown in the proof of Proposition \ref{Prob1}, conditioned on the training data and the number of observations from class $j \in \{0,1 \}$, $n_{j, test} \hat{L}_j^{(\hat{g})}| D_{N_{train}},n_{j, test} \sim \mbox{Bin}(n_{j, test}, L_j^{(\hat{g})})$. The loss $L_j^{(\hat{g})}$ depends on the classifier and is generally not known, even under $H_0$. However if $P=Q$, it holds that \begin{align*} L_0^{(\hat{g})} + L_1^{(\hat{g})} &= \ensuremath{{\mathbb P}}(\hat{g}(\mathbf{Z})=0| D_{N_{train}}, \ell=1 ) + \ensuremath{{\mathbb P}}(\hat{g}(\mathbf{Z})=1| D_{N_{train}}, \ell=0 )\\ &=\ensuremath{{\mathbb P}}(\hat{g}(\mathbf{Z})=0| D_{N_{train}} ) + \ensuremath{{\mathbb P}}(\hat{g}(\mathbf{Z})=1| D_{N_{train}} )\\ &=1, \end{align*} where we used independence of $\ell$ and $\mathbf{Z}$ when $P=Q$. As a side note, this shows that $L_0^{(\hat{g})} + L_1^{(\hat{g})}=1$ will hold as soon as $\ell$ and $\hat{g}(\mathbf{Z})$ are independent. This follows if $P=Q$, but also if $\hat{g}$ negates the dependence between $\ell$ and $\mathbf{Z}$, which essentially means it has no discriminating abilities. Thus under $H_0$, $L_0^{(\hat{g})}=1-L_1^{(\hat{g})}$. 
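The identity $L_0^{(\hat{g})} + L_1^{(\hat{g})} = 1$ under $H_0$ is easy to reproduce in simulation: when both classes are drawn from the same distribution, the class-wise errors of any fixed classifier sum to one in expectation. A minimal sketch (the threshold classifier here is arbitrary, chosen only for illustration):

```python
import random

rng = random.Random(0)
g = lambda z: int(z > 0.3)  # an arbitrary fixed classifier on the real line

# Under H_0 both classes are drawn from the same distribution, here N(0, 1).
z0 = [rng.gauss(0, 1) for _ in range(50000)]  # class 0
z1 = [rng.gauss(0, 1) for _ in range(50000)]  # class 1

L0 = sum(g(z) != 0 for z in z0) / len(z0)  # empirical class-wise error, class 0
L1 = sum(g(z) != 1 for z in z1) / len(z1)  # empirical class-wise error, class 1

# The two class-wise errors sum to (approximately) one.
assert abs(L0 + L1 - 1.0) < 0.02
```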
Define for $p \in [0,1]$ the linear combination, $\hat{L}_{p}^{(\hat{g})}:= (1-p) \hat{L}_0^{(\hat{g})} + p\hat{L}_1^{(\hat{g})}$ and \begin{align*} \hat{\sigma}_c := 1/2 \sqrt{ \frac{ \hat{L}_{0}^{(\hat{g})} (1-\hat{L}_{0}^{(\hat{g})})}{n_{0,test}} + \frac{\hat{L}_{1}^{(\hat{g})}(1-\hat{L}_{1}^{(\hat{g})})}{n_{1,test}} }. \end{align*} Let moreover, \[ \hat{g}(D_N):=\left( \hat{g}(\mathbf{Z}_1), \ldots, \hat{g}(\mathbf{Z}_{N}) \right). \] We are then able to formulate the following decision rule: \begin{align}\label{Binomialtest} \delta_{B}(\hat{g}(D_{N_{test}})) := \ensuremath{{\mathbb I}} \left\{ \hat{L}_{1/2}^{(\hat{g})} - 1/2 < \hat{\sigma}_c \Phi^{-1}(\alpha) + \epsilon_{N_{test}} \right\}, \end{align} where $\Phi^{-1}(\alpha)$ is the $\alpha$ quantile of the standard normal distribution and $\epsilon_{N_{test}}$ is a decreasing sequence of small non-random numbers. Then \begin{proposition} \label{Prob1} There exists a sequence $\epsilon_{N_{test}}$, such that the decision rule in \eqref{Binomialtest} conserves the level asymptotically, i.e. \[ \limsup_{N_{test} \to \infty} \ensuremath{{\mathbb P}} \left(\delta_{B}(\hat{g}(D_{N_{test}}))=1 \right) \leq \alpha, \] under $H_0:P=Q$. \end{proposition} Proposition \ref{Prob1} is related to the first part of Proposition 9.1. in \cite{DPLBpublished}. Note that we did not put any restrictions on how $L_0^{(\hat{g})}$, $L_1^{(\hat{g})}$ change individually and in particular, we made no assumption on how $N_{train}$ behaves, as $N_{test}$ goes to infinity. The reason for including the sequence $\epsilon_{N_{test}}$ is that, when $N_{train}$ increases with $N_{test}$, boundary cases are possible, in which the variance $L_0^{(\hat{g})}(1-L_0^{(\hat{g})}) + L_1^{(\hat{g})}(1-L_1^{(\hat{g})})$ decreases as $1/N_{test}$ or faster, while still being nonzero for finite $N$. 
In this case the asymptotic normality of $(\hat{L}_{1/2}^{(\hat{g})} - 1/2)/\hat{\sigma}_c$ breaks down and it becomes increasingly difficult to control the behavior of the acceptance probability under the Null. Adding $\epsilon_{N_{test}}$ makes it possible to circumvent this difficulty, albeit at the price of a potential loss in asymptotic power in these boundary cases. If $N_{train}$ grows at the same rate as $N_{test}$, such boundary cases appear unlikely in practice. In fact, for a Random Forest classifier, it rather seems the classifier just outputs the majority class for all $N_{train}$ large enough, such that $\hat{\sigma}_c=0$ and $\hat{L}_{0}^{(\hat{g})}=0$, $\hat{L}_{1}^{(\hat{g})}=1$ or vice versa. In this case the level is guaranteed, even if $\epsilon_{N_{test}}=0$ for all $N$. We simply take $\epsilon_{N_{test}}=0$ for the remainder of this paper. The test is summarized in Algorithm \ref{RFTestalg1}. We briefly highlight the connection between the above decision rule and the one based on the overall classification error $\hat{L}^{(\hat{g})}$, in the case of $\pi=1/2$ and $\epsilon_{N_{test}}=0$. Since, for $\hat{\pi}=n_{1,test}/N_{test}$, \begin{equation} \hat{L}^{(\hat{g})} = (1-\hat{\pi}) \hat{L}_0^{(\hat{g})} + \hat{\pi} \hat{L}_1^{(\hat{g})} = \hat{L}^{(\hat{g})}_{\hat{\pi}}, \end{equation} and $\hat{\pi} \to \pi = 1/2$ a.s., it holds that $|\hat{L}^{(\hat{g})} - \hat{L}_{1/2}^{(\hat{g})}| \to 0$, a.s. Consequently, the (unconditional) limiting distribution of $\hat{L}_{1/2}^{(\hat{g})}$ is the same as that of $\hat{L}^{(\hat{g})}$ or, \[ \frac{ \sqrt{N_{test}}\left( \hat{L}_{1/2}^{(\hat{g})} - 1/2 \right)}{\sqrt{ 1/4 }} \to N(0,1), \] under $H_0$. In particular, the asymptotic variance of $\hat{L}_{1/2}^{(\hat{g})}$ under the null is the variance of $\hat{L}^{(\hat{g})}$ and thus one would expect the two tests to behave roughly the same for a large sample size, in the case of $\pi=1/2$. 
However, as we demonstrate in Section \ref{metrics}, focusing on an equally weighted in-class loss, instead of the overall loss $\hat{L}^{(\hat{g})}$, can be beneficial when $\pi \neq 1/2$. \begin{algorithm} \caption{$\text{BinomialTest} \gets \text{function}(Z,\ell,...)$}\label{RFTestalg1} \begin{algorithmic}[1] \Require $\mathbf{Z} \in \mathbb{R}^{N \times d}$, $\ell \in \{0,1\}^N$ \State $D_{N_{train}} \gets (\ell_i, \mathbf{Z}_i)_{i=1}^{N_{train}}$\Comment{random separation of training data} \State Training of a classifier $\hat{g}(.)$ on $D_{N_{train}}$ \State $err_0 \gets \frac{1}{n_{0,test}} \sum_{i=N_{train} + 1}^{N} \ensuremath{{\mathbb I}}{ \{ \ell_i = 0 \} } \ensuremath{{\mathbb I}}{ \{ \hat{g}(\mathbf{Z}_i) \neq 0 \} } $ \State $err_1 \gets \frac{1}{n_{1,test}} \sum_{i=N_{train} + 1}^{N} \ensuremath{{\mathbb I}}{ \{ \ell_i = 1 \} } \ensuremath{{\mathbb I}}{ \{ \hat{g}(\mathbf{Z}_i) \neq 1 \} } $ \State $err_{1/2} \gets \frac{1}{2} err_0 + \frac{1}{2} err_1 $\Comment{calculating the out-of-sample classification error} \State $sig \gets 1/2 \sqrt{err_0 (1-err_0)/n_{0,test} + err_1(1-err_1)/n_{1,test} }$ \If{$sig > 0$} \State $pvalue \gets \Phi\left( \frac{err_{1/2} - 1/2 }{sig} \right)$ \ElsIf{$sig == 0$} \State $pvalue \gets \ensuremath{{\mathbb I}}\{ err_{1/2} - 1/2 > 0 \}$ \EndIf \State \textbf{return} $pvalue$ \end{algorithmic} \end{algorithm} Naturally, the split in training and test set is not ideal. For finite sample sizes, one would like to have as many (test) samples as possible to detect differences. At the same time, it would be preferable to have the classifier trained on many data points. This in fact resembles a bias-variance trade-off, similar to what was described in \cite{facebookguys}: Let $g_{1/2}^*$ be the Bayes classifier defined in Section \ref{metrics}. 
For $\pi=1/2$, there is a trade-off between the closeness of $L^{(\hat{g})}$ to $L^{(g_{1/2}^{*})}$, which may be achieved through a large training set, and the closeness of $\hat{L}^{(\hat{g})}$ to $L^{(\hat{g})}$, which is generally only attained with large test sets. \subsection{Out-of-bag test} \label{finaltest} For the purpose of overcoming the arbitrary split in training and testing, Random Forest delivers an interesting tool: the OOB error introduced in \cite{Breiman2001}. Since each tree is built on a bootstrapped sample taken from $D_{N}$, approximately a third of the trees will not use the $i$th observation $(\ell_i,\mathbf{Z}_i)$. Thus we may use this ensemble of trees not containing observation $i$ to obtain an estimate of the out-of-sample error for $i$. We slightly generalize this here, in assuming we have an ensemble learner $g$: That is, we assume to have iid copies $\nu_1, \ldots, \nu_B$ of a random element $\nu$, such that each $\hat{g}_{\nu_{b}}(\mathbf{Z}):=g(\mathbf{Z}, D_{N_{train}}, \nu_b)$ is a different classifier. We then consider the average \begin{align} \hat{g}(\mathbf{Z}):=\frac{1}{B} \sum_{b=1}^B \hat{g}_{\nu_{b}}(\mathbf{Z}). \end{align} For $B \to \infty$, this is (a.s.) $\hat{g}(\mathbf{Z})=\ensuremath{{\mathbb E}}_{\nu}[\hat{g}_{\nu}(\mathbf{Z})]$. For Random Forest, $\nu$ usually represents the bootstrap sampling of observations and the sampling of variables to consider at each split point for a given tree. Let as before, $n_0:=\sum_{i=1}^{N} \ensuremath{{\mathbb I}}{\{ \ell_i=0 \} }$ and $n_1:= \sum_{i=1}^{N} \ensuremath{{\mathbb I}}{\{ \ell_i=1 \} }$, with $n_0 \geq 1$, $n_1 \geq 1$. We assume in the following that each $\hat{g}_{\nu_{b}}(\mathbf{Z})$ uses a bootstrapped sample from the original data, as Random Forest does. 
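The claim that roughly a third of the trees are out-of-bag for a given observation corresponds to the bootstrap exclusion rate $(1-1/N)^N \approx e^{-1} \approx 0.368$; a quick empirical sketch of this (illustrative, with arbitrary $N$ and $B$):

```python
import random

rng = random.Random(0)
N, B = 200, 2000  # N observations, B bootstrap samples ("trees")
oob = 0
for _ in range(B):
    # Indices appearing in one bootstrap sample of size N (with replacement).
    boot = {rng.randrange(N) for _ in range(N)}
    if 0 not in boot:  # observation 0 is out-of-bag for this tree
        oob += 1
frac = oob / B

# The fraction should be close to exp(-1) ~ 0.368.
assert 0.3 < frac < 0.45
```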
The class-wise OOB error of such an ensemble of learners trained on $N$ observations is defined as \begin{align*} \mathcal{E}^{oob}_0 &= \frac{1}{n_0} \sum_{i=1}^{N} \ensuremath{{\mathbb I}}{\{ \ell_i=0 \} } \ensuremath{{\mathbb I}}{ \{\hat{g}_{-i} (\mathbf{Z}_i) \neq 0 \}},\\ \mathcal{E}^{oob}_1 &= \frac{1}{n_1} \sum_{i=1}^{N} \ensuremath{{\mathbb I}}{ \{ \ell_i=1 \}} \ensuremath{{\mathbb I}}{ \{\hat{g}_{-i} (\mathbf{Z}_i) \neq 1 \}},\\ \mathcal{E}^{oob}_{p} &= (1-p) \mathcal{E}^{oob}_0 + p \mathcal{E}^{oob}_1, \end{align*} where $\hat{g}_{-i}$, represents the ensemble of learners not containing the $i^{\text{th}}$ observation for training. Unfortunately, the test statistic $\mathcal{E}^{oob}_{1/2}$ is difficult to handle; due to the complex dependency structure between the elements of the sum, it is not clear what the (asymptotic) distribution under the null is. For theoretical purposes, we consider in Section \ref{U-stats-Test} a solution based on the concept of U-statistics. Here, we recommend using the OOB error together with a permutation test. See e.g., \cite{permutationtests} or \cite{DPLBpublished}, who use it in conjunction with the out-of-sample error evaluated on a test set: We first calculate the class-wise OOB errors $\mathcal{E}^{oob}_0$, $\mathcal{E}^{oob}_1$ and then reshuffle the labels $K$ times to obtain $K$ permutations, $\sigma_{1}, \ldots, \sigma_{K}$ say. For each of these new datasets $\left(\mathbf{Z}_i , \ell_{\sigma_{k}(i)} \right)_{i=1}^{N}$, $k \in \{1 , \ldots, K \}$, we calculate the OOB errors \[ \mathcal{E}^{oob, k}_j:=\frac{1}{n_j} \sum_{i=1}^{N} \ensuremath{{\mathbb I}}{\{ \ell_{\sigma_{k}(i)}=j \} } \ensuremath{{\mathbb I}}{ \{\hat{g}_{-i} (\mathbf{Z}_i) \neq \ell_{\sigma_{k}(i)} \}}, \] for $j \in \{0,1 \}$. 
Under $H_0$, $(\ell_{1}, \ldots, \ell_{N})$ and $(\mathbf{Z}_{1}, \ldots, \mathbf{Z}_{N})$ are independent and each $\mathcal{E}^{oob, k}_{1/2}$ is simply an iid draw from the distribution $F$ of the random variable $\mathcal{E}^{oob}_{1/2}|(\mathbf{Z}_{1}, \ldots, \mathbf{Z}_{N})$. As such we can accurately approximate the $\alpha$ quantile $F^{-1}(\alpha)$ of said distribution by performing a large number of permutations and use the decision rule \begin{equation}\label{oobdecision} \delta_{oob}(D_{N})=\ensuremath{{\mathbb I}}\left \{ \mathcal{E}^{oob}_{1/2} \leq F^{-1}(\alpha)\right \}. \end{equation} Thus, as in the decision in Equation \eqref{Binomialtest}, the rejection region depends on the data at hand. Nonetheless, the level will be conserved, as proven e.g. in \cite[Theorem 1]{Hemerik2018}. Heuristically, this procedure will have power under the alternative, as in this case there is some dependence between $(\ell_{1}, \ldots, \ell_{N})$ and $(\mathbf{Z}_{1}, \ldots, \mathbf{Z}_{N})$, formed by the difference in the distribution of the $\mathbf{Z}_i$. The OOB error $\mathcal{E}^{oob}_{1/2}$ will thus be different from the ones observed under permutations. The whole procedure is described in Algorithm \ref{permutationalg}. We name this test ``hypoRF''. 
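The permutation scheme can be sketched generically: assuming the OOB predictions $\hat{g}_{-i}(\mathbf{Z}_i)$ have already been collected in a vector, the labels are reshuffled and the class-wise error recomputed. The function names below are illustrative, and the sketch uses the standard plus-one permutation p-value:

```python
import random

def classwise_oob_error(preds, labels):
    """Equally weighted class-wise error E^oob_{1/2}, given fixed OOB predictions."""
    errs = []
    for j in (0, 1):
        idx = [i for i, l in enumerate(labels) if l == j]
        errs.append(sum(preds[i] != j for i in idx) / len(idx))
    return 0.5 * (errs[0] + errs[1])

def permutation_pvalue(preds, labels, K=500, seed=0):
    """Share of K label reshufflings whose error is at most the observed one
    (with plus-one correction), small values being evidence against H_0."""
    rng = random.Random(seed)
    obs = classwise_oob_error(preds, labels)
    perm = list(labels)
    count = 0
    for _ in range(K):
        rng.shuffle(perm)  # uniform reshuffling of the labels
        if classwise_oob_error(preds, perm) <= obs:
            count += 1
    return (count + 1) / (K + 1)
```

A classifier whose OOB predictions match the labels closely yields a small p-value, while predictions unrelated to the labels behave like the permuted ones.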
\begin{algorithm} \setstretch{1.15} \caption{$\text{hypoRF} \gets \text{function}(\mathbf{Z},K,...)$}\label{permutationalg} \begin{algorithmic}[1] \Require $\mathbf{Z} \in \mathbb{R}^{N \times d}$, $\ell \in \{0,1\}^N, K$ \State $D_{N} \gets (\ell_i, \mathbf{Z}_i)_{i=1}^{N}$ \State $n_{j} \gets \sum_{i=1}^{N} \ensuremath{{\mathbb I}} \{ \ell_{i} =j \}$ \State Training of an ensemble learner $\hat{g}(.)$ on $D_{N}$ \State $OOB_j \gets \frac{1}{n_j} \sum_{i=1}^{N} \ensuremath{{\mathbb I}}{ \{\hat{g}_{-i}(\mathbf{Z}_i)\neq j \}} \ensuremath{{\mathbb I}} \{ \ell_i=j \} $\Comment{calculating the OOB-error for $j \in \{0,1\}$} \State $OOB_{1/2} \gets 1/2( OOB_0 + OOB_1 )$ \For{\texttt{k in 1:K}} \State $D_{N}^k \gets \left(\ell_{\sigma_{k}(i)}, \mathbf{Z}_i \right)_{i=1}^{N}$\Comment{reshuffle the label} \State $OOB^k_j \gets \frac{1}{n_{j}} \sum_{i=1}^{N} \ensuremath{{\mathbb I}}{ \{\hat{g}_{-i}(\mathbf{Z}_i) \neq j \}} \ensuremath{{\mathbb I}} \{ \ell_{\sigma_{k}(i) } =j \}$ \State $OOB^k_{1/2} \gets 1/2( OOB_0^k + OOB_1^k )$\Comment{calculating the OOB-error} \EndFor \State $mean \gets \frac{1}{K}\sum_{k=1}^K OOB^k_{1/2}$ \State $sig \gets \sqrt{\frac{1}{K-1}\sum_{k=1}^K (OOB^k_{1/2} - mean)^2}$ \If{$sig > 0$} \State $pvalue \gets \frac{1}{K+1}\left(\sum_{k=1}^K \ensuremath{{\mathbb I}}\{OOB^k_{1/2}<OOB_{1/2}\}+1 \right)$ \ElsIf{$sig == 0$} \State $pvalue \gets \ensuremath{{\mathbb I}}\{ OOB_{1/2} - mean > 0 \}$ \EndIf \State \textbf{return} $pvalue$ \end{algorithmic} \end{algorithm} \subsection{What classifier to use} \label{metrics} The foregoing tests are valid for any classifier $g: \ensuremath{{\mathcal X}} \to \{0,1 \}$. 
In practice, most classifiers try to approximate the \emph{Bayes} classifier: Let $p$, $q$ be the densities of $P$, $Q$ and define \begin{equation} \eta(\mathbf{z}) := \ensuremath{{\mathbb E}}[\ell | \mathbf{z} ] = \frac{\pi q(\mathbf{z}) }{ \pi q(\mathbf{z}) + (1-\pi) p(\mathbf{z}) }. \end{equation} Then the Bayes classifier is given as $g_{1/2}^{*}(\mathbf{Z})=\ensuremath{{\mathbb I}}{\{ \eta(\mathbf{Z}) > 1/2 \} }$, see e.g., \cite{ptpr}. It is the classifier with minimal classification error, called the Bayes error $L_{\pi}^{(g_{1/2}^{*})}= \ensuremath{{\mathbb P}}(g_{1/2}^{*}(\mathbf{Z}) \neq \ell )$. Under $H_0$, this Bayes error will be $\min(\pi,1-\pi)$. An interesting question is whether $g_{1/2}^{*}$ leads to a consistent test in our framework. We first define consistency for a \emph{hypothesis test}: Let $\Theta$ be the space of tuples of all distributions on $\ensuremath{{\mathbb R}}^d$, $\theta=(P,Q) \in \Theta$, $\Theta_0=\{(P,Q): P=Q \}$, $\Theta_1=\{(P,Q): P \neq Q \}$. Let $\delta: \ensuremath{{\mathcal X}}^N \to \{0,1 \}$ be a decision rule and $\phi(\theta):= \ensuremath{{\mathbb E}}_{\theta}[\delta]$. Following e.g., \cite{vaart_1998} we call a test consistent at level $\alpha$ (for $\Theta_1$), if $\limsup_{N} \sup_{\theta \in \Theta_0} \phi(\theta) \leq \alpha$ and for any $\theta \in \Theta_1$, $\liminf_{N} \phi(\theta)=1$. For theoretical purposes, we extend this definition also to $\delta$ that depend on the unknown $\theta$ itself, for instance via the densities of $P$ and $Q$ respectively. Under the assumption of equal class probabilities $\pi=1/2$, the Bayes error has the property that \begin{equation} L^{(g_{1/2}^{*})}=1/2 (1-TV(P, Q)), \label{eq:TVanderror} \end{equation} where $TV(P, Q)$ is the total variation distance between $P$ and $Q$: $TV(P, Q)= \sup_{A} |P(A) - Q(A)|$, with the supremum taken over all Borel sets on $\ensuremath{{\mathbb R}}^d$. 
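Identity \eqref{eq:TVanderror} can be verified in closed form for the pair $N(0,1)$, $N(\mu,1)$: with $TV$ normalized as $\sup_{A}|P(A)-Q(A)|$, one has $TV=\Phi(\mu/2)-\Phi(-\mu/2)$ and Bayes error $\Phi(-\mu/2)$, the optimal cutoff being the midpoint $\mu/2$. A quick numerical check:

```python
import math

# Standard normal CDF via the error function.
Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for mu in (0.5, 1.0, 2.0):
    tv = Phi(mu / 2.0) - Phi(-mu / 2.0)  # TV(N(0,1), N(mu,1)), sup-normalization
    bayes_error = Phi(-mu / 2.0)         # error of g*_{1/2} when pi = 1/2
    assert math.isclose(bayes_error, 0.5 * (1.0 - tv))
```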
As $TV$ defines a metric on the space of all probability measures on $\ensuremath{{\mathbb R}}^d$, it holds that $P=Q \iff TV(P, Q)=0$. Consequently, as soon as there is any difference in $P$ and $Q$, $TV(P, Q) > 0$ and $L^{(g_{1/2}^{*})} < 1/2$. Thus we would expect a test based on $g_{1/2}^{*}$ to be consistent. More generally, \cite{DPLBpublished} prove that if the classifier $\hat{g}$ is such that \begin{align}\label{Ramdasassumption} \hat{L}^{(\hat{g})}_0 = L_{0} + o_{\ensuremath{{\mathbb P}}}(1), \ \hat{L}^{(\hat{g})}_1 = L_{1} + o_{\ensuremath{{\mathbb P}}}(1), \text{ for some $L_0,L_1 \in (0,1)$ with $L_0 + L_1 = 1-\varepsilon$, for some $\varepsilon > 0$}, \end{align} then the decision rule in \eqref{Binomialtest} is consistent. Unfortunately, this assumption does not hold for $g_{1/2}^*$ if $\pi \neq 1/2$. In this case, simple counterexamples show that even when $P,Q$ are different, it might still be that $L^{(g_{1/2}^{*})}_0 + L^{(g_{1/2}^{*})}_1=1$. \begin{lemma} \label{consistencylemma0} Take $\ensuremath{{\mathcal X}} \subset \ensuremath{{\mathbb R}}$ and $\pi \neq 1/2$. Then no decision rule of the form $\delta(D_N)=\delta( g^*_{1/2}(D_N) )$ is consistent. \end{lemma} Thus even though we allow the classifier $g^*_{1/2}$ to depend for each $(P,Q) \in \Theta_1$ on the densities $p$ of $P$ and $q$ of $Q$, we are not able to construct a consistent test. The problem appears to be that the Bayes classifier minimizes the \emph{overall} classification loss, so that condition \eqref{Ramdasassumption} cannot hold. In doing so, it focuses too much on the overrepresented class. Indeed, we might define the following alternative classifier: For given $P$, $Q$ let $g_{\pi}^*$ be the classifier that minimizes the error $L_{1/2}^{g}$, i.e. a classifier that solves the problem \begin{equation}\label{newproblem} \argmin \{ L_{1/2}^{g}: g: \ensuremath{{\mathcal X}} \to \{0,1 \} \text{ a classifier} \}. 
\end{equation} It turns out that a slight variation of the Bayes classifier solves this problem: \begin{lemma} \label{consistencylemma2} The classifier \begin{equation}\label{gpi} g_{\pi}^*(\mathbf{z}) = \ensuremath{{\mathbb I}} \left\{ \eta(\mathbf{z}) > \pi \right\}, \end{equation} is a solution to \eqref{newproblem}. Moreover, it holds that \begin{equation}\label{eq:TVanderror2} 1 - TV(P,Q) = L_0^{g_{\pi}^*} + L_1^{g_{\pi}^*}, \end{equation} for any $\pi \in (0,1)$. \end{lemma} Thus, for this classifier, a generalization of \eqref{eq:TVanderror} holds for any $\pi \in (0,1)$. In particular, it now yields a consistent test: \begin{corollary}\label{consistencycor} The decision rule $\delta_B(g_{\pi}^*(D_N))$ in \eqref{Binomialtest} is consistent for any $\pi \in (0,1)$. \end{corollary} Since this theoretical classifier needs no training, the two testing approaches coincide with an evaluation of the classifier loss on the full data $D_N$. While this analysis with theoretical classifiers is by no means sufficient for the much more complicated case of a classifier $\hat{g}$ trained on data, it suggests that adapting the ``cutoff'' of a given classifier might mitigate such consistency issues. Indeed, we use the classifier \[ \hat{g}(\mathbf{z})=\ensuremath{{\mathbb I}}\{\hat{\eta}(\mathbf{z}) > \hat{\pi} \}, \] where $\hat{\pi}$ is an estimate of the prior probability based on the \emph{training} data. As long as the latter is used (as opposed to the test data), the tests above remain valid. \section{Tests based on U-Statistics} \label{U-stats-Test} To avoid splitting into training and test set, we introduced an OOB error-based test in Section \ref{finaltest}. In this section, we discuss a potential framework for analysing a version of such a test theoretically. For $N_{train} \leq N$, let again $n_{0, train}= \sum_{i=1}^{N_{train}} \ensuremath{{\mathbb I}}\{\ell_i=0\}$ and $n_{1, train}= \sum_{i=1}^{N_{train}} \ensuremath{{\mathbb I}}\{\ell_i=1\}$.
Let $D_{N_{train}}^{-i}$ denote the data set without observation $(\ell_i, \mathbf{Z}_i)$. Then we consider the class-wise OOB error based on $N_{train}$ observations: \begin{align}\label{overalloob} h_{N_{train}}((\ell_1, \mathbf{Z}_{1}), \ldots, (\ell_{N_{train}}, \mathbf{Z}_{N_{train}}))&:= \frac{1}{2} \left( \frac{1}{n_{0, train}} \sum_{i: \ell_i=0} \ensuremath{{\mathbb I}}{ \{\hat{g}_{-i} (\mathbf{Z}_i) = 1 \}} + \frac{1}{n_{1, train}} \sum_{i: \ell_i=1} \ensuremath{{\mathbb I}}{ \{\hat{g}_{-i} (\mathbf{Z}_i) = 0 \}} \right) \nonumber \\ &=\frac{1}{2} \sum_{i=1}^{N_{train}} \varepsilon_{i}^{oob}, \end{align} where \[ \varepsilon_{i}^{oob}:=\ensuremath{{\mathbb I}}{ \{\hat{g}_{-i} (\mathbf{Z}_i) \neq \ell_i \}} \left( \frac{1-\ell_{i}}{n_{0, train}} + \frac{\ell_{i}}{n_{1, train}} \right), \] for $\hat{g}_{-i}$ trained on $D_{N_{train}}^{-i}$. Also recall that $L_{j}^{(\hat{g})}=\ensuremath{{\mathbb P}}(\hat{g}(\mathbf{Z}) = j| D_{N_{train}}, \ell \neq j)$ for $j \in \{0,1\}$ and $L_{1/2}^{(\hat{g})}=1/2(L_{0}^{(\hat{g})} + L_{1}^{(\hat{g})} )$. We assume that the number of classifiers in the ensemble tends to infinity, $B \to \infty$, so that $\hat{g}(\mathbf{Z}) \to \ensuremath{{\mathbb E}}_{\nu}[\hat{g}_{\nu}(\mathbf{Z})]$, almost surely. We refer to the function $h_{N_{train}}$ as the kernel of size $N_{train}$ and define the incomplete U-statistic, \begin{equation} \label{estimatedU} \hat{U}_{N,K}:=\frac{1}{K} \sum h_{N_{train}}((\mathbf{Z}_{i_1},\ell_{i_1}),\ldots, (\mathbf{Z}_{i_{N_{train}}},\ell_{i_{N_{train}}})), \end{equation} where the sum is taken over $K$ randomly chosen subsets of size $N_{train}$; see e.g., \cite{Ustat90}, \cite{epub17654}, \cite{RFuncertainty}, \cite{peng2019asymptotic}. We assume that $K$ goes to infinity as $N$ goes to infinity. Since we are only considering learners for which the $i$th sample point is not included, we may simply see $\hat{g}_{-i}$ as an infinite ensemble built on the dataset $D_{N_{train}}^{-i}$ only.
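The class-wise OOB error \eqref{overalloob} can be sketched in a few lines of pure Python. The following toy example replaces the Random Forest by a hypothetical bagged ensemble of nearest-mean learners on 1-D data; all data, learners, and names are illustrative, not the method of the paper:

```python
import random

random.seed(0)

# Hypothetical 1-D toy data: class 0 ~ N(0, 1), class 1 ~ N(1.5, 1).
data = [(random.gauss(0.0, 1.0), 0) for _ in range(100)] + \
       [(random.gauss(1.5, 1.0), 1) for _ in range(100)]
B = 200  # number of bagged base learners

# Each base learner classifies by the nearer class mean of its bootstrap sample.
def fit_learner(sample):
    c0 = [z for z, l in sample if l == 0]
    c1 = [z for z, l in sample if l == 1]
    mu0, mu1 = sum(c0) / len(c0), sum(c1) / len(c1)
    return lambda z: 0 if abs(z - mu0) < abs(z - mu1) else 1

bags = [[random.randrange(len(data)) for _ in range(len(data))] for _ in range(B)]
learners = [fit_learner([data[j] for j in bag]) for bag in bags]
bag_sets = [set(bag) for bag in bags]

# Class-wise OOB error h_{N_train}: each point is classified by the majority
# vote of the learners whose bootstrap sample did not contain it.
n0 = sum(1 for _, l in data if l == 0)
n1 = len(data) - n0
h = 0.0
for i, (z, l) in enumerate(data):
    votes = [g(z) for b, g in enumerate(learners) if i not in bag_sets[b]]
    pred = 1 if sum(votes) > len(votes) / 2 else 0
    if pred != l:
        h += 0.5 / (n0 if l == 0 else n1)
print("class-wise OOB error:", h)
```

With class means separated by $1.5$ standard deviations, the resulting OOB error lands near the Bayes error $1-\Phi(0.75)\approx 0.23$, i.e. well below $1/2$.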
Consequently, with the assumption of an infinite number of learners, the OOB error is ``almost'' unbiased for $\ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g})}]$. \begin{lemma}\label{expectationlemma} $\ensuremath{{\mathbb E}}[h_{N_{train}}((\ell_1, \mathbf{Z}_{1}), \ldots, (\ell_{N_{train}}, \mathbf{Z}_{N_{train}}))]=\ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-i})}]$. \end{lemma} Here, $\ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-i})}]$ refers to the expected value of the error based on the classifier trained on $N_{train}-1$ data points. As such, it does not depend on $i$. This is essentially the same result as in \cite{Luntz} in the case of the leave-one-out error. We are now able to show that $h_{N_{train}}$ in \eqref{overalloob} is a \emph{symmetric} function, unbiased for $\ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-i})}]$: \begin{lemma}\label{hntrainlemma} $h_{N_{train}}$ is a valid kernel for the expectation $\ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-i})}]$. \end{lemma} Combining arguments from \cite{RFuncertainty} and \cite{wager2017estimation}, we obtain the conditions for asymptotic normality listed in Theorem \ref{asymptoticnormalxx}. Though both papers consider the asymptotic distribution of a Random Forest prediction at a fixed $\mathbf{z}$, the $U$-statistics theory they develop can be used in our context as well. We also refer to \cite{peng2019asymptotic} and \cite{RomanoUstats}, who already refined the results of \cite{RFuncertainty} for asymptotic normality of $U$-statistics with growing kernel size. \cite{peng2019asymptotic}, in particular, derived a result similar to Theorem \ref{asymptoticnormalxx} independently of us.
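The incomplete U-statistic \eqref{estimatedU} itself is straightforward to compute: average the kernel over $K$ random subsets of size $N_{train}$. A minimal sketch follows, with the sample mean as a hypothetical stand-in for the OOB kernel $h_{N_{train}}$ (so that the target expectation is $\ensuremath{{\mathbb E}}[Z]=0$); all names are illustrative:

```python
import random

random.seed(1)

N, N_train, K = 6000, 150, 17  # e.g. K of order log(N), so K * N_train / N is small

data = [random.gauss(0, 1) for _ in range(N)]

def h(subset):
    # Placeholder symmetric kernel of size N_train (stand-in for h_{N_train}).
    return sum(subset) / len(subset)

# Incomplete U-statistic: average of the kernel over K random size-N_train subsets.
U_hat = sum(h(random.sample(data, N_train)) for _ in range(K)) / K
print("U_hat:", U_hat)
```

Since the kernel is an average of $N_{train}$ i.i.d. terms and only $K$ subsets are used, the variability of `U_hat` around $0$ is of order $1/\sqrt{K N_{train}}$, matching the role of $\zeta_{N_{train},N_{train}}/K$ in the normalization of \eqref{normality2}.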
For random variables $\xi_1, \xi_2$, let $\ensuremath{{\mathbb V}}(\xi_1)$ and $\mbox{Cov}(\xi_1,\xi_2)$ denote the variance and covariance, respectively, and define, for $c \in \{1,\ldots,N_{train} \}$, \begin{align} \zeta_{c,N_{train}}&=\ensuremath{{\mathbb V}}(\ensuremath{{\mathbb E}}[h_{N_{train}}((\mathbf{Z}_{1},\ell_1),\ldots, (\mathbf{Z}_{{N_{train}}},\ell_{{N_{train}}}))|(\mathbf{Z}_{1},\ell_1),\ldots, (\mathbf{Z}_{c},\ell_c)]). \end{align} In particular, $\zeta_{1,N_{train}}$ and $\zeta_{N_{train},N_{train}}$ will be of special interest. \cite{Ustat90} provides an immediate important result: \begin{lemma} \label{zeta1result} $N_{train} \zeta_{1,N_{train}} \leq \zeta_{N_{train},N_{train}}$. \end{lemma} Lemma \ref{zeta1result}, which is actually true for any $U$-statistic, shows that, whenever the second moment of the kernel $h_{N_{train}}$ exists, $\zeta_{1,N_{train}}=O(N_{train}^{-1})$. We then obtain: \begin{theorem}\label{asymptoticnormalxx} Assume that, as $N \to \infty$, $N_{train}=N_{train}(N) \to \infty$ and $K=K(N) \to \infty$ with \begin{align} \lim_{N} \frac{K N_{train}^2}{N} \frac{\zeta_{1,N_{train}}}{{\zeta_{N_{train},N_{train}}}} &= 0, \label{zetacond}\\ \lim_{N} \frac{\sqrt{K} N_{train}}{N}&= 0. \label{Kocond} \end{align} Then, \begin{align} \label{normality2} \frac{\sqrt{K}(\hat{U}_{N,K} - \ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-1})}] ) }{\sqrt{ \zeta_{N_{train},N_{train}}}} \stackrel{D}{\to} N(0,1). \end{align} \end{theorem} Condition \eqref{zetacond} is hard to control in general, but with Lemma \ref{zeta1result}, it can be seen that choosing \begin{align}\label{Kocondd} \frac{K N_{train}}{N} \to 0, \end{align} is sufficient for both \eqref{zetacond} and \eqref{Kocond}. If $K=\log(N_{train})^{1+d}$, this corresponds to the condition $\log(N_{train})^{1+d} N_{train}/N \to 0$ required by \cite{wager2017estimation}.
In the context of Random Forests, Theorem \ref{asymptoticnormalxx} essentially proves that the OOB error of a bounded prediction function is asymptotically normal if the number of trees is ``high'' and if $K$ forests are trained on subsamples such that \eqref{zetacond} and \eqref{Kocond} hold. Since the OOB error with infinitely many learners is essentially the leave-one-out error in the context of cross-validation, this also means that a test of the cross-validation error could be derived under much weaker assumptions than, for instance, in \cite{epub17654}. The key reason for the generality of the result, as was also realized by \cite{peng2019asymptotic}, is that $K$ should be chosen small relative to $N$. This introduces additional variance, such that conditions on $\zeta_{1,N_{train}}$ usually required in such results, see e.g., \cite{RomanoUstats}, can be replaced by \eqref{Kocondd}. This has an additional computational advantage, but it may come at the price of reduced power, as will be seen in Corollary \ref{asymptoticpower2}. \cite[Section 3]{RFuncertainty} also provide a consistent estimate of $\zeta_{c,N_{train}}$, denoted $\hat{\zeta}_{c,N_{train}}$, for any $c \in \{ 1,\ldots, N_{train}\}$. Like its population counterpart, this estimator is bounded by 1 for all $c$ and $N_{train}$ in our case. Thus, if \eqref{zetacond} and \eqref{Kocond} hold for a classifier, the decision rule \begin{equation}\label{test3} \delta(\hat{g}(D_{N}))=\ensuremath{{\mathbb I}} \left \{\frac{\sqrt{K} (\hat{U}_{N,K} - 1/2)}{\sqrt{\hat{\zeta}_{N_{train},N_{train}}}} < \Phi^{-1}(\alpha) \right\}, \end{equation} constitutes a valid test.
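The decision rule \eqref{test3} amounts to a one-sided Gaussian test on the standardized incomplete U-statistic. A minimal sketch, with hypothetical numeric inputs in place of the estimates $\hat{U}_{N,K}$ and $\hat{\zeta}_{N_{train},N_{train}}$:

```python
from math import sqrt
from statistics import NormalDist

def delta(U_hat, zeta_hat, K, alpha=0.05):
    # Reject H0 (return 1) when the standardized statistic falls below
    # the alpha-quantile Phi^{-1}(alpha) of the standard normal.
    z = sqrt(K) * (U_hat - 0.5) / sqrt(zeta_hat)
    return int(z < NormalDist().inv_cdf(alpha))

# Hypothetical values: an OOB error estimate clearly below 1/2 triggers rejection,
# one exactly at 1/2 does not.
print(delta(U_hat=0.40, zeta_hat=0.01, K=17))
print(delta(U_hat=0.50, zeta_hat=0.01, K=17))
```

With $\hat{U}_{N,K}=0.40$, $\hat{\zeta}=0.01$ and $K=17$ the standardized statistic is $\sqrt{17}\cdot(-0.1)/0.1 \approx -4.12$, far below $\Phi^{-1}(0.05)\approx -1.64$.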
To illustrate Theorem \ref{asymptoticnormalxx}, Figure \ref{asymptoticnormalityillustrationHA} displays the simulated distribution of \begin{align}\label{teststatisticHA} Z=\frac{\sqrt{K} (\hat{U}_{N,K} - \ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-i})}] )}{\sqrt{\hat{\zeta}_{N_{train},N_{train}}}}, \end{align} for $P=N(\boldsymbol{\mu}_1, I_{10 \times 10})$ and $Q=N(\boldsymbol{\mu}_2, I_{10 \times 10})$, with $\boldsymbol{\mu}_1=\mathbf{0}$ and $\boldsymbol{\mu}_2=0.4/\sqrt{10} \cdot \mathbf{1}$. We simulated $S=500$ replications using $N=6000$, $K=\ceil{2\cdot\log(N)}=17$ and $N_{train}=\ceil{N/(K\cdot\log(\log(N)))}=163$. \begin{figure} \caption{Illustration of the asymptotic normality of the OOB error based test-statistic for the Random Forest classifier. In this example, $P=N(\mathbf{0}, I_{10 \times 10})$ and $Q=N(0.4/\sqrt{10} \cdot \mathbf{1}, I_{10 \times 10})$.} \label{asymptoticnormalityillustrationHA} \end{figure} With this at hand, we can construct another test: \begin{corollary} \label{asymptoticpower2} Assume the conditions of Theorem \ref{asymptoticnormalxx} hold true and that $\hat{\zeta}_{N_{train},N_{train}}/\zeta_{N_{train},N_{train}} \stackrel{p}{\to} 1$. Then the decision rule in \eqref{test3} asymptotically maintains the level and has approximate power \begin{align}\label{powerexpression} \Phi \left( \Phi^{-1}(\alpha) + \sqrt{\frac{K}{\zeta_{N_{train},N_{train}}}} (1/2 - \ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-i})}]) \right). \end{align} \end{corollary} The test thus has power tending to one as soon as \begin{align} \label{whatwewant2} \limsup_{N} \ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-i})}] < 1/2. \end{align} Condition \eqref{whatwewant2} mirrors condition (A9) in \cite{DPLBpublished}, in that it asks for a better-than-chance prediction in expectation.
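The approximate power \eqref{powerexpression} is easy to evaluate numerically. A minimal sketch; the plugged-in values for $\zeta_{N_{train},N_{train}}$ and $\ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-i})}]$ below are hypothetical, not taken from the simulation:

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()

def approx_power(alpha, K, zeta, expected_loss):
    # Power formula (powerexpression):
    #   Phi( Phi^{-1}(alpha) + sqrt(K / zeta) * (1/2 - E[L_{1/2}]) )
    return nd.cdf(nd.inv_cdf(alpha) + sqrt(K / zeta) * (0.5 - expected_loss))

# Hypothetical inputs: a classifier slightly better than chance.
print(approx_power(alpha=0.05, K=17, zeta=0.25, expected_loss=0.45))
```

The formula makes the trade-off explicit: power grows in $K$ and in the margin $1/2 - \ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-i})}]$, and shrinks as $\zeta_{N_{train},N_{train}}$ grows.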
Crucially, Corollary \ref{asymptoticpower2} also illustrates the downside of the weak assumptions used in Theorem \ref{asymptoticnormalxx}: the power depends on $\sqrt{K}$, as well as on the accuracy of the trained classifier through $\ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-i})}]$. Since our theory requires that $K$ is of small order compared to $N$, we lose power, at least theoretically. In practice, it appears from simulations with Random Forest that $\zeta_{N_{train}, N_{train}}$ decreases to zero and roughly behaves like $1/N_{train}$. From the asymptotic power expression above, it can be seen that this would offset the small order of $K$. Nonetheless, the test of Corollary \ref{asymptoticpower2} appears less powerful than the Binomial and hypoRF tests. In the example of Figure \ref{asymptoticnormalityillustrationHA}, plugging the estimate of $\ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-i})}]$ obtained from the 500 repetitions into \eqref{powerexpression} and averaging, we obtain an expected power of 0.63. The actual power, i.e., the fraction of rejected tests over the 500 repetitions, is 0.61. The Binomial test with Random Forest, on the other hand, reaches a power of 1. This illustrates that the test derived in this section still lags behind the test that uses sample splitting. Nonetheless, modern $U$-statistics theory provides powerful theoretical tools to construct OOB-error based tests with tractable asymptotic power.
\begin{tikzpicture}[x=1pt,y=1pt] \definecolor{fillColor}{RGB}{255,255,255} \path[use as bounding box,fill=fillColor,fill opacity=0.00] (0,0) rectangle (578.16,289.08); \begin{scope} \path[clip] ( 0.00, 0.00) rectangle (289.08,289.08); \definecolor{drawColor}{RGB}{0,0,0} \node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.00] at (156.54, 15.60) {Z}; \node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.00] at ( 10.80,150.54) {Density}; \end{scope} \begin{scope} \path[clip] ( 0.00, 0.00) rectangle (578.16,289.08); \definecolor{drawColor}{RGB}{0,0,0} \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 57.15, 61.20) -- (255.93, 61.20); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 57.15, 61.20) -- ( 57.15, 55.20); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (106.85, 61.20) -- (106.85, 55.20); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (156.54, 61.20) -- (156.54, 55.20); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (206.23, 61.20) -- (206.23, 55.20); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (255.93, 61.20) -- (255.93, 55.20); \node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.00] at ( 57.15, 39.60) {-4}; \node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.00] at (106.85, 39.60) {-2}; \node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.00] at (156.54, 39.60) {0}; \node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.00] at (206.23, 39.60) {2}; \node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.00] at (255.93, 39.60) {4}; \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.20, 67.82) -- ( 49.20,214.88); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.20, 67.82) -- ( 43.20, 67.82); \path[draw=drawColor,line width= 
0.4pt,line join=round,line cap=round] ( 49.20,104.58) -- ( 43.20,104.58); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.20,141.35) -- ( 43.20,141.35); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.20,178.11) -- ( 43.20,178.11); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.20,214.88) -- ( 43.20,214.88); \node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.00] at ( 34.80, 67.82) {0.0}; \node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.00] at ( 34.80,104.58) {0.1}; \node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.00] at ( 34.80,141.35) {0.2}; \node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.00] at ( 34.80,178.11) {0.3}; \node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.00] at ( 34.80,214.88) {0.4}; \end{scope} \begin{scope} \path[clip] ( 49.20, 61.20) rectangle (263.88,239.88); \definecolor{drawColor}{RGB}{0,0,0} \definecolor{fillColor}{RGB}{255,255,255} \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] ( 69.57, 67.82) rectangle ( 82.00, 70.76); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] ( 82.00, 67.82) rectangle ( 94.42, 78.11); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] ( 94.42, 67.82) rectangle (106.85, 82.52); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] (106.85, 67.82) rectangle (119.27,109.00); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] (119.27, 67.82) rectangle (131.69,138.41); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] (131.69, 67.82) rectangle (144.12,170.76); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] 
(144.12, 67.82) rectangle (156.54,185.47); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] (156.54, 67.82) rectangle (168.96,206.06); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] (168.96, 67.82) rectangle (181.39,166.35); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] (181.39, 67.82) rectangle (193.81,134.00); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] (193.81, 67.82) rectangle (206.23,109.00); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] (206.23, 67.82) rectangle (218.66, 88.41); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] (218.66, 67.82) rectangle (231.08, 76.64); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] (231.08, 67.82) rectangle (243.51, 69.29); \definecolor{drawColor}{RGB}{0,0,255} \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 57.15, 67.87) -- ( 59.64, 67.89) -- ( 62.12, 67.93) -- ( 64.61, 67.97) -- ( 67.09, 68.04) -- ( 69.57, 68.14) -- ( 72.06, 68.27) -- ( 74.54, 68.45) -- ( 77.03, 68.69) -- ( 79.51, 69.02) -- ( 82.00, 69.45) -- ( 84.48, 70.01) -- ( 86.97, 70.73) -- ( 89.45, 71.65) -- ( 91.94, 72.81) -- ( 94.42, 74.26) -- ( 96.91, 76.05) -- ( 99.39, 78.23) -- (101.88, 80.86) -- (104.36, 83.99) -- (106.85, 87.67) -- (109.33, 91.94) -- (111.82, 96.84) -- (114.30,102.40) -- (116.78,108.60) -- (119.27,115.44) -- (121.75,122.87) -- (124.24,130.82) -- (126.72,139.21) -- (129.21,147.91) -- (131.69,156.78) -- (134.18,165.65) -- (136.66,174.32) -- (139.15,182.62) -- (141.63,190.33) -- (144.12,197.26) -- (146.60,203.21) -- (149.09,208.04) -- (151.57,211.59) -- (154.06,213.76) -- (156.54,214.49) -- (159.02,213.76) -- (161.51,211.59) -- (163.99,208.04) -- (166.48,203.21) -- (168.96,197.26) -- (171.45,190.33) -- (173.93,182.62) -- (176.42,174.32) -- 
(178.90,165.65) -- (181.39,156.78) -- (183.87,147.91) -- (186.36,139.21) -- (188.84,130.82) -- (191.33,122.87) -- (193.81,115.44) -- (196.30,108.60) -- (198.78,102.40) -- (201.27, 96.84) -- (203.75, 91.94) -- (206.23, 87.67) -- (208.72, 83.99) -- (211.20, 80.86) -- (213.69, 78.23) -- (216.17, 76.05) -- (218.66, 74.26) -- (221.14, 72.81) -- (223.63, 71.65) -- (226.11, 70.73) -- (228.60, 70.01) -- (231.08, 69.45) -- (233.57, 69.02) -- (236.05, 68.69) -- (238.54, 68.45) -- (241.02, 68.27) -- (243.51, 68.14) -- (245.99, 68.04) -- (248.47, 67.97) -- (250.96, 67.93) -- (253.44, 67.89) -- (255.93, 67.87); \end{scope} \begin{scope} \path[clip] (338.28, 61.20) rectangle (552.96,239.88); \definecolor{drawColor}{RGB}{0,0,0} \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (435.37,139.86) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (428.48,132.00) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (482.93,186.28) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (478.00,180.15) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (437.56,141.95) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (437.89,142.54) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (405.30,112.15) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (428.66,132.08) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (462.58,166.63) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (426.97,130.51) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (483.90,186.94) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (431.38,136.37) circle ( 2.25); \path[draw=drawColor,line width= 
0.4pt,line join=round,line cap=round] (406.34,114.15) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (463.89,167.98) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (431.74,136.56) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (446.18,151.26) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (391.09, 97.56) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (422.17,126.80) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (410.73,116.37) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (444.25,148.46) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (388.94, 96.60) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (422.80,127.27) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (437.72,142.32) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (447.64,153.06) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (419.35,125.05) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (494.32,196.99) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (465.82,170.61) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (483.57,186.83) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (449.58,154.86) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (478.82,181.50) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (468.86,172.35) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] 
(433.66,138.06) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (412.42,118.09) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (409.54,115.58) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (470.35,173.18) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (476.18,178.83) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (487.39,190.61) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (492.85,196.51) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (446.99,151.72) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (533.99,228.54) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (448.61,153.59) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (403.09,109.36) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (464.85,168.40) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (478.54,181.32) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (466.81,171.39) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (455.36,159.08) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (427.92,131.60) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (460.39,163.29) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (436.22,141.00) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (457.75,160.94) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (413.50,119.12) circle ( 2.25); 
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (456.72,160.65) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (454.02,157.63) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (381.44, 84.70) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (436.55,141.26) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (476.95,179.20) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (474.45,177.15) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (420.24,125.74) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (475.93,178.82) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (401.49,107.35) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (440.36,144.41) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (431.03,135.87) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (371.96, 74.32) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (451.87,156.25) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (424.43,128.44) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (414.55,120.64) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (429.58,133.72) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (456.38,159.82) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (453.85,157.58) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (479.65,182.28) circle ( 2.25); \path[draw=drawColor,line width= 
0.4pt,line join=round,line cap=round] (452.03,156.33) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (428.10,131.90) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (435.03,139.83) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (466.41,170.93) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (462.95,167.02) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (472.57,175.92) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (511.02,214.45) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (466.21,170.90) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (449.42,154.54) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (409.24,115.52) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (416.06,121.33) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (399.32,105.67) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (448.45,153.32) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (460.93,163.98) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (396.92,104.47) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (501.56,204.69) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (442.15,145.98) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (441.33,145.15) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (426.78,130.47) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] 
(431.91,136.86) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (441.82,145.72) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (443.60,147.70) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (454.35,158.47) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (456.21,159.57) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (487.77,190.69) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (422.38,126.89) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (453.68,157.48) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (479.94,182.46) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (398.86,105.47) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (447.31,151.95) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (445.06,150.10) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (419.12,124.64) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (451.05,155.69) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (443.76,147.70) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (503.08,207.21) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (528.46,224.05) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (411.87,116.89) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (403.85,109.64) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (446.83,151.64) circle ( 2.25); 
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (393.63,100.70) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (418.89,124.59) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (467.62,171.80) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (457.41,160.83) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (496.45,199.28) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (508.66,213.43) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (513.81,215.48) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (437.39,141.67) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (472.35,175.41) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (481.70,184.49) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (448.28,153.32) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (490.59,192.46) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (420.46,125.82) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (439.04,143.45) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (463.70,167.39) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (391.76, 98.71) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (410.44,116.35) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (416.79,122.55) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (416.55,122.30) circle ( 2.25); \path[draw=drawColor,line width= 
0.4pt,line join=round,line cap=round] (436.89,141.58) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (468.23,172.13) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (421.54,126.19) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (430.85,135.79) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (495.90,197.81) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (458.27,161.51) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (471.22,173.75) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (382.58, 85.08) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (512.35,215.27) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (439.37,143.68) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (463.51,167.36) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (482.00,184.53) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (413.24,119.11) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (422.59,127.16) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (409.85,115.68) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (412.15,117.38) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (481.39,184.26) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (455.70,159.14) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (460.21,163.14) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] 
(421.32,126.09) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (451.21,155.96) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (452.36,156.66) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (374.02, 77.72) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (438.55,143.15) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (509.80,213.51) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (465.23,169.05) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (456.55,160.52) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (475.18,178.19) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (385.60, 89.75) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (420.68,125.83) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (386.49, 91.88) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (407.34,114.83) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (395.88,103.87) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (384.65, 86.08) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (489.34,191.58) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (390.40, 97.37) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (460.04,163.07) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (419.57,125.10) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (388.16, 95.80) circle ( 2.25); 
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (464.65,168.35) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (480.51,183.32) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (429.21,133.34) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (424.83,128.84) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (441.17,145.10) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (438.38,143.12) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (425.81,129.80) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (429.95,134.59) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (505.64,209.26) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (442.79,147.26) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (462.39,165.95) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (415.81,121.26) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (444.73,148.88) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (430.49,135.24) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (442.31,147.06) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (444.09,148.46) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (346.23, 67.82) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (503.90,207.29) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (504.75,207.38) circle ( 2.25); \path[draw=drawColor,line width= 
0.4pt,line join=round,line cap=round] (471.67,174.20) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (464.27,168.20) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (464.46,168.27) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (414.29,120.55) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (467.21,171.53) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (489.75,191.95) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (457.58,160.88) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (414.03,120.19) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (480.80,183.42) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (485.24,188.97) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (430.67,135.78) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (436.72,141.34) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (443.28,147.63) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (461.11,164.01) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (439.87,143.97) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (429.03,132.62) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (500.84,203.91) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (499.48,202.75) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (498.84,202.03) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] 
(455.19,158.93) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (442.96,147.28) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (472.80,176.20) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (458.10,161.40) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (447.48,152.30) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (417.50,123.00) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (491.03,193.89) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (369.54, 73.96) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (426.20,130.07) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (408.31,115.02) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (403.47,109.63) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (486.66,189.88) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (418.67,124.01) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (519.28,219.96) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (387.34, 91.92) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (469.70,172.67) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (475.68,178.75) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (389.68, 97.03) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (472.12,175.24) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (414.80,120.81) circle ( 2.25); 
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (453.52,157.07) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (445.38,150.47) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (434.35,139.16) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (426.01,129.90) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (434.52,139.25) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (418.44,123.94) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (417.27,122.90) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (425.62,129.69) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (474.94,177.86) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (491.47,195.85) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (439.54,143.72) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (449.09,154.44) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (444.41,148.68) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (460.57,163.61) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (431.20,135.94) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (482.62,185.63) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (417.74,123.18) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (470.78,173.37) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (436.05,140.91) circle ( 2.25); \path[draw=drawColor,line width= 
0.4pt,line join=round,line cap=round] (427.16,131.37) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (445.86,151.10) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (471.89,174.84) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (446.67,151.48) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (432.62,137.50) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (448.12,153.23) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (450.88,155.68) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (467.82,171.89) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (461.29,164.03) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (451.70,156.22) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (366.59, 72.28) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (455.87,159.27) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (464.08,168.08) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (440.85,144.67) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (440.03,144.20) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (491.92,196.09) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (445.22,150.21) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (488.94,191.30) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (441.50,145.36) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] 
(500.15,203.90) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (383.65, 85.81) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (394.22,102.52) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (485.94,189.58) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (439.70,143.94) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (458.97,161.68) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (408.93,115.27) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (450.07,155.38) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (419.80,125.28) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (448.77,153.77) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (432.44,137.32) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (425.42,129.08) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (454.19,158.12) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (442.63,147.19) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (461.48,164.34) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (473.97,176.67) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (426.39,130.08) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (469.92,172.88) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (492.38,196.50) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (454.86,158.86) circle ( 2.25); 
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (423.83,128.29) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (417.97,123.18) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (474.21,176.98) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (471.00,173.63) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (411.59,116.88) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (463.14,167.06) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (468.65,172.28) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (421.11,125.95) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (405.65,113.15) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (456.89,160.68) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (429.76,134.44) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (429.40,133.41) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (421.75,126.38) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (459.33,161.95) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (480.22,183.13) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (394.79,103.61) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (459.86,162.78) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (435.54,140.02) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (457.93,161.40) circle ( 2.25); \path[draw=drawColor,line width= 
0.4pt,line join=round,line cap=round] (478.27,181.22) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (449.74,154.96) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (457.23,160.75) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (430.31,135.06) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (435.88,140.38) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (450.72,155.67) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (412.97,119.07) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (488.15,190.76) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (447.80,153.15) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (451.54,156.17) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (397.91,104.78) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (392.40, 98.75) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (412.70,118.82) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (521.70,221.39) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (393.03, 99.37) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (437.05,141.62) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (473.50,176.53) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (428.85,132.25) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (396.41,104.27) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] 
(432.27,137.28) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (423.42,128.21) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (445.70,151.05) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (458.62,161.58) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (481.10,183.73) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (462.21,165.79) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (477.74,180.05) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (443.93,148.42) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (484.23,187.12) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (452.86,156.90) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (434.86,139.78) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (415.06,120.94) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (545.01,233.26) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (463.32,167.33) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (404.58,111.45) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (424.03,128.32) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (468.03,172.13) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (401.90,107.52) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (524.65,222.48) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (431.56,136.47) circle ( 2.25); 
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (420.89,125.84) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (395.34,103.72) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (423.21,127.60) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (454.52,158.62) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (446.02,151.22) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (498.21,201.38) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (458.80,161.68) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (423.62,128.27) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (433.83,138.13) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (465.43,169.52) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (497.02,199.67) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (497.61,200.98) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (427.54,131.51) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (420.02,125.42) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (470.13,173.16) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (433.14,137.71) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (443.12,147.36) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (469.49,172.66) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (460.75,163.78) circle ( 2.25); \path[draw=drawColor,line width= 
0.4pt,line join=round,line cap=round] (469.07,172.47) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (450.39,155.56) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (487.02,190.03) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (423.01,127.54) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (397.42,104.60) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (456.04,159.46) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (400.21,106.05) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (406.00,113.66) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (479.37,181.78) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (459.50,162.64) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (378.89, 82.89) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (466.01,170.84) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (434.69,139.42) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (515.42,215.82) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (402.70,108.59) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (465.62,169.71) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (400.65,106.68) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (493.33,196.66) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (453.35,157.04) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] 
(434.18,138.69) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (453.02,157.02) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (476.69,178.96) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (442.47,147.15) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (399.77,105.71) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (377.43, 82.29) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (486.29,189.67) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (438.88,143.44) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (450.23,155.53) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (488.54,190.93) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (485.59,189.20) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (502.30,206.97) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (415.31,121.05) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (467.41,171.62) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (440.52,144.42) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (424.23,128.38) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (402.30,107.93) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (446.35,151.32) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (461.84,164.59) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (482.31,185.17) circle ( 2.25); 
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (433.49,138.00) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (441.98,145.92) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (432.97,137.69) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (438.05,142.91) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (452.20,156.33) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (455.53,159.11) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (444.57,148.77) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (434.01,138.64) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (417.03,122.77) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (415.56,121.23) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (401.07,107.32) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (452.53,156.85) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (404.22,111.29) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (440.19,144.36) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (375.82, 79.17) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (439.21,143.54) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (446.51,151.40) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (451.37,155.98) circle ( 2.25); \path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (462.76,166.92) circle ( 2.25); \path[draw=drawColor,line width= 
% (figure code collapsed: tikzDevice output of a normal QQ-plot; axis labels
% ``Theoretical Quantiles'' and ``Sample Quantiles'', both axes ranging from $-3$ to $3$)
\end{scope}
\end{tikzpicture}

\section{Application}
\label{Application}

In this section, we first describe the proposed significance threshold for the variable importance measure and then apply the hypoRF test to simulated and real application cases. In the simulation section, we compare the hypoRF to recent kernel-based tests by investigating the power in a selected scenario. A more extensive simulation study is given in \ref{simusec}. In Section \ref{sec:Real}, two real data sets from biology and finance are considered.

\subsection{Variable importance measure}
\label{varimportance}

Variable importance measures in the context of Random Forest are practical tools introduced by \cite{Breiman2001}. As a by-product of the hypoRF test of Section \ref{finaltest}, we obtain a significance threshold for a given variable importance measure: for each permutation, we record the maximum variable importance measure $I_{\sigma}$ over all variables, thus approximating the distribution of $I_{\sigma}$ under $H_0$. The estimated $1-\alpha$ quantile of this distribution is then used as the significance threshold, and every variable with an importance measure above this threshold is called significant. This serves as an additional hint as to which components a rejection decision might originate from. In all instances we use the ``Gini'' importance measure, or ``Mean Decrease Impurity'', see e.g., \cite[Section 5]{Biau2016}. Obtaining $p$-values for the variable importance measure by permuting the response vector was developed much earlier in \cite{variableimportance1} and further developed in \cite{Janitza2018}.
As we are not directly interested in $p$-values for each variable, our approach differs slightly and is more in the spirit of the Westfall--Young permutation approach, see e.g., \cite{westfallyoung}. Since we already use a permutation approach to define the decision rule of the hypoRF test, the significance threshold for the variable importance arises without any additional cost. Figure \ref{fig:intro} in Section \ref{motexamplesec} demonstrates that in this example the Random Forest is able to correctly identify the effect of the last two components. This appears remarkable, as there is only a change in dependence, but no marginal change. On the other hand, one could imagine a situation where no significant variable is identified, but the test overall still rejects. This is illustrated in Figure \ref{fig:intro2}. In this example, instead of endowing only the last two components with correlations, we introduced correlations of 0.4 between all variables when changing from $P$ to $Q$. Again the hypoRF test manages to differentiate between the two distributions; this time, however, no significant variables can be identified. This seems sensible, as the source of change is divided equally between the different components in this example. Any situation could also be a mixture of the above extreme examples: there could be one or several significant variables, but the test still rejects even after removing them. Section \ref{sec:Real} shows real-world examples in which some variables can be identified as significant in the above sense.

\begin{figure}
\caption{\textbf{(Application)}}
\label{fig:intro2}
\end{figure}

\subsection{Simulation}
\label{contamin}

In what follows, we demonstrate the power of the proposed tests through simulation and compare it with three kernel methods and a recently proposed Random Forest test based on the classification probability.
To this end, we use both the first version of the test, as described in Algorithm \ref{RFTestalg1} (``Binomial'' test), and the refined version in Algorithm \ref{permutationalg} (``hypoRF'' test). For the latter, as mentioned in Section \ref{finaltest}, we use $K=100$ permutations. For the Binomial test described in Algorithm \ref{RFTestalg1} we decided to set $N_{train}=N_{test}$, as taking half of the data as training set and the other half as test set seems a sensible choice a priori. To conduct our simulations we use the R-package ``hypoRF'' developed by the authors, which provides the ``hypoRF'' function implementing the two proposed tests. For each pair of samples, we run all tests and save the decisions. The estimated power is then the fraction of rejections among the $S$ tests. The three kernel-based tests are the ``quadratic time MMD'' \citep{Twosampletest} using a permutation approach to approximate the $H_0$ distribution (``MMDboot''), its optimized version ``MMD-full'', as well as the ``ME'' test with optimized locations, ``ME-full'' \citep{NIPS2016_6148}. The original idea of MMD-full was formulated in \cite{NIPS2012_4727}; however, they subsequently used a linear version of the MMD. We instead use the approach of \cite{NIPS2016_6148}, which combines the optimization procedure of \cite{NIPS2012_4727} with the quadratic MMD from \cite{Twosampletest}. A Python implementation of these methods is available from the link provided in \cite{NIPS2016_6148} (\url{https://github.com/wittawatj/interpretable-test}). Among these tests, MMDboot still seems to be somewhat of a gold standard, with newer methods such as those presented in \cite{NIPS2012_4727}, \cite{NIPS2015_5685} and \cite{NIPS2016_6148} more focused on developing more efficient versions of the test that are nearly as good.
Nonetheless, the new methods often end up being surprisingly competitive or even better in some situations, as recently demonstrated in \cite{NIPS2016_6148}; hence our choice to include MMD-full and ME-full as well. For all tests, we use a Gaussian kernel, which is a standard and reasonable choice if no a priori knowledge about the optimal kernel is available. The Gaussian kernel requires a bandwidth parameter $\sigma$, which is tuned in MMD-full and ME-full based on training data. For MMDboot we use the ``median heuristic'', as described in \cite[Section 8]{Twosampletest}, which takes $\sigma$ to be the median (Euclidean) distance between the elements in $(\mathbf{Z}_i)_{i=1}^{2n}$. Finally, we consider the method of \cite{Cal2020}, which is a test based on the classification probability of Random Forest. We would like to emphasize that their first publication on arXiv appeared more than six months after our first upload; as such, we do not view them as a direct competitor. Nonetheless, it seems interesting to compare their performance to that of hypoRF, as they use a permutation approach based on the \emph{in-sample} probability estimates. We would like to stress that we did not use any tuning for the parameters of the RF-based tests, just as we did not use any tuning for MMDboot. As such, comparing MMD-full/ME-full to the other methods might not be entirely fair. On the other hand, our chosen sample size might be too small for the optimized versions to work at full capacity. In particular, all optimized tests suffer from a drawback similar to that of our Binomial test: the tuning of the method takes up half of the available data. While \cite{NIPS2016_6148} find that ME-full outperforms the MMD, they only consider settings where the latter also uses half of the data to tune its kernel, as proposed in \cite{NIPS2012_4727}. In our terminology, they only compare ME-full to MMD-full, instead of MMDboot.
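For concreteness, the median heuristic and a biased quadratic-time estimate of $\mathrm{MMD}^2$ can be sketched as follows. This is our own minimal illustration, not the implementation used in the experiments; note that conventions for parametrizing the Gaussian kernel vary (here $k(x,y)=\exp(-\|x-y\|^2/(2\sigma^2))$), and the permutation calibration of MMDboot is omitted.

```python
import numpy as np

def median_heuristic_bandwidth(Z):
    # sigma = median Euclidean distance between all pairs of pooled points
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    return np.median(D[np.triu_indices(len(Z), k=1)])

def mmd2_biased(X, Y, sigma):
    # Biased (V-statistic) quadratic-time estimate of MMD^2 with the
    # Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    Z = np.vstack([X, Y])
    D2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    K = np.exp(-D2 / (2.0 * sigma ** 2))
    n = len(X)
    return K[:n, :n].mean() + K[n:, n:].mean() - 2.0 * K[:n, n:].mean()
```

In MMDboot, the null distribution of this statistic would then be approximated by recomputing it over random relabelings of the pooled sample.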
It seems unclear a priori what happens if we instead employ the median heuristic for the MMD and let it use all of the available data, as in \cite{Twosampletest}. It should also be said that both optimization and testing of ME-full scale linearly in $N$, making its performance below all the more impressive. On the other hand, the optimization depends on some hyperparameters common in gradient-based optimization, such as the step size of the gradient step, the maximum number of iterations, etc. As this optimization is rather complicated for large $d$, some parameter choices lead to a longer runtime for ME-full than for the calculation-intensive hypoRF and CPT-RF. In general, both runtime and performance of ME-full appear in practice to be highly dependent on the chosen hyperparameters; we tried three different sets of parameters based on the code in \url{https://github.com/wittawatj/interpretable-test}, with very different power results. The settings used in this simulation study are exactly those used in their simulation study. As discussed in \cite{DPLB:Powerdiscussion}, changing the parameters of our experiments (for instance the dimension $d$) should be done in a way that leaves the Kullback--Leibler (KL) divergence constant. When varying the dimension $d$ we generally follow this suggestion, though in our case this is less critical: whatever inadvertent advantage we might give our testing procedure is also inherent in the competing methods. Finally, note that, while our methods would in principle be applicable to arbitrary classifiers, we did not compare our proposed tests with tests based on other classifiers, such as those used in \cite{facebookguys}. Rather, we believe the choice of classifiers for binary classification is a more general problem and should be studied separately, as for example done extensively in \cite{RFsuperiority}.
The only exception to this is our use of an LDA classifier-based test for the example of a Gaussian mean-shift in \ref{subsec:gauss_dsmall}. Where not stated otherwise, we use for the following experiments: $N=600$ observations, $300$ per class, $d=200$ dimensions, $K=100$ permutations and $600$ trees for the RF-based tests. In some examples, we additionally study a sparse case, where the intended change in distribution appears only in $c < d$ components. Throughout, notation such as \[ P=\sum_{t=1}^T \omega_{t} N(\boldsymbol{\mu}_t, \Sigma_t), \] with $\omega_{t} \geq 0$, $\sum_{t=1}^T \omega_t=1$, $\boldsymbol{\mu}_t \in \ensuremath{{\mathbb R}}^d$, $\Sigma_t \in \ensuremath{{\mathbb R}}^{d\times d}$, means that $P$ is a discrete mixture of $T$ $d$-dimensional Gaussians. Moreover, if $P_1, \ldots, P_d$ are distributions on $\ensuremath{{\mathbb R}}$, we denote by \[ P=\prod_{j=1}^d P_j \] their product measure on $\ensuremath{{\mathbb R}}^d$; in other words, in this case we simply take all the components of $\mathbf{X}$ to be independent.\\ The prime example which we present here in the main text is rather challenging. Let $P=N(\boldsymbol{\mu},\Sigma)$ with $\boldsymbol{\mu}$ set to $50 \cdot \mathbf{1}$ and $\Sigma=25 \cdot I_{d\times d}$. For the alternative, we consider the mixture \[ Q=\lambda H_c + (1-\lambda) P, \] $\lambda \in [0,1]$, and $H_c$ some distribution on $\ensuremath{{\mathbb R}}^d$. This is a ``contamination'' of $P$ by $H_c$, with $\lambda$ determining the contamination strength. Here, we take $H_c$ to consist of an independent $(d-c)$-variate Gaussian together with $c$ components that are in turn independent $\text{Binomial}(100,0.5)$ distributed. We thereby choose parameters such that the Binomial components in $H_c$ have the same mean and variance as the Gaussian components and such that differentiating between Binomial and Gaussian is known to be difficult.
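In code, drawing one pair of samples from $(P, Q)$ amounts to the following sketch (ours, not part of the \texttt{hypoRF} package). Recall that $\Sigma = 25 \cdot I_{d \times d}$ means a standard deviation of $5$ per component, and that the $c$ Binomial components of $H_c$ share the mean $50$ and variance $25$ of the Gaussian ones.

```python
import numpy as np

def sample_P(n, d, rng):
    # P: d independent N(50, 25) components (standard deviation 5)
    return rng.normal(50.0, 5.0, size=(n, d))

def sample_Q(n, d, c, lam, rng):
    # Q = lam * H_c + (1 - lam) * P; H_c replaces the last c components
    # by independent Binomial(100, 0.5) draws (mean 50, variance 25)
    out = sample_P(n, d, rng)
    contaminated = rng.random(n) < lam       # mixture indicator per row
    k = int(contaminated.sum())
    out[contaminated, d - c:] = rng.binomial(100, 0.5, size=(k, c))
    return out
```

For each contamination level one would then draw, e.g., \texttt{sample\_P(300, 200, rng)} and \texttt{sample\_Q(300, 200, 20, lam, rng)} and hand the pair to the tests.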
Figure \ref{contaminationillustration} displays two realizations of a Gaussian and a Binomial component, respectively. We take $d=200$ and $c$ to be $10\%$ of $200$, i.e., $c=20$.

\begin{figure}
\caption{\textbf{(Contamination)} Realizations of a $\text{Binomial}(100,0.5)$ component and an $N(50,25)$ component.}
\label{contaminationillustration}
\end{figure}

This problem is difficult: the Binomial and Gaussian components can hardly be differentiated by eye, the contamination level varies, and the contamination is actually detectable in only $c$ out of $d$ components. Moreover, the combination of discrete and continuous components means the optimal kernel choice might not be clear, even with full information. Thus, even with $300$ observations per class, no test displays any power until we reach a contamination level of $0.5$. However, for higher contamination levels, Figure \ref{fig:test4K100} clearly displays the superiority of the RF-based tests: none of the kernel tests rises significantly above the level of $5\%$. On the other hand, the two proposed tests slowly grow from around $0.05$ to almost $0.4$ in the case of the hypoRF test. Interestingly, while relatively close at first, the difference in power between the Binomial test and the hypoRF grows and is starkest for $\lambda=1$, again demonstrating the benefit of using the OOB error as a test statistic. Although slightly worse than the hypoRF, the CPT-RF also clearly beats the Binomial test, highlighting the benefit of using the permutation approach with (in-sample) classification probabilities.

\begin{figure}
\caption{\textbf{(Contamination)}}
\label{fig:test4K100}
\end{figure}

Finally, we consider the case $d=c$, so that $H_c$ simply consists of $d$ independent Binomial distributions. The result is displayed in Figure \ref{fig:test5K100}: all RF-based tests are now extremely strong, while the kernel tests fail to detect any signal.\\ More simulation examples can be found in \ref{simusec}.
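To make the mechanics of the classifier-based tests of this section concrete, the following stand-alone sketch (ours) implements a stylized ``Binomial''-type decision rule: split each sample in half, train a classifier (here a nearest-centroid rule as a dependency-free stand-in for the Random Forest), and reject when the balanced test error falls significantly below the chance level $1/2$. The normalization is a simplified version of the $\hat{\sigma}_c$ used in the appendix, and the $\epsilon_{N_{test}}$ correction is omitted.

```python
import numpy as np
from statistics import NormalDist

def binomial_type_test(X, Y, alpha=0.05, seed=0):
    # Split both samples into training and test halves.
    rng = np.random.default_rng(seed)
    X = rng.permutation(X); Y = rng.permutation(Y)
    Xtr, Xte = X[:len(X) // 2], X[len(X) // 2:]
    Ytr, Yte = Y[:len(Y) // 2], Y[len(Y) // 2:]
    # Nearest-centroid classifier (stand-in for the Random Forest);
    # label 1 means "classified as coming from Y".
    cX, cY = Xtr.mean(axis=0), Ytr.mean(axis=0)
    def classify(Z):
        return (np.linalg.norm(Z - cY, axis=1)
                < np.linalg.norm(Z - cX, axis=1)).astype(int)
    L0 = classify(Xte).mean()          # class-0 test error
    L1 = 1.0 - classify(Yte).mean()    # class-1 test error
    err = 0.5 * (L0 + L1)              # balanced test error
    sigma = 0.5 * np.sqrt(L0 * (1 - L0) / len(Xte)
                          + L1 * (1 - L1) / len(Yte))
    z = (err - 0.5) / max(sigma, 1e-12)
    return z < NormalDist().inv_cdf(alpha)   # reject H0: P = Q
```

Under $H_0$ the balanced error concentrates around $1/2$, so the one-sided normal cutoff controls the level; under a detectable alternative the error drops and the rule rejects.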
\begin{figure}
\caption{\textbf{(Contamination)}}
\label{fig:test5K100}
\end{figure}

\subsection{Real Data}
\label{sec:Real}

As a first application, we consider a high-dimensional microarray data set from \cite{githubdata}. The data set is about breast cancer and was originally provided by \cite{Gravier2010}, who examined 168 patients, each with 2905 gene expressions, over a five-year period. The 111 patients with no metastasis of small node-negative breast carcinoma after diagnosis were labeled ``good'', and the 57 patients with early metastasis were labeled ``poor''.\\ The application of the hypoRF test to the two groups is summarized in Figure \ref{fig:genes}. The test detects a clear difference between the groups ``good'' and ``poor'', with ``8p23'', ``8p21'' and ``3q25'' being the most important (and significant) genes. There seems to be a high correlation between genes located close to each other (especially within the same chromosome). This has the effect that the Random Forest makes a more or less arbitrary choice at a split point between those highly correlated genes, which in turn is reflected in the variable importance measure. For this reason, one should be careful when interpreting the variable importance measure on a gene level. It appears that chromosomes 8 and 3 play an important role in distinguishing the two groups. This finding is in line with \cite[Figure 2, p. 1129]{Gravier2010}.

\begin{figure}
\caption{\textbf{(Genes)}}
\label{fig:genes}
\end{figure}

In the second example, we are interested in the relative importance of financial risk factors (asset-specific characteristics). We claim that a financial risk factor has explanatory power if it contributes significantly to the classification of individual stock returns above or below the overall median. We use monthly stock return data from the Center for Research in Security Prices (CRSP). Our sample period starts in January 1977 and ends in December 2016, totaling 40 years.
Additionally, we obtain the 94 stock-level predictive characteristics used by \cite{dacheng2020} from Dacheng Xiu's webpage, see \url{http://dachxiu.chicagobooth.edu}. Between 1977 and 2016 we only use stocks for which we have a full return history, which leads to 501 stocks with 94 stock-specific characteristics. The group ``positive'' contains stocks and time points for which the return was above the overall median, and vice versa for the ``negative'' group. The two groups are balanced and contain more than 120'000 observations each.\\ The application of the hypoRF test to the two groups is summarized in Figure \ref{fig:riskfactors}. The ordering of the different risk factors is in line with the findings in \cite[Figure 5, p. 34]{dacheng2020}, with 1-month momentum being the most important characteristic.\\ One could argue that stocks whose return at time point $t$ is close to the overall median are more or less randomly assigned to one of the two groups. Hence, a possible option is to only assign a stock and time point to a group if the return is above (below) a certain threshold, i.e., the overall median $\pm \epsilon$. However, we observed that the result is very robust for different values of $\epsilon$.

\begin{figure}
\caption{\textbf{(Riskfactors)}}
\label{fig:riskfactors}
\end{figure}

\section{Discussion}

We discussed in this paper two easy-to-use and powerful tests based on Random Forest and empirically demonstrated their efficacy. We presented some consistency and power results and showed a way of adapting the Bayes classifier to obtain a consistent test; this adaptation consisted simply in changing the ``cutoff'' of the classifier. Especially the test based on the OOB statistic (hypoRF) proved to be powerful and additionally delivered a way to assess the significance of individual variables. This was demonstrated in applications using medical and financial data.
After our first publication on arXiv, \cite{Cal2020} developed an approach based on a smooth transformation of the in-sample probabilities. Interestingly, experiments using their approach with OOB probability estimates, as a hybrid of their and our methodology, delivered very promising results. Investigating this further could lead to a further improvement in power for RF-based tests.

\appendix

\section{Proofs}

\subsection{Proofs to Section \ref{framework}}

\begin{proposition}[Restatement of Proposition \ref{Prob1}] The decision rule in \eqref{Binomialtest} conserves the level asymptotically, i.e. \[ \limsup_{N_{test} \to \infty} \ensuremath{{\mathbb P}} \left(\delta_{B}(\hat{g}(D_{N_{test}}))=1 \right) \leq \alpha, \] under $H_0:P=Q$. \end{proposition} \begin{proof} Let $\mathcal{H}_{N}=\{D_{N_{train}},n_{1,test} \}$. Note that $n_{1,test}$ and $n_{0,test}$ contain the same probabilistic information, so it does not matter which we condition on. We first prove that \begin{align}\label{conddist} n_{j,test} \hat{L}_j^{(\hat{g})} \,\big|\, \mathcal{H}_{N} \sim \func{Bin}(n_{j,test}, L_j^{(\hat{g})}), \end{align} for $j \in \{0,1 \}$, and that $\hat{L}_0^{(\hat{g})}$, $\hat{L}_1^{(\hat{g})}$ are conditionally independent given $D_{N_{train}}$, $n_{1,test}$. To prove \eqref{conddist}, first note that by exchangeability (due to iid sampling), \[ \sum_{i: \ell_i=j} \ensuremath{{\mathbb I}}\{\hat{g}(\mathbf{Z}_i) \neq \ell_i\} \stackrel{D}{=} \sum_{i=1}^{n_{j,test}} \ensuremath{{\mathbb I}}\{\hat{g}(\mathbf{Z}_i) \neq j\}, \] $j \in \{0,1 \}$. Conditional on $\mathcal{H}_{N}$, the above is a sum of $n_{j,test}$ iid elements $\ensuremath{{\mathbb I}}\{\hat{g}(\mathbf{Z}_i) \neq j\}$, with \[ \ensuremath{{\mathbb I}}\{\hat{g}(\mathbf{Z}_i) \neq j\} | \mathcal{H}_{N} \sim \func{Bin}(1, \ensuremath{{\mathbb P}}(\hat{g}(\mathbf{Z}_i) \neq j|\mathcal{H}_{N})).
\] Finally, since the event $\hat{g}(\mathbf{Z}_i) \neq j$ is independent of $n_{j,test}$, \begin{align*} \ensuremath{{\mathbb P}}(\hat{g}(\mathbf{Z}_i) \neq j|\mathcal{H}_{N})&=\ensuremath{{\mathbb P}}(\hat{g}(\mathbf{Z}_i) \neq j|D_{N_{train}})\\ &=\ensuremath{{\mathbb P}}(\hat{g}(\mathbf{Z}_i) \neq \ell_i|D_{N_{train}}, \ell_i=j)\\ &= L_j^{(\hat{g})}. \end{align*} Let $\tilde{\sigma}_c^2:=L_{0}^{(\hat{g})} (1-L_{0}^{(\hat{g})}) + L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})})$ and recall that \[ \hat{\sigma}_{c}^2=\frac{\hat{L}_{0}^{(\hat{g})} (1-\hat{L}_{0}^{(\hat{g})})}{n_{0,test}} + \frac{\hat{L}_{1}^{(\hat{g})} (1-\hat{L}_{1}^{(\hat{g})})}{n_{1,test}}. \] Moreover, set for all $N_{test}$: \[ \epsilon_{N_{test}}:=\epsilon \cdot \frac{1}{N_{test}^{\nu}}, \] for some $\epsilon > 0$ and $\nu \in (1/2,1)$. Note that we assume $N_{test} \to \infty$, while $N_{train}$ might also increase to infinity at any rate, or stay constant. For the following, let \[ E:= \left \{ \frac{n_{1,test}}{N_{test}} \to \pi \right \}. \] Then $\ensuremath{{\mathbb P}}(E)=1$, as $\frac{n_{1,test}}{N_{test}} \to \pi$ a.s. First assume that, for a realized sequence of $D_{N_{train}}$, $N_{test} \tilde{\sigma}_c^2 \to \infty$ holds. Then for a realized sequence of $n_{1,test}$ with the property that $n_{1,test}/N_{test} \to \pi$ (i.e. on $E$), it holds that \[ \limsup_{N_{test} \to \infty} \ensuremath{{\mathbb P}} \left(\delta_{B}(D_N)=1 | \mathcal{H}_N \right) \leq \Phi(\Phi^{-1}(\alpha)) = \alpha. \] Indeed, if $N_{test} L_{0}^{(\hat{g})} (1-L_{0}^{(\hat{g})}) \to \infty $ and $N_{test} L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})}) \to \infty$, then conditional on $\mathcal{H}_N$, \begin{align}\label{asymptoticnormalnicecase} \frac{\hat{L}^{(\hat{g})}_{1/2} - 1/2 }{\hat{\sigma}_{c}} \to N(0,1), \end{align} by the Lindeberg--Feller Central Limit Theorem.
On the other hand, assume $N_{test} L_{0}^{(\hat{g})} (1-L_{0}^{(\hat{g})}) \to \infty $ does not hold, but $N_{test} L_{1}^{(\hat{g})} (1-L_{1}^{(\hat{g})}) \to \infty$ is still true. The former holds if and only if $N_{test} L_{0}^{(\hat{g})}$ does not go to infinity (iff $L_{0}^{(\hat{g})} \to 0$) or $N_{test} (1- L_{0}^{(\hat{g})})$ does not go to infinity (iff $L_{0}^{(\hat{g})} \to 1$). Then we may write \begin{align*} \frac{\hat{L}^{(\hat{g})}_{1/2} - 1/2 }{\hat{\sigma}_{c}} = \left( \frac{\sqrt{N_{test}} (\hat{L}^{(\hat{g})}_{0} - j) }{\sqrt{L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})})}} + \frac{\sqrt{N_{test}} (\hat{L}^{(\hat{g})}_{1} - (1-j)) }{\sqrt{L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})})}} \right) \frac{\sqrt{L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})})}}{\sqrt{N_{test}}\hat{\sigma}_{c}}, \end{align*} for $j \in \{0,1 \}$. In this case, \[ \frac{\sqrt{L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})})}}{\sqrt{N_{test}}\hat{\sigma}_{c}} \stackrel{p}{\to} 1. \] Moreover, for all $\delta > 0$, \begin{align*} \ensuremath{{\mathbb P}} \left(\frac{\sqrt{N_{test}}}{\sqrt{L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})})}} \Big| \hat{L}_{0}^{(\hat{g})} - L_{0}^{(\hat{g})} \Big| > \delta |\mathcal{H}_N \right) &\leq \frac{N_{test} }{\delta^2 L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})})} \ensuremath{{\mathbb V}}(\hat{L}_{0}^{(\hat{g})} |\mathcal{H}_N) \\ &= \frac{N_{test} }{\delta^2 L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})}) } \frac{L_{0}^{(\hat{g})}(1-L_{0}^{(\hat{g})})}{n_{0,test}} \\ &\approx\frac{L_{0}^{(\hat{g})}(1-L_{0}^{(\hat{g})})}{ L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})})}, \end{align*} on $E$.
Since $N_{test} L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})}) \to \infty$ is still true, this means that \[ \frac{L_{0}^{(\hat{g})}(1-L_{0}^{(\hat{g})})}{L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})})} = \frac{N_{test} L_{0}^{(\hat{g})}(1-L_{0}^{(\hat{g})})}{N_{test}L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})})} \to 0, \] on $E$ and thus, \begin{align*} \frac{\sqrt{N_{test}} (\hat{L}^{(\hat{g})}_{0} - j) }{\sqrt{L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})})}} = \frac{\sqrt{N_{test}} (\hat{L}^{(\hat{g})}_{0} - L_{0}^{(\hat{g})}) }{\sqrt{L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})})}} + \frac{\sqrt{N_{test}} (L_{0}^{(\hat{g})} - 0) }{\sqrt{ N_{test} L_{1}^{(\hat{g})}(1-L_{1}^{(\hat{g})})}} \stackrel{p}{\to} 0, \end{align*} and \eqref{asymptoticnormalnicecase} remains true. Finally, note that $\epsilon_{N_{test}}$ is of too small an order to make a difference in this case, since by the above $\sqrt{N_{test}} (\hat{L}^{(\hat{g})}_{1/2} - 1/2) = O_{\ensuremath{{\mathbb P}}}(1)$, while $\sqrt{N_{test}} \epsilon_{N_{test}} \to 0$. Now assume that $N_{train}$, $D_{N_{train}}$ are such that $N_{test} \tilde{\sigma}_c^2 \to \infty$ does not hold, i.e. $\liminf_{N_{test}} N_{test} \tilde{\sigma}_c^2 < \infty$. In this case, using again Markov's inequality, \[ N_{test}(\hat{L}^{(\hat{g})}_{1/2} - 1/2) = O_{\ensuremath{{\mathbb P}}}(1), \] i.e. $ \lim_{M \to \infty} \limsup_{N_{test}} \ensuremath{{\mathbb P}}( N_{test}(\hat{L}^{(\hat{g})}_{1/2} - 1/2) > M | \mathcal{H}_{N} )=0$. Thus, \begin{align*} \ensuremath{{\mathbb P}} \left(\delta_{B}(D_N)=1 | \mathcal{H}_{N} \right) &\leq \ensuremath{{\mathbb P}} \left( N_{test}(\hat{L}^{(\hat{g})}_{1/2} - 1/2) > \epsilon \cdot N_{test}^{1-\nu} | \mathcal{H}_{N} \right) \to 0, \end{align*} as $\epsilon \cdot N_{test}^{1-\nu} \to \infty$. Thus we have shown that for a realized sequence of $D_{N_{train}},n_{1,test}$ with the property that $n_{1,test}/N_{test} \to \pi$, it holds that \[ \limsup_{N_{test} \to \infty} \ensuremath{{\mathbb P}} \left(\delta_{B}(D_N)=1 | \mathcal{H}_N \right) \leq \alpha.
\] On the other hand, \begin{align*} &\limsup_{N_{test} \to \infty} \ensuremath{{\mathbb P}} \left(\delta_{B}(D_N)=1 \right)= \\ &= \limsup_{N_{test} \to \infty} \ensuremath{{\mathbb E}} \left[ \ensuremath{{\mathbb P}} \left(\delta_{B}(D_N)=1| D_{N_{train}},n_{1,test} \right) \ensuremath{{\mathbb I}}_E\right]\\ & \leq \ensuremath{{\mathbb E}} \left[ \limsup_{N_{test} \to \infty} \ensuremath{{\mathbb P}} \left(\delta_{B}(D_N)=1 | D_{N_{train}},n_{1,test} \right) \ensuremath{{\mathbb I}}_E\right]\\ &\leq \alpha. \end{align*} \end{proof} \begin{lemma} [Restatement of Lemma \ref{consistencylemma0}] Take $\ensuremath{{\mathcal X}} \subset \ensuremath{{\mathbb R}}$ and $\pi \neq 1/2$. Then no decision rule of the form, $\delta(D_N)=\delta( g^*_{1/2}(D_N) ) $ is consistent. \end{lemma} \begin{proof} We first show that if $\pi \neq \frac{1}{2}$, one can construct $(P,Q) \in \Theta_1$ that the Bayes classifier is not able to differentiate. Consider $\pi > 1/2$, $d=1$ and $Q$ being the uniform distribution on $(0,1)$, with density $q(z)=\ensuremath{{\mathbb I}} \{ z \in (0,1) \}$. We write $q=\ensuremath{{\mathbb I}}{(0,1)}$ for short. $P$ is a mixture of $Q$ and another uniform on $R \subset (0,1)$, so that \[ p= (1-\alpha)\ensuremath{{\mathbb I}}{(0,1)} + \alpha\frac{\ensuremath{{\mathbb I}}{R}}{|R|}. \] Giving $Q$ a label of 1 and $P$ a label of 0 when observing $(1-\pi) P + \pi Q$, and taking $|R|=1/2$, the Bayes classifier is then given as $g_{1/2}^{*}(z)=\ensuremath{{\mathbb I}} \{\eta(z) > 1/2 \}$, where \begin{align*} \eta(z):=\begin{cases}\pi /( \pi + (1- \pi)(1+\alpha) ), & \text{ if } z \in R\\ \pi /(\pi + (1-\pi) (1-\alpha)), & \text{ if } z \notin R\end{cases}. \end{align*} Simple algebra shows that for any $\alpha < \min(\pi/(1-\pi) - 1,1)$, $\eta(z) > 1/2$ and thus $g_{1/2}^*(z)=1$ for all $z \in (0,1)$. 
In particular, $L_0^{(g_{1/2}^{*})}=1$ and $L_1^{(g_{1/2}^{*})}=0$, so that $L_0^{(g_{1/2}^{*})} + L_1^{(g_{1/2}^{*})}=1$ and $L^{(g_{1/2}^{*})}=1-\pi=\min(\pi,1-\pi)$. On the other hand, for any $\theta_0 \in \Theta_0$, simple evaluation of $\eta(z)$ shows that $g_{1/2}^*(z)=1$ for all $z$. Consequently, for $\theta_1=(P,Q)$ in the above example and $\theta_0 \in \Theta_0$ arbitrary, it holds that \[ \ensuremath{{\mathbb E}}_{\theta_0}[ f( g^*_{1/2}(D_N) ) ] = \ensuremath{{\mathbb E}}_{\theta_1}[ f( g^*_{1/2}(D_N) ) ], \] for any bounded measurable function $f: \{0,1 \}^N \to \ensuremath{{\mathbb R}}$. In particular, since the test conserves the level by assumption, $\phi(\theta_1)=\phi(\theta_0)\leq \alpha$ and the test has no power. \end{proof} \begin{lemma}[Restatement of Lemma \ref{consistencylemma2}] The classifier \begin{equation} g_{\pi}^*(\mathbf{z}) = \ensuremath{{\mathbb I}} \left\{ \eta(\mathbf{z}) > \pi \right\}, \end{equation} is a solution to \eqref{newproblem}. Moreover it holds that \begin{equation}\label{eq:TVanderror2d} 1 - TV(P,Q) = L_0^{g_{\pi}^*} + L_1^{g_{\pi}^*}, \end{equation} for any $\pi \in (0,1)$. \end{lemma} \begin{proof} We show Relation \eqref{eq:TVanderror2d} for the classifier \[ g^*(\mathbf{z}): = \ensuremath{{\mathbb I}} \left\{ \eta(\mathbf{z}) > \pi \right\}. \] If this is true, it immediately follows that $g^*=g_{\pi}^*$. Indeed, let $h_{\#}P$ be the push-forward measure of $P$ through a measurable function $h: \ensuremath{{\mathcal X}} \to \ensuremath{{\mathbb R}}$.
Taking $h=g$, for an arbitrary classifier $g$, it holds that \begin{align*} 1-(L_0^{g_{\pi}^*} + L_1^{g_{\pi}^*} ) &= TV(P,Q)\\ &\geq P(g(\mathbf{X}) = 0 ) - Q(g(\mathbf{Y}) = 0 )\\ &=\ensuremath{{\mathbb P}}(g(\mathbf{Z}) = 0 | \ell=0 ) - \ensuremath{{\mathbb P}}(g(\mathbf{Z}) = 0 | \ell=1 ) \\ &=1-(L_0^{g} + L_1^{g} ), \end{align*} where the inequality follows because $\{\mathbf{x}: g(\mathbf{x}) = 0 \}$ and $\{\mathbf{y}: g(\mathbf{y}) = 0 \}$ are two Borel sets on $\ensuremath{{\mathcal X}}$. Consequently, it also holds for any classifier $g$ that \[ L_{1/2}^{g} = \frac{1}{2} (L_0^{g} + L_1^{g} ) \geq \frac{1}{2} (L_0^{g_{\pi}^*} + L_1^{g_{\pi}^*} ) = L_{1/2}^{g_{\pi}^*}, \] or $g^*=g_{\pi}^*$. It remains to prove \eqref{eq:TVanderror2d} for $g^*$: It is well-known that (one of) the sets attaining the maximum in the definition of $TV(P,Q)$ is given by $A^*:=\{\mathbf{z}: q(\mathbf{z} ) \leq p(\mathbf{z}) \}$. It is possible to rewrite $A^*$: \begin{align*} A^*&=\left \{ \mathbf{z}: \frac{\pi q(\mathbf{z})}{ (1-\pi) p(\mathbf{z})+ \pi q(\mathbf{z}) } \leq \frac{\pi}{1-\pi} \frac{(1-\pi) p(\mathbf{z})}{ (1-\pi) p(\mathbf{z})+ \pi q(\mathbf{z}) } \right \}\\ &= \left \{ \mathbf{z}: \eta(\mathbf{z}) \leq \frac{\pi}{1-\pi} (1-\eta(\mathbf{z})) \right\}\\ &= \{ \mathbf{z}:\eta(\mathbf{z}) \leq \pi \}. \end{align*} Thus \begin{align*} TV(P,Q) = P(A^*) - Q(A^*) &= \ensuremath{{\mathbb P}}( \eta(\mathbf{z}) \leq \pi | \ell=0 ) - \ensuremath{{\mathbb P}}( \eta(\mathbf{z}) \leq \pi | \ell=1 ) \\ &= 1- \ensuremath{{\mathbb P}}( \eta(\mathbf{z}) > \pi | \ell=0 ) - \ensuremath{{\mathbb P}}( \eta(\mathbf{z}) \leq \pi | \ell=1 )\\ &=1 - (\ensuremath{{\mathbb P}}( \eta(\mathbf{z}) > \pi | \ell=0 ) + \ensuremath{{\mathbb P}}( \eta(\mathbf{z}) \leq \pi | \ell=1 ))\\ &= 1- (L_0^{g_{\pi}^*} + L_1^{g_{\pi}^*}).
\end{align*} \end{proof} \begin{corollary}[Restatement of Corollary \ref{consistencycor}] The decision rule $\delta_B(g_{\pi}^*(D_N))$ in \eqref{Binomialtest} is consistent for any $\pi \in (0,1)$. \end{corollary} \begin{proof} We restate here the decision rule in \eqref{Binomialtest} for completeness, \begin{align*} \delta_{B}(g^*_{\pi}(D_{N})) = \ensuremath{{\mathbb I}} \left\{ \hat{L}_{1/2}^{(g^*_{\pi})} - 1/2 < \hat{\sigma}_c \Phi^{-1}(\alpha) + \epsilon_{N} \right\}, \end{align*} since $N_{test}=N$. First we show that the decision rule conserves the level, with $\epsilon_{N}=0$ for all $N$. If $P=Q$, then $\eta(z)=\pi$ for all $z$, so that $g_{\pi}^* \equiv 0$ and hence $\hat{L}_0^{(g_{\pi}^*)}=0$ and $\hat{L}_1^{(g_{\pi}^*)}=1$ a.s. Thus for all $\theta_0 \in \Theta_0$ and any sample size, \begin{align*} \phi(\theta_0)=\ensuremath{{\mathbb P}}_{\theta_0}( \hat{L}_{1/2}^{(g_{\pi}^*)} < 1/2 )=0. \end{align*} Thus in particular $\sup_{\Theta_0} \phi(\theta_0) = 0 \leq \alpha$. Assume $\theta \in \Theta_1$, so that $TV(P,Q) > 0$. We assume first that also $TV(P,Q) < 1$. Since now the classifier itself does not need to be estimated, it holds that \[ N_{j} \hat{L}_j^{(g_{\pi}^*)} | N_{j} \sim \func{Bin}(N_{j}, L_j^{(g_{\pi}^*)}), \] as proven in Proposition \ref{Prob1}. Since $1 > TV(P,Q) > 0$, $0 < L_0^{(g_{\pi}^*)} + L_1^{(g_{\pi}^*)} < 1$, so that $N L_j^{(g_{\pi}^*)}(1-L_j^{(g_{\pi}^*)}) \to \infty$ for $j=0$ or $j=1$.
Conditional on any sequence of $N_0, N_1$, such that $N_0 \to \infty$ and $N_1 \to \infty$, as $N\to \infty$, \[ \sqrt{N_{0}} (\hat{L}_0^{(g_{\pi}^*)} - L_0^{(g_{\pi}^*)} )\stackrel{D}{\to} N(0, L_0^{(g_{\pi}^*)}(1-L_0^{(g_{\pi}^*)})) \text{ and } \sqrt{N_{1}} (\hat{L}_1^{(g_{\pi}^*)} - L_1^{(g_{\pi}^*)} )\stackrel{D}{\to} N(0, L_1^{(g_{\pi}^*)}(1-L_1^{(g_{\pi}^*)})), \] and since $\hat{L}_0^{(g_{\pi}^*)}$, $\hat{L}_1^{(g_{\pi}^*)}$ are conditionally independent, it holds that \begin{align*} \frac{ \hat{L}_{1/2}^{(g_{\pi}^*)} - L_{1/2}^{(g_{\pi}^*)} }{ 1/2 \sqrt{ \frac{\hat{L}_0^{(g_{\pi}^*)}(1-\hat{L}_0^{(g_{\pi}^*)})}{N_{0}} + \frac{\hat{L}_1^{(g_{\pi}^*)}(1-\hat{L}_1^{(g_{\pi}^*)})}{N_{1}} } }=\frac{ \hat{L}_{1/2}^{(g_{\pi}^*)} - L_{1/2}^{(g_{\pi}^*)} }{ \hat{\sigma}_c } \stackrel{D}{\to} N(0,1), \end{align*} as in Proposition \ref{Prob1}. Consequently, \begin{align*} \ensuremath{{\mathbb P}} \left( \frac{\hat{L}_{1/2}^{(g_{\pi}^*)} - 1/2 }{ \hat{\sigma}_c } < \Phi^{-1}(\alpha) \Big| N_0 \right)& = \ensuremath{{\mathbb P}} \left( \frac{\hat{L}_{1/2}^{(g_{\pi}^*)} - L_{1/2}^{(g_{\pi}^*)} }{ \hat{\sigma}_c } < \Phi^{-1}(\alpha) - \frac{ L_{1/2}^{(g_{\pi}^*) } - 1/2}{ \hat{\sigma}_c } \Big| N_0 \right). \end{align*} Now for any realized sequence of $N_0$, $N_1$ such that $N_0 \to \infty$ and $N_1 \to \infty$, as $N\to \infty$, this probability goes to 1, since $L_{1/2}^{(g_{\pi}^*)} - 1/2 < 0$ and $\hat{\sigma}_c=O(N_0^{-1/2}) \to 0$. Since $N_1/N \to \pi$, a.s., and $N_0=N-N_1$, this will be true for almost all sequences. Thus applying dominated convergence to the above conditional result, one sees that \[ \ensuremath{{\mathbb P}} \left( \frac{\hat{L}_{1/2}^{(g_{\pi}^*)} - 1/2 }{ \hat{\sigma}_c } < \Phi^{-1}(\alpha) \right) \to 1. \] If $TV(P,Q)=1$ on the other hand, $L_{1/2}^{(g_{\pi}^*)}=0$ and $\hat{\sigma}_c=0$ a.s. and trivially the rejection probability becomes \[ \ensuremath{{\mathbb P}}( \hat{L}_{1/2}^{(g_{\pi}^*)} < 1/2 )=1. 
\] \end{proof} \subsection{Proofs to Section \ref{U-stats-Test}} \begin{lemma}[Restatement of Lemma \ref{expectationlemma}] $\ensuremath{{\mathbb E}}[h_{N_{train}}((\ell_1, \mathbf{Z}_{1}), \ldots, (\ell_{N_{train}}, \mathbf{Z}_{N_{train}}))]=\ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-i})}]$. \end{lemma} \begin{proof} First we note that \[ \ensuremath{{\mathbb E}}[L_{1/2}^{\hat{g}_{-i}}]=\frac{1}{2} \left( \ensuremath{{\mathbb P}}(\hat{g}_{-i} (\mathbf{Z}_i) \neq \ell_i | \ell_i=1 ) + \ensuremath{{\mathbb P}}(\hat{g}_{-i} (\mathbf{Z}_i) \neq \ell_i | \ell_i=0) \right). \] Let $B(i) \leq B$ be the number of classifiers in the ensemble not containing observation $i$. Since we assume that each classifier in the ensemble receives a bootstrapped version of $D_{N_{train}}$, there is a probability $p > 0$ that any given classifier $\hat{g}_{\nu_b}$ will not contain observation $i$. Since this bootstrapping is done independently for each classifier, we have that $B(i) \sim \mbox{Bin}(B, p)$. Thus as $B \to \infty$, also $B(i) \to \infty$ a.s. and, in the limit, $\hat{g}_{-i}(\mathbf{Z})=\ensuremath{{\mathbb E}}_{\nu}[\hat{g}_{\nu}(D_{N_{train}}^{-i})(\mathbf{Z})]$, or \begin{align*} \ensuremath{{\mathbb E}}[\varepsilon_{i}^{oob}] &=\ensuremath{{\mathbb E}}[\ensuremath{{\mathbb I}}{ \{\hat{g}_{-i} (\mathbf{Z}_i) \neq \ell_i \}} \left( \frac{1-\ell_{i}}{n_{0, train}} + \frac{\ell_{i}}{n_{1, train}} \right) ]\\ &=\ensuremath{{\mathbb E}} \left[\ensuremath{{\mathbb I}}{ \{\hat{g}_{-i} (\mathbf{Z}_i) \neq \ell_i \}} \frac{1-\ell_{i}}{n_{0, train}} \right] + \ensuremath{{\mathbb E}} \left[\ensuremath{{\mathbb I}}{ \{\hat{g}_{-i} (\mathbf{Z}_i) \neq \ell_i \}} \frac{\ell_{i}}{n_{1, train}} \right] .
\end{align*} Now, since $\ell_i=\ensuremath{{\mathbb I}}\{ \ell_i=1 \}$, it holds that \begin{align*} \ensuremath{{\mathbb E}} \left[\ensuremath{{\mathbb I}}{ \{\hat{g}_{-i} (\mathbf{Z}_i) \neq \ell_i \}} \frac{\ell_{i}}{n_{1, train}} \right]&=\ensuremath{{\mathbb E}} \left[\frac{1}{n_{1, train}} \cdot \ensuremath{{\mathbb P}}(\hat{g}_{-i} (\mathbf{Z}_i) \neq \ell_i , \ell_i=1 | n_{1, train} ) \right]\\ &=\ensuremath{{\mathbb E}} \left[\frac{\ensuremath{{\mathbb P}}(\ell_i=1| n_{1, train})}{n_{1, train}} \cdot \ensuremath{{\mathbb P}}(\hat{g}_{-i} (\mathbf{Z}_i) \neq \ell_i | n_{1, train}, \ell_i=1 ) \right]\\ &=\ensuremath{{\mathbb E}} \left[\frac{\ensuremath{{\mathbb P}}(\ell_i=1| n_{1, train})}{n_{1, train}} \right] \cdot \ensuremath{{\mathbb P}}(\hat{g}_{-i} (\mathbf{Z}_i) \neq \ell_i | \ell_i=1 ), \end{align*} since the event $\hat{g}_{-i} (\mathbf{Z}_i) \neq \ell_i$ is independent of $n_{1, train}$ given the event $\ell_i=1$. Finally, \begin{align*} \ensuremath{{\mathbb E}} \left[\frac{\ensuremath{{\mathbb P}}(\ell_i=1| n_{1, train})}{n_{1, train}} \right]&= \frac{1}{N_{train}} \ensuremath{{\mathbb E}} \left[ \sum_{i=1}^{N_{train}} \frac{\ensuremath{{\mathbb P}}(\ell_i=1| n_{1, train})}{n_{1, train}} \right] \\ &=\frac{1}{N_{train}} \ensuremath{{\mathbb E}} \left[ \ensuremath{{\mathbb E}}\left[\frac{1}{n_{1, train}}\sum_{i=1}^{N_{train}} \ell_i \Big| n_{1, train} \right] \right]\\ &=\frac{1}{N_{train}}. \end{align*} Similarly, \begin{align*} \ensuremath{{\mathbb E}} \left[\ensuremath{{\mathbb I}}{ \{\hat{g}_{-i} (\mathbf{Z}_i) \neq \ell_i \}} \frac{1-\ell_{i}}{n_{0, train}} \right] = \frac{1}{N_{train}} \ensuremath{{\mathbb P}}(\hat{g}_{-i} (\mathbf{Z}_i) \neq \ell_i | \ell_i=0 ). \end{align*} Thus indeed, \[ \ensuremath{{\mathbb E}}[h_{N_{train}}((\ell_1, \mathbf{Z}_{1}), \ldots, (\ell_{N_{train}}, \mathbf{Z}_{N_{train}}))]=N_{train} \ensuremath{{\mathbb E}}[\varepsilon_{1}^{oob} ]=\ensuremath{{\mathbb E}}[L_{1/2}^{\hat{g}_{-i}}].
\] \end{proof} \begin{lemma} $h_{N_{train}}$ is a valid kernel for the expectation $\ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-i})}]$. \end{lemma} \begin{proof} Unbiasedness was proven above. Symmetry follows, since for any two permutations $\sigma_1$, $\sigma_2$, there exist $i,j$ such that $\sigma_{1}(i)=\sigma_{2}(j):=u$, and thus \begin{align*} \varepsilon_{\sigma_1(i)}^{oob}&=\ensuremath{{\mathbb E}}[\ensuremath{{\mathbb I}}{ \{g(\mathbf{Z}_{\sigma_1(i)}, D_{N_{train}}^{-\sigma_1(i)}, \theta) \neq \ell_{\sigma_1(i)} \} } \left( \frac{1-\ell_{\sigma_1(i)}}{n_{0, train}} + \frac{\ell_{\sigma_1(i)}}{n_{1, train}} \right) | D_{N_{train}}^{\sigma_1}] \\ &= \ensuremath{{\mathbb E}}[\ensuremath{{\mathbb I}}{ \{g(\mathbf{Z}_{u}, D_{N_{train}}^{-u}, \theta) \neq \ell_{u} \} } \left( \frac{1-\ell_{u}}{n_{0, train}} + \frac{\ell_{u}}{n_{1, train}} \right) | D_{N_{train}}^{\sigma_1} ] \\ &= \ensuremath{{\mathbb E}}[\ensuremath{{\mathbb I}}{ \{g(\mathbf{Z}_{u}, D_{N_{train}}^{-u}, \theta) \neq \ell_{u} \} } \left( \frac{1-\ell_{u}}{n_{0, train}} + \frac{\ell_{u}}{n_{1, train}} \right) | D_{N_{train}}^{\sigma_2}] \\ &= \varepsilon_{\sigma_2(j)}^{oob}, \end{align*} where $D_{N_{train}}^{\sigma_s}= (\mathbf{Z}_{\sigma_s(1)},\ell_{\sigma_s(1)}),\ldots,(\mathbf{Z}_{\sigma_s(N_{train})},\ell_{\sigma_s(N_{train})})$, $s \in \{1,2 \}$. But that means the sum in \eqref{overalloob} does not change. \end{proof} We also need a well-known auxiliary result: \begin{lemma}\label{subsequenceconvergencelemma} Let $(\xi_N)_{N}$ be a sequence of random variables and $\xi$ a random variable. If every subsequence has a further subsequence such that $\xi_{N(k(l))} \stackrel{D}{\to} \xi$, then $\xi_{N} \stackrel{D}{\to} \xi$.
\end{lemma} \begin{theorem}[Restatement of Theorem \ref{asymptoticnormalxx}] Assume that for $N \to \infty$, $N_{train}=N_{train}(N) \to \infty$ and $K=K(N) \to \infty$, \begin{align} \lim_{N} \frac{K N_{train}^2}{N} \frac{\zeta_{1,N_{train}}}{{\zeta_{N_{train},N_{train}}}} &= 0, \label{zetacondapp}\\ \lim_{N} \frac{\sqrt{K} N_{train}}{N}&= 0. \label{Kocondapp} \end{align} Then, \begin{align} \label{normality2app} \frac{\sqrt{K}(\hat{U}_{N,K} - \ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-1})}] ) }{\sqrt{ \zeta_{N_{train},N_{train}}}} \stackrel{D}{\to} N(0,1). \end{align} \end{theorem} \begin{proof} Let for the following $\xi_i= (\mathbf{Z}_{i},\ell_{i})$ for brevity and consider the complete U-statistic \begin{equation}\label{completeU} \hat{U}_{N}:=\frac{1}{\binom{N}{N_{train}}} \sum h_{N_{train}}(\xi_{i_1},\ldots, \xi_{i_{N_{train}}}), \end{equation} where the sum is taken over all $\binom{N}{N_{train}}$ possible subsets of size $N_{train} \leq N$ from $\{1,\ldots, N\}$. From the ``H-Decomposition'', see e.g., \cite{Ustat90}, the variance of $ \hat{U}_{N}$ can be bounded as \begin{align*} \ensuremath{{\mathbb V}}( \hat{U}_{N}) &\leq \frac{N_{train}^2}{N} \zeta_{1,N_{train}} + \frac{N_{train}^2}{N^2} \ensuremath{{\mathbb V}}(h)\\ &\leq \frac{N_{train}^2}{N} \zeta_{1,N_{train}} + \frac{N_{train}^2}{N^2} \zeta_{N_{train},N_{train}} , \end{align*} see also \cite[Lemma 7]{wager2017estimation}. Thus it holds for all $\varepsilon > 0$ that \begin{align*} \ensuremath{{\mathbb P}} \left( \frac{\sqrt{K} |\hat{U}_{N} - \ensuremath{{\mathbb E}}[L_{1/2}^{\hat{g}_{-1}} ] |}{\sqrt{\zeta_{N_{train},N_{train}}} } > \varepsilon \right) &\leq \frac{K\ensuremath{{\mathbb V}}(\hat{U}_{N})}{\varepsilon^2 \zeta_{N_{train},N_{train}}} \\ &\leq \frac{1}{\varepsilon^2} \left( \frac{K N_{train}^2}{N} \frac{\zeta_{1,N_{train}}}{\zeta_{N_{train},N_{train}}} + \frac{K N_{train}^2}{N^2} \right) \\ & \to 0, \end{align*} by \eqref{zetacondapp} and \eqref{Kocondapp}.
We now use the idea of \cite[Lemma A]{Ustat90} to prove \eqref{normality2}: As in \cite{RFuncertainty}, we denote by $\mathcal{S}_{N,N_{train}}=\{S_j: j=1,\ldots, \binom{N}{N_{train}} \}$ all possible subsamples of size $N_{train}$ sampled without replacement. Let $M_{N, N_{train}}=(M_{S_1}, \ldots, M_{S_{\binom{N}{N_{train}}}})$ be the number of times each subsample appears when sampling $K$ times. Then $M_{N, N_{train}} | \xi_1, \xi_2, \ldots $ is multinomially distributed. Thus \begin{align}\label{divideandconquer} \frac{\sqrt{K} \left( \hat{U}_{N,K} - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-1}}_{1/2}] \right)}{ \sqrt{\zeta_{N_{train},N_{train}}}} & \stackrel{D}{=} \sqrt{K}^{-1} \left( \sum_{i=1}^{\binom{N}{N_{train}}} M_{S_i} ( h_{N_{train}}( S_i) - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-1}}_{1/2}] ) \right)/ \sqrt{\zeta_{N_{train},N_{train}}} \nonumber \\ &\stackrel{D}{=} \frac{1}{ \sqrt{\zeta_{N_{train},N_{train}}} \sqrt{K}} \left( \sum_{i=1}^{\binom{N}{N_{train}}} \frac{K}{\binom{N}{N_{train}}} ( h_{N_{train}}( S_i) - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-1}}_{1/2}] ) \right) + \nonumber \\ &\frac{1}{ \sqrt{\zeta_{N_{train},N_{train}}}\sqrt{K}} \left( \sum_{i=1}^{\binom{N}{N_{train}}} \left(M_{S_i} - \frac{K}{\binom{N}{N_{train}}}\right) ( h_{N_{train}}( S_i) - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-1}}_{1/2}] ) \right) \nonumber \\ &\stackrel{D}{=} \frac{\sqrt{K}(\hat{U}_{N} - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-1}}_{1/2}])}{\sqrt{\zeta_{N_{train},N_{train}}} } + \nonumber \\ &\sqrt{K}\left( \frac{1}{K} \sum_{i=1}^{\binom{N}{N_{train}}} \left(M_{S_i} - \frac{K}{\binom{N}{N_{train}}}\right) \frac{( h_{N_{train}}( S_i) - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-1}}_{1/2}] )}{\sqrt{\zeta_{N_{train},N_{train}}}} \right). \end{align} Let $a_i=(h_{N_{train}}( S_i) - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-1}}_{1/2}])/\sqrt{\zeta_{N_{train},N_{train}}} $, as in \cite{Ustat90}.
Then \[ \hat{U}_{N,2}=\binom{N}{N_{train}}^{-1} \sum_{i=1}^{\binom{N}{N_{train}}} a_i^2, \] is again a U-statistic with $\ensuremath{{\mathbb E}}[\hat{U}_{N,2}]=1$ and \begin{align*} \ensuremath{{\mathbb P}} (|\hat{U}_{N,2} -1 | > \varepsilon)&\leq \frac{1}{\varepsilon^2} \left( \frac{N_{train}^2}{N} \ensuremath{{\mathbb V}}(\ensuremath{{\mathbb E}}[( h_{N_{train}}( S_i) - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-1}}_{1/2}] )^2|\xi_1]) + \frac{N_{train}^2}{N^2} \right)\\ &= O\left(\frac{N_{train}}{N} \right)\\ &=o(1), \end{align*} using Lemma \ref{zeta1result}. Thus, $\hat{U}_{N,2} \stackrel{p}{\to} 1$ and this will be true for any given subsequence as well. Similarly, \[ \binom{N}{N_{train}}^{-1} \sum_{i=1}^{\binom{N}{N_{train}}} a_i \leq \frac{\sqrt{K}(\hat{U}_{N} - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-1}}_{1/2}]) }{\sqrt{\zeta_{N_{train},N_{train}}}} \stackrel{p}{\to} 0. \] For each given subsequence we can thus choose a further subsequence such that $\hat{U}_{N,2} \stackrel{a.s.}{\to} 1$, as well as $\sqrt{K} (\hat{U}_{N} - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-1}}_{1/2}])/\sqrt{\zeta_{N_{train},N_{train}}} \stackrel{a.s.}{\to} 0$. Then it follows from \eqref{divideandconquer} and the same characteristic function arguments as in \cite[Lemma A]{Ustat90} that, \begin{align*} &\lim_{N \to \infty} \ensuremath{{\mathbb E}}[\exp\left( \iota t \sqrt{K} (\hat{U}_{N,K} - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-1}}_{1/2}])/\sqrt{\zeta_{N_{train},N_{train}}} \right)] = \\ &\lim_{N \to \infty} \ensuremath{{\mathbb E}}\left[ \exp \left( \iota t \sqrt{K} (\hat{U}_{N} - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-1}}_{1/2}]) /\sqrt{\zeta_{N_{train},N_{train}}}\right) \right] \cdot \exp \left( -\frac{t^2}{2} \right)\\ &=\exp \left( -\frac{t^2}{2} \right), \end{align*} where we suppressed the dependence on the chosen subsequence. Thus the subsequence converges in distribution to $N(0,1)$ and by Lemma \ref{subsequenceconvergencelemma}, so does the overall sequence.
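The complete versus incomplete U-statistic comparison underlying this argument can be illustrated with a small sketch; the subset-mean kernel below is a toy stand-in for $h_{N_{train}}$ (the function names and interface are ours, not the paper's implementation):

```python
import itertools
import random
import statistics

def complete_u(data, m, kernel):
    # Complete U-statistic: average of the kernel over all size-m subsets
    # (only tractable for very small samples).
    return statistics.mean(kernel(s) for s in itertools.combinations(data, m))

def incomplete_u(data, m, kernel, K, seed=0):
    # Incomplete U-statistic U_hat_{N,K}: average over K subsets drawn
    # uniformly at random, mirroring the multinomial-weights argument above.
    rng = random.Random(seed)
    return statistics.mean(kernel(tuple(rng.sample(data, m)))
                           for _ in range(K))
```

For a symmetric kernel such as the subset mean, the complete version reproduces the sample mean exactly, while the incomplete version fluctuates around it at rate $K^{-1/2}$.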
\end{proof} \begin{corollary}[Restatement of Corollary \ref{asymptoticpower2}] Assume the conditions of Theorem \ref{asymptoticnormalxx} hold true and that $\hat{\zeta}_{N_{train},N_{train}}/\zeta_{N_{train},N_{train}} \stackrel{p}{\to} 1$. Then the decision rule in \eqref{test3} conserves the level asymptotically and has approximate power \begin{align}\label{powerexpressionapp} \Phi \left( \Phi^{-1}(\alpha) + \sqrt{\frac{K}{\zeta_{N_{train},N_{train}}}} (1/2 - \ensuremath{{\mathbb E}}[L_{1/2}^{(\hat{g}_{-i})}]) \right). \end{align} \end{corollary} \begin{proof} From Theorem \ref{asymptoticnormalxx} and the assumption that $\hat{\zeta}_{N_{train},N_{train}}$ is a consistent estimator, it follows that \begin{align*} \frac{\sqrt{K} (\hat{U}_{N,K} - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-i}}_{1/2}])}{\sqrt{\hat{\zeta}_{N_{train},N_{train}}}} &\stackrel{D}{\to} N(0,1). \end{align*} In particular, under $H_0$, as $\ensuremath{{\mathbb E}}[L^{\hat{g}_{-i}}_{1/2}]=1/2$: \[ \frac{\sqrt{K} (\hat{U}_{N,K} - 1/2)}{\sqrt{\hat{\zeta}_{N_{train},N_{train}}}} \stackrel{D}{\to} N(0,1), \] so that the decision rule \eqref{test3} attains the right level as $K \to \infty$. Moreover, under the alternative, for $t^*:=\Phi^{-1}(\alpha)$, \begin{align*} &\ensuremath{{\mathbb P}}\left(\frac{\sqrt{K} (\hat{U}_{N,K} - 1/2)}{\sqrt{\hat{\zeta}_{N_{train},N_{train}}}} < t^* \right)\\ &= \ensuremath{{\mathbb P}} \left(\frac{\sqrt{K} (\hat{U}_{N,K} - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-i}}_{1/2}])}{\sqrt{\hat{\zeta}_{N_{train},N_{train}}}} < t^* - \frac{\sqrt{K} (\ensuremath{{\mathbb E}}[L^{\hat{g}_{-i}}_{1/2}] - 1/2)}{\sqrt{\hat{\zeta}_{N_{train},N_{train}}}} \right)\\ &= \Phi \left( t^* + \frac{\sqrt{K} (1/2 - \ensuremath{{\mathbb E}}[L^{\hat{g}_{-i}}_{1/2}])}{\sqrt{\zeta_{N_{train},N_{train}}}} \right) + o_{\ensuremath{{\mathbb P}}}(1). \end{align*} \end{proof} \section{Further Simulations} \label{simusec} Additional simulation examples can be found in the next three subsections. 
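Before turning to the simulations, the decision rule $\delta_B$ restated in the proof of Corollary \ref{consistencycor} (with $\epsilon_N = 0$) can be sketched numerically; this is a simplified stand-in with names of our choosing, not the implementation used in the experiments:

```python
import math
from statistics import NormalDist

def binomial_decision(y_true, y_pred, alpha=0.05):
    # Per-class out-of-sample error rates of the classifier.
    n0 = sum(1 for y in y_true if y == 0)
    n1 = len(y_true) - n0
    err0 = sum(1 for y, p in zip(y_true, y_pred) if y == 0 and p != 0) / n0
    err1 = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p != 1) / n1
    l_half = 0.5 * (err0 + err1)
    # sigma_hat_c: half the standard error of the sum of the two
    # (conditionally independent) binomial proportions.
    sigma_c = 0.5 * math.sqrt(err0 * (1 - err0) / n0 + err1 * (1 - err1) / n1)
    # Reject P = Q when L_hat_{1/2} falls significantly below 1/2.
    return l_half - 0.5 < sigma_c * NormalDist().inv_cdf(alpha)
```

A classifier at chance level gives $\hat{L}_{1/2} \approx 1/2$ and no rejection, while a classifier with balanced error well below $1/2$ triggers a rejection.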
\subsubsection{Gaussian Mean Shift} \label{subsec:gauss_dsmall} The classical and most prominent example of two-sample testing is the detection of a mean shift between two Gaussians. That is, we assume $\ensuremath{{\mathbb P}}_X=N(\boldsymbol{\mu}_1,I_{d\times d})$ and $\ensuremath{{\mathbb P}}_Y=N(\boldsymbol{\mu}_2,I_{d\times d})$ so that the testing problem reduces to \[ H_0: \boldsymbol{\mu}_1=\boldsymbol{\mu}_2 \ \ \text{vs} \ \ H_1:\boldsymbol{\mu}_1 \neq \boldsymbol{\mu}_2. \] We will implement this by simply taking $\boldsymbol{\mu}_2=\boldsymbol{\mu}_1 + (\delta/\sqrt{d}) \cdot \mathbf{1}$, for some $\delta \in \ensuremath{{\mathbb R}}$. It seems clear that our test should not be the first choice here. For $d$ much smaller than $n$, the optimal test would be given by Hotelling's test \citep{hotelling1931}. For $d$ approaching and even exceeding $n$, the MMD with a Gaussian kernel, or an LDA classifier as in \cite{DPLBpublished}, might be the logical next choice. For this reason, we also included the LDA classifier in this example. For all the other examples, the simulated power of the LDA two-sample test is always no better than the level, as expected. Allowing the trees in the forest to grow fully, i.e., setting the minimum node size to a low number like 1, one observes a type of overfitting of the Random Forest. Thus we would expect our test to be beaten at least by MMDboot. Surprisingly, this does not happen: As can be seen in Figure \ref{fig:1a}, all the RF-based tests display an impressive amount of power, with our hypoRF test the strongest in all the provided mean-shift scenarios. The Binomial test is even stronger than MMDboot and LDA, which seems surprising given the known strong performance of the MMD and LDA in this situation. The hypoRF test, on the other hand, towers above all others, together with MMD-full.
In fact, the hypoRF and Binomial test almost appear to give an upper and a lower bound, respectively, for MMD-full in this example. Aside from the impressive power of our tests, it is also interesting to note the difference between MMD-full and MMDboot. While this is not surprising, given that MMD-full is essentially the optimized version of MMDboot, we will see in subsequent examples that their power ranking is often reversed. To make the example more interesting, one might ask what happens if the mean shift is not present in all of the $d$ components, but only in $c < d$ of them? This was noted to be a difficult problem in \cite{NIPS2015_5685}. We therefore study a ``sparse'' case $c=2$ ($1\%$ out of $d=200$) and a ``moderately sparse'' case $c=20$ ($10\%$ out of $d=200$), now considering $\boldsymbol{\mu}_2=\boldsymbol{\mu}_1 + (\delta/\sqrt{c}) \cdot \mathbf{1}$. Note that there is some advantage here, as we now scale $\delta$ only by a factor of $\sqrt{c} < \sqrt{d}$. Thus, if a test is able to detect the sparse changes well, it should display a higher power than before. Indeed, as seen in Figure \ref{fig:1b}, the performance of the kernel tests is remarkably stable (given the randomness inherent in the simulation) when changing from $c=d=200$ to $c=20$ to $c=2$. On the other hand, the performance of the RF-based tests appears to increase. Thus the odds only shift in favor of our tests and the test of \cite{Cal2020}: For $c=20$ the optimized MMD, MMD-full, is still very competitive, though MMDboot, ME-full, and LDA fall further behind. While the hypoRF, the CPT-RF and the fully optimized MMD test reach a power of close to $1$, the remaining kernel tests and LDA stay below 0.7. The Binomial test, on the other hand, displays almost the same performance as MMD-full, ending with a power of a bit over 0.8. Its performance is amplified in the sparse case, in which the Binomial, CPT-RF and hypoRF test beat the other tests by a large margin.
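The dense and sparse mean-shift alternatives above can both be generated with one small helper (a sketch; the function name and interface are ours, with $c = d$ recovering the dense shift $\boldsymbol{\mu}_2 = \boldsymbol{\mu}_1 + (\delta/\sqrt{d})\mathbf{1}$):

```python
import random

def sample_mean_shift(n, d, delta, c=None, seed=0):
    # Draw n points from P = N(0, I_d) and n from Q = N(mu, I_d), where
    # mu equals delta/sqrt(c) in its first c components and 0 elsewhere.
    c = d if c is None else c
    rng = random.Random(seed)
    shift = delta / c ** 0.5
    X = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
    Y = [[rng.gauss(shift if j < c else 0.0, 1.0) for j in range(d)]
         for _ in range(n)]
    return X, Y
```

Note that the Euclidean length of the shift vector is $\delta$ in every case, so sparser shifts concentrate the same total signal in fewer coordinates.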
The power of both tests quickly increases from around 0.05 to 1 as $\delta$ passes from 0.2 to 1. While the performance of the Binomial test is impressive, the hypoRF test manages to pick up the nuanced changes even faster, at times almost doubling the power of the Binomial test, though the price to pay for this is a much higher computational effort. It should be said that both the sparse and moderately sparse case here are tailor-made for an RF-based classifier; not only do the changes appear in just a few components, they also appear marginally and are thus easy to detect in the splitting process of the trees. Nonetheless, it seems surprising how strongly the tests perform. We will now turn to more complex examples, where changes in the marginals alone are not as easy, or even impossible, to detect. \begin{figure} \caption{\textbf{(Mean Shift)}} \label{fig:1a} \end{figure} \begin{figure} \caption{$c=20$, moderately sparse case.} \caption{$c=2$, sparse case.} \caption{\textbf{(Mean Shift)}} \label{fig:1b} \end{figure} \subsubsection{Changing the Dependency Structure} The previous example focused only on cases where the changes in distribution can be observed marginally. For these examples, it would in principle be enough to compare the marginal distributions to detect the difference between $Q$ and $P$. An interesting class of problems arises when we instead leave the marginal distributions unchanged but change the \emph{dependency structure} when moving from $P$ to $Q$. We will hereafter study two examples; the first one concerning a simple change from a multivariate Gaussian with independent components to one with nonzero correlation. The second one again takes $P$ to have independent Gaussian components, but induces a more complex dependence structure on $Q$, via a $t$-copula. Thus for what follows, we set $P= N(0, I_{d\times d})$. First, consider $Q=N(0, \Sigma)$, where $\Sigma$ is some positive definite correlation matrix.
As for any $d$ there are potentially $d(d-1)/2$ unique correlation coefficients in this matrix, the number of possible specifications is enormous even for small $d$. For simplicity, we only consider a single correlation number $\rho$, which we either use (I) in all $d(d-1)/2$ or (II) in only $c < d(d-1)/2$ cases. Figure \ref{fig:3a_gaussian_dep} displays the result of case (I). Now the superiority of our hypoRF test is challenged, though it manages to at least hold its own against MMD-full and ME-full. The roles of MMD-full and MMD are also reversed, the latter now displaying a much higher power that in fact dwarfs the power of all other tests. MMD-full, together with the Binomial test, displays the smallest amount of power, both apparently suffering from the decrease in sample size. ME-full on the other hand, which suffers the same drawback, manages to put up a very strong performance, on par with the hypoRF. This is all the more impressive keeping in mind that the ME is a test that scales linearly in $N$. Case (II) can be seen in Figure \ref{fig:3b_gaussian_dep}. Again the resulting ``sparsity'' is beneficial for our test, with the hypoRF now being on par with the powerful MMD test, and with ME-full only slightly above the Binomial test. \begin{figure} \caption{\textbf{(Dependency)}} \label{fig:3a_gaussian_dep} \end{figure} \begin{figure} \caption{\textbf{(Dependency)}} \label{fig:3b_gaussian_dep} \end{figure} In the second example, we study a change in dependence, which is more interesting than the simple change of the covariance matrix. In particular, $Q$ is now given by a distribution that has standard Gaussian marginals bound together by a $t$-copula, see e.g., \cite{DeMc:05} or \cite[Chapter 5]{McFrEm:15}.
While the density and cdf of the resulting distribution $Q$ are relatively complicated, it is simple and insightful to simulate from this distribution, as described in \cite{DeMc:05}: Let $x \mapsto t_{\nu}(x)$ denote the cdf of a univariate $t$-distribution with $\nu$ degrees of freedom, and $T_{\nu}(R)$ the multivariate $t$-distribution with dispersion matrix $R$ and $\nu$ degrees of freedom. We first simulate from a multivariate $t$-distribution with dispersion matrix $R$ and degrees of freedom $\nu$, to obtain $\mathbf{T} \sim T_{\nu}(R)$. In the second step, simply set $\mathbf{Y}:= \left(\Phi^{-1} (t_{\nu}(T_1)), \ldots, \Phi^{-1} (t_{\nu}(T_d)) \right)^T$. We denote $Q= T_{\Phi}(\nu, R)$. What kind of dependency structure does $\mathbf{Y}$ have? It is well known that $\mathbf{T} \sim T_{\nu}(R)$ satisfies \[ \mathbf{T} \stackrel{D}{=} G^{-1/2} \mathbf{N}, \] with $\mathbf{N} \sim N(0, R)$ and $G \sim \func{Gamma}(\nu/2, \nu/2)$ independent of $\mathbf{N}$. As such, the dependence induced in $\mathbf{T}$, and therefore in $Q$, is dictated through the mutual latent random variable $G$. It persists even if $R=I_{d\times d}$ and induces more complex dependencies than mere correlation. These dependencies are moreover stronger the smaller $\nu$ is, though this effect is hard to quantify. One reason this dependency structure is particularly interesting in our case is that it spans more than two columns, contrary to correlation, which is an inherently bivariate property. We again study the case (I) with all $d$ components tied together by the $t$-copula, and (II) only the first $c=20< d$ components having a $t$-copula dependency, while the remaining $d-c=180$ columns are again independent $N(0,1)$. The results for case (I) are shown in Figure \ref{fig:3a_copula}. Now our tests, together with ME-full, cannot compete with CPT-RF, MMD and MMD-full. However, for the ME-full this very much depends again on the hyperparameters chosen; for some settings ME-full was as good as MMD-full.
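The two-step simulation recipe just described translates directly into code; the sketch below assumes SciPy for the univariate $t$ cdf, takes $R = I_{d\times d}$, and uses the representation $\mathbf{T} \stackrel{D}{=} G^{-1/2}\mathbf{N}$ with $G \sim \func{Gamma}(\nu/2, \nu/2)$:

```python
import random
from statistics import NormalDist
from scipy.stats import t as student_t  # univariate t cdf

def sample_t_copula_gaussian(n, d, nu, seed=0):
    # Q = standard normal marginals coupled by a t-copula with R = I:
    # T = G^{-1/2} N, G ~ Gamma(nu/2, nu/2), then Y_i = Phi^{-1}(t_nu(T_i)).
    rng = random.Random(seed)
    phi_inv = NormalDist().inv_cdf
    out = []
    for _ in range(n):
        # random.gammavariate takes (shape, scale); scale 2/nu gives rate nu/2
        g = rng.gammavariate(nu / 2.0, 2.0 / nu)
        scale = g ** -0.5  # shared latent factor couples all d coordinates
        out.append([phi_inv(student_t.cdf(scale * rng.gauss(0.0, 1.0), df=nu))
                    for _ in range(d)])
    return out
```

Each coordinate is exactly $N(0,1)$ marginally, while the shared factor $G$ induces the cross-column dependence discussed above.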
There appears, however, to be no clear way to determine these settings in advance. Both MMD-based tests manage to stay at a power of almost one, even for $\nu=8$, which seems to be an extremely impressive feat. The CPT-RF test falls behind the two MMD-based tests, but still has an impressively high power compared to our hypoRF test. Our best test, on the other hand, loses power quickly for $\nu > 4$, while the Binomial test does so even for $\nu > 2$. The results for case (II), shown in Figure \ref{fig:3b_copula}, are similarly insightful. Given the difficulty of this problem, it is not surprising that almost all of the tests fail to have any power for $\nu > 3$. The exception is once again the MMD, performing incredibly strongly up to $\nu=5$. The performance of MMDboot is not only interesting in that it beats our tests, but also in how it beats all the other kernel approaches in the same way. In particular, MMD-full stands no chance, which again is likely, in part, due to the reduced sample size the MMDboot has available for testing. Though hard to generalize, it appears from this analysis that a complex, rather weak dependence is a job best done by the plain MMDboot. \begin{figure} \caption{\textbf{(Dependency)}} \label{fig:3a_copula} \end{figure} \begin{figure} \caption{\textbf{(Dependency)}} \label{fig:3b_copula} \end{figure} \subsubsection{Multivariate Blob} \label{Blobsection} A well-known difficult example is the ``Gaussian Blob'', an example where ``the main data variation does not reflect the difference between $P$ and $Q$'' \citep{NIPS2012_4727}, see e.g., \cite{NIPS2012_4727} and \cite{NIPS2016_6148}. We study here the following generalization of this idea: Let $T \in \ensuremath{{\mathbb N}}$, $\boldsymbol{\mu}=\left( \boldsymbol{\mu}_t \right)_{t=1}^T$, $\boldsymbol{\mu}_t \in \ensuremath{{\mathbb R}}^d$, and $\boldsymbol{\Sigma}=\left( \Sigma_t \right)_{t=1}^T$, with $\Sigma_t$ a positive definite $d \times d$ matrix.
We consider the mixture \[ N( \boldsymbol{\mu}, \boldsymbol{\Sigma}):= \sum_{t=1}^T \frac{1}{T} N(\boldsymbol{\mu}_t, \Sigma_t). \] For $\boldsymbol{\mu}$, we will always use a base vector $w \in \ensuremath{{\mathbb R}}^c$ of $c$ values and include in $\boldsymbol{\mu}$ all possible enumerations of choosing $d$ elements from $w$ with replacement. This gives a total number of $T=c^d$ possibilities, and each $\boldsymbol{\mu}_t \in \ensuremath{{\mathbb R}}^d$ is one such enumeration. For example, if $c=d=2$ and $w=(1,2)$, then we may set $\boldsymbol{\mu}_1=(1,1)$, $\boldsymbol{\mu}_2=(2,2)$, $\boldsymbol{\mu}_3=(1,2)$, $\boldsymbol{\mu}_4=(2,1)$. We will refer to each element of this mixture as a ``Blob'' and study two experiments in which we change the covariance matrices $\Sigma_t$ of the blobs when changing from $P$ to $Q$, i.e., \[ P=N( \boldsymbol{\mu}, \boldsymbol{\Sigma}_X), \ \ Q=N(\boldsymbol{\mu}, \boldsymbol{\Sigma}_Y). \] Obviously it quickly becomes infeasible to simulate from $N( \boldsymbol{\mu}, \boldsymbol{\Sigma})$, as the number of blobs explodes with increasing $d$. However, as shown below, this difficulty can be circumvented when $\Sigma_t$ is diagonal for all $t$. The example also considerably worsens the curse of dimensionality, as even for small $d$ the number of observations in each Blob is likely to be very small. Thus, with $300$ observations, we have a rather difficult example at hand. We will subsequently study two experiments. The first one takes $w=\left( 1,2,3 \right)$, $\Sigma_{1,X}=\Sigma_{2,X}=\ldots =\Sigma_{T,X}=I_{d\times d}$ and $\Sigma_{1,Y}=\Sigma_{2,Y}=\ldots =\Sigma_{T,Y}=\Sigma$, a correlation matrix with nonzero off-diagonal elements. In particular, we generate $\Sigma$ randomly at the beginning of the $S$ trials for a given $d$, such that (1) it is a positive definite correlation matrix and (2) it has a ratio of minimal to maximal eigenvalue of at most $1-1/\sqrt{d}$.
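The enumeration of the blob centres and sampling from the mixture are straightforward to implement when all blobs share one covariance matrix, as in the first experiment. A sketch in Python (helper names hypothetical), assuming NumPy:

```python
import itertools
import numpy as np

def blob_means(w, d):
    """All T = len(w)**d 'Blob' centres: every d-tuple over the base vector w."""
    return np.array(list(itertools.product(w, repeat=d)), dtype=float)

def sample_blob_mixture(n, means, Sigma, seed=None):
    """Draw n points from the equal-weight mixture sum_t (1/T) N(mu_t, Sigma)."""
    rng = np.random.default_rng(seed)
    T, d = means.shape
    labels = rng.integers(T, size=n)  # uniformly chosen blob per observation
    noise = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    return means[labels] + noise

mus = blob_means([1, 2], d=2)  # the c = d = 2 example: (1,1),(1,2),(2,1),(2,2)
X = sample_blob_mixture(300, mus, np.eye(2), seed=0)
```

For large $d$ this explicit enumeration is exactly what becomes infeasible; the diagonal case below avoids it.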
For $d=2$, this corresponds to the original Blob example as in \cite{NIPS2012_4727}, albeit with a less strict bound on the eigenvalue ratio. The resulting distribution for $d=1$ and $d=2$ is plotted in Figure \ref{figblobillustration1}. Table \ref{originalblobresults} displays the results of the experiment with our usual set-up, varying $d \in \{2,3\}$ and the number of blobs between $2^d$ and $3^d$. Very surprisingly, our hypoRF test is the only one displaying notable power throughout the example. MMD and MMD-full are not able to detect any difference between the distributions at this sample size. Interestingly, the ME test, which we would have expected to work well in this example, also only attains the nominal level. However, this again depends on the specification chosen for the hyperparameters of the optimization. For another parametrization, we obtained a power of 0.116 for $d=2$, $blobs=2^2$ and $0.082$ for $d=2$ and $blobs=3^2$, all other values being at the level. \begin{table} \centering \begin{tabular}{l*{7}{c}} N & d & Blobs & ME-full & MMD & MMD-full & Binomial & hypoRF \\ \hline 600 & 2 & $2^2$ & 0.056 & 0.054 & 0.072 & 0.204 & 0.306 \\ 600 & 2 & $3^2$ & 0.064 & 0.048 & 0.070 & 0.070 & 0.190 \\ 600 & 3 & $2^3$ & 0.052 & 0.040 & 0.060 & 0.088 & 0.116 \\ 600 & 3 & $3^3$ & 0.056 & 0.060 & 0.060 & 0.064 & 0.084 \\ \end{tabular} \caption{\textbf{(Blob)} Power for different $N$, $d$ and numbers of Blobs. Each power was estimated from $S=500$ simulation runs per test.} \label{originalblobresults} \end{table} The second experiment takes $w=\left( -5,0, 5 \right)$ and, for all $t$, $\Sigma_{t,X}$, $\Sigma_{t,Y}$ diagonal and generated similarly to $\boldsymbol{\mu}$. That is, we take $\Sigma_{t,X}=\mbox{diag}(\sigma_{t,X}^2)$, where each $\sigma_{t,X}$ is a vector of $d$ draws with replacement from a base vector $v_X \in \ensuremath{{\mathbb R}}^{3}$, and analogously for $\Sigma_{t,Y}$.
In this case, it is possible to rewrite $P$ and $Q$ as \[ P= \prod_{j=1}^d P_X \text{ and } Q= \prod_{j=1}^d P_Y, \] with \[ P_X=\frac{1}{3} N(w_1, v^2_{1,X}) + \frac{1}{3} N(w_2, v^2_{2,X}) + \frac{1}{3} N(w_3, v^2_{3,X}), \] and \[ P_Y=\frac{1}{3} N(w_1, v^2_{1,Y}) + \frac{1}{3} N(w_2, v^2_{2,Y}) + \frac{1}{3} N(w_3, v^2_{3,Y}). \] As such, it is feasible to simulate from $P$ and $Q$ even for large $d$, by simply simulating $d$ times from $P_X$ and $P_Y$, respectively. We consider $w=\left( -5,0,5 \right)$ and the standard deviations \begin{align*} \left( v_{1,X},v_{2,X},v_{3,X}\right)&= \left(1,1,1 \right),\\ \left( v_{1,Y},v_{2,Y},v_{3,Y}\right)&= \left(1,2,1 \right). \end{align*} The change between the distributions is subtle even in notation; only the standard deviation of the middle mixture component is changed from 1 to 2. This has the effect that the middle component gets spread out more, causing it to blend into the other two. The resulting distribution for $d=1$ and $d=2$ is plotted in Figure \ref{figblobillustration2}. Unsurprisingly, $P$ looks quite similar to the one in Figure \ref{figblobillustration1}. The marginal plots ($d=1$) appear to be very different, though this is only an effect of having centers $(-5,0,5)$ instead of $(1,2,3)$. On the other hand, though hard to see, the different blobs of $Q$ display different behavior in variance; every Blob in positions $(2,1)$, $(2,2)$, $(2,3)$, $(1,2)$, $(3,2)$ on the $3 \times 3$ grid has its variance increased. The results of the simulations are shown in Figure \ref{newblobresults}. The Binomial, CPT-RF and hypoRF tests display a power that increases quickly with dimension, despite the decreasing number of observations in each Blob. This also holds true, to a smaller degree, for the ME-full, which, due to its location optimization, appears to be able to adapt to the problem structure. However, its power lags considerably behind the RF-based tests.
In contrast, the performance of the MMD-based tests quickly deteriorates as the number of samples per Blob decreases. Indeed, from a kernel perspective, all points have more or less the same distance from each other, whether they come from $P$ or $Q$. Thus the extreme power of the MMD to detect ``joint'' changes in the structure of the data (i.e., dependency changes) cements its downfall here, as it is unable to detect the marginal difference. This example might appear rather strange; it has the flavor of a mathematical counterexample, simple or even nonsensical at the outset, but proving an important point: While the differences between $P$ and $Q$ are obvious to the naked eye if a single marginal of each is plotted as a histogram, the example manages to completely fool the kernel tests (under a Gaussian kernel, at least). As such, it is not only a demonstration of the merits of our test, but also a way of fooling very general kernel tests. It might be interesting to find real-world applications where such a data structure is likely.
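The product form of $P$ and $Q$ above makes simulation trivial even for large $d$: each coordinate is an independent draw from a three-component univariate mixture. A sketch in Python (helper name hypothetical), assuming NumPy:

```python
import numpy as np

def sample_product_mixture(n, d, w, v, seed=None):
    """Draw n samples from the product measure whose d coordinates are i.i.d.
    draws from (1/3) N(w_1, v_1^2) + (1/3) N(w_2, v_2^2) + (1/3) N(w_3, v_3^2)."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w, dtype=float)
    v = np.asarray(v, dtype=float)
    comp = rng.integers(len(w), size=(n, d))   # mixture component per coordinate
    return rng.normal(loc=w[comp], scale=v[comp])

w = (-5.0, 0.0, 5.0)
X = sample_product_mixture(300, 200, w, v=(1, 1, 1), seed=0)  # sample from P
Y = sample_product_mixture(300, 200, w, v=(1, 2, 1), seed=1)  # sample from Q
```

Only the middle standard deviation differs between the two calls, mirroring the subtlety of the change from $P$ to $Q$.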
\begin{figure} \caption{$P$} \caption{$Q$} \caption{\textbf{(Blob)} The distributions $P$ and $Q$ for $d=1$ and $d=2$ in the first experiment.} \label{figblobillustration1} \end{figure} \begin{figure} \caption{$P$} \caption{$Q$} \caption{\textbf{(Blob)} The distributions $P$ and $Q$ for $d=1$ and $d=2$ in the second experiment.} \label{figblobillustration2} \end{figure} \begin{figure} \caption{\textbf{(Blob)} Power of the tests in the second experiment as a function of $d$.} \label{newblobresults} \end{figure} \section{Financial Riskfactors} \label{factors} \begin{table} \begin{small} \begin{center} \captionsetup{type=table} \begin{align*} \resizebox{12cm}{!}{$\displaystyle \begin{array}[t]{lllll}\toprule \mbox{No.}& \mbox{Acronym}& \mbox{Firm Characteristic} & \mbox{Frequency} & \mbox{Literature} \\ \hline 1 & \mbox{absacc} &\mbox{Absolute accruals} &\mbox{Annual} &\mbox{\cite{bandyopadhyay2010accrual}}\\ 2 & \mbox{acc} &\mbox{Working capital accruals} &\mbox{Annual} &\mbox{\cite{sloan1996stock}}\\ 3 & \mbox{aeavol} &\mbox{Abnormal earnings announcement volume} &\mbox{Quarterly} &\mbox{\cite{lerman2008high}}\\ 4 & \mbox{age} &\mbox{Years since first Compustat coverage} &\mbox{Annual} &\mbox{\cite{jiang:2005}}\\ 5 & \mbox{agr} &\mbox{Asset growth} &\mbox{Annual} &\mbox{\cite{cooper2008asset}}\\ 6 & \mbox{baspread} &\mbox{Bid-ask spread} &\mbox{Monthly} &\mbox{\cite{amihud:1989}}\\ 7 & \mbox{beta} &\mbox{Beta} &\mbox{Monthly} &\mbox{\cite{fama:1973}}\\ 8 & \mbox{betasq} &\mbox{Beta squared} &\mbox{Monthly} &\mbox{\cite{fama:1973}}\\ 9 & \mbox{bm} &\mbox{Book-to-market} &\mbox{Annual} &\mbox{\cite{rosenberg1985persuasive}}\\ 10 & \mbox{bmia} &\mbox{Industry-adjusted book-to-market} &\mbox{Annual} &\mbox{\cite{asness:2000}}\\ 11 & \mbox{cash} &\mbox{Cash holdings} &\mbox{Quarterly} &\mbox{\cite{palazzo:2012}}\\ 12 & \mbox{cashdebt} &\mbox{Cash flow to debt} &\mbox{Annual} &\mbox{\cite{ou:1989}}\\ 13 & \mbox{cashpr} &\mbox{Cash productivity} &\mbox{Annual} &\mbox{\cite{chandrashekar2009productivity}}\\ 14 & \mbox{cfp} &\mbox{Cash flow to price ratio} &\mbox{Annual} &\mbox{\cite{desai:2004}}\\ 15 & \mbox{cfpia} &\mbox{Industry-adjusted cash flow to price ratio} &\mbox{Annual}
&\mbox{\cite{asness:2000}}\\ 16 & \mbox{chatoia} &\mbox{Industry-adjusted change in asset turnover} &\mbox{Annual} &\mbox{\cite{soliman2008use}}\\ 17 & \mbox{chcsho} &\mbox{Change in shares outstanding} &\mbox{Annual} &\mbox{\cite{pontiff2008share}}\\ 18 & \mbox{chempia} &\mbox{Industry-adjusted change in employees} &\mbox{Annual} &\mbox{\cite{asness:2000}}\\ 19 & \mbox{chinv} &\mbox{Change in inventory} &\mbox{Annual} &\mbox{\cite{thomas2002inventory}}\\ 20 & \mbox{chmom} &\mbox{Change in 6-month momentum} &\mbox{Monthly} &\mbox{\cite{gettleman2006acceleration}}\\ 21 & \mbox{chpmia} &\mbox{Industry-adjusted change in profit margin} &\mbox{Annual} &\mbox{\cite{soliman2008use}}\\ 22 & \mbox{chtx} &\mbox{Change in tax expense} &\mbox{Quarterly} &\mbox{\cite{thomas2011tax}}\\ 23 & \mbox{cinvest} &\mbox{Corporate investment} &\mbox{Quarterly} &\mbox{\cite{titman2004capital}}\\ 24 & \mbox{convind} &\mbox{Convertible debt indicator} &\mbox{Annual} &\mbox{\cite{valta:2016}}\\ 25 & \mbox{currat} &\mbox{Current ratio} &\mbox{Annual} &\mbox{\cite{ou:1989}}\\ 26 & \mbox{depr} &\mbox{Depreciation / PP\&E} &\mbox{Annual} &\mbox{\cite{holthausen:1992}}\\ 27 & \mbox{divi} &\mbox{Dividend initiation} &\mbox{Annual} &\mbox{\cite{michaely:1995}}\\ 28 & \mbox{divo} &\mbox{Dividend omission} &\mbox{Annual} &\mbox{\cite{michaely:1995}}\\ 29 & \mbox{dolvol} &\mbox{Dollar trading volume} &\mbox{Monthly} &\mbox{\cite{chordia2001trading}}\\ 30 & \mbox{dy} &\mbox{Dividend to price} &\mbox{Annual} &\mbox{\cite{litzenberger:1982}}\\ 31 & \mbox{ear} &\mbox{Earnings announcement return} &\mbox{Quarterly} &\mbox{\cite{kishore:2008}}\\ 32 & \mbox{egr} &\mbox{Growth in common shareholder equity} &\mbox{Annual} &\mbox{\cite{richardson2005accrual}}\\ 33 & \mbox{ep} &\mbox{Earnings to price} &\mbox{Annual} &\mbox{\cite{basu:1977}}\\ 34 & \mbox{gma} &\mbox{Gross profitability} &\mbox{Annual} &\mbox{\cite{novy:2013}}\\ 35 & \mbox{grcapx} &\mbox{Growth in capital expenditures} &\mbox{Annual} 
&\mbox{\cite{anderson:2006}}\\ 36 & \mbox{grltnoa} &\mbox{Growth in long term net operating assets} &\mbox{Annual} &\mbox{\cite{fairfield:2003}}\\ 37 & \mbox{herf} &\mbox{Industry sales concentration} &\mbox{Annual} &\mbox{\cite{hou:2006}}\\ 38 & \mbox{hire} &\mbox{Employee growth rate} &\mbox{Annual} &\mbox{\cite{belo:2014}}\\ 39 & \mbox{idiovol} &\mbox{Idiosyncratic return volatility} &\mbox{Monthly} &\mbox{\cite{ali:2003}}\\ 40 & \mbox{ill} &\mbox{Illiquidity} &\mbox{Monthly} &\mbox{\cite{amihud:2002}}\\ 41 & \mbox{indmom} &\mbox{Industry momentum} &\mbox{Monthly} &\mbox{\cite{moskowitz:1999}}\\ 42 & \mbox{invest} &\mbox{Capital expenditures and inventory} &\mbox{Annual} &\mbox{\cite{chen:2010}}\\ 43 & \mbox{lev} &\mbox{Leverage} &\mbox{Annual} &\mbox{\cite{bhandari1988debt}}\\ 44 & \mbox{lgr} &\mbox{Growth in long-term debt} &\mbox{Annual} &\mbox{\cite{richardson2005accrual}}\\ 45 & \mbox{maxret} &\mbox{Maximum daily return} &\mbox{Monthly} &\mbox{\cite{bali2011maxing}}\\ 46 & \mbox{mom12m} &\mbox{12-month momentum} &\mbox{Monthly} &\mbox{\cite{jegadeesh:titman:1993}}\\ 47 & \mbox{mom1m} &\mbox{1-month momentum} &\mbox{Monthly} &\mbox{\cite{jegadeesh:titman:1993}}\\ 48 & \mbox{mom36m} &\mbox{36-month momentum} &\mbox{Monthly} &\mbox{\cite{jegadeesh:titman:1993}}\\ 49 & \mbox{mom6m} &\mbox{6-month momentum} &\mbox{Monthly} &\mbox{\cite{jegadeesh:titman:1993}}\\ 50 & \mbox{ms} &\mbox{Financial statement score} &\mbox{Quarterly} &\mbox{\cite{mohanram:2005}}\\ \bottomrule \end{array} $} \end{align*} \captionof{table}{\textbf{(Riskfactors)} This table lists the 94 financial characteristics we use in Section \ref{sec:Real}. We obtain the characteristics used by \cite{dacheng2020} from Dacheng Xiu's webpage; see \url{http://dachxiu.chicagobooth.edu}. 
Note that the data is collected in \cite{green:2017}.} \label{table:factors1} \end{center} \end{small} \end{table} \begin{table} \begin{small} \begin{center} \begin{align*} \resizebox{12cm}{!}{$\displaystyle \begin{array}{lllll}\toprule \mbox{No.}& \mbox{Acronym}& \mbox{Firm Characteristic} & \mbox{Frequency} & \mbox{Literature} \\ \hline 51 & \mbox{mvel1} &\mbox{Size} &\mbox{Monthly} &\mbox{\cite{banz1981relationship}}\\ 52 & \mbox{mveia} &\mbox{Industry-adjusted size} &\mbox{Annual} &\mbox{\cite{asness:2000}}\\ 53 & \mbox{nincr} &\mbox{Number of earnings increases} &\mbox{Quarterly} &\mbox{\cite{barth:1999}}\\ 54 & \mbox{operprof} &\mbox{Operating profitability} &\mbox{Annual} &\mbox{\cite{fama:french:2015}}\\ 55 & \mbox{orgcap} &\mbox{Organizational capital} &\mbox{Annual} &\mbox{\cite{eisfeldt:2013}}\\ 56 & \mbox{pchcapxia} &\mbox{Industry adjusted change in capital exp.} &\mbox{Annual} &\mbox{\cite{abarbanell:1998}}\\ 57 & \mbox{pchcurrat} &\mbox{Change in current ratio} &\mbox{Annual} &\mbox{\cite{ou:1989}}\\ 58 & \mbox{pchdepr} &\mbox{Change in depreciation} &\mbox{Annual} &\mbox{\cite{holthausen:1992}}\\ 59 & \mbox{pchgmpchsale} &\mbox{Change in gross margin - change in sales} &\mbox{Annual} &\mbox{\cite{abarbanell:1998}}\\ 60 & \mbox{pchquick} &\mbox{Change in quick ratio} &\mbox{Annual} &\mbox{\cite{ou:1989}}\\ 61 & \mbox{pchsalepchinvt} &\mbox{Change in sales - change in inventory} &\mbox{Annual} &\mbox{\cite{abarbanell:1998}}\\ 62 & \mbox{pchsalepchrect} &\mbox{Change in sales - change in A/R} &\mbox{Annual} &\mbox{\cite{abarbanell:1998}}\\ 63 & \mbox{pchsalepchxsga} &\mbox{Change in sales - change in SG\&A} &\mbox{Annual} &\mbox{\cite{abarbanell:1998}}\\ 64 & \mbox{ppchsaleinv} &\mbox{Change sales-to-inventory} &\mbox{Annual} &\mbox{\cite{ou:1989}}\\ 65 & \mbox{pctacc} &\mbox{Percent accruals} &\mbox{Annual} &\mbox{\cite{hafzalla2011percent}}\\ 66 & \mbox{pricedelay} &\mbox{Price delay} &\mbox{Monthly} &\mbox{\cite{hou:2005}}\\ 67 & \mbox{ps} 
&\mbox{Financial statements score} &\mbox{Annual} &\mbox{\cite{piotroski2000value}}\\ 68 & \mbox{quick} &\mbox{Quick ratio} &\mbox{Annual} &\mbox{\cite{ou:1989}}\\ 69 & \mbox{rd} &\mbox{R\&D increase} &\mbox{Annual} &\mbox{\cite{eberhart2004examination}}\\ 70 & \mbox{rdmve} &\mbox{R\&D to market capitalization} &\mbox{Annual} &\mbox{\cite{guo:2006}}\\ 71 & \mbox{rdsale} &\mbox{R\&D to sales} &\mbox{Annual} &\mbox{\cite{guo:2006}}\\ 72 & \mbox{realestate} &\mbox{Real estate holdings} &\mbox{Annual} &\mbox{\cite{tuzel:2010}}\\ 73 & \mbox{retvol} &\mbox{Return volatility} &\mbox{Monthly} &\mbox{\cite{ang2006cross}}\\ 74 & \mbox{roaq} &\mbox{Return on assets} &\mbox{Quarterly} &\mbox{\cite{balakrishnan2010post}}\\ 75 & \mbox{roavol} &\mbox{Earnings volatility} &\mbox{Quarterly} &\mbox{\cite{francis:2004}}\\ 76 & \mbox{roeq} &\mbox{Return on equity} &\mbox{Quarterly} &\mbox{\cite{hou2015digesting}}\\ 77 & \mbox{roic} &\mbox{Return on invested capital} &\mbox{Annual} &\mbox{\cite{brown:2007}}\\ 78 & \mbox{rsup} &\mbox{Revenue surprise} &\mbox{Quarterly} &\mbox{\cite{kama:2009}}\\ 79 & \mbox{salecash} &\mbox{Sales to cash} &\mbox{Annual} &\mbox{\cite{ou:1989}}\\ 80 & \mbox{saleinv} &\mbox{Sales to inventory} &\mbox{Annual} &\mbox{\cite{ou:1989}}\\ 81 & \mbox{salerec} &\mbox{Sales to receivables} &\mbox{Annual} &\mbox{\cite{ou:1989}}\\ 82 & \mbox{secured} &\mbox{Secured debt} &\mbox{Annual} &\mbox{\cite{valta:2016}}\\ 83 & \mbox{securedind} &\mbox{Secured debt indicator} &\mbox{Annual} &\mbox{\cite{valta:2016}}\\ 84 & \mbox{sgr} &\mbox{Sales growth} &\mbox{Annual} &\mbox{\cite{lakonishok1994contrarian}}\\ 85 & \mbox{sin} &\mbox{Sin stocks} &\mbox{Annual} &\mbox{\cite{hong:2009}}\\ 86 & \mbox{sp} &\mbox{Sales to price} &\mbox{Annual} &\mbox{\cite{barbee:1996}}\\ 87 & \mbox{stddolvol} &\mbox{Volatility of liquidity (dollar trading volume)} &\mbox{Monthly} &\mbox{\cite{chordia2001trading}}\\ 88 & \mbox{stdturn} &\mbox{Volatility of liquidity (share turnover)} &\mbox{Monthly} 
&\mbox{\cite{chordia2001trading}}\\ 89 & \mbox{stdacc} &\mbox{Accrual volatility} &\mbox{Quarterly} &\mbox{\cite{bandyopadhyay2010accrual}}\\ 90 & \mbox{stdcf} &\mbox{Cash flow volatility} &\mbox{Quarterly} &\mbox{\cite{huang2009cross}}\\ 91 & \mbox{tang} &\mbox{Debt capacity/firm tangibility} &\mbox{Annual} &\mbox{\cite{almeida:2007}}\\ 92 & \mbox{tb} &\mbox{Tax income to book income} &\mbox{Annual} &\mbox{\cite{lev:2004}}\\ 93 & \mbox{turn} &\mbox{Share turnover} &\mbox{Monthly} &\mbox{\cite{datar1998liquidity}}\\ 94 & \mbox{zerotrade} &\mbox{Zero trading days} &\mbox{Monthly} &\mbox{\cite{liu:2006}}\\ \bottomrule \end{array} $} \end{align*} \captionof{table}{\textbf{(Riskfactors)} Table \ref{table:factors1} continued.} \label{table:factors2} \end{center} \end{small} \end{table} \end{document}
\begin{document} \title{On the dimension of additive sets} \author{P. Candela} \address{D\'epartement de math\'ematiques et applications\newline \indent \'Ecole normale sup\'erieure, Paris, France} \email{[email protected]} \author{H. A. Helfgott} \address{D\'epartement de math\'ematiques et applications\newline \indent \'Ecole normale sup\'erieure, Paris, France} \email{[email protected]} \thanks{Research supported by project ANR-12-BS01-0011 CAESAR and by a postdoctoral grant of the \'Ecole normale sup\'erieure, Paris.} \subjclass[2010]{Primary 11B30; Secondary 05D40} \keywords{Additive dimension, dissociated sets} \maketitle \begin{abstract} We study the relations between several notions of dimension for an additive set, some of which are well-known and some of which are more recent, appearing for instance in work of Schoen and Shkredov. We obtain bounds for the ratios between these dimensions by improving an inequality of Lev and Yuster, and we show that these bounds are asymptotically sharp, using in particular the existence of large dissociated subsets of $\{0,1\}^n\subset \mathbb{Z}^n$. \end{abstract} \section{Introduction} Let $A$ be an additive set, that is, a finite subset of an abelian group. A \emph{subset sum} of $A$ is a sum of the form $\sum_{a\in A'} a$ for some set $A'\subset A$. By a \emph{$[-1,1]$-combination of} $A$, we mean a sum $\sum_{a\in A} \varepsilon_a\, a$ with coefficients $\varepsilon_a$ lying in $ [-1,1]=\{-1,0,1\}$. \begin{defn} A subset $D$ of an abelian group is said to be \emph{dissociated} if the subset sums of $D$ are pairwise distinct; equivalently, the only $[-1,1]$-combination of $D$ that equals 0 is the one with all coefficients equal to 0. We say that $D$ is a \emph{maximal} dissociated subset of $A$ if there is no dissociated set $D'\subset A$ such that $D'\supsetneq D$. \end{defn} Dissociativity plays an important role in additive combinatorics and harmonic analysis; see \cite{TomNotes} and \cite[\S 4.5]{T-V}. 
In particular, it provides an analogue, in the setting of general abelian groups, of the concept of linear independence from linear algebra, and it is often used to define a notion of dimension for an additive set. For a recent instance, in the work of Schoen and Shkredov \cite{SS} the terminology `additive dimension of $A$' is used for the maximal cardinality of a dissociated subset of $A$. We shall call this quantity the dissociativity dimension. \begin{defn} Let $A$ be an additive set. We define the \emph{dissociativity dimension} of $A$ to be the number $d_d(A):=\max \{|D|: D\subset A,\; D\textrm{ is dissociated}\}$. We say that $D$ is a \emph{maximum} dissociated subset of $A$ if $|D|=d_d(A)$. We also define the \emph{lower dissociativity dimension} of $A$ to be the number $d_d^-(A):=\min \{|D|: D\subset A \textrm{ is maximal dissociated}\}$. \end{defn} The variant $d_d^-(A)$ is considered less often than $d_d(A)$ in the literature; it appears for instance in \cite[Section 8]{SS}, where it is denoted by $\tilde d(A)$. In linear algebra, the concepts of linear independence and dimension are linked to that of a linear-span. The well-known basic result is that in a vector space the maximum cardinality of a linearly independent set, if finite, is equal to the minimum cardinality of a spanning set, the resulting number being by definition the dimension of the space. In the more general context of additive sets, there is an analogue of the linear span, related to dissociativity. We define it and give a corresponding notion of dimension, as follows. \begin{defn} Given a subset $S$ of an abelian group $G$, the 1\emph{-span} of $S$, denoted $\langle S \rangle$, is the set of all $[-1,1]$-combinations of $S$. Given a subset $A\subset G$, we shall call a set $S\subset G$ satisfying $\langle S\rangle \supset A$ a \emph{1-spanning set} for $A$. 
We define the \emph{1-span dimension} of an additive set $A$ to be the number $d_s(A):=\min \{|S|: S\subset A,\; \langle S\rangle \supset A\}$. \end{defn} This quantity has also been considered in \cite[Section 8]{SS}, where it is denoted $d(A)$. A variant of this notion, which can be called the \emph{lower 1-span dimension} of $A$, is the number $d_s^-(A):=\min \{|S|: S\subset G,\; \langle S\rangle \supset A\}$; here $G$ is the ambient abelian group containing $A$ and the sets $S$ are allowed to have elements in $G\setminus A$. This variant also appears in \cite{SS}, where it is denoted $d_*(A)$. It had already appeared in previous works, notably as the number denoted $\ell(A)$ in \cite{Sch}. Given the basic result from linear algebra recalled above, it is natural to compare the numbers $d_d(A),d_d^-(A)$ with $d_s(A),d_s^-(A)$. It follows promptly from the definitions that if $D$ is a maximal dissociated subset of $A$ then $\langle D\rangle \supset A$. We then deduce that \[ d_s^-(A)\leq d_s(A)\leq d_d^-(A)\leq d_d(A). \] In contrast to the linear-algebra setting, each of these inequalities can be a strict one. In this paper we study the extent to which these quantities can differ from each other. Our first result is the following lower bound on the ratio $d_s^-(A) / d_d(A)$. \begin{theorem}\label{thm:main} Let $A$ be an additive set. Then we have \begin{equation}\label{eq:mainineq} \frac{d_s^-(A)}{d_d(A)}\;\geq \; \frac{1}{\log_4 d_d(A)}\; \big(1+o(1)_{d_d(A)\to \infty}\big). \end{equation} \end{theorem} We deduce this from an inequality relating the size of an arbitrary 1-spanning set for $A$ to the size of an arbitrary dissociated subset of $A$; see Proposition \ref{lem:dslb}. This inequality can be viewed as a refinement of an inequality of Lev and Yuster, namely inequality $(*)$ in \cite[Proof of Theorem 2]{LY}. 
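For small sets of integers, the definitions above can be checked directly by exhaustive search. The following brute-force sketch in Python (helper names hypothetical, exponential in $|A|$, integers only) uses the subset-sum characterization of dissociativity:

```python
from itertools import combinations

def is_dissociated(D):
    """D (a collection of integers) is dissociated iff its 2^|D| subset sums
    are pairwise distinct; equivalently, the only vanishing {-1,0,1}-combination
    of D is the trivial one."""
    sums = [sum(s) for r in range(len(D) + 1) for s in combinations(D, r)]
    return len(sums) == len(set(sums))

def dissociativity_dimension(A):
    """d_d(A): maximum cardinality of a dissociated subset of A."""
    A = list(A)
    for r in range(len(A), 0, -1):
        if any(is_dissociated(c) for c in combinations(A, r)):
            return r
    return 0

# {1,2,3} is not dissociated (1+2 = 3), but {1,2} is, so d_d({1,2,3}) = 2.
```

The set $\{1,2,4,8\}$ of powers of 2 is dissociated, in line with the classical fact that binary expansions are unique.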
It is then natural to wonder whether there exist additive sets for which the ratio $d_s^-/d_d$ reaches the lower bound given by \eqref{eq:mainineq}, and more precisely whether each of the ratios of consecutive dimensions, i.e. $d_s^-/d_s,d_s/d_d^-,d_d^-/d_d$ can reach this lower bound. For each positive integer $n$, let $Q_n$ denote the discrete cube $\{0,1\}^n$ viewed as an additive set in $\mathbb{Z}^n$. It follows from known results that $d_d(Q_n)=n\log_4 n\;(1+o(1))$ as $n\to \infty$. This was established independently by Lindstr\"om \cite{Lind} and by Cantor and Mills \cite{C&M}; the result is related to the \emph{coin weighing problem}, and similar results have been treated in other works (for a recent treatment, providing several references, see \cite{Bs}). Let $D_n$ be a dissociated subset of $Q_n$ of cardinality $|D_n|=d_d(Q_n)$. Since the standard basis is itself a maximal dissociated subset of $Q_n$ of minimum size $n$, the set $Q_n$ shows that the ratio $d_d^-(A)/d_d(A)$ can be as small as $1/\log_4 d_d(A)$ asymptotically as $d_d(A)\to\infty$. Hence the lower bound in \eqref{eq:mainineq} is asymptotically sharp. Moreover, this set $D_n$ itself is an example showing that $d_s^-(A)/d_s(A)$ can also be as small as $1/\log_4 d_d(A)$, since for $D_n$ we have $d_s^-(D_n)=n$ yet $d_s(D_n)=|D_n|=d_d(D_n)$ (as $D_n$ is dissociated). Our second result completes the picture by showing that the remaining ratio $d_s(A)/d_d^-(A)$ can also be this small. \begin{theorem}\label{thm:midratio} For each positive integer $n$ there exists a set $A_n\subset \{0,1,2\}^n$ satisfying $d_d(A_n)= n\log_4 n\, (1+o(1)_{n\to \infty})$ and such that \begin{equation}\label{eq:midratio} \frac{d_s(A_n)}{d_d^-(A_n)}\leq \frac{1}{\log_4 d_d(A_n)}\big(1+o(1)_{n\to\infty}\big). \end{equation} \end{theorem} Theorems \ref{thm:main} and \ref{thm:midratio} are proved in Section \ref{section:main}. 
In Section \ref{section:[N]} we consider sets of integers to examine whether, for at least some nice family of subsets of $\mathbb{Z}$, we have that for every set $A$ in the family the dissociativity dimensions $d_d(A)$, $d_d^-(A)$ lie closer to the spanning dimensions $d_s(A),d_s^-(A)$ than is guaranteed by \eqref{eq:mainineq}. The family of intervals $[N]=\{1,2,\ldots,N\}$ is a natural one to consider; let us recall for instance (see \cite[p. 59]{Erdos-Graham}) that it is one of the oldest problems of Erd\H os to prove that $d_d([N])=\log_2 N + O(1)$. We do not pursue that problem here, but we prove the following. \begin{theorem}\label{thm:[N]} For any positive integer $N$ we have \[ d_s([N])=d_d^-([N])= \lfloor \log_3 N\rfloor + \big\lceil \log_3 2N -\lfloor \log_3 N\rfloor \big\rceil. \] \end{theorem} In the final section we briefly describe a relation between the dimension $d_s$ and a result of Schoen on maximal densities of subsets of $\mathbb{Z}/p\mathbb{Z}$ avoiding solutions to a linear equation with integer coefficients. \section{On general additive sets: Theorems \ref{thm:main} and \ref{thm:midratio} }\label{section:main} Given an additive set $A$, a 1-spanning set $S\subset A$ has size bounded below trivially by $\log_3(|A|)$, since $|\{-1,0,1\}^{S}|\geq |A|$. The argument leading to inequality $(*)$ in \cite[Proof of Theorem 2]{LY} is easily adapted to yield the following lower bound for $|S|$: we have $|S|\geq |D| / \log_2(2|D|+1)$ for every dissociated set $D\subset A$. This lower bound can be strengthened as follows. \begin{proposition}\label{lem:dslb} Let $A$ be a finite subset of an abelian group $G$, let $D\subset A$ be dissociated, and let $S\subset G$ be a 1-spanning set for $A$. Then \begin{equation}\label{eq:dslb} \frac{|D|}{\log_4|D|} \leq |S| \left(1+ \frac{4+\log_2 \log 4|S|}{\log_2 |D|}\right).
\end{equation} \end{proposition} Theorem \ref{thm:main} follows from this, since $d_s^-(A)\leq d_d(A)$. \begin{proof} Let $m=|S|, n=|D|$, and let us fix a labelling of the elements of $S$ and $D$, thus $S=\{s_1,s_2,\ldots, s_m\}$ and $D=\{d_1,d_2,\ldots,d_n\}$. Since $\langle S\rangle \supset A\supset D$, for each $j\in [n]$ we can fix a choice of a vector $(c_{i,j})_{i\in [m]}\in \{-1,0,1\}^m$ such that $d_j= \sum_{i\in [m]} c_{i,j} s_i$. Let $C$ be the $m\times n$ matrix with $(i,j)$ entry $c_{i,j}$. The subset sums of $D$ are the combinations $\sum_{j=1}^{n} \lambda_j d_j$ with $\lambda=(\lambda_j)\in \{0,1\}^n$. We have \begin{equation}\label{eq:main1} \forall\, \lambda\in \{0,1\}^n,\qquad \sum_{j\in [n]} \lambda_jd_j = \sum_{i\in [m]} \Big(\sum_{j\in [n]} c_{i,j}\, \lambda_j \Big) s_i = \sum_{i\in [m]} (C\lambda)_i\,s_i . \end{equation} We shall prove that, for some intervals of integers $\Lambda_1,\Lambda_2,\ldots, \Lambda_m$, each of width $O\Big(\sqrt{|D|\log |S|}\Big)$, for a large proportion of the elements $\lambda\in \{0,1\}^n$ we have $(C\lambda)_i \in \Lambda_i$ for every $i\in [m]$. To this end, fix any $i\in [m]$, and let us consider the terms $\lambda_1 c_{i,1},\ldots, \lambda_n c_{i,n}$ as independent random variables, the $j$th one taking value $c_{i,j}$ with probability $1/2$ and value 0 otherwise, for each $j\in [n]$. (Note that we are thus using the uniform probability on $\{0,1\}^n$.) Then letting $\mu_i=\frac{1}{2}\sum_{j\in [n]} c_{i,j}$, by Hoeffding's inequality \cite[Chapter 3, Theorem 1.3]{Gut} we have \[ \forall\, t>0,\qquad \mathbb{P}\left(\Big|\mu_i-\sum_{j\in [n]} \lambda_j\, c_{i,j} \Big| > t\Big(\sum_{j\in [n]} c_{i,j}^2\Big)^{1/2}\right) \leq 2 \exp \left(-2t^2 \right).
\] Since $\Big(\sum_{j\in [n]} c_{i,j}^2\Big)^{1/2}\leq |D|^{1/2}$, letting $t= \sqrt{ \log( 2 r |S|)/2}$, for $r>0$, we deduce that \[ \mathbb{P}\left(\Big|\mu_i - \sum_j \lambda_j\, c_{i,j} \Big| > |D|^{1/2} \sqrt{ \log( 2 r |S|)/2}\right) \leq (r|S|)^{-1} . \] By the union bound, the probability that the latter event holds for some $i\in [m]$ is thus at most $r^{-1}$. Hence \begin{equation}\label{eq:main3} \mathbb{P}\Big(\Big|\mu_i - (C \lambda)_i \Big|\leq \sqrt{ |D| \log( 2 r |S|)/2}\,\,\textrm{ for all }i\in [m]\Big) \geq 1-r^{-1} . \end{equation} Now let $\Lambda_i=\Big[\mu_i -\sqrt{ |D| \log( 2 r |S|)/2},\mu_i +\sqrt{ |D| \log( 2 r |S|)/2}\Big]$. Combining \eqref{eq:main1} and \eqref{eq:main3}, we obtain that for at least $(1-r^{-1})2^n$ values of $\lambda \in \{0,1\}^n$, the subset sum $\sum_{j\in [n]} \lambda_j d_j$ is an integer linear combination of the elements $s_1,\ldots, s_m$, with $i$th coefficient $(C\lambda)_i\in \Lambda_i$ for each $i\in [m]$. Since these subset sums are pairwise distinct (by dissociativity of $D$), we conclude that \[ \big(1-r^{-1}\big)\, 2^{|D|} \leq \prod_{j\in [m]} |\Lambda_j |\leq \big(2|D|\log(2r |S|)\big)^{|S|/2}. \] Choosing $r=2$, taking $\log_2$ of both sides and rearranging, we obtain \eqref{eq:dslb}. \end{proof} We now turn to comparing $d_s$ and $d_d^-$, towards Theorem \ref{thm:midratio}. We shall call a subset $S$ of an additive set $A$ satisfying $\langle S\rangle \supset A$ and $|S| = d_s(A)$ a \emph{minimum 1-spanning subset of} $A$. The following small example shows that the dimensions $d_s$ and $d_d^-$ can indeed differ. \begin{example}\label{lem:eg1} Let $\{x_1,x_2\}$ be the standard basis in $\mathbb{R}^2$, and let \[ A=\{x_1, x_2, x_1+x_2, 2x_1, 2x_2\}. \] This set has \textup{(}unique\textup{)} minimum 1-spanning subset $\{x_1,x_2,x_1+x_2\}$, while any maximal dissociated subset of $A$ has size $4$.
\end{example} The claims in this example are easily checked by inspection. In fact, this example is the simplest case of the following general construction, which is the main ingredient in our proof of Theorem \ref{thm:midratio}. \begin{proposition}\label{lem:geneg} Let $B_n=\{x_1, x_2, \ldots, x_n\}$ be the standard basis of $\mathbb{R}^n$, let $s_n=\sum_{i\in [n]} x_i$, and let $D$ be a dissociated non-empty subset of $\{0,1\}^n$. Then the set \[ A_n= B_n\cup\{s_n\}\cup (2\cdot D) \] satisfies $d_s(A_n)= n+1$ and $d_d^-(A_n)= d_d(A_n) = n+|D|$. \end{proposition} Here $2\cdot D$ denotes the set $\{2x:x\in D\}\subset \{0,2\}^n$. \begin{proof} To begin with, we claim that a 1-spanning subset $S\subset A_n$ must have at least $n+1$ elements. To show this, we distinguish two cases. Case 1: $S$ does not contain $s_n$. Then, in order to be 1-spanning, $S$ must contain all other elements of $A_n$. Indeed, firstly, an element $x_i\in B_n$ must lie in $S$, for otherwise it cannot be in the 1-span of $S$, since every element of $A_n\setminus\{s_n, x_i\}$, modulo 2, has a zero $x_i$-component. An element of $2\cdot D$ must also lie in $S$, for it cannot be in the 1-span of other elements of $2\cdot D$ (since $D$ is dissociated), nor can it lie in $2\cdot D +\varepsilon_1 x_1+\dots+\varepsilon_n x_n$ with $\varepsilon_i\in [-1,1]$ not all zero, as it is congruent to 0 modulo 2. We have thus shown that $S$ must indeed contain $A_n\setminus \{s_n\}$, so our claim holds in this case, i.e. $|S|\geq n+1$. Case 2: $S$ contains $s_n$, and does not contain some $x_j$. (If it contained $s_n$ and every $x_j$, then our claim would hold already.) In this case, in order to 1-span $x_j$ using $s_n$, the set $S$ must contain every $x_i$ with $i\neq j$. Moreover, $S$ must then also contain every element of $2\cdot D$. Indeed, an element of $2\cdot D$ equals either $2x_j$ or some combination $y$ involving some $2x_i$ with $i\neq j$.
Now $2x_j$ must lie in $S$ in order to be 1-spanned by $S$, since $S$ does not contain $x_j$ and $D$ is dissociated. We claim that $S$ must also contain every other $y\in 2\cdot D$. Indeed, suppose that $y$ were not in $S$, and suppose that we had a $[-1,1]$-combination of elements of $S$ equal to $y$. This combination would then have to involve $s_n$, because otherwise it could only involve elements of $2\cdot D$ different from $y$, contradicting that $2\cdot D$ is dissociated. By involving $s_n$, this combination involves $x_j$. But the latter can then be neither cancelled nor increased to $2x_j$, since $S$ misses $x_j$, whence this combination could not equal $y$, a contradiction. We conclude that $S$ must be $A_n\setminus \{x_j\}$, so we have $|S|=n+|D|\geq n+1$ in this case. The set $S_n:=B_n \cup \{s_n\}$, of size $n+1$, is 1-spanning for $A_n$ (and is not dissociated). We have thus shown that $d_s(A_n)= n+1$. Now suppose that $S$ is a maximal dissociated subset of $A_n$. Then $S$ cannot contain $S_n$, so there exists some element $s\in S_n\setminus (S\cap S_n)$. Note also that, being maximal dissociated, $S$ must be 1-spanning for $A_n$. We can then distinguish the same two cases as above. In the first case, we have $s=s_n$. Then, as in case 1 above, we must have $S= A_n\setminus \{s_n\}$, which is dissociated (as can be seen using that $B_n$ and $2\cdot D$ both are), clearly maximal, and of size $n+|D|$. In the second case, we have $s = x_j$ for some $j\in [n]$. Then, $S$ must contain $s_n$ (it cannot 1-span it otherwise) and so we are in case 2 above, in which $S$ must be $A_n\setminus \{x_j\}$. Thus in this second case, either we get a contradiction (if $A_n\setminus \{x_j\}$ is not dissociated), or $S=A_n\setminus \{x_j\}$ is a maximal dissociated set of size $n+|D|$. \end{proof} We now combine Proposition \ref{lem:geneg} with \cite[Theorem 1]{LY}. 
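The claims in Example \ref{lem:eg1}, and small cases of Proposition \ref{lem:geneg}, can also be verified mechanically. The following Python sketch (illustrative only; the helper names are ours and not part of the formal argument) enumerates all $\{-1,0,1\}$-combinations by brute force:

```python
from itertools import combinations, product

def one_span(S):
    """All {-1,0,1}-combinations of the vectors (tuples) in S."""
    dim = len(S[0])
    return {tuple(sum(e * v[i] for e, v in zip(eps, S)) for i in range(dim))
            for eps in product((-1, 0, 1), repeat=len(S))}

def is_dissociated(S):
    """No nontrivial {-1,0,1}-combination of S vanishes."""
    dim = len(S[0])
    zero = (0,) * dim
    return all(tuple(sum(e * v[i] for e, v in zip(eps, S)) for i in range(dim)) != zero
               for eps in product((-1, 0, 1), repeat=len(S)) if any(eps))

def d_s(A):
    """Size of a minimum 1-spanning subset of A."""
    for k in range(1, len(A) + 1):
        if any(set(A) <= one_span(list(S)) for S in combinations(A, k)):
            return k

def maximal_dissociated_sizes(A):
    """Sizes of all maximal dissociated subsets of A."""
    sizes = set()
    for k in range(1, len(A) + 1):
        for S in map(list, combinations(A, k)):
            if is_dissociated(S) and not any(is_dissociated(S + [a])
                                             for a in A if a not in S):
                sizes.add(k)
    return sizes

# Example: A = {x1, x2, x1+x2, 2x1, 2x2} in Z^2.
A = [(1, 0), (0, 1), (1, 1), (2, 0), (0, 2)]
print(d_s(A))                        # -> 3 (the subset {x1, x2, x1+x2})
print(maximal_dissociated_sizes(A))  # -> {4}
```

The same functions confirm Proposition \ref{lem:geneg} for small $n$ and small dissociated $D\subset\{0,1\}^n$, though the brute-force search is exponential in $|A|$.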
\begin{proof}[Proof of Theorem \ref{thm:midratio}] As mentioned in the introduction, there exists a dissociated set $D_n\subset \{0,1\}^n$ of cardinality $|D_n|= n\log_4 n\, (1+o(1))$ as $n\to \infty$. Applying Proposition \ref{lem:geneg} with this set $D_n$, we obtain a set $A_n\subset \{0,1,2\}^n$ satisfying $d_s(A_n)=n+1$ and $d_d^-(A_n)=d_d(A_n)= n \log_4 n\,(1+o(1))$ as $n\to \infty$, whence \eqref{eq:midratio} follows. \end{proof} \section{Focusing on some sets of integers: Theorem \ref{thm:[N]}}\label{section:[N]} So far, the examples that we have discussed of additive sets with small dimension-ratios have all been given by subsets of $\mathbb{Z}^n$ for large $n$. Note that by applying an appropriate Freiman isomorphism of sufficiently high order to such a set, we can obtain a subset of $\mathbb{Z}$ satisfying the same dimensional properties. For example, if for each $n$ we choose a Freiman isomorphism $\phi_n:\{0,1,2\}^n\to \mathbb{Z}$ of order $n^2$ (say) and satisfying\footnote{The existence of such Freiman isomorphisms is a standard result; see for instance \cite[Lemma 5.25]{T-V}.} $\phi_n(0)=0$, then applying $\phi_n$ to the set $A_n$ from Theorem \ref{thm:midratio} for each $n$ we obtain a family of sets $\phi_n(A_n)\subset \mathbb{Z}$ satisfying \eqref{eq:midratio}. One may wonder whether for some natural families of subsets of $\mathbb{Z}$ the dimensions $d_s^-, d_s,d_d^-,d_d$ lie closer to each other. In this section we show that this is the case for the family of intervals $[N]$, in the sense of Theorem \ref{thm:[N]}; thus we have $d_s([N])=d_d^-([N])$ for any positive integer $N$. To prove Theorem \ref{thm:[N]}, we shall construct a maximal dissociated subset of $[N]$ of size $d_s([N])$, using the following simple fact concerning the powers of 3.
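The fact in question is the existence of balanced-ternary representations: every integer $x$ with $|x|\leq (3^k-1)/2$ is a $\{-1,0,1\}$-combination of $1,3,\ldots,3^{k-1}$. A quick brute-force sanity check in Python (illustrative only, not part of the proofs):

```python
from itertools import product

def span_powers_of_3(k):
    """All {-1,0,1}-combinations of 1, 3, ..., 3^(k-1)."""
    powers = [3 ** i for i in range(k)]
    return {sum(e * p for e, p in zip(eps, powers))
            for eps in product((-1, 0, 1), repeat=k)}

for k in range(1, 7):
    m = (3 ** k - 1) // 2
    # The 1-span is exactly the symmetric interval [-(3^k-1)/2, (3^k-1)/2].
    assert span_powers_of_3(k) == set(range(-m, m + 1))
print("spans check out for k = 1..6")
```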
\begin{lemma}\label{lem:P3} The set $P_3(k)=\{1,3,\ldots, 3^{k-1}\}$ satisfies $\langle P_3(k)\rangle = \left[-\frac{3^k-1}{2},\frac{3^k-1}{2}\right]$. \end{lemma} \begin{proof} The claim holds for $k=1$. For $k>1$, we may suppose by induction that the claim holds for $k-1$, thus $\langle P_3(k-1)\rangle = \left[-\frac{3^{k-1}-1}{2},\frac{3^{k-1}-1}{2}\right]$. Then we have \begin{eqnarray*} \langle P_3(k)\rangle & = & \{-3^{k-1},0,3^{k-1}\} + \langle P_3(k-1)\rangle = \{-3^{k-1},0,3^{k-1}\} + \left[\frac{-3^{k-1}+1}{2},\frac{3^{k-1}-1}{2}\right] \\ & = & \left[-\frac{3^k-1}{2},\frac{3^k-1}{2}\right] . \end{eqnarray*} \end{proof} We shall also use the following. \begin{lemma}\label{lem:dissospan} Let $A$ be an additive set and let $S\subset A$ be dissociated and satisfy $\langle S \rangle \supset A$. Then $S$ is maximal dissociated. \end{lemma} \begin{proof} If there existed $a\in A\setminus S$ such that $S\cup \{a\}$ is dissociated, then $a$ could not lie in the 1-span of $S$, contradicting that $\langle S \rangle \supset A$. \end{proof} To establish Theorem \ref{thm:[N]} we distinguish two cases, according to whether the fractional part $\{ \log_3 N \}:= \log_3 N - \lfloor \log_3 N \rfloor$ satisfies $\{ \log_3 N \}<1-\log_3 2$ or $\{ \log_3 N \}> 1-\log_3 2$ (equality is impossible for an integer $N$, since it would force $2N=3^{\lfloor \log_3 N\rfloor+1}$). \begin{proposition}\label{prop:case1} Let $N$ be a positive integer. The following statements are equivalent. \begin{enumerate}[leftmargin=30pt] \item We have $\{ \log_3 N \} <1-\log_3 2$. \item The set $S_1:=\{1,3,3^2,\ldots, 3^{\lfloor \log_3 N\rfloor}\}$ is a minimum 1-spanning maximal dissociated subset of $[N]$. In particular $d_s([N])= d_d^-([N])= \lfloor\log_3 N\rfloor +1$.
\end{enumerate} \end{proposition} \begin{proof} It follows from Lemma \ref{lem:P3} that \[ \langle S_1\rangle = \langle P_3(\lfloor \log_3 N\rfloor+1) \rangle = \left[-\frac{3^{\lfloor \log_3 N\rfloor+1}-1}{2},\frac{3^{\lfloor \log_3 N\rfloor+1}-1}{2}\right]. \] Therefore $S_1$ is a 1-spanning subset of $[N]$ if and only if $\frac{3^{\lfloor \log_3 N\rfloor+1}-1}{2}\geq N$, that is if and only if $\{ \log_3 N \} <1-\log_3 2$. In particular, $(ii)$ implies $(i)$. Now if $(i)$ holds, then we claim that $S_1$ is in fact \emph{minimum} 1-spanning for $[N]$. Indeed, any 1-spanning subset $S$ of $[N]$ must satisfy $N \leq (3^{|S|}-1)/2$, since to cover $[N]$ with $[-1,1]$-combinations of $S$ we only use the combinations with positive value. Hence $|S| \geq \log_3 (2N+1) >\log_3 N \geq \lfloor \log_3 N\rfloor$, so we have indeed that $|S| \geq \lfloor \log_3 N\rfloor +1=|S_1|$. Finally, note that $S_1$ is dissociated, so by Lemma \ref{lem:dissospan} it is maximal dissociated in $[N]$. We have thus shown that $(ii)$ holds. \end{proof} We now treat the second case. \begin{proposition}\label{prop:case2} Let $N$ be a positive integer, and let $t=1+ \sum_{i=0}^{ \lfloor \log_3 N\rfloor }3^i= \frac{3^{\lfloor \log_3 N \rfloor+1}+1}{2}$. The following statements are equivalent. \begin{enumerate}[leftmargin=30pt] \item We have $\{ \log_3 N \} > 1-\log_3 2$. \item The set $S_2:=\{1,3,3^2,\ldots, 3^{\lfloor \log_3 N\rfloor}\}\cup \{t\}$ is a minimum 1-spanning maximal dissociated subset of $[N]$. In particular $d_s([N])= d_d^-([N])= \lfloor\log_3 N\rfloor +2$. \end{enumerate} \end{proposition} \begin{proof} By Lemma \ref{lem:P3} we have \[ \langle S_2\rangle = \langle P_3(\lfloor \log_3 N\rfloor+1) \rangle + \{-t,0,t\} = \left[-3^{\lfloor \log_3 N\rfloor+1},3^{\lfloor \log_3 N\rfloor+1}\right]. \] Thus $S_2$ is a 1-spanning set for $[N]$ which is dissociated.
We have $S_2\subset [N]$ if and only if $t\leq N$, i.e. $\{ \log_3 N \} > 1-\log_3 2$. In particular, $(ii)$ implies $(i)$. If $(i)$ holds, then we claim that $S_2$ is \emph{minimum} 1-spanning. Indeed, as shown at the end of the previous proof, if $S$ is 1-spanning for $[N]$ then we must have $|S|\geq \log_3(2N+1)$. If $|S|$ were less than $|S_2|$, i.e. if $|S|\leq \lfloor\log_3 N\rfloor +1$, then we would have $\lfloor \log_3 N \rfloor +1\geq \log_3(2N+1) > \log_3 2+ \log_3 N$, that is $\{ \log_3 N \}< 1-\log_3 2$, which contradicts $(i)$, so we must have $|S|\geq \lfloor\log_3 N\rfloor +2=|S_2|$. Note also that $S_2$ is dissociated, and therefore maximal dissociated in $[N]$ (by Lemma \ref{lem:dissospan} again). We have thus shown that $(ii)$ holds. \end{proof} This completes the proof of Theorem \ref{thm:[N]}. \section{Final remarks} In \cite{Sch}, Schoen gave an interesting argument, using Chang's theorem, yielding an upper bound for the maximum density of a subset $A$ of $\mathbb{Z}/p\mathbb{Z}$ ($p$ prime) such that the Cartesian power $A^k$ contains no element $x$ solving a given integer linear equation $L(x)=c_1x_1+\cdots +c_k x_k=0$. We call such a set $A$ an \emph{$L$-free set}. Schoen's upper bound involves the dimension $d_s^-(C)$, where $C=\{c_1,\ldots,c_k\}$ is the set of coefficients of $L$ (in \cite{Sch} this dimension is denoted $\ell(C)$). It is straightforward to check that in Schoen's argument one can use $d_s(C)$ instead of $d_s^-(C)$. Thus one obtains the following version of Schoen's result. \begin{theorem}\label{thm:circle-u-b} Let $L(x)=c_1x_1+\cdots+c_k x_k$ be a linear form with coefficients $c_i \in \mathbb{Z}$, and let $m_L(\mathbb{Z}/p\mathbb{Z})=\max\{ |A|/p: A\subset \mathbb{Z}/p\mathbb{Z},\;A\textrm{ is }L\textrm{-free}\}$. Then \begin{equation}\label{eq:main-bounds} m_L(\mathbb{Z}/p\mathbb{Z})\leq e^{- d_s(C)/12}, \end{equation} where $C=\{c_1,c_2,\ldots,c_k\}$.
\end{theorem} As recalled in the introduction, there exists a dissociated set $D\subset \{0,1\}^n$ of size $\sim n\log_4 n$, and this has dimension $d_s(D)=|D|$, which is roughly $\log_4 n$ times $d_s^-(D)=n$. Applying an appropriate Freiman isomorphism $\phi: \{0,1\}^n \to \mathbb{Z}$, as in the previous section, we obtain a set $C=\phi(D) \subset \mathbb{Z}$ with the same properties (note that $d_s^-(C)\leq d_s^-(D)$ and $d_s(C)=d_s(D)$). For a linear form $L$ with coefficient-set $C$, the bound \eqref{eq:main-bounds} is thus stronger than the version with $d_s^-(C)$. It would be interesting to strengthen the upper bound on $m_L(\mathbb{Z}/p\mathbb{Z})$ further. \textbf{Acknowledgements.} The first author is grateful to Jakob Vidmar for programming computer searches that shed light on problems treated in this paper. The authors are also very grateful to Vsevolod Lev for bringing to their attention the results on dissociated subsets of $\{0,1\}^n$ in \cite{Bs,C&M,Lind}. \end{document}
\begin{document} \baselineskip=18pt \title{On subgroups in division rings of type 2} \newcommand{\dpcm}{ \rule{3mm}{3mm}} \newcommand{\ts}[1]{\langle #1\rangle} \begin{abstract} \baselineskip=18pt Let $D$ be a division ring with center $F$. We say that $D$ is a {\em division ring of type $2$} if for every two elements $x, y\in D,$ the division subring $F(x, y)$ is a finite dimensional vector space over $F$. In this paper we investigate multiplicative subgroups in such a ring. \end{abstract} {\bf {\em Key words:}} division ring; type 2; finitely generated subgroups. {\bf{\em Mathematics Subject Classification 2010}}: 16K20 \section{Introduction} In the theory of division rings, one of the problems is to determine which groups can occur as multiplicative groups of non-commutative division rings. There are some interesting results relating to this problem. Among them we note the famous discovery of Wedderburn in 1905, which states that {\it if $D^*$ is a finite group, then $D$ is commutative}, where $D^*$ denotes the multiplicative group of $D$. Later, L. K. Hua (see, for example, [12, p. 223]) proved that the multiplicative group of a non-commutative division ring cannot be solvable. Recently, in \cite{hai-thin2} it was shown that the group $D^*$ cannot even be locally nilpotent. Note also Kaplansky's Theorem (see [12, (15.15), p. 259]) which states that if the group $D^*/F^*$ is torsion, then $D$ is commutative, where $F$ is the center of $D$. Some other results of this kind can be found, for example, in \cite{Ak1}-\cite{Ak3} and \cite{hai-huynh}-\cite{hai-thin2}. In this paper we consider this question for division rings of type $2$. Recall that a division ring $D$ with center $F$ is said to be a {\em division ring of type $2$} if for every two elements $x, y\in D,$ the division subring $F(x, y)$ is a finite dimensional vector space over $F$.
This concept is an extension of that of locally finite division rings. By definition, a division ring $D$ is {\em centrally finite} if it is a finite dimensional vector space over its center $F$ and $D$ is {\em locally finite} if for every finite subset $S$ of $D$, the division subring $F(S)$ generated by $S\cup F$ in $D$ is a finite dimensional vector space over $F$. There exist locally finite division rings which are not centrally finite (it is not hard to give some examples). Of course, every locally finite division ring is a ring of type $2$. However, at present no example of a division ring of type $2$ is known which is not locally finite. The difficulties are related to the following famous longstanding conjecture known as the Kurosh Problem for division rings \cite{kha}. Recall that a division ring $D$ is {\em algebraic} over its center $F$ (briefly, $D$ is {\em algebraic}), if every element of $D$ is algebraic over $F$. Clearly, a locally finite division ring is algebraic. Kurosh conjectured that any algebraic division ring is locally finite. Unfortunately, this problem remains unsolved in general; it has been answered in the affirmative in the following special cases: for $F$ uncountable \cite{row}, $F$ finite \cite{lam}, and for $F$ having only finite algebraic field extensions (in particular for $F$ algebraically closed). The last case follows from the Levitzki-Shirshov Theorem which states that {\em any algebraic algebra of bounded degree is locally finite} (see e.g. \cite{dren}, \cite{kha}). The answer for the case of finite $F$ is due to Jacobson, who proved that {\em an algebraic division ring $D$ is commutative provided its center is finite} (see, for example, \cite{lam}). Later, more general theorems of this kind (known as commutativity theorems) were proved by Jacobson and Herstein. For more information we refer to [9, Ch. 3].
Finally, we would like to note that the results obtained in this paper for division rings of type $2$ have not previously been established even for locally finite division rings. So, should the Kurosh Problem eventually be answered in the affirmative (as we would like to see), our results will in particular generalize previous results known in the finite dimensional case. Throughout this paper the following notation will be used consistently: $D$ denotes a division ring with center $F$ and $D^*$ is the multiplicative group of $D$. If $S$ is a nonempty subset of $D$, then we denote by $F[S]$ and $F(S)$ the subring and the division subring of $D$ generated by $S$ over $F$, respectively. The symbol $D'$ is used to denote the derived group $[D^*, D^*]$. An element $x$ in $D$ is said to be {\em radical} over a subring $K$ of $D$ if there exists some positive integer $n(x)$ depending on $x$ such that $x^{n(x)}\in K$. A nonempty subset $S$ of $D$ is {\em radical} over $K$ if every element of $S$ is radical over $K$. We denote by $N_{D/F}$ and $RN_{D/F}$ the norm and the reduced norm, respectively. Finally, if $G$ is any group then we always use the symbol $Z(G)$ to denote the center of $G$. \section{Finitely generated subgroups} The main purpose in this section is to prove that in any non-commutative division ring of type $2$ there are no finitely generated subgroups containing the center. \begin{lemma}\label{lem:2.1} Let $D$ be a division ring with center $F$, $D_1$ be a division subring of $D$ containing $F$. Suppose that $D_1$ is a finite dimensional vector space over $F$ and $a\in D_1$. Then, $N_{D_1/F}(a)$ is periodic if and only if $N_{F(a)/F}(a)$ is periodic. \end{lemma} \begin{proof} Let $F_1=Z(D_1)\supset F$, $m^2=[D_1:F_1]$ and $n=[F_1(a):F_1]$. By [4, Lemma 3, p. 145] and [4, Corollary 4, p. 150], we have $$N_{D_1/F_1}(a)=[RN_{D_1/F_1}(a)]^m=[N_{F_1(a)/F_1}(a)]^{m^2/n}.$$ Now, using the tower formula for the norm (cf.
\cite{dra}), from the equality above we get $$N_{D_1/F}(a)=[N_{F_1(a)/F}(a)]^{m^2/n}.$$ Since $a\in F(a)$, we have $N_{F_1(a)/F(a)}(a)=a^k$, where $k=[F_1(a):F(a)]$. Therefore $$N_{F(a)/F}(a^{k})=N_{F(a)/F}(N_{F_1(a)/F(a)}(a))=N_{F_1(a)/F}(a).$$ It follows that $N_{D_1/F}(a)=[N_{F(a)/F}(a)]^{km^2/n}$, and the conclusion is now obvious. \end{proof} The following proposition is useful. In particular, it is needed to prove the subsequent theorem. \begin{proposition}\label{prop:2.2} Let $D$ be a division ring with center $F$. If $N$ is a subnormal subgroup of $D^*$ then $Z(N)=N\cap F$. \end{proposition} \begin{proof} If $N$ is contained in $F$ then there is nothing to prove. Thus, suppose that $N$ is non-central. By [15, 14.4.2, p. 439], $C_D(N)=F$. Hence $Z(N)\subseteq N\cap F$. Since the inclusion $N\cap F\subseteq Z(N)$ is obvious, $Z(N)= N\cap F$. \end{proof} \begin{theorem}\label{thm:2.3} Let $D$ be a division ring of type $2$. Then $Z(D')$ is a torsion group. \end{theorem} \begin{proof} By Proposition \ref{prop:2.2}, $Z(D')=D'\cap F^*$. Any element $a\in Z(D')$ can be written in the form $a=c_1 c_2\cdots c_r$, where $c_i=[x_i, y_i]$ with $x_i, y_i\in D^*$ for $i\in\{1, \ldots, r\}$. Put $D_1=D_2:= F(c_1, c_2)$, $D_3:= F(c_1 c_2, c_3)$, $\ldots$, $D_r:= F(c_1\cdots c_{r-1}, c_r)$ and $F_i=Z(D_i)$ for $i\in\{1, \ldots, r\}$. Since $D$ is of type $2$, $[D_i:F]<\infty$. Since $N_{F(x_i, y_i)/F}(c_i)=1$, by Lemma \ref{lem:2.1}, $N_{F(c_i)/F}(c_i)$ is periodic. Again by Lemma~\ref{lem:2.1}, $N_{D_i/F}(c_i)$ is periodic. Therefore, there exists some positive integer $n_i$ such that $N_{D_i/F}(c_i^{n_i})=1$. Recall that $D_2=D_1$. Hence we get $$N_{D_2/F}(c_1 c_2)^m=N_{D_2/F} (c_1)^m N_{D_2/F}(c_2)^m=1,$$ where $m=n_1 n_2$. Again by Lemma \ref{lem:2.1}, $N_{F(c_1 c_2)/F}(c_1 c_2)$ is periodic; hence $N_{D_3/F}(c_1 c_2)$ is periodic. By induction, $N_{D_r/F}(c_1\cdots c_{r-1})$ is periodic. Suppose that $N_{D_r/F}(c_1\cdots c_{r-1})^n=1$. Then $$N_{D_r/F}(a^n)=N_{D_r/F}(c_1\cdots
c_{r-1})^n N_{D_r/F}(c_r)^n=1.$$ Hence, $a^{n[D_r:F]}=1$. Therefore, $a$ is periodic. Thus $Z(D')$ is torsion. \end{proof} \begin{corollary}\label{cor:2.4} Let $D$ be a non-commutative division ring of type $2$ with center $F$ of characteristic $p>0$. Then $D'\setminus Z(D')$ contains no elements purely inseparable over $F$. \end{corollary} \begin{proof} Suppose that $a\in D'\setminus Z(D')$ is purely inseparable over $F$. Then, there exists some positive integer $m$ such that $a^{p^m}\in F$. Since $Z(D')=D'\cap F$ (by Proposition \ref{prop:2.2}), $a^{p^m}\in Z(D')$. By Theorem \ref{thm:2.3}, there exists some positive integer $r$ such that $a^{rp^m}=1$. Denote by $k$ the order of $a$ in the group $D^*$. If $p$ divides $k$, then $k=pt$ and we have $$1=a^k=a^{pt}=(a^t)^p.$$ Consequently, $a^t=1$ (since $x^p=1$ has only the solution $x=1$ in characteristic $p$), which is impossible in view of the choice of $k$. Now, suppose that $p$ does not divide $k$. Then, $(k, p^m)=1$ and $\alpha k+\beta p^m=1$ for some integers $\alpha$ and $\beta$. Therefore, we have $$a=a^{\alpha k+\beta p^m}=(a^k)^\alpha\cdot a^{\beta p^m}=(a^{p^m})^\beta\in F.$$ Consequently, $a\in F\cap D'=Z(D')$, a contradiction. \end{proof} Note that in \cite{mah} the author proved that $Z(D')$ is finite if $D$ is centrally finite. In view of this fact, he conjectured that $Z(D')$ is torsion for any division ring $D$ algebraic over its center, but he was not able to prove this. Therefore, Theorem~\ref{thm:2.3} represents some progress in this direction. Moreover (and this is more important for our purpose), we need this theorem to establish the main result in the present section. In fact, we shall prove that in a non-commutative division ring $D$ of type $2$ with center $F$, there are no finitely generated subgroups containing $F^*$. Consequently, if $D$ is of type $2$ and $D^*$ is finitely generated, then $D$ is a field. Note that if the multiplicative group of a field is finitely generated, then it is finite. So, if $D$ is of type $2$ and $D^*$ is finitely generated, then $D$ is even a finite field.
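The fact that $\mathbb{Q}^*$ is not finitely generated, which is invoked at the end of Case 1 in the proof of the next theorem, can be made concrete: any finite set of nonzero rationals involves only finitely many primes, and a prime outside this finite set cannot be a product of integer powers of the generators. A small Python sketch of this observation (the helper names are ours, purely illustrative):

```python
from fractions import Fraction

def prime_factors(n):
    """Set of prime factors of a positive integer n."""
    ps, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            ps.add(d)
            n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

def prime_outside(gens):
    """A prime not lying in the multiplicative group generated by gens:
    every product of integer powers of the generators has numerator and
    denominator supported on the primes appearing in the generators, so
    any prime outside that finite set works."""
    used = set()
    for g in gens:
        q = Fraction(g)
        used |= prime_factors(abs(q.numerator))
        used |= prime_factors(q.denominator)
    p = 2
    while p in used or prime_factors(p) != {p}:
        p += 1
    return p

gens = [Fraction(3, 2), Fraction(-10, 7), Fraction(22)]
print(prime_outside(gens))  # -> 13, a prime missed by the generated subgroup
```

Since this applies to every finite subset of $\mathbb{Q}^*$, no finite set can generate all of $\mathbb{Q}^*$.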
Our next theorem strongly generalizes the result obtained in [2, Theorem 1] which states that, if $D$ is centrally finite and $D^*$ is finitely generated, then $D$ is commutative. \begin{theorem}\label{thm:2.6} Let $D$ be a non-commutative division ring of type $2$ with center $F$ and suppose that $N$ is a subgroup of $D^*$ containing $F^*$. Then $N$ is not finitely generated. \end{theorem} \begin{proof} Suppose that there is a finitely generated subgroup $N=\ts{x_1, x_2, \ldots, x_n}$ of $D^*$ containing $F^*$. Then, by virtue of [15, 5.5.8, p. 113], $F^*N'/N'$ is a finitely generated abelian group, where $N'$ denotes the derived subgroup of $N$. \noindent {\em Case 1: $char(D)=0$.} Then, $F$ contains the field $\mathbb{Q}$ of rational numbers and it follows that $\mathbb{Q}^*/(\mathbb{Q}^*\cap N')\simeq \mathbb{Q}^*N'/N'$. Since $F^*N'/N'$ is finitely generated, $\mathbb{Q}^*N'/N'$ is finitely generated and consequently $\mathbb{Q}^*/(\mathbb{Q}^*\cap N')$ is finitely generated. Consider an arbitrary element $a\in \mathbb{Q}^*\cap N'$. Then $a\in F^*\cap D'=Z(D')$. By Theorem \ref{thm:2.3}, $a$ is periodic. Since $a\in \mathbb{Q}$, we get $a=\pm 1$. Thus, $\mathbb{Q}^*\cap N'$ is finite. Since $\mathbb{Q}^*/(\mathbb{Q}^*\cap N')$ is finitely generated, $\mathbb{Q}^*$ is finitely generated, which is impossible. \noindent {\em Case 2: $char(D)=p > 0$.} Denoting by $\mathbb{F}_p$ the prime subfield of $F$, we shall prove that $F$ is algebraic over $\mathbb{F}_p$. In fact, suppose that $u\in F$ and $u$ is transcendental over $\mathbb{F}_p$. Then, the group $\mathbb{F}_p(u)^*/(\mathbb{F}_p(u)^*\cap N')$ considered as a subgroup of $F^*N'/N'$ is finitely generated. Consider an arbitrary element $f(u)/g(u)\in \mathbb{F}_p(u)^*\cap N'$, where $f(X), g(X)\in \mathbb{F}_p[X]$, $(f(X), g(X))=1$ and $g(u)\neq 0$. As above, we have $f(u)^s/g(u)^s=1$ for some positive integer $s$. Since $u$ is transcendental over $\mathbb{F}_p$, it follows that $f(u)/g(u)\in \mathbb{F}_p$.
Therefore, $\mathbb{F}_p(u)^*\cap N'$ is finite and consequently, $\mathbb{F}_p(u)^*$ is finitely generated, so $\mathbb{F}_p(u)$ is a finite field, which is impossible. Hence $F$ is algebraic over $\mathbb{F}_p$ and it follows that $D$ is algebraic over $\mathbb{F}_p$. Now, by virtue of Jacobson's Theorem [12, (13.11), p. 219], $D$ is commutative, a contradiction. \end{proof} \begin{corollary}\label{cor:2.7} Let $D$ be a division ring of type $2$. If $D^*$ is finitely generated, then $D$ is a finite field. \end{corollary} If $M$ is a finitely generated maximal subgroup of $D^*$, then clearly $D^*$ is finitely generated. So, the next result follows immediately from Corollary \ref{cor:2.7}. \begin{corollary}\label{cor:2.8} Assume that $D$ is a division ring of type $2$. If $D^*$ has a finitely generated maximal subgroup, then $D$ is a finite field. \end{corollary} In the same way as in the proof of Theorem \ref{thm:2.6}, we obtain the following corollary. \begin{corollary}\label{cor:2.9} Assume that $D$ is a non-commutative division ring of type $2$ with center $F$ and $S$ is a subgroup of $D^*$. If $N=SF^*$, then $N/N'$ is not finitely generated. \end{corollary} \begin{proof} Suppose that $N/N'$ is finitely generated. Since $N'=S'$ and $F^*/(F^*\cap S') \simeq S'F^*/S'$, it follows that $F^*/(F^*\cap S')$ is a finitely generated abelian group. Now, in the same way as in the proof of Theorem \ref{thm:2.6}, we conclude that $D$ is commutative and this is a contradiction. \end{proof} The following result follows immediately from Corollary \ref{cor:2.9}. \begin{corollary}\label{cor:2.10} If $D$ is a non-commutative division ring of type $2$, then $D^*/D'$ is not finitely generated. \end{corollary} \section{The radicality of subgroups} In this section we study subgroups of $D^*$ which are radical over some subring of $D$. To prove the next theorem we need the following useful property of division rings of type $2$.
\begin{lemma}\label{lem:3.1} Let $D$ be a division ring of type $2$ with center $F$ and let $N$ be a subnormal subgroup of $D^*$. If for every pair of elements $x, y\in N$, there exists some positive integer $n_{xy}$ such that $x^{n_{xy}}y=yx^{n_{xy}}$, then $N\subseteq F$. \end{lemma} \begin{proof} Since $N$ is subnormal in $D^*$, there exists a series of subgroups $$N=N_1\triangleleft N_2\triangleleft\ldots\triangleleft N_r=D^*.$$ Suppose that $x, y\in N$ and $K:=F(x, y)$. By putting $M_i=K\cap N_i,\, \forall i\in\{1, \ldots, r\},$ we obtain the following series of subgroups: $$M_1\triangleleft M_2\triangleleft\ldots\triangleleft M_r=K^*.$$ For any $a\in M_1\leq N_1=N$, suppose that $n_{ax}$ and $n_{ay}$ are positive integers such that $$a^{n_{ax}}x=xa^{n_{ax}} \mbox{ and } a^{n_{ay}}y=ya^{n_{ay}}.$$ Then, for $n:=n_{ax}n_{ay}$ we have $$a^n=(a^{n_{ax}})^{n_{ay}}=(xa^{n_{ax}}x^{-1})^{n_{ay}}=xa^{n_{ax}n_{ay}}x^{-1}=xa^nx^{-1}$$ and $$a^n=(a^{n_{ay}})^{n_{ax}}=(ya^{n_{ay}}y^{-1})^{n_{ax}}=ya^{n_{ay}n_{ax}}y^{-1}=ya^ny^{-1}.$$ Therefore $a^n\in Z(K)$. Hence $M_1$ is radical over $Z(K)$. By [6, Theorem 1], $M_1\subseteq Z(K)$. In particular, $x$ and $y$ commute with each other. Consequently, $N$ is an abelian group. By [15, 14.4.4, p. 440], $N\subseteq F$. \end{proof} \begin{theorem}\label{thm:3.2} Let $D$ be a division ring of type $2$ with center $F$, $K$ be a proper division subring of $D$, and suppose that $N$ is a normal subgroup of $D^*$. If $N$ is radical over $K$, then $N\subseteq F$. \end{theorem} \begin{proof} Suppose that $N$ is not contained in the center $F$. If $N\setminus K=\emptyset$, then $N\subseteq K$. By [15, p. 433], either $K\subseteq F$ or $K=D$. Since $K\neq D$ by the assumption, it follows that $K\subseteq F$. Hence $N\subseteq F$, which contradicts the assumption. Thus, we have $N\setminus K\neq\emptyset$. Now, to complete the proof of our theorem we shall show that the elements of $N$ satisfy the requirements of Lemma \ref{lem:3.1}.
To this end, suppose that $a, b\in N$. We examine the following cases: \noindent {\em Case 1:} $a\in K$. {\em Subcase 1.1:} $b\not\in K$. We shall prove that there exists some positive integer $n$ such that $a^nb=ba^n$. Suppose that $a^nb\neq ba^n, \forall n\in\mathbb{N}$. Then, $a+b\neq 0$, $a\neq \pm 1$ and $b\neq \pm 1$. So we have $$x=(a+b)a(a+b)^{-1}, y=(b+1)a(b+1)^{-1}\in N.$$ Since $N$ is radical over $K$, we can find positive integers $m_x$ and $m_y$ such that $$x^{m_x}=(a+b)a^{m_x}(a+b)^{-1}, y^{m_y}=(b+1)a^{m_y}(b+1)^{-1}\in K.$$ Putting $m=m_xm_y$, we have $$x^m=(a+b)a^m(a+b)^{-1}, y^m=(b+1)a^m(b+1)^{-1}\in K.$$ Direct calculations give the equalities $$x^mb-y^mb+x^ma-y^m=x^m(a+b)-y^m(b+1)=(a+b)a^m-(b+1)a^m=a^m(a-1),$$ from which we get $$(x^m-y^m)b=a^m(a-1)+y^m-x^ma.$$ If $x^m-y^m\neq 0$, then $b=(x^m-y^m)^{-1}[a^m(a-1)+y^m-x^ma]\in K$, contrary to the choice of $b$. Therefore $x^m-y^m= 0$ and consequently, $a^m(a-1)=y^m(a-1)$. Since $a\neq 1$, we get $a^m=y^m=(b+1)a^m(b+1)^{-1}$, and it follows that $a^mb=ba^m$, a contradiction. {\em Subcase 1.2:} $b\in K$. Consider an element $x\in N\setminus K$. Since $xb\not\in K$, by Subcase 1.1, there exist positive integers $r, s$ such that $$a^rxb=xba^r \mbox{ and } a^sx=xa^s.$$ From these equalities it follows that $$a^{rs}=(xb)^{-1}a^{rs}(xb)=b^{-1}(x^{-1}a^{rs}x)b=b^{-1}a^{rs}b,$$ and consequently, $a^{rs}b=ba^{rs}.$ \noindent {\em Case 2:} $a\not\in K$. Since $N$ is radical over $K$, there exists some positive integer $m$ such that $a^m\in K$. By Case 1, there exists a positive integer $n$ such that $a^{mn}b=ba^{mn}$. \end{proof} Theorem \ref{thm:3.2} is closely related to the following conjecture of Herstein in \cite{her2}: {\em ``Let $D$ be a division ring and let $N$ be a subnormal subgroup of $D^*$. If $N$ is radical over the center $F$ of $D$, then $N$ is central, i.e.
$N\subseteq F$."} In Theorem \ref{thm:3.2}, the subgroup $N$ is required to be radical over an arbitrary proper division subring $K$ of $D$, which does not necessarily coincide with the center $F$. Notice that $N$ is required to be normal in $D^*$. So, the following question seems to be interesting: {\em ``Let $D$ be a division ring and let $N$ be a subnormal subgroup of $D^*$. Is $N$ contained in the center $F$ of $D$, provided it is radical over some proper division subring of $D$?"} Finally, we consider the question of the existence of maximal subgroups in $D$ which are radical over $F$. Recall that if $D$ is centrally finite of index different from the characteristic of $F$, then $D^*$ contains no such subgroups (see [1, Theorem 5]). Here, we consider the case when $D$ is of type $2$ with $[D: F]=\infty$ and we prove that, if $char\,F=p > 0$, then $D^*$ contains no such subgroups. \begin{theorem}\label{thm:3.3} Let $D$ be a division ring of type $2$ with center $F$ such that $[D:F]=\infty$ and $char\, F = p>0$. Then the group $D^*$ contains no maximal subgroups which are radical over $F$. \end{theorem} \begin{proof} Suppose that $M$ is a maximal subgroup of $D^*$ which is radical over $F$. Put $G=D'\cap M$. For each $x\in G$, there exists a positive integer $n(x)$ such that $x^{n(x)}\in F$. It follows that $x^{n(x)}\in D'\cap F=Z(D')$. By Theorem \ref{thm:2.3}, $Z(D')$ is torsion, so $x$ is periodic. Thus, $G$ is a torsion group. Since $M'\leq G$, $M'$ is also torsion. For any $x,y\in M'$, put $H=\ts{x,y}$ and $D_1=F(x,y)$. Then $n:=[D_1:F]<\infty$ and $H$ is a torsion subgroup of $D_1^*\leq GL_n(F)$. By [12, (9.9'), p. 154], $H$ is finite. Since $char F=p>0$, by [12, (13.3), p. 215], $H$ is cyclic. In particular, $x$ and $y$ commute with each other, and consequently, $M'$ is abelian. It follows that $M$ is a solvable group. Thus $M$ is a solvable maximal subgroup of $D^*$. By [1, Corollary 2, p. 432] and [3, Theorem 6], $[D:F]<\infty$, a contradiction.
\end{proof} \end{document}
\begin{document} \setlength{\textheight}{8.0truein} \copyrightheading{12}{5\&6}{2012}{0394-0402} \runninghead{Encryption with Weakly Random Keys Using Quantum Ciphertext } {J. Bouda, M. Pivoluska, M. Plesch } \normalsize\textlineskip \thispagestyle{empty} \setcounter{page}{1} \vspace*{0.88truein} \alphfootnote \fpage{1} \centerline{\bf ENCRYPTION WITH WEAKLY RANDOM KEYS USING A QUANTUM CIPHERTEXT} \vspace*{0.035truein} \vspace*{0.37truein} \centerline{\footnotesize JAN BOUDA} \vspace*{0.015truein} \centerline{\footnotesize\it Faculty of Informatics, Masaryk University, Botanick\'{a} 68a} \baselineskip=10pt \centerline{\footnotesize\it Brno, Czech Republic} \vspace*{10pt} \centerline{\footnotesize MATEJ PIVOLUSKA} \vspace*{0.015truein} \centerline{\footnotesize\it Faculty of Informatics, Masaryk University, Botanick\'{a} 68a} \baselineskip=10pt \centerline{\footnotesize\it Brno, Czech Republic} \vspace*{10pt} \centerline{\footnotesize MARTIN PLESCH} \vspace*{0.015truein} \centerline{\footnotesize\it Faculty of Informatics, Masaryk University, Botanick\'{a} 68a} \baselineskip=10pt \centerline{\footnotesize\it Brno, Czech Republic} \centerline{\footnotesize\it Institute of Physics, Slovak Academy of Sciences, D\'{u}bravsk\'{a} cesta 9} \baselineskip=10pt \centerline{\footnotesize\it Bratislava, Slovakia} \vspace*{10pt} \vspace*{0.225truein} \publisher{(June 17, 2011)}{(January 19, 2012)} \vspace*{0.21truein} \abstracts{ The lack of perfect randomness can cause significant problems in securing communication between two parties. McInnes and Pinkas \cite{McInnesPinkas-ImpossibilityofPrivate-1991} proved that unconditionally secure encryption is impossible when the key is sampled from a weak random source. The adversary can always gain some information about the plaintext, regardless of the cryptosystem design.
Most notably, the adversary can obtain full information about the plaintext if he has access to just two bits of information about the source (irrespective of the length of the key). In this paper we show that for every weak random source there is a cryptosystem with a classical plaintext, a classical key, and a quantum ciphertext that bounds the adversary's probability $p$ of correctly guessing the plaintext strictly below the McInnes-Pinkas bound, except for a single case, where it coincides with the bound. In addition, regardless of the source of randomness, the adversary's probability $p$ is strictly smaller than $1$ as long as there is some uncertainty in the key (Shannon/min-entropy is non-zero). These results are another demonstration that quantum information processing can solve cryptographic tasks with strictly higher security than classical information processing.} {}{} \section{Introduction} Random numbers play a crucial role in many areas of computer science, e.g. randomized algorithms and cryptography. Real-world random number generators deliver imperfect (biased) randomness, and a number of theoretical models of imperfect random number sources \cite{Blum-Independentunbiasedcoin-1984,ChorGoldreich-Unbiasedbitsfrom-1988,SanthaVazirani-Generatingquasi-randomsequences-1985,Neumann-Varioustechniquesused-1951} were introduced to study the possibility of obtaining perfect randomness through software post-processing \cite{Shaltiel-RecentDevelopmentsin-2002}. The importance of this post-processing stems from the fact that many applications were designed to use perfect randomness and, in fact, vitally require it for reasonable performance. Devices based on quantum mechanical properties should theoretically serve as sources of ideal randomness \cite{ColbeckRenner-2011,idQuantique1} or even as unconditionally secure cryptosystems \cite{BennetBrassard-1984,idQuantique2,Mayers-2001,ShorPreskill-2000,TamakiKoashiImoto-2003}.
However, in real conditions they are extremely sensitive to the influence of the environment and rely on classical post-processing of the measurement results. Even after these procedures the outcome is far from being perfect (see e.g. \cite{Lydersen-2010} and references therein). Cryptography counts among the fields that are highly sensitive to the quality of randomness used (see, e.g., \cite{Dodis-2002}). One of the most prominent results showing the influence of weakness of randomness is by McInnes and Pinkas \cite{McInnesPinkas-ImpossibilityofPrivate-1991}, proving that there is no perfectly secure encryption scheme when the key is sampled from a weakly random source (e.g., a min-entropy source \cite{ChorGoldreich-Unbiasedbitsfrom-1988}). The authors also derived a tight (minimal and achievable) bound on the probability that the adversary can determine the plaintext, when an arbitrary encryption system is used, but the key is sampled from a min-entropy source. This bound is a function of $c = l-b$, where $l$ is the key length and $b$ is its min-entropy (see Section 2 for the definition of min-entropy). In this paper we propose an encryption of classical information using a classical key and a quantum channel (i.e. the ciphertext is a quantum state) such that the adversary's probability to determine the plaintext is strictly smaller than the McInnes-Pinkas bound for all values $c=l-b\geq0$, except for the values $c=0$ and $c=1$, where it coincides with the bound. The paper is organized as follows. In the second section we introduce basic definitions and recall the result by McInnes and Pinkas in detail. In the third section we introduce the encryption onto a quantum ciphertext. In the fourth section we derive the maximal security in the idealized case of a continuous key. In the fifth section we deal with the consequences of the discretisation of the random key, whereas in the last section we summarize our results and conclude.
\section{Preliminaries} The min-entropy of the probability distribution (random variable, source) $\mathbf{Z}$ is defined by \begin{equation}\label{minentropy} H_{\infty}(\mathbf{Z})=\min_{z\in Z}\left( -\log Pr\left( \mathbf{Z} =z\right) \right) . \end{equation} We call a source an $(l,b)$-source if it emits $l$-bit strings drawn according to a probability distribution with min-entropy at least $b$. Thus, every specific $l$-bit sequence is drawn with probability smaller than or equal to $2^{-b}$. Notice that for $b=l-c$, the probability of each $l$-bit string is upper bounded by $2^{c}2^{-l}$, and the parameter $c$ is called the min-entropy loss. A source is $(l,b)$-flat iff it is an $(l,b)$-source and it is uniform on some subset of $2^{b}$ sample points, i.e., all probabilities are either $0$ or $2^{-b}$. We are going to consider the following scenario. Alice and Bob share a secret key $k$ that is used to determine the encoding (decoding) function. In this setting the plaintext is a single, uniformly distributed bit and the ciphertext is an (arbitrarily long, but finite) bitstring. The encryption system is specified by the set of encoding rules \begin{equation} e_{k}:X\rightarrow Y \end{equation} parameterized by the keys $k\in K$, with $X=\{0,1\}$ being the set of plaintexts and $Y$ being the set of ciphertexts. To each encoding function there is a corresponding decoding function $d_{k}$ that perfectly recovers the original input, i.e. \begin{equation} \forall k\in K\ d_{k}\circ e_{k}=id_{X}. \end{equation} Let ${\mathbf{X}}$ be the random variable describing the probability distribution of the plaintext and ${\mathbf{X}^{\prime}}$ the random variable describing the adversary's estimate given by his decoding strategy. Security is parameterized by the adversary's ability to recover the original plaintext \begin{equation} \label{equ:adversary_probability}p=\sum_{x\in X}Pr({\mathbf{X}^{\prime} }=x|{\mathbf{X}}=x)Pr({\mathbf{X}}=x).
\end{equation} In the McInnes and Pinkas paper, for a given length of the key $l$ and a parameter $c$, the key is distributed according to an $(l,l-c)$ distribution. Parameters $l$ and $c$ are part of the cryptosystem design; the adversary is assumed to know the actual (biased) distribution ${\mathbf{K}}$ of the key (but not the value $k$ of the key). The probability $p$ to reveal the plaintext that the adversary can achieve for an arbitrary cryptosystem, a suitable $(l,l-c)$ source and a uniformly distributed plaintext is lower bounded by \begin{equation} p\geq \begin{cases} 1 & \text{for }2\leq c\leq l\\ \frac{1}{2}+\frac{2^{c}}{8} & \text{for }2-\log_{2}3\leq c\leq2\\ \frac{2^{c}}{2} & \text{for }0\leq c\leq2-\log_{2}3. \end{cases} \label{equ:pinkas_probability} \end{equation} In particular, they have shown that the (maximal achievable) security\footnote{The probability of making a correct guess by the adversary is bounded from below by the formula \eqref{equ:pinkas_probability} independently of $l$. However, such security is achievable only for a cryptosystem with high enough $l$; for smaller $l$ the achievable probability for the attacker rises further.} is independent of $l$ and no security can be achieved if $c\geq2$. \section{Quantum ciphertext} In our solution we consider encoding a classical plaintext into a quantum ciphertext. The encoding function is of the form \begin{equation} e_{k}:X\rightarrow{\mathrm{S}}({\mathcal{H}}) \end{equation} parameterized by the key $k\in K$, with $X=\{0,1\}$ being the set of plaintexts, ${\mathcal{H}}$ a suitable Hilbert space and ${\mathrm{S}}({\mathcal{H}})$ the set of (possibly mixed) states on the Hilbert space ${\mathcal{H}}$, i.e. positive trace one operators acting on ${\mathcal{H}}$. In our further analysis we limit ourselves to only a single quantum bit, i.e. to the two-dimensional Hilbert space ${\mathcal{H=H}}_{2}$. Extensions to higher dimensional Hilbert spaces will be discussed in the conclusion.
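For concreteness, the min-entropy loss and the classical bound \eqref{equ:pinkas_probability} of Section 2 can be evaluated numerically. The following is a minimal sketch (the function names are ours, not from the paper):

```python
import math

def min_entropy(probs):
    # H_inf(Z) = min_z (-log2 Pr[Z = z]) = -log2 max_z Pr[Z = z]
    return -math.log2(max(probs))

def mcinnes_pinkas_bound(c):
    # Classical lower bound on the adversary's guessing probability p,
    # as a function of the min-entropy loss c = l - b.
    if c >= 2:
        return 1.0
    if c >= 2 - math.log2(3):
        return 0.5 + 2 ** c / 8
    return 2 ** c / 2

# A flat (3, 2)-source: uniform on 4 of the 8 three-bit strings.
flat = [0.25, 0.25, 0.25, 0.25, 0.0, 0.0, 0.0, 0.0]
l, b = 3, min_entropy(flat)   # b = 2.0
c = l - b                     # min-entropy loss c = 1.0
```

Note that the three branches agree at the break points: at $c=2-\log_{2}3$ both the middle and the lower expressions give $2/3$, and at $c=2$ the middle branch reaches $1$.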
The decoding procedure consists of a measurement of the received quantum system aiming to distinguish between the states $e_{k}(0)$ and $e_{k}(1)$. The correctness requirement gives (regardless of ${\mathcal{H}}$) that for every $k$ the states $e_{k}(0)$ and $e_{k}(1)$ must be orthogonal. For a qubit, this is only possible if all of them are pure states. Let us use the notation $e_{k}(0)=\left\vert \psi_{k}\right\rangle $ and $e_{k}(1)=\left\vert \phi _{k}\right\rangle $, with the orthogonality condition \begin{equation} \left\langle \phi_{k}|\psi_{k}\right\rangle =0\label{orthogonality} \end{equation} for all $k$. To obtain the encoded bit, Bob (knowing the key) adjusts his measurement device accordingly to obtain a well-defined result discriminating between $\left\vert \psi_{k}\right\rangle $ and $\left\vert \phi_{k}\right\rangle $. For this purpose a standard von Neumann measurement is fully sufficient. On the other hand, the adversary's knowledge is limited to the (known) probability distribution on keys. He has to discriminate between the average states \begin{equation} \rho_{0}=\sum_{k\in K}P({\mathbf{K}}=k)\left\vert \psi_{k}\right\rangle\left\langle\psi_k\right\vert \text{ and }\rho_{1}=\sum_{k\in K}P({\mathbf{K}}=k)\left\vert \phi_{k}\right\rangle\left\langle\phi_k\right\vert . \label{equ:average states} \end{equation} Analogously to the classical case, the effectiveness of the adversary's strategy is quantified by the probability $p$ of Eq. \eqref{equ:adversary_probability}. The adversary's strategy is thus to maximize this probability by performing a minimum error measurement, which is a simple two-outcome von Neumann measurement \cite{Heinosaari-2009,Helstrom-1976,Nielsen+Chuang-Quant_Compu_Quant:2000}. Without loss of generality we can choose a basis so that the state $\rho_{0}$ (\ref{equ:average states}) has the form \begin{equation} \rho_{0}=\left( \begin{array} [c]{cc} a & 0\\ 0 & 1-a \end{array} \right) \end{equation} with $a\geq\frac{1}{2}$.
Due to the orthogonality condition (\ref{orthogonality}) the state $\rho_{1}$ has the form \begin{equation} \rho_{1}=\left( \begin{array} [c]{cc} 1-a & 0\\ 0 & \ a\ \end{array} \right) . \end{equation} The optimal measurement of the attacker is thus just the spin $z$ projection measurement and the probability to get a correct outcome (determine the original plaintext) then reads \begin{equation} p=\frac{1}{2}\left[ \frac{1}{2}Tr\left\vert \rho_{0}-\rho_{1}\right\vert +1\right] =a.\label{adversary_probability_quantum} \end{equation} \section{Continuous code} In this section we assume that Alice and Bob share a random key that is continuous, e.g. a single complex number. Such a coding can uniformly cover the state space of a single qubit $\mathcal{H}$, which can be depicted as a Bloch sphere. Later, in Section \ref{sec:discrete_code}, we will show that there exists a discrete coding that approximates the continuous coding introduced in this section with arbitrary precision (with respect to the adversary's probability to reveal the plaintext). Suppose that Alice and Bob know (or expect) that the random key they share can be biased by a certain amount. Their aim is to choose the coding in such a way that any partial knowledge about the key by an eavesdropper would lead to as small a probability as possible of obtaining the correct encrypted bit. This is naturally achieved by a smooth coverage of the state space (Bloch sphere) in such a way that the probability density of selected states will be equal on all points of the sphere. The first important observation is that we can fix an arbitrary adversary's measurement $P={|0\rangle}{\langle0|}$ by fixing the basis and determine the key distribution that is optimally distinguished by the measurement. This can be done as all measurements are unitarily equivalent, i.e. for each pair of measurements there is a unitary rotation of the sphere that maps one measurement to the other.
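The minimum-error probability of Eq. \eqref{adversary_probability_quantum} can be cross-checked numerically for the diagonal pair $\rho_{0}$, $\rho_{1}$ above. A small sketch (the helper name is ours):

```python
import numpy as np

def guess_probability(rho0, rho1):
    # Helstrom bound for two equiprobable states:
    # p = (1/2) * (1 + (1/2) * Tr|rho0 - rho1|).
    eigenvalues = np.linalg.eigvalsh(rho0 - rho1)
    trace_norm = np.abs(eigenvalues).sum()
    return 0.5 * (1 + 0.5 * trace_norm)

a = 0.8
rho0 = np.diag([a, 1 - a])
rho1 = np.diag([1 - a, a])
# For this pair the bound evaluates to a itself, as in the text.
```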
Hence, if there is an optimal distribution for one measurement, for any other measurement there exists a (different) distribution giving the same result. Let us define a source with min-entropy loss at most $c$ for continuous spaces. According to \eqref{minentropy} a discrete source with length $l$ and min-entropy $H_\infty(\mathbf{Z})$ has min-entropy loss $c=l-H_\infty(\mathbf{Z})$: \begin{align} \nonumber c& = l - H_\infty\left(\mathbf{Z}\right)\\ \nonumber & = l - \min_{z\in \mathbf{Z}} \left(-\log Pr\left(\mathbf{Z}=z\right)\right)\\ \nonumber & = \max_{z\in \mathbf{Z}} \left(l +\log Pr\left(\mathbf{Z}=z\right)\right)\\ & = \max_{z\in \mathbf{Z}} \left(\log \left(|\mathbf{Z}|Pr\left(\mathbf{Z}=z\right)\right)\right).\label{discML} \end{align} Equation \eqref{discML} can be easily extended to a continuous space by changing the maximization to $\sup$. $|\mathbf{Z}|$ has to be changed to the volume of the probability space ${\mathcal H}$ and the probability function $Pr$ becomes a probability density function $\mu$. Now let us define a continuous weak source over the space ${\mathcal H}$ with min-entropy loss $c$ as a set of probability density functions for which \begin{equation} c = \sup_{\phi\in{\mathcal H}}\log\left({|{\mathcal H}|\mu\left(\phi\right)}\right). \end{equation} After additional simplification it is easy to see that the condition reads $\sup_{{\mathcal H}}\mu\left(\phi\right)=\frac{2^c}{|{\mathcal H}|}$. Let us consider all distributions on the sphere such that for any state on the sphere its probability density is at most $2^c{|{\mathcal H}|}^{-1}$, with $|{\mathcal H}|$ being the area of the surface of the Bloch sphere. Later, in Section \ref{sec:discrete_code}, we show that such distributions are analogous to discrete min-entropy sources. Flat distributions correspond to continuous distributions where only a $2^{-c}$ fraction of all possible keys would appear with equal nonzero probability density.
Let us now examine a situation, where the adversary prepares for Alice and Bob a flat distribution on a subset of size $2^{-c}\left\vert {\mathcal{H}}\right\vert $ (i.e. the keys are selected with equal probabilities from a $2^{-c}$ fraction of all possible keys). We propose a distribution $\mu _{opt}\left( {|\phi\rangle}\right) $ defined as \[ \mu_{opt}\left( {|\phi\rangle}\right) = \begin{cases} \frac{1}{2^{-c}\left\vert {\mathcal{H}}\right\vert } & \text{for } {|\phi\rangle}\in Y\\ 0 & \text{elsewhere} \end{cases} \] for $Y=\{{|\phi\rangle}\in{\mathcal{H}};\left\vert \langle0|\phi \rangle\right\vert ^{2}\geq g;|Y|=2^{-c}\left\vert {\mathcal{H}}\right\vert\}$ for some suitable constant $g$ dependent on $c$. Let us assume that Alice encodes $0$ (uniformly at random) into one of the states from $Y$. The probability to obtain the correct outcome of the measurement $P$ is \begin{align} p_{opt} & =\int_{{\mathcal{H}}}\mu_{opt}\left( {|\phi\rangle}\right) \left\vert \langle0|\phi\rangle\right\vert ^{2}d\phi\label{p_opt} =\frac{1}{2^{-c}\left\vert {\mathcal{H}}\right\vert }\int_{{|\phi\rangle }\in Y}\left\vert \langle0|\phi\rangle\right\vert ^{2}d\phi. \end{align} Let $\mu\left( {|\phi\rangle}\right)$ be any distribution with $\mu\left( {|\phi\rangle}\right)\leq 2^{c}|{\mathcal H}|^{-1}$ everywhere, i.e. we restrict ourselves to distributions corresponding to discrete min-entropy $(l,l-c)$ distributions. We will now prove that the distribution $\mu_{opt}$ is optimal among these distributions, i.e., there is no other distribution for which the adversary can obtain a higher probability of determining the correct plaintext.
Let us calculate this probability for a general distribution $\mu\left({|\phi\rangle}\right) $ \begin{align} p & =\int_{{\mathcal{H}}}\mu\left( {|\phi\rangle}\right) \left\vert \langle0|\phi\rangle\right\vert ^{2}d\phi\nonumber\\ & =\int_{{|\phi\rangle}\in Y}\mu\left( {|\phi\rangle}\right) \left\vert \langle0|\phi\rangle\right\vert ^{2}d\phi+\int_{{|\phi\rangle}\in\backslash Y}\mu\left( {|\phi\rangle}\right) \left\vert \langle0|\phi\rangle\right\vert ^{2}d\phi\nonumber\\ & =\int_{{|\phi\rangle}\in Y}\mu_{opt}\left( {|\phi\rangle}\right) \left\vert \langle0|\phi\rangle\right\vert ^{2}d\phi+\int_{{|\phi\rangle}\in Y}\left[ \mu\left( {|\phi\rangle}\right) -\mu_{opt}\left( {|\phi\rangle}\right) \right] \left\vert \langle0|\phi\rangle\right\vert ^{2}d\phi+\int _{{|\phi\rangle}\in\backslash Y}\mu\left( {|\phi\rangle}\right) \left\vert \langle0|\phi\rangle\right\vert ^{2}d\phi\nonumber\\ & =p_{opt}+\int_{{|\phi\rangle}\in Y}\left[ \mu\left( {|\phi\rangle }\right) -\mu_{opt}\left( {|\phi\rangle}\right) \right] \left\vert \langle0|\phi\rangle\right\vert ^{2}d\phi+\int_{{|\phi\rangle}\in\backslash Y}\mu\left( {|\phi\rangle}\right) \left\vert \langle0|\phi\rangle\right\vert ^{2}d\phi,\label{flat} \end{align} where $\backslash Y=\mathcal{H}-Y$. Let us now analyze the two remaining terms in the Eq. (\ref{flat}). As $\mu\left( {|\phi\rangle}\right) \leq\mu_{opt}\left( {|\phi\rangle}\right) $ for all ${|\phi\rangle}\in Y$ (recall that $\mu_{opt}\left( {|\phi\rangle}\right) $ reaches the maximal permitted value on the subset $Y$), the integrated function is non-positive on the whole area of integration ($Y$), $\left\vert \langle0|\phi\rangle\right\vert ^{2}\geq g$ on the whole area as well and thus the first integral can be bounded from above by $g\int _{{|\phi\rangle}\in Y}\left[ \mu\left( {|\phi\rangle}\right) -\mu _{opt}\left( {|\phi\rangle}\right) \right] d\phi$. 
In the last term the integrated function is positive, but $\left\vert \langle0|\phi\rangle \right\vert ^{2}<g$ on the whole area and thus the second integral can also be bounded from above by $g\int_{{|\phi\rangle}\in\backslash Y}\mu\left( {|\phi\rangle}\right) d\phi$. Altogether we get \[ p\leq p_{opt}+g\int_{{|\phi\rangle}\in Y}\left[ \mu\left( {|\phi\rangle }\right) -\mu_{opt}\left( {|\phi\rangle}\right) \right] d\phi +g\int_{{|\phi\rangle}\in\backslash Y}\mu\left( {|\phi\rangle}\right) d\phi. \] As $\mu_{opt}\left( {|\phi\rangle}\right) $ is $0$ outside $Y$, we can include it into the integration outside $Y$ with a minus sign. Now we join the integration over the whole space and, using the normalization condition on both distributions, we get \[ p\leq p_{opt}+g\int_{{\mathcal{H}}}\left[ \mu\left( {|\phi\rangle}\right) -\mu_{opt}\left( {|\phi\rangle}\right) \right] d\phi=p_{opt}. \] This completes the proof of the optimality of the flat $\mu_{opt}\left( {|\phi\rangle}\right) $ distribution. Recall that the state space of a single qubit, represented as the Bloch sphere, is a unit sphere with surface area $\left\vert {\mathcal{H}}\right\vert =4\pi$. The set $Y$ derived above is a spherical cap on the Bloch sphere centered at ${|0\rangle}$. The desired flat distribution is hence equivalent to the uniform distribution on a spherical cap with surface $4\pi2^{-c}$. The height of such a spherical cap is $h=2^{-c+1}$ (ranging from $2$ for $c=0$ to $0$ for large $c$). The average state observed by the attacker in the case of the plaintext $0$ is the center of mass of the surface of the spherical cap, which lies on the axis of the cap at the height $h/2$.
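The spherical-cap geometry can be cross-checked by numerically integrating the fidelity $\left\vert\langle0|\phi\rangle\right\vert^{2}=\cos^{2}(\theta/2)$ over the cap; the result matches the closed form $1-2^{-c-1}$ obtained from the center-of-mass argument. A sketch (the midpoint-rule resolution $m$ is our choice):

```python
import numpy as np

def p_opt_numeric(c, m=200_000):
    # Cap Y around |0>: area 4*pi*2^-c gives cos(theta_c) = 1 - 2^(1-c).
    theta_c = np.arccos(1 - 2.0 ** (1 - c))
    # Midpoint rule in the colatitude theta; area element 2*pi*sin(theta).
    theta = (np.arange(m) + 0.5) * theta_c / m
    fidelity = np.cos(theta / 2) ** 2        # |<0|phi>|^2
    cap_integral = 2 * np.pi * np.sum(fidelity * np.sin(theta)) * (theta_c / m)
    # Normalize by the cap area 2^-c * 4*pi, as in Eq. (p_opt).
    return cap_integral / (2.0 ** (-c) * 4 * np.pi)
```

For example, $c=0$ covers the whole sphere and gives $p_{opt}=1/2$, while large $c$ pushes $p_{opt}$ toward $1$.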
The average state then reads \begin{equation} \rho_{0}=\frac{1}{2}\mathbb{I}+\frac{1}{2}\left(1-\frac{h}{2}\right){\sigma}_{z}=p{|0\rangle}{\langle 0|}+\left( 1-p\right) {|1\rangle}{\langle1|} \end{equation} with \begin{equation} p=1-h/4=1-2^{-c-1}.\label{probability_quantum} \end{equation} The appearance of $h/4$ instead of $h/2$ is due to the renormalization of the axis: while the Bloch sphere has diameter $2$, the parameter $p$ changes from $0$ to $1$ across the sphere. Accordingly, the state observed by the attacker in the case of sending $1$ is $\rho_{1}=\left( 1-p\right) {|0\rangle}{\langle0|}+p{|1\rangle}{\langle1|}$. Observing that $p\geq1/2$ and substituting into Eq. (\ref{adversary_probability_quantum}), the optimal adversary's probability for this state is also $p$. The comparison of the classical bound and the quantum approach is given in Fig.~\ref{figure1}. It is clear that the adversary's probability of correctly guessing the plaintext is, for all parameters $c$ (except $0$ and $1$), strictly smaller than the classical one. Even more interestingly, the adversary's probability stays strictly below $1$ even for large $c$, which means that (some) privacy is established even in the case when the adversary has almost perfect control of the ``random'' key. \begin{figure}\label{figure1} \end{figure} \section{Discrete code} \label{sec:discrete_code} In this section we will show that for any $c\geq0$ and an arbitrarily small $\epsilon>0$ there exists an encryption system \begin{equation} e_{k}:\{0,1\}\rightarrow{\mathrm{S}}({\mathcal{H}}) \end{equation} indexed by keys from a finite set $K$ that bounds the adversary's probability to determine the original plaintext by $1-2^{-c-1}+\epsilon$. All the derivations made in the previous section were done for a continuous distribution of the states $\left\vert \psi\right\rangle $ over the state space.
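The comparison in Fig.~\ref{figure1} can be reproduced directly from Eq. \eqref{probability_quantum} and the classical bound \eqref{equ:pinkas_probability}. A self-contained sketch (function names are ours):

```python
import math

def quantum_p(c):
    # Adversary's probability with the quantum ciphertext and a flat
    # continuous source of min-entropy loss c, Eq. (probability_quantum).
    return 1 - 2 ** (-c - 1)

def classical_bound(c):
    # McInnes-Pinkas classical lower bound, Eq. (pinkas_probability).
    if c >= 2:
        return 1.0
    if c >= 2 - math.log2(3):
        return 0.5 + 2 ** c / 8
    return 2 ** c / 2

# The two coincide at c = 0 and c = 1; for every other c the quantum
# value is strictly smaller, and it stays below 1 for arbitrarily large c.
```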
If we consider a discrete finite number $n$ of possible keys $\left\vert \psi_{i}\right\rangle $, we can only approximate such distributions using a finite number of lattice points \cite{Kraetzel-LatticePoints-1989} uniformly distributed over the surface of the Bloch sphere. Let us assign to each single state $\left\vert \psi _{i}\right\rangle $ its neighborhood, i.e. a subset ${\mathcal{H}}_{i}$ of the whole Hilbert space such that \begin{itemize} \addtolength{\itemsep}{-2mm} \item[(i)] The surface on the Bloch sphere of each ${\mathcal{H}}_{i}$ equals $\frac{4\pi}{n}$. \item[(ii)] Neighborhoods corresponding to different states are disjoint, i.e. $\forall i,j\,{|\psi_{i}\rangle}\neq{|\psi_{j}\rangle }\Rightarrow{\mathcal{H}}_{i}\cap{\mathcal{H}}_{j}=\emptyset$. \item[(iii)] The distance between a state $\left\vert \psi_{i}\right\rangle $ and any state in its neighborhood measured in the relative angle on the Bloch sphere is upper bounded by $O\left( n^{-1/2}\right) $. \end{itemize} For every $n$, there exists a set of states $\left\vert \psi _{i}\right\rangle $ with neighborhoods that fulfill the aforementioned conditions. This can be seen from the fact that even a suitable (by far non-optimal) distribution of points on $\sqrt{n}$ meridians with $\sqrt{n}$ points each yields a maximal angle of $\frac{3\pi}{2\sqrt{n}}$ (for an explanation see Fig. (\ref{poludnik})), which fulfills the third condition, whereas the first two can be fulfilled trivially. \begin{figure}\label{poludniky} \label{poludnik} \end{figure} We will show that for any probability distribution on the $n$ aforementioned states with min-entropy at least $\log_2(n)-c$, the adversary's probability to determine the plaintext is at most $p_{opt}+\epsilon$, where $\epsilon$ drops as $n^{-1/2}$. Let us fix a particular distribution $(q_{i})_{i=1}^{n}$ on the states ${|\psi_{i} \rangle}$. We construct a continuous distribution $\mu_{q}(|\phi\rangle)=\frac{nq_{i} }{4\pi}$ for $\phi\in{\mathcal{H}}_{i}$.
The adversary's probability to obtain the plaintext in the case of distribution $(q_{i})_{i=1}^{n}$ on states ${|\psi_{i}\rangle}$ reads (compare to Eq. \eqref{p_opt}) \begin{equation} q_{n}=\sum_{i=1}^n q_{i}\left\vert \langle0|\psi_{i} \rangle\right\vert ^{2}.\label{qn} \end{equation} As the difference of the fidelity of the state ${|\psi_{i} \rangle}$ with ${|0 \rangle}$ and any state from its neighborhood with ${|0 \rangle}$ is upper bounded by $O\left( n^{-1/2}\right)$, we can approximate Eq. (\ref{qn}) by a correctly normalized integral over its neighborhood \begin{equation} \left\vert \langle0|\psi_{i} \rangle\right\vert ^{2}\leq \frac{n}{4\pi}\int_{{|\phi\rangle}\in{\mathcal{H}}_{i}}\left[ \left\vert \langle0|\phi\rangle\right\vert ^{2}+O\left( n^{-1/2}\right) \right]d\phi. \end{equation} Substituting we get \begin{align} q_{n} & \leq\sum_{i=1}^{n}q_{i} \frac{n}{4\pi}\int_{{|\phi\rangle}\in{\mathcal{H}}_{i}}\left[ \left\vert \langle0|\phi\rangle\right\vert ^{2}+O\left( n^{-1/2}\right) \right]d\phi\nonumber\\ & =\sum_{i=1}^{n} \int_{{|\phi\rangle}\in{\mathcal{H}}_{i}}\left[ q_{i}\frac{n}{4\pi}\left\vert \langle0|\phi\rangle\right\vert ^{2}+ q_{i}\frac{n}{4\pi}O\left(n^{-1/2}\right) \right]d\phi\nonumber\\ & = \int_{{|\phi\rangle}\in{\mathcal{H}}} \mu_q(|\phi\rangle)\left\vert \langle0|\phi\rangle\right\vert ^{2}d\phi + \sum_{i=1}^{n}\left(q_i\frac{n}{4\pi}O\left(n^{-1/2}\right) \int_{{|\phi\rangle}\in{\mathcal{H}}_{i}} d\phi\right)\nonumber\\ & \leq p_{opt} + \sum_{i=1}^{n}\left(q_iO\left(n^{-1/2}\right)\right)\label{mu}\\ & = p_{opt} + O\left(n^{-1/2}\right)\nonumber\\ & \leq\left[ 1+O\left( n^{-1/2}\right) \right] p_{opt}.\label{discretization} \end{align} Inequality \eqref{mu} holds, because $\mu_{q}$ satisfies all min-entropy requirements set in the previous section, and thus the integral can be bounded from above by $p_{opt}$. The last inequality holds, because $p_{opt}$ is a positive constant, not depending on $n$. 
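The meridian construction used for condition (iii) above can be probed by Monte Carlo sampling. The following sketch (our own variable names; it only tests the $\frac{3\pi}{2\sqrt{n}}$ covering bound) builds the $\sqrt{n}\times\sqrt{n}$ grid and checks that random points on the sphere are never farther than the bound from the nearest lattice point:

```python
import numpy as np

rng = np.random.default_rng(0)
s = 20                       # sqrt(n) meridians with sqrt(n) points each
n = s * s

# Grid: colatitudes at row midpoints, equally spaced longitudes.
theta = (np.arange(s) + 0.5) * np.pi / s
phi = np.arange(s) * 2 * np.pi / s
tt, pp = np.meshgrid(theta, phi, indexing="ij")
grid = np.stack([np.sin(tt) * np.cos(pp),
                 np.sin(tt) * np.sin(pp),
                 np.cos(tt)], axis=-1).reshape(-1, 3)

# Random points uniformly distributed on the sphere.
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# Largest angle to the nearest grid point over all samples.
cosang = np.clip(pts @ grid.T, -1.0, 1.0)
worst = np.arccos(cosang.max(axis=1)).max()

bound = 3 * np.pi / (2 * s)  # the 3*pi/(2*sqrt(n)) covering bound
```

The bound follows from the triangle inequality on the sphere: at most $\frac{\pi}{2\sqrt{n}}$ to the nearest row of colatitudes plus at most $\frac{\pi}{\sqrt{n}}$ along the circle of latitude to the nearest meridian.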
We can conclude that for any $c\geq0$ and an arbitrarily small $\epsilon>0$ there exists a (sufficiently large) $n$ that bounds the adversary's probability to determine the original plaintext by $1-2^{-c-1}+\epsilon$. It is obvious that for any fixed $c$ a suitable $\epsilon$ can be chosen such that the adversary's probability of guessing the plaintext correctly is strictly smaller than one. Moreover, consider a cryptosystem with only two elements $k_{1},k_{2}\in K$ such that $Pr({\mathbf{K}}=k_{1})>0$, $Pr({\mathbf{K}}=k_{2})>0$. Then the average states $\rho_{0}$, $\rho_{1}$ are not pure, i.e. (in a suitable basis) \begin{equation} \rho_{0}= \begin{pmatrix} \ a\ & 0\\ 0 & 1-a \end{pmatrix} \end{equation} with $a,1-a>0$. This implies the quantity in Eq. (\ref{adversary_probability_quantum}) is strictly smaller than $1$. Thus, for any encoding the adversary's probability to determine the original plaintext is strictly smaller than $1$, as long as there are at least two keys with nonzero probability\footnote{Equivalently: the Shannon (or min-) entropy of the key is non-zero.}. \section{Conclusion} In this paper we have shown that a quantum ciphertext can significantly increase the security of classical communication between two parties using weak random sources. In particular, for significantly weak sources, where at most a quarter of the keys can be used for encryption, no security can be achieved with a classical ciphertext. This is true independently of both the length of the key $l$ and the length of the ciphertext. In contrast, for quantum ciphertexts, some level of security can be achieved even if only two keys appear with non-zero probability. For all sources (for all values of $c$) the presented quantum approach is at least as good as the classical one. Moreover, except for $c=0$ (the trivial case with perfect sources and perfect security) and $c=1$, the quantum approach outperforms the classical one.
It is important to stress that our result does not give a lower bound on the security one can achieve in this problem in general. Using encoding into systems of higher dimension (e.g. a pair of qubits or a qutrit) may achieve significantly better results, as more complicated encodings, including those using mixed states, come into question. \section{Acknowledgments} We acknowledge the support of the Czech Science Foundation GA{\v C}R projects P202/12/1142 and P202/12/G061, as well as projects CE SAS QUTE and VEGA 2/0072/12. MPl acknowledges the support of the SoMoPro project funded under FP7 (People) Grant Agreement no 229603 and by the South Moravian Region. \nonumsection{References} \end{document}
\begin{document} \title{Uniqueness of weakly reversible and deficiency zero realizations of dynamical systems} \begin{abstract} A reaction network together with a choice of rate constants uniquely gives rise to a system of differential equations, according to the law of mass-action kinetics. On the other hand, {\em different} networks can generate {\em the same} dynamical system under mass-action kinetics. Therefore, the problem of identifying ``the'' underlying network of a dynamical system is {\em not well-posed, in general}. Here we show that the problem of identifying an underlying {\em weakly reversible deficiency zero network} is well-posed, in the sense that the solution is unique whenever it exists. This can be very useful in applications because, from the perspective of both dynamics and network structure, a weakly reversible deficiency zero ($\textit{WR}_\textit{0}$) realization is the simplest possible one. Moreover, while mass-action systems can exhibit practically any dynamical behavior, including multistability, oscillations, and chaos, $\mrm{WR}_0$\ systems are remarkably stable for {\em any} choice of rate constants: they have a unique positive steady state within each invariant polyhedron, and cannot give rise to oscillations or chaotic dynamics. We also prove that both of our hypotheses (i.e., weak reversibility and deficiency zero) are necessary for uniqueness. \end{abstract} \section{Introduction} \label{sec:intro} A common approach to mathematical modeling in biochemistry, molecular biology, ecology, and (bio)chemical engineering is based on non-linear interactions between different species or entities, e.g., metabolic or signaling pathways in cells, predator-prey relations in population dynamics, and reactions inside a chemical reactor~\cite{Malthus1798, Verhulst1838, Feinberg1987, Sontag2001}.
These interactions, represented by a directed graph and under specified kinetic rules, generate a system of differential equations that model the time-dependent abundance of the interacting species. The most common kind of such kinetic rules is \emph{mass-action kinetics}, where the rate at which an interaction occurs is proportional to the abundance of the interacting species, and the resulting differential equations have a polynomial right-hand side. Complex qualitative dynamics such as multistability, oscillations, and chaos are possible under mass-action kinetics~\cite{voit2015-150, yu2018mathematical}. In general, determining the qualitative dynamics of a given mass-action system can be a very difficult task; on the other hand, certain structures in an associated directed graph, or \emph{reaction network}, are linked to special dynamical properties. One such network property is \emph{weak reversibility}, i.e., every reaction is part of an oriented cycle. A weakly reversible system has at least one positive steady state within each invariant polyhedron~\cite{Boros2019}. Even though some weakly reversible systems may have infinitely many steady states~\cite{boros2019weakly}, they are conjectured to be persistent and even permanent~\cite{CraciunNazarovPantea2013_GAC}. Under some restrictions on the rate constants, weakly reversible mass-action systems are \emph{complex-balanced}; such systems are known to admit a globally defined strict Lyapunov function~\cite{HornJackson1972,feinberg1972complex, horn1972necessary}. In particular, complex-balanced systems cannot give rise to multistability, oscillations, or chaotic dynamics, and are conjectured to have a globally stable positive steady state within each invariant polyhedron; this conjecture has been proved under some additional assumptions~\cite{CraciunNazarovPantea2013_GAC, Anderson2011_GAC, BorosHofbauer2019_GAC, gopalkrishnan2014geometric}.
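The mass-action rule described above (each reaction $\vv y \to \vv y'$ with rate constant $k$ contributes $k\,\vv x^{\vv y}(\vv y' - \vv y)$ to $d\vv x/dt$) can be sketched in a few lines; the one-species network in the example is our own illustration, not one taken from this paper:

```python
import numpy as np

def mass_action_rhs(x, reactions):
    # reactions: list of (y, y_prime, k), where y and y_prime are the
    # source and target vertices as exponent vectors.  Returns the sum
    # of k * x**y * (y_prime - y) over all reactions.
    x = np.asarray(x, dtype=float)
    dxdt = np.zeros_like(x)
    for y, y_prime, k in reactions:
        y = np.asarray(y, dtype=float)
        rate = k * np.prod(x ** y)
        dxdt += rate * (np.asarray(y_prime, dtype=float) - y)
    return dxdt

# One species, two reactions: 0 -> A and 3A -> 2A, both with rate
# constant 3, generating dx/dt = 3 - 3x^3.
net = [((0,), (1,), 3.0), ((3,), (2,), 3.0)]
```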
The number of constraints on the rate constants necessary for complex-balancing is measured by an integer called the \emph{deficiency} of the network~\cite{CraciunDickensteinShiuSturmfels2009} (see \Cref{def:deficiency}). In the case of weak reversibility and deficiency zero, which we call $\textbf{\emph{WR}}_\textbf{\emph{0}}$ for short, the system is {\em always} complex-balanced, regardless of the choice of rate constants; this is the statement of the Deficiency Zero Theorem~\cite{feinberg1972complex,horn1972necessary, feinberg2019foundations}. An example of a $\mrm{WR}_0$\ biochemical system is a model for T-cell receptor signal transduction~\cite{Sontag2001, Mckeithan1995}, whereby a T-cell receptor ($\cf{T}$) forms an initial ligand-receptor complex ($\cf{C}_0$) that is converted (by phosphorylation) to an active form ($\cf{C}_N$) via a sequence of intermediates ($\cf{C}_i$); the active form $\cf{C}_N$ of the complex is responsible for generating a signal. Also, any of the ligand-receptor complexes $\cf{C}_i$ may dissociate. The reaction network (with rate constants as labeled) proposed by McKeithan for this process is shown in \Cref{fig:T-cell}. This reaction network is weakly reversible and has deficiency zero, and thus any positive steady state is asymptotically stable within its invariant polyhedron. Indeed, any positive steady state is \emph{globally stable} within its invariant polyhedron~\cite{Sontag2001}. \begin{figure} \caption{A mass-action system modelling signal transduction of a T-cell receptor~\cite{Mckeithan1995}.} \label{fig:T-cell} \end{figure} In short, under mass-action kinetics, a reaction network (dictating interactions of interest) and a choice of positive rate constants (proportionality constants for each interaction) uniquely determine the dynamics. Moreover, if the network satisfies certain conditions, then one can deduce the qualitative dynamics without solving the system of differential equations.
On the other hand, a given system of differential equations, even if known to have come from mass-action kinetics, is \emph{not} associated with a unique network structure~\cite{CraciunPantea2008}. Indeed, without additional requirements, there are infinitely many networks that can give rise to the same dynamical system under mass-action kinetics. Earlier studies took advantage of the possibility of finding a network with desirable properties to conclude that a system of differential equations has certain dynamical properties~\cite{CraciunJinYu2019, CraciunJinYu_STN, JohnstonSiegelSzederkenyi2013, BrustengaCraciunSorea2020, CraciunSorea2020}. For example, the dynamical system \eq{ \frac{dx}{dt} = 3 - 3x^3 } can be generated by any of the three networks in \Cref{fig:intro} using the rate constants labeled in the figure (and also can be generated by many other networks, for well-chosen rate constants). The network in \Cref{fig:intro-notWR} is neither weakly reversible nor has deficiency zero; the network in \Cref{fig:intro-notdef0} is weakly reversible but has a positive deficiency. The Deficiency Zero Theorem is therefore silent until one recognizes that the same dynamics is also realized by the $\mrm{WR}_0$\ system in \Cref{fig:intro-def0}. While it is unnecessary to search for a $\mrm{WR}_0$\ realization for such simple differential equations, this process of finding \df{dynamically equivalent} (see \Cref{def:DE}) realizations applies to much more complicated and higher dimensional systems. Moreover, finding different realizations is essentially a linear feasibility problem, and algorithms exist for this very purpose~\cite{Szederkenyi2012_Toolbox, RudanSzederkenyiKatalinPeni2014}. \begin{figure} \caption{Mass-action systems that give rise to the same system of differential equations.
Labels on edges are rate constants for that reaction.} \label{fig:intro} \end{figure} The general non-uniqueness of networks that can generate a given dynamical system poses a challenge: what is a reaction mechanism that is consistent with kinetic data? The lack of network identifiability implies that even with experimental data of perfect accuracy and temporal resolution, it is impossible to determine the underlying reaction network without imposing conditions on the network structure. Thus, it is reasonable to seek the \lq\lq simplest\rq\rq\ network that is consistent with the data. One may require a minimal number of vertices in the network, or weak reversibility, or perhaps minimal deficiency; these are not unrelated. If there exists a weakly reversible realization, then there exists one that uses the minimal number of vertices, namely those given by the distinct monomials appearing in the differential equations~\cite{CraciunJinYu2019}. Moreover, by minimizing the number of vertices in the network, one also minimizes deficiency (see \Cref{prop:V}). Since $\mrm{WR}_0$\ realizations are desirable because of simplicity as well as stable dynamics (by the Deficiency Zero Theorem), the question we pose is: can there be {\em different} $\mrm{WR}_0$\ realizations for the same dynamical system? {\em The answer is no: $\textit{WR}_\textit{0}$ realizations are unique~(see \Cref{thm:WR}).} Moreover, we show that {\em both} weak reversibility and deficiency zero are necessary for uniqueness. This paper is organized as follows. \Cref{sec:mas} introduces mass-action systems and dynamical equivalence, as well as other notions and results of reaction network theory used throughout this work. We then show the necessity of weak reversibility and deficiency zero in \Cref{sec:nec}, and describe restrictions on the network structure imposed by these two conditions in \Cref{sec:lem}. Finally, we prove the uniqueness of the weakly reversible, deficiency zero realization in \Cref{sec:thm}.
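The coefficient matching behind \Cref{fig:intro} can be checked mechanically. The following sketch (in Python; the realization $\cf{0} \rightleftarrows \cf{3X}$ with unit rate constants is our illustrative choice, deduced by matching coefficients rather than read off the figure) verifies that a candidate one-species network reproduces $\dot x = 3 - 3x^3$:

```python
# Illustrative check (not taken from the paper's figures): the dynamics
# dx/dt = 3 - 3x^3 has monomials x^0 and x^3, so a weakly reversible
# candidate on the vertex set {0, 3} is the cycle 0 -> 3X -> 0; matching
# coefficients forces both rate constants to equal 1.

def mass_action_rhs(edges, x):
    """Right-hand side of the 1-species mass-action ODE.

    edges: dict mapping (source, target) vertices to rate constants.
    """
    return sum(k * x**src * (tgt - src) for (src, tgt), k in edges.items())

wr0 = {(0, 3): 1.0, (3, 0): 1.0}  # 0 <-> 3X, both rate constants 1

for x in [0.5, 1.0, 2.0]:
    assert abs(mass_action_rhs(wr0, x) - (3 - 3 * x**3)) < 1e-12
```

In higher dimensions the same comparison becomes the linear feasibility problem mentioned above, with one linear constraint per source vertex.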
\section{Background} \label{sec:mas} In this section, we introduce mass-action systems and notions that are necessary for our exposition. For a short introduction to the mathematics of mass-action systems, see~\cite{yu2018mathematical}; for a detailed description of classical results, see~\cite{feinberg2019foundations}. Throughout, let $\ensuremath{\mathbb{R}}_{>0}$ denote the set of positive real numbers, and $\ensuremath{\mathbb{R}}_{>0}^n$ the set of vectors with positive components, i.e., $\vv x \in \ensuremath{\mathbb{R}}_{>0}^n$ if $x_i > 0$ for all $i = 1$, $2,\dots, n$. Analogously, let $\ensuremath{\mathbb{R}}_{\geq 0}$ and $\ensuremath{\mathbb{R}}_{\geq 0}^n$ denote the sets of non-negative numbers and vectors respectively. Summing over the empty set returns the zero vector, i.e., $\displaystyle \sum_{\vv y \in \emptyset} \vv y = \vv 0$. The disjoint union of sets is denoted $X \sqcup Y$. \begin{defn} \label{def:crn} A \df{reaction network} is a directed graph $G = (V,E)$, where $V$ is a finite subset of $\ensuremath{\mathbb{R}}^n$, and there are neither self-loops nor isolated vertices. \end{defn} In the reaction network literature, a vertex is also called a \df{complex}. An edge $(\vv y, \vv y')$, also called a \df{reaction}, is denoted $\vv y \to \vv y'$. Vertices are points in $\ensuremath{\mathbb{R}}^n$, so an edge $\vv y \to \vv y' \in E$ can be regarded as a bona fide vector between vertices. Each edge is associated to a \df{reaction vector} $\vv y' - \vv y \in \ensuremath{\mathbb{R}}^n$. The set of vertices of a reaction network can naturally be partitioned according to the connected components, also called \df{linkage classes} in the reaction network literature. Given a reaction network, we identify a linkage class by the subset of vertices in that connected component. A reaction network is said to be \df{weakly reversible} if every linkage class is strongly connected, i.e., every edge is part of an oriented cycle.
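As a computational aside, weak reversibility, i.e., every edge lying on an oriented cycle, can be tested directly from the edge list. The sketch below (in Python, with vertices encoded as points and edges as ordered pairs; the helper names are ours) checks that for every edge $\vv y \to \vv y'$ there is a directed path back from $\vv y'$ to $\vv y$:

```python
# Weak reversibility check: every edge must lie on a directed cycle,
# equivalently every linkage class must be strongly connected.

def reachable(edges, start):
    """Set of vertices reachable from `start` along directed edges."""
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for (a, b) in edges:
            if a == v and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def is_weakly_reversible(edges):
    """True iff for every edge y -> y', y is reachable back from y'."""
    return all(a in reachable(edges, b) for (a, b) in edges)

# 0 <-> 3X embedded in R (vertices 0 and 3) is weakly reversible;
# adding the lone edge X -> 2X destroys weak reversibility.
assert is_weakly_reversible([(0, 3), (3, 0)])
assert not is_weakly_reversible([(0, 3), (3, 0), (1, 2)])
```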
\begin{defn} \label{def:affindep} A reaction network $(V,E)$ has \df{affinely independent linkage classes} if the vertices in each linkage class are affinely independent, i.e., if the vertices in $\{\vv y_0, \vv y_1,\ldots, \vv y_m\} \subseteq V$ define a linkage class, then the set $\{ \vv y_j - \vv y_0 \colon j=1,2,\ldots, m \}$ consists of linearly independent vectors. \end{defn} \begin{figure} \caption{The mass-action systems of \Cref{fig:intro}.} \label{fig:intro-EG} \end{figure} \begin{ex} \label{ex:intro} The examples of \Cref{fig:intro} are represented as directed graphs embedded in $\ensuremath{\mathbb{R}}$ in \Cref{fig:intro-EG}. The complexes $\cf{0}$ and $\cf{3X}$ correspond to the points $0$ and $3 \in \ensuremath{\mathbb{R}}$ respectively. Similarly, the complexes $\cf{X}$ and $\cf{2X}$ can be understood as $1$ and $2 \in \ensuremath{\mathbb{R}}$. The rate constants are kept as edge labels for clarity. An edge in a network is a vector that points from the source to the target. The networks shown in \Cref{fig:intro-EG-CB,fig:intro-EG-def1} each have one linkage class, while that of \Cref{fig:intro-EG-notWR} has two linkage classes and is not weakly reversible. The network in \Cref{fig:intro-EG-def1} does \emph{not} have affinely independent linkage classes, since $0$, $2$, and $3$ are collinear, thus not affinely independent, points of $\ensuremath{\mathbb{R}}$. \end{ex} \begin{defn} \label{def:mas} Let $G = (V,E)$ be a reaction network in $\ensuremath{\mathbb{R}}^n$. Let $\vv\kappa \in \ensuremath{\mathbb{R}}_{>0}^{E}$ be a vector of \df{rate constants}. A \df{mass-action system} $\ensuremath{(G, \vv \kk)}$ is the weighted directed graph whose \df{associated dynamical system} is the system of differential equations on $\ensuremath{\mathbb{R}}_{>0}^n$ \eqn{\label{eq:mas} \frac{d\vv x}{dt} = \sum_{\vv y_i \to \vv y_j \in E}\!\! \kappa_{ij} \vv x^{\vv y_i}(\vv y_j - \vv y_i), } where $\vv x^\vv y = x_1^{y_1}x_2^{y_2}\cdots x_n^{y_n}$.
\end{defn} It is sometimes convenient to refer to $\kappa_{ij}$ even though $\vv y_i \to \vv y_j$ may not be an edge in the network. In such cases, the convention is to take $\kappa_{ij} = 0$. We can rearrange the sum on the right-hand side of \eqref{eq:mas} by grouping terms with the same monomial, each being multiplied by the weighted sum of reaction vectors originating from the corresponding source vertex, as in \eq{ \frac{d\vv x}{dt} = \sum_{\vv y_i \in V} \,\vv x^{\vv y_i} \sum_{\vv y_j \in V} \kappa_{ij} (\vv y_j - \vv y_i). } We give a name to the weighted sum associated with each monomial. \begin{defn} \label{def:directvect} Let $(G,\vv\kappa)$ be a mass-action system, and let $\vv y_i \in V$. The \df{net reaction vector from $\vv y_i$} is \eqn{ \label{eq:directvect} \sum_{\vv y_j \in V} \kappa_{ij} (\vv y_j - \vv y_i). } \end{defn} For convenience, we may refer to the net reaction vector even though $\vv y_i \not\in V$; in this case, let the net reaction vector be the zero vector. The right-hand side of \eqref{eq:mas} clearly lies in the linear span of the net reaction vectors. It also lies in the (possibly larger) \df{stoichiometric subspace} \eq{ S = \Span\{ \vv y_j - \vv y_i \colon \vv y_i \to \vv y_j \in E\} . } Hence, any solution to \eqref{eq:mas} is confined to a translate of $S$. For any $\vv x_0 \in \ensuremath{\mathbb{R}}_{>0}^n$, the \df{stoichiometric class of $\vv x_0$} is the polyhedron $(\vv x_0+S)\cap \ensuremath{\mathbb{R}}_{>0}^n$. In the current work, we extend the notion of the stoichiometric subspace to subsets of vertices, usually from the same linkage class. \begin{defn} \label{def:stoichsubsp} Let $(V, E)$ be a reaction network and $V_0 \subseteq V$. The \df{stoichiometric subspace defined by $V_0$} is the vector space \eq{ S(V_0) = \Span \{ \vv y_j - \vv y_i \colon \vv y_i, \,\vv y_j \in V_0\}.
} \end{defn} If $V_0 = \{ \vv y_0,\vv y_1,\ldots, \vv y_m\}$, then $S(V_0) = \Span \{\vv y_j - \vv y_0 \colon j=1,2,\ldots, m \}$, so that if $V_0$ consists of affinely independent vertices, the set of vectors from a fixed vertex to all others forms a basis for $S(V_0)$. Clearly, if $V_0 \subseteq V_1$, then $S(V_0) \subseteq S(V_1)$. If $V_0$ defines a linkage class of $G$, then $S(V_0)$, the stoichiometric subspace \emph{of the linkage class}, captures the geometry of this connected component. Furthermore, the stoichiometric subspace \emph{of the network} is the vector sum of the stoichiometric subspaces of the linkage classes. In particular, if $G$ has a single linkage class, then $S(V) = S$. However, if $G$ has multiple linkage classes, $S$ may be a proper subspace of $S(V)$. While weak reversibility is a property of a reaction network, it has dynamical implications. For example, every weakly reversible mass-action system has at least one positive steady state within every stoichiometric class~\cite{Boros2019}; weakly reversible systems are also conjectured to be persistent and permanent, with certain cases proven, e.g., when $n = 2$~\cite{CraciunNazarovPantea2013_GAC}, or when $\dim S = 2$ with all trajectories bounded~\cite{Pantea2012_GAC}, or when there is only one linkage class~\cite{BorosHofbauer2019_GAC, Anderson2011_GAC}. Weak reversibility is also necessary for complex-balancing; complex-balanced steady states are known to be asymptotically stable and conjectured to be globally stable~\cite{Horn1974_GAC, Craciun2019_GAC-inclusion}. By definition, a positive state $\vv x$ is a \df{complex-balanced steady state} of a mass-action system $(G, \vv\kappa)$ if at every $\vv y_i \in V$, we have \eq{ \sum_{\vv y_i \to \vv y_j \in E}\!\! \kappa_{ij} \vv x^{\vv y_i} = \sum_{\vv y_j \to \vv y_i \in E}\!\! \kappa_{ji} \vv x^{\vv y_j} . } The above equation can be interpreted as balancing the fluxes across the vertex $\vv y_i$.
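The flux-balance condition above can be verified numerically for small networks. The following one-species Python sketch (the network $\cf{0} \rightleftarrows \cf{3X}$ with unit rates is an illustrative choice, not taken from the text) checks whether a given positive state balances incoming and outgoing fluxes at every vertex:

```python
# Flux balance at a positive state x for a 1-species network; kappa maps
# directed edges (src, tgt) to rate constants.  (Illustrative example.)

def is_complex_balanced_at(kappa, x, tol=1e-12):
    """True iff outgoing flux equals incoming flux at every vertex."""
    vertices = {v for edge in kappa for v in edge}
    for y in vertices:
        out_flux = sum(k * x**s for (s, t), k in kappa.items() if s == y)
        in_flux = sum(k * x**s for (s, t), k in kappa.items() if t == y)
        if abs(out_flux - in_flux) > tol:
            return False
    return True

cycle = {(0, 3): 1.0, (3, 0): 1.0}  # 0 <-> 3X with unit rates

assert is_complex_balanced_at(cycle, 1.0)      # x = 1 balances every vertex
assert not is_complex_balanced_at(cycle, 2.0)  # x = 2 does not
```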
Once a mass-action system admits a complex-balanced steady state, all of its positive steady states are complex-balanced~\cite{HornJackson1972}; therefore we call such a mass-action system a \df{complex-balanced system}. Because every stoichiometric class has exactly one complex-balanced steady state, the set of positive steady states has dimension equal to $\dim S^\perp$~\cite{HornJackson1972}. Not every weakly reversible mass-action system is complex-balanced. In general, for a system to be complex-balanced, the rate constants must satisfy certain algebraic constraints, the number of which is measured by a non-negative integer called the deficiency~\cite{CraciunDickensteinShiuSturmfels2009}. \begin{defn} \label{def:deficiency} Let $(V,E)$ be a reaction network with $\ell$ linkage classes and stoichiometric subspace $S$. The \df{deficiency} of the network is the integer $\delta = |V| - \ell - \dim S$. \end{defn} One can also consider the \df{deficiency of a linkage class} given by $V_i \subseteq V$, defined as $\delta_i = |V_i| - 1 - \dim S(V_i)$. It is easy to see that \eq{ \delta \geq \sum_{i=1}^\ell \delta_i, } with equality if and only if the stoichiometric subspaces of the linkage classes $\{S(V_i)\}_{i=1}^\ell$ are linearly independent. If $\delta = 0$, then necessarily $\delta_i = 0$ for all $i=1$, $2,\ldots, \ell$. If the deficiency of a weakly reversible reaction network is zero, then for any choice of positive rate constants, the mass-action system is complex-balanced~\cite{horn1972necessary, feinberg1972complex}. It then follows that within every stoichiometric class, the associated dynamical system \eqref{eq:mas} has a unique positive steady state, which is linearly stable~\cite{HornJackson1972, Johnston_note}. The deficiency is a property of the reaction network, not of the associated dynamical system; yet in the case of deficiency zero, it has strong implications for the dynamics under mass-action kinetics.
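The formula $\delta = |V| - \ell - \dim S$ is straightforward to evaluate from an edge list. A hedged Python sketch (using NumPy for the rank computation; the example networks are our own choices, echoing the one-species examples above):

```python
import numpy as np

# Deficiency delta = |V| - l - dim S computed from an edge list; vertices
# are tuples in R^n.  (Example networks are illustrative.)

def linkage_classes(vertices, edges):
    """Partition the vertex set into connected components (union-find)."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for (a, b) in edges:
        parent[find(a)] = find(b)
    classes = {}
    for v in vertices:
        classes.setdefault(find(v), []).append(v)
    return list(classes.values())

def deficiency(edges):
    vertices = sorted({v for e in edges for v in e})
    ell = len(linkage_classes(vertices, edges))
    reaction_vectors = np.array([np.subtract(t, s) for (s, t) in edges])
    dim_S = np.linalg.matrix_rank(reaction_vectors)
    return len(vertices) - ell - dim_S

# 0 <-> 3X in R: two vertices, one linkage class, dim S = 1, so delta = 0.
assert deficiency([((0,), (3,)), ((3,), (0,))]) == 0
# 0 <-> 2X <-> 3X in R: three collinear vertices give delta = 3 - 1 - 1 = 1.
assert deficiency([((0,), (2,)), ((2,), (0,)), ((2,), (3,)), ((3,), (2,))]) == 1
```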
In this work, we are interested in weakly reversible and deficiency zero reaction networks, which we refer to as \df{$\textbf{WR}_\textbf{0}$ networks}. The associated dynamical system \eqref{eq:mas} is uniquely defined by the network and its rate constants; however, different reaction networks can give rise to the same system of differential equations under mass-action kinetics~\cite{CraciunPantea2008}. Dynamical equivalence captures the situation in which different mass-action systems (networks with their rate constants) have the same associated dynamical system, which occurs if and only if the net reaction vectors coincide. \begin{defn} \label{def:DE} Two mass-action systems $(G,\vv\kappa)$ and $(G', \vv\kappa')$ are said to be \df{dynamically equivalent} if for all $\vv y_i \in V \cup V'$, we have \eqn{ \label{eq:DE} \sum_{\vv y_i \to \vv y_j \in E} \!\! \kappa_{ij} (\vv y_j - \vv y_i) = \sum_{\vv y_i \to \vv y_j \in E'} \!\! \kappa'_{ij} (\vv y_j - \vv y_i). } We call such mass-action systems \df{realizations} of the associated dynamical system. \end{defn} The sums above may be over an empty set, for example on the left-hand side of \eqref{eq:DE} if $\vv y_i \not\in V$; in such cases, we take the sum to be the zero vector. In particular, a realization may contain as many vertices as one likes, as long as it includes those corresponding to the monomials that appear in the dynamical system. Finally, consider the case when only a subset of vertices satisfies \eqref{eq:DE}. \begin{defn} \label{def:DEsubset} Let $(G,\vv\kappa)$ and $(G', \vv\kappa')$ be mass-action systems and $V_0 \subseteq V \cup V'$. The mass-action systems are said to be \df{dynamically equivalent on $V_0$} if for all $\vv y_i \in V_0$, we have \eq{ \sum_{\vv y_i \to \vv y_j \in E} \!\! \kappa_{ij} (\vv y_j - \vv y_i) = \sum_{\vv y_i \to \vv y_j \in E'} \!\! \kappa'_{ij} (\vv y_j - \vv y_i).
} \end{defn} \section{On unique weakly reversible, deficiency zero realization} The goal of this paper is to prove that the dynamical system associated with a mass-action system admits at most one weakly reversible, deficiency zero realization, which we call a {$\mrm{WR}_0$\ realization}. This problem has been solved when there is only one linkage class~\cite{Csercsik2011ParametricUO, CraciunJohnstonSzederkenyiTonelloTothYu2020}. Here, we consider the problem in full generality. In \Cref{sec:nec}, we demonstrate the necessity of weak reversibility and deficiency zero through examples. Then in \Cref{sec:lem}, we establish some restrictions on the network structure of a $\mrm{WR}_0$\ realization. Finally, we prove that weak reversibility and deficiency zero guarantee uniqueness of the realization in \Cref{sec:thm}. \subsection{Necessary conditions for uniqueness of $\mrm{WR}_0$\ realization} \label{sec:nec} To show that weak reversibility and deficiency zero are necessary for uniqueness, we consider three examples. The first, shown in \Cref{fig:NotWR}, involves two realizations that have zero deficiency but are not weakly reversible. \Cref{fig:NotD0-1,fig:NotD0-2} show weakly reversible and dynamically equivalent systems, some of which have positive deficiency. \Cref{fig:NotD0-1} illustrates when the deficiency is positive because of an affinely \emph{dependent} linkage class, while \Cref{fig:NotD0-2} illustrates when it is because of linear \emph{dependence} between stoichiometric subspaces of linkage classes. \begin{ex} \label{ex:NotWR} The dynamically equivalent systems with zero deficiency shown in \Cref{fig:NotWR} share the same associated dynamical system, namely, \eq{ \dot{x} &= 1 + 2y^2, \\ \dot{y} &= 1 - 2y^2. } Neither of the mass-action systems is weakly reversible, demonstrating that weak reversibility is necessary for a unique $\mrm{WR}_0$\ realization.
Note that the monomials appearing in the associated dynamical system must be source vertices in the realization. \end{ex} \begin{figure} \caption{Two dynamically equivalent mass-action systems that have zero deficiency but are not weakly reversible. Source vertices are shown in blue while targets are in gray.} \label{fig:NotWR} \end{figure} \begin{ex} \label{ex:NotD0-1} The dynamically equivalent, weakly reversible mass-action systems shown in \Cref{fig:NotD0-1} share the associated dynamical system \eq{ \dot{x} &= 2 - 2x^2y^2, \\ \dot{y} &= 2 - 2x^2y^2. } The network in \Cref{fig:NotD0-1b} has positive deficiency, demonstrating that $\delta = 0$ is necessary for a unique $\mrm{WR}_0$\ realization. The vertices in \Cref{fig:NotD0-1b} are \emph{not} affinely independent. As we shall see in \Cref{thm:def0}, this is one of two causes for a positive deficiency. \end{ex} \begin{figure} \caption{Two dynamically equivalent mass-action systems that are weakly reversible, but the deficiency of the network in (b) is positive.} \label{fig:NotD0-1b} \label{fig:NotD0-1} \end{figure} \begin{ex} Finally, the dynamically equivalent, weakly reversible mass-action systems shown in \Cref{fig:NotD0-2} share the associated dynamical system \eq{ \dot{x} &= 2 - x + x^2 - 2x^3. } However, both networks have a positive deficiency, demonstrating that $\delta = 0$ is necessary for a unique realization. In \Cref{fig:NotD0-2a}, the stoichiometric subspaces of the linkage classes are \emph{not} linearly independent. As we shall see in \Cref{thm:def0}, this is another cause for a positive deficiency. \end{ex} \begin{figure} \caption{Two dynamically equivalent mass-action systems that are weakly reversible, but the deficiency of each network is positive.} \label{fig:NotD0-2a} \label{fig:NotD0-2} \end{figure} \subsection{Network structure of $\mrm{WR}_0$\ realizations} \label{sec:lem} There are at least three restrictions on the network structure of a $\mrm{WR}_0$\ realization.
First, deficiency zero can be characterized by affinely independent linkage classes and linearly independent stoichiometric subspaces of the linkage classes. Second, a $\mrm{WR}_0$\ realization uses the minimal number of vertices, precisely those corresponding to the monomials that appear explicitly in the dynamical system. Finally, if there are two $\mrm{WR}_0$\ realizations, they must have the same number of linkage classes. We address each of these assertions below. If the deficiency $\delta$ of the network is zero, then the deficiency $\delta_i$ of each linkage class is also zero, and $\delta = \sum_i \delta_i$. The former implies that the stoichiometric subspace $S(V_i)$ of each linkage class is of dimension $|V_i| - 1$, i.e., the vertices in $V_i$ are affinely independent. The latter implies that the stoichiometric subspaces of the linkage classes are linearly independent. Therefore, we have the following theorem. \begin{thm}[\protect{\cite{feinberg2019foundations, CraciunJohnstonSzederkenyiTonelloTothYu2020}}] \label{thm:def0} The deficiency of a reaction network is zero if and only if \begin{enumerate}[label={(\roman*)}] \item the network has affinely independent linkage classes, and \item the stoichiometric subspaces of the linkage classes are linearly independent. \end{enumerate} \end{thm} When it comes to weakly reversible realizations, any vertices for which the net reaction vector is zero can be removed while maintaining dynamical equivalence~\cite{CraciunJinYu2019}. Indeed, a weakly reversible realization exists if and only if one exists using only the vertices that appear in the monomials of the differential equations (after simplification). Moreover, each additional vertex not coming from these monomials increases the deficiency by one.
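The role of zero net reaction vectors can be illustrated concretely: a source vertex $\vv y$ contributes $\vv x^{\vv y} \vv w_{\vv y}$ to the right-hand side of \eqref{eq:mas}, so deleting the outgoing edges of a vertex with $\vv w_{\vv y} = \vv 0$ leaves the associated ODE unchanged. A one-species Python sketch (the network is our own illustrative choice):

```python
# A source vertex whose net reaction vector vanishes contributes
# x^y * w_y = 0 to the right-hand side of the ODE, so deleting its
# outgoing edges preserves dynamical equivalence.  (Illustrative
# 1-species network, not taken from the text.)

def rhs(kappa, x):
    """Right-hand side of the 1-species mass-action ODE."""
    return sum(k * x**s * (t - s) for (s, t), k in kappa.items())

# Vertex 1 has net reaction vector 1*(0 - 1) + 1*(2 - 1) = 0.
full = {(0, 1): 3.0, (1, 0): 1.0, (1, 2): 1.0}
pruned = {(0, 1): 3.0}  # outgoing edges of vertex 1 removed

for x in [0.3, 1.0, 1.7]:
    assert abs(rhs(full, x) - rhs(pruned, x)) < 1e-12
```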
\begin{prop}[\protect{\cite[Theorems 4.8 and 4.12]{CraciunJinYu2019}}] \label{prop:V} The vertices of any $\textit{WR}_\textit{0}$ realization of $\dot{\vv x} = \vv f(\vv x)$ are precisely the exponents in the monomials of $\vv f(\vv x)$ after simplification. \end{prop} Once the set of vertices is fixed, finding realizations (satisfying certain constraints like weak reversibility, minimal deficiency, or complex-balancing) is relatively simple, and algorithms based on optimization techniques exist. For example, see \cite{JohnstonSiegelSzederkenyi2013, RudanSzederkenyiKatalinPeni2014}, or \cite{Szederkenyi2012_Toolbox} for a MATLAB implementation. Finally, we remark that the number of linkage classes in any $\mrm{WR}_0$\ realization is also fixed. Recall that the deficiency of a network $G$ is $\delta_G = |V| - \ell_G - \dim S_G$, where $|V|$ is the number of vertices in $G$, $\ell_G$ is the number of linkage classes, and $S_G$ is the stoichiometric subspace. We already noted in \Cref{prop:V} that $|V|$ is the number of distinct monomials in the differential equations after simplification, hence the same for all dynamically equivalent $\mrm{WR}_0$\ realizations. If $(G, \vv\kappa)$ and $(G',\vv\kappa')$ are two such realizations, they share the same associated dynamical system and thus the same set of positive steady states $Z$, whose codimension is $\dim S_G$~\cite{HornJackson1972}. Therefore, if $\delta_G = \delta_{G'} = 0$, then \eq{ \ell_G = |V| - \dim S_G = |V| - \codim Z = \ell_{G'}. } In other words, the realizations have the same number of linkage classes. \begin{prop}\label{prop:ell} In any $\textit{WR}_\textit{0}$ realization of $\dot{\vv x} = \vv f(\vv x)$, the number of linkage classes is given by $|V| - \codim Z$, where $|V|$ is the number of distinct monomials in $\vv f(\vv x)$ after simplification, and $Z$ is the set of positive steady states.
\end{prop} \subsection{Proof of uniqueness of $\textbf{WR}_\textbf{0}$ realization} \label{sec:thm} The uniqueness of a $\mrm{WR}_0$\ realization with a single linkage class follows immediately from \Cref{thm:def0}. This case was first proved in \cite{Csercsik2011ParametricUO} using linear algebraic methods; a more geometric proof can be found in \cite{CraciunJohnstonSzederkenyiTonelloTothYu2020}. \begin{cor} \label{cor:l=1} Any weakly reversible, deficiency zero realization with a single linkage class is unique. \end{cor} \begin{proof} Suppose that two $\mrm{WR}_0$\ realizations $(G, \vv\kappa)$ and $(G', \vv\kappa')$ are dynamically equivalent, i.e., for every $\vv y_i \in V = V'$, we have \eq{ \sum_{\vv y_j \neq \vv y_i} \kappa_{ij} (\vv y_j - \vv y_i) = \sum_{\vv y_j \neq \vv y_i} \kappa'_{ij} (\vv y_j - \vv y_i) , } where $\kappa_{ij}$, $\kappa'_{ij} \geq 0$ are the appropriate rate constants from the realizations (set to zero if no edge is present). Because the vertices are affinely independent, the only solution to the linear equation \eq{ \sum_{\vv y_j \neq \vv y_i} \left(\kappa_{ij} - \kappa'_{ij}\right) (\vv y_j - \vv y_i) = \vv 0 } is $\kappa_{ij} = \kappa'_{ij}$ for all $j \neq i$. This holds for every source vertex $\vv y_i$; hence $(G,\vv\kappa) = (G', \vv\kappa')$. \end{proof} \begin{figure} \caption{Two dynamically equivalent mass-action systems. Each linkage class of (a) is properly contained in a linkage class of (b). Vertices in (b) are highlighted according to the partitioning by the linkage classes of (a).} \label{fig:contain} \end{figure} In the case of multiple linkage classes, there could (in theory at least) be different ways to partition the vertex set into linkage classes while maintaining dynamical equivalence. For example, the two systems in \Cref{fig:contain} are dynamically equivalent and weakly reversible.
While the linkage classes of (a) are \eq{ V_1 \sqcup V_2 = \{ \cf{0}, \, \cf{2X} + \cf{2Y} \} \sqcup \{ \cf{2X}, \, \cf{2Y} \}, } the network (b) has only one linkage class. Note that $V_1$ is properly contained in the linkage class of (b), which has affinely \emph{dependent} vertices and deficiency one. \begin{figure} \caption{Two dynamically equivalent mass-action systems. No linkage class of~(a) is properly contained in any linkage class of~(b), and vice versa. Vertices in~(b) are highlighted according to the partitioning by the linkage classes of~(a).} \label{fig:exchange} \end{figure} As a second example, consider the dynamically equivalent systems in \Cref{fig:exchange}. Each of the linkage classes of (a), \eq{ V_1 = \{ \cf{0}, \, \cf{3X} \} \quad \text{and} \quad V_2 = \{ \cf{X}, \, \cf{2X} \}, } is split between different linkage classes of (b), with $\cf{0}$ and $\cf{2X}$ belonging to one linkage class, while $\cf{X}$ and $\cf{3X}$ belong to another. Note that the systems shown in \Cref{fig:exchange} have linkage classes that generate linearly \emph{dependent} stoichiometric subspaces. The two networks also have deficiency one. The examples in \Cref{fig:contain,fig:exchange} illustrate that, at least in principle, vertices can be arranged into different linkage classes while maintaining dynamical equivalence. If the partitions of the vertices into linkage classes are identical between two $\mrm{WR}_0$\ realizations, then, treating each linkage class as a mass-action system with one connected component, we can conclude uniqueness from \Cref{cor:l=1}. In the remainder of this section, we prove that the situations in \Cref{fig:contain,fig:exchange} are inconsistent with deficiency zero. More precisely, \Cref{lem:contain} shows that if a linkage class is properly contained in a linkage class of another realization (the situation in \Cref{fig:contain-b}), then the latter linkage class cannot be affinely independent.
\Cref{lem:exchange} shows that if a linkage class is split between different linkage classes of another realization (the scenario of \Cref{fig:exchange}), then the stoichiometric subspaces of the latter's linkage classes cannot be linearly independent. Although the following lemmas repeatedly refer to a reaction network $G$ with one linkage class, we have in mind $G$ as one connected component of a larger reaction network. \begin{lem} \label{lem:contain} Let $(G,\vv\kappa)$ be a mass-action system with one linkage class. Let $(G',\vv\kappa')$ be a weakly reversible mass-action system with one linkage class such that $V \subsetneq V'$. If they are dynamically equivalent on $V$, the vertices of $G'$ cannot be affinely independent. \end{lem} \begin{proof} Since $G'$ is strongly connected and $V \subsetneq V'$, there exist $\vv y_0 \in V$ and $\vv y'\in V'\setminus V$ such that $\vv y_0 \to \vv y'\in E'$. The rate constant for this reaction in $(G', \vv\kappa')$ is non-zero. Dynamical equivalence at $\vv y_0$ demands that \eq{ \vv 0 &= \sum_{\vv y'_i \in V'\setminus V} \! \kappa'_i (\vv y'_i - \vv y_0) + \sum_{\vv y_i \in V} (\kappa'_i - \kappa_i) (\vv y_i - \vv y_0), } where $\kappa_i$, $\kappa'_i \geq 0$ are the appropriate rate constants of $\vv y_0 \to \vv y_i$ in either $G$ or $G'$. If the vertices of $G'$ were affinely independent, the coefficients in the above equation would all be zero. In particular, $\kappa'_i = 0$ whenever $\vv y'_i \not\in V$, contradicting the existence of a reaction in $G'$ from $V$ to $V'\setminus V$. \end{proof} Recall that the stoichiometric subspace $S$ of a reaction network $G$ is the linear span of all reaction vectors in $G$. If $G$ has a single linkage class, then $S = S(V)$, where $S(V)$ is the span of vectors that point between vertices. The following lemma characterizes the stoichiometric subspace of a weakly reversible mass-action system $(G,\vv\kappa)$ in terms of the net reaction vectors.
\begin{lem} \label{lem:netvect} Let $(G,\vv\kappa)$ be a mass-action system with one linkage class. For each $\vv y_i \in V$, let $\vv w_i$ be the net reaction vector from $\vv y_i$, and let $S$ be the stoichiometric subspace. Then we have: \begin{enumerate}[label={(\roman*)}] \item $\Span \{ \vv w_i\}_{i=1}^m \subseteq S$. \item If $G$ is weakly reversible, then $V$ is the set of \emph{source} vertices, and $\Span\{ \vv w_i\}_{i=1}^m = S$. \item If $G$ is weakly reversible and deficiency zero, then any $m-1$ vectors from the set $\{\vv w_i\}_{i=1}^m$ are linearly independent, i.e., they form a basis for $S$. \end{enumerate} \end{lem} \begin{proof} By definition, a net reaction vector is \eq{ \vv w_i = \sum_{\vv y_j \in V} \kappa_{ij} (\vv y_j - \vv y_i), } where $\kappa_{ij} > 0$ if $\vv y_i \to \vv y_j \in E$ and $\kappa_{ij} = 0$ otherwise. This in turn implies that each $\vv w_i$ is a vector in the stoichiometric subspace $S$. Thus, $\Span \{ \vv w_i\}_{i=1}^m \subseteq S$. Suppose $G$ is weakly reversible, and suppose for a contradiction that the net reaction vectors do not span all of $S$, i.e., $W = \Span \{ \vv w_i\}_{i=1}^m \subsetneq S$. Then there exists a non-zero vector $\vv v \in S$ that is perpendicular to $W$. Since $\vv v \neq \vv 0$, there exists a reaction $\vv y_i \to \vv y_j \in E$ such that $\vv v \cdot (\vv y_j - \vv y_i) \neq 0$. In particular, the set $\{ \vv v \cdot \vv y_i \}_{i=1}^m$ contains at least two distinct values. Let $V_{\max} = \{ \vv y_i \in V \colon \vv v \cdot \vv y_i = \max_j \vv v \cdot \vv y_j\}$ be the subset of vertices which maximizes the dot product. Weak reversibility implies that there exists an edge from a vertex in $V_{\max}$ to a vertex not in it. Without loss of generality, let $\vv y_1 \to \vv y_2$ be this edge, where $\vv y_1 \in V_{\max}$.
Note that for all $i=1$, $2,\ldots, m$, we have $\vv v \cdot (\vv y_i - \vv y_1) \leq 0$, so \eq{ \vv v \cdot \vv w_1 &= \sum_{\vv y_j \in V} \kappa_{1j} \vv v \cdot (\vv y_j - \vv y_1) \leq \kappa_{12} \vv v \cdot (\vv y_2 - \vv y_1) < 0. } In other words, $\vv v$ is not perpendicular to $\vv w_1$, contradicting our assumption that $\vv v$ is perpendicular to $W$. Finally, suppose that $G$ is $\mrm{WR}_0$. Let $\mm W \in \ensuremath{\mathbb{R}}^{n\times m}$ be the matrix with $\vv w_i$ as its $i$th column. By (ii) of the lemma, the range of $\mm W$ is the stoichiometric subspace $S$, which is of dimension $m - 1$ because $\delta = 0 = m - 1 - \dim S$. In other words, the rank of $\mm W$ is $m-1$, thus the dimension of $\ker \mm W$ is 1. Since $G$ is weakly reversible, the mass-action system $(G,\vv\kappa)$ admits a positive steady state $\vv x$~\cite{Boros2019}. Rearranging the steady state equation by first summing over the vertices, we obtain the following linear equation involving the net reaction vectors: \eq{ \vv 0 &= \sum_{\vv y_i \in V} \vv x^{\vv y_i} \!\!\!\sum_{\vv y_i \to \vv y_j \in E} \kappa_{ij} (\vv y_j - \vv y_i) = \sum_{\vv y_i \in V} \vv x^{\vv y_i} \vv w_i. } In particular, the vector $\vv \alpha = (\vv x^{\vv y_1}, \vv x^{\vv y_2}, \ldots, \vv x^{\vv y_m})^\top$ has strictly positive coordinates and spans $\ker \mm W$. Therefore, any non-zero vector in $\ker \mm W$ cannot have a zero as one of its components. This implies that any choice of $m-1$ columns of $\mm W$ forms a linearly independent set: a non-trivial linear relation among $m-1$ of the columns would yield a non-zero vector in $\ker \mm W$ with a zero in the remaining component. These $m-1$ columns therefore span $\ran \mm W = S$. \end{proof} \begin{lem} \label{lem:exchange} Let $(G, \vv\kappa)$ be a $\textit{WR}_\textit{0}$ realization with one linkage class. Let $(G',\vv\kappa')$ be a mass-action system with $V \subseteq V'$, and suppose that the vertices in $V$ are split between at least two linkage classes of $G'$.
If these systems are dynamically equivalent on $V$, then the stoichiometric subspaces of the linkage classes of $G'$ cannot be linearly independent. \end{lem} \begin{proof} Partition $V' = V'_1 \sqcup V'_2 \sqcup \cdots \sqcup V'_\ell$ according to the linkage classes of $G'$. For each $p=1$, $2,\ldots, \ell$, let $V_p = V'_p \cap V$. Without loss of generality (by discarding linkage classes that do not intersect $V$), we may assume that each $V_p \neq \emptyset$. For each $\vv y_i \in V$, let $\vv w_i$ be its net reaction vector, and let $S_p = \Span\{ \vv w_i \colon \vv y_i \in V_p\}$ be the span of the net reaction vectors from $V_p$. By \hyperref[lem:netvect]{\Cref{lem:netvect}(i)} applied to the connected component $V'_p$, we know that $S_p \subseteq S(V'_p)$. However, applying \hyperref[lem:netvect]{\Cref{lem:netvect}(iii)} to all of $V$ implies that $S_1$ is not linearly independent of $S_2 + S_3 + \cdots + S_\ell$: the unique (up to scaling) linear relation among $\{\vv w_i\}_{i=1}^m$ has all coefficients non-zero, so it mixes vectors from $V_1$ with vectors from the other parts. Hence $S(V'_1)$, $S(V'_2), \ldots, S(V'_\ell)$ are not linearly independent. \end{proof} We now prove uniqueness of $\mrm{WR}_0$\ realizations for networks with any number of linkage classes. \begin{thm} \label{thm:WR} Any weakly reversible, deficiency zero realization is unique. In other words, if a dynamical system admits several different mass-action realizations, then at most one of them can be $\textit{WR}_\textit{0}$. \end{thm} \begin{proof} Let $(G,\vv\kappa)$ be a $\mrm{WR}_0$\ realization. Suppose that $(G',\vv\kappa')$ is a dynamically equivalent $\mrm{WR}_0$\ realization. As noted in \Cref{prop:V,prop:ell}, the vertices of $\mrm{WR}_0$\ realizations are determined by the monomials that appear explicitly in the associated system of differential equations, and the number of linkage classes is the same. Moreover, each linkage class has zero deficiency. Partition the vertices by the linkage classes of $G$ as $V = V_1 \sqcup V_2 \sqcup \cdots \sqcup V_{\ell}$.
Similarly partition the vertices by the linkage classes of $G'$, as in $V = V'_1 \sqcup V'_2 \sqcup \cdots \sqcup V'_\ell$. If any of the linkage classes share the same set of vertices, i.e., $V_i = V'_j$ for some $i$, $j$, it follows from Corollary \ref{cor:l=1} that this linkage class in $(G,\vv\kappa)$ is identical in structure and rate constants to that of $(G', \vv\kappa')$. We proceed by induction on the number of differing linkage classes. Without loss of generality, we assume that $\ell \geq 2$ and that there are no identical linkage classes between the two realizations. Recall from \Cref{thm:def0} that any deficiency zero realization has affinely independent linkage classes, and the stoichiometric subspaces of the linkage classes are linearly independent. Some linkage class of $G$, say $V_1$, meets some linkage class of $G'$, say $V'_j$; by our assumption $V_1 \neq V'_j$, i.e., either $V'_j \setminus V_1 \neq \emptyset$ or $V_1 \setminus V'_j \neq \emptyset$ (or both). Neither $V_1 \subsetneq V'_j$ nor $V'_j \subsetneq V_1$ can hold, since either inclusion would contradict the affine independence of the larger set of vertices by Lemma~\ref{lem:contain}. Hence it must be the case that the intersection is non-trivial, i.e., $V_1 \cap V'_j \neq \emptyset$, and the symmetric difference is also non-trivial, i.e., $V_1 \triangle V'_j \neq \emptyset$. This falls under the setup of \Cref{lem:exchange}; thus $S(V'_j)$ is not linearly independent from $S(V'\setminus V'_j)$. In other words, the stoichiometric subspaces of the linkage classes of $G'$ are not linearly independent, so $G'$ has positive deficiency, a contradiction. Therefore it must be that $V_1 = V'_j$, contradicting our assumption that the two realizations share no identical linkage class. \end{proof} \section{Discussion} \label{sec:concl} In this paper, we proved that any weakly reversible, deficiency zero ($\mrm{WR}_0$) realization of a mass-action system is unique, as conjectured in \cite{CraciunJohnstonSzederkenyiTonelloTothYu2020}.
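The linear-algebraic core of the argument, that for a $\mrm{WR}_0$ realization the net reaction matrix $\mm W$ has rank $m-1$ and a strictly positive kernel vector, can be illustrated numerically. The following sketch uses a hypothetical toy network (the cycle $A \to B \to C \to A$ with unit rate constants), which is not taken from the paper and serves only as an example:

```python
import numpy as np

# Hypothetical toy network: the cycle A -> B -> C -> A with unit rate constants
# (m = 3 complexes, one linkage class, deficiency = 3 - 1 - 2 = 0).
y = np.eye(3)                        # complexes A, B, C as exponent vectors
edges = [(0, 1), (1, 2), (2, 0)]     # the three reactions of the cycle

# net reaction vectors w_i = sum over y_i -> y_j of kappa_ij (y_j - y_i)
W = np.zeros((3, 3))
for i, j in edges:
    W[:, i] += y[j] - y[i]

rank = np.linalg.matrix_rank(W)
assert rank == 2                     # rank W = m - 1, so dim ker W = 1

# ker W is spanned by a strictly positive vector (here the steady state (1,1,1))
_, _, vt = np.linalg.svd(W)
kernel = vt[-1] / vt[-1, 0]          # normalize the first coordinate to 1
assert np.allclose(W @ kernel, 0)
assert np.all(kernel > 0)            # no zero components, as in the lemma
```

For this cycle the kernel vector is $(1,1,1)^\top$, matching the fact that $\vv x = (1,1,1)$ is a positive steady state.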
Since deficiency zero weakly reversible networks are {\em minimal} representations of mass-action systems, this provides a possible {\em Occam's razor} approach to network identification and parameter identification, neither of which has a unique solution in general~\cite{CraciunPantea2008}. In future work~\cite{paper_3_part_2}, we will use some of the approaches developed here to design an efficient algorithm for the identification of these networks. A similar approach may be used for the identification of network representations of lowest deficiency, allowing for wider applicability of classical results for networks with positive deficiency, such as the Deficiency One Theorem~\cite{feinberg2019foundations}. \end{document}
\begin{document} \title{Holomorphic representation of minimal surfaces in simply isotropic space} \author{Luiz C. B. da Silva } \institute{Da Silva, L. C. B. \at Department of Physics of Complex Systems,\\ Weizmann Institute of Science, Rehovot 7610001, Israel\\ \email{[email protected]}} \date{First published in [J. Geom. (2021) 112:35] by Springer Nature (fulltext available at the link \url{https://doi.org/10.1007/s00022-021-00598-z})} \maketitle \begin{abstract} It is known that minimal surfaces in Euclidean space can be represented in terms of holomorphic functions. For example, we have the well-known Weierstrass representation, where part of the holomorphic data is chosen to be the stereographic projection of the normal of the corresponding surface, and also the Bj\"orling representation, where one prescribes a curve on the surface and the unit normal along this curve. In this work, we are interested in the holomorphic representation of minimal surfaces in simply isotropic space, a three-dimensional space equipped with a rank 2 metric of index zero. Since the isotropic metric is degenerate, a surface normal cannot be unequivocally defined based on metric properties only, which leads to distinct definitions of an isotropic normal. As a consequence, this may also lead to distinct forms of a Weierstrass and of a Bj\"orling representation. Here, we show how to represent simply isotropic minimal surfaces in accordance with the choice of an isotropic surface normal.
\keywords{Simply isotropic space \and minimal surface \and holomorphic representation \and stereographic projection} \subclass{53A10 \and 53A35 \and 53C42} \end{abstract} \section{Introduction} It is well-known that minimal surfaces in Euclidean space can be parameterized in terms of holomorphic functions, which gives the so-called Weierstrass representation \cite{Barbosa-Colares1986,Weierstrass}: $\mathbf{x}(z)=\mathrm{Re}(\int^z\phi\,\mathrm{d} z)$, where $\phi$ is a holomorphic isotropic curve in $\mathbb{C}^3$ with no real periods, i.e., each $\phi_j$ is holomorphic, $\phi_1^2+\phi_2^2+\phi_3^2=0$, and each $\mathrm{Re}\int^z\phi_j\mathrm{d} z$ is path-independent. In addition, if we define $\phi_1=\frac{F}{2}(1-G^2)$, $\phi_2=\frac{\mathrm{i} F}{2}(1+G^2)$, and $\phi_3=FG$, the holomorphic function $G$ can be seen as the stereographic projection of the Gauss map of the minimal immersion $\mathbf{x}(z)$. (The function $F$ can be associated with the differential of the third coordinate, i.e., the differential of the height data.) An alternative to the Weierstrass representation is the Bj\"orling representation \cite{LopezMMJ2018,SchwarzCrelle1875}, which consists in the Cauchy problem for minimal surfaces, i.e., one prescribes a curve $c(s)$ and the unit normal $\mathbf{n}(s)$ along $c$. The corresponding minimal surface is parameterized by $\mathbf{x}(z)=\mathrm{Re}\int^z[c'(w)-\mathrm{i}\, \mathbf{n}(w)\times c'(w)]\mathrm{d} w$, where $c(w)$ and $\mathbf{n}(w)$ are analytic extensions of $c$ and $\mathbf{n}$, respectively. Alternatively, we may prescribe the initial curve together with the tangent plane. This latter version of the Bj\"orling representation has been already discussed in simply isotropic space \cite{StrubeckerAM1954}, which helps establish simply isotropic analogs of two theorems due to Schwarz concerning lines and planes of symmetries of minimal surfaces \cite{SchwarzCrelle1875}. (See also chapter 12 of \cite{Sachs1990}.)
Recently, Sato showed that simply isotropic minimal surfaces also admit a Weierstrass representation given by \cite{SatoArXiv2018}: $\mathbf{x}(u, v) = \textrm{Re} \int^{z} (F, \mathrm{i} F, 2FG) \mathrm{d} w$, $z=u+\mathrm{i} v\in U\subseteq\mathbb{C}$. When expressed in their normal form, i.e., as a graph, isotropic minimal surfaces are graphs of harmonic functions and, therefore, we can write $\mathbf{x}(z)=(z,\mathrm{Re}\,h)$, where $h$ is holomorphic. It is worth mentioning that Strubecker, for example in Ref. \cite{StrubeckerAMSUH1975}, p. 154 right after Eq. (23), refers to this representation as the isotropic analog of the Weierstrass representation. Sato's approach in fact gives $\mathbf{x}(z)=(z,\mathrm{Re}\,h)$ by setting $F(z)=1$ and choosing $G$ appropriately, but it is more general since we do not have to write the surface as a graph. {By adopting a Weierstrass representation instead of writing an isotropic minimal surface as a graph, a} natural question then arises: why should we use two holomorphic functions to represent isotropic minimal surfaces when we know that one is enough? Note that in Euclidean space we do have a reduction in the number of ``degrees of freedom", from three to two\footnote{Any conformal minimal immersion has harmonic coordinates and, consequently, each of the three coordinates can be locally seen as the real part of a holomorphic function on the plane.}. However, the Euclidean experience also teaches us that the advantage of a holomorphic representation lies in the ability to control key geometric information of minimal surfaces. The reduction in the number of holomorphic functions we need to represent minimal surfaces then follows as an extra. In the case of a Weierstrass representation, we control the (stereographic projection of the) unit normal. Then, we may also ask whether it is possible to interpret part of the holomorphic data of an isotropic minimal surface $M^2$ as the stereographic projection of its Gauss map.
Observe, however, that in simply isotropic space there is more than one choice of Gauss map, as recently emphasized in Ref. \cite{KelleciJMAA2021}. Therefore, to answer the previous question we must also specify what Gauss map we have in mind. Here, we are going to show that it is possible to find a Weierstrass representation for simply isotropic minimal surfaces such that part of their holomorphic data can be associated with the stereographic projection of either their parabolic normal or their minimal normal. (In this respect, we show that Sato's choice for the holomorphic representation is slightly related to the minimal normal. See Subsect. \ref{subsect::OtherChoicesWeierstRep} for a discussion of his motivations.) We also discuss the Bj\"orling representation and show that there are three ways of doing so. The remainder of this text is organized as follows. In Sect. 2, we present some background material on simply isotropic geometry. In Sect. 3, we present and study some properties of the stereographic projection in the simply isotropic space from a sphere of parabolic type to the plane. In Sect. 4, we present the main results of this work, namely the isotropic Weierstrass representation and its relation to the extrinsic geometry of isotropic minimal surfaces. Finally, in Sect. 5, we discuss the Cauchy problem for the minimal surface equation, i.e., the Bj\"orling representation. The last section contains our concluding remarks. \section{Geometric background: the simply isotropic space} The simply isotropic space $\mathbb{I}^3$ is an example of a Cayley-Klein geometry. More precisely, we start with the projective space and choose as the group of rigid motions those projectivities that leave the so-called absolute figure invariant. The space $\mathbb{I}^3$ is the geometry in affine space corresponding to the choice of an absolute figure composed of a plane and a degenerate quadric \cite{Sachs1990}. (See, e.g., Ref.
\cite{daSilvaTJM2020,PottmannACM2009} for texts in English.) Here, we shall adopt the metric viewpoint. In other words, let us denote by the \emph{simply isotropic space}, $\mathbb{I}^3$, the vector space $\mathbb{R}^3$ equipped with the degenerate metric \begin{equation} \langle u,v\rangle = u^1v^1+u^2v^2, \end{equation} where $u=(u^1,u^2,u^3)$ and $v=(v^1,v^2,v^3)$. The set of vectors $\{(0,0,u^3)\}$ that degenerate the isotropic metric is the set of \emph{isotropic vectors}. A plane containing an isotropic vector is called an \emph{isotropic plane}. On the set of isotropic vectors we use the secondary metric $\llangle u,v\rrangle=u^3v^3$. (Therefore, $\mathbb{I}^3$ is an example of a Cayley-Klein vector space \cite{StruveRM2005}.) The inner product induces an isotropic semi-norm in a natural way: $\Vert u\Vert=\sqrt{\langle u,u\rangle}$. In addition, we shall refer to the projection over the $xy$-plane as the \emph{top-view projection}, which is denoted here by $u=(u^1,u^2,u^3)\mapsto \tilde{u}\equiv(u^1,u^2,0)$. In the following, it will prove useful to use the Euclidean scalar product. Let us denote by $\cdot$ and $\times$ the inner and vector products in Euclidean space $\mathbb{E}^3$: $u\cdot v=u^1v^1+u^2v^2+u^3v^3$ and $u\times v=(u^2v^3-u^3v^2,-u^1v^3+u^3v^1,u^1v^2-u^2v^1)$. We are interested in surfaces $\mathbf{x}:(u^1,u^2)\in U\mapsto M\subset\mathbb{I}^3$ whose induced metric is non-degenerate. The \emph{admissible surfaces} are those surfaces $M^2$ that do not have any isotropic tangent plane. Therefore, $\vert\partial(x^1,x^2)/\partial(u^1,u^2)\vert\not=0$ and, consequently, every admissible surface can be reparameterized as a graph $(u^1,u^2)\mapsto (u^1,u^2,F(u^1,u^2))$, the so-called \emph{normal form}. Here, the induced first fundamental form reads $\mathrm{I}=(\mathrm{d} u^1)^2+(\mathrm{d} u^2)^2$, which implies that every surface in isotropic space is intrinsically flat.
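This intrinsic flatness is easy to confirm with a computer algebra system: whatever the height function, the induced isotropic metric of a surface in normal form is the identity. A minimal symbolic sketch (the height function below is an arbitrary illustrative choice):

```python
import sympy as sp

u1, u2 = sp.symbols('u1 u2', real=True)
F = sp.sin(u1) * sp.exp(u2)   # arbitrary smooth height function (illustrative)
x = sp.Matrix([u1, u2, F])    # admissible surface in normal form

def iso(a, b):
    # degenerate simply isotropic inner product <u,v> = u^1 v^1 + u^2 v^2
    return a[0]*b[0] + a[1]*b[1]

x1, x2 = x.diff(u1), x.diff(u2)
g = sp.Matrix([[iso(x1, x1), iso(x1, x2)],
               [iso(x2, x1), iso(x2, x2)]])

# induced first fundamental form: I = (du^1)^2 + (du^2)^2, for any F
assert g == sp.eye(2)
```

The third component of the tangent vectors simply never enters the computation, which is the degenerate metric at work.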
However, it is possible to introduce an extrinsic Gaussian curvature as the ratio between the determinants of the first and second fundamental forms. Indeed, the normal of a surface with respect to the metric is the vector field $\mathcal{N}=(0,0,1)$. Then, we may define the Christoffel symbols $\Gamma_{ij}^k$ and the coefficients of the second fundamental form $h_{ij}$ by the equation \[ \mathbf{x}_{ij} = \Gamma_{ij}^k\mathbf{x}_k+h_{ij}\mathcal{N}, \] where $\mathbf{x}_{i}=\partial \mathbf{x}/\partial u^i$, $\mathbf{x}_{ij}=\partial^2\mathbf{x}/\partial u^i\partial u^j$, and we are summing on repeated indices (from 1 to 2). Finally, the isotropic Gaussian and mean curvatures are respectively defined by \begin{equation} K = \frac{h}{g}=\frac{h_{11}h_{22}-h_{12}^2}{g_{11}g_{22}-g_{12}^2}\mbox{ and }H = \frac{g_{11}h_{22}-2g_{12}h_{12}+g_{22}h_{11}}{2(g_{11}g_{22}-g_{12}^2)}, \end{equation} where $g_{ij}=\langle \mathbf{x}_i,\mathbf{x}_j\rangle$ denotes the coefficients of the first fundamental form. The coefficients of the second fundamental form can be computed as \begin{equation} h_{ij}=\frac{\det(\mathbf{x}_1,\mathbf{x}_2,\mathbf{x}_{ij})}{\sqrt{g_{11}g_{22}-g_{12}^2}}=\mathbf{x}_{ij}\cdot\mathbf{N}_m,\,\mbox{ where }\mathbf{N}_m = \frac{\mathbf{x}_1\times \mathbf{x}_2}{\sqrt{g_{11}g_{22}-g_{12}^2}}. \end{equation} We shall refer to $\mathbf{N}_m$ as the \emph{minimal normal} since the corresponding shape operator $-\mathrm{d}\mathbf{N}_m$ is traceless \cite{KelleciJMAA2021}. Its determinant, however, gives the Gaussian curvature, $K=\det(-\mathrm{d}\mathbf{N}_m)$. \begin{remark} {Rigorously, the minimal normal $\mathbf{N}_m$ does not provide a proper Gauss map from which one can define a shape operator understood as a self-adjoint operator on the tangent planes. 
Therefore, the trace and determinant of $-\mathrm{d}\mathbf{N}_m$ are computed with the proviso that the tangent vector $\mathbf{x}_i$ is formally identified with $\mathbf{a}_i=\mathbf{x}_i\times\mathcal{N}$. (Please, see Subsection 2.1 and Remark 2.1 of \cite{KelleciJMAA2021}.)} \end{remark} If we insist on seeing $K$ and $H$ as the determinant and trace of a shape operator, we may introduce the so-called \emph{parabolic normal} $\xi$ defined by \cite{daSilvaJG2019} \begin{equation} \xi = \tilde{\mathbf{N}}_m+\frac{1}{2}\left(1-\Vert \tilde{\mathbf{N}}_m\Vert^2\right)\mathcal{N}. \end{equation} The parabolic normal $\xi$ takes values on the unit sphere of parabolic type $\Sigma^2=\{(x,y,z)\in\mathbb{I}^3:z=\frac{1}{2}(1-x^2-y^2)\}$. From the \emph{isotropic shape operator} $A=-\mathrm{d}\xi$, we can compute the Gaussian and mean curvatures as $K=\det A$ and $H=\mathrm{tr}\,A$ \cite{daSilvaJG2019,PavkovicJAZU1990}. In addition, we can alternatively compute the coefficients of the second fundamental form as $h_{ij}=\langle A(\mathbf{x}_i),\mathbf{x}_j\rangle$. \begin{remark} {In addition to spheres of parabolic type, we also have the so-called {\emph{spheres of cylindrical type}}, which are the metric spheres: $G(p,r)=\{x\in\mathbb{I}^3:\langle x-p,x-p\rangle=r^2\}$. These surfaces, however, are not admissible since all their tangent planes are isotropic. Therefore, we cannot use these spheres to define a Gauss map for admissible surfaces as done for the parabolic normal.} \end{remark} \subsection{Simply isotropic minimal surfaces} {We define minimal surfaces in simply isotropic space as those surfaces with $H=0$.} If $M^2$ is parameterized in its normal form, $\mathbf{x}(u^1,u^2)=(u^1,u^2,F(u^1,u^2))$, then the minimal and parabolic normals are given by $\mathbf{N}_m=(-F_1,-F_2,1)$ and $\xi=(-F_1,-F_2,\frac{1}{2}-\frac{1}{2}(F_1^2+F_2^2))$.
The first and second fundamental forms are $\mathrm{I}=\delta_{ij}\mathrm{d} u^i\mathrm{d} u^j$ and $\mathrm{II}=F_{ij}\mathrm{d} u^i\mathrm{d} u^j$, from which it follows that the Gaussian and mean curvatures are $K=F_{11}F_{22}-F_{12}^2$ and $H=\frac{1}{2}(F_{11}+F_{22})$. Therefore, \begin{theorem} Let $M^2\subset\mathbb{I}^3$ be an admissible simply isotropic minimal surface. Then $M^2$ is locally the graph of a harmonic function. \end{theorem} {In Euclidean space, the minimal surfaces are the critical points of the area functional, i.e., $H=0$ is the Euler-Lagrange equation of the problem $\min_M\int_M\mathrm{d} A$. The same cannot be done in simply isotropic space, since the area of a surface in the induced isotropic metric equals the area of its top view projection. In fact, if we fix the boundary curve $\gamma$, every surface $M^2$ with $\partial M^2=\gamma$ would be a critical point of the simply isotropic area functional. In the simply isotropic space, instead of the area computed from the isotropic metric, we may consider the so-called \emph{relative area} $\mathcal{O}^*$ \cite{Sachs1990,StrubeckerMZ1942}:} \begin{equation} \mathcal{O}^* = \int_{U}\det(\xi,\mathbf{x}_1,\mathbf{x}_2)\,\mathrm{d} u^1\mathrm{d} u^2 = \int_{U} \xi\cdot\mathbf{N}_m \sqrt{\det g_{ij}}\,\mathrm{d} u^1\mathrm{d} u^2 = \int_{U} \xi\cdot\mathbf{N}_m\mathrm{d} A. \end{equation} {If $M^2$ is parametrized in its normal form over $U$, then the relative area becomes $\mathcal{O}^*=\frac{1}{2}\int_U(1+F_1^2+F_2^2)\mathrm{d} u^1\mathrm{d} u^2$. Now, consider a normal variation of $M^2$: $\mathbf{x}_{\varepsilon}=\mathbf{x}+\varepsilon V\mathcal{N}=(u^1,u^2,F+\varepsilon V)$, where $V\vert_{\partial M^2}=0$.
The relative area as a function of $\varepsilon$ is then} \begin{eqnarray} \mathcal{O}^*(\varepsilon) & = & \frac{1}{2}\int_U [1+F_1^2+F_2^2+2\varepsilon(V_1F_1+V_2F_2)+\varepsilon^2(V_1^2+V_2^2)]\,\mathrm{d} u^1\mathrm{d} u^2\nonumber\\ & = & \mathcal{O}^*(0)-\varepsilon\int_U V(F_{11}+F_{22})\,\mathrm{d} u^1\mathrm{d} u^2+\frac{\varepsilon^2}{2}\int_U(V_1^2+V_2^2)\,\mathrm{d} u^1\mathrm{d} u^2\nonumber\\ & = & \mathcal{O}^*(0)-2\varepsilon\int_U VH\,\mathrm{d} A+\frac{\varepsilon^2}{2}\int_U(V_1^2+V_2^2)\,\mathrm{d} u^1\mathrm{d} u^2. \end{eqnarray} {Therefore, a surface $M^2$ is a critical point of the relative area functional if and only if $H=0$. Note, however, that although the Euler-Lagrange equation $H=0$ is a simply isotropic invariant, the relative area itself is not. (Note, in addition, that every simply isotropic minimal surface is stable, i.e., they all have positive second variation.)} {An example of a simply isotropic minimal surface is given by the helicoid, which is the graph of the harmonic function $F(x,y)=a \arctan\frac{y-y_0}{x-x_0}$. Since the helicoid is the only Euclidean minimal surface which is simultaneously the graph of a harmonic function, the helicoid is the only surface which is minimal in both $\mathbb{E}^3$ and $\mathbb{I}^3$. Further} examples of isotropic minimal surfaces can be provided by (i) looking for isotropic analogs of Euclidean minimal surfaces, such as the well-known Enneper and Scherk surfaces \cite{StrubeckerAMSUH1975,StrubeckerCrelle1954}; (ii) employing some sort of separation of variables, such as the Scherk surfaces, which correspond to solutions of the form $z=f(x)+g(y)$ or $x=g(y)+h(z)$, or the so-called affine factorable surfaces \cite{AydinTWMS2020}, which correspond to solutions of the form $z=f(x)g(y+ax)$ or $x=g(y+az)h(z)$ ($a$ constant); or (iii) looking for surfaces invariant by a 1-parameter group of isotropic isometries \cite{daSilvaMJOU2021,Sachs1990}.
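The helicoid example can be verified symbolically: its height function is harmonic, so $H=0$, while its isotropic Gaussian curvature is strictly negative. A minimal sketch (taking $x_0=y_0=0$ for simplicity):

```python
import sympy as sp

x, y, a = sp.symbols('x y a', real=True, positive=True)
F = a * sp.atan(y / x)        # helicoid height function, with x0 = y0 = 0

Fx, Fy = sp.diff(F, x), sp.diff(F, y)
Fxx, Fxy, Fyy = sp.diff(Fx, x), sp.diff(Fx, y), sp.diff(Fy, y)

H = sp.simplify((Fxx + Fyy) / 2)          # isotropic mean curvature of a graph
K = sp.simplify(Fxx * Fyy - Fxy**2)       # isotropic Gaussian curvature

assert H == 0                             # the helicoid is isotropic minimal
# K = -a^2/(x^2 + y^2)^2 < 0
assert sp.simplify(K + a**2 / (x**2 + y**2)**2) == 0
```

The same two-line computation of $H$ and $K$ applies to any surface given in normal form, since $\mathrm{II}=F_{ij}\,\mathrm{d} u^i\mathrm{d} u^j$ there.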
Here, we shall be interested in a generic representation of isotropic minimal surfaces in terms of holomorphic functions. \section{Stereographic projection in simply isotropic space} In order to associate the holomorphic data of a Weierstrass representation with a choice of a Gauss map, we first define the stereographic projection of either a unit sphere of parabolic type or a horizontal plane over the (top view) plane. The first will be related to the parabolic normal, since it takes values on a unit sphere of parabolic type, while the second projection will be related to the minimal normal, since it takes values on the horizontal plane $\{z=1\}$. Let $\Sigma^2$ be the isotropic unit sphere of parabolic type centered at the {origin\footnote{{By the center of the sphere $\Sigma^2$ we mean its focus.}}} \begin{equation} \Sigma^2 = \left\{(x,y,z)\in\mathbb{I}^3:z=\frac{1}{2}-\frac{1}{2}(x^2+y^2)\right\}. \end{equation} The North pole of $\Sigma^2$ is the point $N=(0,0,\frac{1}{2})$. (We may say that the South pole is the point of the sphere at infinity.) As in Euclidean space, we define the stereographic projection $\pi:\Sigma^2\setminus\{N\}\to\mathbb{C}^*$ by defining $\pi(p)$ as the intersection of the line connecting $N$ to $p$ with the $xy$-plane, which is identified with $\mathbb{C}$. (Here, $\mathbb{C}^*=\mathbb{C}\setminus\{0\}$.) The line $\ell$ connecting $N$ to $p=(p_1,p_2,p_3)\in\Sigma^2$ can be parameterized as $\ell:t\mapsto t(p-N)+N=(tp_1,tp_2,t(p_3-\frac{1}{2})+\frac{1}{2})$. Requiring the third coordinate of $\ell(t)$ to vanish gives \begin{equation} 0=t(p_3-\frac{1}{2})+\frac{1}{2}\Rightarrow t=\frac{1}{1-2p_3}. \end{equation} Then, the stereographic projection $\pi$ is given by \begin{equation} p\in\Sigma^2\setminus\{N\} \mapsto \pi(p)=\left(\frac{p_1}{1-2p_3},\frac{p_2}{1-2p_3}\right)=\left(\frac{p_1}{p_1^2+p_2^2},\frac{p_2}{p_1^2+p_2^2}\right).
\end{equation} On the other hand, given $\pi(p)=(x,y,0)\in\mathbb{C}^*$, the line {$\lambda$} connecting $N$ to $\pi(p)$ is ${\lambda}:t\mapsto (\pi(p)-N)t+N=(xt,yt,\frac{1}{2}(1-t))$. Imposing ${\lambda}(t)\in\Sigma^2$, gives \begin{equation} \frac{1}{2}-\frac{t}{2}=\frac{1}{2}-\frac{1}{2}[(xt)^2+(yt)^2]\Rightarrow t = t^2(x^2+y^2). \end{equation} Since $t\not=0$, we find $t=(x^2+y^2)^{-1}$. Thus, the inverse $\pi^{-1}$ of the stereographic projection is given by \begin{equation} (x+\mathrm{i} y)\in\mathbb{C}^* \mapsto \pi^{-1}(x+\mathrm{i} y)=\left(\frac{x}{x^2+y^2},\frac{y}{x^2+y^2},\frac{1}{2}-\frac{1}{2(x^2+y^2)}\right). \end{equation} Note that the origin $0=0+\mathrm{i}0$ would be sent by $\pi^{-1}$ to a point on $\Sigma^2$ at infinity, which we may see as the South pole $S$ of $\Sigma^2$: $S\equiv\pi^{-1}(0)$. In short, we can define \begin{definition}[Parabolic stereographic projection] Let $\Sigma^2$ be the unit sphere of parabolic type in $\mathbb{I}^3$ centered at the origin and $N=(0,0,\frac{1}{2})$ its North pole. The stereographic projection, $\pi$, of $\Sigma^2$ over the {$xy$-plane}, identified with $\mathbb{C}$, is the map $$ \begin{array}{ccccl} \pi & : & \Sigma^2\setminus\{N\} & \to & \mathbb{C} \\ & & (p_1,p_2,p_3) & \mapsto & \Big(\displaystyle\frac{p_1}{1-2p_3},\displaystyle\frac{p_2}{1-2p_3}\Big)=\displaystyle\frac{1}{p_1^2+p_2^2}(p_1,p_2). \\ \end{array} $$ In addition, its inverse is the map given by $$ \begin{array}{ccccl} \pi^{-1} & : & \mathbb{C}\setminus\{0\} & \to & \Sigma^2 \\ & & x+\mathrm{i} y & \mapsto & \Big(\displaystyle\frac{x}{x^2+y^2},\frac{y}{x^2+y^2},\frac{1}{2}-\displaystyle\frac{1}{2(x^2+y^2)}\Big). \\ \end{array} $$ \end{definition} Before studying the properties of the parabolic stereographic projection, let us define the stereographic projection of the plane $\Pi:z=1$ over the $xy$-plane. 
First, note that for a sphere of parabolic type of radius $R$, $\Sigma^2_R:z=\frac{R}{2}-\frac{1}{2R}(x^2+y^2)$, the corresponding stereographic projection is given by \begin{equation} \pi_R(p_1,p_2,p_3) = \Big(\frac{p_1}{1-\frac{2}{R}p_3},\frac{p_2}{1-\frac{2}{R}p_3}\Big)=\frac{R^2}{p_1^2+p_2^2}(p_1,p_2). \end{equation} In addition, for $R\gg1$, we have $\Sigma_R^2\sim\{z=\frac{R}{2}\}$ and $\pi_R(p)\sim \tilde{p}$. In other words, the sphere $\Sigma_R^2$ can be approximated by a plane parallel to the top view plane. This reasoning suggests defining the stereographic projection associated with the plane $\{z=1\}$ by the top view projection $\pi_{\infty}(p)=\tilde{p}$. \begin{definition}[Top view projection as a stereographic projection] Let $\Pi$ be the plane $\{(x,y,z):z=1\}$. The stereographic projection, $\pi_{\infty}$, of $\Pi$ over the {$xy$-plane}, identified with $\mathbb{C}$, is the map $\pi_{\infty}(p_1,p_2,p_3)=(p_1,p_2)$. In addition, its inverse is simply given by $\pi_{\infty}^{-1}(x+\mathrm{i} y)= (x,y,1)$. \end{definition} For the stereographic projection of the Euclidean sphere over the plane, $\pi_E$, it is well known that great and small circles are sent under $\pi_E$ to circles or lines in $\mathbb{C}$ and vice versa \cite{Ahlfors1979}. We have an analogous result in isotropic space. To prove this, we first investigate the image of spherical $r$-geodesics under the parabolic stereographic projection in order to single out the properties characterizing their image in $\mathbb{C}$ and, later, we investigate the image of generic plane curves on the parabolic sphere. (A curve on a surface $M^2$ is an $r$-geodesic if its acceleration vector is parallel to the parabolic normal of $M^2$ \cite{daSilvaJG2019,PavkovicJAZU1990}. It is worth mentioning that $r$-geodesics on a sphere of parabolic type come from the intersections with planes passing through its center \cite{daSilvaJG2019}.) 
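Before turning to the propositions, the closed forms of $\pi$ and $\pi^{-1}$ above can be sanity-checked numerically: $\pi^{-1}$ should land on the parabolic sphere $\Sigma^2$, and $\pi\circ\pi^{-1}$ should be the identity on $\mathbb{C}^*$. A short sketch (the sample points are arbitrary):

```python
import numpy as np

def pi(p):
    # parabolic stereographic projection of Sigma^2 \ {N} onto the xy-plane
    p1, p2, p3 = p
    return np.array([p1, p2]) / (1 - 2 * p3)

def pi_inv(w):
    # inverse projection, sending x + iy in C^* back to Sigma^2
    x, y = w
    r2 = x**2 + y**2
    return np.array([x / r2, y / r2, 0.5 - 0.5 / r2])

rng = np.random.default_rng(0)
for _ in range(100):
    w = rng.normal(size=2)
    p = pi_inv(w)
    # pi_inv lands on the parabolic unit sphere z = (1 - x^2 - y^2)/2
    assert np.isclose(p[2], 0.5 * (1 - p[0]**2 - p[1]**2))
    assert np.allclose(pi(p), w)     # pi o pi_inv = id on C \ {0}
```

The identity $1-2p_3=p_1^2+p_2^2$ on $\Sigma^2$ is exactly what makes the two expressions for $\pi(p)$ in the definition agree.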
\begin{proposition}\label{PropStereogProjRelGeod} A curve $\alpha$ is an $r$-geodesic in $\Sigma^2$ if and only if, under the stereographic projection $\pi$, it corresponds either to a straight line in $\mathbb{C}$ passing through the origin if $N\in\alpha$ or to a circle in $\mathbb{C}$ whose radius $R$ and center $\mathcal{C}$ satisfy $R^2=1+\mathrm{dist}(\mathcal{C},0)^2$ if $N\not\in\alpha$, where $N$ is the north pole of $\Sigma^2$. \end{proposition} \begin{proof} Since any $r$-geodesic $\alpha$ in $\Sigma^2$ comes from the intersection with a plane $\Pi$ passing through the center of $\Sigma^2$ \cite{daSilvaJG2019}, we can implicitly write $\alpha$ as \begin{equation}\label{eqImpEqParR-geod} \alpha:\left\{ \begin{array}{c} z = \frac{1}{2}-\frac{1}{2}(x^2+y^2) \\ Ax+By+Cz=0 \end{array} \right., \end{equation} where $(A,B,C)\not=0$. Note that $(x,y,z)\in\alpha$ is sent to $\pi(\alpha)=\frac{1}{x^2+y^2}(x,y)$. Substituting the first expression of Eq. (\ref{eqImpEqParR-geod}) into the second, one has $$2Ax+2By+C(1-x^2-y^2)=0.$$ If $C=0$, then $2Ax+2By=0\Rightarrow \frac{x}{x^2+y^2}A+\frac{y}{x^2+y^2}B=0$ and $\pi(\alpha)$ lies in a straight line in $\mathbb{C}$ passing through $0\in\mathbb{C}$. (Note that $0\not\in\pi(\alpha)$.) On the other hand, if $C\not=0$, then $2Ax/(x^2+y^2)+2By/(x^2+y^2)+C/(x^2+y^2)=C$. Here, $N\not\in\alpha$ and, in addition, writing $(X,Y)=\pi(\alpha)=\frac{1}{(x^2+y^2)}(x,y)$, it follows that $X^2+Y^2=(x^2+y^2)^{-1}$. Finally, $$C(X^2+Y^2)+2AX+2BY=C\Rightarrow(X+\frac{A}{C})^2+(Y+\frac{B}{C})^2=1+\frac{A^2+B^2}{C^2},$$ which is the circle of center $\mathcal{C}=(A/C,B/C)$ and radius $R=\sqrt{1+\mathrm{dist}(\mathcal{C},0)^2}$. Conversely, given $\ell:AX+BY=0$, we may parameterize $\ell^*\equiv\ell-\{0\}$ by $t\mapsto (t,-At/B)$, where we are assuming, without loss of generality, that $B\not=0$.
Finally, $\ell^*$ is sent by $\pi^{-1}$ into $$t\mapsto \pi^{-1}(\ell^*(t))=\Big(\frac{B^2}{t(A^2+B^2)},-\frac{AB}{t(A^2+B^2)},\frac{1}{2}-\frac{B^2}{2t^2(A^2+B^2)}\Big),$$ which lies in the plane $\Pi:Ax+By+Cz=0$ with $C=0$. On the other hand, given a circle $c:(X-a)^2+(Y-b)^2=R^2=1+a^2+b^2$ in $\mathbb{C}$, we may parameterize it by $\theta\mapsto (a+R\cos\theta,b+R\sin\theta).$ Applying $\pi^{-1}$ gives $$\theta\mapsto\pi^{-1}(c(\theta))=\Big(\frac{a+R\cos\theta}{X^2+Y^2},\frac{b+R\sin\theta}{X^2+Y^2},\frac{1}{2}-\frac{1}{2(X^2+Y^2)}\Big),$$ where $X^2+Y^2=1+2(a^2+b^2)+2aR\cos\theta+2bR\sin\theta$. Direct computations show that $\pi^{-1}(c(\theta))$ lies in the plane $\Pi:-ax-by+z=0$. \qed \end{proof} \begin{proposition} A curve $\alpha$ in $\Sigma^2$ is a small circle, i.e., a curve obtained from the intersection of $\Sigma^2$ with a plane, if and only if it is sent by the parabolic stereographic projection $\pi$ into either a circle in $\mathbb{C}$ if $N\not\in\alpha$ or into a straight line in $\mathbb{C}$ if $N\in\alpha$, where $N$ is the north pole of $\Sigma^2$. \end{proposition} \begin{proof} Since any plane curve $\alpha$ in $\Sigma^2$ comes from the intersection with a plane $\Pi$ with normal $(A,B,C)\not=0$, we can implicitly write $\alpha$ as \begin{equation}\label{eqParSphericalPlaneCurves} \alpha:\left\{ \begin{array}{c} z = \frac{1}{2}-\frac{1}{2}(x^2+y^2) \\ Ax+By+Cz+D=0 \end{array} \right.. \end{equation} Note that if $(x,y,z)\in\alpha$, then $\pi(\alpha)=(\frac{x}{x^2+y^2},\frac{y}{x^2+y^2})$. Now, substituting the first expression of Eq. (\ref{eqParSphericalPlaneCurves}) into the second, one has \begin{equation}\label{EqIntParSphrWithPlaneSingleEq} 2Ax+2By+C(1-x^2-y^2)+2D=0. \end{equation} If $C=0$, then $2Ax+2By+2D=0$. Since $C=0$, $N\in\alpha$ if and only if $D=0$. Then, $Ax+By=0$ and $\pi(\alpha)$ lies in a straight line passing through $0$.
Otherwise, if $N\not\in\alpha$, then $Ax+By+D=0\Rightarrow A\frac{x}{x^2+y^2}+B\frac{y}{x^2+y^2}+\frac{D}{x^2+y^2}=0$. Writing $(X,Y)=\pi(\alpha)=\frac{1}{(x^2+y^2)}(x,y)$, it follows that $X^2+Y^2=(x^2+y^2)^{-1}$ and $\pi(\alpha)$ lies in the circle $$\Big(X+\frac{A}{2D}\Big)^2+\Big(Y+\frac{B}{2D}\Big)^2=\frac{A^2+B^2}{4D^2}>0.$$ On the other hand, if $C\not=0$, then $(C+2D)+2Ax+2By=C(x^2+y^2)$. Here, $N\in\alpha$ if and only if $C+2D=0$. So, if $N\in\alpha$, then $A\frac{x}{x^2+y^2}+B\frac{y}{x^2+y^2}=C$ and $\pi(\alpha)$ lies in a straight line not passing through the origin. Otherwise, if $N\not\in\alpha$, then writing $(X,Y)=\pi(\alpha)=\frac{1}{(x^2+y^2)}(x,y)$, it follows that Eq. (\ref{EqIntParSphrWithPlaneSingleEq}) leads to $(C+2D)(X^2+Y^2)+2AX+2BY=C$. Finally, $\pi(\alpha)$ lies in the circle $$\Big(X+\frac{A}{C+2D}\Big)^2+\Big(Y+\frac{B}{C+2D}\Big)^2=\frac{C}{C+2D}+\frac{A^2+B^2}{(C+2D)^2}>0,$$ where the right-hand side of the equation above has to be positive because $\Sigma^2\cap\Pi$ is neither empty nor a single point. Indeed, since $C\not=0$, $\Pi$ cannot be vertical and, consequently, there exists a point $p_0\in \Sigma^2$ at which $T_{p_0}\Sigma^2$ is parallel to $\Pi$. Since the implicit equation of $T_{p}\Sigma^2$ at any $p=(p_1,p_2,p_3)$ is $z-p_3=-p_1(x-p_1)-p_2(y-p_2)$, parallelism occurs at $p_0=(\frac{A}{C},\frac{B}{C},\frac{1}{2}-\frac{1}{2C^2}(A^2+B^2))$ since the equation of $T_{p_0}\Sigma^2$ is $Ax+By+Cz=\frac{1}{2C}(A^2+B^2+C^2)$. Finally, assuming without loss of generality that $C>0$, for the intersection $\alpha=\Sigma^2\cap\Pi$ to be non-empty, we should have $-D<\frac{1}{2C}(A^2+B^2+C^2)$, i.e., $A^2+B^2+C(C+2D)>0$. (Equality would occur when $\Pi=T_{p_0}\Sigma^2$ and, consequently, $\alpha$ would degenerate to a single point $\alpha=\{p_0\}$.)
Indeed, if two parallel planes $\Pi_i:ax+by+cz=d_i$ ($i=1,2$) have a normal $\mathbf{n}=(a,b,c)$ making an acute angle with $\mathcal{N}=(0,0,1)$, i.e., $c>0$, then $d_2>d_1$ implies that $\Pi_2$ is above $\Pi_1$ with respect to $\mathcal{N}$. For the converse, the reader may follow similar steps to those of the proof of Prop. \ref{PropStereogProjRelGeod} to show that circles/lines in $\mathbb{C}$ are sent under $\pi^{-1}$ to small circles in $\Sigma^2$. \qed \end{proof} \section{Weierstrass representation of simply isotropic minimal surfaces} In analogy with what happens in $\mathbb{E}^3$, if we apply the Laplace-Beltrami operator $\Delta_g$ to the parameterization $\mathbf{x}$ of a minimal surface $M^2$ in $\mathbb{I}^3$, we have $\Delta_g\mathbf{x}=2H\mathcal{N}$ \cite{SatoArXiv2018}. Therefore, if we parameterize $M^2$ with isothermal coordinates, i.e., $g_{11}=g_{22}$ and $g_{12}=0$, then the coordinate functions of $M^2$ are harmonic functions on the plane. (Remember, if $g_{ij}=F^2\delta_{ij}$, then $\Delta_g=\frac{1}{F^2} \Delta$, where $\Delta$ is the flat 2d Laplacian.) Identifying the Euclidean plane with $\mathbb{C}$, we can then parameterize $M^2$ using the real part of holomorphic functions. More precisely, we can parameterize $M^2$ by \cite{SatoArXiv2018} \begin{equation} \mathbf{x}(z) = \mathrm{Re}\left(\int_z\phi(w)\mathrm{d} w\right), \end{equation} where $\phi=(\phi_1,\phi_2,\phi_3)\in\mathbb{C}^3$ satisfies $\phi_1^2+\phi_2^2=0$ and $\vert\phi_1\vert^2+\vert\phi_2\vert^2\not=0$. The first condition guarantees that $\mathbf{x}$ above is an isothermal simply isotropic minimal immersion, while the second guarantees that $\mathbf{x}$ is admissible. Note that, due to the degenerate nature of the simply isotropic metric, the third coordinate $\phi_3$ is not functionally related to $\phi_1$ and $\phi_2$. In the following, we shall exploit this freedom to associate a Weierstrass representation with a given choice of a Gauss map.
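The two conditions on $\phi$, and the resulting harmonicity of the coordinate functions, can be checked on a concrete choice of holomorphic data. The sketch below uses the hypothetical curve $\phi=(1,\mathrm{i},-z)$ (so $\phi_2=\mathrm{i}\phi_1$ enforces $\phi_1^2+\phi_2^2=0$, and $\phi_3$ is chosen freely), purely for illustration:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
z = sp.symbols('z')

# Hypothetical holomorphic data (illustrative): phi_2 = i phi_1 forces
# phi_1^2 + phi_2^2 = 0, while phi_3 may be any holomorphic function.
phi = [sp.Integer(1), sp.I, -z]
assert sp.simplify(phi[0]**2 + phi[1]**2) == 0            # conformality
assert sp.simplify(abs(phi[0])**2 + abs(phi[1])**2) != 0  # admissibility

# x = Re \int phi dz, written in the real coordinates z = u + i v
X = [sp.re(sp.expand(sp.integrate(c, z).subs(z, u + sp.I*v))) for c in phi]

# each coordinate function of x is harmonic on the plane
for c in X:
    assert sp.simplify(sp.diff(c, u, 2) + sp.diff(c, v, 2)) == 0
```

Here $\mathbf{x}(u,v)=(u,-v,(v^2-u^2)/2)$, a graph of a harmonic function, as the theorem of the previous section requires.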
If we write $\mathbf{x}=\mathrm{Re}(\int\mathbf{\phi})=\frac{1}{2}(\int\mathbf{\phi}+\int\bar{\mathbf{\phi}})$, $\phi=(\phi_1,\phi_2,\phi_3)$, then using that $\partial_u=\partial_z+\partial_{\bar{z}}$ and $\partial_v=\mathrm{i}(\partial_z-\partial_{\bar{z}})$ \cite{Ahlfors1979}, $z=u+\mathrm{i} v$, we write the tangent vectors as \begin{equation} \mathbf{x}_u = \displaystyle\frac{\mathbf{\phi}+\bar{\mathbf{\phi}}}{2}=\mathrm{Re}(\mathbf{\phi})\mbox{ and } \mathbf{x}_v = -\displaystyle\frac{\mathbf{\phi}-\bar{\mathbf{\phi}}}{2\mathrm{i}}=-\mathrm{Im}(\mathbf{\phi}). \end{equation} Now, noticing that \begin{equation} \mathbf{x}_u\times\mathbf{x}_v = (\mathrm{Im}(\phi_2\bar{\phi}_3),\mathrm{Im}(\phi_3\bar{\phi}_1),\mathrm{Im}(\phi_1\bar{\phi}_2)), \end{equation} the minimal normal is \begin{equation} \mathbf{N}_m = \left(\frac{\mathrm{Im}(\phi_2\bar{\phi}_3)}{\mathrm{Im}(\phi_1\bar{\phi}_2)},\frac{\mathrm{Im}(\phi_3\bar{\phi}_1)}{\mathrm{Im}(\phi_1\bar{\phi}_2)},1\right). \end{equation} In addition, since $0=\phi_1^2+\phi_2^2=(\phi_2-\mathrm{i}\phi_1)(\phi_2+\mathrm{i}\phi_1)$, we can write $\phi_2=\pm\mathrm{i}\phi_1$. The minimal normal then becomes \begin{equation} \mathbf{N}_m = \left(\frac{\mathrm{Im}(\pm\mathrm{i}\phi_1\bar{\phi}_3)}{\mathrm{Im}(\mp\mathrm{i}\vert\phi_1\vert^2)},\frac{\mathrm{Im}(\phi_3\bar{\phi}_1)}{\mathrm{Im}(\mp\mathrm{i}\vert\phi_1\vert^2)},1\right) = \left(-\frac{\mathrm{Re}(\phi_3\bar{\phi}_1)}{\vert\phi_1\vert^2},\mp\frac{\mathrm{Im}(\phi_3\bar{\phi}_1)}{\vert\phi_1\vert^2},1\right). 
\end{equation} Finally, seeing the minimal normal $\mathbf{N}_m$ and the parabolic normal $\xi$ as maps taking values in $\mathbb{C}\times\mathbb{R}$, we can either write \begin{equation} \mathbf{N}_m = \left(-\frac{\phi_3}{\phi_1},1\right)\mbox{ and }\xi = \left(-\frac{\phi_3}{\phi_1},\frac{1}{2}-\frac{1}{2}\left\vert\frac{\phi_3}{\phi_1}\right\vert^2\right),\,\mbox{ if }\phi_2=\mathrm{i}\phi_1, \end{equation} or \begin{equation} \mathbf{N}_m = \left(-\frac{\bar{\phi}_3}{\bar{\phi}_1},1\right)\mbox{ and }\xi = \left(-\frac{\bar{\phi}_3}{\bar{\phi}_1},\frac{1}{2}-\frac{1}{2}\left\vert\frac{\phi_3}{\phi_1}\right\vert^2\right),\,\mbox{ if }\phi_2=-\mathrm{i}\phi_1. \end{equation} Note that there is freedom in the choice of the third coordinate of the complex curve $\phi$. Consequently, this allows us to choose the second holomorphic function $G$ to conform to the choice of Gauss map we have in mind. In the following, it will prove to be more convenient to choose $\phi_2=\mathrm{i}\phi_1$ when working with the minimal normal $\mathbf{N}_m$ and to choose $\phi_2=-\mathrm{i}\phi_1$ when working with the parabolic normal $\xi$. We should do that in order to have two holomorphic functions in the Weierstrass data instead of one holomorphic and another anti-holomorphic. On the one hand, choosing $\phi_2=\mathrm{i} \phi_1$ and requiring that the top view projection of the minimal normal $\mathbf{N}_m$ is part of the Weierstrass data, we can write for some holomorphic functions $F$ and $G$ \begin{equation}\label{eq::WeierstrassRepWithGtopviewNm} \phi = (F,\mathrm{i} F,-FG) \Rightarrow \tilde{\mathbf{N}}_m = G. \end{equation} From $\vert\phi_1\vert^2+\vert\phi_2\vert^2=2\vert F\vert^2$, it follows that $\mathbf{x}$ fails to be regular on the zeros of $F$. Since the zeros of a non-identically-vanishing holomorphic function are isolated, this also means that the singularities of a simply isotropic minimal immersion are isolated.
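Equation (\ref{eq::WeierstrassRepWithGtopviewNm}) can be checked numerically with arbitrary sample functions $F$ and $G$ (hypothetical choices below): for $\phi=(F,\mathrm{i} F,-FG)$, computing the minimal normal from $\mathbf{x}_u\times\mathbf{x}_v$ and normalizing the third component to $1$ gives top view $G$.

```python
import cmath

# Hypothetical sample data; F is kept away from zero on the sample points.
def F(z): return cmath.exp(z) + 2.0
def G(z): return z * z - 1j * z

for z in (0.3 + 0.7j, -1.2 + 0.1j, 2.0 - 0.5j):
    p1, p2, p3 = F(z), 1j * F(z), -F(z) * G(z)
    # Components of x_u x x_v, as in the cross-product formula above:
    n1 = (p2 * p3.conjugate()).imag
    n2 = (p3 * p1.conjugate()).imag
    n3 = (p1 * p2.conjugate()).imag
    top_view = complex(n1 / n3, n2 / n3)   # tilde N_m as a complex number
    assert abs(top_view - G(z)) < 1e-9
print("top view of the minimal normal equals G")
```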
On the other hand, choosing $\phi_2=-\mathrm{i} \phi_1$ and requiring that the stereographic projection of the parabolic normal $\xi$ is part of the Weierstrass data, we can write for some holomorphic functions $F$ and $G$ \begin{equation}\label{eq::WeierstrassRepWithGprojParNor} \phi = \left(F,-\mathrm{i} F,-\displaystyle\frac{F}{G}\right), \end{equation} which gives \begin{equation} \xi = \left(\frac{G}{\vert G\vert^2},\frac{1}{2}-\frac{1}{2\vert G\vert^2}\right)\Rightarrow \pi(\xi) = G. \end{equation} As before, we also have $\vert\phi_1\vert^2+\vert\phi_2\vert^2=2\vert F\vert^2$, from which it follows that $\mathbf{x}$ fails to be regular on the zeros of $F$. Conversely, on the one hand, suppose we have a meromorphic function $G$ and a holomorphic function $F$ defined on $M^2$ such that the zeros of $F$ coincide with the poles of $G$ in a way that a zero of order $m$ of $F$ corresponds to a pole of order $m$ of $G$. Then, $\phi_1=F$, $\phi_2=\mathrm{i} F$, and $\phi_3=-FG$ are holomorphic on $M^2$ and satisfy $\phi_1^2+\phi_2^2=0$ and $\vert\phi_1\vert^2+\vert\phi_2\vert^2>0$ outside the zeros of $F$. In addition, if $\phi_1,\phi_2,$ and $\phi_3$ have no real periods, we obtain a simply isotropic minimal immersion $\mathbf{x}:M^2\to\mathbb{I}^3$ such that $\tilde{\mathbf{N}}_m=G$. On the other hand, suppose we have holomorphic functions $G$ and $F$ defined on $M^2$ such that the zeros of $F$ coincide with the zeros of $G$ in a way that a zero of order $m$ of $F$ corresponds to a zero of order $m$ of $G$. Then, $\phi_1=F$, $\phi_2=-\mathrm{i} F$, and $\phi_3=-F/G$ are holomorphic on $M^2$ and satisfy $\phi_1^2+\phi_2^2=0$ and $\vert\phi_1\vert^2+\vert\phi_2\vert^2>0$ outside the zeros of $F$. In addition, if $\phi_1,\phi_2,$ and $\phi_3$ have no real periods, we obtain a simply isotropic minimal immersion $\mathbf{x}:M^2\to\mathbb{I}^3$ such that $\pi(\xi)=G$.
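An analogous numerical check for (\ref{eq::WeierstrassRepWithGprojParNor}), again with hypothetical sample functions $F$ and $G$: for $\phi=(F,-\mathrm{i} F,-F/G)$ the top view $q$ of the minimal normal equals $G/\vert G\vert^2$, and the inversion $q/\vert q\vert^2=\pi(\xi)$ recovers $G$.

```python
import cmath

# Sample data; G is kept away from zero on the sample points.
def F(z): return cmath.exp(z)
def G(z): return z + 2.0 + 1j

for z in (0.3 + 0.7j, -1.2 + 0.1j, 1.0 - 0.4j):
    p1, p2, p3 = F(z), -1j * F(z), -F(z) / G(z)
    n1 = (p2 * p3.conjugate()).imag
    n2 = (p3 * p1.conjugate()).imag
    n3 = (p1 * p2.conjugate()).imag
    q = complex(n1 / n3, n2 / n3)                  # top view of the normal
    assert abs(q - G(z) / abs(G(z)) ** 2) < 1e-9   # q = G / |G|^2
    assert abs(q / abs(q) ** 2 - G(z)) < 1e-9      # pi(xi) = G
print("pi(xi) recovers G")
```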
\subsection{Extrinsic geometry of simply isotropic minimal surfaces} Now, we shall focus on extrinsic geometric properties. For that, we must compute the second fundamental form of a minimal immersion $\mathbf{x}=\mathrm{Re}\int\phi$. Since the tangent vectors are \begin{equation}\label{eq::TangVecGenericHolRep} \mathbf{x}_u = \frac{\phi+\bar{\phi}}{2}=\mathrm{Re}(\phi)\mbox{ and }\mathbf{x}_v = -\frac{\phi-\bar{\phi}}{2\mathrm{i}}=-\mathrm{Im}(\phi), \end{equation} it follows that the second derivatives are \begin{equation}\label{eq::2ndDerGenericHolRep} \mathbf{x}_{uu} = \displaystyle\frac{\mathbf{\phi}'+\bar{\mathbf{\phi}}'}{2}=\mathrm{Re}(\mathbf{\phi}'),\,\mathbf{x}_{uv} = -\displaystyle\frac{\mathbf{\phi}'-\bar{\mathbf{\phi}}'}{2\mathrm{i}}=-\mathrm{Im}(\mathbf{\phi}'),\, \mathbf{x}_{vv} = -\mathrm{Re}(\mathbf{\phi}'). \end{equation} Finally, we are in a position to compute the second fundamental form and, from it, the Gaussian curvature of a simply isotropic minimal immersion. But, first, we need the following auxiliary result. \begin{lemma}\label{lem::1st2ndFFandKminComplexCurve} The first and second fundamental forms $\mathrm{I}$ and $\mathrm{II}$ and the Gauss curvature $K$ of the simply isotropic minimal immersion associated with the complex curve $\phi=(\phi_1,\phi_2=\pm\mathrm{i}\phi_1,\phi_3)$ are given by \begin{equation} \mathrm{I} = \vert \phi_1\vert^2(\mathrm{d} u^2+\mathrm{d} v^2), \end{equation} \begin{equation} \mathrm{II} = \mathrm{Re}\left[\phi_1\left(\frac{\phi_3}{\phi_1}\right)'\right](\mathrm{d} u^2-\mathrm{d} v^2)-2\,\mathrm{Im}\left[\phi_1\left(\frac{\phi_3}{\phi_1}\right)'\right]\mathrm{d} u\mathrm{d} v, \end{equation} and \begin{equation} K = - \left\vert\frac{1}{ \phi_1}\left(\frac{\phi_3}{\phi_1}\right)'\right\vert^2, \end{equation} respectively. \end{lemma} From Lemma \ref{lem::1st2ndFFandKminComplexCurve}, the proof of the next theorem follows straightforwardly.
\begin{theorem}\label{thr::1st2ndFFandKMinNormal} The first and second fundamental forms and the Gauss curvature of the minimal immersion associated with $\phi=(F,\mathrm{i} F,-FG)$, so that $G=\tilde{\mathbf{N}}_m$, are given by \begin{equation} \mathrm{I} = \vert F\vert^2\,\vert\mathrm{d} z\vert^2,\, \mathrm{II} =-\mathrm{Re}(FG'\,\mathrm{d} z^2), \mbox{ and } K = - \left\vert\frac{G'}{F}\right\vert^2, \end{equation} respectively. On the other hand, the first and second fundamental forms and the Gauss curvature of the minimal immersion associated with $\phi=(F,-\mathrm{i} F,-\frac{F}{G})$, so that $G=\pi(\xi)$, are given by \begin{equation} \mathrm{I} = \vert F\vert^2\,\vert\mathrm{d} z\vert^2,\, \mathrm{II} = \mathrm{Re}\left(F\frac{G'}{G^2}\mathrm{d} z^2\right), \mbox{ and } K = - \left\vert\frac{ G'}{F G^2}\right\vert^2, \end{equation} respectively. \end{theorem} \begin{proof}[of Lemma \ref{lem::1st2ndFFandKminComplexCurve}] We will do the proof for $\phi_2=\mathrm{i}\phi_1$, the proof for $\phi_2=-\mathrm{i}\phi_1$ being entirely analogous. The coefficients of the first fundamental form are $$ \langle\mathbf{x}_u,\mathbf{x}_u\rangle = \sum_{j=1}^2\frac{\phi_j^2+2\phi_j\bar{\phi}_j+\bar{\phi}_j^2}{4} = \frac{\vert \phi_1\vert^2+\vert \phi_2\vert^2}{2}, \langle\mathbf{x}_u,\mathbf{x}_v\rangle = \sum_{j=1}^2\frac{\phi_j^2-\bar{\phi}_j^2}{-4\mathrm{i}} = 0, $$ and \begin{eqnarray*} \langle\mathbf{x}_v,\mathbf{x}_v\rangle & = & -\sum_{j=1}^2\frac{\phi_j^2-2\phi_j\bar{\phi}_j+\bar{\phi}_j^2}{4} = \frac{\vert \phi_1\vert^2+\vert \phi_2\vert^2}{2}, \end{eqnarray*} where we used that $\phi_1^2+\phi_2^2=0$ implies $\sum_j\phi_j^2=\sum_j\bar{\phi}_j^2=0$. Substituting $\phi_2=\mathrm{i}\phi_1$ gives the desired expression for the metric. For the second fundamental form, first note that the minimal normal is given by $\mathbf{N}_m=(-\frac{1}{2}(\frac{\phi_3}{\phi_1}+\frac{\bar{\phi}_3}{\bar{\phi}_1}),-\frac{1}{2\mathrm{i}}(\frac{\phi_3}{\phi_1}-\frac{\bar{\phi}_3}{\bar{\phi}_1}),1)$.
Then, the coefficients of the second fundamental form are \begin{eqnarray*} h_{11} & = & \frac{\phi'+\bar{\phi}'}{2}\cdot\mathbf{N}_m\\ & = & (\frac{\phi_1'+\bar{\phi}_1'}{2},\frac{\mathrm{i}\phi_1'-\mathrm{i}\bar{\phi}_1'}{2},\frac{\phi_3'+\bar{\phi}_3'}{2})\cdot(-\frac{1}{2}(\frac{\phi_3}{\phi_1}+\frac{\bar{\phi}_3}{\bar{\phi}_1}),-\frac{1}{2\mathrm{i}}(\frac{\phi_3}{\phi_1}-\frac{\bar{\phi}_3}{\bar{\phi}_1}),1)\\ & = & \frac{1}{2}\Big(\phi_1(\frac{\phi_3}{\phi_1})'+\overline{\phi_1(\frac{\phi_3}{\phi_1})'}\Big)= \mathrm{Re}[\phi_1(\frac{\phi_3}{\phi_1})'], \end{eqnarray*} \begin{eqnarray*} h_{12} & = & -\frac{\phi'-\bar{\phi}'}{2\mathrm{i}}\cdot\mathbf{N}_m\\ & = & (-\frac{\phi_1'-\bar{\phi}_1'}{2\mathrm{i}},-\frac{\mathrm{i}\phi_1'+\mathrm{i}\bar{\phi}_1'}{2\mathrm{i}},-\frac{\phi_3'-\bar{\phi}_3'}{2\mathrm{i}})\cdot(-\frac{1}{2}(\frac{\phi_3}{\phi_1}+\frac{\bar{\phi}_3}{\bar{\phi}_1}),-\frac{1}{2\mathrm{i}}(\frac{\phi_3}{\phi_1}-\frac{\bar{\phi}_3}{\bar{\phi}_1}),1)\\ & = & -\frac{1}{2}\Big(\phi_1(\frac{\phi_3}{\phi_1})'-\overline{\phi_1(\frac{\phi_3}{\phi_1})'}\Big)= -\mathrm{Im}[\phi_1(\frac{\phi_3}{\phi_1})'], \end{eqnarray*} and $h_{22} = -h_{11}=-\mathrm{Re}[\phi_1(\frac{\phi_3}{\phi_1})']$. Finally, the Gaussian curvature becomes \begin{equation*} K = -\frac{(\mathrm{Re}[\phi_1(\frac{\phi_3}{\phi_1})'])^2+(\mathrm{Im}[\phi_1(\frac{\phi_3}{\phi_1})'])^2}{\vert \phi_1\vert^4}= -\frac{1}{\vert \phi_1\vert^2}\left\vert (\frac{\phi_3}{\phi_1})'\right\vert^2. \end{equation*} \qed \end{proof} From the second fundamental form in Theorem \ref{thr::1st2ndFFandKMinNormal} we conclude that $\mathbf{v}=v_1\mathbf{x}_u+ v_2\mathbf{x}_v$ points to an asymptotic direction if and only if $FG'(v_1+\mathrm{i} v_2)^2\in \mathrm{i}\mathbb{R}$ and to a principal curvature direction if and only if $FG'(v_1+\mathrm{i} v_2)^2\in \mathbb{R}$. An analogous conclusion is valid for the data $\phi=(F,-\mathrm{i} F,-\frac{F}{G})$.
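Two statements above can be spot-checked numerically with sample data $\phi=(F,\mathrm{i} F,-FG)$, $F=\mathrm{e}^z$, $G=z^2$ (illustrative choices, not from the text): (i) the identity $h_{11}=\mathrm{Re}(\phi')\cdot\mathbf{N}_m=\mathrm{Re}[\phi_1(\phi_3/\phi_1)']$ from the proof, and (ii) a direction $d=v_1+\mathrm{i} v_2$ with $FG'd^2$ purely imaginary annihilates $\mathrm{II}$.

```python
import cmath

def F(z): return cmath.exp(z)

z = 0.5 + 0.3j
Fz, Gz, dFz, dGz = F(z), z * z, F(z), 2 * z        # F, G and derivatives
p1, p3 = Fz, -Fz * Gz                               # phi_1, phi_3
d1, d2, d3 = dFz, 1j * dFz, -(dFz * Gz + Fz * dGz)  # phi'(z)
q = p3 / p1                                         # phi_3/phi_1 = -G
Nm = (-q.real, -q.imag, 1.0)                        # minimal normal
h11 = d1.real * Nm[0] + d2.real * Nm[1] + d3.real * Nm[2]  # Re(phi') . N_m
w = d3 - q * d1                                     # phi_1 (phi_3/phi_1)'
assert abs(w + Fz * dGz) < 1e-12                    # here w = -F G'
assert abs(h11 - w.real) < 1e-9                     # check (i)

# (ii): pick d with F G' d^2 purely imaginary; then II(d, d) = 0.
d = cmath.exp(1j * (cmath.pi / 4 - cmath.phase(Fz * dGz) / 2))
assert abs(Fz * dGz * d ** 2 - abs(Fz * dGz) * 1j) < 1e-9
v1, v2 = d.real, d.imag
II = w.real * (v1 ** 2 - v2 ** 2) - 2 * w.imag * v1 * v2
assert abs(II) < 1e-9
print("second fundamental form checks pass")
```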
For every $\theta$, the minimal surface $\mathbf{x}_{\theta}$ associated with the Weierstrass data $({\mathrm{e}^{-\mathrm{i}\theta}}F,G)$ is isometric to the surface $\mathbf{x}_{0}$ associated with the data $(F,G)$ and together they form {an} \emph{associate family} of minimal surfaces: $${\mathbf{x}_{\theta}=\cos(\theta)\,\mathrm{Re}\int\phi\,\mathrm{d} z+\sin(\theta)\,\mathrm{Im}\int\phi\,\mathrm{d} z}.$$ The surface with $\theta=\frac{\pi}{2}$ is called the \emph{conjugate} of the surface $\mathbf{x}_{0}$. Therefore, it follows as a corollary of Theorem \ref{thr::1st2ndFFandKMinNormal} that asymptotic {and} principal directions of $\mathbf{x}_{\theta}$ are mapped {respectively} into principal {and} asymptotic directions of its conjugate surface $\mathbf{x}_{\theta+\frac{\pi}{2}}$. \subsection{Other choices of a Weierstrass representation} \label{subsect::OtherChoicesWeierstRep} The holomorphic data in Sato's choice of representation is $\phi=(F,\mathrm{i} F,2FG)$ \cite{SatoArXiv2018}. For this representation, we have $\tilde{\mathbf{N}}_m=2G$ and $\pi(\xi)=-\frac{2}{\bar{G}}$. Moreover, Ref. \cite{SatoArXiv2018} also provides expressions for the second fundamental form and the Gaussian curvature. From the above, $\mathrm{II}$ and $K$ of the minimal immersion $\mathbf{x}=\mathrm{Re}\int(F,\mathrm{i} F,2FG)$ can be further simplified and written as $$ \mathrm{II} = 2\,\mathrm{Re}(FG'\mathrm{d} z^2) \mbox{ and }K = -4\Big\vert\frac{G'}{F}\Big\vert^2.$$ It is worth mentioning that the motivation of Ref. \cite{SatoArXiv2018} was to understand stationary surfaces in 4d Minkowski space $\mathbb{E}_1^4$, i.e., spacelike surfaces with zero mean curvature. Indeed, from the fact that $\mathbb{I}^3$ can be isometrically immersed in $\mathbb{E}_1^4$, it is established that there exists a one-to-one correspondence between flat stationary and simply isotropic minimal surfaces, up to rigid motions in the respective spaces.
In addition, using that the 3d Euclidean and Minkowski spaces can also be isometrically immersed in $\mathbb{E}_1^4$, it is shown that minimal, maximal, and simply isotropic minimal surfaces in $\mathbb{E}^3$, $\mathbb{E}_1^3$, and $\mathbb{I}^3$, {respectively,} can be associated with members of the 1-parameter family $\{f_{\theta}\}_{\theta\in[0,2\pi]}$ of stationary surfaces given by \cite{SatoArXiv2018} \begin{equation} f_{\theta}=\mathrm{Re}\int^z\Big(F(1-\cos2\theta\,G^2),\mathrm{i} F(1+\cos2\theta\,G^2),2\cos\theta\,FG,2\sin\theta\,FG\Big)\mathrm{d} z. \end{equation} In fact, the immersions $f_{0},f_{\frac{\pi}{4}}$, and $f_{\frac{\pi}{2}}$ can be associated with minimal, simply isotropic minimal, and maximal surfaces in $\mathbb{E}^3$, $\mathbb{I}^3$, and $\mathbb{E}_1^3$, respectively. Moreover, these stationary surfaces are contained in a 3d subspace {of $\mathbb{E}_1^4$} and it is possible to deduce that $f_{0},f_{\frac{\pi}{4}}$, and $f_{\frac{\pi}{2}}$ correspond to surfaces with Gaussian curvature $K\leq0$, $K=0$, and $K\geq0$ \cite{MaAM2013,SatoArXiv2018}, respectively\footnote{The correspondence with minimal surfaces in $\mathbb{I}^3$ is a special feature of $\mathbb{E}_1^4$ since a zero mean curvature surface in 4d Euclidean space with zero Gaussian curvature must be a plane.}. Finally, the representations we proposed in this work, and also the one proposed by Sato, are by no means the only possible choices. For example, it seems natural to choose $\phi_1=F$, $\phi_2=\mathrm{i} F$, and $\phi_3=-G$, but note that $\mathbf{N}_m=\frac{1}{\vert F\vert^2}(\mathrm{Re}(\bar{F}G),\mathrm{Im}(\bar{F}G),1)$, which implies $\tilde{\mathbf{N}}_m=G/F$ and $\pi\circ\xi=\bar{F}/\bar{G}$. Therefore, neither $F$ nor $G$ have a clear geometric meaning as in (\ref{eq::WeierstrassRepWithGtopviewNm}) or (\ref{eq::WeierstrassRepWithGprojParNor}). Yet another possibility is to choose $\phi_1=FG$, $\phi_2=-\mathrm{i} FG$, and $\phi_3=-F$.
Here, the minimal normal is $\mathbf{N}_m=\frac{1}{\vert G\vert^2}(\mathrm{Re}\,G,\mathrm{Im}\,G,1)$, which implies $\tilde{\mathbf{N}}_m=1/\bar{G}$ and $\pi\circ\xi=G$. The latter has the geometric meaning we seek, but the conformal factor of the metric is $\vert\phi_1\vert^2+\vert\phi_2\vert^2=2\vert F\vert^2\vert G\vert^2$ and, consequently, we can no longer associate the singularities of the minimal immersion with the zeros of a single function. \subsection{{Examples}} \label{subsect::ExamplesWeierstrassRepr} {We are going to build examples of minimal surfaces corresponding to the isotropic counterparts of the helicoid, catenoid, and Scherk surfaces using the holomorphic representation. (See Sato \cite{SatoArXiv2018} for further examples.)} {\textit{(a) The helicoid and logarithmoid of revolution}: Consider the holomorphic data $F=1$ and $G=-z/p$, where $p\in\mathbb{R}$. Using the minimal normal representation, we have $ \int(F,\mathrm{i} F,-FG)\mathrm{d} z = (z,\mathrm{i} z,\frac{z^2}{2p})+(a,b,c)$. Thus, the corresponding minimal surface is the hyperbolic paraboloid:} \begin{equation} \mathbf{x}(z=u+\mathrm{i} v) = (u,-v,\frac{u^2-v^2}{2p}). \end{equation} {The corresponding conjugate surface is the hyperbolic paraboloid} \begin{equation} \mathbf{x}_{\frac{\pi}{2}}(z=u+\mathrm{i} v) = (v,u,\frac{uv}{p}). \end{equation} {On the other hand, using the parabolic normal representation, we have $\int(F,-\mathrm{i} F,-F/G)\mathrm{d} z = (z,-\mathrm{i} z,p\ln z)+(a,b,c)$. Thus, up to translations, the corresponding minimal surface is the logarithmoid of revolution (see Fig. \ref{fig:AssociatedFamilyHelicoid}):} \begin{equation} \mathbf{x}(z=r\mathrm{e}^{\mathrm{i} \varphi}) = (r\cos\varphi,r\sin\varphi,p\ln r). \end{equation} {The corresponding conjugate surface is the helicoid (see Fig. \ref{fig:AssociatedFamilyHelicoid}):} \begin{equation} \mathbf{x}_{\frac{\pi}{2}}(z=r\mathrm{e}^{\mathrm{i} \varphi}) = (r\sin\varphi,-r\cos\varphi,p\,\varphi).
\end{equation} \begin{figure} \caption{The associate family of simply isotropic minimal surfaces of the helicoid parametrized as $\mathbf{x}_{\theta}$.} \label{fig:AssociatedFamilyHelicoid} \end{figure} \begin{figure} \caption{Minimal Scherk surfaces: (a) the minimal surface in Euclidean space $\mathbb{E}^3$; (b) the Scherk surface of second type in $\mathbb{I}^3$.} \label{fig:ScherkSurf} \end{figure} {In $\mathbb{E}^3$, the conjugate surface of the helicoid is the catenoid, which is the only minimal surface of revolution in $\mathbb{E}^3$. The logarithmoid of revolution is the only simply isotropic minimal surface of revolution \cite{daSilvaMJOU2021,Sachs1990} and, therefore, we may see it as the counterpart of the catenoid in $\mathbb{I}^3$.} {\textit{(b) The Scherk surfaces}: The so-called first Scherk surface is the minimal surface obtained by moving a plane curve along another plane curve. The solution in $\mathbb{E}^3$ is provided by the doubly periodic minimal surface $x_1=\ln\frac{\cos x_2}{\cos x_3}$. We may pose the same problem in $\mathbb{I}^3$. However, in $\mathbb{I}^3$, we must distinguish between three types of planes $\Pi_1$ and $\Pi_2$ containing the generating curves \cite{StrubeckerCrelle1954}. Namely, we have the:} \begin{enumerate} \item Scherk surface of first type: $\Pi_1$ and $\Pi_2$ are both isotropic, e.g., $\Pi_1:x=\gamma\, y$, $\Pi_2:y=-\gamma\, x$ ($\gamma$ constant); \item Scherk surface of second type: $\Pi_1$ is isotropic and $\Pi_2$ is not isotropic, e.g., $\Pi_1:y=0$, $\Pi_2:z=0$; \item Scherk surface of third type: $\Pi_1$ and $\Pi_2$ are both non-isotropic, e.g., $\Pi_1:y-z=\frac{\pi}{2}$, $\Pi_2:y+z=\frac{\pi}{2}$. \end{enumerate} {Those surfaces are: 1. planes and hyperbolic paraboloids whose cross-sectional hyperbolas lie on isotropic planes; 2. the simply periodic surface $x_1=\ln\frac{x_3}{\cos x_2}$; and 3. the doubly periodic surface $\mathbf{x}(u^1,u^2)=(\ln\frac{\sin u^1}{\sin u^2},u^1+u^2,u^1-u^2)$.} {We already obtained the isotropic holomorphic representation of hyperbolic paraboloids.
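The minimality of the Scherk-type solutions can be sanity-checked numerically: written as graphs $x_3=h(x_1,x_2)$ over the horizontal plane, which is already isothermal for the simply isotropic metric, they must have harmonic $h$ (the coordinate functions of a minimal surface are harmonic in isothermal coordinates). Below, a finite-difference Laplacian is applied to the second-type graph $h=\mathrm{e}^u\cos v$ and to Strubecker's third-type graph $h=\mathrm{Im}\ln\cosh z$, both of which appear in this subsection; the sample points are arbitrary.

```python
import math, cmath

def laplacian(h, u, v, eps=1e-3):
    """Standard 5-point finite-difference Laplacian."""
    return (h(u + eps, v) + h(u - eps, v) + h(u, v + eps) + h(u, v - eps)
            - 4.0 * h(u, v)) / eps ** 2

h2 = lambda u, v: math.exp(u) * math.cos(v)                   # second type
h3 = lambda u, v: cmath.log(cmath.cosh(complex(u, v))).imag   # third type

for (u, v) in [(0.2, 0.4), (1.0, -0.3), (-0.5, 1.1)]:
    assert abs(laplacian(h2, u, v)) < 1e-4
    assert abs(laplacian(h3, u, v)) < 1e-4
print("Scherk-type graphs are harmonic")
```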
Now, consider the holomorphic data $F=\frac{1}{z}$ and $G=-\frac{1}{z}$. Using the parabolic normal representation, we have $\int(F,-\mathrm{i} F,-\frac{F}{G})=(\ln z,-\mathrm{i} \ln z,z)$. Thus, up to translations, the corresponding minimal surface is} \begin{equation} \mathbf{x}(z)=\mathrm{Re}(\ln z,-\mathrm{i} \ln z,z)=(\ln\vert z\vert,\arg z,\mathrm{Re}\, z). \end{equation} {Using the identity $\cos(\arctan x)=\frac{1}{\sqrt{1+x^2}}$, we see that the map $\mathbf{x}=(x_1,x_2,x_3)$ parametrizes the Scherk surface of second type (see Fig. \ref{fig:ScherkSurf}(b)): $x_1 = \ln \frac{x_3}{\cos x_2}$.} {Alternatively, defining $w=\ln z$, we reparametrize $\mathbf{x}(z)$ as the harmonic graph} \begin{equation} \mathbf{x}(w=u+\mathrm{i} v)=(w,\mathrm{Re}(\mathrm{e}^w))=(u,v,\mathrm{e}^u\cos v). \end{equation} {Finally, concerning the Scherk surface of third type, Strubecker showed that it can be reparametrized as the harmonic graph of $h(z)=\mathrm{Im}\,\ln \cosh z$ \cite{StrubeckerCrelle1954}. Therefore, the corresponding parabolic holomorphic data can be taken as $F=1$ and $G=-\mathrm{i}\coth z$. (Despite the close resemblance with the classic minimal Scherk surface, the graph of $h(z)=\mathrm{Im}\ln \cosh z$ is not minimal in $\mathbb{E}^3$.)} \section{Isotropic Bj\"orling representation} The Bj\"orling representation consists in solving the Cauchy problem for the minimal surface equation. More precisely, given an analytic curve $c(s)$ and a unit vector field $\mathbf{e}$ along $c$, with $\{c',\mathbf{e}\}$ linearly independent, find the minimal surface $S$ which contains $c$ and such that $\mathbf{e}$ is tangent to $S$ along $c$. In Euclidean space, we may equivalently prescribe a unit vector field $\mathbf{n}$ such that $\mathbf{n}$ is normal to the sought minimal surface along $c$. If we prescribe the tangent planes along $c$, $s\mapsto\mbox{span}\{c'(s),\mathbf{e}(s)\}$, then from Eq.
(\ref{eq::TangVecGenericHolRep}) we may set $\mathrm{Re}(\phi)=c'$ and $\mathrm{Im}(\phi)=-\mathbf{e}$ and the corresponding minimal surface is parameterized by \begin{equation} \mathbf{x}(z)=\mathrm{Re}\int\Big[c'(z)-\mathrm{i} \,\mathbf{e}(z)\Big]\mathrm{d} z, \end{equation} where $c(z)$ and $\mathbf{e}(z)$ are analytic extensions of $c(s)$ and of $\mathbf{e}(s)$, respectively. If we prescribe the minimal normal $\mathbf{N}_m$ along $c$, then $\mathbf{N}_m\times c'$ is a tangent vector and we can use the previous construction. Indeed, from Eq. (\ref{eq::TangVecGenericHolRep}) we may set $\mathrm{Re}(\phi)=c'$ and $\mathrm{Im}(\phi)=-\mathbf{N}_m\times c'$ and the corresponding minimal surface is parameterized by { \begin{equation} \mathbf{x}(z)=\mathrm{Re}\int\Big[c'(z)-\frac{\mathrm{i}}{\Omega(z)} \,\mathbf{N}_m(z)\times c'(z)\Big]\mathrm{d} z, \end{equation} } where {$\Omega=\sqrt{\langle\mathbf{N}_m\times c',\mathbf{N}_m\times c'\rangle}$ and} $c(z)$ and $\mathbf{N}_m(z)$ are analytic extensions of $c(s)$ and of $\mathbf{N}_m(s)$, respectively. Finally, if we prescribe the parabolic normal $\xi$ along $c$, then we can construct a tangent vector given by {$\mathbf{e}=(\xi+\frac{1+\Vert\tilde{\xi}\Vert^2}{2}\mathcal{N})\times c'$}. Now, the corresponding minimal surface is parameterized by { \begin{equation} \mathbf{x}(z)=\mathrm{Re}\int\Big[c'(z)-\frac{\mathrm{i}}{\Omega(z)} \,\Big(\xi(z)+\frac{1+\Vert\tilde{\xi}(z)\Vert^2}{2}\mathcal{N}\Big)\times c'(z)\Big]\mathrm{d} z, \end{equation} } where {$\Omega=\sqrt{\langle\mathbf{e},\mathbf{e}\rangle}$ and} $c(z)$ and $\xi(z)$ are analytic extensions of $c(s)$ and of $\xi(s)$, respectively.
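The first Bj\"orling formula can be sanity-checked numerically with the admissible sample data $c(s)=(\cos s,\sin s,0)$ and $\mathbf{e}(s)=(\cos s,\sin s,p)$ (these reappear in Example (a) below): integrating $\phi=c'-\mathrm{i}\mathbf{e}$ along the real axis, the real part of the integral recovers $c(s)-c(0)$, and $-\mathrm{Im}(\phi)=\mathbf{e}$ along the curve.

```python
import cmath

p = 0.7  # sample inclination

def phi(z):
    # phi = c'(z) - i e(z) for the analytic extensions of c and e.
    return (-cmath.sin(z) - 1j * cmath.cos(z),
            cmath.cos(z) - 1j * cmath.sin(z),
            -1j * p)

s, n = 1.3, 20000
h = s / n
integral = [0.0 + 0j] * 3
for k in range(n):                     # trapezoidal rule on [0, s]
    a, b = phi(k * h), phi((k + 1) * h)
    for j in range(3):
        integral[j] += 0.5 * h * (a[j] + b[j])

c = lambda t: (cmath.cos(t).real, cmath.sin(t).real, 0.0)
e = (cmath.cos(s).real, cmath.sin(s).real, p)
for j in range(3):
    assert abs(integral[j].real - (c(s)[j] - c(0.0)[j])) < 1e-6  # x = c on curve
    assert abs(-phi(s)[j].imag - e[j]) < 1e-12                   # x_v = e on curve
print("Bjorling data reproduced along the curve")
```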
\subsection{{Examples}} \begin{figure} \caption{Bent Scherk surface obtained from the Bj\"orling representation by prescribing an oscillating tangent plane along a circle (dashed black line) instead of an oscillating tangent plane along a line as in the usual Scherk surface (see Figure \ref{fig:ScherkSurf}).} \label{fig:BentScherk} \end{figure} {We now build examples of simply isotropic minimal surfaces using the Bj\"orling representation corresponding to the isotropic counterparts of the catenoid (logarithmoid of revolution) and Scherk surfaces. In addition, we also build a bent Scherk surface, i.e., instead of a surface composed of a linear chain we obtain a circular one. (See figures \ref{fig:ScherkSurf} and \ref{fig:BentScherk}.)} {\textit{(a) Logarithmoid of revolution:} In Euclidean space, we may obtain the catenoid by prescribing the analytic curve $c(s)=(\cos s,\sin s,0)$ and the vector field $\mathbf{e}=(0,0,1)$ and demand $\mathbf{e}$ to be tangent to the surface along $c$. However, in the simply isotropic space we should add some inclination to $\mathbf{e}$ to avoid isotropic tangent planes. Thus, consider the vector field $\mathbf{e}(s)=(\cos s,\sin s,p)$ along $c$. Then, the holomorphic curve associated with the Bj\"orling problem is $$c'(z)-\mathrm{i} \mathbf{e}(z)=(-\sin z-\mathrm{i}\cos z,\cos z-\mathrm{i}\sin z,-\mathrm{i} p)=(-\mathrm{i}\mathrm{e}^{-\mathrm{i} z},\mathrm{e}^{-\mathrm{i} z},-\mathrm{i} p),$$ whose integration gives $\int(c'-\mathrm{i}\mathbf{e})\mathrm{d} z=(\mathrm{e}^{-\mathrm{i} z},\mathrm{i} \mathrm{e}^{-\mathrm{i} z},-\mathrm{i} pz)$.
Now, performing the coordinate change $w=\mathrm{e}^{-\mathrm{i} z}$, we finally obtain \begin{equation} \mathbf{x}(w=r\mathrm{e}^{-\mathrm{i}\varphi})=\mathrm{Re}(w,\mathrm{i} w,p\ln w)=(r\cos\varphi,r\sin\varphi,p\ln r), \end{equation} which parametrizes the logarithmoid of revolution.} {\textit{(b) Scherk surface:} Let us consider the analytic curve $c(s)=(0,s,A\cos s)$ and the vector field $\mathbf{e}=(1,0,A\cos s)$, where $A\in\mathbb{R}$. Then, the holomorphic curve associated with the Bj\"orling problem is $$c'(z)-\mathrm{i} \mathbf{e}(z)=(-\mathrm{i},1,-A\sin z-\mathrm{i} A\cos z)=(-\mathrm{i},1,-\mathrm{i} A\,\mathrm{e}^{-\mathrm{i} z}),$$ whose integration gives $\int(c'-\mathrm{i}\mathbf{e})\mathrm{d} z=(-\mathrm{i} z,z,A\, \mathrm{e}^{-\mathrm{i} z})$. We finally obtain \begin{equation} \mathbf{x}(z=v+\mathrm{i} u)=(u,v,A\,\mathrm{e}^u \cos v), \end{equation} which parametrizes the Scherk surface of the second type (see Subsection \ref{subsect::ExamplesWeierstrassRepr}).} {\textit{(c) Bent Scherk surface:} Instead of a singly periodic minimal surface, let us build a Scherk surface with a bent core. Consider the analytic curve $c(s)=(\sin s,\cos s,0)$ and the vector field $\mathbf{e}=(\sin s,\cos s,A\lambda\cos \lambda s)$, where $\lambda,A\in\mathbb{R}$. Then, the holomorphic curve associated with the Bj\"orling problem is $$c'-\mathrm{i} \mathbf{e}=(\cos z-\mathrm{i} \sin z,-\sin z-\mathrm{i} \cos z,-\mathrm{i} A\lambda \cos\lambda z)=(\mathrm{e}^{-\mathrm{i} z},-\mathrm{i} \mathrm{e}^{-\mathrm{i} z},-\mathrm{i} A\lambda \cos\lambda z),$$ whose integration gives $\int(c'-\mathrm{i}\mathbf{e})\mathrm{d} z=(\mathrm{i}\mathrm{e}^{-\mathrm{i} z},\mathrm{e}^{-\mathrm{i} z},-\mathrm{i} A\sin\lambda z)$.
Performing the coordinate change $w=\mathrm{i}\,\mathrm{e}^{-\mathrm{i} z}$ [$z=\mathrm{i} \ln(-\mathrm{i} w)$], we finally obtain the bent Scherk surface (see figure \ref{fig:BentScherk}) \begin{eqnarray} \mathbf{x}(w=r\mathrm{e}^{\mathrm{i} \varphi}) &=& \mathrm{Re}(w,-\mathrm{i} w,-\mathrm{i}\,A\sin[\mathrm{i}\lambda\ln(-\mathrm{i} w)])\nonumber\\ & = & \Big(r\cos\varphi,r\sin\varphi,A\cos[\lambda(\frac{\pi}{2}-\varphi)]\sinh(\lambda\ln r)\Big).\label{eq::BentScherk} \end{eqnarray} To the best of our knowledge, the construction of a bent Scherk surface has never been reported in the literature. (A similar procedure may be used to construct a bent helicoid as done in $\mathbb{E}^3$. See Ref. \cite{LopezMMJ2018} and references therein.)} \section{Concluding remarks} In this work, we extended the results of Ref. \cite{SatoArXiv2018} concerning the Weierstrass representation of minimal surfaces in simply isotropic space $\mathbb{I}^3$ by providing a way to associate part of the holomorphic data with the choice of a Gauss map. We also discussed simply isotropic analogs of the Bj\"orling representation, which correspond to the Cauchy problem for the minimal surface equation. Sato's choice for the Weierstrass representation in Ref. \cite{SatoArXiv2018} was motivated by the study of stationary surfaces in 4d Minkowski space $\mathbb{E}_1^4$, i.e., zero mean curvature spacelike surfaces. In fact, simply isotropic minimal surfaces are put in correspondence with flat stationary surfaces, while minimal and maximal surfaces in the 3d Euclidean and Minkowski spaces are put in correspondence with stationary surfaces of curvature $K\leq0$ and $K\geq0$, respectively. Minimal, simply isotropic minimal, and maximal surfaces are associated with members of a 1-parameter family $\{f_{\theta}\}$ of stationary surfaces. The family of immersions $f_{\theta}$, however, does not exhaust the class of stationary surfaces in $\mathbb{E}_1^4$. Indeed, from Theorem 2.4 of Ref.
\cite{MaAM2013}, given two holomorphic functions $\phi,\psi$ and a holomorphic 1-form $\mathrm{d} h$ satisfying a set of regularity conditions, one can generically represent stationary surfaces in $\mathbb{E}_1^4$ as \begin{equation*} \mathbf{x}(z)=2\,\mathrm{Re}\int^z(\phi+\psi,-\mathrm{i}(\phi-\psi),1-\phi\psi,1+\phi\psi)\mathrm{d} h. \end{equation*} The functions $\phi,\psi$ are associated with the Gauss maps of the stationary surface while $\mathrm{d} h$ is the height differential. Minimal, maximal, and simply isotropic minimal surfaces in $\mathbb{E}^3$, $\mathbb{E}_1^3$, and $\mathbb{I}^3$ respectively correspond to considering $\phi\equiv-1/\psi$, $\phi\equiv1/\psi$, and $\psi\equiv0$ (or $\phi\equiv0$) in the representation above. This latter observation poses the interesting problem of finding other geometries that can be isometrically immersed in $\mathbb{E}_1^4$ and whose corresponding zero mean curvature surfaces can be put in correspondence with stationary surfaces of $\mathbb{E}_1^4$. In addition, we may also ask how many non-equivalent geometries are needed in order to exhaust all stationary surfaces in $\mathbb{E}_1^4$. In conclusion, the take-home message of this work is that there are multiple ways of holomorphically representing simply isotropic minimal surfaces. However, distinct choices have distinct advantages/disadvantages and, therefore, when deciding on one representation or another, we should keep in mind which specific geometric feature the chosen representation will allow us to control. \end{document}
\begin{document} \title{A Riemann-von Mangoldt-type formula for the \\ distribution of Beurling primes} \author{\large{Szil\' ard Gy. R\' ev\' esz}$^1$} \affiliation[1]{Alfréd Rényi Institute of Mathematics, 13 Reáltanoda street Budapest, Hungary 1053} \newcommand{\CorrespondingAuthorEmail}{[email protected]} \newcommand{\KeyWords}{Beurling zeta function, analytic continuation, arithmetical semigroups, Beurling prime number formula, zero of the Beurling zeta function, Riemann-von Mangoldt formula} \newcommand{11M41}{11M41} \newcommand{11F66, 11M36, 30B50, 30C15}{11F66, 11M36, 30B50, 30C15} \begin{abstract} In this paper we work out a Riemann-von Mangoldt-type formula for the summatory function $\psi(x):=\sum_{g\in{\mathcal G}, |g|\le x} \Lambda_{{\mathcal G}}(g)$, where ${\mathcal G}$ is an arithmetical semigroup (a Beurling generalized system of integers) and $\Lambda_{{\mathcal G}}$ is the corresponding von Mangoldt function, attaining $\log|p|$ for $g=p^k$ with a prime element $p\in {\mathcal G}$ and zero otherwise. On the way towards this formula, we prove explicit estimates on the Beurling zeta function $\zeta_{{\mathcal G}}$ belonging to ${\mathcal G}$, on the number of zeros of $\zeta_{\mathcal G}$ in various regions, in particular within the critical strip where the analytic continuation exists, and on the magnitude of the logarithmic derivative of $\zeta_{\mathcal G}$, under the sole additional assumption that Knopfmacher's Axiom A is satisfied. We also construct a technically useful broken line contour to which the technique of integral transformation can be well applied. The whole work serves as a first step towards a further study of the distribution of zeros of the Beurling zeta function, providing appropriate zero density and zero clustering estimates, to be presented in the continuation of this paper. \end{abstract} \maketitle \section{Introduction} Beurling's theory lends itself well to the study of several mathematical structures.
A vast field of applications of Beurling's theory is nowadays called \emph{arithmetical semigroups}, which are described in detail e.g. by Knopfmacher, \cite{Knopf}. Here ${\mathcal G}$ is a unitary, commutative semigroup, with a countable set of indecomposable generators, called the \emph{primes} of ${\mathcal G}$ and denoted usually as $p\in{\mathcal P}$ (with ${\mathcal P}\subset {\mathcal G}$ the set of all primes within ${\mathcal G}$), which freely generate the whole of ${\mathcal G}$: any element $g\in {\mathcal G}$ can be written, uniquely up to the order of the factors, in the form $g=p_1^{k_1}\cdot \dots \cdot p_m^{k_m}$; two essentially different such expressions are necessarily different as elements of ${\mathcal G}$, while each element has its own, essentially unique, prime decomposition. Moreover, there is a \emph{norm} $|\cdot|~: {\mathcal G}\to {\mathbb R}_{+}$ so that the following hold. First, the image of ${\mathcal G}$, $|{\mathcal G}|\subset {\mathbb R}_{+}$, is discrete, i.e. any finite interval of ${\mathbb R}_{+}$ can contain the norm of only a finite number of elements of ${\mathcal G}$; thus the function \begin{equation}\label{Ndef} {{\mathcal N}}(x):=\# \{g\in {\mathcal G}~:~ |g| \leq x\} \end{equation} exists as a finite, nondecreasing, nonnegative integer valued function on ${\mathbb R}_{+}$. Second, the norm is multiplicative, i.e. $|g\cdot h| = |g| \cdot |h|$; it follows that the unit element $e$ of ${\mathcal G}$ has $|e|=1$, and that all other elements $g \in {\mathcal G}$ have norms strictly larger than 1 (otherwise the distinct elements $g^m$ -- which are indeed distinct by their different unique prime decompositions -- would have a sequence of norms identically $1$ or converging to $0$). There are many structures fitting into this general theory, e.g. the category of all finite Abelian groups, the ideals ${\mathcal I}_K$ of an algebraic number field $K$ over the rational numbers, algebraic structures like e.g.
semisimple rings, graphs of some required properties, finite pseudometrizable topological spaces, symmetric Riemannian manifolds, compact Lie groups etc., see Knopfmacher's book, pages 11-22. In this work we assume the so-called \emph{``Axiom A''} of Knopfmacher, see pages 73-79 of his fundamental book \cite{Knopf}, in its normalized form. \begin{definition} It is said that ${{\mathcal N}}$ (or, loosely speaking, $\zeta$) satisfies \emph{Axiom A} -- more precisely, Axiom $A(\kappa,\theta)$ with suitable constants $\kappa>0$ and $0<\theta<1$ -- if we have\footnote{The usual formulation uses the more natural version ${\mathcal R}(x):= {\mathcal N}(x)-\kappa x$. However, our version is more convenient with respect to the initial values at 1, as we here have ${\mathcal R}(1-0)=0$. All respective integrals of the form $\int_1$ will be understood as integrals from $1-0$, and thus we can avoid considering endpoint values in the partial integration formulae. Alternatively, we could have taken ${\mathcal N}(x)$ left continuous, and integrals from 1 in the usual sense: also with this convention we would have ${\mathcal R}(1)=0$.} for the remainder term \begin{align}\label{Athetacondi}\notag & {\mathcal R}(x):= {\mathcal N}(x)-\kappa (x-1) \\ \textrm{the estimate} \notag \\ & \left| {\mathcal R}(x) \right| \leq A x^{\theta} \quad (\kappa, A > 0, ~ 0<\theta<1 ~ \textrm{constants}, ~ x \geq 1 ~ \textrm{arbitrary}). \end{align} \end{definition} It is clear that under Axiom A the Beurling zeta function \begin{equation}\label{zetadef} \zeta(s):=\zeta_{{\mathcal G}}(s):=\int_1^{\infty} x^{-s} d{\mathcal N}(x) = \sum_{g\in{\mathcal G}} \frac{1}{|g|^s} \end{equation} admits a meromorphic, essentially analytic continuation $\kappa\frac{1}{s-1}+\int_1^{\infty} x^{-s} d{\mathcal R}(x)$ up to $\Re s >\theta$ with only one, simple pole at 1.
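As a quick numerical illustration (ours, not part of the theory above), the ordinary positive integers form the simplest arithmetical semigroup: here ${\mathcal N}(x)=\lfloor x\rfloor$, so Axiom $A(1,\theta)$ holds with $A=1$ and any $0<\theta<1$, since $|{\mathcal R}(x)|\le 1\le x^\theta$. The following Python sketch checks the remainder bound \eqref{Athetacondi} on a sample grid and verifies, for a real $\sigma>1$, that a long partial sum of \eqref{zetadef} stays within $A\sigma/(\sigma-\theta)$ of the pole term $\kappa/(s-1)$, as proved in general in Section \ref{sec:basics}; all function names are illustrative.

```python
# Toy check (illustrative only): the positive integers as an arithmetical
# semigroup. Here N(x) = floor(x), kappa = 1, and R(x) = N(x) - kappa*(x-1)
# satisfies |R(x)| <= 1 <= A*x**theta for A = 1 and any 0 < theta < 1,
# so Axiom A(1, theta) holds trivially.
import math

kappa, A, theta = 1.0, 1.0, 0.5

def N(x):
    # counting function of the semigroup: number of elements of norm <= x
    return math.floor(x)

def R(x):
    # normalized remainder term, R(1-0) = 0 by construction
    return N(x) - kappa * (x - 1)

# Axiom A remainder bound |R(x)| <= A * x**theta on a sample grid
for x in (1.0, 1.5, 2.7, 10.0, 1e4):
    assert abs(R(x)) <= A * x ** theta

# zeta(sigma) via a long partial sum, compared with the pole term kappa/(sigma-1):
# the difference obeys |zeta(sigma) - kappa/(sigma-1)| <= A*sigma/(sigma-theta)
sigma = 2.0
zeta_sigma = sum(n ** (-sigma) for n in range(1, 200_000))  # approx. pi^2/6
assert abs(zeta_sigma - kappa / (sigma - 1)) <= A * sigma / (sigma - theta)
```

The same check can be repeated for any concrete norm-counting function ${\mathcal N}$, which is the practical content of Axiom A: one constant pair $(A,\theta)$ controls everything that follows.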
The Beurling zeta function \eqref{zetadef} can be used to express the generalized von Mangoldt function \begin{equation}\label{vonMangoldtLambda} \Lambda (g):=\Lambda_{{\mathcal G}}(g):=\begin{cases} \log|p| \quad \textrm{if}\quad g=p^k, ~ k\in{\mathbb N} ~~\textrm{with some prime}~~ p\in{\mathcal G}\\ 0 \quad \textrm{if}\quad g\in{\mathcal G} ~~\textrm{is not a prime power in} ~~{\mathcal G} \end{cases} \end{equation} as the coefficients of the logarithmic derivative of the zeta function \begin{equation}\label{zetalogder} -\frac{\zeta'}{\zeta}(s) = \sum_{g\in {\mathcal G}} \frac{\Lambda(g)}{|g|^s}. \end{equation} The Beurling theory of generalized primes is mainly concerned with the analysis of the summatory function \begin{equation}\label{psidef} \psi(x):=\psi_{{\mathcal G}}(x):=\sum_{g\in {\mathcal G},~|g|\leq x} \Lambda (g). \end{equation} Apart from generality and applicability to e.g. the distribution of prime ideals in number fields, the interest in these questions was greatly boosted by a construction of Diamond, Montgomery and Vorhauer \cite{DMV}. They basically showed that under Axiom A the Riemann hypothesis may still fail: moreover, nothing better than the most classical zero-free region and error term \begin{equation}\label{classicalzerofree} \zeta(s) \ne 0 \qquad \text{whenever}~~~ s=\sigma+it, ~~ \sigma > 1-\frac{c}{\log t}, \end{equation} and \begin{equation}\label{classicalerrorterm} \psi(x)=x +O(x\exp(-c\sqrt{\log x})) \end{equation} follows from \eqref{Athetacondi}, at least if $\theta>1/2$. Therefore, Vinogradov mean value theorems on trigonometric sums and much other machinery are certainly unavailable in this generality, and for Beurling zeta functions only a careful revival of the combination of ``ancient-classical'' methods and ``elementary'' arguments can be implemented.
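To make the definitions \eqref{vonMangoldtLambda} and \eqref{psidef} concrete in the classical case ${\mathcal G}={\mathbb N}$, the following short sketch (ours; the helper names are illustrative) computes $\Lambda(n)$ by trial division and checks numerically that $\psi(x)/x$ is already close to $1$ at $x=10^4$, in accordance with the prime number theorem $\psi(x)\thicksim x$.

```python
# Toy illustration (classical case): the von Mangoldt function Lambda and its
# summatory function psi for the ordinary integers.
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^k for a prime p, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        if n % p == 0:            # p is the smallest prime factor of n
            while n % p == 0:
                n //= p
            # n is a power of p exactly when nothing is left after dividing out p
            return math.log(p) if n == 1 else 0.0
    return 0.0

def psi(x):
    # summatory function psi(x) = sum of Lambda(n) over n <= x
    return sum(von_mangoldt(n) for n in range(1, math.floor(x) + 1))

# psi(x)/x tends to 1 (prime number theorem); at x = 10^4 the ratio is
# already within a few percent of 1
x = 10_000
assert 0.95 < psi(x) / x < 1.05
```

In the general Beurling setting the same quantity $\psi_{{\mathcal G}}(x)$ is studied, but with $\Lambda_{{\mathcal G}}$ supported on prime powers of the semigroup and weighted by $\log|p|$.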
In the classical case of prime number distribution, as well as regarding some extensions to primes in arithmetic progressions and to the distribution of prime ideals in algebraic number fields, the connection between the location and distribution of zeta-zeros and the oscillatory behavior of the remainder term in the prime number formula $\psi(x)\thicksim x$ is well understood. On the other hand, in the generality of Beurling primes and zeta functions, investigations so far have focused mainly on three directions. First, better and better, minimal conditions were sought in order to have a Chebyshev type formula $x\ll \psi(x) \ll x$, see e.g. \cite{Vindas12, DZ-13-2, DZ-13-3}. Understandably, as in the classical case, this relation requires only an analysis of the Beurling $\zeta$ function in, and on the boundary of, the convergence halfplane. Second, conditions for the prime number theorem to hold were sought, see e.g. \cite{Beur, K-98, DebruyneVindas-PNT, DZ-17, Zhang15-IJM}. Again, this relies on the boundary behavior of $\zeta$ on the one-line $\sigma=1$. Third, rough (as compared to our knowledge in the prime number case) estimates and equivalences were worked out in the analysis of the connection between $\zeta$-zero distribution and the error term behavior of $\psi(x)$, see e.g. \cite{H-5}, \cite{H-20}. Further, examples were constructed of arithmetical semigroups with very ``regular'' (such as satisfying the Riemann hypothesis RH and error estimates $\psi(x)=x+O(x^{1/2+\varepsilon})$) and very ``irregular'' (such as having no better zero-free regions than \eqref{classicalzerofree} and no better asymptotic error estimates than \eqref{classicalerrorterm}) behavior and zero- or prime distribution, see e.g. \cite{DMV}, \cite{Zhang7}, \cite{H-15}, \cite{BrouckeDebruyneVindas}.
Here we must point out that the above citations are just examples, and are far from a complete description of the otherwise formidable literature\footnote{E.g. a natural, but somewhat different direction, going back to Beurling himself, is the study of analogous questions when the assumption of Axiom A is weakened to e.g. an asymptotic condition on $N(x)$ with a product of $x$ and a sum of powers of $\log x$, or a sum of powers of $\log x$ perturbed by almost periodic polynomials in $\log x$, or $N(x)-cx$ periodic, see \cite{Beur}, \cite{Zhang93}, \cite{H-12}, \cite{RevB}.}. For a thorough analysis of these directions, as well as for much more information, the reader may consult the monograph \cite{DZ-16}. With this work we start a systematic analysis of the connection between these two aspects of a Beurling generalized number system or arithmetical semigroup. As a starting point, here we work out a number of relatively standard initial estimates on the behavior of the Beurling zeta function $\zeta(s)$ and its zeros, assuming only the above mentioned Axiom A. Although these results bring no surprises, and have been known for more than a century for the classical Riemann zeta function, it seems that until now there has been no systematic presentation of them for the general Beurling case. Most probably some, if not all, can be found in one form or another in various papers, but for our long-range goal with this study it seemed rather inconvenient to try to reference, one by one, in different forms and various versions, all these technical auxiliary results. Moreover, we put an emphasis on obtaining explicit, relatively sharp constants, so that in any further research, in particular if explicit, numerical results should be needed, future researchers can draw on our efforts.
With the single assumption formulated in Axiom A, we arrive at a Riemann-von Mangoldt type formula, which, in principle, explains very clearly, even if not explicitly, the connection between the location of zeta-zeros and the oscillation of the prime number formula. From here, one can better explain the expectation that certain assumptions on one side correspond to respective properties on the other side. In particular, we can at least start to think about the question of Littlewood \cite{Littlewood}, placed into the general context of the Beurling theory: what amount of oscillation is ``caused by having a certain zero'' $\rho=\beta+i\gamma$ of $\zeta$, and how can this oscillation be demonstrated effectively, i.e. with explicit bounds approaching the expected order suggested by the Riemann-von Mangoldt formula? This question, to find some explicit bounds on $\psi(x)$ exploiting ``the effect of that one hypothetical zeta-zero'', was a notable problem first answered by Turán \cite{Turan1}. In many respects, the best (and essentially optimal) results in this direction are due to Pintz \cite{Pintz1}, \cite{PintzProcStekl}, but various ramifications, equivalences and variants were proved by a number of people including the author \cite{RevAA}. Here we refrain from any further discussion of these related number theory developments. The present paper is not a stand-alone paper. We intend to work out a detailed analysis of the connection between the distribution of $\zeta$-zeros and the oscillation of the (von Mangoldt type modified) prime counting function $\psi(x)$. In the continuation of this work \cite{Rev20} we present several definitely new results on the distribution of the zeros of the Beurling zeta function. The interested reader may have a look into \cite{Rev20} to see what our direct aims are in this regard.
Then, based on the analysis in these two papers dealing with the Beurling zeta function only, we intend to turn to the analysis of the oscillatory properties of $\psi(x)$ itself. A more detailed description of our ``number theory'' results is postponed to the introductions of these forthcoming papers, but let us emphasize that the calculus here, however cumbersome, is not pointless. In particular, the concluding result of this work is a Riemann-von Mangoldt type formula. The Riemann-von Mangoldt formula expresses the oscillation of the remainder term in the prime number theorem PNT (here for Beurling primes) by suitable partial sums of a generally divergent, yet for our purpose very intuitive, series over the zeros of $\zeta$. Although its proof is standard, known in the classical case since the XIXth century, it seems that authors on Beurling generalized number systems have discarded this formula, possibly because of the uncontrollable divergence properties of the series. Yet it is very intuitive if we try to read off the exact correspondence between the oscillation of the remainder term of the PNT on the one hand and the location and distribution of the zeros of the Beurling zeta function on the other hand. Although the results proved in the Beurling case have not reached the generality and precision suggested by these formulae, we can point out that in the classical case, and in fact in somewhat wider generality, rather precise correspondences were already proved \cite{Pintz1}, \cite{PintzProcStekl}, \cite{RevAA}. Starting with this formula thus seems to be the right first step toward working out similarly sharp correspondences in the generality of Beurling number systems. \section{Basic properties of the Beurling $\zeta$} \label{sec:basics} The following basic lemmas are slightly elaborated forms of 4.2.6. Proposition, 4.2.8. Proposition and 4.2.10. Corollary of \cite{Knopf}.
These form the most basic facts concerning the Beurling zeta function. Still, we give a proof for explicit handling of the arising constants in these estimates. \begin{lemma}\label{l:zetaxs} Denote the ``partial sums'' (partial Laplace transforms) of ${\mathcal N}|_{[1,X]}$ as $\zeta_X$ for arbitrary $X\geq 1$: \begin{equation}\label{zexdef} \zeta_X(s):=\int_1^X x^{-s} d{\mathcal N}(x). \end{equation} Then $\zeta_X(s)$ is an entire function and for $\sigma:=\Re s >\theta$ it admits \begin{equation}\label{zxrewritten} \zeta_X(s)= \begin{cases} \zeta(s)-\frac{\kappa X^{1-s}}{s-1}-\int_X^\infty x^{-s}d{\mathcal R}(x) & \textrm{for all} ~~~s\ne 1, \\ \frac{\kappa }{s-1}-\frac{\kappa X^{1-s}}{s-1}+\int_1^X x^{-s}d{\mathcal R}(x) & \textrm{for all} ~~~s\ne 1, \\ \kappa \log X + \int_1^X \frac{d{\mathcal R}(x)}{x} & \textrm{for}~~~ s=1, \end{cases} \end{equation} together with the estimate \begin{equation}\label{zxesti} \left|\zeta_X(s) \right| \leq \zeta_X(\sigma) \leq \begin{cases} \min \left( \frac{\kappa X^{1-\sigma}}{1-\sigma} + \frac{A}{\sigma-\theta},~ \kappa X^{1-\sigma}\log X + \frac{A}{\sigma-\theta}\right) &\textrm{if} \quad \theta<\sigma < 1, \\ \kappa \log X + \frac{A}{1-\theta}& \textrm{if} \qquad \sigma =1, \\ \min\left( \frac{\sigma (A+\kappa)}{\sigma-1},~{\kappa}\log X + \frac{\sigma A}{\sigma-\theta} \right) &\textrm{if}\quad \sigma>1. \end{cases} \end{equation} Moreover, the above remainder terms can be bounded as follows. \begin{equation}\label{zxrlarge} \left| \int_X^\infty x^{-s}d{\mathcal R}(x) \right| \leq A \frac{|s|+\sigma-\theta}{\sigma-\theta} X^{\theta-\sigma} \end{equation} and \begin{equation}\label{zxrlow} \left| \int_1^X x^{-s}d{\mathcal R}(x) \right| \leq A \left( |s|\frac{1-X^{\theta-\sigma}}{\sigma-\theta} + X^{\theta-\sigma}\right) \leq A \min \left( \frac{|s|}{\sigma-\theta},~|s| \log X + X^{\theta-\sigma} \right).
\end{equation} \end{lemma} \begin{proof} Clearly, both $\zeta_X$ and $\int_1^X x^{-s} d{\mathcal R}(x)$ are entire functions, $\zeta$ is regular for $\Re s > \theta$ except at $s=1$, $X^{1-s}/(s-1)$ is regular all over ${\mathbb C}\setminus \{1\}$, while $\int_X^\infty x^{-s} d{\mathcal R}(x)$ is regular for $\Re s >\theta$. Hence it suffices to prove the formulae \eqref{zxrewritten} for $\Re s>1$, and then refer to analytic continuation in extending them all over the common domain of regularity of both sides, i.e. $\{ \Re s > \theta \} \setminus \{1\}$. The formulae then follow as for $\Re s > 1$ the defining integrals are absolutely convergent, and to get the first form we only need to recall ${\mathcal N}(x)=\kappa (x-1) +{\mathcal R}(x)$ and write $$ \zeta_X(s)=\zeta(s)-\int_X^\infty x^{-s}d{\mathcal N}(x) = \zeta(s) - \kappa \int_X^\infty x^{-s} d(x-1) - \int_X^\infty x^{-s} d{\mathcal R}(x); $$ while to obtain the second and also the third one it suffices to write again ${\mathcal N}(x)=\kappa (x-1) +{\mathcal R}(x)$ and execute the integration directly for the main term. To prove the estimate \eqref{zxesti} we utilize that by definition ${\mathcal N}(x)$ is increasing, hence $d{\mathcal N}(x)$ is positive, and thus $|\zeta_X(s)|\leq \zeta_X(\sigma)$, which in turn can be calculated by partial integration and substituting \eqref{Athetacondi}.
Namely, for $\sigma\ne 1$ and $\sigma>\theta$ \begin{align*} \zeta_X(\sigma) &= \left[ x^{-\sigma} {\mathcal N}(x)\right]_1^X +\sigma\int_1^X x^{-\sigma-1} {\mathcal N}(x) dx \\ & \leq \frac{\kappa (X-1) + A X^{\theta}}{X^{\sigma}} + \sigma \int_1^X \frac{\kappa(x-1)}{x^{1+\sigma}}dx +\sigma \int_1^X \frac{A}{x^{\sigma-\theta+1}}dx \\ & = \kappa (X^{1-\sigma}-X^{-\sigma}) + A X^{\theta-\sigma} + \frac{\sigma \kappa}{1-\sigma} \left(X^{1-\sigma} -1 \right) + \kappa\left(X^{-\sigma} -1 \right) + \frac{\sigma A}{\sigma-\theta} \left( 1-X^{\theta-\sigma}\right) \\ & = \frac{\kappa}{1-\sigma} \left(X^{1-\sigma} -1 \right) + \frac{\sigma A}{\sigma-\theta} -\frac{\theta A}{\sigma-\theta} X^{\theta-\sigma} < \frac{\kappa}{1-\sigma} \left(X^{1-\sigma} -1 \right) + \frac{\sigma A}{\sigma-\theta} \\ & \leq \begin{cases} \min \left( \frac{\kappa X^{1-\sigma}}{1-\sigma} + \frac{A}{\sigma-\theta},~ \kappa X^{1-\sigma}\log X + \frac{A}{\sigma-\theta}\right) &\textrm{if} \quad \sigma<1 \\ \min\left( \frac{\sigma (A+\kappa)}{\sigma-1},~ {\kappa}\log X + \frac{\sigma A}{\sigma-\theta} \right) &\textrm{if}\quad \sigma>1 \end{cases}, \end{align*} with an application of $\frac{e^y-1}{y}= e^y\frac{1-e^{-y}}{y}\le e^y ~(y>0)$ in the very last estimations involving $\log X$. The case $\sigma=1$ follows by taking the limit of the second estimate in the first line of \eqref{zxesti} as $\sigma\to 1-$, as $\zeta_X(\sigma)$ is regular. It remains to estimate the error terms arising from ${\mathcal R}(x)$.
After partial integration $$ \int x^{-s} d{\mathcal R}(x) = \left[{\mathcal R}(x) x^{-s} \right] +s \int \frac{{\mathcal R}(x)}{x^{s+1}} dx $$ and computing the actual values of the integrated part and applying $|{\mathcal R}(x)|\leq Ax^\theta$ both there and in the integrals lead to \eqref{zxrlarge} and the first estimate in \eqref{zxrlow}, resp. The last estimate in \eqref{zxrlow} follows again by $\frac{1-e^{-y}}{y}\le 1 ~(y>0)$. \end{proof} \begin{lemma}\label{l:zkiss} We have \begin{equation}\label{zsgeneral} \left|\zeta(s)-\frac{\kappa}{s-1}\right|\leq \frac{A|s|}{\sigma-\theta} \qquad \qquad\qquad (\theta <\sigma ,~ t\in{\mathbb R},~~ s\ne 1). \end{equation} In particular, for large enough values of $t$ it holds that \begin{equation}\label{zsgenlarget} \left|\zeta(s) \right|\leq \sqrt{2} \frac{(A+\kappa)|t|}{\sigma-\theta} \qquad \qquad\qquad (\theta <\sigma \leq |t|), \end{equation} while for small values of $t$ we have \begin{equation}\label{zssmin1} |\zeta(s)(s-1)-\kappa|\leq \frac{A|s||s-1|}{\sigma-\theta} \leq \frac{100 A}{\sigma-\theta}\qquad (\theta <\sigma \leq 4,~|t|\leq 9). \end{equation} As a consequence, we also have \begin{equation}\label{polenozero} \zeta(s)\ne 0 \qquad \textrm{for}\qquad |s-1| \leq \frac{\kappa(1-\theta)}{A+\kappa}. \end{equation} \end{lemma} \begin{proof} Actually, by ${\mathcal N}(x)=\kappa (x-1) +{\mathcal R}(x)$ and the trivial integral $\int_1^{\infty} x^{-s}dx=1/(s-1)$, to get \eqref{zsgeneral} it remains only to see that $|\int_1^{\infty} x^{-s}d{\mathcal R}(x)| \leq A|s|/(\sigma-\theta)$. That is straightforward (and is contained, after $X\to\infty$, in \eqref{zxrlow}, too). Whence we have \eqref{zsgeneral}.
To deduce \eqref{zsgenlarget} one has to take into account that in the given domain we have $t^2+\sigma^2 \leq 2 t^2$, hence $|s|\leq \sqrt{2}|t|$, and that $t^2 |s-1|^2 /(\sigma-\theta)^2 = t^2(t^2+(\sigma-1)^2)/(\sigma-\theta)^2 > \sigma^2(\sigma^2+(\sigma-1)^2)/\sigma^2 = \sigma^2+(\sigma-1)^2 \ge 1/2$, hence $1/|s-1| \le \sqrt{2} |t|/(\sigma-\theta)$, too. To obtain \eqref{zssmin1} from \eqref{zsgeneral} it suffices to estimate the arising factor $|s||s-1|$ trivially. Further, from \eqref{zssmin1} it follows that to have $\zeta(s) (s-1) = \kappa + (s-1) \int_1^{\infty} x^{-s}d{\mathcal R}(x)=\kappa + \left(\zeta(s)(s-1)-\kappa\right)\ne 0$, it suffices that $ \kappa > |s(s-1)| \frac{A}{\sigma-\theta}$, that is $\frac{\kappa(\sigma-\theta)}{A|s|} > |s-1|$. Note that $\zeta(s) \ne 0$ for $\sigma>1$, whence we can restrict considerations to the critical strip $\theta<\sigma\le 1$. Let now $0<r<1-\theta$ be a parameter, to be specified later, and assume that $|s-1|\le r$. Then we are to show $f(s):=\frac{\kappa(\sigma-\theta)}{A|s|}>r$ for $s=\sigma+it \in \{s~:~ \theta<\sigma\le 1, |s-1|\le r\}=:G$. It is clear that for any fixed value of $\sigma=\sigma_0$, $f(\sigma_0+it)$ becomes minimal at the endpoints $\sigma_0\pm it_0$, where $t_0=\sqrt{r^2-(1-\sigma_0)^2}$. Further, the value of $f$ is symmetric with respect to the real axis, so in fact it suffices to take into account the upper endpoint here. So consider the function $f$ restricted to these points on the quarter circle bounding $G$ from the upper left side, i.e. $f(1+r\cos \varphi+ir\sin\varphi)$, as $\pi/2\le\varphi\le \pi$.
We have $f(1+r\cos\varphi+ir\sin\varphi) =\frac{\kappa}{A} \frac{1+r\cos\varphi -\theta}{\sqrt{(1+r\cos\varphi)^2+(r\sin\varphi)^2}}=\frac{\kappa}{A} \sqrt{\frac{(1+r\cos\varphi -\theta)^2}{(1+r\cos\varphi)^2+(r\sin\varphi)^2}}=\frac{\kappa}{A} \sqrt{\frac{(1-\theta-r\lambda)^2}{1-2r\lambda+r^2}}$, where we have put $\lambda:=-\cos\varphi \in [0,1]$. Whence it suffices to find the minimum of the function $g(\lambda):=\frac{(1-\theta-r\lambda)^2}{1-2r\lambda+r^2}$. A little calculus shows that $g'(\lambda)<0$, the function is strictly decreasing, whence its minimum is attained at the point $\lambda=1$. Winding up, this means that $f$ is minimal at the point $s=1-r$. It remains to derive $f(1-r)= \frac{\kappa}{A} \frac{1-\theta-r}{1-r} >r$ for a sufficiently small choice of the parameter $r$. This is equivalent to $\frac{\kappa(1-\theta)}{A} > \frac{\kappa+A}{A}r-r^2$, which surely holds whenever $\frac{\kappa(1-\theta)}{A} \ge \frac{\kappa+A}{A}r$, i.e. whenever $r\le \frac{\kappa(1-\theta)}{\kappa+A}$, as needed. \end{proof} \begin{lemma}\label{l:oneperzeta} We have \begin{equation}\label{zsintheright} |\zeta(s)| \leq \frac{(A+\kappa)\sigma}{\sigma-1} \qquad (\sigma >1), \end{equation} and also \begin{equation}\label{reciprok} |\zeta(s)| \geq \frac{1}{\zeta(\sigma)} > \frac{\sigma-1}{(A+\kappa)\sigma} \qquad (\sigma >1). \end{equation} \end{lemma} \begin{proof} The first estimate follows from \eqref{zxesti}, last line, after $X\to\infty$. To obtain the second, consider the Dirichlet series of $1/\zeta$: as $|\mu(g)|\leq 1$, the coefficientwise estimation of $1/\zeta$ provides the estimate $|1/\zeta(s)|\leq \zeta(\sigma)$, and the rest is already contained in the first inequality.
\end{proof} \begin{lemma}\label{l:zetainrighthp} In the right halfplane of convergence we have \begin{equation}\label{zsintherightlarget} |\zeta(s)| \leq \frac{\kappa}{\sigma-\theta}\log|t| + \frac73 \frac{\sigma(A+\kappa)}{\sigma-\theta} \qquad (\sigma >1,~|t|\geq 4). \end{equation} \end{lemma} \begin{proof} By the first formula in \eqref{zxrewritten} we can write \begin{equation}\label{zetazetax} \zeta(s)=\zeta_X(s)+\frac{\kappa X^{1-s}}{s-1}+\int_X^{\infty}\frac{d{\mathcal R}(x)}{x^s}. \end{equation} From here by termwise estimation, using the second part of the third line in \eqref{zxesti} in combination with \eqref{zxrlarge}, we arrive at $$ |\zeta(s)|\leq {\kappa}\log X + \frac{\sigma A}{\sigma-\theta} + \frac{\kappa}{|t|} X^{1-\sigma} + \frac{A(|s|+\sigma)}{\sigma-\theta} X^{\theta-\sigma}. $$ Since $|t|\geq 4$ and $1<\sigma$, we have $|s|/(|t|\sigma)=\sqrt{|t|^{-2}+\sigma^{-2}} \leq (33/32)$, and $1/|s-1|\leq 1/4 \leq \sigma/(\sigma-\theta)$, so for $X:=|t|^{1/(\sigma-\theta)}\geq 1$ we also have $$ |\zeta(s)|\leq \frac{\kappa}{\sigma-\theta}\log |t| + \frac{\sigma A}{\sigma-\theta} + \frac{\kappa}{4} + \frac{A\left(\frac{33}{32}|t|\sigma+\sigma\frac{|t|}{4}\right)}{\sigma-\theta} \frac{1}{|t|} < \frac{\kappa}{\sigma-\theta}\log |t| + \frac{7\sigma(\kappa+ A)}{3(\sigma-\theta)}. $$ \end{proof} For smaller values of $t$, but still separated from zero, \eqref{zsintherightlarget} can be complemented by a similar estimate not containing any term with $\log |t|$; but for $t$ very close to $0$ one needs to take into account the singularity of $\zeta$ at $1$, so that the best we can have is an estimate of the form \eqref{zsgeneral}. Here is an explicit formulation of these, actually valid for all $\sigma>\theta$. \begin{lemma}\label{r:zestismallt} Let $\tau \ge 1$ be any parameter.
Then we have \begin{equation}\label{zestitnearzero} \left| \zeta(s) - \dfrac{\kappa}{s-1} \right|\le (\tau+1) \dfrac{A\max(1,\sigma)}{\sigma-\theta} \qquad (\sigma>\theta, |t|\le \tau), \end{equation} and \begin{equation}\label{zestitsmall} |\zeta(s)| \le \dfrac{(\tau+1)(A+\kappa) \max(1,\sigma) }{\sigma-\theta} \qquad (\sigma>\theta, ~ \frac{1}{\tau+1} \le |t|\le \tau). \end{equation} Furthermore, we also have the estimate \begin{equation}\label{zestitnotlarge} |\zeta(s)|\le (\tau+1)(A+\kappa) \max(1,\sigma) \max\left(\frac{1}{\sigma-\theta}, \frac{1}{|1-\sigma|}\right) \qquad (\sigma>\theta, |t|\le \tau). \end{equation} \end{lemma} \begin{proof} The estimate \eqref{zestitnearzero} follows directly from \eqref{zsgeneral} and $|s|\le \sigma+\tau \le (\tau+1)\max(1,\sigma)$. To see \eqref{zestitsmall} we need only note $1/|s-1| \le 1/|t| \le \tau+1 \le (\tau+1)\sigma/(\sigma-\theta)$ and take into account \eqref{zestitnearzero}, already proven. The assertion \eqref{zestitnotlarge} is a direct consequence of \eqref{zestitnearzero} if we take into account the trivial estimate $\kappa/|s-1|\le \kappa/|\sigma-1|$. \end{proof} \begin{lemma}\label{l:zetaincriticalstrip} In the critical strip we have \begin{equation}\label{zsintheleft} |\zeta(s)| \leq 2~ (A+\kappa) \max \left( \frac{1}{\sigma-\theta}, \frac{1}{1-\sigma}\right)\cdot |t|^{\frac{1-\sigma}{1-\theta}} \qquad (\theta <\sigma <1,~ |t| \geq 4). \end{equation} Moreover, we also have \begin{equation}\label{zslogos} |\zeta(s)| \leq 2 \frac{A+\kappa}{\sigma-\theta} |t|^{\frac{1-\sigma}{1-\theta}} \cdot \log |t| \qquad (\theta <\sigma <1,~ |t| \geq e^{5/4}), \end{equation} and \begin{equation}\label{zslogoskist} |\zeta(s)| \leq \frac52 \frac{A+\kappa}{\sigma-\theta} |t|^{\frac{1-\sigma}{1-\theta}} \qquad (\theta <\sigma <1,~ 1 \leq |t| \leq e^{5/4}).
\end{equation} \end{lemma} Note that by \eqref{zestitnotlarge} we have $|\zeta(s)| \le 2(A+\kappa) \max \left( \frac{1}{\sigma-\theta}, \frac{1}{|1-\sigma|}\right)$ for $\theta<\sigma<1, ~|t| \le 1$. \begin{proof} Again we start with \eqref{zetazetax}, coming from the first formula in \eqref{zxrewritten}. Since now $|t|\geq 4$ and $\theta<\sigma<1$, we have $|s|\leq (5/4)|t|$, $|s|+\sigma-\theta < \frac32|t|$, and $1/|s-1|\leq 1/4$, so the estimates of the first part of the first line of \eqref{zxesti} and \eqref{zxrlarge} yield, whenever $X\geq 1$, that $$ |\zeta(s)|\leq \frac{5}{4} \frac{\kappa X^{1-\sigma}}{1-\sigma} + \frac{A}{\sigma-\theta} \left(1+\frac32 |t| X^{\theta-\sigma} \right). $$ First consider the case $\theta<\sigma<\frac{1+\theta}{2}$. In this case we have $1-\sigma>\frac12(1-\theta)$, whence in view of $|t|\ge 4$ also $|t|^{\frac{1-\sigma}{1-\theta}}\geq 4^{1/2}$, allowing to estimate 1 by $\frac12 |t|^{\frac{1-\sigma}{1-\theta}}$. If on the other hand $\frac{1+\theta}{2}\le \sigma <1$, then we have $\frac{1}{\sigma-\theta}\le\frac12 \frac{1}{1-\sigma}|t|^{\frac{1-\sigma}{1-\theta}}$, because $\frac{\sigma-\theta}{1-\sigma}|t|^{\frac{1-\sigma}{1-\theta}}\ge \frac{\sigma-\theta}{1-\sigma}4^{\frac{1-\sigma}{1-\theta}}= \left(\frac{1-\theta}{1-\sigma}-1\right)4^{\frac{1-\sigma}{1-\theta}}$ and the function $\varphi(u):=(1/u-1)4^u$ is decreasing in $(0,1/2)$, giving $\varphi\left(\frac{1-\sigma}{1-\theta}\right)\ge \varphi(1/2)=4^{1/2}=2$. In sum, we obtain in both cases $$ |\zeta(s)|\leq \left(\frac{5}{4} \kappa X^{1-\sigma} + 2A |t| X^{\theta-\sigma} \right) \max\left(\frac{1}{1-\sigma}, \frac{1}{\sigma-\theta}\right) . $$ So, choosing $X:=|t|^{1/(1-\theta)}\geq 1$ implies the first estimate in \eqref{zsintheleft}.
To get also the estimates \eqref{zslogos} and \eqref{zslogoskist}, we use the second part of the first line of \eqref{zxesti} together with \eqref{zxrlarge}. Similarly as above, we again choose $X:=|t|^{1/(1-\theta)}\geq 1$ and obtain \begin{align*} |\zeta(s)|& \leq \left(\kappa+\kappa X^{1-\sigma} \log X + \frac{A}{\sigma-\theta}\right) + \frac{\kappa}{4}X^{1-\sigma} + \frac32\frac{A}{\sigma-\theta}|t|X^{\theta-\sigma} \\ & \leq \frac54\kappa |t|^{\frac{1-\sigma}{1-\theta}} + \frac{\kappa}{1-\theta} |t|^{\frac{1-\sigma}{1-\theta}}\log|t|+ \frac52 \frac{A}{\sigma-\theta} |t|^{\frac{1-\sigma}{1-\theta}} \le \left(\kappa \log|t| + \frac54 \kappa +\frac52 A \right) \frac{|t|^{\frac{1-\sigma}{1-\theta}}}{\sigma-\theta}. \end{align*} From here, in case $|t|\ge e^{5/4}$ we find $5/4 \le \log|t|$, whence $\frac54 \kappa\le \kappa \log|t|$ and $\frac52 A \le 2A\log|t|$, resulting in \eqref{zslogos}. In case $1\le |t|\le e^{5/4}$, using $0\le \log|t|\le 5/4$ directly gives \eqref{zslogoskist}. \end{proof} \section{Estimates for the number of zeros of $\zeta$}\label{sec:zeros} In this section we work out explicit estimates for the number of $\zeta$-zeros in various regions. The underlying principle is the well-known fact that if the zeta function has finite order, then Jensen's inequality always leads to a local bound $\ll \log |t|$ as below. \begin{lemma}\label{l:Jensen} Let $\theta<b<1$ and consider the rectangle $H:=[b,1]\times [i(T-h),i(T+h)]$, where $h:=\frac{\sqrt{7}}{3} \sqrt{(b-\theta)(1-\theta)}$ and $|T| \ge e^{5/4}+\sqrt{3}\approx 5.222...$ is arbitrary.
Then the number $n(H)$ of zeta-zeros in the rectangle $H$ satisfies \begin{align}\label{zerosinH} n(H) & \leq \frac{1-\theta}{b-\theta}\left(0.654 \log|T| + \log\log |T| + 6\log(A+\kappa) + 6\log\frac1{1-\theta} +12.5 \right) \notag \\ &\leq \frac{1-\theta}{b-\theta}\left(\log|T| + 6\log(A+\kappa) + 6\log\frac1{1-\theta} +12.5 \right). \end{align} Moreover, if $|T|\le 5.222$, then we have analogously the $\log|T|$-free estimate \begin{equation}\label{zerosinH-smallt} n(H) \leq \frac{1-\theta}{b-\theta}\left(6\log(A+\kappa) + 6\log\frac1{1-\theta}+14\right). \end{equation} \end{lemma} \begin{remark}\label{r:alsoindisk} Note that our estimate includes also the total number $N$ of zeros in the disc $\mathcal{D}_r:=\{s~:~|s-(p+iT)|\leq r:=p-q\}$, where $p:=1+(1-\theta)$ and $q:=\theta+\frac23(b-\theta)$ are parameters introduced in the proof. \end{remark} \begin{proof} Given $b$, let us introduce the parameters $a,p$ and $q$ satisfying $\theta<a<q<b<1<p\leq 2$. We will choose the concrete values of these parameters a few lines below. Let us draw the circles ${\mathcal C}_R$ of radius $R:=p-a$ and ${\mathcal C}_r$ of radius $r:=p-q$ around $z_0:=p+iT$. Then the disk ${\mathcal D}_r$ bounded by ${\mathcal C}_r$ covers the rectangle $H_1:=[b,1]\times [T-ih_1,T+ih_1]$ with $h_1:=\sqrt{(b-q)(2p-b-q)}$. Actually, in the following we estimate the total number $N$ of roots in ${\mathcal D}_r$: since $H_1\subset {\mathcal D}_r$, it follows that also the number $n_1$ of roots situated in $H_1$ satisfies this estimate. The proof will then be concluded by showing that our choice $p:=1+(1-\theta)$ and $q:=\theta+\frac23(b-\theta)$ of the parameters implies $h\le h_1$, i.e. $H\subset H_1$, so that $n(H) \le n_1\le N$, too. The parameter $a$ will be chosen arbitrarily close to $\theta$, i.e. $a\to\theta+$. We use Jensen's inequality (see \cite[p.
43]{Hol}) in the form that for $f$ regular in $D(z_0,R)$ and for any $r<R$ $$ \log|f(z_0)| + \nu \log\frac Rr \leq \frac{1}{2\pi} \int_{-\pi}^{\pi} \log|f(z_0+R e^{i\varphi})|d\varphi, $$ where $\nu$ is the number of zeros of $f$ in the disk $D(z_0,r)$. To apply this, we now put $z_0:=p+iT$, $R:=p-a(<2-\theta \leq 2)$, $r:=p-q$, and $f:=\zeta$. Then $\zeta$ is indeed regular in the disc $D(z_0,R)$, and Jensen's inequality yields $$ N \log \frac{R}{r} \leq \frac{1}{2\pi} \int_{-\pi}^{\pi} \log|\zeta(z_0+R e^{i\varphi})|d\varphi - \log|\zeta(p+iT)|, $$ with $N$ denoting the number of $\zeta$-zeros in $D(z_0,r)={\mathcal D}_r$. According to Lemma \ref{l:oneperzeta}, formula \eqref{reciprok}, we have $\log|\zeta(p+iT)| \geq \log\frac{p-1}{(A+\kappa)p}$, thus using also $\log R/r = -\log r/R =-\log\left(1-\frac{R-r}{R}\right)> \frac{R-r}{R}=\frac{q-a}{p-a}$ we are led to \begin{equation}\label{Ninthedisk} N \frac{q-a}{p-a} \leq \frac{1}{2\pi} \int_{-\pi}^{\pi} \log|\zeta(z_0+R e^{i\varphi})|d\varphi + \log \frac{(A+\kappa)p}{p-1}. \end{equation} It remains to estimate the integral. We will cut the integral into two parts according to whether $s=\sigma+it=z_0+Re^{i\varphi}$ belongs to the right halfplane of convergence or to the critical strip. The first case occurs when $\sigma=p+R\cos\varphi>1$ and the second when $\sigma<1$, so that the first case happens exactly for $|\varphi|<\alpha:=\arccos\left(-\frac{p-1}{R}\right)=\arccos\left(-\frac{1-\theta}{R}\right)$, and the second when $\alpha<|\varphi|<\pi$ (the one point equality cases being irrelevant for the evaluation of the integrals in question).
\underline{Part 1 ($\sigma>1$).} Then we have according to the uniform estimate \eqref{zsintheright} of Lemma \ref{l:oneperzeta} that $$ \log|\zeta(z_0+Re^{i\varphi})| \le \log(A+\kappa)+\log\frac{\sigma}{\sigma-1} \le \log(A+\kappa)+(\sigma-1) +\log\frac{1}{\sigma-1}. $$ Thus for the relevant part of the integral we obtain \begin{align*} \int_{-\alpha}^{\alpha} \log|\zeta(z_0+Re^{i\varphi})|d\varphi &\le 2\int_0^{\alpha} \left(\log(A+\kappa)+(p+R\cos\varphi-1) + \log\frac{1}{R} + \log\frac{R}{p-1+R\cos\varphi} \right)d\varphi \\&= 2\alpha\left(\log(A+\kappa)+(1-\theta)+ 2R\sin\alpha + \log\frac{1}{R} \right) + 2\int_0^{\alpha} \log\frac{1}{\frac{p-1}{R}+\cos\varphi}d\varphi. \end{align*} So writing here $w:=\frac{p-1}{R}=\frac{1-\theta}{R}>1/2$ (the value of which is to converge to $1/2+0$ finally), and recalling that $\alpha=\arccos(-w)$, we now estimate the last integral as follows. \begin{align*} \int_0^{\alpha} \log\frac{1}{w+\cos\varphi}d\varphi& =\int_{-w}^1 \log\frac{1}{w+u}\frac{du}{\sqrt{1-u^2}} \\& = \left[(\arcsin u- \arcsin(-w))\log\frac{1}{w+u} \right]_{-w}^1 - \int_{-w}^1 (\arcsin u- \arcsin(-w)) \frac{-1}{w+u} du \\& = \left(\frac{\pi}{2}-(\frac{\pi}{2}-\alpha)\right) \log\frac{1}{w+1}-0 +\int_{-w}^1 \frac{\arcsin u- \arcsin(-w)}{u-(-w)} du. \end{align*} When $u$ increases from $-w\approx-1/2$ to $0$, the difference quotient $\frac{\arcsin u- \arcsin(-w)}{u-(-w)}$ under the integral sign first decreases (as $\arcsin$ is concave on $[-w,0]$), and then it increases again (by the convexity of $\arcsin$ on $[0,1]$), so that the maximum of the derivative $\arcsin'(-w)$ at $-w$ and of the slope of the chord between the points $(-w,\arcsin(-w))$ and $(1,\pi/2)$ is a valid upper bound for the integrand.
That is, $$ \frac{\arcsin u- \arcsin(-w)}{u-(-w)} \le \max \left(\frac{1}{\sqrt{1-w^2}}, \frac{\pi/2+\arcsin w}{1+w}\right)=\max \left(\frac{1}{\sqrt{1-w^2}}, \frac{\alpha}{1+w}\right). $$ Recalling that the value of $w$ is to be close to $1/2$, and thus $\alpha\approx 2\pi/3$, we get that the last maximum is the second expression. As a result, $$ \int_0^{\alpha} \log\frac{1}{w+\cos\varphi}d\varphi \le \alpha \log\frac{1}{w+1}+\alpha. $$ Therefore, \begin{align}\label{logzetaintegralinhalfplane} \int_{-\alpha}^{\alpha} \log|& \zeta(z_0+Re^{i\varphi})|d\varphi \le 2\alpha\left(\log(A+\kappa)+(2-\theta)+ 2R\sin\alpha + \log\frac{1}{R(w+1)}\right). \end{align} Note that up to this point we did not use any assumption on the value of $T$; it could be arbitrary. \underline{Part 2 ($\theta<\sigma<1$).} Along the part ${\mathcal C}m$ of the circle ${\mathcal C}_R$ which belongs to the critical strip, the minimal value of $t=\Im s=\Im (z_0+Re^{i\varphi})$ is $T-R\sin\alpha=T-R\sqrt{1-(\frac{p-1}{R})^2}=T-\sqrt{R^2-(p-1)^2}>T-\sqrt{(p-\theta)^2-(p-1)^2} =T-\sqrt{3}(1-\theta)$, and the maximal value of $t=\Im s$ is $T+\sqrt{3}(1-\theta)$. \underline{Case 2.1: $|T|\ge e^{5/4}+\sqrt{3}$.} So, for $T\ge e^{5/4} +\sqrt{3}$ the whole curve ${\mathcal C}m$ lies in the range where $t\ge e^{5/4}$, and the estimates of Lemma \ref{l:zetaincriticalstrip}, in particular \eqref{zslogos}, apply. We can therefore write \begin{align*} \int_{|\varphi| \ge \alpha} \log|& \zeta(z_0+Re^{i\varphi})|d\varphi \le \int_{|\varphi| \ge \alpha} \log\left(2 \frac{A+\kappa}{\sigma-\theta} |t|^{\frac{1-\sigma}{1-\theta}} \cdot \log |t|\right) d\varphi \\ &\le 2(\pi-\alpha)\log\left(2(A+\kappa)\right)+\int_{|\varphi| \ge \alpha} \left( -\log(\sigma-\theta) + \frac{1-\sigma}{1-\theta} \log t + \log\log t \right) d\varphi.
\end{align*} Recalling $s=\sigma+it=p+iT+R\cos\varphi+iR\sin\varphi$, $R<2(1-\theta)$ and $\alpha>2\pi/3$ leads to \begin{align}\label{intlogsigmamtheta} \int_{|\varphi| \ge \alpha} \log(\sigma-\theta) d\varphi & = 2 \int_{\alpha}^{\pi} \log\left(p-\theta+R\cos\varphi\right)d\varphi=2 (\pi-\alpha)\log R + 2\int_{\alpha}^{\pi}\log\left(\frac{2(1-\theta)}{R}+\cos\varphi\right) d\varphi \notag \\& > 2 (\pi-\alpha)\log R + 2\int_{2\pi/3}^{\pi}\log\left(1+\cos\varphi\right) d\varphi \\& = 2 (\pi-\alpha)\log R + \frac{2\pi}{3}\log 2 +8 \int_0^{\pi/6} \log(\sin u)du \approx 2 (\pi-\alpha)\log R -5.510912076,\notag \end{align} at the end calculating numerically the value of the last definite integral. Further, \begin{align*} & \int_{|\varphi| \ge \alpha} \left(\frac{1-\sigma}{1-\theta} \log t + \log\log t \right) d\varphi \\& = \int_{\alpha}^{\pi} \left(\frac{1-p-R\cos\varphi}{1-\theta} \log (T-R\sin\varphi)(T+R\sin\varphi) + \log\log (T-R\sin\varphi)+ \log\log (T+R\sin\varphi)\right) d\varphi \\& \le \int_{\alpha}^{\pi} \left(\frac{1-p-R\cos\varphi}{1-\theta} \log (T^2-R^2\sin^2\varphi) + 2\log(\frac12 \log (T^2-R^2\sin^2\varphi))\right) d\varphi, \end{align*} using with $x:=\log(T-R\sin\varphi)$ and $y:=\log(T+R\sin\varphi)$ that $\log x+\log y \le 2\log\frac{x+y}{2}$, in view of the concavity of the $\log$ function.
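The numerical constant in \eqref{intlogsigmamtheta} can be double-checked by machine. The following is a quick numerical sanity check (not part of the proof); the composite midpoint rule is used because it avoids evaluating the integrable logarithmic singularity of $\log(\sin u)$ at $u=0$:

```python
import math

def midpoint(f, a, b, n=200_000):
    # composite midpoint rule; never evaluates f at the endpoints
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

I = midpoint(lambda u: math.log(math.sin(u)), 0.0, math.pi / 6)
constant = (2 * math.pi / 3) * math.log(2) + 8 * I
print(constant)  # about -5.511
```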
Thus estimating $(T^2-R^2\sin^2\varphi)$ simply by $T^2$ we are led to \begin{align*} \int_{|\varphi| \ge \alpha} & \left(\frac{1-\sigma}{1-\theta} \log t + \log\log t \right) d\varphi \le 2 \log T \int_0^{\pi-\alpha} \left(-1+\frac{R}{1-\theta}\cos\varphi \right) d\varphi + 2(\pi-\alpha) \log\log T \\&= 2\log T \left(-(\pi-\alpha)+\frac{R}{1-\theta} \sin(\pi-\alpha)\right)+2(\pi-\alpha) \log\log T. \end{align*} Adding the obtained estimates furnishes \begin{align*} \int_{|\varphi| \ge \alpha} \log| \zeta(z_0+Re^{i\varphi})|d\varphi & \le 2(\pi-\alpha)\log\left(2(A+\kappa)\right) - 2 (\pi-\alpha)\log R + 5.511 \\& + 2\log T \left(-(\pi-\alpha)+\frac{R}{1-\theta} \sin \alpha\right)+2(\pi-\alpha) \log\log T. \end{align*} Combining this estimate with \eqref{logzetaintegralinhalfplane} yields \begin{align*} \frac1{2\pi} \int_{-\pi}^{\pi} \log| \zeta(z_0+Re^{i\varphi})|d\varphi & \le \frac{\alpha}{\pi} \left(\log(A+\kappa)+(2-\theta)+ 2R\sin\alpha + \log\frac{1}{R} +\log\frac{1}{w+1}\right) \\&+\left(1-\frac{\alpha}{\pi}\right)\log\left(2(A+\kappa)\right) - \left(1-\frac{\alpha}{\pi}\right) \log R +1.755 \\& + \frac{1}{\pi}\log T \left(-(\pi-\alpha)+\frac{R}{1-\theta} \sin \alpha\right)+\left(1-\frac{\alpha}{\pi}\right) \log\log T \\&=\log(A+\kappa) + \left(1-\frac{\alpha}{\pi}\right)\log2 +\frac{\alpha}{\pi}(2-\theta)+2R\frac{\alpha\sin\alpha}{\pi} + \frac{\alpha}{\pi}\log\frac{1}{w+1} +1.755 \\& + \log\frac{1}{R} + \left(\frac{R}{1-\theta} \frac{\sin \alpha}{\pi}-\left(1-\frac{\alpha}{\pi}\right) \right) \log T +\left(1-\frac{\alpha}{\pi}\right) \log\log T =:J(a). \end{align*} Note that with fixed $\theta$ and the given
choice of parameters $p,q$, all other parameters $\alpha, w$ depend only on $a$. Now we infer from this and \eqref{Ninthedisk} the estimate $$ N\le \frac{p-a}{q-a} \left\{J(a)+ \log \frac{(A+\kappa)p}{p-1} \right\}. $$ The left hand side is a fixed integer (the number of zeta-zeroes in ${\mathcal D}_r$), while the right hand side estimate is valid for all $a>\theta$. Allowing now $a\to\theta+$, and using that then $R\to 2(1-\theta)$, $w=\frac{p-1}{R}\to 1/2$, $\alpha=\arccos(-w)\to 2\pi/3$, we obtain $$ N\le \frac{p-\theta}{q-\theta} \left\{J(\theta)+ \log \frac{(A+\kappa)p}{p-1} \right\}, $$ where \begin{align*} J(\theta)&=\log(A+\kappa) + \frac{2}{3}(2-\theta)+2(1-\theta)\frac{\sqrt{3}}{3}+ \left(\frac{1}{2}+\frac{\sqrt{3}}{2\pi}\right)\log\frac{2}{3} +1.755 \\& -\frac{2}{3}\log2 + \log\frac{1}{1-\theta} + \left(\frac{\sqrt{3}}{\pi}-\frac{1}{3} \right) \log T +\frac{1}{3} \log\log T \end{align*} so that \begin{align*} N & \le \frac{3(1-\theta)}{b-\theta} \left\{J(\theta) + \log(A+\kappa)+\log \frac{p}{p-1}\right\} \\&\le \frac{3(1-\theta)}{b-\theta} \bigg\{ 2 \log(A+\kappa) + 2 \log\frac{1}{1-\theta} + \left(\frac{\sqrt{3}}{\pi}-\frac{1}{3} \right) \log T +\frac{1}{3} \log\log T +C(\theta)\bigg\} \end{align*} with $$ C(\theta):=\log(1+(1-\theta)) + \frac{2}{3}(2-\theta)+2(1-\theta)\frac{\sqrt{3}}{3}+ \left(\frac{1}{2}+\frac{\sqrt{3}}{2\pi}\right)\log\frac{2}{3} +1.755 -\frac{2}{3}\log2 < 4.165 . $$ The estimate \eqref{zerosinH} follows.
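As a sanity check on the closing constants: $C(\theta)$ is decreasing in $\theta$, so its supremum over $\theta\in(0,1)$ is attained in the limit $\theta\to 0+$, while the coefficient of $\log T$ after multiplying by $3$ is $\frac{3\sqrt3}{\pi}-1\approx 0.654$, as used in \eqref{zerosinH}. A quick numerical check (not part of the proof):

```python
import math

def C(theta):
    # C(theta) as defined in the proof above
    return (math.log(1 + (1 - theta)) + (2 / 3) * (2 - theta)
            + 2 * (1 - theta) * math.sqrt(3) / 3
            + (0.5 + math.sqrt(3) / (2 * math.pi)) * math.log(2 / 3)
            + 1.755 - (2 / 3) * math.log(2))

sup_C = C(0.0)                                  # supremum as theta -> 0+
coeff = 3 * (math.sqrt(3) / math.pi - 1 / 3)    # coefficient of log T
print(sup_C, coeff)
```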
\underline{Case 2.2: $2 \le T \le e^{5/4}+\sqrt{3}$.} In this case, points on ${\mathcal C}m$ satisfy $s=\sigma+it,~|t|\le e^{5/4}+2\sqrt{3} \approx 6.9544 <7=:\tau$ and also $|t|\ge 2-\sqrt{3}\theta - \sqrt{3}(1-\theta)= 2-\sqrt{3} >0.125=1/8=1/(\tau+1)$, whence all over ${\mathcal C}m$ we can use the estimate of \eqref{zestitsmall}, providing for the respective integral \begin{align}\label{intoflogzetainstrip} \int_{|\varphi| \ge \alpha} \log| \zeta(z_0+Re^{i\varphi})|d\varphi & \le 2(\pi-\alpha)\log\left(8(A+\kappa)\right) + \int_{|\varphi| \ge \alpha} \log\frac{1}{p+R\cos\varphi-\theta} d\varphi \notag \\& < 2(\pi-\alpha)\log\left(8(A+\kappa)\right) - 2 (\pi-\alpha)\log R + 5.511, \end{align} referring to the already accomplished calculation in \eqref{intlogsigmamtheta}. Adding the estimate \eqref{logzetaintegralinhalfplane} furnishes \begin{align*} \frac{1}{2\pi}\int_{-\pi}^{\pi} \log|\zeta(z_0+Re^{i\varphi})|d\varphi & \le \log(A+\kappa) + \left(1-\frac{\alpha}{\pi}\right) \log 8 + \frac{\alpha(2-\theta)}{\pi} \\&\quad + \frac{2 R \alpha \sin\alpha}{\pi} - \log R + \frac{\alpha}{\pi} \log\frac{1}{(w+1)}+1.755=:J^*(a). \end{align*} As a result, in this case we find $N\le \frac{p-a}{q-a} \left(J^*(a) + \log(A+\kappa)+\log \frac{p}{p-1}\right)$, and so after passing to the limit $a \searrow \theta$ -- entailing $R\to 2(1-\theta)$, $w\to 1/2$, $\alpha\to 2\pi/3$ -- we get \begin{align}\label{middlecase} N & \le \frac{3(1-\theta)}{b-\theta} \left\{J^*(\theta) + \log(A+\kappa)+\log \frac{p}{p-1}\right\} \notag \\ &\le \frac{3(1-\theta)}{b-\theta} \bigg\{ 2 \log(A+\kappa) + 2 \log\frac{1}{1-\theta} + C^*(\theta)\bigg\} \end{align} with \begin{align*} C^*(\theta) & = \log(2-\theta) + \frac13\log 8 + \frac23(2-\theta)+2(1-\theta)\frac{\sqrt{3}}{3}-\log 2 - \frac23 \log\frac32
+1.755 \\ &\le \log 2 +\frac43 + \frac{2\sqrt{3}}{3} - \frac23 \log\frac32 +1.755<4.666<\frac{14}{3}. \end{align*} \underline{Case 2.3: $0\le T \le 2$.} If all the points of ${\mathcal C}m$ also satisfy $|t|\ge 1/8$, then all the estimates leading to \eqref{middlecase} with parameter value $\tau=7$ in the above case apply, because in this case we also have for $s=\sigma+it \in{\mathcal C}m$ that $|t|\le 2+2\,{\rm diam\,} {\mathcal C}m \le 2+2\sqrt{3}<5.5<7$. Denote now $S:=\{s=\sigma+it~:~ \sigma < \frac{1+\theta}{2} \}$ and $S^*:=\{s=\sigma+it~:~ \frac{1+\theta}{2} \le \sigma \le 1\}$. In fact, for points in $S \cap {\mathcal C}m$ \eqref{zestitnotlarge} provides the same estimates as \eqref{zestitsmall} furnished in \eqref{middlecase}, for then $\max \left( \frac{1}{\sigma-\theta},\frac{1}{1-\sigma}\right)=\frac{1}{\sigma-\theta}$. Therefore, we need not bother with subcases according to whether points in $S \cap {\mathcal C}m$ are close to the real axis or not. Some extra calculation is needed only if there are points of ${\mathcal C}m\cap S^*$ in the $1/8$-neighborhood of the real axis ${\mathbb R}$. Here instead of \eqref{zestitsmall} we are to use \eqref{zestitnotlarge}, but now with some lower values (depending on the subcases and even on the subarcs of ${\mathcal C}m$) of the parameter $\tau$. So assume that there is a nonempty subset ${\mathcal I}$ -- consisting then of either one or two subarcs of ${\mathcal C}m$ -- where $\frac{1+\theta}{2} \le \sigma \le 1$ and $|t|\le 1/8$. Denote $\beta:=\arccos \left(-\frac{p-\frac{1+\theta}{2}}{R}\right)=\pi-\arccos\left(\frac{3(1-\theta)}{2R}\right)$. The arcs with $|\varphi| \in [\alpha,\beta]$ are exactly the arcs of ${\mathcal C}m$ which fall into the strip $S^*$; the parts satisfying $|t|\le 1/8$ make up the points of ${\mathcal I}$. Consider first the subcase when ${\mathcal I}$ consists of two arcs, i.e. the upper and lower parts of ${\mathcal C}m \cap S^*$ are both involved.
Then also the arc in between (the part of ${\mathcal C}m$ lying in $S=\{s~:~\sigma\le \frac{1+\theta}{2}\}$) lies in $|t|\le 1/8$; moreover, even the rest of ${\mathcal C}m$ satisfies $$ |t|\le \frac18+R(\sin\alpha-\sin\beta) \le \frac18+2(1-\theta)\left( \sqrt{1-\left(\frac{1-\theta}{R}\right)^2} - \sqrt{1-\left(\frac{3(1-\theta)}{2R}\right)^2} \right)=:H(a). $$ As above, we will have $a\to \theta$ and $R\to 2(1-\theta)$, whence for close enough $a$ we will have $H(a) \le 1/8+2(1-\theta)\left( \frac{\sqrt{3}}{2} - \frac{\sqrt{7}}{4} \right)+{\varepsilon} \le 0.54$, and all over the arc ${\mathcal C}m$ we can calculate with $\tau=0.54$ and constant $1.54$. So in this subcase for points in ${\mathcal C}m$ we have \begin{align*} \left|\log \zeta(z_0+Re^{i\varphi})\right| & \le \log(A+\kappa)\qquad \\ & + \begin{cases} \log \frac98 + \log\frac{1}{\sigma-\theta} \quad& \textrm{if}\quad \sigma \le \frac{1+\theta}{2} \quad \left(\textrm{whence also} ~ |t|\le 1/8 \right) \\ \log \frac98 + \log\frac{1}{1-\sigma} \quad& \textrm{if}\quad \sigma \ge \frac{1+\theta}{2} \quad \textrm{and} ~ |t|\le 1/8 ~\left( \textrm{i.e.} ~ s\in {\mathcal I} \right) \\ \log 1.54 + \log\frac{1}{\sigma-\theta} \quad& \textrm{if}\quad \sigma \ge \frac{1+\theta}{2} \quad \textrm{and} ~ |t|>1/8 ~\left( \textrm{i.e.} ~ s \in (S^*\cap{\mathcal C}m)\setminus {\mathcal I} \right) \end{cases} \end{align*} It follows that then, compared to the above estimations, we have a gain (decrease) of $\log8-\log\frac98$ on $|\varphi|\ge \beta$ and a gain of at least $\log8-\log1.54$ on $\alpha\le |\varphi| \le \beta$, and a loss (increase) of $\log\frac{1}{1-\sigma} - \log\frac{1}{\sigma-\theta} < \log\frac{1-\theta}{1-\sigma}$ on ${\mathcal I}$. Let us now estimate the possible loss on ${\mathcal I}$.
On one arc we can write \begin{align*} \int_{\alpha}^{\beta} \log\frac{1-\theta}{1-\sigma} d\varphi &= \int_{\alpha}^{\beta} \log\frac{1-\theta}{1-p-R\cos\varphi} d\varphi =-(\beta-\alpha)\log \frac{R}{1-\theta} + \int_{\alpha}^{\beta} \log\frac{1}{-w-\cos\varphi}d\varphi \\ & =-(\beta-\alpha)\log \frac{R}{1-\theta} + \int_{\pi-\beta}^{\pi-\alpha} \log\frac{1}{-w+\cos\varphi}d\varphi \\&=-(\beta-\alpha)\log \frac{R}{1-\theta}+\int_w^{\frac{3(1-\theta)}{2R}} \log\frac{1}{-w+u}\frac{du}{\sqrt{1-u^2}} \\&\le -(\beta-\alpha)\log \frac{R}{1-\theta}+\frac{4}{\sqrt{7}} \int_w^{\frac{3(1-\theta)}{2R}} \log\frac{1}{-w+u}du \\&= -(\beta-\alpha)\log \frac{R}{1-\theta}+\frac{4}{\sqrt{7}} \left[v-v\log v \right]_0^{\frac{3(1-\theta)}{2R}-w} \\& = -(\beta-\alpha)\log \frac{R}{1-\theta}+ \frac{4}{\sqrt{7}} \left[ \frac{1-\theta}{2R}-\frac{1-\theta}{2R} \log\frac{1-\theta}{2R}\right]=:L(a). \end{align*} As a result, the difference in the estimation of $\int_{|\varphi|\ge\alpha} \log|\zeta(z_0+Re^{i\varphi})|d\varphi$ as compared to \eqref{intoflogzetainstrip} in the previous case is $$ D(a)\le 2(\pi-\beta) \log\frac98 + 2(\beta-\alpha)\log 1.54 - 2(\pi-\alpha)\log 8 + 2L(a)=:D_2(a). $$ In the final estimation of $N$ the parameter $a$ was to go to $\theta$, so that it suffices to compute $D_2(\theta):=\lim_{a \to \theta+} D_2(a)$. Note that $a\to\theta+$ entails $w\to 1/2$, $R\to 2(1-\theta)$, $\alpha\to2\pi/3$, $\beta\to \arccos(-3/4)$ and thus in particular $$ L(a) \to L(\theta)=-(\arccos(-3/4)-2\pi/3) \log2 +\frac{4}{\sqrt{7}} \left[ \frac{1}{4}-\frac{1}{4} \log\frac{1}{4}\right]\approx 0.677\ldots<0.7.
$$ Therefore, $$ \frac12 D_2(\theta) <(\pi-\arccos(-3/4))\log\frac9{64} + (\arccos(-3/4)-2\pi/3)\log \frac{1.54}{8} +0.7<0.7-\frac{\pi}{3}\log 5<0, $$ whence the total estimate of the number of zeroes $N$ can only improve compared to the previous case. It remains to deal with the case when exactly one subarc of $S^*\cap {\mathcal C}m$ reaches into the $1/8$-neighborhood of the real axis ${\mathbb R}$. In this case the loss on the respective arc is at most $L(a)$ (as opposed to $2L(a)$ above), while for the rest of the points we still have $|t|\le 1/8+\textrm{diam}\, {\mathcal C}m =1/8+\sqrt{3} <2$. It follows that we can then use \eqref{zestitnotlarge} with the parameter $\tau=2$, $\tau+1=3$, and compute the difference of gains and possible losses as $$ D_1(a) \le (2\pi-\alpha-\beta) \log\frac{3}{8} + L(a)< (2\pi-\alpha-\beta) \log\frac{3}{8}+0.7, $$ which is easily seen to be negative again. Therefore, the estimations of Case 2.2 extend to all three subcases of Case 2.3, according as ${\mathcal I}$ consists of no, one, or two subarcs of $S^*\cap {\mathcal C}m$; whence the proof is completed. \end{proof} Joining the necessary number of such rectangles we get the following. \begin{lemma}\label{l:Jensenone} Let $\theta<b<1$ and $|t|\ge 6.3$, say. Consider the rectangle $H:=\{ z\in\mathbb{C}~:~ \Re z\in [b,1],~\Im z\in (t-1,t+1)\}$. Then the number of zeta-zeros $n(H)$ in the rectangle $H$ satisfies \begin{equation}\label{zeroshightt} n(H) \leq \frac{\sqrt{1-\theta}}{\sqrt{(b-\theta)^3}}\left(2\log|t| + 12\log(A+\kappa) + 12\log\frac1{1-\theta} +25\right). \end{equation} \end{lemma} \begin{proof} We consider the union of the rectangles $H_j:= \{ z\in\mathbb{C}~:~ \Re z\in [b,1],~\Im z\in (t_j-h,t_j+h)\}$ with $h:=\frac{\sqrt{7}}{3} \sqrt{(b-\theta)(1-\theta)}$ as in the above Lemma.
With $t_j:=t+1-(2j-1)h$, the union $\cup_{j=1}^m H_j$, where $$ m:=\left\lceil \frac{2}{2h}\right\rceil = \left\lceil \frac{2}{2\frac{\sqrt{7}}{3} \sqrt{(b-\theta)(1-\theta)}} \right\rceil=\left\lceil \frac{3/\sqrt{7}}{\sqrt{(b-\theta)(1-\theta)}} \right\rceil \le \frac{\lceil 3/\sqrt{7}\rceil}{3/\sqrt{7}}~\frac{3/\sqrt{7}}{\sqrt{(b-\theta)(1-\theta)}}, $$ fully covers $H$, whence the total number of zeroes in the union is at least $n(H)$. Now $\frac{3/\sqrt{7}}{\sqrt{(b-\theta)(1-\theta)}} \ge 3/\sqrt{7}$, thus taking into account $\lceil x\rceil \le \frac{\lceil 3/\sqrt{7}\rceil}{3/\sqrt{7}} x = \frac{2\sqrt{7}}{3} x$ for all $x\ge 3/\sqrt{7}$, we obtain $m\le 2/\sqrt{(b-\theta)(1-\theta)}.$ In the small rectangles $H_j$ there are at most $n_j:=\frac{1-\theta}{b-\theta} \left(\log t_j+6\log(A+\kappa)+6\log\frac{1}{1-\theta}+12.5 \right)$ zeroes, because $t_j=t+1-(2j-1)h\ge t-1\ge 5.222$ and the above Lemma applies. So we are to estimate $N:=\sum_{j=1}^m \frac{1-\theta}{b-\theta} \left(\log t_j+6\log(A+\kappa)+6\log\frac{1}{1-\theta}+12.5 \right)$. Pairing the opposite terms thus provides $$ 2N \le \frac{1-\theta}{b-\theta} \left\{ \sum_{j=1}^m \left(\log t_j+\log t_{m+1-j}\right) +2m \left(6\log(A+\kappa)+6\log\frac{1}{1-\theta}+12.5 \right)\right\}. $$ It remains to check that $t_j+t_{m+1-j}\le 2t$ always, whence by the concavity of $\log t$ it follows that $\log t_j+\log t_{m+1-j} \le 2\log t$. Using this and the above estimate $m\le 2/\sqrt{(b-\theta)(1-\theta)}$ finally furnishes the assertion.
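The two elementary facts used above -- the ceiling bound $\lceil x\rceil\le\frac{2\sqrt7}{3}x$ for $x\ge 3/\sqrt7$ and the resulting bound on the number $m$ of covering rectangles -- can be verified numerically. A small sanity check (the sample values of $\theta$ and $b$ are arbitrary illustrations, not from the text):

```python
import math

c = 2 * math.sqrt(7) / 3      # = ceil(3/sqrt 7) / (3/sqrt 7), about 1.764
x0 = 3 / math.sqrt(7)

# ceil(x) <= c*x on a sampled grid x >= x0; the ratio is largest at x0 itself
ratio_max = max(math.ceil(x0 + k * 1e-4) / (x0 + k * 1e-4) for k in range(50_000))

# hence m <= 2 / sqrt((b - theta)(1 - theta)), for sample parameters
theta, b = 0.5, 0.7
h = (math.sqrt(7) / 3) * math.sqrt((b - theta) * (1 - theta))
m = math.ceil(2 / (2 * h))
m_bound = 2 / math.sqrt((b - theta) * (1 - theta))
print(ratio_max, m, m_bound)
```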
\end{proof} \begin{remark} Similarly, the number of zeroes in $[b,1]\times [-iT,iT]$ can be estimated for all $T\ge 1$, say, as \begin{align}\label{NTest} N(b,T):&=\#\{ \rho=\beta+i\gamma~:~ \zeta(\rho)=0, \beta\geq b, |\gamma|\leq T \} \notag \\ &\leq \frac{\sqrt{1-\theta}}{\sqrt{(b-\theta)^3}} \left\{2 T \log_{+} T + \left(12\log(A+\kappa) + 12\log\frac1{1-\theta} +28\right)T \right\}. \end{align} \end{remark} This can be slightly improved, in particular for the case when $b$ is closer to $\theta$ than to $1$, as follows. \begin{lemma}\label{l:Littlewood} Let $\theta<b<1$ and consider any height $T\geq 5 $ together with the rectangle $Q:=Q(b,T):=\{ z\in\mathbb{C}~:~ \Re z\in [b,1],~\Im z\in (-T,T)\}$. Then the number of zeta-zeros $N(b,T)$ in the rectangle $Q$ satisfies \begin{equation}\label{zerosinth-corr} N(b,T)\le \frac{1}{b-\theta} \left\{\frac{1}{2} T \log T + \left(2 \log(A+\kappa) + \log\frac{1}{b-\theta} + 3 \right)T\right\}. \end{equation} \end{lemma} \begin{proof} Instead of Jensen's inequality, we now apply Littlewood's theorem (see \cite[p. 166]{Dav}) to the rectangle $Q(a,T):=[a,2]\times [-iT,iT]$, with $\theta < a\leq (1+\theta)/2$ and also $a<b$, assuming momentarily that on the boundary $\partial Q(a,T)$ there is no zero of $\zeta$ (the statement for the remaining values of $a$ and $T$ then follows by taking limits). This theorem provides the average formula \begin{align}\label{Littlewoodintegral} 2\pi \int_a^2 N(q,T) dq & = \int_{-T}^{T} \left[\log|\zeta(a+it)| - \log|\zeta(2+it)| \right] dt \notag \\ &\qquad + \int_{a}^{2} \left[\arg \zeta(\sigma+iT) - \arg \zeta(\sigma-iT) \right] d\sigma, \end{align} where $\arg \zeta$ is defined by continuous variation from $\zeta(2+it)$.
As in \cite{Dav}, the estimation of the second integral is executed by an appeal to Backlund's observation: $\arg \zeta(s) = \Im \log \zeta(s)= \frac{1}{2i} \left[ \log \zeta(s) -\overline{\log \zeta(s)} \right]$, hence the second integral is $$ \int_{a}^{2} \left[\arg \zeta(\sigma+iT) - \arg \zeta(\sigma-iT) \right] d\sigma = -i \int_{a}^{2} \left[\log \zeta(\sigma+iT) - \log \zeta(\sigma-iT) \right] d\sigma. $$ At $\sigma=2$, Lemma \ref{l:oneperzeta}, \eqref{reciprok} provides that either $|\Re \zeta(2\pm iT)| \geq 1/(2\sqrt{2}(A+\kappa))$, or $|\Im \zeta(2\pm iT)| \geq 1/(2\sqrt{2}(A+\kappa))$; e.g. in the first case let $$ g(s):=g_T(s):= \frac12\left[ \zeta(s+iT)+\zeta(s-iT)\right]. $$ The number of zeroes $\nu(\sigma)$ of $g$ along the segment $[\sigma,2]$ is the number of cases when $\Re \zeta(\alpha\pm iT)$ vanishes along this segment, hence by the definition using continuous variation, we must have $|\arg \zeta(\sigma\pm iT)| \leq (\nu(\sigma)+1)\pi$. Therefore, $$ \frac{1}{\pi}\int_a^2 \arg\zeta(\sigma\pm iT) d\sigma \leq \int_a^2 \nu(\sigma)d\sigma + (2-a) \leq \int_a^2 n_g(2-\sigma) d\sigma +2, $$ where now $n(2-\sigma):=n_g(2-\sigma):=\#\{s~:~|s-2|\leq r:=2-\sigma,~g(s)=0\}$, which is clearly at least as large as $\nu(\sigma)$. Now we use Jensen's formula to estimate the last integral as $$ \int_a^2 n_g(2-\sigma) d\sigma =\int_0^{2-a} n(r)dr < 2 \int_0^{2-a} \frac{n(r)}{r} dr \leq \frac{1}{\pi} \int_0^{2\pi} \log|g(2+(2-a)e^{i\varphi})| d\varphi - 2 \log|g(2)|. $$ By the above, $- \log|g(2)| = -\log |\Re\zeta(2+ iT) | \leq \log(2\sqrt{2}(A+\kappa))$, while for the integral we can apply $| g(2+(2-a)e^{i\varphi})| \leq \max \left( |\zeta(2+iT+(2-a)e^{i\varphi})|,~|\zeta(2-iT+(2-a)e^{i\varphi})|\right)$.
This last expression can be estimated using the uniform estimates of Lemma \ref{l:zkiss}, \eqref{zsgenlarget} (note that $T\geq 5$, so the variables considered are all in the range $|t|\geq \sigma$). We are led to \begin{align*} \int_a^2 n_g(2-\sigma) d\sigma & \leq 2\left( \log\frac{\sqrt{2}(A+\kappa)}{a-\theta} + \log(T+2) + \log(2\sqrt{2}(A+\kappa))\right) \\ & \leq 2 \log T + 4 \log(A+\kappa) + 2 \log\frac{1}{a-\theta} + 3.4. \end{align*} Altogether, for the argument integrals we obtain \begin{align}\label{argint} \int_{a}^{2} \big[\arg \zeta(\sigma+iT) & - \arg \zeta(\sigma-iT) \big] d\sigma \leq 2\pi\left( \int_a^2 n_g(2-\sigma) d\sigma +2\right) \notag \\ & \leq 4 \pi \log T + 8 \pi \log(A+\kappa) + 4 \pi \log\frac{1}{a-\theta} + 2\pi\cdot 5.4. \end{align} Now let us turn to the evaluation of the other integral. Using once again the estimate of Lemma \ref{l:oneperzeta}, \eqref{reciprok}, the term along the $\sigma=2$ line causes no problem, contributing \begin{equation}\label{logzetaonthe2line} \int_{-T}^{T} - \log|\zeta(2+it)| dt \leq 2T \log \left(2(A+\kappa) \right). \end{equation} For the most essential part, the integral along the $\sigma=a$ line, in the finite range $[-4,4]$ an application of \eqref{zsgeneral} of Lemma \ref{l:zkiss} yields, using also $\theta <a\leq (1+\theta)/2$, \begin{align*} \int_0^4 \log |\zeta (a+it)| dt & \leq \int_0^4 \log \left( \frac{A|a+it|}{a-\theta} + \frac{\kappa}{|1-(a+it)|}\right) dt \leq \int_0^4 \log \left( \frac{(A+\kappa)\sqrt{1+t^2}}{a-\theta} \right) dt \\ & = 4 \log \left(\frac{A+\kappa}{a-\theta} \right) + \frac 12 \int_0^4 \log (1+t^2) dt = 4 \log \left(\frac{A+\kappa}{a-\theta} \right) + 2.992244... \end{align*} For the range with $4\leq|t|\leq T$ we can refer to the estimate \eqref{zestitsmall} of Lemma \ref{r:zestismallt}.
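The numerical value $2.992244\ldots$ above can be confirmed from the closed form $\int_0^4\log(1+t^2)\,dt=\big[t\log(1+t^2)-2t+2\arctan t\big]_0^4=4\log 17-8+2\arctan 4$; a quick check:

```python
import math

# half of the closed-form value of the integral over [0, 4]
half_integral = (4 * math.log(17) - 8 + 2 * math.atan(4)) / 2
print(half_integral)  # 2.992244...
```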
From that and the above we obtain \begin{align*} \int_{-T}^{T} \log|\zeta(a+it)| dt & < 8 \log \left(\frac{A+\kappa}{a-\theta} \right) + 6 + 2\int_4^T \left( \log \frac{(A+\kappa)}{a-\theta} + \log (t+1) \right) dt \\ & \leq 2T \log \left(\frac{A+\kappa}{a-\theta} \right) + 6+ 2\left\{((T+1) \log (T+1) -(T+1)) -( 5\log 5-5) \right\} \\ & = 2T\log T + 2T \log\frac{T+1}{T} +2\log (T+1) + \left(\log \frac{A+\kappa}{a-\theta} -1\right) \cdot 2 T + 14 - 10 \log 5 \\ & < 2T\log T + \left(\log \frac{A+\kappa}{a-\theta} -1\right) \cdot 2 T + 2 \log(T+1) + 16-10\log 5. \end{align*} Collecting \eqref{Littlewoodintegral}, \eqref{argint}, \eqref{logzetaonthe2line} and the last estimates, we obtain from Littlewood's theorem that \begin{align*} 2\pi \int_a^2 N(q,T) dq & \leq 2T \log T + \left(\log\frac{2(A+\kappa)^2}{a-\theta} -1 \right) 2T + 4\pi \log T +2\log(T+1) \\ &\qquad\qquad\qquad + 8 \pi \log(A+\kappa) + 4\pi \log\frac{1}{a-\theta} + 2\pi\cdot 5.4+ 16-10\log 5.
\end{align*} Here we choose $a:=(b+3\theta)/4=\theta+\frac14(b-\theta)$ and apply $(b-a) N(b,T) \leq \int_a^2 N(q,T) dq$ to obtain (taking into account $b-a=\frac34 (b-\theta)$, $a-\theta=\frac14 (b-\theta)$ and also $T\ge 5$) \begin{align*} N(b,T) & \leq \frac{1}{b-a} \bigg\{ \frac{1}{\pi} T \log T + \left(\frac{2}{\pi} \log(A+\kappa) + \frac{3\log 2-1}{\pi} + \frac1{\pi} \log \frac{1}{b-\theta} \right) T \\ & \qquad \qquad\qquad+ 2 \log T + \frac{1}{\pi} \log(T+1) + 4 \log(A+\kappa) + 2 \log\frac{4}{b-\theta} + 5.4 + \frac{8-10\log 5}{\pi}\bigg\} \\ & \leq \frac{1}{\frac{3}{4}(b-\theta)} \bigg\{\frac{1}{\pi} T \log T + \left(\left(\frac{2}{\pi}+\frac{4}{T}\right) \log(A+\kappa) + \left( \frac{1}{\pi}+\frac{2}{T}\right) \log\frac{1}{b-\theta} + 0.35 \right) T \\ & \qquad \qquad\qquad + \left(2 \frac{\log T}{T} + \frac{1}{\pi} \frac{\log(T+1)}{T} + \left(2 \log 4 + 5.4 + \frac{8-10\log 5}{\pi}\right)\frac{1}{T} \right) T \bigg\} \\ & \leq \frac{1}{\frac{3}{4}(b-\theta)} \bigg\{\frac{1}{\pi} T \log T + \bigg[ \left(\frac{2}{\pi} +\frac45\right) \log(A+\kappa) + \left(\frac{1}{\pi}+\frac{2}{5} \right)\log\frac{1}{b-\theta} \\ & \qquad \qquad\qquad\qquad \qquad+0.35 + 2 \frac{\log 5}{5} + \frac{1}{\pi} \frac{\log 6}{5} + \left(2 \log 4 + 5.4 + \frac{8-10\log 5}{\pi}\right)\frac{1}{5} \bigg] T \bigg\} \\ & \leq \frac{4}{3(b-\theta)} \left\{\frac{1}{\pi} T \log T + \left[ 1.44 \log(A+\kappa) + 0.72 \log\frac{1}{b-\theta} +2.23\right]T\right\} \\ & \le \frac{1}{b-\theta} \left\{0.425 ~T \log T + \left[1.92 \log(A+\kappa) + 0.96 \log\frac{1}{b-\theta} + 2.98\right]T\right\} \end{align*} and the assertion follows.
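The final numerical collapse of the constants in the last two lines can be verified mechanically; a quick check of the rounded coefficients (not part of the proof):

```python
import math

pi = math.pi
coef_A = 2 / pi + 4 / 5      # rounded up to 1.44 above
coef_b = 1 / pi + 2 / 5      # rounded up to 0.72 above
bracket = (0.35 + 2 * math.log(5) / 5 + math.log(6) / (5 * pi)
           + (2 * math.log(4) + 5.4 + (8 - 10 * math.log(5)) / pi) / 5)  # rounded up to 2.23
final = (4 / 3) * (1 / pi)   # rounded up to 0.425 in the last line
print(coef_A, coef_b, bracket, final)
```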
\end{proof} \begin{lemma}\label{c:zerosinrange} Let $\theta<b<1$ and consider any heights $T>R\geq 5 $ together with the rectangle $Q:=Q(b,R,T):=\{ z\in\mathbb{C}~:~ \Re z\in [b,1],~\Im z\in (R,T)\}$. Then the number of zeta-zeros $N(b,R,T)$ in the rectangle $Q$ satisfies \begin{equation}\label{zerosbetween} N(b,R,T) \leq\frac{1}{b-\theta} \left\{ \frac{4}{3\pi} (T-R) \left(\log T + \log\left(\frac{11.4 (A+\kappa)^2}{b-\theta}\right)\right) + \frac{16}{3} \log\left(\frac{60 (A+\kappa)^2}{b-\theta}\right)\right\}. \end{equation} In particular, for the zeroes between $T-1$ and $T+1$ we have for $T\geq 6$ \begin{align}\label{zerosbetweenone} N(b,T-1,T+1) \leq \frac{1}{(b-\theta)} \left\{ 0.85 \log T + 6.2 \log\left( \frac{(A+\kappa)^2}{b-\theta}\right) + 24 \right\}. \end{align} \end{lemma} \begin{proof} We can apply Littlewood's theorem not only to $Q(a,T)$, but also to the rectangle $Q(a,R):=[a,2]\times [-iR,iR]$: moreover, we can simply subtract the contributions of the two formulae to estimate the contribution of the zeroes in between. Thus we obtain (again with $a:=(b+3\theta)/4$) the estimations \begin{align*} 2\pi (b-a)[N(b,T)-N(b,R)] &\leq 2\pi \int_a^2 [N(q,T)-N(q,R)]dq \\& = \int_{[-T,-R]\cup[R,T]} \left[\log|\zeta(a+it)| - \log|\zeta(2+it)| \right] dt \\ &\qquad \qquad + \int_{a}^{2} \left[\arg \zeta(\sigma+iT) - \arg \zeta(\sigma-iT) \right] d\sigma \\ &\qquad\qquad - \int_{a}^{2} \left[\arg \zeta(\sigma+iR) - \arg \zeta(\sigma-iR) \right] d\sigma.
\end{align*} As before in \eqref{argint}, for the difference of the two argument integrals we can infer -- also using the trivial equality $\log\frac{1}{a-\theta} = \log \frac{1}{b-\theta} + \log 4$ -- the estimates \begin{align*} \int_{a}^{2} (\dots T\dots )d\sigma & - \int_{a}^{2} (\dots R \dots ) d\sigma \\& \leq 8 \pi \log T + 16\pi \log(A+\kappa) + 8\pi \log\frac{1}{b-\theta} +8\pi \log 4 + 2\pi\cdot 10.8. \end{align*} On the other hand, for the contribution of the main term integrals of $\log|\zeta|$ we have a better estimation, due to the fact that here the part over $[-R,R]$ of the integral is canceled. For the part over $\sigma=2$, by the same uniform estimation using Lemma \ref{l:oneperzeta}, \eqref{reciprok}, we obtain $$ \int_{[-T,-R]\cup[R,T]} - \log|\zeta(2+it)| dt \leq 2(T-R) \log \left( 2(A+\kappa) \right). $$ Lastly, the contribution over the $\sigma=a$ line can be estimated using \eqref{zestitsmall} of Lemma \ref{r:zestismallt} as \begin{align*} \int_{[-T,-R]\cup[R,T]} \log|\zeta(a+it)| dt &\leq \int_{[-T,-R]\cup[R,T]} \left( \log \frac{\sqrt{2}(A+\kappa)}{a-\theta} + \log |t| \right) dt \\ & \leq 2 (T-R) \left(\log T+ \log \frac{4\sqrt{2}(A+\kappa)}{b-\theta}\right). \end{align*} So collecting our estimates and substituting $b-a=\frac34 (b-\theta)$ we are led to \begin{align*} 2\pi \frac34 (b-\theta) N(b,R,T) & \leq 2\pi \int_a^2 [N(q,T)-N(q,R)]dq \\ & \leq 2 (T-R) \left( \log T + \log \left(\frac{8\sqrt{2}(A+\kappa)^2}{b-\theta}\right) \right) \\ & + 8 \pi \log T + 16 \pi \log(A+\kappa) + 8\pi \log\frac{1}{b-\theta} +2\pi\cdot 10.8+8\pi\log 4 \\&\leq 2 (T-R)\left(\log T + \log\left(\frac{11.4 (A+\kappa)^2}{b-\theta}\right)\right) + 8 \pi \log\left(\frac{59.52 (A+\kappa)^2}{b-\theta}\right). \end{align*} The inequality \eqref{zerosbetween} follows.
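The two absorbed constants in the last step are $8\sqrt2\approx 11.31\le 11.4$ and $4e^{2.7}\approx 59.52$, the latter coming from $16\pi\log(A+\kappa)+8\pi\log\frac{1}{b-\theta}+8\pi\log 4+2\pi\cdot 10.8=8\pi\big[2\log(A+\kappa)+\log\frac{1}{b-\theta}+\log 4+2.7\big]$, since $2\pi\cdot 10.8=8\pi\cdot 2.7$. A quick numerical check:

```python
import math

c1 = 8 * math.sqrt(2)     # absorbed into the constant 11.4
c2 = 4 * math.exp(2.7)    # = exp(log 4 + 2.7), absorbed into the constant 59.52
print(c1, c2)
```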
This formula applies also to $N(b,T-1,T+1)$ on noting that $\log(T+1)\leq \log T + \log\frac76$, so that \begin{align*} N(b,T-1,T+1) & \leq \frac{1}{b-\theta} \left\{ \frac{8}{3\pi}\left(\log T + \log\left(\frac{\frac76\cdot 11.4 (A+\kappa)^2}{b-\theta}\right)\right) + \frac{16}{3} \left(\log\left(\frac{59.52 (A+\kappa)^2}{b-\theta}\right)\right) \right\} \\& < \frac{1}{b-\theta} \left\{ 0.8489 \log T + 6.2 \log\left(\frac{(A+\kappa)^2}{b-\theta}\right) + \left(0.8489 \log 13.3 + \frac{16}{3} \log 59.52 \right) \right\}. \end{align*} From here a little numerical computation completes the proof of the Lemma. \end{proof} \section{The logarithmic derivative of the Beurling $\zeta$} \label{sec:logder} \begin{lemma}\label{l:borcar} Let $z=a+it_0$ with $|t_0| \geq e^{5/4}+\sqrt{3}=5.222\ldots$ and $\theta<a\leq 1$. With $\delta:=(a-\theta)/3$ denote by $S$ the (multi)set of the $\zeta$-zeroes (listed according to multiplicity) not farther from $z$ than $\delta$. Then we have \begin{align}\label{zlogprime} \left|\frac{\zeta'}{\zeta}(z)-\sum_{\rho\in S} \frac{1}{z-\rho} \right| & < \frac{9(1-\theta)}{(a-\theta)^2} \left(22.5+14\log(A+\kappa)+14\log \frac{1}{a-\theta} + 5\log |t_0|\right). \end{align} Furthermore, for $0 \le |t_0| \le 5.23$ an analogous estimate (without any term containing $\log |t_0|$) holds true: \begin{equation}\label{zlogprime-tsmall} \left|\frac{\zeta'}{\zeta}(z)+\frac{1}{z-1}-\sum_{\rho\in S} \frac{1}{z-\rho} \right| \le \frac{9(1-\theta)}{(a-\theta)^2} \left(34+14\log(A+\kappa)+18\log \frac{1}{a-\theta}\right). \end{equation} \end{lemma} \begin{proof} Let now $z_0:=p+it_0$, $p:=1+(1-\theta)$, and denote by $D_j$ the disk around $z_0$ with radius $R_j:=p-(\theta+j\delta)$ for $j=1,2,3$, where we have chosen $\delta:=(a-\theta)/3$. Observe that then $z\in\partial D_3\subset D_3$. By Lemma \ref{l:Jensen} and Remark \ref{r:alsoindisk} (applied with $q:=\theta+\delta=\theta+\frac23 (b-\theta)$, i.e.
with $b:=\theta+\frac32\delta$) we know that the number $N$ of $\zeta$-zeroes in the disk $D_1$ satisfies, whenever $t_0 \ge 5.222\ldots$, the estimate \begin{align}\label{zerosinDone} N & \leq \frac{1-\theta}{\frac{3}{2}\delta} \left(12.5+6\log(A+\kappa)+6\log \frac{1}{1-\theta} + \log t_0 \right) \\ \notag & =\frac{1-\theta}{\delta} \left(\frac{25}{3}+4\log(A+\kappa)+4\log \frac{1}{1-\theta} + \frac23 \log t_0\right). \end{align} Now denote the (multi)set of all these zeroes (again taken according to multiplicities) by $S'$ and define \begin{equation}\label{eq:Blaschkefactor} P(s):=\prod_{\rho\in S'}\alpha_{\rho}(s),\qquad \textrm{where} \qquad \alpha(s):=\alpha_{\rho}(s):= \frac{(s-z_0)R_1-(\rho-z_0)R_1} {R_1^2-(s-z_0)\overline{(\rho-z_0)}}. \end{equation} Let us now consider $$ f(s):=\frac{\zeta(s)}{P(s)}, $$ which is regular in $D_1$ and, by construction, non-vanishing there. Hence also $g(s):=\log f(s)$ is regular at least in $D_1$; by an appropriate choice of the logarithm we may assume that $\Im g(z_0)= \arg f(z_0) \in [-\pi,\pi]$. First we estimate the order of magnitude of $f$ on the perimeter of $D_1$. If $s=\sigma+it\in\partial D_1$, then every factor of $P(s)$ has modulus exactly $1$, hence $|f(s)|= |\zeta(s)|$. Note that $\sigma-\theta$ is at least $\delta$, $\sigma<p+R_1<2p-\theta < 4$, $R_1<p-\theta<2$ and $t>t_0-R_1>3.22$. Moreover, a little coordinate geometry reveals that the distance of the point $z_0=p+it_0$ from the line $\sigma=t$ is $\frac{1}{\sqrt2} (\Im z_0 -\Re z_0)\ge \frac{1}{\sqrt2} (5.2-p)>\frac{3}{\sqrt2}>2>R_1$, whence the whole disk $D_1$ lies in the domain $\sigma<t$, and the estimate \eqref{zsgenlarget} of Lemma \ref{l:zkiss} applies, furnishing $$ |f(s)|= |\zeta(s)| \leq \sqrt{2} \frac{(A+\kappa) t}{\sigma-\theta} \qquad (s\in\partial D_1).
$$ As $g(s):=\log f(s)$ is analytic on $D_1$, we can apply the Borel--Carath\'eodory lemma: $$ \max_{|s-z_0|\leq R_2} |g(s)-g(z_0)| \leq \frac{2R_2}{R_1-R_2}\max_{|s-z_0|\leq R_1} \Re \left( g(s) - g(z_0) \right). $$ That is, with a slight reformulation we obtain for $s\in D_2$ $$ |g(s)| \leq |g(s)-g(z_0)| + |\Im g(z_0)| + |\Re g(z_0)| \leq \pi + \frac{2R_2}{\delta}\max_{|s-z_0|\leq R_1} \Re g(s) +\left( \frac{2R_2}{\delta} \pm 1 \right) \Re (-g(z_0)), $$ where, by the choice of our radii and parameters, the coefficient in front of $\Re (-g(z_0))$ is positive in either case. Firstly, $$ \Re g(s) = \log|f(s)| \leq \log\left( \sqrt{2} \frac{(A+\kappa)|t|}{\sigma-\theta}\right) \le \log\left( \sqrt{2} \frac{(A+\kappa)(t_0+R_1)}{\delta}\right) \qquad (s\in\partial D_1); $$ that is, taking into account $R_1/t_0<2/5.2$, we get for the maximum of $\Re g(s)$ on $\partial D_1$ $$ \max_{|s-z_0|\leq R_1} \Re g(s) \leq \log\left(2\frac{(A+\kappa)t_0}{\delta}\right). $$ Application of the Borel--Carath\'eodory lemma on $D_2$ thus yields $$ \max_{|s-z_0|\leq R_2} |g(s)| \le \pi + \frac{4(1-\theta)-4\delta}{\delta} \left(\log(2(A+\kappa)) +\log t_0+ \log\frac1{\delta} \right) + \left(\frac{4(1-\theta)-4\delta}{\delta} \pm 1\right) \Re(- g(z_0)) . $$ Since the factors $\alpha(s)$ of $P(s)$ all have modulus less than $1$ at $z_0$, by \eqref{reciprok} we obtain $$ - \Re g(z_0) = \log\left| \frac{1}{\zeta(z_0)}\right| + \log \left|P(z_0)\right| \leq \log\frac{p(A+\kappa)}{p-1} \le \log\frac{2(A+\kappa)}{1-\theta}.
$$ Hence, taking into account $\frac1{\delta}=\frac3{a-\theta}$ and $\frac1{1-\theta}\le\frac1{a-\theta}$, we are led to \begin{align*} \max_{|s-z_0| \leq R_2} |g(s)| & \le \pi+ \left(\frac{4(1-\theta)}{\delta} -4\right) \left(\log(2(A+\kappa)) +\log t_0+ \log\frac1{\delta} \right) + \left(\frac{4(1-\theta)}{\delta} -3\right) \log\frac{2(A+\kappa)}{1-\theta} \\ & \le \frac{4(1-\theta)}{\delta} \left(\log\left(12(A+\kappa)^2\right) +\log t_0+ 2 \log\frac1{a-\theta} \right) +\pi -4\log6-3\log2. \end{align*} Note that the sum of the constants $\pi -4\log6-3\log2$ is negative, hence it can be dropped. Finally we apply the usual Cauchy estimate to the regular function $g=\log f$ in $D_2$ to obtain a generally valid estimate of the derivative in $D_3$. The radius of the estimation is $R_2-R_3=\delta$, which divides on the right-hand side to give \begin{align}\label{MgD3} \max_{|s-z_0|\leq R_3} |g'(s)| & \le \frac{4(1-\theta)}{\delta^2} \left(\log12 +2\log (A+\kappa)+\log t_0+ 2 \log\frac1{a-\theta} \right). \end{align} In view of the definition of $f(s)$, $P(s)$ and $S'$ we have $f'/f=\zeta'/\zeta -P'/P$, hence $$ g'(s)= \frac{\zeta'}{\zeta}(s) - \sum_{\rho\in S'} \frac{1}{s-\rho} - \sum_{\rho\in S'} \frac{\overline{(\rho-z_0)}} {R_1^2-(s-z_0)\overline{(\rho-z_0)}}. $$ Note that the last sum has $N$ terms, and for any $s\in D_3$ the denominator is at least $2\delta R_1$ in modulus, while the numerator is at most $R_1$. Therefore, we have $$ \left| \sum_{\rho\in S'} \frac{\overline{(\rho-z_0)}} {R_1^2-(s-z_0)\overline{(\rho-z_0)}}\right| \le \frac{N}{2\delta}. $$ Finally, we need to take care of the difference between $S$ and $S'$. Here we specialize to the point $z=a+it_0\in D_3$. Clearly, $S\subset S'$, and for any $\rho\in S'\setminus S$ we have $|z-\rho|>\delta$ by construction (definition of $S$).
Therefore, $\left|\sum_{\rho\in S'\setminus S} \frac1{z-\rho}\right|$ does not exceed $N/\delta$, so we get $$ \left|g'(z)-\left(\frac{\zeta'}{\zeta}(z)-\sum_{\rho\in S} \frac{1}{z-\rho} \right)\right| \le \frac{3N}{2\delta} \le \frac{1-\theta}{\delta^2} \left(12.5+6\log(A+\kappa)+6\log \frac{1}{1-\theta} + \log t_0\right). $$ Combining with \eqref{MgD3} we obtain the assertion. If $z=a+it_0$ with $|t_0| \le e^{5/4}+\sqrt{3}$, then an analogous argument can be carried out with $\zeta(s)(s-1)$ replacing $\zeta(s)$. Again, let $N$ denote the number of $\zeta$-zeroes in the disk $D_1$. Analogously to \eqref{zerosinDone}, the estimate \eqref{zerosinH-smallt} of Lemma \ref{l:Jensen} now provides (taking into account Remark \ref{r:alsoindisk}, too) $N \leq \frac{1-\theta}{b-\theta}\left(14 +6\log(A+\kappa) + 6\log\frac1{1-\theta}\right)$, where also here the value of $b$ is chosen as $\theta+\frac32\delta$, so that we get\footnote{Note that Lemma \ref{l:Littlewood} \eqref{zerosinth-corr} can only provide, with $T= 5$ and with $b:=\theta+\delta$, the less sharp estimate \begin{align*} N(b,T) & \le \frac{1}{b-\theta} \left\{\frac{1}{2} T \log T + \left(2 \log(A+\kappa) + \log\frac{1}{b-\theta} + 3 \right)T\right\} \\ &\le \frac{1}{\delta} \left\{\frac{5}{2} \log 5 + \left(2 \log(A+\kappa) + \log\frac{3}{a-\theta} + 3 \right)5\right\}\le \frac{1}{\delta} \left\{25 + 10 \log(A+\kappa) + 5 \log\frac{1}{a-\theta}\right\}. \end{align*} } \begin{equation}\label{eq:nwhentsmall} N\le \frac{1}{\delta} \left(\frac{28}3 +4\log(A+\kappa) + 4\log\frac1{1-\theta}\right). \end{equation} Let us now define, similarly as above, the function $F(s):=\zeta(s) (s-1)/P(s)$, where $P(s)$ is the Blaschke-type product of zero factors as in \eqref{eq:Blaschkefactor}. The function $\zeta(s) (s-1)$ admits an estimate which is uniformly valid in the whole half-plane $\sigma>\theta$.
Namely, according to \eqref{zssmin1} we have $|\zeta(s) (s-1)| \le \frac{|s||s-1|A}{\sigma-\theta} +\kappa\le \frac{(\max(|s|,|s-1|))^2 (A+\kappa)}{\sigma-\theta}$. As the Blaschke factors have absolute value $1$ on the circle $\partial D_1$, we therefore obtain $$ |F(s)| \le \frac{(\max(|s|,|s-1|))^2(A+\kappa)}{\sigma-\theta} \qquad (s\in\partial D_1). $$ As $G(s):=\log F(s)$ is analytic on $D_1$, we can apply the Borel--Carath\'eodory lemma: $$ \max_{|s-z_0|\leq R_2} |G(s)-G(z_0)| \leq \frac{2R_2}{R_1-R_2}\max_{|s-z_0|\leq R_1} \Re \left( G(s) - G(z_0) \right). $$ That is, with the same slight reformulation as above we obtain for $s\in D_2$ $$ |G(s)| \leq |G(s)-G(z_0)| + |\Im G(z_0)| + |\Re G(z_0)| \leq \pi + \frac{2R_2}{\delta}\max_{|s-z_0|\leq R_1} \Re G(s) +\left( \frac{2R_2}{\delta} \pm 1 \right) \Re (-G(z_0)). $$ We now estimate the maximum of $\Re G(s)=\log |F(s)|$ -- or, equivalently, of $|F(s)|$ -- on $\partial D_1$. For $\sigma\ge 1$ we already have $\sigma-\theta\ge 1-\theta\ge a-\theta$, and we easily get $\Re G(s) \le \log \frac{1}{a-\theta} + \log(|s|^2(A+\kappa)) \le \log \frac{1}{a-\theta}+\log ((7.23^2+4^2)(A+\kappa)) < \log \frac{1}{a-\theta}+\log (69(A+\kappa))$. Otherwise write $\sigma+it=z_0+R_1e^{i\varphi}$ with $\varphi:=\arg (s-z_0) \in [-\pi,-\alpha] \cup [\alpha,\pi]$. Then $\sigma-\theta=p-\theta+R_1\cos \varphi=\delta+R_1(1+\cos\varphi)$ and $t=t_0+R_1\sin\varphi$, so that in particular it is clear that we can restrict ourselves to the case of positive $\sin \varphi$, i.e. when $\alpha\le \varphi \le \pi$.
Introducing the new variables $\psi:=\frac{\pi-\varphi}{2} \in [0,\pi/2]$ and $x:=2R_1\sin\psi \in [0,4]$, we can write \begin{align*} \frac{|F(s)|}{A+\kappa} & \le \frac{\sigma^2+t^2}{\delta+R_1(1+\cos\varphi)} \le \frac{1+(t_0+2R_1 \sin\psi\cos\psi)^2}{\delta+2R_1\sin^2\psi} \\ & < 2R_1 \frac{28.4+4R_1 \sin\psi + 4R_1^2 \sin^2\psi}{2R_1\delta+4R_1^2\sin^2\psi}=2R_1 \frac{28.4+2x + x^2}{2R_1\delta+x^2}. \end{align*} If $q(x):=\frac{A+x+x^2}{B+x^2}$ with $A:=28.4$ and $B=2R_1\delta \le 4\frac{a-\theta}{3} <4/3$, then for $x\in [0,4]$ the maximum of $q(x)$ has to occur somewhere in $[0,\frac{B}{A-B}]\subset [0,B/27]$, since for $x>\frac{B}{A-B}$ the derivative $q'(x)$ is easily seen\footnote{Its sign is that of $B+2(B-A)x-x^2$.} to be negative. However, for $0\le x \le B/27$ we have $q(x) \le \frac1{B} (A+B/27+(B/27)^2) < \frac1{B} (28.4+0.4+0.1)<29/B$, whence we find for all $s$ with $\sigma<1$ the estimate $\frac{|F(s)|}{A+\kappa} \le 29/\delta$ and $\log|F(s)| \le \log 67 + \log(A+\kappa) + \log\frac{1}{a-\theta}$. Combining with the above, we thus obtain in all cases the estimate $$ \Re G(s) \le \log (69(A+\kappa)) + \log\frac{1}{a-\theta} \qquad ( s \in \partial D_1). $$ The value of $-\Re G(z_0)=\log\left|\frac{1}{\zeta(z_0)(z_0-1)}\right|+\log|P(z_0)|$ can be estimated similarly as above with reference to \eqref{reciprok}, $|z_0-1|\ge \Re (z_0-1)=1-\theta$ and $|P(z_0)|\le 1$, furnishing \begin{align*} -\Re G(z_0) & \le \log\frac{1}{|\zeta(z_0)|} +\log\frac1{1-\theta} \le \log\left(\frac{2(A+\kappa)}{(1-\theta)^2}\right)\le \log\left(\frac{2(A+\kappa)}{(a-\theta)^2}\right).
\end{align*} Collecting these terms we get, similarly to \eqref{MgD3}, \begin{align}\label{MgD3-smallt} \max_{|s-z_0|\leq R_3} |G'(s)| & \le \frac{1}{\delta} \left\{\pi+ \left(\frac{4(1-\theta)}{\delta} -4\right) \left(\log\left(\frac{69(A+\kappa)}{a-\theta}\right)\right) + \left(\frac{4(1-\theta)}{\delta} -3\right) \log\frac{2(A+\kappa)}{(a-\theta)^2} \right\} \notag \\ & \le \frac{4(1-\theta)}{\delta^2} \left(\log 138 +2\log (A+\kappa)+ 3 \log\frac1{a-\theta} \right) + \frac1{\delta}\left\{\pi - 4\log 69 - 3 \log 2\right\} \notag \\ & < \frac{4(1-\theta)}{\delta^2} \left(4.93 +2\log (A+\kappa)+ 3 \log\frac1{a-\theta} \right). \end{align} Much like the above, specializing to $s=z$ and using the estimate \eqref{eq:nwhentsmall} of $N$, we now infer $$ \left|G'(z)-\left(\frac{\zeta'}{\zeta}(z)+\frac{1}{z-1}-\sum_{\rho\in S} \frac{1}{z-\rho} \right)\right| \le \frac{3N}{2\delta} \le \frac{1-\theta}{\delta^2} \left(14+6\log(A+\kappa)+6\log \frac{1}{1-\theta}\right), $$ whence combining with \eqref{MgD3-smallt} we obtain the assertion.
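For completeness, the final bookkeeping behind \eqref{zlogprime-tsmall} can be spelled out (a verification only, not part of the original argument):

```latex
% With \delta=(a-\theta)/3, i.e. 1/\delta^2 = 9/(a-\theta)^2, adding
% \eqref{MgD3-smallt} and the zero-sum error term gives the coefficients
\[
  4\cdot 4.93 + 14 = 33.72 \le 34, \qquad
  4\cdot 2 + 6 = 14, \qquad
  4\cdot 3 + 6 = 18,
\]
% while indeed \log 138 = 4.927\ldots < 4.93 and
% \pi - 4\log 69 - 3\log 2 = 3.14\ldots - 16.93\ldots - 2.07\ldots < 0.
```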
\end{proof} \begin{lemma}\label{l:path-translates} For any given parameter $\theta<b<1$ and for any finite set ${\mathcal A}\subset[-iB,iB]$, symmetric with respect to zero and of cardinality $\#{\mathcal A}=n$, there exists a broken line $\Gamma=\Gamma_b^{{\mathcal A}}$, symmetric with respect to the real axis and consisting of horizontal and vertical line segments only, whose upper half is $$ \Gamma_{+}= \bigcup_{k=1}^{\infty} \{[\sigma_{k-1}+it_{k-1},\sigma_{k-1}+it_{k}] \cup [\sigma_{k-1}+it_{k},\sigma_{k}+it_{k}]\} $$ with $\sigma_j\in [\frac{b+\theta}{2},b]$ ($j\in{\mathbb N}$), $t_0=0$, $t_1\in[4,5]$ and $t_j\in [t_{j-1}+1,t_{j-1}+2]$ $(j\geq 2)$, and such that the distance of any ${\mathcal A}$-translate $\rho+ i\alpha ~(i\alpha\in{\mathcal A})$ of a $\zeta$-zero $\rho$ from any point $s=\sigma+it \in \Gamma$ is at least $d:=d(t):=d(b,\theta,n,B;t)$ with \begin{equation}\label{ddist-corr} d(t):=\frac{(b-\theta)^2}{4n \left(4.4 \log(|t|+B+5) + 51 \log (A+\kappa) + 31 \log\frac{1}{b-\theta}+ 113\right)}. \end{equation} Moreover, the same separation from translates of $\zeta$-zeros holds also for the whole horizontal line segments $H_k:=[\frac{b+\theta}{2}+it_k,2+it_k]$, $k=1,\dots,\infty$, and their reflections $\overline{H_k}:=[\frac{b+\theta}{2}-it_k,2-it_k]$, $k=1,\dots,\infty$; furthermore, the same separation holds from the translated singularity points $1+i\alpha$ of $\zeta$, too. \end{lemma} \begin{proof} Let us denote $a:=\frac{b+\theta}{2}$ and $p:=0.51\theta+0.49b$. Note that $a-p=0.01(b-\theta)>d$, so that any translated zeta-zero with real part $\beta:=\Re \rho=\Re(\rho+i\alpha)\le p~(i\alpha\in{\mathcal A})$ necessarily has distance at least $d$ from the whole curve $\Gamma$ and from all the $H_k$, all of which lie in the half-plane $\Re s\ge a$.
According to \eqref{zerosinth-corr} in Lemma \ref{l:Littlewood}, the number of zeroes in the rectangle $Q(p,6):=[p,1]\times [-i6,i6]$ satisfies \begin{align}\label{Np6} N(p,6) & \le \frac{1/0.49}{b-\theta} \left\{ \frac1{2}6 \log 6 + 12\log(A+\kappa) + 6 \log\frac{1/0.49}{b-\theta} + 18 \right\} \notag \\& \le \frac{1}{b-\theta} \left\{25\log(A+\kappa)+12.5 \log\dfrac{1}{b-\theta}+57\right\}=:X. \end{align} Let now $R\ge 5$ be any parameter. Then, to estimate the number of $\zeta$-zeroes in the rectangle $[p,1]\times[iR,i(R+1)]$, we can apply \eqref{zerosbetween} of Lemma \ref{c:zerosinrange} with $T:=R+1$ to get \begin{align}\label{Np5R} N(p, R, R+1) & \le \frac{1}{p-\theta} \left\{ \frac{4}{3\pi} \left(\log (R+1) + \log\left(\frac{11.4 (A+\kappa)^2}{p-\theta}\right)\right) + \frac{16}{3} \log\left(\frac{60 (A+\kappa)^2}{p-\theta}\right)\right\} \notag \\& < \frac{1}{b-\theta} \left\{\log (R+1) + \log\left(\frac{11.4 (A+\kappa)^2}{0.49(b-\theta)}\right) + \frac{16}{3\cdot0.49} \log\left(\frac{60 (A+\kappa)^2}{0.49(b-\theta)}\right)\right\} \notag \\ \notag & < \frac{1}{b-\theta} \left\{\log (R+1) + 24 \log(A+\kappa)+12 \log\dfrac{1}{b-\theta}+56\right\} \\ & < \frac{1}{b-\theta} \log (R+1)+X =:X_0(R). \end{align} Note that $X_0(R)$ in \eqref{Np5R} is increasing as a function of $R\ge 5$, and it always exceeds the bound \eqref{Np6}. First we describe how we choose the horizontal line segments $H_k$, i.e. the values of the coordinates $t_k$, so that the segments $H_k:=[a+it_k,2+it_k]$ avoid $\zeta$-zeroes and translated zeta-zeroes by at least $d$. We need to construct only $\{t_k\}_{k\in {\mathbb N}}$, in view of the obvious symmetry (of both the curve $\Gamma$ and the set of all translated zeta-zeroes) with respect to ${\mathbb R}$. Our choice is inductive; first we choose $t_1\in [4,5]$, and then inductively $t_k\in [t_{k-1}+1,t_{k-1}+2]$.
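Returning to \eqref{Np6} for a moment, the numerical constants there can be checked directly (a verification only, not part of the original argument):

```latex
% Constants in \eqref{Np6}: with 1/0.49 = 2.0408\ldots one has
\[
  \frac{12}{0.49} = 24.49\ldots \le 25, \qquad
  \frac{6}{0.49} = 12.24\ldots \le 12.5,
\]
% and for the constant term,
\[
  \frac{1}{0.49}\bigl(3\log 6 + 18\bigr) + \frac{6}{0.49}\log\frac{1}{0.49}
  = 47.7\ldots + 8.7\ldots < 57 .
\]
```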
(Recall that $t_0:=0$, but this is a technical choice only, and we do not claim any estimates on the corresponding horizontal line segment, which is not part of $\Gamma$.) Let $i\alpha \in {\mathcal A}$ be given arbitrarily. The translated zeta-zero $\rho+i\alpha$ has imaginary part $\gamma+\alpha$, and so the set of vertical coordinate values $t$ for which some point of the horizontal line segment $[a+it,2+it]$ gets closer to this zero than $d$ is contained in $[\gamma+\alpha-d,\gamma+\alpha+d]$, a set of measure $2d$. So if we can specify some $t_1\in [4+d,5-d]$ which avoids this prohibited interval of measure $2d$ for each $\alpha\in{\mathcal A}$ and each $\rho$ with $\beta\ge p$, then $t_1$ is appropriate for our first choice. Now $[\gamma+\alpha-d,\gamma+\alpha+d]$ intersects $[4+d,5-d]$ only in case $\gamma \in [4-\alpha,5-\alpha]$, so that the total measure of prohibited $t$-values for $t_1$ is at most $\sum_{i\alpha\in{\mathcal A}} 2d N(p,4-\alpha,5-\alpha) \le 2d n \max_{R \in [4-B,4+B]} N(p,R,R+1)= 2dn \max_{0\le R\le 4+B} N(p,R,R+1)$. If $B\le 1$, then these zero counts are below $N(p,6)$, estimated by \eqref{Np6}, while if $B\ge 1$ then $B+4\ge 5$ and Lemma \ref{c:zerosinrange} applies, resulting in the estimate \eqref{Np5R} with $R:=B+4$. All these are below $X_0(B+4)$ (even if $B\le 1$ and $B+4\in [4,5)$, in which case we need only $X_0(B+4)\ge X$), so that the total measure of prohibited $t$-values for $t_1$ is at most $2dnX_0(B+4)$, while the full measure of $[4+d,5-d]$ is $1-2d$. Excluding also all possible intervals of length at most $2d$ containing values with $|t+\alpha|<d$ for some $\alpha\in {\mathcal A}$, we will still have a measure at least $1-2d-2nd>1/2$.
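That the remaining measure indeed exceeds $1/2$ follows already from \eqref{ddist-corr}; a minimal verification (not part of the original text):

```latex
% From \eqref{ddist-corr}, since all the logarithmic terms are nonnegative
% and b-\theta<1,
\[
  d \le \frac{(b-\theta)^2}{4n\cdot 113} < \frac{1}{452\,n},
  \qquad\text{so}\qquad
  2d(n+1) < \frac{n+1}{226\,n} \le \frac{2}{226} < \frac12 .
\]
```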
Thus there is room for an admissible $t_1$ whenever $2nd X_0(B+4)$ stays below $1/2$, or, equivalently, if $$ d < \frac{1}{4n X_0(B+4)}=\frac{b-\theta}{4n \left\{\log(B+5)+ 25 \log(A+\kappa)+12.5 \log\dfrac{1}{b-\theta} +57\right\}}. $$ This is clearly guaranteed by \eqref{ddist-corr} with $t=t_1\in [4,5]$ because $\log(B+5)\le \log(|t|+B+5)$. The argument for the choice of any $t_k$ ($k\ge 2$) is analogous. The only difference is that in place of $[4+d,5-d]$ we now look for an admissible $t_k$ in the segment $[t_{k-1}+1+d,t_{k-1}+2-d]$, again of measure $1-2d$, which after the exclusion of all points with $|t+\alpha|<d$ still has measure $>1/2$. Noting that the value of $R:=R_k:=t_{k-1}+1$ is at least $5$, the measure of the prohibited set can now be estimated by $2dn X_0(t_{k-1}+B+1)$, resulting in the condition $$ d < \frac{1}{4n X_0(t_k+B)}, $$ in view of $t_{k-1}+1\le t_k$. This is again clearly guaranteed by \eqref{ddist-corr} with $t=t_k$. The room for the choice of an appropriate $\sigma_k$ for the vertical line segments $V_k:=[\sigma_k+it_k,\sigma_k+it_{k+1}]$ is a little tighter. Again, it suffices to ascertain the existence of an appropriate $\sigma_k$ for each $k\ge 0$, in view of the symmetry of the configuration. The starting step is to pick $\sigma_0$ from the interval $[a,b]$, of measure $b-a=(b-\theta)/2$, such that $V_0:=[\sigma_0,\sigma_0+it_1]$ -- and whence by symmetry also the reflected segment and their union $[\sigma_0-it_1,\sigma_0+it_1]$ -- avoids all translated zeroes $\rho+i\alpha$ by at least $d$, and, further, also avoids the translated poles $1+i\alpha$ by at least $d$. To ensure the latter requirement, we will simply take $\sigma_0 \in [a,b-d]$, so that the distance from the translated poles is always at least $d$. So, given that $t_1\le 5-d$, it suffices to ensure that $W_0:=[\sigma_0,\sigma_0+i(5-d)]$ avoids all translated zeroes $\rho+i\alpha$ by at least $d$.
As above, consider any translated zero $\rho+i\alpha$ with $i\alpha\in {\mathcal A}$. In case $\Im (\rho+i\alpha)=\gamma +\alpha \not \in [-d,5]$ we automatically have an avoidance by at least $d$, so that we need to take into account only zeta-zeroes in the rectangle $[a-d,b+d]\times [(-\alpha-d)i,(-\alpha+5)i] \subset [p,1]\times [(-\alpha-d)i,(-\alpha+5)i]$. Therefore, the number of possibly interfering zeroes does not exceed $\max_{-B \le \alpha \le B} N(p,-\alpha-d,-\alpha+5)=\max_{0 \le \alpha \le B} N(p,\alpha-d,\alpha+5)$. The zeroes accounted for in $N(p,\alpha-d,\alpha+5)$ are covered by the ones accounted for in $N(p,5)$ and in $N(p,\max(\alpha-d,5),\alpha+5)$ together. So with any given $\alpha\ge 0$ let us put $R:=\max(\alpha-d,5)\ge 5$ and $T:=\alpha+5$; then $T-R\le 5+d\le 5.01$ and \eqref{zerosbetween} of Lemma \ref{c:zerosinrange} furnishes \begin{align}\label{Np10R} N(p,R,T) & \le \frac{1}{0.49(b-\theta)} \left\{ \frac{20.04}{3\pi} \left(\log T + \log\left(\frac{11.4 (A+\kappa)^2}{0.49(b-\theta)}\right) \right) + \frac{16}{3} \log\left(\frac{60 (A+\kappa)^2}{0.49(b-\theta)}\right)\right\} \notag \\ & \le \frac{1}{b-\theta} \left\{4.4\log (B+5) + 30.5 \log(A+\kappa)+25\log\dfrac{1}{b-\theta}+66\right\}. \end{align} Estimating $N(p,5)$ by \eqref{zerosinth-corr} in Lemma \ref{l:Littlewood} -- similarly to \eqref{Np6} -- and adding, we are led to \begin{align}\label{Np10-full} N(p, &\alpha-d,\alpha+5) \le \frac{1}{p-\theta} \left\{\frac{1}{2} \log 5 +2 \log(A+\kappa) + \log\frac{1}{p-\theta} + 3 \right\} 5 +N(p,R,T) \notag \\& < \frac{1}{b-\theta} \left\{4.4 \log (B+5) + 51\log(A+\kappa)+31\log\dfrac{1}{b-\theta}+112.5\right\}=:Y_0(B). \end{align} Therefore the measure of the set of $\sigma$-values closer than $d$ to the real part $\beta$ of any of the translated zeroes $\rho+i\alpha$ with $\rho\in
[p,1]\times [i(\alpha-d),i(\alpha+5)]$ and $\alpha \in {\mathcal A}$ does not exceed $2dnY_0(B)$, while the total measure of available values for $\sigma_0$ is $(b-d)-a=\frac{b-\theta}{2}-d$, recalling that we have already excluded also the interval $[b-d,b]$, possibly too close to some translated pole $1+i\alpha$. Note that with this last exclusion we guarantee that no translated pole can get closer than $d$ to any point $\sigma_0+it$, whatever $t\in {\mathbb R}$ is. So finally it suffices to have $d \le \frac{b-\theta}{4nY_0(B)+2}$, and this is guaranteed by the condition \eqref{ddist-corr} of the statement, since $$ d < \frac{(b-\theta)^2}{4n \left(4.4 \log(|t|+B+5) + 51 \log (A+\kappa) + 31 \log\frac{1}{b-\theta}+ 113\right)} \le \frac{b-\theta}{4nY_0(B)+2}. $$ Finally, for the selection of the coordinates $\sigma_k$ with $k\ge 1$ we work similarly. We are to choose a coordinate $\sigma_k\in[a,b]$ so that the vertical segment $W_k:=[\sigma_k+it_k,\sigma_k+i(t_k+2-d)]$ -- and then consequently also $V_k\subset W_k$ -- avoids all translated zeroes $\rho+i\alpha$ of $\zeta$ by at least $d$. Recall that the value $t_k$ was chosen\footnote{At least for $k\ge 2$. For $k=1$ there is, technically, a problem with the argument because $t_0:=0$, and not $3$. But we can carry out this calculation taking, technically, a modified $t_0:=3$ here, so that the choice of $t_1$ is in the right interval. We leave to the reader the checking of the necessary technical modifications and that the estimates obtained above remain valid also for $k=1$.} from $[t_{k-1}+1+d,t_{k-1}+2-d]$, so that any zeta-zero closer than $d$ must have imaginary part $\gamma+\alpha \in [t_{k-1}+1,t_{k-1}+2]$ and real part $\beta\ge a-d > p$. The number of all these zeroes is $N(p,t_{k-1}+1-\alpha,t_{k-1}+2-\alpha)$.
As $\alpha\in[-B,B]$ we find $N(p,t_{k-1}+1-\alpha,t_{k-1}+2-\alpha) \le \max_{t_{k-1}-B +1 \le R\le t_{k-1}+B+1} N(p,R,R+1)= \max_{0 \le R\le t_{k-1}+B+1} N(p,R,R+1) \le \max(N(p,6) ~,~\max_{5 \le R \le t_{k-1}+B+1} N(p,R,R+1)) \le X_0(\max(6,t_{k-1}+B+1))$, according to \eqref{Np6} and \eqref{Np5R} above. Therefore, noting that $t_{k-1}+1\le t_k$ and using the monotonicity of $X_0$ again, we find that the prohibited set is of measure $\le 2nd X_0(\max(6,t_k+B))\le 2nd X_0(t_k+B+2)$ -- using $t_k\ge 4$, i.e. $t_k+2\ge 6$ here -- while the room available for our choice of $\sigma_k$ is again $b-d-a=\frac{b-\theta}{2}-d$. Whence there is a choice for $\sigma_k$ whenever $d < \frac{b-\theta}{4n X_0(t_k+B+2)+2}$. Here we have for $s\in V_k \subset \Gamma$ that $t_k \le t \le t_{k+1}\le t_k+2$, so that the term $\log(R+1)=\log((t_k+B+2)+1)$, occurring in $X_0(t_k+B+2)$, can be estimated from above by $\log(t+B+3)$. Thus for finding an appropriate value of $\sigma_k$ it also suffices to have $$ d < \frac{(b-\theta)^2}{4n \left\{\log (t+B+3) + 25 \log(A+\kappa)+12.5 \log\dfrac{1}{b-\theta}+57 \right\}+2(b-\theta)}. $$ It is now clear that this is always satisfied whenever \eqref{ddist-corr} holds; therefore we indeed have an appropriate choice for $\sigma_k \in[a,b]$, as wanted.
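The comparison behind this last implication can be spelled out coefficient by coefficient (a verification only, not part of the original text):

```latex
% \eqref{ddist-corr} implies the displayed condition since, term by term,
\[
  \log(t+B+3) \le 4.4\log(t+B+5),\qquad 25\le 51,\qquad 12.5\le 31,
\]
% and for the constants, using b-\theta<1 and n\ge 1,
\[
  4n\cdot 57 + 2(b-\theta) \le 4n\cdot 57 + 2n \le 4n\cdot 113 .
\]
```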
\end{proof} \begin{lemma}\label{l:zzpongamma-c} For any $0<\theta<b<1$ and any translation set ${\mathcal A}\subset [-iB,iB]$ symmetric with respect to ${\mathbb R}$, on the broken line $\Gamma=\Gamma_b^{{\mathcal A}}$ constructed in Lemma \ref{l:path-translates} above, as well as on the horizontal line segments $H_k:=[a+it_k,2+it_k]$ and $\overline{H_k}$, $k=1,\dots,\infty$, with $a:=\frac{b+\theta}{2}$, we have uniformly for all $\alpha \in {\mathcal A}$ \begin{equation}\label{linezest-c} \left| \frac{\zeta'}{\zeta}(s+i\alpha) \right| \le n \frac{1-\theta}{(b-\theta)^{3}} \left(6 \log(|t|+B+5)+60\log(A+\kappa) + 40 \log\frac1{b-\theta}+ 140\right)^2. \end{equation} \end{lemma} \begin{proof} By symmetry, it suffices to work out the estimates on the upper half-plane, i.e. for an arbitrary $s\in \Gamma_{+} \cup (\cup_{k=1}^\infty H_k)$. Let us fix such a value of $s=\sigma+it$, as well as an $\alpha\in {\mathcal A}$. Formulae \eqref{zlogprime} and \eqref{zlogprime-tsmall} of Lemma \ref{l:borcar} provide, with $z:=s+i\alpha=\sigma+i(t+\alpha)$ and with $\epsilon=1$ or $0$ according as $|t+\alpha|<5.3$ or not, the estimate \begin{align}\label{logzetaprimesum} \notag \left| \frac{\zeta'}{\zeta}(s+i\alpha) \right| & \le \left|\sum_{\rho\in S} \frac{1}{(s+i\alpha)-\rho}\right| + \epsilon \left|\frac{1}{(s+i\alpha)-1}\right| + \\ & + \frac{9(1-\theta)}{(\sigma-\theta)^2} \left(34+14\log(A+\kappa)+18\log \frac{1}{\sigma-\theta} + 5\log_{+} |t+\alpha|\right), \end{align} where $S$ stands for the multiset of $\zeta$-zeroes within distance $\delta:=\frac{\sigma-\theta}{3}$ from $z=s+i\alpha$. Note that the leftmost point of this neighborhood of $z=s+i\alpha$ is $\frac{2\sigma+\theta}{3}+i(t+\alpha)$, still lying to the right of $q+i(t+\alpha)$ with $q:=\frac{2a+\theta}{3}=\frac{b+2\theta}{3}$.
Therefore these neighboring zeros belong also to the disk centered at $p+i(t+\alpha)$ (where $p:=1+(1-\theta)$) and of radius $r:=p-q$, with the above $q$. By Remark \ref{r:alsoindisk}, the number $N$ of these zeroes is thus estimated by \begin{align*} N:=N(S) & \le \frac{1-\theta}{\frac32(q-\theta)}\left(\log_{+}|t+\alpha| + 6\log(A+\kappa) + 6\log\frac1{1-\theta}+14\right) \notag \\ & \le \frac{1-\theta}{b-\theta}\left(2\log_{+}(t+B) + 12\log(A+\kappa) + 12\log\frac1{1-\theta}+28\right). \end{align*} Further, the absolute value of each term in the sum $\sum_{\rho\in S} \frac{1}{(s+i\alpha)-\rho}$ over $S$ is at most $1/d(t)$, with the value of $d(t)$ given in \eqref{ddist-corr}, whence\footnote{Using also the inequality $(3X+30Y+20Z+59)^2\ge (2X+12Y+12Z+28)(4.4X+51Y+31Z+113)$, valid for all $X,Y,Z\ge 0$.} we get \begin{align*} \left|\sum_{\rho\in S} \frac{1}{(s+i\alpha)-\rho}\right| \le &\frac{4n(1-\theta)}{(b-\theta)^3} \left(2 \log(t+B+5)+12\log(A+\kappa) +12\log\frac1{b-\theta}+28\right) \\ & \qquad \times\left(4.4 \log(t+B+5) + 51 \log (A+\kappa) + 31 \log\frac{1}{b-\theta}+ 113\right) \\ & \le \frac{4n(1-\theta)}{(b-\theta)^3} \left(3\log(t+B+5)+30\log(A+\kappa) + 20 \log\frac1{b-\theta}+ 59\right)^2. \end{align*} The rightmost expression in \eqref{logzetaprimesum} is an order of magnitude smaller. Indeed, using $\sigma\ge a =\frac{b+\theta}{2}$, $\log\frac{1}{\sigma-\theta}\le \log\frac{2}{b-\theta}$, $\log_{+}(|t+\alpha|)\le \log(t+B+5)$ and $1\le n/(b-\theta)$, we get the upper bound \begin{align*} &\le \frac{4n(1-\theta)}{(b-\theta)^3} \cdot 9 \cdot \left(5\log_{+}(t+B)+14\log(A+\kappa)+18\log \frac{1}{b-\theta} + 47\right) \\ & < \frac{4n(1-\theta)}{(b-\theta)^3} \cdot 18 \cdot \left(3\log(t+B+5)+30\log(A+\kappa) + 20 \log\frac1{b-\theta}+ 59\right).
\end{align*} Finally, in case $\epsilon=1$, we have a term $|1/(\sigma+i(t+\alpha)-1)|\le 1/d(t)$, with the value of $d(t)$ given in \eqref{ddist-corr}. Let us put $X:=3\log(t+B+5)+30\log(A+\kappa) + 20 \log\frac1{b-\theta}+ 59$. Then it is clear that we have $\frac{1}{d} \le \frac{4n(1-\theta)}{(b-\theta)^3} \left(4.4 \log(|t|+B+5) + 51 \log (A+\kappa) + 31 \log\frac{1}{b-\theta}+ 113\right) \le \frac{4n(1-\theta)}{(b-\theta)^3}\, 2X$. Collecting our estimates and substituting into \eqref{logzetaprimesum} we thus get $$ \left| \frac{\zeta'}{\zeta}(s+i\alpha) \right| \le \frac{4n(1-\theta)}{(b-\theta)^3} \left\{ X^2 + 18X + 2X \right\} = \frac{4n(1-\theta)}{(b-\theta)^3} (X^2+20 X) \le \frac{4n(1-\theta)}{(b-\theta)^3} (X+10)^2. $$ The assertion of the Lemma is proved. \end{proof} \section{A Riemann--von Mangoldt type formula for the distribution of primes}\label{sec:sumrho} In the following we will denote by ${\mathcal Z}(\Gamma)$ the set of $\zeta$-zeroes lying to the right of $\Gamma$, and by ${\mathcal Z}(\Gamma,T)$ the set of those zeroes $\rho=\beta+i\gamma\in {\mathcal Z}(\Gamma)$ which satisfy $|\gamma|\leq T$. \begin{theorem}[Riemann--von Mangoldt formula]\label{l:vonMangoldt} Let $\theta<b<1$ and let $\Gamma=\Gamma_b^{\{0\}}$ be the curve defined in Lemma \ref{l:path-translates} for the one-element set ${\mathcal A}:=\{0\}$, with $t_k$ denoting the corresponding ordinates in the construction. Then for any $k=1,2,\ldots$ and $4 \leq t_k<x$ we have $$ \psi(x)=x - \sum_{\rho \in {\mathcal Z}(\Gamma,t_k)} \frac{x^{\rho}}{\rho} + O\left( \frac{1-\theta}{(b-\theta)^{3}} \left(A+\kappa+\log \frac{x}{b-\theta}\right)^3 x^b \right). $$ \end{theorem} \begin{proof} Analogously to the effective version of the classical Perron formula, as given e.g. in \cite{T}, Th\'eor\`eme 2, Chapitre II.2, p.
135, for any fixed $T,x>1$, $1<p<2$ one can prove the effective representation $$ {\mathcal p}si(x)= \frac{1}{2{\mathcal p}i i} \int_{p-iT}^{p+iT} -\frac{\zeta'}{\zeta}(s) \frac{x^s}{s}ds + O\left( x^p \sum_{g\in{\mathcal G}} \frac{\Lambda(g)}{|g|^p(1+T |\log(x/|g|)|)}\right). $$ Here we move the path of integration to the curve ${\mathcal G}amma$ of Lemma \ref{l:path-translates}: more precisely, to the curve ${\mathcal G}amma_T$ consisting of the part of ${\mathcal G}amma$ in the strip $-T\leq \Im s \leq T$ joined by the two segments $[p-iT,q-iT]$ and $[q+iT,p+iT]$, with $q{\mathcal p}m iT\in {\mathcal G}amma$. Obviously, the set of zeros of $\zeta$ in the domain encircled by $[p-iT,p+iT]$ and ${\mathcal G}amma_T$ is ${\mathcal Z}({\mathcal G}amma,T)$. Assuming that no zero lies on ${\mathcal G}amma_T$, an application of the residue theorem yields \begin{equation}\label{psifirstform} {\mathcal p}si(x)=x-\sum_{\rho \in {\mathcal Z}({\mathcal G}amma,T)} \frac{x^{\rho}}{\rho}+\int_{{\mathcal G}amma_T} -\frac{\zeta'}{\zeta}(s) \frac{x^s}{s}ds + O\left(\int_1^{\infty} \frac{x^p}{u^p(1+T|\log\frac{x}{u}|)} d{\mathcal p}si(u)\right). \end{equation} Now let us put $T=t_k ({\mathcal g}eq 4)$ and write $a:=\frac{b+\theta}{2}\in (\theta,1)$.
Since the arc length of ${\mathcal G}amma_{t_k}$ is at most $3t_k$, by an application of Lemma \ref{l:zzpongamma-c} and using that $|1/s|\le 1/\sqrt{a^2+t^2}$ for all $s=\sigma+it\in{\mathcal G}amma$ we find \begin{align}\label{intGammaT} \int_{{\mathcal G}amma_T} \left|\frac{\zeta'}{\zeta}(s)\frac{x^s}{s}\right||ds| & \ll \frac{1-\theta}{(b-\theta)^{3}} \left(\log(A+\kappa)+1+\log t_k + \log \frac1{b-\theta}\right)^2 \left(\frac{x^p}{t_k} + x^b \int_0^{t_k} \frac1{\sqrt{a^2+u^2}} du\right) \notag \\ &\ll \frac{1-\theta}{(b-\theta)^{3}} \left(\log(A+\kappa)+1+\log t_k + \log \frac{1}{b-\theta}\right)^2 \left(\frac{x^p}{t_k} + x^b \log (t_k/a) \right). \end{align} Note that $a=\frac{b+\theta}{2}>b-\theta$, so that $\log(t_k/a)\le \log t_k +\log\frac{1}{b-\theta}$, and that $\log(A+\kappa)+1+\log\frac{1}{b-\theta}+\log(t_k+5) \ll (A+\kappa)+\log(\frac{x}{b-\theta})$ for $t_k{\mathcal g}e 4$, whence we get finally \begin{equation}\label{intGammaTfin} \int_{{\mathcal G}amma_T} \left|\frac{\zeta'}{\zeta}(s)\frac{x^s}{s}\right||ds| \ll \left((A+\kappa)+\log(\frac{x}{b-\theta}) \right)^3 \left(\frac{x^p}{t_k} + x^b \log (t_k) \right). \end{equation} For the $O$-term in \eqref{psifirstform} we carry out a detailed computation here. First, observe that by the definition of $\Lambda(g)$ we have $d{\mathcal p}si(u)\leq \log u ~d{{\mathcal N}}(u)$ (the inequality interpreted as between positive measures), whence it suffices to treat \begin{equation}\label{Jdef} J:=J(x,p,T):=x^p \int_1^{\infty} \frac{\log u}{u^p(1+T|\log(x/u)|)} d{\mathcal N}(u). \end{equation} First we approximate $\log u$ in the integral by $\log x$, with an error \begin{equation}\label{logxuerror} \notag \int_1^{\infty} \frac{|\log (x/u)|}{u^p(1+T|\log(x/u)|)} d{\mathcal N}(u) \leq \int_1^{\infty} \frac{1}{Tu^p} d{\mathcal N}(u)=\frac{1}{T}\zeta(p)\leq \frac{A+\kappa}{T}\frac{p}{p-1} , \end{equation} invoking \eqref{zsintheright} in the last step.
Writing ${\mathcal N}(u)=\kappa u + {\mathcal R} (u)$, we thus find \begin{equation}\label{Japprox} \left| J -\kappa \log x I - x^p \log x L\right| \leq \frac{2(A+\kappa)}{(p-1)T}x^p, \end{equation} where \begin{equation}\label{Idef} I:= \int_1^\infty \left(\frac{x}{u}\right)^p \frac{du}{1+T|\log(x/u)|} \end{equation} and \begin{equation}\label{Ldef} L:= \int_1^\infty\frac1{u^p} \frac{d{\mathcal R}(u)}{1+T|\log(x/u)|}. \end{equation} Cutting the interval of integration at $x$, we denote by $I'$ and $L'$ the integrals on $[1,x]$, and by $I''$, $L''$ the integrals on $[x,\infty)$. On the interval $[1,x]$ the absolute value is $\log(x/u)$, and conversely, on $[x,\infty)$ it is $\log(u/x)$. On both parts, the integrand is a nice, continuously differentiable function. Substituting $v:=x/u$ and then $w:=T\log v$ we obtain $$ I'=\int_1^x \frac{xv^{p-2}dv}{1+T\log v}=\frac{x}{T} \int_0^{T\log x}\frac{e^{{\alpha} w}dw}{1+w}{\mathcal q}quad\textrm{with}{\mathcal q}uad {\alpha}:=\frac{p-1}{T}. $$ On $[0,1/{\alpha}-1]$ the exponential is at most $e$, so that part of the integral is at most $e\log(1/{\alpha})$. The rest is at most $$ \int_{1/{\alpha}-1}^{T\log x} {\alpha} e^{{\alpha} w}dw < e^{{\alpha} T\log x}=x^{{\alpha} T}, $$ whence \begin{equation}\label{Iprime} I'\leq \frac{e x\left(\log T+\log\frac{1}{p-1}\right)}{T} + \frac{x^p}{T}. \end{equation} To estimate $I''$ we substitute first $v:=u/x$, and second $w:=T\log v$ to find $$ I'' =x \int_1^{\infty} \frac{dv}{v^p(1+T\log v)} = \frac{x}{T} \int_0^{\infty} \frac{e^{-{\alpha} w} dw}{1+w}, $$ with the very same ${\alpha}$.
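As a quick numerical sanity check (illustrative only, not part of the proof), one can verify the bound $\int_0^{\infty} e^{-{\alpha} w}/(1+w)\,dw \le \log(1/{\alpha})+2$, which is established analytically in the displayed calculation \eqref{alphacalc}, for a few sample values of ${\alpha}$; the cutoff and step size below are ad hoc choices:

```python
# Illustrative numerical check (not part of the proof) of the bound
#   int_0^infty e^{-a w}/(1+w) dw  <=  log(1/a) + 2,   0 < a < 1,
# which the text establishes by integration by parts.
import math

def tail_integral(a, cutoff=3000.0, step=0.01):
    """Trapezoidal approximation of int_0^cutoff e^{-a*w}/(1+w) dw.
    The tail beyond the cutoff is negligible for the values of a used here."""
    n = int(cutoff / step)
    total = 0.0
    f_prev = 1.0  # integrand at w = 0
    for i in range(1, n + 1):
        w = i * step
        f = math.exp(-a * w) / (1.0 + w)
        total += 0.5 * (f_prev + f) * step
        f_prev = f
    return total

for a in (0.5, 0.1, 0.01):
    assert tail_integral(a) <= math.log(1.0 / a) + 2
```

For ${\alpha}=0.01$, for instance, the integral is about $4.08$ while the bound is $\log 100 + 2 \approx 6.61$.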
Here we can calculate further as \begin{align}\label{alphacalc} \int_0^{\infty} \frac{e^{-{\alpha} w} dw}{1+w} & =\left[ \frac{-e^{-{\alpha} w}}{{\alpha} (1+w)} \right]_0^{\infty} -\frac{1}{{\alpha}} \int_0^{\infty} \frac{e^{-{\alpha} w} dw}{(1+w)^2} = \frac{1}{{\alpha}} \int_0^{\infty} \frac{1-e^{-{\alpha} w} }{(1+w)^2}dw \notag \\& \leq \frac{1}{{\alpha}} \int_0^{\infty} \frac{\min(1,{\alpha} w)}{(1+w)^2}dw = \int_0^{1/{\alpha}} \frac{w ~dw}{(1+w)^2} + \frac{1}{{\alpha}} \int_{1/{\alpha}}^{\infty} \frac{dw}{(1+w)^2} \notag \\& \leq \log(1+1/{\alpha}) +\frac{1}{1+{\alpha}} \leq \log(1/{\alpha}) + 2 \\ &= \log T +\log \frac{1}{p-1} + 2. \notag \end{align} So in all we obtain \begin{equation}\label{Iest} I\ll \frac{x^p + x\log T+x\log\frac{1}{p-1}}{T}. \end{equation} In case of $L$ we must pay some attention to the possible sign changes of ${\mathcal R}$, although the order of this term is in general smaller. Now \begin{align*} L'& =\left[ \frac{{\mathcal R}(u)}{u^p(1+T\log(x/u))}\right]_1^x + \int_1^x \frac{{\mathcal R}(u)\left({p-T(1+T\log(x/u))^{-1}}\right)}{u^{p+1}(1+T\log(x/u))} du \end{align*} so \begin{equation}\label{Lprime1} \left| L'-\frac{{\mathcal R}(x)}{x^p} \right| \leq \int_1^x\frac{|{\mathcal R}(u)|\left({p(1+T\log(x/u))+T}\right)} {u^{p+1}(1+T\log(x/u))^2} du . \end{equation} Writing in $|{\mathcal R}(u)|\leq Au^\theta$ and substituting $v:=x/u$ we obtain\footnote{To derive the last line from the last but one, we calculate the integral as follows. $\int_T^{T\log x} \frac{e^{\beta\xi}}{\xi}d\xi = \left[ \frac{e^{\beta\xi}}{\beta \xi} \right]_T^{T\log x} + \int_T^{T\log x} \frac{e^{\beta\xi}}{\beta\xi^2}d\xi < \frac{e^{(p-\theta)\log x}}{(p-\theta)\log x} + \int_{p-\theta}^{(p-\theta)\log x} \frac{e^vdv}{v^2}$. If $p-\theta <1$, the part of the last integral below $v=1$ does not exceed $e/(p-\theta)<e/(p-1)$. Here we distinguish two cases, according to $(p-\theta)\log x{\mathcal g}eq 1$ or not. 
In the first case the integral for $1\leq v \leq (p-\theta)\log x$ is at most $e^{(p-\theta)\log x}\int_1^{(p-\theta)\log x} \frac{dv}{v^2}<x^{p-\theta}$, and also $\frac{e^{(p-\theta)\log x}}{(p-\theta)\log x}\leq x^{p-\theta}$, which yields the estimate $\ll 1/(p-1)+x^{p-\theta}$. On the other hand for $(p-\theta)\log x<1$ we have $\frac{e^{(p-\theta)\log x}}{(p-\theta)\log x} < \frac{e}{(p-1)\log 4} \ll 1/(p-1)$ and the overall estimate is of the same order.} \begin{align}\label{Lprime} \left| L'-\frac{{\mathcal R}(x)}{x^p} \right| & \leq x^{\theta-p-1} \int_1^x\frac{Av^{1+p-\theta}\left({2(1+T\log v)+T}\right)} {(1+T\log v)^2} \frac{x dv}{v^2} \notag \\ & \leq 2AT x^{\theta-p} \int_1^x\frac{v^{p-\theta}(1+\log v)} {(1+T\log v)^2} \frac{dv}{v} \notag \\& = 2AT x^{\theta-p} \int_0^{\log x} e^{(p-\theta)w} \frac{1+w}{(1+Tw)^2} dw \notag \\ & = 2 AT x^{\theta-p} \left( \int_0^1 +\int_1^{\log x} \right) \\ & \ll AT x^{\theta-p}\int_0^1 \frac{dw}{(1+Tw)^2} + A x^{\theta-p} \int_1^{\log x} \frac{e^{(p-\theta)w}}{1+Tw} dw \notag \\ & = Ax^{\theta-p} \left(1 -\frac{1}{T+1}+\frac1T \int_T^{T\log x} \frac{e^{\beta \xi}}{1+\xi} d\xi\right) {\mathcal q}quad \left(\textrm{with} ~\beta:=\frac{p-\theta}{T}\right) \notag \\ & \ll Ax^{\theta-p} \left(1+\frac{1}{(p-1)T} + \frac{x^{p-\theta}}{T}\right) = Ax^{\theta-p}\left(1+\frac{1}{(p-1)T} \right) + \frac{A}{T} \notag. 
\end{align} Similarly, integration by parts gives \begin{align*} L''& =\left[ \frac{{\mathcal R}(u)}{u^p(1+T\log(u/x))}\right]_x^{\infty} + \int_x^{\infty} \frac{{\mathcal R}(u)\left({p+T(1+T\log(u/x))^{-1}}\right)}{u^{p+1}(1+T\log(u/x))} du \end{align*} whence, with the very same parameter $\beta$, \begin{align}\label{L2prime} \left| L'' + \frac{{\mathcal R}(x)}{x^p} \right| & \leq A \int_x^{\infty} \frac{u^{\theta-p-1}\left|{T+p(1+T\log(u/x))}\right|} {(1+T\log(u/x))^2} du \notag \\ &= A x^{\theta-p} \int_1^{\infty} \frac{v^{\theta-p-1}\left|{T+p(1+T\log v)}\right|} {(1+T\log v)^2} dv \notag \\ &\leq A x^{\theta-p} \int_0^{\infty} \frac{e^{-\beta w}p \left({T+1+w}\right)} {(1+w)^2} \frac{dw}{T} \notag \\ & \leq 2 A x^{\theta-p} \int_0^{\infty} {e^{-\beta w}}\left(\frac1{T(1+w)} +\frac{1}{(1+w)^2}\right) {dw} \notag \\ & \leq 2 A x^{\theta-p} \left( \frac1T \log\frac{T}{p-\theta} + \frac 2T +1 \right) \end{align} estimating the integral as in \eqref{alphacalc}. Adding \eqref{Lprime} and \eqref{L2prime} we thus arrive at \begin{equation}\label{Lestimate} |L| \ll \frac{A}{T} + A x^{\theta-p}\left(1+\frac{1}{(p-1)T} + \frac{\log\frac{T}{p-1}}{T}\right). \end{equation} Taking into account \eqref{Iest} and \eqref{Lestimate} in \eqref{Japprox}, we are led to \begin{align}\label{Jfinal} J & \ll \frac{(A+\kappa)x^p}{(p-1)T} + \kappa \log x \frac{x^p + x\log \frac{T}{p-1}}{T} + A x^p \log x \left( \frac{1}{T} + x^{\theta-p}+ \frac{x^{\theta-p}}{(p-1)T} + \frac{ x^{\theta-p}\log\frac{T}{p-1}}{T}\right)\notag \\ & \ll (A+\kappa) \left( \frac{x^p \log x}{T} + \frac{x^p+x^\theta\log x}{(p-1)T}+ x \log x \frac{\log\frac{T}{p-1}}{T} + x^\theta\log x \right). \end{align} Let us now take $p:=1+1/\log x$ and apply the above for $T=t_k{\mathcal g}e 4$ and $x{\mathcal g}e t_k$ to get \begin{equation}\label{Jfinalsimpler} J \ll \left((A+\kappa)+\log x + \log\frac{1}{b-\theta}\right)^3 \left(\frac{x}{t_k}+x^\theta\right).
\end{equation} Finally, using the estimates of \eqref{intGammaTfin} (applied for $T:=t_k$) and \eqref{Jfinalsimpler} in \eqref{psifirstform}, and taking into account that $4\le t_k\le x$ implies $x/t_k\le 4x^b$, the assertion of the Theorem follows. \end{proof} \end{document}
{\mathfrak b}egin{document} \title{Chen's primes and ternary Goldbach problem} {\mathfrak a}uthor{Hongze Li} {\mathfrak a}ddress{ Department of Mathematics, Shanghai Jiaotong University, Shanghai 200240, People's Republic of China} \email{[email protected]} {\mathfrak a}uthor{Hao Pan} {\mathfrak a}ddress{Department of Mathematics, Nanjing University, Nanjing 210093, People's Republic of China}\email{[email protected]} \keywords{Chen prime; Ternary Goldbach problem; Rosser's weight} \subjclass[2000]{Primary 11P32; Secondary 11N36, 11P55}\thanks{This work was supported by the National Natural Science Foundation of China (Grant No. 10771135).} \mathfrak maketitle {\mathfrak b}egin{abstract} We prove that there exists a positive integer $k_0$ such that every sufficiently large odd integer $n$ with $3\mathfrak mid n$ can be represented as $p_1+p_2+p_3$, where $p_1,p_2$ are Chen's primes and $p_3$ is a prime such that $p_3+2$ has at most $k_0$ prime factors. \end{abstract} \section{Introduction} \setcounter{equation}{0} \setcounter{Thm}{0} \setcounter{Lem}{0} \setcounter{Cor}{0} Let $\mathfrak mathcal P$ denote the set of all primes. Define $$ \mathfrak mathcal P_k=\{n:\, n\in \mathfrak mathbb{N} \text{ and } n\text{ has at most }k\text{ prime divisors}\} $$ and $$ \mathfrak mathcal P_k^{(2)}=\{p\in\mathfrak mathcal P:\, p+2\in\mathfrak mathcal P_k\}. $$ The well-known twin primes conjecture asserts that $\mathfrak mathcal P_1^{(2)}$ has infinitely many elements. Nowadays, the best result on the twin primes conjecture belongs to Chen \cite{Chen73}, who proved that $\mathfrak mathcal P_2^{(2)}$ has infinitely many elements. In fact, Chen proved that for sufficiently large $x$, $$ |\{p\in\mathfrak mathcal P_2^{(2)}:\, p\leq x,\ (p+2,P(x^{1/10}))=1\}|\gg\frac{x}{(\log x)^2}, $$ where $$ P(z)=\prod_{p<z}p. $$ In Iwaniec's unpublished notes \cite{Iwaniec96}, the exponent $1/10$ can be improved to $3/11$.
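As a concrete illustration of the set $\mathfrak mathcal P_2^{(2)}$ (a small computational sketch, not part of the paper), one can list the primes $p$ below $200$ for which $p+2$ has at most two prime factors counted with multiplicity:

```python
# Illustrative sketch: enumerate small members of P_2^{(2)}, i.e. primes p
# such that p + 2 has at most 2 prime factors (with multiplicity), the set
# shown by Chen to be infinite.

def num_prime_factors(n):
    """Number of prime factors of n counted with multiplicity (Omega)."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

def is_prime(n):
    return n >= 2 and num_prime_factors(n) == 1

chen_like = [p for p in range(2, 200)
             if is_prime(p) and num_prime_factors(p + 2) <= 2]
print(chen_like)
```

The list begins $2, 3, 5, 7, 11, 13, \ldots$; note that, e.g., $43$ is excluded since $45=3^2\cdot 5$ has three prime factors.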
In \cite{GreenTao08}, Green and Tao call a prime $p$ a Chen's prime if $p\in\mathfrak mathcal P_2^{(2)}$. On the other hand, in 1937 Vinogradov \cite{Vinogradov37} solved the ternary Goldbach problem and showed that every sufficiently large odd integer can be represented as the sum of three primes. Two years later, using Vinogradov's method, van der Corput \cite{Corput39} proved that the primes contain infinitely many non-trivial three-term arithmetic progressions (3APs). In 1999, with the help of the vector sieve method, Tolev \cite{Tolev99} proved that there exist infinitely many non-trivial 3APs $\{p_1,p_2,p_3\}$ of primes satisfying $p_1\in\mathfrak mathcal P_4^{(2)}$, $p_2\in\mathfrak mathcal P_5^{(2)}$ and $p_3\in\mathfrak mathcal P_{11}^{(2)}$. However, in \cite{GreenTao08}, with the help of the Szemer\'edi theorem, their transference principle and a result of Goldston and Y\i ld\i r\i m, Green and Tao proved that the primes contain arbitrarily long non-trivial arithmetic progressions. Certainly this is a remarkable breakthrough in additive number theory. Furthermore, Green and Tao claimed that, using their method, one can prove that Chen's primes contain arbitrarily long non-trivial arithmetic progressions. For the 3APs of Chen's primes, they gave a detailed proof in \cite{GreenTao06}. Let us return to the ternary Goldbach problem. In \cite{Peneva00}, using Tolev's method, Peneva proved that every sufficiently large odd integer $n$ with $3\mathfrak mid n$ can be represented as $n=p_1+p_2+p_3$ with $p_i\in\mathfrak mathcal P_{k_i}^{(2)}$, $k_1=k_2=5$ and $k_3=8$. Subsequently, Tolev \cite{Tolev00} improved Peneva's result to $k_1=2$, $k_2=5$ and $k_3=7$. Recently, Meng \cite{Meng07} proved that every sufficiently large odd integer $n$ with $3\nmid n-1$ can be represented as $n=p_1+p_2+p_3$, where $p_1$ is a Chen's prime, $p_2\in\mathfrak mathcal P_3^{(2)}$ and $p_3\in\mathfrak mathcal P$ (not of special type!).
Of course, we wish to prove that every sufficiently large odd integer $n$ with $3\mathfrak mid n$ can be represented as the sum of three Chen's primes. Unfortunately, as we shall see later, this does not seem easy. The key to Green and Tao's proof in \cite{GreenTao06} is to transfer Chen's primes to a subset of $\mathfrak mathbb Z_N=\mathfrak mathbb Z/N\mathfrak mathbb Z$ (where $N$ is a large prime) with positive density. But this density is too small. However, in the present paper, we shall prove the following result: {\mathfrak b}egin{Thm} \label{chengoldbach} There exists a positive integer $k_0$ such that every sufficiently large odd integer $n$ with $3\mathfrak mid n$ can be represented as $p_1+p_2+p_3$, where $p_1,p_2$ are Chen's primes and $p_3\in\mathfrak mathcal P_{k_0}^{(2)}$. \end{Thm} In Sections 2 and 3, we shall estimate some exponential sums involving primes of special type. The proof of Theorem \ref{chengoldbach} will be given in Section 4. \section{The Minor Arcs} \setcounter{equation}{0} \setcounter{Thm}{0} \setcounter{Lem}{0} \setcounter{Cor}{0} Let $k_0\geq 8$ be a fixed integer and $B=6^9$. Suppose that $n$ is a sufficiently large integer, $W>0$ is an even integer with $W\leq (\log n)^B$, and $1\leq b\leq W$ satisfies $(b(b+2),W)=1$. Let $\mathfrak mathbb T$ denote the torus $\mathfrak mathbb R/\mathfrak mathbb Z$. For $1\leq q\leq(\log n)^B$, define $$ \mathfrak mathfrak M_{a,q}=\{{\mathfrak a}lpha\in\mathfrak mathbb T:\,|{\mathfrak a}lpha q-a|\leq(\log n)^B/n\}. $$ Let $$ \mathfrak mathfrak M={\mathfrak b}igcup_{\substack{1\leq a\leq q\leq (\log n)^B\\(a,q)=1}}\mathfrak mathfrak M_{a,q} $$ and $\mathfrak m=\mathfrak mathbb T\setminus\mathfrak mathfrak M$. Let $D=n^{0.32}$ and $z_0=n^{1/k_0}$.
For a square-free number $d=p_1p_2\cdots p_k$ with primes $p_1>p_2>\cdots>p_k$, define Rosser's weights of order $D$ by $$ \lambda_{D}^{+}(d)={\mathfrak b}egin{cases} (-1)^k&\text{if }p_1\cdots p_{2l}p_{2l+1}^3<D\text{ for all }0\leq l\leq (k-1)/2,\\ 0&\text{otherwise}, \end{cases} $$ and $$ \lambda_{D}^{-}(d)={\mathfrak b}egin{cases} (-1)^k&\text{if }p_1\cdots p_{2l-1}p_{2l}^3<D\text{ for all }1\leq l\leq k/2,\\ 0&\text{otherwise}. \end{cases} $$ It is easy to see that $\lambda_D^{\pm}(d)=0$ if $d\geq D$. Let $F(s)$ and $f(s)$ denote the functions of the linear sieve. The following lemma is a fundamental result in sieve theory. {\mathfrak b}egin{Lem}[Iwaniec \cite{Iwaniec80a, Iwaniec80b}] \label{iwaniec} Suppose that $\mathfrak mathcal P_*$ is any set of primes and $\omega$ is a multiplicative function satisfying: $$ 0<\omega(p)<p\text{ for }p\in\mathfrak mathcal P_*,\ \omega(p)=0\text{ for }p\not\in\mathfrak mathcal P_*, $$ and $$ \prod_{z_1\leq p<z_2}{\mathfrak b}igg(1-\frac{\omega(p)}{p}{\mathfrak b}igg)^{-1}\leq\frac{\log z_2}{\log z_1}{\mathfrak b}igg(1+\frac{L}{\log z_1}{\mathfrak b}igg) $$ for a constant $L>0$ and for all $2\leq z_1\leq z_2$. Then we have {\mathfrak b}egin{align} \label{rosserF} \prod_{p<z}{\mathfrak b}igg(1-\frac{\omega(p)}{p}{\mathfrak b}igg)\leq&\sum_{d\mathfrak mid P_*(z)}\lambda_D^+(d)\frac{\omega(d)}{d}\notag\\ \leq&\prod_{p<z}{\mathfrak b}igg(1-\frac{\omega(p)}{p}{\mathfrak b}igg)(F(s)+O(e^{\sqrt{L}-s}(\log D)^{-1/3})), \end{align} provided that $2\leq z\leq D$, where $s=\log D/\log z$ and $$ P_*(z)=\prod_{p\in\mathfrak mathcal P_*\cap[1,z)}p. $$ Similarly, {\mathfrak b}egin{align} \label{rosserf} \prod_{p<z}{\mathfrak b}igg(1-\frac{\omega(p)}{p}{\mathfrak b}igg)\geq&\sum_{d\mathfrak mid P_*(z)}\lambda_D^-(d)\frac{\omega(d)}{d}\notag\\ \geq&\prod_{p< z}{\mathfrak b}igg(1-\frac{\omega(p)}{p}{\mathfrak b}igg)(f(s)+O(e^{\sqrt{L}-s}(\log D)^{-1/3})), \end{align} provided that $2\leq z\leq \sqrt{D}$.
Furthermore, for any square-free integer $q$, {\mathfrak b}egin{align} \label{rosserpm} \sum_{d\mathfrak mid q}\lambda_D^-(d)\leq\sum_{d\mathfrak mid q}\mathfrak mu(d)\leq\sum_{d\mathfrak mid q}\lambda_D^+(d). \end{align} \end{Lem} Define $$ S_{n,z_0}({\mathfrak a}lpha)=\sum_{\substack{p\leq n\\ p\equiv b\pmod{W}\\ (p+2,P(z_0))=1}}e({\mathfrak a}lpha(p-b)/W)\log p. $$ Clearly $$ S_{n,z_0}({\mathfrak a}lpha)=\sum_{\substack{p\leq n\\ p\equiv b\pmod{W}}}e({\mathfrak a}lpha(p-b)/W)\log p\sum_{d\mathfrak mid (p+2,P(z_0))}\mathfrak mu(d). $$ Let $$ S_{n,z_0}^{\pm}({\mathfrak a}lpha)=\sum_{\substack{p\leq n\\ p\equiv b\pmod{W}}}e({\mathfrak a}lpha(p-b)/W)\log p\sum_{d\mathfrak mid (p+2,P(z_0))}\lambda_D^\pm(d). $$ {\mathfrak b}egin{Lem} \label{spm} For any ${\mathfrak a}lpha\in\mathfrak mathbb T$, we have $|S_{n,z_0}^{+}({\mathfrak a}lpha)-S_{n,z_0}({\mathfrak a}lpha)|\leq S_{n,z_0}^{+}(0)-S_{n,z_0}(0)$ and $|S_{n,z_0}({\mathfrak a}lpha)-S_{n,z_0}^{-}({\mathfrak a}lpha)|\leq S_{n,z_0}(0)-S_{n,z_0}^-(0)$. \end{Lem} {\mathfrak b}egin{proof} By (\ref{rosserpm}), {\mathfrak b}egin{align*} |S_{n,z_0}^{+}({\mathfrak a}lpha)-S_{n,z_0}({\mathfrak a}lpha)|\leq&\sum_{\substack{p\leq n\\ p\equiv b\pmod{W}}}\log p{\mathfrak b}igg|\sum_{d\mathfrak mid (p+2,P(z_0))}\lambda_D^+(d)-\sum_{d\mathfrak mid (p+2,P(z_0))}\mathfrak mu(d){\mathfrak b}igg|\\ =&S_{n,z_0}^{+}(0)-S_{n,z_0}(0). \end{align*} The proof of the second inequality is similar. \end{proof} Let $\tau$ denote the divisor function. It is well known that $$ \sum_{d\leq X}\tau(d)^A\ll_AX(\log X)^{2^A-1} $$ and $$ \sum_{d\leq X}\frac{\tau(d)^A}{d}\ll_A(\log X)^{2^A}. $$ {\mathfrak b}egin{Lem} Suppose that $X,X',Y,Y',Z,Z'>0$ satisfy $X\leq X'\leq 2X$, $Y\leq Y'\leq 2Y$ and $XY\leq Z\leq Z'\leq X'Y'$. For any $d\geq 1$, let $u_d$, $v_d$, $w_d$ be complex numbers with $|u_d|, |v_d|, |w_d|\leq\tau(d)^{A}(\log(XY))^{A}$.
Suppose that $1\leq a\leq q$ with $(a,q)=1$, and ${\mathfrak a}lpha\in\mathfrak mathbb T$ with $|{\mathfrak a}lpha q-a|\leq 1/q$. Then {\mathfrak b}egin{align} \label{expsum1} &\sum_{X\leq x\leq X'}u_x{\mathfrak b}igg|\sum_{\substack{Y\leq y\leq Y'\\ Z\leq xy\leq Z'\\ xy\equiv b\pmod{W}}}v_ye({\mathfrak a}lpha (xy-b)/W)\sum_{\substack{d\mathfrak mid xy+2\\ d\leq D}}w_d{\mathfrak b}igg|\notag\\ \ll&_A XY(\log(DXYq))^{2^{2A+2}}\tau^{2A+2}(W){\mathfrak b}igg(\frac{1}{q}+\frac{D^2W}{X}+\frac{qW}{XY}{\mathfrak b}igg)^{1/4}\notag\\ &+XY^{1/2}(\log(DXY))^{2^{2A+2}}\tau^{2A+1}(W) \end{align} provided that $X\geq D^2W$. Furthermore, suppose that $0\leq v_{y_1}\leq v_{y_2}\leq (\log(XY))^A$ for any $y_1,y_2$ with $y_1\leq y_2$. Then {\mathfrak b}egin{align} \label{expsum2} &\sum_{X\leq x\leq X'}u_x{\mathfrak b}igg|\sum_{\substack{Y\leq y\leq Y'\\ Z\leq xy\leq Z'\\ xy\equiv b\pmod{W}}}v_ye({\mathfrak a}lpha (xy-b)/W)\sum_{\substack{d\mathfrak mid xy+2\\ d\leq D}}w_d{\mathfrak b}igg|\notag\\ \ll&_A XY(\log(DXYq))^{6^{A+2}}{\mathfrak b}igg(\frac{1}{q}+\frac{DW}{Y}+\frac{qW}{XY}{\mathfrak b}igg)^{1/4} \end{align} provided that $Y\geq DW$. 
\end{Lem} {\mathfrak b}egin{proof} By the Cauchy-Schwarz inequality, {\mathfrak b}egin{align*} &{\mathfrak b}igg(\sum_{X\leq x\leq X'}u_x{\mathfrak b}igg|\sum_{\substack{Y\leq y\leq Y'\\ Z\leq xy\leq Z'\\ xy\equiv b\pmod{W}}}v_ye({\mathfrak a}lpha (xy-b)/W)\sum_{\substack{d\mathfrak mid xy+2\\ d\leq D}}w_d{\mathfrak b}igg| {\mathfrak b}igg)^2\\ \leq&{\mathfrak b}igg(\sum_{X\leq x\leq X'}|u_x|^2{\mathfrak b}igg) {\mathfrak b}igg(\sum_{\substack{X\leq x\leq X'\\ d_1,d_2\leq D}}w_{d_1}\overline{w_{d_2}}\sum_{\substack{Y\leq y_1,y_2\leq Y'\\ Z\leq xy_1,xy_2\leq Z'\\ xy_1,xy_2\equiv b\pmod{W}\\ xy_i\equiv -2\pmod{d_i}\text{ for }i=1,2}}v_{y_1}\overline{v_{y_2}}e({\mathfrak a}lpha x(y_1-y_2)/W){\mathfrak b}igg)\\ \ll&_AX(\log X)^{6^A}\sum_{\substack{1\leq b'\leq W,\ (b',W)=1\\ Y\leq y_1,y_2\leq Y'\\ y_1\equiv y_2\equiv b'\pmod{W}\\ d_1,d_2\leq D}}v_{y_1}\overline{v_{y_2}}w_{d_1}\overline{w_{d_2}}\sum_{\substack{X\leq x\leq X'\\ Z\leq xy_1,xy_2\leq Z'\\ xb'\equiv b\pmod{W}\\ xy_i\equiv -2\pmod{d_i}\text{ for }i=1,2}}e({\mathfrak a}lpha x(y_1-y_2)/W).\\ \end{align*} We have {\mathfrak b}egin{align*} &\sum_{\substack{X\leq x\leq X'\\ Z\leq xy_1,xy_2\leq Z'\\ xb'\equiv b\pmod{W}\\ xy_i\equiv -2\pmod{d_i}\text{ for }i=1,2}}e({\mathfrak a}lpha x(y_1-y_2)/W)\\ =&\sum_{\substack{\mathfrak max\{X, Z/y_1, Z/y_2\}\leq x\leq \mathfrak min\{X', Z'/y_1,Z'/y_2\}\\ xb'\equiv b\pmod{W}\\ xy_i\equiv -2\pmod{d_i}\text{ for }i=1,2}}e({\mathfrak a}lpha x(y_1-y_2)/W)\\ \ll&\mathfrak min{\mathfrak b}igg\{\frac{X}{[d_1,d_2,W]},\frac{1}{\|{\mathfrak a}lpha [d_1,d_2,W](y_1-y_2)/W\|}{\mathfrak b}igg\}, \end{align*} where $\|\theta\|=\mathfrak min\{|\theta-t|:\, t\in\mathfrak mathbb Z\}$. 
Hence for each $1\leq b'\leq W$ with $(b',W)=1$, we have {\mathfrak b}egin{align*} &\sum_{\substack{Y\leq y_1,y_2\leq Y'\\ y_1\equiv y_2\equiv b'\pmod{W}\\ d_1,d_2\leq D} }v_{y_1}\overline{v_{y_2}}w_{d_1}\overline{w_{d_2}}{\mathfrak b}igg|\sum_{\substack{X\leq x\leq X'\\ Z\leq xy_1,xy_2\leq Z'\\ xb'\equiv b\pmod{W}\\ xy_i\equiv -2\pmod{d_i}\text{ for }i=1,2}}e({\mathfrak a}lpha x(y_1-y_2)/W){\mathfrak b}igg|\\ \ll&\sum_{\substack{1\leq y_1,y_2\leq Y'\\ y_1\equiv y_2\equiv b'\pmod{W}\\ d_1,d_2\leq D}}|v_{y_1}||v_{y_2}||w_{d_1}||w_{d_2}|\mathfrak min{\mathfrak b}igg\{\frac{X}{[d_1,d_2,W]},\frac{1}{\|{\mathfrak a}lpha [d_1,d_2,W](y_1-y_2)/W\|}{\mathfrak b}igg\}\\ \ll&Y(\log(XY))^{6^A}{\mathfrak b}igg(\sum_{\substack{h\leq D^2Y}}\tau^{4A+3}(hW)\mathfrak min{\mathfrak b}igg\{\frac{XY}{hW},\frac{1}{\|{\mathfrak a}lpha h\|}{\mathfrak b}igg\}+\frac{X}{W}\sum_{\substack{h\leq D^2}}\frac{\tau^{4A+2}(hW)}{h}{\mathfrak b}igg)\\ &(\text{where }h=[d_1,d_2,W](y_1-y_2)/W\text{ if }y_1>y_2, \text{ and }h=[d_1,d_2,W]/W\text{ if }y_1=y_2)\\ \ll&Y(\log(DXY))^{6^A+1}\tau^{4A+3}(W)\mathfrak max_{H\leq D^2Y}\sum_{\substack{H/2\leq h\leq H}}\tau^{4A+3}(h)\mathfrak min{\mathfrak b}igg\{\frac{XY}{HW},\frac{1}{\|{\mathfrak a}lpha h\|}{\mathfrak b}igg\}\\ &+\frac{XY\tau^{4A+2}(W)}{W}(\log(DXY))^{2^{4A+3}}. \end{align*} Applying Lemma 2.2 of \cite{Vaughan97}, for any $H\leq D^2Y$, {\mathfrak b}egin{align*} &\sum_{\substack{H/2\leq h\leq H}}\tau^{4A+3}(h)\mathfrak min{\mathfrak b}igg\{\frac{XY}{HW},\frac{1}{\|{\mathfrak a}lpha h\|}{\mathfrak b}igg\}\\ \leq&{\mathfrak b}igg(\sum_{\substack{h\leq H}}\tau^{8A+6}(h){\mathfrak b}igg)^{1/2}{\mathfrak b}igg(\frac{XY}{HW}\sum_{\substack{h\leq H}}\mathfrak min{\mathfrak b}igg\{\frac{XY}{hW},\frac{1}{\|{\mathfrak a}lpha h\|}{\mathfrak b}igg\}{\mathfrak b}igg)^{1/2}\\ \ll&_A{\mathfrak b}igg(\frac{X^2Y^2(\log(DYq))^{2^{8A+6}}}{W^2}{\mathfrak b}igg(\frac{1}{q}+\frac{D^2W}{X}+\frac{qW}{XY}{\mathfrak b}igg){\mathfrak b}igg)^{1/2}.
\end{align*} This concludes the proof of (\ref{expsum1}). Let us turn to (\ref{expsum2}). Clearly {\mathfrak b}egin{align*} &{\mathfrak b}igg(\sum_{X\leq x\leq X'}u_x{\mathfrak b}igg|\sum_{\substack{Y\leq y\leq Y'\\ Z\leq xy\leq Z'\\ xy\equiv b\pmod{W}}}v_ye({\mathfrak a}lpha (xy-b)/W)\sum_{\substack{d\mathfrak mid xy+2\\ d\leq D}}w_d{\mathfrak b}igg|{\mathfrak b}igg)^2\\ \leq&{\mathfrak b}igg(\sum_{X\leq x\leq X'}|u_x|^2{\mathfrak b}igg){\mathfrak b}igg(\sum_{X\leq x\leq X'}{\mathfrak b}igg|\sum_{\substack{Y\leq y\leq Y'\\ Z\leq xy\leq Z'\\ xy\equiv b\pmod{W}}}v_ye({\mathfrak a}lpha (xy-b)/W)\sum_{\substack{d\mathfrak mid xy+2\\ d\leq D}}w_d{\mathfrak b}igg|^2{\mathfrak b}igg)\\ \ll&X(\log X)^{6^A}\sum_{\substack{ 1\leq b'\leq W,\ (b',W)=1\\ X\leq x\leq X'\\ x\equiv b'\pmod{W}\\ d_1,d_2\leq D}}|w_{d_1}||w_{d_2}| \prod_{i=1}^2{\mathfrak b}igg|\sum_{\substack{Y\leq y_i\leq Y'\\ Z\leq xy_i\leq Z'\\ y_ib'\equiv b\pmod{W}\\ xy_i\equiv -2\pmod{d_i}}}v_{y_i}e({\mathfrak a}lpha xy_i/W){\mathfrak b}igg|. 
\end{align*} By the partial summation, {\mathfrak b}egin{align*} &\sum_{\substack{Y\leq y_i\leq Y'\\ Z\leq xy_i\leq Z'\\ y_ib'\equiv b\pmod{W}\\ xy_i\equiv -2\pmod{d_i}}}v_{y_i}e({\mathfrak a}lpha xy_i/W)\\ =&v_{Y'}\sum_{\substack{1\leq y_i\leq Y'\\ Z\leq xy_i\leq Z'\\ y_ib'\equiv b\pmod{W}\\ xy_i\equiv -2\pmod{d_i}}}e({\mathfrak a}lpha xy_i/W)- v_{Y}\sum_{\substack{1\leq y_i\leq Y-1\\ Z\leq xy_i\leq Z'\\ y_ib'\equiv b\pmod{W}\\ xy_i\equiv -2\pmod{d_i}}}e({\mathfrak a}lpha xy_i/W)\\ &-\sum_{Y\leq Y''\leq Y'-1}(v_{Y''+1}-v_{Y''})\sum_{\substack{1\leq y_i\leq Y''\\ Z\leq xy_i\leq Z'\\ y_ib'\equiv b\pmod{W}\\ xy_i\equiv -2\pmod{d_i}}}e({\mathfrak a}lpha xy_i/W)\\ \ll&{\mathfrak b}igg(v_{Y'}+v_{Y} +\sum_{Y\leq Y''\leq Y'-1}(v_{Y''+1}-v_{Y''}){\mathfrak b}igg) \mathfrak min{\mathfrak b}igg\{\frac{Y}{[d_i,W]},\frac{1}{\|{\mathfrak a}lpha [d_i,W]x/W\|}{\mathfrak b}igg\}\\ \ll&_A(\log(XY))^A \mathfrak min{\mathfrak b}igg\{\frac{Y}{[d_i,W]},\frac{1}{\|{\mathfrak a}lpha [d_i,W]x/W\|}{\mathfrak b}igg\}.\\ \end{align*} For any $1\leq b'\leq W$ with $(b',W)=1$, {\mathfrak b}egin{align*} &\sum_{\substack{ X\leq x\leq X'\\ x\equiv b'\pmod{W}\\ d_1,d_2\leq D}}|w_{d_1}||w_{d_2}| \prod_{i=1}^2\mathfrak min{\mathfrak b}igg\{\frac{Y}{[d_i,W]},\frac{1}{\|{\mathfrak a}lpha [d_i,W]x/W\|}{\mathfrak b}igg\}\\ \leq&(\log(XY))^{2A}\sum_{\substack{X\leq x\leq X'\\ x\equiv b'\pmod{W}}}{\mathfrak b}igg(\sum_{\substack{ d\leq D}}\tau(d)^A\mathfrak min{\mathfrak b}igg\{\frac{Y}{[d,W]},\frac{1}{\|{\mathfrak a}lpha [d,W]x/W\|}{\mathfrak b}igg\}{\mathfrak b}igg)^2\\ \leq&(\log(XY))^{2A}\sum_{\substack{X\leq x\leq X'\\ x\equiv b'\pmod{W}}}{\mathfrak b}igg(\sum_{\substack{ h\leq D}}\tau(hW)^{A+1}\mathfrak min{\mathfrak b}igg\{\frac{Y}{hW},\frac{1}{\|{\mathfrak a}lpha hx\|}{\mathfrak b}igg\}{\mathfrak b}igg)^2\\ &(\text{where }h=[d,W]/W)\\ \leq&(\log(XY))^{2A}\sum_{\substack{X\leq x\leq X'\\ x\equiv b'\pmod{W}}}{\mathfrak b}igg(\sum_{2^k\leq D}\sum_{\substack{ 2^k\leq h\leq 
2^{k+1}}}\tau(hW)^{A+1}\mathfrak min{\mathfrak b}igg\{\frac{Y}{hW},\frac{1}{\|{\mathfrak a}lpha hx\|}{\mathfrak b}igg\}{\mathfrak b}igg)^2\\ \ll&(\log(DXY))^{2A+2}\mathfrak max_{H\leq D}\sum_{\substack{X\leq x\leq X'\\ x\equiv b'\pmod{W}}}{\mathfrak b}igg(\sum_{\substack{H/2\leq h\leq H}}\tau(hW)^{A+1}\mathfrak min{\mathfrak b}igg\{\frac{Y}{HW},\frac{1}{\|{\mathfrak a}lpha hx\|}{\mathfrak b}igg\}{\mathfrak b}igg)^2. \end{align*} And for any $H\leq D$, {\mathfrak b}egin{align*} &\sum_{\substack{X\leq x\leq X'\\ x\equiv b'\pmod{W}}}{\mathfrak b}igg(\sum_{\substack{H/2\leq h\leq H}}\tau(hW)^{A+1}\mathfrak min{\mathfrak b}igg\{\frac{Y}{HW},\frac{1}{\|{\mathfrak a}lpha hx\|}{\mathfrak b}igg\}{\mathfrak b}igg)^2\\ \leq&\sum_{\substack{X\leq x\leq X'\\ x\equiv b'\pmod{W}}}{\mathfrak b}igg(\tau(W)^{2A+2}\sum_{\substack{h\leq H}}\tau(h)^{2A+2}{\mathfrak b}igg){\mathfrak b}igg(\frac{Y}{HW}\sum_{\substack{h\leq H}}\mathfrak min{\mathfrak b}igg\{\frac{Y}{HW},\frac{1}{\|{\mathfrak a}lpha hx\|}{\mathfrak b}igg\}{\mathfrak b}igg)\\ \ll&_A(\log D)^{2^{2A+2}}Y\sum_{\substack{1\leq x\leq X'H}}\tau(x)\mathfrak min{\mathfrak b}igg\{\frac{Y}{HW},\frac{1}{\|{\mathfrak a}lpha x\|}{\mathfrak b}igg\}. \end{align*} Finally, {\mathfrak b}egin{align*} &\sum_{\substack{1\leq x\leq X'H}}\tau(x)\mathfrak min{\mathfrak b}igg\{\frac{Y}{HW},\frac{1}{\|{\mathfrak a}lpha x\|}{\mathfrak b}igg\}\\ \ll&{\mathfrak b}igg(\sum_{\substack{1\leq x\leq X'H}}\tau(x)^2{\mathfrak b}igg)^{1/2}{\mathfrak b}igg(\frac{Y}{HW}\sum_{\substack{1\leq x\leq X'H}}\mathfrak min{\mathfrak b}igg\{\frac{X'Y}{xW},\frac{1}{\|{\mathfrak a}lpha x\|}{\mathfrak b}igg\}{\mathfrak b}igg)^{1/2}\\ \ll&{\mathfrak b}igg(\frac{X^2Y^2(\log(DXq))^{4}}{W^2}{\mathfrak b}igg(\frac{1}{q}+\frac{DW}{Y}+\frac{qW}{XY}{\mathfrak b}igg){\mathfrak b}igg)^{1/2}. \end{align*} This concludes the proof of (\ref{expsum2}). \end{proof} Define $$ \tau_k(x)=\#\{(d_1,d_2,\ldots,d_k):\, d_1d_2\cdots d_k\mathfrak mid x\}. $$ Let $G(x)$ be an arbitrary complex function over $\mathfrak mathbb N$.
Consider \mathfrak medskip\noindent {{\mathfrak b}f Type I sums}{\it $$ \sum_{\substack{M<m\leq M_1\\ L<l\leq L_1\\ P\leq ml\leq P_1}}a_mG(ml)\qquad\text{and}\qquad\sum_{\substack{M<m\leq M_1\\ L<l\leq L_1\\ P\leq ml\leq P_1}}a_m(\log l)G(ml) $$ where $M_1\leq 2M$, $L_1\leq 2L$, $|a_m|\leq\tau_5(m)\log P$,} \mathfrak medskip\noindent and \mathfrak medskip\noindent {{\mathfrak b}f Type II sums}{\it $$ \sum_{\substack{M<m\leq M_1\\ L<l\leq L_1\\ P\leq ml\leq P_1}}a_mb_lG(ml) $$ where $M_1\leq 2M$, $L_1\leq 2L$, $|a_m|\leq\tau_5(m)\log P$, $|b_l|\leq\tau_5(l)\log P$.} \mathfrak medskip The following Lemma is due to Heath-Brown \cite{HeathBrown82}: {\mathfrak b}egin{Lem} \label{heathbrown} Let $P, P_1, u, v, z$ be positive integers satisfying $2<P<P_1\leq 2P$, $2\leq u<v\leq z\leq P$, $u^2\leq z$, $128uz^2\leq P_1$, $2^{18}P_1\leq v^3$. Then we may decompose the sum $$ \sum_{P<n\leq P_1}\Lambda(n)G(n) $$ into $O((\log P)^6)$ sums, each of which is either of type I with $L\geq z$ or of type II with $u\leq L\leq v$. \end{Lem} {\mathfrak b}egin{Lem} \label{spmminor} For any ${\mathfrak a}lpha\in\mathfrak m$, $$ S_{n,z_0}^{\pm}({\mathfrak a}lpha)\ll n(\log n)^{6^8-B/4}. $$ \end{Lem} {\mathfrak b}egin{proof} Clearly $$ S_{n,z_0}^{\pm}({\mathfrak a}lpha)=\sum_{\substack{n^{0.99}\leq p\leq n\\ p\equiv b\pmod{W}}}e({\mathfrak a}lpha(p-b)/W)\log p\sum_{d\mathfrak mid (p+2,P(z_0))}\lambda_D^\pm(d)+O(n^{0.995}). $$ And notice that for any $x\leq n$, $$ {\mathfrak b}igg|\sum_{d\mathfrak mid(x+2,P(z_0))}\lambda_D^{\pm}(d){\mathfrak b}igg|\leq\tau(x+2)\ll_\epsilon n^\epsilon. $$ So it suffices to estimate the sum {\mathfrak b}egin{equation} \label{minorm} \sum_{\substack{n'\leq x\leq n\\ x\equiv b\pmod{W}}}\Lambda(x)e({\mathfrak a}lpha(x-b)/W)\sum_{d\mathfrak mid(x+2,P(z_0))}\lambda_D^{\pm}(d), \end{equation} where $n'\geq n/2$. 
Since ${\mathfrak a}lpha\in\mathfrak m$, there exist $1\leq a\leq q$ with $(a,q)=1$ and $(\log n)^B\leq q\leq n(\log n)^{-B}$ such that $|{\mathfrak a}lpha q-a|\leq (\log n)^B/n$. Applying Lemma \ref{heathbrown} with $u=n^{0.17}$, $v=n^{0.34}$ and $z=n^{0.35}$, the sum (\ref{minorm}) can be decomposed into $O((\log n)^6)$ type I sums $$ \sum_{\substack{M<m\leq M_1\\ L<l\leq L_1\\ ml\equiv b\pmod{W}\\ n'\leq ml\leq n}}a_me({\mathfrak a}lpha(ml-b)/W)\sum_{d\mathfrak mid(ml+2,P(z_0))}\lambda_D^{\pm}(d) $$ and $$ \qquad\sum_{\substack{M<m\leq M_1\\ L<l\leq L_1\\ ml\equiv b\pmod{W}\\ n'\leq ml\leq n}}a_m(\log l)e({\mathfrak a}lpha(ml-b)/W)\sum_{d\mathfrak mid(ml+2,P(z_0))}\lambda_D^{\pm}(d) $$ with $L\geq n^{0.35}$, and type II sums $$ \sum_{\substack{M<m\leq M_1\\ L<l\leq L_1\\ ml\equiv b\pmod{W}\\ n'\leq ml\leq n}}a_mb_le({\mathfrak a}lpha(ml-b)/W)\sum_{d\mathfrak mid(ml+2,P(z_0))}\lambda_D^{\pm}(d) $$ with $n^{0.17}\leq L\leq n^{0.34}$. Noting that $\lambda_D^{\pm}(d)=0$ whenever $d\geq D$, in view of (\ref{expsum1}) with $A=5$, these type II sums are all $\ll n(\log n)^{2^{13}-B/4}$. And by (\ref{expsum2}), all type I sums are $\ll n(\log n)^{6^7-B/4}$. \end{proof} \section{The Major Arcs} \setcounter{equation}{0} \setcounter{Thm}{0} \setcounter{Lem}{0} \setcounter{Cor}{0} Define $$ \Delta(x;q):=\mathfrak max_{\substack{1\leq r\leq q\\ (r,q)=1}}{\mathfrak b}igg|\sum_{\substack{p\leq x\\ p\equiv r\pmod{q}}}\log p-\frac{x}{\phi(q)}{\mathfrak b}igg|. $$ The well-known Bombieri-Vinogradov theorem asserts that for any $A>0$ {\mathfrak b}egin{equation} \label{bv} \sum_{q\leq n^{1/2-\epsilon}}\mathfrak max_{x\leq n}\Delta(x;q)\ll_{A,\epsilon}\frac{n}{(\log n)^A}. \end{equation} Define $$ \phi_2(q):=q\prod_{\substack{2<p\mathfrak mid q\\ }}{\mathfrak b}igg(1-\frac{2}{p}{\mathfrak b}igg). 
$$ \begin{Lem} \label{spmFf} $$ S_{n,z_0}^+(0)\leq\frac{4e^{-\gamma}k_0\mathfrak S_1 n}{\phi_2(W)\log n}(F(s)+O(e^{-s}(\log n)^{-1/3})) $$ and $$ S_{n,z_0}^-(0)\geq\frac{4e^{-\gamma}k_0\mathfrak S_1 n}{\phi_2(W)\log n}(f(s)+O(e^{-s}(\log n)^{-1/3})), $$ where $\gamma$ is Euler's constant, $$ \mathfrak S_1=\prod_{p>2}\bigg(1-\frac{1}{(p-1)^2}\bigg)=0.6601\ldots $$ and $s=\log D/\log z_0$. \end{Lem} \begin{proof} \begin{align*} S_{n,z_0}^+(0) =&\sum_{\substack{d\mid P(z_0)\\ b\equiv -2\pmod{(d,W)}}}\lambda_D^+(d)\sum_{\substack{p\leq n\\ p\equiv b\pmod{W}\\ p\equiv -2\pmod{d}}}\log p\\ =&\sum_{\substack{d\mid P(z_0)\\ (d,W)=1}}\lambda_D^+(d)\bigg(\frac{n}{\phi(Wd)}+O(\Delta(n;Wd))\bigg) \end{align*} since $(W,b+2)=1$. Since $\lambda_D^+(d)$ vanishes for $d\geq D$, by (\ref{bv}) we have \begin{align*} S_{n,z_0}^+(0) =\frac{n}{\phi(W)}\sum_{\substack{d\mid P(z_0)\\ (d,W)=1}}\frac{\lambda_D^+(d)}{\phi(d)}+O\bigg(\frac{n}{(\log n)^{5B}}\bigg). \end{align*} Applying (\ref{rosserF}), \begin{align*} \sum_{\substack{d\mid P(z_0)\\ (d,W)=1}}\frac{\lambda_D^+(d)}{\phi(d)} \leq\prod_{p\leq z_0,\ p\nmid W}\bigg(1-\frac{1}{p-1}\bigg)(F(s)+O(e^{-s}(\log D)^{-1/3})). \end{align*} Similarly, \begin{align*} S_{n,z_0}^-(0) \geq\frac{n}{\phi(W)}\prod_{p\leq z_0,\ p\nmid W}\bigg(1-\frac{1}{p-1}\bigg)(f(s)+O(e^{-s}(\log n)^{-1/3})). \end{align*} Finally, by Mertens' theorem, \begin{align*} \prod_{p\leq z_0,\ p\nmid W}\bigg(1-\frac{1}{p-1}\bigg)^{-1}= \frac{\phi_2(W)}{2\phi(W)}\prod_{2<p\leq z_0}\bigg(1-\frac{1}{p-1}\bigg)^{-1}=& \frac{\phi_2(W)}{4\phi(W)}(\mathfrak S_1^{-1}e^{\gamma}\log{z_0}+O(1)). \end{align*} \end{proof} Let $m=(n-b)/W$.
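As a quick numerical sanity check (no part of the argument depends on it), the value $0.6601\ldots$ of the twin-prime singular series $\mathfrak S_1$ in Lemma \ref{spmFf} can be reproduced by truncating the product over primes; the cutoff $10^5$ below is an arbitrary choice.

```python
# Approximate S_1 = prod_{p > 2} (1 - 1/(p-1)^2) by truncating the
# product at the (arbitrarily chosen) cutoff 10**5.
def primes_below(limit):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, limit, p)))
    return [p for p in range(limit) if sieve[p]]

S1 = 1.0
for p in primes_below(10 ** 5):
    if p > 2:
        S1 *= 1.0 - 1.0 / (p - 1) ** 2
# S1 is now 0.6601..., matching the value quoted in Lemma spmFf
```

The tail of the product beyond $10^5$ contributes less than $10^{-5}$, so four decimal digits are already stable at this cutoff.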
Define $\Lambda_*(x)=\log x$ or $0$ according to whether $x$ is prime or not. \begin{Lem} \label{saq} Suppose that $1\leq a\leq q\leq (\log n)^B$ and $(a,q)=1$. Then we have \begin{align} \bigg|S_{n,z_0}(a/q)-\frac{{\bf 1}_{(W,q)=1}\mu(q)\tau^*(a,q)4e^{-\gamma}k_0\mathfrak S_1Wm}{\phi_2(Wq)\log(Wm+b)}\bigg| \leq\frac{5e^{-\gamma}k_0\mathfrak S_1(F(s)-f(s))Wm}{\phi_2(W)\log(Wm+b)}, \end{align} where ${\bf 1}_{(W,q)=1}=1$ or $0$ according to whether $(W,q)=1$ or not, $$ \tau^*(a,q)=\sum_{\substack{d\mid q\\ (d,q/d)=1}}e(ar_d/q), $$ and $1\leq r_d\leq q$ is the unique integer $r$ such that $Wr\equiv -b\pmod{d}$ and $Wr\equiv -b-2\pmod{q/d}$. \end{Lem} \begin{proof} Clearly \begin{align*} &\sum_{\substack{1\leq x\leq m\\ (Wx+b+2,P(z_0))=1}}\Lambda_*(Wx+b)e(ax/q)\\ =&\sum_{\substack{1\leq r\leq q\\ (Wr+b,q)=1\\ (Wr+b+2,q)=1}}e(ar/q)\sum_{\substack{1\leq p\leq Wm+b\\ p\equiv Wr+b\pmod{Wq}\\ (p+2,P(z_0))=1}}\log p+O(\log^{B+1}n). \end{align*} Notice that \begin{align*} &\sum_{\substack{1\leq p\leq Wm+b\\ p\equiv Wr+b\pmod{Wq}}}\log p\sum_{d\mid(p+2,P(z_0))}\lambda_D^-(d)\\ \leq&\sum_{\substack{1\leq p\leq Wm+b\\ p\equiv Wr+b\pmod{Wq}}}\log p\sum_{d\mid (p+2,P(z_0))}\mu(d)\\ \leq&\sum_{\substack{1\leq p\leq Wm+b\\ p\equiv Wr+b\pmod{Wq}}}\log p\sum_{d\mid(p+2,P(z_0))}\lambda_D^+(d). \end{align*} From the proof of Lemma \ref{spmFf}, we know that \begin{align*} &\bigg|\sum_{\substack{1\leq p\leq Wm+b\\ p\equiv Wr+b\pmod{Wq}\\ (p+2,P(z_0))=1}}\log p- \frac{e^{-\gamma}k_0\mathfrak S_1Wm}{\phi_2(Wq)\log(Wm+b)}\bigg|\\ \leq&\frac{1.1e^{-\gamma}k_0\mathfrak S_1(F(s)-f(s))Wm}{\phi_2(Wq)\log(Wm+b)}.
\end{align*} By noting that $W$ is even and $(W,b(b+2))=1$, \begin{align*} \sum_{\substack{1\leq r\leq q\\ (Wr+b,q)=1\\ (Wr+b+2,q)=1}}e(ar/q) =&\sum_{d_1,d_2\mid q}\mu(d_1)\mu(d_2)\sum_{\substack{1\leq r\leq q\\ d_1\mid Wr+b\\ d_2\mid Wr+b+2}}e(ar/q)\\ =&\sum_{\substack{d_1d_2=q\\ (d_1,d_2)=1\\ (d_1,W)=(d_2,W)=1}}\mu(d_1)\mu(d_2)e(ar_{d_1}/q)\\ =&\begin{cases} \mu(q)\tau^*(a,q)&\text{if }(W,q)=1,\\ 0&\text{otherwise}. \end{cases} \end{align*} Furthermore, we have $$ |\{1\leq r\leq q:\, ((Wr+b)(Wr+b+2),q)=1\}|=q\prod_{\substack{p\mid q,\ p\nmid W}}\bigg(1-\frac{2}{p}\bigg)=\frac{\phi_2(Wq)}{\phi_2(W)}. $$ The proof is completed. \end{proof} \begin{Lem} \label{spmalpha} Suppose that $1\leq a\leq q\leq (\log n)^B$ and $(a,q)=1$. Then for any $\alpha\in\mathfrak M_{a,q}$, \begin{align} S_{n,z_0}^\pm(\alpha)=\frac{S_{n,z_0}^\pm(a/q)}{m}\sum_{1\leq y\leq m}e(\theta y)+ O\bigg(\frac{m}{(\log m)^{3B}}\bigg), \end{align} where $\theta=\alpha-a/q$. \end{Lem} \begin{proof} By partial summation, \begin{align*} &S_{n,z_0}^\pm(\alpha)\\ =&e(\theta m)\sum_{\substack{1\leq x\leq m}}\Lambda_*(Wx+b)e(ax/q)\sum_{d\mid (Wx+b+2,P(z_0))}\lambda_D^{\pm}(d)\\ &-\sum_{y\leq m-1}(e(\theta(y+1))-e(\theta y))\sum_{\substack{1\leq x\leq y}}\Lambda_*(Wx+b)e(ax/q)\sum_{d\mid (Wx+b+2,P(z_0))}\lambda_D^{\pm}(d).
\end{align*} Recalling that $(W,b+2)=1$, write \begin{align*} &\sum_{\substack{1\leq x\leq y}}\Lambda_*(Wx+b)e(ax/q)\sum_{d\mid (Wx+b+2,P(z_0))}\lambda_D^{\pm}(d)\\ =&\sum_{\substack{d\mid P(z_0)\\ (d,W)=1\\ d\leq D}}\lambda_D^\pm(d)\sum_{\substack{1\leq x\leq y\\ Wx\equiv-b-2\pmod{d}}}\Lambda_*(Wx+b)e(ax/q)\\ =&\sum_{\substack{d\mid P(z_0)\\ (d,W)=1\\ d\leq D}}\lambda_D^\pm(d)\bigg(\sum_{\substack{1\leq r\leq q\\ (Wr+b,q)=1\\ Wr\equiv-b-2\pmod{(d,q)}}}e(ar/q)\sum_{\substack{1\leq x\leq y\\ x\equiv r\pmod{q}\\ Wx\equiv-b-2\pmod{d}}}\Lambda_*(Wx+b)+O(\log^{B+1}n)\bigg). \end{align*} Now \begin{align*} &\sum_{\substack{1\leq x\leq y\\ x\equiv r\pmod{q}\\ Wx\equiv-b-2\pmod{d}}}\Lambda_*(Wx+b)=\frac{Wy+b}{\phi(W[d,q])}+O(\Delta(Wy+b;W[d,q])). \end{align*} Notice that for any $d'$ with $q\mid d'$, $|\{d:\,[d,q]=d'\}|\leq\tau(q)$. Hence \begin{align*} &\sum_{\substack{1\leq x\leq y}}\Lambda_*(Wx+b)e(ax/q)\sum_{d\mid (Wx+b+2,P(z_0))}\lambda_D^{\pm}(d)\\ =&\sum_{\substack{d\mid P(z_0)\\ (d,W)=1\\ d\leq D}}\lambda_D^\pm(d)\sum_{\substack{1\leq r\leq q\\ (Wr+b,q)=1\\ Wr\equiv-b-2\pmod{(d,q)}}}e(ar/q)\frac{Wy+b}{\phi(W[d,q])} +O\bigg(\frac{Wy+b}{(\log(Wy+b))^{5B}}\bigg). \end{align*} So \begin{align*} S_{n,z_0}^\pm(\alpha) =W\sum_{1\leq y\leq m}e(\theta y)\sum_{\substack{d\mid P(z_0)\\ (d,W)=1\\ d\leq D}}\lambda_D^\pm(d)\sum_{\substack{1\leq r\leq q\\ (Wr+b,q)=1\\ Wr\equiv-b-2\pmod{(d,q)}}}\frac{e(ar/q)}{\phi(W[d,q])} +O\bigg(\frac{m}{(\log m)^{3B}}\bigg).
\end{align*} Letting $\theta\to0$ in the above equation, we obtain \begin{align*} W\sum_{\substack{d\mid P(z_0)\\ (d,W)=1\\ d\leq D}}\lambda_D^\pm(d)\sum_{\substack{1\leq r\leq q\\ (Wr+b,q)=1\\ Wr\equiv-b-2\pmod{(d,q)}}}\frac{e(ar/q)}{\phi(W[d,q])} =\frac{S_{n,z_0}^\pm(a/q)}{m} +O\bigg(\frac{1}{(\log m)^{3B}}\bigg). \end{align*} \end{proof} Combining Lemmas \ref{spm}, \ref{spmminor}, \ref{spmFf}, \ref{saq} and \ref{spmalpha}, we get \begin{Lem} \label{sumexp} Suppose that $1\leq a\leq q\leq (\log n)^B$ and $(a,q)=1$. Then for any $\alpha\in\mathfrak M_{a,q}$, \begin{align} \bigg|S_{n,z_0}(\alpha)-\frac{{\bf 1}_{(W,q)=1}\mu(q)\tau^*(a,q)4e^{-\gamma}k_0\mathfrak S_1W}{\phi_2(Wq)\log n}\sum_{1\leq y\leq m}e(\theta y)\bigg|\leq \frac{15e^{-\gamma}k_0\mathfrak S_1(F(s)-f(s))n}{\phi_2(W)\log n}, \end{align} where $\theta=\alpha-a/q$. Furthermore, for any $\alpha\in\mathfrak m$, \begin{align} |S_{n,z_0}(\alpha)|\leq \frac{5e^{-\gamma}k_0\mathfrak S_1(F(s)-f(s))n}{\phi_2(W)\log n}. \end{align} \end{Lem} \begin{Lem} \label{primediff} \begin{align} \sum_{\substack{p_1,p_2\leq n\\ (p_i+2,P(z_0))=1\\ p_i\equiv b\pmod{W}\\ p_2-p_1=WM}}1 \ll\frac{k_0^2nW}{\phi_2(W)^2\log^4 n}\prod_{\substack{p\mid M\\ p\nmid W}}\bigg(1+\frac{2}{p}\bigg)\prod_{\substack{p\mid (WM+2)(WM-2)\\ p\nmid W}}\bigg(1+\frac{1}{p}\bigg). \end{align} \end{Lem} \begin{proof} Let $z_1=n^{1/10}$.
Let $\omega_1$ and $\omega_2$ be two multiplicative functions satisfying $$ \omega_1(p)=\begin{cases} 4&\quad\text{ if }p<z_0\text{ and }p\nmid{WM(WM-2)(WM+2)},\\ 3&\quad\text{ if }p<z_0\text{ and }p\mid{(WM-2)(WM+2)},\ p\nmid W,\\ 2&\quad\text{ if }p<z_0,\ p\mid{M}\text{ and }p\nmid W,\\ 0&\quad\text{otherwise},\\ \end{cases} $$ and $$ \omega_2(p)=\begin{cases} 2&\quad\text{ if }z_0\leq p<z_1\text{ and }p\nmid{WM},\\ 1&\quad\text{ if }z_0\leq p<z_1,\ p\mid{M}\text{ and }p\nmid W,\\ 0&\quad\text{otherwise}, \end{cases} $$ for prime $p$. For $1\leq i\leq 2$, let $g_i$ be the multiplicative function with $$ g_i(p)=\frac{\omega_i(p)}{p}\bigg(1-\frac{\omega_i(p)}{p}\bigg)^{-1} $$ for prime $p$, and let $$ G_1^{(i)}(z)=\sum_{\substack{l\mid P(z)\\ l<z}}g_i(l). $$ Define $$ \lambda_1(d)=\frac{d}{\omega_1(d)}\sum_{\substack{l\mid P(z_0)\\ d\mid l<z_0}}\frac{\mu(l/d)\mu(l)g_1(l)}{G_1^{(1)}(z_0)} $$ for $d\mid P(z_0)$ and $$ \lambda_2(d)=\frac{d}{\omega_2(d)}\sum_{\substack{l\mid P(z_0,z_1)\\ d\mid l<z_1}}\frac{\mu(l/d)\mu(l)g_2(l)}{G_1^{(2)}(z_1)} $$ for $d\mid P(z_0,z_1)=\prod_{z_0\leq p<z_1}p$. Then $\lambda_1(1)=\lambda_2(1)=1$.
Therefore \begin{align*} &\sum_{\substack{x_1,x_2\leq n/W\\ (Wx_i+b+2,P(z_0))=1\\ (Wx_i+b,P(z_1))=1\\ x_2-x_1=M}}1\\ \leq&\sum_{\substack{x\leq n/W}}\bigg(\sum_{\substack{d\mid P(z_0)\\ d\mid (Wx+b)(Wx+WM+b)(Wx+b+2)(Wx+WM+b+2)}}\lambda_1(d)\bigg)^2 \bigg(\sum_{\substack{d\mid P(z_0,z_1)\\ d\mid (Wx+b)(Wx+WM+b)}}\lambda_2(d)\bigg)^2\\ =&\sum_{\substack{d_1,d_2\mid P(z_0)\\ d_3,d_4\mid P(z_0,z_1)}}\lambda_1(d_1)\lambda_1(d_2)\lambda_2(d_3)\lambda_2(d_4)\sum_{\substack{x\leq n/W\\ [d_1,d_2]\mid (Wx+b)(Wx+WM+b)(Wx+b+2)(Wx+WM+b+2)\\ [d_3,d_4]\mid (Wx+b)(Wx+WM+b)}}1\\ =&\sum_{\substack{d_1,d_2\mid P(z_0)\\ d_3,d_4\mid P(z_0,z_1)}}\lambda_1(d_1)\lambda_1(d_2)\lambda_2(d_3)\lambda_2(d_4)\omega_1([d_1,d_2])\omega_2([d_3,d_4])\bigg(\frac{n/W}{[d_1,d_2][d_3,d_4]}+O(1)\bigg). \end{align*} By Selberg's sieve method, we know that $|\lambda_1(d)|,|\lambda_2(d)|\leq 1$ and \begin{align*} &\sum_{\substack{d_1,d_2\mid P(z_0)\\ d_3,d_4\mid P(z_0,z_1)}}\lambda_1(d_1)\lambda_1(d_2)\lambda_2(d_3)\lambda_2(d_4)\frac{\omega_1([d_1,d_2])\omega_2([d_3,d_4])}{[d_1,d_2][d_3,d_4]}\\ =&\bigg(\sum_{d_1,d_2\mid P(z_0)}\lambda_1(d_1)\lambda_1(d_2)\frac{\omega_1([d_1,d_2])}{[d_1,d_2]}\bigg)\bigg(\sum_{d_3,d_4\mid P(z_0,z_1)}\lambda_2(d_3)\lambda_2(d_4)\frac{\omega_2([d_3,d_4])}{[d_3,d_4]}\bigg)\\ =&\frac{1}{G_1^{(1)}(z_0)}\frac{1}{G_1^{(2)}(z_1)}\ll\prod_{p <z_0}\bigg(1-\frac{\omega_1(p)}{p}\bigg)\prod_{z_0\leq p<z_1}\bigg(1-\frac{\omega_2(p)}{p}\bigg).
\end{align*} Thus \begin{align*} &\sum_{\substack{x_1,x_2\leq n/W\\ (Wx_i+b+2,P(z_0))=1\\ (Wx_i+b,P(z_1))=1\\ x_2-x_1=M}}1\\ \ll&\frac{n}{W}\prod_{p\mid P(z_0)}\bigg(1-\frac{\omega_1(p)}{p}\bigg)\prod_{p\mid P(z_0,z_1)}\bigg(1-\frac{\omega_2(p)}{p}\bigg)\\ \ll&\frac{n}{W(\log z_0)^2(\log z_1)^2}\prod_{\substack{ p\mid W}}\bigg(1+\frac{4}{p}\bigg)\prod_{\substack{p\mid M\\ p\nmid W}}\bigg(1+\frac{2}{p}\bigg)\prod_{\substack{p\mid (WM+2)(WM-2)\\ p\nmid W}}\bigg(1+\frac{1}{p}\bigg)\\ \ll&\frac{k_0^2nW}{\phi_2(W)^2\log^4 n}\prod_{\substack{p\mid M\\ p\nmid W}}\bigg(1+\frac{2}{p}\bigg)\prod_{\substack{p\mid (WM+2)(WM-2)\\ p\nmid W}}\bigg(1+\frac{1}{p}\bigg). \end{align*} \end{proof} \begin{Lem} \begin{align} \label{equalsum} \sum_{\substack{p_1,p_2,p_3,p_4\leq n\\ (p_i+2,P(z_0))=1\\ p_i\equiv b\pmod{W}\\ p_1+p_4=p_2+p_3}}1 \ll\frac{k_0^4Wn^3}{\phi_2(W)^4\log^8 n}. \end{align} \end{Lem} \begin{proof} Applying Lemma \ref{primediff}, \begin{align*} \sum_{\substack{p_1,p_2,p_3,p_4\leq n\\ (p_i+2,P(z_0))=1\\ p_i\equiv b\pmod{W}\\ p_1+p_4=p_2+p_3}}1\leq&\sum_{\substack{2<M\leq n/W}}\bigg(\sum_{\substack{p_1,p_2\leq n\\ (p_i+2,P(z_0))=1\\ p_i\equiv b\pmod{W}\\ p_1+WM=p_2}}1\bigg)^2+O(n^2)\\ \ll&\frac{k_0^4n^2W^2}{\phi_2(W)^4\log^8 n}\sum_{\substack{2<M\leq n/W}}\prod_{\substack{p\mid M\\ p\nmid W}}\bigg(1+\frac{2}{p}\bigg)^2\prod_{\substack{p\mid (WM-2)(WM+2)\\ p\nmid W}}\bigg(1+\frac{1}{p}\bigg)^2.
\end{align*} By the H\"older inequality, \begin{align*} &\sum_{\substack{2<M\leq n/W}}\prod_{\substack{p\mid M\\ p\nmid W}}\bigg(1+\frac{2}{p}\bigg)^2 \prod_{\substack{p\mid (WM-2)(WM+2)\\ p\nmid W}}\bigg(1+\frac{1}{p}\bigg)^2\\ \leq&\bigg(\sum_{\substack{2<M\leq n/W}}\prod_{\substack{p\mid M}}\bigg(1+\frac{2}{p}\bigg)^6\bigg)^{1/3} \prod_{j=1}^2\bigg(\sum_{\substack{2<M\leq n/W}}\prod_{\substack{p\mid WM+2(-1)^j\\ p\not=2}}\bigg(1+\frac{1}{p}\bigg)^6\bigg)^{1/3}. \end{align*} Since $(1+p^{-1})^6\leq 1+24p^{-1}$ for $p\geq 3$, \begin{align*} \sum_{\substack{2<M\leq n/W}}\prod_{\substack{p\mid WM\pm 2\\ p\not=2}}\bigg(1+\frac{1}{p}\bigg)^6 \leq&\sum_{\substack{2<M\leq n/W}} \sum_{\substack{d\mid WM\pm 2\\ 2\nmid d}}\frac{\tau(d)^{12}}{d}\\ \leq&\sum_{\substack{d\leq 2n\\ (d,W)=1}}\frac{\tau(d)^{12}}{d}\sum_{\substack{2<M\leq n/W\\ d\mid WM\pm 2}}1 \ll\frac{n}{W}. \end{align*} And by \cite[Lemma 14]{Green02}, we have $$ \sum_{\substack{2<M\leq n/W}}\prod_{\substack{p\mid M}}\bigg(1+\frac{2}{p}\bigg)^6\leq \sum_{\substack{2<M\leq n/W}}\prod_{\substack{p\mid M}}\bigg(1+\frac{1}{p}\bigg)^{12}\ll \frac{n}{W}. $$ \end{proof} \section{Proof of Theorem \ref{chengoldbach}} \setcounter{equation}{0} \setcounter{Thm}{0} \setcounter{Lem}{0} \setcounter{Cor}{0} First, let us introduce Green and Tao's enveloping sieve. Let $N$ be a large integer. Suppose that $a_1,\ldots,a_k,b_1,\ldots,b_k$ are integers with $|a_i|,|b_i|\leq N$. We say that $$ \mathcal {F}(x):=\prod_{i=1}^k(a_ix+b_i) $$ is a $k$-linear form. For every integer $q\geq 1$, define $$ \gamma_\mathcal {F}(q):=q^{-1}|\{1\leq x\leq q:\, (\mathcal {F}(x),q)=1\}|>0.
$$ Let $$ X_{R!}=\{x\in\mathbb Z:\, (\mathcal {F}(x),R!)=1\}, $$ where $1\leq R\leq N$. \begin{Lem}[{\cite[Proposition 3.1]{GreenTao06}}] \label{envsieve} There exists a non-negative function $\beta_R:\mathbb Z\to\mathbb R$ satisfying the following properties: \medskip\noindent(i) \begin{equation} \label{betalower} \beta_R(x)\gg_k\mathfrak{S}_\mathcal {F}^{-1}\log^k R \,{\bf 1}_{X_{R!}}(x) \end{equation} for all integers $x$, where $$ \mathfrak S_\mathcal {F}:=\prod_{p}\frac{\gamma_\mathcal {F}(p)}{(1-1/p)^k}. $$ \medskip\noindent(ii) \begin{equation} \label{beteupper} \beta_R(x)\ll_{k,\epsilon}N^\epsilon \end{equation} for all $1\leq x\leq N$ and $\epsilon>0$. \medskip\noindent(iii) \begin{equation} \beta_R(x)=\sum_{q\leq R^2}\sum_{\substack{1\leq a\leq q\\ (a,q)=1}}w(a/q)e(-ax/q), \end{equation} where $w(a/q)=w_R(a/q)$ satisfies $w(1)=1$ and $$ |w(a/q)|\ll_{k,\epsilon} q^{\epsilon-1} $$ for all $1\leq a\leq q\leq R^2$ with $(a,q)=1$. \medskip\noindent(iv) For $1\leq a\leq q\leq R^2$ with $(a,q)=1$, if $q$ is not square-free, or $\gamma_{\mathcal F}(q)=1$ and $q>1$, then $w(a/q)=0$. \end{Lem} Green and Tao also established a restriction theorem for $\beta_R$: \begin{Lem}[{\cite[Proposition 4.2]{GreenTao06}}] Let $R$, $N$ be large numbers such that $1\leq R\leq N^{1/10}$. Let $k$, $\mathcal {F}$, $\beta_R$ be as defined in Lemma \ref{envsieve}. Suppose that $\{u_x\}_{x=1}^N$ is an arbitrary sequence of complex numbers.
Then for any $\rho>2$, \begin{equation}\label{restriction}\bigg(\sum_{r\in\mathbb Z_N}\bigg|\frac{1}{N}\sum_{1\leq x\leq N}u_x\beta_R(x)e(-xr/N)\bigg|^\rho\bigg)^{1/\rho} \ll_{\rho,k}\bigg(\frac{1}{N}\sum_{1\leq x\leq N}|u_x|^2\beta_R(x)\bigg)^{1/2}. \end{equation} \end{Lem} The following lemma can be derived by a trivial modification of Chen's original proof in \cite{Chen73}: \begin{Lem} \label{chen} Suppose that $W$ is a positive integer. Then there exists a function $n_0(W)$ such that for every $1\leq b\leq W$ with $(b(b+2),W)=1$ and $n\geq n_0(W)$, $$ |\{p\leq n:\, p\equiv b\pmod{W},\ p+2\in\mathcal P_2\text{ and }(p+2,P(n^{1/10}))=1\}|\geq \frac{C_1}{\phi_2(W)}\frac{n}{(\log n)^2}, $$ where $C_1$ is an absolute constant. \end{Lem} \begin{Lem} \label{nW} Suppose that $n_0(W)$ is an increasing positive function of the positive integer $W$, and $G(n)$ is an increasing positive function with $\lim_{n\to\infty}G(n)=\infty$. Then there exists an increasing positive function $W_0(n)$, defined for sufficiently large $n$, such that $n\geq n_0(W_0(n))$, $W_0(n)\leq G(n)$ and $\lim_{n\to\infty}W_0(n)=\infty$. \end{Lem} \begin{proof} Let $$ W_0(n)=\max\{W\leq G(n):\, n_0(W)\leq n\}. $$ Clearly $W_0(n)$ is well-defined for sufficiently large $n$. Assume on the contrary that $\lim_{n\to\infty}W_0(n)<\infty$, i.e., there exists an integer $W'$ such that $W_0(n)<W'$ for all $n$. Let $n'\geq n_0(W')$ be an integer such that $G(n')\geq W'$. Obviously such an $n'$ exists. But then $W_0(n')\geq W'$, a contradiction. \end{proof} Let $C_2$ be the implied constant in (\ref{betalower}) with $k=2$. Let $$ \varpi=\frac{\min\{C_1C_2,1\}}{10000}. $$ Let $C_3$ be the implied constant in (\ref{restriction}) with $\rho=12/5$ and $k=2$. And let $C_4$ be the implied constant in (\ref{equalsum}).
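The construction in Lemma \ref{nW} can be illustrated by a small computation; the threshold $n_0(W)=W^3$ and the bound $G(n)=\log n$ used below are toy choices of ours, not the actual functions coming from Lemma \ref{chen}.

```python
import math

def W0(n, n0, G):
    # W0(n) = max{ W <= G(n) : n0(W) <= n }, as in the proof of Lemma nW
    candidates = [W for W in range(1, int(G(n)) + 1) if n0(W) <= n]
    return max(candidates) if candidates else None

# toy choices (assumptions for illustration only)
n0 = lambda W: W ** 3      # "n must be at least this large for level W"
G = math.log               # upper bound imposed on W0(n)

sample_n = (10 ** 3, 10 ** 6, 10 ** 9, 10 ** 12)
values = [W0(n, n0, G) for n in sample_n]
# values is nondecreasing and unbounded, with W0(n) <= G(n) and n0(W0(n)) <= n
```

The diagonalization in the proof is exactly this: $W_0$ inherits monotonicity from $G$ and $n_0$, and it cannot stay bounded because $G\to\infty$.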
\begin{Lem} \label{kappa} There exists $0<\kappa\leq\varpi$ such that we can choose $0<\delta,\epsilon\leq 1$ with $\epsilon^{6C_3^{12/5}\delta^{-12/5}+60C_4\delta^{-4}}\geq\kappa$ and $$ 3072\epsilon^2(C_3^{12/5}\delta^{-12/5}+5C_4\delta^{-4})+ 72C_3^{24/13}C_4^{3/13}\delta^{1/13}\leq\varpi^6. $$ \end{Lem} \begin{proof} Let $$ \delta=\min\bigg\{\frac{\varpi^{78}}{144^{13}C_3^{24}C_4^{3}},1\bigg\} $$ and $$ \epsilon=\min\bigg\{\frac{\varpi^{3}\delta^{2}}{192(C_3^{12/5}\delta^{-12/5}+5C_4\delta^{-4})^{1/2}},1\bigg\}. $$ Clearly $3072\epsilon^2(C_3^{12/5}\delta^{-12/5}+5C_4\delta^{-4})+ 72C_3^{24/13}C_4^{3/13}\delta^{1/13}\leq\varpi^6$. So we may arbitrarily choose a $\kappa$ satisfying $$ \kappa\leq\min\{\epsilon^{6C_3^{12/5}\delta^{-12/5}+60C_4\delta^{-4}},\varpi\}. $$ \end{proof} Let $\kappa$ be a small constant satisfying the requirements of Lemma \ref{kappa}. Notice that $f(s)$ is increasing, $F(s)$ is decreasing and $F(s),f(s)=1+O(e^{-s})$. Choose a sufficiently large $k_0$ satisfying $$ 20(F(k_0/4)-f(k_0/4))\leq\kappa^2. $$ Suppose that $n$ is a sufficiently large integer. Let $w=w(n)$ be a positive function satisfying $P(w)\leq\log n$, $n\geq n_0(P(w))$ (where $n_0$ is defined in Lemma \ref{chen}) and $\lim_{n\to\infty}w(n)=\infty$. By Lemma \ref{nW}, such a $w$ exists. Let $W=P(w)$. The following lemma can be easily verified: \begin{Lem} For any odd integer $n$ with $3\mid n$, there exist $1\leq b_1,b_2,b_3\leq W$ with $(b_i(b_i+2),W)=1$ such that $$ n\equiv b_1+b_2+b_3\pmod{W}. $$ Furthermore, if $n\equiv 4\pmod{6}$, then there exist $1\leq b_1,b_2\leq W$ with $(b_i(b_i+2),W)=1$ such that $$ n\equiv b_1+b_2\pmod{W}. $$ \end{Lem} Now suppose that $n$ is odd. Suppose that $1\leq b_1,b_2,b_3\leq W$ are integers satisfying $(b_i(b_i+2),W)=1$ and $n\equiv b_1+b_2+b_3\pmod{W}$. Let $n'=(n-b_1-b_2-b_3)/W$.
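The lemma on the residues $b_i$ can be checked by brute force for a small modulus; here $W=2\cdot3\cdot5=30$ is an illustrative choice of $W=P(w)$, and the general case is analogous.

```python
from math import gcd

# Brute-force verification of the residue lemma for the modulus W = 30.
W = 30
B = [b for b in range(1, W + 1) if gcd(b * (b + 2), W) == 1]  # admissible b

three_sums = {(b1 + b2 + b3) % W for b1 in B for b2 in B for b3 in B}
two_sums = {(b1 + b2) % W for b1 in B for b2 in B}

odd_div3 = {n % W for n in range(W) if n % 2 == 1 and n % 3 == 0}  # odd, 3 | n
four_mod6 = {n % W for n in range(W) if n % 6 == 4}                # n = 4 (mod 6)
# every class in odd_div3 lies in three_sums, every class in four_mod6 in two_sums
```

For $W=30$ the admissible residues are $b\in\{11,17,29\}$, all $\equiv 2\pmod 3$, which is exactly why $3\mid n$ is needed for the three-term congruence.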
Let $N$ be a prime in the interval $[(1+\kappa^2/20)n/W, (1+\kappa^2/10)n/W]$ and $R=N^{1/10}$. Thanks to the prime number theorem, such a prime $N$ always exists whenever $n$ is sufficiently large. Let $\mathcal {F}_i(x)=(Wx+b_i)(Wx+b_i+2)$ for $i=1,2$. Substituting $N,\ R,\ \mathcal {F}_i$ into Lemma \ref{envsieve}, we get the desired functions $\beta_i=\beta_{i,R}$ for $i=1,2$. Let $$ A_i= \{x\leq (n-b_i)/2W:\, Wx+b_i\text{ is a Chen prime and }(Wx+b_i+2,P(n^{1/10}))=1\}, $$ for $i=1,2$, and define $$ {\mathfrak a}_i(x)= {\bf 1}_{A_i}(x)\frac{C_2\mathfrak S_1^{-1}}{1000}\frac{\phi_2(W)(\log(Wx+b_i))^2}{n}. $$ By Lemma \ref{chen}, clearly we have \begin{equation} \label{a12sum} \sum_{x}{\mathfrak a}_i(x)\geq(1-\kappa)\frac{C_2\mathfrak S_1^{-1}}{1000}\frac{\phi_2(W)(\log n)^2|A_i|}{n}\geq 6\varpi, \end{equation} whenever $n$ is sufficiently large. On the other hand, it is easy to see that $$ \mathfrak S_{\mathcal {F}_i}=\prod_{\substack{p\mid W\\ p>2}}\frac{1}{(1-1/p)^2}\prod_{p\nmid W}\frac{1-2/p}{(1-1/p)^2}=\prod_{\substack{p\mid W\\ p>2}}\frac{p}{p-2} \prod_{\substack{p>2}}\bigg(1-\frac{1}{(p-1)^2}\bigg)=\frac{W\mathfrak S_1}{\phi_2(W)}. $$ Hence by (\ref{betalower}), for any $x$ \begin{equation} \label{abeta} \beta_i(x)\geq \frac{C_2\phi_2(W)\mathfrak S_1^{-1}}{W}(\log R)^2{\bf 1}_{A_i}(x)\geq N{\mathfrak a}_i(x). \end{equation} Let $$ A_3=\{x\leq (n-b_3)/W:\, Wx+b_3\text{ is prime and }(Wx+b_3+2,P(n^{1/k_0}))=1\} $$ and define $$ {\mathfrak a}_3(x)={\bf 1}_{A_3}(x)\frac{e^{\gamma}\phi_2(W)\log(Wx+b_3)\log n}{4k_0\mathfrak S_1n}. $$ In view of Lemmas \ref{spm} and \ref{spmFf}, we also have \begin{equation} \label{a3sum} \sum_{x}{\mathfrak a}_3(x)=\frac{e^{\gamma}\phi_2(W)\log n}{4k_0\mathfrak S_1n}S_{n,n^{1/k_0}}(0)\in[1-\kappa^2,1+\kappa^2].
\end{equation} Below we identify the set $\{1,2,\ldots,N\}$ with the group $\mathfrak mathbb Z_N=\mathfrak mathbb Z/N\mathfrak mathbb Z$. If there exist $x_1\in A_1$, $x_2\in A_2$ and $x_3\in A_3$ satisfying $x_1+x_2+x_3=n'$ in $\mathfrak mathbb Z_N$, then the equality also holds in $\mathfrak mathbb Z$. In fact, since $x_1+x_2\leq n/W$ and $x_3<n/W$ in $\mathfrak mathbb Z$, we must have $x_1+x_2+x_3<n'+N$ in $\mathfrak mathbb Z$. For any function $f:\,\mathfrak mathbb Z_N\to\mathfrak mathbb C$, define $$ \tilde{f}(r)=\sum_{x\in\mathfrak mathbb Z_N}f(x)e(-xr/N). $$ {\mathfrak b}egin{Lem} \label{nuFour} Let $\nu_i={\mathfrak b}eta_i/N$, then for any $r\in\mathfrak mathbb Z_N$, $$ |\tilde{\nu_i}(r)-{\bf 1}_{r=0}|\leq C_5w^{-1/2}, $$ where $C_5$ is an absolute constant and ${\bf 1}_{r=0}=1$ or $0$ according to whether $r=0$. \end{Lem} {\mathfrak b}egin{proof} See \cite[Lemma 6.1]{GreenTao06}. (Notice that our definitions are a little different from Green and Tao's.) \end{proof} {\mathfrak b}egin{Lem} \label{aFour} For any $0\not=r\in\mathfrak mathbb Z_N$, $$ |\tilde{{\mathfrak a}_3}(r)|\leq\frac{2}{w-2}+0.9\kappa^2. $$ \end{Lem} {\mathfrak b}egin{proof} If $r/N\in\mathfrak m$, then by Lemma \ref{sumexp}, we have $$ |\tilde{{\mathfrak a}_3}(r)|=\frac{e^\gamma\phi_2(W)\log n}{k_0\mathfrak mathfrak S_1n}|S_{n,n^{1/k_0}}(r/N)|\leq\frac{2}{5}\kappa^2. $$ Suppose that there exist $1\leq a\leq q\leq (\log n)^B$ with $(a,q)=1$ such that $r/N\in\mathfrak mathfrak M_{a,q}$. Then applying Lemma \ref{sumexp}, $$ |\tilde{{\mathfrak a}_3}(r)|\leq \frac{\phi_2(W)\log n}{k_0\mathfrak mathfrak S_1n}{\mathfrak b}igg(\frac{{\bf 1}_{(W,q)=1}|\tau^*(a,q)|k_0\mathfrak mathfrak S_1W}{\phi_2(Wq)\log n}{\mathfrak b}igg|\sum_{1\leq y\leq m}e(\theta y){\mathfrak b}igg|+ \frac{7k_0\mathfrak mathfrak S_1\kappa^2 n}{10\phi_2(W)\log n}{\mathfrak b}igg), $$ where $m=(n-b)/W$ and $\theta=r/N-a/q$. Recall that $\tau^*(a,q)=0$ whenever $(W,q)>1$. 
And if $a=q=1$, since $r\not=0$, $$ \bigg|\sum_{1\leq y\leq m}e(yr/N)\bigg|\leq \bigg|\sum_{1\leq y\leq N}e(yr/N)\bigg|+N-m\leq \frac{\kappa^2}{10} N. $$ Suppose that $q>1$ and $(W,q)=1$. Then by noting $W=\prod_{p<w}p$, $$ \frac{\phi_2(W)|\tau^*(a,q)|}{\phi_2(Wq)}\leq\frac{\phi_2(W)|\{d\mid q:\, (d,q/d)=1\}|}{\phi_2(Wq)}\leq\frac{2}{w-2}, $$ since $q$ has at least one prime divisor not less than $w$. Finally, we have $WN\leq1.1 n$. \end{proof} Suppose that $\delta,\epsilon>0$ are two small numbers to be chosen later. For $1\leq i\leq 3$, let $$ \mathcal R_i=\{r\in\mathbb Z_N:\,|\tilde{{\mathfrak a}_i}(r)|>\delta\}, $$ $$ \mathcal B_i=\{x\in\mathbb Z_N:\,\|xr/N\|\leq \epsilon\text{ for all }r\in\mathcal R_i\}, $$ and define ${\mathfrak b}_i={\bf 1}_{\mathcal B_i}/|\mathcal B_i|$. \begin{Lem} \label{R3bound} $$ |\mathcal R_3|\leq 60C_4\delta^{-4}. $$ \end{Lem} \begin{proof} By (\ref{equalsum}), \begin{align*} \sum_{r\in\mathbb Z_N}|\tilde{{\mathfrak a}_3}(r)|^4=&N\sum_{\substack{x_1,x_2,x_3,x_4\in\mathbb Z_N\\ x_1+x_4=x_2+x_3}}{\mathfrak a}_3(x_1){\mathfrak a}_3(x_2){\mathfrak a}_3(x_3){\mathfrak a}_3(x_4)\\ \leq&N\bigg(\frac{e^{\gamma}\phi_2(W)(\log n)^2}{4k_0\mathfrak S_1n}\bigg)^4\sum_{\substack{p_1,p_2,p_3,p_4\leq n\\ (p_i+2,P(n^{1/k_0}))=1\\ p_i\equiv b_3\pmod{W}\\ p_1+p_4=p_2+p_3}}1\\ \leq&C_4e^{4\gamma}(4\mathfrak S_1)^{-4}\frac{NW}{n}. \end{align*} Hence, since $NW\leq 1.1n$, $$ |\mathcal R_3|\leq\sum_{r\in\mathbb Z_N}\delta^{-4}|\tilde{{\mathfrak a}_3}(r)|^4\leq 1.1C_4e^{4\gamma}(4\mathfrak S_1)^{-4}\delta^{-4}\leq 60C_4\delta^{-4}. $$ \end{proof} \begin{Lem} \label{R12bound} $$ |\mathcal R_1|,|\mathcal R_2|\leq 6C_3^{12/5}\delta^{-12/5} $$ provided that $w\geq C_5^{2}$. \end{Lem} \begin{proof} Suppose that $i\in\{1,2\}$. Let $u_x={\mathfrak a}_i(x)/\nu_i(x)$ or $0$ according to whether $\nu_i(x)\not=0$ or not.
By (\ref{abeta}), clearly $0\leq u_x\leq 1$. If $w\geq C_5^2$, by Lemma \ref{nuFour} we have $\tilde{\nu_i}(0)\leq 2$. Applying (\ref{restriction}), \begin{align*} \sum_{r\in\mathbb Z_N}\bigg|\sum_{1\leq x\leq N}u_x\nu_i(x)e(-xr/N)\bigg|^{12/5}\leq& C_3^{12/5}\bigg(\sum_{1\leq x\leq N}|u_x|^2\nu_i(x)\bigg)^{6/5}\\ \leq& C_3^{12/5}|\tilde{\nu_i}(0)|^{6/5}\\ \leq&6C_3^{12/5}. \end{align*} It follows that $|\mathcal R_i|\leq 6C_3^{12/5}\delta^{-12/5}$. \end{proof} \begin{Lem} \label{bohr} $$ |\mathcal B_i|\geq\epsilon^{|\mathcal R_i|}N. $$ \end{Lem} \begin{proof} This is a simple application of the pigeonhole principle (cf. \cite[Lemma 1.4]{Tao}). \end{proof} For two functions $f, g:\,\mathbb Z_N\to\mathbb C$, define $$ f*g(x)=\sum_{y\in\mathbb Z_N}f(y)g(x-y). $$ It is easy to check that $\widetilde{(f*g)}=\tilde{f}\tilde{g}$. Let ${\mathfrak a}_i'={\mathfrak a}_i*{\mathfrak b}_i*{\mathfrak b}_i$ for $1\leq i\leq 3$. \begin{Lem} \label{a3upper} Suppose that $\epsilon^{|\mathcal R_3|}\geq (2/(w-2)+0.9\kappa^2)\kappa^{-1}$. Then for any $x\in\mathbb Z_N$, $$ |{\mathfrak a}_3'(x)|\leq\frac{1+2\kappa}{N}. $$ \end{Lem} \begin{proof} \begin{align*} |{\mathfrak a}_3'(x)|=&|{\mathfrak a}_3(x)*{\mathfrak b}_3(x)*{\mathfrak b}_3(x)|\\ \leq&\frac{1}{N}|\tilde{{\mathfrak a}_3}(0)||\tilde{{\mathfrak b}_3}(0)|^2+ \frac{1}{N}\sum_{r\not=0}|\tilde{{\mathfrak a}_3}(r)\tilde{{\mathfrak b}_3}(r)^2|\\ \leq&\frac{1}{N}|\tilde{{\mathfrak a}_3}(0)||\tilde{{\mathfrak b}_3}(0)|^2+ \frac{1}{N}\max_{r\not=0}|\tilde{{\mathfrak a}_3}(r)|\sum_{r\in\mathbb Z_N}|\tilde{{\mathfrak b}_3}(r)|^2\\ \leq&\frac{1+\kappa^2}{N}+ \frac{1}{|\mathcal B_3|}\bigg(\frac{2}{w-2}+0.9\kappa^2\bigg), \end{align*} where we used Lemma \ref{aFour} in the last step.
Thus the desired result easily follows from Lemma \ref{bohr}. \end{proof} \begin{Lem} \label{a12upper} Let $i\in\{1,2\}$. Suppose that $\epsilon^{|\mathcal R_i|}\geq C_5\kappa^{-1}w^{-1/2}$ and $C_5w^{-1/2}\leq \kappa$. Then for any $x\in\mathbb Z_N$, $$ |{\mathfrak a}_i'(x)|\leq\frac{1+2\kappa}{N}. $$ \end{Lem} \begin{proof} Since $\nu_i(x)\geq{\mathfrak a}_i(x)$, applying Lemma \ref{nuFour}, \begin{align*} |{\mathfrak a}_i'(x)|\leq&|\nu_i(x)*{\mathfrak b}_i(x)*{\mathfrak b}_i(x)|\\ \leq&\frac{1}{N}|\tilde{\nu_i}(0)||\tilde{{\mathfrak b}_i}(0)|^2+ \frac{1}{N}\max_{r\not=0}|\tilde{\nu_i}(r)|\sum_{r\in\mathbb Z_N}|\tilde{{\mathfrak b}_i}(r)|^2\\ \leq&\frac{1+C_5w^{-1/2}}{N}+\frac{C_5w^{-1/2}}{|\mathcal B_i|}. \end{align*} We are done. \end{proof} \begin{Lem} \label{betaone} For $1\leq i\leq 3$, $$ |1-\tilde{{\mathfrak b}_i}(r)|\leq 16\epsilon^2 $$ for any $r\in\mathcal R_i$. \end{Lem} \begin{proof} See the proof of Lemma 6.7 of \cite{Green05}. \end{proof} \begin{Lem} \label{threesum} Suppose that $w\geq C_5^2$. Then \begin{align*} &\bigg|\sum_{\substack{x_1,x_2,x_3\in\mathbb Z_N\\ x_1+x_2+x_3=n'}}{\mathfrak a}_1'(x_1){\mathfrak a}_2'(x_2){\mathfrak a}_3'(x_3)- \sum_{\substack{x_1,x_2,x_3\in\mathbb Z_N\\ x_1+x_2+x_3=n'}}{\mathfrak a}_1(x_1){\mathfrak a}_2(x_2){\mathfrak a}_3(x_3)\bigg|\\ \leq&\frac{3072\epsilon^2(C_3^{12/5}\delta^{-12/5}+5C_4\delta^{-4})+ 72C_3^{24/13}C_4^{3/13}\delta^{1/13}}{N}. \end{align*} \end{Lem} \begin{proof} Clearly $$ \sum_{\substack{x_1,x_2,x_3\in\mathbb Z_N\\ x_1+x_2+x_3=n'}}{\mathfrak a}_1(x_1){\mathfrak a}_2(x_2){\mathfrak a}_3(x_3)=\frac{1}{N}\sum_{r\in\mathbb Z_N} \tilde{{\mathfrak a}_1}(r)\tilde{{\mathfrak a}_2}(r)\tilde{{\mathfrak a}_3}(r)e(n'r/N).
$$ Hence \begin{align*} &\bigg|\sum_{\substack{x_1,x_2,x_3\in\mathbb Z_N\\ x_1+x_2+x_3=n'}}{\mathfrak a}_1'(x_1){\mathfrak a}_2'(x_2){\mathfrak a}_3'(x_3)- \sum_{\substack{x_1,x_2,x_3\in\mathbb Z_N\\ x_1+x_2+x_3=n'}}{\mathfrak a}_1(x_1){\mathfrak a}_2(x_2){\mathfrak a}_3(x_3)\bigg|\\ \leq&\frac{1}{N}\sum_{r\in\mathbb Z_N}|\tilde{{\mathfrak a}_1}(r)\tilde{{\mathfrak a}_2}(r)\tilde{{\mathfrak a}_3}(r)(1-\tilde{{\mathfrak b}_1}(r)^2 \tilde{{\mathfrak b}_2}(r)^2\tilde{{\mathfrak b}_3}(r)^2)|. \end{align*} Since $w\geq C_5^2$, for $i=1,2$, $$ |\tilde{{\mathfrak a}_i}(r)|\leq|\tilde{{\mathfrak a}_i}(0)|\leq|\tilde{\nu_i}(0)|\leq 2. $$ We also have $$ |\tilde{{\mathfrak a}_3}(r)|\leq|\tilde{{\mathfrak a}_3}(0)|\leq1+\kappa^2. $$ If $r\in\mathcal R_1\cap\mathcal R_2\cap\mathcal R_3$, then by Lemma \ref{betaone}, $$ |1-\tilde{{\mathfrak b}_i}(r)^2|\leq|1-\tilde{{\mathfrak b}_i}(r)|(1+|\tilde{{\mathfrak b}_i}(r)|)\leq |1-\tilde{{\mathfrak b}_i}(r)|(1+|\tilde{{\mathfrak b}_i}(0)|)\leq 32\epsilon^2. $$ So \begin{align*} &|1-\tilde{{\mathfrak b}_1}(r)^2\tilde{{\mathfrak b}_2}(r)^2\tilde{{\mathfrak b}_3}(r)^2|\\ \leq&|1-\tilde{{\mathfrak b}_1}(r)^2|+ |\tilde{{\mathfrak b}_1}(r)|^2|1-\tilde{{\mathfrak b}_2}(r)^2|+ |\tilde{{\mathfrak b}_1}(r)\tilde{{\mathfrak b}_2}(r)|^2|1-\tilde{{\mathfrak b}_3}(r)^2|\\ \leq&96\epsilon^2. \end{align*} Thus \begin{align*} \sum_{r\in\mathcal R_1\cap\mathcal R_2\cap\mathcal R_3}|\tilde{{\mathfrak a}_1}(r)\tilde{{\mathfrak a}_2}(r)\tilde{{\mathfrak a}_3}(r)(1-\tilde{{\mathfrak b}_1}(r)^2 \tilde{{\mathfrak b}_2}(r)^2\tilde{{\mathfrak b}_3}(r)^2)|\leq&768\epsilon^2\min_{1\leq i\leq 3}|\mathcal R_i|\\ \leq&256\epsilon^2(12C_3^{12/5}\delta^{-12/5}+60C_4\delta^{-4}).
\end{align*} On the other hand, by the H\"older inequality, \begin{align*} &\sum_{r\not\in\mathfrak R_1\cap\mathfrak R_2\cap\mathfrak R_3}|\tilde{{\mathfrak a}_1}(r)\tilde{{\mathfrak a}_2}(r)\tilde{{\mathfrak a}_3}(r)(1-\tilde{{\mathfrak b}_1}(r)^2 \tilde{{\mathfrak b}_2}(r)^2\tilde{{\mathfrak b}_3}(r)^2)|\\ \leq&2\sum_{r\not\in\mathfrak R_1\cap\mathfrak R_2\cap\mathfrak R_3}|\tilde{{\mathfrak a}_1}(r)\tilde{{\mathfrak a}_2}(r)\tilde{{\mathfrak a}_3}(r)|\\ \leq&2\max_{r\not\in\mathfrak R_1\cap\mathfrak R_2\cap\mathfrak R_3}|\tilde{{\mathfrak a}_1}(r)\tilde{{\mathfrak a}_2}(r)\tilde{{\mathfrak a}_3}(r)|^{1/13} \sum_{r\in\mathbb Z_N}|\tilde{{\mathfrak a}_1}(r)\tilde{{\mathfrak a}_2}(r)\tilde{{\mathfrak a}_3}(r)|^{12/13}\\ \leq&4\delta^{1/13} \bigg(\sum_{r\in\mathbb Z_N}|\tilde{{\mathfrak a}_1}(r)|^{12/5}\bigg)^{5/13} \bigg(\sum_{r\in\mathbb Z_N}|\tilde{{\mathfrak a}_2}(r)|^{12/5}\bigg)^{5/13}\bigg(\sum_{r\in\mathbb Z_N}|\tilde{{\mathfrak a}_3}(r)|^4\bigg)^{3/13}\\ \leq&4\delta^{1/13}(6C_3^{12/5})^{10/13}(60C_4)^{3/13}. \end{align*} \end{proof} \begin{Lem} \label{pollard} Suppose that $0<\theta_1,\theta_2,\theta_3\leq 1$ with $\theta_1+\theta_2+\theta_3>1$. Let $$ \theta=\min\{\theta_1,\theta_2,\theta_3,(\theta_1+\theta_2+\theta_3-1)/4\}. $$ Suppose that $N$ is a prime greater than $2\theta^{-2}$, and $X_1,X_2,X_3$ are subsets of $\mathbb Z_N$ with $|X_i|\geq\theta_i N$. Then for any $y\in\mathbb Z_N$, we have $$ |\{(x_1,x_2,x_3):\, x_i\in X_i,\ x_1+x_2+x_3=y\}|\geq\theta^3 N^2. $$ \end{Lem} \begin{proof} See \cite[Lemma 3.3]{LiPan}. \end{proof} Now we are ready to prove Theorem \ref{chengoldbach}.
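Before doing so, we note that the orthogonality identity opening the proof of Lemma \ref{threesum}, which rewrites the weighted count of representations $x_1+x_2+x_3=n'$ in $\mathbb Z_N$ as $\frac1N\sum_r \tilde{{\mathfrak a}_1}(r)\tilde{{\mathfrak a}_2}(r)\tilde{{\mathfrak a}_3}(r)e(n'r/N)$, can be checked numerically. The sketch below is illustrative only and not part of the paper: the modulus, the target residue, and the random weights are our choices, and we assume the forward-transform convention $\tilde{{\mathfrak a}}(r)=\sum_x {\mathfrak a}(x)e(-xr/N)$, which is the one that makes the displayed identity an instance of discrete Fourier inversion.

```python
import numpy as np

# Toy check of the identity
#   sum_{x1+x2+x3 = n' (mod N)} a1(x1) a2(x2) a3(x3)
#     = (1/N) sum_r  ~a1(r) ~a2(r) ~a3(r) e(n'r/N),
# where ~a(r) = sum_x a(x) e(-xr/N) and e(t) = exp(2*pi*i*t);
# this convention is exactly numpy's forward DFT.
# N and the weights are arbitrary illustrative choices.
N, n_prime = 101, 37
rng = np.random.default_rng(1)
a1, a2, a3 = rng.random(N), rng.random(N), rng.random(N)

# Left-hand side: direct summation over all representations of n' mod N.
direct = sum(a1[x1] * a2[x2] * a3[(n_prime - x1 - x2) % N]
             for x1 in range(N) for x2 in range(N))

# Right-hand side: the Fourier expression.
t1, t2, t3 = np.fft.fft(a1), np.fft.fft(a2), np.fft.fft(a3)
r = np.arange(N)
fourier = (t1 * t2 * t3 * np.exp(2j * np.pi * n_prime * r / N)).sum() / N

assert abs(fourier.imag) < 1e-6
assert abs(direct - fourier.real) < 1e-6
```

Lemma \ref{threesum} then amounts to comparing the two Fourier sides with and without the smoothing factors $\tilde{{\mathfrak b}_i}(r)^2$.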
\begin{proof}[Proof of Theorem \ref{chengoldbach}] For $1\leq i\leq 3$, let $$ A_i'=\{x\in\mathbb Z_N:\, {\mathfrak a}_i'(x)\geq\varpi/N\}. $$ In view of (\ref{a3sum}) and Lemma \ref{a3upper}, $$ \frac{1+2\kappa}{N}|A_3'|+\frac{\varpi}{N}(N-|A_3'|)\geq\sum_{x\in\mathbb Z_N}{\mathfrak a}_3'(x)= \sum_{x\in\mathbb Z_N}{\mathfrak a}_3(x)\geq 1-\kappa^2. $$ Therefore $|A_3'|\geq (1-3\varpi)N$. Similarly, by (\ref{a12sum}) and Lemma \ref{a12upper}, for $i=1,2$, $$ \frac{1+2\kappa}{N}|A_i'|+\frac{\varpi}{N}(N-|A_i'|)\geq\sum_{x\in\mathbb Z_N}{\mathfrak a}_i'(x)= \sum_{x\in\mathbb Z_N}{\mathfrak a}_i(x)\geq6\varpi. $$ Hence $|A_1'|,|A_2'|\geq \frac{9}{2}\varpi N$. Since $|A_1'|+|A_2'|+|A_3'|\geq (1+6\varpi)N$, with the help of Lemma \ref{pollard}, $$ \sum_{\substack{x_1\in A_1',\ x_2\in A_2',\ x_3\in A_3'\\ x_1+x_2+x_3=n'}}1\geq2\varpi^3N^2. $$ By Lemma \ref{kappa}, we may choose $0<\delta,\epsilon\leq 1$ with $\epsilon^{6C_3^{12/5}\delta^{-12/5}+60C_4\delta^{-4}}\geq\kappa$ satisfying $$ 3072\epsilon^2(C_3^{12/5}\delta^{-12/5}+5C_4\delta^{-4})+ 72C_3^{24/13}C_4^{3/13}\delta^{1/13}\leq\varpi^6. $$ Notice that $w=w(n)$ tends to infinity with $n$. So we may assume that $w\geq\max\{20\kappa^{-2}+2,C_5^2\kappa^{-4}\}$. Since $\max_{1\leq i\leq 3}|\mathfrak R_i|\leq 6C_3^{12/5}\delta^{-12/5}+6C_4\delta^{-4}$, the requirements of Lemmas \ref{a3upper}, \ref{a12upper} and \ref{threesum} are obviously satisfied.
Applying Lemma \ref{threesum}, \begin{align*} &\sum_{\substack{x_1,x_2,x_3\in\mathbb Z_N\\ x_1+x_2+x_3=n'}}{\mathfrak a}_1(x_1){\mathfrak a}_2(x_2){\mathfrak a}_3(x_3)\\ \geq&\sum_{\substack{x_1,x_2,x_3\in\mathbb Z_N\\ x_1+x_2+x_3=n'}}{\mathfrak a}_1'(x_1){\mathfrak a}_2'(x_2){\mathfrak a}_3'(x_3)-\frac{3072\epsilon^2(C_3^{12/5}\delta^{-12/5}+5C_4\delta^{-4})+ 72C_3^{24/13}C_4^{3/13}\delta^{1/13}}{N}\\ \geq&\sum_{\substack{x_1\in A_1',\ x_2\in A_2',\ x_3\in A_3'\\ x_1+x_2+x_3=n'}}\bigg(\frac{\varpi}{N}\bigg)^3-\frac{\varpi^6}{N}\\ \geq&\frac{\varpi^6}{N}. \end{align*} This completes the proof. \end{proof} \begin{thebibliography}{99} \bibitem{Chen73} J.-R. Chen, \textit{On the representation of a large even integer as the sum of a prime and a product of at most two primes}, Sci. Sinica, \textbf{16} (1973), 157--176. \bibitem{Corput39} J. G. van der Corput, \textit{\"Uber Summen von Primzahlen und Primzahlquadraten}, Math. Ann., \textbf{116} (1939), 1--50. \bibitem{Green02} B. Green, \textit{On arithmetic structures in dense sets of integers}, Duke Math. J., \textbf{114} (2002), 215--238. \bibitem{Green05} B. Green, \textit{Roth's theorem in the primes}, Ann. Math., \textbf{161} (2005), 1609--1636. \bibitem{GreenTao06} B. Green and T. Tao, \textit{Restriction theory of the Selberg sieve, with applications}, J. Th\'eor. Nombres Bordeaux, \textbf{18} (2006), 147--182. \bibitem{GreenTao08} B. Green and T. Tao, \textit{The primes contain arbitrarily long arithmetic progressions}, Ann. Math., \textbf{167} (2008), 481--547. \bibitem{HeathBrown82} D. R. Heath-Brown, \textit{Prime numbers in short intervals and a generalized Vaughan identity}, Canad. J. Math., \textbf{35} (1982), 1365--1377. \bibitem{Iwaniec80a} H. Iwaniec, \textit{Rosser's sieve}, Acta Arith., \textbf{36} (1980), 171--202.
\bibitem{Iwaniec80b} H. Iwaniec, \textit{A new form of the error term in the linear sieve}, Acta Arith., \textbf{37} (1980), 307--320. \bibitem{Iwaniec96} H. Iwaniec, \textit{Sieve methods}, Graduate course, Rutgers, 1996. \bibitem{LiPan} H.-Z. Li and H. Pan, \textit{Ternary Goldbach problem for the subsets of primes with positive relative densities}, preprint, arXiv:math.NT/0701240. \bibitem{Meng07} X. Meng, \textit{A mean value theorem on the binary Goldbach problem and its application}, Monatsh. Math., \textbf{151} (2007), 319--332. \bibitem{Peneva00} T. P. Peneva, \textit{On the ternary Goldbach problem with primes $p_i$ such that $p_i+2$ are almost-primes}, Acta Math. Hungar., \textbf{86} (2000), 305--318. \bibitem{Tao} T. Tao, \textit{The Roth--Bourgain Theorem}, preprint, unpublished. \bibitem{Tolev99} D. I. Tolev, \textit{Arithmetic progressions of prime-almost-prime twins}, Acta Arith., \textbf{88} (1999), 67--98. \bibitem{Tolev00} D. I. Tolev, \textit{Additive problems with prime numbers of special type}, Acta Arith., \textbf{96} (2000), 53--88. \bibitem{Vaughan97} R. C. Vaughan, \textit{The Hardy--Littlewood Method}, Second edition, Cambridge University Press, Cambridge, 1997. \bibitem{Vinogradov37} I. M. Vinogradov, \textit{The representation of an odd number as a sum of three primes}, Dokl. Akad. Nauk SSSR, \textbf{16} (1937), 139--142. \end{thebibliography} \end{document}
\begin{document} \begin{abstract} For a Tychonoff space $X$, let $C_k(X)$ and $C_p(X)$ be the spaces of real-valued continuous functions $C(X)$ on $X$ endowed with the compact-open topology and the pointwise topology, respectively. If $X$ is compact, the classic result of A.~Grothendieck states that $C_k(X)$ has the Dunford--Pettis property and the sequential Dunford--Pettis property. We extend Grothendieck's result by showing that $C_k(X)$ has both the Dunford--Pettis property and the sequential Dunford--Pettis property if $X$ satisfies one of the following conditions: (i) $X$ is a hemicompact space, (ii) $X$ is a cosmic space (=a continuous image of a separable metrizable space), (iii) $X$ is the ordinal space $[0,\kappa)$ for some ordinal $\kappa$, or (iv) $X$ is a locally compact paracompact space. We show that if $X$ is a cosmic space, then $C_k(X)$ has the Grothendieck property if and only if every functionally bounded subset of $X$ is finite. We prove that $C_p(X)$ has the Dunford--Pettis property and the sequential Dunford--Pettis property for every Tychonoff space $X$, and that $C_p(X)$ has the Grothendieck property if and only if every functionally bounded subset of $X$ is finite. \end{abstract} \maketitle \section{Introduction} The class of Banach spaces with the Dunford--Pettis property enjoying also the Grothendieck property plays an essential role in the general theory of Banach spaces (particularly of spaces of continuous functions) and vector measures, with several remarkable applications; we refer the reader to \cite{Dales-Lau}, \cite{Diestel-DP}, \cite{Diestel} and \cite[Chapter~VI]{Diestel-Uhl}. It is well known by a result of A.~Grothendieck, see \cite[Corollary~4.5.10]{Dales-Lau}, that for every injective compact space $K$, the Banach space $C(K)$ has the Grothendieck property. Consequently, this applies to each extremally disconnected compact space $K$.
A.~Grothendieck also proved that the Lebesgue spaces $L^{1}(\mu)$ and the spaces $\ell^{\infty}$ and $\ell^{1}$ have the Dunford--Pettis property. This line of research was continued by J.~Bourgain in \cite{bourgain}, where he showed that the spaces $C_{L^{1}}$ and $L^{1}_{C}$ also enjoy the Dunford--Pettis property. Moreover, in \cite{bourgain1} J.~Bourgain provided interesting sufficient conditions for subspaces $L$ of the Banach space $C(K)$ to have the Dunford--Pettis property. These results have been used by J.A.~Cima and R.M.~Timoney \cite{Ci-Ti} to study the Dunford--Pettis property for $T$-invariant algebras on $K$. F.~Bombal and I.~Villanueva characterized in \cite{BomVil} those compact spaces $K$ such that $C(K)\hat{\otimes} C(K)$ has the Dunford--Pettis property. Quite recently this area of research around the Dunford--Pettis property for Banach spaces has been extended to the more general setting of locally convex spaces. This approach enabled specialists to apply this work to concrete problems related to mean ergodic operators in Fr\'{e}chet spaces; we refer to the articles \cite{ABR}, \cite{ABR1}, \cite{ABR2}, \cite{albaneze} and \cite{Bonet-Ricker}. The classical Dunford--Pettis theorem states that for any measure $\mu$ and each Banach space $Y$, if $T:L_1(\mu)\to Y$ is a weakly compact linear operator, then $T$ is completely continuous (i.e., $T$ takes weakly compact sets in $L_1(\mu)$ onto norm compact sets in $Y$).
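A toy discretized illustration of this phenomenon may be helpful (it is not from the paper; the grid discretization of $L_1[0,1]$, the Rademacher-type functions and the rank-one operator are all our illustrative choices). The functions $r_n(x)=\mathrm{sign}\,\sin(2^n\pi x)$ are a standard example of a weakly null sequence in $L_1[0,1]$ with constant norm $\|r_n\|_1=1$, so they converge weakly but not in norm; a finite-rank operator $Tf=\int_0^1 f(x)g(x)\,dx$ is weakly compact, and, as complete continuity predicts, $Tr_n\to 0$.

```python
import numpy as np

# Discretize [0,1] into M equal cells; functions are vectors of cell values
# and integrals are midpoint Riemann sums.  All choices are illustrative.
M = 2 ** 12
x = (np.arange(M) + 0.5) / M

def rademacher(n):
    # r_n(x) = sign(sin(2^n pi x)): +1/-1 on dyadic blocks, ||r_n||_1 = 1
    return np.where(np.floor(2 ** n * x) % 2 == 0, 1.0, -1.0)

g = x ** 2                            # a fixed bounded "test" function
T = lambda f: np.sum(f * g) / M       # rank-one operator f -> integral f*g

norms = [np.abs(rademacher(n)).sum() / M for n in range(1, 9)]
images = [abs(T(rademacher(n))) for n in range(1, 9)]

# r_n is not norm-null: ||r_n||_1 = 1 for every n ...
assert all(abs(v - 1.0) < 1e-12 for v in norms)
# ... but T r_n -> 0 (roughly like 2^{-n} for this smooth g).
assert images[-1] < 0.01 and images[-1] < images[0]
```

The same computation with $T$ replaced by the identity fails, which is exactly the point: weak and norm convergence differ in $L_1$, and weakly compact operators erase the difference.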
This result motivated Grothendieck to introduce the following property (for comments, see \cite[p.~633--634]{Edwards}): \begin{definition}[\cite{Grothen}] \label{def:DP} A locally convex space $E$ is said to have the {\em Dunford--Pettis property} ($(DP)$ {\em property} for short) if every continuous linear operator $T$ from $E$ into a quasi-complete locally convex space $F$, which transforms bounded sets of $E$ into relatively weakly compact subsets of $F$, also transforms absolutely convex weakly compact subsets of $E$ into relatively compact subsets of $F$. \end{definition} Actually, it suffices that $F$ runs over the class of Banach spaces, see \cite[p.~633]{Edwards}. A.~Grothendieck proved in \cite[Proposition~2]{Grothen} that a Banach space $E$ has the $(DP)$ property if and only if, given weakly null sequences $\{ x_n\}_{n\in\mathbb{N}}$ and $\{ \chi_n\}_{n\in\mathbb{N}}$ in $E$ and in the Banach dual $E'$ of $E$, respectively, one has $\lim_n \chi_n(x_n)=0$. He used this result to show that every Banach space $C(K)$ has the $(DP)$ property, see \cite[Th\'{e}or\`{e}me~1]{Grothen}. Extending this result to locally convex spaces (lcs, for short) and following \cite{Gabr-free-resp}, we consider the following ``sequential'' version of the $(DP)$ property. \begin{definition} \label{def:sDP} A locally convex space $E$ is said to have the {\em sequential Dunford--Pettis property} ($(sDP)$ {\em property}) if, given weakly null sequences $\{ x_n\}_{n\in\mathbb{N}}$ and $\{ \chi_n\}_{n\in\mathbb{N}}$ in $E$ and in the strong dual $E'_\beta$ of $E$, respectively, one has $\lim_n \chi_n(x_n)=0$. \end{definition} It turns out, as A.A.~Albanese, J.~Bonet and W.J.~Ricker proved in \cite[Corollary~3.4]{ABR}, that the $(DP)$ property and the $(sDP)$ property coincide for the much wider class of Fr\'{e}chet spaces (or, even more generally, for strict $(LF)$-spaces).
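By contrast, it is a standard fact that infinite-dimensional Hilbert spaces fail the $(DP)$ property, and Grothendieck's sequential criterion makes this transparent in $\ell_2$: the standard basis $\{e_n\}$ is weakly null both in $\ell_2$ and in its dual (identified with $\ell_2$), yet $\langle e_n,e_n\rangle=1$ for all $n$. A small numerical sketch of this classical counterexample, truncated to finitely many coordinates (the dimension and the test vector are our illustrative choices, not from the paper):

```python
import numpy as np

# Work in R^d as a truncation of ell_2; e(n) is the n-th standard basis vector.
d = 1000
def e(n):
    v = np.zeros(d)
    v[n] = 1.0
    return v

# "Weakly null": pairings with any fixed vector tend to 0 along the basis.
y = 1.0 / (1.0 + np.arange(d))          # a fixed illustrative ell_2 vector
pairings = [abs(np.dot(e(n), y)) for n in range(d)]
assert pairings[-1] < 1e-2 and pairings[-1] < pairings[0]

# ... but the diagonal pairing chi_n(x_n) = <e_n, e_n> never tends to 0,
# so ell_2 fails Grothendieck's sequential criterion for the (DP) property.
diagonal = [np.dot(e(n), e(n)) for n in range(d)]
assert all(v == 1.0 for v in diagonal)
```

This is why the $(DP)$ property is characteristic of $C(K)$- and $L_1$-type spaces rather than of reflexive ones.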
In \cite[Proposition~3.3]{ABR} they showed that every barrelled quasi-complete space with the $(DP)$ property also has the $(sDP)$ property. For further results we refer the reader to \cite{ABR,Bonet-Lin-93,BFV,Diestel-DP} and references therein. For a Tychonoff (=completely regular and Hausdorff) space $X$, we denote by $C_k(X)$ and $C_p(X)$ the spaces of real-valued continuous functions $C(X)$ on $X$ endowed with the compact-open topology and the pointwise topology, respectively. Motivated by the aforementioned discussion and results, it is natural to ask: \begin{problem} \label{prob:Ck(X)-DPP} Characterize the Tychonoff spaces $X$ for which $C_k(X)$ and $C_p(X)$ have the $(DP)$ property or the $(sDP)$ property. \end{problem} A.~Grothendieck proved that the Banach space $C(\beta\mathbb{N})$, where $\beta\mathbb{N}$ is the Stone--\v{C}ech compactification of the natural numbers $\mathbb{N}$ endowed with the discrete topology, has the following property: any weak-$\ast$ convergent sequence in the Banach dual of $C(\beta\mathbb{N})$ is also weakly convergent. This result motivates the following important property. \begin{definition} \label{def:Grothendieck} A locally convex space $E$ is said to have the {\em Grothendieck property} if every weak-$\ast$ convergent sequence in the strong dual $E'_\beta$ is weakly convergent. \end{definition} The results mentioned above also motivate the second general question considered in the paper. \begin{problem} \label{prob:Ck(X)-Grothendieck} Characterize the Tychonoff spaces $X$ for which $C_k(X)$ and $C_p(X)$ have the Grothendieck property. \end{problem} For spaces $C_p(X)$ we obtain complete answers to Problems \ref{prob:Ck(X)-DPP} and \ref{prob:Ck(X)-Grothendieck}. Recall that a subset $A$ of a topological space $X$ is called {\em functionally bounded in $X$} if every $f\in C(X)$ is bounded on $A$. \begin{theorem} \label{t:DP-Cp} Let $X$ be a Tychonoff space.
Then: \begin{enumerate} \item[{\rm (i)}] $C_p(X)$ has the $(sDP)$ property; \item[{\rm (ii)}] $C_p(X)$ has the $(DP)$ property; \item[{\rm (iii)}] $C_p(X)$ has the Grothendieck property if and only if every functionally bounded subset of $X$ is finite. \end{enumerate} \end{theorem} We prove Theorem \ref{t:DP-Cp} in Section \ref{sec:Cp-GDP}. In this section we also recall some general results related to the $(DP)$ property and the Grothendieck property which are essentially used in the article. For spaces $C_k(X)$ the situation is much more complicated. Following E.~Michael \cite{Mich}, a Tychonoff space $X$ is a {\em cosmic space} if $X$ is a continuous image of a separable metrizable space. The following theorem is our second main result. \begin{theorem} \label{t:DP-Ck} Assume that a Tychonoff space $X$ satisfies one of the following conditions: \begin{enumerate} \item[{\rm (i)}] $X$ is a hemicompact space; \item[{\rm (ii)}] $X$ is a cosmic space; \item[{\rm (iii)}] $X$ is the ordinal space $[0,\kappa)$ for some ordinal $\kappa$; \item[{\rm (iv)}] $X$ is a locally compact paracompact space. \end{enumerate} Then $C_k(X)$ has the $(sDP)$ property and the $(DP)$ property. \end{theorem} Since every compact subset of a cosmic space is metrizable, it is easy to see that all four classes (i)--(iv) of Tychonoff spaces are indeed independent (in the sense that there are spaces which belong to one of the classes but do not belong to the others). In particular, the spaces $C_k(\mathbb{N}^\mathbb{N})$ and $C_k(\mathbb{Q})$ are non-Fr\'{e}chet spaces with the $(DP)$ property and the $(sDP)$ property. For the Grothendieck property, essentially using Theorem \ref{t:DP-Cp}, we prove the following result (the definitions of $\mu$-spaces and sequential spaces are given before the proof of this result). \begin{theorem} \label{t:Grothendieck-Ck} Let $X$ be a Tychonoff space.
\begin{enumerate} \item[{\rm (i)}] If $X$ is a $\mu$-space whose compact subsets are sequential (for example, if $X$ is cosmic), then $C_k(X)$ has the Grothendieck property if and only if every functionally bounded subset of $X$ is finite. \item[{\rm (ii)}] If $X$ is a sequential space, then $C_k(X)$ has the Grothendieck property if and only if $X$ is discrete. \end{enumerate} \end{theorem} Note that the condition on $X$ of being a sequential space in (ii) of Theorem \ref{t:Grothendieck-Ck} cannot be replaced by the condition ``$X$ is a $k$-space'', as the compact space $\beta\mathbb{N}$ shows. We prove Theorems \ref{t:DP-Ck} and \ref{t:Grothendieck-Ck} in Section \ref{sec:Ck-GDP}. \section{The $(DP)$ property and the Grothendieck property for $C_p(X)$} \label{sec:Cp-GDP} In what follows we shall use the following result. \begin{theorem}[\protect{\cite[Theorem~9.3.4]{Edwards}}] \label{t:Edwards-DP} An lcs $E$ has the $(DP)$ property if and only if every absolutely convex, weakly compact subset of $E$ is precompact for the topology $\tau_{\Sigma'}$ of uniform convergence on the absolutely convex, equicontinuous, weakly compact subsets of $E'_\beta$. \end{theorem} A subset $A$ of a topological space $X$ is called {\em sequentially compact} if every sequence in $A$ contains a subsequence which converges in $A$. Following \cite{Gabr-free-resp}, a Tychonoff space $X$ is {\em sequentially angelic} if a subset $K$ of $X$ is compact if and only if $K$ is sequentially compact. It is clear that every angelic space is sequentially angelic. An lcs $E$ is {\em weakly sequentially angelic} if the space $E_w$ is sequentially angelic, where $E_w$ denotes the space $E$ endowed with the weak topology. The following proposition is proved in (ii) of Proposition~3.3 of \cite{ABR} (the condition of being quasi-complete is not used in the proof of this clause).
\begin{proposition}[\cite{ABR}] \label{p:DP-c0-quasibarrelled} Assume that an lcs $E$ satisfies the following conditions: \begin{enumerate} \item[{\rm (i)}] $E$ has the $(sDP)$ property; \item[{\rm (ii)}] both $E$ and $E'_\beta$ are weakly sequentially angelic. \end{enumerate} Then $E$ has the $(DP)$ property. \end{proposition} We shall also repeatedly use the next assertion, see \cite[Corollary~3.4]{ABR}. \begin{proposition}[\cite{ABR}] \label{p:DP-LF-DF} Let $E$ be a complete $(LF)$-space. Then $E$ has the $(DP)$ property if and only if it has the $(sDP)$ property. \end{proposition} \begin{proposition} \label{p:sDP-strong-dual-Schur} Let $E$ be a locally complete lcs in which every separable bounded set is metrizable. Assume that $E$ is weakly sequentially angelic and does not contain an isomorphic copy of $\ell_1$. Then $E$ has the $(sDP)$ property if and only if the strong dual $E'_\beta$ has the Schur property. \end{proposition} \begin{proof} Assume that $E$ has the $(sDP)$ property. Let $\{ \chi_{n}\}_{n\in\mathbb{N}}$ be a $\sigma(E',E'')$-null sequence in $E'$. Then Proposition 3.3(i) of \cite{Gabr-free-resp} guarantees that $\chi_n \to 0$ in the Mackey topology $\mu(E',E)$ on $E'$. As $E$ does not contain an isomorphic copy of $\ell_1$, a result of Ruess \cite[Theorem~2.1]{ruess} asserts that $\chi_n \to 0$ in the strong topology. Thus $E'_\beta$ has the Schur property. Conversely, if $E'_\beta$ has the Schur property, then $E$ has the $(sDP)$ property by Proposition 3.1(ii) of \cite{Gabr-free-resp}. \end{proof} If $E$ is a Fr\'{e}chet space, the necessity in the following corollary is Theorem 2.7 of \cite{albaneze}. \begin{corollary} \label{c:Frechet-DP-L1-Schur} Let $E$ be a strict $(LF)$-space not containing an isomorphic copy of $\ell_1$. Then $E$ has the $(DP)$ property if and only if the strong dual $E'_\beta$ has the Schur property.
\end{corollary} \begin{proof} The space $E$ is weakly angelic by a result of B.~Cascales and J.~Orihuela, see \cite[Proposition~11.3]{kak}. It is also well known that $E$ is even complete and that every bounded subset of $E$ is metrizable, see \cite{bonet}. Taking into account that the $(DP)$ property is equivalent to the $(sDP)$ property in the class of strict $(LF)$-spaces by Proposition \ref{p:DP-LF-DF}, the assertion follows from Proposition \ref{p:sDP-strong-dual-Schur}. \end{proof} Let $X$ be a Tychonoff space. It is well known that the dual space of $C_p(X)$ is (algebraically) the linear space $L(X)$ of all linear combinations $\chi=a_1 x_1+\cdots + a_n x_n$, where $a_1,\dots,a_n$ are real numbers and $x_1,\dots,x_n\in X$. So \[ \chi(f)=a_1 f(x_1)+\cdots + a_n f(x_n), \quad f\in C_p(X). \] If all the coefficients $a_1,\dots,a_n$ are nonzero and all $x_1,\dots,x_n$ are distinct, we set \[ \mathrm{supp}(\chi):=\{ x_1,\dots,x_n\} \mbox{ and } \chi(x_i) := a_i, \; i=1,\dots,n. \] If $x\not\in \mathrm{supp}(\chi)$ we set $\chi(x):=0$. We need the following proposition, for which statement (i) is noted on page 392 of \cite{FKS-feral} and case (ii) follows immediately from Theorem 5 or Theorem 10 of \cite{kakol}. Nevertheless, we provide a complete and independent proof for the reader's convenience. Following \cite{FKS-feral}, an lcs $E$ is called {\em feral} if every infinite-dimensional subset of $E$ is unbounded. Recall that an lcs $E$ is called {\em $c_0$-barrelled} if every null sequence in the weak-$\ast$ dual of $E$ is equicontinuous, see \cite[Chapter~12]{Jar} or \cite[Chapter~8]{bonet}. \begin{proposition} \label{p:Cp-strong-feral} \begin{enumerate} \item[{\rm (i)}] The strong dual space $L_\beta(X)$ of $C_p(X)$ is feral. \item[{\rm (ii)}] $C_p(X)$ is $c_0$-barrelled if and only if it is barrelled.
\end{enumerate} \end{proposition} \begin{proof} (i) Suppose for a contradiction that there is a bounded infinite-dimensional subset $B$ in $L_\beta(X)$. For $n=1$, fix an arbitrary nonzero $\chi_1\in B$ and let $x_1 \in \mathrm{supp}(\chi_1)$ be such that $\chi_1(x_1)=a_1 \not= 0$. Since $B$ is infinite-dimensional, by induction, for every natural number $n>1$ there exists a $\chi_{n}\in B$ satisfying the following condition: there is an $x_{n}\in \mathrm{supp}(\chi_{n})$ such that \begin{equation} \label{equ:DP-01} x_{n} \not\in \bigcup_{i=1}^{n-1}\mathrm{supp}(\chi_i) \; \mbox{ and } \; \chi_{n}(x_{n})=a_{n} \not= 0. \end{equation} Clearly, all the elements $x_n$ are distinct. Passing to a subsequence if needed, we can assume that, for every $n\in\mathbb{N}$, there is an open neighborhood $U_n$ of $x_n$ such that \begin{equation} \label{equ:DP-02} U_n \cap \left( \big[\mathrm{supp}(\chi_n)\setminus\{ x_n\}\big] \cup \bigcup_{i=1}^{n-1} U_i \right) =\emptyset. \end{equation} Finally, for every $n\in\mathbb{N}$, take a function $f_n\in C(X)$ such that $\mathrm{supp}(f_n)\subseteq U_n$ and $f_n(x_n)=n/a_n$. It is easy to see that $f_n \to 0$ in $C_p(X)$, and hence the sequence $S=\{ f_n: n\in\mathbb{N}\}$ is bounded in $C_p(X)$. The choice of $f_n$, (\ref{equ:DP-01}) and (\ref{equ:DP-02}) imply $\chi_n(f_n)=a_n f_n(x_n)=n \to\infty$. Therefore the sequence $\{ \chi_n: n\in\mathbb{N}\}\subseteq B$ is unbounded, a contradiction. (ii) If $C_p(X)$ is barrelled, then trivially it is $c_0$-barrelled. Conversely, assume that $C_p(X)$ is $c_0$-barrelled. By the Buchwalter--Schmets theorem, it suffices to prove that every functionally bounded subset of $X$ is finite. Suppose for a contradiction that $X$ has a one-to-one sequence $\{ a_n:n \in\mathbb{N}\}\subseteq X$ which is functionally bounded in $X$. For every $n\in \mathbb{N}$, set $\chi_n= 2^{-n}\delta_{a_n}\in L(X)$, where $\delta_{a_n}$ is the Dirac measure at $a_n$.
Then, for every $f\in C(X)$, we have \[ |\chi_n (f)| \leq 2^{-n}\cdot \sup\{ |f(a_k)|: k\in\mathbb{N}\} \to 0. \] Therefore $\{\chi_n\}_{n\in\mathbb{N}}$ is a weak-$\ast$ null sequence. Since $C_p(X)$ is $c_0$-barrelled, we obtain that the sequence $S=\{ \chi_n: n\in\mathbb{N}\}$ is equicontinuous. So there is a neighborhood $U$ of zero in $C_p(X)$ such that $S\subseteq U^\circ$. Since, by the Alaoglu theorem, $U^\circ$ is a weak-$\ast$ compact convex subset of $L(X)$, we obtain that $S$ is strongly bounded, see Theorem 11.11.5 of \cite{NaB}. Clearly, the sequence $S$ is infinite-dimensional and hence, by (i), $S$ is not bounded in the strong topology. This contradiction finishes the proof. \end{proof} Now Theorem \ref{t:DP-Cp} follows immediately from Proposition \ref{p:Cp-strong-feral} and the next result. \begin{theorem} \label{t:Cp-not-DP} Let $E$ be an lcs whose strong dual is feral. Then: \begin{enumerate} \item[{\rm (i)}] $E$ has the $(sDP)$ property; \item[{\rm (ii)}] $E$ has the $(DP)$ property; \item[{\rm (iii)}] $E$ has the Grothendieck property if and only if it is $c_0$-barrelled. \end{enumerate} \end{theorem} \begin{proof} (i) Let $S'=\{ \chi_n :n\in\mathbb{N}\}$ be a weakly null sequence in $E'_\beta$. As $E'_\beta$ is feral, the sequence $S'$ is finite-dimensional, and hence for every weakly null (even bounded) sequence $\{ x_n :n\in\mathbb{N}\}$ in $E$ we trivially have $\chi_n (x_n) \to 0$. Thus $E$ has the $(sDP)$ property. (ii) We use Theorem \ref{t:Edwards-DP}. First we note that every weakly compact subset of $E'_\beta$ is finite-dimensional. Therefore, the polar $A^\circ$ of every absolutely convex, equicontinuous and weakly compact subset $A$ of $E'_\beta$ is a weak neighborhood of zero in $E$. Therefore $\tau_{\Sigma'}$ coincides with the weak topology of $E$. Thus $E$ has the $(DP)$ property. (iii) Assume that $E$ has the Grothendieck property. Suppose for a contradiction that $E$ is not $c_0$-barrelled.
Then there exists a weak-$\ast$ null sequence $S$ in $E'$ which is not equicontinuous. Clearly, $S$ is infinite-dimensional. Since $E'_\beta$ is feral, it follows that $S$ is not strongly bounded. Thus $S$ does not converge to zero in the weak topology of $E'_\beta$, and hence $E$ does not have the Grothendieck property. This contradiction shows that $E$ must be $c_0$-barrelled. Conversely, assume that $E$ is $c_0$-barrelled and let $S=\{ \chi_n: n\in\mathbb{N}\}$ be a weak-$\ast$ null sequence in $E'$. Then $S$ is equicontinuous. So there is a neighborhood $U$ of zero in $E$ such that $S\subseteq U^\circ$. Since, by the Alaoglu theorem, $U^\circ$ is a weak-$\ast$ compact convex subset of $E'$, we obtain that $S$ is strongly bounded, see Theorem 11.11.5 of \cite{NaB}. Therefore $S$ is finite-dimensional because $E'_\beta$ is feral. Hence $S$ is also a weakly null sequence in $E'_\beta$. Thus $E$ has the Grothendieck property. \end{proof} \section{The $(DP)$ property and the Grothendieck property for $C_k(X)$} \label{sec:Ck-GDP} For a Tychonoff space $X$, we denote by $M_c(X)$ the space of all finite real regular Borel measures on $X$ with compact support (such measures will be denoted by $\mu,\nu$, etc.). It is well known that $M_c(X)$ is the dual space of $C_k(X)$. Let $K$ be a compact subspace of a Tychonoff space $X$. Denote by $M_K(X)$ the linear subspace of $M_c(X)$ consisting of all measures with support in $K$, and by $J: M(K)\to M_K(X)$ the natural inclusion map defined by \[ J(\nu)(A):= \nu(A\cap K), \] where $A$ is a Borel subset of $X$. \begin{lemma} \label{l:M(K)-M(X)} Let $K$ be a compact subspace of a Tychonoff space $X$. Then $J$ is a linear isomorphism of the Banach space $M(K)$ onto the subspace $M_K(X)$ of $M_c(X)_\beta$. \end{lemma} \begin{proof} It is clear that $J$ is a linear isomorphism. We show that $J$ is a homeomorphism. Denote by $S$ the restriction map from $C_k(X)$ to $C(K)$, i.e., $S(f):= f|_K$ for every $f\in C_k(X)$.
Clearly, $S$ is a continuous linear operator. Therefore its adjoint map $S^\ast: M(K) \to M_c(X)_\beta$ is continuous, see \cite[Theorem~8.11.3]{NaB}. Noting that \[ S^\ast (\nu)(f)=\nu(S(f)), \quad \nu\in M(K), \; f\in C_k(X), \] we see that $J$ is a corestriction of $S^\ast$ to $M_K(X)$. Thus $J$ is continuous. To show that $J$ is also open, it is sufficient to prove that $J(B_{M(K)})$ contains a neighborhood of zero in $M_K(X)$, where $B_{M(K)}$ is the closed unit ball of the Banach space $M(K)$. Define \[ B:= \{ f\in C(X): \; |f(x)|\leq 1 \mbox{ for every } x\in X\}. \] It is clear that $B$ is a bounded subset of $C_k(X)$. Therefore, $B^\circ \cap M_K(X)$ is a neighborhood of zero in $M_K(X)$. We show that $B^\circ \cap M_K(X) \subseteq J(B_{M(K)})$. Indeed, let $\mu\in B^\circ \cap M_K(X)$ and denote by $\nu$ the restriction of $\mu$ to $K$; so $\nu\in M(K)$ and $J(\nu)=\mu$. We have to prove that $\nu\in B_{M(K)}$. Fix an arbitrary function $g\in B_{C(K)}$. By the Tietze--Urysohn theorem, choose an extension $\widetilde{g}\in C(X)$ of $g$ to $X$ such that $|\widetilde{g}(x)|\leq 1$ for every $x\in X$. Then $\widetilde{g}\in B$, and since $\mu\in B^\circ$ we obtain \[ |\nu(g)|=\left| \int_K g(x)\, d \nu\right| =\left| \int_X \widetilde{g}(x)\, d \mu\right| \leq 1. \] Thus $\nu\in B_{M(K)}$. \end{proof} Below we provide a quite general condition on a Tychonoff space $X$ under which the space $C_k(X)$ has the $(sDP)$ property. Recall that the sets \[ [K;\varepsilon] :=\{ f\in C(X): |f(x)|<\varepsilon \; \forall x\in K\}, \] where $K$ is a compact subset of $X$ and $\varepsilon>0$, form a base at zero of the compact-open topology $\tau_k$ of $C_k(X)$. \begin{theorem} \label{t:sequential-DP-c0-quasibarrelled} Each $c_0$-barrelled space $C_k(X)$ has the $(sDP)$ property.
\end{theorem} \begin{proof} Let $\{ f_n\}_{n\in\mathbb{N}}$ and $\{ \mu_n\}_{n\in\mathbb{N}}$ be weakly null sequences in $C_k(X)$ and its strong dual $M_c(X)_\beta$, respectively. We have to show that $\lim_n \mu_n(f_n)=0$. Observe that the weak topology of $M_c(X)_\beta$ is stronger than the weak-$\ast$ topology on $M_c(X)$. Therefore the $c_0$-barrelledness of $C_k(X)$ implies that the sequence $S=\{ \mu_n\}_{n\in\mathbb{N}}$ is equicontinuous. So there are a compact subset $K$ of $X$ and $\varepsilon>0$ such that $S\subseteq [K;\varepsilon]^\circ$. Since $X$ is Tychonoff, it follows that $\mathrm{supp}(\mu_n)\subseteq K$ for every $n\in\mathbb{N}$. Indeed, otherwise there is a function $f\in C(X)$ with support in $X\setminus K$ such that $\mu_n(f)>0$ for some $n$. It is clear that $\lambda f\in [K;\varepsilon]$ for every $\lambda>0$, and hence $\mu_n(\lambda f)> 1$ for sufficiently large $\lambda$, a contradiction. For every $n\in\mathbb{N}$, denote by $\nu_n$ the restriction of $\mu_n$ to $K$, i.e., $\nu_n(A):=\mu_n(A\cap K)$ for every Borel subset $A$ of $X$. By Lemma \ref{l:M(K)-M(X)}, $\nu_n\to 0$ in the weak topology of the Banach space $M(K)$. Observe that the sequence $\{ f_n|_K\}_{n\in\mathbb{N}}$ is weakly null in the Banach space $C(K)$ because the restriction map $S:C_k(X) \to C(K)$, $S(f):= f|_K$, is continuous and hence weakly continuous. Since the support of $\mu_n$ is contained in $K$, we obtain \[ \nu_n(f|_K)=\int_K f|_K(x)\,d\nu_n = \int_X f(x)\,d\mu_n =\mu_n(f) \] for every $f\in C(X)$. Now this equality and the $(sDP)$ property of $C(K)$ imply $ \lim_n \mu_n(f_n)=\lim_n \nu_n\big(f_n|_K\big)=0. $ Thus $C_k(X)$ has the $(sDP)$ property. \end{proof} Let $\alpha$ and $\kappa$ be ordinals such that $\alpha<\kappa$.
Since $[0,\alpha]$ and $(\alpha,\kappa)$ are clopen subspaces of $[0,\kappa)$, we have \begin{equation} \label{equ:DP-1} C_k\big([0,\kappa)\big) = C\big([0,\alpha]\big) \oplus C_k\big((\alpha,\kappa)\big), \end{equation} and hence, for the strong dual spaces, \begin{equation} \label{equ:DP-2} M_c\big([0,\kappa)\big)_\beta =M_c\big([0,\alpha]\big)_\beta\oplus M_c\big((\alpha,\kappa)\big)_\beta. \end{equation} \begin{proposition} \label{p:Ck-ordinal-c0-barrelled} For every ordinal $\kappa$, the space $C_k \big([0,\kappa)\big)$ is $c_0$-barrelled. \end{proposition} \begin{proof} If $\kappa$ is a successor ordinal or has countable cofinality, then $C_k \big([0,\kappa)\big)$ is a Banach space or a Fr\'{e}chet space, respectively. Therefore $C_k \big([0,\kappa)\big)$ is even a barrelled space. Assume now that the cofinality $\mathrm{cf}(\kappa)$ of $\kappa$ is uncountable. For simplicity, set $E:=C_k\big([0,\kappa)\big)$ and let $E'_\beta :=M_c\big([0,\kappa)\big)_\beta $ be the strong dual of $E$. Let $A=\{ \mu_n\}_{n\in\mathbb{N}}$ be a weak-$\ast$ null sequence in $E'_\beta$. For every $n\in \mathbb{N}$, the support of $\mu_n$ is compact and hence there is an ordinal $\alpha_n<\kappa$ such that $\mathrm{supp}(\mu_n) \subseteq [0,\alpha_n]$. Set $\alpha:= \sup\{ \alpha_n: n\in\mathbb{N}\}$. Since $\mathrm{cf}(\kappa)>\omega$, we have $\alpha<\kappa$. For every $n\in \mathbb{N}$, denote by $\nu_n $ the restriction $ \mu_n |_{[0,\alpha]}$ of $\mu_n$ to $[0,\alpha]$. Since the restriction map $T: E\to C\big([0,\alpha]\big)$, $T(f)=f|_{[0,\alpha]}$, is surjective, we obtain that the sequence $S=\{ \nu_n\}_{n\in\mathbb{N}}$ is a weak-$\ast$ null sequence in the dual $M\big([0,\alpha]\big)$ of the Banach space $C\big([0,\alpha]\big)$.
Thus $S$ is equicontinuous, and hence there is $\lambda>0$ such that \[ S \subseteq \lambda \tilde{B}_\alpha^\circ, \mbox{ where } \tilde{B}_\alpha :=\big\{ g\in C\big([0,\alpha]\big): |g(x)|\leq 1 \mbox{ for all } x\in [0,\alpha]\big\}. \] Set $B_\alpha:= \tilde{B}_\alpha \times C_k\big((\alpha,\kappa)\big)$. It follows from (\ref{equ:DP-1}) that $B_\alpha$ is a neighborhood of zero in $E$. Then, for every $f\in B_\alpha$ and each $\mu_n\in A$, we have \[ |\mu_n(f)|=\left| \int_{[0,\kappa)} f(x) d\mu_n \right| =\left| \int_{[0,\alpha]} f|_{[0,\alpha]} (x) d\nu_n \right| \leq \lambda. \] Therefore $A\subseteq \lambda B_\alpha^\circ$, and hence $A$ is equicontinuous. Thus $E$ is $c_0$-barrelled. \end{proof} The Nachbin--Shirota theorem, see \cite{bonet}, implies that $C_k \big([0,\kappa)\big)$ is barrelled if and only if $\kappa$ is a successor ordinal or has countable cofinality. Therefore, if the cofinality $\mathrm{cf}(\kappa)$ of $\kappa$ is uncountable (for example, $\kappa=\omega_1$), then $C_k \big([0,\kappa)\big)$ is $c_0$-barrelled but not barrelled. Below we prove Theorem \ref{t:DP-Ck}. {\em Proof of Theorem \ref{t:DP-Ck}}. (i) Assume that $X$ is a hemicompact space. Then the space $C_k(X)$ has the $(sDP)$ property by Theorem \ref{t:sequential-DP-c0-quasibarrelled}. Applying Proposition \ref{p:DP-LF-DF} we obtain that the space $C_k(X)$ also has the $(DP)$ property. (ii) Assume that $X$ is a cosmic space. First we recall that each cosmic space is Lindel\"{o}f and all of its compact subsets are metrizable, see \cite{Mich}. Therefore $X$ is a $\mu$-space, and hence $C_k(X)$ is barrelled. Proposition 10.5 of \cite{Mich} implies that $C_p(X)$ is a cosmic space, and therefore every compact subset of $C_p(X)$ is metrizable. Observe that the weak topology of $C_k(X)$ is finer than the pointwise topology of $C_p(X)$. Thus every weakly compact subset of $C_k(X)$ is metrizable, and hence $C_k(X)$ is weakly sequentially angelic.
The space $C_k(X)$ has the $(sDP)$ property by Theorem \ref{t:sequential-DP-c0-quasibarrelled}. To prove that $C_k(X)$ also has the $(DP)$ property we apply Proposition \ref{p:DP-c0-quasibarrelled}. We have proved that $C_k(X)$ is barrelled and weakly sequentially angelic. Therefore it remains to check that the strong dual $M_c(X)_\beta$ of $C_k(X)$ is weakly sequentially angelic. Observe that the space $C_p(X)$, being cosmic, is separable, see \cite[p.994]{Mich}. Therefore, by Corollary 4.2.2 of \cite{mcoy}, the space $C_k(X)$ is also separable. It follows that $M_c(X)$ with the weak-$\ast$ topology admits a weaker metrizable locally convex vector topology. As the weak topology of the strong dual $M_c(X)_\beta$ of $C_k(X)$ is evidently stronger than the weak-$\ast$ topology on $M_c(X)$, we obtain that every weakly compact subset of $M_c(X)_\beta$ is metrizable, and therefore $M_c(X)_\beta$ is weakly sequentially angelic. Finally, Proposition \ref{p:DP-c0-quasibarrelled} implies that $C_k(X)$ has the $(DP)$ property. (iii) Let $X=[0,\kappa)$ for some ordinal $\kappa$. If $\kappa$ is a successor ordinal or has countable cofinality, then $[0,\kappa)$ is hemicompact. Thus, by (i), $C_k \big([0,\kappa)\big)$ has the $(sDP)$ property and the $(DP)$ property. Assume now that the cofinality $\mathrm{cf}(\kappa)$ of $\kappa$ is uncountable. Proposition \ref{p:Ck-ordinal-c0-barrelled} implies that $C_k\big([0,\kappa)\big)$ is $c_0$-barrelled. Thus, by Theorem \ref{t:sequential-DP-c0-quasibarrelled}, $C_k\big([0,\kappa)\big)$ has the $(sDP)$ property. We show below that $C_k\big([0,\kappa)\big)$ also has the $(DP)$ property. Suppose for a contradiction that $C_k\big([0,\kappa)\big)$ does not have the $(DP)$ property. Then, by Theorem \ref{t:Edwards-DP}, there exists an absolutely convex, weakly compact subset $Q$ of $C_k\big([0,\kappa)\big)$ which is not precompact in the topology $\tau_{\Sigma'}$.
Therefore, by Theorem 5 of \cite{BGP}, there are an absolutely convex, equicontinuous, weakly compact subset $W$ of $M_c\big([0,\kappa)\big)_\beta$ and a sequence $\{ f_n: n\in\mathbb{N}\}$ in $Q$ such that \begin{equation} \label{equ:DP-c0-quasibarrelled-3} f_n - f_m \not\in W^\circ \; \mbox{ for every distinct } n,m \in\mathbb{N}. \end{equation} For every $n\in \mathbb{N}$, choose $\alpha_n< \kappa$ such that $f_n(x)=f_n(\alpha_n)$ for every $x>\alpha_n$, see \cite[Example~3.1.27]{Eng}. Set $\alpha:= \sup\{ \alpha_n: n\in\mathbb{N}\}$. Since $\mathrm{cf}(\kappa)>\omega$, we have $\alpha<\kappa$. For every $n\in \mathbb{N}$, set $g_n:= f_n|_{[0,\alpha]}$. Since the restriction operator $T: C_k\big([0,\kappa)\big)\to C\big([0,\alpha]\big)$ onto $[0,\alpha]$ is continuous, it is weakly continuous and hence $T(Q)$ is a weakly compact subset of the Banach space $C\big([0,\alpha]\big)$. By the Eberlein--\v{S}mulian theorem, $T(Q)$ is weakly sequentially compact. Therefore, passing to a subsequence if needed, we can assume that $g_n$ weakly converges to some $g\in C\big([0,\alpha]\big)$. In particular, $g_n(\alpha)=f_n(\alpha) \to g(\alpha)$. Define $f\in E$ by $f|_{[0,\alpha]}:= g$ and $f|_{(\alpha,\kappa)}:=g(\alpha)$. Taking into account that $f_n|_{(\alpha,\kappa)}=f_n(\alpha) \mathbf{1}_{(\alpha,\kappa)}$ (where $\mathbf{1}_A$ denotes the characteristic function of a subset $A$), we obtain that $f_n\to f$ in the weak topology of $E$. Note that $f_n - f_{n+1}$ weakly converges to zero. Now (\ref{equ:DP-c0-quasibarrelled-3}) implies that there is a sequence $S=\{ \mu_n: n\in\mathbb{N}\}$ in $W$ such that \begin{equation} \label{equ:DP-c0-quasibarrelled-4} \mu_n (f_n - f_{n+1}) >1 \; \mbox{ for every } n \in\mathbb{N}. \end{equation} For every $n\in \mathbb{N}$, choose $\alpha\leq\beta_n< \kappa$ such that $\mathrm{supp}(\mu_n) \subseteq [0,\beta_n]$. Set $\beta:= \sup\{ \beta_n: n\in\mathbb{N}\}$.
Since $\mathrm{cf}(\kappa)>\omega$, we have $\beta<\kappa$. By Lemma \ref{l:M(K)-M(X)}, the sequence $S$ is contained in the closed subspace $L:= M_{[0,\beta]} \big([0,\kappa)\big)$ of $M_c \big([0,\kappa)\big)_\beta$ and $L$ is topologically isomorphic to the Banach space $M([0,\beta])$. Therefore, the set $W_\beta := W\cap L$ is a weakly compact subset of $M([0,\beta])$. Once again applying the Eberlein--\v{S}mulian theorem and passing to a subsequence if needed, we can assume that $\{ \mu_n\}_{n\in\mathbb{N}}$ weakly converges to a measure $\mu\in L\subseteq M_c \big([0,\kappa)\big)_\beta$. Now (\ref{equ:DP-c0-quasibarrelled-4}) and the $(sDP)$ property of $C_k\big([0,\kappa)\big)$ proved above imply \[ 1< \mu_n (f_n - f_{n+1}) = (\mu_n -\mu) (f_n - f_{n+1}) + \mu (f_n - f_{n+1}) \to 0. \] This contradiction shows that $C_k\big([0,\kappa)\big)$ has the $(DP)$ property. (iv) Assume that $X$ is a locally compact and paracompact space. Theorem 5.1.27 of \cite{Eng} states that $X=\bigoplus_{i\in I} X_i$ is the direct topological sum of a family $\{ X_i\}_{i\in I}$ of Lindel\"{o}f locally compact spaces. Since all $X_i$ are hemicompact by \cite[Ex.3.8.c]{Eng}, (i) implies that all spaces $C_k(X_i)$ have the $(DP)$ property and the $(sDP)$ property. Therefore the space $C_k(X)=\prod_{i\in I} C_k(X_i)$ has the $(DP)$ property by \cite[9.4.3(a)]{Edwards}. Now we check that $C_k(X)$ also has the $(sDP)$ property. Let $\{ f_n\}_{n\in\mathbb{N}}$ and $\{ \mu_n\}_{n\in\mathbb{N}}$ be weakly null sequences in $C_k(X)$ and $M_c(X)_\beta$, respectively. Choose a countable subfamily $J$ of the index set $I$ such that \[ \bigcup_{n\in\mathbb{N}} \mathrm{supp}(\mu_n) \subseteq \bigcup_{j\in J} X_j, \; \mbox{ and set } \; Y:=\bigcup_{j\in J} X_j. \] For every $n\in\mathbb{N}$, set $g_n:= f_n |_Y$ and let $\nu_n$ be the restriction of $\mu_n$ onto $Y$. By construction, $Y$ and $X\setminus Y$ are clopen subsets of $X$.
Therefore $C_k(X)=C_k(Y)\times C_k(X\setminus Y)$ and hence $M_c(X)_\beta = M_c(Y)_\beta \times M_c(X\setminus Y)_\beta$. Since the projection onto the first factor is continuous, it is weakly continuous as well. Thus $ \{ g_n\}_{n\in\mathbb{N}}$ and $ \{ \nu_n\}_{n\in\mathbb{N}}$ are weakly null sequences in $C_k(Y)$ and $M_c(Y)_\beta$, respectively. As $Y$ is hemicompact, (i) implies $\mu_n(f_n)= \nu_n(g_n) \to 0$ as $n\to\infty$. Thus $C_k(X)$ has the $(sDP)$ property. \qed Having in mind the $(DP)$ property, one may ask: {\em under which conditions does a null-sequence $\{ f_n: n\in\mathbb{N}\}$ in $C_{p}(X)$ weakly converge to zero in $C_k(X)$}? Recall the following known observation, which is a consequence of Lebesgue's dominated convergence theorem and the fact that every measure $\mu \in C_k(X)'=M_c(X)$ has compact support: \begin{fact}\label{fact:weakly-null-Ck} Let $X$ be a Tychonoff space and let $S=\{ f_n:n\in\mathbb{N}\}$ be a null-sequence in $C_p(X)$. If $S$ is bounded in $C_k(X)$, then $f_{n}\rightarrow 0$ in the space $C_k(X)_{w}$, that is, in the space $C_k(X)$ endowed with its weak topology. \end{fact} Using (i) of Theorem \ref{t:DP-Ck} and Fact \ref{fact:weakly-null-Ck} we obtain the following easy proposition. \begin{proposition} \label{p:bounded-sequence-in-Ck} Let $X$ be a hemicompact space and let $S=\{ f_n:n\in\mathbb{N}\}$ be a null-sequence in $C_p(X)$. Then the following assertions are equivalent: \begin{enumerate} \item[{\rm (i)}] $S$ is bounded in the space $C_k(X)$; \item[{\rm (ii)}] $\mu_n(f_n)\to 0$ for every weakly null-sequence $\{\mu_{n}: n\in\mathbb{N}\}$ in the strong dual of $C_k(X)$. \end{enumerate} \end{proposition} \begin{proof} (i)$\Rightarrow$(ii) immediately follows from (i) of Theorem \ref{t:DP-Ck} and Fact \ref{fact:weakly-null-Ck}. (ii)$\Rightarrow$(i) Let $\{ K_n:n\in\mathbb{N}\}$ be a fundamental (increasing) sequence of compact sets in $X$.
If all the $K_n$ are finite, then $C_{p}(X)=C_k(X)$ and hence $S$ is bounded by the assumption $f_{n}\rightarrow 0$ in $C_{p}(X)$. So we shall assume that all $K_n$ are infinite. Suppose for a contradiction that $S$ is not bounded in $C_k(X)$. Then there exists $K:=K_{m}$ such that $S$ is unbounded in the Banach space $C(K):=(C(K),\|.\|)$. Let $k_1<k_2<\cdots$ be a sequence in $\mathbb{N}$ such that $\|f_{k_n}\|\geq n$ for all $n\in \mathbb{N}$. For every $n\in\mathbb{N}$, pick $x_{n}\in K$ such that $|f_{k_n}(x_{n})|=\|f_{k_n}\|$ and set \[ \mu_{i}:= \|f_{k_n}\|^{-1}\delta_{x_{n}} \mbox{ if } i=k_n \mbox{ for some } n\in\mathbb{N}, \mbox{ and } \mu_{i}:=0 \mbox{ otherwise}. \] Then the sequence $M:=\{ \mu_{i}:i\in\mathbb{N}\}$ converges to zero in the norm dual of $C(K)$. It follows from Lemma \ref{l:M(K)-M(X)} that $\mu_i \to 0$ in the strong dual of $C_k(X)$. Therefore $M$ is a weakly null-sequence in the strong dual of $C_k(X)$. But since $|\mu_{k_n}(f_{k_n})|=1$ for every $n\in\mathbb{N}$, we obtain that (ii) does not hold. This contradiction shows that $S$ is bounded in $C_k(X)$. \end{proof} \begin{remark} \label{r:Ck-Cp-bounded} {\em Let $X$ be a Tychonoff space containing an infinite compact subset $K$. Then $C_p(X)$ contains a null-sequence which is not bounded in $C_k(X)$. Indeed, since $K$ is infinite, there is an infinite discrete sequence $\{ x_n\}_{n\in\mathbb{N}}$ in $K$ with pairwise disjoint neighborhoods $V_n$ of $x_n$ in $X$, see \cite[Lemma 11.7.1]{Jar}. For every $n\in\mathbb{N}$, choose a function $f_n: X \to [0,n]$ with support in $V_n$ and $f_n(x_n)=n$. As the $V_n$ are pairwise disjoint, $f_n \to 0$ in $C_p(X)$. It is clear that, for each $m\in\mathbb{N}$, we have \[ f_n \not\in\{ f\in C_k(X): |f(x)|\leq m \mbox{ for all } x\in K\}, \mbox{ for every } n>m.
\] Thus $\{ f_{n}:n\in\mathbb{N}\}$ is not bounded in $C_k(X)$.} \qed \end{remark} To prove Theorem \ref{t:Grothendieck-Ck} we need the following lemma, which (for compact spaces) was actually noted in \cite[p.138]{Dales-Lau}. Recall that a sequence $\{ x_n\}_{n\in\mathbb{N}}$ in a topological space $X$ is called {\em trivial} if there is $m\in\mathbb{N}$ such that $x_n =x_m$ for every $n\geq m$. \begin{lemma} \label{l:Grothendieck-sequence} Let $X$ be a Tychonoff space. If $C_k(X)$ has the Grothendieck property, then $X$ does not contain non-trivial convergent sequences. \end{lemma} \begin{proof} Suppose for a contradiction that there is a sequence $S=\{ x_n\}_{n\in\mathbb{N}}$ converging to a point $x\in X\setminus S$; we may assume that $S$ is one-to-one. Then the sequence $\{ \delta_{x_n}-\delta_x\}_{n\in\mathbb{N}}$ evidently converges weakly-$\ast$ to $0$ (as usual, $\delta_z$ denotes the Dirac measure at $z\in X$). Set $K:=S\cup\{ x\}$, so $K$ is a compact subset of $X$. It is easy to see that the map \[ \mu \mapsto \sum_{n\in\mathbb{N}} \mu\big( \{ x_n\} \big), \quad \mu\in M(K), \] is a continuous linear functional on the Banach space $M(K)$. Then, by Lemma \ref{l:M(K)-M(X)} and the Hahn--Banach extension theorem, there exists an extension $\chi$ of this map to a continuous linear functional on $M_c(X)_\beta$. Since $\chi\big( \delta_{x_n}-\delta_x\big) =1$ for every $n\in\mathbb{N}$, we see that $\delta_{x_n}-\delta_x \not\to 0$ in the weak topology of $M_c(X)_\beta$. \end{proof} For the convenience of the reader we recall some definitions used in Theorem \ref{t:Grothendieck-Ck}. A topological space $X$ is a {\em $\mu$-space} if $X$ is Tychonoff and every functionally bounded subset of $X$ is relatively compact. The Nachbin--Shirota theorem states that a Tychonoff space $X$ is a $\mu$-space if and only if $C_k(X)$ is barrelled.
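The mechanism behind Lemma \ref{l:Grothendieck-sequence} can be checked concretely. The following sketch is our own illustration, outside the formal text: it models the convergent sequence $x_n=1/n\to 0$, represents finitely supported measures as point-to-mass dictionaries, and evaluates the functional $\mu\mapsto\sum_n\mu(\{x_n\})$ on the differences $\delta_{x_n}-\delta_0$. The names `dirac`, `pair`, and `chi` are ours, not the paper's.

```python
# Illustrative check (not part of the formal argument): the differences
# delta_{x_n} - delta_0 are weak-* null, yet the functional chi takes
# the constant value 1 on them, so they are not weakly null.

def dirac(x):
    # the Dirac measure at x, as a {point: mass} dictionary
    return {x: 1.0}

def minus(mu, nu):
    # the signed measure mu - nu
    out = dict(mu)
    for p, m in nu.items():
        out[p] = out.get(p, 0.0) - m
    return out

def pair(mu, f):
    # the duality <mu, f> = integral of f with respect to mu
    return sum(m * f(p) for p, m in mu.items())

def chi(mu):
    # chi(mu) = sum over the sequence points x_n = 1/n of mu({x_n})
    return sum(m for p, m in mu.items() if p != 0.0)

mus = [minus(dirac(1.0 / n), dirac(0.0)) for n in range(1, 200)]

# weak-* convergence to 0: <mu_n, f> -> 0 for continuous f (here f(x) = x)
assert abs(pair(mus[0], lambda x: x) - 1.0) < 1e-12
assert abs(pair(mus[-1], lambda x: x)) < 0.01

# but chi(mu_n) = 1 for every n, blocking weak convergence to 0
assert all(abs(chi(mu) - 1.0) < 1e-12 for mu in mus)
```

In the lemma, the Hahn--Banach theorem extends this $\chi$ from $M(K)$ to the whole strong dual, which is what the finite model cannot capture.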
A topological space $X$ is called {\em sequential} if for each non-closed subset $A\subseteq X$ there is a sequence $\{a_n\}_{n\in\omega}\subseteq A$ converging to some point $a\in \overline{A}\setminus A$. We note that a sequential space $X$ is discrete if and only if it does not contain non-trivial convergent sequences. (Indeed, if $z\in X$ is non-isolated, then the set $A:=X\setminus\{z \}$ is non-closed and hence there is a sequence (necessarily non-trivial) in $A$ converging to $z$.) Now we are ready to prove Theorem \ref{t:Grothendieck-Ck}. {\em Proof of Theorem \ref{t:Grothendieck-Ck}}. (i) Let $X$ be a $\mu$-space whose compact subsets are sequential. Assume that $C_k(X)$ has the Grothendieck property. Let $A$ be a functionally bounded subset of $X$. Then its closure $\overline{A}$ is a compact subset of $X$. By Lemma \ref{l:Grothendieck-sequence}, $\overline{A}$ does not contain non-trivial convergent sequences. Since $\overline{A}$ is sequential, we obtain that $A$ is finite. Conversely, if every functionally bounded subset of $X$ is finite, then $C_k(X)=C_p(X)$ and Theorem \ref{t:DP-Cp} applies. (ii) Let $X$ be a sequential space. Assume that $C_k(X)$ has the Grothendieck property. Then, by Lemma \ref{l:Grothendieck-sequence}, the space $X$ does not contain non-trivial convergent sequences. Since $X$ is sequential, we obtain that $X$ is discrete. Conversely, if $X$ is discrete, then $C_k(X)=C_p(X)$ and Theorem \ref{t:DP-Cp} applies. \qed \begin{thebibliography}{10} \bibitem{ABR} A.A. Albanese, J. Bonet, and W.J. Ricker, \emph{Grothendieck spaces with the Dunford--Pettis property}, Positivity \textbf{14} (2010), 145--164. \bibitem{ABR1} A.A. Albanese, J. Bonet, and W.J. Ricker, \emph{$C_0$-semigroups and mean ergodic operators in a class of Fr\'echet spaces}, J. Math. Anal. Appl. \textbf{365} (2010), 142--157. \bibitem{ABR2} A.A. Albanese, J. Bonet, and W.J. Ricker, \emph{Mean ergodic semigroups of operators}, Rev.
R. Acad. Cienc. Exactas Fis. Nat. Ser. A Mat. RACSAM \textbf{106} (2012), 299--319. \bibitem{albaneze} A.A. Albanese and E.M. Mangino, \emph{Some permanence results of the Dunford--Pettis and Grothendieck properties in lcHs}, Funct. Approx. Comment. Math. \textbf{44} (2011), 243--258. \bibitem{BGP} T. Banakh, S. Gabriyelyan, and I. Protasov, \emph{On uniformly discrete subsets in uniform spaces and topological groups}, Mat. Studii \textbf{45} (2016), 76--97. \bibitem{BomVil} F.~Bombal and I.~Villanueva, \emph{On the Dunford--Pettis property of the tensor product of $C(K)$ spaces}, Proc. Amer. Math. Soc. \textbf{129} (2001), 1359--1363. \bibitem{Bonet-Lin-93} J. Bonet and M. Lindstr\"{o}m, \emph{Convergent sequences in duals of Fr\'{e}chet spaces}, pp. 391--404 in ``Functional Analysis'', Proc. of the Essen Conference, Marcel Dekker, New York, 1993. \bibitem{Bonet-Ricker} J. Bonet and W. Ricker, \emph{Schauder decompositions and the Grothendieck and Dunford--Pettis properties in K\"{o}the echelon spaces of infinite order}, Positivity \textbf{11} (2007), 77--93. \bibitem{BFV} J. Borwein, M. Fabian, and J. Vanderwerff, \emph{Characterizations of Banach spaces via convex and other locally Lipschitz functions}, Acta Math. Vietnam. \textbf{22} (1997), 53--69. \bibitem{bourgain} J. Bourgain, \emph{On the Dunford--Pettis property}, Proc. Amer. Math. Soc. \textbf{81} (1981), 265--272. \bibitem{bourgain1} J. Bourgain, \emph{The Dunford--Pettis property for the ball-algebras, the polydisc algebras and the Sobolev spaces}, Studia Math. \textbf{77} (1984), 245--253. \bibitem{Ci-Ti} J.A. Cima and R.M. Timoney, \emph{The Dunford--Pettis property for certain planar uniform algebras}, Michigan Math. J. \textbf{34} (1987), 99--104. \bibitem{Dales-Lau} H.G. Dales, F.K. Dashiell, Jr., A.T.-M. Lau, and D. Strauss, \emph{Banach Spaces of Continuous Functions as Dual Spaces}, Springer, 2016.
\bibitem{Diestel-DP} J. Diestel, \emph{A survey of results related to the Dunford--Pettis property}, Contemporary Math. AMS \textbf{2} (1980), 15--60. \bibitem{Diestel} J. Diestel, \emph{Sequences and Series in Banach Spaces}, Graduate Texts in Mathematics \textbf{92}, Springer, 1984. \bibitem{Diestel-Uhl} J. Diestel and J.J. Uhl, Jr., \emph{Vector Measures}, Math. Surveys No. \textbf{15}, Amer. Math. Soc., Providence, 1977. \bibitem{Edwards} R.E. Edwards, \emph{Functional Analysis}, Holt, Rinehart and Winston, New York, 1965. \bibitem{Eng} R.~Engelking, \emph{General Topology}, Panstwowe Wydawnictwo Naukowe, 1985. \bibitem{feka} J.C. Ferrando and J. K\c akol, \emph{On precompact sets in spaces $C_{c}(X)$}, Georgian Math. J. \textbf{20} (2013), 247--254. \bibitem{FKS-feral} J.C. Ferrando, J. K\c{a}kol, and S. Saxon, \emph{The dual of the locally convex space $C_p(X)$}, Funct. Approx. Comment. Math. \textbf{50} (2014), 389--399. \bibitem{Gabr-free-resp} S. Gabriyelyan, \emph{Locally convex spaces and Schur type properties}, Ann. Acad. Sci. Fenn. Math., accepted. \bibitem{GKKM} S. Gabriyelyan, J. K{\c{a}}kol, W. Kubi\'s, and W. Marciszewski, \emph{Networks for the weak topology of Banach and Fr\'echet spaces}, J. Math. Anal. Appl. \textbf{432} (2015), 1183--1199. \bibitem{Grothen} A. Grothendieck, \emph{Sur les applications lin\'{e}aires faiblement compactes d'espaces du type $C(K)$}, Canad. J. Math. \textbf{5} (1953), 129--173. \bibitem{Jar} H.~Jarchow, \emph{Locally Convex Spaces}, B.G. Teubner, Stuttgart, 1981. \bibitem{kak} J.~K\c{a}kol, W.~Kubi\'s, and M.~Lopez-Pellicer, \emph{Descriptive Topology in Selected Topics of Functional Analysis}, Developments in Mathematics, Springer, 2011. \bibitem{kakol} J. K\c akol, S.A. Saxon, and A. Todd, \emph{Weak barrelledness for $C(X)$ spaces}, J. Math. Anal. Appl. \textbf{297} (2004), 495--505.
\bibitem{mcoy} R.A.~McCoy and I.~Ntantu, \emph{Topological Properties of Spaces of Continuous Functions}, Lecture Notes in Math. \textbf{1315}, Springer, 1988. \bibitem{Mich} E.~Michael, \emph{$\aleph_0$-spaces}, J. Math. Mech. \textbf{15} (1966), 983--1002. \bibitem{NaB} L. Narici and E. Beckenstein, \emph{Topological Vector Spaces}, Second Edition, CRC Press, New York, 2011. \bibitem{bonet} P. P\'{e}rez Carreras and J. Bonet, \emph{Barrelled Locally Convex Spaces}, North-Holland Mathematics Studies \textbf{131}, Amsterdam, 1987. \bibitem{ruess} W. Ruess, \emph{Locally convex spaces not containing $\ell_{1}$}, Funct. Approx. Comment. Math. \textbf{50} (2014), 351--358. \end{thebibliography} \end{document}
\begin{document} \title[Configuration space of four points in the torus] {The ${\rm PSL}(2,{{\mathbb R}})^2$-configuration space of four points in the torus $S^1\times S^1$} \author[I.D. Platis ]{Ioannis D. Platis } \email{[email protected]} \address{Department of Mathematics and Applied Mathematics\\ University of Crete\\ University Campus\\ GR 700 13 Voutes\\ Heraklion Crete\\Greece} \begin{abstract} The torus ${{\mathbb T}}=S^1\times S^1$ appears as the ideal boundary $\partial_\infty AdS^3$ of the three-dimensional anti-de Sitter space $AdS^3$, as well as the Furstenberg boundary ${{\mathbb F}}(X)$ of the rank-2 symmetric space $X={\rm SO}_0(2,2)/{\rm SO}(2)\times{\rm SO}(2)$. We introduce cross-ratios on the torus in order to parametrise the ${\rm PSL}(2,{{\mathbb R}})^2$-configuration space of quadruples of pairwise distinct points in ${{\mathbb T}}$, and we define a natural M\"obius structure on ${{\mathbb T}}$, and therefore on ${{\mathbb F}}(X)$ and $\partial_\infty AdS^3$ as well. \end{abstract} \date{{\today}\\ {\it 2010 Mathematics Subject Classifications.} 57M50, 51F99. \\ \it{Key words. Configuration space, torus, M\"obius structures.} } \maketitle \section{Introduction} Let $S$ be a topological space and denote by ${{\mathcal C}}_4={{\mathcal C}}_4(S)$ the space of quadruples of pairwise distinct points of $S$. Let $G$ be a group of homeomorphisms of $S$ acting diagonally on ${{\mathcal C}}_4$ from the left: for each $g\in G$ and ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)\in{{\mathcal C}}_4$, $$ (g,{{\mathfrak p}})\mapsto g({{\mathfrak p}})=(g(p_1),g(p_2),g(p_3),g(p_4)). $$ The $G$-{\it configuration space of quadruples of pairwise distinct points in $S$} is the quotient space ${{\mathcal F}}_4={{\mathcal F}}_4(S)$ of ${{\mathcal C}}_4$ under the action of $G$. In this general setting it is not obvious what kind of object ${{\mathcal F}}_4$ is; however, there are tractable cases.
If for instance $S$ is a smooth manifold of dimension $s$ and $G$ is a Lie subgroup of diffeomorphisms of $S$ of dimension $g$, then ${{\mathcal F}}_4$ carries the structure of a smooth manifold of dimension $4s-g$ if the diagonal action is proper and free. The particular cases when $S$ is a sphere bounding a hyperbolic space and $G$ is the set ${{\mathcal M}}(S)$ of M\"obius transformations acting on $S$ are prototypical; in some of these cases there exist neat parametrisations of the configuration spaces by cross-ratios. The most illustrative (and simplest) example of all is that of the ${{\mathcal M}}(S^1)$-configuration space ${{\mathcal F}}_4(S^1)$ of quadruples of pairwise distinct points in the unit circle $S^1$. What follows is classical and well-known, see for instance \cite{Be}, but we include it here both for clarity as well as for setting up the notation we shall use throughout the paper. Let $S^1=\overline{{{\mathbb R}}}$, where $\overline{{{\mathbb R}}}={{\mathbb R}}\cup\{\infty\}$, be the unit circle. Recall that if ${{\mathfrak x}}=(x_1,x_2,x_3,x_4)$ $\in{{\mathcal C}}_4(S^1)$ then its real cross-ratio is defined by $$ {{\rm X}}({{\mathfrak x}})=[x_1,x_2,x_3,x_4]=\frac{(x_4-x_2)(x_3-x_1)}{(x_4-x_1)(x_3-x_2)}, $$ where we agree that if one of the points is $\infty$ then $\infty:\infty=1$. The set of M\"obius transformations ${{\mathcal M}}(S^1)$ of $S^1$ comprises the maps $g:S^1\to S^1$ of the form $$ g(x)=\frac{ax+b}{cx+d},\quad x\in\overline{{{\mathbb R}}}, $$ where the matrix $$ A=\left(\begin{matrix} a&b\\ c&d\end{matrix}\right) $$ is in ${\rm PSL}(2,{{\mathbb R}})={\rm SL}(2,{{\mathbb R}})/\{\pm I\}$.
The cross-ratio ${\rm X}$ is invariant under the diagonal action of ${{\mathcal M}}(S^1)$ on ${{\mathcal C}}_4={{\mathcal C}}_4(S^1)$: if $g\in{{\mathcal M}}(S^1)$, then $$ {{\rm X}}(g({{\mathfrak x}}))=[g(x_1),g(x_2),g(x_3),g(x_4)]=[x_1,x_2,x_3,x_4]={{\rm X}}({{\mathfrak x}}), $$ for every ${{\mathfrak x}}\in{{\mathcal C}}_4$. Also, ${{\rm X}}$ takes values in ${{\mathbb R}}\setminus\{0,1\}$ and for each quadruple $(x_1,x_2,x_3,x_4)$ satisfies the standard symmetry properties: \begin{eqnarray*} && \noindent ({\rm S1})\quad {{\rm X}}(x_1,x_2,x_3,x_4)={{\rm X}}(x_2,x_1,x_4,x_3)={{\rm X}}(x_3,x_4,x_1,x_2)= {{\rm X}}(x_4,x_3,x_2,x_1),\\ && \noindent({\rm S2})\quad{{\rm X}}(x_1,x_2,x_3,x_4)\cdot {{\rm X}}(x_1,x_2,x_4,x_3)=1,\\ && \noindent({\rm S3})\quad {{\rm X}}(x_1,x_2,x_3,x_4)\cdot {{\rm X}}(x_1,x_4,x_2,x_3)\cdot {{\rm X}}(x_1,x_3,x_4,x_2)=-1. \end{eqnarray*} Hence all 24 real cross-ratios corresponding to a given quadruple ${{\mathfrak x}}$ are functions of ${{\rm X}}({{\mathfrak x}})=[x_1,x_2,x_3,x_4]$. We let ${{\mathcal M}}(S^1)$ act diagonally on ${{\mathcal C}}_4(S^1)$ and let ${{\mathcal F}}_4={{\mathcal F}}_4(S^1)$ be the ${{\mathcal M}}(S^1)$-configuration space of quadruples of pairwise distinct points in $S^1$. From the invariance of the cross-ratio we have that the map $$ {{\mathcal G}}:{{\mathcal F}}_4(S^1)\ni[{{\mathfrak x}}]\mapsto {{\rm X}}({{\mathfrak x}})\in{{\mathbb R}}\setminus\{0,1\}, $$ is well-defined. Also, ${{\mathcal G}}$ is surjective; to see this, recall that the action of ${{\mathcal M}}(S^1)$ on $S^1$ is triply transitive: if $(x_1,x_2,x_3)$ is a triple of pairwise distinct points in $S^1$, then there is a unique $f\in{{\mathcal M}}(S^1)$ such that $$ f(x_1)=0,\quad f(x_2)=\infty,\quad f(x_3)=1. $$ Recall at this point that $f$ is actually given in terms of cross-ratios: $$ [0,\infty,f(x),1]=[x_1,x_2,x,x_3].
$$ Hence if $x\in{{\mathbb R}}\setminus\{0,1\}$, then $[{{\mathfrak x}}]\mapsto x$, where ${{\mathfrak x}}=(0,\infty,x,1)$. Finally, ${{\mathcal G}}$ is injective: if ${{\mathfrak x}}$ and ${{\mathfrak x}}'$ are in ${{\mathcal C}}_4$ and ${{\rm X}}({{\mathfrak x}})={{\rm X}}({{\mathfrak x}}')=x$, then there exists a $g\in{{\mathcal M}}(S^1)$ such that ${{\mathfrak x}}=g({{\mathfrak x}}')$. All the above discussion boils down to the well-known fact that the configuration space ${{\mathcal F}}_4={{\mathcal F}}_4(S^1)$ of quadruples of pairwise distinct points in $S^1$ is isomorphic to ${{\mathbb R}}\setminus\{0,1\}$, and therefore it inherits the structure of a one-dimensional disconnected real manifold. Moreover, the following possibilities occur for the relative position of the points $x_i$ of ${{\mathfrak x}}$ on the circle: \begin{enumerate} \item $x_1,x_2$ separate $x_3,x_4$. This happens if and only if ${{\rm X}}({{\mathfrak x}})<0$. \item $x_1,x_3$ separate $x_2,x_4$. This happens if and only if ${{\rm X}}({{\mathfrak x}})>1$. \item $x_1,x_4$ separate $x_2,x_3$. This happens if and only if $0<{{\rm X}}({{\mathfrak x}})<1$. \end{enumerate} Each of the cases (1), (2) and (3) corresponds to one of the connected components of ${{\mathcal F}}_4$. In an analogous manner, see again \cite{Be}, the ${\rm PSL}(2,{{\mathbb C}})$-configuration space ${{\mathcal F}}_4(S^2)$ of quadruples of pairwise distinct points in the sphere $S^2$ is isomorphic to ${{\mathbb C}}\setminus\{0,1\}$ and therefore inherits the structure of a one-dimensional complex manifold.
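As a quick sanity check (our own illustration, not part of the text), the symmetries (S1)--(S3) and the separation trichotomy can be verified numerically for the real cross-ratio:

```python
# Numerical check of the cross-ratio identities on S^1 = R u {oo}
# (finite points only, so the oo : oo = 1 convention is not needed).

def X(x1, x2, x3, x4):
    # X(x1,x2,x3,x4) = (x4-x2)(x3-x1) / ((x4-x1)(x3-x2))
    return (x4 - x2) * (x3 - x1) / ((x4 - x1) * (x3 - x2))

x1, x2, x3, x4 = 0.3, 1.7, -2.0, 5.0

# (S1): these permutations preserve the cross-ratio
assert abs(X(x1, x2, x3, x4) - X(x2, x1, x4, x3)) < 1e-12
assert abs(X(x1, x2, x3, x4) - X(x3, x4, x1, x2)) < 1e-12
assert abs(X(x1, x2, x3, x4) - X(x4, x3, x2, x1)) < 1e-12

# (S2): swapping the last two points inverts the cross-ratio
assert abs(X(x1, x2, x3, x4) * X(x1, x2, x4, x3) - 1.0) < 1e-12

# (S3): the triple product equals -1
assert abs(X(x1, x2, x3, x4) * X(x1, x4, x2, x3) * X(x1, x3, x4, x2) + 1.0) < 1e-12

# Separation case (2): for (0,1,2,3) the pair 0,2 separates 1,3 on the
# circle, and indeed X(0,1,2,3) = 4/3 > 1.
assert abs(X(0.0, 1.0, 2.0, 3.0) - 4.0 / 3.0) < 1e-12
```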
The case of the ${\rm PU}(2,1)$-configuration space ${{\mathcal F}}_4(S^3)$ of quadruples of pairwise distinct points in $S^3$ is much harder, but it is treated in the same spirit, see \cite{FP}: using complex cross-ratios we find that, apart from a subset of lower dimension, ${{\mathcal F}}_4(S^3)$ is isomorphic to $({{\mathbb C}}\setminus{{\mathbb R}})\times{{\mathbb C}} P^1$, a two-dimensional disconnected complex manifold. Finally, recent treatments for the cases of the ${\rm PSp}(1,1)$- and ${\rm PSp}(2,1)$-configuration spaces of quadruples of pairwise distinct points in $S^3$ and $S^7$ may be found in \cite{GM} and \cite{C}, respectively. As we have mentioned above, spheres may be viewed as boundaries of symmetric spaces of non-compact type and of rank 1, that is, hyperbolic spaces ${{\bf H}}_{{\mathbb K}}^n$, where ${{\mathbb K}}$ can be: a) ${{\mathbb R}}$, the set of real numbers, b) ${{\mathbb C}}$, the set of complex numbers, c) $\mathbb{H}$, the set of quaternions, and d) $\mathbb{O}$, the set of octonions (in the last case $n=2$). Two problems arise naturally here: first, to describe configuration spaces of four points on products of such spheres by parametrising them with cross-ratios defined on those products; and second, to describe configuration spaces of four points in boundaries of symmetric spaces of rank $>1$, again by parametrising them using cross-ratios defined on those boundaries. These two problems are sometimes intertwined, and the crucial issue here is the definition of an appropriate cross-ratio; this is directly linked to the M\"obius geometry of the spaces we wish to study, as we explain below.
In this paper we deal with both problems by describing in the above manner the configuration space of four points in the torus ${{\mathbb T}}=S^1\times S^1$; the torus is the Furstenberg boundary of the symmetric space ${\rm SO}_0(2,2)/{\rm SO}(2)\times{\rm SO}(2)$, which is of rank 2, as well as the ideal boundary of the anti-de Sitter space $AdS^3$, see Section \ref{sec:cons}. Returning to our original general setting, suppose that the $G$-configuration space ${{\mathcal F}}_4(S)$ has a real manifold structure due to a proper and free action of $G$ on ${{\mathcal C}}_4(S)$. By taking the product $S\times S$ and the space ${{\mathcal C}}_4(S\times S)$ of quadruples of pairwise distinct points of $S\times S$, the group $G\times G$ acts diagonally as follows: for $g=(g_1,g_2)$ and ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)\in{{\mathcal C}}_4(S\times S)$, $p_i=(x_i,y_i)$, $i=1,2,3,4$, $$ (g,{{\mathfrak p}})\mapsto g({{\mathfrak p}})=\left((g_1(x_1),g_2(y_1)),(g_1(x_2),g_2(y_2)),(g_1(x_3),g_2(y_3)),(g_1(x_4),g_2(y_4))\right). $$ Using elementary arguments, one deduces that the action of $G\times G$ on ${{\mathcal C}}_4(S\times S)$ is proper. Towards the freeness of the action of $G\times G$ we observe that from the obvious injection $ {{\mathcal C}}_4(S)\times{{\mathcal C}}_4(S)\to{{\mathcal C}}_4(S\times S) $ which assigns to each $({{\mathfrak x}},{{\mathfrak y}})\in{{\mathcal C}}_4(S)\times{{\mathcal C}}_4(S)$, ${{\mathfrak x}}=(x_1,x_2,x_3,x_4)$, ${{\mathfrak y}}=(y_1,y_2,y_3,y_4)$, the quadruple ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$, where $p_i=(x_i,y_i)$, $i=1,2,3,4$, we obtain an injection $$ {{\mathcal F}}_4(S)\times{{\mathcal F}}_4(S)\ni([{{\mathfrak x}}],[{{\mathfrak y}}])\mapsto [{{\mathfrak p}}]\in{{\mathcal F}}_4(S\times S). $$ The image of this map is the subset ${{\mathcal F}}_4^\sharp(S\times S)$ of ${{\mathcal F}}_4(S\times S)$ comprising classes of quadruples ${{\mathfrak p}}$ such that ${{\mathfrak x}}$, ${{\mathfrak y}}$ are in ${{\mathcal C}}_4(S)$.
We may straightforwardly show that ${{\mathcal F}}_4^\sharp$ comprises principal orbits, that is, orbits of the maximal dimension of the $G\times G$ action; these are orbits of quadruples with trivial isotropy groups. Therefore ${{\mathcal F}}_4^\sharp(S\times S)$ is a manifold of dimension $2n$, where $n=\dim({{\mathcal F}}_4(S))$. If the action is free only on ${{\mathcal F}}^\sharp_4(S\times S)$, the orbits of the remaining points are of dimension less than $2n$. This is exactly the case we study in Section \ref{sec:config}, i.e., the configuration space ${{\mathcal F}}_4({{\mathbb T}})$ of quadruples of pairwise distinct points in the torus ${{\mathbb T}}=S^1\times S^1$. The subset of ${{\mathcal F}}_4({{\mathbb T}})$ of maximal dimension two is isomorphic to $$ {{\mathcal F}}_4^\sharp({{\mathbb T}})={{\mathcal F}}_4(S^1)\times{{\mathcal F}}_4(S^1)=({{\mathbb R}}\setminus\{0,1\})^2, $$ a disconnected subset of ${{\mathbb R}}^2$ comprising nine connected components, see Theorem \ref{thm:vec}. Also, by considering the natural involution $\iota_0:{{\mathbb T}}\to {{\mathbb T}}$ which maps each $(x,y)$ to $(y,x)$ and the group $\overline{{{\mathcal M}}({{\mathbb T}})}$ comprising M\"obius transformations of ${{\mathbb T}}$ followed by $\iota_0$, and by taking $\overline{{{\mathcal F}}^\sharp_4({{\mathbb T}})}$ to be the quotient of ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ by the diagonal action of $\overline{{{\mathcal M}}({{\mathbb T}})}$, we find that it is isomorphic to a disconnected subset ${{\mathcal Q}}$ of ${{\mathbb R}}^2$ comprising four open connected components and three components with 1-dimensional boundary (Theorem \ref{thm:band}).
At this point, the goal of parametrising the configuration space by cross-ratios defined on the torus has not yet been achieved; by Theorem \ref{thm:vec} the set ${{\mathcal F}}_4({{\mathbb T}})$ admits a parametrisation obtained by assigning to each ${{\mathfrak p}}$ the pair $({{\rm X}}({{\mathfrak x}}),{{\rm X}}({{\mathfrak y}}))$. To this end, for each ${{\mathfrak p}}\in{{\mathcal C}}_4^\sharp$ we define $$ {{\mathbb X}}({{\mathfrak p}})={{\rm X}}({{\mathfrak x}})\cdot {{\rm X}}({{\mathfrak y}}), $$ see Section \ref{sec:realX}, which is ${{\mathcal M}}({{\mathbb T}})$-invariant. Certain symmetries hold for ${{\mathbb X}}$, so that for each quadruple ${{\mathfrak p}}$ all 24 cross-ratios of quadruples resulting from permutations of points of ${{\mathfrak p}}$ are functions of two cross-ratios which we denote by ${{\mathbb X}}_1={{\mathbb X}}_1({{\mathfrak p}})$ and ${{\mathbb X}}_2={{\mathbb X}}_2({{\mathfrak p}})$. According to Proposition \ref{prop:fundX}, $({{\mathbb X}}_1,{{\mathbb X}}_2)$ lies in a disconnected subset ${{\mathcal P}}$ of ${{\mathbb R}}^2$ comprising six components. Three of these components are open and the remaining three have boundaries which are pieces of the parabola $$ \Delta(u,v)=u^2+v^2-2u-2v+1-2uv=0. $$ In Theorem \ref{thm:F4} we prove that ${{\mathcal F}}_4^\sharp({{\mathbb T}})$ is in a 2-1 surjection with ${{\mathcal P}}$ and therefore $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$ is isomorphic to ${{\mathcal P}}$. Remark that boundary components of ${{\mathcal P}}$ correspond to quadruples ${{\mathfrak p}}$ such that all points of ${{\mathfrak p}}$ lie on a Circle, that is, an ${{\mathcal M}}({{\mathbb T}})$-image of the diagonal curve $\gamma(x)=(x,x)$, $x\in S^1$, which is fixed by the involution $\iota_0$. Remark also that the parametrisations of $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$ by ${{\mathcal Q}}$ and by ${{\mathcal P}}$ induce the same differentiable structure.
We now discuss in brief some general aspects of M\"obius geometry. Let $S$ be a set comprising at least four points and denote by ${{\mathcal C}}_4={{\mathcal C}}_4(S)$ the space of quadruples of pairwise distinct points of $S$. A {\it positive cross-ratio} ${{\bf X}}$ on ${{\mathcal C}}_4$ is a map ${{\mathcal C}}_4\to{{\mathbb R}}_+$ such that for each ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)\in{{\mathcal C}}_4$ the following symmetry properties hold: \begin{eqnarray*} && ({\rm S1})\quad {{\bf X}}(p_1,p_2,p_3,p_4)={{\bf X}}(p_2,p_1,p_4,p_3)={{\bf X}}(p_3,p_4,p_1,p_2) ={{\bf X}}(p_4,p_3,p_2,p_1),\\ && ({\rm S2})\quad {{\bf X}}(p_1,p_2,p_3,p_4)\cdot {{\bf X}}(p_1,p_2,p_4,p_3)=1,\\ && ({\rm S3})\quad {{\bf X}}(p_1,p_2,p_3,p_4)\cdot {{\bf X}}(p_1,p_4,p_2,p_3)\cdot {{\bf X}}(p_1,p_3,p_4,p_2)=1. \end{eqnarray*} Hence all 24 cross-ratios corresponding to a given quadruple ${{\mathfrak p}}$ are functions of ${{\bf X}}_1({{\mathfrak p}})=$ ${{\bf X}}(p_1,p_2,p_3,p_4)$ and ${{\bf X}}_2({{\mathfrak p}})={{\bf X}}(p_1,p_3,p_2,p_4)$. The {\it M\"obius structure} of $S$ is then defined to be the map $$ {{\mathfrak M}}_S:{{\mathcal C}}_4(S)\ni{{\mathfrak p}}\mapsto({{\bf X}}_1({{\mathfrak p}}),{{\bf X}}_2({{\mathfrak p}}))\in({{\mathbb R}}_+)^2. $$ The {\it M\"obius group} ${{\mathfrak M}}(S)$ comprises bijections $g:S\to S$ that leave ${{\bf X}}$ invariant, that is, ${{\bf X}}(g({{\mathfrak p}}))={{\bf X}}({{\mathfrak p}})$. We stress here that the above definitions vary depending on the author; however, the existing definitions are equivalent. Equivalent to ours, for instance, is the definition of {\it sub-M\"obius structure} in \cite{Bu}. Frequently, a M\"obius structure is obtained from a metric (or even a semi-metric) $\rho$ on $S$ and it is called a {\it M\"obius structure associated to $\rho$}.
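To illustrate how (S2) and (S3) express further cross-ratios in terms of ${{\bf X}}_1$ and ${{\bf X}}_2$: by (S2) we have ${{\bf X}}(p_1,p_3,p_4,p_2)={{\bf X}}(p_1,p_3,p_2,p_4)^{-1}={{\bf X}}_2^{-1}({{\mathfrak p}})$, and substituting this into (S3) we obtain
$$
{{\bf X}}(p_1,p_4,p_2,p_3)=\frac{{{\bf X}}_2({{\mathfrak p}})}{{{\bf X}}_1({{\mathfrak p}})}.
$$
The remaining permutations are handled in the same manner, using (S1) to bring the quadruple to one of the forms above.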
In the primitive case of the circle $S^1$, from the real cross-ratio ${\rm X}$ we obtain a positive cross-ratio ${{\bf X}}$ on ${{\mathcal C}}_4(S^1)$ by assigning to each ${{\mathfrak p}}\in{{\mathcal C}}_4(S^1)$ the number $$ {{\bf X}}({{\mathfrak p}})=|{\rm X}({{\mathfrak p}})|=\frac{|x_4-x_2||x_3-x_1|}{|x_4-x_1||x_3-x_2|}=\frac{\rho(x_4,x_2)\cdot \rho(x_3,x_1)}{\rho(x_4,x_1)\cdot \rho(x_3,x_2)}. $$ The metric $\rho$ here is the extension of the euclidean metric in ${{\mathbb R}}$ to $\overline{{{\mathbb R}}}$: $\rho(x,y)=|x-y|$ if $x,y\in{{\mathbb R}}$, $\rho(x,\infty)=+\infty$ and $\rho(\infty,\infty)=0$. One verifies that this positive cross-ratio satisfies properties (S1), (S2) and (S3). The M\"obius structure of $S^1$ is thus the map $$ {{\mathfrak M}}_{S^1}({{\mathfrak p}})=({{\bf X}}_1({{\mathfrak p}}),{{\bf X}}_2({{\mathfrak p}}))\in({{\mathbb R}}^+\setminus\{0,1\})^2. $$ Note that the M\"obius group ${{\mathfrak M}}(S^1)$ for this M\"obius structure is ${\rm SL}(2,{{\mathbb R}})$, a double cover of ${{\mathcal M}}(S^1)$. Note also that since ${\rm X}_1({{\mathfrak p}})+{\rm X}_2({{\mathfrak p}})=1$, we have by the triangle inequality \begin{equation}\label{eq:ptol} \left|{{\bf X}}_1({{\mathfrak p}})-{{\bf X}}_2({{\mathfrak p}})\right|\le 1\quad\text{and}\quad{{\bf X}}_1({{\mathfrak p}})+{{\bf X}}_2({{\mathfrak p}})\ge 1. \end{equation} Explicitly, ${{\bf X}}_1({{\mathfrak p}})-{{\bf X}}_2({{\mathfrak p}})=1$ if $x_1$ and $x_3$ separate $x_2$ and $x_4$; ${{\bf X}}_2({{\mathfrak p}})-{{\bf X}}_1({{\mathfrak p}})=1$ if $x_1$ and $x_2$ separate $x_3$ and $x_4$; ${{\bf X}}_1({{\mathfrak p}})+{{\bf X}}_2({{\mathfrak p}})=1$ if $x_1$ and $x_4$ separate $x_2$ and $x_3$. If the M\"obius structure ${{\mathfrak M}}_S$ of a space $S$ satisfies (\ref{eq:ptol}) then it is called Ptolemaean. The subsets of $S$ at which equalities hold in (\ref{eq:ptol}) are called Ptolemaean circles.
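As a concrete check of these separation rules (the quadruple below is a sample choice), take ${{\mathfrak p}}=(0,\infty,3,1)\in{{\mathcal C}}_4(S^1)$. With the usual convention that the two infinite factors cancel,
$$
{{\bf X}}_1({{\mathfrak p}})=\frac{\rho(1,\infty)\,\rho(3,0)}{\rho(1,0)\,\rho(3,\infty)}=3,\qquad
{{\bf X}}_2({{\mathfrak p}})={{\bf X}}(0,3,\infty,1)=\frac{\rho(1,3)\,\rho(\infty,0)}{\rho(1,0)\,\rho(\infty,3)}=2,
$$
so that ${{\bf X}}_1({{\mathfrak p}})-{{\bf X}}_2({{\mathfrak p}})=1$, in accordance with the fact that $x_1=0$ and $x_3=3$ separate $x_2=\infty$ and $x_4=1$ on the circle.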
In this way the M\"obius structure of $S^1$ associated to the euclidean metric is Ptolemaean and $S^1$ itself is a Ptolemaean circle for this M\"obius structure. $S^1$ is the boundary of the hyperbolic disc ${{\bf H}}_{{\mathbb C}}^1$; it is proved in \cite{P} that all M\"obius structures on boundaries of hyperbolic spaces ${{\bf H}}_{{\mathbb K}}^n$, $n=1,2,\dots$, are Ptolemaean; all these M\"obius structures are associated to the Kor\'anyi metric. Therefore all boundaries of symmetric spaces of non-compact type and of rank-1 carry M\"obius structures which are associated to a metric and are all Ptolemaean. In the case of the torus we study here, this does not happen: the M\"obius structure which is defined in Section \ref{sec:mob} \begin{enumerate} \item is not associated to any semi-metric on ${{\mathbb T}}$; \item is not Ptolemaean, but \item there exist Ptolemaean circles for this structure. \end{enumerate} In the direction of defining M\"obius structures on boundaries of symmetric spaces of rank $>1$, little was known until recently; in his recent work \cite{B}, Beyrer explicitly constructs cross-ratio triples in F\"urstenberg boundaries of symmetric spaces of higher rank. We have already mentioned that the torus ${{\mathbb T}}=S^1\times S^1$ which we study here appears naturally as the F\"urstenberg boundary of the rank-2 symmetric space ${\rm SO}_0(2,2)/{\rm SO}(2)\times{\rm SO}(2)$ and is also isomorphic to the ideal boundary of 3-dimensional anti-de Sitter space $AdS^3$. Our results apply to these spaces, which we discuss in Section \ref{sec:cons}. \noindent{\it Acknowledgements:} Part of this work was carried out while the author visited the University of Zurich; the hospitality is gratefully acknowledged. The author also wishes to thank Viktor Schroeder and Jonas Beyrer for fruitful discussions. \section{The Configuration Space of Four Points in the Torus}\label{sec:config} This section contains our main results.
In Section \ref{sec:trans} we study the transitive action of the group of M\"obius transformations of the torus. The results about the configuration space are in Sections \ref{sec:confT} and \ref{sec:realX}. \subsection{The action of M\"obius transformations in the torus}\label{sec:trans} The torus ${{\mathbb T}}=S^1\times S^1$ is isomorphic to $\overline{{{\mathbb R}}}\times\overline{{{\mathbb R}}}$, where $\overline{{{\mathbb R}}}={{\mathbb R}}\cup\{\infty\}$. Let $(x,y)\in{{\mathbb T}}$; a M\"obius transformation of ${{\mathbb T}}$ is a map $g:{{\mathbb T}}\to{{\mathbb T}}$ of the form $$ g(x,y)=\left(g_1(x),g_2(y)\right), $$ where $g_1$ and $g_2$ are in ${{\mathcal M}}(S^1)$, that is, $$ g_1(x)=\frac{ax+b}{cx+d},\quad g_2(y)=\frac{a'y+b'}{c'y+d'}, $$ where the matrices $$ A_1=\left(\begin{matrix} a&b\\ c&d\end{matrix}\right)\quad\text{and}\quad A_2=\left(\begin{matrix} a'&b'\\ c'&d'\end{matrix}\right) $$ are both in ${\rm PSL}(2,{{\mathbb R}})={\rm SL}(2,{{\mathbb R}})/\{\pm I\}$. Thus the set of M\"obius transformations ${{\mathcal M}}({{\mathbb T}})$ is ${{\mathcal M}}(S^1)\times{{\mathcal M}}(S^1)={\rm PSL}(2,{{\mathbb R}})\times{\rm PSL}(2,{{\mathbb R}})$. We wish to describe the action of ${{\mathcal M}}({{\mathbb T}})$ on ${{\mathbb T}}$. First, the action is transitive; this follows directly from the transitive action of ${{\mathcal M}}(S^1)$ on $S^1$. Secondly, the action is not doubly-transitive in the usual sense. If $$ {{\mathfrak c}}=(p_1,p_2)=((x_1,y_1),(x_2,y_2)), $$ is a pair of distinct points on the torus, then the cases: a) $x_1=x_2$ or $y_1=y_2$ and b) $x_1\neq x_2$ and $y_1\neq y_2$, are completely distinguished: a transformation $g\in{{\mathcal M}}({{\mathbb T}})$ maps couples of the form a) (resp. of the form b)) to couples of the same form; this prevents ${{\mathcal M}}({{\mathbb T}})$ from acting doubly-transitively on ${{\mathbb T}}$ in the usual sense.
In this sense, the doubly-transitive action of ${{\mathcal M}}({{\mathbb T}})$ is only partial. Thirdly, as far as a triply-transitive action of ${{\mathcal M}}({{\mathbb T}})$ is concerned, distinguished cases appear again. Indeed, consider an arbitrary triple $$ {{\mathfrak t}}=(p_1,p_2,p_3)=\left((x_1,y_1),(x_2,y_2),(x_3,y_3)\right), $$ of pairwise distinct points in ${{\mathbb T}}$; we then have the following distinguished cases: \begin{enumerate} \item [{a)}] Both $(x_1,x_2,x_3)$ and $(y_1,y_2,y_3)$ are triples of pairwise distinct points in $S^1$; \item [{b)}] $y_1=y_2=y_3$ and $(x_1,x_2,x_3)$ is a triple of pairwise distinct points of $S^1$; \item [{c)}] $x_1=x_2=x_3$ and $(y_1,y_2,y_3)$ is a triple of pairwise distinct points of $S^1$; \item [{d)}] $x_i=x_j=x$, $x_l\neq x$, $i,j=1,2,3$, $i\neq j$, $l\neq i,j$, and $(y_1,y_2,y_3)$ is a triple of pairwise distinct points in $S^1$; \item [{e)}] $y_i=y_j=y$, $y_l\neq y$, $i,j=1,2,3$, $i\neq j$, $l\neq i,j$, and $(x_1,x_2,x_3)$ is a triple of pairwise distinct points in $S^1$; \item [{f)}] Two $x_i$'s and two $y_j$'s are equal. \end{enumerate} No two of the above cases are M\"obius equivalent: a $g\in{{\mathcal M}}({{\mathbb T}})$ maps triples of each of the above categories to triples of the same category. However, ${{\mathcal M}}({{\mathbb T}})$ acts transitively on triples belonging to the same category: notice for instance that in case a) there exist $g_1$ and $g_2$ in ${{\mathcal M}}(S^1)$ such that \begin{equation*}\label{eq:g12} g_1(x_1)=g_2(y_1)=0,\quad g_1(x_2)=g_2(y_2)=\infty,\quad g_1(x_3)=g_2(y_3)=1. \end{equation*} We derive that $g=(g_1,g_2)\in{{\mathcal M}}({{\mathbb T}})$ satisfies \begin{equation*}\label{eq:g} g({{\mathfrak t}})=\left((0,0),(\infty,\infty),(1,1)\right). \end{equation*} In case c), where $x_1=x_2=x_3$, the triple $(y_1,y_2,y_3)$ consists of distinct points in $S^1$.
Therefore there exists a $g=(g_1,g_2)$ in ${{\mathcal M}}({{\mathbb T}})$ such that $g_1(x_i)=0$, $g_2(y_1)=0$, $g_2(y_2)=\infty$, $g_2(y_3)=1$, that is, $$ g({{\mathfrak t}})=\left((0,0),(0,\infty),(0,1)\right). $$ Analogously, for the case $y_1=y_2=y_3$ we find that there exists a $g\in{{\mathcal M}}({{\mathbb T}})$ such that $$ g({{\mathfrak t}})=\left((0,0),(\infty,0),(1,0)\right). $$ The remaining cases are treated in the same manner and we leave them to the reader. \subsubsection{Circles} For each $g=(g_1,g_2)\in{{\mathcal M}}({{\mathbb T}})$ we get an embedding of $S^1$ into ${{\mathbb T}}$ which is given by the parametrisation $$ \gamma(x)=(g_1(x),g_2(x)),\quad x\in S^1. $$ Such embeddings of $S^1$ into ${{\mathbb T}}$ will be called {\it M\"obius embeddings of $S^1$} or {\it Circles} on ${{\mathbb T}}$. Notice first that each Circle is the image of the {\it standard Circle} $R_0$ via an element of ${{\mathcal M}}({{\mathbb T}})$; here, $R_0$ is the curve $\gamma(x)=(x,x)$, $x\in S^1$. Secondly, the involution $\iota_0$ of ${{\mathbb T}}$ defined by $\iota_0(x,y)=(y,x)$ fixes $R_0$ point-wise. Hence to each Circle $R$ is associated an involution $\iota_R$ of ${{\mathbb T}}$ which fixes $R$ point-wise. Moreover, we have \begin{prop} Given a triple ${{\mathfrak t}}=(p_1,p_2,p_3)$ of the form a) above, there exists a Circle $R$ passing through the points of ${{\mathfrak t}}$ and thus the involution $\iota_R$ of ${{\mathbb T}}$ associated to $R$ fixes all points of ${{\mathfrak t}}$. \end{prop} \begin{proof} We normalise so that ${{\mathfrak t}}=(p_1,p_2,p_3)$ where $$ p_1=(0,0),\quad p_2=(\infty,\infty),\quad p_3=(1,1). $$ Then the Circle passing through the $p_i$ is $R_0$ and the involution is $\iota_0$.
\end{proof} Three distinct points of ${{\mathbb T}}$ might lie on various embeddings of $S^1$; for instance, triples of the form b) and c) lie on $\gamma_y(x)=(g_1(x),y)$ for fixed $y$ and $\gamma_x(y)=(x,g_2(y))$ for fixed $x$, respectively, where $g_1,g_2\in{{\mathcal M}}(S^1)$. But in any case, only triples of points of the form a) lie on Circles. \subsection{The configuration space of four points in ${{\mathbb T}}$}\label{sec:confT} According to the notation which was set up in the introduction, let ${{\mathcal C}}_4={{\mathcal C}}_4({{\mathbb T}})$ be the space of quadruples of pairwise distinct points in ${{\mathbb T}}$ and let also ${{\mathcal F}}_4={{\mathcal F}}_4({{\mathbb T}})$ be the {\it configuration space of quadruples of pairwise distinct points in} ${{\mathbb T}}$, that is, the quotient of ${{\mathcal C}}_4$ by the diagonal action of the M\"obius group ${{\mathcal M}}({{\mathbb T}})$ on ${{\mathcal C}}_4$. Let ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)\in{{\mathcal C}}_4$ be arbitrary; if $p_i=(x_i,y_i)$, $i=1,2,3,4$, we shall denote by ${{\mathfrak x}}$ the quadruple $(x_1,x_2,x_3,x_4)$ and by ${{\mathfrak y}}$ the quadruple $(y_1,y_2,y_3,y_4)$. The isotropy group of ${{\mathfrak p}}$ is \begin{eqnarray*} {{\mathcal M}}({{\mathbb T}})({{\mathfrak p}})&=&\{g\in{{\mathcal M}}({{\mathbb T}})\;|\;g({{\mathfrak p}})={{\mathfrak p}}\}\\ &=&\{(g_1,g_2)\in{{\mathcal M}}(S^1)\times{{\mathcal M}}(S^1)\;|\;g_1({{\mathfrak x}})={{\mathfrak x}}\;\text{and}\;g_2({{\mathfrak y}})={{\mathfrak y}}\}\\ &=&{{\mathcal M}}(S^1)({{\mathfrak x}})\times{{\mathcal M}}(S^1)({{\mathfrak y}}). \end{eqnarray*} Therefore the isotropy group ${{\mathcal M}}({{\mathbb T}})({{\mathfrak p}})$ is trivial if and only if both isotropy groups ${{\mathcal M}}(S^1)({{\mathfrak x}})$ and ${{\mathcal M}}(S^1)({{\mathfrak y}})$ are trivial as well.
If ${{\mathfrak p}}$ is such that $[{{\mathfrak p}}]$ is of maximal dimension (that is, both $[{{\mathfrak x}}]$ and $[{{\mathfrak y}}]$ are of maximal dimension), then we call ${{\mathfrak p}}$ {\it admissible}. Note that the subset of ${{\mathcal F}}_4({{\mathbb T}})$ comprising classes of admissible quadruples is of dimension 2. In the opposite case, we call ${{\mathfrak p}}$ {\it non-admissible}. We start with the non-admissible case first. Let ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$, ${{\mathfrak x}}=(x_1,x_2,x_3,x_4)$ and ${{\mathfrak y}}=(y_1,y_2,y_3,y_4)$ as above. We distinguish the following cases for ${{\mathcal M}}(S^1)({{\mathfrak x}})$: \begin{enumerate} \item [{${{\mathfrak x}}$-1)}] ${{\mathcal M}}(S^1)({{\mathfrak x}})$ is trivial and ${{\mathfrak x}}\in{{\mathcal C}}_4(S^1)$. We may then normalise so that $$ x_1=0,\quad x_2=\infty,\quad x_3={{\rm X}}({{\mathfrak x}}),\quad x_4=1. $$ \item [{${{\mathfrak x}}$-2)}] ${{\mathcal M}}(S^1)({{\mathfrak x}})$ is trivial and two points $x_i$ in ${{\mathfrak x}}$, $i\in\{1,2,3,4\}$, are equal. If for instance $x_1=x_2$, we may normalise so that $$ x_1=x_2=0,\quad x_3=1,\quad x_4=\infty; $$ we normalise similarly for the remaining cases. \item [{${{\mathfrak x}}$-3)}] ${{\mathcal M}}(S^1)({{\mathfrak x}})$ is isomorphic to ${{\mathcal M}}(S^1)(0,\infty)$: Three points $x_i$ in ${{\mathfrak x}}$, $i\in\{1,2,3,4\}$, are equal. If for instance $x_1=x_2=x_3$, we may normalise so that $$ x_1=x_2=x_3=0,\quad x_4=\infty; $$ we normalise similarly for the remaining cases. \item [{${{\mathfrak x}}$-4)}] ${{\mathcal M}}(S^1)({{\mathfrak x}})$ is isomorphic to ${{\mathcal M}}(S^1)(\infty)$: All points $x_i$ in ${{\mathfrak x}}$, $i=1,2,3,4$, are equal and we may normalise so that $x_i=\infty$. \end{enumerate} Notice that there are six sub-cases in ${{\mathfrak x}}$-2) and four sub-cases in ${{\mathfrak x}}$-3); in all, we have twelve distinguished cases.
Entirely analogous distinguished cases ${{\mathfrak y}}$-1), ${{\mathfrak y}}$-2), ${{\mathfrak y}}$-3) and ${{\mathfrak y}}$-4) appear for ${{\mathcal M}}(S^1)({{\mathfrak y}})$. Non-admissible quadruples ${{\mathfrak p}}$ are such that all combinations of cases for ${{\mathfrak x}}$ and ${{\mathfrak y}}$ may appear except when ${{\mathfrak x}}$ falls into the case ${{\mathfrak x}}$-1) and ${{\mathfrak y}}$ falls into the case ${{\mathfrak y}}$-1). Note that not all combinations are valid; for instance, there can be no ${{\mathfrak p}}$ such that ${{\mathfrak x}}$ is as in ${{\mathfrak x}}$-4) and ${{\mathfrak y}}$ is as in ${{\mathfrak y}}$-2) or ${{\mathfrak y}}$-3). Subsets of ${{\mathcal F}}_4$ corresponding to each valid combination are either of dimension 0 or 1. One-dimensional subsets appear when ${{\mathfrak x}}$ belongs to the ${{\mathfrak x}}$-1) case or ${{\mathfrak y}}$ belongs to the ${{\mathfrak y}}$-1) case. The corresponding subset is then isomorphic to the product of ${{\mathbb R}}\setminus\{0,1\}$ with a point. For clarity, we will treat two cases: First, suppose that the non-admissible ${{\mathfrak p}}$ is such that ${{\mathfrak x}}$ is as in ${{\mathfrak x}}$-1) and ${{\mathfrak y}}$ is as in ${{\mathfrak y}}$-2) with $y_1=y_2$. Then we may normalise so that $$ p_1=(0,0),\quad p_2=(\infty, 0),\quad p_3=({{\rm X}}({{\mathfrak x}}),1),\quad p_4=(1,\infty). $$ Therefore the subset of ${{\mathcal F}}_4({{\mathbb T}})$ comprising orbits of such ${{\mathfrak p}}$ is isomorphic to $({{\mathbb R}}\setminus\{0,1\})\times\{b_{12}\}$, where $b_{12}$ is the abstract point corresponding to quadruples ${{\mathfrak y}}$ such that $y_1=y_2$. Secondly, suppose that ${{\mathfrak p}}$ is such that ${{\mathfrak x}}$ is as in ${{\mathfrak x}}$-3) with $x_1=x_2=x_3$ and ${{\mathfrak y}}$ is as in ${{\mathfrak y}}$-2) with $y_3=y_4$. Then we may normalise so that $$ p_1=(0,0),\quad p_2=(0, \infty),\quad p_3=(0,1),\quad p_4=(\infty,1).
$$ The corresponding subset of the orbit space is then isomorphic to $\{a_{123}\}\times \{b_{34}\}$, where $a_{123}$ is the abstract point corresponding to quadruples ${{\mathfrak x}}$ such that $x_1=x_2=x_3$ and $b_{34}$ is the abstract point corresponding to quadruples ${{\mathfrak y}}$ such that $y_3=y_4$. Table \ref{table:1} shows all 68 distinguished subsets of ${{\mathcal F}}_4({{\mathbb T}})$ comprising non-admissible orbits of quadruples ${{\mathfrak p}}$ such that ${{\mathfrak x}}$ and ${{\mathfrak y}}$ belong to the above categories: \begin{table}[h!] \centering \begin{tabular}{||c c c c||} \hline ${{\mathfrak x}}$ & ${{\mathfrak y}}$ & Corresponding subset of ${{\mathcal F}}_4$ & Components \\ [0.5ex] \hline\hline ${{\mathfrak x}}$-1) & ${{\mathfrak y}}$-2) & $({{\mathbb R}}\setminus\{0,1\})\times\{b_{ij}\}$ & 6\\ ${{\mathfrak x}}$-1) & ${{\mathfrak y}}$-3) & $({{\mathbb R}}\setminus\{0,1\})\times\{b_{ijk}\}$ & 3\\ ${{\mathfrak x}}$-1) & ${{\mathfrak y}}$-4) & $({{\mathbb R}}\setminus\{0,1\})\times\{\infty\}$& 1\\ ${{\mathfrak x}}$-2) & ${{\mathfrak y}}$-1) & $\{a_{ij}\}\times({{\mathbb R}}\setminus\{0,1\})$ & 6\\ ${{\mathfrak x}}$-2) & ${{\mathfrak y}}$-2) &$\{a_{ij}\}\times\{b_{kl}\}$ & 24\\ ${{\mathfrak x}}$-2) & ${{\mathfrak y}}$-3) & $\{a_{ij}\}\times\{b_{klm}\}$&12\\ ${{\mathfrak x}}$-3) & ${{\mathfrak y}}$-1) & $\{a_{ijk}\}\times({{\mathbb R}}\setminus\{0,1\})$ & 3\\ ${{\mathfrak x}}$-3) & ${{\mathfrak y}}$-2) & $\{a_{ijk}\}\times\{b_{lm}\}$&12\\ ${{\mathfrak x}}$-4) & ${{\mathfrak y}}$-1) &$\{\infty\}\times({{\mathbb R}}\setminus\{0,1\})$ &1 \\ [1ex] \hline \end{tabular} \caption{Subspaces of non-admissible orbits} \label{table:1} \end{table} If now ${{\mathfrak p}}$ is an admissible quadruple, we have that both ${{\mathfrak x}}=(x_1,x_2,x_3,x_4)$ and ${{\mathfrak y}}=(y_1,y_2,y_3,y_4)$ are in ${{\mathcal C}}_4(S^1)$.
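Summing the last column of Table \ref{table:1} confirms the total count of subspaces of non-admissible orbits:
$$
6+3+1+6+24+12+3+12+1=68.
$$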
Let ${{\mathcal C}}^\sharp_4={{\mathcal C}}_4^\sharp({{\mathbb T}})$ be the subspace of ${{\mathcal C}}_4({{\mathbb T}})$ comprising admissible quadruples and denote by ${{\mathcal F}}_4^\sharp={{\mathcal F}}_4^\sharp({{\mathbb T}})$ the corresponding orbit space. The bijection $$ {{\mathfrak C}}:{{\mathcal C}}_4^\sharp({{\mathbb T}})\ni {{\mathfrak p}}\mapsto ({{\mathfrak x}},{{\mathfrak y}})\in{{\mathcal C}}_4(S^1)\times{{\mathcal C}}_4(S^1), $$ projects to the bijection $$ {{\mathfrak F}}:{{\mathcal F}}_4^\sharp({{\mathbb T}})\ni [{{\mathfrak p}}]\mapsto ([{{\mathfrak x}}],[{{\mathfrak y}}])\in{{\mathcal F}}_4(S^1)\times{{\mathcal F}}_4(S^1), $$ and therefore we obtain \begin{thm}\label{thm:vec} The configuration space ${{\mathcal F}}_4({{\mathbb T}})$ of quadruples of pairwise distinct points of the torus ${{\mathbb T}}$ is isomorphic to a set comprising 69 distinguished components: 20 one-dimensional, 48 points and a 2-dimensional subset corresponding to the subset ${{\mathcal F}}_4^\sharp({{\mathbb T}})$ of classes of admissible quadruples. This subset may be identified with $({{\mathbb R}}\setminus\{0,1\})^2$. The identification is given by assigning to each $[{{\mathfrak p}}]$ the vector-valued cross-ratio $ \vec{{\mathbb X}}({{\mathfrak p}})=({{\rm X}}({{\mathfrak x}}),{{\rm X}}({{\mathfrak y}})). $ \end{thm} The set ${{\mathcal F}}_4^\sharp=({{\mathbb R}}\setminus\{0,1\})^2$ is a subset of ${{\mathbb R}}^2$ comprising nine connected open components. We consider the space $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$; this is ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ factored by the diagonal action of $\overline{{{\mathcal M}}({{\mathbb T}})}$: the latter is the group generated by ${{\mathcal M}}({{\mathbb T}})$ and the involution $\iota_0:(x,y)\mapsto(y,x)$ of ${{\mathbb T}}$.
We thus have \begin{thm}\label{thm:band} The configuration space $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$ is identified with the disconnected subset ${{\mathcal Q}}$ of ${{\mathbb R}}^2$ which is induced by identifying points of $({{\mathbb R}}\setminus\{0,1\})^2$ which are symmetric with respect to the diagonal straight line $y=x$. Explicitly, ${{\mathcal Q}}$ has three open components: \begin{eqnarray*} && {{\mathcal Q}}_1^0=(-\infty,0)\times(0,1);\\ && {{\mathcal Q}}_2^0=(-\infty,0)\times(1,+\infty);\\ && {{\mathcal Q}}_3^0=(0,1)\times(1,+\infty), \end{eqnarray*} and three components with boundary: \begin{eqnarray*} && {{\mathcal Q}}_1^1=\{(x,y)\in{{\mathbb R}}^2\;|\;x<0,\;x\le y<0\};\\ && {{\mathcal Q}}_2^1=\{(x,y)\in{{\mathbb R}}^2\;|\;0<x<1,\;x\le y<1\};\\ && {{\mathcal Q}}_3^1=\{(x,y)\in{{\mathbb R}}^2\;|\;x>1,\;y\ge x\}. \end{eqnarray*} \end{thm} \subsection{Real Cross-Ratios and another parametrisation}\label{sec:realX} Using the vector-valued $\vec{{\mathbb X}}$ as in Theorem \ref{thm:vec} we define a real cross-ratio on ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ by $$ {{\mathbb X}}({{\mathfrak p}})={{\rm X}}({{\mathfrak x}})\cdot {{\rm X}}({{\mathfrak y}}). $$ One may show that all 24 cross-ratios corresponding to an admissible quadruple ${{\mathfrak p}}$ depend on the following two: $$ {{\mathbb X}}_1({{\mathfrak p}})=[x_1,x_2,x_3,x_4]\cdot[y_1,y_2,y_3,y_4],\quad {{\mathbb X}}_2({{\mathfrak p}})=[x_1,x_3,x_2,x_4]\cdot[y_1,y_3,y_2,y_4]. $$ We now consider the map ${{\mathcal G}}^\sharp:{{\mathcal F}}_4^\sharp\to({{\mathbb R}}_*)^2$, where $$ {{\mathcal G}}^\sharp([{{\mathfrak p}}])=\left({{\mathbb X}}_1({{\mathfrak p}}),{{\mathbb X}}_2({{\mathfrak p}})\right). $$ The map ${{\mathcal G}}^\sharp$ is well defined since ${{\mathbb X}}_1$ and ${{\mathbb X}}_2$ remain invariant under the action of ${{\mathcal M}}({{\mathbb T}})$.
Let \begin{equation}\label{eq:P} {{\mathcal P}}=\{(u,v)\in({{\mathbb R}}_*)^2\;|\;\Delta(u,v)=u^2+v^2-2u-2v+1-2uv\ge 0\}. \end{equation} The fundamental inequality for cross-ratios in the following proposition shows that ${{\mathcal G}}^\sharp$ takes its values in ${{\mathcal P}}$: \begin{prop}\label{prop:fundX} Let ${{\mathfrak p}}$ be an admissible quadruple of points in ${{\mathbb T}}$ and ${{\mathcal P}}$ as in (\ref{eq:P}). Then $$ ({{\mathbb X}}_1({{\mathfrak p}}),{{\mathbb X}}_2({{\mathfrak p}}))\in{{\mathcal P}}. $$ Moreover, $\Delta({{\mathbb X}}_1({{\mathfrak p}}),{{\mathbb X}}_2({{\mathfrak p}}))=0$ if and only if all points of ${{\mathfrak p}}$ lie on a Circle. \end{prop} \begin{proof} To prove the first statement, we may normalise so that ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$ where $$ p_1=(0,0),\quad p_2=(\infty,\infty),\quad p_3=(x,y),\quad p_4=(1,1). $$ We then calculate $$ {{\mathbb X}}_1=xy,\quad {{\mathbb X}}_2=(1-x)(1-y). $$ Therefore $ {{\mathbb X}}_2=1+{{\mathbb X}}_1-x-y $, whence $$ 1+{{\mathbb X}}_1-{{\mathbb X}}_2=x+y. $$ Squaring, we find $$ (1+{{\mathbb X}}_1-{{\mathbb X}}_2)^2=(x+y)^2\ge 4xy=4{{\mathbb X}}_1, $$ and the inequality follows. For the second statement, observe that equality holds if and only if $x=y$, i.e. all points lie on the standard Circle $R_0$ on ${{\mathbb T}}$. \end{proof} We proceed by showing that ${{\mathcal G}}^\sharp$ is surjective: \begin{prop} Let $(u,v)\in{{\mathcal P}}$. Then there exists a ${{\mathfrak p}} \in{{\mathcal C}}_4^\sharp({{\mathbb T}})$ such that $$ {{\mathbb X}}_1({{\mathfrak p}})= u\quad\text{and}\quad {{\mathbb X}}_2({{\mathfrak p}})= v. $$ \end{prop} \begin{proof} Since $\Delta=(1+u-v)^2-4u\ge 0$ there exist real $x,y$ such that $$ xy=u\quad\text{and}\quad x+y=1+u-v. $$ In fact, $$ x,y=\frac{1+u-v\pm\sqrt{\Delta}}{2}.
$$ Now one verifies that the admissible quadruple ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$ where $$ p_1=(0,0),\quad p_2=(\infty,\infty),\quad p_3=(x,y),\quad p_4=(1,1), $$ is the quadruple in question. The proof is complete. \end{proof} Let $\iota_0$ be the involution associated to the standard Circle $R_0$. Notice that in the proof the quadruple $\iota_0({{\mathfrak p}})$, that is, $$ \iota_0(p_1)=(0,0),\quad \iota_0(p_2)=(\infty,\infty),\quad \iota_0(p_3)=(y,x),\quad \iota_0(p_4)=(1,1), $$ also satisfies ${{\mathbb X}}_1(\iota_0({{\mathfrak p}}))= u$ and ${{\mathbb X}}_2(\iota_0({{\mathfrak p}}))=v$. \begin{prop} Suppose that ${{\mathfrak p}}$ and ${{\mathfrak p}}'$ are two quadruples in ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ such that $$ {{\mathbb X}}_i({{\mathfrak p}})={{\mathbb X}}_i({{\mathfrak p}}'),\quad i=1,2. $$ Then one of the following cases occurs: \begin{enumerate} \item There exists a $g\in{{\mathcal M}}({{\mathbb T}})$ such that $g({{\mathfrak p}})={{\mathfrak p}}'$; \item There exists a $g\in\overline{{{\mathcal M}}({{\mathbb T}})}$ such that $g({{\mathfrak p}})={{\mathfrak p}}'$. \end{enumerate} \end{prop} \begin{proof} We may normalise so that $$ p_1=(0,0),\quad p_2=(\infty,\infty),\quad p_3=(x,y),\quad p_4=(1,1), $$ and $$ p_1'=(0,0),\quad p_2'=(\infty,\infty),\quad p_3'=(x',y'),\quad p_4'=(1,1). $$ Then $ {{\mathbb X}}_i({{\mathfrak p}})={{\mathbb X}}_i({{\mathfrak p}}'),\quad i=1,2, $ imply $$ xy=x'y'\quad \text{and}\quad (1-x)(1-y)=(1-x')(1-y'). $$ It follows that $xy=x'y'$ and $x+y=x'+y'$. But then either $x=x'$ and $y=y'$ or $x=y'$ and $y=x'$. The proof is complete. \end{proof} The above discussion boils down to the following theorem: \begin{thm}\label{thm:F4} The ${{\mathcal M}}({{\mathbb T}})$-(resp. $\overline{{{\mathcal M}}({{\mathbb T}})}$-)configuration space ${{\mathcal F}}_4^\sharp={{\mathcal F}}_4^\sharp({{\mathbb T}})$ (resp.
$\overline{{{\mathcal F}}_4^\sharp}=\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})})$ of admissible quadruples of points in the torus ${{\mathbb T}}$ is in a 2-1 (resp. 1-1) surjection with the set ${{\mathcal P}}$ given in (\ref{eq:P}). The subset ${{\mathcal F}}_4^{\sharp,0}$ of both ${{\mathcal F}}_4^\sharp$ and $\overline{{{\mathcal F}}_4^\sharp}$ comprising equivalence classes of quadruples of points in the same Circle is in a bijection with the subset of ${{\mathcal P}}$ comprising $(u,v)$ such that $$ \Delta=u^2+v^2-2u-2v+1-2uv= 0. $$ Explicitly, ${{\mathcal P}}$ has three open components \begin{eqnarray*} && {{\mathcal P}}_1^0=(-\infty, 0)\times(0,+\infty);\\ && {{\mathcal P}}_2^0=(-\infty, 0)\times(-\infty,0);\\ && {{\mathcal P}}_3^0=(0,+\infty)\times(-\infty,0), \end{eqnarray*} and three components with one-dimensional boundaries: \begin{eqnarray*} && {{\mathcal P}}_1^1=\{(u,v)\in{{\mathcal P}}\;|\;0<u<1,\; 0<v<1,\;\Delta\ge 0\};\\ && {{\mathcal P}}_2^1=\{(u,v)\in{{\mathcal P}}\;|\;u>1,\;v>0,\;u^{1/2}-v^{1/2}\ge 1\};\\ && {{\mathcal P}}_3^1=\{(u,v)\in{{\mathcal P}}\;|\;u>0,\;v>1,\;v^{1/2}-u^{1/2}\ge 1\}. \end{eqnarray*} \end{thm} \begin{rem} The change of coordinates $$ u=xy,\quad v=(1-x)(1-y)=1-x-y+xy $$ maps the set ${{\mathcal Q}}$ of Theorem \ref{thm:band} onto the set ${{\mathcal P}}$ in a bijective manner. \end{rem} \section{M\"obius Structure} Towards defining a M\"obius structure from the real cross-ratio ${{\mathbb X}}$ on the torus ${{\mathbb T}}$, we first study the case where both cross-ratios ${{\mathbb X}}_i({{\mathfrak p}})$, $i=1,2$, of an admissible quadruple of points are positive (Section \ref{sec:pos}). We then define ${{\mathfrak M}}_{{\mathbb T}}$ and prove in Section \ref{sec:mob} that it is not Ptolemaean.
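To illustrate Theorem \ref{thm:F4}, take (as a sample choice) the admissible quadruple ${{\mathfrak p}}$ with
$$
p_1=(0,0),\quad p_2=(\infty,\infty),\quad p_3=(2,3),\quad p_4=(1,1).
$$
Then ${{\mathbb X}}_1({{\mathfrak p}})=2\cdot 3=6$ and ${{\mathbb X}}_2({{\mathfrak p}})=(1-2)(1-3)=2$, and indeed
$$
\Delta(6,2)=36+4-12-4+1-24=1=(2-3)^2\ge 0,
$$
so $(6,2)\in{{\mathcal P}}$. The quadruple obtained by replacing $p_3$ with $(3,2)$, namely $\iota_0({{\mathfrak p}})$, has the same pair of cross-ratios, exhibiting the 2-1 surjection.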
\subsection{When both cross-ratios are positive}\label{sec:pos} Let $$ {{\mathcal P}}^1={{\mathcal P}}_1^1\;\dot\cup\;{{\mathcal P}}_2^1\;\dot\cup\;{{\mathcal P}}_3^1. $$ This set corresponds exactly to quadruples ${{\mathfrak p}}$ with both ${{\mathbb X}}_1({{\mathfrak p}})$ and ${{\mathbb X}}_2({{\mathfrak p}})$ positive. Equivalently, ${{\rm X}}({{\mathfrak x}})$ and ${{\rm X}}({{\mathfrak y}})$ belong to the same connected component of ${{\mathbb R}}\setminus\{0,1\}$, which means that the points of ${{\mathfrak x}}$ and ${{\mathfrak y}}$ have exactly the same type of ordering on the circle: if $x_1,x_2$ separate $x_3,x_4$ then also $y_1,y_2$ separate $y_3,y_4$, and so forth. Let ${{\mathcal F}}^{\sharp,+}_4=({{\mathcal G}}^\sharp)^{-1}({{\mathcal P}}^1)$. \begin{prop}\label{prop:Ptol-eq-T} Let ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$ be such that $[{{\mathfrak p}}]\in{{\mathcal F}}^{\sharp,+}_4$ and let ${{\mathbb X}}_i={{\mathbb X}}_i({{\mathfrak p}})$, $i=1,2$. Then \begin{equation}\label{eq:X12} {{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2}\ge 1\;\;\text{and}\;\;|{{\mathbb X}}_1^{1/2}-{{\mathbb X}}_2^{1/2}|\ge 1,\quad\text{or}\quad{{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2}\le 1\;\;\text{and}\;\;|{{\mathbb X}}_1^{1/2}-{{\mathbb X}}_2^{1/2}|\le 1. \end{equation} Moreover, $[{{\mathfrak p}}]\in{{\mathcal F}}_4^{\sharp,0}$, that is, all points of ${{\mathfrak p}}$ lie on the same Circle, if and only if $$ {{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2}=1\quad \text{or}\quad |{{\mathbb X}}_1^{1/2}-{{\mathbb X}}_2^{1/2}|=1.
$$ Explicitly, \begin{enumerate} \item ${{\mathbb X}}_1^{1/2}-{{\mathbb X}}_2^{1/2}=1$ if $p_1$ and $p_3$ separate $p_2$ and $p_4$; \item ${{\mathbb X}}_2^{1/2}-{{\mathbb X}}_1^{1/2}=1$ if $p_1$ and $p_2$ separate $p_3$ and $p_4$; \item ${{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2}=1$ if $p_1$ and $p_4$ separate $p_2$ and $p_3$. \end{enumerate} \end{prop} \begin{proof} From the fundamental inequality for cross-ratios (Proposition \ref{prop:fundX}) we have \begin{eqnarray*} 0&\le&{{\mathbb X}}_1^2+{{\mathbb X}}_2^2-2{{\mathbb X}}_1-2{{\mathbb X}}_2+1-2{{\mathbb X}}_1{{\mathbb X}}_2\\ &=&({{\mathbb X}}_1+{{\mathbb X}}_2-1)^2-4{{\mathbb X}}_1{{\mathbb X}}_2\\ &=&({{\mathbb X}}_1+{{\mathbb X}}_2-2{{\mathbb X}}_1^{1/2}{{\mathbb X}}_2^{1/2}-1)({{\mathbb X}}_1+{{\mathbb X}}_2+2{{\mathbb X}}_1^{1/2}{{\mathbb X}}_2^{1/2}-1)\\ &=&\left(({{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2})^2-1\right)\left(({{\mathbb X}}_1^{1/2} -{{\mathbb X}}_2^{1/2})^2-1\right), \end{eqnarray*} and this proves (\ref{eq:X12}). The details of the proof of the last statement are left to the reader. \end{proof} \subsection{M\"obius structure}\label{sec:mob} From the real cross-ratio ${{\mathbb X}}:{{\mathcal C}}^\sharp_4({{\mathbb T}})\to{{\mathbb R}}$ we define a positive cross-ratio ${{\bf X}}:{{\mathcal C}}^\sharp_4({{\mathbb T}})\to{{\mathbb R}}_+$ by setting $$ {{\bf X}}({{\mathfrak p}})=|{{\mathbb X}}({{\mathfrak p}})|^{1/2}, $$ for each ${{\mathfrak p}}\in{{\mathcal C}}_4^\sharp$. The positive cross-ratio is invariant under $\tilde{{{\mathcal M}}({{\mathbb T}})}$. The M\"obius structure on ${{\mathbb T}}$ associated to ${{\bf X}}$ and restricted to ${{\mathcal C}}_4^\sharp$ is the map $$ {{\mathfrak M}}_{{\mathbb T}}:{{\mathcal C}}^\sharp_4({{\mathbb T}})\ni{{\mathfrak p}}\mapsto({{\bf X}}_1({{\mathfrak p}}),{{\bf X}}_2({{\mathfrak p}})).
$$ Recall that $(S,\rho)$ is a pseudo-semi-metric space if $\rho:S\times S\to{{\mathbb R}}_+$ satisfies a) $x=y$ implies $\rho(x,y)=0$ and b) $\rho(x,y)=\rho(y,x)$, for all $x,y\in S$. The M\"obius structure ${{\mathfrak M}}_{{\mathbb T}}$ is associated to the pseudo-semi-metric $\rho:{{\mathbb T}}\times{{\mathbb T}}\to{{\mathbb R}}_+$ given by $$ \rho\left((x_1,y_1),(x_2,y_2)\right)=|x_1-x_2|^{1/2}\cdot|y_1-y_2|^{1/2}, $$ for each $(x_1,y_1)$ and $(x_2,y_2)$ in ${{\mathbb T}}$. In Section \ref{sec:cons} we explain why we cannot have a natural positive cross-ratio, compatible with the group action, that is associated to any metric on ${{\mathbb T}}$. The following corollary concerning ${{\mathfrak M}}_{{\mathbb T}}$ follows directly from Proposition \ref{prop:Ptol-eq-T}. \begin{cor} The M\"obius structure ${{\mathfrak M}}_{{\mathbb T}}$ is not Ptolemaean. However, Circles are Ptolemaean circles for ${{\mathfrak M}}_{{\mathbb T}}$. \end{cor} \section{Application to Boundaries of ${\rm SO_0}(2,2)/{\rm SO}(2)\times{\rm SO}(2)$ and $AdS^3$} \label{sec:cons} In this section we show how the torus appears as the Furstenberg boundary of the symmetric space ${\rm SO_0}(2,2)/{\rm SO}(2)\times{\rm SO}(2)$ as well as the ideal boundary of the 3-dimensional anti-de Sitter space $AdS^3$. We refer to \cite{BJ} for compactifications of symmetric spaces, to \cite{B} for recent developments on M\"obius structures in Furstenberg boundaries, and finally to \cite{D} for a comprehensive treatment of anti-de Sitter space and its relations to Hyperbolic Geometry. Let ${{\mathbb R}}^{2,2}={{\mathbb R}}^4\setminus\{0\}$ be the real vector space of dimension 4 equipped with a non-degenerate, indefinite pseudo-hermitian form $\langle\cdot,\cdot\rangle$ of signature $(2,2)$. Such a form is given by a $4\times 4$ matrix with 2 positive and 2 negative eigenvalues.
Let ${{\bf x}}=\left[\begin{matrix} x_1 & x_2 &x_3 &x_4\end{matrix}\right]^T$ and ${{\bf y}}=\left[\begin{matrix} y_1 & y_2 &y_3 &y_4\end{matrix}\right]^T$ be column vectors. The pseudo-hermitian form is then defined by $$ \langle{{\bf x}},{{\bf y}}\rangle=x_1y_4-x_2y_3-x_3y_2+x_4y_1 $$ and it is given by the matrix $$ J=\left(\begin{matrix} 0&0&0&1\\ 0&0&-1&0\\ 0&-1&0&0\\ 1&0&0&0 \end{matrix}\right). $$ The isometry group of this pseudo-hermitian form is $G={\rm SO_0}(2,2)$. There is a natural identification of $G$ with ${\rm SL}(2,{{\mathbb R}})\times{\rm SL}(2,{{\mathbb R}})$, see \cite{GPP}; if $$ A_1=\left(\begin{matrix} a_1&b_1\\ c_1&d_1\end{matrix}\right)\quad\text{and}\quad A_2=\left(\begin{matrix} a_2&b_2\\ c_2&d_2\end{matrix}\right)\in{\rm SL}(2,{{\mathbb R}}), $$ then the pair $(A_1,A_2)$ is identified to $$ A=\left(\begin{matrix} a_1A_2^{-1}&b_1A_2^{-1}\\ c_1A_2^{-1}&d_1A_2^{-1}\end{matrix}\right)\in{\rm SO}_0(2,2). $$ The group $K={\rm SO}(2)\times{\rm SO}(2)$ is a maximal compact subgroup of $G$ and $X=G/K$ is a symmetric space of rank 2. The symmetric space $X$ is also realised as ${{\bf H}}_{{\mathbb C}}^1\times{{\bf H}}_{{\mathbb C}}^1$, where ${{\bf H}}_{{\mathbb C}}^1=D$ is the Poincar\'e hyperbolic unit disc. The torus ${{\mathbb T}}=S^1\times S^1$ is the {\it maximal Furstenberg boundary} $\mathbb{F}(X)$ of the symmetric space $X$. Recall that if $G$ is a connected semi-simple Lie group and $X=G/K$ is the associated symmetric space, then the maximal Furstenberg boundary $\mathbb{F}(X)$ may be thought of as $G/P_0$, where $P_0$ is a minimal parabolic subgroup of $G$, see for instance \cite{BJ}. If $X$ is of rank $\ge 2$ then $\mathbb{F}(X)$ cannot be the whole boundary of any compactification of $X$. In particular, if $X=D\times D$, then $\mathbb{F}(X)={\rm SO}_0(2,2)/P_0$, $P_0=AN\times AN$, where $AN$ is the $AN$ group in the Iwasawa $KAN$ decomposition of ${\rm SL}(2,{{\mathbb R}})$.
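The identification above can be sanity-checked numerically: the block matrix $A$ built from a pair $(A_1,A_2)$ should satisfy $A^TJA=J$. A minimal sketch (the helper names and the random sampling are ours):

```python
import random

# J is the matrix of the form <x,y> = x1 y4 - x2 y3 - x3 y2 + x4 y1
J = [[0, 0, 0, 1], [0, 0, -1, 0], [0, -1, 0, 0], [1, 0, 0, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def random_sl2():
    # random real 2x2 matrix rescaled to determinant 1
    while True:
        a, b, c, d = (random.uniform(-2, 2) for _ in range(4))
        det = a * d - b * c
        if det > 0.1:
            s = det ** 0.5
            return [[a / s, b / s], [c / s, d / s]]

def embed(A1, A2):
    # block matrix (a1 A2^{-1}, b1 A2^{-1}; c1 A2^{-1}, d1 A2^{-1})
    (a2, b2), (c2, d2) = A2
    A2inv = [[d2, -b2], [-c2, a2]]  # inverse of a determinant-1 matrix
    M = [[0.0] * 4 for _ in range(4)]
    for bi in range(2):
        for bj in range(2):
            for i in range(2):
                for j in range(2):
                    M[2 * bi + i][2 * bj + j] = A1[bi][bj] * A2inv[i][j]
    return M

random.seed(1)
for _ in range(20):
    A = embed(random_sl2(), random_sl2())
    AtJA = matmul(transpose(A), matmul(J, A))
    assert all(abs(AtJA[i][j] - J[i][j]) < 1e-8 for i in range(4) for j in range(4))
```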
In this manner $\mathbb{F}(X)$ is just the corner of the boundary $(\overline{D}\times S^1)\cup(S^1\times\overline{D})$ of the compactification $\overline{D}\times\overline{D}$ of $X$. A rather neat way to represent ${{\mathbb T}}={{\mathbb F}}(X)$ is via its isomorphism to the ideal boundary of anti-de Sitter space, which is obtained as follows: For the pseudo-hermitian product there are the subsets of positive (space-like) vectors $V_+$, of null (light-like) vectors $V_0$ and of negative (time-like) vectors $V_-$: \begin{eqnarray*} && V_+=\left\{{{\bf x}}\in{{\mathbb R}}^{2,2}\;|\;\langle{{\bf x}},{{\bf x}}\rangle> 0\right\},\\ && V_0=\left\{{{\bf x}}\in{{\mathbb R}}^{2,2}\;|\;\langle{{\bf x}},{{\bf x}}\rangle=0\right\},\\ && V_-=\left\{{{\bf x}}\in{{\mathbb R}}^{2,2}\;|\;\langle{{\bf x}},{{\bf x}}\rangle<0\right\}. \end{eqnarray*} If $\lambda$ is a non-zero real, then $\langle\lambda{{\bf x}},\lambda{{\bf x}}\rangle=\lambda^2\langle{{\bf x}},{{\bf x}}\rangle$. Therefore $\lambda{{\bf x}}$ is positive, null or negative if and only if ${{\bf x}}$ is positive, null or negative, respectively. Let $P$ be the projection map from ${{\mathbb R}}^{2,2}$ to projective space $P{{\mathbb R}}^3$. The {\it projective model of anti-de Sitter space} $AdS^3$ is now defined as the collection of negative vectors $PV_-$ in $P{{\mathbb R}}^3$ and its {\it ideal boundary} $\partial_\infty AdS^3$ is defined as the collection $PV_0$ of null vectors. Anti-de Sitter space $AdS^3$ carries a natural Lorentz structure; the isometry group of this structure is the projectivisation of the set ${\rm SO}(2,2)$ of unitary matrices for the pseudo-hermitian form with matrix $J$, that is, ${\rm PSO}(2,2)={\rm SO}(2,2)/\{\pm I\}$; here $I$ is the identity $4\times 4$ matrix. From the discussion above we have that ${\rm PSO_0}(2,2)$ is identified to ${\rm PSL}(2,{{\mathbb R}})^2={{\mathcal M}}({{\mathbb T}})$.
Now the identification of $\partial_\infty AdS^3$ with the torus ${{\mathbb T}}={{\mathbb F}}(X)$ is given in terms of the Segre embedding ${{\mathcal S}}:{{\mathbb R}} P^1\times{{\mathbb R}} P^1\to{{\mathbb R}} P^3$. Recall that in homogeneous coordinates the Segre embedding $ w={{\mathcal S}}(x,y) $ is defined by $$ ({{\bf x}},{{\bf y}})=\left(\left[\begin{matrix} x_1\\ x_2 \end{matrix}\right]\;,\; \left[\begin{matrix} y_1\\ y_2 \end{matrix}\right]\right)\mapsto {{\bf w}}=\left[\begin{matrix} x_1y_1\\ x_1y_2\\ x_2y_1\\ x_2y_2 \end{matrix}\right]. $$ Notice that ${{\bf w}}$ is a null vector. The action of the isometry group ${\rm PSO_0}(2,2)={\rm PSL}(2,{{\mathbb R}})^2$ of $AdS^3$ is extended naturally on the ideal boundary $\partial_\infty AdS^3$ which in this manner is identified to ${{\mathbb T}}$. We stress at this point that in contrast with the case of hyperbolic spaces, there are distinct points in $\partial_\infty AdS^3$ which may be orthogonal. To see this, let $p={{\mathcal S}}(x,y)$ and $p'={{\mathcal S}}(x',y')$ be any distinct points; if $$ {{\bf x}}=\left[\begin{matrix} x_1\\x_2\end{matrix}\right],\quad {{\bf y}}=\left[\begin{matrix} y_1\\y_2\end{matrix}\right],\quad {{\bf x}}'=\left[\begin{matrix} x_1'\\x_2'\end{matrix}\right],\quad {{\bf y}}'=\left[\begin{matrix} y'_1\\y'_2\end{matrix}\right], $$ then $$ {{\bf p}}=\left[\begin{matrix}x_1y_1\\ x_1y_2\\ x_2y_1\\ x_2y_2 \end{matrix}\right],\quad{{\bf p}}'=\left[\begin{matrix} x_1'y_1'\\ x_1'y_2'\\ x_2'y_1'\\ x_2'y_2'\end{matrix}\right]. $$ One then calculates \begin{equation*} \langle{{\bf p}},{{\bf p}}'\rangle=(x_1x_2'-x_2x_1')(y_1y_2'-y_2y_1'). \end{equation*} Therefore $\langle{{\bf p}},{{\bf p}}'\rangle=0$ if and only if $x=x'$ or $y=y'$. For fixed $p\in\partial_\infty AdS^3$, $p={{\mathcal S}}(x,y)$, the locus $$ p^c=\{{{\mathcal S}}(x,z)\;|\;z\in S^1\}\cup \{{{\mathcal S}}(z,y)\;|\;z\in S^1\}\equiv (\{x\}\times S^1)\cup(S^1\times\{y\}) $$ comprises points of the ideal boundary which are orthogonal to $p$.
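Both the nullity of Segre images and the factorization of $\langle{{\bf p}},{{\bf p}}'\rangle$ are straightforward polynomial identities; a small numerical check (the helper names are ours):

```python
import random

def form(p, q):
    # <x,y> = x1 y4 - x2 y3 - x3 y2 + x4 y1
    return p[0]*q[3] - p[1]*q[2] - p[2]*q[1] + p[3]*q[0]

def segre(x, y):
    # Segre embedding in homogeneous coordinates
    (x1, x2), (y1, y2) = x, y
    return [x1*y1, x1*y2, x2*y1, x2*y2]

random.seed(2)
for _ in range(500):
    x, y, xp, yp = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(4)]
    p, pp = segre(x, y), segre(xp, yp)
    assert abs(form(p, p)) < 1e-9            # Segre images are null vectors
    lhs = form(p, pp)
    rhs = (x[0]*xp[1] - x[1]*xp[0]) * (y[0]*yp[1] - y[1]*yp[0])
    assert abs(lhs - rhs) < 1e-9             # <p, p'> factors as claimed
```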
We call $p^c$ the {\it cross-completion} of $p$; transferring the picture to the torus, the points orthogonal to a point $p=(x,y)$ of ${{\mathbb T}}$ are all points of $(\{x\}\times S^1)\cup(S^1\times\{y\})$. The ideal boundary $\partial_\infty AdS^3$ may thus be thought of as the union of the cross-completion of $\infty={{\mathcal S}}(\infty,\infty)$ and the remaining region of the torus, which we denote by $N$. The set $N$ comprises points $p={{\mathcal S}}(x,y)$, $x,y\neq \infty$, with standard lifts $$ {{\bf p}}=\left[\begin{matrix}xy&x&y&1\end{matrix}\right]^T, $$ and can be viewed as the saddle surface $x_1=x_2x_3$ embedded in ${{\mathbb R}}^3$. But actually, $N$ admits a group structure: First, if $p={{\mathcal S}}(x,y)\in N$, we call $(x,y)$ the $N$-coordinates of $p$. To each such $p$ we assign the matrix $$ T(x,y)=\left(\begin{matrix} 1&y&x&xy\\ 0&1&0&x\\ 0&0&1&y\\ 0&0&0&1 \end{matrix}\right), $$ whose projectivisation gives an element of ${\rm PSO}_0(2,2)$ in the unipotent isotropy group of $\infty$. Note that if $G$ is the isomorphism ${\rm SL}(2,{{\mathbb R}})^2\to{\rm SO}_0(2,2)$, and $KAN$ is the Iwasawa decomposition of ${\rm SL}(2,{{\mathbb R}})$, then $T(x,y)$ lies in the image $G(N,N)$. It is straightforward to verify that $T(x,y)$ leaves the cross-completion $\infty^c$ of infinity invariant and maps $o={{\mathcal S}}(0,0)$ to $p$. Also, for $p=(x,y)$ and for $p'=(x',y')$ we have $$ T(x,y)T(x',y')=T(x+x',y+y'),\quad \left(T(x,y)\right)^{-1}=T(-x,-y). $$ Thus $T$ is a group homomorphism from ${{\mathbb R}}^2$ to ${\rm PSO_0}(2,2)$, with group law $$ (x,y)\star(x',y')=(x+x',y+y'). $$ In other words, $N$ admits the structure of the additive group $({{\mathbb R}}^2,+)$. The natural Euclidean metric $e:N\times N\to{{\mathbb R}}_+$, where $$ e\left((x,y),(x',y')\right)=((x-x')^{2}+(y-y')^2)^{1/2}, $$ is invariant by the left action of $N$, but its similarity group is not ${\rm PSO}_0(2,2)$.
To see this consider $$ D_\delta=\left(\begin{matrix} \delta&0\\ 0&1/\delta\end{matrix}\right)\quad \text{and}\quad D_{\delta'}=\left(\begin{matrix} \delta'&0\\ 0&1/\delta'\end{matrix}\right),\quad\delta,\delta'>0. $$ Then $G(D_\delta,D_{\delta'})(\xi_1,\xi_2)=(\delta^2\xi_1,(1/\delta')^2\xi_2)$, and $A=G(D_\delta,D_{\delta'})$ does not scale $e$ unless $\delta\delta'=1$, which is not always the case. Since all metrics in ${{\mathbb R}}^2$ are equivalent to $e$, there is no natural metric in $N$ such that its similarity group equals ${\rm PSO}_0(2,2)$. In contrast, we define a function $a:N\to{{\mathbb R}}$, $$ a(x,y)=xy $$ and a gauge $$ \|(x,y)\|=|a(x,y)|^{1/2}=|x|^{1/2}|y|^{1/2}. $$ Essentially, we are mimicking here Kor\'anyi and Reimann and their construction for the Heisenberg group case, see \cite{KR}. The pseudo-semi-metric $\rho:N\times N\to{{\mathbb R}}_+$ is then defined by $$ \rho\left((x,y),(x',y')\right)=\|(x',y')^{-1}\star(x,y)\|=|x-x'|^{1/2}|y-y'|^{1/2}. $$ Let $\overline{N}=N\cup\{\infty\}$. The set ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ of admissible quadruples is actually the set ${{\mathcal C}}_4^\sharp(\overline{N})$ of quadruples of points of $\overline{N}$ such that none of these points belongs to the cross-completion of any other point in the quadruple. The configuration space ${{\mathcal F}}_4^\sharp({{\mathbb T}})$ is thus identified to ${{\mathcal C}}_4^\sharp(\overline{N})$ cut by the diagonal action of ${\rm PSO}_0(2,2)$, and the configuration space $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$ is ${{\mathcal C}}_4^\sharp(\overline{N})$ cut by the diagonal action of $\overline{{\rm PSO}_0(2,2)}$, which comprises elements of ${\rm PSO}_0(2,2)$ followed by the involution $\iota_0:(x,y)\mapsto(y,x)$ of $\overline{N}$.
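The group law and inverse formula for the unipotent matrices $T(x,y)$ introduced above can be verified directly; a minimal sketch (the helper names are ours):

```python
import random

def T(x, y):
    # unipotent isotropy matrices of infinity
    return [[1, y, x, x*y],
            [0, 1, 0, x],
            [0, 0, 1, y],
            [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

random.seed(3)
for _ in range(100):
    x, y, xp, yp = (random.uniform(-2, 2) for _ in range(4))
    P = matmul(T(x, y), T(xp, yp))
    Q = T(x + xp, y + yp)                    # group law T(x,y)T(x',y') = T(x+x', y+y')
    assert all(abs(P[i][j] - Q[i][j]) < 1e-9 for i in range(4) for j in range(4))
    R = matmul(T(x, y), T(-x, -y))           # inverse: T(x,y)^{-1} = T(-x,-y)
    assert all(abs(R[i][j] - (i == j)) < 1e-9 for i in range(4) for j in range(4))
```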
The real cross-ratio ${{\mathbb X}}$ is thus defined in ${{\mathcal C}}_4^\sharp(\overline{N})$ by $$ {{\mathbb X}}({{\mathfrak p}})=\frac{a(p_4\star p_2^{-1})\cdot a(p_3\star p_1^{-1})}{a(p_4\star p_1^{-1})\cdot a(p_3\star p_2^{-1})}, $$ for each ${{\mathfrak p}}\in{{\mathcal C}}_4^\sharp(\overline{N})$. The positive cross-ratio is $$ {{\bf X}}({{\mathfrak p}})=\frac{\rho(p_4,p_2)\cdot\rho(p_3,p_1)}{\rho(p_4,p_1)\cdot\rho(p_3,p_2)}. $$ The results of the previous section now apply immediately. \end{document}
\begin{document} \title{Realization of a scalable Shor algorithm} \date{\today} \author{T. Monz} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, A-6020 Innsbruck, Austria} \author{D. Nigg} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, A-6020 Innsbruck, Austria} \author{E. A. Martinez} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, A-6020 Innsbruck, Austria} \author{M. F. Brandl} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, A-6020 Innsbruck, Austria} \author{P. Schindler} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, A-6020 Innsbruck, Austria} \author{R. Rines} \affiliation{Center for Ultracold Atoms, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA, USA} \author{S. X. Wang} \affiliation{Center for Ultracold Atoms, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA, USA} \author{I. L. Chuang} \affiliation{Center for Ultracold Atoms, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA, USA} \author{R.~Blatt} \affiliation{Institut f\"ur Experimentalphysik, Universit\"at Innsbruck, Technikerstr. 25, A-6020 Innsbruck, Austria} \affiliation{Institut f\"ur Quantenoptik und Quanteninformation, \"Osterreichische Akademie der Wissenschaften, Otto-Hittmair-Platz 1, A-6020 Innsbruck, Austria} \pacs{03.67.Lx, 37.10.Ty, 32.80.Qk} \begin{abstract} Quantum algorithms are able to outperform their classical counterparts. This was long recognized by the visionary Richard Feynman, who pointed out in the 1980s that quantum mechanical problems were better solved with quantum machines. It was only in 1994 that Peter Shor came up with an algorithm that is able to calculate the prime factors of a large number vastly more efficiently than is known to be possible with a classical computer \cite{Shor}.
This paradigmatic algorithm stimulated the flourishing research in quantum information processing and the quest for an actual implementation of a quantum computer. Over the last fifteen years, using skillful optimizations, several instances of a Shor algorithm have been implemented on various platforms and clearly proved the feasibility of quantum factoring \cite{citeulike:5725227, citeulike:11534452, citeulike:11389323, citeulike:3430059, citeulike:2365123, shor_nmr_chuang}. For general scalability, though, a different approach has to be pursued \cite{fake_shor_smolin}. Here, we report the realization of a fully scalable Shor algorithm as proposed by Kitaev \cite{iter_shor_kitaev}. For this, we demonstrate factoring the number fifteen by effectively employing and controlling seven qubits and four ``cache-qubits'', together with the implementation of generalized arithmetic operations, known as modular multipliers. The scalable algorithm has been realized with an ion-trap quantum computer exhibiting success probabilities in excess of 90\%. \end{abstract} \maketitle Shor's algorithm for factoring integers~\cite{Shor} is one of the examples where a quantum computer (QC) outperforms the most efficient known classical algorithms. Experimentally, its implementation is highly demanding as it requires both a sufficiently large quantum register and high-fidelity control. Clearly, such challenging requirements raise the question whether optimizations and experimental shortcuts are possible. While optimizations, especially system-specific or architectural ones, certainly are possible, for a demonstration of Shor's algorithm to be \emph{scalable}, special care has to be taken not to oversimplify the implementation - for instance by employing knowledge about the solution prior to the actual experimental implementation - as pointed out in Ref.~\citenum{fake_shor_smolin}.
In order to elucidate the general task at hand, we first explain and exemplify Shor's algorithm for factoring the number 15 in a (quantum) circuit model. Subsequently, we show how this circuit model is translated for and implemented with an ion-trap quantum computer. How does Shor's algorithm work? Here is a classical recipe to find the factors of a large number. As an example, assume the number we want to factor is $N=15$. Then pick a random number $a \in [2,N-1]$ (which we call the \emph{base} in the following), say $a=7$. Check whether the greatest common divisor gcd$(a,N) = 1$; otherwise a factor is already determined, which for $N=15$ happens for $a\in\{3,5,6,9,10,12\}$. Next, calculate the modular exponentiations $a^x\mod N$ for $x = 0,1,2,\ldots$ and find their period $r$: the first $x>0$ such that $a^x\mod N=1$. Given the period $r$, finding the factors requires calculating the greatest common divisors of $a^{r/2}\pm 1$ and $N$, which is classically efficiently possible - for instance using Euclid's algorithm. For our example ($N=15, a=7$) the modular exponentiation yields $\{1, 7, 4, 13, 1, \ldots\}$, which has period 4. The greatest common divisors of $a^{r/2}\pm 1=7^{4/2}\pm 1=\{48,50\}$ and $N=15$ are $\{3,5\}$, the non-trivial factors of $N$. For the chosen example $N=15$, the cases $a=\{4,11,14\}$ have periodicity $r=2$ and would only require a single multiplication step ($a^2 \mod N = 1$), which is considered an ``easy'' case~\cite{fake_shor_smolin}. Note that the periodicity for a chosen $a$ cannot be predicted. How can this recipe be implemented in a QC? A QC also has to calculate $a^x \mod N$ in a computational register for $x=0,1,2,\ldots$ and then extract $r$. However, using the quantum Fourier-transform (QFT), this can be done with high probability in a single step (compared to $r$ steps classically). Here, $x$ is stored in a quantum register consisting of $k$ qubits, the period-register, which is in a superposition of 0 to $2^k-1$.
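The classical recipe above can be sketched in a few lines; this is an illustration for tiny $N$ only (the function names are ours, and the brute-force period search is of course exponential in the bit length of $N$):

```python
from math import gcd

def find_period(a, N):
    # smallest r > 0 with a^r = 1 (mod N); brute force, fine for tiny N
    r, val = 1, a % N
    while val != 1:
        val = (val * a) % N
        r += 1
    return r

def shor_classical(N, a):
    assert gcd(a, N) == 1, "gcd(a, N) > 1 already yields a factor"
    r = find_period(a, N)
    if r % 2 == 1:
        return None  # odd period: pick another base
    f1 = gcd(a**(r // 2) - 1, N)
    f2 = gcd(a**(r // 2) + 1, N)
    return {f1, f2}

# N = 15, a = 7: period 4, factors {3, 5}
assert find_period(7, 15) == 4
assert shor_classical(15, 7) == {3, 5}
```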
The superposition in the period-register on its own does not provide a speedup compared to a classical computer. Measuring the period-register would collapse the state and only return a single value, say $x_1$, and the corresponding answer to $a^{x_1} \mod N$ in the computational register. However, if the QFT is applied to the period-register, the period of $a^x \mod N$ can be extracted from $\mathcal{O}(1)$ measurements. \begin{figure*} \caption{Circuit diagram of Shor's algorithm for factoring 15 based on Kitaev's approach for: a) a generic base $a$ and the specific circuit representations for the modular multipliers; b) the actual implementation for factoring 15 to base 11, optimised for the single input state it is subject to; c) Kitaev's approach to Shor's algorithm for the bases $\{2,7,8,13\}$.\label{fig.shor_shem}} \end{figure*} What are the requirements and challenges to implement Shor's algorithm? First, we focus on the period-register, to subsequently address modular exponentiation in the computational register. Factoring $N$, an $n=\lceil \log_2(N) \rceil$-bit number, requires a minimum of $n$ qubits in the computational register (to store the results of $a^x \mod N$) and generally about $2n$ qubits in the period-register~\cite{mike_n_ike}. Thus even a seemingly simple example such as factoring 15 (an $n=4$-bit number) would require $3n=12$ qubits when implemented in this straightforward way. These qubits then would have to be manipulated with high-fidelity gates. Given the current state-of-the-art control over quantum systems~\cite{summaryarticle}, such an approach likely yields an unsatisfactory performance. However, a full quantum implementation of this part of the algorithm is not really necessary. In Ref.~\citenum{iter_shor_kitaev} Kitaev notes that, if only the classical information of the QFT (such as the period $r$) is of interest, $2n$ qubits subject to a QFT can be replaced by a single qubit.
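The effect of the QFT on the period register can be illustrated with a small state-vector computation (our own sketch, not the experimental sequence): for $N=15$, $a=7$ and a $k=3$ qubit period register, the outcome distribution peaks at multiples of $2^k/r$.

```python
import cmath

k = 3            # period-register qubits
M = 2 ** k       # register size
N, a = 15, 7

# entangled state sum_x |x>|a^x mod N>, encoded via the list of f(x) values
f = [pow(a, x, N) for x in range(M)]

def prob(c):
    # probability of measuring c on the period register after the QFT
    total = 0.0
    for y in set(f):
        amp = sum(cmath.exp(2j * cmath.pi * x * c / M)
                  for x in range(M) if f[x] == y) / M
        total += abs(amp) ** 2
    return total

probs = [prob(c) for c in range(M)]
# peaks at multiples of M/r = 8/4 = 2: c in {0, 2, 4, 6}, each with probability 1/4
assert all(abs(probs[c] - 0.25) < 1e-9 for c in (0, 2, 4, 6))
assert all(probs[c] < 1e-9 for c in (1, 3, 5, 7))
```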
This approach, however, requires qubit-recycling (specifically: in-sequence single-qubit readout and state reinitialization) paired with feed-forward to compensate for the reduced system size. In the following, Kitaev's QFT will be referred to as KQFT$^{(M)}$. It replaces a QFT acting on $M$ qubits with a semiclassical QFT acting repeatedly on a single qubit. Similar applications of Kitaev's approach to a semiclassical QFT in quantum algorithms have been investigated in Refs.~\cite{shor_iter_griffiths,shor_iter_plenio,shor_iter_ekert}. For the implementation of Shor's algorithm, Kitaev's approach provides a reduction from the previous $n$ computational qubits and $2n$ QFT qubits (in total $3 n$ qubits) to only $n$ computational qubits and 1 KQFT$^{(2n)}$ qubit (in total $n+1$ qubits). \begin{figure*} \caption{Experimentally obtained truth table of the controlled multiply-by-2-modulo-15 gate: a) with the control qubit in state 0, the truth table corresponds to the identity operation; b) when the control qubit triggers the multiplication, the truth table illustrates the multiplication of the input state by 2 modulo 15.} \label{fig.mod_mult_2} \end{figure*} A notably more challenging aspect than the QFT, and the second key ingredient of Shor's algorithm, is the modular exponentiation, which admits these general simplifications: (i) Considering Kitaev's approach (see Fig.~\ref{fig.shor_shem}), the input state $\ket{1}$ (in decimal representation) is subject to a conditional multiplication based on the most significant bit $k$ of the period register. At most there will be two results after this first step. It follows that, for the very first step, it is sufficient to implement an optimized operation that conditionally maps $\ket{1}\rightarrow \ket{a^{2^k} \mod N}$.
Considering the importance of a high-fidelity multiplication (with its performance being fed-forward to all subsequent qubits), this efficient simplification improves the overall performance of experimental realizations. (ii) Subsequent multipliers can similarly be replaced with maps by considering only possible outputs of the previous multiplications. However, using such maps will become exponentially more challenging, as the number of input and output states to be considered grows exponentially with the number of steps: after $n$ steps, $2^n > N$ possible outcomes need to be considered - a numerical task as challenging as factoring $N$ by classical means. Thus, controlled full modular multipliers need to be implemented. Fig.~\ref{fig.mod_mult_2} shows the experimentally obtained truth table for the modular multiplier $(2\mod 15)$ (see also the supplementary material for modular multipliers with bases $\{7,8,11,13\}$). These quantum circuits can be efficiently derived from classical procedures using a variety of standard techniques for reversible quantum arithmetic and local logic optimization~\cite{efficient_multipliers_1,efficient_multipliers_2}. (iii) The very last multiplier allows one more simplification: Considering that the actual results of the modular exponentiation are not required for Shor's algorithm (as only the period encoded in the period-register is of interest), the last multiplier only has to create the correct amount of correlations between the period register and the computation register. Local operations after the conditional (entangling) operations may be discarded to facilitate the final multiplication without affecting the results of the implementation. (iv) In rare cases, certain qubits are not subject to operations in the computation. Thus, these qubits can be removed from the algorithm entirely. For larger scale quantum computation, optimization steps (i), (iii) and (iv) will only marginally affect the performance of the implementation.
They represent only a small subset of the entire computation, which mainly consists of the full modular multipliers. Thus, the realization of these modular multipliers is a core requirement for scalable implementations of Shor's algorithm. Furthermore, Kitaev's approach requires in-sequence measurements, qubit-recycling to reset the measured qubit, feed-forward of gate settings based on previous measurement results, as well as numerous controlled quantum operations - tasks that have not been realized in a combined experiment so far. We demonstrate these techniques in our realization of Shor's algorithm in an ion-trap quantum computer, with five $^{40}$Ca$^+$ ions in a linear Paul trap. The qubit is encoded in the ground state $S_{1/2}(m=-1/2)=\ket{1}$ and the metastable state $D_{5/2}(m=-1/2)=\ket{0}$. The universal set of quantum gates consists of the entangling M{\o}lmer-S{\o}rensen interaction~\cite{ms_gate}, collective operations of the form $\exp(-i \frac{\theta}{2} S_\phi)$ with $S_\phi = \sum_i \sigma_\phi^{(i)}$, $\sigma_\phi^{(i)} = \cos(\phi)\sigma_x^{(i)} + \sin(\phi)\sigma_y^{(i)}$, $\sigma_{\{x,y\}}^{(i)}$ the Pauli operators of qubit $i$, $\theta=\Omega t$ determined by the Rabi frequency $\Omega$ and laser pulse duration $t$, $\phi$ determined by the relative phase between qubit and laser, and single qubit phase rotations induced by localized AC-Stark shifts (for more details see the supplementary material and Ref.~\citenum{order_finding_phips}). Unitary operations illustrated in Fig.~\ref{fig.shor_shem} are decomposed into primitive components such as two-target C-NOT and C-SWAP gates (or gates with global symmetries such as the four-target C-NOT employed here), from which an adaptation of the GRAPE algorithm~\cite{optimal_control} can efficiently derive an equivalent sequence of laser pulses realizing the desired operation on the relevant qubits. The problem with this approach is that the resulting sequence generally includes operations acting on all qubits.
Implementing the optimized 3-qubit operations on a 5-ion string therefore requires decoupling of the remaining qubits from the computation space. We spectroscopically decouple qubits by transferring any information from $\ket{S}\rightarrow \ket{D'}=D_{5/2}(m=-5/2)$ and $\ket{D}\rightarrow \ket{S'}=S_{1/2}(m=1/2)$. Here, the subspace $\{\ket{S'},\ket{D'}\}$ serves as a readily available ``quantum cache'' to store and retrieve quantum information in order to facilitate quantum computations. Finally, to complete the toolbox necessary for Kitaev's approach to Shor's algorithm, we also implement single qubit readout (by encoding all other qubits in the $\{\ket{D},\ket{D'}\}$ subspace and subsequent electron shelving~\cite{electron_shelving_proposal} on the $S_{1/2}\leftrightarrow P_{1/2}$ transition), feed-forward (by storing counts detected during the single-qubit readout~\cite{teleport_riebe} in a classical register and subsequent conditional laser pulses) and state-reinitialization (using optical pumping for the ion, and Raman-cooling~\cite{wineland_bible,raman_cooling_zoller} for the motional state of the ion string). The pulse sequences and additional information on the implementation of the modular multipliers are available as supplementary material. The key differences of our implementation with respect to previous realizations of Shor's algorithm are: a) the entire quantum register is employed, without sparing qubits that do not partake in the calculation; b) besides the trivial first multiplication step (corresponding to $r=2$ for $a=\{4,11,14\}$, realized only once for $a=11$), all non-trivial modular multipliers $a=\{2,7,8,13\}$ have been realized and applied; and c) Kitaev's originally proposed scheme is implemented with complete qubit recycling -- doing both readout and reinitialization on the very same physical qubit. This is especially important for factoring 15 with base $\{2,7,8,13\}$, as at least two steps are required for the semiclassical QFT.
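Classically, a controlled modular multiplier is just a conditional permutation of basis states; the following sketch (our own naming and padding convention for inputs outside $\mathbb{Z}_N$) reproduces the logic of the truth table in Fig.~\ref{fig.mod_mult_2} and checks reversibility:

```python
def controlled_mod_mult(a, N, n):
    # truth table of the controlled multiply-by-a-mod-N gate on (1 + n)-bit states
    table = {}
    for ctrl in (0, 1):
        for y in range(2 ** n):
            if ctrl == 1 and y < N:
                out = (a * y) % N
            else:
                out = y  # identity when the control is off (or outside Z_N)
            table[(ctrl, y)] = (ctrl, out)
    return table

table = controlled_mod_mult(2, 15, 4)
# control off: identity, as in panel a) of the truth-table figure
assert all(table[(0, y)] == (0, y) for y in range(16))
# control on: multiplication by 2 mod 15, e.g. 9 -> 3
assert table[(1, 9)] == (1, 3)
# the map must be reversible (a permutation) to be a valid quantum operation
assert len(set(table.values())) == 32
```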
In our realization we go beyond the minimal implementation of Shor's algorithm and not only employ all 7 qubits (comprised of 4 physical qubits in the computational register, 1 qubit in the periodicity register - recycled twice - plus additional cache qubits), but also include multiplications with up to the fourth power (although these correspond to the identity operation). This represents a realistic attempt at a scalable implementation of Shor's algorithm, as the entire qubit register remains subject to decoherence processes throughout the computation, and no simplifications are employed which presume prior knowledge of the solution. The measurement results for bases $a=\{2,7,8,11,13\}$ with periodicities $r=\{4,4,4,2,4\}$ are shown in Fig.~\ref{fig.shor_results}. In order to quantify the performance of the implementation, previous realizations mainly focused on the squared statistical overlap (SSO)~\cite{chiaverini_semiclas_fourier}, the classical equivalent to the Uhlmann fidelity~\cite{mike_n_ike}. While we achieved an SSO of \{0.968(1), 0.964(1), 0.966(1), 0.901(1), 0.972(1)\} for the cases $a=\{2,7,8,11,13\}$, we argue that this does not answer the question of a user in front of the quantum computer: ``What is the periodicity?'' Shor's algorithm allows one to deduce the periodicity with high probability from a single-shot measurement, since the output of the QFT is, in the exact case, a ratio of integers, where the denominator gives the desired periodicity. This periodicity is extracted using a continued fraction expansion, applied to $x/2^k$, a good approximation of the ideal case when $k$, the number of qubits, is sufficiently large. For the realised examples, the probabilistic nature of Shor's algorithm becomes clear: the output state $0$ never yields any information. For periodicity $4$ (and 3 qubits in the period-register), the output state $4$ suggests a fraction $\frac{4}{2^3}=\frac{1}{2}$, thus a periodicity of $2$, and also fails.
For periodicity $4$, only the output states $2$ and $6$ allow one to deduce the correct periodicity. In our realisations for bases $a=\{2,7,8,11,13\}$, the probabilities to obtain output states that allow the derivation of the correct periodicity are $\{56(2),51(2),54(2), 47(2), 50(2)\}$\%. Thus, a confidence of more than 99\% that the correct periodicity is obtained requires the experiment to run about 8 times. \begin{figure} \caption{Results and correct order-assignment probability for the different implementations to factor 15: a) 3-digit results (in decimal representation) of Shor's algorithm for the different bases. The ideal data (red) for periodicities $\{2,4\}$.\label{fig.shor_results}} \end{figure} In summary, we have presented the realization of Kitaev's vision to realize a scalable Shor's algorithm with 3-digit resolution to factor 15 using bases $\{2,7,8,11,13\}$. Here, a semiclassical QFT combined with single-qubit readout, feed-forward and qubit recycling was successfully employed. Compared to the traditional algorithm, the required number of qubits can thus be reduced by almost a factor of 3. Furthermore, the entire quantum register has been subject to the computation in a ``black-box'' fashion. Employing the equivalent of a quantum cache by spectroscopic decoupling significantly facilitated the derivation of the necessary pulse sequences to achieve high-fidelity results. In the future, spectroscopic decoupling might be replaced by physically moving the qubits from the computational zone using segmented traps~\cite{scale_iontraps}. Our investigations also reveal some open questions and problems for current and upcoming realizations of Shor's algorithm, which also apply to several other large-scale quantum algorithms of interest: particularly, finding system-specific implementations of suitable pulse sequences to realize the desired evolution.
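The continued-fraction step described above is conveniently expressed with Python's \texttt{Fraction.limit\_denominator}, which performs the continued-fraction expansion internally; a sketch with our own function name:

```python
from fractions import Fraction

def period_from_outcome(c, k, N):
    # candidate period = denominator of the best rational approximation
    # to c / 2^k with denominator smaller than N
    if c == 0:
        return None  # outcome 0 carries no information
    return Fraction(c, 2 ** k).limit_denominator(N - 1).denominator

# k = 3 period-register qubits, N = 15, true period r = 4
assert period_from_outcome(2, 3, 15) == 4   # 2/8 = 1/4 -> r = 4 (success)
assert period_from_outcome(6, 3, 15) == 4   # 6/8 = 3/4 -> r = 4 (success)
assert period_from_outcome(4, 3, 15) == 2   # 4/8 = 1/2 -> r = 2 (failure)
assert period_from_outcome(0, 3, 15) is None
```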
The presented operations were efficiently constructed from classical circuits and decomposed into manageable unitary building blocks (quantum gates) for which pulse sequences were obtained by an adapted GRAPE algorithm. Thus, the presented successful implementation in an ion-trap quantum computer demonstrates a viable approach to a scalable Shor algorithm. We gratefully acknowledge support by the Austrian Science Fund (FWF), through the SFB FoQus (FWF Project No. F4002-N16), by the European Commission (AQUTE), the NSF iQuISE IGERT, as well as the Institut f\"ur Quantenoptik und Quanteninformation GmbH. EM is a recipient of a DOC Fellowship of the Austrian Academy of Sciences. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Office grant W911NF-10-1-0284. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI, or the U.S. Government. \appendix \section{Supplementary Material} \subsection{Pulse sequences} In the following, the pulse sequences employed in the experiment are discussed in more detail. The nomenclature is as follows: the collective operations on the $S_{1/2}(m=-1/2)\leftrightarrow D_{5/2}(m=-1/2)$ transition, addressing all ion-qubits, realize the unitary operation $$ R(\theta, \phi) = \exp(-i \frac{\pi}{2} \theta S_{\phi})$$ with the collective spin operator $$S_{\phi}= \sum_i \sigma_\phi^{(i)} = \sum_i \left( \cos(\phi \pi) \sigma_x^{(i)} + \sin(\phi \pi) \sigma_y^{(i)} \right)$$ based on the Pauli operators $\sigma_{\{x,y,z\}}^{(i)}$ acting on qubit $i$. Here, the rotation angle $\theta$ is defined by $\theta=\frac{\Omega t}{\pi}$ with the Rabi frequency $\Omega$ and the laser pulse duration $t$. In this notation, a bit flip around $\sigma_x$ corresponds to $R(1,0)$.
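As a sanity check of this notation (a numerical sketch under the conventions above, not code used in the experiment), the single-qubit case can be verified directly: $R(1,0)=\exp(-i\frac{\pi}{2}\sigma_x)=-i\sigma_x$, i.e. a bit flip up to a global phase:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def R(theta, phi):
    """Single-qubit case of R(theta, phi) = exp(-i*(pi/2)*theta*S_phi).

    Since S_phi^2 = I for one qubit, the matrix exponential reduces to
    cos(a)*I - i*sin(a)*S_phi with a = (pi/2)*theta.
    """
    a = np.pi / 2 * theta
    s_phi = np.cos(phi * np.pi) * sx + np.sin(phi * np.pi) * sy
    return np.cos(a) * np.eye(2) - 1j * np.sin(a) * s_phi

psi = R(1, 0) @ np.array([1, 0], dtype=complex)  # act on |0>
print(np.round(np.abs(psi) ** 2, 12))  # [0. 1.]: population fully in |1>
```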
The collective operations are supplemented by single-qubit phase shifts of the form $$S_z(\theta,i) = \exp(-i \frac{\theta \pi}{2} \sigma_z^{(i)}).$$ The phase shift is realized by illuminating a single qubit with a tightly focused laser beam detuned $-20$~MHz from the carrier transition. Here, the induced AC-Stark shift $\Delta_{AC}$ implements the desired phase shift, with the rotation angle $\theta= \frac{\Delta_{AC} t}{\pi}$ depending on the pulse duration $t$. In combination, collective operations and single-qubit phase shifts allow us to implement arbitrary local operations. A universal set of quantum gates, capable of implementing any desired unitary operation, can be realized by combining these arbitrary local operations with an entangling interaction. In our experiment, we employ the M{\o}lmer-S{\o}rensen (MS) interaction~\cite{ms_gate} to realize entangling operations of the form $$MS(\theta)=\exp(-i \frac{\pi}{4} \theta S_{x}^2)$$ with $S_{x}=\sum_i \sigma_x^{(i)}$. In this notation, the maximally entangling $MS(\frac{1}{2})$ operation applied to the $N$-qubit state $\ket{0\ldots0}$ directly creates an $N$-qubit GHZ state. \subsection{Single-qubit measurement} Electron-shelving~\cite{electron_shelving_proposal} on the $S_{1/2} \leftrightarrow P_{1/2}$ transition addresses, and thus projects, all qubits of the quantum register. For Kitaev's implementation, however, only one qubit needs to be measured. With collective illumination, this can be achieved by transferring the quantum information encoded in the qubits that should not be measured into the $D$-state manifold. There, the quantum information is protected against shelving light on the $S_{1/2} \leftrightarrow P_{1/2}$ transition: the ion will not scatter any photons.
Using light resonant with the $S_{1/2}(m=-1/2) \leftrightarrow D_{5/2}(m=-5/2)$ transition (denoted by $R_2(\theta,\phi)$), a refocusing sequence of the form $R_2(0.5,0) \cdot S_z(1,i) \cdot R_2(0.5,0)$ efficiently encodes all but qubit $i$ in $D_{5/2}(m=-1/2)$ and $D_{5/2}(m=-5/2)$. Subsequently, the entire quantum register may be subject to shelving light, yet only qubit $i$ will be projected. \subsection{In-sequence detection and feed-forward} When all qubits that need to be protected against projection have been encoded in the $\{D_{5/2}(m=-1/2),D_{5/2}(m=-5/2)\}$ manifold, light at 397~nm resonant with the $S_{1/2}\leftrightarrow P_{1/2}$ transition state-dependently scatters photons on the remaining ion-qubits. The illumination time is set to 300~$\mu$s. A histogram of the photon counts detected at the photomultiplier tube is shown in Fig.~\ref{fig:det_hist}. Using counter electronics with the discriminator set at 4 counts within the detection window, the state $D$ with a mean count rate of 0.24~{counts/ms} (or 0.07~counts within the detection window) and the state $S$ with a mean count rate of 48~{counts/ms} (or 14.4~counts in the detection window) can be distinguished with a confidence better than 99.8\%. The boolean output of the discriminator is subsequently used in the electronics for state-dependent pulses and thus state-dependent operations. \begin{figure} \caption{In-sequence photon-count histogram: Using a detection window of 300 $\mu s$, the photomultiplier tube collects on average 0.07~{counts} \label{fig:det_hist} \end{figure} \subsection{Recooling and Qubit-reset} Scattering photons during the detection window heats the ion-string and can lower the quality of subsequent quantum operations applied to the register. Therefore, recooling of the ion-string after the illumination with electron-shelving light is necessary. However, this recooling must not destroy any quantum information stored in the other qubits.
Considering that the hidden quantum information is stored in the $D_{5/2}$ manifold, we employ 3-beam Raman cooling~\cite{wineland_bible,raman_cooling_zoller} in the $S_{1/2}\leftrightarrow P_{1/2}$ manifold. The Raman light field, consisting of $\sigma^+$ and $\pi$ light with respect to the quantization axis, is detuned by 1.5~GHz from the resonant $S_{1/2} \leftrightarrow P_{1/2}$ transition. The relative detuning between $\sigma^+$ and $\pi$ is chosen such that it creates resonant coupling between $S_{1/2}(m=-1/2) \otimes \ket{n} \leftrightarrow S_{1/2}(m=1/2) \otimes \ket{n-1}$, with $\ket{n}$ representing the quantized axial state of motion of the ion. The transfer is reset by resonant $\sigma^-$ light. Raman cooling is employed for 500~$\mu$s. The qubit is reinitialized after cooling by an additional 50~$\mu$s of $\sigma^-$ light. However, if the measured qubit was found to be in state $D$, neither does the measurement heat the ion string nor does the Raman cooling affect the register. In this case, the qubit is transferred from $D_{5/2}(m=-1/2)$ to $S_{1/2}(m=1/2)$ (which was depleted by the previous 50~$\mu$s of $\sigma^-$ light). An additional pulse of $\sigma^-$ light for 50~$\mu$s finally initializes the qubit, regardless of whether it was projected into $S$ or $D$. During the entire time when the qubit is subject to Raman cooling or initializing $\sigma^-$ light, a repump laser at 866~nm is applied to prevent population trapping in the $D_{3/2}$ manifold due to spontaneous decay from the $P_{1/2}$ state to $D_{3/2}$. \subsection{Pulse sequence optimisation} For a sufficiently large Hilbert space, it will no longer be possible to directly optimize unitary operations acting on the entire register. Decomposing the necessary unitary operations into building blocks acting on smaller register sizes will allow the use of optimized pulse sequences for large-scale quantum computation.
From a methodological point of view, it may be preferable to physically decouple the qubits from any interactions (for instance by splitting and moving part of the ion-qubit quantum register out of an interaction region, as proposed in Ref.~\citenum{scale_iontraps}). However, given the technical requirements and challenges of splitting and moving ion-strings, we focus on spectroscopically decoupling certain ion-qubits from the interaction. In particular, we spectroscopically decouple an ion from subsequent interactions by transferring any quantum information from the $\{S_{1/2}(m=-1/2),D_{5/2}(m=-1/2)\}$ manifold to the $\{S_{1/2}(m=1/2),D_{5/2}(m=-5/2)\}$ manifold using refocusing techniques on the $D_{5/2}(m=-1/2)\leftrightarrow S_{1/2}(m=1/2)$ and $S_{1/2}(m=-1/2)\leftrightarrow D_{5/2}(m=-5/2)$ transitions. Using this approach, we optimise the controlled-SWAP operation in a 3-qubit Hilbert space rather than a 5-qubit Hilbert space. \subsection{Controlled-SWAP} The controlled-SWAP operation, also known as the Fredkin gate, plays a crucial role in the modular multiplication. For its implementation, however, we could not derive a pulse sequence that incorporates an arbitrary number of spectator qubits (qubits that should be subject to the identity operation), in the presented case 2 spectator qubits in the computational register. However, by decoupling the spectator qubits, this additional requirement on the implementation becomes unnecessary. Using pulse sequence optimization~\cite{optimal_control}, we obtained a sequence for the exact three-qubit case, as shown in Tab.~\ref{tab:fredkin}. In total the sequence consists of 18 pulses, including 4 MS interactions. \begin{table}[h!] \centering \begin{tabular}{|c|l||c|l|} \hline Pulse Nr. & Pulse & Pulse Nr.
& Pulse \\ \hline 1 & $R(1/2,1/2)$ & 10 & $R(1/2,1)$ \\ 2 & $S_z(3/2,3)$ & 11 & $S_z(1/4,2)$\\ 3 & $MS(4/8)$ & 12 & $S_z(3/2,3)$\\ 4 & $S_z(3/2,2)$ & 13 & $MS(4/8)$\\ 5 & $S_z(1/2,3)$ & 14 & $S_z(3/2,2)$\\ 6 & $R(3/4,0)$ & 15 & $S_z(3/2,1)$\\ 7 & $MS(6/8)$ & 16 & $R(1/2,1)$\\ 8 & $S_z(3/2,2)$ & 17 & $S_z(3/2,1)$\\ 9 & $MS(4/8)$ & 18 & $S_z(3/2,2)$ \\ \hline \end{tabular} \caption{Controlled-SWAP operation: In a system of three ion-qubits, qubit 1 represents the control qubit and qubits $\{2,3\}$ are swapped depending on the state of the first qubit. Note that this sequence only works for three-qubit systems; spectator qubits would not experience the identity operation.} \label{tab:fredkin} \end{table} \subsection{Four-Target Controlled-NOT} The modular multipliers $(7\mod 15)$ and $(13\mod 15)$ require, besides Fredkin operations, also CNOT operations acting on all qubits in the computational register. Such an operation can be implemented (see Ref.~\citenum{master_nebendahl}, p.~90, Eq.~5.21) with 2 MS operations plus local operations only, regardless of the size of the computational register. The respective sequence is shown in Tab.~\ref{tab:ft-cnot}. \begin{table}[h!] \centering \begin{tabular}{|c|l||c|l|} \hline Pulse Nr. & Pulse & Pulse Nr. & Pulse \\ \hline 1 & $R(1/2,1) $ & 6 & $MS(1/4) $\\ 2 & $S_z(3/2,1) $ & 7 & $R(3/4,0) $\\ 3 & $MS(3/4) $ & 8 & $S_z(3/2,1)$\\ 4 & $R(5/4,1)$ & 9 & $R(1/2,0)$\\ 5 & $S_z(1,1) $ & & \\ \hline \end{tabular} \caption{Four-target controlled-NOT: Depending on the state of qubit 1, the remaining four qubits \{2-5\} are subject to a conditional NOT operation.} \label{tab:ft-cnot} \end{table} \subsection{Two-Target Controlled-NOT} There exists an analytic solution to realize multi-target controlled-NOT operations in the presence of spectator qubits with the presented set of gates~\cite{master_nebendahl}, as required for the $\{2,7,8,13\}^2 \mod 15$ multiplier.
However, we find that decoupling subsets of qubits of the quantum register prior to the application of the multi-target controlled-NOT operation presented above both facilitates the optimisation and improves the performance of the realisation of a two-target controlled-NOT operation. Thus, the required two-target controlled-NOT operation is implemented via (i) decoupling qubits 2 and 4, (ii) performing a multi-target controlled-NOT on all qubits with the first qubit acting as control, and (iii) recoupling qubits 2 and 4. \subsection{Controlled Quantum Modular Multipliers} Based on the decomposition shown in Fig.~\ref{fig.shor_shem}d) and the respective pulse sequences outlined in the previous section, we investigate the performance of the building blocks as well as of the respective conditional multipliers. In the following, the fidelities are defined as the mean probability (with standard deviation) of observing the correct output state. The elements in the respective truth tables have been obtained as an average over 200 repetitions. \begin{itemize} \item The Fredkin operation, controlled by qubit 1 and acting on the qubit pairs $\{35,23,34,45\}$, yields fidelities of $\{76(4),73(6),72(4),68(7)\}\%$. These numbers are consistent with MS gate interactions at a fidelity of about 95\% acting on three ions (in the presence of two decoupled ions) and local operations at a fidelity of 99.3\%. \item The 4-target CNOT gate operates at a fidelity of $86(3)\%$. \item Considering the quality of the modular multipliers $(\{2,7,8,11,13\}\mod 15)$, we find fidelities of $\{48(5),40(5),50(6),46(5),38(5)\}\%$. This performance is consistent with the product of the fidelities of the individual building blocks: $\{37(6), 36(5), 37(6), 48(5), 36(5)\}\%$.
\end{itemize} \begin{figure*} \caption{Controlled modular multipliers: While the full truth tables have been obtained, for improved visibility only the subset of data for the computational register (in decimal basis) is presented for modular multipliers $(\{2,7,8,11,13\} \label{fig.mod_mult_all} \end{figure*} \end{document}
\begin{document} \title{Adaptive Fourier-Galerkin Methods} \author{Claudio Canuto$^a$, Ricardo H. Nochetto$^b$ and Marco Verani$^c$} \date{January 26, 2012} \maketitle \begin{center} {\small $^a$ Dipartimento di Scienze Matematiche, Politecnico di Torino\\ Corso Duca degli Abruzzi 24, 10129 Torino, Italy\\ E-mail: {\tt [email protected]}\\ \vskip 0.1cm $^b$ Department of Mathematics and Institute for Physical Science and Technology,\\ University of Maryland, College Park, MD 20742, USA\\ E-mail: {\tt [email protected]} \vskip 0.1cm $^c$ MOX, Dipartimento di Matematica, Politecnico di Milano\\ Piazza Leonardo da Vinci 32, I-20133 Milano, Italy\\ E-mail: {\tt [email protected]}\\ } \end{center} \begin{abstract} \noindent We study the performance of adaptive Fourier-Galerkin methods in a periodic box in $\mathbb{R}^d$ with dimension $d\ge1$. These methods offer unlimited approximation power only restricted by solution and data regularity. They are of intrinsic interest but are also a first step towards understanding adaptivity for the $hp$-FEM. We examine two nonlinear approximation classes, one classical corresponding to algebraic decay of Fourier coefficients and another associated with exponential decay. We study the sparsity classes of the residual and show that they are the same as those of the solution for the algebraic class but not for the exponential one. This possible sparsity degradation for the exponential class can be compensated with coarsening, which we discuss in detail. We present several adaptive Fourier algorithms, and prove their contraction and optimal cardinality properties.\\ \\ \textbf{Keywords:} Spectral methods, adaptivity, convergence, optimal cardinality. \end{abstract} \section{Introduction}\label{S:intro} Adaptivity is now a fundamental tool in scientific and engineering computation. In contrast to the practice, which goes back to the 1970s, the mathematical theory for multidimensional problems is rather recent.
It started in 1996 with the convergence results by D\"orfler \cite{dorfler:96} and Morin, Nochetto, and Siebert \cite{MNS:00}. The first convergence rates were derived by Cohen, Dahmen, and DeVore \cite{CDDV:1998} for wavelets in any dimension $d$, and for finite element methods (AFEM) by Binev, Dahmen, and DeVore \cite{BDD:04} for $d=2$ and Stevenson \cite{Stevenson:2007} for any $d$. The most comprehensive results for AFEM are those of Casc\'on, Kreuzer, Nochetto, and Siebert \cite{Nochetto-et-al:2008} for any $d$ and $L^2$ data, and Cohen, DeVore, and Nochetto \cite{CDN:11} for $d=2$ and $H^{-1}$ data; we refer to the survey \cite{NSV:09} by Nochetto, Siebert and Veeser. This theory is quite satisfactory in that it shows that AFEM delivers a convergence rate compatible with that of the approximation classes where the solution and data belong. The recent results in \cite{CDN:11} reveal that it is the approximation class of the solution that really matters. In all cases, though, the convergence rates are limited by the approximation power of the method (both wavelets and FEM), which is finite and related to the polynomial degree of the basis functions, and the regularity of the solution and data. The latter is always measured in an {\it algebraic} approximation class. In contrast, very little is known for methods with infinite approximation power, such as those based on Fourier analysis. We mention here the results of DeVore and Temlyakov \cite{DeVore-Temlyakov:1995} for trigonometric sums and those of Binev et al \cite{BCDDPW:10} for the reduced basis method. A close relative of Fourier methods is the so-called $p$-version of the FEM (see e.g. \cite{Schwab:1998} and \cite{CHQZ2:2006}), which uses Legendre polynomials instead of exponentials as basis functions. The purpose of this paper is to present {\it adaptive Fourier-Galerkin methods (ADFOUR)}, and discuss their convergence and optimality properties.
We do so in the context of both {\it algebraic} and {\it exponential} approximation classes, and take advantage of the orthogonality inherent to complex exponentials. We believe that this approach can be extended to the $p$-FEM. We view this theory as a first step towards understanding adaptivity for the $hp$-FEM, which combines mesh refinement ($h$-FEM) with polynomial enrichment ($p$-FEM) and is much harder to analyze. Our investigation reveals some striking differences between ADFOUR and both AFEM and wavelet methods. The basic assumption underlying the success of adaptivity is that the information read in the residual is quasi-optimal for either mesh design or choosing wavelet coefficients for the actual solution. This entails that the sparsity classes of the residual and the solution coincide. We briefly illustrate below, and fully discuss later in Sect. \ref{sec:spars-res}, that this basic premise is false for exponential classes even though it is true for algebraic classes. Confronted with this unexpected fact, we have no alternative but to implement and study ADFOUR with {\it coarsening} for the exponential case; see Sect. \ref{S:coarsening} and Sect. \ref{sec:adfour-coarse}. This was the original idea of Cohen et al \cite{CDDV:1998} and Binev et al \cite{BDD:04} for the algebraic case, but it was subsequently removed by Stevenson \cite{Stevenson:2007}. We now give a brief description of the essential issues we are confronted with in designing and studying ADFOUR. To this end, we assume that we know the Fourier representation $\bv=\{v_k\}_{k\in\mathbb{Z}}$ of a periodic function $v$, and its non-increasing rearrangement $\bv^*=\{v_n^*\}_{n=1}^\infty$, namely, $|v_{n+1}^*|\le |v_n^*|$ for all $n\ge 1$. \noindent {\bf D\"orfler marking and best $N$-term approximation.} We recall the marking introduced by D\"orfler \cite{dorfler:96}, which is the only one for which there exist provable convergence rates.
Given a parameter $\theta\in (0,1)$, and a current set of Fourier frequencies or indices $\Lambda$, say the first $N$ ones according to the labeling of $\bv$, we choose the next set $\partial\Lambda$ as the {\it minimal} set for which \begin{equation}\label{dorfler} \|P_{\partial\Lambda} \br\| \ge \theta \|\br\|, \end{equation} where $\br := \bv - P_\Lambda \bv$ is the {\it residual} and $P_\Lambda$ is the orthogonal projection in the $\ell^2$-norm $\|\cdot\|$ onto $\Lambda$. Note that, if $\br_*:= \br - P_{\partial\Lambda}\br$ and $\Lambda_* := \Lambda\cup\partial\Lambda$, then \eqref{dorfler} can be equivalently written as \begin{equation}\label{dorfler-2} \|\br_*\| = \|\br - P_{\partial\Lambda}\br\| \le \sqrt{1-\theta^2} \|\br\|, \end{equation} and that $\br = \bv|_{\Lambda^c}$ where $\Lambda^c := \mathbb N \backslash \Lambda$ is the complement of $\Lambda$ and likewise for $\br_*$. This is the simplest possible scenario because the information built in $\br$ is exactly that of $\bv$. Moreover, $\bv-\br=\{v_n^*\}_{n=1}^N$ is the best $N$-term approximation of $\bv$ in the $\ell^2$-norm and the corresponding error $E_N(v)$ is given by \begin{equation}\label{N-term} E_N(v) = \Big( \sum_{n>N} |v_n^*|^2 \Big)^{\frac12} = \|\br\|. \end{equation} \noindent {\bf Algebraic vs exponential decay.} Suppose now that $\bv$ has the precise {\it algebraic} decay\footnote{Throughout the paper, $A \lsim B$ means $A \le c \, B$ for some constant $c>0$ independent of the relevant parameters in the inequality; $A \simeq B$ means $A \lsim B$ and $B \lsim A$.} \begin{equation}\label{algebraic} |v_n^*| \simeq n^{-\frac{1}{\tau}} \quad\forall\, n\ge1, \end{equation} with \begin{equation}\label{tau} \frac{1}{\tau} = \frac{s}{d} + \frac{1}{2} \end{equation} and $s>0$. We denote by $\|\bv\|_{\ell^s_B}$ the smallest constant in the upper bound in \eqref{algebraic}.
We thus have \[ E_N(v)^2 \simeq \|\bv\|_{\ell^s_B}^2 \sum_{n>N} n^{-\frac{2}{\tau}} = \|\bv\|_{\ell^s_B}^2 \sum_{n>N} n^{-\frac{2s}{d}-1} \simeq \|\bv\|_{\ell^s_B}^2 N^{-\frac{2s}{d}}. \] This decay is related to certain {\it Besov} regularity of $v$ \cite{DeVore-Temlyakov:1995}. Note that the effect of D\"orfler marking \eqref{dorfler-2} is to reduce the residual from $\br$ to $\br_*$ by a factor $\alpha = \sqrt{1-\theta^2}$, or equivalently \[ E_{N_*}(v) \le \alpha E_N(v), \] with $N_* = |\Lambda_*|$. Since the set $\Lambda_*$ is minimal, we deduce that $E_{N_*-1}(v) > \alpha E_N(v)$, whence \begin{equation}\label{bulk-1} \frac{N_*}{N} \simeq \alpha^{-\frac{d}{s}} \quad\Rightarrow\quad N_* - N \simeq \alpha^{-\frac{d}{s}} N \end{equation} for $\alpha$ small enough. This means that the number of degrees of freedom to be added is proportional to the current number. This considerably simplifies the complexity analysis, since every step adds as many degrees of freedom as we have already accumulated. The exponential case is quite different. Suppose that $\bv$ has a {\it genuinely exponential} decay \begin{equation}\label{gen-exp} |v_n^*| \simeq e^{-\eta n} \quad\forall\, n\ge 1, \end{equation} corresponding to analytic functions \cite{Foias-Temam:1989}, and let $\|\bv\|_{\ell^{\eta}_G}$ be the smallest constant appearing in the upper bound in \eqref{gen-exp}. These definitions are slight simplifications of the actual ones in Sect. \ref{sec:nlg} but are enough to give insight into the main issues at stake. We thus have \begin{equation*} E_N(v)^2 \simeq \|\bv\|_{\ell^{\eta}_G}^2 \sum_{n>N} e^{-2\eta n} \simeq \|\bv\|_{\ell^{\eta}_G}^2 e^{-2\eta N}; \end{equation*} this and similar decays are related to {\it Gevrey} classes of $C^\infty$ functions \cite{Foias-Temam:1989}.
In contrast to \eqref{bulk-1}, D\"orfler marking now yields\footnote{Throughout the paper, $A \sim B$ means $A=B+c$ for some quantity $c \simeq 1$.} \begin{equation}\label{bulk-2} N_* - N \sim \frac{1}{\eta} \log \frac{1}{\alpha}. \end{equation} This shows that the number of additional degrees of freedom per step is fixed and independent of $N$, which makes their counting as well as their implementation a very delicate operation. \noindent {\bf Plateaux.} We now consider a situation opposite to the ideal decay examined above. Suppose that the first $K>1$ Fourier coefficients of $v$ are constant and either \begin{equation}\label{plateaux} |v_n^*| = \|\bv\|_{\ell^s_B} n^{-\frac{1}{\tau}}\quad\text{\rm or}\quad |v_n^*| = \|\bv\|_{\ell^{\eta}_G} e^{-\eta n} \quad\forall\, n\ge K, \end{equation} for each approximation class. A simple calculation reveals that either \begin{equation} \|\bv\| \simeq \|\bv\|_{\ell^s_B} K^{-s/d} \quad\text{\rm or}\quad \|\bv\| \simeq \|\bv\|_{\ell^{\eta}_G} e^{-\eta K}. \end{equation} Repeating the argument leading to \eqref{bulk-1} and \eqref{bulk-2} with $N=1$, we infer that either \begin{equation}\label{bulk-3} N_* \simeq K \alpha^{-\frac{d}{s}} \quad\text{\rm or }\quad N_* \sim K + \frac{1}{\eta}\log\frac{1}{\alpha}. \end{equation} For $K\gg1$ this is a much larger number than the optimal values \eqref{bulk-1} and \eqref{bulk-2}, and illustrates the fact that the D\"orfler condition \eqref{dorfler} adds many more frequencies in the presence of plateaux. We note that $K$ is a multiplicative constant in the left of \eqref{bulk-3} and additive in the right of \eqref{bulk-3}. \noindent {\bf Sparsity of the residual.} In practice we do not have access to the Fourier decomposition of $v$ but rather of the residual $r(v)=f-Lv$, where $f$ is the forcing function and $L$ the differential operator. 
Only an operator $L$ with constant coefficients leads to a spectral representation with diagonal matrix $\bA$, in which case the components of the residual $\br = \mathbf{f} - \bA\bv$ are directly those of $\mathbf{f}$ and $\bv$. In general $\bA$ decays away from the main diagonal with a law that depends on the regularity of the coefficients of $L$; we will examine in Sect. \ref{S:properties-A} either algebraic or exponential decay. In this much more intricate and interesting endeavor, studied in this paper, the components of $\bv$ interact with entries of $\bA$ to give rise to $\br$. The question whether $Lv$ belongs to the same approximation class as $v$ thus becomes relevant, because adaptivity decisions are made with $r(v)$, and thereby on the range of $L$ rather than its domain. We now provide insight into the key issues at stake via a couple of heuristic examples; we discuss this fully in Sect. \ref{S:algebraic-case} and Sect. \ref{subsec:spars-res-exp}. We start with the exponential case: let $\bv:=\{v_k\}_{k\in\mathbb{Z}}$ be defined by \[ v_k = e^{-\eta n} \quad\textrm{if}\quad k = 2p(n-1), \qquad v_k = 0 \quad\textrm{otherwise}, \] for $p\ge2$ a given integer and $n\ge1$. This sequence exhibits gaps of size $2p$ between consecutive nonzero entries for $k\ge0$. Its non-increasing rearrangement $\bv^*=\{v_n^*\}_{n=1}^\infty$ is thus given by \[ v_n^* = e^{-\eta n} \quad n\ge 1, \] whence $\bv\in\ell^\eta_G$ with $\|\bv\|_{\ell^\eta_G}=1$. Let $\bA:=(a_{ij})_{i,j=1}^\infty$ be the Toeplitz bi-infinite matrix given by \[ a_{ij} = 1\quad\textrm{if } |i-j| \le q, \qquad a_{ij} = 0\quad\textrm{otherwise}, \] with $1\le q<p$. This matrix $\bA$ has $2q+1$ main nontrivial diagonals and is both of exponential and algebraic class according to Definition \ref{def:class.matrix} below.
The product $\bA\bv$ is much less sparse than $\bv$ but, because $q<p$, consecutive frequencies of $\bv$ do not interact with each other: the $i$-th component reads \[ (\bA\bv)_i = e^{-\eta n}\quad\textrm{if}\quad \big| i-2p(n-1)\big| \le q \quad\textrm{for some}\quad n\ge 1, \] or $(\bA\bv)_i=0$ otherwise. The non-increasing rearrangement $(\bA\bv)^*$ of $\bA\bv$ becomes \[ (\bA\bv)^*_m = e^{-\eta n} \quad\textrm{if}\quad (2q+1)(n-1) + 1 \le m \le (2q+1) n. \] Consequently, writing $(\bA\bv)^*_m = e^{-\eta \frac{n}{m} m}$ and observing that \[ \frac{n}{m} \ge \frac{n}{(2q+1)n} = \frac{1}{2q+1}, \] with equality attained for $m=(2q+1)n$, we deduce \[ \bA\bv \in \ell^{\bar\eta}_G \quad\textrm{with}\quad \|\bA\bv\|_{\ell^{\bar\eta}_G} =1\,, \quad \bar\eta = \frac{\eta}{2q+1}. \] We thus conclude that the action of $\bA$ may shift the exponential class, from the one characterized by the parameter $\eta$ for $\bv$ to the one characterized by $\bar\eta<\eta$ for $\bA\bv$. This uncovers the crucial feature that the image $\bA\bv$ of $\bv$ may be substantially less sparse than $\bv$ itself. In Sect. \ref{subsec:spars-res-exp} we present a rigorous construction with $a_{ij}$ decreasing exponentially from the main diagonal and another, rather sophisticated, construction that illustrates the fact that the exponent $\tau = 1$ in the bound $|v_n^*| \lsim e^{-\eta n} = e^{-\eta n^\tau}$ for $\bv$ may deteriorate to some $\bar\tau < 1$ in the corresponding bound for $\bA\bv$. It is remarkable that a similar construction for the algebraic decay would not lead to a change of algebraic class. In fact, let $\bv=\{v_k\}_{k\in\mathbb{Z}}$ be given by \[ v_k = \frac{1}{n} \quad\textrm{if}\quad k = 2p(n-1) \quad\textrm{for some}\quad n\ge1, \] and $v_k=0$ otherwise. The non-increasing rearrangement $\bv^*=\{v_n^*\}_{n=1}^\infty$ of $\bv$ satisfies $v^*_n=\frac{1}{n}$, whence \[ \bv\in \ell^s_B \quad\textrm{with}\quad s = \frac{d}{2}\,, \quad \|\bv\|_{\ell^s_B}=1.
\] On the other hand, the $i$-th component of $\bA\bv$ reads \[ (\bA\bv)_i = \frac{1}{n}\quad\textrm{if}\quad \big|i-2p(n-1)\big| \le q \quad\textrm{for some}\quad n\ge 1, \] or $(\bA\bv)_i=0$ otherwise. The non-increasing rearrangement $(\bA\bv)^*$ of $\bA\bv$ in turn satisfies \[ (\bA\bv)^*_m = \frac{1}{n} \quad\textrm{if}\quad (2q+1)(n-1) + 1 \le m \le (2q+1) n, \] whence writing $(\bA\bv)^*_m = \frac{m}{n} \frac{1}{m}$ and arguing as before we infer that \[ \bA\bv\in \ell^s_B \quad\textrm{with}\quad \|\bA\bv\|_{\ell^s_B} = 2q+1. \] Since $\|\bA\bv\|_{\ell^s_B} > \|\bv\|_{\ell^s_B}$ we realize that $\bA\bv$ is less sparse than $\bv$ but, in contrast to the exponential case, they belong to the same algebraic class $\ell^s_B$. Moreover, we will prove later in Sect. \ref{S:algebraic-case} that $\bA$ preserves the class $\ell^s_B$ provided the entries of $\bA$ possess a suitable algebraic decay away from the main diagonal. Since D\"orfler marking is applied to the residual $\br$, it is its sparsity class that determines the degrees of freedom $|\partial\Lambda|$ to be added. The same argument leading to either \eqref{bulk-1} or \eqref{bulk-2} gives \[ |\partial\Lambda| \le \Big(\frac{\|\br\|_{\ell^s_B}}{\alpha\|\br\|}\Big)^{\frac{d}{s}} + 1 \qquad\text{\rm or }\qquad |\partial\Lambda| \le \frac{1}{\eta} \log \frac{\|\br\|_{\ell^{\eta}_G}}{\alpha\|\br\|} +1, \] for each class. We thus see that the ratios $\|\br\|_{\ell^s_B}/\|\br\|$ and $\|\br\|_{\ell^{\eta}_G}/\|\br\|$ control the behavior of the adaptive procedure. This has already been observed and exploited by Cohen et al \cite{CDDV:1998} in the context of wavelet methods for the class $\ell^s_B$. Our estimates, discussed in Sect. \ref{sec:spars-res}, are valid for both classes and use specific decay properties of the entries of $\bA$. \noindent {\bf Coarsening.} Ever since its inception by Cohen et al \cite{CDDV:1998} and Binev et al \cite{BDD:04}, this has been a controversial issue for elliptic PDE.
It was originally due to the lack of control on the ratio $\|\br\|_{\ell^s_B}/\|\br\|$ for large $s$ \cite{CDDV:1998}. It was removed by Stevenson et al \cite{Gantumur-Stevenson:2007,Stevenson:2007} for the algebraic class $\ell^s_B$ via a clever argument that exploits the minimality of D\"orfler marking. This implicitly implies that the approximation classes for both $v$ and $Lv$ coincide, which we prove explicitly in Sect. \ref{S:algebraic-case} for the algebraic case. This is not true, though, for the exponential case, as discussed in Sect. \ref{subsec:spars-res-exp}. For the latter, we need to resort to {\it coarsening} to keep the cardinality of ADFOUR quasi-optimal. To this end, we construct an insightful example in Sect. \ref{S:coarsening} and prove a rather simple but sharp coarsening estimate which improves upon \cite{CDDV:1998}. \noindent {\bf Contraction constant.} It is well known that the contraction constant $\rho(\theta) = \sqrt{1 - \frac{\alpha_*}{\alpha^*}\theta^2}$ cannot be arbitrarily close to $0$ for estimators whose upper and lower constants, $\alpha^*\ge\alpha_*$, do not coincide. This is, however, at odds with the philosophy of spectral methods, which are expected to converge superlinearly (typically exponentially). Assuming that the decay properties of $\bA$ are known, we can enrich D\"orfler marking in such a way that the contraction factor becomes \[ \bar\rho(\theta) = \Big(\frac{\alpha^*}{\alpha_*}\Big)^{\frac12} \sqrt{1-\theta^2}. \] This leads to $\bar\rho(\theta)$ as close to $0$ as desired and to {\it aggressive} versions of ADFOUR discussed in Sect. \ref{sec:plain-adapt-alg}. This paper can be viewed as a first step towards understanding adaptivity for the $hp$-FEM. However, the results we present are of intrinsic interest and of value for periodic problems with a high degree of regularity and rather complex structure. One such problem is turbulence in a periodic box.
Our techniques exploit periodicity and orthogonality of the complex exponentials, but many of our assertions and conclusions extend to the non-periodic case for which the natural basis functions are Legendre polynomials; this is the case of the $p$-FEM. In any event, the study of adaptive Fourier-Galerkin methods seems to be a new paradigm in adaptivity, with many intriguing questions and surprises, some discussed in this paper. In contrast to the $h$-FEM, they exhibit unlimited approximation power which is only restricted by solution and data regularity. We organize the paper as follows. In Sect. \ref{sec:gen} we introduce the Fourier-Galerkin method, present a posteriori error estimators, and discuss properties of the underlying matrix $\bA$ for both algebraic and exponential approximation classes. In Sect. \ref{sec:plain-adapt-alg} we deal with four algorithms, two for each class, and prove their contraction properties. We devote Sect. \ref{sec:nl} to nonlinear approximation theory with an emphasis on the exponential class. In Sect. \ref{sec:spars-res} we turn to the study of the sparsity classes for the residual $\br$ along the lines outlined above. We examine the role of coarsening and prove a sharp coarsening estimate in Sect. \ref{S:coarsening}. We conclude with optimality properties of ADFOUR for the algebraic class in Sect. \ref{sec:complexity} and for the exponential class in Sect. \ref{sec:adfour-coarse}. 
\section{Fourier-Galerkin approximation}\label{sec:gen} \subsection{Fourier basis and norm representation} For $d \geq 1$, we consider $\Omega=(0,2\pi)^d$, and the trigonometric basis $$ \phi_k(x)=\frac1{(2\pi)^{d/2}} \, {\rm e}^{i k \cdot x} \;, \qquad k \in \mathbb{Z}^d \;, \quad x \in \mathbb{R}^d \;, $$ which is orthonormal in $L^2(\Omega)$; let $$ v = \sum_k \hat{v}_k \phi_k \;, \quad \hat{v}_k=(v,\phi_k) \;, \qquad \text{with } \ \Vert v \Vert_{L^2(\Omega)}^2= \sum_k |\hat{v}_k|^2 \;, $$ be the expansion of any $v \in L^2(\Omega)$ and the representation of its norm via the Parseval identity. Let $H^1_p(\Omega)=\{v \in H^1(\Omega) \, : \, v(x+2\pi e_j)=v(x), \ 1 \leq j \leq d\}$, and let $H^{-1}_p(\Omega)$ be its dual. Since the trigonometric basis is orthogonal in $H^1_p(\Omega)$ as well, one has for any $v \in H^1_p(\Omega)$ \begin{equation}\label{eq:four01} \Vert v \Vert_{H^1_p(\Omega)}^2 = \sum_k (1+|k|^2)|\hat{v}_k|^2 = \sum_k |\hat{V}_k|^2 \;, \qquad (\text{setting }\hat{V}_k := \sqrt{(1+|k|^2)}\hat{v}_k)\;; \end{equation} here and in the sequel, $|k|$ denotes the Euclidean norm of the multi-index $k$. On the other hand, if $f \in H^{-1}_p(\Omega)$, we set $$ \hat{f}_k=\langle f, \phi_k \rangle \;, \qquad \text{so that } \ \langle f, v \rangle = \sum_k \hat{f}_k \hat{v}_k \quad \forall v \in H^1_p(\Omega) \;; $$ the norm representation is \begin{equation}\label{eq:four02} \Vert f \Vert_{H^{-1}_p(\Omega)}^2 = \sum_k \frac1{(1+|k|^2)}|\hat{f}_k|^2 = \sum_k |\hat{F}_k|^2 \;, \qquad (\text{setting }\hat{F}_k := \frac1{\sqrt{(1+|k|^2)}}\hat{f}_k)\;. \end{equation} Throughout the paper, we will use the notation $\Vert \ . \ \Vert$ to indicate either the $H^1_p(\Omega)$-norm of a function $v$ or the $H^{-1}_p(\Omega)$-norm of a linear form $f$; the specific meaning will be clear from the context.
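The norm identity \eqref{eq:four01} is easy to verify numerically in dimension $d=1$. The following sketch (our own illustration; all variable names are ours) compares the weighted coefficient sum with an equispaced quadrature of $\int_0^{2\pi}\big(|v'|^2+|v|^2\big)$ for a random trigonometric polynomial:

```python
import numpy as np

# Numerical check of the norm identity (eq:four01) for d = 1:
# ||v||_{H^1_p}^2 = sum_k (1 + |k|^2) |v_k|^2 for v = sum_k v_k phi_k.

rng = np.random.default_rng(0)
ks = np.arange(-8, 9)                       # active Fourier modes
vhat = rng.standard_normal(ks.size) + 1j * rng.standard_normal(ks.size)

# Coefficient side of (eq:four01).
coeff_norm2 = np.sum((1.0 + ks**2) * np.abs(vhat)**2)

# Quadrature side: equispaced (trapezoidal) rule for the integral of
# |v'|^2 + |v|^2 over (0, 2*pi); exact for trigonometric polynomials
# with this many nodes.
x = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
phi = np.exp(1j * np.outer(ks, x)) / np.sqrt(2 * np.pi)   # basis functions phi_k
v = vhat @ phi
dv = (1j * ks * vhat) @ phi                 # derivative, mode by mode
quad_norm2 = np.mean(np.abs(dv)**2 + np.abs(v)**2) * 2 * np.pi

assert abs(quad_norm2 - coeff_norm2) < 1e-8 * coeff_norm2
```

The equispaced trapezoidal rule is exact for trigonometric polynomials of degree below half the number of nodes, so the two quantities agree up to rounding error.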
Given any finite index set $\Lambda \subset {\mathbb{Z}}^d$, we define the subspace of $V:=H^1_p(\Omega)$ $$ V_{\Lambda} := {\rm span}\,\{\phi_k\, | \, k \in \Lambda \}\;; $$ we set $|\Lambda|= \rm{card}\, \Lambda$, so that $\rm{dim}\, V_{\Lambda}=|\Lambda|$. If $g$ admits an expansion $g = \sum_k \hat{g}_k \phi_k $ (converging in an appropriate norm), then we define its projection $P_\Lambda g$ upon $V_\Lambda$ by setting $$ P_\Lambda g = \sum_{k \in \Lambda} \hat{g}_k \phi_k \;. $$ \subsection{Galerkin discretization and residual} We now consider the elliptic problem \begin{equation}\label{eq:four03} \begin{cases} Lu=-\nabla \cdot (\nu \nabla u)+ \sigma u = f & \text{in } \Omega \;, \\ u \ \ 2\pi\text{-periodic in each direction} \;, \end{cases} \end{equation} where $\nu$ and $\sigma$ are sufficiently smooth real coefficients satisfying $0 < \nu_* \leq \nu(x) \leq \nu^* < \infty$ and $0 < \sigma_* \leq \sigma(x) \leq \sigma^* < \infty$ in $\Omega$; let us set $$ \alpha_* = \min(\nu_*, \sigma_*) \qquad \text{and} \qquad \alpha^* = \max(\nu^*, \sigma^*) \;. $$ We formulate this problem variationally as \begin{equation}\label{eq:four.1} u \in H^1_p(\Omega) \ \ : \quad a(u,v)= \langle f,v \rangle \qquad \forall v \in H^1_p(\Omega) \;, \end{equation} where $a(u,v)=\int_\Omega \nu \nabla u \cdot \nabla \bar{v} + \int_\Omega \sigma u \bar{v}$ (bar indicating as usual complex conjugate). We denote by $\tvert v \tvert = \sqrt{a(v,v)}$ the energy norm of any $v \in H^1_p(\Omega)$, which satisfies \begin{equation}\label{eq:four.1bis} \sqrt{\alpha_*} \Vert v \Vert \leq \tvert v \tvert \leq \sqrt{\alpha^*} \Vert v \Vert \;. \end{equation} Given any finite set $\Lambda \subset \mathbb{Z}^d$, the Galerkin approximation is defined as \begin{equation}\label{eq:four.2} u_\Lambda \in V_\Lambda \ \ : \quad a(u_\Lambda,v_\Lambda)= \langle f,v_\Lambda \rangle \qquad \forall v_\Lambda \in V_\Lambda \;. 
\end{equation} For any $w \in V_\Lambda$, we define the residual $$ r(w)=f-Lw = \sum_k \hat{r}_k(w) \phi_k \;, \qquad \text{where} \qquad \hat{r}_k(w) = \langle f - Lw, \phi_k \rangle = \langle f,\phi_k \rangle -a(w,\phi_k) \;. $$ Then, the previous definition of $u_\Lambda$ is equivalent to the condition \begin{equation}\label{eq:four.2.1ter} P_\Lambda r(u_\Lambda) = 0 \;, \qquad \text{i.e., } \quad \hat{r}_k(u_\Lambda)=0 \qquad \forall k \in \Lambda \;. \end{equation} On the other hand, by the continuity and coercivity of the bilinear form $a$, one has \begin{equation}\label{eq:four.2.1} \frac1{\alpha^*} \Vert r(u_\Lambda) \Vert \leq \Vert u - u_\Lambda \Vert \leq \frac1{\alpha_*} \Vert r(u_\Lambda) \Vert \;, \end{equation} or, equivalently, \begin{equation}\label{eq:four.2.1bis} \frac1{\sqrt{\alpha^*}} \Vert r(u_\Lambda) \Vert \leq \tvert u - u_\Lambda \tvert \leq \frac1{\sqrt{\alpha_*}} \Vert r(u_\Lambda) \Vert \;. \end{equation} \subsection{Algebraic representations}\label{sec:algebraic_repres} Let us identify the solution $u = \sum_k \hat{u}_k \phi_k$ of Problem (\ref{eq:four.1}) with the vector $\mathbf{u}=(\hat{U}_k)=(c_k \hat{u}_k) \in \mathbb{C}^{\mathbb{Z}^d}$ of its $H^1_p$-normalized Fourier coefficients, where we set for convenience $c_k=\sqrt{1+|k|^2}$. Similarly, let us identify the right-hand side $f$ with the vector $\mathbf{f}=(\hat{F}_\ell)=(c_\ell^{-1}\hat{f}_\ell) \in \mathbb{C}^{\mathbb{Z}^d}$ of its $H^{-1}_p$-normalized Fourier coefficients. Finally, let us introduce the bi-infinite, Hermitian and positive-definite matrix \begin{equation}\label{eq:four100} \mathbf{A}=(a_{\ell,k}) \qquad \text{with} \qquad a_{\ell,k}=\frac1{c_\ell c_k} a(\phi_k,\phi_\ell) \;. \end{equation} Then, Problem (\ref{eq:four.1}) can be equivalently written as \begin{equation}\label{eq:four110} \mathbf{A} \mathbf{u} = \mathbf{f} \;. 
\end{equation} We observe that the orthogonality properties of the trigonometric basis imply that the matrix $\mathbf{A}$ is diagonal if and only if the coefficients $\nu$ and $\sigma$ are constant in $\Omega$. Next, consider the Galerkin problem (\ref{eq:four.2}) and let $\mathbf{u}_\Lambda \in \mathbb{C}^{|\Lambda|}$ be the vector collecting the coefficients of $u_\Lambda$ indexed in $\Lambda$; let $\mathbf{f}_\Lambda \in \mathbb{C}^{|\Lambda|}$ be the analogous restriction for the vector of the coefficients of $f$. Finally, denote by $\mathbf{R}_\Lambda$ the matrix that restricts a bi-infinite vector to the portion indexed in $\Lambda$, so that $\mathbf{E}_\Lambda=\mathbf{R}_\Lambda^H$ is the corresponding extension matrix. Then, setting \begin{equation}\label{eq:four120} \mathbf{A}_\Lambda = \mathbf{R}_\Lambda \mathbf{A} \mathbf{R}_\Lambda^H \;, \end{equation} Problem (\ref{eq:four.2}) can be equivalently written as \begin{equation}\label{eq:four130} \mathbf{A}_\Lambda \mathbf{u}_\Lambda = \mathbf{f}_\Lambda \;. \end{equation} \subsection{Properties of the stiffness matrix}\label{S:properties-A} It is useful to express the elements of $\mathbf{A}$ in terms of the Fourier coefficients of the operator coefficients $\nu$ and $\sigma$. Precisely, writing $\nu=\sum_k \hat{\nu}_k \phi_k$ and $\sigma=\sum_k \hat{\sigma}_k \phi_k$ and using the orthogonality of the Fourier basis, one easily gets \begin{equation}\label{eq:four140} a_{\ell,k}= \frac1{(2\pi)^{d/2}}\left( \frac{\ell \cdot k}{c_\ell c_k} \hat{\nu}_{\ell-k} + \frac1{c_\ell c_k} \hat{\sigma}_{\ell-k} \right) \;.
\end{equation} Note that the diagonal elements are uniformly bounded from below, \begin{equation}\label{eq:four150} a_{\ell,\ell} \geq \frac1{(2\pi)^{d/2}} \min({\hat{\nu}_0}, \hat{\sigma}_0) > 0 \;, \qquad \ell \in \mathbb{Z}^d \;, \end{equation} whereas all elements are bounded in modulus by the elements of a {{\it Toeplitz}} matrix, \begin{equation}\label{eq:four160} |a_{\ell,k}| \leq \frac1{(2\pi)^{d/2}} {\left( |\hat{\nu}_{\ell-k}|+ |\hat{\sigma}_{\ell-k}| \right)} \;, \qquad \ell, k \in \mathbb{Z}^d \;, \end{equation} which decay as $|\ell - k| \to \infty$ at a rate dictated by the smoothness of the operator coefficients. Indeed, if $\nu$ and $\sigma$ are sufficiently smooth, their Fourier coefficients decay at a suitable rate and this property is inherited by the off-diagonal elements of the matrix ${\bf A}$, via \eqref{eq:four160}. To be precise, if the coefficients $\nu$ and $\sigma$ have a finite order of regularity, then the rate of decay of their Fourier coefficients is algebraic, i.e. \begin{equation}\label{eq:ass-coeff-alg} |\hat{\nu}_k|, |\hat{\sigma}_k| \lesssim (1+\vert k \vert )^{-\eta} \qquad \forall k \in \mathbb{Z}^d \;, \end{equation} for some $\eta>0$. On the other hand, if the operator coefficients are real analytic in a neighborhood of $\Omega$, then the rate of decay of their Fourier coefficients is exponential, i.e. \begin{equation}\label{eq:ass-coeff-exp} |\hat{\nu}_k|, |\hat{\sigma}_k| \lesssim e^{-\eta\vert k \vert} \qquad \forall k \in \mathbb{Z}^d \;. \end{equation} Correspondingly, the matrix ${\bf A}$ belongs to one of the following classes. 
\begin{definition}[{regularity classes for $\bA$}]\label{def:class.matrix} {A matrix ${\bf A}$ is said to belong to} \begin{enumerate}[$\bullet$] \item the {algebraic} class ${\mathcal D}_a(\eta_L)$ if there exists a constant $c_L>0$ such that its elements satisfy \begin{equation} | a_{\ell,k} | \leq c_L (1+ \vert \ell - k \vert )^{-\eta_L}\; \qquad \ell, k \in \mathbb{Z}^d \; ; \end{equation} \item the {exponential} class ${\mathcal D}_e(\eta_L)$ if there exists a constant $c_L>0$ such that its elements satisfy \begin{equation}\label{eq:four170} | a_{\ell,k} | \leq c_L e^{-\eta_L\vert \ell - k \vert }\; \qquad \ell, k \in \mathbb{Z}^d \;. \end{equation} \end{enumerate} \end{definition} The following properties hold. \begin{property}[{continuity of $\bA$}]\label{prop:bounded} If {either} ${\bf A}\in{\mathcal D}_a(\eta_L)$, with $\eta_L>d$, or ${\bf A}\in{\mathcal D}_e(\eta_L)$, then ${\bf A}$ defines a bounded operator on {$\ell^2(\mathbb{Z}^d)$}. \end{property} \begin{proof} See e.g. \cite{Jaffard:1990, Dahlke-Fornasier-Groechenig:2010}. \end{proof} \begin{property}[{inverse of $\bA$: algebraic case}] \label{prop:inverse.alg} If ${\bf A}\in{\mathcal D}_a(\eta_L)$, with $\eta_L>d$ and ${\bf A}$ is invertible in $\ell^2(\mathbb{Z}^d)$, then ${\bf A}^{-1}\in{\mathcal D}_a(\eta_L)$. \end{property} \begin{proof} See e.g. \cite{Jaffard:1990}. 
\end{proof} \begin{property}[{inverse of $\bA$: exponential case}] \label{prop:inverse.matrix-estimate} If $\mathbf{A} \in {\mathcal D}_e(\eta_L)$ and there exists a constant $c_L$ satisfying \eqref{eq:four170} such that \begin{equation}\label{restriction-cL} c_L < \frac12({\rm e}^{\eta_L} -1) \min_\ell a_{\ell,\ell}\;, \end{equation} then ${\bf A}$ is invertible in $\ell^2(\mathbb{Z}^d)$ and ${\bf A}^{-1}\in{\mathcal D}_e(\bar{\eta}_L)$ where $\bar{\eta}_L \in (0,\eta_L]$ is such that $\bar{z}={\rm e}^{-\bar{\eta}_L}$ is the unique zero in the interval $(0,1)$ of the polynomial $$ z^2- \frac{{\rm e}^{2\eta_L}+2c_L+1}{{\rm e}^{\eta_L}(c_L+1)}z+1 \;. $$ \end{property} \begin{proof} {We follow the suggestion by Bini \cite{Bini:xx}, and thus exploit the one-to-one correspondence between Toeplitz matrices and formal Laurent series (see e.g. \cite{Toeplitz:book}):} $$ f(z) = \sum_{k=-\infty}^\infty a_k z^k \longleftrightarrow \bT_f=(t_{i,j}),\quad t_{i,j}=a_{i-j}.$$ We refer to the function $f(z)$ as the symbol associated to the Toeplitz matrix $\bT_f$. {We recall now a few relations between $f(z)$ and $\bT_f$. If} $f(z)$ is analytic on ${\mathcal A}_\alpha=\{z \in \mathbb{C}: \e^{-\alpha}<\vert z \vert < \e^\alpha\}$ with $\alpha>0$, then there holds $f(z)=\sum_{k=-\infty}^{+\infty} a_k z^k$, where the coefficients $a_k$ have exponential decay with rate $\e^{-\alpha}$ {in the sense that} for every $\e^{-\alpha}<\rho<1$ there exists a constant $\gamma >0$ such that $\vert a_k \vert \leq \gamma \rho^{\vert k \vert}$. As a consequence, the symbol $f(z)$ of the Toeplitz matrix $\bT_f$ is analytic on $\mathcal{A}_{\alpha}$ for some $\alpha>0$ if and only if the elements of $\bT_f$ decay exponentially with rate $\e^{-\alpha}$.
{Moreover, it is known} that if $f(z)$ is analytic on $\mathcal{A}_\alpha$ and it is non-zero on $\mathcal{A}_\beta\subset\mathcal{A}_\alpha$, then the function $g(z)=1/f(z)$ is well defined and analytic on $\mathcal{A}_\beta$, the matrix $\bT_g$ is the inverse of $\bT_f$ and the elements of $\bT_g$ decay exponentially with rate $\e^{-\beta}$. {We next} introduce the analytic functions in $\mathcal{A}_\alpha$ \[ h(z)=\sum_{k=1}^{\infty} \e^{-\alpha k}(z^k+z^{-k}) { ~= \frac{z}{\e^\alpha-z}+\frac{z^{-1}}{\e^{\alpha}-z^{-1}}}, \qquad f_c(z)=1-ch(z), \] with $c>0$. {For $|z|=1$ we deduce $|h(z)| \le 2\sum_{k=1}^\infty e^{-\alpha k} = 2/(e^\alpha-1)$, whence $c|h(z)| <1$ provided that $c<\frac{1}{2}(\e^\alpha-1)$; moreover $\|\bT_h\|\le\|\bT_h\|_\infty=2/(e^\alpha-1)$, which is indeed a particular instance of Schur Lemma for symmetric matrices. For this range of $c$'s,} $f_c(z)\not= 0$ for $\vert z \vert =1$ and by continuity there exists $\mathcal{A}_\beta\subset \mathcal{A}_\alpha$ on which $f_c(z)$ is non-zero. This implies that $g_c(z):=1/f_c(z)$ is analytic on $\mathcal{A}_\beta$ and the elements of the associated Toeplitz matrix $\bT_{g_c}$ decay exponentially with rate $\e^{-\beta}$. {The singularities of $g_c$ correspond to zeros of $f_c$, which are in turn the roots $\zeta_1,\zeta_2$ of the polynomial \[ z^2-\frac{\e^{2\alpha}+2c+1}{\e^\alpha(c+1)}z+1. \] These roots are real provided $c<\frac{1}{2}(\e^\alpha-1)$, in which case $e^{-\beta}=\zeta_1=\zeta_2^{-1}<1$.} Let $\bA\in\mathcal{D}_e(\alpha)$, i.e. there exists a constant $c$ such that $|a_{\ell,k} | \leq c e^{-\alpha \vert \ell - k \vert }$ for $\ell, k \in \mathbb{Z}^d$. {By rescaling of the rows of $\bA$, it is not restrictive to assume that the diagonal elements of} $\bA$ are equal to $1$. Then, it is possible to write $\bA=\bI-\bS$ with $\vert \bS\vert \leq c\bT_h$, the inequality {being meant element by element, and $\|\bS\|<1$.
Since $g_c(z)=1/(1-ch(z))=\sum_{k=0}^{\infty} c^k h(z)^k$ is well defined and analytic on $\mathcal{A}_\beta\subset\mathcal{A}_\alpha$, it follows that} \[ \left\vert \sum_{k=0}^{\infty} \bS^k\right\vert \leq \sum_{k=0}^{\infty}\vert \bS\vert^k \leq \sum_{k=0}^{\infty} c^k \bT_h^k = \bT_{g_c}. \] Hence, the elements of the matrix $\bT_{g_c}$ decay exponentially with rate $\e^{-\beta}$. {Property $\|\bS\|<1$ yields $\bA^{-1}=(\bI-\bS)^{-1}=\sum_{k=0}^{\infty} \bS^k$ and $ \vert\bA^{-1}\vert \leq \bT_{g_c}$, whence the coefficients of $\bA^{-1}$, being bounded by those of $\bT_{g_c}$, decay exponentially with rate $\e^{-\beta}$, i.e. $\bA^{-1}\in\mathcal{D}_e(\beta)$ for some $\beta<\alpha$. This gives \eqref{restriction-cL} once the row scaling of $\bA$ is taken into account.} \end{proof} { \begin{example}[sharpness of \eqref{restriction-cL}]\label{sharp-cL} \rm The following example illustrates that \eqref{restriction-cL} is sharp. Let $\bA$ be given by \[ a_{ij} = - 2^{-1-|i-j|} \quad i\ne j, \qquad a_{ii} = 1, \] which is singular because the sum of the coefficients in every row vanishes. This $\bA$ corresponds to $e^{\eta_L}=2$, $c_L=\frac12$ and $\frac12 (e^{\eta_L}-1)=\frac12$, which violates \eqref{restriction-cL}. \end{example} } For any integer $J \geq 0$, let $\mathbf{A}_J$ denote {the following symmetric truncation of the matrix $\mathbf{A}$} \begin{equation}\label{eq:trunc-matr} (\mathbf{A}_J)_{\ell,k}= \begin{cases} a_{\ell,k} & \text{if } |\ell-k| \leq J \;, \\ 0 & \text{elsewhere.} \end{cases} \end{equation} Then, we have the following well-known result, whose proof is reported for completeness. \begin{property}[truncation]\label{prop:matrix-estimate} {The truncated matrix $\bA_J$ has a number of non-vanishing entries in each row bounded by $\omega_d J^d$, where $\omega_d$ is the measure of the Euclidean unit ball in $\mathbb{R}^d$.
Moreover, under the assumption of Property \ref{prop:bounded},} there exists a constant $C_{\mathbf{A}} $ such that \[ \Vert \mathbf{A}-{\mathbf{A}}_J \Vert \leq \psi_{\mathbf{A}}(J,\eta_L):=C_{\mathbf{A}} \begin{cases} {(J+1)}^{-(\eta_L-d)} & \text{if } \mathbf{A} \in {\mathcal D}_a(\eta_L)~\text{(algebraic case)} \;, \\ {(J+1)^{d-1}}{\rm e}^{-\eta_L J} & \text{if}~\mathbf{A} \in {\mathcal D}_e(\eta_L)~\text{(exponential case)}\ , \end{cases} \] {for all $J\ge0$.} Consequently, under the assumptions of Property \ref{prop:inverse.alg} or \ref{prop:inverse.matrix-estimate}, one has \begin{equation}\label{eq:trunc-invmatr-err} \Vert \mathbf{A}^{-1}-(\mathbf{A}^{-1})_J \Vert \leq \psi_{\mathbf{A}^{-1}} (J,\bar{\eta}_L) \end{equation} where we {let $\bar{\eta}_L=\eta_L$ in the algebraic case and $\bar\eta_L$ be defined in Property \ref{prop:inverse.matrix-estimate} for the exponential case.} \end{property} \begin{proof} We use the {Schur Lemma for symmetric matrices}, $\Vert \mathbf{B} \Vert \leq \Vert \mathbf{B} \Vert_\infty =\sup_\ell \sum_k |b_{\ell,k}|$ for $\mathbf{B}=\mathbf{A}-\mathbf{A}_J$. Thus, in the algebraic case \begin{eqnarray*} \sup_\ell \sum_{k:|\ell-k|>J} |a_{\ell,k}| &\leq& c_L \sup_\ell \sum_{k:|\ell-k|>J}\frac1{(1+|\ell-k|)^{\eta_L}} \\ &\lsim & \sup_\ell \sum_{q=J+1}^\infty \sum_{\ {k:|\ell-k|=q}}\frac1{(1+q)^{\eta_L}} \lsim \sup_\ell \sum_{q=J+1}^\infty \frac{q^{d-1}}{(1+q)^{\eta_L}} \lsim {(J+1)}^{d-\eta_L} \;. \end{eqnarray*} A similar argument yields the result in the exponential case. \end{proof} \subsection{An equivalent formulation of the Galerkin problem} For future reference, hereafter we rewrite the Galerkin problem \eqref{eq:four130} in an equivalent (infinite-dimensional) way. Let $$\mathbf{P}_\Lambda: \ell^2(\mathbb{Z}^d) \to \ell^2(\mathbb{Z}^d)$$ be the projector operator defined as \[ (\mathbf{P}_\Lambda \mathbf{v})_\lambda= \begin{cases} v_\lambda & \text{\rm if } \lambda\in\Lambda \;, \\ 0 & \text{\rm if } \lambda\notin\Lambda \;.
\end{cases} \] Note that $\mathbf{P}_\Lambda$ can be represented as a diagonal bi-infinite matrix whose diagonal elements are $1$ for indexes belonging to $\Lambda$, zero otherwise. Let us set $\mathbf{Q}_\Lambda=\mathbf{I}-\mathbf{P}_\Lambda$ and introduce the bi-infinite matrix $\widehat{\mathbf{A}}_\Lambda:= \mathbf{P}_\Lambda \mathbf{A} \mathbf{P}_\Lambda + \mathbf{Q}_\Lambda$ which is equal to $\mathbf{A}_\Lambda$ for indexes in $\Lambda$ and to the identity matrix, otherwise. The definitions of the projectors $\mathbf{P}_\Lambda$ and $\mathbf{Q}_\Lambda$ yield the following result. \begin{property}[{invertibility of $\widehat\bA$}]\label{prop:inf-matrix} If $\mathbf{A}$ is invertible with {either} $\mathbf{A}\in\mathcal{D}_a(\eta_L)$ or $\mathbf{A}\in\mathcal{D}_e(\eta_L)$, then the same holds for $\widehat{\mathbf{A}}_\Lambda$. \end{property} \noindent Now, let us consider the following extended Galerkin problem: find $\hat{\mathbf{u}}\in\ell^2(\mathbb{Z}^d)$ such that \begin{equation}\label{eq:inf-pb-galerkin} \widehat{\mathbf{A}}_\Lambda \hat{\mathbf{u}} = \mathbf{P}_\Lambda \mathbf{f}\ . \end{equation} Let ${\mathbf{E}}_\Lambda: \mathbb{C}^{\vert \Lambda\vert} \to \ell^2(\mathbb{Z}^d)$ be the extension operator defined in Sect. \ref{sec:algebraic_repres} and let $\mathbf{u}_\Lambda\in \mathbb{C}^{\vert \Lambda\vert}$ be the Galerkin solution to \eqref{eq:four130}; then, it is easy to check that $\hat{\mathbf{u}}={\mathbf{E}}_\Lambda \mathbf{u}_\Lambda$. In the following, with an abuse of notation, the solution of \eqref{eq:inf-pb-galerkin} will be denoted by $\mathbf{u}_\Lambda$. We will refer to it as the (extended) Galerkin solution, meaning the infinite-dimensional representative of the finite-dimensional Galerkin solution. In case of possible confusion, we will make clear which version (infinite-dimensional or finite-dimensional) has to be considered.
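In finite dimensions the relation $\hat{\mathbf{u}}={\mathbf{E}}_\Lambda \mathbf{u}_\Lambda$ can be verified directly. The sketch below (our own illustration; a random Hermitian positive-definite matrix stands in for the bi-infinite $\mathbf{A}$) builds $\widehat{\mathbf{A}}_\Lambda=\mathbf{P}_\Lambda\mathbf{A}\mathbf{P}_\Lambda+\mathbf{Q}_\Lambda$ and checks that the extended solution agrees with the restricted Galerkin solution on $\Lambda$ and vanishes elsewhere:

```python
import numpy as np

# Finite-dimensional sketch of the extended Galerkin problem (eq:inf-pb-galerkin):
# hat{A}_Lambda = P A P + Q with Q = I - P, and hat{u} = E_Lambda u_Lambda.

rng = np.random.default_rng(1)
N = 10
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = B.conj().T @ B + N * np.eye(N)            # Hermitian, positive definite
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

Lam = np.array([0, 2, 3, 7])                  # index set Lambda
P = np.zeros((N, N)); P[Lam, Lam] = 1.0       # projector P_Lambda
Q = np.eye(N) - P

A_hat = P @ A @ P + Q                         # extended matrix hat{A}_Lambda
u_hat = np.linalg.solve(A_hat, P @ f)         # extended Galerkin solution

# Plain Galerkin problem (eq:four130) on the restricted system.
A_Lam = A[np.ix_(Lam, Lam)]
u_Lam = np.linalg.solve(A_Lam, f[Lam])

assert np.allclose(u_hat[Lam], u_Lam)         # hat{u} = E_Lambda u_Lambda on Lambda
assert np.allclose(np.delete(u_hat, Lam), 0)  # and it vanishes outside Lambda
```

The rows of $\widehat{\mathbf{A}}_\Lambda$ outside $\Lambda$ reduce to identity rows with zero right-hand side, which forces the corresponding components of $\hat{\mathbf{u}}$ to vanish.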
\section{Adaptive algorithms with contraction properties}\label{sec:plain-adapt-alg} Our first algorithm will be an {\sl ideal one}; it will serve as a reference to illustrate in the simplest situation the contraction property which guarantees the convergence of the algorithm, and it will be subsequently modified to get more efficient versions. The ideal algorithm uses as error estimator the ideal one, i.e., the norm of the residual in $H^{-1}_p(\Omega)$; we thus set, for any $v \in H^1_p(\Omega)$, \begin{equation}\label{eq:four.2.2} \eta^2(v)=\Vert r(v) \Vert^2 = \sum_{k \in \mathbb{Z}^d} |\hat{R}_k(v)|^2\;, \end{equation} so that (\ref{eq:four.2.1}) can be rephrased as \begin{equation}\label{eq:four.2.3} \frac1{\alpha^*} \eta(u_\Lambda) \leq \Vert u - u_\Lambda \Vert \leq \frac1{\alpha_*} \eta(u_\Lambda) \; ; \end{equation} {recall that $\hat R_k(v) = (1 + |k|^2)^{-1/2}\hat{r}_k(v)$ according to \eqref{eq:four02}.} Obviously, this estimator is hardly computable in practice; in Sect. \ref{subsec:comput} we will introduce a feasible version, but for the moment we go through the ideal situation. Given any subset $\Lambda \subseteq \mathbb{Z}^d$, we also define the quantity $$ \eta^2(v;\Lambda) = \Vert P_\Lambda r(v) \Vert^2 = \sum_{k \in \Lambda} |\hat{R}_k(v)|^2\;, $$ so that $\eta(v)=\eta(v;\mathbb{Z}^d)$. \subsection{ADFOUR: an ideal algorithm} \label{sec:defADFOUR} We now introduce the following procedures, which will enter the definition of all our adaptive algorithms. \begin{itemize} \item $u_\Lambda := {\bf GAL}(\Lambda)$ \\ Given a finite subset $\Lambda \subset \mathbb{Z}^d$, the output $u_\Lambda \in V_\Lambda$ is the solution of the Galerkin problem (\ref{eq:four.2}) relative to $\Lambda$. \item $r := {\bf RES}(v_\Lambda)$ \\ Given a function $v_\Lambda \in V_\Lambda$ for some finite index set $\Lambda$, the output $r$ is the residual $r(v_\Lambda)=f-Lv_\Lambda$.
\item $\Lambda^* := \text{\bf D\"ORFLER}(r, \theta)$\\ Given $\theta \in (0,1)$ and an element $r \in H^{-1}_p(\Omega)$, the output $\Lambda^* \subset \mathbb{Z}^d$ is a finite set such that the inequality \begin{equation}\label{eq:four.2.5.5} \Vert P_{\Lambda^*} r \Vert \geq \theta \Vert r \Vert \end{equation} is satisfied. \end{itemize} Note that the latter inequality is equivalent to \begin{equation}\label{eq:four.2.5.5bis} \Vert r-P_{\Lambda^*} r \Vert \leq \sqrt{1-\theta^2} \Vert r \Vert \;. \end{equation} If $r=r(u_\Lambda)$ is the residual of a Galerkin solution $u_\Lambda \in V_\Lambda$, then by (\ref{eq:four.2.1ter}) we can trivially assume that $\Lambda^*$ is contained in $\Lambda^c := \mathbb{Z}^d \setminus \Lambda$. For such a residual, inequality (\ref{eq:four.2.5.5}) can then be stated as \begin{equation}\label{eq:four.2.5.5ter} \eta(u_\Lambda;\Lambda^*) \geq \theta \eta(u_\Lambda) \;, \end{equation} a condition termed {\sl D\"orfler marking} in the finite element literature, or {\sl bulk chasing} in the wavelet literature. {Writing} $\hat{R}_k = \hat{R}_k(u_{\Lambda})$, the condition {\eqref{eq:four.2.5.5ter}} can be equivalently stated as \begin{equation}\label{eq:four.2.4.bis} \sum_{k \in \Lambda^*} |\hat{R}_k|^2 \geq \theta^2 \sum_{k \not \in \Lambda} |\hat{R}_k|^2 \;. \end{equation} Also note that a set $\Lambda^*$ of minimal cardinality can be immediately determined if the coefficients $\hat{R}_k$ are rearranged in non-increasing order of modulus; however, the subsequent convergence result does not require the property of minimal cardinality for the sets of active coefficients. In the sequel, we will invariably make the following assumption: \begin{assumption}[{D\"orfler marking}]\label{ass:minimality} The procedure {\bf D\"ORFLER} selects an index set $\Lambda^*$ of minimal cardinality among all those satisfying condition (\ref{eq:four.2.5.5}).
\end{assumption} Given two parameters $\theta \in (0,1)$ and $tol \in [0,1)$, we are ready to define our ideal adaptive algorithm. {\bf Algorithm ADFOUR}($\theta, \, tol$) \begin{itemize} \item[\ ] Set $r_0:=f$, $\Lambda_0:=\emptyset$, $n=-1$ \item[\ ] do \begin{itemize} \item[\ ] $n \leftarrow n+1$ \item[\ ] $\partial\Lambda_{n}:= \text{\bf D\"ORFLER}(r_n, \theta)$ \item[\ ] $\Lambda_{n+1}:=\Lambda_{n} \cup \partial\Lambda_{n}$ \item[\ ] $u_{n+1}:= {\bf GAL}(\Lambda_{n+1})$ \item[\ ] $r_{n+1}:= {\bf RES}(u_{n+1})$ \end{itemize} \item[\ ] while $\Vert r_{n+1} \Vert > tol $ \end{itemize} The following result states the convergence of this algorithm, with a guaranteed error reduction rate. \begin{theorem}[{convergence of {\bf ADFOUR}}]\label{teo:four1} Let us set \begin{equation}\label{eq:def_rhotheta} \rho=\rho(\theta)= \sqrt{1 - \frac{\alpha_*}{\alpha^*}\theta^2} \in (0,1) \;. \end{equation} Let $\{\Lambda_n, \, u_n \}_{n\geq 0}$ be the sequence generated by the adaptive algorithm {\bf ADFOUR}. Then, the following bound holds for any $n$: $$ \tvert u-u_{n+1} \tvert \leq \rho \tvert u-u_n \tvert \;. $$ Thus, for any $tol>0$ the algorithm terminates in a finite number of iterations, whereas for $tol=0$ the sequence $u_n$ converges to $u$ in $H^1_p(\Omega)$ as $n \to \infty$. \end{theorem} \begin{proof} For convenience, we use the notation $e_n:=\tvert u-u_n \tvert$ and $d_n:=\tvert u_{n+1} - u_n \tvert$. As $V_{\Lambda_{n}} \subset V_{\Lambda_{n+1}}$, the following orthogonality property holds \begin{equation}\label{pytagora} e_{n+1}^2=e_n^2-d_n^2. \end{equation} On the other hand, for any $w \in H^{1}_p(\Omega)$, one has {in light of \eqref{eq:four.1bis}} \begin{eqnarray*} \Vert Lw \Vert = \sup_{v \in H^{1}_p(\Omega)} \frac{\langle Lw, v \rangle}{\Vert v \Vert} = \sup_{v \in H^{1}_p(\Omega)} \frac{a(w,v)}{\Vert v \Vert} \leq \tvert w \tvert \sup_{v \in H^{1}_p(\Omega)} \frac{\tvert v \tvert }{\Vert v \Vert} \leq \sqrt{\alpha^*} \tvert w \tvert \;. 
\end{eqnarray*} Thus, {using \eqref{eq:four.2.5.5},} \begin{eqnarray*} d_n^2 &\geq& \frac1{\alpha^*} \Vert L(u_{n+1}-u_n) \Vert^2 =\frac1{\alpha^*} \Vert r_{n+1}-r_n \Vert^2 \\ &\geq& \frac1{\alpha^*} \Vert P_{\Lambda_{n+1}} (r_{n+1}-r_n) \Vert^2 = \frac1{\alpha^*} \Vert P_{\Lambda_{n+1}} r_n \Vert^2 \geq \frac{\theta^2}{\alpha^*} \Vert r_n \Vert^2 \;. \end{eqnarray*} On the other hand, the {rightmost} inequality in (\ref{eq:four.2.1bis}) states that $\Vert r_n \Vert^2 \geq \alpha_* e_n^2$, whence the result. \end{proof} \subsection{{\bf F-ADFOUR:} A feasible version of ADFOUR}\label{subsec:comput} The error estimator $\eta(u_\Lambda)$ based on (\ref{eq:four.2.2}) is not computable in practice, since the residual $r(u_\Lambda)$ contains infinitely many coefficients. We {thus} introduce a new estimator, defined from an approximation of such residual with finite Fourier expansion (i.e., a trigonometric polynomial). To this end, let $\tilde{\nu}$, $\tilde{\sigma}$ and $\tilde{f}$ be suitable trigonometric polynomials, which approximate $\nu$, $\sigma$ and $f$, respectively, to a given accuracy. Then, the quantity \begin{equation}\label{eq:four.2.5} \tilde{r}(u_\Lambda)= \tilde{f} - \tilde{L}u_\Lambda = \tilde{f} + \nabla \cdot (\tilde{\nu} \nabla u_\Lambda) - \tilde{\sigma} u_\Lambda \end{equation} belongs to $V_{\tilde{\Lambda}}$ for some finite subset $\tilde{\Lambda} \subset \mathbb{Z}^d$, i.e., it has the finite (thus, computable) expansion $$ \tilde{r}(u_\Lambda)= \sum_{k \in \tilde{\Lambda}} \hat{\tilde{r}}_k(u_\Lambda) \phi_k\;. $$ The choice of the approximate coefficients has to be done in order to fulfil the following condition: for a fixed parameter $\gamma\in (0,\theta)$, we require that \begin{equation}\label{eq:four.2.6} \Vert r(u_\Lambda) - \tilde{r}(u_\Lambda) \Vert \leq \gamma \Vert \tilde{r}(u_\Lambda) \Vert \;. \end{equation} Satisfying such a condition is possible, provided we have full access to the data. 
Indeed, on the one hand, the left-hand side tends to $0$ as the approximation of the coefficients gets better and better, since (we keep here the full norm indication for better clarity) \begin{eqnarray*} \Vert r(u_\Lambda) - \tilde{r}(u_\Lambda) \Vert_{H^{-1}_p(\Omega)} &\leq& \Vert f - \tilde{f} \Vert_{H^{-1}_p(\Omega)} + \Vert \nu - \tilde{\nu} \Vert_{L^\infty(\Omega)} \Vert \nabla u_\Lambda \Vert_{L^2(\Omega)^d} + \Vert \sigma - \tilde{\sigma} \Vert_{L^\infty(\Omega)} \Vert u_\Lambda \Vert_{L^2(\Omega)} \\ &\leq& \Vert f - \tilde{f} \Vert_{H^{-1}_p(\Omega)} +( \Vert \nu - \tilde{\nu} \Vert_{L^\infty(\Omega)} +\Vert \sigma - \tilde{\sigma} \Vert_{L^\infty(\Omega)}) \frac1{\alpha_*}\Vert f \Vert_{H^{-1}_p(\Omega)} \;, \end{eqnarray*} where we have used the bound on the solution of the Galerkin problem (\ref{eq:four.2}) in terms of the data. On the other hand, if $u_\Lambda \not = u$, then $ r(u_\Lambda) \not = 0$, {whence} the right-hand side of (\ref{eq:four.2.6}) converges to a non-zero value {as $\tilde\Lambda$ increases}. With this remark in mind, we define a new error estimator by setting \begin{equation}\label{eq:four.2.7} \tilde{\eta}^2(u_\Lambda)=\Vert \tilde{r}(u_\Lambda) \Vert^2 = \sum_{k \in \tilde{\Lambda}} |\hat{\tilde{R}}_k(u_\Lambda)|^2\;, \end{equation} which, {in view of \eqref{eq:four.2.6}, immediately yields} \begin{equation}\label{eq:four.2.3bis} \frac{1-\gamma}{\alpha^*} \tilde{\eta}(u_\Lambda) \leq \Vert u - u_\Lambda \Vert \leq \frac{1+\gamma}{\alpha_*} \tilde{\eta}(u_\Lambda) \;. \end{equation} \begin{lemma}[{feasible D\"orfler marking}]\label{lemma:four.1} Let $\Lambda^*$ be any finite index set such that $$ \tilde{\eta}(u_\Lambda;\Lambda^*) \geq \theta \tilde{\eta}(u_\Lambda) \;. $$ Then, \begin{equation}\label{eq:four.2.300} {\eta}(u_\Lambda;\Lambda^*) \geq \tilde{\theta} {\eta}(u_\Lambda)\;, \qquad \text{with } \ \ \tilde{\theta} = \frac{\theta-\gamma}{1+\gamma} \in (0, \theta)\;.
\end{equation} \end{lemma} \begin{proof} One has \begin{eqnarray*} \Vert P_{\Lambda^*} {r}(u_\Lambda) \Vert &\geq& \Vert P_{\Lambda^*} \tilde{r}(u_\Lambda) \Vert - \Vert P_{\Lambda^*} \left( {r}(u_\Lambda) - \tilde{r}(u_\Lambda) \right) \Vert \\ &\geq& \theta \Vert \tilde{r}(u_\Lambda) \Vert - \Vert {r}(u_\Lambda) - \tilde{r}(u_\Lambda) \Vert \\ &\geq& (\theta-\gamma) \Vert \tilde{r}(u_\Lambda) \Vert \geq \frac{\theta-\gamma}{1+\gamma} \Vert {r}(u_\Lambda) \Vert \; , \end{eqnarray*} {which is the desired \eqref{eq:four.2.300}.} \end{proof} The previous result suggests introducing the following feasible variant of the procedure {\bf RES}: \begin{itemize} \item $r := \text{\bf F-RES}(v_\Lambda, \gamma)$ \\ Given $\gamma \in (0,\theta)$ and a function $v_\Lambda \in V_\Lambda$ for some finite index set $\Lambda$, the output $r$ is an approximate residual $\tilde{r}(v_\Lambda)=\tilde{f}+\nabla \cdot(\tilde{\nu}\nabla v_\Lambda)-\tilde{\sigma}v_\Lambda$, defined on a finite set $\tilde{\Lambda}$ and satisfying $$ \Vert r(v_\Lambda) - \tilde{r}(v_\Lambda) \Vert \leq \gamma \Vert \tilde{r}(v_\Lambda) \Vert \;. $$ \end{itemize} \begin{theorem}[{contraction property of {\bf F-ADFOUR}}]\label{teo:four2} Consider the feasible variant {\bf F-ADFOUR} of the adaptive algorithm {\bf ADFOUR}, where the step $r_{n+1}:= {\bf RES}(u_{n+1})$ is replaced by the step $r_{n+1}:= \text{\bf F-RES}(u_{n+1},\gamma)$ for some $\gamma \in (0,\theta)$. Then, the same conclusions of Theorem \ref{teo:four1} hold true for this variant, with the contraction factor $\rho$ replaced by $\rho=\rho(\tilde{\theta})$, where $\tilde{\theta}$ is defined in (\ref{eq:four.2.300}). \endproof \end{theorem} In the rest of the paper, we will develop our analysis considering Algorithm {\bf ADFOUR} rather than {\bf F-ADFOUR}; this is just for the sake of simplicity, since all the conclusions extend in a straightforward manner to the latter version as well.
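For a finite coefficient vector, the minimal-cardinality selection of Assumption \ref{ass:minimality} amounts to sorting the moduli $|\hat R_k|$ and taking the shortest prefix whose energy reaches $\theta^2 \Vert r \Vert^2$. A minimal sketch (the function name is ours):

```python
import numpy as np

# Minimal-cardinality Doerfler marking on a finite residual vector:
# sort |R_k| non-increasingly and keep the shortest prefix Lambda* with
# ||P_{Lambda*} r||^2 >= theta^2 ||r||^2   (cf. eq:four.2.5.5).

def doerfler(R, theta):
    order = np.argsort(np.abs(R))[::-1]      # indices by decreasing modulus
    energy = np.cumsum(np.abs(R[order])**2)  # partial sums of |R_k|^2
    m = int(np.searchsorted(energy, theta**2 * energy[-1])) + 1
    return order[:m]

R = np.array([0.1, 3.0, 0.5, 2.0, 0.2])
marked = doerfler(R, theta=0.9)
assert np.sum(np.abs(R[marked])**2) >= 0.9**2 * np.sum(np.abs(R)**2)
assert set(marked) == {1, 3}                 # the two largest coefficients suffice
```

Since the partial sums are increasing, the binary search locates the first prefix satisfying the bulk criterion, which is exactly the minimal-cardinality set.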
\subsection{{\bf A-ADFOUR:} An aggressive version of ADFOUR}\label{subsec:aggress} Theorem \ref{teo:four1} indicates that even if one chooses $\theta$ very close to $1$, the predicted error reduction rate $\rho=\rho(\theta)$ is always bounded from below by the quantity $\sqrt{1 - \frac{\alpha_*}{\alpha^*}}$. Such a result looks overly pessimistic, particularly in the case of smooth (analytic) solutions, since a Fourier method allows for an exponential decay of the error as the number of (properly selected) active degrees of freedom is increased. Fig.~\ref{fig:theta-var} displays the influence of the D\"orfler parameter on the decay rate and on the number of solves: choosing $\theta$ closer to $1$ does not significantly affect the rate of decay of the error versus the number of activated degrees of freedom, but it significantly reduces the number of iterations. This in turn reduces the computational cost measured in terms of Galerkin solves. \begin{figure} \caption{Residual norm vs number of degrees of freedom activated by {\bf ADFOUR}}\label{fig:theta-var} \end{figure} Motivated by this observation, hereafter we consider a variant of Algorithm {\bf ADFOUR}, which -- under the assumptions of Property \ref{prop:inverse.alg} or \ref{prop:inverse.matrix-estimate} -- guarantees an arbitrarily large error reduction per iteration, provided the set of the new degrees of freedom detected by {\bf D\"ORFLER} is suitably enriched. At the $n$-th iteration, let us define the set $\Lambda_{n+1}:=\Lambda_{n} \cup \partial\Lambda_{n}$ by setting \begin{equation}\label{eq:aggr1} \begin{split} \widetilde{\partial\Lambda}_{n}:=& \text{\bf D\"ORFLER}(r_n, \theta)\\ \partial\Lambda_{n}:=& \text{\bf ENRICH}(\widetilde{\partial\Lambda}_{n},J) \;, \end{split} \end{equation} where the latter procedure and the value of the integer $J$ will be defined later on.
We recall that the set $\widetilde{\partial\Lambda}_n$ is such that $g_n= P_{\widetilde{\partial\Lambda}_n} r_n$ satisfies $$ \Vert r_n- g_n \Vert \leq \sqrt{1-\theta^2} \Vert r_n \Vert $$ (see (\ref{eq:four.2.5.5bis})). Let $w_n \in V$ be the solution of $L w_n = g_n$, which in general will have infinitely many components, and let us split it as $$ w_n= P_{\Lambda_{n+1}} w_n + P_{\Lambda_{n+1}^c} w_n =: y_n + z_n \in V_{\Lambda_{n+1}} \oplus V_{\Lambda_{n+1}^c} \;. $$ Then, by the minimality property of the Galerkin solution in the energy norm and by (\ref{eq:four.1bis}) and (\ref{eq:four.2.1bis}), one has \begin{eqnarray*} \tvert u - u_{n+1} \tvert &\leq& \tvert u - (u_{n}+y_{n}) \tvert \leq \tvert u- u_n - w_{n} + z_{n} \tvert \\ &\leq& \frac1{\sqrt{\alpha_*}} \Vert L(u- u_n - w_{n}) \Vert + \sqrt{\alpha^*}\Vert z_{n} \Vert = \frac1{\sqrt{\alpha_*}} \Vert r_n - g_{n} \Vert + \sqrt{\alpha^*} \Vert z_{n} \Vert \;. \end{eqnarray*} Thus, $$ \tvert u - u_{n+1} \tvert \leq \frac{1}{\sqrt{\alpha_*}}\sqrt{(1-\theta^2)} \, \Vert r_n \Vert + \sqrt{\alpha^*} \Vert z_{n}\Vert \;. $$ Now we can write $z_n= \big( P_{\Lambda_{n+1}^c} L^{-1} P_{\widetilde{\partial\Lambda}_n} \big) r_n $; hence, if $\Lambda_{n+1}$ is defined in such a way that $$ k \in \Lambda_{n+1}^c \quad \text{and} \quad \ell \in \widetilde{\partial\Lambda}_n\qquad \Rightarrow \qquad |k - \ell | > J \;, $$ then we have $$ \Vert P_{\Lambda_{n+1}^c} L^{-1} P_{\widetilde{\partial\Lambda}_n} \Vert \leq \Vert \mathbf{A}^{-1}-(\mathbf{A}^{-1})_J \Vert \leq \psi_{\mathbf{A}^{-1}} (J,\bar{\eta}_L) \;, $$ where we have used (\ref{eq:trunc-invmatr-err}). 
Now, $J>0$ can be chosen to satisfy \begin{equation}\label{eq:aggr2} \psi_{\mathbf{A}^{-1}} (J,\bar{\eta}_L) \leq \sqrt{\frac{1-\theta^2}{\alpha_* \alpha^*}} \;, \end{equation} in such a way that \begin{equation}\label{eq:aggr_error_reduct} \tvert u - u_{n+1} \tvert \leq \frac1{\sqrt{\alpha_*}} \sqrt{1-\theta^2} \, \Vert r_n \Vert \leq \left(\frac{\alpha^*}{\alpha_*}\right)^{1/2} \!\!\! \sqrt{1-\theta^2} \, \tvert u - u_{n} \tvert \;. \end{equation} Note that, as desired, the new error reduction rate \begin{equation}\label{eq:aggr3} \bar{\rho}=\left(\frac{\alpha^*}{\alpha_*}\right)^{1/2}\! \! \sqrt{1-\theta^2} \end{equation} can be made arbitrarily small by choosing $\theta$ arbitrarily close to $1$. The procedure {\bf ENRICH} is thus defined as follows: \begin{itemize} \item $\Lambda^* := \text{\bf ENRICH}(\Lambda,J)$ \\ Given an integer $J \geq 0$ and a finite set $\Lambda \subset \mathbb{Z}^d$, the output is the set $$ \Lambda^* := \{ k \in \mathbb{Z}^d\ : \ \text{ there exists } \ell \in \Lambda \text{ such that } |k - \ell| \leq J \} \;. $$ \end{itemize} Note that since the procedure adds a $d$-dimensional ball of radius $J$ around each point of $\Lambda$, the cardinality of the new set $\Lambda^*$ can be estimated as \begin{equation}\label{eq:estim-enrich} |\Lambda^*| \leq |\overline{B_d(0,J)}\cap \mathbb{Z}^d|\, |\Lambda| \sim \omega_d J^d |\Lambda| \;, \end{equation} where $\omega_d$ {is the measure of the $d$-dimensional Euclidean} unit ball $B_d(0,1)$ centered at the origin. It is convenient for future reference to denote by ${\partial\Lambda}_{n}:= \text{\bf E-D\"ORFLER}(r_n, \theta,J)$ the procedure described in (\ref{eq:aggr1}). We summarize our results in the following theorem. 
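A minimal sketch (ours; finite tuples of integers stand for indices in $\mathbb{Z}^d$, and the helper names are illustrative) of the procedure {\bf ENRICH} and of the cardinality estimate (\ref{eq:estim-enrich}):

```python
from itertools import product

def enrich(Lam, J, d):
    """Lambda* = all k in Z^d within Euclidean distance J of some ell in Lambda."""
    # Discrete closed ball: bar B_d(0,J) intersected with Z^d
    ball = [m for m in product(range(-J, J + 1), repeat=d)
            if sum(x * x for x in m) <= J * J]
    return {tuple(a + b for a, b in zip(ell, m)) for ell in Lam for m in ball}

Lam = {(0, 0), (3, 0)}
Lam_star = enrich(Lam, J=1, d=2)
ball_card = 5                        # |bar B_2(0,1) cap Z^2| = 5 lattice points
assert Lam <= Lam_star               # enrichment keeps the original indices
assert len(Lam_star) <= ball_card * len(Lam)
```

The composite step $\text{\bf E-D\"ORFLER}$ of (\ref{eq:aggr1}) would simply apply this enrichment to the output of a marking routine.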
\begin{theorem}[{contraction property of {\bf A-ADFOUR}}]\label{teo:four3} Consider the aggressive variant {\bf A-ADFOUR} of the adaptive algorithm {\bf ADFOUR}, in which the step ${\partial\Lambda}_{n}:= \text{\bf D\"ORFLER}(r_n, \theta)$ is replaced by $$ {\partial\Lambda}_{n}:= \text{\bf E-D\"ORFLER}(r_n, \theta, J) \;, $$ where $\theta$ is such that $\bar{\rho}$ defined in (\ref{eq:aggr3}) is smaller than $1$, and $J$ is the smallest integer for which (\ref{eq:aggr2}) is fulfilled. Let the assumptions of Property \ref{prop:inverse.alg} or \ref{prop:inverse.matrix-estimate} be satisfied. Then, the same conclusions as in Theorem \ref{teo:four1} hold true for this variant, with the contraction factor $\rho$ replaced by $\bar{\rho}$. \endproof \end{theorem} \subsection{{C-ADFOUR and PC-ADFOUR:} ADFOUR with coarsening}\label{subsec:coarse-adfour} The adaptive algorithm {\bf ADFOUR} and its variants introduced above are not guaranteed to be optimal in terms of complexity. Indeed, the discussion in the forthcoming Sect. \ref{sec:spars-res} for the exponential case will indicate that the residual $r(u_\Lambda)$ may be significantly less sparse than the corresponding Galerkin solution $u_\Lambda$; in particular, we will see that many indices in $\Lambda$, activated at an early stage of the adaptive process, could later be discarded, since the corresponding components of $u_\Lambda$ are zero. For these reasons, we propose here a new variant of algorithm {\bf ADFOUR}, which incorporates a recursive coarsening step. The algorithm is constructed through the procedures {\bf GAL}, {\bf RES}, {\bf D\"ORFLER} already introduced in Sect.
\ref{sec:defADFOUR}, together with the new procedure {\bf COARSE} defined as follows: \begin{itemize} \item $\Lambda := {\bf COARSE}(w, \epsilon)$\\ Given a function $w \in V_{\Lambda^*}$ for some finite index set $\Lambda^*$, and an accuracy $\epsilon$ which is known to satisfy $\Vert u - w \Vert \leq \epsilon$, the output $\Lambda \subseteq \Lambda^*$ is a set of minimal cardinality such that \begin{equation}\label{eq:def-coarse} \Vert w - P_\Lambda w \Vert \leq 2 \epsilon \;. \end{equation} \end{itemize} We will subsequently show (see Theorem \ref{T:coarsening}) that the cardinality $|\Lambda|$ is optimally related to the sparsity class of $u$. The following result will be used several times in the paper. \begin{property}[{coarsening}]\label{prop:cons-coarse} The procedure {\bf COARSE} guarantees the bounds \begin{equation}\label{eq:def-coarse-bis} \Vert u - P_\Lambda w \Vert \leq 3 \epsilon \end{equation} and, for the Galerkin solution $u_\Lambda \in V_\Lambda$, \begin{equation}\label{eq:def-coarse-ter} \tvert u - u_\Lambda \tvert \leq 3 \sqrt{\alpha^*}\epsilon \;. \end{equation} \end{property} \par{\noindent\it Proof}. \ignorespaces The first bound is trivial, the second one follows from the minimality property of the Galerkin solution in the energy norm and from (\ref{eq:four.1bis}): $$ \tvert u - u_\Lambda \tvert \leq \tvert u - P_{\Lambda}w \tvert \leq \sqrt{\alpha^*} \Vert u-P_{\Lambda} w \Vert \leq 3 \sqrt{\alpha^*} \epsilon \;. \qquad \quad \endproof $$ Given two parameters $\theta \in (0,1)$ and $tol \in [0,1)$, we define the following adaptive algorithm with coarsening. 
{\bf Algorithm C-ADFOUR}($\theta, \ tol$) \begin{itemize} \item[\ ] Set $r_0:=f$, $\Lambda_0:=\emptyset$, $n=-1$ \item[\ ] do \begin{itemize} \item[\ ] $n \leftarrow n+1$ \item[\ ] set $\Lambda_{n,0}=\Lambda_n$, $r_{n,0}=r_n$ \item[\ ] $k=-1$ \item[\ ] do \begin{itemize} \item[\ ] $k \leftarrow k+1$ \item[\ ] $\partial\Lambda_{n,k}:= \text{\bf D\"ORFLER}(r_{n,k}, \theta)$ \item[\ ] $\Lambda_{n,k+1}:=\Lambda_{n,k} \cup \partial\Lambda_{n,k}$ \item[\ ] $u_{n,k+1}:= {\bf GAL}(\Lambda_{n,k+1})$ \item[\ ] $r_{n,k+1}:= {\bf RES}(u_{n,k+1})$ \end{itemize} \item[\ ] while $\Vert r_{n,k+1} \Vert > \sqrt{1-\theta^2} \Vert r_{n} \Vert$ \item[\ ] $\Lambda_{n+1}:={\bf COARSE}\left(u_{n,k+1}, \tfrac1{\alpha_*} \Vert r_{n,k+1} \Vert \right)$ \item[\ ] $u_{n+1}:={\bf GAL}(\Lambda_{n+1})$ \item[\ ] $r_{n+1}:={\bf RES}(u_{n+1})$ \end{itemize} \item[\ ] while $\Vert r_{n+1} \Vert > tol $ \end{itemize} We observe that the specific choice of accuracy $\epsilon=\epsilon_n = \frac{1}{\alpha_*} \|r_{n,k+1}\|$ in each call of {\bf COARSE} in the algorithm above is motivated by the desire to guarantee a fixed reduction of the residual and error at each outer iteration. This is made precise in the following theorem. \begin{theorem}[{contraction property of {\bf C-ADFOUR}}]\label{teo:adfour-coarse-convergence} The algorithm {\bf C-ADFOUR} satisfies the following: \begin{enumerate}[\rm (i)] \item The number of iterations of each inner loop is finite and bounded independently of $n$; \item The sequence of residuals $r_n$ and errors $u-u_n$ generated for $n \geq 0$ by the algorithm satisfies the inequalities \begin{equation}\label{eq:adgev.1} \Vert r_{n+1} \Vert \leq \rho \Vert r_{n} \Vert \end{equation} and \begin{equation}\label{eq:adgev.2} \tvert u-u_{n+1} \tvert \leq \rho \tvert u-u_{n} \tvert \end{equation} for \begin{equation}\label{eq:adgev.2bis} \rho=3 \frac{\alpha^*}{\alpha_*}\sqrt{1-\theta^2} \;.
\end{equation} In particular, if $\theta$ is chosen in such a way that $\rho<1$, for any $tol>0$ the algorithm terminates in a finite number of iterations, whereas for $tol=0$ the sequence $u_n$ converges to $u$ in $H^1_p(\Omega)$ as $n \to \infty$. \end{enumerate} \end{theorem} \begin{proof} {\rm (i)} For any fixed $n$, each inner iteration behaves as the algorithm {\bf ADFOUR} considered in {Sect. \ref{sec:defADFOUR}}. Hence, setting again $\rho=\sqrt{1-\frac{\alpha_*}{\alpha^*} \theta^2}$, we have as in Theorem \ref{teo:four1} $$ \tvert u-u_{n,k+1} \tvert \leq \rho^{k+1} \tvert u-u_n \tvert \;, $$ which implies, by (\ref{eq:four.2.1bis}), $$ \Vert r_{n,k+1} \Vert \leq \sqrt{\alpha^*} \tvert u-u_{n,k+1} \tvert \leq \sqrt{\alpha^*} \rho^{k+1} \tvert u-u_n \tvert \leq \sqrt{\frac{\alpha^*}{\alpha_*}} \rho^{k+1} \Vert r_{n} \Vert \;. $$ This shows that the termination criterion \begin{equation}\label{eq:adgev.3} \Vert r_{n,k+1} \Vert \leq \sqrt{1-\theta^2} \, \Vert r_{n} \Vert \end{equation} is certainly satisfied if $$ \sqrt{\frac{\alpha^*}{\alpha_*}} \rho^{k+1} \leq \sqrt{1-\theta^2} \;, $$ i.e., as soon as $$ k+1 \geq \frac{\log \left( \tfrac{\alpha_*}{\alpha^*}(1-\theta^2) \right)}{2 \log \rho} {> k}\;. $$ We conclude that the number $K_n { = k+1}$ of inner iterations is bounded by {$1 + \frac{\log \left( \tfrac{\alpha_*}{\alpha^*}(1-\theta^2) \right)}{2 \log \rho}$, which is independent of $n$.} {\rm (ii)} By (\ref{eq:four.2.1}), we have $$ \Vert u-u_{n,k+1} \Vert \leq \frac1{\alpha_*} \Vert r_{n,k+1} \Vert \;. $$ At the exit of the inner loop, the quantity on the right-hand side is precisely the parameter $\epsilon_n$ {fed} to the procedure {\bf COARSE}; then, Property \ref{prop:cons-coarse} yields $$ \tvert u - u_{n+1} \tvert \leq 3 \sqrt{\alpha^*} \epsilon_n \;. 
$$ On the other hand, the termination criterion (\ref{eq:adgev.3}) yields $$ \epsilon_n \leq \frac1{\alpha_*} \sqrt{1-\theta^2} \Vert r_{n} \Vert \;, $$ so that $$ \tvert u - u_{n+1} \tvert \leq 3 \frac{\sqrt{\alpha^*}}{\alpha_*} \sqrt{1-\theta^2} \Vert r_{n} \Vert \;. $$ This bound together with the left-hand inequality in (\ref{eq:four.2.1bis}) applied to $r_{n+1}$ yields (\ref{eq:adgev.1}), whereas the same inequality applied to $r_{n}$ yields (\ref{eq:adgev.2}). \end{proof} A coarsening step can also be inserted in the aggressive algorithm {\bf A-ADFOUR} considered in Sect. \ref{subsec:aggress}; indeed, the enrichment step {\bf ENRICH} could activate a larger number of degrees of freedom than really needed, endangering optimality. The algorithm we now propose can be viewed as a variant of {\bf C-ADFOUR}, in which the use of {\bf E-D\"ORFLER} instead of {\bf D\"ORFLER} allows one to take a single inner iteration; in this respect, one can consider the enrichment step as a ``prediction'', and the coarsening step as a ``correction'', of the new set of active degrees of freedom. For this reason, we call this variant the {\bf Predictor/Corrector-ADFOUR}, or simply {\bf PC-ADFOUR}. Given two parameters $\theta \in (0,1)$ and $tol \in [0,1)$, we choose $J\geq 1$ as the smallest integer for which (\ref{eq:aggr2}) is fulfilled, and we define the following adaptive algorithm. 
{\bf Algorithm PC-ADFOUR}($\theta, \ tol, \ J$) \begin{itemize} \item[\ ] Set $r_0:=f$, $\Lambda_0:=\emptyset$, $n=-1$ \item[\ ] do \begin{itemize} \item[\ ] $n \leftarrow n+1$ \item[\ ] $\widehat{\partial\Lambda}_{n}:= \text{\bf E-D\"ORFLER}(r_{n}, \theta, J)$ \item[\ ] $\widehat\Lambda_{n+1}:= \Lambda_{n} \cup \widehat{\partial\Lambda}_{n}$ \item[\ ] $\widehat{u}_{n+1}:= {\bf GAL}(\widehat\Lambda_{n+1})$ \item[\ ] $\Lambda_{n+1}:= {\bf COARSE}\left(\widehat{u}_{n+1}, \frac{1}{\alpha_*} \sqrt{1-\theta^2}\|r_n\|\right)$ \item[\ ] $u_{n+1}:={\bf GAL}(\Lambda_{n+1})$ \item[\ ] $r_{n+1}:={\bf RES}(u_{n+1})$ \end{itemize} \item[\ ] while $\Vert r_{n+1} \Vert > tol $ \end{itemize} \begin{theorem}[{contraction property of {\bf PC-ADFOUR}}]\label{teo:pc-adfour-convergence} If the assumptions of Property \ref{prop:inverse.alg} or Property \ref{prop:inverse.matrix-estimate} are satisfied, then the statement (ii) of Theorem \ref{teo:adfour-coarse-convergence} applies to Algorithm {\bf PC-ADFOUR} as well. \end{theorem} \begin{proof} The first inequalities in both (\ref{eq:aggr_error_reduct}) and (\ref{eq:four.1bis}) yield $$ \Vert u - \widehat{u}_{n+1} \Vert \leq \frac1{\alpha_*} \sqrt{1-\theta^2} \, \Vert r_n \Vert \; . $$ Since the right-hand side is precisely the parameter $\epsilon_n$ fed to the procedure {\bf COARSE}, one proceeds as in the proof of Theorem \ref{teo:adfour-coarse-convergence}. \end{proof} \section{Nonlinear approximation in Fourier spaces}\label{sec:nl} \subsection{Best $N$-term approximation and rearrangement}\label{sec:abstract-nl} Given any nonempty finite index set $\Lambda \subset \mathbb{Z}^d$ and the corresponding subspace $V_\Lambda \subset V=H^1_p(\Omega)$ of dimension $|\Lambda|=\text{card}\, \Lambda$, the best approximation of $v$ in $V_{\Lambda}$ is the orthogonal projection of $v$ upon $V_{\Lambda}$, i.e.
the function $P_{\Lambda} v = \sum_{k \in \Lambda} \hat{V}_k \phi_k$, which satisfies $$ \Vert v - P_{\Lambda} v \Vert= \left(\sum_{k \not \in \Lambda} |\hat{V}_k|^2\right)^{1/2} $$ (we set $P_\Lambda v = 0$ if $\Lambda=\emptyset$). For any integer $N \geq 1$, we minimize this error over all possible choices of $\Lambda$ with cardinality $N$, thereby leading to the {\sl best $N$-term approximation error} $$ E_N(v)= \inf_{\Lambda \subset\mathbb{Z}^d , \ |\Lambda|=N} \Vert v - P_{\Lambda} v \Vert \;. $$ A way to construct a {\sl best $N$-term approximation} $v_N$ of $v$ consists of rearranging the coefficients of $v$ in decreasing order of modulus $$ \vert \hat{V}_{k_1}\vert \geq \ldots \geq \vert \hat{V}_{k_n} \vert \geq \vert \hat{V}_{k_{n+1}} \vert \geq \dots $$ and setting $v_N=P_{\Lambda_N}v$ with $\Lambda_N = \{ k_n \ : \ 1 \leq n \leq N \}$. As already mentioned in the Introduction, let us denote from now on by $v_n^*=\hat{V}_{k_n}$ the rearranged and rescaled Fourier coefficients of $v$. Then, $$ E_N(v)= \left(\sum_{n>N} |v_n^*|^2 \right)^{1/2} \;. $$ Next, given a strictly decreasing function $\phi : \mathbb{N} \to \mathbb{R}_+$ such that $\phi(0)=\phi_0$ for some $\phi_0 >0$ and $\phi(N) \to 0$ when $N \to \infty$, we introduce the corresponding {\sl sparsity class} ${\mathcal A}_\phi$ by setting \begin{equation}\label{eq:nl.gen.001} {\mathcal A}_\phi = {\Big\{ v \in V \ : \ \Vert v \Vert_{{\mathcal A}_\phi}:= \sup_{N \geq 0} \, \frac{E_N(v)}{\phi(N)} < +\infty \Big\} \; .} \end{equation} We point out that in applications $\Vert v\Vert_{{\mathcal A}_\phi}$ need not be a (quasi-)norm, since ${\mathcal A}_\phi$ need not be a linear space. Note however that $\Vert v \Vert_{{\mathcal A}_\phi}$ always controls the $V$-norm of $v$, since $\Vert v \Vert = E_0(v) \leq {\phi_0} \Vert v \Vert_{{\mathcal A}_\phi}$.
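The rearrangement just described gives a direct way to compute all the errors $E_N(v)$ at once from a coefficient vector. A minimal sketch (ours), using dyadic coefficients so that the tail sums are exact in floating point:

```python
import numpy as np

def best_Nterm_errors(coeffs):
    """E_N(v) for N = 0..len(coeffs): l2 norm of all but the N largest coefficients."""
    sq = np.sort(np.abs(np.asarray(coeffs, dtype=float)))[::-1] ** 2   # |v_n^*|^2
    tails = np.concatenate(([sq.sum()], sq.sum() - np.cumsum(sq)))     # tail sums
    return np.sqrt(np.maximum(tails, 0.0))

E = best_Nterm_errors([3.0, -0.5, 1.0, 0.25])
assert np.isclose(E[0], np.linalg.norm([3.0, -0.5, 1.0, 0.25]))  # E_0 = ||v||
assert np.isclose(E[1], np.linalg.norm([0.5, 1.0, 0.25]))        # drop the largest
```

Sorting by modulus realizes the infimum defining $E_N(v)$, so the returned sequence is non-increasing and vanishes when all coefficients are kept.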
Observe that $v \in {\mathcal A}_\phi$ iff there exists a constant $c>0$ such that \begin{equation}\label{eq:nl.gen.1} E_N(v) \leq c \phi(N)\;, \qquad \forall N \geq 0 \;. \end{equation} The quantity $\Vert v \Vert_{{\mathcal A}_\phi}$ {dictates} the minimal number $N_\varepsilon$ of basis functions needed to approximate $v$ with accuracy $\varepsilon$. {In fact}, from the relations $$ E_{N_\varepsilon}(v) \leq \varepsilon < E_{N_\varepsilon-1}(v) \leq \phi(N_\varepsilon-1) \Vert v \Vert_{{\mathcal A}_\phi} \;, $$ {and the monotonicity of $\phi$, we obtain} \begin{equation}\label{eq:nl.gen.2o} N_\varepsilon \leq \phi^{-1}\left(\frac{\varepsilon}{\Vert v \Vert_{{\mathcal A}_\phi}}\right) +1 \;. \end{equation} The second addend on the right-hand side can be absorbed by a multiple of the first one, provided $\varepsilon$ is sufficiently small; in other words, it is not restrictive to assume that there exists a constant $\kappa$ slightly larger than $1$ such that \begin{equation}\label{eq:nl.gen.2} N_\varepsilon \leq \kappa \, \phi^{-1}\left(\frac{\varepsilon}{\Vert v \Vert_{{\mathcal A}_\phi}}\right) \;. \end{equation} \begin{remark}[{sparsity class for $V'$}]\label{rem:nl1}{\rm Replacing $V$ by $V'$ in (\ref{eq:nl.gen.001}) leads to the definition of a sparsity class, still denoted by ${\mathcal A}_\phi$, in the space of linear continuous forms {$f$} on $H^1_p(\Omega)$. This observation applies to the subsequent definitions as well (e.g., for the class ${\mathcal A}^{\eta,t}_G$). In essence, we will treat in a unified way the nonlinear approximation of a function $v \in H^1_p(\Omega)$ and of a form ${f} \in H^{-1}_p(\Omega)$. \endproof } \end{remark} Throughout the paper, we shall consider two main families of sparsity classes, identified by specific choices of the function $\phi$ depending upon one or more parameters. 
The first family is related to the best approximation in {\em Besov} spaces of periodic functions, thus accounting for a finite-order regularity in $\Omega$; the corresponding functions $\phi$ exhibit an algebraic decay as $N \to \infty$, which motivates our terminology of {\em algebraic classes}. The second family is related to the best approximation in {\em Gevrey} spaces of periodic functions, which are formed by infinitely-differentiable functions in $\Omega$; the associated $\phi$'s exhibit an exponential decay, and for this reason such classes will be referred to as {\em exponential classes}. Properties of both families are collected hereafter. \subsection{Algebraic classes}\label{sec:nlb} The following is the counterpart, for Fourier approximations, of the by now well-known nonlinear approximation theory \cite{DeVore:1998} developed, e.g., for wavelets or nested finite elements. For this reason, we just state definitions and properties without proofs. For $s>0$, let us introduce the function \begin{equation}\label{eq:nlb.200} \phi(N)=N^{-s/d} \qquad \qquad \text{for } N \geq 1\;, \end{equation} and $\phi(0)=\phi_0>1$ arbitrary, with inverse \begin{equation}\label{eq:nlb.201} \phi^{-1}(\lambda) = \lambda^{-d/s} \qquad \qquad \text{for } \lambda \leq 1\;, \end{equation} and let us consider the corresponding class ${\mathcal A}_\phi$ defined in (\ref{eq:nl.gen.001}). \begin{definition}[{algebraic class of functions}]\label{def:ABes} We denote by ${\mathcal A}^{s}_B$ the subset of $V$ defined as $$ {\mathcal A}^{s}_B {:= \Big\{ v \in V \ : \ \Vert v \Vert_{{\mathcal A}^{s}_B}:= \Vert v \Vert + \sup_{N \geq 1} \, E_N(v) \, N^{s/d} < +\infty \Big\} \;.} $$ \end{definition} It is immediately seen that ${\mathcal A}^{s}_B$ contains the Sobolev space of periodic functions $H^{s+1}_p(\Omega)$.
On the other hand, it is proven in \cite{DeVore-Temlyakov:1995}, as a part of a more general result, that for $0 < \sigma, \tau \leq \infty$, the Besov space ${B^{s+1}_{\tau,\sigma}(\Omega)}=B^{s+1}_\sigma(L^\tau(\Omega))$ is contained in ${\mathcal A}^{s^*}_B$ {provided $s^* := s-d(1/\tau-1/2)_+>0$.} Let us associate the quantity $\tau>0$ to the parameter $s$, via the relation $$ \frac1\tau=\frac{s}{d} + \frac12 \;. $$ The condition for a function $v$ to belong to some class ${\mathcal A}^{s}_B$ can be equivalently stated as a condition on the vector ${\bm v}=(\hat{V}_k)_{k \in \mathbb{Z}^d}$ of its Fourier coefficients, precisely, on the rate of decay of the non-increasing rearrangement ${\bm v}^* =(v^*_n)_{n \geq 1}$ of ${\bm v}$. \begin{definition}[{algebraic class of sequences}]\label{def:elpicBes} {Let $\ell_B^{s}(\mathbb{Z}^d)$ be} the subset of sequences ${\bm v} \in \ell^{2}(\mathbb{Z}^d)$ so that $$ \Vert {\bm v} \Vert_{\ell_B^{s}(\mathbb{Z}^d)} := \sup_{n \geq 1} n^{1/\tau} |v_n^*| < +\infty \;. $$ \end{definition} \noindent Note that this space is often denoted by $\ell^\tau_w(\mathbb{Z}^d)$ in the literature, being an example of Lorentz space. The relationship between ${\mathcal A}^{s}_B$ and $\ell_B^{s}(\mathbb{Z}^d)$ is stated in the following Proposition. \begin{proposition}[{equivalence of algebraic classes}]\label{prop:nlg1-alg} Given a function $v \in V$ and the sequence ${\bm v}$ of its Fourier coefficients, one has $v \in {\mathcal A}^{s}_B$ if and only if ${\bm v} \in \ell_B^{s}(\mathbb{Z}^d)$, with $$ \Vert v \Vert_{{\mathcal A}^{s}_B} \lsim \Vert {\bm v} \Vert_{\ell_B^{s}(\mathbb{Z}^d)} \lsim \Vert v \Vert_{{\mathcal A}^{s}_B}\,. 
$$ \end{proposition} At last, we note that the quasi-Minkowski inequality $$ \Vert {\bm u}+{\bm v} \Vert_{\ell_B^{s}(\mathbb{Z}^d)} \leq C_s \left( \Vert {\bm u} \Vert_{\ell_B^{s}(\mathbb{Z}^d)} + \Vert {\bm v} \Vert_{\ell_B^{s}(\mathbb{Z}^d)} \right) $$ holds in $\ell_B^{s}(\mathbb{Z}^d)$, yet the constant $C_s$ blows up exponentially as $s \to \infty$. \subsection{Exponential classes}\label{sec:nlg} We first recall the definition of Gevrey spaces of periodic functions in $\Omega=(0,2\pi)^d$ (see \cite{Foias-Temam:1989}). Given reals $\eta > 0 $, {$0<t\le d$} and $s \geq 0$, we set $$ G^{\eta,t, s}_p(\Omega) { ~:=\Big\{ v \in L^2(\Omega) \ : \ \Vert v \Vert_{G,\eta,t,s}^2 = \sum_{k\in\mathbb{Z}^d} \e^{2\eta |k|^t}(1+|k|^{2s}) |\hat{v}_k|^2 < +\infty \Big\} \;.} $$ Note that $G^{\eta,t, s}_p(\Omega)$ is contained in all Sobolev spaces of periodic functions $H^r_p(\Omega)$, $r \geq 0$. Furthermore, if $t \geq 1$, $G^{\eta,t, s}_p(\Omega)$ is made of analytic functions. Gevrey spaces have been introduced to study the $C^\infty$ and analytic regularity of the solutions of partial differential equations. For our elliptic problem (\ref{eq:four03}), the following statement is an example of a shift theorem in Gevrey spaces. \begin{theorem}[{shift theorem}] If the assumptions of Property \ref{prop:inverse.matrix-estimate} are satisfied, then for any $\eta<\bar\eta_L$, ${0<t\leq 1}$ and $s \geq -1$, $L$ is an isomorphism between $G^{\eta,t, s+2}_p(\Omega)$ and $G^{\eta,t, s}_p(\Omega)$. \end{theorem} \begin{proof} Proceeding as in Sect. \ref{sec:algebraic_repres}, it is immediate to see that the problem $Lu=f$ can be equivalently formulated as $\bA \bu = \bff$, where the vectors $\bff$ and $\bu$ contain the Fourier coefficients of functions $f$ and $u$ normalized in $H_p^s(\Omega)$ and $H_p^{s+2}(\Omega)$, respectively.
{If $\bW=\text{diag}(\e^{\eta\vert k \vert^t})$ is a bi-infinite diagonal exponential matrix, then we can write $\bW\bu=\bW\bA^{-1}\bff=(\bW\bA^{-1} \bW^{-1})\bW\bff$. We observe that the property $\|\bW\bu\|_{\ell^2}\lesssim\|\bW\bff\|_{\ell^2}$, which implies the assertion, is a consequence of $\| \bW\bA^{-1}\bW^{-1} \|_{\ell^2}\lesssim 1 $. To show the latter inequality, we let ${\bf x, \bf y}\in\ell^2(\mathbb{Z}^d)$ and notice that \begin{equation*} |{\bf y}^T \bW \bA^{-1}\bW^{-1} {\bf x}| \le c_L \sum_{m\in\mathbb{Z}^d} e^{-\bar\eta_L |m|} \sum_{k\in\mathbb{Z}^d} |y_{m+k}| e^{\eta |m+k|^t} e^{-\eta |k|^t} |x_k|. \end{equation*} Since ${0<t\le 1}$, we deduce $|m+k|^t \le |m|^t + |k|^t$ and $e^{\eta (|m+k|^t- |k|^t)} \le e^{\eta |m|^t}$; moreover, since $|m| \geq |m|^t$, we also have $e^{-\bar\eta_L |m|} \leq e^{-\bar\eta_L |m|^t}$, whence \[ \|\bW\bA^{-1}\bW^{-1}{\bf x}\| = \sup_{{\bf 0} \neq {\bf y}\in\ell^2(\mathbb{Z}^d)} \frac{|{\bf y}^T\bW\bA^{-1}\bW^{-1}{\bf x}|}{\|{\bf y}\|} \le c_L \sum_{m\in\mathbb{Z}^d} e^{(-\bar\eta_L + \eta) |m|^t} \|{\bf x}\| \lsim \|{\bf x}\| \] because $\bar\eta_L > \eta$ and the series converges. This implies the desired estimate. } \end{proof} From now on, we fix $s=1$ and we normalize again the Fourier coefficients of a function $v$ with respect to the $H^1_p(\Omega)$-norm. Thus, we set \begin{equation}\label{eq:gevrey-class} G^{\eta,t}_p(\Omega)= G^{\eta,t, 1}_p(\Omega)= \{ v \in V \ : \ \Vert v \Vert_{G,\eta,t}^2 = \sum_k \e^{2\eta |k|^t} |\hat{V}_k|^2 < +\infty \} \;. \end{equation} Functions in $G^{\eta,t}_p(\Omega)$ can be approximated by the linear orthogonal projection $$ P_M v = \sum_{|k| \leq M} \hat{V}_k \phi_k \;, $$ for which we have \begin{eqnarray*} \Vert v -P_M v \Vert^2 &=& \sum_{|k| > M} |\hat{V}_k|^2 = \sum_{|k| > M} \e^{-2\eta |k|^t} \e^{2\eta |k|^t} |\hat{V}_k|^2 \\ &\leq & \e^{-2\eta M^t} \sum_{|k| > M} \e^{2\eta |k|^t} |\hat{V}_k|^2 \leq \e^{-2\eta M^t} \Vert v \Vert_{G,\eta,t}^2 \;.
\end{eqnarray*} As already observed {in Property \ref{prop:matrix-estimate}}, setting $N={\rm card}\{k \, : \, |k|\leq M \}$, one has $N \sim \omega_d M^d$, so that \begin{equation}\label{eq:nlg.3} E_N(v) \leq \Vert v -P_M v \Vert \lsim {\rm exp}\left(- \eta \omega_d^{-t/d} N^{t/d} \right) \Vert v \Vert_{G,\eta,t} \;. \end{equation} Hence, we are led to introduce the function \begin{equation}\label{eq:nlg.300} \phi(N)={\rm exp}\left(- \eta \omega_d^{-t/d} N^{t/d} \right) \quad \qquad (N \geq 0) \;, \end{equation} whose inverse is given by \begin{equation}\label{eq:nlg.301} \phi^{-1}(\lambda) = \frac{\omega_d}{\eta^{d/t}}\left( \log \frac1\lambda \right)^{d/t} \quad \qquad (\lambda \leq 1) \;, \end{equation} and to consider the corresponding class ${\mathcal A}_\phi$ defined in (\ref{eq:nl.gen.001}), which therefore contains $G^{\eta,t}_p(\Omega)$. \begin{definition}[{exponential class of functions}]\label{def:AGev} We denote by ${\mathcal A}^{\eta,t}_G$ the subset of $G^{\eta,t}_p(\Omega)$ defined as $$ {\mathcal A}^{\eta,t}_G { := \Big\{ v \in V \ : \ \Vert v \Vert_{{\mathcal A}^{\eta,t}_G}:= \sup_{N \geq 0} \, E_N(v) \, {\rm exp}\left(\eta \omega_d^{-t/d} N^{t/d} \right) < +\infty \Big\} \;.} $$ \end{definition} At this point, we make the subsequent notation easier by introducing the $t$-dependent function $$ \tau= \frac{t}{d} \le 1 \;. $$ As in the algebraic case, the class ${\mathcal A}^{\eta,t}_G$ can be equivalently characterized in terms of behavior of rearranged sequences of Fourier coefficients. \begin{definition}[{exponential class of sequences}]\label{def:elpicGev} {Let $\ell_G^{\eta,t}(\mathbb{Z}^d)$ be the} subset of sequences ${\bv} \in \ell^{2}(\mathbb{Z}^d)$ so that \looseness=-1 $$ \Vert {\bv} \Vert_{\ell_G^{\eta,t}(\mathbb{Z}^d)} := \sup_{n \geq 1} n^{(1-\tau)/2} {\rm exp}\left(\eta \omega_d^{-\tau} n^{\tau} \right) |v_n^*| < +\infty \;, $$ where {${\bv}^*=(v_n^*)_{n=1}^\infty$} is the non-increasing rearrangement of ${\bv}$. 
\end{definition} The relationship between ${\mathcal A}^{\eta,t}_G$ and $\ell_G^{\eta,t}(\mathbb{Z}^d)$ is stated in the following Proposition. \begin{proposition}[{equivalence of exponential classes}]\label{prop:nlg1} Given a function $v \in V$ and the sequence ${\bm v}=(\hat{V}_k)_{k \in \mathbb{Z}^d}$ of its Fourier coefficients, one has $v \in {\mathcal A}^{\eta,t}_G$ if and only if ${\bv} \in \ell_G^{\eta,t}(\mathbb{Z}^d)$, with $$ \Vert v \Vert_{{\mathcal A}^{\eta,t}_G} \lsim \Vert {\bv} \Vert_{\ell_G^{\eta,t}(\mathbb{Z}^d)} \lsim \Vert v \Vert_{{\mathcal A}^{\eta,t}_G}\,. $$ \end{proposition} \begin{proof} Assume first that ${\bv} \in \ell_G^{\eta,t}(\mathbb{Z}^d)$. Then, $$ E_N(v)^2 = \Vert v - P_N(v) \Vert^2 = \sum_{n>N} |v_n^*|^2 \lsim \sum_{n>N} n^{\tau-1} {\rm exp}\left(-2\eta \omega_d^{-\tau} n^{\tau} \right) \Vert{\bv}\Vert_{\ell_G^{\eta,t}(\mathbb{Z}^d)}^2 \;. $$ Now, setting for simplicity $\alpha=2\eta \omega_d^{-\tau}$, one has $$ S:=\sum_{n>N} n^{\tau-1} \e^{- \alpha n^{\tau}} \sim \int_N^\infty x^{\tau-1} \e^{- \alpha x^{\tau}} dx \;. $$ The substitution $z=x^{\tau}$ yields $$ S \sim \frac{d}{t} \int_{N^{\tau}}^\infty \e^{-\alpha z} dz = \frac{d}{\alpha t} \e^{-\alpha N^{\tau}} $$ whence $\|v\|_{\mathcal{A}_G^{\eta,t}} \lsim \Vert {\bv} \Vert_{\ell_G^{\eta,t}(\mathbb{Z}^d)}$. Conversely, let $v \in {\mathcal A}^{\eta,t}_G$. We have to prove that for any $n \geq 1$, one has $$ n^{1-\tau} |v_n^*|^2 \lsim \e^{-\alpha n^{\tau}} \Vert v \Vert_{{\mathcal A}^{\eta,t}_G}^2 \;. $$ Let $m<n$ be the largest integer such that $n-m \geq n^{1-\tau}$ (note that $0 \leq 1-\tau < 1$), i.e., $m \sim n(1-n^{-\tau})$. Then, $$ n^{1-\tau} |v_n^*|^2 \leq (n-m)|v_n^*|^2 \leq \sum_{j=m+1}^n |v_j^*|^2 \leq \Vert v - P_m(v) \Vert^2 \leq \e^{-\alpha m^{\tau}} \Vert v \Vert_{{\mathcal A}^{\eta,t}_G}^2 \;.
$$ Now, by Taylor expansion, $$ m^{\tau} \sim n^{\tau}(1-n^{-\tau})^{\tau} = n^{\tau}\left(1- \tau n^{-\tau} + o(n^{-\tau})\right) = n^{\tau} -\tau + o(1) \;, $$ so that $ \e^{-\alpha m^{\tau}} \lsim \e^{-\alpha n^{\tau}}$, and $\Vert {\bv} \Vert_{\ell_G^{\eta,t}(\mathbb{Z}^d)} \lsim \|v\|_{\mathcal{A}_G^{\eta,t}}$ is proven. \end{proof} Next, we briefly comment on the structure of the set $\ell_G^{\eta,t}(\mathbb{Z}^d)$. This is not a vector space, since it may happen that ${\bu},\,{\bv}$ belong to this set, whereas ${\bu}+{\bv}$ does not. Assume for simplicity that $\tau=1$ and consider for instance the sequences in $\ell_G^{\eta,t}(\mathbb{Z}^d)$ \begin{eqnarray*} {\bu} &=& \left(\e^{-\eta}, 0 , \e^{-2\eta}, 0 , \e^{-3 \eta}, 0 , \e^{-4 \eta}, 0 ,\dots \right) \;, \\ {\bv} &=& \left(0, \e^{-\eta}, 0 , \e^{-2\eta}, 0 , \e^{-3 \eta}, 0, \e^{-4 \eta}, \dots \right) \;. \end{eqnarray*} Then, $$ {\bu}+{\bv} = ({\bu}+{\bv})^*= \left(\e^{-\eta},\e^{-\eta} , \e^{-2\eta},\e^{-2\eta} , \e^{-3 \eta},\e^{-3\eta} , \e^{-4 \eta}, \e^{-4\eta} ,\dots \right) \;; $$ thus, $({\bu}+{\bv})^*_{2j}= \e^{-\eta j}$, so that $\e^{2\eta j} ({\bu}+{\bv})^*_{2j} \to \infty$ as $j \to +\infty$, i.e., ${\bu}+{\bv} \notin\ell_G^{\eta,t}(\mathbb{Z}^d)$. On the other hand, we have the following property. \begin{lemma}[{quasi-triangle inequality}]\label{L:nlg2} If ${\bu}_i\in {\ell_G^{\eta_i,t}(\mathbb{Z}^d)}$ for $i=1,2$, then ${\bu}_1+{\bu}_2 \in {\ell_G^{\eta,t}(\mathbb{Z}^d)}$ with \[ {\|{\bu}_1+{\bu}_2\|_{\ell_{G}^{\eta,t}} \le \|{\bu}_1\|_{\ell_{G}^{\eta_1,t}} + \|{\bu}_2\|_{\ell_{G}^{\eta_2,t}},} \qquad \eta^{-\frac{1}{\tau}} = \eta_1^{-\frac{1}{\tau}} + \eta_2^{-\frac{1}{\tau}}. \] \end{lemma} \begin{proof} We use the characterization given by Proposition \ref{prop:nlg1}, so that $$ \Vert u_i - P_{N_i}(u_i) \Vert \le \|u_i\|_{\mathcal{A}_G^{\eta_i,t}} {\rm exp}\left(- \eta_i \omega_d^{-\tau} N_i^{\tau} \right)\quad i=1,2\;.
$$ Given $N\ge1$, we seek $N_1,N_2$ so that \[ N = N_1+N_2, \qquad \eta_1 N_1^\tau = \eta_2 N_2^\tau. \] This implies \[ N = N_1 \eta_1^{\frac{1}{\tau}} {\Big(\eta_1^{-\frac{1}{\tau}}+ \eta_2^{-\frac{1}{\tau}} \Big)} = N_1 \eta_1^{\frac{1}{\tau}} \eta^{-\frac{1}{\tau}}, \] and { \begin{align*} \|(u_1+u_2)-P_N(u_1+u_2)\| & \le \|u_1 - P_{N_1}(u_1)\| + \|u_2 - P_{N_2}(u_2)\| \\ & \le \|u_1\|_{{\mathcal A}_G^{\eta_1,t}} {\rm exp} (-\eta_1 \omega_d^{-\tau} N_1^\tau) + \|u_2\|_{{\mathcal A}_G^{\eta_2,t}} {\rm exp} (-\eta_2 \omega_d^{-\tau}N_2^\tau) \\ & \le \big(\|u_1\|_{{\mathcal A}_G^{\eta_1,t}} + \|u_2\|_{{\mathcal A}_G^{\eta_2,t}}\big) {\rm exp} (-\eta \omega_d^{-\tau} N^\tau), \end{align*} } whence the assertion follows. \end{proof} {Note that when $\eta_1=\eta_2$ we obtain $\eta = 2^{-\tau}\eta_1 < \eta_1$, consistently with the previous counterexample.} \section{Sparsity classes of the residual}\label{sec:spars-res} For any finite index set $\Lambda$, let $r=r(u_\Lambda)$ be the residual produced by the Galerkin solution $u_\Lambda$. Under Assumption \ref{ass:minimality}, the step $$ \partial\Lambda := \text{\bf D\"ORFLER}(r, \theta) $$ selects a set {$\partial\Lambda \subset \Lambda^c$} of minimal cardinality for which $\Vert r-P_{\partial\Lambda}r \Vert \leq \sqrt{1-\theta^2} \Vert r \Vert$. Thus, if $r$ belongs to a certain sparsity class ${\mathcal A}_{\bar{\phi}}$, identified by a function $\bar{\phi}$, then (\ref{eq:nl.gen.2o}) yields \begin{equation}\label{eq:boundDorfler} |\partial\Lambda| \leq {\bar{\phi}}^{-1}\left(\sqrt{1-\theta^2} \, \frac{\Vert r \Vert}{\ \Vert r \Vert_{{\mathcal A}_{\bar{\phi}}}}\right) +1 \;.
\end{equation} Explicitly, if $r \in {\mathcal A}_B^{\bar{s}}$ for some $\bar{s}>0$, we have by (\ref{eq:nlb.201}) $$ |\partial\Lambda| \leq (1-\theta^2)^{-d/{2\bar{s}}} \left(\frac{\ \Vert r \Vert_{{\mathcal A}_B^{\bar{s}}}}{\Vert r \Vert}\right)^{d/\bar{s}} +1 \;, $$ whereas if $r \in {\mathcal A}^{\bar{\eta},\bar{t}}_G$ for some $\bar{\eta}>0$ and $\bar{t}>0$, we have by (\ref{eq:nlg.301}) $$ |\partial\Lambda| \leq \frac{\omega_d}{\bar{\eta}^{d/\bar{t}}} \left( \log \frac{\Vert r \Vert_{{\mathcal A}^{\bar{\eta},\bar{t}}_G}}{\Vert r \Vert} + |\log \sqrt{1-\theta^2}| \right)^{d/\bar{t}} + 1 \;. $$ We stress the fact that the cardinality of $\partial\Lambda$ is related to the {\sl sparsity class of the residual}. {We will see in the rest of this section that} such a class does coincide with the sparsity class of the solution in the algebraic case, whereas it is different (indeed, worse) in the exponential case. This is a crucial point to be kept in mind in the forthcoming optimality analysis of our algorithms. The cardinality of $\partial\Lambda$ depends indeed on how much the sparsity measure $\Vert r \Vert_{{\mathcal A}_{\bar{\phi}}}$ deviates from the {Hilbert} norm $\Vert r \Vert$. So, before embarking on the study of the relationship between the sparsity classes of the residual and of the solution, we make some brief comments on the ratio between these two quantities. For shortness, we only consider the exponential case, although similar considerations apply to the algebraic case as well.
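The practical gap between the two bounds above can be seen by evaluating their right-hand sides directly. The following sketch is purely illustrative (the values of the marking parameter $\theta$, of $d$, $\bar s$, $\bar\eta$, $\bar t$, and of the ratio $\Vert r \Vert_{{\mathcal A}_{\bar\phi}}/\Vert r\Vert$ are hypothetical and not taken from the paper):

```python
import math

def dorfler_card_algebraic(theta, ratio, d, s):
    # Algebraic bound: (1 - theta^2)^(-d/(2s)) * (||r||_A / ||r||)^(d/s) + 1
    return (1 - theta**2) ** (-d / (2 * s)) * ratio ** (d / s) + 1

def dorfler_card_exponential(theta, ratio, d, t, eta, omega_d):
    # Exponential bound: (omega_d / eta^(d/t)) *
    #   (log(||r||_A / ||r||) + |log sqrt(1 - theta^2)|)^(d/t) + 1
    log_term = math.log(ratio) + abs(math.log(math.sqrt(1 - theta**2)))
    return omega_d / eta ** (d / t) * log_term ** (d / t) + 1

# Hypothetical data: d = 1, theta = 0.5, ratio Q = ||r||_A / ||r|| = 100.
alg = dorfler_card_algebraic(theta=0.5, ratio=100.0, d=1, s=1.0)
exp_ = dorfler_card_exponential(theta=0.5, ratio=100.0, d=1, t=1.0,
                                eta=1.0, omega_d=2.0)
print(alg, exp_)
```

The exponential bound depends only logarithmically on the ratio $Q$, whereas the algebraic one depends on a power of it.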
The size of the ratio $$ Q := \frac{\ \Vert r \Vert_{{\mathcal A}^{\bar{\eta},\bar{t}}_G}}{\Vert r \Vert} \; $$ depends on the relative behavior of the rearranged coefficients $r_n^*$ of $r$, which by Definition \ref{def:elpicGev} and Proposition \ref{prop:nlg1} satisfy \begin{equation}\label{rearranged-res} |r_n^*| \leq \lambda^* n^{(\bar{\tau}-1)/2} {\rm e}^{-\bar{\eta} \omega_d^{-\bar{\tau}} n^{\bar{\tau}}} \Vert r \Vert_{{\mathcal A}^{\bar{\eta},\bar{t}}_G} \end{equation} for some constant $\lambda^*>0$, with $\bar{\tau}=\bar{t}/d$. Let us consider two representative situations. \begin{example}[{\em genuinely decaying functions}]\label{E:genuinely-decaying} \rm The most ``favorable'' situation is the one in which the sequence of rearranged coefficients decays precisely at the rate given by the right-hand side of {\eqref{rearranged-res}}; in other words, suppose that there exists a constant $\lambda_*>0$ such that for all $n \geq 1$ \begin{equation}\label{eq:spars20} \lambda_* n^{(\bar{\tau}-1)/2} {\rm e}^{-\bar{\eta} \omega_d^{-\bar{\tau}} n^{\bar{\tau}}} \Vert r \Vert_{{\mathcal A}^{\bar{\eta},\bar{t}}_G} \, \leq \, |r_n^*| \, \leq \, \lambda^* n^{(\bar{\tau}-1)/2} {\rm e}^{-\bar{\eta} \omega_d^{-\bar{\tau}} n^{\bar{\tau}}} \Vert r \Vert_{{\mathcal A}^{\bar{\eta},\bar{t}}_G} \;. 
\end{equation} Then, $$ (\lambda_*)^2 \sum_{n \geq 1} n^{(\bar{\tau}-1)} {\rm e}^{-2 \bar{\eta} \omega_d^{-\bar{\tau}} n^{\bar{\tau}}} \Vert r \Vert_{{\mathcal A}^{\bar{\eta},\bar{t}}_G}^2 \, \leq \, \Vert r \Vert^2 \, \leq \, (\lambda^*)^2 \sum_{n \geq 1} n^{(\bar{\tau}-1)} {\rm e}^{-2 \bar{\eta} \omega_d^{-\bar{\tau}} n^{\bar{\tau}}} \Vert r \Vert_{{\mathcal A}^{\bar{\eta},\bar{t}}_G}^2 \;, $$ and since $$ \sum_{n \geq 1} n^{(\bar{\tau}-1)} {\rm e}^{-2 \bar{\eta} \omega_d^{-\bar{\tau}} n^{\bar{\tau}}} \sim \int_{1}^{+\infty} x^{\bar{\tau}-1} {\rm e}^{-2 \bar{\eta} \omega_d^{-\bar{\tau}} x^{\bar{\tau}}} \, dx = \frac1{\bar{\tau}} \int_{1}^{+\infty} {\rm e}^{-2 \bar{\eta} \omega_d^{-\bar{\tau}} y} \, dy =: C \;, $$ we obtain $$ \frac1{\sqrt{C}\,\lambda^*} \lsim Q \lsim \frac1{\sqrt{C}\,\lambda_*} \;. $$ Thus, if (\ref{eq:spars20}) is a ``tight'' bound, the ratio $Q$ is ``small'', and the procedure {\bf D\"ORFLER} activates a moderate number of degrees of freedom at the current iteration. \endproof \end{example} \begin{example}[{\em plateaux}]\label{E:plateaux} \rm The opposite situation, i.e., the worst scenario, occurs when the sequence of rearranged coefficients of $r$ exhibits large ``plateaux'' consisting of equal (or nearly equal) elements in modulus. Fix an integer $K$ arbitrarily large, and suppose that the $K$ largest coefficients of $r$ satisfy $$ |r^*_1|=|r^*_2|= \cdots =|r^*_{K-1}|=|r^*_{K}| = \lambda^* K^{(\bar{\tau}-1)/2} {\rm e}^{-\bar{\eta} \omega_d^{-\bar{\tau}} K^{\bar{\tau}}} \Vert r \Vert_{{\mathcal A}^{\bar{\eta},\bar{t}}_G} \;.
$$ Since $$ \sum_{n > K} n^{(\bar{\tau}-1)} {\rm e}^{-2 \bar{\eta} \omega_d^{-\bar{\tau}} n^{\bar{\tau}}} \sim \int_{(K+1)^{\bar{\tau}}}^{+\infty} {\rm e}^{-2 \bar{\eta} \omega_d^{-\bar{\tau}} y} \, dy \sim {\rm e}^{-2 \bar{\eta} \omega_d^{-\bar{\tau}}(K+1)^{\bar{\tau}} } \leq {\rm e}^{-2 \bar{\eta} \omega_d^{-\bar{\tau}} K^{\bar{\tau}} } \;, $$ there exists $\delta \in (0,1)$ such that $$ \Vert r \Vert^2 = (\lambda^*)^2 (K+\delta)^{\bar{\tau}}{\rm e}^{-2 \bar{\eta} \omega_d^{-\bar{\tau}} K^{\bar{\tau}} } \Vert r \Vert_{{\mathcal A}^{\bar{\eta},\bar{t}}_G}^2 \;. $$ We conclude that the ratio $$ Q=\frac{{\rm e}^{\bar{\eta} \omega_d^{-\bar{\tau}} K^{\bar{\tau}}}}{\lambda^* (K+\delta)^{\bar{\tau}/2}} $$ turns out to be arbitrarily large, and indeed for such a residual it is easily seen that D\"orfler's condition $\Vert P_{\partial\Lambda}r \Vert \geq \theta \Vert r \Vert $ requires $|\partial\Lambda|$ to be of the order of $\theta^2 K$. \endproof \end{example} Let us now investigate the sparsity classes of the residual, treating the algebraic and exponential cases separately. Note that, in view of Propositions \ref{prop:nlg1-alg} or \ref{prop:nlg1}, for studying the sparsity classes of certain functions $v$ and $Lv$ we are entitled to study, equivalently, the sparsity classes of the related vectors $\mathbf{v}$ and $\mathbf{A} \mathbf{v}$, where $\mathbf{A}$ is the stiffness matrix (\ref{eq:four100}). \subsection{Algebraic case}\label{S:algebraic-case} We first recall the notion of matrix compressibility (see \cite{CDDV:1998}, where the concept was used in the wavelet context).
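Before doing so, we note that the plateau phenomenon of Example \ref{E:plateaux} is easy to observe numerically. The following sketch is purely illustrative (the parameters $K$, $\eta$, $\theta$ are hypothetical, and a greedy selection stands in for the actual {\bf D\"ORFLER} procedure): it builds a residual-like sequence whose $K$ leading coefficients form a plateau, and counts how many coefficients a D\"orfler-type criterion must activate.

```python
import math

def dorfler_count(coeffs, theta):
    # Smallest m such that the m largest coefficients capture the fraction
    # theta^2 of the total energy, i.e. ||P r||^2 >= theta^2 ||r||^2.
    sq = sorted((c * c for c in coeffs), reverse=True)
    goal = theta**2 * sum(sq)
    acc = 0.0
    for m, s in enumerate(sq, start=1):
        acc += s
        if acc >= goal:
            return m
    return len(sq)

# Hypothetical residual: plateau of K equal leading coefficients, followed
# by an exponentially decaying tail (cf. the case d = t = 1, eta small).
K, eta, theta = 1000, 0.01, 0.5
plateau = [math.exp(-eta / 2 * K)] * K
tail = [math.exp(-eta / 2 * n) for n in range(K + 1, K + 200)]
m = dorfler_count(plateau + tail, theta)
print(m)  # grows in proportion to the plateau length K
```

On a plateau, the activated cardinality grows proportionally to $K$, in agreement with the example; on a genuinely decaying sequence it stays moderate.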
\begin{definition}[{matrix compressibility}]\label{def:compress} For $s^*>0$, a bounded matrix $\mathbf{A}:\ell^2(\mathbb{Z}^d)\to\ell^2(\mathbb{Z}^d)$ is called $s^*$-compressible if for any $j\in\mathbb{N}$ there exist constants $\alpha_j$ and $C_j$ and a matrix $\mathbf{A}_j$ having at most $\alpha_j 2^j$ non-zero entries per column, such that \[ \| \mathbf{A}-\mathbf{A}_j\| \leq C_j \] where $\{\alpha_j\}_{j\in\mathbb{N}}$ is summable, and for any $s<s^*$, $\{ C_j 2^{sj/d}\}$ is summable. \end{definition} Concerning the compressibility of the matrices belonging to the class ${\mathcal D}_a(\eta_L)$ {of Definition \ref{def:class.matrix}}, the following result can be found in \cite[Lemma 3.6]{Dahlke-Fornasier-Groechenig:2010}. We report here the proof for completeness. \begin{lemma}[{compressibility}]\label{lm:compr} {If $s^*:=\eta_L-d>0$, then any matrix $\mathbf{A}\in\mathcal{D}_a(\eta_L)$ is $s^*$-compressible.} \end{lemma} \begin{proof} Let us take {$N_j=\lceil\frac{2^{j/d}}{(j+1)^2}\rceil$, where $\lceil\cdot\rceil$ denotes the integer part plus $1$}. Then by Property \ref{prop:matrix-estimate} (algebraic case) there holds $\|\mathbf{A}-\mathbf{A}_{N_j}\|\lesssim 2^{-j(\eta_L-d)/d}\ (j+1)^{2(\eta_L-d)}=:C_j$ and $\mathbf{A}_{N_j}$ has $\alpha_j 2^j$ {non-vanishing} entries per column with {$\alpha_j \approx 2^d (j+1)^{-2d}$}. It is immediate to verify that $\sum_j \alpha_j<\infty$. Moreover, for $s<s^*$ and setting $\delta=s^*-s$, we clearly have $\sum_j C_j 2^{js/d}=\sum_j 2^{-j\delta/d} (j+1)^{2s^*} < \infty$. \end{proof} We now consider the continuity properties of the operator $L$ between sparsity spaces. The following result is well known (see e.g. \cite{Dahlke-Fornasier-Raasch:2007}) and {its} proof is here reported for completeness. \begin{proposition}[{continuity of $L$ in $\mathcal{A}_B^s$}]\label{prop:A-continuity-alg} Let $\mathbf{A}\in\mathcal{D}_a(\eta_L)$, $\eta_L>d$ and $s^*={\eta_L-d}$. 
For any $s<s^*$, if $v \in {\mathcal A}_B^s$ then $Lv \in {\mathcal A}_B^s$, with $$ \Vert Lv \Vert_{{\mathcal A}_B^s} \lsim \Vert v \Vert_{{\mathcal A}_B^s} \;. $$ The constants appearing in the bounds go to infinity as $s$ approaches $s^*$. \end{proposition} \begin{proof} Let us choose {$N_j=\lceil\frac{2^{j/d}}{(j+1)^2}\rceil$} as in the proof of Lemma \ref{lm:compr}. {If we} set $\mathbf{A}_j:=\mathbf{A}_{N_j}$, then by Property \ref{prop:matrix-estimate} (algebraic case) we have $$ \Vert \mathbf{A}-\mathbf{A}_{j} \Vert \lesssim 2^{-j(\eta_L-d)/d}\ (j+1)^{2(\eta_L-d)}=2^{-j s^*/d}\ (j+1)^{2s^*} . $$ On the other hand, for any $j \geq 0$, let ${\bf v}_j=P_{2^j}({\bf v})$ be a best $2^j$-term approximation of ${\bf v}\in\ell_B^s$, which therefore satisfies $\Vert {\bf v}-{\bf v}_j \Vert \leq 2^{-js/d} \Vert {\bf v } \Vert_{\ell_B^s}$. Note that the difference ${\bf v}_j - {\bf v}_{j-1}$ satisfies as well $$ \Vert {\bf v}_j - {\bf v}_{j-1} \Vert \lsim 2^{-js/d} \Vert {\bf v } \Vert_{\ell_B^s} \;. $$ Let $$ {\bf w}_J = \sum_{j=0}^J \mathbf{A}_{J-j}({\bf v}_j-{\bf v}_{j-1}) \;, $$ where we set ${\bf v}_{-1}={\bf 0}$. Writing ${\bf v}={\bf v}-{\bf v}_J + \sum_{j=0}^J ({\bf v}_j-{\bf v}_{j-1})$, we obtain $$ \mathbf{A}{\bf v}-{\bf w}_J = \mathbf{A}({\bf v}-{\bf v}_J) + \sum_{j=0}^J (\mathbf{A}-\mathbf{A}_{J-j})({\bf v}_j-{\bf v}_{j-1})\;. $$ The last equation yields \begin{eqnarray} \|\mathbf{A}{\bf v}-{\bf w}_J\| &\leq& \|\mathbf{A}\| \|{\bf v}-{\bf v}_J\| + \sum_{j=0}^J {\|\mathbf{A}-\mathbf{A}_{J-j}\|} \|{\bf v}_j-{\bf v}_{j-1}\|\nonumber\\ &\lesssim& \left(2^{-Js/d} + \sum_{j=0}^J 2^{-(J-j) s^*/d}\ (J-j+1)^{2s^*} 2^{-js/d}\right) \|\bv\|_{\ell^s_B}\nonumber\\ &\lesssim& 2^{-Js/d}\left( 1 + \sum_{j=0}^J 2^{-(J-j) (s^*-s)/d}\ (J-j+1)^{2s^*} \right) \|\bv\|_{\ell^s_B} \nonumber\\ &\lesssim& 2^{-Js/d}\|\bv\|_{\ell^s_B},\nonumber \end{eqnarray} where the series $ \sum_{k} 2^{-k(s^*-s)/d}\ (k+1)^{2s^*}$ is convergent, but its sum blows up as $s$ approaches $s^*$.
Finally, by construction {$\bw_J$} belongs to a finite dimensional space $V_{\Lambda_J}$, where \[ \vert {\Lambda_J} \vert {\lsim \omega_d} \sum_{j=0}^J N_{J-j}^d\lesssim 2^J\sum_{j=0}^J (J-j+1)^{-2d}\lesssim 2^J\ . \] This implies $\| {\mathbf{A}}{\bf{v}}\|_{\ell_B^s} \lesssim \|{\bf{v}}\|_{\ell_B^s} $ for any $s<s^*$. \end{proof} At last, we discuss the sparsity class of the residual $r=r(u_\Lambda)$ for some Galerkin solution $u_\Lambda$. \begin{proposition}[{sparsity class of the residual}]\label{prop:unif-bound-res-alg} Let the assumptions of Property \ref{prop:inverse.alg} be satisfied, and set {$s^*=\eta_L-d$}. For any $s<s^*$, if $u \in {\mathcal A}_B^s$ then $r(u_\Lambda) \in {\mathcal A}_B^s$ for any index set $\Lambda$, with $$ \Vert r(u_\Lambda) \Vert_{{\mathcal A}_B^s} \lsim \Vert u \Vert_{{\mathcal A}_B^s} \;. $$ \end{proposition} \begin{proof} Denoting by $ {\bf r}_\Lambda$ the vector representing $r(u_\Lambda)$ and using Proposition \ref{prop:A-continuity-alg}, we get \begin{equation}\label{eq:aux-1} \| {\bf r}_\Lambda\|_{\ell_B^s} = \| \mathbf{A}({\bf u} - {\bf u}_\Lambda )\|_{\ell_B^s} \lesssim \| {\bf u} - {\bf u}_\Lambda \|_{\ell_B^s} \lesssim \| {\bf u} \|_{\ell_B^s} + \| {\bf u}_\Lambda \|_{\ell_B^s} . \end{equation} At this point, we invoke the equivalent formulation of the Galerkin problem given by \eqref{eq:inf-pb-galerkin}, which yields $ \hat{{\bf u}} = ({\widehat{\mathbf{A}}_\Lambda})^{-1} (\mathbf{P}_\Lambda{\bf f})$. Using $\mathbf{A}\in\mathcal{D}_a(\eta_L)$ and combining Property \ref{prop:inf-matrix} together with Property \ref{prop:inverse.alg}, we obtain $(\widehat{\mathbf{A}}_\Lambda)^{-1}\in \mathcal{D}_a(\eta_L)$. 
Hence, applying Proposition \ref{prop:A-continuity-alg} to $(\widehat{\mathbf{A}}_\Lambda)^{-1}$ we get \[ \|{\bf u}_\Lambda\|_{\ell_B^s} = \|\hat{{\bf u}} \|_{\ell_B^s} = \|({\widehat{\mathbf{A}}_\Lambda})^{-1} (\mathbf{P}_\Lambda{\bf f})\|_{\ell_B^s} \lesssim \| \mathbf{P}_\Lambda{\bf f} \|_{\ell_B^s} \leq \| {\bf f} \|_{\ell_B^s} \;, \] where the last step is an easy consequence of the definition of the projector $ \mathbf{P}_\Lambda$. By substituting the above inequality into \eqref{eq:aux-1}, we finally obtain \begin{equation} \| {\bf r}_\Lambda\|_{\ell_B^s} \lesssim \| {\bf u} \|_{\ell_B^s} + \| {\bf f} \|_{\ell_B^s} = \| {\bf u} \|_{\ell_B^s} + \| \mathbf{A}{\bf u} \|_{\ell_B^s} \lesssim \| {\bf u} \|_{\ell_B^s} \;, \end{equation} where in the last inequality we used again Proposition \ref{prop:A-continuity-alg}. \end{proof} We observe that the previous bound is tailored to the worst-case scenario: one expects indeed that for $\Lambda$ large enough the residual becomes progressively smaller than the solution. \subsection{Exponential case}\label{subsec:spars-res-exp} {As already alluded to in the Introduction, and in striking contrast to} the previous algebraic case, the implication $v \in {\mathcal A}^{{\eta},{t}}_G \Rightarrow Lv \in {\mathcal A}^{{\eta},{t}}_G$ is false. The following counterexamples prove this fact, and shed light on what the correct implication should be. \begin{example}[{Banded matrices}]\label{ex:spars1}{\rm Fix $d=1$ and $t=1$ (hence, $\tau { = \frac{t}{d}} =1$).
Recalling the expression (\ref{eq:four140}) for the entries of $\mathbf{A}$, let us choose $\hat{\nu}_0=\hat{\sigma}_0= {\sqrt{2 \pi}}$, which gives $$ {a_{\ell,\ell} = 1 \qquad\forall \; \ell\in\mathbb{Z}.} $$ Next, let us choose $\hat{\sigma}_h=0$ for all $h \not= 0$, which implies {(because $d=1$)} $$ |a_{\ell,k}|=\frac1{\sqrt{2\pi}}\, \frac{|\ell|\, |k|}{c_\ell \, c_k} |\hat{\nu}_{\ell-k}|\;, \qquad \ell \not= k \;, $$ i.e., $$ \frac1{2\sqrt{2\pi}}\, |\hat{\nu}_{\ell-k}| \leq |a_{\ell,k}| \leq \frac1{\sqrt{2\pi}}\,|\hat{\nu}_{\ell-k}| \;, \qquad \ell \not= k \;, \ \ |\ell|, |k| \geq 1 \;. $$ At this point, let us fix a real $\eta_L >0$ and an integer $p \geq 0$, and let us choose the coefficients $\hat{\nu}_h$ for $h \not= 0$ to satisfy $$ |\hat{\nu}_h| = \begin{cases} \sqrt{2\pi} {\rm e}^{-\eta_L |h|}& \text{if } 0 < |h| \leq p \;, \\ 0 & \text{if } |h| > p\;. \end{cases} $$ In summary, the coefficient $\nu$ of the elliptic operator $L$ is a trigonometric polynomial of degree $p$, whereas the coefficient $\sigma$ is a constant. The corresponding stiffness matrix $\mathbf{A}$ is banded with $2p+1$ non-zero diagonals, and satisfies \begin{equation}\label{eq:spars3} \tfrac12 {\rm e}^{-\eta_L |\ell-k|} \leq |a_{\ell,k}| \leq {\rm e}^{-\eta_L |\ell-k|} \;, \qquad 0 \leq |\ell-k|\leq p \;, \ \ |\ell|, |k| \geq 1 \;. \end{equation} In order to define the vector $\mathbf{v}$, let us introduce the function $\iota \, : \, \mathbb{N}_* \to \mathbb{N}_*$, {$\iota(n)=2(p+1)n$}. Let us fix a real $\eta>0$ and let us define the components $(\mathbf{v})_k = \hat{v}_k$ of the vector in such a way that $$ |(\mathbf{v})_k|= \begin{cases} {\rm e}^{-\frac{\eta}2 n}& \text{if } k=\iota(n) \text{ for some } n \geq 1\;, \\ 0 & \text{otherwise}\;. 
\end{cases} $$ Thus, the rearranged components $(\mathbf{v})_n^*$ satisfy $|(\mathbf{v})_n^*|={\rm e}^{-\frac{\eta}2 n}$, $n \geq 1$, {whence} $\mathbf{v} \in \ell_G^{\eta,1}(\mathbb{Z})$ (or, equivalently, $v \in {\mathcal A}^{\eta,1}_G$), with $\Vert \mathbf{v} \Vert_{\ell_G^{\eta,1}(\mathbb{Z})}=1$, {according to Definition \ref{def:elpicGev}}. The definition of the mapping $\iota$ and the banded structure of $\mathbf{A}$ imply that the only non-zero components of $\mathbf{A}\mathbf{v}$ are those of indices $\iota(n)+q$ for some $n \geq 1$ and $q \in [-p,p]$. For these components one has $$ (\mathbf{A}\mathbf{v})_{\iota(n)+q}= a_{\iota(n)+q,\iota(n)} (\mathbf{v})_{\iota(n)} \;, $$ thus, recalling (\ref{eq:spars3}), we easily obtain \begin{equation}\label{eq:spars4} \tfrac12 {\rm e}^{-\eta_L p} {\rm e}^{-\frac{\eta}2 n} \leq |(\mathbf{A}\mathbf{v})_{\iota(n)+q}| \leq {\rm e}^{-\frac{\eta}2 n}\;, \qquad q \in [-p,p]\;. \end{equation} This shows that, for any integer $N \geq 1$, $$ \# \{ \ell \, : \, |(\mathbf{A}\mathbf{v})_{\ell}| \geq \tfrac12 {\rm e}^{-\eta_L p} {\rm e}^{-\frac{\eta}2 N} \, \} \geq (2p+1)N \;, $$ hence $$ |(\mathbf{A}\mathbf{v})^*_{(2p+1)N}| \, {\rm e}^{\frac{\eta}2 (2p+1)N} \geq \tfrac12 {\rm e}^{-\eta_L p} {\rm e}^{\eta pN} \to +\infty \qquad \text{as } N \to +\infty \;, $$ i.e., $\mathbf{A}\mathbf{v} \not \in \ell_G^{\eta,1}(\mathbb{Z})$ (or, equivalently, $Lv \not\in {\mathcal A}^{\eta,1}_G$) {regardless of the relative values of $\eta_L$ and $\eta$.} On the other hand, let ${m}_p$ be the smallest integer such that $\frac12 {\rm e}^{-\eta_L p} > {\rm e}^{-\frac{\eta}2 {m}_p}$. Given any $m \geq 1$, let $N \geq 1$ and $Q \in [-p,p]$ be such that $(\mathbf{A}\mathbf{v})^*_m = (\mathbf{A}\mathbf{v})_{\iota(N)+Q}$, {which combined with (\ref{eq:spars4}) yields} $$ {\rm e}^{-\frac{\eta}2 (N+{m}_p)} < |(\mathbf{A}\mathbf{v})^*_m| \leq {\rm e}^{-\frac{\eta}2 N} \;. 
$$ The {rightmost inequality in (\ref{eq:spars4}), namely $|(\bA\bv)_{\iota (N+m_p) +q}| \le e^{-\frac{\eta}{2}(N+m_p)}$}, shows that there are at most $(2p+1)(N+{m}_p)$ components of $\mathbf{A}\mathbf{v}$ that are larger than ${\rm e}^{-\frac{\eta}2 (N+m_p)}$ in modulus. {This implies $m \leq (2p+1)(N+{m}_p)$, whence} $$ {\rm e}^{-\frac{\eta}2 N} \leq {\rm e}^{\frac{\eta}2 {m}_p} {\rm e}^{-\frac{\eta}{2(2p+1)} m} \;. $$ Setting $\bar{\eta}=\frac{\eta}{2p+1}$, we conclude that $\mathbf{A}\mathbf{v} \in \ell_G^{\bar{\eta},1}(\mathbb{Z})$ (or, equivalently, $Lv \in {\mathcal A}^{\bar{\eta},1}_G$), with $$ \Vert \mathbf{A}\mathbf{v} \Vert_{\ell_G^{\bar{\eta},1}(\mathbb{Z})} \leq {\rm e}^{\frac{\eta}2 {m}_p} \Vert \mathbf{v} \Vert_{\ell_G^{\eta,1}(\mathbb{Z})} \;. $$ {Therefore, the sparsity class of $\bA\bv$ deteriorates from $\ell^{\eta,1}_G(\mathbb{Z})$ for $\bv$ to $\ell^{\bar\eta,1}_G(\mathbb{Z})$ with $\bar\eta=\frac{\eta}{2p+1}$.} \endproof } \end{example} The next counterexample shows that, when the stiffness matrix $\bA$ is not banded, in order to have {$\bA\bv \in \ell^{\bar{\eta},\bar{t}}_G(\mathbb{Z})$} it is not enough to choose some $\bar{\eta} < \eta$ as above: a choice of $\bar{t} < t$ is mandatory. \begin{example}[{Dense matrices}]\label{ex:spars2}{\rm Let us {take again $d=t=1$ and} modify the setting of the previous example, by assuming now that the coefficients $\hat{\nu}_h$ satisfy $$ |\hat{\nu}_h| = \sqrt{2\pi} {\rm e}^{-\eta_L |h|} \qquad \text{for all } |h|>0 \;, $$ so that $\mathbf{A}$ is no longer banded, and its elements satisfy \begin{equation}\label{eq:spars30} \tfrac12 {\rm e}^{-\eta_L |\ell-k|} \leq |a_{\ell,k}| \leq {\rm e}^{-\eta_L |\ell-k|} \qquad \text{for all } |\ell|, |k| \geq 1 \;. \end{equation} {If $M>0$ is an arbitrary integer, we now construct a vector $\bv^M = \sum_{n \geq 1} \mathbf{v}^{M,n}$ with gaps of size $\lambda(M)\ge M$ between consecutive non-vanishing entries.
To this end, we introduce the function $\iota_M \, : \, \mathbb{N}_* \to \mathbb{N}_*$ defined as $\iota_M(n):=\lambda(M)n$ and the vectors $\mathbf{v}^{M,n}$ with components} $$ |(\mathbf{v}^{M,n})_k|={\rm e}^{-\frac{\eta}2 n} \delta_{k,\iota_M(n)} \;, \qquad k \in \mathbb{Z} \;. $$ {From (\ref{eq:spars30}) and the fact that only the $\iota_M(n)$-th entry of $\bv^{M,n}$ does not vanish, we obtain \begin{equation}\label{eq:spars31} \tfrac12 {\rm e}^{-\eta_L |\ell - \iota_M(n)|} {\rm e}^{-\frac{\eta}2 n} \leq |(\mathbf{A}\mathbf{v}^{M,n})_\ell| \leq {\rm e}^{-\eta_L |\ell - \iota_M(n)|} {\rm e}^{-\frac{\eta}2 n} \;. \end{equation} As in Example \ref{ex:spars1}, it is obvious that $\mathbf{v}^M \in \ell_G^{\eta,1}(\mathbb{Z})$ with $\Vert \mathbf{v}^M \Vert_{\ell_G^{\eta,1}(\mathbb{Z})}=1$. However, we will prove below that $\|\bA\bv^M\|_{\ell^{\bar\eta,\bar t}_G} \lsim \|\bv^M\|_{\ell^{\eta,1}_G}$ cannot hold uniformly in $M$ for any $\bar\eta>0$ and $\bar t>1/2$. We start by examining the cardinality $\#\mathcal{F}_n$ of the set \[ \mathcal{F}_n:=\{ \ell\in\mathbb{Z} \, : \, |(\mathbf{A}\mathbf{v}^{M,n})_{\ell}| > {\rm e}^{-\frac{\eta}2 M} \, \} \] In view of \eqref{eq:spars31}, the condition $|(\mathbf{A}\mathbf{v}^{M,n})_{\ell}| > {\rm e}^{-\frac{\eta}2 M}$ is satisfied by those $\ell=\iota_M(n)+m$ such that \begin{equation*} 0 \leq |m| \leq \frac{\eta}{2\eta_L}(M-n) \; , \end{equation*} whence $n\le M$ and $\#\mathcal{F}_n\ge \frac{\eta}{\eta_L}(M-n)+1$. We now claim that \begin{equation}\label{eq:C_M} C_M := \# \{ \ell \, : \, |(\mathbf{A}\mathbf{v}^M)_{\ell}| \geq {\rm e}^{-\frac{\eta}2 M} \, \} \geq \sum_{n = 1}^M \# \mathcal{F}_n \; , \end{equation} whose proof we postpone. 
Assuming \eqref{eq:C_M} we see that \[ C_M \geq \sum_{n=1}^{M}\left(\frac{\eta}{\eta_L}(M-n)+1 \right) \sim \frac{\eta}{2\eta_L}M^2 \; , \] or equivalently there are about $N_M=\left\lceil\frac{\eta}{2\eta_L}M^2\right\rceil$ coefficients of $\bA\bv^M$ with modulus at least ${\rm e}^{-\frac{\eta}2 M}$. This implies that the $N_M$-th rearranged coefficient of $\mathbf{A}\mathbf{v}^M$ satisfies \[ |(\mathbf{A}\mathbf{v}^{M})^*_{N_M}| \ge {\rm e}^{-\frac{\eta}{2}M} \ge {\rm e}^{-\frac12 (2\eta_L\eta)^{1/2}N_M^{1/2}} \qquad \text{for all } M \geq 1 \;. \] This proves that for any $\bar{\eta}>0$ and $\bar t> \frac12$, one has $$ \Vert \mathbf{A}\mathbf{v}^M \Vert_{\ell_G^{\bar{\eta},\bar t}(\mathbb{Z})} \geq |(\mathbf{A}\mathbf{v}^{M})^*_{N_M}|\, {\rm e}^{\frac{\bar{\eta}}2 N_M^{\bar t}} \geq {\rm e}^{\frac{\bar{\eta}}2 N_M^{\bar t} - \frac12 (2\eta_L\eta)^{1/2}N_M^{1/2}} \to +\infty \qquad \text{as } M \to \infty \; , $$ whence the following bound cannot be valid: $$ \Vert \mathbf{A}\mathbf{v} \Vert_{\ell_G^{\bar{\eta},\bar t}(\mathbb{Z})} \lsim \Vert \mathbf{v} \Vert_{\ell_G^{\eta,1}(\mathbb{Z})} \;, \qquad \text{for all } \mathbf{v} \in \ell_G^{\eta,1}(\mathbb{Z}) \;. $$ {It remains to prove \eqref{eq:C_M}. We first note that the sets $\mathcal{F}_n$ are disjoint provided $\iota_M(n+1)-\iota_M(n) = \lambda(M) \ge \frac{\eta}{\eta_L}M$. We next set \[ \varepsilon_M:=\min_{1\le n \le M}\min_{\ell\in\mathcal{F}_n} |(\mathbf{A}\mathbf{v}^{M,n})_{\ell}| -{\rm e}^{-\frac{\eta}2 M} > 0 \;, \] which is a constant only dependent on $M$. We} observe that for every $\ell\in\mathcal{F}_n$, there holds \begin{eqnarray} |(\mathbf{A}\mathbf{v}^M)_{\ell}| &\geq& |(\mathbf{A}\mathbf{v}^{M,n})_{\ell}| - \Big| \sum_{p\not= n} (\mathbf{A}\mathbf{v}^{M,p})_{\ell} \Big|\geq{\rm e}^{-\frac{\eta}2 M} + \varepsilon_M - \sum_{p\not= n} | (\mathbf{A}\mathbf{v}^{M,p})_{\ell} |.
\label{example:aux1} \end{eqnarray} {We write $\ell\in\mathcal{F}_n$ as $\ell=\iota_M(n)+m$, make use of \eqref{eq:spars31} and the definition of $\iota_M(n)=\lambda(M)n$ to deduce \[ \sum_{p\not= n} |(\mathbf{A}\mathbf{v}^{M,p})_{\ell} |\leq \sum_{p\not= n} e^{-\eta_L |\ell -\iota_M(p)|}e^{-\frac{\eta}{2}p} \leq \sum_{p\not= n} e^{-\eta_L |m+\lambda(M)(n-p)|}\leq \sum_{p\not= n} e^{-\eta_L (\lambda(M)|n-p| - |m|)}. \] Since $|m|\le \frac{\eta}{2\eta_L}M$, the above inequality gives} \begin{equation}\label{example:aux2} \sum_{p\not= n} |(\mathbf{A}\mathbf{v}^{M,p})_{\ell} |\leq 2 e^{\eta_L |m|} \sum_{q\geq 1} e^{-\eta_L \lambda(M)q} \leq 2e^{\frac{\eta}{2}M} \sum_{q\geq 1} e^{-\eta_L \lambda(M) q}\;. \end{equation} Combining \eqref{example:aux1} and \eqref{example:aux2} yields $$ |(\mathbf{A}\mathbf{v}^M)_{\ell}|\geq {\rm e}^{-\frac{\eta}2 M} + \varepsilon_M - 2e^{\frac{\eta}{2}M} \sum_{q\geq 1} e^{-\eta_L \lambda(M) q}\;. $$ By choosing $\lambda(M)$ sufficiently large, the last term on the right-hand side of the above inequality can be made arbitrarily small, in particular {$\le \varepsilon_M$. We thus get $|(\mathbf{A}\mathbf{v}^M)_{\ell}|\geq {\rm e}^{-\frac{\eta}2 M}$ and prove \eqref{eq:C_M}}. \endproof } \end{example} {Guided by Examples \ref{ex:spars1} and \ref{ex:spars2}}, we are ready to state the main result of this section. {We define \begin{equation}\label{aux-funct} \zeta(t) := \Big( \frac{1+t}{\omega_d^{1+t}} \Big)^{\frac{t}{d(1+t)}} \qquad\forall\; 0<t\le d. \end{equation} } \begin{proposition}[{continuity of $L$ in $\mathcal{A}^{\eta,t}_G$}]\label{propos:spars-res} Let the differential operator $L$ be such that the corresponding stiffness matrix satisfies $\mathbf{A} \in {\mathcal D}_e(\eta_L)$ for some constant $\eta_L>0$. Assume that $v \in {\mathcal A}^{\eta,t}_G$ for some $\eta>0$ and $t \in (0,d]$. Let one of the following two sets of conditions be satisfied.
\begin{enumerate}[\rm (a)] \item {If the} matrix $\mathbf{A}$ is banded with $2p+1$ non-zero diagonals, {let us set} $$ \bar{\eta}= \frac{\eta}{(2p+1)^\tau} \;, \qquad \bar{t}= t \;. $$ \item {If the matrix $\mathbf{A}$ is dense}, but the coefficients $\eta_L$ and $\eta$ satisfy the inequality $\eta< \eta_L \omega_d^{\tau}$, {let us set} $$ \bar{\eta}= \zeta(t)\eta \;, \qquad \bar{t}= \frac{t}{1+t} \;. $$ \end{enumerate} Then, one has $Lv \in {\mathcal A}^{\bar{\eta},\bar{t}}_G$, with \begin{equation}\label{eq:spars11bis} \Vert Lv \Vert_{{\mathcal A}_G^{\bar{\eta},\bar{t}}} \lsim \Vert v \Vert_{{\mathcal A}_G^{\eta,t}} \;. \end{equation} \end{proposition} \par{\noindent\it Proof}. \ignorespaces We adapt to our situation the technique introduced in \cite{CDDV:1998}. Let $L_J$ ($J \geq 0$) be the differential operator obtained by truncating the Fourier expansion of the coefficients of $L$ to the modes $k$ satisfying $|k|\leq J$. Equivalently, $L_J$ is the operator whose stiffness matrix $\mathbf{A}_J$ is defined in (\ref{eq:trunc-matr}); thus, by Property \ref{prop:matrix-estimate} {(exponential case)} we have $$ \Vert L-L_J \Vert = \Vert \mathbf{A}-\mathbf{A}_J \Vert \leq C_{\mathbf{A}} {(J+1)}^{d-1}{\rm e}^{-\eta_L J} \;. $$ On the other hand, for any $j \geq 1$, let $v_j=P_j(v)$ be a best $j$-term approximation of $v$ (with $v_{0}=0$), which therefore satisfies $\Vert v-v_j \Vert \leq {\rm e}^{-\eta \omega_d^{-\tau} j^\tau} \Vert v \Vert_{{\mathcal A}^{{\eta},{t}}_G}$, with $\tau=t/d$. Note that the difference $v_j -v_{j-1}$ consists of a single {Fourier} mode and satisfies as well $$ \Vert v_j-v_{j-1} \Vert \lsim {\rm e}^{-\eta \omega_d^{-\tau} j^\tau} \Vert v \Vert_{{\mathcal A}^{{\eta},{t}}_G} \;. $$ Finally, let us introduce the function $\chi \, : \, \mathbb{N} \to \mathbb{N}$ defined as $\chi(j)=\lceil j^\tau \rceil$, the smallest integer larger than or equal to $j^\tau$. 
For any $J \geq 1$, let $w_J$ be the approximation of $Lv$ defined as $$ w_J = \sum_{j=1}^J L_{\chi(J-j)}(v_j-v_{j-1}) \;. $$ Writing $v=v-v_J + \sum_{j=1}^J (v_j-v_{j-1})$, we obtain $$ Lv-w_J = L(v-v_J) + \sum_{j=1}^J (L-L_{\chi(J-j)})(v_j-v_{j-1})\;. $$ {Assume first that we are in Case (b). Since $L:\ell^2(\mathbb{Z}^d)\to\ell^2(\mathbb{Z}^d)$ is continuous}, the last equation yields \begin{equation}\label{eq:spars11} \Vert Lv-w_J \Vert \lsim \left( {\rm e}^{-\eta \omega_d^{-\tau} J^\tau} + \sum_{j=1}^J {\big(\lceil (J-j)^\tau \rceil + 1\big)^{d-1}} {\rm e}^{-(\eta_L \lceil (J-j)^\tau \rceil +\eta \omega_d^{-\tau} j^\tau )} \right)\Vert v \Vert_{{\mathcal A}^{{\eta},{t}}_G} \;. \end{equation} The exponents of the addends can be bounded from below as follows, since $\tau\le 1$: \begin{eqnarray*} \eta_L \lceil (J-j)^\tau \rceil +\eta \omega_d^{-\tau} j^\tau &=& \eta_L \lceil (J-j)^\tau \rceil - \eta \omega_d^{-\tau} (J-j)^\tau + \eta \omega_d^{-\tau}( (J-j)^\tau + j^\tau) \\ &\geq& \eta_L (J-j)^\tau - \eta \omega_d^{-\tau} (J-j)^\tau + \eta \omega_d^{-\tau}((J-j) + j)^\tau \\ &=& \beta (J-j)^\tau + \eta \omega_d^{-\tau} J^\tau \;, \end{eqnarray*} with $\beta= \eta_L -\eta \omega_d^{-\tau} >0$ by assumption. Then, (\ref{eq:spars11}) yields \begin{equation}\label{eq:spars12} \Vert Lv-w_J \Vert \lsim \left( 1 + \sum_{j=0}^{J-1} {\big(\lceil j^\tau\rceil+1\big)^{d-1}} {\rm e}^{-\beta j^\tau} \right) {\rm e}^{-\eta \omega_d^{-\tau} J^\tau} \Vert v \Vert_{{\mathcal A}^{{\eta},{t}}_G} \lsim {\rm e}^{-\eta \omega_d^{-\tau} J^\tau} \Vert v \Vert_{{\mathcal A}^{{\eta},{t}}_G} \;. \end{equation} On the other hand, by construction $w_J$ belongs to a finite dimensional space $V_{\Lambda_J}$, where { \begin{equation}\label{eq:spars13} |\Lambda_J| \leq \omega_d\sum_{j=1}^J \chi(J-j)^d = \omega_d \sum_{j=0}^{J-1} \lceil j^\tau \rceil^d \sim \frac{\omega_d}{1+t} J^{1+t} \qquad \text{as } J \to \infty \;.
\end{equation} This implies $$ \Vert Lv-w_J \Vert \lsim {\rm e}^{-\bar{\eta} \omega_d^{-\bar\tau} |\Lambda_J|^{\bar\tau}} \Vert v \Vert_{{\mathcal A}^{{\eta},{t}}_G} \;, $$ with $\bar\tau = \frac{\tau}{1+d\tau} = \frac{t}{d(1+t)}$ and $\bar\eta = \left(\frac{1+d\tau}{\omega_d^{1+d\tau}}\right)^{\bar\tau}\eta=\zeta(t)\eta$ as asserted.} {We last consider Case (a)}. One has $L_{\chi(J-j)}=L$ if $\chi(J-j) \geq p$, i.e., if $j \leq J-p^{1/\tau}$; hence the summation in (\ref{eq:spars11}) can be limited to those $j$ satisfying $j_p \leq j \leq J$, where $j_p= \lceil J- p^{1/\tau} \rceil$. Therefore $$ \Vert Lv-w_J \Vert \lsim \left( {\rm e}^{-\eta \omega_d^{-\tau} J^\tau} + \max_{j_p\leq j \leq J} \lceil {(J-j)}^\tau\rceil^{d-1} \sum_{j=j_p}^J {\rm e}^{- \eta \omega_d^{-\tau} j^\tau } \right)\Vert v \Vert_{{\mathcal A}^{{\eta},{t}}_G} \;. $$ Now, $J-j \leq p^{1/\tau}$ if $j_p\leq j \leq J$ and $j^\tau \geq j_p^\tau \geq (J-p^{1/\tau})^\tau \geq J^\tau - p$, whence $$ \Vert Lv-w_J \Vert \lsim \left(1+ p^{d-1+1/\tau}{\rm e}^{\eta \omega_d^{-\tau} p} \right) {\rm e}^{-\eta \omega_d^{-\tau} J^\tau} \Vert v \Vert_{{\mathcal A}^{{\eta},{t}}_G} \;. $$ We conclude by observing that $|\Lambda_J|\leq (2p+1)J$, since each truncated matrix $\mathbf{A}_{\chi(J-j)}$ has at most $2p+1$ non-zero diagonals. \endproof Finally, we discuss the sparsity class of the residual $r=r(u_\Lambda)$ for any Galerkin solution $u_\Lambda$. \begin{proposition}[{sparsity class of the residual}]\label{prop:unif-bound-res-exp} {Let $\mathbf{A}\in\mathcal{D}_e(\eta_L)$ and $\bA^{-1} \in\mathcal{D}_e(\bar\eta_L)$, for constants $\eta_L>0$ and $\bar\eta_L\in(0,\eta_L]$ according to Property \ref{prop:inverse.matrix-estimate}{, and let} $1\leq d \leq 10$.
If $u \in {\mathcal A}^{\eta,t}_G$ for some $\eta>0$ and $t \in (0,d]$ such that {$\eta < \omega_d^{t/(d(1+2t))}{\bar\eta_L}$}, then there exist suitable positive constants $\bar{\eta} \leq \eta$ and $\bar{t} \leq t$ such that $r(u_\Lambda) \in {\mathcal A}_G^{\bar{\eta},\bar{t}}$ for any index set $\Lambda$, with } $$ \Vert r(u_\Lambda) \Vert_{{\mathcal A}_G^{\bar{\eta},\bar{t}}} \lsim \Vert u \Vert_{{\mathcal A}^{\eta,t}_G} \;. $$ \end{proposition} \begin{proof} We first remark that the hypothesis $1\leq d \leq 10$ guarantees $\omega_d\geq2$ (see e.g. \cite[Corollary 2.55]{Folland:real-analysis}); this implies $x < \omega_d^x$ for any $x>0$, {whence} the function $\zeta$ introduced in (\ref{aux-funct}) satisfies $\zeta(t)<1$ for any $t>0$. Assume for the moment we are given $\bar{\eta}$ and $\bar{t}$. By using Proposition \ref{propos:spars-res} and Lemma \ref{L:nlg2}, we get \begin{equation}\label{eq:aux-2} \| {\bf r}_\Lambda\|_{\ell_G^{\bar{\eta},\bar{t}}} = \| \mathbf{A}({\bf u} - {\bf u}_\Lambda )\|_{\ell_G^{\bar{\eta},\bar{t}}} \lesssim \| {\bf u} - {\bf u}_\Lambda \|_{\ell_G^{{\eta_1},{t_1}}} \lesssim {\| {\bf u} \|_{\ell_G^{2^{\tau_1}{\eta_1},{t_1}}} + \| {\bf u}_\Lambda \|_{\ell_G^{2^{\tau_1}{\eta_1},{t_1}}}}, \end{equation} where $\bar{\tau}=\bar{t}/d$, $\tau_1=t_1/d$, and the following relations hold \[ \bar\eta=\zeta(t_1)\eta_1 , \qquad \bar t =\frac{t_1}{1+t_1}<t_1 \ . \] From \eqref{eq:inf-pb-galerkin} we have $ {\bf u}_\Lambda = ({\widehat{\mathbf{A}}_\Lambda})^{-1} (\mathbf{P}_\Lambda{\bf f})$.
Using Property \ref{prop:inf-matrix} and applying Proposition \ref{propos:spars-res} to $(\widehat{\mathbf{A}}_\Lambda)^{-1}$ we get \[ \|{\bf u}_\Lambda\|_{\ell_G^{2^{\tau_1}{\eta_1},{t_1}}} = \|({\widehat{\mathbf{A}}_\Lambda})^{-1} (\mathbf{P}_\Lambda{\bf f})\|_{\ell_G^{2^{\tau_1}{\eta_1},{t_1}}} \lesssim \| \mathbf{P}_\Lambda{\bf f} \|_{\ell_G^{{\eta_2},{t_2}}} \leq \| {\bf f} \|_{\ell_G^{{\eta_2},{t_2}}} \;, \] with \[ 2^{\tau_1}\eta_1={\zeta(t_2)\eta_2 < \eta_2} \ , \qquad {t_1}=\frac{t_2}{1+t_2}<t_2 \ . \] By substituting the above inequality into \eqref{eq:aux-2} and again using Proposition \ref{propos:spars-res} we get \begin{equation} \| {\bf r}_\Lambda\|_{\ell_G^{\bar{\eta},\bar{t}}} \lesssim \| {\bf u} \|_{\ell_G^{2^{\tau_1}{\eta_1},{t_1}}} + \| {\bf f} \|_{\ell_G^{{\eta_2},{t_2}}} = \| {\bf u} \|_{\ell_G^{2^{\tau_1}{\eta_1},{t_1}}} + \| \mathbf{A}{\bf u} \|_{\ell_G^{{\eta_2},{t_2}}} \lesssim \| {\bf u} \|_{\ell_G^{\eta,t}} \end{equation} where \[ \eta_2={\zeta(t)\eta <\eta}\ , \qquad {t_2}=\frac{t}{1+t}<t \ . \] This shows that the assertion holds for the choice \[ \bar{\eta}=\Big(\frac12\Big)^{\frac{t}{d(1+2t)}} \zeta \Big(\frac{t}{1+2t}\Big) \zeta \Big(\frac{t}{1+t}\Big) \zeta (t) \eta, \qquad \bar t = \frac{t}{1+3t}. \] It remains to verify the assumptions of Proposition \ref{propos:spars-res} when $\bA$ is dense. Since $\omega_d\geq 2$ and \[ t_1 = \frac{t}{1+2t} < t_2 = \frac{t}{1+t} < t, \] we have {$\omega_d^{\tau_1}<\omega_d^{\tau_2}<\omega_d^{\tau}$}. Moreover, using $\eta_1<2^{\tau_1}\eta_1<\eta_2<\eta$ { and $\eta_L \ge \bar\eta_L > \omega_d^{-\tau_1}\eta$ yields \[ \eta<\omega_d^\tau \eta_L, \qquad \eta_1 < \omega_d^{\tau_1}\eta_L, \qquad \eta_2 < \omega_d^{\tau_2}\bar\eta_L, \] which are the required conditions to apply Proposition \ref{propos:spars-res} when $\bA$ is dense.
This concludes the proof.} \end{proof} { \begin{remark}[{definition of $\omega_d$}]{\rm The limitation $1\leq d \leq 10$ stems from the fact that the measure of the unit {Euclidean ball $\omega_d$} in $\mathbb{R}^d$ tends to $0$ as $d \to \infty$; in particular, $\omega_d\geq2$ holds precisely for $1\leq d\leq 10$. To avoid such a restriction, one could modify the definition of the Gevrey classes $G^{\eta,t}_p(\Omega)$ given in (\ref{eq:gevrey-class}), by replacing the Euclidean norm $|k|=\Vert k \Vert_2$ appearing in the exponential by the maximum norm $\Vert k \Vert_\infty$. Consequently, throughout the rest of the paper $\omega_d$ would be replaced by the quantity $2^d$, which is $\geq2$ for any $d\geq1$. \endproof } \end{remark} } \section{Coarsening}\label{S:coarsening} We start by considering an example that sheds light on the role of coarsening for the exponential case. We then state and prove a seemingly new coarsening result, which is valid for both classes. \subsection{Example of coarsening}\label{S:example-coarse} Let ${\bf{a}},{\bf{b}}\in\mathbb{R}^p$ for $p\ge1$ be the vectors \[ {\bf{a}} := (1,0,\cdots,0), \quad {\bf{b}} := \frac{1}{p} (1,1,\cdots,1). \] Let ${\bf{v}},{\bf{z}}$ be the sequences defined by \[ {\bf{v}} := \big( {\rm e}^{-\eta k} {\bf{a}} \big)_{k=0}^\infty, \quad {\bf{z}} := \big( {\rm e}^{-\eta k} {\bf{b}} \big)_{k=0}^\infty. \] We first observe that \[ \|{\bf{v}}\|^2 = {p\|{\bf{z}}\|^2} = \frac{1}{1-{\rm e}^{-2\eta}}\;, \qquad \Vert {\bf{v}} \Vert_{\ell^{2\eta,1}_G(\mathbb{Z})} = {p \Vert {\bf{z}} \Vert_{\ell^{2\eta/p,1}_G(\mathbb{Z})} = 1} \] (recall that $\omega_d=2$ for $d=1$).
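The computations above are plain geometric series; the Hilbertian identities can be checked numerically with a minimal sketch (the truncation index $M$ and the values of $\eta$ and $p$ below are illustrative, not taken from the text):

```python
import math

# Truncated versions of v = (e^{-eta k} a)_k and z = (e^{-eta k} b)_k,
# with a = (1,0,...,0) and b = (1/p,...,1/p) in R^p.
eta, p, M = 0.7, 4, 200

# ||v||^2 = sum_k e^{-2 eta k} ||a||^2, with ||a||^2 = 1
norm_v_sq = sum(math.exp(-2 * eta * k) for k in range(M))
# ||z||^2 = sum_k e^{-2 eta k} ||b||^2, with ||b||^2 = p * (1/p)^2 = 1/p
norm_z_sq = sum(math.exp(-2 * eta * k) / p for k in range(M))

geom = 1.0 / (1.0 - math.exp(-2 * eta))   # value of the geometric series
assert abs(norm_v_sq - geom) < 1e-10          # ||v||^2 = 1/(1 - e^{-2 eta})
assert abs(norm_v_sq - p * norm_z_sq) < 1e-10 # ||v||^2 = p ||z||^2
```

The truncation error is of order ${\rm e}^{-2\eta M}$, far below the tolerance used in the assertions.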
Given a parameter $\vare<1$, we now construct a perturbation ${\bf{w}}$ of ${\bf{v}}$ which is much less sparse {than ${\bf{v}}$} by simply scaling ${\bf{z}}$ and adding it to ${\bf{v}}$ {(see Fig. \ref{fig:coarsening} (a))}: \[ {\bf{w}} := {\bf{v}} + \vare {\bf{z}} = \big({\rm e}^{-\eta k} ({\bf{a}}+\vare{\bf{b}})\big)_{k=0}^\infty\; . \] \begin{figure} \caption{Pictorial representation of (a) the components of the vector ${\bf{w}}$; (b) its decreasing rearrangement ${\bf{w}}^*$.} \label{fig:1L} \label{fig:1R} \label{fig:coarsening} \end{figure} The first task is to compute the norms of ${\bf{w}}$. We obviously have $\|{\bf{w}}\| \simeq \|{\bf{v}}\|$. To determine the weak quasi-norm of ${\bf{w}}$ we need to find the rearrangement ${\bf{w}}^*$ (see Fig. \ref{fig:coarsening} (b)). Let $n_1$ be the smallest integer such that \[ \Big(1+\frac{\vare}{p}\Big) {\rm e}^{-\eta n_1} \ge \frac{\vare}{p} {\rm e}^{-\eta} > \Big(1+\frac{\vare}{p}\Big) {\rm e}^{-\eta (n_1+1)}\;, \] namely the index corresponding to the first crossing of the exponential curve ${\rm e}^{-\eta n}$ dictating the behavior of the first portion of the rearranged sequence ${\bf{w}}^*$ (which coincides with the behavior of ${\bf{v}}^*$), and the first plateau of ${\bf{z}}$. This implies \[ \frac{1}{\eta} \log\Big(1+\frac{p}{\vare}\Big) < n_1 \le 1+ \frac{1}{\eta} \log\Big(1+\frac{p}{\vare}\Big)\;. \] Next, let $n_2$ be the smallest integer such that \[ \Big(1+\frac{\vare}{p}\Big) {\rm e}^{-\eta n_2} \ge \frac{\vare}{p} {\rm e}^{-2\eta} > \Big(1+\frac{\vare}{p}\Big) {\rm e}^{-\eta (n_2+1)}\;, \] which corresponds to the beginning of a number of decreasing exponentials preceding the second plateau of ${\bf{w}}^*$. This implies \[ 1+ \frac{1}{\eta} \log\Big(1+\frac{p}{\vare}\Big) < n_2 \le 2+ \frac{1}{\eta} \log\Big(1+\frac{p}{\vare}\Big) \] and shows that $n_2-n_1=1$, and that there is exactly one exponential between the first and second plateaux.
Iterating this argument, we see that the difference between two consecutive {$n_j$'s} is just $1$, and that there is exactly one exponential between two consecutive plateaux {(see Fig.~\ref{fig:coarsening} (b))}. We are now ready to compute the weak quasi-norm of ${\bf{w}}$. Let $\nu_k$ denote the index corresponding to the end of the $k$-th plateau {of ${\bf{w}}$}, which in turn corresponds to the value $w^*_{\nu_k} = {\rm e}^{-\eta k}$. Then \[ \nu_k = pk + n_1 \sim pk + \frac{1}{\eta} \log\Big(1+\frac{p}{\vare}\Big) \;. \] To determine the class of ${\bf{w}}$, we seek $\lambda$ so that ${\bf{w}}\in \ell^{\lambda,1}_G(\mathbb{Z})$, namely \[ \sup_{k\geq 0} \Big({\rm e}^{\lambda \nu_k /2} {\rm e}^{-\eta k}\Big) < \infty \quad \Leftrightarrow \quad \frac12 \lambda pk - \eta k \le 0 \quad \Leftrightarrow \quad \lambda\le \frac{2\eta}{p} \;. \] We thus realize that ${\bf{w}}\in\ell^{2\eta/p,1}_G(\mathbb{Z})$ belongs to a sparsity class much worse than that of ${\bf{v}}$, which deteriorates as the size $p$ of the plateaux tends to $\infty$. On the other hand, we note that {the restrictions ${\bf{w}}^*_{|[1,n_1]}={\bf{v}}^*_{|[1,n_1]}$ coincide, thereby} showing that the decay rate of the first part of ${\bf{w}}^*$ is the same as that of ${\bf{v}}^*$ {(see Fig.~\ref{fig:coarsening}(b))}. This example explains the need to coarsen the vector ${\bf{w}}$ starting at the latest at $n_1$, to eliminate the tail of ${\bf{w}}^*$ which decays with rate $2\eta/p$ instead of the optimal rate $2\eta$ of ${\bf{v}}$. {In addition, we} observe that the best $n_1$-term approximation of ${\bf{w}}$ satisfies \[ \|{\bf{w}}-{\bf{w}}_{n_1}\|^2 = \sum_{k=0}^\infty p \frac{\vare^2}{p^2} {\rm e}^{-2k\eta} = \frac{\vare^2}{p} \frac{1}{1-{\rm e}^{-2\eta}} = { \|{\bf{v}}-{\bf{w}}\|^2 = \vare^2 \|{\bf{z}}\|^2}\; , \] {which is precisely the size of the perturbation error of ${\bf{v}}$.
Given an error tolerance $\delta\ge\vare\|{\bf{z}}\|$, the best $N$-term approximation ${\bf{w}}_N$ of ${\bf{w}}$ satisfying $\|{\bf{w}}-{\bf{w}}_N\|\le\delta$ would require $$ N \sim \frac1\eta \log\frac{1}{\delta} = \frac2{2\eta} \log\frac{\Vert {\bf{v}} \Vert_{\ell^{2\eta,1}_G(\mathbb{Z})}}{\delta}\;. $$ } \subsection{New coarsening result}\label{S:new-coarsening} We extract the following lesson from the example of Sect. \ref{S:example-coarse}: for as long as we deal with the first part of {${\bf{w}}^*$}, which has a decay rate ${\rm e}^{-k\eta}$ dictated by that of {${\bf{v}}^*$}, we could coarsen ${\bf{w}}$ and obtain an approximation of both ${\bf{w}}$ and ${\bf{v}}$ with {the} decay rate ${\rm e}^{-k\eta}$ of ${\bf{v}}$. This requires limiting the accuracy to size {$\|{\bf{v}}-{\bf{w}}\|$} since a smaller accuracy utilizes the tail of ${\bf{w}}$ which has a slower decay ${\rm e}^{-k\frac{\eta}{p}}$. We express this heuristic in the following theorem, which goes back to Cohen, Dahmen, and DeVore {\cite{CDDV:1998}}. However, our proof is much more elementary and the statement much more precise. Although the result holds for the general setting of Sect. \ref{sec:abstract-nl}, we just present it for the exponential case, since it will be used only in this situation. \begin{theorem}[coarsening]\label{T:coarsening} Let $\vare>0$ and let $v \in {\mathcal A}^{\eta,t}_G$ and $w \in V$ be such that \[ \|v-w \| \le \vare. \] Let $N=N(\vare)$ be the smallest integer such that the best $N$-term approximation $w_N$ of $w$ satisfies \[ \|w - w_N\| \le 2\vare. \] Then, {$\|v-w_N\|\le3\vare$ and} \[ N \le \frac{\omega_d}{\ \eta^{d/t}} \left(\log\frac{\Vert v \Vert_{{\mathcal A}^{\eta,t}_G}}{\vare}\right)^{d/t}\!\!\!+1 \;. \] \end{theorem} \begin{proof} Let $\Lambda_\vare$ be the set of indices corresponding to the best approximation of {$v$} with accuracy $\vare$.
So $\Lambda_\vare$ is a minimal set with properties \[ \|v - P_{\Lambda_\vare} v\|\le \vare, \qquad |\Lambda_\vare| \le \frac{\omega_d}{\ \eta^{d/t}} \left(\log\frac{\Vert v \Vert_{{\mathcal A}^{\eta,t}_G}}{\vare}\right)^{d/t}\!\!\! +1 \;. \] If $z=w-v$, then \begin{align*} \|w - P_{\Lambda_\vare}w \| & \le \|(v+z) - P_{\Lambda_\vare}(v+z)\| = \|(v- P_{\Lambda_\vare} v) + (z-P_{\Lambda_\vare} z )\| \\ & \le \|v- P_{\Lambda_\vare} v\| + \|z-P_{\Lambda_\vare} z\| \le \vare + \|z\| \le 2\vare \;, \end{align*} because $I-P_{\Lambda_\vare}$ is the projector onto $V_{\mathbb{Z}^d \setminus \Lambda_\vare}$. Since $N$ is the cardinality of the smallest set satisfying the above relation, we deduce that $N \le |\Lambda_\vare|$. Moreover, the triangle inequality yields $\|v-w_N\| \le \|v-w\| + \|w-w_N\| \le 3\vare$. This concludes the proof. \end{proof} \section{Optimality properties of adaptive algorithms: algebraic case}\label{sec:complexity} The rest of the paper will be devoted to investigating complexity issues for the sequence of approximations $u_n=u_{\Lambda_n}$ generated by any of the adaptive algorithms presented in Sect. \ref{sec:plain-adapt-alg}. In particular, we wish to estimate the cardinality of each $\Lambda_n$ and check whether its growth is ``optimal'' with respect to the sparsity class ${\mathcal A}_\phi$ of the exact solution, in the sense that $|\Lambda_n|$ is comparable to the cardinality of the index set of the best approximation of $u$ yielding the same error $\Vert u - u_n \Vert$. The algebraic case will be dealt with in the present section, whereas the exponential case will be analyzed in the next one. The two cases differ in that no coarsening is needed for optimality in the former case, whereas we will prove optimality in the latter case only for the algorithms that incorporate a coarsening step.
The reason for such a difference can be {attributed}, on the one hand, to the slower growth of the activated degrees of freedom in the exponential case as opposed to the algebraic case and, on the other hand, to the discrepancy in the sparsity classes of the residuals and the solution in the exponential case, {discussed in Sect. \ref{subsec:spars-res-exp}.} \subsection{ADFOUR with moderate D\"orfler marking}\label{subsec:moderate-adfour-alg} The approach followed in the sequel, which has been proposed in \cite{Gantumur-Stevenson:2007} in the wavelet framework and adopted in \cite{Stevenson:2007, Nochetto-et-al:2008} in the finite-element framework, allows us to prove the optimality of the algorithm in the algebraic case, provided D\"orfler marking is not too aggressive. The following two lemmas will be useful in the subsequent analysis. \begin{lemma}[{localized a posteriori upper bound}]\label{lem:optim1} Let $\Lambda \subset \Lambda_* \subset \mathbb{Z}^d$ be nonempty subsets of indices. Let $u_\Lambda \in V_\Lambda$ and $u_{\Lambda_*} \in V_{\Lambda_*}$ be the Galerkin approximations of Problem (\ref{eq:four.1}). Then $$ \tvert u_{\Lambda_*} - u_{\Lambda} \tvert^2 \leq \frac1{\alpha_*} \sum_{k \in \Lambda_* \setminus \Lambda} |\hat{R}_k(u_\Lambda)|^2 = \frac1{\alpha_*} \eta^2(u_\Lambda, \Lambda_*) \;. $$ \end{lemma} \begin{proof} One has $$ \tvert u_{\Lambda_*} - u_{\Lambda} \tvert^2 =a( u_{\Lambda_*} - u_{\Lambda}, u_{\Lambda_*} - u_{\Lambda}) =(f, u_{\Lambda_*} - u_{\Lambda})-a(u_\Lambda, u_{\Lambda_*} - u_{\Lambda})= \sum_{k \in \Lambda_*}\hat{r}_k(u_\Lambda) {(\hat{u}_{\Lambda_*} - \hat{u}_{\Lambda})_k} $$ {because $\Lambda_*$ is the support of $u_{\Lambda_*} - u_{\Lambda}$.} The {asserted} result follows immediately by the Cauchy-Schwarz inequality, {upon} recalling that $\hat{r}_k(u_\Lambda)=0$ for all $k \in \Lambda$.
\end{proof} \begin{lemma}[{D\"orfler property}]\label{lem:optim2} Let $\Lambda \subset \Lambda_* \subset \mathbb{Z}^d$ be nonempty subsets of indices. Let $u_\Lambda \in V_\Lambda$ and $u_{\Lambda_*} \in V_{\Lambda_*}$ be the Galerkin approximations of Problem (\ref{eq:four.1}). {Let} the marking parameter $\theta$ satisfies $\theta \in (0,\theta_*)$, where $\theta_*=\sqrt{\frac{\alpha_*}{\alpha^*}}$, {and} set $\mu_\theta=1-\frac{\alpha^*}{\alpha_*}\theta^2>0$. If $$ \tvert u- u_{\Lambda_*} \tvert^2 \leq \mu \tvert u - u_{\Lambda} \tvert^2 \;, $$ for some $\mu \in (0, \mu_\theta]$, then $\Lambda^*$ fulfils D\"orfer's condition, i.e., $$ \eta(u_{\Lambda}, \Lambda^*) \geq \theta \eta(u_{\Lambda}) \;. $$ \end{lemma} \par{\noindent\it Proof}. \ignorespaces {Since $u - u_{\Lambda_*} \perp u_{\Lambda} - u_{\Lambda_*}$ in the energy norm because of Pythagoras, the assumption yields} $$ \tvert u - u_{\Lambda} \tvert^2 = \tvert u - u_{\Lambda_*} \tvert^2 + \tvert u_{\Lambda_*} - u_{\Lambda} \tvert^2 \leq \mu \tvert u - u_{\Lambda} \tvert^2 + \tvert u_{\Lambda_*} - u_{\Lambda} \tvert^2 \; . $$ {Invoking the lower bound in (\ref{eq:four.2.3}) gives} $$ \tvert u_{\Lambda_*} - u_{\Lambda} \tvert^2 \geq (1-\mu) \tvert u - u_{\Lambda} \tvert^2 \geq (1-\mu)\frac1{\alpha^*} \eta^2(u_{\Lambda}) \; , $$ {whence applying Lemma \ref{lem:optim1} implies} $$ \eta^2(u_\Lambda, \Lambda_*) \geq (1-\mu)\frac{\alpha_*}{\alpha^*} \eta^2(u_{\Lambda}) \geq (1-\mu_\theta)\frac{\alpha_*}{\alpha^*} \eta^2(u_{\Lambda}) = \theta^2 \eta^2(u_{\Lambda}). $$ {This concludes the proof.}\endproof We are ready to estimate the growth of degrees of freedom {generated by the algorithm {\bf ADFOUR} of} Sect. \ref{sec:defADFOUR}. For the moment, we place ourselves in the abstract framework of Sect. \ref{sec:abstract-nl}, only the final result {being specifically for} the algebraic case. 
\begin{proposition}[{cardinality of $\partial\Lambda_n$}]\label{prop:optim1} Let $\theta$ satisfy the condition stated in Lemma \ref{lem:optim2}, and let $\mu \in (0,\mu_\theta]$ be fixed. Let $\{\Lambda_n, \, u_n \}_{n\geq 0}$ be the sequence generated by the adaptive algorithm {\bf ADFOUR}, {and set} $\varepsilon_n^2=\mu \tvert u - u_n \tvert^2$. If the solution $u$ belongs to the sparsity class ${\mathcal A}_\phi$, then \begin{equation} |\partial \Lambda_n|=|\Lambda_{n+1}|-|\Lambda_n| \leq \kappa \, \phi^{-1}\left(\frac{\varepsilon_n}{\Vert u \Vert_{{\mathcal A}_\phi}}\right) \;, \qquad{\forall\ n\ge0\;,} \end{equation} {where $\kappa>1$ is the constant in \eqref{eq:nl.gen.2}.} \end{proposition} \begin{proof} {Let $\varepsilon=\varepsilon_n$ and make use of (\ref{eq:nl.gen.2}) for $u\in{\mathcal A}_\phi$}: there exist $\Lambda_{\varepsilon}$ and $w_{\varepsilon} \in V_{\Lambda_{\varepsilon}}$ such that $$ \tvert u - w_{\varepsilon} \tvert^2 \leq \varepsilon^2 \qquad \text{and} \qquad |\Lambda_{\varepsilon}| \leq \kappa \, \phi^{-1}\left(\frac{\varepsilon}{\Vert u \Vert_{{\mathcal A}_\phi}}\right). $$ Let $\Lambda_*=\Lambda_n \cup \Lambda_\varepsilon$ be the overlay of the two index sets, and let $u_* \in V_{\Lambda_*}$ be the Galerkin approximation of Problem (\ref{eq:four.1}). Then, since $V_{\Lambda_\varepsilon} \subseteq V_{\Lambda_*}$, we have $$ \tvert u - u_* \tvert^2 \leq \tvert u - w_{\varepsilon} \tvert^2 \leq \varepsilon^2 = \mu \tvert u - u_n \tvert^2 \;. $$ Thus, we are entitled to apply Lemma \ref{lem:optim2} to $\Lambda_n$ and $\Lambda_*$, yielding $$ \eta(u_n, \Lambda_*) \geq \theta \eta(u_n) \;.
$$ By the minimality property of the cardinality of $\Lambda_{n+1}$ among all sets satisfying the D\"orfler property for $u_n$ (Assumption \ref{ass:minimality}), we deduce that $|\Lambda_{n+1}|\leq |\Lambda_*| \leq |\Lambda_n| + |\Lambda_\varepsilon|$, i.e., \begin{equation}\label{eq:optim11} |\Lambda_{n+1}| - |\Lambda_n| \leq |\Lambda_\varepsilon| \;, \end{equation} whence the result. \end{proof} \begin{corollary}[{cardinality of $\Lambda_n$: general case}]\label{cor:optim1} {Let the assumptions of Proposition \ref{prop:optim1} be valid and $\rho = \sqrt{1-\frac{\alpha_*}{\alpha^*}\theta^2}$ be given by (\ref{eq:def_rhotheta}). Then} \begin{equation}\label{eq:optim12} |\Lambda_n| \leq \kappa \, \sum_{k=0}^{n-1} \phi^{-1}\left(\rho^{k-n} \, \frac{\varepsilon_n}{\Vert u \Vert_{{\mathcal A}_\phi}}\right) \;, \qquad {\forall\ n\ge0 \;.} \end{equation} \end{corollary} \begin{proof} Recalling that $|\Lambda_0|=0$, the previous proposition yields $$ |\Lambda_n| = \sum_{k=0}^{n-1} |\partial\Lambda_k| \leq \kappa \, \sum_{k=0}^{n-1} \phi^{-1}\left( \frac{\varepsilon_k}{\Vert u \Vert_{{\mathcal A}_\phi}}\right) \;. $$ On the other hand, by Theorem \ref{teo:four1} one has \begin{equation}\label{eq:optim12bis} {\varepsilon_n = \sqrt{\mu} \tvert u-u_n \tvert \leq \sqrt{\mu} \rho^{n-k} \tvert u-u_k \tvert = \rho^{n-k} \varepsilon_k \qquad\forall\ 0\le k \le n-1 \; ,} \end{equation} and we conclude recalling the monotonicity of $\phi$. \end{proof} At this point, we assume we are in the algebraic case, i.e., $u \in {\mathcal A}_B^s$ for some $s>0$. Then, (\ref{eq:optim12}) reads $$ |\Lambda_n| \leq \kappa \, \mu^{-d/2s} \tvert u-u_n \tvert^{-d/s} \Vert u \Vert_{{\mathcal A}_B^s}^{d/s} { ~\sum_{k=0}^{n-1} \left(\rho^{d/s}\right)^{n-k}\;, \qquad\forall\ n\ge0 \;.} $$ Summing up the geometric series and using (\ref{eq:four.1bis}), we arrive at the following result.
\begin{theorem}[{cardinality of $\Lambda_n$: algebraic case}]\label{teo:optim2} Under the assumptions of Proposition \ref{prop:optim1}, the growth of the active degrees of freedom produced by {\bf ADFOUR} in the algebraic case is estimated as follows: $$ {|\Lambda_n| \le C_* \, \Vert u-u_n \Vert^{-d/s} \Vert u \Vert_{{\mathcal A}_B^s}^{d/s}\;, \qquad\forall\ n\ge0} \;, $$ where the constant $C_*$ depends only on $\alpha_*$, $\mu$ and $\rho$. \endproof \end{theorem} This result is ``optimal'' in that the number of active degrees of freedom is {governed}, up to a multiplicative constant, by the same law {(\ref{eq:nl.gen.2})-\eqref{eq:nlb.200}} as for the best approximation of $u$. The optimality of this result is related to the ``sufficiently fast'' growth of the active degrees of freedom: the increment of degrees of freedom at each iteration may be comparable to the total number of previously activated degrees of freedom (geometric growth). \subsection{{A-ADFOUR:} Aggressive ADFOUR} We now examine Algorithm {\bf A-ADFOUR}, defined in Sect. \ref{subsec:aggress}, which allows the choice of the parameter $\theta$ as close to 1 as desired. Such a feature is in the spirit of high regularity, or equivalently a large value of $s$ for $u\in {\mathcal A}_B^s$. This is a novel approach which combines the contraction property in Theorem \ref{teo:four3} and the key property of uniform boundedness of the residuals stated in Proposition \ref{prop:unif-bound-res-alg}. \begin{theorem}[{cardinality of $\Lambda_n$ for {\bf A-ADFOUR}}]\label{T:aggressive-ADFOUR} Let the assumptions of Property \ref{prop:inverse.alg} and Theorem \ref{teo:four3} be fulfilled, and let $u \in {\mathcal A}_B^s$ for some $s>0$.
Then, the growth of the active degrees of freedom produced by {\bf A-ADFOUR} is estimated as follows: $$ {|\Lambda_n| \le C_* \, J^d \, \Vert u-u_n \Vert^{-d/s} \Vert u \Vert_{{\mathcal A}_B^s}^{d/s}\;, \qquad\forall\ n\ge0\;.} $$ Here, $J$ is the ($\theta$-dependent) input parameter of {\bf ENRICH}, whereas the constant {$C_*$ is independent of $\theta$.} \end{theorem} \begin{proof} At each iteration $n$, the set $\widetilde{\partial\Lambda}_n$ selected by {\bf D\"ORFLER} is minimal, hence by (\ref{eq:four.2.5.5bis}), (\ref{eq:nl.gen.2o}) and (\ref{eq:nlb.201}), one has $$ |\widetilde{\partial\Lambda}_n| \leq \left(\sqrt{1-\theta^2} \, \Vert r_n \Vert\right)^{-d/s} \Vert r_n \Vert_{{\mathcal A}_B^s}^{d/s} \ + \ 1 \;. $$ Using (\ref{eq:four.2.1bis}) and Proposition \ref{prop:unif-bound-res-alg}, this bound becomes $$ |\widetilde{\partial\Lambda}_n| \lsim \left(\sqrt{1-\theta^2} \, \tvert u-u_n \tvert\right)^{-d/s} \Vert u \Vert_{{\mathcal A}_B^s}^{d/s} \;. $$ On the other hand, estimate (\ref{eq:estim-enrich}) for the procedure {\bf ENRICH} yields $$ |\partial\Lambda_n| \lsim J^d \, \left(\sqrt{1-\theta^2} \, \tvert u-u_n \tvert\right)^{-d/s} \Vert u \Vert_{{\mathcal A}_B^s}^{d/s} \;. $$ Now, as in the proof of Corollary \ref{cor:optim1}, \begin{equation}\label{eq:optim14} |\Lambda_n| \lsim J^d \, (1-\theta^2)^{-d/s} \left( \sum_{k=0}^{n-1} \tvert u-u_k \tvert^{-d/s}\right) \Vert u \Vert_{{\mathcal A}_B^s}^{d/s} \;. \end{equation} The contraction property of Theorem \ref{teo:four3} yields for $0 \leq k \leq n-1$ $$ \tvert u-u_n \tvert \le \bar{\rho}^{n-k} \tvert u-u_k \tvert \;, $$ with $\bar{\rho}=C_0\sqrt{1-\theta^2}<1$ (see \ref{eq:aggr3}); thus, $$ \sum_{k=0}^{n-1} \tvert u-u_k \tvert^{-d/s} \le \tvert u-u_n \tvert^{-d/s} \sum_{k=0}^{n-1} \bar{\rho}^{\frac{d}{s}(n-k)}\lsim \bar{\rho}^{\frac{d}{s}} \tvert u-u_n \tvert^{-d/s} \lsim (1-\theta^2)^{d/s} \Vert u-u_n \Vert^{-d/s} \;. 
$$ Substituting into (\ref{eq:optim14}), the powers of $1-\theta^2$ cancel out, and the {asserted estimate follows.} \end{proof} \section{Optimality properties of adaptive algorithms: exponential case}\label{sec:adfour-coarse} From now on, let us assume that $u \in {\mathcal A}_G^{\eta,t}$ for some $\eta>0$ and $t \in (0,d]$. {Let us first} observe that none of the arguments that led to the complexity estimates of the previous section can be extended to the present situation. For {\bf ADFOUR} with moderate D\"orfler marking, Corollary \ref{cor:optim1} in which $\phi^{-1}$ is replaced by its logarithmic expression yields a bound for $|\Lambda_n|$ which is at least $n$ times larger than the optimal bound $$ |\Lambda_n^{\rm best}| \leq \kappa \, \frac{\omega_d}{\eta^{d/t}} \left( \log \frac{\Vert u \Vert_{{\mathcal A}_G^{\eta,t}}}{\varepsilon_n} \right)^{d/t} $$ for the given accuracy $\varepsilon_n$ (see the proof of Proposition \ref{prop:opt-inner-loop} for more details, in a similar situation). Manifestly, the first cause of non-optimality is the crude bound (\ref{eq:optim11}), which in this case is no longer absorbed by the summation of a geometric series as in the algebraic case. On the other hand, for {\bf A-ADFOUR} a sharp estimate of the increment $|\widetilde{\partial\Lambda}_n|$ is indeed used in the proof of Theorem \ref{T:aggressive-ADFOUR}, but this involves the sparsity class of the residual, which in the exponential case may be different from that of the solution, as discussed in Sect. \ref{subsec:spars-res-exp}. Incorporating a coarsening step in the algorithms allows us to avoid, at least in part, these drawbacks. For these reasons, hereafter we investigate the optimality properties of the two algorithms with coarsening presented in Sect. \ref{sec:plain-adapt-alg}. \subsection{{C-ADFOUR:} ADFOUR with coarsening} Let us {now} discuss the complexity of Algorithm {\bf C-ADFOUR}, defined in Sect. \ref{subsec:coarse-adfour}.
The following optimal result holds. \begin{theorem}[{cardinality of $\Lambda_n$ for {\bf C-ADFOUR}}]\label{teo:optim.adcoars.1} Assume that the solution $u$ belongs to ${\mathcal A}^{\eta,t}_G$, for some $\eta >0$ and $t \in (0,d]$. Then, there exists a constant $C>1$ such that the cardinality of the set $\Lambda_n$ of the active degrees of freedom produced by {\bf C-ADFOUR} satisfies the bound \begin{equation}\label{Lambda-n} |\Lambda_n| \leq \kappa \frac{\omega_d}{\eta^{d/t}} \left( \log \frac{\Vert u \Vert_{{\mathcal A}^{\eta,t}_G}}{\Vert u-u_n \Vert} + \log C \right)^{d/t} \;, \qquad\forall\ n\ge0. \end{equation} \end{theorem} \begin{proof} Since each Galerkin approximation $u_{n+1}$ comes just after a call {$\Lambda_{n+1}:= {\bf COARSE} (u_{n,k+1}, \vare_n)$ with threshold $\varepsilon_n=\alpha_*^{-1/2}\|r_{n,k+1}\|\ge\|u-u_{n,k+1}\|$}, Theorem \ref{T:coarsening} yields \[ |\Lambda_{n+1}| \leq \frac{\omega_d}{\ \eta^{d/t}} \left(\log\frac{\Vert u \Vert_{{\mathcal A}^{\eta,t}_G}}{\varepsilon_n}\right)^{d/t}\!\!\!+1 \;. \] On the other hand, (\ref{eq:four.1bis}) and Property \ref{prop:cons-coarse} yield \begin{equation}\label{u-un+1} {\Vert u-u_{n+1} \Vert \le \alpha_*^{-1/2} \tvert u-u_{n+1} \tvert \leq 3 (\alpha^*/\alpha_*)^{1/2}\varepsilon_n.} \end{equation} {Since $n\ge-1$, this} gives the result, up to a shift in the index. \end{proof} Next, we investigate the optimality of each inner loop. We already know from Theorem \ref{teo:adfour-coarse-convergence} that the number $K_n$ of inner iterations is bounded independently of $n$. So, we just estimate the growth of degrees of freedom when going from $k$ to $k+1$. We only consider the case of a moderate D\"orfler marking, i.e., we {subject} $\theta$ to the condition stated in Lemma \ref{lem:optim2} (since the case of $\theta$ close to 1 will be covered in the next subsection). The following result holds. 
\begin{proposition}[{cardinality of $\Lambda_{n,k}$ for {\bf C-ADFOUR}}]\label{prop:opt-inner-loop} Assume that $u \in {\mathcal A}^{\eta,t}_G$ for some $\eta >0$ and $t \in (0,d]$, and that the marking parameter satisfies $\theta \in (0,\theta_*)$, where $\theta_*=\sqrt{\frac{\alpha_*}{\alpha^*}}$. Then, there exist constants $C>1$ and $\bar{\eta} \in (0,\eta]$ such that, for all $n \geq 0$ and all $k=1, \dots , K_n$, one has $$ |\Lambda_{n,k}| \leq \kappa \frac{\omega_d}{\bar{\eta}^{d/t}} \left( \log \frac{\Vert u \Vert_{{\mathcal A}^{\eta,t}_G}}{\Vert u-u_{n+1} \Vert} + \log C \right)^{d/t} \;. $$ \end{proposition} \begin{proof} Each inner loop of {\bf C-ADFOUR} can be viewed as a truncated version of {\bf ADFOUR}; hence, the analysis of this algorithm given in Sect. \ref{subsec:moderate-adfour-alg} can be adapted to the exponential case. In particular, for each increment $\partial\Lambda_{n,j}$ of degrees of freedom, Proposition \ref{prop:optim1} gives $$ |\partial\Lambda_{n,j}| \le \frac{\omega_d}{\ \eta^{d/t}} \left(\log\frac{\Vert u \Vert_{{\mathcal A}^{\eta,t}_G}}{\varepsilon_{n,j}}\right)^{d/t}\!\!\!+1 \; , {\qquad\forall\ 0\le j \le K_n.} $$ Since {$\varepsilon_{n,K_n} \leq \rho^{K_n-j}\varepsilon_{n,j}$ by (\ref{eq:optim12bis}), it follows that} $$ |\partial\Lambda_{n,j}| \le \frac{\omega_d}{\ \eta^{d/t}} \left(\log\frac{\Vert u \Vert_{{\mathcal A}^{\eta,t}_G}}{\varepsilon_{n,K_n}} + (K_n-j)|\log \rho| \right)^{d/t}\!\!\!+1 \;. $$ Thus, recalling that $t \leq d$ by assumption, we have \begin{eqnarray*} |\Lambda_{n,k}|^{t/d} &\leq& |\Lambda_{n}|^{t/d} + \sum_{j=0}^{k-1} |\partial\Lambda_{n,j}|^{t/d} \\ &\leq& |\Lambda_{n}|^{t/d} + \kappa \frac{\omega_d^{t/d}}{\eta} \left(k \log\frac{\Vert u \Vert_{{\mathcal A}^{\eta,t}_G}}{\varepsilon_{n,K_n}} + O(K_n^2) |\log \rho| \right) \;.
\end{eqnarray*} {Combining \eqref{eq:adgev.2}, \eqref{Lambda-n}, and \eqref{u-un+1} with $k \leq K_n \lsim 1$, we conclude the assertion with $\bar\eta \le \eta/(1+K_n)$.} \end{proof} We remark that the previous result provides a complexity bound, relative to the sparsity class ${\mathcal A}^{\eta,t}_G$ of the solution, which is optimal with respect to the index $t$, {but} suboptimal with respect to the index $\bar{\eta}< \eta$. \subsection{{PC-ADFOUR:} Predictor/Corrector ADFOUR} At last, we discuss the optimality of Algorithm {\bf PC-ADFOUR}, presented in the second part of Sect. \ref{subsec:coarse-adfour}. \begin{theorem}[{cardinality of {\bf PC-ADFOUR}}]\label{teo:pc-adfour} Suppose that $u \in {\mathcal A}^{\eta,t}_G$, for some $\eta >0$ and $t \in (0,d]$. Then, there exists a constant $C>1$ such that the cardinality of the set $\Lambda_n$ of the active degrees of freedom produced by {\bf PC-ADFOUR} satisfies the bound $$ |\Lambda_n| \leq \kappa \frac{\omega_d}{\eta^{d/t}} \left( \log \frac{\Vert u \Vert_{{\mathcal A}^{\eta,t}_G}}{\Vert u-u_n \Vert} + \log C \right)^{d/t} \;, {\qquad\forall\ n\ge0.} $$ If, in addition, the assumptions of Proposition \ref{prop:unif-bound-res-exp} are satisfied, then the cardinality of the intermediate sets $\widehat\Lambda_{n+1}$ activated in the predictor step can be estimated as $$ |\widehat\Lambda_{n+1}| \leq | \Lambda_n | + \kappa J^d \frac{\omega_d^2}{\bar{\eta}^{d/\bar{t}}} \left( \log \frac{\Vert u \Vert_{{\mathcal A}^{\eta,t}_G}}{\Vert u-u_n \Vert} + \big|\log\sqrt{1-\theta^2}\,\big| + \log C \right)^{d/\bar{t}} \;, {\qquad\forall\ n\ge0\;,} $$ where $J$ is the input parameter of {\bf ENRICH}, and $\bar{\eta}\leq \eta$, $\bar{t}\leq t$ are the parameters which occur in the thesis of Proposition \ref{prop:unif-bound-res-exp}. \end{theorem} \begin{proof} The proof of the first bound is the same as {that} of Theorem \ref{teo:optim.adcoars.1}. 
Concerning the second bound, {we invoke Proposition \ref{prop:unif-bound-res-exp} to write $r_n \in \mathcal{A}_G^{\bar\eta,\bar t}$ and recall that $\|r_n-P_{\widetilde{\partial\Lambda_n}} r_n\|\le (1-\theta^2)^{1/2}\|r_n\|$ for each iteration $n$. This, combined with the minimality of the set $\widetilde{\partial\Lambda}_n$ selected by {\bf D\"ORFLER}, yields} $$ |\widetilde{\partial\Lambda}_n| \leq \frac{\omega_d}{\bar{\eta}^{d/\bar{t}}} \left( \log \frac{\Vert r_n \Vert_{{\mathcal A}^{\bar{\eta},\bar{t}}_G}}{\sqrt{1-\theta^2}\Vert r_n \Vert} \right)^{d/\bar{t}} \!\! + 1\;. $$ Estimate (\ref{eq:estim-enrich}) for {\bf ENRICH} yields $$ |{\partial\Lambda}_n| \leq \kappa J^d \, \frac{\omega_d^2}{\bar{\eta}^{d/\bar{t}}} \left( \log \frac{\Vert r_n \Vert_{{\mathcal A}^{\bar{\eta},\bar{t}}_G}}{\sqrt{1-\theta^2}\Vert r_n \Vert} \right)^{d/\bar{t}} \;. $$ Using {(\ref{eq:four.2.1})} and Proposition \ref{prop:unif-bound-res-exp}, {this time to replace $r_n$ by $u$ and $u-u_n$,} we obtain the {desired} result. \end{proof} We observe that in the case $\bar\eta<\eta$ and $\bar t< t$, the cardinalities $|\widehat\Lambda_{n+1}|$ and $| \Lambda_n|$ are not bounded by comparable quantities. This looks like a non-optimal result, yet it appears to be intimately related to the fact that in general the residuals belong to a worse sparsity class than the solution. \end{document}
\begin{document} \allowdisplaybreaks \title{On some extension of Gauss' work and applications} \footnote{ 2010 \textit{Mathematics Subject Classification}. Primary 11E16; Secondary 11F03, 11G15, 11R37.} \footnote{ \textit{Key words and phrases}. binary quadratic forms, class field theory, complex multiplication, modular functions.} \footnote{ \thanks{ The first (corresponding) author was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2016R1A5A1008055 and 2017R1C1B2010652). The third author was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2017R1A2B1006578), and by Hankuk University of Foreign Studies Research Fund of 2018.} } \begin{abstract} Let $K$ be an imaginary quadratic field of discriminant $d_K$ with ring of integers $\mathcal{O}_K$, and let $\tau_K$ be an element of the complex upper half-plane so that $\mathcal{O}_K=[\tau_K,\,1]$. For a positive integer $N$, let $\mathcal{Q}_N(d_K)$ be the set of primitive positive definite binary quadratic forms of discriminant $d_K$ with leading coefficients relatively prime to $N$. Then, with any congruence subgroup $\Gamma$ of $\mathrm{SL}_2(\mathbb{Z})$ one can define an equivalence relation $\sim_\Gamma$ on $\mathcal{Q}_N(d_K)$. Let $\mathcal{F}_{\Gamma,\,\mathbb{Q}}$ denote the field of meromorphic modular functions for $\Gamma$ with rational Fourier coefficients. We show that the set of equivalence classes $\mathcal{Q}_N(d_K)/\sim_\Gamma$ can be equipped with a group structure isomorphic to $\mathrm{Gal}(K\mathcal{F}_{\Gamma,\,\mathbb{Q}}(\tau_K)/K)$ for some $\Gamma$, which further generalizes the classical theory of complex multiplication over ring class fields.
\end{abstract} \section {Introduction} For a negative integer $D$ such that $D\equiv0$ or $1\Mod{4}$, let $\mathcal{Q}(D)$ be the set of primitive positive definite binary quadratic forms $Q(x,\,y)=ax^2+bxy+cy^2\in\mathbb{Z}[x,\,y]$ of discriminant $b^2-4ac=D$. The modular group $\mathrm{SL}_2(\mathbb{Z})$ (or $\mathrm{PSL}_2(\mathbb{Z})$) acts on the set $\mathcal{Q}(D)$ from the right and defines the proper equivalence $\sim$ as \begin{equation*} Q\sim Q'\quad\Longleftrightarrow\quad Q'=Q^\gamma =Q\left(\gamma\begin{bmatrix}x\\y\end{bmatrix}\right)~ \textrm{for some}~\gamma\in\mathrm{SL}_2(\mathbb{Z}). \end{equation*} In his celebrated work \textit{Disquisitiones Arithmeticae} of 1801 (\cite{Gauss}), Gauss introduced the beautiful law of composition of integral binary quadratic forms, and he seems to have been the first to understand the set of equivalence classes $\mathrm{C}(D)=\mathcal{Q}(D)/\sim$ as a group. However, his original proof of the group structure is long and complicated to use in practice. Ninety-three years later, Dirichlet (\cite{Dirichlet}) presented a different approach to the study of composition and genus theory, which seemed to be influenced by Legendre. (See \cite[$\S$3]{Cox}.) On the other hand, in 2004 Bhargava (\cite{Bhargava}) derived a wonderful general law of composition on $2\times2\times2$ cubes of integers, from which he was able to obtain Gauss' composition law on binary quadratic forms as a simple special case. In this paper we will make use of Dirichlet's composition law to carry out the arguments. \par Given the order $\mathcal{O}$ of discriminant $D$ in the imaginary quadratic field $K=\mathbb{Q}(\sqrt{D})$, let $I(\mathcal{O})$ be the group of proper fractional $\mathcal{O}$-ideals and $P(\mathcal{O})$ be its subgroup of nonzero principal $\mathcal{O}$-ideals.
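Each class in $\mathrm{C}(D)$ contains exactly one reduced form $ax^2+bxy+cy^2$, i.e. one with $|b|\le a\le c$ and $b\ge0$ whenever $|b|=a$ or $a=c$, so representatives of the classes can be enumerated mechanically. The following sketch is our own illustration of this standard reduction theory, not part of the paper's argument:

```python
import math

def reduced_forms(D):
    """Enumerate the primitive reduced forms (a, b, c) of discriminant D < 0.

    Reduced means |b| <= a <= c, with b >= 0 whenever |b| = a or a = c;
    every class in C(D) contains exactly one reduced form.
    """
    forms = []
    # A reduced form satisfies a <= sqrt(-D / 3).
    for a in range(1, math.isqrt(-D // 3) + 1):
        for b in range(-a, a + 1):
            if (b * b - D) % (4 * a):
                continue  # c = (b^2 - D) / (4a) must be an integer
            c = (b * b - D) // (4 * a)
            if c < a or (b < 0 and (-b == a or a == c)):
                continue  # not reduced
            if math.gcd(math.gcd(a, abs(b)), c) == 1:  # primitive
                forms.append((a, b, c))
    return forms
```

For instance, `reduced_forms(-23)` returns the three classes of discriminant $-23$, represented by $x^2+xy+6y^2$ and $2x^2\pm xy+3y^2$.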
When $Q=ax^2+bxy+cy^2$ is an element of $\mathcal{Q}(D)$, let $\omega_Q$ be the zero of the quadratic polynomial $Q(x,\,1)$ in $\mathbb{H} =\{\tau\in\mathbb{C}~|~\mathrm{Im}(\tau)>0\}$, namely \begin{equation}\label{wQ} \omega_Q=\frac{-b+\sqrt{D}}{2a}. \end{equation} It is well known that $[\omega_Q,\,1]=\mathbb{Z}\omega_Q+\mathbb{Z}$ is a proper fractional $\mathcal{O}$-ideal and the form class group $\mathrm{C}(D)$ under the Dirichlet composition is isomorphic to the $\mathcal{O}$-ideal class group $\mathrm{C}(\mathcal{O})=I(\mathcal{O})/P(\mathcal{O})$ through the isomorphism \begin{equation}\label{CDCO} \mathrm{C}(D)\stackrel{\sim}{\rightarrow}\mathrm{C}(\mathcal{O}),\quad [Q]\mapsto[[\omega_Q,\,1]]. \end{equation} On the other hand, if we let $H_\mathcal{O}$ be the ring class field of order $\mathcal{O}$ and $j$ be the elliptic modular function on lattices in $\mathbb{C}$, then we attain the isomorphism \begin{equation}\label{COGHK} \mathrm{C}(\mathcal{O})\stackrel{\sim}{\rightarrow}\mathrm{Gal}(H_\mathcal{O}/K), \quad [\mathfrak{a}]\mapsto(j(\mathcal{O}) \mapsto j(\overline{\mathfrak{a}})) \end{equation} by the theory of complex multiplication (\cite[Theorem 11.1 and Corollary 11.37]{Cox} or \cite[Theorem 5 in Chapter 10]{Lang87}). Thus, composing two isomorphisms given in (\ref{CDCO}) and (\ref{COGHK}) yields the isomorphism \begin{equation}\label{CDGHK} \mathrm{C}(D)\stackrel{\sim}{\rightarrow}\mathrm{Gal}(H_\mathcal{O}/K), \quad[Q]\mapsto(j(\mathcal{O})\mapsto j([-\overline{\omega}_Q,\,1])). \end{equation} \par Now, let $K$ be an imaginary quadratic field of discriminant $d_K$ and $\mathcal{O}_K$ be its ring of integers. If we set \begin{equation}\label{tauK} \tau_K=\left\{\begin{array}{ll} \sqrt{d_K}/2 & \textrm{if}~d_K\equiv0\Mod{4},\\ (-1+\sqrt{d_K})/2 & \textrm{if}~d_K\equiv1\Mod{4}, \end{array}\right. \end{equation} then we get $\mathcal{O}_K=[\tau_K,\,1]$. 
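Numerically, $\omega_Q$ in (\ref{wQ}) is simply the root of $Q(x,\,1)$ lying in $\mathbb{H}$, and for the principal form it coincides with $\tau_K$ of (\ref{tauK}). A minimal sanity-check sketch (the function names are ours, not the paper's):

```python
import cmath

def omega(Q, D):
    """omega_Q = (-b + sqrt(D)) / (2a), the zero of Q(x, 1) in the upper half-plane."""
    a, b, c = Q
    assert b * b - 4 * a * c == D and D < 0
    return (-b + cmath.sqrt(D)) / (2 * a)

def tau_K(dK):
    """tau_K as in the displayed case split: sqrt(dK)/2 if dK = 0 (mod 4),
    else (-1 + sqrt(dK))/2."""
    return cmath.sqrt(dK) / 2 if dK % 4 == 0 else (-1 + cmath.sqrt(dK)) / 2

# For the principal form of discriminant -23, namely x^2 + xy + 6y^2,
# omega recovers tau_K; any Q of discriminant D satisfies Q(omega_Q, 1) = 0.
```

Note that `cmath.sqrt` returns the square root with positive imaginary part for negative real arguments, so $\omega_Q\in\mathbb{H}$ holds automatically.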
For a positive integer $N$ and $\mathfrak{n}=N\mathcal{O}_K$, let $I_K(\mathfrak{n})$ be the group of fractional ideals of $K$ relatively prime to $\mathfrak{n}$ and $P_K(\mathfrak{n})$ be its subgroup of principal fractional ideals. Furthermore, let \begin{eqnarray*} P_{K,\,\mathbb{Z}}(\mathfrak{n})&=&\{\nu\mathcal{O}_K~|~ \nu\in K^*~\textrm{such that}~\nu\equiv^*m\Mod{\mathfrak{n}} ~\textrm{for some integer $m$ prime to $N$}\},\\ P_{K,\,1}(\mathfrak{n})&=&\{\nu\mathcal{O}_K~|~ \nu\in K^*~\textrm{such that}~\nu\equiv^*1\Mod{\mathfrak{n}}\} \end{eqnarray*} which are subgroups of $P_K(\mathfrak{n})$. As for the multiplicative congruence $\equiv^*$ modulo $\mathfrak{n}$, we refer to \cite[$\S$IV.1]{Janusz}. Then the ring class field $H_\mathcal{O}$ of order $\mathcal{O}$ with conductor $N$ in $K$ and the ray class field $K_\mathfrak{n}$ modulo $\mathfrak{n}$ are defined to be the unique abelian extensions of $K$ for which the Artin map modulo $\mathfrak{n}$ induces the isomorphisms \begin{equation*} I_K(\mathfrak{n})/P_{K,\,\mathbb{Z}}(\mathfrak{n}) \simeq\mathrm{Gal}(H_\mathcal{O}/K) \quad\textrm{and}\quad I_K(\mathfrak{n})/P_{K,\,1}(\mathfrak{n}) \simeq\mathrm{Gal}(K_\mathfrak{n}/K), \end{equation*} respectively (\cite[$\S$8 and $\S$9]{Cox} and \cite[Chapter V]{Janusz}). And, for a congruence subgroup $\Gamma$ of level $N$ in $\mathrm{SL}_2(\mathbb{Z})$, let $\mathcal{F}_{\Gamma,\,\mathbb{Q}}$ be the field of meromorphic modular functions for $\Gamma$ whose Fourier expansions with respect to $q^{1/N}=e^{2\pi\mathrm{i}\tau/N}$ have rational coefficients and let \begin{equation*} K\mathcal{F}_{\Gamma,\,\mathbb{Q}}(\tau_K) =K(h(\tau_K)~|~h\in\mathcal{F}_{\Gamma,\,\mathbb{Q}}~ \textrm{is finite at}~\tau_K). \end{equation*} Then it is a subfield of the maximal abelian extension $K^\mathrm{ab}$ of $K$ (\cite[Theorem 6.31 (i)]{Shimura}). 
In particular, for the congruence subgroups \begin{eqnarray*} \Gamma_0(N)&=&\left\{\gamma\in\mathrm{SL}_2(\mathbb{Z})~|~ \gamma\equiv\begin{bmatrix}\mathrm{*}&\mathrm{*}\\ 0&\mathrm{*}\end{bmatrix}\Mod{N M_2(\mathbb{Z})}\right\},\\ \Gamma_1(N)&=&\left\{\gamma\in\mathrm{SL}_2(\mathbb{Z})~|~ \gamma\equiv\begin{bmatrix}1&\mathrm{*}\\ 0&1\end{bmatrix}\Mod{N M_2(\mathbb{Z})}\right\}, \end{eqnarray*} we know that \begin{equation}\label{specialization} H_\mathcal{O}=K\mathcal{F}_{\Gamma_0(N),\,\mathbb{Q}}(\tau_K) \quad\textrm{and}\quad K_\mathfrak{n}=K\mathcal{F}_{\Gamma_1(N),\,\mathbb{Q}}(\tau_K) \end{equation} (\cite[Corollary 5.2]{C-K} and \cite[Theorem 3.4]{K-S13}). On the other hand, one can naturally define an equivalence relation $\sim_\Gamma$ on the subset \begin{equation}\label{QNdK} \mathcal{Q}_N(d_K)= \{ax^2+bxy+cy^2\in\mathcal{Q}(d_K)~|~\gcd(N,\,a)=1\} \end{equation} of $\mathcal{Q}(d_K)$ by \begin{equation}\label{simG} Q\sim_\Gamma Q'\quad\Longleftrightarrow\quad Q'=Q^\gamma~\textrm{for some}~\gamma\in\Gamma. \end{equation} Observe that $\Gamma$ may not act on $\mathcal{Q}_N(d_K)$. Here, by $Q^\gamma$ we mean the action of $\gamma$ as an element of $\mathrm{SL}_2(\mathbb{Z})$. \par For a subgroup $P$ of $I_K(\mathfrak{n})$ with $P_{K,\,1}(\mathfrak{n})\subseteq P\subseteq P_K(\mathfrak{n})$, let $K_P$ be the abelian extension of $K$ so that $I_K(\mathfrak{n})/P\simeq\mathrm{Gal}(K_P/K)$.
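The action $Q^\gamma$ underlying (\ref{simG}) and the membership conditions above are easy to experiment with directly. The helpers below (`act`, `in_Gamma0`, `in_QN` are our own names) also illustrate the observation that $\Gamma$ need not act on $\mathcal{Q}_N(d_K)$: for example, $S=\left[\begin{smallmatrix}0&-1\\1&0\end{smallmatrix}\right]$ carries $2x^2+xy+3y^2\notin\mathcal{Q}_2(-23)$ to $3x^2-xy+2y^2\in\mathcal{Q}_2(-23)$.

```python
import math

def act(Q, g):
    """Right action Q^g: substitute (x, y) -> (px + qy, rx + sy) for g = [[p, q], [r, s]]."""
    a, b, c = Q
    (p, q), (r, s) = g
    return (a * p * p + b * p * r + c * r * r,
            2 * a * p * q + b * (p * s + q * r) + 2 * c * r * s,
            a * q * q + b * q * s + c * s * s)

def in_Gamma0(g, N):
    """Test g = [[p, q], [r, s]] in Gamma_0(N): determinant 1 and r = 0 (mod N)."""
    (p, q), (r, s) = g
    return p * s - q * r == 1 and r % N == 0

def in_QN(Q, D, N):
    """Test Q in Q_N(d_K): positive definite of discriminant D with gcd(N, a) = 1."""
    a, b, c = Q
    return b * b - 4 * a * c == D and a > 0 and math.gcd(N, a) == 1
```

The discriminant is preserved by `act`, but the condition $\gcd(N,\,a)=1$ on the leading coefficient need not be, which is consistent with the remark that a general $\Gamma$ may fail to act on $\mathcal{Q}_N(d_K)$.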
In this paper, motivated by (\ref{CDGHK}) and (\ref{specialization}) we shall present several pairs of $P$ and $\Gamma$ for which \begin{itemize} \item[(i)] $K_P=K\mathcal{F}_{\Gamma,\,\mathbb{Q}}(\tau_K)$, \item[(ii)] $\mathcal{Q}_N(d_K)/\sim_\Gamma$ becomes a group isomorphic to $\mathrm{Gal}(K_P/K)$ via the isomorphism \begin{equation}\label{desiredisomorphism} \begin{array}{ccl} \mathcal{Q}_N(d_K)/\sim_\Gamma&\stackrel{\sim}{\rightarrow}& \mathrm{Gal}(K_P/K)\\ \left[Q\right]&\mapsto&(h(\tau_K)\mapsto h(-\overline{\omega}_Q)~|~ h\in\mathcal{F}_{\Gamma,\,\mathbb{Q}}~ \textrm{is finite at}~\tau_K) \end{array} \end{equation} \end{itemize} (Propositions \ref{KPKF}, \ref{satisfies} and Theorems \ref{formclassgroup}, \ref{Galoisgroups}). This result may be regarded as a certain extension of Gauss' original work. We shall also develop an algorithm for finding distinct form classes in $\mathcal{Q}_N(d_K)/\sim_\Gamma$ and give a concrete example (Proposition \ref{algorithm} and Example \ref{example}). To this end, we shall apply Shimura's theory which links the class field theory for imaginary quadratic fields and the theory of modular functions (\cite[Chapter 6]{Shimura}). And, we shall not only use but also improve the ideas of our previous work \cite{E-K-S17}. See Remark \ref{difference}. \section {Extended form class groups as ideal class groups}\label{sect2} Let $K$ be an imaginary quadratic field of discriminant $d_K$ and $\tau_K$ be as in (\ref{tauK}). And, let $N$ be a positive integer, $\mathfrak{n}=N\mathcal{O}_K$ and $P$ be a subgroup of $I_K(\mathfrak{n})$ satisfying $P_{K,\,1}(\mathfrak{n})\subseteq P\subseteq P_K(\mathfrak{n})$. Each subgroup $\Gamma$ of $\mathrm{SL}_2(\mathbb{Z})$ defines an equivalence relation $\sim_\Gamma$ on the set $\mathcal{Q}_N(d_K)$ described in (\ref{QNdK}) in the same manner as in (\ref{simG}).
In this section, we shall present a necessary and sufficient condition on $\Gamma$ for \begin{eqnarray*} \phi_\Gamma:\mathcal{Q}_N(d_K)/\sim_\Gamma&\rightarrow& I_K(\mathfrak{n})/P\\ \left[Q\right]~~~&\mapsto&[[\omega_Q,\,1]] \end{eqnarray*} to be a well-defined bijection, with $\omega_Q$ as in (\ref{wQ}). As mentioned in $\S$1, the lattice $[\omega_Q,\,1]=\mathbb{Z}\omega_Q+\mathbb{Z}$ is a fractional ideal of $K$. \par The modular group $\mathrm{SL}_2(\mathbb{Z})$ acts on $\mathbb{H}$ from the left by fractional linear transformations. For each $Q\in\mathcal{Q}(d_K)$, let $I_{\omega_Q}$ denote the isotropy subgroup of the point $\omega_Q$ in $\mathrm{SL}_2(\mathbb{Z})$. In particular, if we let $Q_0$ be the principal form in $\mathcal{Q}(d_K)$ (\cite[p. 31]{Cox}), then we have $\omega_{Q_0}=\tau_K$ and \begin{equation}\label{isotropy} I_{\omega_{Q_0}}=\left\{ \begin{array}{ll} \left\{\pm I_2\right\} & \textrm{if}~d_K\neq-4,\,-3, \\ \left\{\pm I_2,\,\pm S\right\} & \textrm{if}~d_K=-4, \\ \left\{\pm I_2,\,\pm ST,\, \pm(ST)^2\right\} & \textrm{if}~d_K=-3 \end{array} \right. \end{equation} where $S=\begin{bmatrix} 0&-1\\1&0\end{bmatrix}$ and $T=\begin{bmatrix}1&1\\ 0&1\end{bmatrix}$. Furthermore, we see that \begin{equation}\label{otherisotropy} I_{\omega_Q}=\{\pm I_2\}\quad \textrm{if $\omega_Q$ is not equivalent to $\omega_{Q_0}$ under $\mathrm{SL}_2(\mathbb{Z})$} \end{equation} (\cite[Proposition 1.5 (c)]{Silverman}). For any $\gamma=\begin{bmatrix}a&b\\c&d\end{bmatrix}\in \mathrm{SL}_2(\mathbb{Z})$, let \begin{equation*} j(\gamma,\,\tau)=c\tau+d\quad(\tau\in\mathbb{H}). \end{equation*} One can readily check that if $Q'=Q^\gamma$, then \begin{equation*} \omega_Q=\gamma(\omega_{Q'})\quad\textrm{and} \quad[\omega_Q,\,1]=\frac{1}{j(\gamma,\,\omega_{Q'})} [\omega_{Q'},\,1]. \end{equation*} \begin{lemma}\label{primein} Let $Q=ax^2+bxy+cy^2\in\mathcal{Q}(d_K)$.
Then $\mathrm{N}_{K/\mathbb{Q}}([\omega_Q,\,1])=1/a$ and \begin{equation*} [\omega_Q,\,1]\in I_K(\mathfrak{n})\quad\Longleftrightarrow\quad Q\in\mathcal{Q}_N(d_K). \end{equation*} \end{lemma} \begin{proof} See \cite[Lemma 2.3 (iii)]{E-K-S17}. \end{proof} \begin{lemma}\label{prime} Let $Q=ax^2+bxy+cy^2\in\mathcal{Q}_N(d_K)$. \begin{enumerate} \item[\textup{(i)}] For $u,\,v\in\mathbb{Z}$ not both zero, the fractional ideal $(u\omega_Q+v)\mathcal{O}_K$ is relatively prime to $\mathfrak{n}=N\mathcal{O}_K$ if and only if $\gcd(N,\,Q(v,\,-u))=1$. \item[\textup{(ii)}] If $C\in P_K(\mathfrak{n})/P$, then \begin{equation*} C=[(u\omega_Q+v)\mathcal{O}_K]\quad \textrm{for some}~u,\,v\in\mathbb{Z}~\textrm{not both zero such that}~ \gcd(N,\,Q(v,\,-u))=1. \end{equation*} \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item[(i)] See \cite[Lemma 4.1]{E-K-S17}. \item[(ii)] Since $P_K(\mathfrak{n})/P$ is a finite group, one can take an integral ideal $\mathfrak{c}$ in the class $C$ (\cite[Lemma 2.3 in Chapter IV]{Janusz}). Furthermore, since $\mathcal{O}_K=[a\omega_Q,\,1]$, we may express $\mathfrak{c}$ as \begin{equation*} \mathfrak{c}=(ka\omega_Q+v)\mathcal{O}_K\quad \textrm{for some}~k,\,v\in\mathbb{Z}. \end{equation*} If we set $u=ka$, then we attain (ii) by (i). \end{enumerate} \end{proof} \begin{proposition}\label{surjective} If the map $\phi_\Gamma$ is well defined, then it is surjective. \end{proposition} \begin{proof} Let \begin{equation*} \rho: I_K(\mathfrak{n})/P \rightarrow I_K(\mathcal{O}_K)/P_K(\mathcal{O}_K) \end{equation*} be the natural homomorphism. Since $I_K(\mathfrak{n})/P_K(\mathfrak{n})$ is isomorphic to $I_K(\mathcal{O}_K)/P_K(\mathcal{O}_K)$ (\cite[Proposition 1.5 in Chapter IV]{Janusz}), the homomorphism $\rho$ is surjective.
Here, we refer to the following commutative diagram: \begin{figure} \caption{A commutative diagram of ideal class groups} \label{diagram} \end{figure} \noindent Let \begin{equation*} Q_1,\,Q_2,\,\ldots,\,Q_h\quad(\in\mathcal{Q}(d_K)) \end{equation*} be reduced forms which represent all distinct classes in $\mathrm{C}(d_K)=\mathcal{Q}(d_K)/\sim$ (\cite[Theorem 2.8]{Cox}). Take $\gamma_1,\,\gamma_2,\,\ldots,\,\gamma_h\in\mathrm{SL}_2(\mathbb{Z})$ so that \begin{equation*} Q_i'=Q_i^{\gamma_i}\quad(i=1,\,2,\,\ldots,\,h) \end{equation*} belongs to $\mathcal{Q}_N(d_K)$ (\cite[Lemmas 2.3 and 2.25]{Cox}). Then we get \begin{equation*} I_K(\mathcal{O}_K)/P_K(\mathcal{O}_K)= \{[\omega_{Q_i'},\,1]P_K(\mathcal{O}_K)~|~i=1,\,2,\,\ldots,\,h\} \quad\textrm{and}\quad[\omega_{Q_i'},\,1]\in I_K(\mathfrak{n}) \end{equation*} by the isomorphism given in (\ref{CDCO}) (when $D=d_K$) and Lemma \ref{primein}. Moreover, since $\rho$ is a surjection with $\mathrm{Ker}(\rho)=P_K(\mathfrak{n})/P$, we obtain the decomposition \begin{equation}\label{decomp} I_K(\mathfrak{n})/P=(P_K(\mathfrak{n})/P)\cdot \{[[\omega_{Q_i'},\,1]]\in I_K(\mathfrak{n})/P~|~i=1,\,2,\,\ldots,\,h\}. \end{equation} \par Now, let $C\in I_K(\mathfrak{n})/P$. By the decomposition (\ref{decomp}) and Lemma \ref{prime} (ii) we may express $C$ as \begin{equation}\label{C} C=\left[\frac{1}{u\omega_{Q_i'}+v}\,[\omega_{Q_i'},\,1]\right] \end{equation} for some $i\in\{1,\,2,\,\ldots,\,h\}$ and $u,\,v\in\mathbb{Z}$ not both zero with $\gcd(N,\,Q_i'(v,\,-u))=1$. Take any $\sigma=\begin{bmatrix}\mathrm{*}&\mathrm{*}\\ \widetilde{u}&\widetilde{v}\end{bmatrix}\in\mathrm{SL}_2(\mathbb{Z})$ such that $\sigma\equiv\begin{bmatrix}\mathrm{*}&\mathrm{*}\\ u&v\end{bmatrix}\Mod{N M_2(\mathbb{Z})}$.
We then derive that \begin{eqnarray*} C&=&\left[\frac{u\omega_{Q_i'}+v}{\widetilde{u}\omega_{Q_i'}+\widetilde{v}} \,\mathcal{O}_K\right]C\quad \textrm{because}~ \frac{u\omega_{Q_i'}+v}{\widetilde{u}\omega_{Q_i'}+\widetilde{v}} \equiv^*1\Mod{\mathfrak{n}}~\textrm{and}~P_{K,\,1}(\mathfrak{n}) \subseteq P\\ &=&\left[\frac{1}{\widetilde{u}\omega_{Q_i'}+\widetilde{v}} \,[\omega_{Q_i'},\,1]\right]\quad\textrm{by (\ref{C})}\\ &=&\left[\frac{1}{j(\sigma,\,\omega_{Q_i'})}\,[\omega_{Q_i'},\,1] \right]\\ &=&[[\sigma(\omega_{Q_i'}),\,1]]. \end{eqnarray*} Thus, if we put $Q=Q_i'^{\sigma^{-1}}$, then we obtain \begin{equation*} C=[[\omega_Q,\,1]]=\phi_\Gamma([Q]). \end{equation*} This proves that $\phi_\Gamma$ is surjective. \end{proof} \begin{proposition}\label{injective} The map $\phi_\Gamma$ is a well-defined injection if and only if $\Gamma$ satisfies the following property: \begin{equation}\label{P} \begin{array}{c} \textrm{Let $Q\in\mathcal{Q}_N(d_K)$ and $\gamma\in\mathrm{SL}_2(\mathbb{Z})$ such that $Q^{\gamma^{-1}}\in\mathcal{Q}_N(d_K)$. Then,}\\ j(\gamma,\,\omega_Q)\mathcal{O}_K\in P~\Longleftrightarrow~ \gamma\in\Gamma\cdot I_{\omega_Q}. \end{array} \end{equation} \end{proposition} \begin{proof} Assume first that $\phi_\Gamma$ is a well-defined injection. Let $Q\in\mathcal{Q}_N(d_K)$ and $\gamma\in\mathrm{SL}_2(\mathbb{Z})$ such that $Q^{\gamma^{-1}}\in\mathcal{Q}_N(d_K)$. If we set $Q'=Q^{\gamma^{-1}}$, then we have $Q=Q'^\gamma$ and so \begin{equation}\label{wjw1} [\omega_{Q'},\,1]=[\gamma(\omega_Q),\,1]=\frac{1}{j(\gamma,\,\omega_Q)}[\omega_Q,\,1].
\end{equation} And, we deduce that \begin{eqnarray*} j(\gamma,\,\omega_Q)\mathcal{O}_K\in P&\Longleftrightarrow& [[\omega_Q,\,1]]= [[\omega_{Q'},\,1]]~\textrm{in}~I_K(\mathfrak{n})/P~ \textrm{by Lemma \ref{primein} and (\ref{wjw1})}\\ &\Longleftrightarrow&\phi_\Gamma([Q])=\phi_\Gamma([Q'])~ \textrm{by the definition of $\phi_\Gamma$}\\ &\Longleftrightarrow&[Q]=[Q']~\textrm{in $\mathcal{Q}_N(d_K)/\sim_\Gamma$ since $\phi_\Gamma$ is injective}\\ &\Longleftrightarrow&Q'=Q^\alpha~\textrm{for some}~\alpha\in\Gamma\\ &\Longleftrightarrow&Q=Q^{\alpha\gamma}~\textrm{for some}~ \alpha\in\Gamma~\textrm{because}~Q'=Q^{\gamma^{-1}}\\ &\Longleftrightarrow&\alpha\gamma\in I_{\omega_Q}~ \textrm{for some}~\alpha\in\Gamma\\ &\Longleftrightarrow&\gamma\in\Gamma\cdot I_{\omega_Q}. \end{eqnarray*} Hence $\Gamma$ satisfies the property (\ref{P}). \par Conversely, assume that $\Gamma$ satisfies the property (\ref{P}). To show that $\phi_\Gamma$ is well defined, suppose that \begin{equation*} [Q]=[Q']\quad\textrm{in}~\mathcal{Q}_N(d_K)/\sim_\Gamma ~\textrm{for some}~Q,\,Q'\in\mathcal{Q}_N(d_K). \end{equation*} Then we attain $Q=Q'^\alpha$ for some $\alpha\in\Gamma$ so that \begin{equation}\label{wjw2} [\omega_{Q'},\,1]=[\alpha(\omega_Q),\,1]=\frac{1}{j(\alpha,\,\omega_Q)}[\omega_Q,\,1]. \end{equation} Now that $Q^{\alpha^{-1}}=Q'\in\mathcal{Q}_N(d_K)$ and $\alpha\in\Gamma\subseteq\Gamma\cdot I_{\omega_Q}$, we achieve by the property (\ref{P}) that $j(\alpha,\,\omega_{Q})\mathcal{O}_K\in P$. Thus we derive by Lemma \ref{primein} and (\ref{wjw2}) that \begin{equation*} [[\omega_Q,\,1]]=[[\omega_{Q'},\,1]] \quad\textrm{in}~I_K(\mathfrak{n})/P, \end{equation*} which shows that $\phi_\Gamma$ is well defined. \par On the other hand, in order to show that $\phi_\Gamma$ is injective, assume that \begin{equation*} \phi_\Gamma([Q])=\phi_\Gamma([Q'])\quad \textrm{for some}~Q,\,Q'\in\mathcal{Q}_N(d_K).
\end{equation*} Then we get \begin{equation}\label{wlw} [\omega_Q,\,1]=\lambda[\omega_{Q'},\,1]\quad \textrm{for some}~\lambda\in K^*~\textrm{such that}~ \lambda\mathcal{O}_K\in P, \end{equation} from which it follows that \begin{equation}\label{QQb} Q=Q'^\gamma\quad\textrm{for some}~\gamma\in\mathrm{SL}_2(\mathbb{Z}) \end{equation} by the isomorphism in (\ref{CDCO}) when $D=d_K$. We then derive by (\ref{wlw}) and (\ref{QQb}) that \begin{equation*} [\omega_{Q'},\,1]=[\gamma(\omega_Q),\,1]= \frac{1}{j(\gamma,\,\omega_{Q})} [\omega_Q,\,1]=\frac{\lambda}{j(\gamma,\,\omega_Q)}[\omega_{Q'},\,1] \end{equation*} and so $\lambda/j(\gamma,\,\omega_Q)\in\mathcal{O}_K^*$. Therefore we attain \begin{equation*} j(\gamma,\,\omega_Q)\mathcal{O}_K=\lambda\mathcal{O}_K\in P, \end{equation*} and hence $\gamma\in\Gamma\cdot I_{\omega_Q}$ by the fact $Q^{\gamma^{-1}}=Q'\in\mathcal{Q}_N(d_K)$ and the property (\ref{P}). If we write \begin{equation*} \gamma=\alpha\beta\quad\textrm{for some}~\alpha\in\Gamma~ \textrm{and}~\beta\in I_{\omega_Q}, \end{equation*} then we see by (\ref{QQb}) that \begin{equation*} Q=Q^{\beta^{-1}}=Q^{\gamma^{-1}\alpha}=Q'^\alpha. \end{equation*} This shows that \begin{equation*} [Q]=[Q']\quad\textrm{in}~\mathcal{Q}_N(d_K)/\sim_\Gamma, \end{equation*} which proves the injectivity of $\phi_\Gamma$. \end{proof} \begin{theorem}\label{formclassgroup} The map $\phi_\Gamma$ is a well-defined bijection if and only if $\Gamma$ satisfies the property \textup{(\ref{P})} stated in \textup{Proposition \ref{injective}}. In this case, we may regard the set $\mathcal{Q}_N(d_K)/\sim_\Gamma$ as a group isomorphic to the ideal class group $I_K(\mathfrak{n})/P$. \end{theorem} \begin{proof} We achieve the first assertion by Propositions \ref{surjective} and \ref{injective}. Thus, in this case, one can give a group structure on $\mathcal{Q}_N(d_K)/\sim_\Gamma$ through the bijection $\phi_\Gamma: \mathcal{Q}_N(d_K)/\sim_\Gamma\rightarrow I_K(\mathfrak{n})/P$. 
\end{proof} \begin{remark} By using the isomorphism given in (\ref{CDCO}) (when $D=d_K$) and Theorem \ref{formclassgroup} we obtain the following commutative diagram: \begin{figure} \caption{The natural map between form class groups} \label{diagram2} \end{figure} \noindent Therefore the natural map $\mathcal{Q}_N(d_K)/\sim_\Gamma\rightarrow\mathrm{C}(d_K)$ is indeed a surjective homomorphism, which shows that the group structure of $\mathcal{Q}_N(d_K)/\sim_\Gamma$ is not far from that of the classical form class group $\mathrm{C}(d_K)$. \end{remark} \section {Class field theory over imaginary quadratic fields} In this section, we shall briefly review the class field theory over imaginary quadratic fields established by Shimura. \par For an imaginary quadratic field $K$, let $\mathbb{I}_K^\mathrm{fin}$ be the group of finite ideles of $K$ given by the restricted product \begin{eqnarray*} \mathbb{I}_K^\mathrm{fin}&=&{\prod_{\mathfrak{p}}}^\prime K_\mathfrak{p}^*\quad \textrm{where $\mathfrak{p}$ runs over all prime ideals of $\mathcal{O}_K$}\\ &=&\left\{s=(s_\mathfrak{p})\in\prod_\mathfrak{p}K_\mathfrak{p}^*~|~ s_\mathfrak{p}\in\mathcal{O}_{K_\mathfrak{p}}^*~\textrm{for all but finitely many $\mathfrak{p}$}\right\}. \end{eqnarray*} As for the topology on $\mathbb{I}_K^\mathrm{fin}$ one can refer to \cite[p. 78]{Neukirch}. Then, the classical class field theory of $K$ is explained by the exact sequence \begin{equation*} 1\rightarrow K^*\rightarrow \mathbb{I}_K^\mathrm{fin}\rightarrow\mathrm{Gal}(K^\mathrm{ab}/K) \rightarrow 1 \end{equation*} where $K^*$ maps into $\mathbb{I}_K^\mathrm{fin}$ through the diagonal embedding $\nu\mapsto(\nu,\,\nu,\,\nu,\,\ldots)$ (\cite[Chapter IV]{Neukirch}).
Setting \begin{equation*} \mathcal{O}_{K,\,p}=\mathcal{O}_K\otimes_\mathbb{Z} \mathbb{Z}_p\quad\textrm{for each prime $p$} \end{equation*} we have \begin{equation*} \mathcal{O}_{K,\,p} \simeq\prod_{\mathfrak{p}\,|\,p} \mathcal{O}_{K_\mathfrak{p}} \end{equation*} (\cite[Proposition 4 in Chapter II]{Serre}). Furthermore, if we let $\widehat{K}=K\otimes_\mathbb{Z}\widehat{\mathbb{Z}}$ with $\widehat{\mathbb{Z}}=\prod_p\mathbb{Z}_p$, then \begin{eqnarray*} \widehat{K}^*&=&{\prod_{p}}^\prime(K\otimes_\mathbb{Z} \mathbb{Z}_p)^*\quad\textrm{where $p$ runs over all rational primes}\\ &=&\left\{s=(s_p)\in\prod_p(K\otimes_\mathbb{Z} \mathbb{Z}_p)^*~|~ s_p\in\mathcal{O}_{K,\,p}^*~ \textrm{for all but finitely many $p$}\right\}\\ &\simeq&\mathbb{I}_K^\mathrm{fin} \end{eqnarray*} (\cite[Exercise 15.12]{Cox} and \cite[Chapter II]{Serre}). Thus we may use $\widehat{K}^*$ instead of $\mathbb{I}_K^\mathrm{fin}$ when we develop the class field theory of $K$. \begin{proposition}\label{1-1} There is a one-to-one correspondence via the Artin map between closed subgroups $J$ of $\widehat{K}^*$ of finite index containing $K^*$ and finite abelian extensions $L$ of $K$ such that \begin{equation*} \widehat{K}^*/J\simeq\mathrm{Gal}(L/K). \end{equation*} \end{proposition} \begin{proof} See \cite[Chapter IV]{Neukirch}. \end{proof} Let $N$ be a positive integer, $\mathfrak{n}=N\mathcal{O}_K$ and $s=(s_p)\in \widehat{K}^*$. For a prime $p$ and a prime ideal $\mathfrak{p}$ of $\mathcal{O}_K$ lying above $p$, let $n_\mathfrak{p}(s)$ be the unique integer such that $s_p\in \mathfrak{p}^{n_\mathfrak{p}(s)}\mathcal{O}_{K_\mathfrak{p}}^*$. We then regard $s\mathcal{O}_K$ as the fractional ideal \begin{equation*} s\mathcal{O}_K= \prod_p\prod_{\mathfrak{p}\,|\,p} \mathfrak{p}^{n_\mathfrak{p}(s)}\in I_K(\mathcal{O}_K).
\end{equation*} By the approximation theorem (\cite[Chapter IV]{Janusz}) one can take an element $\nu_s$ of $K^*$ such that \begin{equation}\label{k} \nu_ss_p\in 1+N\mathcal{O}_{K,\,p}\quad\textrm{for all}~p\,|\,N. \end{equation} \begin{proposition}\label{rayidele} We get a well-defined surjective homomorphism \begin{eqnarray*} \phi_\mathfrak{n}~:~\widehat{K}^*&\rightarrow& I_K(\mathfrak{n})/P_{K,\,1}(\mathfrak{n})\\ s~~&\mapsto&~~~[\nu_ss\mathcal{O}_K] \end{eqnarray*} with kernel \begin{equation*} J_\mathfrak{n}= K^*\left( \prod_{p\,|\,N}(1+N\mathcal{O}_{K,\,p})\times \prod_{p\,\nmid\,N}\mathcal{O}_{K,\,p}^*\right). \end{equation*} Thus $J_\mathfrak{n}$ corresponds to the ray class field $K_\mathfrak{n}$. \end{proposition} \begin{proof} See \cite[Exercises 15.17 and 15.18]{Cox}. \end{proof} Let $\mathcal{F}_N$ be the field of meromorphic modular functions of level $N$ whose Fourier expansions with respect to $q^{1/N}$ have coefficients in the $N$th cyclotomic field $\mathbb{Q}(\zeta_N)$ with $\zeta_N=e^{2\pi\mathrm{i}/N}$. Then $\mathcal{F}_N$ is a Galois extension of $\mathcal{F}_1$ with $\mathrm{Gal}(\mathcal{F}_N/\mathcal{F}_1)\simeq \mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm I_2\}$ (\cite[Chapter 6]{Shimura}). \begin{proposition}\label{Galoisdecomposition} There is a decomposition \begin{equation*} \mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm I_2\} =\left\{\pm\begin{bmatrix}1&0\\0&d\end{bmatrix}~|~ d\in(\mathbb{Z}/N\mathbb{Z})^*\right\}/\{\pm I_2\} \cdot\mathrm{SL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm I_2\}. \end{equation*} Let $h(\tau)$ be an element of $\mathcal{F}_N$ whose Fourier expansion is given by \begin{equation*} h(\tau)=\sum_{n\gg-\infty}c_nq^{n/N}\quad(c_n\in\mathbb{Q}(\zeta_N)). 
\end{equation*} \begin{enumerate} \item[\textup{(i)}] If $\alpha=\begin{bmatrix}1&0\\0&d\end{bmatrix}$ with $d\in(\mathbb{Z}/N\mathbb{Z})^*$, then \begin{equation*} h(\tau)^\alpha= \sum_{n\gg-\infty}c_n^{\sigma_d}q^{n/N} \end{equation*} where $\sigma_d$ is the automorphism of $\mathbb{Q}(\zeta_N)$ defined by $\zeta_N^{\sigma_d}=\zeta_N^d$. \item[\textup{(ii)}] If $\beta\in\mathrm{SL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm I_2\}$, then \begin{equation*} h(\tau)^\beta=h(\gamma(\tau)) \end{equation*} where $\gamma$ is any element of $\mathrm{SL}_2(\mathbb{Z})$ which maps to $\beta$ through the reduction $\mathrm{SL}_2(\mathbb{Z})\rightarrow \mathrm{SL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm I_2\}$. \end{enumerate} \end{proposition} \begin{proof} See \cite[Proposition 6.21]{Shimura}. \end{proof} If we let $\widehat{\mathbb{Q}}=\mathbb{Q}\otimes_\mathbb{Z}\widehat{\mathbb{Z}}$ and $\displaystyle\mathcal{F}=\bigcup_{N=1}^\infty\mathcal{F}_N$, then we attain the exact sequence \begin{equation*} 1\rightarrow\mathbb{Q}^*\rightarrow\mathrm{GL}_2(\widehat{\mathbb{Q}}) \rightarrow\mathrm{Gal}(\mathcal{F}/\mathbb{Q})\rightarrow1 \end{equation*} (\cite[Chapter 7]{Lang87} or \cite[Chapter 6]{Shimura}). Here, we note that \begin{eqnarray*} \mathrm{GL}_2(\widehat{\mathbb{Q}})&=& {\prod_p}^\prime\mathrm{GL}_2(\mathbb{Q}_p) \quad\textrm{where $p$ runs over all rational primes}\\ &=& \{\gamma=(\gamma_p)\in\prod_p\mathrm{GL}_2(\mathbb{Q}_p)~|~ \gamma_p\in\mathrm{GL}_2(\mathbb{Z}_p)~ \textrm{for all but finitely many $p$}\} \end{eqnarray*} (\cite[Exercise 15.4]{Cox}) and $\mathbb{Q}^*$ maps into $\mathrm{GL}_2(\widehat{\mathbb{Q}})$ through the diagonal embedding. For $\omega\in K\cap\mathbb{H}$, we define a normalized embedding \begin{equation*} q_\omega:K^*\rightarrow\mathrm{GL}_2^+(\mathbb{Q}) \end{equation*} by the relation \begin{equation}\label{defq} \nu\begin{bmatrix}\omega\\1\end{bmatrix}= q_\omega(\nu)\begin{bmatrix}\omega\\1\end{bmatrix}\quad(\nu\in K^*).
\end{equation} By continuity, $q_\omega$ can be extended to an embedding \begin{equation*} q_{\omega,\,p}:(K\otimes_\mathbb{Z}\mathbb{Z}_p)^*\rightarrow\mathrm{GL}_2(\mathbb{Q}_p) \quad\textrm{for each prime $p$} \end{equation*} and hence to an embedding \begin{equation*} q_\omega:\widehat{K}^*\rightarrow\mathrm{GL}_2(\widehat{\mathbb{Q}}). \end{equation*} Let $\min(\tau_K,\,\mathbb{Q})=x^2+b_Kx+c_K$ ($\in\mathbb{Z}[x]$). Since $K\otimes_\mathbb{Z}\mathbb{Z}_p=\mathbb{Q}_p\tau_K+\mathbb{Q}_p$ for each prime $p$, one can deduce that if $s=(s_p)\in\widehat{K}^*$ with $s_p=u_p\tau_K+v_p$ ($u_p,\,v_p\in\mathbb{Q}_p$), then \begin{equation}\label{gamma_p} q_{\tau_K}(s)=(\gamma_p)\quad\textrm{with}~\gamma_p= \begin{bmatrix}v_p-b_Ku_p & -c_Ku_p\\u_p&v_p\end{bmatrix}. \end{equation} \par By utilizing the concept of canonical models of modular curves, Shimura achieved the following remarkable results. \begin{proposition}[Shimura's reciprocity law]\label{reciprocity} Let $s\in\widehat{K}^*$, $\omega\in K\cap\mathbb{H}$ and $h\in\mathcal{F}$ be finite at $\omega$. Then $h(\omega)$ lies in $K^\mathrm{ab}$ and satisfies \begin{equation*} h(\omega)^{[s^{-1},\,K]}=h(\tau)^{q_\omega(s)}|_{\tau=\omega} \end{equation*} where $[\,\cdot,\,K]$ is the Artin map for $K$. \end{proposition} \begin{proof} See \cite[Theorem 6.31 (i)]{Shimura}. \end{proof} \begin{proposition}\label{model} Let $S$ be an open subgroup of $\mathrm{GL}_2 (\widehat{\mathbb{Q}})$ containing $\mathbb{Q}^*$ such that $S/\mathbb{Q}^*$ is compact. Let \begin{eqnarray*} \Gamma_S&=&S\cap\mathrm{GL}_2^+(\mathbb{Q}),\\ \mathcal{F}_S&=&\{h\in\mathcal{F}~|~h^\gamma=h~\textrm{for all}~\gamma\in S\},\\ k_S&=&\{\nu\in\mathbb{Q}^\mathrm{ab}~|~ \nu^{[s,\,\mathbb{Q}]}=\nu~\textrm{for all}~ s\in\mathbb{Q}^*\det(S) \subseteq\widehat{\mathbb{Q}}^*\} \end{eqnarray*} where $[\,\cdot,\,\mathbb{Q}]$ is the Artin map for $\mathbb{Q}$. 
Then, \begin{enumerate} \item[\textup{(i)}] $\Gamma_S/\mathbb{Q}^*$ is a Fuchsian group of the first kind commensurable with $\mathrm{SL}_2(\mathbb{Z})/\{\pm I_2\}$. \item[\textup{(ii)}] $\mathbb{C}\mathcal{F}_S$ is the field of meromorphic modular functions for $\Gamma_S/\mathbb{Q}^*$. \item[\textup{(iii)}] $k_S$ is algebraically closed in $\mathcal{F}_S$. \item[\textup{(iv)}] If $\omega\in K\cap\mathbb{H}$, then the subgroup $K^*q_\omega^{-1}(S)$ of $\widehat{K}^*$ corresponds to the subfield \begin{equation*} K\mathcal{F}_S(\omega)=K(h(\omega)~|~h\in\mathcal{F}_S~ \textrm{is finite at}~\omega) \end{equation*} of $K^\mathrm{ab}$ in view of \textup{Proposition \ref{1-1}}. \end{enumerate} \end{proposition} \begin{proof} See \cite[Propositions 6.27 and 6.33]{Shimura}. \end{proof} \begin{remark}\label{overQ} In particular, if $k_S=\mathbb{Q}$, then $\mathcal{F}_S=\mathcal{F}_{\Gamma_S,\,\mathbb{Q}}$ (\cite[Exercise 6.26]{Shimura}). \end{remark} \section {Construction of class invariants}\label{classinvariants} Let $K$ be an imaginary quadratic field, $N$ be a positive integer and $\mathfrak{n}=N\mathcal{O}_K$. From now on, let $T$ be a subgroup of $(\mathbb{Z}/N\mathbb{Z})^*$ and $P$ be a subgroup of $P_K(\mathfrak{n})$ containing $P_{K,\,1}(\mathfrak{n})$ given by \begin{eqnarray*} P&=&\langle\nu\mathcal{O}_K~|~ \nu\in\mathcal{O}_K-\{0\}~\textrm{such that}~\nu\equiv t\Mod{\mathfrak{n}}~ \textrm{for some}~t\in T\rangle\\ &=&\{\lambda\mathcal{O}_K~|~\lambda\in K^*~ \textrm{such that}~\lambda\equiv^*t\Mod{\mathfrak{n}}~ \textrm{for some}~t\in T\}.\nonumber \end{eqnarray*} Let $\mathrm{Cl}(P)$ denote the ideal class group \begin{equation*} \mathrm{Cl}(P)=I_K(\mathfrak{n})/P \end{equation*} and $K_P$ be its corresponding class field of $K$ with $\mathrm{Cl}(P)\simeq\mathrm{Gal}(K_P/K)$. 
Furthermore, let \begin{equation*} \Gamma=\left\{\gamma\in\mathrm{SL}_2(\mathbb{Z})~|~ \gamma\equiv\begin{bmatrix}t^{-1}&\mathrm{*}\\ 0&t\end{bmatrix}\Mod{N M_2(\mathbb{Z})}~ \textrm{for some}~t\in T\right\} \end{equation*} where $t^{-1}$ stands for an integer such that $tt^{-1}\equiv1\Mod{N}$. In this section, for a given $h\in\mathcal{F}_{\Gamma,\,\mathbb{Q}}$ we shall define a class invariant $h(C)$ for each class $C\in I_K(\mathfrak{n})/P$. \begin{lemma}\label{subgroup} The field $K_P$ corresponds to the subgroup \begin{equation*} \bigcup_{t\in T}K^*\left(\prod_{p\,|\,N}(t+N\mathcal{O}_{K,\,p}) \times\prod_{p\,\nmid\,N}\mathcal{O}_{K,\,p}^*\right) \end{equation*} of $\widehat{K}^*$ in view of \textup{Proposition \ref{1-1}}. \end{lemma} \begin{proof} We adopt the notations in Proposition \ref{rayidele}. Given $t\in T$, let $t^{-1}$ be an integer such that $tt^{-1}\equiv1\Mod{N}$. Let $s=s(t)=(s_p)\in \widehat{K}^*$ be given by \begin{equation*} s_p=\left\{\begin{array}{ll} t^{-1} & \textrm{if}~p\,|\,N,\\ 1 & \textrm{if}~p\nmid N. \end{array}\right. \end{equation*} Then one can take $\nu_s=t$ so as to have (\ref{k}), and hence \begin{equation}\label{st} \phi_\mathfrak{n}(s)=[ts\mathcal{O}_K]=[t\mathcal{O}_K]. \end{equation} Since $P$ contains $P_{K,\,1}(\mathfrak{n})$, we obtain $K_P\subseteq K_\mathfrak{n}$ and $\mathrm{Gal}(K_\mathfrak{n}/K_P)\simeq P/P_{K,\,1}(\mathfrak{n})$. 
Thus we achieve by Proposition \ref{rayidele} that the field $K_P$ corresponds to \begin{eqnarray*} \phi_\mathfrak{n}^{-1}(P/P_{K,\,1}(\mathfrak{n}))&=& \phi_\mathfrak{n}^{-1}\left(\bigcup_{t\in T}[t\mathcal{O}_K]\right)\quad\textrm{by the definitions of $P_{K,\,1}(\mathfrak{n})$ and $P$}\\ &=&\bigcup_{t\in T}s(t)J_\mathfrak{n} \quad\textrm{by (\ref{st}) and the fact $J_\mathfrak{n}=\mathrm{Ker}(\phi_\mathfrak{n})$}\\ &=&\bigcup_{t\in T}K^*\left(\prod_{p\,|\,N}(t^{-1}+N\mathcal{O}_{K,\,p}) \times\prod_{p\,\nmid\,N}\mathcal{O}_{K,\,p}^*\right)\\ &=&\bigcup_{t\in T}K^*\left(\prod_{p\,|\,N}(t+N\mathcal{O}_{K,\,p}) \times\prod_{p\,\nmid\,N}\mathcal{O}_{K,\,p}^*\right). \end{eqnarray*} \end{proof} \begin{proposition}\label{KPKF} We have $K_P=K\mathcal{F}_{\Gamma,\,\mathbb{Q}}(\tau_K)$. \end{proposition} \begin{proof} Let $S=\mathbb{Q}^*W$ ($\subseteq\mathrm{GL}_2(\widehat{\mathbb{Q}})$) with \begin{equation*} W=\bigcup_{t\in T}\left\{ \gamma=(\gamma_p)\in\prod_p\mathrm{GL}_2(\mathbb{Z}_p)~|~ \gamma_p\equiv\begin{bmatrix}\mathrm{*}&\mathrm{*}\\ 0&t\end{bmatrix}\Mod{N M_2(\mathbb{Z}_p)}~\textrm{for all $p$}\right\}. \end{equation*} Following the notations in Proposition \ref{model} one can readily show that \begin{equation*} \Gamma_S= \mathbb{Q}^*\left\{ \gamma\in\mathrm{SL}_2(\mathbb{Z})~|~\gamma \equiv\begin{bmatrix}\mathrm{*}&\mathrm{*}\\0&t\end{bmatrix} \Mod{N M_2(\mathbb{Z})}~\textrm{for some}~t\in T\right\}\quad \textrm{and}\quad \det(W)=\widehat{\mathbb{Z}}^*. \end{equation*} It then follows that $\Gamma_S/\mathbb{Q}^*\simeq\Gamma/\{\pm I_2\}$ and $k_S=\mathbb{Q}$, and hence \begin{equation}\label{F_S} \mathcal{F}_S=\mathcal{F}_{\Gamma,\,\mathbb{Q}} \end{equation} by Proposition \ref{model} (ii) and Remark \ref{overQ}. 
Furthermore, we deduce that \begin{eqnarray*} K^*q_{\tau_K}^{-1}(S)&=&K^*\{s=(s_p)\in\widehat{K}^*~|~q_{\tau_K}(s)\in \mathbb{Q}^*W\}\\ &=&K^*\{s=(s_p)\in\widehat{K}^*~|~q_{\tau_K}(s)\in W\} \quad\textrm{since}~q_{\tau_K}(r)=rI_2~\textrm{for every}~r\in\mathbb{Q}^*~\textrm{by (\ref{defq})}\\ &=&K^*\{s=(s_p)\in\widehat{K}^*~|~ s_p=u_p\tau_K+v_p~\textrm{with}~u_p,\,v_p\in\mathbb{Q}_p ~\textrm{such that}\\ &&\hspace{3.4cm}\gamma_p=\left[\begin{smallmatrix}v_p-b_Ku_p & -c_Ku_p\\u_p&v_p\end{smallmatrix}\right] \in W~\textrm{for all $p$}\}\quad\textrm{by (\ref{gamma_p})}\\ &=&\bigcup_{t\in T}K^*\{s=(s_p)\in\widehat{K}^*~|~ s_p=u_p\tau_K+v_p~\textrm{with}~u_p,\,v_p\in\mathbb{Z}_p ~\textrm{such that}\\ &&\hspace{4cm}\gamma_p\in\mathrm{GL}_2(\mathbb{Z}_p) ~\textrm{and}~\gamma_p\equiv \left[\begin{smallmatrix}\mathrm{*}&\mathrm{*}\\0&t\end{smallmatrix}\right] \Mod{N M_2(\mathbb{Z}_p)}~\textrm{for all $p$}\}\\ &=&\bigcup_{t\in T}K^*\{s=(s_p)\in\widehat{K}^*~|~ s_p=u_p\tau_K+v_p~\textrm{with}~u_p,\,v_p\in\mathbb{Z}_p ~\textrm{such that}\\ &&\hspace{4cm}\det(\gamma_p)=(u_p\tau_K+v_p)(u_p\overline{\tau}_K+v_p)\in\mathbb{Z}_p^*,\\ &&\hspace{4cm}u_p\equiv0\Mod{N\mathbb{Z}_p}~ \textrm{and}~v_p\equiv t\Mod{N\mathbb{Z}_p}~\textrm{for all $p$}\}\\ &=&\bigcup_{t\in T}K^*\left( \prod_{p\,|\,N}(t+N\mathcal{O}_{K,\,p})\times \prod_{p\,\nmid\,N}\mathcal{O}_{K,\,p}^*\right). \end{eqnarray*} Therefore we conclude by Proposition \ref{model} (iv), (\ref{F_S}) and Lemma \ref{subgroup} that \begin{equation*} K_P=K\mathcal{F}_{\Gamma,\,\mathbb{Q}}(\tau_K). \end{equation*} \end{proof} Let $C\in\mathrm{Cl}(P)$. Take an integral ideal $\mathfrak{a}$ in the class $C$, and let $\xi_1$ and $\xi_2$ be elements of $K^*$ so that \begin{equation*} \mathfrak{a}^{-1}=[\xi_1,\,\xi_2] \quad \textrm{and}\quad \xi=\frac{\xi_1}{\xi_2}\in\mathbb{H}. 
\end{equation*} Since $\mathcal{O}_K=[\tau_K,\,1]\subseteq\mathfrak{a}^{-1}$ and $\xi\in\mathbb{H}$, one can express \begin{equation}\label{A} \begin{bmatrix}\tau_K\\1\end{bmatrix}=A\begin{bmatrix} \xi_1\\\xi_2\end{bmatrix}\quad\textrm{for some}~ A\in M_2^+(\mathbb{Z}). \end{equation} We then attain by taking determinants and squaring in \begin{equation*} \begin{bmatrix} \tau_K&\overline{\tau}_K\\1&1\end{bmatrix} =A\begin{bmatrix}\xi_1&\overline{\xi}_1\\ \xi_2&\overline{\xi}_2\end{bmatrix} \end{equation*} that \begin{equation*} d_K=\det(A)^2\mathrm{N}_{K/\mathbb{Q}}(\mathfrak{a})^{-2}d_K \end{equation*} (\cite[Chapter III]{Lang94}). Hence $\det(A)=\mathrm{N}_{K/\mathbb{Q}}(\mathfrak{a})$, which is relatively prime to $N$. For $\alpha\in M_2(\mathbb{Z})$ with $\gcd(N,\,\det(\alpha))=1$, we shall denote by $\widetilde{\alpha}$ its reduction onto $\mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm I_2\}$ ($\simeq\mathrm{Gal}(\mathcal{F}_N/\mathcal{F}_1)$). \begin{definition}\label{invariant} Let $h\in\mathcal{F}_{\Gamma,\,\mathbb{Q}}$ \textup{($\subseteq \mathcal{F}_N$)}. With the notations as above, we define \begin{equation*} h(C)=h(\tau)^{\widetilde{A}}|_{\tau=\xi} \end{equation*} if it is finite. \end{definition} \begin{proposition} If $h(C)$ is finite, then it depends only on the class $C$ regardless of the choice of $\mathfrak{a}$, $\xi_1$ and $\xi_2$. \end{proposition} \begin{proof} Let $\mathfrak{a}'$ also be an integral ideal in $C$. Take any $\xi_1',\,\xi_2'\in K^*$ so that \begin{equation}\label{a'} \mathfrak{a}'^{-1}=[\xi_1',\,\xi_2']\quad \textrm{and}\quad \xi'=\frac{\xi_1'}{\xi_2'}\in\mathbb{H}. \end{equation} Since $\mathcal{O}_K\subseteq\mathfrak{a}'^{-1}$ and $\xi'\in\mathbb{H}$, we may write \begin{equation}\label{A'} \begin{bmatrix}\tau_K\\1\end{bmatrix}=A'\begin{bmatrix} \xi_1'\\\xi_2'\end{bmatrix} \quad\textrm{for some}~ A'\in M_2^+(\mathbb{Z}).
\end{equation} Now that $[\mathfrak{a}]=[\mathfrak{a}']=C$, we have \begin{equation*} \mathfrak{a}'=\lambda\mathfrak{a}\quad \textrm{with}~\lambda\in K^*~\textrm{such that}~\lambda\equiv^*t\Mod{\mathfrak{n}}~ \textrm{for some}~t\in T. \end{equation*} Then it follows that \begin{equation}\label{a-l-a-} \mathfrak{a}'^{-1}=\lambda^{-1}\mathfrak{a}^{-1}= [\lambda^{-1}\xi_1,\,\lambda^{-1}\xi_2] \quad\textrm{and}\quad\frac{\lambda^{-1}\xi_1} {\lambda^{-1}\xi_2}=\xi. \end{equation} And, we obtain by (\ref{a'}) and (\ref{a-l-a-}) that \begin{equation}\label{B} \begin{bmatrix}\xi_1'\\\xi_2'\end{bmatrix} =B\begin{bmatrix}\lambda^{-1}\xi_1\\ \lambda^{-1}\xi_2\end{bmatrix}\quad \textrm{for some}~B\in\mathrm{SL}_2(\mathbb{Z}) \end{equation} and \begin{equation}\label{x'Bx} \xi'=B(\xi). \end{equation} On the other hand, consider $t$ as an integer whose reduction modulo $N$ belongs to $T$. Since $\mathfrak{a},\,\mathfrak{a}'=\lambda\mathfrak{a}\subseteq\mathcal{O}_K$, we see that $(\lambda-t)\mathfrak{a}$ is an integral ideal. Moreover, since $\lambda\equiv^*t\Mod{\mathfrak{n}}$ and $\mathfrak{a}$ is relatively prime to $\mathfrak{n}$, we get $(\lambda-t)\mathfrak{a}\subseteq \mathfrak{n}=N\mathcal{O}_K$, and hence \begin{equation*} (\lambda-t)\mathcal{O}_K\subseteq N\mathfrak{a}^{-1}. \end{equation*} Thus we attain by the facts $\mathcal{O}_K=[\tau_K,\,1]$ and $\mathfrak{a}^{-1}=[\xi_1,\,\xi_2]$ that \begin{equation}\label{A''} \begin{bmatrix} (\lambda-t)\tau_K\\\lambda-t \end{bmatrix} =A'' \begin{bmatrix}N\xi_1\\N\xi_2\end{bmatrix} \quad\textrm{for some}~A''\in M_2^+(\mathbb{Z}). 
\end{equation} We then derive that \begin{eqnarray*} NA''\begin{bmatrix}\xi_1\\\xi_2\end{bmatrix}&=& \lambda\begin{bmatrix}\tau_K\\1\end{bmatrix} -t\begin{bmatrix}\tau_K\\1\end{bmatrix}\quad\textrm{by (\ref{A''})}\\ &=&\lambda A'\begin{bmatrix}\xi_1'\\\xi_2'\end{bmatrix}- tA\begin{bmatrix}\xi_1\\\xi_2\end{bmatrix} \quad\textrm{by (\ref{A}) and (\ref{A'})}\\ &=&A'B\begin{bmatrix}\xi_1\\\xi_2\end{bmatrix}- tA\begin{bmatrix}\xi_1\\\xi_2\end{bmatrix} \quad\textrm{by (\ref{B})}. \end{eqnarray*} This yields $A'B-tA\equiv O\Mod{N M_2(\mathbb{Z})}$ and so \begin{equation}\label{AtAB} A'\equiv tAB^{-1}\Mod{N M_2(\mathbb{Z})}. \end{equation} Therefore we establish by Proposition \ref{Galoisdecomposition} that \begin{eqnarray*} h(\tau)^{\widetilde{A'}}|_{\tau=\xi'}&=& h(\tau)^{\widetilde{tAB^{-1}}}|_{\tau=\xi'}\quad \textrm{by (\ref{AtAB})}\\ &&\hspace{2.8cm}\textrm{where $\widetilde{~\cdot~}$ means the reduction onto $\mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm I_2\}$} \\ &=&h(\tau)^{\widetilde{ \left[\begin{smallmatrix}1&0\\0&t^2\end{smallmatrix}\right]} \widetilde{\left[\begin{smallmatrix}t&0\\0&t^{-1}\end{smallmatrix}\right]} \widetilde{AB^{-1}}}|_{\tau=\xi'}\\ &=&h(\tau)^{\widetilde{ \left[\begin{smallmatrix}t&0\\0&t^{-1}\end{smallmatrix}\right]} \widetilde{AB^{-1}}}|_{\tau=\xi'}\quad \textrm{because $h(\tau)$ has rational Fourier coefficients}\\ &=&h(\tau)^{\widetilde{AB^{-1}}}|_{\tau=\xi'}\quad \textrm{since $h(\tau)$ is modular for $\Gamma$}\\ &=&h(\tau)^{\widetilde{A}}|_{\tau=B^{-1}(\xi')}\\ &=&h(\tau)^{\widetilde{A}}|_{\tau=\xi}\quad \textrm{by (\ref{x'Bx})}. \end{eqnarray*} This proves that $h(C)$ depends only on the class $C$. \end{proof} \begin{remark}\label{identityinvariant} If we let $C_0$ be the identity class in $\mathrm{Cl}(P)$, then we have $h(C_0)=h(\tau_K)$. \end{remark} \begin{proposition}\label{transformation} Let $C\in\mathrm{Cl}(P)$ and $h\in\mathcal{F}_{\Gamma,\,\mathbb{Q}}$.
If $h(C)$ is finite, then it belongs to $K_P$ and satisfies \begin{equation*} h(C)^{\sigma(C')}=h(CC')\quad\textrm{for all}~C'\in \mathrm{Cl}(P) \end{equation*} where $\sigma:\mathrm{Cl}(P)\rightarrow\mathrm{Gal}(K_P/K)$ is the isomorphism induced from the Artin map. \end{proposition} \begin{proof} Let $\mathfrak{a}$ be an integral ideal in $C$ and $\xi_1,\,\xi_2\in K^*$ such that \begin{equation}\label{ax} \mathfrak{a}^{-1}=[\xi_1,\,\xi_2]\quad \textrm{with}~\xi=\frac{\xi_1}{\xi_2}\in\mathbb{H}. \end{equation} Then we have \begin{equation}\label{tAx} \begin{bmatrix}\tau_K\\1\end{bmatrix}=A\begin{bmatrix} \xi_1\\\xi_2\end{bmatrix}\quad\textrm{for some}~A\in M_2^+(\mathbb{Z}). \end{equation} Furthermore, let $\mathfrak{a}'$ be an integral ideal in $C'$ and $\xi_1'',\,\xi_2''\in K^*$ such that \begin{equation}\label{aax} (\mathfrak{a}\mathfrak{a}')^{-1}=[\xi_1'',\,\xi_2''] \quad\textrm{with}~\xi''=\frac{\xi_1''}{\xi_2''}\in\mathbb{H}. \end{equation} Since $\mathfrak{a}^{-1}\subseteq (\mathfrak{a}\mathfrak{a}')^{-1}$ and $\xi''\in\mathbb{H}$, we get \begin{equation}\label{xBx} \begin{bmatrix}\xi_1\\\xi_2\end{bmatrix}=B\begin{bmatrix} \xi_1''\\\xi_2''\end{bmatrix}\quad \textrm{for some}~B\in M_2^+(\mathbb{Z}), \end{equation} and so it follows from (\ref{tAx}) that \begin{equation}\label{tABx} \begin{bmatrix}\tau_K\\1\end{bmatrix}= AB\begin{bmatrix}\xi_1''\\\xi_2''\end{bmatrix}. \end{equation} Let $s=(s_p)$ be an idele in $\widehat{K}^*$ satisfying \begin{equation}\label{s1pN} \left\{\begin{array}{ll} s_p=1 & \textrm{if}~p\,|\,N,\\ s_p\mathcal{O}_{K,\,p}=\mathfrak{a}_p' & \textrm{if}~ p\nmid N \end{array}\right. \end{equation} where $\mathfrak{a}_p'=\mathfrak{a}'\otimes_\mathbb{Z} \mathbb{Z}_p$. Since $\mathfrak{a}'$ is relatively prime to $\mathfrak{n}=N\mathcal{O}_K$, we obtain by (\ref{s1pN}) that \begin{equation}\label{sOa} s_p^{-1}\mathcal{O}_{K,\,p}=\mathfrak{a}_p'^{-1}\quad \textrm{for all $p$}. 
\end{equation} Now, we see that \begin{equation*} q_{\xi,\,p}(s_p^{-1}) \begin{bmatrix}\xi_1\\\xi_2\end{bmatrix} =\xi_2q_{\xi,\,p}(s_p^{-1}) \begin{bmatrix}\xi\\1\end{bmatrix} =\xi_2s_p^{-1}\begin{bmatrix}\xi\\1\end{bmatrix}= s_p^{-1}\begin{bmatrix}\xi_1\\\xi_2\end{bmatrix}, \end{equation*} which shows by (\ref{ax}) and (\ref{sOa}) that $q_{\xi,\,p}(s_p^{-1})\begin{bmatrix}\xi_1\\\xi_2\end{bmatrix}$ is a $\mathbb{Z}_p$-basis for $(\mathfrak{a}\mathfrak{a}')_p^{-1}$. Furthermore, $B^{-1}\begin{bmatrix}\xi_1\\\xi_2\end{bmatrix}$ is also a $\mathbb{Z}_p$-basis for $(\mathfrak{a}\mathfrak{a}')^{-1}_p$ by (\ref{aax}) and (\ref{xBx}). Thus we achieve \begin{equation}\label{qpsgB} q_{\xi,\,p}(s_p^{-1})=\gamma_pB^{-1}\quad \textrm{for some}~\gamma_p\in\mathrm{GL}_2(\mathbb{Z}_p). \end{equation} Letting $\gamma=(\gamma_p)\in\prod_p\mathrm{GL}_2(\mathbb{Z}_p)$ we get \begin{equation}\label{qsgB} q_\xi(s^{-1})=\gamma B^{-1}. \end{equation} We then deduce that \begin{eqnarray*} h(C)^{[s,\,K]}&=&(h(\tau)^{\widetilde{A}}|_{\tau=\xi})^{[s,\,K]}\quad \textrm{by Definition \ref{invariant}}\\ &=&(h(\tau)^{\widetilde{A}})^{\sigma(q_\xi(s^{-1}))}|_{\tau=\xi} \quad\textrm{by Proposition \ref{reciprocity}}\\ &=&(h(\tau)^{\widetilde{A}})^{\sigma(\gamma B^{-1})}|_{\tau=\xi} \quad\textrm{by (\ref{qsgB})}\\ &=&h(\tau)^{\widetilde{A}\widetilde{G}} |_{\tau=B^{-1}(\xi)}\quad \textrm{where $G$ is a matrix in $M_2(\mathbb{Z})$ such that}\\ &&\hspace{3.1cm}G\equiv\gamma_p \Mod{N M_2(\mathbb{Z}_p)}~\textrm{for all}~p\,|\,N\\ &=&h(\tau)^{\widetilde{A}\widetilde{B}}|_{\tau=\xi''} \quad\textrm{by (\ref{xBx}) and the fact that for each $p\,|\,N$,}\\ &&\hspace{2.5cm}\textrm{$s_p=1$ and so $\gamma_p B^{-1}=I_2$ owing to (\ref{s1pN}) and (\ref{qpsgB})}\\ &=&h(CC')\quad\textrm{by Definition \ref{invariant} and (\ref{tABx})}.
\end{eqnarray*} In particular, if we consider the case where $C'=C^{-1}$, then we derive that \begin{equation*} h(C)=h(CC')^{[s^{-1},\,K]}=h(C_0)^{[s^{-1},\,K]} =h(\tau_K)^{[s^{-1},\,K]}. \end{equation*} This implies that $h(C)$ belongs to $K_P$ by Proposition \ref{KPKF}. \par For each $p\,\nmid\,N$ and $\mathfrak{p}$ lying above $p$, we have by (\ref{s1pN}) that $\mathrm{ord}_\mathfrak{p}~ s_p=\mathrm{ord}_\mathfrak{p}~\mathfrak{a}'$, and hence \begin{equation*} [s,\,K]|_{K_P}=\sigma(C'). \end{equation*} Therefore we conclude \begin{equation*} h(C)^{\sigma(C')}=h(CC'). \end{equation*} \end{proof} \section {Extended form class groups as Galois groups} With $P$, $K_P$ and $\Gamma$ as in $\S$\ref{classinvariants}, we shall prove our main theorem, which asserts that $\mathcal{Q}_N(d_K)/\sim_\Gamma$ can be regarded as a group isomorphic to $\mathrm{Gal}(K_P/K)$ through the isomorphism described in (\ref{desiredisomorphism}). \begin{lemma}\label{unit} If $Q\in\mathcal{Q}(d_K)$ and $\gamma\in I_{\omega_Q}$, then $j(\gamma,\,\omega_Q)\in\mathcal{O}_K^*$. \end{lemma} \begin{proof} We obtain from $Q=Q^\gamma$ that \begin{equation*} [\omega_Q,\,1]=[\gamma(\omega_Q),\,1] =\frac{1}{j(\gamma,\,\omega_Q)}[\omega_Q,\,1]. \end{equation*} This shows that $j(\gamma,\,\omega_Q)$ is a unit in $\mathcal{O}_K$. \end{proof} \begin{remark} This lemma can also be justified by using (\ref{isotropy}), (\ref{otherisotropy}) and the property \begin{equation}\label{jabt} j(\alpha\beta,\,\tau)=j(\alpha,\,\beta(\tau))j(\beta,\,\tau) \quad(\alpha,\,\beta\in\mathrm{SL}_2(\mathbb{Z}),\,\tau\in\mathbb{H}) \end{equation} (\cite[(1.2.4)]{Shimura}). \end{remark} \begin{proposition}\label{satisfies} For given $P$, the group $\Gamma$ satisfies the property \textup{(\ref{P})}. \end{proposition} \begin{proof} Let $Q=ax^2+bxy+cy^2\in\mathcal{Q}_N(d_K)$ and $\gamma\in\mathrm{SL}_2(\mathbb{Z})$ such that $Q^{\gamma^{-1}}\in\mathcal{Q}_N(d_K)$. \par Assume that $j(\gamma,\,\omega_Q)\mathcal{O}_K\in P$.
Then we have \begin{equation*} j(\gamma,\,\omega_Q)\mathcal{O}_K=\frac{\nu_1}{\nu_2} \mathcal{O}_K\quad\textrm{for some}~\nu_1,\,\nu_2\in\mathcal{O}_K-\{0\} \end{equation*} satisfying \begin{equation}\label{ntnt} \nu_1\equiv t_1,\,\nu_2\equiv t_2\Mod{\mathfrak{n}}~ \textrm{with}~t_1,\,t_2\in T \end{equation} and hence \begin{equation}\label{zjnn} \zeta j(\gamma,\,\omega_Q)=\frac{\nu_1}{\nu_2}\quad \textrm{for some}~\zeta\in\mathcal{O}_K^*. \end{equation} For convenience, let $j=j(\gamma,\,\omega_Q)$ and $Q'=Q^{\gamma^{-1}}$. Then we deduce \begin{equation}\label{newQQ} \gamma(\omega_Q)=\omega_{Q'} \end{equation} and \begin{equation*} [\omega_Q,\,1]=j[\gamma(\omega_Q),\,1]=j[\omega_{Q'},\,1]= \zeta j[\omega_{Q'},\,1]. \end{equation*} So there is $\alpha=\begin{bmatrix}r&s\\ u&v\end{bmatrix}\in\mathrm{GL}_2(\mathbb{Z})$ which yields \begin{equation}\label{zz} \begin{bmatrix}\zeta j\omega_{Q'}\\ \zeta j\end{bmatrix}=\alpha \begin{bmatrix}\omega_Q\\1\end{bmatrix}. \end{equation} Here, since $\zeta j\omega_{Q'}/\zeta j=\omega_{Q'},\, \omega_Q\in\mathbb{H}$, we get $\alpha\in\mathrm{SL}_2(\mathbb{Z})$ and \begin{equation}\label{newww} \omega_{Q'}=\alpha(\omega_Q). \end{equation} Thus we attain $\gamma(\omega_Q)=\omega_{Q'}=\alpha(\omega_Q)$ by (\ref{newQQ}) and (\ref{newww}), from which we get $\omega_Q=(\alpha^{-1}\gamma)(\omega_Q)$ and so \begin{equation}\label{gaI} \gamma\in\alpha\cdot I_{\omega_Q}. \end{equation} Now that $aj\in\mathcal{O}_K$, we see from (\ref{ntnt}), (\ref{zjnn}) and (\ref{zz}) that \begin{eqnarray*} &&a\nu_2(\zeta j)\equiv a\nu_1\equiv at_1\Mod{\mathfrak{n}},~ \textrm{and}\\ &&a\nu_2(\zeta j)\equiv a\nu_2(u\omega_Q+v)\equiv ut_2(a\omega_Q)+at_2v\Mod{\mathfrak{n}}. \end{eqnarray*} It then follows that \begin{equation*} at_1\equiv ut_2(a\omega_Q)+at_2v\Mod{\mathfrak{n}} \end{equation*} and hence \begin{equation*} ut_2(a\omega_Q)+a(t_2v-t_1)\equiv0\Mod{\mathfrak{n}}. 
\end{equation*} Since $\mathfrak{n}=N\mathcal{O}_K=N[a\omega_Q,\,1]$, we have \begin{equation*} ut_2\equiv0\Mod{N}\quad\textrm{and}\quad a(t_2v-t_1)\equiv0\Mod{N}. \end{equation*} Moreover, since $\gcd(N,\,t_1)=\gcd(N,\,t_2)=\gcd(N,\,a)=1$, we achieve that \begin{equation*} u\equiv0\Mod{N}\quad\textrm{and}\quad v\equiv t_1t_2^{-1}\Mod{N} \end{equation*} where $t_2^{-1}$ is an integer satisfying $t_2t_2^{-1}\equiv1\Mod{N}$. This, together with the facts that $\det(\alpha)=1$ and that $T$ is a subgroup of $(\mathbb{Z}/N\mathbb{Z})^*$, implies $\alpha=\begin{bmatrix}r&s\\u&v\end{bmatrix}\in\Gamma$. Therefore we conclude $\gamma\in \Gamma\cdot I_{\omega_Q}$ by (\ref{gaI}), as desired. \par Conversely, assume that $\gamma\in\Gamma\cdot I_{\omega_Q}$, and so \begin{equation*} \gamma=\alpha\beta\quad \textrm{for some}~\alpha= \begin{bmatrix}r&s\\u&v\end{bmatrix}\in\Gamma~ \textrm{and}~\beta\in I_{\omega_Q}. \end{equation*} Here we observe that \begin{equation}\label{u0vt} u\equiv0\Mod{N}\quad\textrm{and}\quad v\equiv t\Mod{N}~\textrm{for some}~t\in T. \end{equation} We then derive that \begin{eqnarray*} j(\gamma,\,\omega_Q)&=&j(\alpha\beta,\,\omega_Q)\\ &=&j(\alpha,\,\beta(\omega_Q))j(\beta,\,\omega_Q)\quad \textrm{by (\ref{jabt})}\\ &=&j(\alpha,\,\omega_Q)\zeta\quad\textrm{for some}~ \zeta\in\mathcal{O}_K^*~\textrm{by the fact that $\beta\in I_{\omega_Q}$ and Lemma \ref{unit}}. \end{eqnarray*} Thus we attain \begin{equation*} \zeta^{-1}j(\gamma,\,\omega_Q)-v =j(\alpha,\,\omega_Q)-v =(u\omega_Q+v)-v =\frac{1}{a}\{u(a\omega_Q)\}. \end{equation*} And, it follows from the fact that $\gcd(N,\,a)=1$ and (\ref{u0vt}) that \begin{equation*} \zeta^{-1}j(\gamma,\,\omega_Q)\equiv^*v \equiv^*t\Mod{\mathfrak{n}}. \end{equation*} This shows that $\zeta^{-1}j(\gamma,\,\omega_Q) \mathcal{O}_K\in P$, and hence $j(\gamma,\,\omega_Q)\mathcal{O}_K\in P$. \par Therefore, the group $\Gamma$ satisfies the property (\ref{P}) for $P$.
\end{proof} \begin{theorem}\label{Galoisgroups} We have an isomorphism \begin{equation}\label{Galoisisomorphism} \begin{array}{ccl} \mathcal{Q}_N(d_K)/\sim_\Gamma&\rightarrow& \mathrm{Gal}(K_P/K)\\ \left[Q\right]&\mapsto& \left(h(\tau_K)\mapsto h(-\overline{\omega}_Q)~|~ h\in\mathcal{F}_{\Gamma,\,\mathbb{Q}}~ \textrm{is finite at}~\tau_K\right). \end{array} \end{equation} \end{theorem} \begin{proof} By Theorem \ref{formclassgroup} and Proposition \ref{satisfies} one may consider $\mathcal{Q}_N(d_K)/\sim_\Gamma$ as a group isomorphic to $I_K(\mathfrak{n})/P$ via the isomorphism $\phi_\Gamma$ in $\S$\ref{sect2}. Let $C\in\mathrm{Cl}(P)$, so that \begin{equation*} C=\phi_\Gamma([Q])=[[\omega_Q,\,1]] \quad\textrm{for some}~Q\in\mathcal{Q}_N(d_K). \end{equation*} Note that $C$ contains an integral ideal $\mathfrak{a}=a^{\varphi(N)}[\omega_Q,\,1]$, where $\varphi$ is the Euler totient function. We establish by Lemma \ref{prime} and the definition (\ref{wQ}) that \begin{equation*} \mathfrak{a}^{-1}=\frac{1}{\mathrm{N}_{K/\mathbb{Q}}(\mathfrak{a})} \overline{\mathfrak{a}}=\frac{1}{a^{\varphi(N)-1}} [-\overline{\omega}_Q,\,1] \end{equation*} and \begin{equation*} \begin{bmatrix}\tau_K\\1\end{bmatrix}= \begin{bmatrix} a^{\varphi(N)} & -a^{\varphi(N)-1}(b+b_K)/2\\ 0&a^{\varphi(N)-1} \end{bmatrix} \begin{bmatrix} -\overline{\omega}_Q/a^{\varphi(N)-1}\\ 1/a^{\varphi(N)-1} \end{bmatrix} \end{equation*} where $\min(\tau_K,\,\mathbb{Q})=x^2+b_Kx+c_K$ ($\in\mathbb{Z}[x]$).
We then derive by Proposition \ref{Galoisdecomposition} that if $h\in\mathcal{F}_{\Gamma,\,\mathbb{Q}}$ is finite at $\tau_K$, then \begin{eqnarray*} h(C)&=&h(\tau)^{\widetilde{\left[\begin{smallmatrix} a^{\varphi(N)}&-a^{\varphi(N)-1}(b+b_K)/2\\ 0&a^{\varphi(N)-1} \end{smallmatrix}\right]}}|_{\tau=-\overline{\omega}_Q} \quad\textrm{by Definition \ref{invariant}}\\ &&\hspace{5cm}\textrm{where $\widetilde{~\cdot~}$ means the reduction onto $\mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm I_2\}$}\\ &=&h(\tau)^{\widetilde{\left[\begin{smallmatrix} 1 & -a^{-1}(b+b_K)/2\\ 0&a^{-1} \end{smallmatrix}\right]}}|_{\tau=-\overline{\omega}_Q} \quad\textrm{since}~a^{\varphi(N)}\equiv1\Mod{N}\\ &&\hspace{5cm}\textrm{where $a^{-1}$ is an integer such that $aa^{-1}\equiv1\Mod{N}$}\\ &=&h(\tau)^{\widetilde{\left[\begin{smallmatrix} 1&0\\0&a^{-1} \end{smallmatrix}\right]} \widetilde{\left[\begin{smallmatrix} 1 &-a^{-1}(b+b_K)/2\\0&1 \end{smallmatrix}\right]}}|_{\tau=-\overline{\omega}_Q}\\ &=&h(\tau)^{\widetilde{\left[\begin{smallmatrix} 1 &-a^{-1}(b+b_K)/2\\0&1 \end{smallmatrix}\right]}}|_{\tau=-\overline{\omega}_Q} \quad\textrm{because $h(\tau)$ has rational Fourier coefficients}\\ &=&h(\tau)|_{\tau=-\overline{\omega}_Q} \quad\textrm{since $h(\tau)$ is modular for $\Gamma$}\\ &=&h(-\overline{\omega}_Q). \end{eqnarray*} Now, the isomorphism $\phi_\Gamma$ followed by the isomorphism \begin{equation*} \begin{array}{ccl} \mathrm{Cl}(P)&\rightarrow&\mathrm{Gal}(K_P/K)\\ C&\mapsto&\left( h(\tau_K)=h(C_0)\mapsto h(C_0)^{\sigma(C)}=h(C)=h(-\overline{\omega}_Q)~|~ h\in\mathcal{F}_{\Gamma,\,\mathbb{Q}}~ \textrm{is finite at}~\tau_K\right) \end{array} \end{equation*} which is induced from Propositions \ref{KPKF}, \ref{transformation} and Remark \ref{identityinvariant}, yields the isomorphism stated in (\ref{Galoisisomorphism}), as desired. 
\end{proof} \begin{remark}\label{difference} In \cite{E-K-S17} Eum, Koo and Shin considered only the case where $K\neq\mathbb{Q}(\sqrt{-1}),\,\mathbb{Q}(\sqrt{-3})$, $P=P_{K,\,1}(\mathfrak{n})$ and $\Gamma=\Gamma_1(N)$. As for the group operation of $\mathcal{Q}_N(d_K)/\sim_{\Gamma_1(N)}$ one can refer to \cite[Remark 2.10]{E-K-S17}. They established an isomorphism \begin{equation}\label{rayGaloisgroups} \begin{array}{lcl} \mathcal{Q}_N(d_K)/\sim_{\Gamma_1(N)}&\rightarrow&\mathrm{Gal}(K_\mathfrak{n}/K)\\ \left[Q\right]=\left[ax^2+bxy+cy^2\right]&\mapsto& \left( h(\tau_K)\mapsto h(\tau)^{\widetilde{\left[ \begin{smallmatrix}a&(b-b_K)/2\\0&1\end{smallmatrix}\right]}}|_{\tau= \omega_Q}~|~h(\tau)\in\mathcal{F}_N~ \textrm{is finite at $\tau_K$}\right). \end{array} \end{equation} The difference between the isomorphisms described in (\ref{Galoisisomorphism}) and (\ref{rayGaloisgroups}) arises from Definition \ref{invariant} of $h(C)$. The invariant $h_\mathfrak{n}(C)$ introduced in \cite[Definition 3.3]{E-K-S17} coincides with $h(C^{-1})$. \end{remark} \section {Finding representatives of extended form classes} In this last section, by improving the proof of Proposition \ref{surjective} further, we shall explain how to find all quadratic forms which represent distinct classes in $\mathcal{Q}_N(d_K)/\sim_\Gamma$. \par For a given $Q=ax^2+bxy+cy^2\in\mathcal{Q}_N(d_K)$ we define an equivalence relation $\equiv_Q$ on $M_{1,\,2}(\mathbb{Z})$ as follows: Let $\begin{bmatrix}r&s\end{bmatrix},\, \begin{bmatrix}u&v\end{bmatrix}\in M_{1,\,2}(\mathbb{Z})$.
Then, $\begin{bmatrix}r&s\end{bmatrix}\equiv_Q\begin{bmatrix}u&v\end{bmatrix}$ if and only if \begin{equation*} \begin{bmatrix}r&s\end{bmatrix} \equiv\pm t\begin{bmatrix}u&v\end{bmatrix}\gamma\Mod{NM_{1,\,2}(\mathbb{Z})} \quad\textrm{for some}~t\in T~\textrm{and}~\gamma\in \Gamma_Q \end{equation*} where \begin{equation*} \Gamma_Q=\left\{\begin{array}{ll} \left\{\pm I_2\right\} & \textrm{if}~d_K\neq-4,\,-3, \\ \left\{\pm I_2,\, \pm\begin{bmatrix} -b/2&-a^{-1}(b^2+4)/4\\a&b/2 \end{bmatrix} \right\} & \textrm{if}~d_K=-4, \\ \left\{\pm I_2,\, \pm\begin{bmatrix} -(b+1)/2&-a^{-1}(b^2+3)/4\\a&(b-1)/2 \end{bmatrix},\, \pm\begin{bmatrix} (b-1)/2&a^{-1}(b^2+3)/4\\-a&-(b+1)/2 \end{bmatrix} \right\} & \textrm{if}~d_K=-3. \end{array} \right. \end{equation*} Here, $a^{-1}$ is an integer satisfying $aa^{-1}\equiv1\Mod{N}$. \begin{lemma}\label{equivGamma} Let $Q=ax^2+bxy+cy^2\in\mathcal{Q}_N(d_K)$ and $\begin{bmatrix}r&s\end{bmatrix},\, \begin{bmatrix}u&v\end{bmatrix}\in M_{1,\,2}(\mathbb{Z})$ such that $\gcd(N,\,Q(s,\,-r))=\gcd(N,\,Q(v,\,-u))=1$. Then, \begin{equation*} [(r\omega_Q+s)\mathcal{O}_K]=[(u\omega_Q+v)\mathcal{O}_K]~ \textrm{in}~P_K(\mathfrak{n})/P \quad \Longleftrightarrow\quad \begin{bmatrix}r&s\end{bmatrix}\equiv_Q \begin{bmatrix}u&v\end{bmatrix}. \end{equation*} \end{lemma} \begin{proof} Note that by Lemma \ref{prime} (i) the fractional ideals $(r\omega_Q+s)\mathcal{O}_K$ and $(u\omega_Q+v)\mathcal{O}_K$ belong to $P_K(\mathfrak{n})$. Furthermore, we know that \begin{equation}\label{OK*} \mathcal{O}_K^*= \left\{ \begin{array}{ll} \{\pm1\} & \textrm{if}~K\neq\mathbb{Q}(\sqrt{-1}),\,\mathbb{Q}(\sqrt{-3}),\\ \{\pm 1,\,\pm\tau_K\} & \textrm{if}~K=\mathbb{Q}(\sqrt{-1}),\\ \{\pm 1,\,\pm\tau_K,\, \pm\tau_K^2\} & \textrm{if}~K=\mathbb{Q}(\sqrt{-3}) \end{array}\right.
\end{equation} (\cite[Exercise 5.9]{Cox}) and so \begin{equation}\label{UK} U_K=\{(m,\,n)\in\mathbb{Z}^2~|~m\tau_K+n\in\mathcal{O}_K^*\} =\left\{\begin{array}{ll} \{\pm(0,\,1)\} & \textrm{if}~K\neq\mathbb{Q}(\sqrt{-1}),\,\mathbb{Q}(\sqrt{-3}),\\ \{\pm(0,\,1),\,\pm(1,\,0)\} & \textrm{if}~K=\mathbb{Q}(\sqrt{-1}),\\ \{\pm(0,\,1),\,\pm(1,\,0),\,\pm(1,\,1)\} & \textrm{if}~K=\mathbb{Q}(\sqrt{-3}). \end{array}\right. \end{equation} Then we achieve that \begin{eqnarray*} &&[(r\omega_Q+s)\mathcal{O}_K]=[(u\omega_Q+v)\mathcal{O}_K] \quad\textrm{in}~P_K(\mathfrak{n})/P\\ &\Longleftrightarrow& \left(\frac{r\omega_Q+s}{u\omega_Q+v}\right)\mathcal{O}_K\in P\\ &\Longleftrightarrow& \frac{r\omega_Q+s}{u\omega_Q+v}\equiv^*\zeta t\Mod{\mathfrak{n}} \quad\textrm{for some}~\zeta\in\mathcal{O}_K^*~\textrm{and}~t\in T\\ &\Longleftrightarrow& a(r\omega_Q+s)\equiv\zeta ta(u\omega_Q+v)\Mod{\mathfrak{n}} \quad\textrm{since $a(u\omega_Q+v)\mathcal{O}_K$ is relatively prime to $\mathfrak{n}$}\\ &&\hspace{6.5cm}\textrm{and}~a\omega_Q\in\mathcal{O}_K\\ &\Longleftrightarrow& r\left(\tau_K+\frac{b_K-b}{2}\right)+as\equiv (m\tau_K+n)t\left\{ u\left(\tau_K+\frac{b_K-b}{2}\right)+av\right\}\Mod{\mathfrak{n}}\\ &&\hspace{10.5cm}\textrm{for some}~(m,\,n)\in U_K\\ &\Longleftrightarrow& r\tau_K+\left(\frac{r(b_K-b)}{2}+as\right)\equiv t(-mub_K+mk+nu)\tau_K+t(-muc_K+nk) \Mod{\mathfrak{n}}\\ &&\hspace{3.5cm}\textrm{with}~ k=\frac{u(b_K-b)}{2}+av,~\textrm{where}~\min(\tau_K,\,\mathbb{Q}) =x^2+b_Kx+c_K\\ &\Longleftrightarrow& r\equiv t\left\{-\left(\frac{b_K+b}{2}\right)m+n\right\}u+tmav\Mod{N}\quad\textrm{and}\\ &&s\equiv ta^{-1}\left(\frac{b_K^2-b^2}{4}-c_K\right)mu+ t\left\{-\left(\frac{b_K-b}{2}\right)m+n\right\}v\Mod{N}\\ &&\hspace{10cm}\textrm{by the fact}~\mathfrak{n}=N[\tau_K,\,1]\\ &\Longleftrightarrow& \begin{bmatrix}r&s\end{bmatrix}\equiv_Q \begin{bmatrix}u&v\end{bmatrix}\quad\textrm{by (\ref{UK}) and the definition of $\equiv_Q$}. 
\end{eqnarray*} \end{proof} For each $Q\in\mathcal{Q}_N(d_K)$, let \begin{equation*} M_Q=\left\{\begin{bmatrix}u&v\end{bmatrix}\in M_{1,\,2}(\mathbb{Z})~|~ \gcd(N,\,Q(v,\,-u))=1\right\}. \end{equation*} \begin{proposition}\label{algorithm} One can explicitly find quadratic forms representing all distinct classes in $\mathcal{Q}_N(d_K)/\sim_\Gamma$. \end{proposition} \begin{proof} We adopt the idea in the proof of Proposition \ref{surjective}. Let $Q_1',\,Q_2',\,\ldots,\,Q_h'$ be quadratic forms in $\mathcal{Q}_N(d_K)$ which represent all distinct classes in $\mathrm{C}(d_K)=\mathcal{Q}(d_K)/\sim$. Then we get by Lemma \ref{equivGamma} that for each $i=1,\,2,\,\ldots,\,h$ \begin{equation*} P_K(\mathfrak{n})/P= \left\{[(u\omega_{Q_i'}+v)\mathcal{O}_K]~|~ \begin{bmatrix}u&v\end{bmatrix}\in M_{Q_i'}/\equiv_{Q_i'}\right\} =\left\{\left[\frac{1}{u\omega_{Q_i'}+v}\mathcal{O}_K\right]~|~ \begin{bmatrix}u&v\end{bmatrix}\in M_{Q_i'}/\equiv_{Q_i'}\right\}. \end{equation*} Thus we obtain by (\ref{decomp}) that \begin{eqnarray*} I_K(\mathfrak{n})/P&=&(P_K(\mathfrak{n})/P)\cdot \{[[\omega_{Q_i'},\,1]]\in I_K(\mathfrak{n})/P~|~i=1,\,2,\,\ldots,\,h\}\\ &=&\left\{\left[\frac{1}{u\omega_{Q_i'}+v} [\omega_{Q_i'},\,1]\right]~|~i=1,\,2,\,\ldots,\,h~ \textrm{and}~\begin{bmatrix}u&v\end{bmatrix} \in M_{Q_i'}/\equiv_{Q_i'}\right\}\\ &=&\left\{ \left[\left[\begin{bmatrix}\mathrm{*}&\mathrm{*}\\ \widetilde{u}&\widetilde{v}\end{bmatrix}(\omega_{Q_i'}),\,1\right]\right]~|~ i=1,\,2,\,\ldots,\,h~ \textrm{and}~\begin{bmatrix}u&v\end{bmatrix} \in M_{Q_i'}/\equiv_{Q_i'}\right\} \end{eqnarray*} where $\begin{bmatrix}\mathrm{*}&\mathrm{*}\\ \widetilde{u}&\widetilde{v}\end{bmatrix}$ is a matrix in $\mathrm{SL}_2(\mathbb{Z})$ such that $\begin{bmatrix}\mathrm{*}&\mathrm{*}\\ \widetilde{u}&\widetilde{v}\end{bmatrix}\equiv \begin{bmatrix}\mathrm{*}&\mathrm{*}\\u&v\end{bmatrix} \Mod{NM_2(\mathbb{Z})}$. 
Therefore we conclude \begin{equation*} \mathcal{Q}_N(d_K)/\sim_\Gamma=\left\{ \left[Q_i'^{\left[\begin{smallmatrix} \mathrm{*}&\mathrm{*}\\\widetilde{u}&\widetilde{v} \end{smallmatrix}\right]^{-1}} \right]~|~i=1,\,2,\,\ldots,\,h~ \textrm{and}~\begin{bmatrix}u&v\end{bmatrix} \in M_{Q_i'}/\equiv_{Q_i'} \right\}. \end{equation*} \end{proof} \begin{example}\label{example} Let $K=\mathbb{Q}(\sqrt{-5})$, $N=12$ and $T=(\mathbb{Z}/N\mathbb{Z})^*$. Then we get $P=P_{K,\,\mathbb{Z}}(\mathfrak{n})$ and $K_P=H_\mathcal{O}$, where $\mathfrak{n}=N\mathcal{O}_K$ and $\mathcal{O}$ is the order of conductor $N$ in $K$. There are two reduced forms of discriminant $d_K=-20$, namely \begin{equation*} Q_1=x^2+5y^2\quad\textrm{and}\quad Q_2=2x^2+2xy+3y^2. \end{equation*} Set \begin{equation*} Q_1'=Q_1\quad \textrm{and}\quad Q_2'=Q_2^{\left[\begin{smallmatrix}1&1\\1&2\end{smallmatrix}\right]} =7x^2+22xy+18y^2 \end{equation*} which belong to $\mathcal{Q}_N(d_K)$. We then see that \begin{equation*} M_{Q_1'}/\equiv_{Q_1'}=\left\{ \begin{bmatrix}0&1\end{bmatrix},\, \begin{bmatrix}1&0\end{bmatrix},\, \begin{bmatrix}1&6\end{bmatrix},\, \begin{bmatrix}2&3\end{bmatrix},\, \begin{bmatrix}3&2\end{bmatrix},\, \begin{bmatrix}3&4\end{bmatrix},\, \begin{bmatrix}4&3\end{bmatrix},\, \begin{bmatrix}6&1\end{bmatrix} \right\} \end{equation*} with corresponding matrices \begin{eqnarray*} \begin{bmatrix} 1&0\\0&1 \end{bmatrix},\, \begin{bmatrix} 0&-1\\1&0 \end{bmatrix},\, \begin{bmatrix} 0&-1\\1&6 \end{bmatrix},\, \begin{bmatrix} 1&1\\2&3 \end{bmatrix},\, \begin{bmatrix} -1&-1\\3&2 \end{bmatrix},\, \begin{bmatrix} 1&1\\3&4 \end{bmatrix},\, \begin{bmatrix} -1&-1\\4&3 \end{bmatrix},\, \begin{bmatrix} 1&0\\6&1 \end{bmatrix} \end{eqnarray*} and \begin{equation*} M_{Q_2'}/\equiv_{Q_2'}=\left\{ \begin{bmatrix}0&1\end{bmatrix},\, \begin{bmatrix}1&5\end{bmatrix},\, \begin{bmatrix}1&11\end{bmatrix},\, \begin{bmatrix}2&1\end{bmatrix},\, \begin{bmatrix}3&1\end{bmatrix},\, \begin{bmatrix}3&7\end{bmatrix},\, 
\begin{bmatrix}4&5\end{bmatrix},\, \begin{bmatrix}6&1\end{bmatrix} \right\} \end{equation*} with corresponding matrices \begin{eqnarray*} \begin{bmatrix} 1&0\\0&1 \end{bmatrix},\, \begin{bmatrix} 0&-1\\1&5 \end{bmatrix},\, \begin{bmatrix} 0&-1\\1&11 \end{bmatrix},\, \begin{bmatrix} -1&-1\\2&1 \end{bmatrix},\, \begin{bmatrix} 1&0\\3&1 \end{bmatrix},\, \begin{bmatrix} 1&2\\3&7 \end{bmatrix},\, \begin{bmatrix} 1&1\\4&5 \end{bmatrix},\, \begin{bmatrix} 1&0\\6&1 \end{bmatrix}. \end{eqnarray*} Hence there are $16$ quadratic forms \begin{equation*} \begin{array}{llll} x^2+5y^2,&5x^2+y^2,&41x^2+12xy+y^2,&29x^2-26xy+6y^2,\\ 49x^2+34xy+6y^2,&61x^2-38xy+6y^2,&89x^2+46xy+6y^2,&181x^2-60xy+5y^2,\\ 7x^2+22xy+18y^2,&83x^2+48xy+7y^2,&623x^2+132xy+7y^2,&35x^2+20xy+3y^2,\\ 103x^2-86xy+18y^2,&43x^2-18xy+2y^2,&23x^2-16xy+3y^2,&523x^2-194xy+18y^2 \end{array} \end{equation*} which represent all distinct classes in $\mathcal{Q}_N(d_K)/ \sim_\Gamma=\mathcal{Q}_{12}(-20)/\sim_{\Gamma_0(12)}$. \par On the other hand, for $\begin{bmatrix}r_1&r_2\end{bmatrix}\in M_{1,\,2}(\mathbb{Q}) \setminus M_{1,\,2}(\mathbb{Z})$ the \textit{Siegel function} $g_{\left[\begin{smallmatrix}r_1&r_2\end{smallmatrix}\right]}(\tau)$ is given by the infinite product \begin{eqnarray*} g_{\left[\begin{smallmatrix}r_1&r_2\end{smallmatrix}\right]}(\tau) &=&-e^{\pi\mathrm{i}r_2(r_1-1)} q^{(1/2)(r_1^2-r_1+1/6)} (1-q^{r_1}e^{2\pi\mathrm{i}r_2})\\ &&\times \prod_{n=1}^\infty(1-q^{n+r_1}e^{2\pi\mathrm{i}r_2}) (1-q^{n-r_1}e^{-2\pi\mathrm{i}r_2})\quad(\tau\in\mathbb{H}) \end{eqnarray*} which generalizes the Dedekind eta-function $\displaystyle q^{1/24}\prod_{n=1}^\infty (1-q^n)$. 
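As a quick consistency check on the $16$ quadratic forms listed above (an added computational aside, not part of the original computation), one can verify in a few lines of Python that each form indeed lies in $\mathcal{Q}_{12}(-20)$, i.e. has discriminant $b^2-4ac=-20$ and leading coefficient relatively prime to $N=12$:

```python
# Sanity check: the 16 representatives a*x^2 + b*x*y + c*y^2 listed above
# should satisfy b^2 - 4ac = -20 and gcd(a, 12) = 1.
from math import gcd

forms = [
    (1, 0, 5), (5, 0, 1), (41, 12, 1), (29, -26, 6),
    (49, 34, 6), (61, -38, 6), (89, 46, 6), (181, -60, 5),
    (7, 22, 18), (83, 48, 7), (623, 132, 7), (35, 20, 3),
    (103, -86, 18), (43, -18, 2), (23, -16, 3), (523, -194, 18),
]
assert all(b * b - 4 * a * c == -20 for a, b, c in forms)
assert all(gcd(a, 12) == 1 for a, b, c in forms)
```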
Then the function \begin{equation*} g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12\tau)^{12} =\left(\frac{\eta(6\tau)}{\eta(12\tau)}\right)^{24} \end{equation*} belongs to $\mathcal{F}_{\Gamma_0(12),\,\mathbb{Q}}$ (\cite[Theorem 1.64]{Ono} or \cite{K-L}), and the Galois conjugates of $g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12\tau_K)^{12}$ over $K$ are \begin{equation*} \begin{array}{llll} g_1=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12\sqrt{-5})^{12}, & g_2=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12\sqrt{-5}/5)^{12}, \\ g_3=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12(6+\sqrt{-5})/41)^{12}, & g_4=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12(-13+\sqrt{-5})/29)^{12},\\ g_5=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12(17+\sqrt{-5})/49)^{12}, & g_6=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12(-19+\sqrt{-5})/61)^{12}, \\ g_7=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12(23+\sqrt{-5})/89)^{12}, & g_8=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12(-30+\sqrt{-5})/181)^{12},\\ g_9=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12(11+\sqrt{-5})/7)^{12}, & g_{10}=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12(24+\sqrt{-5})/83)^{12}, \\ g_{11}=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12(66+\sqrt{-5})/623)^{12}, & g_{12}=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12(10+\sqrt{-5})/35)^{12},\\ g_{13}=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12(-43+\sqrt{-5})/103)^{12}, & g_{14}=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12(-9+\sqrt{-5})/43)^{12}, \\ g_{15}=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12(-8+\sqrt{-5})/23)^{12}, & g_{16}=g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12(-97+\sqrt{-5})/523)^{12} \end{array} \end{equation*} possibly with some multiplicity. 
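The displayed eta-quotient identity can be checked numerically straight from the defining $q$-products. The following Python sketch (not part of the argument; it assumes only the product formulas for the Siegel function and the Dedekind eta-function quoted above, with the convention $q^x = e^{2\pi\mathrm{i}x\tau}$) verifies $g_{\left[\begin{smallmatrix}1/2&0\end{smallmatrix}\right]}(12\tau)^{12} = (\eta(6\tau)/\eta(12\tau))^{24}$ at a sample point of the upper half-plane.

```python
import cmath

def eta(tau, terms=60):
    """Dedekind eta-function via its q-product: eta(tau) = q^{1/24} prod (1 - q^n)."""
    qp = lambda x: cmath.exp(2j * cmath.pi * tau * x)  # q^x with q = e^{2 pi i tau}
    val = qp(1.0 / 24)
    for n in range(1, terms + 1):
        val *= 1 - qp(n)
    return val

def siegel_half(tau, terms=60):
    """Siegel function g_{[1/2 0]}(tau) from the product in the text (r1 = 1/2, r2 = 0)."""
    qp = lambda x: cmath.exp(2j * cmath.pi * tau * x)
    # prefactor: -e^{pi i r2 (r1 - 1)} q^{(1/2)(r1^2 - r1 + 1/6)} = -q^{-1/24}
    val = -qp(-1.0 / 24) * (1 - qp(0.5))
    for n in range(1, terms + 1):
        val *= (1 - qp(n + 0.5)) * (1 - qp(n - 0.5))
    return val

tau = 0.3 + 0.2j  # sample point in the upper half-plane
lhs = siegel_half(12 * tau) ** 12
rhs = (eta(6 * tau) / eta(12 * tau)) ** 24
assert abs(lhs - rhs) < 1e-9 * abs(rhs)
```

The powers $q^x$ are kept in exponential form throughout to avoid the branch ambiguities of fractional complex powers.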
Now, we evaluate \begin{eqnarray*} &&\prod_{i=1}^{16}(x-g_i)\\&=& x^{16}+1251968x^{15}-14929949056x^{14}+1684515904384x^{13} -61912544374756x^{12}\\ &&+362333829428160x^{11} +32778846351721632x^{10}-845856631699319872x^9\\ &&+4605865492693542918x^8+91164259067285621248x^7 -124917935291699694528x^6\\ &&+180920285564131280640x^5 -3000295144057714916x^4+8871452719720384x^3\\ &&+458008762175904x^2 -1597177179712x+1 \end{eqnarray*} with nonzero discriminant. Thus $g_{\left[\begin{smallmatrix} 1/2&0\end{smallmatrix}\right]}(12\tau_K)^{12}$ generates $K_P=H_\mathcal{O}$ over $K$. \end{example} \begin{remark} In \cite{Schertz} Schertz deals with various constructive problems on the theory of complex multiplication in terms of the Dedekind eta-function and Siegel function. See also \cite{K-L} and \cite{Ramachandra}. \end{remark} \address{ Applied Algebra and Optimization Research Center\\ Sungkyunkwan University\\ Suwon-si, Gyeonggi-do 16419\\ Republic of Korea} {[email protected]} \address{ Department of Mathematical Sciences \\ KAIST \\ Daejeon 34141\\ Republic of Korea} {[email protected]} \address{ Department of Mathematics\\ Hankuk University of Foreign Studies\\ Yongin-si, Gyeonggi-do 17035\\ Republic of Korea} {[email protected]} \end{document}
\begin{document} \title{On the Fourier asymptotics of absolutely continuous measures with power-law singularities} \begin{abstract} We prove sharp estimates on the time-average behavior of the squared absolute value of the Fourier transform of some absolutely continuous measures that may have power-law singularities, in the sense that their Radon-Nikodym derivatives diverge with a power-law order. We also discuss an application to spectral measures of finite-rank perturbations of the discrete Laplacian. \end{abstract} \ \noindent{\bf Keywords}: Fourier Analysis, Quantum Dynamics and Spectral Theory. \ \noindent{\bf AMS classification codes}: 28A80 (primary), 42A85 (secondary). \renewcommand{\thetable}{\Alph{table}} \section{Introduction}\label{sectIntrod} \subsection{Contextualization} The study of the long time behavior of the Fourier transform of fractal (spectral) measures and of the modulus of continuity of the distribution of such measures plays an important role in spectral theory and quantum dynamics (such behavior is related to good transport properties). Most works in this area are motivated by possible applications to Schr\"odinger operators (see \cite{AvilaUaH,Last,strichartz1990,Zhao,Zhao2} and references therein). In this context, one may highlight two classical results on finite Borel measures on~$\mathbb{R}$: the Riemann-Lebesgue Lemma (around 1900) and Wiener's Lemma (around 1935). One may also highlight Strichartz's Theorem \cite{strichartz1990}, from 1990 (see Theorem \ref{Strichartztheorem} (i) below), which establishes (power-law) convergence rates for the time-average behavior of the squared absolute value of the Fourier transform of uniformly $\alpha$-H\"older continuous measures. We present some details. Let $\mu$ be a finite positive Borel measure on~$\mathbb{R}$ and $\alpha \in [0,1]$.
We recall that $\mu$ is uniformly $\alpha$-H\"older continuous (denoted {\rm U}$\alpha${\rm H}) if there exists a constant $C>0$ such that for each interval $I$ with $\ell(I) < 1$, $\mu(I) \le C\, \ell(I)^\alpha$, where $\ell(\cdot)$ denotes the Lebesgue measure on~$\mathbb{R}$. Namely, one has the following result. \begin{theorem}[Theorems 2.5 and 3.1 in \cite{Last}]\label{Strichartztheorem} Let $\mu$ be a finite Borel measure on $\mathbb{R}$ and $\alpha \in [0,1]$. \begin{enumerate} \item[\rm{i)}] If $\mu$ is {\rm U}$\alpha${\rm H}, then there exists a constant $C_\mu> 0$, depending only on $\mu$, such that for every $f \in {\mathrm L}^2(\mathbb{R}, d\mu )$ and every $t>0$, \[\frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} f(x)\, d\mu(x) \bigg|^2 ds < C_{\mu} \|f\|_{{\mathrm L}^2(\mathbb{R}, d\mu )}^2 t^{-\alpha}. \] \item[\rm{ii)}] If there exists $C_{\mu}>0$ such that for every $t>0$, \[\frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx}\, d\mu(x) \bigg|^2 ds < C_{\mu} t^{-\alpha},\] then $\mu$ is {\rm U}$\frac{\alpha}{2}${\rm H}. \end{enumerate} \end{theorem} \begin{remark}{\rm Note that Theorem \ref{Strichartztheorem}-i) is, indeed, a particular case of Strichartz's Theorem~\cite{strichartz1990}, which holds for $\sigma$-finite measures.} \end{remark} Motivated by applications in spectral theory and quantum dynamics, we use in this work Fourier analysis to prove sharp estimates on the time-average behavior of the squared absolute value of the Fourier transform of some absolutely continuous measures that may have power-law singularities. Our main goal here is to obtain initial states (for the Schr\"odinger equation) for which the respective spectral measures have a singularity with a power-law growth rate, and for which the asymptotic behavior of the respective (time-average) quantum return probabilities (see definition ahead) depends continuously on such singularities (see Theorem \ref{maintheorem} and Example \ref{ex2} ahead). 
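Returning to Theorem \ref{Strichartztheorem}-i), its content can be illustrated with the simplest possible case (a numerical sketch, independent of the arguments below): for the restriction of the Lebesgue measure to $[0,1]$, which is {\rm U}$1${\rm H}, one has $|\hat\mu(s)|^2 = (\sin(\pi s)/(\pi s))^2$, so $t$ times the time average should tend to $\int_0^\infty (\sin(\pi s)/(\pi s))^2\,ds = 1/2$, in agreement with the predicted $t^{-1}$ rate.

```python
import math

def avg_fourier_sq(t, n=200000):
    """(1/t) * int_0^t |mu_hat(s)|^2 ds by the midpoint rule, where
    mu_hat(s) = int_0^1 e^{-2 pi i s x} dx, so |mu_hat(s)|^2 = (sin(pi s)/(pi s))^2."""
    h = t / n
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * h
        total += (math.sin(math.pi * s) / (math.pi * s)) ** 2
    return total * h / t

# alpha = 1 decay: t * average approaches int_0^infty (sin(pi s)/(pi s))^2 ds = 1/2
for t in (50.0, 100.0):
    assert abs(t * avg_fourier_sq(t) - 0.5) < 0.02
```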
To the best of the present authors' knowledge, this phenomenon has never been discussed, although it may be natural to specialists. In the next remark we discuss the fact that Theorem \ref{Strichartztheorem}-i), in general, is not sufficient to obtain sharp estimates on such Fourier transform averages of absolutely continuous measures with power-law singularities. \begin{remark}{\rm For some important classes of measures in spectral theory and quantum dynamics, such as spectral measures of dynamically defined Schr{\"o}dinger operators, $1/2$-H\"older continuity is typically optimal (usually due to the fact that there are square root singularities associated with the boundary of the spectrum; see \cite{AvilaUaH,Damanik,Zhao,Zhao2} for additional comments); by Theorem~\ref{Strichartztheorem}-i), the time-average behavior of the squared absolute value of the Fourier transform of such measures decays at least as $1/\sqrt{t}$. This rate, in general, is far from optimal as, for instance, in the case of the discrete Laplacian: consider $\ell^2({\mathbb{Z}})$ with canonical basis $\delta_j = (\delta_{jk})_{k \in {\mathbb{Z}}}$, $j\in{\mathbb Z}$, and the Laplacian, whose action on $\psi\in\ell^2({\mathbb{Z}})$ is given by \[\triangle\psi(k) = \psi(k+1) + \psi(k-1);\] so, although the spectral measure $\mu_{\delta_0}^{\triangle}$ of the pair $(\triangle,\delta_0)$ is at most uniformly $1/2$-H\"older continuous, one has \begin{equation}\label{eq00} \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} d\mu_{\delta_0}^{\triangle}(x)\bigg|^2 ds=O(\log(t)/t); \end{equation} see the case $\beta = \frac{1}{2}$ in Example \ref{ex1} and, e.g., Section 12.3 in \cite{Oliveira} for details on the Radon-Nikodym derivative of such spectral measure (here, $h(t) = O(r(t))$ indicates that there is $C>0$ so that, for each $t>0$, $h(t)\leq Cr(t)$).
By Theorem~\ref{Strichartztheorem}-i), one may conclude that \begin{equation*} \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} d\mu_{\delta_0}^{\triangle}(x)\bigg|^2 ds=O(1/\sqrt{t}), \end{equation*} which gives a worse bound than the one given by~\eqref{eq00}. Moreover, since $\mu_{\delta_0}^{\triangle}$ is at most uniformly $1/2$-H\"older continuous, by Theorem \ref{Strichartztheorem}-ii), the rate in~(\ref{eq00}) is (power-law) optimal. Namely, suppose that there exists $\varepsilon>0$ such that one can replace $t^{-1}$ by $t^{-1-\varepsilon}$ in~\eqref{eq00}; then, by Theorem \ref{Strichartztheorem}-ii), $\mu_{\delta_0}^{\triangle}$ is at least uniformly $(1/2+\varepsilon/4)$-H\"older continuous.} \end{remark} In order to put our work into perspective, we present the following example. To each $0<\beta <1$, denote by \begin{equation}\label{eqMbeta} M_\beta:= \max_{\eta > 0}\bigg|\displaystyle\int_0^\eta e^{-iu} u^{-\beta} du \bigg|^2; \end{equation} for $0<\beta <1$, $\bigg|\displaystyle\int_0^\infty e^{-iu} u^{-\beta} du \bigg| = \Gamma(1-\beta)$, where $\Gamma$ stands for the Gamma Function; thus, $M_\beta < \infty$. \begin{example}\label{ex1} {\rm Set, for each $\frac{1}{2} \leq \beta < 1$ and each $f \in {\mathrm L}^1(\mathbb{R})$, \begin{equation}\label{eqKB} K_{\beta,f} := \{(|\cdot|^{-\beta} \chi_{(0,1]}) \ast f\}, \end{equation} so $K_{\beta,f} \in {\mathrm L}^1(\mathbb{R})$.
By the Convolution Theorem, it follows that for each $s >0$, \begin{eqnarray*} \widehat{K_{\beta,f}}(s) &=& \biggl\{\int_0^1 e^{-2\pi i x s} x^{-\beta} dx \biggr\} \, \hat{f}(s) = \biggl\{\frac{1}{(2\pi)^{1-\beta}s^{1-\beta}} \int_0^1 e^{-2\pi i x s} (2 \pi x s)^{-\beta} (2\pi s)\,dx \biggr\} \, \hat{f}(s)\\ &=& \biggl\{\frac{1}{(2\pi)^{1-\beta}s^{1-\beta}} \int_0^{2\pi s} e^{- iu} u^{-\beta} du \biggr\} \, \hat{f}(s), \end{eqnarray*} and so, for each $t >0$ and $\frac{1}{2} < \beta < 1$, \begin{eqnarray}\label{eqex} \nonumber\frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} d\mu_{\beta,f}(x)\bigg|^2 ds &\leq& {M_\beta}\, \frac{1}{t}\int_0^ts^{2(\beta-1)}|\hat{f}(s)|^2 ds\\ &\le& {M_\beta} \frac{\|f\|^2_{{\mathrm L^1}(\mathbb{R})}}{t}\int_0^ts^{2(\beta-1)}ds = \frac{{M_\beta}}{2\beta-1} \, \frac{\|f\|^2_{{\mathrm L^1}(\mathbb{R})}}{t^{2(1-\beta)}}, \end{eqnarray} where $d\mu_{\beta,f}(x) = K_{\beta,f}(x)\, dx$. For $\beta = \frac{1}{2}$ and for every $t>1$, one has \begin{eqnarray}\label{eqex777777} \nonumber\frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} d\mu_{\beta,f}(x)\bigg|^2 ds &=& \nonumber\frac{1}{t} \int_0^1 \bigg|\int_{\mathbb{R}} e^{-2\pi isx} d\mu_{\beta,f}(x)\bigg|^2 ds + \nonumber \frac{1}{t} \int_1^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} d\mu_{\beta,f}(x)\bigg|^2 ds\\ \nonumber &\leq& \frac{\|K_{\beta,f}\|_{{\mathrm L}^1(\mathbb{R})}^2}{t} + \, \frac{M_\beta\|f\|^2_{{\mathrm L^1}(\mathbb{R})}}{t}\int_1^t \frac{1}{s} \, ds\\ &=& \frac{\|K_{\beta,f}\|_{{\mathrm L}^1(\mathbb{R})}^2}{t} + \, \frac{M_\beta\|f\|^2_{{\mathrm L^1}(\mathbb{R})} \log(t)}{t}. \end{eqnarray} We argue that the above power-law upper estimates cannot be improved; suppose, on the contrary, that there exists $0<\varepsilon<1-\beta$ such that one can replace $2(1-\beta)$ by $2(1-\beta)+\varepsilon$ in the estimate~\eqref{eqex} or \eqref{eqex777777} (recall that in \eqref{eqex777777} $\beta = 1/2$).
Then, by Theorem \ref{Strichartztheorem}-ii), for each $f \in {\mathrm L}^1(\mathbb{R})$, $\mu_{\beta,f}$ is at least {\rm U}$(1-\beta+ \varepsilon/4)${\rm H}. Now, set, for each $0<\delta <1-\beta$ and each $x \in \mathbb{R}\setminus\{0\}$, \[f_\delta(x) = \frac{1}{x^{1-\delta}} \; \chi_{(0,1]}(x),\] so that $f_\delta \in {\mathrm L}^1(\mathbb{R})$. Note that for $0< x \leq 1$, \begin{eqnarray*} K_{\beta,f_\delta}(x) &=& \int_0^1 \frac{1}{y^{\beta} |x-y|^{1-\delta}} \, dy \geq \int_0^x \frac{1}{y^{\beta} |x-y|^{1-\delta}} \, dy \geq \frac{1}{x^\beta} \int_0^x \frac{1}{|x-y|^{1-\delta}} \, dy \geq x^{- \beta+\delta}. \end{eqnarray*} Thus, for $0< \epsilon < 1$, \[ \int_0^\epsilon K_{\beta,f_\delta}(x) \, dx \geq \int_0^\epsilon x^{-\beta+\delta} dx = \frac{\epsilon^{(1- \beta+ \delta)} }{(1-\beta + \delta)},\] and therefore, $\mu_{\beta,f_\delta}$ is at most {\rm U}$(1-\beta+ \delta)${\rm H}. Finally, let $0<\delta<\varepsilon/4$; then, since $\mu_{\beta,f_\delta}$ is at most {\rm U}$(1-\beta+ \delta)${\rm H}, one gets a contradiction with the fact that $\mu_{\beta,f_\delta}$ is also {\rm U}$(1-\beta+ \varepsilon/4)${\rm H}. We emphasize that by applying Theorem~\ref{Strichartztheorem}-i), one can obtain at most that \[\frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} d\mu_{\beta,f_\delta}(x)\bigg|^2 ds=O(t^{-(1-\beta)-\delta}), \] which gives a worse bound than the one given by~\eqref{eqex} (resp.~\eqref{eqex777777} for $\beta = \frac{1}{2}$) for $0<\delta<1-\beta$. We also note that what makes this example interesting is the fact that the measure has a power-law singularity, in the sense that its Radon-Nikodym derivative has a power-law divergence.
} \end{example} In this work we use Fourier analysis to extend the estimates in (\ref{eqex})-\eqref{eqex777777} to the measures \[ d\mu_{\beta,f,g}(x) = K_{\beta,f}(x) g(x)\, dx, \] with $f \in {\mathrm L}^1(\mathbb{R})$ and $g \in {\mathrm L}^\infty[0,1]$ (see Theorem~\ref{2Stheorem} ahead), and then we discuss an application of this result to spectral measures of finite-rank perturbations of the Laplacian. Namely, by taking into account Example~3.1 in~\cite{Last}, we use this class of measures to obtain initial states (for the Schr\"odinger equation) for which the respective spectral measures have power-law singularities, and for which the asymptotic behavior of the respective (time-average) quantum return probabilities (see definition ahead) depends continuously on such singularities (see Theorem~\ref{maintheorem} and Example~\ref{ex2}). The organization of this work is as follows. In Subsection~\ref{subsectStrich} we discuss a Strichartz-like inequality (Theorem~\ref{2Stheorem}). In Section~\ref{sectFRPFL} we use some well-known results on the Radon-Nikodym derivative of spectral measures \cite{Germinet,Lasttransfermatrix} to present an application to finite-rank perturbations of the Laplacian. The proof of Theorem~\ref{2Stheorem} is left to Section~\ref{sectProofMain}. Some words about the notation:~$\hat{f}$ will always denote the Fourier transform of a function $f \in {\mathrm L}^1(\mathbb{R})$. If $h,g : \mathbb{R} \longrightarrow \mathbb{R}$ are measurable functions, then $h \ast g$ denotes the convolution product of $h$ and $g$; $\mu$ always indicates a finite positive Borel measure on $\mathbb{R}$. For each $x \in \mathbb{R}$ and each $\epsilon>0$, $B(x,\epsilon)$ denotes the open interval $(x-\epsilon,x+\epsilon)$. If $g$ is a complex-valued function, then $\mathfrak{Re}(g)$ and $\mathfrak{Im}(g)$ denote its real and imaginary parts, respectively.
If~$f$ is a real-valued function, then $f^+$ and $f^-$ denote its positive and negative parts, respectively. \subsection{A Strichartz-like Inequality} \label{subsectStrich} Let $K_{\beta,f}$ be as in Example \ref{ex1}, $g \in {\mathrm L}^\infty[0,1]$ and consider \begin{equation} d\mu_{\beta,f,g}(x) = K_{\beta,f}(x) g(x) \, dx. \end{equation} For simplicity, suppose that $f,g$ are nonnegative (measurable) real-valued functions. So, by well-known arguments \cite{Last,strichartz1990} (see (\ref{maineq1}) ahead), it is possible to show that for every $t>0$, \begin{eqnarray} \label{eq0} \nonumber \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} d\mu_{\beta,f,g}(x) \bigg|^2\nonumber ds &=& \nonumber \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} K_{\beta,f}(x) g(x) \, dx \bigg|^2 ds\\ &\le& \frac{e^{2\pi} }{2 \sqrt{\pi}} \int_{\mathbb{R}} \int_{\mathbb{R}} K_{\beta,f}(x)g(x) K_{\beta,f}(y) g(y) e^{-\frac{t^2|x-y|^2}{4}} dx dy. \end{eqnarray} Moreover, for each $x \in \mathbb{R}$ and each $0<\epsilon<1$, one has \[ \mu_{\beta,f}(B(x,\epsilon)) \leq \|f\|_{{\mathrm L}^1(\mathbb{R})} \epsilon^{1-\beta},\] where $d\mu_{\beta,f}(x) = K_{\beta,f}(x) dx$.
So, by using (\ref{eq0}) and a Strichartz-like argument (as in \cite{Last,strichartz1990}), it follows that for every $t>0$, \begin{eqnarray} \nonumber \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} d\mu_{\beta,f,g}(x) \bigg|^2 ds &\leq& \frac{e^{2\pi} \|g\|_{{\mathrm L}^\infty[0,1]}^2}{2 \sqrt{\pi}} \int_{\mathbb{R}} \int_{\mathbb{R}} K_{\beta,f}(x) K_{\beta,f}(y) e^{-\frac{t^2|x-y|^2}{4}} dx dy\\ \nonumber &=& \frac{e^{2\pi} \|g\|_{{\mathrm L}^\infty[0,1]}^2}{2 \sqrt{\pi}} \int_{\mathbb{R}} \int_{\mathbb{R}} e^{-\frac{t^2|x-y|^2}{4}} d\mu_{\beta,f}(y) d\mu_{\beta,f}(x)\\ &=&\nonumber \frac{e^{2\pi} \|g\|_{{\mathrm L}^\infty[0,1]}^2}{ \sqrt{\pi}} \int_{\mathbb{R}} \sum_{n=0}^\infty \int_{\frac{n}{t}\leq |x-y|<\frac{n+1}{t}} e^{-\frac{t^2|x-y|^2}{4}} d\mu_{\beta,f}(y) d\mu_{\beta,f}(x) \\ \nonumber &\leq& \frac{e^{2\pi} \|g\|_{{\mathrm L}^\infty[0,1]}^2}{ \sqrt{\pi}} \int_{\mathbb{R}} \sum_{n=0}^\infty e^{-n^2/4} \|f\|_{{\mathrm L}^1(\mathbb{R})} t^{\beta-1} d\mu_{\beta,f}(x)\\ &=& \frac{e^{2\pi} \|g\|_{{\mathrm L}^\infty[0,1]}^2}{ \sqrt{\pi}} \left(\sum_{n=0}^\infty e^{-n^2/4}\right) \|f\|_{{\mathrm L}^1(\mathbb{R})} \|K_{\beta,f}\|_{{\mathrm L}^1(\mathbb{R})}\; t^{-(1-\beta)}. \end{eqnarray} Thus, by Young's Convolution Inequality, one gets, for every $t>0$, \begin{eqnarray}\label{eq2} \nonumber \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} d\mu_{\beta,f,g}(x) \bigg|^2 ds &\leq& \frac{e^{2\pi} \|g\|_{{\mathrm L}^\infty[0,1]}^2}{ \sqrt{\pi}} \left(\sum_{n=0}^\infty e^{-n^2/4}\right) \|f\|^2_{{\mathrm L}^1(\mathbb{R})} \|(|\cdot|^{-\beta} \chi_{(0,1]})\|_{{\mathrm L}^1(\mathbb{R})} t^{-(1-\beta)} \\ &=& \left(\sum_{n=0}^\infty e^{-n^2/4}\right) \frac{ e^{2\pi} \|g\|_{{\mathrm L}^\infty[0,1]}^2 \|f\|_{{\mathrm L}^1(\mathbb{R})}^2 }{ \sqrt{\pi}(1-\beta)}\; t^{-(1-\beta)}.
\end{eqnarray} We remark that the uniform estimate over $x$ in the discussion above makes the decay in~\eqref{eq2} far from optimal (see the proof of Theorem \ref{2Stheorem} and compare~\eqref{eq2} with~(\ref{eq00lemma}) and~(\ref{eq1lemma})). By using Fourier analysis, we will explore this point of the argument to obtain the following result. \begin{theorem}\label{2Stheorem} For $\frac{1}{2} \leq \beta < 1$, let $ K_{\beta,f}$ and~$M_\beta$ be as before. To every $g \in {\mathrm L}^\infty[0,1]$, consider $d\mu_{\beta,f,g}(x)= K_{\beta,f}(x) g(x) dx$. Then: \begin{enumerate} \item[\rm {i)}] if $\frac{1}{2} < \beta < 1$, for every $t>0$, \begin{eqnarray*} \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx}\, d\mu_{\beta,f,g}(x) \bigg|^2 ds &\leq& 2^{18} e^{2\pi} {M_\beta} \Gamma(\beta - 1/2) \|f\|_{{\mathrm L}^1(\mathbb{R})}^2 \|g\|_{{\mathrm L}^\infty[0,1]}^2\; t^{-2(1-\beta)}; \end{eqnarray*} \item[\rm {ii)}] if $\beta = \frac{1}{2}$, for every $t>0$, \begin{eqnarray*} \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx}\, d\mu_{\beta,f,g}(x) \bigg|^2 ds &\leq& e^{2\pi} \|f\|_{{\mathrm L}^1(\mathbb{R})}^2 \|g\|_{{\mathrm L}^\infty[0,1]}^2 \biggr[\biggr( \frac{1}{t} + M_{\frac{1}{2}} \frac{\Gamma(0, 4\pi^2/t^2)}{t}\biggr) \biggr], \end{eqnarray*} in particular, for sufficiently large $t$ , \begin{eqnarray*} \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx}\, d\mu_{\beta,f,g}(x) \bigg|^2 ds &\leq& e^{2\pi} \|f\|_{{\mathrm L}^1(\mathbb{R})}^2 \|g\|_{{\mathrm L}^\infty[0,1]}^2 \biggr[\biggr( \frac{1}{t} + 3 M_{\frac{1}{2}} \frac{\log(t)}{t}\biggr) \biggr], \end{eqnarray*} since \[\lim_{t \to \infty} \frac{\Gamma(0, 4\pi^2/t^2)}{\log(t)} = 2,\] \end{enumerate} where $\Gamma(\cdot,\cdot)$ denotes the Incomplete Gamma Function. 
\end{theorem} \begin{remark} \begin{enumerate} \item [i)] As mentioned in Example~\ref{ex1}, in general, one cannot get a better power-law estimate than $O(t^{-2(1-\beta)})$ for all $f \in {\mathrm L}^1(\mathbb{R})$. \item [ii)] If $0 \leq \beta < \frac{1}{2}$ then, by Young's Convolution Inequality, $K_{\beta,f} \cdot g \in {\mathrm L}^2(\mathbb{R})$. Hence, by applying Theorem \ref{Strichartztheorem}-i) to $K_{\beta,f} \cdot g$ and to $\chi_{[0,1]} \, dx$ (which is U$1$H), one gets \begin{equation*} \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} d\mu_{\beta,f,g}(x) \bigg|^2 ds = \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} K_{\beta,f}(x) \, g(x)dx \bigg|^2 ds = O(t^{-1}), \end{equation*} so $\beta=1/2$ is a transition point, which justifies its peculiar behavior in Theorem~\ref{2Stheorem}-ii). \item [iii)] For an arbitrary $h \in {\mathrm L}^1(\mathbb{R})$, it is well known that $\hat{h}(s)$ can decay arbitrarily slowly (see, e.g.,~\cite{Muller}). In this context, for $\frac{1}{2} \leq \beta < 1$, it is particularly interesting that the power-law asymptotic behavior of \begin{eqnarray*} \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} K_{\beta,f}(x) \, g(x)dx \bigg|^2 ds \end{eqnarray*} is inherited from the asymptotic behavior of the Fourier transform of $|\cdot|^{-\beta}$, which depends continuously on $\beta$. \end{enumerate} \end{remark} \section{Finite-rank perturbations of the Laplacian}\label{sectFRPFL} Let $\delta_j = (\delta_{jk})_{k \in \mathbb{N}}$, $j=1,2,\ldots$, be the canonical basis of~$\ell^2(\mathbb{N})$. Consider the Laplacian with Dirichlet boundary condition, whose action on $\psi\in\ell^2(\mathbb{N})$ is \[\triangle\psi(k) = \psi(k+1) + \psi(k-1),\] with $\psi(0)=0$.
Consider also the finite-rank perturbations of the Laplacian with Dirichlet boundary condition, acting in $\ell^2(\mathbb{N})$ by the law \[ \begin{cases} H_0 \psi = -\triangle\psi, \\ H_N \psi = -\triangle\psi + \displaystyle\sum_{j=1}^N v_j \langle \psi, \delta_j \rangle \delta_j,\qquad N \geq 1, \end{cases} \] with $\psi(0)=0$, where $(v_n)_{n \in {\mathbb{N}}}$ is a given real sequence. The study of the dynamical and spectral properties of these operators is a classical subject in spectral theory for at least two reasons: they are relatively simple models, which can be used to discuss results in quantum dynamics while avoiding technical complications, and they can be used as approximations to some Schr\"odinger operators (see, for instance, Section~3 in~\cite{Germinet}). In this context, our main goal here is to study the quantum dynamics of these operators for some initial states whose spectral measures may have power-law singularities. Naturally, some preparation is required. Let $\mu_\psi^{N}$ denote the spectral measure associated with the pair $(H_N,\psi)$ and $\displaystyle\frac{d\mu_\psi^{N}}{dx}$ the Radon-Nikodym derivative of $\mu_\psi^{N}$ with respect to the Lebesgue measure, and let $R_N(z) = (H_N - z)^{-1}$ be the corresponding resolvent operator. It is well known that for every $N \in \mathbb{N} \cup \{0\}$, $H_N$ has purely absolutely continuous spectrum (for details, see \cite{Lasttransfermatrix}). Thus, in this case, $\displaystyle\frac{d\mu_\psi^{N}}{dx}\in {\mathrm L}^1(\mathbb{R})$.
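For $N=0$ the Radon-Nikodym derivative is in fact explicit: the spectral measure of $\delta_1$ for the free half-line operator is the semicircle-type density $\sqrt{4-x^2}/(2\pi)$ on $[-2,2]$ (a standard fact about the free Jacobi matrix, related to the Chebyshev polynomials of the second kind; it is assumed here and is not taken from the text). The following sketch checks this against the explicit eigendata of the $n\times n$ Dirichlet truncation of $H_0$.

```python
import math

# Truncated (n x n) free operator H_0 = -Laplacian with Dirichlet condition:
# eigenvalues -2 cos(pi j/(n+1)) with normalized eigenvectors
# v_j(k) = sqrt(2/(n+1)) sin(pi j k/(n+1))  -- standard and explicit.
n = 20000
total = 0.0
for j in range(1, n + 1):
    lam = -2.0 * math.cos(math.pi * j / (n + 1))
    weight = (2.0 / (n + 1)) * math.sin(math.pi * j / (n + 1)) ** 2  # |<delta_1, v_j>|^2
    if 0.0 <= lam <= 1.0:
        total += weight

# compare with int_0^1 sqrt(4 - x^2)/(2 pi) dx = (sqrt(3)/2 + pi/3)/(2 pi)
exact = (math.sqrt(3) / 2 + math.pi / 3) / (2 * math.pi)
assert abs(total - exact) < 1e-3
```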
Recall that for every $N \in \mathbb{N} \cup \{0\}$, the transfer matrix $T_N(E,n,n-1)$ between the sites $n-1$ and $n$, $n\in\mathbb{N}$, is given by \[T_N(E,n,n-1) =\left(\begin{array}{cc} E-v_n\chi_{[1,N]}(n) & -1\\ 1 & 0\\ \end{array}\right).\] Moreover, if $u_\theta(E,n)$, with $ \theta \in [0,\pi]$, denotes the solution to the eigenvalue equation $H_Nu = Eu$ at $E \in \mathbb{R}$ that satisfies \[u_\theta(E,m) = \sin(\theta), \, \, \, \, u_\theta(E,m+1) = \cos(\theta),\] then \[\left(\begin{array}{cc} u_\theta(E,n+1)\\ u_\theta(E,n)\\ \end{array}\right)= T_N(E,n,m)\left(\begin{array}{cc} u_\theta(E,m+1)\\ u_\theta(E,m)\\ \end{array}\right),\] with $T_N(E,n,m)=T_N(E,n,n-1)\cdots T_N(E,m+1,m)$. We need the following technical result. \begin{lemma}\label{teclemma} Let $N \in \mathbb{N} \cup \{0\}$. Then, there exist constants $C_{1,N},C_{2,N}> 0$ such that, for every $E \in [0,1]$, \[C_{1,N}\, \leq\, \displaystyle\frac{d\mu_{\delta_1}^{N}}{dx}(E)\, \leq\, C_{2,N};\] in particular, $\displaystyle\frac{d\mu_{\delta_1}^{N}}{dx} \in {\mathrm L}^\infty[0,1]$. \end{lemma} \begin{remark}{\rm The result stated in Lemma \ref{teclemma} is expected, since the boundary of the interval $[0,1]$ is far from $\pm 2$, which are the only points where $\displaystyle\frac{d\mu_{\delta_1}^{H_0}}{dx}$ fails to be bounded away from zero and infinity; see \cite{Oliveira} for details on the spectral measure $\mu_{\delta_1}^{H_0}$. Although natural to specialists, we present a proof of this result for the convenience of the reader.} \end{remark} \begin{proof} [{Proof} {\rm (Lemma~\ref{teclemma})}] Let $N\in\mathbb{N}\cup\{0\}$.
It follows from Lemma 3.1 in \cite{Germinet} that there exists $D> 0$ such that for every $E \in [0,1]$, \[\displaystyle\frac{d\mu_{\delta_1}^{N}}{dx}(E) \geq \frac{D}{\|T_N(E,N,0)\|^2}.\] Note that there exists $F_N>0$ such that for each $E\in[0,1]$, $\Vert T_N(E,N,0)\Vert<F_N$ (since $T_N(E,N,0)$ is the product of $N$ matrices whose norms are bounded); thus, \[0 < C_{1,N} := \frac{D}{F_N^2} \leq \inf_{E \in [0,1]}\displaystyle\frac{d\mu_{\delta_1}^{N}}{dx}(E).\] Now, if $n \geq N$, then $T_N(E,n+1,n) = T_0(E,n+1,n)=A(E)$, where \begin{eqnarray*} A(E)=\left(\begin{array}{cc} E&-1\\1&0\end{array}\right). \end{eqnarray*} It is straightforward to show that for $E\in[0,1]$, $A(E)$ is similar to a rotation matrix; thus, there exists $C_N> 0$ such that for each $E \in [0,1]$ and each $n \geq N$, $\|T_N(E,n,0)\|^2 \leq C_N$. Indeed, \[\|T_N(E,n,0)\|=\Vert T_N(E,n,N)\cdots T_N(E,N,0)\Vert\le \Vert A(E)^{n-N}\Vert\cdot\Vert T_N(E,N,0)\Vert;\] since $A(E)^{n-N}$ is similar to a rotation and since for each $E\in[0,1]$, $\Vert T_N(E,N,0)\Vert$ is bounded, the result follows. Now, by Proposition 3.9 in \cite{Lasttransfermatrix}, one has for every $E \in [0,1]$ and every $L\in\mathbb{N}$, \begin{eqnarray*} \mathfrak{Im} \biggr(\int_{\mathbb{R}} \frac{1}{E+i/L - z} d\mu_{\delta_1}^{N}(z) \biggr) \leq (5 + \sqrt{24}) \frac{1}{L} \sum_{n=0}^{L+1} \|T_N(E,n,0)\|^2. \end{eqnarray*} Thus, for every $E \in [0,1]$ and every $L\ge N$, \begin{eqnarray*} \mathfrak{Im} \biggr(\int_{\mathbb{R}} \frac{1}{E+i/L - z} d\mu_{\delta_1}^{N}(z) \biggr) &\leq& (5 + \sqrt{24}) \frac{1}{L} \sum_{n=0}^{L+1} \|T_N(E,n,0)\|^2\\ &=& (5 + \sqrt{24}) \biggr( \frac{1}{L} \sum_{n=0}^{N} \|T_N(E,n,0)\|^2 + \frac{1}{L} \sum_{n=N+1}^{L+1} \|T_N(E,n,0)\|^2 \bigg)\\ &\leq& (5 + \sqrt{24}) \biggr( \frac{1}{L} \sum_{n=0}^{N} \|T_N(E,n,0)\|^2 + \frac{L-N}{L} C_N \bigg).
\end{eqnarray*} By Stone's Formula, it follows that for each $E \in [0,1]$, \begin{eqnarray*} \frac{d\mu_{\delta_1}^{N}}{dx}(E) &=& \frac{1}{\pi} \lim_{ L \to \infty} \mathfrak{Im} \biggl\langle \delta_1, R_N\biggl(E+i\frac{1}{L}\biggr)\delta_1 \biggr\rangle = \frac{1}{\pi} \lim_{ L \to \infty} \mathfrak{Im} \biggr(\int_{\mathbb{R}} \frac{1}{E+i/L - z} d\mu_{\delta_1}^{N}(z) \biggr). \end{eqnarray*} Since $\sum_{n=0}^{N} \|T_N(E,n,0)\|^2$ does not depend on $L$, the term $\frac{1}{L} \sum_{n=0}^{N} \|T_N(E,n,0)\|^2$ vanishes as $L \to \infty$, and one gets \[C_{2,N}:= \frac{C_N}{\pi}(5+\sqrt{24}) \geq \displaystyle\sup_{E \in [0,1]} \frac{d\mu_{\delta_1}^{N}}{dx}(E).\] \end{proof} \subsection{Power-law singularities and quantum dynamics} Recall that, for every $N \in \mathbb{N} \cup \{0\}$, ${\mathbb{R}} \ni t \mapsto e^{-itH_N}$ is a one-parameter strongly continuous unitary evolution group and, for each $\psi\in \ell^2({\mathbb{N}})$, $(e^{-itH_N}\psi)_{t \in \mathbb{R}}$ is the unique solution to the Schr\"odinger equation \[ \begin{cases} \partial_t \psi = -iH_N\psi, \quad t \in {\mathbb{R}}, \\ \psi(0) = \psi\in\ell^2(\mathbb{N}). \end{cases} \] Now, we present a dynamical quantity usually considered to probe the large time behavior of $e^{-itH_N}\psi$, the so-called (time-average) {\em quantum return probability}, which gives the (time-average) probability of finding the particle at time $t>0$ in its initial state $\psi$: \begin{equation*} \frac{1}{t}\int_0^t |\langle \psi, e^{-isH_N} \psi \rangle|^2 \, ds.
\end{equation*} For $0 \leq \beta < 1$ and every $f \in {\mathrm L}^1(\mathbb{R})$, let \[K_{\beta,f} = \{(|\cdot|^{-\beta} \chi_{(0,1]}) \ast f\}.\] Suppose that $f \geq 0$ and set \[\psi_{\beta,f} := \sqrt{K_{\beta,f}}(H_N)\delta_1,\] where each $\sqrt{K_{\beta,f}}(H_N): {\mathrm{dom\,}} (\sqrt{K_{\beta,f}}(H_N)) \subset \ell^2({\mathbb{N}})\rightarrow \ell^2({\mathbb{N}})$ is defined through the functional calculus: for every $\psi\in {\mathrm{dom\,}} (\sqrt{K_{\beta,f}}(H_N))$, one has \[\langle\psi,\sqrt{K_{\beta,f}}(H_N)\psi\rangle=\int \sqrt{K_{\beta,f}(x)}\,d\mu^{H_N}_\psi(x).\] Note that $\delta_1 \in {\mathrm{dom\,}} (\sqrt{K_{\beta,f}}(H_N))$, since $\sqrt{K_{\beta,f}} \in {\mathrm L}^2(\mathbb{R},d\mu^{H_N}_{\delta_1})$; if $\displaystyle\int_0^1 f(x) \, dx = 1$, then for every $x \in (0,1)$, $\displaystyle\lim_{\beta \downarrow 0} \sqrt{K_{\beta,f}}(x) = 1$, by dominated convergence; thus, \[\displaystyle\lim_{\beta \downarrow 0} \psi_{\beta,f} = \delta_1.\] Our next result describes the behavior of the (time-average) {\em quantum return probability} of the initial states $\psi_{\beta,f}$. \begin{theorem}\label{maintheorem} Let $\psi_{\beta,f}$ be as above, with $0\le f \in {\mathrm L}^1(\mathbb{R})$. 
Then: \begin{enumerate} \item[{\rm i)}] if $0\leq \beta < \frac{1}{2}$, for every $t>0$, \begin{eqnarray*} \frac{1}{t}\int_0^t |\langle \psi_{\beta,f}, e^{-isH_N} \psi_{\beta,f} \rangle|^2 \, ds \leq \frac{20\pi \|f\|_{{\mathrm L}^1(\mathbb{R})}^2}{(1-2\beta)} \biggr\|\frac{d\mu_{\delta_1}^{N}}{dx}\biggr\|_{{\mathrm L}^\infty[0,1]}^2 t^{-1}; \end{eqnarray*} \item[{\rm ii)}] if $\frac{1}{2}< \beta <1$, for every $t>0$, \begin{eqnarray*} \frac{1}{t}\int_0^t |\langle \psi_{\beta,f}, e^{-isH_N} \psi_{\beta,f} \rangle|^2 \, ds &\leq& \frac{\Gamma(\beta - 1/2) 2^{18} e^{2\pi} {M_\beta} \|f\|_{{\mathrm L}^1(\mathbb{R})}^2 \biggr\|\frac{d\mu_{\delta_1}^{N}}{dx}\biggr\|_{{\mathrm L}^\infty[0,1]}^2 }{(2\pi)^{2(\beta-1)}} \; t^{-2(1-\beta)}; \end{eqnarray*} \item[{\rm iii)}] if $\beta = \frac{1}{2}$, for every $t>0$, \begin{eqnarray*} \frac{1}{t}\int_0^t |\langle \psi_{\beta,f}, e^{-isH_N} \psi_{\beta,f} \rangle|^2 \, ds &\leq& 4 \pi^2 e^{2\pi} \|f\|_{{\mathrm L}^1(\mathbb{R})}^2 \biggr\|\frac{d\mu_{\delta_1}^{N}}{dx}\biggr\|_{{\mathrm L}^\infty[0,1]}^2\biggr[ \frac{1}{t} + M_{\frac{1}{2}} \frac{\Gamma(0, 16\pi^4/t^2)}{t} \biggr], \end{eqnarray*} in particular, for sufficiently large $t$, \begin{eqnarray*} \frac{1}{t}\int_0^t |\langle \psi_{\beta,f}, e^{-isH_N} \psi_{\beta,f} \rangle|^2 \, ds &\leq& 4 \pi^2 e^{2\pi} \|f\|_{{\mathrm L}^1(\mathbb{R})}^2 \biggr\|\frac{d\mu_{\delta_1}^{N}}{dx}\biggr\|_{{\mathrm L}^\infty[0,1]}^2\biggr[ \frac{1}{t} + 3M_{\frac{1}{2}} \frac{\log(t)}{t} \biggr]. \end{eqnarray*} \end{enumerate} \end{theorem} \begin{remark}{\rm By applying Theorem \ref{2Stheorem}, in general, one can extend the above result to families of Schr\"odinger operators for which the Radon-Nikodym derivative of each spectral measure (with respect to the Lebesgue measure) is bounded (see the proof of Theorem~\ref{maintheorem}).} \end{remark} We revisit Example \ref{ex1}, but now taking into account Theorem~\ref{maintheorem}.
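Before revisiting the example, here is a numerical sanity check of the $O(t^{-1})$ regime of item i) in the borderline case $\psi=\delta_1$, $N=0$ (a sketch assuming the standard semicircle density $\sqrt{4-x^2}/(2\pi)$ for $d\mu_{\delta_1}^{0}/dx$ on $[-2,2]$, a known fact about the free half-line operator that is not taken from the text): by the Spectral Theorem, $\langle\delta_1,e^{-isH_0}\delta_1\rangle=\int e^{-isx}\,d\mu_{\delta_1}^{0}(x)$, and the time-averaged return probability should roughly halve when $t$ doubles.

```python
import cmath, math

def c(s, m=600):
    """Return amplitude <delta_1, e^{-i s H_0} delta_1>, computed from the
    (assumed) density sqrt(4 - x^2)/(2 pi) on [-2, 2] via x = 2 cos(theta)."""
    h = math.pi / m
    total = 0j
    for k in range(m):
        th = (k + 0.5) * h
        total += cmath.exp(-2j * s * math.cos(th)) * math.sin(th) ** 2
    return (2.0 / math.pi) * total * h

def A(t):
    """Time-averaged return probability (1/t) int_0^t |c(s)|^2 ds (midpoint rule)."""
    steps = int(30 * t)
    h = t / steps
    return sum(abs(c((k + 0.5) * h)) ** 2 for k in range(steps)) * h / t

# O(1/t) decay: doubling t should halve the average, up to small corrections
r = A(30.0) / A(60.0)
assert 1.6 < r < 2.4
```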
\begin{example}\label{ex2} {\rm Let $\frac{1}{2} \leq \beta<1$ and $0<\delta<1-\beta$. Let $f_\delta \in {\mathrm L}^1(\mathbb{R})$ with $\displaystyle\int_0^1 f_\delta(x) dx =1$ and suppose that there exists $C>0$ so that, for every $w \in (0,1]$, \[f_\delta(w) \geq \frac{C}{w^{1-\delta}}.\] Hence, since $f_\delta \geq 0$, for every $0< w \leq 1$, \begin{eqnarray*} K_{\beta,f_\delta}(w) &\geq& C\int_0^w \frac{1}{y^\beta |w-y|^{1-\delta}} \, dy \geq \frac{C}{w^\beta} \int_0^w \frac{1}{|w-y|^{1-\delta}} \, dy \geq C w^{- \beta+\delta}. \end{eqnarray*} Thus, for every $0< \epsilon < 1$, \[ \mu_{\psi_{\beta,f_\delta}}((0,\epsilon))=\int_0^\epsilon K_{\beta,f_\delta}(w) \frac{d\mu_{\delta_1}^{N}}{dx}(w) \, dw \geq C C_{1,N} \int_0^\epsilon w^{-\beta+\delta} dw = C C_{1,N} \frac{\epsilon^{(1-\beta+ \delta)} }{(1-\beta+ \delta)},\] with $C_{1,N}$ given by Lemma \ref{teclemma}. Therefore, in this case, \[ d\mu_{\psi_{\beta,f_\delta}} = K_{\beta,f_\delta} \frac{d\mu_{\delta_1}^{N}}{dx} \, dx\] is at most {\rm U}$(1-{\beta}+ \delta)${\rm H}, and so, by Theorem~\ref{Strichartztheorem}-i), one can say at most that \begin{eqnarray*}\frac{1}{t}\int_0^t |\langle \psi_{\beta,f_\delta}, e^{-isH_N} \psi_{\beta,f_\delta} \rangle|^2 \,ds = O(t^{-(1-\beta)-\delta}). \end{eqnarray*} Nonetheless, it follows from Theorem \ref{maintheorem} that, for every $\frac{1}{2} < \beta < 1$, \begin{equation}\label{eqex2} \frac{1}{t}\int_0^t \vert\langle \psi_{\beta,f_\delta}, e^{-isH_N} \psi_{\beta,f_\delta} \rangle\vert^2 \, ds = O(t^{-2(1-\beta)}) \end{equation} and \begin{equation}\label{eq9999} \frac{1}{t}\int_0^t \vert\langle \psi_{1/2,f_\delta}, e^{-isH_N} \psi_{1/2,f_\delta} \rangle\vert^2 \, ds = O(\log(t)/t). \end{equation} We observe that the above rates are power-law optimal.
Namely, if there exists $\varepsilon>0$ so that one can replace $t^{-2(1-\beta)}$ by $t^{-2(1-\beta)-\varepsilon}$ in~\eqref{eqex2} or (\ref{eq9999}), then, by Theorem \ref{Strichartztheorem}-ii), $\mu_{\psi_{\beta,f_\delta}}$ will be at least uniformly $(1-\beta+\varepsilon/4)$-H\"older; for $0<\delta< \varepsilon/4$, one gets a contradiction.} \end{example} \begin{remark}{\rm For each $0<\delta<1-\beta$, Example \ref{ex2} presents a family of initial states, $\psi_{\beta,f_\delta}$, such that $\displaystyle\lim_{\beta \downarrow 0} \psi_{\beta,f_\delta} = \delta_1$ and for which the corresponding (time-average) quantum return probabilities depend continuously on the power-law growth rates of the singularities of the respective spectral measures.} \end{remark} \begin{proof}[{Proof} {\rm (Theorem~\ref{maintheorem})}] i) It follows from the Spectral Theorem that for every $t>0$, \begin{eqnarray*} \frac{1}{t}\int_0^t |\langle \psi_{\beta,f}, e^{-isH_N} \psi_{\beta,f} \rangle|^2 \, ds &=& \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-isy} K_{\beta,f}(y) d\mu_{\delta_1}^{N}(y) \bigg|^2 ds \\ &=& \frac{2\pi}{t} \int_0^{\frac{t}{2\pi}} \bigg|\int_{\mathbb{R}} e^{- 2\pi isy} K_{\beta,f}( y) \frac{d\mu_{\delta_1}^{N}}{dx}(y)dy \bigg|^2 ds. \end{eqnarray*} Since $0\leq \beta < \frac{1}{2}$, by Young's Convolution Inequality and Lemma \ref{teclemma}, $K_{\beta,f} \frac{d\mu_{\delta_1}^{N}}{dx} \in {\mathrm L}^2(\mathbb{R})$.
Thus, by Theorem \ref{Strichartztheorem}-i) applied to $d\mu=\chi_{[0,1]} dy$ and to the function $K_{\beta,f}( y) \frac{d\mu_{\delta_1}^{N}}{dx}( y)$, one obtains, for every $t>0$, \begin{eqnarray*} \frac{1}{t}\int_0^t |\langle \psi_{\beta,f}, e^{-isH_N} \psi_{\beta,f} \rangle|^2 \, ds &=& \frac{2\pi}{t} \int_0^{\frac{t}{2\pi}} \bigg|\int_{\mathbb{R}} e^{- 2\pi isy} K_{\beta,f}( y) \frac{d\mu_{\delta_1}^{N}}{dx}(y)dy \bigg|^2 ds \\ &\leq& 10 \|K_{\beta,f}\|_{{\mathrm L}^2(\mathbb{R})}^2 \biggr\|\frac{d\mu_{\delta_1}^{N}}{dx}\biggr\|_{{\mathrm L}^\infty[0,1]}^2 2\pi t^{-1} \\ &\leq &\frac{20\pi \|f\|_{{\mathrm L}^1(\mathbb{R})}^2}{(1-2\beta)} \biggr\|\frac{d\mu_{\delta_1}^{N}}{dx}\biggr\|_{{\mathrm L}^\infty[0,1]}^2 t^{-1}. \end{eqnarray*} We remark that if $d\mu = \chi_{[0,1]} dx$, then one can choose $C_\mu =10$ in Theorem \ref{Strichartztheorem}-i); for details, see page~416 in \cite{Last}. \noindent ii) and iii) Let $\frac{1}{2}\leq \beta < 1$; since for every $t>0$, \begin{eqnarray*} \frac{1}{t}\int_0^t |\langle \psi_{\beta,f}, e^{-isH_N} \psi_{\beta,f} \rangle|^2 \, ds = \frac{2\pi}{t} \int_0^{\frac{t}{2\pi}} \bigg|\int_{\mathbb{R}} e^{-2\pi isy} d\mu_{\beta,f,g}(y) \bigg|^2 ds, \end{eqnarray*} where $g = \displaystyle\frac{d\mu_{\delta_1}^{N}}{dx}$, ii) and iii) are direct consequences of Lemma \ref{teclemma} and Theorem \ref{2Stheorem}. \end{proof} \section{Proof of Theorem \ref{2Stheorem}}\label{sectProofMain} \begin{lemma}\label{mainlemma} Let $ K_{\beta,f}$ be as before. \begin{enumerate} \item[{\rm i)}] If $\frac{1}{2} < \beta < 1$, then, for every $t>0$, one has \begin{eqnarray*} \bigg| \int_{\mathbb{R}} \int_{\mathbb{R}} e^{-\pi t^2|x-y|^2} K_{\beta,f}(x) \overline{K_{\beta,f}} (y) \, dx dy \bigg| \; \leq \; {M_\beta} \Gamma(\beta - 1/2) \|f\|_{{\mathrm L}^1(\mathbb{R})}^2\; t^{-2(1-\beta)}.
\end{eqnarray*} \item[{\rm ii)}] If $\beta = \frac{1}{2}$, then, for every $t>0$, one has \begin{eqnarray*} \bigg| \int_{\mathbb{R}} \int_{\mathbb{R}} e^{-\pi t^2|x-y|^2} K_{\beta,f}(x) \overline{K_{\beta,f}} (y) \, dx dy \bigg| \; \leq \; \frac{\pi\|f\|_{{\mathrm L^1}(\mathbb{R})}^2}{t} + M_{\frac{1}{2}} \|f\|_{{\mathrm L^1}(\mathbb{R})}^2 \frac{\Gamma(0, \pi/t^2)}{t}, \end{eqnarray*} and recall that $M_\beta$ is given by~\eqref{eqMbeta}. \end{enumerate} \end{lemma} \begin{proof} Let $\frac{1}{2} \leq \beta <1$ and let $(K_n)_{n \in \mathbb{N}} \subset {\mathrm L}^1(\mathbb{R}) \cap {\mathrm L}^2(\mathbb{R})$ be so that $\displaystyle\lim_{n \to \infty}\|K_n - K_{\beta,f}\|_{{\mathrm L}^1(\mathbb{R})} = 0$. Then, by Theorem 4.9 in \cite{Brezis}, there exist a subsequence $(K_{n_k})$ and a function $h \in {\mathrm L}^1(\mathbb{R})$ such that $\displaystyle\lim_{k \to \infty} K_{n_k}(x) = K_{\beta,f}(x)$ for almost every $x \in \mathbb{R}$, and for every $k \geq 1$, $ |K_{n_k}(x)| \leq h(x)$ for almost every $x \in \mathbb{R}$. We note that for each $t >0$, each $k \geq 1$ and each $\xi \in \mathbb{R}$, \[ e^{- \frac{\pi |\xi|^2}{t^2}} |\widehat{K_{n_k}}(\xi)| \leq e^{- \frac{\pi |\xi|^2}{t^2}} \|K_{n_k}\|_{{\mathrm L}^1(\mathbb{R})} \leq e^{- \frac{\pi |\xi|^2}{t^2}} \|h\|_{{\mathrm L}^1(\mathbb{R})},\] and for each $t >0$, \[\int_{\mathbb{R}} e^{- \frac{\pi |\xi|^2}{t^2}} d\xi = t.\] This shows that, for every $t>0$, the sequence $e^{- \frac{\pi |\xi|^2}{t^2}} |\widehat{K_{n_k}}(\xi)|$ is dominated by an integrable function. Set, for each $t>0$ and each $x \in \mathbb{R}$, $\Phi_t(x) := e^{-\pi t^2|x|^2}$. Then, for each $t>0$, \begin{equation}\label{eq0lemma} \widehat{\Phi_t}(\xi) = \frac{1}{t} e^{- \frac{\pi |\xi|^2}{t^2}}, \quad \xi \in \mathbb{R}.
\end{equation} It follows from the identity in (\ref{eq0lemma}), some basic properties of the Fourier transform, dominated convergence and Plancherel's Theorem that for each $y \in \mathbb{R}$ and each $t >0$, \begin{eqnarray}\label{eq00lemma}\nonumber \int_{\mathbb{R}} e^{-\pi t^2|x-y|^2} K_{\beta,f}(x) \, dx &=& \lim_{k \to \infty} \int_{\mathbb{R}} e^{-\pi t^2|x-y|^2} K_{n_k} (x) \, dx \nonumber = \lim_{k \to \infty} \int_{\mathbb{R}} \overline{(\tau_y \Phi_t)(x)} K_{n_k} (x)\, dx \\ \nonumber &=& \lim_{k \to \infty} \int_{\mathbb{R}} \overline{\widehat{(\tau_y \Phi_t)}(\xi)} \widehat{K_{n_k}}(\xi) \, d\xi = \lim_{k \to \infty} \int_{\mathbb{R}} e^{2\pi i y \xi} \widehat{\Phi_t}(\xi) \widehat{K_{n_k}}(\xi) \, d\xi \\ \nonumber &=& \lim_{k \to \infty} \frac{1}{t} \int_{\mathbb{R}} e^{2\pi i y \xi} e^{- \frac{\pi |\xi|^2}{t^2}} \widehat{K_{n_k}}(\xi) \, d\xi\\ &=& \frac{1}{t} \int_{\mathbb{R}} e^{2\pi i y \xi} e^{- \frac{\pi |\xi|^2}{t^2}} \widehat{K_{\beta,f}}(\xi) \, d\xi, \end{eqnarray} where $\tau_yf(\cdot) = f(\cdot-y)$ stands for the translation by $y\in\mathbb{R}$. Thus, by Fubini's Theorem, one obtains for each $t >0$, \begin{eqnarray}\label{eq1lemma} \nonumber \int_{\mathbb{R}} \int_{\mathbb{R}} e^{-\pi t^2|x-y|^2} K_{\beta,f}(x) \overline{K_{\beta,f}} (y) \, dx dy &=& \frac{1}{t} \int_{\mathbb{R}} \int_{\mathbb{R}} e^{2\pi i y \xi} \overline{K_{\beta,f}} (y) \, dy \, e^{- \frac{\pi |\xi|^2}{t^2}} \widehat{K_{\beta,f}}(\xi) \, d\xi\\ &=& \frac{1}{t} \int_{\mathbb{R}} e^{- \frac{\pi |\xi|^2}{t^2}} |\widehat{K_{\beta,f}}(\xi)|^2 \, d\xi. 
\end{eqnarray} Now, by the Convolution Theorem, it follows that for each $\xi >0$, \begin{eqnarray}\label{eq88888} \nonumber \widehat{K_{\beta,f}}(\xi) &=& \biggl\{\int_0^1 e^{-2\pi i x \xi } x^{-\beta} dx \biggl\} \, \hat{f}(\xi) = \biggl\{\frac{1}{(2\pi)^{1-\beta}\xi^{1-\beta}} \int_0^1 e^{-2\pi i x \xi } (2 \pi x \xi)^{-\beta} (2\pi \xi)dx \biggl\} \, \hat{f}(\xi)\\ &=& \biggl\{\frac{1}{(2\pi)^{1-\beta}\xi^{1-\beta}} \int_0^{2\pi\xi} e^{- iu} u^{-\beta} du \biggl\} \, \hat{f}(\xi). \end{eqnarray} One also has, for each $\xi <0$, that \begin{eqnarray*} \widehat{K_{\beta,f}}(\xi) = \biggl\{\frac{1}{(2\pi)^{1-\beta}(-\xi)^{1-\beta}} \overline{\int_0^{-2\pi\xi} e^{-iu} u^{-\beta} du} \biggl\} \, \hat{f}(\xi). \end{eqnarray*} Thus, for $\xi \neq 0$, \begin{eqnarray}\label{eq2lemma} |\widehat{K_{\beta,f}}(\xi)|^2 \leq {M_\beta} \, \frac{\|f\|^2_{{\mathrm L^1}(\mathbb{R})}}{|\xi|^{2(1-\beta)}}. \end{eqnarray} Now a separate argument is necessary for each item. i) $1/2<\beta<1$. Since it follows from the change of variables $u = \pi\xi^2/t^2$ and the definition of the Gamma function that for every $t >0$ \begin{equation}\label{eq01} \int_{\mathbb{R}} \frac{e^{- \frac{\pi |\xi|^2}{t^2}} }{|\xi|^{2(1-\beta)} } \, d\xi = \pi^{1/2-\beta}\Gamma\biggr(\beta - \frac{1}{2}\biggr) t^{2\beta-1} \leq \Gamma\biggr(\beta - \frac{1}{2}\biggr) t^{2\beta-1} , \end{equation} one gets from (\ref{eq1lemma}) and (\ref{eq2lemma}) that for every $t >0$, \begin{eqnarray*} \int_{\mathbb{R}} \int_{\mathbb{R}} e^{-\pi t^2|x-y|^2} K_{\beta,f}(x) \overline{K_{\beta,f}} (y) \, dx dy &=& \frac{1}{t} \int_{\mathbb{R}} e^{- \frac{\pi |\xi|^2}{t^2}} |\widehat{K_{\beta,f}}(\xi)|^2 \, d\xi \\ &\leq& \frac{{M_\beta} \|f\|_{{\mathrm L^1}(\mathbb{R})}^2}{t} \int_{\mathbb{R}} \frac{e^{- \frac{\pi |\xi|^2}{t^2}} }{|\xi|^{2(1-\beta)} } \, d\xi\\ &\leq& \Gamma(\beta - 1/2) {M_\beta} \|f\|_{{\mathrm L^1}(\mathbb{R})}^2 t^{-2(1-\beta)}. \end{eqnarray*} ii) $\beta=1/2$.
By (\ref{eq88888}), for every $t> 0$ \begin{eqnarray*} \int_{-1}^1 e^{- \frac{\pi |\xi|^2}{t^2}} |\widehat{K_{\beta,f}}(\xi)|^2 \, d\xi &\leq& 2\|f\|_{{\mathrm L^1}(\mathbb{R})}^2 \int_0^1 e^{- \frac{\pi |\xi|^2}{t^2}} \frac{1}{\xi} \biggr(\int_0^{2\pi\xi} \frac{1}{\sqrt{u}} \, du \biggr)^2 \, d\xi \\ &=& \pi \|f\|_{{\mathrm L^1}(\mathbb{R})}^2 \int_{0}^1 e^{- \frac{\pi |\xi|^2}{t^2}} \, d\xi \leq \pi\|f\|_{{\mathrm L^1}(\mathbb{R})}^2. \end{eqnarray*} Since, by the change of variables $u = \pi\xi^2/t^2$, for every $t >0$ \[\int_1^\infty \frac{e^{- \frac{\pi |\xi|^2}{t^2}} }{\xi} \, d\xi = \frac{\Gamma(0, \pi/t^2)}{2},\] one gets from (\ref{eq1lemma}) and (\ref{eq2lemma}) that for every $t >0$, \begin{eqnarray*} \int_{\mathbb{R}} \int_{\mathbb{R}} e^{-\pi t^2|x-y|^2} K_{\beta,f}(x) \overline{K_{\beta,f}} (y) \, dx dy &=& \frac{1}{t} \int_{\mathbb{R}} e^{- \frac{\pi |\xi|^2}{t^2}} |\widehat{K_{\beta,f}}(\xi)|^2 \, d\xi \\ &=& \frac{1}{t} \int_{-1}^1 e^{- \frac{\pi |\xi|^2}{t^2}} |\widehat{K_{\beta,f}}(\xi)|^2 \, d\xi + \frac{1}{t} \int_{|\xi|> 1} e^{- \frac{\pi |\xi|^2}{t^2}} |\widehat{K_{\beta,f}}(\xi)|^2 \, d\xi\\ &\leq& \frac{1}{t} \int_{-1}^1 e^{- \frac{\pi |\xi|^2}{t^2}} |\widehat{K_{\beta,f}}(\xi)|^2 \, d\xi + \frac{ 2M_{\frac{1}{2}} \|f\|_{{\mathrm L^1}(\mathbb{R})}^2}{t} \int_1^\infty \frac{e^{- \frac{\pi |\xi|^2}{t^2}} }{\xi} \, d\xi\\ &\leq& \frac{\pi\|f\|_{{\mathrm L^1}(\mathbb{R})}^2}{t} + M_{\frac{1}{2}} \|f\|_{{\mathrm L^1}(\mathbb{R})}^2 \frac{\Gamma(0, \pi/t^2)}{t}. \end{eqnarray*} \end{proof} \begin{remark}{\rm Since the integral $\displaystyle\int_{\mathbb{R}} \frac{e^{- \frac{\pi |\xi|^2}{t^2}} }{|\xi| } \, d\xi$ does not converge, a separate argument is necessary for the case $\beta = \frac{1}{2}$.} \end{remark} \begin{proof} [{Proof} {\rm (Theorem~\ref{2Stheorem})}] Note that the proof of Theorem~\ref{2Stheorem} is a consequence of Lemma~\ref{mainlemma} and Fubini's Theorem.
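As an aside (a numerical sanity check, not part of the proofs), the two Gaussian integrals invoked in Lemma~\ref{mainlemma}, namely \eqref{eq01} and $\int_1^\infty \xi^{-1} e^{-\pi\xi^2/t^2}\,d\xi = \Gamma(0,\pi/t^2)/2$, can be verified by direct quadrature; the helper `midpoint` and the chosen values of $\beta$ and $t$ are arbitrary test choices.

```python
import math

def midpoint(f, a, b, n=200000):
    # composite midpoint rule for int_a^b f(x) dx
    d = (b - a) / n
    return sum(f(a + (k + 0.5) * d) for k in range(n)) * d

beta, t = 0.75, 2.0   # arbitrary test values with 1/2 < beta < 1

# int_R |xi|^(2 beta - 2) e^{-pi xi^2/t^2} d xi
#   = pi^(1/2 - beta) Gamma(beta - 1/2) t^(2 beta - 1)
lhs1 = 2 * midpoint(lambda x: x ** (2 * beta - 2)
                    * math.exp(-math.pi * x * x / t ** 2), 0.0, 40.0)
rhs1 = math.pi ** (0.5 - beta) * math.gamma(beta - 0.5) * t ** (2 * beta - 1)

# int_1^inf e^{-pi xi^2/t^2}/xi d xi = Gamma(0, pi/t^2)/2,
# with Gamma(0, x) = int_x^inf e^{-u}/u du (upper incomplete gamma)
lhs2 = midpoint(lambda x: math.exp(-math.pi * x * x / t ** 2) / x, 1.0, 40.0)
rhs2 = 0.5 * midpoint(lambda u: math.exp(-u) / u, math.pi / t ** 2, 60.0)

print(lhs1, rhs1)
print(lhs2, rhs2)
```

The first comparison is accurate only up to the quadrature error near the integrable singularity at the origin; the second is essentially exact.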
We present details of the proof of item~i): by the linearity of the convolution product, we may assume without loss of generality that $\|f\|_{{\mathrm L^1}(\mathbb{R})} \leq 1 $ and $ \|g\|_{{\mathrm L}^\infty[0,1]} \leq 1$. We divide this proof into two cases. \ \noindent {\bf Case 1:} $f,g$ are nonnegative real-valued functions. By Fubini's Theorem, one has, for every $t>0$, \begin{eqnarray}\label{maineq1} \nonumber \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} K_{\beta,f}(x) g(x) \, dx \bigg|^2 ds &\leq& \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} K_{\beta,f}(x) g(x) \, dx \bigg|^2 \, e^{2\pi-(2\pi s)^2/t^2} \, ds\\ \nonumber &\leq& \frac{e^{2\pi}}{t} \int_{-\infty}^{\infty} \bigg|\int_{\mathbb{R}} e^{-2\pi isx} K_{\beta,f}(x) g(x) \, dx \bigg|^2 \, e^{-(2\pi s)^2/t^2} \, ds\\ \nonumber &=& \frac{e^{2\pi}}{t} \int_{\mathbb{R}} \int_{\mathbb{R}} K_{\beta,f}(x) g(x) \overline{K_{\beta,f}(y)} \, \overline{g(y)}\\ \nonumber &\times& \biggl\{ \int_{-\infty}^{\infty} \, e^{-((2\pi s)^2/t^2)-2\pi is(x-y)} \, ds \biggl\} dxdy\\ \nonumber &=& \frac{e^{2\pi} \sqrt{\pi}}{2 \pi} \int_{\mathbb{R}} \int_{\mathbb{R}} K_{\beta,f}(x) g(x)\, K_{\beta,f}(y) g(y) e^{-\frac{t^2|x-y|^2}{4}} dx dy \\ \nonumber &\leq& \frac{e^{2\pi} \sqrt{\pi}}{2 \pi} \int_{\mathbb{R}} \int_{\mathbb{R}} K_{\beta,f}(x) K_{\beta,f}(y) e^{-\frac{t^2|x-y|^2}{4}} dx dy \\ &=& \frac{e^{2\pi} \sqrt{\pi}}{2 \pi} \int_{\mathbb{R}} \int_{\mathbb{R}} K_{\beta,f}(x) K_{\beta,f}(y) e^{-\pi (t/2\sqrt{\pi})^2|x-y|^2} dx dy . \end{eqnarray} It then follows from (\ref{maineq1}) combined with Lemma \ref{mainlemma} i) that, for every $t>0$, \begin{eqnarray*} \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} K_{\beta,f}(x) g(x) \, dx \bigg|^2 ds &\leq& \frac{\Gamma(\beta - 1/2) e^{2\pi} {M_\beta} \sqrt{\pi}}{2^{2\beta-1} \pi^\beta}\; t^{-2(1-\beta)}. \end{eqnarray*} \noindent {\bf Case 2:} $f,g$ are complex valued. This case is a direct consequence of {\bf Case 1}. 
Namely, by the linearity of the convolution product, by the inequality $(a+b)^2 \leq 2(a^2+b^2)$, $a,b>0,$ and by the identity \begin{eqnarray*} K_{\beta,f} \cdot g&=& \biggr\{K_{\beta,\mathfrak{Re}(f)^+} - K_{\beta,\mathfrak{Re}(f)^-} + i\biggr(K_{\beta,\mathfrak{Im}(f)^+} - K_{\beta,\mathfrak{Im}(f)^-}\biggr)\biggr\}\\ &\times& \biggr\{\mathfrak{Re}(g)^+ - \mathfrak{Re}(g)^- + i\biggr(\mathfrak{Im}(g)^+ - \mathfrak{Im}(g)^-\biggr)\biggr\}, \end{eqnarray*} it follows that, for every $t>0$, \begin{eqnarray*} \frac{1}{t} \int_0^t \bigg|\int_{\mathbb{R}} e^{-2\pi isx} K_{\beta,f}(x) g(x) \, dx \bigg|^2 ds &\leq& \frac{\Gamma(\beta - 1/2) 2^{16}e^{2\pi} {M_\beta} \sqrt{\pi}}{2^{2\beta-1} \pi^\beta} \; t^{-2(1-\beta)}. \end{eqnarray*} \end{proof} \begin{center} {\Large Acknowledgments} \end{center} \addcontentsline{toc}{section}{Acknowledgments} \noindent M. Aloisio thanks the partial support by CAPES (a Brazilian government agency; Finance Code 001). S. L. Carvalho thanks the partial support by FAPEMIG (Minas Gerais state agency; under contract 001/17/CEX-APQ-00352-17) and C. R. de Oliveira thanks the partial support by CNPq (a Brazilian government agency, under contract 303689/2021-8). \noindent Email: [email protected], Departamento de Matem\'atica, UFAM, Manaus, AM, 369067-005 Brazil \noindent Email: [email protected], Departamento de Matem\'atica, UFMG, Belo Horizonte, MG, 30161-970 Brazil \noindent Email: [email protected], Departamento de Matem\'atica, UFSCar, S\~ao Carlos, SP, 13560-970 Brazil \noindent Email: [email protected], Departamento de Matem\'atica, UFAM \& UEA, Manaus, AM, 369067-005 Brazil \end{document}
\begin{document} \def\vec#1{\bm{#1}} \title{Quantum-router: Storing and redirecting light at the photon level} \author{Martin C. Korzeczek and Daniel Braun} \affiliation{Eberhard-Karls-Universit\"at T\"ubingen, Institut f\"ur Theoretische Physik, 72076 T\"ubingen, Germany} \date{\today} \begin{abstract} We propose a method for spatially re-routing single photons or light in a coherent state with small average photon number by purely electronic means, i.e.~without using mechanical devices such as micro-mirror arrays. The method is based on mapping the quantum state of the incoming light onto a spin-wave in an atomic ensemble, as is done in quantum memories of light. Then the wavevector of the spin-wave is modified in a controlled way by an applied magnetic field gradient. Finally, by re-applying the same control beam as for storing, the signal pulse is released in a new direction that depends on the deflected wavevector of the spin-wave. We show by numerical simulation that, for arbitrary deflection angles in the plane, efficiencies can be achieved that are comparable with those of simple photon storage and re-emission in the forward direction, and we propose a new method for eliminating the stored momentum as a source of decoherence in the quantum memory. In a reasonable parameter regime, the re-routing should be achievable on a time-scale on the order of a few to $\sim100$ microseconds, depending on the deflection angle. The shifts in the wavevector that can be achieved using the Zeeman effect, with otherwise minimal changes to the spin-wave, can also be used to complement existing ac-Stark spin-wave manipulation methods. \end{abstract} \maketitle \setcounter{page}{0} \pagenumbering{arabic} \section{Introduction} Light is a natural carrier for information, both classical and quantum, due to its large speed, relatively weak interaction with matter, and the possibility to guide light through optical fibers.
The weak interaction, on the other hand, motivates the development of light-matter interfaces, such that quantum information can be stored and processed in other systems. It is well known that the efficiency with which light can be stored in matter can be increased by using an ensemble of atoms. The coupling constant relevant for the absorption of a single photon then increases $\propto\sqrt{N}$ with the number $N$ of atoms. It is nevertheless challenging to coherently absorb, store, and release again a single photon with an ensemble of atoms. A number of techniques have been developed to that end over the years, such as electromagnetically induced transparency (EIT), slow light (for a review see \cite{lvovsky_optical_2009}), controlled reversible inhomogeneous broadening (CRIB) \cite[and 14-15 therein]{sangouard_analysis_2007}, and atomic frequency combs (AFC). In the latter, the distribution of atomic density over detuning has a comb-like structure, leading to multimode capacity. Even photon pairs have been coherently stored and released again, keeping part of their initial entanglement \cite{tiranov_temporal_2016}, as required by the DLCZ protocol of entanglement swapping for long distance quantum communication \cite{duan_long-distance_2001}. A basic working principle of these memory schemes is the storage of phase information of the incoming mode in a collective atomic excitation, such as a spin-wave, where each atom contributes part of the excitation with a well defined phase. Ideally, the phase relations remain intact during the storage time, a requirement that can be achieved to a high degree by using hyperfine spin states that decohere very slowly.\\ Most of the previous work has focused on improving the storage of the photon as measured by fidelity, bandwidth, and storage time, or on realizing quantum operations and mode multiplexing. In the present work we are interested in another aspect: the control of the directionality of the emitted pulse.
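The $\sqrt{N}$ scaling can be made concrete in a toy single-excitation model (an illustrative sketch with hypothetical parameters, not a calculation from this paper): one photon mode coupled with strength $g$ to each of $N$ two-level atoms exchanges its excitation with the symmetric collective atomic state at the enhanced Rabi frequency $\sqrt{N}g$, so full transfer occurs at $t = \pi/(2\sqrt{N}g)$.

```python
import math

def evolve(N=40, g=0.02, steps=2000):
    # Single-excitation sector: index 0 = photon present (all atoms in |g>),
    # index i = 1..N = photon absorbed, atom i excited; H[0][i] = H[i][0] = g.
    t_swap = math.pi / (2.0 * math.sqrt(N) * g)   # expected full-transfer time
    dt = t_swap / steps
    psi = [0j] * (N + 1)
    psi[0] = 1.0 + 0j

    def H_apply(v):
        out = [0j] * (N + 1)
        out[0] = g * sum(v[1:])
        for i in range(1, N + 1):
            out[i] = g * v[0]
        return out

    def deriv(v):                                  # d psi / dt = -i H psi
        return [-1j * x for x in H_apply(v)]

    for _ in range(steps):                         # standard RK4 integrator
        k1 = deriv(psi)
        k2 = deriv([p + 0.5 * dt * a for p, a in zip(psi, k1)])
        k3 = deriv([p + 0.5 * dt * a for p, a in zip(psi, k2)])
        k4 = deriv([p + dt * a for p, a in zip(psi, k3)])
        psi = [p + dt / 6.0 * (a + 2 * b + 2 * c + d)
               for p, a, b, c, d in zip(psi, k1, k2, k3, k4)]
    return psi

psi = evolve()
# At t = pi/(2 sqrt(N) g) the photon amplitude has been transferred to the
# symmetric atomic excitation (photon population ~ 0, atomic population ~ 1).
print(abs(psi[0]) ** 2, sum(abs(a) ** 2 for a in psi[1:]))
```

Doubling $N$ shortens the transfer time by a factor $\sqrt{2}$, which is exactly the collective enhancement referred to above.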
As was noted in \cite{scully_directed_2006}, the phases of the individual atomic contributions in the spin wave are such that the signal is re-emitted in exactly the same direction as in which it was absorbed. This gives the intuition that the directionality for collective emission is encoded in Hilbert-space phases and can be controlled by manipulating these phases prior to emission. Indeed, from \cite{scully_directed_2006} it is clear that if one created phases that correspond to those that would have resulted from absorption from a different direction, re-emission would be in that direction. The importance of the phases during re-emission has been considered before: Chen et al.~\cite{chen_coherent_2013} demonstrated forward and backward retrieval with EIT. Backward retrieval can lead to higher fidelity due to reduced re-absorption and compensation of the Doppler shift. In \cite{sangouard_analysis_2007} it was noted that by suitably changing the phases, the signal is re-emitted in backward direction compared to the original incoming signal without the need of additional control lasers. In \cite{chen_controllably_2016, wang_three-channel_2009}, forward retrieval and routing with a small 'array' of possible control beams was achieved. \cite{surmacz_efficient_2008} recognized phase matching and the spin-wavevector $\kappa$ as important for directionality and proposed multi-mode storage by having an array of control fields with sufficiently differing angles that any control beam only affects its own spin wave. As noted in \cite{mazelanik_coherent_2019}, imprinting a position-dependent phase $e^{i\phi(\vec{r})}$ onto the atomic coherence has, in $k$-space, the effect of a convolution of the original spin-wave and the added phase-factors. Due to the condition of phase matching, the $k$-space contributions of the spin-wave define whether and in which direction the signal will be re-emitted upon arrival of the next control pulse. 
Added phases that are linear in position shift the wavevector stored in the spin wave \cite{leszczynski_spatially_2018}, while periodic phases coherently divide the spin-wave into several contributions with shifted wavevectors \cite{parniak_quantum_2019, mazelanik_coherent_2019}. Refs.~\cite{leszczynski_spatially_2018, parniak_quantum_2019, mazelanik_coherent_2019} proposed and demonstrated experimentally the use of an ac-Stark shift for manipulating the spin wave as described above, implementing temporal as well as directional beam splitters, and observing the Hong-Ou-Mandel effect. In \cite{lipka_spatial_2019}, the ac-Stark effect is demonstrated to allow for mimicking the effect of a cylindric lens by imprinting phases $\propto y^2$ orthogonal to the emission direction. The ac-Stark shift is thus a powerful tool for coherently manipulating spin waves. Solely shifting the wavevector of the spin waves by a large amount, thus changing the emission direction without splitting the spin wave, is hard to achieve using this method, as inducing a suitable energy shift linear in space over the whole atomic cloud requires correspondingly large absolute shifts at some part of the cloud. Ref.~\cite{mazelanik_coherent_2019} reports an ac-Stark induced energy shift on the order of MHz for 0.1\,W laser power, while a magnetic field creates $\sim$ 10\,MHz per Gauss, such that the shift can reach the GHz regime.
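A minimal numerical illustration of this point (a sketch on an arbitrary 1D grid with made-up parameters, not the simulations of this paper): imprinting a phase linear in position, $e^{i\,\delta k\,x}$, onto a discretized spin wave rigidly shifts its $k$-space peak by $\delta k$.

```python
import math, cmath

n, L = 128, 1.0                 # grid points, cloud length (arbitrary units)
dx = L / n
k0 = 2 * math.pi * 10 / L       # initial spin-wave wavevector (Fourier mode 10)
dk = 2 * math.pi * 6 / L        # imprinted shift: phase linear in position

# Gaussian-envelope spin wave S(x) ~ env(x) e^{i k0 x}, then imprint e^{i dk x}
S = [math.exp(-((p * dx - 0.5 * L) / (0.15 * L)) ** 2)
     * cmath.exp(1j * k0 * p * dx) for p in range(n)]
S2 = [s * cmath.exp(1j * dk * p * dx) for p, s in enumerate(S)]

def dft_peak(v):
    # index of the strongest discrete Fourier mode (naive DFT is fine at n=128)
    amps = [abs(sum(v[p] * cmath.exp(-2j * math.pi * m * p / n)
                    for p in range(n))) for m in range(n)]
    return max(range(n), key=lambda m: amps[m])

print(dft_peak(S), dft_peak(S2))   # peak moves from mode 10 to mode 16
```

The envelope is untouched; only the carrier wavevector, and hence the phase-matched emission direction, changes.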
This allows for fast routing of photons (few to $\sim100\,\mu$s with reasonable parameters, depending on the deflection angle) without using any mechanical parts, i.e.~the re-emission direction is controlled by purely electronic means. Even without optimizing the parameters of the control beam, efficiencies of the re-emission in any direction can be achieved that are comparable to those of forward re-emission. In cold atomic clouds or hot atomic vapors, where atomic motion scrambles the phases of spin waves that carry significant momentum, deflection angles up to $\sim 20\,$mrad and $\sim 0.2\,$mrad, respectively, should be achievable, which still allow for fast photon routing. \begin{figure} \caption{The time line is divided into the stages of absorption, storage, and emission. For each stage, the relevant wavevectors for phase matching are drawn above the axis, and a depiction of the system's state at the beginning and end of each stage is shown below. A ``manipulation'' (momentum change of the spin-wave) during the storage phase allows re-emission in a new direction.} \label{fig:timeline} \end{figure} Given the role of the individual atomic phases and the ability to shift $\kappa$ in the spin-wave, we also propose a new way of avoiding decoherence due to the interaction of diffusion and the momentum stored in the spin-wave. This can contribute to relax the necessity of using co-propagating pulses in implementations where the atoms move freely. The dominant decoherence mechanism in Raman-type quantum-memories is ground state decoherence. In vapor cells, it results mostly from the drift of atoms in and out of the laser beam, and in ultracold gases often from uncontrolled magnetic fields \cite{lvovsky_optical_2009}. 
In the latter case, an improvement can be obtained by using atomic clock states \cite{chen_controllably_2016,zhao_millisecond_2009} (i.e.~states with a transition frequency which is constant to first order in changes to the magnetic field), in the former by using optical lattices for limiting the motion of the atoms. Using rubidium, storage times reaching $1/e$ lifetimes of $0.22\,$s \cite{yangEfficientQuantumLight2016} for single light quanta and $16\,$s \cite{dudinLightStorageTime2013} for coherent states were reported. Using dopants in a solid, the authors of \cite{maOnehourCoherentOptical2021a} report storage times of over $1\,$h. Reviews of different approaches to quantum memories are \cite{heshami_quantum_2016,lvovsky_optical_2009}. \section{The System} \label{sec:System} The system consists of an atomic cloud with atomic density $n(\vec{r})$ and a total of $N$ atoms inside a geometrical volume $\mathcal{V}$ with $\mathop{\mathrm{Vol}}(\mathcal{V})=V$. Three internal states $\ket{g},\ket{e},\ket{s}$ in $\Lambda$-configuration are taken into account, and the motional state $\ket{\psi}$ is given by a wave function $\psi(\vec{r}_1,\dots,\vec{r}_N)$ which is a product of single-particle wave packets. We assume the atoms to be localized on a scale much smaller than the photonic wavelengths. In experiment, this can be realised using warm vapors, cold atomic clouds, or dopants inside a solid body.
With this, averaging over radius-$\epsilon$ spheres $v_{\vec{r}}$ around position $\vec{r}$ much smaller than the wave lengths and much bigger than the atomic wave functions allows for introducing the atomic density $n(\vec{r})$ as the approximate eigenfunction of the atomic density operator averaged over the spheres $v_{\vec{r}}$: \begin{eqnarray} \label{eq:n} \hat{n}({\vec{r}}) \ket{\psi} := \left(\sum_{i=1}^N \frac{1}{\mathop{\mathrm{Vol}}(v_{\vec{r}})}\ket{v_{\vec{r}}}_i\!\bra{v_{\vec{r}}}\right) \ket{\psi} \approx n({\vec{r}}) \ket{\psi}, \end{eqnarray} where $\ket{v_{\vec{r}}}_i\!\bra{v_{\vec{r}}}:= \int_{v_{\vec{r}}} d^3 r'\ \ket{\vec{r}'}_i\!\bra{\vec{r}'}$. The atoms are treated as frozen in place for the absorption and emission processes. The definitions and derivations are parallel to the ones introduced in \cite{gorshkov_universal_2007,gorshkov_photon_2007-2}, and modified for 3d-space with arbitrary signal and control directions, as well as the quantized atomic motional state given above. Detailed derivations and outline of the numerical procedure are given in \cite{master_thesis}. Atomic transition operators for atom $i$ are denoted by $\hat{\sigma}_{\mu\nu}^i=\ket{\mu}_i\!\bra{\nu}$ ($\mu,\nu\in \{e,s,g\}$) and couple to the corresponding light modes via dipole transitions as depicted in Fig.~\ref{fig:energy_levels}. The control field (index ``c'') is described classically by its positive frequency envelope $\mathcal{E}_\text{c}^{\vec{k}_\text{c}}(\vec{r},t)$. 
As in \cite{gorshkov_photon_2007-1}, the control pulse's influence on the atomic cloud is later described by half the induced Rabi frequency $\Omega(\vec{r},t)$ which will be defined shortly: \begin{eqnarray}\label{eq:E_c} \vec{E}_\text{c}(\vec{r},t) &= \frac{1}{2} \vec{\epsilon}_{\text{c}} e^{i (\vec{k}_\text{c}\cdot\vec{r}-c|\vec{k}_\text{c}|t)} \mathcal{E}_\text{c}^{\vec{k}_\text{c}} + c.c., \end{eqnarray} here $\vec{E}_\text{c}$ is the electric field of the control pulse, $\vec{\epsilon}_\text{c}$ its polarisation, $\vec{k}_\text{c}$ its dominant wavevector, $c$ is the vacuum speed of light, and $c.c.$ stands for the complex conjugate. The signal pulse (index ``s'') is taken as fully quantised in 3d space with electric field operator \begin{eqnarray} \hat{\vec{E}}_\text{s}(\vec{r}) =& \sqrt{\frac{\hbar c}{2 \epsilon_0 (2\pi)^3}} \sum_{\ell\in\{1,2\}} \int_{\vec{k}\in \mathbb{R}^3} d^3 k\ \sqrt{|\vec{k}|} \vec{\epsilon}_{\vec{k},\ell} e^{i \vec{k}\cdot \vec{r}} \hat{a}_{\ell}(\vec{k}) + h.c., \end{eqnarray} where $\epsilon_0$ is the electric vacuum permittivity, $h=2\pi\hbar$ is Planck's constant, $\vec{\epsilon}_{\vec{k},\ell}$ is the polarisation vector for polarisation $\ell$ and wavevector $\vec{k}$ and $\hat{a}_\ell (\vec{k})$ is the continuous-mode annihilation operator for polarisation $\ell$ and wavevector $\vec{k}$ with $\left[\hat{a}_{\ell}(\vec{k}),\hat{a}^{\dagger}_{\ell'}(\vec{k}')\right]= \delta(\vec{k}-\vec{k'}) \cdot \delta_{\ell,\ell'}$ and $h.c.$ stands for the hermitian conjugate. 
As with the control field, we define positive frequency envelopes also for the signal field ($\hat{\mathcal{E}}^{\vec{k}_\text{s}}(\vec{r},t)$), the $g\leftrightarrow e$-coherence ($\hat{P}^{\vec{k}_\text{s}}(\vec{r},t)$, the ``polarisation'') and the $g\leftrightarrow s$-coherence ($\hat{S}^{\vec{\kappa}}(\vec{r},t)$, the ``spin wave''), \begin{eqnarray}\label{eq:def_envelopes} \hat{\mathcal{E}}^{\vec{k}_\text{s}}(\vec{r},t) &=& \sqrt{\frac{V}{(2\pi)^3}} e^{-i(\vec{k}_\text{s}\cdot\vec{r}-c|\vec{k}_\text{s}| t)} \int_{\vec{k}\in \mathbb{R}^3} d^3 k\ \sqrt{\frac{|\vec{k}|}{|\vec{k}_\text{s}|}}\ \frac{\vec{d}\cdot \vec{\epsilon}_{\vec{k}}}{\vec{d}\cdot \vec{\epsilon}_{\vec{k}_\text{s}}} e^{i \vec{k}\cdot \vec{r}} \hat{a}({\vec{k}}),\\ \nonumber \hat{P}^{\vec{k}_\text{s}}(\vec{r},t) &=& \frac{\sqrt{N}}{n(\vec{r})} \sum\limits_{i=1}^{N} e^{-i(\vec{k}_\text{s}\cdot \vec{r}-c|\vec{k}_\text{s}|t)} \hat\sigma_{ge}^{i} \,\frac{\ket{v_{\vec{r}}}_i\!\bra{v_{\vec{r}}}}{\mathop{\mathrm{Vol}}(v_{\vec{r}})}, \\ \nonumber \hat{S}^{\vec{\kappa}}(\vec{r},t) &=& \frac{\sqrt{N}}{n(\vec{r})} \sum\limits_{i=1}^{N} e^{-i((\vec{k}_\text{s}-\vec{k}_\text{c})\cdot \vec{r}-c(|\vec{k}_\text{s}|-|\vec{k}_\text{c}|)t)} \hat\sigma_{gs}^{i} \,\frac{\ket{v_{\vec{r}}}_i\!\bra{v_{\vec{r}}}}{\mathop{\mathrm{Vol}}(v_{\vec{r}})}, \quad \vec{\kappa}:=\vec{k}_\text{s}-\vec{k}_\text{c}\\ \nonumber \Omega(\vec{r},t) &=& \Omega^{\vec{k}_\text{c}}(\vec{r},t)\qquad = \qquad \frac{1}{2\hbar} \vec{d}_\text{c}\cdot \vec{\epsilon}_\text{c} \mathcal{E}^{\vec{k}_\text{c}}_\text{c}(\vec{r},t) \end{eqnarray} and the corresponding interaction Hamiltonian \begin{eqnarray} \label{eq:interaction_hamiltonian} \hat{H}_\text{I}=&-\sum_{j=1}^{N}\hat{\vec{d}}_j\cdot (\hat{\vec{E}}_\text{s} (\hat{\vec{r}}_j)+\vec{E}_\text{c}(\hat{\vec{r}}_j,t))\\ \approx& -\sum_{j=1}^{N} \left[ \sqrt{\frac{\hbar c}{2\epsilon_0 (2\pi)^3}} \int_{\vec{k}\in \mathbb{R}^3} d^3 k\ \sqrt{|\vec{k}|} \left( \vec{d}\cdot\vec{\epsilon}_{\vec{k}} e^{i
\vec{k}\cdot \hat{\vec{r}}_j} \hat\sigma_{eg}^j \hat{a}(\vec{k})+h.c.\right) + \right.\nonumber \\ & \left. \qquad \qquad +\ \frac{1}{2} \vec{d}_\text{c}\cdot\vec{\epsilon}_\text{c} e^{i (\vec{k}_\text{c}\hat{\vec{r}}_j-c|\vec{k}_\text{c}|t)} \hat{\sigma}_{es}^j \mathcal{E}_\text{c}(\hat{\vec{r}}_j,t) +h.c.\right] \nonumber\\ \nonumber \approx & -\hbar \int_{\mathcal{V}} d^3 r\ \left[ \sqrt{N}g \hat{P}^{\vec{k}_\text{s}}(\vec{r},t) \hat{\mathcal{E}}^{\vec{k}_\text{s}}(\vec{r},t) + \hat{S}^{\vec{\kappa}}(\vec{r},t) \Omega(\vec{r},t) + h.c.\right] \hat{n}(\vec{r}). \end{eqnarray} Here, $g=\sqrt{c|\vec{k}_\text{s}|/(2\hbar \epsilon_0 V)} \vec{d}\cdot \vec{\epsilon}_{\vec{k}_\text{s}}$ is the single particle atom-light coupling, $\vec{d}$ is the dipole moment of the $g\leftrightarrow e$-transition, $\vec{d}_\text{c}$ the dipole moment of the $s\leftrightarrow g$-transition and $\vec{\kappa}$ is the wave vector difference between the pulses. The signal and control field polarisations are chosen to be $\vec{e}_z$ and the $\ell$-index is discarded. We correspondingly consider all involved wavevectors in the $xy$-plane. This allows for arbitrary deflection angles in the plane without complications from a change in polarisation. As the use of opposite circular polarisations for the probe and signal pulses in the $\Lambda$ scheme can strongly simplify distinguishing the two pulses for co-propagating configurations, in many implementations it will be advantageous to use a scheme with circular polarisation instead. However, this restricts the spin wave manipulation to deflection angles that do not strongly depart from forward or backward emission, so that the state overlap to the original polarisation remains high. The Rotating Wave Approximation is used and it is assumed that the signal pulse only couples to the $g\leftrightarrow e$-transition and similarly the control pulse with the $s\leftrightarrow e$-transition. 
\begin{figure} \caption{The energy levels of the atoms and relevant notation.} \label{fig:energy_levels} \end{figure} Initially all atoms are in the ground state $\ket{g}$ and, as atomic motion is frozen, the Doppler effect is neglected. Inhomogeneous broadening in the context of photon storage in an ensemble of atoms was considered in \cite{gorshkov_photon_2007-3}. The signal pulse is taken to be a weak coherent state with $|\alpha|^2\ll N$, with $|\alpha|^2$ the expectation value of the photon number. With these initial conditions, the fields $\mathcal{E}(\vec{r},t),\ P(\vec{r},t)$ and $S(\vec{r},t)$ can be defined as the eigenvalues of the system's state with respect to the corresponding operators: $\mathcal{E}\leftrightarrow\hat{\mathcal{E}}^{\vec{k}_\text{s}}(\vec{r},t)$, $P\leftrightarrow \hat{P}^{\vec{k}_\text{s}}(\vec{r},t)$ and $S\leftrightarrow \hat{S}^{\vec{\kappa}}(\vec{r},t)$. Given our initial conditions and the limit of weak signal pulses, the system's state remains an eigenstate of these operators at all times, enabling our description through the complex-valued eigenvalues. Choosing $\alpha=1$, all results for $\mathcal{E}$, $P$ and $S$ for a coherent signal pulse coincide with the expectation values of the operators that would result from using a 1-photon Fock state as signal pulse. Therefore, 1-photon Fock states can be described with the exact same formalism.
The time evolution of the fields is given by the Heisenberg equation of motion and results in \begin{eqnarray}\label{eq:pde} \left( \partial_t+c\partial_{\vec{e}_{\vec{k}_\text{s}}} \right) \mathcal{E}\approx& i \sqrt{N} g \frac{V}{N} n P,\\ \nonumber \partial_t P =& -(\gamma+i \Delta) P + i \Omega S+i \sqrt{N} g \mathcal{E}, \\ \nonumber \partial_t S =& i \Omega^* P, \end{eqnarray} where $\partial_{\vec{e}_{\vec{k}_\text{s}}}$ is a spatial derivative in direction $\vec{e}_{\vec{k}_\text{s}}:=\vec{k}_\text{s}/|\vec{k}_\text{s}|$, the direction of propagation of the signal pulse. $\gamma$ is the spontaneous emission rate of the excited state (which is added heuristically to describe the most basic effect of spontaneous emission), and $\Delta$ the detuning. The number of photons in the signal field is given by \begin{eqnarray} \langle \hat{N}_\text{ph}\rangle \approx \frac{1}{V} \int d^3 r\ \mathcal{E}^*(\vec{r},t) \mathcal{E}(\vec{r},t), \end{eqnarray} and the number of excitations stored in the atomic cloud is \begin{eqnarray} \langle \hat{N}_{\ket{s}}\rangle \approx& \frac{1}{N} \int_{\mathcal{V}} d^3 r\ n(\vec{r}) S^*(\vec{r},t)S(\vec{r},t), \text{ and} \\ \langle \hat{N}_{\ket{e}}\rangle \approx& \frac{1}{N} \int_{\mathcal{V}} d^3 r\ n(\vec{r}) P^*(\vec{r},t)P(\vec{r},t), \end{eqnarray} respectively. 
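As a quick numerical consistency check of this normalisation, the photon-number integral can be evaluated for a separable Gaussian one-photon envelope; the volume and width below are illustrative values of our own choosing, not parameters from the text:

```python
import numpy as np

# Numerical check of <N_ph> ~ (1/V) int |E|^2 d^3r for a one-photon
# (|alpha| = 1) envelope of separable Gaussian shape. V and w are
# arbitrary illustrative values in consistent units.
V = 1.0
w = 0.05

# E(r) = sqrt(V) g(x) g(y) g(z) with int g(x)^2 dx = 1, so that
# <N_ph> = (1/V) int |E|^2 d^3r = 1.
x = np.linspace(-0.5, 0.5, 2001)
dx = x[1] - x[0]
g = (np.pi * w**2) ** -0.25 * np.exp(-x**2 / (2 * w**2))

one_d = np.sum(g**2) * dx      # 1D normalisation integral, ~1
N_ph = V * one_d**3 / V        # separable 3D integral of |E|^2, divided by V
print(N_ph)
```

The separability makes the 3D integral a product of three identical 1D integrals, avoiding a full 3D grid.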
With these, the time evolution of our state (neglecting atomic motion and decoherence) is fully described by the complex-valued fields $\mathcal{E}, P$ and $S$ and their time evolution (\ref{eq:pde}), with a direct mapping to the corresponding quantum state (for the atomic degrees of freedom): \begin{eqnarray}\label{eq:field_op_eigenstate} \ket{\Psi^P_{S}(t)}&=& \int d^{3N}r \bigotimes_{i=1}^{N} c_i \ \Big(\ket{g}_i +e^{i(\vec{k}_\text{s}\cdot\vec{r}_i-c|\vec{k}_\text{s}|t)} \frac{P({\vec{r}_i,t})}{\sqrt{N}}\ket{e}_i \\ \nonumber &&\qquad\qquad\qquad + e^{i((\vec{k}_\text{s}-\vec{k}_\text{c})\cdot\vec{r}_i-c(|\vec{k}_\text{s}|-|\vec{k}_\text{c}|)t)} \frac{S({\vec{r}_i,t})}{\sqrt{N}} \ket{s}_i \Big)\times\\ \nonumber && \times \psi(\vec{r}_1,\dots,\vec{r}_N)\ket{\vec{r}_1,\dots,\vec{r}_N}. \end{eqnarray} Here, $c_i\approx 1$ are normalisation factors. \section{Dynamics and directionality} \label{sec:Dynamics} We partition the system dynamics into three stages as depicted in Fig.~\ref{fig:timeline}: From $t_0$ to $t_1$, the absorption takes place. There, the atoms start in the ground state and the incoming signal and control pulses meet in the atomic cloud, where a fraction $\eta_\text{abs}$ of the excitations of the signal pulse is converted into the spin wave. Between $t_1$ and $t_2$, the light remains stored and we optionally manipulate the spin wave using the Zeeman effect. During this time, a slow decay of the spin wave occurs, which we neglect in most of this work. During storage, the control field is absent, $\Omega(\vec{r},t)=0$. From time $t_2$ on, the emission control pulse arrives and releases the excitations stored in the spin wave into a new signal pulse with a possibly altered direction and remaining fraction of original excitations $\eta=\eta_\text{abs}\ \eta_\text{em}$.
We consider in the following a spherical sample with volume $V=L^3$ and constant density, and change to unit-free coordinates by using $L$ as length scale, $1/\gamma$ as time scale, and defining the atomic number density relative to the mean density, $\tilde{n}$: \begin{eqnarray} \tilde{\vec{r}}:=\frac{\vec{r}}{L}, \ \tilde{t}:=\frac{t}{1/\gamma}, \ \tilde{n}:=\frac{n}{N/V}, \ \tilde{c}:=\frac{c}{\gamma L}. \end{eqnarray} The simplifying assumption of a uniform atomic density allows for numerically simple PDEs. A treatment of exact atomic positions can be found in \cite{asenjo-garcia_exponential_2017,manzoni_optimization_2018}. We define \begin{eqnarray} \tilde{\Delta}:=\frac{\Delta}{\gamma}, \ \tilde{\Omega}:=\frac{\Omega}{\gamma}, \ \tilde{g}:= \frac{\sqrt{N} g}{\gamma}, \ \tilde{P}:=\tilde{n}P, \ \tilde{S}:= \tilde{n} S, \end{eqnarray} with $\tilde{c}$ the dimensionless speed of light, $\tilde{\Delta}$ the dimensionless two-mode detuning, $\tilde{\Omega}$ half the dimensionless Rabi frequency induced by the control-pulse, and $\tilde{g}$ the dimensionless enhanced coupling between the atoms and the signal pulse. The normalised polarisation $\tilde{P}$ and the normalised spin wave $\tilde{S}$ are zero outside of the atomic cloud, which allows for a more direct interpretation of their numerical values when plotted. We define the $x$-axis such that $\vec{k}_\text{s}=k_\text{s} \vec{e}_{x}$. 
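In these rescaled variables the model is a transport equation for $\mathcal{E}$ coupled to local equations for $\tilde{P}$ and $\tilde{S}$. A minimal sketch of how such a system can be integrated, assuming a 1D reduction along the propagation axis, first-order upwind advection, explicit Euler steps, and purely illustrative parameters and pulse shapes (not the optimized ones used in the numerical section):

```python
import numpy as np

# 1D method-of-lines sketch of the rescaled equations: upwind transport for
# the signal envelope E, local equations for P~ and S~. All values below
# (c~, g~, Omega~, grid, pulse shape) are illustrative assumptions.
c_t, g_t, Om, Delta = 850.0, 10.0, 5.0, 0.0
nx = 400
x = np.linspace(-2.0, 2.0, nx)
dx = x[1] - x[0]
dt = 0.2 * dx / c_t                       # CFL-limited explicit time step
n = (np.abs(x) < 0.5).astype(float)       # uniform cloud, n~ = 1 inside

E = np.exp(-((x + 1.2) / 0.2) ** 2).astype(complex)   # incoming pulse
P = np.zeros(nx, complex)
S = np.zeros(nx, complex)

for _ in range(2000):
    dEdx = np.zeros(nx, complex)
    dEdx[1:] = (E[1:] - E[:-1]) / dx      # upwind for rightward propagation
    E_new = E + dt * (-c_t * dEdx + 1j * g_t * P)
    P_new = P + dt * (-(1 + 1j * Delta) * P + 1j * Om * S + 1j * g_t * n * E)
    S_new = S + dt * (1j * np.conj(Om) * P)
    E, P, S = E_new, P_new, S_new

stored = np.sum(np.abs(S) ** 2) * dx      # spin-wave excitation measure
print(stored)
```

With $\tilde{P}$ and $\tilde{S}$ initialised to zero, no excitation ever builds up outside the cloud, mirroring the fact that the normalised polarisation and spin wave vanish where $\tilde{n}=0$.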
The partial differential equations (PDEs) are then \begin{eqnarray}\label{eq:final_pde} \left( \partial_{\tilde{t}}+\tilde{c} \partial_{\tilde{x}} \right) \mathcal{E}({\tilde{\vec{r}},\tilde{t}})&=& i \tilde{g} \tilde{P}({\tilde{\vec{r}},\tilde{t}}),\\ \partial_{\tilde{t}} \tilde{P}({\tilde{\vec{r}},\tilde{t}})&=& -(1+i\tilde{\Delta})\tilde{P}({\tilde{\vec{r}},\tilde{t}}) +i\tilde{\Omega}({\tilde{\vec{r}},\tilde{t}}) \tilde{S}({\tilde{\vec{r}},\tilde{t}}) +i \tilde{g} \tilde{n}({\tilde{ \vec{r}}}) \mathcal{E}({\tilde{\vec{r}},\tilde{t}}), \nonumber \\ \partial_{\tilde{t}} \tilde{S}({\tilde{\vec{r}},\tilde{t}})&=& i \tilde{\Omega}^*({\tilde{\vec{r}},\tilde{t}}) \tilde{P}({\tilde{\vec{r}},\tilde{t}}). \nonumber \end{eqnarray} The optical depth $d$ as defined in \cite{gorshkov_photon_2007-2} is here given by $d= \tilde{g}^2/\tilde{c}$ when using $L$ as length scale. If the cloud diameter is used as length scale instead and the cloud has spherical shape and constant density, we get $d'\approx 1.24\ d$. We consider the ideal situation of no dephasing during the storage time. In Sec.~\ref{subsec:mode_mismatch} we briefly discuss the dephasing-relevant aspects connected to the wavevector $\vec{\kappa}$ stored in the spin wave. \subsection{Phase matching conditions and directionality} For the absorption process, each excitation in the signal pulse carries the wavevector $\vec{k}_\text{s}$ and, if absorbed, leads to the emission of a control field excitation with wavevector $\vec{k}_\text{c}$, such that a spin wave excitation with wavevector \begin{equation} \vec{\kappa}=\vec{k}_\text{s}-\vec{k}_\text{c} \label{eq:phase_match_abs} \end{equation} remains due to the conservation of momentum. If after absorption the wavevector of the spin wave remains unchanged during storage, $\vec{\kappa}'=\vec{\kappa}$, and the same control pulse direction $\vec{k}'_\text{c}=\vec{k}_\text{c}$ is used (cf.
Fig.~\ref{fig:drawingphasematchingoverviewabs}), clearly the emitted signal pulse retains its original direction $\vec{k}'_\text{s}=\vec{k}_\text{s}$ as the PDEs from (\ref{eq:pde}) keep applying. More generally, the wavevector $\vec{\kappa}'$ stored in the spin wave and the wavevector $\vec{k}'_\text{c}$ of the control pulse are the only wavevectors that define the direction of re-emission. The wavevector of the emitted signal pulse becomes \begin{equation} \vec{k}'_\text{s} = \vec{\kappa}'+\vec{k}'_\text{c}. \label{eq:phase_match_em} \end{equation} The relevant electric field envelope accordingly changes to $\mathcal{E}^{\vec{k}'_\text{s}}$ with direction of motion $\vec{e}_{\vec{k}'_\text{s}}$ and accordingly adjusted values in (\ref{eq:def_envelopes}-\ref{eq:pde}). The equations (\ref{eq:phase_match_abs}) and (\ref{eq:phase_match_em}) are called phase matching conditions, as they need to be fulfilled in order to get constructive interference from the different participating atoms. This introduces the spatial extent of the atomic cloud $L$ as a parameter that determines how closely the phase matching conditions need to be fulfilled in order to ensure purely constructive interference throughout the cloud. In Sec.~\ref{subsec:mode_mismatch} we explore these conditions for the system considered here. For the absorption and emission processes to be efficient, energy and momentum both need to be conserved. Energy conservation implies that two-wave resonance in the atomic $\Lambda$-level system is necessary: \begin{eqnarray}\label{eq:resonance_cond} c |\vec{k}_\text{s}| -c |\vec{k}_\text{c}| = \omega_{gs} &\quad \text{ for absorption,}\\ \nonumber c |\vec{k}'_\text{s}| -c |\vec{k}'_\text{c}|= \omega_{gs} &\quad \text{ for emission.} \end{eqnarray} \begin{figure} \caption{The phase matching condition for the absorption (left) and emission (right) process without change of direction. The wavevectors are represented by arrows.
} \label{fig:drawingphasematchingoverviewabs} \end{figure} These relations allow for the possibility of manipulating the emission direction of the signal pulse by changing either wavevector on the right hand side of (\ref{eq:phase_match_em}). Using emission control pulses in different directions was proposed in \cite{chen_controllably_2016,tordrup_holographic_2008}, but has the disadvantage of transferring the problem of controlling the direction of a light-field from the signal beam to the control beam, i.e.~one needs active optical elements or different sources for the control beam. Here we study the possibility of changing the wavevector stored in the spin wave, $\vec{\kappa}\rightarrow\vec{\kappa}':=\vec{\kappa}+\vec{\delta}$ (defining $\vec{\delta}$ as ``manipulation''), which can be done with purely electronic means, as we will show below. How this selects a new direction of the emitted signal pulse is depicted in Fig.~\ref{fig:drawingphasematchingoverviewem_simpleMan}: The atomic spin wave state starts with the wavevector $\vec{\kappa}$, is changed by $\vec{\delta}$ to become $\vec{\kappa}'$; a photon of wavevector $\vec{k}'_\text{c}$ is absorbed, and a photon of wavevector $\vec{k}'_\text{s}$ emitted. With this, the direction of emission of the signal pulse $\vec{k}'_\text{s}$ can deviate from the original direction $\vec{k}_\text{s}$ even when using the same control beam, $\vec{k}'_\text{c}=\vec{k}_\text{c}$. The angular change in direction is denoted by $\varphi$. \begin{figure} \caption{Phase matching for the emission process after manipulation of the spin-wave wavevector $\vec{\kappa}\rightarrow\vec{\kappa}'$.} \label{fig:drawingphasematchingoverviewem_simpleMan} \end{figure} During idealised manipulation, only $\vec{\kappa}$ is changed to become $\vec{\kappa}'$ without otherwise affecting the spin wave (see eq.~\ref{eq:manipulation_cond}).
The exact values of the necessary spin-wave manipulation for inducing a change in direction $\varphi$ of the emitted signal pulse are easily obtained with \begin{eqnarray}\label{eq:manip_forw_retrieval} \vec{\delta} &= \vec{k}'_\text{s}-\vec{k}_\text{s} = k_\text{s}\ \begin{pmatrix} \cos(\varphi)-1 \\ \sin(\varphi)\\ 0 \end{pmatrix} ,\quad |\vec{k}_\text{s}| = |\vec{k}'_\text{s}|\,. \end{eqnarray} For small angles $\varphi$, the increase is linear, $\vec{\delta} \approx \varphi k_\text{s} \vec{e}_y$, and for large angles it caps at $|\vec{\delta}|=2|\vec{k}_\text{s}|$. In Sec.~\ref{subsec:mode_mismatch} we study the decrease in efficiency when (\ref{eq:manip_forw_retrieval}) is not satisfied exactly. \subsection{Manipulation via Zeeman shift} \label{subsec:Manip_Zeeman} The manipulation needed to re-emit the light into a new direction $\vec{k}'_\text{s}$ can be understood as the creation of a new spin-wave state that would have resulted from signal and control pulses of wavevectors $\vec{k}'_\text{s}$ and $\vec{k}'_\text{c}$, with unchanged wave numbers $|\vec{k}'_\text{s}|=|\vec{k}_\text{s}|$, $|\vec{k}'_\text{c}|=|\vec{k}_\text{c}|$. This can be achieved by introducing a position-dependent phase equivalent to a wavevector $\vec{\delta}$: \begin{eqnarray}\label{eq:manipulation_cond} \hat{S}^{\vec{\kappa}'}({ \vec{r},t_2})\ \ket{\psi_{S}^{P} (t_2)} \overset{!}{=}& S({\vec{r},t_1})\ \ket{\psi_{S}^{P} ({t_2})}\\ \nonumber \overset{(\ref{eq:def_envelopes})}{\Rightarrow} S({\vec{r},t_2}) \overset{!}{=}& e^{i (\vec{\kappa}'-\vec{\kappa})\cdot \vec{r}} S({\vec{r},t_1})= e^{i \vec{\delta}\cdot \vec{r}} S({\vec{r},t_1}), \end{eqnarray} with the manipulation $\vec{\delta}$ leading to emission angles $\varphi$ as given in (\ref{eq:manip_forw_retrieval}).
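The geometry behind (\ref{eq:manip_forw_retrieval}) can be checked directly; a short sketch with a unit wave number (illustrative):

```python
import numpy as np

def delta(phi, ks=1.0):
    """Required spin-wave manipulation for deflection angle phi (k_s || e_x)."""
    return ks * np.array([np.cos(phi) - 1.0, np.sin(phi), 0.0])

# |delta| = 2 ks |sin(phi/2)|: linear in phi for small angles, capped at 2 ks.
for phi in (1e-3, np.pi / 2, np.pi):
    print(phi, np.linalg.norm(delta(phi)))
```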
More generally, arbitrary phases $\phi(\vec{r})$ imprinted on the spin-wave such that $S(\vec{r},t_2)=e^{i \phi(\vec{r})} S(\vec{r}, t_1)$ can be treated by decomposing the resulting spin-wave into separate plane-wave contributions and their envelopes, each of which can be described individually by the PDEs using the corresponding wavevector $\vec{\kappa}'$. The added phases amount to a convolution in $k$-space of the original spin-wave with the Fourier transform of the phase factors $e^{i\phi( \vec{r})}$, as can be seen from the mathematical relation $\mathcal{F}\left[ e^{i \phi(\vec{r})} S(\vec{r}, t_1) \right] \propto \mathcal{F}\left[ e^{i \phi(\vec{r})} \right] \star \mathcal{F}\left[ S(\vec{r}, t_1) \right]$, where $\mathcal{F}$ denotes the Fourier transform to $k$-space and $\star$ the convolution operator. While added phases linear in space solely shift the wavevector of the spin-wave, periodic phase patterns will split the spin-wave into several contributions as described and demonstrated in \cite{mazelanik_coherent_2019}. In our description, $\vec{\kappa}'=\vec{k}'_{\text{s}}-\vec{k}'_{\text{c}}$ and $|\vec{k}'_{\text{s}}|=|\vec{k}_{\text{s}}|$ must be fulfilled for the derivation of the PDEs to be valid, such that other wave-vector contributions to the spin-wave, i.e. any mode-mismatch, need to be treated as part of the envelope (see Sec.~\ref{subsec:mode_mismatch}). A possible way of introducing the necessary phases is via the Zeeman shift created by a magnetic field gradient. For this, we introduce a classical magnetic field $B(\vec{r},t)$ which we assume induces an energy shift of the atomic energy levels that is linear in the magnetic field.
In principle, for the rubidium cloud considered here this regime can be reached by applying a homogeneous magnetic field $\vec{B}_0\approx 5\,\text{kG}\ \vec{e}_z$ that pushes the atomic energy levels into the Paschen-Back regime, such that the effect of an additional gradient field leads to approximately linear responses \cite{sargsyan_hyperfine_2014, steck_rubidium_2015}. However, in rubidium this strength of $B_0$ changes the level structure such that our $\Lambda$ scheme is not available. By using a weak magnetic field for the storage and emission processes and ramping up $B_0$ for the duration of the manipulation scheme, the Paschen-Back regime could still be used to manipulate the spin-wave: As we find in Appendix \ref{sec:adiabaticity}, the adiabaticity condition remains fulfilled for realistic ramp-up speeds, such that the ground states $\ket{g}=\ket{F=1,m_F}$ and $\ket{s}=\ket{F=2,m_F}$ are mapped to the states $\ket{\tilde{g}}=\ket{m_I=m_F+\frac{1}{2}, m_s=-\frac{1}{2}}$ and $\ket{\tilde{s}}=\ket{m_I=m_F-\frac{1}{2}, m_s=\frac{1}{2}}$. In practice, it might be simpler to create the spatially linearly increasing shift of the energy levels in a different way: with the use of a spatially non-linearly increasing magnetic field that accounts for the non-linear response of the atoms, the necessary effect can be induced without the need for a fully linear response to additional magnetic fields as assumed here. This avoids the need to change $B_0$ before and after the spin wave manipulation. For an order-of-magnitude estimation, we nonetheless consider the linear regime with the Hamiltonian \begin{equation} \hat{H}_\text{B} = -\sum_{i} B(\vec{r}_i,t) \left( \mu_{g}\hat{\sigma}^{i}_{gg} + \mu_{e}\hat{\sigma}^{i}_{ee} +\mu_{s}\hat{\sigma}^{i}_{ss} \right), \end{equation} with $\mu_{x}$ being the respective magnetic moment corresponding to the atomic states $x\in\{ g, e, s \}$.
The induced energy shifts lead to a changed time evolution during the storage time, which is solved by \begin{equation} S({\tilde{\vec{r}}, \tilde{t}_2}) = e^{i \phi_\text{tot}({\vec{\tilde r }})} S({\tilde{\vec{r}}, \tilde{t}_1}), \label{eq:Srt2} \end{equation} where $\tilde{t}_1$ and $\tilde{t}_2$ are the initial and final regarded moments in rescaled time and \begin{equation}\label{eq:manip_accum_phase} \phi_\text{tot}({\tilde{\vec{r}}}):= (\mu_{g}-\mu_{s})/(\gamma\hbar)\ \int_{\tilde{t}_1}^{\tilde{t}_2}d \tilde{t}\ B({\tilde{\vec{r}},\tilde{t}}) \end{equation} is the locally accumulated phase in the spin wave due to the magnetic field. Any global phase can be ignored. Thus, the necessary property of the $g$ and $s$ levels for our Zeeman manipulation to be applicable is that the two states differ in their reaction to magnetic fields, i.e. $\mu_g\neq \mu_s$ in our notation. This condition is indeed fulfilled for alkali atoms with hyperfine-split ground states and sufficiently weak magnetic fields. For schemes using atomic clock states with a suitably chosen value of $B_0$ to minimize the susceptibility of the spin-wave to stray magnetic fields (i.e. $\mu_g=\mu_s$), changing the strength of $B_0$ for the duration of the manipulation can still allow for the Zeeman manipulation scheme to be applied, while of course the spin-wave will be susceptible to stray magnetic fields for that duration. Inserting (\ref{eq:Srt2}) into (\ref{eq:manipulation_cond}) gives \begin{eqnarray} \phi_\text{tot}({\tilde{\vec{r}}}) =& \vec{\delta}\cdot \tilde{\vec{r}} L+const.\\ \nonumber \Leftrightarrow & \int_{t_1}^{t_2}d t\ B(\vec{r},t) = \frac{\hbar \vec{\delta}\cdot \vec{r} }{\mu_{g} - \mu_{s}}+const.\,. \end{eqnarray} For simplicity, we regard the time needed for manipulation using a fixed field gradient. 
The direction of the needed gradient of the magnetic field amplitude $B$ is given by (\ref{eq:manip_forw_retrieval}) and we denote the component of $\vec{r}$ parallel to $\vec{\delta}$ by $r_{\parallel\vec{\delta}}$. With a field $B({\vec{r}})=B_0+50\, \frac{\text{G}}{\text{cm}}\cdot r_{\parallel\vec{\delta}}$, duration $T$ and a coupling corresponding to an electronic spin transition \cite{steck_rubidium_2015}, \begin{eqnarray}\nonumber (\mu_{g} - \mu_{s})/\hbar \approx 2\mu_\text{Bohr}/\hbar \approx {17.6}\,\text{rad/}\mu\text{s/G}. \end{eqnarray} For rubidium, this approximate value is reached both for weak magnetic fields and in the Paschen-Back regime considered here. This gives \begin{equation}\label{eq:T} T =\dfrac{\hbar}{\mu_g-\mu_s}\cdot\dfrac{|\vec{\delta}|}{50\,\text{G/cm}} = \frac{|\vec{\delta}|}{88/\text{mm}}\,\mu\text{s}, \end{equation} which leads to necessary manipulation times of the order of $T\approx 10^{-4}$\,s to achieve arbitrary angles $\varphi$. A finite speed in turning on and off the field gradient will increase the necessary time correspondingly. The decoherence time scale from thermal motion of freely moving atoms at different temperatures and the corresponding limitations to the reachable deflection angles are discussed at the end of Sec.~\ref{subsec:mode_mismatch}. We find that deflection angles $\varphi\approx 20\,$mrad remain viable in cold atomic clouds, but arbitrary deflection angles will likely require a different system; for example, dopants in a solid body can act as a suitable atomic ensemble \cite{wang_three-channel_2009, lvovsky_optical_2009} in which diffusion does not occur. In order to achieve arbitrary deflection angles $\varphi$ on time scales of the order of $\mu$s, the rather large field gradient of $50\,$G/cm correspondingly has to be created on a similar time scale.
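A short order-of-magnitude sketch of (\ref{eq:T}); the gradient and coupling are the values quoted above, while the signal wavelength of $795\,$nm (Rb D1 line) is our own assumption:

```python
import numpy as np

# Manipulation time from eq. (T): assumed are the 50 G/cm gradient and the
# coupling 17.6 rad/us/G quoted in the text; k_s for a 795 nm signal
# wavelength (Rb D1 line) is our own assumption.
coupling = 17.6 / 1e-6        # rad / (s G)
gradient = 50.0 / 1e-2        # G / m
k_s = 2 * np.pi / 795e-9      # 1/m

def T_manip(delta_abs):
    """Seconds needed to imprint a phase gradient |delta| (in 1/m)."""
    return delta_abs / (coupling * gradient)

T_small = T_manip(0.02 * k_s)   # phi ~ 20 mrad: |delta| ~ phi * k_s
T_max = T_manip(2 * k_s)        # worst case, |delta| capped at 2 k_s
print(T_small, T_max)
```

The worst case reproduces the $T\approx 10^{-4}$\,s scale quoted above, while small deflection angles need only microseconds.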
Using Maxwell coils \cite{hidalgo-tobon_theory_2010}, a rise time of $5\,\mu$s can be achieved with $63$ turns, a coil radius of $1\,$cm and a maximum current of $1\,$A, while using a current source delivering $<40\,$V. With the focus on small deflection angles, a smaller maximum gradient of $7\,$G/cm can be chosen. Using the same current source, this allows for a much faster rise time of $0.1\,\mu$s, enabling deflection angles of up to $\sim 0.2\,$mrad at thermal velocities of room-temperature vapors. A more detailed description of the coil parameters can be found in Appendix \ref{seq:coil_properties}. In Appendix \ref{sec:adiabaticity} we confirm that adiabaticity remains fulfilled in the regarded parameter regime such that, apart from the intended phases, the state of the system is not significantly affected by the field gradient. \section{Numerical Results} \label{sec:Num_results} \begin{figure} \caption{Relevant parameters that define the incoming signal and control pulses.} \label{fig:drawing_param_abs} \end{figure} In the following we provide results from solving (\ref{eq:final_pde}) numerically and optimizing the efficiency with which pulses can be stored and re-emitted in different directions. For simplicity, we restrict the incoming signal and control pulse to Gaussian shape with widths $w_{\mathcal{E},\parallel}$ and $w_{\Omega,\parallel}$ parallel to the respective direction of propagation, and the corresponding orthogonal beam widths $w_{\mathcal{E},\perp}$ and $w_{\Omega,\perp}$. The signal pulse is chosen to propagate along the $x$-axis, reaching the cloud's center at $t=0$. The control pulse propagates at an angle $\theta$ relative to the signal pulse and its timing and position are parametrized such that at time $t_{\Omega,0}$ the position of its peak is $(x_{\Omega,0},y_{\Omega,0})$ in the $xy$-plane. $A_{\Omega}$ denotes the amplitude of $\Omega$. The parameters are drawn in Fig.~\ref{fig:drawing_param_abs}.
The results of \cite{gorshkov_photon_2007-1} and \cite{gorshkov_photon_2007-2} allow one to get estimates of the scaling of the reachable efficiency with optical depth. The achievable efficiencies are in general upper bounded by efficiencies that can be reached with the help of a cavity that restricts the electric field to a single relevant spatial mode \cite{gorshkov_photon_2007-1}, \begin{equation}\label{eq:eta_scaling_cavity} \eta^\text{max}_\text{cavity} \le \left( \eta^\text{max}_\text{abs, cavity} \right)^2 = \left( 1-\frac{1}{1+d'} \right)^2\,, \end{equation} which hence provides an important benchmark. For high optical depths the reachable efficiency in free space can be approximated by \begin{equation}\label{eq:eta_scaling} \eta^\text{max} \le \left( \eta^\text{max}_\text{abs} \right)^2 \overset{d\rightarrow\infty}{\sim} \left( 1-\frac{2.9}{d'} \right)^2. \end{equation} We choose \begin{equation} \label{eq:ref} \eta^\text{ref} =\left( 1-\frac{1}{1+d'/2.9} \right)^2 \end{equation} as reference for our results as it has an optical-depth dependence similar to (\ref{eq:eta_scaling_cavity}) and becomes an approximate upper bound for $d\rightarrow \infty$. As the chosen numerical method matches the discretised coordinates $\tilde{x}$ and $\tilde{c}\tilde{t}$ in order to achieve a simple propagation of $\mathcal{E}$ in (\ref{eq:final_pde}), the length of the incoming signal pulses considered is limited due to computational constraints. Thus, the signal pulses considered are of high bandwidth $\Delta \omega_\text{s} \gg \gamma$ with \begin{equation}\label{fig:effective_pulse_bandwidth} \frac{\Delta \omega_\text{s}}{\gamma} = \frac{\tilde{c}}{\tilde{w}_{\mathcal{E},\parallel}}. \end{equation} We expect high values of $\tilde{c}/\tilde{w}_{\mathcal{E},\parallel}$ to negatively affect the reachable efficiency, as increasingly short pulses make higher optical depths necessary in order to reach optimal efficiency \cite{gorshkov_photon_2007-2}.
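The two benchmark curves can be tabulated directly; a short sketch using $d'\approx 1.24\,d$ for the uniform spherical cloud:

```python
import numpy as np

# Benchmark curves: cavity bound (eq. eta_scaling_cavity) and free-space
# reference (eq. ref), with d' ~ 1.24 d for a uniform spherical cloud.
def eta_cavity(d):
    dp = 1.24 * d
    return (1.0 - 1.0 / (1.0 + dp)) ** 2

def eta_ref(d):
    dp = 1.24 * d
    return (1.0 - 1.0 / (1.0 + dp / 2.9)) ** 2

d = np.array([1.0, 5.0, 20.0, 100.0])
print(eta_ref(d))
print(eta_cavity(d))
```

By construction $\eta^\text{ref}$ stays below the cavity bound for every optical depth and approaches unity as $d\rightarrow\infty$.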
We use parameters corresponding to a uniform, spherical cloud of $^{87}{\text{Rb}}$ with volume $V=L^3=(10\,\text{mm})^3$ and $\tilde{c}=850$. Unless explicitly stated otherwise, parameter values for the signal pulse are $\tilde{\Delta}=0.0, \tilde{w}_{\mathcal{E},\parallel}=100, \tilde{w}_{\mathcal{E},\perp}=0.2$, while the control parameters (i.e. width $w_{\Omega,\perp}$, length $w_{\Omega,\parallel}$, amplitude $A_{\Omega}$, timing $t_{\Omega,0}$ and displacement $x_{\Omega,0}$) are optimized to give high efficiencies. This corresponds to a high frequency bandwidth of the signal pulse, $\Delta \omega_\text{s} \approx 0.3\,\text{GHz}$, which makes the parameter regime comparable to the Autler-Townes storage scheme in \cite{saglamyurek_coherent_2018}, except that control pulses with similar dimensions as the signal pulse are used. Note that although no limitations for higher signal bandwidths are visible in the PDEs, for rubidium limitations do exist: higher signal bandwidths will require changes to the $\Lambda$-system, as the hyperfine coupling is no longer stronger than the necessary coupling to the light fields, and additionally significant overlap arises between the spectra of the control and signal fields. The choice of $\Delta=0$ is made for numerical simplicity. Before regarding the full process consisting of absorption, storage and reprogramming of direction, and emission, we study the absorption processes separately, in particular with respect to the achievable absorption efficiencies as function of the angle $\theta$ between signal and control beam. \subsection{The absorption process} \label{subsec:numerical_res_absorption} For testing the achievable storage efficiencies, a simple optimisation of control pulse parameters for varying values of $d$ and $\theta$ was done. The results are given in Fig.~\ref{fig:sys3effabsall}.
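These parameter choices can be cross-checked for consistency; a short sketch computing the linewidth implied by $\tilde{c}=850$, $L=10\,$mm and the resulting signal bandwidth:

```python
# Cross-check of the quoted numbers: with L = 10 mm and c~ = 850 the implied
# linewidth is gamma = c/(c~ L), and with w~ = 100 the signal bandwidth
# follows from Delta_omega_s = gamma c~ / w~ (the bandwidth relation above).
c = 3.0e8            # m/s
L = 10e-3            # m
c_tilde = 850.0
w_tilde = 100.0

gamma = c / (c_tilde * L)              # 1/s; ~ 2 pi x 5.6 MHz
d_omega_s = gamma * c_tilde / w_tilde  # 1/s; ~ 0.3 GHz as stated
print(gamma, d_omega_s)
```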
Fig.~\ref{fig:sys3effabsall}(a) shows that efficiencies comparable to our reference curve from (\ref{eq:ref}) are already reached for $d\approx 5$, while the angle between signal and control pulse $\theta$ does not affect the reached efficiency. For $d=20$ an absorption efficiency of about 90\% should be achievable for $\tilde{w}_{\cal{E},\parallel}=100$, $\tilde{c}=850$. In Fig.~\ref{fig:sys3effabsall}(b) the reached efficiencies for different values of $\tilde{c}=c/(\gamma L)$ are shown, which corresponds to altering the size of the atomic cloud, and Fig.~\ref{fig:sys3effabsall}(c) shows the corresponding results for different signal pulse lengths $\tilde{w}_{\mathcal{E},\parallel}$ and thus bandwidths. Together, Fig.~\ref{fig:sys3effabsall}(b) and (c) confirm that high values of ${\tilde{c}}/{\tilde{w}_{\mathcal{E},\parallel}}$ make higher optical depths necessary in order to reach high efficiencies. Fig.~\ref{fig:sys3effabsall}(d) shows the very smooth dependence of the resulting storage efficiency on variations of single parameters. As reference parameters, the optimized values corresponding to Fig.~\ref{fig:sys3effabsall}(a) at the point $\theta=0,$ $d=6$ were used. \begin{figure} \caption{Maximum absorption efficiencies as function of different parameters.} \label{fig:sys3effabsall} \end{figure} \subsection{Absorption, storage, and re-emission} \label{subsec:numerical_res_full_storage} We now consider the full process of absorption, storage, and re-emission. For calculating the total efficiency $\eta$, the number of re-emitted excitations up to a certain time after arrival of the emission control pulse was used, such that an altered shape of the re-emitted pulse does not affect the calculated efficiency. Fig.~\ref{fig:optemw100c850d6-17th0} shows the achieved total efficiencies as function of $\varphi$ when using control pulses optimized for $\varphi=0$.
For an optical depth $d=17$, efficiencies varying between about 45\% and 70\% can be realized, with a maximum efficiency for backward re-emission ($\varphi=180^\circ$). Optimizing the parameters separately for each angle can still increase the efficiencies, in particular for high re-emission angles close to backward emission, as can be seen when comparing Fig.~\ref{fig:optemw100c850d6-17th0}(b) and (d). As the shape of the signal pulse orthogonal to its direction of propagation is preserved during absorption, departing from Gaussian beam profiles can improve the achievable emission and thus total efficiencies for intermediate values of $\varphi$: the pulse shape originally orthogonal to the direction of propagation then contributes to the longitudinal shape of the spin wave when the new direction is taken as reference. \begin{figure} \caption{(a,~b): Total efficiencies achieved for different re-emission angles $\varphi$ when using the parameters optimized for $\varphi=0$ and $\theta=0$. (a) uses $d=6$, while (b) uses $d=17$. (c,~d): Achieved efficiencies with parameters optimized for each $\varphi$ separately for angles close to $\varphi=0$ for (a) and close to $\varphi=\pi$ for (b).} \label{fig:optemw100c850d6-17th0} \end{figure} The amplitudes of the fields $\cal E$, $P$, $S$, and $ \Omega$ as function of space and time that result from the optimization of the overall efficiency are shown for a typical example ($d=6$ and $\varphi=0$ from Fig.~\ref{fig:sys3effabsall}(a)) in Fig.~\ref{tab:param_em_w100_d6_th0}, both for the absorption and the emission part. One sees directly how the photon is transferred to a spin-wave excitation during absorption, whereas the excited state $\ket{e}$ is only excited very slightly and only for a relatively short time. In emission, the process is inverted, and the excitation of the spin wave is re-converted into an optical excitation.
We also see that the spin wave envelope $S$ has essentially the same phase over the cross section of the sample as in the center of the sample, and the same is true for the signal pulse that is re-emitted. \begin{figure} \caption{Field amplitudes for the full storage process for $d=6$, $\varphi=0$ and $\theta=0$. (a): Amplitude over time of the variables at the center of the cloud for absorption. (c): Same for emission. (b): Resulting spin wave after absorption. (d): Outgoing field envelope after the emission process. } \label{tab:param_em_w100_d6_th0} \end{figure} \subsection{Imperfections} \label{subsec:mode_mismatch} For all previous considerations, exact two-wave resonance was assumed, namely \begin{equation} c|\vec{k}_\text{s}|-c|\vec{k}_\text{c}| = \omega_{ge} - \omega_{se}. \end{equation} Now we examine the influence of a slightly detuned signal field with a changed frequency $c|\vec{k}_\text{s}|= \omega_{ge}-\Delta+ck_\text{mis} $, where $k_\text{mis}$ is the mode mismatch. The control field frequency remains $c|\vec{k}_\text{c}|= \omega_{se}-\Delta$. A visualisation of a mismatched incoming probe pulse and the resulting spin wave is shown in Fig.~\ref{fig:modemismatch_E_In_S}. With $\gamma/c$, the wavenumber corresponding to the spontaneous emission rate of the excited state, as reference and assuming all other parameters as constant, we find a Gaussian suppression of the absorption efficiency (see Fig.~\ref{fig:modemismatch}), \begin{equation} \eta_\text{abs}({k_\text{mis}}) \approx \eta_\text{abs}({0})\ \exp(-\frac{k_\text{mis}^2}{(11.4\ \gamma/c)^2}). \end{equation} Adjusted control parameters can largely compensate this Gaussian suppression of the efficiency in the regarded range of mode mismatch (see the orange pluses in Fig.~\ref{fig:modemismatch}).
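The fitted suppression can be evaluated directly; a short sketch (the unperturbed efficiency $\eta_\text{abs}(0)=0.9$ is an illustrative value, the width $11.4\,\gamma/c$ is the fit result quoted above):

```python
import numpy as np

# Gaussian suppression of the absorption efficiency with mode mismatch;
# eta0 = 0.9 is illustrative, the width 11.4 gamma/c is the quoted fit.
def eta_abs(k_mis, eta0=0.9, width=11.4):
    """k_mis given in units of gamma/c."""
    return eta0 * np.exp(-((k_mis / width) ** 2))

print(eta_abs(0.0))     # unperturbed efficiency
print(eta_abs(11.4))    # suppressed by 1/e at k_mis = 11.4 gamma/c
```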
\begin{figure} \caption{The incoming signal field envelope (a) and resulting spin wave (b) for parameters from Fig.~\ref{fig:sys3effabsall}.} \label{fig:modemismatch_E_In_S} \end{figure} \begin{figure} \caption{The dependence of the resulting absorption efficiency on the signal field mode mismatch. The corresponding frequency shift is measured in multiples of the spontaneous emission rate $\gamma$.} \label{fig:modemismatch} \end{figure} When re-emitting the excitation stored in the spin wave, a mode mismatch may also be present, either remaining from the absorption process or introduced through a non-optimal manipulation $\vec{\delta}$. If a mode mismatch is present, the momentum and energy conservation conditions from (\ref{eq:phase_match_em},~\ref{eq:resonance_cond}) cannot be fulfilled and the efficiency diminishes as destructive interference occurs. Fig.~\ref{fig:modemismatch_em} shows the decrease of total efficiency if a mode mismatch $k_\text{mis}$ is introduced to the stored spin wave according to \begin{equation}\label{eq:mode_mismatch_S} \tilde{S}(\vec{r}) \rightarrow e^{i (k_\text{mis} \vec{e}_{\vec{k}'_\text{s}})\cdot \vec{r}} \tilde{S}(\vec{r}). \end{equation} Not changing any other parameters (and using the parameters from Fig.~\ref{tab:param_em_w100_d6_th0}), the resulting efficiency for forward retrieval shows an approximately Gaussian dependence on $k_\text{mis}$, \begin{equation} \eta({k_\text{mis}})\approx \eta({0})\ \exp(- \frac{k_\text{mis}^2}{(2.9/L)^2} ).\label{eq:Gauss205} \end{equation} As $|\vec{k}_\text{s}|\, L \approx 10^5$ with the parameters used, the phase matching condition needs to be fulfilled with relatively high precision (see Fig.~\ref{fig:manipulationovervarphi}). Similarly to the absorption process, we expect that the reduction in achievable emission efficiency can be alleviated by adjusting the control parameters. \begin{figure} \caption{The dependence of the resulting total storage efficiency on the spin-wave phase error, e.g.
stemming from non-optimal manipulation. The corresponding frequency shift is measured in multiples of the spontaneous emission rate $\gamma$.} \label{fig:modemismatch_em} \end{figure} \begin{figure} \caption{Wave number and the corresponding manipulation time (using parameters from Sec.~\ref{subsec:Manip_Zeeman}).} \label{fig:manipulationovervarphi} \end{figure} \begin{figure} \caption{The values printed inside the heat plot are the time scale ($\log_{10}$ of the decoherence time due to ballistic thermal motion).} \label{fig:t_motion_decoh} \end{figure} As the spin wave contains phases corresponding to the wavevector $\vec{\kappa}$ (see (\ref{eq:field_op_eigenstate})), atomic motion scrambling the phases \cite{zhao_long-lived_2009, saglamyurek_single-photon-level_2019} and separating the wave functions \cite{riedl_bose-einstein_2012} of the different hyperfine states during storage can be a major limiting factor for the storage time (see also \cite{heshami_quantum_2016}). After the signal pulse absorption, depending on the angle $\theta$ between signal and control pulse, the wavevector stored in the spin wave ranges from $|\vec{k}_\text{c}|-|\vec{k}_\text{s}|=\frac{\omega_{gs}}{c}\approx 1/\text{mm}$ to $|\vec{k}_\text{c}|+|\vec{k}_\text{s}|\approx 10/\mu\text{m}$, with a corresponding phase grating in the atomic state that can be scrambled by atomic motion even when individual atoms retain their phase. Fig.~\ref{fig:t_motion_decoh} shows the resulting decoherence time scales when assuming thermal motion to be ballistic. As the use of a buffer gas can restrict the ballistic motion of the atoms \cite{ledbetterSpinexchangerelaxationfreeMagnetometryCs2008}, it is possible to soften this limitation on the achievable deflection angle $\varphi$ in rubidium vapors. Moreover, this wavevector corresponds to an additional momentum in the wave function of the $\ket{s}$ states, leading to added velocities ranging from $\hbar |\vec{\kappa}|/m_\text{Rb}\approx 0.1\,{\rm nm/(ms)}$ to $ 10\,{\rm \mu m/(ms)}$ in rubidium.
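The fitted Gaussian dependence \eqref{eq:Gauss205} of the retrieval efficiency on the mode mismatch can be evaluated directly. A minimal sketch (not the authors' code; the ensemble size $L=1\,$cm is an illustrative assumption consistent with the appendix):

```python
import math

def efficiency_ratio(k_mis, L):
    """Relative retrieval efficiency eta(k_mis)/eta(0) from the fitted
    Gaussian dependence eta(k) ~ eta(0) * exp(-k^2 / (2.9/L)^2)."""
    return math.exp(-(k_mis / (2.9 / L)) ** 2)

L = 1e-2  # assumed ensemble size L = 1 cm

# mode mismatch at which the efficiency drops to one half
k_half = (2.9 / L) * math.sqrt(math.log(2))
print(f"eta(k)/eta(0) = 0.5 at k_mis ~ {k_half:.0f} m^-1")
```

With $|\vec{k}_\text{s}|$ of order $10^7\,$m$^{-1}$, this half-width of a few hundred m$^{-1}$ illustrates why the phase matching condition must be met to a relative precision of roughly $10^{-5}$.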
To maximize storage time, it might be advisable to start with a manipulation $\vec{\delta}_1=-\vec{\kappa}$ right after the absorption process, thus removing the phase grating and stored momentum mentioned above. In set-ups where almost parallel signal and control pulses must be chosen to avoid large stored momenta $\kappa$, our method of manipulation could be used to relax this constraint. Directly before the emission process, the wavevector can be reintroduced to the spin wave together with the intended total manipulation $\vec{\delta}$, thus minimising the influence of atomic motion: $\vec{\delta}_2=-\vec{\delta}_1+\vec{\delta}=\vec{\kappa}'$. To estimate the temperature regimes at which different wavevectors can be created or compensated by the proposed method, the color coding in Fig.~\ref{fig:t_motion_decoh} indicates how the time scale for manipulation compares to the decoherence from ballistic thermal motion. As manipulation time $T$, the values shown in Fig.~\ref{fig:manipulationovervarphi} are used, while accounting for the finite rise time of the coil by an additional fixed duration $2t_{\text{rise}}=10\,\mu$s. The time scale for decoherence (cf.~\cite{zhao_long-lived_2009}) $t_{\text{decoh}}$ is estimated by the time it takes an atom at thermal velocity $v_{\text{th}}$ to traverse a significant fraction of the spin wave phase grating given by $\kappa$: $t_{\text{decoh}}=1/(v_{\text{th}}\kappa)$, where $v_{\text{th}}=\sqrt{k_B T_{\text{Rb}}/m_{\text{Rb}}}$, with $k_B$ being Boltzmann's constant, $T_{\text{Rb}}$ the temperature of the rubidium ensemble and $m_{\text{Rb}}$ the atomic mass of rubidium. When using dopants in solids as the active atomic ensemble, the decoherence due to ballistic motion is eliminated, such that even anti-parallel control and signal pulses ($\theta= 180\,^\circ$) do not negatively affect the storage time (cf.~\cite{longdell_stopped_2005}).
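The estimate $t_{\text{decoh}}=1/(v_{\text{th}}\kappa)$ and the added recoil velocities quoted above can be reproduced in a few lines. The numerical values of $\kappa$ and the ensemble temperature below are illustrative assumptions consistent with the text, not values taken from the authors' simulations:

```python
import math

k_B  = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J s
m_Rb = 1.443e-25         # atomic mass of Rb-87, kg

def t_decoh(kappa, T):
    """Decoherence time 1/(v_th * kappa): time for an atom at thermal
    velocity to traverse a significant fraction of the phase grating."""
    v_th = math.sqrt(k_B * T / m_Rb)
    return 1.0 / (v_th * kappa)

kappa_min = 1.4e2    # |k_c| - |k_s| = omega_gs/c, near-parallel beams, 1/m
kappa_max = 1.6e7    # |k_c| + |k_s|, anti-parallel beams, 1/m
T_Rb = 300.0         # assumed vapor temperature, K

print(f"t_decoh (near-parallel): {t_decoh(kappa_min, T_Rb)*1e6:.0f} us")
print(f"t_decoh (anti-parallel): {t_decoh(kappa_max, T_Rb)*1e9:.2f} ns")

# added velocity hbar*kappa/m_Rb stored in the |s>-state wave function
v_add = hbar * kappa_max / m_Rb
print(f"added velocity (anti-parallel): {v_add*1e3:.1f} um/ms")
```

The huge spread between the two decoherence times (tens of microseconds versus sub-nanosecond) is what restricts the achievable deflection angles in thermal ensembles.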
The solid medium will rescale the wave vectors involved, but the new time scales for shifting the spin-wave wavevector will remain of the order of magnitude of a few to $10^2\,\mu$s, such that shifting the emission direction to arbitrary angles becomes possible. The condition for our Zeeman manipulation scheme to be applicable is a relative change in energy between the ground and storage states $g,s$ when applying an additional magnetic field. The estimated manipulation times assume a magnetic susceptibility corresponding to an electronic spin transition. Although the motional state of Bose-Einstein-condensates is outside the scope of our ansatz \eqref{eq:n}, existing experiments \cite{riedl_bose-einstein_2012, fleischhauer_electromagnetically_2005} indicate that photon storage can be described in a similar manner, and that due to the lack of thermal motion, the decoherence time of the spin wave is also less susceptible to its wavevector. This leads us to expect that arbitrary deflection angles are achievable in BECs as well. In systems where only finite deflection angles can be achieved, the covered range of possible deflection angles can be increased by combining a fixed number of possible directions for the control pulse in emission with our proposed manipulation scheme, as indicated in Fig.~\ref{fig:beamer_config}. \section{Summary} \label{sec:Summary} \begin{figure} \caption{Drawing of the phase matching condition when preparing multiple possible control pulses and a moderate manipulation $|\vec{\delta}|$.} \label{fig:beamer_config} \end{figure} Using a fully three-dimensional treatment, we have investigated the possibility of storing weak coherent or single-photon signal pulses in an atomic cloud of three-level atoms and re-emitting them in a controlled way in a new direction. The absorption of a photon in an ensemble of atoms results in a spin wave with well-defined wavevector $\vec{\kappa}$ and envelope $S(\vec{r})$.
The envelope influences emission efficiency and the shape of the re-emitted pulse, whereas the wavevector reflects the momentum and energy balance of the two-photon absorption, with one photon from the signal beam and one from the control beam. We have shown that during storage the wavevector of the spin wave can be modified by e.g.~applying a magnetic field gradient, without otherwise affecting the spin wave. This modifies the momentum balance when the control beam is switched back on for re-emitting the signal pulse, in such a way that a new emission direction can be selected even without changing the control beam. In solid-state-based quantum memories, arbitrary in-plane deflection angles can be achieved with reasonable coils and power supplies. We expect that BECs, too, allow for arbitrary deflection angles. In cold atomic clouds or hot atomic vapors, atomic motion scrambles the phases of spin waves that carry a significant wave vector; the resulting decoherence times are shortened, and correspondingly, with the same coils and power supplies, deflection angles are restricted to $\sim20\,$mrad and $\sim0.2\,$mrad, respectively. This limitation can be softened by restricting the thermal motion of the vapor with the use of a buffer gas. This still allows for fast and efficient routing of photons into different beams or optical fibers. Our numerical simulations show that the efficiency of the whole process, as measured by the ratio of the emitted energy to the energy in the incoming signal pulse, is only moderately reduced for a beam emitted in an arbitrary direction compared to the beam re-emitted in the direction of the incoming pulse, even without adjusting any other parameters. Here, the envelope of the spin wave with regard to the new emission direction is the limiting property for the efficiency.
Alternatively, one can also change the direction of the control beam in order to send out the stored excitation in another direction, or both methods can be combined. The phases of the spin wave are defined in Hilbert space, i.e.~they control the coherent superposition of many-particle states with excitations localized at different positions in the atomic cloud, whose phases they define relative to the corresponding atomic ground states. The effect that we described here is hence another remarkable example of the phenomenon, of which quantum optics is full (see \cite{fabre_modes_2019} for a recent review), that phases in Hilbert space have an impact on the interference and propagation of photons in real configuration space. Using the same control beam for emission as for absorption has the charm of needing no movable elements such as micro-mirrors for deflecting the signal beam, and allows for fast all-electronic control of the emission direction (on a time scale of a few to $\sim100$ microseconds with reasonable magnetic field gradients, depending on the deflection angle), opening the path for numerous applications of single-photon routing, such as photon multiplexing, quantum communication to several parties, etc. Due to the linearity of the dynamics, we expect that quantum superpositions of photons in different modes (e.g.~in different time bins, as commonly used in quantum memories) will be propagated and re-directed with comparable efficiency as the pulses in a single mode considered here, but more work will be required to prove this. The possibility of purely shifting the momentum stored in the spin wave, also along the emission direction, promises the possibility of assisting existing ac-Stark-effect-based spin-wave manipulation methods by allowing spin-wave multiplexing without any intrinsic loss introduced by non-linear phase factors. The scheme studied here focuses on deflection in the $xy$-plane.
Slight deviations of the wavevector of the emitted light from the $xy$-plane should also be achievable, but deflection into arbitrary directions in the $4\pi$ spatial angle would require a rotation of the polarization vector as well. Alternatively, one might envisage a two-step deflection, with the one in the $xy$-plane followed by another one in a plane perpendicular to it containing the wave-vector after the first deflection. In future works, it might be of interest to explore the proposed manipulation scheme in situations with further effects such as inhomogeneous broadening \cite{gorshkov_photon_2007-3}, exact atomic positions \cite{asenjo-garcia_exponential_2017,manzoni_optimization_2018}, and atomic interactions \cite{petrosyanCollectiveEmissionPhotons2021a}. \appendix \section{Coil properties} \label{seq:coil_properties} \paragraph*{Goal parameters.} As example parameters for our atomic cloud we assume a spherical volume with $V=L^3=1\,$cm$^3$, implying a radius of $r\approx 0.6\,$cm. For the magnetic gradient coils, we assumed a magnetic gradient of $50\,$G/cm, extending over the whole atomic cloud, that can be ramped up or down on the order of $5\,\mu$s. \paragraph*{Corresponding coil parameters.} For a simple estimation of the necessary experimental current source and coil parameters, we assume the gradient coil to be a Maxwell coil pair with coil radius $a$. The rise time $\tau$ of the gradient coil is calculated as \cite{hidalgo-tobon_theory_2010} \begin{align} \tau = \dfrac{L_{\text{c}}I}{V_{\text{c}}-RI} \overset{V_{\text{c}}\gg RI}{\approx} \dfrac{L_{\text{c}}I}{V_{\text{c}}}\overset{!}{=} 5\,\mu\text{s},\label{eq:time} \end{align} where $L_{\text{c}}$ is the inductance of the gradient coil, $R$ is its Ohmic resistance and $I$ is the equilibrium current flowing through the coil at applied voltage $V_{\text{c}}$.
The gradient created is given by \begin{align} G = &\eta I \overset{!}{=} 50\,\text{G/cm}\label{eq:grad}\\ &\eta \approx 0.64\ \mu_0 \dfrac{N_{\text{c}}}{a^2},\nonumber \end{align} where $\eta$ is the gradient coil efficiency, $\mu_0$ is the vacuum magnetic permeability, and $N_{\text{c}}$ is the winding number of each coil. The inductance of the Maxwell coils is approximated as \begin{align} L_{\text{c}}\lessapprox 2 N_{\text{c}}^2 \pi a^2 \mu_0/(l+a/1.1) \approx \pi N_{\text{c}}^2 a \mu_0,\label{eq:inductance} \end{align} where the individual coil length $l$ was assumed to be $l=(1+0.1/1.1)a$. Using a coil radius of $a=1\,$cm, we can solve \eqref{eq:grad} for $N_{\text{c}}I$, giving \begin{align} 50\,\text{G/cm}&\overset{!}{=}0.64 \mu_0 N_{\text{c}}I/a^2 \nonumber\\\Rightarrow N_{\text{c}}I &\overset{!}{=} a^2/(0.64\mu_0)\ G\approx 62.2\,\text{A}. \end{align} Inserting \eqref{eq:inductance} into \eqref{eq:time}, we get \begin{align} 5\,\mu\text{s}&\overset{!}{=} \dfrac{L_{\text{c}}I}{V_{\text{c}}} = \pi (N_{\text{c}}I) N_{\text{c}} a \mu_0 /V_{\text{c}} = \dfrac{\pi}{0.64} \dfrac{a^3 N_{\text{c}}G}{V_{\text{c}}}\nonumber \\\Rightarrow V_{\text{c}} &\overset{!}{=} \dfrac{G}{\tau} \dfrac{\pi a^3}{0.64} N_{\text{c}} \approx N_{\text{c}} \times 0.49\,\text{V}. \end{align} Choosing $N_{\text{c}}=63$, this gives \begin{align*} V_{\text{c}}\approx 31\,\text{V} \quad \text{and} \quad I\approx 1\,\text{A} \end{align*} as solutions. The actual voltage needs to be increased by $RI$ to compensate for the coil's resistance.
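The chain of estimates \eqref{eq:time}--\eqref{eq:inductance} can be reproduced step by step; a short sketch (variable names are ours):

```python
import math

mu0 = 4e-7 * math.pi   # vacuum magnetic permeability, T m/A

# goal parameters from the appendix
G   = 0.5    # gradient: 50 G/cm = 0.5 T/m
tau = 5e-6   # rise time, s
a   = 1e-2   # coil radius, m

# gradient condition G = 0.64 * mu0 * (N I) / a^2  ->  ampere-turns
NI = G * a**2 / (0.64 * mu0)

# rise-time condition tau = L_c I / V_c with L_c ~ pi N^2 a mu0
# ->  V_c = (G / tau) * (pi a^3 / 0.64) * N
V_per_turn = (G / tau) * math.pi * a**3 / 0.64

N = 63                 # chosen winding number per coil
V_c = V_per_turn * N
I = NI / N
print(f"N I ~ {NI:.1f} A-turns, V_c ~ {V_c:.0f} V, I ~ {I:.2f} A")
```

Running this reproduces the quoted $N_{\text{c}}I\approx 62.2\,$A, $V_{\text{c}}\approx 31\,$V and $I\approx 1\,$A.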
\section{Time scale for adiabaticity} \label{sec:adiabaticity} For Rb-87, the Hamiltonian of the ground state spin manifold under an external magnetic field $\vec{B}$ is given by \begin{align} \hat{H}=A_{\text{HFS}}/\hbar^2 \hat{\vec{I}}\cdot\hat{\vec{J}} + \dfrac{\mu_{\text{Bohr}}}{\hbar} (g_S \hat{\vec{S}} + g_L \hat{\vec{L}} + g_I \hat{\vec{I}})\cdot \vec{B}, \end{align} where $A_{\text{HFS}}\approx h\ 3.42\,$GHz is the hyperfine coupling, $\mu_{\text{Bohr}}$ is the Bohr magneton, $g_S\approx 2$ is the electron g-factor, $g_L$ is the orbital g-factor, $g_I\approx -0.001$ is the nuclear g-factor and $\vec{B}=(B_0+B_1(t))\vec{e}_z$ is the applied magnetic field. In our case, we have $L=0$, $J=S=1/2$ and $I=3/2$. In the $\ket{m_I}_z\otimes \ket{m_S}_z$-basis, the magnetic coupling is diagonal and the hyperfine coupling takes the form \begin{align} A_{\text{HFS}} \hat{\vec{I}}\cdot\hat{\vec{S}}/\hbar^2 = A_{\text{HFS}} \pmqty{3/4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -3/4 & \sqrt{3}/2 & 0 & 0 & 0 & 0 & 0\\ 0 & \sqrt{3}/2 & 1/4 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1/4 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & -1/4 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1/4 & \sqrt{3}/2 & 0 \\ 0 & 0 & 0 & 0 & 0 & \sqrt{3}/2 & -3/4 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3/4}, \end{align} such that any state mixing due to fluctuating magnetic fields only affects the two-dimensional state subspaces described by \begin{align} \ket{m_I=+3/2}\otimes \ket{m_S=-1/2} &\leftrightarrow \ket{m_I=+1/2}\otimes \ket{m_S=+1/2},\label{eq:H1} \\ \ket{m_I=+1/2}\otimes \ket{m_S=-1/2} &\leftrightarrow \ket{m_I=-1/2}\otimes \ket{m_S=+1/2},\label{eq:H2} \\ \ket{m_I=-1/2}\otimes \ket{m_S=-1/2} &\leftrightarrow \ket{m_I=-3/2}\otimes \ket{m_S=+1/2}. \label{eq:H3} \end{align} Thus the question of adiabaticity for ramping up the $B_1$-field can be treated separately for the two-level subsystems.
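The block structure of the quoted matrix can be checked by building $\hat{\vec{I}}\cdot\hat{\vec{S}}$ from standard angular-momentum ladder operators; a small numpy sketch (our own construction, assuming the usual conventions, not the authors' code):

```python
import numpy as np

def spin_ops(j):
    """Operators (Jz, J+, J-) in units of hbar, basis m = j, j-1, ..., -j."""
    m = np.arange(j, -j - 1, -1)
    Jz = np.diag(m)
    # <m+1| J+ |m> = sqrt(j(j+1) - m(m+1))
    Jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    return Jz, Jp, Jp.T

Iz, Ip, Im = spin_ops(3 / 2)
Sz, Sp, Sm = spin_ops(1 / 2)

# I.S = Iz Sz + (I+ S- + I- S+)/2 on the |m_I> (x) |m_S> product basis
IdotS = np.kron(Iz, Sz) + 0.5 * (np.kron(Ip, Sm) + np.kron(Im, Sp))

# eigenvalues (F(F+1) - I(I+1) - S(S+1))/2: 3/4 for F=2, -5/4 for F=1
evals = np.sort(np.linalg.eigvalsh(IdotS))
print(np.round(evals, 4))
```

The resulting matrix reproduces the quoted $2\times2$ blocks (e.g. the $-3/4$, $\sqrt{3}/2$, $1/4$ block coupling $\ket{3/2}\otimes\ket{-1/2}$ and $\ket{1/2}\otimes\ket{+1/2}$), and its spectrum consists of the two hyperfine eigenvalues only.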
The corresponding two-level Hamiltonians (up to a two-level-global energy shift for zero-averaged eigenvalues) become \begin{align} \eqref{eq:H1} &\leftrightarrow \hat{H} = A_{\text{HFS}} \pmqty{-1/2 & \sqrt{3}/2 \\ \sqrt{3}/2 & 1/2} + \mu_{\text{Bohr}} \pmqty{\frac{g_S-g_I}{2} & 0 \\ 0 & \frac{-g_S+g_I}{2}} (B_0+B_1(t)) \\ \eqref{eq:H2} &\leftrightarrow \hat{H} = A_{\text{HFS}} \pmqty{0 & 1 \\ 1 & 0} + \mu_{\text{Bohr}} \pmqty{\frac{-g_S+g_I}{2} & 0 \\ 0 & \frac{g_S-g_I}{2}} (B_0+B_1(t)) \\ \eqref{eq:H3} &\leftrightarrow \hat{H} = A_{\text{HFS}} \pmqty{1/2 & \sqrt{3}/2 \\ \sqrt{3}/2 & -1/2} + \mu_{\text{Bohr}} \pmqty{\frac{-g_S+g_I}{2} & 0 \\ 0 & \frac{g_S-g_I}{2}} (B_0+B_1(t)), \end{align} which, assuming $B_1(t)=B_1\ t/\tau$ to be linear in time, all have the form \begin{align} \hat{H} = \hbar v \hat{\sigma}_x + \hbar(\epsilon + b t) \hat{\sigma}_z,\label{H_general} \end{align} with the corresponding values \begin{align} \hbar v=\left\{ \mqty{\sqrt{3}/2 A_{\text{HFS}}\\ A_{\text{HFS}}\\ \sqrt{3}/2 A_{\text{HFS}}} \right. , \ \hbar \epsilon =\left\{ \mqty{-A_{\text{HFS}}/2+\mu_{\text{Bohr}}\frac{g_S-g_I}{2}B_0 \\ \mu_{\text{Bohr}}\frac{-g_S+g_I}{2}B_0 \\ A_{\text{HFS}}/2+\mu_{\text{Bohr}}\frac{-g_S+g_I}{2}B_0 }\right. ,\ \hbar b =\left\{ \mqty{\mu_{\text{Bohr}}\frac{g_S-g_I}{2}B_1/\tau & \text{for }\eqref{eq:H1},\\ \mu_{\text{Bohr}}\frac{-g_S+g_I}{2}B_1/\tau & \text{for }\eqref{eq:H2},\\ \mu_{\text{Bohr}}\frac{-g_S+g_I}{2}B_1/\tau & \text{for }\eqref{eq:H3}.}\right. \end{align} The actual situation is described by $t\in[0,\tau]$, but extending this to $\pm\infty$ allows for an analytical solution of the transition probability $p$ using the Landau-Zener formula \cite{brundobler_s-matrix_1993} \begin{align} p = e^{-\pi z}, \qquad z = \dfrac{|v|^2}{|2b|}, \end{align} where $z$ is the Landau-Zener parameter.
With $B_1\le 500\,$G throughout our atomic ensemble and $\tau\gtrsim 1\,\mu$s, we have \begin{align}\label{eq:landau-zener-parameter} z \ge (A_{\text{HFS}})^2/|2\hbar \mu_{\text{Bohr}}\dfrac{g_S-g_I}{2}B_1/\tau | \approx 50 \dfrac{\tau}{1\,\text{ns}} \overset{\tau\gtrsim 1\,\mu\text{s}}{\gg} 1, \end{align} such that state transitions due to ramping up the $B_1$-field can be neglected. \paragraph*{Finite times.} As the assumption of $t\in (-\infty,\infty)$ is not fulfilled in experiment, we perform a numerical integration of the time evolution in order to make sure that $\tau\gtrsim1\,\mu$s is a safe regime with regard to negligible disturbance of the state. We choose the two-level Hamiltonians as introduced above in \eqref{H_general}. As \eqref{eq:H1} and \eqref{eq:H3} are equivalent except for exchanging the two basis states, only the parameters for \eqref{eq:H1} and \eqref{eq:H2} are considered separately. The numerical integration evolves the state from $t=0$ to $t=\tau$, using the instantaneous eigenbasis, and $\ket{+}:=(1,0)^T$ as initial state. A third calculation is made with parameters from \eqref{eq:H2}, but with $B_0=0$, to test whether the background field that was introduced for approximately linearising the response to $B_1$ is also necessary to achieve adiabaticity. Figures \ref{fig:num_adiabaticity} and \ref{fig:num_adiabaticity2} show the results, which clearly indicate that adiabaticity remains a good approximation for $\tau$ in the $\mu$s regime even in the case of $t\in[0, \tau]$. Comparing these results with those of Appendix \ref{seq:coil_properties}, we find that the speed with which the magnetic field can be altered is in practice limited by technical constraints, while the fundamental limitations from the adiabaticity condition become relevant only at time scales comparable to the inverse hyperfine splitting, as indicated by the Landau-Zener parameter calculated in \eqref{eq:landau-zener-parameter}.
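The estimate \eqref{eq:landau-zener-parameter} can be reproduced numerically; in the sketch below the coupling is taken as $\hbar v = A_{\text{HFS}}$ (the worst case used in the bound, dropping the $\sqrt{3}/2$ factor):

```python
import math

h    = 6.62607015e-34        # Planck constant, J s
hbar = h / (2 * math.pi)
mu_B = 9.2740100783e-24      # Bohr magneton, J/T

A_HFS = h * 3.42e9           # hyperfine coupling of the Rb-87 ground state, J
g_S, g_I = 2.0, -0.001

def landau_zener_z(B1, tau):
    """Landau-Zener parameter z = |v|^2 / |2 b| for a linear ramp
    B1(t) = B1 * t / tau, with hbar*v = A_HFS as in the lower bound."""
    v = A_HFS / hbar
    b = mu_B * (g_S - g_I) / 2 * B1 / (hbar * tau)
    return v ** 2 / abs(2 * b)

# z ~ 50 * (tau / 1 ns), so tau >~ 1 us gives z >> 1 (adiabatic ramp)
print(f"z(B1 = 500 G, tau = 1 ns): {landau_zener_z(0.05, 1e-9):.0f}")
print(f"z(B1 = 500 G, tau = 1 us): {landau_zener_z(0.05, 1e-6):.2e}")
```

This reproduces the scaling $z\approx 50\,\tau/(1\,\text{ns})$ quoted above.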
\begin{figure} \caption{Numerical test of the adiabaticity of the $B_1$ ramp.} \label{fig:num_adiabaticity} \end{figure} \begin{figure} \caption{Numerical test of the adiabaticity of the $B_1$ ramp.} \label{fig:num_adiabaticity2} \end{figure} \end{document}
\begin{document} \author{Alexander E Patkowski} \title{On asymptotic expansions for basic hypergeometric functions } \maketitle \begin{abstract} This paper establishes new results concerning asymptotic expansions of $q$-series related to partial theta functions. We first establish a new method to obtain asymptotic expansions using a result of Ono and Lovejoy, and then build on these observations to obtain asymptotic expansions for related multi-dimensional basic hypergeometric series. \end{abstract} \keywords{\it Keywords: \rm $L$-function; $q$-series; asymptotics} \subjclass{ \it 2010 Mathematics Subject Classification: Primary 33D90; Secondary 11M41} \section{Introduction and Main Results} \par In keeping with usual notation [6], put $(y)_n=(y;q)_{n}:=\prod_{0\le k\le n-1}(1-yq^{k}),$ and define $(y)_{\infty}=(y;q)_{\infty}:=\lim_{n\rightarrow\infty}(y;q)_{n}.$ Recall (see [11, pg.182]) the parabolic cylinder function $D_s(x),$ defined as the solution to the differential equation of Weber [7, pg.1031, eq.(9.255), \#1] $$\frac{d^2 f}{d x^2}+\left(s+\frac{1}{2}-\frac{x^2}{4}\right)f=0.$$ This function also has a relationship with the confluent hypergeometric function ${}_1F_1(a;b; x)$ [11, pg.183], [7, pg.1028] $$D_s(x)=e^{-x^2/4}\sqrt{\pi}\left(\frac{2^{s/2}}{\Gamma(\frac{1}{2}-\frac{s}{2})}{}_1F_1(-\frac{s}{2};\frac{1}{2};\frac{x^2}{2})-\frac{x2^{s/2+1/2}}{\Gamma(-\frac{s}{2})}{}_1F_1(\frac{1}{2}-\frac{s}{2};\frac{3}{2};\frac{x^2}{2})\right).$$ In the last two decades considerable attention has been given to asymptotic expansions of basic hypergeometric series when $q=e^{-t}$ as $t\rightarrow0^{+}.$ Special examples related to values of $L$-functions at negative integers have been offered in [3, 8, 9, 12, 13, 14].
Recall that an $L$-function is defined as the series $L(s)=\sum_{n\ge1}a_nn^{-s},$ for a suitable arithmetic function $a_n:\mathbb{N}\rightarrow\mathbb{C}.$ In the simple case $a_n=1,$ we have the Riemann zeta function $L(s)=\zeta(s),$ for $\Re(s)>1,$ and if $a_n=(-1)^n,$ we obtain $L(s)=(1-2^{1-s})\zeta(s).$ \par As it turns out, there are general expansions for basic hypergeometric series with $q=e^{-t}$ which include special functions as terms along with values of $L(s).$ To this end, we offer the first instance of a more general expansion in the literature for certain multi-dimensional basic hypergeometric series. To illustrate our approach we first recall the result due to Ono and Lovejoy [9, Theorem 1], which says that for $k\ge2,$ integers $0<m<l,$ if \begin{equation}F_k(z,q):=\sum_{r_{k-1}\ge r_{k-2}\ge\dots\ge r_1\ge0}\frac{(q)_{r_{k-1}}(z)_{r_{k-1}}z^{r_{k-1}+2r_{k-2}+\dots+2r_1}q^{r_1^2+r_1+r_2^2+r_2+\dots+r_{k-1}^2+r_{k-1}}}{(q)_{r_{k-1}-r_{k-2}}(q)_{r_{k-2}-r_{k-3}}\dots(q)_{r_{2}-r_{1}}(q)_{r_1}(-z)_{r_1+1}},\end{equation} then as $t\rightarrow0^{+},$ $$e^{-(k-1)m^2t}F_k(e^{-lmt},e^{-l^2t})=\sum_{n\ge0}L_{l,m}(-2n)\frac{((1-k)t)^n}{n!}, $$ where $L_{l, m}(s)=(2l)^{-s}\left(\zeta(s,\frac{m}{2l})-\zeta(s,\frac{l+m}{2l})\right).$ Here the Hurwitz zeta function is $\zeta(s,x)=\sum_{n\ge0}(n+x)^{-s},$ a one-parameter refinement of $\zeta(s).$ Their paper uses a clever specialization of Andrews' refinement of a transformation of Watson to obtain (1.1).
By [9, Theorem 2.1] \begin{equation} F_k(z,q)=\sum_{n\ge0}(-1)^nz^{(2k-2)n}q^{(k-1)n^2}.\end{equation} Putting $q=e^{-wt^2}$ and $z=e^{-vt},$ and taking the Mellin transform (see (2.10) or [11]) of (1.2), Lemma 2.4 of the next section tells us that for $\Re(s)>0,$ $\Re(w)>0,$ \begin{equation}\int_{0}^{\infty}t^{s-1} \left(F_k(e^{-vt},e^{-wt^2})-1\right)dt\end{equation} $$= (2w(k-1))^{-s/2}(1-2^{1-s})\zeta(s)\Gamma(s)e^{v^2(k-1)/(2w)}D_{-s}\left(\frac{v2(k-1)}{\sqrt{2(k-1)w}}\right).$$ Now by Mellin inversion and Cauchy's residue theorem, it can be shown that \begin{equation}F_k(e^{-vt},e^{-wt^2})-1\end{equation} $$\sim e^{v^2(k-1)/(2w)}\sum_{n\ge0}\frac{(2w(k-1))^{n/2}}{n!}(-t)^n(1-2^{1+n})\zeta(-n)D_{n}\left(\frac{v2(k-1)}{\sqrt{2(k-1)w}}\right),$$ as $t\rightarrow0^{+}.$ As a result of our new observation, we are able to produce general expansions of a type similar to (1.4) by appealing to Bailey chains [1]. We believe our observation is significant in its implications for further asymptotic expansions for basic hypergeometric series. \begin{theorem}\label{thm:theorem1} Define for $k\ge1$ $$A_{n,k}(z,q):=\sum_{n\ge r_1\ge r_2\dots\ge r_k\ge0}\frac{q^{r_1+r_2+\dots+r_k}q^{r_1^2+r_2^2+\dots+r_k^2}}{(q)_{n-r_1}(q)_{r_1-r_2}\dots(q)_{r_{k-1}-r_{k}}}\frac{(z)_{r_k+1}(q/z)_{r_k}}{(q)_{2r_k+1}}.$$ For any $v\in\mathbb{C},$ and $\Re(w)>0,$ $$-1+\sum_{n\ge0}(e^{-wt^2};e^{-wt^2})_n(-1)^ne^{-wt^2n(n+1)/2}A_{n,k}(e^{-vt-wt^2(k+1)},e^{-wt^2})$$ $$\sim t^{-1}\sqrt{\frac{\pi}{4w(k+1)}}e^{v^2/(4(k+1)w)}\bigg(\erf\left(\frac{v}{2\sqrt{(k+1)w}}\right)-\erf\left(-\frac{v}{2\sqrt{(k+1)w}}\right)\bigg)$$ $$+e^{v^2/(8(k+1)w)}\sum_{n\ge0}\frac{(2w(k+1))^{n/2}}{n!}(-t)^n\zeta(-n)D_{n}\left(-\frac{v}{\sqrt{2(k+1)w}}\right)$$ $$- e^{v^2/(8(k+1)w)}\sum_{n\ge0}\frac{(2w(k+1))^{n/2}}{n!}(-t)^n\zeta(-n)D_{n}\left(\frac{v}{\sqrt{2(k+1)w}}\right)$$ as $t\rightarrow 0^{+}.$ \end{theorem} We mention that the function in Theorem 1.1 should be compared to the $G(z,q)$ function contained in [9].
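The function $A_{n,k}$ above is built from a classical Bailey pair relative to $a=q$ (recalled as (2.8)--(2.9) in Section 2). As a quick numerical sanity check of that pair, one can verify the defining relation $\beta_n=\sum_{0\le j\le n}\alpha_j/((q;q)_{n-j}(q^2;q)_{n+j})$ at a sample point; the values $q=0.3$, $z=0.7$ below are an arbitrary choice:

```python
q, z = 0.3, 0.7  # arbitrary sample point with |q| < 1

def poch(a, n):
    """q-Pochhammer symbol (a; q)_n."""
    out = 1.0
    for k in range(n):
        out *= 1.0 - a * q ** k
    return out

def alpha(n):  # alpha_n(q, q) of (2.8)
    return (-z) ** (-n) * q ** (n * (n + 1) // 2) * (1 - z ** (2 * n + 1)) / (1 - q)

def beta(n):   # beta_n(q, q) of (2.9)
    return poch(z, n + 1) * poch(q / z, n) / poch(q, 2 * n + 1)

# defining relation (2.1) with a = q, so (aq; q)_{n+j} = (q^2; q)_{n+j}
max_err = max(
    abs(beta(n) - sum(alpha(j) / (poch(q, n - j) * poch(q * q, n + j))
                      for j in range(n + 1)))
    for n in range(8)
)
print(f"max deviation over n = 0..7: {max_err:.2e}")
```

The deviation is at machine-precision level, as expected for an exact identity.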
\begin{theorem}\label{thm:theorem2} Define for $k\ge1$ $$B_{n,k}(z,q):=\sum_{n\ge r_1\ge r_2\dots\ge r_k\ge0}P_{n,r_1,r_2,\dots,r_k}(q)q^{n-r_1+2(r_1-r_2)+\dots+2^{k-1}(r_{k-1}-r_k)},$$ where $$P_{n,r_1,r_2,\dots,r_k}(q):=\frac{(-q^2;q)_{2r_1}(-q^{2^2};q^{2})_{2r_2}\dots(-q^{2^{k}};q^{2^{k-1}})_{2r_k}}{(q^2;q^2)_{n-r_1}(q^{2^2};q^{2^2})_{r_1-r_2}\dots(q^{2^{k}};q^{2^{k}})_{r_{k-1}-r_{k}}}\frac{(z;q^{2^k})_{r_k+1}(q^{2^k}/z;q^{2^k})_{r_k}}{(q^{2^k};q^{2^k})_{2r_k+1}}.$$ For any $v\in\mathbb{C},$ and $\Re(w)>0,$ $$-1+\sum_{n\ge0}(e^{-wt^2};e^{-wt^2})_n(-1)^ne^{-wt^2n(n+1)/2}B_{n,k}(e^{-vt-wt^2(2^{k}+1)},e^{-wt^2})$$ $$\sim t^{-1}\sqrt{\frac{\pi}{4w(2^k+1)}}e^{v^2/(4(2^k+1)w)}\bigg(\erf\left(\frac{v}{2\sqrt{(2^k+1)w}}\right)-\erf\left(-\frac{v}{2\sqrt{(2^k+1)w}}\right)\bigg)$$ $$+e^{v^2/(4(2^{k}+1)w)}\sum_{n\ge0}\frac{(w(2^{k}+1))^{n/2}}{n!}(-t)^n\zeta(-n)D_{n}(-\frac{v}{\sqrt{(2^{k}+1)w}})$$ $$- e^{v^2/(4(2^{k}+1)w)}\sum_{n\ge0}\frac{(w(2^{k}+1))^{n/2}}{n!}(-t)^n\zeta(-n)D_{n}(\frac{v}{\sqrt{(2^{k}+1)w}}),$$ as $t\rightarrow0^{+}.$ \end{theorem} \begin{theorem}\label{thm:theorem3} Define for $k\ge1$ $$C_{n,k}(z,q)=\sum_{n\ge r_1\ge r_2\dots\ge r_k\ge0}\frac{q^{r_1^2/2+r_1/2+r_2^2/2+r_2/2+\dots+r_k^2/2+r_k/2}}{(q)_{n-r_1}(q)_{r_1-r_2}\dots(q)_{r_{k-1}-r_{k}}}\frac{(z)_{r_k+1}(q/z)_{r_k}}{(q)_{{r_k}}(q;q^2)_{r_k+1}}.$$ For any $v\in\mathbb{C},$ and $\Re(w)>0,$ $$-1+\sum_{n\ge0}\frac{(e^{-wt^2};e^{-wt^2})_n}{(-e^{-wt^2};e^{-wt^2})_n}(-1)^ne^{-wt^2n(n+1)/2}C_{n,k}(e^{-vt-wt^2(k+1)/2},e^{-wt^2})$$ $$\sim t^{-1}\sqrt{\frac{\pi}{2w(k+2)}}e^{v^2/(2(k+2)w)}\bigg(\erf\left(\frac{v}{2\sqrt{(k+2)w}}\right)-\erf\left(-\frac{v}{2\sqrt{(k+2)w}}\right)\bigg)$$ $$+e^{v^2/(4(k+2)w)}\sum_{n\ge0}\frac{(w(k+2))^{n/2}}{n!}(-t)^n\zeta(-n)D_{n}(-\frac{v}{\sqrt{(k+2)w}})$$ $$-e^{v^2/(4(k+2)w)} \sum_{n\ge0}\frac{(w(k+2))^{n/2}}{n!}(-t)^n\zeta(-n)D_{n}(\frac{v}{\sqrt{(k+2)w}}),$$ as $t\rightarrow0^{+}.$ \end{theorem} We note that it may be desirable to utilize the Hermite polynomial representation of the parabolic cylinder
function [7, pg.1030, eq.(9.253)] $D_n(x)=2^{-n/2}e^{-x^2/4}H_n(\frac{x}{\sqrt{2}}),$ as an alternative. Due to the parity relationship $D_n(-x)=(-1)^nD_n(x),$ there is some collapsing that takes place for Theorems 1.1 through 1.3. \par Our last result is an apparently new asymptotic expansion for a partial theta function involving a nonprincipal Dirichlet character $\chi.$ \begin{theorem}\label{thm:theorem4} Let $\chi(n)$ be a real, primitive, nonprincipal Dirichlet character associated with $L(s,\chi).$ Let $v\in\mathbb{C},$ and $\Re(w)>0.$ Then if $\chi$ is an even character, $$\sum_{n\ge1}\chi(n)e^{-w n^2t^2-v nt}\sim e^{v^2/(8w)}\sum_{n\ge0}\frac{(2w)^{(2n+1)/2}(-t)^{2n+1}}{(2n+1)!}L(-2n-1,\chi)D_{2n+1}(\frac{v}{\sqrt{2w}}),$$ as $t\rightarrow0^{+},$ and if $\chi$ is an odd character, $$\sum_{n\ge1}\chi(n)e^{-w n^2t^2-v nt}\sim e^{v^2/(8w)}\sum_{n\ge0}\frac{(2w)^{n}(-t)^{2n}}{(2n)!}L(-2n,\chi)D_{2n}(\frac{v}{\sqrt{2w}}),$$ as $t\rightarrow0^{+}.$ \end{theorem} \section{Proofs of results} We recall that a pair $(\alpha_n(a, q),\beta_n(a,q))$ is referred to as a Bailey pair [4] with respect to $(a,q)$ if \begin{equation}\beta_n(a,q)=\sum_{0\le j\le n}\frac{\alpha_j(a,q)}{(q;q)_{n-j}(aq;q)_{n+j}}.\end{equation} Iterating [5, (S1)] $k$ times gives us the following lemma. \begin{lemma} For $k\ge1,$ \begin{equation} \beta'_n(a,q)=\sum_{n\ge r_1\ge r_2\dots\ge r_k\ge0}\frac{a^{r_1+r_2+\dots+r_k}q^{r_1^2+r_2^2+\dots+r_k^2}}{(q)_{n-r_1}(q)_{r_1-r_2}\dots(q)_{r_{k-1}-r_{k}}}\beta_{r_k}(a,q)\end{equation} \begin{equation} \alpha'_n(a,q)=a^{kn}q^{kn^2}\alpha_n(a,q).\end{equation} \end{lemma} Iterating [5, (D1)] $k$ times gives us the following different lemma.
\begin{lemma} For $k\ge1,$ \begin{equation} \beta'_n(a,q)=\sum_{n\ge r_1\ge r_2\dots\ge r_k\ge0}p_{n,r_1,r_2,\dots,r_k}(a,q)q^{n-r_1+2(r_1-r_2)+\dots+2^{k-1}(r_{k-1}-r_k)}\beta_{r_k}(a^{2^{k}},q^{2^{k}}),\end{equation} where $$p_{n,r_1,r_2,\dots,r_k}(a,q):=\frac{(-aq;q)_{2r_1}(-a^{2}q^{2};q^{2})_{2r_2}\dots(-a^{2^{k}}q^{2^{k-1}};q^{2^{k-1}})_{2r_k}}{(q^2;q^2)_{n-r_1}(q^{2^2};q^{2^2})_{r_1-r_2}\dots(q^{2^{k}};q^{2^{k}})_{r_{k-1}-r_{k}}},$$ and \begin{equation} \alpha'_n(a,q)=\alpha_n(a^{2^{k}},q^{2^{k}}).\end{equation} \end{lemma} Lastly, we iterate [5, (S2)] $k$ times. \begin{lemma} For $k\ge1,$ \begin{equation} \beta'_n(a,q)=\frac{1}{(-\sqrt{aq})_{n}}\sum_{n\ge r_1\ge r_2\dots\ge r_k\ge0}\frac{(-\sqrt{aq})_{r_k}a^{r_1/2+r_2/2+\dots+r_k/2}q^{r_1^2/2+r_2^2/2+\dots+r_k^2/2}}{(q)_{n-r_1}(q)_{r_1-r_2}\dots(q)_{r_{k-1}-r_{k}}}\beta_{r_k}(a,q)\end{equation} \begin{equation} \alpha'_n(a,q)=a^{kn/2}q^{kn^2/2}\alpha_n(a,q).\end{equation} \end{lemma} Now it is well-known [1, Lemma 6] that $(\alpha_n(q,q),\beta_n(q,q))$ form a Bailey pair where \begin{equation}\alpha_n(q,q)=(-z)^{-n}\frac{q^{\binom{n+1}{2}}(1-z^{2n+1})}{1-q},\end{equation} \begin{equation}\beta_n(q,q)=\frac{(z)_{n+1}(q/z)_{n}}{(q)_{2n+1}}.\end{equation} The Mellin transform is defined as [11] (assuming $g$ satisfies certain growth conditions) \begin{equation}\mathfrak{M}(g)(s):=\int_{0}^{\infty}t^{s-1}g(t)dt.\end{equation} The main integral formula we will utilize is given in [7, pg.365, eq.(3.462), \#1].
\begin{lemma} For $\Re(s)>0,$ $\Re(w)>0,$ $$\int_{0}^{\infty}t^{s-1}e^{-w t^2-v t}dt=(2w)^{-s/2}\Gamma(s)e^{v^2/(8w)}D_{-s}(\frac{v}{\sqrt{2w}}).$$ \end{lemma} By the Legendre duplication formula $\Gamma(\frac{s}{2})\Gamma(\frac{s}{2}+\frac{1}{2})=2^{1-s}\sqrt{\pi}\Gamma(s),$ and the value $D_{-s}(0)\Gamma(\frac{1+s}{2})=2^{-s/2}\sqrt{\pi},$ it is readily observed that the $v\rightarrow0$ case of this lemma reduces to the classical integral formula $\int_{0}^{\infty}t^{s-1}e^{-wt^2}dt=\frac{1}{2}w^{-s/2}\Gamma(\frac{s}{2}).$ The parabolic cylinder function $D_s(x)$ is an entire function of both the order $s$ and the argument $x.$ \begin{proof}[Proof of Theorem~\ref{thm:theorem1}] Inserting (2.8)--(2.9) into Lemma 2.1 gives us the following Bailey pair \begin{equation}\bar{\beta}_n(q,q):=\sum_{n\ge r_1\ge r_2\dots\ge r_k\ge0}\frac{q^{r_1+r_2+\dots+r_k}q^{r_1^2+r_2^2+\dots+r_k^2}}{(q)_{n-r_1}(q)_{r_1-r_2}\dots(q)_{r_{k-1}-r_{k}}}\frac{(z)_{r_k+1}(q/z)_{r_k}}{(q)_{2r_k+1}}\end{equation} \begin{equation}\bar{\alpha}_n(q,q)=(-z)^{-n}\frac{q^{(2k+1)n(n+1)/2}(1-z^{2n+1})}{1-q}.\end{equation} A limiting case of Bailey's lemma [1, pg.270, eq.(2.4)] (with $a=q,$ $\rho_1=q,$ $\rho_2\rightarrow\infty,$ and $N\rightarrow\infty$) says that \begin{equation} \sum_{n\ge0}(q)_n(-1)^nq^{n(n+1)/2}\beta_n(q,q)=(1-q)\sum_{n\ge0}(-1)^n q^{n(n+1)/2}\alpha_n(q,q).\end{equation} Inserting (2.11)--(2.12) into (2.13) gives \begin{equation} \begin{aligned}&\sum_{n\ge0}(q)_n(-1)^nq^{n(n+1)/2}\sum_{n\ge r_1\ge r_2\dots\ge r_k\ge0}\frac{q^{r_1+r_2+\dots+r_k}q^{r_1^2+r_2^2+\dots+r_k^2}}{(q)_{n-r_1}(q)_{r_1-r_2}\dots(q)_{r_{k-1}-r_{k}}}\frac{(z)_{r_k+1}(q/z)_{r_k}}{(q)_{2r_k+1}}\\ &=\sum_{n\ge0}(-1)^n(-z)^{-n}q^{(k+1)n(n+1)}(1-z^{2n+1}).\end{aligned} \end{equation} Putting $q=e^{-wt^2},$ and $z=e^{-vt-wt^2(k+1)},$ we have that (2.14) becomes \begin{equation} \begin{aligned} &\sum_{n\ge0}(e^{-wt^2};e^{-wt^2})_ne^{-wt^2n(n+1)/2}(-1)^nA_{n,k}(e^{-vt-wt^2(k+1)},e^{-wt^2})\\
&=\sum_{n\ge0}e^{nvt-wt^2(k+1)n^2}(1-e^{-vt(2n+1)-wt^2(k+1)(2n+1)})\\ &= \sum_{n\ge0}e^{nvt-wt^2(k+1)n^2}-\sum_{n\ge0}e^{nvt-wt^2(k+1)n^2-vt(2n+1)-wt^2(k+1)(2n+1)}\\ &= \sum_{n\ge0}e^{nvt-wt^2(k+1)n^2}-\sum_{n\ge1}e^{-wt^2(k+1)n^2-vtn} .\end{aligned}\end{equation} Subtracting a $1$ from (2.15) and then taking the Mellin transform, we compute that $$\begin{aligned} &\mathfrak{M}\bigg\{ \sum_{n\ge0}(e^{-wt^2};e^{-wt^2})_ne^{-wt^2n(n+1)/2}(-1)^nA_{n,k}(e^{-vt-wt^2(k+1)},e^{-wt^2})-1\bigg\}\\ &= \int_{0}^{\infty}t^{s-1}\left( \sum_{n\ge1}e^{nvt-wt^2(k+1)n^2}-\sum_{n\ge1}e^{-wt^2(k+1)n^2-vtn}\right)dt\\ &= \sum_{n\ge1}\int_{0}^{\infty}t^{s-1} \left(e^{nvt-wt^2(k+1)n^2} - e^{-wt^2(k+1)n^2-vtn}\right)dt\\ &=\sum_{n\ge1}\Bigg((2w(k+1)n^2)^{-s/2}\Gamma(s)e^{v^2/(8(k+1)w)}D_{-s}\left(-\frac{v}{\sqrt{2(k+1)w}}\right)\\ &- (2w(k+1)n^2)^{-s/2}\Gamma(s)e^{v^2/(8(k+1)w)}D_{-s}\left(\frac{v}{\sqrt{2(k+1)w}}\right)\Bigg)\\ &=(2w(k+1))^{-s/2}\zeta(s)\Gamma(s)e^{v^2/(8(k+1)w)}D_{-s}\left(-\frac{v}{\sqrt{2(k+1)w}}\right)\\ &- (2w(k+1))^{-s/2}\zeta(s)\Gamma(s)e^{v^2/(8(k+1)w)}D_{-s}\left(\frac{v}{\sqrt{2(k+1)w}}\right). \end{aligned}$$ Here we employed Lemma 2.4 with $v$ replaced by $nv,$ and $w$ replaced by $w(k+1)n^2,$ and the resulting formula is analytic for $\Re(s)>1.$ Now applying Mellin inversion, we compute that for $\Re(s)=c>1,$ \begin{equation}\begin{aligned} &-1+\sum_{n\ge0}(e^{-wt^2};e^{-wt^2})_n(-1)^ne^{-wt^2n(n+1)/2}A_{n,k}(e^{-vt-wt^2(k+1)},e^{-wt^2})\\ &=\frac{1}{2\pi i}\int_{(c)}\bigg((2w(k+1))^{-s/2}\zeta(s)\Gamma(s)e^{v^2/(8(k+1)w)}D_{-s}(-\frac{v}{\sqrt{2(k+1)w}})\\ &-(2w(k+1))^{-s/2}\zeta(s)\Gamma(s)e^{v^2/(8(k+1)w)}D_{-s}(\frac{v}{\sqrt{2(k+1)w}})\bigg)t^{-s}ds.\end{aligned}\end{equation} The modulus of the integrand can be seen to be estimated as follows (see [11, pg.398] for a similar example). 
Making the change of variable $s\rightarrow s+\frac{1}{2},$ we obtain an integral for $\Re(s)>\frac{1}{2}.$ For $\Re(s)=\sigma>\frac{1}{2},$ we have $\zeta(s+\frac{1}{2})\ll \zeta(\sigma+\frac{1}{2}).$ The growth of the integrand is then seen to be dominated by $\Gamma(s+\frac{1}{2})e^{v^2/(8(k+1)w)}D_{-s-\frac{1}{2}}(\frac{v}{\sqrt{2(k+1)w}}),$ due to Stirling's formula. Now an estimate of Paris [10, pg. 425, A(10)] for the parabolic cylinder function $D_{-s-\frac{1}{2}}(x)$ for fixed $x$ as $|s|\rightarrow\infty,$ says that \begin{equation}D_{-s-\frac{1}{2}}(x)=\frac{\sqrt{\pi}e^{-x\sqrt{s}}}{2^{s/2+1/4}\Gamma(\frac{s}{2}+\frac{3}{4})}\left(1-\frac{x^3}{24\sqrt{s}}+\frac{x^2}{24s}(\frac{x^2}{48}-\frac{3}{2})+O(s^{-3/2})\right),\end{equation} uniformly for $|\arg(s)|\le \pi-\delta<\pi.$ By the asymptotic estimate (2.17) in conjunction with [11, pg.39, Lemma 2.2], we see the growth of the integrand is well controlled. Hence, we may apply Cauchy's residue theorem to obtain our expansion. 
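For readers who wish to sanity-check the integral evaluation of Lemma 2.4 numerically, the $s=1$ case can be tested with standard-library tools alone, using the closed form $D_{-1}(x)=\sqrt{\pi/2}\,e^{x^2/4}(1-\erf(x/\sqrt{2}))$ quoted from [7] in the residue computation below. This is only an illustrative sketch, not part of the proof; the step size and integration cutoff are ad hoc choices.

```python
import math

def lhs(w, v, T=40.0, h=1e-4):
    # Numerically integrate e^{-w t^2 - v t} dt (the s = 1 case of the lemma)
    # with a plain trapezoidal rule on [0, T]; the tail beyond T is negligible.
    n = int(T / h)
    total = 0.5 * (1.0 + math.exp(-w * T * T - v * T))
    for i in range(1, n):
        t = i * h
        total += math.exp(-w * t * t - v * t)
    return total * h

def rhs(w, v):
    # (2w)^{-1/2} Gamma(1) e^{v^2/(8w)} D_{-1}(v / sqrt(2w)), where
    # D_{-1}(x) = sqrt(pi/2) e^{x^2/4} (1 - erf(x/sqrt(2)))  [7, 9.254.1];
    # this simplifies to (1/2) sqrt(pi/w) e^{v^2/(4w)} erfc(v/(2 sqrt(w))).
    return 0.5 * math.sqrt(math.pi / w) * math.exp(v * v / (4 * w)) \
        * math.erfc(v / (2 * math.sqrt(w)))

print(lhs(1.0, 0.5), rhs(1.0, 0.5))  # the two values agree to well within 1e-6
```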
\par The integrand of (2.16) has simple poles at $s=1$ and the negative integers $s=-n$ due to $\Gamma(s).$ Using $\lim_{s\rightarrow1}(s-1)\zeta(s)=1,$ and [7, pg.1030, eq.(9.254),\#1] $D_{-1}(x)=\sqrt{\pi/2}e^{x^2/4}(1-\erf(\frac{x}{\sqrt{2}})),$ $$\begin{aligned} &\lim_{s\rightarrow1}(s-1)t^{-s}\bigg((2w(k+1))^{-s/2}\zeta(s)\Gamma(s)e^{v^2/(8(k+1)w)}D_{-s}(-\frac{v}{\sqrt{2(k+1)w}})\\ &-(2w(k+1))^{-s/2}\zeta(s)\Gamma(s)e^{v^2/(8(k+1)w)}D_{-s}(\frac{v}{\sqrt{2(k+1)w}})\bigg)\\ &=t^{-1}(2w(k+1))^{-1/2}e^{v^2/(8(k+1)w)}\bigg(D_{-1}(-\frac{v}{\sqrt{2(k+1)w}})-D_{-1}(\frac{v}{\sqrt{2(k+1)w}})\bigg)\\ &=t^{-1}\sqrt{\frac{\pi}{4w(k+1)}}e^{v^2/(4(k+1)w)}\bigg(\left(1-\erf\left(-\frac{v}{\sqrt{4(k+1)w}}\right)\right) \\ &-\left(1-\erf\left(\frac{v}{\sqrt{4(k+1)w}}\right)\right)\bigg).\end{aligned}$$ Using standard properties of Mellin transforms, we compute the residues at the negative integers to see that as $t\rightarrow0^{+},$ $$-1+\sum_{n\ge0}(e^{-wt^2};e^{-wt^2})_n(-1)^ne^{-wt^2n(n+1)/2}A_{n,k}(e^{-vt-wt^2(k+1)},e^{-wt^2})$$ $$\sim t^{-1}\sqrt{\frac{\pi}{4w(k+1)}}e^{v^2/(4(k+1)w)}\bigg(\erf\left(\frac{v}{2\sqrt{(k+1)w}}\right)-\erf\left(-\frac{v}{2\sqrt{(k+1)w}}\right)\bigg)$$ $$+\sum_{n\ge0}\frac{(2w(k+1))^{n/2}}{n!}(-t)^n\zeta(-n)e^{v^2/(8(k+1)w)}D_{n}\left(-\frac{v}{\sqrt{2(k+1)w}}\right)$$ $$- \sum_{n\ge0}\frac{(2w(k+1))^{n/2}}{n!}(-t)^n\zeta(-n)e^{v^2/(8(k+1)w)}D_{n}\left(\frac{v}{\sqrt{2(k+1)w}}\right).$$ \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:theorem2}] Inserting (2.8)--(2.9) (with $q$ replaced by $q^{2^k}$) into Lemma 2.2 gives us the following Bailey pair \begin{equation} \hat{\beta}_n(q,q)=\sum_{n\ge r_1\ge r_2\dots\ge r_k\ge0}P_{n,r_1,r_2,\dots,r_k}(q)q^{n-r_1+2(r_1-r_2)+\dots+2^{k-1}(r_{k-1}-r_k)},\end{equation} where
$$P_{n,r_1,r_2,\dots,r_k}(q):=\frac{(-q^2;q)_{2r_1}(-q^{2^2};q^{2})_{2r_2}\dots(-q^{2^{k}};q^{2^{k-1}})_{2r_k}}{(q^2;q^2)_{n-r_1}(q^{2^2};q^{2^2})_{r_1-r_2}\dots(q^{2^{k}};q^{2^{k}})_{r_{k-1}-r_{k}}}\frac{(z;q^{2^k})_{r_k+1}(q^{2^k}/z;q^{2^k})_{r_k}}{(q^{2^k};q^{2^k})_{2r_k+1}},$$ \begin{equation}\hat{\alpha}_n(q,q)=(-z)^{-n}\frac{q^{2^{k}n(n+1)/2}(1-z^{2n+1})}{1-q^{2^{k}}}.\end{equation} Inserting (2.18)--(2.19) into (2.13) gives \begin{equation} \sum_{n\ge0}(q)_n(-1)^nq^{n(n+1)/2}B_{n,k}(z,q)=\sum_{n\ge0}(-1)^n(-z)^{-n}q^{(2^{k}+1)n(n+1)/2}(1-z^{2n+1}).\end{equation} Putting $q=e^{-wt^2},$ and $z=e^{-vt-wt^2(2^{k}+1)/2},$ we have that (2.20) becomes \begin{equation} \begin{aligned} &\sum_{n\ge0}(e^{-wt^2};e^{-wt^2})_n(-1)^ne^{-wt^2n(n+1)/2}B_{n,k}(e^{-vt-wt^2(2^{k}+1)/2},e^{-wt^2})\\ &=\sum_{n\ge0}e^{nvt-wt^2(2^{k}+1)n^2/2}(1-e^{-vt(2n+1)-wt^2(2^{k}+1)(2n+1)/2})\\ &= \sum_{n\ge0}e^{nvt-wt^2(2^{k}+1)n^2/2}-\sum_{n\ge0}e^{-wt^2(2^{k}+1)(n+1)^2/2-vt(n+1)}\\ &= \sum_{n\ge0}e^{nvt-wt^2(2^{k}+1)n^2/2}-\sum_{n\ge1}e^{-wt^2(2^{k}+1)n^2/2-vtn} .\end{aligned}\end{equation} Subtracting a $1$ from (2.21) and then taking the Mellin transform, we compute that $$\begin{aligned} &\mathfrak{M}\bigg\{ \sum_{n\ge0}(e^{-wt^2};e^{-wt^2})_n(-1)^ne^{-wt^2n(n+1)/2}B_{n,k}(e^{-vt-wt^2(2^{k}+1)/2},e^{-wt^2})-1\bigg\}\\ &= \int_{0}^{\infty}t^{s-1}\left( \sum_{n\ge1}e^{nvt-wt^2(2^{k}+1)n^2/2}-\sum_{n\ge1}e^{-wt^2(2^{k}+1)n^2/2-vtn}\right)dt\\ &= \sum_{n\ge1}\int_{0}^{\infty}t^{s-1} \left(e^{nvt-wt^2(2^{k}+1)n^2/2} - e^{-wt^2(2^{k}+1)n^2/2-vtn}\right)dt\\ &=\sum_{n\ge1}\Bigg((w(2^{k}+1)n^2)^{-s/2}\Gamma(s)e^{v^2/(4(2^{k}+1)w)}D_{-s}\left(-\frac{v}{\sqrt{(2^{k}+1)w}}\right)\\ &- (w(2^{k}+1)n^2)^{-s/2}\Gamma(s)e^{v^2/(4(2^{k}+1)w)}D_{-s}\left(\frac{v}{\sqrt{(2^{k}+1)w}}\right)\Bigg)\\ &=(w(2^{k}+1))^{-s/2}\zeta(s)\Gamma(s)e^{v^2/(4(2^{k}+1)w)}D_{-s}\left(-\frac{v}{\sqrt{(2^{k}+1)w}}\right)\\ &- (w(2^{k}+1))^{-s/2}\zeta(s)\Gamma(s)e^{v^2/(4(2^{k}+1)w)}D_{-s}\left(\frac{v}{\sqrt{(2^{k}+1)w}}\right).
\end{aligned}$$ Here we employed Lemma 2.4 with $v$ replaced by $nv,$ and $w$ replaced by $w(2^{k}+1)n^2/2,$ and again the resulting formula is analytic for $\Re(s)>1.$ By Mellin inversion, we compute for $\Re(s)=c>1,$ $$\begin{aligned} &-1+\sum_{n\ge0}(e^{-wt^2};e^{-wt^2})_n(-1)^ne^{-wt^2n(n+1)/2}B_{n,k}(e^{-vt-wt^2(2^{k}+1)/2},e^{-wt^2})\\ &=\frac{1}{2\pi i}\int_{(c)}\bigg((w(2^{k}+1))^{-s/2}\zeta(s)\Gamma(s)e^{v^2/(4(2^{k}+1)w)}D_{-s}\left(-\frac{v}{\sqrt{(2^{k}+1)w}}\right)\\ &- (w(2^{k}+1))^{-s/2}\zeta(s)\Gamma(s)e^{v^2/(4(2^{k}+1)w)}D_{-s}\left(\frac{v}{\sqrt{(2^{k}+1)w}}\right)\bigg)t^{-s}ds.\end{aligned}$$ The integrand has simple poles at $s=1$ and at the negative integers $s=-n$ due to $\Gamma(s).$ We compute the residues at the negative integers to see that as $t\rightarrow0^{+},$ $$-1+\sum_{n\ge0}(e^{-wt^2};e^{-wt^2})_n(-1)^ne^{-wt^2n(n+1)/2}B_{n,k}(e^{-vt-wt^2(2^{k}+1)/2},e^{-wt^2})$$ $$\sim t^{-1}\sqrt{\frac{\pi}{2w(2^k+1)}}e^{v^2/(2(2^k+1)w)}\bigg(\erf\left(\frac{v}{\sqrt{2(2^k+1)w}}\right)-\erf\left(-\frac{v}{\sqrt{2(2^k+1)w}}\right)\bigg)$$ $$+\sum_{n\ge0}\frac{(w(2^{k}+1))^{n/2}}{n!}(-t)^n\zeta(-n)e^{v^2/(4(2^{k}+1)w)}D_{n}\left(-\frac{v}{\sqrt{(2^{k}+1)w}}\right)$$ $$- \sum_{n\ge0}\frac{(w(2^{k}+1))^{n/2}}{n!}(-t)^n\zeta(-n)e^{v^2/(4(2^{k}+1)w)}D_{n}\left(\frac{v}{\sqrt{(2^{k}+1)w}}\right).$$ \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:theorem3}] Since the proof is identical to the proof of our previous two theorems, we only outline some of the major details.
Inserting the Bailey pair (2.8)--(2.9) into Lemma 2.3 and using (2.13) we obtain the identity \begin{equation} \sum_{n\ge0}\frac{(q)_n}{(-q)_{n}}(-1)^nq^{n(n+1)/2}C_{n,k}(z,q)=\sum_{n\ge0}z^{-n}q^{(k+2)n(n+1)/2}(1-z^{2n+1}),\end{equation} where $$C_{n,k}(z,q)=\sum_{n\ge r_1\ge r_2\dots\ge r_k\ge0}\frac{q^{r_1^2/2+r_1/2+r_2^2/2+r_2/2+\dots+r_k^2/2+r_k/2}}{(q)_{n-r_1}(q)_{r_1-r_2}\dots(q)_{r_{k-1}-r_{k}}}\frac{(z)_{r_k+1}(q/z)_{r_k}}{(q)_{{r_k}}(q;q^2)_{r_k+1}}.$$ Putting $q=e^{-wt^2},$ and $z=e^{-vt-wt^2(k+2)/2},$ and proceeding as before, the result follows. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:theorem4}] By Lemma 2.4, we have for $\Re(s)=c'>1,$ $\Re(w)>0,$ $$\sum_{n\ge1}\chi(n)e^{-wn^2t^2-vnt}=\frac{1}{2\pi i}\int_{(c')}(2w)^{-s/2}\Gamma(s)L(s,\chi)e^{v^2/(8w)}D_{-s}(\frac{v}{\sqrt{2w}})t^{-s}ds.$$ Since $\chi$ is assumed to be nonprincipal, there is no pole at $s=1.$ Hence, computing the residues at the poles $s=-n$ coming from the gamma function, $$\sum_{n\ge1}\chi(n)e^{-w n^2t^2-v nt}\sim e^{v^2/(8w)}\sum_{n\ge0}\frac{(2w)^{n/2}(-t)^n}{n!}L(-n,\chi)D_{n}(\frac{v}{\sqrt{2w}}),$$ as $t\rightarrow0^{+}.$ After noting that if $\chi$ is even, then $L(-2n,\chi)=0,$ and if $\chi$ is odd, then $L(-2n-1,\chi)=0,$ the result follows. \end{proof} \section{Concluding Remarks} Here we have of course limited our choices of change of base of $q$ to three examples from [5]; many more examples may be obtained by appealing to different Bailey chains from [5]. It would be desirable to obtain expansions for other $L$-functions, such as those contained in [3]. 1390 Bumps River Rd. \\* Centerville, MA 02632 \\* USA \\* E-mail: [email protected], [email protected] \end{document}
\begin{document} \def\thepage{} \begin{frontmatter} \setcounter{page}{157} \title{On Associative Confounder Bias} \author[A]{\fnms{Priyantha} \snm{Wijayatunga} \thanks{Corresponding Author: Priyantha Wijayatunga, Department of Statistics, Ume\r{a} School of Business and Economics, Ume\r{a} University, Ume\r{a} SE-901 87, Sweden. Email: [email protected].}}, \runningauthor{P. Wijayatunga} \address[A]{Department of Statistics, Ume\r{a} University, Ume\r{a}, Sweden} \begin{abstract} Conditioning on a set of confounders that causally affect both the treatment and the outcome can suffice to eliminate the bias those confounders introduce when the causal effect of the treatment on the outcome is estimated from observational data. In the so-called potential outcome framework for causal inference this is done by including them in the propensity score model, whereas in the causal graphical modeling framework one conditions on them directly. In the former framework, however, it is unclear what to do when the modeler finds a variable that is non-causally associated with both the treatment and the outcome. Some argue that such variables should also be included in the analysis in order to remove bias. Others argue that they introduce no bias and should be excluded, since conditioning on them induces a spurious dependence between the treatment and the outcome, thus adding extra bias to the estimate. We show that, depending on the context, either argument can be in error. When such a variable is found, neither action may yield the correct causal effect estimate; one must select the action that is less wrong. We discuss how to select the better action. \end{abstract} \begin{keyword} causal effect estimation\sep confounder selection\sep graphical models.
\end{keyword} \pagestyle{empty} \thispagestyle{empty} \end{frontmatter} \section{Introduction} \label{sec:intro} For making causal inferences from observational data (see \cite{RD2005} and \cite{PJ2009}) it is important to identify, ideally, all the potential pretreatment confounders of the causal relation between the cause (treatment variable) and the effect (outcome variable), in order to obtain an unbiased estimate of the causal effect of the former on the latter. Let $Z$ denote the treatment received by subjects, taking values in the set $\mathcal{Z}=\{0,1 \}$, and let $Y$ denote the outcome, taking values in the set $\mathcal{Y}=\{0,1\}$, where $0$ denotes failure and $1$ denotes success. The potential outcome causal model \cite{RD2005} posits the existence of a pair of potential outcomes $(Y_1, Y_0)$ for each subject, where $Y_i$ is the outcome that would have been observed had the treatment been $Z=i$ for $i=0,1$. The pair is assumed to be independent of the treatment assignment, written $(Y_0,Y_1) \!\perp\!\!\!\perpe Z $, when the treatment assignments are randomized, as in a randomized experiment. In observational studies, however, the treatment assignments are not randomized. Then a useful assumption for causal inference is that the potential outcomes are conditionally independent of the treatment assignment given the pretreatment covariates, say, multivariate $X$. Ideally, $X$ denotes 'all' the potential pretreatment confounders of $Z$ and $Y$, and then one writes $(Y_0,Y_1) \!\perp\!\!\!\perpe Z \mid X$. That is, to estimate the causal effect of $Z$ on $Y$, we need to condition on (control for) $X$. However, it is not necessary to consider all the pretreatment confounders, but only some 'sufficient' subset of them. Finding such a sufficient set of confounders is somewhat problematic, and the potential outcome framework offers no clear way to do it.
The causal graphical modeling framework \cite{PJ2009}, however, offers one such way, called the 'back-door criterion'. It shows how to choose a subset of covariates in order to identify the causal effect (to estimate it without bias). When a causal graphical model is identified on $Z$, $Y$ and all their causal factors, the criterion can find a sufficient subset; such a set is called an 'admissible' or a 'deconfounding' set in the literature. Treating some covariates as confounders while ignoring such a criterion can sometimes introduce further bias (p. 351 of \cite{PJ2009}). However, the back-door criterion is not complete \cite{PJ2009}; there exist causal graphical models where the criterion fails for some sets of covariates even though adjusting for them yields valid causal effect estimates. So the problem of confounder selection is important in causal inference. In the potential outcome causal model, once the analyst has found all the confounders, he/she uses them either directly or indirectly (in so-called propensity score models \cite{RR1983}, \cite{RR1984}) to remove the bias they induce. Any factor that causes both the treatment and the outcome can be identified relatively easily as a pretreatment confounder with subject domain knowledge; for that it is important to determine the causal directions among the variables. But there may be other factors, such as ones that are non-causally related with both the treatment and the outcome, e.g., through associations. It seems that some researchers tend to condition on them too, e.g., by including them in the propensity score model on the assumption that this removes the bias due to them. Generally, however, causal graphical modelers do not consider them to be confounders.
Recently there was a debate (see \cite{SI09}, \cite{PJ09a}, \cite{SA09} and \cite{RD09}) on this issue: whether it is necessary to condition on a variable that is not causally related to either the treatment or the outcome but is associated with both. In the debate, Rubin argues for conditioning, while Pearl and his colleagues argue against it, saying that it will only introduce extra bias. Our goal here is to analyze these arguments a little more deeply and to understand when we should condition on such variables. We use the graphical modeling framework to estimate causal effects; therefore, we begin by giving some details of it. We argue that in some cases it is desirable to condition whereas in others it is not. Mostly the decision should be taken by considering the strengths of the associations of the potential confounder with the treatment and the outcome. \section{Covariate Selection for Adjustment of Confounding} We use the concept of intervention in causal graphical models (also called do-calculus), described in \cite{PJ2009} and \cite{LR2002}, for causal effect estimation. This approach is equivalent to the potential outcome model (see Ch. 7 of \cite{PJ2009} and \cite{WP2014}). To remind the reader of this calculus, we first define the probability distribution of a random variable under conditioning by intervention or action on another variable. For an observed random data sample on a vector of random variables, say, $\textbf{X}=(X_1,...,X_n)$, we can find their joint probability distribution, say, $p(\textbf{X}=\textbf{x})=p(\textbf{x})$. We can have a factorization of $p(\textbf{x})$; let it be $p(\textbf{x})= \prod_{i=1}^{n} p(x_i \vert pa_i)$ where $PA_i \subseteq \{X_1,...,X_{i-1}\}$ with the exception of $PA_1= \emptyset$ (the empty set), using some conditional independence assumptions within $\textbf{X}$. Note that here we denote random variables (or sets of them) by uppercase letters/expressions (such as $X, PA$, etc.) and their values by the corresponding lowercase expressions ($x, pa$, respectively).
For a causal structure on $\textbf{X}$ one can use, e.g., the time order of occurrence to index the variables, so that cause variables have lower indices than their effect variables. For any $i$ such that $2 \leq i \leq n$, if $p(x_i \vert pa_i) \neq p(x_i )$ then the probability distribution of the vector of random variables without $X_i$, say, $\textbf{X}_{-i}=\{X_1,...,X_n\} \backslash \{X_i\}$, when $X_i$ is intervened upon and set to a particular value, say, $x_i$, written as $do(X_i=x_i)$ and denoted by $p(\textbf{x}_{-i} \vert do(X_i=x_i))$, is defined as follows: \begin{eqnarray*} p(\textbf{x}_{-i} \vert do(X_i=x_i)) &=& \frac{p(\textbf{x})}{p(x_i \vert pa_i)}=\prod_{k=1:k \neq i}^{n} p(x_k \vert pa_k) \\ & \neq & \frac{p(\textbf{x})} {p(x_i )} =\frac{1}{p(x_i)}\prod_{k=1}^{n} p(x_k \vert pa_k) = p(\textbf{x}_{-i} \vert x_i) \end{eqnarray*} where the last expression is the corresponding conditional probability distribution when we have observed $X_i=x_i$. That is, the two probability distributions generally differ. \small \begin{figure} \caption{Two causal models relating $X$, $Z$ and $Y$.\label{simple.bn}} \end{figure} \normalsize Now, consider two different causal relationships between $X$, $Y$ and $Z$: the first is such that $X$ is a cause of both $Z$ and $Y$, and $Z$ is a cause of $Y$, represented by the causal network model $p(y,z,x)=p(x)p(z\vert x)p(y\vert x,z)$ shown in the left-hand diagram, and the second is such that $X$ and $Z$ are causes of $Y$, represented by the causal network model $p(y,z,x)=p(x)p(z)p(y\vert x,z)$ shown in the right-hand diagram of Figure \ref{simple.bn}.
If we intervene on $Z$ as $do(Z=z)$ for $z=0,1$, then the marginal intervention distribution of $Y$ for the first causal model is $p(y \vert do(Z=z)) = \sum_x p(x)p(z \vert x)p(y \vert z,x)/p(z \vert x) = \sum_x p(y \vert z,x)p(x)$, whereas that for the second causal model is $ p(y\vert do(Z=z) )= \sum_x p(x)p(z)p(y \vert z,x)/p(z) = \sum_x p(y \vert z,x)p(x) = p(y \vert z) $, since $X \!\perp\!\!\!\perpe Z$ in the latter case. The causal effect of the treatment option $Z=z_1$ compared to the control option $Z=z_0$ is defined as $\sum_y y p(y \vert do(Z=z_1)) - \sum_y y p(y \vert do(Z=z_0))$; it is identifiable if $\sum_x p(y \vert z,x)p(x)$ is a valid functional for $z=z_0,z_1$. We see that the estimates for the two cases are different. The above observation can be shown for a more general causal model. Let $\textbf{X}=(X_1,...,X_n)$ be indexed according to time order and let $\textbf{X}_0$ represent a set of variables that causally affect $X_p$, where we are unsure about the chronological order of the elements of $\textbf{X}_0$ relative to $X_i$ for $1 \le i < p \le n$. Let the parents (causes) of $X_j$ in $\textbf{X}$ be $PA_j$ and those in $(\textbf{X}_0,\textbf{X})$ be $PA_j^+$, so that $PA_j=PA_j^+$ for $j \leq p-1$ and $PA_p^+=PA_p \cup \textbf{X}_0$.
Then, the joint probability distribution of $(\textbf{X},\textbf{X}_0)$ is $p(\textbf{x}_0, \textbf{x}) = p(\textbf{x}_0) \prod_{j=1}^{n} p(x_j \vert pa_j^+)$ and the intervention (on $X_i$) distribution is \begin{eqnarray*} p( x_p \vert do(X_i=x_i)) &=& \sum_{\substack{\textbf{x}_0,x_1,..x_{i-1}, \\ x_{i+1},...,x_{p-1}, \\ x_{p+1},...,x_{n}}} p(\textbf{x}_0) \prod_{\substack{j=1 \\ j \neq i }}^{n} p(x_j \vert pa_j^+) \\ &=& \sum_{\substack{\textbf{x}_0,x_1,..x_{i-1}, \\ x_{i+1},...,x_{p-1}}} p(\textbf{x}_0) \prod_{\substack{j=1 \\ j \neq i }}^{p-1} p(x_j \vert D_j) p(x_p \vert D_p,\textbf{x}_0) \\ &=& \sum_{\substack{\textbf{x}_0,x_1,..x_{i-1}, \\ x_{i+1},...,x_{p-1}}} p(\textbf{x}_0) p(x_1,..x_{i-1}) p(x_{i+1},....,x_p \vert D_{i+1},\textbf{x}_0) \\ &=& \sum_{\textbf{x}_0,pa_p} p(\textbf{x}_0)p(pa_p \backslash \{ x_i\}) p(x_p \vert pa_p,\textbf{x}_0) \\ &=& \sum_{\textbf{x}_0,pa_p} p(x_p \vert x_i,pa_p \backslash \{ x_i\},\textbf{x}_0)p(pa_p \backslash \{ x_i\},\textbf{x}_0) \end{eqnarray*} where $D_j=\{ X_1,...,X_{j-1}\}$ such that $D_1=\emptyset$ and $X_i \in PA_p$. This is of the form $p(y \vert do(z))=\sum_x p(y \vert z,x)p(x)$, where $Z$ and $X$ affect $Y$ directly and $X$ affects $Z$ directly. If we assume that some of the variables in $\textbf{X}_0$ are associated with some of the variables in the vector $(X_1,...,X_{i-1})$, or are causally related to or associated with variables in $(X_{p+1},...,X_n)$, then the above result still holds.
\begin{eqnarray*} p( x_p \vert do(X_i=x_i)) &=& \sum_{\textbf{x}_0,pa_p} \frac{ p(x_p, x_i,pa_p \backslash \{ x_i\},\textbf{x}_0)}{ p(x_i, pa_p \backslash \{ x_i\},\textbf{x}_0) / p(pa_p \backslash \{ x_i\},\textbf{x}_0)} \\ &=& \sum_{\textbf{x}_0,pa_p} \frac{ p(x_p, x_i,pa_p \backslash \{ x_i\},\textbf{x}_0)}{ p(x_i \vert pa_p \backslash \{ x_i\},\textbf{x}_0) } = \sum_{\textbf{x}_0,pa_p} \frac{ p(x_p, x_i,pa_p \backslash \{ x_i\},\textbf{x}_0)}{ p(x_i \vert pa_p^-) } \\ &=& \sum_{\textbf{x}_0,pa_p^-} \frac{ p(x_p, x_i,pa_p^-)}{ p(x_i \vert pa_p^-) } = \sum_{\textbf{x}_0,pa_p^-} p(x_p \vert x_i,pa_p^-)p( pa_p^-) \end{eqnarray*} where $PA_p^- = PA_p \cap PA_i$. Again, this is of the form $p(y \vert do(z))=\sum_x p(y \vert z,x)p(x)$, where $X$ represents all the direct causal variables common to both $Z$ and $Y$. The above simplifications show that we can select the confounding variable set as follows. \begin{prop} Let $X^{\prime \prime}$ denote the set of all potential causal variables of $Y$ except for $Z$, and let $P(Y \vert Z,X^{\prime \prime})=P(Y \vert Z,X^{\prime})$ where $X^{\prime}$ is the smallest subset of $X^{\prime \prime}$ in some sense. Then the smallest subset $X$ of $X^{\prime}$ in a similar sense such that $P(Z \vert X^{\prime})=P(Z \vert X)$ is a sufficient set of confounders for estimating $P(Y \vert do(Z))$. \end{prop} Here a 'smallest' subset can be taken to be one whose total number of joint configurations of its variables is smallest. This rule gives a simple way to select covariates for removing confounding bias. We omit the proof of this rule, but it is clear from the above discussion. Recall that the back-door criterion is known to be incomplete (see Ch. 11 of \cite{PJ2009}), meaning that the criterion fails for some sets of covariates even though adjusting for them is sufficient for removing confounding bias. The above rule avoids inclusion of covariates such as instrumental variables, especially in building propensity score models.
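As a concrete illustration of the adjustment formula $p(y \vert do(z))=\sum_x p(y \vert z,x)p(x)$ discussed above, the following sketch contrasts it with naive conditioning for a single binary confounder $X$ of $Z$ and $Y$; all the numerical values are hypothetical.

```python
# Toy model X -> Z, X -> Y, Z -> Y with hypothetical binary distributions.
p_x = {1: 0.4, 0: 0.6}                      # p(x)
p_z_given_x = {1: 0.8, 0: 0.3}              # p(z=1 | x)
p_y_given_zx = {(1, 1): 0.9, (1, 0): 0.6,   # p(y=1 | z, x)
                (0, 1): 0.5, (0, 0): 0.2}

def p_do(z):
    # Back-door adjustment: p(y=1 | do(z)) = sum_x p(y=1 | z, x) p(x)
    return sum(p_y_given_zx[(z, x)] * p_x[x] for x in (0, 1))

def p_cond(z):
    # Naive conditioning: p(y=1 | z) = sum_x p(y=1 | z, x) p(x | z)
    pz_x = {x: p_z_given_x[x] if z == 1 else 1 - p_z_given_x[x] for x in (0, 1)}
    pz = sum(pz_x[x] * p_x[x] for x in (0, 1))
    return sum(p_y_given_zx[(z, x)] * pz_x[x] * p_x[x] for x in (0, 1)) / pz

print(p_do(1), p_cond(1))  # ~0.72 vs ~0.792: the two estimands differ
```

Because $X$ affects both $Z$ and $Y$ here, $p(y \vert z)$ mixes the causal effect with confounding, while the adjusted functional recovers the intervention distribution.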
In fact, in the literature a sufficient confounder set is selected by first checking that each confounder in it is a cause of the treatment and then that it is a cause of the outcome \cite{VS2011}. However, it should be done the other way round: a confounder should be predictive of the outcome first, and then it should also be predictive of the treatment. Following this order we do not miss any important confounders, since any confounder should be related to the outcome in the first place. For example, consider a causal model for estimating the causal effect of a teacher's instructional practice ($Z$) on students' reading comprehension achievement ($Y$), as discussed in \cite{KC2011}. It is assumed that the teacher's reading knowledge $X$ is a causal confounder that directly affects both $Z$ and $Y$. Furthermore, it is assumed that the teacher's professional development in reading ($U$) directly affects $Z$ and $X$, and the teacher's general knowledge ($W$) directly affects $X$ and $Y$. The causal diagram is shown as Model 1 in Figure~\ref{simple.bn2}. Then it is easy to see that $p(y \vert do(z))=\sum_x p(y \vert z,x)p(x)$. And if we believe that $U$ and $W$ are dependent, e.g., through a common cause, then we get $p(y \vert do(z))=\sum_{x,w} p(y \vert z,x,w)p(x,w)$ when $Z \not\!\perp\!\!\!\perpe W$, which is reasonable to assume. \small \begin{figure} \caption{Model 1: the causal diagram for the teacher--student example.\label{simple.bn2}} \end{figure} \normalsize \section{Associative Confounders} There is controversy in the research community about which kinds of variables should be considered confounders, especially for inclusion in propensity score models in the potential outcome causal model, since causal diagrams showing the causal structure are often not used there. In fact, the propensity score concept initially came to light as a description of the treatment allocation process \cite{RR1984}, \cite{RR1983}.
In current practice some authors argue that all the variables related to the outcome should be included in the propensity score model \cite{RD1996} (there can then be some redundancy), whereas others argue that all the variables related to both the treatment and the outcome should be included \cite{PS2000}. Problems occur, however, when one finds variables that have non-causal (associative) relationships with the treatment or the outcome. Researchers usually replace any such association between two variables with a causal fork using the so-called common cause principle; this replaces an association with causal relations \cite{SA09}. Simply put, the principle says that a non-causal association between two variables can be explained by a third variable that causally affects both. For example, such an association between two variables $X$ and $Y$ with a model, say, $M_1 : X \rule[1.3mm]{4mm}{0.1pt} Y $ can be replaced by a model, say, $M_2: X \leftarrow U \rightarrow Y$, where arrows indicate causal relations. Then $U$ is said to be a common cause of $X$ and $Y$. Note that we exclude the possibility of feedback causal relations. Now suppose we observe a covariate $X$ that is non-causally associated with both $Z$ and $Y$, which is the topic of the Rubin--Pearl debate. It can be assumed that the non-causal association structure $Z \rule[1.3mm]{4mm}{0.1pt} X \rule[1.3mm]{4mm}{0.1pt} Y$ is embedded in the context, and therefore one applies a causal fork to each of the two associations separately. In fact, the argument of \cite{SA09} and \cite{PJ09a} is based on applying two causal forks to the two non-causal associations, one for each, thus making $X$ a so-called M-collider \cite{KC2011}. Their model of discussion is Model A in Figure \ref{causalfork.bn}, but the argument is based on the model $ Z \leftarrow U \rightarrow X \leftarrow W \rightarrow Y$, called an M-structure due to its shape. Here $U$ and $W$ are taken to be independent.
An example of this model is given in \cite{DL2010}: measuring the causal effect of low education ($Z$) on later diabetes risk ($Y$), where it is assumed that the mother's previous diabetes status ($X$) is an associative covariate. A medical opinion is that family income during childhood ($U$) is a cause of $X$ and $Z$, and the mother's genetic risk of diabetes ($W$) is a cause of $X$ and $Y$. Though $U$ and $W$ can be independent, it is appropriate to regard that as a special case; generally, there is some dependence between them. In fact, one can simply assume this, but here we investigate how and when such cases arise and discuss which actions are appropriate then. For Model B of Figure $\ref{causalfork.bn} $, we can write the joint probability distribution of all the variables as $ p(y,x,u,w,z) = p(u)p(w)p(x \mid u,w)p(z \vert u)p(y\vert z,w)$, so with the intervention $Z=do(z)$ we get $p(y,x,u,w \vert do(z)) = p(u)p(w)p(x \vert u,w)p(y \vert z,w) $. Then, \begin{align*} p(y \vert do(z)) &= \sum_{u,w,x} p(u)p(w)p(x \vert u,w)p(y\vert z,w) = \sum_{w} p( w)p(y\vert z,w) = \sum_{w} p( w \vert z)p(y\vert z,w) \\ &= p(y \vert z) = \sum_x p(y \mid z,x) p(x \mid z) \neq \sum_x p(y \mid z,x)p(x) \end{align*} if $W \!\perp\!\!\!\perpe Z$ and since $Z \not\!\perp\!\!\!\perpe X$. Note that we have $W \!\perp\!\!\!\perpe Z$ whenever $U \!\perp\!\!\!\perpe W$. That is, the true probability of $Y$ when $Z$ is intervened upon differs from that obtained by conditioning on $X$, while ignoring $X$ gives the true intervened probability. So, when assuming $U \!\perp\!\!\!\perpe W$, conditioning on $X$ may result in a biased causal effect estimate; the above inequality shows that the bias is caused by the dependence between $X$ and $Z$, since $p(x) \neq p(x \vert z)$, i.e., when $X$ and $Z$ are weakly dependent the bias is small.
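The claim above, that for the M-structure (Model B with $U \!\perp\!\!\!\perpe W$) ignoring $X$ recovers $p(y \vert do(z))$ while adjusting for the collider $X$ does not, can be checked by exact enumeration on a toy distribution; all the numbers below are hypothetical illustrations.

```python
from itertools import product

# Hypothetical binary M-structure: Z <- U -> X <- W -> Y, plus Z -> Y.
p_u = {1: 0.6, 0: 0.4}
p_w = {1: 0.3, 0: 0.7}
p_z_u = {1: 0.8, 0: 0.2}                              # p(z=1 | u)
p_x_uw = {(1, 1): 0.95, (1, 0): 0.7,
          (0, 1): 0.6, (0, 0): 0.05}                  # p(x=1 | u, w)
p_y_zw = {(1, 1): 0.9, (1, 0): 0.7,
          (0, 1): 0.4, (0, 0): 0.1}                   # p(y=1 | z, w)

def joint(u, w, x, z):
    # p(u) p(w) p(z | u) p(x | u, w), i.e. the joint marginalized over Y
    pz = p_z_u[u] if z == 1 else 1 - p_z_u[u]
    px = p_x_uw[(u, w)] if x == 1 else 1 - p_x_uw[(u, w)]
    return p_u[u] * p_w[w] * pz * px

# Truth: p(y=1 | do(z=1)) = sum_w p(w) p(y=1 | z=1, w)
truth = sum(p_w[w] * p_y_zw[(1, w)] for w in (0, 1))

# Observational p(y=1 | z=1): equals the truth here, since W is independent of Z.
pz1 = sum(joint(u, w, x, 1) for u, w, x in product((0, 1), repeat=3))
obs = sum(joint(u, w, x, 1) * p_y_zw[(1, w)]
          for u, w, x in product((0, 1), repeat=3)) / pz1

# Adjusting for the collider X: sum_x p(x) p(y=1 | z=1, x) -- biased.
adj = 0.0
for x in (0, 1):
    px = sum(joint(u, w, x, z) for u, w, z in product((0, 1), repeat=3))
    pzx = sum(joint(u, w, x, 1) for u, w in product((0, 1), repeat=2))
    py_zx = sum(joint(u, w, x, 1) * p_y_zw[(1, w)]
                for u, w in product((0, 1), repeat=2)) / pzx
    adj += px * py_zx

print(truth, obs, adj)  # obs matches truth; adj deviates from it
```

Conditioning on the collider $X$ induces a spurious $U$--$W$ (hence $Z$--$Y$) dependence within strata of $X$, which is exactly the extra bias discussed in the debate.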
Note that if an error occurs in the estimate of $p(y \vert do(z))$ for $z=0,1$, it need not result in an error of the same magnitude in the causal effect estimate $\sum_y yp(y \vert do(Z=1)) -\sum_y yp(y \vert do(Z=0))$; i.e., the two errors may combine into a different error. The resultant error (bias) due to conditioning on $X$ is $\sum_{y,x} y p(y \vert Z=1,x) [p(x) - p(x \vert Z=1)] - \sum_{y,x} y p(y \vert Z=0,x) [p(x) - p(x \vert Z=0)] $. For simplicity, we concentrate on the errors that can occur in the estimation of $p(y \vert do(z))$ for $z=0,1$. Note that the above discussion remains valid when some other confounders, say, $X_1$ are present with $X \!\perp\!\!\!\perpe X_1$. In the above analysis we made the strong assumption that $W \!\perp\!\!\!\perpe Z$, but this may not always hold in reality. In the following subsection we show this possibility. \small \begin{figure} \caption{Models A and B for an associative covariate $X$.\label{causalfork.bn}} \end{figure} \normalsize \subsection{Dependence of $X$ with $Z$ and $Y$} It is natural to consider the case $W \not\!\perp\!\!\!\perpe Z$, which can arise when $U \not\!\perp\!\!\!\perpe W$; then it may be that $p(y \vert do(z)) \neq p(y \vert z)$. However, since $W$ is hidden, it is unclear how to handle this case. In fact, for Model B in Figure~\ref{causalfork.bn} we have $W \!\perp\!\!\!\perpe Z$ \cite{LS1990}. Let us consider the case where $X$ and $Y$ are strongly dependent. We use a geometric figure that is used to visualize Simpson's paradox \cite{WP2015a} to explore this possibility. Let the association between $Y$ and $X$ be such that $p(x \vert y) < p(x \vert y')$. Then, for some $T$ we have that $p(t' \vert y)p(x \vert y,t')+p(t \vert y)p(x \vert y,t) < p(t' \vert y')p(x \vert y',t')+p(t \vert y')p(x \vert y',t)$. Note that there are infinitely many such $T$, but they can be artificial unless some meaningful interpretation can be given, ideally to a few of them. Now consider the case of $p(x \vert y,t')< p(x \vert y,t)$.
It is important to note that the value $p(x \mid y)$ dissects the positive length $p(x \vert y,t)-p(x \vert y,t')$ in the ratio $p(t \vert y):p(t' \vert y)$: \begin{align*} \{p(t \mid y)+p(t' \mid y)\}p(x \mid y)&=p(t' \mid y)p(x \mid y,t') + p(t \mid y)p(x \mid y,t) \\ p(t \mid y) \{p(x \mid y,t)-p(x \mid y) \} &= p(t' \mid y) \{p(x \mid y)-p(x \mid y,t') \} \\ \frac{p(x \mid y)-p(x \mid y,t')}{p(x \mid y,t)-p(x \mid y)} &=\frac{p(t \mid y)}{p(t' \mid y)} \end{align*} Now if $T$ is a common cause accounting for the $X$--$Y$ association, then we should have $p(x \vert y,t)=p(x \vert y',t)=p(x \vert t)$ and $p(x \vert y,t')=p(x \vert y',t')=p(x \vert t')$. Therefore, in Figure~\ref{fig:fig1} the conditional probabilities in the former equality are vertically aligned, and so are those in the latter. Then we have $p(x \vert y',t')< p(x \vert y',t)$, and $p(x \vert y')$ dissects the positive length $p(x \vert y',t)-p(x \vert y',t')$ in the ratio $p(t \vert y'):p(t' \vert y')$, and similarly for $p(x \vert y,t')< p(x \vert y,t)$ and $p(x \vert y)$. In Figure~\ref{fig:fig1} those ratios are marked with braces. Since the selection of $T$ is restricted by the strength of the dependence between $X$ and $Y$, the stronger that dependence is, the stronger the dependence between $Y$ and $T$ can be. And if the strength of the dependence between $X$ and $Y$ is characterized by $p(y \vert x)$, then that between $X$ and $T$ should also be higher. The other case, taking $p(x \vert y,t') > p(x \vert y,t)$, is similar. \begin{figure} \caption{A common cause variable $T$ for the negative correlation between $X$ and $Y$, $p(x \vert y) < p(x \vert y')$.\label{fig:fig1}} \end{figure} If $T$ is $W$ in our causal model in Figure~\ref{causalfork.bn}, a common cause for the association between $X$ and $Y$, then the dependences between $X$ and $W$, and between $Y$ and $W$, can be strong given that the dependence between $X$ and $Y$ is strong.
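The dissection property derived above is the law of total probability rearranged; a quick numerical sketch with arbitrary made-up values confirms it:

```python
# Arbitrary (hypothetical) values: p(t|y), p(x|y,t') and p(x|y,t).
pt_y = 0.25                    # p(t | y); hence p(t' | y) = 0.75
px_yt_prime, px_yt = 0.2, 0.8  # p(x | y, t') < p(x | y, t)

# Law of total probability: p(x|y) = p(t'|y) p(x|y,t') + p(t|y) p(x|y,t)
px_y = (1 - pt_y) * px_yt_prime + pt_y * px_yt

# p(x|y) dissects the interval [p(x|y,t'), p(x|y,t)] in ratio p(t|y):p(t'|y)
ratio = (px_y - px_yt_prime) / (px_yt - px_y)
print(px_y, ratio, pt_y / (1 - pt_y))  # the two ratios coincide
```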
Similarly, a strong association between $Z$ and $X$ implies that those between $Z$ and $U$, and between $U$ and $X$, can be strong. By similar arguments, these imply that $U$ and $W$ can be dependent. An alternative way to see that $U$ and $W$ are not independent when the associations between $X$ and $Z$, and between $X$ and $Y$, are strong is to use correlations. In \cite{LE2001} it is shown that for any three random variables, say, $A,B$ and $C$, the correlation coefficients among them satisfy the relationship $\rho_{AC}^2+\rho_{BC}^2+\rho_{AB}^2 \leq 1+2\rho_{AC}\rho_{BC }\rho_{AB}$. For example, when $\rho_{XZ}=0.8$ and $\rho_{XY}=0.7$ we cannot have $U$ and $W$ such that $\rho_{UW}=0$. Therefore, when the dependences between $Z$ and $X$, and between $Y$ and $X$, are strong, the two common causes introduced for those associations may well be dependent. Furthermore, there is another possibility for these two associations: both may be due to one cause, i.e., both $U$ and $W$ refer to the same hidden variable ($V$ in Model C of Figure~\ref{complexcon.bn}). However, current studies are often done without considering these possibilities. Some researchers have shown that conditioning on associated covariates introduces only a small bias; their claims may be due to these contexts. It is sometimes advised \cite{RD09} to control for all the pretreatment covariates, but graphical causal model researchers reject this idea. Therefore, in the next subsection we look at the different possibilities for associative covariates and try to understand when the bias can be amplified. \small \begin{figure} \caption{Models C and D, with hidden common causes of the associations of $X$ with $Z$ and $Y$.\label{complexcon.bn}} \end{figure} \normalsize \subsection{Deciding on Conditioning} Consider the case of two dependent hidden causes, i.e., $U \not \!\perp\!\!\!\perpe W$ (as in Model D in Figure \ref{complexcon.bn}), where the dependence is causal or non-causal.
Then, \begin{align*} p(y \vert do(z)) &= \sum_{u, w, x} p(u,w)p(x \vert u,w)p(y \vert z,w)=\sum_{w} p(w)p(y \vert z,w) = \sum_{w,x} p(x)p(w \vert x)p(y \vert w,z,x) \\ & \neq \sum_{w,x} p(x) p(w \vert z,x)p(y \vert w,z,x) = \sum_x p(y \vert z,x)p(x). \end{align*} If $p(w \vert x) \neq p(w \vert z,x)$, i.e., $W \not\!\perp\!\!\!\perp Z \vert X$, then conditioning on $X$ does not give the correct probability estimate, which is $\sum_{w} p(y \vert z,w)p(w)$. And ignoring $X$ also does not give the correct estimate, since then we get $p(y \vert do(z)) = p(y \vert z) =\sum_w p(y \vert z,w)p(w \vert z) \neq \sum_{w} p(y \vert z,w)p(w)$; i.e., we would need $Z \perp\!\!\!\perp W$ in order to obtain the correct probability, but we know that $Z \not\!\perp\!\!\!\perp W$, especially when the associations between $X$ and $Y$, and between $Z$ and $X$, are strong. That is, to condition on $X$ we should have $W \perp\!\!\!\perp Z \vert X$, and to ignore $X$ we should have $W \perp\!\!\!\perp Z$. So the question remains: which statement should be accepted as the more plausible one, $W \perp\!\!\!\perp Z \vert X$ or $W \perp\!\!\!\perp Z$? Accepting the former (rejecting the latter) is to condition on $X$, and vice versa. But neither condition can be tested, since both involve the unobservable $W$. However, with some subject domain knowledge, if one can assume meaningful $U$ and $W$ and then assess their dependences with $X$ (based on those between $Z$ and $X$, and between $Y$ and $X$), it may be possible to decide which option is better. For example, if those dependences are not strong and the causation of $U$ and $W$ on $X$ is mostly based on the explaining away phenomenon \cite{PJ1988}, then it may not be desirable to condition on $X$. Note that the explaining away phenomenon is that when we see $X=1$, observing $U=1$ makes $P(W=1)$ lower, and vice versa.
If we condition on $X$ in this case, then comparable strata of the data sample in the causal effect calculation may have imbalances in the causal variables $U$ and $W$. This can cause biased causal effect estimates. And when the dependences of $U$ and $W$ with $X$ are assumed to be high, it is less likely that there is an explaining away phenomenon; most probably the causation is monotonic (when we see $X=1$, observing $U=1$ makes $P(W=1)$ higher, and vice versa), and then conditioning on $X$ can be beneficial because it results in balance in the causal variables $U$ and $W$. Though one can reason about the actions to be taken as done above, extensive simulation studies are required to confirm them. Now consider the case of a single hidden cause, say $V$ (Model C in Figure \ref{complexcon.bn}). Then \begin{align*} p(y \vert do(z)) &= \sum_{x,v} p(v)p(x \vert v)p(y \vert z,v)=\sum_{v} p(y \vert z,v) p(v)= \sum_{x,v} p(x)p(v \vert x)p(y \vert z,v,x) \\ & \neq \sum_{x,v} p(x)p(v \vert z,x)p(y \vert z,v,x) = \sum_x p(y \vert z,x)p(x). \end{align*} Therefore, here also conditioning on $X$ does not give the correct probability estimate, which is $\sum_{v} p(y \vert z,v) p(v)$, if $p(v \vert x) \neq p(v \vert z,x)$, i.e., $V \not\!\perp\!\!\!\perp Z \vert X$. But ignoring $X$ also does not give the correct estimate, as $p(y \vert do(z)) \neq p(y \vert z)$ in this case. Since $p(y \vert z) =\sum_v p(y \vert z,v)p(v \vert z)$, ignoring $X$ means assuming $V \perp\!\!\!\perp Z$, but we know that $Z$ and $V$ should be dependent. So, similar to the above case, the question remains: which should be accepted as the more plausible one, $V \perp\!\!\!\perp Z \vert X$ or $V \perp\!\!\!\perp Z$? Accepting the former is to condition on $X$, and vice versa.
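The non-identification computed above for Model C can be reproduced on a small binary example; the structural model and all numbers below are hypothetical, chosen only so that $V \not\!\perp\!\!\!\perp Z \vert X$ and $V \not\!\perp\!\!\!\perp Z$ both hold:

```python
import itertools

# Hypothetical binary version of Model C: one hidden cause V with
# V -> X, V -> Z, V -> Y and Z -> Y; all probability values are invented.
p_v = 0.4                                    # P(V=1)
p_x = lambda v: 0.2 + 0.6 * v                # P(X=1 | v)
p_z = lambda v: 0.3 + 0.5 * v                # P(Z=1 | v)
p_y = lambda z, v: 0.1 + 0.4 * z + 0.4 * v   # P(Y=1 | z, v)

def bern(p, val):
    return p if val == 1 else 1 - p

# full observational joint over (V, X, Z, Y)
joint = {}
for v, x, z, y in itertools.product((0, 1), repeat=4):
    joint[(v, x, z, y)] = (bern(p_v, v) * bern(p_x(v), x)
                           * bern(p_z(v), z) * bern(p_y(z, v), y))

def prob(pred):
    return sum(pr for key, pr in joint.items() if pred(*key))

# ground truth: p(Y=1 | do(Z=1)) = sum_v p(v) p(y | z=1, v)
truth = sum(bern(p_v, v) * p_y(1, v) for v in (0, 1))
# conditioning on X: sum_x p(x) p(y | z=1, x)
adjusted = sum(prob(lambda V, X, Z, Y: X == x)
               * prob(lambda V, X, Z, Y: X == x and Z == 1 and Y == 1)
               / prob(lambda V, X, Z, Y: X == x and Z == 1)
               for x in (0, 1))
# ignoring X: p(y=1 | z=1)
naive = prob(lambda V, X, Z, Y: Z == 1 and Y == 1) / prob(lambda V, X, Z, Y: Z == 1)

print(truth, adjusted, naive)  # three distinct values
```

Both estimates deviate from the interventional quantity, matching the derivation: in this model $V$ and $Z$ are dependent both marginally and given $X$.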
But, similar to the above case, when the dependences are higher, assuming $V \perp\!\!\!\perp Z \vert X$ can be better than assuming $V \perp\!\!\!\perp Z$, and therefore so can conditioning on $X$. If the subject domain knowledge shows that there is a single common cause $V$, then it is beneficial to condition on $X$. \section{Conclusion} Causal effect estimation tasks from observational data need to consider confounders of the causal relation of interest for controlling for (conditioning on). However, it is not necessary to consider all of them, but only a ``sufficient'' subset of them. The current practice is often to select them according to their predictive ability for the treatment first and then for the outcome. But it should be done the other way round; first they should be predictive of the outcome and then of the treatment. And we show how to handle associative confounders (those that are not causing both the treatment and the outcome but are associated with them), for which there is currently no clear consensus about their use. It is often beneficial to condition on associative confounders when they are strongly dependent with both the treatment and the outcome, whereas it is not so for weakly dependent ones. \paragraph{\textbf{Acknowledgments:} Financial support for this research is from the Swedish Research Council for Health, Working Life and Welfare (FORTE) and SIMSAM at Ume\r{a}. The author is thankful to Slawomir Nowaczyk and anonymous referees for their comments.} \end{document}
\begin{document} \title{On weak solutions to a fractional Hardy--H\'enon equation:\\ Part I: Nonexistence} \author{ Shoichi Hasegawa\\ Department of Mathematics, School of Fundamental Science and Engineering,\\ Waseda University,\\ 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan \\ \\ Norihisa Ikoma\\ Department of Mathematics, Faculty of Science and Technology,\\ Keio University,\\ 3-14-1 Hiyoshi, Kohoku-ku, Yokohama, Kanagawa 223-8522, Japan \\ \\ Tatsuki Kawakami\\ Applied Mathematics and Informatics Course, Faculty of Advanced Science and Technology,\\ Ryukoku University,\\ 1-5 Yokotani, Seta Oe-cho, Otsu, Shiga 520-2194, Japan } \date{} \maketitle \begin{abstract} This paper and \cite{HIK-20} treat the existence and nonexistence of stable (resp. stable outside a compact set) weak solutions to a fractional Hardy--H\'enon equation $(-\Delta)^s u = |x|^\ell |u|^{p-1} u$ in $\mathbb{R}^N$, where $0 < s < 1$, $\ell > -2s$, $p>1$, $N \geq 1$ and $N > 2s$. In this paper, the nonexistence part is proved for the Joseph--Lundgren subcritical case. \end{abstract} \noindent \textbf{Keywords}: fractional Hardy--H\'enon equation, stable (stable outside a compact set) solutions, Liouville type theorem. \noindent \textbf{AMS Mathematics Subject Classification 2020}: 35R11, 35J61, 35B33, 35B53, 35D30. \section{Introduction} \label{section:1} In this paper and \cite{HIK-20}, we consider a fractional Hardy--H\'enon equation \begin{equation} \label{eq:1.1} (-\Delta)^s u=|x|^\ell |u|^{p-1}u\qquad\mbox{in}\quad{\mathbb R}^N \end{equation} and throughout this paper, we always assume the following condition on $s,\ell,p,N$: \begin{equation}\label{eq:1.2} 0 < s < 1, \quad \ell>-2s, \quad p>1,\quad N\ge1,\quad N>2s.
\end{equation} In \eqref{eq:1.1}, $(-\Delta)^s$ is the fractional Laplacian, which is defined for any $\varphi\in C^\infty_c (\mathbb R^N)$ by \[ (-\Delta)^s\varphi(x) := C_{N,s}\mbox{P.V.}\int_{\mathbb R^N}\frac{\varphi(x)-\varphi(y)}{|x-y|^{N+2s}}\,dy =C_{N,s}\lim_{\varepsilon\to0}\int_{|x-y|>\varepsilon}\frac{\varphi(x)-\varphi(y)}{|x-y|^{N+2s}}\,dy \] for $x\in \mathbb R^N$, where P.V. stands for the Cauchy principal value integral, \begin{equation} \label{eq:1.3} C_{N,s}:=2^{2s}s(1-s)\pi^{-\frac{N}{2}}\frac{\Gamma(\frac{N+2s}{2})}{\Gamma(2-s)} \end{equation} and $\Gamma(z)$ is the gamma function. This paper and \cite{HIK-20} are motivated by previous work in \cite{Farina,DDG,DDW,H} and we shall study the existence and nonexistence of stable solutions to \eqref{eq:1.1}. Farina \cite{Farina} studied \eqref{eq:1.1} in the case $s=1$, $\ell = 0$ and $N \geq 2$, and he showed the existence ($ p \geq p_c(N) $) and nonexistence ($1 < p < p_c(N)$) of stable (resp. stable outside a compact set) solutions, where the Joseph--Lundgren exponent $p_c(N)$ is defined by \[ p_c(N) := \left\{\begin{aligned} &\frac{ (N-2)^2 - 4N + 8 \sqrt{N-1} }{(N-2)(N-10)} & &\text{if } N \geq 11, \\ &\infty & &\text{if } 1 \leq N \leq 10. \end{aligned}\right. \] Next, Dancer, Du and Guo \cite[Theorem 1.2]{DDG} and Wang and Ye \cite[Theorem 1.7]{WY-12} studied the case $\ell > -2 $ and $N \geq 2 $ and showed the existence and nonexistence of stable and finite Morse index solutions in $H^1_{\rm loc} (\mathbb{R}^N) \cap L^\infty_{\rm loc}(\mathbb{R}^N)$. We remark that in \cite{WY-12}, they treated a weaker class of solutions than those in $H^1_{\rm loc} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N)$.
The threshold on $p$ is given by \[ p_+(N,\ell) := \left\{\begin{aligned} & \frac{ (N-2)^2 - 2 (\ell + 2) (\ell + N) + 2\sqrt{ (\ell +2)^3 (\ell + 2 N - 2) } } {(N-2) (N-4 \ell - 10) } & &\text{if} \ N > 10 + 4 \ell , \\ & \infty & &\text{if} \ 2 \leq N \leq 10 + 4 \ell. \end{aligned}\right. \] On the other hand, the case $s=1/2$, $\ell = 0$ and $N \geq 2$ was treated in Chipot, Chleb\'{\i}k, Fila and Shafrir \cite{CCFS} as an extension problem, and it is shown that there exists a positive radial solution to \eqref{eq:1.1} for $p \geq \frac{N+1}{N-1} = p_S(N,0)$, where $p_S(N,\ell)$ is defined in \eqref{eq:1.6}. Harada \cite{H} considered the same case $s=1/2$, $\ell = 0$ and $N \geq 2$, introduced the notion corresponding to the Joseph--Lundgren exponent $p_c(N)$ and proved the existence of a family of layered positive radial solutions when $p$ is Joseph--Lundgren supercritical or critical. In \cite{H}, the subcritical case is also treated. D\'{a}vila, Dupaigne and Wei \cite{DDW} dealt with the case $\ell = 0$ and $0<s<1$, and proved the existence and nonexistence of stable (resp. stable outside a compact set) solutions of \eqref{eq:1.1}. We remark that in \cite{DDW}, they treated solutions $u \in C^{2\sigma} (\mathbb{R}^N) \cap L^1( \mathbb{R}^N , (1+|x|)^{-N-2s}dx )$ (the sign of the weight in \cite[Theorem 1.1]{DDW} might be a misprint), where $\sigma > s$ and \[ L^q(\mathbb{R}^N , w(x)dx) := \Set{ u : \mathbb{R}^N \to \mathbb{R} | \| u \|_{L^q(\mathbb{R}^N , w(x)dx)} := \left( \int_{ \mathbb{R}^N} |u(x)|^q w(x) \,dx \right)^{\frac{1}{q}} < \infty }. \] However, in order to make their argument work, it seems appropriate to assume $ u \in C^{2\sigma} (\mathbb{R}^N) \cap L^2 (\mathbb{R}^N , (1+|x|)^{-N-2s}dx ) $. For this point, see Remark \ref{Remark:2.4} and \cite[Lemmata 2.1--2.4]{DDW}.
Notice that $L^2(\mathbb{R}^N, (1+|x|)^{-N-2s}dx ) \subset L^1(\mathbb{R}^N , (1+|x|)^{-N-2s}dx)$ since $(1+|x|)^{-N-2s} \in L^1(\mathbb{R}^N)$. We also refer to Li and Bao \cite{LB-19} for the study of positive solutions of \eqref{eq:1.1} with singularity at $x=0$ in the case $-2s < \ell \leq 0$ and $1 < p \leq p_S(N,\ell)$. The aim of this paper and \cite{HIK-20} is to extend the results of \cite{DDW,H} to the case $\ell \neq 0$ and to establish a result which is a fractional counterpart of \cite{DDG}. In this paper, we establish the nonexistence result. On the other hand, in \cite{HIK-20}, we will consider the existence result and study properties of solutions. After submitting this paper, we learned of Barrios and Quaas \cite{BQ-20} and the references therein from Alexander Quaas. In \cite{BQ-20} and Dai and Qin \cite{DQ}, they studied the nonexistence of positive solutions (and nonnegative nontrivial solutions) of \eqref{eq:1.1} for $\ell \in (-2s,\infty)$ and $0<p<p_S(N,\ell)$. On the other hand, Yang \cite{Ya-15} considered the existence of positive solutions of \eqref{eq:1.1} via the minimizing problem for $p=p_S(N,\ell)$. Finally, Fazly--Wei \cite{FW-16} considered \eqref{eq:1.1} for the case $0<\ell$ and $1 < p \leq p_S(N,\ell)$. They proved the nonexistence of stable solutions for $1<p< p_S(N,\ell)$, and for $p=p_S(N,\ell)$, they obtained the same result under the finite energy condition for solutions. See also the comments after Theorem \ref{Theorem:1.1}. We first introduce the notion of solutions of \eqref{eq:1.1}. \begin{definition} \label{Definition:1.1} Suppose \eqref{eq:1.2}.
We say that $u$ is a \emph{solution} of \eqref{eq:1.1} if $u$ satisfies $u\in H^s_{\mathrm{loc}}(\mathbb R^N)\cap L^\infty_{\rm loc}(\mathbb R^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $ and \begin{equation} \label{eq:1.4} \langle u,\varphi\rangle_{\dot H^s(\mathbb R^N)} =\int_{\mathbb R^N}|x|^\ell |u|^{p-1}u\varphi\, dx \quad\mbox{for all}\,\,\, \varphi\in C^\infty_c(\mathbb R^N) \end{equation} where \begin{equation} \label{eq:1.5} \langle u,\varphi\rangle_{\dot H^s(\mathbb R^N)} :=\frac{C_{N,s}}{2}\int_{\mathbb R^N\times\mathbb R^N}\frac{(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+2s}}\,dx\,dy \end{equation} and $C_{N,s}$ is the constant defined by \eqref{eq:1.3}. Remark that $|x|^\ell |u(x)|^{p-1} u(x) \in L^1_{\rm loc} (\mathbb{R}^N)$ due to $u \in L^\infty_{\rm loc}(\mathbb{R}^N)$ and \eqref{eq:1.2}. For $\Omega \subset \mathbb{R}^N$, we also set \[ \| u \|_{\dot{H}^s (\Omega) } := \left( \frac{C_{N,s}}{2} \int_{ \Omega \times \Omega } \frac{ |u(x) - u(y)|^2 }{|x-y|^{N+2s}} \, dx \, dy \right)^{ \frac{1}{2} }. \] \end{definition} \begin{remark} \label{Remark:1.1} In Section \ref{section:2}, we will see that \begin{enumerate} \item $\left\langle u, \varphi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \in \mathbb{R} $ for any $u \in H^s_{\mathrm{loc}}(\mathbb{R}^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $ and $\varphi \in C^\infty_c (\mathbb{R}^N)$. \item we may replace $C^\infty_{c}(\mathbb{R}^N)$ in Definition \ref{Definition:1.1} by $C^1_c(\mathbb{R}^N)$. \item our solution $u$ satisfies \eqref{eq:1.1} in the distribution sense, that is, \[ \int_{\mathbb{R}^N} u \left( -\Delta \right)^s\varphi \, dx = \left\langle u , \varphi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} = \int_{\mathbb{R}^N} |x|^\ell |u|^{p-1} u \varphi \, dx \quad \text{for every $\varphi \in C^\infty_c(\mathbb{R}^N)$}.
\] \end{enumerate} For the details, see Lemma \ref{Lemma:2.1}. \end{remark} In order to state our main result and for later use, following \cite{DDW}, we introduce some notation. We put \[ \begin{aligned} B_R &:= \Set{ x \in \mathbb{R}^N | \, |x| <R }, & S_R &:= \partial B_R = \Set{x \in \mathbb{R}^N | \, |x| =R}, \\ B_R^+ &:= \Set{ (x,t) \in \mathbb{R}^{N+1}_+ | \, \left| (x,t) \right|<R }, & S_R^+ &:= \partial B_R^+ = \Set{ (x,t) \in \mathbb{R}^{N+1}_+ | \, \left| (x,t) \right| = R }, \end{aligned} \] and $B_R^c:=\mathbb R^N\setminus B_R$. For $N\ge1$, $s\in(0,1)$ and $\ell>-2s$, we write \begin{equation}\label{eq:1.6} p_S(N,\ell) := \frac{N+2s+2\ell}{N-2s} \in (1,\infty). \end{equation} Note that $p_S(N,0)$ corresponds to the critical exponent of the fractional Sobolev inequality $H^s(\mathbb{R}^N) \subset L^{p_S(N,0)+1} (\mathbb{R}^N)$. Next, for $\alpha\in[0,(N-2s)/2)$, we set \begin{equation} \label{eq:1.7} \lambda(\alpha) :=2^{2s}\frac{\Gamma(\frac{N+2s+2\alpha}{4})\,\Gamma(\frac{N+2s-2\alpha}{4})} {\Gamma(\frac{N-2s-2\alpha}{4})\,\Gamma(\frac{N-2s+2\alpha}{4})}. \end{equation} It is known that \begin{equation}\label{eq:1.8} \text{the function $\alpha\mapsto \lambda(\alpha)$ is strictly decreasing} \end{equation} and $\lambda(\alpha)\to0$ as $\alpha \nearrow (N-2s)/2$ (see, e.g. Frank, Lieb and Seiringer \cite[Lemma 3.2]{FLS-08} and D\'avila, Dupaigne and Montenegro \cite[Appendix]{DDM}). \begin{remark} Let $v_\alpha(x) := |x|^{ - \left( \frac{N-2s}{2} - \alpha \right) }$ for $ 0 \leq \alpha < \frac{N-2s}{2}$. According to Fall \cite[Lemma 4.1]{Fall}, the constant $\lambda(\alpha)$ appears in the equation \[ (-\Delta)^{s} v_\alpha = \lambda(\alpha) |x|^{-2s} v_\alpha \quad \text{in} \ \mathbb{R}^N\setminus\{0\}.
\] \end{remark} Finally, we introduce the notions of stable, stable outside a compact set and Morse index equal to $K$: \begin{definition}\label{Definition:1.2} Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc}(\mathbb{R}^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1}. We say that $u$ is \emph{stable} if $u$ satisfies \begin{equation}\label{eq:1.9} p \int_{\mathbb{R}^N} |x|^\ell |u|^{p-1} \varphi^2 \, dx \leq \| \varphi \|_{\dot{H}^s(\mathbb{R}^N)}^2 \quad \text{for every $\varphi \in C^\infty_c(\mathbb{R}^N )$}. \end{equation} On the other hand, $u$ is called \emph{stable outside a compact set} if there exists an $R_0 \geq 0$ such that \begin{equation} \label{eq:1.10} p\int_{\mathbb R^N}|x|^\ell|u|^{p-1}\varphi^2\,dx\le \|\varphi\|^2_{\dot H^s(\mathbb R^N)} \quad \text{for every $\varphi \in C^\infty_c(\mathbb{R}^N \setminus \overline{B_{R_0}} )$}. \end{equation} Finally, a solution $u$ is said to have a \emph{Morse index equal to $K$} provided $K$ is the maximal dimension of subspaces $Z \subset C^\infty_c ( \mathbb{R}^N)$ with \[ \| \varphi \|_{\dot{H}^s(\mathbb{R}^N)}^2 - p \int_{ \mathbb{R}^N} |x|^\ell |u|^{p-1} \varphi^2 \, dx < 0 \quad \text{for each $\varphi \in Z \setminus \{0\}$}. \] \end{definition} \begin{remark} \label{Remark:1.3} \begin{enumerate} \item By a density argument, in Definition \ref{Definition:1.2}, we may replace $C_c^\infty(\mathbb{R}^N)$ and $C^\infty_c(\mathbb{R}^N \setminus \overline{ B_{R_0}} )$ by $C_c^1(\mathbb{R}^N)$ and $C^1_c( \mathbb{R}^N \setminus \overline{ B_{R_0} } )$. In addition, \eqref{eq:1.10} remains true for $\varphi \in C^1_c(\mathbb{R}^N)$ with $\varphi \equiv 0$ on $B_{R_0}$. \item As in \cite[Remark 1]{Farina}, we may check that if a solution $u$ has a finite Morse index, then $u$ is stable outside a compact set.
\end{enumerate} \end{remark} The following is the main result of this paper: \begin{theorem} \label{Theorem:1.1} Suppose \eqref{eq:1.2} and let $u\in H^s_{\mathrm{loc}}(\mathbb R^N)\cap L^\infty_{\rm loc}(\mathbb R^N) \cap L^2(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} which is stable outside a compact set. \begin{itemize} \item[\rm(i)] If $1<p<p_S(N,\ell)$, then $u\equiv0$. \item[\rm(ii)] If $p=p_S(N,\ell)$, then $u$ has finite energy, that is, \[ \|u\|^2_{\dot H^s(\mathbb R^N)}=\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\,dx<+\infty. \] Furthermore, if $u$ is stable, then $u\equiv 0$. \item[\rm(iii)] If $p_S(N,\ell)<p$ with \begin{equation} \label{eq:1.11} p\,\lambda\bigg(\frac{N-2s}{2}-\frac{2s+\ell}{p-1}\bigg)>\lambda(0), \end{equation} then $u\equiv0$, where $\lambda(\alpha)$ is the function given by \eqref{eq:1.7}. \end{itemize} \end{theorem} As mentioned above, it might be necessary to suppose $u \in L^2(\mathbb{R}^N, (1+|x|)^{-N-2s} dx)$ in \cite{DDW} and also in \cite{FW-16}, since a similar argument was used in \cite{FW-16}. Taking this point into consideration, we succeed in extending the nonexistence part of \cite{DDW} and \cite{FW-16} to the case $\ell \in (-2s, 0)$. Furthermore, Theorem \ref{Theorem:1.1} may be regarded as a fractional version of a part of \cite[Theorem 1.2]{DDG}. We remark that we deal with weak solutions, while in \cite{DDM,FW-16} classical solutions were studied. On the other hand, when $p_S(N,\ell) < p$ holds and \eqref{eq:1.11} fails to hold, we will show the existence of stable solutions in \cite{HIK-20} and observe the properties of those solutions.
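The quantities entering condition \eqref{eq:1.11} are computable with standard gamma-function routines; the following sketch (helper names are ours, and the parameters $N=3$, $s=1/2$ are purely illustrative) checks \eqref{eq:1.8} and the limit $\lambda(\alpha)\to 0$ as $\alpha \nearrow (N-2s)/2$ numerically:

```python
from math import gamma

def lam(alpha, N, s):
    # lambda(alpha) from (1.7), defined for 0 <= alpha < (N - 2s)/2
    return (2**(2 * s)
            * gamma((N + 2 * s + 2 * alpha) / 4) * gamma((N + 2 * s - 2 * alpha) / 4)
            / (gamma((N - 2 * s - 2 * alpha) / 4) * gamma((N - 2 * s + 2 * alpha) / 4)))

N, s = 3, 0.5                    # illustrative choice; here (N - 2s)/2 = 1
vals = [lam(0.01 * k, N, s) for k in range(100)]

# (1.8): alpha -> lambda(alpha) is strictly decreasing on the sampled grid
assert all(a > b for a, b in zip(vals, vals[1:]))
# lambda(alpha) -> 0 as alpha approaches (N - 2s)/2
assert vals[-1] < 0.05
```

For $N=3$, $s=1/2$ the explicit gamma values give $\lambda(0)=2\,\Gamma(1)^2/\Gamma(1/2)^2=2/\pi$, which the routine reproduces.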
By Remark \ref{Remark:1.3}, Theorem \ref{Theorem:1.1} also asserts that there is no solution of \eqref{eq:1.1} with finite Morse index when $1<p<p_S(N,\ell)$ or $p_S(N,\ell) < p $ with \eqref{eq:1.11}. When $p=p_S(N,\ell)$, we find that any solution $u$ of \eqref{eq:1.1} with finite Morse index satisfies $u \in \dot{H}^s (\mathbb{R}^N) \cap L^{p+1} (\mathbb{R}^N , |x|^{\ell} dx )$. The proof of Theorem \ref{Theorem:1.1} is basically similar to \cite{DDW}. However, in \cite{DDW}, they use the fact that solutions are of class $C^1$ or smooth, for instance, see \cite[the proofs of Theorem 1.1 for $1 < p \leq p_S(n)$ and Theorem 1.4, and the end of the proof of Theorem 1.1 for $p_S(n) < p$]{DDW}. On the other hand, \eqref{eq:1.1} contains the term $|x|^\ell$ and, especially in the case $ \ell < 0$, solutions of \eqref{eq:1.1} are not of class $C^1$ at the origin. Therefore, we need some modifications in the argument. In this paper, we first prove a local Pohozaev type identity as in Fall and Felli \cite{FF} and exploit it to show Theorem \ref{Theorem:1.1} for $1 < p \leq p_S(N,\ell)$. In addition, to show the monotonicity formula (Lemma \ref{Lemma:4.2}), we use the idea in \cite[Section 3]{FF}, where they studied the Almgren type frequency. This paper is organized as follows. In subsection \ref{section:2.1}, we investigate the properties of the $s$-harmonic extension of functions in $H^s_{\mathrm{loc}}(\mathbb{R}^N) \cap L^1( \mathbb{R}^N , (1+|x|)^{-N-2s} dx )$, that is, functions satisfying the extension problem. Subsection \ref{section:2.2} is devoted to the proof of the local Pohozaev identity, and the energy estimate is done in subsection \ref{section:2.3}. Section \ref{section:3} contains the proof of Theorem \ref{Theorem:1.1} for $ 1 < p \leq p_S(N,\ell)$ and in Section \ref{section:4}, we deal with the case $ p_S(N,\ell) < p$.
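As a side check on the exponents recalled in the introduction, one can verify numerically that the threshold of \cite{DDG} reduces to Farina's exponent when $\ell = 0$, i.e. $p_+(N,0)=p_c(N)$ for $N \geq 11$; a short sketch (function names are ours):

```python
from math import sqrt, isclose

def p_c(N):
    # Joseph--Lundgren exponent (s = 1, ell = 0); finite only for N >= 11
    return ((N - 2)**2 - 4 * N + 8 * sqrt(N - 1)) / ((N - 2) * (N - 10))

def p_plus(N, ell):
    # threshold p_+(N, ell); finite only for N > 10 + 4 * ell
    num = ((N - 2)**2 - 2 * (ell + 2) * (ell + N)
           + 2 * sqrt((ell + 2)**3 * (ell + 2 * N - 2)))
    return num / ((N - 2) * (N - 4 * ell - 10))

# p_+(N, 0) coincides with p_c(N) term by term
assert all(isclose(p_plus(N, 0.0), p_c(N)) for N in range(11, 31))
```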
\section{Preliminaries} \label{section:2} This section is divided into three subsections. In subsection \ref{section:2.1}, we show properties of functions which belong to $H^s_{\mathrm{loc}}(\mathbb R^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $, and give a relationship between a solution $u$ of \eqref{eq:1.1} and $s$-harmonic functions. In subsection \ref{section:2.2}, we recall local regularity estimates for the extension problem. This estimate is useful to establish the Pohozaev identity in Proposition~\ref{Proposition:2.2}. Furthermore, applying an argument similar to the one in \cite{DDW}, we also give energy estimates for solutions of \eqref{eq:1.1} in subsection \ref{section:2.3}. Throughout this paper, by the letter $C$ we denote generic positive constants and they may have different values also within the same line. Furthermore, we write $X : = (x,t) \in \mathbb{R}^{N+1}_+$. \subsection{Remark on notion of weak solutions} \label{section:2.1} We first prove properties of functions which belong to $ H^s_{\mathrm{loc}}(\mathbb R^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $. \begin{lemma} \label{Lemma:2.1} Let $u\in H^s_{\mathrm{loc}}(\mathbb R^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx)$. Then the following hold. \begin{itemize} \item[\rm(i)] For any $\psi \in C^\infty_c(\mathbb{R}^N)$, $\left\langle u , \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \in \mathbb{R}$ and \begin{equation}\label{eq:2.1} \begin{aligned} &\left| \left\langle u , \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \right| \\ \leq \ & C(N,s) \left\{ \| u \|_{H^s(B_{2R})} \| \psi \|_{H^s(B_{2R})} + \| u \|_{L^1( \mathbb{R}^N, (1+|x|)^{-N-2s} dx )} \| \psi \|_{L^1(B_{2R})} \right\} \end{aligned} \end{equation} where $\supp \psi \subset B_R$ with $R \geq 1$ and $C(N,s)$ is a constant depending on $N$ and $s$.
In addition, let $\varphi_1\in C^\infty_c(\mathbb R^N)$ with $\varphi_1(x)\equiv1$ in $B_1$ and $\varphi_1(x)\equiv0$ in $B_2^c$, and set $\varphi_n(x):=\varphi_1(n^{-1}x)$. Then for any $\psi\in C^\infty_c(\mathbb R^N)$, \begin{equation*} \langle \varphi_nu,\psi\rangle_{\dot H^s(\mathbb R^N)}\to \langle u,\psi\rangle_{\dot H^s(\mathbb R^N)} \quad{\rm as}\quad n\to\infty. \end{equation*} In particular, \begin{equation}\label{eq:2.2} \left\langle u , \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} = \int_{\mathbb{R}^N} u (-\Delta)^s \psi \, dx \quad \text{for each $\psi \in C^\infty_{c}(\mathbb{R}^N)$}. \end{equation} \item[\rm(ii)] Put \begin{equation} \label{eq:2.3} P_s (x,t):= p_{N,s}\frac{t^{2s}}{\left( |x|^2 + t^2 \right)^{\frac{N+2s}{2}}},\quad U(x,t) := (P_s (\cdot, t)*u ) (x) \end{equation} where $p_{N,s}>0$ is chosen so that $\| P_s(\cdot, t) \|_{L^1(\mathbb{R}^N)} = 1$. Then \begin{equation} \label{eq:2.4} -\diver \left( t^{1-2s} \nabla U \right)=0\quad{\rm in}\,\,\,\mathbb R^{N+1}_+, \quad U(x,0)=u(x), \quad U \in H^1_{\mathrm{loc}} \left( \overline{\mathbb{R}^{N+1}_+} ,t^{1-2s}dX \right) \end{equation} and for each $\psi\in C^\infty_c(\overline{\mathbb R^{N+1}_+})$ with $\partial_t\psi(x,0)=0$, \begin{equation} \label{eq:2.5} \begin{aligned} -\lim_{t\to+0}\int_{\mathbb R^N}t^{1-2s}\partial_tU(x,t)\psi(x,t)\,dx &= \kappa_s\int_{\mathbb R^N}u(x)(-\Delta)^s\psi(x,0)\,dx \\ &= \kappa_s \left\langle u , \psi (\cdot, 0) \right\rangle_{\dot{H}^s(\mathbb{R}^N)}.
\end{aligned} \end{equation} Here \[ \begin{aligned} & H^1_{\mathrm{loc}} \left( \overline{\mathbb{R}^{N+1}_+} ,t^{1-2s}dX \right) \\ := \ & \Set{ V : \mathbb{R}^{N+1}_+ \to \mathbb{R} | \int_{B_R^+} t^{1-2s} \left\{ |\nabla V|^2 + V^2 \right\} dX < \infty \ \ \text{for all $R > 0$}} \end{aligned} \] and \begin{equation} \label{eq:2.6} \kappa_s:=\frac{\Gamma(1-s)}{2^{2s-1}\Gamma(s)}. \end{equation} \end{itemize} \end{lemma} \begin{remark} \label{Remark:2.1} \begin{enumerate} \item In Lemma \ref{Lemma:2.2} and Remark \ref{Remark:2.2}, we will see that \eqref{eq:2.5} holds for every $\psi \in C^\infty_c ( \overline{ \mathbb{R}^{N+1}_+} )$ if we assume that $u$ is a solution of \eqref{eq:1.1}. \item Here we collect properties of the Poisson kernel $P_s(x,t)$. For the properties below, see, for instance, \cite{CS,DiNPV-12,FF,FF-15,JLX}. First of all, it is known that the Fourier transform of $P_s(x,t)$ is given by $\widehat{P}_s(\xi,t) = \theta_0 \left( 2\pi |\xi| t \right)$ where \[ \begin{aligned} & \wh{v} (\xi) := \int_{ \mathbb{R}^N} v(x) e^{ - 2\pi i x \cdot \xi } \, dx \quad \text{for $v \in L^1 (\mathbb{R}^N)$}, \\ &\theta_0(t) : = \frac{2}{\Gamma(s)} \left( \frac{t}{2} \right)^s K_s(t), \quad \theta_0'' + \frac{1-2s}{t} \theta_0' - \theta_0 = 0 \quad \text{in} \ (0,\infty), \quad \theta_0 (0) = 1, \end{aligned} \] $K_s(t)$ is the modified Bessel function of the second kind with order $s$ and \[ \kappa_s = \int_{0}^\infty t^{1-2s} \left\{ (\theta_0'(t))^2 + \theta_0^2(t) \right\} \,dt = \frac{\Gamma(1-s)}{2^{2s-1} \Gamma(s) }.
\] By these properties, it is possible to prove that for each $v \in H^s(\mathbb{R}^N)$, \[ \int_{\mathbb{R}^{N+1}_+} t^{1-2s} \left| \nabla ( P_s (\cdot, t) \ast v )(x) \right|^2 \, dX = \kappa_s \int_{\mathbb{R}^N} \left( 4\pi^2|\xi|^2 \right)^s \left| \wh{v} (\xi) \right|^2 \, d \xi = \kappa_s \| v \|_{\dot{H}^s(\mathbb{R}^N)}^2 \] and for $U(x,t) = (P_s(\cdot , t) \ast u)(x)$ and $u \in \dot{H}^s (\mathbb{R}^N)$, \begin{equation}\label{eq:2.7} \int_{\mathbb{R}^{N+1}_+} t^{1-2s} |\nabla U|^2 \, dX = \kappa_s \| u \|_{\dot{H}^s(\mathbb{R}^N)}^2. \end{equation} Furthermore, for each $\zeta \in C^1_c(\overline{\mathbb{R}^{N+1}_+})$, \begin{equation}\label{eq:2.8} \kappa_s \| \zeta(\cdot,0) \|_{\dot{H}^s(\mathbb{R}^N)}^2 = \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} | \nabla ( P_s (\cdot, t) \ast \zeta(\cdot, 0) ) (x) |^2 \, dX \leq \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} | \nabla \zeta |^2 \, dX. \end{equation} Finally, for $\varphi \in C^\infty_{c} (\mathbb{R}^N)$, \[ - \lim_{t \to +0} t^{1-2s} \partial_t \left( P_s( \cdot, t ) \ast \varphi \right) (x) = \kappa_s \left( -\Delta \right)^s \varphi (x) \quad \text{for any $x \in \mathbb{R}^N$}. \] \end{enumerate} \end{remark} \begin{proof}[Proof of Lemma \ref{Lemma:2.1}] (i) We first show $\left\langle u, \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \in \mathbb{R}$ and \eqref{eq:2.1}. Let $\psi \in C^\infty_{c}(\mathbb{R}^N)$ with $\supp \psi \subset B_R$ and $R \geq 1$.
Since $\psi \equiv 0$ on $B_R^c$ and $|x-y| \geq |y|/2$ for $x \in B_R$ and $y \in B_{2R}^c$, we see from $R \geq 1$ that \[ \frac{1+|y|}{|x-y|} \leq 2\, \frac{1+|y|}{|y|} \leq 4 \quad \text{for each $x \in B_R$ and $y \in B^c_{2R}$} \] and \[ \begin{aligned} & \int_{\mathbb{R}^N \times \mathbb{R}^N} \frac{\left| u(x) - u(y) \right| \left| \psi(x) - \psi(y) \right|} {|x-y|^{N+2s}} \, dx \, d y \\ = \ & \left( \int_{B_{2R} \times B_{2R} } + 2 \int_{B_{2R} \times B_{2R}^c} \right) \frac{\left| u(x) - u(y) \right| \left| \psi(x) - \psi(y) \right|}{|x-y|^{N+2s}} \, dx \, d y \\ \leq \ & \frac{2}{C_{N,s}} \| u \|_{\dot{H}^s(B_{2R})} \| \psi \|_{\dot{H}^s (B_{2R})} \\ & \qquad + 2 \int_{ B_{2R}} dx \int_{|y| \geq 2R} \left( |u(x)| + |u(y)| \right) |\psi(x)| \left( 1 + |y| \right)^{-N-2s} \left( \frac{1+|y|}{|x-y|} \right)^{N+2s} dy \\ \leq \ & \frac{2}{C_{N,s}} \| u \|_{\dot{H}^s(B_{2R})} \| \psi \|_{\dot{H}^s (B_{2R})} \\ & \qquad + C (N,s) \int_{ B_{2R}} dx \int_{ |y| \geq 2R} \left( |u(x)| + |u(y)| \right) |\psi(x)| (1+|y|)^{-N-2s} \, dy \\ \leq \ & \frac{2}{C_{N,s}} \| u \|_{\dot{H}^s(B_{2R})} \| \psi \|_{\dot{H}^s (B_{2R})} \\ &\quad + C (N,s) \left\{ \| u \|_{L^2(B_{2R})} \| \psi \|_{L^2(B_{2R})} + \| \psi \|_{L^1(B_{2R})} \| u \|_{L^1(\mathbb{R}^N, (1+|x|)^{-N-2s} dx )} \right\} \end{aligned} \] where $C_{N,s}$ is the constant given by \eqref{eq:1.3}. Since $\| u \|_{H^s(B_{2R})} < \infty$ due to $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N)$, we observe that $\left\langle u, \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \in \mathbb{R}$ and \eqref{eq:2.1} holds. The assertion $\left\langle \varphi_n u , \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \to \left\langle u , \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)}$ as $n \to \infty$ follows from \eqref{eq:2.1} and $\| \varphi_n u - u \|_{L^1(\mathbb{R}^N , (1+|x|)^{-N-2s} dx )} \to 0$. Finally we prove \eqref{eq:2.2}.
We remark that due to Fall and Weth \cite[Lemma 2.1]{FW} and $\supp \psi \subset B_R$, there exists a $C_R>0$ such that \[ \left| \int_{|x-y| > \varepsilon } \frac{\psi(x) - \psi(y) }{|x-y|^{N+2s}} \, dy \right| \leq C_R \| \psi \|_{C^2(\mathbb{R}^N)} \left( 1 + |x| \right)^{-N-2s} \quad \text{for all $x \in \mathbb{R}^N$ and $\varepsilon \in (0,1)$}. \] Therefore, $\int_{\mathbb{R}^N} u (-\Delta )^s \psi \, dx \in \mathbb{R}$ by $u \in L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx)$. Moreover, by the dominated convergence theorem, \[ \begin{aligned} \left\langle u, \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} &= \frac{C_{N,s}}{2} \lim_{\varepsilon \to 0} \int_{ |x-y|> \varepsilon } \frac{ \left( u(x) - u(y) \right) \left( \psi(x) - \psi (y) \right) }{|x-y|^{N+2s}} \, dx \, dy \\ &= \lim_{\varepsilon \to 0} C_{N,s} \int_{\mathbb{R}^N} \left[ u(x) \int_{|x-y| > \varepsilon} \frac{\psi(x) - \psi (y) }{|x-y|^{N+2s}} \, dy \right] dx = \int_{ \mathbb{R}^N} u(x) \left( - \Delta \right)^s \psi (x) \, dx. \end{aligned} \] Hence, (i) holds. (ii) Notice that $U$ is well-defined thanks to $u \in L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx)$. We prove \eqref{eq:2.4}. The assertion $ -\diver (t^{1-2s} \nabla U) = 0$ in $\mathbb R^{N+1}_+$ follows from the fact $ -\diver (t^{1-2s} \nabla P_s ) = 0 $ in $\mathbb R^{N+1}_+$. It is also easily seen that $P_s (\cdot, t) \to \delta_0$ as $t \to +0$; hence, $U(x,0) = u(x)$ for $u \in C^\infty_{c} (\mathbb{R}^N)$. For general $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^1(\mathbb{R}^N,(1+|x|)^{-N-2s}dx) $, the assertion follows from $U \in H^1_{\mathrm{loc}} ( \overline{ \mathbb{R}^{N+1}_+ }, t^{1-2s}dX ) $ (this will be proved below) and the existence of the trace operator $H^1_{\mathrm{loc}} (\overline{ \mathbb{R}^{N+1}_+}, t^{1-2s}dX ) \to H^s_{\mathrm{loc}} (\mathbb{R}^N)$ (see, for instance, Lions \cite{L-59}, and Demengel and Demengel \cite{DD-12}).
In order to prove $U \in H^1_{\mathrm{loc}} ( \overline{\mathbb{R}^{N+1}_+}, t^{1-2s}dX )$, we will show $U \in H^1( B_R \times (0,R),t^{1-2s}dX )$ for each $R > 0$. To this end, let $\varphi_1$ be as in (i), $n_0 > 2R$ and write \[ U(x,t) = (P_s(\cdot, t)*u ) (x) = ( P_s(\cdot,t)*( \varphi_{n_0} u + (1-\varphi_{n_0}) u ) ) (x) =: U_1(x,t) + U_2(x,t). \] For $U_1$, we have $\nabla U_1 \in L^2(\mathbb{R}^{N+1}_+,t^{1-2s}dX)$ due to $\varphi_{n_0} u \in H^s(\mathbb{R}^N)$ and Remark \ref{Remark:2.1}. In addition, Young's inequality and the fact $\| P_s (\cdot , t) \|_{L^1(\mathbb{R}^N)} = 1$ imply \[ \begin{aligned} \int_{0}^R dt\int_{\mathbb{R}^N} t^{1-2s}\left( \left( P_s (\cdot , t) \ast (\varphi_{n_0} u ) \right)(x)\right)^2 \, dx &\leq \int_{0}^R t^{1-2s} \| P_s (\cdot ,t ) \|_{L^1(\mathbb{R}^N)}^2 \| \varphi_{n_0} u \|_{L^2(\mathbb{R}^N)}^2 \, dt \\ & = C_{R,s} \| \varphi_{n_0} u \|_{L^2(\mathbb{R}^N)}^2. \end{aligned} \] Hence, $U_1 \in H^1(B_R \times (0,R),t^{1-2s}dX)$ for every $R > 0$. On the other hand, for $U_2$, thanks to $n_0 > 2R$, we may write \[ U_2(x,t) =p_{N,s} \int_{|y|\ge 2R} \frac{t^{2s}}{\left( |x-y|^2 + t^2 \right)^{\frac{N+2s}{2}}}(1-\varphi_{n_0}(y)) u(y)\,dy. \] Noting $|x-y|\ge |y|/2$ for all $|y| \ge n_0$ and $|x| \le R$, we see that \[ \left| U_2(x,t) \right| \leq C t^{2s} \int_{ |y| \geq n_0} |u(y)| (1+|y|)^{-N-2s} \left( \frac{1+|y|}{|x-y|} \right)^{N+2s} dy \leq C t^{2s} \| u \|_{L^1(\mathbb{R}^N,(1+|x|)^{-N-2s}dx)} \] and that \begin{align*} |\nabla U_2(x,t)| & \le C \int_{|y| \geq n_0} \left( \frac{t^{2s-1}}{|x-y|^{N+2s}} + \frac{t^{2s}}{|x-y|^{N+1+2s}} \right) |u(y)| \, dy \\ & \le C \| u \|_{L^1(\mathbb{R}^N,(1+|x|)^{-N-2s}dx)} \left( t^{2s-1} + t^{2s} \right) \end{align*} for all $(x,t) \in B_R \times (0,R)$. This yields $U_2 \in H^1(B_R \times (0,R),t^{1-2s}dX)$, which implies $U=U_1 +U_2 \in H^1_{\mathrm{loc}} \left( \overline{\mathbb{R}^{N+1}_+},t^{1-2s}dX \right)$. Next we prove \eqref{eq:2.5}.
Let $\psi \in C^\infty_c(\overline{\mathbb R^{N+1}_+})$ with $\mathrm{supp}\, \psi \subset B_{R/2} \times [0,R] $ and $\partial_t \psi(x,0) = 0$. Then, by \eqref{eq:2.3} and Fubini's theorem, we have \begin{equation} \label{eq:2.9} \begin{aligned} \hspace{-7pt} -\int_{\mathbb R^N}t^{1-2s}\partial_t U(x,t)\psi(x,t)\,dx &= -t^{1-2s}\int_{\mathbb R^N}\partial_t \left( \int_{\mathbb R^N} \frac{ p_{N,s}t^{2s} u(y) }{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\,dy \right)\,\psi(x,t)\,dx \\ & = - t^{1-2s}\int_{\mathbb R^N} \partial_t \left[ \int_{\mathbb R^N} \frac{p_{N,s} t^{2s} \psi(x,t) }{(|x-y|^{2}+t^2)^{\frac{N+2s}{2}}}\,dx \right]u(y)\,dy \\ & \hspace{1cm} + t^{1-2s} \int_{\mathbb R^N}\left[\int_{\mathbb R^N} \frac{p_{N,s} t^{2s}\partial_t \psi(x,t) }{ (|x-y|^2 + t^2)^{\frac{N+2s}{2}}}\,dx\right] u(y)\,dy \\ & =\int_{\mathbb R^N}\bigg(I_1(y,t)+I_2(y,t)\bigg)u(y)\, dy \end{aligned} \end{equation} where \begin{align*} I_1(y,t)&:=- t^{1-2s}\partial_t \left[ \int_{\mathbb R^N} \frac{p_{N,s} t^{2s} \psi(x,t) }{(|x-y|^{2}+t^2)^{\frac{N+2s}{2}}}\,dx \right], \\ I_2(y,t)&:=t^{1-2s}\int_{\mathbb R^N} \frac{p_{N,s} t^{2s}\partial_t \psi(x,t) }{ (|x-y|^2 + t^2)^{\frac{N+2s}{2}}}\,dx. \end{align*} From $\psi (\cdot,0) \in C^\infty_c(\mathbb{R}^N)$ and Remark \ref{Remark:2.1}, we observe that \begin{equation}\label{eq:2.10} -\lim_{t\to +0}t^{1-2s}\partial_t (P_s(\cdot,t) \ast \psi(\cdot,0) ) (x)=\kappa_s(-\Delta)^s\psi(x,0) \quad \text{for $x \in \mathbb{R}^N$}. \end{equation} Notice also that \begin{equation} \label{eq:2.11} \begin{aligned} & I_1(y,t) \\ = \ &- t^{1-2s} \left\{\partial_t \left[ \int_{\mathbb R^N} \frac{p_{N,s} t^{2s} \left(\psi(x,t) - \psi(x,0) \right) } {(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\,dx \right]\right. \\ & \hspace{6cm} + \left.
\partial_t \left[ \int_{\mathbb R^N} \frac{p_{N,s} t^{2s} \psi(x,0) }{(|x-y|^{2}+t^2)^{\frac{N+2s}{2}}}\,dx \right] \right\} \\ = \ & - I_3(y,t) - t^{1-2s}\partial_t (P_s(\cdot,t) \ast \psi(\cdot,0) ) (y) \end{aligned} \end{equation} where \[ I_3(y,t):=-t^{1-2s}\partial_t \left[ \int_{\mathbb R^N} \frac{p_{N,s} t^{2s} \left(\psi(x,t) - \psi(x,0) \right) } {(|x-y|^{2}+t^2)^{\frac{N+2s}{2}}}\,dx \right]. \] Thus, if we prove that \begin{align} \label{align:2.12} &\lim_{t\to+0}\bigg(\left|I_2(y,t)\right|+ \left|I_3(y,t)\right|\bigg)=0 \quad \text{uniformly for $y \in \mathbb{R}^N$}, \\ \label{align:2.13} & \begin{aligned} &|I_2(y,t)|+|I_3(y,t)|+|t^{1-2s}\partial_t (P_s(\cdot,t) \ast \psi(\cdot,0) ) (y)|\le C|y|^{-N-2s} \\ &\hspace{6cm}\text{for each $|y| \geq 2R$ and $t \in (0,1]$}, \end{aligned} \end{align} then we obtain \eqref{eq:2.5} by applying the dominated convergence theorem, together with $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^1(\mathbb{R}^N,(1+|x|)^{-N-2s}dx) $ and \eqref{eq:2.10}--\eqref{align:2.13}, to \eqref{eq:2.9}. We first deal with \eqref{align:2.12}. Recall that $ {\rm supp}\, \psi \subset B_{R}$ and $\partial_t \psi (x,0) \equiv 0$. Then, \[ \begin{aligned} & \frac{\psi(x,t) - \psi(x,0)}{t} = \int_{0}^{1} \partial_t \psi(x,t\theta) \,d\theta = \int_0^1 \left( \partial_t \psi (x,t \theta) - \partial_t \psi (x,0) \right) d \theta =:\Psi (x,t) \in C^\infty(\overline{\mathbb R^{N+1}_+}), \\ & \Psi(x,0)=0, \quad |\Psi(x,t)| \le \varphi_R(x) t, \quad \left| \partial_t \Psi(x,t) \right| \le \varphi_R(x) \quad \text{for each $(x,t) \in \mathbb R^N \times [0,1]$} \end{aligned} \] where $\varphi_R \in C_c(B_{3R/2})$.
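The pointwise bounds on $\Psi$ can be seen, for instance, from the mean value theorem applied to $\partial_t \psi$: since $\partial_t \psi(x,0) = 0$,
\[
\left| \Psi(x,t) \right| \leq \int_0^1 \left| \partial_t \psi(x,t\theta) - \partial_t \psi(x,0) \right| d\theta \leq t \sup_{0 \leq \tau \leq 1} \left| \partial_t^2 \psi(x,\tau) \right| \int_0^1 \theta \, d\theta \leq \varphi_R(x)\, t,
\]
provided $\varphi_R$ dominates the second $t$-derivatives of $\psi$; the bound $\left| \partial_t \Psi(x,t) \right| \leq \varphi_R(x)$ follows in the same way.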
Hence, we observe from $(|x-y|^2 + t^2)^{1/2} \geq t$ that \begin{equation} \label{eq:2.14} \begin{aligned} & |I_3(y,t)| \\ = \ & \left|t^{1-2s} \partial_t \left[ \int_{\mathbb R^N} \frac{p_{N,s} t^{1+2s} \Psi(x,t) }{(|x-y|^2+t^2)^{\frac{N+2s}{2}}} \,dx \right] \right| \\ \le \ & C \int_{\mathbb R^N} t \left| \frac{ \Psi(x,t) }{(|x-y|^2+t^2)^{\frac{N+2s}{2}}} \right| \,dx + t^2 \int_{\mathbb R^N} \left| \frac{p_{N,s} \partial_t\Psi(x,t) }{(|x-y|^2+t^2)^{\frac{N+2s}{2}}} \right| \,dx \\ & + C t^2 \int_{ \mathbb{R}^N} \left| \frac{\Psi(x,t)}{ (|x-y|^2 + t^2)^{ \frac{N+2s}{2} + \frac{1}{2} } } \right| \, dx \\ \le \ & C \int_{\mathbb R^N} t^2 \frac{\varphi_R(x) }{(|x-y|^2+t^2)^{\frac{N+2s}{2}}} \,dx = C t^{2-2s} \int_{\mathbb R^N} \frac{\varphi_R(y-tz) }{(|z|^2 +1)^{\frac{N+2s}{2}}} \,dz \leq C \| \varphi_R \|_{L^\infty(\mathbb{R}^N)} t^{2-2s} \end{aligned} \end{equation} where $C> 0$ is independent of $y \in \mathbb{R}^N$. Hence, $I_3(\cdot,t) \to 0$ in $L^\infty(\mathbb{R}^N)$ as $ t \to 0$. Since $|\partial_t \psi(x,t) | \le t\varphi_R(x)$ for all $(x,t) \in \mathbb R^N \times [0,1]$ due to $\partial_t\psi(x,0)=0$, in a similar way we can check that $I_2(\cdot,t) \to 0$ in $L^\infty(\mathbb{R}^N)$ as $ t \to 0$, so \eqref{align:2.12} holds. Next, we treat \eqref{align:2.13}. Let $|y| \geq 2R$ and consider $I_3$. Noting $|x-y|\ge|y|-|x|\ge |y|/4 $ for all $|y|\ge 2R$ and $|x|\le 3R/2$, by \eqref{eq:2.14} we see that \begin{equation} \label{eq:2.15} \begin{aligned} |I_3(y,t)| & \le C\int_{\mathbb R^N} \frac{t^2 \varphi_R(x) }{( |x-y|^2 + t^2 )^{\frac{N+2s}{2}}} \,dx \\ & \le C\int_{|x|\le{3R/2}} t^2 \| \varphi_R \|_{L^\infty(\mathbb{R}^N)} |x- y|^{-N-2s} \, dx \le C t^2 |y|^{-N-2s}. \end{aligned} \end{equation} In a similar way, we may prove \begin{equation} \label{eq:2.16} |I_2(y,t)|\le C t^2 |y|^{-N-2s}.
\end{equation} Furthermore, applying a change of variables and integration by parts, and noting $\supp \psi \subset B_R$, we obtain \begin{equation} \label{eq:2.17} \begin{aligned} t^{1-2s}\partial_t (P_s(\cdot,t) \ast \psi(\cdot,0) ) (y) & = t^{1-2s} \partial_t \left[ \int_{\mathbb R^N} \frac{p_{N,s} \psi(y-tz,0) }{(|z|^2+1)^{\frac{N+2s}{2}}} \,dz \right] \\ & = -t^{1-2s} \int_{\mathbb R^N} \frac{p_{N,s} \nabla_y \psi(y-tz,0) \cdot z }{(|z|^2+1)^{\frac{N+2s}{2}}} \,dz \\ & = \int_{|x|\le R}\frac{p_{N,s} \nabla_x \psi(x,0) \cdot (x-y) }{(|x-y|^2+t^2)^{\frac{N+2s}{2}}} \,dx \\ & =-\int_{|x|\le R} p_{N,s} \psi(x,0) \mathrm{div}_x \left[ \frac{x-y}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}} \right] \,dx. \end{aligned} \end{equation} Since it is easily seen that for $|y| \ge 2R$ \[ \int_{|x|\le R} |\psi(x,0) | \left| \mathrm{div}_x \left[ \frac{x-y}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}} \right] \right| \,dx \le C\int_{|x|\le R} \left| x - y \right|^{ -N-2s } \,dx \le C |y|^{-N-2s}, \] by \eqref{eq:2.17}, we have \[ \left| t^{1-2s} \partial_t \left( P_s(\cdot , t) \ast \psi (\cdot , 0) \right) (y) \right| \leq C |y|^{-N-2s}. \] This together with \eqref{eq:2.15} and \eqref{eq:2.16} implies \eqref{align:2.13}, which completes the proof of Lemma~\ref{Lemma:2.1}. \end{proof} \par Applying Lemma~\ref{Lemma:2.1}, we have the following. \begin{lemma} \label{Lemma:2.2} Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s} dx )$ be a solution of \eqref{eq:1.1} under \eqref{eq:1.2}, and let $U$ be the function given in \eqref{eq:2.3}.
Then \begin{equation} \label{eq:2.18} \begin{aligned} \int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla U \cdot\nabla\psi\,dX &=\kappa_s\int_{\mathbb R^N}|x|^\ell|u|^{p-1}u\psi(x,0)\, dx = \kappa_s \left\langle u , \psi (\cdot,0) \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \end{aligned} \end{equation} for every $ \psi \in C^1_c ( \overline{\mathbb{R}^{N+1}_+} )$ where $\kappa_s$ is the constant given in \eqref{eq:2.6}. \end{lemma} \begin{remark}\label{Remark:2.2} Since $U \in C^\infty(\mathbb{R}^{N+1}_+)$, by \eqref{eq:2.4}, \eqref{eq:2.18} and integration by parts, for every $\psi \in C^\infty_{c}( \overline{\mathbb{R}^{N+1}_+} )$, we have \[ \begin{aligned} - \lim_{\tau \to +0} \int_{\mathbb{R}^N} \tau^{1-2s} \partial_t U (x,\tau) \psi(x,\tau) \, dx &= \lim_{\tau \to +0} \int_{ \mathbb{R}^N \times (\tau,\infty)} t^{1-2s} \nabla U \cdot \nabla \psi \, dX \\ &= \kappa_s \int_{\mathbb{R}^N} |x|^\ell |u|^{p-1} u \psi(x,0) \, dx = \kappa_s \left\langle u , \psi (\cdot, 0) \right\rangle_{\dot{H}^s(\mathbb{R}^N)}. \end{aligned} \] Hence, for solutions $u \in \dot{H}^s (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^1(\mathbb{R}^N,(1+|x|)^{-N-2s}dx) $ of \eqref{eq:1.1}, the corresponding extension problem is \[ \left\{\begin{aligned} -\diver \left( t^{1-2s} \nabla U \right) &=0 & &\text{in} \ \mathbb{R}^{N+1}_+, \\ U(x,0) &= u(x) & &\text{on} \ \mathbb{R}^N,\\ - \lim_{t \to +0} t^{1-2s} \partial_t U(x,t) &= \kappa_s |x|^\ell |u|^{p-1} u & &\text{on} \ \mathbb{R}^N. \end{aligned}\right. \] \end{remark} \begin{proof}[Proof of Lemma \ref{Lemma:2.2}] We first prove \eqref{eq:2.18} for $\psi \in C^\infty_c ( \overline{\mathbb{R}^{N+1}_+} )$ with $\partial_t \psi(x,0) \equiv 0$ on $\mathbb{R}^N$.
Let $\varepsilon>0$, let $u$ be a solution of \eqref{eq:1.1} and let $\psi \in C^\infty_c ( \overline{ \mathbb{R}^{N+1}_+ } )$ with $\partial_t \psi(x,0) = 0$. Then we have \begin{align*} 0 & = \int_{\mathbb R^N \times (\varepsilon,\infty)} - \diver \left(t^{1-2s} \nabla U \right) \psi (x,t) \,dX \\ & = \int_{\mathbb R^N \times (\varepsilon,\infty)} t^{1-2s} \nabla U \cdot \nabla \psi \,dX + \int_{\mathbb R^N} \varepsilon^{1-2s} \partial_t U(x,\varepsilon) \psi(x,\varepsilon) \,dx. \end{align*} Letting $\varepsilon \to 0$ and using \eqref{eq:2.5}, it follows that \[ \int_{\mathbb R^{N+1}_+} t^{1-2s} \nabla U \cdot \nabla \psi \,dX = \kappa_s \int_{\mathbb R^N} u(x) (-\Delta)^s \psi(x,0) \,dx. \] Since $u$ is a solution of \eqref{eq:1.1}, Lemma \ref{Lemma:2.1} yields \[ \int_{\mathbb R^{N+1}_+} t^{1-2s} \nabla U \cdot \nabla \psi \,dX = \kappa_s \left\langle u , \psi (\cdot, 0) \right\rangle_{\dot{H}^s(\mathbb{R}^N)} = \kappa_s \int_{\mathbb{R}^N} |x|^\ell |u|^{p-1} u \psi(x,0) \, dx \] for every $\psi \in C^\infty_c (\overline{\mathbb{R}^{N+1}_+} )$ with $\partial_t \psi (x,0) \equiv 0$. Next, we show \eqref{eq:2.18} for $\psi \in C^\infty_c ( \overline{\mathbb{R}^{N+1}_+} )$. Let $\psi \in C^\infty_c ( \overline{\mathbb R^{N+1}_+} )$ and let $(\rho_\varepsilon(t))_\varepsilon$ be a mollifier in $t$ with $\rho_\varepsilon(-t) = \rho_\varepsilon(t)$. Set \[ \Psi(x,t) := \left\{ \begin{aligned} & \psi(x,t) & &\text{if $t\geq 0$}, \\ & \psi(x,-t) & &\text{if $t < 0$}. \end{aligned} \right. \] Then $\Psi \in C^\infty( \mathbb R^{N+1} \setminus \{ t=0 \} ) \cap W^{1,\infty} (\mathbb R^{N+1})$ and $\Psi(\cdot,t) \in C^\infty_c(\mathbb R^N)$ for every $t\in \mathbb{R}$. Define $\Psi_\varepsilon$ by \[ \Psi_\varepsilon(x,t) := \int_{\mathbb R} \rho_\varepsilon (t-\tau) \Psi(x,\tau) \,d\tau = \int_{\mathbb R} \rho_\varepsilon (\tau) \Psi(x, t - \tau) \,d\tau.
\] It is easily seen that $\Psi_\varepsilon \in C^\infty_c ( \mathbb R^{N+1} )$ with $\partial_t \Psi_\varepsilon(x,0) = 0$ thanks to the symmetry of $\Psi$ and $\rho_\varepsilon$ in $t$. Since it holds that for any $k\in\mathbb N$ and $t_1>0$ \begin{equation}\label{eq:2.19} \lim_{\varepsilon\to 0}\bigg(\| \Psi_\varepsilon (\cdot, 0) - \psi(\cdot,0) \|_{C^k(\mathbb R^N)} + \sup_{ t \ge t_1 >0 } \| \Psi_\varepsilon (\cdot, t) - \psi(\cdot,t) \|_{C^k(\mathbb R^N)}\bigg)=0 \end{equation} and $\| \Psi_\varepsilon \|_{W^{1,\infty}(\mathbb R^{N+1}_+)} \le M_0$ for some $M_0>0$, we deduce that \begin{equation} \label{eq:2.20} \Psi_\varepsilon \to \psi \quad \text{strongly in $H^1(\mathbb R^{N+1}_+, t^{1-2s}dX)$}. \end{equation} Therefore, from \[ \int_{\mathbb R^{N+1}_+} t^{1-2s} \nabla U \cdot \nabla \Psi_\varepsilon \,dX = \kappa_s \int_{\mathbb R^N} |x|^\ell |u|^{p-1}u \Psi_\varepsilon(x,0) \,dx \] together with \eqref{eq:2.19}, \eqref{eq:2.20} and $u \in L^\infty_{\rm loc} (\mathbb{R}^N)$, letting $\varepsilon \to 0$, we have \eqref{eq:2.18} for all $\psi \in C^\infty_c ( \overline{\mathbb{R}^{N+1}_+} )$. Finally, since functions in $C^1_c ( \overline{ \mathbb{R}^{N+1}_+} )$ may be approximated by functions in $C^\infty_c ( \overline{ \mathbb{R}^{N+1}_+} )$ in the $C^1( \overline{ \mathbb{R}^{N+1}_+} )$ sense, \eqref{eq:2.18} holds for every $\psi \in C^1_c (\overline{ \mathbb{R}^{N+1}_+} )$, completing the proof. \end{proof} \subsection{Local regularity and the Pohozaev identity} \label{section:2.2} In this subsection we recall local regularity estimates for the extension problem in Remark \ref{Remark:2.2}, which are taken from \cite[Section~3.1]{FF} (see also \cite{CaSi,JLX}). Furthermore, we prove a Pohozaev identity for solutions of the extension problem. \par We first recall local regularity estimates.
\begin{proposition} \label{Proposition:2.1} {\rm(Fall and Felli \cite[Proposition~3.2, Lemma~3.3]{FF}, Jin, Li and Xiong \cite[Proposition~2.6]{JLX}, Cabr\'e and Sire \cite[Lemma~4.5]{CaSi})} Let $R_0>0$, $x_0 \in \mathbb{R}^N$, $g(x,u) : B_{4R_0}(x_0) \times \mathbb{R} \to \mathbb{R} $ and let $W \in H^1(B^+_{4R_0} (x_0,0), t^{1-2s}dX)$ be a weak solution to \[ \left\{\begin{aligned} -\diver \left( t^{1-2s} \nabla W \right) &= 0 & &\mathrm{in} \ B_{4R_0}^+(x_0,0), \\ - \lim_{t \to +0} t^{1-2s} \partial_t W (x,t) &= \kappa_s g(x, W(x,0) ) & &\mathrm{on} \ B_{4R_0}(x_0), \end{aligned}\right. \] that is, for all $\varphi \in C^\infty_{c}( B_{4R_0}^+(x_0,0) \cup B_{4R_0} (x_0) )$, \[ \int_{B_{4R_0}^+(x_0,0)} t^{1-2s} \nabla W \cdot \nabla \varphi \, d X = \kappa_s \int_{B_{4R_0}(x_0)} g \left( x,W(x,0) \right) \varphi(x,0) \, dx. \] \begin{enumerate} \item[{\rm (i)}] Suppose that $g(x,u) := a(x) u + b(x)$ with $a,b \in L^q(B_{4R_0}(x_0))$ for some $q > N/(2s)$. For any $\mu > 0$ there exists a $C=C(N,s,q, \mu , \| a \|_{L^q(B_{4R_0} (x_0) )} )$ such that \[ \left\| W \right\|_{L^\infty ( B_{2R_0}^+(x_0,0) ) } \leq C \left[ \| W \|_{L^{\mu} ( B_{3R_0}^+(x_0,0) ) } + \| b \|_{L^{q}(B_{4R_0} (x_0) )} \right]. \] In addition, there exists an $\alpha \in (0,1)$ such that $W \in C^{\alpha} ( \overline{B_{R_0}^+(x_0,0)} )$ and \[ \| W \|_{C^{\alpha} ( \overline{B_{R_0}^+(x_0,0)} ) } \leq C \left[ \| W \|_{L^\infty (B_{2R_0}^+(x_0,0)) } + \| b \|_{L^q(B_{4R_0}(x_0))} \right]. \] \item[{\rm (ii)}] Suppose that $W \in C^\alpha ( \overline{B_{2R_0}^+ (x_0,0) } )$ and $g(x,u ) \in C^1( \overline{B_{4R_0} (x_0)} \times \mathbb{R} )$ for some $\alpha \in (0,1)$.
Then there exist $\beta \in (0,1)$ and $C = C(N,s, \| g \|_{C^1( \overline{B_{2R_0}^+(x_0,0)} \times [ -A , A ] )})$, where $A := \| W \|_{C^\alpha( \overline{B_{2R_0}^+ (x_0,0)} ) }$, such that $\nabla_x W \in C^\beta ( \overline{B_{R_0}^+(x_0,0)})$ and $ t^{1-2s} \partial_t W \in C^\beta ( \overline{B_{R_0}^+(x_0,0)})$ with \[ \left\| \nabla_x W \right\|_{C^\beta ( \overline{B_{R_0}^+(x_0,0)})} + \left\| t^{1-2s} \partial_t W \right\|_{C^\beta ( \overline{B_{R_0}^+(x_0,0)})} \leq C. \] \end{enumerate} \end{proposition} For solutions of \eqref{eq:1.1}, we have \begin{lemma} \label{Lemma:2.3} Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} under \eqref{eq:1.2} and let $U$ be the function given in \eqref{eq:2.3}. Then for each $R>1$ there exists an $\alpha_R \in (0,1)$ such that $U \in C^{\alpha_R} ( \overline{B_R^+} )$ and $\nabla_x U , t^{1-2s} \partial_t U \in C^{\alpha_R} ( \overline{(B_R \setminus B_{1/R}) \times (0,R)} )$. As a consequence, with Remark \ref{Remark:2.2}, \[ - \lim_{t\to+0}t^{1-2s} \partial_t U(x,t) =\kappa_s |x|^\ell |u(x)|^{p-1} u(x)\quad\mbox{in}\quad C_{\rm loc} (\mathbb{R}^N \setminus \{0\}). \] \end{lemma} \begin{proof} By $u \in L^\infty_{\rm loc} (\mathbb{R}^N)$ and \eqref{eq:1.2}, we find some $q > N/(2s)$ such that $a(x) := |x|^\ell |u(x)|^{p-1} \in L^q(B_{4R})$ for each $R > 0$. Thus, we may apply Proposition \ref{Proposition:2.1} (i) to $U$ with this $a(x)$, and there exists an $\alpha_R \in (0,1)$ so that $U \in C^{\alpha_R}( \overline{ B_R^+} )$. Next, notice that $g(x , u ) := |x|^\ell | u |^{p-1} u \in C^1( \overline{ B_R\setminus B_{1/R}} \times \mathbb{R} )$. Therefore, we may apply Proposition \ref{Proposition:2.1} (ii) and obtain the desired result. \end{proof} Next we prove the following Pohozaev identity.
\begin{proposition} \label{Proposition:2.2} Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} with \eqref{eq:1.2}, and let $U$ be the function given in \eqref{eq:2.3}. Then for all $R>0$, there holds \begin{equation} \label{eq:2.21} \begin{aligned} & -\frac{N-2s}{2}\left[\int_{B^+_R}t^{1-2s}|\nabla U|^2\,dX -\frac{2\kappa_s}{N-2s}\frac{N+\ell}{p+1}\int_{B_R}|x|^\ell|u|^{p+1}\,dx\right] \\ & \quad +\frac{R}{2}\left[\int_{S^+_R}t^{1-2s}|\nabla U|^2\,dS -\frac{2\kappa_s}{p+1}\int_{S_R}|x|^\ell|u|^{p+1}\,d \omega \right] =R\int_{S^+_R}t^{1-2s}\bigg|\frac{\partial U}{\partial \nu}\bigg|^2\,dS \end{aligned} \end{equation} and \begin{equation} \label{eq:2.22} \int_{B^+_R}t^{1-2s}|\nabla U|^2\,dX -\kappa_s\int_{B_R}|x|^\ell|u|^{p+1}\,dx =\int_{S^+_R}t^{1-2s}\frac{\partial U}{\partial \nu}U\,dS. \end{equation} Here $\nu = X/|X|$ is the unit outer normal vector of $S_R^+$ at $X$ and $\kappa_s$ is the constant given in \eqref{eq:2.6}. \end{proposition} \begin{proof} We follow the argument in \cite[Proof of Theorem~3.7]{FF}. Let $u$ be a solution of \eqref{eq:1.1}, let $U$ be the function given in \eqref{eq:2.3}, and take any $R>0$. Then, by \eqref{eq:2.4} we have \begin{equation} \label{eq:2.23} \frac{N-2s}{2}t^{1-2s}|\nabla U|^2 = \diver \left( \frac{1}{2}t^{1-2s}|\nabla U|^2X-t^{1-2s}(X\cdot\nabla U)\nabla U \right) \end{equation} for $X\in B^+_R$. Let $\rho<R$.
Then, integrating \eqref{eq:2.23} over the set \[ \mathcal O_\delta:= (B^+_R\setminus\overline{B^+_\rho})\cap \Set{X = (x,t) \in \mathbb R^{N+1}_+ | t>\delta} \] with $\delta \in (0,\rho)$ and writing $B_{R,\rho, \delta} := B_{ \sqrt{R^2-\delta^2} } \setminus B_{ \sqrt{\rho^2 - \delta^2} } \subset \mathbb{R}^N$, we have \begin{equation} \label{eq:2.24} \begin{aligned} \frac{N-2s}{2}\int_{\mathcal O_\delta}t^{1-2s}|\nabla U|^2\,dX &= \int_{\mathcal O_\delta} \diver \left( \frac{1}{2}t^{1-2s}|\nabla U|^2X-t^{1-2s} (X\cdot\nabla U)\nabla U \right)\,dX \\ &= -\frac{1}{2}\delta^{2-2s}\int_{ B_{R,\rho,\delta} } |\nabla U(x,\delta)|^2\,dx +\delta^{2-2s}\int_{ B_{R,\rho,\delta} } |U_t(x,\delta)|^2\,dx \\ & \quad +\delta^{1-2s}\int_{ B_{R,\rho,\delta} } (x\cdot\nabla_xU(x,\delta))U_t(x,\delta)\,dx \\ & \quad +\frac{R}{2}\int_{S^+_R\cap [t>\delta] }t^{1-2s}|\nabla U|^2\,dS -R\int_{S^+_R\cap [t>\delta ]}t^{1-2s}\bigg|\frac{\partial U}{\partial\nu}\bigg|^2\,dS \\ & \quad -\frac{\rho}{2}\int_{S^+_\rho\cap [t>\delta] }t^{1-2s}|\nabla U|^2\,dS +\rho\int_{S^+_\rho\cap [t>\delta] }t^{1-2s}\bigg|\frac{\partial U}{\partial\nu}\bigg|^2\,dS. \end{aligned} \end{equation} Now we claim that there exists a sequence $\delta_n\to0$ such that \begin{equation} \label{eq:2.25} \lim_{n\to\infty}\bigg[\frac{1}{2}\delta_n^{2-2s}\int_{B_R}|\nabla U(x,\delta_n)|^2\,dx +\delta_n^{2-2s}\int_{B_R}|U_t(x,\delta_n)|^2 \,dx \bigg]=0. \end{equation} In fact, if there were no such sequence, there would exist a $C>0$ such that \[ \liminf_{\delta\to0}\bigg[\frac{1}{2}\delta^{2-2s}\int_{B_R}|\nabla U(x,\delta)|^2\,dx +\delta^{2-2s}\int_{B_R}|U_t(x,\delta)|^2 \, dx \bigg]\ge C \] and thus a $\delta_0>0$ such that for all $\delta \in (0,\delta_0)$, \[ \frac{1}{2}\delta^{1-2s}\int_{B_R}|\nabla U(x,\delta)|^2\,dx +\delta^{1-2s}\int_{B_R}|U_t(x,\delta)|^2 \, dx \ge\frac{C}{2\delta}.
\] Since $U\in H^1_{\mathrm{loc}} ( \overline{\mathbb R^{N+1}_+} ,t^{1-2s}dX)$, integrating the above inequality in $\delta$ over $(0,\delta_0)$ yields a contradiction, and \eqref{eq:2.25} holds. On the other hand, by Lemma \ref{Lemma:2.3}, we see that \begin{equation} \label{eq:2.26} \begin{aligned} \lim_{\delta\to0}\delta^{1-2s}\int_{ B_{R,\rho,\delta} } (x\cdot\nabla_xU(x,\delta))U_t(x,\delta)\,dx =-\kappa_s\int_{B_R\setminus B_\rho}(x\cdot\nabla_xu)|x|^\ell|u|^{p-1}u\,dx. \end{aligned} \end{equation} By \eqref{eq:2.24}--\eqref{eq:2.26}, replacing $\mathcal O_\delta$ with $\mathcal O_{\delta_n}$ for a sequence $\delta_n\to 0$, we conclude that \begin{equation} \label{eq:2.27} \begin{aligned} \hspace{-5pt} \frac{N-2s}{2}\int_{B^+_R\setminus B^+_\rho}t^{1-2s}|\nabla U|^2\,dX & = -\kappa_s\int_{B_R\setminus B_\rho}(x\cdot\nabla_xu)|x|^\ell|u|^{p-1}u\,dx \\ & \qquad +\frac{R}{2}\int_{S^+_R}t^{1-2s}|\nabla U|^2\,dS -R\int_{S^+_R}t^{1-2s}\left|\frac{\partial U}{\partial\nu}\right|^2\,dS \\ & \qquad\quad -\frac{\rho}{2}\int_{S^+_\rho}t^{1-2s}|\nabla U|^2\,dS +\rho\int_{S^+_\rho}t^{1-2s}\left|\frac{\partial U}{\partial\nu}\right|^2\,dS. \end{aligned} \end{equation} Furthermore, integration by parts yields \begin{equation} \label{eq:2.28} \begin{aligned} & - \kappa_s \int_{B_R\setminus B_\rho}(x\cdot\nabla_xu)|x|^\ell|u|^{p-1}u\,dx \\ = \ & - \kappa_s \int_{B_R\setminus B_\rho}|x|^\ell x\cdot\nabla_x \left(\frac{|u|^{p+1}}{p+1} \right)\,dx \\ = \ & \kappa_s \frac{N+\ell}{p+1}\int_{B_R\setminus B_\rho}|x|^\ell|u|^{p+1}\,dx \\ &\hspace{3cm} -\frac{\kappa_s}{p+1}\int_{S_R}|x|^{\ell+1}|u|^{p+1}\, d \omega +\frac{\kappa_s}{p+1}\int_{S_\rho}|x|^{\ell+1}|u|^{p+1}\,d \omega.
\end{aligned} \end{equation} Similarly to \eqref{eq:2.25}, since $U\in H^1_{\mathrm{loc}} ( \overline{\mathbb R^{N+1}_+} ,t^{1-2s}dX)$ and $u\in L^\infty_{\rm loc}(\mathbb R^N)$, by \eqref{eq:1.2}, we may prove that there exists a sequence $\rho_n\to0$ such that \[ \lim_{n\to\infty}\left[\rho_n\int_{S^+_{\rho_n}}t^{1-2s}|\nabla U|^2\,dS +\rho_n\int_{S^+_{\rho_n}}t^{1-2s}\left|\frac{\partial U}{\partial\nu}\right|^2\,dS +\int_{S_{\rho_n}}|x|^{\ell+1}|u|^{p+1}\,d \omega \right]=0. \] Therefore, taking $\rho=\rho_n$ and letting $n\to\infty$ in \eqref{eq:2.27} and \eqref{eq:2.28}, we have \eqref{eq:2.21}. On the other hand, since \[ - \mathrm{div} \left( t^{1-2s} \nabla U \right) \varphi = 0 \quad \text{in} \ \widetilde{\mathcal{O}}_\delta := B_R^+ \cap \Set{ X \in \mathbb{R}^{N+1}_+ | t > \delta } \] for any $\varphi \in C^\infty ( \overline{ B_R^+ } )$, integration by parts gives \[ \int_{ \widetilde{\mathcal{O}}_\delta } t^{1-2s} \nabla U \cdot \nabla \varphi \, dX = \int_{S_R^+ \cap [ t > \delta ] } t^{1-2s} \frac{\partial U}{\partial \nu} \varphi \, d S - \int_{B_{\sqrt{R^2-\delta^2}}} \delta^{1-2s} \frac{\partial U}{\partial t} (x,\delta) \varphi (x,\delta) \, d x. \] For the last term, by decomposing $\varphi$ as $\varphi (x,t) = \zeta(x,t) \varphi (x,t) + (1-\zeta(x,t)) \varphi (x,t)$, where $\zeta \in C^\infty_c( \overline{B_{R/2}^+} )$ with $\zeta \equiv 1$ on $B_{R/4}^+$, and noting $\zeta(x,t) \varphi(x,t) \in C^\infty_{c} ( \overline{ \mathbb{R}^{N+1}_+ } )$, Remark \ref{Remark:2.2} and Lemma \ref{Lemma:2.3} yield \[ - \int_{B_{\sqrt{R^2-\delta^2}}} \delta^{1-2s} \frac{\partial U}{\partial t} (x,\delta) \varphi (x,\delta) \, d x \to \kappa_s \int_{B_R} |x|^\ell |u|^{p-1} u \varphi (x,0) \, dx \quad \text{as $\delta \to 0$}.
\] Therefore, for every $\varphi\in C^\infty(\overline{B^+_R})$, \[ \int_{B^+_R}t^{1-2s}\nabla U\cdot\nabla\varphi\,dX =\int_{S^+_R}t^{1-2s}\frac{\partial U}{\partial\nu}\varphi\,dS +\kappa_s\int_{B_R}|x|^\ell|u|^{p-1}u \varphi(x,0) \,dx. \] From the fact that $U \in H^1_{\mathrm{loc}} (\overline{\mathbb{R}^{N+1}_+},t^{1-2s}dX)$ can be approximated by functions of $C^\infty(\overline{B^+_R})$ in the $H^1(B^+_R,t^{1-2s}dX)$ sense, by setting $\varphi = U$ in the above, we obtain \eqref{eq:2.22} and Proposition~\ref{Proposition:2.2} follows. \end{proof} \subsection{Energy estimates} \label{section:2.3} We first show several lemmata by following \cite[Lemma~2.2 and Corollary~2.3]{DDW}. \begin{lemma} \label{Lemma:2.4} For $\zeta \in W^{1,\infty}(\mathbb{R}^N) \cap H^1(\mathbb{R}^N) $, define \begin{equation} \label{eq:2.29} \rho(\zeta; x):=\int_{\mathbb R^N}\frac{(\zeta(x)-\zeta(y))^2}{|x-y|^{N+2s}}\, dy. \end{equation} Then there exists a $C = C(N,s)>0$ such that \begin{equation}\label{eq:2.30} \begin{aligned} \rho \left( \zeta ; x \right) &\leq C \left\{ \left( 1 + |x| \right)^2 \| \nabla \zeta \|_{L^{\infty} (\Omega(|x|)) }^2 + \| \zeta \|_{L^\infty( \Omega (|x|) )}^2 \right. \\ & \hspace{3cm} \left.+ \left( 1 + |x| \right)^{-N} \| \zeta \|_{L^2(\mathbb{R}^N)}^2 \right\} \left( 1 + |x| \right)^{-2s} \qquad \text{for all $|x| \geq 1$}, \end{aligned} \end{equation} and \begin{equation}\label{eq:2.31} \rho \left( \zeta ; x\right) \leq C \left( 1 + \| \zeta \|_{W^{1,\infty} (\mathbb{R}^N) }^2 \right) \quad \text{for all $|x| \leq 1$}, \end{equation} where \[ \Omega (|x|) := \Set{ y \in \mathbb{R}^N | \, |y| \geq \frac{|x|}{2} }. \] \end{lemma} \begin{proof} The proof is basically the same as the one in \cite[Lemma 2.2]{DDW}.
We treat the case $|x| \geq 1$ and put \[ \begin{aligned} D_1 &:= \Set{ y \in \mathbb{R}^N | \, |y-x| \leq \frac{|x|}{2} }, & D_2 &:= \Set{ y \in \mathbb{R}^N | \, \frac{|x|}{2} \leq |y-x| \leq 2 |x| }, \\ D_3 &:= \Set{ y \in \mathbb{R}^N | \, 2|x| \leq |y-x| }. \end{aligned} \] For $D_1$, notice that $D_{1}$ is convex and $D_1 \subset \Omega (|x|)$. Since it follows from $\zeta \in W^{1,\infty} (\mathbb{R}^N)$ that \[ |\zeta(x)-\zeta(y)|\le \| \nabla \zeta \|_{L^\infty( D_1 )} |x-y| \leq \| \nabla \zeta \|_{L^\infty (\Omega (|x|)) } |x-y| , \] we have \begin{equation} \label{eq:2.32} \begin{aligned} \int_{D_1}\frac{(\zeta(x)-\zeta(y))^2}{|x-y|^{N+2s}}\,dy & \le C \| \nabla \zeta \|_{L^\infty (\Omega (|x|) )}^2 \int_{D_1} |x-y|^{2-N-2s}\,dy \\ & \leq C \| \nabla \zeta \|_{L^\infty (\Omega(|x|))}^2 (1+|x|)^{2-2s}. \end{aligned} \end{equation} For $y\in D_2$, by $|x| \geq 1$, it holds that \begin{equation} \label{eq:2.33} \begin{aligned} \int_{D_2}\frac{(\zeta(x)-\zeta(y))^2}{|x-y|^{N+2s}}\,dy & \le C|x|^{-N-2s}\int_{|y|\le 3|x|}(\zeta(x)^2+\zeta(y)^2)\, dy \\ & \leq C \zeta(x)^2 |x|^{-2s} + C |x|^{-N-2s} \| \zeta \|_{L^2(\mathbb{R}^N)}^2 \\ &\leq C \| \zeta \|_{L^\infty (\Omega(|x|))}^2 (1+|x|)^{-2s} + C \| \zeta \|_{L^2(\mathbb{R}^N)}^2 (1+|x|)^{-N-2s}. \end{aligned} \end{equation} For $y \in D_3$, since $|y| \geq |x|$, we have \begin{equation}\label{eq:2.34} \begin{aligned} \int_{D_3}\frac{(\zeta(x)-\zeta(y))^2}{|x-y|^{N+2s}}\,dy & \le C \| \zeta \|_{L^\infty (\Omega (|x|))}^2 \int_{D_3}|x-y|^{-N-2s}\, dy \\ & \leq C \| \zeta \|_{L^\infty (\Omega (|x|))}^2 (1+|x|)^{-2s}. \end{aligned} \end{equation} Putting \eqref{eq:2.32}--\eqref{eq:2.34} together, we have \eqref{eq:2.30} for $|x| \geq 1$.
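For instance, the integral over $D_1$ above can be computed explicitly in polar coordinates centered at $x$: since $2-2s>0$,
\[
\int_{D_1} |x-y|^{2-N-2s} \, dy = \left| \mathbb{S}^{N-1} \right| \int_0^{|x|/2} r^{1-2s} \, dr = \frac{ |\mathbb{S}^{N-1}| }{2-2s} \left( \frac{|x|}{2} \right)^{2-2s} \leq C (1+|x|)^{2-2s} \quad \text{for $|x| \geq 1$}.
\]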
On the other hand, for $|x| \leq 1$, by dividing $\mathbb{R}^N$ into $B_2$ and $B_2^c$ and arguing as in \eqref{eq:2.32} and \eqref{eq:2.34}, we obtain \eqref{eq:2.31}. We omit the details, and Lemma \ref{Lemma:2.4} follows. \end{proof} \begin{lemma} \label{Lemma:2.5} For $m>N/2$, set \begin{equation} \label{eq:2.35} \eta(x) :=(1+|x|^2)^{-\frac{m}{2}}. \end{equation} Let $R\ge R_0\ge1$ and let $\psi\in C^\infty(\mathbb R^N)$ be such that $0\le \psi\le 1$, $\psi\equiv 0$ on $B_1$ and $\psi\equiv 1$ on $B_2^c$. Define $\eta_R$ and $\rho_R$ by \begin{equation} \label{eq:2.36} \eta_R(x) :=\eta\bigg(\frac{x}{R}\bigg)\psi\bigg(\frac{x}{R_0}\bigg), \quad \rho_R(x) := \rho \left( \eta_R; x \right). \end{equation} Then there exists a constant $C=C(N,s,m,R_0)>0$ such that \begin{equation} \label{eq:2.37} \rho_R(x) \leq \left\{\begin{aligned} & C\eta \left(\frac{x}{R} \right)^2|x|^{-N-2s}+2R^{-2s}\rho \left(\eta; \frac{x}{R}\right) & & \text{if} \ |x|\ge 3R_0,\\ & C + 2 R^{-2s} \rho \left( \eta ; \frac{x}{R} \right) & & \text{if} \ |x| \leq 3R_0. \end{aligned}\right. \end{equation} \end{lemma} \begin{remark} \label{Remark:2.3} By Lemma \ref{Lemma:2.4} and \cite[Lemma 2.2]{DDW}, there exist two constants $c,C>0$ such that \begin{equation*} c \left( 1 + |x| \right)^{-N-2s} \leq \rho (\eta ;x) \leq C \left( 1 + |x| \right)^{-N-2s} \quad \text{for all $x \in \mathbb{R}^N$}. \end{equation*} In addition, later (see \eqref{eq:2.58}) we shall also prove that for all sufficiently large $R>0$ and $x \in \mathbb{R}^N$, \[ 0 < c_R (1+|x|)^{-N-2s} \leq \rho_R(x).
\] \end{remark} \begin{proof}[Proof of Lemma \ref{Lemma:2.5}] Applying Young's inequality with the definition of $\eta_R$, we have \begin{equation}\label{eq:2.38} \begin{aligned} &\rho_R(x) \leq 2 \eta\left( \frac{x}{R} \right)^2 \int_{\mathbb{R}^N} \frac{\left( \psi \left( \frac{x}{R_0} \right) - \psi \left( \frac{y}{R_0} \right) \right)^2}{|x-y|^{N+2s}} \, dy \\ &\hspace{4cm} + 2 \int_{\mathbb{R}^N} \psi \left( \frac{y}{R_0} \right)^2 \frac{ \left( \eta \left( \frac{x}{R} \right) - \eta \left( \frac{y}{R} \right) \right)^2 }{|x-y|^{N+2s}} \, dy. \end{aligned} \end{equation} For the first term, if $|x|\ge 3R_0$, then $|x-y| \geq |x|/3$ for any $ y \in B_{2R_0}$ and \[ \int_{\mathbb{R}^N} \frac{\left( \psi \left( \frac{x}{R_0} \right) - \psi \left( \frac{y}{R_0} \right) \right)^2}{|x-y|^{N+2s}} \, dy \leq \int_{B_{2R_0}} |x-y|^{-N-2s} \, dy \leq C_{R_0} |x|^{-N-2s}, \] while if $|x| \leq 3R_0$, then we see \[ \begin{aligned} \int_{\mathbb{R}^N} \frac{\left( \psi \left( \frac{x}{R_0} \right) - \psi \left( \frac{y}{R_0} \right) \right)^2}{|x-y|^{N+2s}} \, dy &\leq \left( \int_{B_{R_0} (x) } + \int_{B_{R_0}^c (x)} \right) \frac{\left( \psi \left( \frac{x}{R_0} \right) - \psi \left( \frac{y}{R_0} \right) \right)^2}{|x-y|^{N+2s}} \, dy \\ &\leq R_0^{-2} \| \psi \|_{C^1(\mathbb{R}^N)}^2 \int_{|z| \leq R_0} |z|^{-N-2s+2} \, dz + \int_{|z| \geq R_0} |z|^{-N-2s} \, dz. \end{aligned} \] Since \[ \int_{\mathbb{R}^N} \psi \left( \frac{y}{R_0} \right)^2 \frac{ \left( \eta \left( \frac{x}{R} \right) - \eta \left( \frac{y}{R} \right) \right)^2 }{|x-y|^{N+2s}} \, dy \leq \int_{\mathbb{R}^N} \frac{ \left( \eta \left( \frac{x}{R} \right) - \eta \left( \frac{y}{R} \right) \right)^2 }{|x-y|^{N+2s}} \, dy = R^{-2s} \rho \left( \eta ; \frac{x}{R} \right), \] by \eqref{eq:2.38}, we have \eqref{eq:2.37}.
\end{proof} \begin{lemma} \label{Lemma:2.6} Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^2(\mathbb{R}^N,(1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} under \eqref{eq:1.2}. Assume that $u$ is stable outside $B_{R_0}$. Let $\zeta\in C^1(\mathbb{R}^N)$ satisfy $\zeta \equiv 0 $ on $B_{R_0}$ and \begin{equation}\label{eq:2.39} |x| \left| \nabla \zeta (x) \right| + |\zeta (x)| \leq C\left( 1+|x| \right)^{- m } \quad \text{for all $x \in \mathbb{R}^N$} \end{equation} for some $m > N/2$. Then \begin{equation} \label{eq:2.40} \int_{\mathbb R^N}|x|^{\ell}|u|^{p+1}\zeta^2\, dx +\frac{1}{p}\|u\zeta\|_{\dot H^s(\mathbb R^N)}^2 \le \frac{C_{N,s}}{p-1}\int_{\mathbb R^N}u (x)^2\rho(\zeta;x)\, dx, \end{equation} where $C_{N,s}$ is the constant given by \eqref{eq:1.3}. \end{lemma} \begin{remark}\label{Remark:2.4} Later, we shall use Lemma \ref{Lemma:2.6} with $\zeta (x) = \eta_R (x)$, and we require the right-hand side of \eqref{eq:2.40} to be finite; see, for example, Lemma \ref{Lemma:2.7} and the end of its proof. This is why we need the condition $u \in L^2(\mathbb{R}^N,(1+|x|)^{-N-2s}dx)$, in view of Remark \ref{Remark:2.3}. \end{remark} \begin{proof} We follow the argument in \cite[Lemma~2.1]{DDW}. Remark that the right-hand side of \eqref{eq:2.40} is finite due to \eqref{eq:2.30}, \eqref{eq:2.31}, \eqref{eq:2.39} and $u \in L^2(\mathbb{R}^N,(1+|x|)^{-N-2s}dx) $. We first treat the case $\zeta \in C^1_c ( \mathbb{R}^N)$ with $\zeta \equiv 0$ on $B_{R_0}$. Since $u$ is a solution of \eqref{eq:1.1}, by Lemma~\ref{Lemma:2.3} we see that $u\in C^1(\mathbb R^N\setminus\{0\})$.
Since $u\zeta^2\in C^1_c(\mathbb R^N)$, by Remark~\ref{Remark:1.1} we can take $\varphi=u\zeta^2$ as a test function in \eqref{eq:1.4}, and by \eqref{eq:1.5} we have \begin{equation*} \begin{aligned} \int_{\mathbb R^N}|x|^\ell|u|^{p+1}\zeta^2\,dx &= \left\langle u , u \zeta^2 \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \\ & =\frac{C_{N,s}}{2} \int_{\mathbb R^N \times \mathbb{R}^N} \frac{(u(x)-u(y))(u(x)\zeta(x)^2-u(y)\zeta(y)^2)}{|x-y|^{N+2s}}\,dx\,dy \\ & =\frac{C_{N,s}}{2}\int_{\mathbb R^N \times \mathbb{R}^N } \frac{u(x)^2\zeta(x)^2-u(x)u(y)(\zeta(x)^2+\zeta(y)^2)+u(y)^2\zeta(y)^2}{|x-y|^{N+2s}}\,dx\,dy \\ & =\frac{C_{N,s}}{2}\int_{\mathbb R^N \times \mathbb{R}^N} \frac{(u(x)\zeta(x)-u(y)\zeta(y))^2-(\zeta(x)-\zeta(y))^2u(x)u(y)}{|x-y|^{N+2s}}\,dx\,dy \\ & =\|u\zeta\|^2_{\dot H^s(\mathbb R^N)}-\frac{C_{N,s}}{2}\int_{\mathbb R^N\times \mathbb{R}^N} \frac{(\zeta(x)-\zeta(y))^2u(x)u(y)}{|x-y|^{N+2s}}\,dx\,dy. \end{aligned} \end{equation*} Applying the elementary inequality $2ab\le a^2+b^2$ with \eqref{eq:2.29}, we deduce that \begin{equation} \label{eq:2.41} \begin{aligned} & \|u\zeta\|^2_{\dot H^s(\mathbb R^N)}-\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\zeta^2\,dx \\ \le \ & \frac{C_{N,s}}{4} \left( \int_{\mathbb R^N \times \mathbb{R}^N} \frac{(\zeta(x)-\zeta(y))^2}{|x-y|^{N+2s}}u(x)^2\,dx\,dy +\int_{\mathbb R^N \times \mathbb{R}^N} \frac{(\zeta(x)-\zeta(y))^2}{|x-y|^{N+2s}}u(y)^2\,dx\,dy\right) \\ = \ & \frac{C_{N,s}}{2}\int_{\mathbb R^N}u(x)^2\rho(\zeta;x)\,dx.
\end{aligned} \end{equation} Since $u$ is stable outside $B_{R_0}$, by \eqref{eq:1.10} with $\varphi=u\zeta $ (see Remark \ref{Remark:1.3}) and \eqref{eq:2.41}, we have \begin{equation}\label{eq:2.42} (p-1)\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\zeta^2\,dx \leq \| u \zeta \|_{\dot{H}^s(\mathbb{R}^N)}^2 - \int_{\mathbb{R}^N} |x|^\ell |u|^{p+1} \zeta^2 \, dx \leq \frac{C_{N,s}}{2}\int_{\mathbb R^N}u(x)^2\rho(\zeta;x)\,dx. \end{equation} Hence, by \eqref{eq:2.41} and \eqref{eq:2.42}, \[ \begin{aligned} \frac{1}{p} \| u \zeta \|_{\dot{H}^s(\mathbb{R}^N)}^2 & \leq \frac{1}{p} \int_{ \mathbb{R}^N} |x|^\ell |u|^{p+1} \zeta^2 \, dx + \frac{C_{N,s}}{2p} \int_{ \mathbb{R}^N} u(x)^2 \rho (\zeta;x) \, dx \\ & \leq \frac{C_{N,s}}{2(p-1)} \int_{ \mathbb{R}^N} u(x)^2 \rho (\zeta;x) \, dx. \end{aligned} \] This together with \eqref{eq:2.42} implies \eqref{eq:2.40} for $\zeta \in C^1_{c} (\mathbb{R}^N)$ with $ \zeta \equiv 0$ on $B_{R_0}$. Next, let $\zeta \in C^1(\mathbb{R}^N)$ satisfy $\zeta \equiv 0$ on $B_{R_0}$ and \eqref{eq:2.39}. Let $(\varphi_n)_{n}$ be the sequence of cut-off functions from Lemma \ref{Lemma:2.1} and set $\zeta_n := \varphi_n \zeta \in C^1_c(\mathbb{R}^N)$. It is easily seen that $(\zeta_n)_n$ satisfies \eqref{eq:2.39} uniformly with respect to $n$, namely, the constant $C$ in \eqref{eq:2.39} is independent of $n$. Exploiting this fact together with \eqref{eq:2.30} and \eqref{eq:2.31}, we observe that $\rho(\zeta_n;x) \to \rho(\zeta;x)$ for each $x \in \mathbb{R}^N$ as $n \to \infty$, and that there exists a $C>0$, independent of $n$, such that for all $x \in \mathbb{R}^N$ and all $n$, \[ \left| \rho(\zeta_n;x) \right| \leq C \left( 1 + |x| \right)^{-N-2s}.
\] Therefore, from \eqref{eq:2.40} with $\zeta_n$, we find that $( \zeta_n u )_n$ is bounded in $\dot{H}^s(\mathbb{R}^N)$, and it is not difficult to see that $ |u(x)|^{p+1} \zeta_n (x)^2 \nearrow |u(x)|^{p+1} \zeta(x)^2$ for each $x \in \mathbb{R}^N$ and that $\zeta_n u \rightharpoonup \zeta u$ weakly in $\dot{H}^s(\mathbb{R}^N)$. Hence, from the monotone convergence theorem, the weak lower semicontinuity of the norm, the fact that $u \in L^2(\mathbb{R}^N,(1+|x|)^{-N-2s}dx)$, \eqref{eq:2.40} with $\zeta_n$ and the dominated convergence theorem, it follows that \eqref{eq:2.40} holds for each $\zeta \in C^1(\mathbb{R}^N)$ satisfying \eqref{eq:2.39} and $\zeta \equiv 0$ on $B_{R_0}$. This completes the proof. \end{proof} By using Lemmata~\ref{Lemma:2.5} and~\ref{Lemma:2.6}, we have the following. \begin{lemma} \label{Lemma:2.7} Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^2(\mathbb{R}^N,(1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} with \eqref{eq:1.2} which is stable outside $B_{R_0}$, and let $\rho_R$ be the function given in Lemma~$\ref{Lemma:2.5}$ with \begin{equation} \label{eq:2.43} m\in\left(\frac{N}{2} , \, \frac{N+s(p+1) + \ell}{2} \right). \end{equation} Then there exists a constant $C=C(N,s,\ell,m,p,R_0)>0$ such that \begin{equation} \label{eq:2.44} \int_{\mathbb R^N}u^2\rho_R\, dx\le C\left( \int_{B_{3R_0}} u^2 \rho_R \, dx + R^{N-\frac{2}{p-1}(s(p+1)+\ell)}+1\right) \end{equation} for all $R\ge 3R_0$. \end{lemma} \begin{remark} \label{Remark:2.5} Due to \eqref{eq:1.2}, $ 0 < s (p+1) + \ell$ holds, and hence we may choose an $m$ satisfying \eqref{eq:2.43}. \end{remark} \begin{proof}[Proof of Lemma \ref{Lemma:2.7}] We basically follow the idea of \cite[Lemma 2.4]{DDW}. Let $R \geq 3R_0$.
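Before starting, we record the elementary observation behind the use of H\"older's inequality below: it is applied with the conjugate exponents $\frac{p+1}{2}$ and $\frac{p+1}{p-1}$, and the splitting factor $\big( |x|^{\frac{\ell}{2}} \eta_R \big)^{\pm\frac{4}{p+1}}$ is chosen so that
\[
\left( u^2 \left( |x|^{\frac{\ell}{2}} \eta_R \right)^{\frac{4}{p+1}} \right)^{\frac{p+1}{2}} = |x|^{\ell} |u|^{p+1} \eta_R^2
\qquad \text{and} \qquad
\left( \rho_R \left( |x|^{\frac{\ell}{2}} \eta_R \right)^{-\frac{4}{p+1}} \right)^{\frac{p+1}{p-1}} = |x|^{-\frac{2\ell}{p-1}} \eta_R^{-\frac{4}{p-1}} \rho_R^{\frac{p+1}{p-1}}.
\]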
First, by H\"older's inequality, \begin{equation}\label{eq:2.45} \begin{aligned} &\int_{\mathbb{R}^N} u^2 \rho_R \, dx \\ =\ & \int_{B_{3R_0}} u^2 \rho_R \, dx + \int_{B_{3R_0}^c} u^2 \rho_R \left( |x|^{\frac{\ell}{2}} \eta_R \right)^{\frac{4}{p+1}} \left( |x|^{\frac{\ell}{2}} \eta_R \right)^{ - \frac{4}{p+1} } \, dx \\ \leq \ & \int_{B_{3R_0}} u^2 \rho_R \, dx + \left( \int_{B_{3R_0}^c} |x|^\ell |u|^{p+1} \eta_R^2 \, dx \right)^{\frac{2}{p+1}} \left( \int_{B_{3R_0}^c} |x|^{ -\frac{2\ell}{p-1} } \eta_R^{ -\frac{4}{p-1} } \rho_R^{ \frac{p+1}{p-1} } \, dx \right)^{ \frac{p-1}{p+1} }. \end{aligned} \end{equation} For the case $3R_0\le|x|\le R$, by Lemma~\ref{Lemma:2.4} with \eqref{eq:2.35}, \eqref{eq:2.36} and Remark \ref{Remark:2.3}, we have \begin{equation} \label{eq:2.46} 2^{-\frac{m}{2}}=\eta(1)\le \eta_R(x)\le \eta\bigg(\frac{3R_0}{R}\bigg)\le 1 \qquad\mbox{and}\qquad \rho\bigg(\eta;\frac{x}{R}\bigg)\le C. \end{equation} Then, by \eqref{eq:2.37} we obtain \[ \rho_R(x)\le C(|x|^{-N-2s}+R^{-2s}) \quad \text{for all $3R_0 \leq |x| \leq R$}.
\] This together with \eqref{eq:2.46} yields \begin{equation} \label{eq:2.47} \begin{aligned} & \int_{B_R\setminus B_{3R_0}}|x|^{-\frac{2\ell}{p-1}}\rho_R^{\frac{p+1}{p-1}}\eta_R^{-\frac{4}{p-1}}\,dx \\ \le\ & C\int_{3R_0}^Rr^{-\frac{2\ell}{p-1}}\bigg(r^{-(N+2s)}+R^{-2s}\bigg)^{\frac{p+1}{p-1}}r^{N-1}\, dr \\ \le \ & C\int_{3R_0}^Rr^{N-1-\frac{2\ell}{p-1}-(N+2s)\frac{p+1}{p-1}}\, dr+C R^{-2s\frac{p+1}{p-1}}\int_{3R_0}^Rr^{N-1-\frac{2\ell}{p-1}}\, dr \\ \le \ & C\int_{3R_0}^\infty r^{-1-\frac{2}{p-1}(N+\ell+s(p+1))}\, dr+C R^{-2s\frac{p+1}{p-1}}\int_{3R_0}^Rr^{N-1-\frac{2\ell}{p-1}}\, dr \\ \le \ & C+C R^{-2s\frac{p+1}{p-1}}\int_{3R_0}^Rr^{N-1-\frac{2\ell}{p-1}}\, dr. \end{aligned} \end{equation} Since \[ \begin{aligned} R^{-2s\frac{p+1}{p-1}}\int_{3R_0}^Rr^{N-1-\frac{2\ell}{p-1}}\, dr &\leq \left\{\begin{aligned} & C R^{-2s \frac{p+1}{p-1} } & & \text{if} \ N - \frac{2\ell}{p-1} < 0,\\ & C R^{ - 2 s \frac{p+1}{p-1} } \left( 1 + \log R \right) & & \text{if} \ N - \frac{2\ell}{p-1} = 0,\\ & C R^{ N - \frac{2}{p-1} \left( s(p+1) + \ell \right) } & & \text{if} \ N - \frac{2\ell}{p-1} > 0, \end{aligned}\right. \\ &\le \left\{ \begin{aligned} & C & &\mbox{if}\quad N-\frac{2\ell}{p-1}\le0, \\ &C R^{N-\frac{2}{p-1}(s(p+1)+\ell)} & &\mbox{if}\quad N-\frac{2\ell}{p-1}>0, \end{aligned} \right. \end{aligned} \] by \eqref{eq:2.47}, we see that \begin{equation} \label{eq:2.48} \int_{B_R\setminus B_{3R_0}}|x|^{-\frac{2\ell}{p-1}}\rho_R^{\frac{p+1}{p-1}}\eta_R^{-\frac{4}{p-1}}\,dx \le C+CR^{N-\frac{2}{p-1}(s(p+1)+\ell)}.
\end{equation} On the other hand, for the case $|x|\ge R\ge 3R_0$, by \eqref{eq:2.35} and $R^2+|x|^2 \leq 2|x|^2$, we obtain \begin{align*} \eta\bigg(\frac{x}{R}\bigg)^2|x|^{-N-2s} \le C\eta(1)^2(R^2+|x|^2)^{-\frac{N}{2}-s} &\le CR^{-N-2s}\bigg(1+\frac{|x|^2}{R^2}\bigg)^{-\frac{N}{2}-s} \\ &\le CR^{-2s}\bigg(1+\frac{|x|^2}{R^2}\bigg)^{-\frac{N}{2}-s}. \end{align*} This together with \eqref{eq:2.30}, \eqref{eq:2.37} and Remark \ref{Remark:2.3} implies that \[ \rho_R(x)\le CR^{-2s}\bigg(1+\frac{|x|^2}{R^2}\bigg)^{-\frac{N}{2}-s} \quad \mbox{for each} \ |x| \geq R. \] Thus, \eqref{eq:2.35}, \eqref{eq:2.36} and the fact $|x| \geq R$ give \begin{equation} \label{eq:2.49} \begin{aligned} &|x|^{-\frac{2\ell}{p-1}}\rho_R(x)^{\frac{p+1}{p-1}}\eta_R(x)^{-\frac{4}{p-1}} \\ \leq \ & C \left( R^2+|x|^2 \right)^{-\frac{\ell}{p-1}} \left\{ R^{-2s} \left( 1 + \frac{|x|^2}{R^2} \right)^{- \frac{N+2s}{2} } \right\}^{ \frac{p+1}{p-1} } \left( 1 + \frac{|x|^2}{R^2} \right)^{ \frac{m}{2} \frac{4}{p-1} } \\ \leq \ & CR^{-2s\frac{p+1}{p-1}-\frac{2\ell}{p-1}} \left(1+\frac{|x|^2}{R^2}\right)^{-\frac{N+2s}{2}\frac{p+1}{p-1}+\frac{2m}{p-1} - \frac{\ell}{p-1} }.
\end{aligned} \end{equation} Since it follows from \eqref{eq:2.43}, that is, from $2m < N + s(p+1) + \ell$, that \[ \alpha_{ N,s,p,m,\ell } := -\frac{N+2s}{2}\frac{p+1}{p-1}+\frac{2m}{p-1} - \frac{\ell}{p-1} < -\frac{N+2s}{2}\frac{p+1}{p-1}+\frac{N+s(p+1)+\ell}{p-1} - \frac{\ell}{p-1} = -\frac{N}{2}, \] by \eqref{eq:2.49} we have \begin{equation} \label{eq:2.50} \begin{aligned} & \int_{B_R^c}|x|^{-\frac{2\ell}{p-1}}\rho_R^{\frac{p+1}{p-1}}\eta_R^{-\frac{4}{p-1}}\,dx \\ \le \ & CR^{-2s\frac{p+1}{p-1}-\frac{2\ell}{p-1}}\int_R^\infty \left(1+\frac{r^2}{R^2}\right)^{\alpha_{ N,s,p,m,\ell } }r^{N-1}\, dr \\ = \ & C R^{- 2s \frac{p+1}{p-1} - \frac{2\ell}{p-1} + N-1 } \int_{R}^{\infty} \left( 1 + \frac{r^2}{R^2} \right)^{ \alpha_{ N,s,p,m,\ell } } \left( \frac{r}{R} \right)^{N-1} \, dr \\ \le \ & CR^{N-\frac{2}{p-1}(s(p+1)+\ell)}. \end{aligned} \end{equation} Combining \eqref{eq:2.48} and \eqref{eq:2.50}, we obtain \begin{equation} \label{eq:2.51} \int_{B_{3R_0}^c} |x|^{-\frac{2\ell}{p-1}}\rho_R^{\frac{p+1}{p-1}}\eta_R^{-\frac{4}{p-1}}\,dx \le C\left(R^{N-\frac{2}{p-1}(s(p+1)+\ell)}+1\right). \end{equation} Now we substitute \eqref{eq:2.51} into \eqref{eq:2.45} and infer from \eqref{eq:2.40} with $\zeta= \eta_R$ that \[ \begin{aligned} &\int_{\mathbb{R}^N} u^2 \rho_R \, dx \\ \leq \ & \int_{B_{3R_0}} u^2 \rho_R \, dx + C \left( \int_{ B_{3R_0}^c } |x|^\ell |u|^{p+1} \eta_R^2 \, dx \right)^{\frac{2}{p+1}} \left(R^{N-\frac{2}{p-1}(s(p+1)+\ell)}+1\right)^{\frac{p-1}{p+1}} \\ \leq \ & \int_{B_{3R_0}} u^2 \rho_R \, dx + C \left( \int_{\mathbb{R}^N } u^2 \rho_R \, dx \right)^{\frac{2}{p+1}} \left(R^{N-\frac{2}{p-1}(s(p+1)+\ell)}+1\right)^{\frac{p-1}{p+1}} \end{aligned} \] for $R\ge 3R_0$.
Dividing both sides by $ \left( \int_{\mathbb{R}^N } u^2 \rho_R \, dx \right)^{\frac{2}{p+1}} < \infty $ and noting \[ \left( \int_{ B_{3R_0} } u^2 \rho_R \, dx \right) \left( \int_{ \mathbb{R}^N} u^2 \rho_R \, dx \right)^{ - \frac{2}{p+1} } \leq \left( \int_{ B_{3R_0} } u^2 \rho_R \, dx \right)^{\frac{p-1}{p+1}}, \] we obtain \eqref{eq:2.44}, and Lemma~\ref{Lemma:2.7} follows. \end{proof} For the supercritical case, we have the following energy estimate for the function $U$ given in \eqref{eq:2.3}. \begin{lemma} \label{Lemma:2.8} Assume $p_S(N,\ell) < p $ and \eqref{eq:1.2}. Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^2(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} which is stable outside $B_{R_0}$, and let $U$ be the function given in \eqref{eq:2.3}. Then there exists a $C=C(N,p,s,\ell,R_0,u)>0$ such that \begin{equation} \label{eq:2.52} \int_{B^+_R}t^{1-2s}U^2\, dX\le CR^{N+2(1-s)-\frac{2(2s+\ell)}{p-1}} \end{equation} for all $R\ge 3R_0$. \end{lemma} \begin{remark} If $ p_S(N,\ell) < p $, then \begin{equation}\label{eq:2.53} N - \frac{2}{p-1} \left( s ( p+ 1) + \ell \right) > 0. \end{equation} \end{remark} \begin{proof}[Proof of Lemma \ref{Lemma:2.8}] Let $\zeta_{R_0} \in C^\infty_c(\mathbb{R}^N)$ with $0 \leq \zeta_{R_0} \leq 1$, $\zeta_{R_0} \equiv 1$ on $B_{3R_0}$ and $\zeta_{R_0} \equiv 0$ on $B_{4R_0}^c$. We decompose $u$ as \[ u(x) = \zeta_{R_0} (x) u(x) + (1 - \zeta_{R_0} (x)) u(x) = : v(x) + w(x). \] Notice that $v \in H^s(\mathbb{R}^N)$ with $\supp v \subset \overline{B_{4R_0}}$ and $w \in H^s_{\rm loc} (\mathbb{R}^N)$. Recalling \eqref{eq:2.3}, we also decompose $U$ as \[ U(x,t) = (P_s(\cdot, t) \ast u) (x) = (P_s (\cdot , t) \ast v ) (x) + (P_s (\cdot , t) \ast w) (x) = : V(x,t) + W(x,t).
\] We first estimate $V(x,t)$. By Young's inequality and $\| P_s (\cdot , t) \|_{L^1(\mathbb{R}^N)} = 1$, it follows that \[ \left\| V(\cdot, t) \right\|_{L^2(\mathbb{R}^N)} \leq \| P_s (\cdot , t) \|_{L^1(\mathbb{R}^N)} \| v \|_{L^2(\mathbb{R}^N)} = \| v \|_{L^2(\mathbb{R}^N)} \quad \text{for each $t \in (0,\infty)$}. \] Therefore, \[ \begin{aligned} \int_{B_R^+} t^{1-2s} |V|^2 \, d X & \leq \int_{0}^{R} dt \int_{\mathbb{R}^N} t^{1-2s} |V(x,t)|^2 \, dx \leq \int_0^R t^{1-2s} \| v \|_{L^2(\mathbb{R}^N)}^2 \, dt = C R^{2-2s}. \end{aligned} \] From \[ 2-2s \leq N + 2 (1-s) - \frac{2(2s+\ell)}{p-1}, \] which follows from \eqref{eq:2.53} since $s(p+1) \geq 2s$, we infer that \begin{equation}\label{eq:2.54} \int_{B^+_R} t^{1-2s} |V|^2 \, dX \leq C R^{N + 2 (1-s) - \frac{2(2s+\ell)}{p-1}}. \end{equation} Next, we consider $W(x,t)$. By H\"older's inequality, \begin{equation}\label{eq:2.55} \begin{aligned} & \int_{B_R^+} t^{1-2s} |W|^2 \, dX \\ = \ & \int_{ B_R^+} t^{1-2s} \left( \int_{\mathbb{R}^N} (P_s(x-y,t))^{1/2} (P_s(x-y,t))^{1/2} w(y) \, dy \right)^2 \, dX \\ \leq \ & C \int_{ B_R^+} dX \, t^{1-2s} \int_{\mathbb R^N}w(y)^2\frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy \\ \leq \ & C \int_{ B_R^+} dX \, t^{1-2s} \left( \int_{|x-y| \leq 3R} + \int_{ |x-y|> 3R} \right) w(y)^2\frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy.
\end{aligned} \end{equation} For $y \in \mathbb{R}^N$ with $|x-y| \leq 3R$, since it follows from \eqref{eq:1.2} that \[ - \frac{N+2s}{2} + 1 < 0 \quad \text{if} \ N \geq 2, \qquad - \frac{N+2s}{2} + 1 = \frac{1-2s}{2} > 0 \quad \text{if} \ N = 1, \] we see that \[ \begin{aligned} &\int_{ B_R^+} dX \, t^{1-2s} \int_{ |x-y| \leq 3R} w(y)^2 \frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy \\ \leq & \ \int_0^R dt \int_{B_R} dx \int_{|x-y| \leq 3 R} w(y)^2\frac{t}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy \\ = & \ \int_{ B_R} dx \int_{ |x-y| \leq 3R} dy \int_{0}^R w(y)^2 \frac{1}{2-N-2s} \frac{\partial}{\partial t} \left( |x-y|^2 + t^2 \right)^{ \frac{2 - (N+2s)}{2} } \, dt \\ = \ & \frac{1}{2 - N - 2s} \int_{ B_R} dx \int_{ |x-y| \leq 3R} w(y)^2 \left\{ \left( |x-y|^2 + R^2 \right)^{ \frac{2-N-2s}{2} } - |x-y|^{ 2 - N -2s } \right\} \, dy \\ \leq \ & \left\{\begin{aligned} & \frac{1}{1 - 2s } \int_{ B_R} dx \int_{ |x-y| \leq 3R} w(y)^2 \left( |x-y|^2 + R^2 \right)^{ \frac{1-2s}{2} } \, dy & & \text{when $N = 1$}, \\ & \frac{1}{N+2s -2 }\int_{ B_R} dx \int_{ |x-y| \leq 3R} w(y)^2 |x-y|^{ 2-N-2s } \, dy & & \text{when $N \geq 2$}. \end{aligned}\right. \end{aligned} \] When $N = 1$, by $\set{y \in \mathbb{R}^N | |x-y| \leq 3R} \subset B_{4R}$ for each $x \in B_R$, we have \[ \begin{aligned} \int_{ B_R} dx \int_{ |x-y| \leq 3R} w(y)^2 \left( |x-y|^2 + R^2 \right)^{ \frac{1-2s}{2} } \, dy & \leq C R^{1-2s} \int_{ B_R} dx \int_{ |x-y| \leq 3R} w(y)^2 \, dy \\ & \leq C R^{2-2s} \int_{ B_{4R}} w(y)^2 \, dy.
\end{aligned} \] On the other hand, when $N \geq 2$, since $B_R(y) \subset B_{5R}$ for each $y \in B_{4R}$, we have \[ \begin{aligned} \int_{ B_R} dx \int_{ |x-y| \leq 3R} w(y)^2 |x-y|^{ 2-N-2s } \, dy & \leq C \int_{ B_R} dx \int_{ B_{4R}} w(y)^2 |x-y|^{2-N-2s } \, dy \\ & = C \int_{ B_{4R}} dy \int_{ B_R (y) } w(y)^2 |z|^{2-N-2s} \, dz \\ & \leq C \int_{ B_{4R}} dy \int_{ B_{5R}} w(y)^2 |z|^{2-N-2s} \, d z \\ & = C R^{2 - 2s} \int_{ B_{4R}} w(y)^2 \,dy. \end{aligned} \] Hence, for $N \geq 1$ and $R \geq 3R_0$, \[ \int_{ B_R^+} dX \, t^{1-2s} \int_{ |x-y| \leq 3R} w(y)^2 \frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy \leq C R^{2-2s} \int_{ B_{4R}} w(y)^2 \, dy. \] Notice that $w \equiv 0$ on $B_{3R_0}$, $|w| \leq |u|$ and $0<c \leq \eta_R(x)$ for any $3R_0 \leq |x| \leq 4R$. By Lemma \ref{Lemma:2.6} and $N - 2\ell / (p-1) > 0$, which holds since $p > p_S(N,\ell)$, we may argue as in \eqref{eq:2.45} and \eqref{eq:2.47} to obtain \[ \begin{aligned} & \int_{ B_R^+} dX \, t^{1-2s} \int_{ |x-y| \leq 3R} w(y)^2 \frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy \\ \leq \ & C R^{2-2s} \int_{ 3R_0 \leq |x| \leq 4R} u^2 \left( |x|^{\frac{\ell}{2}} \eta_R \right)^{\frac{4}{p+1}} \left( |x|^{\frac{\ell}{2}} \eta_R \right)^{-\frac{4}{p+1}} \, dx \\ \leq \ & CR^{2-2s} \left( \int_{\mathbb{R}^N} |x|^\ell |u|^{p+1} \eta_R^2 \, dx \right)^{\frac{2}{p+1}} \left( \int_{3R_0 \leq |x| \leq 4R} |x|^{-\frac{2\ell}{p-1}} \,dx \right)^{\frac{p-1}{p+1}} \\ \leq \ & C R^{ 2(1-s) + (N-\frac{2\ell}{p-1})\frac{p-1}{p+1} } \left( \int_{ \mathbb{R}^N} u^2 \rho_R \, dx \right)^{\frac{2}{p+1}}.
\end{aligned} \] Furthermore, by \eqref{eq:2.53} and Lemma~\ref{Lemma:2.7}, enlarging $C$ if necessary, we obtain \begin{equation}\label{eq:2.56} \int_{\mathbb R^N}u^2\rho_R\,dx\le CR^{N-\frac{2}{p-1}(s(p+1)+\ell)}, \end{equation} which yields \begin{equation} \label{eq:2.57} \int_{ B_R^+ } dX \, t^{1-2s} \int_{ |x-y| \leq 3R} w(y)^2 \frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy \le CR^{N+2(1-s)-\frac{2(2s+\ell)}{p-1}}. \end{equation} Next, we consider the second term in \eqref{eq:2.55}, namely the case $|x-y| > 3R$. Since $|x-y|\ge |y|-|x| \ge |y|-R\ge |y|/2$ and $B_{3R}^c(x) \subset B_{2R}^c$ for $x\in B_R$ and $y\in B^c_{2R}$, we have \[ \begin{aligned} &\int_{ B_R^+} dX \, t^{1-2s} \int_{ |x-y| > 3R} w(y)^2 \frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy \\ \leq \ & \int_{ B_R} dx \int_{ |x-y| > 3R} dy \int_0^R w(y)^2 \, t \, |x-y|^{-N-2s} \, dt \\ \leq \ & R^2 \int_{ B_R} dx \int_{ B_{3R}^c(x)} w(y)^2 |x-y|^{-N-2s} \, dy \\ \leq \ & C R^2 \int_{ B_R} dx \int_{ B_{3R}^c(x)} w(y)^2 |y|^{-N-2s} \, dy \\ \leq \ & C R^{2+N} \int_{ |z| \geq 2R} w(z)^2 |z|^{-N-2s} \, dz.
\end{aligned} \] On the other hand, from the definition of $\eta_R$ and $\rho_R$, it follows that for $|x| \geq 2 R \geq 6 R_0$ and $\boldsymbol{e}_1 = (1,0,\ldots,0) \in \mathbb{R}^N$, \begin{equation}\label{eq:2.58} \begin{aligned} \rho_R(x) = \int_{ \mathbb{R}^N} \frac{ ( \eta_R(x) - \eta_R (y) )^2 }{|x-y|^{N+2s}} \, dy &\geq \int_{ |y| \geq 2 R_0 } \frac{ \left( \eta\left( \frac{x}{R} \right) - \eta \left(\frac{y}{R}\right) \right)^2 }{|x-y|^{N+2s}} \, dy \\ &= R^{-2s} \int_{ |z| \geq 2 R^{-1} R_0 } \frac{ \left( \eta\left(\frac{x}{R}\right) - \eta(z) \right)^2 }{|R^{-1} x - z|^{N+2s} } \, dz \\ & \geq R^{-2s} \int_{ |z- \boldsymbol{e}_1 | < \frac{1}{3}} \frac{ \left( \eta\left(\frac{x}{R}\right) - \eta(z) \right)^2 }{|R^{-1} x - z|^{N+2s} } \, dz \\ &\geq C_0 R^{-2s} \left| R^{-1} x \right|^{-N-2s} = C_0 R^{N} |x|^{-N-2s} \end{aligned} \end{equation} for some $C_0>0$. Thus, noting $u \equiv w$ on $|y| \geq 2R$ and \eqref{eq:2.56}, we obtain \[ \begin{aligned} R^{2+N} \int_{ |y| \geq 2R} w(y)^2 |y|^{-N-2s} \, dy & \leq C R^2 \int_{ |y| \geq 2R} u^2 \rho_R \, dy \leq CR^{N+ 2 (1-s) - \frac{2(2s+\ell)}{p-1} }, \end{aligned} \] which implies \begin{equation}\label{eq:2.59} \int_{ B_R^+} dX \, t^{1-2s} \int_{ |x-y| > 3R} w(y)^2 \frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy \leq CR^{N+ 2 (1-s) - \frac{2(2s+\ell)}{p-1} }. \end{equation} Substituting \eqref{eq:2.57} and \eqref{eq:2.59} into \eqref{eq:2.55}, we see that \[ \int_{ B_R^+} t^{1-2s} W^2 \, dX \leq CR^{N+ 2 (1-s) - \frac{2(2s+\ell)}{p-1} }. \] This with \eqref{eq:2.54} and $U=V+W$ completes the proof of Lemma \ref{Lemma:2.8}. \end{proof} \begin{lemma} \label{Lemma:2.9} Assume the same conditions as in Lemma~$\ref{Lemma:2.8}$.
Then there exists a $C=C(N,p,s,\ell,R_0,u)>0$ such that \begin{equation} \label{eq:2.60} \int_{B^+_R}t^{1-2s}|\nabla U|^2\, dX +\int_{B_R}|x|^\ell|u|^{p+1}\,dx \le CR^{N-\frac{2}{p-1}(s(p+1)+\ell)} \end{equation} for all $R\ge 3R_0$. \end{lemma} \begin{proof} We first prove the weighted $L^{p+1}$ estimate for $u$. Let $\eta_R$ and $\rho_R$ be the functions given in Lemma~\ref{Lemma:2.5}. Then, since $u\in L^\infty_{\rm loc}(\mathbb R^N)$ and \eqref{eq:1.2} holds, we have \begin{equation} \label{eq:2.61} \int_{B_{2R_0}}|x|^\ell|u|^{p+1}\, dx\le C. \end{equation} Furthermore, since $p_S(N,\ell) < p$ implies \eqref{eq:2.53}, applying Lemmata~\ref{Lemma:2.6} and \ref{Lemma:2.7}, and noting $\eta_R \geq c_0>0$ on $B_R \setminus B_{2R_0}$, we see that for all $R \geq 3R_0$, \begin{align*} \int_{B_R\setminus B_{2R_0}}|x|^\ell|u|^{p+1}\, dx & \le C\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\eta_R^2 \,dx \\ & \le C\int_{\mathbb R^N}u^2\rho_R\,dx \le C\bigg(1+R^{N-\frac{2}{p-1}(s(p+1)+\ell)}\bigg) \le CR^{N-\frac{2}{p-1}(s(p+1)+\ell)}. \end{align*} This together with \eqref{eq:2.61} yields \begin{equation} \label{eq:2.62} \int_{B_R}|x|^\ell|u|^{p+1}\,dx \le CR^{N-\frac{2}{p-1}(s(p+1)+\ell)} \quad \text{for each $R \geq 3R_0$}. \end{equation} Next we take a cut-off function $\zeta\in C^\infty_c (\overline{\mathbb R^{N+1}_+} \setminus \overline{B^+_{R_0}} )$ such that \begin{equation}\label{eq:2.63} \zeta\equiv \left\{ \begin{array}{ll} 1& \mbox{on}\quad B^+_R\setminus B^+_{2R_0}, \\ 0& \mbox{on}\quad B^+_{R_0}\cup( \mathbb R^{N+1}_+ \setminus B^+_{2R}), \end{array} \right. \qquad |\nabla \zeta|\le CR^{-1} \quad\mbox{on}\quad B^+_{2R}\setminus B^+_{R}.
\end{equation} Then, taking $\psi = U\zeta^2 \in C^1_c ( \overline{ \mathbb{R}^{N+1}_+} \setminus \overline{ B^+_{R_0}} )$ as a test function in \eqref{eq:2.18}, we obtain \begin{equation} \label{eq:2.64} \begin{aligned} \kappa_s\int_{ \mathbb R^N }|x|^\ell|u|^{p+1}\zeta(x,0)^2\,dx & =\int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla U\cdot\nabla(U\zeta^2)\,dX \\ & =\int_{\mathbb R^{N+1}_+}t^{1-2s}\{|\nabla(U\zeta)|^2-U^2|\nabla\zeta|^2\}\,dX. \end{aligned} \end{equation} Since $u$ is stable outside $B_{R_0}$ and $U=u$ on $\partial\mathbb R^{N+1}_+$, we see from \eqref{eq:2.8} that $U$ is stable outside $B^+_{R_0}$, that is, for any $\psi \in C^1_c( \overline{ \mathbb{R}^{N+1}_+ } \setminus \overline{B^+_{R_0}} )$, \begin{equation} \label{eq:2.65} \begin{aligned} p\kappa_s\int_{\mathbb{R}^N}|x|^\ell|U(x,0)|^{p-1}\psi(x,0)^2\,dx &= p\kappa_s\int_{\mathbb{R}^N}|x|^\ell|u|^{p-1}\psi(x,0)^2\,dx \\ & \leq \kappa_s \| \psi (\cdot, 0) \|_{\dot{H}^s(\mathbb{R}^N)}^2 \le\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla \psi|^2\,dX. \end{aligned} \end{equation} By \eqref{eq:2.64} and \eqref{eq:2.65} with $\psi=U\zeta \in C^1_c( \overline{ \mathbb{R}^{N+1}_+} \setminus \overline{B^+_{R_0}} )$, we have \[ \int_{\mathbb R^{N+1}_+}t^{1-2s}\{|\nabla(U\zeta)|^2-U^2|\nabla\zeta|^2\}\,dX \leq \frac{1}{p}\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla(U\zeta)|^2\, dX, \] which implies \begin{equation} \label{eq:2.66} \int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla(U\zeta)|^2\,dX \le \frac{p}{p-1}\int_{\mathbb R^{N+1}_+}t^{1-2s}U^2|\nabla\zeta|^2\,dX.
\end{equation} By \eqref{eq:2.66}, \eqref{eq:2.63} and \eqref{eq:2.52}, we see that \begin{equation} \label{eq:2.67} \begin{aligned} \int_{B^+_R\setminus B^+_{2R_0}}t^{1-2s}|\nabla U|^2\,dX &\le \int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla(U\zeta)|^2\,dX \\ & \le C\int_{\mathbb R^{N+1}_+}t^{1-2s}U^2|\nabla\zeta|^2\,dX \\ & \le C \left( \int_{B^+_{2R_0} \setminus B_{R_0}^+ } t^{1-2s} U^2 \, dX + R^{-2}\int_{ B^+_{2R} \setminus B_R^+ }t^{1-2s}U^2\,dX \right) \\ & \le C \int_{B^+_{2R_0} \setminus B_{R_0}^+ } t^{1-2s} U^2 \, dX + CR^{N-\frac{2}{p-1}(s(p+1)+\ell)} \end{aligned} \end{equation} for all $R\ge3R_0$. On the other hand, it follows from $U\in H^1_{\mathrm{loc}} ( \overline{\mathbb{R}^{N+1}_+},t^{1-2s}dX)$, which holds by Lemma \ref{Lemma:2.1}, that \[ \int_{B^+_{2R_0}}t^{1-2s} \left( |\nabla U|^2 + U^2 \right)\,dX\le C. \] This together with \eqref{eq:2.62} and \eqref{eq:2.67} yields \eqref{eq:2.60}, and Lemma~\ref{Lemma:2.9} follows. \end{proof} \section{The subcritical and critical case} \label{section:3} In this section, we prove Theorem~\ref{Theorem:1.1} in the subcritical and critical case, that is, for $1 < p \leq p_S(N,\ell)$. \begin{proof}[Proof of Theorem~\ref{Theorem:1.1} for $1<p\le p_S(N,\ell)$] Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^2( \mathbb{R}^N , (1+|x|)^{-N-2s} dx ) $ be a solution of \eqref{eq:1.1} which is stable outside $B_{R_0}$. As $R \to \infty$, $\eta_R(x) \to \psi(R_0^{-1} x)$ for each $x \in \mathbb{R}^N$.
Then, by Lemma \ref{Lemma:2.5}, $(\rho_R)_{R\ge R_0}$ is bounded in $L^\infty(\mathbb{R}^N)$ and we may check that \[ \rho_R(x) = \int_{ \mathbb{R}^N} \frac{ \left( \eta_R(x) - \eta_R(y) \right)^2 }{|x-y|^{N+2s}} \, dy \to \int_{ \mathbb{R}^N} \frac{ \left( \psi(R_0^{-1} x) - \psi ( R_0^{-1} y ) \right)^2 }{|x-y|^{N+2s}} \, dy =: \rho_\infty (x). \] Next, from Lemmata \ref{Lemma:2.6} and \ref{Lemma:2.7} and the assumption $1 < p \leq p_S(N,\ell)$, it follows that $( u \eta_R^{2/(p+1)} )_{R \geq 3R_0}$ is bounded in $L^{p+1} (\mathbb{R}^N,|x|^\ell dx)$ and $(u \eta_R)_{R \geq R_0}$ is bounded in $\dot{H}^s(\mathbb{R}^N)$. Since \[ u(x) \eta_R(x)^{ \frac{2}{p+1} } \to u(x) \psi \left( R_0^{-1} x \right)^{\frac{2}{p+1}}, \quad u(x) \eta_R(x) \to u(x) \psi \left( R_0^{-1} x \right) \quad \text{for each $x \in \mathbb{R}^N$}, \] we infer that \[ \begin{aligned} &u \eta_R^{\frac{2}{p+1}} \rightharpoonup u \psi \left( R_0^{-1} \cdot \right)^{\frac{2}{p+1}} & &\text{weakly in} \ L^{p+1} (\mathbb{R}^N, |x|^\ell dx), \\ & u \eta_R \rightharpoonup u \psi \left( R_0^{-1} \cdot \right) & &\text{weakly in} \ \dot{H}^s(\mathbb{R}^N). \end{aligned} \] In particular, we deduce that $u \in \dot{H}^s(\mathbb{R}^N) \cap L^{p+1}(\mathbb{R}^N,|x|^\ell dx)$. Since $\varphi_n u \to u$ strongly in $\dot{H}^s(\mathbb{R}^N)$, where $(\varphi_n)_n$ is the sequence of cut-off functions from Lemma \ref{Lemma:2.1}, and $u \in L^\infty_{\rm loc}(\mathbb{R}^N)$, we may use $\varphi_n u$ as a test function in \eqref{eq:1.4}: \[ \int_{ \mathbb{R}^N} |x|^\ell |u|^{p+1} \varphi_n \, dx = \left\langle u , \varphi_n u \right\rangle_{\dot{H}^s(\mathbb{R}^N)}. \] Letting $n \to \infty$, we obtain \begin{equation} \label{eq:3.1} \int_{\mathbb R^N}|x|^\ell|u|^{p+1}\, dx=\|u\|_{\dot H^s(\mathbb R^N)}^2. \end{equation} Thus, the former assertion of Theorem \ref{Theorem:1.1}(ii) is proved.
For the latter assertion of Theorem \ref{Theorem:1.1} (ii), assume that $p = p_S(N,\ell)$ and that $u$ is stable. By the same argument as above, we can apply the stability inequality \eqref{eq:1.9} with the test function $\varphi=u$: \[ p\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\, dx\le\|u\|_{\dot H^s(\mathbb R^N)}^2. \] Together with \eqref{eq:3.1}, this gives $(p-1)\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\,dx\le 0$, and hence $u\equiv0$. It remains to treat the subcritical case. Since $u\in\dot H^s(\mathbb R^N) $, notice that $\nabla U\in L^2(\mathbb R^{N+1}_+,t^{1-2s}dX)$ thanks to Remark \ref{Remark:2.1}. Then, similarly to \eqref{eq:2.25} with $u \in L^{p+1}(\mathbb{R}^N, |x|^\ell dx)$, we claim that there exists a sequence $R_n\to\infty$ such that \begin{equation} \label{eq:3.2} \lim_{n\to\infty}R_n \left[ \int_{S^+_{R_n}}t^{1-2s}|\nabla U|^2\,dS +\int_{S_{R_n}}|x|^\ell|u|^{p+1}\, d\omega +\int_{S^+_{R_n}}t^{1-2s}\left|\frac{\partial U}{\partial \nu}\right|^2\,dS\right]=0. \end{equation} Replacing $R$ by $R_n$ in \eqref{eq:2.21} and letting $n \to \infty$ via \eqref{eq:3.2}, we conclude that \[ \int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla U|^2\,dX =\frac{2\kappa_s}{N-2s}\frac{N+\ell}{p+1}\int_{\mathbb{R}^N}|x|^\ell|u|^{p+1}\,dx. \] This together with \eqref{eq:2.7} yields the following Pohozaev identity: \[ \frac{N+\ell}{p+1}\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\, dx=\frac{N-2s}{2}\|u\|_{\dot H^s(\mathbb R^N)}^2. \] Combining this identity with \eqref{eq:3.1}, and noting that $\frac{N+\ell}{p+1}>\frac{N-2s}{2}$ for $p<p_S(N,\ell)$, we obtain $\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\,dx=0$, that is, $u\equiv0$ for $p<p_S(N,\ell)$. The proof of Theorem~\ref{Theorem:1.1} is thus complete for $ 1 < p \leq p_S(N,\ell)$. \end{proof} \section{The supercritical case} \label{section:4} In this section, we basically follow the argument in \cite{DDW}.
However, due to the regularity issue of $U$ around $0$ in $\overline{ \mathbb{R}^{N+1}_+ }$, we prove the monotonicity formula (Lemma \ref{Lemma:4.2}) via the argument in \cite[Section 3]{FF} and prove Theorem \ref{Theorem:1.1} for $p_S(N,\ell) < p$. For $X\in \mathbb R^{N+1}_+$, we use the following notation: \[ r := |X|, \quad \sigma := \frac{X}{|X|} \in S^+_1, \quad \sigma_{N+1} := \frac{t}{|X|}. \] Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} and let $U$ be the function given in \eqref{eq:2.3}. For every $\lambda>0$, we define \begin{equation} \label{eq:4.1} D(U;\lambda):=\lambda^{-(N-2s)}\left[\frac{1}{2}\int_{B^+_\lambda}t^{1-2s}|\nabla U|^2\,dX -\frac{\kappa_s}{p+1}\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx\right] \end{equation} and \begin{equation} \label{eq:4.2} H(U;\lambda):=\lambda^{-(N+1-2s)}\int_{S^+_\lambda}t^{1-2s}U^2\,dS =\int_{S^+_1} \sigma_{N+1}^{1-2s}U(\lambda \sigma )^2\,dS. \end{equation} \begin{lemma} \label{Lemma:4.1} As functions of $\lambda$, $D,H \in C^1((0,\infty))$ and \[ \partial_{\lambda} D(U;\lambda) =\lambda^{-(N-2s)}\int_{S^+_\lambda}t^{1-2s} \left|\frac{\partial U}{\partial\nu} \right|^2\,dS -\lambda^{-(N+1-2s)}\kappa_s\frac{2s+\ell}{p+1}\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx \] and \[ \begin{aligned} \partial_{\lambda} H (U;\lambda) &=2\lambda^{-(N+1-2s)}\int_{S^+_\lambda}t^{1-2s}U\frac{\partial U}{\partial \nu}\,dS \\ &= 2\lambda^{-(N+1-2s)} \left[ \int_{B^+_\lambda}t^{1-2s}|\nabla U|^2\,dX -\kappa_s\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx \right].
\end{aligned} \] \end{lemma} \begin{proof} We first remark that by \eqref{eq:2.3} and Lemma \ref{Lemma:2.3}, $U \in C( \overline{ \mathbb{R}^{N+1}_+} )$, $\nabla_x U \in C( \overline{ \mathbb{R}^{N+1}_+ } \setminus \{0\} )$ and $V := t^{1-2s} \partial_t U \in C ( \overline{ \mathbb{R}^{N+1}_+ } \setminus \{0\} )$. Hence, as a function of $\lambda$, \[ \lambda \mapsto \int_{ B_\lambda} |x|^\ell |u|^{p+1} \, dx \in C^1( (0,\infty) ). \] On the other hand, since \[ t^{1-2s} \left| \nabla U \right|^2 = t^{1-2s} \left| \nabla_x U \right|^2 + t^{2s-1} V^2 \] and $t^{1-2s} \in L^1_{\rm loc} ( \overline{ \mathbb{R}^{N+1}_+} )$, it is easy to see that \[ \lambda \mapsto \int_{ S_\lambda^+} t^{1-2s} \left| \nabla U \right|^2 \, dS \in C((0,\infty)), \quad \lambda \mapsto \int_{ B_\lambda^+ } t^{1-2s} \left| \nabla U \right|^2 \, dX \in C^1( (0,\infty) ), \] which yields $D(U;\lambda) \in C^1((0,\infty))$. Moreover, for any $0< \lambda_1 < \lambda_2 < \infty$, there exists a $C_{\lambda_1,\lambda_2}>0$ such that for every $\lambda \in [\lambda_1,\lambda_2]$ and $\sigma \in S^+_1$, \[ \begin{aligned} \sigma_{N+1}^{1-2s}\left| \partial_{\lambda} ( U(\lambda \sigma ) )^2 \right| \leq 2 \sigma_{N+1}^{1-2s} \left| U(\lambda \sigma) \right| \left| \nabla U(\lambda \sigma ) \right| &\leq 2 \left| U(\lambda \sigma ) \right| \left( \sigma_{N+1}^{1-2s} \left| \nabla_x U(\lambda \sigma ) \right| + \left| V (\lambda \sigma) \right| \right) \\ & \leq C_{\lambda_1,\lambda_2} (1 + \sigma_{N+1}^{1-2s}). \end{aligned} \] Hence, the dominated convergence theorem gives $H (U;\lambda) \in C^1((0,\infty))$. Next we compute the derivatives of $D$ and $H$.
Direct computations and \eqref{eq:2.21} give \[ \begin{aligned} &\partial_{\lambda} D(U;\lambda) \\ = \ & - (N-2s) \lambda^{-(N-2s)-1} \left[ \frac{1}{2} \int_{ B_\lambda^+ } t^{1-2s} | \nabla U |^2 \, dX - \frac{\kappa_s}{p+1} \int_{ B_\lambda} |x|^\ell |u|^{p+1} \, dx \right] \\ & \quad + \lambda^{-(N-2s)-1} \lambda \left[ \frac{1}{2} \int_{ S_\lambda^+} t^{1-2s} |\nabla U|^2 \, dS - \frac{\kappa_s}{p+1} \int_{ S_\lambda} |x|^\ell |u|^{p+1} \, d\omega \right] \\ =\ & - (N-2s) \lambda^{-(N-2s)-1} \left[ \frac{1}{2} \int_{ B_\lambda^+ } t^{1-2s} | \nabla U |^2 \, dX - \frac{\kappa_s}{p+1} \int_{ B_\lambda} |x|^\ell |u|^{p+1} \, dx \right] \\ & \quad + \lambda^{-(N-2s)-1} \left[ \frac{N-2s}{2} \int_{ B_\lambda^+} t^{1-2s} |\nabla U|^2 \, dX - \kappa_s \frac{N+\ell}{p+1} \int_{ B_\lambda} |x|^\ell |u|^{p+1} \, dx \right. \\ & \hspace{3cm} \left. + \lambda \int_{ S_\lambda^+} t^{1-2s} \left| \frac{\partial U}{\partial \nu} \right|^2 \, dS \right] \\ = \ & \lambda^{-(N-2s)} \int_{ S_\lambda^+} t^{1-2s} \left| \frac{\partial U}{\partial \nu} \right|^2 \, dS - \lambda^{-(N+1-2s)}\kappa_s\frac{2s+\ell}{p+1}\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx.
\end{aligned} \] For $H$, we compute similarly by using \eqref{eq:2.22} and $\nabla U(X) \cdot (X/|X|) = \partial U / \partial \nu$: \[ \begin{aligned} \partial_{\lambda} H(U;\lambda) &= \int_{ S^+_1} \sigma_{N+1}^{1-2s} 2 U(\lambda \sigma) \nabla U (\lambda \sigma) \cdot \sigma \, d S \\ &= 2 \lambda^{-N-1+2s} \int_{ S_\lambda^+} t^{1-2s} U \frac{\partial U}{\partial \nu} \, dS \\ &= 2 \lambda^{-N-1+2s} \left[ \int_{ B_\lambda^+ } t^{1-2s} |\nabla U|^2 \, dX - \kappa_s \int_{ B_\lambda} |x|^\ell |u|^{p+1} \, dx \right]. \end{aligned} \] This completes the proof. \end{proof} Applying Lemma~\ref{Lemma:4.1}, we prove the following monotonicity formula (cf. \cite[Theorem 1.4]{DDW}). \begin{lemma} \label{Lemma:4.2} For $\lambda>0$, define $E(U;\lambda)$ by \begin{equation} \label{eq:4.3} E(U;\lambda):=\lambda^{\frac{2(2s+\ell)}{p-1}} \left(D(U;\lambda)+\frac{2s+\ell}{2(p-1)}H(U;\lambda) \right). \end{equation} Then it holds that \begin{equation} \label{eq:4.4} \partial_{\lambda} E(U;\lambda) =\lambda^{\frac{2}{p-1}(s(p+1)+\ell)-N} \int_{S^+_\lambda} t^{1-2s} \left|\frac{2s+\ell}{p-1}\frac{U}{\lambda}+\frac{\partial U}{\partial r}\right|^2\,dS. \end{equation} \end{lemma} \begin{proof} Put \begin{equation} \label{eq:4.5} \gamma:=\frac{2(2s+\ell)}{p-1}.
\end{equation} By \eqref{eq:4.3} and \eqref{eq:4.5}, we have \begin{equation} \label{eq:4.6} \begin{aligned} \partial_{\lambda} E(U;\lambda) & =\gamma \lambda^{\gamma-1}\left(D(U;\lambda)+\frac{\gamma}{4}H(U;\lambda)\right) +\lambda^{\gamma}\left( \partial_{\lambda} D(U;\lambda)+\frac{\gamma}{4} \partial_{\lambda} H(U;\lambda)\right) \\ & =\lambda^{\gamma-1} \left(\gamma D(U;\lambda)+\frac{\gamma^2}{4}H(U;\lambda)+\lambda \partial_{\lambda} D(U;\lambda) +\frac{\gamma\lambda}{4} \partial_{\lambda}H(U;\lambda) \right). \end{aligned} \end{equation} Since it follows from \eqref{eq:4.5} that \[ \left(\frac{1}{2}-\frac{1}{p+1}\right)\gamma-\frac{2s+\ell}{p+1} =\frac{p-1}{2(p+1)}\cdot\frac{2(2s+\ell)}{p-1}-\frac{2s+\ell}{p+1}=0, \] by Lemma~\ref{Lemma:4.1} and \eqref{eq:2.22}, we see that \begin{align*} & \lambda^{N-2s} \left( \gamma D(U;\lambda)+\frac{\gamma^2}{4}H(U;\lambda) +\lambda \partial_{\lambda} D(U;\lambda) +\frac{\gamma \lambda}{4} \partial_{\lambda}H(U;\lambda)\right) \\ = \ & \gamma \left[\frac{1}{2}\int_{B^+_\lambda}t^{1-2s}|\nabla U|^2\,dX -\frac{\kappa_s}{p+1}\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx \right] +\frac{\gamma^2}{4}\lambda^{-1}\int_{S^+_\lambda}t^{1-2s}U^2\,dS \\ &\quad +\lambda\int_{S^+_\lambda}t^{1-2s} \left|\frac{\partial U}{\partial\nu} \right|^2\,dS -\kappa_s\frac{2s+\ell}{p+1}\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx +\frac{\gamma}{2}\int_{S^+_\lambda}t^{1-2s}U\frac{\partial U}{\partial\nu}\,dS \\ = \ & \lambda \left[ \int_{ S_\lambda^+} t^{1-2s} \left\{ \frac{\gamma^2}{4} \left(
\frac{U}{\lambda} \right)^2 + \gamma \frac{U}{\lambda} \frac{\partial U}{\partial \nu} + \left( \frac{\partial U}{\partial \nu} \right)^2 \right\} \, dS \right] \\ &\quad +\kappa_s\left\{\left(\frac{1}{2}-\frac{1}{p+1}\right) \gamma-\frac{2s+\ell}{p+1}\right\} \int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx \\ = \ &\lambda\int_{S^+_\lambda}t^{1-2s}\left|\frac{\gamma}{2}\frac{U}{\lambda}+\frac{\partial U}{\partial\nu}\right|^2\,dS. \end{align*} This together with \eqref{eq:4.6} and $\partial U/\partial\nu=\partial U/\partial r$ on $S^+_\lambda$ implies \eqref{eq:4.4}. \end{proof} Similarly to \cite[Theorem~5.1]{DDW}, we prove a nonexistence result for solutions which have a special form and are stable outside $B_{R_0}^+$. \begin{lemma} \label{Lemma:4.3} Let $R_0>0$ and $p_S(N,\ell)<p$. Suppose \eqref{eq:1.11} and that $W$ satisfies the following: \begin{equation}\label{eq:4.7} \left\{\begin{aligned} & W(X) = r^{ - \frac{2s+\ell}{p-1} } \psi (\sigma) \in H^1_{\mathrm{loc}} ( \overline{ \mathbb{R}^{N+1}_+} ,t^{1-2s} dX ) , \\ &\psi(\omega,0):=\psi|_{ \partial S^{+}_1} \in L^{p+1} ( \partial S^{+}_1 ), \\ & \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} \nabla W \cdot \nabla \Phi \, dX = \kappa_s \int_{ \mathbb{R}^N} |x|^\ell |W(x,0)|^{p-1} W(x,0) \Phi(x,0) \, dx \\ &\hspace{5cm} \text{for each $\Phi \in C^1_c( \overline{ \mathbb{R}^{N+1}_+ } )$}, \\ & \kappa_s p \int_{\mathbb R^N} |x|^\ell |W(x,0)|^{p-1} \Phi(x,0)^2 \, dx \leq \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} \left| \nabla \Phi \right|^2 \, dX \\ &\hspace{5cm} \text{for each $\Phi \in C^1_c( \overline{ \mathbb{R}^{N+1}_+ } \setminus \overline{ B^+_{R_0}} )$}. \end{aligned}\right. \end{equation} Then $W \equiv 0$.
\end{lemma} \begin{remark}\label{Remark:4.1} \begin{enumerate} \item The function $W$ in Lemma \ref{Lemma:4.3} is not necessarily of the form $W=P_s(\cdot, t) \ast u$ where $u$ is a solution of \eqref{eq:1.1}. \item By $p_S(N,\ell)<p$, we have \[ \ell - \frac{p}{p-1} (2s+\ell) = - \frac{2sp + \ell }{p-1} > -N, \quad |x|^\ell |W(x,0)|^{p-1} W(x,0) \in L^1_{\rm loc} (\mathbb{R}^N). \] \item Set \[ \begin{aligned} H^1( S^+_1 , \sigma_{N+1}^{1-2s}dS ) &:= \overline{ C^1 ( \overline{ S^+_1 } ) }^{ \| \cdot \|_{H^1(S^+_1 , \sigma_{N+1}^{1-2s}dS)} }, \\ \| u \|_{H^1(S^+_1, \sigma_{N+1}^{1-2s}dS)}^2 &:= \int_{ S^+_1 } \sigma_{N+1}^{1-2s} \left[ \left| \nabla_{S^+_1} u \right|^2 + u^2 \right] dS \end{aligned} \] where $\nabla_{ S^+_1 }$ stands for the standard gradient on the unit sphere in $\mathbb{R}^{N+1}$. From \cite[Lemma 2.2]{FF}, there exists a trace operator $H^1(S_1^+,\sigma_{N+1}^{1-2s}dS) \to L^2( \partial S^+_1 )$. \item Since \[ - \diver \left( t^{1-2s} \nabla W \right) = 0 \quad \text{in} \ \mathbb{R}^{N+1}_+, \quad W \in H^1_{\mathrm{loc}} \left( \overline{ \mathbb{R}^{N+1}_+ },t^{1-2s}dX \right), \] elliptic regularity yields $W = r^{ - \frac{2s+\ell}{p-1} } \psi(\sigma) \in C^\infty (\mathbb{R}^{N+1}_+)$. In addition, from $W \in H^1_{\mathrm{loc}} ( \overline{ \mathbb{R}^{N+1}_+},t^{1-2s}dX)$, we see $\psi \in H^1(S^+_1,\sigma_{N+1}^{1-2s}dS) $. Next, for $k \geq 1$, consider \[ W_k(X) := \max \left\{ -k , \, \min \left\{ |X|^{ \frac{2s+\ell}{p-1} } W(X), k \right\} \right\}. \] Then \[ W_k \in H^1 ( \overline{ B^+_2 } \setminus \overline{ B^+_{1/2} },t^{1-2s}dX) \cap L^\infty ( \overline{ B^+_2 } \setminus \overline{ B^+_{1/2} } ), \quad \left| W_k(X) \right| \leq \left| X \right|^{ \frac{2s+\ell}{p-1} } \left| W(X) \right|.
\] From this fact, we may find $( \psi_k )_k$ satisfying \begin{equation}\label{eq:4.8} \begin{aligned} & \psi_k \in H^1( S^+_1,\sigma_{N+1}^{1-2s}dS) \cap L^\infty(S^+_1), \quad \left| \psi_k(\sigma) \right| \leq \left| \psi(\sigma) \right| \quad \text{for any $\sigma \in S^+_1$}, \\ & \left\| \psi_k - \psi \right\|_{H^1(S^+_1,\sigma_{N+1}^{1-2s}dS)} \to 0, \quad \psi_k (\omega,0) \to \psi(\omega,0) \quad \text{strongly in } L^{p+1} (\partial S^+_1). \end{aligned} \end{equation} \item If $\varphi \in H^1(S^+_1,\sigma_{N+1}^{1-2s}dS) \cap L^\infty(S_1^+)$, then we may find $(\varphi_k)_k \subset C^1( \overline{S^+_1} )$ such that \begin{equation}\label{eq:4.9} \sup_{k \geq 1} \| \varphi_k \|_{L^\infty( S^+_1 )} < \infty, \quad \| \varphi_k - \varphi \|_{H^1( S^+_1,\sigma_{N+1}^{1-2s}dS )} \to 0. \end{equation} By the trace operator, we also have $\varphi_k(\omega,0) \to \varphi (\omega,0)$ in $L^2(\partial S^+_1)$. \end{enumerate} \end{remark} Although the proof of Lemma~\ref{Lemma:4.3} is similar to that of \cite[Theorem~5.1]{DDW}, we give it here for the sake of completeness. Before the proof of Lemma \ref{Lemma:4.3}, we recall \cite[Lemma 2.1]{FF}: \begin{lemma}\label{Lemma:4.4} For $v (X) = f(r) \psi(\sigma) \in C^\infty (\mathbb{R}^{N+1}_+)$, \[ - \diver \left( t^{1-2s} \nabla v \right) = -r^{-N} \left( r^{N+1-2s} f_r(r) \right)_r \sigma_{N+1}^{1-2s} \psi(\sigma) - r^{-1-2s} f(r) \diver_{S^+_1} \left( \sigma_{N+1}^{1-2s} \nabla_{S^+_1} \psi \right) \] where $\diver_{S^+_1}$ is the standard divergence operator on the unit sphere in $\mathbb{R}^{N+1}$. \end{lemma} \begin{proof}[Proof of Lemma \ref{Lemma:4.3}] Let $W = r^{ - \frac{2s+\ell}{p-1} } \psi (\sigma) $ be as in the statement. We divide our arguments into several steps.
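Before Step 1, an elementary computation with Lemma \ref{Lemma:4.4} motivates the constant $\beta$ appearing there: for the homogeneous radial profile $f(r) = r^{ - \frac{2s+\ell}{p-1} }$ of $W$, we have $f_r(r) = - \frac{2s+\ell}{p-1} r^{ - \frac{2s+\ell}{p-1} - 1 }$, and hence \[ -r^{-N} \left( r^{N+1-2s} f_r(r) \right)_r = \frac{2s+\ell}{p-1} \left( N - 2s - \frac{2s+\ell}{p-1} \right) r^{-1-2s} f(r) = \frac{2s+\ell}{p-1} \left( N - \frac{2sp+\ell}{p-1} \right) r^{-1-2s} f(r), \] where the last equality follows from $N - \frac{2sp+\ell}{p-1} = N - 2s - \frac{2s+\ell}{p-1}$.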
\noindent \textbf{Step 1:} \textsl{$\psi$ satisfies \[ \left\{\begin{aligned} &- \diver_{S^+_1} \left( \sigma_{N+1}^{1-2s} \nabla_{S^+_1} \psi \right) + \beta \sigma_{N+1}^{1-2s} \psi = 0 & &\mathrm{in} \ S^+_1, \\ &- \lim_{ \sigma_{N+1} \to +0} \sigma_{N+1}^{1-2s} \partial_{\sigma_{N+1}} \psi = \kappa_s |\psi |^{p-1} \psi & &\mathrm{on} \ \partial S^+_1=S_1 \end{aligned}\right. \] where $\beta := \frac{2s+\ell}{p-1} \left( N - \frac{2sp+\ell}{p-1} \right)$; namely, for each $\varphi \in H^1( S^+_1,\sigma_{N+1}^{1-2s}dS) \cap L^\infty(S^+_1) $, \begin{equation}\label{eq:4.10} \int_{ S^+_1} \sigma_{N+1}^{1-2s} \nabla_{S^+_1} \psi \cdot \nabla_{S^+_1} \varphi + \beta \sigma_{N+1}^{1-2s} \psi \varphi \, d S = \kappa_s \int_{ \partial S^+_1} |\psi|^{p-1} \psi \varphi \, d \omega. \end{equation} Furthermore, we may also choose $\varphi = \psi$ in \eqref{eq:4.10} and obtain \begin{equation} \label{eq:4.11} \int_{S^+_1} \sigma_{N+1}^{1-2s}|\nabla_{S^+_1}\psi|^2 +\beta\sigma_{N+1}^{1-2s}\psi^2 \,dS =\kappa_s\int_{ \partial S^+_1 }|\psi|^{p+1}\,d\omega. \end{equation} } \begin{proof} For \eqref{eq:4.10}, by \eqref{eq:4.9}, $\psi(\omega,0) \in L^{p+1}(\partial S^+_1)$ and the dominated convergence theorem, it is enough to prove it for $\varphi \in C^1( \overline{S^+_1} )$. For $V(X) = V(r \sigma) \in C^1_c( \overline{ \mathbb{R}^{N+1}_+ } )$, notice that \begin{equation}\label{eq:4.12} \begin{aligned} \nabla V(X)& = \partial_r V( r \sigma ) \sigma + r^{-1} \nabla_{S^+_1} V, \\ \nabla W(X)& = r^{ - \frac{2s+\ell}{p-1} - 1 } \left[ - \frac{2s+\ell}{p-1} \psi(\sigma) \sigma + \nabla_{S^+_1} \psi(\sigma) \right].
\end{aligned} \end{equation} We see from \eqref{eq:4.7}, \eqref{eq:4.12} and $\sigma \cdot \nabla_{S^+_1} h (\sigma) = 0$ for functions $h$ on $S^+_1$ that \begin{equation}\label{eq:4.13} \begin{aligned} &\kappa_s \int_{ \mathbb{R}^N} |x|^{ - \frac{2sp + \ell}{p-1} } |\psi(x/|x|,0)|^{p-1} \psi(x/|x|,0) V(x,0) \, dx \\ = \ & \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} \nabla W \cdot \nabla V \, dX \\ = \ & \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} r^{ - \frac{2s+\ell}{p-1} - 1 } \left[ - \frac{2s+\ell}{p-1} \partial_rV (r \sigma) \psi(\sigma ) + r^{-1} \nabla_{S^+_1} V \cdot \nabla_{S^+_1} \psi \right] \, dX. \end{aligned} \end{equation} If we choose $V$ of the form $V(X) = \eta(r) \varphi(\sigma)$ where $\eta \in C^1_{c} ([0,\infty))$ and $\varphi \in C^1 ( \overline{ S^+_1} )$, then \eqref{eq:4.13} is rewritten as \begin{equation}\label{eq:4.14} \begin{aligned} &\kappa_s \left( \int_0^\infty r^{ - \frac{2sp+\ell}{p-1} + N - 1 } \eta (r) \,dr \right) \left( \int_{ \partial S^+_1} |\psi( \omega , 0 )|^{p-1} \psi (\omega, 0) \varphi (\omega,0) \, d \omega \right) \\ = \ & - \frac{2s+\ell}{p-1} \left( \int_0^\infty r^{ N - \frac{2sp+\ell}{p-1}} \eta'(r) \, d r \right) \left( \int_{ S^+_1} \sigma_{N+1}^{1-2s} \varphi(\sigma) \psi(\sigma) \, d S \right) \\ & \quad + \left( \int_0^\infty r^{ - \frac{2sp+\ell}{p-1} + N -1 } \eta(r) \, dr \right) \left( \int_{ S^+_1} \sigma_{N+1}^{1-2s} \nabla_{S^+_1} \varphi \cdot \nabla_{S^+_1} \psi \, d S \right). \end{aligned} \end{equation} Since $p_S(N,\ell) < p$ yields $ - \frac{2sp+\ell}{p-1} + N > 0$, it follows from integration by parts that \[ -\frac{2s+\ell}{p-1} \int_0^\infty r^{N-\frac{2sp+\ell}{p-1}} \eta'(r) \, dr = \beta \int_0^\infty r^{N-\frac{2sp+\ell}{p-1}-1} \eta(r) \, dr.
\] Thus, by choosing $\eta \geq 0$ with $\eta \not\equiv 0$, \eqref{eq:4.14} implies \[ \kappa_s \int_{ \partial S^+_1} |\psi(\omega,0)|^{p-1} \psi(\omega,0) \varphi (\omega,0) \, d\omega = \int_{ S^+_1}\sigma_{N+1}^{1-2s} \nabla_{S^+_1} \psi \cdot \nabla_{S^+_1} \varphi + \beta \sigma_{N+1}^{1-2s} \psi \varphi \, d S \] for every $\varphi \in C^1( \overline{ S^+_1} )$. Hence, \eqref{eq:4.10} holds. For \eqref{eq:4.11}, take $(\psi_k)_k$ satisfying \eqref{eq:4.8}. Then \eqref{eq:4.10} holds for $\varphi = \psi_k$. Thanks to $\psi(\omega,0) \in L^{p+1} (\partial S_1^+)$ and the dominated convergence theorem, letting $k \to \infty$, we obtain \eqref{eq:4.11}. \end{proof} \noindent \textbf{Step 2:} \textsl{For every $\varphi \in H^1( S^+_1,\sigma_{N+1}^{1-2s}dS) \cap L^\infty(S^+_1)$, \begin{equation} \label{eq:4.15} \kappa_s p\int_{ \partial S^+_1 }|\psi|^{p-1}\varphi^2\,d \omega \le \int_{ S^+_1 } \sigma_{N+1}^{1-2s} |\nabla_{S^+_1}\varphi|^2\,dS +\left(\frac{N-2s}{2}\right)^2\int_{ S^+_1} \sigma_{N+1}^{1-2s}\varphi^2\,dS. \end{equation} } \begin{proof} It is enough to treat the case $\varphi \in C^1( \overline{ S^+_1} )$, due to \eqref{eq:4.9}, as in Step 1. We recall the stability assumption in \eqref{eq:4.7}: for any $\phi\in C^1_c(\overline{\mathbb R^{N+1}_+}\setminus \overline{B^+_{R_0}})$, \begin{equation} \label{eq:4.16} \kappa_s p\int_{\partial\mathbb R^{N+1}_+}|x|^\ell|W|^{p-1} \phi(x,0)^2\,dx \le \int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla \phi|^2\,dX.
\end{equation} For $0< \varepsilon \ll 1$, we choose $\tau_\varepsilon$ and a standard cutoff function $\eta_\varepsilon\in C^1_c( (0,\infty) )$ such that \begin{equation} \label{eq:4.17} \tau_\varepsilon := \frac{1}{\sqrt{-\log \varepsilon}} \to 0, \qquad \begin{aligned} &\chi_{(R_0+2 \tau_\varepsilon , \, R_0+ \varepsilon^{-1} )}(r)\le \eta_\varepsilon(r) \le \chi_{(R_0 + \tau_\varepsilon , \, R_0+2 \varepsilon^{-1} )} (r), \\ & |\eta_\varepsilon'(r)|\le C \tau_\varepsilon^{-1} \quad \mbox{for}\quad r\in(R_0+ \tau_\varepsilon , \, R_0+ 2 \tau_\varepsilon), \\ & |\eta_\varepsilon'(r)|\le C\varepsilon \quad \mbox{for}\quad r\in(R_0+ \varepsilon^{-1} , \, R_0+2 \varepsilon^{-1}) \end{aligned} \end{equation} where $\chi_A(r)$ is the characteristic function of $A \subset (0,\infty)$. For $\varphi\in C^1( \overline{S^+_1} )$, we put \begin{equation} \label{eq:4.18} \phi(X)=r^{-\frac{N-2s}{2}}\eta_\varepsilon(r)\varphi(\sigma )\qquad\mbox{for}\quad X = r \sigma \in\mathbb R^{N+1}_+. \end{equation} Since $W=r^{ - \frac{2s+\ell}{p-1} } \psi (\sigma)$, we have \begin{equation} \label{eq:4.19} \int_{\partial\mathbb R^{N+1}_+}|x|^\ell|W|^{p-1}\phi^2\,dx =\left(\int_0^\infty r^{-1}\eta_\varepsilon^2\,dr\right) \left(\int_{ \partial S^+_1 }|\psi|^{p-1}\varphi^2\,d\omega\right).
\end{equation} On the other hand, by \eqref{eq:4.18}, we see that \begin{equation} \label{eq:4.20} \begin{aligned} |\nabla\phi (X)|^2 & = \left( \left(r^{-\frac{N-2s}{2}}\eta_\varepsilon \right)' \right)^2\varphi^2 +r^{-2}\left(r^{-\frac{N-2s}{2}}\eta_\varepsilon\right)^2 |\nabla_{S^+_1}\varphi|^2 \\ & =\left[ \left(\frac{N-2s}{2} \right)^2\varphi^2+ | \nabla_{S^+_1}\varphi |^2\right] r^{-2-(N-2s)}\eta_\varepsilon^2 \\ &\qquad +r^{-(N-2s)}(\eta_\varepsilon')^2\varphi^2-(N-2s)r^{-1-(N-2s)}\eta_\varepsilon\eta_\varepsilon'\varphi^2. \end{aligned} \end{equation} Since it follows from \eqref{eq:4.17} that \begin{equation*} \begin{aligned} &\int_0^\infty r^{N+1-2s}r^{-(N-2s)}(\eta_\varepsilon')^2\,dr \leq C \tau_\varepsilon^{-2} \int_{R_0+\tau_\varepsilon}^{R_0+2\tau_\varepsilon} r \, dr +C\varepsilon^2\int_{R_0 + 1/\varepsilon}^{R_0 + 2/\varepsilon} r\,dr \leq C \left( \tau_\varepsilon^{-1} + 1 \right), \\ & \left| \int_0^\infty r^{N+1-2s}r^{-1-(N-2s)}\eta_\varepsilon\eta_\varepsilon'\,dr \right| \leq C \tau_\varepsilon^{-1} \int_{R_0 + \tau_\varepsilon }^{R_0 + 2 \tau_\varepsilon } \,dr +C\varepsilon\int_{R_0 + 1/\varepsilon}^{R_0 + 2 / \varepsilon} \,dr \le C, \end{aligned} \end{equation*} by \eqref{eq:4.20} we have \begin{equation} \label{eq:4.21} \begin{aligned} & \int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla \phi|^2\,dX \\ = \ & \int_0^\infty\, dr\, \int_{ S^+_1 } r^N ( r \sigma_{N+1})^{1-2s} \left\{ \left[ \left(\frac{N-2s}{2} \right)^2\varphi^2 +|\nabla_{S^+_1}\varphi|^2\right]r^{-2-(N-2s)}\eta_\varepsilon^2 \right.
\\ &\hspace{4cm} + r^{-(N-2s)}(\eta_\varepsilon')^2\varphi^2-(N-2s)r^{-1-(N-2s)}\eta_\varepsilon\eta_\varepsilon'\varphi^2\Bigg\} \,dS \\ \le \ & \left(\int_0^\infty r^{-1}\eta_\varepsilon^2\,dr\right) \left(\int_{S^+_1} \sigma_{N+1}^{1-2s} \left[ \left( \frac{N-2s}{2} \right)^2\varphi^2+|\nabla_{S^+_1}\varphi|^2\right]\,dS\right) +C \left( \tau_\varepsilon^{-1} + 1 \right). \end{aligned} \end{equation} Finally, remark that \[ \int_0^\infty r^{-1} \eta_\varepsilon^2 \, dr \geq \int_{R_0 + 2 \tau_\varepsilon}^{R_0+\varepsilon^{-1}} r^{-1} \, dr = \log \left( R_0 + \varepsilon^{-1} \right) - \log \left( R_0 + 2 \tau_\varepsilon \right) \geq \frac{\tau_\varepsilon^{-2}}{2} \] holds for sufficiently small $\varepsilon$. Therefore, substituting \eqref{eq:4.19} and \eqref{eq:4.21} into \eqref{eq:4.16}, dividing by $\int_0^\infty r^{-1} \eta_\varepsilon^2 \, dr $ and taking $\varepsilon\to0$, we obtain \eqref{eq:4.15}. \end{proof} \noindent \textbf{Step 3:} \textsl{For $\alpha\in [0 , \frac{N-2s}{2} )$, set \[ v_\alpha(x) := |x|^{ - \left( \frac{N-2s}{2} - \alpha \right) }, \quad V_\alpha (X) := ( P_s (\cdot, t) \ast v_\alpha ) (x). \] Then for each $X \in \mathbb{R}^{N+1}_+$ and $\lambda > 0$, \begin{equation} \label{eq:4.22} V_\alpha(\lambda X)=\lambda^{-\frac{N-2s}{2}+\alpha}V_\alpha(X) \end{equation} and $\phi_{\alpha}(\sigma) := V_\alpha ( \sigma) = V_\alpha (r^{-1} X) \in C( \overline{ S^+_1 } ) \cap C^1( S^+_1 ) \cap H^1( S^+_1,\sigma_{N+1}^{1-2s}dS) $.
Moreover, $\phi_{\alpha}$ satisfies $\phi_{\alpha} > 0$ in $\overline{S^+_1}$; for any $\varphi \in H^1(S^+_1,\sigma_{N+1}^{1-2s}dS) \cap L^\infty(S^+_1)$, \begin{equation} \label{eq:4.23} \begin{aligned} & \int_{S_1^+} \sigma_{N+1}^{1-2s} |\nabla_{S^+_1}\varphi |^2\,dS + \left[ \left(\frac{N-2s}{2}\right)^2-\alpha^2\right] \int_{S^+_1} \sigma_{N+1}^{1-2s}\varphi^2\,dS \\ = \ & \kappa_s \lambda(\alpha) \int_{\partial S^+_1} \varphi^2\, d\omega +\int_{S^+_1} \sigma_{N+1}^{1-2s}\phi_\alpha^2 \left|\nabla_{S^+_1} \left( \frac{\varphi}{\phi_\alpha} \right) \right|^2\,dS; \end{aligned} \end{equation} and \begin{equation} \label{eq:4.24} 0 \leq \alpha_1 \leq \alpha_2 < \frac{N-2s}{2} \quad \Rightarrow\quad \phi_{\alpha_1} \le \phi_{\alpha_2} \qquad \mathrm{in} \quad S^+_1. \end{equation} } \begin{proof} By direct computation, we may check \eqref{eq:4.22}. For the assertion $\phi_{\alpha} \in C( \overline{ S^+_1 } ) \cap C^1( S^+_1 ) \cap H^1( S^+_1,\sigma_{N+1}^{1-2s}dS)$, we remark that $V_\alpha(X) = (P_s(\cdot,t) \ast v_\alpha )(x) \in C^\infty(\mathbb{R}^{N+1}_+)$. Since $\phi_\alpha=V_\alpha|_{ S^+_1}$, it suffices to prove \begin{equation}\label{eq:4.25} V_\alpha, \ t^{1-2s} \partial_tV_\alpha, \ \nabla_x V_\alpha \in C \left( \overline{ N^+_{1/4} (\partial S^+_1)} \right), \quad N^+_r(A) := \Set{ X \in \mathbb{R}^{N+1}_+ | \, \dist \left( X, A \right) < r }. \end{equation} To this end, decompose $v_\alpha = v_{\alpha,1} + v_{\alpha,2}$ where $ \supp v_{\alpha,1} \subset B_{2} \setminus B_{1/4}$ and $v_{\alpha,1} \equiv v_\alpha$ on $B_{3/2} \setminus B_{1/2}$, and set $V_{\alpha,i} (X) := (P_s(\cdot, t) \ast v_{\alpha,i})(x)$.
Since $v_{\alpha,1} \in C^\infty_c(\mathbb{R}^N)$ and $N_{1/4} (\partial S^+_1 ) \cap \supp v_{\alpha,2} = \emptyset$, where $N_r(A) := \set{x \in \mathbb{R}^N |\, \dist (x,A) < r }$, it is not difficult to show \eqref{eq:4.25}, and hence $\phi_{\alpha} \in C( \overline{ S^+_1 } ) \cap C^1( S^+_1 ) \cap H^1( S^+_1,\sigma_{N+1}^{1-2s}dS)$. The assertion $\phi_{\alpha} > 0$ in $\overline{ S^+_1}$ follows from $v_\alpha > 0$ in $\mathbb{R}^N \setminus \{0\}$ and the definition of $V_\alpha$. For \eqref{eq:4.23} and \eqref{eq:4.24}, remark that in \cite[Lemma~4.1]{Fall} it is proved that $(-\Delta)^s v_\alpha = \lambda(\alpha) |x|^{-2s} v_\alpha$ in $\mathbb{R}^N$, where $\lambda(\alpha)$ appears in \eqref{eq:1.7}. Hence, by the properties of $P_s(x,t)$ and $v_\alpha$, we may check that for each $\varphi \in C^\infty_c(\mathbb{R}^N \setminus \set{0} )$, \begin{equation}\label{eq:4.26} - \lim_{t \to +0} \int_{ \mathbb{R}^N} t^{1-2s} \partial_t V_\alpha (X) \varphi (x) \, d x = \kappa_s \int_{ \mathbb{R}^N} v_\alpha (-\Delta)^s \varphi \, d x = \kappa_s \int_{ \mathbb{R}^N} \lambda (\alpha) |x|^{-2s} v_\alpha \varphi \, dx. \end{equation} For any fixed $\omega \in \partial S^+_1$, we consider the curve \[ \gamma_{\omega} (\tau ) := \begin{pmatrix} \sqrt{1-\tau^{2}} \, \omega \\ \tau \end{pmatrix} \in S^+_1. \] Then $\phi_\alpha ( \gamma_{\omega} (\tau) ) = V_\alpha (\gamma_{\omega} (\tau))$ and \[ \frac{d}{d \tau} V_\alpha ( \gamma_{\omega} (\tau ) ) = \nabla V_\alpha ( \gamma_{\omega} (\tau) ) \cdot \begin{pmatrix} - \frac{\tau}{\sqrt{1-\tau^2}} \omega \\ 1 \end{pmatrix} = - \frac{\tau}{\sqrt{1-\tau^2}} \nabla_x V_\alpha( \gamma_{\omega} (\tau) ) \cdot \omega + \partial_t V_\alpha( \gamma_{\omega} (\tau) ).
\] Combining this fact with \eqref{eq:4.25}, $v_\alpha(1) = 1$ and \eqref{eq:4.26}, we deduce that \begin{equation}\label{eq:4.27} -\lim_{\sigma_{N+1}\to0} \sigma_{N+1}^{1-2s} \partial_{\sigma_{N+1}} \phi_\alpha (\sigma) = \kappa_s\lambda(\alpha)\qquad\mbox{on}\quad \partial S^+_1. \end{equation} Due to \eqref{eq:4.22}, we notice that \[ V_\alpha(X)=r^{-\frac{N-2s}{2}+\alpha}\phi_\alpha(\sigma). \] Furthermore, since $0=- \diver ( t^{1-2s} \nabla V_\alpha)$ in $\mathbb{R}^{N+1}_+$ and $V_\alpha=v_\alpha$ on $\partial\mathbb R^{N+1}_+\setminus\{0\}$, we have $\phi_\alpha=v_\alpha=1$ on $\partial S^+_1$, and by Lemma \ref{Lemma:4.4} with $f(r) = r^{ - \frac{N-2s}{2} + \alpha }$ and $\psi (\sigma) = \phi_{\alpha}(\sigma)$, $\phi_{\alpha}$ is a solution of \begin{equation} \label{eq:4.28} \left\{\begin{aligned} - \diver_{S^+_1} \left( \sigma_{N+1}^{1-2s} \nabla_{S^+_1} \phi_\alpha \right) + \left[ \left( \frac{N-2s}{2}\right)^2-\alpha^2\right] \sigma_{N+1}^{1-2s}\phi_\alpha &= 0 & &\mbox{in}\quad S^+_1, \\ \phi_\alpha&=1 & & \mbox{on}\quad \partial S^+_1. \end{aligned} \right. \end{equation} Now we prove \eqref{eq:4.23}. Since $\phi_{\alpha} \in C( \overline{ S^+_1} )$ and $\phi_{\alpha} > 0$ in $ \overline{ S^+_1}$, for every $\varphi \in H^1(S^+_1,\sigma_{N+1}^{1-2s}dS) \cap L^\infty (S^+_1)$ and $(\varphi_k)_k$ with \eqref{eq:4.9}, it follows that \[ \frac{\varphi_k}{\phi_{\alpha}} \to \frac{\varphi}{\phi_{\alpha}} \quad \text{strongly in} \ H^1(S^+_1,\sigma_{N+1}^{1-2s}dS), \quad \frac{\varphi}{\phi_{\alpha}} \in H^1(S^+_1,\sigma_{N+1}^{1-2s}dS). \] Hence, it suffices to show \eqref{eq:4.23} for $\varphi \in C^1( \overline{ S^+_1 } )$.
For $\varphi\in C^1( \overline{S^+_1} )$, notice that \[ \begin{aligned} \nabla_{S^+_1} \phi_\alpha \cdot \nabla_{S^+_1} \left( \frac{\varphi^2}{\phi_\alpha} \right) &= \nabla_{S^+_1} \phi_{\alpha} \cdot \left[ \frac{2 \varphi \nabla_{S^+_1} \varphi}{\phi_{\alpha}} - \frac{\varphi^2 \nabla_{S^+_1} \phi_\alpha}{\phi_{\alpha}^2} \right] = |\nabla_{S^+_1} \varphi |^2 - \left|\nabla_{S^+_1} \left(\frac{\varphi}{\phi_\alpha} \right) \right|^2 \phi_\alpha^2. \end{aligned} \] Thus, multiplying \eqref{eq:4.28} by $\varphi^2/\phi_\alpha$ and integrating, we see from \eqref{eq:4.27} that \eqref{eq:4.23} holds. Finally, for $0 \leq \alpha_1 \leq \alpha_2 < \frac{N-2s}{2}$, we infer from $0< \phi_{\alpha_1}$ that \[ \begin{aligned} - \diver_{S^+_1} \left( \sigma_{N+1}^{1-2s} \nabla_{S^+_1} \phi_{\alpha_1} \right) &= - \left[ \left( \frac{N-2s}{2} \right)^2 - \alpha_1^2 \right] \sigma_{N+1}^{1-2s} \phi_{\alpha_1} \\ &\leq - \left[ \left(\frac{N-2s}{2}\right)^2 - \alpha_2^2 \right] \sigma_{N+1}^{1-2s}\phi_{\alpha_1} \quad\mbox{on}\quad S^+_1, \end{aligned} \] which yields \[ - \diver_{S^+_1} \left( \sigma_{N+1}^{1-2s} \nabla_{ S^+_1 } \left( \phi_{\alpha_2} - \phi_{\alpha_1} \right) \right) + \left[ \left(\frac{N-2s}{2}\right)^2 - \alpha_2^2 \right] \sigma_{N+1}^{1-2s} \left( \phi_{\alpha_2} - \phi_{\alpha_1} \right) \geq 0. \] Multiplying this inequality by $( \phi_{\alpha_2} - \phi_{\alpha_1} )_- :=\max\{0,-(\phi_{\alpha_2} - \phi_{\alpha_1})\}\in H^1( S^+_1,\sigma_{N+1}^{1-2s}dS)$ and integrating it over $S^+_1$, by $\phi_{\alpha_1} = 1 = \phi_{\alpha_2}$ on $\partial S^+_1$ and $\left( \frac{N-2s}{2} \right)^2 - \alpha^2_2 > 0$, we deduce that $ ( \phi_{\alpha_2} - \phi_{\alpha_1} )_- \equiv 0$; hence \eqref{eq:4.24} holds. \end{proof} \noindent \textbf{Step 4: } \textsl{Conclusion.} Now we are ready to prove the assertion of Lemma \ref{Lemma:4.3}.
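For orientation in this final step, we note the elementary equivalence (here we use the explicit form $p_S(N,\ell)=\frac{N+2s+2\ell}{N-2s}$ of the critical exponent, consistent with the Pohozaev identity of Section 3): \[ p>p_S(N,\ell)=\frac{N+2s+2\ell}{N-2s} \iff p-1>\frac{2(2s+\ell)}{N-2s} \iff \frac{2s+\ell}{p-1}<\frac{N-2s}{2}. \]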
By $p_S(N,\ell)<p$ and $\ell>-2s$ thanks to \eqref{eq:1.2}, we set \begin{equation*} \widetilde{\alpha}=\frac{N-2s}{2}-\frac{2s+\ell}{p-1} \in \left( 0 , \frac{N-2s}{2} \right). \end{equation*} By this choice of $\widetilde{\alpha}$, we see that \begin{equation} \label{eq:4.29} \left(\frac{N-2s}{2}\right)^2-\widetilde{\alpha}^2=\frac{2s+\ell}{p-1}\left(N-2s-\frac{2s+\ell}{p-1}\right)=\beta \end{equation} where $\beta$ appears in Step 1. Let $(\psi_k)_k$ be the functions in \eqref{eq:4.8} and notice that $\phi_{0} / \phi_{\widetilde{\alpha}} \in C( \overline{ S^+_1 } ) \cap H^1(S^+_1,\sigma_{N+1}^{1-2s}dS)$, $\phi_0=\phi_{\widetilde{\alpha}}=1$ on $\partial S^+_1$ and $\psi_k \phi_0 / \phi_{\widetilde{\alpha}} \in H^1( S^+_1,\sigma_{N+1}^{1-2s}dS) \cap L^\infty(S^+_1) $. Hence, \eqref{eq:4.15} and \eqref{eq:4.23} with $\varphi = \psi_k \phi_0 / \phi_{\widetilde{\alpha}}$ and $\alpha = 0$ give \begin{equation*} \begin{aligned} & \kappa_s p\int_{\partial S^+_1} |\psi|^{p-1} \psi_k^2 \,d \omega \\ \leq \ & \int_{S^+_1} \sigma_{N+1}^{1-2s} \left| \nabla_{S^+_1} \left( \frac{\psi_k \phi_0}{\phi_{\widetilde{\alpha}}} \right) \right|^2 \, dS + \left( \frac{N-2s}{2} \right)^2 \int_{S^+_1} \sigma_{N+1}^{1-2s} \left( \frac{\psi_k \phi_0}{\phi_{\widetilde{\alpha}}} \right)^2\,dS \\ = \ & \kappa_s \lambda(0) \int_{ \partial S^+_1} \psi_k^2 \, d \omega + \int_{ S^+_1} \sigma_{N+1}^{1-2s} \phi_{0}^2 \left| \nabla_{ S^+_1 } \left( \frac{\psi_k}{\phi_{\widetilde{\alpha}}} \right) \right|^2 \,dS. \end{aligned} \end{equation*} This together with \eqref{eq:4.24} implies \begin{equation} \label{eq:4.30} \kappa_s p \int_{\partial S^+_1} |\psi|^{p-1} \psi_k^2 \,d \omega \le \kappa_s \lambda(0) \int_{\partial S^+_1} \psi^2_k\,d \omega +\int_{ S^+_1 } \sigma_{N+1}^{1-2s} \phi_{\widetilde{\alpha}}^2 \left|\nabla_{ S^+_1}
\left(\frac{\psi_k }{\phi_{\widetilde{\alpha}}} \right) \right|^2\,dS. \end{equation} Substituting \eqref{eq:4.23} with $\varphi = \psi_k$ and $\alpha = \widetilde{\alpha}$ into \eqref{eq:4.30}, we observe from \eqref{eq:4.29} that \begin{equation}\label{eq:4.31} \begin{aligned} & \kappa_s p\int_{\partial S^+_1} |\psi|^{p-1} \psi_k^2 \,d \omega \\ \le \ & \kappa_s \lambda(0) \int_{\partial S^+_1} \psi^2_k \,d \omega + \int_{ S^+_1 } \sigma_{N+1}^{1-2s}|\nabla_{ S^+_1 }\psi_k|^2\,dS \\ &\hspace{1.5cm} +\left[ \left( \frac{N-2s}{2}\right)^2-\widetilde{\alpha}^2\right] \int_{S^+_1} \sigma_{N+1}^{1-2s}\psi^2_k \,dS -\kappa_s\lambda( \widetilde{\alpha} )\int_{\partial S^+_1}\psi^2_k \,d \omega \\ = \ & \kappa_s \left( \lambda(0) - \lambda(\widetilde{\alpha}) \right) \int_{\partial S^+_1} \psi^2_k \,d \omega + \int_{S^+_1} \sigma_{N+1}^{1-2s}|\nabla_{S^+_1}\psi_k |^2\,dS +\beta\int_{ S^+_1 } \sigma_{N+1}^{1-2s}\psi_k^2\,dS. \end{aligned} \end{equation} On the other hand, by \eqref{eq:4.23} with $\varphi = \psi_k$ and $\alpha = \widetilde{\alpha}$, we have \[ \begin{aligned} \int_{ S^+_1} \sigma_{N+1}^{1-2s} | \nabla_{ S^+_1 } \psi_k |^2 \, d S + \beta \int_{S^+_1} \sigma_{N+1}^{1-2s} \psi_k^2 \, dS \geq \kappa_s \lambda(\widetilde{\alpha}) \int_{ \partial S^+_1} \psi_k^2 \, d \omega. \end{aligned} \] From \eqref{eq:4.31} and the fact $\lambda (0) > \lambda(\widetilde{\alpha})$ due to \eqref{eq:1.8}, it follows that \begin{equation*} \kappa_s p \int_{ \partial S^+_1} | \psi |^{p-1} \psi_k^2 \, d \omega \leq \frac{\lambda (0)}{\lambda(\widetilde{\alpha})} \left\{ \int_{ S^+_1} \sigma_{N+1}^{1-2s} | \nabla_{ S^+_1 } \psi_k |^2 \, d S + \beta \int_{S^+_1} \sigma_{N+1}^{1-2s} \psi_k^2 \, dS\right\}.
\end{equation*} Letting $k \to \infty$ and noting \eqref{eq:4.8} and \eqref{eq:4.11}, we obtain \[ \kappa_s p \int_{ \partial S^+_1} | \psi |^{p+1} \, d \omega \leq \frac{\lambda (0)}{\lambda(\widetilde{\alpha})} \kappa_s \int_{ \partial S^+_1} |\psi|^{p+1} \, d \omega. \] Thus, we obtain $\lambda(\widetilde{\alpha})p\le \lambda(0)$ unless $\psi\equiv 0$. Therefore, if \eqref{eq:1.11} holds, namely $\lambda(\widetilde{\alpha})p> \lambda(0)$, then $\psi\equiv0$, and by $W=r^{ -\frac{2s+\ell}{p-1} } \psi$, we have $W\equiv 0$. Hence, Lemma~\ref{Lemma:4.3} follows. \end{proof} Now we are ready to prove Theorem~\ref{Theorem:1.1} for $p_S(N,\ell)<p$. Following \cite{DDW}, we use the blow-down analysis. \begin{proof}[Proof of Theorem~\ref{Theorem:1.1} for $p_S(N,\ell)<p$] Assume \eqref{eq:1.2} and \eqref{eq:1.11}. Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^2(\mathbb{R}^N,(1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} which is stable outside $B_{R_0}$ and let $U$ be the function given in \eqref{eq:2.3}. Recall $D$, $H$ and $E$ in \eqref{eq:4.1}, \eqref{eq:4.2} and \eqref{eq:4.3}, respectively. Then, by Lemma~\ref{Lemma:2.9} we see that \begin{equation} \label{eq:4.32} \lambda^{\frac{2(2s+\ell)}{p-1}}D(U;\lambda) \le C\lambda^{\frac{2}{p-1}(s(p+1)+\ell)-N}\left(\int_{B^+_\lambda}t^{1-2s}|\nabla U|^2\, dX +\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx\right) \le C \end{equation} for $\lambda\ge 3 R_0$.
Since $E$ is nondecreasing with respect to $\lambda$ due to Lemma \ref{Lemma:4.2}, by \eqref{eq:4.32} and Lemma \ref{Lemma:2.8}, for $\lambda \geq 3R_0$, we have \begin{equation*} \begin{aligned} E(U;\lambda) & \le \lambda^{-1} \int_\lambda^{2\lambda}E(U;\xi)\,d\xi \\ & = \lambda^{-1} \int_\lambda^{2\lambda} \xi^{\frac{2(2s+\ell)}{p-1}} \left[D(U;\xi)+\frac{2s+\ell}{2(p-1)}H(U;\xi) \right]\,d\xi \\ &\leq C + C \lambda^{-1} \int_{ \lambda}^{ 2\lambda} d \xi \, \xi^{ \frac{2(2s+\ell)}{p-1} - N - 1 + 2s } \int_{ S_\xi^+} t^{1-2s} U^2 \, dS \\ & \le C+C\lambda^{ \frac{2}{p-1}(2s+\ell) + 2s -N-2 }\int_{\lambda}^{2\lambda} d\xi \int_{S^+_\xi}t^{1-2s}U^2\,dS \\ & \le C+C\lambda^{ \frac{2}{p-1}(2s+\ell) + 2s -N-2 }\int_{B_{2\lambda}^+ }t^{1-2s}U^2\,dX \\ & \le C+C\lambda^{ \frac{2}{p-1}(2s+\ell) + 2s -N-2 } \lambda^{N + 2 (1-s) - \frac{2(2s+\ell)}{p-1} } \le C. \end{aligned} \end{equation*} This implies that \begin{equation} \label{eq:4.33} \lim_{\lambda\to\infty}E(U;\lambda)<+\infty. \end{equation} On the other hand, for $X\in\mathbb R^{N+1}_+$, let \[ V_\lambda (X):=\lambda^{\frac{2s+\ell}{p-1}}U(\lambda X).
\] Then it is easy to check that \begin{equation}\label{eq:4.34} \begin{aligned} &V_\lambda(X) = \left( P_s(\cdot, t) \ast \left( \lambda^{ \frac{2s+\ell}{p-1} } u \left( \lambda \cdot \right) \right) \right) (x), \\ & - \lim_{t \to +0} t^{1-2s} \partial_t V_\lambda(x,t) = \kappa_s |x|^\ell \left| V_\lambda(x,0) \right|^{p-1} V_\lambda (x,0), \\ &\lambda^{\frac{2(2s+\ell)}{p-1}}D(U;\lambda R)=D(V_\lambda ;R), \,\,\, \lambda^{ \frac{2(2s+\ell)}{p-1} } H(U;\lambda R) = H(V_\lambda; R), \,\,\, E(U;\lambda R)=E(V_\lambda;R) \end{aligned} \end{equation} for each $\lambda \geq 3R_0$. Since $u$ is stable outside $B_{R_0}$, as in the proof of Lemma \ref{Lemma:2.9} (see \eqref{eq:2.65}), by \eqref{eq:2.8}, $U$ is stable outside $B_{R_0}^+$. Therefore, for every $\psi \in C^1_c( \overline{ \mathbb{R}^{N+1}_+} \setminus \overline{ B^+_{\lambda^{-1} R_0 } } )$, \begin{equation}\label{eq:4.35} \begin{aligned} p \kappa_s \int_{ \mathbb{R}^N} |x|^\ell | V_\lambda(x,0) |^{p-1} \psi(x,0)^2 \, dx &= p \kappa_s \lambda^{2s-N} \int_{ \mathbb{R}^N} |x|^\ell |u(x)|^{p-1} \psi( \lambda^{-1} x,0 )^2 \, dx \\ &\leq \lambda^{2s-N} \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} \left| \nabla ( \psi(\lambda^{-1} X) ) \right|^2 \, dX \\ &= \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} \left| \nabla \psi \right|^2 \, dX, \end{aligned} \end{equation} which implies that $V_\lambda$ is stable outside $B^+_{\lambda^{-1} R_0 }$.
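The rescaling $V_\lambda(X)=\lambda^{\frac{2s+\ell}{p-1}}U(\lambda X)$ works precisely because the exponent $\frac{2s+\ell}{p-1}$ makes the equation scale-invariant: $(-\Delta)^s$ produces a factor $\lambda^{2s}$ under dilation, while the right-hand side $|x|^\ell|u|^{p-1}u$ produces $\lambda^{p\cdot\frac{2s+\ell}{p-1}-\ell}$, and the two weights coincide. As a quick symbolic sanity check of this exponent bookkeeping (a sketch outside the proof; the variable names are ours):

```python
import sympy as sp

s, ell, p = sp.symbols('s ell p', positive=True)

# Scaling exponent gamma = (2s + ell)/(p - 1) used to define V_lambda(X) = lam^gamma * U(lam X).
gamma = (2*s + ell) / (p - 1)

# If (-Delta)^s u = |x|^ell |u|^{p-1} u, then (-Delta)^s [u(lam x)] picks up lam^{2s},
# so the left-hand side for v(x) = lam^gamma * u(lam x) scales like lam^{gamma + 2s},
# while the right-hand side |x|^ell |v|^{p-1} v scales like lam^{p*gamma - ell}.
lhs_exponent = gamma + 2*s
rhs_exponent = p*gamma - ell

# The two exponents agree identically, so V_lambda solves the same equation.
print(sp.simplify(lhs_exponent - rhs_exponent))  # 0
```

The identity $p\gamma-\ell=\gamma+2s$ is exactly the statement that $V_\lambda$ satisfies the same boundary equation as $U$ in \eqref{eq:4.34}.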
Furthermore, by \eqref{eq:2.52} and \eqref{eq:2.60}, $(V_\lambda)_{\lambda \geq 3 R_0}$ is bounded in $H^1_{\mathrm{loc}}(\overline{ \mathbb{R}^{N+1}_+},t^{1-2s}dX)$ and $(V_\lambda(x, 0))_{\lambda \geq 3R_0}$ is bounded in $L^{p+1}_{\rm loc} (\mathbb{R}^N,|x|^\ell dx)$. Now let $(\lambda_i)_{i=1}^\infty$ satisfy $\lambda_i \to \infty$ and $V_{\lambda_i} \rightharpoonup U_\infty$ weakly in $H^1_{\mathrm{loc}} ( \overline{\mathbb{R}^{N+1}_+},t^{1-2s}dX)$. Thanks to the above fact, without loss of generality, we may also assume that \begin{equation}\label{eq:4.36} U_\infty(x,0) \in L^{p+1}_{\rm loc} ( \mathbb{R}^N,|x|^\ell dx). \end{equation} We claim that \begin{align} &V_{\lambda_i} (x,0) \to U_\infty(x,0) & &\text{strongly in } L^q_{\rm loc} (\mathbb{R}^N) \quad \text{for $1 \leq q < \frac{2N}{N-2s}$}, \label{align:4.37} \\ &V_{\lambda_i}(X) \to U_\infty(X) & &\text{strongly in } L^2_{\rm loc} ( \overline{ \mathbb{R}^{N+1}_+},t^{1-2s}dX). \label{align:4.38} \end{align} Due to the boundedness of the trace operator from $H^1( B_R \times (0,R),t^{1-2s}dX)$ to $H^s(B_R)$ for each $R$ (see \cite{DD-12}), we also have $V_{\lambda_i} (x,0) \rightharpoonup U_\infty (x,0)$ weakly in $H^s(B_R)$. By the compactness of the embedding $H^s(B_R) \subset L^q(B_R)$ for $1 \leq q < 2N/(N-2s)$, we get \eqref{align:4.37}. For \eqref{align:4.38}, since $H^1( B_R \times (R^{-1},R), t^{1-2s}dX) = H^1( B_R \times (R^{-1},R) )$, we first remark that \begin{equation} \label{eq:4.39} V_{\lambda_i} \to U_\infty \qquad\mbox{strongly in $L^2_{\rm loc} (\mathbb{R}^{N+1}_+,t^{1-2s}dX)$}.
\end{equation} Around $t = 0$, we notice that for $\psi \in C^1( \overline{ \mathbb{R}^{N+1}_+})$, \[ \begin{aligned} \left| \psi(x,t) \right| &\leq \left| \psi (x,0) \right| + \int_0^t \tau^{\frac{2s-1}{2}} \tau^{\frac{1-2s}{2}} \left| \partial_t \psi (x,\tau) \right| \, d \tau \\ &\leq |\psi(x,0)| + \left[ \frac{1}{2s} t^{2s} \right]^{1/2} \left( \int_0^t \tau^{1-2s} \left| \partial_t \psi (x, \tau ) \right|^2 \, d \tau \right)^{1/2}. \end{aligned} \] Therefore, \begin{equation}\label{eq:4.40} \begin{aligned} & \int_{B_R \times (0,T)} t^{1-2s} \left| \psi (X) \right|^2 \, dX \\ &\leq 2 \int_{B_R \times (0,T)} t^{1-2s} \left[ |\psi(x,0)|^2 + \frac{t^{2s}}{s} \int_0^t \tau^{1-2s} |\partial_t \psi (x,\tau) |^2 \, d \tau \right] dX \\ & \leq C_s\left[ T^{2-2s} \| \psi(\cdot, 0) \|_{L^2(B_R)}^2 +T^2 \int_{B_R\times(0,T)} t^{1-2s} |\partial_t \psi (X)|^2 \, dX \right]. \end{aligned} \end{equation} By a density argument, \eqref{eq:4.40} holds for every $\psi \in H^1_{\mathrm{loc}} (\overline{\mathbb{R}^{N+1}_+},t^{1-2s}dX)$. Thus, by \eqref{eq:4.39}, \[ \begin{aligned} & \limsup_{i \to \infty} \int_{ B_R \times (0,R) } t^{1-2s} | V_{\lambda_i} - U_\infty |^2 \, dX \\ = \ & \limsup_{i \to \infty} \left( \int_{B_R \times (0,T)} + \int_{B_R \times (T,R)} \right) t^{1-2s} | V_{\lambda_i} - U_\infty |^2 \, dX \leq C (T^{2-2s} + T^2 ). \end{aligned} \] Since $T \in (0,R)$ is arbitrary and $C$ is independent of $T$, \eqref{align:4.38} holds. Next, we shall prove that $U_\infty$ satisfies \eqref{eq:4.7}.
For the third property in \eqref{eq:4.7}, we observe from \eqref{eq:2.18} and \eqref{eq:4.34} that for each $\varphi \in C^1_c( \overline{ \mathbb{R}^{N+1}_+} )$, \[ \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} \nabla V_{\lambda_i} \cdot \nabla \varphi \, dX = \kappa_s \int_{ \mathbb{R}^N} |x|^\ell |V_{\lambda_i} (x,0)|^{p-1} V_{\lambda_i} (x,0) \varphi (x,0) \, dx. \] By \eqref{align:4.37}, we may also suppose that $V_{\lambda_i} (x,0) \to U_\infty(x,0)$ for a.a. $x \in \mathbb{R}^N$. Since $(V_{\lambda_i}(x,0))_i$ is bounded in $L^{p+1}_{\rm loc} (\mathbb{R}^N,|x|^\ell dx)$, a variant of Strauss' lemma (see Strauss \cite[Compactness Lemma 2]{Str-77} and Berestycki and Lions \cite[Theorem A.I]{BL}) and the fact that $V_{\lambda_i} \rightharpoonup U_\infty$ weakly in $H^1_{\mathrm{loc}}( \overline{ \mathbb{R}^{N+1}_+}, t^{1-2s}dX)$ give \[ \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} \nabla U_\infty \cdot \nabla \varphi \, dX = \kappa_s \int_{ \mathbb{R}^N} |x|^\ell |U_{\infty} (x,0)|^{p-1} U_{\infty} (x,0) \varphi (x,0) \, dx. \] In a similar way, by \eqref{eq:4.35}, we also observe that $U_\infty$ is stable outside $B_\varepsilon^+$ for any $\varepsilon > 0$, that is, for each $\psi \in C^1_c( \overline{ \mathbb{R}^{N+1}_+} \setminus \overline{ B^+_{\varepsilon}} )$, \[ \kappa_s p \int_{ \mathbb{R}^N} |x|^\ell |U_\infty(x,0)|^{p-1} \psi(x,0)^2 \, dx \leq \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} |\nabla \psi|^2 \, dX. \] Finally, we prove $U_\infty(X) = r^{- \frac{2s+\ell}{p-1}} U_\infty( r^{-1} X )$. If this is true, then \eqref{eq:4.36} gives $\psi(\omega, 0) = U_\infty(x/r,0) \in L^{p+1}(\partial S_1^+)$ and Lemma \ref{Lemma:4.3} is applicable to $U_\infty$.
Remark that \eqref{eq:4.34} implies \begin{equation}\label{eq:4.41} \left( - \Delta \right)^s V_\lambda ( x,0) = |x|^\ell \left| V_\lambda (x,0) \right|^{p-1} V_\lambda (x,0) \quad \text{in} \ \mathbb{R}^N \end{equation} and Proposition \ref{Proposition:2.2} and Lemma \ref{Lemma:4.2} hold for $V_\lambda$. Hence, for $R_2>R_1>0$, by \eqref{eq:4.33}, \eqref{eq:4.34} and Lemma \ref{Lemma:4.2}, we have \begin{equation}\label{eq:4.42} \begin{aligned} 0 =\lim_{i\to\infty}\bigg\{E(U;\lambda_iR_2)-E(U;\lambda_iR_1)\bigg\} &=\lim_{i\to\infty}\bigg\{E(V_{\lambda_i};R_2)-E(V_{\lambda_i};R_1)\bigg\} \\ & \ge\liminf_{i\to\infty}\int_{R_1}^{R_2}\frac{\partial}{\partial r}E(V_{\lambda_i};r)\,dr. \end{aligned} \end{equation} This together with \eqref{eq:4.4}, \eqref{align:4.38} and the weak lower semicontinuity of norms yields \begin{equation} \label{eq:4.43} \begin{aligned} 0 & \ge \liminf_{i\to\infty}\int_{R_1}^{R_2}r^{\frac{2}{p-1}(s(p+1)+\ell)-N} \left( \int_{S^+_r}t^{1-2s}\left(\frac{2s+\ell}{p-1}\frac{V_{\lambda_i}}{r} +\frac{\partial V_{\lambda_i}}{\partial r}\right)^2\, dS \right)dr \\ & = \liminf_{i\to\infty}\int_{B^+_{R_2}\setminus B^+_{R_1}}t^{1-2s}r^{\frac{2}{p-1}(s(p+1)+\ell)-N} \left(\frac{2s+\ell}{p-1}\frac{V_{\lambda_i}}{r}+\frac{\partial V_{\lambda_i}}{\partial r}\right)^2\,dX \\ & \ge \int_{B^+_{R_2}\setminus B^+_{R_1}}t^{1-2s}r^{\frac{2}{p-1}(s(p+1)+\ell)-N} \left(\frac{2s+\ell}{p-1}\frac{U_\infty}{r}+\frac{\partial U_\infty}{\partial r}\right)^2\,dX.
\end{aligned} \end{equation} Noting that $U_\infty \in C^\infty(\mathbb{R}^{N+1}_+)$ thanks to $\diver (t^{1-2s} \nabla U_\infty) = 0$ and elliptic regularity, by the arbitrariness of $R_1$ and $R_2$, we have \[ 0 = \frac{\partial U_\infty}{\partial r}+\frac{2s+\ell}{p-1}\frac{U_\infty}{r} = r^{ - \frac{2s+\ell}{p-1} } \frac{\partial}{\partial r}\left( r^{ \frac{2s+\ell}{p-1} } U_\infty \right) \quad \mbox{in}\quad \mathbb R^{N+1}_+. \] Integrating this equality with respect to $r$, we obtain \[ U_\infty(X)=r^{-\frac{2s+\ell}{p-1}}U_\infty(r^{-1}X) \] and hence, $U_\infty$ satisfies \eqref{eq:4.7}. It follows from Lemma~\ref{Lemma:4.3} that $U_\infty\equiv0$. Since the weak limit does not depend on the choice of subsequence, we infer that $V_\lambda \rightharpoonup 0$ weakly in $H^1_{\mathrm{loc}} ( \overline{ \mathbb{R}^{N+1}_+},t^{1-2s}dX)$ and from \eqref{align:4.38} that \begin{equation}\label{eq:4.44} \int_{ B_{2R}^+ } t^{1-2s} |V_\lambda|^2 \, dX \to 0 \quad \mbox{as}\quad \lambda \to \infty. \end{equation} Recalling \eqref{eq:4.34}, \eqref{eq:4.35}, \eqref{eq:4.41} and the proof of \eqref{eq:2.66} in Lemma \ref{Lemma:2.9}, we see \begin{equation}\label{eq:4.45} \int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla(V_\lambda \zeta)|^2\,dX \leq \frac{p}{p-1}\int_{\mathbb R^{N+1}_+}t^{1-2s}|V_{\lambda}|^2|\nabla\zeta|^2\,dX \end{equation} where $\zeta \in C^1_c( \overline{ \mathbb{R}^{N+1}_+} ) $ satisfies \[ \zeta \equiv 1\ \ \text{in} \ B_{R}^+ \setminus B_{r}^+, \quad \zeta \equiv 0 \quad \text{in} \ B_{r/2}^+ \cup ( \mathbb{R}^{N+1}_+ \setminus B_{2R}^+ ) \quad \mbox{for}\quad \lambda^{-1} R_0 < r < R.
\] From \eqref{eq:4.44}, \eqref{eq:4.45} and the properties of $\zeta$, we observe that for any $0 < r < R$, \begin{equation}\label{eq:4.46} \lim_{ \lambda \to \infty} \int_{ B_R^+ \setminus B_r^+ } t^{1-2s} \left| \nabla V_{\lambda} \right|^2 \, dX = 0. \end{equation} Furthermore, by \eqref{eq:2.64} with $V_\lambda$, \eqref{eq:4.44} and \eqref{eq:4.45}, for each $0 < r < R$, \begin{equation}\label{eq:4.47} \lim_{ \lambda \to \infty}\int_{B_R \setminus B_r} |x|^\ell |V_{\lambda}(x,0)|^{p+1} \, dx = 0. \end{equation} Next, we shall prove $E(U;\lambda) \to 0$ as $\lambda \to \infty$. In view of \eqref{eq:4.34}, for each $\varepsilon \in (0,1)$, we have \begin{equation}\label{eq:4.48} \begin{aligned} \lambda^{\frac{2(2s+\ell)}{p-1}}D(U;\lambda) = \ &D(V_\lambda ;1) \\ = \ &\frac{1}{2}\int_{B^+_\varepsilon}t^{1-2s}|\nabla V_\lambda|^2\,dX -\frac{\kappa_s}{p+1}\int_{B_\varepsilon}|x|^\ell|V_\lambda (x,0)|^{p+1}\,dx \\ & \quad +\frac{1}{2}\int_{B^+_1\setminus B^+_\varepsilon}t^{1-2s}|\nabla V_\lambda|^2\,dX -\frac{\kappa_s}{p+1}\int_{B_1\setminus B_\varepsilon}|x|^\ell|V_\lambda (x,0) |^{p+1}\,dx \\ = \ & \varepsilon^{N-2s} D(V_\lambda; \varepsilon) +\frac{1}{2}\int_{B^+_1\setminus B^+_\varepsilon}t^{1-2s}|\nabla V_\lambda|^2\,dX \\ &\quad -\frac{\kappa_s}{p+1}\int_{B_1\setminus B_\varepsilon}|x|^\ell|V_\lambda (x,0) |^{p+1}\,dx \\ = \ &\varepsilon^{N-\frac{2}{p-1}(s(p+1)+\ell)} \left[ (\lambda\varepsilon)^{\frac{2(2s+\ell)}{p-1}} D(U;\lambda\varepsilon)\right] \\ & \quad +\frac{1}{2}\int_{B^+_1\setminus B^+_\varepsilon}t^{1-2s}|\nabla V_\lambda|^2\,dX -\frac{\kappa_s}{p+1}\int_{B_1\setminus
B_\varepsilon}|x|^\ell|V_\lambda (x,0) |^{p+1}\,dx. \end{aligned} \end{equation} By \eqref{eq:4.32} and \eqref{eq:4.46}--\eqref{eq:4.48}, we see that \[ \begin{aligned} &\limsup_{ \lambda \to \infty} \left| \lambda^{\frac{2(2s+\ell)}{p-1}}D(U;\lambda) \right| \\ = \ & \limsup_{ \lambda \to \infty} \left| \varepsilon^{N-\frac{2}{p-1}(s(p+1)+\ell)} \left[ (\lambda\varepsilon)^{\frac{2(2s+\ell)}{p-1}} D(U;\lambda\varepsilon)\right] \right. \\ & \hspace{3cm} \left. +\frac{1}{2}\int_{B^+_1\setminus B^+_\varepsilon}t^{1-2s}|\nabla V_\lambda|^2\,dX -\frac{\kappa_s}{p+1}\int_{B_1\setminus B_\varepsilon}|x|^\ell|V_\lambda (x,0) |^{p+1}\,dx \right| \\ \leq \ & C_0 \varepsilon^{N-\frac{2}{p-1}(s(p+1)+\ell)} \end{aligned} \] for some $C_0>0$. Since $\varepsilon \in (0,1)$ is arbitrary and $N - \frac{2}{p-1} ( (p+1) s + \ell ) > 0$, we obtain \begin{equation} \label{eq:4.49} \lim_{\lambda\to\infty}\lambda^{\frac{2(2s+\ell)}{p-1}}D(U;\lambda) = 0. \end{equation} On the other hand, from \eqref{eq:4.44} it follows that \[ 0 = \lim_{ \lambda \to \infty} \int_{B_2^+} t^{1-2s} |V_\lambda|^2 \,dX = \lim_{\lambda\to\infty} \int_0^2 dr \int_{S_r^+} t^{1-2s} |V_\lambda|^2 \, dS. \] Passing to a subsequence $(\lambda_i)$, we have \[ \int_{S_r^+} t^{1-2s} |V_{\lambda_i}|^2 dS \to 0 \quad \text{for a.a. $r \in (0,2)$}. \] Therefore, there exists an $r_0 \in (0,2)$ such that \eqref{eq:4.34} gives \[ \lambda_i^{ \frac{2(2s+\ell)}{p-1} } H(U; r_0 \lambda_i) = H(V_{\lambda_i} ; r_0 ) = r_0^{ -(N+1-2s) } \int_{ S_{r_0}^+} t^{1-2s} |V_{\lambda_i}|^2 \,dS \to 0.
\] With \eqref{eq:4.34}, \eqref{eq:4.49} and the monotonicity of $E(U;\lambda)$, we have \begin{equation}\label{eq:4.50} \lim_{ \lambda \to \infty} E(U;\lambda) = \lim_{ i \to \infty} E(U;\lambda_i r_0)= 0. \end{equation} We now prove $U \equiv 0$. To this end, we show that $E(U;\lambda) \to 0$ as $\lambda \to 0$. Since $U \in C ( \overline{\mathbb{R}^{N+1}_+} )$ holds by Lemma \ref{Lemma:2.3}, as $\lambda \to 0$, we have \begin{equation}\label{eq:4.51} 0 \leq \lambda^{\frac{2(2s+\ell)}{p-1}} H(U;\lambda) = \lambda^{ \frac{2(2s+\ell)}{p-1} } \int_{ S^+_1} \sigma_{N+1}^{1-2s} U(\lambda \sigma)^2 \, dS \leq C \| U \|_{ L^\infty( B_1^+ ) }^2 \lambda^{ \frac{2(2s+\ell)}{p-1} } \to 0 \end{equation} and by \eqref{eq:1.2}, \begin{equation}\label{eq:4.52} \lambda^{ \frac{2(2s+\ell)}{p-1} - (N-2s)} \int_{ B_\lambda} |x|^\ell |u|^{p+1} \, dx \leq C \| u \|_{ L^\infty(B_1) }^{p+1} \lambda^{ \frac{2(2s+\ell)}{p-1} + 2s + \ell } \to 0. \end{equation} By \eqref{eq:4.50} and the monotonicity of $E(U;\lambda)$, we have $E(U;\lambda) \leq 0$ for all $\lambda \in (0,\infty)$.
In view of this fact with \eqref{eq:4.51} and \eqref{eq:4.52}, it follows that \[ \begin{aligned} &\limsup_{\lambda \to + 0} \frac{\lambda^{ \frac{2(2s+\ell)}{p-1} - (N-2s) } }{2} \int_{B^+_\lambda} t^{1-2s} |\nabla U|^2 \, dX \\ = \ & \limsup_{\lambda \to + 0} \left[ E(U;\lambda) + \frac{\lambda^{ \frac{2(2s+\ell)}{p-1} - (N-2s) } \kappa_s }{p+1} \int_{B_\lambda} |x|^\ell |u|^{p+1} \, dx - \lambda^{ \frac{2(2s+\ell)}{p-1} } \frac{2s+\ell}{2(p-1)} H(U;\lambda) \right] \\ \leq \ & 0, \end{aligned} \] which yields \begin{equation}\label{eq:4.53} \lim_{\lambda \to +0} \lambda^{ \frac{2(2s+\ell)}{p-1} - (N-2s)} \int_{ B_\lambda^+} t^{1-2s} |\nabla U|^2 \, dX = 0. \end{equation} Now, \eqref{eq:4.51}--\eqref{eq:4.53} yield $E(U;\lambda) \to 0$ as $\lambda \to +0$. Therefore, $E(U; \lambda) \equiv 0$ thanks to \eqref{eq:4.50} and the monotonicity of $E$. Applying an argument similar to \eqref{eq:4.42} and \eqref{eq:4.43}, we see that $U = r^{ - \frac{2s+\ell}{p-1} } U( r^{-1} X ) $. In addition, since $U\in C ( \overline{\mathbb{R}^{N+1}_+}) $, $\psi(\omega,0) = U(\omega,0) \in L^\infty(\partial S^+_1)$ and $u$ (respectively $U$) is stable outside $B_{R_0}$ (respectively $B_{R_0}^+$), \eqref{eq:4.7} is satisfied and it follows from Lemma~\ref{Lemma:4.3} that $U\equiv 0$. This completes the proof. \end{proof} \noindent \textbf{Acknowledgment.} The authors would like to express their sincere gratitude to the anonymous referee for their careful reading and useful comments. They also would like to thank Alexander Quaas for bringing \cite{BQ-20} to their attention. The first author (S.H.) was supported by JSPS KAKENHI Grant Number JP 20J01191.
The second author (N.I.) was supported by JSPS KAKENHI Grant Numbers JP 17H02851, 19H01797 and 19K03590. The third author (T.K.) was supported by JSPS KAKENHI Grant Numbers JP 19H05599 and 16K17629. \begin{thebibliography}{99} \bibitem{BQ-20} B. Barrios and A. Quaas, The sharp exponent in the study of the nonlocal H\'{e}non equation in $\mathbb{R}^N$: a Liouville theorem and an existence result. Calc. Var. Partial Differential Equations \textbf{59} (2020), no. 4, Paper No. 114, 22 pp. \bibitem{BL} H. Berestycki and P.-L. Lions, Nonlinear scalar field equations. I. Existence of a ground state, Arch. Ration. Mech. Anal. {\bf 82} (4) (1983) 313--345. \bibitem{CCFS} M. Chipot, M. Chleb\'ik, M. Fila and I. Shafrir, Existence of positive solutions of a semilinear elliptic equation, J. Math. Anal. Appl. {\bf 223} (1998), 429--471. \bibitem{CS} L. Caffarelli and L. Silvestre, An extension problem related to the fractional Laplacian, Comm. Partial Differential Equations {\bf 32} (2007), 1245--1260. \bibitem{CaSi} X. Cabr\'e and Y. Sire, Nonlinear equations for fractional Laplacians I: Regularity, maximum principles, and Hamiltonian estimates, Ann. Inst. H. Poincar\'e {\bf 31} (2014), 23--53. \bibitem{DQ} W. Dai and G. Qin, Liouville type theorems for fractional and higher order H\'{e}non-Hardy type equations via the method of scaling spheres, arXiv:1810.02752 [math.AP]. \bibitem{DDG} E.N. Dancer, Y. Du and Z. Guo, Finite Morse index solutions of an elliptic equation with supercritical exponent, J. Differential Equations {\bf 250} (2011), 3281--3310. \bibitem{DDM} J. D\'avila, L. Dupaigne and M. Montenegro, The extremal solution of a boundary reaction problem, Commun. Pure Appl. Anal. {\bf 7} (2008), 795--817. \bibitem{DDW} J. D\'avila, L. Dupaigne and J. Wei, On the fractional Lane--Emden equation, Trans. Amer. Math. Soc. {\bf 369} (2017), 6087--6104. \bibitem{DD-12} F. Demengel and G. Demengel, Functional spaces for the theory of elliptic partial differential equations.
Translated from the 2007 French original by Reinie Ern\'e. Universitext. Springer, London; EDP Sciences, Les Ulis, 2012. \bibitem{DiNPV-12} E. Di Nezza, G. Palatucci and E. Valdinoci, Hitchhiker's guide to the fractional Sobolev spaces. Bull. Sci. Math. \textbf{136} (2012), no. 5, 521--573. \bibitem{Fall} M.M. Fall, Semilinear elliptic equations for the fractional Laplacian with Hardy potential. Nonlinear Anal. \textbf{193} (2020), 111311, 29 pp. \bibitem{FF} M.M. Fall and V. Felli, Unique continuation property and local asymptotics of solutions to fractional elliptic equations, Comm. Partial Differential Equations {\bf 39} (2014), 354--397. \bibitem{FF-15} M.M. Fall and V. Felli, Unique continuation properties for relativistic Schr\"{o}dinger operators with a singular potential. Discrete Contin. Dyn. Syst. \textbf{35} (2015), no. 12, 5827--5867. \bibitem{FW} M.M. Fall and T. Weth, Nonexistence results for a class of fractional elliptic boundary value problems, J. Funct. Anal. {\bf 263} (2012), 2205--2227. \bibitem{Farina} A. Farina, On the classification of solutions of the Lane--Emden equation on unbounded domains of $\mathbb R^N$, J. Math. Pures Appl. (9) {\bf 87} (2007), 537--561. \bibitem{FW-16} M. Fazly and J. Wei, On stable solutions of the fractional H\'enon-Lane-Emden equation. Commun. Contemp. Math. \textbf{18} (2016), no. 5, 1650005, 24 pp. \bibitem{FLS-08} R.L. Frank, E.H. Lieb and R. Seiringer, Hardy--Lieb--Thirring inequalities for fractional Schr\"{o}dinger operators. J. Amer. Math. Soc. \textbf{21} (2008), no. 4, 925--950. \bibitem{H} J. Harada, Positive solutions to the Laplace equation with nonlinear boundary conditions on the half space, Calc. Var. Partial Differential Equations {\bf 50} (2014), 399--435. \bibitem{HIK-20} S. Hasegawa, N. Ikoma and T. Kawakami, On weak solutions to a fractional Hardy--H\'enon equation: Part 2: Existence, in preparation. \bibitem{JLX} T. Jin, Y.Y. Li and J. 
Xiong, On a fractional Nirenberg problem, part I: Blow up analysis and compactness of solutions, J. Eur. Math. Soc. {\bf 16} (2014), 1111--1171. \bibitem{LB-19} Y. Li and J. Bao, Fractional Hardy--H\'{e}non equations on exterior domains. J. Differential Equations \textbf{266} (2019), no. 2-3, 1153--1175. \bibitem{L-59} J.-L. Lions, Th\'eor\`emes de trace et d'interpolation. I. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (3) \textbf{13} (1959), 389--403. \bibitem{Str-77} W.A. Strauss, Existence of solitary waves in higher dimensions. Comm. Math. Phys. \textbf{55} (1977), no. 2, 149--162. \bibitem{WY-12} C. Wang and D. Ye, Some Liouville theorems for H\'enon type elliptic equations. J. Funct. Anal. \textbf{262} (2012), no. 4, 1705--1727 and Corrigendum to ``Some Liouville theorems for H\'enon type elliptic equations'' [J. Funct. Anal. \textbf{262} (4) (2012) 1705--1727] [MR2873856]. J. Funct. Anal. \textbf{263} (2012), no. 6, 1766--1768. \bibitem{Ya-15} J. Yang, Fractional Sobolev--Hardy inequality in $\mathbb{R}^N$. Nonlinear Anal. \textbf{119} (2015), 179--185. \end{thebibliography} \end{document}
\begin{document} \title{Generalised Fibonacci sequences constructed from balanced words} \begin{abstract} We study growth rates of generalised Fibonacci sequences of a particular structure. These sequences are constructed by choosing two real numbers for the first two terms and always taking the next term to be either the sum or the difference of the two preceding terms, where the pluses and minuses follow a certain pattern. In 2012, McLellan proved that if the pluses and minuses follow a periodic pattern and $G_n$ is the $n$th term of the resulting generalised Fibonacci sequence, then \begin{equation*} \lim_{n\rightarrow\infty}|G_n|^{1/n} \end{equation*} exists. We extend her results to recurrences of the form $G_{m+2} = \alpha_m G_{m+1} \pm G_{m}$ when the choices of pluses and minuses, and of the $\alpha_m$, follow a balanced-word-type pattern. \end{abstract} \keywords{Fibonacci sequences; Balanced words; matrices} \section{Introduction} The Fibonacci sequence, recursively defined by $f_1=f_2=1$ and $f_n=f_{n-1}+f_{n-2}$ for all $n\geq 3$, has been generalised in several ways. In 2000, Divakar Viswanath studied random Fibonacci sequences given by $t_1=t_2=1$ and $t_n=\pm t_{n-1}\pm t_{n-2}$ for all $n\geq 3$. Here each $\pm$ is chosen to be $+$ or $-$ with probability $1/2$, independently of all other choices. Viswanath proved that \begin{equation*} \lim_{n\rightarrow\infty}\sqrt[n]{|t_n|}=1.13198824\dots \end{equation*} with probability $1$, where the exact value of the above limit is given as the exponential of an integral involving a measure defined on Stern–Brocot intervals \cite{viswanath}. Almost nothing is known about this constant, however, not even whether it is irrational. One of the key ideas in his proof was to study random Fibonacci sequences by using products of matrices. More specifically, let \begin{equation*} A:=\begin{bmatrix} 0 & 1\\ 1 & 1 \end{bmatrix} \text{ and } B:=\begin{bmatrix} 0 & 1\\ 1 & -1 \end{bmatrix}.
\end{equation*} Then for all $n\in\mathbb{N}$ the $(n+1)$th and $(n+2)$th terms of a random Fibonacci sequence satisfy \begin{equation*} [1,1]Q_n=[G_{n+1},G_{n+2}] \end{equation*} where $Q_n$ is a product of $n$ matrices, each equal to $A$ or $B$, in which the pattern of $A$s and $B$s reflects the pattern of pluses and minuses generating the random Fibonacci sequence in question. In 2006, Jeffrey McGowan and Eran Makover used the formalism of trees to evaluate the growth of the average value of the $n$th term of a random Fibonacci sequence \cite{eran}, which by Jensen's inequality is larger than Viswanath's constant. More precisely, they proved that \begin{equation*} 1.12095\leq\sqrt[n]{E(|t_n|)}\leq 1.23375 \end{equation*} where $E(|t_n|)$ is the expected value of the $n$th term of the sequence. In 2007, Rittaud \cite{rittaud} improved this result and obtained \begin{equation*} \lim_{n\rightarrow\infty}\sqrt[n]{E(|t_n|)}=\alpha-1\approx 1.20556943\ldots \end{equation*} where $\alpha$ is the only real root of $f(x)=x^3-2x^2-1$. In 2010, Janvresse, Rittaud, and De La Rue generalised Viswanath's result to random Fibonacci sequences involving different coefficients and probabilities of the choices of pluses and minuses \cite{janvresse}. In 2012, Karyn McLellan used Viswanath's idea of representing random Fibonacci sequences as matrix products with the matrices $A$ and $B$ as factors to study sequences that begin with two real numbers, with the next term always being either the sum or the difference of the two preceding terms, but where the pattern of the pluses and minuses is periodic instead of random \cite{mclellan}. McLellan determined the growth rate of any such sequence, showing that Viswanath's limit still exists, albeit evaluating to different values depending on the particular sequence in question. She used these growth rates to provide an alternative method of calculation for Viswanath's constant in the random case.
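In the periodic case, the growth rate can be computed directly from the matrix formalism: if the pattern repeats with period $k$, the limit $\lim|G_n|^{1/n}$ is governed by the spectral radius of the product of the $k$ matrices in one period, taken to the power $1/k$ (for generic initial data). The following sketch, using the matrices $A$ and $B$ defined above, illustrates this; the helper name `periodic_growth_rate` is ours:

```python
import numpy as np

# Sign matrices from the matrix-product encoding: '+' appends a sum step, '-' a difference step.
A = np.array([[0, 1], [1, 1]])   # t_{n+2} = t_{n+1} + t_n
B = np.array([[0, 1], [1, -1]])  # t_{n+2} = t_{n+1} - t_n

def periodic_growth_rate(pattern: str) -> float:
    """Per-step growth rate for a periodic +/- pattern: the spectral radius of
    the product over one period, raised to the power 1/period (a sketch of the
    periodic case; valid for generic initial vectors)."""
    M = np.eye(2, dtype=int)
    for sign in pattern:
        M = M @ (A if sign == '+' else B)
    rho = max(abs(np.linalg.eigvals(M)))
    return rho ** (1.0 / len(pattern))

# The all-plus pattern recovers the golden ratio, the ordinary Fibonacci growth rate.
print(periodic_growth_rate('+'))  # ~1.618...
```

For patterns whose period matrix has all eigenvalues on the unit circle (for instance `'++-'`), the computed rate is $1$, reflecting sequences with no exponential growth.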
In 2018, the authors, in \cite{hare}, extended Rittaud's results and determined the probability that a random infinite walk down the tree contains no $(1,1)$ pairs after the initial root. We also determined tight upper and lower bounds on the number of coprime $(a,b)$ pairs at any given depth in the tree for any coprime pair $(a,b)$. In this paper we consider a more general model. Starting with $G_1$ and $G_2$ as any real numbers, consider the recurrence $G_{m+2} = \alpha_m G_{m+1} \pm G_{m}$ where the $\alpha_m$ are taken from a finite set. This can be modeled by matrix multiplication as \[ [G_{m+1},G_{m+2}] = [G_{m} , G_{m+1}] \begin{bmatrix} 0 & \pm 1 \\ 1 & \alpha_m \end{bmatrix}. \] Here we extend McLellan's results and show that Viswanath's limit exists if the pattern of matrix multiplications generating the sequence $\left(G_n\right)_n$ follows certain balanced word patterns. Recall that \begin{definition} A \textit{balanced word} or \textit{Sturmian word} $w$ is an infinite word over a two letter alphabet $\{a,b\}$ such that, for any two subwords from $w$ of the same length, the number of letters that are $a$ in each of these two subwords will differ by at most $1$. \end{definition} We consider the following construction of a balanced or Sturmian word throughout this paper. There exists a sequence of nonnegative integers $q_0,q_1,q_2,\ldots$ with $q_i>0$ for all $i>0$ such that the sequence of words $\{s_n\}_{n\geq 0}$ constructed from $s_0=b$, $s_1=a$, and $s_{n+1}=s_n^{q_{n-1}}s_{n-1}$ for all $n\in\mathbb{N}$ converges to $w$. Sturmian or balanced words have been studied extensively, and two relevant references are Allouche and Shallit \cite{allouche} and de Luca \cite{deLuca}. Instead of having the pluses and minuses follow a periodic pattern as studied by McLellan, we will have them follow the pattern of an infinite balanced word.
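This construction is easy to experiment with. The sketch below is ours, with an arbitrary choice of the $q_i$: it generates the words $s_n$ and verifies the balance property on small subword lengths.

```python
# Build the words s_0 = b, s_1 = a, s_{n+1} = s_n^{q_{n-1}} s_{n-1}, and check
# balancedness: any two subwords of equal length have a-counts differing by <= 1.
def standard_words(q, steps):
    """q[0], q[1], ... play the roles of q_0, q_1, ...; returns s_0, ..., s_{steps+1}."""
    s = ["b", "a"]
    for n in range(1, steps + 1):
        s.append(s[n] * q[n - 1] + s[n - 1])
    return s

def is_balanced(w, max_len=8):
    for length in range(1, max_len + 1):
        counts = [w[i:i + length].count("a") for i in range(len(w) - length + 1)]
        if max(counts) - min(counts) > 1:
            return False
    return True

words = standard_words([1, 3, 7, 15], 4)
print(words[2], words[3])       # ab abababa
print(is_balanced(words[-1]))   # True
```

Each $s_n$ is a prefix of the limit word $w$, so checking longer and longer words gives increasingly long balanced prefixes of $w$.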
\begin{notation}\label{P_1P_2} Let $v\geq 1$, and let \[ A_i := \begin{bmatrix} 0 & \epsilon_i \\ 1 & \alpha_i \end{bmatrix} \] for $i = 1, 2, \dots, v$, where $\epsilon_i \in \{-1, 1\}$ and $\alpha_i \in \mathbb{Z}$. We note that each $A_i$ has determinant $\pm 1$. Let $P_1$ and $P_2$ be products of these matrices of length $k_1$ and $k_2$ respectively, allowing multiplicity. That is \begin{equation*} P_j:=\begin{bmatrix} a_j & b_j\\ c_j & d_j \end{bmatrix} = A_{j,1} A_{j,2} \dots A_{j,k_j}. \end{equation*} We further require for $j = 1, 2$ that \begin{equation}\label{inequalitiesentries} b_j,c_j\neq 0,\left|d_j\right|\geq 2, \left|a_j\right|\leq\left|b_j\right|\leq\left|d_j\right|\text{ and }\left|a_j\right|\leq\left|c_j\right|\leq\left|d_j\right|. \end{equation} For a sequence of positive integers $(q_m)_{m\in\mathbb{N}}$ define $P_m$, $k_m$, and $A_{m, j_1}, \dots, A_{m, j_{k_m}}$ inductively as \begin{align*} P_{m+2} & := P_{m+1}^{q_m} P_m \\ & := \underbrace{\underbrace{A_{m+1, j_1} \dots A_{m+1, j_{k_{m+1}}}}_{P_{m+1}} \dots \underbrace{A_{m+1, j_1} \dots A_{m+1, j_{k_{m+1}}}}_{P_{m+1}}}_{q_m \text{ times}} \underbrace{A_{m, j_1} \dots A_{m, j_{k_{m}}}}_{P_{m}} \\ & := A_{m+2, j_1} \dots A_{m+2, j_{k_{m+2}}} \\ & := \begin{bmatrix} a_{m+2}& b_{m+2}\\ c_{m+2} & d_{m+2} \end{bmatrix}. \end{align*} Notice that the sequence $\left(P_m\right)_m$ is a standard Sturmian word (or balanced word) on the alphabet $\{P_1,P_2\}$. We observe for all $m\geq 3$ and $\ell\geq 1$ that $P_{m+\ell}$ always starts with $P_m$, which is a product of $k_m$ matrices. Thus, whenever $n\leq k_m$, the first $n$ matrices of $P_{m+\ell}$ are independent of the choice of $\ell\in\mathbb{N}$. With this observation, we define \begin{equation*} Q_n:= A_{m, 1} \dots A_{m, n} := \begin{bmatrix} e_n & f_n\\ g_n & h_n \end{bmatrix} \end{equation*} for $k_m \geq n$. Notice that so long as $k_m\geq n$, $Q_n$ is independent of the choice of $m$.
Finally, let $G_1$ and $G_2$ be any two real numbers and for every $n\in\mathbb{N}$ let $G_{n+2}=\alpha_{m,n}G_{n+1}+\epsilon_{m,n}G_n$ where \begin{equation*} A_{m,n} := \begin{bmatrix} 0 & \epsilon_{m,n}\\ 1 & \alpha_{m,n} \end{bmatrix}. \end{equation*} Notice that for all $n\in\mathbb{N}$ we have \begin{equation*} [G_1,G_2]Q_n=[G_{n+1},G_{n+2}]. \end{equation*} \end{notation} We prove the following: \begin{theorem}\label{bigthm} Let $q_m$, $P_m$, $a_m$, $c_m$, $Q_n$, and $G_n$ be as defined in Notation \ref{P_1P_2}. Then $\lim_{m\rightarrow\infty}\frac{a_m}{c_m}=\lim_{m\rightarrow\infty}\frac{b_m}{d_m}$ exists, is positive, and is either $1$ or irrational. Let this limit be denoted by $M$. \begin{enumerate} \item If $M=1$, then $|G_n|$ grows at most linearly, i.e. there exists $C>0$ such that \begin{equation*} |G_n|<Cn \end{equation*} for all $n\in\mathbb{N}$. \item If $M$ is irrational and $G_1\neq\frac{-G_2}{M}$, then \begin{equation}\label{Gnlimit} \lim_{n\rightarrow\infty}|G_n|^{1/n} \end{equation} exists with this limit being greater than $1$. \end{enumerate} \end{theorem} The proof of Theorem \ref{bigthm} is divided among the subsequent sections of the paper as follows. In Section \ref{sec2} we prove some necessary technical lemmas on the entries in the matrices $\left(P_m\right)_{m\in\mathbb{N}}$. We then divide the proof into two cases. If $\left|b_i\right|=\left|c_i\right|=\left|d_i\right|-1=\left|a_i\right|+1$ for all sufficiently large $i$, then it turns out that $M=1$ and we get the case that $\left|G_n\right|$ grows at most linearly. This is proved in Section \ref{sec3}. Otherwise $M$ is irrational and $\left|G_n\right|$ grows exponentially, which is dealt with in Section \ref{sec4}. For the rest of the present section, though, we include some remarks on Theorem \ref{bigthm} and illustrate them with an example.
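The construction in Notation \ref{P_1P_2} can also be experimented with numerically. The sketch below is our own illustration; the particular matrices ($P_1=A^2$, $P_2=B^2$ with $A$, $B$ as in the introduction) and the values $q_1=3$, $q_2=7$ are ad hoc choices. It builds $P_3$ and $P_4$ as flattened lists of factors, checks $P_{m+2}=P_{m+1}^{q_m}P_m$ as matrices, and verifies the identity $[G_1,G_2]Q_n=[G_{n+1},G_{n+2}]$.

```python
# Sketch of the inductive construction P_{m+2} = P_{m+1}^{q_m} P_m.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def prod(ms):
    out = [[1, 0], [0, 1]]
    for M in ms:
        out = matmul(out, M)
    return out

A = [[0, 1], [1, 1]]    # a plus step:  G_{n+2} = G_{n+1} + G_n
B = [[0, 1], [1, -1]]   # a minus step: G_{n+2} = G_n - G_{n+1}

factors = {1: [A, A], 2: [B, B]}     # factor lists for P_1 = A^2 and P_2 = B^2
q = {1: 3, 2: 7}
for m in (1, 2):
    factors[m + 2] = factors[m + 1] * q[m] + factors[m]

P3 = prod(factors[3])                # equals B^6 A^2

# The factor list of P_3 multiplies out to P_2^{q_1} P_1:
assert P3 == prod([prod(factors[2])] * q[1] + [prod(factors[1])])

# [G_1, G_2] Q_n = [G_{n+1}, G_{n+2}] along the factor list of P_4:
G = [1, 1]
for M in factors[4]:
    G.append(M[0][1] * G[-2] + M[1][1] * G[-1])
n = len(factors[4])
Qn = prod(factors[4])
row = [G[0] * Qn[0][0] + G[1] * Qn[1][0], G[0] * Qn[0][1] + G[1] * Qn[1][1]]
print(P3, row == [G[n], G[n + 1]])   # [[-3, -11], [5, 18]] True
```

With $q_1=3$ and $q_2=7$ these are the first steps of the concrete $1/\pi$ example discussed later in this section.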
\begin{remark} If we restrict to $G_1,G_2\in\mathbb{Z}$, or even just to $G_1,G_2\in\mathbb{Q}$, then we can ignore the condition that $G_1\neq\frac{-G_2}{M}$ in Theorem \ref{bigthm} since $M$ is irrational. \end{remark} \begin{remark}\label{alphal} Let $0<\alpha<1$ be irrational. If in Notation \ref{P_1P_2} we pick the sequence of positive integers $q_1,q_2,q_3,\ldots $ so that the continued fraction expansion of $\alpha$ can be represented as $[0;q_1,q_2,q_3,\ldots]$, then we have \begin{equation}\label{alphalimit} \lim_{m\rightarrow\infty}\frac{\text{number of }P_2\text{s in }P_m}{\text{number of }P_1\text{s and }P_2\text{s in }P_m}=\alpha. \end{equation} See, for example, \cite{deLuca}. \end{remark} We give some examples of $A_1$, $A_2$, $P_1$, and $P_2$ that satisfy Notation \ref{P_1P_2} with $v=2$. \begin{remark} For all $a,b,c,d\in\mathbb{R}$ observe that \begin{equation*} \begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix} \cdot \begin{bmatrix} a & b\\ c & d \end{bmatrix} \cdot \begin{bmatrix} 0 & -1\\ 1 & 0 \end{bmatrix} = \begin{bmatrix} d & -c\\ -b & a \end{bmatrix}. \end{equation*} It follows that if we replace the inequalities in \eqref{inequalitiesentries} with the inequalities \begin{equation*} 0\leq\frac{|d_i|}{|b_i|},\frac{|c_i|}{|a_i|},\frac{|d_i|}{|c_i|},\frac{|b_i|}{|a_i|}\leq 1, \end{equation*} then Theorem \ref{bigthm} still holds. \end{remark} \begin{example}\label{entriessize} Let \begin{equation*} A:=\begin{bmatrix} 0 & 1\\ 1 & 1 \end{bmatrix} \text{ and } B:=\begin{bmatrix} 0 & 1\\ 1 & -1 \end{bmatrix}. \end{equation*} Let $P_1$ and $P_2$ be products of matrices of the form $A^j$ and $B^k$ where $j,k\geq 2$. Then the matrices $A$, $B$, $P_1$, and $P_2$ can serve as the matrices $A_1$, $A_2$, $P_1$, and $P_2$ respectively in Notation \ref{P_1P_2} with $v=2$, with $P_1$ and $P_2$ satisfying \eqref{inequalitiesentries}.
\end{example} \begin{remark} In Theorem \ref{bigthm} it is possible, when the sequence $\left(G_n\right)_n$ grows at most linearly, that it contains a bounded infinite subsequence of terms. For example, let the matrices $A$ and $B$ be as in Example \ref{entriessize} and let $P_1=P_2=A^3B^3$. Then for all $k\in\mathbb{N}$ we can verify that \begin{equation*} Q_{6k+1}=(A^3B^3)^kA=(-1)^k\begin{bmatrix} 4k & 1\\ 4k+1 & 1 \end{bmatrix}. \end{equation*} We can thus deduce that $|G_{6k+3}|=|G_1+G_2|$ for all $k\in\mathbb{N}$. It is routine to check that the entries in $Q_n$ in this example grow at most linearly, and so any corresponding Fibonacci sequence will grow at most linearly. \end{remark} We prove in the next section that the matrices in Example \ref{entriessize} satisfy Notation \ref{P_1P_2}, so that Theorem \ref{bigthm} applies to them. We also give a concrete example of matrices $P_1$ and $P_2$ and a sequence of positive integers $(q_m)_{m\in\mathbb{N}}$ for illustration. \begin{example} Let $P_1=A^2$ and $P_2=B^2$. Consider the number $1/\pi$, which has continued fraction expansion $[0;3,7,15,1,\ldots]$. Let our sequence of positive integers $(q_m)_{m\in\mathbb{N}}$ be these partial quotients, so that $q_1=3$, $q_2=7$, $q_3=15$, $q_4=1,\ldots$ Then $P_1$ and $P_2$ satisfy Notation \ref{P_1P_2} and \eqref{inequalitiesentries}. Then we have $P_3=B^6A^2$, $P_4=(B^6A^2)^7B^2,\ldots$ Let $G_1=G_2=1$ in Theorem \ref{bigthm}. Then the corresponding Fibonacci sequence starts out as follows.
\begin{align*} G_3&=G_1-G_2=1-1=0\\ G_4&=G_2-G_3=1-0=1\\ G_5&=G_3-G_4=0-1=-1\\ G_6&=G_4-G_5=1-(-1)=2\\ G_7&=G_5-G_6=-1-2=-3\\ G_8&=G_6-G_7=2-(-3)=5\\ G_9&=G_7+G_8=-3+5=2\\ G_{10}&=G_8+G_9=5+2=7\\ G_{11}&=G_9-G_{10}=2-7=-5\\ G_{12}&=G_{10}-G_{11}=7-(-5)=12\\ \ldots& \end{align*} Also, we have \begin{equation*} P_3=\begin{bmatrix} -3 & -11\\ 5 & 18 \end{bmatrix} \text{ and } P_4=\begin{bmatrix} 88364872 & -210892211\\ -144059117 & 343812479 \end{bmatrix} \end{equation*} and notice that \begin{equation*} \frac{1}{2}<\frac{3}{5},\frac{11}{18},\frac{88364872}{144059117},\frac{210892211}{343812479}<\frac{2}{3}. \end{equation*} Then by induction on $n\in\mathbb{N}$ using Lemma \ref{positive} we have that if \begin{equation*} P_n=\begin{bmatrix} a_n & b_n\\ c_n & d_n \end{bmatrix}, \end{equation*} then \begin{equation*} \frac{1}{2}<\frac{\left|a_n\right|}{\left|c_n\right|},\frac{\left|b_n\right|}{\left|d_n\right|}<\frac{2}{3} \end{equation*} for all $n\geq 3$. Using Lemma \ref{limitsinfinity}, we therefore have that $|b_m|=|c_m|=|d_m|-1=|a_m|+1$ cannot hold for sufficiently large $m$, so that from the work in Section \ref{sec4} the sequence $\left(G_n\right)_n$ grows exponentially, with the limit in \eqref{Gnlimit} existing and greater than $1$. As well, we can deduce from Remark \ref{alphal} that the fraction of $-$'s creating the Fibonacci sequence tends to $1/\pi$. \end{example} \section{Preliminary Results}\label{sec2} To prove Theorem \ref{bigthm} we first require some preliminary lemmas. \begin{lemma}\label{positive} Suppose we have \begin{equation*} \begin{bmatrix} a_1 & b_1\\ c_1 & d_1 \end{bmatrix} \cdot \begin{bmatrix} a_2 & b_2\\ c_2 & d_2 \end{bmatrix} = \begin{bmatrix} a_3 & b_3\\ c_3 & d_3 \end{bmatrix} \end{equation*} where $a_i,b_i,c_i,d_i\in\mathbb{Z}$ for $i=1,2,3$ with $c_1,d_1,b_2,d_2$ nonzero and the determinants of all the matrices of absolute value $1$.
Suppose that $|d_1|\geq|c_1|$, $|b_1|\geq|a_1|$, $|d_2|\geq|b_2|$, $|c_2|\geq|a_2|$ and that \begin{equation*} \frac{r_1}{r_2}\leq\frac{|a_1|}{|c_1|},\frac{|b_1|}{|d_1|}\leq\frac{r_3}{r_4} \end{equation*} where $r_i\in\mathbb{Z}$ for $i=1,2,3,4$ with $|d_1|>r_2$ and $|d_1|>r_4$. Then \begin{equation*} \frac{r_1}{r_2}\leq\frac{|a_3|}{|c_3|},\frac{|b_3|}{|d_3|}\leq\frac{r_3}{r_4}. \end{equation*} Also, suppose that \begin{equation*} \frac{r_5}{r_6}\leq\frac{|a_2|}{|b_2|},\frac{|c_2|}{|d_2|}\leq\frac{r_7}{r_8} \end{equation*} where $r_i\in\mathbb{Z}\backslash\{0\}$ for $i=5,6,7,8$ with $|d_2|>r_6$ and $|d_2|>r_8$. Then \begin{equation*} \frac{r_5}{r_6}\leq\frac{|a_3|}{|b_3|},\frac{|c_3|}{|d_3|}\leq\frac{r_7}{r_8}. \end{equation*} \end{lemma} \begin{proof} We will assume that $\frac{c_1b_2}{d_1d_2}<0$. The case of $\frac{c_1b_2}{d_1d_2}>0$ follows similarly. Since $\frac{c_1b_2}{d_1d_2}<0$, $c_1b_2$ and $d_1d_2$ have opposite signs. Thus $|d_3|=|d_1||d_2|-|c_1||b_2|$ since $|d_1|\geq |c_1|$ and $|d_2|\geq |b_2|$. Since the determinants of the matrices have absolute value $1$, we can argue similarly that $|b_3|=|b_1||d_2|-|a_1||b_2|$, $|c_3|=|d_1||c_2|-|c_1||a_2|$, and $|a_3|=|b_1||c_2|-|a_1||a_2|$. We have \begin{align*} r_2|a_3|-r_1|c_3|&=r_2|b_1||c_2|-r_2|a_1||a_2|-r_1|d_1||c_2|+r_1|c_1||a_2|\\ &=(r_2|b_1|-r_1|d_1|)|c_2|-(r_2|a_1|-r_1|c_1|)|a_2|. \end{align*} Since the determinants of the matrices have absolute value $1$, we have $|b_1||c_1|\geq |a_1||d_1|-1$. Thus we have \begin{equation*} \frac{|b_1|}{|d_1|} \geq\frac{|a_1|}{|c_1|}-\frac{1}{|c_1||d_1|} \text{\ \ \ and\ \ \ } \frac{|b_1|}{|d_1|}-\frac{r_1}{r_2} \geq\frac{|a_1|}{|c_1|}-\frac{r_1}{r_2}-\frac{1}{|c_1||d_1|}. \end{equation*} Since $|d_1|\geq|c_1|$ and $|d_1|>r_2$, we have \begin{equation*} r_2|b_1|-r_1|d_1|\geq r_2|a_1|-r_1|c_1|-\frac{r_2}{|d_1|}>r_2|a_1|-r_1|c_1|-1. \end{equation*} Since $r_2|b_1|-r_1|d_1|,r_2|a_1|-r_1|c_1|\in\mathbb{Z}$, we have $r_2|b_1|-r_1|d_1|\geq r_2|a_1|-r_1|c_1|\geq 0$.
Since $|c_2|\geq|a_2|$, we thus have $r_2|a_3|-r_1|c_3|\geq 0$. Thus $\frac{r_1}{r_2}\leq\frac{|a_3|}{|c_3|}$. Next, we have \begin{align*} r_3|c_3|-r_4|a_3|&=r_3(|d_1||c_2|-|c_1||a_2|)-r_4(|b_1||c_2|-|a_1||a_2|)\\ &=(r_3|d_1|-r_4|b_1|)|c_2|-(r_3|c_1|-r_4|a_1|)|a_2|. \end{align*} Since the determinants of the matrices have absolute value $1$, we have $|a_1||d_1|\leq |b_1||c_1|+1$. Thus we have \begin{equation*} \frac{|b_1|}{|d_1|} \leq\frac{|a_1|}{|c_1|}+\frac{1}{|c_1||d_1|} \text{\ \ \ and\ \ \ } \frac{r_3}{r_4}-\frac{|b_1|}{|d_1|} \geq\frac{r_3}{r_4}-\frac{|a_1|}{|c_1|}-\frac{1}{|c_1||d_1|}. \end{equation*} Since $|d_1|\geq|c_1|$ and $|d_1|>r_4$, we have \begin{equation*} r_3|d_1|-r_4|b_1|\geq r_3|c_1|-r_4|a_1|-\frac{r_4}{|d_1|}>r_3|c_1|-r_4|a_1|-1. \end{equation*} Since $r_3|d_1|-r_4|b_1|,r_3|c_1|-r_4|a_1|\in\mathbb{Z}$, we have $r_3|d_1|-r_4|b_1|\geq r_3|c_1|-r_4|a_1|\geq 0$. Since $|c_2|\geq |a_2|$, we thus have $r_3|c_3|-r_4|a_3|\geq 0$. Thus $\frac{|a_3|}{|c_3|}\leq\frac{r_3}{r_4}$. The rest of the inequalities follow similarly. \end{proof} We now prove that the matrices in Example \ref{entriessize} satisfy Notation \ref{P_1P_2}, so that Theorem \ref{bigthm} applies to them. \begin{proof} First, $A$ and $B$ consist of integer entries and both matrices have determinant $-1$. We can prove by induction on $j,k\in\mathbb{N}$ that \begin{equation*} A^j= \begin{bmatrix} F_{j-1} & F_j\\ F_j & F_{j+1} \end{bmatrix} \text{\ \ \ and \ \ \ } B^k=(-1)^k \begin{bmatrix} F_{k-1} & -F_k\\ -F_k & F_{k+1} \end{bmatrix} \end{equation*} where $F_k$ is the $k$th Fibonacci number, with $F_0=0$ and $F_1=F_2=1$. Let $P_1$ and $P_2$ be products of matrices $A^j$ and $B^k$ as in the example. Without loss of generality, it is enough to prove that $P_1$ satisfies the conditions on the matrix $P_1$ in Notation \ref{P_1P_2}. We prove this by induction on the number of matrices of the form $A^j$ and $B^k$ in the product.
For the base cases of $P_1=A^k$ and $P_1=B^k$, we have \begin{equation*} 0\leq\frac{F_{k-1}}{F_k},\frac{F_k}{F_{k+1}}\leq 1, \end{equation*} $F_{k+1}\geq 2$, and $F_k\neq 0$. Suppose the claim holds for some product matrix $P$ \begin{equation*} P= \begin{bmatrix} a & b\\ c & d \end{bmatrix}. \end{equation*} First we prove it holds for $PA^j$ where $j\geq 2$. By induction, we have $|d|\geq 2$ and $c\neq 0$. Thus $\frac{cF_{j-1}}{dF_j}\neq 0$. First assume that $\frac{cF_{j-1}}{dF_j}>0$. By induction, we have all of the inequalities holding in Lemma \ref{positive} with $r_1=0$ and $r_2=r_3=r_4=1$. Lemma \ref{positive} thus gives us all of the desired inequalities holding for $PA^j$ with the observations that $|c|F_j+|d|F_{j+1}\geq 2$, $|c|F_{j-1}+|d|F_j\neq 0$, and $|a|F_j+|b|F_{j+1}\neq 0$. Now assume that $\frac{cF_{j-1}}{dF_j}<0$. By induction, we have all of the inequalities holding in Lemma \ref{positive} with $r_1=r_5=0$ and $r_2=r_3=r_4=r_6=r_7=r_8=1$. Lemma \ref{positive} thus gives us all of the desired inequalities holding for $PA^j$ with the observations that $|d|F_{j+1}-|c|F_j\geq |d|\geq 2$, $|d|F_j-|c|F_{j-1}\neq 0$, and $|b|F_{j+1}-|a|F_j\neq 0$. The case of $PB^k$ is similar. \end{proof} \begin{lemma}\label{ratio1} Consider a matrix $P$ \begin{equation*} P= \begin{bmatrix} a & b\\ c & d \end{bmatrix} \end{equation*} where $1\leq |a|<|c|<|d|$, $|a|<|b|<|d|$, and $|\det P|=|ad-bc|=1$. \begin{enumerate} \item Suppose there do not exist positive integers $r_1,r_2,r_3,r_4$ with $r_3<r_4$ such that \begin{equation*} \frac{r_1}{r_2}\leq\frac{|a|}{|c|},\frac{|b|}{|d|}\leq\frac{r_3}{r_4} \end{equation*} and $|d|>r_2,r_4$. Then $|b|=|c|=|d|-1=|a|+1$. \label{case1} \item Suppose there do not exist positive integers $r_1,r_2,r_3,r_4$ with $r_3<r_4$ such that \begin{equation*} \frac{r_1}{r_2}\leq\frac{|a|}{|b|},\frac{|c|}{|d|}\leq\frac{r_3}{r_4} \end{equation*} and $|d|>r_2,r_4$. Then $|b|=|c|=|d|-1=|a|+1$.
\label{case2} \end{enumerate} \end{lemma} \begin{proof} We prove \ref{case1}. Case \ref{case2} follows by taking the transpose of the matrix $P$ and using \ref{case1}. Suppose $|c|\geq |a|+2$. Then \begin{equation*} \frac{1}{|d|-1}\leq\frac{1}{|c|}\leq\frac{|a|}{|c|}\leq\frac{|c|-2}{|c|}<\frac{|c|-1}{|c|}. \end{equation*} Also, we have \begin{equation*} \frac{|b|}{|d|}\leq\left|\frac{b}{d}-\frac{a}{c}\right|+\frac{|a|}{|c|} =\frac{1}{|cd|}+\frac{|a|}{|c|} \leq\frac{1}{|cd|}+\frac{|c|-2}{|c|} \leq\frac{|c|-2+\frac{1}{|d|}}{|c|} <\frac{|c|-1}{|c|} \end{equation*} with the last inequality following from $|d|\geq 3$. Also, we have \begin{equation*} \frac{|b|}{|d|}\geq\frac{2}{|d|}>\frac{1}{|d|-1} \end{equation*} since $|b|\geq 2$ and $|d|\geq 3$. Thus letting $r_1=1$, $r_2=|d|-1$, $r_3=|c|-1$, and $r_4=|c|$, we obtain the existence of four positive integers with the properties as stated in the lemma. Thus we may assume that $|c|=|a|+1$. By similar reasoning, if $|d|\geq |b|+2$, then we can see that $r_1=1$, $r_2=|d|-1$, $r_3=|d|-2$, and $r_4=|d|-1$ also satisfy the properties as stated in the lemma. Thus we may also assume that $|d|=|b|+1$. We can deduce that $|a||d|-|b||c|=\pm 1$ from $|ad-bc|=1$. Thus we have \begin{equation*} \pm 1=|a|(|b|+1)-|b|(|a|+1)=|a|-|b|. \end{equation*} We know, however, that $|a|<|b|$. We therefore have that $|b|=|a|+1$ and so we must also have that $|b|=|c|$. \end{proof} \begin{lemma}\label{samesigns} Consider a matrix $P$ \begin{equation*} P= \begin{bmatrix} a & b\\ c & d \end{bmatrix}, \end{equation*} where $|\det P|=|ad-bc|=1$ and $a,b,c,d\in\mathbb{Z}\backslash\{0\}$. We have $\frac{a}{c}>0$ if and only if $\frac{b}{d}>0$. \end{lemma} \begin{proof} It suffices to prove that $\frac{ad}{bc}>0$. Suppose for a contradiction that $\frac{ad}{bc}<0$. Then $ad$ and $bc$ have opposite signs. Also notice that $|ad|\geq 1$ and $|bc|\geq 1$. Then we have $|ad-bc|\geq 2$, a contradiction. The result follows.
\end{proof} \begin{lemma}\label{power} Consider a matrix $P$ \begin{equation*} P= \begin{bmatrix} a & b\\ c & d \end{bmatrix}, \end{equation*} where $a,b,c,d\in\mathbb{Z}$, $|c|,|d|\geq 2$, $b\neq 0$, $\det(P)=\pm 1$, and \begin{equation*} 0\leq\frac{|a|}{|b|},\frac{|c|}{|d|},\frac{|a|}{|c|},\frac{|b|}{|d|}\leq 1. \end{equation*} For all $i\in\mathbb{N}$, let \begin{equation*} P^i= \begin{bmatrix} a_i & b_i\\ c_i & d_i \end{bmatrix}. \end{equation*} For all $i\geq 2$, we have $|d_i|-|b_i|\geq|d_{i-1}|-|b_{i-1}|$, $|d_i|-|c_i|\geq|d_{i-1}|-|c_{i-1}|$, $|b_i|-|a_i|\geq(|b|-|a|)(|b_{i-1}|-|a_{i-1}|)$, and $|d_i|>|d_{i-1}|$. \end{lemma} \begin{proof} We can deduce that either $|d_i|=|d||d_{i-1}|-|c||b_{i-1}|$ and $|b_i|=|b||d_{i-1}|-|a||b_{i-1}|$ or $|d_i|=|d||d_{i-1}|+|c||b_{i-1}|$ and $|b_i|=|b||d_{i-1}|+|a||b_{i-1}|$. In the first case, we have \begin{align*} |d_i|-|b_i|&=|d||d_{i-1}|-|c||b_{i-1}|-(|b||d_{i-1}|-|a||b_{i-1}|)\\ &=(|d|-|b|)|d_{i-1}|-(|c|-|a|)|b_{i-1}|. \end{align*} Since $\det(P)=\pm 1$, we can deduce that $|a||d|\geq |b||c|-1$. Thus, we have $|a||d|-|a||c|\geq |b||c|-|a||c|-1$. Since $\det(P)=\pm 1$, we have $\gcd(|a|,|c|)=1$, and so since $|c|\geq 2$ and $|c|\geq |a|$, we have $|c|>|a|$ and $|a|\geq 1$. Dividing $|a||d|-|a||c|\geq |b||c|-|a||c|-1$ by $|a|$, and using $|c|\geq|a|$ and $|b|\geq|a|$, we have \begin{equation*} |d|-|c|\geq |b|-|a|-\frac{1}{|a|}>|b|-|a|-1. \end{equation*} It follows that $|d|-|b|\geq |c|-|a|$ since $a,b,c,d\in\mathbb{Z}$. Thus \begin{equation*} |d_i|-|b_i|\geq(|c|-|a|)(|d_{i-1}|-|b_{i-1}|)\geq |d_{i-1}|-|b_{i-1}|. \end{equation*} The other inequalities follow similarly. \end{proof} \begin{lemma}\label{limitsinfinity} For all $m\in\mathbb{N}$ let $q_m$, $P_m$, $a_m$, $b_m$, $c_m$, and $d_m$ be defined as in Notation \ref{P_1P_2}. We have \begin{equation*} \lim_{m\rightarrow\infty}|a_m|=\lim_{m\rightarrow\infty}|b_m|=\lim_{m\rightarrow\infty}|c_m|=\lim_{m\rightarrow\infty}|d_m|=\infty.
\end{equation*} \end{lemma} \begin{proof} By Lemma \ref{positive}, we have for all $m\in\mathbb{N}$ that $\min\{|a_m|,|b_m|,|c_m|,|d_m|\}=|a_m|$, $\max\{|a_m|,|b_m|,|c_m|,|d_m|\}=|d_m|$, and $|d_m|\geq 2$. By induction, for all $m\in\mathbb{N}$ we have $\det(P_m)=\pm 1$, i.e. $|a_md_m-b_mc_m|=1$. Thus for all $m\in\mathbb{N}$, we have $\gcd(|b_m|,|d_m|)=\gcd(|c_m|,|d_m|)=1$, so that $|c_m|<|d_m|$ and $|b_m|<|d_m|$ for all $m\in\mathbb{N}$. Also, for all $m\in\mathbb{N}$, let \begin{equation*} P_{m+1}^{q_m}= \begin{bmatrix} a_{m+1,q_m} & b_{m+1,q_m}\\ c_{m+1,q_m} & d_{m+1,q_m} \end{bmatrix}. \end{equation*} From Lemma \ref{power}, we can deduce that $|c_{m+1,q_m}|<|d_{m+1,q_m}|$ and $|b_{m+1,q_m}|<|d_{m+1,q_m}|$. Thus, for all $m\in\mathbb{N}$, we have by Lemma \ref{power} \begin{equation*} |d_{m+2}|\geq |d_{m+1,q_m}||d_{m}|-|c_{m+1,q_m}||b_{m}|>|d_m|(|d_{m}|-|c_{m}|)\geq |d_m|. \end{equation*} It follows that $\lim_{m\rightarrow\infty}|d_m|=\infty$. Also, we have \begin{equation*} |b_{m+2}|\geq |b_{m+1,q_m}||d_{m}|-|a_{m+1,q_m}||b_{m}|\geq|b_{m+1,q_m}|(|d_{m}|-|b_{m}|)\geq |b_{m+1,q_m}|. \end{equation*} By similar reasoning, we have $|b_{m+1,q_m}|\geq |b_{m+1}|$, and so for all $m\in\mathbb{N}$ we have $|b_{m+2}|\geq |b_{m+1}|$, with equality only if $|a_{m+1,q_m}|=|b_{m+1,q_m}|=1$ and $|d_{m}|=|b_{m}|+1$. If $|b_m|$ were bounded for all $m\in\mathbb{N}$, then for sufficiently large $m$ we would have $|d_{m}|=|b_{m}|+1$, and so $|d_m|$ would be bounded, a contradiction. Thus $|b_m|$ is not bounded and we have $\lim_{m\rightarrow\infty}|b_m|=\infty$. Finally, choose $M\in\mathbb{N}$ such that for all $m\geq M$ we have $|b_m|\geq 2$. Then for all $m\geq M$, we have $|b_m|>|a_m|$. By Lemma \ref{power}, we have $|b_{m+1,q_m}|-|a_{m+1,q_m}|\geq(|b_{m+1}|-|a_{m+1}|)^{q_m}\geq 1$. So for all $m\geq M$, we have \begin{align*} |a_{m+2}|&=|a_{m+1,q_m}a_{m}+b_{m+1,q_m}c_{m}|\\ &\geq |b_{m+1,q_m}||c_{m}|-|a_{m+1,q_m}||a_{m}|\\ &>(|b_{m+1,q_m}|-|a_{m+1,q_m}|)|a_{m}|\\ &\geq |a_m|.
\end{align*} Thus $\lim_{m\rightarrow\infty}|a_m|=\infty$. Since $|a_m|<|c_m|$ for all $m\in\mathbb{N}$, we also have $\lim_{m\rightarrow\infty}|c_m|=\infty$. \end{proof} Letting $a_m$, $b_m$, $c_m$, and $d_m$ be as defined in Notation \ref{P_1P_2}, Lemma \ref{limitsinfinity} implies that for sufficiently large $m$, we have $|d_m|>|b_m|>|a_m|>0$. \begin{remark} In Notation \ref{P_1P_2}, since $P_3=(P_2)^{q_1}P_1$, $P_2$ is $P_3$ truncated after a certain point. Thus, by reindexing the matrices $P_i$, $i\in\mathbb{N}$, we will assume for the rest of the paper that $P_1$ is $P_2$ truncated after a certain point. \end{remark} \begin{lemma}\label{Q_nproductP_n} Let $q_m$, $P_m$, and $Q_n$ be as defined in Notation \ref{P_1P_2}. Then there exist unique $2\leq m_1<m_2<...<m_l$ and $n_1,\ldots,n_l\in\mathbb{N}$ with the following properties: \begin{enumerate} \item for all $1\leq i\leq l$, we have $n_i\leq q_{m_i-1}$, \item for all $2\leq i\leq l$, if $n_i=q_{m_i-1}$, then $m_{i-1}+2\leq m_i$, \item $Q_n=(P_{m_l})^{n_l}(P_{m_{l-1}})^{n_{l-1}}...(P_{m_1})^{n_1}M_n$ where the matrix $M_n$ is a product of the string of $A_1$s, $A_2$s, $\ldots$, and $A_v$s in $P_2$ truncated after a certain point, unless $m_1=2$ and $n_1=q_1$, in which case $M_n$ is a product of the string of $A_1$s, $A_2$s, $\ldots$, and $A_v$s in $P_1$ truncated after a certain point. \end{enumerate} \end{lemma} \begin{proof} We prove this by strong induction on $n\in\mathbb{N}$. If $n<k_2$, then we have $Q_n=M_n$. Suppose $n\geq k_2$ and that the lemma holds true for all values less than $n$. Choose $m\in\mathbb{N}$ such that $k_{m+1}>n\geq k_m$ where $m\geq 2$. Then $P_m$ is $Q_n$ truncated after a certain point and $Q_n$ is $P_{m+1}$ truncated after a certain point.
Since $P_{m+1}=(P_m)^{q_{m-1}}P_{m-1}$, it follows that $Q_n=(P_m)^iR$ where $i$ and $R$ satisfy the following: \begin{enumerate} \item $i\leq q_{m-1}$ \item $R$ is a product of the string of $A_1$s, $A_2$s, $\ldots$, and $A_v$s in $P_m$ truncated after a certain point \item if $i=q_{m-1}$ and $m\geq 3$, then $R$ is a product of the string of $A_1$s, $A_2$s, $\ldots$, and $A_v$s in $P_{m-1}$ truncated after a certain point. \end{enumerate} Notice that if we replace $m$ by any other positive integer, say $m'$, and have $Q_n=\left(P_{m'}\right)^iR$ instead, then it follows that $m'<m$. But then if $i\leq q_{m-1}$, we have that $P_{m'}$ is $R$ truncated after a certain point. Thus any expression for $Q_n$ in (3) in the lemma must begin with $m_l=m$. Similarly the value of $i$ satisfying the above must be unique. If $m=2$ and $i<q_1$, then $R$ is $P_2$ truncated after a certain point, $R=M_n$, and the result follows. If $m=2$ and $i=q_1$, then $R$ is $P_1$ truncated after a certain point, $R=M_n$, and again the result follows. So assume that $m>2$. Then $R=Q_{n-ik_m}$ and so $Q_n=P_m^iQ_{n-ik_m}$ with $n-ik_m<k_m$, and if $i=q_{m-1}$, then $n-ik_m<k_{m-1}$. By induction, the result follows. \end{proof} We divide into two cases. Case $1$ assumes that for sufficiently large $m$, we have $|b_m|=|c_m|=|d_m|-1=|a_m|+1$. Case $2$ deals with all other cases. \section{The Linear Growth Case}\label{sec3} For this case, we may assume without loss of generality that $|b_m|=|c_m|=|d_m|-1=|a_m|+1$ for all $m\in\mathbb{N}$. By Lemma \ref{limitsinfinity}, we may also assume without loss of generality that $|a_m|\geq 1$ for all $m\in\mathbb{N}$. \begin{lemma}\label{nklinear} Let $P_m$ and $Q_n$ be as defined in Notation \ref{P_1P_2}.
Let $n_1,n_2,\ldots,n_j,\ldots$ be the list of natural numbers such that for each $n_j$, there exist $2\leq m_1<m_2<...<m_l$ and $n_{j,1},\ldots,n_{j,l}\in\mathbb{N}$ with the following properties: \begin{enumerate} \item for all $1\leq i\leq l$, we have $n_{j,i}\leq q_{m_i-1}$ \item for all $2\leq i\leq l$, if $n_{j,i}=q_{m_i-1}$, then $m_{i-1}+2\leq m_i$ \item $Q_{n_j}=(P_{m_l})^{n_{j,l}}(P_{m_{l-1}})^{n_{j,l-1}}...(P_{m_1})^{n_{j,1}}$. \end{enumerate} We have \begin{equation*} |g_{n_j}|\leq n_j\max\{|c_1|,|c_2|\} \end{equation*} for all $j\in\mathbb{N}$. \end{lemma} \begin{proof} We prove this by induction on $k\in\mathbb{N}$. First we observe that Lemma \ref{positive} implies that $|e_{n_k}|\leq|f_{n_k}|\leq |h_{n_k}|$ and $|e_{n_k}|\leq|g_{n_k}|\leq |h_{n_k}|$ for all $k\in\mathbb{N}$. Let $k\in\mathbb{N}$. If there exist $r_1,r_2,r_3,r_4\in\mathbb{N}$ with $r_3<r_4$ such that \begin{equation*} \frac{r_1}{r_2}\leq\frac{|e_{n_k}|}{|g_{n_k}|},\frac{|f_{n_k}|}{|h_{n_k}|}\leq\frac{r_3}{r_4} \end{equation*} and $|h_{n_k}|>r_2,r_4$, then we can use Lemma \ref{positive} to deduce that for all sufficiently large $m\in\mathbb{N}$, we have \begin{equation*} \frac{r_1}{r_2}\leq\frac{|a_m|}{|c_m|},\frac{|b_m|}{|d_m|}\leq\frac{r_3}{r_4}. \end{equation*} But this cannot be because \begin{equation*} \lim_{m\rightarrow\infty}\frac{|a_m|}{|c_m|}=\lim_{m\rightarrow\infty}\frac{|b_m|}{|d_m|}=1. \end{equation*} We can therefore see with Lemma \ref{ratio1} that for all $k\in\mathbb{N}$, we have $|f_{n_k}|=|g_{n_k}|=|h_{n_k}|-1=|e_{n_k}|+1$. Suppose the desired inequality holds for $k$. We will prove it also holds for $k+1$. Notice that Lemma \ref{Q_nproductP_n} implies that for all $n\in\mathbb{N}$, $Q_n$ is a product of the matrices $P_1$ and $P_2$ if and only if $n$ is in the sequence $(n_j)_j$. It follows that $Q_{n_{k+1}}=Q_{n_k}P_1$ or $Q_{n_{k+1}}=Q_{n_k}P_2$.
Then we have that either $|g_{n_{k+1}}|=|h_{n_k}||c_i|+|g_{n_k}||a_i|$ or $|g_{n_{k+1}}|=|h_{n_k}||c_i|-|g_{n_k}||a_i|$ where $i=1$ or $2$. Suppose the first equality holds. Then we also have $|e_{n_{k+1}}|=|e_{n_k}||a_i|+|f_{n_k}||c_i|$ so that \begin{align*} |g_{n_{k+1}}|-|e_{n_{k+1}}|&=|h_{n_k}||c_i|+|g_{n_k}||a_i|-|e_{n_k}||a_i|-|f_{n_k}||c_i|\\ &=(|h_{n_k}|-|f_{n_k}|)|c_i|+(|g_{n_k}|-|e_{n_k}|)|a_i|\\ &=|c_i|+|a_i|\\ &>1, \end{align*} a contradiction. Thus we have \begin{align*} |g_{n_{k+1}}|&=|h_{n_k}||c_i|-|g_{n_k}||a_i|\\ &=(|g_{n_k}|+1)|c_i|-|g_{n_k}||c_i|+|g_{n_k}|\\ &=|g_{n_k}|+|c_i|\\ &\leq n_k\max\{|c_1|,|c_2|\}+|c_i|\\ &\leq(n_k+1)\max\{|c_1|,|c_2|\}\\ &\leq(n_{k+1})\max\{|c_1|,|c_2|\}. \end{align*} \end{proof} \begin{proposition}\label{entrieslimit1} Let $e_n$, $f_n$, $g_n$, and $h_n$ be as defined in Notation \ref{P_1P_2}. Then there exists $D>0$ such that for all $n\in\mathbb{N}$, we have \begin{equation*} |e_n|,|f_n|,|g_n|,|h_n|<Dn. \end{equation*} \end{proposition} \begin{proof} Consider all product matrices constructed as the string of $A_1$s, $A_2$s, $\ldots$, and $A_v$s in $P_1$ truncated after a certain point and in $P_2$ truncated after a certain point. Let $M$ be the largest entry in absolute value of all such matrices. Let $n\in\mathbb{N}$. By Lemma \ref{Q_nproductP_n}, there exist $2\leq m_1<m_2<...<m_l$ and $ n_1,\ldots,n_l\in\mathbb{N}$ with the following properties: \begin{enumerate} \item for all $1\leq i\leq l$, we have $n_i\leq q_{m_i-1}$ \item for all $2\leq i\leq l$, if $n_i=q_{m_i-1}$, then $m_{i-1}+2\leq m_i$. \item $Q_n=(P_{m_l})^{n_l}(P_{m_{l-1}})^{n_{l-1}}...(P_{m_1})^{n_1}M_n$ where the matrix $M_n$ is a product of the string of $A_1$s, $A_2$s, $\ldots$, and $A_v$s in either $P_1$ or $P_2$ truncated after a certain point. \end{enumerate} Let \begin{equation*} Q_{n'}=(P_{m_l})^{n_l}(P_{m_{l-1}})^{n_{l-1}}...(P_{m_1})^{n_1} \end{equation*} so that \begin{equation*} Q_n=Q_{n'}M_n. \end{equation*} Note that $n'<n+k_2$.
By Lemma \ref{nklinear}, we have $|g_{n'}|\leq n'\max\{|c_1|,|c_2|\}$. Let $C=\max\{|c_1|,|c_2|\}$. Then we have \begin{align*} \max\{|e_n|,|f_n|,|g_n|,|h_n|\}& \leq 2M(|g_{n'}|+1)\leq 2M(Cn'+1)<2M(C(n+k_2)+1)\\ & \leq(2MC+2MCk_2+2M)n. \end{align*} Since none of $M$, $C$, or $k_2$ depend on $n$, letting $D=2MC+2MCk_2+2M$, we obtain our result. \end{proof} \begin{proof}[Proof of Theorem \ref{bigthm} for Case $1$] Since $[G_1,G_2]Q_n=[G_{n+1},G_{n+2}]$ for all $n\in\mathbb{N}$, Proposition \ref{entrieslimit1} gives a constant $C>0$ such that for all $n\in\mathbb{N}$ \begin{equation*} |G_n|<Cn. \end{equation*} \end{proof} \section{The Exponential Growth Case}\label{sec4} Case $2$ covers all other cases. By Lemma \ref{ratio1}, there exists $m\in\mathbb{N}$, $m\geq 2$, with the following properties: $a_m,b_m,c_m,d_m$ are all nonzero and there exist positive integers $r_1,r_2,r_3,r_4,r_5,r_6,r_7,r_8$ such that \begin{equation*} \frac{r_1}{r_2}\leq\frac{|a_m|}{|c_m|},\frac{|b_m|}{|d_m|}\leq\frac{r_3}{r_4}, \text{\ \ \ and\ \ \ } \frac{r_5}{r_6}\leq\frac{|a_m|}{|b_m|},\frac{|c_m|}{|d_m|}\leq\frac{r_7}{r_8}, \end{equation*} with $r_1<r_2$, $r_3<r_4$, $r_5<r_6$, $r_7<r_8$, and $|d_m|>r_2,r_4,r_6,r_8$. Without loss of generality, we may assume that the inequalities hold for $m=2$. Also, by taking the minimum of $\frac{r_1}{r_2}$ and $\frac{r_5}{r_6}$ and the maximum of $\frac{r_3}{r_4}$ and $\frac{r_7}{r_8}$, we can say there exist positive integers $r_1,r_2,r_3,r_4$ such that \begin{equation*} \frac{r_1}{r_2}\leq\frac{|a_2|}{|c_2|},\frac{|b_2|}{|d_2|},\frac{|a_2|}{|b_2|},\frac{|c_2|}{|d_2|}\leq\frac{r_3}{r_4} \end{equation*} with $r_1<r_2$, $r_3<r_4$, and $|d_2|>r_2,r_4$. \begin{lemma}\label{ratios1} Let $a_m$, $b_m$, $c_m$, and $d_m$ be as defined in Notation \ref{P_1P_2}. For all $m\geq 2$, we have \begin{equation*} \frac{r_1}{r_2}\leq\frac{|a_m|}{|c_m|},\frac{|b_m|}{|d_m|}\leq\frac{r_3}{r_4} \end{equation*} and $|d_m|>r_2,r_4$. \end{lemma} \begin{proof} We prove our result by induction on $m\in\mathbb{N}$.
We have already established it for $m=2$. Suppose the claim holds for some $m$ where $m\geq 2$. We prove it holds for $P_{m+1}=(P_m)^{q_{m-1}}P_{m-1}$.\par First assume that $\frac{c_{m,q_{m-1}}b_{m-1}}{d_{m,q_{m-1}}d_{m-1}}>0$. By Lemma \ref{power}, we have $|d_{m,q_{m-1}}|\geq |c_{m,q_{m-1}}|+1$. By Lemma \ref{positive}, we have the desired inequalities holding for $P_{m+1}$ with the observation by Lemma \ref{power} that \begin{align*} |d_{m+1}|&\geq |d_{m,q_{m-1}}||d_{m-1}|-|c_{m,q_{m-1}}||b_{m-1}|\\ &\geq |d_{m,q_{m-1}}|(|b_{m-1}|+1)-|c_{m,q_{m-1}}||b_{m-1}|\\ &\geq(|d_{m,q_{m-1}}|-|c_{m,q_{m-1}}|)|b_{m-1}|+|d_{m,q_{m-1}}|\\ &>|d_{m,q_{m-1}}|\\ &\geq|d_m|\\ &>r_2,r_4. \end{align*} Now assume that $\frac{c_{m,q_{m-1}}b_{m-1}}{d_{m,q_{m-1}}d_{m-1}}<0$. By induction, we have all of the inequalities holding in Lemma \ref{positive}. Lemma \ref{positive} thus gives us all of the desired inequalities holding for $P_{m+1}$, again with the observation that $|d_{m+1}|>|d_m|>r_2,r_4$. \end{proof} Also, with the help of Lemma \ref{positive}, we can obtain the following. \begin{lemma}\label{ratios3} Let $a_m$, $b_m$, $c_m$, and $d_m$ be as defined in Notation \ref{P_1P_2}. For all even $m\geq 2$, we have \begin{equation*} \frac{r_1}{r_2}\leq\frac{|a_m|}{|b_m|},\frac{|c_m|}{|d_m|}\leq\frac{r_3}{r_4} \end{equation*} and $|d_m|>r_2,r_4$. \end{lemma} By Lemma \ref{ratio1}, we can also obtain that there exist positive integers $r_9,r_{10},r_{11},r_{12}$ such that \begin{equation*} \frac{r_9}{r_{10}}\leq\frac{|a_3|}{|b_3|},\frac{|c_3|}{|d_3|}\leq\frac{r_{11}}{r_{12}} \end{equation*} with $r_9<r_{10}$, $r_{11}<r_{12}$, and $|d_3|>r_{10},r_{12}$, and so, also with the help of Lemma \ref{positive}, we can obtain the following. \begin{lemma}\label{ratios2} Let $a_m$, $b_m$, $c_m$, and $d_m$ be as defined in Notation \ref{P_1P_2}.
For all odd $m\geq 3$, we have \begin{equation*} \frac{r_9}{r_{10}}\leq\frac{|a_m|}{|b_m|},\frac{|c_m|}{|d_m|}\leq\frac{r_{11}}{r_{12}} \end{equation*} and $|d_m|>r_{10},r_{12}$. \end{lemma} \begin{remark} Without loss of generality, we will assume that $r_9=r_1$, $r_{10}=r_2$, $r_{11}=r_3$, and $r_{12}=r_4$ for the rest of this section. \end{remark} \begin{lemma}\label{qratios} Let $a_m$, $b_m$, $c_m$, and $d_m$ be as defined in Notation \ref{P_1P_2}. For all $m\geq 2$ and $i\in\mathbb{N}$, let \begin{equation*} P_m^i:=\begin{bmatrix} a_{m,i} & b_{m,i}\\ c_{m,i} & d_{m,i} \end{bmatrix}. \end{equation*} Then we have \begin{equation*} \frac{r_1}{r_2}\leq\frac{|a_{m,i}|}{|c_{m,i}|},\frac{|b_{m,i}|}{|d_{m,i}|},\frac{|a_{m,i}|}{|b_{m,i}|},\frac{|c_{m,i}|}{|d_{m,i}|}\leq\frac{r_3}{r_4}. \end{equation*} \end{lemma} \begin{remark} For the rest of the section, we will let $t_1=\frac{(r_4-r_3)}{r_3r_4}$ and $t_2=\frac{(r_2r_4+r_1r_3)}{r_1r_4}$. \end{remark} \begin{lemma}\label{c_mrelations} Let $c_m$ be as defined in Notation \ref{P_1P_2}. We have \begin{equation*} (t_1|c_{m-1}|)^{q_{m-2}}|c_{m-2}|\leq|c_m|\leq(t_2|c_{m-1}|)^{q_{m-2}}|c_{m-2}| \end{equation*} for all $m\geq 4$. \end{lemma} \begin{proof} Let $m\geq 4$. By Lemmas \ref{ratios3}, \ref{ratios2}, and \ref{qratios}, we have \begin{align*} |c_m| &\geq |d_{m-1,q_{m-2}}||c_{m-2}|- |c_{m-1,q_{m-2}}||a_{m-2}|\\ &\geq\frac{r_4}{r_3}|c_{m-1,q_{m-2}}||c_{m-2}|-\frac{r_3}{r_4}|c_{m-1,q_{m-2}}||c_{m-2}|\\ &=\frac{(r_4^2-r_3^2)}{r_3r_4}|c_{m-1,q_{m-2}}||c_{m-2}|\geq t_1|c_{m-1,q_{m-2}}||c_{m-2}|. \end{align*} Through induction on $q_{m-2}$, we can similarly derive that $|c_{m-1,q_{m-2}}|\geq t_1^{q_{m-2}-1}|c_{m-1}|^{q_{m-2}}$. Also, we have \begin{align*} |c_m| &\leq |d_{m-1,q_{m-2}}||c_{m-2}|+|c_{m-1,q_{m-2}}||a_{m-2}|\\ &\leq\frac{r_2}{r_1}|c_{m-1,q_{m-2}}||c_{m-2}|+\frac{r_3}{r_4}|c_{m-1,q_{m-2}}||c_{m-2}|\\ &=\frac{(r_2r_4+r_1r_3)}{r_1r_4}|c_{m-1,q_{m-2}}||c_{m-2}|.
\end{align*} Again, through induction on $q_{m-2}$, we can similarly derive that $|c_{m-1,q_{m-2}}|\leq t_2^{q_{m-2}-1}|c_{m-1}|^{q_{m-2}}$. Thus we have our result. \end{proof} \begin{proposition}\label{limc_mexists} Let $q_m$, $c_m$, and $k_m$ be as defined in Notation \ref{P_1P_2}. We have \begin{equation*} \lim_{m\rightarrow\infty}|c_m|^{1/k_m} \end{equation*} exists, is finite, and is greater than $1$. \end{proposition} \begin{proof} It suffices to show that \begin{equation*} \lim_{m\rightarrow\infty}\frac{\log|c_m|}{k_m} \end{equation*} exists, is finite, and is positive. Let $u_m=\log |c_m|$ and $s_m:=\frac{u_m}{k_m}$. By Lemma \ref{c_mrelations}, we have that $q_{m-1}(u_m+\log t_1)+u_{m-1}\leq u_{m+1}\leq q_{m-1}(u_m+\log t_2)+u_{m-1}$ for all $m\in\mathbb{N}$, $m\geq 3$. Let $m\in\mathbb{N}$, $m\geq 3$. We have \begin{align} s_{m+1}-s_m&=\frac{u_m}{k_{m+1}}\left(\frac{u_{m+1}}{u_m}-\frac{k_{m+1}}{k_m}\right)\nonumber\\ &\leq\frac{u_m}{k_{m+1}}\left(\frac{q_{m-1}(u_m+\log t_2)+u_{m-1}}{u_m}-\frac{k_{m+1}}{k_m}\right)\nonumber\\ &=\frac{u_m}{k_{m+1}}\left(\frac{u_{m-1}+q_{m-1}\log t_2}{u_m}-\frac{k_{m-1}}{k_m}\right)\nonumber\\ &=\frac{u_{m-1}}{k_{m+1}}-\frac{u_mk_{m-1}}{k_{m+1}k_m}+\frac{q_{m-1}\log t_2}{k_{m+1}}\nonumber\\ &=\frac{\left(u_{m-1}-\dfrac{u_mk_{m-1}}{k_m}\right)}{k_{m+1}}+\frac{q_{m-1}\log t_2}{k_{m+1}}\nonumber\\ &=\frac{k_{m-1}(s_{m-1}-s_m)}{k_{m+1}}+\frac{q_{m-1}\log t_2}{k_{m+1}}.\label{t2ineq} \end{align} Similarly, we have \begin{equation} s_m-s_{m+1}\leq\frac{k_{m-1}(s_m-s_{m-1})}{k_{m+1}}+\frac{q_{m-1}\log t_1^{-1}}{k_{m+1}}.\label{t1ineq} \end{equation} Therefore \begin{equation*} |s_m-s_{m+1}|\leq\frac{k_{m-1}|s_m-s_{m-1}|}{k_{m+1}}+\frac{q_{m-1}\log t}{k_{m+1}}, \end{equation*} where $t=\max\{t_2,t_1^{-1}\}$. Note that \begin{equation*} \frac{k_{m+1}}{k_{m-1}}=\frac{q_{m-1}k_m+k_{m-1}}{k_{m-1}}>q_{m-1}+1\geq 2. \end{equation*} Thus \begin{equation*} |s_{m+1}-s_m|\le\frac{|s_m-s_{m-1}|}{2}+\frac{\log t}{k_m}.
\end{equation*} Consider the Fibonacci sequence $F_1=1, F_2=1$, and $F_m=F_{m-1}+F_{m-2}$ for all $m\geq 3$. Then we have \begin{equation} k_m\geq F_m=\left\lfloor\frac{\varphi^m}{\sqrt{5}}+\frac{1}{2}\right\rfloor\geq\frac{\varphi^m}{5}.\label{kngrowth} \end{equation} Thus \begin{equation*} |s_{m+1}-s_m|\le\frac{|s_m-s_{m-1}|}{2}+\frac{5\log t}{\varphi^m}. \end{equation*} We can prove by induction on $l\in\mathbb{N}$ that for all $m\geq 3$ and $l\geq 1$, we have \begin{align*} |s_{m+l}-s_{m+l-1}|&\leq\frac{|s_m-s_{m-1}|}{2^l}+\left(\frac{1}{2^{l-1}\cdot\varphi^m}+\frac{1}{2^{l-2}\varphi^{m+1}}+\ldots+\frac{1}{\varphi^{m+l-1}}\right)\log t\\ &<\frac{|s_m-s_{m-1}|}{2^l}+\frac{\log t}{\varphi^{m+l-1}}\left(1+\frac{\varphi}{2}+\frac{\varphi^2}{4}+\ldots\right)\\ &=\frac{|s_m-s_{m-1}|}{2^l}+\frac{\log t}{\varphi^{m+l-1}}\left(\frac{2}{2-\varphi}\right). \end{align*} By a geometric series argument, the limit exists and is finite. It remains to show the limit is positive. By Lemma \ref{c_mrelations}, we have $q_{m-2}(\log t_1+\log|c_{m-1}|)+\log|c_{m-2}|\leq\log|c_m|$ for all $m\geq 4$. We thus have \begin{equation*} q_{m-2}(\log|c_{m-1}|+\log t_1)+(\log|c_{m-2}|+\log t_1)\leq\log|c_m|+\log t_1 \end{equation*} for all $m\geq 4$. By Lemma \ref{limitsinfinity}, we have $\lim_{m\rightarrow\infty}\log|c_m|=\infty$. We can therefore deduce that there exists $C_2>0$ such that for all sufficiently large $m\in\mathbb{N}$, we have $\log|c_m|+\log t_1>C_2k_m$. It follows that the limit is positive. \end{proof} \begin{remark} Let $q_m$, $c_m$, and $k_m$ be as in Proposition \ref{limc_mexists}. Let $L:=\lim_{m\rightarrow\infty}|c_m|^{1/k_m}$. \end{remark} \begin{lemma}\label{prodrelc_m} Let $P_m$, $c_m$, $Q_n$, $e_n$, $f_n$, $g_n$, and $h_n$ be as defined in Notation \ref{P_1P_2}.
Let $n\in\mathbb{N}$ be such that there exist $2\leq m_1<m_2<...<m_l$ and $n_1,\ldots,n_l\in\mathbb{N}$ with the following properties: \begin{enumerate} \item for all $1\leq i\leq l$, we have $n_i\leq q_{m_i-1}$ \item for all $2\leq i\leq l$, if $n_i=q_{m_i-1}$, then $m_{i-1}+2\leq m_i$. \item $Q_n=(P_{m_l})^{n_l}(P_{m_{l-1}})^{n_{l-1}}...(P_{m_1})^{n_1}$ \end{enumerate} Then \begin{equation*} \frac{r_1}{r_2}\leq\frac{|e_n|}{|g_n|},\frac{|f_n|}{|h_n|},\frac{|e_n|}{|f_n|},\frac{|g_n|}{|h_n|}\leq\frac{r_3}{r_4}. \end{equation*} Also, we have \begin{equation*} t_1^{l+n_1+\ldots+n_l}|c_{m_l}|^{n_l}|c_{m_{l-1}}|^{n_{l-1}}...|c_{m_1}|^{n_1} \leq|g_n| \leq t_2^{l+n_1+\ldots+n_l}|c_{m_l}|^{n_l}|c_{m_{l-1}}|^{n_{l-1}}...|c_{m_1}|^{n_1}. \end{equation*} \end{lemma} \begin{proof} The first chain of inequalities follows by similar reasoning as in the proof of Lemma \ref{positive}. The second chain of inequalities can be proved by induction on $l\in\mathbb{N}$ with the base case and induction step proved as in the proof of Lemma \ref{c_mrelations}. \end{proof} \begin{proposition}\label{rootgnj} Let $P_m$, $k_m$, $Q_n$, and $g_n$ be as defined in Notation \ref{P_1P_2}. Let $n_1,n_2,\ldots,n_j,\ldots,$ be the list of natural numbers such that for each $n_j$, there exist $2\leq m_1<m_2<...<m_l$ and $n_{j,1},\ldots,n_{j,l}\in\mathbb{N}$ with the following properties: \begin{enumerate} \item for all $1\leq i\leq l$, we have $n_{j,i}\leq q_{m_i-1}$ \item for all $2\leq i\leq l$, if $n_{j,i}=q_{m_i-1}$, then $m_{i-1}+2\leq m_i$ \item $Q_{n_j}=(P_{m_l})^{n_{j,l}}(P_{m_{l-1}})^{n_{j,l-1}}...(P_{m_1})^{n_{j,1}}$. \end{enumerate} We have \begin{equation*} \lim_{j\rightarrow\infty}|g_{n_j}|^{1/n_j}=L. \end{equation*} \end{proposition} \begin{proof} It suffices to prove that \begin{equation*} \lim_{j\rightarrow\infty}\frac{\log|g_{n_j}|}{n_j}=\log L. \end{equation*} We have \begin{equation*} \lim_{m\rightarrow\infty}\frac{\log|c_m|}{k_m}=\log L. \end{equation*} Let $\epsilon>0$.
Pick $\frac{\log L}{2}>\delta_1>0$ and $\delta_2,\delta_3,\delta_4>0$ such that $(\log L+\delta_1)(1+\delta_2)<\log L+\epsilon$ and $\frac{(1-\delta_3)(\log L-\delta_1)}{(1+\delta_4)}>\log L-\epsilon$. Choose $M\in\mathbb{N}$ such that for all $m\geq M$, we have \begin{equation} \left|\frac{\log|c_m|}{k_m}-\log L\right|<\delta_1\label{epsiloncond1}, \end{equation} \begin{equation} \frac{10}{\varphi^{M-1}(\varphi-1)\log L}+\frac{2}{k_M\log L}<\frac{\delta_2}{2}\label{epsiloncond5}, \end{equation} and \begin{equation} \frac{10\log(t_1^{-1})}{\varphi^{M-1}(\varphi-1)\log L}+\frac{2\log(t_1^{-1})}{k_M\log L}<\frac{\delta_3}{2}\label{epsiloncond6}. \end{equation} By \eqref{kngrowth}, we have $\lim_{m\rightarrow\infty}\frac{m}{k_m}=0$. Thus we can choose $N>M$ such that for all $m\geq N$, we have \begin{equation} \frac{2m\widehat{q_{M-1}}}{k_m\log L}+\frac{3\widehat{q_{M-1}}Mk_M}{k_m}<\frac{\delta_2}{2}\label{epsiloncond2}, \end{equation} \begin{equation} \frac{2m\widehat{q_{M-1}}\log(t_1^{-1})}{k_m\log L}<\frac{\delta_3}{2}\label{epsiloncond3}, \end{equation} and \begin{equation} \frac{\widehat{q_{M-1}}Mk_M}{k_{m}}<\delta_4\label{epsiloncond4} \end{equation} where \begin{equation*} \widehat{q_{M-1}}:=\max\{q_1,q_2,\ldots,q_{M-1}\}. \end{equation*} Let $n_j\geq k_{N}$. Then there exist $2\leq m_1<m_2<...<m_l$ and $n_{j,1},\ldots,n_{j,l}\in\mathbb{N}$ with the following properties: \begin{enumerate} \item for all $1\leq i\leq l$, we have $n_{j,i}\leq q_{m_i-1}$ \item for all $2\leq i\leq l$, if $n_{j,i}=q_{m_i-1}$, then $m_{i-1}+2\leq m_i$ \item $Q_{n_j}=(P_{m_l})^{n_{j,l}}(P_{m_{l-1}})^{n_{j,l-1}}...(P_{m_1})^{n_{j,1}}$. \end{enumerate} By Lemma \ref{prodrelc_m}, we have \begin{align} & (l+n_{j,1}+\ldots+n_{j,l})\log(t_1)+n_{j,l}\log|c_{m_l}|+...+n_{j,1}\log|c_{m_1}|\nonumber\\ & \qquad \qquad \leq\log|g_{n_j}| \leq(l+n_{j,1}+\ldots+n_{j,l})\log(t_2)+n_{j,l}\log|c_{m_l}|+...+n_{j,1}\log|c_{m_1}|\label{gbounds}. \end{align} Pick $1\leq y\leq l$ such that $m_y\geq M>m_{y-1}$.
Thus $y<M$. By \eqref{epsiloncond1}, we have \begin{equation} \log L-\delta_1<\frac{n_{j,l}\log|c_{m_l}|+...+n_{j,y}\log|c_{m_y}|}{n_{j,l}k_{m_l}+...+n_{j,y}k_{m_y}}<\log L+\delta_1\label{epsilon1bounds}. \end{equation} Also observe the following. \begin{align} \frac{l+n_{j,1}+\ldots+n_{j,l-1}}{\log|c_{m_l}|}&<\frac{(l+q_{m_1-1}+\ldots+q_{m_{l-1}-1})}{\log|c_{m_l}|}\nonumber\\ &<\frac{(l+q_{m_1-1}+\ldots+q_{m_{y-1}-1})}{\log|c_{m_l}|}+\frac{q_{m_y-1}+\ldots+q_{m_{l-1}-1}}{\log |c_{m_l}|}\nonumber\\ &<\frac{l\widehat{q_{M-1}}}{\log |c_{m_l}|}+\frac{\dfrac{k_{m_y+1}}{k_{m_y}}+\ldots+\dfrac{k_{m_{l-1}+1}}{k_{m_{l-1}}}}{\log |c_{m_l}|}\nonumber\\ &<\frac{l\widehat{q_{M-1}}}{k_{m_l}(\log L-\delta_1)}+\frac{\dfrac{k_{m_y+1}}{k_{m_y}}+\ldots+\dfrac{k_{m_{l-1}+1}}{k_{m_{l-1}}}}{k_{m_l}(\log L-\delta_1)}\nonumber\\ &<\frac{l\widehat{q_{M-1}}}{k_{m_l}(\log L-\delta_1)}+\frac{1}{(\log L-\delta_1)}\sum_{j=m_y}^{\infty}\frac{1}{k_j}\nonumber\\ &<\frac{l\widehat{q_{M-1}}}{k_{m_l}(\log L-\delta_1)}+\frac{1}{(\log L-\delta_1)}\sum_{j=m_y}^{\infty}\frac{5}{\varphi^j}\nonumber\\ &=\frac{l\widehat{q_{M-1}}}{k_{m_l}(\log L-\delta_1)}+\frac{5}{\varphi^{m_y-1}(\varphi-1)(\log L-\delta_1)}\nonumber\\ &<\frac{2m_l\widehat{q_{M-1}}}{k_{m_l}\log L}+\frac{10}{\varphi^{M-1}(\varphi-1)\log L}.\label{nlog} \end{align} Thus, by \eqref{epsiloncond1}, \eqref{epsiloncond5}, and \eqref{epsiloncond2}, we have \begin{align} &\frac{(l+n_{j,1}+\ldots+n_{j,l})\log(t_2)+n_{j,l}\log|c_{m_l}|+...+n_{j,1}\log|c_{m_1}|}{n_{j,l}\log|c_{m_l}|+...+n_{j,y}\log|c_{m_y}|}\nonumber\\ &\qquad \qquad < 1+\frac{(l+n_{j,1}+\ldots+n_{j,l})\log(t_2)+n_{j,y-1}\log|c_{m_{y-1}}|+...+n_{j,1}\log|c_{m_1}|}{n_{j,l}\log|c_{m_l}|}\nonumber\\ &\qquad \qquad <1+\frac{(l+n_{j,1}+\ldots+n_{j,l})\log(t_2)+\widehat{q_{M-1}}y\log|c_{m_{y-1}}|}{n_{j,l}\log|c_{m_l}|}\nonumber\\ &\qquad \qquad <1+\frac{(l+n_{j,1}+\ldots+n_{j,l-1})\log(t_2)+\widehat{q_{M-1}}M\log|c_M|}{\log|c_{m_l}|}+\frac{1}{\log |c_{m_l}|}\nonumber\\ &\qquad \qquad
<1+\frac{(l+n_{j,1}+\ldots+n_{j,l-1})\log(t_2)}{\log |c_{m_l}|}+\frac{\widehat{q_{M-1}}Mk_M(\log L+\delta_1)}{k_{m_l}(\log L-\delta_1)}+\frac{1}{\log |c_M|}\nonumber\\ &\qquad \qquad <1+\frac{2m_l\widehat{q_{M-1}}}{k_{m_l}\log L}+\frac{10}{\varphi^{M-1}(\varphi-1)\log L}+\frac{\widehat{q_{M-1}}Mk_M(\log L+\delta_1)}{k_{m_l}(\log L-\delta_1)}+\frac{1}{\log |c_M|}\nonumber\\ &\qquad \qquad <1+\frac{2m_l\widehat{q_{M-1}}}{k_{m_l}\log L}+\frac{10}{\varphi^{M-1}(\varphi-1)\log L}+\frac{3\widehat{q_{M-1}}Mk_M}{k_{m_l}}+\frac{2}{k_M\log L}\nonumber\\ &\qquad \qquad <1+\delta_2\label{epsilon2bound}. \end{align} Combining \eqref{gbounds}, \eqref{epsilon1bounds}, and \eqref{epsilon2bound}, we thus have \begin{align*} \frac{\log |g_{n_j}|}{n_j}&<\frac{(1+\delta_2)(n_{j,l}\log|c_{m_l}|+...+n_{j,y}\log|c_{m_y}|)}{n_{j,l}k_{m_l}+...+n_{j,y}k_{m_y}}\\ &<(\log L+\delta_1)(1+\delta_2)\\ &<\log L+\epsilon. \end{align*} Also, since $t_1\leq 1$, by \eqref{epsiloncond6}, \eqref{epsiloncond3}, and \eqref{nlog}, we have \begin{align} &\frac{(l+n_{j,1}+\ldots+n_{j,l})\log(t_1)+n_{j,l}\log|c_{m_l}|+...+n_{j,1}\log|c_{m_1}|}{n_{j,l}\log|c_{m_l}|+...+n_{j,y}\log|c_{m_y}|}\nonumber\\ &\qquad \qquad >1-\frac{(l+n_{j,1}+\ldots+n_{j,l})\log(t_1^{-1})}{n_{j,l}\log|c_{m_l}|}\nonumber\\ &\qquad \qquad >1-\frac{(l+n_{j,1}+\ldots+n_{j,l-1})\log(t_1^{-1})}{\log|c_{m_l}|}-\frac{2\log(t_1^{-1})}{k_M\log L}\nonumber\\ &\qquad \qquad >1-\frac{2m_l\widehat{q_{M-1}}\log(t_1^{-1})}{k_{m_l}\log L}-\frac{10\log(t_1^{-1})}{\varphi^{M-1}(\varphi-1)\log L}-\frac{2\log(t_1^{-1})}{k_M\log L}\nonumber\\ &\qquad \qquad >1-\delta_3.\label{epsilon3bound} \end{align} Also \begin{align} \frac{n_j}{n_{j,l}k_{m_l}+...+n_{j,y}k_{m_y}}&=1+\frac{n_{j,y-1}k_{m_{y-1}}+\ldots+n_{j,1}k_{m_1}}{n_{j,l}k_{m_l}+...+n_{j,y}k_{m_y}}\nonumber\\ &<1+\frac{\widehat{q_{M-1}}yk_{m_{y-1}}}{k_{m_l}}\nonumber\\ &<1+\frac{\widehat{q_{M-1}}Mk_M}{k_{m_l}}\nonumber\\ &<1+\delta_4\label{epsilon4bound} \end{align} by \eqref{epsiloncond4}. 
Combining \eqref{gbounds}, \eqref{epsilon1bounds}, \eqref{epsilon3bound}, and \eqref{epsilon4bound}, we have \begin{align*} \frac{\log |g_{n_j}|}{n_j}&>\frac{(1-\delta_3)(n_{j,l}\log|c_{m_l}|+...+n_{j,y}\log|c_{m_y}|)}{(1+\delta_4)(n_{j,l}k_{m_l}+...+n_{j,y}k_{m_y})}\\ &>\frac{(1-\delta_3)(\log L-\delta_1)}{(1+\delta_4)}\\ &>\log L-\epsilon. \end{align*} \end{proof} \begin{lemma}\label{limitratios1} Let $q_m$, $a_m$ and $c_m$ be as defined in Notation \ref{P_1P_2}. We have $\lim_{m\rightarrow\infty}\frac{a_m}{c_m}$ exists, is between $-1$ and $1$, and is irrational. \end{lemma} \begin{proof} We have \begin{equation*} \lim_{m\rightarrow\infty}\frac{\log|c_m|}{k_m}=\log L>0. \end{equation*} Thus there exists $L'>1$, which may be taken arbitrarily close to $L$, such that for all sufficiently large $m\in\mathbb{N}$, we have \begin{equation*} |c_{m-1}|>L'^{k_{m-1}}. \end{equation*} Let \begin{equation*} P_{m-1}^{q_{m-2}-1}P_{m-2}=: \begin{bmatrix} a_m' & b_m'\\ c_m' & d_m' \end{bmatrix}. \end{equation*} Then for $m\in\mathbb{N}$ sufficiently large, we have \begin{align*} \left|\frac{a_m}{c_m}-\frac{a_{m-1}}{c_{m-1}}\right|&=\left|\frac{a_{m-1}a_m'+b_{m-1}c_m'}{c_{m-1}a_m'+d_{m-1}c_m'}-\frac{a_{m-1}}{c_{m-1}}\right|\\ &=\left|\frac{\left(b_{m-1}-\dfrac{a_{m-1}d_{m-1}}{c_{m-1}}\right)c_m'}{c_{m-1}a_m'+d_{m-1}c_m'}\right| =\frac{|c_m'|}{|c_m||c_{m-1}|}\\ &\leq\frac{|c_m'|}{t_1|c_m'||c_{m-1}|^2} \leq\frac{1}{t_1L'^{2k_{m-1}}}. \end{align*} By a geometric series argument, using \eqref{kngrowth}, the sequence $\frac{a_m}{c_m}$ is Cauchy and so converges. The fact that the limit is between $-1$ and $1$ follows from Lemma \ref{ratios1}. It remains to show the limit is irrational. Suppose for a contradiction that it is rational and let it be $\frac{a}{b}$ where $a\in\mathbb{Z}$ and $b\in\mathbb{N}$. Let $N\in\mathbb{N}$ be sufficiently large so that for all $m\geq N$, the above inequality holds and $L'^{-2k_{m-1}}<\frac{1}{2}$.
Then for all $m\geq N$, we have \begin{align*} \left|\frac{a}{b}-\frac{a_m}{c_m}\right|&\leq\sum_{i=m}^{\infty}\left|\frac{a_{i+1}}{c_{i+1}}-\frac{a_i}{c_i}\right| <\sum_{i=m}^{\infty}\frac{1}{t_1L'^{2k_i}} =\frac{1}{t_1L'^{2k_m}}\sum_{i=0}^{\infty}L'^{2k_m-2k_{m+i}}\\ &=\frac{1}{t_1L'^{2k_m}}\sum_{i=0}^{\infty}\prod_{j=0}^{i-1}L'^{2k_{m+j}-2k_{m+j+1}}\\ &<\frac{1}{t_1L'^{2k_m}}\sum_{i=0}^{\infty}\prod_{j=0}^{i-1}L'^{-2k_{m+j-1}}\\ &<\frac{1}{t_1L'^{2k_m}}\sum_{i=0}^{\infty}\left(\frac{1}{2}\right)^i\\ &=\frac{2}{t_1L'^{2k_m}}. \end{align*} Thus \begin{equation*} \frac{|ac_m-ba_m|}{|bc_m|}<\frac{2}{t_1L'^{2k_m}}. \end{equation*} Suppose that $\frac{a}{b}\neq\frac{a_m}{c_m}$. Then we have \begin{equation*} \frac{1}{|bc_m|}<\frac{2}{t_1L'^{2k_m}}. \end{equation*} Thus \begin{equation*} \frac{t_1L'^{2k_m}}{2|b|}<|c_m| \text{\ \ \ or, equivalently,\ \ \ } \left(\frac{t_1}{2|b|}\right)^{1/k_m}L'^2<|c_m|^{1/k_m}. \end{equation*} Thus if there are infinitely many $m\in\mathbb{N}$ such that $\frac{a}{b}\neq\frac{a_m}{c_m}$, then letting $m\rightarrow\infty$ along such $m$ gives $L'^2\leq L$, a contradiction, since $L'$ may be chosen arbitrarily close to $L$ and $L>1$. So for sufficiently large $m\in\mathbb{N}$, we have $\frac{a}{b}=\frac{a_m}{c_m}$. But for all $m\in\mathbb{N}$, we have $\gcd(a_m,c_m)=1$ and $\lim_{m\rightarrow\infty}|a_m|=\lim_{m\rightarrow\infty}|c_m|=\infty$ and so this cannot be the case either. Thus the limit must be irrational. \end{proof} \begin{remark} Let $q_m$, $a_m$ and $c_m$ be as defined in Notation \ref{P_1P_2}. We will denote \begin{equation*} M:=\lim_{m\rightarrow\infty}\frac{a_m}{c_m}. \end{equation*} \end{remark} \begin{lemma}\label{n_jratiolimit} Let $q_m$, $P_m$, $Q_n$, $e_n$, $f_n$, $g_n$, and $h_n$ be as defined in Notation \ref{P_1P_2}.
Let $n_1,n_2,\ldots,n_j,\ldots,$ be the list of natural numbers such that for each $n_j$, there exist $2\leq m_1<m_2<...<m_l$ and $n_{j,1},\ldots,n_{j,l}\in\mathbb{N}$ with the following properties: \begin{enumerate} \item for all $1\leq i\leq l$, we have $n_{j,i}\leq q_{m_i-1}$ \item for all $2\leq i\leq l$, if $n_{j,i}=q_{m_i-1}$, then $m_{i-1}+2\leq m_i$ \item $Q_{n_j}=(P_{m_l})^{n_{j,l}}(P_{m_{l-1}})^{n_{j,l-1}}...(P_{m_1})^{n_{j,1}}$. \end{enumerate} We have $\lim_{j\rightarrow\infty}\frac{e_{n_j}}{g_{n_j}}$ and $\lim_{j\rightarrow\infty}\frac{f_{n_j}}{h_{n_j}}$ both exist and are equal to $M$. \end{lemma} \begin{proof} By Lemma \ref{limitratios1}, we have that \begin{equation*} \lim_{m\rightarrow\infty}\frac{e_{k_m}}{g_{k_m}} \end{equation*} exists and is equal to $M$. We will prove that the desired limit is $M$. Let $\epsilon>0$. Choose $N\in\mathbb{N}$ such that for all $m\geq N$, we have \begin{equation*} \left|\frac{e_{k_m}}{g_{k_m}}-M\right|<\frac{\epsilon}{2} \text{\ \ \ and\ \ \ } \frac{r_4^2}{(r_4^2-r_3^2)|g_{k_m}|}<\frac{\epsilon}{2}. \end{equation*} Let $n_j\geq k_N$. Then $k_{m+1}>n_j\geq k_m$ for some $m\geq N$. We have $Q_{n_j}=Q_{k_m}Q_{n_j-k_m}$. Thus we have the following using Lemma \ref{prodrelc_m}: \begin{align*} \left|\frac{e_{n_j}}{g_{n_j}}-\frac{e_{k_m}}{g_{k_m}}\right|&=\left|\frac{e_{k_m}e_{n_j-k_m}+f_{k_m}g_{n_j-k_m}}{g_{k_m}e_{n_j-k_m}+h_{k_m}g_{n_j-k_m}}-\frac{e_{k_m}}{g_{k_m}}\right|\\ &=\left|\frac{g_{n_j-k_m}(f_{k_m}g_{k_m}-e_{k_m}h_{k_m})}{g_{k_m}(g_{k_m}e_{n_j-k_m}+h_{k_m}g_{n_j-k_m})}\right|\\ &=\frac{|g_{n_j-k_m}|}{|g_{k_m}||g_{k_m}e_{n_j-k_m}+h_{k_m}g_{n_j-k_m}|}\\ &\leq\frac{|g_{n_j-k_m}|}{|g_{k_m}|(|h_{k_m}g_{n_j-k_m}|-|g_{k_m}e_{n_j-k_m}|)}\\ &<\frac{|g_{n_j-k_m}|}{|g_{k_m}|\left(\left|h_{k_m}g_{n_j-k_m}\right|-\dfrac{r_3^2}{r_4^2}\left|h_{k_m}g_{n_j-k_m}\right|\right)}\\ &=\frac{r_4^2}{(r_4^2-r_3^2)|g_{k_m}h_{k_m}|}\\ &\leq\frac{r_4^2}{(r_4^2-r_3^2)|g_{k_m}|}\\ &<\frac{\epsilon}{2}.
\end{align*} Thus \begin{equation*} \left|\frac{e_{n_j}}{g_{n_j}}-M\right|\leq\left|\frac{e_{n_j}}{g_{n_j}}-\frac{e_{k_m}}{g_{k_m}}\right|+\left|\frac{e_{k_m}}{g_{k_m}}-M\right|<\epsilon. \end{equation*} Thus \begin{equation*} \lim_{j\rightarrow\infty}\frac{e_{n_j}}{g_{n_j}}=M \text{\ \ \ and\ \ \ } \left|\frac{e_{n_j}}{g_{n_j}}-\frac{f_{n_j}}{h_{n_j}}\right|=\frac{1}{|g_{n_j}h_{n_j}|}, \end{equation*} from which the rest follows using Lemma \ref{prodrelc_m}. \end{proof} \begin{remark} For the rest of the paper, we will assume that $G_1\neq\frac{-G_2}{M}$. \end{remark} \begin{proposition}\label{partiallimit} Let $q_m$, $P_m$ and $Q_n$ be defined as in Notation \ref{P_1P_2}. Let $n_1,n_2,\ldots,n_j,\ldots,$ be the list of natural numbers such that for each $n_j$, there exist $2\leq m_1<m_2<...<m_l$ and $n_{j,1},\ldots,n_{j,l}\in\mathbb{N}$ with the following properties: \begin{enumerate} \item for all $1\leq i\leq l$, we have $n_{j,i}\leq q_{m_i-1}$ \item for all $2\leq i\leq l$, if $n_{j,i}=q_{m_i-1}$, then $m_{i-1}+2\leq m_i$ \item $Q_{n_j}=(P_{m_l})^{n_{j,l}}(P_{m_{l-1}})^{n_{j,l-1}}...(P_{m_1})^{n_{j,1}}$. \end{enumerate} Let \begin{equation*} [G_1,G_2]Q_n=[G_{n+1},G_{n+2}] \end{equation*} for all $n\in\mathbb{N}$. We have \begin{equation*} \lim_{j\rightarrow\infty}|G_{n_j+1}|^{1/(n_j+1)}=\lim_{j\rightarrow\infty}|G_{n_j+2}|^{1/(n_j+2)}=L \text{\ \ \ and\ \ \ } \lim_{j\rightarrow\infty}\frac{G_{n_j+1}}{G_{n_j+2}}=M. \end{equation*} \end{proposition} \begin{proof} For all $n_j$, we have $G_1e_{n_j}+G_2g_{n_j}=G_{n_j+1}$. By Lemma \ref{prodrelc_m}, $g_{n_j}\neq 0$. Thus $\frac{G_1e_{n_j}}{g_{n_j}}+G_2=\frac{G_{n_j+1}}{g_{n_j}}$. By Lemma \ref{n_jratiolimit}, we have \begin{equation*} \lim_{j\rightarrow\infty}\frac{G_{n_j+1}}{g_{n_j}}=MG_1+G_2. \end{equation*} Since $G_1\neq\frac{-G_2}{M}$, this limit is nonzero, and so \begin{equation*} \lim_{j\rightarrow\infty}|G_{n_j+1}|^{1/(n_j+1)}=L \end{equation*} follows from Proposition \ref{rootgnj}.
The limit \begin{equation*} \lim_{j\rightarrow\infty}|G_{n_j+2}|^{1/(n_j+2)}=L \end{equation*} follows similarly. Also, for all $j\in\mathbb{N}$, we have \begin{align*} \frac{G_{n_j+1}}{G_{n_j+2}}&=\frac{G_1e_{n_j}+G_2g_{n_j}}{G_1f_{n_j}+G_2h_{n_j}}\\ &=\frac{e_{n_j}}{f_{n_j}}\cdot\frac{G_1+\dfrac{G_2g_{n_j}}{e_{n_j}}}{G_1+\dfrac{G_2h_{n_j}}{f_{n_j}}}. \end{align*} Applying Lemma \ref{n_jratiolimit} gives us the third limit with the observation that \begin{equation*} \lim_{j\rightarrow\infty}G_1+\frac{G_2h_{n_j}}{f_{n_j}}=G_1+\frac{G_2}{M}\neq 0. \end{equation*} \end{proof} \begin{proof}[Proof of Theorem \ref{bigthm} for Case $2$] Let $n_1,n_2,\ldots,n_j,\ldots,$ be the list of natural numbers such that for each $n_j$, there exist $2\leq m_1<m_2<...<m_l$ and $n_{j,1},\ldots,n_{j,l}\in\mathbb{N}$ with the following properties: \begin{enumerate} \item for all $1\leq i\leq l$, we have $n_{j,i}\leq q_{m_i-1}$ \item for all $2\leq i\leq l$, if $n_{j,i}=q_{m_i-1}$, then $m_{i-1}+2\leq m_i$ \item $Q_{n_j}=(P_{m_l})^{n_{j,l}}(P_{m_{l-1}})^{n_{j,l-1}}...(P_{m_1})^{n_{j,1}}$. \end{enumerate} For all $j\in\mathbb{N}$, we have $n_{j+1}-n_j<k_2$ by Lemma \ref{Q_nproductP_n}. We can deduce that there exists $C>0$ such that for all $n\in\mathbb{N}$ there exist $n_j<n$ and integers $C_1,C_2$ with $|C_1|,|C_2|<C$ such that $G_n=C_1G_{n_j+1}+C_2G_{n_j+2}$. By Proposition \ref{partiallimit}, we can deduce that \begin{equation*} \limsup_{n\rightarrow\infty}|G_n|^{1/n}=L. \end{equation*} Also, out of all of the finite possibilities for $C_1$ and $C_2$, we observe that $MC_1+C_2\neq 0$ whenever $(C_1,C_2)\neq(0,0)$, since $M$ is irrational. Let $M'$ denote the minimal possible value of $MC_1+C_2$ in absolute value. Then $M'>0$. By Proposition \ref{partiallimit}, for all $1\leq t<k_2$, we have \begin{equation*} \liminf_{j\rightarrow\infty}\frac{|G_{n_j+t}|}{|G_{n_j+1}|}\geq M'. \end{equation*} It follows that \begin{equation*} \liminf_{n\rightarrow\infty}|G_n|^{1/n}=L.
\end{equation*} \end{proof} \section{Future Work} There are several directions in which this research could continue. The first involves studying the growth rates of generalised Fibonacci sequences produced by other patterns of words. Here we have examined Sturmian words, but there are other types of words as well, such as words following a Thue-Morse pattern, or words generated by other morphisms. We could even try removing the condition $j,k\geq 2$ in Example \ref{entriessize}. The other direction involves trying to calculate the exact growth rate of certain random Fibonacci sequences produced from words following such patterns, and seeing how close to Viswanath's constant we can get. McLellan \cite{mclellan} used words following a periodic pattern to create a new way of calculating Viswanath's constant. By adding new patterns into her method, we may be able to calculate Viswanath's constant even more accurately. We might even be able to calculate its exact value, or at least shed some light on its nature (for example, whether it is irrational or transcendental). \section{Competing Interest Statement} The authors have no competing interests to declare. \end{document}
\begin{document} \title{Motions of a connected subgraph representing a swarm of robots inside a graph of work stations} \def\fnsymbol{footnote}{\fnsymbol{footnote}} \footnotetext[3]{Divisi{\' o}n de Matem{\' a}ticas e Ingenier{\' i}a, FES Acatl{\' a}n, Universidad Nacional Aut{\'o}noma de M{\' e}xico, Naucalpan, Mexico. {\tt [email protected], [email protected]}.} \footnotetext[2]{Coordinaci{\' o}n de Ciencias Computacionales, Instituto Nacional de Astrof{\' i}sica, {\' O}ptica y Electr{\' o}nica, Puebla, Mexico. {\tt [email protected]}.} \begin{abstract} Imagine that a swarm of robots is given; these robots must communicate with each other, and they can do so if certain conditions are met. We say that the swarm is connected if there is at least one way to send a message between each pair of robots. In order to perform some tasks, a robot can move from one work station to another, but only if the connectivity of the swarm is preserved. We model the problem via graph theory: we study connected subgraphs and how to move them inside a connected graph while preserving connectivity. We determine the group of movements completely. \end{abstract} \textbf{Keywords:} Edge-blocks, the Wilson group, motion planning, robot swarms, pebble motion \textbf{Mathematics Subject Classifications}: 05C25, 05C40, 05E18, 94C15 \section{Introduction} In this work we model the following problem. Imagine that a swarm of robots is given; these robots must communicate with each other, and they can do so if certain conditions are met. We say that the swarm is connected if there is at least one way to send a message between each pair of robots. A message between robots can be sent if either there is a direct communication between them or if there are intermediate robots which can relay the message. Some work stations in a region are also given; the number of work stations is at least the number of robots.
A robot can move from one of these work stations to another only if the connectivity of the swarm is preserved. The swarm of robots has one fixed initial position and, in order to perform some tasks, the robots move from one station to another as needed, always keeping the swarm connected. After a while the swarm of robots returns to its initial position. In order to achieve this goal it is not necessary that each robot returns to its initial position; we only care about the position of the whole swarm, so as long as each one of the initial positions is occupied and the swarm is connected, we say that it has returned to its original position. Our intent in this paper is to study the different permutations that might appear once the swarm returns to its original position. In order to do so, we must also study the possible moves that the swarm can make. All moves must meet three conditions: (1) the connectivity of the swarm must be preserved; (2) each workstation holds at most one robot at any time; (3) to avoid crashes, two robots are not allowed to swap positions. We model the problem using a graph as follows. The work stations are represented by the vertices of a graph; two vertices are connected by an edge if their corresponding workstations allow a couple of robots, one in each workstation, to communicate with each other. Notice that the initial positions of the robots induce a unique subgraph of our workstations graph and that every time a robot moves this induced subgraph might change. Since we are interested only in the moves that ensure the connectivity of the swarm, both the workstations graph and every induced subgraph must be connected. Under this model the subgraph of robots moves through the workstations graph, and we ask what the permutations of the initial subgraph look like.
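The connectivity requirement above has a direct computational reading: the occupied stations must induce a connected subgraph, which can be checked by a breadth-first search restricted to occupied vertices. The following Python sketch illustrates the condition; the workstation graph used is a hypothetical example, not one taken from this paper.

```python
from collections import deque

def swarm_is_connected(adj, occupied):
    """True iff the occupied stations induce a connected subgraph of the
    workstation graph, i.e. every pair of robots can relay a message
    through robots sitting on intermediate occupied stations."""
    occupied = set(occupied)
    if not occupied:
        return True  # an empty swarm is trivially connected
    start = next(iter(occupied))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            # only traverse edges whose both endpoints are occupied
            if w in occupied and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == occupied

# Hypothetical workstation graph: a 4-cycle 1-2-3-4 with a pendant station 5.
adj = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3, 5], 5: [4]}
print(swarm_is_connected(adj, {1, 2, 3}))  # True: edges 1-2 and 2-3 connect them
print(swarm_is_connected(adj, {1, 3, 5}))  # False: no edges among these stations
```

A move of the swarm is admissible precisely when the set of occupied stations passes this check both before and after the move.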
Related problems have been studied from a different perspective in the area of motion planning under the names of ``robot swarms'' and ``pebble motion'', for example in \cite{CKLL} and \cite{MR4036097}. A classical related problem is the well-known ``15-puzzle'', which was generalized to graphs by Wilson \cite{MR0332555}, who proved that for any nonseparable graph that is not a cycle, except for one exceptional graph, its associated group is the symmetric group unless the graph is bipartite, in which case it is the alternating group. While Wilson considered just one empty workstation, the problem was generalized to $k$ empty workstations in \cite{kornhauser1984coordinating}, where a polynomial-time algorithm that decides reachability of a target configuration was also given. In \cite{MR1822278}, optimal algorithms for specific graphs were explored. Colored versions were studied in \cite{MR2889522} and \cite{MR2672474}. In \cite{RW} it was proven that finding a shortest solution for the extended puzzle is NP-hard and therefore computationally intractable. In the following section we formally define the problem. In Section \ref{section3}, we prove that the set of possible movements is a group, and we define what we call the Wilson group (in honor of Richard M. Wilson). In Section \ref{section4}, we characterize this group when there are no ``empty workstations''. Finally, in Section \ref{section5}, we characterize the group in the case when there is at least one ``empty workstation''. \section{Definitions and basic results}\label{section2} In this section we introduce definitions, terminology and basic results. All the graphs considered in the paper are finite and simple. \begin{definition} Let $G$ be a graph, $R$ a $k$-set and $f_t$ a function such that \[f_t\colon V(G)\rightarrow R\cup\{\emptyset\}.\] We denote $f_t^{-1}(R)$ by $V_t$, and we say $f_t$ is an \emph{$R$-configuration over $G$} if $f_{t}\mid_{V_{t}}$ is bijective, where $t\in\Delta$ and $\Delta$ is a set of natural numbers.
\end{definition} The elements of $R$ are called \emph{labels} and we use $R=[k]$ or a subset of $[k]$ for simplicity, where $[k]:=\{1,2,\dots,k\}$. \begin{definition} Let $G$ be a graph and $f_t$ a $[k]$-configuration over $G$. We say $f_t$ is a \emph{connected $[k]$-configuration over $G$} if the induced subgraph $G[V_t]$ of $G$ is connected. \end{definition} \begin{figure} \caption{(Left) A graph $G$. (Right) The graph $G$ labeled with a connected $[5]$-configuration $f_t$.} \label{fig01} \end{figure} If a vertex $v$ is such that $f_t(v)=\emptyset$, we say that it is \emph{empty}. The set of empty vertices is denoted by $V_\emptyset$. Figure \ref{fig01} shows an example of a connected $[k]$-configuration $f_t$ over a graph $G$, for $k=5$. We write \[f_t=\left( \begin{array}{@{\extracolsep{-2mm}}cccccccccccc} v_1&v_2&v_3&v_4&v_5&v_6&v_7&v_8&v_9&v_{10}&v_{11}&v_{12}\\ 1&2&\emptyset&\emptyset&\emptyset&\emptyset&3&4&\emptyset&\emptyset&5&\emptyset \end{array}\hspace{-2mm}\right).\] Suppose that $f_t$ and $f_s$ are two connected $[k]$-configurations over a graph $G$. If $V_t=V_s$, for $f_t(v_j)=i$ and $f_s(v_j)=i'$ we have that $\sigma(i)=i'$ where $\sigma$ is a permutation of $[k]$. \begin{definition} Let $f_t$ and $f_s$ be two connected $[k]$-configurations over a graph $G$. If $V_t=V_s$ we say $f_t$ is \emph{similar} to $f_s$ and we denote it by $f_t \simeq f_s$. \end{definition} It is not hard to see that the relation $\simeq$ is an equivalence relation over the set of $[k]$-configurations over $G$. The equivalence class of $f_t$ is denoted by $[f_t]$. Therefore, a class $[f_t]$ may be regarded as an unlabeled connected $[k]$-configuration. \subsection{Motioning connected subgraphs} In this subsection, we establish the rules to move connected induced subgraphs while preserving connectivity. \begin{definition} Let $f_t$ be a connected $[k]$-configuration over a graph $G$.
Let $w[f_t]$ be a function such that \[w[f_t]\colon V(G)\rightarrow \{0,1\}\] where, for each $v\in V(G)$, $w[f_t](v)=1$ if $f_t(v)\in [k]$ and $w[f_t](v)=0$ otherwise. The function $w[f_t]$ is called \emph{the weight function of $f_t$}. \end{definition} Clearly, if $f_t\simeq f_s$ then $w[f_t]=w[f_s]$. We recall that a cycle of a graph is denoted by $(v_1,v_2,\dots,v_r)$ where $v_1=v_r$. However, to keep our arguments as simple as possible, we choose to use $v_1\not = v_r$ and then $v_1$ is adjacent to $v_r$. \begin{definition} Let $f_t$ be a connected $[k]$-configuration over a graph $G$. An \emph{$r$-cycle} $p$ is a cycle $p=(v_1,v_2,\dots,v_r)$ such that $w[f_t](v_i)=1$, for all $i\in [r]$. An \emph{$r$-path} $p$ is a path $p=(v_1,v_2,\dots,v_r)$ such that $w[f_t](v_i)=0$ if and only if $i=1$, that is, only the vertex $v_1$ has weight $0$. \end{definition} Now, we associate a permutation to an $r$-cycle or path. In this paper, the product of permutations means composition of functions on the left. For a detailed introduction on permutations we refer to the book of Rotman \cite{MR1307623}. \begin{definition} Let $f_t$ be a connected $[k]$-configuration over a graph $G$ and let $p$ be an $r$-cycle or a path $p=(v_1,v_2,\dots,v_r)$. An \emph{elementary $p$-movement} of $V_t$ is a permutation $\sigma_p$ such that \[\sigma_p=(v_1v_2\dots v_r)=(v_1v_2)(v_2v_3)\dots(v_{r-1}v_r).\] \end{definition} Hence, we can define configurations $f_s$ arising from a given configuration $f_t$. \begin{definition} Let $f_t$ be a connected $[k]$-configuration over a graph $G$ and $\sigma_p$ an elementary $p$-movement of $V_t$. The $[k]$-configuration $f_{t+1}=f_t\circ \sigma_p$ over $G$ is an \emph{elementary configuration movement} arising from $f_t$. \end{definition} Note that if $G$ is a tree such that $V(G)=V_t$ for a connected $[k]$-configuration then there are no elementary $p$-movements. In general, it is possible that $G[V_{t+1}]$ is a disconnected subgraph.
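The action $f_{t+1}=f_t\circ \sigma_p$ can be sketched concretely: for $\sigma_p=(v_1v_2\dots v_r)$ we have $f_{t+1}(v_i)=f_t(v_{i+1})$ (indices taken mod $r$), so along an $r$-path the labels shift one step toward the empty vertex $v_1$. A minimal Python illustration follows (the dictionary keys are hypothetical vertex names; checking that the resulting movement is valid, i.e. that $G[V_{t+1}]$ is connected, is a separate step):

```python
EMPTY = None  # stands for the empty label

def apply_movement(f, p):
    """Return f_{t+1} = f_t o sigma_p, where sigma_p = (v_1 v_2 ... v_r)
    sends v_i to v_{i+1} (indices mod r). Hence
    f_{t+1}(v_i) = f_t(v_{i+1}): labels shift one step toward v_1."""
    g = dict(f)
    r = len(p)
    for i, v in enumerate(p):
        g[v] = f[p[(i + 1) % r]]
    return g

# A 3-path p = (v3, v4, v5) acting on a configuration f0 with
# f0(v2) = 1, f0(v4) = 2, f0(v5) = 3 and every other vertex empty:
f0 = {'v1': EMPTY, 'v2': 1, 'v3': EMPTY, 'v4': 2, 'v5': 3,
      'v6': EMPTY, 'v7': EMPTY}
f1 = apply_movement(f0, ['v3', 'v4', 'v5'])
# Now f1(v3) = 2 and f1(v4) = 3, while v5 has become empty.
```

The labels travel along the path while the empty slot moves to its far end, which is exactly the shape of the valid sequences studied below.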
\begin{definition} Let $f_t$ be a connected $[k]$-configuration over a graph $G$ and $\sigma_p$ an elementary $p$-movement of $V_t$. If $f_{t+1}$ is an elementary configuration movement arising from $f_t$ such that it is a connected $[k]$-configuration, then $f_{t+1}$ is called \emph{valid}, as is $\sigma_p$. \end{definition} Figure \ref{fig02} shows two valid elementary $p$-movements, namely, $\sigma_{p}=(v_5v_8v_7v_2v_1)$ and $\sigma_{p'}=(v_7v_8v_{11})$, therefore $f_{t+1}=f_t\circ \sigma_{p}$ and $f'_{t+1}=f_t\circ \sigma_{p'}$ where \[f_{t+1}=\left( \begin{array}{@{\extracolsep{-2mm}}cccccccccccc} v_1&v_2&v_3&v_4&v_5&v_6&v_7&v_8&v_9&v_{10}&v_{11}&v_{12}\\ \emptyset&1&\emptyset&\emptyset&4&\emptyset&2&3&\emptyset&\emptyset&5&\emptyset \end{array}\hspace{-2mm}\right)\text{ and }\] \[f'_{t+1}=\left( \begin{array}{@{\extracolsep{-2mm}}cccccccccccc} v_1&v_2&v_3&v_4&v_5&v_6&v_7&v_8&v_9&v_{10}&v_{11}&v_{12}\\ 1&2&\emptyset&\emptyset&\emptyset&\emptyset&4&5&\emptyset&\emptyset&3&\emptyset \end{array}\hspace{-2mm}\right).\] \begin{figure} \caption{(Left) The graph $G$ labeled with the connected $[5]$-configuration $f_{t+1}$. (Right) The graph $G$ labeled with the connected $[5]$-configuration $f'_{t+1}$.} \label{fig02} \end{figure} An example of an invalid elementary $p$-movement is $\sigma_{p''}=(v_3v_2v_7)$, since the resulting configuration is not connected. Since we obtain a connected $[k]$-configuration $f_{t+1}$ from a connected $[k]$-configuration $f_{t}$ via a valid elementary $p$-movement, we have the following proposition. \begin{proposition}\label{prop9} A permutation $\sigma_p$ is a valid elementary $p$-movement of $V_t$ if and only if $\sigma^{-1}_p$ is a valid elementary $p$-movement of $V_{t+1}.$ \end{proposition} Consider the set of empty vertices $V_\emptyset$. For any permutation $\sigma$ in the symmetric group $S_{V_\emptyset}$ of $V_\emptyset$ we have that $f_t\circ \sigma=f_t$; therefore, $\sigma$ is a valid elementary $p$-movement. Given a connected $[k]$-configuration $f_t$, we denote the set of valid elementary $p$-movements of $V_t$ by $\Gamma[f_t]$.
Therefore $S_{V_\emptyset}\subseteq\Gamma[f_t]$. \begin{proposition} If $f_t\simeq f_s$ then $\Gamma[f_t]=\Gamma[f_s]$ and $f_t\circ \sigma\simeq f_s\circ \sigma$ for any $\sigma\in \Gamma[f_t]$. \end{proposition} By Proposition \ref{prop9} we have the following. \begin{proposition} $\sigma\in\Gamma[f_t] $ if and only if $\sigma^{-1}\in\Gamma[f_t\circ \sigma]$. \end{proposition} Next, we define a valid sequence of connected configurations. \begin{definition} Let $\{f_0,f_1,\dots,f_t\}$ be a set of connected $[k]$-configurations over a graph $G$. We say that it is a \emph{valid $f_0f_t$-sequence} if $f_s$ is a valid elementary configuration movement arising from $f_{s-1}$ for all $s\in [t]$. \end{definition} Figure \ref{fig03} shows an example of a valid $f_0f_2$-sequence of a graph $H$ for \[f_0=\left( \begin{array}{@{\extracolsep{-2mm}}ccccccc} v_1&v_2&v_3&v_4&v_5&v_6&v_7\\ \emptyset&1&\emptyset&2&3&\emptyset&\emptyset \end{array}\hspace{-2mm}\right).\] Taking a $3$-path $p_1=(v_3,v_4,v_5)$ and the permutation $\sigma_{p_1}=(v_3v_4v_5)$ we get $f_1=f_0\circ \sigma_{p_1}$, obtaining $f_1=\left( \begin{array}{@{\extracolsep{-2mm}}ccccccc} v_1&v_2&v_3&v_4&v_5&v_6&v_7\\ \emptyset&1&2&3&\emptyset&\emptyset&\emptyset \end{array}\hspace{-2mm}\right).$ Then, taking a $4$-path $p_2=(v_6,v_3,v_4,v_2)$ and the permutation $\sigma_{p_2}=(v_6v_3v_4v_2)$ we get $f_2=f_1\circ \sigma_{p_2}$, obtaining $f_2=\left( \begin{array}{@{\extracolsep{-2mm}}ccccccc} v_1&v_2&v_3&v_4&v_5&v_6&v_7\\ \emptyset&\emptyset&3&1&\emptyset&2&\emptyset \end{array}\hspace{-2mm}\right).$ \begin{figure} \caption{(Left-top) A graph $H$. (Right-top) The graph $H$ labeled with a connected $[3]$-configuration $f_0$. (Left-bottom) The graph $H$ labeled with a connected $[3]$-configuration $f_1$.
(Right-bottom) The graph $H$ labeled with a connected $[3]$-configuration $f_2$.} \label{fig03} \end{figure} Therefore, $f_2=f_0\circ \sigma_{p_1} \circ \sigma_{p_2}$, i.e., $f_2=f_0\circ \sigma$ where $\sigma=\sigma_{p_1} \circ \sigma_{p_2}$. In general, if $\{f_0,f_1,\dots,f_t\}$ is a valid $f_0f_t$-sequence over a graph $G$, then $f_i=f_{i-1}\circ \sigma_{p_i}$ for some $\sigma_{p_i}\in \Gamma[f_{i-1}]$ with $i\in [t]$, and hence $f_t=f_0\circ\sigma$ for $\sigma=\sigma_{p_1} \circ \sigma_{p_2}\circ\dots\circ\sigma_{p_t}$. \begin{definition} Let $f_0$ and $f_t$ be two connected $[k]$-configurations over a graph $G$. If there exists a valid $f_0f_t$-sequence $\{f_0,f_1,\dots,f_t\}$ for which $f_i=f_{i-1}\circ \sigma_{p_i}$ for some $\sigma_{p_i}\in \Gamma[f_{i-1}]$ with $i\in [t]$, the permutation $\sigma=\sigma_{p_1} \circ \sigma_{p_2}\circ\dots\circ\sigma_{p_t}$ is called a \emph{valid movement} of $V_0$; note that $f_t=f_0\circ\sigma$. \end{definition} In a natural way we have the following two propositions. \begin{proposition}\label{prop14} A permutation $\sigma$ is a valid movement of $V_0$ if and only if $\sigma^{-1}$ is a valid movement of $V_{t}.$ \end{proposition} \begin{proposition}\label{prop15} If $f_t\simeq f_s$ then $f_t\circ \sigma\simeq f_s\circ \sigma$ for any valid movement $\sigma$ of $V_t$. \end{proposition} To end this section, we have the following theorem about the classes $[f_t]$. \begin{theorem}\label{teo16} Let $[f_t]$ and $[f_s]$ be two unlabeled connected $[k]$-configurations over a graph $G$. Then there exists a valid $f_tf_{t+r}$-sequence $\{f_t,f_{t+1},\dots,f_{t+r}\}$ for which $f_{t+r}\simeq f_s$. \end{theorem} \begin{proof} Let $T_t$ and $T_s$ be spanning trees of $G[V_t]$ and $G[V_s]$, respectively, and let $P=(x_0,x_1,\dots,x_r)$ be a $T_tT_s$-geodesic. If the length $r$ of $P$ is positive, then $w[f_t](x_1)=0$.
Take a leaf $y$ of $T_t$ contained in $V_t\setminus \{x_0\}$ and let $p_1=(x_1,x_0,\dots,y',y)$ be a path containing the $x_0y$-path of $T_t$. Therefore, $\sigma_{p_1}$ is a valid elementary $p_1$-movement of $V_t$. The tree $T_{t+1}$ with vertex set $V_{t+1}=V_t\cup\{x_1\}\setminus\{y\}$ and edge set $E(T_t)\cup\{x_1x_0\}\setminus\{yy'\}$ is a spanning tree of $G[V_{t+1}]$ with $P'=(x_1,\dots,x_r)$ a $T_{t+1}T_s$-geodesic shorter than $P$. Now, we can assume that $r=0$. Consider a maximal component $T$ of $G[V_t\cap V_s]$ and take spanning trees $T_t$ and $T_s$ of $G[V_t]$ and $G[V_s]$, respectively, such that $T$ is a subgraph of both. If $T_t=T_s$, then $V_t=V_s$ and hence $f_t\simeq f_s$. We can assume that there exists a leaf $y$ of $T_t$ contained in $V_t\setminus V(T)$, and then there exists a vertex $x$ in $V_s\setminus V(T)$ such that $xy''$ is an edge of $T_s$ and $y''$ is a vertex of $T$. Let $q_1=(x,y'',\dots,y',y)$ be the path in $T_t\cup T_s$. Therefore, $\sigma_{q_1}$ is a valid elementary $q_1$-movement of $V_t$. The tree $T_{t+1}$ with vertex set $V_{t+1}=V_t\cup\{x\}\setminus\{y\}$ and edge set $E(T_t)\cup\{xy''\}\setminus\{yy'\}$ is a spanning tree of $G[V_{t+1}]$, and the maximal component $T'$ of $G[V_{t+1}\cap V_s]$ has larger order than $T$. Since the graph is finite, the result follows. \end{proof} \begin{corollary} \label{cor17} Let $[f_t]$ and $[f_s]$ be two unlabeled connected $[k]$-configurations over a graph $G$. Then there exists a valid movement $\sigma$ from $V_t$ to $V_s$. \end{corollary} \section{The Wilson group}\label{section3} Given two connected $[k]$-configurations over a graph $G$, by Theorem \ref{teo16} and Corollary \ref{cor17}, we know that we can move the first one to the second one via connected subgraphs. In this section, we prove a similar result, but now taking the labels into account. First, we define the following set $\Phi$ regarding valid movements.
\begin{definition} Let $f_t$ be a connected $[k]$-configuration over a graph $G$. The \emph{Wilson set} $\Phi[f_t]$ is the set of valid movements $\sigma$ of $V_t$ such that $f_t=f_t\circ \sigma$. \end{definition} Clearly, the symmetric group $S_{V_\emptyset}$ of $V_\emptyset$ is a subset of $\Phi[f_t]$. The following proposition establishes that the Wilson set is independent of the labels of $f_t$. \begin{proposition} If $f_t\simeq f_s$ then $\Phi[f_t]=\Phi[f_s]$. In particular, if $\sigma \in \Phi[f_t]$ then $\Phi[f_t]=\Phi[f_t\circ\sigma]$. \end{proposition} \begin{proof} Let $\sigma\in\Phi[f_t]$. Since $\sigma$ is a valid movement of $V_t$, by Proposition \ref{prop15}, we have $f_s\simeq f_t\simeq f_t\circ\sigma \simeq f_s\circ \sigma$, i.e., $\sigma\in\Phi[f_s]$ and then $\Phi[f_t]\subseteq\Phi[f_s]$. Analogously, $\Phi[f_s]\subseteq\Phi[f_t]$ and then $\Phi[f_t]=\Phi[f_s]$. Now, in particular, if $f_s=f_t\circ\sigma$ for some $\sigma\in \Phi[f_t]$ then $\Phi[f_t]=\Phi[f_t\circ\sigma]$. \end{proof} \begin{proposition}\label{proposition20} Let $\sigma$ be a valid movement of $V_t$. If $\phi\in\Phi[f_t\circ \sigma]$ then $\sigma\circ\phi\circ\sigma^{-1}\in\Phi[f_t]$. \end{proposition} \begin{proof} Since $\sigma$ is a valid movement of $V_t$, by Proposition \ref{prop14}, $\sigma^{-1}$ is a valid movement of $\sigma(V_t)$. On the other hand, since $\phi\in\Phi[f_t\circ \sigma]$, we have $f_t\circ\sigma\simeq f_t\circ\sigma\circ\phi$. By Proposition \ref{prop15}, composing with the valid movement $\sigma^{-1}$ yields $f_t=f_t\circ\sigma\circ\sigma^{-1}\simeq f_t\circ\sigma\circ\phi\circ\sigma^{-1}$. Because $\sigma\circ\phi\circ\sigma^{-1}$ is a valid movement of $V_t$, we have that $\sigma\circ\phi\circ\sigma^{-1}\in\Phi[f_t]$. \end{proof} Next, we prove that the Wilson set is, in fact, a group. \begin{theorem} The pair $(\Phi[f_t],\circ)$ is a group.
\end{theorem} \begin{proof} Let $\sigma_1,\sigma_2\in \Phi[f_t]$; then $f_t\simeq f_t\circ\sigma_1$ and $\Phi[f_t]=\Phi[f_t\circ\sigma_1]$, hence $\sigma_2\in \Phi[f_t\circ\sigma_1]$ and $f_t\circ\sigma_1\simeq f_t\circ\sigma_1\circ\sigma_2$. Then $f_t\simeq f_t\circ\sigma_1\circ\sigma_2$ and $\sigma_1\circ\sigma_2\in \Phi[f_t]$. Therefore, $\Phi[f_t]$ is closed under the operation $\circ$. Now, it is clear that $f_t\simeq f_t\circ (1)$, and then the identity function is an element of $\Phi[f_t]$. It is also clear that the operation $\circ$ is associative. Finally, let $\sigma\in \Phi[f_t]$. By Proposition \ref{prop14}, $\sigma^{-1}$ is a valid movement of $V_t$ and, by Proposition \ref{prop15}, we have that $\sigma^{-1}\in \Phi[f_t]$. \end{proof} \begin{theorem} Let $f_0$ and $f_t$ be connected $[k]$-configurations and $\sigma$ a valid movement of $V_0$ with $f_s=f_0\circ\sigma$ and $f_t\simeq f_s$. Then there exists a valid $f_0f_t$-sequence if and only if $f_t=f_s\circ\phi$ for some $\phi\in\Phi[f_t]$. \end{theorem} \begin{proof} Assume that there exists a valid $f_0f_t$-sequence and that $\sigma_1$ is a valid movement from $V_0$ to $V_t$. By Proposition \ref{prop14}, $\sigma_1^{-1}$ is a valid movement from $V_t$ to $V_0$, i.e., $f_0=f_t\circ \sigma_1^{-1}$. Since $\sigma$ is a valid movement from $V_0$ to $V_s$, $\sigma^{-1}$ is a valid movement from $V_s$ to $V_0$, i.e., $f_0=f_s\circ \sigma^{-1}$; since $f_t\simeq f_s$, then $f_t\circ\sigma^{-1}\simeq f_s\circ\sigma^{-1}=f_0$. Therefore, $f_0=f_t\circ \sigma_1^{-1}\simeq f_t\circ \sigma^{-1}$ and, by Proposition \ref{prop15}, $f_t\simeq f_t\circ \sigma^{-1} \circ \sigma_1 $, and for $\phi=\sigma^{-1} \circ \sigma_1$ we have that $f_t=f_s\circ\phi$ with $\phi\in\Phi[f_t]$.
Now, we verify the converse: let $f_t=f_s\circ\phi$ for some $\phi\in\Phi[f_t]$. Then $\phi$ is a valid movement from $V_s$ to $V_t$ and $\sigma$ is a valid movement from $V_0$ to $V_s$; therefore $\sigma\circ\phi$ is a valid movement from $V_0$ to $V_t$ and the theorem follows. \end{proof} The existence of the valid movement $\sigma$ is guaranteed by Theorem \ref{teo16} and Corollary \ref{cor17}; hence, in order to verify the existence of a valid $f_0f_t$-sequence, we only need to find some $\phi\in\Phi[f_t]$ for which $f_t=f_s\circ\phi$. \subsection{Some Wilson groups} Therefore, we need to know the structure of the Wilson group of a given subgraph. We begin with some particular configurations. \begin{theorem}\label{teo23} Let $f_t$ be a connected $[n]$-configuration over a graph $G$ of order $n$. \begin{enumerate} \item If $G$ is a cycle, then $\Phi[f_t]$ is the cyclic group $\mathbb{Z}_n$. \item If $G$ is a tree, then $\Phi[f_t]$ is the trivial group $\{(1)_{V}\}$. \end{enumerate} \end{theorem} \begin{proof} First, since $V_\emptyset$ is empty, the only valid elementary movements are the ones given by cycles, for instance $\sigma=(v_1v_2\dots v_n)$, and then $\Phi[f_t]=\left\langle \sigma\right\rangle $, which is $\mathbb{Z}_n$. Second, there are no valid elementary movements different from the identity permutation, and the result follows. \end{proof} \begin{theorem}\label{teo24} Let $f_t$ be a connected $[k]$-configuration over a graph $G$ of order $n>k$. If $G$ is a cycle or a path then $\Phi[f_t]$ is $\{(1)_{V_t}\}\times S_{V_\emptyset}$. \end{theorem} \begin{proof} The valid movements are given by $k$-paths only in the two opposite directions, namely $\sigma_{p_1}$ and $\sigma_{p_2}$, see Figure \ref{fig04}.
\begin{figure} \caption{The two possible directions of a $k$-path, for $k<n$, in an $n$-cycle or an $n$-path.} \label{fig04} \end{figure} The labels of $V_t$ are invariant under these valid movements, and any permutation of $V_\emptyset$ also leaves the labels of $V_t$ invariant; hence $\Phi[f_t]=\{(1)_{V_t}\}\times S_{V_\emptyset}$. \end{proof} \section{Saturated configurations}\label{section4} In this section, we study the configurations without empty vertices, that is, those in which each vertex has weight 1. \begin{definition} Let $G$ be a connected graph of order $n$. A connected $[n]$-configuration is called \emph{saturated}. \end{definition} Theorem \ref{teo23} states a result concerning saturated configurations, namely, when $G$ is a cycle or a tree. Note that the elementary movements are only given by cycles, that is, a vertex can be moved only if it lies in a cycle. \begin{corollary} Let $f_t$ be a connected $[n]$-configuration over a unicyclic connected graph $G$ of order $n$ for which its cycle has order $k$. Then $\Phi[f_t]$ is the cyclic group $\mathbb{Z}_k$. \end{corollary} The following definitions are about edge-connectivity and edge-blocks. \begin{definition} A non-empty bridgeless connected subgraph $\textbf{B}$ of $G$ is called an \emph{edge-block} of $G$ if $\textbf{B}$ is maximal. \end{definition} Note that the Wilson group induces a (left) group action $\varphi$ on the set of vertices, $\varphi\colon \Phi[f_t] \times V(G) \rightarrow V(G) $ where $\varphi(\sigma,v)=\sigma(v)$; therefore we have Theorem \ref{teo30}. \begin{theorem}\label{teo30} If $f_t$ is a saturated configuration over $G$ and $v\in V(\textbf{B})$ with $\textbf{B}$ an edge-block of $G$, then the orbit $\Phi[f_t]v$ of $v$ is $V(\textbf{B})$. \end{theorem} \begin{proof} First, note that an edge-block could be separable if it contains cut-vertices. Now, let $u$ be a vertex of $V(\textbf{B})$.
By Menger's Theorem, there exist two edge-disjoint $uv$-paths such that they internally share only cut-vertices $v_1,\dots,v_{r-1}$. The union of these two paths is a union of cycles $p_1,\dots,p_r$ where $v=v_0$ is a vertex of $p_1$ and $u=v_r$ is a vertex of $p_r$. The permutation $\sigma=\sigma^{a_r}_{p_r}\circ\dots\circ\sigma^{a_1}_{p_1}$ for which $\sigma^{a_i}_{p_i}(v_{i-1})=v_i$ for $i\in\{1,\dots,r\}$ maps $v$ to $u$; therefore $u\in \Phi[f_t]v$ and $V(\textbf{B})\subseteq \Phi[f_t]v$. Finally, because a vertex $x$ outside of $V(\textbf{B})$ is connected to $V(\textbf{B})$ via a bridge, it is not possible to move $x$ into $V(\textbf{B})$. Therefore $V(\textbf{B})=\Phi[f_t]v$. \end{proof} In consequence, the Wilson group of a saturated configuration of a graph is the product of the Wilson groups arising from each edge-block. Hence, we analyze the edge-blocks to determine the Wilson group of a saturated configuration. We recall that $S_X$ denotes the symmetric group over $X$, while $A_X$ denotes the alternating group over $X$. \begin{lemma}\label{lemma31} Let $G=(V,E)$ be a graph which is a cycle $C$ with a chord with some subdivisions, and let $f_t$ be a saturated configuration of $G$. Then $\Phi[f_t]=S_V$. \end{lemma} \begin{proof} The cycle $C$ can be interpreted as the union of the cycles $C'=(v_1\dots v_rw_m\dots w_1)$ and $C''=(w_1\dots w_mu_s\dots u_1)$, where the chord with its subdivisions is the path $T=(w_1\dots w_m)$ and the cycle $C$ is $(u_1\dots u_sw_mv_r\dots v_1w_1)$, see Figure \ref{fig05}. \begin{figure} \caption{Cycles of Lemma \ref{lemma31}.} \label{fig05} \end{figure} Since the permutations $\sigma_{C'}$ and $\sigma_{C''}$ turn clockwise while the permutation $\sigma_C$ turns counterclockwise, we have that \[\sigma_{C}\circ\sigma_{C''}\circ\sigma_{C'}=(v_1w_1).\] We call this transposition $\sigma'$. Now, we show $(vu)\in \Phi[f_t]$ for any $v,u\in V(G)$ with $v\not=u$. Without loss of generality, $v=w_1$ and $u\in V(C'')$.
For some $i$ we have $\sigma^i_{C''}(u)=v$. Suppose $u\not\in\{w_2,\dots,w_m\}$; then the permutation $\sigma=\sigma^i_{C''}\circ\sigma^{-1}_{C'}$ is such that $\sigma(v)=u$ with $v\sim u$. If $u\in\{w_2,\dots,w_m\}$, then $\sigma=\sigma^{i+1}_{C''}\circ\sigma^{-1}_{C'}$ is such that $\sigma(v)=u$ with $v\sim u$. Hence, $(vu)=\sigma^{-1}\circ \sigma'\circ \sigma$ and $(vu)\in \Phi[f_t]$, and then $\Phi[f_t]=S_V$. \end{proof} A permutation is an element of $A_X$ if and only if it is a product of an even number of transpositions in $X$. Since every 3-cycle $(ijk)$ is the product of two transpositions $(ij)(ik)$, and the product of two transpositions $(ij)(kl)$ is the product of two 3-cycles $(ikj)(kjl)$, the alternating group is generated by 3-cycles. \begin{lemma}\label{lemma32} Let $G=(V,E)$ be a graph which is the union of two cycles $C$ and $C'$ sharing exactly one vertex, which is a cut vertex, and let $f_t$ be a saturated configuration of $G$. Then $\Phi[f_t]=S_V$ if $C$ or $C'$ is even; otherwise $\Phi[f_t]=A_V$. \end{lemma} \begin{proof} First, we prove $A_V\subseteq \Phi[f_t]$, that is, $(wvu)$ is an element of $\Phi[f_t]$ for any $w,v,u\in V$. We can assume $w=w_1$ since, for some $j$, $\sigma=\sigma^j_{C}$ or $\sigma=\sigma^j_{C'}$ is such that $\sigma(w)=w_1$, and then $(wvu)=\sigma^{-1}\circ(w_1vu)\circ\sigma$. Second, let $C=(w_1v_1\dots v_r)$ and $C'=(w_1u_1\dots u_s)$, see Figure \ref{fig06}. Therefore $\sigma^{-1}_{C'}\circ\sigma^{-1}_{C}\circ\sigma_{C'}\circ\sigma_C=(w_1v_1u_1)$. We call this 3-cycle $\sigma'$. Then, we divide the proof into three cases: \begin{figure} \caption{Cycles of Lemma \ref{lemma32}.} \label{fig06} \end{figure} \begin{enumerate} \item If $v=v_1$ and $u\in V(C')$, for some $j$, $\sigma=\sigma^j_{C'}$ is such that $\sigma(u)=w_1$, then $(w_1vu)=\sigma^{-1}\circ\sigma^{-1}_C\circ\sigma\circ\sigma_C$.
\item If $v=v_1$ and $u\in V(C)$, for some $j$, $\sigma=\sigma^j_{C}$ is such that $\sigma(u)=w_1$, then $(w_1vu)=\sigma^{-1}\circ\sigma_{C'}\circ\sigma'\circ\sigma^{-1}_{C'}\circ\sigma$. \item Similar to (1), but $v\in V(C)\setminus\{v_1\}$. If $u=u_1$ then the case is analogous to (1) by symmetry. If $u\not=u_1$ then $\sigma=\sigma^{-1}_{C'}\circ\sigma^{j}\circ\sigma_{C'}$, for $\sigma^j(v)=v_1$ and some $j$, yields a configuration similar to case (1). \end{enumerate} Now, if $C$ and $C'$ are odd cycles, then $\sigma_C$ and $\sigma_{C'}$ are even permutations, therefore $A_V= \Phi[f_t]$. On the other hand, if $C$ or $C'$ is even, then the corresponding permutation is odd, therefore $S_V= \Phi[f_t]$. \end{proof} In order to prove our main results, we define the following concept. \begin{definition} An edge-block of a graph is called \emph{weak} if it is a cycle or if every two cycles sharing vertices have exactly one vertex in common. \end{definition} \begin{figure} \caption{The edge-blocks $\textbf{B}$.} \label{fig07} \end{figure} \begin{lemma}\label{lemma34} Let $\textbf{B}$ be an edge-block that is not a cycle and let $f_t$ be a saturated configuration. If $\textbf{B}$ is weak and every cycle is odd, then $\Phi[f_t]=A_V$; otherwise $\Phi[f_t]=S_V$. \end{lemma} \begin{proof} Let $\textbf{B}$ be as defined above and suppose that it is weak and every cycle is odd. We prove that $(uvw)\in \Phi[f_t]$ for any $u,v,w\in V(\textbf{B})$ as follows: $u,v,w$ are in the same orbit and there are (at least) two cycles $C$ and $C'$ with exactly one cut vertex in common. We can send all of them there via a permutation $\sigma$; by Lemma \ref{lemma32}, we can realize this 3-cycle there and then, via $\sigma^{-1}$, we obtain the desired 3-cycle. Because of the parity of the odd cycles, $\Phi[f_t]=A_V$.
On the other hand, if $\textbf{B}$ contains an even cycle, the corresponding cyclic movement is an odd permutation and hence $\Phi[f_t]=S_V$; and if $\textbf{B}$ is not weak, we can assume that $C$ and $C'$ have in common a path with more than one vertex, so any transposition $(uv)$ can be obtained via a permutation $\sigma$ sending $u$ and $v$ to the cycles $C$ and $C'$: by Lemma \ref{lemma31}, we can realize this transposition there and finally, via $\sigma^{-1}$, we obtain the desired transposition, getting that $\Phi[f_t]=S_V$. \end{proof} Now, we can describe the Wilson group of a saturated configuration. \begin{theorem} Let $f_t$ be a saturated configuration of a graph $G$; then \[\Phi[f_t]=\prod_{i=1}^{r}\Gamma_{i}\] where each $\Gamma_i$ is a cyclic group, an alternating group or a symmetric group. \end{theorem} \begin{proof} Let $\textbf{B}_1,\dots,\textbf{B}_r$ be the edge-blocks of $G$. By Theorem \ref{teo30}, the non-trivial orbits are $V(\textbf{B}_i)$, for each $i\in\{1,\dots,r\}$. Hence, for the set $V(\textbf{B}_i)$, its Wilson group is cyclic by Theorem \ref{teo23}, or it is an alternating group or a symmetric group by Lemma \ref{lemma34}, and the result follows. \end{proof} \section{Non-saturated configurations}\label{section5} In this section, we study the Wilson group only for non-saturated configurations. The main difference between saturated and non-saturated configurations is the existence of valid movements given by paths; Theorem \ref{teo24} is an example of this fact. To begin with, we analyze the behavior of the complete bipartite graph $K_{1,3}$, also called the 3-star, which is a relevant graph for non-saturated configurations. Let $G$ be a graph containing at least one 3-star subgraph and $f_t$ a connected configuration over $G$ such that \[f_t=\left( \begin{array}{@{\extracolsep{-2mm}}cccccc} \dots&v&v_1&u_1&w_1&\dots\\ \dots&1&2&3&4&\dots \end{array}\hspace{-2mm}\right)\] where $v$ is a vertex of degree at least 3 and $v_1$, $u_1$ and $w_1$ are adjacent to $v$.
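The constructions that follow chain many cyclic movements together, so it is worth fixing the product convention computationally. The following Python sketch (vertices as single-character strings; all names illustrative) verifies the two identities stated above for generating the alternating group, under the convention that the left factor of a product acts first.

```python
# Sanity check of the permutation identities used to generate 3-cycles and
# transpositions. Products are taken with the left factor acting first, which
# is the convention under which the identity (ij)(ik) = (ijk) holds.

def cycle_perm(vs):
    """The cyclic permutation (v_1 v_2 ... v_r) as a dict."""
    return {v: vs[(i + 1) % len(vs)] for i, v in enumerate(vs)}

def product(*perms):
    """Left-first product: the leftmost permutation is applied first."""
    keys = set().union(*perms)
    result = {}
    for x in keys:
        y = x
        for p in perms:
            y = p.get(y, y)
        result[x] = y
    return result

# Every 3-cycle is a product of two transpositions: (ij)(ik) = (ijk).
assert product(cycle_perm("ij"), cycle_perm("ik")) == cycle_perm("ijk")

# A product of two disjoint transpositions is a product of two 3-cycles:
# (ikj)(kjl) = (ij)(kl).
lhs = product(cycle_perm("ikj"), cycle_perm("kjl"))
rhs = product(cycle_perm("ij"), cycle_perm("kl"))
assert lhs == rhs
print("identities verified")
```

The same `product` helper composes the movements $\sigma_1,\sigma_2,\dots$ appearing in the proofs below, since each of them is a cycle on a subset of the vertices.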
Figure \ref{fig08} shows the sequence of movements to generate the transposition $(vv_1)$, assuming some empty vertices. \begin{figure} \caption{Movements over a 3-star.} \label{fig08} \end{figure} Before verifying the details of the movements that generate such a transposition, we give the following definition. \begin{definition} Let $G$ be a graph and $uv$ an edge of $G$. A vertex $w$ is \emph{in the direction of $u$ with respect to $v$} if there exists a $wv$-path containing $u$. \end{definition} Therefore, the set of vertices of the component of $G-v$ containing $u$ is the set of vertices in the direction of $u$ with respect to $v$. If $v$ is not a cut-vertex, each vertex is in the direction of $u$ with respect to $v$, for every $u$ in $N(v)$. The set of empty vertices in the direction of $u$ with respect to $v$ is denoted by $B_v[f_t](u)$ and its cardinality by $b_v[f_t](u)$ (or simply by $B_v(u)$ and $b_v(u)$, respectively, when $f_t$ is understood), see Figure \ref{fig09}. \begin{figure} \caption{A set $B_v(u)$.} \label{fig09} \end{figure} \begin{proposition} Let $G$ be a connected graph. If $G[B_v(u)]$ is a tree, then there exists a spanning tree of $G$ containing $G[B_v(u)]$. \end{proposition} \begin{lemma}\label{lemma38} Let $f_t$ be a connected $[k]$-configuration over a graph $G$ and $(w,v,u)$ a path such that $u,v\in V_t$. If $b_v(w)>0$ then there exists a cycle or a path $p$ containing $(w,v,u)$ for which $\sigma_p$ is in $\Gamma [f_t]$. \end{lemma} \begin{proof} Since $b_v(w)>0$, there exists an empty vertex in the direction of $w$ with respect to $v$. Therefore, there is an $r$-path $p_1=(w_r,\dots,w_1=w,v)$ with $w_r$ an empty vertex. On the one hand, if $u=w_i$ for some $i\in [r-1]$, then for the cycle $p=(w_1=w,v,u=w_i,w_{i-1},\dots,w_2)$, $\sigma_p$ is in $\Gamma [f_t]$.
On the other hand, if $u\not =w_i$ for all $i\in [r-1]$, let $T$ be a spanning tree of $G[V_t]$ containing the path $p=(w_{r-1},\dots,w_1=w,v,u=u_0,\dots,u_s)$, where $u_s$ is a leaf of $T$; then $\sigma_p$ is in $\Gamma [f_t]$, see Figure \ref{fig10}. \begin{figure} \caption{The spanning tree in the proof of Lemma \ref{lemma38}.} \label{fig10} \end{figure} \end{proof} \begin{theorem}\label{teo39} Let $K_{1,3}$ be a star subgraph of $G$ with partition $(\{v\},\{u_1,v_1,w_1\})$ such that $b_v(u_1),b_v(w_1)>0$. If $v,v_1\in V_t$ and $vv_1$ is a bridge, then $(vv_1)\in \Phi[f_t]$. \end{theorem} \begin{proof} Let $f_t$ be a connected $[k]$-configuration over $G$. For the paths $(u_1,v,v_1)$ and $(w_1,v,v_1)$, by Lemma \ref{lemma38}, there exist the permutations \[\sigma_1=(u_s\dots u_{1}vv_1\dots v_r)\textrm{ and }\sigma_1'=(w_m\dots w_1vv_1\dots v_r).\] Since $vv_1$ is a bridge, we can assume that the corresponding paths of the permutations share the leaf $v_r$. If $u_s=w_m$ then the vertex $v$ is in a cycle $(v,u_1,\dots,u_i=w_j,w_1)$, see Figure \ref{fig11}. Let $\sigma_2$ and $\sigma_3$ be the permutations \[\sigma_2=(vu_1\dots u_i\dots w_1)\textrm{ and }\sigma_3=(v_r\dots v_1vw_1\dots u_i\dots u_s).\] Then $\sigma=\sigma_3\circ\sigma_2\circ\sigma_1=(vv_1)\in\Phi[f_t]$. \begin{figure} \caption{Two paths sharing both leaves in the proof of Theorem \ref{teo39}.} \label{fig11} \end{figure} If $u_s\not=w_m$, consider the following permutations, see Figure \ref{fig12}: \[\sigma_5=(w_m\dots w_1vu_1\dots u_s) \textrm{ and } \sigma_6=(v_r\dots v_1vw_1\dots w_m) \textrm{ and } \sigma_7=(u_sw_m).\] Therefore $\sigma=\sigma_7\circ\sigma_6\circ\sigma_5\circ\sigma_1 \in \Phi[f_t]$ because $\sigma_7\in \Phi[f_t]$. \begin{figure} \caption{Two paths sharing only a leaf in the proof of Theorem \ref{teo39}.} \label{fig12} \end{figure} \end{proof} In order to analyze the Wilson group using the previous ideas, we give the following definition.
\begin{definition} Let $x,y\in V_t$ and let $v$ be a vertex of degree 3 or more. We say that $v$ is an \emph{exchange-vertex} for the pair $\{x,y\}$ if there is a vertex $v_1$ adjacent to $v$ and a valid permutation $\sigma$ such that $\sigma(v)=x$, $\sigma(v_1)=y$ and $f_t\circ\sigma$ satisfies the hypotheses of Theorem \ref{teo39}. Moreover, if $x\sim y$ we say $v$ is an exchange-vertex for the edge $xy$. \end{definition} Note that if $v$ is an exchange-vertex of $\{x,y\}$ then $(xy)\in\Phi[f_t]$ by Proposition \ref{proposition20}; indeed, if $(vv_1)\in\Phi[f_t\circ\sigma]$ then $(xy)=\sigma\circ(vv_1)\circ\sigma^{-1}\in\Phi[f_t]$. \begin{lemma} If there exists an exchange-vertex for the pair $\{x,y\}$ then $(xy)\in\Phi[f_t]$. \end{lemma} \begin{theorem}\label{teo40} If $(u_1,u_2,\dots,u_m)$ is a path such that for each edge there is an exchange-vertex, then $(u_1u_m)\in\Phi[f_t]$. \end{theorem} \begin{proof} Each transposition $(u_iu_{i+1})$ with $i\in [m-1]$ is in $\Phi[f_t]$, and therefore \[(u_1u_2)\circ(u_2u_3)\circ\dots\circ (u_{m-1}u_m)\circ(u_{m-1}u_{m-2})\circ\dots\circ (u_2u_1)=(u_1u_m)\in\Phi[f_t] .\] \end{proof} Now, we analyze the edge-blocks of a graph $G$ when it is not a cycle and $f_t$ is a non-saturated configuration. \begin{lemma}\label{lemma43} If $\textbf{B}$ is an edge-block of $G[V_t]$ for which every vertex has weight 1, then for each edge $xy\in E(\textbf{B})$ there exists an exchange-vertex for $xy$. \end{lemma} \begin{proof} Since $f_t$ is a non-saturated configuration, there is a vertex $v_1\not\in V(\textbf{B})$ adjacent to some vertex $v\in V(\textbf{B})$ such that $b_v(v_1)>0$. Since $\textbf{B}$ is a bridgeless subgraph, let $u,w$ be vertices adjacent to $v$ in $\textbf{B}$ lying on a cycle there. Note that $v$ has degree at least 3. Let $xy$ be an edge of $\textbf{B}$. Without loss of generality, we can assume that $(xu)\circ(yv)\in\Phi[f_t]$ by Theorem \ref{teo23} and Lemma \ref{lemma34}. Let $f_s=f_t\circ(xu)\circ(yv)$.
Note that the path $(v_1,v,u)$ satisfies the hypotheses of Lemma \ref{lemma38}. Hence, there exists a cycle or a path $p$ containing $(v_1,v,u)$ for which $\sigma_p\in\Gamma[f_s]$ with $\sigma_p(v_1)=v$ and $\sigma_p(v)=u$. Moreover, if $vv_1$ is not a bridge, we may disregard the edges incident to $v$ that are not in $E(\textbf{B})$; in any case, $b_v[f_s\circ\sigma_p](w)>0$ and $b_v[f_s\circ\sigma_p](u)>0$. By Theorem \ref{teo39}, the vertex $v$ is an exchange-vertex for $xy$. \begin{figure} \caption{The vertex $v$ is an exchange-vertex of $xy$.} \label{fig13} \end{figure} \end{proof} The previous result can be established for vertex-blocks instead of edge-blocks: if $\textbf{B}$ is a vertex-block of $G[V_t]$ for which every vertex has weight 1, consider an edge $vv_1\in E(\textbf{B})$. If $vv_1$ is not a bridge, $vv_1$ is in an edge-block and there exists an exchange-vertex for $vv_1$. And if $vv_1$ is a bridge satisfying the hypotheses of Theorem \ref{teo39}, then $v$ is an exchange-vertex for $vv_1$. \begin{theorem}\label{teo44} Let $xy$ be an edge for which both vertices have weight 1. If $xy$ is not a bridge and $G$ is not a cycle, then there exists an exchange-vertex for $xy$. \end{theorem} \begin{proof} If $xy$ belongs to an edge-block of $G[V_t]$, by Lemma \ref{lemma43}, the result follows. Now assume otherwise. By hypothesis, $xy$ belongs to a cycle $C=(u_1=x,u_2=y,\dots,u_s)$ of $G$ and $u_i$ has degree at least 3 for some $i\in [s]$. Via a valid movement $\sigma$, we can move the edge $xy$ to the edge $u_iu_{i-1}$, see Figure \ref{fig14}, and by Theorem \ref{teo39}, $u_i=\sigma(y)$ (or $u_i=\sigma(x)$) is an exchange-vertex of $\sigma(x)\sigma(y)$; then $u_i$ is an exchange-vertex of $xy$.
\begin{figure} \caption{The vertex $v$ is an exchange-vertex of $xy$.} \label{fig14} \end{figure} \end{proof} \begin{theorem}\label{teo43} If $u$ and $v$ are two different vertices of an edge-block $\textbf{B}$ of a graph $G$, both of weight 1, and $G$ is not a cycle, then $(uv)\in\Phi[f_t]$. \end{theorem} \begin{proof} Since there exists a path $P$ contained in $\textbf{B}$ for which every vertex is in $G[V_t]$, by Lemma \ref{lemma43} and Theorem \ref{teo44} the result follows. \end{proof} \begin{corollary} If $G$ is an edge-block, $G$ is not a cycle and $f_t$ is a non-saturated configuration, then $\Phi[f_t]=S_{V_t}$. \end{corollary} Before analyzing the bridges of a graph with non-saturated configurations, consider the set of vertices $C_v$ for which $v$ is an exchange-vertex, i.e., \[C_v=\{x,y\in V_t\colon v\textrm{ is an exchange-vertex for the pair } \{x,y\} \}.\] In order to see the relation between $C_v$ and the orbits of $\Phi[f_t]$ we have the following definition and results. \begin{corollary} Let $\textbf{B}$ be an edge-block of $G$. If $v$ is an exchange-vertex for some edge of $\textbf{B}$, then $v$ is an exchange-vertex for each edge of $\textbf{B}$. \end{corollary} Since the set of bridges and edge-blocks induces a partition of the set of edges, we can define the following graph. \begin{definition} Given a graph $G$, the graph $G_\textbf{B}$ obtained from $G$ by contracting each edge-block into a vertex is called the \emph{edge-block graph}. \end{definition} \begin{proposition} The edge-block graph $G_\textbf{B}$ of a connected graph $G$ is a tree. \end{proposition} We denote a vertex of $G_\textbf{B}$ as $[v]$ where $v$ is any vertex of the corresponding edge-block of $G$, i.e., $[v]$ is the equivalence class of vertices of an edge-block of $G$. If the equivalence class is trivial, we use $v$ instead of $[v]$.
For example, if $G$ is a unicyclic graph, the cycle of $G$ is denoted by $[v]$ in $G_\textbf{B}$, but the remaining vertices are denoted $u$ instead of $[u]$ in the edge-block graph of $G$. \begin{proposition} The edge $[u][v]$ of $G_\textbf{B}$ is a bridge if and only if $u_1v_1$ is a bridge of $G$ for some $u_1\in [u]$ and $v_1\in [v]$. \end{proposition} \begin{definition} Let $G_\textbf{B}$ be the edge-block graph of $G$ and $f_t$ a connected $[k]$-configuration. We define a weight function $\omega[f_t] $ such that \[\omega[f_t]:V(G_\textbf{B})\rightarrow \mathbb{N}\] and $\omega[f_t]([v])$ is the number of empty vertices in the corresponding edge-block of $v$ in $G$. \end{definition} For example, if $f_t$ is a saturated connected configuration over $G$, the weight function of $G_\textbf{B}$ is zero for every vertex. Let $[u][v]$ be an edge of $G_\textbf{B}$ and $u_1v_1$ the bridge such that $u_1\in [u]$ and $v_1\in [v]$. Recall that the set of empty vertices in the direction of $u_1$ with respect to $v_1$ is $B_{v_1}(u_1)$, of cardinality $b_{v_1}(u_1)$. We denote by $\beta_{[v]}([u])$ the sum of $\omega[f_t]([x])$ for all $[x]$ in the component of $G_\textbf{B}-[v]$ containing $[u]$. \begin{lemma}\label{lemma50} Let $[u][v]$ be an edge of $G_\textbf{B}$ where $[v]$ is not a trivial equivalence class. If $\beta_{[u]}([v])>0$ then $v_1$ is an exchange-vertex of $u_1v_1$, where $u_1v_1$ is the corresponding bridge for $[u][v]$. \end{lemma} \begin{proof} Suppose $u_1,v_1\in V_t$. Since $[v]$ is not a trivial equivalence class, $v_1$ is a vertex of a cycle $C$. Let $v_2$ be another vertex of $C$. If $b_{v_1}(v_2)>0$ then there exists a valid movement $\sigma\in \Gamma[f_t]$ such that $\sigma(v_2)=v_1$ and $\sigma(v_1)=u_1$. Due to the fact that $v_1$ is an exchange-vertex of $v_1v_2$, $v_1$ is an exchange-vertex of $\sigma(v_1)\sigma(v_2)=u_1v_1$, see Figure \ref{fig15} (left).
\begin{figure} \caption{If $[v]$ is not a trivial equivalence class, then $v_1$ is an exchange-vertex of $u_1v_1$.} \label{fig15} \end{figure} If $b_{v_1}(v_2)=0$, then $C$ is saturated and there is a valid movement $\sigma_1$ such that $\sigma_1(v_2)=v_1$. Since $u_1v_1$ is a bridge and $\beta_{[u]}([v])>0$, there exists a vertex $w_1\notin [v]$ adjacent to $v_1$ such that $b_{v_1}(w_1)>0$. Hence, there is a valid movement $\sigma_2\in \Gamma[f_t\circ \sigma_1]$ such that $\sigma_2(v_1)=u_1$, and the result follows, see Figure \ref{fig15} (right). \end{proof} \begin{theorem}\label{teo51} Let $P=(v_1,v_2,\dots,v_{r+1})$ be a path of $G$ such that each edge is a bridge and $v_1,v_{r+1}\in V_t$. If $[v_1]$ is a nontrivial equivalence class of $G_\textbf{B}$ and $r\leq b_{v_2}(v_1)$, then $v_1$ is an exchange-vertex of each edge of $P$. \end{theorem} \begin{proof} To begin with, observe that each vertex of $P$ has weight 1. For each $i\in [r]$ we have $b_{v_{i+1}}(v_i)\geq r$, so there exists a valid movement $\sigma_i$ such that $\sigma_i(v_1)=v_i$ and $\sigma_i(v_2)=v_{i+1}$. By Lemma \ref{lemma50}, $v_1$ is an exchange-vertex of $\sigma_i(v_1)\sigma_i(v_2)=v_iv_{i+1}$ because $b_{v_2}[f_t\circ\sigma_i]\geq r+1-i$, see Figure \ref{fig16}. \begin{figure} \caption{If $[v_1]$ is not a trivial equivalence class, then $v_1$ is an exchange-vertex of $v_iv_{i+1}$.} \label{fig16} \end{figure} \end{proof} \begin{corollary}\label{cor52} Let $P=(v_1,v_2,\dots,v_{r+1})$ be a path of $G$ such that each edge is a bridge and $v_1,v_{r+1}\in V_t$. If $[v_1]$ is a nontrivial equivalence class of $G_\textbf{B}$ and $r\leq b_{v_2}(v_1)$, then $v_1v_{r+1}\in\Phi[f_t]$. \end{corollary} \begin{proof} By Theorem \ref{teo51}, $v_1$ is an exchange-vertex of each edge of $P$. By Theorem \ref{teo40}, the result follows. \end{proof} Next, we analyze the case where $[v]$ is a trivial equivalence class but has degree at least 3.
\begin{theorem}\label{teo53} Let $P=(v_1,v_2,\dots,v_{r})$ be a path of $G$ such that each edge is a bridge and $v_1,v_{r}\in V_t$. If $[v_1]$ is a trivial equivalence class of $G_\textbf{B}$ of degree at least 3 and $r\leq b_{v_2}(v_1)$, then $v_1$ is an exchange-vertex of each edge of the subpath $(v_2,v_3,\dots,v_r)$. \end{theorem} \begin{proof} Observe that the condition $r\leq b_{v_2}(v_1)$ implies that there are at least two empty vertices in the direction of $v_1$ with respect to $v_2$. Via a valid movement, we can obtain at least two empty vertices in at least one of the branches at $v_1$. Now, we use exactly the same argument as in Theorem \ref{teo51} and the result follows, see Figure \ref{fig17}. \begin{figure} \caption{If $[v_1]$ is a trivial equivalence class, then $v_1$ is an exchange-vertex of $v_iv_{i+1}$.} \label{fig17} \end{figure} \end{proof} From the previous proof we remark that if $f_t$ has at least two branches at $v_1$, each with at least one empty vertex, then $v_1$ is an exchange-vertex of $v_1v_2$. See the vertices $[v_{17}]$ and $[v_7]$ of the example of Figure \ref{fig18}. \begin{corollary}\label{cor54} Let $P=(v_1,v_2,\dots,v_{r})$ be a path of $G$ such that each edge is a bridge and $v_1,v_{r}\in V_t$. If $[v_1]$ is a trivial equivalence class of $G_\textbf{B}$ of degree at least 3 and $r\leq b_{v_2}(v_1)$, then $v_2v_{r}\in\Phi[f_t]$. \end{corollary} \begin{theorem}\label{teo55} Let $[v]$ be a trivial equivalence class of $G_\textbf{B}$ with degree at least 3. Let $x\in V_t$ (resp.\ $y\in V_t$) be a vertex in the direction of $v_1$ (resp.\ $v_2$) with respect to $v$ and at distance less than $r$ from $v$. If $b_{v_1}(v)\geq r$ (and $b_{v_2}(v)\geq r$), then $(xy)\in\Phi[f_t]$. \end{theorem} \begin{proof} By Corollary \ref{cor54}, we have $(vv_1),(vv_2)\in\Phi[f_t]$. It remains to prove that $(v_1v_2)\in\Phi[f_t]$, owing to the fact that $(xy)=(v_1x)\circ(v_2y)\circ(v_1v_2)\circ(v_1x)\circ(v_2y)$.
If $vv_1$ and $vv_2$ satisfy the hypothesis of Theorem \ref{teo39}, the result follows. Without loss of generality, suppose that $vv_2$ does not satisfy this hypothesis. Since $r\geq 2$, there exists a vertex $v_3$ adjacent to $v$ for which $b_v(v_3)\geq 2$. Hence, there is a valid movement $\sigma\in\Gamma[f_t]$ such that $\sigma(v_3)=v$ and $\sigma(v)=v_2$. Therefore, $b_v[f_t\circ\sigma](v_2)>0$ and $b_v[f_t\circ\sigma](v_3)>0$, so $v$ is an exchange-vertex of the pair $\{v_1,v_2\}$ and finally $(v_1v_2)\in\Phi[f_t]$. \end{proof} Through Theorems \ref{teo51} and \ref{teo53} we can determine the vertices of $C_v$ when $[v]$ has degree at least 3 in $G_\textbf{B}$. We need the following useful definition related to $C_v$. \begin{definition} Let $[v]$ be a vertex of degree at least 3 of $G_\textbf{B}$. We denote by $C_{[v]}$ the subset of vertices of $G$ with the following property: for every pair of its vertices $x,y$, the transposition $(xy)$ is in $\Phi[f_t]$ according to Corollaries \ref{cor52} and \ref{cor54} and Theorems \ref{teo43} and \ref{teo55}. \end{definition} Figure \ref{fig18} shows the four sets $C_{[v]}$ of a graph $G$. Note that $b_{[v_4]}([v_3])=1$, so $C_{[v_3]}$ contains $v_2$, $v_3$ and $v_4$. Similarly, $b_{[v_{10}]}([v_{11}])=2$, so $C_{[v_{11}]}$ contains $v_{9}$, $v_{10}$, $v_{11}$ and $v_{14}$. The vertices $[v_7]$ and $[v_{17}]$ of $G_\textbf{B}$ are trivial equivalence classes. For $C_{[v_7]}$ we have $b_{[v_6]}([v_7])=2$, $b_{[v_8]}([v_7])=1$ and $b_{[v_{15}]}([v_7])=3$, so $v_6\in C_{[v_7]}$ and $v_{15},v_{16}\in C_{[v_7]}$. Moreover, there are two branches at $v_7$ with empty vertices, so $v_7\in C_{[v_7]}$. Finally, $C_{[v_{17}]}$ contains $v_{18}$ and $v_{19}$ because $b_{[v_{16}]}([v_{17}])=0$, $b_{[v_{18}]}([v_{17}])=3$ and $b_{[v_{19}]}([v_{17}])=3$. \begin{figure} \caption{The graph $G$ as an example with the sets of vertices $C_{[v_3]}$, $C_{[v_7]}$, $C_{[v_{11}]}$ and $C_{[v_{17}]}$.} \label{fig18} \end{figure} Note the following facts.
$S_{C_{[v]}}\subseteq \Phi[f_t]$, and if $v$ has degree at least 3, then $C_{[v]}\subseteq C_v$. Let $V_{\geq 3}$ be the set of vertices of degree at least 3. We define the relation $R$ over $V_{\geq 3}$ as follows: $uRv$ if and only if there exists a sequence $\{u=u_1,u_2,\dots,u_{r+1}=v\}$ of vertices of $V_{\geq 3}$ such that for each $i\in [r]$, $C_{[u_i]}\cap C_{[u_{i+1}]}\not = \emptyset$. Clearly, $R$ is an equivalence relation. We denote by $R(v)$ the equivalence class of $v$. \begin{theorem} If $v\in V_{\geq 3}$, then \[C_v=\underset{u\in R(v)}{\bigcup}C_{[u]}.\] \end{theorem} \begin{proof} Let $u\in R(v)$ and let $\{u=u_1,u_2,\dots,u_{r+1}=v\}$ be a sequence of vertices of $V_{\geq 3}$ such that for each $i\in [r]$, $C_{[u_i]}\cap C_{[u_{i+1}]}\not = \emptyset$. For each $j\in [r]$ pick $v_j\in C_{[u_j]}\cap C_{[u_{j+1}]}$, and let $v_0\in C_{[u]}$. Then for each $j\in [r]$ we have $(v_{j-1}v_j)\in \Phi[f_t]$, and therefore $(v_0v_r)\in \Phi[f_t]$. Since $v_r\in C_{[v]}\subseteq C_v$, we get $v_0\in C_v$ and hence $C_{[u]}\subseteq C_v$. Now, suppose that $v_0\in C_v$ but $v_0\notin C_{[u]}$ for all $u\in R(v)$. Let $u_1\in R(v)$ be such that $[u_1]$ is the closest vertex to $[v_0]$ in $G_\textbf{B}$. Then $C_{[u_1]}\cap C_{[v_0]}=\emptyset$. Since $v_0\in C_v$, there exist valid movements $\sigma$ and $\sigma_1$ such that $\sigma(v)=v_0$ and $\sigma_1(u_1)=v_0$, but this is not possible according to Theorems \ref{teo51} and \ref{teo53}, because $v_0$ would have to be a vertex of $C_{[u_1]}$. \end{proof} Finally, note that if $x,y\in C_v$, then $(xy)\in\Phi[f_t]$, and hence $S_{C_v}\subseteq \Phi[f_t]$. In consequence, we can prove the main theorem for non-saturated configurations. \begin{theorem} Let $R(v_1),R(v_2),\dots,R(v_m)$ be the equivalence classes of $R$.
Then \[\Phi[f_t]=\underset{i=1}{\overset{m}{\prod}}S_{C_{v_{i}}}\times S_{V_\emptyset}.\] \end{theorem} \begin{proof} By construction, $S_{V_\emptyset},S_{C_v}\leq \Phi[f_t]$, so $\underset{i=1}{\overset{m}{\prod}}S_{C_{v_{i}}}\times S_{V_\emptyset}\leq \Phi[f_t].$ Let $\sigma\in\Phi[f_t]$ be such that $\sigma\notin \underset{i=1}{\overset{m}{\prod}}S_{C_{v_{i}}}$, i.e., $\sigma(x)=y$ for some $x\in C_{v_i}$ and $y\in C_{v_j}$ with $i\not = j$. The vertices $v_i$ and $v_j$ are exchange-vertices for some edges $xw$ and $yz$, respectively, so $(xw),(yz)\in\Phi[f_t]$. On the one hand, if $\sigma(w)=z$, then $v_i$ is an exchange-vertex for $xw$ in $f_t\circ\sigma$, and hence $v_i$ is an exchange-vertex for $yz$ in $f_t$. Therefore, $y\in C_{v_i}$, a contradiction since $C_{v_i}\cap C_{v_j}=\emptyset$. On the other hand, suppose $\sigma(w)\not = z$; then $\sigma^{-1}(z)\not = w$. Since $\sigma(x)=y$, we also have $\sigma^{-1}(z)\not =x$ and $\sigma(w)\not=y$. Therefore, $\sigma_1\colon=\sigma\circ(xw)\circ\sigma^{-1}\circ(yz)\circ\sigma\circ (xw)\in\Phi[f_t]$ and it satisfies $\sigma_1(x)=y$ and $\sigma_1(y)=z$. As before, this is a contradiction. Hence $\sigma\in\underset{i=1}{\overset{m}{\prod}}S_{C_{v_{i}}}$ and $\Phi[f_t]\leq \underset{i=1}{\overset{m}{\prod}}S_{C_{v_{i}}}\times S_{V_\emptyset}$. \end{proof} To end this section, Figure \ref{fig18} shows a graph with a non-saturated configuration $f_t$ for which its Wilson group is $S_{C_{v_3}}\times S_{C_{v_7}}\times S_{C_{v_{11}}}\times S_{C_{v_{17}}} \times S_{V_\emptyset}$. \section*{Acknowledgments} Part of this work is included in the undergraduate thesis \cite{A} of one of the authors. C. Rubio-Montiel was partially supported by PAIDI grant 007/21. The authors wish to thank the anonymous referees of this paper for their suggestions and remarks. \end{document}
\begin{document} \begin{abstract} The groupoid of finite sets has a ``canonical'' structure of a symmetric 2-rig with the sum and product respectively given by the coproduct and product of sets. This 2-rig $\widehat{\mathbb{F}\mathbb{S} et}$ is just one of the many non-equivalent categorifications of the commutative rig $\mathbb{N}$ of natural numbers, together with the rig $\mathbb{N}$ itself viewed as a discrete rig category, the whole category of finite sets, the category of finite dimensional vector spaces over a field $k$, etc. In this paper it is shown that $\widehat{\mathbb{F}\mathbb{S} et}$ is the right categorification of $\mathbb{N}$ in the sense that it is biinitial in the 2-category of rig categories, in the same way as $\mathbb{N}$ is initial in the category of rigs. As a by-product, an explicit description of the homomorphisms of rig categories from a suitable version of $\widehat{\mathbb{F}\mathbb{S} et}$ into any (semistrict) rig category $\mathbb{S}$ is obtained in terms of a sequence of automorphisms of the objects $1+\stackrel{n)}{\cdots}+1$ in $\mathbb{S}$ for each $n\geq 0$. \end{abstract} \title{The groupoid of finite sets is biinitial in the 2-category of rig categories} \section{Introduction} A {\em rig} (a.k.a. a semiring) is a ring without {\em n}egatives, i.e. an (additive) abelian monoid $(S,+,0)$ equipped with an additional (multiplicative) monoid structure $(\,\bullet\,,1)$ such that $\bullet$ distributes over $+$ on both sides, and $0\bullet x=x\bullet 0=0$ for each $x\in S$. A paradigmatic example is the set $\mathbb{N}$ of nonnegative integers with the usual sum and product. The rig is called {\em commutative} when $\bullet$ is abelian; for instance, $(\mathbb{N},+,\bullet\,,0,1)$ is a commutative rig. Rigs (resp. commutative rigs) are the objects of a category $\mathcal{R} ig$ (resp. $\mathcal{C}\mathcal{R} ig$) having as morphisms the maps that preserve both $+$ and $\bullet$\,, and the corresponding neutral elements. 
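Since every axiom in the definition of a rig is an equation, the definition can be spot-checked mechanically on a finite carrier. The sketch below is only an illustration (the function and variable names are ours, not from the paper): it verifies the rig axioms for the Boolean rig $(\{0,1\},\vee,\wedge)$, a commutative rig that is not a ring, and for $\mathbb{Z}/4$.

```python
from itertools import product

def is_rig(elems, add, mul, zero, one):
    """Spot-check the rig axioms on a finite carrier: (elems, add, zero) a
    commutative monoid, (mul, one) a monoid, mul distributing over add on
    both sides, and zero absorbing."""
    for x, y, z in product(elems, repeat=3):
        if add(x, add(y, z)) != add(add(x, y), z): return False   # + associative
        if mul(x, mul(y, z)) != mul(mul(x, y), z): return False   # • associative
        if mul(x, add(y, z)) != add(mul(x, y), mul(x, z)): return False  # left distr.
        if mul(add(x, y), z) != add(mul(x, z), mul(y, z)): return False  # right distr.
    for x, y in product(elems, repeat=2):
        if add(x, y) != add(y, x): return False                   # + abelian
    for x in elems:
        if add(x, zero) != x or mul(x, one) != x or mul(one, x) != x: return False
        if mul(x, zero) != zero or mul(zero, x) != zero: return False  # 0 absorbs
    return True

# The Boolean rig ({0,1}, or, and).
assert is_rig([0, 1], lambda a, b: a | b, lambda a, b: a & b, 0, 1)

# Z/4 with its usual operations is also a rig (indeed a ring).
assert is_rig(range(4), lambda a, b: (a + b) % 4, lambda a, b: (a * b) % 4, 0, 1)
```

Exhaustive enumeration works here because the carriers are finite; for $\mathbb{N}$ itself one would of course argue the axioms directly.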
We are interested in the categorical analog of a (commutative) rig. It is usually called a ({\em symmetric}) {\em rig category}, or a ({\em symmetric}) {\em bimonoidal category}. The last name, however, is confusing because a bimonoid in the set-theoretic context is a set simultaneously equipped with compatible monoid and comonoid structures, and not a set with two monoid structures, one of them distributing over the other. The precise definition of a (symmetric) rig category is due to Laplaza \cite{Laplaza-1972}, and goes back to the 1970s (see also \cite{Kelly-1974}). Roughly, it is a category $\mathcal{S}$ equipped with functorial operations analogous to those of a rig, and satisfying all rig axioms up to suitable natural isomorphisms. More precisely, $\mathcal{S}$ must be equipped with an (additive) symmetric monoidal structure $(+,0,a,c,l,r)$, a (multiplicative) monoidal structure $(\bullet,1,a',l',r')$ (including a multiplicative commutator $c'$ in case the rig category is symmetric), and distributor and absorbing natural isomorphisms \begin{align*} d_{x,y,z}&: x\bullet(y+z)\stackrel{\cong}{\to} x\bullet y+x\bullet z \\ d'_{x,y,z}&:(x+y)\bullet z\stackrel{\cong}{\to} x\bullet z+y\bullet z \\ n_x&:x\bullet 0\stackrel{\cong}{\to} 0 \\ n'_x&:0\bullet x\stackrel{\cong}{\to} 0 \end{align*} making commutative the appropriate `coherence diagrams'. For short, we shall denote by $\mathbb{S}$ the whole data defining a (symmetric) rig category. A paradigmatic example is the category $\mathcal{F}\mathcal{S} et$ of finite sets and maps between them, with $+$ and $\bullet$ respectively given by the disjoint union and the cartesian product of finite sets. The rig category $\mathbb{F}\mathbb{S} et$ so defined is symmetric, with $c'$ given by the canonical isomorphisms of sets $S\times T\cong T\times S$. Of course, rig categories (resp. symmetric rig categories) are the objects of a {\em 2-category} $\mathbf{RigCat}$ (resp.
$\mathbf{SRigCat}$) whose 1- and 2-cells are given by the appropriate type of functors and natural transformations between these. The precise definitions are given in Section~\ref{categories_rig}. $\mathbb{F}\mathbb{S} et$ is not just a symmetric rig category. It is a {\em categorification} of the commutative rig $\mathbb{N}$, in the sense that the set of isomorphisms classes of objects in $\mathcal{F}\mathcal{S} et$ with the rig structure induced by $+$ and $\bullet$ is isomorphic to $\mathbb{N}$ with its canonical rig structure; for more on the idea of categorification, see \cite{Baez-Dolan-1998}. In fact, there are many other non-equivalent categorifications of $\mathbb{N}$ as a rig, such as the symmetric rig category $\mathbb{F}\mathbb{V} ect_k$ of finite dimensional vector spaces over any given field $k$, with $+$ and $\bullet$ respectively given by the direct sum and the tensor product of vector spaces, or the rig $\mathbb{N}$ itself viewed as a discrete category with only the identity morphisms. What is then the right categorical analog of $\mathbb{N}$ as a rig? Of course, the answer depends on what we mean by the ``right categorical analog'' of $\mathbb{N}$ as a rig. As is well known, $\mathbb{N}$ is an {\em initial} object in the category of rigs, i.e. for every rig $(S,+,\bullet\,,0,1)$ there is one and only one rig homomorphism from $\mathbb{N}$ to $S$, namely, the map $\varphi:\mathbb{N}\to S$ given by $\varphi(0)=0$ and $\varphi(n)=1+\stackrel{n)}{\cdots}+1$ for each $n\geq 1$. Hence it is reasonable to look for the symmetric rig category having the analogous categorical property. It is a conjecture, apparently due to John Baez,~\footnote{See the nLab webpage {\em https://ncatlab.org/nlab/show/rig+category}.} that the right categorical analog of $\mathbb{N}$ in this sense is the {\em groupoid} of finite sets and the bijections between them equipped with the symmetric rig category structure inherited from $\mathbb{F}\mathbb{S} et$.
Thus if we denote by $\widehat{\mathbb{F}\mathbb{S} et}$ this symmetric rig category, Baez's conjecture is that $\widehat{\mathbb{F}\mathbb{S} et}$ is {\em biinitial} in $\mathbf{SRigCat}$, i.e. for every symmetric rig category $\mathbb{S}$ the category of homomorphisms of symmetric rig categories from $\widehat{\mathbb{F}\mathbb{S} et}$ to $\mathbb{S}$ is expected to be equivalent to the terminal category with only one object and its identity morphism. The purpose of this paper is to prove that $\widehat{\mathbb{F}\mathbb{S} et}$ is in fact biinitial in $\mathbf{RigCat}$, in the same way as $\mathbb{N}$ is initial in the category of rigs. For instance, up to isomorphism, the usual free vector space functor from the groupoid $\widehat{\mathcal{F}\mathcal{S} et}$ to $\mathcal{F}\mathcal{V} ect_k$ is the unique functor which extends to a homomorphism of rig categories $\widehat{\mathbb{F}\mathbb{S} et}\to\mathbb{F}\mathbb{V} ect_k$. To some extent, the result might seem obvious. Indeed, the underlying functor of each homomorphism of rig categories from $\widehat{\mathbb{F}\mathbb{S} et}$ to any other rig category $\mathbb{S}$ must preserve both the unit object and the sum, at least up to isomorphism. It follows that its action on objects is essentially given in a canonical way because every object in $\widehat{\mathbb{F}\mathbb{S} et}$ is a finite sum of the unit object (the singleton).
Less obvious, however, is the fact that the action on morphisms is also essentially determined by the axioms of a homomorphism of rig categories, so that there is an essentially unique way of mapping the symmetric group $S_n$, for each $n\geq 2$, into the group of automorphisms $Aut_\mathcal{S}(\underline{\sf n})$ of the object $\underline{\sf n}={\sf 1}+\stackrel{n)}{\cdots}+{\sf 1}$, with {\sf 1} the unit object of $\mathbb{S}$.~\footnote{Behind this uniqueness, there should be the coherence theorem for symmetric monoidal categories, although we made no use of it in the proof of the main theorem.} Moreover, there is also the point that, together with the underlying functor, giving a homomorphism from $\widehat{\mathbb{F}\mathbb{S} et}$ to $\mathbb{S}$ also requires specifying the natural isomorphisms which take account of the preservation of $+$ and $\bullet$ up to isomorphisms. These natural isomorphisms must satisfy infinitely many coherence equations, and it is not a priori clear that all possible choices for them actually define equivalent homomorphisms. Basic for the proof of the theorem will be working with a particular semistrict skeletal version of $\widehat{\mathbb{F}\mathbb{S} et}$, as well as the fact that $\mathbb{S}$ can be assumed to be semistrict (i.e. such that many of the natural isomorphisms implicit in the structure of a rig category are identities). This allows us to describe quite explicitly the homomorphisms into $\mathbb{S}$, and then to prove that they are all indeed equivalent to a canonical one having the identity as its unique (2-)\,endomorphism. Rig categories have been used since the late 1970s as sources of examples of $E_\infty$ ring spaces (see Chapter~VI of \cite{May-1977}), and to define a sort of `2-K-theory' (see \cite{Baas-Dundas-Rognes-2004}).
Our interest in rig categories is due to the fact that they constitute one of the basic inputs in the definition of the categorical analog of a (semi)module over a (semi)ring, the other one being a symmetric monoidal category to be acted on. More specifically, we are interested in the so-called $\mathbb{F}\mathbb{V} ect_k$-{\em module categories} as categorical analogs of the vector spaces. These are symmetric monoidal categories $\mathscr{M}=(\mathcal{M},\oplus,{\sf 0})$ equipped with a categorical action of $\mathbb{F}\mathbb{V} ect_k$ on them. This is given by a homomorphism of rig categories from $\mathbb{F}\mathbb{V} ect_k$ to the rig category of endomorphisms of $\mathscr{M}$ (cf. Example~\ref{semianell_endomorfismes} below), and we would like to identify in more concrete terms the data that define such an $\mathbb{F}\mathbb{V} ect_k$-module category structure. Since $\widehat{\mathbb{F}\mathbb{S} et}$ is biinitial in $\mathbf{RigCat}$, every $\mathscr{M}$ has a unique $\widehat{\mathbb{F}\mathbb{S} et}$-module category structure up to equivalence, given by the essentially unique rig category homomorphism from $\widehat{\mathbb{F}\mathbb{S} et}$ to the endomorphisms of $\mathscr{M}$ (cf. Example~\ref{morfisme_a_endomorfismes_M}). The point is that $\widehat{\mathbb{F}\mathbb{S} et}$ embeds as a rig subcategory of $\mathbb{F}\mathbb{V} ect_k$ through the free vector space construction. Hence it follows from our main theorem that part of any $\mathbb{F}\mathbb{V} ect_k$-module category structure on $\mathscr{M}$ is canonically given. This should contribute to a better understanding of the additional data that define an $\mathbb{F}\mathbb{V} ect_k$-module category structure on $\mathscr{M}$.
\begin{comment} In fact, the Bruhat decomposition of the general linear groups \[ GL(n,k)=T(n,k)\,S_n\,T(n,k) \] ($T(n,k)$ denotes the subgroup of upper triangular invertible $n\times n$ matrices) suggests that the underlying $\widehat{\mathbb{F}\mathbb{V} ect}_k$-module category structure on $\mathscr{M}$, where $\widehat{\mathbb{F}\mathbb{V} ect}_k$ stands for the sub-2-rig of $\mathbb{F}\mathbb{V} ect_k$ defined by the {\em groupoid} of finite dimensional vector spaces over a field $k$, will amount to a suitable collection of group homomorphisms \[ T(n,k)\to Aut\,(\oplus^n), \quad n\geq 0, \] identifying each upper triangular invertible $n\times n$ matrix with a natural automorphism of $\oplus^n$, and expectedly some more data related to the additive and multiplicative monoidal structures in each rig category. \end{comment} \subsection{Outline of the paper and assumed background} In Section~2 we review the definition of (symmetric) rig category, and the corresponding notions of 1- and 2-morphism making them the objects of a 2-category. Some examples are given, paying special attention to the symmetric rig category `canonically' associated to every distributive category. The section ends with the statement of the strictification theorem for (symmetric) rig categories. In Section~3 a detailed description is given of the semistrict version of $\widehat{\mathbb{F}\mathbb{S} et}$ we shall use in this work. Finally, in Section~4 we prove that this 2-rig is biinitial in the 2-category of rig categories. Incidentally, we obtain explicit descriptions of the rig category homomorphisms from this semistrict version of $\widehat{\mathbb{F}\mathbb{S} et}$ into any (semistrict) rig category $\mathbb{S}$. In a sense, these descriptions might have no interest because the category is contractible.
However, as mentioned before, we are actually interested in the categories $\mathcal{H} om_{\mathbf{RigCat}}(\mathbb{S}',\mathbb{S})$ for other rig categories $\mathbb{S}'$ containing $\widehat{\mathbb{F}\mathbb{S} et}$ as a sub-2-rig, such as the rig category $\mathbb{F}\mathbb{V} ect_k$ of finite dimensional vector spaces over a given ground field. The point is that isomorphic objects in $\mathcal{H} om_{\mathbf{RigCat}}(\widehat{\mathbb{F}\mathbb{S} et},\mathbb{S})$ may no longer be equivalent when extended to homomorphisms from the whole rig category $\mathbb{S}'$. Thus having the above descriptions may be useful in the study of these categories of homomorphisms. The reader is assumed to be familiar with the definitions of (symmetric) monoidal category and 2-category, and with the notions of (symmetric) monoidal functor and monoidal natural transformation between them. Good references for the basics of monoidal categories are, for instance, Chapters~1 and 3 of \cite{Aguiar-Mahajan-2010}, Chapter~XI of \cite{Kassel-1995}, or Chapter~2 of \cite{EGNO-2015}, and for the basics of 2-categories Chapter~7 of \cite{Borceux-1994-I}. The reader may also take a look at the standard text by MacLane \cite{MacLane-1998}. \subsection{A few conventions about notation and terminology} Both sets and structured sets (such as monoids, rigs, etc) are denoted in the same way, namely by capital letters $A,B,C,\ldots$. Plain categories and functors between them are denoted by $\mathcal{A},\mathcal{B},\mathcal{C},\ldots$, (symmetric) monoidal categories and (symmetric) monoidal functors between them by $\mathscr{A},\mathscr{B},\mathscr{C},\ldots$ and rig categories and rig category homomorphisms between them (both notions are defined below) by $\mathbb{A},\mathbb{B},\mathbb{C},\ldots$. When we refer to concrete examples of categories, the same convention will be applied to the first letters.
Thus $\mathcal{S} et$, $\mathscr{S}et$ and $\mathbb{S} et$ respectively denote the plain category of (small) sets, the category $\mathcal{S} et$ equipped with a monoidal structure, and the category $\mathcal{S} et$ equipped with a rig category structure. Finally, 2-categories are denoted by boldface letters $\mathbf{A},\mathbf{B},\mathbf{C},\ldots$. Given two monoidal categories $\mathscr{A}=(\mathcal{A},\otimes,1,a,l,r)$ and $\mathscr{A}'=(\mathcal{A}',\otimes',1',a',l',r')$, and unless otherwise indicated, by a monoidal functor between them we mean a {\em strong} monoidal functor. The underlying functor of such a monoidal functor $\mathscr{F}:\mathscr{A}\to\mathscr{A}'$ is denoted by $F$, and the structural isomorphisms taking account of the preservation of the tensor product and unit objects up to natural isomorphism by $\varphi$ and $\varepsilon$, respectively. Thus $\mathscr{F}$ consists of the functor $F$ together with a natural isomorphism \[ \varphi_{x,y}:F(x\otimes y)\stackrel{\cong}{\to} Fx\otimes' Fy \] in $\mathcal{A}'$ for each pair of objects $x,y\in\mathcal{A}$, and an isomorphism $\varepsilon:F1\to 1'$ also in $\mathcal{A}'$, all these isomorphisms satisfying the corresponding coherence axioms. By a {\em semistrict} symmetric monoidal category we shall mean what some authors call a strict symmetric monoidal category, i.e. a symmetric monoidal category whose associator and left and right unitors are identities, but not the commutator. The term strict will refer to the case when the commutator is also trivial. Every symmetric monoidal category is equivalent to a semistrict one, but in general not to a strict one (this is MacLane's coherence theorem for symmetric monoidal categories; cf. \cite{MacLane-1998}). Finally, composition of (1-)morphisms (in a category or in a 2-category) is denoted by juxtaposition, and the identity of an object $x$ by $id_x$.
In particular, for any objects $A,B,C$ and morphisms $f:A\to B$ and $g:B\to C$, the composite is denoted by $g\,f:A\to C$. Exceptionally, composition of functors is denoted with the symbol $\circ$\,. \section{The 2-category of (symmetric) rig categories} \label{categories_rig} In this section we review the notion of (symmetric) rig category, motivating the required coherence axioms, and give various examples. Next appropriate notions of 1- and 2-cell are defined making (symmetric) rig categories the objects of a 2-category. In fact, there are various reasonable notions of 1-cell, and correspondingly various 2-categories of (symmetric) rig categories. We will stick to the {\em strong} notion of 1-cell (cf. Definition~\ref{morfisme_categories_semianell} below), although natural examples are given which are not strong. The section ends with the statement of the corresponding strictification theorem. \subsection{Rig categories} As recalled in the introduction, a rig (or semiring) is an abelian monoid $(S,+,0)$ equipped with an additional monoid structure $(\bullet\,,1)$ such that $\bullet$ distributes over $+$ from either side and $0\bullet x=x\bullet 0=0$ for each $x\in S$. Equivalently, $(\bullet\,,1)$ must be such that all left and right translation maps $L^x:y\mapsto x\bullet y$ and $R^x:y\mapsto y\bullet x$ are monoid endomorphisms of $(S,+,0)$. It follows that \begin{itemize} \item[(1)] $L^x\circ R^y=R^y\circ L^x$ and $L^x+R^y=R^y+L^x$ for each $x,y\in S$; \item[(2)] $L^x\circ L^y=L^{x\bullet y}$ and $R^{y\bullet x}=R^x\circ R^y$ for each $x,y\in S$; \item[(3)] $L^{x+y}=L^x+L^y$ and $R^{y+x}=R^y+R^x$ for each $x,y\in S$; \item[(4)] $L^1=R^1={\sf 1}_S$; \item[(5)] $L^0=R^0={\sf 0}_S$, \end{itemize} where ${\sf 1}_S$ and ${\sf 0}_S$ denote the identity and zero maps of $S$, respectively, and the sum of endomorphisms is pointwise defined. Notice that the first condition in (1), and each of the two conditions in (2) correspond to the associativity of $\bullet$\,.
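The identities (1)--(5) can be verified pointwise on any finite rig. The following sketch (a toy check on $\mathbb{Z}/4$, ours and not part of the paper) encodes the translation maps $L^x,R^x$ and tests each identity by exhaustive enumeration.

```python
from itertools import product

# Hypothetical finite test rig: Z/4 with its usual sum and product.
S = range(4)
add = lambda x, y: (x + y) % 4
mul = lambda x, y: (x * y) % 4

L = lambda x: (lambda y: mul(x, y))   # left translation  L^x(y) = x • y
R = lambda x: (lambda y: mul(y, x))   # right translation R^x(y) = y • x
same = lambda f, g: all(f(s) == g(s) for s in S)   # pointwise equality of maps
plus = lambda f, g: (lambda s: add(f(s), g(s)))    # pointwise sum of maps
comp = lambda f, g: (lambda s: f(g(s)))            # composition f ∘ g

for x, y in product(S, repeat=2):
    # Each L^x is a monoid endomorphism of (S, +, 0):
    assert L(x)(0) == 0 and all(L(x)(add(u, v)) == add(L(x)(u), L(x)(v))
                                for u, v in product(S, repeat=2))
    # (1) L^x ∘ R^y = R^y ∘ L^x  and  L^x + R^y = R^y + L^x
    assert same(comp(L(x), R(y)), comp(R(y), L(x)))
    assert same(plus(L(x), R(y)), plus(R(y), L(x)))
    # (2) L^x ∘ L^y = L^{x•y}  and  R^{y•x} = R^x ∘ R^y
    assert same(comp(L(x), L(y)), L(mul(x, y)))
    assert same(R(mul(y, x)), comp(R(x), R(y)))
    # (3) L^{x+y} = L^x + L^y  and  R^{y+x} = R^y + R^x
    assert same(L(add(x, y)), plus(L(x), L(y)))
    assert same(R(add(y, x)), plus(R(y), R(x)))
# (4) and (5): L^1 = R^1 = id  and  L^0 = R^0 = 0
assert same(L(1), lambda s: s) and same(R(1), lambda s: s)
assert same(L(0), lambda s: 0) and same(R(0), lambda s: 0)
```

In the categorified definition each of these strict equalities is replaced by the corresponding natural isomorphism listed next.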
All of them are made explicit because they lead to different conditions in the categorified definition. When the definition of a rig is categorified, the abelian monoid $(S,+,0)$ must be replaced by a symmetric monoidal category $\mathscr{S}=(\mathcal{S},+,0,a,c,l,r)$, with $a,c,l,r$ the associativity, commutativity, and left and right unit natural isomorphisms, the monoid structure $(\bullet\,,1)$ by an additional monoidal structure $(\bullet\,,1,a',l',r')$ on $\mathcal{S}$, with $a',l',r'$ as before, and the distributivity and `absorbing' axioms by natural isomorphisms (for short, the symbol $\bullet$ between objects is omitted from now on) \[ d_{x,y,z}:x(y+z)\stackrel{\cong}{\to}xy+xz,\quad d'_{x,y,z}:(x+y)z\stackrel{\cong}{\to}xz+yz \] \[ n_x:x0\stackrel{\simeq}{\rightarrow} 0,\quad n'_x:0x\stackrel{\simeq}{\rightarrow} 0 \] making each of the left and right translation functors $L^x,R^x:\mathcal{S}\to\mathcal{S}$ into symmetric $+$-monoidal endofunctors $\mathscr{L}^x=(L^x,d_{x,-,-},n_x)$ and $\mathscr{R}^x=(R^x,d'_{-,-,x},n'_x)$ of $\mathscr{S}$, in the same way as the maps $L^x,R^x$ were monoid endomorphisms of $(S,+,0)$. In doing this, equalities (1)-(5) no longer hold strictly. We only have canonical natural isomorphisms between the corresponding pairs of functors, given by the natural isomorphisms \begin{align*} a'_{x,-,y}&:L^x\circ R^y\Rightarrow R^y\circ L^x, \\ c_{x-,-y}&:L^x+R^y\Rightarrow R^y+L^x, \\ a'_{x,y,-}&:L^x\circ L^y\Rightarrow L^{xy}, \\ a'_{-,y,x}&:R^{yx}\Rightarrow R^x\circ R^y, \\ d'_{x,y,-}&:L^{x+y}\Rightarrow L^x+L^y , \\ d_{-,y,x}&:R^{y+x}\Rightarrow R^y+R^x, \\ l':L^1 &\,\Rightarrow {\bf 1}_{\mathcal{S}},\ r':R^1\Rightarrow {\bf 1}_\mathcal{S}, \\ n':L^0 &\,\Rightarrow {\bf 0}_\mathcal{S}, \ n:R^0\Rightarrow {\bf 0}_\mathcal{S}, \end{align*} where ${\bf 1}_\mathcal{S},{\bf 0}_\mathcal{S}$ respectively denote the identity and zero functors of $\mathcal{S}$.
Moreover, the domain and codomain functors of all these isomorphisms are both symmetric $+$-monoidal. As a matter of fact, however, all these natural isomorphisms except $c_{x-,-y}$ need not be {\em monoidal}, and this must be explicitly required. Thus we are led to the following definition of a rig category. \subsubsection{\sc Definition.}\label{categoria_rig} A {\em rig category} is a symmetric monoidal category $\mathscr{S}=(\mathcal{S},+,0,a,c,l,r)$ together with the following data: \begin{itemize} \item[(SC1)] an additional monoidal structure $(\bullet\,,1,a',l',r')$ (the {\em multiplicative} monoidal structure); \item[(SC2)] two families of isomorphisms ({\em left and right distributors}) $$d_{x,y,z}:x(y+z)\stackrel{\simeq}{\rightarrow}x y+x z,$$ $$d'_{x,y,z}:(x+y) z\stackrel{\simeq}{\to}x z+y z$$ natural in $x,y,z\in\mathcal{S}$; \item[(SC3)] two families of isomorphisms ({\em absorbing isomorphisms}) $$n_x:x0\stackrel{\simeq}{\rightarrow} 0,$$ $$n'_x:0x\stackrel{\simeq}{\rightarrow} 0$$ natural in $x\in\mathcal{S}$. 
\end{itemize} Moreover, these data must satisfy the following axioms: \begin{enumerate} \item[(SC4)] for each $x\in\mathcal{S}$ the triples $\mathscr{L}^x=(L^x,d_{x,-,-},n_x)$, $\mathscr{R}^x=(R^x,d'_{-,-,x},n'_x)$ are symmetric $+$-monoidal endofunctors of $\mathscr{S}$; more explicitly, this means that the diagrams \begin{equation*} \xymatrix{& x(y+(z+t))\ar[r]^{id_x\bullet a_{y,z,t}}\ar[d]_{d_{x,y,z+t}} & x((y+z)+t)\ar[d]^{d_{x,y+z,t}} \\ (A1.1) & xy+x(z+t)\ar[d]_{id_{xy}+d_{y,z,t}} & x(y+z)+xt\ar[d]^{d_{x,y,z}+id_{xt}} \\ & xy+(xz+xt)\ar[r]_{a_{xy,xz,xt}} & (xy+xz)+xt}\quad \xymatrix{(x+(y+z))t\ar[r]^{a_{x,y,z}\bullet id_t}\ar[d]_{d'_{x,y+z,t}} & ((x+y)+z)t\ar[d]^{d'_{x+y,z,t}} \\ xt+(y+z)t\ar[d]_{id_{xt}+d'_{y,z,t}} & (x+y)t+zt\ar[d]^{d'_{x,y,t}+id_{zt}} \\ xt+(yt+zt)\ar[r]_{a_{xt,yt,zt}} & (xt+yt)+zt} \end{equation*} \begin{equation*} \xymatrix{(A1.2) & x(y+z)\ar[r]^{id_x\bullet c_{y,z}}\ar[d]_{d_{x,y,z}} & x(z+y)\ar[d]^{d_{x,z,y}} \\ & xy+xz\ar[r]_{c_{xy,xz}} & xz+xy}\quad \xymatrix{(y+z)x\ar[r]^{c_{y,z}\bullet id_x}\ar[d]_{d'_{y,z,x}} & (z+y)x\ar[d]^{d'_{z,y,x}} \\ yx+zx\ar[r]_{c_{yx,zx}} & zx+yx} \end{equation*} \begin{equation*} \xymatrix{(A1.3) & x(0+y)\ar[r]^{d_{x,0,y}}\ar[d]_{id_x\bullet l_y} & x0+xy\ar[d]^{n_x+id_{xy}} \\ & xy & 0+xy\ar[l]^{l_{xy}}}\quad \xymatrix{(0+x)y\ar[r]^{d'_{0,x,y}}\ar[d]_{l_x\bullet id_y} & 0y+xy\ar[d]^{n'_y+id_{xy}} \\ xy & 0+xy\ar[l]^{l_{xy}}} \end{equation*} \begin{equation*} \xymatrix{(A1.4) & x(y+0)\ar[r]^{d_{x,y,0}}\ar[d]_{id_x\bullet r_y} & xy+x0\ar[d]^{id_{xy}+n_x} \\ & xy & xy+0\ar[l]^{r_{xy}}}\quad \xymatrix{(x+0)y\ar[r]^{d'_{x,0,y}}\ar[d]_{r_x\bullet id_y} & xy+0y\ar[d]^{id_{xy}+n'_y} \\ xy & xy+0\ar[l]^{r_{xy}}} \end{equation*} commute for all objects $x,y,z,t\in\mathcal{S}$; \item[(SC5)] for each $x,y\in\mathcal{S}$ the natural isomorphism $a'_{x,-,y}:L^x\circ R^y\Rightarrow R^y\circ L^x$ is a symmetric monoidal natural isomorphism $\mathscr{L}^x\circ\mathscr{R}^y\Rightarrow\mathscr{R}^y\circ\mathscr{L}^x$; more 
precisely, this means that the diagrams \begin{equation*} \xymatrix{(A2.1) & x((z+t)y)\ar[r]^{id_x\bullet d'_{z,t,y}}\ar[d]_{a'_{x,z+t,y}} & x(zy+ty)\ar[r]^{d_{x,zy,ty}} & x(zy)+x(ty)\ar[d]^{a'_{x,z,y}+a'_{x,t,y}} \\ & (x(z+t))y\ar[r]_{d_{x,z,t}\bullet id_y} & (xz+xt)y\ar[r]_{d'_{xz,xt,y}} & (xz)y+(xt)y} \end{equation*} \begin{equation*} \xymatrix{(A2.2) & x(0y)\ar[r]^{a'_{x,0,y}}\ar[d]_{id_x\bullet n'_y} & (x0)y\ar[r]^{n_x\bullet id_y} & 0y\ar[d]^{n'_y} \\ & x0\ar[rr]_{n_x} && 0} \end{equation*} commute for all objects $x,y,z,t\in\mathcal{S}$; \item[(SC6)] for each $x,y\in\mathcal{S}$ the natural isomorphisms $a'_{x,y,-}:L^x\circ L^y\Rightarrow L^{xy}$ and $a'_{-,y,x}:R^{yx}\Rightarrow R^x\circ R^y$ are symmetric monoidal natural isomorphisms $\mathscr{L}^x\circ\mathscr{L}^y\Rightarrow\mathscr{L}^{xy}$ and $\mathscr{R}^{yx}\Rightarrow\mathscr{R}^x\circ\mathscr{R}^y$, respectively; more precisely, this means that the diagrams \begin{equation*} \xymatrix{(A3.1) & x(y(z+t))\ar[r]^{id_x\bullet d_{y,z,t}}\ar[d]_{a'_{x,y,z+t}} & x(yz+yt)\ar[r]^-{d_{x,yz,yt}} & x(yz)+x(yt)\ar[d]^{a'_{x,y,z}+a'_{x,y,t}} \\ & (xy)(z+t)\ar[rr]_{d_{xy,z,t}} && (xy)z+(xy)t }\quad \xymatrix{x(y0)\ar[r]^{a'_{x,y,0}}\ar[d]_{id_x\bullet n_{y}} & (xy)0\ar[d]^{n_{xy}} \\ x0\ar[r]_{n_x} & 0 } \end{equation*} \begin{equation*} \xymatrix{(A3.2) & ((t+z)y)x\ar[r]^{d'_{t,z,y}\bullet id_x} & (ty+zy)x\ar[r]^-{d'_{ty,zy,x}} & (ty)x+(zy)x \\ & (t+z)(yx)\ar[u]^{a'_{t+z,y,x}}\ar[rr]_{d'_{t,z,yx}} && t(yx)+z(yx)\ar[u]_{a'_{t,y,x}+a'_{z,y,x}} } \quad \xymatrix{0(yx)\ar[r]^{a'_{0,y,x}}\ar[d]_{n'_{yx}} & (0y)x\ar[d]^{n'_y\bullet id_x} \\ 0 & 0x\ar[l]^{n'_x}} \end{equation*} commute for all $x,y,z,t\in\mathcal{S}$; \item[(SC7)] for each $x,y\in\mathcal{S}$ the natural isomorphisms $d'_{x,y,-}:L^{x+y}\Rightarrow L^x+L^y$ and $d_{-,y,x}:R^{y+x}\Rightarrow R^y+R^x$ are symmetric monoidal natural isomorphisms $\mathscr{L}^{x+y}\Rightarrow\mathscr{L}^x+\mathscr{L}^y$ and 
$\mathscr{R}^{y+x}\Rightarrow\mathscr{R}^y+\mathscr{R}^x$, respectively; more precisely, this means that the diagrams \begin{equation*} \xymatrix{(A4.1) & (x+y)(z+t)\ar[r]^-{d'_{x,y,z+t}}\ar[d]_{d_{x+y,z,t}} & x(z+t)+y(z+t)\ar[r]^{d_{x,z,t}+d_{y,z,t}} & (xz+xt)+(yz+yt)\ar[d]^{v_{xz,xt,yz,yt}} \\ & (x+y)z+(x+y)t\ar[rr]_{\ \ d'_{x,y,z}+d'_{x,y,t}} & & (xz+yz)+(xt+yt)} \end{equation*} \begin{equation*} \xymatrix{(A4.2) & (x+y)0\ar[r]^{d'_{x,y,0}}\ar[d]_{n_{x+y}} & x0+y0\ar[d]^{n_x+n_y} \\ & 0 & 0+0\ar[l]^{l_0}}\quad \xymatrix{0(x+y)\ar[r]^{d_{0,x,y}}\ar[d]_{n'_{x+y}} & 0x+0y\ar[d]^{n'_x+n'_y} \\ 0 & 0+0\ar[l]^{l_0}} \end{equation*} commute for all objects $x,y,z,t\in\mathcal{S}$, where $v_{a,b,c,d}:(a+b)+(c+d)\stackrel{\cong}{\to}(a+c)+(b+d)$ in (A4.1) is the canonical isomorphism built from the associator $a$ and the symmetry $c$ of $\mathscr{S}$; \item[(SC8)] the left and right unit isomorphisms $l':L^1\Rightarrow {\bf 1}_\mathcal{S}$ and $r':R^1\Rightarrow {\bf 1}_\mathcal{S}$ are symmetric monoidal natural isomorphisms $\mathscr{L}^1\Rightarrow {\bf 1}_\mathcal{S}$ and $\mathscr{R}^1\Rightarrow {\bf 1}_\mathcal{S}$, respectively; more explicitly, this means that the diagrams \begin{equation*} \xymatrix{(A5.1) & 1(x+y)\ar[rr]^{d_{1,x,y}}\ar[rd]_{l'_{x+y}} & & 1 x+1 y\ar[ld]^{l'_x+l'_y} \\ & & x+y &} \quad \xymatrix{(x+y) 1\ar[rr]^{d'_{x,y,1}}\ar[rd]_{r'_{x+y}} & & x 1+y 1\ar[ld]^{r'_x+r'_y} \\ & x+y &} \end{equation*} commute for all objects $x,y\in\mathcal{S}$, and the following equalities hold: \begin{equation*} (A5.2)\qquad n_1=l'_0,\quad n'_1=r'_0; \end{equation*} \item[(SC9)] the left and right null isomorphisms $n':L^0\Rightarrow {\bf 0}_\mathcal{S}$ and $n:R^0\Rightarrow {\bf 0}_\mathcal{S}$ are symmetric monoidal natural isomorphisms $\mathscr{L}^0\Rightarrow {\bf 0}_\mathcal{S}$ and $\mathscr{R}^0\Rightarrow {\bf 0}_\mathcal{S}$, respectively; more explicitly, this means that $$n_0=n'_0.$$ \end{enumerate} \noindent For short, we shall write $\mathbb{S}$ to
denote the whole data $(\mathcal{S},+,\bullet\,,0,1,a,c,l,r,a',l',r',d,d',n,n')$ defining a rig category. The objects $0$ and $1$ will be respectively called the {\em zero} and {\em unit} objects of $\mathbb{S}$. $\mathbb{S}$ will be called a {\em 2-rig} when the underlying category $\mathcal{S}$ is a groupoid. When convenient, and in order to distinguish the additive and the multiplicative monoidal structures on a rig category we shall write $\mathscr{S}^+=(\mathcal{S},+,0,a,c,l,r)$, and $\mathscr{S}^{\,\bullet}=(\mathcal{S},\bullet\,,1,a',l',r')$. \subsubsection{\sc Definition.} A rig category $\mathbb{S}$ is called {\em left} (resp. {\em right}) {\em semistrict} when all structural isomorphisms except $c$ and the right distributor $d'$ (resp. the left distributor $d$) are identities. It is called {\em semistrict} when it is either left or right semistrict, and {\em strict} when $c$ and both distributors are also identities. \subsubsection{\sc Remark.} Rigs can be more compactly defined as the one-object categories enriched over the symmetric monoidal closed category of abelian monoids with the usual tensor product of abelian monoids. Similarly, rig categories should correspond to one-object categories enriched over a suitable symmetric monoidal closed category of symmetric monoidal categories. Such enriched categories are considered by Guillou \cite{Guillou-2010} under the name of one-object {\em {\bf SMC}-categories}, although he avoids describing explicitly the symmetric monoidal structure on the category {\bf SMC} of symmetric monoidal categories. \subsection{Symmetric rig categories.}\label{categories_rig_simetriques} A rig $(S,+,\bullet,0,1)$ is commutative when the monoid $(S,\bullet,1)$ is abelian, or equivalently when $L^x=R^x$ for each $x\in S$. 
When this condition is categorified, the abelian monoid structure $(\bullet,1)$ becomes a symmetric monoidal structure $(\bullet,1,a',c',l',r')$ on $\mathcal{S}$, and instead of the equality $L^x=R^x$ we now have the natural isomorphism $c'_{x,-}:L^x\Rightarrow R^x$. Once more, this isomorphism may not be a symmetric monoidal natural isomorphism $\mathscr{L}^x\Rightarrow\mathscr{R}^x$, and this must be required explicitly. This leads us to the following definition. \subsubsection{\sc Definition.}\label{categoria_rig_simetrica} A {\em symmetric rig category} is a rig category $\mathbb{S}$ together with a family of natural isomorphisms $c'_{x,y}:x y\stackrel{\simeq}{\to}y x$, for each $x,y\in\mathcal{S}$, called the {\em multiplicative commutators}, such that \begin{itemize} \item[(CSC1)] $(\mathcal{S},\bullet,1,a',c',l',r')$ is a symmetric monoidal category, \item[(CSC2)] for each $x\in\mathcal{S}$, the natural isomorphism $c'_{x,-}:L^x\Rightarrow R^x$ is a symmetric monoidal natural isomorphism $\mathscr{L}^x\Rightarrow\mathscr{R}^x$; more precisely, this means that the diagrams \[ \xymatrix{(x+y) z\ar[r]^{d'_{x,y,z}}\ar[d]_{c'_{x+y,z}} & x z+y z\ar[d]^{c'_{x,z}+c'_{y,z}} \\ z(x+y)\ar[r]_{d_{z,x,y}} & z x+z y}\qquad \xymatrix{x0\ar[rr]^{c'_{x,0}}\ar[dr]_{n_x} && 0x\ar[ld]^{n'_x} \\ & 0 &} \] commute for all objects $x,y,z\in\mathcal{S}$. \end{itemize} A symmetric rig category is called {\em semistrict} (resp. {\em strict}) when the underlying rig category $\mathbb{S}$ is semistrict (resp. strict and with $c'$ trivial). ~\footnote{\,Some people, mostly those working on the K-theory of this type of categories, call the semistrict symmetric rig categories {\em bipermutative categories} because a semistrict symmetric monoidal category is also called a {\em permutative category}.} It is called a {\em symmetric 2-rig} when the underlying rig category is a 2-rig.
\subsubsection{\sc Remark.} The previous definition coincides with the structure described by Laplaza in \cite{Laplaza-1972} except that in Laplaza's paper the distributors are only required to be monomorphisms. The correspondence between Laplaza's axioms and the axioms in Definitions~\ref{categoria_rig} and \ref{categoria_rig_simetrica} goes as follows: \begin{itemize} \item[(i)] (A1.1)-(A1.4) correspond to Laplaza's axioms (I), (III)-(V) and (XIX)-(XXII), \item[(ii)] (A2.1)-(A2.2) correspond to Laplaza's axioms (VIII) and (XVII), \item[(iii)] (A3.1)-(A3.2) correspond to Laplaza's axioms (VI)-(VII), (XVI) and (XVIII), \item[(iv)] (A4.1)-(A4.2) correspond to Laplaza's axioms (IX) and (XI)-(XII), \item[(v)] (A5.1)-(A5.2) correspond to Laplaza's axioms (XIII)-(XIV) and (XXIII)-(XXIV), \item[(vi)] (SC9) corresponds to Laplaza's axiom (X), and \item[(vii)] (CSC2) corresponds to Laplaza's axioms (II) and (XV). \end{itemize} \subsection{Some examples of (symmetric) rig categories} Describing a rig category requires specifying the data $\mathcal{S},+,\bullet,0,1,a,c,l,r,a',l',r',d,d',n,n'$, and checking that they satisfy all of the above axioms. Usually, this is long and tedious. Hence in this subsection we just mention a few standard examples of rig categories without getting into the details. The particular type of rig categories we are interested in is discussed in more detail in \S~\ref{categories_distributives}. \subsubsection{\sc Example.} Every rig $S=(S,+,\bullet,0,1)$ can be thought of as a 2-rig with only identity morphisms, and all required isomorphisms trivial. They are symmetric 2-rigs when $S$ is commutative. \subsubsection{\sc Example.}\label{core} If $\mathbb{S}$ is a rig category, and $\widehat{\mathcal{S}}$ is the groupoid with the same objects as $\mathcal{S}$ and only the isomorphisms between them as morphisms, then $\widehat{\mathcal{S}}$ inherits by restriction a canonical rig category structure.
The 2-rig so obtained is denoted by $\widehat{\mathbb{S}}$. It is a symmetric 2-rig when $\mathbb{S}$ is a symmetric rig category, and left (resp. right) semistrict when $\mathbb{S}$ is so. \subsubsection{\sc Example.}\label{categories_semianell_no_cartesianes} Every symmetric monoidal closed category (in particular, every cartesian closed category) with finite coproducts is canonically a symmetric rig category with $+$ and $\bullet$ respectively given by the categorical coproduct $\sqcup$ and the tensor product $\otimes$. Closedness ensures that the tensor product indeed distributes over coproducts. Thus, denoting the internal homs by $[-,-]$, we have \begin{align*} Hom((x\sqcup y)\otimes z,t)&\cong Hom(x\sqcup y,[z,t]) \\ &\cong Hom(x,[z,t])\times Hom(y,[z,t]) \\ &\cong Hom(x\otimes z,t)\times Hom(y\otimes z,t) \\ &\cong Hom((x\otimes z)\sqcup(y\otimes z),t) \end{align*} for all objects $x,y,z,t$. Then the existence of the isomorphism $d'_{x,y,z}$ follows from the Yoneda lemma. A similar argument gives the isomorphism $d_{x,y,z}$.
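In the cartesian example $\mathcal{S} et$ the right distributor can also be exhibited concretely. The following sketch is purely illustrative (the tagged-pair encoding of the disjoint union is our own choice, not part of the text); it checks on small finite sets that the canonical map $(x\sqcup y)\times z\to(x\times z)\sqcup(y\times z)$ is a bijection.

```python
# Canonical right distributor d'_{x,y,z} : (x ⊔ y) × z → (x × z) ⊔ (y × z)
# in Set, modelling the disjoint union by tagged pairs (0, a) / (1, b).

def disjoint_union(x, y):
    return [(0, a) for a in x] + [(1, b) for b in y]

def product(x, y):
    return [(a, b) for a in x for b in y]

def d_prime(x, y, z):
    """The canonical map ((tag, a), c) -> (tag, (a, c))."""
    return {((tag, a), c): (tag, (a, c))
            for (tag, a), c in product(disjoint_union(x, y), z)}

x, y, z = ['a', 'b'], ['c'], [1, 2]
d = d_prime(x, y, z)
source = product(disjoint_union(x, y), z)
target = disjoint_union(product(x, z), product(y, z))
# d' is total and injective, with image exactly (x × z) ⊔ (y × z):
assert len(d) == len(source) == len(set(d.values()))
assert sorted(map(repr, d.values())) == sorted(map(repr, target))
```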
Symmetric rig categories of this type include those associated to the three cartesian closed categories $\mathcal{S} et$ of sets and maps (and its full subcategory $\mathcal{F}\mathcal{S} et$ with objects the finite sets), $\mathcal{S} et_G$ of $G$-sets and homomorphisms of $G$-sets for any group $G$ (and its full subcategory $\mathcal{F}\mathcal{S} et_{G}$ with objects the finite $G$-sets), and $\mathcal{C} at$ of (small) categories and functors, and those associated to the two non-cartesian symmetric monoidal closed categories $\mathcal{V} ect_k$ of vector spaces over a field $k$ and $k$-linear maps (and its full subcategory $\mathcal{F}\mathcal{V} ect_k$ with objects the finite dimensional vector spaces), and $\mathcal{R} ep_k(G)$ of $k$-linear representations of a group $G$ and homomorphisms of representations for any group $G$ (and its full subcategory $\mathcal{F}\mathcal{R} ep_k(G)$ with objects the finite dimensional representations). \subsubsection{\sc Example.} \label{semianell_endomorfismes} The set of endomorphisms of every abelian monoid is canonically a rig, with the sum of monoid endomorphisms defined pointwise, and the product given by the composition. Similarly, for every symmetric monoidal category $\mathscr{M}=(\mathcal{M},\oplus,0_\mathcal{M},\mathsf{a},\mathsf{c},\mathsf{l},\mathsf{r})$ the category $\mathcal{E} nd\,(\mathscr{M})$ of symmetric monoidal endofunctors of $\mathscr{M}$, and symmetric monoidal natural transformations between them is canonically a (non-symmetric) rig category, with the additive symmetric monoidal structure given by the pointwise sum of symmetric monoidal endofunctors and monoidal natural transformations, and with the multiplicative monoidal structure given by their composition. The zero object is the zero functor ${\bf 0}_\mathcal{M}$, and the canonical isomorphisms $a,c,l,r$ are pointwise given by the corresponding isomorphisms in $\mathscr{M}$.
In particular, $\mathcal{E} nd(\mathscr{M})$ is a semistrict (resp. strict) symmetric monoidal category when $\mathscr{M}$ is so. The multiplicative monoidal structure is always strict, with the identity functor of $\mathcal{M}$ as unit object. Moreover, if we define $\mathscr{F}\bullet\mathscr{F}'=\mathscr{F}'\circ\mathscr{F}$, the left distributors are all trivial, while the right distributors \[ d'_{\mathscr{F}_1,\mathscr{F}_2,\mathscr{F}_3}:\mathscr{F}_3\circ(\mathscr{F}_1+\mathscr{F}_2)\Rightarrow\mathscr{F}_3\circ\mathscr{F}_1+\mathscr{F}_3\circ\mathscr{F}_2 \] are given by the monoidal structure of $\mathscr{F}_3$. As to the absorbing isomorphisms $n_{\mathscr{F}},n'_{\mathscr{F}}$, they are all trivial. In particular, the rig category $\mathbb{E} nd(\mathscr{M})$ so defined is left semistrict when $\mathscr{M}$ is semistrict, and a 2-rig when $\mathcal{M}$ is a groupoid. \subsection{Distributive categories}\label{categories_distributives} Recall that a {\em distributive category} is a cartesian and cocartesian category such that the canonical map $xy+xz\to x(y+z)$ is invertible for all objects $x,y,z$. Distributive categories generalize the cartesian closed categories with finite coproducts of Example~\ref{categories_semianell_no_cartesianes}, and, like these, they carry a ``canonical'' symmetric rig category structure. In this paragraph this structure is described in detail. Let us first recall that every cocartesian category $\mathcal{C}$ (i.e. a category $\mathcal{C}$ with all finite coproducts) has a ``canonical'' symmetric monoidal structure associated to the choice of a particular coproduct $(x+y,\iota^1_{x,y},\iota^2_{x,y})$ for each ordered pair of objects $(x,y)$, and a particular initial object $0$.
It is given as follows: \begin{itemize} \item[(D1)] the tensor product $\mathcal{C}\times\mathcal{C}\to\mathcal{C}$ is given on objects $(x,y)$ and morphisms $(f,g):(x,y)\to(x',y')$ by \[ (x,y)\mapsto x+y,\quad (f,g)\mapsto f+g, \] where $f+g$ is the morphism uniquely determined by the diagram \[ \xymatrix{ x\ar[r]^-{\iota^1_{x,y}}\ar[d]_-f & x+y\ar@{.>}[d]^-{\exists !\,f+g} & y\ar[l]_-{\iota^2_{x,y}}\ar[d]^-g \\ x'\ar[r]_-{\iota^1_{x',y'}} & x'+y' & y'\ar[l]^-{\iota^2_{x',y'}} } \] and the universal property of $x+y$; \item[(D2)] the unit object is the chosen initial object $0$; \item[(D3)] for all objects $x,y,z\in\mathcal{C}$ the associator $a_{x,y,z}$ is the morphism uniquely determined by the left hand side diagram \[ \xymatrix{ x\ar[d]_-{\iota^1_{x,y}}\ar[r]^-{\iota^1_{x,y+ z}} & x+(y+z)\ar@{.>}[d]_-{a_{x,y,z}} & y+z\ar[l]_-{\ \ \iota^2_{x,y+ z}}\ar[ld]^-{\iota^2_{x,y}\,+\, id_z} \\ x+y\ar[r]_-{\iota^1_{x+y,z}} & (x+y)+z & } \qquad \xymatrix{ x+y\ar[r]^-{\iota^1_{x+ y,z}}\ar[rd]_-{id_x\,+\,\iota^1_{y,z}} & (x+y)+z\ar@{.>}[d]^-{a^{-1}_{x,y,z}} & z\ar[l]_-{\iota^2_{x+y,z}}\ar[d]^-{\iota^2_{y,z}} \\ & x+(y+z) & y+ z\ar[l]^-{\iota^2_{x,y+ z}} } \] and the universal properties of $x+(y+z)$ (it is indeed invertible with inverse the morphism uniquely determined by the right hand side diagram and the universal property of $(x+y)+z$); \item[(D4)] for all objects $x,y\in\mathcal{C}$ the commutator $c_{x,y}$ is the morphism uniquely determined by the diagram \[ \xymatrix{ x\ar[r]^-{\iota^1_{x,y}}\ar[dr]_-{\iota^2_{y,x}} & x+y\ar@{.>}[d]^-{c_{x,y}} & y\ar[l]_-{\iota^2_{x,y}}\ar[ld]^{\iota^1_{y,x}} \\ & y+x & } \] and the universal property of $x+y$ (it is indeed invertible with inverse $c^{-1}_{x,y}=c_{y,x}$); \item[(D5)] for every object $x\in\mathcal{C}$ the left and right unitors $l_x,r_x$ are the morphisms uniquely determined by the diagrams \[ \xymatrix{ 0\ar[r]^-{!}\ar[dr]_-{!} & 0+x\ar@{.>}[d]_-{l_x} & x\ar[l]_-{\iota^2_{0,x}}\ar[ld]^{id_x} \\ & x & } \qquad
\xymatrix{ x\ar[r]^-{\iota^1_{x,0}}\ar[dr]_-{id_x} & x+0\ar@{.>}[d]^-{r_x} & 0\ar[l]_-{!}\ar[ld]^{!} \\ & x & } \] and the universal properties of $0+x$ and $x+0$ (they are indeed invertible with inverses $l^{-1}_x,r^{-1}_x$ the morphisms $\iota^2_{0,x},\iota^1_{x,0}$ respectively). \end{itemize} Different choices of binary coproducts and initial object lead to different but equivalent symmetric monoidal structures on $\mathcal{C}$ and hence, we may indeed speak of the ``canonical'' symmetric monoidal structure on each cocartesian category $\mathcal{C}$. Similarly, every cartesian category $\mathcal{C}$ (i.e. a category $\mathcal{C}$ with all finite products) is ``canonically'' a symmetric monoidal category with the associator $a'$, commutator $c'$, and left and right unitors $l',r'$ defined by the dual diagrams for some particular choices of binary products and final object. When $\mathcal{C}$ is both cartesian and cocartesian, these two symmetric monoidal structures $(+,0,a,c,l,r)$ and $(\bullet,1,a',c',l',r')$ are related by the natural {\em left} and {\em right distributor maps} \begin{align*} \overline{d}_{x,y,z}&:xy+xz\to x(y+z) \\ \overline{d}\,'_{x,y,z}&:xz+yz\to (x+y)z \end{align*} uniquely determined by the diagrams \[ \xymatrix{ xy\ar[r]^-{\iota^1_{xy,xz}}\ar[dr]_-{id_x\bullet\,\iota^1_{y,z}} & xy+xz\ar@{.>}[d]^-{\overline{d}_{x,y,z}} & xz\ar[l]_-{\iota^2_{xy,xz}}\ar[ld]^{id_x\bullet\,\iota^2_{y,z}} \\ & x(y+z) & } \quad \xymatrix{ xz\ar[r]^-{\iota^1_{xz,yz}}\ar[dr]_-{\iota^1_{x,y}\bullet\,id_z} & xz+yz\ar@{.>}[d]^-{\overline{d}\,'_{x,y,z}} & yz\ar[l]_-{\iota^2_{xz,yz}}\ar[ld]^{\iota^2_{x,y}\bullet\,id_z} \\ & (x+y)z & } \] and the universal properties of the coproducts $xy+xz$ and $xz+yz$. In fact, these distributors are not independent of each other. 
Instead, it may be shown that they are related by the commutativity of the diagram \[ \xymatrix{ xz+yz\ar[r]^-{\overline{d}\,'_{x,y,z}}\ar[d]_-{c'_{x,z}+c'_{y,z}} & (x+y)z\ar[d]^-{c'_{x+y,z}} \\ zx+zy\ar[r]_-{\overline{d}_{z,x,y}} & z(x+y)\,. } \] In particular, $\overline{d}\,'_{x,y,z}$ is invertible if and only if $\overline{d}_{z,x,y}$ is invertible. The distributors $\overline{d}_{x,y,z},\overline{d}\,'_{x,y,z}$ are in general non-invertible, and as said before, $\mathcal{C}$ is called {\em distributive} precisely when $\overline{d}_{x,y,z}$ (or equivalently, $\overline{d}\,'_{x,y,z}$) is invertible for all objects $x,y,z$. In fact, for a cartesian and cocartesian category to be distributive it is enough that there exists {\em some} natural isomorphism $xy+xz\cong x(y+z)$ (see \cite{Lack-2012}). The point is that, when $\mathcal{C}$ is distributive, there are also canonical isomorphisms $x0\cong 0\cong 0x$ for each $x\in\mathcal{C}$ such that the whole structure makes $\mathcal{C}$ into a symmetric rig category. These are the maps $\pi^1_{0,x}:0x\to 0$ and $\pi^2_{x,0}:x0\to 0$, whose respective inverses are the unique maps $0\to 0x$ and $0\to x0$. Thus we have the following well-known result (at least among experts). \subsubsection{\sc Proposition.}\label{estructura_categoria_rig_simetrica} Every distributive category $\mathcal{C}$ equipped with the above symmetric monoidal structures $(+,0,a,c,l,r)$ and $(\bullet,1,a',c',l',r')$, and with the distributors and absorbing isomorphisms given by \[ d_{x,y,z}=(\overline{d}_{x,y,z})^{-1},\quad d'_{x,y,z}=(\overline{d}\,'_{x,y,z})^{-1}, \quad n_x=\pi^2_{x,0},\quad n'_x=\pi^1_{0,x} \] is a symmetric rig category. A standard example of a distributive category is the category $\mathcal{F}\mathcal{S} et$ of finite sets and maps between them. In Section~3 the symmetric rig category structure of a skeleton of it, corresponding to a particular choice of products and coproducts, is explicitly described.
\subsection{The 2-category of (symmetric) rig categories} (Symmetric) rig categories are the objects of a 2-category. In fact, there are various useful notions of 1-cell between (symmetric) rig categories, associated to the various notions of 1-cell between (symmetric) monoidal categories, either lax, colax, bilax, strong or strict (symmetric) monoidal functor (see \cite{Aguiar-Mahajan-2010}). Moreover, we may also consider 1-cells whose character is different for the additive and the multiplicative monoidal structures. For instance, a 1-cell may be additively strong and multiplicatively colax, and examples of this mixed kind actually arise in some natural situations. Thus there are actually various 2-categories of (symmetric) rig categories. Although we shall define the various types of 1-cell and give examples of several of them, in the end we shall restrict to the strong morphisms and the associated 2-categories. Recall that, given two rigs $S=(S,+,\bullet,0,1)$ and $\tilde{S}=(\tilde{S},\tilde{+},\tilde{\bullet},\tilde{0},\tilde{1})$, a rig homomorphism from $S$ to $\tilde{S}$ is a map $f:S\to\tilde{S}$ which is both a monoid homomorphism from $(S,+,0)$ to $(\tilde{S},\tilde{+},\tilde{0})$ and a monoid homomorphism from $(S,\bullet,1)$ to $(\tilde{S},\tilde{\bullet},\tilde{1})$. It follows that \begin{align} f\circ L^x&=\tilde{L}^{fx}\circ f \label{eq1} \\ f\circ R^x&=\tilde{R}^{fx}\circ f \label{eq2} \end{align} for each $x\in S$, where $L^x,R^x$ and $\tilde{L}^{fx},\tilde{R}^{fx}$ respectively denote the left and right translation maps of $S$ and $\tilde{S}$.
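As a decategorified toy instance of these equalities (our own example, not taken from the text): reduction modulo $4$ is a rig homomorphism $\mathbb{N}\to\mathbb{Z}/4\mathbb{Z}$, and it intertwines left translations on the nose; the same holds for right translations, since both rigs here are commutative.

```python
# Rig homomorphism f : N -> Z/4, n |-> n mod 4, and the intertwining
# relation f ∘ L^x = L~^{f(x)} ∘ f between left translations.

def f(n):
    return n % 4

def L(x):        # left translation y |-> x*y in the rig N
    return lambda y: x * y

def L_tilde(x):  # left translation in the rig Z/4
    return lambda y: (x * y) % 4

for x in range(12):
    for y in range(12):
        assert f(L(x)(y)) == L_tilde(f(x))(f(y))
```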
In categorifying this definition, the map $f$ must be replaced by a functor $F:\mathcal{S}\to\tilde{\mathcal{S}}$ together with a pair $(\varphi^+,\varepsilon^+)$ making it a symmetric monoidal functor $\mathscr{S}^+\to\tilde{\mathscr{S}}^+$ of some type, and a pair $(\varphi^{\,\bullet},\varepsilon^{\,\bullet})$ making it a monoidal functor $\mathscr{S}^{\,\bullet}\to\tilde{\mathscr{S}}^{\,\bullet}$ of perhaps a different type. Once this is done, the equalities (\ref{eq1})-(\ref{eq2}) no longer hold. Instead, we just have natural transformations between the involved functors. For instance, in case the multiplicative monoidal structure is lax, we have the natural morphisms \begin{align*} \varphi^{\,\bullet}_{x,-}&:\tilde{L}^{Fx}\circ F\Rightarrow F\circ L^x\,, \\ \varphi^{\,\bullet}_{-,x}&:\tilde{R}^{Fx}\circ F\Rightarrow F\circ R^x. \end{align*} Moreover, the domain and codomain functors of both transformations are always symmetric monoidal of some kind, with monoidal structure given by the monoidal structure on $F$ and the distributors. However, $\varphi^{\,\bullet}_{x,-}$ and $\varphi^{\,\bullet}_{-,x}$ need not be monoidal. Thus we are naturally led to the following notions of 1-cell between rig categories. \subsubsection{\sc Definition.}\label{morfisme_categories_semianell} Let $\mathbb{S}$ and $\tilde{\mathbb{S}}$ be two rig categories. A {\em (colax,lax) morphism of rig categories} from $\mathbb{S}$ to $\tilde{\mathbb{S}}$ is a functor $F:\mathcal{S}\to\tilde{\mathcal{S}}$ together with the following data: \begin{itemize} \item[(HSC1)] an additive symmetric colax monoidal structure $(\varphi^+,\varepsilon^+)$ on $F$, and \item[(HSC2)] a multiplicative lax monoidal structure $(\varphi^{\,\bullet},\varepsilon^{\,\bullet})$ on $F$.
\end{itemize} Moreover, these data must satisfy the following axioms: \begin{itemize} \item[(HSC3)] for each $x\in\mathcal{S}$ the natural morphism $\varphi^{\,\bullet}_{x,-}:\tilde{L}^{Fx}\circ F\Rightarrow F\circ L^x$ is a symmetric monoidal natural transformation $\tilde{\mathscr{L}}^{Fx}\circ\mathscr{F}^a\Rightarrow\mathscr{F}^a\circ\mathscr{L}^x$, i.e. the diagrams \[ \xymatrix{ F(x(y+z))\ar[r]^-{F(d_{x,y,z})} & F(xy+xz)\ar[r]^-{\varphi^+_{xy,xz}} & F(xy)\,+\,F(xz) \\ Fx\,F(y+z)\ar[u]^{\varphi^{\,\bullet}_{x,y+z}}\ar[r]_-{id_{Fx}\bullet\,\varphi^+_{y,z}} & Fx\,(Fy\,+\,Fz)\ar[r]_-{\tilde{d}_{Fx,Fy,Fz}} & Fx\,Fy\,+\,Fx\,Fz\ar[u]_{\varphi^{\,\bullet}_{x,y}\,+\,\varphi^{\,\bullet}_{x,z}} } \quad \xymatrix{ F(x0)\ar[r]^-{F(n_x)} & F0\ar[r]^-{\varepsilon^+} & \tilde{0} \\ Fx\,F0\ar[u]^{\varphi^{\,\bullet}_{x,0}}\ar[r]_-{id_{Fx}\bullet\,\varepsilon^+} & (Fx)\,\tilde{0}\ar[r]_-{\tilde{n}_{Fx}} & \tilde{0}\ar@{=}[u] } \] commute for all objects $y,z\in\mathcal{S}$; \item[(HSC4)] for each $x\in\mathcal{S}$ the natural morphism $\varphi^{\,\bullet}_{-,x}:\tilde{R}^{Fx}\circ F\Rightarrow F\circ R^x$ is a symmetric monoidal natural transformation $\tilde{\mathscr{R}}^{Fx}\circ\mathscr{F}^a\Rightarrow\mathscr{F}^a\circ\mathscr{R}^x$, i.e. the diagrams \[ \xymatrix{F((y+z)x)\ar[r]^-{F(d'_{y,z,x})} & F(yx+zx)\ar[r]^-{\varphi^+_{yx,zx}} & F(yx)\,+\,F(zx) \\ F(y+z)\,Fx\ar[u]^{\varphi^{\,\bullet}_{y+z,x}}\ar[r]_-{\varphi^+_{y,z}\bullet\, id_{Fx}} & (Fy\,+\,Fz)\,Fx\ar[r]_-{\tilde{d}'_{Fy,Fz,Fx}} & Fy\,Fx\,+\,Fz\,Fx\ar[u]_{\varphi^{\,\bullet}_{y,x}\,+\,\varphi^{\,\bullet}_{z,x}} } \quad \xymatrix{F(0x)\ar[r]^-{F(n'_x)} & F0\ar[r]^-{\varepsilon^+} & \tilde{0} \\ F0\,Fx\ar[u]^{\varphi^{\,\bullet}_{0,x}}\ar[r]_-{\varepsilon^+\bullet\,id_{Fx}} & \tilde{0}\,(Fx)\ar[r]_-{\tilde{n}'_{Fx}} & \tilde{0}\ar@{=}[u] } \] commute for all objects $y,z\in\mathcal{S}$.
\end{itemize} \noindent For any other choices of $\alpha,\beta\in\{lax,\,colax,\,strong,\,strict\}$, $(\alpha,\beta)$ {\em morphisms of rig categories} are defined similarly. When $\alpha=\beta$, we shall speak of an $\alpha$ morphism. Finally, when $\mathbb{S},\tilde{\mathbb{S}}$ are symmetric rig categories, an $(\alpha,\beta)$ {\em morphism of symmetric rig categories} from $\mathbb{S}$ to $\tilde{\mathbb{S}}$ is an $(\alpha,\beta)$-morphism such that $(\varphi^{\,\bullet},\varepsilon^{\,\bullet})$ is a symmetric $\beta$-monoidal structure on $F$, and an $\alpha$ {\em morphism of symmetric rig categories} is an $\alpha$-morphism such that $(\varphi^{\,\bullet},\varepsilon^{\,\bullet})$ is a symmetric $\alpha$-monoidal structure on $F$. For short, we shall denote by $\mathbb{F}$ the whole data $(F,\varphi^+,\varepsilon^+,\varphi^{\,\bullet},\varepsilon^{\,\bullet})$ defining a morphism of rig categories of any type. \subsubsection{\sc Remark.} A strong morphism of rig categories corresponds to the one-object case of the more general notion of a {\em {\bf SMC}-functor} between {\bf SMC}-categories introduced by Guillou (\cite{Guillou-2010}, Definition~4.1). \subsubsection{\sc Example.} Every rig homomorphism between two rigs $S$ and $\tilde{S}$ is a strict morphism between the associated discrete 2-rigs, and conversely. \subsubsection{\sc Example.} For every rig category $\mathbb{S}$ the inclusion functor $J:\widehat{\mathcal{S}}\hookrightarrow\mathcal{S}$ of the underlying groupoid $\widehat{\mathcal{S}}$ into $\mathcal{S}$ is a strict morphism of rig categories $\mathbb{J}:\widehat{\mathbb{S}}\hookrightarrow\mathbb{S}$ (cf. Example~\ref{core}). \subsubsection{\sc Example.} Let $\mathbb{S} et_G$ be the symmetric rig category of $G$-sets for some group $G$ (cf. Example~\ref{categories_semianell_no_cartesianes}). Then the forgetful functor $U_G:\mathcal{S} et_G\to\mathcal{S} et$ is a strict morphism of symmetric rig categories. 
However, its left adjoint $J_G:\mathcal{S} et\to\mathcal{S} et_G$, mapping each set $X$ to $X\times G$ with $G$-action given by $g'(x,g)=(x,g'g)$, and each map $f:X\to Y$ to the morphism of $G$-sets $f\times id_G:X\times G\to Y\times G$, is canonically just a (strong,colax) morphism $\mathbb{J}_G:\mathbb{S} et\to\mathbb{S} et_G$. The additive strong monoidal structure is given by the canonical right distributors \[ \varphi^+_{X,Y}=d'_{X,Y,G}:(X\sqcup Y)\times G\stackrel{\cong}{\to} (X\times G)\sqcup(Y\times G)\,, \] together with the unique map $\varepsilon^+:\emptyset\times G\stackrel{\cong}{\to} \emptyset$, while the multiplicative colax structure is given by the canonical non-invertible morphisms of $G$-sets \[ \varphi^{\,\bullet}_{X,Y}:(X\times Y)\times G\to (X\times G)\times(Y\times G) \] defined by $((x,y),g)\mapsto((x,g),(y,g))$, together with the unique map $\varepsilon^{\,\bullet}:\{*\}\times G\to\{*\}$. \subsubsection{\sc Example.} Let $\mathbb{V} ect_k$ be the symmetric rig category of vector spaces over a given field $k$ (cf. Example~\ref{categories_semianell_no_cartesianes}). Then the forgetful functor $U_k:\mathcal{V} ect_k\to\mathcal{S} et$ is a lax morphism of symmetric rig categories $\mathbb{U}_k:\mathbb{V} ect_k\to\mathbb{S} et$ with additive lax monoidal structure given by the canonical maps $\varphi^+_{V,W}:V\sqcup W\to V\times W$ defined by $v\mapsto(v,0)$ and $w\mapsto (0,w)$, together with the canonical map $\varepsilon^+:\emptyset\to \{0\}$, while the multiplicative lax structure is given by the canonical maps $\varphi^{\,\bullet}_{V,W}:V\times W\to V\otimes_k W$ given by $(v,w)\mapsto v\otimes w$, and the map $\varepsilon^{\,\bullet}:\{*\}\to k$ sending $*$ to the unit $1\in k$.
By contrast, its left adjoint $J_k:\mathcal{S} et\to\mathcal{V} ect_k$, mapping each set $X$ to the vector space $k[X]$ spanned by $X$, is canonically a strong morphism of symmetric rig categories $\mathbb{J}_k:\mathbb{S} et\to\mathbb{V} ect_k$ with $\varphi^+_{X,Y}$, $\varepsilon^+$, $\varphi^{\,\bullet}_{X,Y}$, $\varepsilon^{\,\bullet}$ the usual natural isomorphisms $k[X\sqcup Y]\cong k[X]\oplus k[Y]$, $k[\emptyset]\cong 0$, $k[X\times Y]\cong k[X]\otimes k[Y]$, and $k[\{*\}]\cong k$. From now on, we shall restrict to strong morphisms of (symmetric) rig categories, and they will be called {\em homomorphisms}. \subsubsection{\sc Definition.} \label{transformacio_rig} Let $\mathbb{S},\tilde{\mathbb{S}}$ be two (symmetric) rig categories, and $\mathbb{F}_1,\mathbb{F}_2:\mathbb{S}\to\tilde{\mathbb{S}}$ two homomorphisms between them. A {\em rig transformation} from $\mathbb{F}_1$ to $\mathbb{F}_2$ is a natural transformation $\xi:F_1\Rightarrow F_2$ that is both $+$-monoidal and $\bullet$\,-monoidal. \subsubsection{\sc Remark.} There is a more general notion of 2-cell $\mathbb{F}_1\Rightarrow\mathbb{F}_2$ corresponding to Guillou's definition of monoidal transformation between {\bf SMC}-functors whose domain and codomain {\bf SMC}-categories have only one object and hence, are rig categories (\cite{Guillou-2010}, Definition~4.2). It consists of an object $\tilde{x}$ in $\tilde{\mathcal{S}}$ together with a family of natural morphisms $\eta_x:(F_2\,x)\,\tilde{x}\to\tilde{x}\,(F_1x)$ in $\tilde{\mathcal{S}}$, labelled by the objects $x$ in $\mathcal{S}$, satisfying appropriate conditions. This is analogous to the existence of a more general notion of 2-cell between (symmetric) monoidal functors, corresponding to the pseudonatural transformations between them when viewed as pseudofunctors between one-object bicategories.
Then the previous notion of rig transformation is to be thought of as the analog in the rig category setting of Lack's icons \cite{Lack-2010}. Rig categories together with the rig category homomorphisms as 1-cells, and the rig transformations between these as 2-cells constitute a 2-category $\mathbf{RigCat}$. The various compositions of 1- and 2-cells are defined in the obvious way. Similarly, symmetric rig categories with the symmetric rig category homomorphisms, and the rig transformations as 2-cells also constitute a 2-category $\mathbf{SRigCat}$. Notice that, unlike the category $\mathcal{C}\mathcal{R} ig$ of commutative rigs, which is a full subcategory of $\mathcal{R} ig$, $\mathbf{SRigCat}$ is not a full sub-2-category of $\mathbf{RigCat}$ because being a {\em symmetric} rig category is not a property-like structure. A given rig category can be symmetric in various non-equivalent ways. As in any 2-category, two objects $\mathbb{S},\tilde{\mathbb{S}}$ in $\mathbf{RigCat}$ (or in $\mathbf{SRigCat}$) are said to be equivalent when there exist (symmetric) rig category homomorphisms $\mathbb{F}:\mathbb{S}\to\tilde{\mathbb{S}}$ and $\tilde{\mathbb{F}}:\tilde{\mathbb{S}}\to\mathbb{S}$ and invertible rig transformations $\xi:\tilde{\mathbb{F}}\circ\mathbb{F}\Rightarrow id_\mathbb{S}$ and $\tilde{\xi}:\mathbb{F}\circ\tilde{\mathbb{F}}\Rightarrow id_{\tilde{\mathbb{S}}}$. \subsection{Strictification theorem} A generic (symmetric) rig category involves many natural isomorphisms and lots of required commutative diagrams. Hence it is useful to know that some of the natural isomorphisms can be assumed to be identities because the final structure is equivalent to a similar one but with some of these isomorphisms trivial. Theorems of this type are usually known as strictification theorems.
For symmetric rig categories the theorem is due to May (\cite{May-1977}, Proposition~VI.3.5), and for generic rig categories it is a consequence of the more general strictification theorem for {\bf SMC}-categories due to Guillou \cite{Guillou-2010}. Their statements are as follows. \subsubsection{\sc Theorem.}(\cite{May-1977},\cite{Guillou-2010})\label{teorema_estrictificacio} Every rig category (resp. symmetric rig category) $\mathbb{S}$ is equivalent in $\mathbf{RigCat}$ (resp. in $\mathbf{SRigCat}$) to a semistrict rig category (resp. semistrict symmetric rig category). The choice of which distributor is made trivial in a semistrict version of a given (symmetric) rig category is logically arbitrary. The only relevant point here is that, in the absence of a strict commutativity of $+$, a common and usually unavoidable situation, it is unreasonable to demand that both distributors be identities. \subsubsection{\sc Example} \label{Mat_k} Let $k$ be any ground field, and let $\mathcal{M} at_k$ be the category with objects the nonnegative integers $n\geq 0$, with $0$ as zero object, and with the $m\times n$ matrices with entries in $k$ as morphisms $n\to m$ for each $m,n\geq 1$. When equipped with the sum and product given on objects in the usual way, and on morphisms by \[ A+B=\left(\begin{array}{c|c} A&0 \\ \hline 0&B\end{array}\right),\qquad A\bullet B=\left(\begin{array}{ccc} A\,b_{11}&\cdots&A\,b_{1n} \\ \vdots&&\vdots \\ A\,b_{m1}&\cdots&A\,b_{mn}\end{array}\right) \] for any matrices $A,B$ with $B=(b_{ij})$, it becomes a left semistrict rig category $\mathbb{M} at_k$. It provides a left semistrict (and skeletal) version of the rig category $\mathbb{F}\mathbb{V} ect_k$ of finite dimensional vector spaces over $k$. The details in the previous example are omitted because they are not relevant in the sequel.
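As an illustrative aside (not taken from the text), the two operations of $\mathbb{M} at_k$ can be realised on concrete integer matrices, encoded as Python lists of rows; the function names \verb|mat_sum| and \verb|mat_bullet| are ours, and this is only a sketch of the block formulas displayed above.

```python
def mat_sum(A, B):
    """A + B: the block-diagonal matrix with A and B on the diagonal."""
    n = len(A[0]) if A else 0  # column count of A
    q = len(B[0]) if B else 0  # column count of B
    return [row + [0] * q for row in A] + [[0] * n + row for row in B]

def mat_bullet(A, B):
    """A (bullet) B: the block matrix whose (r, s) block is the scalar multiple b_rs * A."""
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    # row index (r, i) -> r*m + i, column index (s, j) -> s*n + j
    return [[A[i][j] * B[r][s] for s in range(q) for j in range(n)]
            for r in range(p) for i in range(m)]
```

Note that \verb|mat_bullet(A, B)| agrees with the Kronecker product $B\otimes A$ in the usual convention, which is one way to remember the block formula above.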
By contrast, the next section is devoted to a complete description of an explicit left semistrict version of the rig category of finite sets which is essential for what follows. \section{Semistrict version of the symmetric 2-rig of finite sets} The purpose of this paper is to show that the symmetric 2-rig $\widehat{\mathbb{F}\mathbb{S} et}$ is biinitial in the 2-category of rig categories. However, instead of working with $\widehat{\mathbb{F}\mathbb{S} et}$ we shall consider an equivalent, skeletal version of it we shall denote by $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$, which has the advantage of being semistrict. Since equivalent objects in a 2-category have equivalent categories of morphisms, it is indeed enough to prove that $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$ is biinitial in $\mathbf{RigCat}$. This considerably simplifies the diagrams, and makes computations much easier. This semistrict version of $\widehat{\mathbb{F}\mathbb{S} et}$ appears, for instance, as Example~VI.5.1 in \cite{May-1977}. However, the detailed description given here, including explicit descriptions of the distributors, seems to be new. Let us first recall from \S~\ref{categories_distributives} that, after fixing particular binary products and coproducts, and final and initial objects, every distributive category has a canonical symmetric rig category structure. In general, the resulting structural isomorphisms $a,c,l,r,a',c',l',r'$ are nontrivial. In some cases, however, and for suitable choices of these binary products, coproducts and final and initial objects, the associator and left and right unitors (but usually not the commutators) turn out to be trivial. This is so for the skeleton of $\mathcal{F}\mathcal{S} et$ having as objects the sets $[n]=\{1,\ldots,n\}$ for each $n\geq 1$, and $[0]=\emptyset$. We shall denote this skeleton by $\mathcal{F}\mathcal{S} et_{sk}$.
\subsection{Additive monoidal structure} Being a skeleton of $\mathcal{F}\mathcal{S} et$, the groupoid $\mathcal{F}\mathcal{S} et_{sk}$ has all binary products and coproducts, and $[1]$ and $[0]$ as (unique) final and initial objects, respectively. Moreover, choosing binary products and coproducts just amounts in this case to making appropriate choices of the respective projections and injections. \subsubsection{\sc Lemma.}\label{estructura_additiva_semistricta} For all objects $[m],[n]\in\mathcal{F}\mathcal{S} et_{sk}$ let us take as coproduct of the pair $([m],[n])$ the set $[m+n]$ with the injections $\iota^1_{[m],[n]}:[m]\to[m+n]$ and $\iota^2_{[m],[n]}:[n]\to[m+n]$ defined by \begin{align*} \iota^1_{[m],[n]}(i)&=i, \quad i=1,\ldots,m, \\ \iota^2_{[m],[n]}(j)&=m+j, \quad j=1,\ldots,n. \end{align*} Then for all maps $f:[m]\to[m']$, $g:[n]\to[n']$ their sum $f+g:[m+n]\to[m'+n']$ is given by \begin{equation}\label{suma-morfismes} (f+g)(k)=\left\{ \begin{array}{ll} f(k), & \mbox{if $k\in\{1,\ldots,m\}$,} \\ m'+g(k-m), & \mbox{if $k\in\{m+1,\ldots,m+n\}$,} \end{array} \right. \end{equation} and the resulting symmetric $+$-monoidal structure on $\mathcal{F}\mathcal{S} et_{sk}$ is semistrict with nontrivial commutators $c_{[m],[n]}:[m+n]\to[n+m]$ given by \begin{equation}\label{commutador} c_{[m],[n]}(k)=\left\{ \begin{array}{ll} n+k, & \mbox{if $k\in\{1,\ldots,m\}$,} \\ k-m, & \mbox{if $k\in\{m+1,\ldots,m+n\}$.} \end{array} \right. \end{equation} In particular, for each $n\geq 1$ the commutators $c_{[0],[n]}$ and $c_{[n],[0]}$ are both identities while $c_{[1],[n]}$ and $c_{[n],[1]}$ are the $(n+1)$-cycles $(1,n+1,n,n-1,\ldots,2)_{n+1}$ and $(1,2,3,\ldots,n+1)_{n+1}$, respectively.
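As an aside, formulas (\ref{suma-morfismes}) and (\ref{commutador}) are concrete enough to be machine-checked. The following Python sketch is purely illustrative (the encoding and the names \verb|plus|, \verb|c_plus|, \verb|compose| are ours, not part of the text); a map $f:[m]\to[m']$ is stored as the tuple $(f(1),\ldots,f(m))$.

```python
def plus(f, g, m_prime):
    """The sum f + g of eq. (suma-morfismes); m_prime is the codomain size of f."""
    return tuple(f) + tuple(m_prime + v for v in g)

def c_plus(m, n):
    """The additive commutator c_{[m],[n]} : [m+n] -> [n+m] of eq. (commutador)."""
    return tuple(n + k if k <= m else k - m for k in range(1, m + n + 1))

def compose(f, g):
    """f after g, both encoded as tuples of values (1-indexed maps)."""
    return tuple(f[v - 1] for v in g)
```

For instance, `c_plus(1, 4)` is the $5$-cycle $(1,5,4,3,2)_5$ of the lemma, and one can check on examples the naturality $c_{[m'],[n']}\,(f+g)=(g+f)\,c_{[m],[n]}$ of the commutators.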
\noindent {\sc Proof.} It basically follows from the fact that these injections are such that \begin{eqnarray*} &\iota^1_{[m+n],[p]}\,\iota^1_{[m],[n]}=\iota^1_{[m],[n+p]}, \\ &\iota^2_{[m],[n+p]}=\iota^2_{[m],[n]}+id_{[p]}, \\ &\iota^2_{[0],[n]}=id_{[n]}=\iota^1_{[n],[0]} \end{eqnarray*} for each $m,n,p\geq 0$. The details are left to the reader. \qed \subsubsection{\sc Remark.} The injections $\iota^1_{[m],[n]},\iota^2_{[m],[n]}$ are nothing but the composites of the canonical injections of $[m],[n]$ into the disjoint union $[m]\sqcup[n]=([m]\times\{0\})\cup([n]\times\{1\})$ with the bijection $b^+_{m,n}:[m]\sqcup[n]\to[m+n]$ given by \begin{equation}\label{b+} b^+_{m,n}(k,\alpha)=\left\{ \begin{array}{ll} k, & \mbox{if $\alpha=0$,} \\ m+k, & \mbox{if $\alpha=1$.} \end{array}\right. \end{equation} We shall denote by $\mathscr{F}\mathscr{S}et_{sk}^+$ (resp. $\widehat{\mathscr{F}\mathscr{S}et}_{sk}^+$) the semistrict (additive) symmetric monoidal category referred to in this lemma (resp. the underlying groupoid equipped with the inherited semistrict symmetric monoidal structure). \subsection{Multiplicative monoidal structure} The multiplicative monoidal structure on $\mathcal{F}\mathcal{S} et_{sk}$ is defined similarly. The object part of any product of $([m],[n])$ in $\mathcal{F}\mathcal{S} et_{sk}$ is necessarily the set $[mn]$, and the projections onto $[m]$ and $[n]$ can be obtained by choosing any natural bijection $b^{\,\bullet}_{m,n}:[m]\times[n]\to[mn]$, and composing its inverse with the canonical projections. The point is that by suitably choosing the bijections $b^{\,\bullet}_{m,n}$ the resulting symmetric $\bullet$\,-monoidal structure is again semistrict.
\subsubsection{\sc Lemma.}\label{estructura_multiplicativa_semistricta} For all objects $[m],[n]\in\mathcal{F}\mathcal{S} et_{sk}$ with $m,n\geq 1$ let us take as product of the pair $([m],[n])$ the set $[mn]$ with the projections $\pi^1_{[m],[n]}:[mn]\to[m]$ and $\pi^2_{[m],[n]}:[mn]\to[n]$ defined by \begin{align*} \pi^1_{[m],[n]}(k)&=\left\{\begin{array}{ll} m, & \mbox{if $m\,|\,k$,} \\ r, & \mbox{otherwise,} \end{array}\right. \\ \pi^2_{[m],[n]}(k)&=\left\{\begin{array}{ll} q, & \mbox{if $m\,|\,k$,} \\ q+1, & \mbox{otherwise,}\end{array}\right. \end{align*} where $q,r$ respectively denote the quotient and the remainder of the euclidean division of $k$ by $m$. Then for all maps $f:[m]\to[m']$, $g:[n]\to[n']$ with $m,m',n,n'\geq 1$, the product map $f\bullet g:[mn]\to[m'n']$ is given by \begin{equation}\label{producte-morfismes} (f\bullet g)(k)=\left\{ \begin{array}{ll} (g(q)-1)\,m'+f(m), & \mbox{if $m\,|\,k$,} \\ (g(q+1)-1)\,m'+f(r), & \mbox{otherwise,} \end{array} \right. \end{equation} and the resulting symmetric $\bullet$\,-monoidal structure on $\mathcal{F}\mathcal{S} et_{sk}$ is semistrict with nontrivial commutators $c'_{[m],[n]}:[mn]\to[nm]$, $m,n\geq 1$, given by \begin{equation}\label{commutador'} c'_{[m],[n]}(k)=\left\{ \begin{array}{ll} (m-1)n+q, & \mbox{if $m\,|\,k$,} \\ (r-1)n+q+1, & \mbox{otherwise,} \end{array} \right. \end{equation} where $q,r$ are as before. In particular, for each $n\geq 1$ the commutators $c'_{[1],[n]},c'_{[n],[1]}$ are both identities while $c'_{[2],[n]}$ and $c'_{[n],[2]}$ are given by \begin{align*} c'_{[2],[n]}(k)&=\left\{ \begin{array}{ll} n+(k/2), & \mbox{if $k$ is even,} \\ (k+1)/2, & \mbox{if $k$ is odd,} \end{array}\right. \\ c'_{[n],[2]}(k)&=\left\{ \begin{array}{ll} 2k-1, & \mbox{if $k\in\{1,\ldots,n\}$,} \\ 2(k-n), & \mbox{if $k\in\{n+1,\ldots,2n\}$.} \end{array}\right.
\end{align*} \noindent {\sc Proof.} For each $m,n\geq 1$ let $b^{\,\bullet}_{m,n}:[m]\times[n]\to[mn]$ be the bijection given by enumerating the elements of $[m]\times[n]$ by rows, i.e. \[ b^{\,\bullet}_{m,n}(i,j)=(j-1)\,m+i. \] Its inverse $(b^{\,\bullet})^{-1}_{m,n}:[mn]\to[m]\times[n]$ is given by \[ (b^{\,\bullet})^{-1}_{m,n}(k)=\left\{ \begin{array}{ll} (m,q), & \mbox{if $m\,|\,k$,} \\ (r,q+1), & \mbox{otherwise,} \end{array}\right. \] with $q,r$ as in the statement. Then the projections $\pi^1_{[m],[n]},\pi^2_{[m],[n]}$ make the diagram \[ \xymatrix @R=.8pc @C=.8pc{ & [m]\times[n]\ar[d]^{b^{\,\bullet}_{m,n}}\ar[dl]_{pr_1}\ar[dr]^{pr_2} & \\ [m] & [mn]\ar[l]^{\pi^1_{[m],[n]}}\ar[r]_{\pi^2_{[m],[n]}} & [n] } \] commute, and (\ref{producte-morfismes}) is nothing but the composite \[ f\bullet g=b^{\,\bullet}_{m',n'}\,(f\times g)\, (b^{\,\bullet})^{-1}_{m,n}, \] with $f\times g$ the usual cartesian product map given by $(i,j)\mapsto(f(i),g(j))$. It follows that the diagram \[ \xymatrix @R=.7pc @C=.9pc{ [m]\ar[ddd]_f && [mn]\ar[ll]_{\pi^1_{[m],[n]}}\ar[rr]^{\pi^2_{[m],[n]}}\ar[ddd]^{f\bullet\, g} && [n]\ar[ddd]^g \\ & [m]\times[n]\ar[ru]_{b^{\,\bullet}_{m,n}}\ar[lu]^{pr_1}\ar[d]^{f\times g} & & [m]\times[n]\ar[lu]^{b^{\,\bullet}_{m,n}}\ar[ru]_{pr_2}\ar[d]^{f\times g} & \\ & [m']\times[n']\ar[ld]_{pr_1}\ar[rd]^{b^{\,\bullet}_{m',n'}} && [m']\times[n']\ar[rd]^{pr_2}\ar[ld]_{b^{\,\bullet}_{m',n'}}& \\ [m'] && [m'n']\ar[ll]^{\pi^1_{[m'],[n']}}\ar[rr]_{\pi^2_{[m'],[n']}} && [n'] } \] commutes, and hence (\ref{producte-morfismes}) makes the dual of the diagram in (D1) commute for the projections $\pi^1_{[m],[n]},\pi^2_{[m],[n]}$ in the statement. 
Moreover, these projections are such that \begin{eqnarray*} &\pi^1_{[m],[n]}\,\pi^1_{[mn],[p]}=\pi^1_{[m],[np]}, \\ &\pi^2_{[m],[np]}=\pi^2_{[m],[n]}\times id_{[p]}, \\ &\pi^2_{[1],[n]}=id_{[n]}=\pi^1_{[n],[1]} \end{eqnarray*} for each $m,n,p\geq 1$, which together with the uniqueness of the morphisms making the duals of the diagrams in (D3) and (D5) commute implies that all the isomorphisms $a'_{[m],[n],[p]},l'_{[m]},r'_{[m]}$ are identities. To prove these equalities, let $l\in[mnp]$, and let \[ l=q'(mn)+r'=q''m+r'' \] be the euclidean divisions of $l$ by $mn$ and by $m$, respectively, and $r'=q'''m+r'''$ the euclidean division by $m$ of the remainder $r'$ of the first of these divisions. Clearly, we have \begin{itemize} \item[(i)] $r''=r'''$, \item[(ii)] $q''=q'n+q'''$, and \item[(iii)] $m\,|\,l$ if and only if $m\,|\,r'$. \end{itemize} Then, on the one hand, an easy computation shows that \[ (\pi^1_{[m],[n]}\circ\pi^1_{[mn],[p]})(l)= \left\{\begin{array}{ll} m, & \mbox{if $mn\,|\,l$,} \\ m, & \mbox{if $mn\nmid l$ and $m\,|\,r'$,} \\ r''', & \mbox{if $mn\nmid l$ and $m\nmid r'$.} \end{array}\right. \] Since the conditions $mn\,|\,l$ and $m\nmid r'$ cannot hold simultaneously we have \[ (\pi^1_{[m],[n]}\,\pi^1_{[mn],[p]})(l)= \left\{\begin{array}{ll} m, & \mbox{if $m\,|\,r'$,} \\ r''', & \mbox{otherwise,} \end{array}\right. \] while by definition \[ \pi^1_{[m],[np]}(l)= \left\{\begin{array}{ll} m, & \mbox{if $m\,|\,l$,} \\ r'', & \mbox{otherwise.} \end{array}\right. \] Hence it follows from (i) and (iii) that both maps are indeed the same. On the other hand, for each $l\in[mnp]$ we have \begin{align*} (\pi^2_{[m],[n]}\times id_{[p]})(l)&= \left\{ \begin{array}{ll} (q'-1)n+\pi^2_{[m],[n]}(mn), & \mbox{if $mn\,|\,l$,} \\ q'n+\pi^2_{[m],[n]}(r'), & \mbox{if $mn\nmid l$,} \end{array} \right.
\\ &= \left\{ \begin{array}{ll} q'n, & \mbox{if $mn\,|\,l$,} \\ q'n+q''', & \mbox{if $mn\nmid l$ and $m\,|\,r'$,} \\ q'n+q'''+1, & \mbox{if $mn\nmid l$ and $m\nmid r'$,} \end{array} \right. \\ &= \left\{ \begin{array}{ll} q'n+q''', & \mbox{if $m\,|\,r'$,} \\ q'n+q'''+1, & \mbox{otherwise} \end{array} \right. \end{align*} (in the last equality we use that $mn\,|\,l$ implies that $r'=0$ and hence, $q'''=0$), while by definition \[ \pi^2_{[m],[np]}(l)=\left\{\begin{array}{ll} q'', & \mbox{if $m\,|\,l$,} \\ q''+1, & \mbox{otherwise.} \end{array}\right. \] Hence it follows from (ii) and (iii) that both maps are again the same. Finally, it readily follows from their definitions that $\pi^1_{[n],[1]}$ and $\pi^2_{[1],[n]}$ are identities. It remains to see that the map (\ref{commutador'}) makes the dual of the diagram in (D4) commute. Now, (\ref{commutador'}) is nothing but the composite \[ c'_{[m],[n]}=b^{\,\bullet}_{n,m}\,\tilde{c}\,'_{[m],[n]}\, (b^{\,\bullet})^{-1}_{m,n}, \] where $\tilde{c}\,'_{[m],[n]}:[m]\times[n]\to[n]\times[m]$ is the permutation map $(i,j)\mapsto(j,i)$. The commutativity of the dual of the diagram in (D4) follows then from the diagram \[ \xymatrix @R=.7pc{ & [mn]\ar@/_1.5pc/[ldd]_{\pi^1_{[m],[n]}}\ar@/^1.5pc/[rdd]^{\pi^2_{[m],[n]}} & \\ & [m]\times[n]\ar[u]_{b^{\,\bullet}_{m,n}}\ar[ld]_{pr_1}\ar[rd]^{pr_2}\ar[dd]^{\tilde{c}\,'_{[m],[n]}} & \\ [m] & & [n] \\ & [n]\times[m]\ar[lu]^{pr_2}\ar[ur]_{pr_1}\ar[d]^{b^{\,\bullet}_{n,m}} & \\ & [nm]\ar@/^1.5pc/[luu]^{\pi^2_{[n],[m]}}\ar@/_1.5pc/[uur]_{\pi^1_{[n],[m]}} & } \] all of whose inner triangles commute. \qed \subsubsection{\sc Remark.} When we think of the elements in $[mn]$ as the points of the finite lattice $[m]\times [n]\subset\mathbb{R}^2$, the map $f\bullet g$ simply corresponds to applying $f$ to the columns and $g$ to the rows.
However, the explicit formula for $(f\bullet g)(k)$ depends on the way we decide to enumerate the points in the lattice and hence, on the chosen bijection $b^{\,\bullet}_{m,n}$. The same thing happens with the commutators, ultimately defined by the maps $(i,j)\mapsto(j,i)$. The above bijections $b^{\,\bullet}_{m,n}$ correspond to enumerating the points by rows, so that the formula (\ref{commutador'}) for the commutators corresponds to doing the following. Take two sets of $mn$ aligned points, one on top of the other. Divide the top set into $n$ boxes each one with $m$ points, and the bottom set into $m$ boxes each one with $n$ points. Then $c'_{[m],[n]}$ maps the successive points in the $j^{th}$ top box, for each $j\in\{1,\ldots,n\}$, into the $j^{th}$ point in the successive $m$ bottom boxes. \subsection{Distributors and absorbing isomorphisms} The next step is to describe the corresponding left and right distributors. Let $R_k:\mathbb{N}\to[k]$ be the modified remainder function mapping each nonnegative integer $x$ to the remainder of the euclidean division of $x$ by $k$ if $k\nmid x$, and to $k$ if $k\,|\,x$. Then we have the following. \subsubsection{\sc Lemma.}\label{distribuidors} Let $\mathcal{F}\mathcal{S} et_{sk}$ be equipped with the above additive and multiplicative semistrict symmetric monoidal structures. Then for every objects $[m],[n],[p]$ the left distributor $\overline{d}_{[m],[n],[p]}$ is the identity, while the right distributor $\overline{d}\,'_{[m],[n],[p]}:[mp+np]\to[mp+np]$ is the identity when one of the objects $[m],[n],[p]$ is $[0]$, and otherwise is given by \[ \overline{d}\,'_{[m],[n],[p]}(s)=\left\{\begin{array}{ll} s+\displaystyle{\frac{s-R_m(s)}{m}\,n}, & \mbox{if $s\in\{1,\ldots,mp\}$} \\ s-mp+\displaystyle{\frac{s-mp+n-R_n(s-mp)}{n}\,m}, & \mbox{if $s\in\{mp+1,\ldots,mp+np\}$}\end{array}\right.
\] In particular, $\overline{d}\,'_{[m],[n],[p]}$ is the identity when one of the integers $m,n,p$ is zero, and $\overline{d}\,'_{[n],[1],[1]},\overline{d}\,'_{[1],[n],[1]}:[n+1]\to[n+1]$ are both identities, and $\overline{d}\,'_{[1],[1],[n]}:[2n]\to[2n]$ is the permutation given by \begin{equation}\label{d'_1,1,n} \overline{d}'_{[1],[1],[n]}(s)=\left\{\begin{array}{ll} 2s-1, & \mbox{if $s\in\{1,\ldots,n\}$,} \\ 2(s-n), & \mbox{if $s\in\{n+1,\ldots,2n\}$} \end{array}\right. \end{equation} for each $n\geq 1$. \noindent {\sc Proof.} Using (\ref{producte-morfismes}) it is easy to check that \begin{align*} id_{[m]}\times\iota^1_{[n],[p]}&=\iota^1_{[mn],[mp]}, \\ id_{[m]}\times\iota^2_{[n],[p]}&=\iota^2_{[mn],[mp]}. \end{align*} Hence $\overline{d}_{[m],[n],[p]}$ is the identity. As to the right distributor $\overline{d}\,'_{[m],[n],[p]}$, by definition we have \[ \overline{d}\,'_{[m],[n],[p]}(s)= \left\{\begin{array}{ll} (\iota^1_{[m],[n]}\times id_{[p]})(s), & \mbox{if $s\in\{1,\ldots,mp\}$,} \\ (\iota^2_{[m],[n]}\times id_{[p]})(s-mp), & \mbox{if $s\in\{mp+1,\ldots,mp+np\}$} \end{array}\right. \] for each $s\in[mp+np]$. Hence $\overline{d}\,'_{[m],[n],[p]}$ is the identity when one of the objects $[m],[n],[p]$ is $[0]$. Otherwise, by (\ref{producte-morfismes}) we have \[ (\iota^1_{[m],[n]}\times id_{[p]})(k)= \left\{\begin{array}{ll} k+(q-1)n, & \mbox{if $k=qm$,} \\ k+qn, & \mbox{if $k=qm+r$, $r\neq 0$,} \end{array}\right. \] for each $k\in[mp]$, and \[ (\iota^2_{[m],[n]}\times id_{[p]})(l)= \left\{\begin{array}{ll} l+q'm, & \mbox{if $l=q'n$,} \\ l+(q'+1)m, & \mbox{if $l=q'n+r'$, $r'\neq 0$,} \end{array}\right. \] for each $l\in[np]$.
Hence \[ \overline{d}\,'_{[m],[n],[p]}(s)= \left\{\begin{array}{ll} s+(q-1)n, & \mbox{if $s\in\{1,\ldots,mp\}$ and $m\,|\,s$,} \\ s+qn, & \mbox{if $s\in\{1,\ldots,mp\}$ and $m\nmid s$,} \\ s+(q'-p)m, & \mbox{if $s\in\{mp+1,\ldots,mp+np\}$ and $n\,|\,(s-mp)$,} \\ s+(q'-p+1)m, & \mbox{if $s\in\{mp+1,\ldots,mp+np\}$ and $n\nmid (s-mp)$,}\end{array}\right. \] where $q$ is the quotient of the euclidean division of $s$ by $m$ for $s\leq mp$, and $q'$ the quotient of the euclidean division of $s-mp$ by $n$ when $s>mp$. We leave it to the reader to check that, in terms of the modified remainder functions, this is the permutation in the statement. \qed \subsubsection{\sc Example.} The first few nontrivial distributors $\overline{d}\,'_{[1],[1],[n]}:[2n]\to[2n]$ are \begin{align*} \overline{d}\,'_{[1],[1],[2]}&=(2,3)_4, \\ \overline{d}\,'_{[1],[1],[3]}&=(2,3,5,4)_6, \\ \overline{d}\,'_{[1],[1],[4]}&=(2,3,5)_8\,(4,7,6)_8, \\ \overline{d}\,'_{[1],[1],[5]}&=(2,3,5,9,8,6)_{10}\,(4,7)_{10}. \end{align*} Their decompositions into disjoint cycles do not seem to follow an easy pattern. Hence, unlike the commutators, which satisfy $c'_{[n],[m]}\,c'_{[m],[n]}=id$, the order of the distributors $\overline{d}\,'_{[1],[1],[n]}$ will be a non-trivial function of $n$. \subsubsection{\sc Remark.} There seems to be no obvious description of the generic right distributor $\overline{d}'_{[m],[n],[p]}$ as a composite of the generators of $S_{(m+n)p}$. However, such a description exists when $m=n=1$, and is given as follows. Indeed, we know from (\ref{d'_1,1,n}) that $\overline{d}\,'_{[1],[1],[p]}$ maps $1,\ldots,p$ to the first $p$ odd positive integers, and $p+1,\ldots,2p$ to the first $p$ even positive integers. For each $i=2,\ldots,p$ let $\sigma_{i,p}\in S_{2p}$ be the permutation given by \[ \sigma_{i,p}=(i,i+1)_{2p}\,(i+2,i+3)_{2p}\,\cdots (i+2(p-i),i+2(p-i)+1)_{2p}. \] Then the reader may easily check that $\overline{d}'_{[1],[1],[p]}=\sigma_{2,p}\,\sigma_{3,p}\cdots \sigma_{p-1,p}\,\sigma_{p,p}$.
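As a hypothetical aside, the permutations of Lemma~\ref{distribuidors} and the decomposition just stated are easy to confirm by brute force for small $p$; the following Python sketch uses our own encoding (the names \verb|d_prime|, \verb|sigma|, \verb|compose| are not part of the text), with juxtaposition read as composition applying the rightmost factor first, as elsewhere in the paper.

```python
def d_prime(m, n, p):
    """The right distributor d'_{[m],[n],[p]} on [(m+n)p], as a tuple of values."""
    def R(k, x):  # modified remainder R_k(x): k if k divides x, else x mod k
        r = x % k
        return k if r == 0 else r
    out = []
    for s in range(1, (m + n) * p + 1):
        if s <= m * p:
            out.append(s + (s - R(m, s)) // m * n)
        else:
            t = s - m * p
            out.append(t + (t + n - R(n, t)) // n * m)
    return tuple(out)

def sigma(i, p):
    """sigma_{i,p} in S_{2p}: product of the disjoint transpositions (i,i+1),(i+2,i+3),..."""
    perm = list(range(1, 2 * p + 1))
    for a in range(i, i + 2 * (p - i) + 1, 2):
        perm[a - 1], perm[a] = perm[a], perm[a - 1]
    return tuple(perm)

def compose(f, g):
    """f after g, both encoded as tuples of values (1-indexed maps)."""
    return tuple(f[v - 1] for v in g)
```

For instance, `d_prime(1, 1, 2)` is the transposition $(2,3)_4$ above, and composing $\sigma_{2,p},\ldots,\sigma_{p,p}$ reproduces `d_prime(1, 1, p)` for each small $p$.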
\subsubsection{\sc Theorem.}\label{versio_semistricta} The symmetric rig category structure on $\mathcal{F}\mathcal{S} et_{sk}$ canonically associated to the choices of products and coproducts of Lemmas~\ref{estructura_additiva_semistricta} and \ref{estructura_multiplicativa_semistricta} is left semistrict. The left semistrict symmetric rig category $\mathbb{F}\mathbb{S} et_{sk}$ so obtained is equivalent to the symmetric rig category $\mathbb{F}\mathbb{S} et$. \noindent {\sc Proof.} The first assertion is a consequence of Proposition~\ref{estructura_categoria_rig_simetrica}, the previous three lemmas, and the fact that the projections $\pi^1_{[0],[n]}$ and $\pi^2_{[n],[0]}$ are in this case identities. As to the equivalence between $\mathbb{F}\mathbb{S} et_{sk}$ and $\mathbb{F}\mathbb{S} et$, an equivalence is given by the inclusion functor $\mathcal{J}:\mathcal{F}\mathcal{S} et_{sk}\hookrightarrow\mathcal{F}\mathcal{S} et$ with the additive and multiplicative symmetric monoidal structures defined by the above bijections $b^+_{m,n}$ and $b^{\,\bullet}_{m,n}$. \qed \subsubsection{\sc Corollary.} $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$ is a left semistrict symmetric 2-rig equivalent to the symmetric 2-rig $\widehat{\mathbb{F}\mathbb{S} et}$. \noindent {\sc Proof.} It follows from the previous theorem and Example~\ref{core}. \qed \subsubsection{\sc Remark.} If instead of the above bijections $b^{\,\bullet}_{m,n}$ one uses those that correspond to enumerating the points of $[m]\times[n]$ by columns, the obtained symmetric rig category structure on $\mathcal{F}\mathcal{S} et_{sk}$ turns out to be {\em right} semistrict. \section{Main theorem} Let $\mathbb{S}$ be an arbitrary rig category, not necessarily symmetric, with $\sf 0$ and $\sf 1$ as zero and unit objects, respectively. Our purpose is to prove that the category of rig category homomorphisms from $\widehat{\mathbb{F}\mathbb{S} et}$ to $\mathbb{S}$ is equivalent to the terminal category.
Since this category, up to equivalence, remains invariant when $\mathbb{S}$ is replaced by any rig category equivalent to $\mathbb{S}$, Theorem~\ref{teorema_estrictificacio} above allows us to assume without loss of generality that $\mathbb{S}$ is left semistrict. We shall do so for the rest of the paper. \subsection{The basic maps of an arbitrary homomorphism} Since the additive associator $a$ of $\mathbb{S}$ is trivial, for every object $x\in\mathcal{S}$ and every $n\geq 0$ there is an object $nx$ uniquely defined by \[ nx=\left\{\begin{array}{ll} \sf 0, & \mbox{if $n=0$,} \\ x+\stackrel{n)}{\cdots}+x, & \mbox{if $n>0$.} \end{array}\right. \] When $x=\sf 1$ we shall write $\underline{n}$ instead of $n\sf 1\in\mathcal{S}$. In particular, $\underline{0}=\sf 0$ and $\underline{1}=\sf 1$. Recall from Definition~\ref{morfisme_categories_semianell} that a homomorphism of rig categories $\mathbb{F}:\widehat{\mathbb{F}\mathbb{S} et}_{sk}\to\mathbb{S}$ is given by a symmetric $+$-monoidal functor $\mathscr{F}:\widehat{\mathscr{FS}et}^+_{sk}\to \mathscr{S}^+$, given by a triple $\mathscr{F}=(F,\varphi^+,\varepsilon^+)$, together with an additional $\bullet$-monoidal structure $(\varphi^\bullet,\varepsilon^\bullet)$ on $F$ satisfying the appropriate coherence conditions. Fundamental for what follows is the next family of invertible maps associated to any such homomorphism $\mathbb{F}$. \subsubsection{\sc Definition}\label{morfismes_basics} Let $\mathbb{F}:\widehat{\mathbb{F}\mathbb{S} et}_{sk}\to\mathbb{S}$ be a homomorphism of rig categories. The {\em basic maps} of $\mathbb{F}$ are the (invertible) maps $\tau_n:F[n]\to\underline{n}$, $n\geq 0$, defined by the recurrence relation \begin{equation}\label{recurrencia_tau_1} \tau_{n+1}=(\varepsilon^\bullet+\tau_n)\,\varphi^+_{[1],[n]},\quad n\geq 0, \end{equation} with the initial condition $\tau_0=\varepsilon^+$.
\subsubsection{\sc Remark}\label{remarca_recurrencia_2} For each $\mathbb{F}:\widehat{\mathbb{F}\mathbb{S} et}_{sk}\to\mathbb{S}$ as before, let $\tau'_n:F[n]\to\underline{n}$, $n\geq 0$, be the morphisms defined by the recurrence relation \begin{equation}\label{recurrencia_tau_2} \tau'_{n+1}=(\tau'_n+\varepsilon^\bullet)\,\varphi^+_{[n],[1]}, \quad n\geq 0, \end{equation} with the initial condition $\tau'_0=\varepsilon^+$. Then $\tau'_n=\tau_n$ for each $n\geq 0$. We proceed by induction on $n\geq 0$. By definition, we have $\tau'_0=\tau_0$, and both $\tau_1$ and $\tau'_1$ are equal to $\varepsilon^{\,\bullet}$ by the unit coherence axiom for $\varphi^+$ and the semistrictness of $\mathbb{S}$. Let us assume that $\tau'_{k}=\tau_k$ for each $0\leq k\leq n$ for some $n\geq 1$. Then \begin{align*} \tau_{n+1}&=(\varepsilon^{\,\bullet}+\tau_{n})\,\varphi_{[1],[n]}^+ \\ &=(\varepsilon^{\,\bullet}+\tau'_{n})\,\varphi_{[1],[n]}^+ \\ &=(\varepsilon^{\,\bullet}+\tau'_{n})\,(id_{F[1]}+\varphi_{[n-1],[1]}^+)^{-1}\,(\varphi_{[1],[n-1]}^++id_{F[1]})\,\varphi_{[n],[1]}^+ \\ &=[\varepsilon^{\,\bullet}+\tau'_{n}(\varphi_{[n-1],[1]}^+)^{-1}]\,(\varphi_{[1],[n-1]}^++id_{F[1]})\,\varphi_{[n],[1]}^+ \\ &=(\varepsilon^{\,\bullet}+\tau'_{n-1}+\varepsilon^{\,\bullet})\,(\varphi_{[1],[n-1]}^++id_{F[1]})\,\varphi_{[n],[1]}^+ \\ &=[(\varepsilon^{\,\bullet}+\tau'_{n-1})\,\varphi_{[1],[n-1]}^++\varepsilon^{\,\bullet}]\,\varphi_{[n],[1]}^+ \\ &=[(\varepsilon^{\,\bullet}+\tau_{n-1})\,\varphi_{[1],[n-1]}^++\varepsilon^{\,\bullet}]\,\varphi_{[n],[1]}^+ \\ &=(\tau_{n}+\varepsilon^{\,\bullet})\,\varphi_{[n],[1]}^+ \\ &=(\tau'_{n}+\varepsilon^{\,\bullet})\,\varphi_{[n],[1]}^+ \\ &=\tau'_{n+1}. \end{align*} The third equality follows from the coherence axiom on the monoidality isomorphisms $\varphi^+$, and the remaining ones readily follow from the induction hypothesis and the functoriality of $+$. The importance of the maps $\tau_n$ comes from the fact, shown below, that they completely determine the homomorphism $\mathbb{F}$ once the action on objects of the underlying functor is given (cf. Section~\ref{conclusions}).
Moreover, unlike the monoidality isomorphisms $\varphi^+_{[m],[n]}$ and $\varphi^\bullet_{[m],[n]}$ for each $m,n\geq 0$, which must satisfy naturality and coherence conditions, the maps $\tau_n$ can be chosen in a completely arbitrary way. The next result shows how the additive and multiplicative monoidality isomorphisms of $\mathbb{F}$ can be obtained from the basic maps. Later on, we shall see that the action on morphisms of the underlying functor $F$ is also completely given by the basic maps. \subsubsection{\sc Proposition} \label{monoidalitat_tau} Let $(\tau_n)_{n\geq 0}$ be the basic maps of an arbitrary homomorphism of rig categories $\mathbb{F}=(F,\varphi^+,\varepsilon^+,\varphi^\bullet,\varepsilon^\bullet)$ from $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$ to a left semistrict rig category $\mathbb{S}$. Then the additive and multiplicative monoidality isomorphisms are respectively given by \begin{align} \varphi^+_{[m],[n]}&=(\tau_m+\tau_n)^{-1}\,\tau_{m+n}, \label{varphi+_tau} \\ \varphi^\bullet_{[m],[n]}&=(\tau_m\bullet\tau_n)^{-1}\,\tau_{mn}\label{varphibullet_tau} \end{align} for each $m,n\geq 0$. \noindent {\sc Proof.} Let us prove (\ref{varphi+_tau}) or equivalently, that $\tau_{m+n}=(\tau_{m}+\tau_{n})\,\varphi^+_{[m],[n]}$ for each $m,n\geq 0$. We proceed by induction on $n\geq 0$ for any given $m\geq 0$. If $n=0$ this means checking that \[ \tau_{m}=(\tau_{m}+id_{\underline{0}})\,(id_{F[m]}+\varepsilon^+)\,\varphi^+_{[m],[0]} \] for each $m\geq 0$, and this is true because $\mathbb{S}$ is assumed to be semistrict. Let us now assume that (\ref{varphi+_tau}) holds for some $n\geq 0$ and every $m\geq 0$.
Then for every $m\geq 0$ we have \begin{align*} \tau_{m+n+1}&=(\tau_{m+n}+\varepsilon^{\,\bullet})\,\varphi^+_{[m+n],[1]} \\ &=[(\tau_{m}+\tau_{n})\,\varphi^+_{[m],[n]}+\varepsilon^{\,\bullet}]\,\varphi^+_{[m+n],[1]} \\ &=(\tau_{m}+\tau_{n}+\varepsilon^{\,\bullet})\,(\varphi^+_{[m],[n]}+id_{F[1]})\,\varphi^+_{[m+n],[1]} \\ &=(\tau_{m}+\tau_{n}+\varepsilon^{\,\bullet})\,(id_{F[m]}+\varphi^+_{[n],[1]})\,\varphi^+_{[m],[n+1]} \\ &=[\tau_{m}+(\tau_{n}+\varepsilon^{\,\bullet})\,\varphi^+_{[n],[1]}]\,\varphi^+_{[m],[n+1]} \\ &=(\tau_{m}+\tau_{n+1})\,\varphi^+_{[m],[n+1]}. \end{align*} The first and the last equalities follow from Remark~\ref{remarca_recurrencia_2}, the second one by the induction hypothesis, the third and fifth ones by the functoriality of $+$, and the fourth one by the coherence axioms required on $\varphi^+$. The proof of (\ref{varphibullet_tau}) is completely similar but with $\bullet$ instead of $+$. $ \square$ Although we will not need it, the next result gives an explicit expression of the basic maps of $\mathbb{F}$ in terms of the data defining $\mathbb{F}$. \subsubsection{\sc Proposition} For each homomorphism of rig categories $\mathbb{F}:\widehat{\mathbb{F}\mathbb{S} et}_{sk}\to\mathbb{S}$ the corresponding basic maps $\tau_n$ are given by $\tau_0=\varepsilon^+$, $\tau_1=\varepsilon^\bullet$, and \[ \tau_n=(\varepsilon^\bullet+\stackrel{n)}{\cdots}+\varepsilon^\bullet)\,(id_{(n-2)F[1]}+\varphi^+_{[1],[1]})\,(id_{(n-3)F[1]}+\varphi^+_{[1],[2]})\,\cdots\,(id_{F[1]}+\varphi^+_{[1],[n-2]})\,\varphi^+_{[1],[n-1]} \] for each $n\geq 2$. \noindent {\sc Proof.} The case $n=0$ holds by definition of the basic maps.
Moreover, when $n=0$ the recurrence relation (\ref{recurrencia_tau_1}) gives that \[ \tau_1=(\varepsilon^\bullet+\varepsilon^+)\,\varphi^+_{[1],[0]}=(\varepsilon^\bullet+id_{\underline{0}})\,(id_{F[1]}+\varepsilon^+)\,\varphi^+_{[1],[0]}=\varepsilon^\bullet, \] where we have used that $\mathbb{S}$ is left semistrict (in particular, that the additive right unitor $\rho_{F[1]}$ is trivial) as well as the left semistrictness of $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$. The above expression for the remaining maps $\tau_n$ follows now by an easy induction on $n\geq 2$ using the functoriality of $+$. The details are left to the reader. $ \square$ \subsection{The canonical homomorphism from finite sets to an arbitrary left semistrict rig category} In order to see that, up to isomorphism, there is only one homomorphism of rig categories $\mathbb{F}:\widehat{\mathbb{F}\mathbb{S} et}_{sk}\to\mathbb{S}$ let us start by describing a particularly simple such homomorphism we shall call the canonical one. \subsubsection{\sc Proposition}\label{morfisme_canonic} Let $\mathbb{S}$ be an arbitrary left semistrict rig category, not necessarily symmetric. There exists a functor $F_{can}:\widehat{\mathcal{F}\mathcal{S} et}_{sk}\to\mathcal{S}$ acting on objects by $[n]\mapsto\underline{n}$, and on the generators of $S_n$ for each $n\geq 2$ by \begin{equation}\label{accio_sobre_morfismes} F_{can}((i,i+1)_n)=id_{\underline{i-1}}+c_{\underline{1},\underline{1}}+id_{\underline{n-i-1}}, \quad i=1,\ldots,n-1. \end{equation} Moreover, $F_{can}$ is the underlying functor of a {\em strict morphism} of rig categories $\mathbb{F}_{can}:\widehat{\mathbb{F}\mathbb{S} et}_{sk}\to\mathbb{S}$. \noindent {\sc Proof.} For short, let us denote by $\gamma_i$ the morphism on the right-hand side of (\ref{accio_sobre_morfismes}).
These assignments define a functor if and only if $\gamma_1,\ldots,\gamma_{n-1}$ satisfy for each $n\geq 2$ the same relations as the generators $(1,2)_n,\ldots,(n-1,n)_n$ of $S_n$, i.e. if and only if \begin{itemize} \item[(R1)] $\gamma_i^2=id_{\underline{n}}$ for each $i=1,\ldots,n-1$; \item[(R2)] $(\gamma_i\,\gamma_{i+1})^3=id_{\underline{n}}$ for each $i=1,\ldots,n-2$; \item[(R3)] $(\gamma_i\,\gamma_j)^2=id_{\underline{n}}$ for each $i,j=1,\ldots,n-1$ such that $|i-j|>1$. \end{itemize} Relation (R1) follows from the functoriality of $+$, and the fact that $c$ is not just a braiding, but a symmetry, so that $c^2_{\underline{1},\underline{1}}=id_{\underline{2}}$. To see (R2), notice that by the functoriality of $+$ we have \begin{align*} \gamma_i\,\gamma_{i+1}&=(id_{\underline{i-1}}+c_{\underline{1},\underline{1}}+id_{\underline{n-i-1}})\,(id_{\underline{i}}+c_{\underline{1},\underline{1}}+id_{\underline{n-i-2}}) \\ &=(id_{\underline{i-1}}+c_{\underline{1},\underline{1}}+id_{\underline{1}}+id_{\underline{n-i-2}})\,(id_{\underline{i-1}}+id_{\underline{1}}+c_{\underline{1},\underline{1}}+id_{\underline{n-i-2}}) \\ &=id_{\underline{i-1}}+(c_{\underline{1},\underline{1}}+id_{\underline{1}})\,(id_{\underline{1}}+c_{\underline{1},\underline{1}})+id_{\underline{n-i-2}}. \end{align*} Hence (R2) holds if and only if \[ [(c_{\underline{1},\underline{1}}+id_{\underline{1}})\,(id_{\underline{1}}+c_{\underline{1},\underline{1}})]^3=id_{\underline{3}} \] for each $i=1,\ldots,n-2$. Now, in every braided monoidal category $\mathscr{C}$ with trivial associator and braiding $c$, the diagram \[ \xymatrix{ x+y+z\ar[r]^{id_x+c_{y,z}}\ar[d]_{c_{x,y}+id_z} & x+z+y\ar[r]^{c_{x,z}+id_y} & z+x+y\ar[d]^{id_z+c_{x,y}} \\ y+x+z\ar[r]_{id_y+c_{x,z}} & y+z+x\ar[r]_{c_{y,z}+id_x} & z+y+x } \] commutes for all objects $x,y,z\in\mathcal{C}$ (cf. \cite{Joyal-Street-1993}, Proposition~2.1).
In particular, when $\mathscr{C}=\mathscr{S}^+$, and $x=y=z=\sf 1derline{1}$ we obtain that \[ (c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{1}})\,(id_{\sf 1derline{1}}+c_{\sf 1derline{1},\sf 1derline{1}})=(id_{\sf 1derline{1}}+c_{\sf 1derline{1},\sf 1derline{1}})\,(c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{1}})\,(id_{\sf 1derline{1}}+c_{\sf 1derline{1},\sf 1derline{1}})\,(c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{1}}). \] The desired equality follows then by taking the composite of this equality on the left with $c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{1}}$, and on the right with $id_{\sf 1derline{1}}+c_{\sf 1derline{1},\sf 1derline{1}}$. Finally, if $j\geq i+2$ we have \begin{align*} \gamma_i\,\gamma_j&=(id_{\sf 1derline{i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{j-i}}+id_{\sf 1derline{n-j-1}})\,(id_{\sf 1derline{i}}+id_{\sf 1derline{j-i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{n-j-1}}) \\ &=id_{\sf 1derline{i-1}}+(c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{j-i}})\,(id_{\sf 1derline{j-i}}+c_{\sf 1derline{1},\sf 1derline{1}})+id_{\sf 1derline{n-j-1}}. \end{align*} Hence (R3) holds in this case if and only if \[ [(c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{j-i}})\,(id_{\sf 1derline{j-i}}+c_{\sf 1derline{1},\sf 1derline{1}})]^2=id_{\sf 1derline{j-i+2}}. \] Now, when $j-i=2$ this is equal to the square of $c_{\sf 1derline{1},\sf 1derline{1}}+c_{\sf 1derline{1},\sf 1derline{1}}$ and hence, $id_{\sf 1derline{4}}$ because $c$ is a symmetry, and when $j-i>2$ it is the square of \[ (c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{j-i-2}}+id_{\sf 1derline{2}})\,(id_{\sf 1derline{2}}+id_{\sf 1derline{j-i-2}}+c_{\sf 1derline{1},\sf 1derline{1}})=c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{j-i-2}}+c_{\sf 1derline{1},\sf 1derline{1}}, \] and hence, $id_{\sf 1derline{j-i+2}}$. The case $i\geq j+2$ is proved similarly.
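As a quick computational sanity check (an illustration added here, not part of the paper's argument), relations (R1)-(R3) can be verified for the candidate images $\gamma_i$ when the symmetry $c_{\sf 1derline{1},\sf 1derline{1}}$ is realized, as in the skeletal category of finite sets, by the transposition of two adjacent elements:

```python
# Verify the Coxeter relations (R1)-(R3) for gamma_i = id_{i-1} + c + id_{n-i-1},
# modelled as permutations of {1,...,n}; c_{1,1} is the adjacent swap.

def compose(p, q):
    """Composition p after q: (p∘q)(x) = p(q(x)); permutations are 1-based tuples."""
    return tuple(p[q[i] - 1] for i in range(len(q)))

def gamma(i, n):
    """id_{i-1} + c_{1,1} + id_{n-i-1}: swap positions i and i+1 in {1,...,n}."""
    img = list(range(1, n + 1))
    img[i - 1], img[i] = img[i], img[i - 1]
    return tuple(img)

def power(p, k):
    """k-fold composite of p with itself."""
    r = tuple(range(1, len(p) + 1))
    for _ in range(k):
        r = compose(r, p)
    return r

n = 7
identity = tuple(range(1, n + 1))
# (R1): gamma_i^2 = id
assert all(power(gamma(i, n), 2) == identity for i in range(1, n))
# (R2): (gamma_i gamma_{i+1})^3 = id
assert all(power(compose(gamma(i, n), gamma(i + 1, n)), 3) == identity
           for i in range(1, n - 1))
# (R3): (gamma_i gamma_j)^2 = id whenever |i-j| > 1
assert all(power(compose(gamma(i, n), gamma(j, n)), 2) == identity
           for i in range(1, n) for j in range(1, n) if abs(i - j) > 1)
print("relations (R1)-(R3) hold for n =", n)
```

Of course, in this concrete model the relations are just the standard Coxeter presentation of $S_n$; the content of the proposition is that they already hold for the abstract morphisms $\gamma_i$ built from any symmetry $c$.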
Clearly, the functor $F_{can}$ so defined preserves strictly the zero and unit objects. It further preserves the action of $+$ and $\bullet$ on objects strictly. Indeed, since $\mathbb{S}$ is semistrict, the additive associator is trivial and hence, for every $m,n\geq 0$ we have \[ F_{can}([m]+[n])=F_{can}([m+n])=\sf 1derline{m+n}=\sf 1derline{m}+\sf 1derline{n}=F_{can}([m])+F_{can}([n]). \] Moreover, the triviality of the left distributors and of the multiplicative right unitors also gives that \begin{align*} \sf 1derline{m}\bullet\sf 1derline{n}&=\sf 1derline{m}\bullet(1+\stackrel{n)}{\cdots}+1) \\ &=\sf 1derline{m}\bullet 1+\stackrel{n)}{\cdots}+\sf 1derline{m}\bullet 1 \\ &=\sf 1derline{m}+\stackrel{n)}{\cdots}+\sf 1derline{m} \\ &=\sf 1derline{mn}, \end{align*} and hence \[ F_{can}([m]\bullet [n])=F_{can}([mn])=\sf 1derline{mn}=\sf 1derline{m}\bullet\sf 1derline{n}=F_{can}([m])\bullet F_{can}([n]). \] Therefore, in order to see that $F_{can}$ is the underlying functor of a strict morphism of rig categories from $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$ to $\mathbb{S}$, it remains to be checked that $F_{can}$ also preserves strictly: \begin{itemize} \item[(SH1)] the action of $+$ and $\bullet$ on morphisms, \item[(SH2)] the additive commutators, and \item[(SH3)] the right distributors. \end{itemize} Notice that the remaining structural isomorphisms are automatically preserved because $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$ is left semistrict, and $\mathbb{S}$ is assumed to be so. \noindent \sf 1derline{Proof of (SH1)}. For every $m\geq 1$ and $n\geq 2$ we have \begin{align*} F_{can}(id_{[m]}+(j,j+1)_n)&=F_{can}((m+j,m+j+1)_{m+n}) \\ &=id_{\sf 1derline{m+j-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{n-j-1}} \\ &=id_{\sf 1derline{m}}+id_{\sf 1derline{j-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{n-j-1}} \\ &=F_{can}(id_{[m]})+F_{can}((j,j+1)_n) \end{align*} for each $j\in\{1,\ldots,n-1\}$.
Since both $+$ and $F_{can}$ are functorial, it follows that for every $\sigma\in S_n$ with $\sigma=(j_1,j_1+1)_n\cdots(j_l,j_l+1)_n$ we have \begin{align*} F_{can}(id_{[m]}+\sigma)&=F_{can}((id_{[m]}+(j_1,j_1+1)_n)\cdots(id_{[m]}+(j_l,j_l+1)_n)) \\ &=F_{can}(id_{[m]}+(j_1,j_1+1)_n)\cdots F_{can}(id_{[m]}+(j_l,j_l+1)_n) \\ &=(id_{\sf 1derline{m}}+F_{can}((j_1,j_1+1)_n))\cdots(id_{\sf 1derline{m}}+F_{can}((j_l,j_l+1)_n)) \\ &=id_{\sf 1derline{m}}+F_{can}((j_1,j_1+1)_n)\cdots F_{can}((j_l,j_l+1)_n) \\ &=id_{\sf 1derline{m}}+F_{can}((j_1,j_1+1)_n\cdots (j_l,j_l+1)_n) \\ &=id_{\sf 1derline{m}}+F_{can}(\sigma). \end{align*} Moreover, for every $m\geq 2$ and $n\geq 1$ we have \begin{align*} F_{can}((i,i+1)_m+id_{[n]})&=F_{can}((i,i+1)_{m+n}) \\ &=id_{\sf 1derline{i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m+n-i-1}} \\ &=id_{\sf 1derline{i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-i-1}}+id_{\sf 1derline{n}} \\ &=F_{can}((i,i+1)_m)+F_{can}(id_{[n]}) \end{align*} for each $i\in\{1,\ldots,m-1\}$, and the same argument as before shows that \[ F_{can}(\rho+id_{[n]})=F_{can}(\rho)+id_{\sf 1derline{n}} \] for every $\rho\in S_m$. Using again the functoriality of $+$ and $F_{can}$, it follows that \begin{align*} F_{can}(\rho+\sigma)&=F_{can}((\rho+id_{[n]})\,(id_{[m]}+\sigma)) \\ &=F_{can}(\rho+id_{[n]})\,F_{can}(id_{[m]}+\sigma) \\ &=(F_{can}(\rho)+id_{\sf 1derline{n}})\,(id_{\sf 1derline{m}}+F_{can}(\sigma)) \\ &=F_{can}(\rho)+F_{can}(\sigma) \end{align*} for all permutations $\rho\in S_m$ and $\sigma\in S_n$. Hence $F_{can}$ preserves strictly the action of $+$ on morphisms. To prove that it also preserves strictly the action of $\bullet$\,, we proceed in the same way.
Thus it is enough to prove the special cases \begin{itemize} \item[(i)] $F_{can}((i,i+1)_m\bullet id_{[n]})=F_{can}((i,i+1)_m)\bullet id_{\sf 1derline{n}}$ for each $i=1,\ldots,m-1$ and $m\geq 2$, and \item[(ii)] $F_{can}(id_{[m]}\bullet(j,j+1)_n)=id_{\sf 1derline{m}}\bullet F_{can}((j,j+1)_n)$ for each $j=1,\ldots,n-1$ and $n\geq 2$. \end{itemize} The generic case follows then by the functoriality of $F_{can}$ and $\bullet$\,. Let us prove (i). On the one hand, by definition of $\bullet$ we have \[ (i,i+1)_m\bullet id_{[n]}=(i,i+1)_{mn}\,(m+i,m+i+1)_{mn}\,(2m+i,2m+i+1)_{mn}\cdots ((n-1)m+i,(n-1)m+i+1)_{mn}. \] Hence \begin{align*} F_{can}&((i,i+1)_m\bullet id_{[n]}) \\ &=F_{can}((i,i+1)_{mn})\, F_{can}((m+i,m+i+1)_{mn})\, F_{can}((2m+i,2m+i+1)_{mn}) \\ &\hspace{8truecm}\cdots F_{can}(((n-1)m+i,(n-1)m+i+1)_{mn}) \\ &=(id_{\sf 1derline{i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{nm-i-1}})\,(id_{\sf 1derline{m+i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{(n-1)m-i-1}})\,(id_{\sf 1derline{2m+i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{(n-2)m-i-1}}) \\ &\hspace{9truecm}\cdots (id_{\sf 1derline{(n-1)m+i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-i-1}}) \\ &=id_{\sf 1derline{i-1}}+(c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{(n-1)m}})\,(id_{\sf 1derline{m}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{(n-2)m}})\,(id_{\sf 1derline{2m}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{(n-3)m}}) \cdots (id_{\sf 1derline{(n-1)m}}+c_{\sf 1derline{1},\sf 1derline{1}})+id_{\sf 1derline{m-i-1}}, \end{align*} where in the last equality we use the functoriality of $+$ in order to separate the summand $id_{\sf 1derline{i-1}}$ on the left, and the summand $id_{\sf 1derline{m-i-1}}$ on the right which are common to all factors. 
Now, since $m\geq 2$ the composition of permutations in the middle is equal to \begin{align*} (c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{(n-1)m}})\,(id_{\sf 1derline{m}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{(n-2)m}})\,(id_{\sf 1derline{2m}}+&c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{(n-3)m}})\cdots (id_{\sf 1derline{(n-1)m}}+c_{\sf 1derline{1},\sf 1derline{1}}) \\ &=c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-2}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-2}}+\cdots +id_{\sf 1derline{m-2}}+c_{\sf 1derline{1},\sf 1derline{1}}, \end{align*} where we have $n$ summands equal to $c_{\sf 1derline{1},\sf 1derline{1}}$, and $n-1$ summands equal to $id_{\sf 1derline{m-2}}$. Hence \[ F_{can}((i,i+1)_m\bullet id_{[n]})=id_{\sf 1derline{i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-2}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-2}}+\cdots +id_{\sf 1derline{m-2}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-i-1}}. \] On the other hand, since $\mathbb{S}$ is assumed to be left semistrict, the left distributors and the multiplicative right unitors are trivial. 
Therefore \begin{align*} F_{can}((i,i+1)_m)\bullet id_{\sf 1derline{n}}&=(id_{\sf 1derline{i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-i-1}})\bullet id_{\sf 1derline{n}} \\ &=(id_{\sf 1derline{i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-i-1}})\bullet (id_{\sf 1derline{1}}+\stackrel{n)}{\cdots}+id_{\sf 1derline{1}}) \\ &=(id_{\sf 1derline{i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-i-1}})+\stackrel{n)}{\cdots}+(id_{\sf 1derline{i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-i-1}}) \\ &=id_{\sf 1derline{i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-2}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-2}}+\cdots +id_{\sf 1derline{m-2}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-i-1}}, \end{align*} where the number of summands equal to $c_{\sf 1derline{1},\sf 1derline{1}}$ and to $id_{\sf 1derline{m-2}}$ are as before. This proves (i). Let us now prove (ii). By definition of $\bullet$\,, we have \begin{equation}\label{descomposicio} id_{[m]}\bullet(j,j+1)_n=((j-1)m+1,jm+1)_{mn}\,((j-1)m+2,jm+2)_{mn}\cdots((j-1)m+m,jm+m)_{mn}. \end{equation} Unlike before, the images by $F_{can}$ of the transpositions $((j-1)m+i,jm+i)_{mn}$ for each $i=1,\ldots,m$ are not immediate because they are not part of the generators of $S_{mn}$, whose images are given by Eq.~(\ref{accio_sobre_morfismes}). In fact, to compute their images we need to decompose $((j-1)m+i,jm+i)_{mn}$ as a product of the generators of $S_{mn}$. As the reader may easily check, such a decomposition is given by \begin{align*} ((j-1)m+i,jm+i)_{mn}&=(jm+i-1,jm+i)_{mn}\,(jm+i-2,jm+i-1)_{mn} \\ &\hspace{1truecm}\cdots\,(jm+i-(m-1),jm+i-(m-2))_{mn}\,(jm+i-m,jm+i-(m-1))_{mn}. 
\end{align*} Hence \begin{align*} F_{can}(((j-1)&m+i,jm+i)_{mn}) \\ &=F_{can}((jm+i-1,jm+i)_{mn})\,F_{can}((jm+i-2,jm+i-1)_{mn}) \\ &\hspace{0.8truecm}\cdots\,F_{can}((jm+i-(m-1),jm+i-(m-2))_{mn})\,F_{can}((jm+i-m,jm+i-(m-1))_{mn}) \\ &=(id_{\sf 1derline{jm+i-2}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m(n-j)-i}})\,(id_{\sf 1derline{jm+i-3}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m(n-j)-i+1}}) \\ &\hspace{0.8truecm}\cdots (id_{\sf 1derline{jm+i-m}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m(n-j)-i+(m-2)}})\,(id_{\sf 1derline{jm+i-(m+1)}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m(n-j)-i+(m-1)}}) \\ &=id_{\sf 1derline{jm+i-(m+1)}}+(id_{\sf 1derline{m-1}}+c_{\sf 1derline{1},\sf 1derline{1}})\,(id_{\sf 1derline{m-2}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{1}}) \\ &\hspace{2.2truecm}\cdots (id_{\sf 1derline{1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-2}})\,(c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-1}})+id_{\sf 1derline{m(n-j)-i}}, \end{align*} where in the last equality we again make use of the functoriality of $+$ in order to separate what is common to all factors, namely, the summand $id_{\sf 1derline{jm+i-(m+1)}}$ on the left, and the summand $id_{\sf 1derline{m(n-j)-i}}$ on the right. An easy induction on $m\geq 1$ shows that the composition of permutations in the middle is just $c_{\sf 1derline{1},\sf 1derline{m}}$. When $m=1$ the composition indeed reduces to $c_{\sf 1derline{1},\sf 1derline{1}}$. Let us now assume that \[ (id_{\sf 1derline{m-1}}+c_{\sf 1derline{1},\sf 1derline{1}})\,(id_{\sf 1derline{m-2}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{1}})\cdots (id_{\sf 1derline{1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-2}})\,(c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-1}})=c_{\sf 1derline{1},\sf 1derline{m}} \] for some $m\geq 1$. 
Then \begin{align*} (id_{\sf 1derline{m}}+c_{\sf 1derline{1},\sf 1derline{1}})\,&(id_{\sf 1derline{m-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{1}})\,(id_{\sf 1derline{m-2}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{2}})\cdots (id_{\sf 1derline{1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-1}})\,(c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m}}) \\ &=(id_{\sf 1derline{m}}+c_{\sf 1derline{1},\sf 1derline{1}})\,[(id_{\sf 1derline{m-1}}+c_{\sf 1derline{1},\sf 1derline{1}})\,(id_{\sf 1derline{m-2}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{1}})\cdots (id_{\sf 1derline{1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-2}})\,(c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-1}})+id_{\sf 1derline{1}}] \\ &=(id_{\sf 1derline{m}}+c_{\sf 1derline{1},\sf 1derline{1}})\,(c_{\sf 1derline{1},\sf 1derline{m}}+id_{\sf 1derline{1}}) \\ &=c_{\sf 1derline{1},\sf 1derline{m+1}}, \end{align*} where in the first equality we have separated the right summand $id_{\sf 1derline{1}}$ common to all but the first factor, and in the second we apply the induction hypothesis. As to the third equality, it follows from the hexagon coherence axiom on $c$, according to which in any braided (in particular, symmetric) monoidal category $\mathscr{C}$ with trivial associator and braiding $c$, the diagram \begin{equation}\label{axioma_hexagon} \xymatrix{ x\otimes y\otimes z\ar[r]^{c_{x,y}\otimes id_z}\ar[rd]_{c_{x,y\otimes z}} & y\otimes x\otimes z\ar[d]^{id_y\otimes c_{x,z}} \\ & y\otimes z\otimes x} \end{equation} commutes for all objects $x,y,z\in\mathscr{C}$. Thus we conclude that \[ F_{can}(((j-1)m+i,jm+i)_{mn})=id_{\sf 1derline{jm+i-(m+1)}}+c_{\sf 1derline{1},\sf 1derline{m}}+id_{\sf 1derline{m(n-j)-i}} \] for each $i=1,\ldots,m$.
Coming back to Eq.~(\ref{descomposicio}), it follows that \begin{align*} F_{can}(id_{[m]}\bullet&(j,j+1)_n) \\ &=F_{can}(((j-1)m+1,jm+1)_{mn})\,F_{can}(((j-1)m+2,jm+2)_{mn}) \\ &\hspace{1.5truecm}\cdots\,F_{can}(((j-1)m+(m-1),jm+(m-1))_{mn})\,F_{can}(((j-1)m+m,jm+m)_{mn}) \\ &=(id_{\sf 1derline{jm-m}}+c_{\sf 1derline{1},\sf 1derline{m}}+id_{\sf 1derline{m(n-j)-1}})\,(id_{\sf 1derline{jm-m+1}}+c_{\sf 1derline{1},\sf 1derline{m}}+id_{\sf 1derline{m(n-j)-2}}) \\ &\hspace{2truecm}\cdots\,(id_{\sf 1derline{jm-2}}+c_{\sf 1derline{1},\sf 1derline{m}}+id_{\sf 1derline{m(n-j)-(m-1)}})\,(id_{\sf 1derline{jm-1}}+c_{\sf 1derline{1},\sf 1derline{m}}+id_{\sf 1derline{m(n-j)-m}}) \\ &=id_{\sf 1derline{jm-m}}+(c_{\sf 1derline{1},\sf 1derline{m}}+id_{\sf 1derline{m-1}})\,(id_{\sf 1derline{1}}+c_{\sf 1derline{1},\sf 1derline{m}}+id_{\sf 1derline{m-2}})\cdots (id_{\sf 1derline{m-2}}+c_{\sf 1derline{1},\sf 1derline{m}}+id_{\sf 1derline{1}})\,(id_{\sf 1derline{m-1}}+c_{\sf 1derline{1},\sf 1derline{m}})+id_{\sf 1derline{m(n-j)-m}}. \end{align*} It is now easy to show by induction on $m\geq 1$, using again the above hexagon axiom but with all arrows reversed, that the composition of permutations in the middle is nothing but $c_{\sf 1derline{m},\sf 1derline{m}}$. In summary, we have \[ F_{can}(id_{[m]}\bullet(j,j+1)_n)=id_{\sf 1derline{jm-m}}+c_{\sf 1derline{m},\sf 1derline{m}}+id_{\sf 1derline{m(n-j)-m}}.
\] As to the right hand side of (ii), since $\mathbb{S}$ is assumed to be left semistrict, we have \begin{align*} id_{\sf 1derline{m}}\bullet F_{can}((j,j+1)_n)&=id_{\sf 1derline{m}}\bullet (id_{\sf 1derline{j-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{n-j-1}}) \\ &=id_{\sf 1derline{jm-m}}+id_{\sf 1derline{m}}\bullet c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m(n-j)-m}} \end{align*} Thus in order to prove (ii) it is now enough to see that \[ id_{\sf 1derline{m}}\bullet c_{\sf 1derline{1},\sf 1derline{1}}=c_{\sf 1derline{m},\sf 1derline{m}}, \] and this equality follows from the fact that the left distributor of $\mathbb{S}$ is trivial, together with the axiom \[ \xymatrix{ x(y+z)\ar[r]^{id_x\bullet c_{y,z}}\ar[d]_{d_{x,y,z}} &x(z+y)\ar[d]^{d_{x,z,y}} \\ xy+xz\ar[r]_{c_{xy,xz}} & xz+xy} \] which holds in every rig category. Therefore $F_{can}$ also preserves strictly the action of $\bullet$ on morphisms. \noindent \sf 1derline{Proof of (SH2)}. We need to prove that $F_{can}(c_{[m],[n]})=c_{\sf 1derline{m},\sf 1derline{n}}$ for each $m,n\geq 0$. The cases $m=0$ or $n=0$ follow from the fact that in every semistrict rig category $\mathbb{S}$ the commutators $c_{0,x}$ and hence, also $c_{x,0}=c^{-1}_{0,x}$ are identities for every object $x\in\mathcal{S}$. For instance, if $\mathbb{S}$ is left semistrict, the left distributors and the absorbing isomorphisms are all trivial and hence, the previous commutative diagram with $y=0$ gives that \[ id_x\bullet c_{0,z}=c_{0,xz} \] for every $x,z\in\mathcal{S}$. In particular, if $z=1$ we obtain that \[ id_x\bullet c_{0,1}=c_{0,x} \] because the right multiplicative unitors are also identities. Moreover, $c_{0,1}=id_1$ because of the coherence axiom $r_1\,c_{0,1}=l_1$ and the triviality of the additive unitors. 
When $\mathbb{S}$ is right semistrict, a similar argument works taking as starting point the axiom \[ \xymatrix{ (y+z)x\ar[r]^{c_{y,z}\bullet\, id_x}\ar[d]_{d'_{y,z,x}} &(z+y)x\ar[d]^{d'_{z,y,x}} \\ yx+zx\ar[r]_{c_{yx,zx}} & zx+yx} \] which also holds in every rig category. Since there is no obvious description of the commutators $c_{[m],[n]}$ in terms of the generators of $S_{m+n}$ except when $m=1$ or $n=1$, the cases $m,n\geq 1$ are shown by induction on $n\geq 1$ for any given $m\geq 1$. For $n=1$ we know that \[ c_{[m],[1]}=(1,2,\ldots,m+1)_{m+1}=(1,2)_{m+1}\,(2,3)_{m+1}\cdots (m,m+1)_{m+1}. \] Hence \begin{align*} F_{can}(c_{[m],[1]})&=F_{can}((1,2)_{m+1})\,F_{can}((2,3)_{m+1})\cdots F_{can}((m,m+1)_{m+1}) \\ &=(c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-1}})\,(id_{\sf 1derline{1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{m-2}})\cdots (id_{\sf 1derline{m-1}}+c_{\sf 1derline{1},\sf 1derline{1}}) \\ &=c_{\sf 1derline{m},\sf 1derline{1}}. \end{align*} The last equality follows by an easy induction on $m\geq 1$ using again the hexagon axiom on the commutator. Now let $n\geq 2$, and let us assume that $F_{can}(c_{[m],[n-1]})=c_{\sf 1derline{m},\sf 1derline{n-1}}$. Because of the hexagon axiom we have \[ c_{[m],[n]}=(id_{[n-1]}+c_{[m],[1]})\,(c_{[m],[n-1]}+id_{[1]}). \] Therefore \begin{align*} F_{can}(c_{[m],[n]})&=F_{can}(id_{[n-1]}+c_{[m],[1]})\,F_{can}(c_{[m],[n-1]}+id_{[1]}) \\ &=(id_{\sf 1derline{n-1}}+F_{can}(c_{[m],[1]}))\,(F_{can}(c_{[m],[n-1]})+id_{\sf 1derline{1}}) \\ &=(id_{\sf 1derline{n-1}}+c_{\sf 1derline{m},\sf 1derline{1}})\,(c_{\sf 1derline{m},\sf 1derline{n-1}}+id_{\sf 1derline{1}}) \\ &=c_{\sf 1derline{m},\sf 1derline{n}}, \end{align*} where in the last equality we make use of the assumption that $\mathbb{S}$ is also semistrict. \noindent \sf 1derline{Proof of (SH3)}. We need to prove that $F_{can}(d'_{[m],[n],[p]})=d'_{\sf 1derline{m},\sf 1derline{n},\sf 1derline{p}}$ for each $m,n,p\geq 0$.
When $p=0$ or $p=1$ the result follows from the fact that in every rig category $\mathbb{S}$ the right distributors $d'_{x,y,0}$ and $d'_{x,y,1}$ are such that the four diagrams \[ \xymatrix{(x+y)0\ar[r]^{d'_{x,y,0}}\ar[d]_{n_{x+y}} & x0+y0\ar[d]^{n_x+n_y} \\ 0 & 0+0\ar[l]^{l_0}}\quad \xymatrix{(x+y) 1\ar[rr]^{d'_{x,y,1}}\ar[rd]_{r'_{x+y}} & & x 1+y 1\ar[ld]^{r'_x+l'_y} \\ & x+y &} \] commute for every objects $x,y\in\mathcal{S}$. Hence they are identities when $\mathbb{S}$ is semistrict and consequently, they are strictly preserved. As to the cases $p\geq 2$, for any given $m,n\geq 0$, they readily follow from (SH2), and Lemma~\ref{lema_distribuidors_dreta} below, which proves that some of the right distributors in every left semistrict rig category are actually determined by the additive commutators. \qed \subsubsection{\sc Lemma.}\label{lema_distribuidors_dreta} Let $\mathbb{S}$ be a left semistrict rig category, and let $\sf 1derline{n}$ be as before for each $n\geq 0$. Then for every objects $x,y\in\mathcal{S}$, and $n\geq 2$ the right distributor $d'_{x,y,\sf 1derline{n}}$ is given by \begin{align}\label{distribuidors_dreta} d'_{x,y,\sf 1derline{n}}&=(id_{x\bullet\sf 1derline{n-1}}+c_{y\bullet\sf 1derline{n-1},\,x}+id_y)\,(id_{x\bullet\sf 1derline{n-2}}+c_{y\bullet\sf 1derline{n-2},\,x}+id_{x+y\bullet\sf 1derline{2}}) \\ &\hspace{2truecm}\cdots (id_{x\bullet\sf 1derline{2}}+c_{y\bullet\sf 1derline{2},\,x}+id_{x\bullet\sf 1derline{n-3}+y\bullet\sf 1derline{n-2}})\,(id_{x}+c_{y,\,x}+id_{x\bullet\sf 1derline{n-2}+y\bullet\sf 1derline{n-1}})\nonumber \end{align} \noindent {\sc Proof.} In every rig category $\mathbb{S}$ the diagram \begin{equation} \label{axioma_coherencia_fonamental} \xymatrix{(x+y)(z+t)\ar[r]^-{d'_{x,y,z+t}}\ar[d]_{d_{x+y,z,t}} & x(z+t)+y(z+t)\ar[r]^{d_{x,z,t}+d_{y,z,t}} & (xz+xt)+(yz+yt)\ar[d]^{v_{xz,xt,yz,yt}} \\ (x+y)z+(x+y)t\ar[rr]_{\ \ d'_{x,y,z}+d'_{x,y,t}} & & (xz+yz)+(xt+yt)} \end{equation} commutes for every objects 
$x,y,z,t\in\mathcal{S}$, where $v_{a,b,c,d}:(a+b)+(c+d)\stackrel{\cong}{\to}(a+c)+(b+d)$ is the canonical isomorphism built from the associator $a$ and the additive commutator $c$ of $\mathcal{S}$. In particular, when $\mathbb{S}$ is left semistrict it follows that \begin{align*} d'_{x,y,z+t}&=v^{-1}_{xz,xt,yz,yt}\,(d'_{x,y,z}+d'_{x,y,t}) \\ &=(id_{xz}+c_{yz,xt}+id_{yt})\,(d'_{x,y,z}+d'_{x,y,t}). \end{align*} Therefore when $z=\sf 1derline{n-1}$, $n\geq 2$, and $t=\sf 1derline{1}$ we obtain that \[ d'_{x,y,\sf 1derline{n}}=(id_{x\bullet\sf 1derline{n-1}}+c_{y\bullet\sf 1derline{n-1},x\bullet\sf 1derline{1}}+id_{y\bullet\sf 1derline{1}})\,(d'_{x,y,\sf 1derline{n-1}}+d'_{x,y,\sf 1derline{1}}), \] a recursive relation whose solution is given by (\ref{distribuidors_dreta}) because, as pointed out before, $d'_{x,y,\sf 1derline{1}}=id_{x+y}$ in a semistrict category. \qed \subsubsection{\sc Definition.} The strict morphism $\mathbb{F}_{can}:\widehat{\mathbb{F}\mathbb{S} et}_{sk}\to\mathbb{S}$ in Proposition~\ref{morfisme_canonic} is called the {\em canonical homomorphism} of rig categories from $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$ into the left semistrict rig category $\mathbb{S}$. \subsubsection{\sc Remark} What we have proved in Proposition~\ref{morfisme_canonic} is that the commutator of $\mathbb{S}$ induces canonical group homomorphisms $\phi_n:S_n\to Aut_\mathcal{S}(\sf 1derline{n})$ for each $n\geq 2$, and that the functor from $\widehat{\mathcal{F}\mathcal{S} et}_{sk}$ to $\mathcal{S}$ so defined strictly preserves both the additive and the multiplicative monoidal structures. In fact, the same argument shows the existence of canonical group homomorphisms $\phi_{n;x}:S_n\to Aut_\mathcal{S}(nx)$ for any object $x$ in $\mathcal{S}$ other than the unit object $1$. We just need to replace $c_{1,1}$ by the commutator $c_{x,x}$.
Of course, the corresponding functor will no longer extend to a homomorphism of rig categories. \subsubsection{\sc Example} When $\mathbb{S}=\widehat{\mathbb{F}\mathbb{S} et}_{sk}$, $\mathbb{F}_{can}$ is the identity of $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$. In fact, in this case $\sf 1derline{n}=[n]$, and $\phi_n$ is the identity. \subsubsection{\sc Example} When $\mathbb{S}=\mathbb{M} at_k$ (cf. Example~\ref{Mat_k}), $\mathbb{F}_{can}$ is the strict morphism with underlying functor the canonical one mapping each object $[n]$ to $n$, and each permutation $\sigma\in S_n$ to the corresponding permutation matrix $P(\sigma)$ obtained from the identity matrix by moving the 1 in the $(i,i)^{th}$-entry to the entry $(\sigma(i),i)$, i.e. the matrix given by $P(\sigma)_{ij}=\delta_{i\,\sigma(j)}$ for each $i,j=1,\ldots,n$. \subsubsection{\sc Example} \label{morfisme_a_endomorfismes_M} When $\mathbb{S}=\mathbb{E} nd(\mathscr{M})$ for some semistrict symmetric monoidal category $\mathscr{M}=(\mathcal{M},\oplus,{\sf 0})$ (cf. Example~\ref{semianell_endomorfismes}), $\mathbb{F}_{can}$ is the strict morphism whose underlying functor maps each object $[n]$ to the symmetric monoidal endofunctor \[ F_{can}[n]\equiv\oplus^n:\mathscr{M}\to\mathscr{M} \] given on objects and morphisms by $*\mapsto *\,\oplus\stackrel{n)}{\cdots}\oplus\, *$, and each permutation $\sigma\in S_n$ to the monoidal natural automorphism $F_{can}(\sigma):\oplus^n\Rightarrow\oplus^n$ with $x$-component \[ F_{can}(\sigma)_x:x\oplus\stackrel{n)}{\cdots}\oplus x\to x\oplus\stackrel{n)}{\cdots}\oplus x \] given by the canonical isomorphism defined by the commutator $c_{x,x}$ and the permutation $\sigma$ for each object $x\in\mathcal{M}$.
This gives the unique (up to equivalence), canonically given $\widehat{\mathbb{F}\mathbb{S} et}$-module structure on every symmetric monoidal category, in the same way as every abelian monoid has a unique, canonically given $\mathbb{N}$-module structure. \subsection{Main theorem} We are now able to prove the main result of the paper, namely, that $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$ and hence, $\widehat{\mathbb{F}\mathbb{S} et}$ is biinitial in the 2-category of rig categories. This allows one to view this symmetric 2-rig as the right categorification of the commutative rig $\mathbb{N}$ of natural numbers. In order to prove that $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$ is indeed biinitial, we have to prove two things. Firstly, for every (semistrict) rig category $\mathbb{S}$ all homomorphisms of rig categories $\mathbb{F}:\widehat{\mathbb{F}\mathbb{S} et}_{sk}\to\mathbb{S}$ must be isomorphic to each other or, equivalently, isomorphic to the canonical one $\mathbb{F}_{can}$. Secondly, for every (semistrict) rig category $\mathbb{S}$ the canonical homomorphism $\mathbb{F}_{can}$ must have the identity as the unique rig transformation to itself. These statements are respectively shown in the next two lemmas. \subsubsection{\sc Lemma.}\label{lema_principal} Every homomorphism of rig categories $\mathbb{F}=(F,\varphi^+,\varepsilon^+,\varphi^{\,\bullet},\varepsilon^{\,\bullet})$ from $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$ to a left semistrict rig category $\mathbb{S}$ is isomorphic to the corresponding canonical homomorphism $\mathbb{F}_{can}$. In fact, the basic maps $(\tau_n)_{n\geq 0}$ of $\mathbb{F}$ (Definition~\ref{morfismes_basics}) are the components of an invertible rig transformation $\tau:\mathbb{F}\Rightarrow\mathbb{F}_{can}$. \noindent {\sc Proof.} We already know that the basic maps $\tau_n$ are all invertible.
Hence we need to prove the following: \begin{itemize} \item[(RT1)] $\tau_{n}$ is natural in $[n]$; \item[(RT2)] $\tau$ is $+$-monoidal; \item[(RT3)] $\tau$ is $\bullet$\,-monoidal. \end{itemize} Since $\mathbb{F}_{can}$ is strict, (RT2) and (RT3) follow readily from Proposition~\ref{monoidalitat_tau} together with the fact that the first two basic maps are $\tau_0=\varepsilon^+$ and $\tau_1=\varepsilon^\bullet$. In order to prove (RT1), we need to see that the diagram \begin{equation}\label{naturalitat_xi} \xymatrix{ F[n]\ar[d]_{F(\sigma)}\ar[r]^{\tau_{n}} & \sf 1derline{n}\ar[d]^{F_{can}(\sigma)} \\ F[n]\ar[r]_{\tau_{n}} & \sf 1derline{n} } \end{equation} commutes for every permutation $\sigma\in S_n$, and every $n\geq 0$. If $n\in\{0,1\}$ this is obvious because $\sigma$ is necessarily an identity. If $n\geq 2$ it is enough to prove it when $\sigma$ is one of the generators $(1,2)_n,\ldots,(n-1,n)_n$ of $S_n$. The generic case follows from the functoriality of $F$ and $F_{can}$ and the invertibility of each $\tau_{n}$. Since $\tau_{n}$ can be defined recursively by any of the equivalent recurrence relations (\ref{recurrencia_tau_1}) or (\ref{recurrencia_tau_2}), we proceed by induction on $n\geq 2$. If $n=2$ the commutativity of (\ref{naturalitat_xi}) for each $\sigma\in S_2$ just amounts to the commutativity of the outer diagram \[ \xymatrix{ F[2]\ar[d]_{F((1,2)_{2})}\ar[r]^-{\varphi^+_{[1],[1]}} & F[1]+F[1]\ar[r]^-{\varepsilon^{\,\bullet}+\varepsilon^{\,\bullet}}\ar@{.>}[d]^{c_{F[1],F[1]}} & \sf 1derline{2}\ar[d]^{c_{\sf 1derline{1},\sf 1derline{1}}}\\ F[2]\ar[r]_-{\varphi^+_{[1],[1]}} & F[1]+F[1]\ar[r]_-{\varepsilon^{\,\bullet}+\varepsilon^{\,\bullet}} & \sf 1derline{2} } \] where we have already made use of the fact that $F_{can}((1,2)_2)=c_{\sf 1derline{1},\sf 1derline{1}}$, and the definition of $\tau_{2}$. 
Now, the right square commutes because of the naturality of $c$, and the left square also commutes because $(1,2)_2=c_{[1],[1]}$ and $F$ is a symmetric $+$-monoidal functor. Let us now assume that for a given $n\geq 2$ the diagram (\ref{naturalitat_xi}) commutes for every generator $\sigma\in S_n$. Then for each $i=1,\ldots,n-1$ the diagram \[ \xymatrix{ F[n+1]\ar[d]_{F((i,i+1)_{n+1})}\ar[r]^-{\varphi^+_{[n],[1]}} & F[n]+F[1]\ar[r]^-{\tau_{n}+\varepsilon^{\,\bullet}}\ar[d]^{F((i,i+1)_n)+id_{F[1]}} & \sf 1derline{n+1}\ar[d]^{F_{can}((i,i+1)_{n+1})} \\ F[n+1]\ar[r]_-{\varphi^+_{[n],[1]}} & F[n]+F[1]\ar[r]_-{\tau_{n}+\varepsilon^{\,\bullet}} & \sf 1derline{n+1} } \] commutes. Indeed, the left square is a $\varphi^+$-naturality square, and the right square commutes because \begin{align*} F_{can}((i,i+1)_{n+1})&=id_{\sf 1derline{i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{n-i}} \\ &=id_{\sf 1derline{i-1}}+c_{\sf 1derline{1},\sf 1derline{1}}+id_{\sf 1derline{n-i-1}}+id_{\sf 1derline{1}} \\ &=F_{can}((i,i+1)_{n})+id_{\sf 1derline{1}} \end{align*} and by the induction hypothesis. Notice that in the second equality we are using that $i\leq n-1$ and hence, $n-i\geq 1$ to be able to make explicit the identity $id_{\sf 1derline{1}}$ on the right. This proves that the diagram \[ \xymatrix{ F[n+1]\ar[d]_{F(\sigma)}\ar[r]^{\tau_{n+1}} & \sf 1derline{n+1}\ar[d]^{F_{can}(\sigma)} \\ F[n+1]\ar[r]_{\tau_{n+1}} & \sf 1derline{n+1} } \] commutes for all the generators of $S_{n+1}$ except the generator $(n,n+1)_{n+1}$. To prove that the diagram also commutes in this case we make use of the fact that \[ (n,n+1)_{n+1}=id_{[1]}+(n-1,n)_n \] together with the coherence axioms required on $\varphi^+$, from which we know that \[ (id_{F[1]}+\varphi^+_{[n-1],[1]})\,\varphi^+_{[1],[n]}=(\varphi^+_{[1],[n-1]}+id_{F[1]})\,\varphi^+_{[n],[1]}.
\] It follows that the previous square with $\sigma=(n,n+1)_{n+1}$ looks like \[ \xymatrix{ F[n+1]\ar[d]_{F(id_{[1]}+(n-1,n)_{n})}\ar[r]^-{\varphi^+_{[1],[n]}} & F[1]+F[n]\ar[d]^{id_{F[1]}+F((n-1,n)_n)}\ar[r]^{id_{F[1]}+\varphi^+_{[n-1],[1]}\hspace{1truecm}} & F[1]+F[n-1]+F[1]\ar[r]^-{(\varphi^+_{[1],[n-1]}+id_{F[1]})^{-1}} & F[n]+F[1]\ar[r]^-{\tau_{n}+\varepsilon^{\,\bullet}} & \sf 1derline{n+1}\ar[d]_{F_{can}((n,n+1)_{n+1})} \\ F[n+1]\ar[r]_-{\varphi^+_{[1],[n]}} & F[1]+F[n]\ar[r]_-{id_{F[1]}+\varphi^+_{[n-1],[1]}\hspace{1truecm}} & F[1]+F[n-1]+F[1]\ar[r]_-{(\varphi^+_{[1],[n-1]}+id_{F[1]})^{-1}} & F[n]+F[1]\ar[r]_-{\tau_{n}+\varepsilon^{\,\bullet}} & \sf 1derline{n+1} } \] The left square commutes because of the naturality of $\varphi^+$. As to the right part, notice that \[ F_{can}((n,n+1)_{n+1})=id_{\sf 1derline{n-1}}+c_{\sf 1derline{1},\sf 1derline{1}}=id_{\sf 1derline{1}}+id_{\sf 1derline{n-2}}+c_{\sf 1derline{1},\sf 1derline{1}}=id_{\sf 1derline{1}}+F_{can}((n-1,n)_n) \] because $n\geq 2$. Moreover \[ (\tau_{n}+\varepsilon^{\,\bullet})\,(\varphi^+_{[1],[n-1]}+id_{F[1]})^{-1}=\tau_{n}\,(\varphi^+_{[1],[n-1]})^{-1}+\varepsilon^{\,\bullet}=\varepsilon^{\,\bullet}+\tau_{n-1}+\varepsilon^{\,\bullet}. \] Therefore we have \begin{align*} (\tau_{n}+\varepsilon^{\,\bullet})\,(\varphi^+_{[1],[n-1]}+id_{F[1]})^{-1}\,(id_{F[1]}+\varphi^+_{[n-1],[1]})&=(\varepsilon^{\,\bullet}+\tau_{n-1}+\varepsilon^{\,\bullet})\,(id_{F[1]}+\varphi^+_{[n-1],[1]}) \\ &=\varepsilon^{\,\bullet}+\tau_{n}, \end{align*} so that the right subdiagram in the above diagram is nothing but the diagram \[ \xymatrix{ F[1]+F[n]\ar[d]_{id_{F[1]}+F((n-1,n)_n)}\ar[r]^-{\varepsilon^{\,\bullet}+\tau_{n}} & \sf 1derline{n+1}\ar[d]^{id_{\sf 1derline{1}}+F_{can}((n-1,n)_n)} \\ F[1]+F[n]\ar[r]_-{\varepsilon^{\,\bullet}+\tau_{n}} & \sf 1derline{n+1} } \] whose commutativity follows from the functoriality of $+$ and the induction hypothesis.
\qed \subsubsection{\sc Lemma}\label{unicitat} $\mathbb{F}_{can}:\widehat{\mathbb{F}\mathbb{S} et}_{sk}\to\mathbb{S}$ has the identity as unique (2-)endomorphism. \noindent {\sc Proof.} Since $\mathbb{S}$ is assumed to be left semistrict and $\mathbb{F}_{can}$ is strict, it follows from Definition~\ref{transformacio_rig} that an endomorphism $\xi:\mathbb{F}_{can}\Rightarrow\mathbb{F}_{can}$ consists of a collection of morphisms $\xi_n:\underline{n}\to\underline{n}$ in $\mathcal{S}$, for $n\geq 0$, such that the following diagrams commute: \begin{itemize} \item[(1)] (naturality) for each $n\geq 0$ and each permutation $\sigma\in S_n$ \[ \xymatrix{ \underline{n}\ar[r]^{\xi_{n}}\ar[d]_{F_{can}(\sigma)} & \underline{n}\ar[d]^{F_{can}(\sigma)} \\ \underline{n}\ar[r]_{\xi_{n}} & \underline{n}\,; } \] \item[(2)] ($+$-monoidality) $\xi_0=id_{\underline{0}}$, and for each $m,n\geq 0$ \[ \xymatrix{ \underline{m+n}\ar[r]^{\xi_{m+n}}\ar[d]_{id} & \underline{m+n}\ar[d]^{id} \\ \underline{m+n}\ar[r]_{\xi_{m}+\xi_{n}} & \underline{m+n}\,; } \] \item[(3)] ($\bullet$-monoidality) $\xi_1=id_{\underline{1}}$, and for each $m,n\geq 0$ \[ \xymatrix{ \underline{mn}\ar[r]^{\xi_{mn}}\ar[d]_{id} & \underline{mn}\ar[d]^{id} \\ \underline{mn}\ar[r]_{\xi_{m}\bullet\xi_{n}} & \underline{mn}\,. } \] \end{itemize} In particular, we must have $\xi_{n+1}=\xi_n+\xi_1$ for each $n\geq 0$. Together with the fact that $\xi_0,\xi_1$ are identities, it follows that $\xi_n=id_{\underline{n}}$ for each $n\geq 0$.
\qed Together, these two lemmas imply that there exists one and only one isomorphism between every two homomorphisms $\mathbb{F},\mathbb{F}':\widehat{\mathbb{F}\mathbb{S} et}_{sk}\to\mathbb{S}$ as above, necessarily given by the composite \[ \xymatrix{ \mathbb{F}\ar@{=>}[r]^\tau & \mathbb{F}_{can}\ar@{=>}[r]^{(\tau')^{-1}} & \mathbb{F}'\,} \] with $\tau,\tau'$ the invertible rig transformations in Lemma~\ref{lema_principal} defined by the respective basic maps of $\mathbb{F}$ and $\mathbb{F}'$. Thus we have proved the following. \subsubsection{\sc Theorem}\label{main} The symmetric 2-rig $\widehat{\mathbb{F}\mathbb{S} et}$ of finite sets is biinitial in the 2-category $\mathbf{RigCat}$. \subsection{Explicit description of the homomorphisms from the semistrict 2-rig of finite sets}\label{conclusions} Let us denote by $\mathcal{H} om_{\mathbf{RigCat}}(\widehat{\mathbb{F}\mathbb{S} et}_{sk},\mathbb{S})$ the category of rig category homomorphisms from $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$ to $\mathbb{S}$. According to Theorem~\ref{main}, it is a contractible category for any $\mathbb{S}$. In fact, when $\mathbb{S}$ is left semistrict we have proved more than that, in the sense that we have a quite explicit description of the objects in this category (and of the unique isomorphism between any two of them). Thus it follows from the commutativity of (\ref{naturalitat_xi}) that the action on morphisms of the underlying functor $F$ of each $\mathbb{F}:\widehat{\mathbb{F}\mathbb{S} et}_{sk}\to\mathbb{S}$ is determined by the basic maps $\tau_n:F[n]\to\underline{n}$, $n\geq 0$, of $\mathbb{F}$. Since the data $(\varphi^+,\varepsilon^+,\varphi^\bullet,\varepsilon^\bullet)$ is also determined by these maps (cf. Proposition~\ref{monoidalitat_tau}), we conclude that every homomorphism $\mathbb{F}$ is completely determined by its action on objects together with its basic maps.
In other words, $\mathbb{F}$ is given by just two sequences: a sequence of objects $x=(x_n)_{n\geq 0}$ in $\mathcal{S}$ such that $x_n\cong\underline{n}$ for each $n\geq 0$, and a sequence of isomorphisms $\tau=(\tau_n)_{n\geq 0}$, with $\tau_n:x_n\to\underline{n}$. It turns out that these sequences can be chosen arbitrarily. More precisely, we have the following. \subsubsection{\sc Proposition}\label{morfismes_normalitzats} For any sequence $x=(x_n)_{n\geq 0}$ of objects in $\mathcal{S}$, with $x_n\cong\underline{n}$, and any sequence of isomorphisms $\tau=(\tau_n)_{n\geq 0}$, with $\tau_n:x_n\to\underline{n}$, the functor $F(x,\tau):\widehat{\mathcal{F}\mathcal{S} et}_{sk}\to\mathcal{S}$ defined on objects and morphisms respectively by \begin{align} F(x,\tau)[n]&=x_n,\quad n\geq 0,\label{F_tau_0} \\ F(x,\tau)(\sigma)&=\tau^{-1}_n\,F_{can}(\sigma)\,\tau_n,\quad \sigma\in S_n,\label{F_tau_1} \end{align} together with the isomorphisms \begin{align} \varphi(\tau)^+_{[m],[n]}&=(\tau_m+\tau_n)^{-1}\,\tau_{m+n},\quad m,n\geq 0, \label{varphi_tau_+} \\ \varphi(\tau)^\bullet_{[m],[n]}&=(\tau_m\bullet\tau_n)^{-1}\tau_{mn},\quad m,n\geq 0, \label{varphi_tau_bullet} \end{align} and $\varepsilon(\tau)^+=\tau_0$, $\varepsilon(\tau)^\bullet=\tau_1$ defines a homomorphism of rig categories $\mathbb{F}(x,\tau)$.
\noindent {\sc Proof.} Since both rig categories are left semistrict, the required naturality and coherence conditions on the data $(\varphi(\tau)^+,\varepsilon(\tau)^+,\varphi(\tau)^\bullet,\varepsilon(\tau)^\bullet)$ reduce to the following: \begin{itemize} \item[(A1)] ({\em naturality of} $\varphi(\tau)^+$) the diagram \[ \xymatrix{ x_{m+n}\ar[r]^{\varphi(\tau)^+_{[m],[n]}}\ar[d]_{F(x,\tau)(\rho+\sigma)} & x_m+x_n\ar[d]^{F(x,\tau)(\rho)+F(x,\tau)(\sigma)} \\ x_{m+n}\ar[r]_{\varphi(\tau)^+_{[m],[n]}} & x_m+x_n } \] commutes for all permutations $\rho\in S_m$ and $\sigma\in S_n$, and all $m,n\geq 0$; \item[(A2)] ({\em coherence axiom on} $\varphi(\tau)^+$) the diagram \[ \xymatrix{ x_{m+n+p}\ar[rr]^{\varphi(\tau)^+_{[m],[n+p]}}\ar[d]_{\varphi(\tau)^+_{[m+n],[p]}} && x_m+x_{n+p}\ar[d]^{id_{x_m}+\varphi(\tau)^+_{[n],[p]}} \\ x_{m+n}+x_p\ar[rr]_{\varphi(\tau)^+_{[m],[n]}+id_{x_p}} && x_m+x_n+x_p } \] commutes for each $m,n,p\geq 0$; \item[(A3)] ({\em coherence axiom on} $\varphi(\tau)^+$) the diagram \[ \xymatrix{ x_{m+n}\ar[r]^{\varphi(\tau)^+_{[m],[n]}}\ar[d]_{F(x,\tau)(c_{[m],[n]})} & x_m+x_n\ar[d]^{c_{x_m,x_n}} \\ x_{n+m}\ar[r]_{\varphi(\tau)^+_{[n],[m]}} & x_n+x_m } \] commutes for each $m,n\geq 0$; \item[(A4)] ({\em coherence axiom on} $\varepsilon(\tau)^+$) the diagrams \[ \xymatrix{ x_n\ar[r]^-{\varphi(\tau)^+_{[0],[n]}}\ar[dr]_{id_{x_n}} & x_0+x_n\ar[d]^{\varepsilon(\tau)^++id_{x_n}} \\ & x_n} \quad \xymatrix{ x_n\ar[r]^-{\varphi(\tau)^+_{[n],[0]}}\ar[dr]_{id_{x_n}} & x_n+x_0\ar[d]^{id_{x_n}+\varepsilon(\tau)^+} \\ & x_n} \] commute for each $n\geq 0$; \item[(A5)] ({\em naturality of} $\varphi(\tau)^\bullet$) the diagram \[ \xymatrix{ x_{mn}\ar[r]^-{\varphi(\tau)^\bullet_{[m],[n]}}\ar[d]_{F(x,\tau)(\rho\bullet\sigma)} & x_m\bullet x_n\ar[d]^{F(x,\tau)(\rho)\bullet F(x,\tau)(\sigma)} \\ x_{mn}\ar[r]_{\varphi(\tau)^\bullet_{[m],[n]}} & x_m\bullet x_n } \] commutes for all permutations $\rho\in S_m$ and $\sigma\in S_n$, and all $m,n\geq 0$; \item[(A6)]
({\em coherence axiom on} $\varphi(\tau)^\bullet$) the diagram \[ \xymatrix{ x_{mnp}\ar[rr]^{\varphi(\tau)^\bullet_{[m],[np]}}\ar[d]_{\varphi(\tau)^\bullet_{[mn],[p]}} && x_m\bullet x_{np}\ar[d]^{id_{x_m}\bullet\varphi(\tau)^\bullet_{[n],[p]}} \\ x_{mn}\bullet x_p\ar[rr]_{\varphi(\tau)^\bullet_{[m],[n]}\bullet id_{x_p}} && x_m\bullet x_n\bullet x_p } \] commutes for each $m,n,p\geq 0$; \item[(A7)] ({\em coherence axiom on} $\varepsilon(\tau)^\bullet$) the diagrams \[ \xymatrix{ x_n\ar[r]^-{\varphi(\tau)^\bullet_{[1],[n]}}\ar[dr]_{id_{x_n}} & x_1\bullet x_n\ar[d]^{\varepsilon(\tau)^\bullet\bullet id_{x_n}} \\ & x_n} \quad \xymatrix{ x_n\ar[r]^-{\varphi(\tau)^\bullet_{[n],[1]}}\ar[dr]_{id_{x_n}} & x_n\bullet x_1\ar[d]^{id_{x_n}\bullet\varepsilon(\tau)^\bullet} \\ & x_n} \] commute for each $n\geq 0$; \item[(A8)] ({\em coherence axiom between $\varphi(\tau)^+$ and} $\varphi(\tau)^\bullet$) the diagram \[ \xymatrix{ x_{mn+mp}\ar[d]_{\varphi(\tau)^{\,\bullet}_{[m],[n+p]}}\ar[rr]^-{\varphi(\tau)^+_{[mn],[mp]}} && x_{mn}+x_{mp}\ar[d]^{\varphi(\tau)^{\,\bullet}_{[m],[n]}\,+\,\varphi(\tau)^{\,\bullet}_{[m],[p]}} \\ x_m\bullet x_{n+p}\ar[rr]_-{id_{x_m}\bullet\, \varphi(\tau)^+_{[n],[p]}} && x_m\bullet x_n+x_m\bullet x_p; } \] commutes for each $m,n,p\geq 0$; \item[(A9)] ({\em coherence axiom between $\varphi(\tau)^+$ and} $\varphi(\tau)^\bullet$) the diagram \[ \xymatrix{ x_{(m+n)p}\ar[d]_{\varphi(\tau)^{\,\bullet}_{[m+n],[p]}}\ar[rr]^-{F(x,\tau)(d'_{[m],[n],[p]})} && x_{mp+np}\ar[rr]^-{\varphi(\tau)^+_{[mp],[np]}} && x_{mp}+x_{np}\ar[d]^{\varphi(\tau)^{\bullet}_{[m],[p]}\,+\,\varphi(\tau)^{\bullet}_{[n],[p]}} \\ x_{m+n}\bullet x_p\ar[rr]_-{\varphi(\tau)^+_{[m],[n]}\bullet\, id_{x_p}} && (x_m+x_n)\bullet x_p\ar[rr]_-{d'_{x_m,x_n,x_p}} && x_m\bullet x_p+x_n\bullet x_p } \] commutes for each $m,n,p\geq 0$. \end{itemize} When made explicit using (\ref{F_tau_1})-(\ref{varphi_tau_bullet}), all conditions are easy to check. 
For instance, condition (A1) (condition (A5) is similar, with $\bullet$ in place of $+$) amounts to \[ [\tau_m^{-1}\,F_{can}(\rho)\,\tau_m+\tau_n^{-1}\,F_{can}(\sigma)\,\tau_n]\,(\tau_m+\tau_n)^{-1}\,\tau_{m+n}=(\tau_m+\tau_n)^{-1}\,\tau_{m+n}\,\tau_{m+n}^{-1}\,F_{can}(\rho+\sigma)\,\tau_{m+n}, \] and this equality is a consequence of the functoriality of $+$ (resp. $\bullet$) together with the fact that $F_{can}$ preserves the sum (resp. the product) of morphisms. (A2), and its analog (A6), simply follow from the functoriality of $+$ and $\bullet$, respectively. (A3) is a consequence of the functoriality of $+$, the naturality of $c_{x_m,x_n}$ in $x_m,x_n$, and the fact that $F_{can}$ preserves the commutators. (A4), and its analog (A7), follow from the fact that the left and right additive and multiplicative unitors of $\mathbb{S}$ are assumed to be identities. As to (A8), it is a consequence of the functoriality of $+$ and $\bullet$ together with the fact that the left distributor of $\mathbb{S}$ is assumed to be trivial. Finally, (A9) follows from the functoriality of $+$ and $\bullet$, the naturality of $d'_{x_m,x_n,x_p}$ in $x_m,x_n,x_p$, and the fact that $F_{can}$ preserves the right distributors. \qed \subsubsection{\sc Corollary} For every left semistrict rig category $\mathbb{S}$ the set of objects in $\mathcal{H} om_{\mathbf{RigCat}}(\widehat{\mathbb{F}\mathbb{S} et}_{sk},\mathbb{S})$ is in bijection with the set of pairs $(x,\tau)$, with $x=(x_n)_{n\geq 0}$ any sequence of objects in $\mathcal{S}$ such that $x_n\cong\underline{n}$ for each $n\geq 0$, and $\tau=(\tau_n)_{n\geq 0}$ any sequence of isomorphisms $\tau_n:x_n\to\underline{n}$. In particular, if $\mathbb{S}$ is skeletal then $\mathcal{H} om_{\mathbf{RigCat}}(\widehat{\mathbb{F}\mathbb{S} et}_{sk},\mathbb{S})$ is isomorphic to the contractible category with set of objects $\prod_{n\geq 0}\,Aut_\mathcal{S}(\underline{n})$.
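As a concrete finite illustration of the combinatorics at work above, the following small Python sketch (ours, not part of the paper; permutations in $S_n$ are modelled as tuples, with $+$ the block sum) checks two identities used in the proofs: the generator identity $F_{can}((i,i+1)_{n+1})=F_{can}((i,i+1)_n)+id_{\underline{1}}$ for $i\leq n-1$, and the fact that the conjugation formula (\ref{F_tau_1}) is functorial.

```python
# Finite sanity check (illustrative only; all names are ours).
# A permutation p in S_n is a tuple with p[i] = image of i (0-indexed).

def compose(p, q):
    """(p . q)[i] = p[q[i]], i.e. apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def identity(n):
    return tuple(range(n))

def block_sum(p, q):
    """Disjoint union of permutations: p acts on the first block, q on the second."""
    m = len(p)
    return p + tuple(m + j for j in q)

def transposition(i, n):
    """The generator (i, i+1) of S_n, with 1 <= i <= n-1 (as in the text)."""
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def inverse(p):
    q = [0] * len(p)
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

# (1) F_can((i,i+1)_{n+1}) = F_can((i,i+1)_n) + id_1 whenever i <= n-1.
for n in range(2, 7):
    for i in range(1, n):
        assert block_sum(transposition(i, n), identity(1)) == transposition(i, n + 1)

# (2) For any fixed tau, sigma |-> tau^{-1} . sigma . tau preserves composition.
tau = (2, 0, 1, 3)
F = lambda s: compose(inverse(tau), compose(s, tau))
a, b = (1, 0, 2, 3), (0, 2, 1, 3)
assert F(compose(a, b)) == compose(F(a), F(b))
```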
\subsubsection{\sc Example} Let $\mathbb{S}$ be the symmetric 2-rig $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$ itself. It follows that $\mathcal{E} nd_{\mathbf{RigCat}}(\widehat{\mathbb{F}\mathbb{S} et}_{sk})$ is isomorphic to the contractible category with $\prod_{n\geq 0}\,S_n$ as set of objects. In fact, as the category of endomorphisms of any object in a 2-category, it is a strict monoidal category with the tensor product given by the composition of endomorphisms. Even more, since every endomorphism of $\widehat{\mathbb{F}\mathbb{S} et}_{sk}$ is strictly invertible, $\mathcal{E} nd_{\mathbf{RigCat}}(\widehat{\mathbb{F}\mathbb{S} et}_{sk})$ is a {\em strict 2-group}, i.e. a strict monoidal groupoid all of whose objects are strictly invertible. Then it is easy to check that the above bijection between its set of objects, equipped with the group law induced by the tensor product, and $\prod_{n\geq 0}\,S_n$ with its usual direct product group structure is a group isomorphism. {} \end{document}
\begin{document} \title{Quantum circuit architecture search on a superconducting processor} \author{Kehuan Linghu} \thanks{These two authors contributed equally} \affiliation{Beijing Academy of Quantum Information Sciences, Beijing 100193, China} \author{Yang Qian} \thanks{These two authors contributed equally} \affiliation{School of Computer Science, Faculty of Engineering, University of Sydney, Australia} \affiliation{JD Explore Academy, China} \author{Ruixia Wang} \email{[email protected]} \author{Meng-Jun Hu} \author{Zhiyuan Li} \author{Xuegang Li} \author{Huikai Xu} \author{Jingning Zhang} \author{Teng Ma} \author{Peng Zhao} \affiliation{Beijing Academy of Quantum Information Sciences, Beijing 100193, China} \author{Dong E. Liu} \affiliation{State Key Laboratory of Low Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing, 100084, China} \affiliation{Beijing Academy of Quantum Information Sciences, Beijing 100193, China} \author{Min-Hsiu Hsieh} \affiliation{Centre for Quantum Software and Information, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia} \author{Xingyao Wu} \email{[email protected]} \affiliation{JD Explore Academy, China} \author{Yuxuan Du} \email{[email protected]} \author{Dacheng Tao} \email{[email protected]} \affiliation{JD Explore Academy, China} \author{Yirong Jin} \affiliation{Beijing Academy of Quantum Information Sciences, Beijing 100193, China} \author{Haifeng Yu} \affiliation{Beijing Academy of Quantum Information Sciences, Beijing 100193, China} \date{\today} \pacs{xxx} \begin{abstract} Variational quantum algorithms (VQAs) have shown strong evidence of achieving provable computational advantages in diverse fields such as finance, machine learning, and chemistry.
However, the heuristic ansatz exploited in modern VQAs is incapable of balancing the tradeoff between expressivity and trainability, which may lead to degraded performance when executed on noisy intermediate-scale quantum (NISQ) machines. To address this issue, here we demonstrate the first proof-of-principle experiment of applying an efficient automatic ansatz design technique, i.e., quantum architecture search (QAS), to enhance VQAs on an 8-qubit superconducting quantum processor. In particular, we apply QAS to tailor the hardware-efficient ansatz towards classification tasks. Compared with the heuristic ans\"atze, the ansatz designed by QAS improves the test accuracy from $31\%$ to $98\%$. We further explain this superior performance by visualizing the loss landscape and analyzing the effective parameters of all ans\"atze. Our work provides concrete guidance for developing variable ans\"atze to tackle various large-scale quantum learning problems with advantages. \end{abstract} \maketitle \begin{figure*} \caption{\small{\textbf{Experimental implementation of QAS.}}} \label{fig:scheme} \end{figure*} The successful exhibition of random quantum circuit sampling and Boson sampling with over fifty qubits \cite{arute2019quantum,wu2021strong,zhong2020quantum,zhu2021quantum} evidences the potential of using current quantum hardware to address classically challenging problems. A leading strategy towards this goal is variational quantum algorithms (VQAs) \cite{bharti2021noisy,cerezo2021variational}, which leverage classical optimizers to train an \textit{ansatz} that can be implemented on noisy intermediate-scale quantum (NISQ) devices \cite{preskill2018quantum}.
In the past years, a growing number of theoretical studies have shown the computational superiority of VQAs in the regimes of machine learning \cite{abbas2021power,banchi2021generalization,bu2021statistical,caro2020pseudo,caro2021generalization,du2020learnability,du2021efficient,huang2021information,huang2021power}, quantum many-body physics \cite{huang2021provably,endo2020variational,kandala2017hardware,pagano2020quantum}, and quantum information processing \cite{cerezo2020variational2,du2021exploring,carolan2020variational}. Alongside these achievements, recent studies have recognized some flaws of current VQAs through the lens of the tradeoff between expressivity and learning performance \cite{holmes2021connecting,du2021efficient}. That is, an ansatz with very high expressivity may encounter the barren plateau issue \cite{mcclean2018barren,cerezo2020cost,pesah2020absence,grant2019initialization}, while an ansatz with low expressivity could fail to fit the optimal solution \cite{bravyi2020obstacles}. In this regard, designing a problem-specific and hardware-oriented ansatz is of great importance to guarantee the good learning performance of VQAs and is a precondition for pursuing quantum advantages. Pioneering experimental explorations have validated the crucial role of the ansatz when applying VQAs to accomplish tasks in different fields such as machine learning \cite{havlivcek2019supervised,huang2020experimental,peters2021machine,rudolph2020generation}, quantum chemistry \cite{arute2020hartree,kandala2017hardware,robert2021resource,kais2014introduction,wecker2015solving,cai2020quantum}, and combinatorial optimization \cite{harrigan2021quantum,lacroix2020improving,zhou2020quantum,hadfield2019quantum}. On the one hand, as envisioned by the no-free-lunch theorem \cite{wolpert1997no,poland2020no}, there does not exist a universal ansatz that can solve all learning tasks with optimal performance.
To this end, myriad handcrafted ans\"atze have been designed to address different learning problems \cite{gard2020efficient,ganzhorn2019gate,choquette2021quantum}. For instance, the unitary coupled cluster ansatz and its variants attain superior performance in the task of estimating molecular energies \cite{cao2019quantum,romero2018strategies,cervera2021meta,parrish2019quantum}. Besides devising problem-specific ans\"atze, another indispensable factor in enhancing the performance of VQAs is the compatibility between the exploited ansatz and the employed quantum hardware, especially in the NISQ scenario \cite{harrigan2021quantum}. Concretely, when the circuit layout of the ansatz mismatches the qubit connectivity, additional quantum resources, e.g., SWAP gates, are essential to complete the compilation. Nevertheless, these extra quantum resources may inhibit the performance of VQAs because of the limited coherence time and inevitable gate noise of NISQ machines. Considering that there are countless learning problems and diverse architectures of quantum devices \cite{petit2020universal,divincenzo2000physical,devoret2013superconducting}, it is impractical to manually design problem-specific and hardware-oriented ans\"atze. To enhance the capability of VQAs, initial studies have been carried out to seek feasible strategies for \textit{automatically designing} a problem-specific and hardware-oriented ansatz with both good trainability and sufficient expressivity.
Conceptually, the corresponding proposals exploit random search \cite{cincio2021machine}, evolutionary algorithms \cite{chivilikhin2020mog,rattew2019domain}, deep learning techniques \cite{chen2021quantum,meng2021quantum,kuo2021quantum,zhang2020differentiable,zhang2021neural,ostaszewski2021reinforcement,pirhooshyaran2021quantum}, and adaptive strategies \cite{bilkis2021semi,grimsley2019adaptive,tang2021qubit} to tailor a hardware-efficient ansatz \cite{kandala2017hardware}, i.e., by inserting or removing gates, so as to decrease the cost function. In contrast with conventional VQAs, which only adjust the parameters, optimizing both the parameters and the circuit layout enables an enhanced learning performance of VQAs. Meanwhile, their automatic nature empowers these approaches to address a broad range of learning problems. Despite the prospects, little is known about the effectiveness of these approaches when executed on real quantum devices. In this study, we demonstrate the first proof-of-principle experiment of applying an efficient automatic ansatz design technique, i.e., the quantum architecture search (QAS) scheme \cite{du2020quantum}, to enhance VQAs on an 8-qubit superconducting quantum processor. In particular, we focus on data classification tasks and utilize QAS to pursue a better classification accuracy. To the best of our knowledge, this is the first experimental study towards multi-class learning. Moreover, to understand the noise-resilient property of QAS, we fabricate a controllable dephasing noise channel and integrate it into our quantum processor. Assisted by this technique, we experimentally demonstrate that the ansatz designed by QAS is compatible with the topology of the employed quantum hardware and attains much better performance than the hardware-efficient ansatz \cite{kandala2017hardware} when the system noise becomes large.
Experimental results indicate that under a certain level of noise, the ansatz designed by QAS achieves the highest test accuracy ($95.6\%$), while the other heuristic ans\"atze only reach $90\%$ accuracy. Additional analyses of the loss landscape further explain the advantage of the QAS-based ansatz in terms of both the optimization and the effective parameter space. These gains in performance suggest the significance of developing QAS and other automatic ansatz design techniques to enhance the learning performance of VQAs. \begin{figure*} \caption{\small{\textbf{Experimental setups.}}} \label{fig:2} \end{figure*} \noindent\textbf{Results} \noindent\textbf{The mechanism of QAS.} The underlying principle of QAS is to optimize the quantum circuit architecture and the trainable parameters \textit{simultaneously} so as to minimize an objective function. To elucidate this, in the following we elaborate on how to apply QAS to tailor the hardware-efficient ansatz (HEA). Mathematically, an $N$-qubit HEA $U(\bm{\theta})=\prod_{l=1}^LU_l(\bm{\theta})\in SU(2^N)$ yields a multi-layer structure, where the circuit layout of all blocks is identical, the $l$-th block $U_l(\bm{\theta})$ consists of a sequence of parameterized single-qubit and two-qubit gates, and $L$ denotes the number of blocks. Note that our method can be generalized to prune other ans\"atze such as the unitary coupled cluster ansatz \cite{romero2018strategies} and the quantum approximate optimization ansatz \cite{farhi2014quantum}. QAS is composed of four steps to tailor HEA and output a problem-dependent and hardware-oriented ansatz, as shown in Fig.~\ref{fig:scheme}. The first step is specifying the ans\"atze pool $\mathcal{S}$ collecting all candidate ans\"atze. Suppose that $U_l(\bm{\theta})$ for $\forall l\in[L]$ can be formed by three types of parameterized single-qubit gates, i.e., rotation gates along the three axes, and one type of two-qubit gate, i.e., the CNOT gate.
When the layout of different blocks can be varied by replacing single-qubit gates or removing two-qubit gates, the ans\"atze pool $\mathcal{S}$ includes in total $O((3^{N}+2^N)^L)$ ans\"atze. Denote the input data as $\mathcal{D}$ and the objective function as $\mathcal{L}$. The goal of QAS is to find the best candidate ansatz $\bm{a}\in \mathcal{S}$ and its corresponding optimal parameters $\bm{\theta}_{\bm{a}}^*$, i.e., \begin{equation}\label{eqn:obj_QAS} (\bm{\theta}_{\bm{a}}^*,\bm{a}^*)= \arg \min_{\bm{\theta}_{\bm{a}}\in\mathcal{C}, \bm{a}\in\mathcal{S}} \mathcal{L}(\bm{\theta}_{\bm{a}}, \bm{a}, \mathcal{D},\mathcal{E}_{\bm{a}}), \end{equation} where the quantum channel $\mathcal{E}_{\bm{a}}$ simulates the quantum system noise induced by $\bm{a}$. The second step is optimizing Eq.~(\ref{eqn:obj_QAS}) with $T$ iterations in total. As discussed in our technical companion paper \cite{du2020quantum}, seeking the optimal solution $(\bm{\theta}_{\bm{a}}^*,\bm{a}^*)$ is computationally hard, since the optimization over $\bm{a}$ is discrete and the sizes of $\mathcal{S}$ and $\mathcal{C}$ scale exponentially with respect to $N$ and $L$. To conquer this difficulty, QAS exploits a supernet and a weight sharing strategy to ensure a good estimation of $(\bm{\theta}_{\bm{a}}^*,\bm{a}^*)$ within a reasonable computational cost. Concisely, the weight sharing strategy correlates parameters among different ans\"atze in $\mathcal{S}$ to reduce the parameter space $\mathcal{C}$. As for the supernet, it plays two significant roles, i.e., configuring the ans\"atze pool $\mathcal{S}$ and parameterizing an ansatz $\bm{a}\in\mathcal{S}$ via the specified weight sharing strategy.
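As a toy illustration of the supernet and weight sharing described above (a minimal sketch of ours, not the actual QAS implementation), one can model the candidate pool for an $N$-qubit, $L$-block HEA and the shared parameter table as follows:

```python
import random

# Minimal, illustrative model of the QAS supernet (not the authors' code):
# each of the L blocks picks, per qubit, one of three rotation gates, and
# decides whether to keep the CNOT on each of the N-1 chain couplings.
# Weight sharing: every candidate ansatz reads its angles from one shared
# table indexed by (block, qubit, gate_type), so a gradient step on the
# sampled ansatz also updates all candidates reusing those entries.
N, L = 8, 3
GATES = ("RX", "RY", "RZ")

shared_params = {(l, q, g): 0.0 for l in range(L) for q in range(N) for g in GATES}

def sample_ansatz():
    """Randomly draw one candidate circuit layout from the pool."""
    return [
        {
            "rot": [random.choice(GATES) for _ in range(N)],
            "cnot": [random.random() < 0.5 for _ in range(N - 1)],
        }
        for _ in range(L)
    ]

def parameters_of(ansatz):
    """The shared-table keys this candidate actually uses."""
    return [(l, q, blk["rot"][q]) for l, blk in enumerate(ansatz) for q in range(N)]

ansatz = sample_ansatz()
for key in parameters_of(ansatz):   # mock gradient step: theta <- theta - eta * grad
    shared_params[key] -= 0.1 * 1.0
```

The ranking and fine-tuning steps would then re-evaluate a portion of the candidates with their shared parameters before retraining the winner.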
In doing so, at each iteration $t$, QAS randomly samples an ansatz $\bm{a}^{(t)}\in\mathcal{S}$ and updates its parameters via $\bm{\theta}_{\bm{a}}^{(t+1)}=\bm{\theta}_{\bm{a}}^{(t)} - \eta \nabla \mathcal{L}(\bm{\theta}_{\bm{a}}^{(t)}, \bm{a}^{(t)}, \mathcal{D},\mathcal{E}_{\bm{a}^{(t)}})$, where $\eta$ is the learning rate. Due to the weight sharing strategy, the parameters of the unsampled ans\"atze are also updated. The last two steps are ranking and fine-tuning. Specifically, once the training is completed, QAS ranks a portion of the trained ans\"atze and chooses the one with the best performance. The ranking strategies are diverse, including random search and evolutionary search. Finally, QAS utilizes the selected ansatz to fine-tune the optimized parameters within a few iterations. Refer to Ref.~\cite{du2020quantum} for the omitted technical details of QAS. \noindent\textbf{Experimental implementation.} We implement QAS on a superconducting quantum processor to accomplish classification tasks on the Iris dataset. Namely, the Iris dataset $\mathcal{D}=\{\bm{x}_i, y_i\}_{i=1}^{150}$ consists of three categories of flowers (i.e., $y_i\in\{0, 1, 2\}$) and each category includes $50$ examples characterized by $4$ features (i.e., $\bm{x}_i\in\mathbb{R}^4$). In our experiments, we split the Iris dataset into three parts, i.e., the training dataset $\mathcal{D}_T=\{\bm{x},y\}$, the validation dataset $\mathcal{D}_V$, and the test dataset $\mathcal{D}_E$ with $\mathcal{D}=\mathcal{D}_T \cup \mathcal{D}_V \cup \mathcal{D}_E$. The functionality of $\mathcal{D}_T$, $\mathcal{D}_V$, and $\mathcal{D}_E$ is estimating the optimal classifier, preventing the classifier from over-fitting, and evaluating the generalization property of the trained classifier, respectively. Our experiments are carried out on a quantum processor comprising $8$ Xmon superconducting qubits arranged in a one-dimensional chain.
As shown in Fig.~\ref{fig:2}(b), the employed quantum device is fabricated by sputtering an aluminium thin film onto a sapphire substrate. The single-qubit rotation gate $\text{R}_X$ ($\text{R}_Y$) along the X-axis (Y-axis) is implemented with microwave pulses, and the Z rotation gate $\text{R}_Z$ is realized as a virtual Z gate \cite{2016Efficient}. The CZ gate is constructed by exploiting the avoided level crossing between the high-level states $|11\rangle$ and $|02\rangle$ or $|11\rangle$ and $|20\rangle$. The calibrated readout matrix is shown in Fig.~\ref{fig:2}(d) and the device parameters are summarized in Table \ref{tab:device} of Appendix \ref{Expsetup}. We fabricate the controllable dephasing noise as a measurable disturbance to the quantum evolution. The Kraus operators of the noise channel can be written as $E_0=\sqrt{1-\alpha p}\left(\begin{smallmatrix}1&0\\0&1\end{smallmatrix}\right)$ and $E_1=\sqrt{\alpha p}\left(\begin{smallmatrix}1&0\\0&-1\end{smallmatrix}\right)$. Here $\alpha$ is a constant and the value of $p$ can be tuned in our experiment by changing the average number of coherent photons in the readout cavity's steady state. The intensity of the coherent photons is represented by the amplitude $p$ of the curve shown on the AWGs. The experimental implementation of the quantum classifiers is as follows. As illustrated in Fig.~\ref{fig:2}(a), the gate encoding method is exploited to load classical data into quantum states. The encoding circuit reads $U_E(\bm{x})=\otimes_{j=1}^4\text{R}_Y(\bm{x}_{i,j})$. To evaluate the effectiveness of QAS, three types of ans\"atze $U(\bm{\theta})$ are used to construct the quantum classifier. The first two types are heuristic ans\"atze, namely the hardware-agnostic ansatz (HAA) and the hardware-efficient ansatz (HEA).
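As a quick numerical sanity check (our own sketch, with made-up feature values), the operators $E_0,E_1$ above satisfy the Kraus completeness relation, the channel damps off-diagonal density-matrix elements by $1-2\alpha p$, and the encoding circuit $U_E(\bm{x})$ prepares a normalized four-qubit product state:

```python
import numpy as np

# Illustrative numerics (ours, not the experiment's control code).
alpha, p = 1.0, 0.015                       # example values for the channel

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
E0 = np.sqrt(1 - alpha * p) * I2
E1 = np.sqrt(alpha * p) * Z
# Completeness: E0^dag E0 + E1^dag E1 = I.
assert np.allclose(E0.conj().T @ E0 + E1.conj().T @ E1, I2)

# Pure dephasing: off-diagonal terms shrink by (1 - 2*alpha*p).
rho = np.array([[0.5, 0.5], [0.5, 0.5]])     # |+><+|
rho_out = E0 @ rho @ E0.conj().T + E1 @ rho @ E1.conj().T
assert np.isclose(rho_out[0, 1], (1 - 2 * alpha * p) * 0.5)

def RY(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Encoding U_E(x) = RY(x_1) (x) RY(x_2) (x) RY(x_3) (x) RY(x_4) on |0000>.
x = np.array([0.3, 1.2, 0.7, 2.1])           # a made-up 4-feature sample
U_E = RY(x[0])
for xi in x[1:]:
    U_E = np.kron(U_E, RY(xi))
state = U_E @ np.eye(16)[:, 0]
assert np.isclose(np.linalg.norm(state), 1.0)
```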
As depicted in Fig.~\ref{fig:2}(a), the HAA $U_{\text{HAA}}(\bm{\theta})$ is designed for a general paradigm and ignores the topology of the specific quantum hardware platform; the HEA $U_{\text{HEA}}(\bm{\theta})$ adapts to the quantum hardware constraints, where all inefficient two-qubit operations that connect two physically nonadjacent qubits are forbidden. The third type of ansatz refers to the output of QAS, denoted as $U_{\text{QAS}}(\bm{\theta})$. The mean square error between the predictions and the true labels is employed as the objective function for all quantum classifiers. The noise rate of the dephasing channel $p$ is set to $0$, $0.01$, and $0.015$. We benchmark the test accuracy of these three ans\"atze, HAA, HEA, and QAS, and explore whether QAS attains the highest test accuracy. Refer to Appendix B for more implementation details. \begin{figure} \caption{\small{\textbf{The performance of quantum classifiers.}}} \label{fig:3} \end{figure} \begin{figure*} \caption{\small{\textbf{The circuit architecture and corresponding loss landscape.}}} \label{fig:4} \end{figure*} \noindent \textbf{Experimental results.} To comprehend the importance of the compatibility between quantum hardware and ansatz, we first examine the learning performance of the quantum classifiers with HAA and HEA under different noise rates. The obtained experimental results are shown in Fig.~\ref{fig:3}(a). In particular, in the measure of training loss (i.e., the lower the better), the quantum classifier with HEA significantly outperforms the one with HAA for all noise settings. At the $10$-th epoch, the training loss of the quantum classifier with HAA and HEA is $0.06$ and $0.017$ ($0.049$ and $0.012$; $0.049$ and $0.012$) when $p=0.015$ ($p=0$; $p=0.01$), respectively. In addition, the optimization of the quantum classifier with HAA seems to be divergent when $p=0.015$. We further evaluate the test accuracy to compare their learning performance.
As shown in Fig.~\ref{fig:3}(b), there exists a manifest gap between the two ans\"atze, highlighted by the blue and yellow colors. For all noise settings, the test accuracy corresponding to HAA is only $31.1\%$, whereas the test accuracy corresponding to HEA is at least $90.0\%$. These observations signify the importance of reconciling the topology of the employed quantum hardware with the ansatz, which is the key motivation of this study. We next experiment on QAS to quantify how its problem-specific and hardware-oriented design enhances the learning performance of quantum classifiers. Concretely, as shown in Fig.~\ref{fig:3}(b), for all noise settings, the quantum classifier with the ansatz searched by QAS attains a better test accuracy than those with HAA and HEA. That is, when $p=0$ ($p=0.01$ and $p=0.015$), the test accuracy achieved by QAS is $97.8\%$ ($97.8\%$ and $95.6\%$), which is higher than that of HEA with $96.7\%$ ($92.2\%$ and $90.0\%$). Notably, although the test accuracy slightly decreases with increasing system noise, the strength of QAS over the other two ans\"atze becomes evident. In other words, QAS shows the advantage of simultaneously alleviating the effect of quantum noise and searching for the optimal ansatz to achieve a high accuracy. This superior performance validates the effectiveness of QAS towards classification tasks. We last investigate the potential factors ensuring the good performance of QAS from two perspectives, i.e., the circuit architecture and the corresponding loss landscape. The ans\"atze searched under the three noise settings, together with HAA and HEA, are pictured in the top row of Fig.~\ref{fig:4}. Compared with HEA and HAA, QAS reduces the number of CZ gates with respect to the increased level of noise. When $p=0.015$, QAS chooses the ansatz containing only one CZ gate. This behavior indicates that QAS can adaptively control the number of quantum gates to balance the expressivity and learning performance.
We plot the loss landscape of HAA, HEA, and the ansatz searched by QAS in the middle row of Fig.~\ref{fig:4}. To visualize the high-dimensional loss landscape in a 2D plane, a dimension reduction technique, i.e., principal component analysis (PCA) \cite{pearson1901liii}, is applied to compress the parameter trajectory corresponding to each optimization step. After dimension reduction, we choose the obtained first two principal components, which explain most of the variance, as the landscape-spanning vectors. Refer to \cite{rudolph2021orqviz} and Appendix C for details. For HAA and HEA, the objective function is governed by both the $0$-th component ($98.75\%$ of the variance for HAA, $98.55\%$ of the variance for HEA) and the $1$-st component ($1.24\%$ of the variance for HAA, $1.45\%$ of the variance for HEA). By contrast, for the ans\"atze searched by QAS, the loss landscapes depend entirely on the $0$-th component. Furthermore, the optimization path for QAS is exactly linear, while the optimization of HAA and HEA follows a nonlinear curve. This difference reveals that QAS enables a more efficient optimization trajectory. As indicated by the bottom row of Fig.~\ref{fig:4}, there is a single major parameter that contributes the most to the $0$-th component in the three ans\"atze searched by QAS, while HAA and HEA have to consider multiple parameters to determine the $0$-th component. This phenomenon reflects that the ans\"atze searched by QAS tend to have a smaller effective parameter space, which leads to less noise accumulation and hence stronger noise robustness. These observations can be treated as empirical evidence explaining the superiority of QAS. \noindent\textbf{Discussion} Our experimental results provide the following insights. First, we experimentally verify the feasibility of automatically designing a problem-specific and hardware-oriented ansatz to improve the power of quantum classifiers.
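The PCA-based visualization procedure described above can be sketched as follows (our own minimal example using a synthetic, perfectly linear parameter trajectory, mimicking the QAS case where the $0$-th component explains essentially all of the variance):

```python
import numpy as np

# Sketch of the trajectory-PCA step (ours): collect the parameter vector at
# each optimization step, center the trajectory, and take the SVD; the
# squared singular values give the per-component explained variance.
rng = np.random.default_rng(0)
d, steps = 12, 30
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
traj = np.outer(np.linspace(0.0, 1.0, steps), direction)  # a linear path in R^d

X = traj - traj.mean(axis=0)
_, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)

assert explained[0] > 0.999     # the 0-th component carries ~100% of the variance
pc0 = Vt[0]                     # a landscape-spanning vector for the 2D plot
assert np.isclose(abs(pc0 @ direction), 1.0, atol=1e-6)
```

For a curved (nonlinear) trajectory, as observed for HAA and HEA, the variance would instead be split between the first two components.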
Second, the analysis related to the loss landscape and the circuit architectures exhibits the potential of applying QAS and other variable-ansatz construction techniques to compensate for the caveats incurred by executing variational quantum algorithms on NISQ machines. Besides classification tasks, it is crucial to benchmark QAS and its variants on other learning problems in quantum chemistry and quantum many-body physics. In these two areas, the employed ansatz is generally Hamiltonian dependent \cite{peruzzo2014variational,romero2018strategies,cao2019quantum}. As a result, the way of constructing the ans\"atze pool should be carefully conceived. In addition, another important research direction is understanding the capabilities of QAS for large-scale problems. How to find a near-optimal ansatz among the exponentially many candidates is a challenging issue. We note that although QAS can reconcile the imperfection of quantum systems, a central route to enhancing the performance of variational quantum algorithms is improving the quality of quantum processors. For this purpose, we will delve into carrying out QAS and its variants on more advanced quantum machines to accomplish real-world learning tasks with potential advantages. \noindent\textbf{Methods} \noindent\textbf{Noise setup.} Due to the ac Stark effect, photon number fluctuations from the readout cavity can cause qubit dephasing \cite{yan2018distinguishing}. We implement a pure dephasing noise channel in our device. For every qubit, the noise photons are generated by a coherent source with a Lorentzian-shaped spectrum centered at the frequency $\omega_c$, where $\omega_c$ is the center frequency of the readout cavity, which is over-coupled to the feedline at the input and output ports, and capacitively coupled to the Xmon qubit.
The Hamiltonian of the system, including the readout cavity and the qubit, can be written as \begin{equation} H/\hbar=\omega_c a^{\dagger}a+\frac{\omega_q}{2}\sigma_z+g_r(a^{\dagger}\sigma_{-}+a\sigma_{+}), \end{equation} where $\sigma_{\pm} = \sigma_x \pm i\sigma_y$ and $\sigma_j$ ($j=x,y,z$) are the Pauli operators for the Xmon qubit, $a^{\dagger}$ ($a$) is the cavity photon creation (annihilation) operator, $\omega_q$ is the transition frequency between the ground and first excited states of the qubit, and $g_r$ is the coupling strength between the qubit and the readout cavity. By continuously sending coherent photons to drive the readout cavity so that it maintains a coherent state, a noisy environment can be engineered. The noise channel can be described as depolarization in the x-y plane of the Bloch sphere. The noise intensity can be tuned by changing the average number of coherent photons in the steady state of the readout cavity. The average number of photons is represented by the amplitude of the curve shown in the AWGs. Under different noise settings, the values of $T_2^{\star}$ are shown in Table \ref{tab2}.
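As a cross-check, the cavity-qubit Hamiltonian above can be built numerically in a truncated Fock space. This is an illustrative sketch only: the frequencies and coupling used below are placeholder values, not the calibrated parameters of the device.

```python
import numpy as np

def jc_hamiltonian(omega_c, omega_q, g_r, n_max):
    """H/hbar = w_c a^dag a + (w_q/2) s_z + g_r (a^dag s_- + a s_+),
    with the cavity truncated to n_max Fock states."""
    # Cavity annihilation operator: <n-1|a|n> = sqrt(n).
    a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
    i_c, i_q = np.eye(n_max), np.eye(2)
    sz = np.diag([1.0, -1.0])                 # qubit basis: [excited, ground]
    sm = np.array([[0.0, 0.0], [1.0, 0.0]])   # sigma_-: excited -> ground
    sp = sm.T                                 # sigma_+
    return (omega_c * np.kron(a.T @ a, i_q)
            + 0.5 * omega_q * np.kron(i_c, sz)
            + g_r * (np.kron(a.T, sm) + np.kron(a, sp)))

# Placeholder parameters (angular frequencies in arbitrary GHz-scale units).
H = jc_hamiltonian(omega_c=6.5, omega_q=5.2, g_r=0.02, n_max=5)
```

The returned matrix is Hermitian and block-diagonal in the total excitation number, so its eigenvalues give the dressed cavity-qubit levels.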
\begin{table*} \centering \begin{tabular}{ccccccccc} \hline \hline \toprule Parameter & Q1 & Q2 & Q3 & Q4 & Q5 & Q6 & Q7 & Q8 \\ \hline $T_{2}^{\star}(p=0)$ ($\mu s$) & $11.7$ & $2.0$ & $15.2$ & $1.9$ & $15.9$ & $1.8$ & $12.6$ & $1.6$ \\ $T_{2}^{\star}(p=0.005)$ ($\mu s$) & $10.6$ & $1.9$ & $15.3$ & $1.8$ & $15.0$ & $1.6$ & $10.7$ & $1.5$ \\ $T_{2}^{\star}(p=0.01)$ ($\mu s$) & $7.8$ & $1.6$ & $12.5$ & $1.7$ & $14.7$ & $1.6$ & $13.0$ & $1.6$ \\ $T_{2}^{\star}(p=0.015)$ ($\mu s$) & $0.2$ & $1.2$ & $12.7$ & $1.6$ & $14.4$ & $1.6$ & $12.1$ & $1.5$ \\ \toprule \hline \hline \end{tabular} \caption{The transverse relaxation time $T^{\star}_2$ under different noise settings.} \label{tab2} \end{table*} \noindent\textbf{Readout correction.} The experimentally measured results of the final state of the eight qubits were corrected with a calibration matrix, which is obtained in an experimental calibration process. The reconstruction of the readout results is based on Bayes' rule. The colored schematic diagram of the calibration matrix is shown in Fig.~\ref{fig:2}(d). Let $p_{ij}$ denote the probability of measuring the population of $|i\rangle$ when the basis state $|j\rangle$ is prepared. The calibration matrix is \begin{eqnarray} F = \left(\begin{array}{cccc} p_{11}&p_{12}&...&p_{12^{n}}\\ p_{21}&p_{22}&...&p_{22^{n}}\\ ...&...&...&...\\ p_{2^{n}1}&p_{2^{n}2}&...&p_{2^n2^n} \end{array}\right). \end{eqnarray} If we prepare a state $|\psi\rangle$ on $n$ qubits whose probability distribution over the $2^n$ basis states is $P=[P_1,P_2,...,P_{2^n}]^{T}$, then the measured probability distribution $\tilde{P}$ obtained in experiment is related to $P$ by \begin{equation} \tilde{P} = FP. \end{equation} Solving for $P$, we have \begin{equation} P = F^{-1}\tilde{P}. \end{equation} \begin{acknowledgments} We appreciate the helpful discussions with Weiyang Liu and Guangming Xue.
This work was supported by the NSF of Beijing (Grant No. Z190012), the NSFC of China (Grants No. 11890704, No. 12004042, No. 12104055, No. 12104056), and the Key-Area Research and Development Program of Guangdong Province (Grant No. 2018B030326001). \end{acknowledgments} \textbf{Author contributions.} Y.-X. D. and H.-F. Y. conceived the research. K.-H. L.-H. and Y. Q. designed and performed the experiment. Y. Q. and Y.-X. D. performed numerical simulations. Y. Q., X.-Y. W., R.-X. W., M.-J. H., and D. L. analyzed the results. All authors contributed to discussions of the results and the development of the manuscript. Y. Q., R.-X. W., Y.-X. D. and K.-H. L.-H. wrote the manuscript with input from all co-authors. Y.-X. D., X.-Y. W., D.-C. T. and R.-X. W. supervised the whole project. \renewcommand{\thefigure}{M\arabic{figure}} \setcounter{figure}{0} \appendix \onecolumngrid \section{Experiment setup}\label{Expsetup} \subsection{Device parameters} The qubit parameters and the lengths and fidelities of the single- and two-qubit gates of our device are summarized in Table \ref{tab:device}.
\begin{table*} \centering \begin{tabular}{ccccccccc} \hline \hline \toprule Parameter & Q1 & Q2 & Q3 & Q4 & Q5 & Q6 & Q7 & Q8 \\ \hline $\omega_i/2\pi$ (GHz) & $5.202$ & $4.573$ & $5.146$ & $4.527$ & $5.099$ & $4.47$ & $5.118$ & $4.543$ \\ $\alpha_i/2\pi$ (GHz) & $-0.240$ & $-0.240$ & $-0.239$ & $-0.240$ & $-0.239$ & $-0.239$ & $-0.239$ & $-0.242$ \\ $T_{1}$ ($\mu s$) & $10.3$ & $13.6$ & $14.7$ & $16.7$ & $14.9$ & $13.2$ & $12.7$ & $8.0$ \\ $T_{2}^{\star}$ ($\mu s$) & $11.7$ & $2.0$ & $15.2$ & $1.9$ & $15.9$ & $1.8$ & $12.6$ & $1.6$ \\ $\bar{F}_s$& $99.67\%$ & $98.96\%$ & $99.76\%$ & $99.55\%$ & $99.05\%$ & $99.69\%$ & $98.51\%$ & $99.09\%$ \\ $T_s$ ($ns$) & $37$ & $37$ & $37$ & $35$ & $35$ & $37$ & $37$ & $35$ \\ $g_i/2\pi\,(\rm MHz)$ & \multicolumn{8}{c}{~~~~~~$19.7$~~~~~$19.6$~~~~~~$19.1$~~~~~$19.1$~~~~~$19.2$~~~~~~$19.3$~~~~~~$19.6$~~~~~~}\\ $J_{z,i}/2\pi\,(\rm MHz)$ & \multicolumn{8}{c}{~~~~~~~$0.675$~~~~~$0.8$~~~~~~~$0.7$~~~~~~$0.78$~~~~~$0.626$~~~~$0.655$~~~~$0.819$~~~~~~~}\\ $\bar{F}_{cz}$ & \multicolumn{8}{c}{~~~$97.23\%$~~$96.11\%$~~$97.48\%$~~$93.54\%$~~$93.72\%$~~$94.95\%$~~$95.54\%$~~}\\ $T_{cz}$ ($ns$) & \multicolumn{8}{c}{~~~~~$18.5$~~~~~~$21.5$~~~~~~$23$~~~~~~~$19.5$~~~~~~$23.5$~~~~~$21.5$~~~~~$18.5$~~~~~}\\ \toprule \hline \hline \end{tabular} \caption{Device parameters. $\omega_i$ and $\alpha_i$ represent the qubit frequency and qubit anharmonicity respectively. $T_1$ and $T_2^{\star}$ are the longitudinal and transverse relaxation time respectively. $\bar{F}_s$ and $T_s$ are the average fidelity and length of the single qubit gates. $g_i$ is the coupling strength between nearby qubits, and $J_{z,i}$ is the effective ZZ coupling strength. $\bar{F}_{cz}$ are the fidelity of the CZ gates calibrated by quantum process tomography and $T_{cz}$ are the length of the CZ gates.} \label{tab:device} \end{table*} \subsection{Electronics and control wiring}\label{Electronics} The device is installed at a cryogenic setup in a dilution refrigerator. 
The control and measurement electronics connected to the device are shown in Fig.~\ref{fig:chip}. The electronic control module is divided into six areas with different temperatures. The superconducting quantum device is installed at the base plate in a cryogenic environment of 10 mK. For each qubit, the frequency is tuned by a flux control line that changes the magnetic flux through the SQUID loop; the flux is controlled by a constant current generated by a voltage source and inductively coupled to the SQUID. Four attenuators are connected in series in the line to suppress thermal noise. The XY control of each qubit is achieved by up-converting the intermediate-frequency signals with analog IQ-mixer modules. The drive pulse is provided by multichannel AWGs with a sample rate of 2 GSa/s. The qubit readout is performed by a readout control system with a sampling rate of 1 GSa/s. The readout pulse is up-converted to the frequency band of the readout cavity with the analog IQ-mixer and transmitted through the readout line, with attenuators and low-pass filters, to the chip. At the output side, the constant and alternating currents are combined by a bias-tee; the signal is amplified by a parametric amplifier connected to the readout line through a circulator at 10 mK, as well as by a high-electron-mobility transistor (HEMT) at 4 K and two more amplifiers at room temperature. The amplified signals are finally digitized by the analog-to-digital converter. \begin{figure*} \caption{Schematic diagram of the electronics and wiring setup for the superconducting quantum system.} \label{fig:chip} \end{figure*} \section{Implementation of quantum classifiers} In this section, we implement HAA, HEA and QAS for classification of the Iris dataset on the $8$-qubit superconducting quantum processor with controllable dephasing noise. A detailed description of the dataset and the hyper-parameter configuration is given below.
\subsection{Dataset}\label{sec:dataset} The classification data employed in this paper is the Iris dataset \cite{fisher1936use}, which contains $150$ instances characterized by $4$ attributes and $3$ categories. Each dimension of the feature vector is normalized to the range $[0,1]$. During training, the whole dataset is split into three parts: a training set ($60$ samples), a validation set ($45$ samples) and a test set ($45$ samples). The sample distribution is visualized by selecting the first two dimensions of the feature vector. As shown in Fig.~\ref{fig:app:iris_data}, samples of classes $1$ and $2$ cannot be distinguished by a linear classifier. This means that nonlinearity should be introduced into the quantum classifier to achieve higher classification accuracy. \begin{figure} \caption{\small{\textbf{The visualization of the Iris dataset based on its first two dimensions.}}} \label{fig:app:iris_data} \end{figure} \subsection{Objective function and accuracy measure} \textbf{Objective function.} We adopt the mean square error (MSE) as the objective function for all quantum classifiers, i.e., \begin{equation} \mathcal{L}(\mathcal{D},\bm{\theta})=\frac{1}{2n}\sum_{i=1}^n(\braket{O}_i-y_i)^2, \end{equation} where $\braket{O}_i=\braket{0|U_E(\bm{x}_i)^\dagger U(\bm{\theta})^\dagger OU(\bm{\theta})U_E(\bm{x}_i)|0}$, $O$ refers to the observable, $U_E(\bm{x}_i)$ denotes the unitary operator that embeds the classical feature vector $\bm{x}_i$ into the quantum circuit, and $U(\bm{\theta})$ is the variational quantum circuit with trainable parameters $\bm{\theta}$. \textbf{Definition of train, valid, and test accuracy.} Given an example $\bm{x}_i$, the quantum classifier predicts its label as \begin{equation} \tilde{y}_i=\left\{ \begin{aligned} 0&,\braket{O}_i\leq \frac{1}{6}\\ 1&,\frac{1}{6}<\braket{O}_i\leq \frac{1}{2}\\ 2&,\braket{O}_i>\frac{1}{2} \end{aligned} \right..
\end{equation} The train (valid and test) accuracy measures the agreement between the predicted labels and the true labels for examples in the train dataset $\mathcal{D}_T$ (valid dataset $\mathcal{D}_V$ and test dataset $\mathcal{D}_E$), i.e., \begin{equation} accuracy=\frac{\sum_{(\bm{x}_i,y_i)\in \mathcal{D}}\mathbbm{1}_{\tilde{y}_i=y_i}}{|\mathcal{D}|},\quad\mathcal{D}=\mathcal{D}_T\ or\ \mathcal{D}_V\ or\ \mathcal{D}_E, \end{equation} where $|\cdot|$ denotes the size of a set. \subsection{Training hyper-parameters} The trainable parameters of all ans\"atze are randomly initialized following the uniform distribution $\mathcal{U}_{[-\pi,\pi]}$. During training, the hyper-parameters are set as follows: the optimizer is stochastic gradient descent (SGD) \cite{kiefer1952stochastic}, the batch size is $4$, and the learning rate is fixed at $0.2$. Specifically, the parameter shift rule \cite{mitarai2018quantum} is applied to compute the gradient of the objective function with respect to each single parameter. For QAS, we train $5$ candidate supernets for $40$ epochs to fit the training set. During the search phase, we randomly sample $100$ ans\"atze and rank them according to their accuracy on the validation set. Finally, the ansatz with the highest accuracy is selected as the target ansatz. The ans\"atze pool is constructed as follows. For the single-qubit gates, the candidate set is $\{RY,RZ\}$. For the two-qubit gates, QAS automatically determines whether to apply $CZ$ gates to the qubit pairs $(0,1),(1,2),(2,3)$, discarding all other combinations, such as $(0,2)$ and $(0,3)$. These non-adjacent qubit connections require more gates when running on a superconducting processor with a $1$-D chain topology, leading to greater noise accumulation.
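The decision rule and the accuracy measure above can be sketched in a few lines. The thresholds $1/6$ and $1/2$ are the midpoints of the regression targets $0$, $1/3$ and $2/3$; as an assumption of this sketch, every expectation value above $1/2$ is mapped to class $2$.

```python
def predict_label(o):
    """Map an expectation value <O>_i in [0, 1] to a class label
    using the thresholds 1/6 and 1/2."""
    if o <= 1 / 6:
        return 0
    if o <= 1 / 2:
        return 1
    return 2

def accuracy(expectations, labels):
    """Fraction of examples whose predicted label equals the true label."""
    correct = sum(predict_label(o) == y for o, y in zip(expectations, labels))
    return correct / len(labels)

# Toy example: three well-separated expectation values.
acc = accuracy([0.05, 0.35, 0.70], [0, 1, 2])  # -> 1.0
```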
\section{More details of experimental results} \subsection{PCA used in visualization of loss landscape} To visualize the loss landscape of HAA, HEA and the ans\"atze searched by QAS with respect to the parameter space, we apply principal component analysis (PCA) to the parameter trajectory collected at every optimization step and choose the first two components as the observation variables. To be concrete, given a sequence of trainable parameter vectors along the optimization trajectory $\{\bm{\theta}^{(1)},...,\bm{\theta}^{(t)},...,\bm{\theta}^{(T)}\}$, where $T$ is the number of total optimization steps and $\bm{\theta}^{(t)}\in \mathbb{R}^d$ denotes the parameter vector at the $t$-th step, we construct the matrix $\Theta=[\bm{\theta}^{(1)};...;\bm{\theta}^{(T)}]\in \mathbb{R}^{T\times d}$. Once we apply PCA to $\Theta$ and obtain the first two principal components $E=[\bm{e}_0,\bm{e}_1]^T\in \mathbb{R}^{2\times d}$, the loss landscape with respect to the trainable parameters can be visualized by performing a 2D scan of $E\Theta^T$. Simultaneously, the projection vector $\bm{e}_i$ of each component indicates the contribution of each parameter to this component, implying how many parameters determine the value of the objective function. Refer to \cite{rudolph2021orqviz} for details. The optimization trajectory can provide certain information about the trainability and convergence of the employed ansatz in quantum classifiers. When the optimization path is exactly linear, it implies that the loss landscape is not intricate and the model can be easily optimized. On the contrary, a complicated nonlinear optimization curve indicates the difficulty of converging to a local minimum. \subsection{More experimental results} We conduct numerical experiments on classical computers to validate the effectiveness of QAS.
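The PCA procedure above amounts to centering the trajectory matrix $\Theta$ and taking its top two right singular vectors. A minimal NumPy sketch (PCA via SVD, with illustrative data rather than a real training trajectory) is:

```python
import numpy as np

def landscape_components(trajectory):
    """PCA of an optimization trajectory.

    trajectory: (T, d) array whose t-th row is the parameter vector theta^(t).
    Returns E = [e_0; e_1] of shape (2, d) and the explained-variance ratios
    of the first two components.
    """
    theta = np.asarray(trajectory, dtype=float)
    centered = theta - theta.mean(axis=0)
    # Right singular vectors of the centered matrix are the principal axes.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    ratios = s**2 / np.sum(s**2)
    return vt[:2], ratios[:2]

# Illustrative trajectory: 50 steps moving along a single direction in R^3,
# mimicking the near-linear optimization path observed for QAS.
steps = np.linspace(0.0, 1.0, 50)
traj = np.outer(steps, np.array([1.0, 2.0, 3.0]))
E, ratios = landscape_components(traj)
```

Projecting the centered trajectory as `centered @ E.T` then yields the 2D coordinates used for the landscape scan.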
\textbf{Dephasing noise.} We simulate the dephasing noise channel as \begin{equation} \rho'=(1-\bar{p})\rho+\bar{p}\sigma_z\rho \sigma_z, \end{equation} where $\rho$ and $\rho'$ represent the ideal quantum state (density matrix) and the noisy quantum state affected by the dephasing channel, $\sigma_z$ is the Pauli-Z operator, and $\bar{p}=\alpha p$ is the noise strength, i.e., the probability of applying a Pauli-Z operator to the quantum state. In the experiments, the noise strength $\bar{p}$ is set to $\{0.05,0.1,0.15\}$, and the number of circuit layers $L$ is set to $\{2,4,6\}$. Each setting is run $10$ times to suppress the effects of randomness. \textbf{Simulation results.} As shown in Fig.~\ref{fig:app:acc_sim}, QAS achieves the highest test accuracy over all noise and layer settings. When $L=2$ and $p=0.05$, the performance gap between HEA ($95.7\%$) and QAS ($97.1\%$) is relatively small. As both the depth and the noise strength increase, HEA witnesses a rapid accuracy drop ($60\%$ for $L=6$ and $p=0.15$). By contrast, the test accuracy of QAS with $L=6$ and $p=0.15$ is $95\%$, a decrease of only $2\%$. This behaviour accords with the results on the superconducting processor (the test accuracy of QAS running on the superconducting device decreases from $97.8\%$ to $95.6\%$ when $p$ increases from $0$ to $0.015$; refer to Fig.~\ref{fig:3} for more details), further illustrating the advantage of QAS in error mitigation and model expressivity. \begin{figure} \caption{The test accuracy achieved by HEA and QAS under various numbers of layers and noise strengths when simulating on classical devices.} \label{fig:app:acc_sim} \end{figure} \end{document}
\begin{document} \title [The Iwasawa main conjectures for ${\rm GL}_2$ and derivatives of $p$-adic $L$-functions]{The Iwasawa main conjectures for ${\rm GL}_2$ and derivatives of $p$-adic $L$-functions} \author[F.~Castella and X.~Wan] {Francesc Castella and Xin Wan} \address{Department of Mathematics, University of California, Santa Barbara, CA 93106, USA} \email{[email protected]} \address{Morningside Center of Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Science, No.~55 Zhongguancun East Road, Beijing, 100190, China} \email{[email protected]} \thanks{During the preparation of this paper, F.C. was partially supported by the NSF grants DMS-{1801385} and DMS-{1946136}.} \subjclass[2010]{11R23 (primary); 11G05, 11G40 (secondary)} \date{January 12, 2020.} \begin{abstract} We prove under mild hypotheses the three-variable Iwasawa main conjecture for $p$-ordinary modular forms in the indefinite setting. Our result is in a setting complementary to that in the work of Skinner--Urban, and it has applications to Greenberg's nonvanishing conjecture for the first derivatives at the center of $p$-adic $L$-functions of cusp forms in Hida families with root number $-1$ and to Howard's horizontal nonvanishing conjecture. \end{abstract} \maketitle \setcounter{tocdepth}{2} \tableofcontents \section{Introduction} Fix a prime $p>3$ and a positive integer $N$ prime to $p$. Let $\mathfrak{O}_L$ be the ring of integers of a finite extension $L/\mathbf{Q}_p$, and let \[ {\boldsymbol{f}}=\sum_{n=1}^\infty\boldsymbol{a}_nq^n\in\mathbb{I}\pwseries{q} \] be a Hida family of tame level $N$, where $\mathbb{I}$ is a finite flat extension of the one-variable Iwasawa algebra $\mathfrak{O}_L\pwseries{T}$ with fraction field $F_\mathbb{I}$. Throughout this paper, we shall assume that $\mathbb{I}$ is regular. 
Let $\mathcal{K}$ be an imaginary quadratic field of discriminant prime to $Np$, and let $\Gamma_\mathcal{K}:={\rm Gal}(\mathcal{K}_\infty/\mathcal{K})$ be the Galois group of the $\mathbf{Z}_p^2$-extension of $\mathcal{K}$. From work of Hida \cite{hidaII}, there is a $3$-variable $p$-adic $L$-function \[ L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})\in\mathbb{I}\pwseries{\Gamma_\mathcal{K}} \] interpolating critical values $L({\boldsymbol{f}}_\phi/\mathcal{K},\chi,j)$ for the Rankin--Selberg $L$-function attached to the classical specializations of ${\boldsymbol{f}}$ twisted by finite order characters $\chi:\Gamma_\mathcal{K}\rightarrow\mu_{p^\infty}$. Let \[ \rho_{{\boldsymbol{f}}}:G_{\mathbf{Q}}:={\rm Gal}(\overline{\mathbf{Q}}/\mathbf{Q})\rightarrow{\rm Aut}_{F_{\mathbb{I}}}(V_{{\boldsymbol{f}}})\simeq{\rm GL}_2(F_\mathbb{I}) \] be the Galois representation associated to ${\boldsymbol{f}}$ (which we take to be the contragredient of the Galois representation first constructed in \cite{hida86b}), and let $\bar{\rho}_{{\boldsymbol{f}}}:G_{\mathbf{Q}}\rightarrow{\rm GL}_2(\kappa_\mathbb{I})$, where $\kappa_\mathbb{I}=\mathbb{I}/\mathfrak{m}_\mathbb{I}$ is the residue field of $\mathbb{I}$, be the associated semi-simple residual representation. By work of Mazur and Wiles \cite{MW-families,wiles88}, upon restriction to a decomposition group $D_p\subset G_{\mathbf{Q}}$ at $p$ we have \[ \bar{\rho}_{{\boldsymbol{f}}}\vert_{D_p}\sim \left(\begin{smallmatrix}\bar{\varepsilon}& *\\& \bar{\delta}\end{smallmatrix}\right) \] where the character $\bar{\delta}$ is unramified.
Under the assumption that \begin{equation}\label{ass:irred} \textrm{$\bar{\rho}_{{\boldsymbol{f}}}$ is irreducible and $\bar\varepsilon\neq\bar{\delta}$},\tag{MT} \end{equation} one knows that there exists a $G_\mathbf{Q}$-stable lattice $T_{{\boldsymbol{f}}}\subset V_{{\boldsymbol{f}}}$ which is free of rank two over $\mathbb{I}$ with the inertia coinvariants $\mathscr{F}^-T_{\boldsymbol{f}}$ being $\mathbb{I}$-free of rank one. Set \[ A_{\boldsymbol{f}}:=T_{\boldsymbol{f}}\otimes_\mathbb{I}\mathbb{I}^\vee,\quad \mathscr{F}^-A_{\boldsymbol{f}}:=(\mathscr{F}^-T_{\boldsymbol{f}})\otimes_\mathbb{I}\mathbb{I}^\vee, \] where $\mathbb{I}^\vee:={\rm Hom}_{\rm cts}(\mathbb{I},\mathbf{Q}_p/\mathbf{Z}_p)$ is the Pontryagin dual of $\mathbb{I}$, and define the Greenberg Selmer group ${\rm Sel}_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}})$ by \begin{equation}\label{eq:2Gr-intro} {\rm Sel}_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}}):=\ker\biggl\{{\rm H}^1(\mathcal{K}_\infty,A_{\boldsymbol{f}})\rightarrow \prod_{w\nmid p}{\rm H}^1(I_w,A_{\boldsymbol{f}})\times\prod_{w\vert p}{\rm H}^1(\mathcal{K}_{\infty,w},\mathscr{F}^-A_{\boldsymbol{f}})\biggr\}, \end{equation} where $w$ runs over the places of $\mathcal{K}_\infty$. The Pontryagin dual \[ X_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}}):={\rm Hom}_{\rm cts}({\rm Sel}_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}}),\mathbf{Q}_p/\mathbf{Z}_p) \] is easily seen to be a finitely generated $\mathbb{I}\pwseries{\Gamma_\mathcal{K}}$-module. In this paper we study the following instance of the Iwasawa--Greenberg main conjectures (see \cite{Greenberg55}).
\begin{intro-conj1}[{\bf Iwasawa--Greenberg main conjecture}] The module $X_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}})$ is $\mathbb{I}[[\Gamma_\mathcal{K}]]$-torsion, and \[ {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}]]}(X_{\tt Gr}(\mathcal{K}_\infty,A_{{\boldsymbol{f}}}))=(L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})) \] as ideals in $\mathbb{I}[[\Gamma_\mathcal{K}]]$. \end{intro-conj1} Many cases of this conjecture are known by the work of Skinner--Urban \cite{SU} and \cite{Kato295}. As we shall explain below, in this paper we place ourselves in a setting complementary to that in \cite{SU}, obtaining the following new result towards the Iwasawa--Greenberg main conjecture. The imaginary quadratic field $\mathcal{K}$ determines a factorization \[ N=N^+N^- \] with $N^-$ being the largest factor of $N$ divisible only by primes inert in $\mathcal{K}$. \begin{ThmA} In addition to {\rm (\ref{ass:irred})}, assume that: \begin{itemize} \item{} $N$ is squarefree, \item{} some specialization ${\boldsymbol{f}}_\phi$ is the $p$-stabilization of a newform $f\in S_2(\Gamma_0(N))$, \item{} $N^-$ is the product of a positive even number of primes, \item{} $\bar\rho_{{\boldsymbol{f}}}$ is ramified at every prime $q\vert N^-$, \item{} $p$ splits in $\mathcal{K}$. \end{itemize} Then $X_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}})$ is $\mathbb{I}[[\Gamma_\mathcal{K}]]$-torsion, and \[ {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}]]}(X_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}}))=(L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})) \] as ideals in $\mathbb{I}[[\Gamma_\mathcal{K}]]\otimes_{\mathbf{Z}_p}\mathbf{Q}_p$.
\end{ThmA} As in \cite{SU}, the fact that $X_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}})$ is $\mathbb{I}[[\Gamma_\mathcal{K}]]$-torsion follows easily from Kato's work, and the proof of Theorem~A is reduced to establishing the divisibility ``$\subset$'' as ideals in $\mathbb{I}[[\Gamma_\mathcal{K}]]$ predicted by the main conjecture. For the proof of this divisibility, in \cite{SU} the authors study congruences between $p$-adic families of cuspidal automorphic forms and Eisenstein series on ${\rm GU}(2,2)$, and their method (in particular, their application of Vatsal's result \cite{vatsal-special}) relies crucially on their hypothesis that $N^-$ is the squarefree product of an \emph{odd} number of primes. In contrast, for the proof of Theorem~A we first link the above main conjecture with another instance of the Iwasawa--Greenberg main conjectures, and exploit our assumption on $N^-$ to prove the latter using Heegner points. As a consequence of our approach, we also obtain an application to Greenberg's conjecture (see \cite[\S{0}]{Nekovar-Plater}, following \cite{GreenbergCRM}) on the generic order of vanishing at the center of the $p$-adic $L$-functions attached to cusp forms in Hida families. To state this, assume for simplicity that $\mathbb{I}$ is just $\mathfrak{O}_L[[T]]$, and for each $k\in\mathbf{Z}_{\geqslant 2}$ let ${\boldsymbol{f}}_k$ be the $p$-stabilized newform on $\Gamma_0(Np)$ obtained by setting $T=(1+p)^{k-2}-1$ in ${\boldsymbol{f}}$. One can show that the $p$-adic $L$-functions $L_p^{\tt MTT}({\boldsymbol{f}}_k,s)$ of \cite{mtt} satisfy a functional equation \[ L_p^{\tt MTT}({\boldsymbol{f}}_k,s)=-wL_p^{\tt MTT}({\boldsymbol{f}}_k,k-s) \] with a sign $w=\pm{1}$ independent of $k\in\mathbf{Z}_{\geqslant 2}$ with $k\equiv 2\pmod{2(p-1)}$. \begin{intro-conj4}[{\bf Greenberg's nonvanishing conjecture}] Let $e\in\{0,1\}$ be such that $-w=(-1)^{e}$.
Then \[ \frac{L_p^{\tt MTT}({\boldsymbol{f}}_k,s)}{(s-k/2)^{e}}\biggr\vert_{s=k/2}\neq 0, \] for all but finitely many $k\in\mathbf{Z}_{\geqslant 2}$ with $k\equiv 2\pmod{2(p-1)}$. \end{intro-conj4} In other words, for all but finitely many $k$ as above, the order of vanishing of $L_p^{\tt MTT}({\boldsymbol{f}}_k,s)$ at the center should be the least allowed by the sign in the functional equation. To state our result in the direction of this conjecture, let \[ T_{\boldsymbol{f}}^\dagger:=T_{\boldsymbol{f}}\otimes\Theta^{-1} \] be the self-dual twist of $T_{\boldsymbol{f}}$. By work of Plater \cite{Plater} (and more generally Nekov{\'a}{\v{r}} \cite{nekovar310}) there is a cyclotomic $\mathbb{I}$-adic height pairing \begin{equation}\label{eq:ht-intro} \langle,\rangle_{\mathcal{K},\mathbb{I}}^{\rm cyc}:{\rm Sel}_{\tt Gr}(\mathcal{K},T_{\boldsymbol{f}}^\dagger)\times {\rm Sel}_{\tt Gr}(\mathcal{K},T_{\boldsymbol{f}}^\dagger)\rightarrow F_\mathbb{I} \end{equation} interpolating the $p$-adic height pairings for the classical specializations of ${\boldsymbol{f}}$ as constructed by Perrin-Riou \cite{PR-109}. It is expected that $\langle,\rangle_{\mathcal{K},\mathbb{I}}^{\rm cyc}$ is non-degenerate, in the sense that its kernel on either side should reduce to the $\mathbb{I}$-torsion submodule of ${\rm Sel}_{\tt Gr}(\mathcal{K},T_{\boldsymbol{f}}^\dagger)$. \begin{ThmB} In addition to {\rm (\ref{ass:irred})}, assume that: \begin{itemize} \item $N$ is squarefree, \item ${\boldsymbol{f}}_2$ is old at $p$, \item there are at least two primes $\ell\Vert N$ at which $\bar{\rho}_{{\boldsymbol{f}}}$ is ramified. \end{itemize} If ${\rm Sel}_{\tt Gr}(\mathbf{Q},T_{{\boldsymbol{f}}}^\dagger)$ has $\mathbb{I}$-rank one and $\langle,\rangle_{\mathcal{K},\mathbb{I}}^{\rm cyc}$ is non-degenerate, then \[ \frac{d}{ds}L_p^{\tt MTT}({\boldsymbol{f}}_k,s)\biggr\vert_{s=k/2}\neq 0, \] for all but finitely many $k\in\mathbf{Z}_{\geqslant 2}$ with $k\equiv 2\pmod{2(p-1)}$.
\end{ThmB} \begin{rem} The counterpart to Theorem~B in rank zero, i.e., the implication \begin{equation}\label{eq:rank0} {\rm rank}_{\mathbb{I}}\;{\rm Sel}_{\tt Gr}(\mathbf{Q},T_{{\boldsymbol{f}}}^\dagger)=0\quad\Longrightarrow \quad L_p^{\tt MTT}({\boldsymbol{f}}_k,k/2)\neq 0, \end{equation} for all but finitely many $k$ as above, follows easily from \cite{SU} (see Theorem~\ref{thm:Gr+1}). \end{rem} \begin{rem}\label{rem:0or1} By the control theorem for ${\rm Sel}_{\tt Gr}(\mathbf{Q},T_{{\boldsymbol{f}}}^\dagger)$ (see e.g. \cite[Prop.~12.7.13.4(i)]{nekovar310}) and the $p$-parity conjecture for classical Selmer groups (see e.g. \cite[Thm.~6.4]{cas-hsieh1}), the hypothesis that ${\rm Sel}_{\tt Gr}(\mathbf{Q},T_{{\boldsymbol{f}}}^\dagger)$ has $\mathbb{I}$-rank one (resp. zero) implies that $w=1$ (resp. $w=-1$). Conversely, it is expected that the $\mathbb{I}$-rank of ${\rm Sel}_{\tt Gr}(\mathbf{Q},T_{{\boldsymbol{f}}}^\dagger)$ is \emph{always} $0$ or $1$; more precisely, \[ {\rm rank}_{\mathbb{I}}\;{\rm Sel}_{\tt Gr}(\mathbf{Q},T_{{\boldsymbol{f}}}^\dagger)\overset{?}=\left\{ \begin{array}{ll} 1&\textrm{if $w=1$,}\\[0.1cm] 0&\textrm{if $w=-1$.} \end{array} \right. \] For example, by \cite[Cor.~3.4.3 and Eq.~(21)]{howard-invmath} this prediction is a consequence of Howard's ``horizontal nonvanishing conjecture''. \end{rem} \begin{rem}\label{rem:CM-case} For Hida families ${\boldsymbol{f}}$ with CM (a case that is excluded by our hypotheses), the analogue of Theorem~B is due to Agboola--Howard and Rubin \cite[Thm.~B]{AHsplit}. In this case, the rank one and non-degeneracy assumptions follow from Greenberg's nonvanishing results \cite{greenberg-BSD} (see \cite[Prop.~2.4.4]{AHsplit}) and a transcendence result of Bertrand \cite{bertrand-AH} (see \cite[Thm.~A.1]{AHsplit}). In rank zero, the CM case of $(\ref{eq:rank0})$ follows from \cite{greenberg-BSD} and Rubin's proof of the Iwasawa main conjecture for imaginary quadratic fields \cite{rubin-IMC}. 
\end{rem} We conclude this Introduction with some of the ingredients that go into the proofs of the above results. The proof of Theorem~A builds on the link that we establish in $\S\ref{sec:Iw}$ between different instances of the Iwasawa--Greenberg main conjectures involving Selmer groups differing from $(\ref{eq:2Gr-intro})$ in their local conditions at the places above $p$. In particular, letting $\mathfrak{p}$ be the prime of $\mathcal{K}$ above $p$ determined by a fixed embedding $\overline{\mathbf{Q}}\hookrightarrow\overline{\mathbf{Q}}_p$, and denoting by $\hat{\mathbf{Z}}_p^{\rm ur}$ the completion of the ring of integers of the maximal unramified extension of $\mathbf{Q}_p$, a central role is played by the Selmer group defined by \[ {\rm Sel}_{\emptyset,0}(\mathcal{K}_\infty,A_{\boldsymbol{f}}):=\ker\biggl\{{\rm H}^1(\mathcal{K}_\infty,A_{\boldsymbol{f}})\rightarrow\prod_{w\nmid p}{\rm H}^1(I_w,A_{\boldsymbol{f}})\times \prod_{w\mid\overline{\mathfrak{p}}}{\rm H}^1(\mathcal{K}_{\infty,w},A_{\boldsymbol{f}})\biggr\} \] whose Pontryagin dual is conjecturally generated by a $p$-adic $L$-function \[ \mathscr{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})\in\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}]],\quad\quad\textrm{where}\;\;\mathbb{I}^{\rm ur}:=\mathbb{I}\hat{\otimes}_{\mathbf{Z}_p}\hat{\mathbf{Z}}_p^{\rm ur}, \] interpolating critical values $L({\boldsymbol{f}}_\phi/\mathcal{K},\chi,j)$ with $\chi$ running over characters of $\Gamma_\mathcal{K}$ with associated theta series of weight higher than the weight of ${\boldsymbol{f}}_\phi$.
This second instance of the Iwasawa--Greenberg main conjecture can be related on the one hand to the main conjecture for $L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})$ by building on the explicit reciprocity laws for the Rankin--Eisenstein classes of Kings--Loeffler--Zerbes \cite{KLZ2}, and on the other hand (after restriction to the anticyclotomic line) to the big Heegner point main conjecture of Howard \cite[Conj.~3.3.1]{howard-invmath} using the explicit reciprocity law for Heegner points of \cite{cas-hsieh1}\footnote{Itself a generalization of the celebrated $p$-adic Waldspurger formula of Bertolini--Darmon--Prasanna \cite{bdp1}.}, thereby allowing us to bring the results of \cite{wanIMC} and \cite{Fouquet} towards the proof of those conjectures to bear on the main conjecture for $L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})$. On the other hand, a key ingredient in the proof of Theorem~B is the Birch and Swinnerton-Dyer type formula for $L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})$ along $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$ that we obtain in Theorem~\ref{thm:3.1.5} by building on the earlier results of the paper, leading to a $p$-adic Gross--Zagier formula for Howard's system of big Heegner points $\mathfrak{Z}_\infty$ that we then apply for a suitably chosen imaginary quadratic field $\mathcal{K}$. \noindent{\bf Acknowledgements.} It is a pleasure to thank Chris Skinner for several helpful conversations. Substantial progress on this paper occurred during visits of the first author to Fudan University in January 2019, the Morningside Center of Mathematics in June 2019, and Academia Sinica in December 2019, and he would like to thank these institutions for their hospitality.
\section{$p$-adic $L$-functions}\label{sec:padicL} \subsection{Hida families}\label{subsec:hida} Let $\mathbb{I}$ be a local reduced normal extension of $\mathfrak{O}_L[[T]]$, where $\mathfrak{O}_L$ is the ring of integers of a finite extension $L$ of $\mathbf{Q}_p$, and denote by $\mathcal{X}_a(\mathbb{I})\subset{\rm Hom}_{\rm cts}(\mathbb{I},\overline{\mathbf{Q}}_p)$ the set of continuous $\mathfrak{O}_L$-algebra homomorphisms $\phi:\mathbb{I}\rightarrow\overline{\mathbf{Q}}_p$ satisfying \[ \phi(1+T)=\zeta(1+p)^{k-2} \] for some $p$-power root of unity $\zeta=\zeta_\phi$ and some integer $k=k_\phi\in\mathbf{Z}_{\geqslant 2}$ called the \emph{weight} of $\phi$. We shall refer to the elements of $\mathcal{X}_a(\mathbb{I})$ as \emph{arithmetic} primes of $\mathbb{I}$, and let $\mathcal{X}_a^o(\mathbb{I})$ denote the set consisting of arithmetic primes $\phi$ with $\zeta_\phi=1$ and weight $k_\phi\equiv 2\pmod{p-1}$. Let $N$ be a positive integer prime to $p$, let $\chi$ be an even Dirichlet character modulo $Np$ taking values in $L$, and let ${\boldsymbol{f}}=\sum_{n=1}^\infty\boldsymbol{a}_nq^n\in\mathbb{I}[[q]]$ be an ordinary $\mathbb{I}$-adic cusp eigenform of tame level $N$ and character $\chi$, as defined in \cite[\S{3.3.9}]{SU}. In particular, for every $\phi\in\mathcal{X}_a(\mathbb{I})$ we have \[ {\boldsymbol{f}}_\phi:=\sum_{n=1}^\infty\phi(\boldsymbol{a}_n)q^n\in S_{k}(\Gamma_0(p^{t}N),\chi\omega^{2-k_\phi}\psi_{\zeta}), \] where \begin{itemize} \item $t=t_\phi\geqslant 1$ is such that $\zeta$ is a primitive $p^{t-1}$-st root of unity, \item $\omega$ is the Teichm\"uller character, and \item $\psi_{\zeta}:(\mathbf{Z}/p^{t}N\mathbf{Z})^\times\twoheadrightarrow(\mathbf{Z}/p^{t}\mathbf{Z})^\times\rightarrow\overline{\mathbf{Q}}_p^\times$ is determined by $\psi_{\zeta}(1+p)=\zeta=\zeta_\phi$. \end{itemize} Denote by $S^{\rm ord}(N,\chi;\mathbb{I})$ the space of such $\mathbb{I}$-adic eigenforms ${\boldsymbol{f}}$.
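\begin{rem}
To fix ideas, and purely for orientation (this simplest case is not assumed in what follows): if $\mathbb{I}=\mathfrak{O}_L[[T]]$ itself, then the arithmetic prime of weight $k$ with $\zeta_\phi=1$ is just the $\mathfrak{O}_L$-algebra map
\[
\phi:T\longmapsto(1+p)^{k-2}-1,
\]
whose kernel is the height one prime $\bigl(1+T-(1+p)^{k-2}\bigr)\subset\mathfrak{O}_L[[T]]$; in this case $\mathcal{X}_a^o(\mathbb{I})$ consists of the primes obtained in this way from the weights $k\equiv 2\pmod{p-1}$.
\end{rem}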
If in addition ${\boldsymbol{f}}_\phi$ is $N$-new for all $\phi\in\mathcal{X}_a(\mathbb{I})$, we say that ${\boldsymbol{f}}$ is a \emph{Hida family} of tame level $N$ and character $\chi$. We refer to ${\boldsymbol{f}}_\phi$ as the specialization of ${\boldsymbol{f}}$ at $\phi$. More generally, if $\phi\in{\rm Hom}_{\rm cts}(\mathbb{I},\overline{\mathbf{Q}}_p)$ is such that ${\boldsymbol{f}}_\phi$ is a classical eigenform, we say that ${\boldsymbol{f}}_\phi$ is a classical specialization of ${\boldsymbol{f}}$; this includes the specializations of ${\boldsymbol{f}}\in S^{\rm ord}(N,\chi;\mathbb{I})$ at $\phi\in\mathcal{X}_a(\mathbb{I})$, but possibly also specializations in weight $1$, for example. \subsection{Congruence modules}\label{subsec:congr} We recall the notion of congruence modules following the treatment of \cite[$\S{12.2}$]{SU} and \cite[\S{3.3}]{hsieh-triple}. Let ${\boldsymbol{f}}$ be a Hida family of tame level $N$ and character $\chi$ defined over $\mathbb{I}$, and let $\rho_{\boldsymbol{f}}:G_\mathbf{Q}\rightarrow{\rm GL}_2(F_\mathbb{I})$ be the Galois representation associated to ${\boldsymbol{f}}$, where $F_\mathbb{I}$ is the fraction field of $\mathbb{I}$. Let $\mathbb{T}(N,\chi,\mathbb{I})$ be the Hecke algebra acting on $S^{\rm ord}(N,\chi;\mathbb{I})$, and let $\lambda_{{\boldsymbol{f}}}:\mathbb{T}(N,\chi,\mathbb{I})\rightarrow\mathbb{I}$ be the algebra homomorphism defined by ${\boldsymbol{f}}$ (so that $\lambda_{{\boldsymbol{f}}}(T_n)=\boldsymbol{a}_n$), which factors through the local component $\mathbb{T}_{\mathfrak{m}_{{\boldsymbol{f}}}}$. Since ${\boldsymbol{f}}$ is $N$-new, there is an algebra direct sum decomposition \[ \lambda:\mathbb{T}_{\mathfrak{m}_{{\boldsymbol{f}}}}\otimes_\mathbb{I} F_\mathbb{I}\simeq\mathbb{T}'\times F_\mathbb{I} \] with the projection onto the second factor given by $\lambda_{{\boldsymbol{f}}}$.
The \emph{congruence module} $C({\boldsymbol{f}})\subset\mathbb{I}$ is defined by \[ C({\boldsymbol{f}}):=\lambda_{{\boldsymbol{f}}}\left(\mathbb{T}_{\mathfrak{m}_{{\boldsymbol{f}}}}\cap\lambda^{-1}(\{0\}\times F_\mathbb{I})\right). \] Following the convention in \cite[\S{7.7}]{KLZ2}, we shall also consider the \emph{congruence ideal} $I_{\boldsymbol{f}}$, defined as the fractional ideal $I_{\boldsymbol{f}}:=C({\boldsymbol{f}})^{-1}\subset F_\mathbb{I}$. As noted in \emph{loc.~cit.}, it follows from \cite[Thm.~4.2]{hidaII} that elements of $I_{\boldsymbol{f}}$ define meromorphic functions on ${\rm Spec}(\mathbb{I})$ which are regular at all arithmetic points. \subsection{Rankin--Selberg $p$-adic $L$-functions}\label{sec:2varL} The next result on the construction of $3$-variable $p$-adic Rankin $L$-series is due to Hida. Let $\Gamma$ be the Galois group of the cyclotomic $\mathbf{Z}_p^\times$-extension of $\mathbf{Q}$, and set \[ \Lambda_\Gamma=\mathbf{Z}_p[[\Gamma]]. \] If $j\in\mathbf{Z}$ and $\chi$ is a Dirichlet character of $p$-power conductor, there is a unique $\phi\in{\rm Hom}_{\rm cts}(\Lambda_\Gamma,\overline{\mathbf{Q}}_p^\times)$ extending the character $z\mapsto z^j\chi(z)$ on $\mathbf{Z}_p^\times$. \begin{thm}\label{thm:hida} Let ${\boldsymbol{f}}_1, {\boldsymbol{f}}_2$ be Hida families of tame levels $N_1, N_2$, respectively, and let $N={\rm lcm}(N_1,N_2)$. Then there is an element \[ L_p({\boldsymbol{f}}_1,{\boldsymbol{f}}_2)\in\left(I_{{\boldsymbol{f}}_1}\hat{\otimes}_{\mathbf{Z}_p}\mathbb{I}_{{\boldsymbol{f}}_2}\hat{\otimes}_{\mathbf{Z}_p}\Lambda_{\Gamma}\right)\otimes_{\mathbf{Z}}\mathbf{Z}[\mu_N] \] uniquely characterized by the following interpolation property.
Let $f_1$, $f_2$ be classical specializations of ${\boldsymbol{f}}_1$, ${\boldsymbol{f}}_2$ of weights $k_1$, $k_2$, respectively, with $k_1>k_2\geqslant 1$, let $j$ be an integer in the range $k_2\leqslant j\leqslant k_1-1$, and let $\chi$ be a Dirichlet character of $p$-power conductor. Suppose the automorphic representation $\pi_{f_1}$ is a principal series representation $\pi(\eta_1,\eta_1')$ with $\eta_1$ unramified and $\eta_1(p)$ a $p$-adic unit. Then the value of $L_p({\boldsymbol{f}}_1,{\boldsymbol{f}}_2)$ at the corresponding specialization $\phi\in{\rm Spec}(\mathbb{I}_{{\boldsymbol{f}}_1}\hat\otimes_{\mathbf{Z}_p}\mathbb{I}_{{\boldsymbol{f}}_2}\hat\otimes_{\mathbf{Z}_p}\Lambda_{\Gamma})$ is given by \begin{align*} \phi(L_p({\boldsymbol{f}}_1,{\boldsymbol{f}}_2))&=\frac{\mathcal{E}(f_1,f_2,\chi,j)}{\mathcal{E}(f_1)\mathcal{E}^*(f_1)}\cdot\frac{\Gamma(j)\Gamma(j-k_2+1)}{\pi^{2j+1-k_2}(-i)^{k_1-k_2}2^{2j+k_1-k_2}\left\langle f_1,f_1^c\vert_{k_1}\bigl(\begin{smallmatrix}&-1\\p^{t_1}N_1&\end{smallmatrix}\bigr)\right\rangle_{N_1}}\\ &\quad\times L( f_1,f_2,\chi^{-1},j), \end{align*} where if $\alpha_{i}$ and $\beta_{i}$ are the roots of the Hecke polynomial of $f_i$ at $p$, with $\alpha_{i}$ being the $p$-adic unit root, and $p^t$ is the conductor of $\chi$, the Euler factors are given by \[ \mathcal{E}(f_1,f_2,\chi,j)=\left\{ \begin{array}{ll} \left(1-\frac{p^{j-1}}{\alpha_{1}\alpha_{2}}\right)\left(1-\frac{p^{j-1}}{\alpha_{1}\beta_{2}}\right) \left(1-\frac{\beta_{1}\alpha_{2}}{p^j}\right)\left(1-\frac{\beta_{1}\beta_{2}}{p^j}\right)&\textrm{if $t=0$,}\\[0.2cm] G(\chi)^2\cdot\left(\frac{p^{2j-2}}{\alpha^2_1\alpha_2\beta_2}\right)^t &\textrm{if $t\geqslant 1$,} \end{array} \right.
\] where $G(\chi)$ is the Gauss sum of $\chi$, and if $p^{t_1}$ is the $p$-part of the conductor of $\eta_1'$, then \[ \mathcal{E}(f_1)\mathcal{E}^*(f_1)=\left\{ \begin{array}{ll} \left(1-\frac{\beta_{1}}{p\alpha_{1}}\right)\left(1-\frac{\beta_{1}} {\alpha_{1}}\right)&\textrm{if $t_1=0$,}\\[0.2cm] G(\chi_1)\cdot\eta_1'\eta_1^{-1}(p^{t_1})p^{-t_1}&\textrm{if $t_1\geqslant 1$,}\\ \end{array} \right. \] where $\chi_1$ is the nebentypus of $f_1$. \end{thm} \begin{proof} This follows from \cite[Thm.~5.1]{hidaII}, which we have stated adopting the formulation in \cite[Thm.~7.7.2]{KLZ2} (slightly extended to include some more general specializations of the dominant Hida family ${\boldsymbol{f}}_1$). \end{proof} In this paper, we shall consider the $p$-adic $L$-functions $L_p({\boldsymbol{f}}_1,{\boldsymbol{f}}_2)$ of Theorem~\ref{thm:hida} in the cases where either ${\boldsymbol{f}}_1$ or ${\boldsymbol{f}}_2$ has CM. Thus let ${\boldsymbol{f}}$ be a fixed Hida family of tame level $N$ defined over $\mathbb{I}$, and let $\mathcal{K}$ be an imaginary quadratic field of discriminant $-D_{\mathcal{K}}<0$ prime to $pN$ such that \begin{equation}\label{eq:spl} \textrm{$p=\mathfrak{p}\overline{\mathfrak{p}}$ splits in $\mathcal{K}$,}\nonumber \end{equation} with $\mathfrak{p}$ denoting the prime of $\mathcal{K}$ above $p$ induced by our fixed embedding $\imath_p:\overline{\mathbf{Q}}\hookrightarrow\mathbf{C}_p$. Let $\mathcal{K}_\infty$ be the $\mathbf{Z}_p^2$-extension of $\mathcal{K}$, and denote by $\Gamma_\mathfrak{p}\simeq\mathbf{Z}_p$ the Galois group over $\mathcal{K}$ of the maximal subfield of $\mathcal{K}_\infty$ unramified outside $\overline{\mathfrak{p}}$.
Let \begin{equation}\label{eq:g-CM} {\boldsymbol{g}}=\sum_{n=1}^\infty\boldsymbol{b}_nq^n \in\mathbb{I}_{{\boldsymbol{g}}}[[q]] \end{equation} be the canonical Hida family of CM forms constructed in \cite[\S{5.2}]{JSW}, where $\mathbb{I}_{{\boldsymbol{g}}}=\mathbf{Z}_p[[\Gamma_\mathfrak{p}]]$. Specifically, denoting by $\theta_\mathfrak{p}:\mathbb{A}_{\mathcal{K}}^\times\rightarrow\Gamma_\mathfrak{p}$ the composition of the global reciprocity map ${\rm rec}_\mathcal{K}:\mathbb{A}_\mathcal{K}^\times\rightarrow G_\mathcal{K}^{\rm ab}$ with the natural projection $G_\mathcal{K}^{\rm ab}\twoheadrightarrow\Gamma_\mathfrak{p}$, we have \[ \boldsymbol{b}_n=\sum_{\substack{N(\mathfrak{a})=n, (\mathfrak{a},\overline{\mathfrak{p}})=1}}\theta_\mathfrak{p}(x_{\mathfrak{a}}), \] summing over integral ideals $\mathfrak{a}\subset\mathcal{O}_\mathcal{K}$, where $x_\mathfrak{a}\in\mathbb{A}_{\mathcal{K}}^{\infty,\times}$ is any finite id\`ele of $\mathcal{K}$ with ${\rm ord}_w(x_{\mathfrak{a},w})={\rm ord}_{w}(\mathfrak{a})$ for all finite places $w$ of $\mathcal{K}$. \subsection{Non-dominant CM: ${\boldsymbol{f}}_2={\boldsymbol{g}}$}\label{subsec:L-2}\label{sec:dom-CM} Assume that the residual representation $\bar{\rho}_{{\boldsymbol{f}}}$ is irreducible and $p$-distinguished. Then by \cite[Cor.~2, p.~482]{Fermat-Wiles} the local ring $\mathbb{T}_{\mathfrak{m}_{{\boldsymbol{f}}}}$ introduced in $\S\ref{subsec:congr}$ is Gorenstein, and by Hida's results (see e.g. \cite{hida-AJM88}) the congruence module $C({\boldsymbol{f}})$ is principal. Let $c_{\boldsymbol{f}}\in C({\boldsymbol{f}})$ be a generator, and set \[ L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K}):=c_{{\boldsymbol{f}}}\cdot L_p({\boldsymbol{f}},{\boldsymbol{g}}), \] viewed as an element in $\mathbb{I}[[\Gamma_\mathcal{K}]]$ (well-defined up to a unit in $\mathbb{I}^\times$), where $\Gamma_\mathcal{K}={\rm Gal}(\mathcal{K}_\infty/\mathcal{K})$.
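\begin{rem}
For orientation (cf. \cite[\S{5.2}]{JSW}): if an arithmetic prime $\phi$ of $\mathbb{I}_{{\boldsymbol{g}}}$ of weight $k\geqslant 2$ cuts out a character of $\Gamma_\mathfrak{p}$ corresponding, via $\theta_\mathfrak{p}$, to a Hecke character $\lambda$ of $\mathcal{K}$ of infinity type $(k-1,0)$, then ${\boldsymbol{g}}_\phi$ is the ordinary $p$-stabilization of the theta series of $\lambda$, a CM eigenform of weight $k$; the classical weight $1$ specializations of ${\boldsymbol{g}}$ arise in the same way from finite order characters.
\end{rem}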
The action of complex conjugation yields a decomposition \[ \Gamma_\mathcal{K}\simeq \Gamma_\mathcal{K}^{\rm ac}\times\Gamma_\mathcal{K}^{\rm cyc}, \] where $\Gamma_\mathcal{K}^{\rm ac}$ (resp. $\Gamma_\mathcal{K}^{\rm cyc}$) denotes the Galois group of the anticyclotomic (resp. cyclotomic) $\mathbf{Z}_p$-extension of $\mathcal{K}$. We next study the projections of $L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})$ to $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$ and $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm cyc}]]$. \subsubsection{Anticyclotomic restriction of $L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})$}\label{sec:anti-L} Assume that ${\boldsymbol{f}}$ has trivial tame character, and following \cite[Def.~2.1.3]{howard-invmath} define the \emph{critical character} $\Theta:G_\mathbf{Q}\rightarrow\mathbb{I}^\times$ by \begin{equation}\label{def:crit-char} \Theta:=[\langle\varepsilon_{\rm cyc}\rangle^{1/2}], \end{equation} where $\varepsilon_{\rm cyc}:G_\mathbf{Q}\rightarrow\mathbf{Z}_p^\times$ is the cyclotomic character, $\langle\cdot\rangle:\mathbf{Z}_p^\times\rightarrow 1+p\mathbf{Z}_p$ is the natural projection, and \[ [\cdot]:1+p\mathbf{Z}_p\hookrightarrow\mathbf{Z}_p[[1+p\mathbf{Z}_p]]^\times\simeq\mathbf{Z}_p[[T]]^\times\rightarrow\mathbb{I}^\times \] is the composition of the obvious maps. This induces the twist map \begin{equation}\label{eq:tw-theta} {\rm tw}_{\Theta^{-1}}:\mathbb{I}[[\Gamma_\mathcal{K}]]\rightarrow\mathbb{I}[[\Gamma_\mathcal{K}]] \end{equation} defined by $\gamma\mapsto\Theta^{-1}(\gamma)\gamma$ for $\gamma\in\Gamma_\mathcal{K}$. Write $N$ as the product \[ N=N^+ N^- \] with $N^+$ (resp. $N^-$) divisible only by primes which are split (resp.
inert) in $\mathcal{K}$, and consider the following generalized \emph{Heegner hypothesis}: \begin{equation}\label{eq:gen-Heeg-f} \textrm{$N^-$ is the squarefree product of an even number of primes.}\tag{gen-H} \end{equation} Whenever we assume that $\mathcal{K}$ satisfies the hypothesis (\ref{eq:gen-Heeg-f}), we fix an integral ideal $\mathfrak{N}^+\subset\mathcal{O}_\mathcal{K}$ with $\mathcal{O}_\mathcal{K}/\mathfrak{N}^+\simeq\mathbf{Z}/N^+\mathbf{Z}$. \begin{prop}\label{thm:hida-1} Let $L_p^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}$ be the image of ${\rm tw}_{\Theta^{-1}}(L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K}))$ under the natural projection $\mathbb{I}[[\Gamma_\mathcal{K}]]\rightarrow\mathbb{I}[[\Gamma_\mathcal{K}^{{\rm ac}}]]$. If $\mathcal{K}$ satisfies the hypothesis {\rm (\ref{eq:gen-Heeg-f})}, then $L_p^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}$ is identically zero. \end{prop} \begin{proof} Let $\phi\in{\rm Spec}(\mathbb{I}_{{\boldsymbol{f}}}\hat\otimes_{\mathbf{Z}_p}\mathbb{I}_{{\boldsymbol{g}}}\hat\otimes_{\mathbf{Z}_p}\Lambda_\Gamma)={\rm Spec}(\mathbb{I}[[\Gamma_\mathcal{K}]])$ be a specialization in the range specified in Theorem~\ref{thm:hida}, with $f_1={\boldsymbol{f}}_\phi$ the $p$-stabilization of a newform $f\in S_k(\Gamma_0(N))$ of weight $k\geqslant 2$ and $f_2={\boldsymbol{g}}_\phi$ a classical weight $1$ specialization. By the interpolation property, the value $\phi(L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K}))$ is a multiple of \[ L(f_1,f_2,\chi^{-1},j)=L(f/\mathcal{K},\psi,j), \] with $\psi$ a finite order character of $\Gamma_\mathcal{K}$ and $1\leqslant j\leqslant k-1$, and so $\phi({\rm tw}_{\Theta^{-1}}(L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})))$ is also a multiple of $L(f/\mathcal{K},\psi',k/2)$ for a finite order character $\psi'$ of $\Gamma_\mathcal{K}$.
If $\psi'$ factors through the projection $\Gamma_\mathcal{K}\twoheadrightarrow\Gamma_{\mathcal{K}}^{{\rm ac}}$, then the $L$-function $L(f/\mathcal{K},\psi',s)$ is self-dual, with a functional equation relating its values at $s$ and $k-s$, and if $\mathcal{K}$ satisfies the hypothesis (\ref{eq:gen-Heeg-f}), then the sign in this functional equation is $-1$ (see e.g. \cite[\S{1}]{CV-dur}). Thus $L(f/\mathcal{K},\psi',k/2)=0$, and letting $\phi$ vary, the result follows. \end{proof} \subsubsection{Cyclotomic restriction of $L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})$}\label{sec:cyc-L} As above, we denote by $\Gamma_\mathcal{K}^{\rm cyc}$ the Galois group of the cyclotomic $\mathbf{Z}_p$-extension of $\mathcal{K}$, which we shall often identify with the maximal torsion-free quotient of $\Gamma$. For any ordinary $p$-stabilized newform $f$ of tame level $N$ defined over $L$ (a finite extension of $\mathbf{Q}_p$), let $L_p^{\tt MTT}(f)\in\mathfrak{O}_L[[\Gamma]]$ be the cyclotomic $p$-adic $L$-function attached to $f$ in \cite{mtt}, where $\mathfrak{O}_L$ is the ring of integers of $L$ (see \cite[\S{3.4.4}]{SU} and the references therein). We refer the reader to \emph{loc.~cit.} for the precise interpolation property satisfied by $L_p^{\tt MTT}(f)$, only noting here that the complex periods used for the construction are Shimura's periods $\Omega_f^\pm\in\mathbf{C}^\times/\mathfrak{O}_L^\times$ (as reviewed in \cite[\S{3.3.3}]{SU}). \begin{thm}\label{thm:cyc-res} Let $L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})_{\rm cyc}$ be the image of $L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})$ under the natural projection $\mathbb{I}[[\Gamma_\mathcal{K}]]\rightarrow\mathbb{I}[[\Gamma_\mathcal{K}^{\rm cyc}]]$.
Then for every $\phi\in\mathcal{X}_a^o(\mathbb{I})$ we have \[ \phi(L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})_{\rm cyc})=L_p^{\tt MTT}({\boldsymbol{f}}_\phi)\cdot L_p^{\tt MTT}({\boldsymbol{f}}_\phi\otimes\epsilon_\mathcal{K}) \] up to a unit in $\phi(\mathbb{I})[[\Gamma]]^\times$, where $\epsilon_\mathcal{K}$ is the quadratic character associated to $\mathcal{K}$. \end{thm} \begin{proof} Since we assume that $\bar{\rho}_{{\boldsymbol{f}}}$ satisfies the hypotheses (\ref{ass:irred}), by \cite[Thm.~0.1]{hida-AJM88} (see also \cite[Lem.~12.1]{SU}) for every $\phi\in\mathcal{X}_a^o(\mathbb{I})$ we have the relation \[ \langle{\boldsymbol{f}}_\phi,{\boldsymbol{f}}_\phi\rangle_{N}\cdot\phi(c_{{\boldsymbol{f}}})^{-1}=u\cdot\Omega_{{\boldsymbol{f}}_\phi}^\pm\cdot\Omega_{{\boldsymbol{f}}_\phi\otimes\epsilon_\mathcal{K}}^\pm \] between the periods appearing in the interpolation property of the respective sides of the claimed equality, for some unit $u\in\phi(\mathbb{I})^\times$. Since, by construction, $L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})$ specializes at $\phi$ to the $p$-adic $L$-function $L_p({\boldsymbol{f}}_\phi/\mathcal{K})$ considered in \cite{BL-ord}, the result thus follows from \cite[Cor.~2.2]{BL-ord} and \cite[Thm.~12.8]{SU}. \end{proof} \subsection{Dominant CM: ${\boldsymbol{f}}_1={\boldsymbol{g}}$}\label{subsec:L-1}\label{sec:nondom-CM} As in $\S\ref{sec:dom-CM}$, let ${\boldsymbol{f}}\in\mathbb{I}[[q]]$ be a fixed Hida family of tame level $N$, and let ${\boldsymbol{g}}$ be the CM Hida family in (\ref{eq:g-CM}). Let ${\hat{\bZ}_p^{\rm ur}}$ be the completion of the ring of integers of the maximal unramified extension of $\mathbf{Q}_p$, and set $\mathbb{I}^{\rm ur}:=\mathbb{I}\hat\otimes_{\mathbf{Z}_p}{\hat{\bZ}_p^{\rm ur}}$.
By \cite[\S{5.3.0}]{Katz49} (see also \cite[Thm.~II.4.14]{de-shalit}) there exists a $p$-adic $L$-function $\mathscr{L}_{\mathfrak{p}}(\mathcal{K})\in{\hat{\bZ}_p^{\rm ur}}[[\Gamma_\mathcal{K}]]$ such that if $\psi$ is a character of $\Gamma_\mathcal{K}$ corresponding to an algebraic Hecke character of $\mathcal{K}$ with trivial conductor and infinity type $(a,b)$ with $0\leqslant-b<a$, then \begin{equation}\label{eq:Katz} \mathscr{L}_{\mathfrak{p}}(\mathcal{K})(\psi)=\biggl(\frac{\sqrt{D_\mathcal{K}}}{2\pi}\biggr)^{a} \cdot\Gamma(b)\cdot(1-\psi(\mathfrak{p}))\cdot(1-p^{-1}\psi^{-1}(\overline{\mathfrak{p}})) \cdot\frac{\Omega_p^{b-a}}{\Omega_K^{b-a}}\cdot L(\psi,0), \end{equation} where $\Omega_K\in\mathbf{C}^\times$ and $\Omega_p\in\mathbf{C}_p^\times$ are certain CM periods (as defined in e.g. \cite[\S{2.5}]{cas-hsieh1}). Let $h_\mathcal{K}$ be the class number of $\mathcal{K}$, $w_\mathcal{K}:=\vert\mathcal{O}_\mathcal{K}^\times\vert$, and set \begin{equation}\label{eq:factor-hida} \mathscr{L}_{\mathfrak{p}}({\boldsymbol{f}}/\mathcal{K}):=\biggl(\frac{h_\mathcal{K}}{w_\mathcal{K}}\mathscr{L}_\mathfrak{p}(\mathcal{K})_{{\rm ac}}\biggr)\cdot L_{p}({\boldsymbol{g}},{\boldsymbol{f}}), \end{equation} where $\mathscr{L}_\mathfrak{p}(\mathcal{K})_{{\rm ac}}$ is the anticyclotomic projection of $\mathscr{L}_\mathfrak{p}(\mathcal{K})$. A priori, $\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})$ is an element in $I_{\boldsymbol{g}}\hat\otimes_{\mathbf{Z}_p}\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}]]$, but by comparing its interpolation property with that of a different $3$-variable $p$-adic $L$-function, we can show its integrality.
\begin{prop} The $p$-adic $L$-function in {\rm (\ref{eq:factor-hida})} is integral, i.e., $\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})\in\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}]]$. \end{prop} \begin{proof} For any finite set $\Sigma$ of places of $\mathcal{K}$ outside $p$ and containing all the places dividing $N D_\mathcal{K}$, the results of \cite[\S{7.5}]{wanIMC} yield the construction of the ``$\Sigma$-imprimitive'' element \[ \mathfrak{L}_\mathfrak{p}^\Sigma({\boldsymbol{f}}/\mathcal{K})\in\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}]] \] characterized by the following interpolation property. For a Zariski dense set of arithmetic points $\phi\in{\rm Spec}(\mathbb{I}[[\Gamma_\mathcal{K}]])$ with ${\boldsymbol{f}}_\phi$ of weight $2$ and conductor $p^tN$ generating a unitary automorphic representation $\pi_{{\boldsymbol{f}}_\phi}$ with $p$-component $\pi(\chi_{1,p},\chi_{2,p})$, where $v_p(\chi_{1,p}(p))=-\frac{1}{2}$ and $v_p(\chi_{2,p}(p))=\frac{1}{2}$, and with $\boldsymbol{\psi}_\phi$ a Hecke character of $\mathcal{K}$ of infinity type $(-n,0)$ for some $n\geqslant 3$ and conductor $p^t$, we have: \begin{align*} \phi(\mathfrak{L}_{\mathfrak{p}}^\Sigma({\boldsymbol{f}}/\mathcal{K}))&=p^{(n-3)t}\boldsymbol{\psi}_{\phi,\mathfrak{p}}^2\chi_{1,p}^{-1}\chi_{2,p}^{-1}(p^{-t})G(\boldsymbol{\psi}_{\phi,\mathfrak{p}}\chi_{1,p}^{-1})G(\boldsymbol{\psi}_{\phi,\mathfrak{p}}\chi_{2,p}^{-1})\Gamma(n)\Gamma(n-1)\Omega_p^{2n}\\ &\quad\times \frac{L^\Sigma({\boldsymbol{f}}_\phi,\chi^{-1}_{\phi}\boldsymbol{\psi}_\phi,0)}{(2\pi i)^{2n-1}\Omega_\mathcal{K}^{2n}}, \end{align*} where $\chi_{\phi}$ is the nebentypus of ${\boldsymbol{f}}_\phi$, $\Omega_p\in\mathbf{C}_p^\times$ and $\Omega_\mathcal{K}\in\mathbf{C}^\times$ are CM periods, and $L^\Sigma({\boldsymbol{f}}_\phi,\chi^{-1}_{\phi}\boldsymbol{\psi}_\phi,0)$ is the $\Sigma$-imprimitive Rankin--Selberg $L$-value.
Setting \begin{equation}\label{eq:prim} \mathfrak{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K}):=\mathfrak{L}_\mathfrak{p}^\Sigma({\boldsymbol{f}}/\mathcal{K})\times\prod_{w\in\Sigma}P_w(\Psi_\mathcal{K}({\rm Frob}_w))^{-1}, \end{equation} where $P_w$ is the Euler factor at $w$ and $\Psi_\mathcal{K}:G_\mathcal{K}\twoheadrightarrow\Gamma_\mathcal{K}$ is the natural projection, we thus obtain an element interpolating the Rankin--Selberg $L$-values themselves, but which \emph{a priori} is just an element in the fraction field of $\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}]]$. To see the inclusion $\mathfrak{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})\in\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}]]$ we shall compare $(\ref{eq:prim})$ with the product in the right-hand side of $(\ref{eq:factor-hida})$; the required integrality of $\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})$ will follow from this comparison. Any arithmetic point $\phi\in\mathrm{Spec}(\mathbb{I}[[\Gamma_\mathcal{K}]])$ as above can be written as the product $\boldsymbol{\psi}_\phi'\cdot\boldsymbol{\psi}_\phi''$, with $\boldsymbol{\psi}_\phi'$ cyclotomic (i.e., factoring through $\Gamma_\mathcal{K}\twoheadrightarrow\Gamma_\mathcal{K}^{\rm cyc}$), and $\boldsymbol{\psi}_\phi''$ corresponding to a Hecke character unramified at $\overline{\mathfrak{p}}$ and of infinity type $(-n,0)$. Then $\chi^{-1}_{\phi}\boldsymbol{\psi}'_\phi$ (resp. the theta series of $\boldsymbol{\psi}''_\phi$) corresponds to $\chi\vert\cdot\vert^j$ (resp. $f_1={\boldsymbol{g}}_\phi$) in Theorem~\ref{thm:hida}, so that \[ L({\boldsymbol{f}}_\phi,\chi_{\phi}^{-1}\boldsymbol{\psi}_\phi,0)=L(f_1,f_2,\chi^{-1},j) \] with $f_2={\boldsymbol{f}}_\phi$.
Letting $p^tD_\mathcal{K}$ be the conductor of ${\boldsymbol{g}}_\phi$, a direct calculation based on \cite[Thm.~7.1]{HT-ENS} shows that the product $\mathcal{E}({\boldsymbol{g}}_\phi)\mathcal{E}^*({\boldsymbol{g}}_\phi)\cdot\langle{\boldsymbol{g}}_\phi,{\boldsymbol{g}}_\phi\rangle_{p^tD_\mathcal{K}}$ in Theorem~\ref{thm:hida} agrees with \begin{equation}\label{eq:factor-RS} \frac{\Gamma(n)G(\boldsymbol{\psi}''^{-1}_{\phi,\bar{\mathfrak{p}}})L(\boldsymbol{\psi}''_\phi(\boldsymbol{\psi}''_\phi)^{-c}, 1)}{(-2\pi i)^n}\cdot\frac{L(\epsilon_\mathcal{K},1)}{-2\pi i} \end{equation} up to a $p$-adic unit independent of $\phi$, where $\epsilon_\mathcal{K}$ is the quadratic character attached to $\mathcal{K}$. By the class number formula, the second factor in $(\ref{eq:factor-RS})$ is given by $h_\mathcal{K}$ up to a $p$-adic unit, while by the interpolation property of the Katz $p$-adic $L$-function $\mathscr{L}_\mathfrak{p}(\mathcal{K})$ (see \cite[\S{5.3.0}]{Katz49}), the left factor multiplied by $(\Omega_p/\Omega_\mathcal{K})^{2n}$ is interpolated, for varying $\phi$, by the anticyclotomic projection of $\mathscr{L}_\mathfrak{p}(\mathcal{K})$ viewed as an element in ${\hat{\bZ}_p^{\rm ur}}[[\Gamma_{\mathfrak{p}}]]$. This shows that $\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})$ in $(\ref{eq:factor-hida})$ and $\mathfrak{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})$ differ by a unit. Finally, by the proof of \cite[Prop.~8.3]{wan-combined}, the only possible denominators of $\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})$ are powers of the augmentation ideal of $\mathbf{Z}_p[[\Gamma_{\mathfrak{p}}]]$, while by $(\ref{eq:prim})$ the possible denominators can only be either powers of $p$ or factors coming from Euler factors at primes $w\in\Sigma$.
Since these two sets are disjoint, integrality of $\mathfrak{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})$, and hence of $\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})$, follows. \end{proof} We conclude this section by discussing the anticyclotomic restriction of $\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})$, which, contrary to $L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K})$, will be nonzero under the generalized Heegner hypothesis. \begin{thm}\label{thm:bdp} Assume that $\mathcal{K}$ satisfies the hypothesis {\rm (\ref{eq:gen-Heeg-f})}, and if $N^->1$ assume in addition that $N$ is squarefree. Then there exists an element $\mathscr{L}_{\mathfrak{p}}^{\tt BDP}({\boldsymbol{f}}/\mathcal{K})\in\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}^{{\rm ac}}]]$ such that for every $\phi\in\mathcal{X}_a(\mathbb{I})$ of weight $k$ and trivial nebentypus, and every crystalline character $\psi$ of $\Gamma_\mathcal{K}^{\rm ac}$ corresponding to a Hecke character of $\mathcal{K}$ of infinity type $(n,-n)$ with $n\geqslant 0$, we have \begin{align*} \phi(\mathscr{L}_{\mathfrak{p}}^{\tt BDP}({\boldsymbol{f}}/\mathcal{K})^2)(\psi) &=\mathcal{E}_\mathfrak{p}({\boldsymbol{f}}_\phi,\psi)^2\cdot \psi(\mathfrak{N}^+)^{-1}\cdot 2^3\cdot\varepsilon({\boldsymbol{f}}_\phi)\cdot w_\mathcal{K}^2\sqrt{D_\mathcal{K}}\cdot\Gamma(k+n)\Gamma(n+1)\Omega_p^{2k+4n}\\ &\quad\times\frac{L({\boldsymbol{f}}_\phi/\mathcal{K},\psi,k/2)\cdot\alpha({\boldsymbol{f}}_\phi,{\boldsymbol{f}}_\phi^B)^{-1}}{(2\pi)^{k+2n+1}\cdot({\rm Im}\;\boldsymbol{\theta})^{k+2n}\cdot\Omega_\mathcal{K}^{2k+4n}}, \end{align*} where $\mathcal{E}_\mathfrak{p}({\boldsymbol{f}}_\phi,\psi)=(1-\phi(\boldsymbol{a}_p)\psi_{\overline{\mathfrak{p}}}(p)p^{-k/2}) (1-\phi(\boldsymbol{a}_p)^{-1}\psi_{\overline{\mathfrak{p}}}(p)p^{k/2-1})$, $\varepsilon({\boldsymbol{f}}_\phi)$ is the global root number of ${\boldsymbol{f}}_\phi$, $w_\mathcal{K}:=\vert\mathcal{O}_\mathcal{K}^\times\vert$,
$\Omega_p\in\mathbf{C}_p^\times$ and $\Omega_\mathcal{K}\in\mathbf{C}^\times$ are CM periods attached to $\mathcal{K}$ as in \cite[\S{2.5}]{cas-hsieh1}, $\boldsymbol{\theta}\in\mathcal{K}$ is as in {\rm (\ref{eq:vartheta})} below, and \[ \alpha({\boldsymbol{f}}_\phi,{\boldsymbol{f}}_\phi^B)=\frac{\langle{\boldsymbol{f}}_\phi,{\boldsymbol{f}}_\phi\rangle}{\langle{\boldsymbol{f}}_\phi^B,{\boldsymbol{f}}_\phi^B\rangle} \] is a ratio of Petersson norms of ${\boldsymbol{f}}_\phi$ and its transfer ${\boldsymbol{f}}_\phi^B$ to a quaternion algebra, normalized as in \cite[\S{2.2}]{prasanna}. \end{thm} \begin{proof} When $N^-=1$, this is \cite[Thm.~2.11]{cas-2var} (in which case $\alpha({\boldsymbol{f}}_\phi,{\boldsymbol{f}}_\phi^B)=1$). In the following we sketch how to extend that result to include the more general hypothesis (\ref{eq:gen-Heeg-f}). Some of the notations used here will be introduced later in $\S\ref{sec:HP}$. Let $\mathcal{O}_B$ be a maximal order of $B$, and let ${\rm Ig}_{N^+,N^-}$ be the Igusa scheme over $\mathbf{Z}_{(p)}$ classifying abelian surfaces with $\mathcal{O}_B$-multiplication and $U_\infty$-level structure (here $U_\infty$ is the open compact $U_r\subset\hat{R}_r^\times$ in $\S\ref{subsec:Sh}$ with $r=\infty$). For any valuation ring $W$ finite flat over $\mathbf{Z}_p$, denote by $V_p(W)$ the module of formal functions on ${\rm Ig}_{N^+,N^-}$ (i.e., $p$-adic modular forms) defined over $W$, and set \[ V_p(\mathbb{I}):=V_p(W_0)\hat{\otimes}_{W_0}\mathbb{I}, \] where $W_0=W(\kappa_\mathbb{I})$ is the ring of Witt vectors of the residue field of $\mathbb{I}$.
For every $\mathcal{O}_\mathcal{K}$-ideal $\mathfrak{a}$ prime to $\mathfrak{N}^+\mathfrak{p}$, the construction of $\varsigma^{(s)}$ (for arbitrary $s\geqslant 0$) in $\S\ref{subsec:construct}$ determines CM points $x(\mathfrak{a})\in{\rm Ig}_{N^+,N^-}$, and the argument in \cite[Thm.~3.2.16]{hida-GME}, with the use of $q$-expansions and the $q$-expansion principle replaced by Serre--Tate expansions and the resulting $t$-expansion principle around any such $x(\mathfrak{a})$ (see e.g.\ \cite[p.~107]{hida-mu}), shows that every element ${\boldsymbol{f}}^B\in V_p(\mathbb{I})$ defines a $p$-adic family (in fact, a finite collection of such, since $\mathbb{I}$ is finite over $W_0[[T]]$) of $p$-adic modular forms ${\boldsymbol{f}}^B_z={\boldsymbol{f}}^B(u^z-1)\in V_p(W_0)$, where $u=1+p$, indexed by $z\in\mathbf{Z}_p$. The Hida family ${\boldsymbol{f}}$ corresponds to a minimal prime in the localized universal $p$-ordinary Hecke algebra $\mathbb{T}_{\infty,\mathfrak{m}}^{\rm ord}$, and by the integral Jacquet--Langlands correspondence (see e.g. the discussion in \cite[\S{5.3}]{LV}), there exists a $p$-adic family ${\boldsymbol{f}}_B$ as above corresponding to ${\boldsymbol{f}}$, which we normalize by requiring that some Serre--Tate expansion ${\boldsymbol{f}}_z^B(t)$ does not vanish modulo $p$. There are $U$- and $V$-operators acting on ${\boldsymbol{f}}_B$ defined as in \cite[\S{3.6}]{brooks}, and we set \[ {\boldsymbol{f}}_B^\flat:={\boldsymbol{f}}_B\vert(VU-UV).
\] With these, we may define $\mathbb{I}^{\rm ur}$-valued measures $\mu_{{\boldsymbol{f}}_B,x(\mathfrak{a})}$ and $\mu_{{\boldsymbol{f}}_{B,\mathfrak{a}}^\flat}$ on $\mathbf{Z}_p$ (with the latter supported on $\mathbf{Z}_p^\times$ by \cite[Prop.~4.17]{brooks}) as in \cite[\S{2.7}]{cas-2var}, and define $\mathscr{L}_{\mathfrak{p},\boldsymbol{\xi}}({\boldsymbol{f}})$ to be the $\mathbb{I}^{\rm ur}$-valued measure on ${\rm Gal}(H_{p^\infty}/\mathcal{K})$ given by \[ \mathscr{L}_{\mathfrak{p},\boldsymbol{\xi}}({\boldsymbol{f}})(\phi)= \sum_{[\mathfrak{a}]\in{\rm Pic}(\mathcal{O}_\mathcal{K})} \boldsymbol{\xi}\boldsymbol{\chi}^{-1}(\mathfrak{a})\mathbf{N}(\mathfrak{a})^{-1} \int_{\mathbf{Z}_p^\times}(\phi\vert[\mathfrak{a}])(z){\rm d}\mu_{{\boldsymbol{f}}^\flat_{B,\mathfrak{a}}}(z) \] for all $\phi:{\rm Gal}(H_{p^\infty}/\mathcal{K})\rightarrow\mathcal{O}_{\mathbf{C}_p}^\times$, where, if $\sigma_{\mathfrak{a}}$ corresponds to $\mathfrak{a}$ under the Artin reciprocity map, $\phi\vert[\mathfrak{a}]$ is the character on $z\in\mathbf{Z}_p^\times$ given by $\phi(\sigma_\mathfrak{a}{\rm rec}_{{\mathfrak{p}}}(z))$ for the local reciprocity map ${\rm rec}_\mathfrak{p}:\mathcal{K}_\mathfrak{p}^\times\rightarrow G_\mathcal{K}^{\rm ab}\rightarrow\Gamma_\mathcal{K}^{\rm ac}$, $\boldsymbol{\chi}:\mathcal{K}^\times\backslash\mathbb{A}_\mathcal{K}^\times\rightarrow\mathbb{I}^\times$ is the character given by $x\mapsto\Theta({\rm rec}_\mathbf{Q}({\rm N}_{\mathcal{K}/\mathbf{Q}}(x)))$ for the reciprocity map ${\rm rec}_\mathbf{Q}:\mathbf{Q}^\times\backslash\mathbb{A}^\times\rightarrow G_\mathbf{Q}^{\rm ab}$, and $\boldsymbol{\xi}$ is the auxiliary anticyclotomic $\mathbb{I}$-adic character constructed in \cite[Def.~2.8]{cas-2var}.
Still denoting by $\mathscr{L}_{\mathfrak{p},\boldsymbol{\xi}}({\boldsymbol{f}})$ its image under the natural projection $\mathbb{I}^{\rm ur}[[{\rm Gal}(H_{p^\infty}/\mathcal{K})]]\rightarrow\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}^{\rm ac}]]$, and setting \[ \mathscr{L}_\mathfrak{p}({\boldsymbol{f}})={\rm tw}_{\boldsymbol{\xi}^{-1}}(\mathscr{L}_{\mathfrak{p},\boldsymbol{\xi}}({\boldsymbol{f}})), \] one then readily checks as in the proof of \cite[Thm.~2.11]{cas-2var} that for every $\phi\in\mathcal{X}_a^o(\mathbb{I})$, the measure $\phi(\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}))$ agrees with the measure constructed in \cite[\S{8.4}]{brooks} (in a formulation germane to that in \cite[\S{5.2}]{burungale-II}) attached to ${\boldsymbol{f}}_\phi$; the stated interpolation property then follows from \cite[Prop.~8.9]{brooks}. \end{proof} \begin{cor}\label{cor:wan-bdp} Let the hypotheses be as in Theorem~\ref{thm:bdp}, and denote by $\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}$ the image of ${\rm tw}_{\Theta^{-1}}(\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K}))$ under the natural projection $\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}]]\rightarrow\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}^{{\rm ac}}]]$. Then \begin{equation}\label{eq:wan-bdp} \mathscr{L}_\mathfrak{p}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}=\mathscr{L}^{\tt BDP}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})^2 \end{equation} up to a unit in $\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}^{\rm ac}]][1/p]^\times$. In particular, $\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}$ is nonzero. \end{cor} \begin{proof} In light of $(\ref{eq:factor-hida})$, the claimed equality up to a unit follows from a direct comparison of the interpolation properties in Theorem~\ref{thm:hida}, in $(\ref{eq:Katz})$, and in Theorem~\ref{thm:bdp} (\emph{cf.} \cite[\S{3.3}]{JSW}).
On the other hand, by construction for every $\phi\in\mathcal{X}_a^o(\mathbb{I})$ the $p$-adic $L$-function $\mathscr{L}_\mathfrak{p}^{\tt BDP}({\boldsymbol{f}}/\mathcal{K})$ specializes at $\phi$ to the $p$-adic $L$-functions constructed in \cite[\S{3.3}]{cas-hsieh1} (for $N^-=1$), and in \cite[\S{5.2}]{burungale-II} and \cite[\S{8}]{brooks} (for $N^->1$); since the latter are nonzero by \cite[Thm.~3.9]{cas-hsieh1} and \cite[Thm.~5.7]{burungale-II}, the last claim in the corollary follows. \end{proof} \section{Iwasawa theory}\label{sec:Iw} Fix a prime $p>3$ and a positive integer $N$ prime to $p$, let \[ {\boldsymbol{f}}=\sum_{n=1}^\infty\boldsymbol{a}_nq^n\in\mathbb{I}[[q]] \] be a Hida family of tame level $N$ and trivial tame character, and let $\mathcal{K}$ be an imaginary quadratic field of discriminant prime to $Np$ in which $p=\mathfrak{p}\overline{\mathfrak{p}}$ splits. Let $\Sigma$ be a finite set of places of $\mathbf{Q}$ containing $\infty$ and the primes dividing $Np$, and for any number field $F$, let $\mathfrak{G}_{F,\Sigma}$ denote the Galois group of the maximal extension of $F$ unramified outside the places above $\Sigma$. \subsection{Selmer groups}\label{sec:selmer} Let $T_{\boldsymbol{f}}$ be the big Galois representation associated to ${\boldsymbol{f}}$, for which we shall take the geometric realization denoted $M({\boldsymbol{f}})^*$ in \cite[Def.~7.2.5]{KLZ2}.
In particular, $T_{{\boldsymbol{f}}}$ is a locally free $\mathbb{I}$-module of rank $2$, and letting $D_p\subset G_{\mathbf{Q}}$ be the decomposition group at $p$ determined by our fixed embedding $\iota_p:\overline{\mathbf{Q}}\hookrightarrow\overline{\mathbf{Q}}_p$, it fits in an exact sequence of $\mathbb{I}[[D_p]]$-modules \begin{equation}\label{eq:Gr-f} 0\rightarrow\mathscr{F}^+T_{{\boldsymbol{f}}}\rightarrow T_{{\boldsymbol{f}}}\rightarrow\mathscr{F}^-T_{{\boldsymbol{f}}}\rightarrow 0 \end{equation} with $\mathscr{F}^\pm T_{{\boldsymbol{f}}}$ locally free of rank $1$ over $\mathbb{I}$, and with the $D_p$-action on the quotient $\mathscr{F}^-T_{{\boldsymbol{f}}}$ given by the unramified character sending an arithmetic Frobenius to $\boldsymbol{a}_p\in\mathbb{I}^\times$. Let $k_\mathbb{I}:=\mathbb{I}/\mathfrak{m}_\mathbb{I}$ be the residue field of $\mathbb{I}$, and denote by $\bar\rho_{{\boldsymbol{f}}}:G_\mathbf{Q}\rightarrow{\rm GL}_2(k_\mathbb{I})$ the semi-simple residual representation associated with $T_{{\boldsymbol{f}}}$, which by (\ref{eq:Gr-f}) is conjugate to an upper-triangular representation upon restriction to $D_p$: \[ \bar\rho_{{\boldsymbol{f}}}\vert_{D_p}\sim\left(\begin{smallmatrix}\bar{\varepsilon}&*\\ &\bar\delta\end{smallmatrix}\right). \] If $\bar{\rho}_{{\boldsymbol{f}}}$ is irreducible and $\bar\varepsilon\neq\bar\delta$, as we shall assume from now on, then by work of Wiles \cite{wiles88} (see also \cite[Thm.~7.2.8]{KLZ2}), $T_{{\boldsymbol{f}}}$ is free of rank $2$ over $\mathbb{I}$, and each $\mathscr{F}^\pm T_{{\boldsymbol{f}}}$ is free of rank $1$.
Recall that we let $\Gamma_\mathcal{K}={\rm Gal}(\mathcal{K}_\infty/\mathcal{K})$ denote the Galois group of the $\mathbf{Z}_p^2$-extension of $\mathcal{K}$, and consider the $\mathbb{I}[[\Gamma_\mathcal{K}]]$-module \[ \mathbf{T}:=T_{{\boldsymbol{f}}}\otimes_{\mathbb{I}}\mathbb{I}[[\Gamma_\mathcal{K}]] \] equipped with the $G_\mathcal{K}$-action via $\rho_{{\boldsymbol{f}}}\otimes\Psi_\mathcal{K}$, where $\rho_{{\boldsymbol{f}}}$ is the $G_\mathbf{Q}$-representation afforded by $T_{\boldsymbol{f}}$, and $\Psi_\mathcal{K}$ is the tautological character $G_\mathcal{K}\twoheadrightarrow\Gamma_\mathcal{K}\hookrightarrow\mathbb{I}[[\Gamma_\mathcal{K}]]^\times$. Replacing $\Gamma_\mathcal{K}$ by $\Gamma_\mathcal{K}^{\rm ac}$ (resp. $\Gamma_\mathcal{K}^{\rm cyc}$), we define the $G_\mathcal{K}$-module $\mathbf{T}^{\rm ac}$ (resp. $\mathbf{T}^{\rm cyc}$) similarly. As in \cite{howard-invmath}, we also define the critical twist \begin{equation}\label{eq:dagger} T_{{\boldsymbol{f}}}^\dagger:=T_{{\boldsymbol{f}}}\otimes\Theta^{-1}, \end{equation} where $\Theta:G_\mathbf{Q}\rightarrow\mathbb{I}^\times$ is the character (\ref{def:crit-char}), and define its deformations $\mathbf{T}^\dagger, \mathbf{T}^{\dagger,{\rm ac}}$, and $\mathbf{T}^{\dagger,{\rm cyc}}$ similarly as before. In the definitions that follow, we let $M$ denote either of the Galois modules just introduced, for which we naturally define $\mathscr{F}^\pm{M}$ using $(\ref{eq:Gr-f})$.
Consider the \emph{$p$-relaxed} Selmer group defined by \[ {\rm Sel}^{\{p\}}(F,M)={\rm ker}\Biggl\{{\rm H}^1(\mathfrak{G}_{F,\Sigma},M) \rightarrow\prod_{v\in\Sigma,\;v\nmid p}\frac{{\rm H}^1(F_v,M)}{{\rm H}^1_{\rm ur}(F_v,M)}\Biggr\}, \] where ${\rm H}^1_{\rm ur}(F_v,M)={\rm ker}\{{\rm H}^1(F_v,M)\rightarrow {\rm H}^1(F_v^{\rm ur},M)\}$ is the unramified local condition. \begin{defn} For $v\vert p$ and $\mathscr{L}_v\in\{\emptyset,{\tt Gr},0\}$, set \[ {\rm H}^1_{\mathscr{L}_v}(F_v,M):= \left\{ \begin{array}{ll} {\rm H}^1(F_v,M)&\textrm{if $\mathscr{L}_v=\emptyset$,}\\[0.1cm] {\rm ker}\{{\rm H}^1(F_v,M)\rightarrow{\rm H}^1(F_v^{\rm ur},\mathscr{F}^-M)\}&\textrm{if $\mathscr{L}_v={\tt Gr}$,}\\[0.1cm] \{0\}&\textrm{if $\mathscr{L}_v=0$,} \end{array} \right. \] and for $\mathscr{L}=\{\mathscr{L}_v\}_{v\vert p}$, define \begin{equation}\label{def:auxsel} {\rm Sel}_{\mathscr{L}}(F,M):={\rm ker}\Biggl\{{\rm Sel}^{\{p\}}(F,M) \rightarrow\prod_{v\vert p} \frac{{\rm H}^1(F_{v},M)}{{\rm H}_{\mathscr{L}_v}^1(F_{v},M)}\Biggr\}.\nonumber \end{equation} \end{defn} Thus, for example ${\rm Sel}_{0,\emptyset}(\mathcal{K},M)$ is the subspace of ${\rm Sel}^{\{p\}}(\mathcal{K},M)$ consisting of classes which satisfy no condition (resp. are locally trivial) at $\overline{\mathfrak{p}}$ (resp. $\mathfrak{p}$). For the ease of notation, we let ${\rm Sel}_{\tt Gr}(F,M)$ denote the Selmer group ${\rm Sel}_{\mathscr{L}}(F,M)$ given by $\mathscr{L}_v={\tt Gr}$ for all $v\vert p$. We shall also need to consider Selmer groups for the discrete module \[ A_{{\boldsymbol{f}}}:={\rm Hom}_{\rm cts}(T_{{\boldsymbol{f}}},\mu_{p^\infty}).
\] To define these, recall that by Shapiro's lemma there is a canonical isomorphism \begin{equation}\label{eq:Shapiro} {\rm H}^1(\mathcal{K},\mathbf{T})\simeq\varprojlim_{\mathcal{K}\subset_{\rm f}F\subset\mathcal{K}_\infty}{\rm H}^1(F,T_{\boldsymbol{f}}), \end{equation} where $F$ runs over the finite extensions of $\mathcal{K}$ contained in $\mathcal{K}_\infty$ and the limit is with respect to the corestriction maps. By the compatibility of $(\ref{eq:Shapiro})$ with the local restriction maps (see e.g.\ \cite[\S{3.1.2}]{SU}), the Selmer groups ${\rm Sel}_\mathscr{L}(\mathcal{K},\mathbf{T})$ are defined by local conditions ${\rm H}^1_{\mathscr{L}_v}(F_v,T_{{\boldsymbol{f}}})\subset{\rm H}^1(F_v,T_{{\boldsymbol{f}}})$ for all primes $v$. Thus we may let \[ {\rm Sel}_{\mathscr{L}}(\mathcal{K}_\infty,A_{\boldsymbol{f}})\subset\varinjlim_{\mathcal{K}\subset_{\rm f} F\subset\mathcal{K}_\infty}{\rm H}^1(F,A_{\boldsymbol{f}}) \] be the submodule cut out by the orthogonal complements of ${\rm H}^1_{\mathscr{L}_v}(F_v,T_{{\boldsymbol{f}}})$ under the perfect Tate duality \[ {\rm H}^1(F_v,T_{\boldsymbol{f}})\times{\rm H}^1(F_v,A_{\boldsymbol{f}})\rightarrow \mathbf{Q}_p/\mathbf{Z}_p. \] This also defines the Selmer groups ${\rm Sel}_\mathscr{L}(F,A_{\boldsymbol{f}})\subset{\rm H}^1(F,A_{\boldsymbol{f}})$ for any number field $F$, and we shall also consider their variants for the twisted module \[ A_{{\boldsymbol{f}}}^\dagger:={\rm Hom}_{\mathbf{Z}_p}(T_{{\boldsymbol{f}}}^\dagger,\mu_{p^\infty}), \] or their specializations. Finally, if $W$ denotes any of the preceding discrete modules, we set \[ X_{\mathscr{L}}(F,W):={\rm Hom}_{\mathbf{Z}_p}({\rm Sel}_{\mathscr{L}}(F,W),\mathbf{Q}_p/\mathbf{Z}_p), \] which we simply denote by $X_{\tt Gr}(F,W)$ when $\mathscr{L}_v={\tt Gr}$ for all $v\vert p$. We now record a number of lemmas for our later use. \begin{lem}\label{lem:eq-ranks} Assume that $\overline{\rho}_{{\boldsymbol{f}}}\vert_{G_F}$ is absolutely irreducible.
Then ${\rm Sel}_{\tt Gr}(F,T_{{\boldsymbol{f}}}^\dagger)$ and $X_{\tt Gr}(F,A_{{\boldsymbol{f}}}^\dagger)$ have the same $\mathbb{I}$-rank. \end{lem} \begin{proof} For any height one prime $\mathfrak{P}\subset\mathbb{I}$, let $\mathbb{I}_{\mathfrak{P}}$ be the localization of $\mathbb{I}$ at $\mathfrak{P}$, and let $F_\mathfrak{P}=\mathbb{I}_{\mathfrak{P}}/\mathfrak{P}$ be the residue field. It suffices to show that for all but finitely many $\mathfrak{P}\in\mathcal{X}_a(\mathbb{I})$, the spaces ${\rm Sel}_{\tt Gr}(F,T_{{\boldsymbol{f}}}^\dagger)_{\mathfrak{P}}/\mathfrak{P}$ and $X_{\tt Gr}(F,A_{{\boldsymbol{f}}}^\dagger)_{\mathfrak{P}}/\mathfrak{P}$ have the same $F_{\mathfrak{P}}$-dimension. As noted in \cite[\S{12.7.5}]{nekovar310} (see also \cite[Lem.~2.1.6]{howard-invmath}), Hida's results imply that the localization $\mathbb{I}_\mathfrak{P}$ of $\mathbb{I}$ at any $\mathfrak{P}\in\mathcal{X}_a(\mathbb{I})$ is a discrete valuation ring. Let $\pi\in\mathbb{I}_{\mathfrak{P}}$ be a uniformizer. From Nekov{\'a}{\v{r}}'s theory (see \cite[Prop.~12.7.13.4(i)]{nekovar310}) and the identification \cite[(21)]{howard-invmath}, multiplication by $\pi$ induces natural maps \begin{align*} {\rm Sel}_{\tt Gr}(F,T_{{\boldsymbol{f}}}^\dagger)_{\mathfrak{P}}/\pi&\hookrightarrow{\rm Sel}_{\tt Gr}(F,T_{{\boldsymbol{f}},\mathfrak{P}}^\dagger/\pi),\\ {\rm Sel}_{\tt Gr}(F,A_{{\boldsymbol{f}},\mathfrak{P}}^\dagger[\pi])&\twoheadrightarrow{\rm Sel}_{\tt Gr}(F,A_{{\boldsymbol{f}}}^\dagger)_{\mathfrak{P}}[\pi] \end{align*} which are isomorphisms for all but finitely many $\mathfrak{P}\in\mathcal{X}_a(\mathbb{I})$. Since by \cite[Lem.~1.3.3]{howard-PhD-I} the spaces ${\rm Sel}_{\tt Gr}(F,T_{{\boldsymbol{f}},\mathfrak{P}}^\dagger/\pi)$ and ${\rm Sel}_{\tt Gr}(F,A_{{\boldsymbol{f}},\mathfrak{P}}^\dagger[\pi])$ have the same $F_\mathfrak{P}$-dimension, the result follows.
\end{proof} \begin{lem}\label{lem:no-tors} If $\bar{\rho}_{{\boldsymbol{f}}}\vert_{G_\mathcal{K}}$ is irreducible, then the modules ${\rm H}^1(\mathfrak{G}_{\mathcal{K},\Sigma},\mathbf T^\dagger)$ and ${\rm H}^1(\mathfrak{G}_{\mathcal{K},\Sigma},\mathbf{T}^{\dagger,{\rm ac}})$ are torsion-free over $\mathbb{I}[[\Gamma_\mathcal{K}]]$ and $\mathbb{I}[[\Gamma^{\rm ac}_\mathcal{K}]]$, respectively. \end{lem} \begin{proof} This follows immediately from \cite[\S{1.3.3}]{PR:Lp}, since ${\rm H}^0(\mathcal{K}_\infty,\bar{\rho}_{\boldsymbol{f}})={\rm H}^0(\mathcal{K}_\infty^{\rm ac},\bar{\rho}_{\boldsymbol{f}})=\{0\}$ by the irreducibility of $\bar{\rho}_{\boldsymbol{f}}\vert_{G_\mathcal{K}}$. \end{proof} \begin{lem}\label{lem:str-rel} We have ${\rm rank}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}(X_{{\tt Gr},\emptyset}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^\dagger))=1+{\rm rank}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}(X_{{\tt Gr},0}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^{\dagger}))$. Moreover, if $\mathbb{I}$ is regular then \[ {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}(X_{{\tt Gr},\emptyset}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^{\dagger})_{\rm tors})={\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}(X_{0,{\tt Gr}}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^{\dagger})_{\rm tors}), \] where the subscript {\rm tors} denotes the $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-torsion submodule. \end{lem} \begin{proof} The first claim follows from an argument similar to that in Lemma~\ref{lem:eq-ranks} using part (2) of \cite[Lem.~2.3]{cas-BF}. For the second, note that the regularity of $\mathbb{I}$ implies that of $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$; thus by \cite[Lem.~6.18]{Fouquet} the claim follows from part (3) of \cite[Lem.~2.3]{cas-BF} (see also \cite[Prop.~3.16]{BL-ord}).
\end{proof} We conclude this section with the following useful commutative algebra lemma from \cite{SU}, which will be used repeatedly in the proof of our main results in $\S\ref{sec:main}$. \begin{lem}\label{lem:3.2} Let $R$ be a local ring and $\mathfrak{a}\subset R$ a proper ideal such that $R/\mathfrak{a}$ is a domain. Let $I\subset R$ be an ideal and $\mathcal{L}$ an element of $R$ with $I\subset (\mathcal{L})$. Denote by a `bar' the image under the reduction map $R\rightarrow R/\mathfrak{a}$. If $\overline{\mathcal{L}}\in R/\mathfrak{a}$ is nonzero and $\overline{\mathcal{L}}\in\overline{I}$, then $I=(\mathcal{L})$. \end{lem} \begin{proof} This is a special case of \cite[Lem.~3.2]{SU}. \end{proof} \subsection{Explicit reciprocity laws} Let $G_\mathbf{Q}$ act on $\Lambda_\Gamma$ via the tautological character $G_\mathbf{Q}\twoheadrightarrow\Gamma\hookrightarrow\Lambda_\Gamma^\times$. In \cite{KLZ2}, Kings--Loeffler--Zerbes constructed special elements \[ _c\mathcal{BF}_m^{{\boldsymbol{f}},{\boldsymbol{g}}}\in{\rm H}^1(\mathbf{Q}(\mu_m),T_{{\boldsymbol{f}}}\hat\otimes_{\mathbf{Z}_p} T_{{\boldsymbol{g}}}\hat\otimes_{\mathbf{Z}_p}\Lambda_\Gamma) \] attached to pairs of Hida families ${\boldsymbol{f}},{\boldsymbol{g}}$, and related the image of $_c\mathcal{BF}_1^{{\boldsymbol{f}},{\boldsymbol{g}}}$ under a Perrin-Riou big logarithm map to the $p$-adic $L$-functions $L_p({\boldsymbol{f}},{\boldsymbol{g}})$ and $L_p({\boldsymbol{g}},{\boldsymbol{f}})$ of Theorem~\ref{thm:hida}. In this section we describe the variant of their results that we shall need.
Since $\mathbf T^\dagger=\mathbf T\otimes\Theta^{-1}$ by definition, the twist map ${\rm tw}_{\Theta^{-1}}:\mathbb{I}[[\Gamma_\mathcal{K}]]\rightarrow\mathbb{I}[[\Gamma_\mathcal{K}]]$ of $(\ref{eq:tw-theta})$ induces an $\mathbb{I}$-linear isomorphism \[ \widetilde{\rm tw}_{\Theta^{-1}}:{\rm H}^1(\mathcal{K},\mathbf{T})\rightarrow{\rm H}^1(\mathcal{K},\mathbf{T}^\dagger) \] satisfying $\widetilde{\rm tw}_{\Theta^{-1}}(\lambda x)={\rm tw}_{\Theta^{-1}}(\lambda)\widetilde{\rm tw}_{\Theta^{-1}}(x)$ for all $\lambda\in\mathbb{I}[[\Gamma_\mathcal{K}]]$. \begin{thm}[Kings--Loeffler--Zerbes] \label{thm:Col} There exists a class $\mathcal{BF}^\dagger\in{\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K},\mathbf{T}^\dagger)$ and $\mathbb{I}[[\Gamma_\mathcal{K}]]$-linear injections with pseudo-null cokernel \begin{align*} {\rm Col}^{(1),\dagger}:{\rm H}^1(\mathcal{K}_{\overline{\mathfrak{p}}},\mathscr{F}^{-}\mathbf{T}^\dagger) \;&\rightarrow\;I_{\boldsymbol{f}}\otimes_{\mathbb{I}}\mathbb{I}[[\Gamma_\mathcal{K}]],\\ {\rm Col}^{(2),\dagger}:{\rm H}^1(\mathcal{K}_{\mathfrak{p}},\mathscr{F}^{+}\mathbf{T}^\dagger) \;&\rightarrow\;I_{\boldsymbol{g}}\hat\otimes_{\mathbf{Z}_p}\mathbb{I}[[\Gamma_\mathcal{K}]], \end{align*} where ${\boldsymbol{g}}$ is the CM Hida family in $(\ref{eq:g-CM})$, such that \begin{align*} {\rm Col}^{(1),\dagger}({\rm loc}_{\overline{\mathfrak{p}}}(\mathcal{BF}^\dagger))&={\rm tw}_{\Theta^{-1}}(L_p^{}({\boldsymbol{f}},{\boldsymbol{g}}))\\ {\rm Col}^{(2),\dagger}({\rm loc}_\mathfrak{p}(\mathcal{BF}^\dagger))&={\rm tw}_{\Theta^{-1}}(L_p^{}({\boldsymbol{g}},{\boldsymbol{f}})).
\end{align*} In particular, for every prime $v$ of $\mathcal{K}$ above $p$, the class ${\rm loc}_v(\mathcal{BF}^\dagger)\in{\rm H}^1(\mathcal{K}_v,\mathbf{T}^\dagger)$ is non-torsion over $\mathbb{I}[[\Gamma_\mathcal{K}]]$. \end{thm} \begin{proof} This follows from the results of \cite{KLZ2}, as explained in \cite[Thm.~2.4]{cas-BF}, to which one needs to add some of the analysis in \cite{BST}. Indeed, taking $m=1$ in \cite[Def.~8.1.1]{KLZ2} (and using \cite[Lem.~6.8.9]{LLZ} to dispense with an auxiliary $c>1$ needed for the construction), one obtains a cohomology class \begin{equation}\label{eq:KLZ-class} \mathcal{BF}^{{\boldsymbol{f}},{\boldsymbol{g}}}\in{\rm H}^1(\mathbf{Q},T_{\boldsymbol{f}}\hat\otimes_{\mathbf{Z}_p}T_{\boldsymbol{g}}\hat\otimes_{\mathbf{Z}_p}\Lambda_\Gamma) \nonumber \end{equation} attached to our fixed Hida family ${\boldsymbol{f}}$ and a second Hida family ${\boldsymbol{g}}$. Taking for ${\boldsymbol{g}}$ the canonical CM family in (\ref{eq:g-CM}), by \cite{BST} we have an isomorphism \[ T_{\boldsymbol{g}}\simeq{\rm Ind}_\mathcal{K}^\mathbf{Q}\mathbf{Z}_p[[\Gamma_\mathfrak{p}]] \] as $G_\mathbf{Q}$-modules, where the $G_\mathcal{K}$-action on $\mathbf{Z}_p[[\Gamma_\mathfrak{p}]]$ is given by the tautological character $G_\mathcal{K}\twoheadrightarrow\Gamma_\mathfrak{p}\hookrightarrow\mathbf{Z}_p[[\Gamma_\mathfrak{p}]]^\times$. By Shapiro's lemma, the class $\mathcal{BF}^{{\boldsymbol{f}},{\boldsymbol{g}}}$ therefore defines a class $\mathcal{BF}\in{\rm H}^1(\mathcal{K},\mathbf{T})$ whose image under $\widetilde{\rm tw}_{\Theta^{-1}}$ will be our required $\mathcal{BF}^\dagger$.
Indeed, the inclusion $\mathcal{BF}^\dagger\in {\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K},\mathbf T^\dagger)$ follows from \cite[Prop.~8.1.7]{KLZ2}, and by the explicit reciprocity law of \cite[Thm.~10.2.2]{KLZ2}, the maps \[ {\rm Col}^{(1)}:=\langle\mathcal{L}(-),\eta_{{\boldsymbol{f}}}\otimes\omega_{{\boldsymbol{g}}}\rangle,\quad {\rm Col}^{(2)}:=\left\langle\mathcal{L}(-),\eta_{{\boldsymbol{g}}}\otimes\omega_{{\boldsymbol{f}}}\right\rangle \] described in the proof of \cite[Thm.~2.4]{cas-BF} send the restriction at $\overline{\mathfrak{p}}$ and $\mathfrak{p}$ of $\mathcal{BF}$ to the $p$-adic $L$-functions $L_p({\boldsymbol{f}},{\boldsymbol{g}})$ and $L_p^{}({\boldsymbol{g}},{\boldsymbol{f}})$, respectively. Thus letting ${\rm Col}^{(1),\dagger}$ and ${\rm Col}^{(2),\dagger}$ be the $\mathbb{I}[[\Gamma_\mathcal{K}]]$-linear maps defined by the commutative diagrams \[ \xymatrix{ {\rm H}^1(\mathcal{K}_{\overline{\mathfrak{p}}},\mathscr{F}^{-}\mathbf{T})\ar[r]^-{{\rm Col}^{(1)}}\ar[d]^-{\widetilde{\rm tw}_{\Theta^{-1}}}&I_{\boldsymbol{f}}\otimes_{\mathbb{I}}\mathbb{I}[[\Gamma_\mathcal{K}]]\ar[d]^-{{\rm tw}_{\Theta^{-1}}}& {\rm H}^1(\mathcal{K}_{\mathfrak{p}},\mathscr{F}^{+}\mathbf{T})\ar[r]^-{{\rm Col}^{(2)}}\ar[d]^-{\widetilde{\rm tw}_{\Theta^{-1}}}&I_{\boldsymbol{g}}\hat\otimes_{\mathbf{Z}_p}\mathbb{I}[[\Gamma_\mathcal{K}]]\ar[d]^-{{\rm tw}_{\Theta^{-1}}}\\ {\rm H}^1(\mathcal{K}_{\overline{\mathfrak{p}}},\mathscr{F}^{-}\mathbf{T}^\dagger)\ar[r]^-{{\rm Col}^{(1),\dagger}}&I_{\boldsymbol{f}}\otimes_{\mathbb{I}}\mathbb{I}[[\Gamma_\mathcal{K}]]& {\rm H}^1(\mathcal{K}_{\mathfrak{p}},\mathscr{F}^{+}\mathbf{T}^\dagger)\ar[r]^-{{\rm Col}^{(2),\dagger}}&I_{\boldsymbol{g}}\hat\otimes_{\mathbf{Z}_p}\mathbb{I}[[\Gamma_\mathcal{K}]], } \] the result follows, with the last claim being an immediate consequence of the nonvanishing of the $p$-adic $L$-functions
$L_p({\boldsymbol{f}},{\boldsymbol{g}})$ and $L_p({\boldsymbol{g}},{\boldsymbol{f}})$ (see e.g.\ \cite[Rem.~1.3]{cas-BF}). \end{proof} We shall also need to consider anticyclotomic variants of the maps ${\rm Col}^{(i),\dagger}$ in Theorem~\ref{thm:Col}. Letting $\mathcal{I}_{\rm cyc}$ be the kernel of the natural projection $\mathbb{I}[[\Gamma_\mathcal{K}]]\rightarrow\mathbb{I}[[\Gamma_{\mathcal{K}}^{\rm ac}]]$, the map \[ {\rm Col}_{\rm ac}^{(1),\dagger}:{\rm H}^1(\mathcal{K}_{\overline{\mathfrak{p}}},\mathscr{F}^-\mathbf T^{\dagger,{\rm ac}})\rightarrow I_{\boldsymbol{f}}\otimes_\mathbb{I}\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]] \] is defined by reducing ${\rm Col}^{(1),\dagger}$ modulo the ideal $\mathcal{I}_{\rm cyc}$, using the fact that by the vanishing of ${\rm H}^0(\mathcal{K}_{\overline{\mathfrak{p}}},\mathscr{F}^-\mathbf T^{\dagger,{\rm ac}})$ the restriction map induces a natural isomorphism \[ {\rm H}^1(\mathcal{K}_{\overline{\mathfrak{p}}},\mathscr{F}^-\mathbf T^{\dagger})/\mathcal{I}_{\rm cyc}\simeq{\rm H}^1(\mathcal{K}_{\overline{\mathfrak{p}}},\mathscr{F}^-\mathbf T^{\dagger,{\rm ac}}). \] The map ${\rm Col}_{\rm ac}^{(2),\dagger}:{\rm H}^1(\mathcal{K}_{\mathfrak{p}},\mathscr{F}^+\mathbf T^{\dagger,{\rm ac}})\rightarrow I_{\boldsymbol{g}}\hat\otimes_{\mathbf{Z}_p}\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$ is defined in the same manner. Note that since the maps ${\rm Col}^{(i),\dagger}$ are injective with pseudo-null cokernel, the same is true for the maps ${\rm Col}_{\rm ac}^{(i),\dagger}$. \begin{cor}\label{cor:str-Gr} Let $\mathcal{BF}^{\dagger,{\rm ac}}$ be the image of the class $\mathcal{BF}^\dagger$ under the natural map ${\rm H}^1(\mathcal{K},\mathbf{T}^\dagger)\rightarrow{\rm H}^1(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})$. Assume that $\mathcal{K}$ satisfies the hypothesis {\rm (\ref{eq:gen-Heeg-f})}.
Then we have the inclusion \[ {\rm loc}_{\overline{\mathfrak{p}}}(\mathcal{BF}^{\dagger,{\rm ac}})\in{\rm ker}\{{\rm H}^1(\mathcal{K}_{\overline{\mathfrak{p}}},\mathbf{T}^{\dagger,{\rm ac}})\rightarrow{\rm H}^1(\mathcal{K}_{\overline{\mathfrak{p}}},\mathscr{F}^-\mathbf{T}^{\dagger,{\rm ac}})\}; \] in particular, $\mathcal{BF}^{\dagger,{\rm ac}}\in{\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})$. Moreover, if we assume in addition that $N$ is squarefree when $N^->1$, then ${\rm loc}_{\mathfrak{p}}(\mathcal{BF}^{\dagger,{\rm ac}})$ is non-torsion over $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$. \end{cor} \begin{proof} The combination of Theorem~\ref{thm:Col} and Proposition~\ref{thm:hida-1} yields the vanishing of the image of ${\rm loc}_{\overline{\mathfrak{p}}}(\mathcal{BF}^{\dagger,{\rm ac}})$ under the map ${\rm Col}_{\rm ac}^{(1),\dagger}$, so the first claim follows from its injectivity. The second claim follows from Theorem~\ref{thm:Col} together with the nonvanishing result of Corollary~\ref{cor:wan-bdp}. \end{proof} \subsection{Iwasawa main conjectures}\label{sec:ES} We now use the reciprocity laws of Theorem~\ref{thm:Col} to relate different variants of the Iwasawa main conjecture. \begin{thm}\label{thm:2-varIMC} Assume that $\overline{\rho}_{{\boldsymbol{f}}}\vert_{G_\mathcal{K}}$ is irreducible.
Then the following are equivalent: \begin{enumerate} \item[(i)]{} $X_{{\tt Gr},0}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^{\dagger})$ is $\mathbb{I}[[\Gamma_\mathcal{K}]]$-torsion, ${\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K},\mathbf{T}^{\dagger})$ has $\mathbb{I}[[\Gamma_\mathcal{K}]]$-rank one, and \begin{equation}\label{eq:BF-IMC} {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}]]}(X_{{\tt Gr},0}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^{\dagger}))= {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}]]}\biggl(\frac{{\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K},\mathbf{T}^{\dagger})}{\mathbb{I}[[\Gamma_\mathcal{K}]]\cdot\mathcal{BF}^{\dagger}}\biggr) \nonumber \end{equation} up to powers of $p$. \item[(ii)]{} Both $X_{\emptyset,0}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^{\dagger})$ and ${\rm Sel}_{0,\emptyset}(\mathcal{K},\mathbf{T}^{\dagger})$ are $\mathbb{I}[[\Gamma_\mathcal{K}]]$-torsion, and \[ {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}]]}(X_{\emptyset,0}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^{\dagger}))\cdot\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}]]=({\rm tw}_{\Theta^{-1}}(\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K}))) \] up to powers of $p$. \item[(iii)]{} Both $X_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^{\dagger})$ and ${\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger})$ are $\mathbb{I}[[\Gamma_\mathcal{K}]]$-torsion, and \[ {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}]]}(X_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^{\dagger}))=({\rm tw}_{\Theta^{-1}}(L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K}))) \] up to powers of $p$.
\end{enumerate} Moreover, if in addition $\mathcal{K}$ satisfies the hypothesis {\rm (\ref{eq:gen-Heeg-f})}, with $N$ being squarefree when $N^->1$, then the following are equivalent: \begin{enumerate} \item[(i)']{} $X_{{\tt Gr},0}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^{\dagger})$ is $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-torsion, ${\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})$ has $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-rank one, and \begin{equation} {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}(X_{{\tt Gr},0}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^{\dagger}))= {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}\biggl(\frac{{\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})}{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]\cdot\mathcal{BF}^{\dagger,{\rm ac}}}\biggr) \nonumber \end{equation} up to powers of $p$. \item[(ii)']{} Both $X_{\emptyset,0}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^{\dagger})$ and ${\rm Sel}_{0,\emptyset}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})$ are $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-torsion, and \[ {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}(X_{\emptyset,0}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^{\dagger}))\cdot\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}^{\rm ac}]]=(\mathscr{L}^{\tt BDP}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})^2) \] up to powers of $p$.
\end{enumerate} \end{thm} \begin{proof} Consider the exact sequence coming from Poitou--Tate duality \begin{align*} 0\rightarrow{\rm Sel}_{0,\emptyset}(\mathcal{K},\mathbf T^\dagger)\rightarrow{\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K},\mathbf T^\dagger)\xrightarrow{{\rm loc}_\mathfrak{p}} &{\rm H}^1_{\tt Gr}(\mathcal{K}_{\mathfrak{p}},\mathbf T^\dagger)\\ &\rightarrow X_{\emptyset,0}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^\dagger)\rightarrow X_{{\tt Gr},0}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^\dagger)\rightarrow 0. \end{align*} By Theorem~\ref{thm:Col}, the cokernel of the map ${\rm loc}_\mathfrak{p}$ is $\mathbb{I}[[\Gamma_\mathcal{K}]]$-torsion, and so the equivalence between the claimed ranks in (i) and (ii) follows. By Lemma~\ref{lem:no-tors}, if ${\rm Sel}_{0,\emptyset}(\mathcal{K},\mathbf T^\dagger)$ is $\mathbb{I}[[\Gamma_\mathcal{K}]]$-torsion then it is trivial, and so the above yields \begin{equation}\label{eq:div-1} 0\rightarrow\frac{{\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K},\mathbf T^\dagger)}{\mathbb{I}[[\Gamma_\mathcal{K}]]\cdot\mathcal{BF}^\dagger}\xrightarrow{{\rm loc}_\mathfrak{p}} \frac{{\rm H}^1_{\tt Gr}(\mathcal{K}_{\mathfrak{p}},\mathbf T^\dagger)}{\mathbb{I}[[\Gamma_\mathcal{K}]]\cdot{\rm loc}_\mathfrak{p}(\mathcal{BF}^\dagger)}\\ \rightarrow X_{\emptyset,0}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^\dagger)\rightarrow X_{{\tt Gr},0}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^\dagger)\rightarrow 0.
\end{equation} By \cite[Cor.~4.13]{betina-dimitrov-CM}, the congruence ideal of the CM Hida family ${\boldsymbol{g}}$ in $(\ref{eq:g-CM})$ is generated by $\mathscr{L}_\mathfrak{p}(\mathcal{K})_{\rm ac}$ after inverting $p$, and therefore by Theorem~\ref{thm:Col} and (\ref{eq:factor-hida}) the map ${\rm Col}^{(2),\dagger}$ multiplied by this generator yields an injection \begin{equation}\label{eq:katz-col} \frac{{\rm H}^1_{\tt Gr}(\mathcal{K}_{\mathfrak{p}},\mathbf T^\dagger)\cdot\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}]][1/p]}{\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}]][1/p]\cdot{\rm loc}_\mathfrak{p}(\mathcal{BF}^\dagger)}\hookrightarrow\frac{\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}]][1/p]}{({\rm tw}_{\Theta^{-1}}(\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})))}\nonumber \end{equation} with pseudo-null cokernel, which combined with $(\ref{eq:div-1})$ completes the proof of the equivalence of (i) and (ii). The equivalence between (i)' and (ii)' when $\mathcal{K}$ satisfies the hypothesis (\ref{eq:gen-Heeg-f}) is shown in the same way, using the nonvanishing of ${\rm loc}_\mathfrak{p}(\mathcal{BF}^{\dagger,{\rm ac}})$ from Corollary~\ref{cor:str-Gr}.
Now consider the exact sequence \begin{align*} 0\rightarrow{\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf T^\dagger)\rightarrow{\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K},\mathbf T^\dagger)\xrightarrow{{\rm loc}_{\overline{\mathfrak{p}}}} &\frac{{\rm H}^1(\mathcal{K}_{\overline{\mathfrak{p}}},\mathbf T^\dagger)}{{\rm H}^1_{\tt Gr}(\mathcal{K}_{\overline{\mathfrak{p}}},\mathbf T^\dagger)}\simeq{\rm H}^1(\mathcal{K}_{\overline{\mathfrak{p}}},\mathscr{F}^-\mathbf T^\dagger)\\ &\rightarrow X_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^\dagger)\rightarrow X_{{\tt Gr},0}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^\dagger)\rightarrow 0, \end{align*} which, arguing as before, implies the equivalence between the claimed $\mathbb{I}[[\Gamma_\mathcal{K}]]$-ranks in (ii) and (iii), and by Theorem~\ref{thm:Col} and Lemma~\ref{lem:no-tors} yields the exact sequence \[ 0\rightarrow\frac{{\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K},\mathbf T^\dagger)}{\mathbb{I}[[\Gamma_\mathcal{K}]]\cdot\mathcal{BF}^\dagger}\xrightarrow{{\rm loc}_{\overline{\mathfrak{p}}}} \frac{{\rm H}^1(\mathcal{K}_{\overline{\mathfrak{p}}},\mathscr{F}^-\mathbf T^\dagger)}{\mathbb{I}[[\Gamma_\mathcal{K}]]\cdot{\rm loc}_{\overline{\mathfrak{p}}}(\mathcal{BF}^\dagger)}\rightarrow X_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^\dagger)\rightarrow X_{{\tt Gr},0}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^\dagger)\rightarrow 0. \] Since by Theorem~\ref{thm:Col} the map ${\rm Col}^{(1),\dagger}$ multiplied by a generator of the congruence ideal of ${\boldsymbol{f}}$ yields an injection ${\rm H}^1(\mathcal{K}_{\overline{\mathfrak{p}}},\mathscr{F}^-\mathbf T^\dagger)\rightarrow\mathbb{I}[[\Gamma_\mathcal{K}]]$ with pseudo-null cokernel sending ${\rm loc}_{\overline{\mathfrak{p}}}(\mathcal{BF}^\dagger)$ into ${\rm tw}_{\Theta^{-1}}(L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K}))$ up to a unit in $\mathbb{I}^\times$, the equivalence between (ii) and (iii)
follows. \end{proof} \subsection{Rubin's height formula}\label{sec:rubin-formula} Fix a topological generator $\gamma_{\rm cyc}\in\Gamma_\mathcal{K}^{\rm cyc}$, and using the identification $\mathbb{I}[[\Gamma_\mathcal{K}]]\simeq(\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]])[[\Gamma_\mathcal{K}^{\rm cyc}]]$, expand \begin{equation}\label{eq:2-exp} {\rm tw}_{\Theta^{-1}}(L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K}))=L_{p,0}^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}+L_{p,1}^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}\cdot(\gamma_{\rm cyc}-1)+\cdots \end{equation} as a power series in $\gamma_{\rm cyc}-1$. The constant term $L_{p,0}^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}$ thus corresponds to the image of ${\rm tw}_{\Theta^{-1}}(L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K}))$ under the natural projection $\mathbb{I}[[\Gamma_\mathcal{K}]]\rightarrow\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$. By Shapiro's lemma, we may consider the class $\mathcal{BF}^\dagger\in{\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K},\mathbf T^\dagger)$ of Theorem~\ref{thm:Col} as a system of classes $\mathcal{BF}_F^\dagger\in{\rm Sel}_{{\tt Gr},\emptyset}(F,T_{{\boldsymbol{f}}}^\dagger)$, indexed by the finite extensions $F$ of $\mathcal{K}$ contained in $\mathcal{K}_\infty$, compatible under the corestriction maps. For any intermediate extension $\mathcal{K}\subset L\subset \mathcal{K}_\infty$, we then set \[ \mathcal{BF}^\dagger_{}(L):=\{\mathcal{BF}^\dagger_{F}\}_{\mathcal{K}\subset_{\rm f} F\subset L} \] with $F$ running over the finite extensions of $\mathcal{K}$ contained in $L$, so in particular $\mathcal{BF}^\dagger_{}(\mathcal{K}_\infty)$ is nothing but $\mathcal{BF}^\dagger_{}$.
Let $\mathcal{K}_n^{\rm ac}$ be the subextension of $\mathcal{K}_\infty^{\rm ac}$ with $[\mathcal{K}_n^{\rm ac}:\mathcal{K}]=p^n$, define $\mathcal{K}_k^{\rm cyc}$ similarly, and set $L_{n,k}=\mathcal{K}_n^{\rm ac}\mathcal{K}_k^{\rm cyc}$ for all $k\leqslant\infty$. \begin{lem}\label{lem:3.1.1} Assume that $\mathcal{K}$ satisfies the hypothesis {\rm (\ref{eq:gen-Heeg-f})} and that $\bar{\rho}_{\boldsymbol{f}}\vert_{G_\mathcal{K}}$ is irreducible. Then there is a unique element \[ \beta^\dagger_{n}\in {\rm H}^1_{}(\mathcal{K}^{\rm ac}_{n,\overline\mathfrak{p}},\mathscr{F}^-\mathbf{T}^{\dagger,{\rm cyc}}) \] such that ${\rm loc}_{\overline{\mathfrak{p}}}(\mathcal{BF}^\dagger_{}(L_{n,\infty}))=(\gamma_{{\rm cyc}}-1)\beta_n^\dagger$. Furthermore, the natural images of the classes $\beta_n^\dagger$ in ${\rm H}^1(\mathcal{K}_{n,\overline\mathfrak{p}}^{\rm ac},\mathscr{F}^-T_{{\boldsymbol{f}}}^\dagger)$ are norm-compatible, defining a class \[ \{\beta_n^\dagger(\mathds{1})\}_n\in\varprojlim_n{\rm H}^1(\mathcal{K}_{n,\overline\mathfrak{p}}^{{\rm ac}},\mathscr{F}^-T_{{\boldsymbol{f}}}^\dagger)\simeq{\rm H}^1(\mathcal{K}_{\overline\mathfrak{p}},\mathscr{F}^-\mathbf{T}^{\dagger,{\rm ac}}) \] that is sent to the linear term $L_{p,1}^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}$ under the map ${\rm Col}_{{\rm ac}}^{(1),\dagger}$. \end{lem} \begin{proof} By the explicit reciprocity law of Theorem~\ref{thm:Col}, the first claim follows from the vanishing of $L_{p,0}^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}$ (see Proposition~\ref{thm:hida-1}) and the injectivity of ${\rm Col}^{(1),\dagger}$, with the uniqueness claim being an immediate consequence of Lemma~\ref{lem:no-tors}; the last claim is a direct consequence of the definitions of $\beta^\dagger_n$ and $L^{\tt Hi}_{p,1}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}$.
\end{proof} Let $\mathcal{I}^{\rm cyc}=(\gamma_{\rm cyc}-1)\subset\mathbb{I}[[\Gamma_\mathcal{K}^{\rm cyc}]]$ be the augmentation ideal, and set $\mathcal{J}^{\rm cyc}=\mathcal{I}^{\rm cyc}/(\mathcal{I}^{\rm cyc})^2$. By work of Plater \cite{Plater}, and more generally Nekov{\'a}{\v{r}} \cite[\S{11}]{nekovar310}, for every $n$ there is a canonical (up to sign) $\mathbb{I}$-adic height pairing \begin{equation}\label{eq:I-ht} \langle,\rangle_{\mathcal{K}_n^{\rm ac},\mathbb{I}}^{{\rm cyc}}:{\rm Sel}_{\tt Gr}(\mathcal{K}_n^{\rm ac},T_{\boldsymbol{f}}^\dagger)\times{\rm Sel}_{\tt Gr}(\mathcal{K}_n^{\rm ac},T_{\boldsymbol{f}}^\dagger) \rightarrow\mathcal{J}^{\rm cyc}\otimes_\mathbb{I} F_\mathbb{I}. \end{equation} (Note that the local indecomposability hypothesis (H1) in \cite[p.~107]{Plater} is only used to ensure the existence of well-defined submodules and quotient modules at the places above $p$, which for $T_{\boldsymbol{f}}^\dagger$ is automatic, while hypotheses (H2) and (H3) in \emph{loc.cit.} follow from \cite[Lem.~2.4.4]{howard-invmath} for $T_{\boldsymbol{f}}^\dagger$.)
Denoting by ${\rm H}^1_{\tt Gr}(L_{n,k,v},T_{\boldsymbol{f}}^\dagger)\subset{\rm H}^1(L_{n,k,v},T_{\boldsymbol{f}}^\dagger)$ the local condition defining ${\rm Sel}_{\tt Gr}(L_{n,k},T_{\boldsymbol{f}}^\dagger)$ at a place $v$, Plater's definition of $\langle,\rangle_{\mathcal{K}_n^{\rm ac},\mathbb{I}}^{{\rm cyc}}$ (which we shall briefly recall in the proof of Proposition~\ref{thm:rubin-ht} below) shows that $\langle,\rangle_{\mathcal{K}_n^{\rm ac},\mathbb{I}}^{{\rm cyc}}$ takes integral values on the submodule of ${\rm Sel}_{\tt Gr}(\mathcal{K}_n^{\rm ac},T_{\boldsymbol{f}}^\dagger)$ consisting of classes which are local cyclotomic universal norms at all places $v$ above $p$, i.e., classes in \[ {\rm H}^1_{\tt Gr}(\mathcal{K}_{n,v}^{\rm ac},T_{\boldsymbol{f}}^\dagger)^{\rm univ}:=\bigcap_{k}{\rm cor}_{L_{n,k,v}/\mathcal{K}_{n,v}^{\rm ac}}({\rm H}^1_{\tt Gr}(L_{n,k,v},T_{\boldsymbol{f}}^\dagger)), \] and so by \cite[Lem.~2.3.1]{PR-109} the denominators of (\ref{eq:I-ht}) are bounded independently of $n$. The next result generalizes the height formula of \cite[Thm.~3.2(ii)]{rubin-ht} to our context. \begin{prop}\label{thm:rubin-ht} Assume that $\mathcal{K}$ satisfies the hypothesis {\rm (\ref{ass:gen-H})} and that $\bar{\rho}_{\boldsymbol{f}}\vert_{G_\mathcal{K}}$ is irreducible.
Then the classes $\mathcal{BF}^\dagger_{\mathcal{K}_n^{\rm ac}}$ land in ${\rm Sel}_{\tt Gr}(\mathcal{K}_n^{\rm ac},T_{\boldsymbol{f}}^\dagger)$, and for every $x\in{\rm Sel}_{\tt Gr}(\mathcal{K}_n^{\rm ac},T_{\boldsymbol{f}}^\dagger)$ we have \begin{equation}\label{eq:rubin-ht} \langle\mathcal{BF}^\dagger_{\mathcal{K}_n^{\rm ac}},x\rangle^{\rm cyc}_{\mathcal{K}_n^{\rm ac},\mathbb{I}}=(\beta_n^\dagger(\mathds{1}),{\rm loc}_{\overline\mathfrak{p}}(x))_{\mathcal{K}_{n,\overline{\mathfrak{p}}}^{\rm ac}}\otimes(\gamma_{\rm cyc}-1), \end{equation} where $(,)_{\mathcal{K}_{n,\overline{\mathfrak{p}}}^{\rm ac}}$ is the local Tate pairing \[ \frac{{\rm H}^1(\mathcal{K}_{n,\overline\mathfrak{p}}^{\rm ac},T_{\boldsymbol{f}}^\dagger)}{{\rm H}^1_{\tt Gr}(\mathcal{K}_{n,\overline\mathfrak{p}}^{\rm ac},T_{\boldsymbol{f}}^\dagger)} \times {\rm H}^1_{\tt Gr}(\mathcal{K}_{n,\overline\mathfrak{p}}^{\rm ac},T_{\boldsymbol{f}}^\dagger)\rightarrow\mathbb{I}. \] \end{prop} \begin{proof} The first claim follows from the explicit reciprocity law of Theorem~\ref{thm:Col}, the vanishing of $L_{p,0}^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}$, and the injectivity of ${\rm Col}^{(1),\dagger}$. On the other hand, formula $(\ref{eq:rubin-ht})$ could be deduced from the general result \cite[(11.3.14)]{nekovar310}, but we shall give a proof following the more direct generalization of Rubin's formula contained in \cite[\S{3}]{arnold-ht}. We begin by recalling Plater's definition of the $\mathbb{I}$-adic height pairing (itself a generalization of Perrin-Riou's \cite[\S{1.2}]{PR-109} in the $p$-adic setting). Let $\lambda$ be the isomorphism $\Gamma_\mathcal{K}^{\rm cyc}\simeq\mathcal{J}^{\rm cyc}$ sending $\gamma_{\rm cyc}$ to the class of $\gamma_{\rm cyc}-1$.
Composing the map $\lambda$ with the natural isomorphism ${\rm Gal}(L_{n,\infty}/\mathcal{K}_n^{\rm ac})\simeq\Gamma_\mathcal{K}^{\rm cyc}$, we obtain a class in ${\rm H}^1(\mathcal{K}_n^{\rm ac},\mathcal{J}^{\rm cyc})$, where we equip $\mathcal{J}^{\rm cyc}$ with the trivial Galois action, and so taking cup product we get \[ \rho_v:{\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},\mathbb{I}(1))\xrightarrow{\cup{\rm loc}_v(\lambda)}{\rm H}^2(\mathcal{K}_{n,v}^{\rm ac},\mathcal{J}^{\rm cyc}(1))\simeq\mathcal{J}^{\rm cyc} \] for every place $v$. Denote by ${\rm Sel}_{\tt Gr}(\mathcal{K}_n^{\rm ac},T_{\boldsymbol{f}}^\dagger)^{\rm univ}$ the submodule of ${\rm Sel}_{\tt Gr}(\mathcal{K}_n^{\rm ac},T_{\boldsymbol{f}}^\dagger)$ (with $\mathbb{I}$-torsion quotient, as noted earlier) consisting of classes lying in ${\rm H}^1_{\tt Gr}(\mathcal{K}_{n,v}^{\rm ac},T_{\boldsymbol{f}}^\dagger)^{\rm univ}$ for all $v\mid p$, and let $x, y\in{\rm Sel}_{\tt Gr}(\mathcal{K}_n^{\rm ac},T_{\boldsymbol{f}}^\dagger)^{\rm univ}$. Then $x$ corresponds to an extension of Galois modules \[ 0\rightarrow T_{\boldsymbol{f}}^\dagger\rightarrow X\rightarrow\mathbb{I}\rightarrow 0. \] The Kummer dual of this sequence induces maps on cohomology \[ {\rm H}^1(\mathcal{K}_{n}^{\rm ac},X^*(1))\rightarrow{\rm H}^1(\mathcal{K}_{n}^{\rm ac},T_{\boldsymbol{f}}^\dagger)\xrightarrow{\delta}{\rm H}^2(\mathcal{K}_{n}^{\rm ac},\mathbb{I}(1)) \] such that $\delta(y)=0$ (since ${\rm H}^2(\mathcal{K}_{n}^{\rm ac},\mathbb{I}(1))$ injects into $\bigoplus_v{\rm H}^2(\mathcal{K}_{n,v}^{\rm ac},\mathbb{I}(1))$ and the $v$-th component of $\delta(y)$ is given by ${\rm loc}_v(y)\cup{\rm loc}_v(x)=0$ by the self-duality of Greenberg's local conditions). Thus $y$ is the image of some $y^{\rm glob}\in{\rm H}^1(\mathcal{K}_n^{\rm ac},X^*(1))$.
On the other hand, if $v$ is any place of $\mathcal{K}_{n}^{\rm ac}$, for every $k$ we can write ${\rm loc}_v(y)={\rm cor}_{L_{n,k,v}/\mathcal{K}_{n,v}^{\rm ac}}(y_{k,v})$ for some $y_{k,v}\in{\rm H}^1_{\tt Gr}(L_{n,k,v},T_{\boldsymbol{f}}^\dagger)$, and by a similar argument as above there exists a class $\widetilde{y}_{k,v}\in{\rm H}^1(L_{n,k,v},X^*(1))$ lifting $y_{k,v}$ under the natural map $\pi_v$ in the exact sequence \begin{equation}\label{eq:local-v} {\rm H}^1(L_{n,k,v},X^*(1))\xrightarrow{\pi_v}{\rm H}^1(L_{n,k,v},T_{\boldsymbol{f}}^\dagger)\xrightarrow{\delta_v}{\rm H}^2(L_{n,k,v},\mathbb{I}(1)). \end{equation} The difference ${\rm loc}_v(y^{\rm glob})-{\rm cor}_{L_{n,k,v}/\mathcal{K}_{n,v}^{\rm ac}}(\widetilde{y}_{k,v})$ is then the image of some class $w_{k,v}\in{\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},\mathbb{I}(1))$, and we define \[ \langle y,x\rangle_{\mathcal{K}_n^{\rm ac},\mathbb{I}}^{\rm cyc}:=\lim_{k\to\infty}\sum_v\rho_v(w_{k,v}), \] a limit which is easily checked to exist and be independent of all choices. If in addition $y=y_0$ is the base class of a compatible system of classes \[ y_\infty=\{y_k\}_k\in{\rm H}^1(\mathcal{K}_n^{\rm ac},\mathbf{T}^{\dagger,{\rm cyc}})=\varprojlim_k{\rm H}^1(L_{n,k},T_{\boldsymbol{f}}^\dagger), \] then one easily checks (see e.g. \cite[Lem.~3.2.2]{AHsplit}) that there are classes $y_k^{\rm glob}\in{\rm H}^1(L_{n,k},X^*(1))$ lifting $y_k$. Similarly as above, for every place $v$ of $L_{n,k}$ the corestriction of ${\rm loc}_v(y_k^{\rm glob})-\widetilde{y}_{k,v}$ to ${\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},X^*(1))$ is the image of a class $w'_{k,v}\in{\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},\mathbb{I}(1))$, and with these choices we see that the above expression for $\langle y,x\rangle_{\mathcal{K}_n^{\rm ac},\mathbb{I}}^{\rm cyc}$ reduces to \begin{equation}\label{eq:ht-p} \langle y,x\rangle_{\mathcal{K}_n^{\rm ac},\mathbb{I}}^{\rm cyc}=\lim_{k\to\infty}\sum_{v\mid p}\rho_v(w'_{k,v}).
\end{equation} As in \cite[\S{3.8}]{arnold-ht}, division by $\gamma_{\rm cyc}-1$ defines a natural \emph{derivative map} \[ \mathfrak{Der}_{}:{\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},T_{\boldsymbol{f}}^\dagger\otimes_\mathbb{I}\mathcal{I}^{\rm cyc})\rightarrow{\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},T_{\boldsymbol{f}}^\dagger) \] whose composition with the natural projection ${\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},T_{\boldsymbol{f}}^\dagger)\rightarrow{\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},\mathscr{F}^-T_{\boldsymbol{f}}^\dagger)$ factors as \begin{equation}\label{eq:der-v} \begin{aligned} \xymatrix{ {\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},T_{\boldsymbol{f}}^\dagger\otimes_\mathbb{I}\mathcal{I}^{\rm cyc})\ar@{->>}[r]\ar[d]_-{\mathfrak{Der}_{}}& {\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},\mathscr{F}^-T_{\boldsymbol{f}}^\dagger\otimes_\mathbb{I}\mathcal{I}^{\rm cyc})\ar[d]^-{\mathfrak{Der}_{-}}\\ {\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},T_{\boldsymbol{f}}^\dagger)\ar@{->>}[r]&{\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},\mathscr{F}^-T_{\boldsymbol{f}}^\dagger). 
} \end{aligned} \end{equation} Letting ${\rm pr}_{\mathds{1}}$ be the natural projection ${\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},X^*(1)\otimes_\mathbb{I}\mathbb{I}[[\Gamma_\mathcal{K}^{\rm cyc}]])\rightarrow{\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},X^*(1))$, the expression $(\ref{eq:ht-p})$ for $\langle y,x\rangle_{\mathcal{K}_n^{\rm ac},\mathbb{I}}^{\rm cyc}$ can be rewritten as \[ \langle y,x\rangle_{\mathcal{K}_n^{\rm ac},\mathbb{I}}^{\rm cyc}=\sum_{v\vert p}{\rm pr}_{\mathds{1}}({\rm loc}_v(y_\infty^{\rm glob})-\widetilde{y}_{\infty,v}), \] where ${\rm loc}_v(y_\infty^{\rm glob})-\widetilde{y}_{\infty,v}\in{\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},X^*(1)\otimes_\mathbb{I}\mathbb{I}[[\Gamma_\mathcal{K}^{\rm cyc}]])$ is a lift of ${\rm loc}_v(y_\infty)-y_{\infty,v}\in{\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},T_{\boldsymbol{f}}^\dagger\otimes\mathcal{I}^{\rm cyc})$, and hence by \cite[Prop.~3.10]{arnold-ht} we obtain \begin{equation}\label{eq:der} \begin{aligned} \langle y,x\rangle_{\mathcal{K}_n^{\rm ac},\mathbb{I}}^{\rm cyc}&=\sum_{v\mid p}\delta_v\left(\mathfrak{Der}({\rm loc}_v(y_\infty)-y_{\infty,v})\right)\otimes(\gamma_{\rm cyc}-1)\\ &=\sum_{v\mid p}\left(\mathfrak{Der}({\rm loc}_v(y_\infty)-y_{\infty,v}),{\rm loc}_v(x)\right)_{\mathcal{K}_{n,v}^{\rm ac}}\otimes(\gamma_{\rm cyc}-1)\\ &=\sum_{v\mid p}\left(\mathfrak{Der}_{-}({\rm loc}_v(y_\infty)),{\rm loc}_v(x)\right)_{\mathcal{K}_{n,v}^{\rm ac}}\otimes(\gamma_{\rm cyc}-1), \end{aligned} \end{equation} where the last equality follows from the commutativity of $(\ref{eq:der-v})$ and the fact that $y_{\infty,v}=\{y_{k,v}\}_k$ has trivial image in ${\rm H}^1(\mathcal{K}_{n,v}^{\rm ac},\mathscr{F}^-\mathbf{T}^{\dagger,{\rm cyc}})$.
Now taking $y_\infty=\mathcal{BF}^\dagger(L_{n,\infty})$ in $(\ref{eq:der})$ we see that the contribution to $\langle\mathcal{BF}^\dagger_{\mathcal{K}_n^{\rm ac}},x\rangle^{\rm cyc}_{\mathcal{K}_n^{\rm ac},\mathbb{I}}$ from $\mathfrak{p}$ is zero, since $\mathcal{BF}^\dagger(L_{n,\infty})\in{\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K}_n^{\rm ac},\mathbf{T}^{\dagger,{\rm cyc}})$ is finite at the places above $\mathfrak{p}$, while at $\overline{\mathfrak{p}}$ chasing through the definitions we see that \[ \mathfrak{Der}_{-}({\rm loc}_{\overline{\mathfrak{p}}}(\mathcal{BF}^\dagger(L_{n,\infty})))=\beta_n^\dagger(\mathds{1}), \] thus concluding the proof of the height formula (\ref{eq:rubin-ht}). \end{proof} \section{Big Heegner points}\label{sec:HP} \label{sec:bigHP} In this section, we recall the construction of big Heegner points and classes \cite{howard-invmath,LV} with some complements. Fix a prime $p>3$ and a positive integer $N$ prime to $p$. Let $\mathcal{K}$ be an imaginary quadratic field with ring of integers $\mathcal{O}_\mathcal{K}$ and discriminant $-D_\mathcal{K}<0$ prime to $Np$, and write \[ N=N^+N^- \] with $N^+$ (resp. $N^-$) divisible only by primes which are split (resp. inert) in $\mathcal{K}$. Throughout, we assume the following \emph{generalized Heegner hypothesis}: \begin{equation}\label{ass:gen-H} \textrm{$N^-$ is the squarefree product of an even number of primes,}\tag{gen-H} \end{equation} and fix an integral ideal $\mathfrak{N}^+$ of $\mathcal{K}$ with $\mathcal{O}_\mathcal{K}/\mathfrak{N}^+\simeq\mathbf{Z}/N^+\mathbf{Z}$. \subsection{Towers of Shimura curves}\label{subsec:Sh} Let $B/\mathbf{Q}$ be an indefinite quaternion algebra of discriminant $N^-$. We fix a $\mathbf Q$-algebra embedding $\iota_\mathcal{K}:\mathcal{K}\hookrightarrow B$, which we shall use to identify $\mathcal{K}$ with a subalgebra of $B$.
Let $z\mapsto\overline{z}$ be the non-trivial automorphism of $\mathcal{K}$, and choose a basis $\{1,j\}$ of $B$ over $\mathcal{K}$ such that: \begin{itemize} \item $j^2=\beta\in\mathbf Q^\times$ with $\beta<0$ and $jt=\bar tj$ for all $t\in \mathcal{K}$, \item $\beta\in (\mathbf Z_q^\times)^2$ for $q\mid pN^+$, and $\beta\in\mathbf Z_q^\times$ for $q\mid D_\mathcal{K}$. \end{itemize} Fix a square-root $\delta=\sqrt{-D_\mathcal{K}}$, and define $\boldsymbol{\theta}\in \mathcal{K}$ by \begin{equation}\label{eq:vartheta} \boldsymbol{\theta}:=\frac{D_\mathcal{K}'+\delta}{2},\quad\textrm{where}\;\; D_\mathcal{K}':=\left\{ \begin{array}{ll} D_\mathcal{K} &\textrm{if $2\nmid D_\mathcal{K}$,}\\[0.1cm] D_\mathcal{K}/2 &\textrm{if $2\mid D_\mathcal{K}$,} \end{array} \right. \end{equation} so that $\mathcal O_\mathcal{K}=\mathbf Z+\boldsymbol{\theta}\mathbf{Z}$. For every prime $q\mid pN^+$, define the isomorphism $i_q:B_q:=B\otimes_\mathbf Q\mathbf Q_q \simeq \M_2(\mathbf Q_q)$ by \[ i_q(\boldsymbol{\theta})=\mat{\mathrm{Tr}(\boldsymbol{\theta})}{-\mathrm{Nm}(\boldsymbol{\theta})}10, \quad\quad i_q(j)=\sqrt\beta\mat{-1}{\mathrm{Tr}(\boldsymbol{\theta})}01, \] where $\mathrm{Tr}$ and $\mathrm{Nm}$ are the reduced trace and norm maps on $B$. For primes $q\nmid Np$, we fix any isomorphism $i_q:B_q\simeq \M_2(\mathbf Q_q)$ with $i_q(\mathcal O_\mathcal{K}\otimes_\mathbf Z\mathbf Z_q)\subset\M_2(\mathbf Z_q)$. Let $\hat{\mathbf{Z}}$ denote the profinite completion of $\mathbf{Z}$, and for any abelian group $M$ set $\hat{M}=M\otimes_{\mathbf{Z}}\hat{\mathbf{Z}}$. For each $r\geqslant 0$, let $R_{r}$ be the Eichler order of $B$ of level $N^+p^r$ with respect to the isomorphisms $\{i_q:B_q\simeq{\rm M}_2(\mathbf Q_q)\}_{q\nmid N^-}$, and let $U_{r}\subset\hat{R}_{r}^\times$ be the compact open subgroup \[ U_{r}:=\left\{(x_q)_q\in\hat{R}_{r}^\times\;\colon\;i_p(x_p)\equiv\mat 1*0*\pmod{p^r}\right\}. 
\] Consider the double coset spaces \begin{equation}\label{def:gross-curve} X_{r}=B^\times\backslash\bigl(\Hom_\mathbf Q(\mathcal{K},B)\times\hat{B}^\times/U_{r}\bigr), \end{equation} where $b\in B^\times$ acts on $(\Psi,g)\in\Hom_\mathbf Q(\mathcal{K},B)\times\hat B^\times$ by \[ b\cdot(\Psi,g)=(b\Psi b^{-1},bg), \] and $U_{r}$ acts on $\hat{B}^\times$ by right multiplication. As is well-known (see e.g. \cite[\S\S{2.1-2}]{LV}), $ X_{r}$ can be identified with a set of algebraic points on the Shimura curve with complex uniformization \[ X_{r}(\mathbf{C})=B^\times\backslash\bigl(\Hom_\mathbf Q(\mathbf{C},B)\times\hat{B}^\times/U_{r}\bigr). \] Let ${\rm rec}_\mathcal{K}:\mathcal{K}^\times\backslash\hat{\mathcal{K}}^\times\rightarrow{\rm Gal}(\mathcal{K}^{\rm ab}/\mathcal{K})$ be the reciprocity map of class field theory. By Shimura's reciprocity law, if $P\in X_{r}$ is the class of a pair $(\Psi,g)$, then $\sigma\in{\rm Gal}(\mathcal{K}^{\rm ab}/\mathcal{K})$ acts on $P$ by \[ P^{\sigma}:=[(\Psi,\hat{\Psi}(a)g)], \] where $a\in \mathcal{K}^\times\backslash\hat{\mathcal{K}}^\times$ is such that ${\rm rec}_\mathcal{K}(a)=\sigma$, and $\hat\Psi:\hat \mathcal{K}\rightarrow\hat B$ is the adelization of $\Psi$. We extend this to an action of $G_\mathcal{K}:={\rm Gal}(\overline{\mathbf Q}/\mathcal{K})$ in the obvious manner. The curves $ X_{r} $ are also equipped with natural actions of Hecke operators $T_\ell$ for $\ell\nmid Np$, $U_\ell$ for $\ell\vert Np$, and diamond operators $\langle d \rangle$ for $d\in(\mathbf Z/p^r\mathbf Z)^\times$, as described in e.g. \cite[\S{2.4}]{LV} and \cite[\S{2.1}]{ChHs2}. \subsection{Compatible systems of Heegner points}\label{subsec:construct} For each $c\geqslant 1$, let $\mathcal{O}_c=\mathbf Z+c\mathcal{O}_\mathcal{K}$ be the order of $\mathcal{K}$ of conductor $c$ and denote by $H_c$ the ring class field of $\mathcal{K}$ of conductor $c$, so that ${\rm Pic}(\mathcal{O}_c)\simeq{\rm Gal}(H_c/\mathcal{K})$ by class field theory. 
In particular, $H_1$ is the Hilbert class field of $\mathcal{K}$. \begin{defn}\label{def:HP} A point $P\in X_{r}$ is a \emph{Heegner point of conductor $c$} if it is the class of a pair $(\Psi,g)$ with \[ \Psi(\mathcal{O}_c)=\Psi(\mathcal{K})\cap(B\cap g\hat{R}_{r}g^{-1}) \] and \[ \Psi_p\left((\mathcal{O}_c\otimes\mathbf Z_p)^\times\cap(1+p^r\mathcal{O}_c\otimes\mathbf Z_p)^\times\right) =\Psi_p\left((\mathcal{O}_c\otimes\mathbf Z_p)^\times\right)\cap g_pU_{r,p}g_p^{-1}, \] where $\Psi_p$ and $U_{r,p}$ denote the $p$-components of $\Psi$ and $U_{r}$, respectively. \end{defn} For each prime $q\neq p$ define \begin{itemize} \item{} $\varsigma_q=1$, if $q\nmid N^+$, \item{} $\varsigma_q=\delta^{-1}\begin{pmatrix}\boldsymbol{\theta} & \overline{\boldsymbol{\theta}} \\ 1 & 1 \end{pmatrix} \in{\rm GL}_2(\mathcal{K}_{\mathfrak{q}})={\rm GL}_2(\mathbf Q_q)$, if $q=\mathfrak{q}\overline{\mathfrak{q}}$ splits with $\mathfrak{q}\mid\mathfrak{N}^+$, \end{itemize} and for each $s\geqslant 0$, let \begin{itemize} \item{} $\varsigma_p^{(s)}=\begin{pmatrix}\boldsymbol{\theta}&-1\\1&0\end{pmatrix}\begin{pmatrix}p^s&0\\0&1\end{pmatrix} \in{\rm GL}_2(\mathcal{K}_{\mathfrak{p}})={\rm GL}_2(\mathbf Q_p)$, if $p=\mathfrak{p}\overline{\mathfrak{p}}$ splits in $\mathcal{K}$, \item{} $\varsigma_p^{(s)}=\begin{pmatrix}0&1\\-1&0\end{pmatrix}\begin{pmatrix}p^s&0\\0&1\end{pmatrix}$, if $p$ is inert in $\mathcal{K}$. \end{itemize} \begin{rem} We shall ultimately assume that $p$ splits in $\mathcal{K}$, but it is worth noting that, just as in \cite{howard-invmath, LV}, the constructions in this section also allow the case $p$ inert in $\mathcal{K}$. \end{rem} Set $\varsigma^{(s)}:=\varsigma_p^{(s)}\prod_{q\neq p}\varsigma_q$, which we view as an element in $\hat{B}^\times$ via the isomorphisms $\{i_q:B_q\simeq{\rm M}_2(\mathbf Q_q)\}_{q\nmid N^-}$ introduced in $\S\ref{subsec:Sh}$.
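For concreteness, in the split case the product defining $\varsigma_p^{(s)}$ evaluates to (a direct matrix multiplication, made explicit here for the reader's convenience; it is not needed in the sequel)
\[
\varsigma_p^{(s)}=\begin{pmatrix}\boldsymbol{\theta}&-1\\1&0\end{pmatrix}\begin{pmatrix}p^s&0\\0&1\end{pmatrix}=\begin{pmatrix}p^s\boldsymbol{\theta}&-1\\p^s&0\end{pmatrix},
\]
whose first column involves $p^s\boldsymbol{\theta}$; since $\mathcal{O}_\mathcal{K}=\mathbf{Z}+\boldsymbol{\theta}\mathbf{Z}$, we have $\mathbf{Z}+p^s\boldsymbol{\theta}\mathbf{Z}=\mathbf{Z}+p^s\mathcal{O}_\mathcal{K}=\mathcal{O}_{p^s}$, in line with the fact (recorded below) that the resulting points have conductor $p^s$.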
With the $\mathbf{Q}$-algebra embedding $\iota_\mathcal{K}:\mathcal{K}\hookrightarrow B$ fixed there, one easily checks that for all $s\geqslant r$ the points \[ {P}_{s,r}^{}:=[(\iota_\mathcal{K},\varsigma^{(s)})]\in X_{r} \] are Heegner points of conductor $p^{s}$ in the sense of Definition~\ref{def:HP} with the following properties: \begin{itemize} \item{} \emph{Field of definition}: ${P}_{s,r}\in {\rm H}^0(H_{p^s}({\mu}_{p^r}),{X}_{r})$. \item{} \emph{Galois equivariance}: For all $\sigma\in{\rm Gal}(H_{p^s}({\mu}_{p^r})/H_{p^s})$, \[ {P}_{s,r}^\sigma=\langle\vartheta(\sigma)\rangle\cdot {P}_{s,r}, \] where $\vartheta:{\rm Gal}(H_{p^s}({\mu}_{p^r})/H_{p^{s}})\rightarrow\mathbf Z_p^\times/\{\pm{1}\}$ is such that $\vartheta^2=\varepsilon_{\rm cyc}$. \item{} \emph{Horizontal compatibility}: If $s\geqslant r> 1$, then \[ \sum_{\sigma\in{\rm Gal}(H_{p^s}({\mu}_{p^r})/H_{p^{s}}({\mu}_{p^{r-1}}))} {\alpha}_r({P}_{s,r}^{{\sigma}}) =U_p\cdot{P}_{s,r-1}, \] where ${\alpha}_r: X_{r}\rightarrow {X}_{r-1}$ is the map induced by the inclusion $U_{r}\subset U_{r-1}$. \item{} \emph{Vertical compatibility}: If $s\geqslant r\geqslant 1$, then \[ \sum_{\sigma\in{\rm Gal}(H_{p^s}({\mu}_{p^r})/H_{p^{s-1}}({\mu}_{p^r}))} {P}_{s,r}^{{\sigma}} =U_p\cdot{P}_{s-1,r}. \] \end{itemize} (See \cite[Thm.~1.2]{cas-longo} and the references therein.)
\subsection{Big Heegner points}\label{subsec:bigHP} Let $\mathbb{B}_r$ be the $\mathbf{Z}_p$-algebra generated by the Hecke operators $T_\ell$, $U_\ell$, and $\langle a\rangle$ acting on the Shimura curve ${X}_{r}$ from $\S$\ref{subsec:Sh}, let $\mathfrak{h}_{r}$ be the $\mathbf{Z}_p$-algebra generated by the usual Hecke operators $T_\ell$, $U_\ell$, and $\langle a\rangle$ acting on the space $S_2(\Gamma_{0,1}(N,p^r))$ of classical modular forms of level $\Gamma_{0,1}(N,p^r):=\Gamma_0(N)\cap\Gamma_1(p^r)$, and let $\mathbb{T}^{N^-}_{N,r}$ be the quotient of $\mathfrak{h}_r$ acting faithfully on the subspace of $S_2(\Gamma_{0,1}(N,p^r))$ consisting of $N^-$-new forms. The Jacquet--Langlands correspondence yields $\mathbf{Z}_p$-algebra isomorphisms \begin{equation}\label{eq:JL} \mathbb{B}_r\simeq\mathbb{T}^{N^-}_{N,r} \end{equation} (see e.g. \cite[\S{2.4}]{HMI}). In particular, letting $e_{\rm ord}=\lim_{n\to\infty}U_p^{n!}$ be Hida's ordinary projector, the $\mathbf{Z}_p$-module \[ \mathfrak{D}_{r}^{\rm ord}:=e_{\rm ord}({\rm Div}({X}_{r})\otimes_{\mathbf Z}\mathbf Z_p) \] is naturally endowed with an action of $\mathbb{T}_{r}^{\rm ord}:=e_{\rm ord}\mathbb{T}^{N^-}_{N,r}$. Denote by $\mathbb T_{r}^\dagger$ the free $\mathbb T_{r}^{\rm ord}$-module of rank one equipped with the Galois action via the inverse of the critical character $\Theta$, and set $\mathfrak{D}_{r}^\dagger:=\mathfrak{D}_{r}^{\rm ord}\otimes_{\mathbb{T}_{r}^{\rm ord}}\mathbb T_{r}^\dagger$. Let ${P}_{s,r}\in{X}_{r}$ be the Heegner point of conductor $p^s$ ($s\geqslant r$) constructed in $\S$\ref{subsec:construct}, and denote by $\mathcal{P}_{s,r}$ the image of $e_{\rm ord}{P}_{s,r}^{}$ in $\mathfrak{D}_{r}^{\rm ord}$.
It follows from the Galois-equivariance property of ${P}_{s,r}$ that \[ \mathcal{P}_{s,r}^\sigma=\Theta(\sigma)\cdot\mathcal{P}_{s,r} \] for all $\sigma\in{\rm Gal}(H_{p^s}({\mu}_{p^r})/H_{p^{s}})$ (see \cite[\S{7.1}]{LV}), and hence $\mathcal{P}_{s,r}$ defines an element \begin{equation}\label{eq:pt-dag} \mathcal{P}_{s,r}\otimes\zeta_r\in{\rm H}^0(H_{p^{s}},\mathfrak{D}_{r}^\dagger). \end{equation} Let ${\rm Pic}({X}_{r})$ be the Picard variety of ${X}_{r}$, and set \[ \mathfrak{J}_{r}^{\rm ord}:=e_{\rm ord}({\rm Pic}({X}_{r})\otimes_{\mathbf Z}\mathbf Z_p),\quad\quad\mathfrak{J}_{r}^\dagger:=\mathfrak{J}_{r}^{\rm ord}\otimes_{\mathbb{T}_r^{\rm ord}}\mathbb{T}_r^\dagger. \] Since the $U_p$-operator has degree $p$, taking ordinary parts yields an isomorphism $\mathfrak{D}_r^{\rm ord}\simeq\mathfrak{J}_r^{\rm ord}$, and so we may also view (\ref{eq:pt-dag}) as $\mathcal{P}_{s,r}\otimes\zeta_r\in{\rm H}^0(H_{p^{s}},\mathfrak{J}_{r}^\dagger)$. Let $t\geqslant 0$, and denote by $\mathfrak{G}_{H_{p^t}}$ the Galois group of the maximal extension of $H_{p^t}$ unramified outside the primes above $pN$. Consider the twisted Kummer map \[ {\rm Kum}_r:{\rm H}^0(H_{p^t},\mathfrak{J}_{r}^\dagger) \rightarrow{\rm H}^1(\mathfrak{G}_{H_{p^t}},{\rm Ta}_p(\mathfrak{J}_{r}^\dagger)) \] explicitly defined in \cite[p.~101]{howard-invmath}. This map is equivariant for the Galois and $U_p$ actions, and hence by horizontal compatibility the classes \begin{equation}\label{eq:HP-r} \mathfrak{X}_{p^t,r}:={\rm Kum}_r({\rm Cor}_{H_{p^{r+t}}/H_{p^t}}(\mathcal{P}_{r+t,r}\otimes\zeta_r)) \end{equation} satisfy ${\alpha}_{r,*}(\mathfrak{X}_{p^t,r})=U_p\cdot\mathfrak{X}_{p^t,r-1}$ for all $r>1$, where \[ {\alpha}_{r,*}:{\rm H}^1(\mathfrak{G}_{H_{p^t}},{\rm Ta}_p(\mathfrak{J}_{r}^\dagger))\rightarrow{\rm H}^1(\mathfrak{G}_{H_{p^t}},{\rm Ta}_p(\mathfrak{J}_{r-1}^\dagger)) \] is the map induced by the covering ${X}_r\rightarrow{X}_{r-1}$ by Albanese functoriality.
Now let ${\boldsymbol{f}}\in\mathbb{I}[[q]]$ be a Hida family of tame level $N$. To define big Heegner points attached to ${\boldsymbol{f}}$ from the system of Heegner classes (\ref{eq:HP-r}) for varying $r$, we need to recall the following result realizing the big Galois representation $T_{\boldsymbol{f}}$ attached to ${\boldsymbol{f}}$ in the \'etale cohomology of the $p$-tower of Shimura curves \[ \cdots\rightarrow{X}_r\rightarrow{X}_{r-1}\rightarrow\cdots \] (rather than classical modular curves, as implicitly taken in $\S\ref{sec:selmer}$). Let $\kappa_\mathbb{I}:=\mathbb{I}/\mathfrak{m}_\mathbb{I}$ be the residue field of $\mathbb{I}$, and denote by $\bar{\rho}_{{\boldsymbol{f}}}:G_\mathbf{Q}\rightarrow{\rm GL}_2(\kappa_\mathbb{I})$ the associated (semi-simple) residual representation. Set \[ \mathbb{T}_\infty^{\rm ord}:=\varprojlim_r\mathbb{T}_r^{\rm ord}. \] By (\ref{eq:JL}) (see also the discussion in \cite[\S{5.3}]{LV}), there is a maximal ideal $\mathfrak{m}\subset\mathbb{T}^{\rm ord}_\infty$ associated with $\bar{\rho}_{{\boldsymbol{f}}}$, and ${\boldsymbol{f}}$ corresponds to a minimal prime in the localization $\mathbb{T}^{\rm ord}_{\infty,\mathfrak{m}}$. \begin{thm}\label{thm:helm} Assume that \begin{itemize} \item[(i)] $\bar{\rho}_{{\boldsymbol{f}}}$ is irreducible and $p$-distinguished, \item[(ii)] $\bar{\rho}_{{\boldsymbol{f}}}$ is ramified at every prime $\ell\vert N^-$ with $\ell\equiv\pm{1}\pmod{p}$, \end{itemize} and let $\mathfrak{m}\subset\mathbb{T}_\infty^{\rm ord}$ be the maximal ideal associated with $\bar{\rho}_{{\boldsymbol{f}}}$.
Then the module \[ \mathbf{Ta}_{\mathfrak{m}}^{\rm ord}:=\biggl(\varprojlim_r{\rm Ta}_p(\mathfrak{J}_r^{\rm ord})\biggr)\otimes_{\mathbb{T}_\infty^{\rm ord}}\mathbb{T}_{\infty,\mathfrak{m}}^{\rm ord} \] is free of rank $2$ over $\mathbb{T}_{\infty,\mathfrak{m}}^{\rm ord}$, and if ${\boldsymbol{f}}$ corresponds to the minimal prime $\mathfrak{a}\subset\mathbb{T}_{\infty,\mathfrak{m}}^{\rm ord}$, then there is an isomorphism \[ T_{\boldsymbol{f}}\simeq\mathbf{Ta}_{\mathfrak{m}}^{\rm ord}\otimes_{\mathbb{T}_{\infty,\mathfrak{m}}^{\rm ord}}\mathbb{T}_{\infty,\mathfrak{m}}^{\rm ord}/\mathfrak{a} \] as $(\mathbb{T}_{\infty,\mathfrak{m}}^{\rm ord}/\mathfrak{a})[G_\mathbf{Q}]$-modules. \end{thm} \begin{proof} This is shown in \cite[Thm.~3.1]{Fouquet} assuming the ``mod $p$ multiplicity one'' hypothesis in [\emph{loc.cit.}, Prop.~3.7]. Since by \cite[Cor.~8.11]{helm} that hypothesis is ensured by our ramification condition on $\bar\rho_{{\boldsymbol{f}}}$, the result follows. \end{proof} Let $\mathfrak{m}\subset\mathbb{T}_\infty^{\rm ord}$ be a maximal ideal satisfying the hypotheses of Theorem~\ref{thm:helm}, and suppose that the Hida family ${\boldsymbol{f}}$ corresponds to a minimal prime of $\mathbb{T}_{\infty,\mathfrak{m}}^{\rm ord}$, so by Theorem~\ref{thm:helm} there is a quotient map $\mathbf{Ta}_{\mathfrak{m}}^{\rm ord}\rightarrow T_{\boldsymbol{f}}$. Note also that immediately from the definitions there are natural maps ${\rm Ta}_p(\mathfrak{J}_r^\dagger)\rightarrow\mathbf{Ta}_{\mathfrak{m}}^{\rm ord}\otimes\Theta^{-1}\rightarrow T_{\boldsymbol{f}}^\dagger$.
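Before giving the definition, we record a routine observation (spelled out here since it justifies the inverse limit appearing in it): as ${\alpha}_{r,*}$ is compatible with the $U_p$-action and $U_p$ acts invertibly on ordinary parts, the relation ${\alpha}_{r,*}(\mathfrak{X}_{p^t,r})=U_p\cdot\mathfrak{X}_{p^t,r-1}$ may be rewritten as
\[
{\alpha}_{r,*}\bigl(U_p^{-r}\cdot\mathfrak{X}_{p^t,r}\bigr)=U_p^{-(r-1)}\cdot\mathfrak{X}_{p^t,r-1},
\]
so the rescaled classes $U_p^{-r}\cdot\mathfrak{X}_{p^t,r}$ form a compatible system under the maps ${\alpha}_{r,*}$.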
\begin{defn} The \emph{big Heegner point} of conductor $p^t$ is the class \[ \mathfrak{X}_{p^t}\in{\rm H}^1(H_{p^t},T_{{\boldsymbol{f}}}^\dagger) \] given by the image of $\varprojlim_rU_p^{-r}\cdot\mathfrak{X}_{p^t,r}$ under the composite map \[ \varprojlim_r{\rm H}^1(\mathfrak{G}_{H_{p^t}},{\rm Ta}_p(\mathfrak{J}_{r}^\dagger)) \rightarrow{\rm H}^1(\mathfrak{G}_{H_{p^t}},\mathbf{Ta}_{\mathfrak{m}}^{\rm ord}\otimes\Theta^{-1}) \rightarrow{\rm H}^1(H_{p^t},T_{{\boldsymbol{f}}}^\dagger). \] \end{defn} We conclude this section with the following result essentially due to Howard, showing that the big Heegner points are Selmer classes under mild hypotheses. \begin{prop}\label{prop:HPinSel} Assume that $\bar{\rho}_{{\boldsymbol{f}}}$ is ramified at every prime $\ell\vert N^-$. Then the classes $\mathfrak{X}_{p^t}$ lie in ${\rm Sel}_{\tt Gr}(H_{p^t},T_{{\boldsymbol{f}}}^\dagger)$. \end{prop} \begin{proof} The argument in \cite[Prop.~2.4.5]{howard-invmath} (see also \cite[Prop.~10.1]{LV}) shows that for every prime $w$ of $H_{p^t}$ the localization ${\rm loc}_w(\mathfrak{X}_{p^t})$ lies in the subspace ${\rm H}_{\tt Gr}^1(H_{p^t,w},T_{{\boldsymbol{f}}}^\dagger)\subset{\rm H}^1(H_{p^t,w},T_{{\boldsymbol{f}}}^\dagger)$ defining ${\rm Sel}_{\tt Gr}(H_{p^t},T_{{\boldsymbol{f}}}^\dagger)$, except when $w\vert\ell\vert N^-$, in which case it is shown that \[ {\rm loc}_{w}(\mathfrak{X}_{p^t})\in{\rm ker}\biggl\{{\rm H}^1(H_{p^t,w},T_{{\boldsymbol{f}}}^\dagger)\rightarrow\frac{{\rm H}^1(H_{p^t,w}^{\rm ur},T_{{\boldsymbol{f}}}^\dagger)}{{\rm H}^1(H_{p^t,w}^{\rm ur},T_{{\boldsymbol{f}}}^\dagger)_{\rm tors}}\biggr\}, \] where ${\rm H}^1(H_{p^t,w}^{\rm ur},T_{{\boldsymbol{f}}}^\dagger)_{\rm tors}$ denotes the $\mathbb{I}$-torsion submodule of ${\rm H}^1(H_{p^t,w}^{\rm ur},T_{{\boldsymbol{f}}}^\dagger)$.
However, such primes $\ell$ are inert in $\mathcal{K}$, so $H_{p^t,w}=\mathcal{K}_\ell$, and since our hypothesis on $\bar\rho_{\boldsymbol{f}}$ implies that ${\rm H}^1(\mathcal{K}_\ell^{\rm ur},T_{{\boldsymbol{f}}}^\dagger)$ is $\mathbb{I}$-torsion free (see e.g. \cite[Lem.~3.12]{buy-bigHP}), the result follows. \end{proof} Recall that $\mathcal{K}_\infty^{\rm ac}=\cup_n \mathcal{K}_n^{\rm ac}$ is the anticyclotomic $\mathbf{Z}_p$-extension of $\mathcal{K}$, with $\mathcal{K}_n^{\rm ac}$ denoting the subextension of $\mathcal{K}_\infty^{\rm ac}$ with $[\mathcal{K}_n^{\rm ac}:\mathcal{K}]=p^n$. As in \cite[\S{3.3}]{howard-invmath} and \cite[\S{10.3}]{LV}, we set \[ \mathfrak{Z}_n:={\rm Cor}_{H_{p^t}/\mathcal{K}^{\rm ac}_n}(U_p^{-t}\cdot\mathfrak{X}_{p^t})\in{\rm H}^1(\mathcal{K}_n^{\rm ac},T_{{\boldsymbol{f}}}^\dagger), \] where $t\gg 0$ is chosen so that $\mathcal{K}_n^{\rm ac}\subset H_{p^t}$. By horizontal compatibility, the definition of $\mathfrak{Z}_n$ is independent of the choice of $t$, and for varying $n$ they define a system \[ \mathfrak{Z}_\infty:=\{\mathfrak{Z}_n\}_n\in\varprojlim_n{\rm H}^1(\mathcal{K}_n^{\rm ac},T_{{\boldsymbol{f}}}^\dagger)\simeq{\rm H}_{}^1(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}}) \] which is not $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-torsion by the work of Cornut--Vatsal \cite{CV-dur} (see also \cite[Cor.~3.1.2]{howard-invmath}). \section{Main results} \label{sec:main} In this section we conclude the proof of the main results of this paper. Fix a prime $p>3$ and let \[ {\boldsymbol{f}}=\sum_{n=1}^\infty\boldsymbol{a}_nq^n\in\mathbb{I}[[q]] \] be a primitive Hida family of tame level $N$, and let $\mathcal{K}$ be an imaginary quadratic field of discriminant prime to $Np$ satisfying the generalized Heegner hypothesis (\ref{ass:gen-H}) relative to $N$. Our results will require some of the technical hypotheses below, which we record here for later reference.
\begin{itemize} \item[(h0)] $\mathbb{I}$ is regular, \item[(h1)] some specialization ${\boldsymbol{f}}_\phi$ is the $p$-stabilization of a newform $f\in S_2(\Gamma_0(N))$, \item[(h2)] $\bar{\rho}_{{\boldsymbol{f}}}$ is irreducible, \item[(h3)] $N$ is squarefree, \item[(h4)] $N^-\neq 1$, \item[(h5)] $\bar{\rho}_{{\boldsymbol{f}}}$ is ramified at every prime $\ell\vert N^-$, \item[(h6)] $p$ splits in $\mathcal{K}$. \end{itemize} As usual, here $N^-$ denotes the largest factor of $N$ divisible only by primes which are inert in $\mathcal{K}$. \subsection{Converse to a theorem of Howard} As shown by Howard \cite[\S\S{2.3-4}]{howard-invmath}, for varying $c$ prime to $N$ the big Heegner points $\mathfrak{X}_c\in{\rm H}^1(H_c,T_{{\boldsymbol{f}}}^\dagger)$ form an anticyclotomic Euler system for $T_{{\boldsymbol{f}}}^\dagger$. Setting \[ \mathfrak{Z}_0:={\rm Cor}_{H_1/\mathcal{K}}(\mathfrak{X}_1)\in{\rm H}^1(\mathcal{K},T_{{\boldsymbol{f}}}^\dagger), \] Kolyvagin's methods thus yield a proof of the implication \[ \mathfrak{Z}_0\not\in{\rm Sel}_{\tt Gr}(\mathcal{K},T_{{\boldsymbol{f}}}^\dagger)_{\rm tors} \quad\Longrightarrow\quad {\rm rank}_\mathbb{I}\;{\rm Sel}_{\tt Gr}(\mathcal{K},T_{{\boldsymbol{f}}}^\dagger)=1, \] where the subscript ${\rm tors}$ denotes the $\mathbb{I}$-torsion submodule (see \cite[Cor.~3.4.3]{howard-invmath}). In the spirit of Skinner's celebrated converse to the theorem of Gross--Zagier and Kolyvagin \cite{skinner}, in this section we prove a result in the converse direction (see Theorem~\ref{thm:converse-How} below). Following the approach of \cite{wan}, this will be deduced from progress on the Iwasawa main conjecture for big Heegner points (\cite[Conj.~3.3.1]{howard-invmath}) exploiting the non-triviality of $\mathfrak{Z}_\infty$. \begin{thm}\label{thm:HP-MC} Assume hypotheses {\rm (h0)--(h6)}.
Then both $X_{\tt Gr}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^\dagger)$ and ${\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})$ have $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-rank one, and \[ {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}(X_{\tt Gr}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^\dagger)_{\rm tors})= {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}\biggl(\frac{{\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})}{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]\cdot\mathfrak{Z}_\infty}\biggr)^2, \] where the subscript {\rm tors} denotes the $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-torsion submodule. \end{thm} \begin{proof} Since $\mathfrak{Z}_\infty$ is not $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-torsion, part (iii) of \cite[Thm.~B]{Fouquet} implies that $X_{\tt Gr}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^\dagger)$ and ${\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})$ have $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-rank one, and that the divisibility \begin{equation}\label{eq:fouquet} {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}(X_{\tt Gr}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^\dagger)_{\rm tors})\supset {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}\biggl(\frac{{\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})}{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]\cdot\mathfrak{Z}_\infty}\biggr)^2 \end{equation} holds in $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$.
Concerning the additional hypotheses in Fouquet's result, we note that: \begin{itemize} \item Assumption~3.4, that $\bar\rho_{\boldsymbol{f}}$ is irreducible, is our (h2), \item Assumption~3.5, that $\bar\rho_{\boldsymbol{f}}$ is $p$-distinguished, follows from (h1) (see \cite[Rem.~7.2.7]{KLZ2}), \item Assumption~3.10, that the tame character of ${\boldsymbol{f}}$ admits a square-root, is satisfied by (h1), \item Assumption~5.10, that all primes $\ell\vert N$ for which $\bar\rho_{\boldsymbol{f}}$ is not ramified have infinite decomposition group in $\mathcal{K}_\infty^{\rm ac}/\mathcal{K}$, is a reformulation of (h5), \item Assumption~5.13, that $\bar{\rho}_{{\boldsymbol{f}}}\vert_{G_\mathcal{K}}$ is irreducible, follows from (h2), (h4) and (h5) (see \cite[Lem.~2.8.1]{skinner}). \end{itemize} Let $\phi\in\mathcal{X}_a(\mathbb{I})$ be such that ${\boldsymbol{f}}_\phi$ is the ordinary $p$-stabilization of a newform $f\in S_2(\Gamma_0(N))$ as in hypothesis (h1). Letting $X\supset Y$ stand for the divisibility $(\ref{eq:fouquet})$, by \cite[Thm.~3.4]{cas-BF} (see also \cite[Thm.~1.2]{wan}) we have the equality \[ X=Y\pmod{{\rm ker}(\phi)\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}, \] (note that this requires the additional hypotheses (h3) and (h6)), from which the result follows by an application of Lemma~\ref{lem:3.2}. \end{proof} \begin{thm}\label{thm:converse-How} Assume hypotheses {\rm (h0)--(h6)}. Then the following implication holds: \[ {\rm rank}_\mathbb{I}\;{\rm Sel}_{\tt Gr}(\mathcal{K},T_{{\boldsymbol{f}}}^\dagger)=1\quad\Longrightarrow\quad \mathfrak{Z}_0\not\in{\rm Sel}_{\tt Gr}(\mathcal{K},T_{{\boldsymbol{f}}}^\dagger)_{\rm tors} \] where the subscript ${\rm tors}$ denotes the $\mathbb{I}$-torsion submodule. \end{thm} \begin{proof} Let $\gamma_{\rm ac}\in\Gamma_\mathcal{K}^{{\rm ac}}$ be a topological generator.
The restriction map for the extension $\mathcal{K}_\infty^{\rm ac}/\mathcal{K}$ induces a surjective homomorphism \[ X_{\tt Gr}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^\dagger)/(\gamma_{\rm ac}-1)X_{\tt Gr}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^\dagger)\twoheadrightarrow X_{\tt Gr}(\mathcal{K},A_{{\boldsymbol{f}}}^\dagger) \] with $\mathbb{I}$-torsion kernel. Since $X_{\tt Gr}(\mathcal{K},A_{{\boldsymbol{f}}}^\dagger)$ and ${\rm Sel}_{\tt Gr}(\mathcal{K},T_{{\boldsymbol{f}}}^\dagger)$ have the same $\mathbb{I}$-rank by Lemma~\ref{lem:eq-ranks}, we thus see from Theorem~\ref{thm:HP-MC} that our assumption implies that \[ (\gamma_{\rm ac}-1)\nmid{\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}\biggl(\frac{{\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})}{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]\cdot\mathfrak{Z}_\infty}\biggr). \] By \cite[Cor.~3.8]{SU} (with $F=\mathcal{K}_\infty^{\rm ac}$), it follows that the image of $\mathfrak{Z}_\infty$ in ${\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})/(\gamma_{{\rm ac}}-1){\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})$ is not $\mathbb{I}$-torsion; since this image is sent to $\mathfrak{Z}_0$ under the natural injection \[ {\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})/(\gamma_{{\rm ac}}-1){\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})\hookrightarrow{\rm Sel}_{\tt Gr}(\mathcal{K},T_{{\boldsymbol{f}}}^\dagger), \] the result follows.
\end{proof} \begin{rem} Replacing the application of \cite[Thm.~3.4]{cas-BF} or \cite[Thm.~1.2]{wan} in the proof of Theorem~\ref{thm:HP-MC} by an application of \cite[Thm.~1.2]{BCK} or \cite[Thm.~1.1.5]{zanarella}, the above argument gives a proof of Theorems~\ref{thm:HP-MC} and \ref{thm:converse-How} with hypotheses (h3)--(h6) replaced by ``Hypothesis~$\heartsuit$'' from \cite{zhang-kolyvagin}, i.e., letting ${\rm Ram}(\bar{\rho}_{\boldsymbol{f}})$ be the set of primes $\ell\Vert N$ such that $\bar{\rho}_{\boldsymbol{f}}$ is ramified at $\ell$: \begin{itemize} \item ${\rm Ram}(\bar{\rho}_{\boldsymbol{f}})$ contains all primes $\ell\Vert N^+$, and all primes $\ell\vert N^-$ such that $\ell\equiv\pm 1 \pmod{p}$, \item If $N$ is not squarefree, then ${\rm Ram}(\bar{\rho}_{\boldsymbol{f}})\neq\emptyset$, and either ${\rm Ram}(\bar{\rho}_{\boldsymbol{f}})$ contains a prime $\ell\vert N^-$ or there are at least two primes $\ell\Vert N^+$, \item If $\ell^2\vert N^+$, then ${\rm H}^0(\mathbf{Q}_\ell,\bar{\rho}_{\boldsymbol{f}})=\{0\}$, \end{itemize} and the assumption that $\bar{\rho}_{{\boldsymbol{f}}}$ is surjective and $\boldsymbol{a}_p\not\equiv\pm{1}\pmod{p}$. \end{rem} \subsection{Iwasawa--Greenberg main conjectures} Now we can upgrade the main result of \cite{wanIMC}, a divisibility towards the Iwasawa--Greenberg main conjecture for $\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K})$, to a proof of the full equality. \begin{thm}\label{thm:3-IMC-BDP} Assume hypotheses {\rm (h0)--(h6)}. Then $X_{\emptyset,0}(\mathcal{K}_\infty,A_{\boldsymbol{f}})$ is $\mathbb{I}[[\Gamma_\mathcal{K}]]$-torsion and \begin{equation}\label{eq:3-IMC-bdp} {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}]]}(X_{\emptyset,0}(\mathcal{K}_\infty,A_{\boldsymbol{f}}))\cdot\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}]]=(\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K}))\nonumber \end{equation} as ideals in $\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}]]$.
\end{thm} \begin{proof} Clearly (see \cite[Lem.~1.2]{Rubin-ES}), it suffices to show that the twisted module $X_{\emptyset,0}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^\dagger)$ is $\mathbb{I}[[\Gamma_\mathcal{K}]]$-torsion, with characteristic ideal generated by ${\rm tw}_{\Theta^{-1}}(\mathscr{L}_\mathfrak{p}({\boldsymbol{f}}/\mathcal{K}))$ after extension of scalars to $\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}]]$. This in turn can be shown by a similar argument to that in the proof of Theorem~\ref{thm:HP-MC}, so we shall be rather brief. Taking $\phi\in\mathcal{X}_a(\mathbb{I})$ such that ${\boldsymbol{f}}_\phi$ is the ordinary $p$-stabilization of a newform $f\in S_2(\Gamma_0(N))$, we deduce that $X_{\emptyset,0}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^{\dagger,{\rm ac}})$ is $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-torsion and that the equality as ideals in $\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}^{\rm ac}]]$ \begin{equation}\label{eq:BDP-IMC} {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}(X_{\emptyset,0}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^{\dagger,{\rm ac}}))\cdot\mathbb{I}^{\rm ur}[[\Gamma_\mathcal{K}^{\rm ac}]]=(\mathscr{L}_\mathfrak{p}^{\tt BDP}({\boldsymbol{f}}/\mathcal{K})^2) \end{equation} holds by applying Lemma~\ref{lem:3.2} to the combination of the divisibility in \cite[Thm.~1.1]{wanIMC} (projected under $\Gamma_\mathcal{K}\twoheadrightarrow\Gamma_\mathcal{K}^{\rm ac}$) with the equality in \cite[Thm.~3.4]{cas-BF} for $f={\boldsymbol{f}}_\phi$.
The $\mathbb{I}[[\Gamma_\mathcal{K}]]$-torsionness of $X_{\emptyset,0}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^\dagger)$ then follows from that of $X_{\emptyset,0}(\mathcal{K}_\infty^{\rm ac},A_{\boldsymbol{f}}^\dagger)$ over $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$, and using the comparison of $p$-adic $L$-functions in Corollary~\ref{cor:wan-bdp}, the three-variable divisibility in \cite[Thm.~1.1]{wanIMC} combined with the equality (\ref{eq:BDP-IMC}) yields the desired three-variable equality by another application of Lemma~\ref{lem:3.2}. \end{proof} We can now deduce from Theorem~\ref{thm:3-IMC-BDP} the proof of Theorem~A in the Introduction. \begin{cor}\label{thm:3-IMC} Assume hypotheses {\rm (h0)--(h6)}. Then $X_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}})$ is $\mathbb{I}[[\Gamma_\mathcal{K}]]$-torsion, and \begin{equation}\label{eq:3-IMC} {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}]]}(X_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}}))=(L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K}))\nonumber \end{equation} as ideals in $\mathbb{I}[[\Gamma_\mathcal{K}]]\otimes_{\mathbf{Z}_p}\mathbf{Q}_p$. \end{cor} \begin{proof} As in the proof of Theorem~\ref{thm:3-IMC-BDP}, it suffices to show that the twisted module $X_{\tt Gr}(\mathcal{K}_\infty,A_{\boldsymbol{f}}^\dagger)$ is $\mathbb{I}[[\Gamma_\mathcal{K}]]$-torsion with characteristic ideal generated by ${\rm tw}_{\Theta^{-1}}(L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K}))$, which by the equivalence between main conjectures in Theorem~\ref{thm:2-varIMC}, follows from Theorem~\ref{thm:3-IMC-BDP}. \end{proof} \subsection{Greenberg's nonvanishing conjecture for derivatives} \label{sec:appl-greenberg} As in the Introduction, let $-w\in\{\pm{1}\}$ be the generic sign in the functional equation of the $p$-adic $L$-functions $L_p^{\tt MTT}({\boldsymbol{f}}_\phi,s)$ for varying $\phi\in\mathcal{X}_a^o(\mathbb{I})$.
By \cite[Cor.~3.4.3 and Eq.~(21)]{howard-invmath}, Howard's horizontal nonvanishing conjecture implies that \[ {\rm rank}_{\mathbb{I}}\;{\rm Sel}_{\tt Gr}(\mathbf{Q},T_{{\boldsymbol{f}}}^\dagger)=\left\{ \begin{array}{ll} 1&\textrm{if $w=1$,}\\ 0&\textrm{if $w=-1$.} \end{array} \right. \] In the case $w=-1$, a result in the converse direction follows from \cite{SU}: \begin{thm}[Skinner--Urban]\label{thm:Gr+1} Assume that: \begin{itemize} \item $\bar{\rho}_{{\boldsymbol{f}}}$ is irreducible and $p$-distinguished, \item ${\boldsymbol{f}}$ has trivial tame character, \item there is a prime $\ell\Vert N$ such that $\bar{\rho}_{{\boldsymbol{f}}}$ is ramified at $\ell$. \end{itemize} If ${\rm Sel}_{\tt Gr}(\mathbf Q,T_{{\boldsymbol{f}}}^\dagger)$ is $\mathbb{I}$-torsion, then $L({\boldsymbol{f}}_\phi,k_\phi/2)\neq 0$ for all but finitely many $\phi\in\mathcal{X}_a^o(\mathbb{I})$. \end{thm} \begin{proof} Since the $\mathbb{I}$-modules ${\rm Sel}_{\tt Gr}(\mathbf{Q},T_{{\boldsymbol{f}}}^\dagger)$ and $X_{\tt Gr}(\mathbf{Q},A_{{\boldsymbol{f}}}^\dagger)$ have the same rank by Lemma~\ref{lem:eq-ranks}, our hypothesis implies that $X_{\tt Gr}(\mathbf{Q},A_{{\boldsymbol{f}}}^\dagger)$ is $\mathbb{I}$-torsion. Thus in particular ${\rm Sel}_{\tt Gr}(\mathbf{Q},A_{{\boldsymbol{f}}_\phi}(1-k_\phi/2))$ is finite for all but finitely many $\phi$ as in the statement, and so the result follows from \cite[Thm.~3.6.13]{SU}. \end{proof} Our application to Greenberg's nonvanishing conjecture (in the case $w=1$) will build on an $\mathbb{I}$-adic Gross--Zagier formula for the big Heegner point $\mathfrak{Z}_0$. In fact, we shall prove a formula of this type for the $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-adic family $\mathfrak{Z}_\infty$, and deduce the result for $\mathfrak{Z}_0$ by specialization at the trivial character.
Define the cyclotomic $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-adic height pairing \begin{equation}\label{eq:lambda-ht} \langle,\rangle_{\mathcal{K}^{\rm ac}_\infty,\mathbb{I}}^{{\rm cyc}}:{\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}}) \otimes_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}{\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})^\iota \rightarrow\mathcal{J}^{\rm cyc}\otimes_{\mathbb{I}}\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]\otimes_\mathbb{I} F_\mathbb{I} \end{equation} by \[ \langle a_\infty,b_\infty\rangle^{\rm cyc}_{\mathcal{K}_\infty^{\rm ac},\mathbb{I}}= \varprojlim_n\sum_{\sigma\in{\rm Gal}(\mathcal{K}_n^{\rm ac}/\mathcal{K})}\langle a_n,b_n^\sigma\rangle^{\rm cyc}_{\mathcal{K}_n^{\rm ac},\mathbb{I}}\cdot\sigma \] (using the fact that the $\mathbb{I}$-adic height pairings $\langle,\rangle_{\mathcal{K}_n^{\rm ac},\mathbb{I}}^{\rm cyc}$ have denominators that are bounded independently of $n$), and let the cyclotomic regulator $\mathcal{R}_{\rm cyc}\subset\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]\otimes_\mathbb{I} F_\mathbb{I}$ be the characteristic ideal of the cokernel of $(\ref{eq:lambda-ht})$ (after dividing by the image of $(\gamma_{\rm cyc}-1)$ in $\mathcal{J}^{\rm cyc}$). Recall that since we assume that $\mathcal{K}$ satisfies the generalized Heegner hypothesis (\ref{ass:gen-H}), the constant term $L_{p,0}^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}$ in the expansion $(\ref{eq:2-exp})$ vanishes by Proposition~\ref{thm:hida-1}. The next result provides a first interpretation of the linear term $L_{p,1}^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}$. \begin{thm}\label{thm:3.1.5} Assume hypotheses {\rm (h0)--(h6)}, and denote by $\mathcal{X}^{}_{\rm tors}$ the characteristic ideal of $X_{\tt Gr}(\mathcal{K}_\infty^{\rm ac},A_{{\boldsymbol{f}}}^\dagger)_{\rm tors}$.
Then \[ \mathcal{R}_{\rm cyc}^{}\cdot\mathcal{X}_{\rm tors} =(L_{p,1}^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}) \] as ideals in $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]\otimes_{\mathbb{I}}F_\mathbb{I}$. \end{thm} \begin{proof} The height formula of Theorem~\ref{thm:rubin-ht} and Lemma~\ref{lem:3.1.1} immediately yield the equality \begin{equation}\label{eq:cor-ht} \mathcal{R}_{\rm cyc}\cdot {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}\biggl(\frac{{\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})}{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]\cdot\mathcal{BF}^{\dagger,{\rm ac}}}\biggr) =(L_{p,1}^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}})\cdot\eta^\iota, \end{equation} where $\eta\subset\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$ is the characteristic ideal of ${\rm H}^1_{\tt Gr}(\mathcal{K}_{\overline{\mathfrak{p}}},\mathbf{T}^{\dagger,{\rm ac}})/{\rm loc}_{\overline{\mathfrak{p}}}({\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}}))$. We shall argue below that $\eta\neq 0$. Global duality yields the exact sequence \begin{equation}\label{eq:PT2} 0\rightarrow\frac{{\rm H}^1_{\tt Gr}(\mathcal{K}_{\mathfrak{p}},\mathbf{T}^{\dagger,{\rm ac}})} {{\rm loc}_{\mathfrak{p}}({\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}}))} \rightarrow X_{\emptyset,{\tt Gr}}(\mathcal{K}_\infty^{\rm ac},A_{{\boldsymbol{f}}}^\dagger) \rightarrow X_{\tt Gr}(\mathcal{K}_\infty^{\rm ac},A_{{\boldsymbol{f}}}^\dagger)\rightarrow 0. \end{equation} Note that the left-most term in $(\ref{eq:PT2})$ is $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-torsion, since by Corollary~\ref{cor:str-Gr} the image of the map ${\rm loc}_{\mathfrak{p}}:{\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})\rightarrow {\rm H}^1_{\tt Gr}(\mathcal{K}_\mathfrak{p},\mathbf{T}^{\dagger,{\rm ac}})$ is nonzero and the target has $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-rank one.
By Theorem~\ref{thm:HP-MC}, it follows that the middle term has $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-rank one, and by the action of complex conjugation the same is true for $X_{{\tt Gr},\emptyset}(\mathcal{K}_\infty^{\rm ac},A_{{\boldsymbol{f}}}^\dagger)$. Thus the nonvanishing of $\eta$ follows from the analogue of $(\ref{eq:PT2})$ for the prime $\overline{\mathfrak{p}}$ (see (\ref{eq:PT2bar}) below). By Lemma~\ref{lem:str-rel} the above also shows that $X_{{\tt Gr},0}(\mathcal{K}_\infty^{\rm ac},A_{{\boldsymbol{f}}}^\dagger)$ is $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-torsion, and counting ranks in the exact sequence \begin{align*} 0\rightarrow {\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})\rightarrow{\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})\rightarrow&\frac{{\rm H}^1(\mathcal{K}_{\overline{\mathfrak{p}}},\mathbf{T}^{\dagger,{\rm ac}})}{{\rm H}^1_{\tt Gr}(\mathcal{K}_{\overline{\mathfrak{p}}},\mathbf{T}^{\dagger,{\rm ac}})}\\ &\rightarrow X_{\tt Gr}(\mathcal{K}_\infty^{\rm ac},A_{{\boldsymbol{f}}}^\dagger) \rightarrow X_{{\tt Gr},0}(\mathcal{K}_\infty^{\rm ac},A_{{\boldsymbol{f}}}^\dagger)\rightarrow 0, \end{align*} we conclude that the first two terms in this sequence have $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-rank one. Since the quotient ${\rm H}^1(\mathcal{K}_{\overline{\mathfrak{p}}},\mathbf{T}^{\dagger,{\rm ac}})/{\rm H}^1_{\tt Gr}(\mathcal{K}_{\overline{\mathfrak{p}}},\mathbf{T}^{\dagger,{\rm ac}})$ has no $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-torsion, it follows that \begin{equation}\label{eq:Gr-rel} {\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})= {\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}}).
\end{equation} Taking $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-torsion in the analogue of $(\ref{eq:PT2})$ for $\overline{\mathfrak{p}}$, that is \begin{equation}\label{eq:PT2bar} 0\rightarrow\frac{{\rm H}^1_{\tt Gr}(\mathcal{K}_{\overline{\mathfrak{p}}},\mathbf{T}^{\dagger,{\rm ac}})} {{\rm loc}_{\overline{\mathfrak{p}}}({\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}}))} \rightarrow X_{{\tt Gr},\emptyset}(\mathcal{K}_\infty^{\rm ac},A_{{\boldsymbol{f}}}^\dagger) \rightarrow X_{\tt Gr}(\mathcal{K}_\infty^{\rm ac},A_{{\boldsymbol{f}}}^\dagger)\rightarrow 0, \end{equation} and applying Lemma~\ref{lem:str-rel} and the ``functional equation'' $\mathcal{X}_{\rm tors}^\iota=\mathcal{X}_{\rm tors}$ of \cite[p.~1464]{howard-PhD-I} we obtain \begin{equation}\label{eq:takechar} \begin{split} \eta^\iota\cdot\mathcal{X}_{\rm tors} &={\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}(X_{{\tt Gr},0}(\mathcal{K}_\infty^{\rm ac},A_{{\boldsymbol{f}}}^\dagger)). \end{split} \end{equation} On the other hand, by the equivalence (i)'$\Longleftrightarrow$(ii)' in Theorem~\ref{thm:2-varIMC}, the equality (\ref{eq:BDP-IMC}) implies that \[ {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}(X_{{\tt Gr},0}(\mathcal{K}_\infty^{\rm ac},A_{{\boldsymbol{f}}}^\dagger))={\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}\biggl(\frac{{\rm Sel}_{{\tt Gr},\emptyset}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})}{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]\cdot\mathcal{BF}^{\dagger,{\rm ac}}}\biggr) \] as ideals in $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]\otimes_{\mathbf{Z}_p}\mathbf{Q}_p$, and so the result follows from the combination of (\ref{eq:cor-ht}), (\ref{eq:Gr-rel}), and (\ref{eq:takechar}). \end{proof} The aforementioned $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-adic Gross--Zagier formula for $\mathfrak{Z}_\infty$ is the following. \begin{cor}\label{cor:I-GZ} Assume hypotheses {\rm (h0)--(h6)}.
Then we have the equality \[ (L_{p,1}^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}})=(\langle\mathfrak{Z}_\infty,\mathfrak{Z}_\infty\rangle_{\mathcal{K}_\infty^{{\rm ac}},\mathbb{I}}^{\rm cyc}) \] as ideals of $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]\otimes_{\mathbb{I}}F_\mathbb{I}$. \end{cor} \begin{proof} Since ${\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})$ has $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-rank one by Theorem~\ref{thm:HP-MC} and $\mathfrak{Z}_\infty$ is not $\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]$-torsion, the regulator $\mathcal{R}_{\rm cyc}$ of $(\ref{eq:lambda-ht})$ satisfies \[ (\langle\mathfrak{Z}_\infty,\mathfrak{Z}_\infty\rangle_{\mathcal{K}_\infty^{{\rm ac}},\mathbb{I}}^{\rm cyc})=\mathcal{R}_{\rm cyc}\cdot {\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}\biggl(\frac{{\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})}{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]\cdot\mathfrak{Z}_\infty}\biggr)\cdot{\rm Char}_{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]}\biggl(\frac{{\rm Sel}_{\tt Gr}(\mathcal{K},\mathbf{T}^{\dagger,{\rm ac}})}{\mathbb{I}[[\Gamma_\mathcal{K}^{\rm ac}]]\cdot\mathfrak{Z}_\infty}\biggr)^\iota. \] By the ``functional equation'' of \cite[p.~1464]{howard-PhD-I}, the result thus follows from the combination of Theorem~\ref{thm:3.1.5} and the equality of characteristic ideals in Theorem~\ref{thm:HP-MC}. \end{proof} Now we can conclude the proof of our application to Greenberg's nonvanishing conjecture. \begin{thm}\label{thm:Gr-1} Assume that: \begin{itemize} \item[(i)] $\mathbb{I}$ is regular, \item[(ii)] $\bar{\rho}_{{\boldsymbol{f}}}$ is irreducible, \item[(iii)] some specialization ${\boldsymbol{f}}_\phi$ is the $p$-stabilization of a newform $f\in S_2(\Gamma_0(N))$, \item[(iv)] $N$ is squarefree, \item[(v)] there are at least two primes $\ell\vert N$ at which $\bar{\rho}_{{\boldsymbol{f}}}$ is ramified.
\end{itemize} If ${\rm Sel}_{\tt Gr}(\mathbf Q,T_{{\boldsymbol{f}}}^\dagger)$ has $\mathbb{I}$-rank one and the $\mathbb I$-adic height pairing $\langle,\rangle_{\mathbf Q,\mathbb I}^{\rm cyc}$ is non-degenerate, then \begin{equation}\label{eq:gen} \frac{d}{ds}L_p^{\tt MTT}({\boldsymbol{f}}_\phi,s)\biggr\vert_{s=k_\phi/2}\neq 0, \nonumber \end{equation} for all but finitely many $\phi\in\mathcal{X}_a^o(\mathbb{I})$. \end{thm} \begin{proof} Let $\phi\in\mathcal{X}_a^o(\mathbb{I})$ be such that ${\boldsymbol{f}}_\phi$ is the ordinary $p$-stabilization of a newform $f\in S_2(\Gamma_0(N))$. Let $\ell_1$ and $\ell_2$ be two distinct primes as in hypothesis (v), and choose an imaginary quadratic field $\mathcal{K}$ such that the following hold: \begin{itemize} \item $\ell_1$ and $\ell_2$ are inert in $\mathcal{K}$, \item every prime dividing $N^+:=N/\ell_1\ell_2$ splits in $\mathcal{K}$, \item $p$ splits in $\mathcal{K}$, \item $L(f\otimes\epsilon_\mathcal{K},1)\neq 0$, where $\epsilon_\mathcal{K}$ is the quadratic character corresponding to $\mathcal{K}$. \end{itemize} Note that the existence of $\mathcal{K}$ is ensured by \cite{FH}, and that, so chosen, $\mathcal{K}$ satisfies (\ref{ass:gen-H}) with $N^-=\ell_1\ell_2$. Now, the action of a complex conjugation $\tau$ combined with the restriction map induces an isomorphism \begin{equation}\label{eq:dec} {\rm Sel}_{\tt Gr}(\mathcal{K},T_{{\boldsymbol{f}}}^\dagger)\simeq{\rm Sel}_{\tt Gr}(\mathbf Q,T_{{\boldsymbol{f}}}^\dagger)\oplus{\rm Sel}_{\tt Gr}(\mathbf Q,T_{{\boldsymbol{f}}}^\dagger\otimes\epsilon_\mathcal{K}), \end{equation} where the first and second summands are identified with the $+$ and $-$ eigenspaces for the action of $\tau$, respectively (see \cite[Lem.~3.1.5]{SU}).
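Concretely, the decomposition $(\ref{eq:dec})$ arises from the idempotents attached to $\tau$, which are available since $p>3$ (a standard observation, sketched here for convenience): for any ${\rm Gal}(\mathcal{K}/\mathbf{Q})$-stable module $M$ on which $2$ acts invertibly,
\[
M=e^+M\oplus e^-M,\qquad e^{\pm}:=\frac{1\pm\tau}{2},
\]
and for $M={\rm Sel}_{\tt Gr}(\mathcal{K},T_{{\boldsymbol{f}}}^\dagger)$ the restriction map identifies $e^{+}M$ and $e^{-}M$ with the Selmer groups over $\mathbf{Q}$ of $T_{{\boldsymbol{f}}}^\dagger$ and of its twist $T_{{\boldsymbol{f}}}^\dagger\otimes\epsilon_\mathcal{K}$, respectively.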
By Kato's work \cite{Kato295}, the nonvanishing of $L(f\otimes\epsilon_\mathcal{K},1)$ implies that ${\rm Sel}(\mathbf{Q},T_{f}\otimes\epsilon_\mathcal{K})$ is finite, and so by the control theorem for ${\rm Sel}_{\tt Gr}(\mathbf Q,T_{{\boldsymbol{f}}}^\dagger\otimes\epsilon_\mathcal{K})$ (see the exact sequence in \cite[Cor.~3.4.3]{howard-invmath}) we conclude that ${\rm Sel}_{\tt Gr}(\mathbf Q,T_{{\boldsymbol{f}}}^\dagger\otimes\epsilon_\mathcal{K})$ is $\mathbb{I}$-torsion, and so \[ {\rm rank}_\mathbb{I}\;{\rm Sel}_{\tt Gr}(\mathcal{K},T_{{\boldsymbol{f}}}^\dagger)={\rm rank}_\mathbb{I}\;{\rm Sel}_{\tt Gr}(\mathbf Q,T_{{\boldsymbol{f}}}^\dagger)=1 \] by (\ref{eq:dec}) and our assumption. In particular, since by (i)--(iv) we are assuming (h0)--(h3), and (h4)--(h6) hold by our choice of $\mathcal{K}$, Theorem~\ref{thm:converse-How} yields the non-triviality of the class $\mathfrak{Z}_0$, and so the element $\langle\mathfrak{Z}_0,\mathfrak{Z}_0\rangle_{\mathcal{K},\mathbb{I}}^{\rm cyc}\in\mathbb{I}$ is non-zero by our hypothesis of non-degeneracy. Let $L_p^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{\rm cyc}$ be the image of ${\rm tw}_{\Theta^{-1}}(L_p^{\tt Hi}({\boldsymbol{f}}/\mathcal{K}))$ under the natural projection $\mathbb{I}[[\Gamma_\mathcal{K}]]\twoheadrightarrow\mathbb{I}[[\Gamma_\mathcal{K}^{\rm cyc}]]$. By Theorem~\ref{thm:cyc-res}, for every $\phi\in\mathcal{X}_a^o(\mathbb{I})$ we have the factorization \begin{equation}\label{eq:factor-cyc} \phi(L_p^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{\rm cyc})={\rm tw}_{\Theta_\phi^{-1}}(L_p^{\tt MTT}({\boldsymbol{f}}_\phi))\cdot {\rm tw}_{\Theta_\phi^{-1}}(L_p^{\tt MTT}({\boldsymbol{f}}_\phi\otimes\epsilon_\mathcal{K})) \end{equation} up to a unit in $\phi(\mathbb{I})[[\Gamma^{\rm cyc}]]^\times$.
Expand \begin{align*} \phi(L_p^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{\rm cyc})&=L_{p,0}^{\tt Hi}({\boldsymbol{f}}_\phi^\dagger/\mathcal{K})+L_{p,1}^{\tt Hi}({\boldsymbol{f}}_\phi^\dagger/\mathcal{K})\cdot(\gamma_{\rm cyc}-1)+\cdots,\\ {\rm tw}_{\Theta_\phi^{-1}}(L_p^{\tt MTT}({\boldsymbol{f}}_\phi))&=L_{p,0}^{\tt MTT}({\boldsymbol{f}}_\phi^\dagger)+L_{p,1}^{\tt MTT}({\boldsymbol{f}}_\phi^\dagger)\cdot(\gamma_{\rm cyc}-1)+\cdots,\\ {\rm tw}_{\Theta_\phi^{-1}}(L_p^{\tt MTT}({\boldsymbol{f}}_\phi\otimes\epsilon_\mathcal{K}))&=L_{p,0}^{\tt MTT}({\boldsymbol{f}}_\phi^\dagger\otimes\epsilon_\mathcal{K})+L_{p,1}^{\tt MTT}({\boldsymbol{f}}_\phi^\dagger\otimes\epsilon_\mathcal{K})\cdot(\gamma_{\rm cyc}-1)+\cdots, \end{align*} as power series in $\gamma_{\rm cyc}-1$, and note that by the $p$-adic Mellin transform we have \[ \frac{d}{ds}L_p^{\tt MTT}({\boldsymbol{f}}_\phi,s)\biggr\vert_{s=k_\phi/2}\neq 0\quad\Longleftrightarrow\quad L_{p,1}^{\tt MTT}({\boldsymbol{f}}^\dagger_\phi)\neq 0 \] (see \cite[(24)]{venerucci-p-conv}). The constant term $L_{p,0}^{\tt Hi}({\boldsymbol{f}}_\phi^\dagger/\mathcal{K})\in\mathbb{I}$ vanishes by Proposition~\ref{thm:hida-1}, and so the factorization $(\ref{eq:factor-cyc})$ yields the following equality up to a unit in $\mathcal{O}_\phi^\times$: \begin{equation}\label{eq:desc} L_{p,1}^{\tt Hi}({\boldsymbol{f}}^\dagger_\phi/\mathcal{K})=L_{p,1}^{\tt MTT}({\boldsymbol{f}}_\phi^\dagger)\cdot L_{p,0}^{\tt MTT}({\boldsymbol{f}}^\dagger_\phi\otimes\epsilon_\mathcal{K}).
\end{equation} Finally, since by definition $L_{p,1}^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})\in\mathbb{I}$ agrees with the image of the linear term $L_{p,1}^{\tt Hi}({\boldsymbol{f}}^\dagger/\mathcal{K})_{{\rm ac}}$ in $(\ref{eq:2-exp})$ under the augmentation map $\mathbb{I}[[\Gamma_\mathcal{K}^{{\rm ac}}]]\rightarrow\mathbb{I}$, from Corollary~\ref{cor:I-GZ} specialized at the trivial character of $\Gamma_\mathcal{K}^{\rm ac}$ and $(\ref{eq:desc})$ we see that \begin{align*} \langle\mathfrak{Z}_0,\mathfrak{Z}_0\rangle^{\rm cyc}_{\mathcal{K},\mathbb I}\neq 0\quad&\Longrightarrow\quad L_{p,1}^{\tt Hi}({\boldsymbol{f}}_\phi^\dagger/\mathcal{K})\neq 0,\quad\textrm{for almost all $\phi\in\mathcal{X}_a^o(\mathbb{I})$}\\ &\Longrightarrow\quad L_{p,1}^{\tt MTT}({\boldsymbol{f}}^\dagger_\phi)\neq 0,\quad\quad\textrm{for almost all $\phi\in\mathcal{X}_a^o(\mathbb{I})$}, \end{align*} concluding the proof of Theorem~\ref{thm:Gr-1}. \end{proof} \end{document}
\begin{document} \mainmatter \title{Interpretation of \textit{NDTM} in the definition of \textit{NP}} \titlerunning{Interpretation of \textit{NDTM} in the definition of \textit{NP}} \author{JianMing ZHOU, Yu LI\inst{1,2}} \authorrunning{JianMing ZHOU, Yu LI} \institute{MIS, Universit\'e de Picardie Jules Verne, Amiens, France, [email protected] \and Institute of Computational Theory and Application, Huazhong University of Science and Technology, Wuhan, China } \toctitle{Lecture Notes in Computer Science} \tocauthor{Authors' Instructions} \maketitle \begin{abstract} In this paper, we interpret the \textit{NDTM} (NonDeterministic Turing Machine) used to define \textit{NP} by tracing back to the source of \textit{NP}. Originally, \textit{NP} was defined in Cook's theorem as the class of problems solvable in polynomial time by an \textit{NDTM}, where the \textit{NDTM} was represented as a \textit{Query Machine} whose essence is an \textit{Oracle}. Later, a model consisting of a guessing module and a checking module was proposed to replace the \textit{NDTM}. This model, whose essence is a \textit{TM}, differs fundamentally from the \textit{NDTM} of essence \textit{Oracle}, but people still use the term \textit{NDTM} to designate this model, which leads to a disguised displacement of \textit{NDTM} and produces the verifier-based definition of \textit{NP} as the class of problems verifiable in polynomial time by a \textit{TM} (Turing Machine). This verifier-based definition has since been accepted as the standard definition of \textit{NP}, from which comes the famous equivalence of the two definitions of \textit{NP}. Since then, the notion of \textit{nondeterminism} has been lost from \textit{NP}, which causes ambiguities in understanding \textit{NP} and, in turn, great difficulties in solving the \textit{P versus NP} problem.
Since \textit{NP} is originally related to the \textit{Oracle}, which comes from Turing's work on \textit{Computability}, it seems quite necessary to trace back to Turing's work and further clarify the issue of \textit{NP}. \keywords{ Computability, Oracle, NDTM, TM, P versus NP, Cook's theorem } \end{abstract} \section{Introduction} The \textit{P versus NP} problem was selected as one of the seven Millennium Prize Problems by the Clay Mathematics Institute in 2000 \cite{cook2}. This problem goes far beyond the field of computer science and penetrates into mathematics, mathematical logic, and artificial intelligence, and has even become a basic problem in philosophy. In introducing the second poll about \textit{P versus NP} conducted by Gasarch in 2012 \cite{william}, Hemaspaandra said: \textit{I hope that people in the distant future will look at these four articles to help get a sense of people’s thoughts back in the dark ages when P versus NP had not yet been resolved.} $P$ stands for \textit{Polynomial time}, meaning that a problem in $P$ is solvable by a deterministic Turing machine in polynomial time. Concerning the definition of $NP$, the situation is much more complex: \textit{NP} stands for \textit{Nondeterministic Polynomial time}, meaning that a problem in \textit{NP} is solvable by a nondeterministic Turing machine in polynomial time \cite{cook1}\cite{cook2}. However, this solver-based definition is academically considered equivalent to another, verifier-based definition of \textit{NP} \cite{np}: \textit{The two definitions of NP as the class of problems solvable by a nondeterministic Turing machine in polynomial time and the class of problems verifiable by a deterministic Turing machine in polynomial time are equivalent.
The proof is described by many textbooks, for example Sipser's Introduction to the Theory of Computation, section 7.3.}\\ Due to this equivalence, the verifier-based definition has been accepted as the standard definition of \textit{NP}, and the \textit{P versus NP} problem is then stated as: \begin{itemize} \item $P \subseteq NP$, since a problem solvable by a \textit{TM} in polynomial time is verifiable by a \textit{TM} in polynomial time. \item $NP = P$? That is, is every problem verifiable by a \textit{TM} in polynomial time also solvable by a \textit{TM} in polynomial time? \end{itemize} In this paper, by tracing the source of \textit{NP}, we investigate the \textit{NDTM} used to define \textit{NP} and reveal the disguised displacement of \textit{NDTM}, which produces the verifier-based definition of \textit{NP} as well as the equivalence of the two definitions of \textit{NP}. The paper is organized as follows: we return to the origin of \textit{NDTM} in Section 2, examine its change in Section 3, analyze the proof of the equivalence of the two definitions of \textit{NP} in Section 4, and conclude the paper in Section 5. \section{ \textit{NDTM} as \textit{Oracle}} \textit{NDTM} was formally used to define \textit{NP} in Cook's paper entitled \textit{The complexity of theorem proving procedures} \cite{cook1}. \subsection{ \textit{NDTM} in Cook's theorem} Cook's theorem was originally stated as \cite{cook1}: \textbf{Theorem 1} \textit{If a set $S$ of strings is accepted by some nondeterministic Turing machine within polynomial time, then $S$ is $P$-reducible to \{DNF tautologies\}}. \\ Here $S$ refers to the set of instances of a problem that have solutions, which later becomes the solver-based definition of $NP$ in terms of \textit{language} \cite{cook2}: \textit{A problem in NP is a language accepted by some nondeterministic Turing machine within polynomial time}.
The set \textit{\{DNF tautologies $ \neg A(w)$\}} can be transformed into \textit{\{CNF satisfiable formulas $A(w)$\}}, so it corresponds to the \textit{SAT} problem. \\ \textbf{Theorem 1} is nowadays expressed as \cite{np}: \textbf{Cook's theorem} \textit{A problem in NP can be reduced to the SAT problem by a (deterministic) Turing machine in polynomial time.} \subsection{Analysis of \textit{Query Machine} } The main idea of the proof of \textbf{Theorem 1} is to construct $A(w)$ to express that a set $S$ of strings is accepted by an \textit{NDTM} in polynomial time \cite{cook1}: \textit{Suppose a nondeterministic Turing machine $M$ accepts a set $S$ of strings within time $Q(n)$, where $Q(n)$ is a polynomial. Given an input $w$ for $M$, we will construct a propositional formula $A(w)$ in conjunctive normal form ($CNF$) such that $A(w)$ is satisfiable iff $M$ accepts $w$. Thus $\neg A(w)$ is easily put in disjunctive normal form (using De Morgan’s laws), and $\neg A(w)$ is a tautology if and only if $w \not \in S$. Since the whole construction can be carried out in time bounded by a polynomial in $\mid w \mid$ (the length of $w$), the theorem will be proved.}\\ This \textit{NDTM} is then represented as a \textit{Query Machine} \cite{cook1}: \textit{By reduced we mean, roughly speaking, that if tautologyhood could be decided instantly (by an "oracle") then these problems could be decided in polynomial time. In order to make this notion precise, we introduce query machines, which are like Turing machines with oracles in [1].} \\ This \textit{query machine} is described as \cite{cook1}: \textit{A query machine is a multitape Turing machine with a distinguished tape called the query tape, and three distinguished states called the $query~state$, $yes~state$, and $no~state$, respectively.
If $M$ is a query machine and $T$ is a set of strings, then a $T$-computation of $M$ is a computation of $M$ in which initially $M$ is in the initial state and has an input string $w$ on its input tape, and each time $M$ assumes the query state there is a string $u$ on the query tape, and the next state $M$ assumes is the yes state if $u \in T$ and the no state if $u \not \in T$. We think of an 'oracle', which knows $T$, placing $M$ in the yes state or no state}. \\ \begin{figure} \caption{A computation of \textit{NDTM}} \label{fig5} \end{figure} The set $T$ of strings is explained as \cite{cook1}: \textit{Definition. A set S of strings is P-reducible (P for polynomial) to a set T of strings iff there is some query machine M and a polynomial Q(n) such that for each input string w, the T-computation of M with input w halts within Q($\mid w \mid$) steps ($\mid w \mid$ is the length of w) and ends in an accepting state iff $w \in S$}. \textit{It is not hard to see that P-reducibility is a transitive relation. Thus the relation $E$ on sets of strings, given by $(S,T) \in E$ iff each of $S$ and T is P-reducible to the other, is an equivalence relation. The equivalence class containing a set S will be denoted by deg (S) (the polynomial degree of difficulty of S).}\\ We use the graph isomorphism problem cited in \cite{cook1} to help interpret the \textit{Query Machine}. \textbf{Example: Graph isomorphism problem} Given two finite undirected graphs $G_1$ and $G_2$, the problem consists in determining whether $G_1$ is isomorphic to $G_2$. An isomorphism of $G_1$ and $G_2$ is a bijection $f$ between the vertex sets of $G_1$ and $G_2$, $f : V(G_1) \rightarrow V(G_2)$, such that any two vertices $u$ and $v$ are adjacent in $G_1$ if and only if $f(u)$ and $f(v)$ are adjacent in $G_2$. In this case, a solution to an instance refers to an isomorphism between $G_1$ and $G_2$.\\ We give the following two instances.
\textit{Instance 1}: A pattern graph $G_{p1}=(V_{p1}, E_{p1})$ and a text graph $G_{t1}=(V_{t1}, E_{t1})$, \textit{Instance 2}: A pattern graph $G_{p2}=(V_{p2}, E_{p2})$ and a text graph $G_{t2}=(V_{t2}, E_{t2})$. \begin{figure} \caption{Instance 1} \label{fig1} \end{figure} \begin{figure} \caption{Instance 2} \label{fig2} \end{figure} For \textit{Instance 1}, $G_{p1}$ is isomorphic to $G_{t1}$, as there exists an isomorphism: $f(1)=a, f(2)=b, f(3)=c, f(4)=e, f(5)=d$; while for \textit{Instance 2}, $G_{p2}$ is not isomorphic to $G_{t2}$, as there is no isomorphism between $G_{p2}$ and $G_{t2}$. \\ Let us analyze how a \textit{query machine} $M$ accepts a set $S$ of strings in polynomial time (Fig.3). Initially, $M$ is in the initial state $q_0$ and has $w$ as input representing an instance of a problem. Then, $M$ assumes the query state $q_{Query}$, where there is a string $u$ representing $w$ as a $CNF$ formula. $u$ is taken as the input of an \textit{oracle}, and this \textit{oracle} instantly determines whether $u \in T$, that is, whether $u$ is satisfiable. Finally, according to the obtained reply, if $u \in T$ then the \textit{oracle} places $M$ in the yes state $q_Y$ and accepts $w$; while if $u \not \in T$ then the \textit{oracle} places $M$ in the no state $q_N$ and rejects $w$. For the graph isomorphism problem, $S$ refers to a set of strings that represents all instances that have solutions, for example, $S = \{\overline {G_{p1}}\ast\ast \overline {G_{t1}}, \ldots\}$. Note that $S$ does not contain $\overline {G_{p2}}\ast\ast \overline {G_{t2}}$, because Instance 2 has no solution. $T$ refers to the corresponding set of $CNF$ formulas that are satisfiable. $M$ accepts $w=\overline {G_{p1}}\ast\ast \overline {G_{t1}}$, but rejects $w=\overline {G_{p2}}\ast\ast \overline {G_{t2}}$.
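For concreteness, the yes/no judgement that the \textit{oracle} returns instantly can be contrasted with what a machine of essence \textit{Computability} must actually do: search over all bijections between the vertex sets. The following Python sketch is purely illustrative; the graphs path, relabel and star are hypothetical small instances, not the graphs of Figs. 1 and 2.

```python
from itertools import permutations

def is_isomorphic(edges1, edges2, n):
    """Decide isomorphism of two undirected graphs on vertices 0..n-1 by
    brute-force search over all bijections f: V(G1) -> V(G2)."""
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    if len(e1) != len(e2):
        return False
    for perm in permutations(range(n)):
        f = dict(zip(range(n), perm))
        # f is an isomorphism iff it maps the edge set of G1 onto that of G2
        if {frozenset({f[u], f[v]}) for u, v in edges1} == e2:
            return True
    return False

# Hypothetical 4-vertex instances (not the graphs of Figs. 1 and 2):
path = [(0, 1), (1, 2), (2, 3)]       # a path on 4 vertices
relabel = [(2, 0), (0, 3), (3, 1)]    # the same path with vertices renamed
star = [(0, 1), (0, 2), (0, 3)]       # a star on 4 vertices

print(is_isomorphic(path, relabel, 4))  # True: an isomorphism exists
print(is_isomorphic(path, star, 4))     # False: no bijection works
```

The exponential loop over all $n!$ permutations is exactly the work that the \textit{oracle} is imagined to bypass.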
\\ Therefore, to say that a \textit{query machine} accepts a set $S$ of strings in polynomial time is in fact to say that an \textit{oracle} accepts a set $S$ of strings in polynomial time. In other words, the essence of the \textit{NDTM} in Cook's theorem is \textit{Oracle}. \section{ \textit{NDTM} as \textit{TM} } However, the \textit{Oracle} is only a thought-experiment concept introduced by Turing in his doctoral dissertation with the intention of representing something opposed to the Turing Machine (\textit{TM}) of essence \textit{Computability} \cite{martin}\cite{turing}, so it cannot carry out any real computation. Therefore, later researchers proposed an \textit{NDTM} model to replace the \textit{NDTM} of essence \textit{Oracle}.\\ In Garey and Johnson's \textit{Computers and Intractability} \cite{garey}, this model is presented as: \textit{The NDTM model we will be using has exactly the same structure as a DTM (Deterministic Turing Machine), except that it is augmented with a guessing module having its own write-only head.} A computation of such a machine takes place in two distinct stages (see \cite{garey}, p. 30-31): \textit{The first stage is the "guessing" stage. Initially, the input string $x$ is written in tape squares 1 through $\mid x \mid $ (while all other squares are blank), the read-write head is scanning square 1, the write-only head is scanning square -1, and the finite state control is "inactive". The guessing module then directs the write-only head, one step at a time, either to write some symbol from $\Gamma$ in the tape square being scanned and move one square to the left, or to stop, at which point the guessing module becomes inactive and the finite state control is activated in state $q_0$. The choice of whether to remain active, and, if so, which symbol from $\Gamma$ to write, is made by the guessing module in a totally arbitrary manner.
Thus the guessing module can write any string from $\Gamma^*$ before it halts and, indeed, need never halt.} \textit{The "checking" stage begins when the finite state control is activated in state $q_0$. From this point on, the computation proceeds solely under the direction of the NDTM program according to exactly the same rules as for a DTM. The guessing module and its write-only head are no longer involved, having fulfilled their role by writing the guessed string on the tape. Of course, the guessed string can (and usually will) be examined during the checking stage. The computation ceases when and if the finite state control enters one of the two halt states (either $q_Y$ or $q_N$) and is said to be an accepting computation if it halts in state $q_Y$. All other computations, halting or not, are classed together simply as non-accepting computations. } \begin{figure} \caption{A computation of the \textit{NDTM}} \label{fig3} \end{figure} For a given instance $x$, the guessing module guesses a certificate $s$ for a solution, and then $s$ is verified by the checking module. If $s$ is a solution, the computation halts in state $q_Y$ and the machine can determine that $x$ has a solution. However, if $s$ is not a solution, the machine can determine neither that $x$ has no solution, nor that $x$ has a solution. In other words, the state $q_N$ in Fig.4 is \textit{nondeterministic}. For \textit{Instance 2}, if a certificate with $f(1)=a, f(2)=b, f(3)=c, f(4)=d, f(5)=e, f(6)=f$ is generated by the guessing module, and it is checked and found not to be a solution, then the machine can determine neither that \textit{Instance 2} has no solution, nor that \textit{Instance 2} has a solution. \\ This \textit{NDTM} is described in \cite{sipser} as: \textit{At any point in a computation the machine may proceed according to several possibilities. The computation of a nondeterministic Turing machine is a tree whose branches correspond to different possibilities for the machine.
If some branch of the computation leads to the accept state, the machine accepts its input.}\\ The essence of this \textit{NDTM} is \textit{TM}, which is confirmed in \cite{sipser}: \textit{ \textbf{Theorem 3.16} Every nondeterministic Turing machine has an equivalent deterministic Turing machine.\\ } Therefore, this \textit{NDTM} of essence \textit{TM} is completely different from the \textit{NDTM} of essence \textit{Oracle} in Fig.3. Unfortunately, people do not realize this fundamental difference, and still use the same term \textit{NDTM} to designate two different concepts. Consequently, \textit{TM} is confused with \textit{Oracle}, and this produces the famous equivalence of the two definitions of \textit{NP}. \section{Analysis of the equivalence of the two definitions of \textit{NP} } Let us analyze the proof described in Sipser's \textit{Introduction to the Theory of Computation} (section 7.3) \cite{sipser}: \subsection{Description of the proof} \textit{ \textbf{Theorem 7.20} A language is in $NP$ iff it is decided by some nondeterministic polynomial time Turing machine.} \textit{ \textbf{Proof idea}: We show how to convert a polynomial time verifier to an equivalent polynomial time NDTM and vice versa. The NDTM simulates the verifier by guessing the certificate. The verifier simulates the NDTM by using the accepting branch as the certificate. } \textit{ \textbf{Proof}: For the forward direction of this theorem, let $A$ be in \textit{NP} and show that $A$ is decided by a polynomial time $NDTM$ $N$. Let $V$ be the polynomial time verifier for $A$ that exists by the definition of \textit{NP}. Assume that $V$ is a $TM$ that runs in time $n^k$ and construct $N$ as follows. } \textit{ N = On input w of length n: \begin{enumerate} \item Nondeterministically select string $c$ of length at most $n^k$. \item Run $V$ on input $<w, c>$. \item If $V$ accepts, accept; otherwise, reject.
\end{enumerate} To prove the other direction of the theorem, assume that $A$ is decided by a polynomial time $NDTM$ N and construct a polynomial time verifier $V$ as follows: V = On input $<w,c>$, where $w$ and $c$ are strings: \begin{enumerate} \item Simulate $N$ on input $w$, treating each symbol of $c$ as a description of the nondeterministic choice to make at each step. \item If this branch of N's computation accepts, accept; otherwise, reject. \end{enumerate} } \subsection{Analysis of the proof } According to the proof idea, the proof is based on the equivalence between the \textit{verification} of a certificate $c$ by $V$ and the \textit{decision} for accepting an instance $w$ by the $NDTM$ $N$: \textit{ \begin{itemize} \item For the forward direction of the theorem: If V accepts, accept; otherwise, reject; \item For the other direction of the theorem: If this branch of N's computation accepts, accept; otherwise, reject. \end{itemize} } In fact, this equivalence rests on a premise: it holds only for the \textit{NDTM} of essence \textit{Oracle} in Fig.3, where the verifier $V$ checks the result obtained by the $Oracle$; there the \textit{verification} is necessarily consistent with the \textit{decision}, so the \textit{verification} and the \textit{decision} are equivalent. However, the situation is completely different with the \textit{NDTM} in Fig.4, that is, in this proof. \\ Let us look at the \textit{HAMPATH (Hamiltonian path)} problem given in \cite{sipser} to explain this \textit{NDTM} in the proof: \textit{ The following is a nondeterministic Turing machine (NDTM) that decides the HAMPATH problem in nondeterministic polynomial time. Recall that in Definition 7.9 we defined the time of a nondeterministic machine to be the time used by the longest computation branch.\\ \\ N = " On input $<G, s, t>$, where G is a directed graph with nodes s and t: \begin{enumerate} \item Write a list of m numbers, $p_1\dots p_m$, where m is the number of nodes in G.
Each number in the list is nondeterministically selected to be between 1 and m. \item Check for repetitions in the list. If any are found, reject. \item Check whether $s=p_1$ and $t=p_m$. If either fail, reject. \item For each i between 1 and m-1, check whether $(p_i, p_{i+1})$ is an edge of G. If any are not, reject. Otherwise, all tests have been passed, so accept. " \end{enumerate} To analyze this algorithm and verify that it runs in nondeterministic polynomial time, we examine each of its stages. In stage 1, the nondeterministic selection clearly runs in polynomial time. In stages 2 and 3, each part is a simple check, so together they run in polynomial time. Finally, stage 4 also clearly runs in polynomial time. Thus this algorithm runs in nondeterministic polynomial time. }\\ When $p_1\dots p_m$ is checked to be a Hamiltonian path, the corresponding \textit{NDTM} $N$ \textit{accepts} the instance $<G, s, t>$, and determines that the instance $<G, s, t>$ has a solution. But when $p_1\dots p_m$ is checked not to be a Hamiltonian path, \textit{NDTM} $N$ can neither determine that the instance $<G, s, t>$ has no solution, nor determine that the instance $<G, s, t>$ has a solution, because $p_1\dots p_m$ is just a certificate. In this case, the decision for accepting $<G, s, t>$ is \textit{nondeterministic}. In other words, the \textit{verification} is not consistent with the \textit{decision}. Therefore, the \textit{verification} cannot be used to define \textit{NP}, and the equivalence of the two definitions of NP does not hold! On the other hand, if one insists on the equivalence of the two definitions of \textit{NP}, then the logical error of disguised displacement is allowed to stand.
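The asymmetry just described can be made explicit by sketching stages 2-4 of the quoted algorithm as a check of a single guessed list. The Python sketch below is illustrative only; the instance G is hypothetical and not taken from \cite{sipser}. Acceptance of one certificate settles the instance positively, while rejection of one certificate settles nothing.

```python
def check_hampath_certificate(edges, s, t, p):
    """Stages 2-4 of the quoted NDTM for HAMPATH, applied to one guessed
    list p.  edges is the set of directed edges of G; s and t its nodes."""
    nodes = {v for e in edges for v in e}
    if len(set(p)) != len(p) or len(p) != len(nodes):  # stage 2: repetitions
        return False
    if p[0] != s or p[-1] != t:                        # stage 3: endpoints
        return False
    return all((p[i], p[i + 1]) in edges               # stage 4: edges of G
               for i in range(len(p) - 1))

# A hypothetical 4-node instance with Hamiltonian path 1 -> 2 -> 3 -> 4:
G = {(1, 2), (2, 3), (3, 4), (2, 4)}
print(check_hampath_certificate(G, 1, 4, [1, 2, 3, 4]))  # True: this branch accepts
print(check_hampath_certificate(G, 1, 4, [1, 2, 4, 3]))  # False: this branch rejects,
# yet nothing is thereby decided about the instance itself
```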
Consequently, the \textit{verification} of a \textit{TM} would be confused with the \textit{transcendent judgement} of an \textit{Oracle} and would finally replace the \textit{nondeterministic decision} about \textit{NP}, even though the \textit{nondeterministic decision} is the very essence of \textit{NP}. This is what actually happens in the theory of computational complexity. \section{Conclusion} In this paper, we revealed the disguised displacement of the concept \textit{NDTM} in the definition of \textit{NP}, which causes ambiguities in understanding \textit{NP} and, finally, great difficulties in solving the \textit{P versus NP} problem. Since \textit{NP} is originally related to the $Oracle$, which comes from Turing's work on \textit{Computability}, it seems quite necessary to trace back to Turing's work and further clarify the issue of \textit{NP} \cite{scott}. \end{document}
\begin{document} \title{Mixing of the symmetric beta-binomial splitting process on arbitrary~graphs} \begin{abstract}We study the mixing time of the symmetric beta-binomial splitting process on finite weighted connected graphs $G=(V,E,\{r_e\}_{e\in E})$ with vertex set $V$, edge set $E$ and positive edge-weights $r_e>0$ for $e\in E$. This is an interacting particle system with a fixed number of particles that updates through vertex-pairwise interactions which redistribute particles. We show that the mixing time of this process can be upper-bounded in terms of the maximal expected meeting time of two independent random walks on $G$. Our techniques involve using a process similar to the chameleon process invented by \cite{morris} to bound the mixing time of the exclusion process. \end{abstract} \section{Introduction} In the field of econophysics, interacting particle systems have been widely used to analyse the dynamics of wealth held by agents within a network, providing insights into the distribution and flow of money within the system~\citep{yakovenko2009colloquium}. These are typically characterised by pairwise interactions between agents (represented by vertices in a graph) resulting in a redistribution of the wealth they hold (represented by particles on the vertices). One class of such systems which has found applications in econophysics are reshuffling models in which each agent in an interacting pair receives a random fraction of the total wealth they hold. In the uniform reshuffling model introduced in~\citet{dragulescu2000statistical} and discussed rigorously in~\citet{lanchier2018rigorous}, the random fraction is chosen uniformly. In this paper, we introduce and analyse the mixing time of the symmetric beta-binomial splitting process: a continuous-time interacting particle system on a finite connected (weighted) graph with a conservation property. 
Informally, the process updates by choosing randomly an edge from the graph, and redistributing the particles on the vertices of the edge according to a beta-binomial distribution. This process is a generalisation of the uniform reshuffling model, is a discrete-space version of a Gibbs sampler considered in~\citet{10.1214/20-AOP1428} and is related to the binomial splitting process of~\citet{QS} (sometimes called the binomial reshuffling model \citep{cao2022binomial}). Our focus is to provide general upper bounds on the mixing time of the symmetric beta-binomial splitting process on any connected graph. We achieve this through use of a chameleon process, a process which so far has only been used to bound the mixing time of exclusion processes~\citep{CP,HP,morris,olive}. We demonstrate how a chameleon process can be used more generally to understand how systems of interacting particles mix; in particular we establish a connection between the maximal expected meeting time of two independent random walks and the mixing time of the beta-binomial splitting process. Despite giving the same name to this auxiliary process, our version of the chameleon process is substantially different from those used previously; in particular it is engineered to deal with multiple particles occupying a single vertex (an event which cannot happen in the exclusion process). As is typical with proofs that use a chameleon process, the results we obtain are not optimal in the sense that the multiplicative constants appearing in the statements are not optimized. On the other hand, the strength of this approach is in allowing us to prove results for arbitrary graphs with arbitrary edge weights. 
\subsection{Model and main result} The $m$-particle symmetric beta-binomial splitting process with parameter $s>0$ on a finite connected graph $G=(V,E,(r_e)_{e\in E})$ (with vertex set $V$, edge set $E$ and $(r_e)_{e\in E}$ a collection of positive edge weights) is the continuous-time Markov process $(\xi_t)_{t\ge0}$ on state space \[ \Omega_{G,m}:=\left\{\xi\in\mathds{N}_0^V:\,\sum_{v\in V}\xi(v)=m\right\}, \] with infinitesimal generator \[ \mathcal{L}^{\mathrm{BB}(G,s,m)}f=\sum_{\{v,w\}\in E}\frac{r_{\{v,w\}}}{\sum_{e\in E}r_e}\left(\mathcal{P}_{\{v,w\}}^{\mathrm{BB}(G,s,m)}-\mathds{1}\right)f,\qquad f:\Omega_{G,m}\to\mathds{R}, \] where $\mathcal{P}_{\{v,w\}}^{\mathrm{BB}(G,s,m)}f(\xi):=\mathds{E}_{\mathcal{L}[\xi'_{\{v,w\}}]}[f]$, and $\xi'_{\{v,w\}}$ is the random variable defined as \[ \xi'_{\{v,w\}}(u):=\begin{cases} X&\mbox{ if }u=v\\ \xi(v)+\xi(w)-X&\mbox{ if }u=w\\ \xi(u)&\mbox{ otherwise,} \end{cases} \] with $X\sim\mbox{BetaBin}(\xi(v)+\xi(w),s,s)$. We recover the uniform reshuffling model by setting $s=1$. We remark that in the binomial splitting process of~\cite{QS}, the random variable $X$ is chosen instead according to a binomial distribution (recall we obtain a binomial with probability parameter 1/2 by sending $s\to\infty$ in the above beta-binomial). The symmetric \textbf{b}eta-\textbf{b}inomial \textbf{s}plitting \textbf{p}rocess (BBSP) on a connected graph with positive edge weights is irreducible on $\Omega_{G,m}$ and, by checking detailed balance, one can determine that the $m$-particle BBSP on $G$ with parameter $s$ (denoted BB$(G,s,m)$) has unique equilibrium distribution \begin{align}\label{e:eqmbb} \pi^{\mathrm{BB}(G,s,m)}(\xi)\propto \prod_{v\in V}\frac{\Gamma(s+\xi(v))}{\xi(v)!},\qquad\xi\in\Omega_{G,m}.
\end{align} Recall that the total variation distance between two probability measures $\mu$ and $\nu$ defined on the same finite set $\Omega$ is \[\|\mu-\nu\|_\mathrm{TV}:=\sum_{\omega\in\Omega}(\mu(\omega)-\nu(\omega))_+,\] where for $x\in\mathds{R}$, $x_+:=\max\{x,0\}$. For any irreducible Markov process $(\xi_t)_{t\ge0}$ with state space $\Omega$, and equilibrium distribution $\pi$, the $\varepsilon$-total variation mixing time is \[t_\text{mix}(\varepsilon):=\inf\big\{t\ge0:\,\max_{\xi_0\in\Omega}\|\mathcal{L}[\xi_t]-\pi\|_\text{TV}\le \varepsilon\big\}\] for any $\varepsilon\in(0,1)$. We write $t_\mathrm{mix}^{\mathrm{BB}(G,s,m)}(\varepsilon)$ for the $\varepsilon$-total variation mixing time of BB$(G,s,m)$. For $i$ and $j$ distinct vertices of $G$, we also write $\hat M_{i,j}(G)$ for the meeting time of two independent random walks started from vertices $i$ and $j$, each moving as $\mathrm{BB}(G,s,1)$, that is, the first time at which the two walks are on neighbouring vertices and the edge between them rings for one of the walks. Recalling that $\mathrm{BetaBin}(1,s,s)\sim\mathrm{Bernoulli}(1/2)$, we see that $\hat M_{i,j}(G)$ does not depend on $s$ and is just the meeting time of two independent random walks on the graph obtained from $G$ by halving the edge weights. We assume throughout that $V=\{1,\ldots,n\}$. Our main result is as follows. \begin{theorem}[Symmetric beta-binomial splitting process mixing time bound]\label{T:betabin} Fix $s\in\mathds{Q}$ positive and write $s=b/a$ with $a$ and $b$ coprime.
There exists a universal constant $C>0$ such that for any connected graph $G$ on $n$ vertices with positive edge weights, and any integer $m\ge2$, \[ \forall\,\varepsilon\in(0,1/4),\qquad t_\mathrm{mix}^{\mathrm{BB}(G,s,m)}(\varepsilon)\le C(s)\log\left(\frac{n+m}{\varepsilon}\right)\max_{i,j}\mathds{E}\hat M_{i,j}(G), \] where $C(s)=Ca(p^*)^{-2}\log(12a(p^*)^{-2}) \log(a+b)$, $p^*=(5/12)^{2s}/(6B(s,s))$ for $s<20$, and $p^*=\frac16(1-\frac{20}{s+1})$ when $s\ge20$, with $B(\cdot,\cdot)$ the beta function. \end{theorem} Observe that $p^*\to\frac16$ as $s\to\infty$, whereas $p^*\to0$ as $s\to0$. The quantity $1/s$ can be seen as measuring the strength with which particles tend to ``clump together'', with the strength increasing as $1/s\to\infty$. Thus it is not surprising to obtain an upper bound which increases as $s\to0$, as breaking apart clumps of particles takes longer. Our methodology does not allow us to immediately deduce results in the case of $s$ irrational. \subsection{Related work} The beta-binomial splitting process is closely related to the binomial splitting process (although our methods do not obviously extend to this model). In \cite{QS}, the authors show that the binomial splitting process (as well as a more general version in which vertices have weights) exhibits total variation cutoff (abrupt convergence to equilibrium) at time $\frac12t_\mathrm{rel}\log m$ (with $t_\mathrm{rel}$ the relaxation time) for graphs satisfying a finite-dimensional geometry assumption provided the number of particles $m$ is at most order $n^2$ (they also obtain a pre-cutoff result without this restriction on particle numbers). For instance, on the cycle their results show that the binomial splitting process mixes at time $\Theta(n^2\log m)$ for $m\le n^2$.
On the other hand, for the beta-binomial splitting process on the cycle, our results give an upper bound of $\BigO(n^2\log(n+m))$ (with the implicit constant depending on the parameter $s$). The beta-binomial splitting process has, in a certain sense, more dependence between the movements of the particles than the binomial splitting process, which in turn means any analysis of the mixing time is more involved. To see this, consider that in the binomial splitting process, when an edge rings, each particle on the edge decides which vertex to jump to independently of the other particles; this independence is not present in the beta-binomial splitting process. There has been a flurry of activity in recent years analysing mixing times of continuous mass (rather than discrete particles) redistribution processes \citep{10.1214/20-AOP1473,caputo2022spectral,pillai2018mixing,smith2013analysis}. The uniform reshuffling model (when run on the complete graph) is the discrete-space version of a Gibbs sampler on the $n$-simplex, the mixing time of which is analysed in~\citet[Example 13.3]{aldousfill} and~\cite{ASmith1}. In~\cite{aldousfill}, the total variation mixing time of the Gibbs sampler is shown to be $\BigO(n^2\log n)$; the argument can be used (as noted by~\cite{ASmith1}) to obtain a mixing time of $\BigO(n^2\log n)$ for the uniform reshuffling model on the complete graph (in which edge weights are all $1/(n-1)$), provided the number of particles $m$ is at least $n^{5.5}$. The arguments in~\cite{ASmith1} improve this result when $m>n^{18.5}$, obtaining $\BigO(n\log n)$ as the mixing time of the uniform reshuffling model on the complete graph in this regime. Our results improve the best known bound on the mixing time of the uniform reshuffling model on the complete graph to $\BigO(n^2\log n)$ for $m\le n^{5.5}$.
More generally, the symmetric beta-binomial splitting process is a discrete-space version of a Gibbs sampler on the $n$-simplex, in which mass is redistributed across the vertices of a ringing edge according to a symmetric beta random variable. In~\cite{10.1214/20-AOP1428}, cutoff is demonstrated at time $\frac{1}{\pi^2} n^2\log n$ for this model on the line, provided the beta parameter (which we denote by $s$ here) is at least 1. While our upper bound for the discrete-space model holds also for some $s\in(0,1)$, we are restricted to $s\in\mathds{Q}$ by the nature of our analysis. A continuous-space version of the binomial splitting process is the averaging process (also known as the repeated average model), introduced by \citet{aldous2011finite,aldouslanoue}. In this model, when an interaction occurs between two vertices, their mass is redistributed equally between them. Mixing times for this process have been studied with total variation cutoff demonstrated on the complete graph \citep{chatterjee2022phase}, and on the hypercube and complete bipartite graphs \citep{caputo2022cutoff}. A general lower bound for the mixing time of the averaging process on any connected graph is obtained by \cite{movassagh2022repeated}. Lastly, a model similar in flavour to the beta-binomial splitting process and which also has applications in econophysics is the immediate exchange process proposed in~\cite{heinsalu2014kinetic} and its generalisation~\citep{van2016duality}. In the discrete version of the generalised immediate exchange process, when an edge updates, each vertex on the edge gives to the other vertex a random number of its particles, chosen according to a beta-binomial distribution. Again, however, our methods do not obviously extend to this model (for our methodology it is important that updates are distributionally symmetric over the vertices on a ringing edge), and obtaining bounds on the mixing time of this process appears to be an open problem.
\subsection{Heuristics} In order to bound the total variation (TV) distance between the time-$t$ states of two BB$(G,s,m)$ processes started from arbitrary configurations, we use the triangle inequality to reduce the problem to bounding the TV distance between the time-$t$ states of two BB$(G,s,m)$ processes which start from \emph{neighbouring} configurations \eqref{e:1}, that is, configurations which differ by the action of moving a single particle (from any vertex to any other). We can then bound this latter TV distance by the TV distance of the time-$t$ states of two processes which each evolve similarly to a BB$(G,s,m)$ process but with the incongruent particle \emph{marked} to distinguish it from the rest \eqref{e:2}. We call this process a MaBB (marked beta-binomial splitting) process. A chameleon process will then be used to bound the total variation distance between two MaBB processes (or, more precisely, between a MaBB process and one in which the marked particle is ``at equilibrium'' given the configuration of non-marked particles, Proposition~\ref{P:tvboundink}). In the chameleon process associated with a MaBB, the non-marked particles are replaced with black particles (which are coupled to evolve identically to the non-marked particles). The purpose of the chameleon process is to provide a way to track how quickly the marked particle in the MaBB becomes mixed. We achieve this by having red particles in the chameleon process, with each additional red particle on a vertex corresponding to an increase in the probability that in the MaBB, the marked particle is on that vertex.
It turns out that (see \eqref{e:condpimips}), if we construct the MaBB appropriately, then given that we are at equilibrium and we observe the non-marked particles in a certain configuration $\xi$, the probability that the marked particle will be on vertex $v$ is proportional to $a\xi(v)+b$, where $\xi(v)$ is the observed number of non-marked particles on $v$, and $a$ and $b$ are the coprime integers with $s=b/a$. In the chameleon process this will correspond to having $a\xi(v)+b$ \emph{red} particles on $v$, for every $v\in V$. It turns out that bounding how long it takes the chameleon process to reach an all-red state (where there are $a\xi(v)+b$ red particles on each vertex $v$ when the black particles are in configuration $\xi$) when we condition on this happening before reaching a no-red state (an event we call Fill) is key to bounding the total variation distance between two MaBB processes. This calculation is carried out in Section~\ref{s:loss}. \subsection{Outline of the rest of the paper} The rest of the paper is structured as follows. In Section~\ref{S:BBprops} we identify five key properties enjoyed by the BBSP, which includes writing the equilibrium distribution~\eqref{e:eqmbb} explicitly in terms of $a$ and $b$. In Section~\ref{S:MaBB}, we give the construction of the MaBB process; firstly we present the dynamics of a single step, and then we show how the MaBB can be constructed `graphically'. The chameleon process is constructed in Section~\ref{S:cham}. We again give the dynamics of a single step, before showing how the same graphical construction can be used to build the entire trajectory of the chameleon process. Properties of the chameleon process, which allow us to make the connection to the MaBB, are presented in Section~\ref{S:champrops}.
In Section~\ref{s:loss} we show that choosing the round length (a parameter of the chameleon process) to be on the order of the maximal expected meeting time of two independent random walks on $G$ suffices to ensure that there are, in expectation, a significant number of pink particles created during each round. This, in turn, can be used to show that an all-red state is reached quickly, given event Fill. We complete the proof of Theorem~\ref{T:betabin} in Section~\ref{S:proof}. An Appendix follows, in which we collect some of the proofs requiring lengthy case analyses. Finally, we give a possible simulation of the chameleon process over three time steps to further illustrate its evolution. \section{Key properties of the beta-binomial splitting process}\label{S:BBprops} We fix a positive $s\in\mathds{Q}$ (with $s=b/a$ for coprime $a$ and $b$), a connected graph $G$ of size $n\in\mathds{N}$, and an integer $m\ge2$, and demonstrate five properties of BB($G,s,m$) needed to prove Theorem~\ref{T:betabin}. For $e\in E$ and $\xi,\xi'\in\Omega_{G,m}$, we denote by $\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(\xi,\xi')$ the probability that, given the BB($G,s,m)$ configuration is $\xi$ and edge $e$ rings, the new configuration is $\xi'$. Further, for $v\in V$, we also write $C_{\xi,v}$ for the BB$(G,s,m+1)$ configuration which satisfies $C_{\xi,v}(u)=\xi(u)+\delta_v(u)$, for $u\in V$. \begin{proposition} BB$(G,s,m)$ satisfies the following properties: \begin{enumerate}[A.] \item\label{a:state} BB$(G,s,m)$ is irreducible on $\Omega_{G,m}$. \item\label{a:eqm} BB$(G,s,m)$ is reversible with equilibrium distribution \begin{align}\label{e:equil} \pi^{\mathrm{BB}(G,s,m)}(\xi)\propto\prod_{\substack{v\in V:\\\xi(v)>0}}\frac1{\xi(v)!}\prod_{i=0}^{\xi(v)-1}(ai+b),\qquad\xi\in\Omega_{G,m}. \end{align} \item\label{a:sym} Updates are symmetric: if the configuration of BB$(G,s,m)$ is $\xi$ and edge $e=\{v,w\}$ rings to give new configuration~$\xi'$, then $\xi'(v)\stackrel{d}{=}\xi'(w)$.
\item \label{a:prob} Updates have a chance to be near an even split: there exists a probability $p^*\in(0,1/3)$ such that \begin{itemize} \item if the configuration of BB$(G,s,m)$ is $\xi$ with $\xi(v)+\xi(w)\ge 2$ and edge $e=\{v,w\}$ rings, with probability at least $p^*$, the new configuration $\xi'$ has \[\xi'(v)\in\left[\frac13(\xi(v)+\xi(w)),\frac23(\xi(v)+\xi(w))\right],\] \item if the configuration of BB$(G,s,m)$ is $\xi$ with $\xi(v)+\xi(w)=2$ and edge $e=\{v,w\}$ rings, the probability that both particles will be on the same vertex in the new configuration is at least $2p^*$. \end{itemize} Moreover, it suffices to take $p^*=(5/12)^{2s}/(6B(s,s))$ for $s<20$ and $p^*=\frac16(1-\frac{20}{s+1})$ for $s\ge20$. \item\label{a:MaBBindep} The heat kernel satisfies the following identity: for any $\xi,\xi'\in\Omega_{G,m}$, $e=\{v,w\}\in~E$, \begin{align*} &(\xi'(v)+1)\mathrm{P}_e^{\mathrm{BB}(G,s,m+1)}(C_{\xi,v},C_{\xi',v})+(\xi'(w)+1)\mathrm{P}_e^{\mathrm{BB}(G,s,m+1)}(C_{\xi,v},C_{\xi',w})\\&=(\xi(v)+\xi(w)+1)\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(\xi,\xi'). \end{align*} \end{enumerate} \end{proposition} \begin{proof}Property~\ref{a:state} is immediate. Recall the process has equilibrium distribution \[ \pi^{\mathrm{BB}(G,s,m)}(\xi)\propto \prod_{v\in V}\frac{\Gamma(s+\xi(v))}{\xi(v)!},\qquad\xi\in\Omega_{G,m}. \] Since $s=b/a$ with $a$ and $b$ coprime, this is of the form~\eqref{e:equil}: \begin{align*} \pi^{\mathrm{BB}(G,s,m)}(\xi)&\propto \prod_{\substack{v\in V:\\\xi(v)>0}}\frac1{\xi(v)!}\prod_{i=0}^{\xi(v)-1}(i+s)\\&=\prod_{\substack{v\in V:\\\xi(v)>0}}\frac1{\xi(v)!}\prod_{i=0}^{\xi(v)-1}\left(i+\frac{b}{a}\right)\propto\prod_{\substack{v\in V:\\\xi(v)>0}}\frac1{\xi(v)!}\prod_{i=0}^{\xi(v)-1}\left(ai+b\right). \end{align*}Thus Property~\ref{a:eqm} holds. Property~\ref{a:sym} holds as a beta-binomial $X$ with parameters $(k,s,s)$ has the same distribution as $k-X$, for any $k\in\mathds{N}$.
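As a quick numerical companion to the reversibility just established, the following sketch (ours, not the paper's code) checks detailed balance of the unnormalised weights in \eqref{e:equil} on a single edge, using the beta-binomial form of the single-edge update:

```python
from math import comb, exp, lgamma

def beta_fn(x, y):
    # beta function B(x, y) via log-gamma, for numerical stability
    return exp(lgamma(x) + lgamma(y) - lgamma(x + y))

a, b = 2, 3            # s = b/a = 3/2
s = b / a
k = 5                  # total number of particles on the ringing edge {v, w}

def weight(j):
    # unnormalised equilibrium weight of holding j and k - j particles
    w = 1.0
    for x in (j, k - j):
        for i in range(x):
            w *= (a * i + b) / (i + 1)
    return w

def P(jp):
    # probability the edge update leaves j' particles on v: BetaBin(k, s, s)
    return comb(k, jp) * beta_fn(jp + s, k - jp + s) / beta_fn(s, s)

# detailed balance: weight(j) P(j -> j') = weight(j') P(j' -> j);
# on a single edge the transition probability depends only on the target
for j in range(k + 1):
    for jp in range(k + 1):
        assert abs(weight(j) * P(jp) - weight(jp) * P(j)) < 1e-9
```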
To show Property~\ref{a:prob} holds (with $p^*=(5/12)^{2s}/(6B(s,s))$), we first show that with positive probability, if $\xi(v)+\xi(w)\ge2$ then $X/(\xi(v)+\xi(w))\in[1/3,2/3]$ where $X\sim\mathrm{BetaBin}(\xi(v)+\xi(w),s,s)$. Recall that to sample a BetaBin$(\xi(v)+\xi(w), s, s)$, we can first sample $Y\sim\mathrm{Beta}(s,s)$ and then given $Y$, sample Bin($\xi(v)+\xi(w),Y)$. We first observe that if $s\ge 1$, for such a random variable $Y$, with probability at least $(5/12)^{2s}/(2B(s,s))$ (where $B(s,t)$ is the beta function), $Y$ will be in the interval $[5/12,7/12]$. This can be seen by noting that the density function of $Y$ in the interval $[5/12,7/12]$ is minimised on the boundary. If instead $s<1$, then $Y$ will be in $[5/12,7/12]$ with probability at least $(1/2)^{2s}/(2B(s,s))$. Further, if $s\ge20$ then we can use Chebyshev's inequality to obtain $\mathds{P}(Y\in[\frac{5}{12},\frac{7}{12}])\ge 1-\frac{36}{2s+1}\ge \frac12(1-\frac{20}{s+1})$. Fix $y\in[5/12,7/12]$ and let $Z\sim \mathrm{Bin}(\xi(v)+\xi(w),y)$. We observe that $\mathds{P}(Z\in[(\xi(v)+\xi(w))/3,2(\xi(v)+\xi(w))/3])$ is minimised (over $\xi(v)+\xi(w)\ge 2$ and $y\in[5/12,7/12])$ when $\xi(v)+\xi(w)=4$ and $y=7/12$, with a value of $1225/3456>1/3$. Combining, we obtain that we can take $p^*=(5/12)^{2s}/(6B(s,s))$, and when $s\ge20$ we can take $p^*=\frac16(1-\frac{20}{s+1})$. For the second part of Property~\ref{a:prob}, if $\xi(v)+\xi(w)=2$ then the probability that both particles end up on the same vertex is $1-\frac{s}{1+2s}$, which is larger than $2p^*$ for our choice of $p^*$.
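The constants in this proof can be checked numerically. The sketch below (ours) evaluates $p^*$ for several values of $s$, and confirms in exact rational arithmetic that, at the boundary value $y=7/12$, the binomial probability over small edge totals is minimised at $\xi(v)+\xi(w)=4$ with value $1225/3456$:

```python
from fractions import Fraction
from math import comb, exp, lgamma

def beta_fn(x, y):
    # beta function B(x, y) via log-gamma
    return exp(lgamma(x) + lgamma(y) - lgamma(x + y))

def p_star(s):
    # the choice of p* made in the proof of Property D
    if s < 20:
        return (5 / 12) ** (2 * s) / (6 * beta_fn(s, s))
    return (1 - 20 / (s + 1)) / 6

assert all(0 < p_star(s) < 1 / 3 for s in (0.5, 1, 2, 20, 200))

def prob_middle(N, y=Fraction(7, 12)):
    # P(Z in [N/3, 2N/3]) for Z ~ Bin(N, y), computed exactly
    lo, hi = Fraction(N, 3), Fraction(2 * N, 3)
    return sum(comb(N, z) * y**z * (1 - y)**(N - z)
               for z in range(N + 1) if lo <= z <= hi)

vals = {N: prob_middle(N) for N in range(2, 30)}
assert min(vals.values()) == vals[4] == Fraction(1225, 3456) > Fraction(1, 3)
```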
For Property~\ref{a:MaBBindep}, observe that \begin{align*}\mathrm{P}_e^{\mathrm{BB}(G,s,m+1)}(C_{\xi,v},C_{\xi',v})&=\binom{\xi(v)+\xi(w)+1}{\xi'(v)+1}\frac{B(\xi'(v)+1+s,\xi'(w)+s)}{B(s,s)},\\ \mathrm{P}_e^{\mathrm{BB}(G,s,m+1)}(C_{\xi,v},C_{\xi',w})&=\binom{\xi(v)+\xi(w)+1}{\xi'(v)}\frac{B(\xi'(v)+s,\xi'(w)+1+s)}{B(s,s)},\\ \mathrm{P}_e^{\mathrm{BB}(G,s,m)}(\xi,\xi')&=\binom{\xi(v)+\xi(w)}{\xi'(v)}\frac{B(\xi'(v)+s,\xi'(w)+s)}{B(s,s)}.\end{align*} Thus \begin{align*} (\xi'(v)+1)\mathrm{P}_e^{\mathrm{BB}(G,s,m+1)}(C_{\xi,v},C_{\xi',v})&=\frac{(\xi(v)+\xi(w)+1)!}{\xi'(w)!\xi'(v)!}\frac{B(\xi'(v)+1+s,\xi'(w)+s)}{B(s,s)},\\ (\xi'(w)+1)\mathrm{P}_e^{\mathrm{BB}(G,s,m+1)}(C_{\xi,v},C_{\xi',w})&=\frac{(\xi(v)+\xi(w)+1)!}{\xi'(w)!\xi'(v)!}\frac{B(\xi'(v)+s,\xi'(w)+1+s)}{B(s,s)}. \end{align*} Adding these and using that $B(x,y)=B(x+1,y)+B(x,y+1)$, we obtain \begin{align*} &(\xi'(v)+1)\mathrm{P}_e^{\mathrm{BB}(G,s,m+1)}(C_{\xi,v},C_{\xi',v})+(\xi'(w)+1)\mathrm{P}_e^{\mathrm{BB}(G,s,m+1)}(C_{\xi,v},C_{\xi',w})\\&=\frac{(\xi(v)+\xi(w)+1)!}{\xi'(w)!\xi'(v)!}\frac{B(\xi'(v)+s,\xi'(w)+s)}{B(s,s)}\\ &=(\xi(v)+\xi(w)+1)\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(\xi,\xi').\qedhere \end{align*} \end{proof} \section{An auxiliary process: MaBB}\label{S:MaBB} \subsection{Initial MaBB construction} In order to use a chameleon process in our setting, we need to introduce an auxiliary process related to the BBSP but with one of the particles distinguishable from the rest. To this end, we shall define a \textbf{ma}rked \textbf{b}eta-\textbf{b}inomial splitting process (MaBB) to be a process similar to the BBSP except that one particle is marked. Before giving the construction of the MaBB, we first discuss some of the key properties required. Firstly, we need that the time-$t$ total variation distance of the BBSP to its equilibrium distribution is bounded by the time-$t$ total variation distance of the MaBB to its equilibrium distribution. 
We can achieve this by using the contraction property of total variation distance as long as it is indeed the case that if we forget the marking in a MaBB we obtain the BBSP, see~\eqref{e:contraction}. Secondly, we require that, given a particular edge $e$ rings, the law which governs the movement of the non-marked particles does not depend on the location of the marked particle (this will ensure that the uniform random variables in element 3 of the graphical construction given in Section~\ref{S:graphical} can be taken to be independent). This is not to say that the locations of the non-marked particles are independent of the location of the marked -- indeed they are not -- as the trajectory of the marked particle depends on the trajectories of the non-marked particles. We write $\Omega_{G,m}'$ for the set of configurations of the MaBB, and members of $\Omega_{G,m}'$ are of the form $(\xi,y)$ where $\xi\in\Omega_{G,m-1}$ with $\xi(v)$ denoting the number of non-marked particles at vertex $v$, and $y\in V$ denotes the location of the marked particle. Let $\mathrm{P}_e^{\mathrm{MaBB}}((\xi,v),(\xi',w))$ denote the probability that, given the MaBB configuration is $(\xi,v)$ and edge $e$ rings, the new configuration is $(\xi',w)$. Then in order to ensure that if we forget the marking in the MaBB we obtain the BBSP, it suffices that, for every edge $e=\{v,w\}$ and $\xi,\xi'\in\Omega_{G,m-1}$, \begin{align}\label{e:contraction} \mathrm{P}_e^\mathrm{MaBB}((\xi,v),(\xi',v))+\mathrm{P}_e^\mathrm{MaBB}((\xi,v),(\zeta,w))=\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{\xi,v},C_{\xi',v}) \end{align} where $\zeta\in\Omega_{G,m-1}$ satisfies $\zeta(y)=\xi'(y)+\delta_v(y)-\delta_w(y)$ for $y\in V$. The reason is that if we forget the marking in either of MaBB configurations $(\xi',v)$ or $(\zeta,w)$, we obtain the same BBSP configuration $C_{\xi',v}$, and these are the only configurations with this property which are obtainable from $(\xi,v)$ when $e$ rings. 
We now discuss how the MaBB is constructed and why it satisfies the two desirable properties. The MaBB is coupled to the BBSP so that it updates at the same times. When an edge rings in the BBSP, if the marked particle is absent from the vertices of the ringing edge, the update of the MaBB is as in the BBSP. If instead the marked particle is on one of the vertices of the ringing edge, we first remove the marked particle, then move the remaining (i.e.\! non-marked) particles as in the BBSP, and then add the marked particle back to one of the two vertices on the ringing edge with a certain law. Specifically, if $e=\{v,w\}$ is the ringing edge and the MaBB configuration before the update is $(\xi,v)$ and after the update the non-marked particles are in configuration $\xi'$, we place the marked particle on $v$ with probability \[ \mathrm{P}_{e,\xi,\xi'}(v,v):=\frac{\xi'(v)+1}{\xi(v)+\xi(w)+1}\frac{\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{\xi,v},C_{\xi',v})}{\mathrm{P}_e^{\mathrm{BB}(G,s,m-1)}(\xi,\xi')}, \]and place it on $w$ with probability \[ \mathrm{P}_{e,\xi,\xi'}(v,w):=\frac{\xi'(w)+1}{\xi(v)+\xi(w)+1}\frac{\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{\xi,v},C_{\xi',w})}{\mathrm{P}_e^{\mathrm{BB}(G,s,m-1)}(\xi,\xi')}. \]This exhausts all possibilities (i.e.\! $\mathrm{P}_{e,\xi,\xi'}(v,v)+\mathrm{P}_{e,\xi,\xi'}(v,w)=1$) by Property~\ref{a:MaBBindep}. Further, it is immediate from this construction that the movement of non-marked particles does not depend on the location of the marked particle. So it remains to show that \eqref{e:contraction} holds. 
We see this as follows: \begin{align*} &\mathrm{P}_e^\mathrm{MaBB}((\xi,v),(\xi',v))+\mathrm{P}_e^\mathrm{MaBB}((\xi,v),(\zeta,w))\\&=\mathrm{P}_{e,\xi,\xi'}(v,v)\mathrm{P}_e^{\mathrm{BB}(G,s,m-1)}(\xi,\xi')+\mathrm{P}_{e,\xi,\zeta}(v,w)\mathrm{P}_e^{\mathrm{BB}(G,s,m-1)}(\xi,\zeta)\\ &=\frac{\xi'(v)+1}{\xi(v)+\xi(w)+1}\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{\xi,v},C_{\xi',v})+\frac{\zeta(w)+1}{\xi(v)+\xi(w)+1}\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{\xi,v},C_{\zeta,w})\\ &=\frac{\xi'(v)+1}{\xi(v)+\xi(w)+1}\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{\xi,v},C_{\xi',v})+\frac{\xi'(w)}{\xi(v)+\xi(w)+1}\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{\xi,v},C_{\zeta,w})\\ &=\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{\xi,v},C_{\xi',v}), \end{align*} where the last equality uses $C_{\xi',v}=C_{\zeta,w}$. This description for the MaBB is useful as it clearly demonstrates that the movement of the non-marked particles does not depend on the location of the marked particle. There is an equivalent (distributionally-speaking) description of the MaBB which is useful for proving some other properties. Note that for $y\in\{v,w\}=e$, \begin{align}\label{e:MaBB2BBSP} \mathrm{P}_e^\mathrm{MaBB}((\xi,v),(\xi',y))=\frac{\xi'(y)+1}{\xi(v)+\xi(w)+1}\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{\xi,v},C_{\xi',y}). \end{align} Thus an update of the MaBB from state $(\xi,v)$ when edge $e=\{v,w\}$ rings can be obtained by first removing the marking on the marked particle (but leaving it on the vertex) to obtain the BBSP configuration $C_{\xi,v}$, then updating according to the BBSP, which gives BBSP configuration $C_{\xi',y}$ with probability $\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{\xi,v},C_{\xi',y})$, and then choosing a particle from edge $e$ uniformly and applying a mark to it (so the marked particle will be on $y$ with probability $\frac{\xi'(y)+1}{\xi(v)+\xi(w)+1}$). We shall use this alternative description later in the paper (see the proof of Lemma~\ref{L:Aexists}).
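These identities are easy to confirm numerically on a single edge. The sketch below (ours; it hard-codes that the single-edge BBSP update is a beta-binomial resampling of the edge total) checks that the placement probabilities $\mathrm{P}_{e,\xi,\xi'}(v,v)$ and $\mathrm{P}_{e,\xi,\xi'}(v,w)$ sum to one for every reachable $\xi'$, as guaranteed by Property E:

```python
from math import comb, exp, lgamma

def beta_fn(x, y):
    return exp(lgamma(x) + lgamma(y) - lgamma(x + y))

def edge_law(N, s):
    # law of the new particle count on v when N particles sit on edge {v, w}
    return [comb(N, j) * beta_fn(j + s, N - j + s) / beta_fn(s, s)
            for j in range(N + 1)]

s = 0.75
xv, xw = 2, 1                      # non-marked particles on v and w
N = xv + xw
law_m1 = edge_law(N, s)            # (m - 1)-particle process on the edge
law_m = edge_law(N + 1, s)         # m-particle process (marked included)
for xpv in range(N + 1):
    xpw = N - xpv
    # marked placed back on v (target C_{xi',v}) or on w (target C_{xi',w})
    place_v = (xpv + 1) / (N + 1) * law_m[xpv + 1] / law_m1[xpv]
    place_w = (xpw + 1) / (N + 1) * law_m[xpv] / law_m1[xpv]
    assert abs(place_v + place_w - 1) < 1e-12
```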
For $k\in\mathds{N}_0$, set $\chi(k)=ak+b$ with $a$ and $b$ the coprime integers from Property~\ref{a:eqm}. We call this the colour function. The importance of the colour function becomes apparent from the following result. \begin{lemma}\label{L:colour}Fix vertices $v$ and $w$ with $e=\{v,w\}$ an edge of the graph. For any $\xi, \xi'\in\Omega_{G,m-1}$, \begin{align*}\chi(\xi(v))\mathrm{P}_{e,\xi,\xi'}(v,v)+\chi(\xi(w))\mathrm{P}_{e,\xi,\xi'}(w,v)=\chi(\xi'(v)). \end{align*} \end{lemma} The (yet to be defined) chameleon process will allow us to track possible locations of the marked particle in the MaBB, given the location of the non-marked particles. If we run the MaBB for a long time, and then observe that the configuration of non-marked particles is $\xi$, the probability the marked particle is on vertex $v$ will be close to $\pi_\xi(v)$ (defined in \eqref{e:pixi}). If we scale $\pi_\xi(v)$ by $a(m-1)+bn$, we obtain $\chi(\xi(v))$ (see \eqref{e:condpimips}). Together with reversibility, this is essentially the reason why Lemma~\ref{L:colour} is true. Our goal in the chameleon process will be to have $\chi(\xi(v))$ red particles on vertex $v$, for all $v$, as this will signal that the marked particle is ``mixed'' (see Proposition~\ref{P:tvboundink}). In fact, the chameleon process will always have $\chi(\xi(v))$ non-black particles on $v$ (they will be either red, white, or pink), when there are $\xi(v)$ black particles on $v$. 
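Before turning to the proof, Lemma~\ref{L:colour} can be illustrated numerically on a single edge. The sketch below (ours) uses the observation that the single-edge update law depends only on the total number of particles on the edge, so the placement probabilities onto $v$ from a marked particle starting on $v$ or on $w$ coincide:

```python
from math import comb, exp, lgamma

def beta_fn(x, y):
    return exp(lgamma(x) + lgamma(y) - lgamma(x + y))

a, b = 2, 3                        # s = b/a, colour function chi(k) = a*k + b
s = b / a
chi = lambda k: a * k + b

def edge_law(N):
    # BetaBin(N, s, s) law of the new count on v given N particles on the edge
    return [comb(N, j) * beta_fn(j + s, N - j + s) / beta_fn(s, s)
            for j in range(N + 1)]

xv, xw = 3, 1                      # xi(v), xi(w)
N = xv + xw
law_m1, law_m = edge_law(N), edge_law(N + 1)
for xpv in range(N + 1):
    # placement probability onto v; identical whether the marked particle
    # started on v or on w, since only the edge total matters
    place = (xpv + 1) / (N + 1) * law_m[xpv + 1] / law_m1[xpv]
    # the identity of Lemma L:colour
    assert abs((chi(xv) + chi(xw)) * place - chi(xpv)) < 1e-9
```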
\begin{proof}[Proof of Lemma~\ref{L:colour}] Reversibility of the BBSP (Property~\ref{a:eqm}) gives that for any edge $e$ and configurations $\zeta,\zeta'\in\Omega_{G,m}$, \begin{align}\label{e:piBBSP}\pi^{\mathrm{BB}(G,s,m)}(\zeta)\mathrm{P}^{\mathrm{BB}(G,s,m)}_e(\zeta,\zeta')=\pi^{\mathrm{BB}(G,s,m)}(\zeta')\mathrm{P}^{\mathrm{BB}(G,s,m)}_e(\zeta',\zeta).\end{align} For any $v,w\in V$ and $\xi$ and $\xi'$ which satisfy $\xi(v)+\xi(w)=\xi'(v)+\xi'(w)$, we have \[\sum_{y\in\{v,w\}}C_{\xi,v}(y)=\sum_{y\in\{v,w\}}C_{\xi,w}(y)=\sum_{y\in\{v,w\}}C_{\xi',v}(y)=\sum_{y\in\{v,w\}}C_{\xi',w}(y)=\xi(v)+\xi(w)+1.\] Observe that $\mathrm{P}^\mathrm{MaBB}_e((\xi,v),(\xi',w))>0$ is equivalent to $ \mathrm{P}^\mathrm{MaBB}_e((\xi',w),(\xi,v))>0$ and implies $\xi(v)+\xi(w)=\xi'(v)+\xi'(w)$. Thus using~\eqref{e:MaBB2BBSP} and~\eqref{e:piBBSP}, we have \begin{align*} \pi^{\mathrm{BB}(G,s,m)}(C_{\xi,v})(\xi(v)+1)\mathrm{P}^\mathrm{MaBB}_e((\xi,v),(\xi',w))&=\pi^{\mathrm{BB}(G,s,m)}(C_{\xi',w})(\xi'(w)+1)\mathrm{P}^\mathrm{MaBB}_e((\xi',w),(\xi,v)). \end{align*} By similar arguments we also have \begin{align*} &\pi^{\mathrm{BB}(G,s,m)}(C_{\xi,w})(\xi(w)+1)\mathrm{P}^\mathrm{MaBB}_e((\xi,w),(\xi',v))\\&=\pi^{\mathrm{BB}(G,s,m)}(C_{\xi',v})(\xi'(v)+1)\mathrm{P}^\mathrm{MaBB}_e((\xi',v),(\xi,w)),\\ \intertext{and} &\pi^{\mathrm{BB}(G,s,m)}(C_{\xi,y})(\xi(y)+1)\mathrm{P}^\mathrm{MaBB}_e((\xi,y),(\xi',y))\\&=\pi^{\mathrm{BB}(G,s,m)}(C_{\xi',y})(\xi'(y)+1)\mathrm{P}^\mathrm{MaBB}_e((\xi',y),(\xi,y)),\qquad y\in\{v,w\}. \end{align*} Hence the MaBB process is reversible with equilibrium distribution \[ \pi^{\mathrm{MaBB}}((\xi,v))\propto \pi^{\mathrm{BB}(G,s,m)}(C_{\xi,v})(\xi(v)+1). \] For each $\xi\in\Omega_{G,m-1}$, we define \begin{align}\label{e:pixi}\pi_\xi(v):=\pi^\mathrm{MaBB}((\xi,v))/\sum_y\pi^\mathrm{MaBB}((\xi,y)),\end{align} so that \[ \pi_\xi(v)=\frac{\pi^{\mathrm{BB}(G,s,m)}(C_{\xi,v})(\xi(v)+1)}{\sum_y\pi^{\mathrm{BB}(G,s,m)}(C_{\xi,y})(\xi(y)+1)}.
\] Property~\ref{a:eqm} gives that \[ \pi^{\mathrm{BB}(G,s,m)}(C_{\xi,v})\propto \frac{a\xi(v)+b}{\xi(v)+1}\prod_{\substack{w\in V:\\\xi(w)>0}}\frac1{\xi(w)!}\prod_{i=0}^{\xi(w)-1}(ai+b), \] and hence \begin{align}\label{e:condpimips} \pi_\xi(v)=\frac{a\xi(v)+b}{a(m-1)+bn}=\frac{\chi(\xi(v))}{a(m-1)+bn}. \end{align} It follows that to prove the lemma, it suffices to show that \begin{align*} \pi_\xi(v)\mathrm{P}_{e,\xi,\xi'}(v,v)+\pi_\xi(w)\mathrm{P}_{e,\xi,\xi'}(w,v)=\pi_{\xi'}(v), \end{align*} equivalently, \begin{align} \label{e:suff} \frac{\pi^\mathrm{MaBB}((\xi,v))}{\sum_y\pi^\mathrm{MaBB}((\xi,y))}\mathrm{P}_{e,\xi,\xi'}(v,v)+\frac{\pi^\mathrm{MaBB}((\xi,w))}{\sum_y\pi^\mathrm{MaBB}((\xi,y))}\mathrm{P}_{e,\xi,\xi'}(w,v)=\frac{\pi^\mathrm{MaBB}((\xi',v))}{\sum_y\pi^\mathrm{MaBB}((\xi',y))}. \end{align} Note that \[ \mathrm{P}_{e,\xi,\xi'}(v,v)=\frac{\mathrm{P}^\mathrm{MaBB}_e((\xi,v),(\xi',v))}{\sum_{y\in \{v,w\}}\mathrm{P}^\mathrm{MaBB}_e((\xi,v),(\xi',y))} =\frac{\mathrm{P}^\mathrm{MaBB}_e((\xi,v),(\xi',v))}{\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi,\xi')},\] where we define $\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi,\xi'):=\sum_{y\in \{v,w\}}\mathrm{P}^\mathrm{MaBB}_e((\xi,v),(\xi',y))$ and note that this does not depend on $v$. Thus the left-hand side of~\eqref{e:suff} can be written as \begin{align*} &\frac{\pi^\mathrm{MaBB}((\xi,v))\mathrm{P}^\mathrm{MaBB}_e((\xi,v),(\xi',v))+ \pi^\mathrm{MaBB}((\xi,w))\mathrm{P}^\mathrm{MaBB}_e((\xi,w),(\xi',v))}{\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi,\xi')\sum_y\pi^\mathrm{MaBB}((\xi,y))}\\ &=\frac{\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi',\xi)\pi^\mathrm{MaBB}((\xi',v))}{\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi,\xi')\sum_y\pi^\mathrm{MaBB}((\xi,y))}\end{align*} using the reversibility of MaBB. Thus showing~\eqref{e:suff} is equivalent to showing \begin{align} \hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi',\xi)\sum_y\pi^\mathrm{MaBB}((\xi',y))=\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi,\xi')\sum_y\pi^\mathrm{MaBB}((\xi,y)). 
\end{align} We use reversibility to show this identity: \begin{align*} &\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi',\xi)\sum_y\pi^\mathrm{MaBB}((\xi',y))\\ &=\pi^\mathrm{MaBB}((\xi',v))\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi',\xi)+\pi^\mathrm{MaBB}((\xi',w))\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi',\xi)\\&\phantom{=}+\sum_{y\notin\{v,w\}}\pi^\mathrm{MaBB}((\xi',y))\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi',\xi)\\ &=\pi^\mathrm{MaBB}((\xi',v))\sum_{y\in\{v,w\}}\mathrm{P}^\mathrm{MaBB}_e((\xi',v),(\xi,y))+\pi^\mathrm{MaBB}((\xi',w))\sum_{y\in\{v,w\}}\mathrm{P}^\mathrm{MaBB}_e((\xi',w),(\xi,y))\\ &\phantom{=}\,+\sum_{y\notin\{v,w\}}\pi^\mathrm{MaBB}((\xi',y))\sum_z \mathrm{P}^\mathrm{MaBB}_e((\xi',z),(\xi,y))\\ &=\sum_{y\in\{v,w\}}\pi^\mathrm{MaBB}((\xi,y))\left(\mathrm{P}^\mathrm{MaBB}_e((\xi,y),(\xi',v))+\mathrm{P}^\mathrm{MaBB}_e((\xi,y),(\xi',w))\right)\\&\phantom{=}+\sum_{y\notin\{v,w\}}\pi^\mathrm{MaBB}((\xi',y))\mathrm{P}^\mathrm{MaBB}_e((\xi',y),(\xi,y))\\ &=\pi^\mathrm{MaBB}((\xi,v))\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi,\xi')+\pi^\mathrm{MaBB}((\xi,w))\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi,\xi')\\&\phantom{=}+\sum_{y\notin\{v,w\}}\pi^\mathrm{MaBB}((\xi,y))\mathrm{P}^\mathrm{MaBB}_e((\xi,y),(\xi',y))\\ &=\pi^\mathrm{MaBB}((\xi,v))\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi,\xi')+\pi^\mathrm{MaBB}((\xi,w))\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi,\xi')\\&\phantom{=}+\sum_{y\notin\{v,w\}}\pi^\mathrm{MaBB}((\xi,y))\sum_z\mathrm{P}^\mathrm{MaBB}_e((\xi,y),(\xi',z))\\ &=\pi^\mathrm{MaBB}((\xi,v))\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi,\xi')+\pi^\mathrm{MaBB}((\xi,w))\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi,\xi')\\&\phantom{=}+\sum_{y\notin\{v,w\}}\pi^\mathrm{MaBB}((\xi,y))\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi,\xi')\\ &=\hat{\mathrm{P}}^\mathrm{MaBB}_{e}(\xi,\xi')\sum_y\pi^\mathrm{MaBB}((\xi,y)).\qedhere \end{align*} \end{proof} \subsection{Graphical construction of the MaBB}\label{S:graphical} We present a `graphical construction' of the MaBB, which will also be used for the 
chameleon process. The motivation behind this construction is that it contains all of the random elements from which one can then deterministically construct both the MaBB and the chameleon process. In particular, it allows us to construct the MaBB and the chameleon process on the same probability space. The graphical construction is comprised of the following elements: \begin{enumerate} \item A Poisson process of rate $\sum_e r_e$ which gives the times $\{\tau_1,\tau_2,\ldots\}$ at which edges ring (we also set $\tau_0=0$). \item A sequence of edges $\{e_r\}_{r\ge 1}$ so that edge $e_r$ is the edge which rings at the $r$th time $\tau_r$ of the Poisson process; for each $r\ge1$ and $e\in E$, $\mathds{P}(e_r=e)\propto r_e$. \item For each $r\ge 1$ an independent uniform random variable $U_r^b$ on $[0,1]$ (which will be used to determine how non-marked particles in the MaBB update at time $\tau_r$ when edge $e_r$ rings), and an independent uniform random variable $U_r^c$ on $[0,1]$ (used for updating the location of the marked particle in MaBB). \item A sequence of independent fair coin flips $\{d_\ell\}_{\ell\ge1}$ (Bernoulli$(1/2)$ random variables). These are only used in the chameleon process. \end{enumerate} We now demonstrate how the graphical construction is used to build the MaBB of interest, given an initial configuration. Fix $u\in[0,1]$, $e=\{v,w\}\in E$, and $\xi\in\Omega_{G,m-1}$. Without loss of generality, suppose $v<w$ (recall $V=[n]$) and suppose $\{\xi_1,\ldots,\xi_r\}$ are the possible configurations of the non-marked particles that can be obtained from non-marked configuration $\xi$ when edge $e$ rings. Without loss of generality suppose they are ordered so that \begin{align}\label{e:ordering}|\xi_i(v)-\frac12(\xi(v)+\xi(w))|\le|\xi_j(v)-\frac12(\xi(v)+\xi(w))|\quad\text{ if and only if }\quad i\le j,\end{align} with any ties resolved by ordering earlier the configuration which places fewer particles on $v$. 
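The ordering in \eqref{e:ordering} can be realised by a simple sort of the possible new counts on $v$. A minimal sketch (ours, with a hypothetical helper name):

```python
def ordered_counts(total):
    # possible new counts on v, ordered by distance from an even split,
    # ties broken in favour of fewer particles on v
    half = total / 2
    return sorted(range(total + 1), key=lambda j: (abs(j - half), j))

print(ordered_counts(4))  # [2, 1, 3, 0, 4]
print(ordered_counts(5))  # [2, 3, 1, 4, 0, 5]
```

With this ordering, small values of $u$ in the inverse-CDF construction of $\mathrm{MaBB}(u,e,\xi)$ select configurations close to an even split, which is exactly what \eqref{e:ulesspimplication} requires.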
We now define two deterministic functions $\mathrm{MaBB}:[0,1]\times E\times\Omega_{G,m-1}\to\Omega_{G,m-1}$ and $\mathrm{MaBB}^*:[0,1]\times[0,1]\times E\times \Omega_{G,m-1}\times V\to V$. Firstly, we define $\mathrm{MaBB}(u,e,\xi)$ to be the configuration of non-marked particles which satisfies, for each $1\le i\le r$, \[ \mathrm{MaBB}(u,e,\xi)=\xi_i\quad\mathrm{ if }\quad \sum_{j<i}\mathrm{P}_e^{\mathrm{BB}(G,s,m-1)}(\xi,\xi_j)<u\le \sum_{j\le i}\mathrm{P}_e^{\mathrm{BB}(G,s,m-1)}(\xi,\xi_j). \] When $u$ is chosen according to a uniform on $[0,1]$ this gives that MaBB$(u,e,\xi)$ has the law of the new configuration of non-marked particles (given $e$ rings and the old configuration is $\xi$), i.e.\! for a uniform $U$ on $[0,1]$, $\mathrm{MaBB}(U,e,\xi)$ has law $\mathrm{P}_e^{\mathrm{BB}(G,s,m-1)}(\xi,\cdot)$. By Property~\ref{a:prob}, if $\xi(v)+\xi(w)\ge 2$, then \begin{align}\label{e:ulesspimplication}u\le p^*\quad\implies\quad\mathrm{MaBB}(u,e,\xi)(v)\in\left[\frac13(\xi(v)+\xi(w)),\frac23(\xi(v)+\xi(w))\right]\end{align} (this is the reason for choosing the ordering of the new configurations as described in~\eqref{e:ordering}, and is used in the proof of Lemma~\ref{L:Aexists}). Secondly, for $x\in V$ and $u,u'\in[0,1]$ we set \begin{align}\label{e:MaBBdef} \mathrm{MaBB}^*(u,u',e,\xi,x)=\begin{cases} x&\mbox{if } x\notin e, \mbox{ or }x\in e \mbox{ and }u'<\mathrm{P}_{e,\xi,\mathrm{MaBB}(u,e,\xi)}(x,x), \\e\setminus\{x\}&\mbox{otherwise.} \end{cases} \end{align} We can now obtain a realisation of the MaBB as follows. Suppose we initialise at state $(\xi_0,x_0)$. Given the state at time $\tau_i$, the MaBB remains constant until the next update at time $\tau_{i+1}$, at which time \[ \xi_{\tau_{i+1}}=\mathrm{MaBB}(U_{i+1}^b,e_{i+1},\xi_{\tau_i}),\quad x_{\tau_{i+1}}=\mathrm{MaBB}^*(U_{i+1}^b,U_{i+1}^c,e_{i+1},\xi_{\tau_i},x_{\tau_i}).
\] \section{The Chameleon process}\label{S:cham} \subsection{Introduction to the chameleon process} As stated previously, the chameleon process will be built using the graphical construction. The chameleon process is an interacting particle system consisting of coloured particles moving on the vertices of a graph (the same graph as the MaBB). Particles can be of four colours: black, red, pink and white. Each vertex $v$ in the chameleon process is occupied at a given time by a certain number, $B(v)$, of black particles and $\chi(B(v))$ non-black particles (recall $\chi$ is the colour function). Associated with each vertex is a notion of the amount of \emph{redness}, called $\operatorname{ink}$ (this terminology is consistent with previous works using a chameleon process). Specifically we write $\operatorname{ink}_t(v)$ for the number of red particles plus half the number of pink particles at vertex $v$ at time $t$ in the chameleon process. If there are $B(v)$ black particles at vertex $v$ at time $t$ then $0\le\operatorname{ink}_t(v)\le \chi(B(v))$, with the minimum (resp.\! maximum) attained when all non-black particles are white (resp.\! red). We use the initial configuration of the MaBB to initialise the chameleon process. Each non-marked particle on a vertex in the MaBB configuration corresponds to a black particle at the same vertex in the chameleon process. The vertex with the marked particle in the MaBB is initialised in the chameleon process with all non-black particles as red. Every other vertex has all non-black particles as white. The chameleon process consists of rounds of length $T$ (a parameter of the process), and at the end of some rounds is a depinking time. Whether we have a depinking time (at which we remove all pink particles, replacing them all with either red or white particles) will depend on the numbers of red, pink and white particles in the graph at the end of that round. 
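The initialisation just described can be sketched in code (ours; the dictionary layout and helper names are assumptions for illustration, not the paper's notation):

```python
a, b = 1, 2                          # s = b/a = 2; colour function chi(k) = a*k + b
chi = lambda k: a * k + b

def init_chameleon(xi, marked):
    # xi: vertex -> number of non-marked MaBB particles; marked: its vertex.
    # Black particles mirror xi; the marked vertex starts all-red, every
    # other vertex all-white, keeping chi(B(v)) non-black particles on each v.
    state = {}
    for v, k in xi.items():
        red = chi(k) if v == marked else 0
        state[v] = {"black": k, "red": red, "pink": 0, "white": chi(k) - red}
    return state

def ink(state, v):
    # redness of v: red particles plus half the pink particles
    return state[v]["red"] + state[v]["pink"] / 2

xi = {0: 2, 1: 0, 2: 3}
st = init_chameleon(xi, marked=2)
assert all(st[v]["red"] + st[v]["pink"] + st[v]["white"] == chi(xi[v])
           for v in xi)
assert ink(st, 2) == chi(3) and ink(st, 0) == 0
```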
If at the start of the round there are fewer red than white particles then we shall assign to each red particle a unique white particle; thus each red particle has a paired white particle. Later, our interest will be in determining how many red particles `meet' their paired white particle during a round, where two particles are said to meet if, at some moment in time, they are both on the same ringing edge (unless they start on the same vertex, they will be on different vertices when they meet). If there are fewer white than red particles at the start of the round we shall reverse roles so that each white particle gets a unique paired red particle. In the chameleon process we can only create new pink particles (by re-colouring red and white particles) at the meeting times of paired particles. It is this restriction which will lead to us taking the round length to be the maximal expected meeting time of two random walks. In previous works using other versions of the chameleon process, the idea of using paired particles is not used (it is not needed). It becomes useful here because \emph{a priori} there is no constant (not depending on the number of particles or size of the graph) bound on the number of particles which may occupy a vertex. As a result, without using pairing, it turns out we will need to understand the movement of 3 coloured particles simultaneously, rather than the movement of one red and one white until their meeting time. \subsection{A single step of the chameleon process} Our construction of the chameleon process is such that when an edge rings, we first observe how the non-marked particles move in the MaBB and move the black particles in the same way. Given the new configuration of black particles, the number of non-black particles on the vertices is determined by the colour function $\chi$. 
After observing the movement of the black particles, we shall then determine the movement of the red particles (and whether we are to pinken any), then the pre-existing pink particles (i.e.\! not any just-created pink particles) and finally the white particles. To specify an update more precisely, we introduce some notation. We shall define a probability \[\theta(v)=\theta(v,e,B(v),B(w),B'(v),B'(w),R(v),R(w),P(v),P(w))\] which is a function of a vertex $v$, an edge containing that vertex $e=\{v,w\}$, and non-negative integers \[B(v), B(w), B'(v), B'(w), R(v),R(w), P(v), P(w)\] which satisfy $B'(v)\le B(v)+B(w)$, $B(v)+B(w)=B'(v)+B'(w)$, $R(v)+P(v)\le \chi(B(v))$, $R(w)+P(w)\le \chi(B(w))$. These `non-primed' integers shall represent the numbers of black/red/pink particles on the vertices of the edge $e$ just prior to it ringing, and $B'(v)$, $B'(w)$ the numbers of black particles on $v, w$ just after $e$ rings. For simplicity we write $R_{v,w}$ for $R(v)+R(w)$ and $P_{v,w}$ for $P(v)+P(w)$. To define $\theta(v)$ we also define integers \begin{align*} \ell(v)=\ell(v,R_{v,w},B'(w))&:=\{R_{v,w}-\chi(B'(w))\}\vee0,\\ u(v)=u(v,R_{v,w},B'(v))&:=\chi(B'(v))\wedge R_{v,w},\\ u(w)=u(w,R_{v,w},B'(w))&:=\chi(B'(w))\wedge R_{v,w}=R_{v,w}-\ell(v),\\ \ell^P(v)=\ell^P(v,R_{v,w},P_{v,w},B'(w))&:=\{P_{v,w}-\chi(B'(w))+u(w)\}\vee 0,\\ u^P(v)=u^P(v,R_{v,w},P_{v,w},B'(w))&:=\{\chi(B'(v))-u(v)\}\wedge P_{v,w}. \end{align*} We also set $\ell(w)=R_{v,w}-u(v)$. The idea behind these definitions is the following. The values of $\chi(B'(v))$ and $\chi(B'(w))$ impose restrictions on the number of non-black particles which can occupy vertices $v$ and $w$ after the update. For example, the number of red particles on $v$ cannot exceed $\chi(B'(v))$; this gives an upper limit of $u(v)$ for the number of red particles that we can place onto $v$ after the update.
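In code, the limit integers just defined can be computed directly. The following minimal Python sketch is our own and purely illustrative (we pass $\chi(B'(v))$ and $\chi(B'(w))$ as plain integers \texttt{chi\_v}, \texttt{chi\_w}; the lower limits are taken nonnegative):

```python
def red_limits(R_vw, chi_v, chi_w):
    """Lower/upper limits for the number of red particles placed on v.

    R_vw: total reds on the ringing edge; chi_v, chi_w: the non-black
    capacities chi(B'(v)), chi(B'(w)) after the black particles move.
    """
    l_v = max(R_vw - chi_w, 0)   # reds that *must* go to v (w cannot hold them)
    u_v = min(chi_v, R_vw)       # reds that *can* fit on v
    return l_v, u_v

def pink_limits(R_vw, P_vw, chi_v, chi_w):
    """Lower/upper limits for the number of pink particles placed on v,
    given the red limits are already fixed."""
    l_v, u_v = red_limits(R_vw, chi_v, chi_w)
    u_w = min(chi_w, R_vw)                  # = R_vw - l_v
    lP_v = max(P_vw - (chi_w - u_w), 0)     # lower limit, at least 0
    uP_v = min(chi_v - u_v, P_vw)
    return lP_v, uP_v
```

Here $u(v)-\ell(v)$ counts the flexible reds; the identities $u(w)=R_{v,w}-\ell(v)$ and $P_{v,w}-\ell^P(v)=u^P(w)$ follow directly from these formulas.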
On the other hand, the number of red particles on $w$ after the update cannot exceed $\chi(B'(w))$, which in turn means that the number of red particles on $v$ has to be at least $R_{v,w}-\chi(B'(w))$, giving a lower limit of $\ell(v)$. The difference between these values, i.e.\! $u(v)-\ell(v)=R_{v,w}-\ell(v)-\ell(w)$, is the number of \emph{flexible reds}, that is, the number of red particles which can be either on $v$ or $w$ after the update. It is these flexible reds that we get a chance to pinken, with pink particles representing particles which are half red and half white. Once the values of $u(v)$ and $u(w)$ have been determined, based on how the black particles move, we can then place the pre-existing pink particles. We again have to ensure that the number of non-black particles on $v$ does not exceed $\chi(B'(v))$, and now there could be at most $u(v)$ red particles, so we restrict to placing at most $\chi(B'(v))-u(v)$ pink particles; this gives $u^P(v)$. There is a similar restriction on vertex $w$ and through this we obtain a lower bound $\ell^P(v)$ on the number of pink particles to place onto $v$. The role of $\theta(v)$ is to give the probability of placing the lower limits on $v$, with $1-\theta(v)$ then the probability of placing the upper limits on $v$. We choose $\theta(v)$ to satisfy \begin{equation} \begin{aligned}\label{e:thetadef} &\theta(v)[\ell(v)+\frac12\ell^P(v)]+(1-\theta(v))[u(v)+\frac12 u^P(v)]\\&=(R(v)+\frac12 P(v))\mathrm{P}_{e,B,B'}(v,v)+(R(w)+\frac12 P(w))\mathrm{P}_{e,B,B'}(w,v)=:m^*(v). \end{aligned} \end{equation} This particular choice of $\theta(v)$ is necessary to ensure that the expected amount of ink at a vertex (given the numbers of black particles on the vertices) matches the probability that the marked particle in the MaBB process is on that vertex (given the location of the non-marked particles), see Lemma~\ref{L:onestep}. The following lemmas show that such a $\theta(v)$ exists and give bounds on its value.
\begin{lemma}[Existence of $\theta(v)$]\label{L:thetaexist} For every $e,v,w,B,B',R,P$, \[ \ell(v)+\frac12\ell^P(v)\le m^*(v)\le u(v)+\frac12 u^P(v), \] and so in particular there exists $\theta(v)\in[0,1]$ satisfying~\eqref{e:thetadef}. \end{lemma} \begin{lemma}[Bounds on $\theta(v)$]\label{L:thetabounds} Fix $e=\{v,w\}$, $B$ and $\eta\in(0,1/2)$. If $B'$ satisfies \[ \mathrm{P}_{e,B,B'}(v,v),\,\mathrm{P}_{e,B,B'}(w,w)\in[\eta,1-\eta], \] then $\theta(v)\in[\eta,1-\eta]$. \end{lemma} The proofs of these lemmas involve lengthy (but straightforward) case analyses and can be found in the Appendix. We now describe in full detail the dynamics of a single step of the chameleon process, including the role of $\theta(v)$. We show how this fits with the graphical construction in the next section. We assume that the pairings of red and white particles have already been made (these happen at the beginning of each round; the next section describes how this is achieved through ``label configurations''). As a preliminary step, we remove all non-black particles from the vertices of the ringing edge and place them into a pooled pile. They will be redistributed to the vertices during the steps described below. We update the black particles from $B$ to $B'$ according to the law of the movement of the non-marked particles in the MaBB (recall that the movement of non-marked particles does not depend on the location of the marked particle). \underline{Step 1:} [Place lower bounds] \\ If there are no red particles on $v$ or $w$, skip straight to Step 4. Otherwise we proceed as follows. We introduce a notion of reserving paired particles in this step and put the lower bounds $\ell(v)$ and $\ell(w)$ of red particles onto vertices $v$ and $w$. In choosing red particles to use for the lower bounds, it is important to avoid as much as possible the paired red particles (i.e.\!
those reds for which their paired white is also on the ringing edge) so that they can be reserved for the set of flexible reds, as only reds which are both flexible and paired can actually be pinkened. Thus, when choosing from the pooled pile the $\ell(v)+\ell(w)$ reds for the lower bounds, we shall first choose the non-paired reds (and the specific choice -- i.e.\! the vertex they started from at this update step and the label if they have one (see the next section for a discussion of when and how to label particles) -- is made uniformly). If there are insufficient non-paired reds, then once they are placed we choose from the paired reds (again uniformly). \underline{Step 2:} [A fork in the road] \\ With probability $2[\theta(v)\wedge (1-\theta(v))]$ proceed to Step 3a; otherwise skip Step 3a and proceed to Step 3b. \underline{Step 3a:} [Create new pink particles] \\ Let $k$ denote the number of paired red particles remaining in the pile after Step 1. Select (uniformly) \begin{align}\label{e:cond}k\wedge\left\{\lceil [(|R|\wedge|W|)+|P|/2]/3\rceil-|P|/2\right\}\end{align} paired red particles from the pile\footnote{By taking this minimum we ensure that the number of pink particles created won't exceed a certain threshold.}, where $|R|:=\sum_{v\in V}R(v)$, and similarly for $|W|$ and $|P|$ (where $W(v)$ denotes the number of white particles on $v$). These are coloured pink and placed onto $v$. The paired white particles of these selected red particles are also coloured pink and placed onto $w$. Any paired red and any non-paired red left in the pile are then each independently placed onto $v$ or $w$ with equal probability. Now proceed to Step 4. \underline{Step 3b:} [Place remaining red particles] \\If $\theta(v)<1/2$ put any remaining red particles from the pile onto $v$. As a result there will now be $u(v)$ red particles on $v$. If instead $\theta(v)\ge 1/2$, put any remaining red particles from the pile onto $w$ (and so there are $u(w)$ red particles on $w$).
Now proceed to Step 4. \underline{Step 4:} [Place old pink particles]\\ There may be some pink particles remaining in the pool (which were already pink at the start of the update). If not, skip to Step 5; otherwise with probability $\theta(v)$, put $\ell^P(v)$ of these pink particles on $v$, and the rest (i.e.\! $u^P(w)$ of them) on $w$. With the remaining probability, instead put $u^P(v)$ of them on $v$ and the rest on $w$. \underline{Step 5:} [Place white particles] \\The only possible particles left in the pile are white particles. These are placed onto $v$ and $w$ so as to ensure that the total number of non-black particles now on $v$ is $\chi(B'(v))$ (which also ensures there are $\chi(B'(w))$ non-black particles on $w$, since $\chi(B(v))+\chi(B(w))=\chi(B'(v))+\chi(B'(w))$ and no particles are created or destroyed). The choice of which white particles are put onto $v$ is made uniformly. The next result shows the usefulness of reserving: it guarantees that a certain number of paired red particles remain in the pool after Step 1. We again defer the proof to the Appendix. Write $R^p_{v,w}$ for the number of paired red particles on $e=\{v,w\}$, and set $R^q_{v,w}=R(v)+R(w)- R^p_{v,w}$. \begin{lemma}\label{L:pairres} If there are $k$ paired red particles on ringing edge $e=\{v,w\}$ then the number that are left remaining in the pooled pile after Step 1 above is at least $k\wedge \chi(B'(v))\wedge \chi(B'(w))$. Further, on the event that $\chi(B'(v))/\chi(B'(w))\in[\gamma,1/\gamma]$ for some $\gamma\in(0,1)$, the probability that any particular paired red particle remains in the pool after Step 1 is at least $\gamma$, uniformly over $B$, $R^q_{v,w}$, $R^p_{v,w}$ and $P$. \end{lemma} The next result gives the expected amount of ink after one step of using this algorithm. We state the result in terms of the first update, given any initial conditions. Recall $m^*(v)$ is defined in~\eqref{e:thetadef}.
\begin{lemma}\label{L:onestep}For any $v,w\in V$, $B,R,P$ initial configurations of black, red and pink particles, and $B'$ the configuration of black particles just after the first update (at time $\tau_1$), \[\mathds{E}[\operatorname{ink}_{\tau_1}(v)\mid B, B', R, P, \{e_1=\{v,w\}\}]=m^*(v).\] \end{lemma} \begin{proof} Recall that each red particle contributes 1 to the ink value of the vertex it occupies, and each pink particle contributes $1/2$. We first consider the contribution to $\operatorname{ink}_{\tau_1}(v)$ which comes from the particles placed onto $v$ in Step 1. This is straightforward: we place $\ell(v)$ particles onto $v$ from the pile and these are all red, thus the contribution to $\operatorname{ink}_{\tau_1}(v)$ from Step 1 is simply $\ell(v)$. At Step 2 we do not place any new particles onto the vertices, but we do decide whether to proceed with Step 3a or Step 3b. If we do Step 3a then each red particle (paired or otherwise) in the pool will in expectation contribute a value of 1/2 to $\operatorname{ink}_{\tau_1}(v)$: either it gets coloured pink as does its paired white and one of them is placed onto $v$, or it stays red and is placed onto $v$ with probability 1/2. If we do Step 3b and $\theta(v)<1/2$ then we place the remaining red particles on $v$ which gives a total of $u(v)$ red on $v$. If instead $\theta(v)\ge 1/2$, we do not place any more red particles on $v$. Finally at Step 4 we place the pre-existing pink particles, each contributing $1/2$ to the ink of the vertex they are placed on. 
Putting these observations together we obtain \begin{align*} &\mathds{E}[\operatorname{ink}_{\tau_1}(v)\mid B,B',R,P,\{e_1=\{v,w\}\}]\\&=\ell(v)+2[\theta(v)\wedge(1-\theta(v))]\frac{u(v)-\ell(v)}{2}+(1-2[\theta(v)\wedge(1-\theta(v))])\Indic{\theta(v)<1/2}(u(v)-\ell(v))\\ &\phantom{=}+\theta(v)\frac{\ell^P(v)}{2}+(1-\theta(v))\frac{u^P(v)}{2}\\ &=\ell(v)+\Indic{\theta(v)<1/2}\big\{\theta(v)(u(v)-\ell(v))+(1-2\theta(v))(u(v)-\ell(v))\big\}\\&\phantom{=}+\Indic{\theta(v)\ge1/2}\big\{(1-\theta(v))(u(v)-\ell(v))\big\}+\theta(v)\frac{\ell^P(v)}{2}+(1-\theta(v))\frac{u^P(v)}{2}\\ &=\ell(v)+(1-\theta(v))(u(v)-\ell(v))+\theta(v)\frac{\ell^P(v)}{2}+(1-\theta(v))\frac{u^P(v)}{2}\\ &=\theta(v)\left(\ell(v)+\frac{\ell^P(v)}{2}\right)+(1-\theta(v))\left(u(v)+\frac{u^P(v)}{2}\right)\\ &=m^*(v).\qedhere \end{align*} \end{proof} \subsection{The evolution of the chameleon process} We define a ``particle configuration'' to be a function $V\to\mathds{N}_0$, which, in practice, will be the configuration of red, black, pink or white particles. For $S$ a particle configuration we define $|S|:=\sum_{v\in V}S(v)$. We also define a ``label configuration'' to be a function $[a(m-1)+bn]\to V\cup\{0\}$, which will give the vertex occupied by the labelled particle of a certain colour (and which has value 0 if there is no particle of a given label). We discuss further this labelling now. At the start of every round we shall pair some red particles with an equal number of white particles. The way we do this, and how we track the movement of the paired particles, is by labelling paired red and white particles with a unique number. Suppose there are $r$ red particles at the start of the $\ell$th round, and this is less than the number of white particles (otherwise, reverse roles of red and white in the following). 
We label the red particles with labels $1,\ldots, r$ such that for any pair of vertices $v$ and $w$, the label of any red particle on vertex $v$ is less than the label of any red particle on vertex $w$ if and only if $v<w$. In other words, we label red particles on vertex $1$ first, then label red particles on $2$, and continue until we have labelled all $r$ red particles. We similarly label $r$ white particles with the (same) rule that for any pair of vertices $v$ and $w$, the label of any white particle on vertex $v$ is less than the label of any white particle on vertex $w$ if and only if $v<w$. A labelled red particle and a labelled white particle are pairs if they have the same label. For every time, we will have two label configurations: one for the red particles and one for the white. Suppose $L$ is such a label configuration for the red particles at a certain time. Then the number of labelled red particles at this time is equal to \[\max\{ i: \,1\le i\le a(m-1)+bn, L(i)\neq0\},\] so in particular, $L(i)=0$ for any $i$ larger than the number of labelled red particles. There are several aspects of the update rule which require external randomness: in Step 1, to choose which particles make up the lower bounds, in Step 2 to determine whether we proceed with Step 3a or Step 3b, in Step 3a choosing which paired red particles to pinken and how to place the remaining red particles in the pile, in Step 4 how to place the old pink particles, and in Step 5 to place the white particles. To fit the chameleon process into the framework of the graphical construction, we shall use random variables $\{U^c_i\}_{i\ge1}$ as the source of the needed randomness with $U^c_i$ used at time $\tau_i$ (and we shall not make it explicit \emph{how} this is done). 
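As an aside, the vertex-ordered labelling rule described above is simple to implement. The following sketch is our own illustrative Python (a configuration maps vertices, here integers, to particle counts):

```python
def label_configuration(config, num_labels):
    """Vertex-ordered labelling: labels 1..num_labels are assigned to
    particles vertex by vertex, in increasing vertex order, so that
    label j sits on the vertex min{l : sum_{k <= l} config(k) >= j}.
    Returns the label configuration as a dict {label: vertex}."""
    L = {}
    j = 1
    for v in sorted(config):
        for _ in range(config[v]):
            if j > num_labels:
                return L
            L[j] = v
            j += 1
    return L
```

For instance, with two particles of the relevant colour on vertex 1 and one on vertex 3, labels 1 and 2 are assigned to vertex 1 and label 3 to vertex 3.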
Further, and importantly, we shall do this in a way such that the randomness used at Step 1 is independent of the randomness used at Step 2 (it is standard that this is possible, see for example~\citet[Section 4.6]{williams1991probability}). The random variables $\{U^b_i\}_{i\ge1}$ are used to determine how the black particles move, so that they move in the same way as the non-marked particles in the MaBB. For independent uniforms $U$, $U'$ on $[0,1]$, an edge $e$, particle configurations $B$ of black particles, $P$ of pink particles, and $R$ of red particles, and label configurations $L^R$ for red particles and $L^W$ for white particles, we define $\mathrm{C}(U,U',e,B,R,P,L^R,L^W)$ to be a quintuple whose first component equals $\mathrm{MaBB}(U,e,B)$, whose second (resp.\! third) component denotes the configuration of red (resp.\! pink) particles, and whose fourth (resp.\! fifth) component denotes the label configuration of red (resp.\! white) particles, just after edge $e$ rings, given that before this edge rang the configurations of black, red and pink particles were $B$, $R$ and $P$, the label configurations of red and white particles were $L^R$ and $L^W$, and $U'$ was used as the source of randomness for Steps 1--5 as described above (in practice we shall take $U'$ to be $U^c_i$ for some $i\ge1$).
\begin{defn}[Chameleon process]\label{d:cham} The chameleon process with round length $T>0$ and associated with a MaBB initialised at $(\xi,x)$ is the quintuple $(B_t^\mathrm{C},R_t^\mathrm{C},P_t^\mathrm{C},L^R_t,L^W_t)_{t\ge0}$ where $B_t^\mathrm{C},R_t^\mathrm{C}$ and $P_t^\mathrm{C}$ are particle configurations and $L^R_t$, $L^W_t$ are label configurations for each $t\ge0$, with the following properties: \begin{enumerate} \item (Initial values) $B_0^\mathrm{C}(v)=\xi(v)$, $R_0^\mathrm{C}(v)=\chi(\xi(x))\delta_{x}(v)$, and $P_0^\mathrm{C}(v)=0$, for all $v\in V$, \[L_0^R(i)=\begin{cases}x &\mbox{for }1\le i\le N_0:=\chi(\xi(x))\wedge[a(m-1)+bn-\chi(\xi(x))],\\0&\mbox{otherwise,}\end{cases}\] and \[ L_0^W(i)=\begin{cases}\min\Big\{\ell\in\{1,\ldots,n\}\setminus\{x\}:\,\sum_{\substack{k\in[\ell]:\\k\neq x}}\chi(\xi(k))\ge i\Big\} &\mbox{for }1\le i\le N_0,\\0&\mbox{otherwise.}\end{cases} \] \item (Updates during rounds) For each $i\ge1$, \begin{align*} (B_{\tau_i}^\mathrm{C},R_{\tau_i}^\mathrm{C},P_{\tau_i}^\mathrm{C},L_{\tau_i}^R,L_{\tau_i}^W)&=\mathrm{C}(U_i^b,U_i^c,e_i,B_{\tau_{i}-}^\mathrm{C},R_{\tau_{i}-}^\mathrm{C},P_{\tau_{i}-}^\mathrm{C},L_{\tau_{i}-}^R,L_{\tau_{i}-}^W). \end{align*} \item (Particle configuration updates at end of rounds) For each $i\ge1$ such that \begin{align}\label{e:icond}\sum_{v\in V}P_{iT-}^\mathrm{C}(v)\ge\min\left\{\sum_{v\in V}R_{iT-}^\mathrm{C}(v),\sum_{v\in V}\left(\chi(B_{iT-}^\mathrm{C}(v))-R_{iT-}^\mathrm{C}(v)-P_{iT-}^\mathrm{C}(v)\right)\right\},\end{align} we set \[ B_{iT}^\mathrm{C}(v)=B_{iT-}^\mathrm{C}(v),\,\, R_{iT}^\mathrm{C}(v)=R_{iT-}^\mathrm{C}(v)+d_iP_{iT-}^\mathrm{C}(v),\,\, P_{iT}^\mathrm{C}(v)=0\quad\text{for all }v\in V; \]and if $i$ does not satisfy~\eqref{e:icond} then we set \[ B_{iT}^\mathrm{C}(v)=B_{iT-}^\mathrm{C}(v),\,\, R_{iT}^\mathrm{C}(v)=R_{iT-}^\mathrm{C}(v),\,\, P_{iT}^\mathrm{C}(v)=P_{iT-}^\mathrm{C}(v)\quad\text{for all }v\in V.
\] \item (Label configuration updates at end of rounds) For each $i\ge1$ we define \[N_i:=\sum_v R_{iT}^\mathrm{C}(v)\wedge\Big[a(m-1)+bn-\sum_v \left(R_{iT}^\mathrm{C}(v)+P_{iT}^\mathrm{C}(v)\right)\Big]\] and set \begin{align*}L_{iT}^R(j)&=\begin{cases}\min\Big\{\ell\in[n]:\,\sum_{k=1}^\ell R^\mathrm{C}_{iT}(k)\ge j\Big\}&\mbox{for }1\le j\le N_i,\\ 0&\mbox{otherwise,}\end{cases}\\ L_{iT}^W(j)&=\begin{cases}\min\Big\{\ell\in[n]:\,\sum_{k=1}^\ell \left(\chi(B^\mathrm{C}_{iT}(k))-R^\mathrm{C}_{iT}(k)-P^\mathrm{C}_{iT}(k)\right)\ge j\Big\}&\mbox{for }1\le j\le N_i,\\0&\mbox{otherwise.}\end{cases}\end{align*} \end{enumerate} \end{defn} We can obtain the number of white particles $W_t^\mathrm{C}(v)$ at time $t$ on a vertex $v$ using $W_t^\mathrm{C}(v)+R_t^\mathrm{C}(v)+P_t^\mathrm{C}(v)=\chi(B_t^\mathrm{C}(v))$. We write $\mathcal{C}(m)$ for the space of possible configurations of the chameleon process in which the underlying MaBB has $m-1$ non-marked particles. We note from this definition that the process also updates at the ends of rounds, i.e.\! at times of the form $iT$ for $i\ge1$. At these times if the number of pink particles is at least the number of red or white particles (i.e.\! if \eqref{e:icond} holds), then we have a \emph{depinking} (and call this time a \emph{depinking time}) in which all pink particles are removed from the system. To do this, we use the coin flips $d_i$ given in the graphical construction. If time $iT$ is a depinking time then we re-colour all pink particles red simultaneously if $d_i=1$, otherwise if $d_i=0$ we re-colour them all white. A simulation of the chameleon process for the first few update times appears after the Appendix. \section{Properties of the chameleon process}\label{S:champrops} \subsection{Evolution of ink}\label{SS:evol} In this section we suppose that the chameleon process considered is associated with a MaBB initialised at $(\xi,x)$. 
\begin{lemma}\label{L:inkonly} The total ink in the system only changes at depinking times. \end{lemma} \begin{proof} This is a straightforward observation, as the only particles that change colour at an update time that is not a depinking time are paired red and white particles. But since we colour each particle in the pair pink, the total ink does not change: before pinkening the pair contributes $1+0=1$, and afterwards it contributes $\frac12+\frac12=1$. \end{proof} Let $\widehat{\operatorname{ink}}_j$ denote the ink in the system just after the $j$th depinking time and $D_j$ the time of the $j$th depinking. The process $\{\widehat{\operatorname{ink}}_j\}_{j\ge1}$ evolves as a Markov chain; the following result gives its transition probabilities. This result is similar to \citet[Proposition 7.3]{olive} for the chameleon process used there. \begin{lemma}\label{L:inkmc} For $j\in\mathds{N}$, $\widehat{\operatorname{ink}}_{j+1}\in\{\widehat{\operatorname{ink}}_j-\Delta(\widehat{\operatorname{ink}}_j),\widehat{\operatorname{ink}}_j+\Delta(\widehat{\operatorname{ink}}_j)\}$ a.s., where for each $r\in\mathds{N}$, \[ \Delta(r):=\left\lceil\frac{\min\{r,a(m-1)+bn-r\}}{3}\right\rceil. \] Moreover, conditionally on $\{\widehat{\operatorname{ink}}_\ell\}_{\ell=0}^j$, each possibility has probability $1/2$. \end{lemma} \begin{proof} Fix $j\in\mathds{N}$. After each depinking is performed there are no pink particles left in the system, and so $\widehat{\operatorname{ink}}_j$ is equal to the number of red particles at time $D_j$, $|R_{D_j}^\mathrm{C}|=\sum_vR_{D_j}^\mathrm{C}(v)$. As the number of non-black particles is fixed at $a(m-1)+bn$, it follows that the number of white particles at time $D_j$ is $|W_{D_j}^\mathrm{C}|=\sum_vW_{D_j}^\mathrm{C}(v)=a(m-1)+bn-\widehat{\operatorname{ink}}_j$. Observe that every time a red and white particle pair are pinkened, we lose one red and one white, and gain two pink particles.
It can be easily checked that for $p$ and $s$ positive integers with $p$ even, \[p<s\Leftrightarrow \lceil (s+p/2)/3\rceil-p/2>0.\] In other words, while the number of pink particles remains less than the minimum of the number of red and white, the chameleon process will still create new pink particles (recall the number of pink particles created in Step 3a of the chameleon process); conversely, the chameleon process will stop producing new pink particles as soon as the number of pink particles is at least the minimum of the number of red and white particles. Moreover, once it stops producing new pink particles, the number of pink created is the smallest number which ensures that the number of pink is at least the number of red or white; we can see this by observing that \[ p+2\left(\lceil (s+p/2)/3\rceil-p/2\right)=2\lceil (s+p/2)/3\rceil \] is the smallest even integer which is at least \[ s-\left(\lceil (s+p/2)/3\rceil-p/2\right)=(s+p/2)-\lceil (s+p/2)/3\rceil. \] Thus the number of pink particles created just before the next depinking time (at time $D_{j+1})$ is the smallest $p$ even satisfying $p\ge |W_{D_j}^\mathrm{C}|-p/2$ or $p\ge |R_{D_j}^\mathrm{C}|-p/2$, which is $p=2\Delta(\widehat{\operatorname{ink}}_j)$. At the depinking time $D_{j+1}$, the pink particles either all become white (and $\widehat{\operatorname{ink}}_{j+1}=\widehat{\operatorname{ink}}_j-\Delta(\widehat{\operatorname{ink}}_j)$) or they all become red (and $\widehat{\operatorname{ink}}_{j+1}=\widehat{\operatorname{ink}}_j+\Delta(\widehat{\operatorname{ink}}_j)$). Which event happens depends just on the outcome of the independent fair coin flip $d_{j+1}$. \end{proof} \begin{lemma}\label{L:inkmart}The total ink in the system is a martingale and is absorbed in finite time in either 0 or $a(m-1)+bn$. Further, the event \[ \mathrm{Fill}:=\left\{\lim_{t\to\infty}\operatorname{ink}_t=a(m-1)+bn\right\} \] has probability $\chi(\xi(x))/(a(m-1)+bn)$. 
\end{lemma} \begin{proof} The fact that total ink is a martingale follows from Lemma~\ref{L:inkmc} and the behaviour of the chameleon process at depinking times. The probability of event Fill then follows by the martingale property and the dominated convergence theorem (total ink is bounded by $a(m-1)+bn$), as in the proof of Lemma 7.1 of \cite{olive}. \end{proof} \begin{corollary}\label{cor:fill} For $\zeta\in \Omega_{G,m-1}$ and $t\ge0$, \[ \mathds{P}(\{B_t^\mathrm{C}=\zeta\}\cap\mathrm{Fill})=\mathds{P}(B_t^\mathrm{C}=\zeta)\mathds{P}(\mathrm{Fill})=\mathds{P}(B_t^\mathrm{C}=\zeta)\frac{\chi(\xi(x))}{a(m-1)+bn}. \] \end{corollary} \begin{proof} This follows from Lemma~\ref{L:inkmart} and the fact that event Fill only depends on the outcomes of the coin flips $\{d_i\}_i$ whereas the movement of the black particles is independent of these coin flips. \end{proof} \begin{lemma}\label{L:limits} For all $t\ge0$ and $v\in V$, $\operatorname{ink}_t(v)\le \chi(B^\mathrm{C}_t(v))$. \end{lemma} \begin{proof}This follows simply from the fact that the number of non-black particles on a vertex with $B$ black particles is always $\chi(B)$. This is true at time 0, and Steps 1 to 5 guarantee this at update times which are not depinkings. Finally, at depinking times we do not change the number of particles on vertices, only their colour. Observe also that $\operatorname{ink}_t(v)=\chi(B^\mathrm{C}_t(v))$ if at time $t$ all non-black particles on $v$ are red. \end{proof} The next result shows that, during a single round and until they meet, a pair of paired red-white particles move (marginally) as independent random walks on the graph, which stay in place with probability $1/2$ when an incident edge rings. 
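Before turning to that result, we note that the depinking dynamics of Lemmas~\ref{L:inkmc} and~\ref{L:inkmart} are straightforward to simulate. The sketch below is our own illustrative Python (with \texttt{total} playing the role of $a(m-1)+bn$); it runs the chain $\widehat{\operatorname{ink}}_j$ to absorption and estimates the probability of $\mathrm{Fill}$:

```python
import math
import random

def delta(r, total):
    """Jump size Delta(r) = ceil(min(r, total - r) / 3) of the ink chain."""
    return math.ceil(min(r, total - r) / 3)

def run_to_absorption(r0, total, rng):
    """ink_{j+1} = ink_j +/- Delta(ink_j), each with probability 1/2,
    until absorption at 0 or total; Delta >= 1 in the interior, so the
    bounded martingale is absorbed in finite time."""
    r = r0
    while 0 < r < total:
        step = delta(r, total)
        r += step if rng.random() < 0.5 else -step
    return r

rng = random.Random(0)
total, r0, trials = 12, 3, 20000
fills = sum(run_to_absorption(r0, total, rng) == total for _ in range(trials))
# Optional stopping predicts P(Fill) = r0 / total.
```

With $r_0=3$ and \texttt{total}$\,=12$, the empirical frequency \texttt{fills/trials} should be close to the martingale prediction $3/12=0.25$.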
For two independent random walks $X,Y$ on a graph $G$ (each of which move by jumping from their current vertex $v$ to a neighbour $w$ when edge $\{v,w\}$ rings), we write $M^{X,Y}$ for their meeting time -- the first time they are on neighbouring vertices, and the edge between them rings for one of the walks (they each have their own independent sequence of edge-rings). If the walks start on the same vertex, we say their meeting time is 0. We let $\hat G$ denote the graph $(V,E,\{r_e/2\}_{e\in E})$, that is, we halve the rates on the edges of graph $G$. \begin{lemma}\label{L:beforemeet}Fix $u,v\in V$, $u\neq v$, and $i\in\mathds{N}_0$. Let $X$ and $Y$ be independent random walks on $\hat G$ with $X_0=u$, $Y_0=v$. For any $1\le j\le \sum_vR_{iT}^\mathrm{C}(v)\wedge \sum_vW_{iT}^\mathrm{C}(v)$, conditionally on $L_{iT}^R(j)=u$ and $L_{iT}^W(j)=v$, for all $t\in[iT,iT+\{T\wedge M^{X,Y}\})$, we have \[ (L_t^R(j),L_t^W(j))\stackrel{d}{=}(X_{t-iT},Y_{t-iT}). \] \end{lemma} \begin{proof}We make use of Property~\ref{a:sym}. Suppose edge $e=\{v,w\}$ rings during time interval $[iT,(i+1)T)$ and the black particles update from configuration $B$. Suppose $B'$ is a possible configuration of the black particles as a result of the update. Let $\tilde B$ be the configuration of black particles with $\tilde B(v)=B'(w)$, $\tilde B(w)=B'(v)$ and for $z\notin e$, $\tilde B(z)=B'(z)=B(z)$. As black particles update as non-marked particles in MaBB, $B'$ and $\tilde B$ are equally likely to be the configuration of black particles after the update, by Property~\ref{a:sym}. We claim that the probability that a labelled red particle (similarly labelled white particle) will be on $v$ after the update if configuration $B'$ is chosen as the new black configuration is the same as the probability the same labelled red particle (respectively, labelled white particle) will be on $w$ if configuration $\tilde B$ is chosen. 
This will suffice since prior to meeting, a paired red and white particle will never be on the same ringing edge. This claim will follow from showing that $\ell(v)=\tilde\ell(w)$, $\ell^P(v)=\tilde\ell^P(w)$, $u(v)=\tilde u(w)$, $u^P(v)=\tilde u^P(w)$ and $\theta(v)=\tilde\theta(w)$, where the notation with tilde refers to the update in which $\tilde B$ is chosen, and notation without the tilde to the update in which $B'$ is chosen. The identities regarding the lower and upper values are immediate from their definitions. To show $\theta(v)=\tilde\theta(w)$, observe that \begin{equation} \begin{aligned}\label{e:tildetheta} &\tilde\theta(v)[\tilde\ell(v)+\frac12\tilde\ell^P(v)]+(1-\tilde\theta(v))[\tilde u(v)+\frac12 \tilde u^P(v)]\\&=(R(v)+\frac12 P(v))\mathrm{P}_{e,B,\tilde B}(v,v)+(R(w)+\frac12 P(w))\mathrm{P}_{e,B,\tilde B}(w,v). \end{aligned} \end{equation} But by Property~\ref{a:sym}, we have \begin{align*} \mathrm{P}_{e,B,\tilde B}(v,v)&=\frac{\tilde B(v)+1}{B(v)+B(w)+1}\frac{\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{B,v},C_{\tilde B,v})}{\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(B,\tilde B)}\\ &=\frac{ B'(w)+1}{B(v)+B(w)+1}\frac{\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{B,v},C_{B',w})}{\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(B,B')}\\ &=\mathrm{P}_{e,B, B'}(v,w), \end{align*} and similarly $\mathrm{P}_{e,B,\tilde B}(w,v)=\mathrm{P}_{e,B,B'}(w,w)$. Plugging these into~\eqref{e:tildetheta} shows that $\tilde\theta(v)$ solves the same equation as $\theta(w)$, hence they are equal; similarly $\theta(v)=\tilde\theta(w)$. \end{proof} \subsection{From ink to total variation} In this section we show a crucial connection between the MaBB initialised at $(\xi,x)$ and its associated chameleon process. To emphasise the dependence of $\operatorname{ink}_t$ on the initial configuration of the MaBB, we shall sometimes write it as $\operatorname{ink}_t^{(\xi,x)}$. 
\begin{proposition}\label{P:MaBBtoC} Let $(\xi_t,m_t)$ denote the time-$t$ configuration of a MaBB initialised at $(\xi,x)\in\Omega'_{G,m}$. For every $t\ge0$ and $(\zeta,y)\in \Omega'_{G,m}$, \[ \mathds{P}\big((\xi_t,m_t)=(\zeta,y)\big)=\mathds{E}\left[\frac{\operatorname{ink}_t^{(\xi,x)}(y)}{\chi(\xi(x))}\Indic{B_t^\mathrm{C}=\zeta}\right]. \] \end{proposition} The proof of Proposition~\ref{P:MaBBtoC} is similar in spirit to the proof of Lemma 1 of \cite{morris}. We introduce a new process $M^*$ which will also be constructed using the graphical construction. This process is similar to the chameleon process in that vertices are occupied by particles of various colours (black, red, pink and white). As in the chameleon process, if there are $B$ black particles on a vertex, then there are $\chi(B)$ non-black particles. The process $M^*$ evolves exactly as the chameleon process except that we replace Step 3a with Step 3a$^\prime$, described below. Further, $M^*$ does not have any updates at the ends of rounds (so in particular no depinking times). As a result the numbers of red, white and pink particles remain constant over time. We use the same terminology (e.g.\! ink) for process $M^*$. \underline{Step 3a$^\prime$:} Any red particles left in the pile are each independently placed onto $v$ or $w$ equally likely.
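The fair-coin placement in Step 3a$^\prime$ is what allows the one-step ink computation to go through unchanged: each red particle left in the pile contributes $1/2$ in expectation to the ink of $v$, exactly as a pinkened pair does in Step 3a. A toy enumeration (our own, purely illustrative) confirms this accounting, which is the $1/2$-per-particle step used in the proof of Lemma~\ref{L:onestep}:

```python
from itertools import product

def expected_reds_on_v(k):
    """Expected number of the k leftover reds placed on v when each is
    sent to v or w independently with probability 1/2 (Step 3a'),
    computed by enumerating all 2^k equally likely placements."""
    placements = list(product((0, 1), repeat=k))  # 1 = placed on v
    return sum(map(sum, placements)) / len(placements)
```

As expected, $k$ leftover reds contribute $k/2$ on average.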
It can be shown (following the same proof) that Lemma~\ref{L:onestep} holds also for $M^*$: \begin{lemma}\label{L:onestepM*} For any $v,w\in V$, $B,R,P$ initial configurations of black, red and pink particles, and $B'$ the configuration of black particles just after the first update (at time $\tau_1$), \[\mathds{E}^{M^*}[\operatorname{ink}_{\tau_1}(v)\mid B, B', R, P, \{e_1=\{v,w\}\}]=m^*(v).\]\end{lemma} \begin{lemma}\label{L:MipstoM*} Fix $(\xi,x)\in\Omega'_{G,m}$, random variable $\operatorname{ink}_0(y)$ taking values in $[0,\chi(\xi(y))]\cap(\mathds{N}_0/2)$ for each $y\in V$, and denote by $(\xi_t,m_t)$ the time-$t$ configuration of a MaBB which starts from a random configuration $(\xi_0,m_0)$ satisfying almost surely \[ \forall\, y\in V\quad \mathds{P}(m_0=y\mid \xi_0)=\mathds{E}^{M^*}\left[\frac{\operatorname{ink}_0(y)}{\chi(\xi(x))}\bigm| B_0^\mathrm{C}\right], \] where $M^*$ starts with configuration of black particles $B_0^\mathrm{C}=\xi_0$ and with initial ink value of $\operatorname{ink}_0(y)$ at each $y\in V$. Then for all $t\ge 0$, almost surely \[ \forall\, y\in V\quad\mathds{P}(m_t=y\mid (\xi_s)_{0\le s\le t})=\mathds{E}^{M^*}\left[\frac{\operatorname{ink}_t(y)}{\chi(\xi(x))}\bigm| (B_s^\mathrm{C})_{0\le s\le t}\right]. \] \end{lemma} \begin{proof} As $(\xi_s)_{s\ge 0}$ and $(B^\mathrm{C}_s)_{s\ge0}$ are constructed using the same $(U_r^b)_{r=1}^\infty$, they are equal almost surely. It suffices to show the statement at the update times. We shall use induction. The base case (time $\tau_0=0$) follows from the assumption. Fix $r\in\mathds{N}$ and suppose the result holds up to (and including) time $\tau_{r-1}$. 
Observe that by the strong Markov property and Lemma~\ref{L:onestepM*} (and recall the choice of $\theta$ from~\eqref{e:thetadef} and also that $B^\mathrm{C}_{\tau_r}=\mathrm{MaBB}(U_r^b,e_r,B^\mathrm{C}_{\tau_{r-1}})$), for any $y\in V$, almost surely \begin{align*} &\mathds{E}^{M^*}[\operatorname{ink}_{\tau_r}(y)\mid U_r^b,e_r,B^\mathrm{C}_{\tau_{r-1}},R_{\tau_{r-1}}^\mathrm{C},P_{\tau_{r-1}}^\mathrm{C}]\\&=\Indic{y\in e_r}\bigg\{\left[R_{\tau_{r-1}}^\mathrm{C}(y)+\frac12 P_{\tau_{r-1}}^\mathrm{C}(y)\right]\,\mathrm{P}_{e_r,B^\mathrm{C}_{\tau_{r-1}},B^\mathrm{C}_{\tau_r}}(y,y)\\ &\phantom{=}+\left[R_{\tau_{r-1}}^\mathrm{C}(e_r\setminus\{y\})+\frac12 P_{\tau_{r-1}}^\mathrm{C}(e_r\setminus\{y\})\right]\,\mathrm{P}_{e_r,B^\mathrm{C}_{\tau_{r-1}},B^\mathrm{C}_{\tau_r}}(e_r\setminus\{y\},y)\bigg\} +\Indic{y\notin e_r}\operatorname{ink}_{\tau_{r-1}}(y)\\ &=\Indic{y\in e_r}\Big\{\operatorname{ink}_{\tau_{r-1}}(y)\mathrm{P}_{e_r,B^\mathrm{C}_{\tau_{r-1}},B^\mathrm{C}_{\tau_r}}(y,y)+\operatorname{ink}_{\tau_{r-1}}(e_r\setminus\{y\})\mathrm{P}_{e_r,B^\mathrm{C}_{\tau_{r-1}},B^\mathrm{C}_{\tau_r}}(e_r\setminus\{y\},y)\Big\}\\&\phantom{=}+\Indic{y\notin e_r}\operatorname{ink}_{\tau_{r-1}}(y). 
\end{align*} Taking an expectation, the first term above becomes \begin{align*} &\mathds{E}^{M^*}\left[\Indic{y\in e_r}\operatorname{ink}_{\tau_{r-1}}(y)\mathrm{P}_{e_r,B^\mathrm{C}_{\tau_{r-1}},B^\mathrm{C}_{\tau_r}}(y,y)\bigm|(B^\mathrm{C}_s)_{s\le\tau_r}\right] \\&=\mathds{E}^{M^*}\left[\mathds{E}^{M^*}\left[\Indic{y\in e_r}\operatorname{ink}_{\tau_{r-1}}(y)\mathrm{P}_{e_r,B^\mathrm{C}_{\tau_{r-1}},B^\mathrm{C}_{\tau_r}}(y,y)\bigm|e_r,\,(B^\mathrm{C}_s)_{s\le\tau_r}\right]\Bigm|(B^\mathrm{C}_s)_{s\le\tau_r}\right]\\ &=\mathds{E}^{M^*}\left[\Indic{y\in e_r}\mathrm{P}_{e_r,B^\mathrm{C}_{\tau_{r-1}},B^\mathrm{C}_{\tau_r}}(y,y)\mathds{E}^{M^*}\left[\operatorname{ink}_{\tau_{r-1}}(y)\bigm|(B^\mathrm{C}_s)_{s\le\tau_{r-1}}\right]\Bigm|(B^\mathrm{C}_s)_{s\le\tau_r}\right]\\ &=\mathds{E}^{M^*}\left[\chi(\xi(x))\mathds{P}(m_{\tau_{r-1}}=y\mid (\xi_s)_{s\le \tau_{r-1}})\,\Indic{y\in e_r}\mathrm{P}_{e_r,B^\mathrm{C}_{\tau_{r-1}},B^\mathrm{C}_{\tau_r}}(y,y)\Bigm|(B^\mathrm{C}_s)_{s\le\tau_r}\right], \end{align*}using in the penultimate step that almost surely \[\mathds{E}^{M^*}\left[\operatorname{ink}_{\tau_{r-1}}(y)\bigm|e_r, \,(B^\mathrm{C}_s)_{s\le\tau_{r}}\right]=\mathds{E}^{M^*}\left[\operatorname{ink}_{\tau_{r-1}}(y)\bigm|(B^\mathrm{C}_s)_{s\le\tau_{r-1}}\right],\] since $B^\mathrm{C}_{\tau_r}=\mathrm{MaBB}(U_r^b,e_r,B^\mathrm{C}_{\tau_{r-1}})$ and $\operatorname{ink}_{\tau_{r-1}}(y)$ is independent of $e_r$ and $U_r^b$; and using the induction hypothesis in the last step. 
Similarly, \begin{align*} &\mathds{E}^{M^*}\left[\Indic{y\in e_r}\operatorname{ink}_{\tau_{r-1}}(e_r\setminus\{y\})\mathrm{P}_{e_r,B^\mathrm{C}_{\tau_{r-1}},B^\mathrm{C}_{\tau_r}}(e_r\setminus\{y\},y)\bigm|(B^\mathrm{C}_s)_{s\le\tau_r}\right] \\&=\mathds{E}^{M^*}\left[\chi(\xi(x))\mathds{P}(m_{\tau_{r-1}}=e_r\setminus\{y\}\mid (\xi_s)_{s\le \tau_{r-1}},\,e_r)\,\Indic{y\in e_r}\mathrm{P}_{e_r,B^\mathrm{C}_{\tau_{r-1}},B^\mathrm{C}_{\tau_r}}(e_r\setminus\{y\},y)\Bigm|(B^\mathrm{C}_s)_{s\le\tau_r}\right], \end{align*} and thus \begin{align}\label{e:M*ink} &\mathds{E}^{M^*}\left[\frac{\operatorname{ink}_{\tau_r}(y)}{\chi(\xi(x))}\bigm| (B^\mathrm{C}_s)_{s\le \tau_r}\right]\\\notag &=\mathds{E}^{M^*}\Big[\mathds{P}(m_{\tau_{r-1}}=y\mid (\xi_s)_{s\le \tau_{r-1}})\left[\Indic{y\in e_r}\mathrm{P}_{e_r,\xi_{\tau_{r-1}},\xi_{\tau_r}}(y,y)+\Indic{y\notin e_r}\right]\\ &\phantom{=\mathds{E}\Big[}+\Indic{y\in e_r}\mathds{P}(m_{\tau_{r-1}}=e_r\setminus\{y\}\mid (\xi_s)_{s\le \tau_{r-1}},\,e_r)\,\mathrm{P}_{e_r,\xi_{\tau_{r-1}},\xi_{\tau_r}}(e_r\setminus\{y\},y)\Bigm|(\xi_s)_{s\le\tau_r}\Big].\notag \end{align} On the other hand, using the definition of MaBB$^*$ from~\eqref{e:MaBBdef}, \begin{align*} &\mathds{P}(m_{\tau_r}=y\mid(\xi_s)_{s\le\tau_r})\\ &=\mathds{P}(\mathrm{MaBB}^*(U_r^b,U^c_r,e_r,\xi_{\tau_{r-1}},m_{\tau_{r-1}})=y\mid(\xi_s)_{s\le\tau_r})\\ &=\mathds{P}\bigg(m_{\tau_{r-1}}\left[\Indic{m_{\tau_{r-1}}\notin e_r}+\Indic{m_{\tau_{r-1}}\in e_r}\Indic{U_r^c<\mathrm{P}_{e_r,\xi_{\tau_{r-1}},\xi_{\tau_r}}(m_{\tau_{r-1}},m_{\tau_{r-1}})}\right]\\&\phantom{=\mathds{P}\bigg(}+(e_r\setminus\{m_{\tau_{r-1}}\})\Indic{m_{\tau_{r-1}}\in e_r}\Indic{U_r^c\ge\mathrm{P}_{e_r,\xi_{\tau_{r-1}},\xi_{\tau_r}}(m_{\tau_{r-1}},m_{\tau_{r-1}})}=y\Bigm|(\xi_s)_{s\le\tau_r}\bigg)\\ &=\mathds{P}\left(\{m_{\tau_{r-1}}=y\in e_r\}\cap\{U_r^c<\mathrm{P}_{e_r,\xi_{\tau_{r-1}},\xi_{\tau_r}}(y,y)\}\mid(\xi_s)_{s\le\tau_r}\right)\\ &\phantom{=}+\mathds{P}\left(\{m_{\tau_{r-1}}=e_r\setminus\{y\},\,y\in 
e_r\}\cap\{U_r^c<\mathrm{P}_{e_r,\xi_{\tau_{r-1}},\xi_{\tau_r}}(e_r\setminus\{y\},y)\}\mid(\xi_s)_{s\le\tau_r}\right)\\ &\phantom{=}+\mathds{P}\left(\{m_{\tau_{r-1}}=y\notin e_r\}\mid(\xi_s)_{s\le\tau_r}\right). \end{align*} Using the tower property of conditional expectation we condition further on $e_r$, and then use that given $e_r$ and $(\xi_s)_{s\le \tau_r}$, event $\{U_r^c<\mathrm{P}_{e_r,\xi_{\tau_{r-1}},\xi_{\tau_r}}(y,y)\}$ is independent of event $\{m_{\tau_{r-1}}=y\}\cap\{y\in e_r\}$, to obtain \begin{align*} &\mathds{P}(m_{\tau_r}=y\mid(\xi_s)_{s\le\tau_r})\\ &=\mathds{E}\left[\mathds{E}\left[\Indic{m_{\tau_{r-1}}=y}\big(\Indic{y\in e_r}\Indic{U_r^c<\mathrm{P}_{e_r,\xi_{\tau_{r-1}},\xi_{\tau_r}}(y,y)}+\Indic{y\notin e_r}\big)\mid(\xi_s)_{s\le\tau_r}, e_r\right]\Bigm| (\xi_s)_{s\le\tau_r}\right]\\ &\phantom{=}+\mathds{E}\left[\mathds{E}\left[\Indic{y\in e_r}\Indic{m_{\tau_{r-1}}=e_r\setminus\{y\}}\Indic{U_r^c<\mathrm{P}_{e_r,\xi_{\tau_{r-1}},\xi_{\tau_r}}(e_r\setminus\{y\},y)}\mid(\xi_s)_{s\le\tau_r}, e_r\right]\Bigm| (\xi_s)_{s\le\tau_r}\right]\\\ &=\mathds{E}\left[\mathds{E}\left[\Indic{m_{\tau_{r-1}}=y}\left[\Indic{y\in e_r}\mathrm{P}_{e_r,\xi_{\tau_{r-1}},\xi_{\tau_r}}(y,y)+\Indic{y\notin e_r}\right]\mid (\xi_s)_{s\le\tau_r},e_r\right]\Bigm| (\xi_s)_{s\le\tau_r}\right]\\ &\phantom{=}+\mathds{E}\left[\mathds{E}\left[\Indic{y\in e_r}\Indic{m_{\tau_{r-1}}=e_r\setminus\{y\}}\mathrm{P}_{e_r,\xi_{\tau_{r-1}},\xi_{\tau_r}}(e_r\setminus\{y\},y)\mid (\xi_s)_{s\le\tau_r},e_r\right]\Bigm| (\xi_s)_{s\le\tau_r}\right]\\ &=\mathds{E}\Big[\mathds{P}^\mathrm{MaBB}(m_{\tau_{r-1}}=y\mid (\xi_s)_{s\le \tau_{r-1}})\,\left[\Indic{y\in e_r}\mathrm{P}_{e_r,\xi_{\tau_{r-1}},\xi_{\tau_r}}(y,y)+\Indic{y\notin e_r}\right]\\ &\phantom{=\mathds{E}\Big[}+\Indic{y\in e_r}\mathds{P}^\mathrm{MaBB}(m_{\tau_{r-1}}=e_r\setminus\{y\}\mid (\xi_s)_{s\le \tau_{r-1}},\,e_r)\,\mathrm{P}_{e_r,\xi_{\tau_{r-1}},\xi_{\tau_r}}(e_r\setminus\{y\},y)\Bigm|(\xi_s)_{s\le\tau_r}\Big]. 
\end{align*} This agrees with \eqref{e:M*ink} and so completes the inductive step. \end{proof} We now turn to the proof of Proposition~\ref{P:MaBBtoC}. \begin{proof}[Proof of Proposition~\ref{P:MaBBtoC}] We shall need a list of times at which updates occur for the chameleon process; recall that the chameleon process updates at times $\{\tau_r\}_{r\ge1}$ but also at depinking times. To this end, we set $\hat\tau_0=0$ and for each $r\ge1$, we set \[ \hat \tau_r=(\min\{\tau_m:\,\tau_m>\hat\tau_{r-1}\})\wedge(\min\{D_i:\,D_i> \hat\tau_{r-1},\,i\in\mathds{N}\}). \] Similarly, a hat placed on other notation (e.g.\! $\hat e_r$) refers to the corresponding quantity (here, the edge chosen) at time $\hat\tau_r$. If $\hat\tau_r$ is a depinking time then we set $\hat e_r=V$. Next, for each $r\ge1$ we introduce a process $(M_t^r)_{t\ge0}$ which is constructed using the graphical construction. Each of these processes is a process in which vertices are occupied by particles of various colours, and we initialise them all with the initial configuration of the chameleon process. Prior to time $\hat\tau_r$, process $M^r$ evolves exactly as the chameleon process; at and after time $\hat\tau_r$ it evolves as $M^*$ (so in particular there are no more changes to the colours of particles). Note that in all these processes the black particles have the same trajectory, and this matches the trajectory of the non-marked particles in the MaBB. Note also that $M^1$ is identical to $M^*$. We shall prove by induction on $r$ that for all $r\ge1$, \begin{align}\label{e:MaBBtoMr} \forall\,t>0,\,y\in V\quad \mathds{P}(m_t=y\mid\xi_t)=\mathds{E}^{M^r}\left[\frac{\operatorname{ink}_t^{(\xi,x)}(y)}{\chi(\xi(x))}\bigm| B^\mathrm{C}_t\right]\quad\mbox{a.s}. \end{align} This will prove the proposition since the chameleon process is the almost sure limit of $M^r$ as $r\to\infty$.
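The recursion defining $\{\hat\tau_r\}$ simply merges the two increasing lists of update times and depinking times into one increasing list, with a time appearing in both lists counted once. A small Python sketch of this bookkeeping, under the simplifying assumption of finitely many times (the helper name is ours):

```python
def hat_times(update_times, depinking_times):
    """Merge the update times {tau_r} and the depinking times {D_i} into
    the combined sequence {hat tau_r}: hat tau_r is the smallest time in
    either list exceeding hat tau_{r-1} (starting from hat tau_0 = 0)."""
    merged, prev = [], 0.0
    for t in sorted(set(update_times) | set(depinking_times)):
        if t > prev:
            merged.append(t)
            prev = t
    return merged

# a time appearing in both lists contributes a single hat-time
assert hat_times([1.2, 2.7, 4.1, 6.0], [2.7, 5.0]) == [1.2, 2.7, 4.1, 5.0, 6.0]
```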
The case $r=1$ follows from Lemma~\ref{L:MipstoM*} since $\operatorname{ink}_0^{(\xi,x)}(y)=\chi(\xi(x))$ if $y=x$ and otherwise $\operatorname{ink}_0^{(\xi,x)}(y)=0$ (thus the assumption of the lemma holds). We fix $r'\in\mathds{N}$, assume \eqref{e:MaBBtoMr} holds for $r=r'$ and show it holds for $r=r'+1$. Observe that before time $\hat\tau_{r'}$, $M^{r'+1}=M^{r'}$ so for $t<\hat\tau_{r'}$, for all $y$, almost surely \begin{align*} \mathds{P}(m_t=y\mid\xi_t)&=\mathds{E}^{M^{r'}}\left[\frac{\operatorname{ink}_t^{(\xi,x)}(y)}{\chi(\xi(x))}\bigm| B^\mathrm{C}_t\right]=\mathds{E}^{M^{{r'}+1}}\left[\frac{\operatorname{ink}_t^{(\xi,x)}(y)}{\chi(\xi(x))}\bigm| B^\mathrm{C}_t\right]. \end{align*} After time $\hat\tau_{r'}$, $M^{r'+1}$ evolves as $M^*$; so assuming that for all $y\in V$, \begin{align}\label{e:mipsmr} \mbox{a.s.}\quad\mathds{P}(m_{\hat\tau_{r'}}=y\mid\xi_{\hat\tau_{r'}})=\mathds{E}^{M^{r'+1}}\left[\frac{\operatorname{ink}^{(\xi,x)}_{\hat\tau_{r'}}(y)}{\chi(\xi(x))}\bigm| B^\mathrm{C}_{\hat\tau_{r'}}\right], \end{align} then by Lemma~\ref{L:MipstoM*} we have that for all $t>\hat\tau_{r'}$, for all $y\in V$, \[ \mbox{a.s.}\quad\mathds{P}(m_{t}=y\mid(\xi_s)_{\hat\tau_{r'}\le s\le t})=\mathds{E}^{M^{r'+1}}\left[\frac{\operatorname{ink}^{(\xi,x)}_{t}(y)}{\chi(\xi(x))}\bigm| (B^\mathrm{C}_{s})_{\hat\tau_{r'}\le s\le t}\right]. \] The inductive step is then complete by taking an expectation and using that the black particles have the same trajectory as the non-marked particles, almost surely. Thus it remains to prove \eqref{e:mipsmr}.
We fix $y\in V$ and decompose according to three events, which partition the probability space: \begin{itemize} \item $E_1:=\bigcup_{i\ge1}\{y\notin \hat e_{r'}\}\cap\{\hat\tau_{r'}=\tau_i\}$ (the update is not a depinking time and $y$ is not on the ringing edge) \item $E_2:=\bigcup_{i\ge1}\{y\in \hat e_{r'}\}\cap\{\hat\tau_{r'}=\tau_i\}$ (the update is not a depinking time but $y$ is on the ringing edge) \item $E_3:=\bigcup_{i\ge1}\{\hat\tau_{r'}=D_i\}$ (the update is a depinking time) \end{itemize} On event $E_1$, as $y$ is not on a ringing edge at time $\hat\tau_{r'}$, the value of $\operatorname{ink}_t^{(\xi,x)}(y)$ does not change at time $\hat\tau_{r'}$ in either of the processes $M^{r'}$ or $M^{r'+1}$; since they agree prior to this time, we deduce that almost surely \begin{align}\label{e:E1} \mathds{E}^{M^{r'+1}}\left[\frac{\operatorname{ink}^{(\xi,x)}_{\hat\tau_{r'}}(y)}{\chi(\xi(x))}\Indic{E_1}\bigm| B^\mathrm{C}_{\hat\tau_{r'}}\right]=\mathds{E}^{M^{r'}}\left[\frac{\operatorname{ink}^{(\xi,x)}_{\hat\tau_{r'}}(y)}{\chi(\xi(x))}\Indic{E_1}\bigm| B^\mathrm{C}_{\hat\tau_{r'}}\right]. \end{align} On event $E_2$, we may pinken some particles at time $\hat\tau_{r'}$ in process $M^{r'+1}$. Nevertheless, by Lemmas~\ref{L:onestep} and~\ref{L:onestepM*} (and again since the processes agree prior to this time), we see that their expected ink values agree, i.e. \begin{align}\label{e:E2} \mathds{E}^{M^{r'+1}}\left[\frac{\operatorname{ink}_{\hat\tau_{r'}}(y)}{\chi(\xi(x))}\Indic{E_2}\bigm| B^\mathrm{C}_{\hat\tau_{r'}}\right]=\mathds{E}^{M^{r'}}\left[\frac{\operatorname{ink}_{\hat\tau_{r'}}(y)}{\chi(\xi(x))}\Indic{E_2}\bigm| B^\mathrm{C}_{\hat\tau_{r'}}\right]. \end{align} Finally, on event $E_3$, $M^{r'}$ does not update.
On the other hand, almost surely \begin{align} &\mathds{E}^{M^{r'+1}}\left[\frac{\operatorname{ink}_{\hat\tau_{r'}}(y)}{\chi(\xi(x))}\Indic{E_3}\bigm| B^\mathrm{C}_{\hat\tau_{r'}}\right]\notag\\&=\sum_{i=1}^\infty\mathds{E}^{M^{r'+1}}\left[\frac{\operatorname{ink}_{\hat\tau_{r'}}(y)}{\chi(\xi(x))}\Indic{\hat\tau_{r'}=D_i}\bigm| B^\mathrm{C}_{\hat\tau_{r'}}\right]\notag\\ &=\sum_{i=1}^\infty\mathds{E}^{M^{r'+1}}\left[\left\{\Indic{d_i=1}\left(\frac{R^\mathrm{C}_{\hat\tau_{r'-1}}(y)+P^\mathrm{C}_{\hat\tau_{r'-1}}(y)}{\chi(\xi(x))}\right)+\Indic{d_i=0}\frac{R^\mathrm{C}_{\hat\tau_{r'-1}}(y)}{\chi(\xi(x))}\right\}\Indic{\hat\tau_{r'}=D_i}\bigm| B^\mathrm{C}_{\hat\tau_{r'}}\right]\notag\\ &=\sum_{i=1}^\infty\mathds{E}^{M^{r'+1}}\left[\frac{R^\mathrm{C}_{\hat\tau_{r'-1}}(y)+\frac12P^\mathrm{C}_{\hat\tau_{r'-1}}(y)}{\chi(\xi(x))}\Indic{\hat\tau_{r'}=D_i}\bigm| B^\mathrm{C}_{\hat\tau_{r'}}\right]\notag\\ &=\mathds{E}^{M^{r'+1}}\left[\frac{R^\mathrm{C}_{\hat\tau_{r'-1}}(y)+\frac12P^\mathrm{C}_{\hat\tau_{r'-1}}(y)}{\chi(\xi(x))}\Indic{E_3}\bigm| B^\mathrm{C}_{\hat\tau_{r'}}\right]\notag\\ &=\mathds{E}^{M^{r'+1}}\left[\frac{\operatorname{ink}_{\hat\tau_{r'-1}}(y)}{\chi(\xi(x))}\Indic{E_3}\bigm| B^\mathrm{C}_{\hat\tau_{r'}}\right]\notag\\ &=\mathds{E}^{M^{r'}}\left[\frac{\operatorname{ink}_{\hat\tau_{r'}}(y)}{\chi(\xi(x))}\Indic{E_3}\bigm| B^\mathrm{C}_{\hat\tau_{r'}}\right].\label{e:E3} \end{align} Putting together equations \eqref{e:E1}--\eqref{e:E3} and using that $E_1,E_2,E_3$ form a partition, we obtain that for each $y\in V$, \[ \mbox{a.s.}\quad \mathds{E}^{M^{r'+1}}\left[\frac{\operatorname{ink}_{\hat\tau_{r'}}(y)}{\chi(\xi(x))}\bigm| B^\mathrm{C}_{\hat\tau_{r'}}\right]=\mathds{E}^{M^{r'}}\left[\frac{\operatorname{ink}_{\hat\tau_{r'}}(y)}{\chi(\xi(x))}\bigm| B^\mathrm{C}_{\hat\tau_{r'}}\right], \]and thus by the inductive hypothesis, we have shown \eqref{e:mipsmr}. 
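The computation on event $E_3$ above reduces to the elementary identity that averaging over the depinking coin $d_i$ (fair, as used in the display) turns each pink particle into half a unit of ink: $\tfrac12(R+P)+\tfrac12 R=R+\tfrac12 P$. A quick numeric check in Python (illustrative only, using exact rational arithmetic):

```python
from fractions import Fraction

def expected_ink_after_depinking(R, P):
    """Average over the fair depinking coin d: with probability 1/2 all
    pink particles become red (ink R + P), otherwise white (ink R)."""
    return Fraction(1, 2) * (R + P) + Fraction(1, 2) * R

# the average equals the pre-depinking ink value R + P/2
assert expected_ink_after_depinking(3, 5) == Fraction(3) + Fraction(5, 2)
```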
\end{proof} Next, we show how Proposition~\ref{P:MaBBtoC} can be used to bound the total variation distance between two MaBB configurations in terms of the total amount of ink in the chameleon process. Recall from~\eqref{e:pixi} the law $\pi_\zeta$ for $\zeta\in \Omega_{G,m-1}$ and denote by $\tilde m_t$ a random variable which, conditionally on $\xi_t=\zeta$, has law $\pi_{\zeta}$. Recall also the definition of event Fill from Lemma~\ref{L:inkmart}. \begin{proposition}\label{P:tvboundink} Let $(\xi_t,m_t)$ denote the time-$t$ configuration of a MaBB initialised at $(\xi,x)\in\Omega'_{G,m}$. For any $t>0$, \[ \|\mathcal{L}((\xi_t,m_t))-\mathcal{L}((\xi_t,\tilde m_t))\|_\mathrm{TV}\le 1-\,\mathds{E}\left[\frac{\operatorname{ink}_t^{(\xi,x)}}{a(m-1)+bn}\mid\mathrm{Fill}\right]. \] \end{proposition} \begin{proof} This is similar to the proof of Lemma 8.1 of \citet{olive}. By Proposition~\ref{P:MaBBtoC}, for any $(\zeta,y)\in\Omega'_{G,m}$, \[ \mathds{P}(\xi_t=\zeta,m_t=y)=\mathds{E}\left[\frac{\operatorname{ink}_t^{(\xi,x)}(y)}{\chi(\xi(x))}\Indic{B^\mathrm{C}_t=\zeta}\right]\ge\mathds{E}\left[\frac{\operatorname{ink}_t^{(\xi,x)}(y)}{\chi(\xi(x))}\Indic{\{B^\mathrm{C}_t=\zeta\}\cap\mathrm{Fill}}\right]. \] On the other hand, using that $B^\mathrm{C}_t$ and $\xi_t$ have the same distribution and Corollary~\ref{cor:fill}, \[ \mathds{P}(\xi_t=\zeta, \tilde m_t=y)=\pi_\zeta(y)\mathds{P}(\xi_t=\zeta)=\frac{\pi_\zeta(y)}{\mathds{P}(\mathrm{Fill})}\mathds{P}(\{B^\mathrm{C}_t=\zeta\}\cap\mathrm{Fill}). \] We deduce that \begin{equation} \begin{aligned}\label{e:pospart} &(\mathds{P}(\xi_t=\zeta,\,\tilde m_t=y)-\mathds{P}(\xi_t=\zeta,\,m_t=y))_+\\&\le \left(\mathds{E}\left[\Indic{\{B^\mathrm{C}_t=\zeta\}\cap\mathrm{Fill}}\left(\frac{\pi_\zeta(y)}{\mathds{P}(\mathrm{Fill})}-\frac{\operatorname{ink}_t^{(\xi,x)}(y)}{\chi(\xi(x))}\right)\right]\right)_+. 
\end{aligned} \end{equation} Observe that on event $\{B^\mathrm{C}_t=\zeta\}$, we have $\operatorname{ink}^{(\xi,x)}_t(y)\le \chi(\zeta(y))$ by Lemma~\ref{L:limits}, and so (on this event), \[ \frac{\operatorname{ink}^{(\xi,x)}_t(y)}{\chi(\xi(x))}\le\frac{\chi(\zeta(y))}{\chi(\xi(x))}=\frac{\chi(\zeta(y))}{(a(m-1)+bn)\mathds{P}(\mathrm{Fill})}=\frac{\pi_\zeta(y)}{\mathds{P}(\mathrm{Fill})}, \] where the first equality is due to Corollary~\ref{cor:fill} and the second from the definition of the colour function $\chi$. As a result we deduce from~\eqref{e:pospart} that \begin{align*} (\mathds{P}(\xi_t=\zeta,\,\tilde m_t=y)-\mathds{P}(\xi_t=\zeta,\,m_t=y))_+\le\mathds{E}\left[\Indic{\{B^\mathrm{C}_t=\zeta\}\cap\mathrm{Fill}}\left(\frac{\pi_\zeta(y)}{\mathds{P}(\mathrm{Fill})}-\frac{\operatorname{ink}^{(\xi,x)}_t(y)}{\chi(\xi(x))}\right)\right]. \end{align*} We take a sum over $y$ followed by $\zeta$ to obtain \begin{align*} \|\mathcal{L}((\xi_t,m_t))-\mathcal{L}((\xi_t,\tilde m_t))\|_\mathrm{TV}&\le\mathds{E}\left[\Indic{\mathrm{Fill}}\left(\frac{1}{\mathds{P}(\mathrm{Fill})}-\frac{\operatorname{ink}^{(\xi,x)}_t}{\chi(\xi(x))}\right)\right]\\ &=1-\mathds{P}(\mathrm{Fill})\,\mathds{E}\left[\frac{\operatorname{ink}_t^{(\xi,x)}}{\chi(\xi(x))}\mid\mathrm{Fill}\right]\\ &=1-\,\mathds{E}\left[\frac{\operatorname{ink}_t^{(\xi,x)}}{a(m-1)+bn}\mid\mathrm{Fill}\right], \end{align*} using Lemma~\ref{L:inkmart} in the last step. \end{proof} Recall from Section~\ref{SS:evol} that for each $\ell\in\mathds{N}$, $\widehat{\operatorname{ink}}_\ell$ denotes the value of ink just after the $\ell$th depinking time. We write $\widehat{\operatorname{ink}}_\ell^{(\xi,x)}$ to emphasise the dependence on the initial configuration of the corresponding MaBB. \begin{lemma}\label{L:expdec} Fix $(\xi,x)\in\Omega'_{G,m}$. For each $\ell\ge1$, \[ 1-\mathds{E}\left[\frac{\widehat{\operatorname{ink}}_\ell^{(\xi,x)}}{a(m-1)+bn}\mid\mathrm{Fill}\right]\le (71/72)^\ell\sqrt{a(m-1)+bn}. 
\] \end{lemma} We omit the proof of this result (which uses Lemma~\ref{L:inkmc}) since it is identical to the proof of Proposition 6.1 in \cite{olive}, except that here ink can take values in $\{0,\ldots,a(m-1)+bn\}$ (in contrast with \cite{olive} in which $\operatorname{ink}\in\{0,\ldots,n\}$). \section{Expected loss of red in a round}\label{s:loss} In this section we show that during a single round (which starts with fewer red particles than white) the number of red particles decreases in expectation by a constant factor. Let $M_{i,j}(G)$ denote the meeting time of two independent random walks started from vertices $i$ and $j$ on $G$ and recall that $\hat M_{i,j}(G)$ denotes the meeting time of two independent random walks started from vertices $i$ and $j$ on the graph obtained from $G$ by halving the edge-weights, that is, $\hat M_{i,j}(G)=M_{i,j}(\hat G)$. Consider a slight modification to the chameleon process in which we replace the number of selected particles~\eqref{e:cond} in Step 2 with $k$, that is, we allow all paired red particles to be pinkened. We call this the \emph{modified} chameleon process. \begin{proposition}\label{P:lossred} Suppose the modified chameleon process starts a round with red configuration $R$, white configuration $W$ and black configuration $B$ such that $|R|\le |W|$. If the round length $T$ satisfies $T\ge 2\max_{i,j}\mathds{E}\hat M_{i,j}(G)$ then $\mathds{E}[|R_{T-}^\mathrm{C}|]\le (1-c)|R|$, with $c=\frac{(p^*)^2}{4a}$. \end{proposition} \begin{remark}If instead $|W|\le |R|$ then we have the analogous result: $\mathds{E}[|W_{T-}^\mathrm{C}|]\le (1-c)|W|$. \end{remark} \begin{proof} We shall only count pinkenings between paired red and white particles which get coloured pink the first time they meet (if they do) during the round. (This means that we do not have to worry about how the particles move after their first meeting time -- they no longer move independently once they meet.)
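The round-length hypothesis $T\ge 2\max_{i,j}\mathds{E}\hat M_{i,j}(G)$ will enter the argument only through Markov's inequality: if $\mathds{E}M\le T/2$ then $\mathds{P}(M\ge T)\le \mathds{E}M/T\le 1/2$, so a paired red and white particle meets before the end of the round with probability at least $1/2$. A quick numeric sanity check in Python, with a hypothetical geometric-type meeting-time distribution (the values and the helper are illustrative only):

```python
def meeting_before_round_end(pmf, T):
    """For a meeting-time distribution given as {value: probability},
    return P(M < T) together with the Markov lower bound 1 - E[M]/T."""
    mean = sum(v * p for v, p in pmf.items())
    p_before = sum(p for v, p in pmf.items() if v < T)
    return p_before, 1.0 - mean / T

# hypothetical geometric-type meeting time with mean roughly 2
pmf = {k: 0.5 ** k for k in range(1, 20)}
pmf[20] = 1.0 - sum(pmf.values())  # put the remaining mass at 20
p_before, markov_bound = meeting_before_round_end(pmf, T=4)
assert p_before >= markov_bound  # Markov: P(M < T) >= 1 - E[M]/T
```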
Since we assume $|R|\le |W|$, all red particles will have a label in $\{1,\ldots,|R|\}$. Let $M^r$ denote the meeting time of the red particle with label $r$ with its paired white particle; this is the first time the two particles are on the same ringing edge. If two paired particles start the round on the same vertex we set their meeting time to be the first time this vertex is on a ringing edge. For each $s\in\mathds{N}$ write $F_s(r)$ for the event that the red particle with label $r$ remains in the pooled pile after Step 1 of the update at time $\tau_s$ (if the red particle with label $r$ is not on edge $e_s$ at time $\tau_s-$, we set $F_s(r)=\varnothing$), and write $G_s$ for the event that we do Step 3a (rather than Step 3b) at the update at time $\tau_s$. We also write $e_s^1, \,e_s^2$ for the two vertices on edge $e_s$ (in an arbitrary order), $u_s(e_s^1)$ and $\ell_s(e_s^1)$ for the values of $u(e_s^1)$ and $\ell(e_s^1)$ at the update at time $\tau_s$, and $\theta_s(e_s^1)$ for the probability $\theta(e_s^1)$ at the update time $\tau_s$. We lower-bound the expected number of pink particles created during a single round (which has length $T$) of the modified chameleon process in which at the start of the round the configuration of red particles is $R$ by \begin{align*} 2\mathds{E}\left[\sum_{r=1}^{|R|}\sum_{s=1}^\infty\Indic{M^{r}=\tau_s<T}\Indic{F_s(r)}\Indic{G_s}\right]&=2\sum_{r=1}^{|R|}\sum_{s=1}^\infty\mathds{E}\left[\Indic{M^r=\tau_s<T}\mathds{P}(F_s(r)\cap G_s\mid \tau_s, M^r)\right]. \end{align*} Observe that conditionally on the configuration of the chameleon process at time $\tau_s-$ and the configuration of black particles at time $\tau_s$, $F_s(r)$ and $G_s$ are independent since $F_s(r)$ depends further only on the randomness at Step 1, and $G_s$ only on the randomness at Step 2 (and we have constructed the chameleon process so that these are independent).
Therefore we have almost surely \begin{align}\label{e:FintG} \mathds{P}(F_s(r)\cap G_s\mid \tau_s, M^r)=\mathds{E}[2 (\theta_s(e_s^1)\wedge (1-\theta_s(e_s^1)))\Indic{F_s(r)}\mid \tau_s, M^r]. \end{align} Next, for each $s\in\mathds{N}$, we introduce an event $A_s$ which: \begin{enumerate} \item has probability $p^*$ (recall this constant comes from Property~\ref{a:prob}), \item prescribes only the value that $U_s^b$ takes, \item on event $A_s$, for each $i\in\{1,2\}$, given $e_s$ and $B^\mathrm{C}_{\tau_s-}$, configuration $B^\mathrm{C}_{\tau_s}$ satisfies almost surely \begin{enumerate} \item $\mathrm{P}_{e_s,B^\mathrm{C}_{\tau_s-},B^\mathrm{C}_{\tau_s}}(e_s^i,e_s^i)\in[p^*\wedge\frac29,(1-p^*)\vee\frac79]$, \item $\chi(B^\mathrm{C}_{\tau_s}(e_s^1))/\chi(B^\mathrm{C}_{\tau_s}(e_s^2))\in [1/(2a),2a]$. \end{enumerate} \end{enumerate} The fact that such an event exists is shown in Lemma~\ref{L:Aexists}. As $A_s$ only prescribes $U_s^b$, it is independent of events $\{M^r=\tau_s\}$ and $\{\tau_s<T\}$ (which do not depend on $U_s^b$). Thus from~\eqref{e:FintG} and by Lemma~\ref{L:thetabounds} we have \[ \mathds{P}(F_s(r)\cap G_s\mid \tau_s, M^r)\ge 2(\frac29\wedge p^*) p^*\, \mathds{P}(F_s(r)\mid \tau_s, M^r, A_s)\ge\frac43(p^*)^2\,\mathds{P}(F_s(r)\mid \tau_s, M^r, A_s), \]using $p^*<1/3$. We also have that $\mathds{P}(F_s(r)\mid \tau_s, M^r, A_s)\ge 1/(2a)$ almost surely. This follows from Lemma~\ref{L:pairres} by first conditioning on the configuration of the chameleon process at time $\tau_s-$, since given this, $\Indic{F_s(r)}$ is independent of $M^r$ and $\tau_s$. Thus our lower-bound on the expected number of pink particles created becomes \[ \frac{4}{3a}(p^*)^2 \sum_{r=1}^{|R|}\sum_{s=1}^\infty\mathds{P}\left(M^r=\tau_s<T\right)\ge \frac{(p^*)^2}{a}|R|\min_r\mathds{P}(M^r<T). \] For a red and white pair (with label $r$) on the same vertex $v$, say, at the start of the round, $M^r$ is the first time $v$ is on a ringing edge.
Suppose $w$ is a neighbour of $v$ (chosen arbitrarily) and recall $M_{v,w}(\hat G)$ is the meeting time of two random walks on $\hat G$ started from vertices $v$ and $w$ respectively. Then $\mathds{P}(M^r< T)\ge\frac12\mathds{P}(M_{v,w}< T)$, since for the random walks to meet, vertex $v$ must be on a ringing edge for at least one of the two random walk processes. Then by Markov's inequality, we have in this case that $\mathds{P}(M^r< T)\ge\frac12(1-\max_{i,j}\mathds{E}M_{i,j}(\hat G)/T)$. On the other hand, for a red and white pair which start the round on different vertices, we can directly apply Markov's inequality to obtain $\mathds{P}(M^r< T)\ge 1-\max_{i,j}\mathds{E}M_{i,j}(\hat G)/T$. Thus if $T\ge2 \max_{i,j}\mathds{E}\hat M_{i,j}(G)$ then for any $r$, $\mathds{P}(M^r< T)\ge 1/4$. Hence this completes the proof with $c=\frac{(p^*)^2}{4a}$. \end{proof} \begin{lemma}\label{L:Aexists} For each $s\in\mathds{N}$, there exists an event $A_s$ satisfying properties 1--3 above. \end{lemma} \begin{proof} We define event $A_s=\{U_s^b\le p^*\},$ which clearly has probability $p^*$ and only prescribes the value that $U_s^b$ takes. Recall from the discussion in Section~\ref{S:graphical} (in particular~\eqref{e:ulesspimplication}) that $U_s^b\le p^*$ implies that if there are at least two non-marked particles on $e_s$ then a proportion in $[1/3,2/3]$ of the non-marked particles on the edge end up on each vertex on $e_s$ (at time $\tau_s$). 
Thus on event $A_s$ (and as black particles in the chameleon process move as non-marked particles in MaBB), almost surely, \begin{align*} B_{\tau_s}^\mathrm{C}(e_s^1)&\ge \frac13\Indic{\sum_{i=1}^2B_{\tau_s-}^\mathrm{C}(e_s^i)\ge 2}\sum_{i=1}^2B_{\tau_s-}^\mathrm{C}(e_s^i),\\ B_{\tau_s}^\mathrm{C}(e_s^2)&\le \frac23\Indic{\sum_{i=1}^2B^\mathrm{C}_{\tau_s-}(e_s^i)\ge 2}\sum_{i=1}^2B_{\tau_s-}^\mathrm{C}(e_s^i)+\Indic{\sum_{i=1}^2B_{\tau_s-}^\mathrm{C}(e_s^i)=1}\\ &\le \frac23\Indic{\sum_{i=1}^2B^\mathrm{C}_{\tau_s-}(e_s^i)\ge 2}\sum_{i=1}^2B^\mathrm{C}_{\tau_s-}(e_s^i)+1. \end{align*} Thus on event $A_s$, almost surely, \begin{align*} \frac{\chi(B^\mathrm{C}_{\tau_s}(e_s^1))}{\chi(B^\mathrm{C}_{\tau_s}(e_s^2))}&\ge \gamma+\frac{b(1-\gamma)-a\gamma}{\frac{2a}{3}\Indic{\sum_{i=1}^2B_{\tau_s-}^\mathrm{C}(e_s^i)\ge 2}\sum_{i=1}^2B_{\tau_s-}^\mathrm{C}(e_s^i)+a+b}\\ &\ge \gamma, \end{align*} for any $\gamma\le 1/2$ provided $b(1-\gamma)\ge a\gamma$. We similarly have $\frac{\chi(B^\mathrm{C}_{\tau_s}(e_s^2))}{\chi(B^\mathrm{C}_{\tau_s}(e_s^1))}\ge \gamma$ under the same condition. This condition is satisfied taking $\gamma=1/(2a)$ (and this is indeed $\le 1/2$ as $a\ge 1$). Finally, it remains to show that for each $i\in\{1,2\}$ we have $\mathrm{P}_{e_s,B^\mathrm{C}_{\tau_s-},B^\mathrm{C}_{\tau_s}}(e_s^i,e_s^i)\in[p^*\wedge\frac29,(1-p^*)\vee\frac79]$ on event $A_s$, almost surely. This is the probability that in the MaBB process, if the marked particle is on vertex $e_s^i$, it remains on vertex $e_s^i$ given the non-marked particles update from configuration $B^\mathrm{C}_{\tau_s-}$ to $B^\mathrm{C}_{\tau_s}$ when edge $e_s$ rings. Suppose $\sum_{j=1}^2 B^\mathrm{C}_{\tau_s-}(e_s^j)\ge 2$, i.e.\! before the update there are at least 2 black particles on $e_s$. For $y\in e_s$, write $m_s(y)\in\{0,1\}$ for the number of marked particles on $y$ after the update at time $\tau_s$. 
On event $A_s$, for each $i\in\{1,2\}$ we have $B^\mathrm{C}_{\tau_s}(e_s^i)\in[\frac13\sum_{j=1}^2 B^\mathrm{C}_{\tau_s-}(e_s^j),\frac23\sum_{j=1}^2 B^\mathrm{C}_{\tau_s-}(e_s^j)]$ and thus \begin{align*} B^\mathrm{C}_{\tau_s}(e_s^i)+m_s(e_s^i)&\in\Big[\frac13\sum_{j=1}^2 B^\mathrm{C}_{\tau_s-}(e_s^j),\frac23\sum_{j=1}^2 B^\mathrm{C}_{\tau_s-}(e_s^j)+1\Big]\\&=\Big[\frac13\big(\sum_{j=1}^2 B^\mathrm{C}_{\tau_s-}(e_s^j)+1\big)-\frac13,\frac23\big(\sum_{j=1}^2 B^\mathrm{C}_{\tau_s-}(e_s^j)+1\big)+\frac13\Big]\\ &\subseteq\Big[\frac29\big(\sum_{j=1}^2 B^\mathrm{C}_{\tau_s-}(e_s^j)+1\big),\frac79\big(\sum_{j=1}^2 B^\mathrm{C}_{\tau_s-}(e_s^j)+1\big)\Big]. \end{align*} Now recall (from the discussion after~\eqref{e:MaBB2BBSP}) the description of the MaBB process in which we remove the mark on the marked particle, then update as the BBSP, and then choose a uniform particle on the edge on which to apply the mark. Together with the just-determined bound on the number of particles on $e_s^i$, this tells us that the probability the marked particle is on $e_s^i$ after the update is in $[\frac29,\frac79]$. Suppose now that $\sum_{j=1}^2 B^\mathrm{C}_{\tau_s-}(e_s^j)=1$. 
Recall the definition of $\mathrm{P}_{e,B^\mathrm{C}_{\tau_s-},B^\mathrm{C}_{\tau_s}}(e_s^i,e_s^i)$ as \begin{align*} \mathrm{P}_{e,B^\mathrm{C}_{\tau_s-},B^\mathrm{C}_{\tau_s}}(e_s^i,e_s^i)&:=\frac{B^\mathrm{C}_{\tau_s}(e_s^i)+1}{B^\mathrm{C}_{\tau_s}(e_s^1)+B^\mathrm{C}_{\tau_s}(e_s^2)+1}\frac{\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{B^\mathrm{C}_{\tau_s-},e_s^i},C_{B^\mathrm{C}_{\tau_s},e_s^i})}{\mathrm{P}_e^{\mathrm{BB}(G,s,m-1)}(B^\mathrm{C}_{\tau_s-},B^\mathrm{C}_{\tau_s})}\\ &=\frac{B^\mathrm{C}_{\tau_s}(e_s^i)+1}{2}\frac{\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{B^\mathrm{C}_{\tau_s-},e_s^i},C_{B^\mathrm{C}_{\tau_s},e_s^i})}{\frac12}\\ &=(B^\mathrm{C}_{\tau_s}(e_s^i)+1)\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{B^\mathrm{C}_{\tau_s-},e_s^i},C_{B^\mathrm{C}_{\tau_s},e_s^i}), \end{align*} where we have used Property~\ref{a:sym} to obtain $\mathrm{P}_e^{\mathrm{BB}(G,s,m-1)}(B^\mathrm{C}_{\tau_s-},B^\mathrm{C}_{\tau_s})=1/2$. BBSP configuration $C_{B^\mathrm{C}_{\tau_s-},e_s^i}$ has two particles, thus by Property~\ref{a:sym} and the second part of Property~\ref{a:prob}, $\mathrm{P}_e^{\mathrm{BB}(G,s,m)}(C_{B^\mathrm{C}_{\tau_s-},e_s^i},C_{B^\mathrm{C}_{\tau_s},e_s^i})\ge p^*$, and so $\mathrm{P}_{e,B^\mathrm{C}_{\tau_s-},B^\mathrm{C}_{\tau_s}}(e_s^i,e_s^i)\ge p^*$. If $B^\mathrm{C}_{\tau_s}(e_s^i)=0$ (so that the non-marked particle and the marked particle end up on different vertices) we have (again by Property~\ref{a:prob}) $\mathrm{P}_{e,B^\mathrm{C}_{\tau_s-},B^\mathrm{C}_{\tau_s}}(e_s^i,e_s^i)\le 1-2p^*$, whereas if $B^\mathrm{C}_{\tau_s}(e_s^i)=1$, then by Properties~\ref{a:sym} and~\ref{a:prob}, $\mathrm{P}_{e,B^\mathrm{C}_{\tau_s-},B^\mathrm{C}_{\tau_s}}(e_s^i,e_s^i)\le (B^\mathrm{C}_{\tau_s}(e_s^i)+1)\frac{1-p^*}{2}=1-p^*$. Finally, if $B^\mathrm{C}_{\tau_s-}(e_s^i)=0$, then a marked particle on $e_s^i$ stays on $e_s^i$ at the update time $\tau_s$ with probability $1/2$ by Property~\ref{a:sym}. 
Thus in all cases we have that on event $A_s$, almost surely $\mathrm{P}_{e_s,B^\mathrm{C}_{\tau_s-},B^\mathrm{C}_{\tau_s}}(e_s^i,e_s^i)\in[p^*\wedge\frac29, (1-p^*)\vee\frac79]$.\qedhere \end{proof} \section{Proof of Theorem~\ref{T:betabin}}\label{S:proof} We can now put together the results obtained so far and complete the proof of Theorem~\ref{T:betabin}. These arguments are similar to those in previous works using a chameleon process. The next result bounds the first depinking time $D_1$. We wish to apply this result to any of the depinking times, and so we present the result in terms of a chameleon process started from any configuration in $\mathcal{C}(m)$. In reality, a chameleon process at time 0 will always have all red particles on a single vertex, as is apparent from Definition~\ref{d:cham}. \begin{lemma}\label{L:depinkexp} If the round length $T$ satisfies $T\ge 2\max_{i,j}\mathds{E}\hat M_{i,j}(G)$, then from any initial configuration in $\mathcal{C}(m)$ of the (non-modified) chameleon process, the first depinking time has an exponential moment: \[ \mathds{E}[e^{D_1/(KT)}]\le \frac{12a}{(p^*)^2}, \] where $K=8a/(p^*)^2$. \end{lemma} \begin{proof} This proof follows closely the proof of Lemma 9.2 from~\cite{olive}. By the same arguments as there, we obtain \[ \mathds{P}(D_1>iT)\le \frac32 (1-c)^i, \] for any integer $i\ge 1$, with $c=(p^*)^2/(4a)$ the constant from Proposition~\ref{P:lossred}. To obtain the bound on the exponential moment, observe that for any $K>0$, \begin{align*} \mathds{E}[e^{D_1/(KT)}]&=\sum_{i=1}^\infty\mathds{E}[e^{D_1/(KT)}\Indic{iT<D_1<(i+1)T}]\\ &\le \sum_{i=1}^\infty\mathds{E}[e^{(i+1)/K}\Indic{D_1>iT}]=\sum_{i=1}^\infty e^{(i+1)/K}\mathds{P}(D_1>iT)\\ &\le \sum_{i=1}^\infty \frac32e^{1/K}\exp\left(\frac{i}{K}+i\log(1-c)\right).
\end{align*} Set $K=2/c\ge -2/\log(1-c)$; then $i/K+i\log(1-c)\le \frac{i}{2}\log(1-c)<0$, and \[ \mathds{E}[e^{D_1/(KT)}]\le \frac{3}{2(1-\sqrt{1-c})}\le \frac{3}{c}.\qedhere \] \end{proof} We now show a result which bounds the exponential moment of the $j$th depinking time. In order to emphasise the initial configuration of the underlying MaBB we shall write $D_j((\xi,x))$ for the $j$th depinking time of a chameleon process corresponding to a MaBB which at time 0 is in configuration $(\xi,x)\in\Omega'_{G,m}$. \begin{lemma}\label{L:expmoment} If the round length $T$ satisfies $T\ge 2\max_{i,j}\mathds{E}\hat M_{i,j}(G)$, then for any $(\xi,x)\in\Omega'_{G,m}$, for all $j\in\mathds{N}$, \[\mathds{E}[e^{D_j((\xi,x))/(K T)}\mid \mathrm{Fill}]\le \left(\frac{12a}{(p^*)^2}\right)^j,\] where $K=8a/(p^*)^2$. \end{lemma} \begin{proof} This proof is identical to the proof of Lemma~6.2 from~\cite{olive} and uses Lemma~\ref{L:depinkexp}. \end{proof} \begin{proof}[Proof of Theorem~\ref{T:betabin}] Let $\zeta_t$ and $\zeta'_t$ be two realisations of BB$(G,s,m)$ initialised at $\zeta$ and $\zeta'$ respectively. We shall bound $\|\mathcal{L}(\zeta_t)-\mathcal{L}(\zeta'_t)\|_{\mathrm{TV}}$. We say that two BB$(G,s,m)$ configurations $\zeta^1$ and $\zeta^2$ are adjacent, and write $\zeta^1\sim\zeta^2$, if there exist vertices $v$ and $w$ such that for all $y\notin\{v,w\}$, $\zeta^1(y)=\zeta^2(y)$ and $|\zeta^1(v)-\zeta^2(v)|=|\zeta^1(w)-\zeta^2(w)|=1$, i.e.\! by moving just a single particle we can obtain $\zeta^2$ from $\zeta^1$. We can now find a sequence of BB$(G,s,m)$ configurations $\{\zeta^i\}_{i=0}^r$ with $r\le m$ such that \[ \zeta=\zeta^0\sim\zeta^1\sim\cdots\sim\zeta^r=\zeta'.
\] By the triangle inequality (for total variation), we have \begin{align}\label{e:1} \|\mathcal{L}(\zeta_t)-\mathcal{L}(\zeta'_t)\|_{\mathrm{TV}}\le \sum_{i=1}^r\|\mathcal{L}(\zeta^{i-1}_t)-\mathcal{L}(\zeta^i_t)\|_{\mathrm{TV}}, \end{align} where for each $0\le i\le r$, $\zeta^i_t$ is a realisation of BB$(G,s,m)$ started from configuration $\zeta^i$. We now show how to bound $\|\mathcal{L}(\zeta^{i-1}_t)-\mathcal{L}(\zeta^i_t)\|_{\mathrm{TV}}$. Suppose that $\zeta^{i-1}$ and $\zeta^i$ differ on vertices $v$ and $w$ with $\zeta^{i-1}(v)-\zeta^i(v)=1$. Define a BB$(G,s,m-1)$ configuration $\xi$ to be $\xi(y):=\zeta^{i-1}(y)-\delta_v(y)=\zeta^i(y)-\delta_w(y)$ for all $y\in V$. As BBSP is a projection of MaBB (i.e.\! if we ignore the marking of the marked particle in a MaBB we obtain a BBSP, which follows from~\eqref{e:contraction}), we have by the triangle inequality \begin{align}\label{e:2}\|\mathcal{L}(\zeta^{i-1}_t)-\mathcal{L}(\zeta^i_t)\|_{\mathrm{TV}}\le \|\mathcal{L}((\xi_t,m_t))-\mathcal{L}((\xi'_t,m'_t))\|_{\mathrm{TV}},\end{align}where $(\xi_t,m_t)_{t\ge0}$ is a realisation of MaBB initialised at $(\xi,v)$ and $(\xi'_t,m'_t)_{t\ge0}$ is a realisation of MaBB initialised at $(\xi,w)$. In order to apply Proposition~\ref{P:tvboundink}, we define $\tilde m_t$ to be a random variable which, given $\xi_t$ has law $\pi_{\xi_t}$, and similarly $\tilde m'_t$ to have law $\pi_{\xi'_t}$ given $\xi'_t$. Since $\mathcal{L}((\xi_t,\tilde m_t))=\mathcal{L}((\xi'_t,\tilde m'_t))$, we use the triangle inequality again to deduce \begin{align}\label{e:3} \|\mathcal{L}((\xi_t,m_t))-\mathcal{L}((\xi'_t,m'_t))\|_{\mathrm{TV}}\le \|\mathcal{L}((\xi_t,m_t))-\mathcal{L}((\xi_t,\tilde m_t))\|_{\mathrm{TV}}+\|\mathcal{L}((\xi'_t,m'_t))-\mathcal{L}((\xi'_t,\tilde m'_t))\|_{\mathrm{TV}}. 
\end{align}We can now apply Proposition~\ref{P:tvboundink}, and by combining~\eqref{e:1}--\eqref{e:3}, we obtain \begin{align}\label{e:4} \max_{\zeta,\zeta'}\|\mathcal{L}(\zeta_t)-\mathcal{L}(\zeta'_t)\|_{\mathrm{TV}}\le 2m\max_{(\xi,x)\in\Omega_{G,m}'}\mathds{E}\left[1-\frac{\operatorname{ink}_t^{(\xi,x)}}{a(m-1)+bn}\mid\mathrm{Fill}\right]. \end{align} Lemma~\ref{L:inkonly} says that the total ink can only change at depinking times, thus (recalling the definition of $\widehat{\operatorname{ink}}_j$), $\operatorname{ink}_t^{(\xi,x)}=\chi(\xi(x))$ if $t<D_1((\xi,x))$ and $\operatorname{ink}_t^{(\xi,x)}=\widehat{\operatorname{ink}}_j^{(\xi,x)}$ if $D_j((\xi,x))\le t<D_{j+1}((\xi,x))$ for some $j$. Hence we have that for any $j\ge 1$, \begin{align*} 1-\frac{\operatorname{ink}_t^{(\xi,x)}}{a(m-1)+bn}&\le \max_{\ell\ge j}\left(1-\frac{\widehat{\operatorname{ink}}_\ell^{(\xi,x)}}{a(m-1)+bn}\right)+\Indic{t<D_j((\xi,x))}\\ &\le \sum_{\ell\ge j}\left(1-\frac{\widehat{\operatorname{ink}}_\ell^{(\xi,x)}}{a(m-1)+bn}\right)+\Indic{t<D_j((\xi,x))}. \end{align*}Taking expectations (given Fill) on both sides and using~\eqref{e:4} we obtain for any $j\ge1$, \begin{align*} &\max_{\zeta,\zeta'}\|\mathcal{L}(\zeta_t)-\mathcal{L}(\zeta'_t)\|_{\mathrm{TV}}\\&\le 2m \max_{(\xi,x)\in\Omega_{G,m}'}\left\{\sum_{\ell\ge j}\mathds{E}\left[1-\frac{\widehat{\operatorname{ink}}_\ell^{(\xi,x)}}{a(m-1)+bn}\mid\mathrm{Fill}\right]+\mathds{P}(D_j((\xi,x))>t\mid\mathrm{Fill})\right\}. 
\end{align*} We bound the first term using Lemma~\ref{L:expdec} to obtain \begin{align*} &\max_{\zeta,\zeta'}\|\mathcal{L}(\zeta_t)-\mathcal{L}(\zeta'_t)\|_{\mathrm{TV}}\\&\le 2m \max_{(\xi,x)\in\Omega_{G,m}'}\left\{\sum_{\ell\ge j}(71/72)^\ell\sqrt{a(m-1)+bn}+\mathds{P}(D_j((\xi,x))>t\mid\mathrm{Fill})\right\}\\ &\le 200me^{-j\log(72/71)}\sqrt{a(m-1)+bn}+2m\max_{(\xi,x)\in\Omega_{G,m}'}\mathds{P}(D_j((\xi,x))>t\mid\mathrm{Fill}), \end{align*}and then by Markov's inequality and Lemma~\ref{L:expmoment} (recall constant $K=8a/(p^*)^2$) we have that \begin{align*} &\max_{\zeta,\zeta'}\|\mathcal{L}(\zeta_t)-\mathcal{L}(\zeta'_t)\|_{\mathrm{TV}}\\&\le200me^{-j\log(72/71)}\sqrt{a(m-1)+bn}+2me^{j\log(12a(p^*)^{-2})-t/(2K \max_{i,j}\mathds{E}\hat M_{i,j}(G))}. \end{align*} This holds for all $j\ge1$ so if we apply it with $j=\left\lfloor\frac{t}{4K\max_{i,j}\mathds{E}\hat M_{i,j}(G)\log(12a(p^*)^{-2})}\right\rfloor$ we obtain \begin{align*} \max_{\zeta,\zeta'}\|\mathcal{L}(\zeta_t)-\mathcal{L}(\zeta'_t)\|_{\mathrm{TV}}&\le K_0 m e^{-t/(4K\max_{i,j}\mathds{E}\hat M_{i,j}(G)\log(12a(p^*)^{-2}))}\sqrt{a(m-1)+bn} \end{align*} for some universal constant $K_0>0$. Thus we deduce that there exists a universal $C>0$ such that if $t\ge Ca(p^*)^{-2}\log(12a(p^*)^{-2})\log((am+bn)/\varepsilon)\max_{i,j}\mathds{E}\hat M_{i,j}(G)$, then the total variation distance between $\mathcal{L}(\zeta_t)$ and $\mathcal{L}(\zeta_t')$ is at most $\varepsilon$ for any initial configurations $\zeta$ and $\zeta'$, so the statement of Theorem~\ref{T:betabin} holds. \end{proof} \section*{Appendix}\label{S:App} For ease of notation in this section we write $\mathrm{P}(v,v)$ for $\mathrm{P}_{e,B,B'}(v,v)$ and similarly for other probabilities.
We shall use throughout (sometimes without reference) that, by Lemma~\ref{L:colour}, \begin{align} \chi(B(v))\mathrm{P}(v,v)+\chi(B(w))\mathrm{P}(w,v)&=\chi(B'(v)),\label{e:fB'v}\\ \chi(B(v))\mathrm{P}(v,w)+\chi(B(w))\mathrm{P}(w,w)&=\chi(B'(w)).\label{e:fB'w} \end{align} We write $R_{v,w}$ for $R(v)+R(w)$, $P_{v,w}$ for $P(v)+P(w)$ and $B_{v,w}$ for $B(v)+B(w)$. \begin{proof}[Proof of Lemma~\ref{L:thetaexist}] We first show $m^*(v)\le u(v)+\frac12 u^P(v)$. Recall \[ u(v)+\frac12 u^P(v)=\chi(B'(v))\wedge R_{v,w}+\frac12\left(\left\{\chi(B'(v))-\chi(B'(v))\wedge R_{v,w}\right\}\wedge P_{v,w}\right). \] Hence if $R_{v,w}>\chi(B'(v))$ then $u(v)+\frac12 u^P(v)=\chi(B'(v))$. On the other hand in this case (using~\eqref{e:fB'v}), \[m^*(v)=\chi(B'(v))-[\chi(B(v))-R(v)-\frac12 P(v)]\mathrm{P}(v,v)-[\chi(B(w))-R(w)-\frac12 P(w)]\mathrm{P}(w,v)\] and thus as $R(v)+P(v)\le \chi(B(v))$ and $R(w)+P(w)\le \chi(B(w))$, we have $m^*(v)\le \chi(B'(v))$, i.e.\! in this case $m^*(v)\le u(v)+\frac12 u^P(v)$. If instead $R_{v,w}\le \chi(B'(v))$, then $u(v)+\frac12 u^P(v)=R_{v,w}+\frac12(\{\chi(B'(v))-R_{v,w}\}\wedge P_{v,w})$. If $P_{v,w}>\chi(B'(v))-R_{v,w}$ then $u(v)+\frac12 u^P(v)=R_{v,w}+\frac12(\chi(B'(v))-R_{v,w})=\frac12(\chi(B'(v))+R_{v,w})$. But \begin{align*}m^*(v)&=\frac12 R(v)\mathrm{P}(v,v)+\frac12 R(w)\mathrm{P}(w,v)\\&\phantom{=}+\frac12\left\{(R(v)+P(v))\mathrm{P}(v,v)+(R(w)+P(w))\mathrm{P}(w,v)\right\}\\ &\le \frac12 R_{v,w}+\frac12\chi(B(v))\mathrm{P}(v,v)+\frac12 \chi(B(w))\mathrm{P}(w,v)\\ &=\frac12(\chi(B'(v))+R_{v,w}), \end{align*} using \eqref{e:fB'v} in the last step. If instead $P_{v,w}\le \chi(B'(v))-R_{v,w}$ then $u(v)+\frac12 u^P(v)=R_{v,w}+\frac12P_{v,w}$ and it is clear that this is an upper bound on $m^*(v)$ by bounding $\mathrm{P}(v,v)$ and $\mathrm{P}(w,v)$ by 1. Now we turn to the lower bound, i.e.\! we want $m^*(v)\ge \ell(v)+\frac12\ell^P(v)$.
Recall \[ \ell(v)+\frac12\ell^P(v)=\{R_{v,w}-\chi(B'(w))\}\vee0+\frac12\left[\left\{P_{v,w}-\chi(B'(w))+R_{v,w}\wedge \chi(B'(w))\right\}\vee 0\right]. \] If $R_{v,w}>\chi(B'(w))$, then $\ell(v)+\frac12\ell^P(v)=R_{v,w}-\chi(B'(w))+\frac12P_{v,w}.$ But in this case \begin{align*} m^*(v)&=(R(v)+\frac12 P(v))\mathrm{P}(v,v)+(R(w)+\frac12P(w))\mathrm{P}(w,v)\\ &=R_{v,w}+\frac12P_{v,w}-(R(v)+\frac12P(v))\mathrm{P}(v,w)-(R(w)+\frac12P(w))\mathrm{P}(w,w)\\ &=R_{v,w}+\frac12P_{v,w}-\chi(B'(w))+(\chi(B(v))-R(v)-\frac12P(v))\mathrm{P}(v,w)\\&\phantom{=}+(\chi(B(w))-R(w)-\frac12P(w))\mathrm{P}(w,w)\\ &\ge R_{v,w}+\frac12P_{v,w}-\chi(B'(w)). \end{align*} Finally suppose $R_{v,w}\le \chi(B'(w))$, then $\ell(v)+\frac12\ell^P(v)=\frac12\left[\left\{P_{v,w}-\chi(B'(w))+R_{v,w}\right\}\vee0\right]$. If further $P_{v,w}>\chi(B'(w))-R_{v,w}$ then $\ell(v)+\frac12\ell^P(v)=\frac12(P_{v,w}+R_{v,w}-\chi(B'(w))).$ But in this case \begin{align*} m^*(v)&\ge \frac12\left[(R(v)+P(v))\mathrm{P}(v,v)+(R(w)+P(w))\mathrm{P}(w,v)\right]\\ &=\frac12\big[R_{v,w}+P_{v,w}-\chi(B'(w))+(\chi(B(v))-R(v)-P(v))\mathrm{P}(v,w)\\&\phantom{=\frac12\big[}+(\chi(B(w))-R(w)-P(w))\mathrm{P}(w,w)\big]\\ &\ge \frac12[R_{v,w}+P_{v,w}-\chi(B'(w))]. \end{align*} If instead $P_{v,w}\le \chi(B'(w))-R_{v,w}$, then $\ell(v)+\frac12\ell^P(v)=0\le m^*(v)$. \end{proof} \begin{proof}[Proof of Lemma~\ref{L:thetabounds}] Recall that we suppose $\mathrm{P}(v,v),\,\mathrm{P}(w,v)\in[\eta,1-\eta]$ and that $\theta(v)$ is defined in \eqref{e:thetadef} which gives \[ \theta(v)=\frac{u(v)+\frac12 u^P(v)-m^*(v)}{u(v)+\frac12 u^P(v)-\ell(v)-\frac12\ell^P(v)}. \] There are numerous cases to consider which we detail below. Our goal is to show that in each case $\theta(v)\in[\eta,1-\eta]$. Recall that $\chi(B(v))+\chi(B(w))=\chi(B'(v))+\chi(B'(w))=aB_{v,w}+2b$.
\underline{Case 1: $R_{v,w}>\chi(B'(v))\vee \chi(B'(w))$}\\ In this case $u(v)+\frac12u^P(v)=\chi(B'(v))$ and \[u(v)+\frac12 u^P(v)-\ell(v)-\frac12\ell^P(v)=\chi(B'(v))-(R_{v,w}-\chi(B'(w)))-\frac12P_{v,w}.\] But \begin{align*}m^*(v)&=\chi(B'(v))-(\chi(B(v))-R(v)-\frac12P(v))\mathrm{P}(v,v)-(\chi(B(w))-R(w)-\frac12P(w))\mathrm{P}(w,v)\\ &\le \chi(B'(v))-\eta(aB_{v,w}+2b-R_{v,w}-\frac12P_{v,w})\\ &=u(v)+\frac12u^P(v)-\eta(u(v)+\frac12 u^P(v)-\ell(v)-\frac12\ell^P(v)), \end{align*} thus $\theta(v)\ge\eta$. On the other hand \begin{align*} m^*(v)&=R_{v,w}+\frac12P_{v,w}-(R(v)+\frac12P(v))\mathrm{P}(v,w)-(R(w)+\frac12P(w))\mathrm{P}(w,w)\\ &=R_{v,w}+\frac12P_{v,w}-\chi(B'(w))+(\chi(B(v))-R(v)-\frac12P(v))\mathrm{P}(v,w)\\ &\phantom{=}+(\chi(B(w))-R(w)-\frac12P(w))\mathrm{P}(w,w)\\ &\ge R_{v,w}+\frac12P_{v,w}-\chi(B'(w))+\eta(aB_{v,w}+2b-R_{v,w}-\frac12P_{v,w})\\ &=\ell(v)+\frac12\ell^P(v)+\eta(u(v)+\frac12 u^P(v)-\ell(v)-\frac12\ell^P(v)), \end{align*} thus $1-\theta(v)\ge\eta$. \underline{Case 2: $R_{v,w}\le \chi(B'(v))\wedge \chi(B'(w))$}\\ We consider sub-cases. \underline{Case 2a: $P_{v,w}\le (\chi(B'(v))-R_{v,w})\wedge(\chi(B'(w))-R_{v,w})$}\\ In this case $u(v)+\frac12u^P(v)=R_{v,w}+\frac12 P_{v,w}$ and $\ell(v)+\frac12\ell^P(v)=0$. But $m^*(v)\ge\eta(R_{v,w}+\frac12 P_{v,w})$ and so $\theta(v)\le 1-\eta$. We also have $m^*(v)\le (1-\eta)(R_{v,w}+\frac12 P_{v,w})$ and so $\theta(v)\ge \eta$. \underline{Case 2b: $P_{v,w}\ge (\chi(B'(v))-R_{v,w})\vee(\chi(B'(w))-R_{v,w})$}\\ In this case $u(v)+\frac12u^P(v)=\frac12(\chi(B'(v))+R_{v,w})$ and $\ell(v)+\frac12\ell^P(v)=\frac12(P_{v,w}+R_{v,w}-\chi(B'(w)))$. Hence \[ u(v)+\frac12u^P(v)-\ell(v)-\frac12\ell^P(v)=\frac12(aB_{v,w}+2b-P_{v,w}).
\]On the one hand \begin{align}\notag m^*(v)&=\frac12 R(v)\mathrm{P}(v,v)+\frac12 R(w)\mathrm{P}(w,v)+\frac12\left\{(R(v)+P(v))\mathrm{P}(v,v)+(R(w)+P(w))\mathrm{P}(w,v)\right\}\\\notag &\ge \frac12\eta R_{v,w}+\frac12\left\{R_{v,w}+P_{v,w}-(R(v)+P(v))\mathrm{P}(v,w)-(R(w)+P(w))\mathrm{P}(w,w)\right\}\\\notag &=\frac12\eta R_{v,w}+\frac12\big\{R_{v,w}+P_{v,w}-\chi(B'(w))+(\chi(B(v))-R(v)-P(v))\mathrm{P}(v,w)\\\notag &\phantom{=\frac12\eta R_{v,w}+\frac12\big\{}+(\chi(B(w))-R(w)-P(w))\mathrm{P}(w,w)\big\}\\ &\ge \frac12\eta R_{v,w}+\frac12(R_{v,w}+P_{v,w}-\chi(B'(w)))+\frac12\eta(aB_{v,w}+2b-R_{v,w}-P_{v,w}).\label{e:m2b1} \end{align} Thus $m^*(v)-\ell(v)-\frac12\ell^P(v)\ge \frac12\eta(aB_{v,w}+2b-P_{v,w})$, and so $1-\theta(v)\ge \eta$. On the other hand, \begin{align*} m^*(v)&=\frac12 R(v)\mathrm{P}(v,v)+\frac12 R(w)\mathrm{P}(w,v)+\frac12 \chi(B'(v))\\ &\phantom{=}-\frac12\big\{(\chi(B(v))-R(v)-P(v))\mathrm{P}(v,v)+(\chi(B(w))-R(w)-P(w))\mathrm{P}(w,v)\big\}\\ &\le \frac12\eta R_{v,w}+\frac12 \chi(B'(v))-\frac12\eta(aB_{v,w}+2b-R_{v,w}-P_{v,w}) \end{align*} Thus \begin{align}\notag u(v)+\frac12 u^P(v)-m^*(v)&\ge\frac12(1-\eta)R_{v,w}+\frac12\eta(aB_{v,w}+2b-R_{v,w}-P_{v,w})\\ &\ge \frac12\eta(aB_{v,w}+2b-P_{v,w})\label{e:m2b} \end{align} where we use $\eta<1/2$ in the last inequality. This gives $\theta(v)\ge \eta$. \underline{Case 2c: $\chi(B'(v))-R_{v,w}\le P_{v,w}\le \chi(B'(w))-R_{v,w}$}\\ We have $u(v)+\frac12 u^P(v)=\frac12(\chi(B'(v))+R_{v,w})$ and $\ell(v)+\frac12\ell^P(v)=0$. On the one hand $m^*(v)\ge \eta(R_{v,w}+\frac12 P_{v,w})=\frac12\eta R_{v,w}+\frac12\eta(R_{v,w}+P_{v,w})\ge \frac12\eta R_{v,w}+\frac12\eta \chi(B'(v))=\frac12\eta(R_{v,w}+\chi(B'(v)))$. Hence $\theta(v)\le 1-\eta$. On the other hand, as in~\eqref{e:m2b} we have $u(v)+\frac12 u^P(v)-m^*(v)\ge\frac12\eta(aB_{v,w}+2b-P_{v,w})$ which in this case becomes $u(v)+\frac12 u^P(v)-m^*(v)\ge\frac12\eta(\chi(B'(v))+R_{v,w})$. This gives $\theta(v)\ge \eta$. 
\underline{Case 2d: $\chi(B'(w))-R_{v,w}\le P_{v,w}\le \chi(B'(v))-R_{v,w}$}\\ We have $u(v)+\frac12 u^P(v)=R_{v,w}+\frac12 P_{v,w}$ and $\ell(v)+\frac12\ell^P(v)=\frac12(R_{v,w}+P_{v,w}-\chi(B'(w)))$, so that $u(v)+\frac12 u^P(v)-\ell(v)-\frac12\ell^P(v)=\frac12(R_{v,w}+\chi(B'(w)))$. As in~\eqref{e:m2b1}, $m^*(v)\ge \frac12\eta R_{v,w}+\frac12(R_{v,w}+P_{v,w}-\chi(B'(w)))+\frac12\eta(aB_{v,w}+2b-R_{v,w}-P_{v,w})$, and since $aB_{v,w}+2b-P_{v,w}\ge R_{v,w}+\chi(B'(w))$ (because $P_{v,w}\le \chi(B'(v))-R_{v,w}$), this gives $m^*(v)\ge \ell(v)+\frac12\ell^P(v)+\frac12\eta(R_{v,w}+\chi(B'(w)))$, so $1-\theta(v)\ge\eta$. On the other hand, $m^*(v)\le (1-\eta)(R_{v,w}+\frac12 P_{v,w})$, but $R_{v,w}+\frac12 P_{v,w}\ge R_{v,w}+\frac12(\chi(B'(w))-R_{v,w})=\frac12 (R_{v,w}+\chi(B'(w)))$ and so $m^*(v)\le R_{v,w}+\frac12 P_{v,w}-\frac{\eta}{2}(R_{v,w}+\chi(B'(w)))$ which gives $\theta(v)\ge \eta$. \underline{Case 3: $\chi(B'(v))\le R_{v,w}\le \chi(B'(w))$}\\ In this case $u(v)+\frac12 u^P(v)=\chi(B'(v))$. We again consider sub-cases depending on the value of $P_{v,w}$. \underline{Case 3a: $P_{v,w}\ge \chi(B'(w))-R_{v,w}$}\\ Then $\ell(v)+\frac12\ell^P(v)=\frac12(R_{v,w}+P_{v,w}-\chi(B'(w)))$ and so $u(v)+\frac12 u^P(v)-\ell(v)-\frac12\ell^P(v)=\chi(B'(v))-\frac12(R_{v,w}+P_{v,w}-\chi(B'(w)))$. To show $1-\theta(v)\ge \eta$, we wish to show that $m^*(v)\ge \ell(v)+\frac12\ell^P(v)+\eta(u(v)+\frac12 u^P(v)-\ell(v)-\frac12\ell^P(v))$, i.e.\! that $m^*(v)\ge \eta \chi(B'(v))+\frac12(1-\eta)(R_{v,w}+P_{v,w}-\chi(B'(w)))$. As in~\eqref{e:m2b1} we have \begin{align*} m^*(v)&\ge \frac12 \eta R_{v,w}+\frac12\left\{R_{v,w}+P_{v,w}-\chi(B'(w))+\eta(aB_{v,w}+2b-R_{v,w}-P_{v,w})\right\}\\ &=\frac12(1-\eta)(R_{v,w}+P_{v,w}-\chi(B'(w)))+\frac12\eta\left\{R_{v,w}+P_{v,w}-\chi(B'(w))+aB_{v,w}+2b-P_{v,w}\right\}. \end{align*} But $R_{v,w}-\chi(B'(w))+aB_{v,w}+2b=R_{v,w}+\chi(B'(v))\ge 2\chi(B'(v))$, so $m^*(v)\ge \eta \chi(B'(v))+\frac12(1-\eta)(R_{v,w}+P_{v,w}-\chi(B'(w)))$ as needed. On the other hand, to show $\theta(v)\ge\eta$, we need to show that $m^*(v)\le (1-\eta)\chi(B'(v))+\frac{\eta}{2}(P_{v,w}+R_{v,w}-\chi(B'(w)))$.
We have \begin{align*} m^*(v)&=(1-\eta)\big\{\chi(B'(v))-(\chi(B(v))-R(v)-\frac12 P(v))\mathrm{P}(v,v)-(\chi(B(w))-R(w)-\frac12P(w))\mathrm{P}(w,v)\big\}\\ &\phantom{=}+\eta\big\{R_{v,w}+\frac12P_{v,w}-\chi(B'(w))+(\chi(B(v))-R(v)-\frac12P(v))\mathrm{P}(v,w)\\&\phantom{=+\eta\big\{R_{v,w}+\frac12P_{v,w}-\chi(B'(w))}+(\chi(B(w))-R(w)-\frac12P(w))\mathrm{P}(w,w)\big\}, \end{align*} where the first copy of $m^*(v)$ has been rewritten using \eqref{e:fB'v} and the second using \eqref{e:fB'w}. Collecting the terms with the non-negative factor $\chi(B(v))-R(v)-\frac12P(v)$, their total coefficient is $\eta\mathrm{P}(v,w)-(1-\eta)\mathrm{P}(v,v)\le \eta(1-\eta)-(1-\eta)\eta=0$, and similarly for the terms with the non-negative factor $\chi(B(w))-R(w)-\frac12P(w)$. Hence \begin{align*} m^*(v)&\le (1-\eta)\chi(B'(v))+\eta(R_{v,w}+\frac12P_{v,w}-\chi(B'(w)))\\ &\le (1-\eta)\chi(B'(v))+\frac{\eta}{2}(R_{v,w}+P_{v,w}-\chi(B'(w))), \end{align*} using $R_{v,w}\le \chi(B'(w))$ in the last step, as needed. \underline{Case 3b: $P_{v,w}\le \chi(B'(w))-R_{v,w}$}\\ Here $u(v)+\frac12u^P(v)=\chi(B'(v))$ and $\ell(v)+\frac12\ell^P(v)=0$. On the one hand we have $m^*(v)\ge \eta(R_{v,w}+\frac12P_{v,w})\ge\eta R_{v,w}\ge\eta \chi(B'(v))$. This gives $\theta(v)\le 1-\eta$. On the other hand, we have \begin{align*} m^*(v)&=\chi(B'(v))-(\chi(B(v))-R(v)-P(v))\mathrm{P}(v,v)-\frac12 P(v)\mathrm{P}(v,v)\\ &\phantom{\le \chi(B'(v))\,\,}-(\chi(B(w))-R(w)-P(w))\mathrm{P}(w,v)-\frac12P(w)\mathrm{P}(w,v)\\ &\le \chi(B'(v))-\eta(aB_{v,w}+2b-R_{v,w}-\frac12 P_{v,w})-\frac12\eta P_{v,w}\\ &\le \chi(B'(v))-\eta \chi(B'(v))\\ &=(1-\eta)\chi(B'(v)). \end{align*}Thus it follows that $\theta(v)\ge \eta$. \underline{Case 4: $\chi(B'(w))\le R_{v,w}\le \chi(B'(v))$}\\ This is the final main case, and it has two sub-cases. \underline{Case 4a: $P_{v,w}\ge \chi(B'(v))-R_{v,w}$}\\ Here $u(v)+\frac12u^P(v)=\frac12(\chi(B'(v))+R_{v,w})$ and $\ell(v)+\frac12\ell^P(v)=R_{v,w}-\chi(B'(w))+\frac12P_{v,w}$. Thus $u(v)+\frac12u^P(v)-\ell(v)-\frac12\ell^P(v)=\chi(B'(w))+\frac12(\chi(B'(v))-R_{v,w}-P_{v,w})$.
On the one hand we want $\theta(v)\le 1-\eta$, which in this case is equivalent to $m^*(v)\ge (1-\frac{\eta}{2})R_{v,w}+\frac{\eta}{2}\chi(B'(v))+\frac{1-\eta}{2}P_{v,w}-(1-\eta)\chi(B'(w))$. We can obtain this bound since \begin{align*} m^*(v)&=R_{v,w}+\frac12P_{v,w}-\chi(B'(w))+(\chi(B(v))-R(v)-\frac12P(v))\mathrm{P}(v,w)\\ &\phantom{=}+(\chi(B(w))-R(w)-\frac12P(w))\mathrm{P}(w,w)\\ &\ge R_{v,w}+\frac12P_{v,w}-\chi(B'(w))+\eta(aB_{v,w}+2b-R_{v,w}-\frac12P_{v,w})\\ &=R_{v,w}+\frac12P_{v,w}-\chi(B'(w))+\eta(\chi(B'(v))+\chi(B'(w))-R_{v,w}-\frac12P_{v,w})\\ &=(1-\frac{\eta}{2})R_{v,w}+\frac{1-\eta}{2}P_{v,w}+\frac{\eta}{2}\chi(B'(v))-(1-\eta)\chi(B'(w))+\frac{\eta}{2}(\chi(B'(v))-R_{v,w}), \end{align*} and we obtain the desired bound using that $R_{v,w}\le \chi(B'(v))$. On the other hand, we also need to show $\theta(v)\ge \eta$, i.e.\! we need to show $m^*(v)\le\frac{1+\eta}{2}R_{v,w}+\frac{\eta}{2}P_{v,w}+\frac{1-\eta}{2}\chi(B'(v))-\eta \chi(B'(w))$. We have \begin{align*} m^*(v)&=\frac12R(v)\mathrm{P}(v,v)+\frac12R(w)\mathrm{P}(w,v)+\frac12\left\{(R(v)+P(v))\mathrm{P}(v,v)+(R(w)+P(w))\mathrm{P}(w,v)\right\}\\ &\le \frac{1-\eta}{2}R_{v,w}+\frac12\big\{\chi(B'(v))-(\chi(B(v))-R(v)-P(v))\mathrm{P}(v,v)\\ &\phantom{\le\frac{1-\eta}{2}R_{v,w}+\frac12\big\{}-(\chi(B(w))-R(w)-P(w))\mathrm{P}(w,v)\big\}\\ &\le \frac{1-\eta}{2}R_{v,w}+\frac12\big\{\chi(B'(v))-\eta(aB_{v,w}+2b-R_{v,w}-P_{v,w})\big\}\\ &=\frac{1}{2}R_{v,w}+\frac{\eta}{2}P_{v,w}+\frac{1-\eta}{2}\chi(B'(v))-\frac{\eta}{2}\chi(B'(w))\\ &=\frac{1+\eta}{2}R_{v,w}+\frac{\eta}{2}P_{v,w}+\frac{1-\eta}{2}\chi(B'(v))-\eta \chi(B'(w))-\frac{\eta}{2}R_{v,w}+\frac{\eta}{2}\chi(B'(w))\\ &\le\frac{1+\eta}{2}R_{v,w}+\frac{\eta}{2}P_{v,w}+\frac{1-\eta}{2}\chi(B'(v))-\eta \chi(B'(w)), \end{align*} using that $R_{v,w}\ge \chi(B'(w))$ in the last inequality.
\underline{Case 4b: $P_{v,w}\le \chi(B'(v))-R_{v,w}$}\\ Here $u(v)+\frac12u^P(v)=R_{v,w}+\frac12 P_{v,w}$ and $\ell(v)+\frac12 \ell^P(v)=R_{v,w}-\chi(B'(w))+\frac12 P_{v,w}$, thus $u(v)+\frac12u^P(v)-\ell(v)-\frac12 \ell^P(v)=\chi(B'(w))$. Showing $\theta(v)\le 1-\eta$ is equivalent to showing $m^*(v)\ge R_{v,w}+\frac12 P_{v,w}-(1-\eta)\chi(B'(w))$. We have \begin{align*} m^*(v)&=\chi(B'(v))-(\chi(B(v))-R(v)-P(v))\mathrm{P}(v,v)-\frac12 P(v)\mathrm{P}(v,v)\\ &\phantom{\le \chi(B'(v))\,\,}-(\chi(B(w))-R(w)-P(w))\mathrm{P}(w,v)-\frac12P(w)\mathrm{P}(w,v)\\ &\ge \chi(B'(v))-(1-\eta)(aB_{v,w}+2b-R_{v,w}-P_{v,w})-\frac{1-\eta}{2}P_{v,w}\\ &=R_{v,w}+\frac12P_{v,w}-(1-\eta)\chi(B'(w))+\eta(\chi(B'(v))-R_{v,w}-\frac12P_{v,w}), \end{align*}and this shows the desired bound since $P_{v,w}\le \chi(B'(v))-R_{v,w}$. Showing $\theta(v)\ge\eta$ is equivalent to showing $m^*(v)\le R_{v,w}+\frac12 P_{v,w}-\eta \chi(B'(w))$. This holds since we have $m^*(v)\le (1-\eta)(R_{v,w}+\frac12P_{v,w})\le R_{v,w}+\frac12 P_{v,w}-\eta R_{v,w}$ and since $R_{v,w}\ge \chi(B'(w))$ this gives $m^*(v)\le R_{v,w}+\frac12 P_{v,w}-\eta \chi(B'(w))$ as needed. \end{proof} \begin{proof}[Proof of Lemma~\ref{L:pairres}] We fix the configurations of black, red, and pink particles $B$, $R$, $P$ just before an update on $e=\{v,w\}$ and also the number of paired red $R^p_{v,w}$. Let $x=\ell(v)+\ell(w)-R^q_{v,w}$, where $R^q_{v,w}$ is the number of non-paired red particles on $e$. Then $x\vee0$ is the number of paired red particles needed for the lower bounds in Step 1 and so any particular paired red particle will be remaining in the pool after Step 1 with probability $1-(x\vee 0)/R^p_{v,w}$. Observe that $\chi(B'(v))+\chi(B'(w))\ge R_{v,w}+R^p_{v,w}$ (since each paired red particle on $e$ implies the existence of a unique paired white particle also on~$e$). We consider four cases.
\underline{Case 1: $R_{v,w}>\chi(B'(v))\vee \chi(B'(w))$}\\ Then $x=2R_{v,w}-\chi(B'(v))-\chi(B'(w))-R^q_{v,w}\le 2R_{v,w}-(R_{v,w}+R^p_{v,w})-R^q_{v,w}=0,$ i.e.\! no paired red particles are needed for the lower bounds and they all remain in the pool after Step 1. \underline{Case 2: $R_{v,w}\le \chi(B'(v))\wedge \chi(B'(w))$}\\ In this case $x=-R^q_{v,w}$ so all paired red particles remain in the pool. \underline{Case 3: $\chi(B'(v))\le R_{v,w}\le \chi(B'(w))$}\\ Then $x=R_{v,w}-\chi(B'(v))-R^q_{v,w}=R^p_{v,w}-\chi(B'(v))$. We need to show that this is at most $(1-\gamma) R^p_{v,w}$. We are assuming that $\chi(B'(w))\le \chi(B'(v))/\gamma$. We also have that $\chi(B'(v))+\chi(B'(w))\ge 2R^p_{v,w}$ and thus $\chi(B'(v))\ge 2R^p_{v,w}/(1+1/\gamma)\ge \gamma R^p_{v,w}$ since $\gamma<1$. This gives the desired bound on $x$. \underline{Case 4: $\chi(B'(w))\le R_{v,w}\le \chi(B'(v))$}\\ This case follows similarly to Case 3, switching the roles of $v$ and $w$. \end{proof} \section*{Simulation} To further illustrate the evolution of the chameleon process and its relationship to the MaBB, we present a possible trajectory of the two processes for two updates (for simplicity we suppose the first two edge-rings occur at times 1 and 2). In this example, the graph is the line on 7 vertices and $a=b=1$. \begin{figure} \caption{The initial configurations are shown as above. Observe that the non-marked particles in the MaBB are in the same configuration as the black particles in the chameleon and vertex 3 (which has the marked particle in the MaBB) has all its non-black particles in the chameleon configuration as red.
As it is the start of a round, and as there are fewer red particles than white, we pair up each red particle with a unique white particle and label the paired particles (with the same label) to track the pairings.} \end{figure} \begin{figure} \caption{At time 1 edge $\{1,2\}$ rings.} \end{figure} \begin{figure} \caption{At time 2 edge $\{2,3\}$ rings.} \end{figure} \end{document}
\begin{document} \ifthenelse{0 = 1}{\zibtitlepage}{} \title{Global Optimization of Mixed-Integer Nonlinear Programs with SCIP 8} \abstract{ For over ten years, the constraint integer programming framework SCIP has been extended by capabilities for the solution of convex and nonconvex mixed-integer nonlinear programs (MINLPs). With the recently published version~8.0, these capabilities have been largely reworked and extended. This paper discusses the motivations for recent changes and provides an overview of features that are particular to MINLP solving in SCIP. Further, difficulties in benchmarking global MINLP solvers are discussed and a comparison with several state-of-the-art global MINLP solvers is provided.} \normalsize \section{Introduction} Mixed-integer nonlinear programming (MINLP) is concerned with the optimization of an objective function subject to a finite set of linear or nonlinear constraints and integrality conditions. The generality of this problem class means that many real-world applications can be modeled as MINLPs~\cite{Floudas1995,GrossmannKravanja1997,Pinter2006,TrespalaciosGrossmann2014}, but also that software that can handle this class efficiently becomes extremely complex. Solvers for MINLP~\cite{BussieckVigerske2010} are often built on top of or by combining solvers for mixed-integer linear programming (MIP) and solvers that find locally optimal solutions for nonlinear programs (NLP). In fact, one of the first commercial MINLP solvers, SCICONIC~\cite{Beale1980}, extended a MIP solver by piecewise-linear approximations of low-dimensional nonlinear terms. The first general-purpose solver was DICOPT~\cite{KocisGrossmann1989}, which decomposes the solution of an MINLP into a sequence of MIP and NLP solves~\cite{DuranGrossmann1986}, thereby building on established software for these two program classes. DICOPT can solve MINLPs whose nonlinear constraints are convex to optimality, but works only as a heuristic on nonconvex MINLPs.
The first general-purpose solvers that could also solve nonconvex MINLPs to optimality were $\alpha$BB, BARON, and GLOP~\cite{AdjimanFloudas1996,Sahinidis1996,SmithPantelides1999}, all based on convexification techniques for nonconvex constraints. Also the solver SCIP (Solving Constraint Integer Programs), for which this paper provides an overview, belongs to the latter category. In the following, MINLPs of the form \begin{equation} \begin{aligned} \min \quad& c^{\top}x, \\ \mathrm{such\ that} \quad& \low{g} \leq g(x) \leq \upp{g}, \\ & \low{b} \leq Ax \leq \upp{b}, \\ & \low{x} \leq x \leq \upp{x}, \\ & x_{\mathcal{I}} \in \mathbb{Z}^{\vert\mathcal{I}\vert}, \end{aligned} \label{eq:minlp} \tag{MINLP} \end{equation} are considered, where $\low{x}$, $\upp{x} \in \overline{\Rbb}^{n}$, $\overline{\Rbb} := \mathbb{R} \cup \{\pm\infty\}$, $\low{x}\leq \upp{x}$, $\mathcal{I} \subseteq \{1, \ldots, n\}$, $c \in \mathbb{R}^n$, $\low{g}$, $\upp{g}\in\overline{\Rbb}^m$, $\low{g}\leq \upp{g}$, $g : \mathbb{R}^{n} \rightarrow \overline{\Rbb}^m$ is specified explicitly in algebraic form, $\low{b},\upp{b}\in\overline{\Rbb}^{\tilde m}$, $\low{b}\leq\upp{b}$, and $A\in\mathbb{R}^{\tilde m\times n}$. The restriction to a linear objective function is a technical detail of SCIP and without loss of generality. The software SCIP has been designed as a branch-cut-and-price framework to solve different types of optimization problems, most generally \emph{constraint integer programs} (CIPs), and most importantly MIPs and MINLPs. Roughly speaking, CIPs are finite-dimensional optimization problems with arbitrary constraints and a linear objective function that satisfy the following property: if all integer variables are fixed, the remaining subproblem must form a linear or nonlinear program.
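For illustration, a small nonconvex instance of \eqref{eq:minlp} (constructed here purely for exposition) is
\begin{equation*}
\begin{aligned}
\min \quad& -x_1 - x_2, \\
\mathrm{such\ that} \quad& x_1 x_2 \leq 4, \\
& 0 \leq x_1 \leq 4,\quad 0 \leq x_2 \leq 4, \\
& x_1 \in \mathbb{Z},
\end{aligned}
\end{equation*}
that is, $n=2$, $\mathcal{I}=\{1\}$, $c=(-1,-1)^{\top}$, $g(x)=x_1x_2$ with $\low{g}=-\infty$ and $\upp{g}=4$, and no additional linear constraints ($\tilde m=0$). The constraint $x_1x_2\leq 4$ is nonconvex, and fixing the integer variable $x_1$ leaves a linear program in $x_2$, so the instance also illustrates the CIP property just described.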
The problem class of CIP was motivated by the modeling flexibility of constraint programming and the algorithmic requirements of integrating it with efficient solution techniques available for MIP~\cite{Achterberg2007}. In order to solve CIPs, SCIP constructs relaxations -- typically linear programs (LPs). If the relaxation solution is not feasible for the current subproblem, the plugins that handle the violated constraints need to take measures to eventually render the relaxation solution infeasible for the updated relaxation, for example by branching or separation~\cite{Achterberg2007}. A plethora of additional plugin types, e.g., for presolving, finding feasible solutions, or tightening variable bounds, allow accelerating the solution process. After 20 years of development of the framework itself and included plugins, SCIP includes mature solvers for MIP, MINLP, as well as several other problem classes. Since November 2022, SCIP is freely available under an open-source license. SCIP solves problems like \eqref{eq:minlp} to global optimality via a spatial branch-and-bound algorithm that mixes branch-and-infer and branch-and-cut~\cite{BeKiLeLiLuMa12}. Important parts of the solution algorithm are presolving, domain propagation (that is, tightening of variable bounds), linear relaxation, and branching. A distinguishing feature of SCIP is that its capabilities to handle nonlinear constraints are not limited to MINLPs, but can be used for any CIP. For example, problems can be handled where linear and nonlinear constraints are mixed with typical constraints from constraint programming, as long as appropriate constraint handlers have been included in SCIP. Since most constraint handlers in SCIP construct a linear relaxation of their constraints, also the handling of nonlinear constraints focuses on linear relaxations. 
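To make the scheme concrete, the following deliberately naive one-dimensional sketch shows the core loop of spatial branch-and-bound (bisection of variable domains plus pruning by a cheap relaxation bound). The objective, the hand-made interval bound, and all names are illustrative assumptions and bear no relation to SCIP's actual implementation, which works with linear relaxations, domain propagation, and far more refined branching rules:

```python
def f(x):
    # toy nonconvex objective to be minimized over an interval (illustrative)
    return x**4 - 3.0*x**2 + x

def interval_lower(a, b):
    # crude interval-arithmetic relaxation: bound each term of f separately
    x4_min = 0.0 if a <= 0.0 <= b else min(a**4, b**4)
    return x4_min - 3.0*max(a*a, b*b) + a

def spatial_bb(a, b, tol=1e-6):
    incumbent = min(f(a), f(b))             # primal bound from interval ends
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        if interval_lower(lo, hi) > incumbent - tol:
            continue                        # prune: node cannot beat incumbent
        mid = 0.5*(lo + hi)
        incumbent = min(incumbent, f(mid))  # cheap "primal heuristic"
        if hi - lo > tol:                   # otherwise node is solved to tol
            stack += [(lo, mid), (mid, hi)]
    return incumbent
```

Calling spatial_bb(-2.0, 2.0) returns the global minimum of $f$ on $[-2,2]$ up to the tolerance; replacing the hand-made interval bound by polyhedral relaxations and adding bound tightening is what turns this toy loop into a practical solver.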
The emphasis on handling CIPs with nonlinear constraints rather than MINLP only is also a reason that the use of nonlinear relaxations or reformulations of complete MINLPs into other problem types, e.g., mixed-integer conic programs, has not been explored much so far. The development of SCIP initially focused on solving CIPs where fixing all integer variables resulted in a linear program~\cite{Achterberg2007}. However, it was soon realized that this requirement was not actually enforced by the implementation. As long as constraint handlers were able to resolve infeasibilities by separation, branching, or other means, the problem could be handled by SCIP. First experiments to handle nonlinear constraints in continuous variables were conducted for bilinear mixing constraints in mine production planning~\cite{BleyKochNiu2008}. The positive results of these experiments motivated the decision to include support for more general nonlinear constraints. With version 1.2~(2009), initial support for quadratic constraints (convex or nonconvex) and solving quadratically constrained programs (QCPs) to local optimality by Ipopt~\cite{WaechterBiegler2006} was added~\cite{BeHeVi09}. For version 2.0~(2010), a primal heuristic that solves sub-MIPs was added~\cite{BertholdGleixner2014} and other large-neighborhood-search heuristics were extended to create sub-MINLPs~\cite{BeHePfVi11}. Further, second-order cone constraints in three variables could be handled. More general nonlinear constraints, specified in algebraic form, were first supported by SCIP~2.1~(2011)~\cite{Vigerske2013}. Next to the specialized treatment for quadratic constraints, also handlers for signpower constraints ($x\abs{x}^p = z$ for some $p \geq 1$)~\cite{GlHeHuVi12} and 1-convex bivariate constraints ($f(x,y) = z$ for $f$ being convex or concave whenever $x$ or $y$ has been fixed)~\cite{BaMiVi12} were added. 
With the basic handling of nonlinear constraints in place~\cite{VigerskeGleixner2016}, the next releases were dedicated to adding features that improved performance. SCIP~3.0~(2012) brought optimization-based bound tightening (OBBT)~\cite{GleixnerBertholdMuellerWeltge2017} and an NLP diving heuristic. SCIP~3.2~(2015) added a reformulation of general quadratic constraints into second-order cone constraints and separation for edge-concave decompositions of quadratic constraints~\cite{SCIPoptsuite32}. With SCIP~4.0~(2017), higher-dimensional second-order cone constraints were disaggregated, KKT conditions for quadratic programs were utilized, multiple starting points were tried for NLP solves, solutions of the LP relaxation were projected onto a convex NLP relaxation, and also OBBT could be performed on the NLP instead of the LP relaxation~\cite{SCIPoptsuite40}. Improved convexification of bilinear constraints by use of additional linear constraints~\cite{MuellerSerranoGleixner2020}, a new primal heuristic that solves a sequence of NLP reformulations, and interfaces to the NLP solvers filterSQP and Worhp~\cite{FletcherLeyffer1998,BueskensWassel2013,MuellerKuhlmannVigerske2017} were added for SCIP 5.0~(2017)~\cite{SCIPoptsuite50}. The following two major releases brought a branch-and-price based solver for ring-packing~\cite{GleixnerMaherMuellerPedroso2020} (SCIP 6.0, 2018) and support for convex nonlinear subproblems in Benders Decomposition (SCIP 7.0~\cite{SCIPoptsuite70}, 2020). That versions 6 and 7 added comparatively few features for MINLP was due to an ongoing complete overhaul of the way nonlinear constraints are handled.
The primary motivation for this change, which was released with SCIP~8.0~(2022)~\cite{SCIPoptsuite80}, was to increase the reliability of the solver and to alleviate numerical issues that arose from problem reformulations and led to SCIP returning solutions that are feasible in the reformulated problem, but infeasible in the original problem. More precisely, previous SCIP versions built an extended formulation of~\eqref{eq:minlp} explicitly, with the consequence that the original constraints were no longer included in the presolved problem. Even though the formulations were theoretically equivalent, it was possible that $\varepsilon$-feasible solutions for the reformulated problem were not $\varepsilon$-feasible in the original problem. SCIP~8 remedies this by building an implicit extended formulation as an annotation to the original problem. A second motivation for the major changes in SCIP~8 was to reduce the ambiguity of expression and nonlinear structure types by implementing different plugin types for low-level structure types that define expressions, and high-level structure types that add functionality for particular, sometimes overlapping structures. Finally, new features for improving the solver's performance on MINLPs were introduced with SCIP~8. These include intersection, SDP (semi-definite programming), and RLT (reformulation linearization technique) cuts for quadratic expressions~\cite{ChmielaMunozSerrano2021,AchterbergBestuzhevaGleixner2022}, perspective strengthening~\cite{BestuzhevaGleixnerVigerske2021}, and symmetry detection~\cite{Wegscheider2019}. SCIP can read MINLPs from files in the following formats: LP, MPS, NL (AMPL), OSiL, PIP, and ZIMPL. In addition, problems can be passed to SCIP via interfaces to a variety of programming languages and modeling packages, including AMPL, C, GAMS, Java, Julia, Python, and MATLAB. The following section provides an overview of the MINLP solving capabilities of SCIP. 
Afterwards, the performance of SCIP is compared with that of other state-of-the-art global solvers for MINLP. \section{MINLP capabilities of SCIP} In the following, an overview of the facilities available in SCIP that are specific to the handling of MINLPs is provided. First, available nonlinear functions are listed and the integration of nonlinear constraints into the branch-and-cut solver of SCIP is discussed. Next, the concept of a \emph{nonlinear handler} is introduced, which is a new plug-in type that has been added with SCIP 8 and facilitates the integration of extensions that handle specific nonlinear structures. The remainder of this section gives an overview of features available in SCIP that increase the efficiency of MINLP solving, e.g., cut generators to tighten the linear relaxation, presolve reductions to simplify the problem, and primal heuristics to find feasible solutions early. To be concise, the presentation is limited to high-level descriptions that omit technical details. Unless specified otherwise, more details can be found in~\cite{SCIPoptsuite80}. \subsection{Framework} \subsubsection{Expressions} \label{sec:expr} Algebraic expressions are well-formed combinations of constants, variables, and various algebraic operations such as addition, multiplication, and exponentiation, that are used to describe mathematical functions. They are often represented by a directed acyclic graph with nodes representing variables, constants, and operations, and arcs indicating the flow of computation; see Figure~\ref{fig:exprgraph} for an example. \begin{figure} \caption{Expression graph for the algebraic expression $\log(x)^2 + 2\log(x)y+y^2$.} \label{fig:exprgraph} \end{figure} Also in SCIP, expressions are stored as directed acyclic graphs, while all semantics of expression operands are defined by \textit{expression handler} plugins.
These handlers implement callbacks that are used by methods in the SCIP core to manage expressions (create, modify, copy, free, parse, print), to evaluate and compute derivatives at a point, to evaluate over intervals, to simplify, to identify common subexpressions, to check curvature and integrality, and to iterate over them. Some additional expression handler callbacks are used exclusively by the constraint handler for nonlinear constraints (Section~\ref{sec:consnl}). Expression handlers for the following operators are included in SCIP~8.0: \begin{itemize} \item \texttt{val}: scalar constant; \item \texttt{var}: a SCIP variable; \item \texttt{sum}: an affine-linear function, $y\mapsto a_0 + \sum_{j=1}^k a_jy_j$ for $y\in\mathbb{R}^k$ with constant coefficients $a\in\mathbb{R}^{k+1}$; \item \texttt{prod}: a product, $y\mapsto c\prod_{j=1}^ky_j$ for $y\in\mathbb{R}^k$ with constant factor $c\in\mathbb{R}$; \item \texttt{pow}: a power with a constant exponent, $y\mapsto y^p$ for $y\in \mathbb{R}$ and exponent $p\in\mathbb{R}$ (if $p\not\in\mathbb{Z}$, then $y\geq 0$ is required); \item \texttt{signpower}: a signed power, $y\mapsto \mathrm{sign}(y)\abs{y}^p$ for $y\in\mathbb{R}$ and constant exponent $p\in\mathbb{R}$, $p>1$; \item \texttt{exp}: exponentiation, $y\mapsto \exp(y)$ for $y\in\mathbb{R}$; \item \texttt{log}: natural logarithm, $y\mapsto \log(y)$ for $y\in\mathbb{R}_{>0}$; \item \texttt{entropy}: entropy, $y\mapsto\begin{cases}-y\log(y), & \textrm{if }y > 0,\\0, & \textrm{if }y=0,\end{cases}$ for $y\in\mathbb{R}_{\geq 0}$; \item \texttt{sin}: sine, $y\mapsto\sin(y)$ for $y\in\mathbb{R}$; \item \texttt{cos}: cosine, $y\mapsto\cos(y)$ for $y\in\mathbb{R}$; \item \texttt{abs}: absolute value, $y\mapsto \abs{y}$ for $y\in\mathbb{R}$. \end{itemize} In previous versions of SCIP, high-level structures such as quadratic functions could also be represented as expression types.
To avoid ambiguity and reduce complexity, this has been replaced by a recognition of quadratic expressions that is no longer made explicit by a change in the expression type. \subsubsection{Constraint Handler for Nonlinear Constraints} \label{sec:consnl} All nonlinear constraints $\low{g}\leq g(x)\leq \upp{g}$ of \eqref{eq:minlp} are handled by the constraint handler for nonlinear constraints in SCIP, while the linear constraints $\low{b}\leq Ax\leq \upp{b}$ are handled by the constraint handler for linear constraints and its specializations (e.g., knapsack, set-covering). A constraint handler is responsible for checking whether solutions satisfy constraints and, if that is not the case, for resolving infeasibility by \emph{enforcing constraints}. This applies in particular to solutions of the LP relaxation. The nonlinear constraint handler currently enforces constraints by the following means: \begin{description} \item[DOMAINPROP] by analyzing the nonlinear constraints with respect to the variable bounds at the current node of the branch-and-bound tree, infeasibility or a bound tightening may be deduced, which allows pruning the node or cutting off the given solution, respectively; this is also known as \emph{domain propagation}; \item[SEPARATE] a cutting plane that is violated by the given solution may be computed; \item[BRANCH] the current node of the branch-and-bound tree is subdivided, that is, a variable $x_i$ and a branching point $\tilde x_i\in[\low{x}_i,\upp{x}_i]$ are selected and two child nodes with $x_i$ restricted to $[\low{x}_i,\tilde{x}_i]$ and $[\tilde{x}_i,\upp{x}_i]$, respectively, are created. \end{description} To decide whether a node can be pruned (DOMAINPROP), an overestimate of the range of $g(x)$ with respect to current variable bounds is computed by means of interval arithmetic~\cite{Moore1966}.
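Such an interval evaluation can be sketched in a few lines; the following is a generic illustration of the idea (with made-up bounds and constraint sides), not SCIP's actual implementation:

```python
# Minimal interval-arithmetic sketch (illustrative only, not SCIP's code).
def iadd(a, b):
    """Sum of two intervals [a] + [b]."""
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    """Product of two intervals [a] * [b]: min/max over corner products."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

# Range enclosure of g(x, y) = x*y + y over x in [1, 2], y in [-1, 1]:
x, y = (1.0, 2.0), (-1.0, 1.0)
g = iadd(imul(x, y), y)                        # -> (-3.0, 3.0)

# Pruning test: if the enclosure misses the constraint sides, the node
# contains no feasible point for this constraint and can be pruned.
g_lo, g_hi = 4.0, 10.0
node_infeasible = g[1] < g_lo or g[0] > g_hi   # True: [-3,3] and [4,10] are disjoint
```

The enclosure is generally an overestimate of the true range, which is why the pruning test is conservative but safe.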
If a constraint $k$ is found such that $g_k([\low{x},\upp{x}])\cap [\low{g}_k,\upp{g}_k]=\emptyset$, then there exists no point in $[\low{x},\upp{x}]$ for which this constraint is feasible. A bound tightening may be computed by applying the same methods in reverse order. That is, interval arithmetic is used to overestimate $g^{-1}([\low{g},\upp{g}])$, the preimage of $g(x)$ on $[\low{g},\upp{g}]$, and variable bounds are tightened to $[\low{x},\upp{x}]\cap g^{-1}([\low{g},\upp{g}])$. This is also known as feasibility-based bound tightening (FBBT). In the simplest case, callbacks of expression handlers are used to propagate intervals through expressions. However, in some cases, other methods that take more structure into account or that use additional information to tighten variable bounds and constraint sides are used (see, e.g., Sections~\ref{sec:quadprop} and~\ref{sec:bilin}). To construct a linear relaxation of the nonlinear constraints (SEPARATE option), an extended formulation is considered: \begin{equation} \label{eq:minlp_ext} \tag{$\text{MINLP}_\text{ext}$} \begin{aligned} \min\; & c^\top x, \\ \mathrm{such\ that}\; & h_i(x,w_{i+1},\ldots,w_{\hat m}) \lesseqgtr_i w_i, & i=1,\ldots,\hat m, \\ & \low{b} \leq Ax \leq \upp{b}, \\ & \low{x} \leq x \leq \upp{x}, \\ & \low{w} \leq w \leq \upp{w}, \\ & x_\mathcal{I} \in \mathbb{Z}^{\vert\mathcal{I}\vert}. \end{aligned} \end{equation} The functions $h_i$ are obtained from the expressions that define functions $g_i$ by recursively annotating subexpressions with auxiliary variables $w_{i+1},\ldots,w_{\hat m}$ for some $\hat m \geq m$. Initially, slack variables $w_1,\ldots,w_m$ are introduced and assigned to the root of all expressions, i.e., $h_i:=g_i$, $\low{w}_i := \low{g}_i$, $\upp{w}_i:=\upp{g}_i$, for $i=1,\ldots,m$. 
Next, for each function $h_i$, subexpressions $f$ may be assigned new auxiliary variables $w_{i'}$, $i'>m$, which results in extending~\eqref{eq:minlp_ext} by additional constraints $h_{i'}(x) = w_{i'}$ with $h_{i'} := f$. Bounds $\low{w}_{i'}$ and $\upp{w}_{i'}$ are initialized to bounds on $h_{i'}$, if available. Since auxiliary variables in a subexpression of $h_i$ always receive an index larger than $\max(m,i)$, the result is referred to by $h_i(x,w_{i+1},\ldots,w_{\hat m})$ for any $i=1,\ldots, \hat m$. That is, to simplify notation, $w_{i+1}$ is used instead of $w_{\max(i,m)+1}$. If a subexpression appears in several expressions, it is assigned at most one auxiliary variable and reindexing may be necessary to have $h_i$ depend on $x$ and $w_{i+1},\ldots, w_{\hat m}$ only. For the (in)equality sense $\lesseqgtr_i$, a valid simplification would be to assume equality everywhere. For performance reasons, though, it can be beneficial to relax certain equalities to inequalities if that does not change the feasible space of~\eqref{eq:minlp_ext} when projected onto $x$. Therefore, \[ \lesseqgtr_i\; := \begin{cases} =, & \text{if } \low{g}_i > -\infty,\; \upp{g}_i < \infty, \\ \leq, & \text{if } \low{g}_i = -\infty,\; \upp{g}_i < \infty, \\ \geq, & \text{if } \low{g}_i > -\infty,\; \upp{g}_i = \infty, \end{cases} \quad\text{ for }i=1,\ldots,m. \] For $i>m$, monotonicity of expressions is taken into account to derive $\lesseqgtr_i$. Whether to annotate a subexpression by an auxiliary variable depends on the structures that are recognized. In the simplest case, every subexpression that is not already a variable is annotated with an auxiliary variable. This essentially corresponds to the Smith Normal Form~\cite{SmithPantelides1999}. For every function $h_i$ of~\eqref{eq:minlp_ext}, the callbacks of the corresponding expression handler can be used to compute linear under- and overestimators, such that a linear relaxation for~\eqref{eq:minlp_ext} is constructed. 
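As an illustration of such estimator callbacks, consider the convex term $y^2$: a tangent at a reference point underestimates it everywhere, and the secant through the bound points overestimates it on the box. The sketch below is a generic textbook construction under these assumptions, not SCIP's code:

```python
# Under-/overestimation of the convex term y^2 on [y_lo, y_up]
# (generic sketch, not SCIP's estimator callbacks).
def tangent_underestimator(yref):
    """y^2 >= 2*yref*y - yref^2 for all y (tangent at yref)."""
    return (2.0 * yref, -yref * yref)           # (slope, intercept)

def secant_overestimator(y_lo, y_up):
    """y^2 <= (y_lo + y_up)*y - y_lo*y_up for y in [y_lo, y_up] (secant)."""
    return (y_lo + y_up, -y_lo * y_up)

s, c = tangent_underestimator(1.0)     # y^2 >= 2y - 1
S, C = secant_overestimator(0.0, 2.0)  # y^2 <= 2y on [0, 2]
for y in (0.0, 0.5, 1.0, 1.5, 2.0):
    assert s * y + c <= y * y <= S * y + C
```

For a nonconvex term, the analogous callbacks produce estimators that are tight only up to the convex envelope over the current bounds.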
It can, however, be beneficial to not add an auxiliary variable for every subexpression, thus allowing for more complex functions in~\eqref{eq:minlp_ext}. This will be discussed in Section~\ref{sec:nlhdlr} below. \paragraph{Example} Recall Figure~\ref{fig:exprgraph} and the constraint \[ \log(x)^2 + 2\log(x)\,y+y^2 \leq 4. \] By annotating the root of the expression graph with a slack variable $w_1$ and each other non-variable node with an auxiliary variable, the extended formulation \begin{align*} w_2 + 2w_3 + w_4 & \leq w_1, \\ w_5^2 & \leq w_2, \\ w_5\,y & \leq w_3, \\ y^2 & \leq w_4, \\ \log(x) & = w_5, \\ w_1 & \leq 4 \end{align*} is obtained. Bounds on auxiliary variables have been omitted here. The constraints $w_5^2 = w_2$, $w_5y = w_3$, and $y^2 = w_4$ were relaxed to inequalities because $w_2+2w_3+w_4$ is monotonically increasing in each variable. However, to relax $\log(x)=w_5$ to $\log(x)\leq w_5$, both $w_5^2$ and $w_5y$ would need to be monotonically increasing in $w_5$. This would be the case if $\low{x}\geq 1$ and $\low{y}\geq 0$. If a constraint $h_i(x,w_{i+1},\ldots,w_{\hat m})\leq w_i$ (the $\geq$-case is analogous) of~\eqref{eq:minlp_ext} is violated and $h_i$ is nonconvex, then linear underestimators of $h_i$ can only be as tight as the convex envelope of $h_i$. Therefore, it may not be possible to find a hyperplane that is violated by the solution of the LP relaxation. Since the convex envelope of $h_i$ depends on the bounds of variables appearing in $h_i$, these variables are candidates for branching (BRANCH). More precisely, when an expression handler computes a linear under- or overestimator for $h_i(x,w_{i+1},\ldots,w_{\hat m})$, it also signals for which variables it used current variable bounds. Marked original variables are then added to the list of branching candidates. For an auxiliary variable $w_{i'}$, $i'>i$, the variables in the subexpression that $h_{i'}$ represents are considered for branching instead.
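The extended formulation from the example above can be checked numerically: setting every auxiliary variable to the value of its subexpression satisfies the extension exactly when the original constraint holds. A small sketch (illustrative only, with an arbitrarily chosen test point):

```python
import math

# Numerical check of the extended formulation of
#   log(x)^2 + 2*log(x)*y + y^2 <= 4
# from the example above (illustrative sketch).
def original_feasible(x, y, rhs=4.0):
    return math.log(x)**2 + 2*math.log(x)*y + y**2 <= rhs

def extension_values(x, y):
    # Assign to each auxiliary variable the value of its subexpression.
    w5 = math.log(x)
    w2, w3, w4 = w5**2, w5*y, y**2
    w1 = w2 + 2*w3 + w4
    return w1, w2, w3, w4, w5

x, y = 2.0, 1.0
w1, w2, w3, w4, w5 = extension_values(x, y)
# All relaxed constraints of the extension hold at these values ...
assert w2 + 2*w3 + w4 <= w1 and w5**2 <= w2 and w5*y <= w3 and y**2 <= w4
# ... and feasibility of the extension (w1 <= 4) matches the original.
assert original_feasible(x, y) == (w1 <= 4.0)
```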
The decision on whether to add a cutting plane that separates the solution of the LP relaxation or to branch is rather complex, but the idea is to branch if either no cutting plane is found or if the violation of available cutting planes in the relaxation solution is rather small when compared to the convexification gap of the under-/overestimators that define the cutting planes. In the latter case, it may be beneficial to first reduce the convexification gap by branching. To select one variable from the list of branching candidates, the violation of constraints in~\eqref{eq:minlp_ext} and historical information about the effect of branching on a given variable on the optimal value of the LP relaxation (``pseudo costs'') are taken into account. The branching point is a convex combination of the value of the variable in the LP relaxation and the mid-point of the variable's interval. \subsubsection{Nonlinear Handlers} \label{sec:nlhdlr} In the previous example, four auxiliary variables were introduced to construct the extended formulation. This is because expression handlers have a rather myopic view, basically implementing techniques that can handle only their direct children. It is clear that, for this example, an extended formulation that only replaces $\log(x)$ by an auxiliary variable~$w_2$ could be more efficient to solve. However, this requires methods to detect the quadratic (or convex) structure and to either compute linear underestimators for the quadratic (convex) expression $w_2^2 + 2w_2y+y^2$ or to separate cutting planes for the set defined by $w_2^2 + 2w_2y+y^2\leq w_1$. Such structure detection and handling methods are the task of the new \emph{nonlinear handler} plugins that were introduced with SCIP 8. Nonlinear handlers determine the extended formulation~\eqref{eq:minlp_ext} by deciding when to annotate subexpressions with auxiliary variables.
That is, given a constraint $h_i(x) \lesseqgtr_i w_i$, a nonlinear handler analyses the expression that defines $h_i$ and attempts to detect specific structures. At this point, it may also request to introduce additional auxiliary variables, thus changing $h_i(x)$ into $h_i(x,w_{i+1},\ldots,w_{\hat m})$. In addition, it informs the constraint handler that it will now provide separation for $h_i(x,w_{i+1},\ldots,w_{\hat m}) \leq w_i$, or $\geq w_i$, or both. If none of the nonlinear handlers declare that they will handle $h_i(x) \lesseqgtr_i w_i$, auxiliary variables are introduced for each argument of the root of the expression $h_i$ and expression handler callbacks are used to construct cutting planes from linear under-/overestimators. In addition to separation, nonlinear handlers can also contribute to domain propagation. This is implemented analogously to separation by setting up an additional extended formulation similar to~\eqref{eq:minlp_ext}, with the main difference that slack and auxiliary variables are not actually created in SCIP and equalities are currently not relaxed to inequalities. Note that the extended formulations are stored as \emph{annotations} on the original expressions. Thus, for each task, the most suitable formulation can be used. For example, feasibility is checked on the original constraints, domain propagation and separation use the corresponding extended formulations, but branching is performed, by default, with respect to original variables only. With SCIP 7 and earlier, only one extended formulation was constructed explicitly and the connection to the original formulation was no longer available, leading to issues because it was not ensured that solutions are also ($\varepsilon$-)feasible for the original constraints. In addition to the improved numerical reliability, the nonlinear handlers also allow for greater flexibility when handling nonlinear structures.
For each node in an expression, more than one nonlinear handler can be attached, each one annotating possibly different subexpressions with auxiliary variables. For example, for a nonconvex quadratic constraint $\sum_{i,j} a_{i,j} x_ix_j \leq w$, the nonlinear handler for quadratics can declare that it will provide separation (by intersection cuts, see Section~\ref{sec:intersection}), but that also other means of separation should be tried. However, since no other nonlinear handler declares that it will provide separation, auxiliary variables are introduced for each argument of the sum, that is, an auxiliary variable $X_{ij}$ is assigned to each product $x_ix_j$. For the corresponding constraints $x_ix_j\leq X_{ij}$ (if $a_{i,j}\geq 0$), the well-known McCormick underestimators~\cite{McCormick1976}, \begin{equation} \label{eq:mccormick} \begin{aligned} X_{ij} & \geq \low{x}_ix_j + \low{x}_jx_i - \low{x}_i\low{x}_j, \\ X_{ij} & \geq \upp{x}_ix_j + \upp{x}_jx_i - \upp{x}_i\upp{x}_j, \end{aligned} \end{equation} or other means (see Section~\ref{sec:bilin}) will be used to construct a linear relaxation. \subsubsection{NLP Relaxation} \label{sec:nlprelax} Similar to the central LP relaxation of SCIP, an NLP relaxation is also available. In contrast to constraint handlers, the NLP relaxation uses a common data structure to store its constraints. At the moment, constraint handlers for linear constraints and the constraint handler for nonlinear constraints store a representation of their constraints in the NLP relaxation, so that in case of a MINLP, the NLP relaxation together with the integrality conditions on variables provides a unified view of the problem. For nonlinear constraints, the original (non-extended) form $\low{g}\leq g(x)\leq \upp{g}$ is added to the NLP. To find local optimal solutions for the NLP relaxation, interfaces to the NLP solvers filterSQP, Ipopt, and Worhp~\cite{FletcherLeyffer1998,WaechterBiegler2006,BueskensWassel2013} are available. 
First- and second-order derivatives for these solvers are computed via CppAD~\cite{cppad}. The NLP relaxation is mainly used by some primal heuristics~(Section~\ref{sec:heur}) and separators~(Section~\ref{sec:convextight}) at the moment. \subsection{Presolving} \label{sec:presolve} When presolving nonlinear constraints, expressions are simplified and brought into a canonical form. For example, recursive sums and products are flattened and fixed or aggregated variables are replaced by constants or sums of active variables. In addition, it is ensured that if a subexpression appears several times (in the same or different constraints), always the same expression object is used. This ensures that in the extended formulation \eqref{eq:minlp_ext} at most one auxiliary variable is attached to such common subexpressions. \subsubsection{Variable Fixings} Similar to what has been shown by Hansen et al.~\cite{HansenJaumardRuizXiong1993}, if a bounded variable $x_j$ does not appear in the objective ($c_j=0$), but in exactly one constraint $\low{g}_k \leq g_k(x) \leq \upp{g}_k$ where $g_k(x)$ is convex in $x_j$ for any fixing of other variables and $\upp{g}_k = +\infty$ (or concave in $x_j$ and $\low{g}_k=-\infty$), then there always exists an optimal solution where $x_j \in \{\low{x}_j,\upp{x}_j\}$. For example, if $y\in[0,1]$ appears only in a constraint $xy+yz-y^2\leq 5$, then $y$ can be changed to a binary variable. SCIP recognizes such variables for polynomial constraints (under additional assumptions~\cite{SCIPoptsuite80}) and changes the variable type to binary, if $\low{x}_j = 0$ and $\upp{x}_j=1$, or adds a bound disjunction constraint $x_j \leq \low{x}_j \vee x_j \geq \upp{x}_j$. As a consequence, branching on $x_j$ leads to fixing the variable in both children. \subsubsection{Linearization of Products} The introduction emphasized that with SCIP 8, an explicit extended reformulation of nonlinear constraints is avoided. 
An exception that proves this ``rule'' is the linearization of products of binary variables in presolving. Doing so has the advantage that more of SCIP's techniques for MIP solving can be utilized. In the simplest case, a product $\prod_i x_i$ is replaced by a new variable $z$ and a constraint of type ``and'' that models $z = \bigwedge_i x_i$ is added. The ``and''-constraint handler will then separate a linearization of this product~\cite{BerHP09}. For a product of only two binary variables, the linearization is added directly. For a quadratic function in binary variables with many terms, the number of variables introduced may be large. Thus, in this case, a linearization that requires fewer additional variables is used, even though it may lead to a weaker relaxation. \subsubsection{KKT Strengthening for QPs} A presolving method that aims to tighten the relaxation of a quadratic program~(QP) by adding redundant constraints derived from Karush-Kuhn-Tucker (KKT) conditions is available. Consider a quadratic program of the form \begin{align} \label{eq:QP}\tag{QP} \begin{aligned} \min\ \ &\tfrac{1}{2}\, x^\top Q x + c^\top x,\\ \text{such that}\ \ & Ax \leq b, \end{aligned} \end{align} where $Q \in \mathbb{R}^{n \times n}$ is symmetric, $c \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$, and $b \in \mathbb{R}^m$. If~\eqref{eq:QP} is bounded, then all optima of~\eqref{eq:QP} satisfy the following KKT conditions: \begin{align} \label{eq:KKT_QP}\tag{KKT} \begin{aligned} Q x + c + A^\top \mu & = 0,\\ Ax & \leq b,\\ \mu_i (Ax - b)_i & = 0, && \qquad i \in \{1, \dots, m\},\\ \mu & \geq 0, \end{aligned} \end{align} where $\mu$ is the vector of Lagrangian multipliers of the constraints $Ax\leq b$. In a specialized presolver, SCIP recognizes whether~\eqref{eq:minlp} is equivalent to~\eqref{eq:QP} by checking whether a quadratic objective function has been reformulated into a constraint. 
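As a minimal numeric illustration of~\eqref{eq:KKT_QP} (not SCIP's presolver code), consider the QP $\min\, x_1^2 + x_2^2 - 2x_1 - 4x_2$ subject to $x_1 + x_2 \leq 1$; its optimum and multiplier can be verified against the KKT system directly:

```python
# KKT conditions checked at the optimum of a tiny QP (illustrative sketch):
#   min (1/2) x^T Q x + c^T x  s.t.  A x <= b
# with Q = diag(2, 2), c = (-2, -4), A = (1 1), b = (1).
Q = [[2.0, 0.0], [0.0, 2.0]]
c = [-2.0, -4.0]
A = [[1.0, 1.0]]
b = [1.0]

x = [0.0, 1.0]   # optimal point: projection of (1, 2) onto x1 + x2 <= 1
mu = [2.0]       # corresponding Lagrangian multiplier

# Stationarity: Q x + c + A^T mu = 0
for i in range(2):
    g = (sum(Q[i][j] * x[j] for j in range(2)) + c[i]
         + sum(A[k][i] * mu[k] for k in range(1)))
    assert abs(g) < 1e-9

slack = [sum(A[k][j] * x[j] for j in range(2)) - b[k] for k in range(1)]
assert all(s <= 1e-9 for s in slack)                       # Ax <= b
assert all(m >= 0.0 for m in mu)                           # mu >= 0
assert all(abs(m * s) < 1e-9 for m, s in zip(mu, slack))   # complementarity
```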
If a~\eqref{eq:QP} has been found and all variables are bounded, then the equations~\eqref{eq:KKT_QP} are added as redundant constraints to the problem, whereby the complementarity constraints are formulated via special ordered sets of type 1. The redundant constraints can help to strengthen the linear relaxation and prioritize branching decisions to satisfy the complementarity constraints, which focuses the search more on the local optima of~\eqref{eq:QP}. In addition to~\eqref{eq:QP}, the implementation can also handle mixed-binary quadratic programs. For all details, see~\cite{SCIPoptsuite40,Fischer2017}. When this presolver was added to SCIP 4.0, it proved to be very beneficial for box-constrained quadratic programs. Due to the many changes and extensions in SCIP~8, in particular for the handling of quadratic constraints~(Section~\ref{sec:quad}), it needs to be reevaluated under which conditions this presolver should be enabled. Currently, it is disabled by default. \subsubsection{Symmetry Detection} \label{sec:symmetry} Symmetries in a MINLP are automorphisms on $\mathbb{R}^n$ that map optimal solutions to optimal solutions. Such symmetries have an adverse effect on the performance of branch-and-bound solvers, because symmetric subproblems may be treated repeatedly. Therefore, SCIP can enforce lexicographically maximal solutions from an orbit of symmetric solutions via bound tightening and separation of linear inequalities~\cite{HojnyPfetsch2019,SCIPoptsuite50,SCIPoptsuite70,SCIPoptsuite80}. Since optimal solutions are naturally not known in advance, the symmetry detection resorts to finding permutations of variables that map the feasible set onto itself and map each point to one with the same objective function value~\cite{Margot2010}.
These permutations are given by isomorphisms in an auxiliary symmetry detection graph, which is constructed from the problem data (e.g., $c$, $A$, $\mathcal{I}$, and the expressions that define $g(x)$)~\cite{Liberti2012a,Wegscheider2019}. \subsection{Quadratics} \label{sec:quad} Since quadratic functions frequently appear in MINLPs (every second instance of MINLPLib~\cite{minlplib} has only linear and quadratic constraints), a number of techniques have been added to SCIP to handle this structure. In addition to the presolving methods that were discussed in the previous section, three nonlinear handlers and four separators deal with quadratic structures. When none of the nonlinear handlers are active, an auxiliary variable is added in the extended formulation for each square and bilinear term in a quadratic function, and gradient, secant, and McCormick under- and overestimators (see \eqref{eq:mccormick}) are generated. \subsubsection{Domain Propagation} \label{sec:quadprop} If variables appear more than once in a quadratic function, then a term-wise domain propagation does not necessarily yield the best possible results, because it suffers from the so-called \textit{dependency problem} of interval arithmetic. For example, it is easy to compute the range of $x^2+x$ for given bounds on $x$, or bounds on $x$ for a given interval on $x^2+x$, but standard interval arithmetic would treat the terms $x^2$ and $x$ separately, which can lead to overestimating the result. Therefore, a specialized nonlinear handler in SCIP provides a domain propagation procedure for quadratics that aims to reduce overestimation.
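The dependency problem for the $x^2+x$ example can be reproduced in a few lines (illustrative sketch, not SCIP's propagation code):

```python
# Dependency problem: term-wise interval evaluation of x^2 + x
# over x in [-1, 1] versus the exact range (illustrative sketch).
x_lo, x_up = -1.0, 1.0

# Term-wise: [x]^2 + [x] = [0, 1] + [-1, 1] = [-1, 2].
termwise = (0.0 + x_lo, 1.0 + x_up)

# Exact range: evaluate at the bounds and the critical point x = -1/2.
candidates = [x_lo, x_up, -0.5]
values = [t * t + t for t in candidates]
exact = (min(values), max(values))           # (-0.25, 2.0)

# The term-wise lower bound -1 is weaker than the exact bound -1/4.
assert termwise[0] < exact[0]
```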
For this, the detection routine of the nonlinear handler writes a quadratic expression as \begin{equation} \label{eq:quadexpr} q(y) = \sum_{i=1}^k q_i(y) \quad\text{with}\quad q_i(y) = a_iy_i^2 + c_iy_i + \sum_{j\in P_i} b_{i,j}y_iy_j, \end{equation} where $y_i$ is either an original variable ($x$) or another expression, $a_i,c_i\in\mathbb{R}$, $b_{i,j}\in\mathbb{R}\setminus\{0\}$, $j\in P_i \Rightarrow i\not\in P_j$ for all $j\in P_i$, $P_i\subset\{1,\ldots,k\}$, $i=1,\ldots,k$. For functions $q_i$ with at least two terms (at least two of $a_i$, $b_{i,j}$, $j\in P_i$, and $c_i$ are nonzero), a relaxation is obtained by replacing each $y_j$ by $[\low{y}_j,\upp{y}_j]$, $j\in P_i$. For this univariate quadratic interval-term in $y_i$, tight bounds can be computed~\cite{DomesNeumaier2010}. In addition, bounds on variables $y_j$, $j\in P_i$, are computed by considering \begin{equation} \label{eq:quadexpr2} \sum_{j\in P_i}b_{i,j}y_j \in ([\low{q},\upp{q}] - \sum_{i'\neq i} q_{i'}(y))/y_i - a_iy_i - c_i, \qquad y_i\in [\low{y}_i,\upp{y}_i], \end{equation} where $[\low{q},\upp{q}]$ are given bounds on $q(y)$. After relaxing each $q_{i'}$ to an interval, bounds on the right-hand side of \eqref{eq:quadexpr2} are computed, which are then used to calculate bounds on each $y_j$, $j\in P_i$. \subsubsection{Bilinear Terms} \label{sec:bilin} For a product $y_1y_2$, where $y_1$ and $y_2$ are either non-binary variables or other expressions, the expression handler for products already provides linear under- and overestimators and domain propagation that is best possible when considering the bounds $[\low{y}_1,\upp{y}_1] \times [\low{y}_2,\upp{y}_2]$ only. However, if linear inequalities in $y_1$ and $y_2$ are available, then possibly tighter linear estimates and variable bounds can be computed. In SCIP, this is done by a specialized nonlinear handler that implements the algorithm by Locatelli~\cite{Locatelli2018}. 
The inequalities are found by projection of the LP relaxation onto the variables $(y_1,y_2)$. For more details, see~\cite{MuellerSerranoGleixner2020}. An alternative method that uses linear constraints to tighten the relaxation of quadratic constraints is given by the RLT cuts described in the following. \subsubsection{RLT Cuts} \label{sec:rlt} The Reformulation-Linearization Technique (RLT)~\cite{adams1986tight,adams1990linearization} has proven very useful to tighten relaxations of polynomial programming problems. In SCIP, a separator of cuts that are computed via RLT for bilinear product relations in~\eqref{eq:minlp_ext} is available. For simplicity, denote by $X_{ij}$ the auxiliary variable that is associated with a constraint $x_ix_j \lesseqgtr X_{ij}$ of \eqref{eq:minlp_ext} ($X_{ji}$ denotes the same variable as $X_{ij}$). Recall that it is valid to replace $\lesseqgtr$ by $=$. Given $X_{ij} = x_ix_j$, where $x_i \in [\low{x}_i,\upp{x}_i]$, $x_j \in [\low{x}_j,\upp{x}_j]$, and a linear constraint $a^\top x \leq b$, RLT cuts are derived by first multiplying the constraint by one of the nonnegative bound factors $(x_i - \low{x}_i)$, $(\upp{x}_i - x_i)$, $(x_j - \low{x}_j)$, or $(\upp{x}_j - x_j)$. For instance, consider multiplication by the factor $(x_i - \low{x}_i)$, which yields a valid nonlinear inequality: \begin{equation}\label{eq:rlt_reformulated} a^\top x\, (x_i - \low{x}_i) \leq b\,(x_i - \low{x}_i). \end{equation} This is referred to as the reformulation step. The linearization step is then performed for all terms $x_kx_i$ in~\eqref{eq:rlt_reformulated}. If a product relation $X_{ki} = x_kx_i$ exists, then the product is replaced with $X_{ki}$. If $x_k$ and $x_i$ are contained in the same clique, the product is replaced with an equivalent linear expression. Otherwise, it is replaced by a linear under- or overestimator such as~\eqref{eq:mccormick}. In addition, the RLT separator can reveal linearized products between binary and continuous variables.
To do so, it checks whether pairs of linear inequalities that are defined in the same triple of variables (one of them binary, the other two continuous) imply a product relation. These implicit products can then be used in the linearization step of RLT cut generation~\cite{AchterbergBestuzhevaGleixner2022}. \subsubsection{SDP Cuts} \label{sec:sdp} As in the previous section, denote by $X_{ij}$ the auxiliary variable that is associated with a constraint $x_ix_j \lesseqgtr X_{ij}$ of \eqref{eq:minlp_ext}. A popular convex relaxation of the condition $X = xx^\top$ is given by requiring $X-xx^\top$ to be positive semidefinite. Separation for the set $\{(x,X) : X-xx^\top \succeq 0\}$ itself is possible, but cuts are typically dense and may include variables $X_{ij}$ for products that do not exist in the problem. Therefore, only principal $2 \times 2$ minors of $X-xx^\top$, which also need to be positive semidefinite, are considered. By Schur's complement, this means that the condition \begin{equation} \label{eq:minorposdef} A_{ij}(x,X) := \begin{bmatrix} 1 & x_i & x_j \\ x_i & X_{ii} & X_{ij} \\ x_j & X_{ij} & X_{jj} \end{bmatrix} \succeq 0 \end{equation} needs to hold for any $i,j$, $i\neq j$. A separator in SCIP detects minors for which $X_{ii}$, $X_{jj}$, $X_{ij}$ exist in \eqref{eq:minlp_ext} and enforces $A_{ij}(x,X)\succeq 0$. To do so for a solution $(\hat{x},\hat{X})$ that violates~\eqref{eq:minorposdef}, an eigenvector $v\in\mathbb{R}^3$ of $A_{ij}(\hat{x},\hat{X})$ with $v^\top A_{ij}(\hat{x},\hat{X})v<0$ is computed and the globally valid linear inequality $v^\top A_{ij}(x,X)v \geq 0$ is added. \subsubsection{Intersection Cuts} \label{sec:intersection} Intersection cuts~\cite{Tuy1964,Balas1971} have been shown to be an efficient tool for strengthening relaxations of MIPs. Recently, Mu{\~{n}}oz and Serrano showed how to compute the tightest possible intersection cuts for quadratic programs~\cite{MunozSerrano2020}.
This method has been implemented in SCIP~\cite{ChmielaMunozSerrano2021}. Assume a nonconvex quadratic constraint of~\eqref{eq:minlp_ext} is $q(y)\leq w$ with $q(y)$ as in \eqref{eq:quadexpr} and $w$ an auxiliary variable. The separation of intersection cuts is implemented for the set $S := \{ (y,w) \in \mathbb{R}^{k+1} : q(y) \leq w \}$ that is defined by this constraint. Let $(\hat{y},\hat w)$ be a basic feasible LP solution violating $q(y) \leq w$. First, a convex inequality $g(y,w) < 0$ is built that is satisfied by $(\hat{y},\hat w)$, but by no point of $S$. This defines a so-called \emph{$S$-free set} $C = \{ (y,w) \in \mathbb{R}^{k+1} : g(y,w) \leq 0 \}$, that is, a convex set with $(\hat{y},\hat w) \in \text{int}(C)$ containing no point of $S$ in its interior. The quality of the resulting cut highly depends on which $S$-free set is used, but using \textit{maximal} $S$-free sets yields the tightest possible intersection cuts~\cite{MunozSerrano2020}. By using the conic relaxation $K$ of the LP-feasible region defined by the nonbasic variables at $(\hat{y},\hat w)$, the intersection points between the extreme rays of $K$ and the boundary of $C$ are computed. The intersection cut is then defined by the hyperplane going through these points and successfully separates $(\hat{y},\hat w)$ from $S$. See Figure~\ref{fig:intercut} for an illustration. To obtain even better cuts, there is also a strengthening procedure implemented that uses the idea of negative edge extension of the cone $K$~\cite{Glover1974}. \begin{figure} \caption{An intersection cut (red) separating the basic feasible LP solution $(\hat y,\hat w)$ from $S$ (blue).
The cut is computed using the intersection points of an $S$-free set $C$ (orange) and the rays of a simplicial cone $K \supseteq S$ (boundary in green) with apex $(\hat y,\hat w)\not\in S$.} \label{fig:intercut} \end{figure} In addition to the separation of intersection cuts for a set $S$ given by a constraint $q(y)\leq w$, SCIP can also generate intersection cuts for implied quadratic equations. Recall the matrix of auxiliary variables $X$ as introduced in Section~\ref{sec:rlt}. The condition $X=xx^\top$ implies that $X$ needs to have rank 1. Therefore, any $2\times 2$ minor $\begin{pmatrix} X_{i_1j_1} & X_{i_1j_2} \\ X_{i_2j_1} & X_{i_2j_2} \end{pmatrix}$ of $X$ needs to have determinant zero. That is, for any set of variable indices $i_1$, $i_2$, $j_1$, $j_2$ with $i_1\neq i_2$ and $j_1\neq j_2$, the condition $ X_{i_1j_1}X_{i_2j_2} = X_{i_1j_2}X_{i_2j_1} $ needs to hold. If all variables in this condition exist in~\eqref{eq:minlp_ext} and a solution violates this condition, then the previously described procedure to generate intersection cuts is applied to the set defined by this condition. Since intersection cuts can be rather dense, it is not yet clear how to decide when it will be beneficial to generate such cuts. Their separation is therefore currently disabled by default. For more details, see~\cite{ChmielaMunozSerrano2021}. \subsubsection{Edge-Concave Cuts} Another method to obtain a linear outer-approximation for a quadratic constraint is by utilizing an edge-concave decomposition of the quadratic function. This has proven particularly useful for randomly generated quadratic instances~\cite{MisenerFloudas2012,MisenerSmadbeckFloudas2015}. A function is edge-concave over the variables' domain (e.g., $[\low{x},\upp{x}]$) if it is componentwise concave. Given a quadratic function, the separator for edge-concave cuts solves an auxiliary MIP to partition the square and bilinear terms into a sum of edge-concave functions and a remaining function.
Since the convex envelope of edge-concave functions is \emph{vertex-polyhedral}~\cite{Tardella2004}, that is, it is a polyhedral function with vertices corresponding to the vertices of the box of variable bounds, facets of the convex envelope of each edge-concave function can be computed by solving an auxiliary linear program (see also Section~\ref{sec:convexnlhdlr}). For the function of remaining terms, term-wise linear underestimators such as~\eqref{eq:mccormick} are summed up. Since the current implementation of edge-concave cuts in SCIP has not proven particularly useful for general MINLP, this separator is disabled for now. \subsubsection{Second-Order Cones} An important connection between MINLP and conic programming is the detection of constraints that can be represented as a second-order cone (SOC) constraint, since the latter defines a convex set, while the original constraint may use a nonconvex constraint function. A specialized nonlinear handler aims to detect SOC-representable structures. In the detection phase, a constraint $h_i(x) \leq w_i$ (the case $\geq$ is handled similarly) of the extended formulation~\eqref{eq:minlp_ext} is passed to the nonlinear handler. For this constraint, it is checked whether it defines a bound on a Euclidean norm ($\sqrt{\sum_{j=1}^k (a_j y_j^2 + b_j y_j) + c}\leq w_i$ for some coefficients $a_j,b_j,c\in\mathbb{R}$, $a_j>0$, where $y_j$ is either an original variable or some subexpression of $h_i(\cdot)$), or is a quadratic constraint that is SOC-representable~\cite{MahajanMunson2010}. Since the introduction of slack variables $w_i$, $i\leq m$, may prevent such a detection, the equivalent constraint $h_i(x) \leq \bar{w}_i$ is considered instead.
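The norm-bound check amounts to completing squares: each term $a_jy_j^2+b_jy_j$ with $a_j>0$ is rewritten as $(\sqrt{a_j}\,y_j + \beta_j)^2 - \beta_j^2$, and the structure is a Euclidean norm if the leftover constant is nonnegative. A minimal sketch of this step (hypothetical helper, not SCIP's API; only the diagonal case of the check above):

```python
import math

# Sketch of the norm-bound detection: rewrite
#   sum_j (a_j*y_j^2 + b_j*y_j) + c,  all a_j > 0,
# as sum_j (sqrt(a_j)*y_j + beta_j)^2 + c', so that sqrt(...) <= w is a
# Euclidean-norm bound whenever the leftover constant c' is >= 0.

def complete_squares(a, b, c):
    """Return ([(v_j, beta_j)], c') or None if some a_j <= 0 or c' < 0."""
    terms = []
    rest = c
    for aj, bj in zip(a, b):
        if aj <= 0:
            return None
        # a*y^2 + b*y = (sqrt(a)*y + b/(2*sqrt(a)))^2 - b^2/(4*a)
        v = math.sqrt(aj)
        terms.append((v, bj / (2.0 * v)))
        rest -= bj * bj / (4.0 * aj)
    return (terms, rest) if rest >= 0 else None

# 2*y1^2 + 4*y1 + y2^2 + 3 = (sqrt(2)*y1 + sqrt(2))^2 + y2^2 + 1
terms, const = complete_squares([2.0, 1.0], [4.0, 0.0], 3.0)
print(terms, const)
```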
A detected SOC constraint is stored in the form \begin{equation} \label{eq:soc} \sqrt{\sum_{j=1}^k (v_j^\top y + \beta_j)^2} \leq v_{k+1}^\top y + \beta_{k+1} \end{equation} with $v_j\in\mathbb{R}^\ell$, $j=1,\ldots,k+1$, where $y_1,\ldots,y_\ell$ are variables of~\eqref{eq:minlp_ext}. Since the left-hand side of~\eqref{eq:soc} is convex, a solution $\hat y$ that violates~\eqref{eq:soc} can be separated by linearization of the left-hand side of~\eqref{eq:soc}. However, if there are many terms on the left-hand side of~\eqref{eq:soc} ($k$ being large), then it can require many cuts to provide a tight linear relaxation of~\eqref{eq:soc}. Thus, a disaggregation of the cone~\cite{Vielma2016} is used if $k\geq 3$: \begin{align} (v_j^\top y + \beta_j)^2 & \leq z_j (v_{k+1}^\top y + \beta_{k+1}), \quad j=1,\ldots,k, \label{eq:socext1} \\ \sum_{j=1}^k z_j & \leq v_{k+1}^\top y + \beta_{k+1}, \label{eq:socext2} \end{align} where $z_1,\ldots,z_k$ are new variables. A solution $(\hat y,\hat z)$ that violates~\eqref{eq:soc} must also violate~\eqref{eq:socext1} for some $j\in\{1,\ldots,k\}$ or~\eqref{eq:socext2}. The latter is already linear and can be added as a cut. If a rotated second-order cone constraint~\eqref{eq:socext1} is violated for some $j$, then it is transformed into the standard form \[ \sqrt{4(v_j^\top y + \beta_j)^2 + (v_{k+1}^\top y + \beta_{k+1} - z_j)^2} \leq v_{k+1}^\top y + \beta_{k+1} + z_j \] and a gradient cut is constructed by linearization of the left-hand side. \subsection{Convexity} \label{sec:convex} \subsubsection{Convex and Concave Constraints} \label{sec:convexnlhdlr} For the linear underestimation of functions like $x\exp(x)$ or $x^2 + 2xy + y^2$, the construction of an extended formulation ($xw$, $\exp(x)=w$; $w_1+2w_2+w_3$, $w_1=x^2$, $w_2=xy$, $w_3=y^2$) is not advisable. Instead, hyperplanes that support the epigraph of a convex function can be used if convexity is recognized.
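The principle behind such supporting hyperplanes can be sketched as follows: for convex $f$ and a reference point $\hat x$, the linearization $f(\hat x) + \nabla f(\hat x)^\top(x-\hat x)$ underestimates $f$ everywhere, so $f(x)\leq w$ is outer-approximated by a linear cut. The sketch below is illustrative only and uses finite differences, whereas SCIP computes gradients via automatic differentiation:

```python
# Minimal sketch of a supporting hyperplane (gradient cut) for a convex
# function: at a reference point xhat, f(xhat) + grad^T (x - xhat)
# underestimates f everywhere, so f(x) <= w is outer-approximated by the
# linear cut  coefs^T x + const <= w.

def gradient_cut(f, xhat, h=1e-6):
    """Return (coefs, const) of the cut coefs^T x + const <= w."""
    grad = []
    for i in range(len(xhat)):  # central finite differences
        xp, xm = list(xhat), list(xhat)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))
    const = f(xhat) - sum(g * x for g, x in zip(grad, xhat))
    return grad, const

# The convex quadratic x^2 + 2xy + y^2 = (x + y)^2 from the text:
f = lambda v: (v[0] + v[1]) ** 2
coefs, const = gradient_cut(f, [1.0, 2.0])
# the cut touches the graph at the reference point: value ~ f(1,2) = 9
print(sum(c * x for c, x in zip(coefs, [1.0, 2.0])) + const)  # ~9.0
```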
In SCIP, specialized nonlinear handlers are available to detect for a function $h_i(x)$ of~\eqref{eq:minlp_ext} the subexpressions that need to be replaced by auxiliary variables $w_{i+1},\ldots, w_{\hat{m}}$ such that the remaining expression $h_i(x,w_{i+1},\ldots, w_{\hat{m}})$ is convex or concave. The detection utilizes the commonly applied rules for convexity/concavity of function compositions (e.g., $f$ convex and monotone decreasing, $g$ concave $\Rightarrow$ $f \circ g$ convex), but applies them in reverse order. That is, instead of deciding whether a function is convex/concave based on information on the convexity/concavity and monotonicity of its arguments, the algorithm formulates conditions on the convexity/concavity of the function arguments given a convexity/concavity requirement on the function itself. When a condition on an argument cannot be fulfilled, the argument is replaced by an auxiliary variable. In addition to the ``myopic'' convexity/concavity rules implemented by the expression handlers, rules for product compositions ($af(b g(x)+c) g(x)$ with constants $a,b,c$ and repeating subexpression $g(x)$), signomials ($c\prod_{j=1}^k f_j^{p_j}(x)$ with $c,p_j\in\mathbb{R}$ and subexpressions $f_j(x)$, $j=1,\ldots,k$), and quadratic forms are also available. The latter may check the definiteness of the Hessian by computing its eigenvalues. Further, it has been shown that for a composition of convex functions $f \circ g$, it can be beneficial for the linear relaxation to consider the extended formulation $f(w)$, $w\geq g(x)$, instead of the composition $f(g(x))$~\cite{TawarmalaniSahinidis2005}. This is enforced by a small variation of the detection algorithm. When a convex constraint $h_i(x,w_{i+1},\ldots,w_{\hat m}) \leq w_i$ of~\eqref{eq:minlp_ext} is violated at a point $(\hat x,\hat w)$, a tangent on the graph of $h_i$ at $(\hat x,\hat w)$ is used to compute a separating hyperplane.
The slope of the tangent is given by the gradient of $h_i$ at $(\hat x,\hat w)$, which is calculated via automatic differentiation on the expression graph. If, however, $h_i$ is univariate, that is, $h_i(x,w_{i+1},\ldots,w_{\hat m})=f(y)$ for some variable $y$, and $y$ is integral, then taking the hyperplane through the points $(\lfloor \hat y\rfloor, f(\lfloor \hat y\rfloor))$ and $(\lfloor \hat y\rfloor+1, f(\lfloor \hat y\rfloor+1))$ can give a tighter underestimator. For a concave function $h_i(x,w_{i+1},\ldots,w_{\hat m})$, any hyperplane $\alpha x+\beta w+\gamma$ that underestimates $h_i(x,w_{i+1},\ldots,w_{\hat m})$ in all vertices of the box $[\low{x},\upp{x}]\times[\low{w}_{i+1},\upp{w}_{i+1}]\times\cdots\times[\low{w}_{\hat m},\upp{w}_{\hat m}]$ is a valid linear underestimator, since the convex envelope of $h_i$ is vertex-polyhedral with respect to the box. Maximizing $\alpha \hat x + \beta \hat w + \gamma$ such that $\alpha x + \beta w + \gamma$ does not exceed $h_i(x,w_{i+1},\ldots,w_{\hat m})$ for all vertices gives an underestimator that is as tight as possible at a given reference point $(\hat x, \hat w)$. For the frequent cases of concave functions in $k=1$ or $k=2$ variables, routines that directly compute such an underestimator are available. For $k>2$, a linear program is solved. Since the size of this LP is exponential in $k$, underestimators for concave functions in more than 14 variables are currently not computed. \subsubsection{Tighter Gradient Cuts} \label{sec:convextight} The separating hyperplanes generated for convex functions of~\eqref{eq:minlp_ext} as discussed in the previous section are, in general, not supporting hyperplanes of the feasible region of~\eqref{eq:minlp_ext}, because the point where the functions are linearized is not at the boundary of the feasible region (which is the reason why it needs to be separated). Therefore, often several rounds of cut generation and LP solving are required until the relaxation solution satisfies the convex constraints.
Solvers for convex MINLP have handled this problem in various ways~\cite{DuranGrossmann1986,KronqvistLundellWesterlund2016}, but the basic idea is to build gradient cuts at a suitable boundary point of the feasible region. In SCIP, three procedures for building tighter and/or deeper gradient cuts for convex relaxations are included. The first two methods compute a point on the boundary of the set defined by all convex constraints of~\eqref{eq:minlp} that is close to the point to be separated. The first method solves an additional nonlinear program to project the point to be separated onto the convex set. Since solving an NLP for every point to be separated can be quite expensive, the second method, going back to an idea by Veinott~\cite{Veinott1967}, does a binary search between an interior point of the convex set and the point to be separated. The interior point is computed once in the beginning of the search by solving an auxiliary NLP. For more details, see~\cite{SCIPoptsuite40}. The third method does not aim to separate a given point, but utilizes the feasible points that are found by primal heuristics of SCIP. When a new solution is found, gradient cuts are generated at this solution for convex constraints of~\eqref{eq:minlp_ext} and added to the cutpool. If such a cut is later found to separate the relaxation solution, it is added to the LP. All methods are currently disabled as they require more tuning to be efficient in general. \subsection{Quotients} \label{sec:quotient} Note that the available expression handlers (see Section~\ref{sec:expr}) do not include a handler for quotients, since they can equivalently be written using a product and a power expression. Therefore, the default extended formulation for an expression $y_1y_2^{-1}$ is given by replacing $y_2^{-1}$ by a new auxiliary variable $w$. The linear outer-approximation is then obtained by estimating $y_1w$ and $y_2^{-1}$ separately. However, tighter linear estimates are often possible. 
Therefore, a specialized nonlinear handler checks whether a given function $h_i(x)$ can be cast as \begin{equation}\label{eq:quotient_constraint} f(y) = \frac{ay_1 + b}{cy_2 + d} + e \end{equation} with $a,b,c,d,e\in\mathbb{R}$, $a,c\neq 0$, and $y_1$ and $y_2$ being either original variables or subexpressions of $h_i(x)$. Tight linear estimators for~\eqref{eq:quotient_constraint} are computed by distinguishing a number of cases. For example, for $a\low{y}_1 +b\geq 0$ and $c\low{y}_2+d > 0$ (if $c>0$), a linear underestimator is obtained by computing a tangent on the graph of the convex underestimator of $f$ given in~\cite{ZamoraGrossmann1998}. A linear overestimator is obtained by computing a facet of the concave envelope of $f$, which is easy since $-f$ is vertex-polyhedral. Furthermore, in the univariate case ($y_1=y_2$), $f$ is either convex or concave on $[\low{y}_1,\upp{y}_1]$ if $-d/c\not\in[\low{y}_2,\upp{y}_2]$. Since in the univariate case the same variable appears twice, a specialized domain propagation method that avoids the dependency problem of interval arithmetic is also available. \subsection{Perspective Strengthening} Perspective reformulations have been shown to efficiently tighten relaxations of convex mixed-integer nonlinear programs with on/off-structures, which are often modeled via big-M constraints or semi-continuous variables~\cite{frangioni2006perspective}. A variable $x_j$ is semi-continuous with respect to the binary indicator variable $x_{j'}$, $j'\in \mathcal{I}$, if it is restricted to the domain $[\low{x}^1_j, \upp{x}^1_j]$ when $x_{j'}=1$ and has a fixed value $x^0_j$ when $x_{j'}=0$. In SCIP, a strengthening of under- and overestimators for functions that depend on semi-continuous variables is available.
Consider a constraint $h_i(x,w_{i+1},\ldots,w_{\hat m}) \lesseqgtr w_i$ of~\eqref{eq:minlp_ext} and write $h_i$ as a sum of its nonlinear and linear parts: \[ h_i(x,w_{i+1},\ldots,w_{\hat m}) = h_i^\text{nl}(x_\text{nl},w_\text{nl}) + h_i^\text{l}(x_\text{l},w_\text{l}), \] where $h_i^\text{nl}$ is a nonlinear function, $h_i^\text{l}$ is a linear function, $x_\text{nl}$ and $w_\text{nl}$ are the vectors of variables $x$ and $w$, respectively, that appear only in the nonlinear part of $h_i$, and $x_\text{l}$ and $w_\text{l}$ are the vectors of variables $x$ and $w$, respectively, that appear only in the linear part of $h_i$. A strengthening of under- or overestimators for $h_i(x,w_{i+1},\ldots,w_{\hat m})$ is attempted if $x_\text{nl}$ and $w_\text{nl}$ are semi-continuous with respect to the same indicator variable $x_{j'}$. To determine whether a variable $x_j$ is semi-continuous, bounds on $x_j$ that are implied by fixing a binary variable are analyzed. The implied bounds can be obtained either from linear constraints directly or by probing, and are stored by SCIP in a globally available data structure. If a pair of implied bounds on $x_j$ with the same binary variable $x_{j'}$ is found, i.e., \begin{align*} x_j &\leq \alpha^{(u)} x_{j'} + \beta^{(u)},\\ x_j &\geq \alpha^{(\ell)} x_{j'} + \beta^{(\ell)}, \end{align*} and $\beta^{(u)} = \beta^{(\ell)}$, then $x_j$ is a semi-continuous variable with $x_j^0 = \beta^{(u)}$, $\low{x}^1_j = \alpha^{(\ell)} + \beta^{(\ell)}$, and $\upp{x}^1_j = \alpha^{(u)} + \beta^{(u)}$. In addition, an auxiliary variable $w_i$ is found to be semi-continuous if function $h_i(x,w_{i+1},\ldots,w_{\hat m})$ depends only on semi-continuous variables with the same indicator variable. 
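The bound-pair test just described can be sketched directly (hypothetical helper, using the notation $x_j \leq \alpha^{(u)} x_{j'} + \beta^{(u)}$ and $x_j \geq \alpha^{(\ell)} x_{j'} + \beta^{(\ell)}$ from above):

```python
# Sketch of the semi-continuity check: given implied bounds
#   x_j <= au*z + bu  and  x_j >= al*z + bl
# with the same binary z, x_j is semi-continuous iff bu == bl (within
# tolerance); then z = 0 fixes x_j and z = 1 gives a box domain.

def check_semicontinuous(au, bu, al, bl, tol=1e-9):
    """Return (x0, lb1, ub1) if semi-continuous w.r.t. z, else None."""
    if abs(bu - bl) > tol:
        return None
    # off-value x0 = bu, on-domain [al + bl, au + bu]
    return bu, al + bl, au + bu

# x <= 5z, x >= 1z: off-value 0, on-domain [1, 5]
print(check_semicontinuous(5.0, 0.0, 1.0, 0.0))  # (0.0, 1.0, 5.0)
```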
Assume that a linear underestimator $\ell(x,w_{i+1},\ldots,w_{\hat m})$ of $h_i(x,w_{i+1},\ldots,w_{\hat m})$ has been computed, and split it into the parts corresponding to the nonlinear and linear variables of $h_i$, respectively: \[ \ell(x,w_{i+1},\ldots,w_{\hat m}) = \ell^\text{nl}(x_\text{nl},w_\text{nl}) + \ell^\text{l}(x_\text{l},w_\text{l}). \] The perspective strengthening consists of extending the part of the underestimator that corresponds to the nonlinear part such that it is tight for $x_{j'}=0$: \[ \ell^\text{nl}(x_\text{nl},w_\text{nl}) + \left(h^\text{nl}_i(x^0_\text{nl},w^0_\text{nl}) - \ell^\text{nl}(x^0_\text{nl},w^0_\text{nl})\right)(1-x_{j'}) + \ell^\text{l}(x_\text{l},w_\text{l}). \] The linear part remains unchanged, since it shares none of the variables with the nonlinear part. This extension ensures that the estimator is equal to $h_i(x,w_{i+1},\ldots,w_{\hat m})$ for $x_{j'}=0$, $x_\text{nl} = x^0_\text{nl}$, and $w_\text{nl} = w^0_\text{nl}$, and equal to $\ell(x,w_{i+1},\ldots,w_{\hat m})$ for $x_{j'}=1$. If $h_i$ is convex, cuts obtained this way are equivalent to the classic perspective cuts~\cite{frangioni2006perspective}. For more details on the implementation in SCIP, see~\cite{BestuzhevaGleixnerVigerske2021}. An example is given in Figure~\ref{fig:persp}. \begin{figure} \caption{The original set $\{(x,y,w) : x^2 \leq w,\ y\in\{0,1\}\}$.} \label{fig:persp} \end{figure} \subsection{Optimization-Based Bound Tightening} Optimization-Based Bound Tightening (OBBT) is a domain propagation technique which minimizes and maximizes each variable over the feasible set of the problem or a relaxation thereof~\cite{QuGr93}. Whereas FBBT (see Section~\ref{sec:consnl}) propagates the nonlinearities individually, OBBT considers (a relaxation of) all constraints together, and may hence compute tighter bounds. However, it is rather expensive compared to FBBT.
In SCIP, OBBT solves two auxiliary LPs for each variable $x_k$ that could be subject to spatial branching: \begin{equation} \label{eq:OBBTLP} \min / \max \{ x_k : D^xx+D^ww \leq d, c^\top x \leq U, x \in [\low{x},\upp{x}], w\in [\low{w},\upp{w}] \} \end{equation} where $D^xx+D^ww \leq d$, $D^x\in\mathbb{R}^{\ell \times n}$, $D^w\in\mathbb{R}^{\ell\times\hat m}$, $d\in\mathbb{R}^{\ell}$ is the linear relaxation of the feasible region of~\eqref{eq:minlp_ext}, and~$c^\top x \leq U$ is an objective cutoff constraint that excludes solutions with objective value worse than the current incumbent. The optimal value of \eqref{eq:OBBTLP} may then be used to tighten the lower / upper bound of variable $x_k$. A variable is subject to spatial branching if cut separation routines use the bounds of the variable at a node of the branch-and-bound tree. SCIP, by default, applies OBBT at the root node to tighten bounds globally. It restricts the computational effort by limiting the number of LP iterations spent on solving the auxiliary LPs and by interrupting the LP solves so that cheaper domain propagation techniques can be called in between. Further, SCIP does not only use the optimal objective values of~\eqref{eq:OBBTLP} to tighten the bounds on $x_k$, but it also applies a computationally cheap approximation of OBBT during the branch-and-bound search by exploiting the dual solutions from solves of~\eqref{eq:OBBTLP} at the root node. Suppose the maximization LP has been solved, and feasible dual multipliers $\lambda_1,\ldots,\lambda_\ell$ and $\mu \geq 0$ for $D^xx+D^ww \leq d$ and $c^\top x \leq U$, respectively, as well as the corresponding reduced cost vectors $r^x$ and $r^w$ have been obtained.
Then \begin{equation}\label{eq:lvb} x_k \leq \sum_j r^x_j x_j + \sum_j r^w_j w_j + \lambda^\top d + \mu U \end{equation} is a valid inequality, which is called \emph{Lagrangian variable bound} (LVB), and \begin{equation}\label{eq:lvbvalue} \sum_{j:r^x_j < 0} r^x_j \low{x}_j + \sum_{j:r^x_j > 0} r^x_j \upp{x}_j + \sum_{j:r^w_j < 0} r^w_j \low{w}_j + \sum_{j:r^w_j > 0} r^w_j \upp{w}_j + \lambda^\top d + \mu U \end{equation} is a valid upper bound for $x_k$ that equals the OBBT bound if the dual multipliers are optimal. SCIP learns LVBs at the root node and propagates them during the tree search whenever the bounds of variables on the right-hand side of~\eqref{eq:lvb} become tighter or an improved primal solution is found. For further details, see~\cite{GleixnerBertholdMuellerWeltge2017}. In addition to OBBT with respect to the LP relaxation, a variant is also available that optimizes individual variables over the potentially tighter convex NLP relaxation that is given by all linear and convex nonlinear constraints of~\eqref{eq:minlp}. For this variant, too, linear Lagrangian variable bounds similar to~\eqref{eq:lvb} can be constructed by taking constraint convexity and KKT conditions into account. Because of the potentially high computational cost of solving many NLPs, this variant of OBBT is deactivated by default. For more details, see~\cite{SCIPoptsuite40}. \subsection{Primal Heuristics} \label{sec:heur} The purpose of primal heuristics is to find high-quality feasible solutions early in the search. When given an MINLP, up to 40 primal heuristics are active in SCIP by default. Many of them aim to find an integer-feasible solution to the LP relaxation. In the following, primal heuristics that are only active in the presence of nonlinear constraints are discussed. \subsubsection{subNLP} A primal heuristic like \texttt{subNLP} is implemented in virtually any global MINLP solver.
Given a point $\tilde x$ that satisfies the integrality requirements ($\tilde x_{\mathcal{I}}\in\mathbb{Z}^{\vert\mathcal{I}\vert}$), the heuristic starts by fixing all integer variables in \eqref{eq:minlp} to the values given by $\tilde x$. It then calls the SCIP presolver on this subproblem for possible simplifications. Finally, it triggers a solution of the remaining NLP, using $\tilde x$ as the starting point. If the NLP solver, such as Ipopt, finds a solution that is feasible (and often also locally optimal) for the NLP relaxation, then a feasible point for~\eqref{eq:minlp} has been found. The starting point $\tilde x$ can be the current solution of the LP relaxation if integer-feasible, a point found by a primal heuristic that searches for integer-feasible solutions of the LP relaxation, or a point that is passed on by other primal heuristics for MINLP, such as those mentioned in the next sections. How frequently the heuristic should run and how much effort to spend on an NLP solve is a nontrivial decision. In the current implementation, the heuristic uses a fixed number for the iteration limit of the NLP solver for its first run. For the following calls, the limit is set to twice the average number of iterations required in previous runs. If, however, many of the previous runs hit the iteration limit, then an increased iteration limit is used. Whether to run the heuristic at a node of the branch-and-bound tree depends on the number of nodes processed since it ran the last time, the iteration limit that would be used, and how successful the heuristic has been in finding feasible points in previous calls. \subsubsection{Multistart} If~\eqref{eq:minlp} is nonconvex after fixing all integer variables, then several local optima may exist for the NLPs solved by heuristic \texttt{subNLP}. The success of the NLP solver then strongly depends on the starting point. 
Therefore, the multistart heuristic aims to compute several starting points that are passed to the \texttt{subNLP} heuristic. The algorithm, originally developed in~\cite{SmithChinneckAitken2013}, tries to approximate the boundary of the feasible set of the NLP relaxation by sampling points from $[\low{x},\upp{x}]$ and pushing them towards the feasible set using an inexpensive gradient descent method. Afterwards, points that are relatively close to each other are grouped into clusters. Ideally, each cluster approximates the boundary of some connected component of the feasible set. For each cluster, a linear combination of the points is passed as a starting point to \texttt{subNLP}. For integer variables $x_i$, $i\in\mathcal{I}$, the value in the starting point is rounded to an integral value. To reduce infeasibility of a point $\hat x$, the \emph{constraint consensus} method~\cite{SmithChinneckAitken2013} is used. The algorithm computes a descent direction for each violated constraint of~\eqref{eq:minlp}. For example, if $g_i(\hat x) > \upp{g}_i$ for some $i\in\{1,\ldots,m\}$, then the descent direction is given by $-\frac{g_i(\hat x)-\upp{g}_i}{\Vert\nabla g_i(\hat x)\Vert^2} \nabla g_i(\hat x)$. Point $\hat x$ is then updated by adding the average of the descent directions for all violated linear and nonlinear constraints. This step is iterated until $\hat x$ becomes feasible, or a stopping criterion has been fulfilled. By default, the multistart heuristic currently runs only for continuous problems ($\mathcal{I}=\emptyset$), since rounding and fixing integer variables would most likely lead to infeasible NLP subproblems. For more details, see~\cite{SCIPoptsuite40}. \subsubsection{NLP Diving} As an alternative to finding a good fixing for all integer variables of~\eqref{eq:minlp}, the NLP diving heuristic starts by solving the NLP relaxation at the current branch-and-bound node with an NLP solver.
It then iteratively fixes integer variables with fractional value and resolves both the LP and NLP relaxations, thereby simulating a depth-first search in a branch-and-bound tree. By default, variables for which the sum of the distances from the solutions of the LP and NLP relaxations to a common integer value is minimal are rounded to the nearest integer value. Further, binary variables and nonlinear variables are preferred. If the resulting NLP is found to be (locally) infeasible, one-level backtracking is applied, that is, the last fixing is undone, and the opposite fixing is tried. If this is infeasible, too, the heuristic aborts. \subsubsection{MPEC} While the NLP diving heuristic either completely omits or enforces integrality restrictions in the NLP relaxation, the MPEC heuristic adds a relaxation of the integrality restriction to the NLP and tightens this relaxation iteratively. The heuristic is only applicable to mixed-binary nonlinear programs at the moment. The basic idea of the heuristic, originally developed in~\cite{ScheweSchmidt2019}, is to reformulate \eqref{eq:minlp} as a mathematical program with equilibrium constraints (MPEC) and to solve this MPEC to local optimality. The MPEC is obtained from \eqref{eq:minlp} by rewriting the condition $x_i\in\{0,1\}$, $i\in\mathcal{I}$, as the complementarity constraint $x_i \perp 1 - x_i$. The complementarity constraint is in turn rewritten as the nonlinear equation $x_i\, (1-x_i) = 0$, which yields an NLP. However, since these reformulated complementarity constraints will not, in general, satisfy constraint qualifications, solving this NLP reformulation with a generic NLP solver will often fail. Therefore, in order to increase the chances of solving the NLP reformulation, the heuristic solves regularized versions of the NLP by relaxing $x_i (1-x_i) = 0$ to $x_i (1-x_i) \leq \theta$, for a decreasing sequence of values $\theta > 0$. The solution of one NLP is thereby used as the starting point for the next solve.
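To see how the regularization drives the relaxed binaries back to $\{0,1\}$, note that for $x\in[0,1]$ the set $\{x : x(1-x)\leq\theta\}$ is the union $[0,r(\theta)]\cup[1-r(\theta),1]$ with $r(\theta)=(1-\sqrt{1-4\theta})/2$, which shrinks to $\{0,1\}$ as $\theta\to 0$. A small illustration of this shrinkage (not part of the heuristic itself):

```python
import math

# For a binary x relaxed to [0,1], the regularized constraint
# x*(1-x) <= theta restricts x to [0, r] union [1-r, 1] with
# r = (1 - sqrt(1 - 4*theta))/2; r -> 0 as theta -> 0.

def regularized_interval_radius(theta):
    """Largest x in [0, 1/2] with x*(1-x) <= theta."""
    if theta >= 0.25:          # constraint inactive on all of [0,1]
        return 0.5
    return (1.0 - math.sqrt(1.0 - 4.0 * theta)) / 2.0

for theta in (0.25, 0.1, 0.01, 0.0001):
    print(theta, regularized_interval_radius(theta))
```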
If the NLP solution is close to satisfying $x_\mathcal{I}\in\{0,1\}^{\vert\mathcal{I}\vert}$, it is passed as starting point to the \texttt{subNLP} heuristic. If an NLP is (locally) infeasible, the heuristic makes two more attempts in which the values for binary variables that are already close to $0$ or $1$ are flipped to $1$ or $0$, respectively. For more details, see~\cite{SCIPoptsuite50}. \subsubsection{Undercover} While the previous heuristics focused mainly on enforcing the integrality condition on an NLP, heuristic \texttt{undercover}~\cite{BertholdGleixner2014} starts from a completely different angle. The heuristic is based on the observation that it sometimes suffices to fix only a comparatively small number of variables of~\eqref{eq:minlp} to yield a subproblem with all constraints being linear. For example, for a bilinear term, only one of the variables needs to be fixed. The variables to fix are chosen by solving a set covering problem, which aims at minimizing the number of variables to fix. The values for the fixed variables are taken from the solution of the LP or NLP relaxation or a known feasible solution of the MINLP. The resulting sub-MIP is less complex to solve and does not need to be solved to proven optimality. The solutions of the sub-MIP are immediately feasible for~\eqref{eq:minlp}. However, the best one is also passed as starting point to heuristic \texttt{subNLP} to attempt further improvement. For more details, see~\cite{BertholdGleixner2014}. \section{Benchmark} The following aims to present a fair comparison of SCIP with several other state-of-the-art solvers for general MINLP. Doing so is not trivial at all. First, a set of instances needs to be selected that is suitable as a benchmark set. Second, solver parameters have to be set such that all solvers solve the same instances with the same working limits and the same requirements on feasibility and optimality -- this goal could not be reached completely.
Third, the solvers' results have to be checked for correctness, or, when this is not possible, for plausibility. GAMS was used for the experiments, as it provides various facilities to aid solver comparisons and comes with current versions of SCIP and the commercial solvers BARON~\cite{KhajaviradSahinidis2018}, Lindo~API~\cite{LinSchrage2009}, and Octeract included. ANTIGONE has not been included in the comparison, as its development seems to have stopped years ago. All computations were run on a Linux cluster with Intel Xeon E5-2670~v2 CPUs (20 cores). The GAMS version is 41.2.0, which includes SCIP~8.0.2, BARON~22.9.30, Lindo~API~14.0.5099.162, and Octeract~4.5.1. A GAMS license with all solvers enabled was used, so that SCIP uses CPLEX~22.1.0.0 as LP solver and Ipopt with HSL MA27 as NLP solver, BARON can choose between all LP/MIP/NLP solvers that it interfaces with, and Octeract uses CPLEX~22.1.0.0 as LP/MIP/QP/QCP solver. \subsection{Test Set} \label{sec:testset} To construct a test set suitable for benchmarking, the MINLPLib~\cite{minlplib} collection of 1595 MINLP instances was used as source. First, all instances that could not be handled by some of the considered solvers were excluded, e.g., instances with trigonometric functions, as they are not supported by BARON. All solvers were then run in serial mode (that is, with parallelization features disabled) on the remaining 1505 instances, using the parameter settings described below. The results of these runs were then used to select a set of 200 instances that could be solved by at least one solver, that were not all trivial, that had a varying degree of integrality and nonlinearity, and while avoiding many instances with similar names. The latter was done to avoid overrepresentation of optimization problems for which many instances were added to MINLPLib.
Since small changes to an instance can lead to large variations in a solver's performance, the benchmark's reliability is improved by considering for each instance four additional variants where the order of variables and equations has been permuted. The permuted instances were generated with GAMS/Convert. Thus, a test set of 1000 instances is obtained. The following approach was used to select the 200 instances before permutation: Let $I$ be the set of 1505 instances, $d_i$ be the fraction of integer variables in instance $i\in I$, and $e_i$ be the fraction of nonzeros in the Jacobian and objective function gradient that correspond to nonlinear terms. Next, assign to each instance an identifier $f_i\in F$ such that instances that seem to come from the same model are assigned the same identifier. This goal is approximated by mapping~$i$ to the name of the instance truncated at the first digit, underscore, or dash, except for the block layout design instances \texttt{fo*}, \texttt{m*}, \texttt{no*}, \texttt{o*}, which were all assigned to the same identifier. $\vert F\vert=230$ different identifiers were found this way. Further, let $\upp{t}_i$ be the largest time in seconds that any solver that did not produce wrong results on instance $i$ spent on it. Finally, let $S$ be the set of instances that could be solved by at least one solver. To ensure that instances with a varying number of integer variables and degree of nonlinearity are included, the interval $[0,1]$ was split once at breakpoints $0.05, 0.25, 0.5, 0.9$ and once at $0.1, 0.25, 0.5$. Let $D$ and $E$ be the resulting partitions of $[0,1]$. For every interval from $D$ and $E$, the aim is to have roughly the same number of instances with $d_i$ and $e_i$ in the respective intervals. For the choice of breakpoints that define $D$ and $E$, the distributions of $d_i$ and $e_i$, $i\in I$, have been taken into account.
For example, MINLPLib contains many purely continuous and purely discrete instances, but not many instances that are mostly linear or completely nonlinear. To avoid including too many instances originating from the same model, including more than two instances for each identifier in $F$ is discouraged. Further, instances that seem trivial, i.e., which are solved by all solvers in no more than five seconds, or could not be solved by any solver are excluded. Introducing penalty terms, the following optimization problem for instance selection is obtained: \begin{align*} \min\; & \sum_{d \in D} \lambda_d^2 + \sum_{e \in E} \lambda_e^2 + 10 \sum_{f\in F} \lambda_f^2 \\ \text{such~that}\; & \sum_{i\in I: d_i\in d} z_i = \left\lfloor \frac{N}{\vert D\vert} \right\rceil + \lambda_{d} && \forall d \in D, \\ & \sum_{i\in I: e_i \in e} z_i = \left\lfloor \frac{N}{\vert E\vert} \right\rceil + \lambda_{e} && \forall e \in E, \\ & \sum_{i\in I: f_i = f} z_i \leq 2 + \lambda_f && \forall f \in F, \\ & z_i = 0 && \forall i\in I: \upp{t}_i \leq 5, \\ & z_i = 0 && \forall i\in I: i\not\in S, \\ & z \in \{0,1\}^{\vert I\vert}, \lambda \in \mathbb{Z}^{\vert D\vert+\vert E\vert+\vert F\vert} \end{align*} This problem was solved for $N$ varying between 180 and 220. For $N=208$, this yielded a selection of 200 instances with an acceptable penalty value of 106. See Section~\ref{sec:testsetdetail} for a list of all selected instances. Table~\ref{tab:bucketsizes} shows the number of instances for each element of $D\times E$. For five identifiers from $F$, three instead of two instances were selected, i.e., $\lambda_f=1$ for five $f\in F$.
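For illustration, the penalty value of a candidate selection $z$ can be evaluated directly. The sketch below uses made-up toy data and hypothetical names; it mirrors the objective above with $\lambda_d=\text{count}-\text{target}$, $\lambda_e$ analogously, and $\lambda_f=\max(0,\text{count}-2)$:

```python
# Sketch of evaluating the penalty objective of the selection problem
# for a given 0/1 selection z (toy data, hypothetical helper).

def selection_penalty(z, d, e, f, D, E, N):
    """Sum of squared deviations from the per-bucket targets."""
    def target(P):
        return round(N / len(P))
    pen = 0
    for lo, hi in D:   # "discreteness" buckets
        cnt = sum(z[i] for i in z if lo <= d[i] < hi)
        pen += (cnt - target(D)) ** 2
    for lo, hi in E:   # "nonlinearity" buckets
        cnt = sum(z[i] for i in z if lo <= e[i] < hi)
        pen += (cnt - target(E)) ** 2
    for fid in set(f.values()):  # at most 2 instances per identifier
        cnt = sum(z[i] for i in z if f[i] == fid)
        pen += 10 * max(0, cnt - 2) ** 2
    return pen

# Toy data: 3 instances, one bucket each for D and E, target N = 2
z = {1: 1, 2: 1, 3: 0}
print(selection_penalty(z, {1: 0.1, 2: 0.2, 3: 0.9},
                        {1: 0.5, 2: 0.5, 3: 0.5},
                        {1: "a", 2: "a", 3: "b"},
                        [(0, 1.01)], [(0, 1.01)], 2))  # 0
```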
\begin{table}[ht] \centering \begin{tabular}{l|rrrrr|r} \toprule $E\downarrow$ \,$\vert$\, $D\rightarrow$ & $[0,0.05)$ & $[0.05,0.25)$ & $[0.25,0.5)$ & $[0.5,0.9)$ & $[0.9,1]$ & $[0,1]$ \\ \midrule $[0,0.1)$ & 3 & 7 & 19 & 15 & 6 & 50 \\ $[0.1,0.25)$ & 8 & 22 & 9 & 7 & 4 & 50 \\ $[0.25,0.5)$ & 8 & 8 & 6 & 10 & 18 & 50 \\ $[0.5,1]$ & 25 & 2 & 5 & 7 & 11 & 50 \\\midrule $[0,1]$ & 44 & 39 & 39 & 39 & 39 & 200 \\ \bottomrule \end{tabular} \caption{Number of instances selected with ``discreteness'' $d_i$ and ``nonlinearity'' $e_i$ in intervals from $D$ and $E$.} \label{tab:bucketsizes} \end{table} \subsection{Parameter Settings} \subsubsection{Missing Variable Bounds} \label{sec:missingbounds} To compute a lower bound on the optimal value of a minimization problem, all solvers considered here construct a convex relaxation of the given problem. For nonconvex constraints, this often relies on the computation of valid convex underestimators or concave overestimators. As these typically depend on variables' bounds (recall the McCormick underestimators~\eqref{eq:mccormick}), missing or very large bounds on variables in nonconvex terms can mean that an instance will be very hard or impossible to solve. Even when the user has forgotten to specify some variable bounds, the solver may still be able to derive bounds via domain propagation. Further, once a feasible solution $\hat x$ has been found, additional bounds may be derived from the inequality $c^\top x\leq c^\top \hat x$. However, as there are always cases where bounds are still missing after presolve, solvers have devised different ways to deal with this obstacle. If SCIP cannot construct an under- or overestimator because of missing variable bounds, it continues by branching on an unbounded variable. This way, there will eventually be a node in the branch-and-bound tree where all variables are bounded.
Nodes that still contain unbounded variable domains may be pruned due to a derived lower bound on the objective function exceeding the incumbent's objective function value. But it may also be the case that pruning is not possible and SCIP does not terminate. However, variable bounds after branching cannot grow indefinitely in SCIP, but are limited to $\pm 10^{20}$ by default. That is, SCIP does not search for solutions with variable values beyond this value. The other solvers considered here add variable bounds based on a heuristic decision. If BARON is still missing bounds on variables in nonconvex terms after presolve, it sets the bound to a value that depends on the type of nonlinearity involved. Typically, this value is around $\pm 10^{10}$. BARON also prints a warning to the log and no longer claims to have solved a problem to global optimality, i.e., it does not return a lower bound. Lindo API adjusts the bounds for all variables that are involved in convexification to be within $[-10^{10},10^{10}]$. At termination, it returns the lower bound for the restricted problem. Octeract proceeds similarly, introducing a bound of $\pm 10^7$ for every missing bound and returning the lower bound for the restricted problem at termination. Evidently, just passing an instance with unbounded variables to a solver with default settings may mean that each solver solves a different subproblem of the actual problem and often also reports a lower bound that corresponds to the solved subproblem only. Fortunately, for every solver considered here, parameters are available to adjust the treatment of unbounded variables. A first impulse could be to tell all solvers to set missing bounds to infinity, but this is not possible, as each solver treats values beyond a certain finite value as ``infinity'' (BARON: $10^{50}$, Octeract: $10^{308}$, SCIP: $10^{20}$). Changing this value is either not possible or not advisable.
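A uniform treatment can instead be approximated by replacing each missing bound with a large finite value before an instance is passed to a solver, staying strictly below the value the solver treats as infinity. A minimal sketch, assuming a simple list-of-pairs representation of variable bounds with `None` marking a missing bound (the data layout is hypothetical):

```python
def replace_missing_bounds(bounds, big=1e12, solver_inf=1e20):
    """Replace missing variable bounds by a large finite value.

    `bounds` is a list of (lb, ub) pairs with None for a missing bound
    (hypothetical layout). The replacement must stay below the solver's
    infinity threshold, or it would be treated as no bound at all.
    """
    assert big < solver_inf
    return [(-big if lb is None else lb, big if ub is None else ub)
            for lb, ub in bounds]
```
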
We therefore decided to aim for $\pm 10^{12}$ as the replacement for a missing variable bound. For BARON and SCIP, the GAMS interface can replace any missing bound by $\pm 10^{12}$ before the instance is passed to the solver. BARON will hence also return a lower bound for this restricted problem. For Lindo API, a solver parameter can be changed so that bounds for all variables subject to convexification are bounded by $\pm 10^{12}$ (instead of $\pm10^{10}$). Finally, for Octeract as well, all missing bounds are set to $\pm10^{12}$ (instead of $\pm10^7$) by changing a solver parameter. Note that this still does not ensure that all solvers solve the same instance, since Lindo API would still change initial finite bounds beyond $10^{12}$ and may also not set any bounds for variables that are not involved in convexification. Besides missing bounds on problem variables, singularities in functions (e.g., $1/x$, $\log(x)$) can also prevent finite under- or overestimators from being available. Unfortunately, there are no parameters available to ensure a uniform treatment of this case in all solvers. SCIP ensures that a variable $x$ in $x^p$, $p<0$, or $\log(x)$ is bounded away from zero by $10^{-9}$, and terminates with a lower bound for this modified problem. BARON applies the same method as the one for missing bounds on problem variables to choose a suitable bound on $x$. In this case, no lower bound is returned at termination. The methods in Lindo API and Octeract are not known to us. \subsubsection{Solution Quality} \label{sub:tolerances} To ensure that all solvers return solutions of the same quality, constraints of~\eqref{eq:minlp} are required to be satisfied with an absolute tolerance of $10^{-6}$. This applies to linear and nonlinear equations, variable bounds, and integrality. In addition, a tolerance on the proof of optimality is set.
For this purpose, solvers are typically allowed to stop when the absolute or relative gap between lower and upper bounds on the optimal value is sufficiently small. Since the test set is diverse and has optimal values of varying magnitude, setting only a relative gap limit and no absolute gap limit would be preferable. Unfortunately, Octeract does not permit different values for these limits. As a compromise, BARON, Lindo API, and SCIP are run with $10^{-4}$ as relative gap limit and $10^{-6}$ as absolute gap limit, while for Octeract, $10^{-6}$ is used for both the absolute and relative gap limit. Below, the impact of using a tighter optimality tolerance for Octeract is analyzed in a separate comparison. \subsubsection{Working Limits} As working limits, a time limit of two hours is used and the jobs on the cluster are restricted to 50 GB of RAM. Further, the amount of parallelization (multiple threads or processes) that a solver is allowed to use is limited in varying degrees. To simplify the presentation, the term ``threads'' is also used for Octeract, even though it uses multiple processes instead of threads to parallelize its solving process.
\subsubsection{Summary} To summarize, the following parameters are used: \begin{description} \item[GAMS] (applied to all solvers): \texttt{optcr=1e-4}, \texttt{optca=1e-6}, \texttt{reslim=7200}, \texttt{workspace={\allowbreak}50000}, \texttt{threads} $\in\{1,4,8,16\}$ \item[BARON:] \texttt{InfBnd=1e12}, \texttt{AbsConFeasTol=1e-6}, \texttt{AbsIntFeasTol=1e-6} \item[Lindo API:] \texttt{GOP\_BNDLIM=1e12}, \texttt{SOLVER\_FEASTOL=1e-6} \item[Octeract:] \texttt{INFINITY=1e12}, \texttt{INTEGRALITY\_VIOLATION\_TOLERANCE=1e-6} \item[SCIP:] \texttt{gams/infbound=1e12}, \texttt{constraints/nonlinear/linearizeheursol=o} (this\\ undoes a change in the algorithmic settings of SCIP that is part of the GAMS/SCIP interface) \end{description} \subsection{Correctness Checks} The GAMS/Examiner 2.0 tool is used to evaluate the violation of constraints, bounds, and integrality in the solutions reported by the solvers. For each solver, Examiner generates a file that contains, for each instance, the solving time, the returned lower and upper bounds, and the solution infeasibility. A run of a solver on an instance is marked as \emph{failed} if the solver terminated abnormally, the solution is not feasible with respect to the feasibility tolerance, or the lower or upper bound contradicts the bounds on the optimal value that are specified on the MINLPLib page. Note that the primal and dual bounds on the MINLPLib page were calculated without enforcing the $\pm 10^{12}$ limit on unbounded variables. However, in order for an instance to be accepted into the test set, one of the solvers considered here must have solved the instance and found an optimal value that fits within the lower and upper bounds given at MINLPLib. It is therefore acceptable to use these bounds for checking. A run that has not failed is marked as \emph{solved} if the relative or absolute limit on the gap between lower and upper bound is satisfied.
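The ``solved'' criterion amounts to a small predicate on the reported bounds. A minimal sketch in Python; note that the reference value in the relative-gap denominator differs between solvers, and $|ub|$ is assumed here purely for illustration:

```python
def gap_closed(lb, ub, rel_tol=1e-4, abs_tol=1e-6):
    """Return True if either the absolute or relative gap limit is met
    for a minimization problem with lower bound lb and upper bound ub."""
    # absolute gap limit
    if ub - lb <= abs_tol:
        return True
    # relative gap limit; the denominator |ub| is an assumption of this
    # sketch -- solvers use differing formulas for the relative gap
    return ub != 0 and (ub - lb) <= rel_tol * abs(ub)
```
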
If a solver stopped without closing the gap before the time limit, then the solver time is changed to the time limit. The only exception here is BARON, which stops on two instances before the time limit without reporting a lower bound due to singularities in functions (see Section~\ref{sec:missingbounds}). To be consistent with the treatment of the other solvers, these two instances were accounted as solved by BARON with the original solver time. \subsection{Results} \subsubsection{Serial Mode} For the main comparison, all parallelization features in the solvers were disabled, that is, GAMS was run with option \texttt{threads} set to 1. In addition to the solvers themselves, results for the \emph{virtual best} and \emph{virtual worst} solver are reported, which are obtained by picking for each instance the fastest or the slowest solver, respectively. Table~\ref{tab:results_1th} shows for each solver the number of instances that could be solved, the number of times the time limit was reached, and the number of runs that were marked as failed. Further, the shifted geometric mean of the running time of the solver is provided. The shift has been set to 1 second. Here, instances that failed are accounted with the time limit. The performance profile~\cite{DolanMore2002} in Figure~\ref{fig:pprofile_1th} shows the number of instances a solver solved within a time that is at most a given factor of the fastest solver's time. Section~\ref{sec:detailed_singlethread} provides detailed results.
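The shifted geometric mean used for aggregating running times damps the influence of very small times relative to the plain geometric mean. A minimal sketch with shift $s$ (set to 1 second in Table~\ref{tab:results_1th}):

```python
import math

def shifted_geomean(times, shift=1.0):
    """Shifted geometric mean: exp( (1/n) * sum_i log(t_i + s) ) - s."""
    n = len(times)
    return math.exp(sum(math.log(t + shift) for t in times) / n) - shift
```
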
\begin{table}[ht] \centering \begin{tabular}{l|rrrr} & solved & timeout & fail & time \\ \midrule BARON & 790 & 183 & 27 & 75.4 \\ Lindo API & 538 & 323 & 139 & 489.1 \\ Octeract & 671 & 279 & 50 & 184.1 \\ SCIP & 776 & 183 & 41 & 85.2 \\ \midrule virt.~worst & 368 & 405 & 227 & 1505.2 \\ virt.~best & 967 & 33 & 0 & 19.7 \\ \bottomrule \end{tabular} \caption{Aggregated performance data for all solvers on test set of 1000 instances with parallelization disabled.} \label{tab:results_1th} \end{table} \begin{figure} \caption{Performance profile comparing all solvers with parallelization disabled.} \label{fig:pprofile_1th} \end{figure} The results show a small lead of BARON over SCIP with respect to both the number of instances solved and average time. Since the number of timeouts is almost equal, one could argue that it is the higher stability of BARON that moves it into first place here. In fact, the 41 fails of SCIP are due to returning a wrong optimal value 16 times, returning an infeasible solution 23 times, and aborts due to numerical troubles for two instances. For BARON, fails are due to returning a wrong optimal value 26 times and an infeasible solution only once. While SCIP~8.0 has made a large step forward in ensuring that nonlinear constraints are satisfied in the non-presolved problem, violations in linear constraints or variable bounds still occur for a few instances. These are typically due to variables being aggregated during presolve. Even though Octeract and Lindo API solved considerably fewer instances than BARON and SCIP, which also results in an increased mean time, it is noteworthy that each of the two is also the fastest solver on 270 and 66 instances, respectively. Octeract also produced correct results for 95\% of the test set, while for Lindo API a relatively high number of wrong optimal values, infeasible solutions, or aborts is observed.
The large differences between the real and virtual solvers show that none of the solvers dominates all others or is itself dominated. Next, the effect of changing the gap limit for Octeract has been investigated. Recall from Section~\ref{sub:tolerances} that relative and absolute gap limits of $10^{-4}$ and $10^{-6}$, respectively, were used for all solvers except for Octeract. Since Octeract does not allow choosing these limits separately, it had been run with the tighter relative gap limit of $10^{-6}$. To check whether this led to a considerable disadvantage for this solver, it was rerun on the 200 non-permuted instances with both relative and absolute gap limits set to $10^{-4}$. The table in Section~\ref{sec:detailed_octeractconvtol} shows that the change in the convergence tolerance had essentially no effect on the solver's performance. In both cases, the same 134 instances could be solved. The mean time changed from 178.6 seconds for a limit of $10^{-6}$ to 179.0 seconds for a limit of $10^{-4}$. \subsubsection{Parallel Mode} In the next comparison, each solver is allowed to use multiple threads or processes. Since SCIP's use of multiple threads is limited to presolving MIPs, checking quadratic functions for convexity, and the linear algebra in Ipopt, FiberSCIP~\cite{ShinanoHeinzVigerskeWinkler2018} is used to run SCIP in parallel mode. FiberSCIP is a shared-memory instantiation of the UG framework~\cite{ug} for the parallelization of branch-and-bound based solvers. The framework parallelizes the search of the branch-and-bound tree by collecting and distributing open problems between independent instances of SCIP. In addition, the first seconds of the solving process are used for a ``racing ramp-up'' phase. Here, multiple SCIP instances with differing parameter sets are run concurrently, and the one with the best lower bound is used for the remaining solve. The UG version used was 1.0.0~beta3.
For the runs in serial mode, reaching the memory limit of 50 GB was not observed for any solver. But since parallelization often increases memory requirements, a memory limit of 100 GB has been used for the runs in parallel mode. Since this meant a reduction in available computing resources, only the 200 non-permuted instances are used for comparisons. Table~\ref{tab:results_multithread} shows, for an increasing number of threads, the number of instances that could be solved by each solver and the mean time spent. In addition, Figure~\ref{fig:pprofile_fiberscip} provides a performance profile that compares SCIP and FiberSCIP only. Section~\ref{sec:detailed_multithread} gives detailed results. \begin{table}[ht] \centering \begin{tabular}{l|rrrrrrrr} & \multicolumn{2}{c}{1 thread} & \multicolumn{2}{c}{4 threads} & \multicolumn{2}{c}{8 threads} & \multicolumn{2}{c}{16 threads} \\ & solved & time & solved & time & solved & time & solved & time \\ \midrule BARON & 161 & 64.3 & 160 & 58.2 & 160 & 57.1 & 158 & 58.6 \\ Lindo API & 114 & 423.6 & 114 & 379.2 & 106 & 459.5 & 107 & 456.4 \\ Octeract & 134 & 178.6 & 133 & 146.9 & 138 & 118.1 & 135 & 123.2 \\ (Fiber)SCIP & 161 & 76.9 & 145 & 94.3 & 147 & 77.8 & 152 & 74.8 \\ \bottomrule \end{tabular} \caption{Aggregated performance data for all solvers on test set of 200 instances when run with parallelization allowed.} \label{tab:results_multithread} \end{table} \begin{figure} \caption{Performance profile comparing SCIP and FiberSCIP.} \label{fig:pprofile_fiberscip} \end{figure} Apparently, enabling parallelization seldom has a considerable advantage on this test set. For Octeract, where parallelization was part of its original design, a small increase in the number of instances that could be solved and a reduction in time by~34\% when using up to 8 parallel processes are observed. As far as we know, BARON's use of multiple threads is currently limited to enabling this feature in the solver for a MIP relaxation.
As a consequence, only moderate improvements in running time of up to~11\% are seen. For Lindo API, an improvement due to parallelization seems to be impeded by a further increase in fails when using multiple threads (1~thread: 24, 4: 28, 8: 35, 16: 43). Finally, for SCIP/FiberSCIP, the additional overhead due to the parallelization being built on top of the solver instead of being tightly integrated is not compensated by the use of multiple threads. However, in contrast to the other solvers, a monotonic improvement in both the number of instances solved and the mean solving time is observed when increasing from 4 to 16 threads. Further, the virtual solvers in the performance profile show that FiberSCIP can solve instances that SCIP on one thread could not solve. Finally, note that a benefit from parallelization can usually only be expected for rather challenging instances because of the additional overhead of duplicating and synchronizing data and processes. However, the test set deliberately contains only instances that at least one solver could solve in serial mode, and only instances that were trivial for all solvers were excluded; many instances may thus still be solved quickly by some solver. As a small experiment, for each solver only those instances that required at least 10 or 100 seconds to solve in serial mode were considered. Unfortunately, this essentially repeated the trends shown in Table~\ref{tab:results_multithread}, so details are omitted here. A more thorough analysis of the parallelization capabilities of MINLP solvers on a set of challenging instances only would be necessary, but exceeds the scope of this paper. \section{Conclusion} The development of the MINLP solver in SCIP has come a long way. In a recent version-to-version comparison~\cite[slides 49-51]{marcscipversions}, a steady improvement in the performance of SCIP on MINLP over the last ten years has been measured, resulting in SCIP 8 solving twice as many instances as SCIP 3 and a speed-up by a factor of three.
In part, this improvement has been achieved by improving and adding features particular to MINLP. However, due to the generality of SCIP as a CIP solver, many developments that targeted MIP solving were also immediately available for MINLP solving. With version 8.0, the MINLP solving capabilities of SCIP have been largely reworked and extended, which resulted in a considerable improvement in both robustness and performance~\cite{SCIPoptsuite80,marcscipversions}. As a result, SCIP's performance is currently on par with the state-of-the-art commercial solver BARON. In contrast to the commercial solvers considered here, SCIP offers a variety of possibilities for a user, developer, or researcher to interact with the solving process. In particular, the newly added ``nonlinear handler'' plugin type sets SCIP apart from most other MINLP solvers, as it allows experimenting with new algorithms for handling certain structures in nonlinear functions without modifications to the solver's code. The rather large number of features that are disabled by default shows that tuning and improving the existing code base has become increasingly necessary. Future work will of course also include the addition of new features, e.g., improved separation for signomial functions~\cite{XuDAmbrosioLibertiVanier2022}, the use of alternative relaxations for polynomial functions~\cite{BestuzhevaGleixnerVoelker2022}, or monoidal strengthening of intersection cuts for quadratic constraints~\cite{ChmielaMunozSerrano2022}. The increasing number of cores in present-day CPUs means that, to fully utilize an ordinary desktop computer, a solver needs to be parallelized. While the UG framework provides such a possibility for SCIP in both shared- and distributed-memory environments, the experiments with FiberSCIP on up to 16 threads show that more tuning is necessary to ensure that the additional overhead can be compensated by the use of additional computing resources.
Since the development of UG was initially motivated by, and has focused primarily on, the use of large-scale parallel computing environments~\cite{ShinanoAchterbergBertholdHeinzKochWinkler2016}, an investigation of using UG with SCIP to solve challenging MINLPs in distributed-memory environments with many CPU cores could be interesting as well. \section*{Acknowledgments} We are very much in all SCIP developers' debt -- the extensions to support nonlinear constraints and solve MINLPs would not have been possible without the framework's existence and the powerful MIP solver that we could build upon. While the authors of this paper are the main developers of the new MINLP features in SCIP 8, many have contributed to the MINLP capabilities in previous releases of SCIP, namely Martin Ballerstein, Timo Berthold, Tobias Fischer, Thorsten Gellermann, Ambros Gleixner, Renke Kuhlmann, Dennis Michaels, Marc Pfetsch, and Stefan Weltge. Further, we thank Yuji Shinano for the development of FiberSCIP and for swiftly reacting to our request for the possibility to set gap limits. Last but not least, we are very grateful to Franziska Schl{\"o}sser for the setup and maintenance of benchmarking and testing facilities for the infamous ``consexpr'' development branch of SCIP. The work for this article has been conducted within the Research Campus Modal funded by the German Federal Ministry of Education and Research (BMBF grant numbers 05M14ZAM, 05M20ZBM). Additional funding has been received from the German Federal Ministry for Economic Affairs and Energy within the project EnBA-M (ID: 03ET1549D). \appendix \section{Test Set} \label{sec:testsetdetail} The following table provides details on the test set of 200 instances that was constructed by the selection process described in Section~\ref{sec:testset}.
For each instance, the number of variables ($n$), the number of discrete variables ($\vert\mathcal{I}\vert$), the number of constraints ($m+\tilde m$), the number of nonzeros in the Jacobian and objective function gradient (nz), and the number of nonzeros that correspond to nonlinear terms (nlnz) is given. { \scriptsize \begin{longtable}{l|rrrrr}\toprule instance & n & $\vert\mathcal{I}\vert$ & m+$\tilde m$ & nz & nlnz \\ \midrule \endhead alan & 8 & 4 & 7 & 23 & 3 \\ autocorr\_bern20-05 & 20 & 20 & 0 & 20 & 20 \\ autocorr\_bern35-04 & 35 & 35 & 0 & 35 & 35 \\ ball\_mk2\_10 & 10 & 10 & 1 & 20 & 10 \\ ball\_mk2\_30 & 30 & 30 & 1 & 60 & 30 \\ ball\_mk3\_10 & 10 & 10 & 1 & 20 & 10 \\ batch0812\_nc & 76 & 36 & 205 & 472 & 232 \\ batchs101006m & 278 & 129 & 1019 & 2865 & 49 \\ batchs121208m & 406 & 203 & 1511 & 4255 & 59 \\ bayes2\_20 & 86 & 0 & 77 & 615 & 440 \\ bayes2\_30 & 86 & 0 & 77 & 618 & 440 \\ blend029 & 102 & 36 & 213 & 542 & 64 \\ blend146 & 222 & 87 & 624 & 1721 & 256 \\ camshape100 & 199 & 0 & 200 & 696 & 299 \\ cardqp\_inlp & 50 & 50 & 1 & 100 & 50 \\ cardqp\_iqp & 50 & 50 & 1 & 100 & 50 \\ carton7 & 328 & 256 & 687 & 3979 & 678 \\ carton9 & 360 & 288 & 893 & 4917 & 758 \\ casctanks & 500 & 40 & 517 & 1605 & 514 \\ cecil\_13 & 840 & 180 & 898 & 2811 & 360 \\ celar6-sub0 & 640 & 640 & 16 & 1280 & 640 \\ chakra & 62 & 0 & 41 & 142 & 41 \\ chem & 11 & 0 & 4 & 36 & 11 \\ chenery & 43 & 0 & 38 & 132 & 56 \\ chimera\_k64maxcut-01 & 1101 & 1101 & 0 & 1101 & 1101 \\ chimera\_mis-01 & 2032 & 2032 & 0 & 2032 & 2032 \\ chp\_shorttermplan1a & 1008 & 144 & 2068 & 6118 & 576 \\ chp\_shorttermplan2a & 1584 & 240 & 3896 & 10160 & 1152 \\ chp\_shorttermplan2b & 1392 & 192 & 2552 & 7672 & 1440 \\ clay0204m & 52 & 32 & 90 & 284 & 64 \\ clay0205m & 80 & 50 & 135 & 430 & 80 \\ color\_lab3\_3x0 & 316 & 316 & 80 & 632 & 237 \\ crossdock\_15x7 & 210 & 210 & 44 & 630 & 210 \\ crossdock\_15x8 & 240 & 240 & 46 & 720 & 240 \\ crudeoil\_lee1\_07 & 749 & 56 & 1776 & 8124 & 896 \\ 
crudeoil\_pooling\_ct2 & 403 & 108 & 732 & 2523 & 140 \\ csched1 & 76 & 63 & 22 & 173 & 8 \\ csched1a & 28 & 15 & 22 & 77 & 7 \\ cvxnonsep\_psig20 & 20 & 10 & 0 & 20 & 20 \\ cvxnonsep\_psig30 & 30 & 15 & 0 & 30 & 30 \\ du-opt & 20 & 13 & 9 & 46 & 20 \\ du-opt5 & 20 & 13 & 9 & 46 & 20 \\ edgecross10-040 & 90 & 90 & 480 & 1530 & 90 \\ edgecross10-080 & 90 & 74 & 480 & 1528 & 88 \\ eg\_all\_s & 7 & 7 & 27 & 219 & 196 \\ eigena2 & 2500 & 0 & 1275 & 127500 & 127500 \\ elec50 & 150 & 0 & 50 & 300 & 300 \\ elf & 54 & 24 & 38 & 177 & 30 \\ eniplac & 141 & 24 & 189 & 510 & 48 \\ enpro56pb & 127 & 73 & 191 & 650 & 24 \\ ex1244 & 95 & 23 & 129 & 468 & 52 \\ ex1252a & 24 & 9 & 34 & 93 & 36 \\ faclay20h & 190 & 190 & 2280 & 7030 & 190 \\ faclay80 & 3160 & 3160 & 164320 & 496120 & 3160 \\ feedtray & 97 & 7 & 91 & 450 & 282 \\ fin2bb & 588 & 175 & 618 & 9413 & 42 \\ flay04m & 42 & 24 & 42 & 154 & 4 \\ flay05m & 62 & 40 & 65 & 242 & 5 \\ flay06m & 86 & 60 & 93 & 350 & 6 \\ fo7\_ar25\_1 & 112 & 42 & 269 & 1054 & 14 \\ fo7\_ar3\_1 & 112 & 42 & 269 & 1054 & 14 \\ forest & 236 & 73 & 309 & 1013 & 178 \\ gabriel01 & 215 & 72 & 467 & 1789 & 512 \\ gabriel02 & 261 & 71 & 597 & 2608 & 1024 \\ gasnet & 90 & 10 & 69 & 266 & 130 \\ gasprod\_sarawak16 & 1526 & 38 & 2252 & 6453 & 1088 \\ gastrans582\_cold13\_95 & 2186 & 250 & 3732 & 8538 & 2139 \\ gastrans582\_mild11 & 2186 & 250 & 3732 & 8538 & 2139 \\ gear & 4 & 4 & 0 & 4 & 4 \\ gear2 & 28 & 24 & 4 & 32 & 4 \\ gear4 & 6 & 4 & 1 & 8 & 4 \\ genpooling\_lee1 & 49 & 9 & 82 & 369 & 128 \\ genpooling\_lee2 & 53 & 9 & 92 & 453 & 192 \\ ghg\_1veh & 29 & 12 & 37 & 130 & 91 \\ gilbert & 1000 & 0 & 1 & 2000 & 2000 \\ graphpart\_2g-0066-0066 & 108 & 108 & 36 & 216 & 108 \\ graphpart\_clique-60 & 180 & 180 & 60 & 360 & 180 \\ gsg\_0001 & 77 & 0 & 111 & 368 & 44 \\ hadamard\_5 & 25 & 25 & 0 & 25 & 25 \\ heatexch\_spec1 & 56 & 12 & 64 & 224 & 32 \\ heatexch\_spec2 & 76 & 16 & 90 & 300 & 42 \\ hhfair & 29 & 0 & 25 & 80 & 21 \\ himmel16 & 18 & 0 & 21 & 96 & 
84 \\ house & 8 & 0 & 8 & 25 & 9 \\ hs62 & 3 & 0 & 1 & 6 & 6 \\ hvb11 & 9817 & 9537 & 10251 & 36005 & 64 \\ hybriddynamic\_var & 81 & 10 & 100 & 286 & 61 \\ hybriddynamic\_varcc & 151 & 0 & 110 & 388 & 101 \\ hydroenergy1 & 288 & 96 & 428 & 1212 & 120 \\ ibs2 & 3010 & 1500 & 1821 & 13510 & 3000 \\ johnall & 194 & 190 & 192 & 957 & 573 \\ kall\_circles\_c6b & 17 & 0 & 53 & 148 & 86 \\ kall\_congruentcircles\_c72 & 17 & 0 & 59 & 160 & 86 \\ kissing2 & 772 & 0 & 10000 & 154400 & 154400 \\ kport20 & 101 & 40 & 27 & 189 & 116 \\ kriging\_peaks-red020 & 2 & 0 & 0 & 2 & 2 \\ kriging\_peaks-red100 & 2 & 0 & 0 & 2 & 2 \\ lop97icx & 986 & 899 & 87 & 1890 & 704 \\ mathopt5\_7 & 1 & 0 & 0 & 1 & 1 \\ mathopt5\_8 & 1 & 0 & 0 & 1 & 1 \\ maxcsp-geo50-20-d4-75-36 & 1000 & 1000 & 50 & 2000 & 1000 \\ meanvar-orl400\_05\_e\_7 & 2000 & 400 & 2003 & 7200 & 1600 \\ meanvar-orl400\_05\_e\_8 & 1600 & 400 & 1603 & 6400 & 800 \\ mhw4d & 5 & 0 & 3 & 13 & 10 \\ milinfract & 1000 & 500 & 501 & 502000 & 1000 \\ minlphi & 64 & 0 & 79 & 206 & 36 \\ multiplants\_mtg1a & 193 & 93 & 256 & 1972 & 95 \\ multiplants\_mtg2 & 229 & 112 & 306 & 2689 & 126 \\ nd\_netgen-3000-1-1-b-b-ns\_7 & 15000 & 3000 & 12155 & 48000 & 9000 \\ netmod\_kar1 & 456 & 136 & 666 & 1848 & 4 \\ netmod\_kar2 & 456 & 136 & 666 & 1848 & 4 \\ nous1 & 50 & 2 & 43 & 196 & 122 \\ nous2 & 50 & 2 & 43 & 196 & 122 \\ nvs02 & 8 & 5 & 3 & 19 & 16 \\ nvs06 & 2 & 2 & 0 & 2 & 2 \\ oil2 & 936 & 2 & 926 & 2214 & 440 \\ optmass & 30010 & 0 & 25005 & 80020 & 10006 \\ ortez & 87 & 18 & 74 & 268 & 54 \\ p\_ball\_10b\_5p\_3d\_m & 95 & 50 & 129 & 518 & 150 \\ p\_ball\_15b\_5p\_2d\_m & 105 & 75 & 139 & 523 & 150 \\ parabol5\_2\_3 & 40400 & 0 & 40200 & 240004 & 601 \\ parallel & 205 & 25 & 115 & 751 & 155 \\ pedigree\_ex485 & 485 & 426 & 296 & 1925 & 485 \\ pedigree\_ex485\_2 & 485 & 426 & 296 & 1925 & 485 \\ pointpack06 & 12 & 0 & 20 & 86 & 60 \\ pointpack08 & 16 & 0 & 35 & 155 & 112 \\ pooling\_epa1 & 214 & 30 & 340 & 1154 & 257 \\ pooling\_epa2 & 331 
& 45 & 524 & 1913 & 554 \\ portfol\_buyin & 17 & 8 & 19 & 58 & 16 \\ portfol\_card & 17 & 8 & 20 & 66 & 16 \\ powerflow0014r & 118 & 0 & 197 & 652 & 461 \\ powerflow0057r & 440 & 0 & 725 & 2462 & 1795 \\ prob07 & 14 & 0 & 35 & 109 & 63 \\ process & 10 & 0 & 7 & 27 & 11 \\ procurement1mot & 784 & 60 & 749 & 2444 & 12 \\ procurement2mot & 796 & 60 & 761 & 2480 & 12 \\ product & 1553 & 107 & 1925 & 5555 & 264 \\ product2 & 2842 & 128 & 3125 & 8249 & 1056 \\ prolog & 20 & 0 & 22 & 128 & 14 \\ qp3 & 100 & 0 & 52 & 2747 & 100 \\ qspp\_0\_10\_0\_1\_10\_1 & 180 & 180 & 100 & 540 & 180 \\ qspp\_0\_11\_0\_1\_10\_1 & 220 & 220 & 121 & 660 & 220 \\ radar-2000-10-a-6\_lat\_7 & 10000 & 2000 & 8001 & 28000 & 6000 \\ radar-3000-10-a-8\_lat\_7 & 15000 & 3000 & 12001 & 42000 & 9000 \\ ravempb & 112 & 54 & 186 & 610 & 28 \\ risk2bpb & 463 & 14 & 580 & 2288 & 3 \\ routingdelay\_bigm & 1123 & 396 & 2977 & 7739 & 1827 \\ rsyn0815m & 205 & 79 & 347 & 909 & 11 \\ rsyn0815m03m & 705 & 282 & 1647 & 4120 & 33 \\ sfacloc2\_2\_95 & 186 & 39 & 239 & 595 & 76 \\ sfacloc2\_3\_90 & 291 & 75 & 496 & 1282 & 135 \\ sjup2 & 1696 & 8 & 17085 & 151716 & 88800 \\ slay06m & 102 & 60 & 135 & 462 & 12 \\ slay07m & 140 & 84 & 189 & 644 & 14 \\ smallinvDAXr1b010-011 & 30 & 30 & 3 & 120 & 30 \\ smallinvDAXr1b020-022 & 30 & 30 & 3 & 120 & 30 \\ sonet17v4 & 136 & 136 & 2057 & 6527 & 272 \\ sonet18v6 & 153 & 153 & 2466 & 7802 & 306 \\ sonetgr17 & 152 & 152 & 152 & 694 & 302 \\ spectra2 & 69 & 30 & 72 & 408 & 240 \\ sporttournament24 & 276 & 276 & 0 & 276 & 276 \\ sporttournament30 & 435 & 435 & 0 & 435 & 435 \\ sssd12-05persp & 95 & 75 & 52 & 305 & 45 \\ sssd18-06persp & 150 & 126 & 66 & 474 & 54 \\ st\_testgr1 & 10 & 10 & 5 & 51 & 10 \\ st\_testgr3 & 20 & 20 & 20 & 181 & 20 \\ steenbrf & 468 & 0 & 108 & 972 & 108 \\ stockcycle & 480 & 432 & 97 & 1008 & 48 \\ supplychainp1\_022020 & 2940 & 460 & 5300 & 15040 & 40 \\ supplychainp1\_030510 & 445 & 70 & 835 & 2330 & 15 \\ supplychainr1\_022020 & 1440 & 460 & 1840 & 
7000 & 40 \\ supplychainr1\_030510 & 230 & 70 & 280 & 1005 & 15 \\ syn15m04m & 340 & 120 & 806 & 1986 & 44 \\ syn30m02m & 320 & 120 & 604 & 1502 & 40 \\ synheat & 56 & 12 & 64 & 224 & 28 \\ tanksize & 46 & 9 & 73 & 290 & 63 \\ telecomsp\_pacbell & 3570 & 3528 & 2940 & 121302 & 74088 \\ tln5 & 35 & 35 & 30 & 155 & 50 \\ tln7 & 63 & 63 & 42 & 287 & 98 \\ tls2 & 37 & 33 & 24 & 209 & 8 \\ tls4 & 105 & 89 & 64 & 613 & 32 \\ topopt-mbb\_60x40\_50 & 33600 & 2400 & 14363 & 259956 & 33600 \\ toroidal2g20\_5555 & 400 & 400 & 0 & 400 & 400 \\ toroidal3g7\_6666 & 343 & 343 & 0 & 343 & 343 \\ transswitch0009r & 69 & 9 & 103 & 346 & 255 \\ tricp & 169 & 0 & 190 & 1493 & 1140 \\ tspn08 & 44 & 28 & 18 & 136 & 60 \\ tspn15 & 135 & 105 & 34 & 502 & 165 \\ unitcommit1 & 960 & 720 & 5329 & 12404 & 240 \\ unitcommit2 & 960 & 720 & 5329 & 12404 & 480 \\ wager & 156 & 84 & 142 & 532 & 240 \\ waste & 2484 & 400 & 1991 & 9242 & 2736 \\ wastepaper3 & 52 & 27 & 30 & 177 & 108 \\ wastepaper4 & 76 & 44 & 38 & 274 & 176 \\ wastepaper6 & 136 & 90 & 54 & 528 & 360 \\ water4 & 195 & 126 & 137 & 756 & 46 \\ waternd1 & 74 & 20 & 83 & 301 & 114 \\ waterno2\_02 & 332 & 18 & 410 & 1088 & 202 \\ waterno2\_03 & 498 & 27 & 616 & 1635 & 303 \\ waterund01 & 40 & 0 & 38 & 152 & 78 \\ \bottomrule \end{longtable} } \section{Detailed Computational Results} The following tables show the outcome from running each solver on instances from the test set. If an instance has been solved to optimality, the time spend is reported. Note that due to differences in formulas for the relative gap in the various solvers, an instance may be accounted as solved even though the solver stopped at the time limit. 
If a run has been flagged as failed, the reason for this decision is given: ``abort'' if the solver did not return with a result, ``nonopt'' if the reported upper or lower bound was not consistent with the bounds given by MINLPLib, and ``infeas'' if the reported solution is not feasible with respect to the feasibility tolerance. Otherwise, the relative gap at termination is reported, which is $\infty$ if no feasible solution or lower bound has been computed. An exception here is BARON, where an instance is considered solved if the solver merely decided not to return a lower bound due to singularities in functions (see Section~\ref{sec:missingbounds}). This is the case for instances \texttt{mhw4d} and \texttt{multiplants\_mtg2} and their permutations. For each instance, a time or gap that is at most 10\% worse than that of the best solver on this instance is printed in bold font. \subsection{Serial Mode} \label{sec:detailed_singlethread} The following table shows the outcome from running each solver on the test set of 200 instances and their permutations in serial mode.
{ \scriptsize \begin{longtable}{l@{}c|rrrr} \toprule instance & perm & BARON & Lindo API & Octeract & SCIP \\ \midrule \endhead alan & - & 0.33 & 0.89\% & \textbf{0.05} & 0.19\\ & 1 & 0.30 & 0.89\% & \textbf{0.05} & 0.14\\ & 2 & 0.19 & 0.89\% & 0.13 & \textbf{0.11}\\ & 3 & 0.19 & 0.89\% & \textbf{0.05} & 0.11\\ & 4 & 0.18 & 0.89\% & \textbf{0.05} & 0.13\\ autocorr\_bern20-05 & - & 7.29 & 100.43 & \textbf{4.36} & 16.34\\ & 1 & 5.18 & 90.50 & \textbf{3.73} & 13.26\\ & 2 & 6.19 & 86.68 & \textbf{4.58} & 23.15\\ & 3 & 6.01 & 70.76 & \textbf{3.30} & 15.95\\ & 4 & 5.35 & 63.71 & \textbf{4.33} & 19.75\\ autocorr\_bern35-04 & - & \textbf{29.91} & 13.5\% & 12.7\% & 107.46\\ & 1 & \textbf{45.39} & infeas & 3524.97 & 93.55\\ & 2 & \textbf{55.21} & infeas & 2425.34 & 104.74\\ & 3 & \textbf{79.60} & infeas & 6572.33 & 88.83\\ & 4 & \textbf{39.39} & infeas & 4446.69 & 77.29\\ ball\_mk2\_10 & - & 0.07 & 3.98 & \textbf{0.01} & 0.05\\ & 1 & 0.07 & 3.96 & \textbf{0.01} & 0.10\\ & 2 & 0.07 & 3.98 & \textbf{0.01} & 0.01\\ & 3 & 0.07 & 3.66 & \textbf{0.01} & 0.01\\ & 4 & 0.07 & 3.66 & \textbf{0.01} & 0.01\\ ball\_mk2\_30 & - & 0.25 & 100.0\% & \textbf{0.02} & 0.06\\ & 1 & 0.08 & 100.0\% & \textbf{0.02} & 0.08\\ & 2 & 0.09 & 100.0\% & \textbf{0.02} & 0.03\\ & 3 & 0.08 & 100.0\% & \textbf{0.02} & 0.03\\ & 4 & 0.13 & 100.0\% & 0.04 & \textbf{0.03}\\ ball\_mk3\_10 & - & 10.58 & 3.58 & 0.04 & \textbf{0.00}\\ & 1 & 10.30 & 3.85 & 0.04 & \textbf{0.00}\\ & 2 & 10.47 & 3.88 & 0.04 & \textbf{0.00}\\ & 3 & 10.15 & 3.68 & 0.04 & \textbf{0.00}\\ & 4 & 10.20 & 3.56 & 0.04 & \textbf{0.00}\\ batch0812\_nc & - & 6.57 & 25.46 & 8.37 & \textbf{1.45}\\ & 1 & 8.95 & 22.13 & 9.06 & \textbf{1.18}\\ & 2 & 5.85 & 20.34 & 8.84 & \textbf{1.01}\\ & 3 & 6.41 & 20.35 & 8.65 & \textbf{1.45}\\ & 4 & 70.42 & 23.51 & 9.21 & \textbf{1.05}\\ batchs101006m & - & 6.91 & 134.04 & \textbf{4.84} & 7.63\\ & 1 & 18.23 & 59.63 & \textbf{5.28} & 6.55\\ & 2 & 10.39 & 51.85 & \textbf{5.20} & 6.44\\ & 3 & 10.03 & 81.83 & 
\textbf{5.13} & 6.31\\ & 4 & 8.77 & 57.22 & \textbf{4.80} & 6.19\\ batchs121208m & - & 47.33 & 198.76 & 79.1\% & \textbf{6.99}\\ & 1 & 27.10 & 221.28 & abort & \textbf{14.64}\\ & 2 & 30.58 & 180.28 & abort & \textbf{15.32}\\ & 3 & 33.04 & 184.44 & abort & \textbf{14.49}\\ & 4 & 26.62 & 152.58 & 60.6\% & \textbf{12.04}\\ bayes2\_20 & - & \textbf{67.77} & 0.033\% & 0.033\% & 0.033\%\\ & 1 & \textbf{80.24} & 0.033\% & 0.033\% & 0.033\%\\ & 2 & \textbf{69.41} & 0.033\% & 0.033\% & 0.033\%\\ & 3 & \textbf{58.54} & 0.033\% & 6051.51 & 0.033\%\\ & 4 & \textbf{372.69} & 0.033\% & 0.033\% & 0.033\%\\ bayes2\_30 & - & \textbf{21.02} & 7200.00 & 7200.00 & 7200.00\\ & 1 & \textbf{57.89} & 7200.00 & 7200.00 & 7200.00\\ & 2 & \textbf{30.04} & 7200.00 & 7200.00 & 7200.00\\ & 3 & \textbf{20.17} & 7200.00 & 7200.00 & 25.1\%\\ & 4 & \textbf{33.53} & 7200.00 & 7200.00 & 7200.00\\ blend029 & - & \textbf{1.03} & abort & 29.33 & 3.67\\ & 1 & \textbf{1.07} & abort & 30.37 & 2.79\\ & 2 & \textbf{1.28} & abort & 36.35 & 3.29\\ & 3 & \textbf{0.72} & 2512.99 & 27.62 & 3.86\\ & 4 & \textbf{1.15} & abort & 23.74 & 2.66\\ blend146 & - & 3648.64 & abort & 8.8\% & \textbf{471.12}\\ & 1 & 4300.65 & abort & 3.6\% & \textbf{860.35}\\ & 2 & 2597.99 & 4.9\% & 8.4\% & \textbf{770.63}\\ & 3 & 2642.15 & 5.5\% & 8.8\% & \textbf{380.89}\\ & 4 & 3212.94 & abort & 7.9\% & \textbf{534.07}\\ camshape100 & - & 9.2\% & 9.2\% & \textbf{19.40} & 5.4\%\\ & 1 & 9.9\% & 9.2\% & \textbf{110.89} & 5.2\%\\ & 2 & 9.5\% & 9.3\% & \textbf{97.31} & 5.3\%\\ & 3 & 11.1\% & 9.5\% & \textbf{111.83} & 5.2\%\\ & 4 & 9.9\% & 9.6\% & \textbf{113.55} & 5.6\%\\ cardqp\_inlp & - & \textbf{12.40} & 134.08 & 792.06 & 2751.70\\ & 1 & \textbf{12.39} & 133.88 & 878.19 & 3093.49\\ & 2 & \textbf{12.35} & 137.71 & 747.37 & 3131.70\\ & 3 & \textbf{12.34} & 134.87 & 765.74 & 3163.05\\ & 4 & \textbf{12.21} & 136.55 & 797.50 & 3091.96\\ cardqp\_iqp & - & \textbf{12.92} & 133.22 & 785.02 & 2769.87\\ & 1 & \textbf{12.23} & 133.62 & 878.44 & 
3105.68\\ & 2 & \textbf{12.38} & 138.12 & 744.48 & 3145.64\\ & 3 & \textbf{12.26} & 135.01 & 765.38 & 3174.45\\ & 4 & \textbf{12.19} & 136.78 & 807.90 & 3101.12\\ carton7 & - & 987.64 & 110.53 & \textbf{6.03} & 13.54\\ & 1 & 1174.14 & 66.95 & \textbf{5.57} & 121.71\\ & 2 & 3846.89 & 84.70 & \textbf{9.25} & 13.62\\ & 3 & 516.36 & 173.14 & \textbf{12.03} & 14.55\\ & 4 & 2021.71 & 75.91 & \textbf{8.99} & 12.96\\ carton9 & - & 45.0\% & 287.45 & 344.94 & \textbf{36.30}\\ & 1 & 37.9\% & 197.12 & \textbf{9.94} & 31.5\%\\ & 2 & 38.6\% & 375.81 & \textbf{9.28} & 78.81\\ & 3 & 43.8\% & 684.50 & \textbf{19.96} & 67.78\\ & 4 & 31.2\% & 296.68 & \textbf{27.07} & 187.44\\ casctanks & - & \textbf{256.46} & 3.4\% & 11.8\% & 199\%\\ & 1 & \textbf{333.16} & 3.5\% & 10.2\% & 196\%\\ & 2 & \textbf{338.11} & 3.1\% & 10.5\% & 199\%\\ & 3 & \textbf{357.11} & 3.5\% & 10.1\% & 198\%\\ & 4 & \textbf{447.59} & 3.6\% & 11.3\% & 196\%\\ cecil\_13 & - & 280.62 & 0.32\% & \textbf{188.40} & 605.03\\ & 1 & 227.21 & 0.6\% & \textbf{157.90} & 626.61\\ & 2 & \textbf{207.11} & 0.32\% & 242.70 & 571.07\\ & 3 & 230.85 & 0.32\% & \textbf{94.69} & 648.09\\ & 4 & 181.85 & 0.32\% & \textbf{113.29} & 614.13\\ celar6-sub0 & - & 100.0\% & 100\% & $\infty$ & \textbf{1503.66}\\ & 1 & 100.0\% & 100\% & \textbf{3042.60} & 92.3\%\\ & 2 & 100.0\% & 100\% & \textbf{4763.43} & 6435.62\\ & 3 & 100.0\% & 100\% & \textbf{6198.12} & 93.7\%\\ & 4 & 100.0\% & 100\% & \textbf{2291.77} & 79.2\%\\ chakra & - & 0.14 & 1.28 & 7200.00 & \textbf{0.04}\\ & 1 & 0.31 & 1.27 & 7200.00 & \textbf{0.05}\\ & 2 & 0.15 & 1.28 & 7200.00 & \textbf{0.04}\\ & 3 & 0.15 & 1.01 & 7200.00 & \textbf{0.11}\\ & 4 & 0.30 & 1.27 & 7200.00 & \textbf{0.05}\\ chem & - & \textbf{0.04} & 1172.22 & 0.18 & 932.15\\ & 1 & \textbf{0.04} & 1159.31 & 0.19 & 914.43\\ & 2 & \textbf{0.04} & 1168.01 & 0.18 & 844.62\\ & 3 & \textbf{0.04} & 1169.18 & 0.18 & 887.05\\ & 4 & \textbf{0.04} & 1168.36 & 0.18 & 847.04\\ chenery & - & \textbf{1.40} & 0.43\% & 200.71 & 2.10\\ & 
1 & \textbf{1.11} & 0.43\% & 239.04 & 4.92\\ & 2 & 1.39 & 0.78\% & 259.88 & \textbf{1.26}\\ & 3 & \textbf{1.15} & 0.9\% & 262.94 & 7.66\\ & 4 & 1.77 & 0.43\% & 261.43 & \textbf{1.50}\\ chimera\_k64maxcut-01 & - & 560.62 & 16.4\% & \textbf{168.60} & 450.42\\ & 1 & 499.78 & 17.8\% & \textbf{142.22} & 435.54\\ & 2 & 1969.20 & 19.0\% & \textbf{112.38} & 552.15\\ & 3 & 605.37 & 18.7\% & \textbf{106.61} & 595.36\\ & 4 & 738.03 & 14.9\% & \textbf{71.59} & 431.57\\ chimera\_mis-01 & - & 13.69 & nonopt & \textbf{1.14} & 6.76\\ & 1 & 16.00 & nonopt & \textbf{1.58} & 7.06\\ & 2 & 14.94 & nonopt & \textbf{1.46} & 7.71\\ & 3 & 14.89 & nonopt & \textbf{1.57} & 7.48\\ & 4 & 15.41 & nonopt & \textbf{1.30} & 7.54\\ chp\_shorttermplan1a & - & 2296.84 & $\infty$ & 64.74 & \textbf{17.55}\\ & 1 & 2708.40 & $\infty$ & 27.60 & \textbf{20.99}\\ & 2 & 4998.33 & $\infty$ & 61.12 & \textbf{15.40}\\ & 3 & 0.092\% & $\infty$ & 27.43 & \textbf{19.76}\\ & 4 & 2693.91 & $\infty$ & 62.06 & \textbf{15.63}\\ chp\_shorttermplan2a & - & 937.38 & $\infty$ & \textbf{17.59} & 27.16\\ & 1 & 2.6\% & $\infty$ & 19.04 & \textbf{16.19}\\ & 2 & 1181.77 & $\infty$ & \textbf{19.44} & 27.58\\ & 3 & 5678.85 & $\infty$ & \textbf{19.96} & 26.63\\ & 4 & 5971.28 & $\infty$ & 19.61 & \textbf{16.91}\\ chp\_shorttermplan2b & - & 0.34\% & 16.6\% & 0.67\% & \textbf{7200.00}\\ & 1 & nonopt & 0.27\% & 0.7\% & \textbf{0.08\%}\\ & 2 & 0.3\% & 0.32\% & 0.67\% & \textbf{0.11\%}\\ & 3 & 0.31\% & 48.9\% & 0.67\% & \textbf{0.08\%}\\ & 4 & 0.3\% & 0.18\% & 0.67\% & \textbf{0.067\%}\\ clay0204m & - & \textbf{0.47} & 4.75 & 1.47 & 1.67\\ & 1 & \textbf{0.67} & 6.32 & 1.24 & 1.63\\ & 2 & \textbf{0.31} & 5.77 & 1.33 & 1.59\\ & 3 & \textbf{0.45} & 5.32 & 1.55 & 1.96\\ & 4 & \textbf{0.47} & 4.80 & 1.32 & 1.67\\ clay0205m & - & \textbf{4.92} & 51.80 & 15.10 & 8.40\\ & 1 & \textbf{4.22} & 61.13 & 9.78 & 8.07\\ & 2 & \textbf{4.75} & 63.96 & 8.70 & 9.07\\ & 3 & \textbf{6.57} & 77.14 & 7.83 & 9.11\\ & 4 & \textbf{5.30} & 62.76 & 11.03 & 6.53\\ 
color\_lab3\_3x0 & - & \textbf{2654.00} & 65.6\% & $\infty$ & 15.3\%\\ & 1 & \textbf{5176.64} & 65.6\% & $\infty$ & 13.7\%\\ & 2 & \textbf{5761.65} & 65.7\% & $\infty$ & 15.1\%\\ & 3 & \textbf{5225.72} & 65.5\% & $\infty$ & 13.0\%\\ & 4 & \textbf{0.51\%} & 65.6\% & $\infty$ & 11.5\%\\ crossdock\_15x7 & - & \textbf{385.56} & 124\% & $\infty$ & 32.0\%\\ & 1 & \textbf{418.01} & 123\% & $\infty$ & 28.1\%\\ & 2 & \textbf{233.85} & 123\% & $\infty$ & 39.1\%\\ & 3 & \textbf{180.42} & 124\% & $\infty$ & 31.1\%\\ & 4 & \textbf{183.64} & 123\% & $\infty$ & 33.5\%\\ crossdock\_15x8 & - & \textbf{788.81} & 124\% & 6306.65 & 42.8\%\\ & 1 & \textbf{2830.45} & 121\% & $\infty$ & 59.2\%\\ & 2 & \textbf{681.63} & 124\% & $\infty$ & 52.7\%\\ & 3 & \textbf{1084.33} & 121\% & $\infty$ & 57.8\%\\ & 4 & \textbf{1414.43} & 121\% & $\infty$ & 58.7\%\\ crudeoil\_lee1\_07 & - & 7.50 & 63.26 & 8.56 & \textbf{4.72}\\ & 1 & 12.30 & 272.58 & 10.36 & \textbf{6.96}\\ & 2 & 12.42 & nonopt & 10.48 & \textbf{7.01}\\ & 3 & 13.79 & 111.62 & \textbf{4.11} & 7.31\\ & 4 & 7.69 & 103.43 & 4.54 & \textbf{3.09}\\ crudeoil\_pooling\_ct2 & - & \textbf{4.95} & 1.8\% & nonopt & 18.94\\ & 1 & \textbf{22.28} & 3.5\% & nonopt & 26.23\\ & 2 & \textbf{3.24} & 2.9\% & 5.85 & 17.42\\ & 3 & \textbf{5.98} & 3\% & nonopt & 17.02\\ & 4 & 28.56 & 2.4\% & \textbf{3.84} & 30.75\\ csched1 & - & 13.03 & 66.81 & 8.1\% & \textbf{1.59}\\ & 1 & 10.52 & 91.91 & 8.1\% & \textbf{2.44}\\ & 2 & 12.10 & 57.13 & 8.7\% & \textbf{3.25}\\ & 3 & 11.92 & 60.21 & 8.2\% & \textbf{2.29}\\ & 4 & 14.74 & 77.16 & 8.1\% & \textbf{3.06}\\ csched1a & - & \textbf{0.70} & 4.85 & 17.77 & 6.53\\ & 1 & \textbf{0.98} & 5.76 & 17.34 & 5.08\\ & 2 & \textbf{0.50} & 4.48 & 2.49 & 6.85\\ & 3 & \textbf{0.89} & 4.33 & 18.52 & 5.70\\ & 4 & \textbf{0.90} & 4.69 & 19.54 & 5.65\\ cvxnonsep\_psig20 & - & 0.99 & \textbf{0.48} & 35.0\% & 12.04\\ & 1 & 1.05 & \textbf{0.24} & 35.1\% & 14.75\\ & 2 & 0.96 & \textbf{0.25} & 34.9\% & 15.16\\ & 3 & 0.92 & \textbf{0.50} & 35.1\% 
& 16.56\\ & 4 & 1.04 & \textbf{0.41} & 34.9\% & 10.64\\ cvxnonsep\_psig30 & - & 4.25 & \textbf{3.26} & 45.6\% & 65.40\\ & 1 & 4.29 & \textbf{3.06} & 45.7\% & 367.26\\ & 2 & 4.25 & \textbf{2.98} & 45.5\% & 239.09\\ & 3 & 4.17 & \textbf{3.22} & 45.8\% & 127.14\\ & 4 & 4.31 & \textbf{3.28} & 45.5\% & 88.59\\ du-opt & - & \textbf{2.37} & 5.35 & 122.38 & \textbf{2.38}\\ & 1 & \textbf{2.37} & 4.83 & 60.86 & \textbf{2.16}\\ & 2 & \textbf{1.86} & 5.11 & 52.34 & 2.68\\ & 3 & \textbf{2.05} & 4.97 & 44.64 & 2.40\\ & 4 & \textbf{2.25} & 4.79 & 44.04 & 4.24\\ du-opt5 & - & 3.61 & 4.70 & 101.32 & \textbf{1.69}\\ & 1 & 4.48 & 4.48 & 130.24 & \textbf{1.50}\\ & 2 & 4.86 & 4.64 & 94.97 & \textbf{1.56}\\ & 3 & 3.97 & 4.88 & 103.22 & \textbf{1.68}\\ & 4 & 5.33 & 4.77 & 132.03 & \textbf{1.31}\\ edgecross10-040 & - & 7.25 & \textbf{0.29} & 4.16 & 8.36\\ & 1 & 5.42 & nonopt & \textbf{4.57} & 6.42\\ & 2 & 5.71 & nonopt & 4.84 & \textbf{3.65}\\ & 3 & 6.30 & nonopt & \textbf{5.29} & \textbf{5.03}\\ & 4 & 6.34 & nonopt & \textbf{4.82} & 7.68\\ edgecross10-080 & - & \textbf{51.58} & nonopt & 6\% & 74.51\\ & 1 & \textbf{42.04} & nonopt & 7.2\% & 58.81\\ & 2 & \textbf{41.55} & nonopt & 6.2\% & 77.79\\ & 3 & 50.34 & nonopt & 6.8\% & \textbf{43.25}\\ & 4 & 51.39 & nonopt & 7.1\% & \textbf{43.84}\\ eg\_all\_s & - & infeas & abort & 102\% & \textbf{3095.77}\\ & 1 & 90.7\% & abort & 109\% & \textbf{4758.26}\\ & 2 & 90.9\% & abort & 89.9\% & \textbf{6829.05}\\ & 3 & 85.5\% & abort & 98.1\% & \textbf{5101.84}\\ & 4 & 89.7\% & abort & 119\% & \textbf{1520.48}\\ eigena2 & - & 1146.69 & $\infty$ & \textbf{416.89} & $\infty$\\ & 1 & 1131.74 & $\infty$ & \textbf{101.71} & $\infty$\\ & 2 & 1147.70 & $\infty$ & \textbf{117.89} & $\infty$\\ & 3 & 1028.72 & $\infty$ & \textbf{288.60} & $\infty$\\ & 4 & 1134.27 & $\infty$ & \textbf{269.29} & $\infty$\\ elec50 & - & 66.5\% & \textbf{798.72} & 66.4\% & 45.1\%\\ & 1 & 66.5\% & \textbf{919.20} & 66.5\% & 44.9\%\\ & 2 & 66.5\% & \textbf{1073.13} & 66.4\% & 44.6\%\\ 
& 3 & 66.5\% & \textbf{1191.25} & 66.5\% & 44.8\%\\ & 4 & 66.5\% & \textbf{930.48} & 66.5\% & 44.7\%\\ elf & - & 5.91 & nonopt & 2.54 & \textbf{1.01}\\ & 1 & 6.24 & 2.09 & infeas & \textbf{1.67}\\ & 2 & 4.51 & 3.47 & infeas & \textbf{2.42}\\ & 3 & 4.39 & infeas & 1.76 & \textbf{1.28}\\ & 4 & 4.05 & infeas & infeas & \textbf{1.74}\\ eniplac & - & 4.54 & 3\% & \textbf{1.54} & 2.19\\ & 1 & 4.01 & 2.7\% & \textbf{2.01} & 2.64\\ & 2 & 5.11 & 3.6\% & \textbf{1.67} & 2.61\\ & 3 & 6.47 & 3.9\% & \textbf{1.25} & 2.58\\ & 4 & 3.63 & 3.7\% & \textbf{1.64} & \textbf{1.76}\\ enpro56pb & - & \textbf{1.84} & 1454.20 & \textbf{1.98} & 3.40\\ & 1 & \textbf{2.00} & 2040.82 & \textbf{2.11} & 2.27\\ & 2 & \textbf{2.21} & infeas & \textbf{2.08} & 3.78\\ & 3 & 4.95 & 2838.41 & \textbf{2.08} & 4.34\\ & 4 & 5.63 & 1628.60 & \textbf{2.07} & 2.44\\ ex1244 & - & \textbf{4.86} & 9.69 & 81.99 & 7.67\\ & 1 & \textbf{1.34} & 11.66 & 2.74 & 64.25\\ & 2 & \textbf{2.62} & 12.32 & nonopt & 26.26\\ & 3 & \textbf{2.06} & 11.75 & 3.98 & \textbf{2.07}\\ & 4 & \textbf{2.34} & 13.07 & nonopt & 15.20\\ ex1252a & - & \textbf{9.12} & infeas & 245.41 & 7200.00\\ & 1 & \textbf{16.38} & 51.58 & 334.58 & 0.031\%\\ & 2 & \textbf{2.94} & 36.26 & 260.68 & 0.06\%\\ & 3 & 1812.77 & infeas & \textbf{313.87} & 358.50\\ & 4 & \textbf{5.17} & 55.21 & 292.16 & 0.2\%\\ faclay20h & - & \textbf{349.36} & nonopt & 785.28 & 741.15\\ & 1 & \textbf{251.18} & nonopt & 721.19 & 720.40\\ & 2 & \textbf{524.94} & nonopt & 713.21 & 934.75\\ & 3 & \textbf{238.74} & nonopt & 693.80 & 780.05\\ & 4 & \textbf{478.08} & nonopt & 1071.27 & 834.45\\ faclay80 & - & 120\% & \textbf{5096.67} & $\infty$ & 160\%\\ & 1 & \textbf{120\%} & abort & $\infty$ & 160\%\\ & 2 & \textbf{120\%} & abort & $\infty$ & 159\%\\ & 3 & 120\% & abort & $\infty$ & \textbf{93.9\%}\\ & 4 & $\infty$ & abort & $\infty$ & \textbf{160\%}\\ feedtray & - & 80.5\% & 13.79 & 82.1\% & \textbf{1.70}\\ & 1 & 80.5\% & \textbf{22.18} & 80.5\% & 80.5\%\\ & 2 & 80.5\% & 
\textbf{74.65} & 83.2\% & 80.5\%\\ & 3 & 80.5\% & \textbf{19.32} & 80.5\% & nonopt\\ & 4 & 80.5\% & \textbf{19.37} & 80.5\% & 80.5\%\\ fin2bb & - & 114.16 & 1294.03 & 100.0\% & \textbf{15.92}\\ & 1 & 69.25 & 1827.21 & 100.0\% & \textbf{7.78}\\ & 2 & 55.05 & 1645.40 & 100.0\% & \textbf{9.54}\\ & 3 & 108.22 & 512.39 & 100.0\% & \textbf{7.80}\\ & 4 & \textbf{64.77} & 2036.16 & $\infty$ & 138.92\\ flay04m & - & 10.56 & 647.41 & \textbf{0.87} & 3.93\\ & 1 & 10.74 & 503.62 & \textbf{0.85} & 4.29\\ & 2 & 10.35 & 583.37 & \textbf{0.91} & 4.17\\ & 3 & 10.15 & 494.25 & \textbf{0.95} & 4.35\\ & 4 & 11.30 & 552.23 & \textbf{0.78} & 3.83\\ flay05m & - & 394.85 & 0.54\% & infeas & \textbf{120.82}\\ & 1 & 404.17 & 0.65\% & infeas & \textbf{123.07}\\ & 2 & 396.72 & 0.58\% & infeas & \textbf{116.15}\\ & 3 & 411.54 & 0.58\% & infeas & \textbf{117.29}\\ & 4 & 439.41 & 0.56\% & infeas & \textbf{123.48}\\ flay06m & - & 7\% & 14.3\% & infeas & \textbf{4931.28}\\ & 1 & 6.6\% & 14.0\% & infeas & \textbf{4727.15}\\ & 2 & 6.1\% & 13.9\% & infeas & \textbf{5259.68}\\ & 3 & 7.2\% & 14.5\% & infeas & \textbf{4811.50}\\ & 4 & 6\% & 13.7\% & \textbf{3154.62} & 5047.64\\ fo7\_ar25\_1 & - & 17.00 & 1282.08 & \textbf{8.95} & 39.74\\ & 1 & 9.09 & 1719.51 & \textbf{7.78} & 41.70\\ & 2 & 13.13 & 1424.43 & \textbf{9.90} & 53.04\\ & 3 & 13.99 & 1979.11 & \textbf{6.24} & 36.41\\ & 4 & \textbf{10.36} & 1738.14 & \textbf{9.99} & 45.67\\ fo7\_ar3\_1 & - & 25.69 & 1847.41 & \textbf{18.33} & 88.79\\ & 1 & 51.20 & 2351.00 & \textbf{7.57} & 63.32\\ & 2 & 55.38 & 940.00 & \textbf{10.98} & 59.84\\ & 3 & 46.00 & 1796.83 & \textbf{18.54} & 65.72\\ & 4 & 41.03 & 2416.13 & \textbf{12.23} & 59.26\\ forest & - & 747.74 & \textbf{1.18} & nonopt & 538.83\\ & 1 & \textbf{1244.39} & $\infty$ & nonopt & nonopt\\ & 2 & \textbf{27.0\%} & infeas & nonopt & nonopt\\ & 3 & \textbf{0.36\%} & nonopt & nonopt & nonopt\\ & 4 & \textbf{654.45} & nonopt & nonopt & nonopt\\ gabriel01 & - & 1760.52 & 5.4\% & 2\% & \textbf{351.17}\\ & 1 
& 1596.81 & abort & 3.5\% & \textbf{331.53}\\ & 2 & 1676.43 & 8.6\% & 4.9\% & \textbf{292.51}\\ & 3 & 1584.57 & 8.7\% & 3.7\% & \textbf{245.91}\\ & 4 & 2622.95 & 10.5\% & 2\% & \textbf{286.68}\\ gabriel02 & - & 11.1\% & 44.0\% & 7078.04 & \textbf{1435.31}\\ & 1 & 16.6\% & $\infty$ & 5618.78 & \textbf{3606.13}\\ & 2 & 17.4\% & $\infty$ & 6015.73 & \textbf{2175.98}\\ & 3 & 19.2\% & 57.3\% & 5439.17 & \textbf{973.13}\\ & 4 & 16.8\% & $\infty$ & 5941.14 & \textbf{2639.64}\\ gasnet & - & \textbf{22.38} & 64.9\% & 96.7\% & 42.5\%\\ & 1 & nonopt & 64.9\% & 96.5\% & \textbf{41.0\%}\\ & 2 & nonopt & 64.9\% & 96.3\% & \textbf{41.7\%}\\ & 3 & 56.9\% & 65.1\% & 96.8\% & \textbf{41.1\%}\\ & 4 & nonopt & 65.0\% & 96.5\% & \textbf{41.9\%}\\ gasprod\_sarawak16 & - & \textbf{4578.18} & infeas & 0.74\% & 0.39\%\\ & 1 & \textbf{0.18\%} & 1.5\% & 1.1\% & 0.92\%\\ & 2 & \textbf{0.18\%} & 0.39\% & 0.71\% & 0.94\%\\ & 3 & \textbf{0.29\%} & 0.43\% & 0.88\% & 0.8\%\\ & 4 & \textbf{1258.01} & infeas & 0.75\% & 0.92\%\\ gastrans582\_cold13\_95 & - & 1442.44 & $\infty$ & $\infty$ & \textbf{55.01}\\ & 1 & 828.50 & $\infty$ & $\infty$ & \textbf{60.20}\\ & 2 & 2066.06 & $\infty$ & $\infty$ & \textbf{36.26}\\ & 3 & 1801.06 & $\infty$ & $\infty$ & \textbf{60.01}\\ & 4 & \textbf{652.52} & $\infty$ & $\infty$ & $\infty$\\ gastrans582\_mild11 & - & 687.72 & $\infty$ & $\infty$ & \textbf{7.41}\\ & 1 & 1845.78 & $\infty$ & $\infty$ & \textbf{13.16}\\ & 2 & 1635.11 & $\infty$ & $\infty$ & \textbf{20.77}\\ & 3 & 2784.89 & $\infty$ & $\infty$ & \textbf{10.81}\\ & 4 & 2150.14 & $\infty$ & $\infty$ & \textbf{10.53}\\ gear & - & 0.12 & \textbf{0.05} & 0.11 & 12.55\\ & 1 & 0.12 & \textbf{0.04} & 0.07 & 13.78\\ & 2 & 0.12 & \textbf{0.03} & 0.07 & 13.68\\ & 3 & 0.08 & \textbf{0.03} & 0.07 & 16.84\\ & 4 & 0.11 & \textbf{0.05} & 0.07 & 12.58\\ gear2 & - & 0.64 & 0.23 & \textbf{0.14} & 18.25\\ & 1 & 0.28 & \textbf{0.23} & \textbf{0.21} & 16.14\\ & 2 & 0.50 & 0.20 & \textbf{0.09} & 17.22\\ & 3 & 0.43 & 0.62 & 
\textbf{0.11} & 17.01\\ & 4 & 0.31 & 0.28 & \textbf{0.12} & 15.55\\ gear4 & - & \textbf{1.63} & infeas & 17.56 & 4.05\\ & 1 & 1.20 & infeas & 18.07 & \textbf{0.82}\\ & 2 & 1.61 & infeas & 18.56 & \textbf{0.75}\\ & 3 & \textbf{0.88} & infeas & 17.84 & \textbf{0.80}\\ & 4 & 1.46 & infeas & 18.12 & \textbf{0.56}\\ genpooling\_lee1 & - & 9.37 & 0.41\% & 117.04 & \textbf{2.09}\\ & 1 & 10.02 & 0.94\% & 192.20 & \textbf{2.44}\\ & 2 & 4.20 & 1.1\% & 39.59 & \textbf{2.40}\\ & 3 & 5.08 & 1.1\% & 73.26 & \textbf{3.12}\\ & 4 & 8.35 & 0.29\% & 52.15 & \textbf{2.43}\\ genpooling\_lee2 & - & 85.93 & 3724.02 & 186.63 & \textbf{6.88}\\ & 1 & 25.74 & 4288.32 & 254.82 & \textbf{6.48}\\ & 2 & 76.31 & 1402.41 & 236.81 & \textbf{5.23}\\ & 3 & 54.53 & 3502.43 & 278.49 & \textbf{5.03}\\ & 4 & 54.56 & 2305.10 & 281.04 & \textbf{6.53}\\ ghg\_1veh & - & \textbf{2.77} & 106.61 & 12.42 & 32.25\\ & 1 & \textbf{4.52} & 105.03 & 12.61 & 31.41\\ & 2 & \textbf{1.84} & 104.86 & 13.26 & 34.17\\ & 3 & \textbf{6.61} & 100.45 & 14.83 & 32.06\\ & 4 & \textbf{4.62} & 98.82 & 12.80 & 33.13\\ gilbert & - & 2.35 & 14.25 & 0.9\% & \textbf{1.13}\\ & 1 & 2.98 & 15.68 & 0.99\% & \textbf{1.16}\\ & 2 & 2.85 & 15.75 & 1\% & \textbf{1.03}\\ & 3 & 3.49 & 15.54 & 0.99\% & \textbf{1.29}\\ & 4 & 3.00 & 16.13 & 0.95\% & \textbf{1.24}\\ graphpart\_2g-0066-0066 & - & \textbf{0.65} & 8.7\% & 0.75 & 1.39\\ & 1 & 0.93 & 8.7\% & \textbf{0.69} & 0.78\\ & 2 & 1.08 & 8\% & \textbf{0.42} & 1.73\\ & 3 & 0.62 & 8.2\% & \textbf{0.44} & 2.06\\ & 4 & 0.83 & 8.7\% & \textbf{0.45} & 1.12\\ graphpart\_clique-60 & - & 34.6\% & 83.3\% & \textbf{2890.51} & 3962.88\\ & 1 & 32.0\% & 83.1\% & \textbf{7034.14} & 58.2\%\\ & 2 & 39.7\% & 80.3\% & \textbf{3824.11} & 48.7\%\\ & 3 & 36.1\% & 83.3\% & \textbf{5323.55} & 55.6\%\\ & 4 & 36.8\% & 80.3\% & \textbf{1751.71} & 54.5\%\\ gsg\_0001 & - & 13.68 & 351.30 & \textbf{8.58} & 32.26\\ & 1 & \textbf{11.72} & 412.60 & 75.14 & 30.21\\ & 2 & \textbf{10.34} & 372.91 & 95.72 & 28.77\\ & 3 & \textbf{12.57} 
& 567.84 & 62.47 & 27.70\\ & 4 & \textbf{13.55} & 379.44 & 115.60 & 28.15\\ hadamard\_5 & - & 54.92 & 129.27 & \textbf{20.45} & \textbf{21.56}\\ & 1 & 65.13 & 117.17 & 44.80 & \textbf{27.78}\\ & 2 & \textbf{15.25} & 121.23 & 34.82 & 66.81\\ & 3 & \textbf{17.17} & 121.44 & 30.28 & 23.27\\ & 4 & \textbf{10.01} & 141.44 & 33.75 & 22.94\\ heatexch\_spec1 & - & \textbf{1.39} & 0.34\% & 17.7\% & 7.1\%\\ & 1 & \textbf{46.44} & 0.29\% & 15.0\% & 214.24\\ & 2 & \textbf{29.74} & 0.26\% & 15.8\% & infeas\\ & 3 & \textbf{2001.04} & 0.46\% & 14.1\% & 7.3\%\\ & 4 & 0.53\% & \textbf{0.3\%} & 15.4\% & infeas\\ heatexch\_spec2 & - & \textbf{6.60} & 0.041\% & 5\% & infeas\\ & 1 & 8.17 & 0.055\% & 5.2\% & \textbf{7.08}\\ & 2 & \textbf{5.50} & 0.048\% & 5\% & 15.69\\ & 3 & \textbf{7.98} & 0.043\% & 5.1\% & 9.42\\ & 4 & 9.97 & 0.053\% & 5.2\% & \textbf{8.75}\\ hhfair & - & \textbf{0.43} & 100.0\% & $\infty$ & 100.0\%\\ & 1 & \textbf{0.20} & 100.0\% & $\infty$ & 100.0\%\\ & 2 & \textbf{0.35} & 100.0\% & $\infty$ & $\infty$\\ & 3 & \textbf{0.27} & 100.0\% & $\infty$ & 100.0\%\\ & 4 & \textbf{0.92} & 100.0\% & $\infty$ & 100.0\%\\ himmel16 & - & 41.46 & 20.42 & 12.95 & \textbf{6.13}\\ & 1 & 19.50 & 22.20 & 13.13 & \textbf{7.20}\\ & 2 & 29.07 & 21.96 & 16.00 & \textbf{5.29}\\ & 3 & 23.68 & 23.39 & 13.04 & \textbf{6.44}\\ & 4 & 22.73 & 22.24 & 14.29 & \textbf{6.12}\\ house & - & 0.37 & 0.89 & 108.41 & \textbf{0.33}\\ & 1 & 0.38 & 1.24 & 73.21 & \textbf{0.29}\\ & 2 & 0.37 & infeas & 80.09 & \textbf{0.21}\\ & 3 & \textbf{0.46} & 1.18 & 115.14 & 0.58\\ & 4 & 0.44 & 0.88 & 81.46 & \textbf{0.30}\\ hs62 & - & \textbf{1.07} & 4.04 & 0.023\% & 2.65\\ & 1 & \textbf{0.95} & 4.02 & 0.021\% & 3.23\\ & 2 & \textbf{1.07} & 4.61 & 0.021\% & 3.22\\ & 3 & \textbf{1.02} & 3.79 & 0.023\% & 2.29\\ & 4 & \textbf{0.87} & 3.91 & 0.021\% & 3.00\\ hvb11 & - & 175.10 & 40.7\% & 7.3\% & \textbf{103.26}\\ & 1 & 431.91 & 45.5\% & 7.2\% & \textbf{76.87}\\ & 2 & \textbf{176.62} & 29.0\% & 5.9\% & 868.06\\ & 3 & 616.86 & 
47.9\% & 5.7\% & \textbf{37.39}\\ & 4 & 158.69 & 36.5\% & 6.9\% & \textbf{77.85}\\ hybriddynamic\_var & - & \textbf{0.75} & 4.11 & 0.32\% & 2.21\\ & 1 & \textbf{0.83} & 4.04 & 3775.94 & 2.08\\ & 2 & \textbf{1.05} & 4.04 & 2838.97 & 1.87\\ & 3 & \textbf{0.89} & 4.14 & 349.49 & 2.13\\ & 4 & \textbf{0.95} & 4.08 & 433.50 & 1.66\\ hybriddynamic\_varcc & - & \textbf{0.91} & 12.00 & 158.14 & 2.40\\ & 1 & \textbf{2.08} & 13.02 & 730.83 & 100\%\\ & 2 & \textbf{0.97} & 13.51 & 7200.00 & 100\%\\ & 3 & \textbf{2.38} & 13.18 & 7200.00 & 100\%\\ & 4 & \textbf{1.20} & 12.90 & 7200.00 & 100\%\\ hydroenergy1 & - & \textbf{1026.99} & 0.93\% & 0.65\% & 6295.41\\ & 1 & \textbf{1305.68} & 0.93\% & 0.66\% & 5883.90\\ & 2 & \textbf{3528.61} & 0.92\% & 0.65\% & 5487.34\\ & 3 & \textbf{2844.90} & 0.92\% & 0.66\% & 4605.84\\ & 4 & \textbf{3324.70} & 0.93\% & 0.67\% & 6541.83\\ ibs2 & - & nonopt & abort & 5.1\% & \textbf{20.64}\\ & 1 & nonopt & $\infty$ & 40.8\% & \textbf{12.25}\\ & 2 & nonopt & abort & 17.4\% & \textbf{8.54}\\ & 3 & nonopt & abort & 14.6\% & \textbf{8.39}\\ & 4 & nonopt & abort & 13.3\% & \textbf{8.14}\\ johnall & - & \textbf{2.85} & 22.36 & 44.23 & 30.44\\ & 1 & \textbf{3.54} & 37.05 & 43.07 & 31.43\\ & 2 & \textbf{2.30} & 40.92 & 44.03 & 29.52\\ & 3 & \textbf{2.86} & 40.69 & 43.02 & 31.01\\ & 4 & \textbf{2.59} & 30.65 & 43.35 & 30.52\\ kall\_circles\_c6b & - & 310.13 & \textbf{62.28} & 463.62 & 183.83\\ & 1 & 274.68 & \textbf{55.04} & 309.04 & 179.76\\ & 2 & 335.78 & \textbf{72.49} & 417.47 & 170.66\\ & 3 & 323.33 & \textbf{52.58} & 374.48 & 284.95\\ & 4 & 309.15 & \textbf{58.86} & 459.34 & 195.62\\ kall\_congruentcircles\_c72 & - & 26.45 & \textbf{4.74} & 49.42 & 22.40\\ & 1 & 20.00 & \textbf{10.50} & 41.78 & 29.79\\ & 2 & 24.93 & \textbf{4.45} & 38.33 & 30.96\\ & 3 & 20.36 & \textbf{10.36} & 44.57 & 28.19\\ & 4 & 14.04 & \textbf{5.11} & 40.23 & 26.06\\ kissing2 & - & \textbf{184.78} & 867.71 & 100.0\% & 100.0\%\\ & 1 & \textbf{286.21} & 911.33 & 100.0\% & 100.0\%\\ & 2 
& \textbf{249.09} & 927.99 & 100.0\% & $\infty$\\ & 3 & \textbf{278.40} & 916.33 & 100.0\% & 1623.09\\ & 4 & \textbf{212.31} & 909.74 & 100.0\% & 397.82\\ kport20 & - & 6.1\% & 13.1\% & 3596.06 & \textbf{1001.03}\\ & 1 & \textbf{1387.42} & 10.4\% & 4236.68 & 4384.07\\ & 2 & 3.5\% & 12.0\% & \textbf{2599.71} & \textbf{2587.04}\\ & 3 & 3294.62 & 11.9\% & 2950.96 & \textbf{1982.41}\\ & 4 & 6.3\% & 11.4\% & 1998.33 & \textbf{1456.99}\\ kriging\_peaks-red020 & - & \textbf{10.33} & 21.79 & 85.61 & 117.65\\ & 1 & \textbf{10.52} & 21.32 & 90.28 & 117.03\\ & 2 & \textbf{8.61} & 21.31 & 90.38 & 116.84\\ & 3 & \textbf{10.46} & 21.11 & 90.31 & 117.11\\ & 4 & \textbf{10.84} & 21.33 & 89.54 & 117.06\\ kriging\_peaks-red100 & - & 196.08 & \textbf{140.14} & 629.86 & 7200.00\\ & 1 & \textbf{131.90} & \textbf{142.45} & 616.46 & 7200.00\\ & 2 & \textbf{133.05} & \textbf{140.39} & 616.91 & 7200.00\\ & 3 & \textbf{132.12} & \textbf{138.68} & 612.72 & 7200.00\\ & 4 & \textbf{133.57} & \textbf{142.08} & 613.09 & 7200.00\\ lop97icx & - & 1709.08 & 39.83 & \textbf{5.90} & 25.38\\ & 1 & 1963.46 & 92.39 & \textbf{8.45} & 29.05\\ & 2 & 2294.75 & 74.88 & \textbf{6.74} & 30.77\\ & 3 & 1893.25 & 68.66 & \textbf{8.80} & 45.83\\ & 4 & 1623.45 & 45.81 & \textbf{8.93} & 36.10\\ mathopt5\_7 & - & 0.29 & 12.35 & \textbf{0.08} & 0.17\\ & 1 & \textbf{0.12} & 12.49 & 0.21 & 0.17\\ & 2 & 0.12 & 12.35 & \textbf{0.08} & 0.19\\ & 3 & 0.12 & 12.22 & \textbf{0.08} & 0.33\\ & 4 & 0.22 & 12.27 & \textbf{0.08} & 0.18\\ mathopt5\_8 & - & 0.26 & 9.72 & \textbf{0.06} & 0.14\\ & 1 & 0.26 & 10.66 & \textbf{0.07} & 0.22\\ & 2 & 0.26 & 10.83 & \textbf{0.07} & 0.14\\ & 3 & 0.26 & 10.71 & \textbf{0.07} & 0.26\\ & 4 & 0.25 & 10.79 & \textbf{0.07} & 0.13\\ maxcsp-geo50-20-d4-75-36 & - & 16.94 & 101\% & \textbf{7.71} & 52.80\\ & 1 & 20.06 & 101\% & \textbf{5.53} & 335.96\\ & 2 & 27.23 & 101\% & \textbf{7.01} & 49.99\\ & 3 & \textbf{8.13} & 103\% & 25.70 & 296.74\\ & 4 & \textbf{22.97} & 102\% & 25.64 & 186.01\\ 
meanvar-orl400\_05\_e\_7 & - & 95.4\% & \textbf{23.95} & $\infty$ & 4080.76\\ & 1 & 94.3\% & \textbf{31.77} & 6630.62 & 5981.88\\ & 2 & 94.2\% & \textbf{25.01} & 6570.19 & 0.59\%\\ & 3 & 95.0\% & nonopt & $\infty$ & \textbf{5120.84}\\ & 4 & 94.5\% & nonopt & 6663.61 & \textbf{5806.16}\\ meanvar-orl400\_05\_e\_8 & - & 600.24 & 19.32 & \textbf{6.00} & 2404.35\\ & 1 & 571.98 & $\infty$ & \textbf{6.89} & 2126.66\\ & 2 & 549.46 & nonopt & \textbf{6.24} & 2366.11\\ & 3 & 645.73 & $\infty$ & \textbf{6.61} & 2115.85\\ & 4 & 407.85 & $\infty$ & \textbf{6.32} & 2353.26\\ mhw4d & - & 0.83 & 0.78 & \textbf{0.43} & \textbf{0.44}\\ & 1 & 0.82 & 0.62 & 1.64 & \textbf{0.44}\\ & 2 & 0.69 & 0.51 & 1.26 & \textbf{0.40}\\ & 3 & 0.82 & \textbf{0.78} & 1.61 & \textbf{0.74}\\ & 4 & 0.74 & 0.65 & 1.31 & \textbf{0.37}\\ milinfract & - & \textbf{55.06} & infeas & 75.5\% & 68.8\%\\ & 1 & \textbf{55.42} & infeas & 75.4\% & 75.2\%\\ & 2 & \textbf{55.04} & infeas & 75.5\% & 72.0\%\\ & 3 & \textbf{54.76} & infeas & 76.0\% & 75.3\%\\ & 4 & \textbf{54.45} & infeas & 75.7\% & 70.6\%\\ minlphi & - & $\infty$ & \textbf{1.86} & 100\% & 100.0\%\\ & 1 & $\infty$ & \textbf{1.65} & 100\% & 100.0\%\\ & 2 & $\infty$ & 1.74 & 100\% & \textbf{0.20}\\ & 3 & $\infty$ & 1.65 & 100\% & \textbf{0.07}\\ & 4 & $\infty$ & 1.24 & 100\% & \textbf{0.16}\\ multiplants\_mtg1a & - & \textbf{2245.95} & 15.6\% & 5.1\% & 4320.86\\ & 1 & 1669.44 & 17.9\% & \textbf{149.26} & 18.8\%\\ & 2 & \textbf{212.57} & 15.1\% & 288.46 & 3067.67\\ & 3 & 2199.49 & 17.9\% & \textbf{98.00} & 4467.25\\ & 4 & \textbf{1918.40} & 17.3\% & 2301.73 & 3773.36\\ multiplants\_mtg2 & - & \textbf{2115.70} & 0.12\% & 2\% & 21.0\%\\ & 1 & \textbf{717.45} & 0.15\% & 1.8\% & 0.4\%\\ & 2 & \textbf{276.56} & 0.13\% & 469.69 & 0.27\%\\ & 3 & \textbf{132.47} & 0.041\% & 1\% & 6.6\%\\ & 4 & \textbf{212.58} & 2.2\% & 1.9\% & 1\%\\ nd\_netgen-3000-1-1-b-b-ns\_7 & - & $\infty$ & 680.80 & \textbf{3.67} & 31.10\\ & 1 & 99.7\% & infeas & infeas & \textbf{32.20}\\ & 2 & 
$\infty$ & infeas & infeas & \textbf{47.17}\\ & 3 & 99.7\% & $\infty$ & infeas & \textbf{40.17}\\ & 4 & 99.7\% & infeas & infeas & \textbf{47.92}\\ netmod\_kar1 & - & \textbf{4.53} & 5948.45 & 41.89 & 5.51\\ & 1 & 11.3\% & 110.53 & 51.27 & \textbf{4.90}\\ & 2 & 11.1\% & 311.30 & 39.29 & \textbf{4.88}\\ & 3 & 4420.66 & 576.52 & 42.64 & \textbf{5.64}\\ & 4 & 15.4\% & 1050.14 & 40.84 & \textbf{5.11}\\ netmod\_kar2 & - & \textbf{5.26} & 5917.59 & 42.01 & \textbf{5.77}\\ & 1 & 11.3\% & 110.81 & 51.55 & \textbf{5.48}\\ & 2 & 11.1\% & 311.71 & 39.14 & \textbf{4.90}\\ & 3 & 4429.25 & 576.74 & 43.00 & \textbf{5.64}\\ & 4 & 15.4\% & 1051.59 & 40.95 & \textbf{5.12}\\ nous1 & - & 51.20 & 2.9\% & 52.93 & \textbf{7.70}\\ & 1 & 39.45 & 1.7\% & 48.64 & \textbf{5.19}\\ & 2 & 38.92 & 0.88\% & 62.63 & \textbf{2.62}\\ & 3 & 36.38 & 1.5\% & 37.63 & \textbf{5.75}\\ & 4 & 15.51 & 1.5\% & 42.83 & \textbf{4.75}\\ nous2 & - & \textbf{0.40} & 2.16 & 8.29 & 0.80\\ & 1 & \textbf{0.43} & infeas & 10.50 & 0.91\\ & 2 & 0.77 & 2.21 & 8.61 & \textbf{0.65}\\ & 3 & \textbf{0.51} & 1.76 & 6.92 & 0.87\\ & 4 & \textbf{0.59} & 2.25 & 5.81 & 0.69\\ nvs02 & - & \textbf{0.04} & 69.68 & 0.22 & 0.06\\ & 1 & 0.05 & 63.08 & 0.11 & \textbf{0.04}\\ & 2 & 0.12 & 54.34 & 0.28 & \textbf{0.03}\\ & 3 & 0.09 & 39.48 & 0.11 & \textbf{0.03}\\ & 4 & 0.05 & 46.77 & 0.15 & \textbf{0.04}\\ nvs06 & - & 0.08 & 6.64 & 0.06 & \textbf{0.02}\\ & 1 & 0.08 & 3.90 & 0.13 & \textbf{0.03}\\ & 2 & 0.09 & 3.79 & 0.06 & \textbf{0.03}\\ & 3 & 0.08 & 3.83 & 0.06 & \textbf{0.03}\\ & 4 & 0.08 & 3.84 & 0.06 & \textbf{0.03}\\ oil2 & - & \textbf{3.30} & 43.56 & nonopt & 4.64\\ & 1 & nonopt & infeas & nonopt & \textbf{4.55}\\ & 2 & \textbf{3.55} & infeas & nonopt & 8.76\\ & 3 & \textbf{5.05} & 22.24 & nonopt & 7.57\\ & 4 & \textbf{3.64} & 39.41 & nonopt & 6.34\\ optmass & - & $\infty$ & 11.3\% & \textbf{463.14} & 1.8\%\\ & 1 & $\infty$ & $\infty$ & \textbf{480.52} & 9.9\%\\ & 2 & $\infty$ & $\infty$ & \textbf{493.97} & 7.6\%\\ & 3 & $\infty$ & 
12.8\% & \textbf{502.07} & 9.9\%\\ & 4 & $\infty$ & 12.8\% & \textbf{507.91} & 9.9\%\\ ortez & - & nonopt & 2.42 & 9.48 & \textbf{0.14}\\ & 1 & nonopt & 3.56 & 18.07 & \textbf{0.12}\\ & 2 & nonopt & 1.50 & 0.98 & \textbf{0.14}\\ & 3 & nonopt & 3.47 & 1.26 & \textbf{0.26}\\ & 4 & 0.25 & 2.95 & 7.15 & \textbf{0.15}\\ p\_ball\_10b\_5p\_3d\_m & - & 32.22 & 26.41 & 49.66 & \textbf{4.26}\\ & 1 & 23.79 & 30.34 & 52.39 & \textbf{3.94}\\ & 2 & 26.62 & 35.67 & infeas & \textbf{4.35}\\ & 3 & 30.37 & 29.09 & 70.38 & \textbf{4.35}\\ & 4 & 31.69 & 33.60 & 49.71 & \textbf{4.43}\\ p\_ball\_15b\_5p\_2d\_m & - & 90.52 & 101.62 & infeas & \textbf{5.44}\\ & 1 & 93.01 & 110.13 & infeas & \textbf{5.30}\\ & 2 & 59.07 & 83.88 & infeas & \textbf{5.01}\\ & 3 & 70.12 & 103.04 & infeas & \textbf{5.71}\\ & 4 & 73.50 & 153.02 & infeas & \textbf{4.21}\\ parabol5\_2\_3 & - & 0.051\% & 15.7\% & \textbf{4095.70} & 0.051\%\\ & 1 & \textbf{7200.00} & infeas & 6.6\% & \textbf{7200.00}\\ & 2 & 0.051\% & 16.1\% & 6.6\% & \textbf{7200.00}\\ & 3 & 6.7\% & 16.4\% & \textbf{406.60} & 7200.00\\ & 4 & \textbf{7200.00} & infeas & 6.6\% & \textbf{7200.00}\\ parallel & - & 19.19 & 38.93 & 102.24 & \textbf{10.26}\\ & 1 & 17.35 & 37.95 & 295.19 & \textbf{8.15}\\ & 2 & 19.55 & 36.08 & 291.45 & \textbf{8.78}\\ & 3 & 22.27 & 40.15 & 299.21 & \textbf{8.40}\\ & 4 & 20.80 & 37.29 & 295.03 & \textbf{7.73}\\ pedigree\_ex485 & - & \textbf{68.37} & 5.8\% & 276.45 & 105.90\\ & 1 & \textbf{85.93} & 6.5\% & 248.15 & 104.00\\ & 2 & 535.45 & 9.1\% & 161.40 & \textbf{56.53}\\ & 3 & \textbf{75.96} & 5.7\% & 243.80 & \textbf{77.55}\\ & 4 & \textbf{54.46} & 4.8\% & 240.39 & 79.44\\ pedigree\_ex485\_2 & - & 17.30 & 192.97 & \textbf{3.66} & 35.24\\ & 1 & 14.91 & 221.77 & \textbf{1.86} & 53.86\\ & 2 & 17.00 & 245.18 & \textbf{1.86} & 53.18\\ & 3 & 15.79 & 238.26 & \textbf{1.77} & 40.26\\ & 4 & 16.50 & 231.25 & \textbf{1.83} & 39.39\\ pointpack06 & - & 5.59 & 27.80 & 4.76 & \textbf{3.05}\\ & 1 & 5.68 & 30.47 & 5.09 & \textbf{2.99}\\ & 2 
& 6.00 & 31.82 & 4.68 & \textbf{3.54}\\ & 3 & 5.71 & 47.47 & 5.55 & \textbf{3.41}\\ & 4 & 6.45 & 30.25 & 5.18 & \textbf{3.12}\\ pointpack08 & - & 159.31 & 3611.87 & 208.18 & \textbf{61.22}\\ & 1 & 172.19 & 3492.98 & 205.47 & \textbf{67.88}\\ & 2 & 194.38 & 4171.95 & 201.51 & \textbf{64.73}\\ & 3 & 210.56 & 2597.64 & 234.46 & \textbf{60.70}\\ & 4 & 170.46 & 3025.00 & 178.05 & \textbf{63.89}\\ pooling\_epa1 & - & \textbf{12.84} & infeas & 21.07 & \textbf{12.87}\\ & 1 & \textbf{10.13} & infeas & 17.86 & 1054.75\\ & 2 & \textbf{14.46} & infeas & 20.75 & 19.10\\ & 3 & \textbf{7.62} & infeas & 18.78 & 21.67\\ & 4 & \textbf{10.55} & 2.4\% & 20.88 & 35.76\\ pooling\_epa2 & - & 2.4\% & 0.041\% & \textbf{1694.37} & 1.9\%\\ & 1 & \textbf{1030.44} & 1.4\% & 2186.70 & 1.8\%\\ & 2 & 1.8\% & 0.088\% & \textbf{1797.05} & 1.8\%\\ & 3 & 3.2\% & 6096.04 & \textbf{1992.47} & 1.7\%\\ & 4 & 4.3\% & 0.39\% & \textbf{1615.89} & 2\%\\ portfol\_buyin & - & \textbf{0.34} & 2.34 & 3.56 & 0.50\\ & 1 & 0.56 & 2.27 & 0.85 & \textbf{0.35}\\ & 2 & \textbf{0.30} & 2.25 & 1.72 & \textbf{0.29}\\ & 3 & \textbf{0.22} & 2.28 & 3.42 & 0.41\\ & 4 & \textbf{0.31} & 2.26 & 3.49 & 0.57\\ portfol\_card & - & \textbf{0.31} & 2.15 & 5.29 & 0.58\\ & 1 & \textbf{0.22} & 2.34 & 1.93 & 0.95\\ & 2 & \textbf{0.43} & 2.33 & 4.81 & \textbf{0.45}\\ & 3 & \textbf{0.36} & 2.09 & 7.60 & \textbf{0.33}\\ & 4 & \textbf{0.31} & 2.33 & 7.36 & 0.75\\ powerflow0014r & - & \textbf{50.50} & 100.0\% & 100.0\% & 7079.42\\ & 1 & \textbf{31.00} & 100.0\% & 66.6\% & 2519.01\\ & 2 & \textbf{47.00} & 100.0\% & 100.0\% & 0.025\%\\ & 3 & \textbf{59.10} & 100.0\% & 100.0\% & 6935.50\\ & 4 & \textbf{50.99} & 100.0\% & 88.1\% & 7200.00\\ powerflow0057r & - & $\infty$ & $\infty$ & $\infty$ & \textbf{80.19}\\ & 1 & $\infty$ & $\infty$ & $\infty$ & \textbf{102.21}\\ & 2 & $\infty$ & $\infty$ & $\infty$ & \textbf{78.25}\\ & 3 & $\infty$ & $\infty$ & $\infty$ & \textbf{78.13}\\ & 4 & $\infty$ & $\infty$ & $\infty$ & \textbf{68.48}\\ prob07 & - & 
75.62 & 68.41 & \textbf{16.19} & 76.05\\ & 1 & 74.75 & 67.34 & 15.70 & \textbf{10.94}\\ & 2 & 73.53 & 69.74 & 15.49 & \textbf{11.11}\\ & 3 & 80.90 & 69.79 & 16.44 & \textbf{11.47}\\ & 4 & 78.75 & 71.26 & 16.69 & \textbf{10.54}\\ process & - & \textbf{0.51} & 1.38 & 0.93 & 0.4\%\\ & 1 & \textbf{0.39} & 1.53 & 0.64 & 0.46\\ & 2 & \textbf{0.66} & 1.44 & 0.95 & 0.39\%\\ & 3 & \textbf{0.41} & 1.39 & 0.89 & 0.39\%\\ & 4 & \textbf{0.42} & 1.36 & 0.58 & 0.4\%\\ procurement1mot & - & \textbf{59.57} & 86.4\% & 79.5\% & 85.0\%\\ & 1 & \textbf{114.45} & 87.8\% & 79.7\% & 84.2\%\\ & 2 & \textbf{177.11} & 86.6\% & 79.7\% & 80.6\%\\ & 3 & \textbf{144.06} & 87.4\% & 79.7\% & 83.9\%\\ & 4 & \textbf{148.48} & 87.6\% & 79.7\% & 84.5\%\\ procurement2mot & - & 4.13 & 13.59 & 2.71 & \textbf{2.03}\\ & 1 & 8.12 & 12.54 & \textbf{2.54} & \textbf{2.39}\\ & 2 & 5.86 & 12.99 & 2.80 & \textbf{1.76}\\ & 3 & 7.10 & 15.11 & 3.40 & \textbf{2.37}\\ & 4 & 7.46 & 19.46 & \textbf{2.67} & 3.02\\ product & - & 48.09 & 465.36 & nonopt & \textbf{16.80}\\ & 1 & nonopt & 471.35 & nonopt & \textbf{11.71}\\ & 2 & nonopt & 2467.73 & nonopt & \textbf{13.45}\\ & 3 & nonopt & 278.34 & nonopt & \textbf{18.04}\\ & 4 & 107.32 & 446.81 & nonopt & \textbf{21.42}\\ product2 & - & 2.90 & 145.38 & \textbf{2.53} & 3.85\\ & 1 & \textbf{79.81} & 185.85 & nonopt & infeas\\ & 2 & 3.41 & 184.60 & \textbf{2.92} & infeas\\ & 3 & 26.03 & 467.30 & \textbf{2.62} & infeas\\ & 4 & 242.41 & \textbf{133.79} & nonopt & infeas\\ prolog & - & 0.31 & 100.0\% & 100.0\% & \textbf{0.06}\\ & 1 & \textbf{1.16} & 100.0\% & 100.0\% & abort\\ & 2 & 0.30 & 100.0\% & 100.0\% & \textbf{0.08}\\ & 3 & 1.22 & 100.0\% & 100.0\% & \textbf{0.05}\\ & 4 & 0.36 & 100\% & 100.0\% & \textbf{0.06}\\ qp3 & - & 13.7\% & \textbf{0.01} & 4.6\% & 14.1\%\\ & 1 & 12.2\% & \textbf{0.01} & 4.6\% & 6\%\\ & 2 & 11.9\% & \textbf{0.01} & 4.6\% & 6\%\\ & 3 & 12.7\% & \textbf{0.01} & 4.6\% & 13.8\%\\ & 4 & 13.2\% & \textbf{0.01} & 4.6\% & 13.4\%\\ qspp\_0\_10\_0\_1\_10\_1 & - 
& 24.2\% & 632.32 & \textbf{26.39} & 180.85\\ & 1 & 22.5\% & 653.67 & \textbf{25.59} & 190.34\\ & 2 & 21.3\% & 643.24 & \textbf{23.37} & 174.14\\ & 3 & 22.5\% & 661.55 & \textbf{23.97} & 207.82\\ & 4 & 23.7\% & 650.22 & \textbf{22.88} & 158.80\\ qspp\_0\_11\_0\_1\_10\_1 & - & 30.5\% & 3676.78 & \textbf{124.07} & 623.76\\ & 1 & 31.5\% & 3955.95 & \textbf{138.43} & 710.85\\ & 2 & 29.0\% & 3821.06 & \textbf{143.49} & 770.51\\ & 3 & 31.4\% & 3870.07 & \textbf{131.00} & 679.01\\ & 4 & 31.1\% & 3857.65 & \textbf{116.44} & 756.40\\ radar-2000-10-a-6\_lat\_7 & - & 75.1\% & \textbf{33.14} & \textbf{33.34} & 129.82\\ & 1 & 96.8\% & 64.95 & \textbf{39.42} & 140.45\\ & 2 & 95.9\% & 65.73 & \textbf{39.22} & 142.02\\ & 3 & 6060.60 & 65.56 & \textbf{39.32} & 141.78\\ & 4 & 96.8\% & 64.08 & \textbf{10.04} & 139.60\\ radar-3000-10-a-8\_lat\_7 & - & 100.0\% & 558.69 & \textbf{209.09} & 422.76\\ & 1 & 100.0\% & 732.73 & \textbf{14.72} & 291.33\\ & 2 & 100.0\% & 724.04 & \textbf{14.73} & 769.42\\ & 3 & 99.9\% & 754.45 & \textbf{13.79} & 287.51\\ & 4 & 100.0\% & 730.86 & \textbf{14.73} & 307.35\\ ravempb & - & \textbf{0.82} & 3438.61 & 1.26 & 3.09\\ & 1 & \textbf{0.98} & 560.99 & 1.09 & 2.22\\ & 2 & \textbf{0.73} & infeas & 0.83 & 1.54\\ & 3 & 0.91 & 182.23 & \textbf{0.80} & 2.26\\ & 4 & \textbf{0.85} & 990.98 & 1.06 & 2.70\\ risk2bpb & - & \textbf{0.70} & 1.46 & 10.70 & infeas\\ & 1 & \textbf{0.93} & 1.72 & 2.49 & infeas\\ & 2 & \textbf{0.92} & 1.49 & 47.77 & infeas\\ & 3 & \textbf{0.49} & 1.51 & 2.72 & infeas\\ & 4 & \textbf{0.46} & 0.99 & 2.39 & infeas\\ routingdelay\_bigm & - & \textbf{26.08} & nonopt & 18.4\% & nonopt\\ & 1 & \textbf{14.94} & 2.5\% & 54.3\% & nonopt\\ & 2 & \textbf{16.96} & nonopt & 20.3\% & 3223.45\\ & 3 & \textbf{25.96} & 2.9\% & 18.9\% & 0.97\%\\ & 4 & \textbf{24.02} & nonopt & 23.3\% & nonopt\\ rsyn0815m & - & \textbf{0.29} & 92.08 & 1.02 & 1.44\\ & 1 & \textbf{0.29} & 80.27 & 1.02 & 1.45\\ & 2 & \textbf{0.35} & 62.63 & 1.00 & 1.88\\ & 3 & \textbf{0.34} & 
72.91 & 0.82 & 1.57\\ & 4 & \textbf{0.34} & 112.79 & 1.01 & 1.61\\ rsyn0815m03m & - & 31.23 & nonopt & 44.8\% & \textbf{14.06}\\ & 1 & 47.45 & nonopt & abort & \textbf{20.78}\\ & 2 & 40.66 & abort & 33.6\% & \textbf{14.84}\\ & 3 & 44.05 & nonopt & abort & \textbf{14.36}\\ & 4 & 38.36 & nonopt & abort & \textbf{17.09}\\ sfacloc2\_2\_95 & - & \textbf{0.31} & 5.25 & 1.11 & 2.78\\ & 1 & 0.74 & 77.82 & \textbf{0.58} & 2.18\\ & 2 & \textbf{0.30} & 69.08 & 1.02 & 2.19\\ & 3 & \textbf{0.38} & 129.88 & 0.64 & 2.05\\ & 4 & \textbf{0.39} & 22.24 & 0.57 & 2.36\\ sfacloc2\_3\_90 & - & 29.12 & 588.76 & 67.86 & \textbf{20.32}\\ & 1 & 28.72 & 3934.46 & 39.19 & \textbf{17.48}\\ & 2 & \textbf{27.15} & 11.7\% & 47.61 & \textbf{27.05}\\ & 3 & \textbf{25.26} & 1353.20 & 28.54 & \textbf{25.05}\\ & 4 & \textbf{16.92} & 39.7\% & 94.54 & 27.50\\ sjup2 & - & $\infty$ & 3456.91 & $\infty$ & \textbf{469.64}\\ & 1 & \textbf{3049.75} & \textbf{2931.24} & $\infty$ & 1.6\%\\ & 2 & $\infty$ & 3562.09 & $\infty$ & \textbf{254.22}\\ & 3 & $\infty$ & 3131.78 & 100.0\% & \textbf{308.94}\\ & 4 & $\infty$ & 2986.81 & $\infty$ & \textbf{345.25}\\ slay06m & - & \textbf{0.46} & 5.66 & 0.69 & 1.85\\ & 1 & \textbf{0.81} & 5.20 & 0.94 & 2.42\\ & 2 & \textbf{0.65} & 5.53 & 0.91 & 0.94\\ & 3 & \textbf{0.64} & 6.30 & 0.93 & 1.25\\ & 4 & 0.78 & 5.21 & \textbf{0.63} & 1.57\\ slay07m & - & \textbf{0.62} & 6.96 & \textbf{0.61} & 2.05\\ & 1 & \textbf{0.81} & 10.48 & 1.12 & 2.01\\ & 2 & \textbf{0.80} & 8.22 & 1.14 & 2.01\\ & 3 & \textbf{0.73} & 11.43 & 0.81 & 3.39\\ & 4 & \textbf{0.85} & 11.45 & 1.26 & 3.28\\ smallinvDAXr1b010-011 & - & \textbf{1.62} & 3.51 & 8.69 & 2.17\\ & 1 & \textbf{1.72} & 3.08 & 7.52 & 2.31\\ & 2 & \textbf{1.75} & 3.57 & 7.69 & 2.29\\ & 3 & \textbf{2.79} & 3.05 & 6.83 & \textbf{2.74}\\ & 4 & \textbf{1.65} & 3.60 & 9.48 & 2.19\\ smallinvDAXr1b020-022 & - & \textbf{1.95} & 3.46 & 205.80 & 3.37\\ & 1 & \textbf{2.43} & 3.95 & 209.56 & 2.80\\ & 2 & \textbf{1.94} & 3.56 & 185.73 & 2.92\\ & 3 & 2.26 & 
4.04 & 192.10 & \textbf{0.75}\\ & 4 & 2.22 & 3.57 & 158.16 & \textbf{0.75}\\ sonet17v4 & - & 339.62 & nonopt & \textbf{160.55} & 949.25\\ & 1 & 414.29 & nonopt & \textbf{188.66} & 1453.04\\ & 2 & 324.27 & nonopt & \textbf{143.09} & 926.01\\ & 3 & 238.47 & nonopt & \textbf{138.66} & 916.91\\ & 4 & 358.04 & nonopt & \textbf{186.01} & 927.66\\ sonet18v6 & - & 429.30 & nonopt & \textbf{184.16} & 1445.90\\ & 1 & 439.64 & nonopt & \textbf{154.31} & 1353.80\\ & 2 & 421.41 & nonopt & \textbf{171.20} & 1426.08\\ & 3 & 557.19 & nonopt & \textbf{180.39} & 1211.53\\ & 4 & 624.38 & nonopt & \textbf{206.98} & 1208.09\\ sonetgr17 & - & 346.77 & nonopt & \textbf{65.24} & 1255.92\\ & 1 & 327.68 & 29.5\% & \textbf{129.02} & 1229.96\\ & 2 & 370.11 & nonopt & \textbf{81.73} & 1228.20\\ & 3 & 424.67 & 5378.39 & \textbf{36.48} & 1043.24\\ & 4 & 324.24 & 4886.49 & \textbf{30.05} & 1290.93\\ spectra2 & - & \textbf{1.37} & infeas & 105\% & 9.20\\ & 1 & \textbf{1.35} & infeas & 104\% & 8.46\\ & 2 & \textbf{1.59} & infeas & 105\% & 12.31\\ & 3 & \textbf{1.33} & 5.75 & 105\% & 11.51\\ & 4 & \textbf{1.33} & infeas & 104\% & 11.05\\ sporttournament24 & - & 4982.31 & 6.7\% & \textbf{21.48} & 45.75\\ & 1 & 1.3\% & 8\% & \textbf{27.89} & 59.26\\ & 2 & 2.5\% & 7.9\% & \textbf{15.68} & 81.50\\ & 3 & 1.3\% & 4.1\% & \textbf{20.86} & 81.00\\ & 4 & 4451.83 & 9.8\% & \textbf{21.45} & 121.43\\ sporttournament30 & - & 11.8\% & 12.5\% & \textbf{1946.51} & 0.72\%\\ & 1 & 11.7\% & 14.3\% & \textbf{2934.20} & 0.84\%\\ & 2 & 12.5\% & 13.3\% & \textbf{2795.89} & 0.55\%\\ & 3 & 12.0\% & 13.6\% & \textbf{2178.10} & 6530.54\\ & 4 & 11.8\% & 15.3\% & \textbf{4553.97} & 7010.55\\ sssd12-05persp & - & 27.8\% & \textbf{378.13} & 780.87 & 4.5\%\\ & 1 & 28.1\% & abort & \textbf{572.29} & 4.6\%\\ & 2 & 27.7\% & $\infty$ & \textbf{691.17} & 4.8\%\\ & 3 & 28.4\% & nonopt & \textbf{695.56} & 4.7\%\\ & 4 & 28.5\% & abort & 100.0\% & \textbf{5.1\%}\\ sssd18-06persp & - & 41.4\% & \textbf{7200.00} & 19.9\% & 15.1\%\\ & 1 & 
41.4\% & nonopt & 21.4\% & \textbf{15.9\%}\\ & 2 & 41.4\% & abort & 25.6\% & \textbf{16.6\%}\\ & 3 & 41.3\% & abort & 23.2\% & \textbf{15.9\%}\\ & 4 & 41.6\% & abort & 100.0\% & \textbf{16.5\%}\\ st\_testgr1 & - & 0.24 & 5.51 & \textbf{0.07} & 0.12\\ & 1 & 0.34 & 4.73 & \textbf{0.06} & 0.24\\ & 2 & 0.37 & 4.93 & \textbf{0.07} & 0.13\\ & 3 & 0.13 & 4.50 & \textbf{0.06} & 0.12\\ & 4 & 0.12 & 4.45 & \textbf{0.07} & 0.13\\ st\_testgr3 & - & 0.24 & 3.67 & \textbf{0.07} & 0.24\\ & 1 & 0.23 & 3.66 & \textbf{0.07} & 0.17\\ & 2 & 0.23 & 3.52 & \textbf{0.06} & 0.17\\ & 3 & 0.24 & 3.28 & \textbf{0.07} & 0.17\\ & 4 & 0.24 & 3.33 & \textbf{0.07} & 0.17\\ steenbrf & - & 14.39 & \textbf{0.75} & 98.0\% & 12.42\\ & 1 & 7.59 & \textbf{1.87} & 97.9\% & 0.038\%\\ & 2 & 9.81 & \textbf{1.21} & 97.9\% & 0.33\%\\ & 3 & 8.40 & \textbf{0.99} & 98.0\% & 13.85\\ & 4 & \textbf{4.69} & 9.33 & 97.8\% & 0.34\%\\ stockcycle & - & 7.82 & 3.18 & 39.57 & \textbf{0.79}\\ & 1 & 6.15 & 1.36 & 35.36 & \textbf{0.56}\\ & 2 & 5.74 & 1.85 & 35.21 & \textbf{0.96}\\ & 3 & 6.14 & 1.57 & 36.67 & \textbf{0.79}\\ & 4 & 7.59 & 1.63 & 36.22 & \textbf{0.71}\\ supplychainp1\_022020 & - & nonopt & 20.8\% & 8.5\% & \textbf{1056.69}\\ & 1 & nonopt & 33.7\% & \textbf{11.5\%} & abort\\ & 2 & nonopt & 25.5\% & 11.6\% & \textbf{0.098\%}\\ & 3 & nonopt & 36.8\% & 11.4\% & \textbf{0.077\%}\\ & 4 & nonopt & 18.6\% & 11.1\% & \textbf{4696.71}\\ supplychainp1\_030510 & - & nonopt & 63.91 & \textbf{3.08} & 3.54\\ & 1 & nonopt & 92.12 & \textbf{3.56} & 5.41\\ & 2 & nonopt & 114.12 & 3.52 & \textbf{2.84}\\ & 3 & nonopt & 85.05 & \textbf{3.39} & 4.46\\ & 4 & \textbf{1.48} & 63.43 & 3.35 & 3.68\\ supplychainr1\_022020 & - & \textbf{6.13} & 12.5\% & 2966.92 & 21.97\\ & 1 & \textbf{11.57} & 15.8\% & 0.88\% & 27.18\\ & 2 & 19.28 & 15.9\% & 1.1\% & \textbf{1.64}\\ & 3 & 6.66 & 15.5\% & 1.2\% & \textbf{2.39}\\ & 4 & \textbf{10.05} & 15.2\% & 1.9\% & 26.06\\ supplychainr1\_030510 & - & 0.28 & 7.19 & 0.74 & \textbf{0.12}\\ & 1 & 0.40 & 7.87 
& 0.73 & \textbf{0.28}\\ & 2 & 0.41 & 9.32 & 0.81 & \textbf{0.20}\\ & 3 & 0.38 & 9.23 & 0.78 & \textbf{0.16}\\ & 4 & 0.42 & 6.65 & 0.79 & \textbf{0.16}\\ syn15m04m & - & \textbf{0.54} & 42.37 & 1.04 & 2.75\\ & 1 & \textbf{0.67} & 43.23 & 1.44 & 2.05\\ & 2 & \textbf{0.58} & 41.87 & 1.48 & 1.13\\ & 3 & \textbf{0.49} & 44.00 & 1.13 & 0.94\\ & 4 & \textbf{0.50} & 43.84 & 1.40 & 1.86\\ syn30m02m & - & 2.51 & 239.18 & 1.83 & \textbf{1.58}\\ & 1 & 2.14 & 239.12 & 2.28 & \textbf{1.39}\\ & 2 & \textbf{1.92} & 249.03 & \textbf{1.88} & 2.18\\ & 3 & \textbf{2.27} & 187.53 & 2.53 & \textbf{2.27}\\ & 4 & 2.71 & 226.18 & 2.04 & \textbf{1.62}\\ synheat & - & \textbf{35.92} & 0.36\% & 17.1\% & infeas\\ & 1 & \textbf{734.68} & 0.33\% & 16.6\% & 3.9\%\\ & 2 & \textbf{5112.71} & 0.34\% & 15.3\% & infeas\\ & 3 & \textbf{33.77} & 0.39\% & 14.7\% & 7.2\%\\ & 4 & \textbf{26.63} & 0.34\% & 15.9\% & infeas\\ tanksize & - & 4.81 & 13.85 & 54.61 & \textbf{4.12}\\ & 1 & 5.53 & 14.77 & 59.53 & \textbf{3.22}\\ & 2 & 5.07 & 14.41 & 57.79 & \textbf{3.13}\\ & 3 & 4.37 & 16.30 & 57.19 & \textbf{3.40}\\ & 4 & 5.57 & 14.46 & 58.71 & \textbf{3.34}\\ telecomsp\_pacbell & - & 3393.96 & 2.8\% & \textbf{1125.10} & 0.63\%\\ & 1 & 2926.69 & 8.2\% & \textbf{2376.86} & 0.81\%\\ & 2 & 3854.14 & 5.4\% & \textbf{2301.23} & 0.96\%\\ & 3 & 0.52\% & 5\% & \textbf{1194.43} & 0.78\%\\ & 4 & 2094.65 & 6.2\% & \textbf{1483.59} & 0.58\%\\ tln5 & - & 5.26 & 30.1\% & 2.08 & \textbf{0.50}\\ & 1 & 8.30 & 30.1\% & 1.20 & \textbf{0.78}\\ & 2 & 5.75 & 31.1\% & 1.91 & \textbf{0.69}\\ & 3 & 3.67 & 30.1\% & 1.74 & \textbf{0.75}\\ & 4 & 2.76 & 29.1\% & 1.63 & \textbf{0.44}\\ tln7 & - & 2935.35 & 61.4\% & 274.33 & \textbf{188.05}\\ & 1 & 1666.87 & 70.9\% & 357.03 & \textbf{122.53}\\ & 2 & 1.3\% & 65.1\% & \textbf{301.36} & \textbf{302.03}\\ & 3 & 1622.52 & 69.1\% & \textbf{328.32} & 970.70\\ & 4 & 4263.19 & 66.5\% & \textbf{319.16} & \textbf{338.06}\\ tls2 & - & \textbf{0.13} & 5.43 & 0.18 & 0.93\\ & 1 & \textbf{0.14} & 8.16 & 
\textbf{0.14} & 1.08\\ & 2 & \textbf{0.14} & 7.82 & \textbf{0.14} & 0.59\\ & 3 & \textbf{0.13} & 4.81 & 0.15 & 0.84\\ & 4 & \textbf{0.14} & 7.82 & \textbf{0.14} & 0.79\\ tls4 & - & \textbf{35.15} & 833.24 & \textbf{36.82} & 41.19\\ & 1 & \textbf{30.56} & 870.73 & 34.63 & 44.52\\ & 2 & 42.93 & abort & \textbf{31.00} & 41.94\\ & 3 & \textbf{26.01} & abort & 29.09 & 40.57\\ & 4 & 38.94 & 865.53 & \textbf{32.69} & 47.23\\ topopt-mbb\_60x40\_50 & - & $\infty$ & \textbf{195.38} & $\infty$ & $\infty$\\ & 1 & \textbf{$\infty$} & nonopt & \textbf{$\infty$} & \textbf{$\infty$}\\ & 2 & \textbf{$\infty$} & nonopt & \textbf{$\infty$} & \textbf{$\infty$}\\ & 3 & \textbf{$\infty$} & nonopt & \textbf{$\infty$} & \textbf{$\infty$}\\ & 4 & $\infty$ & \textbf{481.77} & $\infty$ & $\infty$\\ toroidal2g20\_5555 & - & 3747.17 & nonopt & \textbf{3.23} & 5.70\\ & 1 & 833.26 & nonopt & \textbf{3.58} & 6.35\\ & 2 & 2465.17 & nonopt & \textbf{2.57} & 5.91\\ & 3 & 914.43 & nonopt & \textbf{3.40} & 4.66\\ & 4 & 1190.57 & nonopt & \textbf{3.07} & 5.89\\ toroidal3g7\_6666 & - & 6.2\% & nonopt & \textbf{67.16} & 141.64\\ & 1 & 4.6\% & nonopt & \textbf{74.53} & 174.00\\ & 2 & 5.7\% & nonopt & \textbf{86.25} & 148.01\\ & 3 & 6\% & nonopt & \textbf{122.43} & \textbf{122.20}\\ & 4 & 5.4\% & nonopt & \textbf{88.10} & \textbf{89.17}\\ transswitch0009r & - & 8\% & 16.0\% & 5.6\% & \textbf{6975.38}\\ & 1 & 6.5\% & 18.1\% & 5\% & \textbf{5869.65}\\ & 2 & 7.7\% & 23.3\% & 4\% & \textbf{6188.29}\\ & 3 & 11.9\% & 19.6\% & 5.6\% & \textbf{0.43\%}\\ & 4 & 7.4\% & 21.9\% & 3.8\% & \textbf{6364.97}\\ tricp & - & \textbf{0.82} & infeas & 100.0\% & 100.0\%\\ & 1 & \textbf{7.10} & infeas & 0.02\% & 100.0\%\\ & 2 & \textbf{182.36} & infeas & 100.0\% & 100.0\%\\ & 3 & 167.41 & infeas & \textbf{2.02} & 100.0\%\\ & 4 & \textbf{405.43} & infeas & 100.0\% & 100.0\%\\ tspn08 & - & \textbf{10.81} & 10.9\% & 2.3\% & 17.8\%\\ & 1 & \textbf{13.75} & 10.9\% & 2.3\% & 17.4\%\\ & 2 & \textbf{10.93} & 10.6\% & 2.3\% & 19.1\%\\ & 
3 & \textbf{11.68} & 9.5\% & 2.4\% & 19.8\%\\ & 4 & \textbf{8.78} & 10.8\% & 2.3\% & 19.5\%\\ tspn15 & - & \textbf{884.44} & 12.8\% & 81.4\% & 50.0\%\\ & 1 & \textbf{967.84} & 12.8\% & 79.9\% & 52.0\%\\ & 2 & \textbf{1235.57} & 12.6\% & 82.7\% & 52.7\%\\ & 3 & \textbf{1129.22} & 12.8\% & 79.7\% & 49.7\%\\ & 4 & \textbf{1088.07} & 12.8\% & 78.4\% & 52.7\%\\ unitcommit1 & - & 0.27\% & \textbf{81.17} & 1082.69 & infeas\\ & 1 & 1.4\% & \textbf{65.40} & 1838.25 & infeas\\ & 2 & 1.5\% & \textbf{85.31} & 1538.88 & infeas\\ & 3 & 1.3\% & \textbf{60.10} & 1517.16 & infeas\\ & 4 & 1.5\% & \textbf{69.59} & 1419.76 & infeas\\ unitcommit2 & - & 7200.00 & 18.0\% & \textbf{11.20} & 439.70\\ & 1 & 7200.00 & 19.6\% & \textbf{8.42} & 23.15\\ & 2 & 7200.00 & 19.3\% & \textbf{9.70} & 10.83\\ & 3 & 7200.00 & 18.2\% & \textbf{8.15} & infeas\\ & 4 & 7200.00 & 21.1\% & \textbf{7.83} & infeas\\ wager & - & 15.30 & $\infty$ & 252.73 & \textbf{3.46}\\ & 1 & \textbf{17.16} & 28.9\% & 482.07 & 53.79\\ & 2 & 8.88 & 28.6\% & 1694.38 & \textbf{1.53}\\ & 3 & 15.85 & $\infty$ & 324.73 & \textbf{9.28}\\ & 4 & 14.15 & 37.2\% & 644.93 & \textbf{11.81}\\ waste & - & \textbf{22.55} & 46.3\% & 99.9\% & 38.85\\ & 1 & \textbf{34.81} & 46.2\% & 100.0\% & \textbf{34.62}\\ & 2 & \textbf{26.85} & 36.9\% & 100.0\% & infeas\\ & 3 & \textbf{38.54} & 47.0\% & 100.0\% & 44.26\\ & 4 & \textbf{24.72} & 46.6\% & 100.0\% & 41.89\\ wastepaper3 & - & 9.19 & 33.90 & 22.76 & \textbf{4.22}\\ & 1 & 11.99 & 33.19 & 27.77 & \textbf{4.01}\\ & 2 & 7.51 & 32.25 & 24.13 & \textbf{3.55}\\ & 3 & 8.27 & 39.31 & 26.56 & \textbf{5.57}\\ & 4 & 11.26 & 31.73 & 23.90 & \textbf{2.41}\\ wastepaper4 & - & 871.50 & 2320.60 & 2099.99 & \textbf{167.16}\\ & 1 & 328.30 & 1712.61 & 1708.66 & \textbf{144.72}\\ & 2 & 365.29 & 2919.38 & 1800.71 & \textbf{113.53}\\ & 3 & 229.96 & 2723.08 & 1593.40 & \textbf{117.95}\\ & 4 & 349.52 & 2223.22 & 1838.80 & \textbf{213.66}\\ wastepaper6 & - & \textbf{0.022\%} & abort & 0.16\% & \textbf{0.023\%}\\ & 1 & 
\textbf{7200.00} & abort & 0.22\% & 0.044\%\\ & 2 & \textbf{7200.00} & \textbf{7200.00} & 0.13\% & 0.044\%\\ & 3 & \textbf{7200.00} & 0.022\% & 0.2\% & 0.051\%\\ & 4 & \textbf{7200.00} & \textbf{7200.00} & 0.13\% & 0.058\%\\ water4 & - & 44.4\% & 46.1\% & \textbf{1648.53} & nonopt\\ & 1 & 42.9\% & 51.9\% & \textbf{1946.91} & nonopt\\ & 2 & 47.4\% & 51.1\% & \textbf{2214.42} & nonopt\\ & 3 & 45.8\% & 51.2\% & \textbf{2181.62} & nonopt\\ & 4 & 44.1\% & 53.2\% & \textbf{2084.97} & nonopt\\ waternd1 & - & 5.50 & 1130.72 & \textbf{1.34} & 7.21\\ & 1 & 5.06 & 1101.35 & \textbf{1.30} & 4.58\\ & 2 & 7.94 & 1150.81 & \textbf{1.36} & 5.74\\ & 3 & 22.42 & 1046.61 & \textbf{1.40} & 10.78\\ & 4 & 10.55 & 1181.39 & \textbf{1.61} & 7.26\\ waterno2\_02 & - & 5.20 & 26.45 & 4.18 & \textbf{2.89}\\ & 1 & \textbf{3.89} & 60.16 & 6.69 & \textbf{4.15}\\ & 2 & 5.13 & 28.66 & 6.66 & \textbf{3.72}\\ & 3 & \textbf{2.36} & 28.36 & 6.69 & 3.91\\ & 4 & 4.53 & 24.33 & 4.40 & \textbf{3.38}\\ waterno2\_03 & - & \textbf{260.33} & 1886.92 & 440.65 & nonopt\\ & 1 & \textbf{254.85} & 1847.35 & 479.46 & nonopt\\ & 2 & 270.87 & 1970.97 & 494.92 & \textbf{43.44}\\ & 3 & \textbf{313.74} & 2146.47 & 681.78 & nonopt\\ & 4 & 356.97 & 2060.53 & 550.59 & \textbf{38.73}\\ waterund01 & - & 0.35\% & infeas & 0.18\% & \textbf{1525.91}\\ & 1 & 0.34\% & 0.84\% & 0.18\% & \textbf{1988.24}\\ & 2 & 0.34\% & infeas & 0.18\% & \textbf{3003.09}\\ & 3 & 0.35\% & 0.98\% & 0.18\% & \textbf{0.025\%}\\ & 4 & 0.35\% & infeas & 0.18\% & \textbf{16.83}\\ \bottomrule \end{longtable} } \subsection{Octeract Gap Limit} \label{sec:detailed_octeractconvtol} The following table shows the outcomes of running Octeract in serial mode with both gap limits set to either $10^{-6}$ or $10^{-4}$ on the test set of 200 non-permuted instances.
{ \scriptsize \begin{longtable}{l|rr} \toprule instance & $10^{-6}$ & $10^{-4}$ \\ \midrule \endhead alan & \textbf{0.05} & \textbf{0.05}\\ autocorr\_bern20-05 & \textbf{4.36} & \textbf{4.39}\\ autocorr\_bern35-04 & \textbf{12.7\%} & \textbf{12.7\%}\\ ball\_mk2\_10 & \textbf{0.01} & \textbf{0.01}\\ ball\_mk2\_30 & \textbf{0.02} & 0.02\\ ball\_mk3\_10 & \textbf{0.04} & \textbf{0.04}\\ batch0812\_nc & \textbf{8.37} & \textbf{8.63}\\ batchs101006m & \textbf{4.84} & \textbf{4.92}\\ batchs121208m & \textbf{79.1\%} & \textbf{79.1\%}\\ bayes2\_20 & \textbf{0.033\%} & \textbf{0.033\%}\\ bayes2\_30 & \textbf{7200.00} & \textbf{7200.00}\\ blend029 & \textbf{29.33} & \textbf{30.14}\\ blend146 & \textbf{8.8\%} & \textbf{8.8\%}\\ camshape100 & \textbf{19.40} & \textbf{19.42}\\ cardqp\_inlp & \textbf{792.06} & \textbf{788.89}\\ cardqp\_iqp & \textbf{785.02} & \textbf{793.77}\\ carton7 & \textbf{6.03} & \textbf{6.34}\\ carton9 & \textbf{344.94} & \textbf{346.42}\\ casctanks & \textbf{11.8\%} & \textbf{12.4\%}\\ cecil\_13 & \textbf{188.40} & \textbf{205.63}\\ celar6-sub0 & \textbf{$\infty$} & \textbf{$\infty$}\\ chakra & \textbf{7200.00} & \textbf{7200.00}\\ chem & \textbf{0.18} & \textbf{0.18}\\ chenery & \textbf{200.71} & \textbf{200.09}\\ chimera\_k64maxcut-01 & \textbf{168.60} & \textbf{169.03}\\ chimera\_mis-01 & \textbf{1.14} & 1.45\\ chp\_shorttermplan1a & \textbf{64.74} & \textbf{65.18}\\ chp\_shorttermplan2a & \textbf{17.59} & \textbf{17.69}\\ chp\_shorttermplan2b & \textbf{0.67\%} & \textbf{0.69\%}\\ clay0204m & \textbf{1.47} & \textbf{1.45}\\ clay0205m & \textbf{15.10} & \textbf{14.97}\\ color\_lab3\_3x0 & \textbf{$\infty$} & \textbf{$\infty$}\\ crossdock\_15x7 & \textbf{$\infty$} & \textbf{$\infty$}\\ crossdock\_15x8 & \textbf{6306.65} & \textbf{6336.71}\\ crudeoil\_lee1\_07 & 8.56 & \textbf{7.58}\\ crudeoil\_pooling\_ct2 & nonopt & nonopt\\ csched1 & \textbf{8.1\%} & \textbf{8.1\%}\\ csched1a & \textbf{17.77} & \textbf{17.43}\\ cvxnonsep\_psig20 & \textbf{35.0\%} & 
\textbf{35.0\%}\\ cvxnonsep\_psig30 & \textbf{45.6\%} & \textbf{45.6\%}\\ du-opt & \textbf{122.38} & \textbf{122.77}\\ du-opt5 & \textbf{101.32} & \textbf{102.03}\\ edgecross10-040 & \textbf{4.16} & \textbf{4.11}\\ edgecross10-080 & \textbf{6\%} & \textbf{5.6\%}\\ eg\_all\_s & \textbf{102\%} & \textbf{102\%}\\ eigena2 & \textbf{416.89} & 523.38\\ elec50 & \textbf{66.4\%} & \textbf{66.4\%}\\ elf & \textbf{2.54} & \textbf{2.54}\\ eniplac & 1.54 & \textbf{1.30}\\ enpro56pb & \textbf{1.98} & 2.31\\ ex1244 & \textbf{81.99} & \textbf{82.60}\\ ex1252a & \textbf{245.41} & \textbf{248.28}\\ faclay20h & \textbf{785.28} & \textbf{781.38}\\ faclay80 & \textbf{$\infty$} & \textbf{$\infty$}\\ feedtray & \textbf{82.1\%} & \textbf{82.1\%}\\ fin2bb & \textbf{100.0\%} & \textbf{100.0\%}\\ flay04m & \textbf{0.87} & \textbf{0.90}\\ flay05m & infeas & infeas\\ flay06m & infeas & infeas\\ fo7\_ar25\_1 & \textbf{8.95} & \textbf{8.99}\\ fo7\_ar3\_1 & \textbf{18.33} & \textbf{18.57}\\ forest & nonopt & nonopt\\ gabriel01 & \textbf{2\%} & 4.3\%\\ gabriel02 & \textbf{7078.04} & \textbf{6811.48}\\ gasnet & \textbf{96.7\%} & \textbf{96.7\%}\\ gasprod\_sarawak16 & \textbf{0.74\%} & \textbf{0.68\%}\\ gastrans582\_cold13\_95 & \textbf{$\infty$} & \textbf{$\infty$}\\ gastrans582\_mild11 & \textbf{$\infty$} & \textbf{$\infty$}\\ gear & 0.11 & \textbf{0.07}\\ gear2 & 0.14 & \textbf{0.13}\\ gear4 & \textbf{17.56} & \textbf{17.33}\\ genpooling\_lee1 & \textbf{117.04} & \textbf{117.72}\\ genpooling\_lee2 & \textbf{186.63} & \textbf{186.60}\\ ghg\_1veh & \textbf{12.42} & \textbf{12.36}\\ gilbert & \textbf{0.9\%} & \textbf{0.9\%}\\ graphpart\_2g-0066-0066 & 0.75 & \textbf{0.43}\\ graphpart\_clique-60 & \textbf{2890.51} & \textbf{2884.75}\\ gsg\_0001 & \textbf{8.58} & \textbf{8.56}\\ hadamard\_5 & \textbf{20.45} & \textbf{20.23}\\ heatexch\_spec1 & \textbf{17.7\%} & \textbf{17.7\%}\\ heatexch\_spec2 & \textbf{5\%} & \textbf{5\%}\\ hhfair & \textbf{$\infty$} & \textbf{$\infty$}\\ himmel16 & \textbf{12.95} 
& \textbf{13.27}\\ house & \textbf{108.41} & \textbf{108.71}\\ hs62 & \textbf{0.023\%} & \textbf{0.023\%}\\ hvb11 & \textbf{7.3\%} & \textbf{7.3\%}\\ hybriddynamic\_var & \textbf{0.32\%} & \textbf{0.32\%}\\ hybriddynamic\_varcc & \textbf{158.14} & \textbf{158.12}\\ hydroenergy1 & \textbf{0.65\%} & \textbf{0.65\%}\\ ibs2 & \textbf{5.1\%} & \textbf{5.1\%}\\ johnall & \textbf{44.23} & \textbf{43.95}\\ kall\_circles\_c6b & \textbf{463.62} & \textbf{466.28}\\ kall\_congruentcircles\_c72 & \textbf{49.42} & \textbf{49.78}\\ kissing2 & \textbf{100.0\%} & \textbf{100.0\%}\\ kport20 & \textbf{3596.06} & \textbf{3593.09}\\ kriging\_peaks-red020 & \textbf{85.61} & \textbf{85.98}\\ kriging\_peaks-red100 & \textbf{629.86} & \textbf{630.04}\\ lop97icx & \textbf{5.90} & \textbf{5.90}\\ mathopt5\_7 & \textbf{0.08} & \textbf{0.08}\\ mathopt5\_8 & \textbf{0.06} & \textbf{0.07}\\ maxcsp-geo50-20-d4-75-36 & \textbf{7.71} & \textbf{7.78}\\ meanvar-orl400\_05\_e\_7 & \textbf{$\infty$} & \textbf{$\infty$}\\ meanvar-orl400\_05\_e\_8 & \textbf{6.00} & \textbf{5.98}\\ mhw4d & \textbf{0.43} & 0.62\\ milinfract & \textbf{75.5\%} & \textbf{75.5\%}\\ minlphi & \textbf{100\%} & \textbf{100\%}\\ multiplants\_mtg1a & \textbf{5.1\%} & \textbf{5.1\%}\\ multiplants\_mtg2 & \textbf{2\%} & \textbf{2\%}\\ nd\_netgen-3000-1-1-b-b-ns\_7 & \textbf{3.67} & \textbf{3.59}\\ netmod\_kar1 & \textbf{41.89} & \textbf{41.79}\\ netmod\_kar2 & \textbf{42.01} & \textbf{42.04}\\ nous1 & \textbf{52.93} & \textbf{52.90}\\ nous2 & \textbf{8.29} & \textbf{8.04}\\ nvs02 & 0.22 & \textbf{0.15}\\ nvs06 & \textbf{0.06} & \textbf{0.06}\\ oil2 & nonopt & nonopt\\ optmass & \textbf{463.14} & \textbf{463.57}\\ ortez & \textbf{9.48} & \textbf{9.48}\\ p\_ball\_10b\_5p\_3d\_m & \textbf{49.66} & \textbf{49.32}\\ p\_ball\_15b\_5p\_2d\_m & infeas & infeas\\ parabol5\_2\_3 & \textbf{4095.70} & \textbf{4110.54}\\ parallel & \textbf{102.24} & \textbf{102.42}\\ pedigree\_ex485 & \textbf{276.45} & \textbf{277.27}\\ pedigree\_ex485\_2 & 
\textbf{3.66} & \textbf{3.94}\\ pointpack06 & \textbf{4.76} & \textbf{4.77}\\ pointpack08 & \textbf{208.18} & \textbf{209.02}\\ pooling\_epa1 & \textbf{21.07} & \textbf{21.10}\\ pooling\_epa2 & \textbf{1694.37} & 2039.75\\ portfol\_buyin & \textbf{3.56} & \textbf{3.37}\\ portfol\_card & \textbf{5.29} & \textbf{5.26}\\ powerflow0014r & \textbf{100.0\%} & \textbf{100.0\%}\\ powerflow0057r & \textbf{$\infty$} & \textbf{$\infty$}\\ prob07 & \textbf{16.19} & \textbf{15.83}\\ process & \textbf{0.93} & \textbf{0.92}\\ procurement1mot & \textbf{79.5\%} & \textbf{79.5\%}\\ procurement2mot & \textbf{2.71} & \textbf{2.69}\\ product & nonopt & nonopt\\ product2 & \textbf{2.53} & 2.84\\ prolog & \textbf{100.0\%} & \textbf{100.0\%}\\ qp3 & \textbf{4.6\%} & \textbf{4.6\%}\\ qspp\_0\_10\_0\_1\_10\_1 & \textbf{26.39} & \textbf{26.51}\\ qspp\_0\_11\_0\_1\_10\_1 & \textbf{124.07} & \textbf{124.05}\\ radar-2000-10-a-6\_lat\_7 & \textbf{33.34} & \textbf{32.97}\\ radar-3000-10-a-8\_lat\_7 & \textbf{209.09} & \textbf{208.61}\\ ravempb & 1.26 & \textbf{0.96}\\ risk2bpb & \textbf{10.70} & \textbf{10.68}\\ routingdelay\_bigm & \textbf{18.4\%} & \textbf{18.4\%}\\ rsyn0815m & \textbf{1.02} & \textbf{1.05}\\ rsyn0815m03m & \textbf{44.8\%} & \textbf{44.8\%}\\ sfacloc2\_2\_95 & 1.11 & \textbf{0.79}\\ sfacloc2\_3\_90 & \textbf{67.86} & \textbf{67.78}\\ sjup2 & \textbf{$\infty$} & \textbf{$\infty$}\\ slay06m & 0.69 & \textbf{0.48}\\ slay07m & \textbf{0.61} & 0.91\\ smallinvDAXr1b010-011 & \textbf{8.69} & \textbf{8.99}\\ smallinvDAXr1b020-022 & \textbf{205.80} & \textbf{208.43}\\ sonet17v4 & \textbf{160.55} & \textbf{160.19}\\ sonet18v6 & \textbf{184.16} & \textbf{184.08}\\ sonetgr17 & \textbf{65.24} & \textbf{65.03}\\ spectra2 & \textbf{105\%} & \textbf{105\%}\\ sporttournament24 & \textbf{21.48} & \textbf{21.47}\\ sporttournament30 & \textbf{1946.51} & \textbf{1959.43}\\ sssd12-05persp & \textbf{780.87} & \textbf{780.44}\\ sssd18-06persp & \textbf{19.9\%} & \textbf{20.2\%}\\ st\_testgr1 & 
\textbf{0.07} & \textbf{0.07}\\ st\_testgr3 & \textbf{0.07} & \textbf{0.07}\\ steenbrf & \textbf{98.0\%} & \textbf{98.0\%}\\ stockcycle & \textbf{39.57} & \textbf{39.61}\\ supplychainp1\_022020 & \textbf{8.5\%} & \textbf{8.8\%}\\ supplychainp1\_030510 & \textbf{3.08} & \textbf{3.10}\\ supplychainr1\_022020 & \textbf{2966.92} & \textbf{2966.61}\\ supplychainr1\_030510 & \textbf{0.74} & \textbf{0.74}\\ syn15m04m & \textbf{1.04} & 1.36\\ syn30m02m & 1.83 & \textbf{1.56}\\ synheat & \textbf{17.1\%} & \textbf{17.1\%}\\ tanksize & \textbf{54.61} & \textbf{54.45}\\ telecomsp\_pacbell & \textbf{1125.10} & \textbf{1111.55}\\ tln5 & \textbf{2.08} & \textbf{2.10}\\ tln7 & \textbf{274.33} & \textbf{274.65}\\ tls2 & \textbf{0.18} & 0.22\\ tls4 & \textbf{36.82} & \textbf{37.19}\\ topopt-mbb\_60x40\_50 & \textbf{$\infty$} & \textbf{$\infty$}\\ toroidal2g20\_5555 & \textbf{3.23} & \textbf{2.98}\\ toroidal3g7\_6666 & \textbf{67.16} & \textbf{67.08}\\ transswitch0009r & \textbf{5.6\%} & \textbf{5.5\%}\\ tricp & \textbf{100.0\%} & \textbf{100.0\%}\\ tspn08 & \textbf{2.3\%} & \textbf{2.3\%}\\ tspn15 & \textbf{81.4\%} & \textbf{81.4\%}\\ unitcommit1 & \textbf{1082.69} & \textbf{1092.08}\\ unitcommit2 & \textbf{11.20} & \textbf{11.23}\\ wager & \textbf{252.73} & \textbf{252.96}\\ waste & \textbf{99.9\%} & \textbf{99.9\%}\\ wastepaper3 & \textbf{22.76} & \textbf{22.60}\\ wastepaper4 & \textbf{2099.99} & \textbf{2110.22}\\ wastepaper6 & \textbf{0.16\%} & 0.21\%\\ water4 & \textbf{1648.53} & 2062.46\\ waternd1 & 1.34 & \textbf{1.08}\\ waterno2\_02 & \textbf{4.18} & \textbf{4.11}\\ waterno2\_03 & \textbf{440.65} & \textbf{440.06}\\ waterund01 & \textbf{0.18\%} & \textbf{0.18\%}\\ \bottomrule \end{longtable} } \subsection{Parallel Mode} \label{sec:detailed_multithread} The following table shows the outcomes of running BARON in serial mode and in parallel mode with 4, 8, and 16 threads.
{ \scriptsize \begin{longtable}{l|rrrr} \toprule instance & 1 thread & 4 threads & 8 threads & 16 threads \\ \midrule \endhead alan & \textbf{0.33} & 0.38 & 0.37 & \textbf{0.36}\\ autocorr\_bern20-05 & 7.29 & 1.91 & \textbf{1.33} & \textbf{1.31}\\ autocorr\_bern35-04 & 29.91 & 8.44 & 5.71 & \textbf{4.66}\\ ball\_mk2\_10 & \textbf{0.07} & \textbf{0.07} & \textbf{0.07} & 0.20\\ ball\_mk2\_30 & 0.25 & \textbf{0.08} & \textbf{0.08} & \textbf{0.08}\\ ball\_mk3\_10 & \textbf{10.58} & \textbf{11.19} & \textbf{11.35} & 11.71\\ batch0812\_nc & 6.57 & \textbf{5.33} & 6.47 & \textbf{5.32}\\ batchs101006m & \textbf{6.91} & 13.82 & 9.41 & 11.02\\ batchs121208m & 47.33 & \textbf{26.40} & \textbf{24.95} & 32.16\\ bayes2\_20 & \textbf{67.77} & \textbf{68.42} & \textbf{70.96} & \textbf{68.64}\\ bayes2\_30 & \textbf{21.02} & \textbf{22.46} & \textbf{21.94} & \textbf{23.08}\\ blend029 & \textbf{1.03} & 1.08 & 1.25 & \textbf{0.95}\\ blend146 & 3648.64 & \textbf{2175.95} & 2854.41 & 3654.96\\ camshape100 & \textbf{9.2\%} & \textbf{9.2\%} & \textbf{9.2\%} & \textbf{9.2\%}\\ cardqp\_inlp & 12.40 & \textbf{10.82} & 12.25 & 18.77\\ cardqp\_iqp & 12.92 & \textbf{10.80} & 12.49 & 18.70\\ carton7 & 987.64 & 1964.21 & 1441.91 & \textbf{832.10}\\ carton9 & 45.0\% & \textbf{36.0\%} & 46.9\% & \textbf{37.9\%}\\ casctanks & \textbf{256.46} & \textbf{251.20} & \textbf{248.18} & \textbf{251.93}\\ cecil\_13 & 280.62 & 238.03 & \textbf{201.60} & \textbf{195.64}\\ celar6-sub0 & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%}\\ chakra & \textbf{0.14} & \textbf{0.15} & \textbf{0.14} & \textbf{0.15}\\ chem & \textbf{0.04} & \textbf{0.04} & \textbf{0.04} & \textbf{0.04}\\ chenery & 1.40 & 1.42 & \textbf{1.23} & \textbf{1.16}\\ chimera\_k64maxcut-01 & 560.62 & 183.06 & \textbf{119.55} & 171.40\\ chimera\_mis-01 & 13.69 & \textbf{12.39} & \textbf{11.84} & \textbf{12.54}\\ chp\_shorttermplan1a & 2296.84 & 2329.95 & 0.061\% & \textbf{1276.12}\\ chp\_shorttermplan2a & \textbf{937.38} & 
\textbf{1002.49} & \textbf{999.57} & \textbf{956.66}\\ chp\_shorttermplan2b & 0.34\% & \textbf{0.31\%} & \textbf{0.3\%} & \textbf{0.31\%}\\ clay0204m & 0.47 & 0.37 & \textbf{0.24} & 1.28\\ clay0205m & 4.92 & 3.78 & \textbf{2.63} & 4.94\\ color\_lab3\_3x0 & 2654.00 & 606.50 & 383.87 & \textbf{346.70}\\ crossdock\_15x7 & 385.56 & 87.79 & 49.36 & \textbf{38.69}\\ crossdock\_15x8 & 788.81 & 322.11 & 128.24 & \textbf{102.94}\\ crudeoil\_lee1\_07 & \textbf{7.50} & \textbf{7.47} & 27.59 & \textbf{6.98}\\ crudeoil\_pooling\_ct2 & 4.95 & 41.75 & 21.47 & \textbf{4.28}\\ csched1 & \textbf{13.03} & \textbf{12.42} & 29.59 & 31.28\\ csched1a & \textbf{0.70} & \textbf{0.76} & \textbf{0.70} & 0.78\\ cvxnonsep\_psig20 & \textbf{0.99} & \textbf{0.92} & \textbf{0.91} & \textbf{0.95}\\ cvxnonsep\_psig30 & \textbf{4.25} & \textbf{4.06} & \textbf{4.13} & \textbf{4.21}\\ du-opt & 2.37 & 2.18 & \textbf{1.95} & 2.26\\ du-opt5 & \textbf{3.61} & \textbf{3.34} & \textbf{3.32} & \textbf{3.54}\\ edgecross10-040 & 7.25 & 2.73 & \textbf{2.13} & 2.54\\ edgecross10-080 & 51.58 & 15.21 & 10.25 & \textbf{9.23}\\ eg\_all\_s & infeas & infeas & infeas & \textbf{90.9\%}\\ eigena2 & \textbf{1146.69} & \textbf{1150.94} & \textbf{1136.30} & \textbf{1132.85}\\ elec50 & \textbf{66.5\%} & \textbf{66.5\%} & \textbf{66.5\%} & \textbf{66.5\%}\\ elf & 5.91 & \textbf{4.52} & \textbf{4.24} & 5.65\\ eniplac & 4.54 & \textbf{4.04} & 5.48 & 9.66\\ enpro56pb & \textbf{1.84} & 3.38 & \textbf{2.01} & 2.92\\ ex1244 & 4.86 & \textbf{4.56} & \textbf{4.49} & \textbf{4.23}\\ ex1252a & \textbf{9.12} & \textbf{9.19} & \textbf{8.97} & 10.85\\ faclay20h & 349.36 & 91.42 & 98.32 & \textbf{47.33}\\ faclay80 & \textbf{120\%} & \textbf{120\%} & \textbf{120\%} & \textbf{120\%}\\ feedtray & \textbf{80.5\%} & \textbf{80.5\%} & \textbf{80.5\%} & \textbf{80.5\%}\\ fin2bb & 114.16 & \textbf{83.45} & 97.44 & 91.84\\ flay04m & 10.56 & \textbf{9.18} & \textbf{8.80} & \textbf{8.94}\\ flay05m & 394.85 & \textbf{370.14} & \textbf{353.95} & 
411.75\\ flay06m & \textbf{7\%} & 7.2\% & \textbf{6.4\%} & \textbf{6.6\%}\\ fo7\_ar25\_1 & 17.00 & 4.55 & 5.17 & \textbf{3.37}\\ fo7\_ar3\_1 & 25.69 & 52.69 & 30.43 & \textbf{21.73}\\ forest & \textbf{747.74} & 1311.21 & 877.73 & 1470.56\\ gabriel01 & 1760.52 & 1619.79 & 1272.26 & \textbf{1131.82}\\ gabriel02 & 11.1\% & 18.1\% & \textbf{9\%} & 15.6\%\\ gasnet & \textbf{22.38} & 57.1\% & 50.7\% & 50.7\%\\ gasprod\_sarawak16 & 4578.18 & 5171.96 & 2524.08 & \textbf{2214.82}\\ gastrans582\_cold13\_95 & 1442.44 & \textbf{732.29} & 954.39 & 1435.95\\ gastrans582\_mild11 & \textbf{687.72} & 1206.29 & 1283.65 & 2130.36\\ gear & \textbf{0.12} & \textbf{0.11} & \textbf{0.11} & 0.28\\ gear2 & \textbf{0.64} & \textbf{0.62} & 0.69 & \textbf{0.67}\\ gear4 & \textbf{1.63} & \textbf{1.53} & 1.90 & 2.39\\ genpooling\_lee1 & 9.37 & \textbf{5.21} & \textbf{5.37} & \textbf{5.46}\\ genpooling\_lee2 & \textbf{85.93} & \textbf{84.33} & \textbf{83.67} & \textbf{85.79}\\ ghg\_1veh & \textbf{2.77} & \textbf{2.96} & 3.11 & \textbf{2.98}\\ gilbert & \textbf{2.35} & 2.60 & \textbf{2.56} & \textbf{2.38}\\ graphpart\_2g-0066-0066 & 0.65 & 0.76 & 0.46 & \textbf{0.37}\\ graphpart\_clique-60 & \textbf{34.6\%} & 100.0\% & 100.0\% & 100.0\%\\ gsg\_0001 & 13.68 & 16.00 & 16.69 & \textbf{11.94}\\ hadamard\_5 & 54.92 & 17.35 & \textbf{8.54} & 9.77\\ heatexch\_spec1 & \textbf{1.39} & 1.64 & 1.76 & \textbf{1.32}\\ heatexch\_spec2 & \textbf{6.60} & 7.91 & 8.50 & 7.88\\ hhfair & 0.43 & 0.39 & 0.42 & \textbf{0.35}\\ himmel16 & 41.46 & \textbf{36.26} & \textbf{36.42} & \textbf{36.46}\\ house & \textbf{0.37} & \textbf{0.38} & \textbf{0.38} & 0.57\\ hs62 & 1.07 & 1.08 & \textbf{0.94} & \textbf{0.88}\\ hvb11 & \textbf{175.10} & \textbf{174.88} & \textbf{179.60} & \textbf{177.88}\\ hybriddynamic\_var & \textbf{0.75} & \textbf{0.75} & 0.98 & 1.09\\ hybriddynamic\_varcc & 0.91 & \textbf{0.75} & \textbf{0.77} & \textbf{0.78}\\ hydroenergy1 & \textbf{1026.99} & \textbf{1057.36} & \textbf{1054.54} & \textbf{1057.15}\\ 
ibs2 & nonopt & nonopt & nonopt & nonopt\\ johnall & 2.85 & 2.51 & 2.50 & \textbf{2.23}\\ kall\_circles\_c6b & \textbf{310.13} & \textbf{313.79} & \textbf{307.19} & \textbf{306.85}\\ kall\_congruentcircles\_c72 & 26.45 & 27.78 & \textbf{24.66} & \textbf{22.91}\\ kissing2 & \textbf{184.78} & 207.89 & \textbf{184.05} & 207.42\\ kport20 & 6.1\% & \textbf{3.3\%} & 6.8\% & 5.7\%\\ kriging\_peaks-red020 & 10.33 & \textbf{9.90} & \textbf{9.00} & \textbf{9.43}\\ kriging\_peaks-red100 & \textbf{196.08} & \textbf{198.30} & \textbf{195.92} & \textbf{193.18}\\ lop97icx & 1709.08 & 450.05 & \textbf{315.56} & \textbf{317.81}\\ mathopt5\_7 & 0.29 & \textbf{0.13} & \textbf{0.12} & \textbf{0.12}\\ mathopt5\_8 & \textbf{0.26} & 0.43 & 0.40 & \textbf{0.26}\\ maxcsp-geo50-20-d4-75-36 & \textbf{16.94} & 18.01 & \textbf{15.92} & \textbf{17.40}\\ meanvar-orl400\_05\_e\_7 & \textbf{95.4\%} & \textbf{94.7\%} & \textbf{94.3\%} & \textbf{94.8\%}\\ meanvar-orl400\_05\_e\_8 & 600.24 & 541.81 & 508.87 & \textbf{455.48}\\ mhw4d & 0.83 & \textbf{0.70} & \textbf{0.67} & 0.80\\ milinfract & \textbf{55.06} & \textbf{54.78} & \textbf{54.71} & \textbf{55.20}\\ minlphi & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ multiplants\_mtg1a & 2245.95 & 2338.78 & 4026.31 & \textbf{1138.55}\\ multiplants\_mtg2 & \textbf{2115.70} & \textbf{2189.56} & \textbf{2175.33} & \textbf{2125.52}\\ nd\_netgen-3000-1-1-b-b-ns\_7 & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ netmod\_kar1 & \textbf{4.53} & 5.70 & 5.71 & 6.81\\ netmod\_kar2 & \textbf{5.26} & \textbf{5.46} & \textbf{5.75} & 6.88\\ nous1 & 51.20 & \textbf{47.60} & \textbf{44.39} & 53.14\\ nous2 & 0.40 & \textbf{0.35} & \textbf{0.34} & \textbf{0.36}\\ nvs02 & \textbf{0.04} & 0.05 & 0.05 & 0.05\\ nvs06 & \textbf{0.08} & \textbf{0.08} & 0.10 & 0.10\\ oil2 & 3.30 & 3.02 & 3.00 & \textbf{2.67}\\ optmass & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ ortez & 
nonopt & nonopt & nonopt & nonopt\\ p\_ball\_10b\_5p\_3d\_m & 32.22 & 23.87 & \textbf{20.81} & 25.42\\ p\_ball\_15b\_5p\_2d\_m & 90.52 & \textbf{63.59} & \textbf{63.02} & 77.58\\ parabol5\_2\_3 & \textbf{0.051\%} & \textbf{0.051\%} & \textbf{0.051\%} & \textbf{0.051\%}\\ parallel & \textbf{19.19} & 21.16 & 20.85 & \textbf{18.15}\\ pedigree\_ex485 & 68.37 & \textbf{39.15} & 79.06 & 414.20\\ pedigree\_ex485\_2 & 17.30 & \textbf{14.20} & 16.23 & \textbf{14.65}\\ pointpack06 & 5.59 & 6.60 & \textbf{4.94} & 5.96\\ pointpack08 & \textbf{159.31} & \textbf{153.67} & \textbf{154.66} & \textbf{151.67}\\ pooling\_epa1 & \textbf{12.84} & \textbf{11.97} & 14.83 & 14.15\\ pooling\_epa2 & 2.4\% & \textbf{0.44\%} & 5.5\% & 3\%\\ portfol\_buyin & \textbf{0.34} & \textbf{0.33} & \textbf{0.34} & \textbf{0.32}\\ portfol\_card & \textbf{0.31} & \textbf{0.30} & \textbf{0.29} & \textbf{0.30}\\ powerflow0014r & \textbf{50.50} & \textbf{51.54} & \textbf{50.22} & \textbf{53.19}\\ powerflow0057r & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ prob07 & \textbf{75.62} & 81.05 & \textbf{76.59} & \textbf{71.80}\\ process & 0.51 & 0.52 & \textbf{0.42} & \textbf{0.42}\\ procurement1mot & \textbf{59.57} & \textbf{59.24} & \textbf{63.12} & \textbf{61.09}\\ procurement2mot & 4.13 & 4.24 & \textbf{3.78} & \textbf{3.65}\\ product & 48.09 & \textbf{24.31} & 49.29 & nonopt\\ product2 & \textbf{2.90} & 225.61 & 76.64 & 289.08\\ prolog & \textbf{0.31} & \textbf{0.32} & \textbf{0.33} & 0.36\\ qp3 & \textbf{13.7\%} & \textbf{14.0\%} & \textbf{14.2\%} & \textbf{14.4\%}\\ qspp\_0\_10\_0\_1\_10\_1 & \textbf{24.2\%} & \textbf{23.8\%} & \textbf{23.7\%} & \textbf{23.8\%}\\ qspp\_0\_11\_0\_1\_10\_1 & \textbf{30.5\%} & \textbf{30.1\%} & \textbf{30.1\%} & \textbf{30.1\%}\\ radar-2000-10-a-6\_lat\_7 & \textbf{75.1\%} & 91.0\% & 91.0\% & 91.0\%\\ radar-3000-10-a-8\_lat\_7 & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%}\\ ravempb & \textbf{0.82} & 
\textbf{0.80} & 1.05 & 0.94\\ risk2bpb & 0.70 & \textbf{0.43} & \textbf{0.46} & 0.49\\ routingdelay\_bigm & 26.08 & \textbf{23.23} & \textbf{23.91} & \textbf{24.85}\\ rsyn0815m & 0.29 & \textbf{0.26} & \textbf{0.26} & \textbf{0.26}\\ rsyn0815m03m & \textbf{31.23} & 38.94 & 34.49 & 39.53\\ sfacloc2\_2\_95 & 0.31 & \textbf{0.28} & 0.41 & \textbf{0.27}\\ sfacloc2\_3\_90 & 29.12 & 24.28 & \textbf{14.83} & \textbf{16.30}\\ sjup2 & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ slay06m & 0.46 & \textbf{0.41} & 0.48 & 0.59\\ slay07m & \textbf{0.62} & 0.64 & \textbf{0.58} & \textbf{0.58}\\ smallinvDAXr1b010-011 & 1.62 & \textbf{1.52} & \textbf{1.44} & \textbf{1.52}\\ smallinvDAXr1b020-022 & 1.95 & 2.26 & \textbf{1.63} & \textbf{1.75}\\ sonet17v4 & 339.62 & 62.99 & \textbf{33.05} & \textbf{33.46}\\ sonet18v6 & 429.30 & 97.11 & \textbf{62.50} & \textbf{60.56}\\ sonetgr17 & 346.77 & 74.19 & 44.37 & \textbf{29.83}\\ spectra2 & \textbf{1.37} & \textbf{1.29} & 1.47 & \textbf{1.38}\\ sporttournament24 & 4982.31 & 1521.61 & \textbf{722.06} & \textbf{763.20}\\ sporttournament30 & \textbf{11.8\%} & 67.7\% & 67.7\% & 67.7\%\\ sssd12-05persp & \textbf{27.8\%} & \textbf{28.0\%} & \textbf{27.8\%} & \textbf{27.6\%}\\ sssd18-06persp & \textbf{41.4\%} & \textbf{40.5\%} & \textbf{41.4\%} & \textbf{41.3\%}\\ st\_testgr1 & 0.24 & \textbf{0.14} & 0.16 & 0.22\\ st\_testgr3 & \textbf{0.24} & 0.28 & 0.35 & 0.50\\ steenbrf & \textbf{14.39} & \textbf{15.32} & \textbf{15.07} & 17.55\\ stockcycle & 7.82 & \textbf{6.48} & \textbf{6.94} & \textbf{7.08}\\ supplychainp1\_022020 & nonopt & nonopt & nonopt & nonopt\\ supplychainp1\_030510 & nonopt & nonopt & nonopt & nonopt\\ supplychainr1\_022020 & 6.13 & \textbf{5.67} & \textbf{5.47} & \textbf{5.25}\\ supplychainr1\_030510 & 0.28 & \textbf{0.26} & \textbf{0.25} & 0.37\\ syn15m04m & 0.54 & \textbf{0.45} & \textbf{0.47} & \textbf{0.49}\\ syn30m02m & 2.51 & \textbf{2.20} & \textbf{2.38} & \textbf{2.21}\\ synheat & 35.92 & 
28.74 & \textbf{26.21} & \textbf{25.37}\\ tanksize & 4.81 & \textbf{4.17} & 4.75 & \textbf{4.07}\\ telecomsp\_pacbell & 3393.96 & 1396.50 & \textbf{1189.08} & 1401.72\\ tln5 & 5.26 & \textbf{1.33} & 5.03 & 5.53\\ tln7 & 2935.35 & \textbf{1739.45} & 3085.42 & 2.7\%\\ tls2 & \textbf{0.13} & \textbf{0.12} & 0.22 & \textbf{0.12}\\ tls4 & 35.15 & 16.87 & \textbf{14.67} & 20.79\\ topopt-mbb\_60x40\_50 & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ toroidal2g20\_5555 & 3747.17 & 114.31 & \textbf{81.54} & 190.59\\ toroidal3g7\_6666 & \textbf{6.2\%} & 74.7\% & 74.7\% & 74.7\%\\ transswitch0009r & \textbf{8\%} & \textbf{8\%} & \textbf{8\%} & \textbf{8\%}\\ tricp & \textbf{0.82} & 1.25 & 1.11 & 1.14\\ tspn08 & 10.81 & \textbf{9.28} & 11.25 & \textbf{9.49}\\ tspn15 & 884.44 & 1003.34 & \textbf{412.58} & \textbf{410.34}\\ unitcommit1 & \textbf{0.27\%} & \textbf{0.27\%} & \textbf{0.27\%} & \textbf{0.27\%}\\ unitcommit2 & 7200.00 & \textbf{725.90} & 6169.57 & \textbf{732.94}\\ wager & \textbf{15.30} & 27.54 & 25.87 & 27.99\\ waste & \textbf{22.55} & \textbf{24.58} & \textbf{23.21} & 29.11\\ wastepaper3 & \textbf{9.19} & 14.02 & 14.63 & 12.67\\ wastepaper4 & 871.50 & \textbf{417.33} & 763.71 & 556.85\\ wastepaper6 & 0.022\% & 0.022\% & \textbf{7200.00} & 0.022\%\\ water4 & \textbf{44.4\%} & \textbf{43.7\%} & \textbf{47.6\%} & 49.2\%\\ waternd1 & \textbf{5.50} & 6.83 & 6.98 & 6.07\\ waterno2\_02 & 5.20 & \textbf{4.18} & \textbf{4.33} & 5.03\\ waterno2\_03 & \textbf{260.33} & \textbf{262.06} & \textbf{272.21} & \textbf{272.70}\\ waterund01 & \textbf{0.35\%} & \textbf{0.35\%} & \textbf{0.35\%} & \textbf{0.35\%}\\ \bottomrule \end{longtable} } The following table shows the outcome from running Lindo API in serial and parallel mode. 
{ \scriptsize \begin{longtable}{l|rrrr} \toprule instance & 1 thread & 4 threads & 8 threads & 16 threads \\ \midrule \endhead alan & \textbf{0.89\%} & \textbf{0.89\%} & \textbf{0.89\%} & \textbf{0.89\%}\\ autocorr\_bern20-05 & 100.43 & 33.83 & \textbf{23.88} & \textbf{21.91}\\ autocorr\_bern35-04 & 13.5\% & 2655.83 & \textbf{1788.33} & 2461.81\\ ball\_mk2\_10 & 3.98 & 2.17 & \textbf{1.28} & 1.43\\ ball\_mk2\_30 & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%}\\ ball\_mk3\_10 & 3.58 & \textbf{1.79} & 2.05 & 2.17\\ batch0812\_nc & 25.46 & \textbf{18.73} & \textbf{17.51} & \textbf{17.68}\\ batchs101006m & \textbf{134.04} & abort & abort & abort\\ batchs121208m & \textbf{198.76} & abort & abort & abort\\ bayes2\_20 & \textbf{0.033\%} & \textbf{0.033\%} & \textbf{0.033\%} & infeas\\ bayes2\_30 & \textbf{7200.00} & abort & \textbf{7200.00} & infeas\\ blend029 & abort & 205.94 & 166.19 & \textbf{96.01}\\ blend146 & abort & 5.8\% & \textbf{4.2\%} & 6.8\%\\ camshape100 & \textbf{9.2\%} & \textbf{8.9\%} & \textbf{8.7\%} & \textbf{8.4\%}\\ cardqp\_inlp & \textbf{134.08} & nonopt & nonopt & nonopt\\ cardqp\_iqp & \textbf{133.22} & nonopt & nonopt & nonopt\\ carton7 & \textbf{110.53} & \textbf{110.35} & \textbf{110.48} & \textbf{110.50}\\ carton9 & \textbf{287.45} & \textbf{287.68} & \textbf{287.13} & \textbf{287.83}\\ casctanks & \textbf{3.4\%} & \textbf{3.4\%} & \textbf{3.4\%} & \textbf{3.4\%}\\ cecil\_13 & 0.32\% & 0.16\% & 0.11\% & \textbf{5951.36}\\ celar6-sub0 & \textbf{100\%} & \textbf{100\%} & \textbf{100\%} & \textbf{100\%}\\ chakra & 1.28 & \textbf{0.34} & infeas & infeas\\ chem & 1172.22 & 471.44 & \textbf{314.57} & \textbf{329.14}\\ chenery & \textbf{0.43\%} & \textbf{0.43\%} & \textbf{0.43\%} & abort\\ chimera\_k64maxcut-01 & \textbf{16.4\%} & \textbf{16.4\%} & \textbf{16.1\%} & \textbf{16.7\%}\\ chimera\_mis-01 & nonopt & nonopt & nonopt & nonopt\\ chp\_shorttermplan1a & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & 
\textbf{$\infty$}\\ chp\_shorttermplan2a & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ chp\_shorttermplan2b & 16.6\% & 6045.30 & 5089.35 & \textbf{3782.12}\\ clay0204m & 4.75 & \textbf{2.51} & \textbf{2.56} & \textbf{2.44}\\ clay0205m & 51.80 & 18.76 & 11.84 & \textbf{10.23}\\ color\_lab3\_3x0 & \textbf{65.6\%} & \textbf{65.6\%} & \textbf{65.6\%} & \textbf{65.6\%}\\ crossdock\_15x7 & \textbf{124\%} & \textbf{124\%} & \textbf{124\%} & \textbf{124\%}\\ crossdock\_15x8 & \textbf{124\%} & \textbf{124\%} & \textbf{124\%} & \textbf{124\%}\\ crudeoil\_lee1\_07 & \textbf{63.26} & \textbf{63.10} & \textbf{63.51} & \textbf{63.54}\\ crudeoil\_pooling\_ct2 & 1.8\% & 3.1\% & \textbf{1.6\%} & \textbf{1.5\%}\\ csched1 & 66.81 & 65.05 & 46.49 & \textbf{36.59}\\ csched1a & 4.85 & 2.82 & \textbf{2.44} & \textbf{2.36}\\ cvxnonsep\_psig20 & 0.48 & \textbf{0.25} & \textbf{0.25} & \textbf{0.25}\\ cvxnonsep\_psig30 & 3.26 & \textbf{3.00} & \textbf{3.05} & \textbf{2.95}\\ du-opt & \textbf{5.35} & 5.94 & 5.92 & 6.74\\ du-opt5 & \textbf{4.70} & \textbf{4.77} & \textbf{4.63} & 5.17\\ edgecross10-040 & 0.29 & \textbf{0.12} & \textbf{0.12} & \textbf{0.12}\\ edgecross10-080 & nonopt & nonopt & nonopt & nonopt\\ eg\_all\_s & abort & abort & abort & abort\\ eigena2 & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ elec50 & 798.72 & \textbf{648.65} & 757.22 & \textbf{656.49}\\ elf & nonopt & \textbf{2.56} & infeas & 3.02\\ eniplac & 3\% & 3952.97 & 3085.99 & \textbf{2683.08}\\ enpro56pb & 1454.20 & 564.02 & 92.37 & \textbf{42.35}\\ ex1244 & 9.69 & \textbf{6.99} & \textbf{6.71} & \textbf{6.71}\\ ex1252a & infeas & infeas & infeas & infeas\\ faclay20h & nonopt & nonopt & nonopt & nonopt\\ faclay80 & \textbf{5096.67} & \textbf{5092.40} & \textbf{5092.51} & \textbf{5120.74}\\ feedtray & \textbf{13.79} & \textbf{13.86} & \textbf{13.58} & \textbf{13.71}\\ fin2bb & \textbf{1294.03} & 3136.42 & 0.59\% & 3467.31\\ flay04m & 647.41 & 231.73 
& 166.42 & \textbf{123.11}\\ flay05m & 0.54\% & 0.038\% & 5375.73 & \textbf{3837.92}\\ flay06m & 14.3\% & 9.5\% & 8.2\% & \textbf{6.8\%}\\ fo7\_ar25\_1 & 1282.08 & \textbf{662.73} & \textbf{661.65} & \textbf{661.84}\\ fo7\_ar3\_1 & 1847.41 & \textbf{796.80} & \textbf{752.45} & 954.82\\ forest & \textbf{1.18} & \textbf{1.18} & \textbf{1.23} & 1.53\\ gabriel01 & 5.4\% & 4.5\% & 4.2\% & \textbf{3.8\%}\\ gabriel02 & \textbf{44.0\%} & \textbf{43.0\%} & \textbf{42.2\%} & \textbf{42.1\%}\\ gasnet & \textbf{64.9\%} & \textbf{64.1\%} & \textbf{63.9\%} & \textbf{63.8\%}\\ gasprod\_sarawak16 & infeas & \textbf{0.4\%} & \textbf{0.4\%} & 1.2\%\\ gastrans582\_cold13\_95 & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ gastrans582\_mild11 & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ gear & \textbf{0.05} & 0.06 & 0.06 & 0.06\\ gear2 & \textbf{0.23} & \textbf{0.23} & \textbf{0.24} & \textbf{0.24}\\ gear4 & infeas & infeas & infeas & infeas\\ genpooling\_lee1 & 0.41\% & 680.62 & \textbf{136.93} & infeas\\ genpooling\_lee2 & 3724.02 & 169.90 & 111.71 & \textbf{86.41}\\ ghg\_1veh & 106.61 & 60.51 & \textbf{46.93} & \textbf{45.15}\\ gilbert & \textbf{14.25} & 28.89 & 25.67 & 34.60\\ graphpart\_2g-0066-0066 & \textbf{8.7\%} & \textbf{8.7\%} & \textbf{8.7\%} & \textbf{8.7\%}\\ graphpart\_clique-60 & \textbf{83.3\%} & \textbf{83.3\%} & \textbf{83.3\%} & \textbf{83.3\%}\\ gsg\_0001 & 351.30 & 578.61 & \textbf{338.79} & \textbf{308.84}\\ hadamard\_5 & 129.27 & 73.47 & 59.47 & \textbf{35.89}\\ heatexch\_spec1 & 0.34\% & \textbf{0.14\%} & 0.16\% & 0.25\%\\ heatexch\_spec2 & \textbf{0.041\%} & 0.052\% & \textbf{0.039\%} & \textbf{0.038\%}\\ hhfair & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%}\\ himmel16 & 20.42 & 8.10 & 5.33 & \textbf{3.63}\\ house & \textbf{0.89} & infeas & infeas & infeas\\ hs62 & 4.04 & 3.33 & 2.13 & \textbf{1.81}\\ hvb11 & 40.7\% & 27.4\% & \textbf{20.5\%} & nonopt\\ 
hybriddynamic\_var & 4.11 & 1.54 & 1.26 & \textbf{0.78}\\ hybriddynamic\_varcc & 12.00 & 4.55 & 3.17 & \textbf{2.71}\\ hydroenergy1 & 0.93\% & 0.74\% & 0.69\% & \textbf{0.59\%}\\ ibs2 & abort & abort & abort & abort\\ johnall & \textbf{22.36} & \textbf{22.56} & \textbf{22.26} & \textbf{22.48}\\ kall\_circles\_c6b & \textbf{62.28} & 120.62 & 90.79 & 80.68\\ kall\_congruentcircles\_c72 & \textbf{4.74} & 7.27 & \textbf{5.22} & 5.33\\ kissing2 & \textbf{867.71} & \textbf{866.79} & \textbf{868.19} & \textbf{869.92}\\ kport20 & 13.1\% & 13.1\% & \textbf{12.4\%} & \textbf{11.5\%}\\ kriging\_peaks-red020 & 21.79 & 7.24 & 4.12 & \textbf{3.65}\\ kriging\_peaks-red100 & 140.14 & 59.63 & 41.23 & \textbf{32.35}\\ lop97icx & 39.83 & \textbf{13.81} & nonopt & nonopt\\ mathopt5\_7 & 12.35 & \textbf{10.93} & \textbf{10.78} & 11.87\\ mathopt5\_8 & 9.72 & \textbf{8.27} & \textbf{8.20} & 9.06\\ maxcsp-geo50-20-d4-75-36 & \textbf{101\%} & \textbf{101\%} & \textbf{101\%} & \textbf{101\%}\\ meanvar-orl400\_05\_e\_7 & 23.95 & \textbf{16.40} & \textbf{14.93} & nonopt\\ meanvar-orl400\_05\_e\_8 & \textbf{19.32} & \textbf{19.40} & abort & \textbf{19.05}\\ mhw4d & 0.78 & 0.63 & \textbf{0.41} & 0.65\\ milinfract & infeas & \textbf{68.0\%} & \textbf{67.8\%} & \textbf{68.1\%}\\ minlphi & 1.86 & \textbf{1.14} & infeas & infeas\\ multiplants\_mtg1a & 15.6\% & 11.9\% & \textbf{5.6\%} & 7\%\\ multiplants\_mtg2 & 0.12\% & \textbf{1303.99} & \textbf{1336.22} & 1444.62\\ nd\_netgen-3000-1-1-b-b-ns\_7 & \textbf{680.80} & infeas & 760.69 & infeas\\ netmod\_kar1 & 5948.45 & 5.7\% & 6.2\% & \textbf{4040.34}\\ netmod\_kar2 & \textbf{5917.59} & 6.5\% & 7.6\% & infeas\\ nous1 & 2.9\% & 0.79\% & \textbf{6836.04} & \textbf{7023.44}\\ nous2 & 2.16 & 1.27 & \textbf{1.08} & 1.36\\ nvs02 & \textbf{69.68} & \textbf{70.03} & \textbf{69.93} & \textbf{70.27}\\ nvs06 & \textbf{6.64} & \textbf{6.36} & \textbf{6.68} & \textbf{6.68}\\ oil2 & \textbf{43.56} & \textbf{43.27} & \textbf{43.63} & \textbf{43.29}\\ optmass & 
\textbf{11.3\%} & 12.8\% & 12.8\% & 12.8\%\\ ortez & \textbf{2.42} & \textbf{2.38} & \textbf{2.28} & \textbf{2.38}\\ p\_ball\_10b\_5p\_3d\_m & 26.41 & 11.49 & \textbf{7.87} & 9.79\\ p\_ball\_15b\_5p\_2d\_m & 101.62 & 32.66 & \textbf{13.84} & 19.15\\ parabol5\_2\_3 & \textbf{15.7\%} & \textbf{15.6\%} & \textbf{15.5\%} & \textbf{15.5\%}\\ parallel & \textbf{38.93} & 153.77 & 89.39 & 66.77\\ pedigree\_ex485 & 5.8\% & 5.7\% & 5.1\% & \textbf{2.4\%}\\ pedigree\_ex485\_2 & 192.97 & 128.78 & \textbf{83.00} & \textbf{81.80}\\ pointpack06 & 27.80 & 11.29 & \textbf{6.94} & 10.78\\ pointpack08 & 3611.87 & 1638.66 & 2129.26 & \textbf{1356.41}\\ pooling\_epa1 & infeas & infeas & infeas & abort\\ pooling\_epa2 & 0.041\% & \textbf{4885.54} & 0.081\% & 6389.62\\ portfol\_buyin & \textbf{2.34} & \textbf{2.27} & \textbf{2.28} & \textbf{2.29}\\ portfol\_card & \textbf{2.15} & 2.19 & \textbf{1.96} & 2.20\\ powerflow0014r & 100.0\% & 82.1\% & 71.3\% & \textbf{60.6\%}\\ powerflow0057r & \textbf{$\infty$} & abort & abort & abort\\ prob07 & \textbf{68.41} & 155.23 & 105.41 & 100.26\\ process & 1.38 & 1.88 & 1.10 & \textbf{0.96}\\ procurement1mot & \textbf{86.4\%} & \textbf{88.5\%} & \textbf{89.2\%} & \textbf{89.3\%}\\ procurement2mot & \textbf{13.59} & abort & abort & abort\\ product & 465.36 & 448.67 & \textbf{267.76} & infeas\\ product2 & \textbf{145.38} & \textbf{145.40} & \textbf{145.40} & \textbf{145.42}\\ prolog & 100.0\% & \textbf{0.40} & infeas & 1.50\\ qp3 & \textbf{0.01} & 0.01 & 0.01 & 0.01\\ qspp\_0\_10\_0\_1\_10\_1 & \textbf{632.32} & \textbf{624.71} & \textbf{631.86} & \textbf{623.84}\\ qspp\_0\_11\_0\_1\_10\_1 & \textbf{3676.78} & \textbf{3669.93} & \textbf{3677.76} & \textbf{3673.84}\\ radar-2000-10-a-6\_lat\_7 & \textbf{33.14} & \textbf{33.30} & \textbf{33.04} & \textbf{33.00}\\ radar-3000-10-a-8\_lat\_7 & 558.69 & \textbf{495.51} & \textbf{498.02} & \textbf{538.74}\\ ravempb & 3438.61 & 305.70 & \textbf{26.47} & infeas\\ risk2bpb & 1.46 & \textbf{1.20} & \textbf{1.20} & 
\textbf{1.19}\\ routingdelay\_bigm & nonopt & nonopt & nonopt & nonopt\\ rsyn0815m & 92.08 & 18.96 & 15.88 & \textbf{14.40}\\ rsyn0815m03m & nonopt & nonopt & infeas & nonopt\\ sfacloc2\_2\_95 & \textbf{5.25} & \textbf{5.49} & \textbf{5.55} & \textbf{5.50}\\ sfacloc2\_3\_90 & 588.76 & 496.17 & \textbf{400.13} & 0.24\%\\ sjup2 & 3456.91 & \textbf{2496.08} & \textbf{2490.04} & \textbf{2495.30}\\ slay06m & 5.66 & \textbf{4.64} & \textbf{4.32} & 5.06\\ slay07m & \textbf{6.96} & \textbf{6.39} & \textbf{6.64} & \textbf{6.65}\\ smallinvDAXr1b010-011 & \textbf{3.51} & 4.58 & 4.67 & 5.10\\ smallinvDAXr1b020-022 & \textbf{3.46} & 4.79 & nonopt & nonopt\\ sonet17v4 & nonopt & nonopt & nonopt & nonopt\\ sonet18v6 & nonopt & nonopt & nonopt & nonopt\\ sonetgr17 & nonopt & \textbf{1464.38} & nonopt & nonopt\\ spectra2 & infeas & infeas & infeas & infeas\\ sporttournament24 & 6.7\% & 5.4\% & \textbf{4.7\%} & \textbf{4.7\%}\\ sporttournament30 & 12.5\% & \textbf{10.4\%} & \textbf{11.2\%} & \textbf{10.8\%}\\ sssd12-05persp & \textbf{378.13} & abort & abort & abort\\ sssd18-06persp & \textbf{7200.00} & \textbf{7200.00} & abort & abort\\ st\_testgr1 & 5.51 & \textbf{2.49} & 2.78 & nonopt\\ st\_testgr3 & 3.67 & \textbf{3.20} & 3.79 & 4.19\\ steenbrf & \textbf{0.75} & \textbf{0.73} & \textbf{0.77} & 1.52\\ stockcycle & \textbf{3.18} & 3.97 & 4.50 & 3.75\\ supplychainp1\_022020 & 20.8\% & 13.9\% & \textbf{11.9\%} & \textbf{11.8\%}\\ supplychainp1\_030510 & 63.91 & 66.23 & 63.40 & \textbf{48.63}\\ supplychainr1\_022020 & 12.5\% & 8.8\% & 7\% & \textbf{2.9\%}\\ supplychainr1\_030510 & \textbf{7.19} & 8.73 & 8.30 & 8.18\\ syn15m04m & 42.37 & 19.93 & \textbf{13.59} & 14.97\\ syn30m02m & 239.18 & 123.93 & \textbf{79.51} & \textbf{85.36}\\ synheat & 0.36\% & 0.19\% & 0.28\% & \textbf{0.16\%}\\ tanksize & 13.85 & 7.22 & 5.12 & \textbf{4.44}\\ telecomsp\_pacbell & 2.8\% & 3.5\% & \textbf{2.4\%} & 5.1\%\\ tln5 & 30.1\% & 26.2\% & 7.8\% & \textbf{1682.28}\\ tln7 & 61.4\% & 60.0\% & \textbf{9.9\%} 
& 28.3\%\\ tls2 & 5.43 & \textbf{3.96} & \textbf{4.13} & \textbf{3.86}\\ tls4 & 833.24 & 319.16 & \textbf{276.69} & \textbf{302.31}\\ topopt-mbb\_60x40\_50 & \textbf{195.38} & \textbf{194.93} & \textbf{194.58} & \textbf{194.68}\\ toroidal2g20\_5555 & nonopt & nonopt & nonopt & nonopt\\ toroidal3g7\_6666 & nonopt & nonopt & nonopt & nonopt\\ transswitch0009r & \textbf{16.0\%} & abort & abort & abort\\ tricp & infeas & infeas & infeas & infeas\\ tspn08 & \textbf{10.9\%} & \textbf{10.8\%} & \textbf{10.8\%} & \textbf{10.7\%}\\ tspn15 & \textbf{12.8\%} & \textbf{12.6\%} & \textbf{12.4\%} & \textbf{12.4\%}\\ unitcommit1 & 81.17 & \textbf{68.02} & \textbf{67.44} & 78.21\\ unitcommit2 & \textbf{18.0\%} & \textbf{18.0\%} & \textbf{17.8\%} & \textbf{17.7\%}\\ wager & $\infty$ & \textbf{32.4\%} & $\infty$ & $\infty$\\ waste & \textbf{46.3\%} & \textbf{46.1\%} & \textbf{46.1\%} & \textbf{46.1\%}\\ wastepaper3 & 33.90 & 26.89 & 0.83\% & \textbf{23.07}\\ wastepaper4 & 2320.60 & 1388.31 & 1206.13 & \textbf{1041.52}\\ wastepaper6 & abort & \textbf{7200.00} & \textbf{7200.00} & \textbf{7200.00}\\ water4 & 46.1\% & 39.1\% & 42.4\% & \textbf{34.1\%}\\ waternd1 & 1130.72 & 853.76 & \textbf{525.28} & \textbf{560.05}\\ waterno2\_02 & 26.45 & 12.97 & \textbf{9.72} & \textbf{10.04}\\ waterno2\_03 & 1886.92 & 737.09 & 532.41 & \textbf{465.62}\\ waterund01 & infeas & infeas & infeas & infeas\\ \bottomrule \end{longtable} } The following table shows the outcome from running Octeract in serial and parallel mode. 
{ \scriptsize \begin{longtable}{l|rrrr} \toprule instance & 1 thread & 4 threads & 8 threads & 16 threads \\ \midrule \endhead alan & \textbf{0.05} & 0.06 & 0.08 & 0.09\\ autocorr\_bern20-05 & \textbf{4.36} & 4.91 & 4.88 & 7.71\\ autocorr\_bern35-04 & 12.7\% & 4064.07 & \textbf{1891.23} & 2241.67\\ ball\_mk2\_10 & \textbf{0.01} & 0.01 & 0.02 & 0.02\\ ball\_mk2\_30 & \textbf{0.02} & 0.02 & 0.02 & 0.03\\ ball\_mk3\_10 & \textbf{0.04} & \textbf{0.04} & 0.04 & 0.04\\ batch0812\_nc & 8.37 & 4.85 & \textbf{3.34} & 4.12\\ batchs101006m & \textbf{4.84} & 5.55 & 5.40 & 7.16\\ batchs121208m & \textbf{79.1\%} & \textbf{79.1\%} & \textbf{79.1\%} & abort\\ bayes2\_20 & 0.033\% & 0.033\% & \textbf{3777.02} & 6089.69\\ bayes2\_30 & \textbf{7200.00} & \textbf{7200.00} & \textbf{7200.00} & \textbf{7200.00}\\ blend029 & 29.33 & 13.45 & \textbf{10.27} & \textbf{11.02}\\ blend146 & 8.8\% & 3.4\% & 6.8\% & \textbf{2.6\%}\\ camshape100 & 19.40 & 9.17 & \textbf{7.12} & 9.91\\ cardqp\_inlp & 792.06 & 258.89 & 147.93 & \textbf{124.77}\\ cardqp\_iqp & 785.02 & 259.33 & 148.69 & \textbf{124.80}\\ carton7 & 6.03 & 1.00 & 0.89 & \textbf{0.76}\\ carton9 & 344.94 & 7.49 & \textbf{3.62} & \textbf{3.55}\\ casctanks & \textbf{11.8\%} & \textbf{12.1\%} & \textbf{11.6\%} & \textbf{12.5\%}\\ cecil\_13 & 188.40 & \textbf{99.55} & 134.75 & 166.38\\ celar6-sub0 & $\infty$ & \textbf{1637.26} & \textbf{1641.73} & \textbf{1562.09}\\ chakra & \textbf{7200.00} & 0.021\% & 0.022\% & \textbf{7200.00}\\ chem & \textbf{0.18} & \textbf{0.19} & \textbf{0.19} & 0.28\\ chenery & 200.71 & 77.26 & 35.42 & \textbf{25.05}\\ chimera\_k64maxcut-01 & 168.60 & 55.75 & \textbf{40.84} & \textbf{39.93}\\ chimera\_mis-01 & \textbf{1.14} & 1.50 & 1.38 & 1.94\\ chp\_shorttermplan1a & \textbf{64.74} & 0.028\% & 0.023\% & 0.042\%\\ chp\_shorttermplan2a & \textbf{17.59} & \textbf{16.76} & \textbf{16.30} & 21.88\\ chp\_shorttermplan2b & \textbf{0.67\%} & \textbf{0.67\%} & \textbf{0.71\%} & \textbf{0.7\%}\\ clay0204m & \textbf{1.47} & 
2.10 & 2.28 & 3.10\\ clay0205m & 15.10 & \textbf{13.45} & \textbf{13.77} & 29.70\\ color\_lab3\_3x0 & $\infty$ & $\infty$ & \textbf{5460.67} & $\infty$\\ crossdock\_15x7 & $\infty$ & 6047.29 & 2087.00 & \textbf{1851.98}\\ crossdock\_15x8 & 6306.65 & 4889.74 & \textbf{2772.27} & \textbf{2837.68}\\ crudeoil\_lee1\_07 & 8.56 & 8.03 & \textbf{4.93} & 5.83\\ crudeoil\_pooling\_ct2 & nonopt & nonopt & nonopt & nonopt\\ csched1 & 8.1\% & 4306.97 & 2116.58 & \textbf{1592.40}\\ csched1a & 17.77 & 11.89 & \textbf{5.44} & \textbf{5.07}\\ cvxnonsep\_psig20 & 35.0\% & 28.1\% & \textbf{24.8\%} & \textbf{25.2\%}\\ cvxnonsep\_psig30 & 45.6\% & \textbf{36.6\%} & \textbf{35.3\%} & \textbf{34.4\%}\\ du-opt & 122.38 & 18.42 & 15.18 & \textbf{8.15}\\ du-opt5 & 101.32 & 48.22 & 17.33 & \textbf{14.29}\\ edgecross10-040 & 4.16 & 0.79 & \textbf{0.44} & 0.63\\ edgecross10-080 & 6\% & 3.8\% & \textbf{5205.96} & 3.8\%\\ eg\_all\_s & 102\% & 76.0\% & \textbf{41.6\%} & 198\%\\ eigena2 & \textbf{416.89} & 461.71 & 602.17 & $\infty$\\ elec50 & \textbf{66.4\%} & \textbf{66.4\%} & \textbf{66.4\%} & \textbf{66.3\%}\\ elf & \textbf{2.54} & infeas & infeas & infeas\\ eniplac & 1.54 & \textbf{1.06} & \textbf{1.01} & 1.61\\ enpro56pb & \textbf{1.98} & 2.48 & 2.25 & 3.52\\ ex1244 & 81.99 & \textbf{35.00} & 0.056\% & 0.082\%\\ ex1252a & 245.41 & 94.34 & 42.83 & \textbf{31.72}\\ faclay20h & 785.28 & \textbf{413.15} & 463.78 & \textbf{418.22}\\ faclay80 & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ feedtray & \textbf{82.1\%} & \textbf{82.1\%} & \textbf{82.2\%} & \textbf{82.1\%}\\ fin2bb & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%}\\ flay04m & 0.87 & 0.34 & \textbf{0.27} & \textbf{0.28}\\ flay05m & infeas & infeas & infeas & infeas\\ flay06m & infeas & infeas & 566.98 & \textbf{413.02}\\ fo7\_ar25\_1 & 8.95 & 3.12 & 2.85 & \textbf{1.49}\\ fo7\_ar3\_1 & 18.33 & 3.23 & \textbf{2.13} & \textbf{2.18}\\ forest & nonopt & nonopt & nonopt & nonopt\\ 
gabriel01 & 2\% & 0.28\% & 3548.92 & \textbf{2168.66}\\ gabriel02 & 7078.04 & 2457.11 & 1088.23 & \textbf{588.74}\\ gasnet & \textbf{96.7\%} & \textbf{96.4\%} & \textbf{96.2\%} & \textbf{96.1\%}\\ gasprod\_sarawak16 & 0.74\% & 0.68\% & 0.5\% & \textbf{0.4\%}\\ gastrans582\_cold13\_95 & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ gastrans582\_mild11 & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ gear & 0.11 & \textbf{0.09} & \textbf{0.09} & 0.14\\ gear2 & 0.14 & \textbf{0.13} & \textbf{0.13} & 0.26\\ gear4 & 17.56 & 7.25 & 3.29 & \textbf{2.45}\\ genpooling\_lee1 & 117.04 & 40.77 & 20.27 & \textbf{15.25}\\ genpooling\_lee2 & 186.63 & 77.47 & 36.61 & \textbf{23.73}\\ ghg\_1veh & 12.42 & infeas & \textbf{3.58} & 4.05\\ gilbert & \textbf{0.9\%} & 9.7\% & 9.7\% & 22.5\%\\ graphpart\_2g-0066-0066 & 0.75 & \textbf{0.19} & \textbf{0.21} & 0.22\\ graphpart\_clique-60 & 2890.51 & 1502.07 & \textbf{287.82} & 802.57\\ gsg\_0001 & 8.58 & 4.67 & 3.04 & \textbf{2.54}\\ hadamard\_5 & \textbf{20.45} & 23.43 & 23.33 & 38.20\\ heatexch\_spec1 & 17.7\% & 17.0\% & 14.2\% & \textbf{12.7\%}\\ heatexch\_spec2 & \textbf{5\%} & \textbf{5.3\%} & \textbf{5.2\%} & \textbf{4.9\%}\\ hhfair & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ himmel16 & 12.95 & 5.89 & 3.28 & \textbf{2.22}\\ house & 108.41 & 35.02 & 20.60 & \textbf{12.89}\\ hs62 & 0.023\% & 0.02\% & \textbf{7200.00} & \textbf{7200.00}\\ hvb11 & 7.3\% & 4.1\% & 2.1\% & \textbf{1.5\%}\\ hybriddynamic\_var & 0.32\% & 166.64 & 73.29 & \textbf{53.94}\\ hybriddynamic\_varcc & 158.14 & 61.30 & 27.37 & \textbf{19.22}\\ hydroenergy1 & 0.65\% & 0.48\% & 0.36\% & \textbf{0.3\%}\\ ibs2 & \textbf{5.1\%} & 6.4\% & \textbf{5\%} & \textbf{5\%}\\ johnall & \textbf{44.23} & 50.09 & 51.26 & 88.02\\ kall\_circles\_c6b & 463.62 & 158.77 & 77.24 & \textbf{55.71}\\ kall\_congruentcircles\_c72 & 49.42 & 19.60 & 8.03 & \textbf{6.64}\\ kissing2 & 
\textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%}\\ kport20 & 3596.06 & 1590.23 & 472.21 & \textbf{381.94}\\ kriging\_peaks-red020 & 85.61 & 35.87 & 18.22 & \textbf{16.08}\\ kriging\_peaks-red100 & 629.86 & 277.25 & 140.33 & \textbf{113.86}\\ lop97icx & 5.90 & \textbf{3.76} & \textbf{3.67} & 4.60\\ mathopt5\_7 & 0.08 & \textbf{0.07} & \textbf{0.07} & 0.11\\ mathopt5\_8 & \textbf{0.06} & 0.08 & \textbf{0.06} & 0.09\\ maxcsp-geo50-20-d4-75-36 & \textbf{7.71} & 9.59 & 21.01 & 23.19\\ meanvar-orl400\_05\_e\_7 & $\infty$ & 2015.07 & \textbf{1112.27} & \textbf{1205.72}\\ meanvar-orl400\_05\_e\_8 & \textbf{6.00} & \textbf{5.57} & \textbf{5.87} & 8.45\\ mhw4d & 0.43 & \textbf{0.26} & 0.39 & 0.40\\ milinfract & \textbf{75.5\%} & \textbf{75.2\%} & \textbf{75.0\%} & \textbf{75.7\%}\\ minlphi & \textbf{100\%} & \textbf{100\%} & \textbf{100\%} & \textbf{100\%}\\ multiplants\_mtg1a & 5.1\% & 303.62 & 153.51 & \textbf{64.70}\\ multiplants\_mtg2 & 2\% & \textbf{1.5\%} & \textbf{1.4\%} & 1.6\%\\ nd\_netgen-3000-1-1-b-b-ns\_7 & \textbf{3.67} & infeas & infeas & 6.50\\ netmod\_kar1 & 41.89 & 28.58 & \textbf{25.95} & 33.54\\ netmod\_kar2 & 42.01 & \textbf{28.58} & \textbf{26.39} & 33.30\\ nous1 & 52.93 & 32.90 & \textbf{19.28} & \textbf{18.50}\\ nous2 & 8.29 & 6.12 & \textbf{3.68} & 4.58\\ nvs02 & \textbf{0.22} & 0.25 & 0.25 & \textbf{0.22}\\ nvs06 & \textbf{0.06} & 0.09 & 0.07 & 0.10\\ oil2 & nonopt & nonopt & nonopt & nonopt\\ optmass & 463.14 & 237.53 & \textbf{155.15} & 239.61\\ ortez & 9.48 & 7.09 & \textbf{1.10} & \textbf{1.18}\\ p\_ball\_10b\_5p\_3d\_m & \textbf{49.66} & infeas & infeas & infeas\\ p\_ball\_15b\_5p\_2d\_m & infeas & infeas & infeas & infeas\\ parabol5\_2\_3 & 4095.70 & \textbf{1851.01} & \textbf{1883.43} & 0.11\%\\ parallel & 102.24 & 39.89 & 18.25 & \textbf{15.46}\\ pedigree\_ex485 & 276.45 & 65.18 & \textbf{45.86} & \textbf{46.16}\\ pedigree\_ex485\_2 & 3.66 & \textbf{2.61} & \textbf{2.57} & 3.08\\ pointpack06 & 4.76 & 1.93 & 
\textbf{0.90} & 1.28\\ pointpack08 & 208.18 & 70.21 & 34.30 & \textbf{23.21}\\ pooling\_epa1 & 21.07 & \textbf{14.15} & \textbf{13.18} & 15.01\\ pooling\_epa2 & 1694.37 & 463.42 & \textbf{233.31} & \textbf{242.45}\\ portfol\_buyin & 3.56 & 1.60 & \textbf{1.12} & 1.47\\ portfol\_card & 5.29 & 2.64 & \textbf{1.68} & 5.86\\ powerflow0014r & 100.0\% & 100.0\% & 100.0\% & \textbf{28.4\%}\\ powerflow0057r & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ prob07 & 16.19 & 6.21 & 2.91 & \textbf{2.55}\\ process & 0.93 & 0.30 & \textbf{0.21} & 0.33\\ procurement1mot & \textbf{79.5\%} & \textbf{77.1\%} & \textbf{74.6\%} & \textbf{73.4\%}\\ procurement2mot & \textbf{2.71} & \textbf{2.88} & \textbf{2.92} & 4.47\\ product & nonopt & nonopt & nonopt & nonopt\\ product2 & \textbf{2.53} & \textbf{2.51} & \textbf{2.40} & 3.29\\ prolog & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%}\\ qp3 & 4.6\% & 3.8\% & \textbf{3.2\%} & \textbf{3\%}\\ qspp\_0\_10\_0\_1\_10\_1 & 26.39 & 9.61 & \textbf{7.43} & \textbf{7.73}\\ qspp\_0\_11\_0\_1\_10\_1 & 124.07 & 39.87 & 27.50 & \textbf{21.59}\\ radar-2000-10-a-6\_lat\_7 & 33.34 & 8.06 & \textbf{6.80} & 11.05\\ radar-3000-10-a-8\_lat\_7 & 209.09 & \textbf{17.99} & \textbf{17.84} & 138.18\\ ravempb & 1.26 & \textbf{1.06} & \textbf{1.04} & 1.65\\ risk2bpb & \textbf{10.70} & nonopt & nonopt & nonopt\\ routingdelay\_bigm & \textbf{18.4\%} & \textbf{18.6\%} & \textbf{18.6\%} & \textbf{18.6\%}\\ rsyn0815m & 1.02 & \textbf{0.81} & \textbf{0.81} & 1.26\\ rsyn0815m03m & \textbf{44.8\%} & abort & abort & abort\\ sfacloc2\_2\_95 & \textbf{1.11} & 1.44 & 1.34 & 1.72\\ sfacloc2\_3\_90 & 67.86 & \textbf{42.11} & \textbf{42.06} & 46.51\\ sjup2 & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ slay06m & 0.69 & 0.84 & \textbf{0.62} & 1.00\\ slay07m & \textbf{0.61} & 0.82 & 0.92 & 1.24\\ smallinvDAXr1b010-011 & 8.69 & \textbf{4.27} & \textbf{4.64} & 4.82\\ smallinvDAXr1b020-022 
& 205.80 & 62.49 & 23.58 & \textbf{17.20}\\ sonet17v4 & 160.55 & 32.25 & \textbf{17.94} & \textbf{17.26}\\ sonet18v6 & 184.16 & 37.12 & \textbf{22.59} & \textbf{20.81}\\ sonetgr17 & 65.24 & 18.42 & 13.25 & \textbf{10.75}\\ spectra2 & \textbf{105\%} & \textbf{103\%} & \textbf{104\%} & \textbf{104\%}\\ sporttournament24 & 21.48 & \textbf{5.01} & 7.20 & \textbf{5.26}\\ sporttournament30 & 1946.51 & 476.70 & \textbf{242.25} & 285.89\\ sssd12-05persp & \textbf{780.87} & 100.0\% & 100.0\% & 100.0\%\\ sssd18-06persp & \textbf{19.9\%} & 100.0\% & 100.0\% & infeas\\ st\_testgr1 & \textbf{0.07} & 0.08 & 0.09 & 0.12\\ st\_testgr3 & \textbf{0.07} & 0.10 & 0.11 & 0.10\\ steenbrf & \textbf{98.0\%} & \textbf{97.6\%} & \textbf{97.6\%} & \textbf{97.0\%}\\ stockcycle & 39.57 & 34.72 & \textbf{19.66} & infeas\\ supplychainp1\_022020 & \textbf{8.5\%} & \textbf{8.7\%} & \textbf{8.7\%} & \textbf{8.7\%}\\ supplychainp1\_030510 & 3.08 & \textbf{2.71} & \textbf{2.68} & 3.86\\ supplychainr1\_022020 & 2966.92 & 752.44 & 333.16 & \textbf{225.20}\\ supplychainr1\_030510 & 0.74 & \textbf{0.66} & \textbf{0.64} & 0.94\\ syn15m04m & \textbf{1.04} & \textbf{1.08} & \textbf{1.08} & 1.77\\ syn30m02m & \textbf{1.83} & \textbf{1.72} & 2.24 & 2.89\\ synheat & 17.1\% & 16.2\% & \textbf{14.0\%} & \textbf{13.1\%}\\ tanksize & 54.61 & 20.94 & 9.66 & \textbf{7.03}\\ telecomsp\_pacbell & 1125.10 & 966.13 & \textbf{632.56} & 1152.71\\ tln5 & 2.08 & 1.50 & \textbf{1.36} & 1.60\\ tln7 & 274.33 & 144.17 & 112.57 & \textbf{78.00}\\ tls2 & \textbf{0.18} & \textbf{0.18} & \textbf{0.18} & 0.27\\ tls4 & 36.82 & 18.27 & \textbf{10.79} & 17.84\\ topopt-mbb\_60x40\_50 & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ toroidal2g20\_5555 & 3.23 & \textbf{2.43} & \textbf{2.23} & \textbf{2.38}\\ toroidal3g7\_6666 & 67.16 & 25.61 & \textbf{21.88} & \textbf{23.43}\\ transswitch0009r & 5.6\% & 3\% & 1.5\% & \textbf{0.97\%}\\ tricp & 100.0\% & \textbf{1236.23} & 100.0\% & 100.0\%\\ tspn08 & 2.3\% 
& 2\% & 1.6\% & \textbf{1.4\%}\\ tspn15 & \textbf{81.4\%} & \textbf{82.3\%} & \textbf{75.3\%} & \textbf{80.3\%}\\ unitcommit1 & 1082.69 & 422.70 & 208.33 & \textbf{152.08}\\ unitcommit2 & \textbf{11.20} & \textbf{10.85} & \textbf{10.38} & 12.71\\ wager & \textbf{252.73} & 100.0\% & 100.0\% & 100.0\%\\ waste & \textbf{99.9\%} & \textbf{99.9\%} & \textbf{99.9\%} & \textbf{99.9\%}\\ wastepaper3 & 22.76 & 14.47 & \textbf{11.86} & \textbf{12.59}\\ wastepaper4 & 2099.99 & 853.02 & \textbf{460.96} & \textbf{443.80}\\ wastepaper6 & 0.16\% & 0.2\% & 0.16\% & \textbf{0.12\%}\\ water4 & 1648.53 & 704.24 & 302.89 & \textbf{201.96}\\ waternd1 & 1.34 & \textbf{1.08} & \textbf{1.08} & 1.81\\ waterno2\_02 & \textbf{4.18} & 5.63 & 6.82 & 8.84\\ waterno2\_03 & 440.65 & 214.98 & 111.60 & \textbf{71.05}\\ waterund01 & 0.18\% & 0.13\% & \textbf{0.11\%} & \textbf{0.097\%}\\ \bottomrule \end{longtable} } The following table shows the outcome from running SCIP (1 thread) and FiberSCIP (4, 8, 16 threads). { \scriptsize \begin{longtable}{l|rrrr} \toprule instance & 1 thread & 4 threads & 8 threads & 16 threads \\ \midrule \endhead alan & \textbf{0.19} & 1.05 & 1.06 & 1.13\\ autocorr\_bern20-05 & \textbf{16.34} & 33.08 & \textbf{17.10} & 18.19\\ autocorr\_bern35-04 & 107.46 & 158.09 & 101.10 & \textbf{64.17}\\ ball\_mk2\_10 & \textbf{0.05} & 1.05 & 1.06 & 1.12\\ ball\_mk2\_30 & \textbf{0.06} & 1.06 & 1.06 & 1.14\\ ball\_mk3\_10 & 0.00 & 0.00 & 0.00 & \textbf{0.00}\\ batch0812\_nc & 1.45 & \textbf{1.06} & \textbf{1.08} & \textbf{1.13}\\ batchs101006m & \textbf{7.63} & abort & abort & abort\\ batchs121208m & \textbf{6.99} & abort & \textbf{7.15} & abort\\ bayes2\_20 & \textbf{0.033\%} & 0.97\% & \textbf{0.033\%} & \textbf{0.033\%}\\ bayes2\_30 & \textbf{7200.00} & \textbf{7200.00} & \textbf{7200.00} & \textbf{7200.00}\\ blend029 & 3.67 & 3.06 & 3.09 & \textbf{2.12}\\ blend146 & 471.12 & 280.13 & \textbf{119.13} & 176.20\\ camshape100 & \textbf{5.4\%} & 10.5\% & 10.4\% & 9.7\%\\ cardqp\_inlp & 
2751.70 & 2408.46 & \textbf{1083.38} & 1285.54\\ cardqp\_iqp & 2769.87 & 2406.76 & 1190.43 & \textbf{758.44}\\ carton7 & 13.54 & 10.89 & \textbf{6.94} & 8.69\\ carton9 & 36.30 & 34.26 & 23.15 & \textbf{20.50}\\ casctanks & 199\% & \textbf{114\%} & \textbf{114\%} & \textbf{114\%}\\ cecil\_13 & 605.03 & 0.36\% & \textbf{72.23} & \textbf{71.31}\\ celar6-sub0 & 1503.66 & \textbf{639.66} & \textbf{599.98} & 685.13\\ chakra & \textbf{0.04} & 1.05 & 1.06 & 1.13\\ chem & 932.15 & \textbf{376.09} & 545.11 & 6288.63\\ chenery & \textbf{2.10} & 16.06 & 4.06 & 3.13\\ chimera\_k64maxcut-01 & 450.42 & 435.40 & \textbf{352.61} & \textbf{387.45}\\ chimera\_mis-01 & \textbf{6.76} & 8.20 & \textbf{7.31} & 10.29\\ chp\_shorttermplan1a & 17.55 & \textbf{7.21} & \textbf{7.24} & \textbf{7.36}\\ chp\_shorttermplan2a & 27.16 & 17.68 & \textbf{12.85} & \textbf{11.82}\\ chp\_shorttermplan2b & \textbf{7200.00} & 23.3\% & 17.9\% & 11.5\%\\ clay0204m & 1.67 & 2.04 & \textbf{1.09} & 2.12\\ clay0205m & 8.40 & 11.07 & 9.07 & \textbf{6.15}\\ color\_lab3\_3x0 & \textbf{15.3\%} & 66.8\% & 63.3\% & 59.4\%\\ crossdock\_15x7 & \textbf{32.0\%} & 88.8\% & 87.6\% & 83.5\%\\ crossdock\_15x8 & \textbf{42.8\%} & 100.0\% & 89.6\% & 92.3\%\\ crudeoil\_lee1\_07 & 4.72 & \textbf{2.20} & \textbf{2.16} & \textbf{2.23}\\ crudeoil\_pooling\_ct2 & \textbf{18.94} & 4.5\% & 0.22\% & 0.26\%\\ csched1 & \textbf{1.59} & 2.08 & 2.09 & 3.10\\ csched1a & 6.53 & \textbf{2.07} & \textbf{2.08} & 4.11\\ cvxnonsep\_psig20 & \textbf{12.04} & 0.086\% & 163.10 & 119.16\\ cvxnonsep\_psig30 & 65.40 & 3.4\% & 8.2\% & \textbf{52.11}\\ du-opt & \textbf{2.38} & 3.30 & 3.20 & 3.30\\ du-opt5 & 1.69 & \textbf{1.15} & \textbf{1.16} & \textbf{1.23}\\ edgecross10-040 & 8.36 & \textbf{4.11} & 5.12 & 6.13\\ edgecross10-080 & 74.51 & 53.15 & 62.17 & \textbf{47.26}\\ eg\_all\_s & 3095.77 & 3082.62 & \textbf{1220.55} & 1492.45\\ eigena2 & $\infty$ & \textbf{7200.00} & $\infty$ & $\infty$\\ elec50 & \textbf{45.1\%} & \textbf{44.9\%} & \textbf{44.9\%} 
& \textbf{44.9\%}\\ elf & \textbf{1.01} & \textbf{1.04} & \textbf{1.07} & 1.13\\ eniplac & \textbf{2.19} & 3.07 & 3.08 & 3.13\\ enpro56pb & 3.40 & \textbf{3.07} & \textbf{3.07} & \textbf{3.13}\\ ex1244 & 7.67 & 2.06 & \textbf{1.09} & 2.05\\ ex1252a & 7200.00 & 13.05 & \textbf{4.06} & \textbf{4.13}\\ faclay20h & 741.15 & 589.88 & \textbf{513.92} & 832.87\\ faclay80 & \textbf{160\%} & $\infty$ & $\infty$ & $\infty$\\ feedtray & \textbf{1.70} & 87.7\% & 80.5\% & 80.5\%\\ fin2bb & \textbf{15.92} & 100.0\% & 100.0\% & 100.0\%\\ flay04m & 3.93 & 5.04 & 4.05 & \textbf{3.11}\\ flay05m & 120.82 & 44.06 & 15.6\% & \textbf{16.12}\\ flay06m & \textbf{4931.28} & 27.6\% & 25.1\% & 10.4\%\\ fo7\_ar25\_1 & 39.74 & 60.06 & 24.04 & \textbf{15.12}\\ fo7\_ar3\_1 & 88.79 & 65.08 & 37.08 & \textbf{24.13}\\ forest & \textbf{538.83} & 0.21\% & abort & 0.19\%\\ gabriel01 & 351.17 & 353.14 & \textbf{116.14} & 146.23\\ gabriel02 & 1435.31 & 1638.34 & 782.25 & \textbf{267.30}\\ gasnet & \textbf{42.5\%} & 65.0\% & 64.4\% & 63.6\%\\ gasprod\_sarawak16 & \textbf{0.39\%} & \textbf{0.4\%} & \textbf{0.39\%} & \textbf{0.39\%}\\ gastrans582\_cold13\_95 & 55.01 & 6.96 & \textbf{4.97} & 6.99\\ gastrans582\_mild11 & 7.41 & 257.72 & 13.73 & \textbf{4.04}\\ gear & 12.55 & 24.05 & \textbf{5.06} & 39.16\\ gear2 & 18.25 & 51.07 & \textbf{7.07} & 597.17\\ gear4 & 4.05 & \textbf{1.05} & \textbf{1.06} & \textbf{1.11}\\ genpooling\_lee1 & \textbf{2.09} & 3.06 & \textbf{2.07} & \textbf{2.13}\\ genpooling\_lee2 & 6.88 & \textbf{4.06} & \textbf{4.08} & \textbf{4.15}\\ ghg\_1veh & 32.25 & 26.07 & \textbf{22.10} & \textbf{21.14}\\ gilbert & \textbf{1.13} & 5.12 & 11.14 & 19.72\\ graphpart\_2g-0066-0066 & 1.39 & \textbf{1.10} & \textbf{1.11} & \textbf{1.19}\\ graphpart\_clique-60 & 3962.88 & 36.3\% & 5234.58 & \textbf{2596.77}\\ gsg\_0001 & 32.26 & 799.12 & 1209.24 & \textbf{7.12}\\ hadamard\_5 & 21.56 & 16.07 & 11.07 & \textbf{8.15}\\ heatexch\_spec1 & 7.1\% & 3.8\% & \textbf{13.09} & 88.12\\ heatexch\_spec2 & infeas 
& 12.08 & 110.11 & \textbf{5.15}\\ hhfair & \textbf{100.0\%} & $\infty$ & $\infty$ & \textbf{100.0\%}\\ himmel16 & 6.13 & 6.08 & \textbf{4.07} & \textbf{4.04}\\ house & \textbf{0.33} & 1.06 & 1.07 & 1.12\\ hs62 & 2.65 & \textbf{2.05} & 3.06 & \textbf{2.11}\\ hvb11 & 103.26 & \textbf{27.11} & \textbf{27.13} & 2285.34\\ hybriddynamic\_var & 2.21 & \textbf{1.05} & \textbf{1.07} & \textbf{1.10}\\ hybriddynamic\_varcc & 2.40 & 3.05 & \textbf{2.08} & \textbf{2.14}\\ hydroenergy1 & 6295.41 & 5582.56 & \textbf{1646.23} & 3446.41\\ ibs2 & 20.64 & \textbf{16.24} & \textbf{17.02} & 22.00\\ johnall & \textbf{30.44} & 55.01 & 46.41 & 44.71\\ kall\_circles\_c6b & 183.83 & 95.07 & 48.08 & \textbf{30.12}\\ kall\_congruentcircles\_c72 & 22.40 & 21.05 & 12.07 & \textbf{7.13}\\ kissing2 & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%}\\ kport20 & 1001.03 & nonopt & 334.10 & \textbf{28.12}\\ kriging\_peaks-red020 & 117.65 & \textbf{36.08} & \textbf{39.08} & 69.16\\ kriging\_peaks-red100 & 7200.00 & 384.19 & \textbf{325.21} & 1028.67\\ lop97icx & \textbf{25.38} & 118.18 & 84.13 & \textbf{23.21}\\ mathopt5\_7 & \textbf{0.17} & 1.04 & 1.05 & 1.10\\ mathopt5\_8 & \textbf{0.14} & 1.05 & 1.06 & 1.11\\ maxcsp-geo50-20-d4-75-36 & \textbf{52.80} & 63.05 & 74.36 & 74.19\\ meanvar-orl400\_05\_e\_7 & \textbf{4080.76} & abort & 2.3\% & 2.4\%\\ meanvar-orl400\_05\_e\_8 & \textbf{2404.35} & 0.54\% & abort & 4849.82\\ mhw4d & \textbf{0.44} & 1.05 & 1.07 & 1.09\\ milinfract & \textbf{68.8\%} & 76.0\% & 76.0\% & 75.9\%\\ minlphi & 100.0\% & \textbf{34.04} & 100\% & 100\%\\ multiplants\_mtg1a & \textbf{4320.86} & 35.6\% & 6.4\% & 5.5\%\\ multiplants\_mtg2 & \textbf{21.0\%} & 98.1\% & 97.2\% & 97.0\%\\ nd\_netgen-3000-1-1-b-b-ns\_7 & \textbf{31.10} & 38.65 & 42.30 & 52.25\\ netmod\_kar1 & 5.51 & 4.09 & \textbf{3.11} & 5.16\\ netmod\_kar2 & 5.77 & \textbf{4.08} & 5.13 & 6.19\\ nous1 & \textbf{7.70} & 26.07 & 16.07 & 16.13\\ nous2 & \textbf{0.80} & 1.05 & 2.07 & 1.12\\ nvs02 & 
\textbf{0.06} & 1.04 & 1.06 & 1.09\\ nvs06 & \textbf{0.02} & 1.04 & 1.06 & 1.11\\ oil2 & \textbf{4.64} & 5.23 & 5.25 & 7.34\\ optmass & \textbf{1.8\%} & 9.6\% & 7.1\% & 7.1\%\\ ortez & \textbf{0.14} & 1.06 & 1.07 & 1.14\\ p\_ball\_10b\_5p\_3d\_m & \textbf{4.26} & 5.08 & 5.09 & \textbf{4.17}\\ p\_ball\_15b\_5p\_2d\_m & \textbf{5.44} & 6.11 & \textbf{5.10} & 6.17\\ parabol5\_2\_3 & \textbf{0.051\%} & $\infty$ & $\infty$ & $\infty$\\ parallel & 10.26 & \textbf{7.07} & 8.09 & 8.16\\ pedigree\_ex485 & 105.90 & \textbf{95.32} & \textbf{95.39} & 153.99\\ pedigree\_ex485\_2 & 35.24 & 76.58 & \textbf{20.67} & \textbf{19.54}\\ pointpack06 & \textbf{3.05} & 4.06 & \textbf{3.07} & \textbf{3.11}\\ pointpack08 & 61.22 & 25.05 & 12.07 & \textbf{9.14}\\ pooling\_epa1 & \textbf{12.87} & 29.09 & 49.12 & 30.19\\ pooling\_epa2 & 1.9\% & \textbf{1.8\%} & \textbf{1.8\%} & \textbf{1.6\%}\\ portfol\_buyin & \textbf{0.50} & 1.06 & 1.07 & 1.12\\ portfol\_card & \textbf{0.58} & 1.07 & 1.07 & 1.11\\ powerflow0014r & \textbf{7079.42} & 0.36\% & 0.47\% & 0.59\%\\ powerflow0057r & 80.19 & \textbf{57.15} & \textbf{59.19} & \textbf{61.29}\\ prob07 & 76.05 & 11.05 & \textbf{6.07} & 706.26\\ process & 0.4\% & \textbf{3.05} & \textbf{3.07} & \textbf{3.11}\\ procurement1mot & \textbf{85.0\%} & \textbf{90.8\%} & \textbf{90.8\%} & \textbf{90.7\%}\\ procurement2mot & \textbf{2.03} & \textbf{2.19} & \textbf{2.14} & \textbf{2.22}\\ product & 16.80 & 31.11 & \textbf{11.13} & 13.14\\ product2 & \textbf{3.85} & 4.45 & 5.48 & 6.88\\ prolog & \textbf{0.06} & abort & abort & abort\\ qp3 & \textbf{14.1\%} & 100\% & 100\% & 91.0\%\\ qspp\_0\_10\_0\_1\_10\_1 & 180.85 & 133.44 & \textbf{80.79} & \textbf{75.94}\\ qspp\_0\_11\_0\_1\_10\_1 & 623.76 & 208.02 & 154.94 & \textbf{115.04}\\ radar-2000-10-a-6\_lat\_7 & \textbf{129.82} & 148.80 & 162.20 & 185.15\\ radar-3000-10-a-8\_lat\_7 & \textbf{422.76} & 0.37\% & 0.026\% & 68.0\%\\ ravempb & 3.09 & \textbf{2.06} & \textbf{2.08} & \textbf{2.11}\\ risk2bpb & infeas & 
infeas & infeas & infeas\\ routingdelay\_bigm & nonopt & \textbf{6439.89} & 1.1\% & 2.8\%\\ rsyn0815m & \textbf{1.44} & 2.06 & 3.04 & 3.14\\ rsyn0815m03m & 14.06 & 81.16 & 14.17 & \textbf{12.19}\\ sfacloc2\_2\_95 & 2.78 & \textbf{2.06} & \textbf{2.09} & \textbf{2.14}\\ sfacloc2\_3\_90 & \textbf{20.32} & 62.08 & 28.09 & \textbf{22.16}\\ sjup2 & 469.64 & \textbf{107.74} & \textbf{114.44} & 146.11\\ slay06m & 1.85 & \textbf{1.06} & \textbf{1.08} & \textbf{1.13}\\ slay07m & 2.05 & \textbf{1.05} & \textbf{1.08} & \textbf{1.14}\\ smallinvDAXr1b010-011 & 2.17 & \textbf{1.08} & \textbf{1.10} & \textbf{1.14}\\ smallinvDAXr1b020-022 & 3.37 & \textbf{1.07} & \textbf{1.09} & \textbf{1.15}\\ sonet17v4 & 949.25 & \textbf{829.28} & \textbf{807.35} & \textbf{777.52}\\ sonet18v6 & 1445.90 & 906.34 & \textbf{815.39} & \textbf{816.65}\\ sonetgr17 & 1255.92 & 651.90 & \textbf{320.18} & 368.98\\ spectra2 & 9.20 & \textbf{3.10} & \textbf{3.14} & \textbf{3.23}\\ sporttournament24 & \textbf{45.75} & 71.17 & 58.16 & 69.26\\ sporttournament30 & \textbf{0.72\%} & 4\% & 3.4\% & 1.5\%\\ sssd12-05persp & 4.5\% & 19.6\% & 2774.49 & \textbf{256.15}\\ sssd18-06persp & \textbf{15.1\%} & 47.7\% & 46.1\% & 47.6\%\\ st\_testgr1 & \textbf{0.12} & 1.05 & 1.07 & 1.10\\ st\_testgr3 & \textbf{0.24} & 1.05 & 1.08 & 1.10\\ steenbrf & 12.42 & 6.08 & \textbf{5.09} & 7.14\\ stockcycle & \textbf{0.79} & 1.25 & 2.15 & 1.23\\ supplychainp1\_022020 & \textbf{1056.69} & 32.8\% & 31.2\% & 26.7\%\\ supplychainp1\_030510 & 3.54 & \textbf{3.11} & \textbf{3.13} & 4.19\\ supplychainr1\_022020 & 21.97 & 80.80 & 14.68 & \textbf{2.81}\\ supplychainr1\_030510 & \textbf{0.12} & 1.09 & 1.11 & 1.17\\ syn15m04m & 2.75 & \textbf{1.10} & \textbf{1.12} & \textbf{1.18}\\ syn30m02m & 1.58 & \textbf{1.09} & \textbf{1.10} & 2.16\\ synheat & infeas & 0.83\% & 1.8\% & \textbf{148.14}\\ tanksize & 4.12 & \textbf{3.07} & \textbf{3.07} & \textbf{3.12}\\ telecomsp\_pacbell & 0.63\% & \textbf{6523.90} & \textbf{6267.26} & 0.45\%\\ tln5 & 
\textbf{0.50} & 7.05 & 5.08 & 10.13\\ tln7 & \textbf{188.05} & 69.5\% & 30.4\% & 66.4\%\\ tls2 & \textbf{0.93} & 1.07 & 1.07 & 1.10\\ tls4 & 41.19 & 18.06 & 24.09 & \textbf{13.14}\\ topopt-mbb\_60x40\_50 & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$} & \textbf{$\infty$}\\ toroidal2g20\_5555 & \textbf{5.70} & 6.27 & 6.40 & 6.33\\ toroidal3g7\_6666 & \textbf{141.64} & \textbf{142.26} & 157.46 & 169.61\\ transswitch0009r & 6975.38 & 2602.29 & 1289.17 & \textbf{959.22}\\ tricp & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%} & \textbf{100.0\%}\\ tspn08 & \textbf{17.8\%} & 31.2\% & \textbf{18.1\%} & \textbf{18.6\%}\\ tspn15 & 50.0\% & 62.5\% & \textbf{27.4\%} & \textbf{29.4\%}\\ unitcommit1 & infeas & \textbf{2.38} & 4.58 & 6.25\\ unitcommit2 & 439.70 & \textbf{3.81} & 5.85 & 6.92\\ wager & 3.46 & 2.14 & \textbf{1.16} & 2.22\\ waste & \textbf{38.85} & 57.1\% & 55.4\% & 55.0\%\\ wastepaper3 & \textbf{4.22} & 6.07 & 5.09 & 5.14\\ wastepaper4 & 167.16 & abort & abort & \textbf{40.14}\\ wastepaper6 & 0.023\% & 0.04\% & 0.034\% & \textbf{7200.00}\\ water4 & nonopt & nonopt & nonopt & \textbf{3473.45}\\ waternd1 & 7.21 & 10.06 & \textbf{4.08} & 5.13\\ waterno2\_02 & 2.89 & 2.10 & 2.11 & \textbf{1.19}\\ waterno2\_03 & nonopt & \textbf{33.13} & nonopt & 39.3\%\\ waterund01 & \textbf{1525.91} & 1.5\% & 1.5\% & abort\\ \bottomrule \end{longtable} } \end{document}
\begin{document} \title{Memory-Efficient Differentiable Programming for Quantum Optimal Control of Discrete Lattices \thanks{ This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under the Accelerated Research in Quantum Computing and Applied Mathematics programs, under contract DE-AC02-06CH11357, and by the National Science Foundation Mathematical Sciences Graduate Internship. We gratefully acknowledge the computing resources provided on Bebop and Swing, a high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory.} } \makeatletter \newcommand{\linebreakand}{ \end{@IEEEauthorhalign} \mbox{}\par \mbox{} \begin{@IEEEauthorhalign} } \makeatother \author{\IEEEauthorblockN{Xian Wang} \IEEEauthorblockA{\textit{ University of California, Riverside} \\ [email protected]} \and \IEEEauthorblockN{Paul Kairys} \IEEEauthorblockA{\textit{Argonne National Laboratory} \\ [email protected]} \and \IEEEauthorblockN{Sri Hari Krishna Narayanan} \IEEEauthorblockA{\textit{Argonne National Laboratory} \\ [email protected]} \and \linebreakand \IEEEauthorblockN{Jan H\"uckelheim} \IEEEauthorblockA{\textit{Argonne National Laboratory} \\ [email protected]} \and \IEEEauthorblockN{Paul Hovland} \IEEEauthorblockA{\textit{Argonne National Laboratory} \\ [email protected]} } \maketitle \begin{abstract} Quantum optimal control problems are typically solved by gradient-based algorithms such as GRAPE, which suffer from exponential growth in storage with increasing number of qubits and linear growth in memory requirements with increasing number of time steps. Employing QOC for discrete lattices reveals that these memory requirements are a barrier for simulating large models or long time spans. 
We employ a nonstandard differentiable programming approach that significantly reduces the memory requirements at the cost of a reasonable amount of recomputation. The approach exploits invertibility properties of the unitary matrices to reverse the computation during back-propagation. We utilize QOC software written in the differentiable programming framework JAX that implements this approach, and demonstrate its effectiveness for lattice gauge theory. \end{abstract} \section{Introduction} Quantum control allows systems that obey the laws of quantum mechanics to be manipulated to create desired behaviors. The application of external electromagnetic fields or forces affects dynamical processes at the atomic or molecular scale~\cite{Werschnik_2007}. Quantum optimal control (QOC) approaches determine the external fields and forces that achieve a task in a quantum system in the best way possible~\cite{kairys_parametrized,holland2020,lysne_small}. In particular, QOC can be used to achieve state preparation and gate synthesis. One of the computational advantages of quantum information processing is realized through the efficient simulation of quantum mechanical effects \cite{georgescu2014quantum, lloyd1996universal}. This potential impact is considerable within the fields of condensed matter and particle physics, where the simulation of large quantum systems is critical for scientific discovery. In particular, the study of lattice gauge theories (LGT) provides significant insight into fundamental and emergent physics and is a critical application for quantum simulation \cite{snowmass,MARCOS2014634,martinez2016real,banuls2020simulating}. \par One potential route to achieving high-fidelity quantum simulation is through the use of QOC. In this application, QOC provides a compilation of the desired unitary process $U_{target}$ onto a set of analog device controls $\vec{\alpha}$. Using optimal control to implement the simulation is advantageous for two reasons.
First, by reducing the device time needed to implement a specific unitary process, one achieves higher fidelity due to reduced decoherence. Second, decomposing the desired unitary process into a locally-optimal quantum gate set accrues an approximation error; an optimal control route avoids this by compiling the desired unitary directly. \par One of the major downsides of QOC for quantum simulation is the need to accurately model the parameterized device evolution $U_{device}(\vec{\alpha})$. While in principle this optimization can be accomplished without additional information, accessing the derivative information, i.e. $\partial_{\alpha_i} U_{device}(\vec{\alpha})$, can dramatically accelerate the optimization protocol but comes with additional computational overhead. Our work assesses how this burden can be lifted by using memory-efficient differentiable programming strategies and applies these strategies to simulations of LGTs on superconducting quantum computers. \par We follow the QOC model of~\cite{PhysRevA.95.042318}. Given a Hamiltonian $H_0$, an initial state $|\psi_0\rangle$, and a set of control operators $H_1, H_2, \ldots H_m$, one seeks to determine, for a sequence of time steps $t_0, t_1, \ldots, t_N$, a set of control fields $g_i(t)$ such that \begin{eqnarray} \mathbb{H}_t & = & H_0 + \sum_{i=1}^{m}g_i(t)H_i \label{evovlveschrodingersicrete1}\\ U_t & = & e^{-i\mathbb{H}_t\Delta t}\label{evovlveschrodingersicrete2}\\ K_t & = & U_{t}U_{t-1}U_{t-2} \ldots U_{1}U_{0}\\ |\psi_t\rangle & = & K_t|\psi_0\rangle. \label{evovlveschrodingersicrete4} \end{eqnarray} One possible objective is to minimize the trace distance between $K_N$ and a target quantum gate $K_T$: \begin{eqnarray} F_0 = 1 - |\Tr(K^{\dagger}_TK_N)/D|^2, \end{eqnarray} where $D$ is the Hilbert space dimension. \par In this work we approach QOC using the gradient ascent pulse engineering (GRAPE) algorithm~\cite{PhysRevA.63.032308}, as shown in Algorithm~\ref{algo:grape}.
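As a concrete, deliberately minimal illustration of this forward model, the following NumPy sketch evolves a single qubit under one control field and evaluates the objective $F_0$. This is not the JAX implementation used in this work; the drift and control Hamiltonians, the step count, and the field values are illustrative assumptions.

```python
import numpy as np

def u_step(H, dt):
    # U_t = exp(-i H_t dt) for Hermitian H_t, via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

def evolve(H0, controls, g, dt):
    # H_t = H0 + sum_i g[i, t] H_i; K_N = U_N ... U_1 U_0 accumulated left-to-right
    K = np.eye(H0.shape[0], dtype=complex)
    for t in range(g.shape[1]):
        Ht = H0 + sum(g[i, t] * controls[i] for i in range(len(controls)))
        K = u_step(Ht, dt) @ K
    return K

def infidelity(K_T, K_N):
    # F0 = 1 - |Tr(K_T^dagger K_N) / D|^2
    D = K_T.shape[0]
    return 1.0 - abs(np.trace(K_T.conj().T @ K_N) / D) ** 2

# Illustrative single-qubit instance: drift sigma_z, one sigma_x control.
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
g = 0.3 * np.ones((1, 50))   # m = 1 control field, N = 50 time steps
K = evolve(sz, [sx], g, dt=0.02)
print(infidelity(np.eye(2, dtype=complex), K))
```

GRAPE then adjusts the $m \times N$ array $g$ so that this scalar objective decreases; the memory question discussed below is what must be kept alive to differentiate such a loop in reverse.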
The algorithm requires derivative terms $\frac{ \partial \rho_t \lambda_t }{\partial g_i(t)}$ that can be accurately calculated using automatic differentiation (AD or autodiff)~\cite{PhysRevA.95.042318}, a well-known technique for obtaining derivatives and gradients of numerical functions~\cite{Griewank2008EDP,Naumann_book,baydin2015automatic}. \begin{algorithm} \caption{Pseudocode for the GRAPE algorithm.} \label{algo:grape} \begin{algorithmic} \STATE Guess initial controls $g_i(t)$. \REPEAT \STATE Starting from $H_0$, calculate \\ { \quad\quad\quad $\rho_t=U_tU_{t-1}\ldots U_1 H_0 U_1^\dagger \ldots U_{t-1}^\dagger U_{t}^\dagger$}. \STATE Starting from $\lambda_N =K_T$, calculate \\ { \quad\quad\quad $\lambda_t=U_{t+1}^\dagger \ldots U_N^\dagger K_T U_N \ldots U_{t}$}. \STATE Evaluate $\frac{ \partial \rho_t \lambda_t }{\partial g_i(t)}$ \STATE Update the $m \times N$ control amplitudes: \\ \quad\quad\quad $g_i(t) \rightarrow g_i(t)+\epsilon \frac{ \partial \rho_t \lambda_t }{\partial g_i(t)}$ \UNTIL{ $F_0 <$ threshold} \STATE \textbf{return} $g_i(t)$ \end{algorithmic} \end{algorithm} For computations with many input parameters, it is often most efficient to use the so-called \emph{reverse mode} of AD, which has been popularized as \emph{back-propagation} in machine learning. Reverse mode AD computes the derivatives of a function's output with respect to its inputs by tracing sensitivities backwards through the computational graph after the original computation is completed. Since QOC has a large number of input parameters and few outputs (only the cost function(s)), reverse mode AD is a promising approach. However, reverse mode AD requires that certain intermediate states of the original computation are available during the derivative computation.
In the case of QOC, storing such values in order to re-use them during the derivative computation results in additional memory usage that grows exponentially with the number of qubits and linearly with the number of time steps, severely limiting the system size and duration that can be simulated on classical computers. Our previous work~\cite{10.1007/978-3-031-08760-8_11} introduced non-standard approaches for reducing the memory requirements of QOC through recomputation or by exploiting reversibility, and we apply these approaches to lattice gauge theory in this work. There exist several implementations of quantum control. {\tt QuantumControl.jl} and its subpackages {\tt GRAPE.jl} and {\tt Krotov.jl} provide a Julia framework for quantum optimal control. {\tt GRAPE.jl} is an implementation of (second-order) GRAPE extended with automatic differentiation. {\tt GRAPE.jl} optimizes its memory utilization and achieves low runtime using a technique that combines analytical derivatives with naive automatic differentiation. The approach is suitable for both open and closed quantum systems. {\tt QuTiP} is open-source software for simulating the dynamics of open quantum systems in Python and utilizes the Numpy, Scipy, and Cython numerical packages~\cite{JOHANSSON20131234}. For derivative-based optimal control it uses the GRAPE algorithm, where control pulses are piece-wise constant functions~\cite{Li2022PulselevelNQ}. {\tt QuTiP} also provides the derivative-free CRAB algorithm. {\tt Krotov} is a Python library that supports optimal control in closed and open systems~\cite{10.21468/SciPostPhys.7.6.080}. Classical differentiable programming frameworks like JAX provide autodiff capabilities. One approach to differentiable programming for quantum control uses reinforcement learning. Here, a control agent is represented as a neural network that maps the state of the system at a given time to a control pulse.
The parameters of this agent are optimized via gradient information obtained by direct differentiation through both the neural network and the differential equations of the system~\cite{Sch_fer_2020,murphy2019}. The rest of the paper is organized as follows. Section~\ref{sec:approach} presents our differentiable programming approach for reducing the memory requirements of QOC. Section~\ref{sec:lgt} discusses LGT. Simulation results are presented in Section~\ref{sec:results}. Section~\ref{sec:conclusion} concludes the paper. \section{Approach} \label{sec:approach} We apply the three ``advanced'' automatic differentiation approaches presented in~\cite{10.1007/978-3-031-08760-8_11}, which we summarize in this section, as well as a simpler ``naive'' approach. All four approaches make intermediate values of the computation available when they are needed during the subsequent derivative computation. \begin{description} \item[Naive Approach (Store-All)] retains in memory all intermediate values that will be needed for the derivative computation, and is the default in JAX and many other frameworks and AD tools such as PyTorch and Tapenade. \item [Periodic Checkpointing] is an AD technique that stores selected intermediate values in memory so that they can later be loaded during the subsequent derivative computation. Values that have not been stored are instead recomputed by restarting parts of the computation from the nearest available earlier state. Periodic checkpointing is a sub-optimal approach but is straightforward to implement. To compute the derivative of an interval, the intermediate states are recomputed and kept in memory for the duration of the derivative computation of that interval. The checkpointing approach reduces the overall memory consumption compared to a store-all approach, at the cost of some recomputation.
\item [Reversibility] exploits the fact that the inverse of a unitary matrix is its conjugate transpose, which can be computed cheaply and accurately. The use of the inverse allows computing $K_{t-1}$ from $K_{t}$ and $\psi_{t-1}$ from $\psi_{t}$. \begin{eqnarray} K_t & = & U_{t}U_{t-1}U_{t-2} \ldots U_{1}U_{0}\\ \label{eq:useinverse} K_{t-1}& = & U_{t}^\dagger K_t\\ \label{eq:useinversestate} |\psi_{t-1}\rangle& = & U_{t}^\dagger|\psi_{t}\rangle \end{eqnarray} Thus, one does not have to store any of the $K_t$ matrices required to compute the adjoint of a time step. Additionally, reversibility allows a further memory reduction by avoiding the storage of $U_t$, and recomputing it from the $g_i(t)$ control values instead. While this drastically reduces memory consumption and recomputation cost compared to checkpointing approaches, it potentially incurs roundoff errors during the inversion of long time-step sequences. \item[Checkpointing with Reversibility] is the third advanced approach, which combines the accuracy of checkpointing with the efficiency of reversibility. Checkpoints are stored at regular intervals as in periodic checkpointing, but the intermediate states within each interval are obtained by reversing the trajectory backwards from the final state of the interval. \end{description} The approaches are implemented in JAX, a differentiable programming framework that can automatically differentiate native Python and NumPy functions~\cite{jax2018github}. It can differentiate through loops, branches, recursion, and closures, and it can take derivatives of derivatives of derivatives. It supports reverse-mode differentiation (a.k.a. backpropagation) via {\tt grad} as well as forward-mode differentiation, and the two can be composed arbitrarily to any order. We have used JAX's {\tt jax.custom\_vjp} feature to implement the three advanced approaches.
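The reversal relation $K_{t-1} = U_t^\dagger K_t$ above can be sketched directly: the backward pass reconstructs every intermediate $K_t$ from the final state alone. The sketch below uses synthetic random unitary steps rather than the transmon model, purely to exhibit the mechanism and the size of the accumulated roundoff.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_unitary_step(d, dt=0.05):
    # U_t = exp(-i H_t dt) for a random Hermitian H_t, via eigendecomposition
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (A + A.conj().T) / 2
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

d, N = 4, 200
Us = [rand_unitary_step(d) for _ in range(N)]

# Forward pass; intermediates are stored here only to check the reversal.
K = np.eye(d, dtype=complex)
forward = []
for U in Us:
    K = U @ K
    forward.append(K.copy())

# Backward pass: K_{t-1} = U_t^dagger K_t -- no stored intermediates needed.
K_rec = forward[-1].copy()
max_err = 0.0
for t in range(N - 1, 0, -1):
    K_rec = Us[t].conj().T @ K_rec
    max_err = max(max_err, np.abs(K_rec - forward[t - 1]).max())
print(max_err)
```

For well-conditioned unitary steps the reconstruction error stays at roundoff level, but it grows with the length of the reversed sequence, which is exactly what motivates the checkpointing-with-reversibility hybrid.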
Using the feature, one can provide derivatives for a portion of the computation instead of relying on JAX's standard approach. Listing~\ref{lst:custom_derivatives} shows how the derivative for {\tt f(x,y)} can be computed analytically and used in the overall derivative computation. \begin{lstfloat} \begin{lstlisting}[style=Python]
@jax.custom_vjp
def f(x, y):
    return jnp.sin(x) * y

def f_fwd(x, y):
    return f(x, y), (jnp.cos(x), jnp.sin(x), y)

def f_bwd(res, g):
    cos_x, sin_x, y = res
    return (cos_x * g * y, sin_x * g)

f.defvjp(f_fwd, f_bwd)
\end{lstlisting} \caption{Custom derivatives in JAX.} \label{lst:custom_derivatives} \end{lstfloat} \section{Benchmark Application} \label{sec:lgt} We will restrict our discussion to the simulation of qubit systems but wish to emphasize that our analysis and methods are also applicable to arbitrary quantum systems. To assess the reduced memory footprint that the combination of checkpointing and reversibility provides, we have explored the task of quantum simulation of lattice gauge theories using optimal control. \par In this application context one specifies a model Hamiltonian $H_\text{model}$ and a device Hamiltonian $H_\text{device}(\vec{\alpha},t)$ and uses optimization to determine a set of controls $\vec{\alpha}^*$ that yields a device evolution $U_\text{device}$ close to the desired model evolution $U_\text{model}$ \cite{kairys_parametrized, lysne_small}. This is often difficult because the simulation of $n$ qubits requires computing and storing operators defined on a Hilbert space with dimension $2^n$, growing exponentially large with increasing system size. To alleviate this, one typically decomposes the model evolution from a single global unitary defined on $n$ qubits to a product of unitary evolutions with support on only $m$ qubits through the Lie-Trotter decomposition \cite{lloyd1996universal,childs2021theory, kairys_parametrized}.
\par Commonly referred to as Trotterization, applying the Lie-Trotter decomposition approximates the global unitary only up to some error. Furthermore, choosing $m$ to be small (which reduces classical computational overhead by limiting the classical simulation to $m$ qubits) tends to yield larger Trotter error and requires deeper quantum circuits to mitigate it \cite{childs2021theory}. Thus increasing $m$ as much as possible helps to mitigate errors due to approximations and reduces circuit depth, limiting errors due to decoherence. Using checkpointing and reversibility enables optimal control studies of larger quantum systems and therefore could enable more accurate and efficient quantum simulations. \par \begin{figure} \caption{The definition of a $U(1)$ lattice gauge theory on a square lattice with spin-$1/2$ particles as defined in Ref.~\cite{MARCOS2014634}.} \label{fig:LGT} \end{figure} \par \begin{figure*} \caption{Two sets of identified controls after 1000 iterations of optimization for a lattice with 4 qubits. The gradients during optimization were calculated via the Reversibility method. Both figures use the same device Hamiltonian and assume $t/q = 0.01$ ns. The top figure visualizes optimized controls that generate $U_P(t/q)$ for $J = 1$ with infidelity of $F \approx 8.3\times 10^{-6}$.} \label{fig:optimal_pulses} \end{figure*} \par One application instance in which $m$ is large is the simulation of quantum systems exhibiting non-local interactions. A family of systems which exhibit these non-local interactions is found within the class of LGTs, which describe both fundamental and emergent physics and are a prime application for quantum simulation \cite{banuls2020simulating}. \par We choose to focus on a 2-dimensional $U(1)$ LGT model that has been explored in the context of analog simulation with superconducting circuits in Ref.~\cite{MARCOS2014634}.
In that work the LGT is described by the model Hamiltonian: \begin{align}\label{eq:model_ham} H_\text{model} &= -J H_P + V H_C \end{align} where spins (respectively, qubits) are defined on the edges of a square lattice shown in Figure~\ref{fig:LGT}, $H_P$ denotes a Hamiltonian of ``plaquette'' terms involving 4-local operators defined on each square of the lattice (highlighted in blue in Figure~\ref{fig:LGT}) \begin{equation} H_P = \sum_{\square} ( S_+^{(i)} S_-^{(j)} S_+^{(k)} S_-^{(l)} + h.c.) \end{equation} and $H_C$ denotes a Hamiltonian of ``corner'' terms involving 2-local operators defined on each corner of the lattice (highlighted in green in Figure~\ref{fig:LGT}): \begin{equation} H_C = \sum_{\ulcorner} S_z^{(i)} S_z^{(j)}. \end{equation} The operators $S_{\pm}^{(j)} = S_x^{(j)}\pm i S_y^{(j)}$ denote the raising and lowering operators of qubit $j$, $S_x^{(j)},S_y^{(j)},S_z^{(j)}$ are the qubit spin matrices, and h.c. denotes the Hermitian conjugate. The notation $\sum_\square$ represents the sum over all square plaquettes on the lattice, and $\sum_{\ulcorner}$ is the sum over pairs of qubits in each corner which share a vertex \cite{MARCOS2014634}. \par One can approximate the global time evolution operator $U_\text{model}(t) = \exp(-\frac{i t}{\hbar} H_\text{model})$ as a product of local operators via Trotterization: \begin{align} U_\text{model}(t) = \lim_{q \rightarrow \infty} \bigg[ U_{P}\bigg(\frac{t}{q}\bigg) \cdot U_{C}\bigg(\frac{t}{q}\bigg)\bigg]^q \end{align} where $U_P = \exp[\frac{it}{q\hbar} J H_P]$ and $U_C = \exp[-\frac{it}{q\hbar} V H_C]$ are the time evolution operators under only the plaquette and corner Hamiltonians, respectively; equality holds only in the limit $q \rightarrow \infty$.
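The role of $q$ in this product formula can be sketched numerically. In the sketch below, the plaquette and corner Hamiltonians are replaced by small random Hermitian stand-ins (the actual lattice operators act on a $2^n$-dimensional space and $\hbar$ is set to 1), so only the roughly $1/q$ shrinkage of the first-order Trotter error is meaningful, not the absolute values.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

def expmi(H, s):
    # exp(i s H) for Hermitian H, via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * s * w)) @ V.conj().T

d, t, J, V_c = 8, 1.0, 1.0, 1.0
HP, HC = rand_herm(d), rand_herm(d)   # stand-ins for the plaquette/corner terms

# U_model(t) = exp(-i t H_model) with H_model = -J H_P + V H_C
U_exact = expmi(-J * HP + V_c * HC, -t)

def trotter(q):
    # one step: U_P(t/q) U_C(t/q) = exp(i t J H_P / q) exp(-i t V H_C / q)
    step = expmi(HP, t * J / q) @ expmi(HC, -t * V_c / q)
    U = np.eye(d, dtype=complex)
    for _ in range(q):
        U = step @ U
    return U

errs = [np.linalg.norm(U_exact - trotter(q)) for q in (1, 4, 16, 64)]
print(errs)
```

The Frobenius-norm error shrinks roughly like $1/q$, as expected for a first-order product formula; the truncation error is quantified next.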
Typically, one truncates this limit at finite $q$ which yields a $q$th order approximation to the unitary dynamics $U^q_\text{model}(t)$ with error $\Delta U_\text{model}^q(t)= U_\text{model}(t) - U^q_\text{model}(t)$ scaling polynomially in $t/q$: \begin{align} \Delta U_\text{model}^q(t) = \frac{t^2}{2q} \sum_{l > m =1}^M [H_l,H_m] + \mathcal{O}\bigg(\frac{t^3}{q^2}\bigg) \end{align} where the sum of commutators is over every term in the Hamiltonian of Eq.~\ref{eq:model_ham}. \par The four-body interaction terms given by $S_+^{(i)} S_-^{(j)} S_+^{(k)} S_-^{(l)}$ are known as ``ring exchange'' interactions and represent a unique non-local operator \cite{MARCOS2014634}. This model Hamiltonian exhibits a number of interesting properties such as emergent excitations in the ground state and a quantum phase transition in the ratio of $J/V$ \cite{MARCOS2014634}. This model Hamiltonian was chosen as an application in which the memory advantages of checkpointing and reversibility could provide meaningful utility. Specifying the model Hamiltonian is only part of the example application. We also need to specify a Hamiltonian that models the assumed quantum device on which the simulation will be implemented. In this work we choose a device Hamiltonian derived from a two-dimensional array of coupled superconducting transmons such as those used to demonstrate quantum supremacy in 2019 \cite{arute2019quantum}. \par In this work we approximate the system as a set of coupled qubits, neglecting higher energy levels \cite{krantz2019quantum}. 
When a set of coupled transmons is tuned into resonance with one another, their effective Hamiltonian can be described as: \begin{align} H(\vec{\alpha},t) &= \sum_{i} \gamma_{i}(\vec{\alpha},t) S_x^{(i)}+ \sum_{\langle i,j \rangle} g (S_x^{(i)}S_x^{(j)} + S_y^{(i)}S_y^{(j)}) \end{align} where $\sum_{\langle i,j \rangle}$ is the sum over all neighboring qubits on a square lattice, $g = -20 \times 2\pi$~MHz is a typical coupling strength between transmons, and $\gamma_i(\vec{\alpha},t)$ are the time-dependent microwave control envelope functions modulated in resonance with the transmon frequencies \cite{krantz2019quantum,arute2019quantum}. \par Thus the optimal control task is to determine a set of controls $\vec{\alpha}$ that minimizes the infidelity, defined as \begin{align} F(U_\text{model},U_\text{device}) = 1 - \frac{|\Tr(U_\text{model}^\dagger U_\text{device})|^2}{D^2} \label{eq:infidelity} \end{align} where $D$ is the dimension of the Hilbert space on which $U_\text{model}$ is defined \cite{d2021introduction}. \section{Experimental Results} \label{sec:results} We first provide a validation that the reversibility method leads to optimal controls which are both feasible and highly accurate. Shown in Figure~\ref{fig:optimal_pulses} are two sets of optimal controls for a 4-transmon system. These controls were initialized with a constant initial condition and over 1000 optimization iterations achieved infidelities below $10^{-4}$ for a $100$ ns control time. \par While these fidelities neglect decoherence, they are much better than state-of-the-art two-transmon operations and are on a time scale similar to that of two-transmon operations in real devices \cite{krantz2019quantum}. Additionally, the optimized controls are extremely smooth and have well-defined amplitudes, both of which are within current experimental limitations \cite{krantz2019quantum}.
\par As an additional validation, we visualize in Figure~\ref{fig:convergence} the convergence of the infidelity with increasing optimization iterations. Similar to~\cite{PhysRevA.95.042318,10.1007/978-3-031-08760-8_11}, we use the Adam optimizer with a learning rate of $10^{-3}$. We find only small differences between the convergence of the optimizer with the reversibility method and with the naive JAX method. This is to be expected, as numerical errors due to imperfect reversibility propagate through the derivative calculation and therefore drive subtle differences in convergence. \begin{figure} \caption{The convergence to a set of optimal controls for two different target unitaries $U_P(t/q)$ and $U_C(t/q)$ and two different AD techniques. Each simulation uses the same device Hamiltonian and assumes $t/q = 0.01$ ns, $J=1$, $V=1$.} \label{fig:convergence} \end{figure} We explored the performance and memory requirements of the naive AD, reversibility, checkpointing, and checkpointing with reversibility approaches in three sets of experiments. In the first set, we vary the size of the lattice, thereby varying the number of qubits in the system. In the second set, we vary the number of timesteps in each iteration of the optimization process. Finally, we vary the interval between checkpoints for the checkpointing and the checkpointing with reversibility approaches. Our experiments were conducted on a cluster where each compute node was connected to 8 NVIDIA A100 40GB GPUs. Each node contained 1TB of DDR4 memory and 320GB of GPU memory. We report the time taken to execute 20 iterations of the optimization procedure. We used the JAX memory profiling capability in conjunction with the Go {\tt pprof} tool to measure the memory needs for each case. \noindent {\bf Vary Qubits} In these experiments, we fixed the width of the lattice to two and varied the length of the lattice.
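The memory scalings probed by these experiments can be summarized by a simple counting model (an illustrative sketch of our own; the actual profiles are measured): naive AD tapes all $N$ intermediate states, periodic checkpointing holds about $N/C$ checkpoints plus one recomputed segment of length $C$, and full reversibility regenerates states in place.

```python
def stored_states(N, C=None, reversible=False):
    """Peak number of simulator states resident during the reverse pass."""
    if reversible:
        return 1            # states are rebuilt by inverting each time step
    if C is None:
        return N            # naive AD: every intermediate state is taped
    return N / C + C        # checkpoints plus one recomputed segment

# The N/C + C cost is minimized near C = sqrt(N), motivating the
# checkpoint period C = floor(sqrt(N)) used in the experiments.
N = 500
best_C = min(range(1, N + 1), key=lambda C: stored_states(N, C))
```

For $N=500$ this model picks $C=\lfloor\sqrt{500}\rfloor=22$, trading a modest amount of recomputation for an order-of-magnitude reduction in resident states.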
The results in Figure~\ref{fig:vary_qubits} (right) show that the device memory requirements for the standard approach are highest, whereas the requirements for reversibility are lowest. We note that the standard approach can be executed at most for a $2\times3$ lattice made up of $7$ qubits and runs out of available device memory thereafter. The periodic checkpointing and reversibility approaches can be run for at most a $2\times4$ lattice made up of $10$ qubits and run out of available device memory thereafter. Figure~\ref{fig:vary_qubits} (left) also shows the execution time for the various approaches. \begin{figure} \caption{Comparison of execution time and device memory requirements for standard AD, periodic checkpointing, and full reversibility with increasing number of qubits in the lattice. The QOC simulations consisted of $N=500$ time steps with a checkpoint period of $C=\lfloor \sqrt{N}\rfloor$.} \label{fig:vary_qubits} \end{figure} \noindent {\bf Vary Timesteps} Next, we fixed the size of the lattice to $2\times3$ made up of $7$ qubits and varied the number of time steps, $N$. For periodic checkpointing, we used the optimal checkpoint period, $C=\lfloor\sqrt{N}\rfloor$. Our results are consistent with~\cite{10.1007/978-3-031-08760-8_11}. The time is roughly linear in $N$ and independent of $C$. Periodic checkpointing and full reversibility are slower than naive AD. Full reversibility is somewhat faster than periodic checkpointing. The memory requirements of naive AD rise rapidly with more timesteps, while those of reversibility and checkpointing do not rise appreciably. \begin{figure} \caption{Comparison of the execution time and device memory requirements for standard AD, periodic checkpointing, and full reversibility approaches with increasing number of time steps. The QOC simulation consisted of $7$ qubits. The checkpoint period was chosen to be the square root of the number of time steps.}
\label{fig:vary_steps_time} \end{figure} \noindent {\bf Vary Checkpoints} We examined the dependence of execution time and memory requirements on the checkpointing period, $C$, keeping the size of the lattice fixed at $2\times3$ made up of $7$ qubits and the number of time steps fixed at $N=500$. Again the results in Figure~\ref{fig:vary_checks} are consistent with~\cite{10.1007/978-3-031-08760-8_11}. The time taken is roughly independent of $C$. Periodic checkpointing with reversibility is somewhat faster than periodic checkpointing alone. The memory requirements of periodic checkpointing with reversibility vary as a function of $\frac{N}{C}$. The memory requirements of periodic checkpointing alone vary as a function of $\frac{N}{C}+C$. \begin{figure} \caption{Comparison of the execution time and device memory requirements for the periodic checkpointing and checkpointing plus reversibility approaches with varying checkpoint period. The QOC simulations consisted of $500$ time steps and $7$ qubits. } \label{fig:vary_checks} \end{figure} \section{Conclusion} \label{sec:conclusion} We have demonstrated the application of three advanced AD approaches, implemented in the JAX differentiable programming framework, to the quantum optimal control of a lattice gauge theory model. These approaches increase the number of qubits that can be simulated by reducing the memory requirements of automatic differentiation. \end{document}
\begin{document} \title[Parabolic odd potential]{One-dimensional quantum scattering \\ by a parabolic odd potential} \author{E M Ferreira$^1$ and J Sesma$^2$} \address{$^1$ Instituto de F\'{\i}sica, Universidade Federal do Rio de Janeiro, 21941-972, Rio de Janeiro, Brasil} \address{$^2$ Departamento de F\'{\i}sica Te\'orica, Facultad de Ciencias, 50009, Zaragoza, Spain} \eads{\mailto{[email protected]}, \mailto{[email protected]}} \begin{abstract} Quantum scattering by a one-dimensional odd potential proportional to the square of the distance to the origin is considered. The Schr\"odinger equation is solved exactly and explicit algebraic expressions of the wave function are given. A complete discussion of the scattering function reveals the existence of Gamow (decaying) states and of resonances. \end{abstract} \pacno{03.65.Nk; 02.30.Hq} \section{Introduction} An amazing property of divergent-at-infinity odd one-dimensional potentials of the type \begin{equation} V(x) = -\,x^N, \qquad N=3, 5, 7, \ldots, \label{i1} \end{equation} or \begin{equation} V(x)=\left\{\begin{array}{lll}x^N,&\qquad \mbox{for}&\quad x<0, \\ -\,x^N, &\qquad \mbox{for}&\quad x>0,\end{array}\right.\qquad N= 4, 6, 8, \ldots, \label{i2} \end{equation} is their capability to sustain resonances. The first evidence of that property did not come directly, but from the study of a Hamiltonian where a term of the type of Eq. (\ref{i1}), with $N=3$, had been added to the familiar harmonic oscillator, giving what is known as the cubic anharmonic oscillator. Early studies \cite{yari,alva1} of that Hamiltonian, \begin{equation} H=-\,\frac{d^2}{dx^2}+\frac{x^2}{4}-\lambda\,x^3, \qquad \lambda>0, \label{i3} \end{equation} revealed that it has complex eigenvalues corresponding to localized eigenfunctions, that is, eigenstates of complex energy that could be associated with resonances. Investigations of different aspects of these resonances have continued up to recent years \cite{alva2,jent1}.
The existence of localized eigenstates was initially attributed to the potential barrier due to the presence of the term $x^2$. But progressive weakening of that term does not destroy the resonances, which are present even in a pure cubic potential $V(x)=-x^3$ \cite{jent1}. This fact makes it interesting to study resonances in potentials of the form given in Eqs. (\ref{i1}) and (\ref{i2}), without the presence of a harmonic oscillator term. On the other hand, resonances are not possible in a linear potential, i.e., a potential as given by Eq. (\ref{i1}) with $N=1$, a fact already mentioned in \cite{alva1}. There are, in this case, no privileged values of the energy. This fact becomes obvious if one considers that a displacement of the energy accompanied by a corresponding translation in the variable $x$ leaves the eigenvalue problem invariant. The question arises whether a potential with a shape, shown in Fig. 1, intermediate between those of the linear and cubic potentials, namely the parabolic odd one, \begin{equation} V(x)=\left\{\begin{array}{lll}x^2,&\qquad \mbox{for}&\quad x<0, \\ -\,x^2, &\qquad \mbox{for}&\quad x>0,\end{array}\right. \label{i4} \end{equation} which is of the form of Eq. (\ref{i2}) with $N=2$, can sustain resonances. Besides, one-dimensional scattering by this parabolic odd potential presents the additional interest of being algebraically solvable. These circumstances have led us to carry out, in the present paper, a thorough study of the scattering by the potential given in Eq. (\ref{i4}). \begin{figure} \caption{Graphical representation of the potential given by Eq. (\ref{i4}).} \label{fig:1} \end{figure} The problem to be discussed here possesses common features with the study, done by Barton \cite{bart}, of tunneling and scattering in the inverted oscillator (also known as parabolic barrier).
We adopt here the pragmatic attitude of Barton, leaving aside the mathematical issues mentioned in \cite{bart} and addressed in more recent papers \cite{bala}. Scattering by one-dimensional finite-range potentials constitutes a chapter in most texts of Quantum Mechanics. (See, for instance, the classical book by Landau and Lifshitz \cite[Section 22]{land} or the more recent ones by Robinett \cite[Chapter 12]{robi} and by Newton \cite[Chapter 3]{newt}. For a recent review, see \cite{boya}.) Assuming a probability flux impinging from the right on a potential that vanishes outside the interval $[-a,a]$, the wave function in the outer region can be written in the form \begin{equation} \psi(x) = \left\{\begin{array}{ll}\psi^{\rm{incoming}}(x) + r\,\psi^{\rm{outgoing}}(x), & \quad x>a, \\ t\,\psi^{\rm{outgoing}}(x), & \quad x<-\,a. \end{array}\right. \label{i5} \end{equation} The reflection and transmission coefficients, respectively $r$ and $t$, are functions of the energy of the particle represented by the incident flux. In the case of the potential becoming infinite and, therefore, impenetrable, the last equation is replaced by \begin{equation} \psi(x) = \left\{\begin{array}{ll}\psi^{\rm{incoming}}(x) - S\,\psi^{\rm{outgoing}}(x), & \quad x>a, \\ 0, &\quad x<-\,a, \end{array}\right. \label{i6} \end{equation} where the scattering coefficient $S$ is also dependent on the energy. The potential of Eq. (\ref{i4}) that we are going to consider here has an infinite range. Nevertheless, the preceding formalism is applicable. Equations (\ref{i5}) and (\ref{i6}) remain valid if one replaces $x>a$ and $x<-a$ respectively by $x\to +\infty$ and $x\to -\infty$ and uses adequate expressions \cite{bart} for $\psi^{\rm{incoming}}(x)$ and $\psi^{\rm{outgoing}}(x)$. In Section 2 we obtain the Frobenius and Thom\'e solutions of the Schr\"odinger equation with the potential defined in Eq. (\ref{i4}). The connection factors linking these two kinds of solutions are obtained in Section 3.
In this way we are able to write the physical solution and the scattering function in Section 4. A study, in Section 5, of the analytic properties of the scattering function in the complex energy plane reveals the occurrence of Gamow states, whose correspondence with resonances is discussed in Section 6. Finally, Section 7 contains some comments about the peculiarities of the potential considered. \section{Solutions of the Schr\"odinger equation for the parabolic odd potential} The differential equation to be solved is (in suitable units of length and energy) \begin{equation} -\,\frac{d^2\psi(x)}{dx^2}+V(x)\,\psi(x)=E\,\psi(x), \label{i7} \end{equation} with $V(x)$ given by Eq. (\ref{i4}). Let us start by solving it on the positive real semiaxis. Written for $x>0$, Eq. (\ref{i7}) becomes \begin{equation} \frac{d^2\psi(x)}{dx^2}+(x^2+E)\psi(x)=0, \qquad x>0.\label{i8} \end{equation} Two convergent power series solutions (Frobenius ones) can be immediately obtained. However, for convenience in connecting the solutions valid at small $x$ to their asymptotic form, we express the Frobenius solutions as a product of an exponential times a convergent series, which turns out to be a confluent hypergeometric function. One gets in this way \begin{eqnarray} \psi_1^+(x)&=&\exp(ix^2/2)\,\ _1F_1\left(\frac{1-iE}{4}; \frac{1}{2}; -ix^2\right), \label{i9} \\ \psi_2^+(x)&=&x\,\exp(ix^2/2)\,\ _1F_1\left(\frac{3-iE}{4}; \frac{3}{2};-i x^2\right). \label{i10} \end{eqnarray} Formal solutions expressed as the product of an exponential times an asymptotic expansion (Thom\'e solutions) for $x\to +\infty$ can also be obtained by substitution in the differential equation. They are, in terms of generalized hypergeometric functions, \begin{eqnarray} \psi_3^+(x)&=&\exp(ix^2/2)\,x^{-(1-iE)/2}\,\ _2F_0\left(\frac{1-iE}{4}, \frac{3-iE}{4};; -\frac{i}{x^2}\right), \label{i11} \\ \psi_4^+(x)&=&\exp(-ix^2/2)\,x^{-(1+iE)/2}\,\ _2F_0\left(\frac{1+iE}{4}, \frac{3+iE}{4};; \frac{i}{x^2}\right).
\label{i12} \end{eqnarray} Let us now consider the solutions on the negative real semiaxis, $x<0$. The differential equation is now \begin{equation} \frac{d^2\psi(x)}{dx^2}+(-x^2+E)\psi(x)=0, \qquad x<0.\label{i13} \end{equation} Following the same procedure as for Eqs. (\ref{i9}) to (\ref{i12}), we find for the Frobenius solutions \begin{eqnarray} \psi_1^-(x)&=&\exp(-x^2/2)\,\ _1F_1\left(\frac{1-E}{4}; \frac{1}{2}; x^2\right), \label{i14} \\ \psi_2^-(x)&=&x\,\exp(-x^2/2)\,\ _1F_1\left(\frac{3-E}{4}; \frac{3}{2}; x^2\right), \label{i15} \end{eqnarray} and for the Thom\'e solutions \begin{eqnarray} \psi_3^-(x)&=&\exp(-x^2/2)\,x^{-(1-E)/2}\,\ _2F_0\left(\frac{1-E}{4}, \frac{3-E}{4};; -\frac{1}{x^2}\right), \label{i16} \\ \psi_4^-(x)&=&\exp(x^2/2)\,x^{-(1+E)/2}\,\ _2F_0\left(\frac{1+E}{4}, \frac{3+E}{4};; \frac{1}{x^2}\right). \label{i17} \end{eqnarray} It is immediate to check that $\psi_j^-$ and $\psi_j^+$ ($j=1,2$) take the same value at $x=0$. The same is true for their derivatives with respect to $x$. Therefore, we have obtained two solutions of Eq. (\ref{i7}), $\psi_j(x)$ ($j=1,2$), which are represented by $\psi_j^-(x)$ when $x\leq 0$ and by $\psi_j^+(x)$ if $x\geq 0$. Since these two Frobenius solutions constitute a fundamental set of solutions, any other one can be written as a linear combination of them. In particular, the physical solution would be \begin{equation} \psi_{\rm{phys}}(x) = A_1\,\psi_1(x)+A_2\,\psi_2(x), \label{i18} \end{equation} with coefficients $A_1$ and $A_2$ to be determined. \section{The connection factors} The behavior of the Frobenius solutions for $x\to +\infty$ can be written in terms of the Thom\'e ones, by means of the so-called connection factors $T_{j,k}^+$, in the form \begin{equation} \psi_j^+(x) \sim T_{j,3}^+\,\psi_3^+(x) + T_{j,4}^+\,\psi_4^+(x), \qquad j=1,2, \qquad x\to +\infty.
\label{i19} \end{equation} These connection factors are obtained immediately by using the asymptotic power series of the confluent hypergeometric function \cite[Sec. 2.5, Eq. (47)]{ding} \begin{eqnarray} \fl \ _1F_1(a;c;z) &\sim \frac{\Gamma(c)\,z^{a-c}\,e^z}{\Gamma(a)}\ _2F_0(1-a,c-a;;z^{-1}) \nonumber \\ \fl & + \left(\!\begin{array}{c}e^{i\pi a} \\ e^{-i\pi a}\\ \cos \pi a \end{array}\!\right)\frac{\Gamma(c)\,z^{-a}}{\Gamma(c-a)}\ _2F_0(a,a\!-\!c\!+\!1;;(-z)^{-1}), \quad \left.\begin{array}{l}0<\arg z<\pi \\ 0>\arg z>-\pi \\ \arg z=0\end{array} \right\}. \label{i20} \end{eqnarray} They turn out to be \begin{eqnarray} T_{1,3}^+ = \frac{e^{-i\pi(1-iE)/8}\,\Gamma(1/2)}{\Gamma((1+iE)/4)}, \qquad && T_{1,4}^+ = \frac{e^{i\pi(1+iE)/8}\,\Gamma(1/2)}{\Gamma((1-iE)/4)}, \label{i21} \\ T_{2,3}^+ = \frac{e^{-i\pi(3-iE)/8}\,\Gamma(3/2)}{\Gamma((3+iE)/4)}, \qquad && T_{2,4}^+ = \frac{e^{i\pi(3+iE)/8}\,\Gamma(3/2)}{\Gamma((3-iE)/4)}. \label{i22} \end{eqnarray} Analogously to Eq. (\ref{i19}), one can express the behavior of the Frobenius solutions for $x\to -\infty$ in terms of the Thom\'e ones, \begin{equation} \psi_j^-(x) \sim T_{j,3}^-\,\psi_3^-(x) + T_{j,4}^-\,\psi_4^-(x), \qquad j=1,2, \qquad x\to -\infty. \label{i23} \end{equation} For an easier determination of their connection factors, we rewrite the solutions on the negative real semiaxis, that is, for $x=e^{i\pi}|x|$, in the form \begin{eqnarray} \psi_1^-(x)&=\exp(-|x|^2/2)\,\ _1F_1\left(\frac{1-E}{4}; \frac{1}{2}; |x|^2\right), \nonumber \\ \psi_2^-(x)&= - \exp(-|x|^2/2)\,\,|x|\,\ _1F_1\left(\frac{3-E}{4}; \frac{3}{2}; |x|^2\right), \nonumber \\ \psi_3^-(x)&=\exp(-|x|^2/2)\,e^{-i\pi(1-E)/2}\,|x|^{-(1-E)/2}\,\ _2F_0\left(\frac{1-E}{4}, \frac{3-E}{4};; -\frac{1}{|x|^2}\right), \nonumber \\ \psi_4^-(x)&=\exp(|x|^2/2)\,e^{-i\pi(1+E)/2}\,|x|^{-(1+E)/2}\,\ _2F_0\left(\frac{1+E}{4}, \frac{3+E}{4};; \frac{1}{|x|^2}\right). \nonumber \end{eqnarray} Then, by using Eq.
(\ref{i20}), we obtain for the connection factors on the negative real semiaxis the expressions \begin{eqnarray} \fl T_{1,3}^- = \frac{e^{i\pi(1-E)/2}\,\cos((1-E)\pi/4)\,\Gamma(1/2)}{\Gamma((1+E)/4)}, \quad && T_{1,4}^- = \frac{e^{i\pi(1+E)/2}\,\Gamma(1/2)}{\Gamma((1-E)/4)}, \label{i24} \\ \fl T_{2,3}^- = -\,\frac{e^{i\pi(1-E)/2}\,\cos((3-E)\pi/4) \,\Gamma(3/2)}{\Gamma((3+E)/4)}, \quad && T_{2,4}^- = -\, \frac{e^{i\pi(1+E)/2}\,\Gamma(3/2)}{\Gamma((3-E)/4)}. \label{i25} \end{eqnarray} \section{The scattering function} We already have all we need to calculate the coefficients $A_1$ and $A_2$ in the expression of the physical solution, Eq. (\ref{i18}). In view of Eq. (\ref{i23}), one has for $x\to -\infty$ \begin{equation} \psi_{\rm{phys}}(x) \sim (A_1\,T_{1,3}^-+A_2\,T_{2,3}^-)\,\psi_3^-(x)+(A_1\,T_{1,4}^-+A_2\,T_{2,4}^-)\,\psi_4^-(x). \label{i26} \end{equation} The potential barrier $x^2$ prevents the hypothetical particle represented by $\psi_{\rm{phys}}(x)$ from reaching large negative values of $x$. Therefore, the diverging (for $x\to -\infty)$ component $\psi_4^-$ in the expression of $\psi_{\rm{phys}}(x)$ must be eliminated, that is, the coefficients $A_1$ and $A_2$ must be taken such that \begin{equation} A_1\,T_{1,4}^-+A_2\,T_{2,4}^-=0. \label{i27} \end{equation} This relation determines them up to a common arbitrary multiplicative constant, which may be fixed by requiring the fulfilment of an additional condition like, for instance, \begin{equation} A_1\,T_{1,4}^++A_2\,T_{2,4}^+=1, \label{i28} \end{equation} unless it happens that \begin{equation} T_{1,4}^+T_{2,4}^--T_{2,4}^+T_{1,4}^- = 0, \label{i29} \end{equation} in which case \begin{equation} A_1\,T_{1,4}^++A_2\,T_{2,4}^+=0. \label{i30} \end{equation} Leaving aside this case, which will be considered in Section 5, one obtains from Eqs. (\ref{i27}) and (\ref{i28}) \begin{equation} A_1=\frac{T_{2,4}^-}{T_{1,4}^+T_{2,4}^--T_{2,4}^+T_{1,4}^-},\qquad A_2=\frac{-\,T_{1,4}^-}{T_{1,4}^+T_{2,4}^--T_{2,4}^+T_{1,4}^-}.
\label{i31} \end{equation} On the other hand, bearing in mind Eqs. (\ref{i18}), (\ref{i19}), and (\ref{i28}), one realizes that, for $x\to +\infty$, \begin{equation} \psi_{\rm{phys}}(x) \sim (A_1\,T_{1,3}^++A_2\,T_{2,3}^+)\,\psi_3^+(x)+\psi_4^+(x). \label{i32} \end{equation} As is well known, the flux of probability associated with a wave function $\psi(x)$ is given (in appropriate units) by \begin{equation} j(x) = -\,i\left(\psi(x)^*\,\frac{d\psi(x)}{dx}-\frac{d\psi(x)^*}{dx}\,\psi(x)\right), \nonumber \end{equation} the asterisk standing for complex conjugation. It is immediate to check that the fluxes associated with $\psi_3^+(x)$ and $\psi_4^+(x)$, as given by Eqs. (\ref{i11}) and (\ref{i12}), are respectively positive and negative. Besides, for real $E$, $\psi_3$ and $\psi_4$ are complex conjugates of each other and, obviously, their moduli are equal. Consequently, they represent, respectively, an outgoing (to the right) wave and an incoming (from the right) one. Therefore, Eq. (\ref{i32}) is of the form \begin{equation} \psi_{\rm{phys}}(x) \sim -\,S(E)\,\psi^{\rm{outgoing}}(x)+\psi^{\rm{incoming}}(x), \label{i33} \end{equation} analogous to Eq. (\ref{i6}), with a scattering function \begin{equation} S(E) = -(A_1\,T_{1,3}^++A_2\,T_{2,3}^+) = -\, \frac{T_{1,3}^+T_{2,4}^--T_{2,3}^+T_{1,4}^-}{T_{1,4}^+T_{2,4}^--T_{2,4}^+T_{1,4}^-}. \label{i34} \end{equation} Substitution of the connection factors by their expressions, given in Eqs.
(\ref{i21}), (\ref{i22}), (\ref{i24}), and (\ref{i25}), allows one to obtain \begin{equation} S(E) = -\,e^{-i\pi/2}\,\frac{N(E)}{D(E)}, \label{i35} \end{equation} where we have denoted \begin{eqnarray} N(E) & = & \frac{e^{i\pi/8}}{\Gamma\left(\frac{3-E}{4}\right)\,\Gamma\left(\frac{1+iE}{4}\right)} + \frac{e^{-i\pi/8}}{\Gamma\left(\frac{1-E}{4}\right)\,\Gamma\left(\frac{3+iE}{4}\right)}, \label{i36} \\ D(E) & = & \frac{e^{-i\pi/8}}{\Gamma\left(\frac{3-E}{4}\right)\,\Gamma\left(\frac{1-iE}{4}\right)} + \frac{e^{i\pi/8}}{\Gamma\left(\frac{1-E}{4}\right)\,\Gamma\left(\frac{3-iE}{4}\right)}. \label{i37} \end{eqnarray} It is evident, from their explicit expressions, that $N(E)$ and $D(E)$ are complex conjugates of each other, as long as $E$ is real. Consequently, \begin{equation} |S(E)|=1 \qquad \mbox{for real}\; E. \label{i39} \end{equation} It is therefore possible to describe the result of the scattering in terms of a phase shift $\delta(E)$ defined as usual \cite{kahn} \begin{equation} S(E)=\exp \left[2\,i\,\delta(E)\right]. \label{i40} \end{equation} This definition of the phase shift is ambiguous: it determines $\delta(E)$ only up to the addition of $n\pi$ ($n$ integer). To eliminate that ambiguity, we have required $\delta(0)$ to lie in the interval $[0, \pi)$. The resulting values of $\delta(E)$, for $-10<E<15$, are represented in Fig. 2. \begin{figure} \caption{Phase shift, in units of $\pi$, of a wave scattered by the parabolic odd potential vs. the energy of the particle represented by the wave.} \label{fig:2} \end{figure} \section{Analytic properties of the scattering function} In the preceding sections, real values of the energy were implicitly assumed. It is widely recognized that valuable information about the scattering process can be obtained by a study of the analytic properties of the scattering function extended to complex values of the energy. In the present case, such extension does not present any difficulty.
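These explicit expressions make $S(E)$ straightforward to evaluate numerically. The sketch below (assuming the {\tt mpmath} library; the function names are ours) implements Eqs. (\ref{i35})--(\ref{i37}) and verifies the unitarity property, Eq. (\ref{i39}), at a few real energies:

```python
from mpmath import mp, gamma, exp, pi, fabs

mp.dps = 30  # working precision, in decimal digits

def N(E):
    """Numerator N(E) of the scattering function, Eq. (i36)."""
    return (exp(1j * pi / 8) / (gamma((3 - E) / 4) * gamma((1 + 1j * E) / 4))
            + exp(-1j * pi / 8) / (gamma((1 - E) / 4) * gamma((3 + 1j * E) / 4)))

def D(E):
    """Denominator D(E), Eq. (i37); equal to N(E)* for real E."""
    return (exp(-1j * pi / 8) / (gamma((3 - E) / 4) * gamma((1 - 1j * E) / 4))
            + exp(1j * pi / 8) / (gamma((1 - E) / 4) * gamma((3 - 1j * E) / 4)))

def S(E):
    """Scattering function S(E) = -e^{-i pi/2} N(E)/D(E), Eq. (i35)."""
    return -exp(-1j * pi / 2) * N(E) / D(E)

# |S(E)| = 1 for real E, Eq. (i39); the sample energies avoid the
# points E = 1, 3, 5, ... where an individual 1/Gamma factor vanishes.
moduli = [fabs(S(E)) for E in (-8, -2.5, 0, 0.935, 6, 14)]
```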
The scattering function appears in Eq. (\ref{i35}) as the quotient of two functions, $N(E)$ and $D(E)$, defined in terms of the reciprocal Gamma function, $1/\Gamma(z)$, which can be trivially extended to complex values of $z$. In fact, it admits a series expansion convergent in the whole finite complex $z$-plane, \begin{equation} \frac{1}{\Gamma (z)}=\sum_{k=1}^\infty a_k\,z^k, \label{r1} \end{equation} whose coefficients $a_k$, approximated to 31 digits, can be found in a paper by Wrench \cite{wren}. Therefore, given that $N(E)$ is finite for any complex value of $E$, the only singularities of $S(E)$ in the finite complex $E$-plane are due to zeros of its denominator, that is, to fulfilment of Eqs. (\ref{i29}) and (\ref{i30}), in which case we have, instead of Eq. (\ref{i32}), \begin{equation} \psi_{\rm{phys}}(x) \sim (A_1\,T_{1,3}^++A_2\,T_{2,3}^+)\,\psi_3^+(x), \qquad \mbox{for}\quad x\to +\infty . \label{i41} \end{equation} It is not difficult to see that the upper half-plane, $\Im E>0$, is free from such singularities. From Eq. (\ref{i7}) and its complex conjugate, one obtains immediately \begin{equation} \fl \frac{d}{dx}\left(\psi_{\rm{phys}}(x)\,\frac{d\psi_{\rm{phys}}^*(x)}{dx}-\psi_{\rm{phys}}^*(x)\frac{d\psi_{\rm{phys}}(x)}{dx}\right) = (E-E^*)\,|\psi_{\rm{phys}}(x)|^2, \label{r2} \end{equation} which integrated from $-\infty$ to $x$ gives \begin{equation} \fl\psi_{\rm{phys}}(x)\,\frac{d\psi_{\rm{phys}}^*(x)}{dx}-\psi_{\rm{phys}}^*(x)\frac{d\psi_{\rm{phys}}(x)}{dx} = (E-E^*)\int_{-\infty}^x|\psi_{\rm{phys}}(t)|^2\,dt\,. \label{r3} \end{equation} The right hand side of this equation is pure imaginary, its modulus increases with $x$, and its sign is that of $\Im E$. For the left hand side, assuming $x$ positive and sufficiently large, we have, from Eq. 
(\ref{i41}) ($\mathcal{W}[f,g]$ representing the Wronskian of the functions $f$ and $g$) \begin{eqnarray} \mathcal{W}\left[\psi_{\rm{phys}},\,\psi_{\rm{phys}}^*\right](x) &\sim |A_1\,T_{1,3}^++A_2\,T_{2,3}^+|^2\, \mathcal{W}\left[\psi_3^+,\,(\psi_3^+)^*\right](x) \nonumber \\ &\sim |A_1\,T_{1,3}^++A_2\,T_{2,3}^+|^2\,(-2\,i)\,x^{i(E-E^*)/2}, \label{r4} \end{eqnarray} which may present the characteristics of the right hand side of Eq. (\ref{r3}) only if $\Im E<0$. In the general case of complex energy, one realizes that \begin{equation} D(E^*)=\left[N(E)\right]^* \qquad \mbox{and}\qquad N(E^*)=\left[D(E)\right]^*, \label{r5} \end{equation} from which one obtains the familiar unitarity condition \begin{equation} S(E)\,[S(E^*)]^* = 1. \label{i38} \end{equation} According to this property, common to finite range potentials, zeros of $S(E)$ appear in the upper half-plane at positions symmetrical, with respect to the horizontal axis, to those of the poles in the lower half-plane. There is, however, a symmetry property of zeros and poles of the scattering function specific to the potential we are considering. It stems from the relations \begin{equation} N(iE^*)= [N(E)]^*,\qquad D(-iE^*)=[D(E)]^*, \label{i43} \end{equation} that can be trivially checked in Eqs. (\ref{i36}) and (\ref{i37}). Such relations imply that the pattern of zeros of $S(E)$ is symmetrical with respect to the bisector of the first and third quadrants in the $E$-plane, and the pattern of poles is symmetrical with respect to the bisector of the second and fourth quadrants. This symmetry, together with the impossibility, proven above, of having poles in the upper half-plane, allows one to conclude that, for the potential we are considering, poles of the scattering function may occur only in the fourth quadrant of the $E$-plane. All these analytic properties of the scattering function are confirmed by a numerical computation of $S(E)$ as given by Eqs. (\ref{i35}), (\ref{i36}) and (\ref{i37}).
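For instance (again assuming {\tt mpmath}; the starting guess is ours), the pole on the bisector of the fourth quadrant can be recovered as a zero of $D(E)$, since $1/\Gamma$ is entire and hence so is $D$:

```python
from mpmath import mp, mpc, gamma, exp, pi, findroot

mp.dps = 30

def D(E):
    """Denominator D(E) of the scattering function, Eq. (i37)."""
    return (exp(-1j * pi / 8) / (gamma((3 - E) / 4) * gamma((1 - 1j * E) / 4))
            + exp(1j * pi / 8) / (gamma((1 - E) / 4) * gamma((3 - 1j * E) / 4)))

# The poles of S(E) are exactly the zeros of the entire function D(E).
# A rough guess in the fourth quadrant converges to the pole on the
# bisector, E = 0.8896052164 (1 - i).
E_pole = findroot(D, mpc('0.9', '-0.9'))
```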
We have represented in Fig. 3 the modulus and phase of the scattering function for complex values of $E$ in the region $-10\leq\Re E\leq15$ and $-15\leq\Im E\leq 0$. In view of the unitarity condition, Eq. (\ref{i38}), it is sufficient to show $S(E)$ in the lower half-plane. Constant-phase lines are symmetrical with respect to the horizontal axis, $\Im E=0$. Constant-modulus lines associated with $|S|=a$ in the lower half-plane and $|S|=1/a$ in the upper one are also symmetric to each other. In the figure, the constant-modulus (solid) lines are labeled with the value of $\log_{10}[|S(E)|]$. The constant-phase (dashed) lines shown correspond to $\arg [S(E)]=n\,\pi/4$ ($n$ integer). With the convention adopted above to remove the ambiguity in the phase shift, lines intersecting the horizontal axis correspond, from left to right, to $\arg [S(E)]=\pi/4,\, 0,\, 0,\, \pi/4,\, \pi/2,\, 3\pi/4,\, \pi,\, 5\pi/4,\, \ldots$. Poles of $S(E)$ are immediately recognized in the chart. There seem to exist two infinite sequences of poles, symmetric with respect to the bisector of the fourth quadrant, besides a pole on the bisector. The approximate positions of this pole and of the first few poles of the two infinite sequences are given in Table 1. Although not explicitly shown, one can guess, looking at the figure, the existence of saddle points at positions intermediate between those of two consecutive poles of the same sequence. There is, however, a more interesting saddle point: that on the real axis, at $E\approx -4.042626$, where two $|S|=1$ lines intersect. The two constant-phase lines that also intersect there correspond to $\arg (S) \approx -\,0.519712\,\pi/4$. \begin{figure} \caption{Chart of modulus and phase of the scattering function of the parabolic odd potential.
See the text for an explanation of the values corresponding to the constant-modulus (solid) lines and constant-phase (dashed) lines.} \label{fig:3} \end{figure} \begin{table} \caption{Approximate positions, in the complex $E$-plane, of the first poles of the scattering function of the parabolic odd potential.} \begin{indented} \item[] \begin{tabular}{r|r} \br \multicolumn{2}{c} {$0.889605-i\,0.889605$} \\ $2.977506-i\,4.081280 \quad$ & $ \quad 4.081280-i\,2.977506$ \\ $3.715766-i\,8.472130 \quad$ & $ \quad 8.472130-i\,3.715766$ \\ $4.173994-i\,12.59206 \quad$ & $ \quad 12.59206-i\,4.173994$ \\ $4.509353-i\,16.66338 \quad$ & $ \quad 16.66338-i\,4.509353$ \\ \br \end{tabular} \end{indented} \end{table} In the next section we will see that the pole of $S(E)$ at the bisector of the fourth quadrant is of special relevance. Approximations to its position can be obtained from polynomial approximations to the equation \begin{equation} D(E)=0 , \label{r5} \end{equation} obtained by truncation of the Taylor expansion \begin{equation} D(E)=\sum_{m=0}^\infty b_m\,E^m, \label{r6} \end{equation} where \begin{equation} b_m=\frac{1}{m!}\,\left.\frac{d^mD(E)}{dE^m}\right|_{E=0}. \label{r7} \end{equation} Trivial calculus gives \begin{equation} b_m = \left(-\frac{1}{4}\right)^m\,\sum_{n=0}^m \left( e^{-i\pi/8}\,i^{m-n}+ e^{i\pi/8}\,i^n\right)\frac{G^{(n)}(3/4)\,G^{(m-n)}(1/4)}{n!\,(m-n)!}, \label{r8} \end{equation} where we have used the notation $G^{(n)}(z)$ for the successive derivatives of the reciprocal Gamma function, \begin{equation} G^{(n)}(z)\equiv\frac{d^n}{dz^n}\frac{1}{\Gamma (z)}, \label{r9} \end{equation} whose computation was discussed in a former paper \cite[Appendix B]{abad}. We show, in Table 2, the first solution of \begin{equation} \sum_{m=0}^M b_m\,E^m=0 \label{r10} \end{equation} for several successive values of $M$. \begin{table} \caption{Successive approximations, $E_M$, to the position of the pole of $S(E)$ at the bisector of the fourth quadrant. 
They have been obtained by solving Eq. (\ref{r10}) with increasing values of $M$. The coefficients $b_m$ of the Taylor expansion of $D(E)$, Eq. (\ref{r6}), are shown in the second column.} \begin{indented} \item[] \begin{tabular}{rll} \br $M$ & $\qquad\qquad b_M $ & $\quad \qquad E_M $ \\ \mr 0 & \quad 0.4158919086E$+$00 & \ \\ 1 & \quad 0.3438700716E$+$00\,($-1-i$) & \quad 0.6047224563\,($1-i$) \\ 2 & \quad 0.1302850455E$+$00\,\,$i$ & \quad 0.9382649623\,($1-i$) \\ 3 & \quad 0.1693998465E$-$02\,($1-i$) & \quad 0.9131374180\,($1-i$) \\ 4 & \quad 0.2314470246E$-$02 & \quad 0.8885559742\,($1-i$) \\ 5 & \quad 0.4198228545E$-$04\,($-1-i$) & \quad 0.8892565969\,($1-i$) \\ 6 & \quad 0.2368862531E$-$04\,($-i$) & \quad 0.8896106601\,($1-i$) \\ 7 & \quad 0.5531769758E$-$07\,($-1+i$) & \quad 0.8896091851\,($1-i$) \\ 8 & \quad 0.1623934529E$-$06\,($-1$) & \quad 0.8896053333\,($1-i$) \\ 9 & \quad 0.3336310538E$-$08\,($-1-i$) & \quad 0.8896051925\,($1-i$) \\ 10 & \quad 0.5905347326E$-$09\,\,$i$ & \quad 0.8896052147\,($1-i$) \\ 11 & \quad 0.2651914365E$-$10\,($-1+i$) & \quad 0.8896052164\,($1-i$) \\ 12 & \quad 0.9945374276E$-$12 & \quad 0.8896052164\,($1-i$) \\ \br \end{tabular} \end{indented} \end{table} \section{Resonances} Poles of the scattering function are associated with what are called Gamow states: solutions of the Schr\"odinger equation, corresponding to complex energies $E=E_{\rm{R}}-i\Gamma/2$ ($\Gamma>0$), which at large distances contain only outgoing waves, as shown in Eq. (\ref{i41}). They owe their name to the fact that they were first used by Gamow \cite{gamo} to account for experimental data of $\alpha$-decay of certain nuclei. Due to the non-vanishing imaginary part of its energy, Gamow states are suitable to represent a decaying state, whose time evolution would be given by \begin{equation} \Psi(x,t)=\exp(-\,\Gamma\,t/2)\,\exp(-\,i\,E_{\rm{R}}\,t)\,\psi(x). 
\label{i42} \end{equation} The question arises whether each one of these Gamow states may be associated with a resonance, that is, with an enhancement of the interaction with the potential at real energies in the neighborhood of $E_{\rm{R}}$. Resonances in a potential are characterized by a sudden increase of about $\pi$ in the phase shift as the energy increases. This occurs when a pole of $S(E)$ lies in the vicinity of the real $E$ axis. In this case, constant-phase lines converging at the pole and corresponding to values of $\arg S(E)$ in an interval of amplitude $\pi$ have their intersections with the real $E$ axis contained in a small interval of values of $E$. Looking at the modulus and phase chart shown in Fig. 3, we realize that the pole at $(0.889605, -0.889605)$ could be associated with a resonance. This is more clearly seen in Fig. 4, where we show the time delay suffered by a wave interacting with the potential of Eq. (\ref{i4}) at energies in the interval $(-10, 15)$. In our case, the time delay, defined as \cite[pp 110--111]{nuss} \begin{equation} \Delta t=2 \hbar\,\frac{d\delta (E)}{dE}, \label{i44} \end{equation} turns out to be \begin{equation} \Delta t=2 \hbar\,\Im\left(\frac{dN(E)/dE}{N(E)}\right), \label{i45} \end{equation} with $N(E)$ given in Eq. (\ref{i36}) and \begin{equation} \frac{dN(E)}{dE} = e^{i\pi/8}\, \frac{\psi\left(\frac{3-E}{4}\right)-i\,\psi\left(\frac{1+iE}{4}\right)}{4\,\Gamma\left(\frac{3-E}{4}\right)\,\Gamma\left(\frac{1+iE}{4}\right)} + e^{-i\pi/8}\,\frac{\psi\left(\frac{1-E}{4}\right)-i\,\psi\left(\frac{3+iE}{4}\right)}{4\,\Gamma\left(\frac{1-E}{4}\right) \,\Gamma\left(\frac{3+iE}{4}\right)}, \label{i46} \end{equation} where $\psi(\ldots)$ represents the digamma function. The marked peak in Fig. 4 reveals the mentioned resonance. There is also a much less marked bump at $E\approx 4$, obviously associated with the pole at $(4.081280, -2.977506)$. Other poles seem to have no physical implication.
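The time delay is also easy to evaluate in the same numerical framework (a sketch assuming {\tt mpmath}, with $\hbar=1$ in the units of the text; the cross-check of the digamma expression against a finite difference is ours):

```python
from mpmath import mp, gamma, digamma, exp, pi, im

mp.dps = 30

def N(E):
    """N(E), Eq. (i36)."""
    return (exp(1j * pi / 8) / (gamma((3 - E) / 4) * gamma((1 + 1j * E) / 4))
            + exp(-1j * pi / 8) / (gamma((1 - E) / 4) * gamma((3 + 1j * E) / 4)))

def dN(E):
    """dN/dE, Eq. (i46), in terms of the digamma function psi."""
    t1 = (exp(1j * pi / 8) * (digamma((3 - E) / 4) - 1j * digamma((1 + 1j * E) / 4))
          / (4 * gamma((3 - E) / 4) * gamma((1 + 1j * E) / 4)))
    t2 = (exp(-1j * pi / 8) * (digamma((1 - E) / 4) - 1j * digamma((3 + 1j * E) / 4))
          / (4 * gamma((1 - E) / 4) * gamma((3 + 1j * E) / 4)))
    return t1 + t2

def time_delay(E):
    """Delta t = 2 Im(N'(E)/N(E)), Eq. (i45), with hbar = 1."""
    return 2 * im(dN(E) / N(E))

# Cross-check Eq. (i46) against a central difference of N at the
# resonance energy E_r = 0.935 quoted in the text.
E0, h = mp.mpf('0.935'), mp.mpf('1e-8')
fd = (N(E0 + h) - N(E0 - h)) / (2 * h)
```

At the resonance energy the delay is large and positive, which is the peak visible in Fig. 4.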
\begin{figure} \caption{Time delay of the outgoing wave as a function of the energy. The scale is consistent with that used for lengths and energies.} \label{fig:4} \end{figure} For illustration, we present in Figs. 5 to 7 the square of the modulus of the wave function for three different values of the energy, namely, $E_G=0.889605-0.889605\,i$ (Gamow state), $E_r=0.935$ (resonance, large time delay), and $E_s=-4.042626$ (saddle point, time delay equal to zero). In the first case the coefficients $A_1$ and $A_2$ in Eq. (\ref{i18}) are determined by Eq. (\ref{i27}) together with an arbitrarily chosen normalization condition \[ A_1\,T_{1,3}^++A_2\,T_{2,3}^+=1, \] in such a way that \[ \psi_{\rm{phys}}(x) \sim \psi_3^+(x), \qquad \mbox{for} \quad x\to +\infty. \] The (non-normalized) probability density shown in Fig. 5 corresponds to a time $t=0$. Subsequently it retains the same shape, but decreases, according to Eq. (\ref{i42}), by a factor $\exp (-\Gamma t)$, with $\Gamma=1.77921$. In the cases of Figs. 6 and 7, the coefficients $A_1$ and $A_2$ are obtained from Eqs. (\ref{i27}) and (\ref{i28}). As the energy is real, time damping does not occur. The oscillations in the value of the (non-normalized) probability density are due to the interference of the incoming and outgoing waves. The amplitude of the oscillations goes as $x^{-1}$ for $x\to +\infty$. In the case of resonance (Fig. 6), the large probability density at $x=0$ is to be noticed. 
\begin{figure} \caption{Squared modulus of the wave function of the Gamow state of energy $E_G=0.889605-0.889605\,i$.} \label{fig:5} \end{figure} \begin{figure} \caption{Squared modulus of the wave function resulting from the interference of the incoming and outgoing waves at the energy of resonance, $E_r=0.935$.} \label{fig:6} \end{figure} \begin{figure} \caption{Squared modulus of the wave function resulting from the interference of the incoming and outgoing waves at energy $E_s=-4.042626$.} \label{fig:7} \end{figure} \section{Final comments} The parabolic odd potential, Eq. (\ref{i4}), considered in this paper is an unorthodox one. The usual treatment of scattering refers to three-dimensional spherically symmetric potentials of finite range. Concepts like cross-section, phase shifts, $S$ matrix, resonances, etc., are well established for those potentials. The idea of cross-section does not seem to be extendable to our one-dimensional potential. The other concepts can be defined in a consistent and natural way, as we have seen. Nevertheless, the peculiar characteristics of the potential, namely being totally reflecting, of infinite range, and unbounded, give rise to obvious differences from the usual three-dimensional case. Some comments about these differences are in order, we believe. We have needed only one Riemann sheet to describe $S$ as a function of $E$. In the case of potentials of finite range, the elements of the $S$ matrix depend on $E$ through its square root, a double-valued function. Two Riemann sheets, the so-called physical and unphysical ones, are needed. For this reason it is preferable to express $S$ in terms of the wave number $k$, proportional to the square root of $E$. In our case, a global definition of wave number is neither possible nor necessary. Gamow states in a finite range spherically symmetric potential have a complex wave number $k$ whose imaginary part is negative.
This implies that the outgoing wave, $\exp(ikr)$, increases exponentially with $r$, a property which could be considered unreasonable. However, in the words of Garc\'{\i}a-Calder\'on and Peierls, \cite{garc} ``such increase is entirely reasonable, because it reflects the fact that we are assuming an exponentially decaying state, and thus we see at distance $r$ the particles emitted by the system a time $r/v$ earlier, where $v$ is their velocity, and these are more numerous by a factor $\exp (r/v\tau)$; $\tau$ being the mean life." In the parabolic odd potential, the probability density, as shown in Fig. 5, does not increase with $x$, but, according to Eq. (\ref{i11}), it goes as $x^{-0.110395}$ for $x\to +\infty$. Such a behavior is consistent with the fact that the (local) wave number, or, in other words, the velocity of the outgoing particle represented by the wave function, increases with $x$. Thinking in terms of a classical particle, the time needed to reach a large distance $x\to +\infty$ goes as $x$ in the case of a potential of finite range, whereas it goes as $\log x$ in our case: the particle escapes much more rapidly in the parabolic potential. Obviously, the potential considered in this paper is an idealization. Its interest lies mainly in the fact that closed analytical forms can be obtained for the solutions of the Schr\"odinger equation (Eqs. (\ref{i9}) to (\ref{i12}) and (\ref{i14}) to (\ref{i17})), the scattering function (Eq. (\ref{i35})), and the time delay (Eq. (\ref{i45})). Nevertheless, given the continuous progress in the synthesis of artificial quantized structures by means of stacks of thin films, the possibility that the odd parabolic potential represents a useful approximation to a real situation should not be discarded. \ack{The comments of three anonymous referees have contributed considerably to improving the presentation of this article.
Financial support from Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico (CNPq, Brazil) and from Departamento de Ciencia, Tecnolog\'{\i}a y Universidad del Gobierno de Arag\'on (Project E24/1) and Ministerio de Ciencia e Innovaci\'on (Project MTM2009-11154) is gratefully acknowledged.} \section*{References} \end{document}
\begin{document} \allowdisplaybreaks \begin{abstract} We prove genuinely multilinear weighted estimates for singular integrals in product spaces. The estimates complete the qualitative weighted theory in this setting. Such estimates were previously known only in the one-parameter situation. Extrapolation gives powerful applications -- for example, a free access to mixed-norm estimates in the full range of exponents. \end{abstract} \title{Genuinely multilinear weighted estimates for singular integrals in product spaces} \section{Introduction} For given exponents $1 < p_1, \ldots, p_n < \infty$ and $1/p = \sum_i 1/p_i> 0$, a natural form of a weighted estimate in the $n$-variable context has the form $$ \Big \|g \prod_{i=1}^n w_i \Big \|_{L^p} \lesssim \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}} $$ for some functions $f_1, \ldots, f_n$ and $g$. It is natural to initially assume that $w_i^{p_i} \in A_{p_i}$, where $A_q$ stands for the classical Muckenhoupt weights. Even with this assumption the target weight only satisfies $\prod_{i=1}^n w_i^p \in A_{np} \supsetneq A_p$ making the case $n \ge 2$ have a different flavour than the classical case $n=1$. Importantly, it turns out to be very advantageous -- we get to the application later -- to only impose a weaker \emph{joint} condition on the tuple of weights $\vec w = (w_1, \ldots, w_n)$ rather than to assume individual conditions on the weights $w_i^{p_i}$. This gives the problem a genuinely multilinear nature. For many fundamental mappings $(f_1, \ldots, f_n) \mapsto g(f_1, \ldots, f_n)$, such as the $n$-linear maximal function, these joint conditions on the tuple $\vec w$ are necessary and sufficient for the weighted bounds. Genuinely multilinear weighted estimates were first proved for $n$-linear \emph{one-parameter} singular integral operators (SIOs) by Lerner, Ombrosi, P\'erez, Torres and Trujillo-Gonz\'alez in the extremely influential paper \cite{LOPTT}. 
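As a sanity check on the shape of the weighted estimate displayed above, note that for the trivial pointwise product $g = f_1 f_2$ it holds with constant $1$ for completely arbitrary weights, by H\"older's inequality alone; the content of the theory is that it persists for singular integrals. The following minimal sketch (our own; the exponents, grid size, and random data are arbitrary choices) verifies this on a discrete measure space with $1/p = 1/p_1 + 1/p_2$.

```python
import random

random.seed(0)
p1, p2 = 3.0, 1.5
p = 1 / (1 / p1 + 1 / p2)       # here p = 1

def lp_norm(values, q):
    """Discrete L^q norm with respect to counting measure."""
    return sum(abs(v) ** q for v in values) ** (1 / q)

n = 1000
f1 = [random.random() for _ in range(n)]
f2 = [random.random() for _ in range(n)]
w1 = [0.1 + random.random() for _ in range(n)]   # arbitrary positive weights
w2 = [0.1 + random.random() for _ in range(n)]

# ||f1*f2*w1*w2||_{L^p} <= ||f1*w1||_{L^{p1}} * ||f2*w2||_{L^{p2}} (Holder)
lhs = lp_norm([a * b * u * v for a, b, u, v in zip(f1, f2, w1, w2)], p)
rhs = lp_norm([a * u for a, u in zip(f1, w1)], p1) * \
      lp_norm([b * v for b, v in zip(f2, w2)], p2)
assert lhs <= rhs + 1e-12
```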
A basic model of an $n$-linear SIO $T$ in $\mathbb{R}^d$ is obtained by setting \begin{equation*}\label{eq:multilinHEUR} T(f_1,\ldots, f_n)(x) = U(f_1 \otimes \cdots \otimes f_n)(x,\ldots,x), \qquad x \in \mathbb{R}^d,\, f_i \colon \mathbb{R}^d \to \mathbb{C}, \end{equation*} where $U$ is a linear SIO in $\mathbb{R}^{nd}$. See e.g. Grafakos--Torres \cite{GT} for the basic theory. Estimates for SIOs play a fundamental role in pure and applied analysis -- for example, $L^p$ estimates for the homogeneous fractional derivative $D^{\alpha} f=\mathcal F^{-1}(|\xi|^{\alpha} \widehat f(\xi))$ of a product of two or more functions, the \emph{fractional Leibniz rules}, are used in the area of dispersive equations, see e.g. Kato--Ponce \cite{KP} and Grafakos--Oh \cite{GO}. In the usual one-parameter context of \cite{LOPTT} there is a general philosophy that the maximal function controls SIOs $T$ -- in fact, we have the concrete estimate \begin{equation}\label{eq:CF} \|T(f_1, \ldots, f_n)w\|_{L^p} \lesssim \|M(f_1, \ldots, f_n)w\|_{L^p}, \qquad p > 0, \, w^p \in A_{\infty}. \end{equation} Thus, the heart of the matter of \cite{LOPTT} reduces to the maximal function $$M(f_1, \ldots, f_n) = \sup_I 1_I \prod_{i=1}^n \langle |f_i|\rangle_I,$$ where $\langle |f_i|\rangle_I = \fint_I |f_i| = \frac{1}{|I|} \int_I |f_i|$ and the supremum is over cubes $I \subset \mathbb{R}^d$. In this paper we prove genuinely multilinear weighted estimates for multi-parameter SIOs in the product space $\mathbb{R}^d = \prod_{i=1}^m \mathbb{R}^{d_i}$. For the classical linear multi-parameter theory and some of its original applications see e.g. \cite{CF1, CF2, RF1, RF2, RF3, FS, FL, Jo}. Multilinear multi-parameter estimates arise naturally in applications whenever multilinear phenomena, like the fractional Leibniz rules, are combined with product type estimates, such as those that arise when we want to take different partial fractional derivatives $D^{\alpha}_{x_1} D^{\beta}_{x_2} f$.
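The maximal function just defined can be computed concretely in a discrete setting. The sketch below (our own construction, not from the paper) evaluates the bilinear version $M(f_1,f_2) = \sup_I 1_I \langle |f_1|\rangle_I \langle |f_2|\rangle_I$ with the supremum restricted to dyadic subintervals of $[0,1)$, on a grid of $N=2^L$ cells; the grid size and sample data are arbitrary choices.

```python
L = 4
N = 2 ** L

def dyadic_avg(f, level, j):
    """Average of |f| over the dyadic interval [j/2^level, (j+1)/2^level)."""
    width = N >> level                       # grid cells per interval
    block = f[j * width:(j + 1) * width]
    return sum(abs(v) for v in block) / width

def bilinear_maximal(f1, f2):
    """M(f1, f2) at each grid cell: sup over the dyadic intervals containing it."""
    out = [0.0] * N
    for level in range(L + 1):
        for j in range(2 ** level):
            val = dyadic_avg(f1, level, j) * dyadic_avg(f2, level, j)
            width = N >> level
            for k in range(j * width, (j + 1) * width):
                out[k] = max(out[k], val)
    return out

f1 = [1.0] * N
f2 = [float(i) for i in range(N)]
M = bilinear_maximal(f1, f2)
# At the finest level the averages reduce to |f1(x)| and |f2(x)|, so M
# dominates |f1*f2| pointwise; at level 0 it dominates <|f1|> * <|f2|>.
```

Note that the product of the two averages is taken over the *same* interval $I$; replacing $M(f_1,f_2)$ by $Mf_1 \cdot Mf_2$ would lose exactly the joint structure that the multilinear weight classes exploit.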
We refer to our recent work \cite{LMV} for a thorough general background on the subject. It is already known \cite{GLTP} that the multi-parameter maximal function $(f_1, \ldots, f_n) \mapsto \sup_R 1_R \prod_{i=1}^n \langle |f_i|\rangle_R$, where the supremum is over rectangles $R = \prod_{i=1}^m I^i \subset \prod_{i=1}^m \mathbb{R}^{d_i}$ with sides parallel to the axes, satisfies the desired genuinely multilinear weighted estimates. However, in contrast to the one-parameter case, there is no known general principle which would automatically imply the corresponding weighted estimate for multi-parameter SIOs from the maximal function estimate. In particular, no estimate like \eqref{eq:CF} is known. In the paper \cite{LMV} we developed the general theory of bilinear bi-parameter SIOs including weighted estimates under the more restrictive assumption $w_i^{p_i} \in A_{p_i}$. In fact, it was only in \cite{ALMV} that we reached these weighted estimates without any additional cancellation assumptions of the type $T1=0$. There are no genuinely multilinear weighted estimates for \textbf{any} multi-parameter SIOs in the literature -- not even for the bi-parameter analogues (see e.g. \cite[Appendix A]{LMV}) of Coifman--Meyer \cite{CM} type multilinear multipliers. Almost ten years after the maximal function result \cite{GLTP} we establish these missing bounds, not only for some special SIOs but for a very general class of $n$-linear $m$-parameter SIOs. With weighted bounds previously known both in the linear multi-parameter setting \cite{RF1, RF2, HPW} and in the multilinear one-parameter setting \cite{LOPTT}, we finally establish a holistic view completing the theory of qualitative weighted estimates in the joint presence of multilinearity and product space theory. With the understanding that a Calder\'on--Zygmund operator (CZO) is an SIO satisfying natural $T1$ type assumptions, our main result reads as follows.
\begin{thm}\label{thm:intro1} Suppose $T$ is an $n$-linear $m$-parameter CZO in $\mathbb{R}^d = \prod_{i=1}^m \mathbb{R}^{d_i}$. If $1 < p_1, \ldots, p_n \le \infty$ and $1/p = \sum_{i=1}^n 1/p_i> 0$, we have $$ \|T(f_1, \ldots, f_n)w \|_{L^p} \lesssim \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}}, \qquad w = \prod_{i=1}^n w_i, $$ for all $n$-linear $m$-parameter weights $\vec w = (w_1, \ldots, w_n) \in A_{\vec p}$, $\vec p = (p_1, \ldots, p_n)$. Here $\vec w \in A_{\vec p}$ if $$ [\vec{w}]_{A_{\vec p}} :=\sup_R \, \ave{w^p}_R^{\frac 1 p} \prod_{i=1}^n \ave{w_i^{-p_i'}}_R^{\frac 1{p_i'}} < \infty, $$ where the supremum is over all rectangles $R \subset \mathbb{R}^d$. \end{thm} \noindent For the exact definitions, see the main text. Recent extrapolation methods are crucial both for the proof and for the applications. The extrapolation theorem of Rubio de Francia says that if $\|g\|_{L^{p_0}(w)} \lesssim \|f\|_{L^{p_0}(w)}$ for some $p_0 \in (1,\infty)$ and all $w \in A_{p_0}$, then $\|g\|_{L^{p}(w)} \lesssim \|f\|_{L^{p}(w)}$ for all $p \in (1,\infty)$ and all $w \in A_{p}$. In \cite{GM} (see also \cite{DU}) a multivariable analogue was developed in the setting $w_i^{p_i} \in A_{p_i}$, $i = 1, \ldots, n$. Such extrapolation results are already of fundamental use in proving other estimates -- often just to even deduce the full $n$-linear range of unweighted estimates $\prod_{j=1}^n L^{p_j}\to L^p$, $\sum_j 1/p_j = 1/p$, $1 < p_j < \infty$, $1/n <p < \infty$, from some particular single tuple $(p_1, \ldots, p_n, p)$. Indeed, reaching $p \le 1$ can often be a crucial challenge, particularly so in multi-parameter settings where many other tools are completely missing. Very recently, in \cite{LMO} it was shown that also the genuinely multilinear weighted estimates can be extrapolated. 
In the subsequent paper \cite{LMMOV} (see also \cite{Nieraeth}) a key advantage of extrapolating using the general weight classes was identified: it is possible to both start the extrapolation and, importantly, to reach -- as a consequence of the extrapolation -- weighted estimates with $p_i = \infty$. See Theorem \ref{thm:ext} for a formulation of these general extrapolation principles. Moreover, extrapolation is flexible in the sense that one can extrapolate both $1$-parameter and $m$-parameter, $m\ge 2$, weighted estimates. These new extrapolation results are extremely useful e.g. in proving mixed-norm estimates -- for example, in the bi-parameter case they yield that \[ \| T(f_1,\ldots, f_n)\|_{L^p(\mathbb{R}^{d_1}; L^q(\mathbb{R}^{d_2}))}\lesssim \prod_{i=1}^n \| f_i\|_{L^{p_i}(\mathbb{R}^{d_1}; L^{q_i}(\mathbb{R}^{d_2}))}, \] where $1<p_i, q_i\le \infty$, $\frac 1p= \sum_i \frac{1}{p_i} >0$ and $\frac 1q=\sum_i \frac{1}{q_i} >0$. The point is that even all of the various cases involving $\infty$ become immediate. See e.g. \cite{DO, LMMOV, LMV} for some of the previous mixed-norm estimates. Compared to \cite{LMMOV} we can work with completely general $n$-linear $m$-parameter SIOs instead of bi-parameter tensor products of $1$-parameter SIOs, and the proof is much simplified due to the optimal weighted estimates, Theorem \ref{thm:intro1}. We also use extrapolation to give a new short proof of the boundedness of the multi-parameter $n$-linear maximal function \cite{GLTP} -- see Proposition \ref{prop:prop2}. On the technical level there is no existing approach to our result: the modern one-parameter tools (such as sparse domination in the multilinear setting, see e.g. \cite{CUDPOU}) are missing and many of the bi-parameter methods \cite{LMV} used in conjunction with the assumption that each weight individually satisfies $w_i^{p_i} \in A_{p_i}$ are of little use. 
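For the trivial pointwise product operator the mixed-norm estimate discussed above reduces to an iterated H\"older inequality: H\"older in the inner variable on each slice, then H\"older in the outer variable on the slice norms. The following discrete sketch (our own; the grid dimensions, exponents $1/p=1/p_1+1/p_2$, $1/q=1/q_1+1/q_2$, and random data are arbitrary choices) verifies this mechanism.

```python
import random

random.seed(1)
N1, N2 = 20, 30                          # outer x inner grid (arbitrary)
p1, p2, q1, q2 = 4.0, 4.0, 3.0, 6.0      # arbitrary admissible exponents
p = 1 / (1 / p1 + 1 / p2)                # = 2
q = 1 / (1 / q1 + 1 / q2)                # = 2

def mixed_norm(f, p, q):
    """Discrete L^p(l^q) norm: l^q in the inner index, then l^p in the outer."""
    inner = [sum(abs(v) ** q for v in row) ** (1 / q) for row in f]
    return sum(a ** p for a in inner) ** (1 / p)

f1 = [[random.random() for _ in range(N2)] for _ in range(N1)]
f2 = [[random.random() for _ in range(N2)] for _ in range(N1)]
prod = [[a * b for a, b in zip(r1, r2)] for r1, r2 in zip(f1, f2)]

# ||f1*f2||_{L^p(l^q)} <= ||f1||_{L^{p1}(l^{q1})} * ||f2||_{L^{p2}(l^{q2})}
lhs = mixed_norm(prod, p, q)
rhs = mixed_norm(f1, p1, q1) * mixed_norm(f2, p2, q2)
assert lhs <= rhs + 1e-12
```

For genuine SIOs no such slice-by-slice argument is available, which is why the mixed-norm bounds are extracted from the weighted estimates via extrapolation instead.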
Aside from maximal function estimates, multi-parameter estimates require various square function estimates (and combinations of maximal function and square function estimates). Just as one cannot use $\prod_i Mf_i$ instead of $M(f_1, \ldots, f_n)$ due to the nature of the multilinear weights, it is also not possible to use classical square function estimates separately for the functions $f_i$. This interplay makes it impossible to decouple estimates into terms like $\|M f_1 \cdot w_1\|_{L^{p_1}} \|S f_2 \cdot w_2\|_{L^{p_2}}$, since neither factor would be bounded separately, as in general $w_1^{p_1} \not \in A_{p_1}$ and $w_2^{p_2} \not \in A_{p_2}$. However, such decoupling of estimates has previously seemed almost indispensable. Our proof starts with the reduction to dyadic model operators \cite{AMV} (see also \cite{DLMV2, Hy1, LMOV, Ma1, Ou}), which is a standard idea. After this we introduce a family of $n$-linear multi-parameter square function type objects $A_k$. On the idea level, a big part of the proof works by taking a dyadic model operator $S$ and finding an appropriate square function $A_k$ so that $$ \|S(f_1, \ldots, f_n)w \|_{L^p} \lesssim \|A_k(f_1, \ldots, f_n)w \|_{L^p}. $$ This requires different tools depending on the model operator in question and is a new way to estimate model operators that respects the $n$-linear structure fully. We then prove that all of our operators $A_k$ satisfy the genuinely $n$-linear weighted estimates $$ \|A_{k}(f_1, \ldots, f_n)w \|_{L^p} \lesssim \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}}. $$ This is done with an argument that is based on using duality and lower square function estimates iteratively until all of the cancellation present in these square functions has been exploited. Aside from the full range of mixed-norm estimates, the weighted estimates immediately give other applications as well. We present here a result on commutators, which greatly generalises \cite{LMV}.
Commutator estimates appear all over analysis, implying e.g. factorizations for Hardy functions \cite{CRW} and certain div-curl lemmas relevant in compensated compactness, and were recently connected to the Jacobian problem $Ju = f$ in $L^p$ (see \cite{HyCom}). For a small sample of commutator estimates in various other key settings see e.g. \cite{FL, HLW, HPW, LOR1}. \begin{thm} Suppose $T$ is an $n$-linear $m$-parameter CZO in $\mathbb{R}^d = \prod_{i=1}^m \mathbb{R}^{d_i}$, $1 < p_1, \ldots, p_n \le \infty$ and $1/p = \sum_{i=1}^n 1/p_i> 0$. Suppose also that $\|b\|_{\operatorname{bmo}} = \sup_R \frac{1}{|R|} \int_R |b-\ave{b}_R| < \infty$. Then for all $1\le k\le n$ we have the commutator estimate \begin{equation*} \begin{split} \| [b, T]_k(f_1,\ldots, f_n) w\|_{L^p} &\lesssim \|b\|_{\operatorname{bmo}} \prod_{i=1}^n \|f_iw_i\|_{L^{p_i}}, \\ [b, T]_k(f_1,\ldots, f_n) &:= bT(f_1, \ldots, f_n) - T(f_1, \ldots, f_{k-1}, bf_k, f_{k+1}, \ldots, f_n), \end{split} \end{equation*} for all $n$-linear $m$-parameter weights $\vec w = (w_1, \ldots, w_n) \in A_{\vec p}$. Analogous results hold for iterated commutators. \end{thm} We note that we can also finally dispose of some of the sparse domination tools that restricted some of the theory of \cite{LMV} to bi-parameter. \section{Preliminaries}\label{sec:def} Throughout this paper $A\lesssim B$ means that $A\le CB$ with some constant $C$ that we deem unimportant to track at that point. We write $A\sim B$ if $A\lesssim B\lesssim A$. Sometimes we e.g. write $A \lesssim_{\epsilon} B$ if we want to make the point that $A \le C(\epsilon) B$. \subsection{Dyadic notation} Given a dyadic grid $\calD$ in $\mathbb{R}^d$, $I \in \calD$ and $k \in \mathbb{Z}$, $k \ge 0$, we use the following notation: \begin{enumerate} \item $\ell(I)$ is the side length of $I$. \item $I^{(k)} \in \calD$ is the $k$th parent of $I$, i.e., $I \subset I^{(k)}$ and $\ell(I^{(k)}) = 2^k \ell(I)$.
\item $\ch(I)$ is the collection of the children of $I$, i.e., $\ch(I) = \{J \in \calD \colon J^{(1)} = I\}$. \item $E_I f=\langle f \rangle_I 1_I$ is the averaging operator, where $\langle f \rangle_I = \fint_{I} f = \frac{1}{|I|} \int _I f$. \item $\Delta_If$ is the martingale difference $\Delta_I f= \sum_{J \in \ch (I)} E_{J} f - E_{I} f$. \item $\Delta_{I,k} f$ is the martingale difference block $$ \Delta_{I,k} f=\sum_{\substack{J \in \calD \\ J^{(k)}=I}} \Delta_{J} f. $$ \end{enumerate} For an interval $J \subset \mathbb{R}$ we denote by $J_{l}$ and $J_{r}$ the left and right halves of $J$, respectively. We define $h_{J}^0 = |J|^{-1/2}1_{J}$ and $h_{J}^1 = |J|^{-1/2}(1_{J_{l}} - 1_{J_{r}})$. Let now $I = I_1 \times \cdots \times I_d \subset \mathbb{R}^d$ be a cube, and define the Haar function $h_I^{\eta}$, $\eta = (\eta_1, \ldots, \eta_d) \in \{0,1\}^d$, by setting \begin{displaymath} h_I^{\eta} = h_{I_1}^{\eta_1} \otimes \cdots \otimes h_{I_d}^{\eta_d}. \end{displaymath} If $\eta \ne 0$ the Haar function is cancellative: $\int h_I^{\eta} = 0$. We abuse notation by suppressing the presence of $\eta$, and write $h_I$ for some $h_I^{\eta}$, $\eta \ne 0$. Notice that for $I \in \calD$ we have $\Delta_I f = \langle f, h_I \rangle h_I$ (where the finite $\eta$ summation is suppressed), $\langle f, h_I\rangle := \int fh_I$. We make a few clarifying comments related to the use of Haar functions. In the model operators coming from the representation theorem there are Haar functions involved. There we use the just mentioned convention that $h_I$ means some cancellative Haar function $h_I^{\eta}$ whose index $\eta$ we do not specify. On the other hand, the square function estimates in Section \ref{sec:SquareFunctions} are formulated using martingale differences (which involve multiple Haar functions as $\Delta_I f = \sum_{\eta \ne 0} \langle f, h_I^{\eta} \rangle h_I^{\eta}$).
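The one-dimensional relation $\Delta_I f = \langle f, h_I \rangle h_I$ can be checked concretely on a discrete grid. In the sketch below (our own construction; the grid size and data are arbitrary) the interval $I=[0,1)$ is split into $N$ cells, the martingale difference is computed from the averaging operators, and it is compared with the projection onto the cancellative Haar function $h_I = |I|^{-1/2}(1_{J_l} - 1_{J_r})$.

```python
N = 8                            # cells in I = [0,1); each cell has measure 1/N
f = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]   # arbitrary sample data

def avg(values):
    return sum(values) / len(values)

# Delta_I f = E_{J_l} f + E_{J_r} f - E_I f on the two halves of I
E_I = avg(f)
E_left, E_right = avg(f[:N // 2]), avg(f[N // 2:])
delta = [(E_left if i < N // 2 else E_right) - E_I for i in range(N)]

# h_I = |I|^{-1/2} (1_{J_l} - 1_{J_r}); here |I| = 1, so |I|^{-1/2} = 1
h = [1.0 if i < N // 2 else -1.0 for i in range(N)]
coeff = sum(f[i] * h[i] for i in range(N)) / N        # <f, h_I> = int f h_I
proj = [coeff * h[i] for i in range(N)]

assert all(abs(delta[i] - proj[i]) < 1e-12 for i in range(N))
```

In one dimension there is a single cancellative Haar function per interval; in $\mathbb{R}^d$ the $2^d-1$ choices of $\eta \ne 0$ are what the suppressed $\eta$ summation accounts for.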
When we estimate the model operators, we carefully consider this difference by passing from the Haar functions into martingale differences via the simple identity \begin{equation}\label{eq:HaarMart} \langle f, h_I \rangle=\langle \Delta_I f, h_I \rangle, \end{equation} which follows from $\Delta_I f=\sum_{\eta \not=0} \langle f, h^{\eta}_I \rangle h^{\eta}_I$ and orthogonality. \subsection{Multi-parameter notation} We will be working on the $m$-parameter product space $\mathbb{R}^d = \prod_{i=1}^m \mathbb{R}^{d_i}$. We denote a general dyadic grid in $\mathbb{R}^{d_i}$ by $\calD^i$. We denote cubes in $\calD^i$ by $I^i, J^i, K^i$, etc. Thus, our dyadic rectangles take the forms $\prod_{i=1}^m I^i$, $\prod_{i=1}^m J^i$, $\prod_{i=1}^m K^i$ etc. We usually denote the collection of dyadic rectangles by $\calD = \prod_{i=1}^m \calD^i$. If $A$ is an operator acting on $\mathbb{R}^{d_1}$, we can always let it act on the product space $\mathbb{R}^d$ by setting $A^1f(x) = A(f(\cdot, x_2, \ldots, x_m))(x_1)$. Similarly, we use the notation $A^i f$ if $A$ is originally an operator acting on $\mathbb{R}^{d_i}$. Our basic multi-parameter dyadic operators -- martingale differences and averaging operators -- are obtained by simply chaining together relevant one-parameter operators. For instance, an $m$-parameter martingale difference is $$ \Delta_R f = \Delta_{I^1}^1 \cdots \Delta_{I^m}^m f, \qquad R = \prod_{i=1}^m I^i. $$ When we integrate with respect to only one of the parameters we may e.g. write \[ \langle f, h_{I^1} \rangle_1(x_2, \ldots, x_m):=\int_{\mathbb{R}^{d_1}} f(x_1, \ldots, x_m)h_{I^1}(x_1) \ud x_1 \] or $$ \langle f \rangle_{I^1, 1}(x_2, \ldots, x_m) := \fint_{I^1} f(x_1, \ldots, x_m) \ud x_1. $$ \subsection{Adjoints}\label{sec:adjoints} Consider an $n$-linear operator $T$ on $\mathbb{R}^d = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$. Let $f_j = f_j^1 \otimes f_j^2$, $j = 1, \ldots, n+1$. We set up notation for the adjoints of $T$ in the bi-parameter situation.
We let $T^{j*}$, $j \in \{0, \ldots, n\}$, denote the full adjoints, i.e., $T^{0*} = T$ and otherwise $$ \langle T(f_1, \dots, f_n), f_{n+1} \rangle = \langle T^{j*}(f_1, \dots, f_{j-1}, f_{n+1}, f_{j+1}, \dots, f_n), f_j \rangle. $$ A subscript $1$ or $2$ denotes a partial adjoint in the given parameter -- for example, we define $$ \langle T(f_1, \dots, f_n), f_{n+1} \rangle = \langle T^{j*}_1(f_1, \dots, f_{j-1}, f_{n+1}^1 \otimes f_j^2, f_{j+1}, \dots, f_n), f_j^1 \otimes f_{n+1}^2 \rangle. $$ Finally, we can take partial adjoints with respect to different parameters in different slots also -- in that case we denote the adjoint by $T^{j_1*, j_2*}_{1,2}$. It simply interchanges the functions $f_{j_1}^1$ and $f_{n+1}^1$ and the functions $f_{j_2}^2$ and $f_{n+1}^2$. Of course, we e.g. have $T^{j*, j*}_{1,2} = T^{j*}$ and $T^{0*, j*}_{1,2} = T^{j*}_{2}$, so everything can be obtained, if desired, with the most general notation $T^{j_1*, j_2*}_{1,2}$. In any case, there are $(n+1)^2$ adjoints (including $T$ itself). These notions have obvious extensions to $m$-parameters. \subsection{Structure of the paper} To avoid unnecessarily complicating the notation, we start by proving everything in the bi-parameter case $m=2$. Importantly, we present a proof which does not exploit this in a way that would not be extendable to $m$-parameters (e.g., our proof for the partial paraproducts does not exploit sparse domination for the appearing one-parameter paraproducts). At the end, we demonstrate for some key model operators how the general case can be dealt with. \section{Weights}\label{sec:weights} The following notions have an obvious extension to $m$-parameters. A weight $w(x_1, x_2)$ (i.e. a locally integrable a.e. 
positive function) belongs to the bi-parameter weight class $A_p = A_p(\mathbb{R}^{d_1} \times \mathbb{R}^{d_2})$, $1 < p < \infty$, if $$ [w]_{A_p} := \sup_{R}\, \ave{w}_R \ave{w^{1-p'}}^{p-1}_R = \sup_{R}\, \ave{w}_R \ave{w^{-\frac{1}{p-1}}}^{p-1}_R < \infty, $$ where the supremum is taken over rectangles $R$ -- that is, over $R = I^1 \times I^2$ where $I^i \subset \mathbb{R}^{d_i}$ is a cube. Thus, this is the one-parameter definition but cubes are replaced by rectangles. We have \begin{equation}\label{eq:eq28} [w]_{A_p(\mathbb{R}^{d_1} \times \mathbb{R}^{d_2})} < \infty \textup { iff } \max\big( \esssup_{x_1 \in \mathbb{R}^{d_1}} \,[w(x_1, \cdot)]_{A_p(\mathbb{R}^{d_2})}, \esssup_{x_2 \in \mathbb{R}^{d_2}}\, [w(\cdot, x_2)]_{A_p(\mathbb{R}^{d_1})} \big) < \infty, \end{equation} and that $$ \max\big( \esssup_{x_1 \in \mathbb{R}^{d_1}} \,[w(x_1, \cdot)]_{A_p(\mathbb{R}^{d_2})}, \esssup_{x_2 \in \mathbb{R}^{d_2}}\, [w(\cdot, x_2)]_{A_p(\mathbb{R}^{d_1})} \big) \le [w]_{A_p(\mathbb{R}^{d_1}\times \mathbb{R}^{d_2})}, $$ while the constant $[w]_{A_p}$ is dominated by the maximum to some power. It is also useful that $\ave{w}_{I^2,2} \in A_p(\mathbb{R}^{d_1})$ uniformly on the cube $I^2 \subset \mathbb{R}^{d_2}$. For basic bi-parameter weighted theory see e.g. \cite{HPW}. We say $w\in A_\infty(\mathbb{R}^{d_1}\times \mathbb{R}^{d_2})$ if \[ [w]_{A_\infty}:=\sup_R \, \ave{w}_R \exp\big( \ave{\log w^{-1}}_R \big)<\infty. \] It is well-known that $$A_\infty(\mathbb{R}^{d_1}\times \mathbb{R}^{d_2})=\bigcup_{1<p<\infty}A_p(\mathbb{R}^{d_1}\times \mathbb{R}^{d_2}).$$ We also define $$ [w]_{A_1} = \sup_R \, \ave{w}_R \esssup_R w^{-1}. $$ \begin{comment} The following multilinear reverse H\"older property is well-known -- for the history and a very short proof see e.g. \cite[Lemma 2.5]{Li}. The proof in our bi-parameter setting is the same. \begin{lem}\label{lem:lem6} Let $u_i \in (0, \infty)$ and $w_i \in A_{\infty}$, $i = 1, \ldots, N$, be bi-parameter weights. 
Then for every rectangle $R$ we have $$ \prod_{i=1}^N \ave{w_i}_R^{u_i} \lesssim \Big\langle \prod_{i=1}^N w_i^{u_i} \Big\rangle_R. $$ \end{lem} \end{comment} We introduce the classes of multilinear Muckenhoupt weights that we will use. \begin{defn}\label{defn:defn1} Given $\vec p=(p_1, \ldots, p_n)$ with $1 \le p_1, \ldots, p_n \le \infty$ we say that $\vec{w}=(w_1, \ldots, w_n)\in A_{\vec p} = A_{\vec p}(\mathbb{R}^{d_1} \times \mathbb{R}^{d_2})$, if $$ 0<w_i <\infty, \qquad i = 1, \ldots, n, $$ almost everywhere and $$ [\vec{w}]_{A_{\vec p}} :=\sup_R \, \ave{w^p}_R^{\frac 1 p} \prod_{i=1}^n \ave{w_i^{-p_i'}}_R^{\frac 1{p_i'}} < \infty, $$ where the supremum is over rectangles $R$, $$ w := \prod_{i=1}^n w_i \qquad \textup{and} \qquad \frac 1 p = \sum_{i=1}^n \frac 1 {p_i}. $$ If $p_i = 1$ we interpret $\ave{w_i^{-p_i'}}_R^{\frac 1{p_i'}}$ as $\esssup_R w_i^{-1}$, and if $p = \infty$ we interpret $\ave{w^p}_R^{\frac 1 p}$ as $\esssup_R w$. \end{defn} \begin{rem} \begin{enumerate} \item It is important that the lower bound \begin{equation}\label{eq:eq7} \ave{w^p}_R^{\frac 1 p} \prod_{i=1}^n \ave{w_i^{-p_i'}}_R^{\frac 1{p_i'}} \ge 1 \end{equation} holds always. To see this recall that for $\alpha_1, \alpha_2 > 0$ we have by H\"older's inequality that \begin{equation}\label{eq:eq8} \begin{split} 1 \le \ave{ w^{-\alpha_1}}_R^{\frac{1}{\alpha_1}} \ave{ w^{\alpha_2}}_R^{\frac{1}{\alpha_2}}. \end{split} \end{equation} Apply this with $\alpha_2 = p$ and $\alpha_1 = \frac{1}{n-\frac{1}{p}}$. Then apply H\"older's inequality with the exponents $u_i = \Big(n-\frac{1}{p}\Big)p_i'$ to get $\Big \langle \Big( \prod_{i=1}^n w_i \Big)^{-\frac{1}{n-\frac{1}{p}}} \Big \rangle_R^{n-\frac{1}{p}} \le \prod_{i=1}^n \ave{ w_i^{-p_i'}}_R^{\frac{1}{p_i'}}$. \item Our definition is essentially the usual one-parameter definition \cite{LOPTT} with the difference that cubes are replaced by rectangles. 
However, we are also using the renormalised definition from \cite{LMMOV} that works better with the exponents $p_i = \infty$. Compared to the usual formulation of \cite{LOPTT} the relation is that $[w_1^{p_1}, \ldots, w_n^{p_n}]_{A_{\vec p}}^{\frac 1p}$ with $A_{\vec p}$ defined as in \cite{LOPTT} agrees with our $[\vec w]_{A_{\vec p}}$ when $p_i < \infty$. \item The case $p_1 = \cdots = p_n = \infty = p$ can be used as the starting point of extrapolation. This is rarely useful but we will find use for it when we consider the multilinear maximal function. \end{enumerate} \end{rem} The following characterization of the class $ A_{\vec p}$ is convenient. The one-parameter result with the different normalization is \cite[Theorem 3.6]{LOPTT}. We record the proof for the convenience of the reader. \begin{lem}\label{lem:lem1} Let $\vec p=(p_1, \ldots, p_n)$ with $1 \le p_1, \ldots, p_n \le \infty$, $1/p = \sum_{i=1}^n 1/p_i \ge 0$, $\vec{w}=(w_1, \ldots, w_n)$ and $w = \prod_{i=1}^n w_i$. We have $$ [w_i^{-p_i'}]_{A_{np_i'}} \le [\vec{w}]_{A_{\vec p}}^{p_i'}, \qquad i = 1, \ldots, n, $$ and $$ [w^p]_{A_{np}} \le [\vec{w}]_{A_{\vec p}}^{p}. $$ In the case $p_i = 1$ the estimate is interpreted as $[w_i^{\frac 1n}]_{A_1} \le [\vec{w}]_{A_{\vec p}}^{1/n}$, and in the case $p=\infty$ we have $[w^{-\frac{1}{n}}]_{A_1} \le [\vec{w}]_{A_{\vec p}}^{1/n}$. Conversely, we have $$ [\vec{w}]_{A_{\vec p}} \le [w^p]_{A_{np}}^{\frac{1}{p}} \prod_{i=1}^n [w_i^{-p_i'}]_{A_{np_i'}}^{\frac{1}{p_i'}}. $$ \end{lem} \begin{proof} We fix an arbitrary $j \in \{1, \ldots, n\}$ for which we will show $[w_j^{-p_j'}]_{A_{np_j'}} \le [\vec{w}]_{A_{\vec p}}^{p_j'}$. Notice that \begin{equation}\label{eq:eq1} \frac{1}{p} + \sum_{i \ne j} \frac{1}{p_i'} = n - 1 + \frac{1}{p_j}. \end{equation} We define $q_j$ via the identity $$ \frac{1}{q_j} = \frac{1}{n - 1 + \frac{1}{p_j}}\cdot \frac 1p $$ and for $i \ne j$ we set $$ \frac{1}{q_i} = \frac{1}{n - 1 + \frac{1}{p_j}} \cdot\frac{1}{p_i'}. 
$$ From \eqref{eq:eq1} we have that $\sum_i \frac{1}{q_i} = 1$. By definition we have \begin{equation}\label{eq:eq3} [w_j^{-p_j'}]_{A_{np_j'}} = \sup_R \, \ave{ w_j^{-p_j'}}_R \ave{ w_j^{p_j' \frac{1}{np_j'-1}}}_R^{np_j' - 1}. \end{equation} Notice that $$ p_j' \frac{1}{np_j'-1} = \frac{1}{n-\frac{1}{p_j'}} = \frac{1}{n - 1 + \frac{1}{p_j}}. $$ Using H\"older's inequality with the exponents $q_1, \ldots, q_n$ we have the desired estimate $$ \ave{ w_j^{-p_j'}}_R^{\frac 1{p_j'}} \ave{ w_j^{\frac{p}{q_j}}}_R^{\frac{q_j}{p}} = \ave{ w_j^{-p_j'}}_R^{\frac 1{p_j'}}\ave{ w^{\frac{p}{q_j}} \prod_{i \ne j } w_i^{-\frac{p}{q_j}}}_R^{\frac{q_j}{p}} \le \ave{w^p}_R^{\frac{1}{p} } \prod_{i} \ave{ w_i^{-p_i'} }_R^{\frac{1}{p_i'} } \le [\vec{w}]_{A_{\vec p}}. $$ When $p_j=1$ this is $\esssup_R w_j^{-1} \ave{ w_j^{\frac{1}{n}}}_R^{n}\le [\vec{w}]_{A_{\vec p}}$, and so $[w_j^{\frac 1n}]_{A_1}^n \le [\vec{w}]_{A_{\vec p}}$. We now move on to bounding $[w^p]_{A_{np}}$. Notice that by definition \begin{equation}\label{eq:eq4} [w^p]_{A_{np}} = \sup_R \, \ave{w^p}_R \ave{ w^{-\frac{p}{np-1}}}_R^{np-1}. \end{equation} We define $s_i$ via $$ -\frac{p}{np-1} \cdot s_i = - p_i' $$ and notice that then $\sum_i \frac{1}{s_i} = 1$. Then, by H\"older's inequality with the exponents $s_1, \ldots, s_n$ we have $$ \ave{w^p}_R \ave{ w^{-\frac{p}{np-1}}}_R^{np-1} \le \ave{w^p}_R \prod_i \ave{ w_i^{-p_i'}}_R^{\big(\frac{p}{np-1}\big)\frac{1}{p_i'}(np-1)} = \Big[ \ave{w^p}_R^{\frac{1}{p}} \prod_i \ave{ w_i^{-p_i'}}_R^{\frac{1}{p_i'}} \Big]^{p} \le [\vec{w}]_{A_{\vec p}}^{p}, $$ which is the desired bound for $[w^p]_{A_{np}}$. Notice that in the case $p=\infty$ we get $$ [w^{-\frac{1}{n}}]_{A_1}^n = \sup_R \big\langle w^{-\frac{1}{n}} \big \rangle_R^{n} \esssup_R w \le \sup_R\Big[ \prod_i \langle w_i^{-1} \rangle_R \Big]\esssup_R w = [\vec{w}]_{A_{\vec p}}. $$ We then move on to bounding $ [\vec{w}]_{A_{\vec p}}$. 
It is based on the following inequality \begin{equation}\label{eq:eq2} 1 \le \ave{ w^{-\frac{p}{np-1}} }_R^{n-\frac{1}{p}} \prod_i \Big\langle w_i^{\frac{1}{n - 1 + \frac{1}{p_i}}} \Big\rangle_R^{n-1+\frac{1}{p_i}}. \end{equation} Before proving this, we show how it implies the desired bound. We have \begin{align*} [\vec{w}&]_{A_{\vec p}} = \sup_R \, \ave{w^p}_R^{\frac 1 p} \prod_i \ave{w_i^{-p_i'}}_R^{\frac 1{p_i'}} \\ &\le \sup_R \Big[ \ave{w^p}_R \ave{ w^{-\frac{p}{np-1}} }_R^{np-1} \Big]^{\frac 1 p} \prod_i \Big[ \ave{w_i^{-p_i'}}_R \big \langle w_i^{p_i' \frac{1}{np_i'-1}}\big \rangle_R^{np_i' - 1} \Big]^{\frac 1{p_i'}} \le [w^p]_{A_{np}}^{\frac{1}{p}} \prod_i [w_i^{-p_i'}]_{A_{np_i'}}^{\frac{1}{p_i'}}, \end{align*} where in the last estimate we recalled \eqref{eq:eq3} and \eqref{eq:eq4}. Let us now give the details of \eqref{eq:eq2}. We apply \eqref{eq:eq8} with $\alpha_1 = \frac{p}{np-1}$ and $\alpha_2 = \frac{1}{n(n-1)+\frac{1}{p}}$ to get $$ 1 \le \ave{ w^{-\frac{p}{np-1}} }_R^{n-\frac{1}{p}} \Big\langle w^{ \frac{1}{n(n-1)+\frac{1}{p}} }\Big\rangle_R^{n(n-1)+\frac{1}{p}}. $$ The first term is already as in \eqref{eq:eq2}. Define $u_i$ via $$ \frac{1}{n(n-1)+\frac{1}{p}} u_i = \frac{1}{n - 1 + \frac{1}{p_i}} $$ and notice that by H\"older's inequality with these exponents ($\sum_i \frac{1}{u_i} = 1$) we have $$ \Big\langle w^{ \frac{1}{n(n-1)+\frac{1}{p}} }\Big\rangle_R^{n(n-1)+\frac{1}{p}} \le \prod_i \Big\langle w_i^{\frac{1}{n - 1 + \frac{1}{p_i}} }\Big\rangle_R^{ n - 1 + \frac{1}{p_i}}, $$ which matches the second term in \eqref{eq:eq2}. \end{proof} The following duality of multilinear weights is handy -- see \cite[Lemma 3.1]{LMS}. We give the short proof for convenience. \begin{lem}\label{lem:lem7} Let $\vec p=(p_1, \ldots, p_n)$ with $1 < p_1, \ldots, p_n < \infty$ and $\frac 1 p = \sum_{i=1}^n \frac 1 {p_i} \in (0,1)$. 
Let $\vec{w}=(w_1, \ldots, w_n)\in A_{\vec p}$ with $w = \prod_{i=1}^n w_i$ and define \begin{align*} \vec w^{\, i} &= (w_1, \ldots, w_{i-1}, w^{-1}, w_{i+1}, \ldots, w_n), \\ \vec p^{\,i} &= (p_1, \ldots, p_{i-1}, p', p_{i+1}, \ldots, p_n). \end{align*} Then we have $$ [\vec{w}^{\,i}]_{A_{\vec p^{\, i}}} = [\vec{w}]_{A_{\vec p}}. $$ \end{lem} \begin{proof} We take $i=1$ for notational convenience. Notice that $\frac{1}{p'} + \sum_{i=2}^n \frac{1}{p_i} = \frac{1}{p_1'}$. Notice also that $w^{-1} \prod_{i=2}^n w_i = w_1^{-1}$. Therefore, we have $$ [\vec{w}^{\,1}]_{A_{\vec p^{\, 1}}} = \sup_R \, \ave{w_1^{-p_1'}}_R^{\frac{1}{p_1'}} \ave{ w^{p}}_R^{\frac{1}{p}} \prod_{i=2}^n \ave{ w_i^{-p_i'}}_R^{\frac{1}{p_i'}} = [\vec{w}]_{A_{\vec p}}. $$ \end{proof} \begin{comment} The partial adjoints have to be always considered separately. However, using Lemma \ref{lem:lem7} we see that the weighted boundedness of $T$ transfers to the adjoints $T^{j*}$. Let us show this. Suppose we know that for some exponents $1 < p_1, \ldots, p_n \le \infty$ and $1/p = \sum_i 1/p_i> 0$ we have \begin{equation}\label{eq:eq5} \|T(f_1, \ldots, f_n) w \|_{L^p} \lesssim \prod_i \|f_i w_i\|_{L^{p_i}} \end{equation} for all multilinear bi-parameter weights $\vec w = (w_1, \ldots, w_n) \in A_{\vec p}$. By extrapolation \cite{LMMOV} we know that this holds with all exponents in this range. Fix some $\vec p$ with $1 < p_1, \ldots, p_n, p < \infty$ and $\vec w = (w_1, \ldots, w_n) \in A_{\vec p}$. Let $\|f_{n+1}\|_{L^{p'}} \le 1$. We will bound $\|T^{1*}(f_1, \ldots, f_n)w\|_{L^p}$ by controlling \begin{align*} |\langle T^{1*}(f_1, \ldots, f_n)w, f_{n+1}\rangle| &= |\langle T( f_{n+1}w, f_2, \ldots, f_n)w_1^{-1}, f_1w_1 \rangle| \\ &\le \|f_1 w_1\|_{L^{p_1}} \| T( f_{n+1}w, f_2, \ldots, f_n)w_1^{-1} \|_{L^{p_1'}}. \end{align*} By Lemma \ref{lem:lem7} we know that $(w^{-1}, w_2, \ldots, w_n) \in A_{(p', p_2, \ldots, p_n)}$. The product of these weights is $w_1^{-1}$ and the associated target exponent is $p_1'$.
Applying \eqref{eq:eq5} with this data we get $$ \| T( f_{n+1}w, f_2, \ldots, f_n)w_1^{-1} \|_{L^{p_1'}} \le \|(f_{n+1}w) w^{-1}\|_{L^{p'}} \prod_{i=2}^n \| f_i w_i\|_{L^{p_i}} \le \prod_{i=2}^n \| f_i w_i\|_{L^{p_i}}. $$ We have shown that $$ \|T^{1*}(f_1, \ldots, f_n)w\|_{L^p} \le \prod_{i=1}^n \| f_i w_i\|_{L^{p_i}}. $$ By using the extrapolation \cite{LMMOV} again this holds for all exponents $1 < p_1, \ldots, p_n \le \infty$ with $1/p = \sum_i 1/p_i> 0$ and for all multilinear bi-parameter weights $\vec w = (w_1, \ldots, w_n) \in A_{\vec p}$. \end{comment} We now recall the recent extrapolation result of \cite{LMMOV}. The previous version, which did not yet allow the exponents to be $\infty$, appeared in \cite{LMO}. For related independent work see \cite{Nieraeth}. The previous extrapolation results with the separate assumptions $w_i^{p_i} \in A_{p_i}$ appear in \cite{GM} and \cite{DU}. An even more general result than the one below appears in \cite{LMMOV}, but we will not need that generality here. Finally, we note that the proof of this extrapolation result can be made to work in $m$-parameters even though \cite{LMMOV} provides the details only in the one-parameter case -- we give more details later in Section \ref{app:app1}. \begin{thm}\label{thm:ext} Let $f_1, \ldots, f_n$ and $g$ be given functions. Given $\vec p=(p_1,\dots, p_n)$ with $1\le p_1,\dots, p_n\le \infty$, let $\frac1p= \sum_{i=1}^n \frac{1}{p_i}$. Assume that given any $\vec w=(w_1,\dots, w_n) \in A_{\vec p}$ the inequality \begin{equation}\label{extrapol:H*} \|gw\|_{L^{p}} \lesssim \prod_{i=1}^n \|f_iw_i\|_{L^{p_i} } \end{equation} holds, where $w:=\prod_{i=1}^n w_i $.
Then for all exponents $\vec q=(q_1,\dots,q_n)$, with $1< q_1,\dots, q_n\le \infty$ and $\frac1q= \sum_{i=1}^n \frac{1}{q_i} >0$, and for all weights $\vec v=(v_1,\dots, v_n) \in A_{\vec q}$ the inequality \begin{equation*}\label{extrapol:C*} \|gv\|_{L^{q} } \lesssim \prod_{i=1}^n \|f_iv_i\|_{L^{q_i} } \end{equation*} holds, where $v:=\prod_{i=1}^n v_i $. Given functions $f_1^j, \ldots, f_n^j$ and $g^j$ so that \eqref{extrapol:H*} holds uniformly in $j$, we have for the same family of exponents and weights as above, and for all exponents $\vec{s}=(s_1,\dots, s_n)$ with $1< s_1,\dots, s_n\le \infty$ and $\frac1s=\sum_i \frac{1}{s_i} >0$ the inequality \begin{equation}\label{extrapol:vv*} \| (g^j v)_j\|_{L^{q}(\ell^s)} \lesssim \prod_{i=1}^n \|(f_i^j v_i)_j\|_{L^{q_i}(\ell^{s_i}) }. \end{equation} \end{thm} \begin{rem} Using Lemma \ref{lem:lem7} and extrapolation, Theorem \ref{thm:ext}, we see that the weighted boundedness of $T$ transfers to the adjoints $T^{j*}$. Partial adjoints always have to be considered separately, though. \end{rem} As a final thing in this section, we demonstrate the necessity of the $A_{\vec p}$ condition for the weighted boundedness of SIOs. We work in the $m$-parameter setting and let $\mathbb{R}^d=\mathbb{R}^{d_1} \times \dots \times \mathbb{R}^{d_m}$. Let $R_j$ be the following version of the $n$-linear one-parameter Riesz transform in $\mathbb{R}^{d_j}$: \[ R_j(f_1,\dots, f_n)=\textup{p.v.} \int_{\mathbb{R}^{d_jn}}\frac{\sum_{i=1}^n\sum_{k=1}^{d_j}(x-y_i)_k}{(\sum_{i=1}^{n}|x-y_i|)^{d_j n+1}}f_1(y_1)\cdots f_n(y_n) \ud y_1\cdots \ud y_n, \] where $(x-y_i)_k$ is the $k$-th coordinate of $x-y_i \in \mathbb{R}^{d_j}$. Consider the tensor product $ R_{1}\otimes R_{2}\otimes \cdots \otimes R_{m}. $ Let $\vec w =(w_1, \dots, w_n)$ be a multilinear weight, that is, $0 < w_i < \infty$ a.e., and denote $w=\prod_{i=1}^n w_i$.
Suppose that for some exponents $1< p_1,\dots, p_n\le \infty$ with $1/p=\sum_{i=1}^n 1/{p_i}>0$ the estimate \[ \| R_{1}\otimes R_{2}\otimes \cdots \otimes R_{m} (f_1, \dots, f_n)\|_{L^{p,\infty}(w^p)} \lesssim \prod_{i=1}^n \|f_iw_i\|_{L^{p_i}} \] holds for all $f_i \in L^\infty _c$. We show that $\vec w$ is an $m$-parameter $A_{\vec p}$ weight. Define $\sigma_i=w_i^{-p_i'}$. Let $E \subset \mathbb{R}^d$ be an arbitrary set such that $1_E \sigma_i \in L^\infty_c$ for all $i=1, \dots, n$. Fix an $m$-parameter rectangle $R=R^1 \times \cdots \times R^m \subset \mathbb{R}^d$, where each $R^j$ is a cube. Let $R^+= (R^1)^+ \times \cdots \times (R^m)^+$, where $(R^j)^+:=R^j+(\ell(R^j), \dots, \ell(R^j))$. Using the kernel of $R_1 \otimes \cdots \otimes R_m$ we have for all $x \in R^+$ that \begin{align*} R_{1}\otimes R_{2}\otimes \cdots \otimes R_{m} (1_E \sigma_11_R, \dots, 1_E\sigma_n1_R)(x) \gtrsim \prod_{i=1}^n \langle 1_E\sigma_i\rangle_R. \end{align*} Hence \begin{equation*} w^p(R^+)^{\frac 1p}\prod_{i=1}^n \langle 1_E\sigma_i\rangle_R \lesssim \prod_{i=1}^n \|1_E\sigma_i1_Rw_i \|_{L^{p_i}} =\prod_{i=1}^n \sigma_i(E \cap R)^{\frac{1}{p_i}}, \end{equation*} which gives that $ \langle w^p\rangle_{R^+}^{\frac 1p}\prod_{i=1}^n \langle 1_E\sigma_i\rangle_R^{\frac{1}{p_i'}} \lesssim 1. $ Since $E$ was arbitrary this implies the estimate \begin{equation}\label{eq:eq29} \langle w^p\rangle_{R^+}^{\frac 1p}\prod_{i=1}^n \langle \sigma_i\rangle_R^{\frac{1}{p_i'}} \lesssim 1. \end{equation} Similarly, we can show that \begin{equation}\label{eq:eq30} \langle w^p\rangle_{R}^{\frac 1p}\prod_{i=1}^n \langle \sigma_i\rangle_{R^+}^{\frac{1}{p_i'}} \lesssim 1. \end{equation} By H\"older's inequality we have that \[ \langle w^{-\frac p{np-1}}\rangle_{R^+}^{\frac{np-1}p}\le \prod_{i=1}^n \langle \sigma_i\rangle_{R^+}^{\frac 1{p_i'}}. \] Hence, \eqref{eq:eq30} shows that \[ \langle w^p\rangle_{R}^{\frac 1p} \langle w^{-\frac p{np-1}}\rangle_{R^+}^{\frac{np-1}p} \lesssim 1.
\] Therefore, \begin{align*} \frac{\langle w^p\rangle_{R}^{\frac 1p}}{\langle w^p\rangle_{R^+}^{\frac 1p}} =\frac{\langle w^p\rangle_{R}^{\frac 1p}\langle w^{-\frac p{np-1}}\rangle_{R^+}^{\frac{np-1}p}}{\langle w^p\rangle_{R^+}^{\frac 1p}\langle w^{-\frac p{np-1}}\rangle_{R^+}^{\frac{np-1}p}} \lesssim 1, \end{align*} where the denominator in the middle term was $\ge 1$. Thus, $\langle w^p\rangle_{R}^{\frac 1p} \lesssim \langle w^p\rangle_{R^+}^{\frac 1p}$, which together with \eqref{eq:eq29} gives that $ \langle w^p\rangle_{R}^{\frac 1p}\prod_{i=1}^n \langle \sigma_i\rangle_R^{\frac 1{p_i'}} \lesssim 1. $ As the rectangle $R$ was arbitrary, this shows that $\vec w$ is an $m$-parameter $A_{\vec p}$ weight. \section{Maximal functions} It was proved in \cite{GLTP} that the multilinear bi-parameter (or multi-parameter) maximal function is bounded with respect to the genuinely multilinear bi-parameter weights. We give a new, efficient proof of this. Let $\calD = \calD^1 \times \calD^2$ be a fixed lattice of dyadic rectangles and define $$ M_{\calD}(f_1, \ldots, f_n) = \sup_{R \in \calD} \prod_{i=1}^n \ave{ |f_i| }_R 1_R. $$ \begin{prop}\label{prop:prop2} If $1 < p_1, \ldots, p_n \le \infty$ and $1/p = \sum_{i=1}^n 1/p_i$ we have $$ \|M_{\calD}(f_1, \ldots, f_n)w \|_{L^p} \lesssim \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}} $$ for all multilinear bi-parameter weights $\vec w \in A_{\vec p}$. \end{prop} \begin{proof} We prove the case $\vec p = (p_1, \ldots, p_n) = (\infty, \ldots, \infty)$ directly and then conclude with extrapolation, Theorem \ref{thm:ext}. We have $$ \sup_R \Big[\prod_i \langle w_i^{-1} \rangle_R \Big]\cdot \esssup_R w = [\vec{w}]_{A_{\vec p}}, $$ and therefore $$ \prod_i \langle w_i^{-1} \rangle_R \lesssim \frac{1}{\esssup_R w}. $$ For every $R \in \calD$ let $N_R \subset R$ be such that $|N_R| = 0$ and $w(x) \le \esssup_R w$ for all $x \in R \setminus N_R$. Let $N = \bigcup_{R \in \calD} N_R$. Then $|N| = 0$ and for every $x \in \mathbb{R}^d \setminus N$ we have $$ \frac{1}{w(x)} \ge \sup_{R \in \calD} \frac{1_R(x)}{\esssup_R w}.
$$ Thus, we have \begin{align*} M_{\calD}(f_1, \ldots, f_n)(x)w(x) &\le \Big[\prod_i \|f_i w_i\|_{L^{\infty}} \Big]\sup_{R \in \calD} \Big[ 1_R(x) \prod_i \ave{ w_i^{-1}}_R \Big] \cdot w(x) \\ &\lesssim \Big[\prod_i \|f_i w_i\|_{L^{\infty}}\Big] \sup_{R \in \calD} \Big[ \frac{1_R(x) }{\esssup_R w} \Big] \cdot w(x) \le \prod_i \|f_i w_i\|_{L^{\infty}} \end{align*} almost everywhere, and so $\|M_{\calD}(f_1, \ldots, f_n)w\|_{L^{\infty}} \lesssim \prod_i \|f_i w_i\|_{L^{\infty}}$ as desired. \begin{comment} We now give another proof, which is not so special and, thus, gives more intuition on operators different from the maximal function. This time we also use extrapolation but instead directly prove the case $\overline p = (p_1, \ldots, p_n) = (n, \ldots, n)$. Then we have $p = 1$, $p_i = n$ and $p_i' = \frac{n}{n-1}$. We denote $$ \sigma_i = w_i^{-p_i'} = w_i^{-\frac{n}{n-1}} $$ and use the definition of $[\vec w]_{A_{\vec p}}$ to estimate $$ \prod_i \ave{\sigma_i}_R \lesssim_{[\vec w]_{A_{\vec p}}} \ave{w}_R^{-\frac{n}{n-1}}. $$ Fix $x$ and consider an arbitrary $R \in \calD$ so that $x \in R$. Then we have \begin{align*} \prod_i \ave{ |f_i| }_R &= \prod_i \frac{\sigma_i(R)}{|R|} \frac{1}{\sigma_i(R)} \int_R |f_i| \sigma_i^{-1} \sigma_i \\ &= \prod_i \ave{\sigma_i}_R \ave{ |f_i| \sigma_i^{-1}}_R^{\sigma_i} \\ &\lesssim \ave{w}_R^{-\frac{n}{n-1}} \prod_i \ave{ |f_i| \sigma_i^{-1}}_R^{\sigma_i} \\ &= \Big[ \ave{w}_R^{-1} \prod_i [ \ave{ |f_i| \sigma_i^{-1}}_R^{\sigma_i}]^{\frac{n-1}{n}} \Big]^{\frac{n}{n-1}} \\ &= \Big[ \frac{1}{w(R)} \int_R \prod_i [ \ave{ |f_i| \sigma_i^{-1}}_R^{\sigma_i}]^{\frac{n-1}{n}} \Big]^{\frac{n}{n-1}} \\ &\le \Big[ \frac{1}{w(R)} \int_R \prod_i [ M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1}) ]^{\frac{n-1}{n}} \cdot w^{-1} w \Big]^{\frac{n}{n-1}} \\ &\le \Big[ M_{\calD}^w\Big(\prod_i [ M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1})]^{\frac{n-1}{n}} \cdot w^{-1} \Big)(x) \Big]^{\frac{n}{n-1}}. 
\end{align*} Therefore, we have \begin{align*} \|M_{\calD}(f_1, \ldots, f_n) \|_{L^1(w) } &\lesssim \Big\| M_{\calD}^w\Big(\prod_i [ M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1})]^{\frac{n-1}{n}} \cdot w^{-1} \Big) \Big\|_{L^{\frac{n}{n-1}}(w)}^{\frac{n}{n-1}}. \end{align*} As $p=1$ we have by Lemma \ref{lem:lem1} that $w \in A_{n}$ (the exact class does not matter as long as we are in $A_{\infty}$) and so by Proposition \ref{prop:prop1} this is dominated by \begin{align*} \Big\| \Big[ \prod_i M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1}) \Big]^{\frac{n-1}{n}} w^{-1} \Big\|_{L^{\frac{n}{n-1}}(w)}^{\frac{n}{n-1}} &= \Big\| \prod_i M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1})w_i^{-\frac{1}{n-1}} \Big\|_{L^1} \\ &\le \prod_i \| M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1})w_i^{-\frac{1}{n-1}} \|_{L^n} \\ &= \prod_i \| M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1}) \|_{L^n(\sigma_i)}. \end{align*} By Lemma \ref{lem:lem1} we have $\sigma_i \in A_{\infty}$ and so by Proposition \ref{prop:prop1} we again have $$ \prod_i \| M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1}) \|_{L^n(\sigma_i)} \lesssim \prod_i \| f_i \sigma_i^{-1} \|_{L^n(\sigma_i)} = \prod_i \| f_i w_i \|_{L^n}, $$ and so we are done. \end{comment} \end{proof} If an average is with respect to a different measure $\mu$ than the Lebesgue measure, we write $\langle f \rangle_R^{\mu} := \frac{1}{\mu(R)} \int_R f\ud \mu$ and define $$ M_{\calD}^{\mu} f = \sup_R 1_R \langle |f| \rangle_R^{\mu}. $$ The following is a result of R. Fefferman \cite{RF3}. Recently, we also recorded a proof in \cite[Appendix B]{LMV:Bloom}. \begin{prop}\label{prop:prop1} Let $\lambda \in A_p$, $p \in (1,\infty)$, be a bi-parameter weight. Then for all $s \in (1,\infty)$ we have $$ \| M_{\calD}^{\lambda} f \|_{L^s(\lambda)} \lesssim [\lambda]_{A_p}^{1+1/s} \|f\|_{L^s(\lambda)}. $$ \end{prop} We formulate some vector-valued versions of Proposition \ref{prop:prop1}. 
We state the following version with two sequence spaces -- of course, a version with arbitrarily many also works. Proposition \ref{prop:vecvalmax} is proved at the end of Section \ref{app:app1}. \begin{prop}\label{prop:vecvalmax} Let $\mu\in A_\infty$, $w\in A_p(\mu)$ and $1<p,s,t<\infty$. Then we have \[ \left\| \Big\|\big\|\{M^\mu f_j^i\}\big\|_{\ell^s}\Big\|_{\ell^t}\right\|_{L^p(w\mu)}\lesssim \left\|\Big\|\big\|\{f_j^i\}\big\|_{\ell^s}\Big\|_{\ell^t}\right\|_{L^p(w\mu)}. \] In particular, we have \[ \left\| \Big\|\big\|\{M^\mu f_j^i\}\big\|_{\ell^s}\Big\|_{\ell^t}\right\|_{L^p(\mu)}\lesssim \left\|\Big\|\big\|\{f_j^i\}\big\|_{\ell^s}\Big\|_{\ell^t}\right\|_{L^p(\mu)}. \] \end{prop} Finally, we point out that everything in this section works easily in the general multi-parameter situation. \section{Square functions}\label{sec:SquareFunctions} Let $\calD = \calD^1 \times \calD^2$ be a fixed lattice of dyadic rectangles. We define the square functions $$ S_{\calD} f = \Big( \sum_{R \in \calD} |\Delta_R f|^2 \Big)^{1/2}, \,\, S_{\calD^1}^1 f = \Big( \sum_{I^1 \in \calD^1} |\Delta_{I^1}^1 f|^2 \Big)^{1/2} $$ and define $S_{\calD^2}^2 f$ analogously. \begin{comment} The following lemma presents the well-known and most basic square function estimates. We cannot rely on it, as we will be working with the genuinely multilinear weights. \begin{lem}\label{lem:lem2} For $p \in (1,\infty)$ and a bi-parameter weight $w \in A_p$ we have $$ \| f \|_{L^p(w)} \sim \| S_{\calD} f\|_{L^p(w)} \sim \| S_{\calD^1}^1 f \|_{L^p(w)} \sim \| S_{\calD^2}^2 f \|_{L^p(w)}. $$ \end{lem} \end{comment} The following lower square function estimate, valid for $A_{\infty}$ weights, is important for us. The importance comes from the fact that by Lemma \ref{lem:lem1} some of the key weights $w^p$ and $w_i^{-p_i'}$ are at least $A_{\infty}$ for the multilinear weights of Definition \ref{defn:defn1}.
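To illustrate the exponents involved, consider the bilinear case $n = 2$ with $p_1 = p_2 = 4$, so that $p = 2$ and $p_1' = p_2' = \frac{4}{3}$. For a weight $\vec w = (w_1, w_2) \in A_{(4,4)}$, Lemma \ref{lem:lem1} then gives
$$
w_1^{-4/3} \in A_{8/3}, \qquad w_2^{-4/3} \in A_{8/3}, \qquad w^2 = (w_1 w_2)^2 \in A_4,
$$
and in particular all three of these weights are in $A_{\infty}$.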
\begin{lem}\label{lem:lem3} There holds $$ \|f\|_{L^p(w)} \lesssim \|S_{\calD^i}^i f\|_{L^p(w)} \lesssim \|S_{\calD} f\|_{L^p(w)} $$ for all $p \in (0, \infty)$ and bi-parameter weights $w \in A_{\infty}$. \end{lem} For a proof of the one-parameter estimate see \cite[Theorem 2.5]{Wi}. The bi-parameter results can be deduced using the following extremely useful $A_{\infty}$ extrapolation result \cite{CUMP}, which will be applied several times during the paper. We also mention that square function estimates related to Lemma \ref{lem:lem3} also appear in \cite{BM3}. \begin{lem}\label{lem:lem4} Let $(f,g)$ be a pair of non-negative functions. Suppose that there exists some $0<p_0<\infty$ such that for every $w\in A_\infty$ we have $$ \int f^{p_0} w \lesssim \int g^{p_0} w. $$ Then for all $0<p<\infty$ and $w\in A_\infty$ we have $$ \int f^{p} w \lesssim \int g^{p} w. $$ \end{lem} \begin{proof}[Proof of Lemma \ref{lem:lem3}] Let $w \in A_\infty$ be a bi-parameter weight. The first estimate in the statement follows from the one-parameter result \cite[Theorem 2.5]{Wi} and the fact that $w(x_1, \cdot) \in A_\infty(\mathbb{R}^{d_2})$ and $w(\cdot, x_2) \in A_\infty(\mathbb{R}^{d_1})$. Using this, we have that $$ \| f \|_{L^2(w)} \lesssim \| S^1_{\calD^1}f\|_{L^2(w)} = \Big(\sum_{I^1 \in \calD^1} \| \Delta^1_{I^1}f\|_{L^2(w)}^2 \Big)^{\frac 12}. $$ For each $I^1$ we again use the one-parameter estimate to get $$ \| \Delta^1_{I^1}f\|_{L^2(w)} \lesssim \|S^2_{\calD^2} \Delta^1_{I^1}f\|_{L^2(w)} =\Big( \sum_{I^2 \in \calD^2}\|\Delta^2_{I^2} \Delta^1_{I^1}f\|_{L^2(w)}^2 \Big)^{\frac 12}. $$ Since $\Delta^2_{I^2} \Delta^1_{I^1}f= \Delta_{I^1 \times I^2} f$, inserting the last estimate into the previous one shows that $$ \| f \|_{L^2(w)} \lesssim \Big( \sum_{I^1 \times I^2 \in \calD^1 \times \calD^2} \|\Delta_{I^1 \times I^2} f\|_{L^2(w)}^2 \Big)^{\frac 12} = \| S_{\calD} f \|_{L^2(w)}. 
$$ Since this holds for every bi-parameter weight $w \in A_\infty$, Lemma \ref{lem:lem4} concludes the proof. We point out that with further extrapolation we could obtain vector-valued versions analogous to Proposition \ref{prop:vecvalmax}, see the end of Section \ref{app:app1}. \end{proof} \begin{rem}\label{rem:rem1} We often use the lower square function estimate with the additional observation that we e.g. have for all $k = (k_1, k_2) \in \{0,1,\ldots\}^2$ that $$ S_{\calD} f = \Big( \sum_{K = K^1 \times K^2 \in \calD} |\Delta_{K,k} f|^2 \Big)^{1/2}, \qquad \Delta_{K,k} = \Delta_{K^1,k_1}^1 \Delta_{K^2, k_2}^2. $$ This simply follows from disjointness. \end{rem} For $k= (k_1, k_2)$ we define the following family of $n$-linear square functions. First, we set $$ A_1(f_1, \ldots, f_n) = A_{1,k}(f_1, \ldots, f_n) = \Big( \sum_{K \in \calD} \langle | \Delta_{K,k} f_1 | \rangle_K ^2 \prod_{j=2}^n \langle |f_j| \rangle_K^2 1_K \Big)^{\frac{1}{2}}. $$ In addition, we understand this so that $A_{1,k}$ can also take any one of the symmetric forms, where each $\Delta_{K^i, k_i}^i$ appearing in $\Delta_{K,k} = \Delta_{K^1,k_1}^1 \Delta_{K^2, k_2}^2$ can alternatively be associated with any of the other functions $f_2, \ldots, f_n$. That is, $A_{1,k}$ can, for example, also take the form $$ A_{1,k}(f_1, \dots, f_n) = \Big( \sum_{K \in \calD} \langle | \Delta^2_{K^2,k_2} f_1 | \rangle_K^2 \langle | \Delta^1_{K^1,k_1} f_2| \rangle_K^2 \prod_{j=3}^{n} \langle |f_j| \rangle_K^2 1_K \Big)^{\frac 12}. $$ For $k = (k_1, k_2, k_3)$ we define \begin{equation}\label{eq:eq11} \begin{split} &A_{2,k}(f_1, \ldots, f_n) \\ &= \Big( \sum_{K^2 \in \calD^2} \Big( \sum_{K^1 \in \calD^1} \langle|\Delta^2_{K^2, k_1}f_1|\rangle_{K}\langle|\Delta^1_{K^1, k_2}f_2|\rangle_{K} \langle|\Delta^1_{K^1, k_3}f_3|\rangle_{K} \prod_{j=4}^{n} \langle |f_j|\rangle_K1_{K} \Big)^2\Big)^{ \frac 12}, \end{split} \end{equation} where we again understand this as a family of square functions. 
First, the three martingale blocks appearing in \eqref{eq:eq11} can be associated with different functions. Second, the $K^1$ and $K^2$ summations can be interchanged, but then we have two martingale blocks with $K^2$ and one martingale block with $K^1$. Finally, for $k = (k_1, k_2, k_3, k_4)$ we define $$ A_{3,k}(f_1, \ldots, f_n) = \sum_{K \in \calD} \langle | \Delta_{K,(k_1, k_2)} f_1| \rangle_K \langle | \Delta_{K,(k_3, k_4)} f_2| \rangle_K \prod_{j=3}^n \langle |f_j| \rangle_K 1_K, $$ where this is a family with two martingale blocks in each parameter, which can be moved around. \begin{thm}\label{thm:thm3} If $1 < p_1, \ldots, p_n \le \infty$ and $\frac{1}{p} = \sum_{i=1}^n \frac{1}{p_i}> 0$ we have $$ \|A_{j,k}(f_1, \ldots, f_n)w \|_{L^p} \lesssim \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}}, \quad j=1,2,3, $$ for all multilinear bi-parameter weights $\vec w \in A_{\vec p}$. \end{thm} \begin{proof} The proofs of all the cases share the same underlying idea: an iterative use of duality and the lower square function estimate until all of the cancellation has been utilised. One can also realise that the result for $A_{3,k}$ follows using the above scheme just once if the result is first proved for $A_{1,k}$ and $A_{2,k}$. We show the proof for $A_{2,k}$ with the explicit form \eqref{eq:eq11}. Fix some $\vec p = (p_1, \ldots, p_n)$ with $1 < p_i < \infty$ and $p > 1$. This is enough by extrapolation, Theorem \ref{thm:ext}. To estimate $\|A_{2,k}(f_1, \ldots, f_n)w \|_{L^p}$ we take a sequence $(f_{n+1,K^2})_{K^2} \subset L^{p'}(\ell^2)$ with norm $\| (f_{n+1,K^2})_{K^2} \|_{L^{p'}(\ell^2)} \le 1$ and look at \begin{equation}\label{eq:A1Dual} \sum_{K}\langle|\Delta^2_{K^2, k_1}f_1|,1_K\rangle\langle|\Delta^1_{K^1, k_2}f_2|\rangle_{K} \langle|\Delta^1_{K^1, k_3}f_3|\rangle_{K} \prod_{j=4}^{n} \langle |f_j|\rangle_K \langle f_{n+1,K^2}w \rangle_{K}.
\end{equation} There holds that \begin{equation}\label{eq:RemAbs} \langle|\Delta^2_{K^2, k_1}f_1|,1_K\rangle = \langle \Delta^2_{K^2, k_1}f_1 , \varphi_{K^2, f_1} \rangle = \langle f_1 , \Delta^2_{K^2, k_1} \varphi_{K^2, f_1} \rangle, \qquad |\varphi_{K^2, f_1}| \le 1_K. \end{equation} We now get that \eqref{eq:A1Dual} is less than $\| f_1 w_1\|_{L^{p_1}}$ multiplied by \begin{equation*} \Big \| \sum_{K} \langle f_{n+1,K^2}w \rangle_{K} \langle|\Delta^1_{K^1, k_2}f_2|\rangle_{K} \langle|\Delta^1_{K^1, k_3}f_3|\rangle_{K} \prod_{j=4}^{n} \langle |f_j|\rangle_K \Delta^2_{K^2, k_1} \varphi_{K^2, f_1} w_1^{-1}\Big \|_{L^{p_1'}}. \end{equation*} We will now apply the lower square function estimate $\|gw_1^{-1}\|_{L^{p_1'}} \lesssim \|S_{\calD^2}^2(g) w_1^{-1} \|_{L^{p_1'}}$, Lemma \ref{lem:lem3}, with the weight $w_1^{-p_1'} \in A_{\infty}$ (see Lemma \ref{lem:lem1}). Here we use the block form of Remark \ref{rem:rem1}. Using also that $|\Delta^2_{K^2, k_1} \varphi_{K^2, f_1}| \lesssim 1_K$ we get that the last norm is dominated by \begin{equation*} \Big \|\Big( \sum_{K^2} \Big( \sum_{K^1} \langle |f_{n+1,K^2}|w \rangle_{K} \langle|\Delta^1_{K^1, k_2}f_2|\rangle_{K} \langle|\Delta^1_{K^1, k_3}f_3|\rangle_{K} \prod_{j=4}^{n} \langle |f_j|\rangle_K 1_K \Big)^2 \Big)^{\frac 12} w_1^{-1}\Big \|_{L^{p_1'}}. \end{equation*} We still have cancellation to use in the form of the other two martingale differences and will continue the process. We repeat the argument from above -- this gives that the previous term is dominated by $\| f_2 w_2 \|_{L^{p_2}}$ multiplied by $$ \Big \|\Big( \sum_{K^1} \Big( \sum_{K^2} \langle |f_{n+1,K^2}|w \rangle_{K} \langle |f_{1,K^2}|w_1^{-1}\rangle_K \langle|\Delta^1_{K^1, k_3}f_3|\rangle_{K} \prod_{j=4}^{n} \langle |f_j|\rangle_K 1_K \Big)^2 \Big)^{\frac 12} w_2^{-1}\Big \|_{L^{p_2'}} $$ where $\| (f_{1,K^2})_{K^2} \|_{L^{p_1}(\ell^2)} \le 1$. 
Running this argument one more time finally gives us that this is dominated by $\| f_3w_3\|_{L^{p_3}}$ multiplied by \begin{equation*} \begin{split} \Big \| \Big(\sum_{K^1} & \Big( \sum_{K^2} \langle |f_{n+1,K^2}|w \rangle_{K} \langle |f_{1,K^2}|w_1^{-1}\rangle_K \langle |f_{2, K^1}|w_2^{-1}\rangle_K \prod_{j=4}^{n} \langle |f_j|\rangle_K 1_K \Big)^2 \Big)^{\frac 12} w_3^{-1}\Big\|_{L^{p_3'}} \\ & \le \Big \| \Big(\sum_{K^1} \Big( \sum_{K^2} M_\calD( f_{n+1,K^2}w, f_{1,K^2}w_1^{-1}, f_{2, K^1}w_2^{-1}, f_4, \dots, f_n) \Big)^2 \Big)^{\frac 12} w_3^{-1}\Big\|_{L^{p_3'}}, \end{split} \end{equation*} where $\| (f_{2,K^1})_{K^1} \|_{L^{p_2}(\ell^2)} \le 1$. Using Lemma \ref{lem:lem7} three times (we dualized three times) shows that $$ (w^{-1}, w_1,w_2,w_4, \dots, w_n) \in A_{(p',p_1,p_2,p_4, \dots, p_n)}. $$ The maximal function satisfies the weighted $$ L^{p'}(\ell^\infty_{K^1}(\ell^2_{K^2})) \times L^{p_1}(\ell^\infty_{K^1}(\ell^2_{K^2})) \times L^{p_2}(\ell^2_{K^1}(\ell^\infty_{K^2})) \times L^{p_4} \times \dots \times L^{p_n} \to L^{p_3'}(\ell^2_{K^1}(\ell^1_{K^2})) $$ estimate. This gives that the last norm above is dominated by \begin{equation*} \| ( f_{n+1,K^2}ww^{-1})_{K^2} \|_{L^{p'}(\ell^2)} \| ( f_{1,K^2}w_1^{-1}w_1)_{K^2} \|_{L^{p_1}(\ell^2)} \| ( f_{2, K^1}w_2^{-1}w_2)_{K^1} \|_{L^{p_2}(\ell^2)} \prod_{i=4}^n \| f_iw_i \|_{L^{p_i}}, \end{equation*} where the first three norms are $\le 1$. This concludes the proof for $A_{2,k}$ and the rest of the cases are similar. \end{proof} We also record some linear estimates. We will need these when we deal with the most complicated model operators -- the partial paraproducts. 
\begin{prop}\label{prop:prop3} For $u \in A_{\infty}$ and $p, s \in (1, \infty)$ we have $$ \Big\| \Big[ \sum_m \Big( \sum_{K \in \calD} \langle |\Delta_{K,k} f_m| \rangle_K^2 \frac{1_K}{\langle u \rangle_K^2} \Big)^{\frac{s}{2}} \Big]^{\frac{1}{s}} u^{\frac{1}{p}} \Big\|_{L^p} \lesssim \Big\| \Big( \sum_m |f_m|^s \Big)^{\frac{1}{s}} u^{-\frac{1}{p'}} \Big\|_{L^p}. $$ \end{prop} \begin{proof} By \eqref{eq:eq8} we have for all $n \ge 2$ that $$ 1 \le \langle u \rangle_K \Big\langle u^{-\frac{1}{n-1}} \Big\rangle_K^{n-1}. $$ Simply using this we reduce to \begin{align*} \Big\| \Big[& \sum_m \Big( \sum_{K \in \calD} \langle |\Delta_{K,k} f_m| \rangle_K^2 \Big\langle u^{-\frac{1}{n-1}} \Big\rangle_K^{2(n-1)}1_K\Big)^{\frac{s}{2}} \Big]^{\frac{1}{s}} u^{\frac{1}{p}} \Big\|_{L^p} \\ &= \Big\| \Big[ \sum_m A_{1,k}\big(f_m, u^{-\frac{1}{n-1}}, \ldots, u^{-\frac{1}{n-1}}\big)^s \Big]^{\frac{1}{s}} u^{\frac{1}{p}} \Big\|_{L^p}, \end{align*} where $A_{1,k}$ is a suitable square function as in Theorem \ref{thm:thm3}. We then fix $n$ large enough so that $u \in A_{n}$. We then notice that this implies that \begin{equation}\label{eq:InftynLin} \big(u^{-\frac{1}{p'}}, u^{\frac{1}{n-1}}, \ldots, u^{\frac{1}{n-1}}\big) \in A_{(p, \infty, \ldots, \infty)}. \end{equation} To see this, notice that the target weight associated with this tuple is $u^{-\frac{1}{p'}} u = u^{\frac{1}{p}}$ and that the target exponent is $p$, and so $$ \big[\big(u^{-\frac{1}{p'}}, u^{\frac{1}{n-1}}, \ldots, u^{\frac{1}{n-1}}\big)\big]_{A_{(p, \infty, \ldots, \infty)}} = \sup_R \langle u \rangle_R^{1/p} \langle u \rangle_R^{1/p'} \big\langle u^{-\frac{1}{n-1}} \big\rangle_R^{n-1} = [u]_{A_n} < \infty. $$ It remains to use the weighted (with the weight \eqref{eq:InftynLin}) vector-valued estimate $L^p(\ell^s) \times L^{\infty}\times \cdots \times L^{\infty} \to L^p(\ell^s)$ of $A_{1,k}$, which follows by Theorem \ref{thm:thm3} and \eqref{extrapol:vv*}. 
\end{proof} \begin{rem} It is possible to prove the above proposition also directly with the duality and lower square function strategy that was used in the proof of Theorem \ref{thm:thm3}. \end{rem} \section{Dyadic model operators}\label{sec:dmo} In this section we are working with a fixed set of dyadic rectangles $\calD = \calD^1 \times \calD^2$. All the model operators depend on this lattice, but it is not emphasised in the notation. \subsection{Shifts} Let $k=(k_1, \dots, k_{n+1})$, where $k_j = (k_j^1, k_j^2) \in \{0,1,\ldots\}^2$. An $n$-linear bi-parameter shift $S_k$ takes the form \begin{equation*}\label{eq:S2par} \langle S_k(f_1, \ldots, f_n), f_{n+1}\rangle = \sum_{K} \sum_{\substack{R_1, \ldots, R_{n+1} \\ R_j^{(k_j)} = K }} a_{K, (R_j)} \prod_{j=1}^{n+1} \langle f_j, \wt h_{R_j} \rangle. \end{equation*} Here $K, R_1, \ldots, R_{n+1} \in \calD = \calD^1 \times \calD^2$, $R_j = I_j^1 \times I_j^2$, $R_j^{(k_j)} := (I_j^1)^{(k_j^1)} \times (I_j^2)^{(k_j^2)}$ and $\wt h_{R_j} = \wt h_{I_j^1} \otimes \wt h_{I_j^2}$. Here we assume that for $m \in \{1,2\}$ there exist two indices $j^m_0,j_1^m \in \{1, \ldots, n+1\}$, $j^m_0 \not =j^m_1$, so that $\wt h_{I_{j^m_0}^m}=h_{I_{j^m_0}^m}$, $\wt h_{I_{j^m_1}^m}=h_{I_{j^m_1}^m}$ and for the remaining indices $j \not \in \{j^m_0, j^m_1\}$ we have $\wt h_{I_j^m} \in \{h_{I_j^m}^0, h_{I_j^m}\}$. Moreover, $a_{K,(R_j)} = a_{K, R_1, \ldots ,R_{n+1}}$ is a scalar satisfying the normalization \begin{equation}\label{eq:Snorm2par} |a_{K,(R_j)}| \le \frac{\prod_{j=1}^{n+1} |R_j|^{1/2}}{|K|^{n}}. \end{equation} \begin{thm} Suppose $S_k$ is an $n$-linear bi-parameter shift, $1 < p_1, \ldots, p_n \le \infty$ and $\frac{1}{p} = \sum_{i=1}^n \frac{1}{p_i}> 0$. Then we have $$ \|S_k(f_1, \ldots, f_n)w \|_{L^p} \lesssim \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}} $$ for all multilinear bi-parameter weights $\vec w \in A_{\vec p}$. The implicit constant does not depend on $k$. 
\end{thm} \begin{proof} We use duality to always reduce to one of the operators of type $A_{3}$ in Theorem \ref{thm:thm3}. Performing the proof like this has the advantage that the form of the shift really plays no role -- it just affects which type of $A_3$ operator we get. For example, we consider the explicit case $$ S_k(f_1, \dots, f_n) = \sum_{K} A_K(f_1, \ldots, f_n), $$ where $$ A_K(f_1, \ldots, f_n) = \sum_{\substack{R_1, \ldots, R_{n+1} \\ R_j^{(k_j)} = K }} a_{K, (R_j)} \langle f_1, h_{R_1} \rangle \prod_{j=2}^{n} \langle f_j, \wt h_{R_j} \rangle h_{R_{n+1}}. $$ Fix some $\vec p = (p_1, \ldots, p_n)$ with $1 < p_i < \infty$ and $p > 1$, which is enough by extrapolation. We will dualise using $f_{n+1}$ with $\|f_{n+1}w^{-1}\|_{L^{p'}} \le 1$. The normalisation of the shift coefficients gives the direct estimate \begin{equation*} \begin{split} \sum_{K}& |\langle A_K(f_1, \ldots, f_n), f_{n+1} \rangle | \\ & \le \sum_K \sum_{\substack{R_1, \ldots, R_{n+1} \\ R_j^{(k_j)} = K }} \frac{\prod_{j=1}^{n+1} |R_j|^{1/2}}{|K|^{n}} \Big|\langle \Delta_{K,k_1} f_1, h_{R_1} \rangle \prod_{j=2}^{n} \langle f_j, \wt h_{R_j} \rangle \langle \Delta_{K,k_{n+1}}f_{n+1},h_{R_{n+1}}\rangle\Big| \\ & \le \sum_K \sum_{\substack{R_1, \ldots, R_{n+1} \\ R_j^{(k_j)} = K }} \frac{1}{|K|^{n}} \langle |\Delta_{K,k_1} f_1|, 1_{R_1} \rangle \prod_{j=2}^{n} \langle |f_j|, 1_{R_j} \rangle \langle |\Delta_{K,k_{n+1}}f_{n+1}|,1_{R_{n+1}}\rangle \\ &\le \sum_K \langle | \Delta_{K,k_1} f_1 | \rangle_K \prod_{j=2}^{n} \langle |f_j| \rangle_K\langle |\Delta_{K,k_{n+1}} f_{n+1}| \rangle_K |K| \\ & = \Big\| \sum_K \langle | \Delta_{K,k_1} f_1 | \rangle_K \prod_{j=2}^{n} \langle |f_j| \rangle_K\langle |\Delta_{K,k_{n+1}} f_{n+1}| \rangle_K 1_K \Big\|_{L^1}, \end{split} \end{equation*} where we used \eqref{eq:HaarMart} in the first step in the passage from Haar functions into martingale differences. 
Notice that \begin{equation}\label{eq:eq12} (w_1, \cdots, w_n, w^{-1})\in A_{(p_1,\cdots, p_n, p')}, \qquad w=\prod_{i=1}^n w_i. \end{equation} The target weight associated to this data is $ww^{-1} = 1$ and the target exponent is $1/p + 1/p' = 1$. By using Theorem \ref{thm:thm3} with a suitable $A_{3}(f_1, \ldots, f_{n+1})$ and the above weight we can directly dominate this by $$ \Big[\prod_{i=1}^n \|f_i w_i\|_{L^{p_i}} \Big]\cdot \|f_{n+1} w^{-1}\|_{L^{p'}} \le \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}}. $$ We are done. \end{proof} \subsection{Partial paraproducts} Let $k=(k_1, \dots, k_{n+1})$, where $k_j \in \{0,1,\ldots\}$. An $n$-linear bi-parameter partial paraproduct $(S\pi)_k$ with the paraproduct component on $\mathbb{R}^{d_2}$ takes the form \begin{equation}\label{eq:Spi} \langle (S\pi)_k(f_1, \ldots, f_n), f_{n+1} \rangle = \sum_{K = K^1 \times K^2} \sum_{\substack{ I^1_1, \ldots, I_{n+1}^1 \\ (I_j^1)^{(k_j)} = K^1}} a_{K, (I_j^1)} \prod_{j=1}^{n+1} \langle f_j, \wt h_{I_j^1} \otimes u_{j, K^2} \rangle, \end{equation} where the functions $\wt h_{I_j^1}$ and $u_{j, K^2}$ satisfy the following. There are $j_0,j_1 \in \{1, \ldots, n+1\}$, $j_0 \not =j_1$, so that $\wt h_{I_{j_0}^1}=h_{I_{j_0}^1}$, $\wt h_{I_{j_1}^1}=h_{I_{j_1}^1}$ and for the remaining indices $j \not \in \{j_0, j_1\}$ we have $\wt h_{I_j^1} \in \{h_{I_j^1}^0, h_{I_j^1}\}$. There is $j_2 \in \{1, \ldots, n+1\}$ so that $u_{j_2, K^2} = h_{K^2}$ and for the remaining indices $j \ne j_2$ we have $u_{j, K^2} = \frac{1_{K^2}}{|K^2|}$. Moreover, the coefficients are assumed to satisfy \begin{equation}\label{eq:PPNorma} \| (a_{K, (I_j^1)})_{K^2} \|_{\BMO} = \sup_{K^2_0 \in \calD^2} \Big( \frac{1}{|K^2_0|} \sum_{K^2 \subset K^2_0} |a_{K, (I_j^1)}|^2 \Big)^{1/2} \le \frac{\prod_{j=1}^{n+1} |I_j^1|^{\frac 12}}{|K^1|^{n}}. \end{equation} Of course, $(\pi S)_k$ is defined symmetrically. 
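\begin{rem}
For orientation, we record the simplest instance of this definition. In the linear case $n=1$ with complexity $k=(0,0)$ we have $I^1_1 = I^1_2 = K^1$, both Haar functions in the first parameter are cancellative, and, choosing $j_2 = 2$, the operator \eqref{eq:Spi} becomes
$$
\langle (S\pi)_k(f_1), f_2 \rangle = \sum_{K = K^1 \times K^2} a_{K} \Big\langle f_1, h_{K^1} \otimes \frac{1_{K^2}}{|K^2|} \Big\rangle \langle f_2, h_{K^1} \otimes h_{K^2} \rangle,
$$
and the normalization \eqref{eq:PPNorma} reads $\| (a_{K})_{K^2} \|_{\BMO} \le 1$ for every fixed $K^1$. Thus a zero-complexity partial paraproduct combines a martingale transform structure in the first parameter with a paraproduct structure in the second.
</rem-placeholder>
\end{rem}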
The following $H^1$-$\BMO$ duality type estimate is well-known and elementary: \begin{equation}\label{eq:H1BMO} \sum_{K^2} |a_{K^2}| |b_{K^2}| \lesssim \| (a_{K^2} ) \|_{\BMO} \Big\| \Big( \sum_{K^2} |b_{K^2}|^2 \frac{1_{K^2}}{|K^2|} \Big)^{1/2} \Big \|_{L^1}. \end{equation} Such estimates have natural multi-parameter analogues, and the proofs in all parameters are analogous. See e.g. \cite[Equation (4.1)]{MO}. Our result for the partial paraproducts has a significantly more difficult proof than for the other model operators. It is also more inefficient in that it produces an exponential -- although crucially with an arbitrarily small exponent -- dependence on the complexity. This has some significance for the required kernel regularity of CZOs, but a standard $t \mapsto t^{\alpha}$ type continuity modulus will still suffice. \begin{thm}\label{thm:thm4} Suppose $(S\pi)_k$ is an $n$-linear bi-parameter partial paraproduct, $1 < p_1, \ldots, p_n \le \infty$ and $\frac{1}{p} = \sum_{i=1}^n \frac{1}{p_i}> 0$. Then, for every $0<\beta \le 1$ we have $$ \|(S\pi)_k(f_1, \ldots, f_n)w \|_{L^p} \lesssim_\beta 2^{\max_j k_j \beta}\prod_{i=1}^n \|f_i w_i\|_{L^{p_i}} $$ for all multilinear bi-parameter weights $\vec w \in A_{\vec p}$. \end{thm} \begin{proof} Recall that $(S\pi)_k$ is of the form \eqref{eq:Spi}. Recall also the indices $j_0$ and $j_1$, which say that $\wt h_{I^1_j}=h_{I^1_j}$ at least for $j\in \{j_0, j_1\}$, and the index $j_2$, which specifies the place of $h_{K^2}$ in the second parameter. It makes no difference for the argument what the indices $j_0$ and $j_1$ are, so we assume that $j_0=1$ and $j_1=2$. It makes a small difference whether $j_2 \in \{j_0,j_1\}$ or $j_2 \not \in \{j_0,j_1\}$, so we do not specify $j_2$ yet. To make the following formulae shorter we write $\wt h_{I^1_j}$ for every $j$ but keep in mind that these are cancellative at least for $j \in \{1,2\}$.
We define $$ A_{K^2}(g_1, \dots, g_{n+1}) = \prod_{j=1}^{n+1} \langle g_j, u_{j,K^2} \rangle $$ and write $(S\pi)_k$ in the form $$ \langle (S\pi)_k(f_1, \ldots, f_n), f_{n+1} \rangle = \sum_{K = K^1 \times K^2} \sum_{\substack{ I^1_1, \ldots, I_{n+1}^1 \\ (I_j^1)^{(k_j)} = K^1}} a_{K, (I_j^1)} A_{K^2}(\langle f_{1}, \wt h_{I_{1}^1}\rangle_1, \dots, \langle f_{n+1}, \wt h_{I_{n+1}^1}\rangle_1 ). $$ Fix some $\vec p = (p_1, \ldots, p_n)$ with $1 < p_i < \infty$ and $p > 1$, which is enough by extrapolation. We will dualise using $f_{n+1}$ with $\|f_{n+1}w^{-1}\|_{L^{p'}} \le 1$. We may assume $f_j \in L^{\infty}_c$. The $H^1$-$\BMO$ duality \eqref{eq:H1BMO} gives that \begin{equation}\label{eq:eq13} \begin{split} |\langle (S\pi)_k(f_1, \ldots, f_n), f_{n+1} \rangle| &\lesssim \sum_{K^1} \sum_{\substack{ I^1_1, \ldots, I_{n+1}^1 \\ (I_j^1)^{(k_j)} = K^1}}\Bigg[ \frac{\prod_{j=1}^{n+1}|I^1_j|^{\frac 12}}{|K^1|^n} \\ &\int_{\mathbb{R}^{d_2}} \Big( \sum_{K^2} |A_{K^2}(\langle f_{1}, \wt h_{I_{1}^1}\rangle_1, \dots, \langle f_{n+1}, \wt h_{I_{n+1}^1}\rangle_1 )|^2 \frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}}\Bigg]. \end{split} \end{equation} Suppose $j \in \{3, \dots, n+1\}$ is such that $\wt h_{I^1_j}=h_{I^1_j}^0$ and $k_j>0$, that is, we have non-cancellative Haar functions and non-zero complexity. We expand $$ |I^1_j|^{-\frac{1}{2}} \langle f_j, h^0_{I^1_j} \rangle_1 =\langle f_j\rangle_{I_j^1,1} =\langle f_j \rangle_{K^1,1}+\sum_{i_j=1}^{k_j} \langle \Delta^1_{(I^1_j)^{(i_j)}} f_j\rangle_{(I_j^1)^{(i_j-1)},1}. $$ For convenience, we further write that $$ \langle \Delta^1_{(I^1_j)^{(i_j)}} f_j\rangle_{(I_j^1)^{(i_j-1)},1} = \langle h_{(I^1_j)^{(i_j)}} \rangle_{(I^1_j)^{(i_j-1)}} \langle f_j, h_{(I^1_j)^{(i_j)}} \rangle_1, $$ where we are suppressing the summation over the $2^{d_1}-1$ different Haar functions. We perform these expansions inside the operators $A_{K^2}$, and take the sums out of the $\ell^2_{K^2}$ norm.
This gives that the right hand side of \eqref{eq:eq13} is less than a sum of at most $\prod_{j=3}^{n+1}(1+k_j)$ terms of the form \begin{equation*} \begin{split} \sum_{K^1} \sum_{\substack{ I^1_1, \ldots, I_{n+1}^1 \\ (I_j^1)^{(k_j)} = K^1}}& \Bigg[ \frac{\prod_{j=1}^{n+1} |I^1_j| |(I^1_j)^{(i_j)}|^{-\frac 12}}{|K^1|^n} \\ &\int_{\mathbb{R}^{d_2}} \Big( \sum_{K^2} |A_{K^2}(\langle f_{1}, \wt h_{(I_{1}^1)^{(i_1)}}\rangle_1, \dots, \langle f_{n+1}, \wt h_{(I_{n+1}^1)^{(i_{n+1})}}\rangle_1 )|^2 \frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}} \Bigg]. \end{split} \end{equation*} Here we have the following properties. If $j$ is an index for which we did not perform the expansion, then $i_j=0$. Thus, at least $i_1=i_2=0$. We also recall that $\wt h_{(I_{j}^1)^{(i_j)}}=h_{(I_{j}^1)^{(i_j)}}$ for $j=1,2$. If $i_j<k_j$, then $\wt h_{(I_{j}^1)^{(i_j)}}=h_{(I_{j}^1)^{(i_j)}}$. If $i_j=k_j$, then $\wt h_{(I_{j}^1)^{(i_j)}} \in \{h_{K^1}, h_{K^1}^0\}$. We can further rewrite this as \begin{equation}\label{eq:eq14} \sum_{K^1} \sum_{\substack{ L^1_1, \ldots, L_{n+1}^1 \\ (L_j^1)^{(l_j)} = K^1}} \frac{\prod_{j=1}^{n+1} |L^1_j|^{\frac 12} }{|K^1|^n} \int_{\mathbb{R}^{d_2}} \Big( \sum_{K^2} |A_{K^2}(\langle f_{1}, \wt h_{L^1_1}\rangle_1, \dots, \langle f_{n+1}, \wt h_{L^1_{n+1}}\rangle_1 )|^2 \frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}}, \end{equation} where $L^1_j = (I^1_j)^{(i_j)}$ and $l_j = k_j - i_j$. This is otherwise analogous to the right hand side of \eqref{eq:eq13} except for the key difference that if a non-cancellative Haar function appears, then the related complexity is zero. We turn to estimate \eqref{eq:eq14}. We show that \begin{equation}\label{eq:eq15} \eqref{eq:eq14} \lesssim_\beta 2^{\max_j k_j \frac \beta 2 }\Big[\prod_{j=1}^n \|f_j w_j\|_{L^{p_j}}\Big]\| f_{n+1} w^{-1} \|_{L^{p'}}.
\end{equation} Recalling that $\| f_{n+1} w^{-1} \|_{L^{p'}} \le 1$ this implies that the left hand side of \eqref{eq:eq13} satisfies $$ LHS\eqref{eq:eq13} \lesssim_\beta (1+\max_j k_j)^{n-1} 2^{\max_j k_j \frac \beta 2 } \prod_{j=1}^n \|f_j w_j\|_{L^{p_j}} \lesssim_\beta 2^{\max_j k_j \beta } \prod_{j=1}^n \|f_j w_j\|_{L^{p_j}}, $$ which proves the theorem. Let $(v_1, \dots, v_{n+1}) \in A_{(2, \dots, 2)}$ and $v=\prod_{j=1}^{n+1} v_j$. We will prove the $(n+1)$-linear estimate \begin{equation}\label{eq:eq16} \begin{split} \Bigg\| & \sum_{K^1} \sum_{\substack{ L^1_1, \ldots, L_{n+1}^1 \\ (L_j^1)^{(l_j)} = K^1}} \Bigg[ \frac{\prod_{j=1}^{n+1} |L^1_j|^{\frac 12} }{|K^1|^n} \frac{1_{K^1}}{|K^1|} \\ &\Big( \sum_{K^2} |A_{K^2}(\langle f_{1}, \wt h_{L^1_1}\rangle_1, \dots, \langle f_{n+1}, \wt h_{L^1_{n+1}}\rangle_1 )|^2 \frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}}\Bigg] v\Bigg\|_{L^{\frac{2}{n+1}}} \lesssim 2^{\max_j k_j \frac \beta 2 } \prod_{j=1}^{n+1} \| f_j v_j \|_{L^2}. \end{split} \end{equation} Extrapolation, Theorem \ref{thm:ext}, then gives that \begin{equation*} \begin{split} \Bigg\| \sum_{K^1} \sum_{\substack{ L^1_1, \ldots, L_{n+1}^1 \\ (L_j^1)^{(l_j)} = K^1}} \frac{\prod_{j=1}^{n+1} |L^1_j|^{\frac 12} }{|K^1|^n} \frac{1_{K^1}}{|K^1|} \Big( \sum_{K^2} |A_{K^2}(\langle f_{1}, \wt h_{L^1_1}\rangle_1, \dots,& \langle f_{n+1}, \wt h_{L^1_{n+1}}\rangle_1 )|^2 \frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}} v\Bigg\|_{L^{q}} \\ & \lesssim 2^{\max_j k_j \frac \beta 2 }\prod_{j=1}^{n+1} \| f_j v_j \|_{L^{q_j}} \end{split} \end{equation*} for all $q_1, \dots, q_{n+1} \in (1,\infty]$ such that $\frac{1}{q}=\sum_{j=1}^{n+1} \frac{1}{q_j}>0$ and for all $(v_1, \dots, v_{n+1}) \in A_{(q_1, \dots, q_{n+1})}$. Applying this with the exponent tuple $(p_1, \dots, p_n, p')$ and the weight tuple $(w_1, \dots, w_n,w^{-1}) \in A_{(p_1, \dots, p_n,p')}$ gives \eqref{eq:eq15}. It remains to prove \eqref{eq:eq16}. We denote $\sigma_j=v_j^{-2}$.
The $A_{(2, \dots, 2)}$ condition gives that $$ \langle v^{\frac{2}{n+1}} \rangle_K^{n+1} \prod_{j=1}^{n+1} \langle \sigma_j \rangle_K \lesssim 1. $$ Using this we have $$ |A_{K^2}(\langle f_{1}, \wt h_{L^1_1}\rangle_1, \dots, \langle f_{n+1}, \wt h_{L^1_{n+1}}\rangle_1 )| \lesssim \frac{1}{\langle v^{\frac{2}{n+1}} \rangle_K^{n+1}} \Bigg|A_{K^2}\Bigg(\frac{\langle f_{1}, \wt h_{L^1_1}\rangle_1}{\langle \sigma_1\rangle_K}, \dots, \frac{\langle f_{n+1}, \wt h_{L^1_{n+1}}\rangle_1}{\langle \sigma_{n+1} \rangle_K} \Bigg)\Bigg|. $$ For the moment we abbreviate the last $|A_{K^2}( \cdots)|$ as $c_{K,(L^1_j)}$. There holds that \begin{equation*} \begin{split} \frac{1}{\langle v^{\frac{2}{n+1}} \rangle_K^{n+1}}c_{K,(L^1_j)} &= \Bigg[ \frac{1}{\langle v^{\frac{2}{n+1}} \rangle_K} \Big\langle c_{K,(L^1_j)}^{\frac{1}{n+1}}1_K v^{-\frac{2}{n+1}}v^{\frac{2}{n+1}} \Big\rangle_K \Bigg ]^{n+1} \\ &\le \Big(M_\calD^{v^{\frac{2}{n+1}}}\Big(c_{K,(L^1_j)}^{\frac{1}{n+1}}1_K v^{-\frac{2}{n+1}}\Big)(x)\Big)^{n+1} \end{split} \end{equation*} for all $x \in K$. We substitute this into the left hand side of \eqref{eq:eq16}. This gives that the term there is dominated by \begin{equation*} \Bigg\| \sum_{K^1} \sum_{\substack{ L^1_1, \ldots, L_{n+1}^1 \\ (L_j^1)^{(l_j)} = K^1}} \frac{\prod_{j=1}^{n+1} |L^1_j|^{\frac 12} }{|K^1|^{n+1}} \Big( \sum_{K^2} M_\calD^{v^{\frac{2}{n+1}}}\Big(c_{K,(L^1_j)}^{\frac{1}{n+1}}1_K v^{-\frac{2}{n+1}}\Big)^{2(n+1)} \frac{1}{|K^2|} \Big)^{\frac{1}{2}} v\Bigg\|_{L^{\frac{2}{n+1}}}. \end{equation*} We use the $L^2(\ell_{K^1,(L^1_j)}^{n+1}(\ell_{K^2}^{2(n+1)}))$ boundedness of the maximal function $M_\calD^{v^{\frac{2}{n+1}}}$, see Proposition \ref{prop:vecvalmax}. 
This gives that the last norm is dominated by \begin{equation}\label{eq:eq17} \Bigg\| \sum_{K^1} \sum_{\substack{ L^1_1, \ldots, L_{n+1}^1 \\ (L_j^1)^{(l_j)} = K^1}} \frac{\prod_{j=1}^{n+1} |L^1_j|^{\frac 12} }{|K^1|^{n+1}} 1_{K^1} \Big( \sum_{K^2} c_{K,(L^1_j)}^{2} \frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}} v^{-1}\Bigg\|_{L^{\frac{2}{n+1}}}. \end{equation} Now, we recall what the numbers $c_{K,(L^1_j)}$ are. At this point it becomes relevant which of the Haar functions $\wt h_{L^1_j}$ are cancellative and what is the form of the operators $A_{K^2}$. We assume that $\wt h_{L^1_j}=h_{L^1_j}$ for $j=1, \dots, n$ and $\wt h_{L^1_{n+1}}=h_{L^1_{n+1}}^0=h_{K^1}^0$, which is a good representative of the general case. First, we assume that the index $j_2$, which specifies the place of $h_{K^2}$ in $A_{K^2}$, satisfies $j_2 \in \{1, \dots, n\}$. The point is that then $\wt h_{L^1_{j_2}}=h_{L^1_{j_2}}$. For convenience of notation we assume that $j_2=1$. With these assumptions there holds that \begin{equation}\label{eq:eq20} c_{K,(L^1_j)} =\Bigg|\frac{\langle f_{1}, h_{L^1_1} \otimes h_{K^2}\rangle}{\langle \sigma_1\rangle_K} \prod_{j=2}^n\frac{\Big\langle f_{j}, h_{L^1_j} \otimes \frac{1_{K^2}}{|K^2|}\Big\rangle}{\langle \sigma_j\rangle_K} \cdot \frac{\Big\langle f_{n+1}, h^0_{K^1}\otimes \frac{1_{K^2}}{|K^2|}\Big\rangle}{\langle \sigma_{n+1} \rangle_K} \Bigg|. \end{equation} For $j=2, \dots,n$ we estimate that \begin{equation}\label{eq:eq21} \begin{split} \frac{\Big|\Big\langle f_{j}, h_{L^1_j} \otimes \frac{1_{K^2}}{|K^2|}\Big\rangle\Big|}{\langle \sigma_j\rangle_K} &= \frac{\Big| \Big\langle \langle f_{j}, h_{L^1_j} \rangle_1 \langle \sigma_j \rangle_{K^1,1}^{-1}\langle \sigma_j \rangle_{K^1,1}, \frac{1_{K^2}}{|K^2|}\Big\rangle\Big|}{\langle \langle \sigma_j\rangle_{K^1,1}\rangle_{K^2}}\\ &\le M^{\langle \sigma_j\rangle_{K^1,1}}_{\calD^2}(\langle f_{j}, h_{L^1_j} \rangle_1 \langle \sigma_j \rangle_{K^1,1}^{-1})(x_2) \end{split} \end{equation} for all $x_2 \in K^2$.
Also, there holds that $$ \frac{\Big|\Big\langle f_{n+1}, h^0_{K^1}\otimes \frac{1_{K^2}}{|K^2|}\Big\rangle\Big|}{\langle \sigma_{n+1} \rangle_K} \le |K^1|^{\frac 12} M_\calD^{\sigma_{n+1}}(f_{n+1}\sigma_{n+1}^{-1})(x) $$ for all $x \in K$. These give (recall that $L^1_{n+1}=K^1$) that \begin{equation*} \begin{split} &\sum_{\substack{ L^1_1, \ldots, L_{n+1}^1 \\ (L_j^1)^{(l_j)} = K^1}} \frac{\prod_{j=1}^{n+1} |L^1_j|^{\frac 12} }{|K^1|^{n+1}} 1_{K^1}\Big( \sum_{K^2} c_{K,(L^1_j)}^{2} \frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}} \le \prod_{j=1}^n F_{j,K^1} \cdot M_\calD^{\sigma_{n+1}}(f_{n+1}\sigma_{n+1}^{-1}), \end{split} \end{equation*} where \begin{equation}\label{eq:eq24} F_{1,K^1}= 1_{K^1}\sum_{(L_1^1)^{(l_1)}=K^1} \frac{|L^1_1|^{\frac 12}}{|K^1|} \Big( \sum_{K^2}\frac{|\langle f_{1}, h_{L^1_1} \otimes h_{K^2}\rangle|^2}{\langle \sigma_1\rangle_K^2}\frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}} \end{equation} and \begin{equation}\label{eq:eq25} F_{j,K^1} =1_{K^1}\sum_{(L_j^1)^{(l_j)}=K^1} \frac{|L^1_j|^{\frac 12}}{|K^1|} M^{\langle \sigma_j\rangle_{K^1,1}}_{\calD^2}(\langle f_{j}, h_{L^1_j} \rangle_1 \langle \sigma_j \rangle_{K^1,1}^{-1}) \end{equation} for $j=2, \dots, n$. We will now continue from \eqref{eq:eq17} using the above pointwise estimates. Notice that \begin{equation*} \begin{split} \sum_{K^1}\prod_{j=1}^n F_{j,K^1} \le \prod_{j=1}^2 \Big(\sum_{K^1} (F_{j,K^1} )^2 \Big)^{1/2}\prod_{j=3}^n \sup_{K^1}F_{j,K^1} \le \prod_{j=1}^n \Big(\sum_{K^1} (F_{j,K^1} )^2 \Big)^{1/2}. \end{split} \end{equation*} Since $v^{-1}=\prod_{j=1}^{n+1}v_j^{-1}$, we have that \begin{equation*} \eqref{eq:eq17} \lesssim \prod_{j=1}^n \Big \| \Big( \sum_{K^1} F_{j,K^1}^2 \Big)^{\frac 12} v_{j}^{-1} \Big \|_{L^2} \big\|M_\calD^{\sigma_{n+1}}(f_{n+1}\sigma_{n+1}^{-1})v_{n+1}^{-1} \big \|_{L^2}. 
\end{equation*} Since $\sigma_j=v_j^{-2}$ there holds by Proposition \ref{prop:prop1} that $$ \|M_\calD^{\sigma_{n+1}}(f_{n+1}\sigma_{n+1}^{-1})v_{n+1}^{-1} \|_{L^2} =\|M_\calD^{\sigma_{n+1}}(f_{n+1}\sigma_{n+1}^{-1}) \|_{L^2(\sigma_{n+1})} \lesssim \| f_{n+1} v_{n+1} \|_{L^2}. $$ It remains to estimate the norms for $j=1, \dots, n$. We begin with $j=1$. If $l_1=0$, then we directly have that $$ \Big(\sum_{K^1} F_{1,K^1}^2 \Big)^{\frac 12} =\Big( \sum_{K}\frac{|\langle f_{1}, h_K\rangle|^2}{\langle \sigma_1\rangle_K^2}\frac{1_{K}}{|K|} \Big)^{\frac{1}{2}}. $$ Since $|\langle f_{1}, h_K\rangle| |K|^{-\frac 12} \le \langle | \Delta_K f_1 | \rangle_K$, we may use Proposition \ref{prop:prop3} to have that $$ \Big \| \Big( \sum_{K^1} F_{1,K^1}^2 \Big)^{\frac 12} v_{1}^{-1} \Big \|_{L^2} \lesssim \| f_1 \sigma_1^{-\frac 12} \|_{L^2}=\| f_1 v_1 \|_{L^2}. $$ Suppose then $l_1>0$. There holds that $$ \Big \| \Big( \sum_{K^1} F_{1,K^1}^2 \Big)^{\frac 12} v_{1}^{-1} \Big \|_{L^2} = \Big( \sum_{K^1} \| F_{1,K^1} v_1^{-1} \|_{L^2}^2\Big)^{\frac 12}. $$ Let $s \in (1, \infty)$ be such that $d_1/s'=\beta/(2n)$. Then \begin{equation*} F_{1,K^1} \le 2^{\frac{l_1\beta}{2n}}1_{K^1}\bigg(\sum_{(L_1^1)^{(l_1)}=K^1} \frac{|L^1_1|^{\frac s2}}{|K^1|^s} \Big( \sum_{K^2}\frac{|\langle f_{1}, h_{L^1_1} \otimes h_{K^2}\rangle|^2}{\langle \sigma_1\rangle_K^2}\frac{1_{K^2}}{|K^2|} \Big)^{\frac{s}{2}} \bigg)^{\frac 1s}. \end{equation*} Therefore, $\| F_{1,K^1} v_1^{-1} \|_{L^2}^2$ is less than \begin{equation}\label{eq:eq18} 2^{\frac{l_1\beta}{n}} \bigg\| \bigg(\sum_{(L_1^1)^{(l_1)}=K^1} \frac{|L^1_1|^{\frac s2}}{|K^1|^s} \Big( \sum_{K^2}\frac{|\langle f_{1}, h_{L^1_1} \otimes h_{K^2}\rangle|^2}{\langle \sigma_1\rangle_K^2}\frac{1_{K^2}}{|K^2|} \Big)^{\frac{s}{2}} \bigg)^{\frac 1s} \langle \sigma_1\rangle_{K^1,1}^{\frac 12} \bigg\|_{L^2}^2 |K^1|.
\end{equation} Notice that $ |\langle f_{1}, h_{L^1_1} \otimes h_{K^2}\rangle | |K^2|^{-\frac 12} \le \langle | \Delta_{K^2} \langle f_1, h_{L_1^1} \rangle_1| \rangle_{K^2}. $ Therefore, the one-parameter case of Proposition \ref{prop:prop3} gives that \begin{equation}\label{eq:eq19} \begin{split} \eqref{eq:eq18} &\lesssim 2^{\frac{l_1\beta}{n}} \bigg\| \Big(\sum_{(L_1^1)^{(l_1)}=K^1} \frac{|L^1_1|^{\frac s2}}{|K^1|^s} |\langle f_{1}, h_{L^1_1}\rangle_1|^s\Big)^{\frac 1s} \langle \sigma_1\rangle_{K^1,1}^{-\frac 12} \bigg\|_{L^2}^2 |K^1| \\ & \le 2^{\frac{l_1\beta}{n}} \bigg\| \sum_{(L_1^1)^{(l_1)}=K^1} \frac{|L^1_1|^{\frac 12}}{|K^1|} |\langle f_{1}, h_{L^1_1}\rangle_1| \langle \sigma_1\rangle_{K^1,1}^{-\frac 12} \bigg\|_{L^2}^2 |K^1|. \end{split} \end{equation} Notice that $$ \sum_{(L_1^1)^{(l_1)}=K^1} \frac{|L^1_1|^{\frac 12}}{|K^1|} |\langle f_{1}, h_{L^1_1}\rangle_1| \le \langle | \Delta_{K^1,l_1}^1 f_1 | \rangle_{K^1,1}. $$ Thus, summing the right hand side of \eqref{eq:eq19} over $K^1$ leads to \begin{equation*} \begin{split} 2^{\frac{l_1\beta}{n}} \int_{\mathbb{R}^{d_2}} \sum_{K^1} \langle | \Delta_{K^1,l_1}^1 f_1 | \rangle_{K^1,1}^2 \langle \sigma_1\rangle_{K^1,1}^{-1} |K^1| &=2^{\frac{l_1\beta}{n}} \int_{\mathbb{R}^d} \sum_{K^1} \langle | \Delta_{K^1,l_1}^1 f_1 | \rangle_{K^1,1}^2 \frac{1_{K^1}}{\langle \sigma_1 \rangle_{K^1,1}^2} \sigma_1 \\ & \lesssim 2^{\frac{l_1\beta}{n}} \int_{\mathbb{R}^{d}} | f_1 |^2 v_1^2, \end{split} \end{equation*} where we used Proposition \ref{prop:prop3} again. Finally, we estimate the norms related to $j=2, \dots, n$, which are all similar. We assume that $l_j>0$. It will be clear how to do the case $l_j=0$. As above we have \begin{equation*} F_{j,K^1} \le 2^{\frac{l_j\beta}{2n}}1_{K^1}\bigg(\sum_{(L_j^1)^{(l_j)}=K^1} \frac{|L^1_j|^{\frac s2}}{|K^1|^s} M^{\langle \sigma_j\rangle_{K^1,1}}_{\calD^2}(\langle f_{j}, h_{L^1_j} \rangle_1 \langle \sigma_j \rangle_{K^1,1}^{-1})^s \bigg)^{\frac 1s}. 
\end{equation*} Therefore, we get that \begin{equation*} \begin{split} \| F_{j,K^1} v_{j}^{-1}\|_{L^2}^2 &\le 2^{\frac{l_j\beta}{n}} \bigg\| \bigg(\sum_{(L_j^1)^{(l_j)}=K^1} \frac{|L^1_j|^{\frac s2}}{|K^1|^s} M^{\langle \sigma_j\rangle_{K^1,1}}_{\calD^2}(\langle f_{j}, h_{L^1_j} \rangle_1 \langle \sigma_j \rangle_{K^1,1}^{-1})^s \bigg)^{\frac 1s} \langle \sigma_j \rangle_{K^1,1}^{\frac 12} \bigg \|_{L^2}^2|K^1| \\ &\lesssim 2^{\frac{l_j\beta}{n}} \bigg\| \bigg(\sum_{(L_j^1)^{(l_j)}=K^1} \frac{|L^1_j|^{\frac s2}}{|K^1|^s} |\langle f_{j}, h_{L^1_j} \rangle_1 \langle \sigma_j \rangle_{K^1,1}^{-1}|^s \bigg)^{\frac 1s} \langle \sigma_j \rangle_{K^1,1}^{\frac 12} \bigg \|_{L^2}^2|K^1| \\ & \le 2^{\frac{l_j\beta}{n}} \bigg\| \sum_{(L_j^1)^{(l_j)}=K^1} \frac{|L^1_j|^{\frac 12}}{|K^1|} |\langle f_{j}, h_{L^1_j} \rangle_1| \langle \sigma_j \rangle_{K^1,1}^{-\frac 12} \bigg \|_{L^2}^2|K^1|, \end{split} \end{equation*} where we applied the one-parameter version of Proposition \ref{prop:vecvalmax}. The last norm is like the last norm in \eqref{eq:eq19}, and therefore the estimate can be concluded with familiar steps. Combining the estimates we have shown that $$ \prod_{j=1}^n \Big \| \Big( \sum_{K^1} F_{j,K^1}^2 \Big)^{\frac 12} v_{j}^{-1} \Big \|_{L^2} \lesssim \prod_{j=1}^n 2^{\frac{l_j\beta}{2n}} \| f_j v_j \|_{L^2} \le 2^{\max k_j\frac{\beta}{2}} \prod_{j=1}^n \| f_j v_j \|_{L^2}. $$ Above, we assumed that the index $j_2$ related to the form of the paraproduct satisfied $j_2=1$, see the discussion before \eqref{eq:eq20}. It remains to comment on the case $j_2=n+1$. In this case the formula corresponding to \eqref{eq:eq20} is $$ c_{K,(L^1_j)} =\Bigg|\prod_{j=1}^n\frac{\Big\langle f_{j}, h_{L^1_j} \otimes \frac{1_{K^2}}{|K^2|}\Big\rangle}{\langle \sigma_j\rangle_K} \cdot \frac{\langle f_{n+1}, h^0_{K^1}\otimes h_{K^2}\rangle}{\langle \sigma_{n+1} \rangle_K} \Bigg|. $$ For $j=1, \dots, n$ we do the estimate \eqref{eq:eq21}. 
Also, there holds that \begin{equation*} \begin{split} \frac{|\langle f_{n+1}, h^0_{K^1}\otimes h_{K^2}\rangle |}{\langle \sigma_{n+1} \rangle_K} &= |K^1|^{\frac 12}\frac{\big|\big \langle \langle f_{n+1}, h_{K^2}\rangle_2 \langle \sigma_{n+1} \rangle_{K^2,2}^{-1}\langle \sigma_{n+1} \rangle_{K^2,2} \big \rangle_{K^1}\big|} {\langle \langle \sigma_{n+1} \rangle_{K^2,2} \rangle_{K^1}} \\ & \le |K^1|^{\frac 12} M_{\calD^1}^{\langle \sigma_{n+1} \rangle_{K^2,2}}(\langle f_{n+1}, h_{K^2}\rangle_2 \langle \sigma_{n+1} \rangle_{K^2,2}^{-1})(x_1) \end{split} \end{equation*} for any $x_1 \in K^1$. With the pointwise estimates we proceed as above. Related to $f_j$, $j=1, \dots, n$, this leads to terms which we know how to estimate. Related to $f_{n+1}$ we get a similar term except that the parameters are in opposite roles. We are done. \end{proof} \subsection{Full paraproducts} An $n$-linear bi-parameter full paraproduct $\Pi$ takes the form \begin{equation}\label{eq:pi2bar} \langle \Pi(f_1, \ldots, f_n) , f_{n+1} \rangle = \sum_{K = K^1 \times K^2} a_{K} \prod_{j=1}^{n+1} \langle f_j, u_{j, K^1} \otimes u_{j, K^2} \rangle, \end{equation} where the functions $u_{j, K^1}$ and $u_{j, K^2}$ are like in \eqref{eq:Spi}. The coefficients are assumed to satisfy $$ \| (a_{K} ) \|_{\BMO_{\operatorname{prod}}} = \sup_{\Omega} \Big(\frac{1}{|\Omega|} \sum_{K\subset \Omega} |a_{K}|^2 \Big)^{1/2} \le 1, $$ where the supremum is over open sets $\Omega \subset \mathbb{R}^d = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$ with $0 < |\Omega| < \infty$. As already discussed the $H^1$-$\BMO$ duality works also in bi-parameter (see again \cite[Equation (4.1)]{MO}): \begin{equation}\label{eq:BiParH1BMO} \sum_{K} |a_{K}| |b_{K}| \lesssim \| (a_{K} ) \|_{\BMO_{\operatorname{prod}}} \Big\| \Big( \sum_{K} |b_{K}|^2 \frac{1_{K}}{|K|} \Big)^{1/2} \Big \|_{L^1}. \end{equation} We are ready to bound the full paraproducts. 
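\begin{rem}
For orientation, in the linear case $n=1$ a representative full paraproduct is the classical dyadic product $\BMO$ paraproduct: taking $u_{2, K^m} = h_{K^m}$ and $u_{1, K^m} = \frac{1_{K^m}}{|K^m|}$ for $m=1,2$ in \eqref{eq:pi2bar} gives
$$
\Pi(f_1) = \sum_{K = K^1 \times K^2} a_{K} \langle f_1 \rangle_{K} \, h_{K^1} \otimes h_{K^2}, \qquad \| (a_{K}) \|_{\BMO_{\operatorname{prod}}} \le 1.
$$
\end{rem}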
\begin{thm} Suppose $\Pi$ is an $n$-linear bi-parameter full paraproduct, $1 < p_1, \ldots, p_n \le \infty$ and $1/p = \sum_{i=1}^n 1/p_i> 0$. Then we have $$ \|\Pi(f_1, \ldots, f_n)w \|_{L^p} \lesssim \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}} $$ for all multilinear bi-parameter weights $\vec w \in A_{\vec p}$. \end{thm} \begin{proof} We use duality to always reduce to one of the operators of type $A_{1}$ in Theorem \ref{thm:thm3}. Fix some $\vec p = (p_1, \ldots, p_n)$ with $1 < p_i < \infty$ and $p > 1$, which is enough by extrapolation. We will dualise using $f_{n+1}$ with $\|f_{n+1}w^{-1}\|_{L^{p'}} \le 1$. The particular form of $\Pi$ does not matter -- it only affects the form of the operator $A_1$ we will get. We may, for example, look at $$ \Pi(f_1, \ldots, f_n) = \sum_{K = K^1 \times K^2} a_K \Big \langle f_{1}, h_{K^1} \otimes \frac{1_{K^2}}{|K^2|} \Big\rangle \Big \langle f_{2}, \frac{1_{K^1}}{|K^1|} \otimes h_{K^2} \Big\rangle \prod_{j=3}^{n} \langle f_j \rangle_K \cdot \frac{1_K}{|K|}. $$ We have \begin{align*} |\langle \Pi(f_1, \ldots, f_n), f_{n+1} \rangle| \le \sum_{K} |a_K| \Big| \Big \langle f_{1}, h_{K^1} \otimes \frac{1_{K^2}}{|K^2|} \Big\rangle \Big \langle f_{2}, \frac{1_{K^1}}{|K^1|} \otimes h_{K^2} \Big\rangle \Big| \prod_{j=3}^{n+1} \langle |f_j| \rangle_K. \end{align*} We now apply the unweighted $H^1$-$\BMO$ duality estimate from above to bound this with \begin{align*} \Big\| \Big( \sum_{K} \langle | \Delta_{K^1}^1 f_1 | \rangle_K^2 \langle | \Delta_{K^2}^2 f_2 | \rangle_K^2 \prod_{j=3}^{n+1} \langle |f_j| \rangle_K^2 1_K \Big)^{\frac{1}{2}} \Big\|_{L^1}. \end{align*} Recalling \eqref{eq:eq12} it remains to apply Theorem \ref{thm:thm3} with a suitable $A_1(f_1, \ldots, f_{n+1})$. \end{proof} \section{Singular integrals}\label{sec:SIOs} Let $\omega$ be a modulus of continuity: an increasing and subadditive function with $\omega(0) = 0$. 
A relevant quantity is the modified Dini condition \begin{equation}\label{eq:Dini} \|\omega\|_{\operatorname{Dini}_{\alpha}} := \int_0^1 \omega(t) \Big( 1 + \log \frac{1}{t} \Big)^{\alpha} \frac{dt}{t}, \qquad \alpha \ge 0. \end{equation} In practice, the quantity \eqref{eq:Dini} arises as follows: \begin{equation}\label{eq:diniuse} \sum_{k=1}^{\infty} \omega(2^{-k}) k^{\alpha} = \sum_{k=1}^{\infty} \frac{1}{\log 2} \int_{2^{-k}}^{2^{-k+1}} \omega(2^{-k}) k^{\alpha} \frac{dt}{t} \lesssim \int_0^1 \omega(t) \Big( 1 + \log \frac{1}{t} \Big)^{\alpha} \frac{dt}{t}. \end{equation} We define what it means to be an $n$-linear bi-parameter SIO. Let $\mathscr{F}_{d_i}$ denote the space of finite linear combinations of indicators of cubes in $\mathbb{R}^{d_i}$, and let $\mathscr{F}$ denote the space of finite linear combinations of indicators of rectangles in $\mathbb{R}^{d}$. Suppose that we have $n$-linear operators $T^{j_1*,j_2*}_{1,2}$, $j_1,j_2 \in \{0, \dots, n\}$, each mapping $\mathscr{F} \times \dots \times \mathscr{F}$ into locally integrable functions. We denote $T=T^{0*,0*}$ and assume that the operators $T^{j_1*,j_2*}_{1,2}$ satisfy the duality relations as described in Section \ref{sec:adjoints}. Let $\omega_i$ be a modulus of continuity on $\mathbb{R}^{d_i}$. Assume $f_j = f_j^1 \otimes f_j^2$, $j = 1, \ldots, n+1$, where $f_{j}^i \in \mathscr{F}_{d_i}$. \subsection*{Bi-parameter SIOs} \subsubsection*{Full kernel representation} Here we assume that given $m \in \{1,2\}$ there exist $j_1, j_2 \in \{1, \ldots, n+1\}$ so that $\operatorname{spt} f_{j_1}^m \cap \operatorname{spt} f_{j_2}^m = \emptyset$. 
In this case we demand that $$ \langle T(f_1, \ldots, f_n), f_{n+1}\rangle = \int_{\mathbb{R}^{(n+1)d}} K(x_{n+1},x_1, \dots, x_n)\prod_{j=1}^{n+1} f_j(x_j) \ud x, $$ where $$ K \colon \mathbb{R}^{(n+1)d} \setminus \{ (x_1, \ldots, x_{n+1}) \in \mathbb{R}^{(n+1)d}\colon x_1^1 = \cdots = x_{n+1}^1 \textup{ or } x_1^2 = \cdots = x_{n+1}^2\} \to \mathbb{C} $$ is a kernel satisfying a set of estimates which we specify next. The kernel $K$ is assumed to satisfy the size estimate \begin{displaymath} |K(x_{n+1},x_1, \dots, x_n)| \lesssim \prod_{m=1}^2 \frac{1}{\Big(\sum_{j=1}^{n} |x_{n+1}^m-x_j^m|\Big)^{d_mn}}. \end{displaymath} We also require the following continuity estimates. For example, we require that we have \begin{align*} |K(x_{n+1}, x_1, \ldots, x_n)-&K(x_{n+1},x_1, \dots, x_{n-1}, (c^1,x^2_n))\\ &-K((x_{n+1}^1,c^2),x_1, \dots, x_n)+K((x_{n+1}^1,c^2),x_1, \dots, x_{n-1}, (c^1,x^2_n))| \\ &\qquad \lesssim \omega_1 \Big( \frac{|x_{n}^1-c^1| }{ \sum_{j=1}^{n} |x_{n+1}^1-x_j^1|} \Big) \frac{1}{\Big(\sum_{j=1}^{n} |x_{n+1}^1-x_j^1|\Big)^{d_1n}} \\ &\qquad\times \omega_2 \Big( \frac{|x_{n+1}^2-c^2| }{ \sum_{j=1}^{n} |x_{n+1}^2-x_j^2|} \Big) \frac{1}{\Big(\sum_{j=1}^{n} |x_{n+1}^2-x_j^2|\Big)^{d_2n}} \end{align*} whenever $|x_n^1-c^1| \le 2^{-1} \max_{1 \le i \le n} |x_{n+1}^1-x_i^1|$ and $|x_{n+1}^2-c^2| \le 2^{-1} \max_{1 \le i \le n} |x_{n+1}^2-x_i^2|$. Of course, we also require all the other natural symmetric estimates, where $c^1$ can be in any of the given $n+1$ slots and similarly for $c^2$. There are $(n+1)^2$ different estimates. Finally, we require the following mixed continuity and size estimates. 
For example, we ask that \begin{align*} |K(x_{n+1}&, x_1, \ldots, x_n)-K(x_{n+1},x_1, \dots, x_{n-1}, (c^1,x^2_n))| \\ & \lesssim \omega_1 \Big( \frac{|x_{n}^1-c^1| }{ \sum_{j=1}^{n} |x_{n+1}^1-x_j^1|} \Big) \frac{1}{\Big(\sum_{j=1}^{n} |x_{n+1}^1-x_j^1|\Big)^{d_1n}} \cdot \frac{1}{\Big(\sum_{j=1}^{n} |x_{n+1}^2-x_j^2|\Big)^{d_2n}} \end{align*} whenever $|x_n^1-c^1| \le 2^{-1} \max_{1 \le i \le n} |x_{n+1}^1-x_i^1|$. Again, we also require all the other natural symmetric estimates. \subsubsection*{Partial kernel representations} Suppose now only that there exist $j_1, j_2 \in \{1, \ldots, n+1\}$ so that $\operatorname{spt} f_{j_1}^1 \cap \operatorname{spt} f_{j_2}^1 = \emptyset$. Then we assume that $$ \langle T(f_1, \ldots, f_n), f_{n+1}\rangle = \int_{\mathbb{R}^{(n+1)d_1}} K_{(f_j^2)}(x_{n+1}^1, x_1^1, \ldots, x_n^1) \prod_{j=1}^{n+1} f_j^1(x^1_j) \ud x^1, $$ where $K_{(f_j^2)}$ is a one-parameter $\omega_1$-Calder\'on--Zygmund kernel but with a constant depending on the fixed functions $f_1^2, \ldots, f_{n+1}^2$. For example, this means that the size estimate takes the form $$ |K_{(f_j^2)}(x_{n+1}^1, x_1^1, \ldots, x_n^1)| \le C(f_1^2, \ldots, f_{n+1}^2) \frac{1}{\Big(\sum_{j=1}^{n} |x_{n+1}^1-x_j^1|\Big)^{d_1n}}. $$ The continuity estimates are analogous. We assume the following $T1$ type control on the constant $C(f_1^2, \ldots, f_{n+1}^2)$. We have \begin{equation*}\label{eq:PKWBP} C(1_{I^2}, \ldots, 1_{I^2}) \lesssim |I^2| \end{equation*} and \begin{equation}\label{eq:pest} C(a_{I^2}, 1_{I^2}, \ldots, 1_{I^2}) + C(1_{I^2}, a_{I^2}, 1_{I^2}, \ldots, 1_{I^2}) + \cdots + C(1_{I^2}, \ldots, 1_{I^2}, a_{I^2}) \lesssim |I^2| \end{equation} for all cubes $I^2 \subset \mathbb{R}^{d_2}$ and all functions $a_{I^2} \in \mathscr{F}_{d_2}$ satisfying $a_{I^2} = 1_{I^2}a_{I^2}$, $|a_{I^2}| \le 1$ and $\int a_{I^2} = 0$. 
An analogous partial kernel representation in the second parameter is assumed when $\operatorname{spt} f_{j_1}^2 \cap \operatorname{spt} f_{j_2}^2 = \emptyset$ for some $j_1, j_2$. \begin{defn} If $T$ is an $n$-linear operator with full and partial kernel representations as defined above, we call $T$ an $n$-linear bi-parameter $(\omega_1, \omega_2)$-SIO. \end{defn} \subsection*{Bi-parameter CZOs} We say that $T$ satisfies the weak boundedness property if \begin{equation*}\label{eq:2ParWBP} |\langle T(1_R, \ldots, 1_R), 1_R \rangle| \lesssim |R| \end{equation*} for all rectangles $R = I^1 \times I^2 \subset \mathbb{R}^{d} = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$. An SIO $T$ satisfies the diagonal BMO assumption if the following holds. For all rectangles $R = I^1 \times I^2 \subset \mathbb{R}^{d} = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$ and functions $a_{I^i}\in \mathscr{F}_{d_i}$ with $a_{I^i} = 1_{I^i}a_{I^i}$, $|a_{I^i}| \le 1$ and $\int a_{I^i} = 0$ we have \begin{equation*}\label{eq:DiagBMO} |\langle T(a_{I^1} \otimes 1_{I^2}, 1_R, \ldots, 1_R), 1_R \rangle| + \cdots + |\langle T(1_R, \ldots, 1_R), a_{I^1} \otimes 1_{I^2} \rangle| \lesssim |R| \end{equation*} and $$ |\langle T(1_{I^1} \otimes a_{I^2}, 1_R, \ldots, 1_R), 1_R \rangle| + \cdots + |\langle T(1_R, \ldots, 1_R), 1_{I^1} \otimes a_{I^2} \rangle| \lesssim |R|. $$ An SIO $T$ satisfies the product BMO assumption if it holds $$S(1, \ldots, 1) \in \BMO_{\textup{prod}}$$ for all the $(n+1)^2$ adjoints $S = T^{j_1*, j_2*}_{1,2}$.
This can be interpreted in the sense that $$ \| S(1, \ldots, 1) \|_{\BMO_{\operatorname{prod}}} = \sup_{\calD = \calD^1 \times \calD^2} \sup_{\Omega} \Big(\frac{1}{|\Omega|} \sum_{ \substack{ R = I^1 \times I^2 \in \calD \\ R \subset \Omega}} |\langle S(1, \ldots, 1), h_R \rangle|^2 \Big)^{1/2} < \infty, $$ where the supremum is over all dyadic grids $\calD^i$ on $\mathbb{R}^{d_i}$ and open sets $\Omega \subset \mathbb{R}^d = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$ with $0 < |\Omega| < \infty$, and the pairings $\langle S(1, \ldots, 1), h_R\rangle$ can be defined, in a natural way, using the kernel representations. \begin{defn}\label{defn:CZO} An $n$-linear bi-parameter $(\omega_1, \omega_2)$-SIO $T$ satisfying the weak boundedness property, the diagonal BMO assumption and the product BMO assumption is called an $n$-linear bi-parameter $(\omega_1, \omega_2)$-Calder\'on--Zygmund operator ($(\omega_1, \omega_2)$-CZO). \end{defn} \subsection*{Dyadic representation theorem} In Section \ref{sec:dmo} we have introduced the three different dyadic model operators (DMOs). In this section we explain how and why the DMOs are linked to the CZOs. Before stating the known representation theorem, to aid the reader, we first outline the main structure and idea of representation theorems -- for the lengthy details in this generality see \cite{AMV}. \textbf{Step 1.} There is a natural probability space $\Omega = \Omega_1 \times \Omega_2$, the details of which are not relevant for us here (but see \cite{Hy1}), so that to each $\sigma = (\sigma_1, \sigma_2) \in \Omega$ we can associate a random collection of dyadic rectangles $\calD_{\sigma} = \calD_{\sigma_1} \times \calD_{\sigma_2}$.
The starting point is the martingale difference decomposition \begin{equation*} \langle T(f_1, \ldots, f_n),f_{n+1} \rangle = \sum_{j_1, j_2 =1}^{n+1} \mathbb{E}_{\sigma} \Sigma_{j_1, j_2, \sigma} + \mathbb{E}_{\sigma} \operatorname{Rem}_{\sigma}, \end{equation*} where $$ \Sigma_{j_1, j_2, \sigma} = \sum_{ \substack{ R_1, \ldots, R_{n+1} \\ \ell(I_{i_1}^1) > \ell(I_{j_1}^1) \textup{ for } i_1 \ne j_1 \\ \ell(I_{i_2}^2) > \ell(I_{j_2}^2) \textup{ for } i_2 \ne j_2}} \langle T(\Delta_{R_1}f_1, \ldots, \Delta_{R_n}f_n),\Delta_{R_{n+1}}f_{n+1} \rangle $$ and $R_1 = I_1^1 \times I_1^2, \ldots, R_{n+1} = I_{n+1}^1 \times I_{n+1}^2 \in \calD_\sigma = \calD_{\sigma_1} \times \calD_{\sigma_2}$. Notice how we have already started the proof working parameter by parameter. At this point the randomization is not yet important: it is used at a later point in the proof to find suitable common parents for dyadic cubes. Looking at the definition of the shifts, this is clearly critical: everything is organised under the common parent $K$, and these common parents cannot be arbitrarily large. \textbf{Step 2.} There are $(n+1)^2$ main terms $\Sigma_{j_1, j_2, \sigma}$ -- these are similar to each other and all of them produce shifts, partial paraproducts and \emph{exactly one} full paraproduct. For example, a further parameter by parameter $T1$ style decomposition of $\Sigma_{n, n+1, \sigma}$ produces the full paraproduct \begin{align*} &\sum_{R = K^1 \times K^2} \langle T(1, \ldots, 1, h_{K^1} \otimes 1), 1 \otimes h_{K^2} \rangle \prod_{j=1}^{n-1} \langle f_j \rangle_{R}\Big \langle f_n, h_{K^1} \otimes \frac{1_{K^2}}{|K^2|} \Big\rangle \Big \langle f_{n+1}, \frac{1_{K^1}}{|K^1|} \otimes h_{K^2} \Big\rangle, \end{align*} where $$ \langle T(1, \ldots, 1, h_{K^1} \otimes 1), 1 \otimes h_{K^2} \rangle = \langle T^{n*}_1(1, \ldots, 1), h_R \rangle.
$$ For the quantitative part, the product $\BMO$ assumption on $T^{n*}_1$ is critical here, and the remaining product $\BMO$ assumptions are used to control the full paraproducts coming from the other main terms. The shifts and partial paraproducts structurally arise from the $T1$ decomposition combined with probability. Again, the randomization is simply used to find suitably sized common parents. After this completely structural part (for full details see \cite{AMV}), the focus is on providing estimates for the coefficients, like the coefficient $a_{K, (R_j)}$ of the shifts. \emph{As in the full paraproduct case above, it is important to understand that the coefficients always have a concrete form in terms of pairings involving $T$ and various Haar functions.} These pairings are estimated in various ways: \begin{itemize} \item The shift coefficients are handled with kernel estimates only (various size and continuity estimates). Only in the part $\operatorname{Rem}_{\sigma}$, which we have not yet discussed, is the weak boundedness property also used, to handle the diagonal case where kernel estimates are not valid. \item In the partial paraproduct case a size estimate does not suffice, as there is not enough cancellation. A more refined $\BMO$ estimate needs to be proved -- this is done via a duality argument. This duality is the source of the atoms $a_I$ appearing in some of the assumptions -- e.g. in \eqref{eq:pest}. \end{itemize} \textbf{Step 3.} The final step is to deal with the remainder $\operatorname{Rem}_{\sigma}$. This only produces shifts and partial paraproducts. Another difference from the main terms is that all the diagonal parts of the summation are here -- to deal with them we need to assume the weak boundedness property and the diagonal $\BMO$ assumptions.
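To illustrate how such pairings are controlled by kernel estimates alone, consider the simplest model situation: a linear one-parameter SIO $T$ with kernel $K$ and two cubes $I, J$ with disjoint supports (the notation in this illustration is ours). Since $\int h_I = 0$, we may subtract a constant in the $y$ variable:

```latex
\langle T h_I, h_J \rangle
  = \iint \big( K(x,y) - K(x, c_I) \big) h_I(y) h_J(x) \, \ud y \, \ud x,
```

where $c_I$ denotes the center of $I$, so the continuity estimate of the kernel yields a bound involving quantities of the type $\omega(\ell(I)/\operatorname{dist}(I,J))$ times the relevant normalisations. The genuinely multilinear bi-parameter coefficients are handled by iterating this idea in each slot and parameter.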
\begin{rem}\label{rem:mrem} An $m$-parameter representation theorem is structurally identical: the pairing $\langle T(f_1, \ldots, f_n),f_{n+1} \rangle$ is split into $(n+1)^m$ main terms and the remainder. These are then further split into shifts, partial paraproducts and full paraproducts. The full paraproduct is produced parameter by parameter, as it is in the bi-parameter case, and this produces partial paraproducts, where the paraproduct component can vary from being $1$-parameter to being $(m-1)$-parameter. The definition of a CZO is adjusted so that all of the coefficients of the appearing model operators, involving $T$ and Haar functions, can be estimated. For the linear $m$-parameter representation theorem see Ou \cite{Ou} -- this establishes the appropriate definition of a multi-parameter CZO. We discuss the $m$-parameter case in more detail in Section \ref{sec:multi}. The point there is the following: while the representation theorem itself is straightforward, some of the estimates of Section \ref{sec:dmo} are harder in the $m$-parameter setting. \end{rem} In the paper \cite{AMV}, among other things, a dyadic representation theorem for $n$-linear bi-parameter CZOs was proved. The minimal regularity required is $\omega_i \in \operatorname{Dini}_{\frac{1}{2}}$, but then the dyadic representation is in terms of certain modified versions of the model operators we have presented, and bounded, in this paper. It appears to be difficult to prove weighted bounds for the modified operators with the optimal dependency on the complexity. Instead, we will rely on a lemma, which says that all of the modified operators can be written as suitable sums of the standard model operators. This step essentially loses $\frac{1}{2}$ of kernel regularity outright, which is why we obtain our weighted bounds under the assumption $\omega_i \in \operatorname{Dini}_{1}$.
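For the reader's convenience, let us recall the modulus of continuity classes appearing here; we state them in a standard normalisation (see \cite{AMV} for the precise variant used there). A modulus $\omega$ belongs to $\operatorname{Dini}_{\alpha}$ if

```latex
\| \omega \|_{\operatorname{Dini}_{\alpha}}
  := \int_0^1 \omega(t) \Big( 1 + \log \frac{1}{t} \Big)^{\alpha} \, \frac{\ud t}{t}
  < \infty,
\qquad \text{equivalently (for an increasing modulus)} \quad
  \sum_{k=0}^{\infty} \omega(2^{-k}) (1+k)^{\alpha} < \infty.
```

In particular, a linear complexity dependence $(1+u_1)(1+u_2)$ in the representation is summable against $\omega_i \in \operatorname{Dini}_1$, since $\sum_{u_1, u_2} \omega_1(2^{-u_1}) \omega_2(2^{-u_2}) (1+u_1)(1+u_2) = \prod_{m=1}^{2} \sum_{u_m} \omega_m(2^{-u_m})(1+u_m)$.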
The bilinear bi-parameter representation theorem with the usual H\"older type kernel regularity $\omega_i(t) = t^{\alpha_i}$ appeared first in \cite{LMV}. We now state a representation theorem that we will rely on. A consequence of \cite[Theorem 5.35]{AMV} and \cite[Lemma 5.12]{AMV} is the following. \begin{prop} Suppose $T$ is an $n$-linear bi-parameter $(\omega_1, \omega_2)$-CZO. Then we have $$ \langle T(f_1,\ldots,f_n), f_{n+1} \rangle= C_T \mathbb{E}_{\sigma} \sum_{u = (u_1, u_2) \in \mathbb{N}^2} \omega_1(2^{-u_1})\omega_2(2^{-u_2}) \langle U_{u, \sigma}(f_1,\ldots,f_n), f_{n+1} \rangle, $$ where $C_T$ enjoys a linear bound with respect to the CZO quantities and $U_{u, \sigma}$ denotes some $n$-linear bi-parameter dyadic operator (defined in the grid $\calD_{\sigma}$) with the following property. We have that $U_u = U_{u, \sigma}$ can be decomposed using the standard dyadic model operators as follows: \begin{equation}\label{eq:eq10} U_{u} = C \sum_{i_1=0}^{u_1-1} \sum_{i_2=0}^{u_2-1} V_{i_1,i_2}, \end{equation} where each $V = V_{i_1,i_2}$ is a dyadic model operator (a shift, a partial paraproduct or a full paraproduct) of complexity $k^m_{j, V}$, $j \in \{1, \ldots, n+1\}$, $m \in \{1,2\}$, satisfying $$ k^{m}_{j, V} \le u_m. $$ \end{prop} \begin{rem} We assumed that the operator $T$ and its adjoints are initially well-defined for finite linear combinations of indicators of rectangles. However, a careful proof of the representation theorem \cite{LMV} shows that this implies the boundedness of $T$ (for related details see also \cite{GH} and \cite{Hy3}). Therefore, we do not need to worry about this detail any more at this point and we can work with general functions. Moreover, we do not need to work with the CZOs directly -- after the representation theorem we only need to work with the dyadic model operators. \end{rem} \subsection*{Weighted estimates for CZOs} In this paper we were able to prove a complexity free weighted estimate for the shifts.
In contrast, the weighted estimate for the partial paraproducts has an exponential dependence on the complexity, albeit with an arbitrarily small power. For these reasons, we can prove a weighted estimate with mild kernel regularity for paraproduct free $T$, and otherwise we will deal with the standard kernel regularity $\omega_i(t) = t^{\alpha_i}$. By paraproduct free we mean that the paraproducts in the dyadic representation of $T$ vanish, which could also be stated in terms of (both partial and full) ``$T1=0$'' type conditions (only the partial paraproducts, and not the full paraproducts, are problematic in terms of kernel regularity, of course). In the paraproduct free case the reader can think of convolution form SIOs. \begin{thm} Suppose $T$ is an $n$-linear bi-parameter $(\omega_1, \omega_2)$-CZO. For $1 < p_1, \ldots, p_n \le \infty$ and $1/p = \sum_i 1/p_i> 0$ we have $$ \|T(f_1, \ldots, f_n)w \|_{L^p} \lesssim \prod_i \|f_i w_i\|_{L^{p_i}} $$ for all multilinear bi-parameter weights $\vec w \in A_{\vec p}$, if one of the following conditions holds. \begin{enumerate} \item $T$ is paraproduct free and $\omega_i \in \operatorname{Dini}_{1}$. \item We have $\omega_i(t) = t^{\alpha_i}$ for some $\alpha_i \in (0,1]$. \end{enumerate} \end{thm} \begin{proof} Notice that in the paraproduct free case (1), by our results for the shifts, we always have $$ \|U_{u, \sigma}(f_1, \ldots, f_n)w \|_{L^p} \lesssim (1+u_1)(1+u_2) \prod_i \|f_i w_i\|_{L^{p_i}}, $$ where the complexity dependency comes only from the decomposition \eqref{eq:eq10}. We then take some $1 < p_1, \ldots, p_n < \infty$ with $p \in (1, \infty)$, use the dyadic representation theorem and conclude that $T$ satisfies the weighted bound with these fixed exponents -- recall \eqref{eq:diniuse} and that $\omega_i \in \operatorname{Dini}_{1}$. Finally, we extrapolate using Theorem \ref{thm:ext}. The case of a completely general CZO with the standard kernel regularity is proved analogously.
Just choose the exponent $\beta$ in the exponential complexity dependency of the partial paraproducts to be small enough compared to $\alpha_1$ and $\alpha_2$. \end{proof} \section{Extrapolation}\label{app:app1} This section is devoted to providing more details about Theorem \ref{thm:ext} in the multi-parameter setting. We also obtain the proof of the vector-valued Proposition \ref{prop:vecvalmax}. We give the details in the bi-parameter case with the general case being similar. We begin with the following definitions. Given $\mu\in A_\infty(\mathbb{R}^{d})$, we say $w\in A_p(\mu)$ if $w>0$ a.e. and \[ [w]_{A_p(\mu)}:=\sup_R\, \langle w\rangle_R^{\mu} \left(\big\langle w^{-\frac 1{p-1}}\big\rangle_R^{\mu}\right)^{p-1}<\infty,\qquad 1<p<\infty. \] We say $w\in A_1(\mu)$ if $w>0$ a.e. and \[ [w]_{A_1(\mu)} = \sup_R \, \ave{w}_R^\mu \esssup_R w^{-1}<\infty. \] We continue with the following auxiliary result needed to build the required machinery. This is an extension of Proposition \ref{prop:prop1}. \begin{rem} The so-called three lattice theorem states that there are lattices $\calD^m_j$ in $\mathbb{R}^{d_m}$, $m \in \{1,2\}$, $j \in \{1, \ldots, 3^{d_m}\}$, such that for every cube $Q^m \subset \mathbb{R}^{d_m}$ there exists a $j$ and $I^m \in \calD^m_j$ so that $Q^m \subset I^m$ and $|I^m| \sim |Q^m|$. Given $\lambda \in A_\infty$ we have in particular that $\lambda$ is doubling: $\lambda(2R) \lesssim \lambda(R)$ for all rectangles $R$. It then follows that the non-dyadic variant $M^{\lambda}$ also satisfies Proposition \ref{prop:prop1}. \end{rem} \begin{lem}\label{lem:lem5} Let $\mu\in A_\infty$ and $w\in A_p(\mu)$, $1<p<\infty$. Then we have \[ \| M^\mu f\|_{L^p(w\mu)}\lesssim \|f\|_{L^p(w\mu)}. \] \end{lem} \begin{proof} Fix $x$ and $f\ge 0$ and denote $\sigma=w^{-\frac 1{p-1}}$.
For an arbitrary rectangle $R \subset \mathbb{R}^d$ with $x\in R$ we have \begin{align*} \langle f \rangle_R^\mu &= \langle \sigma\rangle_R^\mu \left( \langle w\rangle_R^\mu\right)^{\frac 1{p-1}}\left( \langle w\rangle_R^\mu\right)^{-\frac 1{p-1}}\frac 1{\sigma\mu(R)}\int_R f\mu\\ &\le [w]_{A_p(\mu)}^{\frac 1{p-1}} \left(M^{w\mu} \big( [M^{\sigma \mu}(f \sigma^{-1})]^{p-1} w^{-1} \big)(x)\right)^{\frac 1{p-1}}. \end{align*} The idea of the above pointwise estimate is from \cite{Le}. If $w\mu, \sigma \mu\in A_\infty$, then by (the non-dyadic version of) Proposition \ref{prop:prop1} we have \begin{align*} \| M^{\mu} f\|_{L^p(w\mu)}&\lesssim \left\| \left(M^{w\mu} \big( [M^{\sigma \mu}(f \sigma^{-1})]^{p-1} w^{-1} \big) \right)^{\frac 1{p-1}}\right\|_{L^p(w\mu)}\\ &\lesssim \left\| \left( [M^{\sigma \mu}(f \sigma^{-1})]^{p-1} w^{-1} \right)^{\frac 1{p-1}}\right\|_{L^p(w\mu)}\\ &= \| M^{\sigma \mu}(f \sigma^{-1})\|_{L^p(\sigma\mu)}\lesssim \|f\|_{L^p(w\mu)}. \end{align*} Therefore, it remains to check that $w\mu, \sigma \mu\in A_\infty$. We only explicitly prove that $w\mu\in A_\infty$, since the other one is symmetric. First of all, write \[ \langle w\rangle_R^{\mu} \left(\big\langle w^{-\frac 1{p-1}}\big\rangle_R^{\mu}\right)^{p-1}\le [w]_{A_p(\mu)} \] in the form \[ \langle w\mu\rangle_R \big\langle w^{-\frac 1{p-1}}\mu\big\rangle_R^{p-1}\le [w]_{A_p(\mu)} \langle \mu\rangle_R^p. \] Then by the Lebesgue differentiation theorem, we have for all cubes $I^1 \subset \mathbb{R}^{d_1}$ that \[ \langle w\mu\rangle_{I^1,1}(x_2) \big\langle w^{-\frac 1{p-1}}\mu\big\rangle_{I^1,1}^{p-1}(x_2) \le [w]_{A_p(\mu)} \langle \mu\rangle_{I^1,1}^p(x_2),\quad x_2\in \mathbb{R}^{d_2}\setminus N_{I^1}, \]where $|N_{I^1}|=0$. 
By standard considerations there exists $N$ so that $|N| = 0$ and for all cubes $I^1 \subset \mathbb{R}^{d_1}$ we have \[ \langle w\mu\rangle_{I^1,1}(x_2) \big\langle w^{-\frac 1{p-1}}\mu\big\rangle_{I^1,1}^{p-1}(x_2) \le [w]_{A_p(\mu)} \langle \mu\rangle_{I^1,1}^p(x_2) ,\quad x_2\in \mathbb{R}^{d_2}\setminus N. \] In other words, $w(\cdot, x_2)\in A_p(\mu(\cdot, x_2))$ (uniformly) for all $x_2\in \mathbb{R}^{d_2}\setminus N$. As $\mu \in A_{\infty}$ there exists $s < \infty$ so that $\mu \in A_s$. Then for all cubes $I^1 \subset \mathbb{R}^{d_1}$ and arbitrary $E\subset I^1$ we have \begin{align*} \frac{|E|}{|I^1|}\lesssim \Big(\frac{\mu(\cdot, x_2)(E)}{\mu(\cdot, x_2)(I^1)} \Big)^{\frac{1}{s}} \lesssim \Big(\frac{w\mu(\cdot, x_2)(E)}{w\mu(\cdot, x_2)(I^1)}\Big)^{\frac{1}{ps}},\quad {\rm {a.e.}} \, x_2\in \mathbb{R}^{d_2}, \end{align*}where the implicit constant is independent of $x_2$. This means $w\mu(\cdot, x_2)\in A_\infty(\mathbb{R}^{d_1})$ uniformly for a.e. $x_2\in \mathbb{R}^{d_2}$. Likewise we can show that $w\mu(x_1,\cdot)\in A_\infty(\mathbb{R}^{d_2})$ uniformly for a.e. $x_1\in \mathbb{R}^{d_1}$. This completes the proof. \end{proof} Now we are ready to formulate the following version of the Rubio de Francia algorithm. \begin{lem} Let $\mu \in A_{\infty}$ and $p \in (1,\infty)$. Let $f$ be a non-negative function in $L^p(w\mu)$ for some $w\in A_p(\mu)$. Let $M^\mu_k$ be the $k$-th iterate of $M^\mu$, $M^\mu_0f=f$, and $\|M^\mu\|_{L^p(w\mu)} := \|M^\mu\|_{L^p(w\mu) \to L^p(w\mu)}$ be the norm of $M^\mu$ as a bounded operator on $L^p(w\mu)$. Define \[ Rf(x)= \sum_{k=0}^\infty \frac{M^\mu_k f}{(2\|M^\mu\|_{L^p(w\mu)})^k}. \] Then $f(x)\le Rf(x)$, $\|Rf\|_{L^p(w\mu)}\le 2 \|f\|_{L^p(w\mu)}$, and $Rf$ is an $A_1(\mu)$ weight with constant $[Rf]_{A_1(\mu)}\le 2 \|M^\mu\|_{L^p(w\mu)}$. \end{lem} \begin{proof} The statements $f(x)\le Rf(x)$ and $\|Rf\|_{L^p(w\mu)}\le 2 \|f\|_{L^p(w\mu)}$ are obvious.
Since \[ M^\mu(Rf)\le \sum_{k=0}^\infty \frac{M^\mu_{k+1} f}{(2\|M^\mu\|_{L^p(w\mu)})^k}\le 2\|M^\mu\|_{L^p(w\mu)} Rf, \] we have \[ [Rf]_{A_1(\mu)}\le \sup_R \big(\inf_R M^\mu(Rf) \big) \big(\operatornamewithlimits{ess\,inf}_R Rf \big) ^{-1}\le 2\|M^\mu\|_{L^p(w\mu)}. \] We are done. \end{proof} With the above Rubio de Francia algorithm at hand, we are able to prove the bi-parameter version of \cite[Theorem 3.1]{LMO} and the corresponding endpoint cases similarly to \cite[Theorem 2.3]{LMMOV}. On the other hand, the key technical lemma \cite[Lemma 2.14]{LMMOV} can be extended to the bi-parameter setting very easily. Using these as in \cite{LMMOV} we obtain Theorem \ref{thm:ext}. The above Rubio de Francia algorithm, of course, also yields the following standard linear extrapolation. Let $\mu\in A_\infty$ and assume that \begin{equation}\label{eq:eq23} \| g\|_{L^{p_0}(w\mu)}\lesssim \|f\|_{L^{p_0}(w\mu)} \end{equation} for all $w \in A_{p_0}(\mu)$. Then the same inequality holds for all $p \in (1,\infty)$ and $w \in A_p(\mu)$. Using this and Lemma \ref{lem:lem5} we obtain Proposition \ref{prop:vecvalmax} via the following standard argument. \begin{proof}[Proof of Proposition \ref{prop:vecvalmax}] From Lemma \ref{lem:lem5} we directly have that $$ \Big\| \Big( \sum_i (M^\mu f^i_j)^s \Big)^{\frac 1s} \Big\|_{L^s(w\mu)} \lesssim \Big\| \Big( \sum_i |f^i_j|^s \Big)^{\frac 1s} \Big\|_{L^s(w\mu)} , \quad w \in A_s(\mu). $$ Then, the extrapolation described around \eqref{eq:eq23} gives that $$ \Big\| \Big( \sum_i (M^\mu f^i_j)^s \Big)^{\frac 1s} \Big\|_{L^t(w\mu)} \lesssim \Big\| \Big( \sum_i |f^i_j|^s \Big)^{\frac 1s} \Big\|_{L^t(w\mu)} , \quad w \in A_t(\mu). $$ This in turn gives $$ \Big\| \Big( \sum_j \Big( \sum_i (M^\mu f^i_j)^s \Big)^{\frac ts}\Big)^{\frac 1t} \Big\|_{L^t(w\mu)} \lesssim \Big\|\Big( \sum_j \Big( \sum_i |f^i_j|^s \Big)^{\frac ts}\Big)^{\frac 1t} \Big\|_{L^t(w\mu)} , \quad w \in A_t(\mu). $$ Extrapolating once more concludes the proof.
\end{proof} \section{The multi-parameter case}\label{sec:multi} One can approach the multi-parameter case as follows. \begin{enumerate} \item What is the definition of an SIO/CZO? The important base case is the linear multi-parameter definition given in \cite{Ou}. That can be straightforwardly extended to the multilinear situation as in Section \ref{sec:SIOs}. The definition becomes extremely lengthy due to the large number of different partial kernel representations, and for this reason we do not write it down explicitly. However, there is no complication in combining the linear multi-parameter definition \cite{Ou} and our multilinear bi-parameter definition. We mention that another way to define the operators would be to adapt a Journ\'e \cite{Jo} style definition -- this kind of vector-valued definition would be shorter to state. In this paper we do not use the Journ\'e style formulation. However, for the equivalence of the Journ\'e style definitions and the style we use here see \cite{Grau, LMV, Ou}. \item Is there a representation theorem in this generality? Yes -- see Remark \ref{rem:mrem}. The linear multi-parameter representation theorem is proved in \cite{Ou}. The multilinear representation theorems \cite{AMV, LMV} are stated only in the bi-parameter setting for convenience. However, using the multilinear methods from \cite{AMV, LMV} the multi-parameter theorem \cite{Ou} can easily be generalised to the multilinear setting. \item What do the model operators look like? Studying the above presented bi-parameter model operators, one realises that the philosophies in each parameter are independent of each other -- for example, if one has a shift type of philosophy in a given parameter, one needs at least two cancellative Haar functions in that parameter. With this logic it is clear how to define the $m$-parameter analogues just by working parameter by parameter.
Alternatively, one can take all possible $m$-fold tensor products of one-parameter $n$-linear model operators, and then just replace the appearing product coefficients by general coefficients. This yields the form of the model operators. We demonstrate this with an example of a bilinear tri-parameter partial paraproduct. Tri-parameter partial paraproducts have the shift structure in one or two of the parameters. In the remaining parameters there is a paraproduct structure. The following is an example of a partial paraproduct with the shift structure in the first parameter and the paraproduct structure in the second and third parameters: \begin{equation*} \begin{split} &\sum_{K=K^1 \times K^2 \times K^3 \in \calD} \sum_{\substack{I_1^1, I^1_2, I_{3}^1 \in \calD^1 \\ (I^1_j)^{(k_j)}=K^1}} \Big[a_{K,(I^1_j)} \\ &\hspace{1cm}\Big\langle f_1, h_{I_1^1} \otimes \frac{1_{K^2}}{|K^2|} \otimes \frac{1_{K^3}}{|K^3|}\Big\rangle \Big\langle f_2, h_{I^1_2}^0 \otimes \frac{1_{K^2}}{|K^2|} \otimes h_{K^3} \Big\rangle \Big\langle f_3, h_{I^1_3} \otimes h_{K^2} \otimes \frac{1_{K^3}}{|K^3|} \Big\rangle\Big]. \end{split} \end{equation*} Here $\calD= \calD^1 \times \calD^2 \times \calD^3$. The assumption on the coefficients is that when $K^1$, $I^1_1$, $I^1_2$ and $I^1_3$ are fixed, then $$ \| (a_{K,(I^1_j)})_{K^2 \times K^3 \in \calD^2 \times \calD^3} \|_{\BMO_{{\rm{prod}}}} \le \frac{|I^1_1|^{ \frac 12}|I^1_2|^{\frac12} |I^1_3|^{\frac12}}{|K^1|^2}. $$ In the shift parameter there are at least two cancellative Haar functions, and in the paraproduct parameters there is exactly one cancellative Haar function, the remaining functions being normalised indicators. Thus, this is a generalization of $S \otimes \pi \otimes \pi$, where $S$ is a one-parameter shift and $\pi$ is a one-parameter paraproduct -- and all model operators arise like this.
\item Finally, is it more difficult to show the genuinely multilinear weighted estimates for the $m$-parameter, $m \ge 3$, model operators compared to the bi-parameter model operators? When it comes to shifts and full paraproducts, there is no essential difference -- their boundedness always reduces to Theorem \ref{thm:thm3}, which has an obvious $m$-parameter version. With our current proof, the answer for the partial paraproducts is more complicated. Thus, we will elaborate on how to prove the weighted estimates for $m$-parameter partial paraproducts. Notice that previously e.g. in \cite{LMV} we could only handle bi-parameter partial paraproducts as our proof exploited the one-parameter nature of the paraproducts by sparse domination. Here we have already dispensed with sparse domination, but the proof is still complicated and leads to some new philosophies in higher parameters. \end{enumerate} We now discuss how to prove a tri-parameter analogue of Theorem \ref{thm:thm4}. We can have a partial paraproduct with a bi-parameter paraproduct component and a one-parameter shift component, or the other way around. Regardless of the form, the initial stages of the proof of Theorem \ref{thm:thm4} can be used to reduce to estimating the weighted $L^2$ norms of certain functions which are analogous to \eqref{eq:eq24} and \eqref{eq:eq25}. Most of these norms can be estimated with similar steps as in the bi-parameter case. However, also a new type of variant appears. An example of such a variant is given by \begin{equation}\label{eq:eq27} F_{j,K^1}=1_{K^1} \sum_{(L^1_j)^{(l_j)}=K^1} \frac{|L^1_j|^{\frac 12 }}{|K^1|}\Big( \sum_{K^2} \frac{1_{K^2}}{|K^2|} M_{\calD^3}^{\langle \sigma_j \rangle_{K^{1,2}}}\big(\langle f_j, h_{L^1_j} \otimes h_{K^2} \rangle \langle \sigma_j \rangle_{K^{1,2}}^{-1}\big)^2\Big)^{\frac 12}, \end{equation} where $f_j \colon \mathbb{R}^{d_1} \times \mathbb{R}^{d_2} \times \mathbb{R}^{d_3} \to \mathbb{C}$.
The goal is to estimate $\sum_{K^1} \| F_{j,K^1} \|_{L^2(\sigma_j)}^2$. Here we denote $K^{1,2} = K^1 \times K^2$, the original tri-parameter rectangle being $K = K^1 \times K^2 \times K^3$, and for brevity we only write $\langle \sigma_j \rangle_{K^{1,2}}$ instead of $ \langle \sigma_j \rangle_{K^{1,2}, 1, 2}$. Comparing with \eqref{eq:eq25}, the key difference is that in \eqref{eq:eq25} the measure of the maximal function depended only on $K^1$. Here, it depends also on $K^2$, and therefore we have maximal functions with respect to different measures inside the norms. We will use the following lemma and a new type of extrapolation trick to overcome this. \begin{lem}\label{lem:lem8} Let $\mu \in A_\infty(\mathbb{R}^{d_1} \times \mathbb{R}^{d_2})$ be a bi-parameter weight. Let $\calD=\calD^1\times \calD^2$ be a grid of bi-parameter dyadic rectangles in $\mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$. Suppose that for each $m \in \mathbb{Z}$ and $K^1 \in \calD^1$ we have a function $f_{m,K^1} \colon \mathbb{R}^{d_2} \to \mathbb{C}$. Then, for all $p,s,t \in (1, \infty)$, the estimate $$ \Big\| \Big( \sum_{m } \Big( \sum_{K^1} 1_{K^1} M_{\calD^2}^{\langle \mu \rangle_{K^1}} (f_{m,K^1})^t \Big)^{\frac st} \Big)^{\frac 1s} \Big\|_{L^p(w\mu)} \lesssim \Big\| \Big( \sum_{m } \Big( \sum_{K^1} 1_{K^1} |f_{m,K^1}|^t \Big)^{\frac st} \Big)^{\frac 1s} \Big\|_{L^p(w\mu)} $$ holds for all $w \in A_p(\mu)$. \end{lem} \begin{proof} By extrapolation, see the discussion around \eqref{eq:eq23}, it suffices to take a function $f \colon \mathbb{R}^{d_2} \to \mathbb{C}$ and show that $$ \big\| 1_{K^1} M_{\calD^2}^{\langle \mu \rangle_{K^1}} f \big\|_{L^q(w\mu)} \lesssim \| 1_{K^1} f \|_{L^q(w\mu)} $$ for some $q \in (1, \infty)$ and for all $w \in A_q(\mu)$. We fix some $w \in A_{q}(\mu)$.
The above estimate can be rewritten as $$ \big\| M_{\calD^2}^{\langle \mu \rangle_{K^1}} f \big\|_{L^q(\langle w\mu\rangle_{K^1})}|K^1|^{\frac 1q} \lesssim \| f \|_{L^q(\langle w\mu\rangle_{K^1})}|K^1|^{\frac 1q}. $$ We have the identity \begin{equation}\label{eq:eq22} \langle w\mu\rangle_{K^1}(x_2) = \frac{\langle w\mu\rangle_{K^1}(x_2)}{\langle \mu \rangle_{K^1}(x_2)}\langle \mu \rangle_{K^1}(x_2) = \langle w(\cdot,x_2) \rangle_{K^1}^{\mu(\cdot, x_2)}\langle \mu \rangle_{K^1}(x_2). \end{equation} Define $v(x_2) =\langle w(\cdot,x_2) \rangle_{K^1}^{\mu(\cdot, x_2)}$. We show that $v \in A_q(\langle \mu \rangle_{K^1})$. Let $I^2$ be a cube in $\mathbb{R}^{d_2}$. First, we have that $$ \int_{I^2} v \langle \mu \rangle_{K^1} = \int_{I^2} \langle w(\cdot,x_2) \rangle_{K^1}^{\mu(\cdot, x_2)}\langle \mu(\cdot,x_2) \rangle_{K^1} \ud x_2 =\int_{K^1 \times I^2}w \mu |K^1|^{-1}. $$ Therefore, $ \langle v \rangle_{I^2}^{\langle \mu \rangle_{K^1}} =\langle w \rangle_{K^1 \times I^2 }^\mu. $ H\"older's inequality gives that $$ \big(\langle w(\cdot,x_2) \rangle_{K^1}^{\mu(\cdot, x_2)}\big)^{-\frac{1}{q-1}} \le \big\langle w(\cdot,x_2)^{-\frac{1}{q-1}} \big \rangle_{K^1}^{\mu(\cdot, x_2)}, $$ which shows that $$ \int_{I^2} v^{-\frac{1}{q-1}} \langle \mu \rangle_{K^1} \le \int_{I^2}\big\langle w(\cdot,x_2)^{-\frac{1}{q-1}} \big \rangle_{K^1}^{\mu(\cdot, x_2)} \langle \mu(\cdot, x_2) \rangle_{K^1} \ud x_2 =\int_{K^1\times I^2} w^{-\frac{1}{q-1}} \mu |K^1|^{-1}. $$ Thus, we have that $$ \big(\langle v^{-\frac{1}{q-1}} \rangle_{I^2}^{\langle \mu \rangle_{K^1}}\big)^{q-1} \le \big(\langle w^{-\frac{1}{q-1}} \rangle_{K^1 \times I^2}^{\mu}\big)^{q-1}. $$ These estimates yield that $[v]_{A_q(\langle \mu \rangle_{K^1})} \le [w]_{A_q(\mu)}$. Recall the identity \eqref{eq:eq22}. 
Since $\langle \mu \rangle_{K^1} \in A_\infty$, we have that \begin{equation*} \big\| M_{\calD^2}^{\langle \mu \rangle_{K^1}} f \big\|_{L^q(\langle w\mu\rangle_{K^1})} =\big\| M_{\calD^2}^{\langle \mu \rangle_{K^1}} f \big\|_{L^q(v\langle \mu\rangle_{K^1})} \lesssim \| f \|_{L^q(v\langle \mu\rangle_{K^1})} =\| f \|_{L^q(\langle w\mu\rangle_{K^1})}, \end{equation*} where we used Lemma \ref{lem:lem5}. This concludes the proof. \end{proof} We now show how to estimate \eqref{eq:eq27}. First, we have that $\|F_{j,K^1}\|_{L^2(\sigma_j)}^2$ is less than $ 2^{\frac{2l^1_jd_1}{s'}}$ multiplied by \begin{equation*} \bigg\| \bigg[\sum_{(L^1_j)^{(l_j)}=K^1} \bigg[\frac{|L^1_j|^{\frac 12 }}{|K^1|}\Big( \sum_{K^2} \frac{1_{K^2}}{|K^2|} M_{\calD^3}^{\langle \sigma_j \rangle_{K^{1,2}}} \big(\langle f_j, h_{L^1_j} \otimes h_{K^2} \rangle \langle \sigma_j \rangle_{K^{1,2}}^{-1}\big)^2\Big)^{\frac 12}\bigg]^{s} \bigg]^{\frac 1s} \bigg \|_{L^2(\langle\sigma_j\rangle_{K^1})}^2|K^1|. \end{equation*} The exponent $s \in (1, \infty)$ is chosen small enough so that we get a suitable dependence on the complexity through $2^{\frac{2l^1_jd_1}{s'}}$, see the corresponding step in the bi-parameter case. Since $\langle \sigma_j \rangle_{K^1} \in A_\infty(\mathbb{R}^{d_2} \times \mathbb{R}^{d_3})$, we can use Lemma \ref{lem:lem8} to deduce that the last term is dominated by $$ \bigg\| \bigg[\sum_{(L^1_j)^{(l_j)}=K^1} \bigg[\frac{|L^1_j|^{\frac 12 }}{|K^1|}\Big( \sum_{K^2} \frac{1_{K^2}}{|K^2|} \big|\langle f_j, h_{L^1_j} \otimes h_{K^2} \rangle \langle \sigma_j \rangle_{K^{1,2}}^{-1}\big|^2\Big)^{\frac 12}\bigg]^{s} \bigg]^{\frac 1s} \bigg \|_{L^2(\langle\sigma_j\rangle_{K^1})}^2|K^1|. $$ After these key steps it only remains to use Proposition \ref{prop:prop3} twice in a very similar way as in the bi-parameter proof. We are done.
\section{Applications} \subsection{Mixed-norm estimates} With our main result, Theorem \ref{thm:intro1}, and extrapolation, Theorem \ref{thm:ext}, the following result becomes immediate. \begin{thm} Let $T$ be an $n$-linear $m$-parameter Calder\'on--Zygmund operator. Let $1<p_i^j \le \infty$, $i = 1, \ldots, n$, with $\frac {1}{p^j}= \sum_i \frac{1}{p_i^j} >0$, $j = 1, \ldots, m$. Then we have that \[ \| T(f_1,\ldots, f_n)\|_{L^{p^1} \cdots L^{p^m}}\lesssim \prod_{i=1}^n \| f_i\|_{L^{p^1_i} \cdots L^{p^m_i}}. \] \end{thm} \begin{rem} We understand this as an a priori estimate with $f_i\in L_c^\infty$ -- this is only a concern when some $p_i^j$ is $\infty$. In \cite{LMMOV}, which concerned the bilinear bi-parameter case with \emph{tensor} form CZOs, we went to great lengths to check that this restriction can always be removed. We do not want to get into such considerations here, and prefer this a priori interpretation at least when $n \ge 3$. See also \cite{LMV} for some previous results for bilinear bi-parameter CZOs that are not of tensor form, but where, compared to \cite{LMMOV}, the range of exponents had some limitations in the $\infty$ cases. See also \cite{DO}. We also mention that mixed-norm estimates for multilinear bi-parameter Coifman--Meyer operators have been previously obtained in \cite{BM1} and \cite{BM2}. Related to this, bi-parameter mixed-norm Leibniz rules were proved in \cite{OW}. \end{rem} The proof is immediate by extrapolating with tensor form weights. For the general idea see \cite[Theorem 4.5]{LMMOV} -- here the major simplification is that everything can be done with extrapolation and the operator-valued analysis is not needed. This is because the weighted estimate, Theorem \ref{thm:intro1}, is now with the genuinely multilinear weights unlike in \cite{LMMOV, LMV}. \subsection{Commutators} We will state these applications in the bi-parameter case $\mathbb{R}^d = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$.
The $m$-parameter versions are obvious. We define $$ [b, T]_k(f_1,\ldots, f_n) := bT(f_1, \ldots, f_n) - T(f_1, \ldots, f_{k-1}, bf_k, f_{k+1}, \ldots, f_n). $$ One can also define the iterated commutators as usual. We say that $b \in \operatorname{bmo}$ if $$ \|b\|_{\operatorname{bmo}} = \sup_R \frac{1}{|R|} \int_R |b-\ave{b}_R| < \infty, $$ where the supremum is over rectangles. Recall that given $b \in \operatorname{bmo}$, we have \begin{equation}\label{e:litbmo} \|b\|_{\operatorname{bmo}}\sim \max\big(\esssup_{x_1\in \mathbb{R}^{d_1}} \|b(x_1,\cdot)\|_{\BMO(\mathbb{R}^{d_2})}, \esssup_{x_2\in \mathbb{R}^{d_2}} \|b(\cdot,x_2)\|_{\BMO(\mathbb{R}^{d_1})}\big). \end{equation} See e.g. \cite{HPW}. In the one-parameter case the following was proved in \cite[Lemma 5.6]{DHL}. \begin{prop}\label{prop:dhl} Let $\vec p=(p_1,\dots, p_n)$ with $1<p_1,\ldots, p_n<\infty$ and $\frac 1p=\sum_i \frac 1{p_i} <1$. Let $\vec w = (w_1,\dots, w_n)\in A_{\vec p}$. Then for any $1\le j\le n$ we have \[ \vec w_{b,z}:=(w_1,\dots, w_je^{\mathbb{R}e(bz)},\dots, w_n)\in A_{\vec p} \]with $[\vec w_{b,z}]_{A_{\vec p}}\lesssim [\vec w]_{A_{\vec p}}$ provided that $$ |z|\le \frac{\epsilon}{\max([w^p]_{A_\infty}, \max_i [w_i^{-p_i'}]_{A_\infty}) \|b\|_{\BMO}}, $$ where $\epsilon$ depends on $\vec p$ and the dimension of the underlying space. \end{prop} If $1< p_1, \dots, p_n < \infty$, then $[\vec w]_{A_{\vec p}(\mathbb{R}^{d})} < \infty$ if and only if \[ \max \big(\esssup_{x_1\in \mathbb{R}^{d_1}}[\vec w(x_1,\cdot)]_{A_{\vec p}(\mathbb{R}^{d_2})}, \esssup_{x_2\in \mathbb{R}^{d_2}}[\vec w(\cdot,x_2)]_{A_{\vec p}(\mathbb{R}^{d_1})}\big) < \infty. \] Moreover, we have that the above maximum satisfies $\max (\cdot, \cdot) \le [\vec w]_{A_{\vec p}(\mathbb{R}^{d})}\lesssim \max (\cdot, \cdot)^{\gamma}$ where $\gamma$ is allowed to depend on $\vec p$ and $d$. The first estimate follows from the Lebesgue differentiation theorem.
The second estimate can be proved by using Lemma \ref{lem:lem1} and the corresponding linear statement, see \eqref{eq:eq28}. Using this, \eqref{e:litbmo} and Proposition \ref{prop:dhl} gives a bi-parameter version of Proposition \ref{prop:dhl} -- the statement is obtained by replacing $\BMO$ with $\operatorname{bmo}$, and the quantitative estimate is of the form $[w_{b,z}]_{A_{\vec p}} \lesssim [ \vec w ]_{A_{\vec p}}^\gamma$. Now, we have everything ready to prove the following commutator estimate. \begin{thm} Suppose $T$ is an $n$-linear bi-parameter CZO in $\mathbb{R}^d = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$, $1 < p_1, \ldots, p_n \le \infty$ and $1/p = \sum_i 1/p_i> 0$. Suppose also that $b \in \operatorname{bmo}$. Then for all $1\le k\le n$ we have the commutator estimate $$ \| [b, T]_k(f_1,\dots, f_n) w\|_{L^p} \lesssim \|b\|_{\operatorname{bmo}} \prod_{i=1}^n \|f_iw_i\|_{L^{p_i}} $$ for all $n$-linear bi-parameter weights $\vec w = (w_1, \ldots, w_n) \in A_{\vec p}$. Analogous results hold for iterated commutators. \end{thm} \begin{proof} We assume $\|b\|_{\operatorname{bmo}} = 1$. It suffices to study $[b, T]_1$, and in fact we shall prove the following principle. Once we have \[ \| T(f_1,\dots, f_n) w\|_{L^p}\lesssim \prod_{i=1}^n \|f_iw_i\|_{L^{p_i}} \] for some $\vec p$ in the Banach range, then \[ \| [b,T]_1(f_1,\dots, f_n) w\|_{L^p}\lesssim \prod_{i=1}^n \|f_iw_i\|_{L^{p_i}}. \] In this principle the form of the $n$-linear operator plays no role ($T$ does not need to be a CZO). The iterated cases follow immediately from this principle and the full range then follows from extrapolation. Define \[ T_z^1(f_1, \dots, f_n)=e^{zb} T(e^{-zb}f_1, f_2,\dots, f_n). \] Then, by the Cauchy integral formula, we get for nice functions $f_1,\dots, f_n$, that \[ [b, T]_1(f_1,\dots, f_n)=\frac{\ud}{\ud z}T_z^1(f_1,\dots, f_n)\Big|_{z=0} =\frac {1}{2\pi i} \int_{|z|=\delta} \frac{T_z^1(f_1,\dots, f_n)}{z^2}\ud z,\quad \delta>0.
\] Since $p\ge 1$, by Minkowski's inequality \[ \| [b, T]_1(f_1,\dots, f_n)w\|_{L^p}\le \frac 1{2\pi \delta^2}\int_{|z|=\delta} \|T_z^1(f_1,\dots, f_n) w\|_{L^p}|\ud z|. \] We choose $$ \delta \sim \frac{1}{\max([w^p]_{A_\infty}, \max_i [w_i^{-p_i'}]_{A_\infty}) }. $$ This allows us to use the bi-parameter version of Proposition \ref{prop:dhl} to conclude that \begin{align*} \|T_z^1(f_1,\dots, f_n) w\|_{L^p} &=\|T(e^{-zb}f_1,f_2,\dots, f_n) we^{\mathbb{R}e(bz)}\|_{L^p}\\ &\lesssim \| e^{-zb} f_1w_1e^{\mathbb{R}e(bz)}\|_{L^{p_1}}\prod_{i=2}^n \|f_iw_i\|_{L^{p_i}} = \prod_{i=1}^n \|f_iw_i\|_{L^{p_i}}. \end{align*} The claim follows. \end{proof} \end{document}
\begin{document} \title{The Pricing of Quanto Options \\ \quad \\ \large An empirical copula approach } \author{Rafael Felipe Carmargo Prudencio\footnote{[email protected], Department of Applied Mathematics, University of S\~ao Paulo (USP), Brazil.} \\ and Christian D.\ J\"akel\footnote{[email protected], Department of Applied Mathematics, University of S\~ao Paulo (USP), Brazil.} } \maketitle \begin{abstract} The quanto option is a cross-currency derivative in which the pay-off is given in foreign currency and then converted to domestic currency through a constant exchange rate, determined at contract inception. Hence, the dependence relation between the option underlying asset price and the exchange rate plays an important role in quanto option pricing. In this work, we suggest using \emph{empirical} copulas to price quanto options. Numerical illustrations show that the flexibility provided by this approach, concerning the dependence relation of the two underlying stochastic processes, results in non-negligible pricing differences when contrasted to other models. \end{abstract} \tableofcontents \section{Introduction} The \emph{quanto option} is a cross-currency contract. Its payoff is defined with respect to an underlying asset or index in one currency, but it is paid out in another currency, converted at a constant exchange rate established at contract inception. Hence, modelling the dependence relation between the underlying asset and the exchange rate (both of which are market observable variables) is essential for quanto option pricing. In this work, we propose a new approach, based on empirical copulas, to price quanto options. We compare this approach with what is hereafter named the practitioners' model (based on the Black-Scholes framework) and with the Dimitroff-Szimayer-Wagner (DSW) framework \cite{DSW}.
Without loss of generality, only call options are analysed, with the dividend yield of the underlying asset set to zero. The practitioners' approach is based on the assumptions that ``asset prices follow a geometric Brownian motion'' and ``volatility is constant''. Stochastic volatility models, such as the one proposed in \cite{DSW}, relax the ``volatility is constant'' assumption. In the quanto option context, the dependences among the relevant variables can considerably impact the pricing. Both the practitioners' approach and the DSW model \cite{DSW} use a constant correlation in order to address this issue. However, financial quantities (including the underlying asset and the exchange rate) can be related in a non-linear way (see, \emph{e.g.}, Teng et al.~\cite{TEG}). Hence a simple constant correlation cannot \emph{fully} represent the dependence relation between the relevant variables. The copula framework we propose offers a more flexible way to specify the dependence relation between the market variables used in the pricing of quanto options. Moreover, the empirical copula model (just like the DSW model) can adapt to a non-constant volatility smile. Before we start our discussion, we would like to note that we are aware of the shortcomings of our approach: it is computationally expensive and does not offer analytical tractability. \section{The quanto process} A \emph{quanto call option} is a financial instrument that gives the holder the right, but not the obligation, to buy an \emph{underlying asset} $S_f$, \emph{quoted in a foreign currency} (FOR), at a predetermined price $K$ (given in units of FOR currency), at maturity time $T$. The payoff amount, if positive, is \emph{converted to the domestic currency} (DOM) at an exchange rate $q ( \equiv \frac{DOM}{FOR})$. The latter is predetermined at the contract inception.
Hence, the payoff, at maturity time $T$, is \begin{equation} \label{(1)} C_q ( T ) = \max \bigl\{ q(S_f (T ) - K), 0 \bigr\} \; . \end{equation} \subsection{The practitioners approach} From the risk-neutral pricing formula it follows that the price $c_q$ of a quanto call option at time $t = 0$ is \begin{equation} \label{(2)} c_q(0) = {\rm e}^{ - rT} \; \mathbb{E}_{\mathbb{Q}} \Bigl[ \max \{ q ( S_f (T) - K ) , 0 \} \Bigr] \; , \end{equation} where $\mathbb{Q}$ is the \emph{domestic risk-neutral measure} and $\mathbb{E}_\mathbb{Q}$ denotes the associated expectation value. We now derive the stochastic differential equation for $S_f ( t )$ under $\mathbb{Q}$. We assume that, under the domestic risk-neutral measure, \begin{equation} \label{(3)} {\rm d} S_f (t) = \mu_{S_f} S_f (t) \, {\rm d} t +\sqrt{ V_1} \, S_f (t) \, {\rm d} W_{\mathbb{Q}_1} (t) \; , \end{equation} where $\mu_{S_f}$ is the (unknown) \emph{drift} of $S_f (t)$ and $W_{\mathbb{Q}_1}$ represents a Brownian motion. The volatility is denoted by $\sqrt{ V_1}$. The stochastic process of the exchange rate $Q(t) ( \equiv \frac{DOM}{FOR} )$ under the domestic risk-neutral measure is \begin{equation} \label{(4)} {\rm d} Q(t) = Q(t) \Bigl[ (r - r_f) {\rm d} t + \sqrt{V_2} \; {\rm d} W_{\mathbb{Q}_2} (t) \Bigr] \; , \end{equation} with \begin{equation} \label{(5)} W_{\mathbb{Q}_2} (t) = \rho(S_f, Q) W_{\mathbb{Q}_1} (t) + \sqrt{1 - \rho(S_f, Q)^2} \, \; W_{\mathbb{Q}_3} (t) \; , \end{equation} a second Brownian motion, correlated with the Brownian motion $W_{\mathbb{Q}_1}$. On the other hand, $W_{\mathbb{Q}_3}$ is a Brownian motion which is independent of $W_{\mathbb{Q}_1}$. As can be read off from \eqref{(5)}, the \emph{infinitesimal correlation between the increments of $S_f$ and $Q$} is denoted by $\rho(S_f, Q) $. In order to derive the drift $\mu_{S_f}$, we express $S_f(t)$ in the domestic currency: we multiply $S_f (t)$ by $Q(t)$, setting \[ S_d (t) \doteq Q(t) S_f (t) \; .
\] From It\^o's product rule it now follows that \begin{align*} {\rm d} \bigl( S_d (t) \bigr) & = Q(t) S_f (t) \left[ \; \mu_{S_f} +r - r_f + \rho(S_f, Q) \sqrt{V_1 V_2} \; \right] {\rm d} t \\ & \qquad \qquad \qquad \qquad \qquad + Q(t) S_f (t) \left[ \sqrt{V_1} \, {\rm d} W_{\mathbb{Q}_1} (t) + \sqrt{V_2} \, {\rm d} W_{\mathbb{Q}_2} (t) \right] \; . \end{align*} Under the domestic risk-neutral measure, the drift of $S_d(t)$ is equal to $r$. Thus, it follows that \begin{align*} \mu_{S_f} & = r - \left[ \, r - r_f + \rho(S_f, Q) \sqrt{V_1 V_2} \; \right] \; . \end{align*} Inserting this expression into \eqref{(3)}, we find \[ {\rm d} S_f (t) = \Bigl( r - \left[ r - r_f + \rho(S_f, Q) \sqrt{V_1 V_2} \; \right] \Bigr) S_f (t) \, {\rm d} t + \sqrt{V_1} \, S_f (t) \, {\rm d} W_{\mathbb{Q}_1}(t) \; . \] Since we know the dynamics of $S_f (t)$, we are now able to compute the expectation \eqref{(2)}. In fact, the diffusion of $S_f$ is of the same form as the diffusion process for a dividend paying stock, with dividend rate \[ r - r_f + \rho ( S_f, Q) \sqrt{V_1 V_2} \; . \] Hence, the computation of expectation \eqref{(2)} gives the price of the vanilla call option on a dividend-paying stock: \[ c^q (0) = q \cdot BS \left( S_f (0) {\rm e}^{ - ( r - r_f + \rho(S_f, Q) \sqrt{V_1 V_2}) T }, K , \sqrt{V_1} , T , r \right) \; . \] Here $BS (a, b, c, d, e)$ stands for the traditional Black-Scholes formula, with $a$ the underlying asset spot price, $b$ the strike value, $c$ the volatility, $d$ the time to maturity, and $e$ the risk-free interest rate. The final step in the practitioners approach is to replace the constant volatilities, $\sqrt{V_1}$ and $\sqrt{V_2}$, by \emph{at the money} or \emph{at the strike} values: \begin{equation} \label{(6)} c^q_p (0) = q \cdot BS \left( S_f (0) {\rm e}^{ - T \left( r - r_f + \rho(S_f, Q) \sqrt{V_{1}^{atm} V_{2}^{atm}} \,\right) }, K, \sqrt{V_{1}^{strike}} , T , r \right) \; .
\end{equation} Equation \eqref{(6)} is the $V^d_{black}$ approximation from Le Floc'h \cite{F}; in fact, it is one of the three approximations studied within \cite{F}. Note that $V^{atm}_i$, $i=1,2$, in $\rho(S_f, Q) \sqrt{V^{atm}_1 V^{atm}_2}$, must be the at-the-money value (not the at the strike value $V^{strike}_i$, $i=1,2$), as otherwise the price of the quanto forward contract would depend on the option strike (an exogenous factor). \subsection{The Dimitroff-Szimayer-Wagner (DSW) framework} \label{sec:2.2} The DSW approach consists in the use of the following diffusion processes to simulate values of $S_f(t)$ (named $S(T)$ in their work) and $Q^{-1}(T)$ (named $C(T)$ in their work), in order to compute expectation \eqref{(7)} below and to obtain the quanto option price value: \begin{align*} \begin{pmatrix} {\rm d} S_f (t) \\ {\rm d} V_1(t) \\ {\rm d} Q^{-1}(t) \\ {\rm d} V_2(t) \end{pmatrix} & = \begin{pmatrix} ( r_f(t) - d(t) ) S_f (t) \\ \kappa_1 ( \overline{V_1} - V_1(t)) \\ ( r_f(t) - r(t)) Q^{-1}(t) \\ \kappa_2 ( \overline{V_2} - V_2(t)) \end{pmatrix} {\rm d}t \\ & \quad + \begin{pmatrix} \sqrt{V_1(t)} \, S_f (t) & 0 & 0 & 0 \\ 0 & \eta_1 \sqrt{V_1(t)} & 0 & 0 \\ 0 & 0 & \sqrt{V_2(t)} \, Q^{-1}(t) & 0 \\ 0 & 0 & 0 & \eta_2 \sqrt{V_2(t)} \end{pmatrix} \\ & \qquad \times \begin{pmatrix} 1 & 0 & 0 & 0 \\ \rho_1 & \sqrt{1- {\rho_1}^2} & 0 & 0 \\ \rho & 0 & \sqrt{1- \rho^2} & 0 \\ \rho \rho_2 & 0 & \rho_2 \sqrt{1- \rho^2} & \sqrt{1- {\rho_2}^2} \end{pmatrix} \begin{pmatrix} \overline{{\rm d}W_1} (t) \\ \overline{{\rm d}W_2} (t) \\ \overline{{\rm d}W_3} (t) \\ \overline{{\rm d}W_4} (t) \end{pmatrix} \end{align*} where $(S_f (t), V_1(t))$ models the stock price and its variance, and $(Q^{-1}(t), V_2 (t))$ the foreign exchange rate and its variance with correlation $\rho_1$ and $\rho_2$, respectively. The correlation between the Brownian motions of the $S_f(t)$ and $Q^{-1}(t)$ diffusions is denoted by $\rho \equiv \rho(S_f, Q^{-1})$. 
The domestic risk-free interest rate is denoted by $r(t)$, the foreign risk-free interest rate by $r_f(t)$, and the continuous dividend yield of the stock by $d(t)$. As the Heston model is one of the main building blocks of the DSW approach, the constants $\overline{V_i}$, $\kappa_i$ and $\eta_i$ have the traditional meaning, \emph{i.e.}, $\overline{V_i}$ is the long run variance, $\kappa_i$ is the rate at which $V_i(t)$ reverts to $\overline{V_i}$, and~$\eta_i$ determines the variance of the process $V_i(t)$, $i=1,2$. In addition, the initial variance $V_i(0)$ must be specified in order to get the full representation of the DSW approach in the risk-neutral format. Finally, the parameters in the equations above can be compiled in the Heston vectors of parameters $\varphi_{S_f}$ and $\varphi_{Q^{-1}}$, with \[ \varphi_{S_f} = \bigl( \rho_1, \kappa_1, \overline{V_1}, V_1(0), \eta_1 \bigr) \quad \text{and} \quad \varphi_{Q^{-1}} = \bigl( \rho_2, \kappa_2, \overline{V_2}, V_2(0), \eta_2 \bigr) \; . \] These Heston vectors of parameters are calibrated with market data in order to take into account the respective market volatility smiles. \subsection{Risk neutral pricing from a foreign investor's perspective} Our new framework (as well as the DSW model) bases the quanto option pricing on the diffusion processes of $S_f (t)$ and $Q^{-1} (t)$, $0 \le t \le T$, under the \emph{foreign risk-neutral measure} $\mathbb{Q}_f$. From a foreign investor's perspective, the payoff, given in FOR currency, is \[ C^f_q (T) = Q^{-1}(T) \max \bigl\{ q ( S_f(T) - K),0 \bigr\} \; . \] Here $Q^{-1} (t) ( \equiv \frac{FOR}{DOM})$ is the exchange rate quoted as foreign currency per unit of domestic currency. \goodbreak From the risk-neutral pricing formula, the option value (in FOR currency) is given by \[ c_q^f (0) = {\rm e}^{-r_f T} \, \mathbb{E}_{\mathbb{Q}_f} \; \Bigl[ Q^{- 1} (T) \max \bigl\{ q ( S_f(T) - K),0 \bigr\} \Bigr]\; .
\] A no-arbitrage argument can be used to value the option in DOM currency: \begin{equation} \label{(7)} c_q (0) = Q( 0 ) \, {\rm e}^{-r_f T} \mathbb{E}_{\mathbb{Q}_f} \Bigl[ Q^{-1}(T ) \max \bigl\{ q(S_f (T ) - K ), 0 \bigr\} \Bigr] \; . \end{equation} Equation \eqref{(7)} sets a starting point for quanto option pricing. \section{The quanto option pricing under the empirical copula approach} A variety of methodologies can be used in order to compute the expectation in \eqref{(7)}. We would like to make the pricing of quanto options as adaptable as possible to the dependence relation between $S_f(T)$ and $Q^{-1}(T)$. At the same time, our approach is capable of adapting to the market volatility smiles. The expectation in equation \eqref{(7)} involves two random variables, namely $S_f (T )$ and $Q^{-1}(T)$; hence, one approach to solve it is to estimate the bi-variate \emph{cumulative distribution function} (CDF) of these random variables, under the probability measure $\mathbb{Q}_f$, and to compute the expectation based on simulations of this CDF. We denote the CDF by $H(s_f (T), q^{-1}(T))$ in this text, where $s_f (T)$ and $q^{-1}(T)$ are the possible outcomes of the random variables $S_f (T )$ and $Q^{-1}(T)$, respectively. The main ingredient in our analysis is Sklar's Theorem. It ensures the existence of a \emph{copula}, \emph{i.e.}, a function $C \colon [0,1]^d \to \mathbb{R}^+$ with the following properties~\cite{FS}: \begin{itemize} \item[$i.)$] if at least one coordinate $u_j$ equals $0$, then $C(u) = 0$; \item[$ii.)$] $C$ is $d$-increasing, \emph{i.e.}, for every $a = (a_1, \ldots, a_d)$ and $b= (b_1, \ldots, b_d)$ in $[0,1]^d$ such that $a_i \le b_i$, $i=1, \ldots, d$, the $C$-volume $V_C([a,b])$ of the box $[a,b] = [a_1, b_1] \times \cdots \times [a_d, b_d]$ is nonnegative; \item[$iii.)$] if $u_j =1$ for all $j \ne k$ for some fixed $k$, then $C(u) = u_k$. \end{itemize} We can now state Sklar's result.
\begin{theorem}[Sklar's Theorem] \label{th:1} Every multivariate cumulative distribution function (CDF), \[ H (x_1, \ldots , x_d) = P \bigl\{ X_1 \le x_1, \ldots , X_d \le x_d \bigr\} \; , \] can be expressed in terms of its marginals $F_i(x_i) = P \bigl\{X_i \le x_i \bigr\}$, $i = 1,\ldots, d$, and a copula $C$, such that \[ H(x_1, \ldots , x_d) = C \bigl( F_1(x_1), \ldots , F_d(x_d) \bigr) \; . \] \end{theorem} Using this result, the problem of estimating a bivariate distribution function $H(s_f (T), q^{-1}(T))$ can be divided into two independent problems: \begin{itemize} \item[$i.)$] Estimating the marginal distributions. The marginals are the market implied cumulative distribution functions of $S_f (T)$ and $Q^{-1}(T)$. We denote them by $F_{S_f (T)}$ and $F_{Q^{-1}(T)}$; and \item[$ii.)$] estimating a copula \[ C = C \Bigl( F_{S_f (T)} \bigl(s_f (T) \bigr) \, , \, F_{Q^{-1}(T)} \bigl( q^{-1}(T) \bigr) \Bigr) \; , \] which specifies the dependence relation between $S_f(T)$ and $Q^{-1}(T)$. The existence of such a copula is guaranteed by Sklar's Theorem. \end{itemize} \noindent It follows from point~$i.)$ that, as the market implied cumulative distribution functions are used, our model duly adapts to the observed volatility smile. We will address item $i.)$ in Section~\ref{sec:3.2} and item $ii.)$ in Section~\ref{sec:3.3}. \subsection{The marginals} \label{sec:3.2} In order to estimate the marginal distributions, the strategy adopted by DSW is to calibrate the parameters of a single Heston model on the market data of plain vanilla option prices, for both $S_f$ and $Q^{-1}$. The vectors of parameters for each Heston model are denoted by $\varphi_{S_f}$ and $\varphi_{Q^{-1}}$, for $S_f$ and $Q^{-1}$, respectively. We will simply take over this first step from DSW and consider it as part of our own approach.
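By way of illustration, terminal samples of $S_f(T)$, and from them a marginal CDF, can be generated with a full-truncation Euler discretization of a Heston diffusion. This is only a sketch (it is not the calibration procedure of \cite{DSW}); the parameter values, step counts, and helper names below are hypothetical, and \texttt{numpy} is assumed:

```python
import numpy as np

def heston_terminal(s0, r_f, kappa, vbar, v0, eta, rho, T,
                    n_steps=200, n_paths=20000, seed=1):
    """Simulate S_f(T) under the foreign risk-neutral measure with a
    full-truncation Euler scheme for a Heston diffusion (a sketch)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, float(s0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)  # full truncation: negative variance is clipped
        s *= np.exp((r_f - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa * (vbar - vp) * dt + eta * np.sqrt(vp * dt) * z2
    return s

def empirical_cdf(samples):
    """Marginal CDF built from the simulated terminal values."""
    xs = np.sort(samples)
    return lambda x: np.searchsorted(xs, x, side="right") / len(xs)

# hypothetical parameters, loosely in the spirit of Section 4
sT = heston_terminal(s0=2500, r_f=0.01, kappa=1.0, vbar=0.1,
                     v0=0.2, eta=0.5, rho=-0.7, T=3.0)
F_Sf = empirical_cdf(sT)
```

In the same way one obtains terminal samples and a marginal CDF for $Q^{-1}(T)$.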
However, for the purpose of illustration only, we will use \emph{hypothetical data} in Section~\ref{sec:4} and the parameters of the $\varphi_{S_f}$ and $\varphi_{Q^{-1}}$ vectors will be set directly, \emph{i.e.}, without a calibration to real market data. \subsection{The copula} \label{sec:3.3} According to Theorem~\ref{th:1}, an estimate for $H \bigl(s_f (T ), q^{-1}(T ) \bigr)$ can be provided once a copula $C$ linking the random variables $S_f (T)$ and $Q^{-1}(T)$ is identified. Our approach is to calibrate the copula $C$ using data provided by an expert. The data are represented by a $(N \times 2)$ matrix $\mathbb{A}$; the first column contains data of $S_f (T)$, and the second column contains data of $Q^{-1}(T)$. By $\mathbb{A}(n)$, $n = 1, \ldots , N$, we denote the $n$-th row of $\mathbb{A}$, where $N$ is the number of ordered pairs provided by the expert. In order to build a copula based on the matrix~$\mathbb{A}$, we make use of kernel estimators\footnote{We refer to \cite{Gramacki} for the theory of kernel density estimation.}, following the methodology proposed by Scaillet and Fermanian \cite[Section~3.1]{FS}. The role of the kernels is to smooth the data. In case there are sufficient data, the obtained bivariate CDF does not depend on the choice of a particular kernel estimator. Hence, we work with $d$-dimensional Gaussian kernel functions of the form \[ K ( x )= (2\pi)^{-\frac{d}{2}} \; {\rm e}^{- \frac{1}{2} x^T x} \; , \qquad x= (x_1, \ldots, x_d) \; . \] As one may expect, the probability density function related to our empirical CDF places more probability mass where there are more ordered pairs, and less probability mass where there are fewer ordered pairs.
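A minimal sketch of the resulting estimator (not the code used for the numerical illustrations; the two-point data matrix and the bandwidth below are hypothetical, and \texttt{numpy} is assumed): since the product Gaussian kernel integrates to a product of one-dimensional standard normal CDFs, the smoothed bivariate CDF is an average of such products.

```python
import numpy as np
from math import erf

def kernel_cdf(A, h):
    """Bivariate CDF estimate from an (N x 2) data matrix A: each ordered
    pair is smoothed by a product Gaussian kernel with bandwidth h;
    integrating the kernel yields a product of 1-d standard normal CDFs."""
    Phi = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / 2.0**0.5)))
    def F(s, q):
        return float(np.mean(Phi((s - A[:, 0]) / h) * Phi((q - A[:, 1]) / h)))
    return F

# hypothetical expert data: two ordered pairs
A = np.array([[0.0, 0.0], [1.0, 1.0]])
F_hat = kernel_cdf(A, h=0.1)
```

With more ordered pairs in $\mathbb{A}$ the estimate becomes correspondingly less sensitive to the bandwidth.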
The estimated bivariate cumulative distribution function (CDF) of the two dependent random variables $S_f$ and $Q^{-1}$, denoted by $\widehat{F}$, is given by \begin{align*} \widehat{F} (s_f,q^{-1}) & = \int_{- \infty}^{s_f} {\rm d} s \int_{- \infty}^{q^{-1}} {\rm d} r \; \; \widehat{f} ( s , r) \; , \end{align*} with $\widehat{f}( s , r) = \frac{1}{N h^2 }\sum_{n=1}^N K \left( \tfrac{( s , r) - \mathbb{A}(n)}{h} \right) $ the kernel estimator of $f(s,r)$, for a bandwidth $h > 0$. We are now able to define the copula which will allow us to compute the price of a quanto option. \begin{definition} A copula~$C$ is obtained by setting \begin{equation} \label{(8)} C(u_1, u_2) \equiv \widehat{F} \bigl( \xi_1(u_1), \xi_2(u_2) \bigr) \; , \end{equation} where $\xi_1 (u_1) = \inf \, \bigl\{ y_1 \mid \widehat{F}_{S_f} (y_1) \ge u_1 \bigr\}$ and $\xi_2 (u_2) = \inf \, \bigl\{ y_2 \mid \widehat{F}_{Q^{-1}} (y_2) \ge u_2 \bigr\}$. \end{definition} \begin{remark} One easily verifies that the greater the number $N$ of ordered pairs provided by the expert, the lower the impact of the choice of the kernel function $K$ and the bandwidth~$h$ on the copula estimation. \end{remark} We now state the relation between $\rho(S_f, Q)$ (the correlation between the infinitesimal increments of $S_f$ and $Q$) and $\rho(S_f, Q^{-1})$ (the correlation between the infinitesimal increments of $S_f$ and $Q^{-1}$). This information will be used in the numerical illustration section, in order to allow the three approaches to be compared, as the practitioners approach is based on the relation between $S_f$ and $Q$, while the DSW approach and our approach are based on the relation between $S_f$ and $Q^{-1}$. \begin{proposition} \label{prop:3.5} $\rho(S_f, Q) = - \rho(S_f, Q^{-1})$. \end{proposition} \begin{proof} Without loss of generality, only the stochastic terms shall be considered. From \eqref{(4)}, it follows that \[ {\rm d} Q(t) = Q(t) \sqrt{V_2} \, {\rm d} W_{ \mathbb{Q}_{2}}(t) \; .
\] The difference between the $Q(t)$ diffusion under the domestic and under the foreign risk-neutral measure lies in the drift term; the Brownian motion part remains unaltered. Thus, under the foreign risk-neutral measure, \[ {\rm d}Q(t) = Q(t) \sqrt{V_2} \, {\rm d} W_{ \mathbb{Q}_{f_2}} (t) \; , \] where $W_{ \mathbb{Q}_{f_2}}$ is a Brownian motion under that measure. We apply It\^o's Lemma to $Q^{-1}$. We find \[ {\rm d} Q^{-1}(t) = Q^{-1} (t) \left(- \sqrt{V_2} \right) \, {\rm d} W_{ \mathbb{Q}_{f_2}} (t). \] Inspecting equation \eqref{(5)}, we get, under the foreign risk-neutral measure, \[ {\rm d} Q^{-1}(t) = Q^{-1} (t) \sqrt{V_2} \left( - \rho (S_f, Q) {\rm d} W_{\mathbb{Q}_{f_1}} - \sqrt{ 1 - \rho^2 (S_f, Q)} \; {\rm d} W_{\mathbb{Q}_{f_3}} \right) \; , \] where $W_{\mathbb{Q}_{f_1}}$ and $W_{\mathbb{Q}_{f_3}}$ are independent Brownian motions. Under the foreign risk-neutral measure, the stochastic process $S_f$ satisfies \begin{align*} {\rm d} S_f (t) = r_f S_f (t) {\rm d} t + \sqrt{V_1(t)} \, S_f (t) {\rm d} W_{\mathbb{Q}_{f_1}} (t) \; . \end{align*} Hence, \begin{align*} \rho(S_f, Q^{-1}) & = \operatorname{Cor} \; \left[ {\rm d} W_{\mathbb{Q}_{f_1}}(t), \left( - \rho (S_f, Q) {\rm d} W_{\mathbb{Q}_{f_1}} - \sqrt{ 1 - \rho^2 (S_f, Q)} \; {\rm d} W_{\mathbb{Q}_{f_3}} \right) \right] \\ & = - \operatorname{Cor} \; \left[ {\rm d} W_{\mathbb{Q}_{f_1}}(t), \left( \rho (S_f, Q) {\rm d} W_{\mathbb{Q}_{f_1}} + \sqrt{ 1 - \rho^2 (S_f, Q)} \; {\rm d} W_{\mathbb{Q}_{f_3}} \right) \right] \; ; \end{align*} thus $ \rho(S_f, Q^{-1}) = - \rho(S_f, Q)$. \end{proof} \section{Numerical illustration} \label{sec:4} In order to analyse the pricing differences among the practitioners' framework, the DSW framework, and our approach based on empirical copulas, we proceed as follows: we fix the numerical values displayed in the following table. They are used in \emph{all} the cases we will discuss. \begin{table}[h!]
\begin{center} \vskip .1cm \label{tab:table1} \begin{tabular}{r|r|r|r|r|r} \textbf{correlation} & \textbf{initial} & \textbf{initial asset} & \textbf{domestic} & \textbf{foreign risk} & \textbf{constant}\\ $\rho(S_f, Q^{-1})$ & \textbf{exchange } & \textbf{value} $S_f(0)$ & \textbf{risk free} & \textbf{free interest} & \textbf{exchange}\\ & \textbf{rate} $Q(0)$ & & \textbf{interest} & \textbf{rate} $r_f$ & \textbf{rate} $q$ \\ & & & \textbf{rate} $r$ & & \\ \hline - 0.7 & 3.1 & 2500 & 0.1 & 0.01 & 3 \end{tabular} \end{center} \end{table} \goodbreak \noindent We will vary \begin{itemize} \item the Heston vector parameters $\varphi_{S_f}$ and $\varphi_{Q^{-1}}$; and \item the time to maturity~$T$. \end{itemize} We also set to zero the continuous dividend yield $d(t)$ from the DSW approach depicted in Section~\ref{sec:2.2}. These choices allow us to compute the prices of foreign vanilla call options on both the DOM currency and on $S_f$, and to derive the implied volatility smiles of these options: \begin{itemize} \item[$i.)$] We compute the quanto option prices in the practitioners' framework, using equation~\eqref{(6)} and $\rho(S_f, Q) = - \rho(S_f, Q^{-1})$; \item[$ii.)$] We evaluate the quanto option prices using the DSW framework outlined in Section~\ref{sec:2.2}; \item[$iii.)$] We compute the proposed quanto option prices in our new copula approach: \begin{itemize} \item We numerically derive the marginal cumulative distribution functions $F_{S_f} \bigl( s_f (T) \bigr)$ and $F_{Q^{-1}} \bigl( q^{-1}(T) \bigr)$, respectively, from the Heston model with parameters $\varphi_{S_f}$ and $\varphi_{Q^{-1}}$; \item We compute the copula $C(u_1, u_2)$ from the matrix $\mathbb{A}$ (provided by an external expert), using equation \eqref{(8)}.
Sampling from the copula $C(u_1, u_2)$, we obtain ordered pairs of quantiles $(v_1, v_2)$; \item The ordered pairs of quantiles $(v_1, v_2)$ are transformed into $S_f$ and $Q^{-1}$ outcomes, by setting \[ \Bigl( s_{f} (T) , q^{-1}(T) \Bigr) = \left( F^{-1}_{S_f} (v_1) , F_{Q^{-1}}^{-1} (v_2) \right) \; . \] \item For each obtained ordered pair $\bigl( s_f (T ), q^{-1}(T ) \bigr)$, equation \eqref{(7)} yields \[ C^q (0) = Q(0) {\rm e}^{- r_f T} q^{-1}( T ) \max \bigl\{ q (s_f (T ) - K ), 0 \bigr\} \; . \] \end{itemize} The \emph{average} of the numerous obtained values of $C^q(0)$ is the price we propose for the quanto option in the empirical copula framework. \end{itemize} We now discuss the outcome of these three procedures for different volatility smiles and different forms of dependence between $S_f$ and $Q^{-1}$, in a case-by-case analysis. \subsection{Case I: Gaussian copula, constant volatility} The matrix $\mathbb{A}$ is set such that the obtained copula $C$ is Gaussian with correlation $\rho(S_f, Q^{-1})$ and the parameters $\varphi_{S_f}$ and $\varphi_{Q^{-1}}$ are set to \[ \varphi_{S_f}=\varphi_{Q^{-1}} = (0, 0,0, 0.2, 0) \] (whence no volatility smile is present for either $S_f$ or $Q^{-1}$). The time to maturity is $T=3$. The DSW \cite{DSW} and the empirical copula approaches are capable of adapting to the imposed constant volatility smile, as these approaches can even adapt to non-constant volatility smiles. Since the $S_f$ and $Q^{-1}$ diffusions are correlated by a simple constant correlation $\rho(S_f,Q^{-1})$, the basic assumption of the practitioners' framework is satisfied.
The DSW framework and the empirical copula approach are capable of adapting to this condition as well: the DSW model directly uses $\rho(S_f,Q^{-1})$ to correlate the $S_f$ and $Q^{-1}$ diffusions, and the empirical copula approach simply reproduces the Gaussian copula dependence relation with correlation $\rho(S_f, Q^{-1})$ from the data given by the matrix $\mathbb{A}$. Hence, \emph{no pricing differences are observed} amongst the three approaches, apart from minor differences due to simulation imprecisions. \subsection{Case II: Gaussian copula, co-inclining volatility smile} The matrix $\mathbb{A}$ is set such that the obtained copula is Gaussian, and \[ \varphi_{S_f} = \varphi_{Q^{-1}} = \bigl( - 0.7 , 1, 0.1, 0.2, 0.5 \bigr) \; , \] whence a co-inclining volatility smile is obtained for $S_f$ and $Q^{-1}$ (as can be seen from their vectors of parameters $\varphi_{S_f}$ and $\varphi_{Q^{-1}}$, and Figure~\ref{fig1}). The time to maturity is $T=3$. \begin{figure} \caption{Implied volatilities for $S_f$ and $Q^{-1}$.} \label{fig1} \end{figure} Both the DSW approach and the empirical copula approach adapt to the volatility smiles, while the practitioners' approach does not, because of its ``volatility is constant'' assumption. Concerning the dependence relation between $S_f$ and $Q^{-1}$, the analysis is the same as in Case I. Hence, no pricing differences should be observed between the DSW and the empirical copula frameworks (see Fig.~\ref{fig3}). The minor differences observed between these two approaches are due to simulation imprecisions. \begin{figure} \caption{Pricing differences, Gaussian copula, co-inclining volatility smile.} \label{fig3} \end{figure} \subsection{Case III: Student-$t$ copula, long term option} In Cases III and IV, the matrix $\mathbb{A}$ is set such that the obtained copula is a $t$-copula with $3$ degrees of freedom and correlation $\rho(S_f, Q^{-1})$, \[ \varphi_{S_f} = \varphi_{Q^{-1}} = ( - 0.7, 1, 0.
1, 0.2, 0.5) \] (whence a co-inclining volatility smile is obtained for $S_f$ and $Q^{-1}$, which is displayed in Figure~\ref{fig1}). The time to maturity is $T=3$. The practitioners' framework is not capable of adapting to this case, because of the imposed volatility smile; and the DSW framework cannot adapt to this case either: while it accommodates the volatility smile, it is \emph{not able to adapt} to the $t$-copula between $S_f$ and $Q^{-1}$. The latter presents more tail dependence than the Gaussian copula, which is intrinsic to the DSW framework. Hence, pricing differences are observed amongst all three frameworks (see Figure~\ref{fig4}). The slight difference between the DSW framework and our framework is attributed to the difference between a $t$-copula (with 3~degrees of freedom) and a Gaussian copula with the same correlation parameter. \begin{figure} \caption{Pricing differences, Student-$t$ copula, long term option.} \label{fig4} \end{figure} \subsection{Case IV: Student-$t$ copula, short term option} In Case IV, the conditions are exactly the same as in Case III, except that $T = 0.25$ instead of $T = 3$. \begin{figure} \caption{Pricing differences, Student-$t$ copula, short term option.} \label{fig5} \end{figure} Figure~\ref{fig5} shows that \emph{no pricing differences are observed}. We conclude that neither the dependence relation between $S_f$ and $Q^{-1}$ nor the volatility smile plays a major role in the pricing of quanto options, if the contract is a short-term call option. \subsection{Case V: Frank copula, long term option} In this case, the same simple conditions as in Case~I are imposed, except that~$\mathbb{A}$ is set such that the obtained copula is a Frank copula with parameter $\alpha$. The ordered pairs of quantiles generated by this copula, when converted to ordered pairs of normal random variables, induce a correlation $\rho (S_f, Q^{-1})$.
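For concreteness, ordered pairs of quantiles from a Frank copula can be drawn with the standard conditional-inverse method, sketched below (the parameter value is hypothetical and \texttt{numpy} is assumed):

```python
import numpy as np

def frank_conditional_inverse(u, w, alpha):
    """Return v with C(v | u) = w for a Frank copula with parameter alpha,
    inverting the conditional distribution in closed form."""
    num = w * np.expm1(-alpha)
    den = w + (1.0 - w) * np.exp(-alpha * np.asarray(u))
    return -np.log1p(num / den) / alpha

def frank_sample(alpha, n, seed=0):
    """Draw n ordered pairs (u, v) from the Frank copula:
    u is uniform, and v inverts the conditional CDF at a uniform draw w."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    v = frank_conditional_inverse(u, rng.random(n), alpha)
    return u, v
```

Transformed through the standard normal quantile function, such pairs yield the ordered pairs of normal random variables whose correlation is reported as $\rho(S_f, Q^{-1})$.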
A Frank copula is less similar to a Gaussian copula than a $t$-copula is. Hence, this case stresses the modelling of the dependence relation more than Case~III does. \begin{figure} \caption{Pricing differences, Frank copula, long term option.} \label{fig6} \end{figure} The DSW and the practitioners' frameworks yield similar results, as no volatility smile is imposed and both approaches adapt to the imposed Frank copula dependence relation in the same way, \emph{i.e.}, by considering solely its induced correlation. The empirical copula framework gives pricing figures considerably different from the other approaches, as it takes into account the full dependence relation provided by the imposed Frank copula. Figure~\ref{fig6} illustrates these results. \subsection{Case VI: Frank copula, short term option} The conditions are the same as in Case V, except that now $T= 0.25$. As a consequence, major pricing differences among the three models are not identified, even though slight pricing differences for deep out-of-the-money options exist. \begin{figure} \caption{Pricing differences, Frank copula, short term option.} \label{fig7} \end{figure} Figure~\ref{fig7} illustrates the pricing differences. Hence, even in a stressed dependence relation context, the dependence relation does not play a major role in the pricing of short-term quanto options. \section{Summary} We have proposed a framework based on empirical copulas for quanto option pricing. We have given numerical examples in order to illustrate the pricing differences between our approach and both the practitioners' model and the DSW model \cite{DSW}. Looking at the results, we conclude that: \begin{itemize} \item[$i.)$] the quanto option requires explicit modelling for accurate pricing, except for short duration contracts; \item[$ii.)$] the flexibility provided by the empirical copula approach results in pricing differences when compared to the other two approaches.
\end{itemize} Concerning the proposed empirical copula framework, we conclude that: \begin{itemize} \item[$iii.)$] it provides a flexible framework to define the dependence relation between the market variables used in quanto option pricing, by taking into account non-linear dependence relations, through the matrix $\mathbb{A}$ and the related empirical copula estimation framework; \item[$iv.)$] it can adapt to the observed volatility smiles of the relevant market variables, as the marginals of $S_f$ and $Q^{-1}$ are calibrated to plain vanilla option market prices; and finally \item[$v.)$] a drawback of the proposed model is that it is computationally more expensive than the other models it was compared to. \end{itemize} \end{document}
\begin{document} \title[Operators Induced by Graphs]{Operators Induced by Graphs} \author{Ilwoo Cho and Palle E. T. Jorgensen} \address{St. Ambrose Univ., Dept. of Math., 518 W. Locust St., Davenport, Iowa, 52803, U. S. A. / Univ. of Iowa, Dept. of Math., 14 McLean Hall, Iowa City, Iowa, 52242, U. S. A. } \email{[email protected] / [email protected]} \thanks{The second named author is supported by the U. S. National Science Foundation.} \date{May 2010} \subjclass{05C62, 05C90, 17A50, 18B40, 47A99.} \keywords{Directed Graphs, Graph Groupoids, Graph Operators} \dedicatory{} \thanks{} \maketitle \begin{abstract} In this paper, we consider the spectral-theoretic properties of certain operators induced by given graphs. Self-adjointness, unitarity, hyponormality, and normality of graph-depending operators are considered. As an application, we study the finitely supported operators in the free group factor $L(F_{N})$. \end{abstract} \strut \section{Introduction} Starting with analysis on countable directed graphs $G,$ we introduce Hilbert spaces $H_{G}$ and a family of weighted operators $T$ on $H_{G}.$ When the \emph{weights} (called \emph{coefficients} later in the present context) are chosen, $T$ is called a \emph{graph operator}. From its weights (or coefficients), we define the support $Supp(T)$ of $T.$ In full generality, it is difficult to identify analytic tools that reflect global properties of the underlying graph. We will be interested in generic properties that allow us to study the spectral theory of this family of operators $T.$ The spectral theorem will produce a spectral measure representation for $T$ provided we can establish that $T$ is normal, self-adjoint, unitary, etc. These are the classes of operators that admit spectral analysis. In Theorem 3.1, we give a necessary and sufficient condition on $Supp(T)$ for $T$ to be self-adjoint. Our analysis is of interest even in the case when $G$ is finite.
For instance, in Theorems 3.2 and 3.3, with $G$ assumed finite, we show that there is a vertex-edge correspondence which characterizes the weighted operators $T$ that are unitary. Also, in Section 4, hyponormality and normality of our graph operators are characterized. \subsection{Overview} A \emph{graph} is a set of objects called \emph{vertices} (or points or nodes) connected by links called \emph{edges} (or lines). In a \emph{directed graph}, the two directions are counted as distinct directed edges (or arcs). A graph is depicted in diagrammatic form as a set of dots (for vertices), joined by curves or line segments (for edges). Similarly, a directed graph is depicted in diagrammatic form as a set of dots joined by arrowed curves, where the arrows indicate the directions of the directed edges. Recently, we studied the operator-algebraic structures induced by directed graphs. A key idea in the study of graph-depending operator algebras is that every directed graph $G$ induces its corresponding \emph{groupoid} $\Bbb{G},$ called the\emph{\ graph groupoid of} $G$. By considering the algebraic structure of $\Bbb{G},$ we can determine the groupoid actions $\lambda ,$ acting on Hilbert spaces $H$: we can obtain suitable representations $(H,$ $\lambda )$ for $\Bbb{G}.$ This guarantees the existence of operator algebras $\mathcal{A}_{G}$ $=$ $\overline{\Bbb{C}[\lambda (\Bbb{G})]}^{w},$ generated by $\Bbb{G}$ (or induced by $G$), contained in the operator algebras $B(H).$ Indeed, the operator algebras $\mathcal{A}_{G}$ are the groupoid ($C^{*}$- or $W^{*}$-)subalgebras of $B(H).$ Note that each edge $e$ of $G$ is assigned a partial isometry on $H,$ and every vertex $v$ of $G$ is assigned a projection on $H$ (under various different types of representations of $\Bbb{G}$).
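The last remark can be checked numerically on a toy example. The following sketch (our illustrative choice, not from the paper) realizes the graph groupoid of the one-edge graph $v_1 \overset{e}{\rightarrow} v_2$, which has the four elements $\{v_1, v_2, e, e^{-1}\}$, on a four-dimensional space, with $L_w$ acting by left concatenation and non-admissible products sent to $0$:

```python
# Toy check on the one-edge graph G : v1 --e--> v2, whose graph groupoid
# is {v1, v2, e, e^{-1}}.  Basis order: (v1, v2, e, e^{-1}).

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def adjoint(a):  # real entries, so the adjoint is the transpose
    n = len(a)
    return [[a[j][i] for j in range(n)] for i in range(n)]

# L_e sends xi_{v2} -> xi_e and xi_{e^{-1}} -> xi_{e e^{-1}} = xi_{v1};
# all other basis vectors go to 0 (non-admissible products).
L_e = [[0, 0, 0, 1],
       [0, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 0]]

# L_{v1}, L_{v2}: projections onto words with initial vertex v1, resp. v2.
L_v1 = [[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]
L_v2 = [[0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1]]

# The edge gives a partial isometry, the vertices give projections.
assert matmul(adjoint(L_e), L_e) == L_v2  # L_{e^{-1}e}: terminal vertex of e
assert matmul(L_e, adjoint(L_e)) == L_v1  # L_{e e^{-1}}: initial vertex of e
assert adjoint(L_v1) == L_v1 and matmul(L_v1, L_v1) == L_v1
```

The two products $L_e^* L_e$ and $L_e L_e^*$ recover exactly the vertex projections, matching the reduction relations $w^{-1}w = v'$ and $w w^{-1} = v$ used later.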
We will fix a \emph{canonical representation\ }$(H_{G},$ $L)$\emph{\ of} $\Bbb{G},$ and construct the corresponding von Neumann algebra \begin{center} $M_{G}$ $=$ $\overline{\Bbb{C}[L(\Bbb{G})]}^{w}$ in $B(H_{G}),$ \end{center} where $H_{G}$ is the \emph{graph Hilbert space} $l^{2}(\Bbb{G})$. This von Neumann algebra $M_{G}$ is called \emph{the graph von Neumann algebra of} $G.$ (See Section 3.1 below). In this paper, we are interested in certain elements $T$ of $M_{G}.$ Recall that, by the definition of graph von Neumann algebras (which are groupoid von Neumann algebras), if $T$ $\in $ $M_{G},$ then \begin{center} $T$ $=$ $\underset{w\in \Bbb{G}}{\sum }$ $t_{w}L_{w}$ with $t_{w}$ $\in $ $\Bbb{C}.$ \end{center} Define the support $Supp(T)$ of $T$ by \begin{center} $Supp(T)$ $=$ $\{w$ $\in $ $\Bbb{G}$ $:$ $t_{w}$ $\neq $ $0\}.$ \end{center} If the support $Supp(T)$ of $T$ is a ``finite'' subset of $\Bbb{G},$ then we call $T$ a \emph{graph operator}. If $T$ is a graph operator, then the quantities $t_{w}$, for $w$ $\in $ $Supp(T),$ are called the\emph{\ coefficients of} $T.$ As we see, all graph operators are finite linear sums of generating operators $L_{w}$ of $M_{G},$ for $w$ $\in $ $\Bbb{G}$; i.e., they are operators generated by finitely many projections and partial isometries on $H_{G}.$ We are interested in their operator-theoretic properties; in particular, self-adjointness, the unitary property, hyponormality, and normality. In operator theory, such properties are very important in order to understand the given operators.
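Before turning to these properties, note that the support just defined is easy to compute once the coefficients are known; a minimal sketch, with words stored as illustrative string labels for groupoid elements:

```python
# A (formal) operator T = sum_w t_w L_w stored as a word -> coefficient
# dictionary; the word labels here are illustrative.

def support(coeffs):
    """Supp(T) = { w in G : t_w != 0 }."""
    return {w for w, t in coeffs.items() if t != 0}

T = {"v1": 2.0, "e1": 1 + 1j, "e1^-1": 1 - 1j, "e2": 0.0}
assert support(T) == {"v1", "e1", "e1^-1"}

# T has finite support (3 elements here), so it is a graph operator.
assert len(support(T)) == 3
```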
For instance, if a given operator $T$ is normal, then $T$ satisfies the conditions in the \emph{spectral mapping theorem}, and hence the $C^{*}$-algebra generated by $T$ is $*$-isomorphic to $C(spec(T)),$ the $C^{*}$-algebra consisting of all continuous functions on the spectrum $spec(T)$ of $T.$ Recall that, for an operator $T,$ the spectrum of $T$, defined by \begin{center} $spec(T)$ $\overset{def}{=}$ $\{t$ $\in $ $\Bbb{C}$ $:$ $T$ $-$ $t1_{H}$ is not invertible$\},$ \end{center} is a nonempty compact subset of $\Bbb{C}.$ We characterize the self-adjointness, the unitary property, hyponormality, and normality of graph operators in $M_{G}.$ We show that such operator-theoretic properties of graph operators are characterized by combinatorial properties of the given graphs and certain analytic data of the coefficients of $T.$ This provides another connection between operator theory, operator algebra, groupoid theory, and combinatorial graph theory. \subsection{Motivation and Applications} As an application, we derive the operator-theoretic properties of finitely supported elements of the free group factors $L(F_{N})$, for $N$ $\in $ $\Bbb{N}.$ Recall that the \emph{free group factor} $L(F_{N}),$ for $N$ $\in $ $\Bbb{N},$ is the group von Neumann algebra $\overline{\Bbb{C}[\lambda (F_{N})]}^{w}$, in $B(l^{2}(F_{N})),$ generated by the free group $F_{N}$ with $N$ generators, where $(l^{2}(F_{N}),$ $\lambda )$ is the \emph{left regular unitary representation of} $F_{N},$ consisting of the \emph{group Hilbert space} $l^{2}(F_{N}),$ and the \emph{unitary representation} (which is a group action) \emph{of} $F_{N}$\emph{\ acting on} $l^{2}(F_{N}).$ This is possible since the free group factors $L(F_{N})$ are $*$-isomorphic to the graph von Neumann algebras $M_{O_{N}}$ of the one-vertex-$N$-loop-edge graphs $O_{N},$ for all $N$ $\in $ $\Bbb{N}$ $\cup $ $\{\infty \}$ (See Section 5 below and [5]).\strut Recall that a von Neumann algebra $\mathcal{M}$ in $B(H)$ is
a \emph{factor}, if its $W^{*}$-subalgebra $\mathcal{M}^{\prime }$ $\cap $ $\mathcal{M}$ is $*$-isomorphic to $\Bbb{C}$ (or $\Bbb{C}$ $\cdot $ $1_{\mathcal{M}}$), where \begin{center} $\mathcal{M}^{\prime }$ $\overset{def}{=}$ $\{x$ $\in $ $B(H)$ $:$ $xm$ $=$ $mx,$ $\forall $ $m$ $\in $ $\mathcal{M}\}.$ \end{center} It is well-known that a group $\Gamma $ is an i.c.c.\ (infinite conjugacy class) group if and only if the corresponding group von Neumann algebra $L(\Gamma )$ is a factor. Since every free group $F_{N}$ is i.c.c., the group von Neumann algebra $L(F_{N})$ is a factor. This is why the algebras $L(F_{N})$ are called the free group factors. The study of free group factors is itself very interesting and important in operator algebra. We are interested in the operator-theoretic properties of each element of a fixed free group factor. We can check that the free group factors $L(F_{N})$ and the graph von Neumann algebras $M_{O_{N}}$ of the one-vertex-$N$-loop-edge graphs $O_{N}$ are $*$-isomorphic. This provides a motivation for our application in Section 5. More precisely, the analysis of finitely supported operators in $L(F_{N})$ is the study of graph operators in $M_{O_{N}},$ since there is a one-to-one correspondence between finitely supported operators in $L(F_{N})$ and graph operators in $M_{O_{N}}.$ \section{Definitions and Background} Starting with a graph $G,$ to understand the operator theory we must introduce a Hilbert space $H_{G}$ naturally coming from $G.$ Our approach is as follows: from $G,$ introduce an enveloping groupoid $\Bbb{G}$ and an associated involutive algebra $\mathcal{A}_{G}.$ We then introduce a conditional expectation $E$ of $\mathcal{A}_{G}$ onto the subalgebra $\mathcal{D}_{G}$ of diagonal elements. To get a representation of $\mathcal{A}_{G}$ and an associated Hilbert space $H_{G},$ we then use the Stinespring construction on $E$ (e.g., see [14]).
\strut In this section, we introduce the concepts and definitions we will use.\strut \strut \subsection{Graph Groupoids} Let $G$ be a directed graph with its vertex set $V(G)$ and its edge set $E(G).$ Let $e$ $\in $ $E(G)$ be an edge connecting a vertex $v_{1}$ to a vertex $v_{2}.$ Then we write $e$ $=$ $v_{1}$ $e$ $v_{2},$ to emphasize the initial vertex $v_{1}$ of $e$ and the terminal vertex $v_{2}$ of $e.$ For a fixed graph $G,$ we can define the oppositely directed graph $G^{-1},$ with $V(G^{-1})$ $=$ $V(G)$ and $E(G^{-1})$ $=$ $\{e^{-1}$ $:$ $e$ $\in $ $E(G)\},$ where each element $e^{-1}$ of $E(G^{-1})$ satisfies \begin{center} $e$ $=$ $v_{1}$ $e$ $v_{2}$ in $E(G)$, with $v_{1},$ $v_{2}$ $\in $ $V(G),$ \end{center} if and only if \begin{center} $e^{-1}$ $=$ $v_{2}$ $e^{-1}$ $v_{1},$ in $E(G^{-1}).$ \end{center} This oppositely directed edge $e^{-1}$ $\in $ $E(G^{-1})$ of $e$ $\in $ $E(G)$ is called the \emph{shadow of} $e.$ Also, this new graph $G^{-1}$, induced by $G,$ is said to be the \emph{shadow of} $G.$ It is clear that $(G^{-1})^{-1}$ $=$ $G.$\strut Define the \emph{shadowed graph} $\widehat{G}$ of $G$ by a directed graph with its vertex set \begin{center} $V(\widehat{G})$ $=$ $V(G)$ $=$ $V(G^{-1})$ \end{center} and its edge set \begin{center} $E(\widehat{G})$ $=$ $E(G)$ $\cup $ $E(G^{-1})$, \end{center} where $G^{-1}$ is the \emph{shadow} of $G$. We say that two edges $e_{1}$ $=$ $v_{1}$ $e_{1}$ $v_{1}^{\prime }$ and $e_{2}$ $=$ $v_{2}$ $e_{2}$ $v_{2}^{\prime }$ are \emph{admissible}, if $v_{1}^{\prime }$ $=$ $v_{2},$ equivalently, if the finite path $e_{1}$ $e_{2}$ is well-defined on $\widehat{G}.$ Similarly, if $w_{1}$ and $w_{2}$ are finite paths on $G,$ then we say $w_{1}$ and $w_{2}$ are \emph{admissible}, if $w_{1}$ $w_{2}$ is a well-defined finite path on $G.$
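The shadow construction and the admissibility rule can be sketched in a few lines; the following toy implementation (edge names and the encoding of the empty word as `None` are our illustrative choices) forms $E(\widehat{G})$ and tests whether a product of words is a well-defined path:

```python
# Shadowed graph: every edge e = (v1, v2) acquires a shadow e^-1 = (v2, v1).
# A product w1 w2 of words is admissible exactly when the terminal vertex
# of w1 equals the initial vertex of w2; non-admissible products are
# modelled by None (playing the role of the empty word).

def shadowed_edges(edges):
    """edges: dict name -> (initial, terminal). Returns E(G-hat)."""
    shadow = {name + "^-1": (v2, v1) for name, (v1, v2) in edges.items()}
    return {**edges, **shadow}

def concat(E, w1, w2):
    """Concatenation of edge-lists over the edge table E."""
    if w1 is None or w2 is None:
        return None
    if E[w1[-1]][1] != E[w2[0]][0]:   # terminal(w1) must meet initial(w2)
        return None
    return w1 + w2

E = shadowed_edges({"e1": ("v1", "v2"), "e2": ("v2", "v3")})
assert E["e1^-1"] == ("v2", "v1")                  # the shadow reverses e1
assert concat(E, ["e1"], ["e2"]) == ["e1", "e2"]   # e1 e2 is admissible
assert concat(E, ["e2"], ["e1"]) is None           # e2 e1 is not
assert concat(E, ["e1"], ["e1^-1"]) == ["e1", "e1^-1"]  # reduces later
```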
Similar to the edge case, if a finite path $w$ has its initial vertex $v$ and its terminal vertex $v^{\prime },$ then we write $w$ $=$ $v$ $w$ $v^{\prime }.$ Notice that every admissible finite path is a word in $E(\widehat{G}).$ Denote the set of all finite paths by $FP(\widehat{G}).$ Then $FP(\widehat{G})$ is the subset of the set $E(\widehat{G})^{*},$ consisting of all finite words in $E( \widehat{G}).$ Suppose we take a part \begin{center} $ \begin{array}{lll} & \bullet & \overset{e_{3}}{\longrightarrow }\cdot \cdot \cdot \\ & \uparrow & _{e_{2}} \\ \cdot \cdot \cdot \underset{e_{1}}{\longrightarrow } & \bullet & \end{array} $ \end{center} in a graph $G$ or in the shadowed graph $\widehat{G},$ where $e_{1},$ $e_{2},$ $e_{3}$ are edges of $G,$ respectively of $\widehat{G}$. Then the above admissibility shows that the edges $e_{1}$ and $e_{2}$ are admissible, since we can obtain a finite path $e_{1}e_{2}$; however, the edges $e_{1}$ and $e_{3}$ are not admissible, since a finite path $e_{1}$ $e_{3}$ is undefined. We can construct the \emph{free semigroupoid} $\Bbb{F}^{+}(\widehat{G})$ of the shadowed graph $\widehat{G},$ as the union of all vertices in $V(\widehat{G})$ $=$ $V(G)$ $=$ $V(G^{-1})$ and admissible words in $FP(\widehat{G}),$ equipped with its binary operation, the \emph{admissibility}. Naturally, we assume that $\Bbb{F}^{+}\Bbb{(}\widehat{G})$ contains the \emph{empty word} $\emptyset ,$ as the representative of all undefined (or non-admissible) finite words in $E(\widehat{G})$. Remark that some free semigroupoids $\Bbb{F}^{+}\Bbb{(}\widehat{G})$ of $\widehat{G}$ do not contain the empty word; for instance, if a graph $G$ is a one-vertex-multi-edge graph, then the shadowed graph $\widehat{G}$ of $G$ is also a one-vertex-multi-edge graph, and hence its free semigroupoid $\Bbb{F}^{+}(\widehat{G})$ does not have the empty word.
However, in general, if $\left| V(G)\right| $ $>$ $1,$ then $\Bbb{F}^{+}(\widehat{G})$ always contains the empty word. Thus, if there is no confusion, we always assume the empty word $\emptyset $ is contained in the free semigroupoid $\Bbb{F}^{+}(\widehat{G})$ of $\widehat{G}.$ \begin{definition} The graph groupoid $\Bbb{G}$ of a given graph $G$ is the subset of $\Bbb{F}^{+}(\widehat{G})$ consisting of all ``reduced'' finite paths on $\widehat{G},$ with the admissibility inherited from $\Bbb{F}^{+}(\widehat{G})$ under the \emph{reduction} (RR), where the reduction (RR) on $\Bbb{G}$ is as follows: (RR)\qquad $\qquad \qquad \qquad w$ $w^{-1}$ $=$ $v$ and $w^{-1}w$ $=$ $v^{\prime },$\strut for all $w$ $=$ $v$ $w$ $v^{\prime }$ $\in $ $\Bbb{G},$ with $v,$ $v^{\prime }$ $\in $ $V(\widehat{G}).$ \end{definition} Such a graph groupoid $\Bbb{G}$ is indeed a categorical groupoid with its base $V(\widehat{G})$ (see \textbf{Appendix A}). \subsection{Canonical Representation of Graph Groupoids} \strut Let $G$ be a given countable connected directed graph with its graph groupoid $\Bbb{G}.$ Then we can define the (purely algebraic) algebra $\mathcal{A}_{G}$ of $\Bbb{G}$ as a vector space over $\Bbb{C},$ consisting of all linear combinations of elements of $\Bbb{G},$ i.e., \begin{center} $\mathcal{A}_{G}$ $\overset{def}{=}$ $\Bbb{C}$ $\cup $ $\left( \underset{k=1}{\overset{\infty }{\cup }}\left\{ \sum_{j=1}^{k}t_{j}w_{j}\left| \begin{array}{c} w_{j}\in \Bbb{G},\text{ }t_{j}\in \Bbb{C}, \\ j=1,...,k \end{array} \right.
\right\} \right) ,$ \end{center} under the usual addition ($+$), and the multiplication ($\cdot $) dictated by the admissibility on $\Bbb{G}.$ Define now a unary operation ($*$) on $\mathcal{A}_{G}$ by \begin{center} $\sum_{j=1}^{k}$ $t_{j}$ $w_{j}$ $\in $ $\mathcal{A}_{G}$ $\longmapsto $ $\sum_{j=1}^{k}$ $\overline{t_{j}}$ $w_{j}^{-1}$ $\in $ $\mathcal{A}_{G},$ \end{center} where $\overline{z}$ means the conjugate of $z,$ for all $z$ $\in $ $\Bbb{C},$ and of course $w^{-1}$ means the shadow of $w,$ for all $w$ $\in $ $\Bbb{G}.$ We call this unary operation ($*$) the \emph{adjoint} (or the \emph{shadow}) on $\mathcal{A}_{G}.$ Then the vector space $\mathcal{A}_{G},$ equipped with the adjoint ($*$), is a well-defined (algebraic) $*$-algebra. Now, define a $*$-subalgebra $\mathcal{D}_{G}$ of $\mathcal{A}_{G}$ by \begin{center} $\mathcal{D}_{G}$ $\overset{def}{=}$ $\Bbb{C}$ $\cup $ $\left( \underset{k=1}{\overset{\infty }{\cup }}\left\{ \sum_{j=1}^{k}t_{j}\text{ }v_{j}\left| \begin{array}{c} v_{j}\in V(\widehat{G}),\text{ }t_{j}\in \Bbb{C}, \\ j=1,...,k \end{array} \right. \right\} \right) .$ \end{center} This $*$-algebra $\mathcal{D}_{G}$ acts like the diagonal of $\mathcal{A}_{G},$ so we call $\mathcal{D}_{G}$ the \emph{diagonal} ($*$-)\emph{subalgebra of} $\mathcal{A}_{G}.$ \subsubsection{The Hilbert Space $H_{G}$} Below, we identify the canonical Hilbert space $H_{G}.$ The algebra $\mathcal{A}_{G}$ is represented by bounded linear operators acting on $H_{G}.$ The representation is induced by the canonical conditional expectation, via the Stinespring construction (e.g., see [14]).
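The $*$-algebra structure just described (formal sums with the shadow adjoint, and the diagonal subalgebra spanned by the vertices) can be sketched concretely. In the following toy code, formal elements of $\mathcal{A}_G$ are word-to-coefficient dictionaries, and the encoding of words as tuples of labelled letters is our illustrative choice:

```python
# Formal elements of A_G as word -> coefficient dicts; a word is a tuple
# of letters, and vertex labels are kept in a set so that a vertex is its
# own shadow.

VERTICES = {"v1", "v2"}

def shadow(w):
    """Shadow of a word: reverse it and invert each letter."""
    flip = lambda s: s if s in VERTICES else (
        s[:-3] if s.endswith("^-1") else s + "^-1")
    return tuple(flip(s) for s in reversed(w))

def star(T):
    """The adjoint (*): sum t_j w_j  |->  sum conj(t_j) w_j^{-1}."""
    return {shadow(w): t.conjugate() for w, t in T.items()}

def diagonal_part(T):
    """Component of T in the diagonal subalgebra D_G (vertex terms only)."""
    return {w: t for w, t in T.items() if all(s in VERTICES for s in w)}

T = {("v1",): 2 + 0j, ("e1",): 1 + 2j, ("e1", "e2"): 3 + 0j}
assert star(T) == {("v1",): 2 - 0j, ("e1^-1",): 1 - 2j,
                   ("e2^-1", "e1^-1"): 3 - 0j}
assert star(star(T)) == T                    # (*) is an involution
assert diagonal_part(T) == {("v1",): 2 + 0j}
```

Note that `diagonal_part` is exactly the "keep only the vertex terms" operation that reappears in the conditional expectation onto $\mathcal{D}_G$.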
We can construct an (algebraic $*$-)conditional expectation \begin{center} $E$ $:$ $\mathcal{A}_{G}$ $\rightarrow $ $\mathcal{D}_{G}$ \end{center} by (2.2.1) \begin{center} $E\left( \underset{w\in X}{\sum }\text{ }t_{w}w\right) $ $\overset{def}{=}$ $\underset{v\in X\cap V(\widehat{G})}{\sum }$ $t_{v}$ $v,$ \end{center} for all $\underset{w\in X}{\sum }$ $t_{w}w$ $\in $ $\mathcal{A}_{G},$ where $X$ means a finite subset of $\Bbb{G}.$ Since the conditional expectation $E$ is completely positive under a suitable topology on $\mathcal{A}_{G}$, we may apply Stinespring's construction. i.e., the diagonal subalgebra $\mathcal{D}_{G}$ is represented as the $l^{2}$-space, $l^{2}(V(\widehat{G})),$ by the concatenation. Then we can obtain the Hilbert space $H_{G},$ \begin{center} $H_{G}$ $\overset{def}{=}$ the Stinespring space of $\mathcal{A}_{G}$ over $\mathcal{D}_{G},$ by $E,$ \end{center} containing $l^{2}(V(\widehat{G})).$ i.e., if $\pi _{(E,\mathcal{D}_{G})}$ is the Stinespring representation of $\mathcal{A}_{G},$ acting on $l^{2}(V(\widehat{G})),$ then \begin{center} $H_{G}$ $=$ $\pi _{(E,\mathcal{D}_{G})}\left( \mathcal{A}_{G}\right) .\strut $ \end{center} This Stinespring space $H_{G}$ is the Hilbert space with its \emph{inner product} $<,>_{G}$ satisfying: \begin{center} $<h,$ $\pi _{(E,\mathcal{D}_{G})}(a)$ $k$ $>_{G}$ $=$ $<h,$ $E(a)$ $k$ $>_{2},$ \end{center} for all $h,$ $k$ $\in $ $l^{2}(V(\widehat{G})),$ for all $a$ $\in $ $\mathcal{A}_{G},$ where $<,>_{2}$ is the inner product on $l^{2}(V(\widehat{G})).$\strut i.e., the Stinespring space $H_{G}$ is the norm closure of $\mathcal{A}_{G},$ by the norm, (2.2.2) \begin{center} $\left\| \sum_{i=1}^{n}\text{ }w_{i}\otimes h_{i}\right\| _{G}^{2}$ $=$ $\sum_{i=1}^{n}$ $\sum_{k=1}^{n}$ $<h_{i},$ $E(w_{i}^{*}w_{k})$ $h_{k}$ $>_{2},$ \end{center} induced by the Stinespring
inner product $<,>_{G}$ on $\mathcal{A}_{G},$ for all $w_{i}$ $\in $ $\Bbb{G},$ $h_{i}$ $\in $ $l^{2}(V(\widehat{G})),$ for all $n$ $\in $ $\Bbb{N}.$ \begin{definition} We call this Stinespring space $H_{G}$ the graph Hilbert space of $\Bbb{G}$ (or of $G$). \end{definition} \strut Denote the Hilbert space element $\pi _{(E,\mathcal{D}_{G})}(w)$ by $\xi _{w}$ in the graph Hilbert space $H_{G},$ for all $w$ $\in $ $\Bbb{G},$ with the identification, \begin{center} $\xi _{\emptyset }$ $=$ $0_{H_{G}},$ the zero vector in $H_{G},$ \end{center} where $\emptyset $ is the empty word (if it exists) of $\Bbb{G}.$ We can check that the subset $\{\xi _{w}$ $:$ $w$ $\in $ $\Bbb{G}\}$ of $H_{G}$ satisfies the following \emph{multiplication rule}: \begin{center} $\xi _{w_{1}}$ $\xi _{w_{2}}$ $=$ $\xi _{w_{1}w_{2}},$ on $H_{G},$ \end{center} for all $w_{1},$ $w_{2}$ $\in $ $\Bbb{G}.$ Thus, we can define the \emph{canonical multiplication operators }$L_{w}$\emph{\ on} $H_{G}$, satisfying \begin{center} $L_{w}$ $\xi _{w^{\prime }}$ $\overset{def}{=}$ $\xi _{w}$ $\xi _{w^{\prime }}$ $=$ $\xi _{ww^{\prime }},$ \end{center} for all $w,$ $w^{\prime }$ $\in $ $\Bbb{G}.$ The existence of such multiplication operators $L_{w}$ guarantees the existence of a groupoid action $L$ of $\Bbb{G},$ acting on $H_{G}$; \begin{center} $L$ $:$ $w$ $\in $ $\Bbb{G}$ $\longmapsto $ $L(w)$ $\overset{def}{=}$ $L_{w}$ $\in $ $B(H_{G}).$ \end{center} This action $L$ of $\Bbb{G}$ is called the \emph{canonical groupoid action of }$\Bbb{G}$\emph{\ on} $H_{G}.$ \subsubsection{\strut The Operators $L_{w}$} Let $w$ and $w_{i}$ denote reduced finite paths in $FP_{r}(\widehat{G}),$ for $i$ $\in $ $\Bbb{N}$; equivalently, they are the reduced words in the edge set $E(\widehat{G}),$ under the reduction (RR).
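Reduction under (RR) amounts to cancelling adjacent shadow pairs $e\,e^{-1}$ inside a word (the resulting vertex acts as a unit within a longer word). A minimal stack-based sketch, with illustrative edge labels:

```python
# The reduction (RR): inside a word, an adjacent pair e e^{-1} collapses
# to a vertex, which acts as a unit and disappears from a longer word.

def inverse(e):
    return e[:-3] if e.endswith("^-1") else e + "^-1"

def reduce_word(w):
    stack = []
    for e in w:
        if stack and stack[-1] == inverse(e):
            stack.pop()          # e' e'^{-1} -> vertex -> unit
        else:
            stack.append(e)
    return stack                 # [] stands for the remaining vertex

assert reduce_word(["e1", "e1^-1", "e2"]) == ["e2"]
assert reduce_word(["e1", "e2", "e2^-1", "e1^-1"]) == []
assert reduce_word(["e1", "e2"]) == ["e1", "e2"]
```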
Consider (2.2.3) \begin{center} $L_{w}\left( \underset{i}{\sum }w_{i}\otimes h_{i}\right) $ $=$ $\underset{i}{\sum }$ $ww_{i}$ $\otimes $ $h_{i},$ \end{center} for $h_{i}$ $\in $ $l^{2}(V(\widehat{G}))$. Here, the element $\underset{i}{\sum }$ $w_{i}$ $\otimes $ $h_{i}$ denotes a finite sum of tensors in $\mathcal{A}_{G}$. And $ww_{i}$ in (2.2.3) means concatenation of finite words. With the conditional expectation $E$ $:$ $\mathcal{A}_{G}$ $\rightarrow $ $\mathcal{D}_{G}$ (See (2.2.1) above), we get the \emph{Stinespring representation} $(H_{G},$ $\pi _{(E,\mathcal{D}_{G})}),$ and the operators $\pi _{(E,\mathcal{D}_{G})}(w)$ $:$ $H_{G}$ $\rightarrow $ $H_{G}$ obtained from (2.2.3) by passing to the quotient and completion as in Definition 2.2. To simplify terminology, in the sequel, we will simply write $L_{w}$ for the operator $\pi _{(E,\mathcal{D}_{G})}(w).$ \subsubsection{\strut Graph von Neumann Algebras} Let $G$, $\Bbb{G},$ and $H_{G}$ be given as above. And let $\{L_{w}$ $:$ $w$ $\in $ $\Bbb{G}\}$ be the multiplication operators on $H_{G}$, where $L$ is the canonical groupoid action of $\Bbb{G}.$ \begin{definition} Let $G$ be a countable directed graph with its graph groupoid $\Bbb{G}.$ The pair $(H_{G},$ $L)$ of the graph Hilbert space $H_{G}$ and the canonical groupoid action $L$ of $\Bbb{G}$ is called the canonical representation of $\Bbb{G}$.
The corresponding groupoid von Neumann algebra \begin{center} $M_{G}$ $\overset{def}{=}$ $\overline{\Bbb{C}[L(\Bbb{G})]}^{w},$ \end{center} generated by $\Bbb{G}$ (equivalently, by $L(\Bbb{G})$ $=$ $\{L_{w}$ $:$ $w$ $\in $ $\Bbb{G}\}$), as a $W^{*}$-subalgebra of $B(H_{G})$, is called the graph von Neumann algebra of $G.$ \end{definition} \strut \strut We can check that the generating operators $L_{w}$ of the graph von Neumann algebra $M_{G}$ of $G$ satisfy: \begin{center} $L_{w}^{*}$ $=$ $L_{w^{-1}},$ for all $w$ $\in $ $\Bbb{G},$ \end{center} and \begin{center} $L_{w_{1}}L_{w_{2}}$ $=$ $L_{w_{1}w_{2}},$ for all $w_{1},$ $w_{2}$ $\in $ $\Bbb{G}.$\\[0pt] \end{center} It is easy to check that if $v$ is a vertex in $\Bbb{G},$ then the graph operator $L_{v}$ is a projection, since \begin{center} $L_{v}^{*}$ $=$ $L_{v^{-1}}$ $=$ $L_{v}$ $=$ $L_{v^{2}}$ $=$ $L_{v}^{2}.$ \end{center} Thus, by the reduction (RR) on $\Bbb{G},$ we can conclude that if $w$ is a nonempty reduced finite path in $FP_{r}(\widehat{G}),$ then the operator $L_{w}$ is a partial isometry, since \begin{center} $L_{w}^{*}$ $L_{w}$ $=$ $L_{w^{-1}w}$, \end{center} and $w^{-1}w$ is a vertex, and hence $L_{w}^{*}L_{w}$ is a projection on $H_{G}.$\strut \section{\strut Self-Adjointness and Unitary Property} In this section, we introduce the main objects of this paper: canonical representations of graph groupoids, graph von Neumann algebras, and graph operators. We study the self-adjointness of graph operators, and their unitary property. We will see that the self-adjointness and the unitary property of graph operators are characterized by the combinatorial property (admissibility) of given graphs, and certain analytic data of the coefficients of the operators. Section 3.1 introduces the graph operators, and the theorem in Section 3.2 yields the structure of the graph operators that are self-adjoint; and Section 3.3, the unitary case.
The different geometries of $G$ and the associated operators reflect different spectral representations. Section 4 below covers normal and hyponormal graph operators. Finally, Section 5 takes up the case when the algebra is one of the free group factors (e.g., see [15], and [16]). \subsection{Graph Operators} Let $G$ be a graph with its graph groupoid $\Bbb{G},$ and let $M_{G}$ $=$ $\overline{\Bbb{C}[L(\Bbb{G})]}^{w}$ be the graph von Neumann algebra of $G$ in $B(H_{G}),$ where $(H_{G},$ $L)$ is the canonical representation of $\Bbb{G}.$ Since $M_{G}$ is a groupoid von Neumann algebra generated by $\Bbb{G},$ every element $T$ of $M_{G}$ satisfies the expansion, \begin{center} $T$ $=$ $\underset{w\in \Bbb{G}}{\sum }$ $t_{w}$ $L_{w},$ with $t_{w}$ $\in $ $\Bbb{C}.$ \end{center} For the given operator $T$ $\in $ $M_{G},$ having the above expansion, define the subset $Supp(T)$ of $\Bbb{G}$ by \begin{center} $Supp(T)$ $\overset{def}{=}$ $\{w$ $\in $ $\Bbb{G}$ $:$ $t_{w}$ $\neq $ $0\}.$ \end{center} This subset $Supp(T)$ of $\Bbb{G}$ is called the \emph{support of} $T.$ \begin{definition} Let $T$ be an element of the graph von Neumann algebra $M_{G}$ of a given graph $G,$ and let $Supp(T)$ be the support of $T.$ If $Supp(T)$ is a finite set, then we call $T$ a graph operator (on $H_{G}$). \end{definition} i.e., graph operators are the finitely supported operators on $H_{G}.$ In the rest of this section, we will consider a very specific, but very interesting, example, where the given graph $G$ is an infinite linear graph, \begin{center} $G$ $=$ \quad $\bullet \longrightarrow \bullet \longrightarrow \bullet \longrightarrow \cdot \cdot \cdot .$ \end{center} \strut \strut We want to investigate the matrix forms of (operators unitarily equivalent to) graph operators.
Instead of determining the matrix forms of graph operators acting on the graph Hilbert space $H_{G}$, we consider their matrix forms acting on the subspace $l^{2}(V(\widehat{G})),$ embedded in the graph Hilbert space $H_{G}.$ For convenience, we let \begin{center} $V(G)$ $=$ $\Bbb{N},$ and $E(G)$ $=$ $\{(j,$ $j$ $+$ $1)$ $:$ $j$ $\in $ $\Bbb{N}\},$ \end{center} i.e., \begin{center} $G$ $=$ \quad $\underset{1}{\bullet }\overset{(1,2)}{\longrightarrow }\underset{2}{\bullet }\overset{(2,3)}{\longrightarrow }\underset{3}{\bullet }\overset{(3,4)}{\longrightarrow }$ $\cdot \cdot \cdot .$ \end{center} \strut Then, we can check that \begin{center} $l^{2}\left( V(\widehat{G})\right) $ $\overset{\text{Hilbert}}{=}$ $l^{2}(\Bbb{N})$ in $H_{G}.$ \end{center} So, we can assign the graph operator $L_{j}$ to the infinite matrix \begin{center} $ \begin{array}{ll} \qquad \qquad \qquad j\text{-th} & \\ \left( \begin{array}{lllllll} 0 & & & & & & 0 \\ & \ddots & & & & & \\ & & 0 & & & & \\ & & & 1 & & & \\ & & & & 0 & & \\ & & & & & \ddots & \\ 0 & & & & & & \end{array} \right) & j\text{-th,} \end{array} $ \end{center} on $l^{2}(\Bbb{N}),$ for all $j$ $\in $ $\Bbb{N}$ $=$ $V(\widehat{G}),$ and we assign the graph operator $L_{(j,\text{ }j+1)}$ to the infinite matrix \begin{center} \strut $ \begin{array}{ll} \qquad \qquad \qquad j\text{-th} & \\ \left( \begin{array}{lllllll} 0 & & & & & & 0 \\ 0 & \ddots & & & & & \\ & \ddots & 0 & & & & \\ & & 0 & 1 & 1 & & \\ & & & 0 & 0 & & \\ & & & & 0 & \ddots & \\ 0 & & & & & \ddots & \end{array} \right) & j\text{-th,} \end{array} $ \end{center} on $l^{2}(\Bbb{N}),$ for all $j$ $\in $ $\Bbb{N}.$ More precisely, we can assign \begin{center} $L_{j}$ $\in $ $M_{G}$
$\longleftrightarrow $ $\mid j><j\mid $ $\in $ $ B\left( l^{2}(\Bbb{N})\right) $ \end{center} and \begin{center} $L_{(j,j+1)}$ $\in $ $M_{G}\longleftrightarrow $ $\mid j><j\mid +\mid j><j+1\mid $ $\in $ $B\left( l^{2}(\Bbb{N})\right) ,$ \end{center} where $\mid j>$ means the Dirac operators, for all $j$ $\in $ $\Bbb{N}.$ We use Dirac's notation for rank-one operators, i.e., \begin{center} $\mid u><v\mid x$ $=$ $<v,$ $x>$ $u,$ \end{center} defined for vectors $u,$ $v,$ $x$ in a fixed Hilbert space having its inner product $<,>.$ So, for a reduced finite path $w$ $=$ $e_{i_{1}}$ $e_{i_{2}}$ ... $e_{i_{k}}$ $\in $ $\Bbb{G},$ with $e_{i_{j}}$ $=$ $(i_{j},$ $i_{j}$ $+$ $1)$ $\in $ $E( \widehat{G}),$ where \begin{center} $i_{j+1}$ $=$ $i_{j}$ $+$ $1,$ for $j$ $=$ $1,$ ..., $k$ $-$ $1,$ \end{center} the graph operator $L_{w}$ is determined as a matrix, \begin{center} $A_{e_{i_{1}}}$ $+$ $A_{e_{i_{2}}}$ $+$ ... $+$ $A_{e_{i_{k}}},$ \end{center} where $A_{e_{i_{j}}}$ are the infinite matrices (on $l^{2}(\Bbb{N})$) of the graph operators $L_{e_{i_{j}}}.$ \strut For instance, the self-adjoint operator \begin{center} $ \begin{array}{ll} L_{(j,j+1)}+L_{(j,j+1)}^{*} & =L_{(j,j+1)}+L_{(j+1,j)} \\ & =2\mid j><j\mid +\mid j><j+1\mid \\ & \qquad \qquad +\mid j+1><j\mid \end{array} $ \end{center} has its matrix form \begin{center} $\left( \begin{array}{lllllll} 0 & 0 & & & & & \\ 0 & \ddots & \ddots & & & & \\ & \ddots & 0 & 0 & & & \\ & & 0 & 2 & 1 & & \\ & & & 1 & 0 & 0 & \\ & & & & 0 & \ddots & \ddots \\ & & & & & \ddots & \end{array} \right) ,$ \end{center} \strut on $l^{2}(\Bbb{N})$ $=$ $l^{2}\left( V(\widehat{G})\right) .$ So, more generally, the self-adjoint operator $L_{w}$ $+$ $L_{w}^{*}$, for $w$ $ \in $ $\Bbb{G},$ becomes a certain self-adjoint Toeplitz operator on $l^{2}( \Bbb{N}),$ because $l^{2}(\Bbb{N})$ is Hilbert-space isomorphic to the Hardy space $H^{2}(\Bbb{T}),$ equipped with the Haar measure, where $\Bbb{T}$ is the unit circle in $\Bbb{C}$ (e.g., see [3], [12], 
and [13]). \subsection{\strut Self-Adjoint Graph Operators} The operator-theoretic properties of (bounded linear) operators (self-adjointness, the unitary property, hyponormality, and normality) are briefly introduced in \textbf{Appendix B}. In this section, we will consider the self-adjointness of graph operators. Let $G$ be a graph with its graph groupoid $\Bbb{G},$ and let $M_{G}$ be the graph von Neumann algebra of $G.$ Take a graph operator $T$ in $M_{G},$ \begin{center} $T$ $=$ $\underset{w\in Supp(T)}{\sum }$ $t_{w}$ $L_{w},$ with $t_{w}$ $\in $ $\Bbb{C}.$ \end{center} The following theorem characterizes the self-adjointness of $T$. \begin{theorem} Let $T$ $\in $ $M_{G}$ be a given graph operator. Then $T$ is self-adjoint, if and only if there exists ``a'' subset $X$ of $Supp(T)$ such that \begin{center} $Supp(T)$ $\cap $ $FP_{r}(\widehat{G})$ $=$ $X$ $\sqcup $ $X^{-1},$ \end{center} where $\sqcup $ means the disjoint union, and \begin{center} $t_{x}$ $=$ $\overline{t_{x^{-1}}},$ for all $x$ $\in $ $X,$ \end{center} where $X^{-1}$ $\overset{def}{=}$ $\{x^{-1}$ $:$ $x$ $\in $ $X\},$ and $\overline{z}$ means the conjugate of $z,$ for all $z$ $\in $ $\Bbb{C},$ and \begin{center} $t_{v}$ $\in $ $\Bbb{R},$ for all $v$ $\in $ $Supp(T)$ $\cap $ $V(\widehat{G}).$ \end{center} \end{theorem} \begin{proof} ($\Leftarrow $) Assume that $T$ $=$ $\underset{w\in Supp(T)}{\sum }$ $t_{w}$ $L_{w}$ is a graph operator in $M_{G},$ and suppose there exists a subset $X$ of \begin{center} $Supp_{V}^{c}(T)$ $\overset{\text{denote}}{=}$ $Supp(T)$ $\cap $ $FP_{r}(\widehat{G})$ \end{center} such that \begin{center} $Supp_{V}^{c}(T)$ $=$ $X$ $\sqcup $ $X^{-1},$ \end{center} and \begin{center} $t_{x}$ $=$ $\overline{t_{x^{-1}}},$ for all $x$ $\in $ $X.$ \end{center} Also, assume that $t_{v}$ $\in $ $\Bbb{R},$ for all elements $v$ in \begin{center} $Supp_{V}(T)$ $\overset{\text{denote}}{=}$ $Supp(T)$ $\cap $ $V(\widehat{G}).$
\end{center} Then the operator $T$ can be rewritten as \begin{center} $T$ $=$ $\underset{v\in Supp_{V}(T)}{\sum }$ $t_{v}L_{v}$ $+$ $\underset{x\in X}{\sum }$ $t_{x}$ $L_{x}$ $+$ $\underset{x^{-1}\in X^{-1}}{\sum }$ $t_{x^{-1}}$ $L_{x^{-1}}.$ \end{center} Moreover, we have that $\qquad T^{*}$ $=$ $\left( \underset{v\in Supp_{V}(T)}{\sum }t_{v}L_{v}+\underset{x\in X}{\sum }\text{ }t_{x}L_{x}+\underset{x^{-1}\in X^{-1}}{\sum }t_{x^{-1}}L_{x^{-1}}\right) ^{*}$ $\qquad \qquad =$ $\underset{v\in Supp_{V}(T)}{\sum }$ $\overline{t_{v}}$ $L_{v^{-1}}$ $+$ $\underset{x\in X}{\sum }$ $\overline{t_{x}}$ $L_{x^{-1}}$ $+$ $\underset{x^{-1}\in X^{-1}}{\sum }$ $\overline{t_{x^{-1}}}$ $L_{x}$ $\qquad \qquad =$ $\underset{v\in Supp_{V}(T)}{\sum }$ $t_{v}L_{v}$ $+$ $\underset{x\in X}{\sum }$ $\overline{t_{x}}$ $L_{x^{-1}}$ $+$ $\underset{x^{-1}\in X^{-1}}{\sum }$ $\overline{t_{x^{-1}}}$ $L_{x}$ since $t_{v}$ $\in $ $\Bbb{R},$ and $L_{v^{-1}}$ $=$ $L_{v},$ for all $v$ $\in $ $V(\widehat{G})$ $\qquad \qquad =$ $\underset{v\in Supp_{V}(T)}{\sum }$ $t_{v}L_{v}$ $+$ $\underset{x\in X}{\sum }$ $t_{x^{-1}}$ $L_{x^{-1}}$ $+$ $\underset{x^{-1}\in X^{-1}}{\sum }$ $t_{x}L_{x}$ since $t_{x}$ $=$ $\overline{t_{x^{-1}}},$ for all $x$ $\in $ $X$ $\qquad \qquad =$ $\underset{v\in Supp_{V}(T)}{\sum }$ $t_{v}L_{v}$ $+$ $\underset{x^{-1}\in X^{-1}}{\sum }$ $t_{x^{-1}}L_{x^{-1}}$ $+$ $\underset{x\in X}{\sum }$ $t_{x}$ $L_{x}$ since $Supp_{V}^{c}(T)$ $=$ $X$ $\sqcup $ $X^{-1}$ $\qquad \qquad =$ $T.$ Therefore, under the hypotheses, the adjoint $T^{*}$ of $T$ is identical to $T$ itself, and hence the element $T$ of $M_{G}$ is self-adjoint.
($\Rightarrow $) Let $T$ $\in $ $M_{G}$ be a self-adjoint graph operator, i.e., $T$ satisfies $T^{*}$ $=$ $T.$ Then \begin{center} $ \begin{array}{ll} T^{*} & =\left( \underset{w\in Supp(T)}{\sum }\text{ }t_{w}L_{w}\right) ^{*} \\ & =\underset{w\in Supp(T)}{\sum }\overline{t_{w}}L_{w^{-1}}\overset{(\star )}{=}\underset{w\in Supp(T)}{\sum }t_{w}L_{w} \\ & =T. \end{array} $ \end{center} To satisfy the above equality ($\star $), we must have \begin{center} $Supp(T^{*})$ $=$ $Supp(T).$ \end{center} Notice that the support $Supp(T^{*})$ of the adjoint $T^{*}$ of $T$ satisfies \begin{center} $Supp(T^{*})$ $=$ $Supp(T)^{-1},$ in $\Bbb{G}.$ \end{center} So, the self-adjointness of $T$ guarantees \begin{center} $Supp(T)$ $=$ $Supp(T)^{-1}$ in $\Bbb{G}.$ \end{center} Therefore, since $Supp(T)$ is self-adjoint, in the sense that $Supp(T)$ is identical to $Supp(T)^{-1},$ there must exist a subset $X$ of $Supp_{V}^{c}(T)$ such that \begin{center} $Supp_{V}^{c}(T)$ $=$ $X$ $\sqcup $ $X^{-1},$ \end{center} because the following set equality always holds: \begin{center} $Supp_{V}(T)^{-1}$ $=$ $Supp_{V}(T)$ \end{center} (since $V(\widehat{G})^{-1}$ $=$ $V(\widehat{G})$ $=$ $V(G)$ $=$ $V(G^{-1})$).
Now, let $X$ be a subset satisfying the above set equality, \begin{center} $Supp_{V}^{c}(T)$ $=$ $X$ $\sqcup $ $X^{-1},$ in $\Bbb{G}.$ \end{center} For a fixed element $x$ $\in $ $X,$ the coefficient $t_{x}$ of $T$ has its corresponding coefficient $t_{x^{-1}}$ of $T.$ Assume now that there exists at least one element $x_{0}$ $\in $ $X,$ such that \begin{center} $t_{x_{0}}$ $\neq $ $\overline{t_{x_{0}^{-1}}}$ in $\Bbb{C}.$ \end{center} Then the summand $t_{x_{0}}L_{x_{0}}$ of $T$ satisfies \begin{center} $(t_{x_{0}}L_{x_{0}})^{*}$ $=$ $\overline{t_{x_{0}}}$ $L_{x_{0}^{-1}}$ $\neq $ $t_{x_{0}^{-1}}$ $L_{x_{0}^{-1}},$ \end{center} and hence $T^{*}$ $\neq $ $T$ on $H_{G}.$ This contradicts the self-adjointness of $T.$ Therefore, if $T$ is self-adjoint, then there exists a subset $X$ of the support $Supp(T)$ of $T$ such that \begin{center} $Supp_{V}^{c}(T)$ $=$ $X$ $\sqcup $ $X^{-1},$ \end{center} and \begin{center} $t_{x}$ $=$ $\overline{t_{x^{-1}}},$ for all $x$ $\in $ $X.$ \end{center} Similarly, assume that there exists at least one $v_{0}$ $\in $ $Supp_{V}(T),$ such that $t_{v_{0}}$ $\in $ $\Bbb{C}$ $\setminus $ $\Bbb{R}.$ Then the summand $t_{v_{0}}$ $L_{v_{0}}$ of $T$ satisfies \begin{center} $(t_{v_{0}}L_{v_{0}})^{*}$ $=$ $\overline{t_{v_{0}}}$ $L_{v_{0}^{-1}}$ $=$ $\overline{t_{v_{0}}}$ $L_{v_{0}}$ $\neq $ $t_{v_{0}}$ $L_{v_{0}},$ \end{center} since $\overline{t_{v_{0}}}$ $\neq $ $t_{v_{0}},$ whenever $t_{v_{0}}$ $\notin $ $\Bbb{R}$ in $\Bbb{C}.$ This also contradicts the assumption that $T$ is self-adjoint. \end{proof} The above theorem characterizes the self-adjointness of graph operators $T$ by the classification of the support $Supp(T)$ and the coefficients of $T.$ This is interesting since the self-adjointness of graph operators is determined by the combinatorial data represented by the elements of the supports (or the admissibility of graph groupoids of given graphs), and the simple analytic data of coefficients.
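The coefficient criterion of the theorem above is easy to test mechanically. The following is a minimal computational sketch (an illustration only, not part of the paper's formalism): a finitely supported operator $T=\sum_{w}t_{w}L_{w}$ is stored as a dictionary of coefficients, and the word labels, the inverse map `inv_map`, and the vertex test are hypothetical encodings chosen for this example.

```python
# Minimal sketch (not from the paper): a finitely supported operator is a
# dict {word label: coefficient}; `inv` is the groupoid inverse w -> w^{-1},
# and `is_vertex` tests membership in V(G^). Labels below are illustrative.

def is_self_adjoint(coeffs, inv, is_vertex, tol=1e-12):
    """Theorem's criterion: Supp(T) = Supp(T)^{-1}, vertex coefficients
    are real, and t_x = conj(t_{x^{-1}}) on the non-vertex part."""
    support = set(coeffs)
    if {inv(w) for w in support} != support:   # Supp_V^c(T) = X ⊔ X^{-1} fails
        return False
    for w, t in coeffs.items():
        if is_vertex(w):
            if abs(complex(t).imag) > tol:     # t_v must be real
                return False
        elif abs(t - complex(coeffs[inv(w)]).conjugate()) > tol:
            return False                       # t_x = conj(t_{x^{-1}}) fails
    return True

# Example modeled on the paper's T_1 (labels purely illustrative):
inv_map = {"v1": "v1", "e1": "e1^-1", "e1^-1": "e1",
           "e3e2^-1": "e2e3^-1", "e2e3^-1": "e3e2^-1"}
T1 = {"v1": 2.0, "e1": 1 + 1j, "e1^-1": 1 - 1j,
      "e3e2^-1": 3j, "e2e3^-1": -3j}
print(is_self_adjoint(T1, inv_map.get, lambda w: w.startswith("v")))  # True
```

The checker implements exactly the two data of the theorem: the combinatorial datum (closure of the support under inversion) and the analytic datum on the coefficients.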
\begin{example} Let $G$ be a graph, \begin{center} $G$ $=$ \quad $_{v_{1}}\bullet \overset{e_{1}}{\underset{e_{2}}{\rightrightarrows }}\underset{v_{2}}{\bullet }\overset{e_{3}}{\leftarrow }\bullet _{v_{3}}.$ \end{center} Let \begin{center} $T_{1}$ $=$ $t_{v_{1}}$ $L_{v_{1}}$ $+$ $t_{e_{1}}L_{e_{1}}$ $+$ $t_{e_{1}^{-1}}L_{e_{1}^{-1}}$ $+$ $t_{e_{3}e_{2}^{-1}}L_{e_{3}e_{2}^{-1}}+t_{e_{2}e_{3}^{-1}}L_{e_{2}e_{3}^{-1}},$ \end{center} and \begin{center} $T_{2}$ $=$ $t_{e_{2}}L_{e_{2}}$ $+$ $t_{e_{3}}L_{e_{3}}$ $+$ $t_{e_{3}^{-1}}L_{e_{3}^{-1}},$ \end{center} in $M_{G}.$ Then we can check the self-adjointness of $T_{1}$ and $T_{2}$ immediately by the above theorem. First, consider the self-adjointness of $T_{1}.$ We can see that \begin{center} $Supp_{V}(T_{1})$ $=$ $\{v_{1}\},$ and $Supp_{V}^{c}(T_{1})$ $=$ $\{e_{1},$ $e_{1}^{-1},$ $e_{3}e_{2}^{-1},$ $e_{2}e_{3}^{-1}\},$ \end{center} in $Supp(T_{1}).$ So, there exists a subset $X$ of $Supp(T_{1}),$ \begin{center} $X$ $=$ $\{e_{1},$ $e_{3}e_{2}^{-1}\},$ having $X^{-1}$ $=$ $\{e_{1}^{-1},$ $e_{2}e_{3}^{-1}\},$ \end{center} satisfying \begin{center} $Supp_{V}^{c}(T_{1})$ $=$ $X$ $\sqcup $ $X^{-1}.$ \end{center} (From this example, we can see that the choice of such a set $X$ is not unique.
For instance, we may take a set $Y,$ \begin{center} $Y$ $=$ $\{e_{1}^{-1},$ $e_{3}e_{2}^{-1}\},$ having $Y^{-1}$ $=$ $\{e_{1},$ $e_{2}e_{3}^{-1}\},$ \end{center} satisfying $Supp_{V}^{c}(T_{1})$ $=$ $Y$ $\sqcup $ $Y^{-1}.$) So, the graph operator $T_{1}$ is self-adjoint on $H_{G},$ if and only if \begin{center} $t_{v_{1}}$ $\in $ $\Bbb{R},$ \end{center} and \begin{center} $t_{e_{1}}$ $=$ $\overline{t_{e_{1}^{-1}}},$ and $t_{e_{3}e_{2}^{-1}}$ $=$ $\overline{t_{e_{2}e_{3}^{-1}}},$ in $\Bbb{C}.$ \end{center} Also, for the operator $T_{2},$ we can immediately check that $T_{2}$ can never be self-adjoint on $H_{G},$ because \begin{center} $Supp(T_{2})$ $=$ $Supp_{V}^{c}(T_{2})$ $=$ $\{e_{2},$ $e_{3},$ $e_{3}^{-1}\},$ \end{center} and there does not exist a subset $X,$ satisfying \begin{center} $Supp_{V}^{c}(T_{2})$ $=$ $X$ $\sqcup $ $X^{-1}.$ \end{center} Therefore, the graph operator $T_{2}$ is not self-adjoint on $H_{G}.$ \end{example} \subsection{Unitary Graph Operators} In this section, we will consider the unitary graph operators in the given graph von Neumann algebra $M_{G}$ of a connected directed graph $G.$ To consider the unitary property of graph operators, we will restrict our interest to the case where a given connected graph $G$ is a finite graph. Recall that a graph $G$ is \emph{finite}, if \begin{center} $\left| V(G)\right| $ $<$ $\infty ,$ and $\left| E(G)\right| $ $<$ $\infty .$ \end{center} \textbf{Assumption.} In this section, we assume all given graphs are ``finite.'' $\square $ The reason we consider only finite graphs when studying the unitary property of graph operators is that we want to determine the identity operator $i_{d}$ on the graph Hilbert space $H_{G}$ easily.
Notice that the identity operator $i_{d}$ in $B(H_{G})$ is identified with the element \begin{center} $1_{M_{G}}$ $=$ $\underset{v\in V(\widehat{G})}{\sum }$ $L_{v}$ in $M_{G}.$ \end{center} \begin{remark} Remark that, even though the given graph $K$ is ``infinite,'' in particular, $\left| V(K)\right| $ $=$ $\infty ,$ the identity element $1_{M_{K}}$ of the corresponding graph von Neumann algebra $M_{K}$ is still the operator $\underset{v\in V(\widehat{K})}{\sum }$ $L_{v},$ where the sum converges in the weak-operator topology. So, the identity element $1_{M_{K}}$ is not finitely supported. Therefore, we can verify that a finitely supported element $T$ of $M_{K}$ (which is our graph operator) cannot be unitary, since the Cartesian product \begin{center} $Supp(T)^{r_{1}}\times ...\times Supp(T)^{r_{n}},$ \end{center} where \begin{center} $(r_{1},$ ..., $r_{n})$ $\in $ $\{\pm 1\}^{n}$, \end{center} is a finite set, for all $n$ $\in $ $\Bbb{N}.$ Thus, we restrict our interest to the case where we have ``finite'' graphs. \end{remark} Let $1_{M_{G}}$ be the identity element of the graph von Neumann algebra $M_{G}$ of a finite graph $G.$ Then, an operator $U$ on $H_{G}$ is unitary, if and only if \begin{center} $U^{*}U$ $=$ $1_{M_{G}}$ $=$ $UU^{*},$ \end{center} by definition; equivalently, $U^{*}$ $=$ $U^{-1},$ where $U^{-1}$ means the \emph{inverse of} $U.$ Now, let's fix a graph operator \begin{center} $T$ $=$ $\underset{w\in Supp(T)}{\sum }$ $t_{w}$ $L_{w}$ in $M_{G}.$ \end{center} Then the adjoint $T^{*}$ of $T$ is \begin{center} $T^{*}$ $=$ $\underset{w\in Supp(T)}{\sum }$ $\overline{t_{w}}$ $L_{w^{-1}}$ in $M_{G}.$ \end{center} Thus the products $T^{*}T$ and $TT^{*}$ are \begin{center} $T^{*}T$ $=$ $\underset{(w_{1},w_{2})\in Supp(T)^{2}}{\sum }$ $\overline{t_{w_{1}}}$ $t_{w_{2}}$ $L_{w_{1}^{-1}w_{2}}$ \end{center} and \begin{center} $TT^{*}$ $=$ $\underset{(y_{1},y_{2})\in Supp(T)^{2}}{\sum }$ $t_{y_{1}}\overline{t_{y_{2}}}$ $L_{y_{1}y_{2}^{-1}},$ \end{center} respectively,
where \begin{center} $Supp(T)^{2}$ $\overset{def}{=}$ $Supp(T)$ $\times $ $Supp(T).$ \end{center} \begin{definition} Let $X$ be a subset of the graph groupoid $\Bbb{G}$ of $G.$ We say that the subset $X$ is \emph{alternatively disconnected}, if it satisfies: (i)\ \ \ $\left| X\right| $ $\geq $ $2,$ (ii)\ \ for any pair $(w_{1},$ $w_{2})$ of ``distinct'' elements $w_{1}$ and $w_{2}$ of \begin{center} $X$ $\cap $ $FP_{r}(\widehat{G})$ (if it is nonempty), \end{center} neither ``$w_{1}^{-1}$ and $w_{2},$'' nor ``$w_{1}$ and $w_{2}^{-1}$'' is admissible in $\Bbb{G}.$ \end{definition} Let $G$ be a finite graph, \begin{center} $G$ $=$ \quad $_{v_{1}}\bullet \overset{e_{1}}{\longleftarrow }\underset{v_{2}}{\bullet }\overset{e_{2}}{\longrightarrow }\bullet _{v_{3}}$ \end{center} and let \begin{center} $X_{1}$ $=$ $\{v_{1},$ $e_{1},$ $e_{2}\},$ $X_{2}$ $=$ $\{e_{1}^{-1},$ $e_{2},$ $v_{3}\},$ $X_{3}$ $=$ $\{v_{2},$ $v_{3}\}$ \end{center} be given subsets of the graph groupoid $\Bbb{G}$ of $G.$ Then, we can check that the subset $X_{1}$ is not alternatively disconnected, because it does not satisfy the condition (ii) of the definition; i.e., both ``$e_{1}^{-1}$ and $e_{2},$'' and ``$e_{2}^{-1}$ and $e_{1}$'' are admissible in $\Bbb{G}$. Also, we can see that the subset $X_{2}$ is alternatively disconnected. Indeed, neither ``$e_{1}$ $=$ $(e_{1}^{-1})^{-1}$ and $e_{2},$'' nor ``$e_{2}^{-1}$ and $e_{1}^{-1}$'' is admissible in $\Bbb{G}.$ Clearly, the subset $X_{3}$ is alternatively disconnected, since it satisfies the conditions (i) and (ii) of the above definition. Also, all vertex sets of (finite) graphs with at least two vertices are alternatively disconnected in the above sense. Now, let's go back to our main interest of this section.
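The admissibility checks in the examples above can be carried out mechanically. Below is a minimal sketch under illustrative assumptions (not the paper's formalism): a word in the graph groupoid is encoded as a tuple of directed edges $(source, target)$, and "$w_{1}$ then $w_{2}$ admissible" is tested by comparing the terminal vertex of $w_{1}$ with the initial vertex of $w_{2}$.

```python
# Illustrative encodings only: a word is a tuple of (source, target) edges;
# vertices are passed separately, since condition (ii) only constrains the
# reduced-finite-path part X ∩ FP_r(G^).

def inv(word):                       # groupoid inverse: reverse each edge
    return tuple((t, s) for (s, t) in reversed(word))

def admissible(w1, w2):              # w1 followed by w2 is composable
    return w1[-1][1] == w2[0][0]

def alternatively_disconnected(vertices, paths):
    """Definition: |X| >= 2, and for distinct w1, w2 in X ∩ FP_r(G^),
    neither 'inv(w1) and w2' nor 'w1 and inv(w2)' is admissible."""
    if len(vertices) + len(paths) < 2:
        return False
    return not any(
        w1 != w2 and (admissible(inv(w1), w2) or admissible(w1, inv(w2)))
        for w1 in paths for w2 in paths)

# The paper's graph  v1 <-e1- v2 -e2-> v3:
e1, e2 = (("v2", "v1"),), (("v2", "v3"),)
print(alternatively_disconnected({"v1"}, {e1, e2}))        # X_1: False
print(alternatively_disconnected({"v3"}, {inv(e1), e2}))   # X_2: True
```

The two printed values reproduce the verdicts for $X_{1}$ and $X_{2}$ obtained by hand above.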
For a graph operator $T$ of $M_{G}$ to be unitary, both operators $T^{*}T$ and $TT^{*}$ must be the identity element \begin{center} $1_{M_{G}}$ $=$ $\underset{v\in V(\widehat{G})}{\sum }$ $L_{v}$ in $M_{G}.$ \end{center} Thus we can obtain the following characterization. \begin{theorem} Let $G$ be a finite graph with \begin{center} $\left| V(G)\right| $ $\geq $ $2,$ \end{center} and let $T$ $\in $ $M_{G}$ be a graph operator with its support $Supp(T).$ Then $T$ is unitary, if and only if (i)$\ \ \ Supp(T)$ is alternatively disconnected, (ii)\ \ the support $Supp(T)$ satisfies \begin{center} $\left( Supp(T)\right) ^{-1}\left( Supp(T)\right) $ $=$ $V(\widehat{G}),$ \end{center} where $X^{-1}X$ $\overset{def}{=}$ $\{w_{1}^{-1}w_{2}$ $:$ $w_{1},$ $w_{2}$ $\in $ $X\},$ for all $X$ $\subset $ $\Bbb{G},$ and (iii) the coefficients of $T$ satisfy \begin{center} $\underset{w\in Supp(T),\,w^{-1}w=v}{\sum }$ $\left| t_{w}\right| ^{2}$ $=$ $1,$ for all $v$ $\in $ $V(\widehat{G}),$ \end{center} in $\Bbb{C}.$ \end{theorem} \begin{proof} Assume that the given graph operator $T$ is unitary on $H_{G}.$ Then, by definition, (3.3.1) \begin{center} $T^{*}T$ $=$ $\underset{(w_{1},w_{2})\in Supp(T)^{2}}{\sum }$ $\overline{t_{w_{1}}}$ $t_{w_{2}}$ $L_{w_{1}^{-1}w_{2}}$ $=$ $\underset{v\in V(\widehat{G})}{\sum }$ $L_{v}$ $=$ $1_{M_{G}},$ \end{center} and (3.3.2) \begin{center} $TT^{*}$ $=$ $\underset{(y_{1},y_{2})\in Supp(T)^{2}}{\sum }$ $t_{y_{1}}\overline{t_{y_{2}}}$ $L_{y_{1}y_{2}^{-1}}$ $=$ $\underset{v\in V(\widehat{G})}{\sum }$ $L_{v}$ $=$ $1_{M_{G}},$ \end{center} in the graph von Neumann algebra $M_{G}$ of a finite connected graph $G.$ Notice here that, if there exists a pair $(w_{1},$ $w_{2})$ of distinct elements $w_{1}$ $\neq $ $w_{2}$ in $Supp(T),$ such that $w_{1}^{-1}w_{2}$ $\neq $ $\emptyset ,$ equivalently, $w_{1}^{-1}$ and $w_{2}$ are admissible in $\Bbb{G},$ then there exists a nonzero summand \begin{center}
$\overline{t_{w_{1}}}$ $t_{w_{2}}$ $L_{w_{1}^{-1}w_{2}}$ \end{center} in (3.3.1). By the distinctness of $w_{1}$ and $w_{2},$ and by the assumption $w_{1}^{-1}w_{2}$ $\neq $ $\emptyset ,$ the element $w_{1}^{-1}w_{2}$ must be a nonempty reduced finite path in $\Bbb{G}.$ This shows that the first equality (3.3.1) does not hold, and hence it contradicts the unitary property of $T.$ Similarly, if $w_{1}w_{2}^{-1}$ $\neq $ $\emptyset ,$ then there exists a nonzero summand \begin{center} $t_{w_{1}}$ $\overline{t_{w_{2}}}$ $L_{w_{1}w_{2}^{-1}}$ \end{center} in (3.3.2), and hence this term breaks the unitary property of $T,$ which contradicts our assumption for $T.$ Therefore, to satisfy the unitary property of $T,$ the support $Supp(T)$ of $T$ must be alternatively disconnected, i.e., for any pair $(w_{1},$ $w_{2})$ of distinct elements in $Supp(T),$ neither ``$w_{1}^{-1}$ and $w_{2},$'' nor ``$w_{1}$ and $w_{2}^{-1}$'' is admissible in $\Bbb{G}.$ Under the alternative disconnectedness of $Supp(T),$ we can obtain the following reduced form of the left-hand side of (3.3.1): (3.3.3) \begin{center} $ \begin{array}{ll} T^{*}T & =\underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}=w_{2}}{\sum }\overline{t_{w_{1}}}t_{w_{2}}L_{w_{1}^{-1}w_{2}} \\ & =\underset{w\in Supp(T)}{\sum }\overline{t_{w}}t_{w}L_{w^{-1}w} \\ & =\underset{w\in Supp(T)}{\sum }\left| t_{w}\right| ^{2}L_{w^{-1}w}.
\end{array} $ \end{center} Remark here that $w^{-1}w$ $\in $ $V(\widehat{G}),$ for all $w$ $\in $ $\Bbb{G}.$ By (3.3.3), we can re-write that $T$ is unitary if and only if (3.3.4) \begin{center} $T^{*}T$ $=$ $\underset{w\in Supp(T)}{\sum }$ $\left| t_{w}\right| ^{2}$ $L_{w^{-1}w}$ $=$ $\underset{v\in V(\widehat{G})}{\sum }$ $L_{v}$ $=$ $1_{M_{G}},$ \end{center} by the finiteness of $G.$ And the second equality of (3.3.4) can be refined as follows: (3.3.5) \begin{center} $ \begin{array}{ll} \underset{w\in Supp(T)}{\sum }\left| t_{w}\right| ^{2}L_{w^{-1}w} & = \underset{v\in V(\widehat{G})}{\sum }\left( \underset{w\in Supp(T),\,w^{-1}w=v}{\sum }\left| t_{w}\right| ^{2}\,L_{v}\right) \\ & =\underset{v\in V(\widehat{G})}{\sum }\left( \underset{w\in Supp(T),\,w^{-1}w=v}{\sum }\left| t_{w}\right| ^{2}\right) L_{v}=1_{M_{G}}. \end{array} $ \end{center} Therefore, by (3.3.5), the support of $T$ must satisfy (3.3.6) \begin{center} $\left( Supp(T)^{-1}\right) \left( Supp(T)\right) $ $=$ $V(\widehat{G}),$ \end{center} and, under (3.3.6), the coefficients of $T$ must satisfy (3.3.7) \begin{center} $\underset{w\in Supp(T),\,w^{-1}w=v}{\sum }$ $\left| t_{w}\right| ^{2}$ $=$ $1,$ for all $v$ $\in $ $V(\widehat{G}),$ \end{center} in $\Bbb{C}$, where \begin{center} $X^{-1}X$ $\overset{def}{=}$ $\{w_{1}^{-1}w_{2}$ $:$ $w_{1},$ $w_{2}$ $\in $ $X\},$ for all $X$ $\subset $ $\Bbb{G}.$ \end{center} That is, we obtain that $T^{*}T$ $=$ $1_{M_{G}},$ if and only if the support $Supp(T)$ is alternatively disconnected, it satisfies (3.3.6), and the coefficients of $T$ satisfy (3.3.7).
Similar to the above observation, we can get that $TT^{*}$ $=$ $1_{M_{G}},$ if and only if $Supp(T)$ is alternatively disconnected, it satisfies (3.3.6), and the coefficients of $T$ satisfy (3.3.8) \begin{center} $\underset{y\in Supp(T),\,yy^{-1}=x}{\sum }$ $\left| t_{y}\right| ^{2}$ $=$ $1,$ for all $x$ $\in $ $V(\widehat{G}).$ \end{center} However, it is easy to check that the conditions (3.3.7) and (3.3.8) are equivalent, because there exists a bijection $g,$ \begin{center} $g$ $:$ $w$ $\in $ $Supp(T)$ $\longmapsto $ $w^{-1}$ $\in $ $Supp(T)^{-1}.$ \end{center} Therefore, we can conclude that the graph operator $T$ is unitary, if and only if the support $Supp(T)$ of $T$ is alternatively disconnected, it satisfies the condition (3.3.6), and the coefficients of $T$ satisfy (3.3.7) (or (3.3.8)). \end{proof} Similar to the self-adjointness of graph operators, the unitary property of graph operators is also determined by the admissibility on the graph groupoids of given graphs and certain conditions on the coefficients of the operators. \begin{remark} In the proof of the above theorem (the unitary characterization of graph operators), where \begin{center} $\left| V(G)\right| $ $\geq $ $2,$ \end{center} the alternative disconnectedness is crucial. Since $\left| V(G)\right| $ $>$ $1,$ all generating operators $L_{w}$'s of the graph von Neumann algebra $M_{G}$ are partial isometries. Moreover, the products $L_{w_{1}}$ ... $L_{w_{n}},$ for all $n$ $\in $ $\Bbb{N},$ are partial isometries, whose initial and final spaces are ``not'' identified with the graph Hilbert space $H_{G}.$ Thus, to satisfy the unitary property, the products $L_{w_{1}^{-1}w_{2}}$ in $T^{*}T$ and $L_{w_{1}w_{2}^{-1}}$ in $TT^{*}$ must be the zero operator, whenever $w_{1}$ $\neq $ $w_{2}$ in $\Bbb{G}.$ \end{remark} In the rest of this section, we consider the following two examples.
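Condition (iii) of the above theorem is a purely numerical constraint on the coefficients, once the vertex-wise grouping $\{w\in Supp(T):w^{-1}w=v\}$ is known. The sketch below is illustrative only (the combinatorial conditions (i) and (ii) are assumed to be verified separately, and the labels and range map are hypothetical encodings); it checks the constraint for sample coefficient data on the graph $v_{1}\rightarrow v_{2}\rightarrow v_{3}\rightarrow v_{4}$ used in the examples that follow.

```python
# Illustrative sketch: `coeffs` maps word labels to complex numbers, and
# `r` maps each word w to its range vertex w^{-1}w. Both encodings are
# assumptions made for this example, not the paper's formalism.

def vertexwise_norms(coeffs, r):
    """Sum of |t_w|^2 over {w in Supp(T): r(w) = v}, for each vertex v."""
    sums = {}
    for w, t in coeffs.items():
        v = r(w)
        sums[v] = sums.get(v, 0.0) + abs(t) ** 2
    return sums

def coefficient_condition(coeffs, r, vertices, tol=1e-12):
    """Condition (iii): the vertex-wise sums all equal 1."""
    sums = vertexwise_norms(coeffs, r)
    return all(abs(sums.get(v, 0.0) - 1.0) < tol for v in vertices)

# Data modeled on T_2 below, on v1 -e1-> v2 -e2-> v3 -e3-> v4:
r = {"v1": "v1", "v3": "v3", "e2^-1": "v2", "e2e3": "v4"}.get
T2 = {"v1": 1.0, "v3": 1j, "e2^-1": -1.0, "e2e3": (1 + 1j) / abs(1 + 1j)}
print(coefficient_condition(T2, r, ["v1", "v2", "v3", "v4"]))  # True
```

Each coefficient here has modulus one, and each vertex receives exactly one summand, so every vertex-wise sum equals $1$.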
\begin{example} Let $T$ $=$ $\underset{v\in V(\widehat{G})}{\sum }$ $t_{v}$ $L_{v}$ be a graph operator in $M_{G}.$ Then it is unitary, if and only if (i) $t_{v}$ $\neq $ $0,$ and (ii) $\left| t_{v}\right| ^{2}$ $=$ $1,$ for all $v$ $\in $ $V(\widehat{G}).$ \end{example} \begin{example} Let $G$ be a connected finite graph, \begin{center} $G$ $=$ \qquad $_{v_{1}}\bullet \overset{e_{1}}{\longrightarrow }\underset{v_{2}}{\bullet }\overset{e_{2}}{\longrightarrow }\underset{v_{3}}{\bullet }\overset{e_{3}}{\longrightarrow }\bullet _{v_{4}}$. \end{center} Let $T_{1}$ $=$ $t_{v_{1}}L_{v_{1}}$ $+$ $t_{e_{2}^{-1}}L_{e_{2}^{-1}}+t_{e_{2}e_{3}}$ $L_{e_{2}e_{3}}$ be a given graph operator in the graph von Neumann algebra $M_{G}$ of $G.$ We can check that \begin{center} $Supp(T_{1})$ $=$ $\{v_{1},$ $e_{2}^{-1},$ $e_{2}e_{3}\},$ \end{center} and hence \begin{center} $\Pi _{1}$ $=$ $\left( Supp(T_{1})^{-1}\right) \left( Supp(T_{1})\right) $ $=$ $\{v_{1},$ $v_{2},$ $v_{4}\}.$ \end{center} So, $\Pi _{1}$ $\neq $ $V(G)$ $=$ $V(\widehat{G}).$ Therefore, this graph operator $T_{1}$ is not unitary. Now, let $T_{2}$ $=$ $t_{v_{1}}L_{v_{1}}$ $+$ $t_{v_{3}}L_{v_{3}}$ $+$ $t_{e_{2}^{-1}}L_{e_{2}^{-1}}$ $+$ $t_{e_{2}e_{3}}L_{e_{2}e_{3}}.$ Then the support \begin{center} $Supp(T_{2})$ $=$ $\{v_{1},$ $v_{3},$ $e_{2}^{-1},$ $e_{2}e_{3}\}$ of $T_{2}$ \end{center} satisfies \begin{center} $\Pi _{2}$ $=$ $\left( Supp(T_{2})^{-1}\right) \left( Supp(T_{2})\right) $ $=$ $\{v_{1},$ $v_{2},$ $v_{3},$ $v_{4}\}$ $=$ $V(\widehat{G}).$ \end{center} Moreover, the support $Supp(T_{2})$ is alternatively disconnected: for any pair $(w_{1},$ $w_{2})$ of distinct elements $w_{1}$ and $w_{2}$ of $Supp(T_{2}),$ the relevant products are inadmissible. For instance, \begin{center} $\left( e_{2}^{-1}\right) ^{-1}(e_{2}e_{3})$ $=$ $e_{2}^{2}e_{3}$ $=$ $\emptyset $ $e_{3}$ $=$ $\emptyset ,$ $v_{3}^{-1}$ $e_{2}$ $=$ $v_{3}$ $e_{2}$ $=$ $\emptyset ,$ and $e_{2}^{-1}$ $v_{3}$ $=$ $\emptyset ,$ \end{center} etc.
Therefore, we can obtain that the operator $T_{2}$ is unitary on the graph Hilbert space $H_{G},$ if and only if \begin{center} $\underset{w\in Supp(T_{2}),\,w^{-1}w=v_{1}}{\sum }$ $\left| t_{w}\right| ^{2}$ $=$ $\left| t_{v_{1}}\right| ^{2}$ $=$ $1,$ $\underset{w\in Supp(T_{2}),\,w^{-1}w=v_{2}}{\sum }$ $\left| t_{w}\right| ^{2}$ $=$ $\left| t_{e_{2}^{-1}}\right| ^{2}$ $=$ $1,$ $\underset{w\in Supp(T_{2}),\,w^{-1}w=v_{3}}{\sum }$ $\left| t_{w}\right| ^{2}$ $=$ $\left| t_{v_{3}}\right| ^{2}$ $=$ $1,$ \end{center} and \begin{center} $\underset{w\in Supp(T_{2}),\,w^{-1}w=v_{4}}{\sum }$ $\left| t_{w}\right| ^{2}$ $=$ $\left| t_{e_{2}e_{3}}\right| ^{2}$ $=$ $1.$ \end{center} (Note that $v_{2}$ $\notin $ $Supp(T_{2}),$ so the only summand at $v_{2}$ comes from $e_{2}^{-1},$ since $r(e_{2}^{-1})$ $=$ $e_{2}e_{2}^{-1}$ $=$ $v_{2}.$) Simply, $T_{2}$ is unitary, if and only if \begin{center} $\left| t_{v_{1}}\right| ^{2}$ $=$ $\left| t_{v_{3}}\right| ^{2}$ $=$ $\left| t_{e_{2}^{-1}}\right| ^{2}$ $=$ $\left| t_{e_{2}e_{3}}\right| ^{2}$ $=$ $1.$ \end{center} \end{example} The above unitary characterization of graph operators (induced by finite graphs) is in fact incomplete, since we did not consider the case where a given graph $G$ satisfies $\left| V(G)\right| $ $=$ $1.$ If a finite graph $G$ has only one vertex $v_{0},$ then it is graph-isomorphic to the one-vertex-$\left| E(G)\right| $-multi-loop-edge graph $O_{\left| E(G)\right| }.$ To make our unitary characterization of graph operators complete, we need the following theorem. \begin{theorem} Let $O_{n}$ be the one-vertex-$n$-loop-edge graph with its graph groupoid $\Bbb{O}_{n},$ having its unique vertex $v_{O},$ and let $M_{O_{n}}$ be the graph von Neumann algebra of $O_{n},$ for $n$ $\in $ $\Bbb{N}.$ Let \begin{center} $T$ $=$ $\underset{w\in Supp(T)}{\sum }$ $t_{w}$ $L_{w}$ $\in $ $M_{O_{n}}$ \end{center} be a fixed graph operator.
Then $T$ is unitary, if and only if (i)$\ \ \ \left( Supp(T)\right) ^{-1}\left( Supp(T)\right) $ $=$ $\{v_{O}\},$ (ii)\ \ the coefficients $\{t_{w}$ $:$ $w$ $\in $ $Supp(T)\}$ of $T$ satisfy \begin{center} $\underset{(w_{1},w_{2})\in Supp(T)^{2}}{\sum }$ $\overline{t_{w_{1}}}$ $t_{w_{2}}$ $=$ $1.$ \end{center} \end{theorem} \begin{proof} ($\Leftarrow $) Assume that a fixed graph operator $T$ of $M_{O_{n}}$ satisfies both conditions (i) and (ii). Then we can obtain that $\qquad \qquad T^{*}T$ $=$ $\underset{(w_{1}^{-1},w_{2})\in \left( Supp(T)\right) ^{-1}\times Supp(T)}{\sum }$ $\left( \overline{t_{w_{1}}}\,t_{w_{2}}\right) $ $L_{w_{1}^{-1}w_{2}}$ $\qquad \qquad \qquad =$ $\underset{(w_{1},w_{2})\in Supp(T)^{2}}{\sum }$ $\left( \overline{t_{w_{1}}}\,t_{w_{2}}\right) $ $L_{v_{O}}$ by (i) $\qquad \qquad \qquad =$ $\left( \underset{(w_{1},w_{2})}{\sum }\left( \overline{t_{w_{1}}}\,t_{w_{2}}\right) \right) $ $L_{v_{O}}$ $=$ $L_{v_{O}}$ by (ii). Notice that, by definition, $L_{v_{O}}$ is the identity element of $M_{O_{n}},$ i.e., $L_{v_{O}}$ $=$ $1_{M_{O_{n}}}.$ Therefore, we have that (3.3.9) \begin{center} $T^{*}T$ $=$ $1_{M_{O_{n}}}$.
\end{center} Consider now $TT^{*}.$ Observe that, under the hypothesis, $\qquad \qquad TT^{*}$ $=$ $\underset{(w_{1},w_{2}^{-1})\in Supp(T)\times \left( Supp(T)\right) ^{-1}}{\sum }$ $\left( t_{w_{1}}\overline{t_{w_{2}}}\right) $ $L_{w_{1}w_{2}^{-1}}$ $\qquad \qquad \qquad =$ $\underset{(w_{1},w_{2})\in Supp(T)^{2}}{\sum }\left( t_{w_{1}}\overline{t_{w_{2}}}\right) $ $L_{v_{O}}$ by (i), and by the fact that: \begin{center} $ \begin{array}{ll} \left( Supp(T)\right) ^{-1}\left( Supp(T)\right) & =\{v_{O}\}=\{v_{O}^{-1}\} \\ & =\left( Supp(T)\right) \left( Supp(T)\right) ^{-1}, \end{array} $ \end{center} thus we have $\qquad \qquad \qquad =$ $\left( \underset{(w_{1},w_{2})}{\sum }(t_{w_{1}}\overline{t_{w_{2}}})\right) $ $L_{v_{O}}$ $=$ $\overline{\left( \underset{(w_{1},w_{2})}{\sum }(\overline{t_{w_{1}}}\,t_{w_{2}})\right) }$ $L_{v_{O}}$ $\qquad \qquad \qquad =$ $\overline{1}$ $\cdot $ $L_{v_{O}}$ $=$ $1$ $\cdot $ $L_{v_{O}}$ $=$ $L_{v_{O}},$ by (ii). Therefore, we obtain that (3.3.10) \begin{center} $TT^{*}$ $=$ $1_{M_{O_{n}}}.$ \end{center} So, by (3.3.9) and (3.3.10), this graph operator $T$ is unitary. ($\Rightarrow $) Suppose a graph operator $T$ of $M_{O_{n}}$ is unitary. Assume that $T$ does not satisfy the condition (i). Then we can pick a pair \begin{center} $(w_{1},$ $w_{2})$ $\in $ $(Supp(T))^{2},$ \end{center} such that $w_{1}$ $\neq $ $w_{2},$ and $w_{1}^{-1}w_{2}$ $\neq $ $v_{O},$ equivalently, $w_{1}^{-1}w_{2}$ $\in $ $FP_{r}(\widehat{O_{n}}).$ This means that the product $T^{*}T$ of $T^{*}$ and $T$ contains a nonzero summand $\overline{t_{w_{1}}}$ $t_{w_{2}}$ $L_{w_{1}^{-1}w_{2}}$. Thus, \begin{center} $T^{*}T$ $\neq $ $1_{M_{O_{n}}}$ $=$ $L_{v_{O}}.$ \end{center} This contradicts our assumption that $T$ is unitary. Assume now that $T$ does not satisfy the condition (ii).
Say \begin{center} $\underset{(w_{1},w_{2})\in Supp(T)^{2}}{\sum }$ $(\overline{t_{w_{1}}}$ $t_{w_{2}})$ $=$ $t_{0}$ $\neq $ $1$, in $\Bbb{C}.$ \end{center} For convenience, assume $T$ satisfies the condition (i). Then the product $T^{*}T$ of $T^{*}$ and $T$ is identical to \begin{center} $T^{*}T$ $=$ $t_{0}$ $L_{v_{O}}$ $\neq $ $L_{v_{O}}$ $=$ $1_{M_{O_{n}}}.$ \end{center} This contradicts the unitary property of $T.$ \end{proof} The above theorem characterizes the unitary property of graph operators induced by the one-vertex-multi-loop-edge graphs. \textbf{Conclusion (Unitary Characterization of Graph Operators)} Let $G$ be a finite graph and let $M_{G}$ be the graph von Neumann algebra of $G.$ Let \begin{center} $T$ $=$ $\underset{w\in Supp(T)}{\sum }$ $t_{w}$ $L_{w}$ $\in $ $M_{G}$ \end{center} be a graph operator. (3.3.11) Assume that $\left| V(G)\right| $ $=$ $1.$ Then $T$ is unitary, if and only if \begin{center} $\left( Supp(T)\right) ^{-1}\left( Supp(T)\right) $ $=$ $V(G),$ \end{center} and \begin{center} $\underset{(w_{1},w_{2})\in Supp(T)^{2}}{\sum }$ $\overline{t_{w_{1}}}$ $t_{w_{2}}$ $=$ $1,$ in $\Bbb{C}.$ \end{center} (3.3.12) Assume now that $\left| V(G)\right| $ $\geq $ $2.$ Then $T$ is unitary, if and only if $Supp(T)$ is alternatively disconnected, and \begin{center} $\left( Supp(T)\right) ^{-1}\left( Supp(T)\right) $ $=$ $V(G),$ \end{center} and \begin{center} $\underset{w\in Supp(T),\,w^{-1}w=v}{\sum }$ $\left| t_{w}\right| ^{2}$ $=$ $1,$ for all $v$ $\in $ $V(G).$ \end{center} $\square $ \section{Normality of Graph Operators} In this section, we will consider the normality of graph operators.
Let $G$ be a connected directed graph with its graph groupoid $\Bbb{G},$ and let $M_{G}$ $=$ $\overline{\Bbb{C}[L(\Bbb{G})]}^{w}$ be the graph von Neumann algebra of $G$ in $B(H_{G}),$ where $(H_{G},$ $L)$ is the canonical representation of $\Bbb{G},$ consisting of the graph Hilbert space $H_{G}$ $=$ $l^{2}(\Bbb{G}),$ and the canonical groupoid action $L$ of $\Bbb{G}.$ We are interested in the normality of graph operators in $M_{G}.$ Recall that an operator $T$ is \emph{normal}, if $T^{*}T$ $=$ $TT^{*}.$ Before checking the normality of a graph operator $T$ $\in $ $M_{G},$ we will consider the hyponormality of $T$ in Section 4.1. Recall that an operator $T$ is \emph{hyponormal}, if $T^{*}T$ $-$ $TT^{*}$ is positive. The hyponormality characterization of graph operators will give the normality characterization directly. Notice here that (pure) hyponormal operators and normal operators have few common analytic properties. So, in general, we do not know how hyponormality determines normality. However, in our graph-operator case, the hyponormality characterization determines the normality characterization. As we have seen in Section 3, the self-adjointness and the unitary property of graph operators are characterized by the admissibility on the graph groupoid $\Bbb{G}$ (equivalently, the combinatorial property of $G$ or $\widehat{G}$), and certain analytic data of coefficients. We hope to obtain a similar normality characterization. \subsection{Hyponormality} To consider the normality of graph operators, we first characterize their hyponormality. Note that hyponormality, itself, is interesting in operator theory (e.g., see [12] and [13]). For instance, the hyponormality of Toeplitz operators has been studied widely (e.g., see [3] and the papers cited in [3]). We may understand hyponormality (or co-hyponormality) as a generalized normality. But keep in mind that hyponormal operators and normal operators do not share many analytic properties.
However, in our case, we can show that the hyponormality of graph operators and the normality of graph operators are combinatorially related. In this section, we characterize the hyponormality of a given graph operator $T,$ in terms of the combinatorial information on a fixed graph groupoid and the analytic data on the coefficients of $T,$ as in Sections 3.2 and 3.3. Recall that an operator $T$ is \emph{positive} on a Hilbert space $H,$ if \begin{center} $\langle T\xi ,$ $\xi \rangle $ $\geq $ $0,$ for all $\xi $ $\in $ $H.$ \end{center} Here, $\langle \cdot ,\cdot \rangle $ means the inner product on $H,$ and $\left\| \cdot \right\| $ means the corresponding Hilbert norm induced by $\langle \cdot ,\cdot \rangle .$ If $T$ is a positive operator on $H,$ we write \begin{center} $T$ $\geq $ $0_{H},$ \end{center} where $0_{H}$ is the zero operator in $B(H).$ When $T_{1}$ and $T_{2}$ are operators on $H,$ we write \begin{center} $T_{1}$ $\geq $ $T_{2},$ \end{center} if the operator $T_{1}$ $-$ $T_{2}$ is a positive operator on $H.$ Thus, by definition, an operator $T$ is \emph{hyponormal} on $H,$ if and only if $T^{*}T$ $\geq $ $TT^{*}$ on $H,$ equivalently, the operator \begin{center} $S(T)$ $\overset{denote}{=}$ $[T^{*},$ $T]$ $\overset{def}{=}$ $T^{*}T$ $-$ $TT^{*}$ \end{center} is a positive operator on $H,$ where $[A,$ $B]$ means the operator \begin{center} $[A,$ $B]$ $\overset{def}{=}$ $AB$ $-$ $BA$, for all $A,$ $B$ $\in $ $B(H).$ \end{center} We call the operator $[A,$ $B]$ the \emph{commutator} of $A$ and $B.$ In particular, if $A$ $=$ $T^{*},$ and $B$ $=$ $T,$ for $T$ $\in $ $B(H),$ the commutator $[T^{*},$ $T]$ is called the \emph{self-commutator of} $T.$ Notice here that the self-commutator $S(T)$ $=$ $[T^{*},$ $T]$ of every operator $T$ is self-adjoint on $H.$ Define two maps $s,$ $r$ $:$ $\Bbb{G}$ $\rightarrow $ $V(\widehat{G})$ by \begin{center} $r(w)$ $\overset{def}{=}$ $w^{-1}w,$ and $s(w)$ $\overset{def}{=}$ $ww^{-1},$ \end{center} for all $w$ $\in $ $\Bbb{G}$; i.e.,
these maps $r$ and $s$ are the range map and the source map of the (graph) groupoid $\Bbb{G},$ in the sense of Section 2.2. Now, fix a graph operator \begin{center} $T$ $=$ $\underset{w\in Supp(T)}{\sum }$ $t_{w}L_{w}$ $\in $ $M_{G},$ \end{center} acting on the graph Hilbert space $H_{G}.$ The self-commutator $S(T)$ of $T$ is computed as follows: $\qquad S(T)$ $=$ $T^{*}T$ $-$ $TT^{*}$ (4.1.1) $\qquad \qquad =$ $\underset{(w_{1},w_{2})\in Supp(T)^{2}}{\sum }$ $\overline{t_{w_{1}}}$ $t_{w_{2}}$ $L_{w_{1}^{-1}w_{2}}$ $-$ $\underset{(y_{1},y_{2})\in Supp(T)^{2}}{\sum }$ $t_{y_{1}}$ $\overline{t_{y_{2}}}$ $L_{y_{1}y_{2}^{-1}}$ $\qquad \qquad =$ $\underset{(w_{1},w_{2})\in Supp(T)^{2}}{\sum }$ $\overline{t_{w_{1}}}$ $t_{w_{2}}\left( L_{w_{1}^{-1}w_{2}}-L_{w_{2}w_{1}^{-1}}\right) $ (4.1.2) $\qquad \qquad =$ $\underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}^{-1}w_{2}\neq \emptyset }{\sum }$ $\overline{t_{w_{1}}}$ $t_{w_{2}}$ $\left( L_{w_{1}^{-1}w_{2}}-L_{w_{2}w_{1}^{-1}}\right) $ $\qquad \qquad =$ $\left( \underset{w\in Supp(T)}{\sum }\left| t_{w}\right| ^{2}\left( L_{r(w)}-L_{s(w)}\right) \right) $ $+$ $\left( \underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}\neq w_{2},\,w_{1}^{-1}w_{2}\neq \emptyset }{\sum }\overline{t_{w_{1}}}\,t_{w_{2}}\left( L_{w_{1}^{-1}w_{2}}-L_{w_{2}w_{1}^{-1}}\right) \right) $ where $r(w)$ $=$ $w^{-1}w,$ and $s(w)$ $=$ $ww^{-1},$ for all $w$ $\in $ $\Bbb{G}$ $\qquad \qquad =$ $\left( \underset{w\in Supp(T)}{\sum }\left| t_{w}\right| ^{2}\left( L_{r(w)}-L_{s(w)}\right) \right) $ \begin{center} $+$ $\left( \underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}\neq w_{2}^{\pm 1},\,w_{1}^{-1}w_{2}\neq \emptyset }{\sum }\mathcal{S}_{(w_{1},w_{2})}\right) ,$ \end{center} where \begin{center} $\mathcal{S}_{(w_{1},w_{2})}$ $\overset{def}{=}$ $\left( \overline{t_{w_{1}}}t_{w_{2}}\left( L_{w_{1}^{-1}w_{2}}-L_{w_{2}w_{1}^{-1}}\right) +t_{w_{1}}\overline{t_{w_{2}}}\left( L_{w_{2}^{-1}w_{1}}-L_{w_{1}w_{2}^{-1}}\right) \right) ,$ \end{center} for all $(w_{1},$ $w_{2})$ $\in $ $Supp(T)^{2},$ such that $w_{1}$ $\neq $ $w_{2}^{\pm 1}$ (4.1.3) $\qquad \qquad =$ $\left( \underset{w\in Supp(T)}{\sum }\left| t_{w}\right| ^{2}\left( L_{r(w)}-L_{s(w)}\right) \right) $ \begin{center} $ \begin{array}{ll} +\left( \underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}\neq w_{2}^{\pm 1},\,w_{1}^{-1}w_{2}\neq \emptyset }{\sum }\right. & \left( \left( \overline{t_{w_{1}}}t_{w_{2}}L_{w_{1}^{-1}w_{2}}+t_{w_{1}}\overline{t_{w_{2}}}L_{w_{2}^{-1}w_{1}}\right) \right. \\ & \left. \left. \quad -\left( \overline{t_{w_{1}}}t_{w_{2}}L_{w_{2}w_{1}^{-1}}+t_{w_{1}}\overline{t_{w_{2}}}L_{w_{1}w_{2}^{-1}}\right) \right) \right) . \end{array} $ \end{center} The computation (4.1.3) indeed shows that the self-commutator $S(T)$ is self-adjoint on the graph Hilbert space $H_{G},$ since each summand of (4.1.3) is self-adjoint. Recall that the sum of two self-adjoint operators is again self-adjoint.
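The coefficient bookkeeping behind (4.1.1) and (4.1.2) can be reproduced numerically. The following minimal sketch is an illustration only: the word labels, the single-edge groupoid, and the partial multiplication table `table` are assumptions made for this example. It computes the coefficients of $S(T)=T^{*}T-TT^{*}$ for an operator supported on one edge, recovering the diagonal term $\left| t_{w}\right| ^{2}\left( L_{r(w)}-L_{s(w)}\right) .$

```python
# Illustrative sketch: T = sum_w t_w L_w is a dict of coefficients; `mul`
# is the (partial) groupoid multiplication, returning the reduced word or
# None for the empty word; `inv` is the groupoid inverse.

def star(coeffs, inv):               # coefficients of T^*
    return {inv(w): t.conjugate() for w, t in coeffs.items()}

def prod(A, B, mul):                 # coefficients of the operator product AB
    out = {}
    for w1, a in A.items():
        for w2, b in B.items():
            w = mul(w1, w2)
            if w is not None:
                out[w] = out.get(w, 0) + a * b
    return out

def self_commutator(T, inv, mul):    # coefficients of S(T) = T*T - TT*
    Ts = star(T, inv)
    S = prod(Ts, T, mul)
    for w, t in prod(T, Ts, mul).items():
        S[w] = S.get(w, 0) - t
    return {w: t for w, t in S.items() if abs(t) > 1e-12}

# Toy groupoid of a single edge e: v1 -> v2, with E = e^{-1}:
table = {("v1", "v1"): "v1", ("v2", "v2"): "v2", ("v1", "e"): "e",
         ("e", "v2"): "e", ("e", "E"): "v1", ("E", "e"): "v2",
         ("v2", "E"): "E", ("E", "v1"): "E"}
mul = lambda a, b: table.get((a, b))
inv = {"v1": "v1", "v2": "v2", "e": "E", "E": "e"}.get

S = self_commutator({"e": 2.0}, inv, mul)
print(S)   # {'v2': 4.0, 'v1': -4.0}: i.e. |t_e|^2 (L_{r(e)} - L_{s(e)})
```

The output is self-adjoint coefficient data (real, and invariant under the star operation), matching the general observation that $S(T)$ is always self-adjoint.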
The hyponormality of $T$ is guaranteed by the positivity of the self-adjoint operator $S(T).$ In general, it is not easy to check when a self-adjoint operator $S$ is positive, because, for example, it is hard to see when the spectrum $spec(S)$ (contained in $\Bbb{R}$) is contained in $\Bbb{R}_{0}^{+}$ $=$ $\{r$ $\in $ $\Bbb{R}$ $:$ $r$ $\geq $ $0\}.$ However, in our graph-operator case, we can check the positivity of $S(T),$ by (4.1.1), (4.1.2), (4.1.3), and the computations \begin{center} $\langle S(T)\xi _{x},$ $\xi _{x}\rangle ,$ and $\langle S(T)$ $\xi _{x},$ $\xi _{y}\rangle ,$ \end{center} for $x,$ $y$ $\in $ $\Bbb{G}$ $\setminus $ $\{\emptyset \}$ (equivalently, for $\xi _{x},$ $\xi _{y}$ $\in $ $\mathcal{B}_{H_{G}}$ in $H_{G}$), where $\langle \cdot ,\cdot \rangle $ means the inner product on $H_{G}.$ To check the positivity of $S(T),$ we have to show that \begin{center} $\langle S(T)$ $\xi ,$ $\xi \rangle $ $\geq $ $0,$ for all $\xi $ $\in $ $H_{G}.$ \end{center} Since the collection of vectors \begin{center} $\eta $ $=$ $\underset{x\in \Bbb{G}}{\sum }$ $r_{x}$ $\xi _{x}$ $\in $ $H_{G},$ with $r_{x}$ $\in $ $\Bbb{C},$ \end{center} is dense in $H_{G},$ it is enough to show that \begin{center} $\langle S(T)\eta ,$ $\eta \rangle $ $\geq $ $0,$ for all $\eta $ $\in $ $\mathcal{H}_{G},$ \end{center} where \begin{center} $\mathcal{H}_{G}$ $\overset{def}{=}$ $\left\{ \eta =\underset{x\in \Bbb{G}}{\sum }r_{x}\xi _{x}\left| r_{x}\in \Bbb{C},\xi _{x}\in \mathcal{B}_{H_{G}}\right.
\right\} $ $\subseteq $ $H_{G}.$ \end{center} \begin{lemma} \strut Let $L_{w}$ $\in $ $M_{G}$ be a generating operator of $M_{G}$ induced by $w$ $\in $ $\Bbb{G}.$ Then (4.1.4) \begin{center} $<L_{w}$ ${\Greekmath 0118} _{x},$ ${\Greekmath 0118} _{y}$ $>$ $=$ ${\Greekmath 010E} _{r(w),\,s(x)}$ ${\Greekmath 010E} _{wx,\,y},$ \end{center} where ${\Greekmath 010E} $ means the Kronecker delta. \end{lemma} \begin{proof} \strut Compute $\qquad <L_{w}$ ${\Greekmath 0118} _{x},$ ${\Greekmath 0118} _{y}$ $>$ $=$ $<{\Greekmath 0118} _{wx},$ ${\Greekmath 0118} _{y}$ $>$ \strut $\qquad \qquad \qquad =$ $\left\{ \begin{array}{ll} <{\Greekmath 0118} _{wx},\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ }{\Greekmath 0118} _{y}> & \RIfM@\expandafter\text@\else\expandafter\mbox\fi{if }r(w)=s(x) \\ <{\Greekmath 0118} _{\emptyset },\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ }{\Greekmath 0118} _{y}>\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ }=0 & \RIfM@\expandafter\text@\else\expandafter\mbox\fi{otherwise} \end{array} \right. $ \strut $\qquad \qquad \qquad =$ ${\Greekmath 010E} _{r(w),\,s(x)}$ $<{\Greekmath 0118} _{wx},$ ${\Greekmath 0118} _{y}$ $>$ \strut $\qquad \qquad \qquad =$ $\left\{ \begin{array}{ll} {\Greekmath 010E} _{r(w),\,s(x)}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ }\cdot \RIfM@\expandafter\text@\else\expandafter\mbox\fi{ }1 & \RIfM@\expandafter\text@\else\expandafter\mbox\fi{if }wx=y \\ {\Greekmath 010E} _{r(w),\,s(x)}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ }\cdot \RIfM@\expandafter\text@\else\expandafter\mbox\fi{ }0 & \RIfM@\expandafter\text@\else\expandafter\mbox\fi{otherwise} \end{array} \right. 
$

since $\xi _{wx},$ $\xi _{y}\in \mathcal{B}_{H_{G}}\cup \{0_{H_{G}}\},$

$\qquad \qquad \qquad =\ \delta _{r(w),\,s(x)}\ \delta _{wx,\,y}.$

Therefore,

\begin{center}
$<L_{w}\xi _{x},\ \xi _{y}>\ =\ \delta _{r(w),\,s(x)}\ \delta _{wx,\,y},$
\end{center}

for all $w,$ $x,$ $y\in \Bbb{G}.$
\end{proof}

By (4.1.4), we can obtain the following lemma.

\begin{lemma}
Let $T=\underset{w\in Supp(T)}{\sum }t_{w}L_{w}\in M_{G}$ be a graph operator, and let $\xi =\underset{x\in \Bbb{G}}{\sum }r_{x}\xi _{x}\in \mathcal{H}_{G}$ be a vector in $H_{G}.$ Then

(4.1.5)

$\qquad <S(T)\xi ,\ \xi >$

$\qquad \qquad =\underset{(x,y)\in \Bbb{G}^{2}}{\sum }r_{x}\overline{r_{y}}\left( \underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}^{-1}w_{2}\neq \emptyset }{\sum }\overline{t_{w_{1}}}t_{w_{2}}\delta _{r(w_{1}^{-1}w_{2}),\,s(x)}\delta _{w_{1}^{-1}w_{2}x,\,y}\right. $

$\qquad \qquad \qquad \qquad \qquad \qquad \quad \left. -\underset{(y_{1},y_{2})\in Supp(T)^{2},\,y_{1}y_{2}^{-1}\neq \emptyset }{\sum }t_{y_{1}}\overline{t_{y_{2}}}\delta _{r(y_{1}y_{2}^{-1}),\,s(x)}\delta _{y_{1}y_{2}^{-1}x,\,y}\right) .$ $\square $
\end{lemma}

The proof of the above lemma is straightforward, by (4.1.1) and (4.1.4). Now, we denote the summands

$\qquad r_{x}\overline{r_{y}}\left( \underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}^{-1}w_{2}\neq \emptyset }{\sum }\overline{t_{w_{1}}}t_{w_{2}}\delta _{r(w_{1}^{-1}w_{2}),\,s(x)}\delta _{w_{1}^{-1}w_{2}x,\,y}\right. $

$\qquad \qquad \qquad \qquad \qquad \qquad \quad \left. -\underset{(y_{1},y_{2})\in Supp(T)^{2},\,y_{1}y_{2}^{-1}\neq \emptyset }{\sum }t_{y_{1}}\overline{t_{y_{2}}}\delta _{r(y_{1}y_{2}^{-1}),\,s(x)}\delta _{y_{1}y_{2}^{-1}x,\,y}\right) $

of (4.1.5) by $\Delta _{xy}.$ By (4.1.3), each summand $\Delta _{xy}$ has its pair $\Delta _{yx}$ in the formula (4.1.5),

$\qquad \Delta _{yx}=r_{y}\overline{r_{x}}\left( \underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}^{-1}w_{2}\neq \emptyset }{\sum }\overline{t_{w_{2}}}t_{w_{1}}\delta _{r(w_{2}^{-1}w_{1}),\,s(y)}\delta _{x,\,w_{2}^{-1}w_{1}y}\right. $

$\qquad \qquad \qquad \qquad \qquad \qquad \quad \left. -\underset{(y_{1},y_{2})\in Supp(T)^{2},\,y_{1}y_{2}^{-1}\neq \emptyset }{\sum }t_{y_{2}}\overline{t_{y_{1}}}\delta _{r(y_{2}y_{1}^{-1}),\,s(y)}\delta _{x,\,y_{2}y_{1}^{-1}y}\right) ,$

i.e.,

(4.1.6)
\begin{center}
$<S(T)\xi ,\ \xi >\ =\underset{(x,y)\in \Bbb{G}^{2}}{\sum }r_{x}\overline{r_{y}}\Delta _{xy}=\underset{(x,y)\in \Bbb{G}^{2}}{\sum }r_{y}\overline{r_{x}}\Delta _{yx},$
\end{center}

for all $\xi =\underset{x\in \Bbb{G}}{\sum }r_{x}\xi _{x}\in \mathcal{H}_{G}\subseteq H_{G}.$

The following theorem is the characterization of hyponormal graph operators.
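As a sanity check on (4.1.5) and (4.1.6), here is the special case of an operator supported on a single reduced word, $T=t_{w}L_{w}$ (a worked example, not part of the original argument):

```latex
% Single-term case T = t_w L_w. By (4.1.1)-(4.1.3),
%   S(T) = T^* T - T T^* = |t_w|^2 ( L_{r(w)} - L_{s(w)} ),
% and, since a vertex v acts by L_v xi_x = delta_{v, s(x)} xi_x, formula (4.1.4) gives
<S(T)\xi _{x},\ \xi _{x}>
  \ =\ \left| t_{w}\right| ^{2}\left( \delta _{r(w),\,s(x)}-\delta _{s(w),\,s(x)}\right) .
% If r(w) = s(w), this vanishes for every x, and T is hyponormal (indeed normal);
% if r(w) != s(w), choosing x with s(x) = s(w) makes it negative, so T is not hyponormal.
```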
\begin{theorem}
Let $T=\underset{w\in Supp(T)}{\sum }t_{w}L_{w}$ be a graph operator in the graph von Neumann algebra $M_{G}$ of $G.$ Then $T$ is hyponormal, if and only if

(4.1.7)
\begin{center}
$\{r(w):w\in \Pi _{T^{*}T}\}\ \supseteq \ \{r(w):w\in \Pi _{TT^{*}}\}$ \ in $V(\widehat{G}),$
\end{center}

where
\begin{center}
$\Pi _{T^{*}T}\overset{def}{=}\left( \left( Supp(T)\right) ^{-1}\left( Supp(T)\right) \right) \setminus \{\emptyset \},$
\end{center}
and
\begin{center}
$\Pi _{TT^{*}}\overset{def}{=}\left( \left( Supp(T)\right) \left( Supp(T)\right) ^{-1}\right) \setminus \{\emptyset \},$
\end{center}
in $\Bbb{G},$ and the coefficients of $T$ satisfy

(4.1.8)
\begin{center}
$\left( \underset{w_{1}^{-1}w_{2}\in \Pi _{T^{*}T},\,r(w_{1}^{-1}w_{2})=v}{\sum }\overline{t_{w_{1}}}t_{w_{2}}\right) \ \geq \ \left( \underset{y_{1}y_{2}^{-1}\in \Pi _{TT^{*}},\,r(y_{1}y_{2}^{-1})=v}{\sum }t_{y_{1}}\overline{t_{y_{2}}}\right) ,$
\end{center}
in $\Bbb{R}\subset \Bbb{C},$ for all $v\in V(\widehat{G}).$
\end{theorem}

\begin{proof}
Let $T=\underset{w\in Supp(T)}{\sum }t_{w}L_{w}$ be a given graph operator in $M_{G}.$ Then, by (4.1.1), the self-commutator $S(T)$ of $T$ is
\begin{center}
$S(T)=T^{*}T-TT^{*}=\underset{(w_{1},w_{2})}{\sum }\overline{t_{w_{1}}}t_{w_{2}}L_{w_{1}^{-1}w_{2}}-\underset{(y_{1},y_{2})}{\sum }t_{y_{1}}\overline{t_{y_{2}}}L_{y_{1}y_{2}^{-1}},$
\end{center}
identified with
\begin{center}
$\underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}^{-1}w_{2}\neq \emptyset }{\sum }\overline{t_{w_{1}}}t_{w_{2}}\left( L_{w_{1}^{-1}w_{2}}-L_{w_{2}w_{1}^{-1}}\right) ,$
\end{center}
by (4.1.2).

Then, for all $\xi =\underset{x\in \Bbb{G}}{\sum }r_{x}\xi _{x}\in \mathcal{H}_{G}$ in $H_{G},$ we can obtain the formula (4.1.5), which states

$\qquad <S(T)\xi ,\ \xi >$

$\qquad \qquad =\underset{(x,y)\in \Bbb{G}^{2}}{\sum }r_{x}\overline{r_{y}}\left( \underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}^{-1}w_{2}\neq \emptyset }{\sum }\overline{t_{w_{1}}}t_{w_{2}}\delta _{r(w_{1}^{-1}w_{2}),\,s(x)}\delta _{w_{1}^{-1}w_{2}x,\,y}\right. $

$\qquad \qquad \qquad \qquad \qquad \qquad \quad \left. -\underset{(y_{1},y_{2})\in Supp(T)^{2},\,y_{1}y_{2}^{-1}\neq \emptyset }{\sum }t_{y_{1}}\overline{t_{y_{2}}}\delta _{r(y_{1}y_{2}^{-1}),\,s(x)}\delta _{y_{1}y_{2}^{-1}x,\,y}\right) ,$

satisfying (4.1.6).

($\Leftarrow $) Consider now the terms

$\qquad \Delta _{xy}^{o}\overset{def}{=}\frac{1}{r_{x}\overline{r_{y}}}\Delta _{xy}=\left( \underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}^{-1}w_{2}\neq \emptyset }{\sum }\overline{t_{w_{1}}}t_{w_{2}}\delta _{r(w_{1}^{-1}w_{2}),\,s(x)}\delta _{w_{1}^{-1}w_{2}x,\,y}\right. $

$\qquad \qquad \qquad \qquad \qquad \qquad \quad \left. -\underset{(y_{1},y_{2})\in Supp(T)^{2},\,y_{1}y_{2}^{-1}\neq \emptyset }{\sum }t_{y_{1}}\overline{t_{y_{2}}}\delta _{r(y_{1}y_{2}^{-1}),\,s(x)}\delta _{y_{1}y_{2}^{-1}x,\,y}\right) $

in (4.1.5). Each $\Delta _{xy}^{o}$ can be re-formulated by

(4.1.9)

$\qquad \Delta _{xy}^{o}=\underset{v\in \mathcal{V}_{T}}{\sum }\left( \underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}^{-1}w_{2}\neq \emptyset ,\,r(w_{1}^{-1}w_{2})=v}{\sum }\overline{t_{w_{1}}}t_{w_{2}}\delta _{v,\,s(x)}\delta _{w_{1}^{-1}w_{2}x,\,y}\right. $

$\qquad \qquad \qquad \qquad \qquad \qquad \quad \left. -\underset{(y_{1},y_{2})\in Supp(T)^{2},\,y_{1}y_{2}^{-1}\neq \emptyset ,\,r(y_{1}y_{2}^{-1})=v}{\sum }t_{y_{1}}\overline{t_{y_{2}}}\delta _{v,\,s(x)}\delta _{y_{1}y_{2}^{-1}x,\,y}\right) ,$

where
\begin{center}
$\mathcal{V}_{T}\overset{def}{=}\{r(w):w\in Supp(T)\}\cup \{s(w):w\in Supp(T)\},$
\end{center}
in $V(\widehat{G}).$ Let us denote the summand of $\Delta _{xy}^{o}$ at $v$ by $\Delta _{xy}^{o}(v),$ for $v\in \mathcal{V}_{T},$ i.e.,

(4.1.10)
\begin{center}
$\Delta _{xy}^{o}=\underset{v\in \mathcal{V}_{T}}{\sum }\Delta _{xy}^{o}(v).$
\end{center}

Thus, if

(4.1.11)
\begin{center}
$\Delta _{xy}^{o}(v)\ \geq \ 0,$ \ for all $v\in V(\widehat{G}),$
\end{center}

then $<S(T)\xi ,\ \xi >$ is nonnegative in $\Bbb{R},$ for an ``arbitrary'' $\xi \in \mathcal{H}_{G},$ and hence the operator $T$ is hyponormal, by (4.1.6). So, since the above vector $\xi $ is arbitrary in $\mathcal{H}_{G},$ we can obtain that: if the set-inclusion (4.1.7) holds, and if the inequality (4.1.8);

$\qquad \quad \left( \underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}^{-1}w_{2}\neq \emptyset ,\,r(w_{1}^{-1}w_{2})=v}{\sum }\overline{t_{w_{1}}}t_{w_{2}}\right) $

$\qquad \quad \qquad \quad \qquad \quad \geq \ \left( \underset{(y_{1},y_{2})\in Supp(T)^{2},\,y_{1}y_{2}^{-1}\neq \emptyset ,\,r(y_{1}y_{2}^{-1})=v}{\sum }t_{y_{1}}\overline{t_{y_{2}}}\right) ,$

holds in $\Bbb{R}$ ($\subset \Bbb{C}$), for all $v\in \mathcal{V}_{T},$ then $T$ is hyponormal, since

\begin{center}
$<S(T)\eta ,\ \eta >\ \geq \ 0,$ \ for all $\eta \in \mathcal{H}_{G}$ ($\subseteq H_{G}$).
\end{center}

Equivalently, if both (4.1.7) and (4.1.8) hold, then $T$ is hyponormal on $H_{G}.$

($\Rightarrow $) Conversely, let a given graph operator $T$ be hyponormal on $H_{G},$ equivalently, let the self-commutator $S(T)$ be a positive operator on $H_{G},$ and assume that $T$ does not satisfy either (4.1.7) or (4.1.8). Suppose first that the condition (4.1.7) does not hold, i.e., assume

(4.1.12)
\begin{center}
$\mathcal{R}_{T^{*}T}=\{r(w):w\in \Pi _{T^{*}T}\}\ \nsupseteq \ \{r(w):w\in \Pi _{TT^{*}}\}=\mathcal{R}_{TT^{*}}.$
\end{center}

This means that there exists an element $w_{0}\in Supp(T),$ such that
\begin{center}
$r(w_{0})\in \mathcal{R}_{T^{*}T}$ \ and \ $s(w_{0})\in \mathcal{R}_{TT^{*}},$
\end{center}
satisfying
\begin{center}
$r(w_{0})\neq s(w_{0})$ \ in $V(\widehat{G}),$
\end{center}
with
\begin{center}
$s(w_{0})\in \mathcal{R}_{TT^{*}}\setminus \mathcal{R}_{T^{*}T}$ \ ($\neq \varnothing $).
\end{center}

Notice here that $r(w_{1}w_{2})=r(w_{2}),$ and $s(w_{1}w_{2})=s(w_{1}),$ for all $w_{1},$ $w_{2}\in \Bbb{G}.$ So, our condition (4.1.12) guarantees the existence of such an element $w_{0}$ in $Supp(T).$ Then we can obtain the summand
\begin{center}
$\left| t_{w_{0}}\right| ^{2}\left( L_{r(w_{0})}-L_{s(w_{0})}\right) $
\end{center}
of $S(T),$ by (4.1.2).
Again, by (4.1.12), we have the summand
\begin{center}
$-\left| t_{w_{0}}\right| ^{2}L_{s(w_{0})}$
\end{center}
of $S(T).$ Thus, if we take the vector $\xi _{w_{0}}\in \mathcal{B}_{H_{G}}\subset \mathcal{H}_{G}$ in $H_{G},$ then

$\qquad <S(T)\xi _{w_{0}},\ \xi _{w_{0}}>\ =\ -\left| t_{w_{0}}\right| ^{2}<L_{s(w_{0})}\xi _{w_{0}},\ \xi _{w_{0}}>$

$\qquad \qquad \qquad =\ -\left| t_{w_{0}}\right| ^{2}<\xi _{w_{0}},\ \xi _{w_{0}}>$ \ \ by (4.1.4)

$\qquad \qquad \qquad =\ -\left| t_{w_{0}}\right| ^{2}\left\| \xi _{w_{0}}\right\| ^{2}\ =\ -\left| t_{w_{0}}\right| ^{2}\ <\ 0,$

where
\begin{center}
$\left\| \eta \right\| \overset{def}{=}\sqrt{<\eta ,\ \eta >},$ \ for all $\eta \in H_{G},$
\end{center}
is the Hilbert-space norm on $H_{G}.$ This shows that there exists a vector $\xi ^{\prime }\in H_{G},$ such that $<S(T)\xi ^{\prime },\ \xi ^{\prime }>$ becomes negative in $\Bbb{R}.$ This contradicts our assumption that $T$ is hyponormal. Therefore, if $T$ is hyponormal, then the condition (4.1.7) must hold.

Assume now that $T$ is hyponormal, and the inequality (4.1.8) does not hold. We will assume that (4.1.7) holds true for $T.$ Since (4.1.8) does not hold, there exists at least one vertex $v_{0}$ such that

(4.1.13)
\begin{center}
$\Delta _{xy}^{o}(v_{0})\ <\ 0,$
\end{center}

where $\Delta _{xy}^{o}$ and the $\Delta _{xy}^{o}(v)$'s are defined in (4.1.9) and (4.1.10), respectively.

Then we can take a vector

$\qquad \eta =\underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}^{-1}w_{2}\neq \emptyset ,\,r(w_{1}^{-1}w_{2})=v_{0}}{\sum }\left( \xi _{x}+\xi _{w_{1}^{-1}w_{2}x}\right) $

$\qquad \qquad \qquad +\underset{(y_{1},y_{2})\in Supp(T)^{2},\,y_{1}y_{2}^{-1}\neq \emptyset ,\,r(y_{1}y_{2}^{-1})=v_{0}}{\sum }\left( \xi _{y}+\xi _{y_{1}y_{2}^{-1}y}\right) $

in $\mathcal{H}_{G}.$ Then, by (4.1.3),
\begin{center}
$<S(T)\eta ,\ \eta >\ =\ \Delta _{xy}^{o}(v_{0})\ <\ 0.$
\end{center}
Therefore, it breaks the hyponormality of $T,$ which contradicts our assumption that $T$ is hyponormal. Thus, the condition (4.1.8) must hold under the hyponormality of $T.$

As we have seen above, we can conclude that a graph operator $T$ is hyponormal, if and only if both conditions (4.1.7) and (4.1.8) hold.
\end{proof}

The above theorem characterizes the hyponormality of graph operators in terms of the admissibility on $\Bbb{G},$ and the analytic data of the coefficients, just like Sections 3.2 and 3.3. From below, we denote
\begin{center}
$r\left( \Pi _{T^{*}T}\right) $ \ and \ $r(\Pi _{TT^{*}})$
\end{center}
by
\begin{center}
$\mathcal{R}_{T^{*}T}$ \ and \ $\mathcal{R}_{TT^{*}},$
\end{center}
respectively. The above theorem provides not only the characterization of hyponormal graph operators but also a very useful process for checking ``non-hyponormality.''

\begin{corollary}
Let $T=\underset{w\in Supp(T)}{\sum }t_{w}L_{w}$ be a graph operator in $M_{G}.$

(1) If $\mathcal{R}_{T^{*}T}\nsupseteq \mathcal{R}_{TT^{*}},$ then $T$ is not hyponormal.
(2) If there exists a vertex $v_{0}\in \mathcal{V}_{T},$ such that

$\qquad \left( \underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}^{-1}w_{2}\neq \emptyset ,\,r(w_{1}^{-1}w_{2})=v_{0}}{\sum }\overline{t_{w_{1}}}\ t_{w_{2}}\right) $

$\qquad \qquad \qquad \qquad \ngeqq \ \left( \underset{(y_{1},y_{2})\in Supp(T)^{2},\,y_{1}y_{2}^{-1}\neq \emptyset ,\,r(y_{1}y_{2}^{-1})=v_{0}}{\sum }t_{y_{1}}\overline{t_{y_{2}}}\right) ,$

then $T$ is not hyponormal. $\square $
\end{corollary}

In the rest of this section, we will consider several fundamental examples. These examples give a concrete understanding of the above theorem, the characterization of hyponormal graph operators.

\begin{example}
Suppose a graph $G$ contains the subgraph

\begin{center}
$_{v_{1}}\bullet \overset{e_{1}}{\underset{e_{2}}{\rightrightarrows }}\bullet _{v_{2}},$
\end{center}

and let $T=t_{e_{1}}L_{e_{1}}+t_{e_{2}}L_{e_{2}}.$ Then this graph operator $T$ is not hyponormal, since
\begin{center}
$\mathcal{R}_{T^{*}T}=\{r(e_{1}^{-1}e_{1}),\ r(e_{1}^{-1}e_{2}),\ r(e_{2}^{-1}e_{1}),\ r(e_{2}^{-1}e_{2})\}=\{v_{2}\},$
\end{center}
and
\begin{center}
$\mathcal{R}_{TT^{*}}=\{r(e_{1}e_{1}^{-1}),\ r(e_{1}e_{2}^{-1}),\ r(e_{2}e_{1}^{-1}),\ r(e_{2}e_{2}^{-1})\}=\{v_{1}\},$
\end{center}
in $V(\widehat{G}).$ So,
\begin{center}
$\mathcal{R}_{T^{*}T}\cap \mathcal{R}_{TT^{*}}=\varnothing ,$
\end{center}
and hence $T$ does not satisfy the condition (4.1.7), stating
\begin{center}
$\mathcal{R}_{T^{*}T}\supseteq \mathcal{R}_{TT^{*}}.$
\end{center}
Therefore, this operator $T$ is not hyponormal.
\end{example}

\begin{example}
Suppose a graph $G$ contains the subgraph

\begin{center}
$_{v_{1}}\bullet \overset{e_{1}}{\underset{e_{2}}{\rightrightarrows }}\bullet _{v_{2}},$
\end{center}

and let $T_{1}=t_{e_{1}}L_{e_{1}}+t_{e_{1}^{-1}}L_{e_{1}^{-1}}+t_{e_{2}}L_{e_{2}}.$ Then we can have that
\begin{center}
$\mathcal{R}_{T_{1}^{*}T_{1}}=\left\{
\begin{array}{c}
r(e_{1}^{-1}e_{1}),\ r(e_{1}^{-1}e_{1}^{-1}),\ r(e_{1}^{-1}e_{2}), \\
r(e_{1}e_{1}),\ r(e_{1}e_{1}^{-1}),\ r(e_{1}e_{2}), \\
r(e_{2}^{-1}e_{1}),\ r(e_{2}^{-1}e_{1}^{-1}),\ r(e_{2}^{-1}e_{2})
\end{array}
\right\} =\{v_{1},\ v_{2}\},$
\end{center}
and
\begin{center}
$\mathcal{R}_{T_{1}T_{1}^{*}}=\left\{
\begin{array}{c}
r(e_{1}e_{1}^{-1}),\ r(e_{1}e_{1}),\ r(e_{1}e_{2}^{-1}), \\
r(e_{1}^{-1}e_{1}^{-1}),\ r(e_{1}^{-1}e_{1}),\ r(e_{1}^{-1}e_{2}^{-1}), \\
r(e_{2}e_{1}^{-1}),\ r(e_{2}e_{1}),\ r(e_{2}e_{2}^{-1})
\end{array}
\right\} =\{v_{1},\ v_{2}\},$
\end{center}
in $V(\widehat{G}).$ Thus, $T_{1}$ satisfies the condition (4.1.7);
\begin{center}
$\mathcal{R}_{T_{1}^{*}T_{1}}=\mathcal{V}_{T_{1}}=\mathcal{R}_{T_{1}T_{1}^{*}},$ \ and hence \ $\mathcal{R}_{T_{1}^{*}T_{1}}\supseteq \mathcal{R}_{T_{1}T_{1}^{*}}.$
\end{center}
So, $T_{1}$ is hyponormal, if and only if (4.1.8) holds; that is, if and only if, for $v_{1}\in \mathcal{V}_{T},$

(4.1.14)

$\qquad \qquad \left| t_{e_{1}}\right| ^{2}-\left( \left| t_{e_{1}}\right| ^{2}+t_{e_{1}}\overline{t_{e_{2}}}+t_{e_{2}}\overline{t_{e_{1}}}+\left| t_{e_{2}}\right| ^{2}\right) \ \geq \ 0$

\begin{center}
$\Longleftrightarrow \ \left| t_{e_{2}}\right| ^{2}\ \leq \ -t_{e_{1}}\overline{t_{e_{2}}}-t_{e_{2}}\overline{t_{e_{1}}},$
\end{center}

and, for $v_{2}\in \mathcal{V}_{T},$

(4.1.15)

$\qquad \qquad \left( \left| t_{e_{1}}\right| ^{2}+\overline{t_{e_{1}}}t_{e_{2}}+\overline{t_{e_{2}}}t_{e_{1}}+\left| t_{e_{2}}\right| ^{2}\right) -\left| t_{e_{1}}\right| ^{2}\ \geq \ 0$

\begin{center}
$\Longleftrightarrow \ \left| t_{e_{2}}\right| ^{2}\ \geq \ -\overline{t_{e_{1}}}t_{e_{2}}-\overline{t_{e_{2}}}t_{e_{1}}.$
\end{center}

If we combine (4.1.14) and (4.1.15), we can obtain that the given operator $T_{1}$ is hyponormal, if and only if

(4.1.16)
\begin{center}
$\left| t_{e_{2}}\right| ^{2}\ =\ -\overline{t_{e_{1}}}t_{e_{2}}-\overline{t_{e_{2}}}t_{e_{1}}.$
\end{center}

In fact, the reader can easily check that the hyponormality condition (4.1.16) guarantees the ``normality'' of $T_{1},$ too, i.e., $T_{1}$ is normal, if and only if (4.1.16) holds (see Section 4.2 below).
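The combination step from (4.1.14) and (4.1.15) to (4.1.16) can be spelled out in one line (a short supplementary check):

```latex
% The bounds appearing in (4.1.14) and (4.1.15) are the same real number:
t_{e_{1}}\overline{t_{e_{2}}}+t_{e_{2}}\overline{t_{e_{1}}}
  \ =\ 2\,\mathrm{Re}\left( t_{e_{1}}\overline{t_{e_{2}}}\right)
  \ =\ \overline{t_{e_{1}}}t_{e_{2}}+\overline{t_{e_{2}}}t_{e_{1}},
% so (4.1.14) and (4.1.15) say |t_{e_2}|^2 is both <= and >= the quantity
% -2 Re( t_{e_1} conj(t_{e_2}) ), which pinches the two inequalities to the
% single equality (4.1.16).
```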
Now, let $T_{2}=t_{e_{1}}L_{e_{1}}+t_{e_{1}^{-1}}L_{e_{1}^{-1}}.$ Then we have
\begin{center}
$\mathcal{R}_{T_{2}^{*}T_{2}}=\left\{
\begin{array}{c}
r(e_{1}^{-1}e_{1}),\ r(e_{1}^{-1}e_{1}^{-1}), \\
r(e_{1}e_{1}),\ r(e_{1}e_{1}^{-1})
\end{array}
\right\} =\{v_{1},\ v_{2}\},$
\end{center}
and
\begin{center}
$\mathcal{R}_{T_{2}T_{2}^{*}}=\left\{
\begin{array}{c}
r(e_{1}e_{1}^{-1}),\ r(e_{1}e_{1}), \\
r(e_{1}^{-1}e_{1}^{-1}),\ r(e_{1}^{-1}e_{1})
\end{array}
\right\} =\{v_{1},\ v_{2}\},$
\end{center}
and hence the operator $T_{2}$ satisfies (4.1.7). So, $T_{2}$ is hyponormal, if and only if

(4.1.17)
\begin{center}
$\left| t_{e_{1}}\right| ^{2}\ \geq \ \left| t_{e_{1}^{-1}}\right| ^{2}\qquad $(for $v_{1}$),
\end{center}
and
\begin{center}
$\left| t_{e_{1}^{-1}}\right| ^{2}\ \geq \ \left| t_{e_{1}}\right| ^{2}\qquad $(for $v_{2}$).
\end{center}

Therefore, by (4.1.17), we can conclude that $T_{2}$ is hyponormal, if and only if
\begin{center}
$\left| t_{e_{1}}\right| ^{2}\ =\ \left| t_{e_{1}^{-1}}\right| ^{2}$ \ in $\Bbb{R}.$
\end{center}
This example also shows that the hyponormality of $T_{2}$ is equivalent to the normality of $T_{2}$ (see Section 4.2 below).
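For this $T_{2}$, the self-commutator can even be written in closed form (a worked check, not in the original, using (4.1.2) and the fact that $e_{1}e_{1}=\emptyset =e_{1}^{-1}e_{1}^{-1}$ for the non-loop edge $e_{1}$):

```latex
% Only the diagonal pairs (e_1, e_1) and (e_1^{-1}, e_1^{-1}) survive in (4.1.2), so
S(T_{2})\ =\ \left| t_{e_{1}}\right| ^{2}\left( L_{r(e_{1})}-L_{s(e_{1})}\right)
            +\left| t_{e_{1}^{-1}}\right| ^{2}\left( L_{s(e_{1})}-L_{r(e_{1})}\right)
  \ =\ \left( \left| t_{e_{1}}\right| ^{2}-\left| t_{e_{1}^{-1}}\right| ^{2}\right)
        \left( L_{r(e_{1})}-L_{s(e_{1})}\right) .
% Since r(e_1) != s(e_1), the operator L_{r(e_1)} - L_{s(e_1)} takes both the
% values +1 and -1 on basis vectors xi_x (depending on s(x)), so positivity of
% S(T_2) forces the scalar coefficient to vanish: |t_{e_1}|^2 = |t_{e_1^{-1}}|^2.
```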
\end{example}

\begin{example}
Let a graph $G$ contain the subgraph

\begin{center}
$_{v_{1}}\bullet \overset{e_{1}}{\longrightarrow }\underset{v_{2}}{\bullet }\overset{e_{2}}{\longrightarrow }\bullet _{v_{3}},$
\end{center}

and let $T=t_{1}L_{e_{1}}+t_{2}L_{e_{2}}+t_{v_{3}}L_{v_{3}}.$ Then we can have that
\begin{center}
$\mathcal{R}_{T^{*}T}=\left\{
\begin{array}{c}
r(e_{1}^{-1}e_{1}),\ r(e_{1}^{-1}e_{2}),\ r(e_{1}^{-1}v_{3}), \\
r(e_{2}^{-1}e_{1}),\ r(e_{2}^{-1}e_{2}),\ r(e_{2}^{-1}v_{3}), \\
r(v_{3}e_{1}),\ r(v_{3}e_{2}),\ r(v_{3}v_{3})
\end{array}
\right\} =\{v_{1},\ v_{3}\},$
\end{center}
and
\begin{center}
$\mathcal{R}_{TT^{*}}=\left\{
\begin{array}{c}
r(e_{1}e_{1}^{-1}),\ r(e_{1}e_{2}^{-1}),\ r(e_{1}v_{3}), \\
r(e_{2}e_{1}^{-1}),\ r(e_{2}e_{2}^{-1}),\ r(e_{2}v_{3}), \\
r(v_{3}e_{1}^{-1}),\ r(v_{3}e_{2}^{-1}),\ r(v_{3}v_{3})
\end{array}
\right\} =\{v_{1},\ v_{2},\ v_{3}\}.$
\end{center}
So, the operator $T$ does not satisfy the condition (4.1.7) for the hyponormality of $T,$ and hence $T$ is not hyponormal.
\end{example}

\begin{example}
Assume that a graph $G$ contains the subgraph

\begin{center}
$^{v_{1}}\bullet \overset{e_{1}}{\longrightarrow }\underset{\underset{e_{2}}{\circlearrowleft }}{\bullet }^{v_{2}},$
\end{center}

and let $T=t_{v_{1}}L_{v_{1}}+t_{e_{1}}L_{e_{1}}+t_{e_{2}^{-1}}L_{e_{2}^{-1}}.$ Then we can have that
\begin{center}
$\mathcal{R}_{T^{*}T}=\left\{
\begin{array}{c}
r(v_{1}v_{1}),\ r(v_{1}e_{1}),\ r(v_{1}e_{2}^{-1}), \\
r(e_{1}^{-1}v_{1}),\ r(e_{1}^{-1}e_{1}),\ r(e_{1}^{-1}e_{2}), \\
r(e_{2}v_{1}),\ r(e_{2}e_{1}),\ r(e_{2}e_{2}^{-1})
\end{array}
\right\} =\{v_{1},\ v_{2}\},$
\end{center}
and
\begin{center}
$\mathcal{R}_{TT^{*}}=\left\{
\begin{array}{c}
r(v_{1}v_{1}),\ r(v_{1}e_{1}^{-1}),\ r(v_{1}e_{2}), \\
r(e_{1}v_{1}),\ r(e_{1}e_{1}^{-1}),\ r(e_{1}e_{2}), \\
r(e_{2}^{-1}v_{1}),\ r(e_{2}^{-1}e_{1}^{-1}),\ r(e_{2}^{-1}e_{2})
\end{array}
\right\} =\{v_{1},\ v_{2}\}.$
\end{center}
So, $\mathcal{R}_{T^{*}T}=\mathcal{R}_{TT^{*}},$ and hence $T$ satisfies the condition (4.1.7). So, to make $T$ hyponormal, the coefficients of $T$ must satisfy the condition (4.1.8).

Thus we can conclude that $T$ is hyponormal, if and only if

(4.1.18)
\begin{center}
$\left( \left| t_{v_{1}}\right| ^{2}+\overline{t_{e_{1}}}t_{v_{1}}\right) \ \geq \ \left( \left| t_{v_{1}}\right| ^{2}+\left| t_{e_{1}}\right| ^{2}+t_{e_{2}^{-1}}\overline{t_{e_{1}}}\right) \quad $(for $v_{1}$)
\end{center}
and
\begin{center}
$\left( \overline{t_{v_{1}}}t_{e_{1}}+\left| t_{e_{1}}\right| ^{2}+\left| t_{e_{2}^{-1}}\right| ^{2}\right) \ \geq \ \left( t_{e_{1}}\overline{t_{e_{2}^{-1}}}+\left| t_{e_{2}^{-1}}\right| ^{2}\right) \quad $(for $v_{2}$).
\end{center}
The above condition (4.1.18) can be rewritten as

(4.1.19)
\begin{center}
$\overline{t_{e_{1}}}t_{v_{1}}-t_{e_{2}^{-1}}\overline{t_{e_{1}}}\ \geq \ \left| t_{e_{1}}\right| ^{2},$
\end{center}
and
\begin{center}
$\left| t_{e_{1}}\right| ^{2}\ \geq \ t_{e_{1}}\overline{t_{e_{2}^{-1}}}-\overline{t_{v_{1}}}t_{e_{1}},$
\end{center}
respectively. Therefore, the given graph operator $T$ is hyponormal, if and only if
\begin{center}
$\overline{t_{e_{1}}}t_{v_{1}}-t_{e_{2}^{-1}}\overline{t_{e_{1}}}\ \geq \ \left| t_{e_{1}}\right| ^{2}\ \geq \ t_{e_{1}}\overline{t_{e_{2}^{-1}}}-\overline{t_{v_{1}}}t_{e_{1}},$

if and only if

$\left| t_{e_{1}}\right| ^{2}\ \leq \ \left| \overline{t_{e_{1}}}t_{v_{1}}-t_{e_{2}^{-1}}\overline{t_{e_{1}}}\right| $.
\end{center}
\end{example}

\begin{example}
Suppose a graph $G$ contains the subgraph

\begin{center}
$^{v_{1}}\underset{\underset{e_{1}}{\circlearrowright }}{\bullet }\overset{e_{2}}{\underset{e_{3}}{\rightrightarrows }}\bullet ^{v_{2}},$
\end{center}

and let $T=t_{e_{1}}L_{e_{1}}+t_{e_{2}}L_{e_{2}}+t_{e_{3}}L_{e_{3}}.$ Then we can have that
\begin{center}
$\mathcal{R}_{T^{*}T}=\left\{
\begin{array}{c}
r(e_{1}^{-1}e_{1}),\ r(e_{1}^{-1}e_{2}),\ r(e_{1}^{-1}e_{3}), \\
r(e_{2}^{-1}e_{1}),\ r(e_{2}^{-1}e_{2}),\ r(e_{2}^{-1}e_{3}), \\
r(e_{3}^{-1}e_{1}),\ r(e_{3}^{-1}e_{2}),\ r(e_{3}^{-1}e_{3})
\end{array}
\right\} =\{v_{1},\ v_{2}\},$
\end{center}
and
\begin{center}
$\mathcal{R}_{TT^{*}}=\left\{
\begin{array}{c}
r(e_{1}e_{1}^{-1}),\ r(e_{1}e_{2}^{-1}),\ r(e_{1}e_{3}^{-1}), \\
r(e_{2}e_{1}^{-1}),\ r(e_{2}e_{2}^{-1}),\ r(e_{2}e_{3}^{-1}), \\
r(e_{3}e_{1}^{-1}),\ r(e_{3}e_{2}^{-1}),\ r(e_{3}e_{3}^{-1})
\end{array}
\right\} =\{v_{1}\}.$
\end{center}
So, the operator $T$ satisfies the condition (4.1.7), i.e.,
\begin{center}
$\mathcal{R}_{T^{*}T}\supset \mathcal{R}_{TT^{*}}.$
\end{center}
Thus, we can obtain that $T$ is hyponormal, if and only if

(4.1.21)
\begin{center}
$\left( \left| t_{e_{1}}\right| ^{2}+\overline{t_{e_{2}}}t_{e_{1}}+\overline{t_{e_{3}}}t_{e_{1}}\right) \ \geq \ \left( \left| t_{e_{1}}\right| ^{2}+\left| t_{e_{2}}\right| ^{2}+t_{e_{2}}\overline{t_{e_{3}}}+t_{e_{3}}\overline{t_{e_{2}}}+\left| t_{e_{3}}\right| ^{2}\right) $
\end{center}
(for $v_{1}$), and
\begin{center}
$\left( \overline{t_{e_{1}}}t_{e_{2}}+\overline{t_{e_{1}}}t_{e_{3}}+\left| t_{e_{2}}\right| ^{2}+\overline{t_{e_{2}}}t_{e_{3}}+\overline{t_{e_{3}}}t_{e_{2}}+\left| t_{e_{3}}\right| ^{2}\right) \ \geq \ 0$
\end{center}
(for $v_{2}$).
\end{example}

\subsection{Normality}

In this section, we consider the normality of graph operators. In Section 4.1, we studied the hyponormality of graph operators in terms of the combinatorial information of given graphs, and certain analytic data of the coefficients of the operators. Throughout this section, we will use the same notations as in Section 4.1. Thanks to the hyponormality characterization ((4.1.7) and (4.1.8)) of graph operators, we can obtain the following normality characterization of graph operators.
\begin{theorem}
Let $T=\underset{w\in Supp(T)}{\sum }t_{w}L_{w}$ be a graph operator in the graph von Neumann algebra $M_{G}$ of a connected graph $G.$ Then $T$ is normal, if and only if

(4.2.1)
\begin{center}
$\mathcal{R}_{T^{*}T}\ =\ \mathcal{R}_{TT^{*}},$
\end{center}

and

(4.2.2)

$\qquad \qquad \left( \underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}^{-1}w_{2}\neq \emptyset ,\,r(w_{1}^{-1}w_{2})=v}{\sum }\overline{t_{w_{1}}}\ t_{w_{2}}\right) $

$\qquad \qquad \qquad \qquad \qquad =\ \left( \underset{(y_{1},y_{2})\in Supp(T)^{2},\,y_{1}y_{2}^{-1}\neq \emptyset ,\,r(y_{1}y_{2}^{-1})=v}{\sum }t_{y_{1}}\ \overline{t_{y_{2}}}\right) ,$

for all $v\in V(\widehat{G}).$
\end{theorem}

\begin{proof}
By definition, a graph operator $T$ is normal on the graph Hilbert space $H_{G},$ if and only if $T^{*}T=TT^{*}$ on $H_{G}.$ In other words, $T$ is normal, if and only if both $T$ and $T^{*}$ are hyponormal. Thus, $T$ is normal, if and only if the self-commutator $S(T)$ is identical to the zero element $0_{M_{G}}$ (which is identified with the zero operator $0_{H_{G}}$ on $H_{G}$), if and only if
\begin{center}
$<S(T)\xi ,\ \xi >\ =\ 0,$ \ for all $\xi \in H_{G}.$
\end{center}
Therefore, by a slight modification of the proof of Theorem 4.5, we can conclude that $T$ is normal, if and only if the combinatorial condition (4.2.1) and the analytic condition (4.2.2) hold.
\end{proof}

\section{Operators in Free Group Factors}

In this section, we consider applications of the operator-theoretic properties of graph operators.
We will characterize the self-adjointness, the hyponormality, the normality, and the unitary property of finitely supported operators in the free group factor $L(F_{N}),$ generated by the free group $ F_{N}$ with $N$-generators, for $N$ $\in $ $\Bbb{N}.$ In operator algebra, the study of free group factors $L(F_{N})$ is very important (e.g., See [11]). Also, the study of elements of $L(F_{N})$ is interesting, since they are (possibly, the infinite or the limit of) linear combinations of unitary operators (e.g., See [3], [4], [6], and [7]). The following theorem provides the key motivation of our applications. \begin{theorem} \strut (Also, see [4]) The free group factor $L(F_{N})$ is $*$-isomorphic to the graph von Neumann algebra $M_{O_{N}}$ of the one-vertex-$N$-loop-edge graph $O_{N},$ for all $N$ $\in $ $\Bbb{N}.$ \end{theorem} \begin{proof} \strut Let $O_{N}$ be the one-vertex-$N$-loop-edge graph and let $\Bbb{O} _{N} $ be the graph groupoid of $O_{N}.$ Since $O_{N}$ has only one vertex, say $v_{O},$ the graph groupoid $\Bbb{O}_{N}$ is in fact a group (See Section 2.2). Indeed, the graph groupoid $\Bbb{O}_{N}$ is a (categorial) groupoid (in the sense of Section 2.2) with its base, consisting of only one element $v_{O}.$ Thus $\Bbb{O}_{N}$ is a group. Moreover, this group $\Bbb{O} _{N}$ has $N$-generators contained in the edge set \begin{center} $E(O_{N})$ $=$ $\{e_{1},$ ..., $e_{N}\}$ \end{center} of $O_{N}.$ So, we can define a morphism $g$ $:$ $\Bbb{O}_{N}$ $\rightarrow $ $F_{N}$ by a map satisfying \begin{center} $g$ $:$ $e_{j}$ $\in $ $E(O_{N})$ $\mapsto $ $u_{j}$ $\in $ $X_{F_{N}},$ \end{center} for all $j$ $=$ $1,$ ..., $N$ (by the possible rearrangement), where \begin{center} $X_{F_{N}}$ $=$ $\{u_{1},$ ..., $u_{N}\}$ \end{center} is the generator set of the free group $F_{N}$ $=$ $<X_{F_{N}}>.$ Then this morphism satisfies that $g(x_{i_{1}}$ ... $x_{i_{n}})$ $=$ $q_{i_{1}}$ ... 
$q_{i_{n}}$ in $F_{N},$ for all $x_{i_{1}},$ ..., $x_{i_{n}}$ $\in $ $E(\widehat{O_{N}}),$ for $n$ $\in $ $\Bbb{N},$ such that \begin{center} $x_{i_{j}}$ $=$ $\left\{ \begin{array}{ll} e_{i_{j}} & \text{if }x_{i_{j}}\in E(O_{N}) \\ e_{i_{j}}^{-1} & \text{if }x_{i_{j}}\in E(O_{N}^{-1}), \end{array} \right. $ \end{center} where $\widehat{O_{N}}$ is the shadowed graph of $O_{N},$ and where \begin{center} $q_{i_{j}}$ $=$ $\left\{ \begin{array}{ll} u_{i_{j}} & \text{if }q_{i_{j}}\in X_{F_{N}} \\ u_{i_{j}}^{-1} & \text{if }q_{i_{j}}\in X_{F_{N}}^{-1}, \end{array} \right. $ \end{center} for all $j$ $=$ $1,$ ..., $n.$ (Remark that the graph groupoid $\Bbb{O}_{N}$ is generated by $E(\widehat{O_{N}}),$ as a groupoid, and hence the group $\Bbb{O}_{N}$ is generated by $E(O_{N}).$) Therefore, the morphism $g$ is a group-homomorphism. Since $g$ maps generators to generators bijectively, it is bijective. So, the morphism $g$ is a group-isomorphism, and hence $\Bbb{O}_{N}$ and $F_{N}$ are group-isomorphic. Let $(H_{O_{N}},$ $L)$ be the canonical representation of $\Bbb{O}_{N},$ and let $(H_{F_{N}},$ $\lambda )$ be the left regular unitary representation of $F_{N},$ where $H_{F_{N}}$ $=$ $l^{2}(F_{N})$ is the group Hilbert space of $F_{N}.$ By the existence of the group-isomorphism $g$ between $\Bbb{O}_{N}$ and $F_{N},$ the Hilbert spaces $H_{O_{N}}$ and $H_{F_{N}}$ are Hilbert-space isomorphic.
Indeed, there exists a linear map \begin{center} $\Phi $ $:$ $H_{O_{N}}$ $\rightarrow $ $H_{F_{N}}$ \end{center} satisfying \begin{center} $\Phi \left( \underset{w\in \Bbb{O}_{N}}{\sum }t_{w}\xi _{w}\right) $ $\overset{def}{=}$ $\underset{g(w)\in g(\Bbb{O}_{N})=F_{N}}{\sum }$ $t_{w}$ $\xi _{g(w)},$ \end{center} in $H_{F_{N}},$ for all $\underset{w\in \Bbb{O}_{N}}{\sum }$ $t_{w}$ $\xi _{w}$ $\in $ $H_{O_{N}}.$ It is easy to check that this linear map $\Phi $ is bounded and bijective; i.e., $\Phi $ is a Hilbert-space isomorphism, and hence $H_{O_{N}}$ and $H_{F_{N}}$ are Hilbert-space isomorphic. By the existence of $\Phi $ and $g,$ we obtain the commuting diagram \begin{center} $ \begin{array}{lll} H_{O_{N}} & \overset{\Phi }{\longrightarrow } & H_{F_{N}} \\ \,\downarrow _{L} & & \,\downarrow _{\lambda } \\ H_{O_{N}} & \underset{\Phi }{\longrightarrow } & H_{F_{N}}. \end{array} $ \end{center} This shows that the group actions $L$ of $\Bbb{O}_{N}$ and $\lambda $ of $F_{N}$ are equivalent; i.e., the representations $(H_{O_{N}},$ $L)$ of $\Bbb{O}_{N}$ and $(H_{F_{N}},$ $\lambda )$ of $F_{N}$ are equivalent. Therefore, the group von Neumann algebras $vN(L(\Bbb{O}_{N}))$ and $vN(\lambda (F_{N}))$ are $*$-isomorphic in $B(\mathcal{H}),$ where \begin{center} $H_{O_{N}}$ $\overset{\text{Hilbert}}{=}$ $\mathcal{H}$ $\overset{\text{Hilbert}}{=}$ $H_{F_{N}},$ \end{center} where $\overset{\text{Hilbert}}{=}$ means ``being Hilbert-space isomorphic;'' i.e., the graph von Neumann algebra $M_{O_{N}}$ and the group von Neumann algebra $L(F_{N})$ are $*$-isomorphic.
\end{proof} The above theorem shows that studying $L(F_{N})$ amounts to studying $M_{O_{N}}.$ So, to study finitely supported operators of $L(F_{N}),$ we will study the graph operators in $M_{O_{N}}.$ By Section 3.2, we can obtain the following self-adjointness characterization on $M_{O_{N}}.$ \begin{proposition} Let $T$ $=$ $\underset{w\in Supp(T)}{\sum }$ $t_{w}L_{w}$ be a graph operator in $M_{O_{N}}.$ Then $T$ is self-adjoint, if and only if there exists a subset $Y$ of \begin{center} $Supp(T)$ $\cap $ $FP_{r}(\widehat{O_{N}}),$ \end{center} such that (5.1) \begin{center} $Supp(T)$ $=$ $\left\{ \begin{array}{ll} \{v_{O}\}\sqcup Y\sqcup Y^{-1} & \text{if }v_{O}\in Supp(T) \\ Y\sqcup Y^{-1} & \text{otherwise,} \end{array} \right. $ \end{center} and \begin{center} $t_{v_{O}}$ $\in $ $\Bbb{R},$ and $t_{y}$ $=$ $\overline{t_{y^{-1}}}$ in $\Bbb{C},$ for all $y$ $\in $ $Y.$ \end{center} $\square $ \end{proposition} The proof is done by Section 3.2. By the above proposition, we obtain the self-adjointness characterization of finitely supported elements in the free group factor $L(F_{N}).$ \begin{corollary} Let $T$ $=$ $t_{j_{k}}$ $u_{g_{j_{k}}^{-1}}$ $+$ ... $+$ $t_{j_{1}}u_{g_{j_{1}}^{-1}}+t_{0}u_{e}+t_{i_{1}}u_{g_{i_{1}}}+...+t_{i_{n}}u_{g_{i_{n}}}$ be an element of $L(F_{N}),$ where $u_{g}$ $\overset{def}{=}$ $\lambda (g),$ for all $g$ $\in $ $F_{N},$ and $e$ is the group-identity of $F_{N}.$ Then $T$ is self-adjoint, if and only if (5.2) \begin{center} $k$ $=$ $n$ in $\Bbb{N},$ \end{center} and \begin{center} $t_{0}$ $\in $ $\Bbb{R},$ and $t_{i_{p}}$ $=$ $\overline{t_{j_{p}}},$ for all $p$ $=$ $1,$ ..., $n$ $=$ $k.$ \end{center} $\square $ \end{corollary} Now, let's consider the hyponormality.
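The self-adjointness condition (5.2) is easy to experiment with if one stores a finitely supported element $T=\sum t_{g}u_{g}$ just by its coefficient map $g\mapsto t_{g}$. The following Python sketch is our illustration, not part of the paper: reduced words of $F_{N}$ are modeled as tuples of signed generator indices (a negative index denotes the inverse generator), and $T^{*}=T$ is checked via $t_{g^{-1}}=\overline{t_{g}}$ for all $g$.

```python
# Illustration (ours): finitely supported elements of the group algebra of a
# free group F_N, stored as {reduced word: coefficient}. A word is a tuple of
# nonzero integers; index k stands for the generator u_k, and -k for u_k^{-1}.

def inverse(word):
    """Inverse of a reduced word: reverse the letters and invert each one."""
    return tuple(-x for x in reversed(word))

def adjoint(T):
    """Adjoint of T = sum t_g u_g: the coefficient of g in T* is conj(t_{g^{-1}})."""
    return {inverse(g): t.conjugate() for g, t in T.items()}

def is_self_adjoint(T):
    """T is self-adjoint iff t_{g^{-1}} = conj(t_g) for every g in Supp(T)."""
    return adjoint(T) == T

# T = 2 u_e + (1+i) u_{g_1} + (1-i) u_{g_1^{-1}}: t_0 is real and the
# coefficient at g^{-1} is the conjugate of that at g, as in (5.2).
T = {(): 2 + 0j, (1,): 1 + 1j, (-1,): 1 - 1j}
```

With this representation, `is_self_adjoint(T)` returns `True` for the element above, while a single term `u_{g_1}` with a nonreal (or even real) coefficient and no matching $u_{g_1^{-1}}$ term fails the test.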
\begin{proposition} Let $T$ $=$ $\underset{w\in Supp(T)}{\sum }$ $t_{w}L_{w}$ be a graph operator in $M_{O_{N}}.$ Then $T$ is hyponormal, if and only if (5.3) \begin{center} $\underset{(w_{1},w_{2})\in Supp(T)^{2}}{\sum }$ $\left( \overline{t_{w_{1}}}t_{w_{2}}-t_{w_{1}}\overline{t_{w_{2}}}\right) $ $\geq $ $0.$ \end{center} \end{proposition} \begin{proof} By Section 4.1, we have, in general, that a graph operator $T$ is hyponormal, if and only if (5.4) \begin{center} $\mathcal{R}_{T^{*}T}$ $=$ $r\left( \Pi _{T^{*}T}\right) $ $\supseteq $ $\mathcal{R}_{TT^{*}}$ $=$ $r\left( \Pi _{TT^{*}}\right) ,$ \end{center} and (5.5) $\qquad \left( \underset{(w_{1},w_{2})\in Supp(T)^{2},\,w_{1}^{-1}w_{2}\neq \emptyset ,\,r(w_{1}^{-1}w_{2})=v}{\sum }\overline{t_{w_{1}}}t_{w_{2}}\right) $ \begin{center} $\geq $ $\left( \underset{(y_{1},y_{2})\in Supp(T)^{2},\,y_{1}y_{2}^{-1}\neq \emptyset ,\,r(y_{1}y_{2}^{-1})=v}{\sum }t_{y_{1}}\,\overline{t_{y_{2}}}\right) ,$ \end{center} for all $v$ $\in $ $V(\widehat{O_{N}}),$ by (4.1.7) and (4.1.8). However, the fixed graph $O_{N}$ has only one vertex $v_{O},$ and all elements of $\Bbb{O}_{N}$ are admissible from each other via $v_{O}$ (equivalently, $\Bbb{O}_{N}$ is a group). Therefore, the condition (5.4) automatically holds true, and the inequality (5.5) can simply be re-written as (5.6) \begin{center} $\underset{(w_{1},w_{2})\in Supp(T)^{2}}{\sum }$ $\overline{t_{w_{1}}}$ $t_{w_{2}}$ $\geq $ $\underset{(y_{1},y_{2})\in Supp(T)^{2}}{\sum }$ $t_{y_{1}}$ $\overline{t_{y_{2}}}.$ \end{center} Therefore, the operator $T$ is hyponormal, if and only if (5.3) holds true.
\end{proof} By the above proposition, we can obtain that: \begin{corollary} Let $T$ $=$ $\underset{g\in Supp(T)}{\sum }$ $t_{g}$ $u_{g}$ be a finitely supported element of $L(F_{N}).$ Then $T$ is hyponormal, if and only if (5.7) \begin{center} $\underset{(g_{1},g_{2})\in Supp(T)^{2}}{\sum }\left( \overline{t_{g_{1}}}\,t_{g_{2}}-t_{g_{1}}\overline{t_{g_{2}}}\right) $ $\geq $ $0.$ \end{center} $\square $ \end{corollary} By the hyponormality characterization (5.7) and by Section 4.2, we can obtain the following corollary, too. \begin{corollary} Let $T$ $=$ $\underset{g\in Supp(T)}{\sum }$ $t_{g}u_{g}$ be a finitely supported element of $L(F_{N}).$ Then $T$ is normal, if and only if (5.8) \begin{center} $\underset{(g_{1},g_{2})\in Supp(T)^{2}}{\sum }\left( \overline{t_{g_{1}}}\,t_{g_{2}}-t_{g_{1}}\overline{t_{g_{2}}}\right) $ $=$ $0.$ \end{center} $\square $ \end{corollary} Finally, let's consider the unitary property of finitely supported elements of $L(F_{N}).$ In Section 3.3, we obtained that a graph operator $T$ of the graph von Neumann algebra $M_{O_{N}}$ of the one-vertex-$N$-loop-edge graph $O_{N}$ is unitary, if and only if \begin{center} $\left( Supp(T)\right) ^{-1}\left( Supp(T)\right) $ $=$ $\{v_{O}\},$ \end{center} and \begin{center} $\underset{(w_{1},w_{2})\in Supp(T)^{2}}{\sum }$ $\overline{t_{w_{1}}}$ $t_{w_{2}}$ $=$ $1,$ in $\Bbb{C},$ \end{center} where $v_{O}$ is the unique vertex of $O_{N},$ for $N$ $\in $ $\Bbb{N}.$ Therefore, we can obtain that: \begin{proposition} Let $T$ $=$ $\underset{g\in Supp(T)}{\sum }$ $t_{g}u_{g}$ be a finitely supported element of $L(F_{N}).$ Then $T$ is unitary, if and only if (5.9) \begin{center} $\left( Supp(T)\right) ^{-1}\left( Supp(T)\right) $ $=$ $\{e_{F_{N}}\},$ \end{center} and (5.10) \begin{center} $\underset{(g_{1},g_{2})\in Supp(T)^{2}}{\sum }$ $\overline{t_{g_{1}}}$ $t_{g_{2}}$ $=$ $1,$ in
$\Bbb{C}.$ \end{center} $\square $ \end{proposition} \textbf{Appendix A. Categorial Groupoids and Groupoid Actions} We say an algebraic structure $(\mathcal{X},$ $\mathcal{Y},$ $s,$ $r)$ is a \emph{(categorial) groupoid}, if it satisfies: (i) $\mathcal{Y}$ $\subset $ $\mathcal{X},$ (ii) for all $x_{1},$ $x_{2}$ $\in $ $\mathcal{X},$ there exists a partially-defined binary operation $(x_{1},$ $x_{2})$ $\mapsto $ $x_{1}$ $x_{2},$ depending on the \emph{source map} $s$ and the \emph{range map} $r,$ satisfying the following; (ii-1) $x_{1}$ $x_{2}$ is well-determined, whenever $r(x_{1})$ $=$ $s(x_{2}),$ and in this case, \begin{center} $s(x_{1}$ $x_{2})$ $=$ $s(x_{1})$ and $r(x_{1}$ $x_{2})$ $=$ $r(x_{2}),$ \end{center} \qquad \ for $x_{1},$ $x_{2}$ $\in $ $\mathcal{X},$ (ii-2) $(x_{1}$ $x_{2})$ $x_{3}$ $=$ $x_{1}$ $(x_{2}$ $x_{3})$, if they are well-determined in the sense of (ii-1), for $x_{1},$ $x_{2},$ $x_{3}$ $\in $ $\mathcal{X},$ (ii-3) if $x$ $\in $ $\mathcal{X},$ then there exist $y,$ $y^{\prime }$ $\in $ $\mathcal{Y}$ such that $s(x)$ $=$ $y$ and $r(x)$ $=$ $y^{\prime },$ satisfying $x$ $=$ $y$ $x$ $y^{\prime }$ (here, the elements $y$ and $y^{\prime }$ are not necessarily distinct), (ii-4) if $x$ $\in $ $\mathcal{X},$ then there exists a unique element $x^{-1}$ for $x$ satisfying \begin{center} $x$ $x^{-1}$ $=$ $s(x)$ and $x^{-1}$ $x$ $=$ $r(x).$ \end{center} The subset $\mathcal{Y}$ of a groupoid $\mathcal{X}$ is said to be the \emph{base of} $\mathcal{X}$. Thus, every group $\Gamma $ is a groupoid $\Gamma $ $=$ $(\Gamma ,$ $\{e_{\Gamma }\},$ $s,$ $r)$ (and hence $s$ $=$ $r$ on $\Gamma $), where $e_{\Gamma }$ is the group-identity of $\Gamma .$ Conversely, every groupoid whose base has cardinality $1$ is a group.
Remark that we can naturally assume that there exists the \emph{empty element} $\emptyset $ in a groupoid $\mathcal{X}.$ The empty element $\emptyset $ means that the products $x_{1}$ $x_{2}$ are not well-defined, for some $x_{1},$ $x_{2}$ $\in $ $\mathcal{X}.$ Notice that if $\left| \mathcal{Y}\right| $ $=$ $1$ (equivalently, if $\mathcal{X}$ is a group), then the empty element $\emptyset $ is not contained in the groupoid $\mathcal{X}.$ However, in general, whenever $\left| \mathcal{Y}\right| $ $\geq $ $2,$ a groupoid $\mathcal{X}$ always contains the empty element. So, if there is no confusion, the existence of the empty element $\emptyset $ is automatically assumed, whenever the base $\mathcal{Y}$ of $\mathcal{X}$ contains more than one element. Under this setting, the partially-defined binary operation on $\mathcal{X}$ is well-defined on $\mathcal{X}$ (more precisely, on $\mathcal{X}$ $\cup $ $\{\emptyset \},$ which is identified with $\mathcal{X},$ whenever $\left| \mathcal{Y}\right| $ $\geq $ $2$).
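The axioms above can be made concrete (our sketch, not from the paper) with paths of a small directed graph: the product of axiom (ii-1) is defined only when the range of the first path matches the source of the second, and the empty element $\emptyset$ is modeled by `None`.

```python
# Minimal model of the partially-defined groupoid product of axiom (ii-1):
# x1 * x2 is defined iff r(x1) = s(x2); otherwise the product is the empty
# element \emptyset, modeled here by None.
# A path is a tuple (source_vertex, edge_sequence, range_vertex).

def source(x):
    return x[0]

def rng(x):
    return x[2]

def product(x1, x2):
    """Groupoid product, defined only when r(x1) = s(x2)."""
    if rng(x1) != source(x2):
        return None  # the empty element
    return (source(x1), x1[1] + x2[1], rng(x2))

# Two-vertex graph: an edge e from v1 to v2, and a loop f at v2.
e = ("v1", ("e",), "v2")
f = ("v2", ("f",), "v2")

ef = product(e, f)  # defined, since r(e) = v2 = s(f)
fe = product(f, e)  # empty, since r(f) = v2 but s(e) = v1
```

Note that `ef` satisfies $s(x_{1}x_{2})=s(x_{1})$ and $r(x_{1}x_{2})=r(x_{2})$, exactly as axiom (ii-1) requires; a base with two vertices is also the smallest situation in which $\emptyset$ genuinely occurs.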
It is easy to check that our graph groupoid $\Bbb{G}$ of a countable directed graph $G$ is indeed a groupoid with its base $V(\widehat{G}).$ i.e., the graph groupoid $\Bbb{G}$ of a graph $G$ is a groupoid \begin{center} $\Bbb{G}$ $=$ $(\Bbb{G},$ $V(\widehat{G}),$ $s$, $r)$, \end{center} satisfying \begin{center} $s(w)$ $=$ $s(v$ $w)$ $=$ $v$ and $r(w)$ $=$ $r(w$ $v^{\prime })$ $=$ $ v^{\prime },$ \end{center} for all $w$ $=$ $v$ $w$ $v^{\prime }$ $\in $ $\Bbb{G}$ with $v,$ $v^{\prime } $ $\in $ $V(\widehat{G}).$ i.e., the vertex set $V(\widehat{G})$ $=$ $V(G)$ is the base of $\Bbb{G}.$\strut Let $\mathcal{X}_{k}$ $=$ $(\mathcal{X}_{k},$ $\mathcal{Y}_{k},$ $s_{k},$ $ r_{k})$ be groupoids, for $k$ $=$ $1,$ $2.$ We say that a map $f$ $:$ $ \mathcal{X}_{1}$ $\rightarrow $ $\mathcal{X}_{2}$ is a \emph{ groupoid-morphism}, if (i)$\ \ \ f$ is a function, (ii)$\ \ f(\mathcal{Y}_{1})$ $\subseteq $ $\mathcal{Y}_{2},$ (iii) $s_{2}\left( f(x)\right) $ $=$ $f\left( s_{1}(x)\right) $ in $\mathcal{ X}_{2},$ for all $x$ $\in $ $\mathcal{X}_{1}$, and (iv) $r_{2}\left( f(x)\right) $ $=$ $f\left( r_{1}(x)\right) $ in $\mathcal{X }_{2},$ for all $x$ $\in $ $\mathcal{X}_{1}.$ Equivalently, $f$ is a groupoid-morphism, if and only if (i)$^{\prime }$ $f$ is a function, (ii)$^{\prime }$ $f$ satisfies \begin{center} $f(x_{1}x_{2})$ $=$ $f(x_{1})$ $f(x_{2})$ in $\mathcal{X}_{2},$ \end{center} for all $x_{1},$ $x_{2}$ $\in $ $\mathcal{X}_{1}.$ If a groupoid-morphism $f$ is bijective, then we say that $f$ is a \emph{ groupoid-isomorphism}, and the groupoids $\mathcal{X}_{1}$ and $\mathcal{X} _{2}$ are said to be \emph{groupoid-isomorphic}.\strut Notice that, if two countable directed graphs $G_{1}$ and $G_{2}$ are \emph{ graph-isomorphic}, via a graph-isomorphism $g$ $:$ $G_{1}$ $\rightarrow $ $ G_{2},$ in the sense that: (i)$\ \ \ g$ is bijective from $V(G_{1})$ onto $V(G_{2}),$ (ii)$\ \ g$ is bijective from $E(G_{1})$ onto $E(G_{2}),$ (iii) $g(e)$ $=$ $g(v_{1}$ $e$ $v_{2})$ $=$ $g(v_{1})$ 
$g(e)$ $g(v_{2})$ in $E(G_{2}),$ for all $e$ $=$ $v_{1}$ $e$ $v_{2}$ $\in $ $E(G_{1}),$ with $v_{1},$ $v_{2}$ $\in $ $V(G_{1}),$ then the graph groupoids $\Bbb{G}_{1}$ and $\Bbb{G}_{2}$ are groupoid-isomorphic. More generally, if two graphs $G_{1}$ and $G_{2}$ have graph-isomorphic shadowed graphs $\widehat{G_{1}}$ and $\widehat{G_{2}},$ then $\Bbb{G}_{1}$ and $\Bbb{G}_{2}$ are groupoid-isomorphic (see [10] and [11]). Let $\mathcal{X}$ $=$ $(\mathcal{X},$ $\mathcal{Y},$ $s,$ $r)$ be a groupoid. We say that this groupoid $\mathcal{X}$ \emph{acts on a set }$Y,$ if there exists a groupoid action $\pi $ of $\mathcal{X}$ such that: (i) $\pi (x)$ $:$ $Y$ $\rightarrow $ $Y$ is a well-defined function, for all $x$ $\in $ $\mathcal{X},$ and (ii) $\pi $ satisfies \begin{center} $\pi (x_{1}x_{2})$ $=$ $\pi (x_{1})$ $\circ $ $\pi (x_{2})$ on $Y,$ \end{center} for all $x_{1},$ $x_{2}$ $\in $ $\mathcal{X}$, where ($\circ $) means the usual composition of maps. We call the set $Y$ an $\mathcal{X}$\emph{-set}. Let $\mathcal{X}_{1}$ $\subset $ $\mathcal{X}_{2}$ be a subset, where $\mathcal{X}_{2}$ $=$ $(\mathcal{X}_{2},$ $\mathcal{Y}_{2},$ $s,$ $r)$ is a groupoid.
Assume that $\mathcal{X}_{1}$ $=$ $(\mathcal{X}_{1},$ $\mathcal{Y}_{1},$ $s,$ $r),$ itself, is a groupoid, where $\mathcal{Y}_{1}$ $=$ $\mathcal{X}_{1}$ $\cap $ $\mathcal{Y}_{2}.$ Then we say that the groupoid $\mathcal{X}_{1}$ is a \emph{subgroupoid} of $\mathcal{X}_{2}.$ Recall that we say a graph $G_{1}$ is a \emph{full-subgraph} of a countable directed graph $G_{2},$ if \begin{center} $E(G_{1})$ $\subseteq $ $E(G_{2})$ \end{center} and \begin{center} $V(G_{1})$ $=$ $\{v$ $\in $ $V(G_{2})$ $:$ $e$ $=$ $v$ $e$ or $e$ $=$ $e$ $v,$ $\exists $ $e$ $\in $ $E(G_{1})\}.$ \end{center} Remark the difference between full-subgraphs and subgraphs: We say that $G_{1}^{\prime }$ is a \emph{subgraph} of $G_{2},$ if \begin{center} $V(G_{1}^{\prime })$ $\subseteq $ $V(G_{2})$ \end{center} and \begin{center} $E(G_{1}^{\prime })$ $=$ $\{e$ $\in $ $E(G_{2})$ $:$ $e$ $=$ $v_{1}$ $e$ $v_{2},$ for $v_{1},$ $v_{2}$ $\in $ $V(G_{1}^{\prime })\}.$ \end{center} Also, if $V$ is a graph with $V(V)$ $\subseteq $ $V(G_{2})$ and $E(V)$ $=$ $\varnothing ,$ then we call $V$ a \emph{vertex subgraph of} $G_{2}.$ We will say that $G_{1}$ is a \emph{part} of $G_{2},$ if $G_{1}$ is either a full-subgraph of $G_{2},$ or a subgraph of $G_{2},$ or a vertex subgraph of $G_{2}.$ It is easy to show that the graph groupoid $\Bbb{G}_{1}$ of $G_{1}$ is a subgroupoid of the graph groupoid $\Bbb{G}_{2}$ of $G_{2},$ whenever $G_{1}$ is a part of $G_{2}.$ \textbf{Appendix B.
Operator-Theoretic Properties} Let $H$ be an arbitrary separable Hilbert space equipped with its inner product $<,>;$ i.e., the inner product $<,>$ on $H$ is the sesquilinear form, \begin{center} $<,>$ $:$ $H$ $\times $ $H$ $\rightarrow $ $\Bbb{C},$ \end{center} satisfying: (i)\ \ $\ <t_{1}\xi _{1}+t_{2}\xi _{2},$ $\eta >$ $=$ $t_{1}<\xi _{1},$ $\eta >$ $+$ $t_{2}$ $<\xi _{2},$ $\eta >,$ (ii)\ $<\xi ,$ $\eta >$ $=$ $\overline{<\eta ,\ \xi >},$ (iii) $<\xi ,$ $\xi >$ $\;\geq $ $0,$ and equality holds, if and only if $\xi $ $=$ $0_{H},$ for all $\xi ,$ $\xi _{k},$ $\eta $ $\in $ $H,$ and $t_{k}$ $\in $ $\Bbb{C},$ where $\overline{t}$ means the conjugate of $t,$ and $0_{H}$ means the zero vector in $H.$ As usual, let $B(H)$ be the operator algebra consisting of all (bounded linear) operators on $H.$ For any operator $T$ $\in $ $B(H),$ there exists a unique operator $T^{*}$ satisfying \begin{center} $<T\xi ,$ $\eta >$ $=$ $<\xi ,$ $T^{*}\eta >,$ \end{center} for all $\xi ,$ $\eta $ $\in $ $H.$ This operator $T^{*}$ is called the \emph{adjoint of} $T.$ \begin{definition} Let $T$ $\in $ $B(H)$ be an operator.
(1) We say that an operator $T$ is \emph{self-adjoint}, if the adjoint $T^{*}$ of\emph{\ }$T$ is identical to $T,$ i.e., \begin{center} $T$ is self-adjoint $\overset{def}{\Longleftrightarrow }$ $T^{*}$ $=$ $T$ in $B(H).$ \end{center} (2) An operator $T$ is said to be \emph{normal}, if the product $T^{*}T$ of $T^{*}$ and $T$ is identical to the product $TT^{*}$ on $H,$ i.e., \begin{center} $T$ is normal $\overset{def}{\Longleftrightarrow }$ $T^{*}T$ $=$ $TT^{*}$ in $B(H).$ \end{center} (3) We call $T$ a \emph{unitary}, if it is normal, and $T^{*}T$ $=$ $TT^{*}$ is identical to the identity operator $1_{H}$ on $H,$ i.e., \begin{center} $T$ is unitary $\overset{def}{\Longleftrightarrow }$ $T^{*}T$ $=$ $1_{H}$ $=$ $TT^{*}$ in $B(H).$ \end{center} (4) An operator $T$ $\in $ $B(H)$ is called a \emph{projection}, if it is self-adjoint and idempotent, in the sense that $T^{2}$ $=$ $T$ on $H;$ i.e., \begin{center} $T$ is a projection $\overset{def}{\Longleftrightarrow }$ $T^{*}$ $=$ $T$ $=$ $T^{2}$ in $B(H).$ \end{center} (5) We say an operator $T$ is \emph{positive} on $H,$ if \begin{center} $<T\xi ,$ $\xi >$ \ $\geq $ $0,$ for all $\xi $ $\in $ $H$ with $\left\| \xi \right\| $ $=$ $1,$ \end{center} where $\left\| \xi \right\| $ $\overset{def}{=}$ $\sqrt{<\xi ,\ \xi >}$ is the Hilbert norm of $\xi ,$ for all $\xi $ $\in $ $H.$ (6) An operator $T$ is said to be \emph{hyponormal}, if the operator $T^{*}T$ $-$ $TT^{*}$ is positive on $H.$ \end{definition} Such properties of operators are well-known in operator theory. Also, if an operator $T$ has one of the above properties, then it is a ``good'' operator in the theory. For instance, if $T$ is normal, then it satisfies the spectral mapping theorem, and hence $f(T)$ is again normal, for all continuous functions $f$ on $\Bbb{C},$ etc.
By definition, we can check that: (2.3.1) If $T$ is self-adjoint, then $T$ is normal. (2.3.2) If $T$ is unitary, then $T$ is normal. (2.3.3) Every projection is self-adjoint. (2.3.4) Every normal operator is hyponormal. (2.3.5) If $T$ is unitary, then $T$ is invertible; moreover, $T^{*}$ $=$ $T^{-1}.$ Clearly, the converses of the above facts do not hold true, in general. We say that an operator $T$ is a \emph{partial isometry}, if the product $T^{*}T$ of the adjoint $T^{*}$ and $T$ is a projection. The following characterization is also known: $T$ is a partial isometry, if and only if $TT^{*}T$ $=$ $T,$ if and only if $T^{*}$ is a partial isometry, if and only if $T^{*}TT^{*}$ $=$ $T^{*}.$ In particular, the projections $T^{*}T$ and $TT^{*}$ are called the \emph{initial projection} and the \emph{final projection} of $T,$ respectively; i.e., the projection $T^{*}T$ (resp., the projection $TT^{*}$) sends the elements of $H$ into the subspace $H_{init}^{T}$ (resp., $H_{fin}^{T}$) of $H.$ We call the subspaces $H_{init}^{T}$ and $H_{fin}^{T}$ of $H,$ induced by a partial isometry $T$, the \emph{initial subspace} and the \emph{final subspace} of $T$ in $H,$ respectively. A partial isometry $T$ satisfying $T^{*}T$ $=$ $1_{H}$ is called an \emph{isometry}. Keep in mind that, even though $T^{*}T$ $=$ $1_{H},$ it is possible that $TT^{*}$ $\neq $ $1_{H}.$ Clearly, if $TT^{*}$ $=$ $1_{H},$ for an isometry $T,$ then this isometry $T$ becomes a unitary. A partial isometry $T,$ satisfying $TT^{*}$ $=$ $1_{H},$ is called a \emph{co-isometry}. In many cases, the projections $T^{*}T$ and $TT^{*}$ of a partial isometry $T$ are distinct from each other, whenever $H_{init}^{T}$ and $H_{fin}^{T}$ are different in $H.$ This shows that partial isometries are not normal, in general.
For instance, let \begin{center} $T$ $=$ $\left( \begin{array}{ll} 0 & 0 \\ 1 & 0 \end{array} \right) ,$ with its adjoint $T^{*}$ $=$ $\left( \begin{array}{ll} 0 & 1 \\ 0 & 0 \end{array} \right) ,$ \end{center} on $H$ $=$ $\Bbb{C}^{\oplus 2}.$ Then it is a partial isometry, since $TT^{*}T$ $=$ $T.$ And, it has its initial projection and final projection as follows: \begin{center} $T^{*}T$ $=$ $\left( \begin{array}{ll} 1 & 0 \\ 0 & 0 \end{array} \right) $, and $TT^{*}$ $=$ $\left( \begin{array}{ll} 0 & 0 \\ 0 & 1 \end{array} \right) ,$ \end{center} on $H.$ We can easily see that $T^{*}T$ $\neq $ $TT^{*},$ and hence $T$ is not normal. Let $T$ $\in $ $B(H)$ be an operator. Then $T$ has its \emph{spectrum} $spec(T),$ defined by the subset \begin{center} $spec(T)$ $\overset{def}{=}$ $\{t$ $\in $ $\Bbb{C}$ $:$ $T$ $-$ $t$ $1_{H}$ is not invertible on $H\}$ \end{center} of $\Bbb{C}.$ It is well-known that every spectrum $spec(T)$ is nonempty and compact in $\Bbb{C},$ whenever $T$ is a (bounded linear) operator on a (complex) Hilbert space $H.$ This numerical data for $T$ is very valuable in analyzing the operator. In particular, every normal operator $T$ can be understood (or regarded) as a complex-valued function, in the sense that \begin{center} $T$ $=$ $\int_{spec(T)}$ $t$ $dE,$ \end{center} where $E$ is the suitable (operator-valued) measure on $spec(T),$ called the \emph{spectral measure}. Thus, if $f$ is a $\Bbb{C}$-valued continuous map, then \begin{center} $f(T)$ $=$ $\int_{spec(T)}$ $f(t)$ $dE(t);$ \end{center} i.e., the \emph{spectral mapping theorem} holds for normal operators. Thus all operators $f(T)$ are normal, too, whenever $T$ is normal. Also, by Gelfand, the $C^{*}$-algebra $C^{*}(T),$ generated by a normal operator $T,$ is $*$-isomorphic to the $C^{*}$-algebra $C(spec(T)),$ consisting of all continuous $\Bbb{C}$-valued functions on $spec(T).$ (However, finding spectra of operators is not easy at all.)
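As a quick numerical check of this $2\times 2$ example (our illustration, not part of the paper), NumPy confirms the partial-isometry identity $TT^{*}T=T$, the two projections, and the failure of normality:

```python
import numpy as np

# The 2x2 partial isometry from the example above.
T = np.array([[0, 0],
              [1, 0]], dtype=complex)
Ts = T.conj().T  # the adjoint T*

# Partial isometry: T T* T = T.
assert np.allclose(T @ Ts @ T, T)

# Initial projection T*T and final projection TT*.
init_proj = Ts @ T  # projects onto the first coordinate
fin_proj = T @ Ts   # projects onto the second coordinate

# T is not normal, since T*T and TT* differ.
assert not np.allclose(init_proj, fin_proj)
```

Both products are projections onto different coordinate axes of $\Bbb{C}^{\oplus 2}$, which is exactly the situation described in the text where $H_{init}^{T}\neq H_{fin}^{T}$.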
\textbf{References} {\small [1] \ \ A. Gibbons and L. Novak, Hybrid Graph Theory and Network Analysis, ISBN: 0-521-46117-0, (1999) Cambridge Univ. Press.} {\small [2] \ \ B. Solel, You Can See the Arrows in a Quiver Operator Algebra, (2000) Preprint.} {\small [3] \ \ I. Cho, Hyponormality of Toeplitz Operators with Trigonometric Polynomial Symbols, Master Degree Thesis, (1999) Sungkyunkwan Univ.} {\small [4] \ \ I. Cho, Graph Groupoids and Partial Isometries, ISBN: 978-3-8383-1397-9, (2009) LAP Publisher.} {\small [5] \ \ I. Cho and P. E. T. Jorgensen, }$C^{*}${\small -Subalgebras Generated by Partial Isometries, JMP, DOI: 10.1063/1.3056588, (2009).} {\small [6] \ \ I. Cho and P. E. T. Jorgensen, }$C^{*}${\small -Subalgebras Generated by a Single Operator in }$B(H)${\small , ACTA Appl. Math., 108, (2009) 625 - 664.} {\small [7] \ \ I. Cho and P. E. T. Jorgensen, Measure Framings on Graphs and Corresponding von Neumann Algebras, (2009) Preprint.} {\small [8] \ \ I. Raeburn, Graph Algebras, CBMS no 3, AMS (2005).} {\small [9] \ \ P. D. Mitchener, }$C^{*}${\small -Categories, Groupoid Actions, Equivalent KK-Theory, and the Baum-Connes Conjecture, arXiv:math.KT/0204291v1, (2005) Preprint.} {\small [10] R. Gilman, V. Shpilrain and A. G. Myasnikov (editors), Computational and Statistical Group Theory, Contemporary Math, 298, (2001) AMS.} {\small [11] F. Radulescu, Random Matrices, Amalgamated Free Products and Subfactors of the $C^{*}$-Algebra of a Free Group, of Noninteger Index, Invent. Math., 115, (1994) 347 - 389.} {\small [12] P. R. Halmos, Hilbert Space Problem Book (2-nd Ed), ISBN: 0-387-90685-1, (1982) Springer-Verlag.} {\small [13] T. Yoshino, Introduction to Operator Theory, ISBN: 0-582-23743-2, (1993) Longman Sci. \& Tech.} {\small [14] W. F.
Stinespring, Positive Functions on }$C^{*}${\small -Algebras, Proc. Amer. Math. Soc., vol 6, (1955) 211 - 216.} {\small [15] M. B. Stefan, Indecomposability of Free Group Factors over Nonprime Subfactors and Abelian Subalgebras, Pacific J. Math., 219, no. 2, (2005) 365 - 390.} {\small [16] N. Tanaka, Conjugacy Classes of Zero Entropy Automorphisms on Free Group Factors, Nihonkai Math. J., 6, no. 2, (1995) 171 - 175.} \end{document}
\begin{document} \title{Filtrations and torsion pairs on Abramovich Polishchuk's heart} \author{Yucheng Liu} \address{Beijing International Center for Mathematical Research, Peking University, No.5 Yiheyuan Road, Haidian District, Beijing, 100871, P.R.China} \email{[email protected]} \keywords{Bridgeland stability conditions, product varieties, filtrations} \subjclass[2010]{14F08, 14J40, 18E99} \maketitle \begin{abstract} We study some abelian subcategories and torsion pairs in Abramovich Polishchuk's heart, and we apply the construction from \cite{Stabilityconditionsonproductvarieties} to a full triangulated subcategory $\mathcal{D}_S^{\leq 1}$ in $D(X\times S)$, for an arbitrary smooth projective variety $S$. We also define a notion of $l$-th level stability, which is a generalization of the slope stability and the Gieseker stability. We show that for any object $E$ in Abramovich Polishchuk's heart, there is a unique filtration whose factors are $l$-th level semistable, and whose phase vectors are decreasing in the lexicographic order. \end{abstract} \section{Introduction} Given a $t$-structure with Noetherian heart on the bounded derived category of coherent sheaves $D(X)$, with $X$ a smooth variety over a field $k$ of characteristic $0$, Abramovich and Polishchuk (\cite{APsheaves}) constructed a $t$-structure on $D(X\times S)$, where $S$ is another smooth variety over the same base field $k$. Their construction was refined by Polishchuk in his paper \cite{polishchuk2007constant}, where he removed the characteristic-$0$ and smoothness assumptions. Their construction has important applications in the study of Bridgeland stability conditions.
For instance, Bayer and Macr\`i used it to construct a nef divisor on the moduli space of Bridgeland semistable objects (see \cite{bayer2014mmp} and \cite{bayer2014projectivity}); the author used their construction to construct Bridgeland stability conditions on $X\times S$ (see \cite{Stabilityconditionsonproductvarieties}), where $S$ is a smooth projective curve. In this paper, we will further study the relation between Abramovich Polishchuk's heart (AP heart for short) and stability conditions. As in \cite{Stabilityconditionsonproductvarieties}, suppose that we have a stability condition $\sigma=(\mathcal{A},Z)$ on $D(X)$ (see Definition \ref{first WSC} below for the definition of stability conditions), with $\mathcal{A}$ the Noetherian heart of a bounded $t$-structure on $D(X)$ and $Z:K_0(\mathcal{A})\rightarrow \mathbb{C}$ a central charge with discrete image. We denote the corresponding AP heart on $D(X\times S)$ by $\mathcal{A}_S$. Then for any object $E\in\mathcal{A}_S$, there is a complexified Hilbert polynomial $$L_E(n)=a_r(E)n^r+a_{r-1}(E)n^{r-1}+\cdots +a_0(E)+i(b_r(E)n^r+b_{r-1}(E)n^{r-1}+\cdots+b_0(E)),$$ where $r=\dim (S)$ and $a_i, b_i$ are linear maps from the Grothendieck group $K_0(D(X\times S))$ to $\mathbb{R}$ for any $0\leq i\leq r$. For any integer $0\leq j\leq r$, one can define the full subcategory $\mathcal{A}_S^{\leq j}$ in $\mathcal{A}_S$ consisting of objects whose complexified Hilbert polynomial is of degree $\leq j$. It is easy to show that $\mathcal{A}_S^{\leq j}$ is an abelian subcategory, and there is a unique filtration $$0=E_0\subset E_1\subset\cdots\subset E_r=E$$ for any object $E\in\mathcal{A}_S$, such that $E_j$ is the maximal subobject of $E$ in $\mathcal{A}_S^{\leq j}$. If we use $\mathcal{D}^{\leq 1}_S$ to denote the minimal triangulated subcategory generated by $\mathcal{A}_S^{\leq 1}$ in $D(X\times S)$, we can apply the construction in \cite{Stabilityconditionsonproductvarieties} to $\mathcal{D}_S^{\leq 1}$.
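The degree filtration just described depends only on the vanishing of the coefficients $a_i(E)$, $b_i(E)$. As a small side illustration (ours, not from the paper), membership of an object in $\mathcal{A}_S^{\leq j}$ can be tested from the two coefficient lists alone:

```python
# Illustration (ours): the degree of the complexified Hilbert polynomial
# L_E(n) = sum_i (a_i + i*b_i) n^i, and the test "E lies in A_S^{<= j}",
# which holds iff a_i = b_i = 0 for every i > j.

def complexified_degree(a, b):
    """a, b: real coefficient lists a_0..a_r and b_0..b_r of L_E(n)."""
    assert len(a) == len(b)
    for i in range(len(a) - 1, -1, -1):
        if a[i] != 0 or b[i] != 0:
            return i
    return -1  # L_E is the zero polynomial; flagged by the sentinel -1

def in_A_leq(a, b, j):
    """Membership test for the subcategory A_S^{<= j}, by degree."""
    return complexified_degree(a, b) <= j
```

For example, an object with $a=(a_0,0,0)$ and $b=(0,b_1,0)$, $b_1\neq 0$, has degree $1$ and so lies in $\mathcal{A}_S^{\leq 1}$ but not in $\mathcal{A}_S^{\leq 0}$.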
Thus we get the following result. \begin{theorem}\label{main corollary} The triangulated category $\mathcal{D}_S^{\leq 1}$ admits stability conditions. \end{theorem} In the meantime, we prove a sequence of positivity results on the coefficients of $L_E(n)$ (see also \cite[Lemma 4.1]{somequadraticinequalities}). These positivity results help us to construct weak stability functions on $\mathcal{A}_S$ and its abelian subcategories. This leads us to the notion of $l$-th level stability, which can be viewed as a generalization of both the slope stability and the Gieseker stability. However, the phase of an object with respect to such a stability is no longer a single real number, but a vector of real numbers. Then we show the existence and uniqueness of the filtration of an object $E\in\mathcal{A}_S$ with respect to such a stability, which we call the lexicographic order filtration of $E$. \begin{theorem}\label{lexi-order filtration in introduction} Assume $l\leq j$. Given $t_1,t_2,\cdots,t_l\in\mathbb{Q}_{>0}$, for any object $E\in\mathcal{A}_S^{\leq j}$, there exists a unique filtration $$0=E_0\subset E_1\subset E_2\subset\cdots \subset E_n=E,$$such that each quotient factor $E_{i}/E_{i-1}$ is $l$-th level semi-stable of phase $\vec{\phi}_i$ and $$\vec{\phi}_1>\vec{\phi}_2>\cdots>\vec{\phi}_n.$$ \end{theorem} Given $\vec{t}=(t_1,\cdots,t_{r+1})\in\mathbb{Q}_{>0}^{r+1}$ and a phase vector $\vec{\phi}=(\phi_1,\cdots,\phi_{r+1})\in(0,1)^{r+1}$, using this theorem, we can construct torsion pairs on $\mathcal{A}_S$: $$\mathcal{T}_{\vec{\phi}}^{\vec{t}}\coloneqq \{E\in\mathcal{A}_S\mid \text{all the factors in Theorem \ref{lexi-order filtration in introduction} have phases} > \vec{\phi}\},$$ $$\mathcal{F}_{\vec{\phi}}^{\vec{t}}\coloneqq\{E\in\mathcal{A}_S\mid \text{all the factors in Theorem \ref{lexi-order filtration in introduction} have phases} \leq \vec{\phi}\}.$$ At the end of this paper, we state some positivity results on the tilted heart
$\mathcal{A}_{S,\vec{\phi}}^{\vec{t}}\coloneqq \langle \mathcal{F}_{\vec{\phi}}^{\vec{t}}[1], \mathcal{T}_{\vec{\phi}}^{\vec{t}}\rangle$ (see Theorem \ref{positivity on tilted heart}). \subsection*{Outline of the paper} In Section \ref{BSC}, we review some basic definitions and results in the theory of stability conditions. In Section \ref{Preliminary results}, we review some necessary results from \cite{APsheaves}, \cite{polishchuk2007constant} and \cite{Stabilityconditionsonproductvarieties}. In Section \ref{Abelian subcategories}, we study some abelian subcategories of $\mathcal{A}_S$ and state some general results on weak stability conditions. In Section \ref{torsion pairs}, we apply the results from the previous section to $\mathcal{A}_S$. As a result, we define the notion of $l$-th level stability and prove the existence and uniqueness of the lexicographic order filtration. Moreover, we use the filtration to construct torsion pairs of $\mathcal{A}_S$ and show some positivity results on the tilted hearts. \subsection*{Notation and Conventions} In this paper, we work over an algebraically closed field $k$ of arbitrary characteristic. All varieties are integral separated algebraic schemes of finite type over $k$. We will use $D(X)$ rather than the usual notation $D^b(coh X)$ to denote the bounded derived category of coherent sheaves on $X$. \section{Bridgeland Stability conditions}\label{BSC} In this section, we review the definition and some basic results on (weak) stability conditions (see \cite{beilinson1982faisceaux}, \cite{polishchuk2007constant}, \cite{bridgeland2008stability}, \cite{kontsevich2008stability} and \cite{baye2011bridgeland}). \begin{definition} Let $\mathcal{D}$ be a triangulated category. A $t$-structure on $\mathcal{D}$ is a pair of full subcategories $(\mathcal{D}^{\leq 0},\mathcal{D}^{\geq 0})$ satisfying the conditions (i)-(iii) below.
We denote $\mathcal{D}^{\leq n}=\mathcal{D}^{\leq 0}[-n]$, $\mathcal{D}^{\geq n}=\mathcal{D}^{\geq 0}[-n]$ for every $n\in\mathbb{Z}$. Then the conditions are: (i) $Hom(X,Y)=0$ for every $X\in\mathcal{D}^{\leq 0}$ and $Y\in\mathcal{D}^{\geq 1}$; (ii) $\mathcal{D}^{\leq -1}\subset \mathcal{D}^{\leq 0}$ and $\mathcal{D}^{\geq 1}\subset \mathcal{D}^{\geq 0}$; (iii) every object $X\in\mathcal{D}$ fits into an exact triangle $$\tau^{\leq 0}X\rightarrow X\rightarrow \tau^{\geq 1}X\rightarrow (\tau^{\leq 0}X)[1]$$ with $\tau^{\leq 0}X\in\mathcal{D}^{\leq 0}$, $\tau^{\geq 1}X\in\mathcal{D}^{\geq 1}$. The heart of the $t$-structure is $\mathcal{A}=\mathcal{D}^{\leq 0}\cap\mathcal{D}^{\geq 0}$. It is an abelian category (see e.g. \cite[Theorem 8.1.9]{hotta2007d}). The associated cohomology functors are defined by $H^0=\tau^{\leq 0}\tau^{\geq 0}$, $H^i(X)=H^0(X[i])$. \end{definition} Motivated by work from string theory (see e.g. \cite{douglas2002dirichlet}), the notion of a t-structure was refined by Bridgeland to the following one in \cite{bridgeland2007stability}. \begin{definition}\label{slicing} A slicing on a triangulated category $\mathcal{D}$ consists of full subcategories $\mathcal{P}(\phi)\subset\mathcal{D}$ for each $\phi\in\mathbb{R}$, satisfying the following axioms: \par (a) for all $\phi \in \mathbb{R}$, $\mathcal{P}(\phi+1)=\mathcal{P}(\phi)[1]$, \par (b) if $\phi_1>\phi_2$ and $A_j\in\mathcal{P}(\phi_j)$ then $Hom_{\mathcal{D}}(A_1,A_2)=0$, \par (c) for every $0\neq E\in\mathcal{D}$ there is a sequence of real numbers $$\phi_1>\phi_2>\cdots>\phi_m$$ and a sequence of morphisms $$0=E_0\xrightarrow{f_1}E_1\xrightarrow{f_2} \cdots \xrightarrow{f_m}E_m=E $$ such that the cone of $f_j$ is in $\mathcal{P}(\phi_j)$ for all $j$. \end{definition} This definition of slicings can be viewed as a refinement of t-structures in triangulated categories.
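This refinement can be made precise: a slicing induces a t-structure at every phase. Writing $\mathcal{P}(>\phi)$ for the extension-closed full subcategory of $\mathcal{D}$ generated by the subcategories $\mathcal{P}(\psi)$ with $\psi>\phi$, and $\mathcal{P}(\leq\phi)$ for the one generated by the $\mathcal{P}(\psi)$ with $\psi\leq\phi$, one obtains (see \cite{bridgeland2007stability}) a t-structure $$\big(\mathcal{P}(>\phi),\ \mathcal{P}(\leq\phi)\big)$$ on $\mathcal{D}$ for every $\phi\in\mathbb{R}$. These are exactly the t-structures to which Abramovich and Polishchuk's construction is applied in Section \ref{Preliminary results}.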
\begin{definition}\label{first WSC} A stability condition on $\mathcal{D}$ consists of a pair $(\mathcal{P},Z)$, where $\mathcal{P}$ is a slicing and $Z:K_0(\mathcal{D})\rightarrow\mathbb{C}$ is a group homomorphism such that the following conditions are satisfied: \par (a) If $0\neq E\in\mathcal{P}(\phi)$ then $Z(E)=m(E)\exp(i\pi\phi)$ for some $m(E)\in \mathbb{R}_{> 0}$. \par (b) (Support property) The central charge $Z$ factors as $K_0(\mathcal{D})\xrightarrow{v} \Lambda\xrightarrow{g} \mathbb{C}$, where $\Lambda$ is a finite rank lattice, $v$ is a surjective group homomorphism and $g$ is a group homomorphism, and there exists a quadratic form $Q$ on $\Lambda\otimes\mathbb{R}$ such that $Q|_{ker(g)}$ is negative definite, and $Q(v(E))\geq 0$ for any object $E\in\mathcal{P}(\phi)$. \end{definition} \begin{remark} If we only require $m(E)$ to be nonnegative in (a), then the pair $(\mathcal{P},Z)$ is called a weak stability condition. By \cite[Lemma 2.2]{bridgeland2008stability}, there is an $S^1$-action on the space of stability conditions. Specifically, for any element $e^{i\pi\theta}\in S^1$, we set $e^{i\pi\theta}\cdot (\mathcal{P},Z)=(\mathcal{P}',Z')$ with $Z'=e^{i\pi\theta} Z$ and $\mathcal{P}'(\phi)=\mathcal{P}(\phi-\theta)$. \end{remark} There is an equivalent way of defining a stability condition, which will be used more frequently in this paper. Firstly, we need to define what a (weak) stability function $Z$ on an abelian category $\mathcal{A}$ is. \begin{definition} Let $\mathcal{A}$ be an abelian category. We call a group homomorphism $Z:K_0(\mathcal{A})\rightarrow \mathbb{C}$ a weak stability function on $\mathcal{A}$ if, for $E\in \mathcal{A}$, we have $Im(Z(E))\geq 0$, with $Im(Z(E))=0 \implies Re(Z(E))\leq 0$. If moreover $Im(Z(E))=0\implies Re(Z(E))< 0$ for every $E\neq 0$, we say that $Z$ is a stability function.
\end{definition} \begin{definition}\label{slicing } A stability condition on $\mathcal{D}$ is a pair $\sigma=(\mathcal{A},Z)$ consisting of the heart of a bounded t-structure $\mathcal{A}\subset\mathcal{D}$ and a stability function $Z:K_0(\mathcal{A})\rightarrow \mathbb{C}$ such that (a) and (b) below are satisfied: \par (a) (HN-filtration) The function $Z$ allows us to define a slope for any object $E$ in the heart $\mathcal{A}$ by $$\mu_{\sigma}(E):=\begin{cases} -\frac{Re(Z(E))}{Im(Z(E))}\ &\text{if} \ Im(Z(E))> 0,\\ +\infty &\text{otherwise.} \end{cases}$$ The slope function gives a notion of stability: a nonzero object $E\in \mathcal{A}$ is $\sigma$-semistable if for every proper nonzero subobject $F$, we have $\mu_{\sigma}(F)\leq\mu_{\sigma}(E)$. We require any object $E$ of $\mathcal{A}$ to have a Harder-Narasimhan filtration with $\sigma$-semistable factors, i.e., there exists a unique filtration $$0=E_0\subset E_1 \subset E_2\subset \cdots \subset E_{m-1} \subset E_m=E$$ such that $E_i/E_{i-1}$ is $\sigma$-semistable and $\mu_{\sigma}(E_i/E_{i-1})>\mu_{\sigma}(E_{i+1}/E_i)$ for any $1\leq i\leq m-1$. (b) (Support property) As in Definition \ref{first WSC}, the central charge $Z$ factors as $K_0(\mathcal{D})\xrightarrow{v} \Lambda\xrightarrow{g} \mathbb{C}$, and there exists a quadratic form $Q$ on $\Lambda_{\mathbb{R}}$ such that $Q|_{ker(g)}$ is negative definite and $Q(v(E))\geq 0$ for any $\sigma$-semistable object $E\in\mathcal{A}$. \end{definition} \begin{remark}\label{abuse of terminology} Similarly, we call $(\mathcal{A},Z)$ a weak stability condition if $Z$ is just a weak stability function on $\mathcal{A}$. The equivalence of these two definitions is given by setting $\mathcal{P}(\phi)$ to be the full subcategory consisting of $\sigma$-semistable objects in $\mathcal{A}$ of phase $\phi$. If $Z$ has discrete image in $\mathbb{C}$, and $\mathcal{A}$ is Noetherian, then condition (a) is satisfied automatically.
Sometimes in this paper (especially in Section 5), a pair $(\mathcal{A},Z)$ of an abelian category $\mathcal{A}$ and a (weak) stability function $Z$ satisfying (a) will also be called a (weak) stability condition. \end{remark} There is an important operation called tilting with respect to a torsion pair, which is very useful for constructing stability conditions. \begin{definition} A torsion pair in an abelian category $\mathcal{A}$ is a pair of full subcategories $(\mathcal{T},\mathcal{F})$ of $\mathcal{A}$ which satisfy $Hom_{\mathcal{A}}(T,F)=0$ for $T\in\mathcal{T}$ and $F\in\mathcal{F}$, and such that every object $E\in\mathcal{A}$ fits into a short exact sequence $$0\rightarrow T\rightarrow E\rightarrow F\rightarrow 0$$ for some pair of objects $T\in\mathcal{T}$ and $F\in\mathcal{F}$. \end{definition} \begin{remark} In this paper, most torsion pairs come from weak stability conditions $\sigma=(\mathcal{A},Z)$. In fact, let $$\mathcal{T}=\{E\in\mathcal{A}\mid \mu_{\sigma,min}(E)>0\}\quad\text{and}\quad \mathcal{F}=\{E\in\mathcal{A}\mid \mu_{\sigma,max}(E)\leq 0\}$$ be a pair of full subcategories, where $\mu_{\sigma,min}(E)$ is the slope of the last HN-factor of $E$ and $\mu_{\sigma,max}(E)$ is the slope of the first HN-factor of $E$. It is easy to see that this is a torsion pair. \end{remark} \begin{lemma}[{\cite[Proposition 2.1]{happel1996tilting}}] Suppose $\mathcal{A}$ is the heart of a bounded t-structure on a triangulated category $\mathcal{D}$, and $(\mathcal{T},\mathcal{F})$ is a torsion pair in $\mathcal{A}$. Then $\mathcal{A}^{\#}=\langle\mathcal{F}[1], \mathcal{T}\rangle$ is the heart of a bounded t-structure on $\mathcal{D}$. \end{lemma} In this paper, we are interested in the case when $\mathcal{D}$ is the bounded derived category of coherent sheaves on an algebraic variety $X$ or an admissible component of it.
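Before specializing, let us recall the basic example (see \cite{bridgeland2007stability}): for a smooth projective curve $C$, take $\mathcal{A}=coh(C)$ and $$Z(E)=-deg(E)+i\, rank(E).$$ This is a stability function on $coh(C)$: the associated slope is $\mu_{\sigma}(E)=deg(E)/rank(E)$, HN filtrations exist by the classical theory of Harder and Narasimhan, and the resulting stability condition on $D(C)$ recovers slope stability of vector bundles.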
From now on, $X$ will be a smooth projective variety over an algebraically closed field $k$, and $D(X)$ will be the bounded derived category of coherent sheaves on $X$. \section{AP heart and complexified Hilbert polynomial}\label{Preliminary results} In this section, we will recall some results from \cite{APsheaves}, \cite{polishchuk2007constant}, \cite{bayer2014mmp} and \cite{Stabilityconditionsonproductvarieties}. We will work under the following setup in the rest of this paper. \textbf{Setup}: Suppose $X$ and $S$ are smooth projective varieties, and $\sigma=(\mathcal{A},Z)$ is a stability condition on $D(X)$, where $\mathcal{A}$ is Noetherian and the image of $Z$ is discrete. The heart $\mathcal{A}$ corresponds to a bounded $t$-structure $(D^{\leq 0}(X), D^{\geq 0}(X))$ on $D(X)$. For any $t$-structure $(D^{\leq 0}(X), D^{\geq 0}(X))$ on $D(X)$, we have the following theorem. \begin{theorem}[{\cite[Theorem 3.3.6]{polishchuk2007constant}}] Suppose $S$ is a projective variety of dimension $r$, and $\mathcal{O}(1)$ is an ample line bundle on $S$. There exists a global t-structure on $D(X\times S)$ defined as $$D^{[a,b]}(X\times S)=\{E\in D(X\times S)\mid \textbf{R}p_*(E \otimes q^*(\mathcal{O}(n)))\in D^{[a,b]}(X) \text{ for all } n\gg 0\}.$$ Here $p$ and $q$ denote the projections from $X\times S$ to $X$ and $S$ respectively, and $a,b$ can be infinite. Moreover, the global heart $$\mathcal{A}_S=D^{\leq 0}(X\times S)\cap D^{\geq 0}(X\times S)$$ is Noetherian and independent of the choice of $\mathcal{O}(1)$. \end{theorem} \begin{remark} This theorem can be viewed as a generalization of Serre's vanishing theorem. In fact, if $\mathcal{A}=coh(X)$ is the abelian category of coherent sheaves on $X$, then $\mathcal{A}_S=coh(X\times S)$ is the abelian category of coherent sheaves on $X\times S$.
\end{remark} \begin{remark}\label{Complexified Hilber polynomials} In \cite{Stabilityconditionsonproductvarieties}, we observed that $Z(\textbf{R}p_*(E\otimes q^*(\mathcal{O}(n))))$ is a polynomial in $n$ of degree no more than $dim(S)=r$, whose leading coefficient is a weak stability function on $\mathcal{A}_S$. We denote this polynomial by $L_E(n)$ for any object $E$ in $\mathcal{A}_S$. \end{remark} We also have the following theorem. \begin{theorem}[{\cite[Theorem 3.3]{Stabilityconditionsonproductvarieties}}]\label{glabal weak stablity condition} Assume $S$ is a smooth projective variety of dimension $r$, and define $(\mathcal{A}_S, Z_S)$ as below: \\ $$\mathcal{A}_S=\{E\in D(X\times S)\mid \textbf{R}p_*(E \otimes q^*(\mathcal{O}(n)))\in \mathcal{A} \text{ for all } n\gg 0 \},$$ $$Z_S(E)=\lim_{n\rightarrow +\infty}\frac{Z(\textbf{R}p_*(E\otimes q^*(\mathcal{O}(n))))\, r!}{n^r vol(\mathcal{O}(1))},$$ where $vol(\mathcal{O}(1))$ is the volume of $\mathcal{O}(1)$. Then this pair is a weak stability condition on $D(X\times S)$. \end{theorem} \begin{example} If we take $X=Spec(\mathbb{C})$, $\mathcal{A}$ the category of finite dimensional $\mathbb{C}$-vector spaces and $Z(V)=z\cdot dim(V)$ for any finite dimensional $\mathbb{C}$-vector space $V$, where $z\in\mathbb{H}\cup\mathbb{R}_{\leq 0}$, then the global heart is the category of coherent sheaves on $S$, and $L_E(n)=z\cdot Hilb_E(n)$ for $n\gg 0$. Therefore, we call $L_E(n)$ the complexified Hilbert polynomial. \end{example} We get the following slope function $\mu_1$ from the pair $(\mathcal{A}_S,Z_S)$: $$\mu_1(E)\coloneqq\begin{cases} -\frac{Re(Z_S(E))}{Im(Z_S(E))} &\text{if} \ Im(Z_S(E))> 0,\\ +\infty &\text{otherwise.}\end{cases}$$ This weak stability condition is closely related to the global slicing constructed in \cite[Section 4]{bayer2014mmp}. We include the explicit construction of the global slicing next.
Given a stability condition $\sigma=(Z, \mathcal{P})$ on $D(X)$ and a phase $\phi\in\mathbb{R}$, we have its associated t-structure $$(\mathcal{P}(>\phi)=\mathcal{D}_{\phi}^{\leq -1}(X),\ \mathcal{P}(\leq\phi)=\mathcal{D}_{\phi}^{\geq 0}(X))$$ on $D(X)$. By Abramovich and Polishchuk's construction, we get a global t-structure $$(\mathcal{P}_S(>\phi)\coloneqq\mathcal{D}_{\phi}^{\leq -1}(X\times S),\ \mathcal{P}_S(\leq \phi)\coloneqq\mathcal{D}_{\phi}^{\geq 0}(X\times S))$$ on $D(X\times S)$. Then we have the following lemma from \cite[Section 4]{bayer2014mmp}. \begin{lemma}[{\cite[Lemma 4.6]{bayer2014mmp}}]\label{global slicing} Assume $\sigma=(Z, \mathcal{P})$ is a stability condition as in our setup, and let $\mathcal{P}_S(>\phi)$, $\mathcal{P}_S(\leq \phi)$ be defined as above. There is a slicing $\mathcal{P}_S$ on $D(X\times S)$ defined by $$\mathcal{P}_S(\phi)=\mathcal{P}_S(\leq \phi)\cap \bigcap_{\epsilon>0}\mathcal{P}_S(>\phi-\epsilon).$$ \end{lemma} To conclude this section, we describe the relation between $\mu_1$-semistable objects and the global slicing $\mathcal{P}_S$. \begin{prop}[{\cite[Proposition 3.14]{Stabilityconditionsonproductvarieties}}]\label{semistable reduction} If $S$ is a smooth projective variety, and $E\in \mathcal{A}_S$ is semistable with respect to $\mu_1$ of phase $\phi$ and $Z_S(E)\neq 0$, then there exists a short exact sequence $$0\rightarrow K\rightarrow E\rightarrow Q\rightarrow 0$$ such that $K\in \mathcal{P}_S(\phi)$, $Q\in \mathcal{P}_S(<\phi)$ and $Z_S(Q)=0$. \end{prop} \section{Abelian subcategories in $\mathcal{A}_S$}\label{Abelian subcategories} As in Remark \ref{Complexified Hilber polynomials}, for any $E\in\mathcal{A}_S$, we can write $$L_E(n)\coloneqq\sum_{k=0}^{r} (a_k(E)+ib_k(E))n^k,$$ where $a_k, b_k:K_0(\mathcal{A}_S)\rightarrow\mathbb{R}$ are group homomorphisms for $0\leq k\leq r$. Then we have the following positivity of the coefficients.
\begin{prop} \label{positive coefficients} For any $E\in\mathcal{A}_S$, we have the following inequalities. (1) $b_r(E)\geq 0$. (2) If $b_r(E)=0$, then $a_r(E)\leq 0$ and $b_{r-1}(E)\geq 0$. (3) In general, if $$b_r(E)=a_r(E)=b_{r-1}(E)=\cdots=a_i(E)=b_{i-1}(E)=0,$$ then $a_{i-1}(E)\leq 0$ and $b_{i-2}(E)\geq 0$ for any $2\leq i\leq r$. (4) Moreover, if $E$ is a nonzero object and $$b_r(E)=a_r(E)=b_{r-1}(E)=\cdots=a_1(E)=b_{0}(E)=0,$$ then $a_0(E)<0$. \end{prop} \begin{proof} See the proof of \cite[Lemma 4.1]{somequadraticinequalities}. \end{proof} We now introduce some full subcategories of $\mathcal{A}_S$. \begin{definition} For any integer $0\leq j\leq r$, we define a full subcategory $\mathcal{A}_S^{\leq j}$ of $\mathcal{A}_S$ in the following way: $$obj(\mathcal{A}_S^{\leq j})=\{E\in\mathcal{A}_S\mid deg(L_E(n))\leq j\}.$$ \end{definition} \begin{lemma}\label{subquotients} If $E$ is an object in $\mathcal{A}_S^{\leq j}$ and $Q$ is a subquotient of $E$ in $\mathcal{A}_S$, then $Q\in\mathcal{A}_S^{\leq j}$. \end{lemma} \begin{proof} Assume $E$ is an object in $\mathcal{A}_S^{\leq j}$, and let $$0\rightarrow K\rightarrow E\rightarrow Q\rightarrow 0$$ be a short exact sequence in $\mathcal{A}_S$. Suppose $k,q$ are the maximal integers such that $a_k(K)+ib_k(K)\neq 0$ and $a_q(Q)+ib_q(Q)\neq 0$ respectively. If $k>j$, then we get $$a_{k}(K)+ib_k(K)=-a_k(Q)-ib_k(Q).$$ By Proposition \ref{positive coefficients}, we get $b_k(K)\geq 0$, and if $b_k(K)=0$, we have $a_k(K)<0$ by the maximality of $k$. Then $b_k(Q)\leq 0$, and if $b_k(Q)=0$, we have $a_k(Q)>0$. Therefore, we have $q>k$ by Proposition \ref{positive coefficients}. Similarly, we can prove that $k>q$, a contradiction. Therefore, $k,q$ cannot exceed $j$, which implies $K,Q\in\mathcal{A}_S^{\leq j}$. \end{proof} \begin{corollary}\label{abelian} $\mathcal{A}_S^{\leq j}$ is an abelian category for any $0\leq j\leq r$.
\end{corollary} \begin{prop}\label{support torsion pair} There exists a full subcategory $\mathcal{F}^{>j}$ of $\mathcal{A}_S$ for any $1\leq j\leq r$, such that the pair $(\mathcal{A}_S^{\leq j}, \mathcal{F}^{>j})$ is a torsion pair of $\mathcal{A}_S$. \end{prop} \begin{proof} We claim that for any object $E\in\mathcal{A}_S$, there exists a maximal subobject $E_j \in\mathcal{A}_S^{\leq j}$ of $E$. To prove the claim, we first show that for any two subobjects $E_1,E_2\subset E$ with $E_1,E_2\in\mathcal{A}_S^{\leq j}$, there exists a third subobject $E_3\in\mathcal{A}_S^{\leq j}$ such that $E_1, E_2\subset E_3\subset E$. Suppose we have two injections $$E_1\lhook\joinrel\xrightarrow{f} E\ \text{ and }\ E_2\lhook\joinrel\xrightarrow{g} E.$$ Then we have a morphism $$ E_1\oplus E_2\xrightarrow{(f,g)}E.$$ Let $E_3$ be the image of this morphism; by Lemma \ref{subquotients}, we know that $E_3\in\mathcal{A}_S^{\leq j}$, and $E_1,E_2\subset E_3\subset E$ by construction. Hence it suffices to prove that any increasing sequence $$E_1\subset E_2\subset\cdots\subset E$$ of subobjects $E_k\in\mathcal{A}_S^{\leq j}$ stabilizes after finitely many steps. This follows directly from the fact that $\mathcal{A}_S$ is Noetherian. Let $\mathcal{F}^{>j}$ be the full subcategory consisting of the objects whose maximal subobject in $\mathcal{A}_S^{\leq j}$ is $0$. Then it is easy to see that $(\mathcal{A}_S^{\leq j}, \mathcal{F}^{>j})$ is a torsion pair. \end{proof} \begin{corollary} There exists a unique filtration $$0=E_0\subset E_1\subset\cdots\subset E_r=E$$ for any object $E\in\mathcal{A}_S$, such that $E_j$ is the maximal subobject of $E$ in $\mathcal{A}_S^{\leq j}$. \end{corollary} \begin{proof} By the proof of Proposition \ref{support torsion pair}, we can take $E_j$ to be the maximal subobject in $\mathcal{A}_S^{\leq j}$. Since $\mathcal{A}_S^{\leq j}$ is a full subcategory of $\mathcal{A}_S^{\leq k}$ for any integers $1\leq j<k\leq r$, we get $E_j\subset E_k$; hence we get the unique filtration.
\end{proof} \begin{remark} One can view this as a generalization of the torsion filtration of a sheaf (see \cite[Definition 1.1.4]{huybrechts2010geometry}). \end{remark} If we denote the minimal triangulated subcategory generated by $\mathcal{A}_S^{\leq 1}$ in $D(X\times S)$ by $\mathcal{D}^{\leq 1}_S$, then we can apply the construction in \cite{Stabilityconditionsonproductvarieties} to $\mathcal{D}_S^{\leq 1}$. \begin{corollary} The triangulated category $\mathcal{D}_S^{\leq 1}$ admits stability conditions. \end{corollary} \begin{proof} By \cite[Lemma 4.1, Theorem 4.5]{somequadraticinequalities}, we know that the same construction as in \cite{Stabilityconditionsonproductvarieties} also works for $\mathcal{D}_S^{\leq 1}$. \end{proof} \subsection{Abelian subcategories from a weak stability condition} In this subsection, we consider a more general situation. Let $\sigma=(\mathcal{A},Z)$ be an arbitrary weak stability condition on a triangulated category $\mathcal{D}$. \begin{lemma}\label{subabelian} Let $\mathcal{A}_0$ denote the full subcategory of $\mathcal{A}$ consisting of objects $E$ whose central charge is $0$. Then $\mathcal{A}_0$ is an abelian subcategory. \end{lemma} \begin{proof} Let $0\rightarrow K\rightarrow E\rightarrow Q\rightarrow 0$ be a short exact sequence in $\mathcal{A}$ with $Z(E)=0$. Since $Im(Z(K)), Im(Z(Q))\geq 0$ and $Im(Z(K))+Im(Z(Q))=0$, both imaginary parts vanish; then $Re(Z(K)), Re(Z(Q))\leq 0$ and $Re(Z(K))+Re(Z(Q))=0$ force $Z(K)=Z(Q)=0$. Hence $\mathcal{A}_0$ is closed under subobjects and quotient objects. In particular, $\mathcal{A}_0$ is an abelian subcategory. \end{proof} \begin{remark} Corollary \ref{abelian} can also be viewed as a direct consequence of Lemma \ref{subabelian}. Indeed, for any $0\leq j\leq r$, $(\mathcal{A}_S^{\leq j}, a_j+ib_j)$ is a weak stability condition, hence we can apply Lemma \ref{subabelian} inductively. \end{remark} Let $\mathcal{P}_{\sigma}(\phi)$ be the subcategory of $\sigma$-semistable objects whose phase is $\phi$. It is easy to see that $\mathcal{A}_0\subset\mathcal{P}_{\sigma}(1)$.
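To illustrate the subcategory $\mathcal{A}_0$ in our setting, consider the weak stability condition $(\mathcal{A}_S,Z_S)$ of Theorem \ref{glabal weak stablity condition}. Unwinding the definitions via Remark \ref{Complexified Hilber polynomials}, the central charge $Z_S(E)$ is a positive multiple of the leading coefficient of $L_E(n)$, $$Z_S(E)=\frac{r!}{vol(\mathcal{O}(1))}\,(a_r(E)+ib_r(E)),$$ so $Z_S(E)=0$ if and only if $deg(L_E(n))\leq r-1$; in this case $\mathcal{A}_0=\mathcal{A}_S^{\leq r-1}$.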
\begin{lemma}\label{switching lemma} If we have a short exact sequence in $\mathcal{A}$ $$0\rightarrow K\rightarrow E\xrightarrow{f} Q\rightarrow 0,$$ where $K\in\mathcal{P}_{\sigma}(\phi)$ and $Q\in\mathcal{A}_0$, then we have the following short exact sequence in $\mathcal{A}$ $$0\rightarrow Q'\rightarrow E\rightarrow K'\rightarrow 0,$$ where $Q'\in\mathcal{A}_0$ and $K'\in\mathcal{P}_{\sigma}(\phi)$. \end{lemma} \begin{proof} If $K\in\mathcal{P}_{\sigma}(1)$, we know that $E$ is in $\mathcal{P}_{\sigma}(1)$ as well. Hence we can take $K'=E$ and $Q'=0$. Now we can assume that $0<\phi<1$. By the HN property of $\sigma$, there exists a maximal subobject $Q'\in\mathcal{P}_{\sigma}(1)$ of $E$, with a natural inclusion $i: Q'\hookrightarrow E$. Hence we have the following short exact sequence $$0\rightarrow Q'\xrightarrow{i} E\xrightarrow{g} K'\rightarrow 0.$$ We claim that $Z(Q')=0$ and $K'\in\mathcal{P}_{\sigma}(\phi)$. If $Z(Q')\neq 0$, then we have $Z(Q')\in\mathbb{R}_{<0}$. Let $G\coloneqq ker(f\circ i)$, where $f\circ i:Q'\rightarrow Q$; then we have $Z(G)=Z(Q')\in\mathbb{R}_{<0}$ (as the image of $f\circ i$ lies in $Q\in\mathcal{A}_0$) and $G\hookrightarrow K$. This contradicts the facts that $K\in\mathcal{P}_{\sigma}(\phi)$ and $\phi<1$. Hence, we get $Z(Q')=0$. Now we know that $Z(E)=Z(K')$. Suppose that $K'$ is not semistable, and let $F'$ be a destabilizing subobject of $K'$ and $F=g^{-1}(F')$. Then we have a morphism $f':F\rightarrow Q$ which is the composition of $f$ and the inclusion from $F$ to $E$. Let $H\coloneqq ker(f')$; we know that $Z(H)=Z(F)$ and $H$ is a subobject of $K$. Since $Z(K)=Z(K')=Z(E)$, $H$ destabilizes $K$, which contradicts the assumption that $K$ is semistable. Therefore, the claim is proved, and we get the short exact sequence. \end{proof} \begin{remark} In the proof of this lemma, we also proved that $Q'$ is a subobject of $Q$. \end{remark} \begin{definition}\label{definition of abelianizer} We call $\mathcal{A}_0$ the abelianizer of the weak stability condition $\sigma=(\mathcal{A},Z)$.
If there exists a short exact sequence $$0\rightarrow Q'\rightarrow E\rightarrow K'\rightarrow 0,$$ where $Q'\in\mathcal{A}_0$ and $K'\in\mathcal{P}_{\sigma}(\phi)$, we say that $E$ is quasi-semistable. \end{definition} The following proposition justifies naming $\mathcal{A}_0$ the abelianizer of $\sigma$. \begin{prop}\label{abelianizer} Let $$\mathcal{A}_{\sigma}(\phi)\coloneqq\{E\in\mathcal{A}\mid E \text{ is quasi-semistable and } Z(E)\in \exp(i\pi \phi)\cdot \mathbb{R}_{\geq 0}\}.$$ Then $\mathcal{A}_{\sigma}(\phi)$ is an abelian subcategory. \end{prop} \begin{proof} If $\phi=1$, then $\mathcal{A}_{\sigma}(1)=\mathcal{P}_{\sigma}(1)$, and $\mathcal{A}_{\sigma}(1)$ is an abelian category since it is closed under subobjects and quotient objects. Now we can assume $0<\phi<1$. Given a morphism $f:E_1\rightarrow E_2$ in $\mathcal{A}$ between two objects $E_1,E_2\in\mathcal{A}_{\sigma}(\phi)$, there exists the following diagram in $\mathcal{A}$. \[ \begin{tikzcd} 0 \arrow{r}{} & Q_1 \arrow{r}{} & E_1 \arrow{r}{} \arrow{d}[swap]{f} & K_1 \arrow{r} & 0 \\% 0 \arrow{r}{}& Q_2 \arrow{r}{} & E_2 \arrow{r}{} & K_2 \arrow{r}{} & 0 \end{tikzcd} \] where $Q_i\in\mathcal{A}_0$ and $K_i\in\mathcal{P}_{\sigma}(\phi)$ for $i=1,2$. Since $0<\phi<1$ and $Q_1\in\mathcal{A}_0$, there is no nontrivial morphism from $Q_1$ to $K_2$. Hence the diagram can be completed to the following commutative diagram. \[ \begin{tikzcd} 0 \arrow{r}{} & Q_1 \arrow{r}{} \arrow{d}[swap]{h}& E_1 \arrow{r}{} \arrow{d}[swap]{f} & K_1 \arrow{r} \arrow{d}[swap]{g}& 0 \\% 0 \arrow{r}{}& Q_2 \arrow{r}{} & E_2 \arrow{r}{} & K_2 \arrow{r}{} & 0 \end{tikzcd} \] It suffices to show that $ker(f), im(f)$ and $coker(f)$ are quasi-semistable.
By the snake lemma, we have the following long exact sequence $$0\rightarrow ker(h)\rightarrow ker(f)\rightarrow ker(g)\xrightarrow{\delta}coker(h)\rightarrow coker(f)\rightarrow coker(g)\rightarrow 0.$$ This can be decomposed into two short exact sequences $$0\rightarrow ker(h)\rightarrow ker(f)\rightarrow ker(\delta)\rightarrow 0$$ and $$0\rightarrow coker(\delta)\rightarrow coker(f)\rightarrow coker(g)\rightarrow 0.$$ We know that $im(g)$ is a subobject of $K_2$ and a quotient object of $K_1$, and $K_1,K_2\in\mathcal{P}_{\sigma}(\phi)$. This implies that $im(g)\in\mathcal{P}_{\sigma}(\phi)$, hence $ker(g)\in\mathcal{P}_{\sigma}(\phi)$ as well. Since $coker(h)$ is a quotient object of $Q_2$, it is an object in $\mathcal{A}_0$ by Lemma \ref{subabelian}. Therefore, $Z(ker(\delta))=Z(ker(g))$ and $ker(\delta)\subset ker(g)$ imply that $ker(\delta)\in\mathcal{P}_{\sigma}(\phi)$. Thus we get that $ker(f)$ is quasi-semistable of phase $\phi$. Now we want to show that $coker(f)\in\mathcal{A}_{\sigma}(\phi)$. It is easy to see that $coker(g)\in\mathcal{A}_{\sigma}(\phi)$. Indeed, let $F$ be a subobject of $coker(g)$ that destabilizes $coker(g)$, and let $H\coloneqq coker(g)/F$. If $Z(F)\neq 0$, then we have $Z(H)\neq 0$ and $\mu_{\sigma}(H)<\mu_{\sigma}(coker(g))=\mu_{\sigma}(K_2)$. This contradicts the assumption $K_2\in\mathcal{P}_{\sigma}(\phi)$. Hence we have proved that any destabilizing subobject of $coker(g)$ is in $\mathcal{A}_0$. This is equivalent to $coker(g)\in\mathcal{A}_{\sigma}(\phi)$. Therefore, we have the following commutative diagram.
\[ \begin{tikzcd} 0 \arrow{r}{} & coker(\delta) \arrow{r}{} \arrow{d}[swap]{id}& Q \arrow{r}{} \arrow{d}[swap]{} & Q' \arrow{r} \arrow{d}[swap]{}& 0 \\% 0 \arrow{r}{}& coker(\delta) \arrow{r}{} \arrow{d}[swap]{} & coker(f) \arrow{r}{} \arrow{d}[swap]{}& coker(g) \arrow{r}{} \arrow{d}[swap]{}& 0 \\% 0 \arrow{r} & 0 \arrow{r}{}& K\arrow{r}{id} & K\arrow{r}{} & 0 \end{tikzcd} \] where $Q'\in\mathcal{A}_0$ and $K\in\mathcal{P}_{\sigma}(\phi)$, and moreover every row is a short exact sequence. Notice that $coker(\delta)$ is a quotient object of $coker(h)\in\mathcal{A}_0$, hence $Q\in\mathcal{A}_0$, and $coker(f)\in\mathcal{A}_{\sigma}(\phi)$. It remains to prove that $im(f)\in\mathcal{A}_{\sigma}(\phi)$. By the snake lemma, we have the following commutative diagram. \[ \begin{tikzcd} 0 \arrow{r}{} & ker(h) \arrow{r}{} \arrow{d}[swap]{}& ker(f) \arrow{r}{} \arrow{d}[swap]{} & ker(\delta) \arrow{r} \arrow{d}[swap]{}& 0 \\% 0 \arrow{r}{}& Q_1 \arrow{r}{} \arrow{d}[swap]{} & E_1 \arrow{r}{} \arrow{d}[swap]{}& K_1 \arrow{r}{} \arrow{d}[swap]{}& 0 \\% 0 \arrow{r} & im(h) \arrow{r}{}& im(f) \arrow{r}{} & J\arrow{r}{} & 0 \end{tikzcd} \] where $J\coloneqq K_1/ker(\delta)$, $im(h)\in\mathcal{A}_0$ and every column is a short exact sequence. Hence it suffices to prove that $J\in\mathcal{A}_{\sigma}(\phi)$. This follows from the following commutative diagram. \[ \begin{tikzcd} & & & 0\arrow{d}[swap]{}\\ & 0 \arrow{r}{} \arrow{d}[swap]{} & 0 \arrow{r}{} \arrow{d}[swap]{} & im(\delta) \arrow{d}[swap]{} & \\% 0\arrow{r}{} & ker(\delta) \arrow{r}{} \arrow{d}[swap]{}& K_1 \arrow{r}{} \arrow{d}[swap]{id} & J \arrow{r} \arrow{d}[swap]{}& 0 \\ 0 \arrow{r}{}& ker(g) \arrow{r}{} \arrow{d}[swap]{} & K_1 \arrow{r}{} \arrow{d}[swap]{}& im(g) \arrow{r}{} \arrow{d}[swap]{} & 0 \\% & im(\delta) \arrow{r}{}\arrow{d}[swap]{}& 0 \arrow{r}{} & 0 \\ & 0 \end{tikzcd} \] where $im(\delta)\in\mathcal{A}_0$ and $im(g)\in\mathcal{P}_{\sigma}(\phi)$, and every column is a short exact sequence.
Hence $J\in\mathcal{A}_{\sigma}(\phi)$, and the proof is complete. \end{proof} \begin{remark} When $\sigma=(\mathcal{A},Z)$ is a stability condition, we know that $\mathcal{A}_0$ is trivial. In this case, Proposition \ref{abelianizer} is \cite[Lemma 5.2]{bridgeland2007stability}. \end{remark} \section{Lexicographic order filtrations and torsion pairs}\label{torsion pairs} Let us shift our attention back to $\mathcal{A}_S^{\leq j}$. Proposition \ref{positive coefficients} and Proposition \ref{abelianizer} give us many weak stability conditions and abelian subcategories in $\mathcal{A}_S$. As in Remark \ref{abuse of terminology}, we call a pair $(\mathcal{A},Z)$ a stability condition if this pair satisfies the HN property (without reference to any triangulated category or t-structure). For the simplicity of our statements and arguments, we introduce the following definition. \begin{definition}\label{rational stability condition} If $(\mathcal{A},Z)$ is a stability condition, and the image of $Z$ lies in $\mathbb{Q}\oplus \mathbb{Q}i$, we call $(\mathcal{A},Z)$ a rational stability condition. We use $RStab(X)$ to denote the set of rational stability conditions on $D(X)$. \end{definition} \begin{remark} By \cite[Proposition 5.0.1]{APsheaves}, we know that the heart $\mathcal{A}$ of a rational stability condition is Noetherian, and in this case, the images of $a_j,b_j$ are rational. We focus on rational stability conditions just for the simplicity of statements and arguments. All results and proofs in the rest of this paper can be easily adapted to stability conditions whose central charge has discrete image. \end{remark} From now on, we always assume that our original stability condition $\sigma$ on $X$ is a rational stability condition. \begin{lemma} Let $Z_{j}^t(E)=a_j(E)t-b_{j-1}(E)+ib_j(E)$; then $\sigma^t\coloneqq(\mathcal{A}_S^{\leq j}, Z^{t}_j)$ is a weak stability condition for any $t\in\mathbb{Q}_{> 0}$ and any integer $0 \leq j\leq r$.
\end{lemma} \begin{proof} By Proposition \ref{positive coefficients}, $Z_j^t$ is a weak stability function on $\mathcal{A}_S^{\leq j}$. Then by the rationality of $b_j, a_j$ and the Noetherianity of $\mathcal{A}_S^{\leq j}$, we see that $\sigma^t$ satisfies the HN property. \end{proof} Applying Proposition \ref{abelianizer} to $\sigma^t$, we obtain the following abelian subcategories in $\mathcal{A}_S$. \begin{definition} For simplicity of the notation, we write $\mathcal{A}^t(\phi)$ for $\mathcal{A}_{\sigma^t}(\phi)$. Then $\mathcal{A}^t(\phi)$ is an abelian subcategory by Proposition \ref{abelianizer}. \end{definition} \begin{lemma}\label{first slice stability condition} We have the following stability conditions. (1) The pair $$\sigma_{1}^{t,t'}=(\mathcal{A}^t(1), a_{j-1}t'-b_{j-2}+i(b_{j-1}-a_jt))$$ is a weak stability condition for any $t,t'\in\mathbb{Q}_{> 0}$. (2) The pair $$\sigma_{\phi}^{t,t'}=(\mathcal{A}^t(\phi), a_{j-1}t'-b_{j-2}+ib_j)$$ is a weak stability condition for any $t, t'\in\mathbb{Q}_{> 0}$ and $\phi\in(0,1)$. \end{lemma} \begin{proof} For (1), assume that $E\in\mathcal{A}^t(1)$ and $b_{j-1}(E)-a_j(E)t=0$. Then we get $b_j(E)=a_j(E)=b_{j-1}(E)=0$. By Proposition \ref{positive coefficients}, this implies that $a_{j-1}(E)\leq 0$ and $b_{j-2}(E)\geq 0$. Therefore, we know that $a_{j-1}t'-b_{j-2}+i(b_{j-1}-a_jt)$ is a weak stability function on $\mathcal{A}^t(1)$. For (2), assume that $E\in\mathcal{A}^t(\phi)$ and $b_j(E)=0$. Then since $\phi\in(0,1)$, we have that $Z_j^t(E)=0$, hence $b_j(E)=a_j(E)=b_{j-1}(E)=0$. Therefore, for the same reason, we know that $a_{j-1}t'-b_{j-2}+ib_j$ is a weak stability function on $\mathcal{A}^t(\phi)$. The HN property follows from the rationality of the central charge and the Noetherianity of $\mathcal{A}^t(\phi)$. \end{proof} We can construct weak stability conditions inductively by Proposition \ref{positive coefficients}. By convention, we define $b_{-1}=0$.
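To see what $Z_j^t$ computes, consider again the trivial case $X=Spec(\mathbb{C})$ with $Z=i\cdot dim$, so that $\mathcal{A}_S=coh(S)$ and $L_E(n)=i\cdot Hilb_E(n)$. Then $a_k=0$ for all $k$, the $b_k$ are the coefficients of the Hilbert polynomial, and for $j=r$ the central charge reduces to $$Z_r^t(E)=-b_{r-1}(E)+i\,b_r(E),$$ independently of $t$. For sheaves of positive rank, the resulting slope $b_{r-1}/b_r$ is, by Riemann--Roch, an increasing affine function of the classical slope with respect to $\mathcal{O}(1)$, which is consistent with the example of $1$-st level semistability below.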
\begin{lemma}\label{slice stability condition} (1) Inductively, for any $1\leq k\leq j$, let $\mathcal{A}^{t_1,t_2,\cdots,t_k}_{\phi_1,\phi_2,\cdots,\phi_{k}}\coloneqq\mathcal{A}_{\sigma_{\phi_1,\phi_2,\cdots,\phi_{k-1}}^{t_1,t_2,\cdots, t_k}}(\phi_k) $; then the pair $$\sigma_{\phi_1,\phi_2,\cdots,\phi_k}^{t_1,t_2,\cdots,t_{k+1}}=(\mathcal{A}^{t_1,t_2,\cdots,t_k}_{\phi_1,\phi_2\cdots,\phi_k}, a_{j-k}t_{k+1}-b_{j-k-1}+ib_j)$$ is a weak stability condition for any $t_1,\cdots, t_{k+1}\in\mathbb{Q}_{> 0}$ and any $\phi_1,\cdots,\phi_k\in(0,1)$. (2) Given a sequence of real numbers $\phi_1,\cdots,\phi_k\in(0,1]$, let $M=\{1\leq m\leq k\mid \phi_m=1 \}$. Assume that $M$ is a nonempty set, and let $n$ be the maximal integer in $M$. Then the pair $$\sigma_{\phi_1,\phi_2,\cdots,\phi_k}^{t_1,t_2,\cdots,t_{k+1}}=(\mathcal{A}^{t_1,t_2,\cdots,t_k}_{\phi_1,\phi_2\cdots,\phi_k}, a_{j-k}t_{k+1}-b_{j-k-1}-i(a_{j-n+1}t_n-b_{j-n}))$$ is a weak stability condition for any $t_1,\cdots, t_{k+1}\in\mathbb{Q}_{> 0}$. \end{lemma} \begin{proof} Firstly, we claim that for any $t_1,t_2,\cdots,t_{k+1}\in\mathbb{Q}_{>0}$ and $\phi_1,\cdots,\phi_k\in(0,1]$, the abelianizer (see Definition \ref{definition of abelianizer}) of $\sigma_{\phi_1,\phi_2,\cdots,\phi_k}^{t_1,t_2,\cdots,t_{k+1}}$ is the full abelian subcategory consisting of the following objects $$\{E\in\mathcal{A}_S^{\leq j}\mid b_j(E)=a_j(E)=\cdots=b_{j-k}(E)=a_{j-k}(E)=b_{j-k-1}(E)=0 \}.$$ The claim and the lemma can be proved by induction on $k$; the case $k=1$ follows from Lemma \ref{first slice stability condition} and Proposition \ref{positive coefficients}. Let $(C_k)$ denote the proposition that the claim is true for all indices up to $k$, and let $(L_k)$ denote the proposition that the lemma is true for all indices up to $k$. We will prove that $(C_{k-1})$ implies $(L_k)$ and $(L_k)$ implies $(C_k)$; hence both propositions are true for any $1\leq k\leq j$. Indeed, assume that $(C_{k-1})$ is true.
If $\phi_k<1$, we assume that $E\in\mathcal{A}_{\phi_1,\cdots,\phi_k}^{t_1,\cdots,t_k}$ and that the imaginary part of $E$ in $\sigma_{\phi_1,\phi_2,\cdots,\phi_k}^{t_1,t_2,\cdots,t_{k+1}}$ is $0$. Since $\phi_k<1$, the imaginary parts of $\sigma_{\phi_1,\phi_2,\cdots,\phi_k}^{t_1,t_2,\cdots,t_{k+1}}$ and $\sigma_{\phi_1,\phi_2,\cdots,\phi_{k-1}}^{t_1,t_2,\cdots,t_{k}}$ are the same by construction. By the definition of $\mathcal{A}^{t_1,t_2,\cdots,t_k}_{\phi_1,\phi_2\cdots,\phi_k}$, we know that $E$ is in the abelianizer of $\sigma_{\phi_1,\phi_2,\cdots,\phi_{k-1}}^{t_1,t_2,\cdots,t_{k}}$. Hence by $(C_{k-1})$, we get that $$b_j(E)=a_j(E)=\cdots=b_{j-k+1}(E)=a_{j-k+1}(E)=b_{j-k}(E)=0.$$ Therefore, by Lemma \ref{positive coefficients}, $a_{j-k}(E)t_{k+1}-b_{j-k-1}(E)\leq 0$, hence $(L_k)$ and $(C_k)$ hold. Now we consider the case $\phi_k=1$. We assume that $E\in\mathcal{A}_{\phi_1,\cdots,\phi_k}^{t_1,\cdots,t_k}$ and that the imaginary part of $E$ in $\sigma_{\phi_1,\phi_2,\cdots,\phi_k}^{t_1,t_2,\cdots,t_{k+1}}$ is $0$, i.e., $a_{j-k+1}(E)t_k-b_{j-k}(E)=0$. Since $a_{j-k+1}t_k-b_{j-k}$ is the real part of $\sigma_{\phi_1,\phi_2,\cdots,\phi_{k-1}}^{t_1,t_2,\cdots,t_{k}}$ and $E$ is semistable of phase $1$ with respect to $\sigma_{\phi_1,\phi_2,\cdots,\phi_{k-1}}^{t_1,t_2,\cdots,t_{k}}$, we get that $E$ is in the abelianizer of $\sigma_{\phi_1,\phi_2,\cdots,\phi_{k-1}}^{t_1,t_2,\cdots,t_{k}}$. Hence $(L_k)$ and $(C_k)$ hold. The HN property follows from the rationality of the central charge and the Noetherianity of $\mathcal{A}^{t_1,t_2,\cdots,t_k}_{\phi_1,\phi_2,\cdots,\phi_k}$. \end{proof} \begin{remark} In fact, $t_i$ need not be a fixed real number: it could be a continuous function $t_i:(0,1]\rightarrow \mathbb{R}_{> 0}$, $\phi_{i+1}\mapsto t_{i}(\phi_{i+1})$, such that $t_{i}(1)\in\mathbb{Q}_{> 0}$. We assume the $t_i$ to be fixed numbers for simplicity of notation.
\end{remark} \begin{definition} Given $t_1,t_2,\cdots,t_l\in\mathbb{Q}_{>0}$, if $E$ is $\sigma^{t_1}$-semistable of phase $\phi_1$ and $E\in\mathcal{A}^{t_1,t_2,\cdots,t_{k-1}}_{\phi_1,\phi_2\cdots,\phi_{k-1}}$ is semistable with respect to $\sigma_{\phi_1,\phi_2,\cdots,\phi_{k-1}}^{t_1,t_2,\cdots,t_{k}}$ of phase $\phi_k$ for every $1<k\leq l$, then we say that $E$ is $l$-th level semistable of phase $\vec{\phi}=(\phi_1,\cdots,\phi_l)$, where $\phi_i\in(0,1]$. \begin{example} Note that in the trivial case $X=\mathrm{Spec}(\mathbb{C})$, $\sigma=(\mathrm{Vect}, i\cdot\dim)$, where $\mathrm{Vect}$ is the category of finite-dimensional $\mathbb{C}$-vector spaces, and $S$ is a smooth projective variety of dimension $r$. Then being $1$-st level semistable is equivalent to being slope semistable, and being $(r+1)$-th level semistable is equivalent to being Gieseker semistable. \end{example} We will use the lexicographic order on such vectors, i.e., we say that $$(\phi_1,\phi_2,\cdots,\phi_l)>(\psi_1,\psi_2,\cdots,\psi_l)$$ if $\phi_1>\psi_1$, or if $\phi_1=\psi_1, \phi_2=\psi_2, \cdots, \phi_k=\psi_k$ and $\phi_{k+1}>\psi_{k+1}$ for some $1\leq k\leq l-1$. \end{definition} \begin{lemma}\label{invariance of phase} Given $t_1,t_2,\cdots,t_k\in\mathbb{Q}_{>0}$, assume that $E$ is $(k-1)$-th level semistable of phase $(\phi_1,\phi_2,\cdots,\phi_{k-1})$. Let $$0=E_0\subset E_1\subset\cdots\subset E_n=E$$ be its HN filtration with respect to $\sigma_{\phi_1,\phi_2,\cdots,\phi_{k-1}}^{t_1,t_2,\cdots,t_k}$. Then every HN factor $E_i/E_{i-1}$ is $k$-th level semistable of phase $(\phi_1,\cdots,\phi_{k-1},\phi_{k,i})$, where $\phi_{k,1}>\phi_{k,2}>\cdots>\phi_{k,n}$ is a decreasing sequence of real numbers. \end{lemma} \begin{proof} We let $Q_i$ denote the HN factor $E_i/E_{i-1}$. Then $Q_i$ is semistable with respect to $\sigma_{\phi_1,\phi_2,\cdots,\phi_{k-1}}^{t_1,t_2,\cdots,t_k}$ of phase $\phi_{k,i}$.
We claim that for any object $F\in\mathcal{A}_{\phi_1,\cdots,\phi_k}^{t_1,\cdots, t_k}$ which is semistable with respect to $\sigma_{\phi_1,\cdots,\phi_k}^{t_1,\cdots,t_{k+1}}$, $F$ is also semistable with respect to $\sigma_{\phi_1,\cdots,\phi_{k-1}}^{t_1,\cdots,t_{k}}$ of phase $\phi_k$. The lemma follows naturally from the claim. Indeed, by the definition of $\mathcal{A}^{t_1,\cdots, t_k}_{\phi_1,\cdots,\phi_k}$, we have the following short exact sequence $$0\rightarrow F_0\rightarrow F\rightarrow F'\rightarrow 0$$ where $F_0$ is in the abelianizer of $\sigma_{\phi_1,\cdots,\phi_{k-1}}^{t_1,\cdots,t_{k}}$ and $F'$ is semistable with respect to $\sigma_{\phi_1,\cdots,\phi_{k-1}}^{t_1,\cdots,t_{k}}$ of phase $\phi_k$. If $F_0\simeq 0$, the claim is true. So now we assume that $F_0$ is a nonzero object. By the proof of Lemma \ref{slice stability condition}, we know that $$b_j(F_0)=a_j(F_0)=\cdots=b_{j-k+1}(F_0)=a_{j-k+1}(F_0)=b_{j-k}(F_0)=0.$$ Hence the slope of $F_0$ with respect to $\sigma_{\phi_1,\cdots,\phi_k}^{t_1,\cdots,t_{k+1}}$ is $+\infty$. Since, by assumption, $F\in\mathcal{A}_{\phi_1,\cdots,\phi_k}^{t_1,\cdots,t_k}$ is semistable with respect to $\sigma_{\phi_1,\cdots,\phi_k}^{t_1,\cdots,t_{k+1}}$ of phase $\phi_k$, the imaginary part of $F$ with respect to $\sigma_{\phi_1,\cdots,\phi_k}^{t_1,\cdots,t_{k+1}}$ must also be $0$. By the proof of Lemma \ref{slice stability condition}, we know that $F$ is in the abelianizer of $\sigma_{\phi_1,\cdots,\phi_{k-1}}^{t_1,\cdots,t_{k}}$. Hence the claim is true, and in this case we have $\phi_1=\phi_2=\cdots=\phi_{k+1}=1$. \end{proof} \begin{remark} From the proofs of Lemma \ref{slice stability condition} and Lemma \ref{invariance of phase}, we know that if $E$ is $k$-th level semistable of phase $(\phi_1,\cdots,\phi_k)$, then $\phi_i=1$ implies that $E$ is in the abelianizer of $\sigma_{\phi_1,\cdots,\phi_{i-1}}^{t_1,\cdots,t_{i}}$, and hence $\phi_h=1$ for all $1\leq h\leq i$.
\end{remark} \begin{theorem}\label{lexi-order filtration} Assume $l\leq j$ and let $t_1,t_2,\cdots,t_l\in\mathbb{Q}_{>0}$ be given. For any object $E\in\mathcal{A}_S^{\leq j}$, there exists a unique filtration $$0=E_0\subset E_1\subset E_2\subset\cdots \subset E_n=E,$$ such that each quotient factor $E_{i}/E_{i-1}$ is $l$-th level semistable of phase $\vec{\phi}_i$ and $$\vec{\phi}_1>\vec{\phi}_2>\cdots>\vec{\phi}_n.$$ \end{theorem} \begin{proof} For any $k<l$, we claim that if $E$ is $k$-th level semistable of phase $\vec{\phi}=(\phi_1,\phi_2,\cdots,\phi_k)$, then there exists a filtration for $E$ as in the statement of the theorem; moreover, the first $k$ coordinates of all the vectors $\vec{\phi}_1>\vec{\phi}_2>\cdots>\vec{\phi}_n$ are equal to $\vec{\phi}$. The claim can be shown by induction. Firstly, if $E$ is $(l-1)$-th level semistable of phase $(\phi_1,\phi_2,\cdots,\phi_{l-1})$, then the claim follows from Lemma \ref{invariance of phase}. Assume that the claim is true for any $(l-i)$-th level semistable $E$. The goal is to prove the claim for any $(l-i-1)$-th level semistable object $E$. Now assume that $E$ is $(l-i-1)$-th level semistable of phase $(\phi_1,\phi_2,\cdots,\phi_{l-i-1})$, and take its HN filtration with respect to $\sigma_{\phi_1,\phi_2,\cdots,\phi_{l-i-1}}^{t_1,t_2,\cdots,t_{l-i}}$. By induction on the number of HN factors, we can reduce to the case where there are only two HN factors. Indeed, suppose we have the following short exact sequence $$0\rightarrow K\rightarrow E\xrightarrow{f} Q\rightarrow0,$$ where $K, Q$ are $(l-i)$-th level semistable of phases $$(\phi_1,\phi_2,\cdots,\phi_{l-i-1},\phi_{l-i,1}),\ (\phi_1,\phi_2,\cdots,\phi_{l-i-1},\phi_{l-i,2})$$ respectively. By induction, there exist filtrations $$0=K_0\subset K_1\subset \cdots\subset K_{n_1}=K$$ and $$0=Q_0\subset Q_1\subset \cdots\subset Q_{n_2}=Q$$ satisfying the requirement in the claim.
Now we take $E_{i}=K_{i}$ for $0\leq i\leq n_1$ and $E_{n_1+h}=f^{-1}(Q_h)$ for $0\leq h\leq n_2$, where $f^{-1}(Q_h)$ is the pullback of $Q_h$ along the map $f$. Then it is easy to check that the filtration $$0=E_0\subset E_1\subset \cdots \subset E_{n_1+n_2}=E$$ satisfies the requirement of the claim. Hence we have proved the existence of the filtration. The uniqueness follows from the uniqueness of the HN filtration. In fact, suppose we have a filtration $$0=E_0\subset E_1\subset E_2\subset\cdots \subset E_n=E,$$ such that each quotient factor $E_{i}/E_{i-1}$ is $l$-th level semistable of phase $\vec{\phi}_i$ and $$\vec{\phi}_1>\vec{\phi}_2>\cdots>\vec{\phi}_n.$$ There is a partition $$S_1=\{\vec{\phi}_1,\vec{\phi}_2,\cdots,\vec{\phi}_{i_1}\},S_2=\{\vec{\phi}_{i_1+1},\cdots,\vec{\phi}_{i_2}\},\cdots,S_{k+1}=\{\vec{\phi}_{i_{k}+1},\cdots, \vec{\phi}_{n}\}$$ of the set $\{\vec{\phi}_1,\cdots,\vec{\phi}_n\}$ such that all the vectors in $S_i$ have the same first coordinate $\psi_i$ and $\psi_1>\psi_2>\cdots>\psi_{k+1}.$ By the proof of Proposition \ref{abelianizer}, we know that $$0=E_0\subset E_{i_1}\subset E_{i_2}\subset\cdots \subset E_{i_{k}}\subset E$$ is the HN filtration of $E$ with respect to $\sigma^{t_1}$; hence it is unique. Then, by inductively partitioning the vectors according to their next coordinates, we get the uniqueness of the filtration. \end{proof} We can use Theorem \ref{lexi-order filtration} to obtain finer torsion pairs of $\mathcal{A}_S$. \begin{corollary} Given $\vec{t}=(t_1,\cdots, t_{r+1})\in\mathbb{Q}_{>0}^{r+1}$ and $\vec{\phi}=(\phi_1,\cdots,\phi_{r+1})\in(0,1)^{r+1}$, let $$\mathcal{T}_{\vec{\phi}}^{\vec{t}}\coloneqq \{E\in\mathcal{A}_S\mid \text{all the factors in Theorem \ref{lexi-order filtration} have phases} > \vec{\phi}\}$$ and $$\mathcal{F}_{\vec{\phi}}^{\vec{t}}\coloneqq\{E\in\mathcal{A}_S\mid \text{all the factors in Theorem \ref{lexi-order filtration} have phases} \leq \vec{\phi}\}.$$ Then $(\mathcal{T}_{\vec{\phi}}^{\vec{t}},\mathcal{F}_{\vec{\phi}}^{\vec{t}})$ is a torsion pair of $\mathcal{A}_S$.
\end{corollary} \begin{proof} This follows directly from Theorem \ref{lexi-order filtration}. \end{proof} Hence we have a tilted heart $\mathcal{A}_{S,\vec{\phi}}^{\vec{t}}\coloneqq \langle \mathcal{F}_{\vec{\phi}}^{\vec{t}}[1], \mathcal{T}_{\vec{\phi}}^{\vec{t}}\rangle$. Take $\vec{\phi}=(\frac{1}{2},\cdots,\frac{1}{2})$, and denote the tilted heart by $\mathcal{A}_S^{t_1,\cdots, t_{r+1}}$. \begin{theorem}\label{positivity on tilted heart} For any object $E$ in $\mathcal{A}_S^{t_1,t_2,\cdots,t_{r+1}}$, we have the following inequalities. (1) $-a_r(E)t_1+b_{r-1}(E)\geq 0$. (2) For any positive integer $0< l\leq r$, if $$-a_r(E)t_1+b_{r-1}(E)=\cdots=-a_{r+1-l}(E)t_{l}+b_{r-l}(E)=0,$$ then we have $-a_{r-l}(E)t_{l+1}+b_{r-l-1}(E)\geq 0$. \footnote{Recall the convention $b_{-1}=0$.} \end{theorem} \begin{proof} By the definition of $\mathcal{A}_S^{t_1,t_2,\cdots,t_{r+1}}$, we have the following short exact sequence in $\mathcal{A}_S^{t_1,t_2,\cdots,t_{r+1}}$: $$0\rightarrow F[1]\rightarrow E\rightarrow T\rightarrow 0,$$ where $F\in\mathcal{F}_{\vec{\phi}}^{\vec{t}}$ and $T\in\mathcal{T}_{\vec{\phi}}^{\vec{t}}$, with $\vec{\phi}=(\frac{1}{2},\cdots,\frac{1}{2})$ and $\vec{t}=(t_1,t_2,\cdots,t_{r+1})$. By induction, one easily shows that if $$-a_r(E)t_1+b_{r-1}(E)=\cdots=-a_{r+1-l}(E)t_{l}+b_{r-l}(E)=0,$$ then $T, F\in\mathcal{A}_{\frac{1}{2},\cdots,\frac{1}{2}}^{t_1,t_2,\cdots,t_l }$. Hence, by the definition of $\mathcal{F}_{\vec{\phi}}^{\vec{t}}$ and $\mathcal{T}_{\vec{\phi}}^{\vec{t}}$, we have $$-a_{r-l}(T)t_{l+1}+b_{r-l-1}(T)\geq 0$$ and $$-a_{r-l}(F)t_{l+1}+b_{r-l-1}(F)\leq 0.$$ This completes the proof. \end{proof} \begin{remark} We choose the vector $\vec{\phi}$ to be $(\frac{1}{2},\cdots,\frac{1}{2})$ just for simplicity of the statement. One can easily generalize the statement to an arbitrary vector $\vec{\phi}\in(0,1]^{r+1}$. \end{remark} \end{document}
\begin{document} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{question}[theorem]{Question} \newtheorem{problem}[theorem]{Problem} \newtheorem*{claim}{Claim} \newtheorem*{criterion}{Criterion} \newtheorem*{torus_thm}{Theorem A} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{construction}[theorem]{Construction} \newtheorem{notation}[theorem]{Notation} \newtheorem{convention}[theorem]{Convention} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{example}[theorem]{Example} \title{Real places and torus bundles} \author{Danny Calegari} \address{Department of Mathematics \\ California Institute of Technology \\ Pasadena CA, 91125} \date{2/16/2006, Version 0.15} \begin{abstract} If $M$ is a hyperbolic once-punctured torus bundle over $S^1$, then the trace field of $M$ has no real places. \end{abstract} \maketitle \section{Introduction} This paper studies a particular example of the interaction between topology and number theory in the context of finite volume hyperbolic $3$-manifolds.
If a {\em noncompact} hyperbolic $3$-manifold $M$ is irreducible and atoroidal, and homeomorphic to the interior of a compact $3$-manifold with torus boundary components, then $M$ admits a unique complete hyperbolic structure. This structure determines a faithful representation from $\pi_1(M)$ into $\mathcal PSL(2,\mathbb C)$ which can be taken to have image in $\mathcal PSL(2,K)$ for some smallest number field $K$, called the {\em trace field} of $M$. See \cite{MacReid}, Theorem 4.2.3. (Note that the noncompactness of $M$ is important for the identification of $K$ with the trace field of $\pi_1(M)$.) It is an important question to study which number fields $K$ can arise from this construction, and conversely, to understand the relationship between the topology of $M$ and the algebra of $K$. This question seems to be wide open, and very few nontrivial relationships are known. In this paper we show by an elementary geometric argument that when $M$ is a hyperbolic once-punctured torus bundle over $S^1$, the trace field $K$ has no real places (i.e. it is not Galois conjugate to a subfield of $\mathbb R$). \subsection{Statement of results} In \S 2 we give an exposition of the relation between Euler classes, Stiefel-Whitney classes, orientations, and boundary traces for hyperbolic surfaces and $3$-manifolds. All this material is classical, but it seems that there is no explicit and thorough account of it in the literature which takes our particular viewpoint. This discussion takes up more than half the length of the paper --- we feel that its inclusion is justified by its potential interest as a reference for the $3$-manifold community. Note that some facts which might be otherwise somewhat obscure, are clarified by a thorough discussion of this foundational material. 
In particular, it is immediate from our viewpoint that for a hyperbolic knot complement $M$, the trace of a longitude is $-2$, after lifting a geometric representation $\pi_1(M) \to \mathcal PSL(2,\mathbb C)$ to $\text{SL}(2,\mathbb C)$. \vskip 12pt In \S 3 we specialize to once-punctured torus bundles. After a preliminary discussion of trace fields and invariant trace fields, we prove our main theorem: \begin{torus_thm} Let $M_\phi$ be a hyperbolic oriented once-punctured torus bundle over $S^1$ with monodromy $\phi$. Let $K$ denote the trace field of $M_\phi$, and $k$ the invariant trace field. Then: \begin{enumerate} \item{$K$ admits no real places} \item{If $k$ admits a real place, then $H_1(M)$ contains 2-torsion} \end{enumerate} \end{torus_thm} \vskip 12pt In \S 4 we make some related observations. Most interesting is the observation that if $M$ contains a {\em pseudo-rational surface} --- i.e. a surface $S$ with all traces rational --- then such a surface has maximal Euler class with respect to any $\mathcal PSL(2,\mathbb R)$ representation. In particular, a $3$-manifold containing a pseudo-rational surface which is not Thurston norm minimizing in its homology class (e.g. it might be separating) has a trace field with no real place. A nice corollary, pointed out by A. Reid, is that this equality lets us construct examples of knot complements which contain no totally geodesic immersed surfaces, for example, the knot $8_{20}$ in Rolfsen's tables. We also give an example showing that for every $n<0$ and every $m$ with $|m| \le |n|$ and $m=n \mod 2$ there are incompressible surfaces $S_{n,m}$ with Euler characteristic $n$ in some hyperbolic $3$-manifold $M$ with a real place $\sigma$ for which $e_\sigma(S_{n,m}) = m$. The Milnor-Wood inequality implies that $|m| \le |n|$ is sharp, and the mod 2 condition is implied by the fact that the representations are Galois conjugate into a geometric representation, so these are all the possibilities which can arise.
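For the reader's convenience, the bound just invoked can be recorded explicitly; the following is a standard formulation of the Milnor--Wood inequality (our restatement, in the notation of the preceding paragraph):

```latex
% Milnor--Wood: for a compact oriented surface S with \chi(S) < 0 and a
% representation \rho : \pi_1(S) \to \mathrm{Homeo}^+(S^1), the (relative)
% Euler class evaluated on the fundamental class satisfies
\[
  \bigl|\, e(\rho)\bigl([S,\partial S]\bigr) \,\bigr| \;\le\; \bigl|\chi(S)\bigr| .
\]
% Applied to the surfaces S_{n,m} above, with \chi(S_{n,m}) = n and
% e_\sigma(S_{n,m}) = m, this is exactly the constraint |m| \le |n|.
```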
Somewhat amazingly, all these examples can be found in a single manifold $M$, the complement of the link $8_6^2$ in Rolfsen's tables. \section{Euler, Stiefel-Whitney, traces}\label{characteristic_class_section} In what follows, we frequently deal with complete noncompact hyperbolic surfaces $S$ with finite area. Such surfaces are homeomorphic to the interior of a compact surface $\overline{S}$ with boundary. By abuse of notation, we will refer to the boundary components of the (compactifying) surface $\overline{S}$ as the boundary components of $S$. For the convenience of the reader, we give a thorough exposition of the relationship between $\text{SL},\mathcal PSL$, Stiefel-Whitney, Euler, and hyperbolic geometry. Related references are \cite{Goldman}, \cite{Culler}. \subsection{Geometric representation} Let $S$ be a noncompact orientable surface of finite type, and let $\phi:S \to S$ be a pseudo-Anosov homeomorphism. Then the mapping torus $$M_\phi := S \times I /(s,1) \sim (\phi(s),0)$$ admits a complete hyperbolic structure. Corresponding to this hyperbolic structure there is a discrete faithful representation $$\rho_\text{geo}:\pi_1(M_\phi) \to \mathcal PSL(2,\mathbb C)$$ which is unique up to conjugacy and orientation. We fix one such representation, and call it the {\em geometric representation}. \subsection{Quasifuchsian representations} An orientable surface $S$ with negative Euler characteristic itself admits a hyperbolic structure, and any such hyperbolic structure determines some discrete faithful representation $$\rho_S:\pi_1(S) \to \mathcal PSL(2,\mathbb R)$$ for which the image of each boundary curve is parabolic. The space of discrete faithful representations from $\pi_1(S)$ into $\mathcal PSL(2,\mathbb C)$ up to conjugacy and orientation, and for which the image of each boundary curve maps to a parabolic element, is connected, and contains an open dense subset homeomorphic to a ball consisting of {\em quasifuchsian} representations. 
For each quasifuchsian representation $\rho_q:\pi_1(S) \to \mathcal PSL(2,\mathbb C)$ the ideal circle of the universal cover $\widetilde{S}$ maps to a quasicircle in the sphere $\mathbb CP^1$. See \cite{Brock_Canary_Minsky} and \cite{MatTan} for background and details on the theory of Kleinian groups. \subsection{Nonorientable representations} A surface $S$ might be orientable and yet admit representations into $\text{Isom}(\mathbb H^2)$ with nonorientable holonomy. The group of all isometries of $\mathbb H^2$ is $\mathcal PGL(2,\mathbb R)$ which embeds into $\mathcal PGL(2,\mathbb C) \cong \mathcal PSL(2,\mathbb C)$. \subsection{Classifying spaces} We may embed both $\mathcal PSL(2,\mathbb R)$ and $\mathcal PGL(2,\mathbb R)$ into the group $\text{Homeo}(S^1)$ by considering their action on the ideal circle $S^1_\infty$ of $\mathbb H^2$. The image of $\mathcal PSL(2,\mathbb R)$ is contained in $\text{Homeo}^+(S^1)$ (the superscript $+$ denotes orientation preserving homeomorphisms). These inclusion maps are homotopy equivalences. The circle $S^1$ embeds in $S^2$ as the equator, and the group $\text{Homeo}(S^1)$ embeds in $\text{Homeo}^+(S^2)$. We have two commutative diagrams $$\begin{diagram} \mathcal PSL(2,\mathbb R) & & \rTo & &\mathcal PGL(2,\mathbb R) \\ & \rdTo & & \ldTo & \\ & & \mathcal PSL(2,\mathbb C) & & \\ \end{diagram} \; \; \; \; \begin{diagram} \text{Homeo}^+(S^1) & & \rTo & &\text{Homeo}(S^1) \\ & \rdTo & & \ldTo & \\ & & \text{Homeo}^+(S^2) & & \\ \end{diagram}$$ where the left diagram includes into the right diagram by a homotopy equivalence. It is further true that the inclusions $$\text{Homeo}^+(S^1) \to \text{Homeo}^+(D^2)$$ and $$\text{Homeo}^+(S^2) \to \text{Homeo}^+(D^3)$$ where $D^2,D^3$ are open balls, obtained by coning to the center and throwing away the boundary, are homotopy equivalences. 
The first case is straightforward; the second follows from Hatcher's proof of the Smale conjecture \cite{Hatcher} and the (homotopy) equivalence of the categories DIFF and TOP in dimension $3$. For any group $G$, and any space $X$, a representation $$\rho:\pi_1(X) \to G$$ induces a homotopy class of maps to the classifying space $\text{B} G$. There is a tautological $G$ bundle over $\text{B} G$ called $\text{E} G$ which pulls back to a $G$-bundle over $X$ called $E_\rho$. Topologically, we form $E_\rho$ as the quotient bundle $$E_\rho = \widetilde{X} \times G / (x,g) \sim (\alpha(x),\rho(\alpha)(g))$$ where $\alpha$ ranges over elements of $\pi_1(X)$. \subsection{Homotopy of $\text{B} \text{Homeo}$} The group $\text{Homeo}^+(S^1)$ is homotopy equivalent to the subgroup $\text{SO}(2,\mathbb R) \approx S^1$ consisting of rotations. $\text{Homeo}(S^1)$ is homotopy equivalent to the subgroup $\text{O}(2,\mathbb R)$ which is homotopic to two disjoint circles. $\text{Homeo}^+(S^2)$ is homotopy equivalent to $\text{SO}(3,\mathbb R) \approx \mathbb RP^3$. It follows that we can compute the homotopy groups of $\text{B}\text{Homeo}$: $$\pi_i(\text{B}\text{Homeo}^+(S^1)) = \begin{cases} \mathbb Z & \text{ if }i=2 \\ 0 & \text{ otherwise } \\ \end{cases} \; \; \; \pi_i(\text{B}\text{Homeo}(S^1)) = \begin{cases} \mathbb Z/2\mathbb Z & \text{ if }i=1 \\ \mathbb Z & \text{ if }i=2 \\ 0 & \text{ otherwise } \\ \end{cases} $$ $$\pi_i(\text{B}\text{Homeo}^+(S^2)) = \begin{cases} \mathbb Z/2\mathbb Z & \text{ if }i=2 \\ 0 & \text{ if }i=0,1,3 \\ \mathbb Z & \text{ if }i=4 \\ \end{cases} $$ and is torsion for $i>4$. Cohomology on $\text{B} G$ pulls back to cohomology classes on $X$ which represent the first obstruction to trivializing the bundle $E_\rho$. Since $\text{B}\text{Homeo}^+(S^1)$ is a $K(\mathbb Z,2)$ and therefore has the homotopy type of $\mathbb CP^\infty$, the cohomology ring is generated by a single element $e \in H^2(\text{B}\text{Homeo}^+(S^1);\mathbb Z)$. 
Identifying $\text{Homeo}^+(S^1)$ with $\mathcal PSL(2,\mathbb R)$ up to homotopy, this element represents the obstruction to lifting a representation from $\mathcal PSL(2,\mathbb R)$ to $\widetilde{\text{SL}}(2,\mathbb R)$, its universal covering group. The mod 2 reduction of $e$ is the obstruction to lifting to $\text{SL}(2,\mathbb R)$. Since $\text{B}\text{Homeo}^+(S^2)$ is a $K(\mathbb Z/2\mathbb Z,2)$ below dimension $4$, only the class $w \in H^2(\text{B}\text{Homeo}^+(S^2);\mathbb Z/2\mathbb Z)$ is relevant to $3$-manifolds. Identifying $\text{Homeo}^+(S^2)$ with $\mathcal PSL(2,\mathbb C)$ up to homotopy, this represents the obstruction to lifting a representation from $\mathcal PSL(2,\mathbb C)$ into $\text{SL}(2,\mathbb C)$. If we include $\mathcal PSL(2,\mathbb R)$ into $\mathcal PSL(2,\mathbb C)$ then we see that $w$ is the image of $e$ under $H^2(X;\mathbb Z) \to H^2(X;\mathbb Z/2\mathbb Z)$. \subsection{$\text{B}\text{Homeo}(S^1)$} The short exact sequence $$0 \to \text{Homeo}^+(S^1) \to \text{Homeo}(S^1) \to \mathbb Z/2\mathbb Z \to 0$$ gives rise to a fibration of spaces $$\text{B}\text{Homeo}^+(S^1) \to \text{B}\text{Homeo}(S^1) \to \mathbb RP^\infty$$ which exhibits $\text{B}\text{Homeo}(S^1)$ up to homotopy as a twisted $\mathbb CP^\infty$ bundle over $\mathbb RP^\infty$. The generator of $\pi_1(\text{B}\text{Homeo}(S^1))$ acts on $\pi_2(\text{B}\text{Homeo}(S^1))$ by multiplication by $-1$. Since this multiplication is trivial with $\mathbb Z/2\mathbb Z$ coefficients, the $\mathbb Z/2\mathbb Z$ cohomology can be computed from the K\"unneth formula. 
In low dimensions we get $$H^i(\text{B}\text{Homeo}(S^1);\mathbb Z/2\mathbb Z) = \begin{cases} \mathbb Z/2\mathbb Z & \text{ for } i=0,1 \\ \mathbb Z/2\mathbb Z \oplus \mathbb Z/2\mathbb Z & \text{ for } i=2 \\ \end{cases}$$ The generator in dimension $1$ is the orientation class $o$, and the generators in dimension $2$ are $o^2$ and $r$, where $r$ is the mod 2 reduction of the Euler class, obtained by pulling back $w$ from $H^2(\text{B}\text{Homeo}^+(S^2);\mathbb Z/2\mathbb Z)$. \subsection{Relative bundles} Suppose we have a pair of spaces $X,Y$ and a representation $$\rho:\pi_1(X) \to G$$ such that $\rho|_{\pi_1(Y)}$ is trivial. Then the resulting map to the classifying space maps $Y$ to the basepoint of $\text{B} G$, and there is a well-defined relative homotopy class of pairs. This defines a canonical trivialization of the restricted bundle $E_\rho|_Y$. It follows that we can pull back reduced cohomology of $\text{B} G$ to a relative class in $H^*(X,Y)$ which represents the obstruction to extending this trivialization of $E_\rho$ over $Y$ to all of $X$. More generally, suppose $G$ contains a closed contractible subgroup $H$. Then the coset space $G/H$ is homotopy equivalent to $G$, and the bundle $E_\rho$ can be replaced by a homotopy equivalent bundle $E_\rho/H$. A representation $\rho:\pi_1(X) \to G$ for which $\rho(\pi_1(Y))$ is contained in $H$ defines a canonical trivialization of $E_\rho/H$, and therefore a relative cohomology class in $H^*(X,Y)$. In the groups $\mathcal PSL(2,\mathbb R)$ and $\mathcal PSL(2,\mathbb C)$ the maximal parabolic subgroups are contractible; these are the $N$ subgroups with respect to the $KAN$ (or Iwasawa) decomposition of these groups. In $\text{Homeo}^+(S^1)$ the stabilizer of a point is contractible. In $\text{Homeo}^+(S^2)$, the stabilizer of a point is not contractible, being homotopy equivalent to $S^1$.
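The contractibility claims used above can be made concrete. As an illustration (a standard computation, not specific to this paper), the $N$ subgroup of the $KAN$ decomposition fixing the point $\infty$ in the ideal circle is:

```latex
% The N factor of the KAN (Iwasawa) decomposition of PSL(2,R):
% unipotent upper-triangular matrices, up to sign.
\[
  N \;=\; \left\{ \pm\begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}
          \;:\; s \in \mathbb{R} \right\}
  \;\cong\; (\mathbb{R},+),
\]
% which is contractible; likewise the parabolic subgroup N_p < PSL(2,C)
% fixing a point p is isomorphic to (C,+). This is what lets a
% representation sending pi_1(Y) into N define a relative class in
% H^*(X,Y), as described above.
```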
\subsection{Euler classes of surfaces, and relative Euler class} For $S$ a closed orientable hyperbolic surface, the Euler class $e_{\rho_S}$ of the representation $$\rho_S: \pi_1(S) \to \mathcal PSL(2,\mathbb R)$$ associated to any hyperbolic structure on $S$ satisfies $$e_{\rho_S}([S]) = \pm \chi(S)$$ where $[S]$ represents the fundamental class in $H_2(S;\mathbb Z)$, and the sign depends on the choice of orientation. Note in this case that $E_{\rho_S}$ is isomorphic to the unit tangent bundle of $S$. If $S$ is orientable and complete with finite area, but possibly with punctures, then there is a relative fundamental class in $H_2(S,\partial S;\mathbb Z)$. If $E_{\rho_S}$ represents the associated circle bundle, then $E_{\rho_S}$ restricts to a finite union of tori fibering over the boundary components of $S$. By our assumption, the holonomy around a boundary component is parabolic, and has a (unique) fixed point in $S^1$. This fixed point suspends to a canonical section of $E_{\rho_S}$ over $\partial S$, and defines a trivialization of this restricted bundle. The {\em relative Euler class} of $\rho_S$ is the element of $H^2(S,\partial S;\mathbb Z)$ which represents the obstruction to extending this trivialization of $E_{\rho_S}|_{\partial S}$ over all of $E_{\rho_S}$. \subsection{Geometric computation of Euler class} In \cite{Thurston_circles}, Thurston defined a $2$-cocycle for a group $G$ acting in an orientation-preserving way on $S^1$. If $\sigma:G \to \text{Homeo}^+(S^1)$ is a representation, choose a point $p \in S^1$ and for every triple $g_0,g_1,g_2 \in G$ define $$c(g_0,g_1,g_2) = \begin{cases} 1 & \text{if $g_0(p),g_1(p),g_2(p)$ are positively ordered} \\ -1 & \text{if $g_0(p),g_1(p),g_2(p)$ are negatively ordered} \\ 0 & \text{if $g_0(p),g_1(p),g_2(p)$ are degenerate} \\ \end{cases}$$ Equivalently, think of $S^1$ as the ideal boundary of $\mathbb H^2$. 
Then for $g_0,g_1,g_2$ a triple of elements in $G$, define $c(g_0,g_1,g_2)$ to be the (signed) area of the ideal triangle with vertices at the $g_i(p)$, divided by $\pi$. From this geometric definition, it is easy to see that $c$ is coclosed, and that the cohomology class it defines is independent of $p$. Suppose that $G = \pi_1(M)$ and $H_i < G$ is $\pi_1(\partial M_i)$ for each component $\partial M_i$ of $\partial M$. If each $H_i$ has a fixed point in $S^1$, then for some choice of basepoint, $c$ is identically zero on $H_i$. It follows that $c$ defines a relative class in $H^2(M,\partial M;\mathbb Z)$. \begin{lemma}\label{Thurston_and_Euler} The Thurston cocycle $c$ is related to the Euler class $e$ by $$[c] = 2[e] \text{ in } H^2(M,\partial M;\mathbb Z)$$ \end{lemma} For a proof, see e.g. \cite{Jekel}. For more details, and related constructions, see \cite{Ghys} or \cite{Calegari_euler}. \subsection{Stiefel-Whitney class and $\text{SL}(2,\mathbb C)$} For a closed hyperbolic $3$-manifold, the $S^2$ bundle coming from the geometric representation $\rho_\text{geo}$ is isomorphic to the unit tangent bundle (to see this, use the exponential map). The pullback of the generator of $H^2(\text{BHomeo}^+(S^2);\mathbb Z/2\mathbb Z)$ can therefore be identified with the second Stiefel-Whitney class of $M$. If $M$ is orientable, $TM$ is parallelizable, so this class must vanish. If $M$ is noncompact with finite volume, the geometric representation on each boundary torus group is parabolic, with a fixed point $p \in \mathbb CP^1$. The subgroup $N_p < \mathcal PSL(2,\mathbb C)$ of parabolic elements fixing $p$ is contractible, so there is a trivialization of the associated bundle over $\partial M$ and we get a relative second Stiefel-Whitney class $$w_\text{geo} \in H^2(M,\partial M;\mathbb Z/2\mathbb Z)$$ which might not be trivial. 
Note that the image of $w_\text{geo}$ in $H^2(M;\mathbb Z/2\mathbb Z)$ is the ordinary second Stiefel-Whitney class which is always trivial for orientable $M$, as above. It follows from the long exact sequence in cohomology that $w_\text{geo}$ is the image of a distinguished class in $H^1(\partial M;\mathbb Z/2\mathbb Z)/i^*H^1(M;\mathbb Z/2\mathbb Z)$, where $i^*$ denotes the homomorphism induced by the inclusion map $i:\partial M \to M$. Since the ordinary second Stiefel-Whitney class vanishes, the geometric representation lifts to the double cover $$\hat{\rho}_\text{geo}:\pi_1(M) \to \text{SL}(2,\mathbb C)$$ Since every element of $\pi_1(\partial M)$ is parabolic, the preimages either have trace $2$ or $-2$. Different choices of lift to $\text{SL}(2,\mathbb C)$ might change the value on elements of $H_1(\partial M;\mathbb Z/2\mathbb Z)$ which are in the image of $H_1(M;\mathbb Z/2\mathbb Z)$ but do not change the values on elements which are homologically nontrivial in $\partial M$ but bound in $M$ (mod 2). In particular, if $\alpha \in \pi_1(\partial M)$ represents zero in $H_1(M;\mathbb Z/2\mathbb Z)$, then the trace is {\em independent} of the choice of lift. We deduce that if $\text{trace}(\hat{\rho}_\text{geo}(\alpha)) = -2$ where $\alpha = \partial S$ for some properly embedded $S \subset M$ then $w_\text{geo}$ is nontrivial. In general, for any representation $$\rho:\pi_1(M) \to \mathcal PSL(2,\mathbb C)$$ which sends $\pi_1(\partial M)$ to parabolic elements, we get a relative class $$w_\rho \in H^2(M,\partial M;\mathbb Z/2\mathbb Z)$$ If $\rho$ lifts to $\hat{\rho}:\pi_1(M) \to \text{SL}(2,\mathbb C)$ then the image of $w_\rho$ in $H^2(M;\mathbb Z/2\mathbb Z)$ is zero. If $\text{trace}(\hat{\rho}(\alpha)) = -2$ for some $\alpha \in \pi_1(\partial M)$ which represents a trivial class in $H_1(M;\mathbb Z/2\mathbb Z)$ then $w_\rho$ is nontrivial. 
We summarize this as a lemma: \begin{lemma}\label{Stiefel_nontrivial} Suppose that $M$ is a compact orientable $3$-manifold with torus boundary components, and suppose $\rho$ is some representation $$\rho:\pi_1(M) \to \mathcal PSL(2,\mathbb C)$$ for which elements of $\pi_1(\partial M)$ map to parabolic transformations. Suppose further that $\rho$ lifts to $$\hat{\rho}:\pi_1(M) \to \text{SL}(2,\mathbb C)$$ Let $w_\rho \in H^2(M,\partial M;\mathbb Z/2\mathbb Z)$ denote the relative Stiefel-Whitney class of $\rho$, where the trivialization on $\partial M$ comes from the fixed point of the corresponding parabolic subgroup. Further, suppose there is some $\alpha \in \pi_1(\partial M)$ representing zero in $H_1(M;\mathbb Z/2\mathbb Z)$ for which $$\text{trace}(\hat{\rho}(\alpha)) = -2$$ Then $w_\rho$ is nontrivial. \end{lemma} \subsection{$\chi$ and $w$} Let $M$ be a manifold containing an incompressible surface $S$ with a single boundary component. It turns out that there is a very simple relationship between $\chi(S)$ and $w_\text{geo}([S])$; in fact, $w_\text{geo}([S])$ is just the mod 2 reduction of $\chi(S)$. The simplest way to see this is topological. The value of $w_\text{geo}([S])$ depends only on the topology of the $S^2$ bundle over $S$ obtained from the representation $$\rho_\text{geo}|_S:\pi_1(S) \to \mathcal PSL(2,\mathbb C) < \text{Homeo}^+(S^2)$$ By using the exponential map, we may identify this bundle with the restriction of the unit tangent bundle $UTM|_S$. In particular, the value of $w_\text{geo}([S])$ only depends on the topology of a tubular neighborhood of $S$ in $M$. If $S$ is two-sided and embedded, then this tubular neighborhood is just a product $S \times I$ and therefore the value of $w_\text{geo}([S])$ depends only on $S$. Now, for a geometric representation $\rho_S|_S$ coming from a hyperbolic structure on $S$, we have already seen that $w_S([S]) =\chi(S) \text{ mod } 2$. 
In particular, we have the following lemma: \begin{lemma}\label{Stiefel_class} Let $M$ be a complete finite-volume orientable hyperbolic $3$-manifold. If $S \subset M$ is incompressible and orientable, then $$w_\text{geo}([S]) = \chi(S) \mod 2$$ where $[S]$ represents the fundamental class in $H_2(S,\partial S;\mathbb Z/2\mathbb Z)$. \end{lemma} The following corollary is folklore, and seems to have been observed first by W. Thurston, at least for Seifert surfaces of knots in $S^3$. There appears to be some confusion in the literature about whether it is well-known, and therefore we state it for completeness: \begin{corollary} Let $M$ be a complete noncompact orientable hyperbolic $3$-manifold, and let $\alpha \in \pi_1(\partial M)$ be the boundary of a $2$-sided incompressible surface $S \subset M$. If $\hat{\rho}_\text{geo}:\pi_1(M) \to \text{SL}(2,\mathbb C)$ is any lift of the geometric representation, then $$\text{trace}(\hat{\rho}_\text{geo}(\alpha)) = -2$$ \end{corollary} It follows that the longitude of any hyperbolic knot in $S^3$ has trace $-2$. This answers Question 6.2 in \cite{Cooper_Long}. \vskip 12pt We remark that by the proof of the Ending Lamination Conjecture \cite{Brock_Canary_Minsky}, for $S$ incompressible, the geometric representation $\rho_\text{geo}|_S$ coming from the hyperbolic structure on $M$ and a geometric representation $\rho_S|_S$ coming from a hyperbolic structure on $S$ are homotopic through representations of $\pi_1(S)$ into $\mathcal PSL(2,\mathbb C)$ which send boundary curves to parabolic elements, since $\rho_\text{geo}|_S$ is always in the closure of the space of quasifuchsian representations. This gives another, more high-powered proof of Lemma~\ref{Stiefel_class}.
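The trace computation underlying the corollary can be seen in miniature for the modular once-punctured torus. The matrices $A$, $B$ below are a standard pair of generators of a punctured-torus Fuchsian group in $\text{SL}(2,\mathbb Z)$ (this specific choice is our assumption, for illustration); the boundary loop is the commutator $[A,B]$, and its lift indeed has trace $-2$, in accordance with Fricke's identity $\text{tr}[A,B] = x^2 + y^2 + z^2 - xyz - 2$ for $x = \text{tr}(A)$, $y = \text{tr}(B)$, $z = \text{tr}(AB)$ (here $(3,3,3)$). A minimal sketch:

```python
def mul(X, Y):
    """Product of 2x2 integer matrices."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def inv(X):
    """Inverse in SL(2,Z): swap the diagonal, negate the off-diagonal."""
    return [[X[1][1], -X[0][1]], [-X[1][0], X[0][0]]]

A = [[1, 1], [1, 2]]    # hyperbolic, trace 3
B = [[1, -1], [-1, 2]]  # hyperbolic, trace 3

# [A,B] = A B A^{-1} B^{-1} represents the boundary loop of the punctured torus
comm = mul(mul(A, B), mul(inv(A), inv(B)))
trace = comm[0][0] + comm[1][1]
print(trace)  # -2: the boundary parabolic lifts with trace -2, not +2
```

The same $-2$ appears for any choice of free generators, since $[A,B]$ bounds the (once-punctured) fiber surface.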
\section{Torus bundles}\label{torus_section} \subsection{Number fields} If $M$ is a complete hyperbolic $3$-manifold of finite volume, we have the corresponding geometric representation $$\rho_\text{geo}:\pi_1(M) \to \mathcal PSL(2,\mathbb C)$$ The {\em trace field} $K$ of $M$ is the field generated by the traces of $\rho_\text{geo}(\alpha)$, as $\alpha$ varies over the elements of $\pi_1(M)$. Of course, the trace of an element in $\mathcal PSL(2,\mathbb C)$ is only determined up to sign, but the field generated by these elements is independent of sign. It turns out that the trace field $K$ is always a {\em number field} (i.e. some finite algebraic extension of $\mathbb Q$) and as remarked in the introduction, {\em for $M$ noncompact}, $\rho_\text{geo}$ can be conjugated into $\mathcal PSL(2,K)$. The {\em invariant trace field} is the subfield $k$ of $K$ generated by the squares of the traces of $\rho_\text{geo}(\alpha)$ as above. In general, $k$ and $K$ are not equal, and the degree satisfies $[K : k] = 2^n$ for some $n$. The field $k$ is an invariant of the commensurability class of $M$, where two hyperbolic manifolds $M,N$ are said to be commensurable if they have a common finite cover. See \cite{MacReid} for details and proofs. \subsection{Action of $\mathcal Gal(L/\mathbb Q)$} Let $L$ denote the Galois closure of $K$ in $\mathbb C$. If $p(x)$ is the minimal polynomial of a generating element of $K$, then $L$ is obtained from $\mathbb Q$ by adjoining all roots of $p(x)$. The Galois group $\mathcal Gal(L/\mathbb Q)$ of $L$ over $\mathbb Q$ acts on $L$ by field automorphisms, conjugating $K$ into different subfields of $\mathbb C$. The various (Galois conjugate) embeddings of $K$ into $\mathbb C$ are called {\em places}, and can be real or complex. Complex places come in pairs, interchanged by complex conjugation. 
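In practice the places of $K$ are counted from the real and complex roots of a minimal polynomial. A minimal sketch for a cubic, using the sign of the discriminant (the polynomial $x^3 - x^2 + x + 1$ is the one appearing in the examples later in this section; the helper name is ours):

```python
def cubic_real_roots(a, b, c, d):
    """Number of real roots of a*x^3 + b*x^2 + c*x + d (assumed squarefree):
    discriminant > 0 gives three real roots, < 0 gives one real root and
    one conjugate pair of complex roots."""
    disc = 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2
    return 3 if disc > 0 else 1

# minimal polynomial x^3 - x^2 + x + 1 of a cubic number field
r1 = cubic_real_roots(1, -1, 1, 1)   # number of real places
r2 = (3 - r1) // 2                   # complex roots come in conjugate pairs
print(r1, r2)                        # 1 1 : one real place, one complex place
assert 3 == r1 + 2 * r2              # degree = r1 + 2*r2
```

Note that any cubic (indeed any odd-degree) field automatically has at least one real place.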
If the degree $[K:\mathbb Q]=d$, then $$d = r_1 + 2r_2$$ where $r_1$ is the number of real places, and $r_2$ is the number of conjugate pairs of complex places. In general, if $$\rho:\pi_1(M) \to \mathcal PSL(2,K)$$ is some representation which is parabolic on $\partial M$, and if the relative Stiefel-Whitney class $w_\rho$ has zero image in $H^2(M;\mathbb Z/2\mathbb Z)$, then $\rho$ lifts to $\hat{\rho}$, and parabolic elements lift to elements of $\text{SL}(2,K)$ with trace equal to $\pm 2$. If $\rho_\sigma$ is obtained from $\rho$ by Galois conjugating $K$ to $K_\sigma$, then $\rho_\sigma$ lifts to $\text{SL}(2,K_\sigma)$, so the relative Stiefel-Whitney class $w_{\rho_\sigma}$ has zero image in $H^2(M;\mathbb Z/2\mathbb Z)$. Moreover, since $\pm 2$ are in the fixed field of $\sigma$, for every $\alpha \in \pi_1(\partial M)$ we have equality $$\text{trace}(\hat{\rho}_\sigma(\alpha)) = \text{trace}(\hat{\rho}(\alpha))$$ Since the relative class is determined by these traces, we have equality $$w_\rho = w_{\rho_\sigma}$$ \begin{remark} The invariance of $w_\rho$ under the Galois group implies that $w_\rho$ can be pulled back from the cohomology of $\text{B}\mathcal PSL(2,\overline{\mathbb Q})$ where $\overline{\mathbb Q}$ has the discrete topology. \end{remark} \subsection{Real places for $K$ and $k$} A real place for $K$ determines an embedding of $\pi_1(M)$ in $\mathcal PSL(2,\mathbb R)$, which lifts to $\text{SL}(2,\mathbb R)$, by the vanishing of the second Stiefel-Whitney class for a $3$-manifold, and the invariance of $w$ under the action of the group $\mathcal Gal(L/\mathbb Q)$. Let $\Gamma$ be the group generated by squares of elements of $\pi_1(M)$ where $M$ is noncompact as before. A real place for $k$ determines an embedding of $\Gamma$ into $\mathcal PSL(2,\mathbb R)$ which extends to an embedding of $\pi_1(M)$ into $\mathcal PSL(2,\mathbb C)$ for which every element of $\pi_1(M)$ has a trace which is real or pure imaginary.
Since the square of every element $\alpha$ of $\pi_1(M)$ stabilizes $\mathbb H^2$ in $\mathbb H^3$, it follows that $\alpha$ either fixes $\mathbb H^2$ (possibly reversing orientation) or takes it to an orthogonal copy of $\mathbb H^2$. In the second case, the square $\alpha^2$ takes $\mathbb H^2$ to itself by an orientation-reversing isometry, contrary to the fact that $\alpha^2$ is in $\mathcal PSL(2,\mathbb R)$ by hypothesis. It follows that $\pi_1(M)$ preserves $\mathbb H^2$, and therefore has image in $\mathcal PGL(2,\mathbb R)$. Again, this discussion depends on the noncompactness of $M$. In general, for noncompact $M$, a similar argument shows that the representation of $\pi_1(M)$ can always be conjugated into $\mathcal PGL(2,k)$ where $k$ is the invariant trace field. This fact is implicit e.g. in \cite{Neumann_Reid}, page 278. \subsection{Homology of torus bundles} Let $M_\phi$ be a hyperbolic once-punctured torus bundle with monodromy $\phi$. Then $\phi$ induces an automorphism on $H_1(T;\mathbb Z)$, represented by some matrix $$\phi \sim \begin{pmatrix} a & b\\ c & d\\ \end{pmatrix}$$ Since $\phi$ is pseudo-Anosov, the trace $a+d$ satisfies $|a+d|>2$. Then $H_1(M_\phi)$ is isomorphic to the cokernel of the map $$\begin{pmatrix} a-1 & b & 0 \\ c & d-1 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix} \;: \; \mathbb Z^3 \to \mathbb Z^3$$ which has rank $1$, and therefore $H_2(M_\phi,\partial M_\phi;\mathbb Z)$ is isomorphic to $\mathbb Z$, generated by the relative class of the fiber. \subsection{Torus bundles and $2$-torsion} Let $M_\phi$ be a hyperbolic surface bundle over $S^1$ with fiber a once-punctured torus $T$. The following theorem relates topology, homological algebra, and number theory: \begin{torus_thm} Let $M_\phi$ be a hyperbolic oriented once-punctured torus bundle over $S^1$ with monodromy $\phi$. Let $K$ denote the trace field of $M_\phi$, and $k$ the invariant trace field.
Then: \begin{enumerate} \item{$K$ admits no real places} \item{If $k$ admits a real place, then $H_1(M)$ contains 2-torsion} \end{enumerate} \end{torus_thm} \begin{proof} We suppose after conjugating that the image of $\rho_\text{geo}$ lies in $\mathcal PSL(2,K)$. We know that $\rho_\text{geo}$ lifts to $$\hat{\rho}_\text{geo}:\pi_1(M_\phi) \to \text{SL}(2,K)$$ Moreover, if $T$ denotes the fiber of the fibration, and $A,B$ are standard (free) generators for $\pi_1(T)$, then $$\text{trace }\hat{\rho}_\text{geo}([A,B]) = -2$$ as in Lemma~\ref{Stiefel_class}. Suppose $\sigma:K \to \mathbb R$ is a real place, and let $$\hat{\rho}_\sigma:\pi_1(M_\phi) \to \text{SL}(2,\mathbb R)$$ be obtained by Galois conjugating $K$ into $\mathbb R$. Let $\rho_\sigma:\pi_1(M_\phi) \to \mathcal PSL(2,\mathbb R)$ be obtained by composing $\hat{\rho}_\sigma$ with the covering map $\text{SL}(2,\mathbb R) \to \mathcal PSL(2,\mathbb R)$. We denote the relative Euler class of $\rho_\sigma$ by $e$: $$e \in H^2(M_\phi,\partial M_\phi;\mathbb Z)$$ Since $w_\text{geo} = w_\sigma$ we must have that $e([T])$ is odd. Triangulate $T$ by two ideal triangles $\Delta_1,\Delta_2$. The representation $\rho_\sigma$ determines a developing map from the universal cover $\widetilde{T}$ to $\mathbb H^2$ $$d:\widetilde{T} \to \mathbb H^2$$ If the image of both triangles has the same orientation, then the developing map is a homeomorphism, and we obtain a complete hyperbolic structure on $T$ which is invariant under $\phi$. But this implies that $\phi$ has finite order, which is incompatible with the existence of a complete hyperbolic structure on $M_\phi$. It follows that the orientations on the images of the $\Delta_i$ disagree. By Thurston's formula for $2e$ (Lemma~\ref{Thurston_and_Euler}), we have $$2e([T]) = \sum_i \text{sign of orientation on } d(\Delta_i) = 0$$ This gives a contradiction, and shows that $K$ has no real place. 
\vskip 12pt Now if $k$ admits a real place, then there is some $\sigma: K \to \mathbb C$ such that $$\rho_\sigma:\pi_1(M_\phi) \to \mathcal PGL(2,\mathbb R)$$ As above, we get a developing map from $\widetilde{T}$ to $\mathbb H^2$ for which the orientations on the ideal triangles must disagree, and the rational relative Euler class of the action must vanish. Since $\rho_\sigma$ is conjugate into $\mathcal PGL(2,\mathbb R)$ but not $\mathcal PSL(2,\mathbb R)$ the orientation class $o_\sigma \in H^1(M_\phi;\mathbb Z/2\mathbb Z)$ must be nontrivial. In fact, since the traces of elements of $\pi_1(\partial M_\phi)$ are $\pm 2$, boundary elements map to the subgroup $\mathcal PSL(2,\mathbb R)$, and therefore the orientation class $o_\sigma$ is a nontrivial class in $H^1(M_\phi,\partial M_\phi;\mathbb Z/2\mathbb Z)$. Since $H_1(M,\partial M;\mathbb Z)$ is torsion for a punctured torus bundle, we are done. \end{proof} \begin{remark} Note that any field of odd degree admits a real place; in particular, the degree of $K$ is always even. \end{remark} \begin{remark} If $M$ is a (compact or noncompact) hyperbolic surface bundle, and $S$ is any fiber, then the same argument shows that if $K$ has a real place $\sigma:K \to \mathbb R$, then $$|e_\sigma(S)| < -\chi(S), \; e_\sigma(S) = \chi(S) \mod 2$$ \end{remark} \begin{remark} J. Button has studied trace fields of punctured torus bundles with monodromy of the form $L^{-1}R^{-n}$ in \cite{Button}. He showed for positive $n \equiv 2 \mod 4$ and for all odd $n$ that the invariant trace field of $\pi_1(M_n)$ has no real places. For $n$ odd, the homology of $M_n$ has no $2$-torsion, but for $n$ even and not divisible by $4$, this does not follow from Theorem A, but rather from an explicit computation. One might further ask whether every punctured torus bundle whose invariant trace field has a real place has $4$-torsion in $H_1$. In fact, we posed exactly this question in an earlier version of this paper. J. 
Button has found a counterexample to this question: the census manifold s299 is a once-punctured torus bundle with monodromy $-R^4L^2$ whose invariant trace field $k$ has degree 3, and whose trace field $K$ has degree 12, and which has first homology $\mathbb Z \oplus \mathbb Z/2\mathbb Z \oplus \mathbb Z/6\mathbb Z$. \end{remark} \begin{remark} In \cite{Goldman}, W. Goldman characterizes geometric representations of once punctured torus groups amongst all $\mathcal PSL(2,\mathbb R)$ representations in terms of trace data. This gives an alternate proof of the first part of Theorem A, without using the geometric formula for the Euler class. \end{remark} \begin{remark} Part (2) of Theorem A also follows from Corollary 2.3 of \cite{Neumann_Reid}. \end{remark} \begin{example} Amongst the cusped manifolds in the Hodgson--Weeks census (see \cite{Weeks}), m039 is a torus bundle with monodromy $RL^4$, $H_1 = \mathbb Z \oplus \mathbb Z/4\mathbb Z$ and invariant trace field with minimal polynomial $x^3 - x^2 + x + 1$. It has a degree $2$ cover v3225 for which this is the trace field; this cover is fibered with fiber a twice-punctured torus, so necessarily the Euler class of the representation associated to the real place must vanish on this fiber. Some other examples: m040 is a torus bundle with monodromy $-RL^4$, $H_1 = \mathbb Z \oplus \mathbb Z/8\mathbb Z$ and invariant trace field with minimal polynomial $x^3 - x^2 + x + 1$, and v2231 is a torus bundle with monodromy $RL^2RL^3$, $H_1 = \mathbb Z \oplus \mathbb Z/16\mathbb Z$ and invariant trace field with minimal polynomial $x^7 - 3x^5 - 2x^3 - 2x^2 + 4x - 2$. The invariant trace fields were found with the help of the program {\tt snap} (\cite{snap}). 
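The homology in these examples is elementary to verify by hand. Assuming the usual convention $R = \left(\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix}\right)$, $L = \left(\begin{smallmatrix} 1 & 0 \\ 1 & 1 \end{smallmatrix}\right)$ for the monodromy generators (an assumption about the convention in use here), the torsion of $H_1(M_\phi)$ is the cokernel of $\phi - I$ on $\mathbb Z^2$, read off from the Smith normal form. A sketch checking m039 and m040:

```python
from math import gcd

def torsion(phi):
    """Invariant factors of Z^2 / (phi - I)Z^2 for a 2x2 integer matrix phi.
    For a 2x2 matrix the Smith normal form is diag(d1, d2), where
    d1 = gcd of the entries and d1 * d2 = |det|."""
    a, b = phi[0][0] - 1, phi[0][1]
    c, d = phi[1][0], phi[1][1] - 1
    det = abs(a * d - b * c)
    d1 = gcd(gcd(a, b), gcd(c, d))
    return (d1, det // d1)

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R = [[1, 1], [0, 1]]
L = [[1, 0], [1, 1]]

# m039: monodromy R L^4
phi = mul(R, mul(mul(L, L), mul(L, L)))
print(torsion(phi))      # (1, 4): torsion Z/4, matching H_1 = Z + Z/4

# m040: monodromy -R L^4
phi_neg = [[-x for x in row] for row in phi]
print(torsion(phi_neg))  # (1, 8): torsion Z/8, matching H_1 = Z + Z/8
```

Note that the invariant trace field of m039 and m040 has odd degree $3$, hence a real place, so the $2$-torsion found above is exactly what part (2) of Theorem A predicts.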
\end{example} \section{Inequalities for the Euler class} \subsection{Thurston norm} We have seen from \S~\ref{characteristic_class_section} and \S~\ref{torus_section} that $$|e_\sigma(S)| < -\chi(S)$$ for $S$ a fiber of $M$, and $$e_\sigma(S) = \chi(S) \mod 2$$ for any incompressible surface $S$, whenever $\sigma:K \to \mathbb R$ is a real place. In \cite{Thurston_norm}, Thurston introduced a norm on $H_2(M,\partial M;\mathbb R)$ for $M$ irreducible and atoroidal. For a homology class $[S]$, the norm satisfies $$\|[S]\| = \inf_S -\chi(S)$$ where the infimum is taken over all (possibly disconnected) representatives $S$ of $[S]$ with no spherical components. A generalization of this norm, due to Gromov, measures a similar complexity amongst all {\em immersed} surfaces with no spherical components representing a given homology class. A theorem of Gabai (\cite{Gabai}) shows that these two norms are equal (after a suitable normalization); i.e. any immersed surface may be replaced by an embedded surface of no larger norm. The key properties of the norm $\|\cdot \|$ are that the unit ball $\mathcal P(M)$ is a finite sided polyhedron, whose vertices are {\em rational}, and that there is a finite (possibly empty) collection of top dimensional faces $Q_i$ with the property that the integral homology classes $[S]$ representing fibrations of $M$ over $S^1$ are exactly those whose (positive) projective rays intersect the interiors of the $Q_i$. Such $Q_i$ are called {\em fibered faces} of $\mathcal P(M)$. Our estimate implies the following: \begin{theorem}\label{norm_ball} Let $M$ be a cusped hyperbolic $3$-manifold, and suppose $\sigma:K \to \mathbb R$ is a real place with associated relative Euler class $e_\sigma$. Then for every fibered face of $\mathcal P(M)$ there is a vertex $V_i$ such that $e_\sigma(V_i) \ne \|V_i\|$. Similarly, there is a vertex $V_j$ such that $e_\sigma(V_j) \ne -\|V_j\|$.
\end{theorem} \subsection{Pseudo-rational surfaces} In \cite{LongReid}, Long and Reid define a {\em pseudomodular} surface to be one whose cusp set is contained in $\mathbb Q$. We alter their definition slightly to adapt it to our context: \begin{definition} A subgroup $\Gamma < \mathcal PSL(2,\mathbb R)$ is {\em pseudo-rational} if the traces of all elements are contained in $\mathbb Q$. \end{definition} A discrete finite covolume pseudo-rational subgroup acts on $\mathbb H^2$ with quotient a pseudo-rational surface $S$. Such a surface in a hyperbolic $3$-manifold is necessarily totally geodesic. Note that for us, pseudo-rational surfaces are always orientable. \begin{example} A thrice-punctured sphere is a (pseudo)-rational surface. \end{example} If $K$ is the trace field of $M$ and $\sigma:K \to \mathbb R$ is a real place, the traces of a pseudo-rational subsurface group, being rational, are unchanged under $\sigma$. It follows that $$e_\sigma([S]) = \pm \chi(S)$$ for any pseudo-rational surface, and any real place $\sigma$. Consequently, we have the following: \begin{theorem}\label{pseudo_theorem} Let $M$ be a cusped hyperbolic $3$-manifold, and suppose $S \subset M$ is a pseudo-rational surface (possibly immersed). If $S$ is not (Gromov or Thurston) norm minimizing in its homology class, then $K$ has no real places. \end{theorem} In particular, if $M$ contains a separating pseudo-rational surface, its trace field has no real places. \begin{example} One method of constructing (pseudo)-rational surfaces is by covering thrice punctured spheres. A thrice-punctured sphere is always homologically essential, and therefore so is its preimage in a finite cover. But if such a finite cover has suitable symmetries, one might be able to find a (low genus) surface with a $2$-fold orientation-reversing fixed-point free symmetry, in the same homology class as the pseudo-rational surface.
One can then cut along such a surface, and reglue the resulting boundary components to themselves to get a new manifold with the same trace field as the old, in which the pseudo-rational surface is homologically trivial. \end{example} A nice application of Theorem~\ref{pseudo_theorem} is the following Corollary, which was suggested by A. Reid: \begin{corollary}\label{no_totally_geodesic} Let $M$ be a fibered knot complement in a rational homology sphere whose trace field $K$ has odd prime degree. Then $M$ does not contain an immersed totally geodesic surface. \end{corollary} \begin{proof} Since $K$ has prime degree, it has no proper subfields other than $\mathbb Q$; in particular, any immersed totally geodesic surface $S$ has rational traces. Since $M$ is a knot complement in a rational homology sphere, its rational second homology is $1$ dimensional. Since it is fibered, the rational homology is generated by the fiber $F$. It follows that $[S] = n[F]$ in homology for some nonzero integer $n$. Since $F$ is a fiber of a fibration, it is Thurston (and Gromov) norm-minimizing, and therefore $$-\chi(S) \ge -|n|\chi(F)$$ Let $\sigma:K \to \mathbb R$ be a real place, and let $e_\sigma$ be the associated relative Euler class. Then we have $$|e_\sigma(S)| = |n|\cdot |e_\sigma(F)| < -|n|\chi(F) \le -\chi(S)$$ contrary to Theorem~\ref{pseudo_theorem}. \end{proof} For example, the knot $8_{20}$ in \cite{Rolfsen} (the complement is m222 in the census) is fibered, and has trace field generated by a root of $x^5 - x^4 + x^3 + 2x^2 - 2x + 1$ which has degree $5$. \subsection{Realizing Euler classes} If $\Sigma$ is a closed, orientable surface of genus $g\ge 2$, Goldman \cite{Goldman} showed that the $\mathcal PSL(2,\mathbb R)$ representation variety of $\pi_1(\Sigma)$ has $4g-3$ components, indexed by values of the Euler class on $\Sigma$ satisfying $$|e([\Sigma])| \le -\chi(\Sigma)$$ The Milnor--Wood inequality (c.f. 
\cite{Milnor}, \cite{Wood}) says that one cannot do better, even amongst $\text{Homeo}^+(S^1)$ representations: \begin{theorem}[Milnor--Wood] Let $\Sigma$ be a closed surface of genus at least $1$, and let $\rho:\pi_1(\Sigma) \to \text{Homeo}^+(S^1)$ be a representation with Euler class $e_\rho$. Then $$|e_\rho([\Sigma])| \le -\chi(\Sigma)$$ \end{theorem} Similar theorems hold for surfaces with boundary, where one considers relative Euler classes. For representations $\rho_\sigma$ coming from real places $\sigma$ of trace fields $K$, we have the additional constraint that $e_\sigma([\Sigma]) = \chi(\Sigma) \mod 2$. Modulo this constraint, we will see how to construct simple examples which realize every possible compatible combination of Euler characteristic and Euler class. \begin{definition} Let $M$ be a manifold, and $\mathcal P(M)$ the unit ball of the Thurston norm. A {\em big diamond} is a symmetrical $4$-gon $D \subset \mathcal P(M)$ which is the intersection of $\mathcal P(M)$ with a two-dimensional plane $\pi$, and whose vertices are integer lattice points which generate the lattice of integral points in $\pi$. \end{definition} Since the norm of every integer lattice point is at least $1$, a ``big diamond" is as big as possible, hence the name. Notice too that only cusped manifolds can have big diamonds in $\mathcal P(M)$. \begin{theorem} Let $M$ be a cusped hyperbolic $3$-manifold. Suppose the trace field $K$ has a real place $\sigma$ with associated relative Euler class $e_\sigma$, and suppose further that the unit ball in the Thurston norm $\mathcal P(M)$ contains a big diamond $D$. Then for every integer $n<0$ and every integer $m$ with $|m|\le -n$ and $n=m \mod 2$ there is an immersed incompressible connected surface $S_{n,m}$ in $M$ satisfying $$\chi(S_{n,m}) = n, \; e_\sigma([S_{n,m}]) = m$$ \end{theorem} \begin{proof} Let $V_1,V_2$ be surfaces representing the vertices of the big diamond $D$. 
Then $\chi(V_1)=\chi(V_2) = -1$ and therefore $|e_\sigma(V_1)| = |e_\sigma(V_2)| = 1$. After replacing $V_1$ and/or $V_2$ with their negatives if necessary, we can assume $$e_\sigma(V_1) = 1, \; e_\sigma(V_2) = -1$$ For $p,q \ge 1$ let $V_{p,q}$ denote the Thurston norm-minimizing surface representing the homology class $p[V_1] + q[V_2]$. Since $D$ is a diamond, we have $$\chi(V_{p,q}) = -p-q$$ Since $e_\sigma$ is linear, we have $$e_\sigma(V_{p,q}) = p-q$$ If $p,q$ are coprime, then $V_{p,q}$ is represented by a connected surface. Otherwise, we have $p = ap',q=aq'$ for some $a>1$ where $p',q'$ are coprime. Then $\pi_1(V_{p',q'})$ has a subgroup of index $a$ which gives a connected incompressible immersed surface in $M$ with Euler characteristic $-p-q$ and Euler class $p-q$. Together with finite index subgroups of $\pi_1(V_1),\pi_1(V_2)$, this shows that every possibility is realized. \end{proof} \begin{example} The link $8_6^2$ in Rolfsen's tables \cite{Rolfsen} has a complement whose unit ball in the Thurston norm is a big diamond, and has trace field generated by a root of $x^3 -x^2 + 3x - 2$, which has a real place because the degree is odd (thanks to N. Dunfield for finding this example.) \end{example} \end{document}
\begin{document} \begin{abstract} This is the first part of our work on Zariski decomposition structures, where we study Zariski decompositions using Legendre-Fenchel type transforms. In this way we define a Zariski decomposition for curve classes. This decomposition enables us to develop the theory of the volume function for curves defined by the second named author, yielding some fundamental positivity results for curve classes. For varieties with special structures, the Zariski decomposition for curve classes admits an interesting geometric interpretation. \end{abstract} \title{\textbf{Convexity and Zariski decomposition structure}} \section{Introduction} In \cite{zariski62} Zariski introduced a fundamental tool for studying linear series on a surface, now known as a Zariski decomposition. Over the past 50 years the Zariski decomposition and its generalizations to divisors in higher dimensions have played a central role in birational geometry. In this paper we apply abstract convex analysis to the study of Zariski decompositions. The key perspective is that a Zariski decomposition captures the failure of strict log concavity of a volume function, and thus can be studied using Legendre-Fenchel type transforms. Surprisingly, such transforms capture rich geometric information about the variety, a posteriori motivating many well-known geometric inequalities for pseudo-effective divisors. There are two natural dualities for cones of divisors and curves: the nef cone of divisors $\Nef^{1}(X)$ is dual to the pseudo-effective cone of curves $\Eff_{1}(X)$ and the pseudo-effective cone of divisors $\Eff^{1}(X)$ is dual to the movable cone of curves $\Mov_{1}(X)$. In this paper we study the first duality, obtaining a Zariski decomposition for curve classes on varieties of arbitrary dimension which generalizes Zariski's original construction.
In the sequel \cite{lehmannxiao2015b}, we will focus on the second duality and study $\sigma$-decompositions from the perspective of convex analysis. Throughout we work over $\mathbb{C}$, but the main results also hold over an algebraically closed field or in the K\"ahler setting (see Section \ref{outlinesec}). \subsection{Zariski decomposition} We define a Zariski decomposition for big curve classes -- elements of the interior of the pseudo-effective cone of curves $\Eff_{1}(X)$. \begin{defn} \label{def zariski decomposition} Let $X$ be a projective variety of dimension $n$ and let $\alpha \in \Eff_{1}(X)^{\circ}$ be a big curve class. Then a Zariski decomposition for $\alpha$ is a decomposition \begin{align*} \alpha = B^{n-1} + \gamma \end{align*} where $B$ is a big and nef $\mathbb{R}$-Cartier divisor class, $\gamma$ is pseudo-effective, and $B \cdot \gamma = 0$. We call $B^{n-1}$ the ``positive part'' and $\gamma$ the ``negative part'' of the decomposition. \end{defn} This definition directly generalizes Zariski's original definition, which (for big classes) is given by similar intersection criteria. As we will see shortly in Section \ref{zardecomandconvexity}, it also mirrors the $\sigma$-decomposition of \cite{Nak04} and the Zariski decomposition of \cite{fl14}. Our first theorem is: \begin{thrm} \label{thm decomposition mainthrm} Let $X$ be a projective variety of dimension $n$ and let $\alpha \in \Eff_{1}(X)^{\circ}$ be a big curve class. Then $\alpha$ admits a unique Zariski decomposition $\alpha = B^{n-1} + \gamma$. \end{thrm} \begin{exmple} \label{example zariski surface} If $X$ is an algebraic surface, then the Zariski decomposition provided by Theorem \ref{thm decomposition mainthrm} coincides (for big classes) with the numerical version of the classical definition of \cite{zariski62}. Indeed, using Proposition \ref{negpartrigid} one sees that the negative part $\gamma$ is represented by an effective curve $N$.
The self-intersection matrix of $N$ must be negative-definite by the Hodge Index Theorem. (See e.g.~\cite{Nak04} for another perspective focusing on the volume function.) \end{exmple} \subsection{Convexity and Zariski decompositions} \label{zardecomandconvexity} According to the philosophy of \cite{fl14}, the key property of the Zariski decomposition (or $\sigma$-decomposition for divisors) is that it captures the failure of the volume function to be strictly log-concave. The Zariski decomposition for curves plays a similar role for the following interesting volume-type function defined in \cite{xiao15}. \begin{defn} (see \cite[Definition 1.1]{xiao15}) \label{defn:widehatvol} Let $X$ be a projective variety of dimension $n$ and let $\alpha \in \Eff_{1}(X)$ be a pseudo-effective curve class. Then the volume of $\alpha$ is defined to be \begin{align*} \widehat{\vol}(\alpha) = \inf_{A \textrm{ big and nef divisor class}} \left( \frac{A \cdot \alpha}{\vol(A)^{1/n}} \right)^{\frac{n}{n-1}}. \end{align*} We say that a big and nef divisor class $A$ computes $\widehat{\vol}(\alpha)$ if this infimum is achieved by $A$. When $\alpha$ is a curve class that is not pseudo-effective, we set $\widehat{\vol}(\alpha)=0$. \end{defn} The function $\widehat{\vol}$ is a polar transformation of the volume function for ample divisors. In our setting, the polar transformation plays the role of the Legendre-Fenchel transform of classical convex analysis, linking the differentiability of a function to the strict convexity of its transform. From this viewpoint, Definition \ref{def zariski decomposition} is important precisely because it captures the log concavity of $\widehat{\vol}$. \begin{thrm} \label{thm intro strict concavity M hatvol} Let $X$ be a smooth projective variety of dimension $n$. Let $\alpha_1, \alpha_2\in \Eff_1 (X)$ be two big curve classes. 
Then $$\widehat{\vol}(\alpha_1+ \alpha_2)^{\frac{n-1}{n}}\geq \widehat{\vol}(\alpha_1)^{\frac{n-1}{n}}+ \widehat{\vol}(\alpha_2)^{\frac{n-1}{n}}$$ with equality if and only if the positive parts in the Zariski decompositions of $\alpha_1$ and $\alpha_2$ are proportional. \end{thrm} As an important special case, the positive part of a curve class has the same volume as the original class, showing the similarity with the $\sigma$-decomposition. Furthermore, just as in Zariski's classical work, the ``projection'' onto the positive part elucidates the intersection-theoretic nature of the volume. \begin{thrm} Let $X$ be a projective variety of dimension $n$ and let $\alpha \in \Eff_{1}(X)^{\circ}$ be a big curve class. Suppose that $\alpha = B^{n-1} + \gamma$ is the Zariski decomposition of $\alpha$. Then \begin{equation*} \widehat{\vol}(\alpha) = \widehat{\vol}(B^{n-1}) = B^{n} \end{equation*} and $B$ is the unique big and nef divisor class with this property satisfying $B^{n-1} \preceq \alpha$. \end{thrm} \begin{exmple} An important feature of Zariski decompositions and $\widehat{\vol}$ for curves is that they can be calculated via intersection theory directly on $X$ once one has identified the nef cone of divisors. (In contrast, the analogous divisor constructions may require passing to birational models of $X$ to admit an interpretation via intersection theory.) This is illustrated by Example \ref{example proj bundle} where we calculate the Zariski decomposition of any curve class on the projective bundle over $\mathbb{P}^{1}$ defined by $\mathcal{O} \oplus \mathcal{O} \oplus \mathcal{O}(-1)$. \end{exmple} \subsection{Formal Zariski decompositions} The Zariski decomposition for curves can be deduced from a general theory of duality for log concave homogeneous functions defined on cones. We define a ``formal'' Zariski decomposition capturing the failure of strict log concavity of a certain class of homogeneous functions on finite-dimensional cones.
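To make the polar transform defined in the next paragraph concrete, take $\mathcal{C}$ to be the closed first quadrant in $\mathbb R^2$ and $f(a,b) = 2ab$, homogeneous of weight $s = 2$ (a toy model of the self-intersection form on the nef cone of a quadric surface; the specific choices are ours, for illustration). Then $\mathcal{H}f(w) = \inf_{v \in \mathcal{C}^{\circ}} (w \cdot v / f(v)^{1/2})^{2}$, and a crude numerical minimization recovers $\mathcal{H}f = f$ on the dual cone, as predicted by the surface picture below:

```python
def f(a, b):
    # weight-2 homogeneous, log-concave function on the open first quadrant
    return 2 * a * b

def polar(w1, w2, steps=4000):
    """Numerically evaluate Hf(w) = inf_v (w.v / f(v)^(1/2))^2.
    By homogeneity of the quotient it suffices to minimize over v = (t, 1), t > 0."""
    best = float("inf")
    for i in range(1, steps):
        t = i / 100.0                        # sample slopes t in (0, 40)
        val = (t * w1 + w2) ** 2 / f(t, 1)
        best = min(best, val)
    return best

for w1, w2 in [(1.0, 1.0), (2.0, 3.0), (5.0, 0.5)]:
    exact = f(w1, w2)                        # Hf = f here (AM-GM, with the inf
    assert abs(polar(w1, w2) - exact) < 1e-3 * exact  # attained at t = w2/w1)
```

The minimizing direction $t = w_2/w_1$ is the analogue of the big and nef class computing $\widehat{\vol}$ in Definition \ref{defn:widehatvol}.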
Let $\mathcal{C}$ be a full dimensional closed proper convex cone in a finite dimensional vector space. For any $s>1$, let $\HConc_{s}(\mathcal{C})$ denote the collection of functions $f: \mathcal{C} \to \mathbb{R}$ that are upper-semicontinuous, homogeneous of weight $s>1$, strictly positive on the interior of $\mathcal{C}$, and which are $s$-concave in the sense that \begin{equation*} f(v)^{1/s} + f(x)^{1/s} \leq f(x+v)^{1/s} \end{equation*} for any $v,x \in \mathcal{C}$. In this context, the correct analogue of the Legendre-Fenchel transform is the (concave homogeneous) polar transform. For any $f \in \HConc_{s}(\mathcal{C})$, the polar $\mathcal{H}f$ is an element of $\HConc_{s/(s-1)}(\mathcal{C}^{*})$ for the dual cone $\mathcal{C}^{*}$, defined as \begin{equation*} \mathcal{H}f(w^{*}) = \inf_{v \in \mathcal{C}^{\circ}} \left( \frac{w^{*} \cdot v}{f(v)^{1/s}} \right)^{s/(s-1)} \qquad \qquad \forall w^{*} \in \mathcal{C}^{*}. \end{equation*} We define what it means for $f \in \HConc_{s}(\mathcal{C})$ to have a Zariski decomposition structure and show that it follows from a differentiability condition for $\mathcal{H}f$, and vice versa (see Section \ref{formal zariski section}). Just as in the classical definition of Zariski, one can view this structure as a decomposition of the elements of $\mathcal{C}^{\circ}$ into ``positive parts'' retaining the value of $f$ and ``negative parts'' along which the strict log concavity of $f$ fails. \begin{exmple} Let $q$ be a bilinear form on a vector space $V$ of signature $(1,\dim V - 1)$ and set $f(v) = q(v,v)$. Suppose $\mathcal{C}$ is a closed full-dimensional convex cone on which $f$ is non-negative. Identifying $V$ with $V^*$ under $q$, we see that $\mathcal{C} \subset \mathcal{C}^{*}$ and that $\mathcal{H}f|_{\mathcal{C}} = f$ by the Hodge inequality. Then $\mathcal{H}f$ on the entire cone $\mathcal{C}^*$ is controlled by a Zariski decomposition with positive parts lying in $\mathcal{C}$.
This is of course the familiar picture for surfaces, where $f$ is the self-intersection on the nef cone and $\mathcal{H}f$ is the volume on the pseudo-effective cone. Thus we see that the conclusion of Example \ref{example zariski surface} -- that $\vol$ and $\widehat{\vol}$ coincide on surfaces -- is a direct consequence of the Hodge Index Theorem for surfaces. Furthermore, we obtain a theoretical perspective motivating the linear algebra calculations of \cite{zariski62}. \end{exmple} Many of the basic geometric inequalities in algebraic geometry -- and hence for polytopes or convex bodies via toric varieties (as in \cite{teissier82} and \cite{khovanskii88} and the references therein) -- can be understood using this abstract framework. A posteriori this theory motivates many well-known theorems about the volume of divisors (which can itself be interpreted as a polar transform). In particular, the $\sigma$-decomposition for divisor classes can also be interpreted via our general theory. See Remark \ref{aposteriori} for more details. \subsection{Positivity of curves} The volume function for curves shares many of the important properties of the volume function for divisors. This is no accident -- as explained above, polar duality behaves compatibly with many topological properties and with geometric inequalities. Clearly the volume function is homogeneous, and it is not hard to show that it is positive precisely on the big cone of curves. Perhaps the most important property is the following description of the derivative, which mirrors the results of \cite{bfj09} and \cite{lm09} for divisors. \begin{thrm} \label{thm C1} Let $X$ be a projective variety of dimension $n$. Then the function $\widehat{\vol}$ is $\mathcal{C}^{1}$ on the big cone of curves. More precisely, let $\alpha$ be a big curve class on $X$ and write $\alpha = B^{n-1} + \gamma$ for its Zariski decomposition. For any curve class $\beta$, we have \begin{align*} \left.
\frac{d}{dt} \right|_{t = 0} \widehat{\vol}(\alpha + t \beta) = \frac{n}{n-1} B \cdot \beta. \end{align*} \end{thrm} Another key property of the $\sigma$-decomposition for divisors is that the negative part is effective. While the negative part of the Zariski decomposition for curves need not be effective, the correct analogue is given by the following proposition. \begin{prop} Let $X$ be a projective variety of dimension $n$. Let $\alpha$ be a big curve class and write $\alpha = B^{n-1} + \gamma$ for its Zariski decomposition. There is a proper subscheme $i: V \subsetneq X$ and a pseudo-effective class $\gamma' \in N_{1}(V)$ such that $i_{*}\gamma' = \gamma$. \end{prop} By analogy with the algebraic Morse inequality for nef divisors, we prove a Morse-type inequality for curves. \begin{thrm} \label{thm morseinequality} Let $X$ be a smooth projective variety of dimension $n$. Let $\alpha$ be a big curve class and let $\beta$ be a nef curve class. Write $\alpha = B^{n-1} + \gamma$ for the Zariski decomposition of $\alpha$. If \begin{equation*} \widehat{\vol}(\alpha) - n B \cdot \beta > 0 \end{equation*} then $\alpha - \beta$ is big. \end{thrm} \subsection{Examples} The Zariski decomposition is particularly striking for varieties with a rich geometric structure. We discuss several examples: toric varieties, Mori dream spaces and hyperk\"ahler manifolds. The complete intersection cone $\CI_{1}(X)$ is defined to be the closure of the set of classes of the form $A^{n-1}$ for an ample divisor $A$ on $X$. Note that the positive part of the Zariski decomposition takes values in $\CI_{1}(X)$. We should emphasize that $\CI_{1}(X)$ need not be convex -- the appendix gives an explicit example. \subsubsection{Toric varieties} Let $X$ be a simplicial projective toric variety of dimension $n$ defined by a fan $\Sigma$. 
Suppose that the curve class $\alpha$ lies in the interior of the movable cone of curves, or equivalently, $\alpha$ is defined by a positive Minkowski weight on the rays of $\Sigma$. A classical theorem of Minkowski attaches to such a weight a polytope $P_{\alpha}$ whose facet normals are the rays of $\Sigma$ and whose facet volumes are determined by the weights. In this setting, the volume is calculated by a mixed volume problem: fixing $P_{\alpha}$, amongst all polytopes whose normal fan refines $\Sigma$ there is a unique $Q$ (up to homothety) minimizing the mixed volume calculation \begin{equation*} \left( \frac{V(P_{\alpha}^{n-1},Q)}{\vol(Q)^{1/n}} \right)^{n/(n-1)}. \end{equation*} Then the volume is $n!$ times this minimum value, and the positive part of $\alpha$ is proportional to the $(n-1)$-product of the big and nef divisor defined by $Q$. Note that if we let $Q$ vary over all polytopes then the Brunn-Minkowski inequality shows that the minimum is given by $Q=cP_{\alpha}$, but the normal fan condition on $Q$ yields a new version of this classical problem. We give a procedure for computing the volume of any big curve class $\alpha$ using Zariski decompositions. From the viewpoint of convex analysis, the compatibility with the Zariski decomposition corresponds to the fact that the solution of a mixed volume problem should be given by a condition on the derivative. The procedure is as follows. Note that every big and nef divisor on $X$ is semi-ample (that is, the pullback of an ample divisor on a toric birational model). Thus, the Zariski decomposition for curves is characterized by the existence of a birational toric morphism $\pi: X \to X'$ such that: \begin{itemize} \item the class $\pi_{*}\alpha \in N_{1}(X')$ coincides with $A^{n-1}$ for some ample divisor $A$, and \item $\alpha - (\pi^*A)^{n-1}$ is pseudo-effective. \end{itemize} Thus one can compute the Zariski decomposition and volume for $\alpha$ by the following procedure.
\begin{enumerate} \item For each toric birational morphism $\pi: X \to X'$, check whether $\pi_{*}\alpha$ is in the complete intersection cone. If so, there is a unique big and nef divisor $A_{X'}$ such that $A_{X'}^{n-1} = \pi_{*}\alpha$. \item Check if $\alpha - (\pi^{*}A_{X'})^{n-1}$ is pseudo-effective. \end{enumerate} We analyze these conditions in a simple toric variety; see Example \ref{example toric zariski}. \subsubsection{Hyperk\"ahler manifolds} For a hyperk\"ahler manifold $X$, the results of \cite[Section 4]{Bou04} show that the volume and $\sigma$-decomposition of divisors satisfy a natural compatibility with the Beauville-Bogomolov form. We prove the analogous properties for curve classes. The following theorem is phrased in the K\"ahler setting, although the analogous statements in the projective setting are also true. \begin{thrm} Let $X$ be a hyperk\"ahler manifold of dimension $n$ and let $q$ denote the bilinear form on $H^{n-1,n-1}(X)$ induced via duality from the Beauville-Bogomolov form on $H^{1,1}(X)$. \begin{enumerate} \item The cone of complete intersection $(n-1,n-1)$-classes is $q$-dual to the cone of pseudo-effective $(n-1,n-1)$-classes. \item If $\alpha$ is a complete intersection $(n-1,n-1)$-class then $\widehat{\vol}(\alpha) = q(\alpha,\alpha)^{n/(2(n-1))}$. \item Suppose $\alpha$ lies in the interior of the cone of pseudo-effective $(n-1,n-1)$-classes and write $\alpha = B^{n-1} + \gamma$ for its Zariski decomposition. Then $q(B^{n-1},\gamma) = 0$ and if $\gamma$ is non-zero then $q(\gamma,\gamma)<0$. \end{enumerate} \end{thrm} \subsubsection{Mori dream spaces} If $X$ is a Mori dream space, then the movable cone of divisors admits a chamber structure defined via the ample cones on small $\mathbb{Q}$-factorial modifications. This chamber structure behaves compatibly with the $\sigma$-decomposition and the volume function for divisors. For curves we obtain a complementary picture.
The movable cone of curves admits a ``chamber structure'' defined via the complete intersection cones on small $\mathbb{Q}$-factorial modifications. However, the Zariski decomposition and volume of curves are no longer invariant under small $\mathbb{Q}$-factorial modifications but instead exactly reflect the changing structure of the pseudo-effective cone of curves. Thus the Zariski decomposition is the right tool to understand the birational geometry of movable curves on $X$. This example is analyzed in \cite{lehmannxiao2015b}, since it relies on the techniques developed there. \subsection{Connections with birational geometry} Finally, we briefly discuss the relationship between the volume function for curves and several other topics in birational geometry. A basic technique in birational geometry is to bound the positivity of a divisor using its intersections against specified curves. These results can profitably be reinterpreted using the volume function of curves. \begin{prop} \label{multbound} Let $X$ be a smooth projective variety of dimension $n$. Choose positive integers $\{k_{i} \}_{i=1}^{r}$. Suppose that $\alpha \in \Mov_{1}(X)$ is represented by a family of irreducible curves such that for any collection of general points $x_{1},x_{2},\ldots,x_{r},y$ of $X$, there is a curve in our family which contains $y$ and contains each $x_{i}$ with multiplicity $\geq k_{i}$. Then \begin{equation*} \widehat{\vol}(\alpha)^{(n-1)/n} \geq \frac{\sum_{i} k_{i}}{r^{1/n}}. \end{equation*} \end{prop} We can thus apply volumes of curves to study Seshadri constants, bounds on volume of divisors, and other related topics. We defer a more in-depth discussion to Section \ref{applications sec}, contenting ourselves with a fascinating example. \begin{exmple} If $X$ is rationally connected, it is interesting to analyze the possible volumes for classes of special rational curves on $X$.
When $X$ is a Fano variety of Picard rank $1$, these invariants will be closely related to classical invariants such as the length and degree. For example, we say that $\alpha \in N_{1}(X)$ is a rationally connecting class if for any two general points of $X$ there is a chain of rational curves of class $\alpha$ connecting the two points. Is there a uniform upper bound (depending only on the dimension) for the minimal volume of a rationally connecting class on a rationally connected $X$? \cite{KMM92} and \cite{campana92} show that this is true for smooth Fano varieties. We discuss this question briefly in Section \ref{rcexample}. \end{exmple} \subsection{Outline of paper} \label{outlinesec} In this paper we will work with projective varieties over $\mathbb{C}$ for simplicity of arguments and for compatibility with cited references. However, all the results will extend to smooth varieties over arbitrary algebraically closed fields on the one hand and arbitrary compact K\"ahler manifolds on the other. We give a general framework for this extension in Sections \ref{charp background sec} and \ref{kahler background sec} and then explain the details as we go. In Section \ref{section preliminaries} we review the necessary background, and make several notes explaining how the proofs can be adjusted to arbitrary algebraically closed fields and compact K\"ahler manifolds. Sections \ref{legendresection} and \ref{formal zariski section} discuss polar transforms and formal Zariski decompositions for log concave functions. In Section \ref{section zariski} we construct the Zariski decomposition of curves and study its basic properties and its relationship with $\widehat{\vol}$. Section \ref{toric section} discusses toric varieties, and Section \ref{hyperkahler section} is devoted to the study of hyperk\"ahler manifolds. Section \ref{applications sec} discusses connections with other areas of birational geometry. 
Finally, the appendix collects some ``reverse" Khovanskii-Teissier type results in the analytic setting and a result related to the transcendental holomorphic Morse inequality. The appendix also gives a toric example where the complete intersection cone of curves is not convex. \section{Preliminaries} \label{section preliminaries} In this section, we first fix some notations over a projective variety $X$: \begin{itemize} \item $N^1(X)$: the real vector space of numerical classes of divisors; \item $N_1(X)$: the real vector space of numerical classes of curves; \item $\Eff^1(X)$: the cone of pseudo-effective divisor classes; \item $\Nef^1(X)$: the cone of nef divisor classes; \item $\Mov^1(X)$: the cone of movable divisor classes; \item $\Eff_1(X)$: the cone of pseudo-effective curve classes; \item $\Mov_1(X)$: the cone of movable curve classes, equivalently by \cite{BDPP13} the dual of $\Eff^{1}(X)$; \item $\CI_1(X)$: the closure of the set of all curve classes of the form $A^{n-1} $ for an ample divisor $A$. \end{itemize} With only a few exceptions, capital letters $A,B,D,L$ will denote $\mathbb{R}$-Cartier divisor classes and greek letters $\alpha,\beta,\gamma$ will denote curve classes. For two curve classes $\alpha, \beta$, we write $\alpha\succeq \beta$ (resp. $\alpha\preceq \beta$) to denote that $\alpha-\beta$ (resp. $\beta-\alpha$) belongs to $\Eff_{1}(X)$. We will do similarly for divisor classes, or two elements of a cone $\mathcal{C}$ if the cone is understood. We will use the notation $\langle - \rangle$ for the positive product on smooth varieties as in \cite{BDPP13}, \cite{bfj09} and \cite{Bou02}. To extend our results to arbitrary compact K\"ahler manifolds, we need to deal with transcendental objects which are not given by divisors or curves. Let $X$ be a compact K\"ahler manifold of dimension $n$. 
By analogy with the projective situation, we need to deal with the following spaces and positive cones: \begin{itemize} \item $H^{1,1}_{\BC}(X, \mathbb{R})$: the real Bott-Chern cohomology group of bidegree $(1,1)$; \item $H^{n-1,n-1}_{\BC}(X, \mathbb{R})$: the real Bott-Chern cohomology group of bidegree $(n-1,n-1)$; \item $\mathcal{N}(X)$: the cone of pseudo-effective $(n-1,n-1)$-classes; \item $\mathcal{M}(X)$: the cone of movable $(n-1,n-1)$-classes; \item $\overline{\mathcal{K}}(X)$: the cone of nef $(1,1)$-classes, equivalently the closure of the K\"ahler cone; \item $\mathcal{E}(X)$: the cone of pseudo-effective $(1,1)$-classes. \end{itemize} Recall that we call a Bott-Chern class pseudo-effective if it contains a $d$-closed positive current, and call an $(n-1,n-1)$-class movable if it is contained in the closure of the cone generated by the classes of the form $\mu_{*}(\widetilde{\omega}_1 \wedge...\wedge \widetilde{\omega}_{n-1})$ where $\mu: \widetilde{X}\rightarrow X$ is a modification and $\widetilde{\omega}_1,...,\widetilde{\omega}_{n-1}$ are K\"ahler metrics on $\widetilde{X}$. For the basic theory of positive currents, we refer the reader to \cite{Dem}. If $X$ is a smooth projective variety over $\mathbb{C}$, then we have the following relations (see e.g. \cite{BDPP13}) $$\Nef^1(X)=\overline{\mathcal{K}}(X)\cap N^1(X),\ \Eff^1(X)=\mathcal{E}(X)\cap N^1(X)$$ and $$\Eff_1(X)=\mathcal{N}(X)\cap N_1(X),\ \Mov_1(X)=\mathcal{M}(X)\cap N_1(X).$$ \subsection{Khovanskii-Teissier inequalities} We collect several results which we will frequently use in our paper. In every case, the statement for arbitrary projective varieties follows from the familiar smooth versions via a pullback argument. Recall the well-known Khovanskii-Teissier inequalities for a pair of nef divisors over projective varieties (see e.g. \cite{Tei79}). \begin{itemize} \item Let $X$ be a projective variety and let $A,B$ be two nef divisor classes on $X$.
Then we have \begin{equation*} A^{n-1} \cdot B \geq (A^{n})^{(n-1)/n} (B^{n})^{1/n}. \end{equation*} \end{itemize} We also need the characterization of the equality case in the above inequality as in \cite[Theorem D]{bfj09} -- see also \cite{FX14} for the analytic proof for transcendental classes in the K\"ahler setting. (We call this characterization Teissier's proportionality theorem as it was first proposed and studied by B.~Teissier.) \begin{itemize} \item Let $X$ be a projective variety and let $A,B$ be two big and nef divisor classes on $X$. Then \begin{equation*} A^{n-1} \cdot B = (A^{n})^{(n-1)/n} (B^{n})^{1/n} \end{equation*} if and only if $A$ and $B$ are proportional. \end{itemize} We next prove a more general version of Teissier's proportionality theorem for $n$ big and nef $(1,1)$-classes over compact K\"ahler manifolds (thus including projective varieties defined over $\mathbb{C}$) which follows easily from the result of \cite{FX14}. This result should be useful in the study of the structure of the complete intersection cone $\CI_1 (X)$. \begin{thrm} \label{thm KT inequality} Let $X$ be a compact K\"ahler manifold of dimension $n$, and let $B_1,...,B_n$ be $n$ big and nef $(1,1)$-classes over $X$. Then we have \begin{align*} B_1 \cdot B_2 \cdots B_n \geq (B_1 ^n)^{1/n} \cdot (B_2 ^n)^{1/n} \cdots (B_n ^n)^{1/n}, \end{align*} where equality holds if and only if $B_1,...,B_n$ are proportional. \end{thrm} We include a proof, since we are not aware of any reference in the literature. The proof reduces the global inequalities to the pointwise Brunn-Minkowski inequalities by solving Monge-Amp\`{e}re equations \cite{FX14} (see also \cite{Dem93} for a related result), and then applies the result of \cite{FX14} -- where the key technique and estimates go back to \cite{fuxiao14kcone} -- for a pair of big and nef classes (see also \cite[Theorem D]{bfj09} for divisor classes).
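In the simplest case $n = 2$, the pointwise inequality behind the proof below reduces to the mixed determinant inequality $D(A,B) \geq \sqrt{\det A \det B}$ for positive definite symmetric matrices, which is easy to test numerically (a Python sketch of our own, not part of the proof; \texttt{mixed\_det} and \texttt{random\_spd} are hypothetical helpers):

```python
import random

# Our own illustration: for 2x2 positive definite symmetric matrices the mixed
# determinant D(A, B) = (det(A+B) - det A - det B)/2 satisfies
# D(A, B) >= sqrt(det A * det B), the n = 2 pointwise Brunn-Minkowski inequality.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def mixed_det(A, B):
    S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
    return (det2(S) - det2(A) - det2(B)) / 2

def random_spd(rng):
    # Gram matrix L L^T of a random invertible lower-triangular L is positive definite
    a, b, c = rng.uniform(0.1, 3), rng.uniform(-2, 2), rng.uniform(0.1, 3)
    L = [[a, 0.0], [b, c]]
    return [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)] for i in range(2)]

rng = random.Random(0)
pairs = [(random_spd(rng), random_spd(rng)) for _ in range(1000)]
ok = all(mixed_det(A, B) >= (det2(A) * det2(B)) ** 0.5 - 1e-9 for A, B in pairs)
```

Equality holds exactly when $A$ and $B$ are proportional, mirroring the equality case in Teissier's proportionality theorem.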
Recall that the ample locus $\Amp(D)$ of a big $(1,1)$-class $D$ is the set of points $x\in X$ such that there is a strictly positive current $T_x \in D$ with analytic singularities which is smooth near $x$. When $L$ is a big $\mathbb{R}$-divisor class on a smooth projective variety $X$, the ample locus $\Amp(L)$ is equal to the complement of the augmented base locus $\mathbb{B}_{+}(L)$ (see \cite{Bou04}). \begin{proof} Without loss of generality, we can assume that $B_i ^n=1$ for every $i$. Then we need to prove \begin{align*} B_1 \cdot B_2 \cdots B_n \geq 1, \end{align*} with equality if and only if $B_1,...,B_n$ are equal. To this end, we fix a smooth volume form $\Phi$ with $\vol(\Phi) =1$. We choose a smooth $(1,1)$-form $b_j$ in the class $B_j$. Then by \cite[Theorem C]{BEGZ10MAbig}, for every class $B_j$ we can solve the following singular Monge-Amp\`{e}re equation \begin{align*} \langle (b_j + i\partial \bar \partial \psi_j)^n \rangle = \Phi, \end{align*} where $\langle - \rangle$ denotes the non-pluripolar product of positive currents (see \cite[Definition 1.1 and Proposition 1.6]{BEGZ10MAbig}). Denote $T_j = b_j + i\partial \bar \partial \psi_j$; then \cite[Theorem B]{BEGZ10MAbig} implies that $T_j$ is a positive current with minimal singularities in the class $B_j$. Moreover, $T_j$ is a K\"ahler metric over the ample locus $\Amp(B_j)$ of the big class $B_j$ by \cite[Theorem C]{BEGZ10MAbig}. Note that $\Amp(B_j)$ is a Zariski open set of $X$. Denote $\Omega=\Amp(B_1)\cap ... \cap \Amp(B_n)$, which is also a Zariski open set. By \cite[Definition 1.17]{BEGZ10MAbig}, we then have \begin{align*} B_1 \cdot B_2 \cdots B_n &= \int_X \langle T_1 \wedge ... \wedge T_n\rangle\\ &= \int_\Omega T_1 \wedge ... \wedge T_n, \end{align*} where the second line follows because the non-pluripolar product $\langle T_1 \wedge ... \wedge T_n\rangle$ puts no mass on the subvariety $X\setminus \Omega$ and all the $T_j$ are K\"ahler metrics over $\Omega$.
For any point $x\in \Omega$, we have the following pointwise Brunn-Minkowski inequality \begin{align*} T_1 \wedge ... \wedge T_n \geq \left(\frac{T_1 ^n}{\Phi}\right)^{1/n}\cdots \left(\frac{T_n ^n}{\Phi}\right)^{1/n} \Phi= \Phi \end{align*} with equality if and only if the K\"ahler metrics $T_j$ are proportional at $x$. Here the second equality follows because we have $T_j ^n =\Phi$ on $\Omega$. In particular, we get the Khovanskii-Teissier inequality \begin{align*} B_1 \cdot B_2 \cdots B_n \geq 1. \end{align*} Moreover, the equality $B_1 \cdot B_2 \cdots B_n =1$ holds if and only if the K\"ahler metrics $T_j$ are pointwise proportional. At this step, we cannot conclude that the K\"ahler metrics $T_j$ are equal over $\Omega$, since we cannot control the proportionality constants from the pointwise Brunn-Minkowski inequalities. However, for any pair of $T_i$ and $T_j$, we have the following pointwise equality over $\Omega$: \begin{align*} T_i ^{n-1} \wedge T_j = \left(\frac{T_i ^n}{\Phi}\right)^{(n-1)/n}\cdot \left(\frac{T_j ^n}{\Phi}\right)^{1/n} \Phi, \end{align*} since $T_i$ and $T_j$ are pointwise proportional over $\Omega$. This implies the equality \begin{align*} B_i ^{n-1} \cdot B_j = 1. \end{align*} Then by the pointwise estimates of \cite{FX14}, we know the currents $T_i$ and $T_j$ must be equal over $X$, which implies $B_i = B_j$. In conclusion, we get that $B_1 \cdot B_2 \cdots B_n = 1$ if and only if the $B_j$ are equal. \end{proof} \subsection{Complete intersection cone} Since the complete intersection cone plays an important role in the paper, we quickly outline its basic properties. Recall that $\CI_{1}(X)$ is the closure of the set of all curve classes of the form $A^{n-1}$ for an ample divisor $A$. It naturally has the structure of a closed pointed cone. \begin{prop} \label{prop boundary CI} Let $X$ be a projective variety of dimension $n$. Suppose that $\alpha \in \CI_{1}(X)$ lies on the boundary of the cone.
Then either \begin{enumerate} \item $\alpha = B^{n-1}$ for some big and nef divisor class $B$, or \item $\alpha$ lies on the boundary of $\Eff_{1}(X)$. \end{enumerate} \end{prop} \begin{proof} We fix an ample divisor class $K$. Since $\alpha \in \CI_{1}(X)$ is a boundary point of the cone, we can write $\alpha$ as the limit of classes $A_{i}^{n-1}$ for some sequence of ample divisor classes $A_{i}$. First suppose that the values of $A_{i} \cdot K^{n-1}$ are bounded above as $i$ varies. Then the classes of the divisors $A_{i}$ vary in a compact set, so they have some nef accumulation point $B$. Clearly $\alpha = B^{n-1}$. Furthermore, if $B$ is not big then $\alpha$ will lie on the boundary of $\Eff_{1}(X)$ since in this case $B^{n-1} \cdot B = 0$. If $B$ is big, then it is not ample: since the map $A \mapsto A^{n-1}$ from the ample cone of divisors to $N_{1}(X)$ is locally surjective, an ample $B$ would force $\alpha$ to lie in the interior of $\CI_{1}(X)$. Thus in this case $B$ is big and nef. Now suppose that the values of $A_{i} \cdot K^{n-1}$ do not have any upper bound. Since the $A_{i}^{n-1}$ limit to $\alpha$, for $i$ sufficiently large we have \begin{equation*} 2(\alpha \cdot K) > A_{i}^{n-1} \cdot K \geq \vol(A_{i})^{(n-1)/n} \vol(K)^{1/n} \end{equation*} by the Khovanskii-Teissier inequality. In particular this shows that $\vol(A_{i})$ admits an upper bound as $i$ varies. Note that the classes $A_{i}/(K^{n-1} \cdot A_{i})$ vary in a compact slice of the nef cone of divisors. Without loss of generality, we can assume they limit to a nef divisor class $B$. Then we have \begin{align*} B \cdot \alpha & = \lim_{i \to \infty} \frac{A_{i}}{K^{n-1} \cdot A_{i}} \cdot A_{i}^{n-1} \\ & = \lim_{i \to \infty} \frac{\vol(A_{i})}{K^{n-1} \cdot A_{i}} \\ & = 0. \end{align*} The last equality holds because $\vol(A_{i})$ is bounded above but $A_{i} \cdot K^{n-1}$ is not. So in this case $\alpha$ must lie on the boundary of the pseudo-effective cone $\Eff_1(X)$.
\end{proof} The complete intersection cone differs from most cones considered in birational geometry in that it is \emph{not} convex. Since we are not aware of any such example in the literature, we give a toric example from \cite{fs09} in the appendix. The same example shows that the cone that is the closure of all products of $(n-1)$ ample divisors is also not convex. \begin{rmk} \label{rmk CI} It is still true that $\CI_{1}(X)$ is ``locally convex''. Let $A, B$ be two ample divisor classes. If $\epsilon$ is sufficiently small, then $$A^{n-1}+\epsilon B^{n-1}=A_\epsilon ^{n-1}$$ for a unique ample divisor $A_\epsilon$. The existence of $A_\epsilon$ follows from the Hard Lefschetz theorem. Consider the following smooth map \begin{align*} \Phi: N^1(X) \rightarrow N_1(X) \end{align*} sending $D$ to $D^{n-1}$. By the Hard Lefschetz theorem, the derivative $d\Phi$ is an isomorphism at the point $A$. Thus $\Phi$ is a local diffeomorphism near $A$, yielding the existence of $A_\epsilon$. The uniqueness follows from Teissier's proportionality theorem. (See \cite{GT13} for a more in-depth discussion.) \end{rmk} Another natural question is: \begin{ques} Suppose that $X$ is a projective variety of dimension $n$ and that $\{ A_{i} \}_{i=1}^{n-1}$ are ample divisor classes on $X$. Is $A_{1} \cdot \ldots \cdot A_{n-1} \in \CI_{1}(X)$? \end{ques} One can imagine that such a statement may be studied using an ``averaging'' method. We hope Theorem \ref{thm KT inequality} will be helpful in the study of this problem. \subsection{Fields of characteristic $p$} \label{charp background sec} Almost all the results in the paper will hold for smooth varieties over an arbitrary algebraically closed field. The necessary technical generalizations are verified in the following references: \begin{itemize} \item \cite[Remark 1.6.5]{lazarsfeld04} checks that the Khovanskii-Teissier inequalities hold over an arbitrary algebraically closed field.
\item The existence of Fujita approximations over an arbitrary algebraically closed field is proved in \cite{takagi07}. \item The basic properties of the $\sigma$-decomposition in positive characteristic are considered in \cite{mustata11}. \item The results of \cite{Cut13} lay the foundations of the theory of positive products and volumes over an arbitrary field. \item \cite{fl14} describes how the above results can be used to extend \cite{BDPP13} and most of the results of \cite{bfj09} over an arbitrary algebraically closed field. In particular the description of the derivative of the volume function in \cite[Theorem A]{bfj09} holds for smooth varieties in any characteristic. \end{itemize} \subsection{Compact K\"ahler manifolds} \label{kahler background sec} The following results enable us to extend most of our results to arbitrary compact K\"ahler manifolds. \begin{itemize} \item The Khovanskii-Teissier inequalities for classes in the nef cone $\overline{\mathcal{K}}$ can be proved by the mixed Hodge-Riemann bilinear relations \cite{DN06}, or by solving complex Monge-Amp\`{e}re equations \cite{Dem93}; see also Theorem \ref{thm KT inequality}. \item Teissier's proportionality theorem for transcendental big and nef classes has recently been proved by \cite{FX14}; see also Theorem \ref{thm KT inequality}. \item The theory of positive intersection products for pseudo-effective $(1,1)$-classes has been developed by \cite{Bou02, BDPP13, BEGZ10MAbig}. \item The cone duality $\overline{\mathcal{K}}^* =\mathcal{N}$ follows from the numerical characterization of the K\"ahler cone of \cite{DP04}. \end{itemize} We remark that we need the cone duality $\overline{\mathcal{K}}^* =\mathcal{N}$ to extend the Zariski decompositions and Morse-type inequality for curves to positive currents of bidimension $(1,1)$. 
Comparing with the projective situation, the main ingredient missing is Demailly's conjecture on the transcendental holomorphic Morse inequality, which is in turn implied by the expected identification of the derivative of the volume function on pseudo-effective $(1,1)$-classes as in \cite{bfj09}. Indeed, it is not hard to see these two expected results are equivalent (see e.g. \cite[Proposition 1.1]{xiao2014movable} -- which is essentially \cite[Section 3.2]{bfj09}). And they would imply the duality of the cones $\mathcal{M}(X)$ and $\mathcal{E}(X)$. Thus, any of our results which relies on either the transcendental holomorphic Morse inequality, or the results of \cite{bfj09}, is still conjectural in the K\"ahler setting. However, these conjectures are known if $X$ is a compact hyperk\"ahler manifold (see \cite[Theorem 10.12]{BDPP13}), so all of our results extend to compact hyperk\"ahler manifolds. \section{Polar transforms} \label{legendresection} As explained in the introduction, Zariski decompositions capture the failure of the volume function to be strictly log concave. In this section and the next, we use some basic convex analysis to define a formal Zariski decomposition which makes sense for any non-negative homogeneous log concave function on a cone. The main tool is a Legendre-Fenchel type transform for such functions. \subsection{Duality transforms} Let $V$ be a finite-dimensional $\mathbb{R}$-vector space of dimension $n$, and let $V^{*}$ be its dual. We denote the pairing of $w^* \in V^*$ and $v \in V$ by $w^* \cdot v$. Let $\Cvx(V)$ denote the class of lower-semicontinuous convex functions on $V$. Then \cite[Theorem 1]{milman09legendre} shows that, up to composition with an additive linear function and a symmetric linear transformation, the Legendre-Fenchel transform is the unique order-reversing involution $\mathcal{L}: \Cvx(V) \to \Cvx(V^{*})$. 
Motivated by this result, the authors define a duality transform to be an order-reversing involution of this type and characterize the duality transforms in many other contexts (see e.g. \cite{milman11hiddenduality}, \cite{milman08sconcave}). In this section we study a duality transform for the set of non-negative homogeneous functions on a cone. This transform is the concave homogeneous version of the well-known polar transform; see \cite[Chapter 15]{rockafellar70convexBOOK} for the basic properties of this transform in a related context. This transform is also a special case of the generalized Legendre-Fenchel transform studied by \cite[Section 14]{Moreau1966-1967}, which is the usual Legendre-Fenchel transform with a ``coupling function'' -- we would like to thank M. Jonsson for pointing this out to us. See also \cite[Section 0.6]{singerconvexBOOK} and \cite[Chapter 1]{rubinov00} for a brief introduction to this perspective. Finally, it is essentially the same as the transform $\mathcal{A}$ from \cite{milman11hiddenduality} when applied to homogeneous functions, and is closely related to other constructions of \cite{milman08sconcave}. \cite[Chapter 2]{rubinov00} and \cite{dr02} work in a different setting which nonetheless has some nice parallels with our situation. Let $\mathcal{C}\subset V$ be a proper closed convex cone of full dimension and let $\mathcal{C}^* \subset V^*$ denote the dual cone of $\mathcal{C}$, that is, \begin{align*} \mathcal{C}^* =\{w^* \in V^*|\ w^* \cdot v \geq 0 \ \textrm{for any}\ v\in \mathcal{C}\}. \end{align*} We let $\HConc_s (\mathcal{C})$ denote the collection of functions $f: \mathcal{C} \to \mathbb{R}$ satisfying: \begin{itemize} \item $f$ is upper-semicontinuous and homogeneous of weight $s>1$; \item $f$ is strictly positive in the interior of $\mathcal{C}$ (and hence non-negative on $\mathcal{C}$); \item $f$ is $s$-concave: for any $v,x \in \mathcal{C}$ we have $f(v)^{1/s} + f(x)^{1/s} \leq f(v+x)^{1/s}$. 
\end{itemize} Note that since $f^{1/s}$ is homogeneous of degree $1$, the definition of concavity for $f^{1/s}$ above coheres with the usual one. For any $f\in \HConc_s (\mathcal{C})$, the function $f^{1/s}$ can be extended to a proper upper-semicontinuous concave function over $V$ by letting $f^{1/s}(v)=-\infty$ whenever $v\notin \mathcal{C}$. Thus many tools developed for arbitrary concave functions on $V$ also apply in our case. Since an upper-semicontinuous function is continuous along decreasing sequences, the following continuity property of $f$ follows immediately from the non-negativity and concavity of $f^{1/s}$. \begin{lem} \label{uppersemilimit} Let $f \in \HConc_{s}(\mathcal{C})$ and $v \in \mathcal{C}$. For any element $x \in \mathcal{C}$ we have $$f(v) = \lim_{t \to 0^{+}} f(v + tx).$$ \end{lem} In particular, any $f \in \HConc_{s}(\mathcal{C})$ must vanish at the origin. \\ In this section we outline the basic properties of the polar transform $\mathcal{H}$ (following a suggestion of M.~Jonsson). In contrast to abstract convex transforms, $\mathcal{H}$ retains all of the properties of the classical Legendre-Fenchel transform. Since the proofs are essentially the same as in the theory of classical convex analysis, we omit most of the proofs in this section. Recall that the polar transform $\mathcal{H}$ associates to a function $f \in \HConc_{s}(\mathcal{C})$ the function $\mathcal{H}f: \mathcal{C}^{*} \to \mathbb{R}$ defined as \begin{align*} \mathcal{H} f (w^*):= \inf_{v\in \mathcal{C}^{\circ}} \left(\frac{w^* \cdot v}{f(v)^{1/s}}\right)^{s/(s-1)}. \end{align*} By Lemma \ref{uppersemilimit} the definition is unchanged if we instead vary $v$ over all elements of $\mathcal{C}$ where $f$ is positive. The following proposition shows that $\mathcal{H}$ defines an order-reversing involution from $\HConc_{s}(\mathcal{C})$ to $\HConc_{s/(s-1)}(\mathcal{C}^{*})$. Its proof is similar to the classical result in convex analysis, see e.g.
\cite[Theorem 15.1]{rockafellar70convexBOOK}. \begin{prop} \label{prop H involution} Let $f,g \in \HConc_s(\mathcal{C})$. Then we have \begin{enumerate} \item $\mathcal{H} f \in \HConc_{s/(s-1)}(\mathcal{C}^{*})$. \item If $f \leq g$ then $\mathcal{H} f \geq \mathcal{H} g$. \item $\mathcal{H} ^2 f = f$. \end{enumerate} \end{prop} It will be crucial to understand which points attain the infimum in the definition of $\mathcal{H}f$. \begin{defn} Let $f \in \HConc_{s}(\mathcal{C})$. For any $w^* \in \mathcal{C}^{*}$, we define $G_{w^*}$ to be the set of all $v \in \mathcal{C}$ which satisfy $f(v)>0$ and which achieve the infimum in the definition of $\mathcal{H}f(w^*)$, so that \begin{align*} \mathcal{H} f (w^*) = \left( \frac{w^* \cdot v}{f(v)^{1/s}} \right)^{s/(s-1)}. \end{align*} \end{defn} \begin{rmk} The set $G_{w^*}$ is the analogue of the set of supergradients of a concave function. In particular, in the following sections we will see that if $\mathcal{H}f$ is differentiable then the differential of $\mathcal{H}f$ at $w^*$ lies in $G_{w^*}$. \end{rmk} It is easy to see that $G_{w^*} \cup \{ 0 \}$ is a convex subcone of $\mathcal{C}$. Note the symmetry in the definition: if $v \in G_{w^{*}}$ and $\mathcal{H}f(w^{*})>0$ then $w^* \in G_{v}$. Thus if $v \in \mathcal{C}$ and $w^*\in \mathcal{C}^*$ satisfy $f(v)>0$ and $\mathcal{H}f(w^{*})>0$, then the conditions $v \in G_{w^{*}}$ and $w^{*} \in G_v$ are equivalent. The analogue of the Young-Fenchel inequality in our situation is: \begin{prop} \label{younginequality} Let $f \in \HConc_{s}(\mathcal{C})$. Then for any $v \in \mathcal{C}$ and $w^* \in \mathcal{C}^*$ we have \begin{equation*} \mathcal{H}f(w^*)^{(s-1)/s} f(v)^{1/s} \leq v \cdot w^*. \end{equation*} Furthermore, equality is attained only if either $v \in G_{w^*}$ and $w^{*} \in G_v$, or at least one of $\mathcal{H}f(w^{*})$ and $f(v)$ vanishes.
\end{prop} The next theorem describes the basic properties of $G_{v}$: \begin{thrm} \label{thm abstract zariski} Let $f \in \HConc_{s}(\mathcal{C})$. \begin{enumerate} \item Fix $v \in \mathcal{C}$. Let $\{w_{i}^{*}\}$ be a sequence of elements of $\mathcal{C}^{*}$ with $\mathcal{H}f(w_{i}^{*}) = 1$ such that $$f(v) = \lim_{i} (v \cdot w_{i}^{*})^{s} >0.$$ Suppose that the sequence admits an accumulation point $w^*$. Then $f(v) = (v \cdot w^*)^s$ and $\mathcal{H}f(w^*)=1$. \item For every $v \in \mathcal{C}^{\circ}$ we have that $G_{v}$ is non-empty. \item Fix $v \in \mathcal{C}^{\circ}$. Let $\{ v_{i} \}$ be a sequence of elements of $\mathcal{C}^{\circ}$ whose limit is $v$ and for each $v_{i}$ choose $w_{i}^{*} \in G_{v_{i}}$ with $\mathcal{H}f(w_{i}^{*}) = 1$. Then the $w_{i}^{*}$ admit an accumulation point $w^*$, and any accumulation point lies in $G_{v}$ and satisfies $\mathcal{H}f(w^*)=1$. \end{enumerate} \end{thrm} \begin{proof} (1) The limiting statement for $f(v)$ is clear. We have $\mathcal{H}f(w^*) \geq 1$ by upper semicontinuity, so that \begin{equation*} f(v)^{1/s} = \lim_{i \to \infty} v \cdot w_{i}^* \geq \frac{v \cdot w^*}{\mathcal{H}f(w^*)^{(s-1)/s}} \geq f(v)^{1/s}. \end{equation*} Thus we have equality everywhere. If $\mathcal{H}f(w^{*})^{(s-1)/s} > 1$ then we obtain a strict inequality in the middle, a contradiction. (2) Let $w_i^*$ be a sequence of points in $\mathcal{C}^{* \circ}$ with $\mathcal{H}f(w_i^*)=1$ such that $f(v) = \lim_{i \to \infty} (w_i^* \cdot v)^{s}$. By (1) it suffices to see that the $w_{i}^{*}$ vary in a compact set. But since $v$ is an interior point, the set of $w^{*} \in \mathcal{C}^{*}$ whose pairing with $v$ is at most $2f(v)^{1/s}$ is bounded. (3) By (1) it suffices to show that the $w_{i}^*$ vary in a compact set. For sufficiently large $i$ we have that $2v_{i} - v \in \mathcal{C}$. By the log concavity of $f$ on $\mathcal{C}$ we see that $f$ must be continuous at $v$.
Thus for any fixed $\epsilon > 0$, we have for sufficiently large $i$ \begin{equation*} w_{i}^{*} \cdot v \leq 2 w_{i}^{*} \cdot v_{i} \leq 2(1+\epsilon) f(v)^{1/s}. \end{equation*} Since $v$ lies in the interior of $\mathcal{C}$, this implies that the $w_{i}^{*}$ must lie in a bounded set. \end{proof} We next identify the collection of points where $f$ is controlled by $\mathcal{H}$. \begin{defn} Let $f \in \HConc_{s}(\mathcal{C})$. We define $\mathcal{C}_{f}$ to be the set of all $v \in \mathcal{C}$ such that $v \in G_{w^*}$ for some $w^{*} \in \mathcal{C}^{*}$ satisfying $\mathcal{H}f(w^*)>0$. \end{defn} Since $v\in G_{w^*}$ and $\mathcal{H}f(w^*)>0$, Proposition \ref{younginequality} and the symmetry of $G$ show that $w^{*} \in G_{v}$. Furthermore, we have $\mathcal{C}^{\circ} \subset \mathcal{C}_{f}$ by Theorem \ref{thm abstract zariski} and the symmetry of $G$. \subsection{Differentiability} \begin{defn} We say that $f \in \HConc_{s}(\mathcal{C})$ is differentiable if it is $\mathcal{C}^{1}$ on $\mathcal{C}^{\circ}$. In this case we define the function \begin{align*} D: \mathcal{C}^{\circ} \to V^{*} \qquad \qquad \textrm{by} \qquad \qquad v \mapsto \frac{Df(v)}{s}. \end{align*} \end{defn} The main properties of the derivative are: \begin{thrm} \label{derivativeandbm} Suppose that $f \in \HConc_{s}(\mathcal{C})$ is differentiable. Then \begin{enumerate} \item $D$ defines an $(s-1)$-homogeneous function from $\mathcal{C}^{\circ}$ to $\mathcal{C}^{*}_{\mathcal{H}f}$. \item $D$ satisfies a Brunn-Minkowski inequality with respect to $f$: for any $v \in \mathcal{C}^{\circ}$ and $x \in \mathcal{C}$ \begin{equation*} D(v) \cdot x \geq f(v)^{(s-1)/s} f(x)^{1/s}. \end{equation*} Moreover, we have $D(v) \cdot v = f(v) = \mathcal{H}f(D(v))$.
\end{enumerate} \end{thrm} We will need the following familiar criterion for the differentiability of $f$, which is an analogue of related results in convex analysis connecting differentiability with the uniqueness of the supergradient (see e.g. \cite[Theorem 25.1]{rockafellar70convexBOOK}). \begin{prop} \label{diffuniqueness} Let $f \in \HConc_{s}(\mathcal{C})$. Let $U\subset \mathcal{C}^{\circ}$ be an open set. Then $f|_{U}$ is differentiable if and only if for every $v \in U$ the set $G_{v} \cup \{0\}$ consists of a single ray. In this case $D(v)$ is defined by intersecting against the unique element $w^{*} \in G_{v}$ satisfying $\mathcal{H}f(w^{*})=f(v)$. \end{prop} We next discuss the behaviour of the derivative along the boundary. \begin{defn} We say that $f \in \HConc_{s}(\mathcal{C})$ is $+$-differentiable if $f$ is $\mathcal{C}^{1}$ on $\mathcal{C}^{\circ}$ and the derivative on $\mathcal{C}^{\circ}$ extends to a continuous function on all of $\mathcal{C}_{f}$. \end{defn} It is easy to see that $+$-differentiability implies continuity. \begin{lem}\label{lem +c1 implies continuity} If $f \in \HConc_{s}(\mathcal{C})$ is $+$-differentiable then $f$ is continuous on $\mathcal{C}_f$. \end{lem} \begin{rmk} \label{extensiontoboundaryrmk} For $+$-differentiable functions $f$, we define the function $D: \mathcal{C}_{f} \to V^{*}$ by extending continuously from $\mathcal{C}^{\circ}$. Many of the properties in Theorem \ref{derivativeandbm} hold for $D$ on all of $\mathcal{C}_f$. By taking limits and applying Lemma \ref{uppersemilimit} we obtain the Brunn-Minkowski inequality. In particular, for any $x\in \mathcal{C}_f$ we still have $$D(x) \cdot x = f(x)=\mathcal{H}f(D(x)).$$ Thus it is clear that $D(x)\in \mathcal{C}_{\mathcal{H}f}^{*}$ for any $x\in \mathcal{C}_f$. \end{rmk} \begin{lem}\label{lemma extention derivative} Assume $f \in \HConc_{s}(\mathcal{C})$ is $+$-differentiable.
For any $x\in \mathcal{C}_f$ and $y\in \mathcal{C}^\circ$, we have \begin{align*} \left. \frac{d}{dt}\right|_{t=0^+} f(x+ty)^{1/s}=(D(x)\cdot y) f(x)^{(1-s)/s}. \end{align*} \end{lem} We next analyze what we can deduce about $f$ in a neighborhood of $v \in \mathcal{C}_{f}$ from the fact that $G_{v} \cup \{ 0 \}$ consists of a single ray. \begin{lem} \label{lem compactness} Let $f \in \HConc_{s}(\mathcal{C})$. Let $v \in \mathcal{C}_{f}$ and assume that $G_{v} \cup \{0\}$ consists of a single ray. Suppose $\{v_i\}$ is a sequence of elements of $\mathcal{C}_{f}$ converging to $v$. Let $w^*_i \in G_{v_i}$ be any point satisfying $\mathcal{H}f(w^*_i)=1$. Then the $w^*_{i}$ vary in a compact set. Any accumulation point $w^*$ must be the unique point in $G_{v}$ satisfying $\mathcal{H}f(w^*)=1$. \end{lem} \begin{proof} By Theorem \ref{thm abstract zariski} it suffices to prove that the $w^*_i$ vary in a compact set. Otherwise, we must have that $w^*_i \cdot m$ is unbounded for some interior point $m \in \mathcal{C}^\circ$. By passing to a subsequence we may suppose that $w^*_{i} \cdot m \to \infty$. Consider the normalization $$\widehat{w}_i^* := \frac{w^*_i}{w^*_i \cdot m};$$ note that the $\widehat{w}_i^*$ vary in a compact set. Take some convergent subsequence, which we still denote by $\widehat{w}_i^*$, and write $\widehat{w}_i^* \rightarrow \widehat{w}_0 ^*$. Since $\widehat{w}_0 ^*\cdot m =1$ we see that $\widehat{w}_0 ^* \neq 0$. Since $v \in \mathcal{C}_{f}$, we may let $w^{*}$ denote the unique element of $G_{v}$ satisfying $\mathcal{H}f(w^{*})=1$. We first prove $v \cdot \widehat{w}_0 ^* >0$. Otherwise, $v \cdot \widehat{w}_0 ^* =0$ implies \begin{align*} \frac{v \cdot (w^*+ \widehat{w}_0 ^*)}{\mathcal{H}f(w^*+ \widehat{w}_0 ^*)^{(s-1)/s}} \leq \frac{v \cdot w^*}{\mathcal{H}f(w^*)^{(s-1)/s}} =f(v)^{1/s}. \end{align*} By our assumption on $G_{v}$, we get that $w^*+ \widehat{w}_0 ^*$ and $w^*$ are proportional, which implies $\widehat{w}_0 ^*$ lies in the ray spanned by $w^*$. Since $\widehat{w}_0 ^* \neq 0$ and $v \cdot w^* >0$, we get that $v \cdot \widehat{w}_0 ^* >0$.
So our assumption $v \cdot \widehat{w}_0 ^* =0$ does not hold. On the other hand, $\mathcal{H}f(w_i ^*)=1$ implies $$\mathcal{H}f(\widehat{w}_i ^*)^{(s-1)/s} =\frac{1}{m \cdot w_i^*}\rightarrow 0 .$$ By the upper-semicontinuity of $f$ and the fact that $\lim v_i \cdot \widehat{w}_i ^* =v \cdot \widehat{w}_0 ^* >0$, we get \begin{align*} f(v)^{1/s}&\geq \limsup_{i \to \infty} f(v_i)^{1/s}\\ & =\limsup_{i \to \infty} \frac{v_i \cdot \widehat{w}_i^*}{\mathcal{H}f(\widehat{w}_i ^*)^{(s-1)/s}}=\infty. \end{align*} This is a contradiction, thus the sequence $w^* _i$ must vary in a compact set. \end{proof} \begin{thrm} \label{seconddiffuniqueness} Let $f \in \HConc_{s}(\mathcal{C})$. Suppose that $U\subset \mathcal{C}_f$ is a relatively open set and $G_{v} \cup \{0\}$ consists of a single ray for any $v \in U$. If $f$ is continuous on $U$ then $f$ is $+$-differentiable on $U$. In this case $D(v)$ is defined by intersecting against the unique element $w^{*} \in G_{v}$ satisfying $\mathcal{H}f(w^{*})=f(v)$. \end{thrm} Even if $f$ is not continuous, a similar statement holds along the directions in which $f$ is continuous (for example, along any direction toward the interior of the cone). \begin{proof} Proposition \ref{diffuniqueness} shows that $f$ is differentiable on $U \cap \mathcal{C}^{\circ}$ and is determined by intersections. By combining Lemma \ref{lem compactness} with the continuity of $f$, we see that the derivative extends continuously to any point in $U$. \end{proof} \begin{rmk}\label{rmk not a single ray} Assume $f\in \HConc_s (\mathcal{C})$ is $+$-differentiable. In general, we cannot conclude that $G_{x}\cup \{0\}$ consists of a single ray if $x\in \mathcal{C}_f$ is not an interior point. An explicit example appears in Section \ref{section zariski}. Let $X$ be a smooth projective variety of dimension $n$, let $\mathcal{C}=\Nef^1 (X)$ be the cone of nef divisor classes and let $f=\vol$ be the volume function of divisors.
Let $B$ be a big and nef divisor class which is not ample. Then $G_B$ contains the cone generated by all $B^{n-1}+ \gamma$ with $\gamma$ pseudo-effective and $B\cdot \gamma=0$, which in general is more than a ray. \end{rmk} \section{Formal Zariski decompositions} \label{formal zariski section} The Legendre-Fenchel transform relates the strict concavity of a function to the differentiability of its transform. The transform $\mathcal{H}$ will play the same role in our situation; however, one needs to interpret the strict concavity slightly differently. We will encapsulate this property using the notion of a Zariski decomposition. \begin{defn}\label{definition formal zariski} Let $f \in \HConc_{s}(\mathcal{C})$ and let $U \subset \mathcal{C}$ be a non-empty subcone. We say that $f$ admits a strong Zariski decomposition with respect to $U$ if: \begin{enumerate} \item For every $v \in \mathcal{C}_{f}$ there are unique elements $p_{v} \in U$ and $n_{v} \in \mathcal{C}$ satisfying \begin{equation*} v = p_{v} + n_{v} \qquad \qquad \textrm{and} \qquad \qquad f(v) = f(p_{v}). \end{equation*} We call the expression $v = p_v + n_v$ the Zariski decomposition of $v$, and call $p_v$ the positive part and $n_v$ the negative part of $v$. \item For any $v,w \in \mathcal{C}_f$ satisfying $v+w \in \mathcal{C}_f$ we have \begin{equation*} f(v)^{1/s} + f(w)^{1/s} \leq f(v+w)^{1/s} \end{equation*} with equality only if $p_{v}$ and $p_{w}$ are proportional. \end{enumerate} \end{defn} \begin{rmk} Note that the vector $n_{v}$ must satisfy $f(n_{v})=0$ by the non-negativity and log-concavity of $f$. In particular $n_{v}$ lies on the boundary of $\mathcal{C}$. Furthermore, any $w^{*} \in G_{v}$ is also in $G_{p_{v}}$ and must satisfy $w^{*} \cdot n_{v} = 0$. Note also that the proportionality of $p_{v}$ and $p_{w}$ may not be enough to conclude that $f(v)^{1/s} + f(w)^{1/s} = f(v+w)^{1/s}$. This additional property turns out to rely on the strict log concavity of $\mathcal{H}f$. 
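As a minimal toy illustration of the definition (an example of our own, chosen for transparency rather than geometric content), take $V = \mathbb{R}^{2}$, let $\mathcal{C} = \mathcal{C}^{*}$ be the closed positive quadrant, and set $f(v_{1},v_{2}) = v_{1}^{2}$, so that $f \in \HConc_{2}(\mathcal{C})$ and
\begin{align*}
\mathcal{H}f(w^{*}) = \inf_{v\in \mathcal{C}^{\circ}} \left( \frac{w_{1}v_{1} + w_{2}v_{2}}{v_{1}} \right)^{2} = w_{1}^{2}.
\end{align*}
Every $w^{*}$ with $w_{1}>0$ then decomposes uniquely as $w^{*} = (w_{1},0) + (0,w_{2})$ with $\mathcal{H}f(w^{*}) = \mathcal{H}f\left((w_{1},0)\right)$, so $\mathcal{H}f$ admits a strong Zariski decomposition with respect to the ray spanned by $(1,0)$; the negative parts $(0,w_{2})$ pair to zero with every element of $G_{w^{*}}$, as predicted above.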
\end{rmk} The main principle of the section is that when $f$ satisfies a differentiability property, $\mathcal{H}f$ admits some kind of Zariski decomposition. Usually the converse is false, due to the asymmetry of $G$ when $f$ or $\mathcal{H}f$ vanishes. However, the existence of a Zariski decomposition is usually strong enough to determine the differentiability of $f$ along some subcone. We will give a version that takes into account the behavior of $f$ along the boundary of $\mathcal{C}$. \begin{thrm} \label{strong zariski equivalence} Let $f \in \HConc_{s}(\mathcal{C})$. Then we have the following results: \begin{itemize} \item If $f$ is $+$-differentiable, then $\mathcal{H}f$ admits a strong Zariski decomposition with respect to the cone $D(\mathcal{C}_{f}) \cup \{0\}$. \item If $\mathcal{H}f$ admits a strong Zariski decomposition with respect to a cone $U$, then $f$ is differentiable. \end{itemize} \end{thrm} \begin{proof} First suppose $f$ is $+$-differentiable; we must prove that the function $\mathcal{H}f$ satisfies properties $(1), (2)$ in Definition \ref{definition formal zariski}. We first show the existence of the Zariski decomposition in property (1). If $w^* \in \mathcal{C}^{*}_{\mathcal{H}f}$ then by definition there is some $v \in \mathcal{C}$ satisfying $f(v)>0$ such that $w^{*} \in G_{v}$. In particular, by the symmetry of $G$ we also have $v\in G_{w^*}$, thus $v\in \mathcal{C}_f$. Since $f(v)>0$ we can define \begin{equation*} p_{w^*} := \left( \frac{\mathcal{H}f(w^*)}{f(v)} \right)^{(s-1)/s} \cdot D(v), \qquad \qquad n_{w^*} = w^* - p_{w^*}. \end{equation*} Then $p_{w^*} \in D(\mathcal{C}_f)$ and \begin{align*} \mathcal{H}f(p_{w^*}) & = \mathcal{H}f \left(\left( \frac{\mathcal{H}f(w^*)}{f(v)} \right)^{(s-1)/s} \cdot D(v) \right) \\ & = \frac{\mathcal{H}f(w^*)}{f(v)} \cdot \mathcal{H}f \left (D(v) \right) = \mathcal{H}f(w^*) \end{align*} where the second equality uses the $\frac{s}{s-1}$-homogeneity of $\mathcal{H}f$ and the final equality follows from Theorem \ref{derivativeandbm} and Remark \ref{extensiontoboundaryrmk}.
We next show that $n_{w^*} \in \mathcal{C}^*$. Choose any $x \in \mathcal{C}^\circ$ and note that for any $t>0$ we have the inequality \begin{equation*} \frac{v + tx}{f(v+tx)^{1/s}} \cdot w^* \geq \frac{v}{f(v)^{1/s}} \cdot w^* \end{equation*} with equality when $t=0$. By Lemma \ref{lemma extention derivative}, taking derivatives at $t=0$ we obtain \begin{equation*} \frac{x \cdot w^*}{f(v)^{1/s}} - \frac{(v \cdot w^*)(D(v) \cdot x)}{f(v)^{(s+1)/s}} \geq 0, \end{equation*} or equivalently, identifying $v \cdot w^*/f(v)^{1/s} = \mathcal{H}f(w^*)^{(s-1)/s}$, \begin{equation*} x \cdot \left(w^* -D(v) \cdot \frac{\mathcal{H}f(w^*)^{(s-1)/s}}{f(v)^{(s-1)/s}} \right) \geq 0. \end{equation*} Since this is true for any $x \in \mathcal{C}^\circ$, we see that $n_{w^*} \in \mathcal{C}^*$ as claimed. We next show that the $p_{w^*}$ constructed above is the unique element of $D(\mathcal{C}_{f})$ satisfying the two given properties. First, after some rescaling we can assume $\mathcal{H}f(w^*)=f(v)$, which then implies $w^*\cdot v=f(v)$. Suppose that $z \in \mathcal{C}_{f}$ and $D(z)$ is another vector satisfying $\mathcal{H}f(D(z)) = \mathcal{H}f(w^*)$ and $w^* - D(z) \in \mathcal{C}^{*}$. Note that by Remark \ref{extensiontoboundaryrmk}, $f(z) = \mathcal{H}f(D(z))= f(v)$. By Proposition \ref{younginequality} we have \begin{equation*} \mathcal{H}f(D(z))^{(s-1)/s} f(v)^{1/s} \leq D(z) \cdot v \leq w^{*} \cdot v = f(v) \end{equation*} so we obtain equality everywhere. In particular, we have $D(z)\cdot v=f(v)$. By Theorem \ref{derivativeandbm}, for any $x \in \mathcal{C}$ we have \begin{align*} D(z) \cdot x \geq f(z)^{(s-1)/s} f(x)^{1/s}. \end{align*} Set $x = v +\epsilon q$ where $\epsilon > 0$ and $q \in \mathcal{C}^{\circ}$. With this substitution, the two sides of the inequality above are equal at $\epsilon = 0$, so taking an $\epsilon$-derivative and arguing as before, we see that $D(z) - D(v) \in \mathcal{C}^{*}$. We claim that $D(z)=D(v)$.
First we note that $D(v)\cdot z= f(z)$. Indeed, since $f(z)=f(v)$ and $D(z) - D(v) \in \mathcal{C}^{*}$ we have \begin{align*} f(v)^{(s-1)/s} f(z)^{1/s}\leq D(v)\cdot z \leq D(z)\cdot z =f(z). \end{align*} Thus we have equality everywhere, proving the equality $D(v)\cdot z= f(z)$. Then we can apply the same argument as before with the roles of $v$ and $z$ switched. This shows $D(v) - D(z) \in \mathcal{C}^{*}$, so we must have $D(z)=D(v)$. We next turn to (2). The inequality is clear, so we only need to characterize the equality. Suppose $w^*, y^* \in \mathcal{C}_{\mathcal{H}f} ^{*}$ satisfy \begin{equation*} \mathcal{H}f(w^*)^{(s-1)/s} + \mathcal{H}f(y^*)^{(s-1)/s} = \mathcal{H}f(w^* + y^*)^{(s-1)/s} \end{equation*} and $w^* + y^* \in \mathcal{C}_{\mathcal{H}f} ^{*}$. We need to show they have proportional positive parts. By assumption $G_{w^*+y^*}$ is non-empty, so we may choose some $v \in G_{w^*+y^*}$. Then also $v \in G_{w^*}$ and $v \in G_{y^*}$. Note that by homogeneity $v$ is also in $G_{aw^{*}}$ and $G_{by^{*}}$ for any positive real numbers $a$ and $b$. Thus by rescaling $w^*$ and $y^*$, we may suppose that both have intersection $f(v)$ against $v$, so that $\mathcal{H}f(w^{*}) = \mathcal{H}f(y^{*}) = f(v)$. Then we need to verify that the positive parts of $w^*$ and $y^*$ are equal. But they both coincide with $D(v)$ by the argument in the proof of (1).\\ Conversely, suppose that $\mathcal{H}f$ admits a strong Zariski decomposition with respect to the cone $U$. We claim that $f$ is differentiable. By Proposition \ref{diffuniqueness} it suffices to show that $G_{v} \cup \{0\}$ is a single ray for any $v \in \mathcal{C}^{\circ}$. For any two elements $w^{*}, y^{*}$ in $G_{v}$ we have \begin{align*} \mathcal{H}f(w^*)^{(s-1)/s} + \mathcal{H}f(y^{*})^{(s-1)/s} = \frac{w^{*} \cdot v}{f(v)^{1/s}} + \frac{y^{*} \cdot v}{f(v)^{1/s}} \geq \mathcal{H}f(w^{*}+y^{*})^{(s-1)/s}.
\end{align*} Since $w^{*}$, $y^{*}$ and their sum are all in $\mathcal{C}^{*}_{\mathcal{H}f}$, we conclude by the strong Zariski decomposition condition that $w^{*}$ and $y^{*}$ have proportional positive parts. After rescaling so that $\mathcal{H}f(w^{*}) = f(v) = \mathcal{H}f(y^{*})$ we have $p_{w^{*}} = p_{y^{*}}$. Thus it suffices to prove $w^* =p_{w^*}$. Note that $\mathcal{H}f(w^{*})=\mathcal{H}f(p_{w^{*}})$ as $p_{w^*}$ is the positive part. If $w^* \neq p_{w^*}$, then $v\cdot w^* > v\cdot p_{w^*}$ since $v$ is an interior point. This implies \begin{align*} f(v)=\inf_{u^* \in \mathcal{C}^{*\circ}}\left(\frac{v\cdot u^*}{\mathcal{H}f(u^*)^{(s-1)/s}}\right)^s<\left(\frac{v\cdot w^*}{\mathcal{H}f(w^*)^{(s-1)/s}}\right)^s, \end{align*} contradicting $w^* \in G_v$. Thus $w^*=p_{w^*}$ and $G_{v} \cup \{0\}$ must be a single ray. \end{proof} \begin{rmk} It is worth emphasizing that if $f$ is $+$-differentiable and $w^{*} \in \mathcal{C}^{*}_{\mathcal{H}f}$, we can construct a positive part for $w^{*}$ by choosing \emph{any} $v \in G_{w^*}$ with $f(v) > 0$ and taking an appropriate rescaling of $D(v)$. \end{rmk} \begin{rmk} It would also be interesting to study some kind of weak Zariski decomposition. For example, one can define a weak Zariski decomposition as a decomposition $v=p_v + n_v$ only demanding $f(v)=f(p_v)$ and the strict log concavity of $f$ over the set of positive parts. Appropriately interpreted, the existence of a weak decomposition for $\mathcal{H}f$ should be a consequence of the differentiability of $f$. \end{rmk} Under some additional conditions, we can obtain the continuity of the Zariski decomposition. \begin{thrm}\label{thrm positive part continuity} Let $f \in \HConc_{s}(\mathcal{C})$ be $+$-differentiable. Then the function taking an element $w^{*} \in \mathcal{C}^{* \circ}$ to its positive part $p_{w^*}$ is continuous.
If furthermore $G_{v} \cup \{ 0 \}$ consists of a single ray for every $v \in \mathcal{C}_{f}$ and $\mathcal{H}f$ is continuous on all of $\mathcal{C}^{*}_{\mathcal{H}f}$, then the Zariski decomposition is continuous on all of $\mathcal{C}^{*}_{\mathcal{H}f}$. \end{thrm} \begin{proof} Fix any $w^{*} \in \mathcal{C}^{* \circ}$ and suppose that $w^*_i$ is a sequence whose limit is $w^*$. For each $i$ choose some $v_{i} \in G_{w^*_i}$ with $f(v_i) = 1$. By Theorem \ref{thm abstract zariski}, the $v_{i}$ admit an accumulation point $v \in G_{w^*}$ with $f(v) = 1$. By the symmetry of $G$, each $v_{i}$ and also $v$ lies in $\mathcal{C}_{f}$. Along the corresponding subsequence the $D(v_i)$ converge to $D(v)$ by the continuity of $D$. Recall that by the argument in the proof of Theorem \ref{strong zariski equivalence} we have $p_{w^*_i} = \mathcal{H}f(w_{i}^{*})^{(s-1)/s}D(v_{i})$ and similarly for $w^{*}$. Since $\mathcal{H}f$ is continuous at interior points, we see that the positive parts vary continuously as well. The last statement follows by a similar argument using Lemma \ref{lem compactness}. \end{proof} \begin{exmple} \label{bilinearexample} Suppose that $q$ is a bilinear form on $V$ and $f(v) = q(v,v)$. Let $\mathcal{P}$ denote one-half of the positive cone of vectors satisfying $f(v) \geq 0$. It is easy to see that $f$ is $2$-concave and non-trivial on $\mathcal{P}$ if and only if $q$ has signature $(1,\dim V-1)$. Identifying $V$ with $V^{*}$ under $q$, we have $\mathcal{P} = \mathcal{P}^{*}$ and $\mathcal{H}f = f$ by the usual Hodge inequality argument. Now suppose $\mathcal{C} \subset \mathcal{P}$. Then $\mathcal{C}^{*}$ contains $\mathcal{C}$. As discussed above, by the Hodge inequality $\mathcal{H}f|_{\mathcal{C}} = f$. Note that $f$ is everywhere differentiable and $D(v) = v$ for classes in $\mathcal{C}$.
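For a concrete coordinate instance of this Hodge inequality argument (a minimal sketch, with $q$ diagonal by our choice), take $V = \mathbb{R}^{2}$ and $q(v,v) = v_{1}^{2} - v_{2}^{2}$, so that $\mathcal{P} = \{v : v_{1} \geq |v_{2}|\}$. For $v, w \in \mathcal{P}^{\circ}$ the reverse Cauchy-Schwarz inequality $q(v,w)^{2} \geq q(v,v)\,q(w,w)$ holds, with equality precisely when $v$ and $w$ are proportional, and hence
\begin{align*}
\mathcal{H}f(w) = \inf_{v\in \mathcal{P}^{\circ}} \left( \frac{q(w,v)}{f(v)^{1/2}} \right)^{2} = f(w)
\end{align*}
for $w \in \mathcal{P}^{\circ}$, the infimum being attained along the ray spanned by $w$.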
Thus on $\mathcal{C}$ the polar transform $\mathcal{H}f$ agrees with $f$, but outside of $\mathcal{C}$ the function $\mathcal{H}f$ is controlled by a Zariski decomposition involving a projection to $\mathcal{C}$. This is of course just the familiar picture for curves on a surface, identifying $f$ with the self-intersection on the nef cone and $\mathcal{H}f$ with the volume on the pseudo-effective cone. More precisely, for big curve classes the decomposition constructed in this way is the numerical version of Zariski's original construction. Along the boundary of $\mathcal{C}^{*}$, the function $\mathcal{H}f$ vanishes identically, so that Theorem \ref{strong zariski equivalence} does not apply. The linear algebra arguments of \cite{zariski62}, \cite{bauer09} give a way of explicitly constructing the vector computing the minimal intersection as above. \end{exmple} \begin{exmple} Fix a spanning set of unit vectors $\mathcal{Q}$ in $\mathbb{R}^n$. Recall that the polytopes whose unit facet normals form a subset of $\mathcal{Q}$ naturally define a cone $\mathcal{C}$ in a finite dimensional vector space $V$ which parametrizes the constant terms of the bounding hyperplanes. Given a complete fan $\Sigma$ whose rays are spanned by elements of $\mathcal{Q}$, one can also consider the cone $\mathcal{C}_{\Sigma}$, the closure of the cone of polytopes whose normal fan is $\Sigma$. The volume function $\vol$ defines a weight-$n$ homogeneous function on $\mathcal{C}$ and (via restriction) $\vol_{\Sigma}$ on $\mathcal{C}_{\Sigma}$, and it is interesting to ask for the behavior of the polar transforms. (Note that this is somewhat different from the link between polar sets and polar functions, which is described for example in \cite{milman11hiddenduality}.) The dual space $V^{*}$ consists of the Minkowski weights on $\mathcal{Q}$. We will focus on the subcone $\mathcal{M}$ of strictly positive Minkowski weights, which is contained in the dual of both cones.
By Minkowski's theorem, a strictly positive Minkowski weight naturally determines a polytope in $\mathcal{C}$, so we can identify $\mathcal{M}$ with the interior of $\mathcal{C}$. As explained in Section \ref{toric section}, the Brunn-Minkowski inequality shows that $\mathcal{H}\vol|_{\mathcal{M}}$ coincides with the volume function on $\mathcal{M}$. However, calculating $\mathcal{H}\vol_{\Sigma}|_{\mathcal{M}}$ is more subtle. It would be very interesting to extend this duality to all convex sets, perhaps by working on an infinite dimensional space. \end{exmple} \subsection{Teissier proportionality} In this section, we give some conditions which are equivalent to the strict log concavity. The prototype is the volume function of divisors over the cone of big and movable divisor classes. \begin{defn} Let $f \in \HConc_{s}(\mathcal{C})$ be $+$-differentiable and let $\mathcal{C}_{T}$ be a non-empty subcone of $\mathcal{C}_{f}$. We say that $f$ satisfies Teissier proportionality with respect to $\mathcal{C}_{T}$ if for any $v,x \in \mathcal{C}_{T}$ satisfying \begin{equation*} D(v) \cdot x = f(v)^{(s-1)/s} f(x)^{1/s} \end{equation*} we have that $v$ and $x$ are proportional. \end{defn} Note that we do not assume that $\mathcal{C}_{T}$ is convex -- indeed, in examples it is important to avoid this condition. However, since $f$ is defined on the convex hull of $\mathcal{C}_{T}$, we can (somewhat abusively) discuss the strict log concavity of $f|_{\mathcal{C}_{T}}$: \begin{defn} \label{defn generalized log concave} Let $\mathcal{C}' \subset \mathcal{C}$ be a (possibly non-convex) subcone. We say that $f$ is strictly log concave on $\mathcal{C}'$ if \begin{align*} f(v)^{1/s} + f(x)^{1/s}< f(v+x)^{1/s} \end{align*} holds whenever $v, x\in \mathcal{C}'$ are not proportional. Note that this definition makes sense even when $\mathcal{C}'$ is not itself convex. \end{defn} \begin{thrm} \label{thrm teissier} Let $f \in \HConc_{s}(\mathcal{C})$ be $+$-differentiable.
For any non-empty subcone $\mathcal{C}_{T}$ of $\mathcal{C}_{f}$, consider the following conditions: \begin{enumerate} \item The restriction $f|_{\mathcal{C}_{T}}$ is strictly log concave (in the sense defined above). \item $f$ satisfies Teissier proportionality with respect to $\mathcal{C}_{T}$. \item The restriction of $D$ to $\mathcal{C}_{T}$ is injective. \end{enumerate} Then we have (1) $\implies$ (2) $\implies$ (3). If $\mathcal{C}_{T}$ is convex, then we have (2) $\implies$ (1). If $\mathcal{C}_{T}$ is an open subcone, then we have (3) $\implies$ (1). \end{thrm} \begin{proof} We first prove (1) $\implies$ (2). Let $v, x\in \mathcal{C}_T$ satisfy $D(v)\cdot x =f(v)^{(s-1)/s}f(x)^{1/s}$; rescaling $x$ (which preserves this condition) we may also assume $f(v)=f(x)$. Assume for a contradiction that $v\neq x$; then $v$ and $x$ are not proportional, since proportional vectors with $f(v)=f(x)>0$ must coincide. Since $f|_{\mathcal{C}_T}$ is strictly log concave, for any two $v, x\in \mathcal{C}_T$ which are not proportional we have \begin{align*} f(x)^{1/s} < f(v)^{1/s} + \frac{D(v) \cdot (x-v)}{f(v)^{(s-1)/s}}. \end{align*} Since we have assumed $D(v)\cdot x =f(v)^{(s-1)/s}f(x)^{1/s}$ and $f(v)=f(x)$, we must have \begin{align*} f(x)^{1/s} = f(v)^{1/s} + \frac{D(v) \cdot (x-v)}{f(v)^{(s-1)/s}} \end{align*} since $D(v)\cdot v= f(v)$. This is a contradiction, so we must have $v=x$. This then implies $f$ satisfies Teissier proportionality. We next show (2) $\implies$ (3). Let $v_1, v_2 \in \mathcal{C}_{T}$ with $D(v_1)=D(v_2)$. Then we have \begin{align*} f(v_1) &= D(v_1)\cdot v_1 =D(v_2)\cdot v_1\\ &\geq f(v_2)^{(s-1)/s}f(v_1)^{1/s}, \end{align*} which implies $f(v_1)\geq f(v_2)$. By symmetry, we get $f(v_1)=f(v_2)$. So we must have $$D(v_1)\cdot v_2= f(v_1)^{(s-1)/s}f(v_2)^{1/s}.$$ By the Teissier proportionality we see that $v_1, v_2$ are proportional, and since $f(v_1)=f(v_2)$ they must be equal. We next show that if $\mathcal{C}_{T}$ is convex then (2) $\implies$ (1). Fix $y$ in the interior of $\mathcal{C}_{T}$ and fix $\epsilon >0$.
Then \begin{align*} f(v+x+\epsilon y)^{1/s}-f(v)^{1/s}&=\int_0 ^1 \left(D(v+t(x+\epsilon y))\cdot (x+\epsilon y)\right) f(v+t(x+\epsilon y))^{(1-s)/s}dt. \end{align*} The integrand is bounded above by a constant independent of $\epsilon$ as we let $\epsilon$ go to $0$, due to the $+$-differentiability of $f$ (which also implies the continuity of $f$). Using Lemma \ref{uppersemilimit}, the dominated convergence theorem shows that \begin{align*} f(v+x)^{1/s}-f(v)^{1/s}&=\int_0 ^1 (D(v+tx)\cdot x) f(v+tx)^{(1-s)/s}dt. \end{align*} By the Brunn-Minkowski inequality the integrand is at least $f(x)^{1/s}$; moreover, since $\mathcal{C}_{T}$ is convex we have $v+tx \in \mathcal{C}_{T}$, so by Teissier proportionality the inequality is strict whenever $v$ and $x$ are not proportional. This immediately shows the strict log concavity. Finally, we show that if $\mathcal{C}_{T}$ is open then (3) $\implies$ (1). By \cite[Corollary 26.3.1]{rockafellar70convexBOOK}, it is clear that for any convex open set $U \subset \mathcal{C}_{T}$ the injectivity of $D$ over $U$ is equivalent to the strict log concavity of $f|_{U}$. Using the global log concavity of $f$, we obtain the conclusion. More precisely, assume $x, y\in \mathcal{C}_T$ are not proportional; then by the strict log concavity of $f$ near $x$ and the global log concavity on $\mathcal{C}$, for $t>0$ sufficiently small we have \begin{align*} f^{1/s}(x+y)&\geq f^{1/s}(x+ty) +(1-t)f^{1/s}(y)\\ &> (f^{1/s}(x)+f^{1/s}(x+2ty))/2 + (1-t)f^{1/s}(y)\\ &\geq f^{1/s}(x)+f^{1/s}(y). \end{align*} \end{proof} Another useful observation is: \begin{prop} \label{derivativeofdual} Let $f \in \HConc_{s}(\mathcal{C})$ be differentiable and suppose that $f$ is strictly log concave on an open subcone $\mathcal{C}_{T}\subset \mathcal{C}^{\circ}$. Then $\mathcal{H}f$ is differentiable on $D(\mathcal{C}_{T})$ and the derivative is determined by the prescription \begin{equation*} D(D(v)) = v. \end{equation*} \end{prop} \begin{proof} We first show that $D(\mathcal{C}_{T}) \subset \mathcal{C}^{* \circ}$. Suppose that for some $v \in \mathcal{C}_{T}$ the point $D(v)$ lies on the boundary of $\mathcal{C}^{*}$. Choose a non-zero $x \in \mathcal{C}$ satisfying $x \cdot D(v) = 0$.
By openness we have $v + tx \in \mathcal{C}_{T}$ for sufficiently small $t$. Since $D(v) \in G_{v + tx}$, we must have that $D(v)$ and $D(v+tx)$ are proportional by Proposition \ref{diffuniqueness}. This contradicts Theorem \ref{thrm teissier}. Now suppose $w^{*} = D(v) \in D(\mathcal{C}_{T})$. By the strict log concavity of $f$ on $\mathcal{C}_{T}$ (and the global log concavity), we must have that $G_{w^*} \cup \{0 \}$ consists only of the ray spanned by $v$. Applying Proposition \ref{diffuniqueness}, we obtain the statement. \end{proof} Combining all the results above, we obtain a very clean property of $D$ under the strongest possible assumptions. \begin{thrm} \label{strongesttransform} Assume $f \in \HConc_{s}(\mathcal{C})$ and its polar transform $\mathcal{H}f\in \HConc_{s/(s-1)}(\mathcal{C}^*)$ are $+$-differentiable. Let $U=D(\mathcal{C}_{\mathcal{H}f} ^*)\cup \{0\}$ and $U^*=D(\mathcal{C}_{f} )\cup \{0\}$. Then we have: \begin{itemize} \item $f$ and $\mathcal{H}f$ admit a strong Zariski decomposition with respect to the cone $U$ and the cone $U^*$ respectively; \item For any $v\in \mathcal{C}_{f}$ we have $D(v)=D(p_v)$ (and similarly for $w^{*} \in \mathcal{C}_{\mathcal{H}f}^{*}$); \item $D$ defines a bijection $D: U^{\circ} \to U^{* \circ}$ with inverse also given by $D$. In particular, $f$ and $\mathcal{H}f$ satisfy Teissier proportionality with respect to the open cone $U^{\circ}$ and $U^{* \circ}$ respectively. \end{itemize} \end{thrm} \begin{proof} Note that $U^* \subset \mathcal{C}^{*}_{\mathcal{H}f}$ (and $U \subset \mathcal{C}_{f}$) since for any $v \in \mathcal{C}_{f}$ we have $D(v) \in G_{v}$ and $f(v) > 0$. The first statement is immediate from Theorem \ref{strong zariski equivalence}. We next show the second statement. By the definition of positive parts, we have $G_v \subset G_{p_v}$.
Since both $v, p_{v} \in \mathcal{C}_{f}$, we know by the argument of Theorem \ref{strong zariski equivalence} that $D(v)$ and $D(p_{v})$ are both proportional to the (unique) positive part of any $w^{*} \in G_{v}$ with positive $\mathcal{H}f$. Finally we show the third statement. We start by proving the Teissier proportionality on $U^{\circ}$. By part (2) of the Zariski decomposition condition, $f$ is strictly log concave on $U^\circ$, and Teissier proportionality follows by Theorem \ref{thrm teissier}. Furthermore, the argument of Proposition \ref{derivativeofdual} then shows that $D(U^{\circ})\subset \mathcal{C}^{* \circ}$ and $D(D(U^{\circ})) = U^{\circ}$. We must show that $D(U^{\circ}) \subset U^{* \circ}$. Suppose for contradiction that some $v \in U^{\circ}$ satisfies that $D(v)$ lies on the boundary of $U^{*}$. Since $D(v) \in \mathcal{C}^{* \circ}$, there must be some sequence $w_{i}^{*} \in \mathcal{C}^{* \circ} \setminus U^{*}$ whose limit is $D(v)$. We note that each $D(w_{i}^{*})$ lies on the boundary of $\mathcal{C}$, thus must lie on the boundary of $U$. Indeed, by the second statement we have $D(w_{i}^{*}) = D(w_{i}^{*} + tn_{w_{i}^{*}})$ for any $t>0$, which would violate the uniqueness of $G_{D(w_{i}^*)}$ as in Proposition \ref{diffuniqueness} if it were an interior point. Using the continuity of $D$ we see that $v = D(D(v))$ lies on the boundary of $U$, a contradiction. In all, we have shown that $D: U^{\circ} \to U^{* \circ}$ is an isomorphism onto its image with inverse $D$. By symmetry we also have $D(U^{* \circ}) \subset U^{\circ}$, and applying $D$ we conclude the reverse inclusion $U^{* \circ} \subset D(U^{\circ})$. \end{proof} \subsection{Morse-type inequality} The polar transform $\mathcal{H}$ also gives a natural way of translating cone positivity conditions from $\mathcal{C}$ to $\mathcal{C}^{*}$. \begin{defn} Let $f \in \HConc_{s}(\mathcal{C})$ be $+$-differentiable.
We say that $f$ satisfies a Morse-type inequality if for any $v\in \mathcal{C}_f$ and $x\in \mathcal{C}$ satisfying the inequality $$ f(v)-sD(v)\cdot x >0 $$ we have that $v-x \in \mathcal{C}^\circ$. \end{defn} Note that the prototype of the Morse-type inequality is the well-known algebraic Morse inequality for nef divisors. In order to translate the positivity in $\mathcal{C}$ to $\mathcal{C}^*$, we need the following ``reverse'' Khovanskii-Teissier inequality. \begin{prop} \label{prop abstract reverse KT inequality} Let $f \in \HConc_{s}(\mathcal{C})$ be $+$-differentiable and satisfy a Morse-type inequality. Then we have \begin{align*} s(y^*\cdot v)(D(v)\cdot x)\geq f(v) (y^*\cdot x), \end{align*} for any $y^* \in \mathcal{C}^*$, $v\in \mathcal{C}_f$ and $x\in \mathcal{C}$. \end{prop} \begin{proof} The inequality holds trivially when $y^*\cdot x=0$ (in particular when $y^*=0$), so we may assume $y^*\cdot x>0$. Note that then $y^{*} \cdot v > 0$ as well: for $\lambda>0$ sufficiently small we have $f(v)-s\lambda D(v)\cdot x>0$, hence $v-\lambda x \in \mathcal{C}^{\circ}$ and $y^{*}\cdot v > \lambda\, y^{*}\cdot x > 0$. Since both sides are homogeneous in all the arguments, we may rescale to assume that $y^{*} \cdot v = y^{*} \cdot x$. Then we need to show that $s D(v) \cdot x \geq f(v)$. If not, then \begin{equation*} f(v) - sD(v) \cdot x > 0, \end{equation*} so that $v-x \in \mathcal{C}^{\circ}$ by the Morse-type inequality. But then we conclude that $y^{*} \cdot v > y^{*} \cdot x$, a contradiction. \end{proof} \begin{thrm} \label{general morse} Let $f \in \HConc_{s}(\mathcal{C})$ be $+$-differentiable and satisfy a Morse-type inequality. Then for any $v \in \mathcal{C}_f$ and $y^*\in \mathcal{C}^{*}$ satisfying \begin{align*} \mathcal{H}f(D(v))-s v\cdot y^*>0, \end{align*} we have $D(v)-y^*\in \mathcal{C}^{*\circ}$. In particular, we have $D(v)-y^*\in \mathcal{C}^*_{\mathcal{H}f}$ and \begin{align*} \mathcal{H}f(D(v)-y^*)^{(s-1)/s}&\geq (\mathcal{H}f(D(v))-s v\cdot y^*)\mathcal{H}f(D(v))^{-1/s}\\ &=(f(v)-s v\cdot y^*)f(v)^{-1/s}. \end{align*} As a consequence, we get \begin{align*} \mathcal{H}f(D(v)-y^*)\geq f(v)-\frac{s^2}{s-1}v\cdot y^*.
\end{align*} \end{thrm} \begin{proof} Note that $\mathcal{H}f(D(v))=f(v)$. First we claim that the inequality $f(v)-sv\cdot y^*>0$ implies $D(v)-y^* \in \mathcal{C}^{*\circ}$. To this end, fix some sufficiently small $y'^{*} \in \mathcal{C}^{*\circ}$ such that $y^*+y'^*$ still satisfies $f(v)-sv\cdot (y^*+y'^*)>0$. Then by the ``reverse'' Khovanskii-Teissier inequality, for some $\delta>0$ and any $x\in \mathcal{C}$ we have \begin{align*} D(v)\cdot x\geq \left(\frac{f(v)}{s(y^* + y'^*)\cdot v}\right) (y^*+y'^*) \cdot x \geq (1+\delta)(y^*+y'^*)\cdot x. \end{align*} This implies $D(v)-y^* \in \mathcal{C}^{*\circ}$. By the definition of $\mathcal{H}f$ we have \begin{align*} \mathcal{H}f(D(v)-y^*)&=\inf_{x\in \mathcal{C}^\circ}\left( \frac{(D(v)-y^*)\cdot x}{f(x)^{1/s}}\right)^{s/(s-1)}\\ &\geq \left(\frac{f(v)-sy^*\cdot v}{f(v)}\right)^{s/(s-1)}\inf_{x\in \mathcal{C}^\circ}\left( \frac{D(v)\cdot x}{f(x)^{1/s}}\right)^{s/(s-1)}\\ &=\mathcal{H}f(D(v))\left(\frac{f(v)-sy^*\cdot v}{f(v)}\right)^{s/(s-1)}, \end{align*} where the second line follows from the ``reverse'' Khovanskii-Teissier inequality. To obtain the desired inequality, we only need to use the equality $\mathcal{H}f(D(v))=f(v)$ again. To show the last inequality, we only need to note that the function $(1-x)^\alpha$ is convex for $x\in [0,1)$ if $\alpha \geq 1$. This implies $(1-x)^\alpha \geq 1-\alpha x$. Applying this inequality in our situation, we get \begin{align*} \mathcal{H}f(D(v)-y^*)&\geq \left( 1- \frac{s v\cdot y^*}{f(v)}\right)^{s/(s-1)} f(v)\\ & \geq f(v)-\frac{s^2}{s-1}v\cdot y^*. \end{align*} \end{proof} \subsection{Boundary conditions} Under certain conditions we can control the behaviour of $\mathcal{H}f$ near the boundary, and thus obtain continuity. \begin{defn} Let $f \in \HConc_{s}(\mathcal{C})$ and let $\alpha \in (0,1)$.
We say that $f$ satisfies the sublinear boundary condition of order $\alpha$ if for any non-zero $v$ on the boundary of $\mathcal{C}$ and for any $x$ in the interior of $\mathcal{C}$, there exists a constant $C:=C(v, x)>0$ such that $f(v+\epsilon x)^{1/s}\geq C\epsilon^\alpha$ for all sufficiently small $\epsilon>0$. \end{defn} Note that the condition is always satisfied at $v$ if $f(v)>0$. Furthermore, the condition is satisfied for any $v,x$ with $\alpha = 1$ by homogeneity and log-concavity, so the crucial question is whether we can decrease $\alpha$ slightly. Using this sublinear condition, we get the vanishing of $\mathcal{H}f$ along the boundary. \begin{prop} \label{sublinearcontinuity} Let $f \in \HConc_{s}(\mathcal{C})$ satisfy the sublinear boundary condition of order $\alpha$. Then $\mathcal{H}f$ vanishes along the boundary. As a consequence, $\mathcal{H}f$ extends to a continuous function over $V^*$ by setting $\mathcal{H}f=0$ outside $\mathcal{C}^*$. \end{prop} \begin{proof} Let $w^*$ be a boundary point of $\mathcal{C}^*$. Then there exists some non-zero $v\in \mathcal{C}$ such that $w^*\cdot v=0$. Fix $x\in \mathcal{C}^\circ$. By the definition of $\mathcal{H}f$ we get \begin{align*} \mathcal{H}f(w^*)^{(s-1)/s}\leq \frac{w^*\cdot (v+\epsilon x)}{f^{1/s}(v+\epsilon x)}\leq \frac{\epsilon w^*\cdot x}{C\epsilon^\alpha}. \end{align*} Letting $\epsilon$ tend to zero, we see $\mathcal{H}f(w^*)=0$. To show the continuity, by Lemma \ref{uppersemilimit} we only need to verify \begin{align*} \lim _{\epsilon \rightarrow 0}\mathcal{H}f(w^* + \epsilon y^*)=0 \end{align*} for some $y^* \in \mathcal{C}^{*\circ}$ (as any other limiting sequence is dominated by such a sequence). This follows easily from \begin{align*} \mathcal{H}f(w^* + \epsilon y^*)^{(s-1)/s}&\leq \frac{(w^*+\epsilon y^*)\cdot (v+\epsilon x)}{f^{1/s}(v+\epsilon x)}\\ &\leq \frac{\epsilon(y^*\cdot v + w^*\cdot x + \epsilon y^*\cdot x)}{C \epsilon^\alpha}.
\end{align*} \end{proof} \begin{rmk} If $f$ satisfies the sublinear condition, then $\mathcal{C}_{\mathcal{H}f} ^* =\mathcal{C}^{*\circ}$. This makes the statements of the previous results very clean. In the following section, the function $\widehat{\vol}$ has this nice property. \end{rmk} \section{Positivity for curves} \label{section zariski} We now study the basic properties of $\widehat{\vol}$ and of the Zariski decompositions for curves. Some aspects of the theory will follow immediately from the formal theory of Section \ref{formal zariski section}; others will require a direct geometric argument. We first outline how to apply the results of Section \ref{formal zariski section}. Recall that $\widehat{\vol}$ is the polar transform of the volume function for divisors restricted to the nef cone. More precisely, we are now in the situation: \begin{align*} \mathcal{C}=\Nef^1 (X), \quad f=\vol,\quad \mathcal{C}^*=\Eff_1 (X),\quad \mathcal{H}f= \widehat{\vol}. \end{align*} Thus, to understand the properties of $\widehat{\vol}$ we need to recall the basic features of the volume function on the nef cone of divisors. It is an elementary fact that the volume function on the nef cone of divisors is differentiable everywhere (with $D(A) = A^{n-1}$). In the notation of Section \ref{legendresection} the cone $\Nef^{1}(X)_{\vol}$ coincides with the big and nef cone. The Khovanskii-Teissier inequality (with Teissier proportionality) holds on the big and nef cone as recalled in Section \ref{section preliminaries}. Finally, the volume for nef divisors satisfies the sublinear boundary condition of order $(n-1)/n$: this follows from an elementary intersection calculation using the fact that $N \cdot A^{n-1} \neq 0$ for any non-zero nef divisor $N$ and ample divisor $A$.
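For the reader's convenience, that intersection calculation can be sketched as follows (our own expansion; it uses only that intersection numbers of nef classes are non-negative and that $N \cdot A^{n-1} > 0$):

```latex
% Expand the top self-intersection and keep only the k = n-1 term:
\begin{align*}
\vol(N+\epsilon A) = (N+\epsilon A)^{n}
  = \sum_{k=0}^{n} \binom{n}{k} \epsilon^{k}\, N^{n-k}\cdot A^{k}
  \geq n\,\epsilon^{n-1}\, N\cdot A^{n-1}.
\end{align*}
% Since N . A^{n-1} > 0, taking n-th roots gives
\begin{align*}
\vol(N+\epsilon A)^{1/n} \geq \left( n\, N\cdot A^{n-1} \right)^{1/n}\epsilon^{(n-1)/n},
\end{align*}
% which is the sublinear boundary condition of order (n-1)/n
% with constant C = (n N . A^{n-1})^{1/n}.
```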
\begin{rmk} \label{rmk decomposition kahler} Due to the outline above, the proofs in this section depend only upon elementary facts about intersection theory, the Khovanskii-Teissier inequality and Teissier's proportionality theorem. As discussed in the preliminaries, the arguments in this section thus extend immediately to smooth varieties over an arbitrary algebraically closed field and to the K\"ahler setting. \end{rmk} \subsection{Basic properties} The following theorems collect the various analytic consequences for $\widehat{\vol}$. \begin{thrm} \label{volcurvesbasicprops} Let $X$ be a projective variety of dimension $n$. Then: \begin{enumerate} \item $\widehat{\vol}$ is continuous and homogeneous of weight $n/(n-1)$ on $\Eff_{1}(X)$ and is positive precisely for the big classes. \item For any big and nef divisor class $A$, we have $\widehat{\vol}(A^{n-1}) = \vol(A)$. \item For any big curve class $\alpha$, there is a big and nef divisor class $B$ such that \begin{equation*} \widehat{\vol}(\alpha) = \left( \frac{B \cdot \alpha}{\vol(B)^{1/n}} \right)^{n/(n-1)}. \end{equation*} We say that the class $B$ computes $\widehat{\vol}(\alpha)$. \end{enumerate} \end{thrm} The first two statements were already proved in \cite[Theorem 3.1]{xiao15}. \begin{proof} (1) follows immediately from Propositions \ref{prop H involution} and \ref{sublinearcontinuity}. Since $D(A) = A^{n-1}$, (2) follows from the computation $$\widehat{\vol}(A^{n-1}) = D(A) \cdot A = A^{n}.$$ The existence in (3) follows from Theorem \ref{thm abstract zariski}. \end{proof} We also note the following easy basic linearity property, which follows immediately from the Khovanskii-Teissier inequalities. \begin{thrm} \label{zardecomlinearity} Let $X$ be a projective variety of dimension $n$ and let $\alpha$ be a big curve class. If $A$ computes $\widehat{\vol}(\alpha)$, it also computes $\widehat{\vol}(c_{1}\alpha + c_{2}A^{n-1})$ for any positive constants $c_{1}$ and $c_{2}$.
\end{thrm} After constructing Zariski decompositions below, we will see that in fact we can choose a possibly negative $c_{2}$ so long as $c_{1} \alpha + c_{2} A^{n-1}$ is a big class. \subsection{Zariski decompositions for curves} The following theorem is the basic result establishing the existence of Zariski decompositions for curve classes. \begin{thrm} \label{thrm:existenceofzardecom} Let $X$ be a projective variety of dimension $n$. Any big curve class $\alpha$ admits a unique Zariski decomposition: there is a unique pair consisting of a big and nef divisor class $B_\alpha$ and a pseudo-effective curve class $\gamma$ satisfying $B_\alpha \cdot \gamma = 0$ and \begin{equation*} \alpha = B_\alpha^{n-1} + \gamma. \end{equation*} In fact $\widehat{\vol}(\alpha) = \widehat{\vol}(B_{\alpha}^{n-1}) = \vol(B_{\alpha})$. In particular $B_{\alpha}$ computes $\widehat{\vol}(\alpha)$, and any big and nef divisor computing $\widehat{\vol}(\alpha)$ is proportional to $B_{\alpha}$. \end{thrm} \begin{proof} The existence of the Zariski decomposition and the uniqueness of the positive part $B_{\alpha}^{n-1}$ follow from Theorem \ref{strong zariski equivalence}. The uniqueness of $B_\alpha$ follows from Teissier proportionality for big and nef divisor classes. It is clear that $B_{\alpha}$ computes $\widehat{\vol}(\alpha)$ by Theorem \ref{strong zariski equivalence}. The last claim follows from Teissier proportionality and the fact that $\alpha \succeq B_{\alpha}^{n-1}$. \end{proof} As discussed before, conceptually the Zariski decomposition $\alpha = B_\alpha^{n-1} + \gamma$ captures the failure of strict log concavity of $\widehat{\vol}$: the term $B_\alpha^{n-1}$ captures all of the positivity encoded by $\widehat{\vol}$ and is positive in a very strong sense, while the negative part $\gamma$ lies on the boundary of the pseudo-effective cone.
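Such decompositions are easy to probe numerically. The following Python sketch (our own illustration; the helper names are ours) uses the intersection data of Example \ref{example proj bundle} below to recover the closed-form $\widehat{\vol}$ there by brute-force minimization over the nef cone, using that $\widehat{\vol}$ is the infimum in Theorem \ref{volcurvesbasicprops}(3) with $n=3$ (so the outer exponent is $n/(n-1) = 3/2$):

```python
# Numerical check of vol_hat on X = P(O + O + O(-1)) over P^1.
# Intersection relations (see Example "proj bundle"): f^2 = 0, xi^2 f = 1, xi^3 = -1;
# the nef cone is spanned by f and xi + f.  For a curve class
# alpha = x*(xi f) + y*(xi^2) and a nef divisor D = a*f + b*(xi + f):
#   D . alpha = a*y + b*x   and   vol(D) = D^3 = 3*a*b^2 + 2*b^3,
# so vol_hat(alpha) = inf_{a,b >= 0} ((a*y + b*x) / (3ab^2 + 2b^3)^(1/3))^(3/2).

def vol_hat_numeric(x, y, steps=200000, a_max=200.0):
    # By homogeneity in (a, b) we may set b = 1 and scan a >= 0.
    best = float("inf")
    for i in range(steps + 1):
        a = a_max * i / steps
        val = ((a * y + x) / (3 * a + 2) ** (1 / 3)) ** 1.5
        best = min(best, val)
    return best

def vol_hat_closed_form(x, y):
    # Piecewise formula of the example below.
    if x >= 2 * y:
        return (1.5 * x - y) * y ** 0.5
    return x ** 1.5 / 2 ** 0.5

# The two computations agree on both sides of the wall x = 2y.
for x, y in [(3.0, 1.0), (5.0, 1.0), (1.0, 1.0), (1.0, 3.0)]:
    assert abs(vol_hat_numeric(x, y) - vol_hat_closed_form(x, y)) < 1e-3
```

For $x < 2y$ the minimizing divisor degenerates to $a = 0$, matching the projection onto the complete intersection cone described in the example.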
\begin{exmple} \label{example proj bundle} Let $X$ be the projective bundle over $\mathbb{P}^{1}$ defined by $\mathcal{O} \oplus \mathcal{O} \oplus \mathcal{O}(-1)$. There are two natural divisor classes on $X$: the class $f$ of the fibers of the projective bundle and the class $\xi$ of the sheaf $\mathcal{O}_{X/\mathbb{P}^{1}}(1)$. Using for example \cite[Theorem 1.1]{fulger11} and \cite[Proposition 7.1]{fl14}, one sees that $f$ and $\xi$ generate the algebraic cohomology classes with the relations $f^{2} = 0$, $\xi^{2}f = -\xi^{3} = 1$ and \begin{equation*} \Eff^{1}(X) = \Mov^{1}(X) = \langle f, \xi \rangle \qquad \qquad \Nef^{1}(X) = \langle f, \xi + f \rangle \end{equation*} and \begin{align*} \Eff_{1}(X) = \langle \xi f, \xi^{2} \rangle \qquad & \qquad \Nef_{1}(X) = \langle \xi f, \xi^{2} + \xi f \rangle \\ \CI_{1}(X) = \langle \xi f, & \xi^{2} + 2 \xi f \rangle. \end{align*} Using this explicit computation of the nef cone of the divisors, we have \begin{equation*} \widehat{\vol}(x \xi f + y \xi^{2}) = \inf_{a,b \geq 0} \left( \frac{a y + bx}{(3ab^{2}+2b^{3})^{1/3}} \right)^{3/2}. \end{equation*} This is essentially a one-variable minimization problem due to the homogeneity in $a,b$. It is straightforward to compute directly that for non-negative values of $x,y$: \begin{align*} \widehat{\vol}(x \xi f + y \xi^{2}) & = \left( \frac{3}{2}x - y \right) y^{1/2} \qquad \textrm{ if }x \geq 2y; \\ & = \frac{x^{3/2}}{2^{1/2}} \qquad \qquad \qquad \textrm{ if }x < 2y. \end{align*} Note that when $x<2y$, the class $x \xi f + y \xi^{2}$ no longer lies in the complete intersection cone -- to obtain $\widehat{\vol}$, Theorem \ref{thrm:existenceofzardecom} indicates that we must project $\alpha$ onto the complete intersection cone in the $y$-direction. This exactly coheres with the calculation above. \end{exmple} The Zariski decomposition for curves is continuous. \begin{thrm} \label{zardecomcontinuous} Let $X$ be a projective variety of dimension $n$.
The function sending a big curve class $\alpha$ to its positive part $B_{\alpha}^{n-1}$ or to the corresponding divisor $B_{\alpha}$ is continuous. \end{thrm} \begin{proof} The first statement follows from Theorem \ref{thrm positive part continuity}. The second then follows from the continuity of the inverse map to the $(n-1)$-st power map. \end{proof} It is interesting to study whether the Zariski projection taking $\alpha$ to its positive part is $\mathcal{C}^1$. This is true on the ample cone -- the map $\Phi$ sending an ample divisor class $A$ to $A^{n-1}$ is a $\mathcal{C}^1$ diffeomorphism by the argument in Remark \ref{rmk CI}. \begin{rmk} The continuity of the Zariski decomposition does not extend to the entire pseudo-effective cone, even for surfaces. For example, suppose that a surface $S$ admits a nef class $N$ which is a limit of (rescalings of) irreducible curve classes which each have negative self-intersection. (A well-known example of such a surface is $\mathbb{P}^{2}$ blown up at 9 general points.) For any $c \in [0,1]$ one can find a sequence of big divisors $\{ L_{i} \}$ whose limit is $N$ but whose positive parts have limit $c N$. \end{rmk} An important feature of the $\sigma$-decomposition for divisors is its concavity: given two big divisors $L_{1}, L_{2}$ we have $$P_{\sigma}(L_{1}+L_{2}) \succeq P_{\sigma}(L_{1}) + P_{\sigma}(L_{2}).$$ However, the analogous property fails for curves: \begin{exmple} Let $X$ be a smooth projective variety such that $\CI_{1}(X)$ is not convex. (An explicit example is given in the Appendix.) Then there are complete intersection classes $\alpha = B_{\alpha}^{n-1}$ and $\beta = B_{\beta}^{n-1}$ such that $\alpha+\beta$ is not a complete intersection class. Let $B_{\alpha + \beta}^{n-1}$ denote the positive part of the Zariski decomposition for $\alpha+\beta$. Then \begin{equation*} B_{\alpha+\beta}^{n-1} \preceq B_{\alpha}^{n-1} + B_{\beta}^{n-1}.
\end{equation*} Furthermore, we cannot have equality since the sum is not a complete intersection class. Thus \begin{equation*} B_{\alpha+\beta}^{n-1} \precneqq B_{\alpha}^{n-1} + B_{\beta}^{n-1}. \end{equation*} \end{exmple} However, one can still ask: \begin{ques} Fix $\alpha \in \Eff_{1}(X)$. Is there a fixed class $\xi \in \CI_{1}(X)$ such that for any $\epsilon > 0$ there is a $\delta > 0$ satisfying \begin{equation*} B_{\alpha+\delta \beta}^{n-1} \preceq B_{\alpha + \epsilon \xi}^{n-1} \end{equation*} for every $\beta \in N_{1}(X)$ of bounded norm? \end{ques} This question is crucial for making sense of the Zariski decomposition of a curve class on the boundary of $\Eff_{1}(X)$ via taking a limit. \subsection{Strict log concavity} The following theorem is an immediate consequence of Theorem \ref{strong zariski equivalence}, which gives the strict log concavity of $\widehat{\vol}$. \begin{thrm} \label{volcurvesconcavity} Let $X$ be a projective variety of dimension $n$. For any two pseudo-effective curve classes $\alpha, \beta$ we have \begin{equation*} \widehat{\vol}(\alpha + \beta)^{\frac{n-1}{n}} \geq \widehat{\vol}(\alpha)^{\frac{n-1}{n}} + \widehat{\vol}(\beta)^{\frac{n-1}{n}}. \end{equation*} Furthermore, if $\alpha$ and $\beta$ are big, then equality holds if and only if the positive parts of $\alpha$ and $\beta$ are proportional. \end{thrm} \begin{proof} The inequality is clear. Combining the $+$-differentiability of $\vol$ with Theorem \ref{strong zariski equivalence}, we see the forward implication in the last sentence. Conversely, if $\alpha$ and $\beta$ have proportional positive parts, then working directly from the definition it is clear that the sum of the positive parts is the (unique) positive part of $\alpha + \beta$, and the conclusion follows.
\end{proof} \subsection{Differentiability} \label{section derivative} In \cite{bfj09} the derivative of the volume function was calculated using the positive product: given a big divisor class $L$ and any divisor class $E$, we have \begin{equation*} \left. \frac{d}{dt} \right|_{t=0} \vol(L + t E) = n \langle L^{n-1} \rangle \cdot E. \end{equation*} In this section we prove an analogous statement for curve classes. For curves, the big and nef divisor class $B$ occurring in the Zariski decomposition plays the role of the positive product, and the homogeneity constant $n/(n-1)$ plays the role of $n$. \begin{thrm} \label{thrm:derivative} Let $X$ be a projective variety of dimension $n$, and let $\alpha$ be a big curve class with Zariski decomposition $\alpha = B^{n-1} + \gamma$. Let $\beta$ be any curve class. Then $\widehat{\vol}(\alpha + t \beta)$ is differentiable at $0$ and \begin{align*} \left. \frac{d}{dt} \right|_{t=0} \widehat{\vol}(\alpha + t \beta) & = \frac{n}{n-1} B \cdot \beta. \end{align*} In particular, the function $\widehat{\vol}$ is $\mathcal{C}^{1}$ on the big cone of curves. If $C$ is an irreducible curve on $X$, then we can instead write \begin{align*} \left. \frac{d}{dt} \right|_{t=0} \widehat{\vol}(\alpha + t C)= \frac{n}{n-1} \vol(B|_{C}). \end{align*} \end{thrm} \begin{proof} This follows immediately from Propositions \ref{diffuniqueness} and \ref{derivativeofdual} since $G_{\alpha}\cup \{0\}$ consists of a single ray by the last statement of Theorem \ref{thrm:existenceofzardecom}. \end{proof} \begin{exmple} \label{example derivative} We return to the setting of Example \ref{example proj bundle}: let $X$ be the projective bundle over $\mathbb{P}^{1}$ defined by $\mathcal{O} \oplus \mathcal{O} \oplus \mathcal{O}(-1)$.
Using our earlier notation we have \begin{align*} \Eff_{1}(X) = \langle \xi f, \xi^{2} \rangle \end{align*} and \begin{align*} \widehat{\vol}(x \xi f + y \xi^{2}) & = \left( \frac{3}{2}x - y \right) y^{1/2} \qquad \textrm{ if }x \geq 2y; \\ & = \frac{x^{3/2}}{2^{1/2}} \qquad \qquad \qquad \textrm{ if }x < 2y. \end{align*} We focus on the complete intersection region where $x \geq 2y$. Then we have \begin{equation*} x \xi f + y \xi^{2} = \left( \frac{x - 2y}{2 y^{1/2}} f + y^{1/2} (\xi + f) \right)^{2}. \end{equation*} The divisor in the parentheses on the right hand side is exactly the $B$ appearing in the Zariski decomposition expression for $x \xi f + y \xi^{2}$. Thus, we can calculate the directional derivative of $\widehat{\vol}$ along a curve class $\beta$ by intersecting against this divisor. For a very concrete example, set $\alpha = 3 \xi f + \xi^{2}$, and consider the behavior of $\widehat{\vol}$ for \begin{equation*} \alpha_{t} := 3 \xi f + \xi^{2} - t(2\xi f+ \xi^{2}). \end{equation*} Note that $\alpha_{t}$ is pseudo-effective precisely for $t \leq 1$. In this range, the explicit expression for the volume above yields \begin{align*} \widehat{\vol}(\alpha_{t}) & = \left( \frac{7}{2} - 2t \right) (1-t)^{1/2}, \\ \frac{d}{dt} \widehat{\vol}(\alpha_{t}) & = -3(1-t)^{1/2} - \frac{3}{4}(1-t)^{-1/2}. \end{align*} Note that this calculation agrees with the prediction of Theorem \ref{thrm:derivative}, which states that if $B_{t}$ is the divisor defining the positive part of $\alpha_{t}$ then \begin{align*} \frac{d}{dt} \widehat{\vol}(\alpha_{t}) & = -\frac{3}{2} B_{t} \cdot (2 \xi f + \xi^{2}) \\ & = \frac{-3}{2} \left( \frac{(3-2t) - 2(1-t)}{2 (1-t)^{1/2}} + 2(1-t)^{1/2} \right). \end{align*} In particular, the derivative decreases to $-\infty$ as $t$ approaches $1$ (and the coefficients of the divisor $B_{t}$ also increase without bound). This is a surprising contrast to the situation for divisors.
Note also that $\widehat{\vol}$ is not convex on this line segment, while $\vol$ is convex in any pseudo-effective direction in the nef cone of divisors by the Morse inequality. \end{exmple} \subsection{Negative parts} We next analyze the structure of the negative part of the Zariski decomposition. First we have: \begin{lem} \label{negnotmovable} Let $X$ be a projective variety. Suppose $\alpha$ is a big curve class and write $\alpha = B^{n-1} + \gamma$ for its Zariski decomposition. If $\gamma \neq 0$ then $\gamma \not \in \Mov_{1}(X)$. \end{lem} \begin{proof} Since $B$ is big and $B \cdot \gamma = 0$, $\gamma$ cannot be movable if it is non-zero: a big divisor class has positive intersection against every non-zero movable curve class. \end{proof} For the Zariski decomposition under $\widehat{\vol}$, we cannot guarantee the negative part is the class of an effective curve. As in \cite{fl14}, it is more reasonable to ask if the negative part is the pushforward of a pseudo-effective class from a proper subvariety. Note that this property is automatic when the negative part is represented by an effective class, and for surfaces it is actually equivalent to asking that the negative part be effective. In general this subtle property of pseudo-effective classes is crucial for inductive arguments on dimension. \begin{prop} \label{negpartrigid} Let $X$ be a projective variety of dimension $n$. Let $\alpha$ be a big curve class and write $\alpha = B^{n-1} + \gamma$ for its Zariski decomposition. There is a proper subscheme $i: V \subsetneq X$ and a pseudo-effective class $\gamma' \in N_{1}(V)$ such that $i_{*}\gamma' = \gamma$. \end{prop} \begin{proof} We may choose an effective nef $\mathbb{R}$-Cartier divisor $D$ whose class is $B$. By resolving the base locus of a sufficiently high multiple of $D$ we obtain a blow-up $\phi: Y \to X$, a birational morphism $\psi: Y \to Z$ and an effective ample divisor $A$ on $Z$ such that after replacing $D$ by some numerically equivalent divisor we have $\phi^{*}D \geq \psi^{*}A$.
Write $E$ for the difference of these two divisors and set $V_{Y}$ to be the union of $\Supp(E)$ with the $\psi$-exceptional locus. There is a pseudo-effective curve class $\gamma_{Y}$ on $Y$ which pushes forward to $\gamma$ and thus satisfies $\phi^{*}D \cdot \gamma_{Y} = 0$. There is a sequence of effective $1$-cycles $C_{i}$ such that $\lim_{i \to \infty} [C_{i}] = \gamma_{Y}$. Each effective cycle $C_{i}$ can be decomposed as a sum $C_{i} = T_{i} + T_{i}'$ where $T_{i}'$ consists of the components contained in $V_{Y}$ and $T_{i}$ consists of the rest. Note that \begin{equation*} \lim_{i \to \infty} A \cdot \psi_{*}T_{i} \leq \lim_{i \to \infty} \phi^{*}D \cdot T_{i} = 0. \end{equation*} This shows that, after passing to a subsequence, the classes $[T_{i}]$ converge to a pseudo-effective curve class $\beta \in N_{1}(Y)$ satisfying $\psi_{*}\beta = 0$. Clearly $\lim_{i \to \infty}[T_{i}']$ is then the pushforward of a pseudo-effective curve class from $V_{Y}$. \cite[Theorem 4.1]{djv13} (which holds in the singular case by the same argument) shows that $\beta$ is also the pushforward of a pseudo-effective curve class on $V_{Y}$. Thus $\gamma_{Y}$ is the pushforward of a pseudo-effective curve class on $V_{Y}$. Pushing forward to $X$, we see that $\gamma$ is the pushforward of a pseudo-effective curve class on $V := \phi(V_{Y})$. Note that $V$ is a proper subset of $X$ since $\phi$ is birational. \end{proof} \begin{rmk} In contrast, for the Zariski decomposition of curves in the sense of Boucksom (see \cite[Theorem 3.3 and Lemma 3.5]{xiao15}) the negative part can always be represented by an effective curve which is very rigidly embedded in $X$. This has a similar feel to the $\sigma$-decomposition of \cite{Nak04} for curve classes. \end{rmk} \subsection{Birational behavior} \label{birational subsection} We next use the Zariski decomposition to analyze the behavior of positivity of curves under birational maps $\phi: Y \to X$.
Note that (in contrast to divisors) the birational pullback can only decrease the positivity for curve classes: we have \begin{equation*} \widehat{\vol}(\alpha) \geq \widehat{\vol}(\phi^{*}\alpha). \end{equation*} In fact, pulling back does not preserve pseudo-effectiveness, and even for a movable class we can have a strict inequality of $\widehat{\vol}$ (for example, a big movable class can pull back to a movable class on the pseudo-effective boundary). Again guided by \cite{fl14}, the right approach is to consider all $\phi_{*}$-preimages of $\alpha$ at once. \begin{prop} \label{birvolstatement} Let $\phi: Y \to X$ be a birational morphism of projective varieties of dimension $n$. Let $\alpha$ be a big curve class on $X$ with Zariski decomposition $B^{n-1} + \gamma$. Let $\mathcal{A}$ be the set of all pseudo-effective curve classes $\alpha'$ on $Y$ satisfying $\phi_{*}\alpha' = \alpha$. Then \begin{equation*} \sup_{\alpha' \in \mathcal{A}} \widehat{\vol}(\alpha') = \widehat{\vol}(\alpha). \end{equation*} This supremum is achieved by an element $\alpha_{Y} \in \mathcal{A}$. \end{prop} \begin{proof} Suppose $\alpha' \in \mathcal{A}$. Since $\phi_{*}\alpha' = \alpha$, it is clear from the projection formula that $\widehat{\vol}(\alpha') \leq \widehat{\vol}(\alpha)$. Conversely, set $\gamma_{Y}$ to be any pseudo-effective curve class on $Y$ pushing forward to $\gamma$. Define $\alpha_{Y} = \phi^{*}B^{n-1} + \gamma_{Y}$. Since $\phi^{*}B \cdot \gamma_{Y} = 0$, by Theorem \ref{thrm:existenceofzardecom} this expression is the Zariski decomposition for $\alpha_{Y}$. In particular $\widehat{\vol}(\alpha_{Y}) = \widehat{\vol}(\alpha)$. \end{proof} This proposition indicates the existence of some ``distinguished'' preimages of $\alpha$ with maximum $\widehat{\vol}$. In fact, these distinguished preimages also have a very nice structure. \begin{prop} \label{birpospartstructure} Let $\phi: Y \to X$ be a birational morphism of projective varieties of dimension $n$.
Let $\alpha$ be a big curve class on $X$ with Zariski decomposition $B^{n-1} + \gamma$. Set $\mathcal{A}'$ to be the set of all pseudo-effective curve classes $\alpha'$ on $Y$ satisfying $\phi_{*}\alpha' = \alpha$ and $\widehat{\vol}(\alpha') = \widehat{\vol}(\alpha)$. Then \begin{enumerate} \item Every $\alpha' \in \mathcal{A}'$ has a Zariski decomposition of the form \begin{equation*} \alpha' = \phi^{*}B^{n-1} + \gamma'. \end{equation*} Thus $\mathcal{A}' = \{ \phi^{*}B^{n-1} + \gamma' \, | \, \gamma' \in \Eff_{1}(Y), \phi_{*}\gamma' = \gamma \}$ is determined by the set of pseudo-effective preimages of $\gamma$. \item These Zariski decompositions are stable under adding $\phi$-exceptional curves: if $\xi$ is a pseudo-effective curve class satisfying $\phi_{*}\xi = 0$, then for any $\alpha' \in \mathcal{A}'$ we have \begin{equation*} \alpha' + \xi = \phi^{*}B^{n-1} + (\gamma' + \xi) \end{equation*} is the Zariski decomposition for $\alpha' + \xi$. \end{enumerate} \end{prop} \begin{proof} To see (1), note that \begin{equation*} \frac{\phi^{*}B}{\vol(B)^{1/n}} \cdot \alpha' = \frac{B}{\vol(B)^{1/n}} \cdot \alpha = \widehat{\vol}(\alpha). \end{equation*} Thus if $\widehat{\vol}(\alpha') = \widehat{\vol}(\alpha)$ then $\widehat{\vol}(\alpha')$ is computed by $\phi^{*}B$. By Theorem \ref{thrm:existenceofzardecom} we obtain the statement. (2) follows immediately from (1), since \begin{equation*} \widehat{\vol}(\alpha) = \widehat{\vol}(\alpha') \leq \widehat{\vol}(\alpha' + \xi) \leq \widehat{\vol}(\alpha) \end{equation*} by Proposition \ref{birvolstatement}. \end{proof} While there is not necessarily a \emph{uniquely} distinguished $\phi_{*}$-preimage of $\alpha$, there \emph{is} a uniquely distinguished complete intersection class on $Y$ whose $\phi$-pushforward lies beneath $\alpha$ -- namely, the positive part of any sufficiently large class pushing forward to $\alpha$. This is the analogue in our setting of the ``movable transform'' of \cite{fl14}.
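The derivative formula of Theorem \ref{thrm:derivative} is also easy to test numerically. The following Python sketch (our own check; function names are ours) verifies the computation of Example \ref{example derivative} by comparing a central finite difference of $\widehat{\vol}(\alpha_{t})$ against the closed-form derivative, and that derivative against $-\frac{3}{2}\, B_{t} \cdot \beta$ for $\beta = 2\xi f + \xi^{2}$:

```python
# Finite-difference check of the derivative computed in Example "derivative":
# vol_hat(alpha_t) = (7/2 - 2t) sqrt(1-t) for alpha_t = (3-2t) xi f + (1-t) xi^2,
# with claimed derivative -3 sqrt(1-t) - (3/4)/sqrt(1-t) = -(3/2) B_t . beta.
import math

def vol_hat_along_segment(t):
    return (3.5 - 2 * t) * math.sqrt(1 - t)

def claimed_derivative(t):
    return -3 * math.sqrt(1 - t) - 0.75 / math.sqrt(1 - t)

def B_dot_beta(t):
    # B_t . (2 xi f + xi^2) with B_t = f / (2 sqrt(1-t)) + sqrt(1-t) (xi + f),
    # using f . (2 xi f + xi^2) = 1 and (xi + f) . (2 xi f + xi^2) = 2.
    return 1 / (2 * math.sqrt(1 - t)) + 2 * math.sqrt(1 - t)

h = 1e-6
for t in [0.0, 0.3, 0.6, 0.9]:
    fd = (vol_hat_along_segment(t + h) - vol_hat_along_segment(t - h)) / (2 * h)
    # Finite difference matches the closed form...
    assert abs(fd - claimed_derivative(t)) < 1e-5
    # ...and the closed form matches the n/(n-1) = 3/2 rule of the theorem.
    assert abs(claimed_derivative(t) + 1.5 * B_dot_beta(t)) < 1e-12
```

As the example notes, both quantities blow up as $t \to 1$, reflecting the unbounded coefficients of $B_{t}$ near the pseudo-effective boundary.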
\subsection{Morse-type inequality for curves} \label{section morse ineq} In this section we prove a Morse-type inequality for curves under the volume function $\widehat{\vol}$. First let us recall the algebraic Morse inequality for nef divisor classes over smooth projective varieties. If $A, B$ are nef divisor classes on a smooth projective variety $X$ of dimension $n$, then by \cite[Example 2.2.33]{lazarsfeld04} (see also \cite{Dem85morse}, \cite{Siu93matsusaka}, \cite{Tra95morse}) $$ \vol(A-B)\geq A^n - n A^{n-1}\cdot B. $$ In particular, if $A^n - n A^{n-1}\cdot B>0$, then $A-B$ is big. This gives us a very useful bigness criterion for the difference of two nef divisors. By analogy with the divisor case, we can ask: \begin{itemize} \item Let $X$ be a projective variety of dimension $n$, and let $\alpha, \gamma\in \Eff_1(X)$ be two nef (movable) curve classes. Is there a criterion for the bigness of $\alpha-\gamma\in \Eff_1(X)$ using only intersection numbers defined by $\alpha, \gamma$? \end{itemize} Inspired by \cite{Xia13}, we give such a criterion using the $\widehat{\vol}$ function. In \cite{lehmannxiao2015b}, we answer the above question by giving a slightly different criterion which needs the refined structure of the movable cone of curves. The following results follow from Theorem \ref{general morse}. \begin{thrm} \label{thm curve morseinequality} Let $X$ be a projective variety of dimension $n$. Let $\alpha$ be a big curve class and let $\beta$ be a movable curve class. Write $\alpha = B^{n-1} + \gamma$ for the Zariski decomposition of $\alpha$. Then \begin{align*} \widehat{\vol}(\alpha - \beta)^{(n-1)/n} &\geq (\widehat{\vol}(\alpha)-n B \cdot \beta)\cdot \widehat{\vol}(\alpha)^{-1/n}\\ &=(B^{n} - n B \cdot \beta)\cdot (B^{n})^{-1/n}. \end{align*} In particular, we have \begin{equation*} \widehat{\vol}(\alpha - \beta) \geq B^{n} - \frac{n^2}{n-1} B \cdot \beta.
\end{equation*} \end{thrm} \begin{proof} The theorem follows immediately from Theorem \ref{general morse} and the fact that $\alpha \succeq B^{n-1}$. \end{proof} \begin{cor} \label{cor curve morseinequality} Let $X$ be a projective variety of dimension $n$. Let $\alpha$ be a big curve class and let $\beta$ be a movable curve class. Write $\alpha = B^{n-1} + \gamma$ for the Zariski decomposition of $\alpha$. If \begin{equation*} \widehat{\vol}(\alpha) - nB \cdot \beta > 0 \end{equation*} then $\alpha - \beta$ is big. \end{cor} \begin{rmk} Superficially, the above theorem appears to differ from the classical algebraic Morse inequality for nef divisors, since $\alpha$ can be any big curve class. However, using the Zariski decomposition one sees that the statement for $\alpha$ is essentially equivalent to the statement for the positive part of $\alpha$, so that Theorem \ref{thm curve morseinequality} is really a claim about nef curve classes. \end{rmk} \begin{exmple} \label{exmple optimal n} The constant $n$ is optimal in Corollary \ref{cor curve morseinequality}. Indeed, for any $\epsilon>0$ there exists a projective variety $X$ such that $$ \widehat{\vol}(\alpha)-(n-\epsilon)B_\alpha \cdot \gamma >0, $$ for some $\alpha\in \Eff_1(X)$ and $\gamma\in \Mov_1(X)$ but $\alpha-\gamma$ is not a big curve class. To find such a variety, let $E$ be an elliptic curve with complex multiplication and set $X = E^{\times n}$. The pseudo-effective cone of divisors $\Eff^1(X)$ is identified with the cone of constant positive $(1,1)$-forms, while the pseudo-effective cone of curves $\Eff_1(X)$ is identified with the cone of constant positive $(n-1,n-1)$-forms. Furthermore, every strictly positive $(n-1, n-1)$-form is the $(n-1)$-fold self-product of a strictly positive $(1,1)$-form. We set \begin{align*} B_\alpha=i\sum_{j=1} ^n dz^j\wedge d\bar z ^j, \qquad B_\gamma=i\sum_{j=1} ^n \lambda_j dz^j\wedge d\bar z ^j, \end{align*} where the $\lambda_j$ are positive constants.
Let $\alpha=B_\alpha ^{n-1}$ and $\gamma=B_\gamma ^{n-1}$. Then $\widehat{\vol}(\alpha)-(n-\epsilon)B_\alpha \cdot \gamma >0$ is equivalent to \begin{align*} \sum_{j=1} ^n \lambda_1\cdots\widehat{\lambda}_j\cdots\lambda_n <\frac{n}{n-\epsilon}, \end{align*} and $\alpha-\gamma$ being big is equivalent to \begin{align*} \lambda_1\cdots\widehat{\lambda}_j\cdots\lambda_n<1 \end{align*} for every $j$. It is now easy to choose $\lambda_1,\ldots, \lambda_n$ such that the first inequality holds but the second fails: for instance, take $\lambda_1 = \delta$ and $\lambda_2 = \cdots = \lambda_n = 1$ with $\delta>0$ sufficiently small, so that the second inequality fails for $j=1$. \end{exmple} \begin{rmk} Using the cone duality $\overline{\mathcal{K}}^* =\mathcal{N}$ and Theorem \ref{thrm appendix reserve KT} in Appendix A, it is easy to extend the above Morse-type inequality for curves to positive currents of bidimension $(1,1)$ over compact K\"ahler manifolds. \end{rmk} One wonders if Theorem \ref{thm curve morseinequality} can be improved: \begin{ques} \label{morseques} Let $X$ be a projective variety of dimension $n$. Let $\alpha$ be a big curve class and let $\beta$ be a movable curve class. Write $\alpha = B^{n-1} + \gamma$ for the Zariski decomposition of $\alpha$. Is \begin{equation*} \widehat{\vol}(\alpha - \beta) \geq \widehat{\vol}(\alpha) - n B \cdot \beta? \end{equation*} \end{ques} \begin{rmk} \label{rmk conj morse} By Theorem \ref{thm curve morseinequality}, if $\widehat{\vol}(\alpha) - n B \cdot \beta>0$ then $\widehat{\vol}$ is $\mathcal{C}^1$ at the point $\alpha-s\beta$ for every $s\in [0,1]$. The derivative formula of $\widehat{\vol}$ implies \begin{align*} \widehat{\vol} (\alpha-\beta)-\widehat{\vol} (\alpha)=\int_0 ^1 -\frac{n}{n-1}B_{\alpha-s\beta} \cdot \beta \, \, ds, \end{align*} where $B_{\alpha-s\beta}$ is the big and nef divisor class defining the Zariski decomposition of $\alpha-s\beta$. To give an affirmative answer to Question \ref{morseques}, we conjecture the following: \begin{align*} B_{\alpha-s\beta} \cdot \beta \leq (n-1)B_{\alpha} \cdot \beta\ \textrm{for every}\ s\in [0,1].
\end{align*} Without loss of generality, we can assume $B_{\alpha} \cdot \beta >0$. Then by continuity of the decomposition, this inequality holds for $s$ in a neighbourhood of $0$. At the moment, we do not know how to show that this neighbourhood covers $[0,1]$. \end{rmk} \begin{rmk} \label{aposteriori} In this section we have seen how to use abstract convex analysis to understand the derivative and geometric inequalities for the volume function for curves. Dually, one could in theory use the same approach to understand properties of the volume function of divisors. Note that since the volume function for divisors is $n$-concave we have $\mathcal{H}^{2}\vol = \vol$, so that we can apply the results of Sections \ref{legendresection} and \ref{formal zariski section} to $\vol$. Once we know a few key properties for $\vol$ or $\mathcal{H}\vol$, such as differentiability, then many well-known results for divisors (the Brunn-Minkowski inequality, the existence of $\sigma$-decompositions, the explicit expression for the derivative, etc.) follow immediately from the general set-up. The polar transform of the volume function is analyzed in more detail in \cite{lehmannxiao2015b}. \end{rmk} \section{Toric varieties} \label{toric section} In this section $X$ will denote a simplicial projective toric variety of dimension $n$. In terms of notation, $X$ will be defined by a fan $\Sigma$ in a lattice $N$ with dual lattice $M$. We let $\{ v_{i} \}$ denote the primitive generators of the rays of $\Sigma$ and $\{ D_{i} \}$ denote the corresponding classes of $T$-divisors. \subsection{Mixed volumes} Suppose that $L$ is a big movable divisor class on the toric variety $X$. Then $L$ naturally defines a (non-lattice) polytope $Q_{L}$: if we choose an expression $L = \sum_{i} a_{i}D_{i}$, then \begin{equation*} Q_{L} = \{ u \in M_{\mathbb{R}} \mid \langle u,v_{i} \rangle + a_{i} \geq 0 \text{ for all } i\}, \end{equation*} and changing the choice of representative corresponds to a translation of $Q_{L}$.
Conversely, suppose that $Q$ is a full-dimensional polytope such that the unit normals to the facets of $Q$ form a subset of the rays of $\Sigma$. Then $Q$ uniquely determines a big movable divisor class $L_{Q}$ on $X$. The divisors in the interior of the movable cone correspond to those polytopes whose facet normals coincide with the rays of $\Sigma$. Given polytopes $Q_{1},\ldots,Q_{n}$, let $V(Q_{1},\ldots,Q_{n})$ denote the mixed volume of the polytopes. \cite{bfj09} explains that the positive product of big movable divisors $L_{1},\ldots,L_{n}$ can be interpreted via the mixed volume of the corresponding polytopes: \begin{equation*} \langle L_{1} \cdot \ldots \cdot L_{n} \rangle = n! V(Q_{1},\ldots,Q_{n}). \end{equation*} Now suppose that $\alpha$ lies in the interior of $\Mov_{1}(X)$. Using \cite[Theorem 1.8]{lehmannxiao2015b}, we see that $\alpha = \langle L^{n-1} \rangle$ for some big movable divisor class $L$. Let $P_{\alpha}$ denote the polytope corresponding to $L$. Reinterpreting $\langle L^{n-1} \rangle \cdot A$ as a positive product for an ample divisor $A$, we see that the volume of $\alpha$ is \begin{equation*} \inf_{Q} \left( \frac{n! V(P_{\alpha}^{n-1},Q)}{ n!^{1/n}\vol(Q)^{1/n}} \right)^{n/(n-1)} = n! \inf_{Q} \left( \frac{V(P_{\alpha}^{n-1},Q)}{\vol(Q)^{1/n}} \right)^{n/(n-1)}, \end{equation*} where $Q$ varies over all polytopes whose normal fan is refined by $\Sigma$. \subsection{Computing the Zariski decomposition} The nef cone of divisors and pseudo-effective cone of curves on $X$ can be computed algorithmically. Thus, for any face $F$ of the nef cone, by considering the $(n-1)$-product and adding on any curve classes in the dual face, one can easily divide $\Eff_{1}(X)$ into regions where the positive product is determined by a class on $F$. In practice this is a good way to compute the Zariski decomposition (and hence the volume) of curve classes on $X$. In the other direction, suppose we start with a big curve class $\alpha$.
On a toric variety, every big and nef divisor is semi-ample (that is, the pullback of an ample divisor on a toric birational model). Thus, the Zariski decomposition is characterized by the existence of a birational toric morphism $\pi: X \to X'$ such that: \begin{itemize} \item the class $\pi_{*}\alpha \in N_{1}(X')$ coincides with $A^{n-1}$ for some ample divisor $A$, and \item $\alpha - (\pi^*A)^{n-1}$ is pseudo-effective. \end{itemize} Thus one can compute the Zariski decomposition and volume for $\alpha$ by the following procedure. \begin{enumerate} \item For each toric birational morphism $\pi: X \to X'$, check whether $\pi_{*}\alpha$ is in the complete intersection cone. If so, there is a unique big and nef divisor $A_{X'}$ such that $A_{X'}^{n-1} = \pi_{*}\alpha$. \item Check if $\alpha - (\pi^{*}A_{X'})^{n-1}$ is pseudo-effective. \end{enumerate} The first step involves solving polynomial equations to deduce the equality of coefficients of numerical classes, but otherwise this procedure is completely algorithmic. (Note that there may be no natural pullback from $\Eff_{1}(X')$ to $\Eff_{1}(X)$, and in particular, the calculation of $(\pi^*A_{X'})^{n-1}$ is not linear in $A_{X'}^{n-1}$.) \begin{exmple}\label{example toric zariski} Let $X$ be the toric variety defined by a fan in $N = \mathbb{Z}^{3}$ on the rays \begin{align*} v_{1} & = (1,0,0) \qquad \qquad \, \, \, \, v_{2} = (0,1,0) \qquad \qquad \, \, \, \,\, \, v_{3} = (1,1,1) \\ v_{4} & = (-1,0,0) \qquad \qquad v_{5} = (0,-1,0) \qquad \qquad v_{6} = (0,0,-1) \end{align*} with maximal cones \begin{align*} \langle v_{1}, v_{2}, v_{3} \rangle, \, \langle v_{1}, v_{2}, v_{6} \rangle, \, \langle v_{1}, v_{3}, v_{5} \rangle, \, \langle v_{1}, v_{5}, v_{6} \rangle, \\ \langle v_{2}, v_{3}, v_{4} \rangle, \, \langle v_{2}, v_{4}, v_{6} \rangle, \, \langle v_{3}, v_{4}, v_{5} \rangle, \, \langle v_{4}, v_{5}, v_{6} \rangle. \end{align*} The Picard rank of $X$ is $3$. 
Letting $D_{i}$ and $C_{ij}$ be the divisors and curves corresponding to $v_{i}$ and $\overline{v_{i}v_{j}}$ respectively, we have intersection product \begin{equation*} \begin{array}{c|c|c|c} & D_{1} & D_{2} & D_{3} \\ \hline C_{12} & -1 & -1 & 1 \\ \hline C_{13} & 0 & 1 & 0 \\ \hline C_{23} & 1 & 0 & 0 \end{array} \end{equation*} Standard toric computations show that: \begin{align*} \Eff^{1}(X) = \langle D_1, D_2, D_3 \rangle \qquad & \qquad \Nef^{1}(X) = \langle D_1+D_3, D_2+D_3, D_3 \rangle \\ \Mov^1(X) = \langle D_1 &+D_2, D_1+D_3,D_2+D_3,D_3 \rangle \end{align*} and \begin{align*} \Eff_{1}(X) = \langle C_{12}, C_{13}, C_{23} \rangle \qquad & \qquad \Nef_{1}(X) = \langle C_{12}+C_{13}+C_{23}, C_{13},C_{23} \rangle. \end{align*} $X$ admits a unique flip and has only one birational contraction corresponding to the face of $\Nef^{1}(X)$ generated by $D_{1}+D_{3}$ and $D_{2}+D_{3}$. Set $B_{a,b} = aD_{1}+bD_{2}+(a+b)D_{3}$. The complete intersection cone is given by taking the convex hull of the boundary classes \begin{equation*} B_{a,b}^2 = T_{a,b} = 2abC_{12} + (a^{2}+2ab)C_{13} + (b^{2}+2ab)C_{23} \end{equation*} and the face of $\Nef_{1}(X)$ spanned by $C_{13},C_{23}$. For any big class $\alpha$ not in $\CI_{1}(X)$, the positive part can be computed on the unique toric birational contraction $\pi: X \to X'$ given by contracting $C_{12}$. In practice, the procedure above amounts to solving $\alpha - t C_{12} = T_{a,b}$ for some $a,b,t$. If $\alpha = xC_{12} + yC_{13} + zC_{23}$, this yields the quadratic equation $4(y-x+t)(z-x+t) = (x-t)^{2}$. Solving this for $t$ tells us $\gamma = tC_{12}$, and the volume can then easily be computed. \end{exmple} \section{Hyperk\"ahler manifolds} \label{hyperkahler section} Throughout this section $X$ will denote a hyperk\"ahler variety of dimension $n$ (with $n=2m$). We will continue to work in the projective setting. 
However, as explained in Section \ref{kahler background sec}, Demailly's conjecture on the transcendental Morse inequality is known for hyperk\"ahler manifolds. Thus all the results in this section and related results in \cite{lehmannxiao2015b} can be extended to the K\"ahler setting for hyperk\"ahler manifolds without qualification. Let $\sigma$ be a symplectic holomorphic form on $X$. For a real divisor class $D \in N^{1}(X)$ the Beauville-Bogomolov quadratic form is defined as \begin{align*} q(D)= D^2 \cdot \{(\sigma \wedge \bar \sigma)\}^{n/2-1}, \end{align*} where we normalize the symplectic form $\sigma$ such that \begin{align*} q(D)^{n/2}= D^n. \end{align*} As proved in \cite[Section 4]{Bou04}, the bilinear form $q$ is compatible with the volume function and $\sigma$-decomposition for divisors in the following way: \begin{enumerate} \item The cone of movable divisors is $q$-dual to the pseudo-effective cone. \item If $D$ is a movable divisor then $\vol(D) = q(D,D)^{n/2} = D^{n}$. \item For a pseudo-effective divisor $D$ write $D = P_{\sigma}(D) + N_{\sigma}(D)$ for its $\sigma$-decomposition. Then $q(P_{\sigma}(D),N_{\sigma}(D)) = 0$, and if $N_{\sigma}(D) \neq 0$ then $q(N_{\sigma}(D),N_{\sigma}(D))<0$. \end{enumerate} The bilinear form $q$ induces an isomorphism $\psi: N^{1}(X) \to N_{1}(X)$ by sending a divisor class $D$ to the curve class defining the linear function $q(D,-)$. We obtain an induced bilinear form $q$ on $N_{1}(X)$ via the isomorphism $\psi$, so that for curve classes $\alpha$, $\beta$ \begin{equation*} q(\alpha,\beta) = q(\psi^{-1}\alpha,\psi^{-1}\beta) = \psi^{-1}\alpha \cdot \beta. \end{equation*} In particular, two cones $\mathcal{C}, \mathcal{C}'$ in $N^{1}(X)$ are $q$-dual if and only if $\psi(\mathcal{C})$ is dual to $\mathcal{C}'$ under the intersection pairing (and similarly for cones of curves).
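\begin{rmk} In the surface case $n = 2$, where $X$ is a projective K3 surface, all of this is classical: $\{(\sigma \wedge \bar \sigma)\}^{n/2-1}$ is the fundamental class, so $q$ is the intersection form on $N^{1}(X)$, $\psi$ is the standard identification of $N^{1}(X)$ with $N_{1}(X)$ given by the intersection pairing, and the induced form $q$ on $N_{1}(X)$ is again the intersection form. The compatibility statements verified below then essentially recover familiar properties of the Zariski decomposition on surfaces: for a big curve class $\alpha = B + \gamma$ with $B$ big and nef, one has \begin{equation*} \widehat{\vol}(\alpha) = B^{2}, \qquad B \cdot \gamma = 0, \qquad \gamma^{2} < 0 \ \text{whenever } \gamma \neq 0. \end{equation*} \end{rmk}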
In this section we verify that the bilinear form $q$ on $N_{1}(X)$ is compatible with the volume and Zariski decomposition for curve classes in the same way as for divisors. \begin{rmk} Since the signature of the Beauville-Bogomolov form is $(1,\dim N^{1}(X)-1)$, one can use the Hodge inequality to analyze the Zariski decomposition as in Example \ref{bilinearexample}. We will instead give a direct geometric argument to emphasize the ties with the divisor theory. \end{rmk} We first need the following proposition. \begin{prop} \label{hyperkahlerposprod} Let $D$ be a big movable divisor class on $X$. Then we have \begin{equation*} \psi(D) = \frac{\langle D^{n-1} \rangle}{\vol(D)^{(n-2)/n}}. \end{equation*} In particular, the complete intersection cone coincides with the $\psi$-image of the nef cone of divisors and if $A$ is a big and nef divisor then $\widehat{\vol}(\psi(A)) = \vol(A)^{1/(n-1)}$. \end{prop} \begin{proof} First note that $\psi(D)$ is contained in $\Mov_{1}(X)$. Indeed, since the movable cone of divisors is $q$-dual to the pseudo-effective cone of divisors by \cite[Proposition 4.4]{Bou04}, the $\psi$-image of the movable cone of divisors is dual to the pseudo-effective cone of divisors. For any big movable divisor $L$, the polarization identity for $q$ shows that \begin{equation*} L \cdot \psi(D) = q(L,D) = \frac{1}{2} (\vol(L+D)^{2/n} - \vol(L)^{2/n} - \vol(D)^{2/n}). \end{equation*} In \cite[Theorem 1.7]{lehmannxiao2015b} we show that $\vol(L+D)^{1/n} \geq \vol(L)^{1/n} + \vol(D)^{1/n}$ with equality if and only if $L$ and $D$ are proportional. Squaring and rearranging, we see that \begin{equation*} \frac{L \cdot \psi(D)}{\vol(L)^{1/n}} \geq \vol(D)^{1/n} \end{equation*} with equality if and only if $L$ is proportional to $D$. By \cite[Proposition 3.3 and Theorem 3.12]{lehmannxiao2015b} we immediately get that \begin{equation*} \psi(D) = \frac{\langle D^{n-1} \rangle}{\vol(D)^{(n-2)/n}}. \end{equation*} The final statements follow immediately.
\end{proof} \begin{thrm} \label{secondhyperkahlertheorem} Let $q$ denote the Beauville-Bogomolov form on $N_{1}(X)$. Then: \begin{enumerate} \item The complete intersection cone of curves is $q$-dual to the pseudo-effective cone of curves. \item If $\alpha$ is a complete intersection curve class then $\widehat{\vol}(\alpha) = q(\alpha,\alpha)^{n/(2(n-1))}$. \item For a big class $\alpha$ write $\alpha = B^{n-1} + \gamma$ for its Zariski decomposition. Then $q(B^{n-1},\gamma) = 0$ and if $\gamma$ is non-zero then $q(\gamma,\gamma) < 0$. \end{enumerate} \end{thrm} \begin{proof} For (1), since the complete intersection cone coincides with $\psi(\Nef^{1}(X))$ it is $q$-dual to the dual cone of $\Nef^{1}(X)$. For (2), by Proposition \ref{hyperkahlerposprod} we have \begin{align*} q(\psi(A),\psi(A)) = q(A,A) & = \vol(A)^{2/n} \\ & = \widehat{\vol}(\psi(A))^{2(n-1)/n}. \end{align*} For (3), we have \begin{equation*} q(B^{n-1},\gamma) = \psi^{-1}(B^{n-1}) \cdot \gamma = \vol(B)^{(n-2)/n} B \cdot \gamma = 0. \end{equation*} For the final statement $q(\gamma,\gamma)<0$, note that \begin{equation*} q(\alpha,\alpha) = q(B^{n-1},B^{n-1}) + q(\gamma,\gamma), \end{equation*} so it suffices to show that $q(\alpha,\alpha) < q(B^{n-1},B^{n-1})$. Set $D = \psi^{-1}\alpha$. The desired inequality is clear if $q(D,D) \leq 0$, so by \cite[Corollary 3.10 and Erratum Proposition 1]{Huy99} it suffices to restrict our attention to the case when $D$ is big. (Note that the case when $-D$ is big cannot occur, since $q(D,A) = A \cdot \alpha > 0$ for an ample divisor class $A$.) Let $D = P_{\sigma}(D) + N_{\sigma}(D)$ be the $\sigma$-decomposition of $D$. By \cite[Proposition 4.2]{Bou04} we have $q(N_{\sigma}(D),B) \geq 0$. Thus \begin{align*} \vol(B)^{2(n-1)/n} = q(B^{n-1},B^{n-1}) & = q(\alpha,B^{n-1}) \\ & = \vol(B)^{(n-2)/n} q(D,B) \geq \vol(B)^{(n-2)/n}q(P_{\sigma}(D),B).
\end{align*} Arguing just as in the proof of Proposition \ref{hyperkahlerposprod}, we see that \begin{equation*} q(P_{\sigma}(D),B) \geq \vol(P_{\sigma}(D))^{1/n} \vol(B)^{1/n} \end{equation*} with equality if and only if $P_{\sigma}(D)$ and $B$ are proportional. Combining the two previous equations we obtain \begin{equation*} \vol(B)^{(n-1)/n} \geq \vol(P_{\sigma}(D))^{1/n}, \end{equation*} with equality possible only if $B$ and $P_{\sigma}(D)$ are proportional. Then we calculate: \begin{align*} q(\alpha,\alpha) & = q(D,D) \\ & \leq q(P_{\sigma}(D),P_{\sigma}(D)) \textrm{ by \cite[Theorem 4.5]{Bou04}} \\ & = \vol(P_{\sigma}(D))^{2/n} \\ & \leq \vol(B)^{2(n-1)/n} = q(B^{n-1},B^{n-1}). \end{align*} If $P_{\sigma}(D)$ and $B$ are not proportional, we obtain a strict inequality at the last step. If $P_{\sigma}(D)$ and $B$ are proportional, then $N_{\sigma}(D) \neq 0$ (since otherwise $D=B$ and $\alpha$ is a complete intersection class). Then by \cite[Theorem 4.5]{Bou04} we have a strict inequality $q(P_{\sigma}(D),P_{\sigma}(D)) > q(D,D)$ on the second line. In either case we conclude $q(\alpha,\alpha) < q(B^{n-1},B^{n-1})$ as desired. \end{proof} \section{Connections with birational geometry} \label{applications sec} We end with a discussion of several connections between positivity of curves and other constructions in birational geometry. There is a large body of literature relating the positivity of a divisor at a point to its intersections against curves through that point. One can profitably reinterpret these relationships in terms of the volume of curve classes. A key result conceptually is: \begin{prop} \label{multiplicityestimate} Let $X$ be a smooth projective variety of dimension $n$. Choose positive integers $\{k_{i} \}_{i=1}^{r}$.
Suppose that $\alpha \in \Mov_{1}(X)$ is represented by a family of irreducible curves such that for any collection of general points $x_{1},x_{2},\ldots,x_{r},y$ of $X$, there is a curve in our family which contains $y$ and contains each $x_{i}$ with multiplicity $\geq k_{i}$. Then \begin{equation*} \widehat{\vol}(\alpha)^{\frac{n-1}{n}} \geq \frac{\sum_{i} k_{i}}{r^{1/n}}. \end{equation*} \end{prop} This is just a rephrasing of well-known results in birational geometry; see for example \cite[V.2.9 Proposition]{k96}. \begin{proof} By continuity and rescaling invariance, it suffices to show that if $L$ is a big and nef Cartier divisor class then \begin{equation*} \left( \sum_{i=1}^{r} k_{i} \right) \frac{\vol(L)^{1/n}}{r^{1/n}} \leq L \cdot C. \end{equation*} A standard argument (see for example \cite[Example 8.22]{lehmann14}) shows that for any $\epsilon > 0$ and any general points $\{ x_{i} \}_{i=1}^{r}$ of $X$ there is a positive integer $m$ and a Cartier divisor $M$ numerically equivalent to $mL$ such that $\mult_{x_{i}}M \geq mr^{-1/n}\vol(L)^{1/n} - \epsilon$ for every $i$. By the assumption on the family of curves we may find an irreducible curve $C$ with multiplicity $\geq k_{i}$ at each $x_{i}$ that is not contained in $M$. Then \begin{equation*} m(L \cdot C) \geq \sum_{i=1}^{r} k_{i} \mult_{x_{i}}M \geq \left(\sum_{i = 1}^{r} k_{i} \right) \left(\frac{ m\vol(L)^{1/n}}{r^{1/n}} - \epsilon \right). \end{equation*} Divide by $m$ and let $\epsilon$ go to $0$ to conclude. \end{proof} \begin{exmple} The most important special case is when $\alpha$ is the class of a family of irreducible curves such that for any two general points of $X$ there is a curve in our family containing them. Proposition \ref{multiplicityestimate} then shows that $\widehat{\vol}(\alpha) \geq 1$. \end{exmple} \subsection{Seshadri constants} Let $X$ be a smooth projective variety of dimension $n$ and let $A$ be a big and nef $\mathbb{R}$-Cartier divisor on $X$.
Recall that for points $\{ x_{i} \}_{i=1}^{r}$ on $X$ the Seshadri constant of $A$ along the $\{ x_{i} \}$ is \begin{equation*} \varepsilon(x_{1},\ldots,x_{r},A) := \inf_{C \ni x_{i}} \frac{A \cdot C}{\sum_{i} \mult_{x_{i}}C}, \end{equation*} where the infimum is taken over all reduced irreducible curves $C$ containing at least one of the points $x_{i}$. An easy intersection calculation on the blow-up of $X$ at the $r$ points shows that \begin{equation*} \varepsilon(x_{1},\ldots,x_{r},A) \leq \frac{\vol(A)^{1/n}}{r^{1/n}}. \end{equation*} When the $r$ points are very general, $r$ is large, and $A$ is sufficiently ample, one ``expects'' the two sides of the inequality to be close. This heuristic can fail badly, but it is interesting to analyze how close it is to being true. In particular, the Seshadri constant should only be very small compared to the volume in the presence of a ``Seshadri-exceptional fibration'' (see \cite{ekl95}, \cite{hk03}). This motivates the following definition: \begin{defn} Let $A$ be a big and nef $\mathbb{R}$-Cartier divisor on $X$ and let $\varepsilon(x_{1},\ldots,x_{r},A)$ denote the Seshadri constant of $A$ along $r$ points $\mathbf{x} := \{ x_{i} \}$ of $X$. We define the Seshadri ratio of $A$ to be \begin{equation*} sr_{\mathbf{x}}(A) := \frac{r^{1/n} \varepsilon(x_{1},\ldots,x_{r},A)}{\vol(A)^{1/n}}. \end{equation*} \end{defn} Note that the Seshadri ratio is at most $1$, and that low values should only arise in special geometric situations. The principle established by \cite{ekl95}, \cite{hk03} is that if the Seshadri ratio for $A$ is small, then the curves which approximate the bound in the Seshadri constant cannot ``move too much.'' In this section we revisit these known results on Seshadri constants from the perspective of the volume of curves. In particular we demonstrate how the Zariski decomposition can be used to bound the classes of curves $C$ which give small values in the Seshadri computations above.
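For reference, we spell out the standard intersection calculation behind the upper bound $\varepsilon(x_{1},\ldots,x_{r},A) \leq \vol(A)^{1/n}/r^{1/n}$: let $\pi: Y \to X$ be the blow-up of the $r$ points with exceptional divisors $E_{1},\ldots,E_{r}$. By the Seshadri criterion, the class $\pi^{*}A - t\sum_{i} E_{i}$ is nef for $0 \leq t \leq \varepsilon(x_{1},\ldots,x_{r},A)$, so its top self-intersection is nonnegative: \begin{equation*} 0 \leq \Big( \pi^{*}A - t\sum_{i=1}^{r} E_{i} \Big)^{n} = A^{n} - rt^{n}, \end{equation*} using $(\pi^{*}A)^{n-k} \cdot E_{i}^{k} = 0$ for $1 \leq k \leq n-1$ and $E_{i}^{n} = (-1)^{n-1}$. Since $A$ is big and nef we have $\vol(A) = A^{n}$, and letting $t$ increase to the Seshadri constant gives the bound.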
\begin{prop} \label{easyseshadriestimate} Let $X$ be a smooth projective variety of dimension $n$ and let $A$ be a big and nef $\mathbb{R}$-Cartier divisor on $X$. Fix $\delta > 0$ and fix $r$ points $x_{1},\ldots,x_{r}$. Suppose that $C$ is a curve containing at least one of the $x_{i}$ and such that \begin{equation*} \varepsilon(x_{1},\ldots,x_{r},A) (1+ \delta) > \frac{A \cdot C}{\sum_{i} \mult_{x_{i}}C}. \end{equation*} Letting $\alpha$ denote the numerical class of $C$, we have \begin{equation*} sr_{\mathbf{x}}(A) (1+\delta) \geq r^{1/n}\frac{\widehat{\vol}(\alpha)^{(n-1)/n}}{\sum_{i} \mult_{x_{i}}C}. \end{equation*} \end{prop} In fact, this estimate is rather crude; with better control on the relationship between $A$ and $\alpha$, one can do much better. \begin{proof} One simply multiplies both sides of the first inequality by $r^{1/n}/\vol(A)^{1/n}$ to deduce that \begin{align*} sr_{\mathbf{x}}(A) (1+\delta) & \geq r^{1/n} \frac{A \cdot C}{\vol(A)^{1/n} \sum_{i} \mult_{x_{i}}C} \end{align*} and then uses the obvious inequality $(A \cdot C)/\vol(A)^{1/n} \geq \widehat{\vol}(\alpha)^{(n-1)/n}$. \end{proof} We can then bound the Seshadri ratio of $A$ in terms of the Zariski decomposition of the curve. \begin{prop} Let $X$ be a smooth projective variety of dimension $n$ and let $A$ be a big and nef $\mathbb{R}$-Cartier divisor on $X$. Fix $\delta > 0$ and fix $r$ distinct points $x_{i} \in X$. Suppose that $C$ is a curve containing at least one of the $x_{i}$ such that the class $\alpha$ of $C$ is big and \begin{equation*} \varepsilon(x_{1},\ldots,x_{r},A) (1+ \delta) > \frac{A \cdot C}{\sum_{i} \mult_{x_{i}}C}. \end{equation*} Write $\alpha = B^{n-1} + \gamma$ for the Zariski decomposition. Then $sr_{\mathbf{x}}(A) (1+\delta) > sr_{\mathbf{x}}(B)$.
\end{prop} \begin{proof} By Proposition \ref{easyseshadriestimate} it suffices to show that \begin{equation*} r^{1/n}\frac{\widehat{\vol}(\alpha)^{(n-1)/n}}{\sum_{i} \mult_{x_{i}}C} \geq sr_{\mathbf{x}}(B). \end{equation*} But this follows from the definition of Seshadri constants along with the fact that $B \cdot C = \widehat{\vol}(\alpha)$. \end{proof} These results are of particular interest in the case when the points are very general, when it is easy to deduce the bigness of the class of $C$. Certain geometric properties of Seshadri constants become very clear from this perspective. For example, following the notation of \cite{nagata60} we say that a curve $C$ on $X$ is abnormal for a set of $r$ points $\{x_{i}\}$ and a big and nef divisor $A$ if $C$ contains at least one $x_{i}$ and \begin{equation*} 1 > \frac{r^{1/n}(A \cdot C)}{\vol(A)^{1/n} \sum_{i} \mult_{x_{i}}C}. \end{equation*} \begin{cor} Let $X$ be a smooth projective variety of dimension $n$ and let $A$ be a big and nef $\mathbb{R}$-Cartier divisor on $X$. Fix $r$ very general points $x_{1},\ldots,x_{r}$. Then no abnormal curve goes through a very general point of $X$ aside from the $x_{i}$. \end{cor} \begin{proof} Since the $x_{i}$ are very general, any curve going through at least one more very general point deforms to cover the whole space, so its class is big and nef. Then combine Proposition \ref{easyseshadriestimate} and Proposition \ref{multiplicityestimate} to deduce that if the Seshadri constant of the $\{x_{i}\}$ is computed by a curve through an additional very general point then $sr_{\mathbf{x}}(A) = 1$. \end{proof} \subsection{Rationally connected varieties} \label{rcexample} Given a rationally connected variety $X$ of dimension $n$, it is interesting to ask for the possible volumes of curve classes representing rational curves. In particular, one would like to know if one can find classes whose volumes satisfy a uniform upper bound depending only on the dimension.
There are four natural options: \begin{enumerate} \item Consider all classes of rational curves. \item Consider all classes of chains of rational curves which connect two general points. \item Consider all classes of irreducible rational curves which connect two general points. \item Consider all classes of very free rational curves. \end{enumerate} Note that each criterion is more special than the previous ones. We call a class of the second kind an RCC class and a class of the fourth kind a VF class. Every one of the classes (2), (3), (4) has positive volume; indeed, \cite{8authors} shows that if two general points of $X$ can be connected via a chain of curves of class $\alpha$, then $\alpha$ is a big class. On a Fano variety of Picard rank $1$, the minimal volume of an RCC class is determined by the degree and the minimal degree of an RCC class against the ample generator (or equivalently, by the degree, the index, and the length of an RCC class). The minimum volume is thus related to these well-studied invariants. In higher dimensions, the work of \cite{KMM92} and \cite{campana92} shows that there are constants $C(n)$, $C'(n)$ such that any $n$-dimensional smooth Fano variety carries an RCC class $\alpha$ satisfying $-K_{X} \cdot \alpha \leq C(n)$, and a VF class $\beta$ satisfying $-K_{X} \cdot \beta \leq C'(n)$. We then also obtain explicit bounds on the minimal volume of an RCC or VF class on $X$. It is interesting to ask what happens for arbitrary rationally connected varieties. \begin{exmple} We briefly discuss bounds on the volumes of rational curve classes on smooth surfaces. Consider first the Hirzebruch surfaces $\mathbb{F}_{e}$. It is clear that on a Hirzebruch surface a curve class is RCC if and only if it is big, and one easily sees that the minimum volume for an RCC class is $\frac{1}{e}$. Thus there is no non-trivial universal lower bound for the minimum volume of an RCC class.
In terms of upper bounds, note that if $\pi: Y \to X$ is a birational map and $\alpha$ is an RCC class, then $\pi_{*}\alpha$ is an RCC class as well. Conversely, given any RCC class $\beta$ on $X$, there is some preimage $\beta'$ on $Y$ which is also an RCC class. Thus by Proposition \ref{birvolstatement}, we see that any rational surface carries an RCC class of volume no greater than that of an RCC class on a minimal surface. This shows that any smooth rational surface has an RCC class of volume at most $1$. On a surface any VF class is necessarily big and nef, so the universal lower bound on the volume is $1$. In the other direction, consider again the Hirzebruch surface $\mathbb{F}_{e}$. Any VF class will have the form $aC_{0} + bF$ where $C_{0}$ is the section of negative self-intersection and $F$ is the class of a fiber. Note that the self-intersection of $aC_{0} + bF$ is $2ab - a^{2}e$. For a VF class we clearly must have $a \geq 1$, so that $b \geq ea$ to ensure nefness. Thus the smallest possible volume of a VF class is $e$, and this is achieved by the class $C_{0} + eF$. Note that there is no uniform upper bound on the minimum volume of a VF class. \end{exmple} As indicated in the previous example, it is most interesting to look for upper bounds on the minimum volume of an RCC class. Indeed, by taking products with projective spaces, one sees that in any dimension the only uniform lower bound for volumes of RCC classes is $0$. Furthermore, there is no uniform upper bound for the minimum volume of a VF class. The crucial distinction is that VF classes are nef, while RCC classes need not be, so that a uniform bound on the volume of a VF class can only be expected for bounded families of varieties. The following question gives a ``birational'' version of the well-known results of \cite{KMM92}. \begin{ques} Let $X$ be a smooth rationally connected variety of dimension $n$.
Is there a bound $d(n)$, depending only on $n$, such that $X$ admits an RCC class of volume at most $d(n)$? \end{ques} It is also interesting to ask for optimal bounds on volumes. The first situation to consider is the ``extremes'' in the examples above. Note that the lower bound of the volume of a VF class is $1$ by Proposition \ref{multiplicityestimate}, so it is interesting to ask when the minimum is achieved. \begin{ques} For which varieties $X$ is the smallest volume of an RCC class equal to $1$? For which varieties $X$ is the smallest volume of a VF class equal to $1$? \end{ques} \section{Appendix} \subsection{Reverse Khovanskii-Teissier inequalities} An important step in the analysis of the Morse inequality is the ``reverse'' Khovanskii-Teissier inequality for big and nef divisors $A$, $B$, and a movable curve class $\beta$: \begin{equation*} n(A \cdot B^{n-1})(B \cdot \beta) \geq B^{n} (A \cdot \beta). \end{equation*} We prove a more general statement on ``reverse'' Khovanskii-Teissier inequalities in the analytic setting. Some related work has appeared independently in the recent preprint \cite{popovici15}. \begin{thrm} \label{thrm appendix reserve KT} Let $X$ be a compact K\"ahler manifold of dimension $n$. Let $\alpha, \beta, \gamma\in \overline{\mathcal{K}}$ be three nef classes on $X$. Then we have $$ (\beta^k \cdot \alpha^{n-k})\cdot (\alpha^k\cdot \gamma^{n-k})\geq \frac{k!(n-k)!}{n!}\alpha^n\cdot (\beta^k \cdot\gamma^{n-k}). $$ \end{thrm} \begin{proof} The proof depends on solving Monge-Amp\`{e}re equations and the method of \cite{Pop14}. Without loss of generality, we can assume $\gamma$ is normalised such that $\beta^k\cdot \gamma^{n-k}=1$. Then we need to show \begin{align} \label{eq need to prove} (\beta^k \cdot \alpha^{n-k})\cdot (\alpha^k\cdot \gamma^{n-k})\geq \frac{k!(n-k)!}{n!}\alpha^n. \end{align} We first assume $\alpha,\beta,\gamma$ are all K\"ahler classes.
We will use the same symbols to denote the K\"ahler metrics in corresponding K\"ahler classes. By the Calabi-Yau theorem \cite{Yau78}, we can solve the following Monge-Amp\`{e}re equation: \begin{align} \label{eq MA} (\alpha+i\partial\bar \partial \psi)^n= \left(\int \alpha^n \right) \beta^k\wedge \gamma^{n-k}. \end{align} Denote by $\alpha_\psi$ the K\"ahler metric $\alpha+i\partial\bar \partial \psi$. Then we have \begin{align*} (\beta^k \cdot \alpha^{n-k})\cdot (\alpha^k\cdot \gamma^{n-k})&=\int \beta^k \wedge \alpha_\psi^{n-k}\cdot \int \alpha_\psi^k \wedge \gamma^{n-k}\\ &= \int \frac{\beta^k \wedge \alpha_\psi^{n-k}}{\alpha_\psi ^n}\alpha_\psi ^n\cdot \int \frac{\alpha_\psi^k \wedge \gamma^{n-k}}{\alpha_\psi ^n}\alpha_\psi ^n\\ &\geq \left( \int \left(\frac{\beta^k \wedge \alpha_\psi^{n-k}}{\alpha_\psi ^n} \cdot \frac{\alpha_\psi^k \wedge \gamma^{n-k}}{\alpha_\psi ^n} \right)^{1/2}\alpha_\psi ^n\right)^2. \end{align*} The last line follows because of the Cauchy-Schwarz inequality. We claim that the following pointwise inequality holds: \begin{align*} \frac{\beta^k \wedge \alpha_\psi^{n-k}}{\alpha_\psi ^n} \cdot \frac{\alpha_\psi^k \wedge \gamma^{n-k}}{\beta^k \wedge \gamma^{n-k}} \geq \frac{k!(n-k)!}{n!}. \end{align*} Then by (\ref{eq MA}) it is clear the above pointwise inequality implies the desired inequality (\ref{eq need to prove}). For any fixed point $p\in X$, we can choose some coordinates such that at the point $p$: $$\alpha_\psi= i\sum_{j=1}^n dz^j \wedge d\bar z ^j, \quad \beta = i\sum_{j=1}^n \mu_j dz^j \wedge d\bar z ^j, $$ and $$\gamma^{n-k}=i^{n-k} \sum_{|I|=|J|=n-k} \Gamma_{IJ} dz_I \wedge d\bar z_{J}.$$ Denote by $\mu_J$ the product $\mu_{j_1}...\mu_{j_k}$ with index $J=(j_1<...<j_k)$ and denote by $J^c$ the complement index of $J$. 
Then it is easy to see that at the point $p$ we have $$ \frac{\beta^k \wedge \alpha_\psi^{n-k}}{\alpha_\psi ^n} \cdot \frac{\alpha_\psi^k \wedge \gamma^{n-k}}{\beta^k \wedge \gamma^{n-k}}= \frac{k!(n-k)!}{n!} \frac{(\sum_{J}\mu_J)(\sum_K \Gamma_{KK})}{\sum_{J}\mu_J \Gamma_{J^c J^c}}\geq \frac{k!(n-k)!}{n!}. $$ This finishes the proof of the case when $\alpha,\beta,\gamma$ are all K\"ahler classes. If they are merely nef classes, the desired inequality follows by taking limits. \end{proof} \begin{rmk} \label{rmk appendix movable} By \cite[Section 2.1.1]{xiao15}, for $k=1$ we can always replace $\gamma^{n-1}$ in Theorem \ref{thrm appendix reserve KT} by an arbitrary movable class. \end{rmk} \begin{rmk} It would be interesting to find an algebraic approach to Theorem \ref{thrm appendix reserve KT}, thus generalizing it to projective varieties defined over arbitrary fields. \end{rmk} \subsection{Towards the transcendental holomorphic Morse inequality} Recall that the (weak) transcendental holomorphic Morse inequality over compact K\"ahler manifolds conjectured by Demailly is stated as follows: \begin{itemize} \item Let $X$ be a compact K\"ahler manifold of dimension $n$, and let $\alpha, \beta\in \overline{\mathcal{K}}$ be two nef classes. Then we have $\vol(\alpha-\beta)\geq \alpha^n -n \alpha^{n-1}\cdot \beta$. In particular, if $\alpha^n -n \alpha^{n-1}\cdot \beta>0$ then there exists a K\"ahler current in the class $\alpha-\beta$. \end{itemize} Indeed, the last statement has been proved in the recent work \cite{Xia13, Pop14}. The missing part is how to bound the volume $\vol(\alpha-\beta)$ from below by $\alpha^n -n \alpha^{n-1}\cdot \beta$.
By \cite[Theorem 2.1 and Remark 2.3]{xiao15} the volume for transcendental pseudo-effective $(1,1)$-classes is conjectured to be characterized as follows: \begin{align} \label{eq vol conj} \vol(\alpha)=\inf_{\gamma\in \mathcal{M}, \mathfrak{M}(\gamma)=1}(\alpha\cdot \gamma)^n. \end{align} For the definition of $\mathfrak{M}$ in the K\"ahler setting, see \cite[Definition 2.2]{xiao15}. If we denote the right hand side of (\ref{eq vol conj}) by $\overline{\vol}(\alpha)$, then we can prove the following: \begin{thrm} \label{thm tran morse} Let $X$ be a compact K\"ahler manifold of dimension $n$, and let $\alpha, \beta\in \overline{\mathcal{K}}$ be two nef classes. Then we have $$\overline{\vol}(\alpha-\beta)^{1/n}\vol(\alpha)^{(n-1)/n}\geq \alpha^n -n \alpha^{n-1}\cdot \beta.$$ \end{thrm} \begin{proof} We only need to consider the case when $\alpha^n -n \alpha^{n-1}\cdot \beta>0$; then \cite{Pop14} implies that the class $\alpha-\beta$ is big. By the definition of $\overline{\vol}$, we have $$ \overline{\vol}(\alpha-\beta)^{1/n}=\inf_{\gamma\in \mathcal{M}, \mathfrak{M}(\gamma)=1}(\alpha-\beta)\cdot \gamma. $$ So we need to estimate $(\alpha-\beta)\cdot \gamma$ with $\mathfrak{M}(\gamma)=1$: \begin{align*} (\alpha-\beta)\cdot \gamma&=\alpha\cdot \gamma-\beta\cdot \gamma\\ &\geq \alpha\cdot \gamma-\frac{n(\alpha^{n-1}\cdot \beta)\cdot(\alpha\cdot\gamma)}{\alpha^n}\\ &=\frac{\alpha\cdot\gamma}{\alpha^n}(\alpha^n-n\alpha^{n-1}\cdot \beta)\\ &\geq \vol(\alpha)^{(1-n)/n}(\alpha^n-n\alpha^{n-1}\cdot \beta), \end{align*} where the second line follows from Theorem \ref{thrm appendix reserve KT} and Remark \ref{rmk appendix movable}, and the last line follows from the definition of $\mathfrak{M}$, the equality $\alpha^n = \vol(\alpha)$ for nef classes, and the normalization $\mathfrak{M}(\gamma)=1$. By the arbitrariness of $\gamma$ we get $$\overline{\vol}(\alpha-\beta)^{1/n}\vol(\alpha)^{(n-1)/n}\geq \alpha^n -n \alpha^{n-1}\cdot \beta.
$$ \end{proof} \begin{rmk} Without using the conjectured equality (\ref{eq vol conj}), it is observed independently by \cite{tosatti2015current} and \cite{popovici15} that one can replace $\overline{\vol}$ by the volume function $\vol$ in Theorem \ref{thm tran morse}. \end{rmk} \subsection{Non-convexity of the complete intersection cone} We give an example explicitly verifying the non-convexity of $\CI_{1}(X)$. \begin{exmple} \cite{fs09} gives an example of a smooth toric threefold $X$ such that every nef divisor is big. We show that for this toric variety $\CI_{1}(X)$ is not convex. Let $X$ be the toric variety defined by a fan in $N = \mathbb{Z}^{3}$ on the rays \begin{align*} v_{1} & = (1,0,0) \qquad \qquad v_{2} = (0,1,0) & v_{3} & = (0,0,1) \qquad \qquad v_{4} = (-1, -1,-1) \\ v_{5} & = (1,-1,-2) \qquad \, \, \, v_{6} = (1,0,-1) & v_{7} & = (0,-1,-2) \qquad \, \, \, v_{8} = (0,0,-1) \end{align*} with maximal cones \begin{align*} \langle v_{1}, v_{2}, v_{3} \rangle, \, \langle v_{1}, v_{2}, v_{6} \rangle, \, \langle v_{1}, v_{3}, v_{4} \rangle, \, \langle v_{1}, v_{4}, v_{5} \rangle, \\ \langle v_{1}, v_{5}, v_{6} \rangle, \, \langle v_{2}, v_{3}, v_{4} \rangle, \, \langle v_{2}, v_{4}, v_{8} \rangle, \, \langle v_{2}, v_{5}, v_{6} \rangle, \\ \langle v_{2}, v_{5}, v_{8} \rangle, \, \langle v_{4}, v_{5}, v_{7} \rangle, \, \langle v_{4}, v_{7}, v_{8} \rangle, \, \langle v_{5}, v_{7}, v_{8} \rangle. \end{align*} Since $X$ is the blow-up of $\mathbb{P}^{3}$ along 4 rays, it has Picard rank $5$. Let $D_{i}$ be the divisor corresponding to the ray $v_{i}$ and $C_{ij}$ denote the curve corresponding to the face generated by $v_{i}$ and $v_{j}$. Standard toric computations show that the pseudo-effective cone of divisors is simplicial and is generated by $D_{1},D_{5},D_{6},D_{7},D_{8}$. The pseudo-effective cone of curves is also simplicial and is generated by $C_{14}, C_{16}, C_{25}, C_{47}, C_{48}$. 
From now on we will write divisor or curve classes as vectors in these (ordered) bases. The intersection matrix is: \begin{equation*} \begin{array}{c|c|c|c|c|c} & D_{1} & D_{5} & D_{6} & D_{7} & D_{8} \\ \hline C_{14} & -2 & 1 & 0 & 0 & 0 \\ \hline C_{16} & 1 & 1 & -2 & 0 & 0 \\ \hline C_{25} & 0 & -1 & 1 & 0 & 1 \\ \hline C_{47} & 0 & 1 & 0 & -2 & 1 \\ \hline C_{48} & 0 & 0 & 0 & 1 & -2 \end{array} \end{equation*} The nef cone of divisors is dual to the pseudo-effective cone of curves.
Thus it is simplicial and has generators $A_{1},\ldots,A_{5}$ determined by the columns of the inverse of the matrix above: \begin{align*} A_{1} & = (1,3,2,2,1) \\ A_{2} & = (3,6,4,4,2) \\ A_{3} & = (6,12,9,8,4) \\ A_{4} & = (2,4,3,2,1) \\ A_{5} & = (4,8,6,5,2) \end{align*} A computation shows that for real numbers $x_{1},\ldots,x_{5}$, \begin{align*} \left( \sum_{i=1}^{5} x_{i}A_{i} \right)^{2} = & (1,3,6,2,4) (x_{1}^{2}+6x_{1}x_{2}+12x_{1}x_{3}+4x_{1}x_{4} + 8x_{1}x_{5}) + \\ & (9,22,45,15,30) x_{2}^{2} + \\ & (12,30,60,20,40) (x_{2}x_{4}+2x_{2}x_{5}+3x_{2}x_{3}+3x_{3}^{2}+2x_{3}x_{4}+4x_{3}x_{5}) + \\ & (4,10,20,6,13) x_{4}^{2} + \\ & (16,40,80,26,52) (x_{4}x_{5} + x_{5}^{2}). \end{align*} Note that the five vectors above form a basis of $N_{1}(X)$ and each one is proportional to one of the $A_{i}^{2}$. It is clear from this explicit description that the cone is not convex. For example, the vector \begin{equation*} v = (9,22,45,15,30) + (4,10,20,6,13) \end{equation*} cannot be approximated by curves of the form $H^{2}$ for an ample divisor $H$. Indeed, if we have a sequence of ample divisors $H_{j} = \sum x_{i,j}A_{i}$ with $x_{i,j} > 0$ such that $H_{j}^{2}$ converges to $v$, then \begin{equation*} \lim_{j \to \infty} x_{2,j} = 1 \qquad \qquad \textrm{and} \qquad \qquad \lim_{j \to \infty} x_{4,j} = 1. \end{equation*} But then the limit of the coefficient of $(12,30,60,20,40)$ is at least $1$, a contradiction. Exactly the same argument shows that the closure of the set of all products of two (possibly different) ample divisors is not convex.
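The dual-basis computation above is easy to check numerically. The following sketch (assuming NumPy is available; it is an illustration, not part of the original argument) inverts the intersection matrix from the table and recovers the nef generators $A_{1},\ldots,A_{5}$ as its columns:

```python
import numpy as np

# Intersection matrix: rows are the curve classes C14, C16, C25, C47, C48,
# columns are the divisor classes D1, D5, D6, D7, D8 (as in the table above).
M = np.array([
    [-2,  1,  0,  0,  0],   # C14
    [ 1,  1, -2,  0,  0],   # C16
    [ 0, -1,  1,  0,  1],   # C25
    [ 0,  1,  0, -2,  1],   # C47
    [ 0,  0,  0,  1, -2],   # C48
])

# The nef generators A_j form the dual basis: C_i . A_j = delta_ij,
# i.e. the A_j are exactly the columns of M^{-1}.
A = np.linalg.inv(M)
generators = [tuple(int(round(x)) for x in A[:, j]) for j in range(5)]
print(generators)
# [(1, 3, 2, 2, 1), (3, 6, 4, 4, 2), (6, 12, 9, 8, 4), (2, 4, 3, 2, 1), (4, 8, 6, 5, 2)]
```

The inverse is integral (the matrix has determinant $\pm 1$), so the rounding is exact.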
\end{exmple} \noindent \textsc{Brian Lehmann}\\ \textsc{Department of Mathematics, Boston College, Chestnut Hill, MA 02467, USA}\\ \verb"Email: [email protected]"\\ \noindent \textsc{Jian Xiao}\\ \textsc{Institute of Mathematics, Fudan University, 200433 Shanghai, China}\\ \noindent \textsc{Current address:}\\ \textsc{Institut Fourier, Universit\'{e} Joseph Fourier, 38402 Saint-Martin d'H\`{e}res, France}\\ \verb"Email: [email protected]"\\ \end{document}
\begin{document} \title{\LARGE \bf Fast AC Power Flow Optimization using Difference of Convex Functions Programming} \author{ Sandro Merkli\thanks{ Automatic Control Lab, ETH Zurich, Physikstrasse 3, 8092 Zurich \tt{ \{merkli,smith,juanj\}@control.ee.ethz.ch}} \thanks{ Inspire-IfA, Inspire AG, Technoparkstrasse 1, 8005 Zurich. \tt{\{merkli,domahidi\}@inspire.ethz.ch}} , Alexander Domahidi\thanks{ embotech GmbH, Physikstrasse 3, ETL K10.1, 8092 Zurich, \tt{\{jerez,domahidi\}@embotech.com}} \footnotemark[2] , Juan Jerez\footnotemark[1] \footnotemark[3]\\ Manfred Morari\footnotemark[1], Roy S.\ Smith\footnotemark[1]\\ } \def \C { \mathbb C } \def \R { \mathbb R } \def \diag { \operatorname{diag} } \def \minim { \operatorname*{minimize} } \def \maxim { \operatorname*{maximize} } \def \st { \operatorname*{subject\ to} } \def \real { \operatorname{Re} } \def \imag { \operatorname{Im} } \def \eig { \operatorname{eig} } \def \bmb { \begin{bmatrix} } \def \bme { \end{bmatrix} } \newcommand{\cve}[1] {#1} \newcommand{\jcve}[1] {\bar{#1}} \maketitle \thispagestyle{empty} \pagestyle{empty} \begin{abstract} An effective means for analyzing the impact of novel operating schemes on power systems is time domain simulation, for example for investigating optimization-based curtailment of renewables to alleviate voltage violations. Traditionally, interior-point methods are used for solving the non-convex AC optimal power flow (OPF) problems arising in this type of simulation. This paper presents an alternative algorithm that better suits the simulation framework, because it can more effectively be warm-started, has linear computational and memory complexity in the problem size per iteration and globally converges to Karush-Kuhn-Tucker (KKT) points with a linear rate if they exist. The algorithm exploits a difference-of-convex-functions reformulation of the OPF problem, which can be performed effectively. 
Numerical results compare the method to state-of-the-art OPF solver implementations in MATPOWER and show significant speedups. \end{abstract} \section{Introduction} The amount of renewable energy sources (RES) in distribution systems is steadily increasing~\cite{ren212015global}. Due to their volatility and limited predictability, they are posing new challenges to power system operation and planning. A prominently observed consequence of the increase in renewable power in-feeds is local voltage limit violations~\cite{Ayres2010}. Traditionally, the remedy for these violations required expensive line capacity extensions. Recent studies have shown that such extensions could be reduced by a shift in operational paradigms from rule-based to optimization-based approaches, see for example~\cite{Warrington2014,Ulbig2012,Vrettos2013}. Since it is non-trivial to predict the impact of such shifts in operational paradigms on power systems, time-domain simulations provide valuable insight~\cite{Ulbig2012}. System-wide simulations over extended periods of time can demonstrate seasonal impacts and yield statistical data. This data provides a more in-depth view than worst-case snapshot studies, which are the current industrial practice. While the latter only provide information on violation severity, the former also gives a sense of how often violations occur. However, if the impact of optimization-based approaches is to be simulated over such long periods of time and for different scenarios, a large number of optimization problems need to be solved. In the case of dispatch optimization, sampling times for the control are on the order of 15 minutes. This means that proposed optimization problems can typically be solved fast enough for on-line operation using state-of-the-art software such as MATPOWER~\cite{Zimmerman2011}. However, in simulations, solving the optimization problems is the most computationally expensive task.
Therefore, efficient numerical methods are essential for performing simulations in a practical time frame. In many cases, the problems proposed in optimization-based operation schemes are related to a class of problems collectively referred to as optimal power flow (OPF) problems. An extensive amount of literature exists on solving such problems and a recent survey is given in~\cite{Frank2012surveyI,Frank2012surveyII}. However, due to the non-convexity and large scale of the problem, it remains an active research topic. In fact, the non-convexity makes the problem computationally intractable to solve to global optimality in general. However, critical points can in most cases be found efficiently if they exist, for example using sequential quadratic programming (SQP)~\cite{Robinson1972} and an initial guess that is close to the critical point, or with the difference-of-convex-functions method used in this paper~\cite{LeThi2014}. The most popular approaches to solving AC OPF problems are interior point methods~\cite{Torres2001} and sequential convex approximation methods~\cite{Alsac1990,Chang1990}. While the former are numerically robust and well-studied, the latter tend to be faster according to~\cite{Frank2012surveyI}. There are two main sequential convex approximation approaches: Sequential linear programming (SLP) and SQP. These schemes approximate the original problem iteratively with convex linear and quadratic programs, respectively. Most implementations of these approaches use conventional power flow computations between their iterations to restore feasibility of the Kirchhoff equations. In general, SLP/SQP methods require extensions to become globally convergent, which reduces their performance~\cite{Frank2012surveyI}. A recently developed alternative approach is to solve a convex semidefinite programming (SDP) relaxation of the problem~\cite{Lavaei2012,Molzahn2013a}. 
The optimal value of this relaxation is either the globally optimal value of the non-convex problem or in the worst case only a lower bound on the latter. The reformulation presented in this paper allows for the solution of AC optimal power flow problems using difference of convex functions programming. This higher-order approximation is tighter than the one made in SLP/SQP methods. In comparison with SLP, the method has no issues of unboundedness of the relaxations and is globally convergent without extensions. The presented method operates entirely in the voltage space, satisfying the Kirchhoff equations by design and thereby eliminating the need for conventional power flow computations. In comparison with the SDP relaxation, the proposed method converges to critical points at a lower computational cost, especially when warm-started. Moreover, in the worst case the SDP relaxation yields only a lower bound on the objective, without producing a feasible point. In contrast, the proposed method always converges to a critical point if one exists. While certifying local optimality of these points is not straightforward, experiments show that they represent acceptable solutions. This statement will be quantified in the numerical results section. In this work, we present the application of our method to a specific example of an optimization-based operation scheme designed to reduce RES curtailments. In this example, the distribution system operator (DSO) is tasked with keeping the system stable and within the allowed operating conditions. Normally, the DSO is not operated for financial gain and its actions are bound to regulations, for example the EEG in Germany~\cite{eeg2014} or the European equivalent ENTSO~\cite{entsoe}. The range of actions the DSO can take includes adjusting setpoints of generators and curtailing renewable energy sources. Approaches currently used in practice for finding such points are usually rule-based.
Such rules involve a significant amount of tuning and rarely come with mathematical guarantees. Additionally, costs for adjustments can be taken into account only indirectly. While the rest of the paper is developed with this specific example in mind, the theory applies to a wide range of problems involving similar constraints, including standard economic dispatch. In particular, any AC power flow optimization can make use of the decomposition technique presented here. \subsection{Summary of contribution} In this paper, we present a novel method for solving a class of optimal power flow problems that is particularly suited for time-domain system simulations. \begin{enumerate}[(i)] \item \emph{Formulation}: We propose an OPF-like optimization problem to reduce curtailment in distribution grid operation. While the constraints are similar to economic dispatch, the cost function is formulated specifically to represent the cost faced by the system operator. \item \emph{Reformulation into a difference-of-convex functions problem}: We give an efficient method for transforming the given OPF problem into a difference of convex functions problem. Its computational complexity is linear in the problem size. This reformulation preserves the sparsity of the problem while at the same time leading to the sequential convex relaxations being as close to the original non-convex problem as possible within the DC programming framework. \item \emph{Efficient solution of convex subproblems}: The difference of convex functions approach solves the non-convex problem using a series of convex approximations, in this case second-order cone programs (SOCPs) that can be reformulated as convex quadratically constrained linear problems (QCLPs). We present an approach to solving these QCLPs with accelerated dual projected gradient methods that exploits their structure.
As a result, both the per-iteration computational complexity and the required amount of memory grow linearly with the problem size. \end{enumerate} \subsection{Outline} The rest of this paper is structured as follows: Section~\ref{sec:prelim} introduces some preliminaries. In Section~\ref{sec:reform}, we present a reformulation of the optimization problem such that the difference-of-convex-functions method is applicable. Section~\ref{sec:effinner} outlines an efficient method to solve the convex inner problems arising in the proposed algorithm. In Section~\ref{sec:numres}, numerical results are presented and discussed. Final conclusions are presented in Section~\ref{sec:conclusion}. \section{Preliminaries} \label{sec:prelim} This section outlines both the model of the power system and the optimization-based control strategy we propose. Basic notation is introduced, assumptions are clarified and current operational practice is described. \subsection{Notation} The power grid is modeled as an undirected graph with $M$ vertices and $L$ edges. Vertices model buses, while edges model power lines. Each line (say, from bus $j$ to bus $l$) has admittance $y_{jl} \in \C$. Each bus $j$ has an associated voltage $v_j \in \C$ and power in-feed $s_j \in \C$, where $\real(s_j)$ denotes active and $\imag(s_j)$ denotes reactive power. Let $v,s \in \C^M$ be the stacked versions of the bus voltages and powers, respectively. The admittance matrix of the grid is given as \begin{equation} Y_{jl} := \begin{cases} y_{jl} & \text{if } j \ne l, \\ y^{\text{sh}}_{j}-\sum_{k=1,k\ne j}^M y_{jk} & \text{if } j = l, \end{cases} \end{equation} where $y_j^{\text{sh}} \in \C$ are shunt admittances. The Kirchhoff equations for the system can hence be written in matrix form: \begin{equation} \label{eqn:kirchhoff} \diag(v)\bar Y \bar v = s, \end{equation} where $\bar{(\cdot)}$ denotes the (element-wise) complex conjugate.
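As a sanity check on these conventions, the following sketch (an illustration assuming NumPy; the bus count, line data, and function names are invented for this example) assembles $Y$ for a small three-bus ring and evaluates the in-feeds $s = \diag(v)\bar Y \bar v$. With zero shunt admittances, the diagonal convention above makes every row of $Y$ sum to zero, so a flat voltage profile $v = \mathbf{1}$ produces zero power in-feed everywhere:

```python
import numpy as np

def admittance_matrix(n_bus, lines, y_shunt):
    """Build Y with the sign convention used here: Y[j,l] = y_jl off the
    diagonal and Y[j,j] = y_sh_j - sum_{k != j} y_jk on the diagonal."""
    Y = np.zeros((n_bus, n_bus), dtype=complex)
    for (j, l), y in lines.items():
        Y[j, l] = y
        Y[l, j] = y               # undirected graph: symmetric admittances
    for j in range(n_bus):
        Y[j, j] = y_shunt[j] - Y[j].sum()
    return Y

def infeed(Y, v):
    """Power in-feeds s = diag(v) * conj(Y) * conj(v)."""
    return v * np.conj(Y @ v)

# Toy three-bus ring with no shunts (illustrative data, not a real grid).
lines = {(0, 1): 1 - 4j, (1, 2): 1 - 4j, (0, 2): 2 - 8j}
Y = admittance_matrix(3, lines, y_shunt=np.zeros(3))

s_flat = infeed(Y, np.ones(3))    # flat voltage profile: no power flows
```

This mirrors the structure exploited later: $Y$ has one off-diagonal entry per line, so it is sparse for realistic grids.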
Let $e_k$ denote the $k$-th unit vector with appropriate dimension. Let $(\cdot)^r := \real(\cdot), (\cdot)^q := \imag(\cdot)$ and let $r_k, q_k$ be the $k$-th rows of $\real(Y)$ and $\imag(Y)$, respectively. For vectors $a \in \C^n$, define \begin{equation} J(a) := \left\{ k \in \{1,\ldots,n\} \Big| a_k \ne 0 \right\}, \end{equation} and for matrices $A \in \C^{n \times n}$, let \begin{equation} \begin{aligned} J(A) :=& \;\Big\{ k \in \{1,\ldots,n\} \Big| \\ & \;\exists j \in \{1,\ldots,n\} : A_{kj} \ne 0 \text{ or } A_{jk} \ne 0 \Big\}. \end{aligned} \end{equation} This means $J(\cdot)$ returns the indexes of rows and columns with at least one nonzero entry. We then use the notation $A_B$ to denote the version of $A$ retaining only the rows and columns whose indexes lie in a given set $B$. \subsection{Operational constraints} \label{ssec:opcon} The constraints represent limits introduced by the system operator, either due to regulations or to prevent damage to the system. Firstly, the voltage magnitude has to be within a fixed interval for each bus $j$: \begin{equation} \label{eqn:vlim} v_{\min,j} \le \left| v_j \right| \le v_{\max,j}. \end{equation} These limits are important for distribution grids, since the assumption of low-resistance lines commonly made in transmission grids does not hold. This means there can be significant discrepancies in the voltages between two endpoints of a line. Additionally, one of the main problems faced by DSOs is voltage constraint violations due to local renewable power in-feeds. Finally, the current through each line $(j,l)$ is limited for thermal reasons: \begin{equation} \label{eqn:llim} |y_{jl}||v_j - v_l| \le i_{\max,jl}. \end{equation} The limits in~\eqref{eqn:vlim} and~\eqref{eqn:llim} together will hereafter be referred to as the operational constraints for the power grid.
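Checking the operational constraints for a given operating point is a direct translation of \eqref{eqn:vlim} and \eqref{eqn:llim}. The following minimal sketch (assuming NumPy; the helper name and data layout are invented for this illustration) returns the buses and lines at which the limits are violated:

```python
import numpy as np

def operational_violations(v, lines, v_min, v_max, i_max):
    """Return buses violating the voltage limits (vlim) and lines violating
    the thermal current limits (llim). Illustrative helper, not from the paper."""
    bad_buses = [j for j, vj in enumerate(v)
                 if not (v_min[j] <= abs(vj) <= v_max[j])]
    bad_lines = [(j, l) for (j, l), y in lines.items()
                 if abs(y) * abs(v[j] - v[l]) > i_max[(j, l)]]
    return bad_buses, bad_lines

# Example: bus 1 exceeds a 1.05 p.u. upper voltage limit; the line is fine.
v = np.array([1.00, 1.08, 0.97])
bad_buses, bad_lines = operational_violations(
    v, lines={(0, 1): 1 - 4j},
    v_min=[0.95] * 3, v_max=[1.05] * 3, i_max={(0, 1): 10.0})
```

In the simulation setting described above, such a check on $(s_0, v_0)$ decides whether the DSO needs to intervene at all.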
The DSO action space is modeled as an interval of active and reactive power for each bus $j$: \begin{equation} \label{eqn:slim} \begin{aligned} p_{\min,j} &\le \real( s_j) \le p_{\max,j}, \\ q_{\min,j} &\le \imag( s_j) \le q_{\max,j}. \end{aligned} \end{equation} For buses at which the DSO cannot intervene, the upper and lower limits in~\eqref{eqn:slim} are equal. Let $(s_0, v_0)$ be an operating point of the power grid that represents the state of the distribution grid without any DSO intervention. If this point satisfies all operational constraints~\eqref{eqn:vlim} and~\eqref{eqn:llim}, no DSO intervention is required. Otherwise, some limits are violated and the task of the DSO is then to find a point $(s,v)$ that satisfies all operational constraints, but also lies within its action space~\eqref{eqn:slim}.
It is therefore advisable to perform a social welfare optimization for least cost: \begin{subequations} \label{eqn:opf0} \begin{align} \minim_{\cve s \in \C^M, \cve v \in \C^M} &\;\; \|\real(\cve s-\cve s_0)\|_1 + \|\imag(\cve s-\cve s_0)\|_1 \label{eqn:opf0_cost}\\ \st &\;\; \diag(\cve v) \jcve Y \jcve v = \cve s, \label{eqn:opf0_kirch}\\ &\;\; v_{\min,k} \le \left| v_k \right| \le v_{\max,k}, \label{eqn:opf0_vlim} \\ &\;\; p_{\min} \le \real( s) \le p_{\max}, \label{eqn:opf0_plim} \\ &\;\; q_{\min} \le \imag( s) \le q_{\max}, \label{eqn:opf0_qlim} \\ &\;\; |y_{jl}||v_j - v_l| \le i_{\max,jl}, \label{eqn:opf0_line} \\ &\;\; k \in \mathcal M,\quad (j,l) \in \mathcal E, \end{align} \end{subequations} where $\mathcal M$ denotes the set of buses, $\mathcal E$ the set of lines, and the $1$-norm cost function is proportional to the monetary cost of the power deviations the DSO introduces. For renewable in-feed curtailment, this situation is commonplace in some European countries, where the operator is typically required by law to pay the nominal price for available power, regardless of whether it is used or curtailed. Since this is the most relevant case here, the assumption is made that all costs are of this structure. However, the general framework presented in this work can be extended to use any convex cost function. Problem~\eqref{eqn:opf0} will hereafter be referred to as the OPF problem. It is non-convex due to the quadratic Kirchhoff equalities~\eqref{eqn:opf0_kirch} as well as the lower voltage magnitude bounds~\eqref{eqn:opf0_vlim}. \subsection{Difference-of-convex-functions (DC) programming} The method used in this work for solving problem~\eqref{eqn:opf0} is called difference-of-convex-functions (DC\footnote{Not to be confused with the abbreviation ``DC'' for direct current, and the related approximations of the AC-OPF problem.}) programming. This section outlines the algorithm and presents some existing related theoretical results.
DC programming is a class of algorithms for solving problems of the form \begin{equation} \label{eqn:dc_gen0} \begin{aligned} \minim_{x} & \;\; g_0(x) - h_0(x) \\ \st &\;\; g_i(x) - h_i(x) \le 0, \end{aligned} \end{equation} where $i \in \{1,\ldots,m\}$ and the $g_i, h_i$ are convex, subdifferentiable functions. This method was historically used for optimization problems involving piecewise affine functions. However, a wide range of problems can be formulated as~\eqref{eqn:dc_gen0}, including all convex optimization problems, optimization problems with binary variables, quadratic equality constraints and higher-order polynomial constraints. A recent survey of the method and related theory is given in~\cite{An2005}, and~\cite{LeThi2014} presents the basic algorithm, which is also given in Algorithm~\ref{alg:dca} for completeness. The main idea of the algorithm is to solve a sequence of convex problems obtained by linearizing the \emph{concave} parts of the constraints and objective: \begin{equation} \label{eqn:dc_gen_inner0} \begin{aligned} \minim_{x} &\;\; g_0(x) - \left[ h_0(\tilde x) + \nabla h_0(\tilde x) (x-\tilde x) \right] \\ \st &\;\; g_i(x) - \left[ h_i(\tilde x) + \nabla h_i(\tilde x) (x-\tilde x) \right] \le 0. \end{aligned} \end{equation} The optimizer $x^*$ of~\eqref{eqn:dc_gen_inner0} is then used as the next point of convexification $\tilde x$, and the process is repeated until convergence is reached. The feasible set of~\eqref{eqn:dc_gen_inner0} is a convex inner approximation of that of~\eqref{eqn:dc_gen0}. This means that~\eqref{eqn:dc_gen_inner0} is not necessarily feasible, even if the original non-convex problem is. 
This is circumvented in the algorithm using a penalty reformulation: \begin{equation} \label{eqn:dc_gen_inner1} \begin{aligned} \minim_{x,t} &\;\; g_0(x) - \left[ h_0(\tilde x) + \nabla h_0(\tilde x) (x-\tilde x) \right] + \beta^k t \\ \st &\;\; g_i(x) - \left[ h_i(\tilde x) + \nabla h_i(\tilde x) (x-\tilde x) \right] \le t, \\ &\;\; t \ge 0, \end{aligned} \end{equation} where $\beta^k \in \R_+$ is a penalty weight parameter that is updated after each convexification using the rule in Algorithm~\ref{alg:dca}. Note the similarity of this scheme to other sequential convex programming methods, most notably SQP and SLP. The key difference from those methods is that the convex parts $g_i$ are retained in their original form, which yields tighter approximations in the sequence of convex problems solved. This means in particular that if the original problem had a bounded feasible set, all issues of possible unboundedness that arise in SLP~\cite{Bazaraa2013} are avoided and there is no need for trust region approaches and their associated performance penalty. \begin{figure} \caption{Difference-of-convex-functions algorithm from~\cite{LeThi2014}.} \label{algstep:termination} \label{algstep:inner} \label{alg:dca} \end{figure} It is shown in~\cite{LeThi2014} that the algorithm presented here globally converges to a KKT point of~\eqref{eqn:dc_gen0} with a linear rate, provided one exists and standard constraint qualifications are satisfied. It is also shown that the sequence of optimal values of the approximations~\eqref{eqn:dc_gen_inner1} is monotonically decreasing. This holds for all initial choices of the algorithm parameters, which means no a priori bound on the size of the penalty parameter $\beta^k$ is required. All results also hold if, in addition to the constraints in~\eqref{eqn:dc_gen0}, a constraint \begin{equation} \label{eqn:adddccon} x \in \mathcal C, \end{equation} for some convex, closed set $\mathcal C$ is added.
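To make the penalty scheme concrete, here is a minimal numerical sketch (assuming SciPy's SLSQP solver for the convex inner problems; the toy problem, the fixed penalty weight, and all names are invented for illustration, and the full algorithm additionally updates the penalty weight). It minimizes $x$ subject to the reverse constraint $1 - x^2 \le 0$ on the box $[-2,2]$, with the split $g_1(x) = 1$ (convex) and $h_1(x) = x^2$ (convex, linearized at $\tilde x$); starting from $\tilde x = 0.5$, the iterates converge to the KKT point $x = 1$, a local minimizer:

```python
from scipy.optimize import minimize

beta = 10.0          # fixed penalty weight (a simplification of the update rule)
x_t = 0.5            # current convexification point, \tilde{x}

for _ in range(50):
    # Inner convex problem: variables (x, t); the concave part -x^2 of the
    # constraint 1 - x^2 <= 0 is linearized around x_t, t penalizes infeasibility.
    res = minimize(
        lambda w: w[0] + beta * w[1],                  # objective x + beta * t
        x0=[x_t, 1.0],
        bounds=[(-2.0, 2.0), (0.0, None)],             # x in C = [-2, 2], t >= 0
        constraints=[{"type": "ineq",                  # t - (1 - linearized h) >= 0
                      "fun": lambda w: w[1] - (1.0 - (x_t**2 + 2.0 * x_t * (w[0] - x_t)))}],
        method="SLSQP",
    )
    if abs(res.x[0] - x_t) < 1e-9:                     # convergence of iterates
        break
    x_t = res.x[0]
```

On this toy problem the update reduces to $\tilde x \mapsto (1+\tilde x^2)/(2\tilde x)$, so the iterates approach $1$ rapidly; note the method settles on the nearby KKT point $x = 1$ rather than the global minimizer $x = -2$, illustrating the local nature of the guarantee.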
The algorithms are then simply modified to include the constraint~\eqref{eqn:adddccon} in each of the convex approximations. It is worth noting that the choice of $g_i$ and $h_i$ is not unique for a given problem. The authors in~\cite{LeThi2014} make no theoretical statements on the impact of the choice of the $g_i$ and $h_i$ on the convergence speed. However, numerical experiments show that the choice does have a strong impact on the number of iterations required. We will discuss an effective technique for choosing the functions $g_i,h_i$ for the problem at hand in Section~\ref{ssec:appofdc}. \section{Reformulation of OPF as Difference-of-Convex-functions problem} \label{sec:reform} In this section, the DSO OPF problem~\eqref{eqn:opf0} is reformulated as a QCLP, and an efficient way of computing the splits of the non-convex functions into differences of convex functions is presented. These splits result in a special structure of the convex sub-problems. We then show in Section~\ref{sec:effinner} how to solve these sub-problems efficiently. \subsection{Reformulation as QCLP} \label{ssec:reform_qclp} As already shown in~\cite{Low2014}, OPF problems with linear cost functions can be recast as non-convex quadratically constrained linear programs. A similar technique will be applied here. First, let $s_0 \in \C^M, v_0 \in \C^M$ be the power and voltage vectors the system is operating at without any DSO intervention. We now define the voltage deviation introduced by the DSO: \begin{equation} \label{eqn:dv_def} v := v_0 + \Delta v \;\in \C^{M}, \end{equation} where $\Delta v \in \C^M$ is the change from the starting point and $v$ is the resulting voltage vector. The resulting change of powers $\Delta s \in \C^M$ can be computed using the Kirchhoff equations~\eqref{eqn:opf0_kirch}: \[ \Delta s = \diag(v_0)\bar Y\bar{\Delta v} + \diag(\Delta v)\bar Y \bar v_0 + \diag(\Delta v)\bar Y \bar{\Delta v}.
\] Define now $Y^{(k)}$ as a version of $Y$ with all but the $k$-th row set to 0. After some reformulation, we can write \begin{subequations} \label{eqn:pow_from_volt} \begin{align} \Big(\real (\Delta s) \Big)_k &= z^TH_{r,k}z + h_{r,k}^T z, \\ \Big(\imag (\Delta s) \Big)_k &= z^TH_{q,k}z + h_{q,k}^T z, \end{align} \end{subequations} with $z := \bmb \real(\Delta v)^T & \imag(\Delta v)^T \bme^T \in \R^{2M}$, and \begin{equation} \label{eqn:pfv_matrices} \begin{aligned} H_{r,k} &:= \bmb \real(Y^{(k)}) & -\imag(Y^{(k)}) \\ \imag(Y^{(k)}) & \real(Y^{(k)}) \bme, \\ H_{q,k} &:= \bmb -\imag(Y^{(k)}) & -\real(Y^{(k)}) \\ \real(Y^{(k)}) & -\imag(Y^{(k)}) \bme. \end{aligned} \end{equation} The linear parts in~\eqref{eqn:pow_from_volt} are given by \begin{align} \,&h_{r,k} :=\nonumber \\ \,&\bmb \Big( (v_0^r)_k r_k + (v_0^q)_k q_k\Big)^T + e_k \Big(r_k (v_0^r)_k + q_k(v_0^q)_k\Big) \\ \Big( (v_0^q)_k r_k - (v_0^r)_k q_k\Big)^T + e_k \Big(q_k (v_0^r)_k + r_k(v_0^q)_k\Big) \bme,\nonumber \\ \,&h_{q,k} := \\ \,&\bmb \Big( (v_0^q)_k r_k - (v_0^r)_k q_k\Big)^T - e_k \Big(q_k (v_0^r)_k + r_k(v_0^q)_k\Big) \\ \Big( -(v_0^q)_k q_k - (v_0^r)_k r_k\Big)^T + e_k \Big(r_k (v_0^r)_k - r_k(v_0^q)_k\Big) \bme.\nonumber \end{align} Equations~\eqref{eqn:pow_from_volt} can now be used to express the constraints on powers given in~\eqref{eqn:opf0_plim}--\eqref{eqn:opf0_qlim} as constraints on $\Delta v$. Using~\eqref{eqn:dv_def}, the constraints~\eqref{eqn:opf0_vlim} and~\eqref{eqn:opf0_line} can also be expressed in $\Delta v$. Finally, problem~\eqref{eqn:opf0} can be rewritten entirely in the variable $z$: \begin{subequations} \label{eqn:opf1} \begin{align} \minim_{z \in \R^{2M}} &\;\; \sum_{k=1}^M \big|z^TH_{r,k}z + h_{r,k}^Tz \big| + \big|z^TH_{q,k}z + h_{q,k}^Tz \big| \label{eqn:opf1_cost} \\ \st & \;\; z^TQ_iz + q_i^Tz + \gamma_i \le 0 \label{eqn:opf1_con}, \\ & \;\; i \in \{1,\ldots,K\},\nonumber \end{align} \end{subequations} where $K := 6M+L$. 
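The expansion of $\Delta s$ above can be verified numerically. The following NumPy sketch is illustrative; it assumes the convention $s = \diag(v)\overline{Yv}$ for the Kirchhoff equations and uses the base-point voltage $\bar v_0$ in the bilinear term (with the full voltage $\bar v$ there, the quadratic term would be counted twice):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
Y = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))  # toy admittance
v0 = rng.normal(size=M) + 1j * rng.normal(size=M)           # base voltages
dv = rng.normal(size=M) + 1j * rng.normal(size=M)           # DSO change

s = lambda v: v * np.conj(Y @ v)     # Kirchhoff: s = diag(v) conj(Y v)
ds = v0 * np.conj(Y @ dv) + dv * np.conj(Y @ v0) + dv * np.conj(Y @ dv)
assert np.allclose(s(v0 + dv) - s(v0), ds)   # expansion is exact
```

The check is exact rather than approximate, since the power flow is a quadratic function of the voltages and the expansion carries all terms up to second order.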
The constraints~\eqref{eqn:opf1_con} are reformulations of the original constraints~\eqref{eqn:opf0_vlim}--\eqref{eqn:opf0_line}. The structure of the $Q_i$ is of particular importance in later sections, which is why it is given here. \begin{figure} \caption{Sparsity pattern of example power constraint and line constraint Hessian matrices $Q$. In this case, vertex $j$ has 4 neighbors. Note that the power constraint matrix is shown in symmetric form as defined in~\eqref{eqn:symmh}.} \label{fig:sparsevis1} \end{figure} \begin{enumerate}[(i)] \item The matrices $Q$ of the power constraints~\eqref{eqn:opf0_plim} and~\eqref{eqn:opf0_qlim} are the matrices $H_{r,k}$ or $H_{q,k}$ or negative versions thereof. \item For the voltage bounds~\eqref{eqn:opf0_vlim}, $Q$ has $1$ (for upper bounds) or $-1$ (for lower bounds) in the $j$-th and $(M+j)$-th diagonal entries, and 0 everywhere else. \item The matrices $Q$ of the line constraints~\eqref{eqn:opf0_line} have $1$ in positions \[ (j,j),\; (l,l),\; (M+j,M+j),\;(M+l,M+l), \] of the diagonal and $-1$ in positions \[ (j,l),\; (l,j),\; (M+j,M+l),\;(M+l,M+j). \] \end{enumerate} Note that the matrices from~\eqref{eqn:pfv_matrices} are not symmetric, but they can trivially be made symmetric without changing the value of the constraints in~\eqref{eqn:opf1_con}. We hence define the symmetric versions \begin{equation} \label{eqn:symmh} \hat H_{r,k} := \frac{H_{r,k} + H_{r,k}^T}{2}, \qquad \hat H_{q,k} := \frac{H_{q,k} + H_{q,k}^T}{2}. \end{equation} A visualization of the described sparsity patterns is given in Figure~\ref{fig:sparsevis1}. Since the cost function~\eqref{eqn:opf1_cost} is inconvenient due to its non-smoothness, a standard 1-norm reformulation with additional slack variables $u \in \R^{2M}$ is performed.
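That the symmetrization in~\eqref{eqn:symmh} leaves the constraint values unchanged follows from $z^THz = z^TH^Tz$ for any square $H$; a quick illustrative NumPy check:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4))        # generic non-symmetric matrix
H_hat = (H + H.T) / 2.0            # symmetrized version
for _ in range(5):
    z = rng.normal(size=4)
    # The quadratic form is unchanged by symmetrization.
    assert np.isclose(z @ H @ z, z @ H_hat @ z)
```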
Defining $x := \bmb z^T & u^T \bme^T$, problem~\eqref{eqn:opf1} can be written as a standard QCLP: \begin{subequations} \label{eqn:qclp} \begin{align} \minim_{x} & \;\; c^Tx \label{eqn:qclp_cost} \\ \st &\;\; x^TP_ix + p_i^Tx + \omega_i \le 0 \label{eqn:qclp_cons},\\ &\;\; i \in \{1,\ldots,10M+L\},\nonumber \\ &\;\; x_j \ge 0, \\ &\;\; j \in \{2M+1,\ldots,4M\}, \nonumber \end{align} \end{subequations} for appropriate $c,P_i, p_i, \omega_i$. The structure of the matrices $P_i$ is given by \begin{equation} P_i = \bmb * & 0^{2M\times 2M} \\ 0^{2M\times 2M} & 0^{2M\times 2M} \bme \in \R^{4M \times 4M}, \end{equation} where the upper blocks denoted by $*$ have the same sparsity patterns as the matrices from Problem~\eqref{eqn:opf1}. The vectors $p_i \in \R^{4M}$ are versions of the linear parts $h_{r,k},h_{q,k},q_i$ from Problem~\eqref{eqn:opf1} with $2M$ additional entries. These additional entries correspond to the coefficients of the slack variables $u$, at most one of which is involved in each constraint. \subsection{Application of DC programming} \label{ssec:appofdc} In order to apply DC programming to solve~\eqref{eqn:qclp}, both~\eqref{eqn:qclp_cost} and~\eqref{eqn:qclp_cons} have to be written as a difference of two convex functions as described in~\eqref{eqn:dc_gen0}. We call this procedure a ``DC split''. Since~\eqref{eqn:qclp_cost} is linear, no split is needed; we simply define $g_0(x) := c^Tx$ and $h_0(x) := 0$. The constraints~\eqref{eqn:qclp_cons}, on the other hand, can be non-convex, so they have to be split. Note that for every symmetric indefinite matrix $P$, there exist infinitely many pairs $P^+, P^- \succeq 0$ such that \begin{equation} \label{eqn:qsplit} P = P^+ - P^-.
\end{equation} As a consequence, problem~\eqref{eqn:qclp} can be rewritten as \begin{subequations} \label{eqn:qclp2} \begin{align} \minim_{x} & \;\; c^Tx \label{eqn:qclp2_cost} \\ \st &\;\; \big(x^TP^+_ix + p_i^Tx + \omega_i\big) - \big(x^TP^-_ix\big) \le 0, \label{eqn:qclp2_cons}\\ &\;\; i \in \{1,\ldots,10M+L\}\nonumber,\\ &\;\; x_j \ge 0, \\ &\;\; j \in \{2M+1,\ldots,4M\}, \nonumber \end{align} \end{subequations} which now has the form given in~\eqref{eqn:dc_gen0}. Therefore, the algorithm from Figure~\ref{alg:dca} can directly be applied. The existence of infinitely many splits of the $P$ matrices from~\eqref{eqn:qclp} raises the question of optimal split selection. One approach that both intuitively makes sense and has been effective in experiments is to split the matrices such that the $P_i^-$ have small eigenvalues. This strategy keeps the curvature of the concave terms $-x^TP_i^-x$ small, so that the linearized approximation stays close to the original non-convex term. One can also use the freedom in the splits to induce structure in the Hessian matrices $P^+_i$ in order to simplify the convex problems to be solved. For example, the structure imposed here is for the matrices $P^+_i$ to be diagonal, making their inverses trivial to compute. \subsection{Analytic eigenvalue computations} Since there is a large number of constraints of the type~\eqref{eqn:qclp_cons}, calculating splits using numerical eigenvalue decompositions would be computationally prohibitive. Due to the structure of the $P_i$, eigenvalues can instead be computed analytically using the method described in this section. Note first that for the indexes $i$ corresponding to voltage or line constraints, the eigenvalues of $P_i$ are trivial to compute due to their simple structure.
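For the power-constraint blocks treated next, the eigenvalues follow from a closed-form expression for arrowhead matrices with zero diagonal tail. That formula can be checked on a small example; the sketch below is illustrative, with hypothetical values chosen so that the spectrum is real:

```python
import numpy as np

# Arrowhead matrix with zero diagonal tail (D = 0 in the notation below).
alpha0 = 2.0
a = np.array([1.0, 1.0])       # first-row tail
b = np.array([3.0, -1.0])      # first-column tail
A = np.zeros((3, 3))
A[0, 0] = alpha0
A[0, 1:] = a
A[1:, 0] = b

# Closed form: eig(A) = { (alpha0 +/- sqrt(alpha0^2 + 4 a^T b)) / 2, 0 }.
disc = np.sqrt(alpha0**2 + 4.0 * (a @ b))
closed_form = np.sort([(alpha0 + disc) / 2.0, (alpha0 - disc) / 2.0, 0.0])
numeric = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(numeric, closed_form)
```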
For the power constraints, we use the following Lemma: \begin{lemma} The eigenvalues of the matrices from~\eqref{eqn:symmh} are given by \begin{subequations} \begin{align} \eig(\hat H_{r,k}) &= \left\{\frac {\real(Y_{kk}) \pm \sqrt{\real(Y_{kk})^2 - 4\|Y^{(k)}\|_2^2} }{2}, 0 \right\}, \\[0.3cm] \eig(\hat H_{q,k}) &= \left\{ \frac {-\imag(Y_{kk}) \pm \sqrt{\imag(Y_{kk})^2 - 4\|Y^{(k)}\|_2^2}}{2},0\right\}. \end{align} \end{subequations} \end{lemma} \begin{proof} The proof is shown for $\hat H_{r,k}$ only, since the proof for the $\hat H_{q,k}$ is identical. Note first that the $\hat H_{r,k}$ have a blocked structure: \begin{equation} \label{eqn:blocked} \hat H_{r,k} = \bmb A_k & B_k \\ -B_k & A_k \bme = \bmb A_k & B_k \\ B_k^T & A_k \bme. \end{equation} For matrices of this form, the identity \begin{equation} \label{eqn:eigident} \eig\left( A_k + \sqrt{-1}B_k \right) = \eig\left(\bmb A_k & B_k \\ -B_k & A_k \bme\right), \end{equation} holds~\cite{Golub2012}. Both $A_k$ and $B_k$ are permuted arrowhead matrices. A matrix $A \in \C^{m \times m}$ is called \emph{arrowhead} if it has a structure \begin{equation} A = \bmb \alpha & a^T \\ b & D \bme, \end{equation} with $\alpha \in \C$, $a,b \in \C^{m-1}$ and $D = \diag(d) \in \C^{(m-1) \times (m-1)}$ for some $d \in \C^{m-1}$. In case $D = 0$, it can easily be shown~\cite{OLeary1990} that \begin{equation} \label{eqn:arroweig} \eig(A) = \left\{\frac{\alpha \pm \sqrt{\alpha^2 + 4a^Tb}}{2}, 0 \right\}. \end{equation} Next, note that $A_k$ only has one non-zero row at the same index as $B_k$ has its only non-zero row, and the same holds for their columns. This means that $A_k+\sqrt{-1}B_k$ is also arrowhead and its eigenvalues are the same as those of $\hat H_{r,k}$ due to~\eqref{eqn:eigident}. Substituting $A_k = \real(Y^{(k)}), B_k := \imag(Y^{(k)})$ and applying~\eqref{eqn:arroweig} yields the lemma. 
\end{proof} Note that the application of~\eqref{eqn:arroweig} is particularly simple here, since $\hat H_{r,k}$ is built from $Y^{(k)}$, which in turn only has as many entries as bus $k$ has neighbors. Since power system graphs are generally very sparse, this yields a significant reduction in computational cost over even a Lanczos-based or other iterative approximation of eigenvalues, let alone a standard exact computation. \subsection{Sparse splits} At this point, the eigenvalues of all the matrices $P_i$ can be computed efficiently. However, directly applying the split in~\eqref{eqn:qsplit} would lead to a loss of sparsity. Due to the sparse graph structure of the grid matrix $Y$, the expressions $x^TP_ix$ only involve a small subset of the variables in $x$ (specifically, the local variables for a bus and the variables of its neighbors). This section introduces an alternative split that both preserves sparsity in the constraints and makes the $P^+_i$ diagonal. This structure will make the solution of the convex subproblems of the algorithm much simpler, as will be outlined in later sections. This is because the $P_i^+$ are used in the quadratic parts of the convex subproblems, whereas the $P_i^-$ only appear in their linear terms. Define a sparse, diagonal matrix $D_i$ as follows: \begin{equation} (D_i)_{jj} = \begin{cases} 1, & \text{if $j \in J(P_i) \cup J(p_i)$}, \\ 0, & \text{otherwise.} \end{cases} \end{equation} This matrix hence has ones only at the row and column indexes at which either $P_i$ or $p_i$ has nonzeros. We then define the alternative split \begin{equation} \label{eqn:sparsesplit} P_i := \alpha D_i - (\alpha D_i - P_i), \end{equation} where $\alpha$ is the largest absolute value of the eigenvalues of $P_i$. This sparse split still guarantees positive semidefiniteness of both parts, since it only shifts the non-zero eigenvalues.
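The sparse split can be sketched as follows. The example below builds a small symmetric matrix with an arrowhead-like support (as the $\hat H_{r,k}$ have), computes $\alpha$ from the closed-form eigenvalues, and checks that both parts of~\eqref{eqn:sparsesplit} are positive semidefinite. The sizes and values are hypothetical, and the support indicator here only tracks the pattern of $P_i$ (not $p_i$):

```python
import numpy as np

# Symmetric arrowhead block with zero lower-right part: closed-form spectrum.
alpha0, a = 2.0, np.array([1.0, 3.0, -1.0])
disc = np.sqrt(alpha0**2 + 4.0 * (a @ a))
lam_max, lam_min = (alpha0 + disc) / 2.0, (alpha0 - disc) / 2.0

# Embed the block into a sparse 6x6 constraint Hessian P (rows 4, 5 empty).
P = np.zeros((6, 6))
P[0, 0] = alpha0
P[0, 1:4] = a
P[1:4, 0] = a

# Diagonal support indicator D and eigenvalue bound alpha.
support = (np.abs(P).sum(axis=0) > 0).astype(float)
D = np.diag(support)
alpha = max(abs(lam_max), abs(lam_min))

P_plus, P_minus = alpha * D, alpha * D - P      # sparse split P = P+ - P-
assert np.allclose(P, P_plus - P_minus)
assert np.linalg.eigvalsh(P_minus).min() >= -1e-12   # P- is PSD
lams = np.linalg.eigvalsh(P)
assert np.isclose(lams[0], lam_min) and np.isclose(lams[-1], lam_max)
```

On the off-support indices $D$ is zero, so the split touches only the rows and columns the constraint actually uses, which is exactly what preserves sparsity.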
\section{Efficient solution of inner problems} \label{sec:effinner} The bottleneck of the DC algorithm is the solution of the convex approximation. In this section, a dual projected gradient method is presented whose per-iteration complexity is linear in the problem size. With the splits~\eqref{eqn:sparsesplit} applied, the problem to be solved at each DC iteration has the form \begin{subequations} \label{eqn:dcinner_fromalgo} \begin{align} \minim_{x \in \R^{4M}, t} &\;\; c^Tx + \beta^kt \\ \st & \;\; x^TP_i^+x + \hat p_i(\tilde x^k)^Tx + \hat \omega_i(\tilde x^k) \le t, \\ & \;\; i \in \{1,\ldots,10M+L\}, \quad t \ge 0, \\ &\;\; x_j \ge 0, \\ &\;\; j \in \{2M+1,\ldots,4M\}, \nonumber \end{align} \end{subequations} where $\tilde x^k$ is the current point around which a convex approximation is formed, and \begin{equation} \begin{aligned} \hat p_i(\tilde x^k) &:= (p_i-2P_i^-\tilde x^k), \\ \hat \omega_i(\tilde x^k) &:= \omega_i +(\tilde x^k)^TP_i^-\tilde x^k. \end{aligned} \end{equation} General-purpose sparse convex second-order cone programming codes such as ECOS~\cite{Domahidi2013}, GUROBI~\cite{GurobiOptimization2014} or MOSEK~\cite{ApS2015} can be used to solve these problems. However, the structure of the problem suggests that a specialized solver could lead to increased performance. Firstly, the constraint Hessians $P_i^+$ are diagonal and sparse, and all nonzero entries have the same value. Additionally, $P_i$ and $p_i$ have the same nonzero patterns for any given $i$. It was also experimentally observed that subsequent convex approximations are often similar, which suggests that a warm-startable method could be beneficial. This section will present the approach based on accelerated dual gradient descent that was used in this work.
\subsection{Projected gradient method} A well-known algorithm for solving optimization problems of the form \begin{subequations} \begin{align} \label{eqn:gradexample} \minim_x & \;\; f(x) \\ \st &\;\; x \in \mathcal C \end{align} \end{subequations} is given by the iteration \begin{equation} \label{eqn:projgradalgo} x^{(k+1)} = \operatorname{proj}_{\mathcal C} \left(x^{(k)} - \alpha \nabla f(x^{(k)}) \right) \end{equation} where $\alpha$ is a step size and $\operatorname{proj}_{\mathcal C}$ is the Euclidean projection onto the set $\mathcal C$. If $\mathcal C$ and $f(x)$ are convex, this algorithm converges to the global minimum of~\eqref{eqn:gradexample}, provided the step size is small enough. The rate of convergence depends highly on $f(x)$, and the method can be accelerated by varying $\alpha$ (see for example~\cite{Nesterov1983,Richter2012}). In order for this algorithm to be efficient, the projection should be a simple operation. In the next section, a reformulation of problem~\eqref{eqn:dcinner_fromalgo} is given that makes the projection simple. \subsection{Box-constrained inner problem formulation} The intersection of the constraints in~\eqref{eqn:dcinner_fromalgo} is not easy to project onto, hence direct application of~\eqref{eqn:projgradalgo} is not efficient. Using two reformulations, the problem will be recast as a minimization of a smooth function subject to box constraints. The first step is a lifting into a higher-dimensional variable space: We introduce variables $y_i := x_i^2$ and change the penalty function from an $\infty$-norm to a $1$-norm (another of the penalty functions discussed in~\cite{LeThi2014}).
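The iteration~\eqref{eqn:projgradalgo} can be illustrated on a one-dimensional box-constrained problem; this sketch is illustrative and unrelated to the OPF data:

```python
# Projected gradient on: minimize (x - 2)^2 subject to 0 <= x <= 1.
# The unconstrained minimizer is 2, so the constrained one is x* = 1.
def proj(x):
    """Euclidean projection onto the box [0, 1]."""
    return min(max(x, 0.0), 1.0)

x, step = 0.0, 0.1
for _ in range(100):
    x = proj(x - step * 2.0 * (x - 2.0))   # gradient of (x-2)^2 is 2(x-2)
assert abs(x - 1.0) < 1e-12
```

The per-iteration cost is one gradient evaluation and one projection, which is what makes the method attractive when, as in the reformulation that follows, the feasible set is a box.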
The inner problem to be solved then becomes \begin{subequations} \label{eqn:dcinner_lifted} \begin{align} \minim_{x,y,t} & \;\; c^Tx + \beta^k 1^Tt \label{eqn:liftedcost}\\ \st & \;\; Ax + By - b \le t, \label{eqn:con_liftlin} \\ & \;\; \diag(x)x - y = 0, \label{eqn:con_diagxy}\\ & \;\; t \ge 0, \label{eqn:con_tpos} \\ &\;\; x_j \ge 0, \\ &\;\; j \in \{2M+1,\ldots,4M\}, \nonumber \end{align} \end{subequations} where \begin{equation} A := \begin{bmatrix} \hat p_1(\tilde x^k)^T \\ \hat p_2(\tilde x^k)^T \\ \vdots \\ \hat p_K(\tilde x^k)^T \end{bmatrix}, \; B := \begin{bmatrix} \diag(P_1^+)^T \\ \diag(P_2^+)^T \\ \vdots \\ \diag(P_K^+)^T \end{bmatrix}, \; b := \begin{bmatrix} \hat \omega_1(\tilde x^k) \\ \hat \omega_2(\tilde x^k) \\ \vdots \\ \hat \omega_K(\tilde x^k) \end{bmatrix}, \end{equation} with $K := 10M+L$. We now make use of the following lemma to relax the constraints in~\eqref{eqn:con_diagxy}: \begin{lemma} \label{lem:relax} Consider a version (P) of~\eqref{eqn:dcinner_lifted} with the constraints~\eqref{eqn:con_diagxy} relaxed to \begin{equation} \diag(x)x - y \le 0. \label{eqn:proofcon}\end{equation} For every minimizer of (P) with one or more of~\eqref{eqn:proofcon} inactive, a minimizer with equal cost can be found for which all the constraints~\eqref{eqn:proofcon} are active. \end{lemma} \begin{proof} The lemma is shown by construction: Assume a point $(x^\star,y^\star,t^\star)$ is optimal for~(P), but a constraint in~\eqref{eqn:proofcon} is not active. Then the $y_i$ corresponding to that constraint can be decreased to make the constraint active without any change to the cost function or constraint satisfaction. The latter is due to all entries of $B$ being non-negative. \end{proof} Lemma~\ref{lem:relax} implies that we can simply solve the relaxed version of~\eqref{eqn:dcinner_lifted} and then recover an optimal solution for the latter. In a second step, the relaxed version of~\eqref{eqn:dcinner_lifted} will be dualized to yield a box-constrained problem.
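The mechanism behind Lemma~\ref{lem:relax} is that $B$ has non-negative entries, so lowering any $y_j$ down to $x_j^2$ can only loosen the linear constraints. An illustrative NumPy sketch with random data and hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 2))
B = np.abs(rng.normal(size=(3, 2)))          # non-negative, as in the text
b = rng.normal(size=3)

x = rng.normal(size=2)
y_loose = x**2 + np.abs(rng.normal(size=2))  # relaxed: y >= x^2, some inactive
t = np.max(A @ x + B @ y_loose - b)          # a feasible t for the loose y

y_tight = x**2                               # decrease y until all are active
# Feasibility is preserved because B >= 0 and y_tight <= y_loose elementwise.
assert np.all(A @ x + B @ y_tight - b <= t + 1e-12)
```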
With some additional reformulation (see Appendix~\ref{ssec:app_dualderi}), the dual of~\eqref{eqn:dcinner_lifted} can be written as \begin{subequations} \label{eqn:dcinner_dual} \begin{align} \minim_{\lambda} & \;\; \frac{1}{4} \lambda^TC\left(\diag(D^T\lambda)^{-1}\right)C^T\lambda + d^T\lambda \label{eqn:dualcost} \\ \st & \;\; 0 \le \lambda \le 1, \end{align} \end{subequations} for $C,D,d$ as derived in the appendix. This problem can now readily be solved using the projected gradient method and its accelerated variants, since the cost function is a sum of ``quadratic over linear'' terms, which are convex and differentiable: \[ \frac{\partial }{\partial \lambda} \left(\frac{(a^T\lambda)^2}{b^T\lambda} \right) = 2a\frac{a^T\lambda}{b^T\lambda} - b \frac{(a^T\lambda)^2}{(b^T\lambda)^2}. \] In order to avoid numerical issues with the inverse of $\diag(D^T\lambda)$, a term $\varepsilon 1^Ty$ can be added to~\eqref{eqn:liftedcost} for a small $\varepsilon > 0$. This leads to the inverse in~\eqref{eqn:dualcost} becoming $\diag(D^T\lambda + 1\varepsilon)^{-1}$, which is well-defined for all $\lambda \ge 0$. Since the Lipschitz constant of~\eqref{eqn:dualcost} is not easily derived, an adaptive backtracking line search was used to determine the step size $\alpha$ taken in~\eqref{eqn:projgradalgo}. The initial guess for the step size was chosen to be twice the step size taken in the previous iteration. This allows the algorithm to adapt its initial guess to both growing and shrinking step sizes. At the same time, no convergence guarantees are lost, since the line search is still performed at each iteration. This technique led to a significant reduction in the average number of line search iterations, speeding up the overall algorithm substantially. \subsection{Computational complexity} In order to investigate how scalable the presented method is, it is worthwhile to compute the iteration complexity of the inner solver based on the problem parameters.
Let $d_{\max}$ be the maximum number of neighbors of any vertex in the power system graph, and recall that the number of buses and lines are denoted by $M$ and $L$, respectively. The rows of the matrices $C$ and $D$ ultimately come from the $Y^{(k)}$ and variations thereof. Each of them has at most $2d_{\max}$ entries. Since the number of rows in $C$ and $D$ is $\mathcal O(M+L)$, this translates to the number of entries being $\mathcal O((M+L)d_{\max})$. All that is required for the cost function and gradient computations is products of $C^T$ and $D^T$ with $\lambda$ as well as some vector operations. The projection is an elementwise operation and therefore~$\mathcal O(M+L)$. In summary, an iteration of the inner solver has linear complexity in the size of the grid if $d_{\max}$ is assumed to only grow very weakly with system size, which is true in all test cases available. \section{Numerical results} \label{sec:numres} In this section, we present numerical results on the performance and behavior of the proposed algorithm. In order to make the results comparable to other work in the field, some of the tests will be conducted on the IEEE benchmark test systems available in MATPOWER~\cite{Zimmerman2011,Fliscounakis2013}. For the experiments, a standalone implementation of the proposed method was created, which will be referred to as DQ-OPF. The implementation is a single-threaded, library-free ANSI C code, compiled with GNU GCC. The test computer had a Core i7-4600U dual-core CPU clocked at 2.1 GHz and 8 GB of memory. The operating system used was Debian Linux. \subsection{Algorithm behavior} \begin{figure} \caption{The DC method applied to the MATPOWER version of the IEEE 30-bus grid, using dual projected gradient as the inner solver. The ``$t$ (inner)'' line represents the maximum constraint violation of the convex approximation at that iteration, whereas the ``$t$ (actual)'' represents the maximum constraint violation of the original, non-convex problem. 
The lower subplot shows the true objective as well as the difference between subsequent iterates.} \label{fig:numres1} \end{figure} In the first set of results, the convergence behavior of the algorithm is investigated. For these problems, a local optimum $(v^*,s^*)$ was found with IPOPT~\cite{Waechter2006}. The entries of $v^*$ were then perturbed uniformly and the corresponding perturbed powers were computed using the Kirchhoff equations to yield a perturbed operating point $(\tilde v,\tilde s)$. The perturbation size was chosen to make the maximum constraint violation about 100\%. This was done in order to simulate the practical situation of the power grid state being only slightly infeasible with respect to the operational constraints, but respecting the Kirchhoff equations. The point $(\tilde v,\tilde s)$ was used as starting point $(s^0,v^0)$ as defined in Section~\ref{ssec:opcon}, and DQ-OPF started from there. An example solver run is shown in Figure~\ref{fig:numres1} with a termination criterion of $\|x^k-x^{k-1}\|_2 \le 10^{-4}$. Within a small number of DC iterations, the maximum constraint violation of the non-convex problem drops below $10^{-4}$~per unit, which is well below $1\%$ relative accuracy. Note also that the objective value does not improve significantly past iteration 5. \begin{figure} \caption{Solution of the same problem as in Figure~\ref{fig:numres1}.} \label{fig:numres2} \end{figure} For the problem shown in Figure~\ref{fig:numres1}, the inner convex problems were solved to high accuracy ($10^4$ inner iterations). In other sequential convex programming methods, it is often observed that solving the intermediate problems only approximately can be sufficient for convergence~\cite{Heinkenschloss2002}. In order to investigate whether this is also the case with the method presented here, the gradient solver iterations were limited to 100 in the same problem as above, with the other parameters left unchanged.
The resulting run for the same problem is shown in Figure~\ref{fig:numres2}. While the number of outer iterations required is higher than before for the same accuracy, they are two orders of magnitude cheaper computationally. Note that the objectives in Figure~\ref{fig:numres1} and Figure~\ref{fig:numres2} converge to slightly different values. This is due to the two solver runs converging to different local optima. \subsection{Performance} \begin{table} \centering \renewcommand{\arraystretch}{1.3} \caption{Average time and objective for $1\%$ relative accuracy} \begin{tabular}{l c c | c c} \hline & \multicolumn{2}{ c }{\textbf{IPOPT \& PARDISO}} & \multicolumn{2}{ c }{\textbf{DQ-OPF}} \\ & Time & Objective & Time & Objective \\ \hline 6-bus & 55 ms & 0.9 & 2.2 ms & 0.9 \\ 9-bus & 65 ms & 1.9 & 2.3 ms & 1.5 \\ 14-bus & 68 ms & 1.3 & 3.5 ms & 1.0 \\ 30-bus & 77 ms & 1.5 & 10 ms & 1.3 \\ 39-bus & 92 ms & 12 & 23 ms & 11 \\ 57-bus & 97 ms & 2.9 & 25 ms & 1.6 \\ 118-bus & 213 ms & 17 & 76 ms & 3.7 \\ 2383-bus & 3.5 s & 24 & 4.4 s & 10.4 \\ 2737-bus & 3.3 s & 12 & 2.4 s & 7.6 \\ 3210-bus & 2.8 s & 14 & 4.0 s & 9.2 \\ 9241-bus & 15 s & 26 & 16 s & 6.8 \\ \hline \end{tabular} \label{tbl:comparison} \end{table} In order to compare the implemented method to the state of the art, MATPOWER test cases were used in conjunction with the 1-norm cost function, as described in~\eqref{eqn:opf0}. Instead of solving the inner problems accurately as shown in Figure~\ref{fig:numres1}, the inner solver was limited to 100--1000 iterations depending on grid size, yielding the aforementioned calculation time improvements. The DC solver parameters were tuned for one instance of the problem and then reused across all runs. The solver was started at $(s^0,v^0)$. As a reference, we used MATPOWER's IPOPT interface along with the parallel PARDISO~\cite{Kuzmin2013,Schenk2008,Schenk2007} solver for linear systems.
Table~\ref{tbl:comparison} presents the results averaged over 100 runs with random initial points created as in the previous experiment. In these experiments, IPOPT was warm-started at the same point as DQ-OPF using MATPOWER's warm-start functionality. DQ-OPF is faster in many cases, with the speedups for the smaller grids being substantial. For the larger grids, the run times are comparable to the reference. Since no significant effort was put into optimizing the solver for larger grids, further speedups can be expected in the proposed method through parallelization and more efficient code. For the largest grid, MATPOWER ran into memory issues on the computer used. DQ-OPF requires an amount of memory linear in the problem size and hence had no such issues. Note also that IPOPT was run with multi-threading enabled (2 threads) and the times shown are wall clock time, not CPU time. Another observation is that the average objective values were consistently smaller with the proposed method, meaning it found local optima with better objective values. A likely reason for this is the objective function, which represents distance from the starting point $(s^0,v^0)$. The presented method tends to find local optima close to the point at which it was started, whereas IPOPT (and interior-point methods in general) seems to benefit less from warm-start information~\cite{John2008}. This difference in objective values was observed both when IPOPT was warm-started and when the default settings (no warm-start) were used. \subsection{Case study: Simulation experiment} \begin{figure} \caption{Voltage traces for all buses in the MV grid over the course of the simulation. Admissible limits were $[0.9,1.07]$ per unit. The OPF problem had to be solved at time instances 30 through 25 and 50.} \label{fig:numres3} \end{figure} \begin{figure} \caption{Total RES power in-feed for the MV grid over the simulation horizon.
The MATPOWER and proposed OPF solutions are close, with the proposed solution discarding slightly less renewable energy.} \label{fig:numres4} \end{figure} \begin{figure} \caption{Rural MV grid used for the simulation experiment. Line admittances were chosen to be in the range typically used in rural distribution grids. Note that while the grid here is radial, this is not a required assumption for the proposed method. The cyan bus in the middle is the slack bus, modeled here as a bus with no power limits.} \label{fig:mvruralgrid} \end{figure} \begin{figure} \caption{Box plot of the distribution of solve times for the optimization problems solved in the simulation experiment. The boxes contain 50\% of the cases, the interval marked by the dashed lines contains 90\% of the cases, and the plus signs mark outliers.} \label{fig:numres5} \end{figure} In order to demonstrate the effectiveness of warm-starting the presented method, a power system time simulation experiment is presented in this section. The experiment was run with the test grid shown in Figure~\ref{fig:mvruralgrid}. Three different approaches to dealing with voltage violations were tested: \begin{enumerate}[(i)] \item \emph{Rule-based curtailment}: In this control scheme, no optimization is run; instead, the renewable in-feeds of the grid are simply curtailed down to a fixed fraction of their rated in-feed. This case reflects current industry practice. \item \emph{MATPOWER OPF-based curtailment}: In this scheme, problem~\eqref{eqn:opf0} is solved to local optimality with MATPOWER at its default settings. The 1-norm cost was implemented using MATPOWER's piecewise affine cost function functionality. \item \emph{Proposed method OPF-based curtailment}: Problem~\eqref{eqn:opf0} is solved to local optimality, but with the method presented in this paper. The solver is warm-started with the solution from the previous solve when available.
The inner problems were solved with the presented dual gradient method, which was limited to 200 inner iterations. \end{enumerate} As a simulation environment, Adaptricity DPG.sim~\cite{KOCH} was used. At each simulation time step, the operational limits~\eqref{eqn:kirchhoff},~\eqref{eqn:vlim} and~\eqref{eqn:llim} were checked. If any of them were violated, one of the approaches above was invoked. The resulting voltage and power in-feed profiles for the different approaches are shown in Figures~\ref{fig:numres3} and~\ref{fig:numres4}, respectively. As can be seen in the uppermost subplots of the two figures, the profiles obtained by using the rule-based curtailment controller have strong fluctuations due to the controller intervening non-smoothly when violations are detected. Both the voltage and power profiles are much smoother if the optimization-based intervention solving~\eqref{eqn:opf0} is performed. Even though only local optima are found both in MATPOWER and the presented method, these smoother profiles were observed in all simulations. Additionally, even though the different numerical approaches often yield different local minima, the difference in cost function values is minor. The distribution of solve times for the simulation is presented in Figure~\ref{fig:numres5}. As can be seen, the average solve time of the proposed method is only about 14\% of that of the state of the art. This directly results in a speedup of up to a factor of 7 in the simulations. Finally, due to the less severe interventions, much less curtailment is required, resulting in a significant increase in the amount of renewable energy integrated. The typical increase in RES in-feed is in the 20--40\% range in yearly simulations, but the specific value depends strongly on the grid topology and the available amount of renewable in-feed capacity.
\section{Conclusion} \label{sec:conclusion} This paper presented an alternative, OPF-based approach to dealing with over-voltage problems in distribution grids that leads to minimum intervention by the DSO. Along with a formulation of the optimization problem, a novel method to solve it to local optimality was presented that can be warm-started and significantly outperforms current state-of-the-art interior-point methods. The presented method can easily be extended to other optimization problems involving AC power flow constraints. \section*{Acknowledgments} \label{sec:acknowledgements} This work was supported by the Swiss Commission for Technology and Innovation (CTI), (Grant 16946.1 PFIW-IW). We also thank the team at Adaptricity (Stephan Koch, Andreas Ulbig and Francesco Ferrucci) for providing the simulation environment and for valuable discussions in the area of power systems. \appendix \section{Appendix} \subsection{Derivation of the dual problem} \label{ssec:app_dualderi} In this section, a detailed derivation of the step from~\eqref{eqn:dcinner_lifted} to~\eqref{eqn:dcinner_dual} is given. First, notice that the only non-zero entries of $c$ in~\eqref{eqn:dcinner_lifted} are those corresponding to the slack variables $u$ introduced in~\eqref{eqn:qclp}. Moreover, the constraints corresponding to the cost function reformulation do not need penalties, since they are satisfied by construction. Recalling $x = \bmb z^T & u^T \bme^T$, we can rewrite~\eqref{eqn:dcinner_lifted} as \begin{subequations} \label{eqn:dcinner_lifted_r1} \begin{align} \minim_{z,y,u,t} & \;\; 1^Tu + \beta^k 1^Tt \label{eqn:liftedcost_r1}\\ \st & \;\; A_1z + B_1y - b_1 \le t, \label{eqn:con_liftlin_r1} \\ & \;\; A_2z + B_2y - b_2 \le u, \label{eqn:con_liftlin2_r1} \\ & \;\; \diag(z)z - y \le 0, \label{eqn:con_diagxy_r1}\\ & \;\; t \ge 0, \label{eqn:con_tpos_r1} \\ & \;\; u \ge 0.
\label{eqn:con_tpos2_r1} \end{align} \end{subequations} In~\eqref{eqn:dcinner_lifted_r1}, constraints~\eqref{eqn:con_liftlin_r1} contain all the actual constraints (originally~\eqref{eqn:opf1_con}), whereas~\eqref{eqn:con_liftlin2_r1} contains all constraints resulting from the cost function reformulation. In the 1-norm cost formulation, a trick has been applied: Normally, a cost of $|w|$ for some variable $w$ would be replaced by one slack variable $s$ and the problem \[ \begin{aligned} \minim_{w,s} &\;\; s + \text{(other costs)} \\ \st &\;\; w \le s,\; -w \le s, \\ &\;\; \text{(other constraints)}, \end{aligned} \] solved. An equivalent formulation is to introduce two slack variables $s_1,s_2$ and solve \[ \begin{aligned} \minim_{w,s_1,s_2} &\;\; s_1 + s_2 + \text{(other costs)} \\ \st &\;\; w \le s_1,\; -w \le s_2, \\ &\;\; s_1 \ge 0, s_2 \ge 0, \\ &\;\; \text{(other constraints)}. \end{aligned} \] The equivalence is easily shown: At least one of $w,-w$ is non-positive, so at the optimum one of $s_1,s_2$ is $0$ and the cost equals that of the more standard formulation. Because this alternative formulation was used, one can treat the $t$ and $u$ in~\eqref{eqn:dcinner_lifted_r1} the same and rewrite the latter once more as \begin{subequations} \label{eqn:dcinner_lifted_r2} \begin{align} \minim_{z,y,t} & \;\; 1^Tt \label{eqn:liftedcost_r2}\\ \st & \;\; Cz + Dy - d \le t, \label{eqn:con_liftlin_r2} \\ & \;\; \diag(z)z - y \le 0, \label{eqn:con_diagxy_r2}\\ & \;\; t \ge 0, \label{eqn:con_tpos_r2} \end{align} \end{subequations} where $t$ now has a larger dimension than the $t$ in~\eqref{eqn:dcinner_lifted_r1} and \[ C := \bmb \beta^k A_1 \\ A_2 \bme, \quad D := \bmb \beta^k B_1 \\ B_2 \bme, \quad d := \bmb \beta^k b_1 \\ b_2 \bme. \] At this point, let $\lambda, \mu$ and $\gamma$ be the dual multipliers for the constraints~\eqref{eqn:con_liftlin_r2},~\eqref{eqn:con_diagxy_r2} and~\eqref{eqn:con_tpos_r2}, respectively.
The Lagrangian of~\eqref{eqn:dcinner_lifted_r2} then becomes \begin{equation} \begin{aligned} L(z,y,t,\lambda,\mu,\gamma) =\;\;& 1^Tt + \lambda^T(Cz + Dy - d - t) \\ &+ \mu^T(\diag(z)z-y) + \gamma^T(-t). \end{aligned} \end{equation} Setting the partial derivatives to 0 yields \begin{subequations} \begin{align} 1 - \lambda - \gamma &= 0, \label{eqn:dldt} \\ C^T\lambda + 2\diag(\mu)z &= 0, \label{eqn:dldz} \\ D^T\lambda - \mu &= 0. \label{eqn:dldy} \end{align} \end{subequations} Equation~\eqref{eqn:dldz} implies that \[ z^*(\lambda,\mu) = -\frac{1}{2}\diag(\mu)^{-1}C^T\lambda. \] The dual problem hence becomes \begin{subequations} \label{eqn:dcinner_dual2} \begin{align} \maxim_{\lambda,\mu,\gamma} & \;\; -\frac{1}{4}\lambda^TC\diag(\mu)^{-1}C^T\lambda - d^T\lambda \\ \st & \;\; \lambda,\gamma,\mu \ge 0, \\ &\;\; 1 - \lambda - \gamma = 0, \\ &\;\; D^T\lambda - \mu = 0. \end{align} \end{subequations} Upon closer inspection of~\eqref{eqn:dcinner_dual2}, it can be seen that $\gamma$ and $\mu$ can be eliminated to yield \begin{subequations} \label{eqn:dcinner_dual3} \begin{align} \maxim_{\lambda} & \;\; -\frac{1}{4}\lambda^TC\diag(D^T\lambda)^{-1}C^T\lambda - d^T\lambda \\ \st & \;\; 0 \le \lambda \le 1, \label{eqn:dualcon1} \\ &\;\; D^T\lambda \ge 0.\label{eqn:dualcon2} \end{align} \end{subequations} Finally, since all entries of $D$ are non-negative, constraints~\eqref{eqn:dualcon1} imply~\eqref{eqn:dualcon2}, and the latter can therefore be removed, resulting in the formulation~\eqref{eqn:dcinner_dual} presented in the main text. \end{document}
\begin{document} \title{A homotopy training algorithm for fully connected neural networks} \author[$1$]{Qipin Chen} \author[$1$]{Wenrui Hao} \affil[$1$]{Department of Mathematics, Pennsylvania State University, University Park, PA 16802} \maketitle \begin{abstract} In this paper, we present a Homotopy Training Algorithm (HTA) to solve optimization problems arising from fully connected neural networks with complicated structures. The HTA dynamically builds the neural network starting from a simplified version and ending with the fully connected network by adding layers and nodes adaptively. Therefore, the corresponding optimization problem is easy to solve at the beginning and connects to the original model via a continuous path guided by the HTA, which provides a high probability of obtaining a global minimum. By gradually increasing the complexity of the model along the continuous path, the HTA provides a rather good solution to the original loss function. This is confirmed by various numerical results including VGG models on CIFAR-10. For example, on the VGG13 model with batch normalization, HTA reduces the error rate by 11.86\% on the test dataset compared with the traditional method. Moreover, the HTA also allows us to find the optimal structure for a fully connected neural network by building the neural network adaptively. \end{abstract} \section{Introduction} The deep neural network (DNN) model has been experiencing an extraordinary resurgence in many important artificial intelligence applications since the late 2000s. In particular, it has been able to produce state-of-the-art accuracy in computer vision \cite{simonyan2014very}, video analysis \cite{he2016deep}, natural language processing \cite{collobert2008unified}, and speech recognition \cite{sainath2013deep}.
In the annual contest ImageNet Large Scale Visual Recognition Challenge (ILSVRC), the deep convolutional neural network (CNN) model has achieved the best classification accuracy since 2012, and has exceeded human ability on such tasks since 2015 \cite{markoff2015learning}. The success of learning through neural networks with large model size, i.e., deep learning, is widely believed to be the result of being able to adjust millions to hundreds of millions of parameters to achieve close approximations to the target function. The approximation is usually obtained by minimizing its output error over a training set consisting of a significantly large number of samples. Deep learning methods, as the rising star among all machine learning methods in recent years, have already had great success in many applications. Many advancements \cite{cang2018representability,cang2018integration,yin2018binaryrelax,yin2016quantization} in deep learning have been made in the last few years. However, as the size of new state-of-the-art models continues to grow, they rely more heavily on efficient algorithms for training and making inferences from such models. This clearly places strong limitations on the application scenarios of DNN models for robotics \cite{konda2012real}, auto-pilot automobiles \cite{hammerla2016deep}, and aerial systems \cite{maire2014convolutional}. At present, there are two big challenges in fundamentally understanding deep neural networks: \begin{itemize} \item How to efficiently solve the highly nonlinear and non-convex optimization problems that arise during the training of a deep learning model. \item How to design a deep neural network structure for specific problems.
\end{itemize} To address these challenges, in this paper, we will present a new training algorithm based on the homotopy continuation method \cite{BHS,BHSW,MorganSommese1}, which has been successfully used to study nonlinear problems such as nonlinear differential equations \cite{HHHLSZ,HHHS,WHL}, hyperbolic conservation laws \cite{HHSSXZ,HY}, data driven optimization \cite{Hpar,HHarlim}, physical systems \cite{HNS1,HNS}, and some more complex free boundary problems arising from biology \cite{HCF,HF}. In order to tackle the nonlinear optimization problem in DNN, the homotopy training algorithm is designed and shows efficiency and feasibility for fully connected neural networks with complex structures. The HTA also provides a new way to design a deep fully connected neural network with an optimal structure. This homotopy setup presented in this paper can also be extended to other neural networks, such as CNNs and RNNs. In this paper, we will focus on fully connected DNNs only. The rest of this paper is organized as follows. We first introduce the HTA in Section 2 and then discuss the theoretical analysis in Section 3. Several numerical examples are given in Section 4 to demonstrate the accuracy and efficiency of the HTA. Finally, applications of HTA to computer vision will be given in Section 5. \section{Homotopy Training Algorithm} \label{sec:main} The basic idea of HTA is to train a simple model at the beginning, then adaptively increase the structure's complexity, and eventually to train the original model. We will illustrate the idea of the homotopy setup by using a fully connected neural network with $\mathbf{x}=(x_1,\cdots, x_n)^T$ as the input and $\mathbf{y}=(y_1,\cdots, y_m)^T$ as the output. More specifically, for a single hidden layer, the neural network (see Fig.
\ref{Fig:SL}) can be written as \bes \mathbf{y}=f(\mathbf{x})=W_2^T \sigma(W_1^T \mathbf{x}+\beta_1)+\beta_2,\label{DNN1}\ees where $\sigma$ is the activation function (for example, ReLU), $W_1\in R^{n\times d_1}$ and $W_2\in R^{d_1\times m}$ are parameter matrices representing the weighted summation, $\beta_1 \in R^{d_1} $ and $\beta_2\in R^m$ are vectors representing bias, and $d_1$ is the number of nodes of the single hidden layer, namely, the width. Similarly, a fully connected neural network with two hidden layers (see Fig. \ref{Fig:2L}) is written as \bes \mathbf{y}=f(\mathbf{x})=W_3^T \sigma(W_2^T \sigma(W_1^T \mathbf{x}+\beta_1)+\beta_2)+\beta_3,\label{DNN2}\ees where $W_1\in R^{n\times d_1}$, $W_2\in R^{d_1\times d_2}$, $W_3\in R^{d_2\times m}$, $\beta_1 \in R^{d_1} $, $\beta_2\in R^{d_2}$, $\beta_3\in R^{m}$ and $d_2$ is the width of the second layer. Then the homotopy continuation method is introduced to track the minimizer of (\ref{DNN1}) to the minimizer of (\ref{DNN2}) by setting \bes \mathbf{y}(t)&=&H_1(x;W_j,\beta_j,t)=(1-t)[W_2^T \sigma(W_1^T x+\beta_1)+\beta_2]\nnu\\&\ &+t[W_3^T \sigma(W_2^T \sigma(W_1^T x+\beta_1)+\beta_2)+\beta_3],~j=1,2,3.\label{Hom}\ees Then an optimum of (\ref{DNN2}) will be obtained by tracking the homotopy parameter $t$ from $0$ to $1$. The idea is that the model of (\ref{DNN1}) is easier to train than that of (\ref{DNN2}). Moreover, the homotopy setup will follow the universal approximation theorem \cite{hornik1989multilayer,huang2006universal} to find an approximation trajectory to reveal the real nonlinear relationship between the input $x$ and the output $y$. Similarly, we can extend this homotopy idea to any two layers \bes H_i(x;\theta,t)=(1-t)y_i(x;\theta)+t y_{i+1}(x;\theta),\label{Hom_t}\ees where $y_{i}(x;\theta)$ is the approximation of a fully connected neural network with $i$ layers and $\theta$ represents parameters that are weights of the neural network.
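As an entirely illustrative sketch of the blend in (\ref{Hom}): the two networks share the first hidden layer, and the homotopy output interpolates linearly between their outputs in $t$. All sizes and weight values below are made-up stand-ins, not from the paper:

```python
# Illustrative sketch of the homotopy between a one-hidden-layer and a
# two-hidden-layer network: both share the first hidden layer, and the
# homotopy output is the linear blend (1-t)*y1 + t*y2.

def relu(v):
    return [max(a, 0.0) for a in v]

def affine(W, x, b):  # rows of W are output units: returns W x + b
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def homotopy_output(x, t, p):
    h1 = relu(affine(p["W1"], [x], p["b1"]))        # shared hidden layer
    y1 = sum(w * h for w, h in zip(p["v1"], h1)) + p["c1"]
    h2 = relu(affine(p["W2"], h1, p["b2"]))         # extra hidden layer
    y2 = sum(w * h for w, h in zip(p["v2"], h2)) + p["c2"]
    return (1.0 - t) * y1 + t * y2

p = {"W1": [[1.0], [-2.0]], "b1": [0.0, 0.5], "v1": [0.5, -1.0], "c1": 0.1,
     "W2": [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], "b2": [0.0, 0.0, 0.0],
     "v2": [1.0, 1.0, -0.5], "c2": -0.2}

# H is affine in t: H(0.5) is the midpoint of H(0) = y1 and H(1) = y2
y_lo, y_hi = homotopy_output(0.7, 0.0, p), homotopy_output(0.7, 1.0, p)
assert abs(homotopy_output(0.7, 0.5, p) - 0.5 * (y_lo + y_hi)) < 1e-12
```

The affine-in-$t$ structure is what allows the minimizer to be tracked continuously as $t$ moves from $0$ to $1$.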
In this case, we can train a fully connected neural network ``node-by-node" and ``layer-by-layer." This computational algorithm can significantly reduce computational costs of deep learning, which is used on large-scale data and complex problems. Using the homotopy setup, we are able to rewrite the ANN, CNN, and RNN in terms of a specific start system such as (\ref{DNN1}). After designing a proper homotopy, we need to train this model with some data sets. In the homotopy setup, we need to solve the following optimization problem: \begin{equation} \theta(t)=\mathop{\arg\min}_{\theta} \sum_{j=1}^N \| H_i(X^j;\theta,t) - Y^j \|^2_{U}, \label{OPt}\end{equation} where $X^j$ and $Y^j$ represent data points and $N$ is the number of data points in a mini-batch. In this optimization, the homotopy setup tracks the optima from a simpler optimization problem to a more complex one. The loss function in (\ref{OPt}) could be changed to other types of entropy functions \cite{Goodfellow-et-al-2016,vapnik1999overview}. {\bf A simple illustration:} We consider a simple neural network with two hidden layers to approximate a scalar function $y=f(x)$ (the widths of the two hidden layers are 2 and 3, respectively). The detailed HTA algorithm for training this neural network is listed in {\bf Algorithm 1}. Then the neural network with a single layer in (\ref{DNN1}) gives us that $W_1\in R^{1\times 2}$, $W_2\in R^{2\times 1}$, $\beta_1\in R^{2\times1}$, and $\beta_2\in R$. By denoting all the weights $W_1,~W_2,~\beta_1,\beta_2$ as $\theta_1$, the optimization problem (\ref{OPt}) for $t=0$ is formulated as $\min f_1(\theta_1)$. Then a minimizer $\theta_1^*=\{W_1^*, W_2^*,\beta_1^*,\beta_2^*\}$ satisfies the necessary condition \bes\nabla_{\theta_1} f_1(\theta_1^*)=0.\label{Sl1}\ees Similarly, for the neural network with two hidden layers, we have that, in (\ref{DNN2}), $W_2\in R^{2\times 3}$ and $\beta_2\in R^{3\times1}$ are changed, and $W_3\in R^{3\times 1}$ and $\beta_3\in R$ are added.
Then all the new variables introduced by the second hidden layer ($W_3$, $\beta_3$ and part of $W_2$ and $\beta_2$) are denoted as $\theta_2$. The total variables $\theta=\theta_1\cup \theta_2$ formulate the optimization problem of two hidden layers, namely, (\ref{OPt}) for $t=1$, as $\min f_2(\theta)$ and solve it by using (\ref{Hom_t}), which is equivalent to solving the following nonlinear equations: \bes Hom(\theta,t):=t\nabla_\theta f_2(\theta)+(1-t) \left( \begin{aligned} \nabla_{\theta_1} \tilde{f}_2(\theta)\\ \nabla_{\theta_2} \tilde{f}_2(\theta)-\nabla_{\theta_2} \tilde{f}_2(\theta^0)\\ \end{aligned}\right)=0, \label{Hom1}\ees where $\tilde{f}_2$ is the objective function with the activation function of the second hidden layer as the identity and $\theta^0$ is constructed as \[W_1^0=W_1^*,~ W_2^0=[W_2^*,0,0],~ W_3^0=[1,0,0]^T \hbox{~and~} \beta_1^0=\beta_1^*,~\beta_2^0=[\beta_2^*,0,0]^T, ~\beta_3^0=0.\] By noticing that $\nabla_{\theta_1} \tilde{f}_2(\theta^0)=\nabla_{\theta_1} f_1(\theta_1^*)=0$, we have that $Hom(\theta^0,0)=0$, which implies that the neural network with a single layer can be rewritten as a special form of the neural network with two layers. Then we can solve the optimization problem $\min f_2(\theta)$ by tracking (\ref{Hom1}) with respect to $t$ from 0 to 1. This homotopy technique is quite often used in solving nonlinear equations \cite{HHHLSZ,HHHS,WHL}. However, in practice, we will use some advanced optimization methods for solving (\ref{OPt}), such as the stochastic gradient descent method, instead of solving nonlinear equations (\ref{Hom1}) directly. \section{Theoretical analysis} \label{sec:con} In this section, we analyze the convergence of HTA between any two layers, namely, from the $i$-th layer to the $(i+1)$-th layer. Assuming that we have a known minimizer of a fully connected neural network with $i$ layers, we prove that we can get a minimizer by adding the $(i+1)$-th layer through the HTA.
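Before turning to the analysis, the zero-padding construction of $\theta^0$ from the previous section can be checked numerically: with the second hidden layer's activation taken as the identity, the padded two-layer network reproduces the trained one-layer network exactly. A minimal sketch, with arbitrary stand-in values for $W_1^*, W_2^*, \beta_1^*, \beta_2^*$ (widths 2 and 3 as in the illustration):

```python
# Numerical check of the zero-padding embedding theta^0: the padded
# two-layer network (identity activation on the second hidden layer)
# equals the one-layer network, so the homotopy starts from a genuine
# minimizer. All weight values are made-up stand-ins.

def relu(v):
    return [max(a, 0.0) for a in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# trained one-hidden-layer net (width 2, scalar input and output)
W1, b1 = [[0.8], [-1.1]], [0.2, -0.3]
W2, b2 = [0.5, -0.7], 0.1

def y_one(x):
    h = relu([dot(row, [x]) + b for row, b in zip(W1, b1)])
    return dot(W2, h) + b2

# theta^0: pad the second layer to width 3 with zeros, readout [1,0,0]
W2p, b2p = [W2, [0.0, 0.0], [0.0, 0.0]], [b2, 0.0, 0.0]
W3, b3 = [1.0, 0.0, 0.0], 0.0

def y_two_tilde(x):  # identity activation on the second hidden layer
    h = relu([dot(row, [x]) + b for row, b in zip(W1, b1)])
    g = [dot(row, h) + b for row, b in zip(W2p, b2p)]
    return dot(W3, g) + b3

for x in (-1.0, 0.0, 0.4, 2.0):
    assert abs(y_one(x) - y_two_tilde(x)) < 1e-12
```

The padded second layer simply passes the one-layer output through its first coordinate, which is why $Hom(\theta^0,0)=0$ holds.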
First, we consider the expectation of the loss function as \begin{equation} \mathbb{E}_\xi[\mathcal{L}(H_{i}(x_\xi;\theta,t),y_\xi)], \end{equation} where $\mathcal{L}$ is the categorical cross entropy loss function \cite{rao2019natural,bernico2018deep} and is defined as $\mathcal{L}(x,y) = -x[y] + \log(\sum_j e^{x[j]})$, and $\xi$ is a random variable due to random algorithms for solving the optimization problem for each given $t$. For simplicity, we denote \begin{eqnarray} F(\theta;\xi,t)&:=&\mathcal{L}(H_{i}(x_\xi;\theta,t),y_\xi),\\ f(\theta;t)&:=&\mathbb{E}_\xi[F(\theta;\xi,t)],\\ G(\theta;\xi,t)&:=&\nabla_\theta F(\theta;\xi,t), \end{eqnarray} where the index $i$ does not contribute to the analysis and therefore is ignored in our notation. By denoting $\displaystyle\theta_*^t:=\mathop{\arg\min}_\theta f(\theta;t)$, we define our stochastic gradient scheme for any given $t$: \begin{equation} \label{sgd_scheme} \theta_{k+1} = \theta_k - \gamma_k G(\theta_k;\xi_k,t). \end{equation} First, we have the following convergence theorem for any given $t$ with the sigmoid activation function. \begin{theorem} {\bf (Nonconvex Convergence)} If $\nabla_\theta H_i(x;\theta,t)$ is bounded for a given $t$, namely, $ |\nabla_\theta H_i(x;\theta,t)| \le M_t,$ and $\{\theta_k\}$ is contained in a bounded open set, supposing that (\ref{sgd_scheme}) is run with a step-size sequence satisfying \begin{equation} \sum_{k=1}^\infty \gamma_k = \infty \text{ and } \sum_{k=1}^\infty \gamma_k^2 < \infty, \end{equation} then we have \begin{equation} \mathbb{E}[\frac{1}{A_K}\sum_{k=1}^K \gamma_k\|\nabla f(\theta_k;t)\|_2^2]\rightarrow 0 \text{ as } K\rightarrow \infty \hbox{~ with~} A_K:=\sum_{k=1}^K \gamma_k.
\end{equation} \label{nonconvex} \begin{proof} First, we prove the {\bf Lipschitz-continuous objective gradients} condition \cite{BCN}, which means that $f(\theta;t)$ is $C^1$ and $\nabla f(\theta;t)$ is Lipschitz continuous with respect to $\theta$: \begin{itemize} \item {\bf $f(\theta; t)$ is $C^1$.} Since $y_i(x_\xi; \theta)\in C^1$ in $\theta$, we have $H(x_\xi;\theta,t) \in C^1$. Moreover, since $\mathcal{L}(\cdot, y)$ is $C^1$, we have that $F(\theta;\xi,t)\in C^1$ or that $\nabla_\theta F(\theta; \xi,t)$ is continuous. Considering \begin{equation} \nabla_\theta f(\theta;t) = \nabla_\theta \mathbb{E}_\xi(F(\theta;\xi,t)) = \mathbb{E}_\xi(\nabla_\theta F(\theta;\xi,t)), \end{equation} we have that $\nabla_\theta f(\theta;t)$ is continuous or that $f(\theta; t)\in C^1$. \item {\bf $\nabla f(\theta;t)$ is Lipschitz continuous.} Since \begin{eqnarray} \nabla_\theta f(\theta;t) &=& \mathbb{E}_\xi[\nabla_x \mathcal{L}(H(x_\xi;\theta,t),y_\xi)\nabla_\theta H(x_\xi;\theta,t)], \end{eqnarray} we will prove that both $\nabla_x\mathcal{L}(H(x_\xi;\theta,t),y_\xi)$ and $\nabla_\theta H(x_\xi; \theta,t)$ are bounded and Lipschitz continuous. Because both $\sigma(x)=\frac{1}{1+e^{-x}}$ and $\sigma'(x)=\sigma(x)(1-\sigma(x))$ are Lipschitz continuous and $\{\theta_k\}$ is bounded (assumption of Theorem \ref{nonconvex}), $\nabla_\theta H(x_\xi; \theta,t)$ is Lipschitz continuous. ($x_\xi$ is bounded because the size of our dataset is finite.) By differentiating $\mathcal{L}(x,y)$, we have \begin{equation} \label{part1} \nabla_x \mathcal{L}(x,y) = (-\delta_y^1+\frac{e^{x_1}}{\sum_je^{x_j}},\cdots,-\delta_y^n+\frac{e^{x_n}}{\sum_je^{x_j}}), \end{equation} where $\delta$ is the Kronecker delta.
Since \begin{equation} \frac{\partial}{\partial x_k}\frac{e^{x_i}}{\sum_j e^{x_j}} = \begin{cases} &\frac{e^{x_i}\sum_j e^{x_j}-(e^{x_i})^2}{(\sum_j e^{x_j})^2} \quad k=i \\ &\frac{-e^{x_i} e^{x_k}}{(\sum_j e^{x_j})^2} \quad k\ne i, \\ \end{cases} \end{equation} which implies that $\big|\frac{\partial}{\partial x_k}\frac{e^{x_i}}{\sum_j e^{x_j}}\big|\leq 2$, we see that $\nabla_x \mathcal{L}(\cdot, y)$ is Lipschitz continuous and bounded. Therefore, $\nabla_x\mathcal{L}(H(x_\xi;\theta,t),y_\xi)$ is Lipschitz continuous and bounded. Thus, $\nabla f(\theta;t)$ is Lipschitz continuous. \end{itemize} Second, we prove the {\bf first and second moment limits} condition \cite{BCN}: \begin{itemize} \item[a.] According to our theorem's assumption, $\{\theta_k\}$ is contained in an open set that is bounded. Since $f$ is continuous, $f$ is bounded; \item[b.] Since $G(\theta_k; \xi_k,t)=\nabla_\theta F(\theta_k;\xi_k,t)$ is continuous, we have \begin{equation} \mathbb{E}_{\xi_k}[G(\theta_k;\xi_k,t)] = \nabla_\theta\mathbb{E}_{\xi_k}[F(\theta_k;\xi_k,t)] = \nabla_\theta f(\theta_k;t). \end{equation} Therefore, \begin{equation} \nabla f(\theta_k; t)^T\mathbb{E}_{\xi_k}[G(\theta_k; \xi_k,t)]=\nabla f(\theta_k; t)^T\cdot \nabla f(\theta_k; t) = \|\nabla f(\theta_k; t)\|_2^2 \ge u\|\nabla f(\theta_k; t)\|_2^2 \end{equation} for $0<u\le 1$. On the other hand, we have \begin{equation} \|\mathbb{E}_{\xi_k}[G(\theta_k; \xi_k,t)]\|_2=\|\nabla f(\theta_k; t)\|_2 \le u_G\|\nabla f(\theta_k; t)\|_2 \end{equation} for $u_G\ge 1$. \item[c.] Since $\nabla F(\theta_k; \xi_k,t)$ is bounded for a given $t$, we have $\mathbb{E}_{\xi_k}[\|\nabla F(\theta_k; \xi_k,t)\|_2^2]$ is also bounded. Thus, \begin{equation} \mathbb{V}_{\xi_k}[G(\theta_k; \xi_k,t)] := \mathbb{E}_{\xi_k}[\|G(\theta_k; \xi_k,t)\|_2^2] - \|\mathbb{E}_{\xi_k}[G(\theta_k; \xi_k,t)]\|_2^2\le \mathbb{E}_{\xi_k}[\|G(\theta_k; \xi_k,t)\|_2^2], \end{equation} which implies that $\mathbb{V}_{\xi_k}[G(\theta_k; \xi_k,t)]$ is bounded. 
\end{itemize} We have checked Assumptions 4.1 and 4.3 in \cite{BCN}. By Theorem 4.10 in \cite{BCN}, with the diminishing step-size, namely, \begin{equation} \sum_{k=1}^\infty \gamma_k = \infty \text{ and } \sum_{k=1}^\infty \gamma_k^2 < \infty, \end{equation} the following convergence is obtained \begin{equation} \mathbb{E}[\frac{1}{A_K}\sum_{k=1}^K \gamma_k\|\nabla f(\theta_k;t)\|_2^2]\rightarrow 0 \text{ as } K\rightarrow \infty. \end{equation} \end{proof} \end{theorem} Second, we theoretically explore the existence of the solution path $\theta(t)$ when $t$ varies from $0$ to $1$ for the convex case. The solution path of $\theta(t)$ might be complex for the non-convex case, e.g., due to bifurcations, and is hard to analyze theoretically. Therefore, we analyze the HTA theoretically on the convex case only but apply it to non-convex cases in the numerical experiments. We redefine our stochastic gradient scheme for the homotopy process as \begin{equation} \label{sgd_scheme2} \theta_{k+1} = \theta_k - \gamma_k G(\theta_k;\xi_k,t_k), \end{equation} where $\gamma_k$ is the learning rate and $t_0=0$, $t_k\nearrow 1$. Instead of considering the local convergence of the HTA in a neighborhood of the global minimum, we prove the following theorem under a more general assumption, namely, that $f$ is a convex and differentiable objective function with a bounded gradient. \begin{theorem} {\bf (Existence of solution path $\theta(t)$)} Assume that $f(\cdot,\cdot)$ is convex and differentiable and that $\|G(\theta;\xi,t)\| \le M$. Then for the stochastic gradient scheme (\ref{sgd_scheme2}), with a finite partition for $t$ between $[0,1]$, we have \begin{equation} \lim_{n\rightarrow \infty} \mathbb{E}[f(\bar{\theta}_n, \bar{t}_n)] = f(\theta_*^1, 1), \end{equation} where $\bar{\theta}_n = \frac{\sum_{k=0}^n\gamma_k\theta_k}{\sum_{k=0}^n\gamma_k}$ and $\bar{t}_n = \frac{\sum_{k=0}^n\gamma_k t_k}{\sum_{k=0}^n\gamma_k}$.
\end{theorem} \begin{proof} \begin{eqnarray} \mathbb{E}[\|\theta_{k+1}-\theta_*^{t_{k+1}}\|^2] &=&\mathbb{E}[\|\theta_k-\gamma_kG(\theta_k;\xi_k,t_k) - \theta_*^{t_k}\|^2] \nonumber\\ &\quad& - 2\mathbb{E}[\langle \theta_k - \gamma_kG(\theta_k;\xi_k,t_k) - \theta_*^{t_k}, \theta_*^{t_{k+1}} - \theta_*^{t_k}\rangle]\nnu\\&\quad & +\mathbb{E}[\|\theta_*^{t_{k+1}} - \theta_*^{t_k}\|^2]. \end{eqnarray} By defining \begin{equation} \begin{aligned} A_k=-2\mathbb{E}[\langle \theta_k-\gamma_kG(\theta_k;\xi_k,t_k) - \theta_*^{t_k} , \theta_*^{t_{k+1}} - \theta_*^{t_k}\rangle] + \mathbb{E}[\|\theta_*^{t_{k+1}}-\theta_*^{t_k}\|^2], \end{aligned} \end{equation} we have $\sum_{k=0}^nA_k\le A<\infty$ since $t\in[0,1]$ has a finite partition. Therefore, we obtain \begin{eqnarray} \mathbb{E}[\|\theta_{k+1} - \theta_*^{t_{k+1}}\|^2] \nonumber &=&\mathbb{E}[\|\theta_k - \gamma_kG(\theta_k;\xi_k,t_k) - \theta_*^{t_k}\|^2] + A_k \nonumber\\ &=&\mathbb{E}[\|\theta_k-\theta_*^{t_k}\|^2] - 2\gamma_k \mathbb{E} [\langle G(\theta_k;\xi_k,t_k),\theta_k-\theta_*^{t_k}\rangle] +\gamma_k^2\mathbb{E}[\|G(\theta_k;\xi_k,t_k)\|^2] + A_k\nnu\\ &\le&\mathbb{E}[\|\theta_k-\theta_*^{t_k}\|^2] - 2\gamma_k \mathbb{E} [\langle G(\theta_k;\xi_k,t_k),\theta_k - \theta_*^{t_k}\rangle] +\gamma_k^2M^2 + A_k\nonumber. 
\end{eqnarray} Since \begin{eqnarray} \mathbb{E}[\langle G(\theta_k;\xi_k,t_k),\theta_k-\theta_*^{t_k}\rangle] &=&\mathbb{E}_{\xi_0,\cdots,\xi_{k-1}}[\mathbb{E}_{\xi_k}[\langle G(\theta_k;\xi_k,t_k) , \theta_k - \theta_*^{t_k}\rangle |\xi_0,\cdots,\xi_{k-1}]] \nonumber\\ &=&\mathbb{E}_{\xi_0,\cdots,\xi_{k-1}}[\langle \nabla f(\theta_k;t_k) , \theta_k - \theta_*^{t_k}\rangle|\xi_0,\cdots,\xi_{k-1}] \nonumber\\ &=&\mathbb{E}[\langle \nabla f(\theta_k;t_k) , \theta_k - \theta_*^{t_k}\rangle], \end{eqnarray} we have \begin{eqnarray} \mathbb{E}[\|\theta_{k+1} - \theta_*^{t_{k+1}}\|^2] \le\mathbb{E}[\|\theta_k-\theta_*^{t_k}\|^2] - 2\gamma_k\mathbb{E} [\langle \nabla f(\theta_k;t_k),\theta_k - \theta_*^{t_k}\rangle] +\gamma_k^2M^2 + A_k. \end{eqnarray} Due to the convexity of $f(\cdot, t_k)$, namely, \begin{equation} \langle \nabla f(\theta_k, t_k), \theta_k-\theta_*^{t_k}\rangle \ge f(\theta_k; t_k) - f(\theta_*^{t_k}; t_k), \end{equation} we conclude that \begin{eqnarray} \mathbb{E}[\|\theta_{k+1} - \theta_*^{t_{k+1}}\|^2] \le \mathbb{E}[\|\theta_k-\theta_*^{t_k}\|^2]- 2\gamma_k \mathbb{E}[ f(\theta_k;t_k) - f(\theta_*^{t_k},t_k)] +\gamma_k^2M^2 + A_k, \quad \label{eqn24} \end{eqnarray} or \begin{eqnarray} 2\gamma_k\mathbb{E}[f(\theta_k;t_k) - f(\theta_*^{t_k};t_k)]\le -\mathbb{E}[\|\theta_{k+1} - \theta_*^{t_{k+1}}\|^2 - \|\theta_k - \theta_*^{t_k}\|^2] + \gamma_k^2M^2 + A_k.\nonumber \end{eqnarray} By summing up $k$ from 0 to n, \begin{eqnarray} 2\sum_{k=0}^n\gamma_k\mathbb{E}[f(\theta_k;t_k) - f(\theta_*^{t_k};t_k)] &\le& -\mathbb{E}[\|\theta_{n+1} - \theta_*^{t_{n+1}}\|^2 - \|\theta_0 - \theta_*^{0}\|^2] + M^2\sum_{k=0}^n\gamma_k^2+ \sum_{k=0}^n A_k \nonumber \\ &\le& D^2+ M^2\sum_{k=0}^n\gamma_k^2 + \sum_{k=0}^n A_k, \end{eqnarray} where $D = \|\theta_0 - \theta_*^0\|$. 
Dividing both sides by $2\sum_{k=0}^n\gamma_k$, we have \begin{eqnarray} \frac{1}{\sum_{k=0}^n\gamma_k} \sum_{k=0}^n\gamma_k \mathbb{E}[f(\theta_k;t_k) - f(\theta_*^{t_k};t_k)] \le \frac{D^2+ M^2\sum_{k=0}^n\gamma_k^2 + \sum_{k=0}^n A_k}{2\sum_{k=0}^n\gamma_k} \le \frac{D^2+ M^2\sum_{k=0}^n\gamma_k^2 + A}{2\sum_{k=0}^n\gamma_k}.\nonumber \end{eqnarray} According to the convexity of $f(\cdot;\cdot)$ and Jensen's inequality \cite{jensen1906fonctions}, \begin{equation} \frac{1}{\sum_{k=0}^n\gamma_k} \sum_{k=0}^n\gamma_k \mathbb{E}[f(\theta_k;t_k)] \ge \mathbb{E}[f(\bar{\theta}_n;\bar{t}_n)], \end{equation} where $\bar{\theta}_n = \frac{\sum_{k=0}^n\gamma_k\theta_k}{\sum_{k=0}^n\gamma_k}$ and $\bar{t}_n = \frac{\sum_{k=0}^n\gamma_k t_k}{\sum_{k=0}^n\gamma_k}$. Then we have \begin{eqnarray} \mathbb{E}[f(\bar{\theta}_n;\bar{t}_n)] - \frac{\sum_{k=0}^n\gamma_k f(\theta_*^{t_k};t_k)}{\sum_{k=0}^n\gamma_k} \le \frac{D^2+ M^2\sum_{k=0}^n\gamma_k^2 + A}{2\sum_{k=0}^n\gamma_k}. \end{eqnarray} We choose $\gamma_k$ such that $\sum_{k=0}^\infty\gamma_k = \infty$ and $\sum_{k=0}^\infty\gamma_k^2 < \infty$, for example, $\gamma_k = \frac{1}{k+1}$. Letting $n$ go to infinity, we have \begin{equation} \lim_{n\rightarrow \infty}\mathbb{E}[f(\bar{\theta}_n, \bar{t}_n)] - \lim_{n\rightarrow \infty} \frac{\sum_{k=0}^n\gamma_k f(\theta_*^{t_k}, t_k)}{\sum_{k=0}^n\gamma_k}\le 0. \end{equation} Since $t_k\nearrow 1$, $f(\cdot,\cdot)$ is continuous, and $\displaystyle\sum_{k=0}^\infty\gamma_k = \infty$, we have \begin{equation} \lim_{n\rightarrow \infty}\frac{\sum_{k=0}^n\gamma_k f(\theta_*^{t_k}, t_k)}{\sum_{k=0}^n\gamma_k} = f(\theta_*^1, 1), \end{equation} which implies that \begin{equation} \lim_{n\rightarrow \infty} \mathbb{E}[f(\bar{\theta}_n, \bar{t}_n)] \leq f(\theta_*^1, 1). \end{equation} Since $\bar{t}_n\rightarrow 1$, we have \begin{equation} \mathbb{E}[f(\lim_{n\rightarrow \infty}\bar{\theta}_n, 1)] \leq f(\theta_*^1, 1).
\end{equation} On the other hand, $\theta_*^1$ is the global minimum due to the convexity of $f$, and we have $\displaystyle\mathbb{E}[f(\lim_{n\rightarrow \infty}\bar{\theta}_n, 1)] \geq f(\theta_*^1, 1).$ Thus, $\displaystyle\mathbb{E}[f(\lim_{n\rightarrow \infty}\bar{\theta}_n, 1)] = f(\theta_*^1, 1)$ holds. \end{proof} \section{Numerical Results} In this section, we demonstrate the efficiency and the feasibility of the HTA by comparing it with the traditional method, the stochastic gradient descent method. For both methods, we used the same hyperparameters, such as learning rate (0.05), batch size (128), and the number of epochs (380) on the same neural network for various problems. Due to the non-convexity of objective functions, both methods may get stuck at local optima. We also ran the training process 15 times with different random initial guesses for both methods and reported the best results for each method. \label{sec:experiments} \subsection{Function Approximations} {\bf Example 1 (Single hidden layer):} The first example we considered is using a single-hidden-layer fully connected neural network to approximate the function \begin{equation} f(x) = \sin(x_1 + x_2 + \cdots + x_n), \end{equation} where $x = (x_1,x_2,\cdots,x_n)^T\in R^n$. The width of the hidden layer of the single-hidden-layer NN is 20, and the width of the hidden layer in the initial state of the HTA is set to 10. Then the homotopy setup is written as \[H(x;\theta,t)=(1-t)y_1(x;\theta) + ty_2(x;\theta)\] where $y_1$ and $y_2$ are the fully connected NNs with 10 and 20 as their hidden-layer widths, respectively.
In particular, we have \begin{equation} y_{1}(x;\theta) = W_{21} \cdot r(W_{11} \cdot x + b_{11}) + b_{21}, \end{equation} \begin{equation} y_{2}(x;\theta) = W_{2} \cdot r(W_{1} \cdot x + b_{1}) + b_{21}, \end{equation} where $x \in \mathbb{R}^{n\times 1}$, $W_{11} \in \mathbb{R}^{10\times n}$, $b_{11}\in \mathbb{R}^{10\times 1}$, $W_{21}\in \mathbb{R}^{1\times 10}$, $b_{21}\in \mathbb{R}^{1\times 1}$, $W_{1}=\left( \begin{aligned} W_{11}\\ W_{12}\\ \end{aligned}\right) \in \mathbb{R}^{20\times n}$, $b_{1}=\left( \begin{aligned} b_{11}\\ b_{12}\\ \end{aligned}\right)\in \mathbb{R}^{20\times 1}$, and $W_{2}=\left( W_{21}, W_{22} \right)\in \mathbb{R}^{1\times 20}$. We use the ReLU function $r(x)=\max\{0,x\}$ as our activation function. For $n\leq 3$, we used uniform grid points, where the sample points are the Cartesian products of uniformly sampled points of each dimension. Then the size of the training data set is $10^{2n}$. For $n\geq 4$, we employed the sparse grid \cite{garcke2006sparse,smolyak1963quadrature} with level 6 as sample points. For each $n$, $90\%$ of the data set is used for training while $10\%$ is used for testing. The loss curves of the one-dimensional and two-dimensional cases are shown in Figs. \ref{sin_loss_1d} and \ref{sin_loss_2d}. By choosing $\Delta t=0.5$, the testing loss of HTA (for $t=1$) is lower than that of the traditional training algorithm. Fig. \ref{sin_testing_plot} shows the comparison between the traditional method and HTA for the one-dimensional case while Fig. \ref{sin_testing_plot_2d} shows the comparison of the two-dimensional case by using contour curves. All the results of up to $n=5$ are summarized in Table \ref{tab_testing_loss}, which lists the test loss between the HTA and the traditional training algorithm. It shows clearly that the HTA method is more efficient than the traditional method.
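The Cartesian-grid sampling described above can be sketched as follows; the domain bounds and the number of points per dimension are illustrative assumptions, since the paper does not state them, while the target $\sin(x_1+\cdots+x_n)$ is as in Example 1:

```python
# Sketch of the uniform-grid training data for Example 1: inputs on a
# Cartesian product grid, targets sin(x1 + ... + xn). The domain
# [-1, 1]^n and the points-per-dimension count are assumptions.
import itertools
import math

def uniform_grid_dataset(n, pts_per_dim, lo=-1.0, hi=1.0):
    step = (hi - lo) / (pts_per_dim - 1)
    xs = [lo + i * step for i in range(pts_per_dim)]
    return [(list(x), math.sin(sum(x)))
            for x in itertools.product(xs, repeat=n)]

data = uniform_grid_dataset(2, 11)   # 11^2 Cartesian grid points
assert len(data) == 121
x0, y0 = data[0]
assert abs(y0 - math.sin(sum(x0))) < 1e-15
```

The exponential growth of the grid size in $n$ is exactly why the paper switches to sparse grids for $n\geq 4$.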
{\bf Example 2 (multiple hidden layers):} The second example uses a two-hidden-layer fully connected neural network to approximate the same function in Example 1 for the multi-dimensional case. Since the approximation of the neural network with a single hidden layer is not effective for $n>3$ (see Table \ref{tab_testing_loss}), we use a two-hidden-layer fully connected neural network with 20 nodes for each layer. Then we use the following homotopy setup to increase the width of each layer from 10 to 20: \begin{equation} H_1(x;\theta,t) = (1-t)y_1(x;\theta) + ty_2(x;\theta),\quad H_2(x;\theta,t) = (1-t)y_2(x;\theta) + ty_3(x;\theta), \end{equation} where $y_1$, $y_2$, and $y_3$ represent neural networks with width (10,10), (10,20), and (20,20) respectively. The rationale is that the first homotopy function, $H_1(x;\theta,t)$, increases the width of the first layer while the second homotopy function, $H_2(x;\theta,t)$, increases the width of the second layer. The size of the training data and the strategy of choosing $\Delta t$ are the same as in Example 1. Table \ref{tab_testing_loss_hd} shows the results of the approximation, and Fig. \ref{sin_testing_loss_hd} shows the testing curves for $n=5$ and $n=6$. The HTA achieves higher accuracy than the traditional method. \subsection{Parameter Estimation} Parameter estimation often requires a tremendous number of model evaluations to obtain the solution information on the parameter space \cite{CMV15,Hpar,Goodman}. However, this large number of model evaluations becomes very difficult and even impossible for large-scale computational models \cite{hma:05,MS94}. Then a surrogate model needs to be built in order to approximate the parameter space. Neural networks provide an effective way to build the surrogate model. But an efficient training algorithm of neural networks is needed to obtain an effective approximation, especially for limited sample data on the parameter space.
We will use the Van der Pol equation as an example to illustrate the efficiency of HTA on the parameter estimation. {\bf Example 3:} We applied the HTA to estimate the parameters of the Van der Pol equation: \begin{equation} y'' - \mu(k - y^2)y' + y = 0 \hbox{~with~} y(0)=2 \hbox{~and ~} y'(0)=0. \end{equation} In order to estimate the parameters $\mu$ and $k$ for a given data $\tilde{y}(t;\mu,k)$, we first use a single-hidden-layer fully connected neural network to build a surrogate model with $\mu$ and $k$ as inputs and $y(1)$ as the output. Our training data set is chosen on $1\le \mu \le 10$ and $1\le k \le 10$ with 8,281 sample points (with $0.1$ as the mesh size). This neural network is trained by both the traditional method and the HTA. The testing dataset is 961 uniform grid points on $11\le \mu \le 14$ and $11\le k \le 14$ with $0.1$ as the mesh size for both $\mu$ and $k$. The testing loss curves of the traditional method and HTA are shown in Fig. \ref{vdp}: after $5\times10^4$ steps, the testing loss is $0.007$ for HTA and $0.220$ for the traditional method. We also compared these two surrogate models (traditional method and HTA) with the numerical ODE solution $y(1;\mu,k)$ in Fig. \ref{vdp_equation}. This comparison shows that the approximation of the HTA is closer to the ODE model than that of the traditional method. Once we had built the surrogate models, we moved to a parameter estimation step for any given data $\tilde{y}(t;\mu,k)$ to solve the following optimization problem \begin{equation} \min_{\mu,k} (S(\mu,k)-\tilde{y}(1))^2,\label{PE} \end{equation} where $S(\mu,k)$ is the surrogate neural network model and $\tilde{y}(1)$ is the data when $t=1$. In our example, we generated ``artificial data" on the testing dataset. We use SGD to solve the optimization problem with $\mu=k=11$ as the initial guess.
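Generating the surrogate's training targets $y(1;\mu,k)$ requires integrating the Van der Pol equation; a minimal sketch with classical RK4 follows (the step size is our choice, not from the paper):

```python
# Sketch of computing y(1; mu, k) for y'' - mu(k - y^2)y' + y = 0,
# y(0) = 2, y'(0) = 0, via classical fourth-order Runge-Kutta on the
# first-order system (y, v). The step count is an assumed choice.

def vdp_y_at_1(mu, k, steps=1000):
    h = 1.0 / steps
    y, v = 2.0, 0.0  # initial conditions y(0), y'(0)

    def f(y, v):  # (y', v') for the first-order system
        return v, mu * (k - y * y) * v - y

    for _ in range(steps):
        k1y, k1v = f(y, v)
        k2y, k2v = f(y + 0.5 * h * k1y, v + 0.5 * h * k1v)
        k3y, k3v = f(y + 0.5 * h * k2y, v + 0.5 * h * k2v)
        k4y, k4v = f(y + h * k3y, v + h * k3v)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return y

# one grid point from the training range 1 <= mu, k <= 10; the solution
# decreases from its initial maximum y(0) = 2 on this horizon
val = vdp_y_at_1(1.0, 1.0)
assert 0.0 < val < 2.0
```

Looping this over the $91\times 91$ grid in $(\mu,k)$ yields the 8,281 training targets for the surrogate.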
We define the error of the parameter estimation below: \begin{equation} Err_{PE}=\frac{\sum_{i=1}^n \sqrt{(\mu^*_i-\mu_i)^2+(k^*_i-k_i)^2}}{n}, \end{equation} where $(\mu_i,k_i)$ is the sample point and $(\mu^*_i,k^*_i)$ is the optimizer of (\ref{PE}) for a given $\tilde{y}(1)$. Then the error of the HTA is 0.71 while the error of the traditional method is 1.48. We also list some results of parameter estimation for different surrogate models in Table \ref{para}. The surrogate model created by the HTA provides smaller errors than the traditional method for parameter estimation. \section{Applications to Computer Vision} Computer vision is one of the most common applications in the field of machine learning \cite{krizhevsky2012imagenet,rowley1998neural}. It has diverse applications, from designing navigation systems for self-driving cars \cite{bojarski2016end} to counting the number of people in a crowd \cite{chan2008privacy}. There are many different models that can be used for detection and classification of objects. Since our algorithm focuses on fully connected neural networks, we only apply it to computer vision models with fully connected parts. Therefore, in this section, we will use different Visual Geometry Group (VGG) models \cite{simonyan2014very} as an example to illustrate the application of the HTA to computer vision. In computer vision, the VGG models use convolutional layers to extract features of the input picture, and then flatten the output tensor into a 512-dimensional vector. This vector is then fed into the fully connected network (see Fig. \ref{VGG} for more details). In order to demonstrate the efficiency of the HTA, we will apply it to the fully connected part of the VGG models.
\subsection{Three States of HTA} The last stage of VGG models is a fully connected neural network that links the convolutional layers of VGG to the classification categories of the Canadian Institute for Advanced Research (CIFAR-10) dataset \cite{krizhevsky2009learning}. The input is the vector generated by the convolutional layers (of width 512), while the output is the 10 classification categories of CIFAR-10. In order to train this fully connected neural network, we construct a three-state setup of the HTA, which is shown in Fig. \ref{3states}. In this section, we use $x$ to represent the inputs generated by the convolutional layers ($x\in \mathbb{R}^{512}$), and $\theta$ to represent the parameters for each state. The size of $\theta$ may change for different states. \begin{itemize} \item {\bf State 1:} For the first state, we construct a fully connected network with 2 hidden layers. The width of the $i$-th hidden layer is set to be $w_i$. Then it can be written as \begin{equation} y_{1}(x;\theta) = W_{31} \cdot r(W_{21} \cdot r(W_{11} \cdot x + b_{11}) + b_{21}) + b_{3}, \end{equation} where $x \in \mathbb{R}^{512\times 1}$, $W_{11} \in \mathbb{R}^{w_1\times 512}$, $b_{11}\in \mathbb{R}^{w_1\times 1}$, $W_{21}\in \mathbb{R}^{w_2\times w_1}$, $b_{21}\in \mathbb{R}^{w_2\times 1}$, $W_{31}\in \mathbb{R}^{10\times w_2}$, and $b_{3}\in \mathbb{R}^{10\times 1}$. We use the ReLU function $r(x)=\max\{0,x\}$ as our activation function. \item {\bf State 2:} For the second state, we add $(512-w_1)$ nodes to the first hidden layer to recover the first hidden layer of the original model.
Therefore, the formula of the second state becomes \begin{equation} y_{2}(x;\theta) = W_{31} \cdot r(\tilde{W}_{2} \cdot r(W_{1} \cdot x + b_{1}) + b_{21}) + b_{3}, \end{equation} where $x \in \mathbb{R}^{512\times 1}$, $W_1 =\left( \begin{aligned} W_{11}\\ W_{12}\\ \end{aligned}\right) \in \mathbb{R}^{512\times 512}$, $b_1 =\left( \begin{aligned} b_{11}\\ b_{12}\\ \end{aligned}\right) \in \mathbb{R}^{512\times 1}$, and $\tilde{W}_2 =\left( W_{21}, W_{22} \right) \in \mathbb{R}^{w_2\times 512}$. In particular, if we choose $W_{12}=0$ and $W_{22}=0$, we recover $y_1(x;\theta)$ of state 1. \item {\bf State 3:} Finally, we recover the original structure of the VGG by adding $(512-w_2)$ nodes to the second hidden layer: \begin{equation} y_{3}(x; \theta) = W_{3} \cdot r(W_{2} \cdot r(W_{1} \cdot x + b_{1}) + b_{2}) + b_{3}, \end{equation} where $W_3 =\left( \begin{aligned} W_{31}, W_{32} \end{aligned}\right) \in \mathbb{R}^{10\times 512}$, $W_2 =\left( \begin{aligned} \tilde{W}_{2}\\ \tilde{\tilde{W}}_{2}\\ \end{aligned}\right) \in \mathbb{R}^{512\times 512}$, and $b_2 =\left( \begin{aligned} b_{21}\\ b_{22}\\ \end{aligned}\right) \in \mathbb{R}^{512\times 1}$. $y_3(x;\theta)$ reduces to $y_2(x;\theta)$ if $\tilde{\tilde{W}}_{2}=0$ and $W_{32}=0$. \end{itemize} \subsection{Homotopic Path} In order to connect these three states, we use two homotopic paths that are defined by the following homotopy functions: \begin{equation} H_{i}(x; \theta,t) = (1-t)y_{i}(x; \theta) + ty_{i+1}(x; \theta),~i=1,2. \end{equation} In this homotopy setup, at $t=0$ we already have an optimal solution $\theta_i$ for the $i$-th state, and we want to find an optimal solution $\theta$ for the $(i+1)$-th state at $t=1$. By tracking $t$ from 0 to 1, we can discover a solution path $\theta(t)$, since $y_{i}(x; \theta)$ is a special form of $y_{i+1}(x; \theta)$.
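The fact that state 1 embeds exactly into state 2 (take $W_{12}=0$, $W_{22}=0$) can be checked numerically. The following numpy sketch uses random toy weights and shrunken hidden widths purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)
w1, w2 = 100, 100  # shrunken hidden widths of state 1

# State 1 parameters (random toy values).
W11, b11 = rng.normal(size=(w1, 512)), rng.normal(size=(w1, 1))
W21, b21 = rng.normal(size=(w2, w1)), rng.normal(size=(w2, 1))
W31, b3 = rng.normal(size=(10, w2)), rng.normal(size=(10, 1))

def y1(x):
    return W31 @ relu(W21 @ relu(W11 @ x + b11) + b21) + b3

# State 2: widen the first hidden layer to 512.  Setting the new
# blocks W12, W22 (and b12) to zero embeds state 1 exactly.
W12, b12 = np.zeros((512 - w1, 512)), np.zeros((512 - w1, 1))
W22 = np.zeros((w2, 512 - w1))
W_1, b_1 = np.vstack([W11, W12]), np.vstack([b11, b12])
W2_tilde = np.hstack([W21, W22])

def y2(x):
    return W31 @ relu(W2_tilde @ relu(W_1 @ x + b_1) + b21) + b3

x = rng.normal(size=(512, 1))
assert np.allclose(y1(x), y2(x))  # y1 is a special form of y2
```

The same block construction shows that $y_2$ embeds into $y_3$, which is what makes the homotopy well-posed at its endpoints.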
Then the loss functions for the homotopy setup become \begin{equation} L_{i}(x, y; \theta, t) = L(H_i(x; \theta, t), y), \end{equation} where $L(x, y)$ is the loss function. \subsection{Training Process} We first optimize $L(y_{1}(x; \theta_1), y)$ for the first state. This model structure is relatively simple, so a local minimum, or even a global minimum, of the loss function can be obtained efficiently. The second step is to optimize $L_{1}(x, y; \theta, t)$ by using $\theta_1$ as an initial condition for $t=0$. By gradually tracking the parameter $t$ to $1$, we obtain an optimal solution $\theta_2$ of $L(y_{2}(x; \theta), y)$. The third step is to optimize $L_{2}(x, y; \theta, t)$ by tracking $t$ from $0$ ($\theta_2$) to $1$. Then we obtain an optimal solution, $\theta_3$, of $L(y_{3}(x; \theta), y)$. Due to the continuous paths, the optimal solution $\theta_{i+1}$ of the $(i+1)$-th state is connected to $\theta_i$ of the $i$-th state by the parameter $t$. In this way, we can build our complex network adaptively. \subsection{Numerical Results on CIFAR-10} We tested the HTA with the three-state setup on the CIFAR-10 dataset. We used VGG11, VGG13, VGG16, and VGG19 with batch normalization \cite{simonyan2014very} as our base models. Fig. \ref{vgg13_bn} shows the comparison of validation loss between the HTA and the traditional method on the VGG13 model. Using the HTA with VGG13 gives a lower error rate (5.14\%) than the traditional method (5.82\%), showing that the HTA with VGG13 improves the error rate by 11.68\%. All of the results for the different models are shown in Table \ref{vgg_result}. It is clearly seen that the HTA is more accurate than the traditional method for all of the different models. For example, the HTA with VGG11 results in an error rate of 7.02\% while the traditional method results in 7.83\% (an improvement of 10.34\%).
In addition, the HTA with VGG19 has an error rate of 5.88\% compared to 6.35\% with the traditional model (an improvement of 7.40\%), and the HTA with VGG16 has an error rate of 5.71\% while the traditional method has an error rate of 6.14\% (a 7.00\% improvement). \subsection{The Optimal Structure of a Fully Connected Neural Network} \label{sec:alg} Since the HTA builds the fully connected neural network adaptively, it also provides a way for us to find the optimal structure of the fully connected neural network; for example, we can find the number of layers and the width for each layer. We designed an algorithm to find the optimal structure based on the HTA. First, we begin with a minimal model; for example, in the VGG models of CIFAR-10, the minimal width of the two hidden layers is 10 because of the 10 classification categories. Then we apply the HTA to the first hidden layer by adding nodes ``node-by-node," and we optimize the loss function dynamically with respect to the homotopy parameter. Once the optimal width of the first hidden layer is reached, the weights of the newly added nodes are close to zero after optimization. Then we move to the second hidden layer and implement the same process to train ``layer-by-layer." When the weights of the newly added nodes for the second hidden layer become close to zero, we terminate the process. {\bf Numerical results on CIFAR-10:} When we applied the algorithm for finding the optimal structure to each VGG base model, we found that the results were more accurate than when we used the base model only. For VGG11 with batch normalization, we set $\delta t=1/2$ and $n_{epoch}=50$ and found the optimal structure whose widths of the first and second hidden layers are 480 and 20, respectively. The error rate with VGG11 was reduced to 7.37\% while the error rate of the base model was 7.83\%. In this way, our algorithm can reach higher accuracy but with a simpler structure.
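The stopping rule just described (stop growing once the newly added nodes come back with near-zero weights) can be sketched as follows; `train_step` is a hypothetical routine standing in for optimizing the widened model and returning the weights of the newly added nodes.

```python
import numpy as np

def new_nodes_inactive(W_new, tol=1e-3):
    """Stopping test for the width search: the newly added nodes are
    considered inactive when all of their weights are (close to) zero."""
    return float(np.max(np.abs(W_new))) < tol

def grow_width(train_step, w0, w_max, step=10):
    """Grow a hidden layer in increments of `step` nodes.
    `train_step(w)` is assumed to train the model at width w and return
    the weights of the newly added nodes after optimization."""
    w = w0
    while w < w_max:
        W_new = train_step(w + step)
        if new_nodes_inactive(W_new):
            break  # the previous width was already sufficient
        w += step
    return w

# Toy stand-in: pretend nodes beyond width 30 come back with ~zero weights.
fake_train = lambda w: np.full((10, w), 1e-6 if w > 30 else 0.5)
assert grow_width(fake_train, w0=10, w_max=100) == 30
```

In our actual experiments the growth is node-by-node and the training at each width follows the homotopy of Algorithm \ref{alg}; the sketch only isolates the termination criterion.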
The rest of our experimental results for different VGG models are listed in Table \ref{BSF}. \section{Conclusion} In this paper, we developed a homotopy training algorithm for fully connected neural network models. This algorithm starts from a simple neural network and adaptively grows into a fully connected neural network with a complex structure. Then the complex neural network can be trained by the HTA to attain a higher accuracy. The convergence of the HTA for each $t$ is proved for the non-convex optimization that arises from fully connected neural networks with a $C^1$ activation function, and we also proved the convergence of the HTA to a local optimum when the optimization problem is convex. The existence of the solution path $\theta(t)$ is demonstrated theoretically in the convex case and is observed numerically in the non-convex case. Several numerical examples have been used to demonstrate the efficiency and feasibility of the HTA. The application of the HTA to computer vision, using the fully connected part of VGG models on CIFAR-10, provides better accuracy than the traditional method. Moreover, the HTA provides an alternative way to find the optimal structure and thereby reduce the complexity of a neural network. In this paper, we developed the HTA for fully connected neural networks only, but we envision it as the first step in the development of the HTA for general neural networks. In the future, we will design a new way to apply it to more complex neural networks such as CNNs and RNNs so that the HTA can speed up the training process more efficiently. Since the structures of CNNs and RNNs are very different from fully connected neural networks, we need to redesign the homotopy objective function in order to incorporate their structures, for instance, by including the dropout technique.
\section{Figures \& Tables} \begin{figure} \caption{The structure of a neural network with a single hidden layer.} \label{Fig:SL} \end{figure} \begin{figure} \caption{The structure of a neural network with two hidden layers.} \label{Fig:2L} \end{figure} \begin{figure*} \caption{Training/testing loss of approximating $\sin(x)$.} \label{sin_loss_1d} \end{figure*} \begin{figure*} \caption{Training/testing loss of approximating $\sin(x_1 + x_2)$.} \label{sin_loss_2d} \end{figure*} \begin{figure*} \caption{Approximation results of $\sin(x)$.} \label{sin_testing_plot} \end{figure*} \begin{figure*} \caption{Approximation results of $\sin(x_1 + x_2)$.} \label{sin_testing_plot_2d} \end{figure*} \begin{figure*} \caption{Testing loss of approximating $\sin(x_1 + x_2 + \cdots + x_n)$.} \label{sin_testing_loss_hd} \end{figure*} \begin{figure} \caption{Testing loss of the Van der Pol equation.} \label{vdp} \end{figure} \begin{figure} \caption{Comparisons between the HTA and the traditional method by contour plots of $y(1;\mu,k)$.} \label{vdp_equation} \end{figure} \begin{figure} \caption{The structure of VGG models that consist of a convolution \& pooling neural network and a fully connected neural network. The HTA is applied on the fully connected layers only.} \label{VGG} \end{figure} \begin{figure} \caption{Three states of the VGG with the HTA.} \label{3states} \end{figure} \begin{figure} \caption{Comparisons of error rate for VGG13 between the HTA and the traditional method.
} \label{vgg13_bn} \end{figure} \begin{table} \begin{center} \begin{tabular}{| c | c | c | c |} \hline Dimensions (n) & Test Loss & Test Loss with HTA & Number of grid points \\ \hline 1 & $0.078$ & $0.005$ & $10^2$ (uniform grid)\\ \hline 2 & $0.195$ & $0.152$ & $10^4$ (uniform grid)\\ \hline 3 & $0.347$ & $0.213$ & $10^6$ (uniform grid)\\ \hline 4 & $0.393$ & $0.299$ & 2300 (sparse grid)\\ \hline 5 & $0.493$ & $0.352$ & 5503 (sparse grid)\\ \hline \end{tabular} \end{center} \caption{Testing loss of one-hidden-layer NNs.} \label{tab_testing_loss} \end{table} \begin{table} \begin{center} \begin{tabular}{| c | c | c | c |} \hline Dimensions (n) & Test Loss & Test Loss with HTA & Number of grid points \\ \hline 5 & $0.307$ & $0.074$ & 5503 (sparse grid)\\ \hline 6 & $0.340$ & $0.194$ & 10625 (sparse grid)\\ \hline 7 & $0.105$ & $0.029$ & 18943 (sparse grid)\\ \hline 8 & $0.079$ & $0.022$ & 31745 (sparse grid)\\ \hline \end{tabular} \end{center} \caption{Testing loss of two-hidden-layer NNs.} \label{tab_testing_loss_hd} \end{table} \begin{table} \begin{center} \begin{tabular}{| c | c | c |} \hline sample points $(\mu_i,k_i)$ & optima of traditional method $(\mu^*_i,k^*_i)$ & optima of HTA $(\mu^*_i,k^*_i)$ \\ \hline $(11.1,12.9)$ & $(11.9, 11.6)$ & $(11.9, 12.6)$ \\ \hline $(11.9, 13.1)$ & $(11.9, 11.7)$ & $(11.9, 12.7)$ \\ \hline $(12.6, 11.4)$ & $(11.0, 11.0)$ & $(12.0, 11.4)$ \\ \hline $(13.2, 12.8)$ & $(11.9, 11.5)$ & $(11.9, 12.5)$ \\ \hline $(13.9, 11.1)$ & $(11.0, 11.0)$ & $(12.0, 11.2)$ \\ \hline \end{tabular} \end{center} \caption{Parameter estimation results.} \label{para} \end{table} \begin{table} \begin{center} \begin{tabular}{| p{2.5cm} | p{2.5cm} | p{2.5cm} | p{2.5cm} |} \hline Base Model Name & Original Error Rate & Error Rate with HTA & Rate of Improvement (ROI)\\ \hline VGG11 & $7.83\%$ & $7.02\%$ & $10.34\%$ \\ \hline VGG13 & $5.82\%$ & $5.14\%$ & $11.68\%$ \\ \hline VGG16 & $6.14\%$ & $5.71\%$ & $7.00\%$ \\ \hline
VGG19 & $6.35\%$ & $5.88\%$ & $7.40\%$ \\ \hline \end{tabular} \end{center} \caption{The comparison between the HTA and the traditional method for VGG models on CIFAR-10.} \label{vgg_result} \end{table} \begin{table} \begin{center} \begin{tabular}{| c| c | c | c | c |} \hline Base Model Name & Original Error Rate & Error Rate with OSF & $w_1$ & $w_2$ \\ \hline VGG11 & $7.83\%$ & $7.37\%$ & 480 & 20 \\ \hline VGG13 & $5.82\%$ & $5.67\%$ & 980 & 310 \\ \hline VGG16 & $6.14\%$ & $5.91\%$ & 752 & 310 \\ \hline VGG19 & $6.35\%$ & $6.05\%$ & 752 & 310 \\ \hline \end{tabular} \end{center} \caption{The results of the algorithm for finding the optimal structure when applied to CIFAR-10: $w_i$ stands for the optimal width of the $i$-th hidden layer.} \label{BSF} \end{table} \begin{algorithm} \caption{The HTA algorithm for a neural network with two hidden layers.} \begin{algorithmic}[1] \STATE{Solve the optimization problem $\mathcal{L}(H_1(x;\theta,0),y)$ and denote the solution as $\theta_1^*$;} \STATE{Set an initial guess as $\theta^0=\theta_1^*\cup\theta_2^0$ and $N=\frac{1}{\delta t}$, where $\delta t$ is the homotopy stepsize;} \FOR {$i = 1,\cdots, N$} \STATE{Solve $\theta^i=\arg\min\mathcal{L}(H_1(x;\theta,i\delta t),y)$ by using $\theta^{i-1}$ as the initial guess;} \ENDFOR \end{algorithmic}\label{alg} \end{algorithm} \end{document}
\begin{document} \title[]{A useful lemma for calculating the Hausdorff dimension of certain sets in Engel expansions} \author {Lei Shang} \subjclass[2010]{Primary 11K55; Secondary 28A80} \keywords{Engel expansions, Growth rate of digits, Hausdorff dimension} \begin{abstract} Let $\{s_n\}$ and $\{t_n\}$ be two sequences of positive real numbers. Under some mild conditions on $\{s_n\}$ and $\{t_n\}$, we give a precise formula for the Hausdorff dimension of the set \[ \mathbb{E}(\{s_n\},\{t_n\}):=\Big\{x\in(0,1): s_{n}<d_{n}(x)\leq s_n+t_n, \forall n\geq1\Big\}, \] where $d_n(x)$ denotes the $n$th digit of the Engel expansion of $x$. This result improves Lemma 2.6 of Shang and Wu (J. Number Theory, 2021), and is very useful for calculating the Hausdorff dimension of certain sets in Engel expansions. \end{abstract} \maketitle \section{Introduction} Let $T:[0,1)\to [0,1)$ be the \emph{Engel expansion map} defined by $T(0):=0$ and \[ T(x) := x\left\lceil\frac{1}{x}\right\rceil -1,\ \ \ \forall x \in (0,1), \] where $\lceil y\rceil$ denotes the least integer not less than $y$. Denote by $T^k$ the $k$th iteration of $T$. For $x\in (0,1)$, if $x$ is rational, then there exists $n \in \mathbb{N}$ such that $T^n(x)=0$; if $x$ is irrational, then $T^n(x)>0$ for all $n \geq 1$. Hence every irrational number $x\in (0,1)$ admits an infinite series expansion of the form \begin{equation}\label{EE} x= \frac{1}{d_1(x)} + \frac{1}{d_1(x)d_2(x)}+\cdots+\frac{1}{d_1(x)\cdots d_n(x)}+\cdots \end{equation} by letting $d_1(x) := \lceil 1/x\rceil$ and $d_{n+1}(x) := d_1(T^n(x))$ for all $n \geq 1$. The expression \eqref{EE} is called the \emph{Engel expansion} of $x$ and the integers $d_n(x)$ are called the \emph{digits} of the Engel expansion of $x$. It was remarked in \cite[p.\,7]{ERS58} that $d_{n+1}(x) \geq d_n(x) \geq 2$ for all $n \geq 1$ and $d_n(x) \to \infty$ as $n \to \infty$. We refer the reader to Galambos \cite{Gal76} for more information on Engel expansions.
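For intuition, the digit algorithm $d_1(x)=\lceil 1/x\rceil$, $d_{n+1}(x)=d_1(T^n(x))$ is easy to run. The following short Python sketch (illustrative only, not part of the proofs) computes the first digits using exact rational arithmetic; for rational inputs the iteration terminates at $0$, as remarked above.

```python
from fractions import Fraction
from math import ceil

def engel_digits(x, n):
    """First (at most n) Engel digits of x in (0,1):
    d_1 = ceil(1/x), then iterate T(x) = x*ceil(1/x) - 1."""
    x = Fraction(x)
    digits = []
    for _ in range(n):
        if x == 0:
            break  # rational x: T^k(x) = 0 for some k
        d = ceil(1 / x)  # exact for Fraction inputs
        digits.append(d)
        x = x * d - 1
    return digits

# x = 1/2 + 1/(2*3) + 1/(2*3*4) = 17/24 has digits 2, 3, 4.
digits = engel_digits(Fraction(17, 24), 5)
assert digits == [2, 3, 4]
```

One can also check on examples that the digit sequence is nondecreasing and at least $2$, in line with the admissibility criterion recalled below.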
Let $\{s_n\}$ and $\{t_n\}$ be sequences of positive real numbers. Write \[ \mathbb{E}(\{s_n\},\{t_n\}):=\Big\{x\in(0,1): s_{n}<d_{n}(x)\leq s_n+t_n, \forall n\geq1\Big\}. \] We will prove the following result. \begin{lemma}\label{A} Assume that: (1)\,$s_n \geq t_n \geq 2$; (2)\,$s_{n+1} \geq s_n+t_n$; (3)\,$\lim_{n \to \infty}s_n=\infty$. Then \begin{align}\label{Formula} \dim_\mathrm{H}\mathbb{E}(\{s_n\},\{t_n\})=\liminf_{n \to\infty}\frac{\sum_{k=1}^n \log t_k}{\sum_{k=1}^{n+1} \log s_k+\log s_{n+1}-\log t_{n+1}}. \end{align} \end{lemma} We remark that a similar result was obtained by Liao and Rams \cite{LR} for a class of infinite iterated function systems, while the Engel expansion system does not belong to their setting, see \cite{JR, LR}. We also point out that the result of Lemma A would be very useful for calculating the Hausdorff dimension of certain sets in Engel expansions, see for example \cite{FWnon, LW03, LL, SWjmaa}. The paper is organized as follows. Section 2 is devoted to several definitions and basic properties of the Engel expansion. The proof of Lemma A will be given in Section 3. \section{Preliminaries} \begin{definition} A finite sequence $(\sigma_1, \cdots, \sigma_n) \in \mathbb{N}^n$ is said to be \emph{admissible} if there exists $x \in (0,1)$ such that $d_k(x) = \sigma_k$ for all $1\leq k\leq n$. An infinite sequence $(\sigma_1, \cdots, \sigma_k, \cdots) \in \mathbb{N}^\mathbb{N}$ is said to be \emph{admissible} if there exists $x \in (0,1)$ such that for all $n \geq 1$, $d_k(x) = \sigma_k, \forall 1 \leq k \leq n$. \end{definition} Denote by $\Sigma_n$ the collection of all admissible sequences with length $n$ and by $\Sigma$ that of all infinite admissible sequences. The following result gives a characterisation of admissible sequences. 
\begin{proposition}[\cite{Gal76}]\label{AD} $(\sigma_1, \cdots, \sigma_n) \in \Sigma_n$ if and only if $2 \leq \sigma_1 \leq \cdots \leq \sigma_n$; $(\sigma_1, \cdots, \sigma_n, \cdots) \in \Sigma$ if and only if \[ \sigma_{n+1} \geq \sigma_n \geq 2, \ \forall n \geq 1 \ \ \ \ \ \text{and} \ \ \ \ \ \lim_{n \to \infty}\sigma_n = \infty. \] \end{proposition} \begin{definition} Let $(\sigma_1, \cdots, \sigma_n) \in \Sigma_n$. We call \begin{equation*} I_n(\sigma_1, \cdots, \sigma_n) := \big\{x \in (0,1): d_1(x)=\sigma_1, \cdots,d_n(x)=\sigma_n\big\} \end{equation*} the \emph{cylinder} of order $n$ associated to $(\sigma_1,\dots,\sigma_n)$. \end{definition} We use $|I|$ to denote the diameter of an interval $I$. \begin{proposition}[{\cite{Gal76}}]\label{cylinder} Let $(\sigma_1, \cdots, \sigma_n) \in \Sigma_n$. Then $I_n(\sigma_1, \cdots, \sigma_n) =[A_n, B_n)$, where \[ A_n:=\frac{1}{\sigma_1}+\cdots+\frac{1}{\sigma_1\sigma_2\cdots \sigma_{n-1}}+ \frac{1}{\sigma_1\sigma_2\cdots \sigma_{n-1}\sigma_n} \] and \[ B_n:=\frac{1}{\sigma_1}+\cdots+ \frac{1}{\sigma_1\sigma_2\cdots \sigma_{n-1}}+ \frac{1}{\sigma_1\sigma_2\cdots \sigma_{n-1}(\sigma_n-1)}. \] Moreover, \begin{equation*}\label{cylinder length} \left|I_n(\sigma_1, \sigma_2, \cdots, \sigma_n)\right| = \frac{1}{\sigma_1\sigma_2\cdots \sigma_{n-1}\sigma_n(\sigma_n-1)}. \end{equation*} \end{proposition} \begin{proposition}[{\cite[Proposition 4.1]{Fal90}}]\label{upp} Suppose $\mathbb{F}$ can be covered by $\mathcal{N}_n$ sets of diameter at most $\delta_n$ with $\delta_n \to 0$ as $n \to \infty$. Then \[ \dim_{\rm H}\mathbb{F} \leq \liminf_{n \to \infty} \frac{\log \mathcal{N}_n}{-\log \delta_n}. \] \end{proposition} \begin{proposition}[{\cite[Example 4.6]{Fal90}}]\label{low} Let $[0,1] = \mathbb{E}_0 \supset \mathbb{E}_1 \supset \cdots$ be a decreasing sequence of sets and $\mathbb{E} = \bigcap_{n \geq 0} \mathbb{E}_n$.
Assume that each $\mathbb{E}_n$ is a union of a finite number of disjoint closed intervals (called basic intervals of order $n$) and each basic interval in $\mathbb{E}_{n-1}$ contains $m_n$ intervals of $\mathbb{E}_n$ which are separated by gaps of lengths at least $\varepsilon_n$. If $m_n \geq 2$ and $\varepsilon_{n-1}> \varepsilon_n >0$, then \[ \dim_\mathrm{H} \mathbb{E} \geq \liminf_{n \to \infty} \frac{\log(m_1m_2 \cdots m_{n-1})}{-\log (m_{n}\varepsilon_n)}. \] \end{proposition} \section{Proof of Lemma A} In this section, we will give the proof of Lemma A. Assume that: (1)\,$s_n \geq t_n \geq 2$; (2)\,$s_{n+1} \geq s_n+t_n$; (3)\,$\lim_{n \to \infty}s_n=\infty$. Let \begin{equation*} \mathcal{D}_n:=\big\{(\sigma_1,\cdots,\sigma_n)\in \mathbb{N}^n:s_k<\sigma_k\leq s_k+t_k,\forall 1\leq k\leq n\big\}. \end{equation*} Note that $\sigma_1>s_1\geq 2$ and $\sigma_{k+1}>s_{k+1} \geq s_k+t_k >\sigma_k$, so $\sigma_n \geq \cdots\geq \sigma_1\geq 2$. That is to say, $(\sigma_1,\cdots,\sigma_n)$ is admissible. For $(\sigma_1,\cdots,\sigma_n) \in \mathcal{D}_n$, let \begin{equation*} J_n(\sigma_1,\cdots,\sigma_n):=\bigcup_{s_{n+1}<j\leq s_{n+1}+t_{n+1}} cl(I_{n+1}(\sigma_1,\cdots,\sigma_n,j)), \end{equation*} where $cl(\cdot)$ denotes the closure of a set. By Proposition \ref{cylinder}, we know that $J_n(\sigma_1,\cdots,\sigma_n)$ is a closed interval, which is called the \emph{basic interval} of order $n$. Write \begin{equation}\label{En} \mathbb{E}_n:=\bigcup_{(\sigma_1,\cdots,\sigma_n)\in \mathcal{D}_n} J_n(\sigma_1,\cdots,\sigma_n) \end{equation} with the convention $\mathbb{E}_0:=[0,1]$. Then \begin{equation*} \mathbb{E}(\{s_n\},\{t_n\})=\bigcap_{n=0}^\infty\mathbb{E}_n. \end{equation*} For the upper bound of $\dim_{\rm H}\mathbb{E}(\{s_n\},\{t_n\})$, for any $n \geq 1$, we derive from \eqref{En} that $\{J_n(\sigma_1,\cdots,\sigma_n):(\sigma_1,\cdots,\sigma_n) \in \mathcal{D}_n\}$ is a cover of $\mathbb{E}(\{s_n\},\{t_n\})$.
Then \begin{align*} \mathcal{N}_n:=\#\mathcal{D}_n=(\lfloor s_1+t_1\rfloor-\lfloor s_1\rfloor)\cdots(\lfloor s_n+t_n\rfloor-\lfloor s_n\rfloor)\leq 2^nt_1\cdots t_n. \end{align*} For any $(\sigma_1,\cdots,\sigma_n) \in \mathcal{D}_n$, by Proposition \ref{cylinder}, we see that \begin{align*} |J_{n}(\sigma_1,\cdots,\sigma_n)|&=\sum_{j=\lfloor s_{n+1}\rfloor+1}^{\lfloor s_{n+1}+t_{n+1}\rfloor}|I_{n+1}(\sigma_1,\cdots,\sigma_n,j)| =\frac{1}{\sigma_1\cdots \sigma_n}\sum_{j=\lfloor s_{n+1}\rfloor+1}^{\lfloor s_{n+1}+t_{n+1}\rfloor}\frac{1}{j(j-1)}. \end{align*} Note that $s_{n+1}-1\geq s_{n+1}/2$ and $s_{n+1}+t_{n+1}\geq s_{n+1}$, so \begin{align*} \sum_{j=\lfloor s_{n+1}\rfloor+1}^{\lfloor s_{n+1}+t_{n+1}\rfloor}\frac{1}{j(j-1)} = \frac{1}{\lfloor s_{n+1}\rfloor} - \frac{1}{\lfloor s_{n+1}+t_{n+1}\rfloor} \leq \frac{1}{s_{n+1}-1}-\frac{1}{s_{n+1}+t_{n+1}} \leq\frac{4t_{n+1}}{s_{n+1}^2}. \end{align*} Since $\sigma_k\geq s_k$, we have \begin{align*} |J_n(\sigma_1,\cdots,\sigma_n)|\leq\frac{1}{s_1\cdots s_n}\cdot\frac{4t_{n+1}}{s_{n+1}^2}=:\delta_n. \end{align*} From Proposition \ref{upp}, we conclude that \begin{align*} \dim_\mathrm{H}\mathbb{E}(\{s_n\},\{t_n\})&\leq \liminf_{n\to \infty}\frac{\log\mathcal{N}_n}{-\log \delta_n}\\ &\leq\liminf_{n\to \infty}\frac{n\log2+\log(t_1\cdots t_n)}{\log(s_1\cdots s_{n+1})+\log s_{n+1}-\log t_{n+1}-\log 4}\\ &=\liminf_{n\to \infty}\frac{\sum_{k=1}^n\log t_{k}}{\sum_{k=1}^{n+1}\log s_{k}+\log s_{n+1}-\log t_{n+1}}. \end{align*} For the lower bound of $\dim_{\rm H}\mathbb{E}(\{s_n\},\{t_n\})$, by the structure of basic intervals, we deduce that each basic interval of order $n-1$ contains \[ \frac{t_n}{2}<\lfloor t_n\rfloor\leq m_n:=\lfloor s_n+t_n\rfloor - \lfloor s_n\rfloor < t_n+1 <2t_n \] basic intervals of order $n$. Next we will estimate the gaps between two basic intervals of the same order.
For two adjacent sequences $(\sigma_1,\cdots,\sigma_n)$ and $(\sigma^{\prime}_1,\cdots,\sigma^{\prime}_n)$ in $\mathcal{D}_n$, the basic intervals $J_n(\sigma_1,\cdots,\sigma_n)$ and $J_n(\sigma^{\prime}_1,\cdots,\sigma^{\prime}_n)$ are disjoint, and $J_n(\sigma_1,\cdots,\sigma_n)$ lies either on the left-hand or the right-hand side of $J_n(\sigma^{\prime}_1,\cdots,\sigma^{\prime}_n)$. Without loss of generality, we assume that $J_n(\sigma_1,\cdots,\sigma_n)$ is on the left-hand side of $J_n(\sigma^{\prime}_1,\cdots,\sigma^{\prime}_n)$. Then \[ \Pi_1:=\bigcup_{\sigma_n\leq j\leq \lfloor s_{n+1}\rfloor}I_{n+1}(\sigma_1,\cdots,\sigma_n,j) \ \text{and}\ \Pi_2:=\bigcup_{j\geq \lfloor s_{n+1}+t_{n+1}\rfloor+1}I_{n+1}(\sigma^{\prime}_1,\cdots,\sigma^{\prime}_n,j) \] form the gap between $J_n(\sigma_1,\cdots,\sigma_n)$ and $J_n(\sigma^{\prime}_1,\cdots,\sigma^{\prime}_n)$. For $\Pi_1$, we have \begin{align*} |\Pi_1|&=\sum_{\sigma_n\leq j\leq \lfloor s_{n+1}\rfloor}|I_{n+1}(\sigma_1,\cdots,\sigma_n,j)|\\ &=\frac{1}{\sigma_1\cdots\sigma_n}\sum_{\sigma_n\leq j\leq \lfloor s_{n+1}\rfloor}\frac{1}{j(j-1)}\\ &=\frac{1}{\sigma_1\cdots\sigma_n}\left(\frac{1}{\sigma_n-1}-\frac{1}{\lfloor s_{n+1}\rfloor}\right)\\ &\geq\frac{1}{2s_1\cdots 2s_n}\left(\frac{1}{s_n+t_n-1}-\frac{1}{s_{n+1}-1}\right). \end{align*} For $\Pi_2$, we obtain \begin{align*} |\Pi_2|&=\sum_{j\geq \lfloor s_{n+1}+t_{n+1}\rfloor+1}|I_{n+1}(\sigma^{\prime}_1,\cdots,\sigma^{\prime}_n,j)|\\ &=\frac{1}{\sigma^{\prime}_1\cdots\sigma^{\prime}_n}\sum_{j\geq \lfloor s_{n+1}+t_{n+1}\rfloor+1}\frac{1}{j(j-1)}\\ &\geq\frac{1}{2s_1\cdots 2s_n}\cdot\frac{1}{s_{n+1}+t_{n+1}}. \end{align*} Note that $s_{n+1} \geq s_n+t_n$, so \begin{align*} \frac{1}{s_n+t_n-1}-\frac{1}{s_{n+1}-1} +\frac{1}{s_{n+1}+t_{n+1}} &= \frac{1}{s_n+t_n-1}- \frac{t_{n+1}+1}{(s_{n+1}-1)(s_{n+1}+t_{n+1})}\\ &\geq \frac{1}{s_n+t_n-1}\cdot \left(1- \frac{t_{n+1}+1}{s_{n+1}+t_{n+1}}\right)\\ &=\frac{1}{s_n+t_n-1}\cdot\frac{s_{n+1}-1}{s_{n+1}+t_{n+1}}.
\end{align*} Since $s_n+t_n-1 < 2s_n$, $s_{n+1}-1 \geq s_{n+1}/2$ and $s_{n+1}+t_{n+1} \leq 2s_{n+1}$, we see that the length of the gap between $J_n(\sigma_1,\cdots,\sigma_n)$ and $J_n(\sigma^{\prime}_1,\cdots,\sigma^{\prime}_n)$ is at least \begin{align*} |\Pi_1|+|\Pi_2| &\geq\frac{1}{2s_1\cdots2s_n}\cdot\frac{1}{s_n+t_n-1}\cdot\frac{s_{n+1}-1}{s_{n+1}+t_{n+1}}\\ &\geq \frac{1}{2s_1\cdots2s_n}\cdot\frac{1}{8s_n}\\ &=\frac{1}{2^{n+3}}\cdot\frac{1}{s_1\cdots s_ns_n}=:\varepsilon_n. \end{align*} It follows from Proposition \ref{low} that \begin{align*} \dim_\mathrm{H}\mathbb{E}(\{s_n\},\{t_n\})&\geq \liminf_{n\to \infty}\frac{\log(m_1m_2\cdots m_{n-1})}{-\log(m_n\varepsilon_n)}\\ &\geq\liminf_{n\to \infty}\frac{-n\log2+\sum _{k=1}^n \log t_k }{(n+5)\log2+\sum_{k=1}^{n+1}\log s_k+\log s_{n+1}-\log t_{n+1}}\\ &=\liminf_{n\to \infty}\frac{\sum _{k=1}^n \log t_k}{\sum_{k=1}^{n+1}\log s_k+\log s_{n+1}-\log t_{n+1}}. \end{align*} \end{document}
\begin{document} \title{$p$-Biharmonic hypersurfaces in Einstein space and conformally flat space} \begin{abstract} In this paper, we present some new properties of $p$-biharmonic hypersurfaces in Riemannian manifolds. We also characterize the $p$-biharmonic submanifolds in an Einstein space. We construct a new example of proper $p$-biharmonic hypersurfaces, and we present some open problems. \\[2mm] {\it AMS Mathematics Subject Classification $(2020)$}: 53C43, 58E20, 53C25. \\[1mm] {\it Key words and phrases:} $p$-biharmonic maps, $p$-biharmonic submanifolds, Einstein space. \end{abstract} \section{Introduction} Let $\varphi:(M^m,g)\longrightarrow (N^n,h)$ be a smooth map between Riemannian manifolds. The $p$-energy functional of $\varphi$ is defined by \begin{equation}\label{eq1.1} E_{p}(\varphi;D)=\frac{1}{p}\int_{D}|d\varphi|^pv_{g}, \end{equation} where $D$ is a compact domain in $M$, $|d\varphi|$ the Hilbert-Schmidt norm of the differential $d\varphi$, $v_{g}$ the volume element on $(M^m,g)$, and $p\geq2$.\\ A smooth map is called $p$-harmonic if it is a critical point of the $p$-energy functional (\ref{eq1.1}).
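As a quick sanity check on \eqref{eq1.1} (consistent with the remark at the end of this section), the case $p=2$ recovers the classical objects:

```latex
% For p = 2 one has |d\varphi|^{p-2} = 1, so the p-energy and the
% p-tension field (defined below) reduce to their classical counterparts:
E_{2}(\varphi;D)=\frac{1}{2}\int_{D}|d\varphi|^{2}\,v_{g},
\qquad
\tau_{2}(\varphi)=\operatorname{div}^{M}(d\varphi)=\tau(\varphi).
```

In particular, $2$-harmonic maps are precisely the harmonic maps.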
We have \begin{equation}\label{eq1.2} \frac{d}{dt}E_{p}(\varphi_{t};D)\Big|_{t=0}=-\int_{D}h(\tau_{p}(\varphi),v)v_{g}, \end{equation} where $\{\varphi_{t}\}_{t\in (-\epsilon,\epsilon)}$ is a smooth variation of $\varphi$ supported in $D$, $\displaystyle v=\frac{\partial \varphi_{t}}{\partial t}\Big|_{t=0}$ the variation vector field of $\varphi$, and $\tau_{p}(\varphi)=\operatorname{div}^M(|d\varphi|^{p-2}d\varphi)$ the $p$-tension field of $\varphi$.\\ Let $\nabla^{M}$ be the Levi-Civita connection of $(M^m,g)$, and $\nabla^{\varphi}$ the pull-back connection on $\varphi^{-1}TN$. The map $\varphi$ is $p$-harmonic if and only if (see \cite{BG,BI,ali}) \begin{equation}\label{eq1.3} |d\varphi|^{p-2}\tau(\varphi)+(p-2)|d\varphi|^{p-3} d\varphi(\operatorname{grad}^M|d\varphi|)=0, \end{equation} where $\tau(\varphi)=\operatorname{trace}_g\nabla d\varphi$ is the tension field of $\varphi$ (see \cite{BW,ES}). The $p$-bienergy functional of $\varphi$ is defined by \begin{equation}\label{eq1.4} E_{2,p}(\varphi;D)=\frac{1}{2}\int_D|\tau_p(\varphi)|^2 v_{g}. \end{equation} We say that $\varphi$ is a $p$-biharmonic map if it is a critical point of the $p$-bienergy functional \eqref{eq1.4}; the Euler-Lagrange equation of the $p$-bienergy functional is given by (see \cite{cherif2}) \begin{eqnarray}\label{eq1.7} \tau_{2,p}(\varphi) &=&\nonumber -|d\varphi|^{p-2}\operatorname{trace}_gR^{N}(\tau_{p}(\varphi),d\varphi)d\varphi -\operatorname{trace}_g\nabla^\varphi |d\varphi|^{p-2} \nabla^\varphi \tau_{p}(\varphi)\\ &&-(p-2)\operatorname{trace}_g\nabla \langle\nabla^\varphi\tau_{p}(\varphi),d\varphi\rangle|d\varphi|^{p-4}d\varphi=0, \end{eqnarray} where $R^N$ is the curvature tensor of $(N^n,h)$ defined by \begin{equation*} R^N(X,Y)Z=\nabla^N_X \nabla^N_Y Z-\nabla^N_Y \nabla^N_X Z-\nabla^N_{[X,Y]}Z,\quad\forall X,Y,Z\in\Gamma(TN), \end{equation*} and $\nabla^N$ the Levi-Civita connection of $(N^n,h)$. The $p$-energy functional (resp.
$p$-bienergy functional) includes as a special case $(p = 2)$ the energy functional (resp. bienergy functional), whose critical points are the usual harmonic maps (resp. biharmonic maps \cite{Jiang}).\\ A submanifold of a Riemannian manifold is called a $p$-harmonic submanifold (resp. $p$-biharmonic submanifold) if the isometric immersion defining the submanifold is a $p$-harmonic map (resp. $p$-biharmonic map). We call a $p$-biharmonic submanifold proper if it is not $p$-harmonic. \section{Main Results} Let $(M^m,g)$ be a hypersurface of $(N^{m+1},\langle ,\rangle )$, and $\mathbf{i} : (M^m,g) \hookrightarrow (N^{m+1},\langle ,\rangle ) $ the canonical inclusion. We denote by $\nabla^M$ (resp. $\nabla^N$) the Levi-Civita connection of $(M^m,g)$ (resp. of $(N^{m+1},\langle ,\rangle )$), $\operatorname{grad}^M$ (resp. $\operatorname{grad}^N$) the gradient operator in $(M^m,g)$ (resp. in $(N^{m+1},\langle ,\rangle )$), $B$ the second fundamental form of the hypersurface $(M^m,g)$, $A$ the shape operator with respect to the unit normal vector field $\eta$, $H$ the mean curvature of $(M^m,g)$, $\nabla^\perp$ the normal connection of $(M^m,g)$, and by $\Delta$ (resp. $\Delta^{\perp}$) the Laplacian on $(M^m,g)$ (resp. on the normal bundle of $(M^m,g)$ in $(N^{m+1},\langle ,\rangle )$) (see \cite{BW,ON,YX}). Under the notation above, we have the following results. \begin{theorem}\label{th1} The hypersurface $(M^m,g)$ with the mean curvature vector $H= f \eta $ is $p$-biharmonic if and only if \begin{equation}\label{sys1} \left\{ \begin{array}{lll} -\Delta^M(f) + f |A|^2 -f \operatorname{Ric}^N(\eta , \eta ) + m(p-2) f^3 &=& 0; \\\\ 2A(\operatorname{grad}^M f) -2 f (\operatorname{Ricci}^N \eta)^\top + ( p-2 + \dfrac{m}{2} ) \operatorname{grad}^M f^2 &=& 0, \end{array} \right. \end{equation} where $\operatorname{Ric}^N $ (resp. $\operatorname{Ricci}^N$) is the Ricci curvature (resp. Ricci tensor) of $(N^{m+1},\langle ,\rangle )$.
\end{theorem} \begin{proof} Choose a normal orthonormal frame $\{ e_i \}_{i=1,... , m }$ on $(M^m,g)$ at $x$, so that $\{ e_i , \eta \}_{i=1,...,m} $ is an orthonormal frame on the ambient space $(N^{m+1},\langle ,\rangle )$. Note that, $d\mathbf{i}(X)=X$, $\nabla^{\mathbf{i}}_X Y = \nabla^N_X Y $, and the $p$-tension field of $\mathbf{i}$ is given by $ \tau_p(\mathbf{i}) = m^{\frac{p}{2} } f \eta $. We compute the $p$-bitension field of $\mathbf{i}$ \begin{eqnarray}\label{eq2.2} \nonumber \tau_{2,p}(\mathbf{i})& =& -|d\mathbf{i}|^{p-2} \operatorname{trace}_g R^N(\tau_p(\mathbf{i}) , d\mathbf{i} ) d\mathbf{i} \\ \nonumber & & -(p-2) \operatorname{trace}_g \nabla \langle \nabla^\mathbf{i} \tau_p(\mathbf{i}) , d\mathbf{i} \rangle |d\mathbf{i}|^{p-4} d\mathbf{i} \\ & &- \operatorname{trace}_g \nabla^\mathbf{i}|d\mathbf{i}|^{p-2} \nabla^\mathbf{i}\tau_p(\mathbf{i}). \end{eqnarray} The first term of (\ref{eq2.2}) is given by \begin{eqnarray}\label{eq2.3} \nonumber -|d\mathbf{i}|^{p-2} \operatorname{trace}_g R^N (\tau_p(\mathbf{i}) , d\mathbf{i} )d\mathbf{i} &=& -|d\mathbf{i}|^{p-2} \sum_{i=1}^mR^N(\tau_p(\mathbf{i}) , d\mathbf{i}(e_i))d\mathbf{i}(e_i)\\ \nonumber &=& - m^{p-1} f \sum_{i=1}^m R^N(\eta , e_i)e_i \\ \nonumber &=& -m^{p-1} f \operatorname{Ricci}^N \eta \\ \nonumber &=& -m^{p-1} f \left[ (\operatorname{Ricci}^N \eta )^\perp +(\operatorname{Ricci}^N \eta)^\top \right].\\ \end{eqnarray} We compute the second term of (\ref{eq2.2}) \begin{eqnarray*} -(p-2)\operatorname{trace}_g \nabla\langle \nabla^\mathbf{i} \tau_p(\mathbf{i}) , d\mathbf{i} \rangle |d\mathbf{i}|^{p-4}d\mathbf{i} &=& -(p-2)m^{p-2} \sum_{i,j=1}^m\nabla^N_{e_j} \langle \nabla^N_{e_i} f \eta , e_i \rangle e_j, \end{eqnarray*} \begin{eqnarray*} \sum_{i=1}^m\langle \nabla^N_{e_i} f \eta , e_i \rangle &=& \sum_{i=1}^m\left[\langle e_i(f) \eta , e_i \rangle + f\langle \nabla^N_{e_i} \eta , e_i \rangle \right]\\ &=& -f \sum_{i=1}^m \langle \eta , B(e_i , e_i) \rangle \\ &=& -m f^2. 
\end{eqnarray*} By the last two equations, we have the following \begin{equation}\label{eq2.4} -(p-2)\operatorname{trace}_g \nabla\langle \nabla^\mathbf{i} \tau_p(\mathbf{i}) , d\mathbf{i} \rangle |d\mathbf{i}|^{p-4}d\mathbf{i} = m^{p-1}(p-2) \left( \operatorname{grad}^M f^2 + m f^3\eta\right). \end{equation} The third term of (\ref{eq2.2}) is given by \begin{eqnarray}\label{eq2.5} \nonumber - \operatorname{trace}_g \nabla^{\mathbf{i}}|d\mathbf{i}|^{p-2} \nabla^{\mathbf{i}}\tau_p(\mathbf{i}) &=& -m^{p-1} \sum_{i=1}^m\nabla^N_{e_i} \nabla^N_{e_i} f \eta \\ \nonumber &=& -m^{p-1} \sum_{i=1}^m\nabla^N_{e_i}[e_i(f) \eta +f \nabla^N_{e_i} \eta ] \\ \nonumber &=& -m^{p-1}\left[ \Delta^M (f) \eta + 2 \nabla^N_{\operatorname{grad}^M f } \eta + f \sum_{i=1}^m\nabla^N_{e_i}\nabla^N_{e_i} \eta \right]. \nonumber \\ \end{eqnarray} Thus, at $x$, we obtain \begin{eqnarray} \label{eq2.6} \sum_{i=1}^m\nabla_{e_i}^N \nabla_{e_i}^N \eta &=& \nonumber \sum_{i=1}^m\nabla_{e_i}^N\left[(\nabla_{e_i}^N \eta)^\perp +(\nabla_{e_i}^N \eta)^\top \right] \\ &=&\nonumber -\sum_{i=1}^m\nabla_{e_i}^NA(e_i) \\ &=& - \sum_{i=1}^m\nabla_{e_i}^M A(e_i)-\sum_{i=1}^mB(e_i , A(e_i)). \end{eqnarray} Since $\langle A(X),Y\rangle = \langle B(X,Y) , \eta\rangle $ for all $X,Y \in \Gamma(TM) $, we get \begin{eqnarray}\label{eq2.7} \nonumber \sum_{i=1}^m\nabla_{e_i}^M A(e_i) &=&\sum_{i,j=1}^m \langle \nabla_{e_i}^M A(e_i), e_j\rangle e_j \\ \nonumber &=&\sum_{i,j=1}^m\left[ e_i \langle A(e_i) , e_j \rangle e_j - \langle A(e_i) , \nabla^M_{e_i} e_j \rangle e_j \right]\\ \nonumber &=&\sum_{i,j=1}^m e_i \langle B(e_i,e_j) , \eta \rangle e_j \\ \nonumber &=&\sum_{i,j=1}^m e_i \langle \nabla_{e_j}^Ne_i , \eta \rangle e_j\\ &=&\sum_{i,j=1}^m \langle \nabla_{e_i}^N\nabla_{e_j}^Ne_i , \eta \rangle e_j. 
\end{eqnarray} By using the definition of the curvature tensor of $(N^{m+1},\langle ,\rangle)$, we conclude \begin{eqnarray} \label{eq2.8} \nonumber \sum_{i=1}^m\nabla_{e_i}^M A(e_i) &=&\sum_{i,j=1}^m \left[\langle R^N (e_i,e_j)e_i , \eta\rangle e_j + \langle \nabla_{e_j}^N\nabla_{e_i}^Ne_i , \eta\rangle e_j\right] \\ \nonumber &=&\sum_{i,j=1}^m \left[ -\langle R^N (\eta , e_i) e_i ,e_j \rangle e_j + \langle \nabla_{e_j}^N\nabla_{e_i}^Ne_i , \eta\rangle e_j \right]\\ \nonumber &=& - \sum_{j=1}^m\langle \operatorname{Ricci}^N \eta , e_j \rangle e_j + \sum_{i,j=1}^m e_j\langle \nabla_{e_i}^Ne_i , \eta\rangle e_j-\sum_{i,j=1}^m \langle \nabla_{e_i}^N{e_i}, \nabla_{e_j}^N \eta \rangle e_j \\ &=& -( \operatorname{Ricci}^N \eta )^\top + m \operatorname{grad}^M f, \end{eqnarray} where the last sum vanishes at $x$ because $\nabla_{e_i}^Ne_i=B(e_i,e_i)$ is normal while $\nabla_{e_j}^N \eta$ is tangent. On the other hand, we have \begin{eqnarray} \label{eq2.9} \nonumber \sum_{i=1}^mB(e_i,A(e_i )) &=& \sum_{i=1}^m\langle B(e_i , A(e_i)) , \eta\rangle \eta \\ \nonumber &=& \sum_{i=1}^m\langle A(e_i) , A(e_i )\rangle \eta \\ &=& |A|^2 \eta. \end{eqnarray} Substituting (\ref{eq2.6}), (\ref{eq2.8}) and (\ref{eq2.9}) in (\ref{eq2.5}), we obtain \begin{eqnarray}\label{eq2.10} - \operatorname{trace}_g \nabla^{\mathbf{i}}|d\mathbf{i}|^{p-2} \nabla^{\mathbf{i}}\tau_p(\mathbf{i}) &=&\nonumber - m^{p-1} \big[ \Delta^M(f)\eta-2 A (\operatorname{grad}^M f) + f (\operatorname{Ricci}^N \eta )^\top \\ &&- \dfrac{m}{2} \operatorname{grad}^M f^2 - f |A|^2 \eta \big]. \end{eqnarray} Theorem \ref{th1} follows from (\ref{eq2.2})--(\ref{eq2.4}) and (\ref{eq2.10}). \end{proof} As an immediate consequence of Theorem \ref{th1} we have the following.
\begin{corollary}\label{corollary1} A hypersurface $(M^m,g) $ in an Einstein space $(N^{m+1},\langle ,\rangle )$ is $p$-biharmonic if and only if its mean curvature function $f$ solves the following system of PDEs \begin{equation}\label{sys2} \left\{ \begin{array}{lll} -\Delta^M(f) + f |A|^2 + m(p-2) f^3 - \frac{S}{m+1}f &=& 0; \\\\ 2A(\operatorname{grad}^M f) + ( p-2 + \dfrac{m}{2} ) \operatorname{grad}^M f^2 &=& 0, \end{array} \right. \end{equation} where $S$ is the scalar curvature of the ambient space. \end{corollary} \begin{proof} It is well known that if $(N^{m+1} ,\langle , \rangle ) $ is an Einstein manifold then $\operatorname{Ric}^N(X,Y) = \lambda \langle X,Y\rangle $ for some constant $\lambda$ and all $X,Y \in \Gamma(TN)$. Hence \begin{eqnarray*} S&=& \operatorname{trace}_{\langle , \rangle} \operatorname{Ric}^N\\ &=& \sum_{i=1}^m\operatorname{Ric}^N(e_i,e_i) + \operatorname{Ric}^N(\eta, \eta) \\ &=& \lambda (m+1), \end{eqnarray*} where $\{ e_i \}_{i=1,... , m }$ is a normal orthonormal frame on $(M^m,g)$ at $x$. Since $\operatorname{Ric}^N(\eta, \eta) = \lambda $, we conclude that $$\operatorname{Ric}^N(\eta, \eta) = \frac{S}{m+1}. $$ On the other hand, we have \begin{eqnarray*} (\operatorname{Ricci}^N\eta)^{\top}&=& \sum_{i=1}^m\langle \operatorname{Ricci}^N\eta , e_i\rangle e_i \\ &=& \sum_{i=1}^m\operatorname{Ric}^N(\eta , e_i)e_i\\ &=& \sum_{i=1}^m\lambda\langle \eta , e_i\rangle e_i \\ &=& 0. \end{eqnarray*} Corollary \ref{corollary1} now follows from Theorem \ref{th1}. \end{proof} \begin{theorem} \label{th2} A totally umbilical hypersurface $(M^m,g) $ in an Einstein space $(N^{m+1},\langle ,\rangle )$ with non-positive scalar curvature is $p$-biharmonic if and only if it is minimal. \end{theorem} \begin{proof} Take an orthonormal frame $\{ e_i, \eta \}_{i=1,...,m}$ on the ambient space $(N^{m+1} , \langle ,\rangle )$ such that $\{ e_i \}_{i=1,...,m}$ is an orthonormal frame on $(M^m,g)$.
We have \begin{eqnarray*} f &=& \langle H,\eta\rangle \\ &=& \frac{1}{m} \sum_{i=1}^m\langle B(e_i, e_i), \eta \rangle \\ &=& \frac{1}{m} \sum_{i=1}^m\langle g(e_i, e_i) \beta \eta , \eta \rangle \\ &=& \beta, \end{eqnarray*} where $\beta\in C^\infty(M)$ is the function given by total umbilicity, $B(X,Y)=\beta\, g(X,Y)\eta$. The $p$-biharmonic hypersurface equation $(\ref{sys2})$ becomes \begin{equation}\label{sys3} \left\{ \begin{array}{lll} -\Delta^M(\beta) + m (p-1)\beta^3 - \frac{S}{m+1} \beta &=& 0; \\\\ (p-1 + \frac{m}{2} ) \beta \operatorname{grad}^M \beta &=& 0. \end{array} \right. \end{equation} The second equation gives $\beta \operatorname{grad}^M \beta=0$, so $\beta^2$, and hence $\beta$, is constant. The first equation then gives $\beta= 0$, that is $f=0$, or $$\beta= \pm\sqrt{\frac{S}{m(m+1)(p-1)}},$$ which is possible only if $S>0$. Since $S\leq 0$ by assumption, $(M^m,g)$ must be minimal. The proof is complete. \end{proof} \section{$p$-biharmonic hypersurface in conformally flat space } Let $ \mathbf{i} : M^m \hookrightarrow \mathbb{R}^{m+1} $ be a minimal hypersurface with the unit normal vector field $\eta$, and let $ \widetilde{ \mathbf{i} } : (M^m , \widetilde{g}) \hookrightarrow (\mathbb{R}^{m+1} , \widetilde{h} = e^{2\gamma} h ) $, $x\longmapsto \widetilde{ \mathbf{i} }(x)=\mathbf{i}(x)=x$, where $\gamma \in C^{\infty}(\mathbb{R}^{m+1} )$, $h= \langle ,\rangle _{\mathbb{R}^{m+1}} $, and $\widetilde{g} $ is the induced metric by $\widetilde{h}$, that is $$ \widetilde{g}(X,Y ) = e^{2\gamma} g(X,Y)= e^{2\gamma} \langle X,Y\rangle _{\mathbb{R}^{m+1}} ,$$ where $g$ is the induced metric by $h$. Let $\{e_i, \eta \}_{i=1,...,m} $ be an orthonormal frame on $(\mathbb{R}^{m+1} , h )$ adapted to the minimal hypersurface $M^m$; then $ \{\widetilde{e}_i, \widetilde{\eta} \}_{i=1,...,m} $ is an orthonormal frame on $ (\mathbb{R}^{m+1} , \widetilde{h} ) $, where $\widetilde{e}_i=e^{-\gamma}e_i$ for all $i=1,..., m $, and $ \widetilde{\eta}=e^{-\gamma} \eta $.
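As a quick cross-check of the algebra closing the proof of Theorem \ref{th2}, the constant solutions of the first equation of the system can be recomputed symbolically. The following is only an illustrative sketch using the Python library sympy; the symbol names are ours and are not part of the paper.

```python
import sympy as sp

# m = dim M, p = exponent of the p-bienergy, S = scalar curvature of N.
# We take beta constant, so the term -Delta(beta) drops out of the system.
m, p, S = sp.symbols('m p S', positive=True)
beta = sp.Symbol('beta')

# First equation of the system with beta constant:
#   m (p - 1) beta^3 - S/(m + 1) beta = 0
eq = m*(p - 1)*beta**3 - S/(m + 1)*beta
sols = sp.solve(eq, beta)
print(sols)
```

The solver returns $\beta=0$ together with $\beta=\pm\sqrt{S/(m(m+1)(p-1))}$; for $p>1$ the non-zero constant solutions force $S>0$, which is exactly the obstruction used in the proof when $S\leq 0$.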
\begin{theorem}\label{th3} The hypersurface $(M^m , \widetilde{g})$ in the conformally flat space $(\mathbb{R}^{m+1} , \widetilde{h}) $ is $p$-biharmonic if and only if \begin{equation}\label{sys4} \left\{ \begin{array}{lll} \eta(\gamma) e^{-\gamma} \big[-\Delta^M(\gamma) -m \operatorname{Hess}^{\mathbb{R}^{m+1}}_{\gamma} ( \eta , \eta ) + (1-m)|\operatorname{grad}^M \gamma |^2 \\\\- |A|^2+m (1-p) \eta(\gamma)^2\big] + \Delta^M(\eta(\gamma) e^{-\gamma}) + (m-2) ( \operatorname{grad}^M \gamma ) (\eta(\gamma) e^{-\gamma} )= 0; \\\\ -2 A(\operatorname{grad}^M( \eta(\gamma) e^{-\gamma})) +2 (1-m) \eta(\gamma) e^{-\gamma} A(\operatorname{grad}^M \gamma) \\\\+ (2p-m)\eta(\gamma)\operatorname{grad}^M (\eta(\gamma) e^{-\gamma}) = 0, \end{array} \right. \end{equation} where $\operatorname{Hess}^{\mathbb{R}^{m+1}}_{\gamma}$ is the Hessian of the smooth function $\gamma$ in $(\mathbb{R}^{m+1} , h) $. \end{theorem} \begin{proof} By using Koszul's formula, we have $$\left\{ \begin{array}{ll} \widetilde{\nabla}_X^M Y = \nabla_X^M Y + X(\gamma) Y + Y(\gamma)X -g(X,Y) \operatorname{grad}^M \gamma; \\\\ \widetilde{\nabla}_U^{\mathbb{R}^{m+1}} V = \nabla_U^{\mathbb{R}^{m+1}} V + U(\gamma) V + V(\gamma)U -h(U,V) \operatorname{grad}^{\mathbb{R}^{m+1}} \gamma, \end{array} \right.$$ for all $X,Y \in\Gamma(TM )$, and $ U,V \in \Gamma(T\mathbb{R}^{m+1} )$.
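In coordinates, the second Koszul identity above says that the Christoffel symbols of $\widetilde{h}=e^{2\gamma}h$ on the flat background are $\widetilde{\Gamma}^k_{ij} = \delta_{jk}\,\partial_i\gamma + \delta_{ik}\,\partial_j\gamma - \delta_{ij}\,\partial_k\gamma$. This can be checked symbolically; the snippet below is our own sanity check with sympy in the case $m+1=3$ and is not part of the proof.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
gamma = sp.Function('gamma')(x, y, z)

n = 3
delta = sp.eye(n)
g_conf = sp.exp(2*gamma) * delta   # the conformal metric h~ = e^{2 gamma} h
g_inv = g_conf.inv()

def christoffel(k, i, j):
    # Gamma^k_ij = (1/2) g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij})
    return sp.Rational(1, 2) * sum(
        g_inv[k, l] * (sp.diff(g_conf[j, l], coords[i])
                       + sp.diff(g_conf[i, l], coords[j])
                       - sp.diff(g_conf[i, j], coords[l]))
        for l in range(n))

# Compare with the Koszul-formula prediction for every index triple.
ok = all(
    sp.simplify(christoffel(k, i, j)
                - (sp.diff(gamma, coords[i]) * delta[j, k]
                   + sp.diff(gamma, coords[j]) * delta[i, k]
                   - delta[i, j] * sp.diff(gamma, coords[k]))) == 0
    for i in range(n) for j in range(n) for k in range(n))
print(ok)
```

Applied to the coordinate fields (for which $\nabla^{\mathbb{R}^{m+1}}_{\partial_i}\partial_j=0$), this is precisely the displayed formula for $\widetilde{\nabla}^{\mathbb{R}^{m+1}}$.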
Consequently \begin{eqnarray}\label{eq3.2} \nabla_X^{\widetilde{\mathbf{i}}} d\widetilde{\mathbf{i}}(Y) &=&\nonumber\nabla_{X}^{\widetilde{\mathbf{i}}} Y \\ &=&\nonumber \widetilde{\nabla}_{d\mathbf{i} (X)}^{\mathbb{R}^{m+1}} Y\\ &=&\nonumber \widetilde{\nabla}_X^{\mathbb{R}^{m+1}} Y \\ &=& \nabla_X^{\mathbb{R}^{m+1}} Y + X(\gamma) Y+ Y(\gamma)X -h(X,Y) \operatorname{grad}^{\mathbb{R}^{m+1}} \gamma,\qquad \end{eqnarray} and the following \begin{eqnarray}\label{eq3.3} d\widetilde{\mathbf{i}} ( \widetilde{\nabla}_X^M Y ) &=&\nonumber d\mathbf{i}(\nabla_X^M Y) + X(\gamma) d\mathbf{i} (Y) + Y(\gamma) d\mathbf{i}(X) - g(X,Y) d\mathbf{i}( \operatorname{grad}^M \gamma )\\ &=&\nabla_X^M Y + X(\gamma) Y + Y(\gamma) X - g(X,Y) \operatorname{grad}^M \gamma. \end{eqnarray} From equations (\ref{eq3.2}) and (\ref{eq3.3}), we get \begin{eqnarray}\label{eq3.4} (\nabla d\widetilde{\mathbf{i}})(X,Y) &=&\nonumber \nabla_X^{\widetilde {\mathbf{i}} } d \widetilde {\mathbf{i}} (Y) -d \widetilde {\mathbf{i}}(\widetilde{\nabla}_X^M Y)\\ &=&\nonumber (\nabla d\mathbf{i})(X,Y) + g(X,Y)[\operatorname{grad}^M\gamma -\operatorname{grad}^{\mathbb{R}^{m+1}} \gamma]\\ &=& B(X,Y) -g(X,Y) \eta(\gamma)\eta. \end{eqnarray} So that, the mean curvature function $\widetilde{f}$ of $(M^m,\widetilde{g})$ in $ (\mathbb{R}^{m+1} , \widetilde{h} ) $ is given by $ \widetilde{f}=-\eta(\gamma)e^{-\gamma} $. 
Indeed, by taking traces in (\ref{eq3.4}), we obtain $$e^{2\gamma}\widetilde{H} = H - \eta(\gamma)\eta.$$ Since $(M^m,g)$ is minimal in $ (\mathbb{R}^{m+1} , h ) $, we find that $\widetilde{H} = - e^{-2\gamma} \eta(\gamma)\eta$, that is $\widetilde{H} = - e^{-\gamma} \eta(\gamma)\widetilde{\eta}$.\\ With the new notation, the equation $(\ref{sys1})$ for a $p$-biharmonic hypersurface in the conformally flat space becomes \begin{equation}\label{sys5} \left\{ \begin{array}{lll} -\widetilde{\Delta}(\widetilde{f}) + \widetilde{f} |\widetilde{A}|_{\widetilde{g}}^2 -\widetilde{f} \, \widetilde{\operatorname{Ric}}^{\mathbb{R}^{m+1} }(\widetilde{\eta} , \widetilde{\eta} ) + m(p-2) \widetilde{f}^3 &=& 0; \\\\ 2\widetilde{A}(\widetilde{\operatorname{grad}}^M \widetilde{f}) -2 \widetilde{f} (\widetilde{\operatorname{Ricci}}^{\mathbb{R}^{m+1} } \widetilde{\eta})^\top + ( p-2 + \dfrac{m}{2} ) \widetilde{\operatorname{grad}}^M \widetilde{f}^2 &=& 0, \end{array} \right. \end{equation} A straightforward computation yields \begin{eqnarray*} \nonumber \widetilde{\operatorname{Ricci}}^{\mathbb{R}^{m+1}} \eta &=& e^{-2\gamma} \big[\operatorname{Ricci}^{\mathbb{R}^{m+1}}\eta -\Delta^{\mathbb{R}^{m+1}}(\gamma)\eta+(1-m)\nabla_{\eta}^{\mathbb{R}^{m+1}} \operatorname{grad}^{\mathbb{R}^{m+1}} \gamma \\ \nonumber &&+(1-m) |\operatorname{grad}^{\mathbb{R}^{m+1}} \gamma|^2 \eta - (1-m) \eta(\gamma)\operatorname{grad}^{\mathbb{R}^{m+1}} \gamma \big]; \end{eqnarray*} \begin{eqnarray}\label{eq5} \nonumber \widetilde{\operatorname{Ric}}^{\mathbb{R}^{m+1} }(\widetilde{\eta} , \widetilde{\eta} ) &=& \widetilde{h}(\widetilde{\operatorname{Ricci}}^{\mathbb{R}^{m+1}} \widetilde{\eta},\widetilde{\eta}) \\ \nonumber &=&h(\widetilde{\operatorname{Ricci}}^{\mathbb{R}^{m+1}}\eta,\eta)\\ \nonumber&=& e^{-2\gamma} h\big(\operatorname{Ricci}^{\mathbb{R}^{m+1}}\eta - \Delta^{\mathbb{R}^{m+1}}(\gamma) \eta+(1-m)\nabla_{\eta}^{\mathbb{R}^{m+1}} \operatorname{grad}^{\mathbb{R}^{m+1}} \gamma\\ \nonumber &&+ (1-m)
|\operatorname{grad}^{\mathbb{R}^{m+1}}\gamma|^2 \eta - (1-m) \eta(\gamma)\operatorname{grad}^{\mathbb{R}^{m+1}}\gamma,\eta\big) \\ \nonumber &=& e^{-2\gamma} \big[-\Delta^{\mathbb{R}^{m+1}}(\gamma) +(1-m) \operatorname{Hess}_{\gamma}^{\mathbb{R}^{m+1}}(\eta,\eta) +(1-m) |\operatorname{grad}^{\mathbb{R}^{m+1}}\gamma|^2\\ && -(1-m)\eta(\gamma)^2\big]; \end{eqnarray} \begin{eqnarray}\label{eq8} (\widetilde{\operatorname{Ricci}}^{\mathbb{R}^{m+1}} \widetilde{\eta})^\top \nonumber&=&\sum_{i=1}^mh(\widetilde{\operatorname{Ricci}}^{\mathbb{R}^{m+1} } \widetilde{\eta} ,e_i)e_i \\ \nonumber &=& (1-m)e^{-3\gamma}\sum_{i=1}^m\left[ h ( \nabla_{\eta}^{\mathbb{R}^{m+1}}\operatorname{grad}^{\mathbb{R}^{m+1}} \gamma , e_i)e_i -\eta(\gamma)h(\operatorname{grad}^{\mathbb{R}^{m+1}} \gamma , e_i)e_i \right]\\ \nonumber &=& (1-m) e^{-3\gamma}\Big[\sum_{i=1}^m h ( \nabla_{e_i}^{\mathbb{R}^{m+1}}\operatorname{grad}^{\mathbb{R}^{m+1}} \gamma , \eta)e_i -\eta(\gamma) \operatorname{grad}^M \gamma\Big] \\ \nonumber &=& (1-m) e^{-3\gamma}\Big[\sum_{i=1}^m e_i h(\operatorname{grad}^{\mathbb{R}^{m+1}} \gamma , \eta)e_i -\sum_{i=1}^m h(\operatorname{grad}^{\mathbb{R}^{m+1}} \gamma , \nabla_{e_i}^{\mathbb{R}^{m+1}} \eta ) e_i\\ \nonumber &&- \eta(\gamma) \operatorname{grad}^{M} \gamma\Big] \\ \nonumber &=& (1-m) e^{-3\gamma} \big[\operatorname{grad}^M \eta(\gamma) +\sum_{i=1}^m h(\operatorname{grad}^{\mathbb{R}^{m+1}} \gamma , Ae_i)e_i -\eta(\gamma) \operatorname{grad}^M \gamma\big]\\ &=& (1-m) e^{-3\gamma} \big[\operatorname{grad}^M \eta(\gamma) + A(\operatorname{grad}^{M} \gamma) - \eta(\gamma) \operatorname{grad}^M \gamma\big]; \end{eqnarray} \begin{eqnarray}\label{eq6} \nonumber \widetilde{\Delta}(\widetilde{f}) &=& e^{-2\gamma}[\Delta(\widetilde{f})+(m-2)d\widetilde{f}(\operatorname{grad}^M \gamma )] \\ &=& e^{-2\gamma}[-\Delta(\eta(\gamma)e^{-\gamma})-(m-2)(\operatorname{grad}^M \gamma)(\eta(\gamma)e^{-\gamma})]; \end{eqnarray} \begin{eqnarray}\label{eq3.8} 
|\widetilde{A}|_{\widetilde{g}}^2 &=&\nonumber\sum_{i=1}^m\widetilde{g}(\widetilde{A}\widetilde{e}_i,\widetilde{A}\widetilde{e}_i)\\ &=&\nonumber\sum_{i=1}^m g(\widetilde{A}e_i,\widetilde{A}e_i)\\ &=&\nonumber\sum_{i=1}^m h(\widetilde{\nabla}_{e_i}^{\mathbb{R}^{m+1}} \widetilde{\eta},\widetilde{\nabla}_{e_i}^{\mathbb{R}^{m+1}} \widetilde{\eta}) \\ &=&\nonumber\sum_{i=1}^m h(\nabla_{e_i}^{\mathbb{R}^{m+1}}\widetilde{\eta}+e_i(\gamma) \widetilde{\eta} +\widetilde{\eta}(\gamma)e_i,\nabla_{e_i}^{\mathbb{R}^{m+1}}\widetilde{\eta}+e_i(\gamma) \widetilde{\eta} +\widetilde{\eta}(\gamma)e_i)\\ &=&\nonumber\sum_{i=1}^m \big[h(\nabla_{e_i}^{\mathbb{R}^{m+1}}\widetilde{\eta},\nabla_{e_i}^{\mathbb{R}^{m+1}}\widetilde{\eta}) +2 \widetilde{\eta}(\gamma)h(\nabla_{e_i}^{\mathbb{R}^{m+1}}\widetilde{\eta},e_i)+e_i(\gamma)^2e^{-2\gamma}\\ &&+2e_i(\gamma)h(\nabla_{e_i}^{\mathbb{R}^{m+1}}\widetilde{\eta},\widetilde{\eta})\big]+m\widetilde{\eta}(\gamma)^2. \end{eqnarray} The first term of (\ref{eq3.8}) is given by \begin{eqnarray*} \sum_{i=1}^mh(\nabla_{e_i}^{\mathbb{R}^{m+1}}e^{-\gamma}\eta,\nabla_{e_i}^{\mathbb{R}^{m+1}}e^{-\gamma}\eta) &=&\sum_{i=1}^m h(-e^{-\gamma}e_i(\gamma) \eta+e^{-\gamma}\nabla_{e_i}^{\mathbb{R}^{m+1}}\eta , -e^{-\gamma}e_i(\gamma) \eta+e^{-\gamma}\nabla_{e_i}^{\mathbb{R}^{m+1}}\eta )\\ &=&\sum_{i=1}^m[ e^{-2\gamma}e_i(\gamma)^2+e^{-2\gamma}h(\nabla_{e_i}^{\mathbb{R}^{m+1}}\eta,\nabla_{e_i}^{\mathbb{R}^{m+1}}\eta)]\\ &=& e^{-2\gamma} |\operatorname{grad}^M \gamma|^2 + e^{-2\gamma}|A|^2. \end{eqnarray*} The second term of (\ref{eq3.8}) is given by \begin{eqnarray*} 2\widetilde{\eta}(\gamma)\sum_{i=1}^m h(\nabla_{e_i}^{\mathbb{R}^{m+1}}\widetilde{\eta},e_i) &=& -2e^{-\gamma} \eta(\gamma)\sum_{i=1}^m h(e^{-\gamma}\eta,\nabla_{e_i}^{\mathbb{R}^{m+1}} e_i ) \\ &=& -2me^{-2\gamma} \eta(\gamma)h(\eta, H) \\ &=&0. \end{eqnarray*} Here $H=0$. 
We also have \begin{eqnarray*} 2\sum_{i=1}^me_i(\gamma)h(\nabla_{e_i}^{\mathbb{R}^{m+1}} \widetilde{\eta},\widetilde{\eta}) & =&\sum_{i=1}^m e_i(\gamma) e_i h(\widetilde{\eta} , \widetilde{\eta}) \\ &=& \sum_{i=1}^me_i(\gamma) e_i(e^{-2\gamma}) \\ &=& -2e^{-2\gamma}\sum_{i=1}^m e_i(\gamma)^2 \\ &=& -2e^{-2\gamma} |\operatorname{grad}^M \gamma|^2. \end{eqnarray*} Thus \begin{equation}\label{eq7} |\widetilde{A}|_{\widetilde{g}}^2= e^{-2\gamma} |A|^2 + m e^{-2\gamma}\eta(\gamma)^2. \end{equation} We compute \begin{eqnarray}\label{eq9} \nonumber \widetilde{\operatorname{grad}}^M \widetilde{f}&=& e^{-2\gamma} \sum_{i=1}^me_i(\widetilde{f})e_i\\ &=& -e^{-2\gamma}\operatorname{grad}^M (\eta(\gamma)e^{-\gamma}); \end{eqnarray} and the following \begin{eqnarray}\label{eq10} \nonumber \widetilde{A}(\widetilde{\operatorname{grad}}^M \widetilde{f}) &=& -\widetilde{\nabla}_{\widetilde{\operatorname{grad}}^M \widetilde{f}}^{\mathbb{R}^{m+1}} \widetilde{\eta} \\ \nonumber &=&-\widetilde{\nabla}_{\widetilde{\operatorname{grad}}^M \widetilde{f}}^{\mathbb{R}^{m+1}} e^{-\gamma} \eta \\ \nonumber &=& e^{-\gamma}(\widetilde{\operatorname{grad}}^M \widetilde{f}) (\gamma) \eta - e^{-\gamma}\widetilde{\nabla}_{\widetilde{\operatorname{grad}}^M \widetilde{f}}^{\mathbb{R}^{m+1}}\eta \\ \nonumber &=& -e^{-3\gamma} \operatorname{grad}^M (\eta(\gamma)e^{-\gamma} ) (\gamma) \eta + e^{-3\gamma} \widetilde{\nabla}_{\operatorname{grad}^M (\eta(\gamma)e^{-\gamma})}^{\mathbb{R}^{m+1}}\eta\\ \nonumber &=& -e^{-3\gamma} \operatorname{grad}^M(\eta(\gamma)e^{-\gamma})(\gamma) \eta + e^{-3\gamma} \eta(\gamma) \operatorname{grad}^M(\eta(\gamma)e^{-\gamma})\\ \nonumber && +e^{-3\gamma} \operatorname{grad}^M(\eta(\gamma)e^{-\gamma})(\gamma) \eta + e^{-3\gamma}\nabla_{\operatorname{grad}^M(\eta(\gamma)e^{-\gamma})}^{\mathbb{R}^{m+1}} \eta \\ &=& e^{-3\gamma} \eta(\gamma) \operatorname{grad}^M(\eta(\gamma)e^{-\gamma}) - e^{-3\gamma } A(\operatorname{grad}^M \eta(\gamma) e^{-\gamma}).\quad\qquad \end{eqnarray}
Substituting $(\ref{eq5})-(\ref{eq10})$ in $(\ref{sys5})$ and simplifying the resulting equations, we obtain the system $(\ref{sys4})$. \end{proof} \begin{remark}\label{remark}\quad \begin{enumerate} \item Using Theorem \ref{th3}, we can construct many examples of proper $p$-biharmonic hypersurfaces in conformally flat spaces. \item If the functions $\gamma$ and $\eta(\gamma)$ are non-zero constants on $M$, then according to Theorem \ref{th3}, the hypersurface $(M^m , \widetilde{g})$ is $p$-biharmonic in $(\mathbb{R}^{m+1} , \widetilde{h}) $ if and only if $$|A|^2=m (1-p) \eta(\gamma)^2-m\eta(\eta(\gamma)).$$ \end{enumerate} \end{remark} \begin{example} The hyperplane $\mathbf{i} :\mathbb{R}^m \hookrightarrow (\mathbb{R}^{m+1} , e^{2\gamma(z)} h ) $, $x \longmapsto(x , c) $, where $ \gamma \in C^{\infty}(\mathbb{R}) $, $h= \sum_{i=1}^{m}dx_i^2 + dz^2 $, and $c\in \mathbb{R}$, is proper $p$-biharmonic if and only if $(1-p)\gamma'(c)^2-\gamma''(c)=0$. Note that the smooth function $$\gamma(z)=\frac{\ln\left(c_1(p-1)z+c_2(p-1)\right)}{p-1},\quad c_1,c_2\in\mathbb{R},$$ is a solution of the previous differential equation (for all $c$). \end{example} \begin{example} Let $M$ be a surface of revolution in $\{(x,y,z)\in\mathbb{R}^3\,|\,z>0\}$. Suppose that $M$ is part of a plane orthogonal to the axis of revolution, so that $M$ is parametrized by $$(x_1,x_2)\longmapsto(f(x_2)\cos(x_1),f(x_2)\sin(x_1),c),$$ for some constant $c>0$ and some positive function $f(x_2)$. Then $M$ is minimal, and according to Theorem \ref{th3}, the surface $M$ is proper $p$-biharmonic in $3$-dimensional hyperbolic space $(\mathbb{H}^3,z^{\frac{2}{p-1}}h)$, where $h=dx^2+dy^2+dz^2$. \end{example} \textbf{Open Problems.} \begin{enumerate} \item Let $M$ be a minimal surface of revolution contained in a catenoid, that is, $M$ is parametrized by $$(x_1,x_2)\longmapsto\left(a\cosh\left(\frac{x_2}{a}+b\right)\cos(x_1),a\cosh\left(\frac{x_2}{a}+b\right)\sin(x_1),x_2\right),$$ where $a\neq0$ and $b$ are constants.
Are there $p\geq2$ and $\gamma\in C^\infty(\mathbb{R}^3)$ such that $M$ is proper $p$-biharmonic in $\left(\mathbb{R}^3,e^{2\gamma}(dx^2+dy^2+dz^2)\right)$? \item Do there exist proper $p$-biharmonic submanifolds in the Euclidean space $(\mathbb{R}^n,dx_1^2+\ldots+dx_n^2)$? \end{enumerate} \end{document}
\begin{document} \title{Splitting Monoidal Stable Model Categories} \author{D. Barnes} \maketitle \begin{abstract} \noindent If $\mathcal{C}$ is a stable model category with a monoidal product then the set of homotopy classes of self-maps of the unit forms a commutative ring, $[S, S]^{\mathcal{C}}$. An idempotent $e$ of this ring will split the homotopy category: $[X, Y]^{\mathcal{C}} \cong e[X, Y]^{\mathcal{C}} \oplus (1-e)[X, Y]^{\mathcal{C}}$. We prove that, provided the localised model structures exist, this splitting of the homotopy category comes from a splitting of the model category, that is, $\mathcal{C}$ is Quillen equivalent to $L_{e S} \mathcal{C} \times L_{(1-e)S} \mathcal{C}$ and $[X,Y]^{L_{eS} \mathcal{C}} \cong e [X,Y]^{\mathcal{C}}$. This Quillen equivalence is strong monoidal and is symmetric when the monoidal product of $\mathcal{C}$ is. \end{abstract} \section{Introduction} Let $R$ be a commutative ring with an idempotent $e$, so $e \cdot e = e$; then there is an equivalence of categories $R \leftmod \overrightarrow{\longleftarrow} eR \leftmod \times (1-e) R \leftmod$ and for any $R$-module $M$ a natural isomorphism $M \cong eM \oplus (1-e)M$. This result can be useful since in general it is easier to study the categories $eR \leftmod$ and $(1-e) R \leftmod$ separately. We want to find some generalisation of this result to model categories. Our initial example is an additive and monoidal category, so we look for a class of monoidal model categories whose homotopy category is additive. The collection of monoidal stable model categories is such a class. A pointed model category $\mathcal{C}$ comes with a natural adjunction $(\Sigma, \Omega)$ on $\ho \mathcal{C}$. When this adjunction is an equivalence we say that $\mathcal{C}$ is stable. The homotopy category of a stable model category is naturally a triangulated category (hence additive), see \cite[Chapter 7]{hov99}.
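The module-splitting statement recalled in the first paragraph can be made concrete in the smallest non-trivial example, $R = \mathbb{Z}/6$ with idempotent $e = 3$. The script below is our illustration in Python and is, of course, not part of the paper.

```python
# Idempotent splitting of R-modules for R = Z/6, e = 3 (since 3*3 = 9 = 3 mod 6).
R = list(range(6))
e = 3
assert (e * e) % 6 == e                       # e is idempotent
f = (1 - e) % 6                               # the complementary idempotent 1 - e = 4
assert (f * f) % 6 == f and (e * f) % 6 == 0  # f idempotent, e and f orthogonal

eR = sorted({(e * r) % 6 for r in R})         # isomorphic to Z/2
fR = sorted({(f * r) % 6 for r in R})         # isomorphic to Z/3
# Every m in R decomposes as m = e*m + (1-e)*m, giving M = eM (+) (1-e)M:
assert all(((e * r) % 6 + (f * r) % 6) % 6 == r for r in R)
print(eR, fR)
```

Here $e\,\mathbb{Z}/6 = \{0,3\} \cong \mathbb{Z}/2$ and $(1-e)\,\mathbb{Z}/6 = \{0,2,4\} \cong \mathbb{Z}/3$, recovering the classical decomposition $\mathbb{Z}/6 \cong \mathbb{Z}/2 \times \mathbb{Z}/3$.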
We are interested in monoidal stable model categories: those stable model categories which are also monoidal model categories (\cite[Section 6.6]{hov99}). Thus $\mathcal{C}$ has a closed monoidal product $(\wedge, \homSSS)$ with unit $S$ which is compatible with the model structure in the sense that the pushout product axiom holds. We write $[X,Y]^{\mathcal{C}}$ for the set of maps in the homotopy category of $\mathcal{C}$; this is a group since $X$ is equivalent to $\Omega^2 \Sigma^2 X$. It is then a simple task to prove that $[S,S]^{\mathcal{C}}$ is a commutative ring (Lemma \ref{lem:unitring}). For any $X, Y \in \mathcal{C}$, $[X,Y]^{\mathcal{C}}$ is a $[S,S]^{\mathcal{C}}$-module via the smash product. Hence, for any idempotent $e \in [S,S]^{\mathcal{C}}$, we have an isomorphism which is natural in $X$ and $Y$: $[X, Y]^{\mathcal{C}} \cong e[X, Y]^{\mathcal{C}} \oplus (1-e)[X, Y]^{\mathcal{C}}$. Define $e \ho \mathcal{C}$ to be the category with the same class of objects as $\ho \mathcal{C}$ and with morphisms given by $e[X, Y]^{\mathcal{C}}$. Then, as with the case of $R$-modules above, we have an equivalence of categories $\ho \mathcal{C} \overrightarrow{\longleftarrow} e \ho \mathcal{C} \times (1-e) \ho \mathcal{C}$. We want to understand this splitting in terms of the model category $\mathcal{C}$. We assume that for any cofibrant object $E \in \mathcal{C}$ there is a new model structure on the category $\mathcal{C}$, written $L_E \mathcal{C}$, with the same cofibrations as $\mathcal{C}$ and weak equivalences those maps $f$ such that $\id_E \wedge f$ is a weak equivalence of $\mathcal{C}$. The model structure $L_E \mathcal{C}$ is called the Bousfield localisation of $\mathcal{C}$ at $E$ and there is a left Quillen functor $\id : \mathcal{C} \to L_E \mathcal{C}$.
For $e$ an idempotent of $[S,S]^{\mathcal{C}}$, we are interested in localising at the objects $eS$ and $(1-e)S$. These are constructed in terms of homotopy colimits and $S$ is weakly equivalent to $eS \coprod (1-e)S$. Our main result, Theorem \ref{thm:generalsplitting}, is that the adjunction $$\Delta : \mathcal{C} \overrightarrow{\longleftarrow} L_{eS} \mathcal{C} \times L_{(1-e)S} \mathcal{C} : \prod$$ is a Quillen equivalence. Furthermore $[X,Y]^{L_{eS} \mathcal{C}} \cong e [X,Y]^{\mathcal{C}}$, so that this Quillen equivalence induces the splitting of $\ho \mathcal{C}$. Note that there is a non-trivial idempotent $e \in [S,S]^{\mathcal{C}}$ if and only if there is a non-trivial splitting of the homotopy category. The splitting theorem proves that if there is such an idempotent, then there is a splitting of model categories. Corollary \ref{cor:possiblesplittings} demonstrates that if one has a splitting at the model category level (into $L_E \mathcal{C}$ and $L_F \mathcal{C}$) then the idempotent this defines ($e$) returns the splitting at the model category level: $L_{eS} \mathcal{C} = L_E \mathcal{C}$ and $L_{(1-e)S} \mathcal{C} = L_F \mathcal{C}$. Hence, the notions: a splitting of $[S,S]^{\mathcal{C}}$, a splitting of $\ho \mathcal{C}$ and a splitting of the model category $\mathcal{C}$, are all equivalent. Our motivation for this splitting result came from studying rational equivariant spectra for compact Lie groups $G$. The ring of self-maps of the unit in the homotopy category of rational $G$-spectra, $[S,S]^G_{\mathbb{Q}}$, is naturally isomorphic to the rational Burnside ring. We have a good understanding of idempotents in this ring via tom Dieck's isomorphism, see Lemma \ref{lem:tdisom}. If a non-trivial idempotent exists, then we can use it to split the category and obtain two pieces which are possibly easier to study.
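For the smallest non-trivial case $G = C_2$, the rational Burnside ring has $\mathbb{Q}$-basis $[C_2/C_2]$ (the unit) and $[C_2/e]$, and the only non-obvious product is $[C_2/e]\cdot[C_2/e] = 2\,[C_2/e]$. Its idempotents can then be listed directly; the computation below is our own sympy sketch of the kind of idempotent that tom Dieck's isomorphism detects, not an argument from the paper.

```python
import sympy as sp

# Write a general element of A(C_2) (x) Q as a*[C_2/C_2] + c*[C_2/e].
# Squaring it using [C_2/e]^2 = 2*[C_2/e] and equating with the element itself
# gives the idempotency equations:
#   a^2 = a   and   2*a*c + 2*c^2 = c.
a, c = sp.symbols('a c')
idems = sp.solve([sp.Eq(a**2, a), sp.Eq(2*a*c + 2*c**2, c)], [a, c])
print(idems)
```

The four solutions are $0$, $1$ and the two non-trivial idempotents $\tfrac{1}{2}[C_2/e]$ and $1-\tfrac{1}{2}[C_2/e]$; under the marks (fixed-point count) isomorphism $A(C_2)\otimes\mathbb{Q}\cong\mathbb{Q}\times\mathbb{Q}$ these correspond to $(1,0)$ and $(0,1)$.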
We construct a model category of rational equivariant spectra in Section \ref{sec:rateqspec}; we then give two examples of this splitting result taken from \cite{barnes}. Corollary \ref{cor:finG} considers the case of a finite group and at the homotopy level recovers the splitting result of \cite[Appendix A]{gremay95}. The second example is Lemma \ref{lem:Geidemfamily}, and in the case of $O(2)$ the idempotent constructed is non-trivial and gives the homotopy level splitting of \cite{gre98a}. Since we are working in a monoidal context and the splitting result is a strong monoidal adjunction, we can give two further examples: the case of modules over a ring spectrum (Proposition \ref{prop:splitrmod}) and $R$-$R$-bimodules for a ring spectrum $R$ (Proposition \ref{prop:splitbimod}). After these examples we return to our motivating case of rational $G$-spectra and give a model structure for rational $G$-spectra in terms of modules over a commutative ring spectrum. We also feel that we should mention \cite{ss03stabmodcat}. In this paper the authors assume that one has a stable model category with a set of compact generators and conclude that such a category is Quillen equivalent to the category of right modules over a ring spectrum with many objects (that is, right modules over a category enriched over symmetric spectra). Consider a symmetric monoidal category $\mathcal{C}$ with a set of compact generators $\mathcal{G}$ such that there is an idempotent $e \in [S,S]^{\mathcal{C}}$; we can relate our splitting result to the work of the above-mentioned paper as follows. We have two new sets of compact objects $e \mathcal{G} = \{e G \mid G \in \mathcal{G} \}$ and $(1-e) \mathcal{G}$, whose union is a set of generators for $\mathcal{C}$. We can construct a ring spectrum with many objects from $e \mathcal{G}$, call this $\mathcal{E}(e \mathcal{G})$.
The homotopy category of right modules over $\mathcal{E}(e \mathcal{G})$ is equivalent to $e \ho \mathcal{C}$ and similarly the homotopy category of right modules over $\mathcal{E}((1-e) \mathcal{G})$ is equivalent to $(1-e) \ho \mathcal{C}$. All of our examples (see Sections \ref{sec:rateqspec} - \ref{sec:modbimod}) have a set of compact generators. \paragraph*{Acknowledgments} This work is a development of material from my PhD thesis, supervised by John Greenlees. I would like to thank him for all the help and advice he has given me. Stating the splitting result in terms of stable model categories was a suggestion made by both Neil Strickland and Stefan Schwede, and the bimodule example was the idea of Stefan Schwede; they both deserve a great deal of gratitude for carefully reading earlier versions of this work. \section{Stable Model Categories} We introduce the notion of a stable model category, prove that if $\mathcal{C}$ is a monoidal stable model category then $[S,S]^{\mathcal{C}}$ is a commutative ring and prove some basic results about idempotents of $[S,S]^{\mathcal{C}}$. A pointed model category $\mathcal{C}$ comes with a natural action of $\ho \SSET_*$ (the homotopy category of pointed simplicial sets) on $\ho \mathcal{C}$, see \cite[Chapter 6]{hov99} or \cite[Section I.2]{quil67}. In particular for $X \in \mathcal{C}$ we have $\Sigma X := S^1 \wedge^L X$ and $\Omega X := R \homSSS_* (S^1, X)$; these define the suspension and loop adjunction $(\Sigma, \Omega)$ on $\ho \mathcal{C}$. When this adjunction is an equivalence we say that $\mathcal{C}$ is \textbf{stable}, see \cite[Chapter 7]{hov99}. Following that chapter we see that $\ho \mathcal{C}$ is a triangulated category in the classical sense (see \cite{del77}) and that cofibre and fibre sequences agree (up to signs).
Let $\mathcal{C}$ be a monoidal stable model category; we let $\widehat{c}$ and $\widehat{f}$ denote cofibrant and fibrant replacement in $\mathcal{C}$. For any collection of objects $\{Y_i \}_{i \in I}$ in $\mathcal{C}$, there is a natural map $\coprod Y_i \to \prod Y_i$. In a triangulated category finite coproducts and finite products coincide, thus when $I$ is a finite set we have a weak equivalence $\coprod_{i \in I} \widehat{c} Y_i \to \prod_{i \in I} \widehat{f} Y_i$. \begin{lem}\label{lem:unitring} The set $[S,S]^{\mathcal{C}}$ is a commutative ring. \end{lem} \begin{pf} The homotopy category of a stable model category is additive \cite[Lemma 7.1.2]{hov99}. Thus $[S,S]^{\mathcal{C}}$ is an abelian group and this addition is compatible with composition of maps $\circ : [S,S]^{\mathcal{C}} \otimes_{\mathbb{Z}} [S,S]^{\mathcal{C}} \to [S,S]^{\mathcal{C}}$. There is also a smash product operation $\wedge : [S,S]^{\mathcal{C}} \otimes_{\mathbb{Z}} [S,S]^{\mathcal{C}} \to [S,S]^{\mathcal{C}}$. The operations $\circ$ and $\wedge$ satisfy the following interchange law. Let $a$, $b$, $c$ and $d$ be elements of $[S,S]^{\mathcal{C}}$; then $(a \circ b) \wedge (c \circ d) = (a \wedge c) \circ (b \wedge d) $ as elements of $[S \wedge S, S\wedge S]^{\mathcal{C}}$ and the unit of each operation is the identity map of $S$. Hence, by the well-known argument below, the two operations $\circ$ and $\wedge$ are equal and commutative. So composition defines a commutative ring structure on the group $[S,S]^{\mathcal{C}}$. Consider any set $A$ with two binary operations $\wedge$, $\circ$ which satisfy the above interchange law. Assume there is an element $e \in A$ which acts as both a left and right identity for $\wedge$ and $\circ$. Then $a \wedge d = (a \circ e) \wedge (e \circ d) = a \circ d$ and $a \wedge d = (e \circ a) \wedge (d \circ e) = d \circ a$. Hence the two operations are equal and are commutative.
\end{pf} Note that the above does not assume that $\wedge$ is a {\em symmetric} monoidal product. Consider a map in the homotopy category, $a \in [S,S]^{\mathcal{C}}$. This can be represented by a map $a' : \widehat{c}\widehat{f} S \to \widehat{c}\widehat{f} S$. We can consider the homotopy colimit of the diagram $\widehat{c}\widehat{f} S \overset{a'}{\to} \widehat{c}\widehat{f} S \overset{a'}{\to} \widehat{c}\widehat{f} S \overset{a'}{\to} \dots$, which we denote by $a S$. A different choice of representative will give a weakly equivalent homotopy colimit, so we must use a little care when writing $a S$. The construction of the homotopy colimit $a S$ comes with a map $\widehat{c} S \to \widehat{c}\widehat{f} S \to a S$, the second map being the inclusion of the first term of the telescope. For any $X \in \mathcal{C}$, we have the map $a' \wedge \id_X : \widehat{c}\widehat{f} S \wedge X \to \widehat{c}\widehat{f} S \wedge X$. We can then construct homotopy colimits as above to create the object $aX$. We use \cite[Proposition 7.3.2]{hov99} to obtain an exact sequence: $$0 \to {\lim}^1 [X, Y]^{\mathcal{C}} \to [a X,Y]^{\mathcal{C}} \to \lim [X,Y]^{\mathcal{C}} \to 0.$$ We are interested in $e S$ for $e$ an idempotent of $[S,S]^{\mathcal{C}}$. In such a case, the ${\lim}^1$-term is zero as the tower created by an idempotent satisfies the Mittag-Leffler condition (\cite[Definition 3.5.6]{weib}). Hence the above exact sequence reduces to an isomorphism $[e X,Y]^{\mathcal{C}} \to \lim [X,Y]^{\mathcal{C}} = e [X,Y]^{\mathcal{C}}$. If $e$ is an idempotent so is $(\id_S - e)$, which we now write as $(1-e)$. Furthermore we have a canonical natural isomorphism $[X,Y]^{\mathcal{C}} \cong e[X,Y]^{\mathcal{C}} \oplus (1-e)[X,Y]^{\mathcal{C}}$ for any $X$ and $Y$. Thus, there is a natural isomorphism in the homotopy category $X \to e X \prod (1-e) X$.
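The Mittag-Leffler claim above is a routine check, which we spell out for completeness. The tower in question is
$$[X,Y]^{\mathcal{C}} \xleftarrow{\ e^*\ } [X,Y]^{\mathcal{C}} \xleftarrow{\ e^*\ } [X,Y]^{\mathcal{C}} \xleftarrow{\ e^*\ } \cdots$$
and every map has image $e[X,Y]^{\mathcal{C}}$, so the images stabilise immediately and the Mittag-Leffler condition holds. Moreover, a compatible sequence $(x_0, x_1, \dots)$ in the inverse limit satisfies $x_i = e x_{i+1}$ for all $i$, so each $x_i$ lies in the image of $e$; hence $e x_{i+1} = x_{i+1}$ and the sequence is constant, with value in $e[X,Y]^{\mathcal{C}}$. This identifies the inverse limit with $e[X,Y]^{\mathcal{C}}$ directly.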
We can write $\ho\mathcal{C}$ as the product category $e \ho\mathcal{C} \times (1-e) \ho\mathcal{C}$, where $e \ho\mathcal{C}$ has the same objects as $\ho\mathcal{C}$ and $e \ho\mathcal{C}(X,Y) := e[X,Y]^{\mathcal{C}}$. We wish to pull this splitting back to the level of model categories. \begin{lem}\label{lem:splitobjects} For any object $X$ in $\mathcal{C}$ there is a natural weak equivalence $\widehat{c} X \wedge \widehat{c} S \to \widehat{f} e X \prod \widehat{f} (1-e) X$. \end{lem} \begin{pf} We start with the maps $\widehat{c} X \wedge \widehat{c} S \to e X$ and $\widehat{c} X \wedge \widehat{c} S \to (1-e) X$. By taking fibrant replacements we obtain a map $\widehat{c} X \wedge \widehat{c} S \to \widehat{f} e X \prod \widehat{f} (1-e) X$. The following diagram commutes for any $Y \in \mathcal{C}$, proving the result. $$\xymatrix{ [\widehat{f} eX \prod \widehat{f} (1-e) X, Y]^{\mathcal{C}} \ar[r] \ar[d]^{\cong} & [X,Y]^{\mathcal{C}} \ar[dd]^{\cong} \\ [\widehat{f} eX \vee \widehat{f} (1-e) X, Y]^{\mathcal{C}} \ar[d]^{\cong} \\ [\widehat{f} eX , Y]^{\mathcal{C}} \oplus [\widehat{f} (1-e) X, Y]^{\mathcal{C}} \ar[r]^{\cong} & e[X , Y]^{\mathcal{C}} \oplus (1-e) [X, Y]^{\mathcal{C}} }$$ \end{pf} \section{Localisations} We define the notion of a Bousfield localisation of a monoidal model category and prove that when the localisation exists, the new model category shares many of the properties of the original (left properness, the pushout product axiom and the monoid axiom). We also consider Quillen pairs between localised categories. Recall the following concepts of localisation. \begin{defn}\label{def:genEstuff} Let $E$ be a cofibrant object of the monoidal model category $\mathcal{C}$ and let $X$, $Y$ and $Z$ be objects of $\mathcal{C}$.
\begin{enumerate} \item A map $f : X \to Y$ is an $E$-\textbf{equivalence}\index{E-equivalence@$E$-equivalence} if $\id_E \wedge f : E \wedge X \to E \wedge Y$ is a weak equivalence. \item $Z$ is $E$-\textbf{local}\index{E-local@$E$-local} if $f^* : [Y,Z]^{\mathcal{C}} \to [X,Z]^{\mathcal{C}}$ is an isomorphism for all $E$-equivalences $f : X \to Y$. \item An $E$-\textbf{localisation}\index{E-localisation@$E$-localisation} of $X$ is an $E$-equivalence $\lambda : X \to Y$ from $X$ to an $E$-local object $Y$. \item $A$ is $E$-\textbf{acyclic}\index{E-acyclic@$E$-acyclic} if the map $* \to A$ is an $E$-equivalence. \end{enumerate} \end{defn} The following is a standard result, see \cite[Theorems 3.2.13 and 3.2.14]{hir03}. \begin{lem}\label{lem:genEequivElocal} An $E$-equivalence between $E$-local objects is a weak equivalence. \end{lem} Consider the category $\mathcal{C}$ with a new set of weak equivalences, the $E$-equivalences, while leaving the cofibrations unchanged. If this defines a model structure we call it the Bousfield localisation of $\mathcal{C}$ at $E$ and write it as $L_E \mathcal{C}$. The identity functor gives a strong monoidal Quillen pair (see the definition below) $$\id_{\mathcal{C}} : \mathcal{C} \overrightarrow{\longleftarrow} L_E \mathcal{C} : \id_{\mathcal{C}}.$$ This follows since the cofibrations are unchanged and if $f : X \to Y$ is an acyclic cofibration of $\mathcal{C}$ then $f \wedge \id_E$ is also an acyclic cofibration. Hence $f$ is a cofibration and an $E$-equivalence. We will write $\widehat{f}_{E}$ for fibrant replacement in $L_E \mathcal{C}$. \begin{defn} A Quillen pair $L : \mathcal{C} \overrightarrow{\longleftarrow} \mathcal{D} : R$ between monoidal model categories is said to be a \textbf{strong monoidal adjunction} if there is a natural isomorphism $L (X \otimes Y) \to LX \otimes LY$ and an isomorphism $L S_{\mathcal{C}} \to S_{\mathcal{D}}$.
We require that these isomorphisms satisfy the associativity and unital coherence conditions of \cite[Definition 4.1.2]{hov99}. A strong monoidal adjunction $(L,R)$ is a \textbf{strong monoidal Quillen pair} if it is a Quillen adjunction and if whenever $\widehat{c} S_{\mathcal{C}} \to S_{\mathcal{C}}$ is a cofibrant replacement of $S_{\mathcal{C}}$, the induced map $L \widehat{c} S_{\mathcal{C}} \to L S_{\mathcal{C}}$ is a weak equivalence. \end{defn} From now on we assume that for any cofibrant $E$ the $E$-equivalences and cofibrations define a model structure on $\mathcal{C}$, the $E$-local model structure. In general we will not have a good description of the fibrations of $L_E \mathcal{C}$; however, we do have the following lemma, which is similar in nature to \cite[Proposition 3.4.1]{hir03}. \begin{lem} An $E$-fibrant object is fibrant in $\mathcal{C}$ and $E$-local. If $X$ is $E$-local and fibrant in $\mathcal{C}$, then $X \to *$ has the right lifting property with respect to the class of $E$-acyclic cofibrations between cofibrant objects. \end{lem} Note that in many cases a stronger result holds: an object is $E$-fibrant if and only if it is fibrant in $\mathcal{C}$ and $E$-local. For example, this stronger result holds for EKMM spectra localised at an object $E$ by the fact that the domains of the generating $E$-acyclic cofibrations are cofibrant. \vskip 0.5cm \begin{pf} Let $A \to B$ be an acyclic cofibration; then this is also an $E$-equivalence. So for an $E$-fibrant object $Z$, the canonical map $Z \to *$ will have the right lifting property with respect to $A \to B$. Let $f : A \to B$ be an $E$-equivalence. We must prove that $f^* : [B,Z]^{\mathcal{C}} \to [A, Z]^{\mathcal{C}}$ is an isomorphism. But since $Z$ is $E$-fibrant, the Quillen pair between $\mathcal{C}$ and $L_E \mathcal{C}$ gives an isomorphism $[B,Z]^{\mathcal{C}} \cong [B,Z]^{L_E \mathcal{C}}$.
This is natural in the first variable and the first statement follows. Let $i : A \to B$ be an $E$-acyclic cofibration between cofibrant objects and let $f : A \to X$ be any map of $\mathcal{C}$. Since $X$ is $E$-local, $i$ induces an isomorphism $i^* : [B,X]^{\mathcal{C}} \to [A,X]^{\mathcal{C}}$. Choose $g : B \to X$ such that $g \circ i$ and $f$ are homotopic. We now apply the homotopy extension property (see \cite[Page 1.7]{quil67}): choose a path object $X'$ for $X$ with a map $h : A \to X'$ such that $p_0 \circ h = g \circ i$ and $p_1 \circ h = f$. We thus have the following diagram $$\xymatrix{ A \ar[r]^h \ar[d]_i & X' \ar[d]^{p_0} \\ B \ar[r]^g & X }$$ where $i$ is a cofibration and $p_0$ is a fibration and a weak equivalence in $\mathcal{C}$. Thus we have a lifting $H : B \to X'$ and the map $p_1 \circ H$ is the solution to our original lifting problem. \end{pf} If the $E$-local model structure exists, then every weak equivalence is an $E$-equivalence. Take a weak equivalence $f$ and factor it as $g \circ h$ with $h$ a cofibration and a weak equivalence and $g$ an acyclic $E$-fibration. Then since smashing with $E$ is a left Quillen functor, $\id_E \wedge h$ is an acyclic cofibration. By definition, $\id_E \wedge g$ is a weak equivalence, hence so is $\id_E \wedge f$. We also note that if $F$ and $E$ are cofibrant objects of $\mathcal{C}$ then the model categories $L_{F \wedge E} \mathcal{C}$ and $L_E L_F \mathcal{C}$ are equal (they have the same weak equivalences and cofibrations). Now we prove a straightforward result about Quillen functors between localised categories and then turn to proving that $L_E \mathcal{C}$ inherits many of the properties of the original model structure on $\mathcal{C}$.
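For completeness, the claim that $L_{F \wedge E} \mathcal{C}$ and $L_E L_F \mathcal{C}$ agree amounts to the following elementary chain of equivalences: a map $f$ is a weak equivalence of $L_E L_F \mathcal{C}$ if and only if $\id_E \wedge f$ is an $F$-equivalence, that is, if and only if
$$\id_F \wedge \id_E \wedge f = \id_{F \wedge E} \wedge f$$
is a weak equivalence of $\mathcal{C}$, which is precisely the statement that $f$ is an $F \wedge E$-equivalence. The cofibrations are untouched by both localisations.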
\begin{thm}\label{thm:locfuncs} Take a Quillen adjunction between monoidal model categories with a strong monoidal left adjoint $F : \mathcal{C} \overrightarrow{\longleftarrow} \mathcal{D} : G.$ Let $E$ be cofibrant in $\mathcal{C}$ and assume that all model categories mentioned below exist. Then $(F,G)$ passes to a Quillen pair $F : L_E \mathcal{C} \overrightarrow{\longleftarrow} L_{FE} \mathcal{D} : G.$ Furthermore, if $(F,G)$ form a Quillen equivalence, then they pass to a Quillen equivalence of the localised categories. \end{thm} \begin{pf} Since the cofibrations in $L_E \mathcal{C}$ and $L_{FE} \mathcal{D}$ are unchanged, $F$ preserves cofibrations. Now take an acyclic cofibration in $\mathcal{C}$ of the form $\id_E \wedge f : E \wedge X \to E \wedge Y$; applying $F$ and using the strong monoidal condition we have a weak equivalence in $\mathcal{D}$: $\id_{FE} \wedge Ff : FE \wedge FX \to FE \wedge FY$. Hence $F$ takes $E$-acyclic cofibrations to $FE$-acyclic cofibrations and we have a Quillen pair. To prove the second statement we show that $F$ reflects $E$-equivalences between cofibrant objects and that $F \widehat{c} GX \to X$ is an $FE$-equivalence for all $X$ fibrant in $L_{FE} \mathcal{D}$. These conditions are an equivalent definition of Quillen equivalence by \cite[Corollary 1.3.16(b)]{hov99}. The first condition follows since strong monoidality allows us to identify $F(\id_E \wedge f)$ and $\id_{FE} \wedge Ff$ for a map $f$ in $\mathcal{C}$, and $F$ reflects weak equivalences between cofibrant objects. The second condition is equally simple: we know that an $FE$-fibrant object is fibrant and that cofibrant replacement is unaffected by Bousfield localisation. Hence $F \widehat{c} GX \to X$ is a weak equivalence and thus an $FE$-equivalence. \end{pf} \begin{prop}\index{Left proper} If $\mathcal{C}$ is left proper so is $L_E \mathcal{C}$.
\end{prop} \begin{prop}\label{prop:pushaxiom} If $\mathcal{C}$ is symmetric monoidal, then for two cofibrations $f : U \to V$ and $g : W \to X$, the induced map $$f \square g : V \wedge W \bigvee_{U \wedge W} U \wedge X \to V \wedge X$$ is a cofibration, which is an $E$-acyclic cofibration if either $f$ or $g$ is. If $X$ is a cofibrant object then the map $\widehat{c} S \wedge X \to X$ is a weak equivalence. \end{prop} \begin{pf} Since the cofibrations are unchanged by localisation, we only need to check that the above map is an $E$-equivalence when one of $f$ or $g$ is. Assume that $f$ is an $E$-equivalence; then the map $\id_E \wedge f : E \wedge U \to E \wedge V$ is a weak equivalence and a cofibration. Thus, since $E \wedge (-)$ commutes with pushouts, the map $$ E \wedge (V \wedge W \bigvee_{U \wedge W} U \wedge X) \to E \wedge (V \wedge X) $$ is also a weak equivalence and a cofibration. By symmetry, this also deals with the case when $g$ is an $E$-equivalence. The unit condition is unaffected by localisation, so it holds in the $E$-local model structure. \end{pf} Thus, when $\mathcal{C}$ is symmetric, $L_E \mathcal{C}$ is a monoidal model category. Now we consider the monoid axiom. \begin{prop}\label{prop:Emonoid} If $\mathcal{C}$ is symmetric monoidal and satisfies the monoid axiom, then so does $L_E \mathcal{C}$. \end{prop} \begin{pf} Let $i : A \to X$ be an $E$-acyclic cofibration; then for any object $Y$, the map $\id_E \wedge i \wedge \id_Y$ is a weak equivalence. Moreover, transfinite compositions of pushouts of such maps are weak equivalences by the monoid axiom for $\mathcal{C}$. Thus transfinite compositions of pushouts of maps of the form $i \wedge \id_Y$ are $E$-equivalences. \end{pf} \section{The Splitting} We are now ready to prove our main result, Theorem \ref{thm:generalsplitting}. We conclude this section with a converse to this result.
Recall the definition of the product model category\index{Product model category} from \cite[Example 1.1.6]{hov99}. Given model categories $M_1$ and $M_2$ we can put a model category structure on $M_1 \times M_2$: a map $(f_1,f_2)$ is a cofibration, weak equivalence or fibration if and only if $f_1$ is so in $M_1$ and $f_2$ is so in $M_2$. Similarly a finite product of model categories has a model structure where a map is a cofibration, weak equivalence or fibration if and only if each of its factors is so. If $M_1$ and $M_2$ both satisfy any of the following: left properness, right properness, the pushout product axiom, the monoid axiom or cofibrant generation, then so does $M_1 \times M_2$. \begin{prop}\label{prop:genadjunct} If $E$ and $F$ are cofibrant objects of $\mathcal{C}$ then there is a strong monoidal Quillen adjunction $$\Delta : \mathcal{C} \overrightarrow{\longleftarrow} L_E \mathcal{C} \times L_F \mathcal{C} : \prod.$$ \end{prop} Let $\mathcal{C}$ be a stable monoidal model category with an idempotent $e \in [S,S]^{\mathcal{C}}$. Then we have a Quillen pair $$\Delta : \mathcal{C} \overrightarrow{\longleftarrow} L_{eS} \mathcal{C} \times L_{(1-e)S} \mathcal{C} : \prod$$ and an equivalence of homotopy categories $$\Delta : \ho\mathcal{C} \overrightarrow{\longleftarrow} e \ho\mathcal{C} \times (1-e) \ho\mathcal{C} : \prod.$$ We now wish to prove that the Quillen pair induces this equivalence of homotopy categories. \begin{lem} Take an idempotent $e \in [S,S]^{\mathcal{C}}$, any pair of objects $X$, $Y$ and an $eS$-local object $Z$. Then there are natural isomorphisms $$[X, Y]^{L_{eS} \mathcal{C}} \longrightarrow [X,\widehat{f}_{eS}Y]^{\mathcal{C}}, \quad [X,Z]^{\mathcal{C}} \longrightarrow e[X,Z]^{\mathcal{C}}.$$ \end{lem} \begin{pf} The first comes from the Quillen adjunction between $\mathcal{C}$ and $L_{eS} \mathcal{C}$.
For the second we use the fact that the map $\widehat{c} X \to e X$ is an $eS$-equivalence to obtain isomorphisms $[X,Z]^{\mathcal{C}} \leftarrow [eX,Z]^{\mathcal{C}} \to e[X,Z]^{\mathcal{C}}$. \end{pf} \begin{lem}\label{lem:cofibreseqce} Let $e$ be an idempotent of $[S,S]^{\mathcal{C}}$. Then the map $e : eS \to eS$ is an isomorphism in $\ho\mathcal{C}$. Hence $(1-e) : eS \to eS$ is equal to the zero map in $\ho\mathcal{C}$, and so for any $X$ and $Y$ in $\mathcal{C}$, $(1-e)[X,eY]^{\mathcal{C}}=0$. \end{lem} \begin{pf} Consider the map $e^* : [eS,X]^{\mathcal{C}} \to [eS,X]^{\mathcal{C}}$; this is naturally isomorphic to $e^* : e[S,X]^{\mathcal{C}} \to e[S,X]^{\mathcal{C}}$, which is an isomorphism. The second part follows since $(1-e) \circ e \in [S,S]^{\mathcal{C}}$ is equal to zero. \end{pf} \begin{thm}\label{thm:generalsplitting} Let $\mathcal{C}$ be a stable monoidal model category with an idempotent $e \in [S,S]^{\mathcal{C}}$. Assume that the model categories $L_{eS} \mathcal{C}$ and $L_{(1-e)S} \mathcal{C}$ exist; then the strong monoidal Quillen pair below is a Quillen equivalence. $$\Delta : \mathcal{C} \overrightarrow{\longleftarrow} L_{eS} \mathcal{C} \times L_{(1-e)S} \mathcal{C} : \prod$$ \end{thm} \begin{pf} The right adjoint detects all weak equivalences: take $f : A \to B$ in $L_{eS} \mathcal{C}$ and $g : C \to D$ in $L_{(1-e)S} \mathcal{C}$. If $(f,g) : A \prod C \to B \prod D$ is a weak equivalence then $f$ and $g$ are weak equivalences, since they are retracts of $(f,g)$. Hence $f$ is an $eS$-equivalence and $g$ is a $(1-e)S$-equivalence. Let $X$ be a cofibrant object of $\mathcal{C}$; we then have an $eS$-acyclic cofibration $X \to \widehat{f}_{eS} X$ and a $(1-e)S$-acyclic cofibration $X \to \widehat{f}_{(1-e)S} X$.
We must prove that $X \to \widehat{f}_{eS} X \prod \widehat{f}_{(1-e)S} X$ is a weak equivalence. For any $A \in \mathcal{C}$ we have the following commutative diagram: $$\xymatrix{ e[A,X]^{\mathcal{C}} \oplus (1-e)[A,X]^{\mathcal{C}} \ar[r] & e[A,\widehat{f}_{eS} X]^{\mathcal{C}} \oplus (1-e)[A,\widehat{f}_{(1-e)S} X]^{\mathcal{C}} \\ [A, X]^{\mathcal{C}} \ar[r] \ar[u]_{\cong} & [A, \widehat{f}_{eS} X \prod \widehat{f}_{(1-e)S} X]^{\mathcal{C}} \ar[u]_{\cong} }$$ So we have reduced the problem to proving that $e[A,X]^{\mathcal{C}} \to e[A,\widehat{f}_{eS} X]^{\mathcal{C}}$ is an isomorphism. This follows from the commutative diagram below and Lemma \ref{lem:cofibreseqce}, which tells us that the terms $e[A,(1-e)X]^{\mathcal{C}}$ and $e[A,(1-e)\widehat{f}_{eS}X]^{\mathcal{C}}$ are zero. $$\xymatrix@C+0.3cm{ e[A,X]^{\mathcal{C}} \ar[r] \ar[d]^{\cong} & e[A,\widehat{f}_{eS} X]^{\mathcal{C}} \ar[d]^{\cong} \\ e[A,eX]^{\mathcal{C}} \oplus e[A,(1-e)X]^{\mathcal{C}} \ar[r]^(0.45){\cong} & e[A, e\widehat{f}_{eS} X]^{\mathcal{C}} \oplus e[A,(1-e)\widehat{f}_{eS}X]^{\mathcal{C}} }$$ \end{pf} A \textbf{finite orthogonal decomposition} of $\id_S$ is a collection of idempotents $e_1, \dots, e_n$ which sum to the identity in $[S,S]^{\mathcal{C}}$ and satisfy $e_i \circ e_j = 0$ for $i \neq j$. This result extends to give a strong monoidal Quillen equivalence between $\mathcal{C}$ and $\prod_{i=1}^n L_{e_i S} \mathcal{C}$ whenever $e_1, \dots, e_n$ is a finite orthogonal decomposition of $\id_S$. \begin{cor}\label{cor:possiblesplittings} Consider a monoidal model category $\mathcal{C}$ which splits as a product $L_E \mathcal{C} \times L_F \mathcal{C}$, for cofibrant objects $E$ and $F$.
Then there are orthogonal idempotents $e_E$ and $e_F$ in $[S,S]^{\mathcal{C}}$ such that $e_E + e_F = \id_S$, $L_{e_E S} \mathcal{C} = L_E \mathcal{C}$ and $L_{e_F S} \mathcal{C} = L_F \mathcal{C}$. \end{cor} \begin{pf} Using the isomorphism $[S,S]^{L_E \mathcal{C}} \oplus [S,S]^{L_F \mathcal{C}} \to [S,S]^{\mathcal{C}}$, define $e_E$ as the image of $\id_S \oplus\, 0 \in [S,S]^{L_E \mathcal{C}} \oplus [S,S]^{L_F \mathcal{C}}$ in $[S,S]^{\mathcal{C}}$. Similarly define $e_F$ as the image of $0 \oplus \id_S$. Thus we have idempotents $e_E$ and $e_F$ in $[S,S]^{\mathcal{C}}$ such that $e_E + e_F = \id_S$ and $e_E \circ e_F = 0$. By construction, $e_E [X,Y]^{\mathcal{C}} \cong [X,Y]^{L_E \mathcal{C}}$ and by our work above $e_E [X,Y]^{\mathcal{C}} \cong [X,Y]^{L_{e_E S} \mathcal{C}}$. From this it follows that the $e_E S$-equivalences are the $E$-equivalences and $L_{e_E S} \mathcal{C} = L_E \mathcal{C}$. \end{pf} \section{Rational Equivariant Spectra}\label{sec:rateqspec} Our motivating example for the splitting result is the category of rational $G$-equivariant EKMM $S$-modules for a compact Lie group $G$. Our first task is to define this category, for which we will need a rational sphere spectrum. We work with $G\mathcal{M}$, the category of $G$-equivariant EKMM $S$-modules from \cite{mm02}. One could work with $G$-equivariant orthogonal spectra, perform analogous constructions there and obtain equivalent results for that category; in particular the two categories of equivariant spectra we have mentioned are monoidally Quillen equivalent. We will construct $\mathbb{Q}$ as an abelian group and translate this into spectra. Take a free resolution of $\mathbb{Q}$ as an abelian group, $0 \to R \overset{f}{\to} F \to \mathbb{Q} \to 0$, where $F= \oplus_{q \in \mathbb{Q}} \mathbb{Z}$.
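To fix notation (an elementary point, spelled out for convenience): one may take $F$ to be free on basis elements $b_q$, one for each $q \in \mathbb{Q}$, with augmentation
$$\varepsilon \colon F = \bigoplus_{q \in \mathbb{Q}} \mathbb{Z}\{b_q\} \longrightarrow \mathbb{Q}, \qquad \varepsilon(b_q) = q,$$
and $R = \ker \varepsilon$. Since a subgroup of a free abelian group is free abelian, $R$ is itself a direct sum of copies of $\mathbb{Z}$.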
Since a free abelian group is a direct sum of copies of $\mathbb{Z}$ we can rewrite this short exact sequence as $0 \to \bigoplus_i \mathbb{Z} \overset{f}{\to} \bigoplus_j \mathbb{Z} \to \mathbb{Q} \to 0$. Since $\mathbb{Q}$ is flat, the sequence $0 \to \bigoplus_i M \overset{f \otimes \id}{\to} \bigoplus_j M \to \mathbb{Q} \otimes M \to 0$ is exact for any abelian group $M$. Hence for each subgroup $H$ of $G$, we have an injective map (which we also denote by $f$) $\bigoplus_i A(H) \overset{f \otimes \id}{\to} \bigoplus_j A(H)$ and $\bigoplus_j A(H) / \bigoplus_i A(H) \cong A(H) \otimes \mathbb{Q}$. For $H$ a subgroup of $G$, $$[\bigvee_i S, \bigvee_j S]^H \cong \operatorname{Hom}_{A(H)} \Big( \bigoplus_{i} A(H),\bigoplus_{j} A(H) \Big).$$ Thus we can choose $g : \bigvee_i \widehat{c} S \to \bigvee_j \widehat{c} S$, a representative for the homotopy class corresponding to $f$. Let $I$ be the unit interval with basepoint $0$; there is a cofibration of spaces $S^0 \to I$ which sends the non-basepoint point of $S^0$ to $1 \in I$. If $X$ is a cofibrant $G$-spectrum then $X \cong X \wedge S^0 \to X \wedge I$ is a cofibration since $G$-spectra are enriched over spaces (see \cite[Chapter III, Definition 1.14]{mm02} and \cite[Lemma 4.2.2]{hov99}). For a map $f : X \to Y$, the cofibre of $f$, $C_f$, is the pushout of the diagram $X \wedge I \leftarrow X \overset{f}{\to} Y$. If $X$ is cofibrant then the map $Y \to C_f$ is a cofibration; hence if $X$ and $Y$ are cofibrant, so is $C_f$. \begin{defn}\label{def:ratsphere} For the map $g$ constructed above, the cofibre of $g$ is the \textbf{rational sphere spectrum}\index{Rational sphere spectrum}\index{S rational@$S^0_{\mathcal{M}}\mathbb{Q}$} and we have a cofibre sequence $$\bigvee_i \widehat{c} S \overset{g}{\longrightarrow} \bigvee_j \widehat{c} S \longrightarrow S^0_{\mathcal{M}}\mathbb{Q}.
$$ \end{defn} A different choice of representative for the homotopy class $[g]$ will induce a weak equivalence between the cofibres, and hence (up to weak equivalence) $S^0_{\mathcal{M}}\mathbb{Q}$ is independent of this choice of representative. Note that there is an inclusion $\alpha : \widehat{c} S \to \bigvee_j \widehat{c} S$ which sends $\widehat{c} S$ to the term of $\bigvee_j \widehat{c} S$ corresponding to $1 \in \mathbb{Q}$. \begin{prop}\label{prop:rathomgps} Let $X$ be a $G$-spectrum; then for any subgroup $H$ of $G$ the map $(\id_X \wedge \alpha)_* : \pi_*^H(X) \to \pi_*^H(X \wedge S^0_{\mathcal{M}}\mathbb{Q})$ induces an isomorphism $\pi_*^H(X) \otimes \mathbb{Q} \to \pi_*^H(X \wedge S^0_{\mathcal{M}}\mathbb{Q})$. \end{prop} \begin{pf} Using the cofibre sequence which defines $S^0_{\mathcal{M}}\mathbb{Q}$ we have the following collection of isomorphic long exact sequences of homotopy groups $$\begin{array}{ccccccc} \dots \longrightarrow & \pi_n^H(X \wedge \bigvee_i \widehat{c} S) & \overset{(\id \wedge g)_*}{\longrightarrow} & \pi_n^H(X \wedge \bigvee_j \widehat{c} S) & \longrightarrow & \pi_n^H(X \wedge S^0_{\mathcal{M}}\mathbb{Q}) & \longrightarrow \dots \\ \dots \longrightarrow & \pi_n^H(\bigvee_i X) & \overset{(\id \wedge g)_*}{\longrightarrow} & \pi_n^H(\bigvee_j X) & \longrightarrow & \pi_n^H(X \wedge S^0_{\mathcal{M}}\mathbb{Q}) & \longrightarrow \dots \\ \dots \longrightarrow & \bigoplus_i \pi_n^H( X) & \overset{g \otimes \id}{\longrightarrow} & \bigoplus_j \pi_n^H( X) & \longrightarrow & \pi_n^H(X \wedge S^0_{\mathcal{M}}\mathbb{Q}) & \longrightarrow \dots \\ \dots \longrightarrow & \bigoplus_i \mathbb{Z} \otimes \pi_n^H( X) & \overset{g \otimes \id}{\longrightarrow} & \bigoplus_j \mathbb{Z} \otimes \pi_n^H( X) & \longrightarrow & \pi_n^H(X \wedge S^0_{\mathcal{M}}\mathbb{Q}) & \longrightarrow \dots \end{array} $$ Since the map $g \otimes \id : (\bigoplus_i \mathbb{Z}) \otimes
\pi_n^H( X) \to (\bigoplus_j \mathbb{Z}) \otimes \pi_n^H( X)$ is injective for all $n$, this long exact sequence splits into short exact sequences and the result follows. \end{pf} There are many other methods for constructing a rational sphere spectrum; these will all be weakly equivalent to $S^0_{\mathcal{M}} \mathbb{Q}$, as we prove below. One obvious alternative is to construct a homotopy colimit of the diagram $\widehat{c} S \overset{2}{\to} \widehat{c} S \overset{3}{\to} \widehat{c} S \overset{4}{\to} \dots$; call this object $R_{\mathbb{Q}}$. It follows that the map $\pi_*^H(\widehat{c} X) \to \pi_*^H(R_{\mathbb{Q}} \wedge \widehat{c} X)$ induced by $\widehat{c} S \to R_{\mathbb{Q}}$ gives an isomorphism $\pi_*^H(\widehat{c} X) \otimes \mathbb{Q} \to \pi_*^H(R_{\mathbb{Q}} \wedge \widehat{c} X)$. We prove in Lemma \ref{lem:univpropsq} that if one has any rationalisation of the sphere -- a rational equivalence $f : S \to X$ where $X$ is a spectrum with $\pi_n^H(X)$ rational for all $n$ and $H$ -- then $S^0_{\mathcal{M}} \mathbb{Q}$ and $X$ are weakly equivalent. The result below is \cite[Chapter IV, Theorem 6.3]{mm02}, the proof of which is an adaptation of the material in \cite[Chapter VIII]{EKMM97}. \begin{thm}\label{thm:GSlocal} Let $E$ be a cofibrant spectrum or a cofibrant based $G$-space. Then $G\mathcal{M}$ has an $E$-model structure\index{E-model structure@$E$-model structure} whose weak equivalences are the $E$-equivalences and whose $E$-cofibrations are the cofibrations of $G\mathcal{M}$. The $E$-fibrant objects are precisely the $E$-local objects and $E$-fibrant approximation constructs a Bousfield localisation $f_X : X \to \widehat{f}_E X$ of $X$ at $E$. The notation for the $E$-model structure on the underlying category of $G\mathcal{M}$ is $L_E G\mathcal{M}$ or $G\mathcal{M}_E$\index{G MM@$G\mathcal{M}_E$}.
\end{thm} The categories $L_E G\mathcal{M}$ are cofibrantly generated model categories; this is implied by the proof of \cite[Chapter VIII, Theorem 1.1]{EKMM97}. Let $c$ be a fixed infinite cardinal that is at least the cardinality of $E^*(S)$. Then define $\mathscr{T}$, a test set for $E$-fibrations, to consist of all inclusions of cell complexes $X \to Y$ such that the cardinality of the set of cells of $Y$ is less than or equal to $c$. Hence the domains of these maps are $\kappa$-small, where $\kappa$ is the least cardinal greater than $c$. Thus if we let $I$ be the set of generating cofibrations for $G\mathcal{M}$, then we can take $I$ and $\mathscr{T}$ as sets of generating cofibrations and generating acyclic cofibrations for $L_E G\mathcal{M}$. \begin{lem}\label{lem:ratequivs} For a map $g : X \to Y$ the following are equivalent: \begin{enumerate} \item $g : X \to Y $ is an $S^0_{\mathcal{M}} \mathbb{Q}$-equivalence. \item $g_*^H : \pi_*(X^H) \otimes \mathbb{Q} \to \pi_*(Y^H) \otimes \mathbb{Q}$ is an isomorphism for all $H$. \item $g_*^H : H_*(X^H; \mathbb{Q}) \to H_*(Y^H; \mathbb{Q})$ is an isomorphism for all $H$. \end{enumerate} \end{lem} \begin{pf} We have shown in Proposition \ref{prop:rathomgps} that the first two conditions are equivalent. The last two statements are equivalent since the Hurewicz map induces an isomorphism $\pi_*(A) \otimes \mathbb{Q} \to H_*(A;\mathbb{Q})$ for any non-equivariant spectrum $A$. \end{pf} \begin{defn} The model category of rational $G$-spectra is defined to be $L_{S^0_{\mathcal{M}} \mathbb{Q}} G\mathcal{M}$, which we write as $G\mathcal{M}_{\mathbb{Q}}$. Since the $S^0_{\mathcal{M}} \mathbb{Q}$-equivalences are precisely the rational homotopy isomorphisms, we call the $S^0_{\mathcal{M}} \mathbb{Q}$-equivalences \textbf{rational equivalences} or \textbf{$\pi_*^{\mathbb{Q}}$-isomorphisms}.
The set of rational homotopy classes of maps from $X$ to $Y$ will be written $[X,Y]^G_{\mathbb{Q}}$ and we will write $\widehat{f}_{\mathbb{Q}}$ for fibrant replacement in the localised category. \end{defn} The lemma above proves that our model structure is independent of our choice of rational sphere spectrum. We now prove that $G\mathcal{M}_{\mathbb{Q}}$ is a right proper model category, for which we need the following. \begin{lem}\label{lem:ratLES} For any map $f : X \to Y$ of $G$-prespectra and any $H \subset G$, there are natural long exact sequences $$\xymatrix@C-0.9cm@R-0.6cm{ \dots \ar[r] & \pi_q^H(Ff) \otimes \mathbb{Q} \ar[r] & \pi_q^H(X) \otimes \mathbb{Q} \ar[r] & \pi_q^H(Y) \otimes \mathbb{Q} \ar[r] & \pi_{q-1}^H(Ff) \otimes \mathbb{Q} \ar[r] & \dots, \\ \dots \ar[r] & \pi_q^H(X) \otimes \mathbb{Q} \ar[r] & \pi_q^H(Y) \otimes \mathbb{Q} \ar[r] & \pi_q^H(Cf) \otimes \mathbb{Q} \ar[r] & \pi_{q-1}^H(X) \otimes \mathbb{Q} \ar[r] & \dots }$$ and the natural map $\nu : Ff \to \Omega Cf$ is a $\pi_*$-isomorphism. \end{lem} \begin{pf} By \cite[Chapter IV, Remark 2.8]{mm02} we have long exact sequences as above, but without needing to tensor with $\mathbb{Q}$. Since $\mathbb{Q}$ is flat, tensoring with it preserves exact sequences, hence the result follows. \end{pf} \begin{lem}\label{lem:rightproperrational} The category $G\mathcal{M}_{\mathbb{Q}}$ is right proper. \end{lem} \begin{pf} Following the proof of \cite[Lemma 9.10]{mmss01} one shows that a stronger statement holds: in a pullback diagram as below, if $\beta$ is a levelwise fibration of $G$-spaces then $r$ is a $\pi_*^{\mathbb{Q}}$-isomorphism.
$$\xymatrix@!C{ W \ar[r]^{\delta} \ar[d]_{r} & X \ar[d]^{\sim_{\mathbb{Q}}} \\ Y \ar[r]_\beta & Z \ar@{}[ul]|\lrcorner|(0.52){\cdot \hskip 2.5pt } }$$ The only point of difference is that in the last step of the proof one needs to use the long exact sequence of \emph{rational} homotopy groups of a fibration. \end{pf} Since our localisation is of a particularly nice form, we are able to give the following interpretation of maps in $\ho G\mathcal{M}_{\mathbb{Q}}$. \begin{thm}\label{thm:rathomotopymaps} For any $X$ and $Y$, $[X,Y]^G_{\mathbb{Q}}$ is a rational vector space. If $Z$ is an $S^0_{\mathcal{M}}\mathbb{Q}$-local object of $G\mathcal{M}$ then $Z$ has rational homotopy groups. There is a natural isomorphism $[X,Y]^G_{\mathbb{Q}} \cong [X \wedge S^0_{\mathcal{M}}\mathbb{Q},Y \wedge S^0_{\mathcal{M}}\mathbb{Q}]^G.$ \end{thm} \begin{pf} For each non-zero integer $n$ we have a self-map of $\widehat{c} S$ which represents multiplication by $n$ at the model category level; applying $(-) \wedge X$ we obtain a self-map of $\widehat{c} S \wedge X$. Since this map is an isomorphism of rational homotopy groups it induces an isomorphism $n : [X,Y]^G_{\mathbb{Q}} \to [X,Y]^G_{\mathbb{Q}}$, hence $[X,Y]^G_{\mathbb{Q}}$ is a rational vector space. The homotopy groups of $Z$ can be given in terms of $[\Sigma^p G/H_+, Z]^G$ for $p$ an integer and $H$ a subgroup of $G$. Since we have assumed that $Z$ is $S^0_{\mathcal{M}}\mathbb{Q}$-local, this homotopy group is isomorphic to $[\Sigma^p G/H_+, Z]^G_{\mathbb{Q}}$, which we now know is a rational vector space. The map $Y \wedge S^0_{\mathcal{M}}\mathbb{Q} \to \widehat{f}_{\mathbb{Q}} (Y \wedge S^0_{\mathcal{M}}\mathbb{Q})$ is a $\pi_*^{\mathbb{Q}}$-isomorphism between objects with rational homotopy groups, hence it is a $\pi_*$-isomorphism. For any $G$-spectrum $X$, $X \wedge S^0_{\mathcal{M}}\mathbb{Q}$ is rationally equivalent to $X$.
Combining these we obtain isomorphisms as below. $$ \begin{array}{rcl} [X,Y]^G_{ \mathbb{Q} } & \cong & [X \wedge S^0_{ \mathcal{M} }{{ \mathbb{Q} }},Y \wedge S^0_{ \mathcal{M} }{{ \mathbb{Q} }}]^G_{ \mathbb{Q} } \\ & \cong & [X \wedge S^0_{ \mathcal{M} }{{ \mathbb{Q} }},{ \widehat{f} }_{ \mathbb{Q} } (Y \wedge S^0_{ \mathcal{M} }{{ \mathbb{Q} }})]^G \\ & \cong & [X \wedge S^0_{ \mathcal{M} }{{ \mathbb{Q} }},Y \wedge S^0_{ \mathcal{M} }{{ \mathbb{Q} }}]^G \end{array} $$ \end{pf} The following result gives a universal property for $S^0_{ \mathcal{M} } { \mathbb{Q} }$. Note that if the map $f$ is a rational equivalence, then the lift in the proof below is a rational equivalence between spectra with rational homotopy groups and hence is a weak equivalence. \begin{lem}\label{lem:univpropsq} Let $X$ be a spectrum with a map $f : S \to X$ such that $\pi_n^H(X)$ is a rational vector space for each subgroup $H$ and integer $n$. Then there is a map $S^0_{ \mathcal{M} } { \mathbb{Q} } \to X$ in $\ho G { \mathcal{M} }$ such that the composite $S \to S^0_{ \mathcal{M} } { \mathbb{Q} } \to X$ is equal to the map $f$ (in $\ho G { \mathcal{M} }$). \end{lem} \begin{pf} By Theorem \ref{thm:rathomotopymaps} the map ${ \widehat{c} } X \to { \widehat{f} }_{ \mathbb{Q} } { \widehat{c} } X$ is a weak equivalence. We then draw the diagram below and obtain a lifting $S^0_{ \mathcal{M} } { \mathbb{Q} } \to { \widehat{f} }_{ \mathbb{Q} } { \widehat{c} } X$ using the rational model structure on $G { \mathcal{M} }$. $$\xymatrix{ *+<0.5cm>{{ \widehat{c} } S} \ar[r] \ar@{>->}[d]_{\sim_{ \mathbb{Q} }} & { \widehat{c} } X \ar[r]^{\sim} & { \widehat{f} }_{ \mathbb{Q} } { \widehat{c} } X \ar@{->>}[d] \\ S^0_{ \mathcal{M} } { \mathbb{Q} } \ar[rr] & & {\ast} }$$ \end{pf} \section{Splitting Rational Equivariant Spectra}\label{sec:eqstabhom} We show how splittings of the category of rational equivariant spectra correspond to idempotents of the rational Burnside ring.
In particular, we know all such idempotents in the case of a finite group and we have the idempotent $e_1$, constructed in Lemma \ref{lem:Geidemfamily}, which is in many cases a non-trivial idempotent. For a compact Lie group $G$ the Burnside ring is defined to be $[S,S]^G$. The following result is tom Dieck's isomorphism, see \cite[Chapter V, Lemma 2.10]{lms86} which references \cite[Lemma 6]{tdieck77}. This result can be very useful when studying the Burnside ring of $G$. Recall that ${ \mathcal{F} } G$ is the set of subgroups of $G$ that have finite index in their normaliser. There is a topology on ${ \mathcal{F} } G$ (induced by the Hausdorff metric on subsets of $G$) such that the conjugation action of $G$ on ${ \mathcal{F} } G$ is continuous, see \cite[Chapter V, Lemma 2.8]{lms86}. \begin{lem}\label{lem:tdisom} Let $C( { \mathcal{F} } G / G, { \mathbb{Q} })$ denote the ring of continuous maps from the orbit space ${ \mathcal{F} } G / G$ to ${ \mathbb{Q} }$, where ${ \mathbb{Q} }$ is considered as a topological space with the discrete topology. The map $[S,S]^G \to C( { \mathcal{F} } G / G, { \mathbb{Q} })$ which takes $f$ to $(H) \mapsto \deg(f^H)$ induces an isomorphism of rings $[S,S]^G \otimes { \mathbb{Q} } \to C( { \mathcal{F} } G / G, { \mathbb{Q} })$. \end{lem} In particular, for a finite group $G$, this specifies an isomorphism $[S,S]^G \otimes { \mathbb{Q} } \to \prod_{(H) \leqslant G} { \mathbb{Q} }$. Let $e_H \in [S,S]^G \otimes { \mathbb{Q} }$ be the idempotent corresponding to projection onto factor $(H)$, then we have a finite orthogonal decomposition of $\id_S$ given by the collection $\{ e_H \}$ as $H$ runs over the conjugacy classes of subgroups of $G$. We now give an isomorphism between the rational Burnside ring and self maps of $S$ in $\ho G { \mathcal{M} }_{ \mathbb{Q} }$. 
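To make the finite-group case concrete, the sketch below computes these idempotents for $G = C_2$ from its table of marks. This is our own illustration of tom Dieck's isomorphism in the smallest non-trivial case; the matrix conventions and function names are ours, not taken from the sources cited above.

```python
from fractions import Fraction

# Table of marks for G = C_2.  Rows index the basis [G/e], [G/G] of the
# Burnside ring A(G); columns index the subgroups e, C_2; the (i,j) entry
# counts the fixed points of the j-th subgroup on the i-th basis G-set.
marks = [[Fraction(2), Fraction(0)],   # [C_2/e]:  2 points fixed by e, 0 by C_2
         [Fraction(1), Fraction(1)]]   # [C_2/C_2]: 1 point, fixed by everything

def virtual_set_with_marks(b):
    """Solve marks^T x = b over Q: the virtual G-set x0*[G/e] + x1*[G/G]
    whose fixed-point (mark) vector is b.  Hand-rolled 2x2 Cramer solve."""
    (a, c), (p, q) = marks  # equations: a*x0 + p*x1 = b0,  c*x0 + q*x1 = b1
    det = a * q - p * c
    x0 = (b[0] * q - b[1] * p) / det
    x1 = (a * b[1] - c * b[0]) / det
    return [x0, x1]

# Under tom Dieck's isomorphism A(C_2) ⊗ Q ≅ Q × Q, the idempotent e_(H)
# corresponds to the indicator function of the conjugacy class (H).
e_triv = virtual_set_with_marks([Fraction(1), Fraction(0)])   # e_{(e)} = (1/2)[G/e]
e_whole = virtual_set_with_marks([Fraction(0), Fraction(1)])  # e_{(C_2)} = [G/G] - (1/2)[G/e]

print(e_triv, e_whole)
# The idempotents give an orthogonal decomposition of the identity [G/G]:
assert [x + y for x, y in zip(e_triv, e_whole)] == [Fraction(0), Fraction(1)]
```

The same linear-algebra recipe works for any finite group once its table of marks is known: the mark matrix is invertible over the rationals, so each indicator vector pulls back to a unique rational idempotent.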
\begin{prop} There is a ring isomorphism $[S,S]^G \otimes { \mathbb{Q} } \to [S,S]^G_{ \mathbb{Q} }$ induced by $\id : G { \mathcal{M} } \to G { \mathcal{M} }_{ \mathbb{Q} }$. \end{prop} \begin{pf} The identity functor induces a ring map $[S,S]^G \to [S,S]^G_{ \mathbb{Q} }$ and since the right-hand side is a rational vector space this map factors through $[S,S]^G \otimes { \mathbb{Q} }$, inducing the desired map of rings. That this map is an isomorphism follows from the isomorphisms: $[S, S]^G \otimes { \mathbb{Q} } \cong [S, S^0_{ \mathcal{M} } { \mathbb{Q} }]^G$, $[S, S^0_{ \mathcal{M} } { \mathbb{Q} }]^G \cong [S, { \widehat{f} }_{ \mathbb{Q} } S]^G$ and $[S, { \widehat{f} }_{ \mathbb{Q} } S]^G \cong [S, S]^G_{ \mathbb{Q} }$. The universal property of $S^0_{ \mathcal{M} } { \mathbb{Q} }$ provides the second isomorphism and ensures that the composite of the above maps is equal to the specified map of rings. \end{pf} \begin{cor}\label{cor:Gspecsplit} If $e$ is an idempotent of the rational Burnside ring of $G$, then the adjunction below is a strong symmetric monoidal Quillen equivalence. $$\Delta : G { \mathcal{M} }_{ \mathbb{Q} } \overrightarrow{\longleftarrow} L_{e S} G { \mathcal{M} }_{ \mathbb{Q} } \times L_{(1-e) S} G { \mathcal{M} }_{ \mathbb{Q} } : \prod$$ \end{cor} \begin{cor}\label{cor:finG} The category of rational $G$-spectra (for finite $G$) splits into the product of the localisations $L_{e_H S} { G \mathscr{I} \mathscr{S} }_{ \mathbb{Q} }$ as $(H)$ runs over the conjugacy classes of subgroups of $G$. \end{cor} At the homotopy level this result can be found in \cite[Appendix A]{gremay95}. Note that the two localisations of $G$-spectra that we have used: $L_{S^0_{ \mathcal{M} } { \mathbb{Q} }} G { \mathcal{M} }$ and $L_{eS} L_{S^0_{ \mathcal{M} } { \mathbb{Q} }} G { \mathcal{M} }$ share many of the same properties. This is because they are designed to invert elements of $[S,S]^G$ and $[S,S]_{ \mathbb{Q} }^G$ respectively.
The first is designed to invert the primes and the second inverts the idempotent $e$. \begin{lem}\label{lem:rightproperfamily} For $e$ an idempotent of $[S,S]^G \otimes { \mathbb{Q} }$ the category $L_{eS} G { \mathcal{M} }$ is right proper. \end{lem} \begin{pf} Let $e \in [S,S]^G \otimes { \mathbb{Q} }$ be an idempotent; then for any exact sequence of $[S,S]^G \otimes { \mathbb{Q} }$-modules $ \dots \to M_i \to M_{i-1} \to \dots $, the sequence $ \dots \to e M_i \to e M_{i-1} \to \dots $ is exact. Right properness then follows from the proof of Lemma \ref{lem:rightproperrational} by applying $e$ to the long exact sequence of rational homotopy groups of a fibration. \end{pf} We now give a general example of an idempotent of the Burnside ring. This idempotent is non-trivial in many cases, such as when $G=O(2)$, the group of two-by-two orthogonal matrices. This idempotent was used to study rational $O(2)$-spectra in \cite{gre98a} and \cite[Part III]{barnes}. \begin{lem}\label{lem:Geidemfamily} Let $G$ be a compact Lie group and let $S$ denote the set of subgroups of the identity component of $G$ which have finite index in their normaliser. Then there is an idempotent $e_1 \in [S,S]^G \otimes { \mathbb{Q} } \cong C( { \mathcal{F} } G / G, { \mathbb{Q} })$ given by the map which sends $(H)$ to 1 if $H \in S$ and zero otherwise. \end{lem} \begin{pf} Let $G_1$ denote the identity component of $G$ and recall that since $G$ is compact $F = G /G_1$ is finite. Take $H \in S$; by \cite[Chapter II, Corollary 5.6]{bred} we know that if $K \in { \mathcal{F} } G$ is in some sufficiently small neighbourhood of $H$ in the space ${ \mathcal{F} } G$, then $K$ is subconjugate to $H$ and so $K$ is a subgroup of $G_1$. It follows that $S$ is open in ${ \mathcal{F} } G/G$. Now take $(K)$ to be in $({ \mathcal{F} } G/G) \setminus S$, so there is a $g \in G \setminus G_1$ such that $K \cap gG_1$ is non-empty.
Then any $L \in { \mathcal{F} } G$ that is sufficiently close to $K$ also has a non-trivial intersection with $gG_1$, so $L$ is not a subgroup of $G_1$; it follows that $S$ is also closed. Hence $e_1$, the characteristic function of $S$, is a continuous map ${ \mathcal{F} } G /G \to { \mathbb{Q} }$. Since $e_1$ takes only the values 0 and 1, it is an idempotent. \end{pf} Let ${ \mathscr{F} }$ be the set of subgroups of $G_1$; then it can be shown that $e_1 S$ is weakly equivalent to ${E { \mathscr{F} }}_+$ (the universal space for a family). One can then use the results of \cite[Chapter IV, Section 6]{mm02} to obtain a better understanding of $L_{e_1 S} G { \mathcal{M} }_{ \mathbb{Q} }$ and $L_{(1-e_1)S} G { \mathcal{M} }_{ \mathbb{Q} }$. \section{Modules and Bimodules}\label{sec:modbimod} We give two general examples of where our splitting result can be applied. Choose a monoidal model category of spectra, such as symmetric, orthogonal or EKMM spectra (this could even be $G$-equivariant for the last two versions) and call it ${ \mathscr{S} }$. For $R$ a ring spectrum we consider splittings of the model category of $R$-$R$-bimodules; this is a monoidal model category which is not (in general) symmetric. We let $[-,-]^{(R,R)}$ denote maps in the homotopy category of $R$-$R$-bimodules. Our second example considers the case of $R$-modules, when $R$ is not commutative. Although $R \leftmod$ is not a monoidal model category we can still obtain splittings of the model category by considering idempotents of $[R,R]^{(R,R)}$. We return to rational equivariant spectra at the end of this section and create a commutative ring spectrum $S_{ \mathbb{Q} }$ such that $S_{ \mathbb{Q} } \leftmod$ is Quillen equivalent to $G { \mathcal{M} }_{ \mathbb{Q} }$ (Theorem \ref{thm:localisedtomodules}). We then show that splittings of $S_{ \mathbb{Q} } \leftmod$ correspond to splittings of $G { \mathcal{M} }_{ \mathbb{Q} }$.
We first introduce some results from \cite{EKMM97}; these can be adapted to any of the categories of spectra mentioned above. For $R$ an algebra, there is a notion of a cell $R$-module, see \cite[Chapter III, Definition 2.1]{EKMM97}; a cell $R$-module is a special kind of cofibrant module. We can always replace an $R$-module $M$ by a weakly equivalent cell $R$-module $\Gamma M$ via \cite[Chapter III, Theorem 2.10]{EKMM97}. If $E$ is a right $R$-module then we have a spectrum $E \wedge_R X$ for any left $R$-module $X$. It is defined as the coequaliser of the diagram $E \wedge R \wedge X \overrightarrow{\longrightarrow} E \wedge X$ where the maps are given by the action of $R$ on $E$ and the action of $R$ on $X$. Thus we have the notion of an $E^R$-equivalence of $R$-modules: a map $f$ in $R \leftmod$ such that $E \wedge_R f$ is a weak equivalence of underlying spectra. Let $E$ be a cell right $R$-module; then by \cite[Chapter VIII, Theorem 1.1]{EKMM97}, there is a model structure $L_E R \leftmod$ on the category of $R$-modules with weak equivalences the $E^R$-equivalences and cofibrations given by the cofibrations for $R \leftmod$. We also note that if $X$ is a cofibrant $R$-module, the functor $- \wedge_R X$ preserves weak equivalences (\cite[Chapter III, Theorem 3.8]{EKMM97}). \begin{prop}\label{prop:splitbimod} For $R$ a ring spectrum in ${ \mathscr{S} }$, whose underlying spectrum is cofibrant, an idempotent of $\text{THH}^0 (R):=[R,R]^{(R,R)}$ splits the category of $R$-$R$-bimodules. \end{prop} \begin{pf} We can identify the category of $R$-$R$-bimodules with the category of $R \wedge R^{op}$-modules. The ring spectrum $R^{op}$ has the same underlying spectrum as $R$ but the multiplication is given by $R \wedge R \overset{\tau}{\to} R \wedge R \overset{\mu}{\to} R$ where $\tau$ is the symmetry isomorphism of $\wedge$ in ${ \mathscr{S} }$ and $\mu$ is the multiplication of $R$.
We have assumed that $R$ is cofibrant to ensure that $R \wedge R^{op}$ is weakly equivalent to $R \wedge^L R^{op}$, thus $[X,Y]^{(R,R)} \cong [X,Y]^{R \wedge^L R^{op}}$. For a cell $R$-$R$-bimodule $E$ we have an $E$-local model structure on the category of $R$-$R$-bimodules. If $M$ is a cofibrant $R$-$R$-bimodule, then an $M$-equivalence is the same as a $\Gamma M$-equivalence and so we can localise at any cofibrant bimodule by localising at its cellular replacement. We can now apply Theorem \ref{thm:generalsplitting} to complete the proof. \end{pf} We now turn to left modules over a ring spectrum; here we can obtain a splitting result when $R$ is not commutative. In this case $R \leftmod$ does not have a monoidal product and so $[R,R]^R$ does not act on $[X,Y]^R$. Instead we will use the action of $[R,R]^{(R,R)}$ on $[X,Y]^R$ to split the category. Throughout we assume that $R$ is cofibrant as a spectrum. We return to algebra briefly to offer some context for this result. If $R$ were an arbitrary ring, then for a \emph{central} idempotent $e \in R$ (so $er=re$ for any $r \in R$), one can form new rings $eR$ and $(1-e)R$ such that $R \leftmod$ is equivalent to $eR \leftmod \times (1-e)R \leftmod$. Furthermore, for any $R$-module $M$, there is a natural isomorphism $M \cong eM \oplus (1-e)M$. A central idempotent is precisely the same data as an $R$-$R$-bimodule map from $R$ to itself. Hence, the proposition below is the ring spectrum version of this algebraic result. \begin{prop}\label{prop:splitrmod} Let $R \in { \mathscr{S} }$ be a ring spectrum whose underlying spectrum is cofibrant and let $e$ be an idempotent of $[R,R]^{(R,R)}$. Then there is a Quillen equivalence $$\Delta : R \leftmod \overrightarrow{\longleftarrow} L_{\Gamma e R} R \leftmod \times L_{\Gamma (1-e) R} R \leftmod : \prod. $$ \end{prop} \begin{pf} We construct $e R$ in the category of $R$-$R$-bimodules and then consider it as a right $R$-module.
Since $R$ is cofibrant, it follows that $eR$ is cofibrant as a right $R$-module (see below for details). We localise the category of $R$-modules at the cell right $R$-module $\Gamma eR$ and note that the weak equivalences of $L_{\Gamma e R} R \leftmod$ are the $(eR)^R$-equivalences. We can then follow the proof of Theorem \ref{thm:generalsplitting}. \end{pf} There is a forgetful functor $U$ from $R$-$R$-bimodules to $R \leftmod$; this is a right Quillen functor with left adjoint $M \mapsto M \wedge R$. Take $f: A \to B$ a generating (acyclic) cofibration of ${ \mathscr{S} }$. Then $g= \id_R \wedge f \wedge \id_R$ is a generating (acyclic) cofibration for the category of $R$-$R$-bimodules. Since $f \wedge \id_R$ is a cofibration of spectra, it follows that $g$ is a cofibration of left $R$-modules, hence $U$ is a left Quillen functor. A slight alteration of this argument shows that a cofibrant $R$-$R$-bimodule is cofibrant as a right $R$-module. The functor $U$ induces a ring map $[R,R]^{(R,R)} \to [R,R]^R \cong \pi_0(R)$. If $R$ is commutative, every $R$-module can be considered as an $R$-$R$-bimodule; this defines a right Quillen functor $I$. Let $M$ be an $R$-$R$-bimodule with actions $\nu$ and $\nu'$. Then define $SM$ as the coequaliser: $\xymatrix@C+0.3cm{ R \wedge M \ar@<+0.2ex>[r]^(0.6){\nu} \ar@<-0.2ex>[r]_(0.6){\nu' \circ \tau} & M \ar[r] & SM. }$ It follows that $S$ is the left adjoint of $I$ and that $UI$ is the identity functor of $R \leftmod$. These functors give a retraction: $[R,R]^R \overset{I}{\to} [R,R]^{(R,R)} \overset{U}{\to} [R,R]^R$. Thus in the commutative case it is no restriction to consider an idempotent $e \in [R,R]^{(R,R)}$. The Quillen equivalence above would then follow from our main result and would be a strong symmetric monoidal Quillen equivalence.
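The classical module-splitting statement recalled above, that a central idempotent $e \in R$ decomposes every $R$-module as $M \cong eM \oplus (1-e)M$, is easy to verify in a toy case. The following sketch is our own illustration, with $R = M = \mathbb{Z}/6\mathbb{Z}$ and the central idempotent $e = 3$; it is not taken from any of the cited sources.

```python
# A central idempotent e in a ring R splits every R-module M as eM ⊕ (1-e)M.
# Toy check in R = M = Z/6Z with e = 3, since 3*3 = 9 ≡ 3 (mod 6).
R = range(6)
e = 3
assert (e * e) % 6 == e                       # e is idempotent
f = (1 - e) % 6                               # complementary idempotent, 1 - e = 4
assert (f * f) % 6 == f and (e * f) % 6 == 0  # orthogonal idempotents

eM = sorted({(e * m) % 6 for m in R})         # eM     = {0, 3}
fM = sorted({(f * m) % 6 for m in R})         # (1-e)M = {0, 2, 4}
print(eM, fM)

# Every m decomposes as m = em + (1-e)m, and the sizes multiply up,
# so the decomposition is a direct sum: M ≅ eM ⊕ (1-e)M.
for m in R:
    assert ((e * m) + (f * m)) % 6 == m
assert len(eM) * len(fM) == len(R)
```

The ring-spectrum statement replaces the element $e$ by a bimodule self-map of $R$, which is exactly the data a central idempotent provides in the algebraic setting.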
For $E$ a cofibrant spectrum and $R$ a commutative ring spectrum, the $L_{E \wedge R}$-model structure on the category of $R$-modules has weak equivalences those maps $f$ which are $E$-equivalences of underlying spectra. Thus $L_{E \wedge R} R \leftmod$ is precisely the model category of $R$-modules in $L_E { \mathscr{S} }$. One important source of idempotents in $\pi_0 (R)$ (or $[R,R]^{(R,R)}$) is the image of idempotents in $\pi_0(S)$ via the unit map $S \to R$. We return to our primary example of rational equivariant EKMM-spectra to give an example of this. To obtain our commutative ring spectrum we use \cite[Chapter VIII, Theorem 2.2]{EKMM97}; we give the statement that we will need below. Here we assume that $E$ is a cell spectrum (hence cofibrant). \begin{thm}\label{thm:algebralocalise} For a cell commutative $R$-algebra $A$, the localisation $\lambda : A \to A_E$ can be constructed as the inclusion of a subcomplex in a cell commutative $R$-algebra $A_E$. In particular $A \to A_E$ is an $E$-equivalence and a cofibration of commutative ring spectra for any cell commutative $R$-algebra $A$. \end{thm} \begin{defn}\label{def:S_qq} Let $S_{ \mathbb{Q} }$\index{Sa @$S_{ \mathbb{Q} }$} be the commutative ring spectrum constructed as the $S^0_{ \mathcal{M} }{{ \mathbb{Q} }}$-localisation of $S$. \end{defn} It follows immediately that the unit $\eta : S \to S_{ \mathbb{Q} }$ is an $S^0_{ \mathcal{M} } { \mathbb{Q} }$-equivalence. Thus, by our universal property for $S^0_{ \mathcal{M} } { \mathbb{Q} }$ (Lemma \ref{lem:univpropsq}) and the fact that $S_{ \mathbb{Q} }$ has rational homotopy groups, we have the first statement of the following result. The rest of the lemma follows by a standard argument; see \cite[13.1]{adams}. \begin{lem}\label{lem:compareSQtoRQ} There is a weak equivalence $S^0_{ \mathcal{M} }{{ \mathbb{Q} }} \to S_{ \mathbb{Q} }$.
Hence all $S_{ \mathbb{Q} }$-modules are $S^0_{ \mathcal{M} }{{ \mathbb{Q} }}$-local and so all $S_{ \mathbb{Q} }$-modules have rational homotopy groups. \end{lem} \begin{thm}\label{thm:localisedtomodules} There is a strong symmetric monoidal Quillen equivalence: $$S_{ \mathbb{Q} } \wedge (-) : G { \mathcal{M} }_{ \mathbb{Q} } \overrightarrow{\longleftarrow} S_{ \mathbb{Q} } \leftmod : U.$$ \end{thm} \begin{pf} The above functors form a strong monoidal Quillen pair (with the usual structure on $G { \mathcal{M} } $). Since cofibrations are unaffected by localisation, $S_{ \mathbb{Q} } \wedge (-) : G { \mathcal{M} }_{ \mathbb{Q} } \to S_{ \mathbb{Q} } \leftmod$ preserves cofibrations. Consider an acyclic rational cofibration $X \to Y$; we know that $S_{ \mathbb{Q} } \wedge (-)$ applied to this gives a cofibration, so we must check that it is also a $\pi_*$-isomorphism. We see that $X \wedge S^0_{ \mathcal{M} }{{ \mathbb{Q} }} \to Y \wedge S^0_{ \mathcal{M} }{{ \mathbb{Q} }}$ is a cofibration and a $\pi_*$-isomorphism, so in turn $X \wedge S^0_{ \mathcal{M} }{{ \mathbb{Q} }} \wedge S_{ \mathbb{Q} } \to Y \wedge S^0_{ \mathcal{M} }{{ \mathbb{Q} }} \wedge S_{ \mathbb{Q} }$ is a $\pi_*$-isomorphism (by the monoid axiom). This proves that $X \wedge S_{ \mathbb{Q} } \to Y \wedge S_{ \mathbb{Q} }$ is a $\pi^{ \mathbb{Q} }_*$-isomorphism between $S_{ \mathbb{Q} }$-modules, which we know have rational homotopy groups, and thus this map is in fact a $\pi_*$-isomorphism. Hence we have a Quillen pair; now we prove that it is a Quillen equivalence. The right adjoint preserves and detects all weak equivalences. The map $X \to S_{ \mathbb{Q} } \wedge X$ is a rational equivalence for all cofibrant $S$-modules $X$. This follows since smashing with a cofibrant object will preserve the $\pi_*^{ \mathbb{Q} }$-isomorphism $S \to S_{ \mathbb{Q} }$.
\end{pf} It follows that we have an isomorphism of rings $[S,S]^G_{ \mathbb{Q} } \to [S_{ \mathbb{Q} }, S_{ \mathbb{Q} }]^{S_{ \mathbb{Q} } \leftmod}$. Hence for an idempotent $e$ of the rational Burnside ring we can split $S_{ \mathbb{Q} } \leftmod$ using the objects $e S \wedge S_{ \mathbb{Q} }$ and $(1-e)S \wedge S_{ \mathbb{Q} }$. We can then apply Theorem \ref{thm:locfuncs} to see that the strong symmetric monoidal adjunction below is a Quillen equivalence, hence we have a comparison between our splitting of $S_{ \mathbb{Q} } \leftmod$ and Corollary \ref{cor:Gspecsplit}. $$S_{ \mathbb{Q} } \wedge (-): L_{eS} G { \mathcal{M} }_{ \mathbb{Q} } \overrightarrow{\longleftarrow} L_{(\Gamma eS) \wedge S_{ \mathbb{Q} }} S_{ \mathbb{Q} } \leftmod :U$$ We briefly wish to mention that, following the construction of $S_{ \mathbb{Q} }$, one can make $R_{e}$ for any commutative ring spectrum $R$ and idempotent $e \in \pi_0(R)$ by localising $R$ at $\Gamma eR$. It follows that $R_e$ is weakly equivalent to $\Gamma eR$ and hence any $R_e$-module is $\Gamma eR$-local. Then, as in the $S_{ \mathbb{Q} }$ case, one can prove that extension and restriction of scalars along $R \to R_e$ induces a Quillen equivalence between $L_{\Gamma eR} R \leftmod$ and $R_e \leftmod$. This is a manifestation of \cite[Theorem 2]{wolb}. Hence we have a different statement of the splitting result: there is a Quillen equivalence $R \leftmod \overrightarrow{\longleftarrow} R_e \leftmod \times R_{1-e} \leftmod$, induced by extension and restriction of scalars. \end{document}
\begin{document} \textwidth 150mm \textheight 225mm \title{Some upper bounds for the signless Laplacian spectral radius of digraphs \thanks{ Supported by the National Natural Science Foundation of China (No.11171273).}} \author{{Weige Xi and Ligong Wang\footnote{Corresponding author.} }\\ {\small Department of Applied Mathematics, School of Science,}\\ {\small Northwestern Polytechnical University, Xi'an, Shaanxi 710072, P.R.China} \\{\small E-mail: [email protected], [email protected] }\\} \date{} \maketitle \begin{center} \begin{minipage}{120mm} \vskip 0.3cm \begin{center} {\small {\bf Abstract}} \end{center} {\small Let $G=(V(G), E(G))$ be a digraph without loops and multiarcs, where $V(G)=\{v_1,v_2,\ldots,v_n\}$ and $E(G)$ are the vertex set and the arc set of $G$, respectively. Let $d_i^{+}$ be the outdegree of the vertex $v_i$. Let $A(G)$ be the adjacency matrix of $G$ and $D(G)=\textrm{diag}(d_1^{+},d_2^{+},\ldots,d_n^{+})$ be the diagonal matrix with outdegrees of the vertices of $G$. Then we call $Q(G)=D(G)+A(G)$ the signless Laplacian matrix of $G$. The spectral radius of $Q(G)$ is called the signless Laplacian spectral radius of $G$, denoted by $q(G)$. In this paper, some upper bounds for $q(G)$ are obtained. Furthermore, some upper bounds on $q(G)$ involving outdegrees and the average 2-outdegrees of the vertices of $G$ are also derived. \vskip 0.1in \noindent {\bf Key Words}: \ Digraph, Signless Laplacian spectral radius, Upper bounds. \vskip 0.1in \noindent {\bf AMS Subject Classification (2000)}: \ 05C50 15A18} \end{minipage} \end{center} \section{Introduction } \label{sec:ch6-introduction} Let $G=(V(G), E(G))$ be a digraph without loops and multiarcs, where $V(G)=\{v_1,v_2,$ $\ldots, v_n\}$ and $E(G)$ are the vertex set and the arc set of $G$, respectively. If $(v_i, v_j)$ is an arc of $G$, then $v_i$ is called the initial vertex of this arc and $v_j$ is called the terminal vertex of this arc.
For any vertex $v_i$ of $G$, we denote by $N_i^{+}=N_{v_i}^{+}(G)=\{v_j : (v_i,v_j)\in E(G) \}$ and $N_i^{-}=N_{v_i}^{-}(G)=\{v_j : (v_j,v_i)\in E(G) \}$ the sets of out-neighbors and in-neighbors of $v_i$, respectively. Let $d_i^{+}=|N_i^{+}|$ denote the outdegree of the vertex $v_i$ and $d_i^{-}=|N_i^{-}|$ denote the indegree of the vertex $v_i$ in the digraph $G$. The maximum vertex outdegree is denoted by $\Delta^+$, and the minimum outdegree by $\delta^+$. If $\delta^+=\Delta^+$, then $G$ is a regular digraph. Let $t_i^+=\sum\limits_{v_j\in N_i^{+}}d_j^{+}$ be the 2-outdegree of the vertex $v_i$ and $m_i^{+}=\frac{t_i^{+}}{d_i^{+}}$ the average 2-outdegree of the vertex $v_i$. A digraph is strongly connected if for every pair of vertices $v_i,v_j\in V(G)$, there exists a directed path from $v_i$ to $v_j$ and a directed path from $v_j$ to $v_i$. In this paper, we consider finite digraphs without loops and multiarcs, which have at least one arc. For a digraph $G$, let $A(G)=(a_{ij})$ denote the adjacency matrix of $G$, where $a_{ij}=1$ if $(v_i,v_j)\in E(G)$ and $a_{ij}=0$ otherwise. Let $D(G)=\textrm{diag}(d_1^{+},d_2^{+},\ldots,d_n^{+})$ be the diagonal matrix of outdegrees of the vertices of $G$ and $Q(G)=D(G)+A(G)$ the signless Laplacian matrix of $G$. Note that the signless Laplacian matrix of an undirected graph $D$ can be treated as the signless Laplacian matrix of the digraph $G'$, where $G'$ is obtained from $D$ by replacing each edge with a pair of oppositely directed arcs joining the same pair of vertices. Therefore, results on the signless Laplacian matrix of a digraph are more general than their undirected counterparts. The eigenvalues of $Q(G)$ are called the signless Laplacian eigenvalues of $G$, denoted by $q_1,q_2,\ldots,q_n$. In general $Q(G)$ is not symmetric, so its eigenvalues may be complex numbers. We usually assume that $|q_1|\geq|q_2|\geq\ldots\geq|q_n|$.
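As a quick numerical illustration of these definitions (our own example, not taken from the paper), the sketch below assembles $Q(G)=D(G)+A(G)$ for a small strongly connected digraph and estimates $q(G)$ by power iteration, which converges here because $Q(G)$ is irreducible with a positive diagonal, hence primitive.

```python
def signless_laplacian(n, arcs):
    """Q(G) = D(G) + A(G), with D the diagonal matrix of outdegrees."""
    Q = [[0] * n for _ in range(n)]
    for i, j in arcs:          # adjacency part: a_ij = 1 for each arc (v_i, v_j)
        Q[i][j] = 1
    for i in range(n):         # diagonal part: the outdegree d_i^+
        Q[i][i] = sum(Q[i])    # no loops, so the diagonal entry is still 0 here
    return Q

def spectral_radius(M, iters=500):
    """Power iteration: valid for the Perron root of an irreducible
    nonnegative matrix with positive diagonal (hence primitive)."""
    n = len(M)
    x, r = [1.0] * n, 0.0
    for _ in range(iters):
        y = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]
        r = max(y)
        x = [v / r for v in y]
    return r

# A strongly connected digraph on vertices 0, 1, 2.
arcs = [(0, 1), (0, 2), (1, 2), (2, 0)]
Q = signless_laplacian(3, arcs)        # [[2,1,1],[0,1,1],[1,0,1]]
q = spectral_radius(Q)                 # q(G) ≈ 2.84

# Sanity check against the arc bound q(G) <= max{d_i^+ + d_j^+ : (v_i,v_j) an arc}.
outdeg = [Q[i][i] for i in range(3)]
bound = max(outdeg[i] + outdeg[j] for i, j in arcs)
assert q <= bound + 1e-9               # here the bound is 3
```

For this digraph the outdegrees are $(2,1,1)$, so the bound evaluates to $3$ while the actual spectral radius is about $2.84$.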
The signless Laplacian spectral radius of $G$ is defined as $q(G)=|q_1|$, i.e., the largest absolute value of the signless Laplacian eigenvalues of $G$. Since $Q(G)$ is a nonnegative matrix, it follows from the Perron-Frobenius theorem that $q(G)=q_1$ is a real number. The Laplacian spectral radius and the signless Laplacian spectral radius of an undirected graph are well treated in the literature; see \cite{Wang,WYL,WeLi,Zhu} and \cite{ChWa,CTG,GDC,HaLu,HJZ,WeLi}, respectively. Recently, several papers have given lower or upper bounds for the spectral radius of a digraph; see \cite{RB,GDa,XuXu}. Now we consider the signless Laplacian spectral radius of a digraph $G$. For applications it is crucial to be able to compute, or at least estimate, $q(G)$ for a given digraph. In 2013, S.B. Bozkurt and D. Bozkurt in \cite{BoBo} obtained the following bounds for the signless Laplacian spectral radius of a digraph. \noindent\begin{equation}\label{eq:c1} q(G) \leq \max\{d_i^{+}+d_j^{+}: (v_i, v_j)\in E(G)\}. \end{equation} \noindent\begin{equation}\label{eq:c2} q(G) \leq \max\{d_i^{+}+m_i^{+}: v_i\in V(G)\}. \end{equation} \noindent\begin{equation}\label{eq:c3} q(G)\leq \max\bigg\{\frac{d_i^{+}+d_j^{+}+ \sqrt{(d_i^{+}-d_j^{+})^{2}+4m_i^{+}m_j^{+}}}{2}: (v_i, v_j)\in E(G)\bigg\}. \end{equation} \noindent\begin{equation}\label{eq:c4} q(G)\leq \max\bigg\{d_i^{+}+\sqrt{\sum _{v_j:(v_j, v_i)\in E(G)}d_j^{+}}: v_i\in V(G)\bigg\}. \end{equation} In 2014, Hong and You in \cite{HoYo} gave a sharp bound for the signless Laplacian spectral radius of a digraph: \noindent\begin{equation}\label{eq:c5} q(G)\leq \min_{1 \leq i \leq n}\bigg\{\frac{d_1^{+}+2d_i^{+}-1+ \sqrt{(2d_i^{+}-d_1^{+}+1)^{2}+8\sum\limits_{k=1}^{i-1}(d_k^{+}-d_i^{+})}}{2}\bigg\}. \end{equation} \noindent\begin{remark}\label{re:c1} Note that bounds \eqref{eq:c1}, \eqref{eq:c3} and \eqref{eq:c4} require $G$ to be strongly connected.
\end{remark} In this paper, we study the signless Laplacian spectral radius of a digraph $G$. We obtain some upper bounds for $q(G)$, and we also show that some upper bounds on $q(G)$ involving outdegrees and the average 2-outdegrees of the vertices of $G$ can be obtained from our bounds. \section{Preliminary Lemmas} \label{sec:1} In this section, we give the following lemmas, which will be used in what follows. \noindent\begin{lemma}\label{le:c1} (\cite{HoJo}) \ Let $M=(m_{ij})$ be an $n \times n$ nonnegative matrix with spectral radius $\rho(M)$, i.e., the largest eigenvalue of $M$, and let $R_i=R_i(M)$ be the $i$-th row sum of $M$, i.e., $R_i(M)=\sum\limits_{j=1}^n m_{ij} \ (1 \leq i\leq n)$. Then \begin{equation}\label{eq:ca} \min\{R_{i}(M):1 \leq i\leq n\}\leq \rho(M)\leq \max\{R_{i}(M):1 \leq i\leq n\}.\end{equation} Moreover, if $M$ is irreducible, then either equality holds in \eqref{eq:ca} if and only if $R_1=R_2=\ldots=R_n$. \end{lemma} \noindent\begin{lemma}\label{le:c2} (\cite{HoJo}) \ Let $M$ be an irreducible nonnegative matrix. Then $\rho(M)$ is an eigenvalue of $M$ and there is a positive vector $X$ such that $MX=\rho(M)X$. \end{lemma} \noindent\begin{lemma}\label{le:c6} (\cite{Li}) \ Let $A=(a_{ij})\in \mathbb{C}^{n\times n}$, $r_i=\sum\limits_{j\neq i}|a_{ij}|$ for each $i=1,2,\ldots,n$, $S_{ij}=\{z \in \mathbb{C}: |z-a_{ii}|\cdot|z-a_{jj}|\leq r_ir_j\}$ for all $i\neq j$. Also let $E(A)=\{(i,j) :a_{ij}\neq 0,1 \leq{i\neq j}\leq n\}$. If $A$ is irreducible, then all eigenvalues of $A$ are contained in the following region \begin{equation}\label{eq:ce} \Omega(A)=\bigcup_{(i,j) \in E(A)}S_{ij}. \end{equation} Furthermore, a boundary point $\lambda$ of \eqref{eq:ce} can be an eigenvalue of $A$ only if $\lambda$ lies on the boundary of each oval region $S_{ij}$ for $(i,j) \in E(A)$.
\end{lemma} \section{ Some upper bounds for the signless Laplacian spectral radius of digraphs} \label{sec:2} In this section, we present some upper bounds for the signless Laplacian spectral radius $q(G)$ of a digraph $G$ and also show that some bounds involving the outdegrees, the average 2-outdegrees, the maximum outdegree and the minimum outdegree of the vertices of $G$ with $n$ vertices and $m$ arcs can be obtained from our bounds. \noindent\begin{theorem}\label{th:c7} \ Let $G$ be a strongly connected digraph with $n\geq 3$ vertices, $m$ arcs, maximum outdegree $\Delta^+$ and minimum outdegree $\delta^{+}$. Then \begin{equation}\label{eq:c27} q(G)\leq \max\{\Delta^{+}+\delta^{+}-1+\frac{m-\delta^{+}(n-1)}{\Delta^{+}}, \delta^{+}+1+\frac{m-\delta^{+}(n-1)}{2} \}. \end{equation} Moreover, if $G(\neq\overset{\longrightarrow}{C_{n}})$ is a regular digraph or $G\cong \overset{\longleftrightarrow}{K}_{1,n-1}$, where $\overset{\longleftrightarrow}{K}_{1,n-1}$ denotes the digraph on $n$ vertices obtained by replacing each edge of the star $K_{1,n-1}$ with a pair of oppositely directed arcs, then equality holds in \eqref{eq:c27}. \end{theorem} \begin{proof} \ From \eqref{eq:c2}, we know that $q(G)\leq \max\{d_i^{+}+m_i^{+}: v_i\in V(G)\}.$ So we only need to prove that $\max\{d_i^{+}+m_i^{+}: v_i\in V(G)\}\leq \max\{\Delta^{+}+\delta^{+}-1+\frac{m-\delta^{+}(n-1)} {\Delta^{+}},\delta^{+}+1+\frac{m-\delta^{+}(n-1)}{2} \}$. Suppose $\max\{d_i^{+}+m_i^{+}: v_i\in V(G)\}$ occurs at a vertex $u$. Two cases arise: $d_u^{+}=1$ or $2\leq d_u^{+}\leq \Delta^{+}$. \noindent{\textbf{{Case 1.}}} \ $d_u^{+}=1.$ Suppose that $N_u^{+}=\{w\}.$ Since $m_u^{+}=d_w^{+}\leq \Delta^{+},$ we have $d_u^{+}+m_u^{+}\leq 1+\Delta^{+}$.
Since $\sum\limits_{v_i\in V(G)}d_i^{+}=m,$ let $d_j^{+}=\Delta^{+}$; then $\sum\limits_{i\neq j}d_i^{+}=m- \Delta^{+}\geq (n-1)\delta^{+}$, so $m-(n-1)\delta^{+}\geq \Delta^{+}$. Therefore $\delta^{+}-1+\frac{m-\delta^{+}(n-1)}{\Delta^{+}}\geq \delta^{+}-1+\frac{\Delta^{+}}{\Delta^{+}}=\delta^{+}\geq 1$. Thus $d_u^{+}+m_u^{+}\leq 1+\Delta^{+}\leq \Delta^{+}+\delta^{+}-1+\frac{m-\delta^{+}(n-1)}{\Delta^{+}}$, and the result follows. \noindent{\textbf{{Case 2.}}} \ $2\leq d_u^{+}\leq \Delta^{+}$. Note that $m-(n-1)\delta^{+}\geq d_u^{+}\geq 2$, and \begin{eqnarray*} m&=&\sum\limits_{v:(u,v)\in E(G)}d_v^{+}+\sum\limits_{ v:(u,v)\notin E(G)}d_v^{+}\\ &\geq &\sum\limits_{v:(u,v)\in E(G)}d_v^{+}+ d_u^{+}+(n-d_u^{+}-1)\delta^{+}, \end{eqnarray*} thus \begin{eqnarray*} \sum\limits_{v:(u,v)\in E(G)}d_v^{+}&\leq& m-d_u^{+}-(n-d_u^{+}-1)\delta^{+}\\ &=&m-(n-1)\delta^{+}+(\delta^{+}-1)d_u^{+}, \end{eqnarray*} and hence $$m_u^{+}=\frac{\sum\limits_{v:(u,v)\in E(G)}d_v^{+}}{d_u^{+}}\leq \frac{m-(n-1)\delta^{+}}{d_u^{+}}+\delta^{+}-1.$$ It follows that $m_u^{+}+d_u^{+}\leq d_u^{+}+\frac{m-(n-1)\delta^{+}}{d_u^{+}}+\delta^{+}-1$. Let $f(x)=x+\frac{m-(n-1)\delta^{+}}{x}+\delta^{+}-1$, where $x\in [2,\Delta^{+}].$ It is easy to see that $f'(x)=1-\frac{m-(n-1)\delta^{+}}{x^{2}}$. Let $a=m-(n-1)\delta^{+};$ then $\sqrt{a}$ is the unique positive root of $f'(x)=0.$ We consider three subcases. \noindent{\textbf{{Subcase 1.}}} \ $\sqrt{a}< 2.$ When $x\in [2,\Delta^{+}],$ since $f'(x)>0,$ we have $f(x)\leq f(\Delta^{+})$. \noindent{\textbf{{Subcase 2.}}} \ $2\leq\sqrt{a}\leq \Delta^{+}.$ Then $f'(x)<0$ for $x\in [2,\sqrt{a})$, and $f'(x)\geq 0$ for $x\in [\sqrt{a}, \Delta^{+}]$. Thus, $f(x)\leq \max\{f(2), f(\Delta^{+})\}$. \noindent{\textbf{{Subcase 3.}}} \ $\Delta^{+}<\sqrt{a}$.
When $x\in [2,\Delta^{+}]$, since $f'(x)<0$, we have $f(x)\leq f(2)$. Recall that $2\leq d_u^{+}\leq \Delta^{+}$, thus $$ m_u^{+}+d_u^{+}\leq \max\{f(2), f(\Delta^{+})\}$$ $$=\max\{\Delta^{+}+\delta^{+}-1+\frac{m-\delta^{+}(n-1)}{\Delta^{+}}, \delta^{+}+1+\frac{m-\delta^{+}(n-1)}{2}\}.$$ If $G(\neq\overset{\longrightarrow}{C_{n}})$ is a regular digraph, then $d_i^{+}+m_i^{+}=2d_i^{+}=2\Delta^{+}$ for all $v_i\in V(G)$, and we get $q(G)=2\Delta^{+}$. Since $G(\neq\overset{\longrightarrow}{C_{n}})$ is a strongly connected regular digraph, we may assume that $\Delta^{+} \geq 2$; this implies that $\delta^{+}+1+\frac{m-\delta^{+}(n-1)}{2}=\Delta^{+}+1+\frac{\Delta^{+}}{2}\leq 2\Delta^{+}=\Delta^{+}+\delta^{+}-1+\frac{m-\delta^{+}(n-1)}{\Delta^{+}}$. So $\max\{\Delta^{+}+\delta^{+}-1+\frac{m-\delta^{+}(n-1)}{\Delta^{+}}, \delta^{+}+1+\frac{m-\delta^{+}(n-1)}{2} \}=2\Delta^{+}$. Thus, the equality holds. If $G\cong \overset{\longleftrightarrow}{K}_{1,n-1} $, we get $q(G)=n$. Since $\delta^{+}+1+\frac{m-\delta^{+}(n-1)}{2}=2+\frac{n-1}{2}\leq n$ for $n\geq 3$ and $\Delta^{+}+\delta^{+}-1+\frac{m-\delta^{+}(n-1)}{\Delta^{+}}=n-1+1-1+\frac{n-1}{n-1}=n$, we have $\max\{\Delta^{+}+\delta^{+}-1+\frac{m-\delta^{+}(n-1)}{\Delta^{+}}, \delta^{+}+1+\frac{m-\delta^{+}(n-1)}{2} \}=n$. Thus, the equality also holds. Combining the above discussion, the result follows. \end{proof} \noindent\begin{corollary}\label{co:ck} Let $G$ be a strongly connected digraph with $n\geq 3$ vertices, $m$ arcs, maximum outdegree $\Delta^+$ and minimum outdegree $\delta^{+}$. If $\Delta^{+}\geq\frac{m-(n-1)}{2}$ and $\delta^{+}=1$, then \begin{equation}\label{eq:cm} q(G)\leq \Delta^{+}+2.
\end{equation} \end{corollary} \begin{proof} Since $\Delta^{+}\geq\frac{m-(n-1)}{2}$ and $\delta^{+}=1$, we have $\Delta^{+}+\delta^{+}-1+\frac{m-\delta^{+}(n-1)}{\Delta^{+}}\leq \Delta^{+}+2$ and $\delta^{+}+1+\frac{m-\delta^{+}(n-1)}{2}\leq \Delta^{+}+2$; therefore, by Theorem \ref{th:c7}, we have $q(G)\leq \Delta^{+}+2.$ \end{proof} Let $G^{*}(m,n,\frac{m-(n-1)}{2},1)$ be the class of strongly connected digraphs with $\Delta^{+}\geq\frac{m-(n-1)}{2}$ and $\delta^{+}=1$ for which there exists a vertex $v_0\in V(G)$ with $d_{v_0}^{+}=\Delta^{+}$ and a vertex $v_k\in N_{v_0}^{+}$ with $d_{v_k}^{+}\geq 2$. \noindent\begin{remark}\label{re:c2} For $G\in G^{*}(m,n,\frac{m-(n-1)}{2},1)$, we have $\Delta^{+}+2\leq\max\{d_i^{+}+d_j^{+}: (v_i, v_j)\in E(G)\}$; thus the upper bound \eqref{eq:cm} is better than the upper bound \eqref{eq:c1} for this class of digraphs. For general digraphs, however, the upper bounds \eqref{eq:cm} and \eqref{eq:c1} are incomparable. \end{remark} \begin{example} Let $G$ be the digraph of order 4 shown in Figure 1. It has 9 arcs, maximum outdegree $\Delta^+=3=\frac{9-(4-1)}{2}$ and minimum outdegree $\delta^{+}=1$, and there exists a vertex $v_4\in N_{v_1}^{+}$ with $d_{v_4}^{+}=3\geq 2$; therefore $G\in G^{*}(9,4,3,1)$. \begin{figure} \caption{Graph $G^*(9,4,3,1)$} \end{figure} \end{example} \begin{table}[H] \centering\caption{Values of the upper bounds for the digraph in Figure 1.} \begin{tabular}{cccc} \hline &$q(G)$&\eqref{eq:c1}&\eqref{eq:cm}\\ \hline $G^{*}(9,4,3,1)$ & 4.7321& 6&5 \\ \hline \end{tabular} \end{table} \noindent\begin{theorem}\label{th:c8} \ Let $G$ be a strongly connected digraph with vertex set $V(G)=\{v_1,v_2,$ $\ldots, v_n\}$ and arc set $E(G)$.
Then \begin{equation}\label{eq:c28} q(G)\leq \max\bigg\{\frac{d_i^{+}+d_j^{+}+ \sqrt{(d_i^{+}-d_j^{+})^{2}+{4\sqrt{d_i^{+}m_i^{+}}\sqrt{d_j^{+}m_j^{+}}}}}{2} :(v_i, v_j)\in E(G) \bigg\}.\end{equation} Moreover, if $G$ is a regular digraph or a bipartite semiregular digraph, then the equality holds in \eqref{eq:c28}. \end{theorem} \begin{proof} \ From the definition of $D=D(G)$ we get $D^{\frac{1}{2}}=\textrm{diag}(\sqrt{d_i^{+}}:v_i\in V(G))$, and consider the similar matrix $P=D^{-\frac{1}{2}}Q(G)D^{\frac{1}{2}}$. Since $G$ is a strongly connected digraph, it is easy to see that $P$ is irreducible and nonnegative. The $(i,j)$-th element of $P=D^{-\frac{1}{2}}Q(G)D^{\frac{1}{2}}$ is $$p_{ij} = \begin{cases} \ d_i^{+} & \textrm{ if $i=j $,}\\ \ \frac{\sqrt{d_j^{+}}}{\sqrt{d_i^{+}}} & \textrm{ if $(v_i, v_j)\in E(G) $,}\\ \ 0 & \textrm{ otherwise.} \end{cases}$$ Let $R_i(P)$ be the $i$-th row sum of $P$ and $R_i^{'}(P)=R_i(P)-d_i^{+}.$ Then by the Cauchy-Schwarz inequality, we have \begin{align*} R_i^{'}(P)^{2}=&\left(\sum\limits_{v_j:(v_i,v_j)\in E(G)}\frac{\sqrt{d_j^{+}}}{\sqrt{d_i^{+}}} \right)^{2} \leq \sum\limits_{v_j:(v_i,v_j)\in E(G)}1^{2}\sum\limits_{v_j:(v_i,v_j)\in E(G)}\frac{d_j^{+}}{d_i^{+}}\\ =&\sum\limits_{v_j:(v_i,v_j)\in E(G)}d_j^{+} =d_i^{+}m_i^{+}. \end{align*} Let $\rho(P)$ denote the spectral radius of the irreducible nonnegative matrix $P$. By Lemma \ref{le:c6}, there exists at least one arc $(v_i,v_j)\in E(G)$ such that $\rho(P)$ is contained in the oval region $$|\rho(P)-d_i^{+}||\rho(P)-d_j^{+}|\leq R_i^{'}(P)R_j^{'}(P) \leq\sqrt{d_i^{+}m_i^{+}}\sqrt{d_j^{+}m_j^{+}}.$$ Obviously, $\rho(P)=q(G)>\max\{d_i^{+}: v_i \in V(G)\}$ and $(\rho(P)-d_i^{+})(\rho(P)-d_j^{+})\leq|\rho(P)-d_i^{+}||\rho(P)-d_j^{+}|.$ Therefore, solving the above inequality we obtain $$ q(G)\leq\frac{d_i^{+}+d_j^{+}+\sqrt{(d_i^{+}-d_j^{+})^{2} +{4\sqrt{d_i^{+}m_i^{+}}\sqrt{d_j^{+}m_j^{+}}}}}{2}.$$ Hence \eqref{eq:c28} holds.
If $G$ is a regular digraph, then $$q(G)=2\Delta^{+}=\max\bigg\{\frac{d_i^{+}+d_j^{+}+\sqrt{{(d_i^{+}-d_j^{+})^{2} +4\sqrt{d_i^{+}m_i^{+}}\sqrt{d_j^{+}m_j^{+}}}}}{2} :(v_i, v_j)\in E(G) \bigg\},$$ so the equality holds. If $G$ is a bipartite semiregular digraph, then $$\max\bigg\{\frac{d_i^{+}+d_j^{+}+\sqrt{{(d_i^{+}-d_j^{+})^{2} +4\sqrt{d_i^{+}m_i^{+}}\sqrt{d_j^{+}m_j^{+}}}}}{2} :(v_i, v_j)\in E(G) \bigg\}=d_i^{+}+d_j^{+}.$$ Since $q(G)=\rho(D^{-1}Q(G)D)$, the $i$-th row sum of $D^{-1}Q(G)D$ is $d_i^{+}+m_i^{+}$, and $G$ is a bipartite semiregular digraph, we have $d_i^{+}+m_i^{+}=d_i^{+}+d_j^{+}$ for $(v_i, v_j)\in E(G)$; that is, the row sums of $D^{-1}Q(G)D$ are all equal. Then by Lemma \ref{le:c1}, $\rho(D^{-1}Q(G)D)=d_i^{+}+d_j^{+}$. Thus we have \begin{align*} q(G)&=\rho(D^{-1}Q(G)D)=d_i^{+}+d_j^{+} \\ &=\max\bigg\{\frac{d_i^{+}+d_j^{+}+\sqrt{{(d_i^{+}-d_j^{+})^{2} +4\sqrt{d_i^{+}m_i^{+}}\sqrt{d_j^{+}m_j^{+}}}}}{2} :(v_i, v_j)\in E(G) \bigg\}, \end{align*} and the equality holds. \end{proof} For a digraph $G=(V(G), E(G))$, let $f : V(G)\times V(G) \rightarrow \mathbb{R}$ be a function. If $f(v_i, v_j)>0$ for all $(v_i, v_j)\in E(G)$, we say that $f$ is positive on arcs. \noindent\begin{theorem}\label{th:c9} \ Let $G=(V(G), E(G))$ be a digraph, and let $f : V(G)\times V(G)\rightarrow \mathbb{R^{+}}\bigcup \{0\}$ be a nonnegative function which is positive on arcs. Then \begin{equation}\label{eq:c29} q(G)\leq \max\left\{\frac{\sum\limits_{v_k:(v_i, v_k)\in E(G)}f(v_i, v_k)+ \sum\limits_{v_k:(v_j, v_k)\in E(G)}f(v_j, v_k)}{f(v_i,v_j)} : (v_i,v_j)\in E(G)\right\}. \end{equation} \end{theorem} \begin{proof} \ Let ${\bf X}=(x_1,x_2,\ldots,x_n)^{T}$ be an eigenvector corresponding to the eigenvalue $q(G)$ of $Q(G)$. Then $Q(G){\bf X}=q(G){\bf X}$, so for $1 \leq i\leq n$, \begin{equation}\label{eq:c30} q(G)x_i=d_i^{+}x_i+\sum\limits_{v_k:(v_i,v_k)\in E(G)}x_k=\sum\limits_{v_k:(v_i,v_k)\in E(G)}(x_i+x_k).
\end{equation} By \eqref{eq:c30}, we have $$q(G)(x_i+x_j)=\sum\limits_{v_k:(v_i,v_k)\in E(G)}(x_i+x_k)+\sum\limits_{v_k:(v_j,v_k)\in E(G)}(x_j+x_k).$$ For convenience we use $f(i, j)$ to denote $f(v_i, v_j)$, and set $g(i, j)=\frac{x_i+x_j}{f(i,j)}.$ If $(v_i, v_j)\in E(G),$ then \begin{equation}\label{eq:c31} q(G)f(i,j)g(i,j)=\sum\limits_{v_k:(v_i,v_k)\in E(G)}f(i,k)g(i,k)+\sum\limits_{v_k:(v_j,v_k)\in E(G)}f(j,k)g(j,k). \end{equation} By \eqref{eq:c31}, we get \begin{align*} |q(G)f(i,j)g(i,j)|=&q(G)f(i,j)|g(i,j)| \\ \leq&\sum\limits_{v_k:(v_i,v_k)\in E(G)}f(i,k)|g(i,k)|+\sum\limits_{v_k:(v_j,v_k)\in E(G)}f(j,k)|g(j,k)|. \end{align*} Now choose $i_1,j_1$ such that $(v_{i_1}, v_{j_1})\in E(G)$ and $|g(i_1, j_1)|=\max\{|g(i, j)| : (v_i, v_j)\in E(G)\}.$ If $|g(i_1, j_1)|=0,$ then $|g(i, j)|=0$ for all arcs $(v_i, v_j)\in E(G)$, i.e., $x_i+x_j=0$ for all arcs $(v_i, v_j)\in E(G).$ By \eqref{eq:c30}, this gives $q(G)=0$, which is impossible since $G$ has at least one arc. So $|g(i_1,j_1)|>0.$ Then $$q(G)f(i_1,j_1)|g(i_1,j_1)|\leq\sum\limits_{v_k:(v_{i_1},v_k)\in E(G)}f(i_1,k)|g(i_1,k)|+ \sum\limits_{v_k:(v_{j_1},v_k)\in E(G)}f(j_1,k)|g(j_1,k)|.$$ Therefore, we obtain \begin{align*} q(G)\leq&\sum\limits_{v_k:(v_{i_1},v_k)\in E(G)}\frac{f(i_1,k)}{f(i_1,j_1)}\frac{|g(i_1,k)|}{|g(i_1,j_1)|}+ \sum\limits_{v_k:(v_{j_1},v_k)\in E(G)}\frac{f(j_1,k)}{f(i_1,j_1)}\frac{|g(j_1,k)|}{|g(i_1,j_1)|}\\ \leq&\sum\limits_{v_k:(v_{i_1},v_k)\in E(G)}\frac{f(i_1,k)}{f(i_1,j_1)}+ \sum\limits_{v_k:(v_{j_1},v_k)\in E(G)}\frac{f(j_1,k)}{f(i_1,j_1)}, \end{align*} i.e., $$q(G)\leq\frac{\sum\limits_{v_k:(v_{i_1},v_k)\in E(G)}f(i_1,k)+\sum\limits_{v_k:(v_{j_1},v_k)\in E(G)}f(j_1,k)}{f(i_1,j_1)}, \textrm{ where $(v_{i_1}, v_{j_1})\in E(G)$}.$$ This proves the desired result. \end{proof} \noindent\begin{corollary}\label{co:c10} \ Let $G=(V(G), E(G))$ be a digraph.
Then \begin{equation}\label{eq:c32}q(G)\leq \max\left\{d_i^{+} \sqrt{\frac{m_i^{+}}{d_j^{+}}}+d_j^{+}\sqrt{\frac{m_j^{+}}{d_i^{+}}} : {(v_i, v_j)\in E(G)}\right\}. \end{equation} \end{corollary} \begin{proof} \ Setting $f(v_i, v_j)=\sqrt{d_i^{+}d_j^{+}}$ in \eqref{eq:c29}, by the Cauchy-Schwarz inequality, $$\sum\limits_{v_k:(v_i, v_k)\in E(G)}f(v_i, v_k) =\sum\limits_{v_k:(v_i, v_k)\in E(G)}\sqrt{d_i^{+}d_k^{+}}=\sum\limits_{v_k:(v_i, v_k)\in E(G)}(\sqrt{d_i^{+}}\sqrt{d_k^{+}})$$ $$\leq\sqrt{\sum\limits_{v_k:(v_i, v_k)\in E(G)}d_i^{+}\sum\limits_{v_k:(v_i, v_k)\in E(G)}d_k^{+}}= \sqrt{{d_i^{+}}^{2}\sum\limits_{v_k:(v_i, v_k)\in E(G)}d_k^{+}}=d_i^{+}\sqrt{d_i^{+}m_i^{+}}.$$ By \eqref{eq:c29}, we get \begin{align*} q(G)\leq& \max\left\{\frac{\sum\limits_{v_k:(v_i, v_k)\in E(G)}f(v_i, v_k)+ \sum\limits_{v_k:(v_j, v_k)\in E(G)}f(v_j, v_k)}{f(v_i,v_j)} : (v_i,v_j)\in E(G)\right\} \\ \leq& \max\left\{\frac{d_i^{+}\sqrt{d_i^{+}m_i^{+}}+d_j^{+}\sqrt{d_j^{+}m_j^{+}}} {\sqrt{d_i^{+}d_j^{+}}} : (v_i,v_j)\in E(G)\right\} \\ =&\max\left\{d_i^{+}\sqrt{\frac{m_i^{+}}{d_j^{+}}}+d_j^{+}\sqrt{\frac{m_j^{+}}{d_i^{+}}} : {(v_i, v_j)\in E(G)}\right\}. \end{align*} \end{proof} \noindent\begin{corollary}\label{co:c13} \ Let $G=(V(G), E(G))$ be a digraph. Then \begin{equation}\label{eq:c35}q(G)\leq \max\left\{\frac{d_i^{+}(d_i^{+}+m_i^{+})+d_j^{+}(d_j^{+}+m_j^{+})}{d_i^{+}+d_j^{+}} : (v_i,v_j)\in E(G)\right\}. \end{equation} \end{corollary} \begin{proof} \ Setting $f(v_i, v_j)=d_i^{+}+d_j^{+}$ in \eqref{eq:c29}, since $\sum\limits_{v_k:(v_i, v_k)\in E(G)}f(v_i, v_k)=\sum\limits_{v_k:(v_i, v_k)\in E(G)}(d_i^{+}+d_k^{+})=d_i^{+}(d_i^{+}+m_i^{+})$, we get the desired result. \end{proof} \noindent\begin{corollary}\label{co:c11} \ Let $G=(V(G), E(G))$ be a digraph. Then \begin{equation}\label{eq:c33}q(G)\leq \max\left\{\frac{d_i^{+}\sqrt{d_i^{+} +m_i^{+}}+d_j^{+}\sqrt{d_j^{+}+m_j^{+}}}{\sqrt{d_i^{+}+d_j^{+}}} :{(v_i, v_j)\in E(G)}\right\}.
\end{equation} \end{corollary} \begin{proof} \ Setting $f(v_i, v_j)=\sqrt{d_i^{+}+d_j^{+}}$ in \eqref{eq:c29}, since $\sum\limits_{v_k:(v_i, v_k)\in E(G)}f(v_i, v_k)=\sum\limits_{v_k:(v_i, v_k)\in E(G)}(1\cdot \sqrt{d_i^{+}+d_k^{+}})\leq\sqrt{d_i^{+}\sum\limits_{v_k:(v_i, v_k)\in E(G)}(d_i^{+}+d_k^{+})}$ $=\sqrt{d_i^{+}({d_i^{+}}^{2}+d_i^{+}m_i^{+})}=d_i^{+}\sqrt{d_i^{+}+m_i^{+}}$ by the Cauchy-Schwarz inequality, we get the desired result from \eqref{eq:c29}. \end{proof} \noindent\begin{corollary}\label{co:c12} \ Let $G=(V(G), E(G))$ be a digraph. Then \begin{equation}\label{eq:c34}q(G)\leq \max\left\{\frac{d_i^{+} (\sqrt{d_i^{+}}+\sqrt{m_i^{+}})+d_j^{+}(\sqrt{d_j^{+}}+\sqrt{m_j^{+}})} {\sqrt{d_i^{+}}+\sqrt{d_j^{+}}}:{(v_i, v_j)\in E(G)}\right\}. \end{equation} \end{corollary} \begin{proof} \ Setting $f(v_i, v_j)=\sqrt{d_i^{+}}+\sqrt{d_j^{+}}$ in \eqref{eq:c29}, since $\sum\limits_{v_k:(v_i, v_k)\in E(G)}f(v_i, v_k)=\sum\limits_{v_k:(v_i, v_k)\in E(G)}(\sqrt{d_i^{+}}$ $+\sqrt{d_k^{+}})=d_i^{+}\sqrt{d_i^{+}}+\sum\limits_{v_k:(v_i, v_k)\in E(G)}(1\cdot \sqrt{d_k^{+}})\leq {d_i^{+}}^{\frac{3}{2}}+\sqrt{d_i^{+}\sum\limits_{v_k:(v_i, v_k)\in E(G)}d_k^{+}}=d_i^{+}(\sqrt{d_i^{+}}+\sqrt{m_i^{+}})$ by the Cauchy-Schwarz inequality, the result follows from \eqref{eq:c29}. \end{proof} Notice that \eqref{eq:c33} and \eqref{eq:c34} can be viewed as adding square roots to \eqref{eq:c35} at different places. \section{Example} \label{sec:3} Let $G_1$ and $G_2$ be the digraphs of orders 4 and 6, respectively, as shown in Figure 2.
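All of the bounds above are straightforward to evaluate numerically. The following Python sketch (illustrative only; the digraph used is a hypothetical strongly connected example, not the $G_1$ or $G_2$ of Figure 2) computes $q(G)$ as the Perron root of $Q(G)=D^{+}+A$ by power iteration and evaluates the right-hand side of \eqref{eq:c28} over all arcs.

```python
import math

def q_and_bound_c28(A):
    """Given the 0/1 adjacency matrix A of a strongly connected digraph,
    return (q, bound): q is the signless Laplacian spectral radius
    rho(D^+ + A), and bound is the arc maximum in inequality (eq:c28)."""
    n = len(A)
    d = [sum(row) for row in A]  # outdegrees d_i^+
    # average 2-outdegree m_i^+ = (sum of d_k^+ over out-neighbours) / d_i^+
    m = [sum(A[i][k] * d[k] for k in range(n)) / d[i] for i in range(n)]
    Q = [[d[i] if i == j else A[i][j] for j in range(n)] for i in range(n)]
    # Power iteration: Q is nonnegative, irreducible and has a positive
    # diagonal, hence it is primitive and the iteration converges to rho(Q).
    x, q = [1.0] * n, 0.0
    for _ in range(5000):
        y = [sum(Q[i][j] * x[j] for j in range(n)) for i in range(n)]
        q = max(y)
        x = [v / q for v in y]
    bound = max(
        (d[i] + d[j] + math.sqrt((d[i] - d[j]) ** 2
            + 4 * math.sqrt(d[i] * m[i]) * math.sqrt(d[j] * m[j]))) / 2
        for i in range(n) for j in range(n) if A[i][j])
    return q, bound

# Hypothetical example: the directed 4-cycle with one extra arc v1 -> v3.
A = [[0, 1, 1, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [1, 0, 0, 0]]
q, b = q_and_bound_c28(A)
assert q <= b + 1e-9  # Theorem (eq:c28) guarantees this
```

Applying the same routine to the digraphs of Figure 2 reproduces the corresponding column of the table below; the other bounds can be coded analogously.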
\begin{table}[H] \centering\caption{Values of the various bounds for $G_1$ and $G_2$.} \begin{tabular}{ccccccc} \hline &$q(G)$&\eqref{eq:c1}&\eqref{eq:c2}&\eqref{eq:c3}&\eqref{eq:c4}&\eqref{eq:c5} \\ &\eqref{eq:c27}&\eqref{eq:c28}&\eqref{eq:c32}&\eqref{eq:c35}&\eqref{eq:c33}&\eqref{eq:c34}\\ \hline $G_1$ & 3.0000& 4.0000& 3.5000& 3.3028 & 3.4142& 3.5616 \\ & 3.5000 & 3.5651 & 3.4495& 3.3333& 3.6029& 3.5731 \\ \hline $G_2$ & 4.1984& 5.0000& 4.6667 & 4.6016& 5.0000 & 4.7321 \\ & 5.5000& 4.7913& 4.5644& 4.6000& 4.7956 & 4.7866 \\ \hline \end{tabular} \end{table} \noindent\begin{remark}\label{re:c3} From the table above, the bound \eqref{eq:c3} is the best among all the upper bounds for $G_1$, and the bound \eqref{eq:c32} is the best for $G_2$, while the bound \eqref{eq:c35} is the second best for both $G_1$ and $G_2$. In general, these bounds are incomparable. \end{remark} \end{document}
\begin{document} \title{Polyhedra without cubic vertices are prism-hamiltonian} \author{ Simon \v Spacapan\footnote{ University of Maribor, FME, Smetanova 17, 2000 Maribor, Slovenia. e-mail: simon.spacapan@um.si. }} \date{\today} \maketitle \begin{abstract} The prism over a graph $G$ is the Cartesian product of $G$ with the complete graph on two vertices. A graph $G$ is prism-hamiltonian if the prism over $G$ is hamiltonian. We prove that every polyhedral graph (i.e. 3-connected planar graph) of minimum degree at least four is prism-hamiltonian. \end{abstract} \noindent {\bf Key words}: Hamiltonian cycle, circuit graph \noindent {\bf AMS subject classification (2010)}: 05C10, 05C45 \section{Introduction} The study of hamiltonicity of planar graphs is largely concerned with finding subclasses of 3-connected planar graphs for which each member of the subclass is hamiltonian or has some hamiltonian-type property. One such result was obtained in 1956 by Tutte, who proved that all $4$-connected planar graphs are hamiltonian \cite{tutte2}. Although not every 3-connected planar graph is hamiltonian, it is possible to prove that this class of graphs satisfies (hamiltonian-type) properties weaker than hamiltonicity. A 2-walk in a graph is a closed spanning walk that visits every vertex at most twice. Clearly, every hamiltonian graph has a 2-walk. In \cite{richter} Gao and Richter proved that every $3$-connected planar graph has a 2-walk. There is an extensive list of non-hamiltonian 3-connected planar graphs with special properties, such as graphs with small order and size \cite{bar}, plane triangulations \cite{zamfirescu}, regular graphs \cite{tutte1, zamfirescu1}, $K_{2,6}$-minor-free graphs \cite{eli1}, and graphs with few 3-cuts \cite{brink2}. However, some of the classes of graphs mentioned above are prism-hamiltonian. For example, every plane triangulation is prism-hamiltonian \cite{bib}, and every cubic 3-connected graph is prism-hamiltonian \cite{kaiser, paul}.
It is well known that every prism-hamiltonian graph has a 2-walk, so the result obtained in \cite{bib} strengthens the result of Gao and Richter mentioned above. Rosenfeld and Barnette \cite{domneva} conjectured that every 3-connected planar graph is prism-hamiltonian (see also \cite{kral}). This conjecture was recently refuted in \cite{jaz}, where vertex degrees play a central role in the construction of counterexamples. In particular, every counterexample to the Rosenfeld-Barnette conjecture given in \cite{jaz} has many cubic vertices and two vertices of \enquote{high} degree (linear in the order of the graph). In \cite{zam} the authors show that there is an infinite family of 3-connected planar graphs, each of them not prism-hamiltonian, such that the ratio of cubic vertices tends to 1 when the order goes to infinity, and the maximum degree stays bounded by 36. Vertex degrees in relation to hamiltonicity properties were discussed already by Ore in \cite{ore} and later by Jackson and Wormald in \cite{jackson}. Let $\sigma_k(G)$ be the minimum sum of vertex degrees of an independent set of $k$ vertices. Ore showed that $\sigma_2(G)\geq n$ implies that $G$ is hamiltonian, and Jackson and Wormald showed that $\sigma_3(G)\geq n$ implies that $G$ has a 2-walk (provided that $G$ is connected). This was strengthened by Ozeki in \cite{kenta}, who showed that $\sigma_3(G)\geq n$ implies that $G$ is prism-hamiltonian. In this paper we prove that every 3-connected planar graph of minimum degree at least four is prism-hamiltonian. Equivalently, every 3-connected planar graph which is not prism-hamiltonian must have at least one cubic vertex. In particular, this implies that every regular 3-connected planar graph is prism-hamiltonian. The class of 3-connected planar graphs of minimum degree at least four is neither hamiltonian nor traceable (even when restricted to plane triangulations, or to regular graphs), see \cite{zamfirescu} and \cite{zamfirescu1}.
In this sense prism-hamiltonicity appears to be the strongest hamiltonian-type property this class has. The proof we give in this article builds on results obtained in \cite{richter}, where a method of decomposing graphs into plane chains is developed. In \cite{richter} the authors work with circuit graphs (which were originally defined in \cite{barnette}). A plane graph is a circuit graph if it is obtained from a 3-connected plane graph $G$ by deleting all vertices that lie in the exterior of a cycle of $G$. A cactus is a connected graph $G$ such that every block of $G$ is either a $K_2$ or a cycle, and such that every vertex of $G$ is contained in at most two blocks of $G$ (the last condition is usually omitted; however, for us it will be crucial, so we include it in the definition). The main result of \cite{richter} is that any circuit graph (and hence also any 3-connected plane graph) has a spanning cactus as a subgraph. Here we improve this result by proving that any circuit graph with no internal cubic vertex has a spanning bipartite cactus as a subgraph. Every cactus has a 2-walk, while every bipartite cactus is prism-hamiltonian. Our result thus implies that circuit graphs with all internal vertices of degree at least 4 are prism-hamiltonian. We mention that 3-connected planar graphs of minimum degree at least 4 also appear in \cite{thomassen}, where the author proved that no graph in this class is hypohamiltonian. \section{Preliminaries} We refer to \cite{mt} for terminology not defined here. Let $G=(V(G),E(G))$ be a graph, $x\in V(G)$ and $X\subseteq V(G)$. We say that $x$ is {\em adjacent} to $X$ if $x$ is adjacent to some vertex of $X$. If $u$ and $v$ are adjacent then $e=uv$ denotes the edge with endvertices $u$ and $v$; the subgraph induced by $u$ and $v$ is a path denoted by $u,v$.
The {\em union} of graphs $G=(V(G),E(G))$ and $H=(V(H),E(H))$ is the graph $G\cup H=(V(G)\cup V(H),E(G)\cup E(H))$ and the {\em intersection} of $G$ and $H$ is $G\cap H=(V(G)\cap V(H),E(G)\cap E(H))$. The graph $G-X$ is obtained from $G$ by deleting all vertices in $X$ and edges incident to a vertex in $X$. Similarly, for $M\subseteq E(G)$, $G-M$ is the graph obtained from $G$ by deleting all edges in $M$. If $X=\{x\}$ we write $G-x$ instead of $G-\{x\}$. Let $G$ be a plane graph. Vertices and edges incident to the unbounded face of $G$ are called {\em external vertices} and {\em external edges}, respectively. If a vertex (or an edge) is not an external vertex (or edge), then it is called an {\em internal vertex} (or an {\em internal edge}). A path $P$ is an {\em external} resp. {\em internal} path of $G$ if all edges of $P$ are external resp. internal edges. We use $[n]$ to denote the set of positive integers less than or equal to $n$. A path of odd/even length is called an {\em odd/even path}, respectively. Similarly we define {\em odd} and {\em even faces}, based on the parity of their degree. Recall that every vertex of a cactus $G$ is contained in at most two blocks of $G$. A vertex of a cactus $G$ is {\em good} if it is contained in exactly one block of $G$. A {\em prism} over a graph $G$ is the Cartesian product of $G$ and the complete graph on two vertices $K_2$. The following proposition is given in \cite{eli3} (Theorem 2.3). For the sake of completeness we also include its proof here. \begin{proposition} \label{bolje} Every bipartite cactus is prism-hamiltonian. \end{proposition} \noindent{\bf Proof.\ } We denote $V(K_2)=\{a,b\}$. We use induction to prove the following stronger statement. Every prism $G\Box K_2$ over a bipartite cactus $G$ has a Hamilton cycle $C$ such that for every good vertex $x$ of $G$, we have $(x,a)(x,b)\in E(C)$. This is clearly true when $G$ is an even cycle or $K_2$.
Let $G$ be a bipartite cactus and assume that the statement is true for all bipartite cactuses with fewer vertices than $|V(G)|$. If all vertices of $G$ are good, then $G$ is an even cycle or $K_2$. Otherwise, there is a vertex $u$ which is not a good vertex of $G$, so $u$ is contained in exactly two blocks of $G$. Let $G_1'$ and $G_2'$ be the connected components of $G-u$, and let $G_1=G-G_2'$ and $G_2=G-G_1'$. Both $G_1$ and $G_2$ are bipartite cactuses. Moreover, $u$ is a good vertex in $G_i$, for $i=1,2$. By the induction hypothesis there is a Hamilton cycle $C_i$ in $G_i\Box K_2$ such that $C_i$ uses the edge $e=(u,a)(u,b)$. The desired Hamilton cycle in $G\Box K_2$ is $C=(C_1\cup C_2)-e$. Observe that every good vertex of $G$ is a good vertex of $G_1$ or $G_2$. It follows that for every good vertex $x$ of $G$, we have $(x,a)(x,b)\in E(C)$. $\square$ \begin{corollary} Every graph $G$ that has a bipartite cactus $H$ as a spanning subgraph is prism-hamiltonian. \end{corollary} If $G$ is a plane graph and $H$ is a subgraph of $G$, then $H$ is also a plane graph and we assume that the embedding of $H$ in the plane is the one given by $G$. Let $G$ be a plane graph and $G^+$ the graph obtained from $G$ by adding a vertex to $G$ and making it adjacent to all external vertices of $G$. The graph $G$ is a {\em circuit graph} if $G^+$ is 3-connected. It follows from the definition that any circuit graph is 2-connected, and hence every face of a circuit graph is bounded by a cycle. If $G$ is a 3-connected plane graph (or if $G$ is a circuit graph) and $C$ is a cycle of $G$, then the subgraph of $G$ bounded by $C$ is a circuit graph. Observe also that for any circuit graph $G$ with outer cycle $C$, and any separating set $S$ of size 2 in $G$, every connected component of $G-S$ intersects $C$. A graph $G$ is a {\em chain of blocks} if the block-cutvertex graph of $G$ is a path.
We denote the blocks and cutvertices of $G$ by $$B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,,$$ where $B_i$ are blocks for $i\in [n]$, and $b_i\in V(B_i)\cap V(B_{i+1})$ are cutvertices of $G$ for $i\in [n-1]$. A plane graph $G$ is a {\em plane chain of blocks} if it is a chain of blocks $$G=B_1,b_1,B_2,\ldots,b_{n-1}, B_n$$ such that every external vertex of $B_i$, $i \in [n]$, is also an external vertex of $G$. The following lemma is given in \cite{richter} (Lemma 3, p. 261). \begin{lemma}\label{plainchain} Let $G$ be a circuit graph with outer cycle $C$ and let $x\in V(C)$. Let $x'$ and $x''$ be the neighbors of $x$ in $C$. Then \begin{itemize} \item[(i)] $G-x$ is a plane chain of blocks $B_1,b_1,B_2,\ldots,b_{n-1}, B_n$ and each nontrivial block of $G-x$ is a circuit graph. \item[(ii)] Setting $x'=b_0$ and $x''=b_n$, $ B_i\cap C$ is a path in $C$ with endvertices $b_{i-1}$ and $b_i$, for every $i\in [n]$. \end{itemize} \end{lemma} It follows from the above lemma that for every nontrivial block $B_i$ of $G-x$ with outer cycle $C_i$, the cycle $C_i$ is the union of two $b_{i-1}b_i$-paths $P_i$ and $P_i'$, where $P_i$ is an internal path in $G$ and $P_i'$ is an external path in $G$. \section{The proof of main result} In this section we prove that any circuit graph $G$ such that every internal vertex of $G$ is of degree at least 4 is prism-hamiltonian. \begin{definition} Let $G$ be a circuit graph with outer cycle $C$ and let $x,y\in V(C)$. We say that $G$ is bad with respect to $x$ and $y$ if \begin{itemize} \item[(i)] $G$ has exactly one bounded odd face $F$, \item[(ii)] $x$ and $y$ are incident to $F$, \item[(iii)] if $x$ and $y$ are adjacent, then $e=xy$ is an internal edge of $G$. \end{itemize} We say that $G$ is good with respect to $x$ and $y$ if it is not bad with respect to $x$ and $y$.
\end{definition} If $G$ is a circuit graph and $G$ is bad with respect to $x$ and $y$, then there is no hamiltonian cycle $C$ in $G\Box K_2$ such that $C$ uses vertical edges at $x$ and $y$ (edges between the two layers of $G$). For example, an odd cycle is bad with respect to any two non-adjacent vertices, and hence the prism over an odd cycle has no hamiltonian cycle that uses vertical edges at two non-adjacent vertices of this cycle. Conversely, it turns out (and is a consequence of Theorem \ref{glavni}) that for any circuit graph $G$ with all internal vertices of degree at least 4, and any external vertices $x$ and $y$ of $G$ such that $G$ is good with respect to $x$ and $y$, there is a hamiltonian cycle in $G\Box K_2$ that uses vertical edges at $x$ and $y$. Note also that a bipartite circuit graph $B$ is good with respect to any two external vertices of $B$ (a fact we shall use frequently). In order to simplify the formulation of statements, we also say that the complete graphs $K_1$ and $K_2$ are good with respect to any of their vertices. \begin{definition} Let $G=B_1,b_1,B_2,\ldots,b_{n-1}, B_n$ be a plane chain of blocks such that each nontrivial block $B_i$ is a circuit graph. Let $b_0\neq b_1$ be an external vertex of $B_1$, and $b_n\neq b_{n-1}$ be an external vertex of $B_n$. We say that $G$ is a good chain with respect to $b_0$ and $b_n$ if $B_i$ is good with respect to $b_{i-1}$ and $b_i$ for every $i\in [n]$. \end{definition} The same definition is used when only one of the two vertices $b_0$ and $b_n$ is given, and in this case we say that $G$ is a good chain with respect to $b_0$ or with respect to $b_n$. If $G=B_1$ has only one block we say that $G$ is a good chain with respect to any external vertex of $G$. \begin{lemma} \label{dvodelen} Let $B$ be a bipartite circuit graph with outer cycle $C$ such that all internal vertices of $B$ are of degree at least 4. Then $B$ has at least 4 external vertices of degree 2.
\end{lemma} \noindent{\bf Proof.\ } Let $C$ be a $k$-cycle, $k\geq 4$. Let ${\mathcal F}$ be the set of faces of $B$, and set $e=|E(B)|, v=|V(B)|$ and $f=|{\mathcal F}|$. Since $B$ is bipartite, $$2e=\sum_{F\in {\mathcal F}} \deg (F)\geq 4(f-1)+k\,.$$ We use Euler's formula to obtain $$\sum_{x\in V(B)}\deg(x)=2e\leq 4v-k-4\,.$$ Since every internal vertex of $B$ is of degree at least 4 we get $$\sum_{x\in V(C)}\deg(x)\leq 3k-4\,.$$ Since all vertices of $C$ are of degree at least 2, the claim of the lemma follows from the pigeonhole principle. $\square$ The following lemma is a well known fact. \begin{lemma}\label{sodalica} A plane graph $G$ is bipartite if and only if all bounded faces of $G$ are even. \end{lemma} \begin{lemma}\label{osnovna1} Let $B$ be a circuit graph with outer cycle $C$ such that all internal vertices of $B$ are of degree at least 4. Let $x$ and $y$ be any vertices of $C$, and $Q$ a $xy$-path in $C$. Suppose that all vertices in $V(C)\setminus V(Q)$ are of degree at least three in $B$. Then $B$ is good with respect to $x$ and $y$. \end{lemma} \noindent{\bf Proof.\ } Suppose to the contrary that $B$ is bad with respect to $x$ and $y$. Then $x$ and $y$ are incident to an odd face $F$ of $B$, and $F$ is the only bounded odd face of $B$. Moreover, $x$ and $y$ are not adjacent in $C$. It follows that $B-\{x,y\}$ has exactly two components. Let $H$ be the component of $B-\{x,y\}$ that contains a vertex of $Q$. If $xy\in E(B)$ and $F$ is contained in the exterior of the cycle $E(Q)\cup \{xy\}$ define $H'=(B-V(H))-xy$. Otherwise define $H'=B-V(H)$. $H'$ is a plane chain of blocks and each nontrivial block of $H'$ is a bipartite circuit graph (by Lemma \ref{sodalica}), so assume $$H'=D_1,d_1,D_2,\ldots,d_{m-1}, D_m\,.$$ Let $d_0=x$ and $d_m=y$. If $j\in [m]$ and $u\in V(D_j)\setminus \{d_{j-1},d_j\}$, then $\deg_{D_j}(u)>2$.
So if $D_j$ is nontrivial, then it has at most two vertices of degree 2 in $D_j$; since $D_j$ is bipartite this contradicts Lemma \ref{dvodelen}. It follows that all blocks of $H'$ are trivial. If $H'$ is $K_2$ then $x$ and $y$ are adjacent in $C$ (a contradiction), otherwise a vertex in $V(C)\setminus V(Q)$ is of degree $\leq 2$ (this contradicts the assumption of the lemma). $\square$ \begin{lemma}\label{osnovna} Let $B$ be a circuit graph with outer cycle $C$ such that all internal vertices of $B$ are of degree at least 4. Let $x\in V(C)$ be any vertex and $$B-x=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ Let $b_0\in V(B_1)$ and $b_n\in V(B_n)$ be the neighbors of $x$ in $C$. Then for every $i\in [n]$, $B_i$ is good with respect to $b_{i-1}$ and $b_i$. \end{lemma} \noindent{\bf Proof.\ } Let $B_i$ be a nontrivial block with outer cycle $C_i$, and define $Q=C\cap B_i$. $Q$ is a path in $C_i$ with endvertices $b_{i-1}$ and $b_i$, and every vertex in $V(C_i)\setminus V(Q)$ is of degree more than 2 in $B_i$. By Lemma \ref{osnovna1}, $B_i$ is good with respect to $b_{i-1}$ and $b_i$. $\square$ \begin{lemma}\label{osnovna2} Let $B$ be a circuit graph with outer cycle $C$ such that all internal vertices of $B$ are of degree at least 4. Let $x$ and $y$ be any vertices of $C$ and $Q$ a $xy$-path in $C$ such that all vertices in $V(C)\setminus V(Q)$ are of degree at least three in $B$. If $B-x$ is bipartite, then $|V(B_i\cap Q)|\geq 2$ for every block $B_i$ of $B-x$. \end{lemma} \noindent{\bf Proof.\ } If $V(C)=V(Q)$, the lemma follows from Lemma \ref{plainchain}. Assume $V(C)\neq V(Q)$, and let $u\in V(C)\setminus V(Q)$ be the neighbor of $x$. The block $B_1$ of $B-x$ containing $u$ is nontrivial, for otherwise $\deg_B(u)=2$. If $|V(B_1\cap Q)|< 2$, then $B_1$ has at most two vertices of degree two in $B_1$. Therefore, by Lemma \ref{dvodelen}, $B_1$ is non-bipartite and hence $B-x$ is non-bipartite. 
$\square$ \begin{lemma} \label{skupek} Let $B$ be a circuit graph with outer cycle $C$ such that all internal vertices of $B$ are of degree at least 4. Let $x\in V(C)$ be any vertex, and let $$B-x=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ Then for every $k \in [n-1]$ the graph $$G=B-\bigcup_{i=k+1}^n V(B_i)$$ is a good chain with respect to $x$. \end{lemma} \noindent{\bf Proof.\ } Let $b_0\in V(B_1)$ be the neighbor of $x$ in $C$. Denote the path $x, b_0$ by $B_0$. {\em Case 1: Suppose that $B_k$ is trivial. } \\ If $x$ is not adjacent to a vertex in $G-b_0$, then $G$ induces a plane chain of blocks $$B_0,b_0,B_1,\ldots,b_{k-2}, B_{k-1}\,$$ and, by Lemma \ref{osnovna}, $B_i$ is good with respect to $b_{i-1}$ and $b_i$ for $i\in [k-1]$. Assume therefore that $x$ is adjacent to a vertex in $G-b_0$. Let $\ell\in [k]$ be the maximum number such that $x$ is adjacent to $B_\ell-\{b_{\ell-1},b_k\}$. Since $B_k$ is trivial, $\ell\neq k$. The graph $H$ induced by $\bigcup_{i=0}^\ell V(B_i)$ is a nontrivial block of $G$. Moreover, since $H$ is a subgraph of $B$ bounded by a cycle of $B$, $H$ is a circuit graph. Note also that if $x$ and $b_\ell$ are incident to a bounded face $F$ of $H$, then $x$ and $b_\ell$ are adjacent; moreover, $xb_\ell$ is an external edge of $H$. It follows that $H$ is good with respect to $x$ and $b_\ell$, and therefore $$G=H,b_\ell,B_{\ell+1},\ldots, b_{k-2}, B_{k-1}$$ is a good chain with respect to $x$. {\em Case 2: Suppose that $B_k$ is nontrivial. } \\ By Lemma \ref{plainchain}, $B_k-b_k$ is a plane chain of blocks, so let $$B_k-b_k=D_1,d_1,D_2,\ldots,d_{m-1}, D_m\,.$$ Let $C_k$ be the outer cycle of $B_k$, and $d_0\in V(D_1), d_m\in V(D_m)$ be the neighbors of $b_k$ in $C_k$. Without loss of generality assume that $b_kd_0$ is an internal edge of $B$. Note that $D_1$ is nontrivial if $b_{k-1}\neq d_0$, for otherwise $\deg_{B} (d_0)\leq 3$ (a contradiction, because $d_0$ is an internal vertex of $B$ if $b_{k-1}\neq d_0$).
Suppose that $x$ is adjacent to $D_1-\{b_{k-1},d_1\}$ (it is possible that $b_{k-1}=d_1$). Then $D_1$ is nontrivial. Let $j\in [m]$ be such that $b_{k-1}\in V(D_j)\setminus \{d_{j-1}\}$ and let $H'$ be the graph induced by $$\bigcup_{i=0}^{k-1} V(B_i)\cup \bigcup_{i=1}^jV(D_i)\,.$$ $H'$ is bounded by a cycle of $B$, so it is a circuit graph. We shall prove that $H'$ is good with respect to $x$ and $d_j$. Suppose that $x$ and $d_j$ are incident to a face $F'$ of $H'$, and that $F'$ is the only bounded odd face of $H'$. Then $d_j=b_{k-1}$, and all bounded faces of $D_1$ are even. This contradicts Lemma \ref{dvodelen}, because $\deg_{D_1}(u)>2$ for every $u\in V(D_1)\setminus \{d_0, d_1\}$. It follows that $$G=H',d_{j},D_{j+1},\ldots,d_{m-1},D_m\,$$ is a good chain with respect to $x$. Suppose that $x$ is not adjacent to $D_1-\{b_{k-1},d_1\}$. We claim that $|V(D_1)\cap V(C)|\geq 2$. To prove the claim, suppose the contrary, that $|V(D_1)\cap V(C)|<2$. Then $\{b_k,d_1\}$ is a separating set in $B$, and $D_1-\{b_k,d_1\}$ is a component of $B-\{b_k,d_1\}$ disjoint from $C$. It follows that $B$ is not a circuit graph, a contradiction. This proves the claim. Define $\ell$ and $H$ as in Case 1. We claim that $$G=H,b_\ell,B_{\ell+1},\ldots, B_{k-1},b_{k-1},D_1,d_1,\ldots,d_{m-1},D_m$$ is a good chain with respect to $x$. We have already shown (in Case 1) that $H$ is good with respect to $x$ and $b_{\ell}$. By Lemma \ref{osnovna}, $B_i$ is good with respect to $b_{i-1}$ and $b_i$ for $i\in [k-1]\setminus [\ell]$, and $D_i$ is good with respect to $d_{i-1}$ and $d_i$ for $i\in [m], i\neq 1$. It remains to prove that $D_1$ is good with respect to $b_{k-1}$ and $d_1$. Let $C'$ be the outer cycle of $D_1$ and let $Q=C\cap D_1$ (or equivalently $Q=C\cap C'$). Note that for every vertex $z\in V(C')\setminus V(Q)$, $\deg_{D_1}(z)>2$. By Lemma \ref{osnovna1}, $D_1$ is good with respect to $b_{k-1}$ and $d_1$.
$\square$ \begin{definition}\label{def} Let $B$ be a circuit graph with outer cycle $C$. Let $\{x,y\}\subseteq V(C)$ and $\{u_1,u_2\}\subseteq V(C)$ be any sets. A set of pairwise disjoint chains ${\mathcal C}=\{G_1,\ldots,G_k\}$ is a $(x,y;u_1,u_2)$-set of chains in $B$ if there exists a $xy$-path $P$ in $B$ such that \begin{itemize} \item[(i)] $V(B)\setminus V(P)\subseteq \bigcup_{i=1}^{k}V(G_i)$ \item[(ii)] For $i\in [k]$, $G_i$ intersects $P$ in exactly one vertex $x_i$, and $G_i$ is a good chain with respect to $x_i$. \item[(iii)] For $j\in [2]$, either $G_i$ is a good chain with respect to $u_j$ and $x_i$ for some $i\in [k]$, or $u_j \notin \bigcup_{i=1}^{k}V(G_i)$. \end{itemize} A path $P$ that fulfills (i),(ii) and (iii) is called a ${\mathcal C}$-path. The set ${\mathcal C}$ is an odd or an even $(x,y;u_1,u_2)$-set of chains if there exists an odd or an even ${\mathcal C}$-path, respectively. \end{definition} We say that a set of pairwise disjoint chains $G_1,\ldots,G_k$ is a $(x,y;u_1)$-set of chains if it satisfies (i),(ii) and (iii) for $j=1$. Moreover, ${\mathcal C}$ is a $(x,y)$-set of chains if it satisfies (i) and (ii) of Definition \ref{def}. We also use Definition \ref{def} in slightly more general settings in which $B$ is a plain chain of blocks (and each block is a circuit graph). More precisely, if $B$ is a plain chain of blocks, $x,y$ are two external vertices of $B$, and ${\mathcal C}$ is a set of pairwise disjoint plain chains that satisfy (i),(ii) and (iii), then ${\mathcal C}$ is a $(x,y;u_1,u_2)$-set of chains in $B$. \begin{lemma} \label{spajanje} Let $G$ be a bipartite plain chain of blocks $$G=B_1,b_1,B_2,\ldots,b_{n-1}, B_n $$ such that for $i\in [n]$ each nontrivial block $B_i$ of $G$ is a circuit graph with outer cycle $C_i$. Suppose that $u,x,y\in V(C_j),u\neq b_j$, and that ${\mathcal C}$ is a $(x,y;u,b_j)$-set of chains in $B_j$ for some $j\in [n]$. 
Then for every $\ell>j$ and any vertex $v\in V(C_\ell)\setminus V(C_{\ell-1})$, there is a $(x,y;u,v)$-set of chains in $\bigcup_{i=j}^{\ell} B_i$. Moreover, if $u=b_{j-1}$, then for every $\ell'<j$ and any vertex $v'\in V(C_{\ell'})\setminus V(C_{\ell'+1})$, there is a $(x,y;v',v)$-set of chains in $\bigcup_{i=\ell'}^{\ell} B_i$. \end{lemma} \noindent{\bf Proof.\ } Let $\ell>j$ and $v\in V(C_\ell)\setminus V(C_{\ell-1})$. Suppose that ${\mathcal C}=\{ G_1,\ldots,G_k\}$ is a $(x,y;u,b_j)$-set of chains in $B_j$ and that $P$ is a ${\mathcal C}$-path. Then (a) or (b) occurs. \begin{itemize} \item[(a)] There is a chain $G_{r} \in {\mathcal C}$ such that $G_{r}$ is a good chain with respect to $x_{r}$ and $b_{j}$, where $\{x_{r}\}=V(G_{r})\cap V(P)$ \item[(b)] $b_j \notin \bigcup_{i=1}^{k}V(G_i)$. \end{itemize} In case (a), $G_{r}'=G_{r}\cup \bigcup_{i=j+1}^{\ell} B_i$ is a good chain with respect to $x_{r}$ and $v$, and therefore ${\mathcal C'} ={\mathcal C}\cup \{G_r'\}\setminus \{G_r\}$ is a $(x,y;u,v)$-set of chains in $ \bigcup_{i=j}^{\ell} B_i$. In case (b), $G_0=\bigcup_{i=j+1}^{\ell} B_i$ is a good chain with respect to $b_j$ and $v$, and therefore ${\mathcal C'}=\{G_0,\ldots,G_{k}\}$ is a $(x,y;u,v)$-set of chains in $ \bigcup_{i=j}^{\ell} B_i$. In both cases $P$ is a ${\mathcal C'}$-path. The last sentence of the lemma is proved analogously. $\square$ Using the notation of the above lemma, we note that a $(x,y;b_j)$-set of chains in $B_j$ can be extended to a $(x,y)$-set of chains in $\bigcup_{i=j}^{\ell} B_i$ (in fact the construction given in the above proof works also in this case). Note also that a $(x,y;u,v)$-set of chains in $\bigcup_{i=j}^{\ell} B_i$ exists also under the assumption that $B_i$ is bipartite for $i>j$ (and $G$ may be non-bipartite). In \cite{richter} the following result was proved (Theorem 5, p.262). \begin{theorem}\label{rihta} Let $B$ be a bipartite circuit graph with outer cycle $C$. 
If $x,y\in V(C)$, then for any vertex $u\in V(C)$ (not necessarily distinct from $x$ and $y$) there exists a $(x,y;u)$-set of chains in $B$. \end{theorem} \begin{lemma}\label{posebna} Let $B$ be a bipartite circuit graph with outer cycle $C$. Suppose that $x,y\in V(C)$ and that $Q$ is a $xy$-path in $C$. If every internal vertex of $B$ is of degree at least 4 and every vertex in $V(C)\setminus V(Q)$ is of degree at least 3 in $B$, then there exists a $(x,y;x,y)$-set of chains in $B$. \end{lemma} \noindent{\bf Proof.\ } By Lemma \ref{plainchain}, $B-x$ is a plain chain of blocks $$B-x=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ By Lemma \ref{osnovna2}, $|V(B_i\cap Q)|\geq 2$ for $i\in [n]$. We may assume, without loss of generality, that $y\in V(B_1)$ and $y\neq b_1$. If $B_1$ is nontrivial then $B_1-y$ is a plain chain of blocks $$B_1-y=D_1,d_1,D_2,\ldots,d_{m-1}, D_m\,.$$ Let $d_0\in V(D_1)$ be the neighbor of $y$ in $Q$, and define $k=\max\{i\,|\,D_i\cap Q\neq \emptyset\}$. {\em Case 1:} Suppose that $D_k$ intersects $Q$ in exactly one vertex (in this case $d_{k-1}$). Then $G=\bigcup_{i=k}^m D_i$ is a good chain with respect to $d_{k-1}$. If $D_i$ is trivial define $P_i=D_i$ and ${\mathcal C_i}=\emptyset$, for $i\in [k-1]$. If $D_i$ is nontrivial then, by Theorem \ref{rihta}, there is a $(d_{i-1},d_i;d_i)$-set of chains ${\mathcal C_i}$ in $D_i$, for $i\in [k-1]$. In this case let $P_i$ be a ${\mathcal C_i}$-path in $D_i$. If $B_i$ is trivial define $R_i=B_i$ and ${\mathcal F_i}=\emptyset$, for $i\in [n],i\neq 1$. If $B_i$ is nontrivial then, by Theorem \ref{rihta}, there is a $(b_{i-1},b_i;b_{i-1})$-set of chains ${\mathcal F_i}$ in $B_i$, for $i\in[n],i\neq 1$. In this case let $R_i$ be a ${\mathcal F_i}$-path in $B_i$. Let $b_n$ be the neighbor of $x$ in $Q$, and let $R_{n+1}$ be the path $x,b_n$. Additionally let $P_0$ be the path $y,d_0$. 
Define $$P=\bigcup_{i=0}^{k-1} P_i\cup \bigcup_{i=2}^{n+1} R_i\,.$$ The chain $G$ together with chains ${\mathcal C_i}, i\in [k-1]$ and ${\mathcal F_i},i\in [n],i\neq 1$ is a $(x,y;x,y)$-set of chains in $B$. If we call this set of chains ${\mathcal C}$, then $P$ is a ${\mathcal C}$-path. {\em Case 2:} Suppose that $D_k$ intersects $Q$ in more than one vertex. Then, by Theorem \ref{rihta}, there is a $(d_{k-1},b_1;d_k)$-set of chains ${\mathcal H}$ in $D_k$. By Lemma \ref{spajanje} (see also the note directly after Lemma \ref{spajanje}) there is a $(d_{k-1},b_1)$-set of chains in $\bigcup_{i=k}^m D_i$. The rest of the proof is similar to that of Case 1. If $B_1$ is trivial, then $x$ and $y$ are adjacent in $C$ (for otherwise $\deg_B(u)=2$, where $u$ is the neighbor of $x$ in $V(C)\setminus V(Q)$) and $V(Q)=V(C)$. Define $R_1=B_1$. In this case $\bigcup_{i=2}^{n} {\mathcal F_i}$ is a $(x,y;x,y)$-set of chains in $B$. The corresponding path is $\bigcup_{i=1}^{n+1} R_i.$ $\square$ \begin{lemma}\label{to} Let $B$ be a bipartite circuit graph with outer cycle $C$. Let $x,y,u_1,u_2\in V(C)$ be such that $\{x,y\}\neq \{u_1,u_2\}$. If every internal vertex of $B$ is of degree at least 4, then there is a $(x,y;u_1,u_2)$-set of chains in $B$. \end{lemma} \noindent{\bf Proof.\ } Suppose that the claim of the lemma is not true; let $B$ be a counterexample with minimum number of vertices. It is easy to verify the lemma when $B$ is a 4-cycle, or any even cycle. By Lemma \ref{plainchain}, $B-x$ is a plain chain of blocks $$B-x=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ Let $Q$ and $Q'$ be the $xy$-paths in $C$. Let $b_0\in V(B_1)$ and $b_n\in V(B_n)$ be the neighbors of $x$ in $Q$ and $Q'$, respectively. We set $B_0=\emptyset$ (to avoid ambiguity in the following definitions). 
Let $k\in [n]$ be such that $y\in V(B_{k})\setminus V(B_{k-1})$, and let $k_j\in [n]$ be such that $u_j\in V(B_{k_j})\setminus V(B_{k_j-1})$ for $j=1,2$ (if $x\in \{u_1,u_2\}$ this applies only to $k_1$ and we set $u_2=x$). We may assume, without loss of generality, that $y\neq b_0$ (otherwise $y\neq b_n$ and we have a similar proof) and that $k_1\leq k_2$. We shall construct a $xy$-path $P$ in $B$. For $i\in [k]$, if $B_i$ is trivial define ${\mathcal C_i}=\emptyset$ and $P_i=B_i$. In the sequel we define ${\mathcal C_i}$ and $P_i$ for nontrivial blocks $B_i$. By minimality of $B$, Lemma \ref{to} is true for every nontrivial block $B_i$ of $B-x$ and therefore, for every $i\in [k]$ we can apply the statement of Lemma \ref{to} to $B_i$. Denote the outer cycle of $B_i$ by $C_i$. Since $Q_i=B_i\cap C$ is a $b_{i-1}b_i$-path in $C_i$ and every vertex of $V(C_i)\setminus V(Q_i)$ is of degree at least 3 in $B_i$, we can also apply Lemma \ref{posebna} to $B_i$. The following statements are obtained either by an application of Lemma \ref{to} or Lemma \ref{posebna} to $B_i$. For $i\in [k-1]$ and $j=1,2$ there exists: \begin{itemize} \item[(i)] a $(b_{i-1},b_i;b_{i-1},b_i)$-set of chains in $B_i$ (by Lemma \ref{posebna}), \item[(ii)] a $(b_{k-1},y;b_{k-1},b_k)$-set of chains in $B_k$ (by minimality of $B$ (i.e. 
by the statement of Lemma \ref{to}) if $y\neq b_k$, and by Lemma \ref{posebna} if $y=b_k$), \item[(iii)] a $(b_{k_j-1},b_{k_j};b_{k_j-1},u_{j})$-set of chains in $B_{k_j}$ (by minimality of $B$ if $u_j\neq b_{k_j}$, and by Lemma \ref{posebna} if $u_j=b_{k_j}$), \item[(iv)] if $k_j=k$ and $u_j\neq y$, there is a $(b_{k-1},y;b_{k-1},u_j)$-set of chains in $B_k$ (by minimality of $B$), \item[(v)] if $k_j=k$ and $u_j\neq b_k$, there is a $(b_{k-1},y;u_j,b_{k})$-set of chains in $B_k$ (by minimality of $B$), \item[(vi)] if $k_1=k_2$, there is a $(b_{{k_1}-1},b_{k_1};u_1,u_2)$-set of chains in $B_{k_1}$ (by minimality of $B$), \item[(vii)] if $k_1=k_2=k$, there is a $(b_{k-1},y;u_1,u_2)$-set of chains in $B_k$ (by minimality of $B$). \end{itemize} Since $\{x,y\}\neq \{u_1,u_2\}$ we may assume, without loss of generality, that $y\notin \{u_1,u_2\}$. Therefore we have the following possibilities: (1) $u_1,u_2\notin V(Q')$, (2) $u_1\notin V(Q'),u_2\notin V(Q)$, (3) $u_2=x$ and $u_1\notin V(Q')$. All other possibilities are symmetric, and they can be obtained from one of the above cases by exchanging the roles of $Q$ and $Q'$; for example, $u_1,u_2\notin V(Q')$ is symmetric to $u_1,u_2\notin V(Q)$. Therefore we can also assume that $u_1\notin V(Q')$. With this assumption the following cases with regard to $k, k_1$ and $k_2$ may appear. Next to each particular case below we also write which of the above statements we use to prove the existence of a $(x,y;u_1,u_2)$-set of chains in $B$. Later we give detailed arguments. \begin{itemize} \item[(a)] $k_1<k_2<k$, we use (i) for $i\in [k-1]\setminus \{k_1,k_2\}$, (iii) for $j\in [2]$, and (ii). \item[(b)] $k_1<k_2=k$, we use (i) for $i\in [k-1]\setminus \{k_1\}$, (iii) for $j=1$, and (iv) for $j=2$. \item[(c)] $k_1<k<k_2$, we use (i) for $i\in [k-1]\setminus \{k_1\}$, (iii) for $j=1$, and (ii). \item[(d)] $k_1=k_2<k$, we use (i) for $i\in [k-1]\setminus \{k_1\}$, (vi) for $j=1$, and (ii). 
\item[(e)] $k_1=k_2=k$, we use (i) for $i\in [k-1]$, and (vii). \item[(f)] $k_1<k$ and $u_2=x$, we use (i) for $i\in [k-1]\setminus \{k_1\}$, and (ii). \item[(g)] $k_1=k$ and $u_2=x$, we use (i) for $i\in [k-1]$, and (v) for $j=1$. \end{itemize} We prove cases (a), (c) and (g) in detail. Cases (b),(d) and (e) are similar to case (a), and case (f) is similar to case (g), so here we skip details. {\em Case (a).} Suppose that $k_1<k_2<k$. By (i), there is a $(b_{i-1},b_i;b_{i-1},b_i)$-set of chains ${\mathcal C_i}$ in $B_i$, for $i\in [k-1]\setminus \{k_1,k_2\}$. By (iii), there is a $(b_{k_j-1},b_{k_j};b_{k_j-1},u_j)$-set of chains ${\mathcal C_{k_j}}$ in $B_{k_j}$ for $j=1,2$. By (ii), there is a $(b_{k-1},y;b_{k-1},b_k)$-set of chains ${\mathcal C_k}$ in $B_k$. Denote by $P_{i}$ a ${\mathcal C_i}$-path in $B_{i}$, for $i\in [k]$. By Lemma \ref{skupek}, $G_0=B- \bigcup_{i=1}^{k}V(B_i)$ is a good chain with respect to $x$. Let $P_0$ be the path $x,b_0$. Define $P=\bigcup_{i=0}^{k} P_i$ (and recall that $P_i=B_i$, if $B_i$ is trivial). Then ${\mathcal C}=\{G_0\}\cup \bigcup_{i=1}^k{\mathcal C_i}$ is a $(x,y;u_1,u_2)$-set of chains in $B$. {\em Case (c).} Suppose that $k_1<k<k_2$. By (i) there is a $(b_{i-1},b_i;b_{i-1},b_i)$-set of chains ${\mathcal C_i}$ in $B_i$, for $i\in [k-1]\setminus \{k_1\}$. By (iii) there is a $(b_{k_1-1},b_{k_1};b_{k_1-1},u_1)$-set of chains ${\mathcal C_{k_1}}$ in $B_{k_1}$. By (ii) there is a $(b_{k-1},y;b_{k-1},b_k)$-set of chains ${\mathcal C_k}$ in $B_k$. Since ${\mathcal C}_{k}$ is a $(b_{k-1},y;b_{k-1},b_k)$-set of chains in $B_k$, by Lemma \ref{spajanje} there is a $(b_{k-1},y;b_{k-1},u_2)$-set of chains ${\mathcal D}_{k}$ in $\bigcup_{i=k}^{k_2} B_i$. By Lemma \ref{skupek}, $G_1=B- \bigcup_{i=1}^{k_2}V(B_i)$ is a good chain with respect to $x$. Then $G_1$ together with chains in ${\mathcal C_i},i\in [k-1]$ and ${\mathcal D}_{k}$ forms a $(x,y;u_1,u_2)$-set of chains in $B$ (the corresponding path is $P=\bigcup_{i=0}^{k} P_i$). 
{\em Case (g).} By (i) there is a $(b_{i-1},b_i;b_{i-1},b_i)$-set of chains ${\mathcal C_i}$ in $B_i$, for $i\in [k-1]$. Since $u_1\notin V(Q')$ by assumption, we have $u_1\neq b_k$. By (v), there is a $(b_{k-1},y;u_1,b_{k})$-set of chains ${\mathcal C_k}$ in $B_k$. By Lemma \ref{spajanje} there is a $(b_{k-1},y;u_1)$-set of chains ${\mathcal F}_{k}$ in $ \bigcup_{i=k}^{n} B_i$. Let $P_{i}$ be a ${\mathcal C_i}$-path in $B_{i}$, for $i\in [k]$, and define $P=\bigcup_{i=0}^{k} P_i$. Then chains in ${\mathcal C_i},i\in [k-1]$ and ${\mathcal F}_{k}$ form a $(x,y;u_1,x)$-set of chains in $B$, with $P$ being the corresponding path. $\square$ \begin{definition}\label{defcik} Let $B$ be a circuit graph with outer cycle $C$, and let $u_1,u_2, u_3\in V(C)$. A set of pairwise disjoint chains ${\mathcal C}=\{G_1,\ldots,G_k\}$ is a $[u_1,u_2,u_3]$-set of chains in $B$ if there exists an even cycle $C'$ in $B$ such that \begin{itemize} \item[(i)] $V(B)\setminus V(C')\subseteq \bigcup_{i=1}^{k}V(G_i)$ \item[(ii)] For $i\in [k]$, $G_i$ intersects $C'$ in exactly one vertex $x_i$, and $G_i$ is a good chain with respect to $x_i$. \item[(iii)] For $j\in [3]$, either $G_i$ is a good chain with respect to $u_j$ and $x_i$ for some $i\in [k]$, or $u_j \notin \bigcup_{i=1}^{k}V(G_i)$. \end{itemize} A cycle $C'$ that fulfills (i),(ii) and (iii) is called a ${\mathcal C}$-cycle. \end{definition} If ${\mathcal C}$ fulfills (i),(ii) and (iii) for $j=1,2$, then ${\mathcal C}$ is a $[u_1,u_2]$-set of chains in $B$. \begin{lemma} \label{cikli} Let $B$ be a bipartite circuit graph with outer cycle $C$, and let $u_1,u_2,u_3$ be any vertices of $C$. If all internal vertices of $B$ are of degree at least 4, then there exists a $[u_1,u_2,u_3]$-set of chains in $B$. 
\end{lemma} \noindent{\bf Proof.\ } By Lemma \ref{plainchain}, $B-u_3$ is a plain chain of blocks $$B-u_3=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ Let $k_i\in [n]$, $i=1,2$, be such that $u_i\in V(B_{k_i})\setminus V(B_{k_i-1})$ (here we set $B_0=\emptyset$). For $i\in [n]$, define $P_i=B_i$ and ${\mathcal C_i}=\emptyset$, if $B_i$ is trivial. In the sequel we define $P_i$ and ${\mathcal C_i}$ for nontrivial blocks $B_i$. {\em Case 1: $k_1\neq k_2$}. By Lemmas \ref{posebna} and \ref{to} there is \begin{itemize} \item[(i)] a $(b_{i-1},b_i;b_{i-1})$-set of chains ${\mathcal C_i}$ in $B_i$, if $i\notin \{k_1,k_2\}$, \item[(ii)] a $(b_{i-1},b_i;b_{i-1},u_j)$-set of chains ${\mathcal C_i}$ in $B_i$, if $i=k_j$ for $j=1,2$. \end{itemize} Let $P_i$ be the corresponding ${\mathcal C_i}$-path in $B_i$, for $i\in [n]$. Let $b_0\in V(B_1)$ and $b_n\in V(B_n)$ be the neighbors of $u_3$ in $C$ and define $P_0=u_3,b_0$ and $P_{n+1}=b_n,u_3$. Define $C'=\bigcup_{i=0}^{n+1} P_i$ and $${\mathcal C}=\bigcup_{i=1}^n {\mathcal C_i}\,.$$ Then ${\mathcal C}$ is a $[u_1,u_2,u_3]$-set of chains, and $C'$ is a corresponding ${\mathcal C}$-cycle. {\em Case 2: $k_1= k_2$}. By Lemmas \ref{posebna} and \ref{to} there is \begin{itemize} \item[(i)] a $(b_{i-1},b_i;b_{i-1},b_i)$-set of chains ${\mathcal C_i}$ in $B_i$, if $i\neq k_1$, \item[(ii)] a $(b_{k_1-1},b_{k_1};u_1,u_2)$-set of chains ${\mathcal C_{k_1}}$ in $B_{k_1}$. \end{itemize} The rest of the proof is the same as above. $\square$ \begin{lemma} \label{bipartite} Let $B$ be a non-bipartite circuit graph with outer cycle $C$. Suppose that $x,y\in V(C)$ and that $Q$ is a $xy$-path in $C$. If all internal vertices of $B$ are of degree at least 4, every vertex in $V(C)\setminus V(Q)$ is of degree at least 3 in $B$, and $B-x$ is bipartite, then for any vertex $u\in V(C-x)$ there is an odd and an even $(x,y;u)$-set of chains in $B$. 
\end{lemma} \noindent{\bf Proof.\ } We claim that for any neighbor $z$ of $x$, there is a $(x,y;u)$-set of chains ${\mathcal C}$ in $B$ such that a ${\mathcal C}$-path contains the edge $xz$. Before we prove the claim let us see how we prove the lemma using this claim. Since $B$ is non-bipartite and 2-connected, there is an odd cycle $C'$ containing $x$. Let $x_1$ and $x_2$ be the neighbors of $x$ in $C'$. Let $R$ be the $x_1x_2$-path in $C'$ not containing $x$. Suppose that $R_i$ is a $x_iy$-path in $B-x$ for $i=1,2$. Since $B-x$ is bipartite, $R_1\cup R_2\cup R$ is an even closed walk, and since $R$ is odd, $R_1$ and $R_2$ have different parities. It follows that every $x_1y$-path in $B-x$ is odd, and every $x_2y$-path in $B-x$ is even (or vice-versa). Using the above claim and setting $z=x_1$ (resp. $z=x_2$) we get an even (resp. an odd) $(x,y;u)$-set of chains in $B$. In the rest of the proof we prove the claim. Let $$B-x=B_1,b_1,B_2,\ldots,b_{n-1}, B_n$$ and suppose that $xz\in E(B)$. By Lemma \ref{osnovna2}, $|V(B_i\cap Q)|\geq 2$ for $i\in [n]$, hence we may assume that $y\in V(B_n)\setminus V(B_{n-1})$. Let $k_u,k_{z}\in [n]$ be such that $u\in V(B_{k_u})\setminus V(B_{k_u+1})$ and $z\in V(B_{k_{z}})\setminus V(B_{k_{z}+1})$ (here we set $B_{n+1}=\emptyset$). It follows from these definitions that $u\neq b_{k_u}$ and $z\neq b_{k_z}$. We shall construct a $(x,y;u)$-set of chains ${\mathcal C}$ in $B$, and a ${\mathcal C}$-path $P$ in $B$, so that $P$ contains the edge $xz$. We distinguish several cases with regard to $k_u$ and $k_z$. In each case we define $P_i=B_i$ and ${\mathcal C_i}=\emptyset$, if $B_i$ is trivial and $i\in [n]$. Now we treat different cases and define $P_i$ and ${\mathcal C_i}$, if $B_i$ is nontrivial. Suppose that $k_{z}<k_u<n$. 
By Lemma \ref{posebna} and Lemma \ref{to} there is \begin{itemize} \item[(i)] a $(z,b_{k_z};b_{{k_z}-1},b_{k_z})$-set of chains in $B_{k_z}$, where $k_z\neq k_u$ \item[(ii)] a $(b_{i-1},b_i;b_{i-1},b_i)$-set of chains in $B_i$, for $k_z< i< n,i\neq k_u$ \item[(iii)] a $(b_{n-1},y;b_{n-1})$-set of chains in $B_{n}$, where $n\neq k_u$ \item[(iv)] a $(b_{k_u-1},b_{k_u};u)$-set of chains in $B_{k_u}$. \end{itemize} Let ${\mathcal C_i}$ be the set of chains in $B_i$ (as defined above), and let $P_i$ be a ${\mathcal C_i}$-path for $i\in [n]\setminus [k_z-1]$. By Lemma \ref{skupek}, $G_0=B-\bigcup_{i=k_z}^{n} V(B_i)$ is a good chain with respect to $x$ in $B$ (if $k_z=1$ this is irrelevant). Let $P_0$ be the path $x,z$ and define $P=P_0\cup \bigcup_{i=k_z}^{n} P_i$. Then $${\mathcal C}=\{G_0\} \cup \bigcup_{i=k_z}^{n} {\mathcal C_{i}}$$ is a $(x,y;u)$-set of chains in $B$ and $P$ is a ${\mathcal C}$-path. If $k_z=k_u<n$ we use (ii) and (iii), and instead of (iv) we use \begin{itemize} \item[(v)] there is a $(z,b_{k_u};u)$-set of chains in $B_{k_u}$. \end{itemize} The rest of the proof is the same as above. If $k_z<k_u=n$ we use (i) and (ii), and instead of (iv) we use \begin{itemize} \item[(vi)] there is a $(b_{n-1},y;u)$-set of chains in $B_{n}$ \end{itemize} and the rest of the proof is (again) the same as above (note that (v) and (vi) follow from Lemma \ref{to}). If $k_u<k_z<n$ then we use (i),(ii) and (iii). By Lemma \ref{spajanje} and (i), there is a $(z,b_{k_z}; b_{k_z},u)$-set of chains ${\mathcal F_{k_z}}$ in $\bigcup_{i=k_u}^{k_z} B_i$. By Lemma \ref{skupek}, $G_1=B-\bigcup_{i=k_u}^{n} V(B_i)$ is a good chain with respect to $x$ in $B$. It follows that $${\mathcal C}=\{G_1\}\cup {\mathcal F_{k_z}} \cup \bigcup_{i=k_z+1}^{n} {\mathcal C_{i}}$$ is a $(x,y;u)$-set of chains in $B$. The path $P$ (as defined above) is a ${\mathcal C}$-path. This proves the claim when $k_z\neq n$. Assume now that $k_z=n$. 
If $k_z=k_u=n$ and $z\neq y$ then, by Lemma \ref{to}, there is \begin{itemize} \item[(vii)] a $(z,y;u)$-set of chains ${\mathcal H_{n}}$ in $B_{n}$. \end{itemize} By Lemma \ref{skupek}, $G_{2}=B- V(B_n)$ is a good chain with respect to $x$ in $B$. It follows that ${\mathcal C}=\{G_{2}\}\cup {\mathcal H_{n}} $ is a $(x,y;u)$-set of chains in $B$, and $P$ (as defined above) is a ${\mathcal C}$-path. If $k_z=k_u=n$ and $z=y=u$ then $xz$ is an edge of $C$ (recall that $y\neq b_{n-1}$ and that $y$ is an external vertex of $B$). By Lemma \ref{osnovna}, $G_3=B-y$ is a good chain with respect to $x$. Therefore ${\mathcal C}=\{G_3\}$ is a $(x,y;u)$-set of chains, where a ${\mathcal C}$-path in $B$ is the path $x, y$. If $z=y\neq u$ then $B_n$ is a good chain with respect to $y$ and $u$, and $G_2$ is a good chain with respect to $x$. It follows that $\{B_n,G_2\}$ is a $(x,y;u)$-set of chains in $B$; again a ${\mathcal C}$-path in $B$ is the path on two vertices $x,y$. Finally, suppose that $k_u<k_z=n$. If $z\neq y$, then there is \begin{itemize} \item[(viii)] a $(z,y;b_{n-1})$-set of chains in $B_{n}$. \end{itemize} By Lemma \ref{spajanje} and (viii), there is a $(z,y;u)$-set of chains ${\mathcal I_{n}}$ in $\bigcup_{i=k_u}^{n} B_i$. In this case ${\mathcal C}=\{G_{1}\}\cup {\mathcal I_{n}}$ is a $(x,y;u)$-set of chains in $B$. If $z=y$, then $G_4=\bigcup_{i=k_u}^{n} B_i$ is a good chain with respect to $u$ and $y$, hence ${\mathcal C}=\{G_{1},G_4\} $ is a $(x,y;u)$-set of chains in $B$. This proves the claim, and hence also the lemma. $\square$ \begin{theorem}\label{main} Let $B$ be a non-bipartite circuit graph with outer cycle $C$, and let $x,y\in V(C)$. If all internal vertices of $B$ are of degree at least 4, then for any vertex $u\in V(C)$ there is a $(x,y;u)$-set of chains in $B$. 
Moreover, if $Q$ is a $xy$-path in $C$ such that every vertex in $V(C)\setminus V(Q)$ is of degree at least 3 in $B$, then for any vertex $u\in V(Q)$ there is an odd and an even $(x,y;u)$-set of chains in $B$. \end{theorem} \noindent{\bf Proof.\ } Suppose the theorem is not true. Let $B$ be a counterexample of minimum order. We may assume that $u\neq x$ (otherwise $u\neq y$, and the proof is analogous). By Lemma \ref{plainchain}, $B-x$ is a plain chain of blocks $$B-x=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ Let $k_1,k_2\in [n]$ be such that $u\in V(B_{k_1})\setminus V(B_{{k_1-1}})$ and $y\in V(B_{k_2})\setminus V(B_{k_2-1})$ (here we set $B_0=\emptyset$). Let $b_0\in V(B_1)$ and $b_n\in V(B_n)$ be the neighbors of $x$ in $C$. We may assume that $xb_0$ is an edge of $Q$ and $xb_n$ is not an edge of $Q$, and that $u\in V(Q)$ (the last sentence of the theorem assumes $u\in V(Q)$, and for the proof of the first part of the theorem $u\in V(Q)$ may be assumed without loss of generality). Since $u\in V(Q)$ we have $k_1\leq k_2$. We give two constructions. In both constructions we define $P_i=B_i$ and ${\mathcal C_{i}}=\emptyset$, if $B_i$ is trivial. In the sequel we treat nontrivial blocks $B_i$. \\ {\em Construction A.} If $k_1<k_2$ then, by minimality of $B$ (if $B_i$ is non-bipartite) and by Lemma \ref{to} (if $B_i$ is bipartite), there is \begin{itemize} \item[(i)] a $(b_{i-1},b_i;b_i)$-set of chains in $B_i$, for $i\in [k_1-1]$, \item[(ii)] a $(b_{k_1-1},b_{k_1};u)$-set of chains in $B_{k_1}$, \item[(iii)] a $(b_{i-1},b_i;b_{i-1})$-set of chains in $B_i$, for $i\in [k_2-1]\setminus [k_1]$, \item[(iv)] a $(b_{k_2-1},y;b_{k_2-1})$-set of chains in $B_{k_2}$, \end{itemize} and if $k_1=k_2$ and $y\neq b_0$ there is \begin{itemize} \item[(v)] a $(b_{k_2-1},y;u)$-set of chains in $B_{k_2}$. \end{itemize} Note that for $i\in [k_2-1]$ every vertex in $V(C_i)\setminus V(Q_i)$ is of degree at least 3 in $B_i$, where $C_i$ is the outer cycle of $B_i$ and $Q_i=Q\cap B_i$. 
By minimality of $B$ we may apply the (last) statement of the theorem to $B_i$, if $B_i$ is non-bipartite. Hence, if $B_i$ is non-bipartite for some $i\in [k_2-1]$, there is an odd and an even set of chains for (i), (ii) and (iii). Additionally, if $B_{k_2}$ is non-bipartite and $y=b_{k_2}$, then there is also an odd and an even set of chains ${\mathcal C_{k_2}}$ for (iv) and (v) (by minimality of $B$). Denote by ${\mathcal C_i}$ the set of chains in $B_i$ defined by (i)-(iv) if $k_1<k_2$; and defined by (i) and (v) if $k_1=k_2$ and $y\neq b_0$. The ${\mathcal C_i}$-path is denoted by $P_i$, for $i\in [k_2]$. Let $P_0$ be the path $x,b_0$. Define $P=\bigcup_{i=0}^{k_2}P_i$. By Lemma \ref{skupek}, $G_1=B-\bigcup_{i=1}^{k_2}B_i$ is a good chain with respect to $x$ in $B$. Hence $${\mathcal C}=\{G_1\}\cup \bigcup_{i=1}^{k_2} {\mathcal C_i}$$ is a $(x,y;u)$-set of chains in $B$. The path $P$ is a ${\mathcal C}$-path in $B$. Moreover, if a block $B_i, i\in [k_2-1]$ is non-bipartite, then there exists an odd and an even set of chains ${\mathcal C_i}$ in $B_i$, and so ${\mathcal C}$ is an odd or an even set of chains depending on the choice of ${\mathcal C_i}$. If $y=b_0$ then $u=y=b_0$ (by our assumptions $u\in V(Q)$ and $u\neq x$). In this case $G_2=B-y$ is a good chain with respect to $x$, by Lemma \ref{osnovna}. Hence, ${\mathcal C}=\{G_2\}$ is a $(x,y;u)$-set of chains, and $P=x,y$ is the corresponding ${\mathcal C}$-path. This proves the first claim of the theorem; and also the second claim of the theorem if $B_i$ is non-bipartite for some $i\in [k_2-1]$, or if $B_{k_2}$ is non-bipartite and $y=b_{k_2}$ (note that this, in particular, proves the theorem for the case where $y=b_n$ and $B-x$ is non-bipartite). To finish the proof of the second claim of the theorem we give construction B, in which we assume that every vertex in $V(C)\setminus V(Q)$ is of degree at least 3 in $B$. 
We also assume that $B_i$ is bipartite for $i\in [k_2-1]$, and if $y=b_{k_2}$ then $B_i$ is bipartite for $i\in [k_2]$. \\ {\em Construction B.} If $k_1<k_2$ and $y\neq b_{k_2}$ then, by minimality of $B$ (if $B_i$ is non-bipartite) and by Lemma \ref{to} (if $B_i$ is bipartite), there is \begin{itemize} \item[(vi)] a $(b_{i-1},b_i;b_{i-1})$-set of chains in $B_i$, for $i\in [n]\setminus [k_2]$, \item[(vii)] a $(b_{k_2},y;b_{k_2-1})$-set of chains in $B_{k_2}$, \end{itemize} and if $k_1=k_2$ and $y\neq b_{k_2}$ there is \begin{itemize} \item[(viii)] a $(b_{k_2},y;u)$-set of chains in $B_{k_2}$. \end{itemize} Denote by ${\mathcal C_i}$ the set of chains in $B_i$ defined by (vi) and (vii) if $k_1<k_2$ and $y\neq b_{k_2}$; and defined by (vi) and (viii) if $k_1=k_2$ and $y\neq b_{k_2}$. The ${\mathcal C_i}$-paths are denoted by $P_i$, for $i\in [n]\setminus [k_2-1]$. Suppose that $y\neq b_{k_2}$ and $k_1< k_2$. Let $R_1$ and $R_2$ be the $yb_{k_2}$-paths in $C_{k_2}$ (where $C_{k_2}$ is the outer cycle of $B_{k_2}$), and assume $b_{k_2-1}\in V(R_1)$. Since every vertex in $V(C)\setminus V(Q)$ is of degree at least 3 in $B$, we find that every vertex in $V(C_{k_2})\setminus V(R_1)$ is of degree at least 3 in $B_{k_2}$. Therefore, by minimality of $B$, if $B_{k_2}$ is non-bipartite there is an odd and an even set of chains ${\mathcal C_{k_2}}$ for (vii) and (viii). Moreover, if $B_i$ is non-bipartite there exist odd and even sets of chains ${\mathcal C_{i}}$ for $i\in [n]\setminus [k_2]$, as given by (vi). If $u\neq b_{k_1}$ then, by Lemma \ref{spajanje} (see notes directly after Lemma \ref{spajanje}) and (vii), there is a $(b_{k_2},y;u)$-set of chains ${\mathcal D_{k_2}}$ in $ \bigcup_{i=k_1}^{k_2} B_i$ (recall the assumption that $B_i$ is bipartite for $i\in [k_2-1]$). By Lemma \ref{skupek}, $G_3=B-\bigcup_{i=k_1}^{n}B_i$ is a good chain with respect to $x$. Let $P_{n+1}$ be the path $x,b_n$. Define $P=\bigcup_{i=k_2}^{n+1}P_i$. 
Then $${\mathcal C}=\{G_3\}\cup {\mathcal D_{k_2}}\cup \bigcup_{i=k_2+1}^{n} {\mathcal C_i}$$ is a $(x,y;u)$-set of chains in $B$. The corresponding ${\mathcal C}$-path is $P$. If $u=b_{k_1}$ the construction of a $(x,y;u)$-set of chains in $B$ is analogous to the case $u\neq b_{k_1}$ (the only difference is that ${\mathcal D_{k_2}}$ is a $(b_{k_2},y;u)$-set of chains in $ \bigcup_{i=k_1+1}^{k_2} B_i$, and $G_3=B-\bigcup_{i=k_1+1}^{n}B_i$). If $y\neq b_{k_2}$ and $k_1=k_2$, we use (vi) and (viii). In this case $${\mathcal C}=\{G_3\}\cup \bigcup_{i=k_2}^{n} {\mathcal C_i}$$ is a $(x,y;u)$-set of chains in $B$. If $B_i$ is non-bipartite for some $i\in [n]\setminus [k_2-1]$ then we can choose ${\mathcal C}_i$ so that ${\mathcal C}$ is an odd or an even $(x,y;u)$-set of chains in $B$ (in all cases above). This proves the second claim of the theorem if $y\neq b_{k_2}$ and $B_i$ is non-bipartite for some $i\in [n]\setminus [k_2-1]$. Suppose now that $y=b_{k_2},k_2\neq n$ and $u\neq b_{k_1}$. If we use (vi) for $i=k_2+1$, we find that ${\mathcal C_{k_2+1}}$ is a $(b_{k_2},b_{k_2+1};b_{k_2})$-set of chains in $B_{k_2+1}$. Hence, by Lemma \ref{spajanje} (see notes after Lemma \ref{spajanje}), there is a $(b_{k_2},b_{k_2+1};u)$-set of chains ${\mathcal F_{k_2+1}}$ in $ \bigcup_{i=k_1}^{k_2+1} B_i$ (recall the assumption that $B_i$ is bipartite for $i\in [k_2]$ if $y=b_{k_2}$). Then $${\mathcal C}=\{G_3\}\cup {\mathcal F_{k_2+1}}\cup \bigcup_{i=k_2+2}^{n} {\mathcal C_i}$$ is a $(x,y;u)$-set of chains in $B$. The corresponding ${\mathcal C}$-path is $P$. If $y=b_{k_2}, k_2\neq n$ and $u=b_{k_1}$, then let ${\mathcal H_{k_2+1}}$ be a $(b_{k_2},b_{k_2+1};u)$-set of chains in $ \bigcup_{i=k_1+1}^{k_2+1} B_i$ (it exists by Lemma \ref{spajanje}) and define $G_4=B-\bigcup_{i=k_1+1}^{n} B_i$. In this case ${\mathcal C}=\{G_4\}\cup {\mathcal H_{k_2+1}}\cup \bigcup_{i=k_2+2}^{n} {\mathcal C_i}$ is a $(x,y;u)$-set of chains in $B$. 
Observe that, if $B_i$ is non-bipartite for some $i\in [n]\setminus[k_2]$, then we can choose $P_i$, and hence also $P$, so that ${\mathcal C}$ is odd or even. This proves the second claim of the theorem if $y=b_{k_2}$ ($k_2\neq n$) and $B_i$ is non-bipartite for some $i\in [n]\setminus [k_2]$. The last case to consider is when $B_i$ is bipartite for $i\in [n]$. In this case $B-x$ is bipartite and the theorem follows from Lemma \ref{bipartite}. $\square$ The proof of the following lemma is similar to the proof of Lemma \ref{spajanje} (so we skip this proof). \begin{lemma} \label{spajanje1} Let $G$ be a bipartite plain chain of blocks $$G=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ Let $u\in V(B_j)\setminus \{b_{j-1},b_j\}$ for some $j\in \{2,\ldots,n-1\}$, and suppose that there exists a $[b_{j-1},b_j,u]$-set of chains in $B_j$. Then for any $v'\in V(B_1)\setminus V(B_{2})$ and $v''\in V(B_n)\setminus V(B_{n-1})$, there exists a $[v',v'',u]$-set of chains in $G$. \end{lemma} \begin{theorem} \label{glavni} Let $B$ be a non-bipartite circuit graph with outer cycle $C$. Suppose that $x,y\in V(C)$ and that $B$ is good with respect to $x$ and $y$. If every internal vertex of $B$ is of degree at least 4 and $B$ is not an odd cycle, then there exists a $[x,y]$-set of chains in $B$. \end{theorem} \noindent{\bf Proof.\ } By Lemma \ref{plainchain}, $B-x$ is a plain chain of blocks $$B-x=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ Let $b_0\in V(B_1)$ and $b_n\in V(B_n)$ be the neighbors of $x$ in $C$. Let $k \in [n]$ be such that $y\in V(B_{k})\setminus V(B_{{k-1}})$ (here we set $B_0=\emptyset$). Suppose that at least one block $B_i$ is non-bipartite. If $B_i$ is trivial, define ${\mathcal C_i}=\emptyset$ and $P_i=B_i$. 
Otherwise, by Theorem \ref{main} and Lemma \ref{to} there is \begin{itemize} \item[(i)] a $(b_{i-1},b_i;b_{i})$-set of chains in $B_i$, for $i\in [k-1]$, \item[(ii)] a $(b_{k-1},b_k;y)$-set of chains in $B_{k}$, \item[(iii)] a $(b_{i-1},b_i;b_{i-1})$-set of chains in $B_{i}$ for $i\in [n]\setminus [k]$. \end{itemize} Denote by ${\mathcal C_i}$ the set of chains in $B_i$ defined by (i), (ii) and (iii). The ${\mathcal C_i}$-paths are denoted by $P_i$, for $i\in [n]$. Let $P_0=x,b_0$ and $P_{n+1}=b_n,x$, and define $C'=\bigcup_{i=0}^{n+1} P_i$. Since $B_i$ is non-bipartite for some $i\in [n]$, there is an odd and an even set of chains ${\mathcal C_i}$ for (i),(ii) or (iii). Hence, we can choose ${\mathcal C_i}$ and $P_i$ so that $C'$ is even, and therefore ${\mathcal C}= \bigcup_{i=1}^n{\mathcal C_i}$ is a $[x,y]$-set of chains in $B$. Suppose that all blocks $B_i, i\in [n]$ are bipartite. Then all odd faces of $B$ are incident to $x$. Define $B_{n+1}=P_{n+1}$. If $y\notin\{b_{k-1},b_k\}$, then by Lemma \ref{cikli} there exists a $[b_{k-1},b_k,y]$-set of chains ${\mathcal D_k}$ in $B_k$. Let $G=B_1,b_1,B_2,\ldots,b_{n-1}, B_n,b_n,B_{n+1}\,.$ By Lemma \ref{spajanje1} there is a $[x,y]$-set of chains in $G$, which is also a $[x,y]$-set of chains in $B$ (because $G$ is a spanning subgraph of $B$). Assume now that $y\in\{b_{k-1},b_k\}$. Suppose that $y\in \{b_0,b_n\}$. We may assume $y=b_0$. If a block $B_i$ of $G$ is nontrivial, then by Lemma \ref{cikli}, there is a $[b_{i-1},b_i]$-set of chains ${\mathcal F_i}$ in $B_i$. Therefore, by Lemma \ref{spajanje1} there is a $[x,y]$-set of chains in $G$, which is also a $[x,y]$-set of chains in $B$. Otherwise all blocks $B_i,i\in [n+1]$ are trivial. If $C$ is an even cycle, then $C$ itself is a $[x,y]$-set of chains in $B$. Otherwise $C$ is odd, and since $B$ is not an odd cycle, $C$ has a chord. Hence $B$ has an even cycle $C_0$ (which goes through $x$). 
Clearly, $C_0$ together with blocks $B_i$ such that $|V(B_i)\cap V(C_0)|\leq 1$ forms a $[x,y]$-set of chains in $B$. Hence we may assume that $y=b_k$ where $k\notin\{0,n\}$. Suppose that all bounded odd faces of $B$ are incident to $y$ (and recall that all bounded odd faces are incident to $x$). Then there are exactly one or two such faces. However, if there is exactly one bounded odd face in $B$, and this odd face is incident to $x$ and $y$, then $xy\in E(C)$ (follows from the fact that $B$ is good with respect to $x$ and $y$) and so $y\in \{b_0,b_n\}$. Therefore there are exactly two bounded odd faces in $B$ (both adjacent to $x$ and $y$). In this case the cycle $C'$ (defined above) bounds exactly two odd faces of $B$ and therefore $C'$ is even. Hence, ${\mathcal C}$ (defined above) is a $[x,y]$-set of chains in $B$. We may therefore assume that there is a bounded odd face $F$ of $B$, which is not incident to $y$, and that $y\notin \{b_0,b_n\}$. Let $xx_1$ and $xx_2$ be edges incident to $F$. Since $F$ is not incident to $y=b_k$ we may assume, without loss of generality, that $x_1,x_2\in \bigcup_{i=1}^{k} V(B_i)$. Since $F$ is an odd face and $G$ is bipartite, every $x_1x$-path in $G$ is odd and every $x_2x$-path in $G$ is even (or vice versa). Let $k'\in [k]$ be such that $x_1\in V(B_{k'})\setminus V(B_{k'+1})$. If $B_{k'}$ (resp.\ $B_i$) is nontrivial, then by Lemma \ref{posebna} and Lemma \ref{to} there is \begin{itemize} \item[(iv)] a $(x_1,b_{k'};b_{k'-1},b_{k'})$-set of chains ${\mathcal G_{k'}}$ in $B_{k'}$, \item[(v)] a $(b_{i-1},b_i;b_{i-1},b_i)$-set of chains ${\mathcal G_{i}}$ in $B_{i}$ for $i\in [n]\setminus [k']$. \end{itemize} Let $P_i$ be the ${\mathcal G_{i}}$-path in $B_i$ for $i\in [n]\setminus [k'-1]$ (if $B_i$ is trivial, define $P_i=B_i$ and ${\mathcal G_{i}}=\emptyset$), and define $C''=\bigcup_{i=k'}^{n+1}P_i \cup \{xx_1\}$ (recall that $P_{n+1}=b_n,x$). Since every $x_1x$-path in $G$ is odd, $C''$ is even.
By (iv) and Lemma \ref{spajanje}, there is a $(x_1,b_{k'};b_{k'})$-set of chains ${\mathcal H_{k'}}$ in $\bigcup_{i=1}^{k'}B_i$. Then ${\mathcal G}=\bigcup_{i=k'+1}^n {\mathcal G_{i}}\cup {\mathcal H_{k'}}$ is a $[x,y]$-set of chains in $B$, and $C''$ is a ${\mathcal G}$-cycle in $B$. $\square$ \begin{theorem}\label{final} Let $B$ be a circuit graph such that every internal vertex of $B$ is of degree at least 4. Then $B$ has a spanning bipartite cactus. \end{theorem} \noindent{\bf Proof.\ } Let $C$ be the outer cycle of $B$. We prove a slightly stronger statement: if $B$ is a circuit graph such that every internal vertex of $B$ is of degree at least 4, and $x,y\in V(C)$ are vertices such that $B$ is good with respect to $x$ and $y$, then $B$ has a spanning bipartite cactus $T$ such that $x$ and $y$ are contained in exactly one block of $T$. The proof is by induction on $|V(B)|$. The statement is clearly true if $B$ is an even cycle. If $B$ is an odd cycle and $B$ is good with respect to $x$ and $y$ then $x$ and $y$ are adjacent. A spanning $xy$-path in $B$ is a bipartite spanning cactus in $B$ such that $x$ and $y$ are contained in exactly one block of this cactus. If $B$ is not an odd cycle, then by Theorem \ref{glavni} (if $B$ is non-bipartite) and Lemma \ref{cikli} (if $B$ is bipartite), there is a $[x,y]$-set of chains ${\mathcal C}=\{G_1,\ldots,G_k\}$ in $B$. Let $C'$ be a ${\mathcal C}$-cycle. Note that each block $B'$ of a chain $G_i,i\in [k]$ is good with respect to (both) cutvertices of $G_i$ contained in $B'$. Moreover, either $x\notin \bigcup_{i=1}^k V(G_i)$ or a chain of ${\mathcal C}$ is good with respect to $x$ (a similar fact is true for $y$). Therefore we can use the induction hypothesis to obtain a spanning bipartite cactus $T(B')$ in $B'$ such that (both) cutvertices of $G_i$ contained in $B'$ are contained in exactly one block of $T(B')$.
Moreover the block $B_x$ that contains $x$ (if any) has a spanning bipartite cactus $T(B_x)$ such that $x$ is contained in exactly one block of $T(B_x)$ (a similar fact is true for $y$). Let ${\mathcal B}$ be the set of all blocks of $G_i,i\in [k]$ and define $T=C'\cup \bigcup_{B' \in {\mathcal B}}T(B')$. This gives the required bipartite cactus in $B$. $\square$ Let ${\mathcal P}$ be the class of 3-connected planar graphs whose prisms are not hamiltonian. We end the article with the problem of determining the minimum ratio of cubic vertices in a graph $G\in {\mathcal P}$. Let $V_3(G)$ denote the set of cubic vertices in $G$. \begin{problem} Determine the minimum $\epsilon$ such that there exist arbitrarily large graphs $G\in {\mathcal P}$ with $|V_3(G)|/|V(G)|<\epsilon$. In particular, can $\epsilon$ be arbitrarily small? \end{problem} \noindent {\bf Acknowledgement:} The author thanks Uro\v s Milutinovi\'c for proofreading parts of the final version of this paper. This work was supported by the Ministry of Education of Slovenia [grant numbers P1-0297, J1-9109]. \end{document}
\begin{document} \title{$C$-$(k, \ell)$-Sum-Free Sets} \begin{abstract} The Minkowski sum of two subsets $A$ and $B$ of a finite abelian group $G$ is defined as all pairwise sums of elements of $A$ and $B$: $A + B = \{ a + b : a \in A, b \in B \}$. The largest size of a $(k, \ell)$-sum-free set in $G$ has been of interest for many years and in the case $G = \mathbb{Z}/n\mathbb{Z}$ has recently been computed by Bajnok and Matzke. Motivated by sum-free sets of the torus, Kravitz introduced the \emph{noisy Minkowski sum} of two sets, which can be thought of as discrete evaluations of these continuous sumsets. That is, given a noise set $C$, the noisy Minkowski sum is defined as $A +_C B = A + B + C$. We give bounds on the maximum size of a $(k, \ell)$-sum-free subset of $\mathbb{Z}/n\mathbb{Z}$ under this new sum, for $C$ equal to an arithmetic progression with common difference relatively prime to $n$ and for any two-element set $C$. \end{abstract} \section{Introduction} Given a finite abelian group $G$ of order $n$, the \emph{Minkowski sum} of two subsets $A, B \subseteq G$ is the set of pairwise sums, defined to be \[ A + B = \{ a + b\ |\ a \in A, b \in B \}. \] We also use $A - B$ to denote the pairwise differences of elements of $A$ and $B$: \[ A - B = \{ a - b\ |\ a \in A, b \in B \}. \] Extending these definitions, for an integer $k \ge 1$, we define $kA = A + \dots + A$, where there are $k$ copies of $A$ in the summation. For integers $k, \ell \ge 1$, we say that $A$ is \emph{$(k, \ell)$-sum-free} if $kA \cap \ell A = \emptyset$. Let $\mu_{k, \ell}(G)$ denote the largest possible size of a $(k, \ell)$-sum-free subset, that is, \[ \mu_{k, \ell}(G) = \max \{ |A|\ |\ \text{$A \subseteq G$ is $(k, \ell)$-sum-free} \}. \] Note that if $k = \ell$, then $kA = \ell A$, so we may assume that $k > \ell$. When $k = 2$ and $\ell = 1$, we see that $\mu_{2, 1}(\mathbb{Z}/n\mathbb{Z})$ is simply the maximal size of a normal sum-free set in $\mathbb{Z}/n\mathbb{Z}$.
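For small moduli, these quantities can be checked directly from the definitions. The following brute-force sketch (our own illustration, not part of the paper; the helper names \texttt{fold\_sum} and \texttt{mu} are ours) computes $\mu_{k, \ell}(\mathbb{Z}/n\mathbb{Z})$ by exhaustive search.

```python
from itertools import combinations

def fold_sum(A, k, n):
    # k-fold sumset kA = A + ... + A (k copies) inside Z/nZ
    S = {0}
    for _ in range(k):
        S = {(s + a) % n for s in S for a in A}
    return S

def mu(n, k, l):
    # largest |A| over subsets A of Z/nZ with kA and lA disjoint
    for size in range(n, 0, -1):
        for A in combinations(range(n), size):
            if fold_sum(A, k, n).isdisjoint(fold_sum(A, l, n)):
                return size
    return 0
```

For instance, \texttt{mu(10, 2, 1)} returns $5$, attained by the odd residues $\{1, 3, 5, 7, 9\}$ (the sum of two odd residues is even).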
The value of $\mu_{2, 1}(\mathbb{Z}/n\mathbb{Z})$ was first calculated by Diananda and Yap in 1969. \begin{theorem}[\cite{diamanda-yap}, Lemma 3] For any positive integer $n$, \[ \mu_{2, 1}(\mathbb{Z}/n\mathbb{Z}) = \max_{d | n} \left\{ \left\lceil \frac{d-1}{3} \right\rceil \cdot \frac{n}{d} \right\}. \] \end{theorem} In 2003, Hamidoune and Plagne extended this result to $(k, \ell)$-sum-free sets for which $n$ and $k - \ell$ are relatively prime: \begin{theorem}[\cite{critical-pair}, Theorem 2.6] If $\gcd(n, k - \ell) = 1$, then \[ \mu_{k, \ell}(\mathbb{Z}/n\mathbb{Z}) = \max_{d | n} \left\{ \left\lceil \frac{d-1}{k + \ell} \right\rceil \cdot \frac{n}{d} \right\}. \] \end{theorem} In 2018, Bajnok and Matzke completed the determination of $\mu_{k, \ell}(\mathbb{Z}/n\mathbb{Z})$ by computing it even when $n$ and $k - \ell$ are not relatively prime. \begin{theorem}[\cite{bajnok-max}, Theorem 6] For positive integers $n, k, \ell$ with $k > \ell$, \[ \mu_{k, \ell}(\mathbb{Z}/n\mathbb{Z}) = \max_{d | n} \left\{ \left\lceil \frac{d - (\delta - r)}{k + \ell} \right\rceil \cdot \frac{n}{d} \right\}, \] where $\delta = \gcd(n, k - \ell)$, $f = \left\lceil \frac{d - \delta}{k + \ell} \right\rceil$, and $r$ is the remainder of $\ell f$ modulo $\delta$. \end{theorem} We'll be more interested in a slightly different version of the Minkowski sum, motivated by sum-free sets of the torus $\mathbb{T}$. In \cite{kravitz}, Kravitz suggests the following problem: Consider the map $\varphi: \mathcal{P}(\mathbb{Z}/n\mathbb{Z}) \rightarrow \mathcal{P}(\mathbb{T})$ from subsets of $\mathbb{Z}/n\mathbb{Z}$ to subsets of the torus defined by $\varphi(A) = \bigcup_{i \in A} \left( \frac{i}{n}, \frac{i + 1}{n} \right)$. Then, since $\left( \frac{i}{n}, \frac{i + 1}{n} \right) + \left( \frac{j}{n}, \frac{j + 1}{n} \right) = \left( \frac{i + j}{n}, \frac{i + j + 2}{n} \right)$, we have that $\varphi(A) + \varphi(B) = \varphi(A + B + \{ 0, 1 \})$.
That is, the normal Minkowski sum of the union of certain intervals in the torus corresponds to a new kind of sum of subsets of $\mathbb{Z}/n\mathbb{Z}$. With this motivation, Kravitz defines what we will call the \emph{noisy Minkowski sum} of two sets with a set $C$: given sets $A, B, C \subseteq G$, let \[ A +_C B = A + B + C = \{ a + b + c\ |\ a \in A, b \in B, c \in C \}. \] For a given set $C$, this operation can be understood as taking the normal Minkowski sum and adding some noise given by $C$. Then, define $k *_C A = kA + (k-1)C$. Note that when $C = \{ 0 \}$, we recover the normal Minkowski sum. The quantity we are interested in is \[ \mu_{k, \ell}^C(G) = \max \{ |A|\ |\ A \subseteq G,\ k *_C A \cap \ell *_C A = \emptyset \}, \] the maximum size of a \emph{$C$-$(k, \ell)$-sum-free set} of $G$. In his paper, Kravitz asks about the value of $\mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z})$. Note that maximal $\{ 0, 1 \}$-$(k, \ell)$-sum-free subsets of $\mathbb{Z}/n\mathbb{Z}$ correspond to maximal $(k, \ell)$-sum-free subsets of $\mathbb{T}$, restricting to sets of the form $\bigcup_{i \in I} \left( \frac{i}{n}, \frac{i + 1}{n} \right)$. Thus, calculating $\mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z})$ answers a less granular version of the problem of finding the largest $(k, \ell)$-sum-free set in the torus. In this paper, we address Kravitz's question about $\mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z})$ as well as give bounds on $\mu_{k, \ell}^C(\mathbb{Z}/n\mathbb{Z})$ for some other values of $C$.
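As a concrete illustration (ours, not the paper's; the helper names are hypothetical), the noisy Minkowski sum and the iterated sum $k *_C A$ translate directly into a short computation:

```python
def sumset(A, B, n):
    # Minkowski sum A + B in Z/nZ
    return {(a + b) % n for a in A for b in B}

def noisy_sum(A, B, C, n):
    # noisy Minkowski sum A +_C B = A + B + C
    return sumset(sumset(A, B, n), C, n)

def k_star(A, C, k, n):
    # k *_C A = kA + (k-1)C, built by iterating +_C
    S = set(A)
    for _ in range(k - 1):
        S = noisy_sum(S, A, C, n)
    return S
```

For example, in $\mathbb{Z}/10\mathbb{Z}$ with $C = \{0, 1\}$ and $A = \{4, 5, 6\}$, one gets $2 *_C A = \{8, 9, 0, 1, 2, 3\}$, which is disjoint from $1 *_C A = A$, so $A$ is $C$-$(2, 1)$-sum-free.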
In particular, for $C = \{ 0, 1, \dots, c-1 \}$, we prove the following bounds: \begin{theorem} \label{thm:01c-main} For $c \ge 2$ and $C = \{ 0, 1, \dots, c-1 \}$, we have \[ \left\lfloor \frac{n + 2(c-2) - r}{k + \ell} \right\rfloor - (c-2) \le \mu_{k, \ell}^C(\mathbb{Z}/n\mathbb{Z}) \le \left\lfloor \frac{n + 2(c-2)}{k + \ell} \right\rfloor - (c-2), \] where $\delta = \gcd(n, k - \ell)$, and $r$ is the remainder of $-k \cdot \left\lfloor \frac{n + 2(c-2)}{k + \ell} \right\rfloor + (c-2)$ modulo $\delta$. \end{theorem} Note that when $\gcd(n, k - \ell) = 1$, $r < \delta = 1$, so $r = 0$, and the two sides of the bound are equal. In this case (and in many others), by taking $c = 2$, this theorem gives an explicit answer to Kravitz's question. When the two sides are not equal, we have that $r < \delta = \gcd(n, k - \ell) \le k - \ell < k + \ell$, so the upper and lower bounds can differ by at most 1. We also consider two-element sets $C$ of the form $\{ 0, s \}$. We prove the following bounds on $\mu_{k, \ell}^{\{ 0, s \}}(\mathbb{Z}/n\mathbb{Z})$: \begin{theorem}\label{thm:0s-main} For $k > \ell$, we have \begin{align*} \mu_{k, \ell}^{\{ 0, s \}}(\mathbb{Z}/n\mathbb{Z}) &\ge \max \left\{ \max_{e | d} \left\{ \mu_{k, \ell}(\mathbb{Z}/e\mathbb{Z}) \cdot \frac{n}{e} \right\},\ \left\lfloor \frac{n + 2(s-1) - r}{k + \ell} \right\rfloor - (s - 1) \right\} \\ \mu_{k, \ell}^{\{ 0, s \}}(\mathbb{Z}/n\mathbb{Z}) &\le \max \left\{ \max_{e | d} \left\{ \mu_{k, \ell}(\mathbb{Z}/e\mathbb{Z}) \cdot \frac{n}{e} \right\},\ \left\lfloor \frac{n}{k + \ell} \right\rfloor \right\}, \end{align*} where $d = \gcd(n, s)$, $\delta = \gcd(n, k - \ell)$ and $r$ is the remainder of $-k\cdot \left\lfloor \frac{n + 2(s-1)}{k + \ell} \right\rfloor +(s - 1)$ modulo $\delta$. \end{theorem} When $\max_{e | d} \left\{ \mu_{k, \ell}(\mathbb{Z}/e\mathbb{Z}) \cdot \frac{n}{e} \right\} \ge \lfloor \frac{n}{k + \ell} \rfloor$, we have that the two sides are in fact equal, i.e.
\[ \mu_{k, \ell}^{\{ 0, s \}}(\mathbb{Z}/n\mathbb{Z}) = \max_{e | d} \left\{ \mu_{k, \ell}(\mathbb{Z}/e\mathbb{Z}) \cdot \frac{n}{e} \right\}. \] Another specific case is when $s = p$ is prime and $p$ divides $k - \ell$. Then $\mu_{k, \ell}(\mathbb{Z}/p\mathbb{Z}) = 0$, so our inequality simplifies to \[ \left\lfloor \frac{n + 2(p-1) - r}{k + \ell} \right\rfloor - (p - 1) \le \mu_{k, \ell}^{\{ 0, p \}}(\mathbb{Z}/n\mathbb{Z}) \le \left\lfloor \frac{n}{k + \ell} \right\rfloor. \] We will see in Section 2 that the bounds we've achieved for $C = \{ 0, 1, \dots, c-1 \}$ extend to any set that is an arithmetic progression of length $c$ with common difference relatively prime to $n$, and the bounds for $C = \{ 0, s \}$ hold for any two-element set. In Section 3, we prove Theorem~\ref{thm:01c-main}, and in Section 4, we prove Theorem~\ref{thm:0s-main}. Finally in Section 5, we discuss some open questions and conjectures. \section{Transformations of $C$} As there are many choices for $C$, we may seek to show that several sets are equivalent in the sense that they all have the same maximum size of a $C$-$(k, \ell)$-sum-free set. In this section we introduce two transformations of $C$ that preserve $\mu_{k, \ell}^C(\mathbb{Z}/n\mathbb{Z})$. \begin{proposition} \label{prop:shift} For any $g \in \mathbb{Z}/n\mathbb{Z}$, we have that $\mu_{k, \ell}^C (\mathbb{Z}/n\mathbb{Z}) = \mu_{k, \ell}^{C + \{ g \}}(\mathbb{Z}/n\mathbb{Z})$. \end{proposition} \begin{proof} We have that \begin{align*} k *_C A &= kA + (k-1)C \\ &= kA + (k-1)(C + \{ g \}) + k \{ -g \} + \{ g \} \\ &= k (A + \{ -g \}) + (k - 1)(C + \{ g \}) + \{ g \} \\ &= k *_{C + \{ g \}} (A + \{ -g \}) + \{ g \}, \end{align*} and similarly, $\ell *_C A = \ell *_{C + \{ g \}} (A + \{ -g \}) + \{ g \}$, therefore letting $B = A + \{ -g \}$, we have that \[ k *_C A \cap \ell *_C A = \emptyset \Longleftrightarrow k *_{C + \{ g \}} B \cap \ell *_{C + \{ g \}} B = \emptyset. \] Thus, the proposition follows.
\end{proof} \begin{proposition} \label{prop:mult} For any $g \in (\mathbb{Z}/n\mathbb{Z})^\times$ and set $A$, let $A / g = \{ a / g : a \in A \}$. Then, $\mu_{k, \ell}^C (\mathbb{Z}/n\mathbb{Z}) = \mu_{k, \ell}^{C / g} (\mathbb{Z}/n\mathbb{Z})$. \end{proposition} \begin{proof} For any set $A \subseteq \mathbb{Z}/n\mathbb{Z}$, we have that $k *_C A \cap \ell *_C A = \emptyset$ iff $(k *_C A) / g \cap (\ell *_C A) / g = \emptyset$, or equivalently, $k *_{C / g} (A / g) \cap \ell *_{C / g} (A / g) = \emptyset$. \end{proof} We will call the operation in Proposition~\ref{prop:shift} \emph{shift} and the operation in Proposition~\ref{prop:mult} \emph{multiplication}, as these are the respective operations the propositions allow us to perform on $C$. We say that two sets $C$ and $D$ are \emph{shift-mult-equivalent} if, by applying a sequence of shifts and multiplications to $C$, one can attain $D$. Note that since shifts and multiplications are invertible and composable, such sequences define an equivalence relation. In fact, two sets $C$ and $D$ are shift-mult-equivalent iff $D = g(C + \{ h \})$ for some elements $h \in \mathbb{Z}/n\mathbb{Z}$ and $g \in (\mathbb{Z}/n\mathbb{Z})^\times$. As a result, $\{ 0, 1, \dots, c-1 \}$ is shift-mult-equivalent to any length $c$ arithmetic progression with common difference relatively prime to $n$, and $\{ 0, c \}$ is shift-mult-equivalent to any two-element set whose elements have difference $\Delta$ such that $\gcd(n, \Delta) = \gcd(n, c)$. \section{Maximal $\{ 0, 1, \dots, c-1 \}$-$(k, \ell)$-Sum-Free Sets} In this section we look at $C = \{ 0, 1, \dots, c-1 \}$, with $c \ge 1$. Note that when $c = 1$, $C = \{ 0 \}$, which has already been thoroughly investigated. Hence, we consider $c \ge 2$. When $c = 2$, we have that $C = \{ 0, 1 \}$. Note that shifting and multiplying gives us that any set of two elements whose difference is relatively prime to $n$ is shift-mult-equivalent to $C$.
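Shift-mult-equivalence can also be verified numerically on small cases. The sketch below (our own illustration; the helper names are hypothetical) brute-forces $\mu_{k, \ell}^C(\mathbb{Z}/n\mathbb{Z})$ and confirms that in $\mathbb{Z}/10\mathbb{Z}$ the set $\{0, 1\}$, its shift $\{3, 4\}$, and its multiple $\{0, 3\} = 3 \cdot \{0, 1\}$ all give the same maximum, as Propositions~\ref{prop:shift} and~\ref{prop:mult} predict.

```python
from itertools import combinations

def k_star(A, C, k, n):
    # k *_C A = kA + (k-1)C in Z/nZ, straight from the definition
    S = {0}
    for _ in range(k):
        S = {(s + a) % n for s in S for a in A}
    for _ in range(k - 1):
        S = {(s + c) % n for s in S for c in C}
    return S

def mu_C(n, k, l, C):
    # brute-force mu^C_{k,l}(Z/nZ): largest |A| with disjoint k *_C A, l *_C A
    for size in range(n, 0, -1):
        for A in combinations(range(n), size):
            if k_star(A, C, k, n).isdisjoint(k_star(A, C, l, n)):
                return size
    return 0
```

All three of \texttt{mu\_C(10, 2, 1, \{0, 1\})}, \texttt{mu\_C(10, 2, 1, \{3, 4\})}, and \texttt{mu\_C(10, 2, 1, \{0, 3\})} return the same value.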
Using $(k, \ell)$-sum-free sets on the torus, we may give a natural upper bound for $\mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z})$. In order to talk about sets of the torus $\mathbb{T}$, let $\mu^*$ denote the normalized Haar measure on $\mathbb{T}$ and let $\mu^*_{k, \ell}(\mathbb{T}) = \max \{ \mu^*(A)\ |\ A \subseteq \mathbb{T}, kA \cap \ell A = \emptyset \}$. Kravitz proved the following equality for maximal sum-free sets of $\mathbb{T}$: \begin{theorem}[\cite{kravitz}, Theorem 1.3] For $k > \ell$, it holds that $\mu^*_{k, \ell}(\mathbb{T}) = \frac{1}{k + \ell}$. \end{theorem} Using the map $\varphi : \mathcal{P}(\mathbb{Z}/n\mathbb{Z}) \rightarrow \mathcal{P}(\mathbb{T})$ we defined in the introduction, we can prove the following upper bound: \begin{theorem} \label{thm:01easy} For $n, k, \ell \in \mathbb{N}$ with $k > \ell$, we have that $\mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z}) \le \left\lfloor \frac{n}{k + \ell} \right\rfloor$. \end{theorem} \begin{proof} Recall that $\varphi$ is defined on sets $A \subseteq \mathbb{Z}/n\mathbb{Z}$ by $\varphi(A) = \bigcup_{i \in A} \left( \frac{i}{n}, \frac{i + 1}{n} \right)$ and that $\varphi( A +_{\{ 0, 1 \}} B ) = \varphi(A) + \varphi(B)$. Therefore, $A$ is $\{ 0, 1 \}$-$(k, \ell)$-sum-free iff $\varphi(A)$ is $(k, \ell)$-sum-free. Then, \[ \mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z}) \le n \cdot \mu^*_{k, \ell}(\mathbb{T}) = \frac{n}{k + \ell}. \] Since $\mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z})$ is an integer, we in fact have that \[ \mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z}) \le \left\lfloor \frac{n}{k + \ell} \right\rfloor. \] \end{proof} This bound is sharp: for instance, for $n = 10$, $k = 2$, and $\ell = 1$, we have that $\{ 4, 5, 6 \}$ is a $\{ 0, 1 \}$-$(k, \ell)$-sum-free set of size $\left\lfloor \frac{n}{k + \ell} \right\rfloor = 3$. In general, we may prove an upper bound on $\mu_{k, \ell}^C(\mathbb{Z}/n\mathbb{Z})$ with a different approach, which will align with Theorem~\ref{thm:01easy} in the case $C = \{ 0, 1 \}$.
The upper bound we give is based on the following result of Kneser. \begin{theorem}[Kneser \cite{kneser}] Let $G$ be a finite abelian group, let $A, B \subseteq G$ be nonempty, and let $H = \stab(A + B)$ be the stabilizer of $A + B$. Then \[ |A + B| \ge |A + H| + |B + H| - |H|. \] \end{theorem} In order to apply this theorem, we need the following easy lemma. \begin{lemma} \label{lemma:substab} For sets $A$ and $B$, $\stab(A) \subseteq \stab(A + B)$ as a subgroup inclusion. \end{lemma} \begin{proof} We have that \[ \stab(A) + (A + B) = (\stab(A) + A) + B = A + B. \] \end{proof} Recall, by definition, that $A +_C B = A + B + C$, so this lemma also gives that $\stab(A) \subseteq \stab(A +_C B)$. Now, we can lower bound $|A +_C B|$. \begin{lemma} \label{lemma:kneser_bound} When $C = \{ 0, 1, \dots, c-1 \}$ for $c \ge 2$, we have that \[ |A +_C B| \ge \min \left\{ n, |A| + |B| + (c-2) \right\}. \] \end{lemma} \begin{proof} Let $K = \stab(A +_C B)$. Then, by Kneser's result, we have \begin{align*} |A +_C B| = |A + B + C| &\ge |A + B + K| + |\{ 0, \dots, c-1 \} + K| - |K| \\ &\ge |A + B| + |\{ 0, \dots, c-1 \} + K| - |K|. \numberthis \label{eqn:kneser-1} \end{align*} If $K = \mathbb{Z}/n\mathbb{Z}$, then this means that $A +_C B = \mathbb{Z}/n\mathbb{Z}$ since every element of $\mathbb{Z}/n\mathbb{Z}$ stabilizes $A +_C B$. Then, $|A +_C B| = n \ge \min \{ n, |A| + |B| + c - 2 \}$. So, assume $K \not= \mathbb{Z}/n\mathbb{Z}$. Note that if a subgroup $H$ stabilizes a set $X$, we must have that for any $x \in X$, $x + H \subseteq X$, so $X$ is a union of cosets of $H$. Then, if $[\mathbb{Z}/n\mathbb{Z} : K] \le c$, we have that $A +_C B$ is a union of cosets of $K$. However, if $a \in A$ and $b \in B$, then $\{ a + b, a + b + 1, \dots, a + b + c - 1 \} \subseteq A +_C B$, so at least one element of each coset of $K$ is in $A +_C B$. This implies that $A +_C B = \mathbb{Z}/n\mathbb{Z}$, which we've assumed is not true.
Now, if $[\mathbb{Z}/n\mathbb{Z} : K] > c$, we have that $| \{ 0, \dots, c-1 \} + K| = c|K|$, so Equation~\ref{eqn:kneser-1} can be rewritten as \[ |A +_C B| \ge |A + B| + (c-1)|K|. \] Now, let $H = \stab(A + B)$, so the above equation implies that \begin{align*} |A +_C B| &\ge |A + H| + |B + H| + (c-1)|K| - |H| \\ &\ge |A| + |B| + (c-1)|K| - |H|. \end{align*} By Lemma~\ref{lemma:substab}, $H$ is a subgroup of $K$. In particular, $|H| \le |K|$. Then, we can write \[ |A +_C B| \ge |A| + |B| + (c - 2)|H| \ge |A| + |B| + (c - 2), \] as desired. \end{proof} With this result, we can show an upper bound on $\mu_{k, \ell}^C(\mathbb{Z}/n\mathbb{Z})$. \begin{theorem} We have that $\mu_{k, \ell}^C(\mathbb{Z}/n\mathbb{Z}) \le \left\lfloor \frac{n + 2(c-2)}{k + \ell} \right\rfloor - (c-2)$. \end{theorem} \begin{proof} Take $A$ to be a $C$-$(k, \ell)$-sum-free subset of $\mathbb{Z}/n\mathbb{Z}$. By iteratively applying Lemma~\ref{lemma:kneser_bound}, we find that \begin{align*} |k *_C A| &\ge \min \{ n, k|A| + (k-1)(c-2) \} \\ |\ell *_C A| &\ge \min \{ n, \ell|A| + (\ell-1)(c-2) \}. \end{align*} Since $k *_C A$ and $\ell *_C A$ are disjoint and both nonempty, neither can equal $\mathbb{Z}/n\mathbb{Z}$, so $|k *_C A|, |\ell *_C A| < n$, and therefore \begin{align*} |k *_C A| &\ge k|A| + (k-1)(c-2) \\ |\ell *_C A| &\ge \ell|A| + (\ell-1)(c-2). \end{align*} Then since $k *_C A$ and $\ell *_C A$ are disjoint, \begin{align*} n \ge |k *_C A| + |\ell *_C A| \ge (k + \ell)|A| + (k + \ell - 2)(c - 2). \end{align*} Rearranging gives \[ |A| \le \frac{n + 2(c-2)}{k + \ell} - (c - 2). \] Since $|A|$ must be an integer, we must have \[ |A| \le \left\lfloor \frac{n + 2(c-2)}{k + \ell} \right\rfloor - (c - 2). \] \end{proof} We denote this upper bound by $\chi(n, k, \ell, c) = \left\lfloor \frac{n + 2(c-2)}{k + \ell} \right\rfloor - (c-2)$. We will see in the lower bound that this upper bound is achieved or almost achieved by an interval.
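As a quick numeric sanity check of this upper bound (our own illustration, not part of the proof; the helper names are hypothetical), for $n = 12$, $k = 2$, $\ell = 1$, $c = 3$ the bound evaluates to $\lfloor 14/3 \rfloor - 1 = 3$, and an exhaustive search shows it is attained:

```python
from itertools import combinations

def k_star(A, C, k, n):
    # k *_C A = kA + (k-1)C in Z/nZ
    S = {0}
    for _ in range(k):
        S = {(s + a) % n for s in S for a in A}
    for _ in range(k - 1):
        S = {(s + c) % n for s in S for c in C}
    return S

def mu_C(n, k, l, C):
    # brute-force mu^C_{k,l}(Z/nZ)
    for size in range(n, 0, -1):
        for A in combinations(range(n), size):
            if k_star(A, C, k, n).isdisjoint(k_star(A, C, l, n)):
                return size
    return 0

n, k, l, c = 12, 2, 1, 3
chi = (n + 2 * (c - 2)) // (k + l) - (c - 2)  # the upper bound above
```

Here \texttt{chi} equals $3$, and \texttt{mu\_C(n, k, l, set(range(c)))} also returns $3$ (the interval $\{5, 6, 7\}$ works), so in this case the bound is sharp.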
The analysis of the largest length of a $C$-$(k, \ell)$-sum-free interval will closely follow the methods used by Bajnok and Matzke in calculating the largest $(k, \ell)$-sum-free sets in \cite{bajnok-max}. \begin{theorem} Let $\delta = \gcd(n, k - \ell)$ and $r$ denote the remainder of $-k\cdot \chi(n, k, \ell, c) - (k-1)(c-2)$ modulo $\delta$. Then, \[ \mu_{k, \ell}^C(\mathbb{Z}/n\mathbb{Z}) \ge \left\lfloor \frac{n + 2(c-2) - r}{k + \ell} \right\rfloor - (c-2). \] \end{theorem} \begin{proof} We have that an interval $A = [a, \dots, a + m - 1]$ of length $m$ satisfies \begin{align*} k *_C A &= \{ ka, \dots, ka + km + (c-2)k - (c-1) \} \\ \ell *_C A &= \{ \ell a, \dots, \ell a + \ell m + (c-2)\ell - (c-1) \}, \end{align*} so \[ k *_C A - \ell *_C A = \{ (k-\ell)a - \ell m - (c-2)\ell + (c-1), \dots, (k-\ell)a + km + (c-2)k - (c-1) \}. \] We have that $A$ is $C$-$(k, \ell)$-sum-free iff $k *_C A - \ell *_C A$ does not contain $0$, so $A$ is $C$-$(k, \ell)$-sum-free iff there exists some $b \in \mathbb{Z}$ for which \begin{align*} bn + 1 &\le (k - \ell)a - \ell m - (c-2)\ell + (c-1) \\ (b + 1)n - 1 &\ge (k - \ell)a + km + (c-2)k - (c-1), \end{align*} which can be rearranged to give \begin{align*} \ell m + (\ell-1)(c-2) &\le (k - \ell)a - bn \le n - km - (k-1)(c-2) \\ \Longleftrightarrow \frac{\ell m + (\ell-1)(c-2)}{\delta} &\le \frac{k - \ell}{\delta} \cdot a - \frac{n}{\delta} \cdot b \le \frac{n - km - (k-1)(c-2)}{\delta}. \end{align*} Since $\gcd \left( \frac{k - \ell}{\delta}, \frac{n}{\delta} \right) = 1$, any integer can be expressed as $\frac{k - \ell}{\delta} \cdot a - \frac{n}{\delta} \cdot b$ for some choice of $a$ and $b$. Thus, it suffices to show that there exists some integer $z$ for which \[ \frac{\ell m + (\ell-1)(c-2)}{\delta} \le z \le \frac{n - km - (k-1)(c-2)}{\delta}, \] or equivalently, \[ \frac{\ell m + (\ell-1)(c-2)}{\delta} \le \left\lfloor \frac{n - km - (k-1)(c-2)}{\delta} \right\rfloor.
\numberthis \label{eqn:interval_ineq} \] Since $\lfloor \frac{x}{\delta} \rfloor \ge \frac{x - \delta + 1}{\delta}$ for any integer $x$, we have that any $m$ that satisfies \[ \frac{\ell m + (\ell-1)(c-2)}{\delta} \le \frac{n - km - (k-1)(c-2) - \delta + 1}{\delta} \] must also satisfy Equation~\ref{eqn:interval_ineq}. But the above can be rewritten as \[ m \le \frac{n + 2c - \delta - 3}{k + \ell} - (c - 2), \] so there is an interval of size $\left\lfloor \frac{n + 2c - \delta - 3}{k + \ell} \right\rfloor - (c - 2) = \left\lfloor \frac{n + 2(c-2) - (\delta-1)}{k + \ell} \right\rfloor - (c - 2)$ that is $C$-$(k, \ell)$-sum-free. Since $\delta = \gcd(n, k - \ell) < k + \ell$, if we let $\gamma(n, k, \ell, c)$ denote the length of the longest $C$-$(k, \ell)$-sum-free interval, we have that $\gamma(n, k, \ell, c) \in \{ \chi(n, k, \ell, c), \chi(n, k, \ell, c) - 1 \}$. Since $r \equiv -k \cdot \chi(n, k, \ell, c) - (k-1)(c-2) \Mod{\delta}$ and $n \equiv 0 \Mod{\delta}$, we have that $\gamma(n, k, \ell, c) = \chi(n, k, \ell, c)$ iff \begin{align*} \ell\cdot \chi(n, k, \ell, c) + (\ell-1)(c-2) &\le n - k\cdot \chi(n, k, \ell, c) - (k-1)(c-2) - r \\ \Longleftrightarrow \chi(n, k, \ell, c) &\le \frac{n + 2(c-2) - r}{k + \ell} - (c-2). \end{align*} Since $\chi(n, k, \ell, c)$ is an integer, $\gamma(n, k, \ell, c) = \chi(n, k, \ell, c)$ exactly when \[ \chi(n, k, \ell, c) \le \left\lfloor \frac{n + 2(c-2) - r}{k + \ell} \right\rfloor - (c-2). \numberthis \label{eqn:bound_truth} \] Note that $\left\lfloor \frac{n + 2(c-2) - r}{k + \ell} \right\rfloor - (c-2)$ takes the value $\chi(n, k, \ell, c)$ exactly when Equation~\ref{eqn:bound_truth} holds and otherwise takes the value $\chi(n, k, \ell, c) - 1$, thus $\gamma(n, k, \ell, c) = \left\lfloor \frac{n + 2(c-2) - r}{k + \ell} \right\rfloor - (c-2)$ is the length of the longest $C$-$(k, \ell)$-sum-free interval in $\mathbb{Z}/n\mathbb{Z}$.
In particular, this implies that \[ \mu_{k, \ell}^C(\mathbb{Z}/n\mathbb{Z}) \ge \left\lfloor \frac{n + 2(c-2) - r}{k + \ell} \right\rfloor - (c-2). \] \end{proof} Combining the two bounds, and noting that $r$ is the remainder of $-k \cdot \chi(n, k, \ell, c) - (k-1)(c-2) = -k \cdot \left\lfloor \frac{n + 2(c-2)}{k + \ell} \right\rfloor + (c - 2)$ modulo $\delta$, we have proven Theorem~\ref{thm:01c-main}. \begin{lemma} For $C = \{ 0, 1, \dots, c-1 \}$ and any subset $A \subseteq \mathbb{Z}/n\mathbb{Z}$, if $A$ contains two elements $x$ and $y$ such that $y - x < c - \lceil \frac{c-2}{k} \rceil$, then $k *_C (A \cup \{ z \}) = k *_C A$ for any $z \in \{ x+1, \dots, y-1 \}$. \end{lemma} \begin{proof} Suppose that there are elements $x, y \in A$ such that $2 \le y - x < c - \lceil \frac{c-2}{k} \rceil$. Then, for any $z \in \{ x+1, \dots, y-1 \}$, the set $k *_C (A \cup \{ z \})$ can be written as \[ \{ a_1 + \dots + a_k + b \ |\ a_i \in A \cup \{ z \}, b \in \{ 0, 1, \dots, (k-1)(c-1) \} \}. \] Since any $a_1 + \dots + a_k + b$ with all $a_i \in A$ is already in $k *_C A$, it suffices to show that any such sum with at least one of the $a_i$ equal to $z$ is also in $k *_C A$. To do this, consider a sum $a_1 + \dots + a_k + b$, where $j$ of the $a_i$'s are equal to $z$. Then, consider replacing each of the $j$ instances of $z$ with $x$, or with $y$. In the first case, the value of $a_1 + \dots + a_k + b$ decreases by at most $k(z-x)$, and in the second, the value increases by at most $k(y-z)$. Since $k(z-x) + k(y-z) = k(y-x) \le k(c - 1 - \lceil \frac{c-2}{k} \rceil) \le (k-1)(c-1) - 1$, we have that either $b$ is at least $k(y-z) + 1$, in which case $b' = b - j(y-z) \in \{ 0, 1, \dots, (k-1)(c-1) \}$, or $b$ is at most $(k-1)(c-1) - k(z-x)$, in which case $b' = b + j(z-x) \in \{ 0, 1, \dots, (k-1)(c-1) \}$.
In the first case, $a_1 + \dots + a_k + b$ is equal to $a'_1 + \dots + a'_k + b'$, where $a'_i = a_i$ if $a_i \not= z$, otherwise $a'_i = y$; in the second, $a_1 + \dots + a_k + b$ is equal to $a'_1 + \dots + a'_k + b'$, where $a'_i = a_i$ if $a_i \not= z$ and $a'_i = x$ otherwise. Since we can always rewrite $a_1 + \dots + a_k + b$ with elements of $A$, we have that $k *_C (A \cup \{ z \}) = k *_C A$. \end{proof} Let $A$ be a maximal $C$-$(k, \ell)$-sum-free set in $\mathbb{Z}/n\mathbb{Z}$. Noting that single elements are also intervals, we can write any set as a union of pairwise disjoint intervals. The fact that $A$ is $C$-$(k, \ell)$-sum-free allows us to say a bit more about the distance between these intervals. We now restrict our attention to $C$ of size $2$. First, by shifting, we can write $C$ as $\{ 0, c \}$. When $c$ is relatively prime to $n$, we can multiply $C$ by $c^{-1}$ so that $C = \{ 0, 1 \}$, which we have already handled above. Therefore, we now consider sets $C = \{ 0, c \}$ for which $\gcd(c, n) \not= 1$. In particular, we look at $C = \{ 0, p \}$ where $p$ is a prime and $p | n$. \begin{theorem}\label{thm:0p-main} When $p | k - \ell$, \[ \left\lfloor \frac{n + 2(p-1) - r}{k + \ell} \right\rfloor - (p - 1) \le \mu_{k, \ell}^{\{ 0, p \}}(\mathbb{Z}/n\mathbb{Z}) \le \left\lfloor \frac{n}{k + \ell} \right\rfloor, \] and when $p \not| k - \ell$, \[ \max \left\{ \left\lceil \frac{p - 1}{k + \ell} \right\rceil \cdot \frac{n}{p},\ \left\lfloor \frac{n + 2(p-1) - r}{k + \ell} \right\rfloor - (p - 1) \right\} \le \mu_{k, \ell}^{\{ 0, p \}}(\mathbb{Z}/n\mathbb{Z}) \le \max \left\{ \left\lceil \frac{p - 1}{k + \ell} \right\rceil \cdot \frac{n}{p},\ \left\lfloor \frac{n}{k + \ell} \right\rfloor \right\}, \] where $\delta = \gcd(n, k - \ell)$, $f = \lfloor \frac{n + 2(p-1)}{k + \ell} \rfloor - (p-1)$, and $r$ is the residue of $-kf - (k-1)(p-1)$ modulo $\delta$.
\end{theorem} Note for instance that if $p \not| k - \ell$ and $\lceil \frac{p - 1}{k + \ell} \rceil \cdot \frac{n}{p} \ge \lfloor \frac{n}{k + \ell} \rfloor$, then we in fact have equality: \[ \mu_{k, \ell}^{\{ 0, p \}}(\mathbb{Z}/n\mathbb{Z}) = \left\lceil \frac{p - 1}{k + \ell} \right\rceil \cdot \frac{n}{p}. \] In order to prove this theorem, we will prove the lower and upper bounds separately. The lower bound will follow from the following lemma: \begin{lemma} For any $k > \ell$, \[ \mu_{k, \ell}^{\{ 0, p \}}(\mathbb{Z}/n\mathbb{Z}) \ge \max \left\{ \mu_{k, \ell}(\mathbb{Z}/p\mathbb{Z}) \cdot \frac{n}{p},\ \mu_{k, \ell}^{\{ 0, 1, \dots, p \}}(\mathbb{Z}/n\mathbb{Z}) \right\}. \] \end{lemma} \begin{proof} First, suppose that we have a $(k, \ell)$-sum-free set $B$ in $\mathbb{Z}/p\mathbb{Z}$, and define $\psi_{n, p}$ to be the canonical projection from $\mathbb{Z}/n\mathbb{Z}$ to $\mathbb{Z}/p\mathbb{Z}$. Then, $A = \psi_{n, p}^{-1}(B)$ must also be $(k, \ell)$-sum-free and in fact $\{ 0, p \}$-$(k, \ell)$-sum-free. Since $A$ has size $\mu_{k, \ell}(\mathbb{Z}/p\mathbb{Z}) \cdot \frac{n}{p}$, the first lower bound is established. Next, we have that $\{ 0, p \} \subseteq \{ 0, 1, \dots, p \}$, so any $\{ 0, 1, \dots, p \}$-$(k, \ell)$-sum-free set in $\mathbb{Z}/n\mathbb{Z}$ must also be a $\{ 0, p \}$-$(k, \ell)$-sum-free set in $\mathbb{Z}/n\mathbb{Z}$. This gives the second of the two lower bounds. Since $\mu_{k, \ell}^{\{ 0, p \}}(\mathbb{Z}/n\mathbb{Z})$ is lower bounded by both $\mu_{k, \ell}(\mathbb{Z}/p\mathbb{Z})\cdot \frac{n}{p}$ and $\mu_{k, \ell}^{\{ 0, 1, \dots, p \}}(\mathbb{Z}/n\mathbb{Z})$, we have that it is at least the maximum of the two. \end{proof} Now, from Bier and Chin we know that \[ \mu_{k, \ell}(\mathbb{Z}/p\mathbb{Z}) = \begin{cases} \left\lceil \frac{p - 1}{k + \ell} \right\rceil & p \not| k - \ell \\ 0 & p | k - \ell \end{cases}. 
\] Using this result and the lower bound in Theorem~\ref{thm:01c-main}, we get the lower bound in Theorem~\ref{thm:0p-main}. In order to handle the upper bound, recall from Lemma~\ref{lemma:substab} that $\stab(A) \subseteq \stab(A + B)$; since $A +_C B = A + B + C$, this also gives that $\stab(A) \subseteq \stab(A +_C B)$. \begin{lemma} \label{lemma:0p-bound-stab-not-p} For a $\{ 0, p \}$-$(k, \ell)$-sum-free set $A$, let $K = \stab(k *_{\{ 0, p \}} A)$ and suppose that $K \not= \langle p \rangle$. Then \[ |A| \le \left\lfloor \frac{n}{k + \ell} \right\rfloor. \] \end{lemma} \begin{proof} Our proof of this lemma follows from the following claim. Claim: For all $1 \le j \le k$, $|j *_{\{ 0, p \}} A| \ge j |A|$. We prove this claim via induction. For $j = 1$, $|j *_{\{ 0, p \}} A| = |A|$, so the base case is true. Now for $2 \le j \le k$, suppose the claim is true for $j - 1$. Let $J = \stab(j *_{\{ 0, p \}} A)$. By Kneser, we have that \[ |j *_{\{ 0, p \}} A| \ge |A + ((j - 1) *_{\{ 0, p \}} A)| + |\{ 0, p \} + J| - |J|. \] By Lemma~\ref{lemma:substab}, $J$ is a subgroup of $K$, so $J \not= \langle p \rangle$ (if $J = \langle p \rangle$, then $K \supseteq \langle p \rangle$, forcing $K = \langle p \rangle$ or $K = \mathbb{Z}/n\mathbb{Z}$, and both are excluded). Also, $J \not= \langle 1 \rangle$, since otherwise $j *_{\{ 0, p \}} A = \mathbb{Z}/n\mathbb{Z} \implies k *_{\{ 0, p \}} A = \mathbb{Z}/n\mathbb{Z}$, contradicting the fact that $A$ is $\{ 0, p \}$-$(k, \ell)$-sum-free. Then, we have that $|\{ 0, p \} + J| = 2|J|$, so we can rewrite the above equation as \begin{align*} |j *_{\{ 0, p \}} A| &\ge |A + ((j - 1) *_{\{ 0, p \}} A)| + |J| \\ &\ge |A| + |(j - 1) *_{\{ 0, p \}} A| + |J| - |H| \\ &\ge |A| + |(j - 1) *_{\{ 0, p \}} A| \\ &\ge |A| + (j-1) |A| \\ &= j |A|, \end{align*} where $H = \stab(A + ((j - 1) *_{\{ 0, p \}} A)) \subseteq J$ by Lemma~\ref{lemma:substab}, so $|H| \le |J|$. This completes the proof of the claim.
To finish our proof, we have that $|k *_{\{ 0, p \}} A| \ge k |A|$ and $|\ell *_{\{ 0, p \}} A| \ge \ell |A|$. Since $k *_{\{ 0, p \}} A \cap \ell *_{\{ 0, p \}} A = \emptyset$, we have that $n \ge (k + \ell)|A| \implies |A| \le \lfloor \frac{n}{k + \ell} \rfloor$. \end{proof} Now, we are ready to prove the upper bound. \begin{lemma} For any $k > \ell$, \[ \mu_{k, \ell}^{\{ 0, p \}}(\mathbb{Z}/n\mathbb{Z}) \le \max \left\{ \mu_{k, \ell}(\mathbb{Z}/p\mathbb{Z}) \cdot \frac{n}{p},\ \left\lfloor \frac{n}{k + \ell} \right\rfloor \right\}. \] \end{lemma} \begin{proof} Suppose that $A$ is a $\{ 0, p \}$-$(k, \ell)$-sum-free set in $\mathbb{Z}/n\mathbb{Z}$. Once again define the map $\psi_{n, p}$ to be the canonical projection onto $\mathbb{Z}/p\mathbb{Z}$. We consider cases based on $|\psi_{n, p}(A)|$, the number of distinct residues modulo $p$ attained by elements of $A$. First, suppose $|\psi_{n, p}(A)| > \mu_{k, \ell}(\mathbb{Z}/p\mathbb{Z})$. Then, if $K = \stab(k *_{\{ 0, p \}} A) = \langle p \rangle$, we have that $k *_{\{ 0, p \}} A$ is a union of cosets of $\langle p \rangle$ but is not equal to $\mathbb{Z}/n\mathbb{Z}$. Since $\psi_{n, p}(k *_{\{0, p \}} A) = k \psi_{n, p}(A)$ and $|\psi_{n, p}(A)| > \mu_{k, \ell}(\mathbb{Z}/p\mathbb{Z})$, we have $k \psi_{n, p}(A) \cap \ell \psi_{n, p}(A) \not= \emptyset$. However, since $k *_{\{ 0, p \}} A = \psi_{n, p}^{-1}(k \psi_{n, p}(A))$ is a union of cosets, it must have nontrivial intersection with $\ell *_{\{ 0, p \}} A$, contradicting the fact that $A$ is $\{ 0, p \}$-$(k, \ell)$-sum-free. If instead $K \not= \langle p \rangle$, Lemma~\ref{lemma:0p-bound-stab-not-p} gives that $|A| \le \lfloor \frac{n}{k + \ell} \rfloor$. Now, suppose $|\psi_{n, p}(A)| \le \mu_{k, \ell}(\mathbb{Z}/p\mathbb{Z})$. Since $|\psi_{n, p}^{-1}(\{ a \})| = \frac{n}{p}$ for any element $a \in \mathbb{Z}/p\mathbb{Z}$, we have that $|A|$ is upper bounded by $\mu_{k, \ell}(\mathbb{Z}/p\mathbb{Z}) \cdot \frac{n}{p}$.
Combining the upper bounds in the two cases gives that $\mu_{k, \ell}^{\{ 0, p \}}(\mathbb{Z}/n\mathbb{Z})$ is upper bounded by the larger of the two. \end{proof} The upper bounds in Theorem~\ref{thm:0p-main} can be found by using the fact that $\mu_{k, \ell}(\mathbb{Z}/p\mathbb{Z}) = \lceil \frac{p - 1}{k + \ell} \rceil$ when $p \not| k - \ell$ and $0$ when $p | k - \ell$. \section{$C$ of size $2$} We now restrict our attention to $C$ of size $2$. First, by shifting, we can write $C$ as $\{ 0, s \}$. When $s$ is relatively prime to $n$, we can multiply $C$ by $s^{-1}$ so that $C = \{ 0, 1 \}$, which we have examined in the previous section. Therefore, we now consider sets $C = \{ 0, s \}$ for which $d = \gcd(s, n) \not= 1$. We will prove the upper and lower bounds of Theorem~\ref{thm:0s-main} separately. Recall that $\gamma(n, k, \ell, s + 1) = \left\lfloor \frac{n + 2(s-1) - r}{k + \ell} \right\rfloor - (s - 1)$, so our lower bound will follow immediately from the following theorem: \begin{theorem} For any $k > \ell$, \[ \mu_{k, \ell}^{\{ 0, s \}}(\mathbb{Z}/n\mathbb{Z}) \ge \max \left\{ \max_{e | d} \left\{ \mu_{k, \ell}(\mathbb{Z}/e\mathbb{Z}) \cdot \frac{n}{e} \right\},\ \gamma(n, k, \ell, s + 1) \right\}. \] \end{theorem} \begin{proof} For any $e | d$, define $\psi_{n, e}$ to be the canonical projection from $\mathbb{Z}/n\mathbb{Z}$ onto $\mathbb{Z}/e\mathbb{Z}$. That is, $\psi_{n, e}(a)$ is equal to the remainder of $a$ modulo $e$. Suppose we have a maximal $(k, \ell)$-sum-free set $B$ in $\mathbb{Z}/e\mathbb{Z}$. Then, $A = \psi_{n, e}^{-1}(B)$ must also be $(k, \ell)$-sum-free and in fact $\{ 0, s \}$-$(k, \ell)$-sum-free. Since $A$ has size $\mu_{k, \ell}(\mathbb{Z}/e\mathbb{Z}) \cdot \frac{n}{e}$, by taking the maximum value over all $e | d$, we have that $\mu_{k, \ell}^{\{ 0, s \}}(\mathbb{Z}/n\mathbb{Z}) \ge \max_{e | d} \left\{ \mu_{k, \ell}(\mathbb{Z}/e\mathbb{Z}) \cdot \frac{n}{e} \right\}$.
Because $\{ 0, s \} \subseteq \{ 0, 1, \dots, s \}$, any $\{ 0, 1, \dots, s \}$-$(k, \ell)$-sum-free set is also a $\{ 0, s \}$-$(k, \ell)$-sum-free set in $\mathbb{Z}/n\mathbb{Z}$. This gives the second lower bound, that $\mu_{k, \ell}^{\{ 0, s \}}(\mathbb{Z}/n\mathbb{Z}) \ge \gamma(n, k, \ell, s + 1)$. \end{proof} In order to handle the upper bound, we need the following lemma, which deals with the case that the stabilizer of $k *_{\{ 0, s \}} A$ does not contain $\langle d \rangle$. By $\langle x \rangle$, we mean the cyclic subgroup of $\mathbb{Z}/n\mathbb{Z}$ generated by $x$. \begin{lemma} \label{lemma:0s-bound-stab} For a $\{ 0, s \}$-$(k, \ell)$-sum-free set $A$, let $K = \stab(k *_{\{ 0, s \}} A)$ and suppose that $K$ does not contain $\langle d \rangle$, the subgroup of $\mathbb{Z}/n\mathbb{Z}$ generated by $d$ (i.e., $K \not= \langle e \rangle$ for any $e | d$, $e > 1$). Then \[ |A| \le \left\lfloor \frac{n}{k + \ell} \right\rfloor. \] \end{lemma} \begin{proof} Our proof of this lemma follows from the following claim: For all $1 \le j \le k$, $|j *_{\{ 0, s \}} A| \ge j |A|$. We prove this claim via induction. For $j = 1$, $|j *_{\{ 0, s \}} A| = |A|$, so the base case is true. Now for $2 \le j \le k$, suppose the claim is true for $j - 1$. Let $J = \stab(j *_{\{ 0, s \}} A)$. By Kneser, we have that \[ |j *_{\{ 0, s \}} A| \ge |A + ((j - 1) *_{\{ 0, s \}} A)| + |\{ 0, s \} + J| - |J|. \] By Lemma~\ref{lemma:substab}, $J$ is a subgroup of $K$, so $J$ also does not contain $\langle d \rangle$. In particular $s \notin J$, since $s \in J$ would imply $\langle d \rangle = \langle \gcd(s, n) \rangle = \langle s \rangle \subseteq J$; hence $|\{ 0, s \} + J| = 2|J|$, so we can rewrite the above equation as \begin{align*} |j *_{\{ 0, s \}} A| &\ge |A + ((j - 1) *_{\{ 0, s \}} A)| + |J| \\ &\ge |A| + |(j - 1) *_{\{ 0, s \}} A| + |J| - |H| \\ &\ge |A| + |(j - 1) *_{\{ 0, s \}} A| \\ &\ge |A| + (j-1) |A| \\ &= j |A|, \end{align*} where $H = \stab(A + ((j - 1) *_{\{ 0, s \}} A)) \subseteq J$ by Lemma~\ref{lemma:substab}, so $|H| \le |J|$. This completes the proof of the claim.
To finish our proof, we note that $|k *_{\{ 0, s \}} A| \ge k |A|$ and $|\ell *_{\{ 0, s \}} A| \ge \ell |A|$. Since $k *_{\{ 0, s \}} A \cap \ell *_{\{ 0, s \}} A = \emptyset$, we have that $n \ge (k + \ell)|A| \implies |A| \le \lfloor \frac{n}{k + \ell} \rfloor$. \end{proof} Now, we are ready to prove the upper bound. \begin{theorem} For $k > \ell$, \[ \mu_{k, \ell}^{\{ 0, s \}}(\mathbb{Z}/n\mathbb{Z}) \le \max \left\{ \max_{e | d} \left\{ \mu_{k, \ell}(\mathbb{Z}/e\mathbb{Z}) \cdot \frac{n}{e} \right\},\ \left\lfloor \frac{n}{k + \ell} \right\rfloor \right\}. \] \end{theorem} \begin{proof} Suppose that $A$ is a $\{ 0, s \}$-$(k, \ell)$-sum-free set in $\mathbb{Z}/n\mathbb{Z}$. Once again define the map $\psi_{n, e}$ to be the canonical projection of $\mathbb{Z}/n\mathbb{Z}$ onto $\mathbb{Z}/e\mathbb{Z}$. First, suppose that $K = \stab(k *_{\{ 0, s \}} A) = \langle e \rangle$ for some $e | d$, so $k *_{\{ 0, s \}} A$ is a union of cosets of $\langle e \rangle$ but is not equal to $\mathbb{Z}/n\mathbb{Z}$. If $|\psi_{n, e}(A)| > \mu_{k, \ell}(\mathbb{Z}/e\mathbb{Z})$, then $k \psi_{n, e}(A) \cap \ell \psi_{n, e}(A) \not= \emptyset$; since $\psi_{n, e}(k *_{\{ 0, s \}} A) = k \psi_{n, e}(A)$ and $k *_{\{ 0, s \}} A = \psi_{n, e}^{-1}(\psi_{n, e}(k *_{\{ 0, s \}} A))$ is a union of cosets, $k *_{\{ 0, s \}} A$ and $\ell *_{\{ 0, s \}} A$ have nontrivial intersection, contradicting the fact that $A$ is $\{ 0, s \}$-$(k, \ell)$-sum-free. Therefore, $|\psi_{n, e}(A)| \le \mu_{k, \ell}(\mathbb{Z}/e\mathbb{Z})$. In particular, this gives the bound $|A| \le \mu_{k, \ell}(\mathbb{Z}/e\mathbb{Z}) \cdot \frac{n}{e}$. Taking the maximum value over all $e | d$ gives that $\mu_{k, \ell}^{\{ 0, s \}}(\mathbb{Z}/n\mathbb{Z}) \le \max_{e | d} \left\{ \mu_{k, \ell}(\mathbb{Z}/e\mathbb{Z}) \cdot \frac{n}{e} \right\}$. Otherwise, $K$ does not contain $\langle d \rangle$. Then, by Lemma~\ref{lemma:0s-bound-stab}, $\mu_{k, \ell}^{\{ 0, s \}}(\mathbb{Z}/n\mathbb{Z}) \le \lfloor \frac{n}{k + \ell} \rfloor$. Combining the two cases gives the stated result.
\end{proof} Note that if $C$ is of size $1$, we can reduce the question to asking about $C = \{ 0 \}$, in which case $+_C$ is the same as the Minkowski sum, which has been thoroughly investigated. Thus, we first consider $C$ of size $2$. If $C$ is of size $2$, we can without loss of generality assume $C = \{ 0, c \}$ by Proposition~\ref{prop:shift}, and by Proposition~\ref{prop:mult} we can multiply by a unit taking $c$ to $\gcd(n, c)$, so we may assume that $C = \{ 0, c \}$ where $c | n$. In the case $c = 1$, which corresponds to an original $C$ of two elements whose difference is relatively prime to $n$, we have the following result, which partially answers Kravitz's question. \begin{theorem} With $\delta = \gcd(n, k - \ell)$ and $r$ equal to the remainder of $-\lfloor \frac{n}{k + \ell} \rfloor \cdot k$ modulo $\delta$, we have the following bounds: \[ \left\lfloor \frac{n - r}{k + \ell} \right\rfloor \le \mu_{k, \ell}^{\{ 0, 1 \}} (\mathbb{Z}/n\mathbb{Z}) \le \left\lfloor \frac{n}{k + \ell} \right\rfloor. \] \end{theorem} Note that this handles the case $n | k - \ell$: when $n | k - \ell$, any nonempty $A$ has $k *_{\{ 0, 1 \}} A \cap \ell *_{\{ 0, 1 \}} A \not= \emptyset$, since for any $a \in A$ we have $ka \in k *_{\{ 0, 1 \}} A$, $\ell a \in \ell *_{\{ 0, 1 \}} A$, and $ka = \ell a$. Then, $\mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z}) = 0$. But if $n | k - \ell$, then $n \le k - \ell < k + \ell$, so the upper bound in the theorem is $\lfloor \frac{n}{k + \ell} \rfloor = 0$. To prove the theorem, we will prove the upper and lower bounds separately. The upper bound depends on a bound on $(k, \ell)$-sum-free sets in the torus. To talk about sum-free sets on the torus, let $\mu^*$ denote the normalized Haar measure on the torus $\mathbb{T}$ and let $\mu_{k, \ell}^*(\mathbb{T}) = \max \{ \mu^*(A)\ |\ A \subseteq \mathbb{T}, kA \cap \ell A = \emptyset \}$. Then, Kravitz proved the following: \begin{theorem} \emph{(Kravitz, 2019)} $\mu_{k, \ell}^*(\mathbb{T}) = \frac{1}{k + \ell}$.
\end{theorem} As a result, we derive the following upper bound on $\mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z})$. \begin{theorem} $\mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z}) \le \left\lfloor \frac{n}{k + \ell} \right\rfloor$. \end{theorem} \begin{proof} For any $A \subseteq \mathbb{Z}/n\mathbb{Z}$, consider the map $\phi : \mathcal{P}(\mathbb{Z}/n\mathbb{Z}) \rightarrow \mathcal{P}(\mathbb{T})$ defined by $\phi(A) = \bigcup_{i \in A} \left( \frac{i}{n}, \frac{i + 1}{n} \right)$. Since $\left( \frac{i}{n}, \frac{i + 1}{n} \right) + \left( \frac{j}{n}, \frac{j + 1}{n} \right) = \left( \frac{i + j}{n}, \frac{i + j + 2}{n} \right)$, we have that $\phi( A +_{\{ 0, 1 \}} B ) = \phi(A) + \phi(B)$. Therefore, $A$ is $\{ 0, 1 \}$-$(k, \ell)$-sum-free iff $\phi(A)$ is $(k, \ell)$-sum-free. Then, \[ \mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z}) \le n \cdot \mu^*_{k, \ell}(\mathbb{T}) = \frac{n}{k + \ell}. \] Since $\mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z})$ is an integer, we in fact have that \[ \mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z}) \le \left\lfloor \frac{n}{k + \ell} \right\rfloor. \] \end{proof} \begin{theorem} $\mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z}) \ge \lfloor \frac{n - r}{k + \ell} \rfloor$, where $\delta = \gcd(n, k - \ell)$ and $r$ is the residue of $-\lfloor \frac{n}{k + \ell} \rfloor \cdot k$ modulo $\delta$. \end{theorem} \begin{proof} To see this lower bound, we provide a construction of a set of $\lfloor \frac{n - r}{k + \ell} \rfloor$ elements of $\mathbb{Z}/n\mathbb{Z}$ that is $\{ 0, 1 \}$-$(k, \ell)$-sum-free. In particular, we will restrict to intervals. Consider an interval $A = [a, \dots, a + m - 1]$ of $m$ elements of $\mathbb{Z}/n\mathbb{Z}$. Then, $k *_{\{ 0, 1 \}} A = [ka, \dots, ka + km - 1]$ and $\ell *_{\{ 0, 1 \}} A = [\ell a, \dots, \ell a + \ell m - 1]$.
Then, $k *_{\{ 0, 1 \}} A \cap \ell *_{\{ 0, 1 \}} A = \emptyset$ iff $k *_{\{ 0, 1 \}} A - \ell *_{\{ 0, 1 \}} A = [(k-\ell)a - m\ell + 1, (k-\ell)a + mk - 1]$ does not contain $0$, that is, there exists some integer $b$ for which \begin{align*} (k - \ell)a - m\ell + 1 &\ge bn + 1 \\ (k - \ell)a + mk - 1 &\le (b + 1)n - 1. \end{align*} This can be rewritten as \[ m\ell \le (k - \ell)a - bn \le n - mk. \] Let $\delta = \gcd(n, k - \ell)$, so we may write \[ \frac{m\ell}{\delta} \le \frac{k - \ell}{\delta}\cdot a - \frac{n}{\delta} \cdot b \le \frac{n - mk}{\delta}. \] Since $\gcd \left( \frac{k - \ell}{\delta}, \frac{n}{\delta} \right) = 1$, any integer can be written as $\frac{k - \ell}{\delta} \cdot a - \frac{n}{\delta} \cdot b$ for some $a, b \in \mathbb{Z}$. Thus, there exists a $\{ 0, 1 \}$-$(k, \ell)$-sum-free interval of length $m$ iff there exists some integer $Z$ for which \[ \frac{m\ell}{\delta} \le Z \le \frac{n - mk}{\delta}, \] that is, \[ \frac{m\ell}{\delta} \le \left\lfloor \frac{n - mk}{\delta} \right\rfloor. \] Let $m^*$ denote the largest integer solution to this inequality. We have that $\lfloor \frac{n - mk}{\delta} \rfloor \ge \frac{n - mk - \delta + 1}{\delta}$, which is at least $\frac{m\ell}{\delta}$ when $m \le \frac{n - \delta + 1}{k + \ell}$, so $m^* \ge \lfloor \frac{n - \delta + 1}{k + \ell} \rfloor$. Since $\delta = \gcd(n, k - \ell) \le k - \ell < k + \ell$, we have that $m^*$ equals either $\lfloor \frac{n}{k + \ell} \rfloor$ or $\lfloor \frac{n}{k + \ell} \rfloor - 1$. Let $f = \lfloor \frac{n}{k + \ell} \rfloor$. Letting $r$ denote the residue of $-fk$ modulo $\delta$ (equivalently, of $n - fk$, since $\delta | n$), we have that $\lfloor \frac{n - fk}{\delta} \rfloor = \frac{n - fk - r}{\delta}$, so $m^* = f$ iff \[ f\ell \le n - fk - r \Longleftrightarrow f \le \frac{n - r}{k + \ell}. \] In either case, this means $m^* = \lfloor \frac{n - r}{k + \ell} \rfloor$.
\end{proof} \begin{corollary} If $\gcd(n, k - \ell) = 1$, then $\mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/n\mathbb{Z}) = \lfloor \frac{n}{k + \ell} \rfloor$. In particular, if $n = p$ is prime and $p \not| k - \ell$, then $\mu_{k, \ell}^{\{ 0, 1 \}}(\mathbb{Z}/p\mathbb{Z}) = \lfloor \frac{p}{k + \ell} \rfloor$. \end{corollary} Next, suppose that $c > 1$. We have the following bounds: \begin{theorem} In the following, let $\delta = \gcd(d, k - \ell)$ and $r$ be the remainder of $\ell \cdot \left\lceil \frac{d - \delta}{k + \ell} \right\rceil$ modulo $\delta$. Then, \[ \max_{d | c} \left\{ \left\lceil \frac{d - (\delta - r)}{k + \ell} \right\rceil \cdot \frac{n}{d} \right\} \le \mu_{k, \ell}^C (\mathbb{Z}/n\mathbb{Z}) \le \max_{d | n} \left\{ \left\lfloor \frac{d + \frac{2|C| d}{n} - 4}{k + \ell} - \frac{|C| d}{n} + 2 \right\rfloor \cdot \frac{n}{d} \right\}. \] \end{theorem} The upper bound here is a slight improvement upon the obvious upper bound given by the fact that any $C$-$(k, \ell)$-sum-free subset is also $(k, \ell)$-sum-free. \begin{proof} To see the lower bound, note that if we have a $(k, \ell)$-sum-free subset $B$ of $\mathbb{Z}/d\mathbb{Z}$ for some $d | c$, then this naturally lifts to a $C$-$(k, \ell)$-sum-free set (by taking all elements of $\mathbb{Z}/n\mathbb{Z}$ that are congruent to an element of $B$ modulo $d$). Combining with Bajnok and Matzke's result, we get the lower bound. As for the upper bound, we first show a few properties about the stabilizer of $k *_C A$, if $A$ is a $C$-$(k, \ell)$-sum-free set of maximal size. Let $K$ be the stabilizer of $k *_C A$. Then, $k *_C (A + K) = k *_C A + kK = k *_C A$, so $A + K$ is $C$-$(k, \ell)$-sum-free as well. Then, since $A$ is of maximal size and $A = A + \{ 0 \} \subseteq A + K$, we have in fact that $A = A + K$. Finally, for each $a \in A$ we have $a + K \subseteq A + K = A$, so $A$ is a union of cosets of $K$. In particular, $\frac{|A|}{|K|} \in \mathbb{Z}$.
We have by Kneser that $|k *_C A| \ge k|A| + (k-1)|C| - (2k-2)|\stab(k *_C A)|$ and $|\ell *_C A| \ge \ell |A| + (\ell - 1)|C| - (2\ell - 2)|\stab(\ell *_C A)|$. Suppose without loss of generality that $|\stab(k *_C A)| \ge |\stab(\ell *_C A)|$, and let $K = \stab(k *_C A)$. Then, \begin{align*} n &\ge |k *_C A| + |\ell *_C A| \\ &\ge (k + \ell)|A| + (k + \ell - 2)|C| - (2k + 2\ell - 4)|K|, \end{align*} which rearranges to \[ \frac{|A|}{|K|} \le \frac{n - (k + \ell - 2)|C| + (2k + 2\ell - 4)|K|}{(k + \ell)|K|}. \] Letting $|K| = \frac{n}{d}$, we have that \[ \frac{|A|}{|K|} \le \frac{d + \frac{2d|C|}{n} - 4}{k + \ell} - \frac{d|C|}{n} + 2. \] Since $\frac{|A|}{|K|}$ is an integer, we have that \[ |A| \le \left\lfloor \frac{d + \frac{2d|C|}{n} - 4}{k + \ell} - \frac{d|C|}{n} + 2 \right\rfloor \cdot \frac{n}{d}, \] as claimed. \end{proof} We move on to consider sets of size $3$. We first classify which sets can be related by a sequence of shifts and multiplications. \begin{lemma} $C$ and $D$ are related by a sequence of shifts and multiplications iff $D = g(C + \{ h \})$ for some elements $h \in \mathbb{Z}/n\mathbb{Z}$ and $g \in (\mathbb{Z}/n\mathbb{Z})^\times$. \end{lemma} \begin{proof} It suffices to show that a multiplication followed by a shift can be rewritten as a shift followed by a multiplication, as then we can group all the shifts first and multiplications second. Then, since a composition of shifts is a shift and a composition of multiplications is a multiplication, the result follows. To see this, suppose $D = gC + h$ for some $g \in (\mathbb{Z}/n\mathbb{Z})^\times$ and $h \in \mathbb{Z}/n\mathbb{Z}$. We can rewrite this as $D = g(C + \frac{h}{g})$, which is a shift followed by a multiplication. Since $g$ is a unit, we have that $\frac{h}{g}$ is defined, so $D$ is also the result of a shift by $\frac{h}{g}$ followed by a multiplication by $g$. \end{proof} Now, assume $n = p$ is a prime.
Via shift and multiplication, we can write any set $C$ of size $3$ as $\{ 0, 1, c \}$. \begin{lemma} Sets $C = \{ 0, 1, c \}$ and $D = \{ 0, 1, d \}$ are related by a sequence of shifts and multiplications iff \[ d \in \left\{ c, \frac{1}{c}, -(c-1), -\frac{1}{c-1}, \frac{c-1}{c}, \frac{c}{c-1} \right\}. \] \end{lemma} \section{Further Questions} When $C = \{ 0, 1, \dots, c-1 \}$, the upper and lower bounds given for $\mu_{k, \ell}^C(\mathbb{Z}/n\mathbb{Z})$ often coincide. When they do not, the two bounds are $1$ apart, and we conjecture that the lower bound holds as equality. \begin{conjecture} For $c \ge 2$ and $C = \{ 0, 1, \dots, c-1 \}$, \[ \mu_{k, \ell}^{C}(\mathbb{Z}/n\mathbb{Z}) = \max \left\{ 0,\ \left\lfloor \frac{n + 2(c-2) - r}{k + \ell} \right\rfloor - (c-2) \right\}, \] where $\delta = \gcd(n, k - \ell)$, $\chi(c, k, \ell) = \left\lfloor \frac{n + 2(c-2)}{k + \ell} \right\rfloor - (c-2)$ and $r$ is the remainder of $-k\cdot \chi(c, k, \ell) - (k-1)(c-2)$ modulo $\delta$. That is, the largest $C$-$(k, \ell)$-sum-free subset of $\mathbb{Z}/n\mathbb{Z}$ is achieved by an interval. \end{conjecture} Notice that for small enough values of $n$, it is possible for $\left\lfloor \frac{n + 2(c-2) - r}{k + \ell} \right\rfloor - (c-2)$ to be negative, hence we need to compare it to $0$. Consider, for instance, $n = 40$, $k = 9$, and $\ell = 4$. For $C = \{ 0, 1 \}$, the upper bound in Theorem~\ref{thm:01c-main} is $3$, while the lower bound is $2$. A simple computer program verifies that the largest $\{ 0, 1 \}$-$(9, 4)$-sum-free subset of $\mathbb{Z}/40\mathbb{Z}$ has size $2$. For $C = \{ 0, 1, 2 \}$, the upper and lower bounds once again differ, being $2$ and $1$ respectively, and a computer program verifies that the largest $\{ 0, 1, 2 \}$-$(9, 4)$-sum-free subset of $\mathbb{Z}/40\mathbb{Z}$ has size $1$.
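Such computer checks can be reproduced with a short brute-force search. The sketch below is our own illustration (the function names \texttt{mu}, \texttt{fold}, and \texttt{is\_sum\_free} are ours, not from the paper); it is exponential in $n$ and only intended for small parameters. It uses the facts that $j *_C A = jA + (j-1)C$ and that subsets of sum-free sets are sum-free, so the achievable sizes form an initial segment and the search can stop at the first size admitting no sum-free set.

```python
from itertools import combinations

def restricted_sumset(S, A, C, n):
    # S +_C A = S + A + C, with all sums taken modulo n
    return {(s + a + c) % n for s in S for a in A for c in C}

def fold(j, A, C, n):
    # j *_C A: apply  . +_C A  a total of (j - 1) times, so j *_C A = jA + (j-1)C
    S = set(A)
    for _ in range(j - 1):
        S = restricted_sumset(S, A, C, n)
    return S

def is_sum_free(A, n, k, l, C):
    # A is C-(k, l)-sum-free iff  k *_C A  and  l *_C A  are disjoint
    return fold(k, A, C, n).isdisjoint(fold(l, A, C, n))

def mu(n, k, l, C):
    # Largest size of a C-(k, l)-sum-free subset of Z/nZ.  Any subset of a
    # sum-free set is sum-free, so stop at the first size with no witness.
    m = 0
    while m < n and any(is_sum_free(A, n, k, l, C)
                        for A in combinations(range(n), m + 1)):
        m += 1
    return m
```

For instance, `mu(13, 2, 1, (0, 1))` returns $4 = \lfloor 13/3 \rfloor$, matching the corollary for $\gcd(n, k - \ell) = 1$, and `mu(40, 9, 4, (0, 1))` should reproduce (more slowly) the value $2$ reported above.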
We have checked using a computer that for all $2 \le c \le 10$, there are no values of $n$, $k$, and $\ell$ with $\ell < 10$, $k < 20$, and $n < 5(k + \ell)$ for which the upper and lower bounds of Theorem~\ref{thm:01c-main} differ and $\mu_{k, \ell}^{\{ 0, 1, \dots, c-1 \}}(\mathbb{Z}/n\mathbb{Z})$ is equal to the upper bound, unless the upper bound is equal to $0$. For $C = \{ 0, s \}$, our bounds are often wider. We ask for a precise value of $\mu_{k, \ell}^C(\mathbb{Z}/n\mathbb{Z})$. \begin{question} For $\gcd(n, s) \not= 1$, what is the value of $\mu_{k, \ell}^{\{ 0, s \}}(\mathbb{Z}/n\mathbb{Z})$? \end{question} Other values of $C$ may be interesting to study. Note that if $C$ is shift-mult-equivalent to a set contained in $\{ 0, 1, \dots, x \}$, then $A$ is $C$-$(k, \ell)$-sum-free if it is $\{ 0, 1, \dots, x \}$-$(k, \ell)$-sum-free, so the lower bound of Theorem~\ref{thm:01c-main} gives a lower bound on $\mu_{k, \ell}^C(\mathbb{Z}/n\mathbb{Z})$. An upper bound in some cases, with some careful considerations, can be attained following the methods of this paper, or by noting that if $\{ 0, x \}$ is a subset of $C$, then any $C$-$(k, \ell)$-sum-free set must also be $\{ 0, x \}$-$(k, \ell)$-sum-free, from which we attain an upper bound of $\left\lfloor \frac{n}{k + \ell} \right\rfloor$ by Theorem~\ref{thm:0s-main}. However, it is not known how to attain an upper bound better than $\left\lfloor \frac{n}{k + \ell} \right\rfloor$ in all cases. \begin{question} What can we say about the value of $\mu_{k, \ell}^C(\mathbb{Z}/n\mathbb{Z})$ for other values of $C$? \end{question} \section{Acknowledgments} This research was conducted at the University of Minnesota Duluth REU and was supported by NSF/DMS grant 1659047 and NSA grant H98230-18-1-0010. I would like to thank Joe Gallian, who suggested the problem, and Mehtaab Sawhney and Aaron Berger, who gave comments on this paper. \end{document}
\begin{document} \markboth{M. B. Ettienne et al.}{Optimal-Time Dictionary-Compressed Indexes} \title{Optimal-Time Dictionary-Compressed Indexes} \author{ Anders Roy Christiansen \affil{The Technical University of Denmark, Denmark} Mikko Berggren Ettienne\affil{The Technical University of Denmark, Denmark} Tomasz Kociumaka\affil{Bar-Ilan University, Israel, and University of Warsaw, Poland} Gonzalo Navarro\affil{CeBiB and University of Chile, Chile} Nicola Prezza\affil{University of Pisa, Italy} } \begin{abstract} We describe the first self-indexes able to count and locate pattern occurrences in optimal time within a space bounded by the size of the most popular dictionary compressors. To achieve this result we combine several recent findings, including \emph{string attractors} --- new combinatorial objects encompassing most known compressibility measures for highly repetitive texts ---, and grammars based on \emph{locally-consistent parsing}. In more detail, let $\gamma$ be the size of the smallest attractor for a text $T$ of length $n$. The measure $\gamma$ is an (asymptotic) lower bound on the size of dictionary compressors based on Lempel--Ziv, context-free grammars, and many others. The smallest known text representations in terms of attractors use space $O(\gamma\log(n/\gamma))$, and our lightest indexes work within the same asymptotic space. Let $\epsilon>0$ be a suitably small constant fixed at construction time, $m$ be the pattern length, and $occ$ be the number of its text occurrences. Our index counts pattern occurrences in $O(m+\log^{2+\epsilon}n)$ time, and locates them in $O(m+(occ+1)\log^\epsilon n)$ time. These times already outperform those of most dictionary-compressed indexes, while obtaining the least asymptotic space for any index searching within $O((m+occ)\,\textrm{polylog}\,n)$ time.
Further, by increasing the space to $O(\gamma\log(n/\gamma)\log^\epsilon n)$, we reduce the locating time to the optimal $O(m+occ)$, and within $O(\gamma\log(n/\gamma)\log n)$ space we can also count in optimal $O(m)$ time. No dictionary-compressed index had obtained this time before. All our indexes can be constructed in $O(n)$ space and $O(n\log n)$ expected time. As a byproduct of independent interest, we show how to build, in $O(n)$ expected time and without knowing the size $\gamma$ of the smallest attractor (which is NP-hard to find), a run-length context-free grammar of size $O(\gamma\log(n/\gamma))$ generating (only) $T$. As a result, our indexes can be built without knowing $\gamma$. \end{abstract} \begin{bottomstuff} Kociumaka supported by ISF grants no. 824/17 and 1278/16 and by an ERC grant MPM under the EU's Horizon 2020 Research and Innovation Programme (grant no. 683064). Navarro supported by Fondecyt grant 1-170048, Chile; Basal Funds FB0001, Conicyt, Chile. Prezza supported by the project MIUR-SIR CMACBioSeq (``Combinatorial methods for analysis and compression of biological sequences'') grant no.~RBSI146R5L. A preliminary version of this article appeared in {\em Proc. LATIN'18} \cite{CE18}. Authors' addresses: Anders Roy Christiansen, The Technical University of Denmark, Denmark, {\tt [email protected]}. Mikko Berggren Ettienne, The Technical University of Denmark, Denmark, {\tt [email protected]}. Tomasz Kociumaka, Department of Computer Science, Bar-Ilan University, Ramat Gan, Israel \and Institute of Informatics, University of Warsaw, Poland, {\tt [email protected]}. Gonzalo Navarro, CeBiB -- Center for Biotechnology and Bioengineering, Chile \and Department of Computer Science, University of Chile, Chile, {\tt [email protected]}. Nicola Prezza, Department of Computer Science, University of Pisa, Italy, {\tt [email protected]}.
\end{bottomstuff} \keywords{Repetitive string collections; Compressed text indexes; Attractors; Grammar compression; Locally-consistent parsing} \maketitle \section{Introduction} The need to search for patterns in large string collections lies at the heart of many text retrieval, analysis, and mining tasks, and techniques to support it efficiently have been studied for decades: the suffix tree, which is the landmark solution, is over 40 years old \cite{DBLP:conf/focs/Weiner73,DBLP:journals/jacm/McCreight76}. The recent explosion of data in digital form has led research since 2000 towards {\em compressed self-indexes}, which support text access and searches within compressed space \cite{DBLP:journals/csur/NavarroM07}. This research, though very successful, is falling short of coping with a new wave of data that is flooding our storage and processing capacity, with volumes orders of magnitude larger that outpace Moore's Law \cite{Plos15}. Interestingly enough, this massive increase in data size is often not accompanied by a proportional increase in the amount of information that the data carries: much of the fastest-growing data is {\em highly repetitive}, for example thousands of genomes of the same species, versioned document and software repositories, periodic sky surveys, and so on. Dictionary compression of those datasets typically reduces their size by two orders of magnitude \cite{GNP18}. Unfortunately, previous self-indexes build on statistical compression, which is unable to capture repetitiveness \cite{KN13}; therefore, a new generation of compressed self-indexes based on dictionary compression is emerging.
Examples of successful compressors from this family include (but are not limited to) the Lempel--Ziv factorization~\cite{LZ76}, of size $z$; context-free grammars~\cite{KY00} and run-length context-free grammars~\cite{DBLP:conf/mfcs/NishimotoIIBT16}, of size $g$; bidirectional macro schemes \cite{SS82}, of size $b$; and collage systems \cite{KidaMSTSA03}, of size $c$. Other compressors that are not dictionary-based but also perform well on repetitive text collections are the run-length Burrows--Wheeler transform~\cite{BWT}, of size $\rho$, and the CDAWG~\cite{blumer1987complete}, of size $e$. A number of compressed self-indexes have been built on top of those compressors; \citeN{GNP18} give a thorough review. Recently, \citeN{KP18} showed that all the above-mentioned repetitiveness measures (i.e., $z$, $g$, $b$, $c$, $\rho$, $e$) are never asymptotically smaller than the size $\gamma$ of a new combinatorial object called \emph{string attractor}. This and subsequent works~\cite{KP18,NP18,prezza2019optimal} showed that efficient access and searches can be supported within $O(\gamma\log(n/\gamma))$ space. By the nature of this new repetitiveness measure, such data structures are universal, in the sense that they can be used on top of a wide set of dictionary-compressed representations of $T$. \paragraph{Our results} In this article we obtain the best results on attractor-based indexes, including the first optimal-time search complexities within space bounded in terms of $\gamma$, $z$, $g$, $b$, or $c$. We combine and improve upon three recent results: \begin{enumerate} \item \citeN[Thm.~2]{NP18} presented the first index that builds on an attractor of size $\gamma$ of a text $T[1..n]$. It uses $O(\gamma\log(n/\gamma))$ space and finds the $occ$ occurrences of a pattern $P[1..m]$ in time $O(m\log n + occ (\log\log(n/\gamma)+\log^\epsilon \gamma))$ for any constant $\epsilon>0$. 
\item \citeN[Thm.~2(3)]{CE18} presented an index that builds on the Lempel--Ziv parse of $T$, of $z \ge \gamma$ phrases, which uses $O(z\log(n/z))$ space and searches in time\footnote{This is the conference version of the present article, where we mistakenly claim a slightly better time of $O(m + \log^\epsilon z + occ(\log\log n + \log^\epsilon z))$. The error can be traced back to the wrong claim that our two-sided range structure, built on $O(z\log(n/z))$ points, answers queries in $O(\log^\epsilon z)$ time (the correct time is, instead, $O(\log^\epsilon (z\log(n/z)))$). The second occurrence of $\log^\epsilon z$, however, is correct, because the missing term is absorbed by $O(\log\log n)$.} $O(m + \log^\epsilon(z\log(n/z)) + occ(\log\log n + \log^\epsilon z))$. \item \citeN[Thm.~5]{Nav18} presented the first index that builds on the Lempel--Ziv parse of $T$ and counts the number of occurrences of $P$ in $T$ (i.e., computes $occ$) in time $O(m\log n + m\log^{2+\epsilon} z)$, using $O(z\log(n/z))$ space. \end{enumerate} Our contributions are as follows: \begin{enumerate} \item We obtain, in space $O(\gamma\log(n/\gamma))$, an index that lists all the occurrences of $P$ in $T$ in time $O(m + \log^\epsilon \gamma + occ \log^\epsilon (\gamma\log(n/\gamma)))$, thereby obtaining the best space and improving the time from previous works \cite{CE18,NP18}. \item We obtain, in space $O(\gamma\log(n/\gamma))$, an index that counts the occurrences of $P$ in $T$ in time $O(m+\log^{2+\epsilon} (\gamma\log(n/\gamma)))$, which outperforms the previous result \cite{Nav18} both in time and space. \item Using more space, $O(\gamma\log(n/\gamma)\log^\epsilon n)$, we list the occurrences in optimal $O(m+occ)$ time, and within space $O(\gamma\log(n/\gamma)\log n)$, we count them in optimal $O(m)$ time. \end{enumerate} We can build all our structures in $O(n\log n)$ expected time and $O(n)$ working space, without the need to know the size $\gamma$ of the smallest attractor. 
Our first contribution uses the minimum known asymptotic space, $O(\gamma\log(n/\gamma))$, for any dictionary-compressed index searching in time $O((m+occ)\,\mathrm{polylog}\,n)$ \cite{GNP18}. Only recently has it been shown \cite{NP18} that it is possible to search within this space. Indeed, our new index outperforms most dictionary-compressed indexes, with a few notable exceptions like \citeN{gagie2014lz77}, who use $O(z\log(n/z))$ space and $O(m\log m + occ \log\log n)$ search time (but, unlike us, assume a constant alphabet), and \citeN{PhBiCPM17}, who use $O(z\log(n/z)\log\log z)$ space and $O(m+occ \log\log n)$ search time without making any assumption on the alphabet size. Our second contribution lies in a less explored area, since the first index able to count efficiently within dictionary-bounded space is very recent \cite{Nav18}. Our third contribution yields the first indexes with space bounded in terms of $\gamma$, $z$, $g$, $b$, or $c$, multiplied by any $O(\textrm{polylog}\,n)$, that search in optimal time. Such optimal times have been obtained, instead, by using $O(\rho\log(n/\rho))$ space \cite{GNP18}, or using $O(e)$ space \cite{BC17}. Measures $\rho$ and $e$, however, are not related to dictionary compression and, more importantly, have no known useful upper bounds in terms of $\gamma$. Further, experiments \cite{BCGPR15,GNP18} show that they are usually considerably larger than $z$ on repetitive texts. As a byproduct of independent interest, we show how to build a run-length context-free grammar (RLCFG) of size $O(\gamma\log(n/\gamma))$ generating (only) $T$, where $\gamma$ is the size of the smallest attractor, in $O(n)$ expected time {\em and without the need to know the attractor}. We use this result to show that our indexes do not need to know an attractor, nor its minimum possible size $\gamma$ (which is NP-hard to obtain \cite{KP18}), in order to achieve their attractor-bounded results. This makes our results much more practical.
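To illustrate the kind of object involved: a RLCFG extends a CFG with run rules $A \to B^k$, which compress long repetitions without paying for $k$ copies. The toy grammar and extraction routine below are our own illustrative sketch (not the grammar built in this article); random access descends the parse tree using precomputable expansion lengths, and a run rule is resolved with a single modulo operation instead of walking its copies.

```python
# Toy RLCFG: name -> ("pair", X, Y) for A -> XY, or ("run", X, k) for A -> X^k.
# Symbols absent from `rules` are terminals (single characters).
rules = {
    "A": ("pair", "a", "b"),  # exp(A) = "ab"
    "B": ("run", "A", 3),     # exp(B) = exp(A)^3 = "ababab"
    "S": ("pair", "B", "a"),  # exp(S) = "abababa"
}

def length(sym):
    # Length of the expansion of sym (precomputed once in a real index)
    if sym not in rules:
        return 1
    r = rules[sym]
    if r[0] == "pair":
        return length(r[1]) + length(r[2])
    return length(r[1]) * r[2]

def extract(sym, i):
    # Return character i (0-based) of the expansion of sym, in time
    # proportional to the grammar depth, never expanding runs explicitly.
    while sym in rules:
        r = rules[sym]
        if r[0] == "pair":
            left = length(r[1])
            sym, i = (r[1], i) if i < left else (r[2], i - left)
        else:  # run rule X^k: jump straight into the correct copy of X
            sym, i = r[1], i % length(r[1])
    return sym
```

Here `"".join(extract("S", i) for i in range(length("S")))` recovers the generated text `abababa`.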
Another byproduct is the generalization of our results to arbitrary CFGs and, especially, RLCFGs, yielding slower times in $O(g)$ space, which can potentially be $o(\gamma\log(n/\gamma))$. \paragraph{Techniques} A key component of our result is the fact that one can build a locally-consistent and locally-balanced grammar generating (only) $T$ such that only a few splits of a pattern $P$ must be considered in order to capture all of its ``primary'' occurrences \cite{KU96}. Previous parsings had obtained $O(\log m \log^* n)$ \cite{NIIBT15} and $O(\log n)$ \cite{DBLP:conf/soda/GawrychowskiKKL18} splits, but now we build on a parsing by \citeN{DBLP:journals/algorithmica/MehlhornSU97} to obtain $O(\log m)$ splits with a grammar of size $O(\gamma\log(n/\gamma))$. Our first step is to define a variant of Mehlhorn et al.'s randomized parsing and prove, in Section~\ref{sec:lcp}, that it enjoys several locality properties we require later for indexing. In Section~\ref{sec:locality}, we use the parsing to build an RLCFG with the local balancing and local consistency properties we need. We then show, in Section~\ref{sec:attractor}, that the size of this grammar is bounded by $O(\gamma\log(n/\gamma))$, by proving that new nonterminals appear only around attractor positions. In that section, we also show that the grammar can be built without knowing the minimum size $\gamma$ of an attractor of $T$. This is important because, unlike $z$, which can be computed in $O(n)$ time, finding $\gamma$ is NP-hard \cite{KP18}. To this end, we define a new measure of compressibility, $\delta \le \gamma$, which can be computed in $O(n)$ time and can be used to bound the size of the grammar. Section~\ref{sec:index} describes our index. We show how to parse the pattern in linear time using the same text grammar, and how to do efficient substring extraction and Karp--Rabin fingerprinting from an RLCFG. Importantly, we prove that only $O(\log m)$ split points are necessary in our grammar.
All these elements are needed to obtain time linear in $m$. We also build on existing techniques \cite{CNspire12} to obtain time linear in $occ$ for the ``secondary'' occurrences; the primary ones are found in a two-dimensional data structure and require more time. Finally, by using a larger two-dimensional structure and introducing new techniques to handle short patterns, we raise the space to $O(\gamma\log(n/\gamma)\log^\epsilon n)$ but obtain the first dictionary-compressed index using optimal $O(m+occ)$ time. In Section~\ref{sec:counting} we use the fact that only $O(\log m)$ splits must be considered to reduce the counting time of \citeN{Nav18}, while making its space attractor-bounded as well. This requires handling the run-length rules of RLCFGs, which turns out to require new ideas exploiting string periodicities. Further, by handling short patterns separately and raising the space to $O(\gamma\log(n/\gamma)\log n)$, we obtain the first dictionary-compressed index that counts in optimal time, $O(m)$. Along the article we obtain various results on accessing and indexing specific RLCFGs. We generalize them to arbitrary CFGs and RLCFGs in Appendix~\ref{sec:rlcfgs}. An earlier version of this article appeared in {\em Proc. LATIN'18} \cite{CE18}. This article is an exhaustive rewrite where we significantly extend and improve upon the conference results. We use a slightly different grammar, which requires re-proving all the results, in particular correcting and completing many of the proofs in the conference paper. We have also reduced the space by building on attractors instead of Lempel--Ziv parsing, used better techniques to report secondary occurrences and handle short patterns, and ultimately obtained optimal locating time. All the results on counting are also new. \section{Basic Concepts} \label{sec:basics} \paragraph*{Strings and texts} A {\em string} is a sequence $S[1\mathinner{.\,.} \ell] = S[1] S[2] \cdots S[\ell]$ of {\em symbols}. 
The symbols belong to an {\em alphabet} $\Sigma$, which is a finite subset of the integers. The {\em length} of $S$ is written as $|S|=\ell$. A string $Q$ is a \emph{substring} of $S$ if $Q$ is empty or $Q=S[i] \cdots S[j]$ for some indices $1\le i \le j \le \ell$. The \emph{occurrence} of $Q$ at position $i$ of $S$ is a \emph{fragment} of $S$ denoted $S[i\mathinner{.\,.} j]$. We then also say that $S[i\mathinner{.\,.} j]$ \emph{matches} $Q$. We assume implicit casting of fragments to the underlying substrings so that $S[i\mathinner{.\,.} j]$ may also denote $S[i]\cdots S[j]$ in contexts requiring strings rather than fragments. A {\em suffix} of $S$ is a fragment of the form $S[i\mathinner{.\,.} \ell]$, and a {\em prefix} is a fragment of the form $S[1\mathinner{.\,.} i]$. The juxtaposition of strings and/or symbols represents their concatenation, and exponentiation denotes iterated concatenation. The {\em reverse} of $S[1\mathinner{.\,.} \ell]$ is $S^{rev} = S[\ell] S[\ell-1] \cdots S[1]$. We will index a string $T[1\mathinner{.\,.} n]$, called the {\em text}. We assume our text to be flanked by special symbols $T[1]=\#$ and $T[n]=\$$ that belong to $\Sigma$ but occur nowhere else in $T$. This, of course, does not change any of our asymptotic results, but it simplifies matters. \paragraph*{Karp--Rabin signatures} {\em Karp--Rabin fingerprinting} \cite{DBLP:journals/ibmrd/KarpR87} assigns to every string $S[1\mathinner{.\,.} \ell]$ a signature $\kappa(S) = (\sum_{i=1}^\ell S[i]\cdot c^{i-1}) \bmod \mu$ for a suitable integer $c$ and a prime number $\mu$. It is possible to build a signature formed by a pair of functions $\langle \kappa_1,\kappa_2 \rangle$ guaranteeing no collisions between substrings of $T[1\mathinner{.\,.} n]$, in $O(n\log n)$ expected time \cite{DBLP:journals/jda/BilleGSV14}.
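As a small illustration of how fingerprints compose (a property we later exploit for fingerprinting concatenations of expansions), the sketch below computes $\kappa$ directly from the definition. The base and modulus are illustrative choices, not the collision-free pair $\langle \kappa_1,\kappa_2\rangle$ of \cite{DBLP:journals/jda/BilleGSV14}.

```python
# Sketch of Karp-Rabin fingerprinting: kappa(S) = (sum S[i]*c^(i-1)) mod mu.
# The base C and the prime modulus MU are illustrative, not the paper's choices.
MU = (1 << 61) - 1    # a Mersenne prime
C = 31                # in practice, a base chosen at random in [1, MU)

def kr_fingerprint(S):
    h, power = 0, 1
    for ch in S:
        h = (h + ord(ch) * power) % MU
        power = (power * C) % MU
    return h

def kr_concat(hA, lenA, hB):
    # kappa(A.B) = kappa(A) + c^|A| * kappa(B) (mod mu): the fingerprint of a
    # concatenation follows from the fingerprints and lengths of its parts.
    return (hA + pow(C, lenA, MU) * hB) % MU

assert kr_fingerprint('abcd') == kr_concat(kr_fingerprint('ab'), 2, kr_fingerprint('cd'))
```

The composition rule is what makes fingerprints computable over a grammar: knowing $\kappa(\exp(A_i))$ and $|A_i|$ for the children of a rule suffices to obtain $\kappa(\exp(A))$.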
\paragraph*{With high probability} The term \emph{with high probability} (\emph{w.h.p.}) means with probability at least $1-n^{-c}$ for an arbitrary constant parameter $c$, where $n$ is the input size (in our case, the length of the text). \paragraph*{Model of computation} We use the RAM model with word size $w = \Omega(\log n)$, allowing classic arithmetic and bit operations on words in constant time. Our logarithms are to the base $2$ by default. \section{Locally-Consistent Parsing} \label{sec:lcp} A string $S[1\mathinner{.\,.} n]$ can be parsed in a \emph{locally consistent} way, meaning that equal substrings are largely parsed in the same form. We use a variant of the parsing of \citeN{DBLP:journals/algorithmica/MehlhornSU97}. Let us define a \emph{run} in a string as a maximal substring repeating one symbol. The parsing proceeds in two passes. First, it groups the runs into \emph{metasymbols}, which are seen as single symbols. The resulting sequence is denoted $\hat{S}[1\mathinner{.\,.} \hat{n}]$. The following definition describes the process precisely and defines mappings between $S$ and $\hat{S}$. \newcommand{\run}[1]{\textrm{\framebox{$#1$}}} \begin{definition} \label{def:Sb} The string $\hat{S}[1\mathinner{.\,.} \hat{n}]$ is obtained from a string $S[1\mathinner{.\,.} n]$ by replacing every distinct run $a^\ell$ in $S$ by a special metasymbol $\run{a^\ell}$ so that two occurrences of the same run $a^\ell$ are replaced by the same metasymbol. The alphabet $\hat{\Sigma}$ of $\hat{S}$ consists of the metasymbols that represent runs in $S$, that is $\hat{\Sigma} = \{\textrm{\framebox{$a^\ell$}}: a^\ell\text{ is a run in }S\}$. A position $S[i]$ that belongs to a run $a^\ell$ is \emph{mapped} to the position $\hat{S}[\hat{i}]$ of the corresponding metasymbol $\run{a^\ell}$, denoted $\hat{i}=map(i)$. 
A position $\hat{S}[\hat{i}]$ is \emph{mapped back} to the maximal range $imap(\hat{i}) = [fimap(\hat{i})\mathinner{.\,.} limap(\hat{i})]$ of positions in $S$ that map to $\hat{i}$. That is, if $S[i\mathinner{.\,.} i+\ell-1]$ is a run in $S$ that maps to $\hat{i}$, then $fimap(\hat{i})=i$ and $limap(\hat{i})=i+\ell-1$. \end{definition} The string $\hat{S}$ is then parsed into {\em blocks}. A bijective function $\pi : \Sigma \rightarrow [1\mathinner{.\,.} |\Sigma|]$ is chosen uniformly at random; we call it a {\em permutation}. We then extend $\pi$ to $\hat{\Sigma}$ so that $\pi(\run{a^\ell}) = \pi(a)$, that is, the value on a metasymbol is inherited from the underlying symbol. Note that no two consecutive symbols in $\hat{S}$ have the same $\pi$ value. We then define local minima in $\hat{S}$, and these are used to parse $\hat{S}$ (and $S$) into blocks. \begin{definition} \label{def:localmin} Given a string $S$, its corresponding string $\hat{S}[1\mathinner{.\,.} \hat{n}]$, and a permutation $\pi$ on the alphabet of $S$, a \emph{local minimum} of $\hat{S}$ is defined as any position $\hat{i}$ such that $1<\hat{i}<\hat{n}$ and $\pi(\hat{S}[\hat{i}-1]) > \pi(\hat{S}[\hat{i}]) < \pi(\hat{S}[\hat{i}+1])$. \end{definition} \begin{definition} \label{def:parse} The {\em parsing} of $\hat{S}$ partitions it into a sequence of {\em blocks}. The blocks end at position $\hat{n}$ and at every local minimum. The parsing of $\hat{S}$ induces a parsing on $S$: If a block ends at $\hat{S}[\hat{i}]$, then a block ends at $S[limap(\hat{i})]$. \end{definition} Note that, by definition, the first block starts at $S[1]$. When applied on texts $S[1\mathinner{.\,.} n]$, it will hold that $\hat{S}[1]=\#$ and $\hat{S}[\hat{n}]=\$$, so $\hat{S}$ will also be a text. Further, we will always force that $\pi(\$)=1$ and $\pi(\#)=2$, which guarantees that there cannot be local minima in $\hat{S}[1\mathinner{.\,.} 2]$ nor in $\hat{S}[\hat{n}-1\mathinner{.\,.} \hat{n}]$. 
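For concreteness, the two passes of the parsing (\Cref{def:Sb,def:localmin,def:parse}) admit the following short sketch. It is purely illustrative: it uses 0-based positions, fixes the permutation $\pi$ instead of drawing it at random, and returns the blocks explicitly rather than any index structure.

```python
# Illustrative sketch of the two-pass parsing: collapse maximal runs into
# metasymbols, then cut at local minima of pi over the run sequence.
import itertools

def parse_into_blocks(S, pi):
    # Pass 1: collapse maximal runs a^l of S; pi is extended to metasymbols
    # by inheriting the value of the underlying symbol a.
    runs = [(a, len(list(g))) for a, g in itertools.groupby(S)]
    limap, pos = [], 0                 # limap[k]: last position in S of run k
    for _, l in runs:
        pos += l
        limap.append(pos - 1)
    # Pass 2: a block ends at S[limap[k]] for every local minimum k of pi
    # over the run sequence, and at the last position of S.
    ends = [limap[k] for k in range(1, len(runs) - 1)
            if pi[runs[k - 1][0]] > pi[runs[k][0]] < pi[runs[k + 1][0]]]
    ends.append(len(S) - 1)
    blocks, start = [], 0
    for e in ends:                     # recover the blocks from the boundaries
        blocks.append(S[start:e + 1])
        start = e + 1
    return blocks

# A text flanked by '#' and '$', with pi($)=1 and pi(#)=2 as required above.
S = '#dbdaacaacbd$'
pi = {c: v for v, c in enumerate('$#abcd', start=1)}
blocks = parse_into_blocks(S, pi)
assert ''.join(blocks) == S
assert all(len(b) >= 2 for b in blocks)   # every block has >= 2 symbols
```

With this particular $\pi$, the example text is cut into \texttt{\#db | daa | caa | cb | d\$}; note that each cut falls at the last symbol of a run whose $\pi$ value is smaller than those of both neighboring runs.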
Together with the fact that there cannot be two consecutive local minima, this yields the following observation. \begin{observation} \label{obs:length2} Every block in $S$ or $\hat{S}$ is formed by at least two consecutive elements (symbols or metasymbols, respectively). \end{observation} \begin{definition}\label{def:bb} We say that a position $p<n$ of the parsed text $S$ is a \emph{block boundary} if a block ends at position $p$. For every non-empty fragment $S[i\mathinner{.\,.} j]$ of $S$, we define \[B(i,j)=\{p - i : i \le p < j\text{ and $p$ is a block boundary}\}.\] Moreover, for every integer $c\ge 0 $, we define subsets $L(i,j,c)$ and $R(i,j,c)$ of $B(i,j)$ consisting of the $\min(c,|B(i,j)|)$ smallest and largest elements of $B(i,j)$, respectively. \end{definition} Observe that any fragment $S[i\mathinner{.\,.} j]$ intersects a sequence of $1+|B(i,j)|$ blocks (the first and the last block might not be contained in the fragment). We are interested in \emph{locally contracting} parsings, where this number of blocks is smaller than the fragment's length by a constant factor. \begin{definition} \label{def:locally-contractive} A parsing is \emph{locally contracting} if there exist constants $\alpha$ and $\beta<1$ such that $|B(i,j)|\le \alpha+\beta |S[i\mathinner{.\,.} j]|$ for every fragment $S[i\mathinner{.\,.} j]$ of $S$. \end{definition} \begin{lemma} \label{lem:locally-contracting} The parsing of $S$ from \Cref{def:parse} is locally contracting with $\alpha=0$ and $\beta=\frac12$. \end{lemma} \begin{proof} By \Cref{obs:length2}, adjacent positions in $S$ cannot both be block boundaries. Hence, $|B(i,j)|\le \lceil \frac{j-i}{2}\rceil = \lfloor\frac{j-i+1}{2}\rfloor = \lfloor\frac12|S[i\mathinner{.\,.} j]|\rfloor \le \frac12|S[i\mathinner{.\,.} j]|$. \end{proof} We formally define \emph{locally consistent} parsings as follows. 
\begin{definition}\label{def:locally-consistent} A parsing is \emph{locally consistent} if there exists a constant $c_{p}$ such that for every pair of matching fragments $S[i\mathinner{.\,.} j]=S[i'\mathinner{.\,.} j']$ it holds that $B(i,j)\setminus B(i',j') \subseteq L(i,j,c_p)\cup R(i,j,c_p)$, that is, $B(i,j)$ and $B(i',j')$ differ by at most $c_p$ smallest and $c_p$ largest elements. \end{definition} Next, we prove local consistency of our parsing. \begin{lemma}\label{lem:alt} The parsing of $S$ from \Cref{def:parse} is locally consistent with $c_p=1$. More precisely, if $S[i\mathinner{.\,.} j]=S[i'\mathinner{.\,.} j']$ are matching fragments of $S$, then \[B(i,j)\setminus \{limap(map(i))-i\} = B(i',j') \setminus \{limap(map(i))-i\}.\] \end{lemma} \begin{proof} By definition, a block boundary is a position $q$ such that $q=limap(\hat{q})$ for a local minimum $\hat{S}[\hat{q}]$ in $\hat{S}$. Hence, a position $q$, with $1 < q < n$, is a block boundary if and only if $\pi(S[q])< \pi(S[q+1])$ and $\pi(S[q]) < \pi(S[r])$, where $r=fimap(map(q))-1$ is the rightmost position to the left of $q$ with $S[r]\ne S[q]$. Consider a position $p$, with $i < p < j$, and the corresponding position $p'=p-i+i'$. If $p > limap(map(i))$, then the positions $r=fimap(map(p))-1$ and $r'=fimap(map(p'))-1$ satisfy $r'-i'=r-i\ge 0$. Hence, $p$ is a block boundary if and only if $p'$ is one. On the other hand, if $p < limap(map(i))$, then neither $p$ nor $p'$ is a block boundary because $S[p]=S[p+1]$ and $S[p']=S[p'+1]$. Consequently, $p = limap(map(i))$ is the only position at which $p$ may be a block boundary while $p'$ is not, or vice versa. That is, $B(i,j)\setminus \{limap(map(i))-i\}=B(i',j')\setminus \{limap(map(i))-i\}$. Moreover, since $limap(map(i))-i$ may only be the leftmost element of $B(i,j)$, this yields $B(i,j)\setminus B(i',j')\subseteq L(i,j,1)$, and therefore the parsing is locally consistent with $c_p=1$.
\end{proof} We conclude this section by defining \emph{block extensions} and proving that they are sufficiently long to ensure that the block is preserved within the occurrences of its extension. This property will be used several times in subsequent sections. \begin{definition} \label{def:extended-block} Let $S[i\mathinner{.\,.} j]$, with $1<i<j<n$, be a block in $S$. The \emph{extension} of the block $S[i\mathinner{.\,.} j]$ is defined as $S[i^e\mathinner{.\,.} j^e]$, where $i^e=fimap(map(i-1))-1$ and $j^e=j+1$. \end{definition} Note that the first and last blocks cannot be extended. For the remaining blocks $S[i\mathinner{.\,.} j]$, the definition is sound because $map(i-1)>1$ and $j<n$ since $map(i-1)$ and $map(j)$ are local minima of $\hat{S}$. Further, note that the block extension spans only the last symbol of the metasymbol $\hat{S}[map(i^e)]$ and the first of $\hat{S}[map(j^e)]$. \begin{lemma} \label{lem:extend} Let $S[i^e\mathinner{.\,.} j^e]$ be the extension of a block $S[i\mathinner{.\,.} j]$. If $S[r'\mathinner{.\,.} s']$ matches $S[i^e\mathinner{.\,.} j^e]$, then $S[r'\mathinner{.\,.} s']$ contains the same block $S[r\mathinner{.\,.} s]=S[i\mathinner{.\,.} j]$, whose extension is precisely $S[r^e\mathinner{.\,.} s^e]=S[r'\mathinner{.\,.} s']$. Furthermore, $r-r^e=i-i^e$ and $s^e-s = j^e-j$. \end{lemma} \begin{proof} Observe that $limap(map(i^e)) = i^e$, so \Cref{lem:alt} yields $B(r',s')\setminus \{0\} = B(i^e,j^e)\setminus \{0\}$. Moreover, $map(i^e)=map(i-1)-1$ and $map(j^e)=map(j)+1$, so $B(i^e,j^e) = \{i-1-i^e, j-i^e\}$ due to \Cref{obs:length2}. Hence, $ B(r',s') \setminus \{ 0 \} = \{i-1-i^e,j-i^e\}$, and therefore $S[r\mathinner{.\,.} s]$ is a block, where $r = i-i^e+r'$ and $s = j-i^e+r'$. \Cref{fig:extend} gives an example. To complete the proof, notice that $s^e=s+1=s'$ and $r^e=fimap(map(r-1))-1=r'$ follows from the fact that $S[r'\mathinner{.\,.} s']$ and $S[i^e\mathinner{.\,.} j^e]$ match.
\end{proof} \begin{figure} \caption{Illustration of Lemma~\ref{lem:extend}.} \label{fig:extend} \end{figure} \section{Grammars with Locality Properties} \label{sec:locality} Consider a context-free grammar (CFG) that generates a string $S$ and only $S$ \cite{KY00}. Each nonterminal must be the left-hand side in exactly one production, and the {\em size} $g$ of the grammar is the sum of the lengths of the right-hand sides of the productions. It is NP-hard to compute the smallest grammar for a string $S$ \cite{DBLP:journals/tcs/Rytter03,DBLP:journals/tit/CharikarLLPPSS05}, but it is possible to build grammars of size $g = O(z\log(|S|/z))$ if the Lempel--Ziv parsing of $S$ consists of $z$ phrases \cite[Lemma~8]{DBLP:conf/esa/Gawrychowski11}.\footnote{There are older constructions \cite{DBLP:journals/tcs/Rytter03,DBLP:journals/tit/CharikarLLPPSS05}, but they refer to a restricted Lempel--Ziv variant where sources and phrases cannot overlap.} If we allow, in addition, rules of the form $A \rightarrow A_1^s$, where $s\ge 2$, taken to be of size 2 for technical convenience, the result is a {\em run-length context-free grammar (RLCFG)} \cite{DBLP:conf/mfcs/NishimotoIIBT16}. These grammars encompass CFGs and are intrinsically more powerful; for example, the smallest CFG for the string family $S=a^n$ is of size $\Theta(\log n)$, whereas already an RLCFG of size $O(1)$ can generate it. The {\em parse tree} of a CFG has internal nodes labeled with nonterminals and leaves labeled with terminals. The root is the initial symbol and the concatenation of the leaves yields $S$: the $i$th leaf is labeled $S[i]$. If $A \rightarrow A_1\cdots A_s$, then any node labeled $A$ has $s$ children, labeled $A_1, \ldots, A_s$. In the parse tree of an RLCFG, rules $A \rightarrow A_1^s$ are represented as a node labeled $A$ with $s$ children nodes labeled $A_1$. The following definition describes the substring of $S$ generated by each node.
\begin{definition} If the leaves descending from a parse tree node $v$ are the $i$th to the $j$th leaves, we say that $v$ {\em generates} $S[i\mathinner{.\,.} j]$ and that $v$ is {\em projected} to the interval $proj(v) = [i\mathinner{.\,.} j]$. \end{definition} The subtrees of equally labeled nodes are identical and generate the same strings, so we speak of the strings generated by the grammar symbols. We call $\exp(A)$ the {\em expansion} of nonterminal $A$, that is, the string it generates (or the concatenation of the leaves under any node labeled $A$ in the parse tree), and $|A| = |\exp(A)|$. For terminals $a$, we assume $\exp(a)=a$. A grammar is said to be {\em balanced} if the parse tree is of height $O(\log n)$. A stricter concept is the following one. \begin{definition} \label{def:locally-balanced} A grammar is {\em locally balanced} if there exists a constant $b$ such that, for any nonterminal $A$, the height of any parse tree node labeled $A$ is at most $b \cdot \log |A|$. \end{definition} \subsection{From parsings to balanced grammars} \label{sec:ec} We build an RLCFG on a text $T[1\mathinner{.\,.} n]$ using our parsing of Section~\ref{sec:lcp}. In the first pass, we collect the distinct runs $a^\ell$ with $\ell\ge 2$ and create run-length nonterminals of the form $A \rightarrow a^\ell$ to replace the corresponding runs in $T$. The resulting sequence is analogous to $\hat{T}$, where a nonterminal $A \to a^\ell$ stands for the metasymbol \framebox{$a^\ell$}, and the terminal $a$ stands for the metasymbol $\run{a^1}$. Next, we choose a permutation $\pi$ and perform a pass on the new text $\hat{T}$, defining the blocks based on local minima according to \Cref{def:parse}. Each distinct block $A_1\cdots A_k$ is replaced by a distinct nonterminal $A$ with the rule $A \rightarrow A_1 \cdots A_k$ (each $A_i$ can be a symbol of $\Sigma$ or a run-length nonterminal created in the first pass).
The blocks are then replaced by those created nonterminals $A$, which results in a string $T'$. The string $T'$ is of length $n' \le \lfloor n/2 \rfloor$, by Observation~\ref{obs:length2}. Note that the first and last symbols of $T'$ expand to blocks that contain \# and \$, respectively, and thus they are unique too. We can then regard $T'$ as a text, by having its first nonterminal, $T'[1]$, play the role of \#, and the last, $T'[n']$, play the role of \$. The process is then repeated on $T'$, and iterated for $h\le \lfloor\log n\rfloor$ rounds, until a single nonterminal is obtained. This is the initial symbol of the grammar. We denote by $T_r[1\mathinner{.\,.} n_r]$ the text created in round $r$, so $T_0=T$ and $T_1=T'$. We also denote by $\hat{T}_r[1\mathinner{.\,.} \hat{n}_r]$ the intermediate text obtained by collapsing runs in $T_r$. Figure~\ref{fig:grammar} exemplifies the grammars we build and the corresponding parse tree. \begin{figure} \caption{An example of the construction of our grammar. The top-left part shows the permutations $\pi$ assigned in each level, and the top-right part gives the complete grammar built (for simplicity we omit run-length nonterminals). The parse tree, shown on the bottom, also omits run-length nonterminals. The texts $T_r$ correspond to the subsequent levels of the parse tree (starting from the bottom). Level-$r$ block boundaries that are not run boundaries are depicted using dotted lines. For example, $T_2 = \mathsf{\#_2DDEDD\$_2}$.} \label{fig:grammar} \end{figure} The height of the grammar is at most $2h\le 2\lfloor\log n\rfloor$, because we create run-length rules and then block-rules in each round. This grammar is then balanced because, by Observation~\ref{obs:length2}, $n_r \le n/2^r$. Moreover, the grammar is locally balanced. \begin{lemma} \label{lem:locally-balanced} The grammar we build from our parsing is locally balanced with $b=2$.
\end{lemma} \begin{proof} Because of Observation~\ref{obs:length2}, any subtree rooted at a nonterminal $A$ in the parse tree (at least) doubles the number of nodes per round towards the leaves. If $A$ is formed in round $r$, then the subtree has height at most $2r$, and the expansion satisfies $|A| \ge 2^r$. The height of the subtree rooted at $A$ is thus at most $2r \le 2 \log |A|$. \end{proof} \subsection{Local consistency properties} We now formalize the concept of \emph{local consistency} for our grammars. For each $r\in[0\mathinner{.\,.} h]$, the subsequent characters of $T_r$ naturally correspond to nodes of the parse tree of $T$, and the fragments $T[i\mathinner{.\,.} j]$ generated by these nodes form a decomposition of $T$. We denote this parsing of $T$ by $\mathcal{P}_r$. In other words, $T[i\mathinner{.\,.} j]$ is a block of $\mathcal{P}_r$ if and only if $[i\mathinner{.\,.} j]=proj(v)$ for some node $v$ labeled by a symbol in $T_r$. We refer to the blocks and block boundaries in this parsing as \emph{level-$r$ blocks} and \emph{level-$r$ block boundaries}. Analogously, we define a parsing $\hat{\mathcal{P}}_r$ with blocks corresponding to subsequent symbols of $\hat{T}_r$, and we refer to the underlying blocks and block boundaries as \emph{level-$r$ runs} and \emph{level-$r$ run boundaries}; see \Cref{fig:grammar}. Note that every level-$r$ run boundary is also a level-$r$ block boundary, and every level-$(r+1)$ block boundary is also a level-$r$ run boundary. Moreover, by \Cref{obs:length2}, at most one out of every two subsequent level-$r$ run boundaries can be a level-$(r+1)$ block boundary. \begin{definition} For every non-empty fragment $T[i\mathinner{.\,.} j]$ of $T$, the sets defined according to \Cref{def:bb} for the parsing $\mathcal{P}_r$ are denoted $B_r(i,j)$, $L_r(i,j,c)$, and $R_r(i,j,c)$. Analogously, we denote by $\hat{B}_r(i,j)$, $\hat{L}_r(i,j,c)$, and $\hat{R}_r(i,j,c)$ the sets defined for the parsing $\hat{\mathcal{P}}_r$.
\end{definition} These notions let us reformulate \Cref{lem:alt} so that it is directly applicable at every level $r$. \begin{lemma}\label{lem:altr} If matching fragments $T[i\mathinner{.\,.} j]$ and $T[i'\mathinner{.\,.} j']$ both consist of full level-$r$ blocks, then the corresponding fragments of $T_r$ also match, so $B_r(i,j)=B_r(i',j')$ and $\hat{B}_r(i,j)=\hat{B}_r(i',j')$. Moreover, $B_{r+1}(i,j)\setminus \{\min \hat{B}_r(i,j)\} = B_{r+1}(i',j')\setminus \{\min \hat{B}_r(i,j)\}$ if $\hat{B}_r(i,j)\ne \emptyset$, and $B_{r+1}(i,j)=B_{r+1}(i',j')=\emptyset$ otherwise. \end{lemma} \begin{proof} We proceed by induction on $r$. The first two claims hold trivially for $r=0$: the fragments $T[i\mathinner{.\,.} j]$ and $T[i'\mathinner{.\,.} j']$ of $T_0=T$ clearly match, and $B_0(i,j)=[0\mathinner{.\,.} j-i-1]=B_0(i',j')$. For $r>0$, on the other hand, $T[i\mathinner{.\,.} j]$ and $T[i'\mathinner{.\,.} j']$ consist of full level-$(r-1)$ blocks, so the inductive assumption yields that the corresponding fragments of $T_{r-1}$ also match and that $B_r(i,j)=B_{r}(i',j')=\emptyset$ or $B_{r}(i,j)\setminus \{\min \hat{B}_{r-1}(i,j)\} = B_{r}(i',j')\setminus \{\min \hat{B}_{r-1}(i,j)\}$. In the latter case, we observe that $i-1$ and $i+\min \hat{B}_{r-1}(i,j)$ are subsequent level-$(r-1)$ run boundaries while $i-1$ is a level-$r$ block boundary, or $i=1$ and $i+\min \hat{B}_{r-1}(i,j)$ is the leftmost level-$(r-1)$ run boundary. Either way, $i+\min \hat{B}_{r-1}(i,j)$ cannot be a level-$r$ block boundary due to \Cref{obs:length2}, so $B_{r}(i,j)\setminus \{\min \hat{B}_{r-1}(i,j)\}=B_r(i,j)$. A symmetric argument proves that $B_{r}(i',j')\setminus \{\min \hat{B}_{r-1}(i,j)\}=B_{r}(i',j')$, which lets us conclude that $B_r(i,j)=B_{r}(i',j')$. Hence, the matching fragments of $T_{r-1}$ corresponding to $T[i\mathinner{.\,.} j]$ and $T[i'\mathinner{.\,.} j']$ are parsed into the same blocks, so the corresponding fragments of $T_r$ also match.
To prove the other two claims for arbitrary $r\ge 0$, notice that the fragments of $T_r$ corresponding to $T[i\mathinner{.\,.} j]$ and $T[i'\mathinner{.\,.} j']$ are occurrences of the same string, denoted $P_r$. Hence, $\hat{B}_r(i,j)$ and $\hat{B}_r(i',j')$ are equal as they both correspond to the run boundaries in $P_r$. If $P_r$ consists of a single run (i.e., if $\hat{B}_r(i,j)=\emptyset$), then clearly $B_{r+1}(i,j)=B_{r+1}(i',j')= \emptyset$. Otherwise, \Cref{lem:alt} implies $B_{r+1}(i,j)\setminus \{\min \hat{B}_r(i,j)\} = B_{r+1}(i',j')\setminus \{\min \hat{B}_r(i,j)\}$. \end{proof} Nevertheless, we define \emph{local consistency} of a grammar as a stronger property than the one expressed in \Cref{lem:altr}: we require that $B_r(i,j)$ and $B_r(i',j')$ resemble each other even if the matching fragments $T[i\mathinner{.\,.} j]$ and $T[i'\mathinner{.\,.} j']$ do not consist of full blocks. \begin{definition} \label{def:lcg} The grammar we build is {\em locally consistent} if there is a constant $c_{g}$ such that the parsings $\mathcal{P}_r$ are all locally consistent with constant $c_g$. \end{definition} In the rest of this section, we prove that our grammar is locally consistent with constant $c_g = 3$. Our main tool is the following construction of sets $B_r(P)$ and $\hat{B}_r(P)$, consisting of the positions (relative to $P$) of \emph{context-insensitive} level-$r$ block and run boundaries that are common to all occurrences of $P$ in $T$. Although these sets are defined based on an occurrence of $P$ in $T$, we show in \Cref{lem:lcg} that they do not depend on the choice of the occurrence. \begin{definition}\label{def:bp} Let $P$ be a substring of $T$ and let $T[i\mathinner{.\,.} j]$ be an arbitrary occurrence of $P$ in $T$. The sets $B_r(P)$ and $\hat{B}_r(P)$ for $r\ge 0$ are defined recursively, with $X+\delta= \{x+\delta : x\in X\}$.
\begin{equation*} B_r(P)=\left\{\!\begin{aligned} &[0\mathinner{.\,.} |P|-2] && \text{if }r=0,\\ &B_r(i+1+\min \hat{B}_{r-1}(P),i+\max B_{r-1}(P))+1+\min\hat{B}_{r-1}(P) && \text{if }\hat{B}_{r-1}(P)\ne \emptyset,\\ &\emptyset && \text{if }\hat{B}_{r-1}(P)=\emptyset; \end{aligned}\right. \end{equation*} \begin{equation*} \hat{B}_r(P)=\left\{\begin{aligned} &\hat{B}_r(i+1+\min B_r(P), i+\max B_r(P))+1+\min B_r(P) && \text{if }B_r(P)\ne \emptyset,\\ &\emptyset && \text{if }B_r(P)=\emptyset. \end{aligned}\right. \end{equation*} \end{definition} Our index also relies on a set $M(P)$ designed as a superset of $B_r(i,j)\setminus B_r(P)$ for every~$r$ and every occurrence $T[i\mathinner{.\,.} j]$ of $P$. In other words, $M(P)$ contains, for each $r$, positions within $P$ that may be level-$r$ block boundaries in some but not necessarily all occurrences of~$P$. \begin{definition}\label{def:m} For a substring $P$ of $T$, the set $M(P)$ is defined to contain $\min B_r(P)$ and $\max B_r(P)$ for every $r\ge 0$ with $B_r(P)\ne \emptyset$, and $\min \hat{B}_r(P)$ for every $r\ge 0$ with $\hat{B}_r(P)\ne \emptyset$. \end{definition} \begin{example} Consider $P= \mathsf{dbdaacaacbdaabcbdaabcbd}$ with occurrences $T[3\mathinner{.\,.} 25]$ and $T[25\mathinner{.\,.} 47]$ in the text of \Cref{fig:grammar}. For $r=0$, we define $B_0(P)=[0\mathinner{.\,.} 21]$ and set $\hat{B}_0(P)=\{1,2,4,5,7,8,9,10,12,13,14,15,16,18,19,20\}=1+\hat{B}_0(4,24)=1+\hat{B}_0(26,46)$. For $r=1$, we set $B_1(P)=\{2,5,8,10,14,16,20\}=2+B_1(5,24)=2+B_1(27,46)$ and $\hat{B}_1(P) = \{8,10,14,16\}=3+\hat{B}_1(6,23)=3+\hat{B}_1(28,45)$. For $r=2$, we set $B_2(P)=\{14\}=9+B_2(12,19)=9+B_2(34,41)$ and $\hat{B}_2(P)=\emptyset=15+\hat{B}_2(18,17)=15+\hat{B}_2(40,39)$. For $r\ge 3$, we have $B_r(P)=\hat{B}_r(P)=\emptyset$. Consequently, $M(P)=\{0,1,2,8,14,20,21\}$.
\end{example} We now show $B_r(P)$ contains all the level-$r$ block boundaries in any occurrence of $P$ in $T$ except possibly the first 3 and the last one, but those missing boundaries belong to $M(P)$. \begin{lemma}\label{lem:lcg} For every substring $P$ of $T$ and every $r\ge 0$, the sets $B_r(P)$ and $\hat{B}_r(P)$ do not depend on the choice of an occurrence $T[i\mathinner{.\,.} j]$ of $P$. Moreover, \begin{equation} \label{eq:lcg} B_r(P)\cup L_r(i,j,3)\cup R_r(i,j,1)=B_r(i,j)\subseteq B_r(P)\cup M(P). \end{equation} \end{lemma} \begin{proof} We proceed by induction on $r$, proving the independence of $\hat{B}_r(P)$ only at step $r+1$. In the base case, $B_0(P)=[0\mathinner{.\,.} |P|-2]$ does not depend on the choice of the occurrence, and Eq.~\eqref{eq:lcg} is satisfied because $B_0(P)=B_0(i,j)$. For the inductive step, we assume the claims hold for $B_r(P)$. If $B_r(P)=\emptyset$, then $\hat{B}_r(P)=B_{r+1}(P)=\emptyset$ do not depend on the occurrence of $P$. The inductive assumption yields $B_{r+1}(i,j)\subseteq B_r(i,j)\subseteq M(P) = B_{r+1}(P) \cup M(P)$ and $|B_{r+1}(i,j)| \le |B_r(i,j)|=|L_r(i,j,3)\cup R_r(i,j,1)|\le 4$, so $L_{r+1}(i,j,3)\cup R_{r+1}(i,j,1)=B_{r+1}(i,j)$ and Eq.~\eqref{eq:lcg} is satisfied. We henceforth assume that $B_r(P)\ne \emptyset$. Since $B_r(P)\subseteq B_r(i,j)$, both $i+\min B_r(P)$ and $i+\max B_r(P)$ are level-$r$ block boundaries, and therefore $T[i+\min B_r(P)+1\mathinner{.\,.} i+\max B_r(P)]$ consists of full level-$r$ blocks. We conclude from \Cref{lem:altr} that $\hat{B}_r(P)$, as defined in \Cref{def:bp}, does not depend on the occurrence of $P$. Moreover, the only position between $i+\min B_r(P)$ and $i+\max B_r(P)$ that may or may not be a level-$(r+1)$ block boundary depending on the context of $T[i\mathinner{.\,.} j]$ is $i+\min \hat{B}_r(P)$ provided that $\hat{B}_r(P) \ne \emptyset$. In particular, $B_{r+1}(P)$, as defined in \Cref{def:bp}, also does not depend on the occurrence~of~$P$. 
To prove that $B_{r+1}(P)$ satisfies Eq.~\eqref{eq:lcg}, we consider two cases. First, suppose that $\hat{B}_r(P)=\emptyset$, that is, there are no level-$r$ run boundaries between $i+\min B_r(P)$ and $i+\max B_r(P)$. Since $\hat{B}_r(i,j)\subseteq B_r(i,j)$, the inductive assumption $B_r(i,j)=B_r(P)\cup L_r(i,j,3)\cup R_r(i,j,1)$ implies $\hat{B}_{r}(i,j)\subseteq \{\min B_r(P), \max B_r(P)\} \cup L_{r}(i,j,3)\cup R_r(i,j,1)$, while $B_r(i,j)\subseteq B_r(P) \cup M(P)$ yields $\hat{B}_{r}(i,j) \subseteq \{\min B_r(P), \max B_r(P)\} \cup M(P) = M(P)$, where the equality follows from \Cref{def:m}. The former assertion yields $|\hat{B}_r(i,j)|\le 6$, and since $B_{r+1}(i,j)\subseteq \hat{B}_r(i,j)$ cannot contain two consecutive elements of $\hat{B}_r(i,j)$ by \Cref{obs:length2}, we conclude that $|B_{r+1}(i,j)|\le 3$. In particular, since $B_{r+1}(P)=\emptyset$ according to \Cref{def:bp}, we have $B_{r+1}(P) \cup L_{r+1}(i,j,3)\cup R_{r+1}(i,j,1)=B_{r+1}(i,j)\subseteq B_{r+1}(P) \cup M(P)$ as claimed. Next, suppose that $\hat{B}_r(P)\ne\emptyset$. \Cref{def:bp} clearly implies $B_{r+1}(P)\subseteq B_{r+1}(i,j)$, so it remains to prove that $B_{r+1}(i,j)$ is a subset of both $B_{r+1}(P)\cup L_{r+1}(i,j,3)\cup R_{r+1}(i,j,1)$ and $B_{r+1}(P)\cup M(P)$. We take $q\in B_{r+1}(i,j)$ and consider three cases. \begin{enumerate}[(1)] \item If $q\le \min \hat{B}_r(P)$, then $q\in (L_r(i,j,3)\cap M(P))\cup \{\min B_r(P),\min \hat{B}_r(P)\}$ and therefore $q\in \hat{L}_r(i,j,5)$. \footnote{By the choice of $\hat{B}_r(P)$ in \Cref{def:bp}, there are no level-$r$ run boundaries between $\min B_r(P)$ and $\min \hat{B}_r(P)$. Note that $q < \min B_r(P)$ yields $q \not\in B_r(P)$. Since $q\in B_r(i,j)$, by the inductive assumption $q\notin B_r(P)$ implies $q\in L_r(i,j,3)\cap M(P)$ ($q\notin R_r(i,j,1)$ because $q < \min B_r(P)\in B_r(i,j)$).
For the same reason, $L_r(i,j,3)\cup \{\min B_r(P)\}\subseteq L_r(i,j,4)$ and $\min \hat{B}_r(P)\in \hat{L}_r(i,j,5)$.} Since $B_{r+1}(i,j)$ cannot contain two consecutive elements of $\hat{B}_r(i,j)$ due to \Cref{obs:length2}, $q\in B_{r+1}(i,j) \cap \hat{L}_r(i,j,5)$ implies $q\in L_{r+1}(i,j,3)$. Finally, $q\in M(P)\cup \{\min B_r(P),\min \hat{B}_r(P)\} = M(P)$, where the equality holds due to \Cref{def:m}. \item If $q \ge \max B_r(P)$, then $q\in (R_r(i,j,1)\cap M(P))\cup \{\max B_r(P)\}$ and therefore $q\in \hat{R}_r(i,j,2)$.\footnote{Note that $q > \max B_r(P)$ yields $q \not\in B_r(P)$. Since $q\in B_r(i,j)$, by the inductive assumption $q\notin B_r(P)$ implies $q\in R_r(i,j,1)\cap M(P)$ ($q\notin L_r(i,j,3)$ because $q > \max B_r(P) > \min \hat{B}_r(P) > \min B_r(P)$ and these 3 elements belong to $B_r(i,j)$). For the same reason, $R_r(i,j,1)\cup \{\max B_r(P)\} \subseteq R_r(i,j,2)$.} Since $B_{r+1}(i,j)$ cannot contain two consecutive elements of $\hat{B}_r(i,j)$ due to \Cref{obs:length2}, $q\in B_{r+1}(i,j) \cap \hat{R}_r(i,j,2)$ implies $q\in R_{r+1}(i,j,1)$. Finally, $q\in M(P)\cup \{\max B_r(P)\} = M(P)$, where the equality holds due to \Cref{def:m}. \item If $\min \hat{B}_r(P) < q < \max B_r(P)$, then $q+i$ is a level-$(r+1)$ block boundary and $q\in B_{r+1}(P)$ by \Cref{def:bp}.\qed \end{enumerate} \end{proof} \Cref{lem:lcg} implies that the grammar constructed in this section is locally consistent with $c_g=3$. We conclude this section with a further characterization of the set $M(P)$. \begin{lemma}\label{lem:m} For each substring $P=T[i\mathinner{.\,.} j]$, the set $M(P)$ satisfies the following properties: \begin{enumerate}[(a)] \item\label{it:mleft} If $B_r(i,j)\ne \emptyset$ for some $r\ge 0$, then $\min B_r(i,j)\in M(P)$, \item\label{it:mleftprim} If $\hat{B}_r(i,j)\ne \emptyset$ for some $r\ge 0$, then $\min \hat{B}_r(i,j)\in M(P)$, \item\label{it:msize} $|M(P)|\le 3\lceil\log |P|\rceil$.
\end{enumerate} \end{lemma} \begin{proof} To prove \eqref{it:mleft} for any $r$, note that $B_r(P) \subseteq B_{r}(i,j)\subseteq B_r(P)\cup M(P)$ by \Cref{lem:lcg}. If $\min B_r(i,j) \notin B_r(P)$, then it belongs to $M(P)$. Otherwise, it must be equal to $\min B_r(P)$, which is in $M(P)$ by \Cref{def:m}. The proof of \eqref{it:mleftprim} is similar: Since $\hat{B}_r(i,j) \subseteq B_r(i,j)$, either $\min \hat{B}_r(i,j)\in B_r(i,j) \setminus B_r(P) \subseteq M(P)$, or $\min \hat{B}_r(i,j)\in B_r(P)$. If $\min \hat{B}_r(i,j) \in \{\min B_r(P),\max B_r(P)\}$, then it is in $M(P)$ by \Cref{def:m}. Otherwise, $\min B_r(P) < \min \hat{B}_r(i,j) < \max B_r(P)$ and, by the choice of $\hat{B}_r(P)$ in \Cref{def:bp}, $\min \hat{B}_r(i,j)=\min \hat{B}_r(P)$ is also in $M(P)$ by \Cref{def:m}. To prove \eqref{it:msize}, notice that $|B_r(P)| \le \frac12|B_{r-1}(P)|$ holds for all $r$ due to \Cref{def:bp}, \Cref{obs:length2}, and $\min B_{r-1}(P)\notin B_r(P)$. This implies $|B_r(P)| \le |B_0(P)| \cdot 2^{-r} < |P|\cdot 2^{-r}$, and therefore $\hat{B}_r(P) = B_r(P) = \emptyset$ for $r \ge \log |P|$. \Cref{def:m} now yields the claim. \end{proof} \section{Bounding our Grammar in terms of Attractors} \label{sec:attractor} Let us first define the concept of attractors in a string \cite{KP18}. \begin{definition}[\cite{KP18}] \label{def:attractor} An \emph{attractor} of a string $S[1\mathinner{.\,.} n]$ is a set $\Gamma\subseteq[1\mathinner{.\,.} n]$ of positions in $S$ such that each non-empty substring $Q$ of $S$ has an occurrence $S[i\mathinner{.\,.} j]$ containing an attractor position, i.e., satisfying $i\le p \le j$ for some $p\in \Gamma$. \end{definition} In this section, we show that the RLCFG of Section~\ref{sec:locality} is of size $g = O(\gamma\log(n/\gamma))$, where $\gamma$ is the minimum size of an attractor of $T$. The key is to prove that distinct nonterminals are formed only around the attractor elements.
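To make \Cref{def:attractor} concrete, the following brute-force verifier (our own illustrative sketch, not part of the construction; it runs in roughly cubic time) checks the attractor property with 1-based positions:

```python
def is_attractor(s: str, gamma: set) -> bool:
    """Naive check of the string-attractor property: every distinct
    substring of s must have at least one occurrence that contains
    a position of gamma (positions are 1-based, as in the paper)."""
    n = len(s)
    for ln in range(1, n + 1):
        for sub in {s[i:i + ln] for i in range(n - ln + 1)}:
            # look for an occurrence of sub crossing an attractor position
            if not any(s[i:i + ln] == sub and
                       any(i + 1 <= p <= i + ln for p in gamma)
                       for i in range(n - ln + 1)):
                return False
    return True

# {1, 2} is an attractor of "abab"; {1} is not, since no occurrence
# of "b" contains position 1.
assert is_attractor("abab", {1, 2})
assert not is_attractor("abab", {1})
```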
For this, we first prove that $T'[1\mathinner{.\,.} n']$, where the blocks of $T$ are converted into nonterminals, contains an attractor of size at most $3\gamma$. \begin{lemma} \label{lem:4gamma} Let $\Gamma$ be an attractor of $T$, and let $\Gamma' = \bigcup_{p\in \Gamma} [p'-1\mathinner{.\,.} p'+1]$, where $p'$ is the position in $T'$ of the nonterminal that covers $p$ in $T$. Then $\Gamma'$ is an attractor of $T'$. \end{lemma} \begin{figure} \caption{Illustration of Lemma~\ref{lem:4gamma}.}\label{fig:4gamma} \end{figure} \begin{proof} Figure~\ref{fig:4gamma} illustrates the proof. Consider an arbitrary substring $T'[x'\mathinner{.\,.} y']$, with $x' \ge 3$ and $y' \le n'-2$; otherwise the substring crosses an attractor position because $1$ and $n$ are in $\Gamma$. This is a sequence of consecutive nonterminals, each corresponding to a block in $T$. Let $T[x\mathinner{.\,.} y]$ be the substring of $T$ formed by all the blocks that map to $T'[x'\mathinner{.\,.} y']$. The union of their extensions is also a substring $T[x^e\mathinner{.\,.} y^e]$ of $T$. Since $\Gamma$ is an attractor in $T$, there exists a copy $T[r_*\mathinner{.\,.} s_*] = T[x^e\mathinner{.\,.} y^e]$ that includes an element $p\in\Gamma$, $r_* \le p \le s_*$. Consider any block $T[i\mathinner{.\,.} j]$ inside $T[x\mathinner{.\,.} y]$. Its extension $T[i^e\mathinner{.\,.} j^e]$ is contained in $T[x^e\mathinner{.\,.} y^e]$, so a copy $T[r'\mathinner{.\,.} s']$ of $T[i^e\mathinner{.\,.} j^e]$ appears inside $T[r_*\mathinner{.\,.} s_*]$. By Lemma~\ref{lem:extend}, the block $T[i\mathinner{.\,.} j]$ also forms a block $T[r\mathinner{.\,.} s]$ inside $T[r'\mathinner{.\,.} s']$, at the same relative position; furthermore, $T[r'\mathinner{.\,.} s']=T[r^e\mathinner{.\,.} s^e]$ is the extension of $T[r\mathinner{.\,.} s]$.
Since this happens for every block $T[i\mathinner{.\,.} j]$ inside $T[x\mathinner{.\,.} y]$, which is a sequence of blocks, it follows that $T[x\mathinner{.\,.} y]$ appears inside $T[r_*\mathinner{.\,.} s_*]$, as a subsequence $T[u\mathinner{.\,.} v]$ of blocks; furthermore, its extension $T[u^e\mathinner{.\,.} v^e]$ coincides with $T[r_*\mathinner{.\,.} s_*]$ and thus contains $p$. Moreover, $T[u\mathinner{.\,.} v]$ maps to a substring $T'[u'\mathinner{.\,.} v']=T'[x'\mathinner{.\,.} y']$. Since $v^e = v+1$ and $u^e = fimap(map(u-1))-1$, due to Observation~\ref{obs:length2}, the fragments $T[u^e \mathinner{.\,.} u-1]$ and $T[v+1\mathinner{.\,.} v^e]$ are contained within single blocks. Therefore, the position $p'$ to which $p$ is mapped in $T'$ belongs to $T'[u'-1\mathinner{.\,.} v'+1]$. Consequently, $T'[u'\mathinner{.\,.} v']$ contains a position in $\Gamma'$. \end{proof} We now show that the first round contributes $O(\gamma)$ to the size of the final RLCFG. In this bound, we only count the sizes of the generated rules; the whole accounting will be done in Theorem~\ref{thm:rlcfg}. The idea is to show that the $3$ distinct blocks formed around each attractor element have expected length $O(1)$. \begin{lemma} \label{lem:linear} The first round of parsing contributes $O(\gamma)$ to the grammar size, in expectation. Further, a parsing producing a grammar of size $O(\gamma)$ is found in $O(n)$ expected time provided that $\gamma$ is known. \end{lemma} \begin{proof} Let us first focus on block-forming rules; we consider the run-length rules in the next paragraph. The right-hand sides of the block-forming rules correspond to the distinct blocks formed in $\hat{T}$, that is, to single symbols in $T'$. All the distinct symbols in $T'$, in turn, appear at positions of $\Gamma'$.
By \Cref{lem:4gamma}, $\Gamma'$ is of size at most $3\gamma$; therefore, there are at most $3\gamma$ distinct blocks in $\hat{T}$ and in $T$ (i.e., those containing attractor elements of $T$ and their neighboring blocks), and thus at most $3\gamma$ distinct nonterminals are formed in the grammar. We must also show, however, that the sizes of the right-hand sides of those $3\gamma$ productions add up to $O(\gamma)$. Consider a block of $\hat{T}$ of length $\ell$. The right-hand side of its corresponding production has length $\ell$. Each element of $\hat{T}$ can be a metasymbol, however, so the grammar may indeed include $\ell$ further run-length nonterminals, contributing up to $2\ell$ to the grammar size. Therefore, each distinct block of length $\ell$ in $\hat{T}$ contributes at most $3\ell$ to the grammar size. We now show that $\ell=O(1)$ in expectation for the $3\gamma$ blocks specified above. Consider an attractor element $p$ and its position $\hat{p}=map(p)$ when mapped to $\hat{T}$. Let $T[i\mathinner{.\,.} j]$ be the block containing $p$ and let $T[i'\mathinner{.\,.} j']$ be its concatenation with the adjacent blocks ($T[i'\mathinner{.\,.} i-1]$ and $T[j+1\mathinner{.\,.} j']$). Moreover, let $\hat{T}[\hat{i}\mathinner{.\,.} \hat{j}]$ and $\hat{T}[\hat{i}'\mathinner{.\,.} \hat{j}']$ be the corresponding fragments of $\hat{T}$, with $\hat{i} = map(i)$, $\hat{j} = map(j)$, $\hat{i}' = map(i')$, and $\hat{j}' = map(j')$. Let $\ell^+ = \hat{j}'-\hat{p}+1$, $\ell^- = \hat{p}-\hat{i}'$, and $\ell = \ell^+ +\ell^-$. Then, $3\ell$ is the maximum possible contribution of attractor element $p$ to the grammar size via nonterminals that represent these blocks. The area $\hat{T}[\hat{p}\mathinner{.\,.} \hat{j}']$ contains at most 2 local minima, at $\hat{j}$ and $\hat{j}'$ (unless $\hat{j}' = \hat{n}$). Note that, between two consecutive local minima, we have a sequence of nondecreasing values of $\pi$ and then a sequence of nonincreasing values of $\pi$.
Our area can be covered by 2 such ranges. Hence, if we split the substring of length $\ell^+$ into 4 equal parts of length $\ell^+/4$, at least one of them must be monotone (i.e., nondecreasing or nonincreasing) with respect to $\pi$. Note that consecutive symbols in $\hat{T}$ are always different. Further, if there are repeated symbols in a length-$d$ substring of $\hat{T}$, then it cannot be monotone with respect to $\pi$. If all the symbols are different, instead, exactly one out of $d!$ permutations $\pi$ will make the substring increasing and one out of $d!$ will make it decreasing, where $d$ is the length of the substring. As a result, at most 2 out of $(\ell^+/4)!$ permutations can make one of our length-$(\ell^+/4)$ substrings monotone. If we choose permutations $\pi$ uniformly at random, then the probability that at least one of our 4 substrings is monotone is at most $8 / (\ell^+/4)!$. Since this upper-bounds the probability that $\hat{j}' \ge \hat{p}+\ell^+$, the expected value of $\ell^+$ is $O(1)$.\footnote{Because $\sum_{k \ge 1} 1/k! = e-1$.} An analogous argument holds for $\ell^-$ since $\hat{T}[\hat{i}'\mathinner{.\,.} \hat{p}-1]$ can also be covered by at most 2 ranges between consecutive local minima. Adding the expectations of the contributions $3\ell$ over the $\gamma$ attractor elements, we obtain $O(\gamma)$. If the expectation is of the form $c \cdot \gamma$, then at least half of the permutations produce a grammar of size at most $2c\cdot\gamma$, and thus a Las Vegas algorithm finds a permutation producing a grammar of size at most $2c \cdot \gamma$ after $O(1)$ attempts in expectation. Since at each attempt we parse $T[1\mathinner{.\,.} n]$ in time $O(n)$, we find a suitable permutation in $O(n)$ expected time provided we know $\gamma$. \end{proof} We now perform $O(\log(n/\gamma))$ rounds of locally-consistent parsing, where the output $T'$ of each round is the input to the next. 
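The effect of choosing $\pi$ at random can be seen in a toy version of one parsing round. The sketch below is our own simplification (run-length compress, then start a new block right before each local minimum of the $\pi$-values over the run heads; the paper's actual cutting rule, with its careful treatment of borders, differs in details), but it preserves the basic invariants used above: consecutive runs carry distinct symbols and the blocks cover the string.

```python
import random

def parse_round(s: str, pi: dict):
    """One toy round of locally consistent parsing: run-length
    compress s into metasymbols (a, l), then cut a new block right
    before each local minimum of the pi-values of the run heads.
    A simplified version of the paper's rule, for illustration only."""
    runs = []
    for c in s:                      # run-length compression
        if runs and runs[-1][0] == c:
            runs[-1] = (c, runs[-1][1] + 1)
        else:
            runs.append((c, 1))
    vals = [pi[a] for a, _ in runs]  # consecutive runs have distinct heads
    cuts = [0] + [i for i in range(1, len(runs) - 1)
                  if vals[i - 1] > vals[i] < vals[i + 1]] + [len(runs)]
    return [runs[a:b] for a, b in zip(cuts, cuts[1:])]

random.seed(1)
pi = dict(zip("ab", random.sample(range(2), 2)))
blocks = parse_round("abaabbab", pi)
# the blocks cover the text: expanding them recovers the input
assert "".join(a * l for b in blocks for a, l in b) == "abaabbab"
assert sum(len(b) for b in blocks) == 6   # "abaabbab" has 6 runs
```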
The length of the string halves in each iteration, and the grammar grows only by $O(\gamma)$ in each round. \begin{theorem} \label{thm:rlcfg} Let $T[1\mathinner{.\,.} n]$ have an attractor of size $\gamma$. Then there exists a locally-balanced locally-consistent RLCFG of size $g = O(\gamma\log(n/\gamma))$ and height $O(\log(n/\gamma))$ that generates (only) $T$, and it can be built in $O(n)$ expected time and $O(g)$ working space if $\gamma$ is known. \end{theorem} \begin{proof} We apply the grammar construction described in Section~\ref{sec:ec}, which, by Lemmas~\ref{lem:locally-balanced} and \ref{lem:lcg}, is locally balanced and locally consistent. We first show that we can build an attractor $\Gamma_r$ for each $T_r$ formed by $\gamma$ runs of $m_r \in O(1)$ consecutive positions. This is clearly true for $T_0$, with $m_0=1$. Now assume this holds for $T_r$. When parsing $T_r$ into blocks to form $T_{r+1}$, each run of $m_r$ consecutive attractor positions is parsed into at most $1+\lfloor m_r/2 \rfloor$ consecutive symbols $p'$ in $T_{r+1}$, as seen in the proof of Lemma~\ref{lem:locally-contracting}. Lemma~\ref{lem:4gamma} then shows that, if we expand each such mapped attractor position $p'$ to $[p'-1\mathinner{.\,.} p'+1]$, we obtain an attractor $\Gamma_{r+1}$ for $T_{r+1}$. The union of the expansions of $1+\lfloor m_r/2 \rfloor$ consecutive positions $p'$ creates a run of length $m_{r+1} = 3+\lfloor m_r/2 \rfloor$. It then holds that $\Gamma_{r+1}$ is formed by $\gamma$ runs of at most $m_{r+1}$ positions. The sequence of values $m_r$ stabilizes. If we solve $m = 3+\lfloor m/2\rfloor$, we obtain $\lceil m/2 \rceil = 3$. This solves for $m=5$ or $m=6$. Indeed, the value is 5 and is reached soon: $m_0=1$, $m_1=3$, $m_2=4$, $m_3 = 5$, $m_4 = 5$. Therefore, we safely use $m_r \le 5$ in the following. The only distinct blocks in each $T_r$ are those forming $\Gamma_{r+1}$.
Therefore, the parsing of each text $T_r$ produces at most $5\gamma$ distinct nonterminal symbols. By \Cref{lem:linear}, we can find in $O(n_r)$ expected time a permutation $\pi_r$ such that the contribution of the $r$th round to the grammar size is $O(|\Gamma_r|)=O(\gamma)$. The sum of the lengths of all $T_r$s is at most $2n$; thus the total expected construction cost is $O(n)$. We stop after $r^*=\log(n/\gamma)$ rounds. By then, $T_{r^*}$ is of length at most $\gamma$ and the cumulative size of the grammar is $O(\gamma \cdot r^*) = O(\gamma \log(n/\gamma))$. We add a final rule $S \rightarrow T_{r^*}$, which adds $\gamma$ to the grammar size. The height of the grammar is $O(r^*) = O(\log(n/\gamma))$. As for the working space, at each new round $r$ we generate a permutation $\pi_r$ of $|\Sigma_r|$ cells. Since the alphabet size is a lower bound to the attractor size, it holds that $|\Sigma_r|\le 5\gamma$. We store the distinct blocks that arise during the parsing in a hash table. These are at most $5\gamma$ as well, and thus a hash table of size $O(\gamma)$ is sufficient. The rules themselves, which grow by $O(\gamma)$ in each round, add up to $O(g)$ total space. \end{proof} \subsection{Building the grammar without an attractor} \label{sec:delta} Since computing the size $\gamma$ of the smallest attractor is NP-hard \cite{KP18}, it is interesting that we can find an RLCFG similar to that of Theorem~\ref{thm:rlcfg} without having to find an attractor or even know $\gamma$. The key idea is to build on another measure, $\delta$, that lower-bounds $\gamma$ and is simpler to compute. \begin{definition} \label{def:delta} Let $T(\ell)$ be the total number of distinct substrings of length $\ell$ in $T$. Then $$\delta = \max \{ T(\ell)/\ell : \ell \ge 1 \}.$$ \end{definition} Measure $\delta$ is related to the expression $d_\ell(w)/\ell$, used by \citeN{RRRS13} to approximate $z$.
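As a sanity check of \Cref{def:delta}, $\delta$ can be evaluated naively in quadratic time; the construction itself uses the linear-time method of \Cref{lem:compute delta}, so this sketch is only illustrative.

```python
from fractions import Fraction

def delta(t: str) -> Fraction:
    """delta = max over l of T(l)/l, where T(l) counts the distinct
    length-l substrings of t.  Naive quadratic-space illustration of
    the definition; the paper computes delta in O(n) time instead."""
    n = len(t)
    return max(Fraction(len({t[i:i + l] for i in range(n - l + 1)}), l)
               for l in range(1, n + 1))

assert delta("aaaa") == 1   # T(l) = 1 for every l
assert delta("abab") == 2   # two distinct letters, so T(1)/1 = 2
```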
Analogously to their result \cite[Lem.~4]{RRRS13}, we have the following bound in terms of attractors. \begin{lemma} \label{lem:delta} It always holds $\delta \le \gamma$. \end{lemma} \begin{proof} Since every length-$\ell$ substring of $T$ must have a copy containing an attractor position, it follows that there are at most $\ell \cdot \gamma$ distinct such substrings, that is, $T(\ell)/\ell \le \gamma$ for all $\ell$. \end{proof} \begin{lemma}\label{lem:compute delta} Measure $\delta$ can be computed in $O(n)$ time and space from $T[1\mathinner{.\,.} n]$. \end{lemma} \begin{proof} Computing $\delta$ boils down to computing $T(\ell)$ for all $1 \le \ell \le n$. This is easily computed from a suffix tree on $T$ \cite{DBLP:conf/focs/Weiner73} (which is built in $O(n)$ time). We first initialize all the counters $T(\ell)$ at zero. Then we traverse the suffix tree: for each leaf with string depth $\ell$ we add 1 to $T(\ell)$, and for each non-root internal node with $k$ children and string depth $\ell'$ we subtract $k-1$ from $T(\ell')$. Finally, for all the $\ell$ values, from $n-1$ to $1$, we add $T(\ell+1)$ to $T(\ell)$. Thus, the leaves count the unique substrings they represent, and the latter step accumulates the leaves descending from each internal node. The value subtracted at internal nodes accounts for the fact that their $k$ distinct children should count only once toward their parent. \end{proof} We now show that $\delta$ can be used as a replacement of $\gamma$ to build the grammar. \begin{theorem} \label{thm:rlcfg2} Let $T[1\mathinner{.\,.} n]$ have a minimum attractor of size $\gamma$. Then we can build a locally-balanced locally-consistent RLCFG of size $g = O(\gamma\log(n/\gamma))$ and height $O(\log n)$ that generates (only) $T$ in $O(n)$ expected time and $O(n)$ working space, without knowing $\gamma$. 
\end{theorem} \begin{proof} We carry out $\log n$ iterations instead of $\log(n/\gamma)$, and the grammar is still of size $O(\gamma\log(n/\gamma))$; the extra iterations add only $O(\gamma)$ to the size. The only other place where we need to know $\gamma$ is when applying Lemma~\ref{lem:linear}, to check that the total length of the distinct blocks resulting from the parsing, using a randomly chosen permutation, is at most $2c \cdot \gamma$. A workaround to this problem is to use measure $\delta \le \gamma$, which (unlike $\gamma$) can be computed efficiently. To obtain a bound on the sum of the lengths of the blocks formed, we add up all the possible substrings multiplied by the probability that they become a block. Consider a substring $\hat{S}[1\mathinner{.\,.} \ell+3]$ of $\hat{T}$. Whether $\hat{S}$ occurs as a mapped block extension, that is, whether it occurs with $\hat{S}' = \hat{S}[3\mathinner{.\,.} \ell+2]$ being a block, depends only on $\pi$ and $\hat{S}$, because by Lemma~\ref{lem:extend}, if $\hat{S}'$ forms a block inside one occurrence of $\hat{S}$, it must form a block inside each occurrence of $\hat{S}$. Let us now consider the probability that $\hat{S}'$ forms a block. As in the proof of Lemma~\ref{lem:linear}, $\hat{S}[3\mathinner{.\,.} \ell/2+2]$ must have an increasing sequence of $\pi$-values or $\hat{S}[\ell/2+3\mathinner{.\,.} \ell+2]$ must have a decreasing sequence of $\pi$-values, and this holds for at most two out of $(\ell/2)!$ permutations $\pi$. Therefore, any distinct substring of length $\ell+3$ (of which there are $T(\ell+3) \le (\ell+3)\delta$) contributes a block of length $\ell$ to the grammar size with probability at most $2 / (\ell/2)!$ (note that we may be counting the same block several times within different block extensions). The total expected contribution to the grammar size is therefore $\sum_{\ell \ge 2} (\ell+3)\delta \cdot \ell \cdot 2 / (\ell/2)! = O(\delta)$. 
As in the proof of Lemma~\ref{lem:linear}, given the expectation of the form $c \cdot \delta$, we can try out permutations until the total contribution to the grammar size is at most $2c \cdot \delta$. After $O(1)$ attempts, in expectation, we obtain a grammar of size $O(\delta) \subseteq O(\gamma)$ without knowing $\gamma$. We repeat the same process for each text $T_r$, since we know from Theorem~\ref{thm:rlcfg} that every $T_r$ has an attractor of size at most $5\gamma$, so the value $\delta_r$ we compute on $T_r$ satisfies $\delta_r \le 5\gamma$. The sizes of all texts $T_r$ add up to $O(n)$. \end{proof} \section{An Index Based on our Grammar} \label{sec:index} Let $G$ be a locally-balanced RLCFG of $r$ rules and size $g \ge r$ on text $T[1\mathinner{.\,.} n]$, formed with the procedure of Section~\ref{sec:attractor}, thus $g = O(\gamma\log(n/\gamma))$ with $\gamma$ being the smallest size of an attractor of $T$. We show how to build an index of size $O(g)$ that locates the $occ$ occurrences of a pattern $P[1\mathinner{.\,.} m]$ in time $O(m+(occ+1) \log^\epsilon n)$. We make use of the parse tree and the ``grammar tree'' \cite{CNspire12} of $G$, where the grammar tree is derived from the parse tree. We extend the concept of grammar trees to RLCFGs. \begin{definition} \label{def:grammar-tree} For CFGs, the {\em grammar tree} is obtained by pruning the parse tree: all occurrences of each nonterminal except the leftmost one are converted into leaves and their subtrees are pruned. Then the grammar tree has exactly one internal node per distinct nonterminal and the total number of nodes is $g+1$: $r$ internal nodes and $g+1-r$ leaves. For RLCFGs, we treat rules $A \rightarrow A_1^s$ as $A \rightarrow A_1 A_1^{[s-1]}$, where the node labeled $A_1^{[s-1]}$ is always a leaf ($A_1$ may also be a leaf, if it is not the leftmost occurrence of $A_1$). Since we define the size of $A_1^s$ as $2$, the grammar tree is still of size $g+1$.
\end{definition} We will identify a nonterminal $A$ with the only internal grammar tree node labeled $A$. When there is no confusion about the referred node, we will also identify terminal symbols $a$ with grammar tree leaves. We extend an existing approach to grammar indexing \cite{CNspire12} to the case of our RLCFGs. We start by classifying the occurrences in $T$ of a pattern $P[1\mathinner{.\,.} m]$ into primary and secondary. \begin{definition} \label{def:occs} The leaves of the grammar tree induce a partition of $T$ into $f = g+1-r$ {\em phrases}. An occurrence of $P[1\mathinner{.\,.} m]$ at $T[t\mathinner{.\,.} t+m-1]$ is {\em primary} if the lowest grammar tree node deriving a range of $T$ that contains $T[t\mathinner{.\,.} t+m-1]$ is internal (or, equivalently, the occurrence crosses the boundary between two phrases); otherwise it is {\em secondary}. \end{definition} \subsection{Finding the primary occurrences} \label{sec:primary} Let nonterminal $A$ be the lowest (internal) grammar tree node that covers a primary occurrence $T[t\mathinner{.\,.} t+m-1]$ of $P[1\mathinner{.\,.} m]$. Then, if $A \rightarrow A_1 \cdots A_s$, there exists some $i\in [1\mathinner{.\,.} s-1]$ and $q\in [1\mathinner{.\,.} m-1]$ such that (1) a suffix of $\exp(A_i)$ matches $P[1\mathinner{.\,.} q]$, and (2) a prefix of $\exp(A_{i+1}) \cdots \exp(A_s)$ matches $P[q+1\mathinner{.\,.} m]$. The idea is to index all the pairs $(\exp(A_i)^{rev},\exp(A_{i+1}) \cdots \exp(A_s))$ and find those where the first and second component are prefixed by $(P[1\mathinner{.\,.} q])^{rev}$ and $P[q+1\mathinner{.\,.} m]$, respectively. Note that there is exactly one such pair per border between two consecutive phrases (or leaves in the grammar tree). \begin{definition} \label{def:locus} Let $v$ be the lowest (internal) grammar tree node that covers a primary occurrence $T[t\mathinner{.\,.} t+m-1]$ of $P$, $[t\mathinner{.\,.} t+m-1] \subseteq proj(v)$.
Let $v_i$ be the leftmost child of $v$ that overlaps $T[t\mathinner{.\,.} t+m-1]$, $[t\mathinner{.\,.} t+m-1] \cap proj(v_i) \not= \emptyset$. We say that node $v$ is the {\em parent} of the primary occurrence $T[t\mathinner{.\,.} t+m-1]$ of $P$, and node $v_i$ is its {\em locus}. \end{definition} We build a multiset $\mathcal{G}$ of $f-1=g-r$ string pairs containing, for every rule $A\to A_1\cdots A_s$, the pairs $(\exp(A_i)^{rev}, \exp(A_{i+1})\cdots \exp(A_s))$ for $1\le i < s$. The $i$th pair is associated with the $i$th child of the (unique) $A$-labeled internal node of the grammar tree. The multisets $\mathcal{X}$ and $\mathcal{Y}$ are then defined as projections of $\mathcal{G}$ to the first and second coordinate, respectively. We lexicographically sort these multisets, and represent each pair $(X,Y)\in \mathcal{G}$ by the pair $(x,y)$ of the ranks of $X\in\mathcal{X}$ and $Y\in \mathcal{Y}$, respectively. As a result, $\mathcal{G}$ can be interpreted as a subset of the two-dimensional integer grid $[1\mathinner{.\,.} g-r] \times [1\mathinner{.\,.} g-r]$. Standard solutions~\cite{CNspire12} to find the primary occurrences consider the partitions $P[1\mathinner{.\,.} q] \cdot P[q+1\mathinner{.\,.} m]$ for $1\le q<m$. For each such partition, we search for $(P[1\mathinner{.\,.} q])^{rev}$ in $\mathcal{X}$ to find the range $[x_1\mathinner{.\,.} x_2]$ of symbols $A_i$ whose suffix matches $P[1\mathinner{.\,.} q]$, search for $P[q+1\mathinner{.\,.} m]$ in $\mathcal{Y}$ to find the range $[y_1\mathinner{.\,.} y_2]$ of rule suffixes $A_{i+1}\cdots A_s$ whose prefix matches $P[q+1\mathinner{.\,.} m]$, and finally search the two-dimensional grid for all the points in the range $[x_1\mathinner{.\,.} x_2] \times [y_1\mathinner{.\,.} y_2]$. This retrieves all the primary occurrences whose leftmost intersected phrase ends with $P[1\mathinner{.\,.} q]$. 
From the locus $A_i$ associated with each point $(x,y)$ found, and knowing $q$, we have sufficient information to report the position in $T$ of this primary occurrence and all of its associated secondary occurrences; we describe this process in Section~\ref{sec:secondary}. This arrangement follows previous strategies to index CFGs \cite{CNspire12}. To include rules $A \rightarrow A_1^s$, we just index the pair $(\exp(A_1)^{rev}, \exp(A_1)^{s-1})$, which corresponds precisely to treating the rule as $A \rightarrow A_1 A_1^{[s-1]}$ to build the grammar tree. It is not necessary to index other positions of the rule, since their pairs will look like $(\exp(A_1)^{rev},\exp(A_1)^{s'})$ with $s' < s-1$, and if $P[q+1\mathinner{.\,.} m]$ matches a prefix of $\exp(A_1)^{s'}$, it will also match a prefix of $\exp(A_1)^{s-1}$. The other occurrences inside $\exp(A_1)^{s-1}$ will be dealt with as secondary occurrences. Finally note that, by definition, a pattern $P$ of length $m=1$ has no primary occurrences. We can, however, find all of its occurrences at the end of a phrase boundary by searching for $P[1\mathinner{.\,.} 1]^{rev} = P[1]$ in $\mathcal{X}$, to find $[x_1\mathinner{.\,.} x_2]$, and assuming $[y_1\mathinner{.\,.} y_2] = [1\mathinner{.\,.} g-r]$. We can only miss the end of the last phrase boundary, but this is symbol \$, which (just as \#) is not present in search patterns. We can just treat these points $(x,y)$ as the primary occurrences of $P$, and report them and their associated secondary occurrences with the same mechanism we will describe for general patterns in Section~\ref{sec:secondary}. A geometric data structure can represent our grid of size $(g-r)\times(g-r)$ with $g-r$ points in $O(g-r) \subseteq O(g)$ space, while performing each range search in time $O(\log^\epsilon g)$ plus $O(\log^\epsilon g)$ per primary occurrence found, for any constant $\epsilon>0$ \cite{DBLP:conf/compgeom/ChanLP11}.
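A linear-scan stand-in for this geometric structure conveys the search logic; the pair values below are hypothetical toy data, as if the phrases of $T = \texttt{banana}$ were \texttt{ba|na|na}.

```python
def primary_candidates(pairs, p, q):
    """pairs[k] = (X, Y) where X = exp(A_i) reversed and Y is the
    expansion of the rest of the rule, one pair per phrase border.
    Returns the pairs whose X is prefixed by reverse(p[:q]) and
    whose Y is prefixed by p[q:].  A naive O(g) scan standing in for
    the O(log^eps g)-time two-dimensional range search."""
    xq, yq = p[:q][::-1], p[q:]
    return [k for k, (x, y) in enumerate(pairs)
            if x.startswith(xq) and y.startswith(yq)]

# Toy pairs for the hypothetical phrase partition ba|na|na of "banana":
# one border after "ba" and one after "bana".
pairs = [("ab", "nana"), ("anab", "na")]
assert primary_candidates(pairs, "anan", 1) == [0]     # split "a"|"nan"
assert primary_candidates(pairs, "ana", 1) == [0, 1]   # both borders match
```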
\subsection{Parsing the pattern} \label{sec:pattern} In most previous work on grammar-based indexes, all the $m-1$ partitions $P = P[1\mathinner{.\,.} q] \cdot \allowbreak P[q+1\mathinner{.\,.} m]$ are tried out. We now show that, in our locally-consistent parsing, the number of positions that must be tried is reduced to $O(\log m)$. \begin{lemma} \label{lem:pattern} Using our grammar of Section~\ref{sec:attractor}, there are only $O(\log m)$ positions $q$ yielding primary occurrences of $P[1\mathinner{.\,.} m]$. These positions belong to $M(P)+1$ (see \Cref{def:m}). \end{lemma} \begin{proof} Let $A$ be the parent of a primary occurrence $T[t\mathinner{.\,.} t+m-1]$, and let $r$ be the round where $A$ is formed. There are two possibilities: \begin{enumerate} \item $A \rightarrow A_1 \cdots A_s$ is a block-forming rule, and for some $1 \le i < s$, a suffix of $\exp(A_i)$ matches $P[1\mathinner{.\,.} q]$, for some $1 \le q < m$. This means that $q-1 = \min \hat{B}_{r-1}(t,t+m-1)$. \item $A \rightarrow A_1^s$ is a run-length nonterminal, and a suffix of $\exp(A_1)$ matches $P[1\mathinner{.\,.} q]$, for some $1 \le q < m$. This means that $q-1 = \min B_{r}(t,t+m-1)$. \end{enumerate} In either case, $q\in M(P)+1$ by \Cref{lem:m}. \end{proof} In order to construct $M(P)$ using \Cref{def:m,def:bp}, we need to already have an occurrence of $P$, which is not feasible in our context. Hence, we imagine parsing two texts, $T$ and $P^* = \#P\$$, simultaneously using the permutations $\pi_r$ we choose for $T$ at each round $r$. It is easy to verify that the results of \Cref{sec:lcp,sec:locality} remain valid across substrings of both $T$ and $P^*$, because they do not depend on how the permutations are chosen. Our goal is thus to parse $P^*$ at query time in order to build $M(P)$ using the occurrence of $P$ in $P^*$. We now show how to implement this step in $O(m)$ time.
To carry out the parsing, we must preserve the permutations $\pi_r$ of the alphabet used at each of the $O(\log n)$ rounds of the parsing of $T$, so as to parse $P^*$ in the same way. The alphabets in each round are disjoint because all the blocks are of length at least 2. Therefore, the total size of these permutations coincides with the total number of terminals and nonterminals in the grammar; thus, by \Cref{lem:linear} and \Cref{thm:rlcfg}, they require $O(\gamma)$ space per round and $O(g)$ space overall. Let us describe the first round of the parsing. We first traverse $P^* = P^*_0$ left-to-right and identify the runs $a^\ell$. Those are sought in a perfect hash table where we have stored all the first-round pairs $(a,\ell)$ existing in the text, and are replaced by their corresponding nonterminal $A \rightarrow a^\ell$ (see below for the case where $a^\ell$ does not appear in the text). The result of this pass is a new sequence $\hat{P^*} = \hat{P}^*_0$. We then traverse $\hat{P^*}$, finding the local minima (and thus identifying the blocks) in $O(m)$ time. For this, we have stored the values $\pi(a)=\pi_0(a)$ associated with each terminal $a$ in another perfect hash table (for the nonterminals $A \rightarrow a^\ell$ just created, we have $\pi(A)=\pi(a)$; recall \Cref{sec:lcp}). To convert the identified blocks $A \rightarrow A_1 \cdots A_k$ into nonterminals for the next round, such tuples $(A_1 \cdots A_k)$ have been stored in yet another perfect hash table, from which the nonterminal $A$ is obtained. This way, we can identify all the blocks in time $O(m)$, and proceed to the next round on the resulting sequence of nonterminals, $P^*_1$. The size of the first two hash tables is proportional to the number of terminals and nonterminals in the level, and the size of the tuples stored in the third table is proportional to that of the right-hand sides of the rules created during the parsing.
By \Cref{thm:rlcfg}, those sizes are $O(\gamma)$ per round and $O(g)$ added over all the rounds. Since the grammar is locally balanced, $P^*$ is parsed in $O(\log m)$ iterations, where at the $r$th iteration we parse $P^*_{r-1}$ into a sequence of blocks whose total number is at most half of the preceding one, by Observation~\ref{obs:length2}. Since we can find the partition into blocks in linear time at any given level, the whole parsing takes time $O(m)$. Construction of the sets $B_r(P)$, $\hat{B}_r(P)$, and $M(P)$ from \Cref{def:bp,def:m} also takes $O(m)$ time. Note that $P^*_r$ might contain blocks and runs that do not occur in $T_r$. By \Cref{lem:lcg}, if a block in $P^*_r$ is not among the leftmost 4 or rightmost 2 blocks, then it must also appear within any occurrence of $P$ in $T$, and as a result, the same must also be true for runs in $P^*_{r+1}$. Consequently, if a block (or a run) is not among those 6 extreme ones and yet does not appear in the hash table, we can abandon the search. As for the $O(1)$ allowed new blocks (and runs), we gather them in order to consistently assign new nonterminals and (in the case of blocks) arbitrary unused $\pi_r$-values. We then proceed normally with subsequent levels of the parsing. Note that the newly formed blocks cannot appear anymore since distinct levels use distinct symbols, so we do not attempt to insert them into the perfect hash tables. \subsection{Searching for the pattern prefixes and suffixes} \label{sec:ztrie} As a result of the previous section, we need only search for $\tau = O(\log m)$ (reversed) prefixes and suffixes of $P$ in $\mathcal{X}$ or $\mathcal{Y}$, respectively. In this section we show that the corresponding ranges $[x_1\mathinner{.\,.} x_2]$ and $[y_1\mathinner{.\,.} y_2]$ can be found in time $O(m + \tau\log^2 m) = O(m)$. We build on the following result.
\begin{lemma}[cf.\ \cite{BBPV18,gagie2014lz77,GNP18}] \label{lemma: z-fast} Let $\mathcal S$ be a set of strings and assume we have a data structure supporting extraction of any length-$\ell$ prefix of strings in $\mathcal S$ in time $f_e(\ell)$ and computation of a given Karp--Rabin signature $\kappa$ of any length-$\ell$ prefix of strings in $\mathcal S$ in time $f_h(\ell)$. We can then build a data structure of $O(|\mathcal S|)$ words such that, later, we can solve the following problem in $O(m + \tau( f_h(m) +\log m ) + f_e(m))$ time: given a pattern $P[1\mathinner{.\,.} m]$ and $\tau>0$ suffixes $Q_1,\dots,Q_\tau$ of $P$, find the ranges of strings in (the lexicographically-sorted) $\mathcal S$ prefixed by $Q_1,\dots,Q_\tau$. \end{lemma} \begin{proof} The proof simplifies that of \citeN[Lem 5.2]{GNP18}. First, we require a Karp--Rabin function $\kappa$ that is collision-free between equal-length text substrings whose length is a power of two. We can find such a function at index construction time in $O(n\log n)$ expected time and $O(n)$ space \cite{DBLP:journals/jda/BilleGSV14}. We extend the collision-free property to pairs of equal-length strings of arbitrary length by switching to the hash function $\kappa'$ defined as $\kappa'(T[i\mathinner{.\,.} i+\ell-1]) = \langle \kappa(T[i\mathinner{.\,.} i+2^{\lfloor \log \ell \rfloor}-1]), \kappa(T[i+\ell-2^{\lfloor \log \ell \rfloor}\mathinner{.\,.} i+\ell-1]), \ell \rangle$. Z-fast tries \cite[Sec.~H.2]{BBPV18} solve the \emph{weak} part of the lemma in $O(m\log(\sigma)/w + \tau\log m)$ time. They have the same topology as a compact trie on $\mathcal{S}$, but use the function $\kappa'$ to find a candidate node for each $Q_i$ in time $O(\log|Q_i|)=O(\log m)$. We compute the $\kappa'$-signatures of all pattern suffixes $Q_1, \dots, Q_\tau$ in $O(m)$ time, and then search the z-fast trie for the $\tau$ suffixes $Q_i$ in time $O(\tau\log m)$.
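To make the definitions concrete, here is a toy instantiation of $\kappa$ and $\kappa'$ (the base and modulus are arbitrary choices of ours), together with the arithmetic, used later in the proof, that derives the signature of a suffix from two prefix signatures.

```python
C, MU = 31, (1 << 61) - 1  # base and modulus for kappa; arbitrary choices

def kappa(s):
    # Toy Karp-Rabin signature: kappa(s) = sum_i s[i] * C^(|s|-1-i) mod MU.
    h = 0
    for ch in s:
        h = (h * C + ord(ch)) % MU
    return h

def kappa_prime(s):
    # Composite signature <kappa(prefix), kappa(suffix), length>, where the
    # prefix/suffix have length 2^floor(log |s|), as in the definition above.
    e = 1
    while 2 * e <= len(s):
        e *= 2
    return (kappa(s[:e]), kappa(s[-e:]), len(s))

def suffix_sig(full_sig, prefix_sig, s):
    # kappa of the length-s suffix of a string, obtained from the signatures
    # of the whole string and of its complementary prefix.
    return (full_sig - prefix_sig * pow(C, s, MU)) % MU
```

With this convention, the signature of any suffix of a string is recovered in $O(\log s)$ arithmetic operations from two prefix signatures, via fast modular exponentiation of $c^s$.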
By \emph{weak} we mean that the returned answer for each suffix $Q_i$ is not guaranteed to be correct if $Q_i$ does not prefix any string in $\mathcal S$: we could therefore have false positives among the answers, though false negatives cannot occur. A procedure for discarding false positives \cite{gagie2014lz77} requires extracting substrings and their signatures from $\mathcal S$. We describe and simplify this strategy in detail in order to analyze its time complexity in our scenario. Let $Q_1,\dots, Q_j$ be the pattern suffixes for which the z-fast trie found a candidate node. Order the pattern suffixes so that $|Q_1| < \dots < |Q_j|$, that is, $Q_i$ is a suffix of $Q_{i'}$ whenever $i<i'$. In addition, let $v_1, \dots, v_j$ be the candidate nodes (explicit or implicit) of the z-fast trie such that all substrings below them are prefixed by $Q_1, \dots, Q_j$ (modulo false positives), respectively, and let $t_i = \mathit{string}(v_i)$ be the substring read from the root of the trie to $v_i$. Our goal is to discard all nodes $v_k$ such that $t_k \neq Q_k$. Note that it is easy to check (in $O(\tau\cdot f_h(m))$ time) that $\kappa'(Q_i) = \kappa'(t_i)$ for all $i=1, \dots, j$. If a string $t_i$ does not pass this test, then clearly $v_i$ needs to be discarded because it must be the case that $Q_i \neq t_i$. We can thus safely assume that $\kappa'(Q_i) = \kappa'(t_i)$ for all $i=1, \dots, j$. As a second simplification, we note that it is also easy to check (again in $O(\tau\cdot f_h(m))$ time) that $t_a$ is a suffix of $t_b$ whenever $1 \leq a<b \leq j$. Starting from $a=1$ and $b=2$, we check that $\kappa'(t_a) = \kappa'(t_b[|t_b|-|t_a|+1 \mathinner{.\,.} |t_b|])$. If the test succeeds, we know for sure that $t_a$ is a suffix of $t_b$, since $\kappa'$ is collision-free among text substrings: we increment $b \leftarrow b+1$, set $a$ to the next index such that $v_a$ was not discarded (at the beginning of the procedure, no $v_a$ has been discarded), and repeat. 
Otherwise, we clearly need to discard $v_b$ since $\kappa'(Q_b[|t_b|-|t_a|+1 \mathinner{.\,.} |t_b|]) = \kappa'(Q_a) = \kappa'(t_a) \neq \kappa'(t_b[|t_b|-|t_a|+1 \mathinner{.\,.} |t_b|])$, and therefore $Q_b \neq t_b$. We thus discard $v_b$ and increment $b\leftarrow b+1$. From now on we can thus safely assume that $t_a$ is a suffix of $t_b$ whenever $1 \leq a<b \leq j$. The last step is to explicitly compare $t_j$ and $Q_j$ in $O(f_e(m))$ time. Since we established that (i) $t_a$ is a suffix of $t_b$ whenever $1 \leq a<b \leq j$, (ii) by definition, $Q_a$ is a suffix of $Q_b$ whenever $1 \leq a<b \leq j$, and (iii) $|Q_i| = |t_i|$ for all $i=1, \dots, j$ (since the function $\kappa'$ includes the string's length and we know that $\kappa'(Q_i) = \kappa'(t_i)$ for all $i=1, \dots, j$), checking $t_j = Q_j$ is enough to establish that $t_i = Q_i$ for all $i=1, \dots, j$. However, $t_j \neq Q_j$ is not enough to discard all $v_i$: it could be that a proper suffix of $t_j$ matches the corresponding suffix of $Q_j$, so some $v_i$ still pass the test. We therefore compute the length $s$ of the longest common suffix of $t_j$ and $Q_j$, and discard only those $v_i$ such that $|t_i|>s$. To analyze the running time, note that we compute $\kappa'$-signatures of strings that are always suffixes of prefixes of length at most $m$ of strings in $\mathcal S$ (because our candidate nodes $v_1, \dots, v_j$ are always at depth at most $m$). By definition, to retrieve $\kappa'(t_i)$ we need to compute the two $\kappa$-signatures of the length-$2^e$ prefix and suffix of $t_i$, for some $e\leq \log |t_i| \leq \log m$, $1\leq i \leq j$. Computing the required $\kappa'$-signatures therefore reduces to the problem of computing $\kappa$-signatures of suffixes of prefixes of length at most $m$ of strings in $\mathcal S$. Let $R' = t_b[|t_b|-s+1\mathinner{.\,.} |t_b|]$ be such a length-$s$ string of which we need to compute $\kappa(R')$.
Then, $\kappa(R') = \left(\kappa(t_b) - \kappa(t_b[1\mathinner{.\,.} |t_b|-s]) \cdot c^{s}\right) \bmod \mu$. Both signatures on the right-hand side are of prefixes of length at most $m$ of strings in $\mathcal S$. The value $c^{s} \bmod \mu$ can moreover be computed in $O(\log m)$ time using the fast exponentiation algorithm. It follows that, overall, computing the required $\kappa'$-signatures takes $O(f_h(m) + \log m)$ time per candidate node. For the last candidate, we extract the prefix $t_j$ of length at most $m$ ($O(f_e(m))$ time) of one of the strings in $\mathcal S$ and compare it with the longest candidate pattern suffix ($O(m)$ time). There are at most $\tau$ candidates, so the verification takes time $O(m + \tau\cdot (f_h(m)+\log m) + f_e(m))$. Added to the time to find the candidates in the z-fast trie, we obtain the claimed bounds. \end{proof} Therefore, when $\mathcal{S}$ is $\mathcal{X}$ or $\mathcal{Y}$, we need to extract length-$\ell$ prefixes of reversed phrases (i.e., of some $\exp(A_i)^{rev}$) or prefixes of concatenations of consecutive phrases (i.e., of some $\exp(A_{i+1})\cdots \exp(A_s)$) in time $f_e(\ell)$. The next result implies that we can obtain $f_e(\ell)=O(\ell)$. \begin{lemma}[{cf.\ \cite{DBLP:conf/dcc/GasieniecKPS05}, \cite[Sec.~4.3]{CNspire12}}]\label{lem:extract from rlcfg} Given an RLCFG of size $g$, there exists a data structure of size $O(g)$ such that, for any nonterminal $A$, any prefix or suffix of $\exp(A)$ can be extracted in real time. \end{lemma} \begin{proof} \citeN{DBLP:conf/dcc/GasieniecKPS05} show how to extract any prefix of any $\exp(A)$ in a CFG of size $g$ in Chomsky Normal Form, in real time, using a data structure of size $O(g)$. This was later extended to general CFGs \cite[Sec.~4.3]{CNspire12}. We now extend the result to RLCFGs. Let us first consider prefixes. Define a forest of tries $T_G$ with one node per distinct nonterminal or terminal symbol. Let us identify symbols with nodes of $T_G$.
Terminal symbols are trie roots, and $A_1$ is the parent of $A$ in $T_G$ iff $A_1$ is the leftmost symbol in the rule that defines $A$, that is, $A \rightarrow A_1\cdots$. For the rules $A \rightarrow A_1^s$, we also let $A_1$ be the parent of $A$. We augment $T_G$ to support constant-time level ancestor queries \cite{BF04}, which return the ancestor at a given depth of a given node. To extract $\ell$ symbols of $\exp(A)$, we start with the node $A$ of $T_G$ and immediately return the terminal $a$ associated with its trie root (found with a level ancestor query). We now find the ancestor of $A$ at depth 2 (a child of the trie root). Let $B$ be this node, with $B \rightarrow a B_2 \cdots B_s$. We recursively extract the symbols of $\exp(B_2)$ until $\exp(B_s)$, stopping after emitting $\ell$ symbols. If we obtain the whole $\exp(B)$ and still do not emit $\ell$ symbols, we go to the ancestor of $A$ at depth 3. Let $C$ be this node, with $C \rightarrow B C_2 \cdots C_r$, then we continue with $\exp(C_2)$, $\exp(C_3)$, and so on. At the top level of the recursion, we might finally arrive at extracting symbols from $\exp(A_2)$, $\exp(A_3)$, and so on. In this process, when we have to obtain the next symbols from a nonterminal $D \rightarrow E^s$, we treat it exactly as $D \rightarrow E \cdots E$ of size $s$, that is, we extract $\exp(E)$ $s-1$ further times. Overall, we output $\ell$ symbols in time $O(\ell)$. The extraction is not yet real-time, however, because there may be several returns from the recursion between two symbols output. To ensure $O(1)$ time between two consecutive symbols obtained, we avoid the recursive call for the rightmost child of each nonterminal, and instead move to it directly. Suffixes are analogous, and can be obtained in real-time in reverse order by defining a similar tree $T_G'$ where $A_s$ is the parent of $A$ iff $A_s$ is the rightmost symbol in the rule that defines $A$, $A \rightarrow \cdots A_s$. 
For rules $A \rightarrow A_1^s$, $A_1$ is still the parent of $A$. \end{proof} By slightly extending the same structures, we can compute any required signature in time $f_h(\ell) = O(\log^2 \ell)$ in our grammars. \begin{lemma} \label{lem:kr} In the grammar of Section~\ref{sec:attractor}, we can compute Karp--Rabin signatures of prefixes of length $\ell$ of strings in $\mathcal{X}$ or $\mathcal{Y}$ in time $f_h(\ell) = O(\log^2 \ell)$. \end{lemma} \begin{proof} As for extraction (Lemma~\ref{lem:extract from rlcfg}), we consider the $O(\log \ell)$ levels of the grammar subtree containing the desired prefix. For each level, we find in $O(\log \ell)$ time the prefix/suffix of the rule contained in the desired prefix. Fingerprints of those prefixes/suffixes of rules are precomputed. Strings in $\mathcal{X}$ are reversed expansions of nonterminals. Let every nonterminal $X$ store the signatures of the reverses of all the suffixes of $\exp(X)$ that start at $X$'s children. That is, if $X \rightarrow X_1\cdots X_s$, store the signatures of $(\exp(X_i)\cdots \exp(X_s))^{rev}$ for all $i$. We use the trie $T_G'$ of the proof of Lemma~\ref{lem:extract from rlcfg}, where each trie node is a grammar nonterminal and its parent is the rightmost symbol of its defining rule. To extract the signature of the reversed prefix of length $\ell$ of a nonterminal $X$, we go to the node of $X$ in $T_G'$ and run an exponential search over its ancestors, so as to find in time $O(\log \ell)$ the lowest one whose expansion length is $\le \ell$. Let $B$ be that nonterminal; then $B$ is the first node in the rightmost path of the parse tree from $X$ with $|B| \le \ell$. Note that the height of $B$ is $O(\log\ell)$ because the grammar is locally balanced (Lemma~\ref{lem:locally-balanced}), and moreover the parent $A \rightarrow B_1 \cdots B_{s-1} B$ of $B$ satisfies $|A| > \ell$.
We then exponentially search the preceding siblings of $B$ until we find the largest $i$ such that $|B_i|+\cdots+|B| > \ell$ (we must store these cumulative expansion lengths for each $B_i$). This takes $O(\log \ell)$ time. We collect the stored signature of $(\exp(B_{i+1})\cdots \exp(B))^{rev}$; this is part of the signature we will assemble. Now we repeat the process from $B_i$, collecting the signature from the remaining part of the desired suffix. Since the depth of the involved nodes decreases at least by 1 at each step, the whole process takes $O(\log^2 \ell)$ time. The case of $\mathcal{Y}$ is similar, now using the trie $T_G$ of the proof of Lemma~\ref{lem:extract from rlcfg} and computing signatures of prefixes. The only difference is that we start from a given child $Y_i$ of a nonterminal $Y \rightarrow Y_1 \cdots Y_t$ and the signature may span up to the end of $Y$. So we start with the exponential search for the leftmost $Y_j$ such that $|Y_i|+\cdots+|Y_j|>\ell$; the rest of the process is similar. When we have rules of the form $A \rightarrow A_1^s$, we find in constant time the desired copy of $A_1$, from $\ell$ and $|A_1|$. Similarly, we can compute the signature $\kappa$ of the last $i$ copies of $A_1$ as $\kappa(\exp(A_1)^i) = \left(\kappa(\exp(A_1))\cdot \frac{c^{|A_1|\cdot i}-1}{c^{|A_1|}-1}\right) \!\!\mod \mu$: $c^{|A_1|} \!\!\mod \mu$ and $(c^{|A_1|}-1)^{-1} \!\!\mod \mu$ can be stored with $A_1$, and the exponentiation can be computed in $O(\log i) \subseteq O(\log\ell)$ time. \end{proof} Overall, we find the $\tau$ ranges in the grid in time $O(m + \tau(f_h(m) +\log m) + f_e(m)) = O(m+\tau \log^2 m) = O(m+\log^3 m) = O(m)$, as claimed. \subsection{Reporting secondary occurrences} \label{sec:secondary} We report each secondary occurrence in constant amortized time, by adapting and extending to RLCFGs an existing scheme for CFGs \cite{CNspire12}.
Our data structure enhances the grammar tree with some fields per node $v$ labeled $A$ (where $A$ is a terminal or a nonterminal): \begin{enumerate} \item $v.\mathit{anc} = u$ is the nearest ancestor of $v$, labeled $B$, such that $u$ is the root or $B$ labels more than one node in the grammar tree. Note that, since $u$ is internal in the grammar tree, it has the leftmost occurrence of the label $B$ in preorder. This field is undefined in the nodes labeled $A^{[s-1]}$ we create in the grammar tree (these do not appear in the parse tree). \item $v.\mathit{offs} = v_i-u_i$, where $proj(v)=[v_i\mathinner{.\,.} v_j]$ and $proj(u)=[u_i\mathinner{.\,.} u_j]$, is the offset of the projection $\exp(A)$ of $v$ inside the projection $\exp(B)$ of $u$. This field is also undefined in the nodes labeled $A^{[s-1]}$. \item $v.\mathit{next} = v'$ is the next node in preorder labeled $A$, or $null$ if $v$ is the last node labeled $A$ (those next appearances of $A$ are leaves in the grammar tree). If $B \rightarrow A^s$, the internal node $u$ labeled $B$ has two children: $v$ labeled $A$ and $v'$ labeled $A^{[s-1]}$. In this case, $v.\mathit{next} = v'$, and $v'.\mathit{next}$ points to the next occurrence of a node labeled $A$, in preorder. \end{enumerate} Let $u$, labeled $A$, be the parent of a primary occurrence of $P$, with $A \rightarrow A_1 \cdots A_s$, and $v$, labeled $A_i$, be its locus. The grid defined in Section~\ref{sec:primary} gives us a pointer to $v$. We then know that the relative offset of this primary occurrence inside $A_i$ is $|A_i|-q+1$. We then move to the nearest ancestor of $v$ we have recorded, $u' = v.\mathit{anc}$, inside which the occurrence of $P$ starts at offset $\mathit{offs} = |A_i|-q+1+v.\mathit{offs}$ (note that $u'$ can be $u$ or an ancestor of it). From now on, to find the offset of this occurrence in $T$, we repeatedly add $u'.\mathit{offs}$ to $\mathit{offs}$ and move to $u' \leftarrow u'.\mathit{anc}$.
When $u'$ reaches the root, $\mathit{offs}$ is the position in $T$ of the primary occurrence. At every step of this upward path to the root, we also take the rightward path to $u'' \leftarrow u'.\mathit{next}$. If $u'' \not= null$, we recursively report the copy of the primary occurrence inside $u''$, continuing from the same current value of $\mathit{offs}$ we have for $u'$. In other words, from the node $u'=v.\mathit{anc}$ we recursively continue by $u'.\mathit{anc}$ and $u'.\mathit{next}$, forming a binary tree of recursive calls. All the leaves of this binary tree that are ``left'' children (i.e., by $u'.\mathit{anc}$) reach the root of the grammar tree and report a distinct offset in $T$ each time. The total number of nodes in this tree is proportional to the number of occurrences reported, and therefore the amortized cost per occurrence reported is $O(1)$. In case $A \rightarrow A_1^s$, the internal grammar tree node $u$ labeled $A$ has two children: $v$ labeled $A_1$ and $v'=v.\mathit{next}$ labeled $A_1^{[s-1]}$. If $P$ has a primary occurrence where $P[1\mathinner{.\,.} q]$ matches a suffix of $\exp(A_1)$, the grid will send us to the node $v$, where the occurrence starts at offset $|A_1|-q+1$. This is just the leftmost occurrence of $P$ within $\exp(A)$, with offset $|A_1|-q+1$ as well. We must also report all the secondary occurrences inside $\exp(A)$, that is, all the offsets $i\cdot|A_1|-q+1$, for $i=1,2,\ldots$ as long as $i\cdot|A_1|-q+m \le s\cdot |A_1|$. For each such offset we continue the reporting from $u'=v.\mathit{anc}$, with offset $\mathit{offs} = i\cdot|A_1|-q+1+v.\mathit{offs}$. We might also arrive at such a node $v$ by a $\mathit{next}$ pointer, in which case the occurrence of $P$ is completely inside $\exp(A_1)$, with offset $\mathit{offs}$. In this case, we must similarly propagate all the other $s-1$ copies of $A_1$ upwards, and then continue to the right. 
Precisely, we continue from $u'=v.\mathit{anc}$ and offset $\mathit{offs}+i\cdot|A_1|+ v.\mathit{offs}$, for all $0 \le i < s$. Finally, we continue rightward to node $v'.\mathit{next}$ with the original value of $\mathit{offs}$. Our amortized analysis stays valid on these run-length nodes, because we still do $O(1)$ work per new occurrence reported (these are $s$-ary nodes in our tree of recursive calls). \subsection{Short patterns}\label{sec:short patterns} All our data structures use $O(g)$ space. After parsing the pattern to find the $\tau = O(\log m)$ relevant cutting points $q$ in time $O(m)$ (Section~\ref{sec:pattern}), and finding the $\tau$ grid ranges $[x_1\mathinner{.\,.} x_2] \times [y_1\mathinner{.\,.} y_2]$ by searching $\mathcal{X}$ and $\mathcal{Y}$ in time $O(m)$ as well (Section~\ref{sec:ztrie}), we look for the primary and secondary occurrences. Finding the former requires $O(\log^\epsilon g)$ time for each of the $\tau$ ranges, plus $O(\log^\epsilon g)$ time per primary occurrence found (Section~\ref{sec:primary}). The secondary occurrences require just $O(1)$ time each (Section~\ref{sec:secondary}). This yields total time $O(m+\log m\log^\epsilon g + occ \log^\epsilon g)$ to find the $occ$ occurrences of $P[1\mathinner{.\,.} m]$. Next we show how to remove the additive term $O(\log m \log^\epsilon g)$ by dealing separately with short patterns: we use $O(\gamma)$ further space and leave only an additive $O(\log^\epsilon g)$-time term needed for short patterns that do not occur in $T$; we then further reduce this term. The cost $O(\log m \log^\epsilon g)$ comes from the $O(\log m)$ geometric searches, each having a component $O(\log^\epsilon g)$ that cannot be charged to the primary occurrences found \cite{DBLP:conf/compgeom/ChanLP11}. That cost, however, impacts the total search complexity only for short patterns: it can be $\omega(m)$ only if $m = O(\ell)$, with $\ell=\log^\epsilon g\log\log g$.
We can then store sufficient information to avoid this cost for the short patterns. Since $T$ has an attractor of size $\gamma$, there can be at most $\gamma \ell$ substrings of length $\ell$ crossing an attractor element, and all the others must have a copy crossing an attractor element. Thus, there are at most $\gamma \ell$ distinct substrings of length $\ell$ in $T$, and at most $\gamma \ell^2$ distinct substrings of length up to $\ell$. We store all these substrings in a succinct perfect hash table $H$ \cite{BBD09}, using the function $\kappa'$ of Lemma~\ref{lemma: z-fast} as the key. The value associated with each such substring is the set of $O(\log \ell) = O(\log\log g)$ split points $q$ that are relevant for its search (Section~\ref{sec:pattern}) {\em and} have points in the corresponding grid range (Section~\ref{sec:primary}). Since each partition position $q$ can be represented in $O(\log\ell) = O(\log\log g)$ bits, we encode all this information in $O(\gamma \ell^2 \log^2\ell)$ bits, which is $O(\gamma)$ space for any $\epsilon<\frac{1}{2}$. Succinct perfect hash tables require only linear-bit space on top of the stored data \cite{BBD09}, $O(\gamma \ell^2)$ bits in our case. Avoiding the partitions that do not produce any result effectively removes the $O(\log m \log^\epsilon g)$ additive term on the short patterns, because that cost can be charged to the first primary occurrence found. Note, however, that the function $\kappa'$ is collision-free only among the substrings of $T$, and therefore there could be short patterns that do not occur in $T$ but are still mapped to a position in $H$ that corresponds to a short substring of $T$ (within $O(g)$ space we cannot afford to store a locus to disambiguate). To discard those patterns, we proceed as follows. If the first partition returned by $H$ yields no grid points, then this was due to a collision with another pattern, and we can immediately return that $P$ does not occur in $T$.
If, on the other hand, the first partition does return occurrences, we immediately extract the text around the first one in order to verify that the substring is actually $P$. If it is not, then this is also due to a collision and we return that $P$ does not occur in $T$. Obtaining the locus $v$ of the first primary occurrence from the first partition $q$ takes time $O(\log^\epsilon g)$, and extracting $m$ symbols around it takes time $O(m)$, by using Lemma~\ref{lem:extract from rlcfg} around $v$. Detecting that a short pattern $P$ does not occur in $T$ then costs $O(m+\log^\epsilon g)$. We can slightly reduce this cost to $O(m+\log^\epsilon \gamma)$, as follows. Since $g=O(\gamma\log(n/\gamma))$, we have $\log^\epsilon g \in O(\log^\epsilon \gamma + \log\log(n/\gamma))$. Let $\ell'=\log\log(n/\gamma)$. We store all the $\gamma\ell'$ distinct text substrings of length $\ell'$ in a compact trie $C$, using perfect hashing to store the children of each node, and associating the locus $v$ of a primary occurrence with each trie node. The internal trie nodes represent all the distinct substrings shorter than $\ell'$. The compact trie $C$ requires $O(\gamma \ell') \subseteq O(\gamma\log(n/\gamma))$ space. A search for a pattern of length $m \le \ell'$ that does not occur in $T$ can then be discarded in $O(m)$ time, by traversing $C$ and then verifying the pattern around the locus. Thus the additive term $O(\log^\epsilon g)$ is reduced to $O(\log^\epsilon\gamma)$. \subsection{Construction} \label{sec:constr} \Cref{thm:rlcfg} shows that we can build a suitable grammar in $O(n)$ expected time and $O(g)$ working space, if we know $\gamma$. If not, \Cref{thm:rlcfg2} shows that the working space rises to $O(n)$.
The grammar tree is then easily built in $O(g)$ time by traversing the grammar top-down and left-to-right from the initial symbol, and marking nonterminals as we find them for the first time; the next times they are found correspond to leaves in the grammar tree, so they are not further explored. By recording the sizes $|A|$ of all the nonterminals $A$, we also obtain the positions where phrases start. Let us now recapitulate the data structures used by our index: \begin{enumerate} \item The grid of Section~\ref{sec:primary} where the points of $\mathcal{X}$ and $\mathcal{Y}$ are connected. \item The perfect hash tables storing the permutations $\pi$, the runs $a^\ell$, and the blocks generated, for each round of parsing, used in Section~\ref{sec:pattern}. \item The z-fast tries on $\mathcal{X}$ and $\mathcal{Y}$, for Section~\ref{sec:ztrie}. This includes finding a collision-free Karp--Rabin function $\kappa'$. \item The tries $T_G$ and $T_G'$, provided with level-ancestor queries and with the Karp--Rabin signatures of all the prefixes and suffixes of $A_1 \cdots A_s$ for any rule $A \rightarrow A_1 \cdots A_s$. \item The extra fields on the grammar tree to find secondary occurrences in Section~\ref{sec:secondary}. \item The structures $H$ and $C$ for the short patterns, in Section~\ref{sec:short patterns}. \end{enumerate} \citeN[Sec.~4]{NP18} carefully analyze the construction cost of points 1 and 3:\footnote{Their $w$ corresponds to our $g$: an upper bound to the number of phrases in $T$.} The multisets $\mathcal{X}$ and $\mathcal{Y}$ can be built from a suffix array in $O(n)$ time and space, but also from a sparse suffix array in $O(n\sqrt{\log g})$ expected time and $O(g)$ space \cite{DBLP:conf/soda/GawrychowskiK17}; this time drops to $O(n)$ if we allow the output to be correct w.h.p.\ only. A variant of the grid structure of point 1 is built in $O(g\sqrt{\log g})$ time and $O(g)$ space \cite{DBLP:conf/soda/BelazzouguiP16}.
The z-fast tries of point 3 are built in $O(g)$ expected time and space. However, ensuring that $\kappa'$ is collision-free requires $O(n\log n)$ expected time and $O(n)$ space \cite{DBLP:journals/jda/BilleGSV14}, which is dominant. Otherwise, we can build in $O(n)$ expected time and no extra space a signature that is collision-free w.h.p. The structures of point 2 are of total size $O(g)$ and are already built in $O(g)$ expected time and space during the parsing of $T$. It is an easy programming exercise to build the structures of points 4 and 5 in $O(g)$ time; the level-ancestor data structure is built in $O(g)$ time as well \cite{BF04}. To build the succinct perfect hash table $H$ of point 6, we traverse the text around the $g-r$ phrase borders; this is sufficient to spot all the primary occurrences of all the distinct patterns. There are at most $g\ell^2$ substrings of length up to $\ell$ crossing a phrase boundary, where $\ell = \log^\epsilon g \log\log g$. All their Karp--Rabin signatures $\kappa'$ can be computed in time $O(g\ell^2)$ as well, and inserted into a regular hash table to obtain the $O(\gamma\ell^2)$ distinct substrings. We then build $H$ on the signatures, in $O(\gamma \ell^2)$ expected time \cite{BBD09}. Therefore, the total expected time to create $H$ is $O(g\ell^2)$, whereas the space is $O(\gamma\ell^2)$ (we can obtain this space even without knowing $\gamma$, by progressively doubling the size of the hash table as needed). This construction space can be reduced to $O(\gamma\ell)$ by building a separate table $H_m$ for each distinct length $m \in [1\mathinner{.\,.} \ell]$. 
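The per-length collection of border-crossing substrings can be sketched as follows. This is a toy version of ours: plain Python sets stand in for the succinct perfect hash tables $H_m$, and \texttt{borders} is a hypothetical list of phrase-border positions in the text.

```python
def collect_short_substrings(text, borders, max_len):
    # For each length m <= max_len, gather the distinct substrings of text
    # whose occurrence contains a phrase-border position; every other
    # substring of the text has a copy that does.
    tables = {m: set() for m in range(1, max_len + 1)}
    for m in range(1, max_len + 1):
        for b in borders:
            lo = max(0, b - m + 1)          # leftmost start covering b
            hi = min(b, len(text) - m)      # rightmost valid start
            for start in range(lo, hi + 1):
                tables[m].add(text[start:start + m])
    return tables
```

In the real construction, each set would be capped at $g$ elements and flushed into a separate succinct table before starting a new one, which is what bounds the working space.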
Further, since we can spend $O(m)$ time when searching for a pattern of length $m$, we can split $H_m$ into up to $m$ subtables $H_{m,i}$, which can then be built separately within $O(g)$ total space: We stop our traversal each time we collect $g$ distinct substrings of length $m$, build a separate succinct hash table $H_{m,i}$ on those, and start afresh to build a new table $H_{m,i+1}$. Since there are at most $\gamma m \le g\,\! m$ distinct substrings, we will build at most $m$ tables $H_{m,1},\ldots,H_{m,m}$. Note that, in order to detect whether each substring appeared previously, we must search all the preceding tables $H_{m,1},\ldots,H_{m,i-1}$ for it, which raises the construction time to $O(g\ell^3)$. At search time, our pattern may appear in any of the $m$ tables $H_{m,i}$, so we search them all in $O(m)$ time. In order to compute the information on the partitions of each distinct substring, we can simulate its pattern search. Since we only need to find its relevant split points $q$ (Section~\ref{sec:pattern}), their grid ranges (Section~\ref{sec:ztrie}), and which of these are nonempty (Section~\ref{sec:primary}), the total time spent per substring of length up to $\ell$ is $O(\ell + \log \ell \log^\epsilon \gamma) = O(\ell)$. Added over the up to $\gamma \ell^2$ distinct substrings, the time is $O(\gamma\ell^3)$. The whole process then takes $O(g\ell^3)$ expected time and $O(g)$ space. We enforce $\epsilon < \frac{1}{6}$ to keep the time within $O(g \sqrt{\log g})$. We also build the compact trie $C$ on all the distinct substrings of length $\ell' = \log\log(n/\gamma)$. We can collect their signatures $\kappa'$ in $O(g\ell')$ time around phrase boundaries, storing them in a temporary hash table that collects at most $O(\gamma \ell')$ distinct signatures. For each such distinct signature we find, we insert the corresponding substring in $C$, recording its corresponding locus, in $O(\ell')$ time. 
The locus must also be recorded for the internal trie nodes $v$ we traverse, if the substring represented by $v$ also crosses the phrase boundary; this must happen for some descendant leaf of $v$ because $v$ must have a primary occurrence. Since we insert at most $\gamma \ell'$ distinct substrings, the total work on the trie is $O(\gamma \ell'\,\!^2)$. Then the expected construction time of $C$ is $O(g\ell' + \gamma\ell'\,\!^2) \subseteq O(g\ell'\,\!^2) \subseteq O(\gamma\log(n/\gamma)(\log\log(n/\gamma))^2) \subseteq O(n)$. The construction space is $O(\gamma\ell') = O(\gamma\log\log(n/\gamma)) \subseteq O(\gamma\log(n/\gamma))$. Note that we need to know $\gamma$ to determine $\ell'$. If we do not know $\gamma$, we can try out all the lengths, from $\ell' = \log\log(n/g)$ to $\log\log n$; note that the unknown correct value is in this range because $\gamma \le g$. For each length, we build the structures to collect the distinct substrings of length $\ell'$, but stop if we exceed $g$ distinct ones. Note that we cannot exceed $g$ distinct substrings for $\ell' \le \log\log(n/\gamma)$ because, in the grammar of Section~\ref{sec:attractor}, it holds that $g \ge \gamma\log(n/\gamma) \ge \gamma\log\log(n/\gamma) \ge \gamma \ell'$, and this is the maximum number of distinct substrings of length $\ell'$ we can produce. We therefore build the trie $C$ for the value $\ell'$ such that the construction is stopped for the first time with $\ell'+1$. This value must be $\ell' \ge \log\log(n/\gamma)$, sufficiently large to ensure the time bounds of Section~\ref{sec:short patterns}, and sufficiently small so that the extra space is in $O(g)$. The only penalty is that we carry out $\ell'$ iterations in the construction of the hash table (the trie itself is built only after we find $\ell'$), which costs $O(g\ell'\,\!^2)$ time.
This is the same construction cost we had, but now $\ell'$ can be up to $\log\log n$; therefore the construction cost is $O(g(\log\log n)^2)$. The construction space stays in $O(g)$ by design. The total construction cost is then $O(n\log n)$ expected time and $O(n)$ space, essentially dominated by the cost to ensure a collision-free Karp--Rabin signature. \begin{theorem} \label{thm:main} Let $T[1\mathinner{.\,.} n]$ have an attractor of size $\gamma$. Then, there exists a data structure of size $g=O(\gamma\log(n/\gamma))$ that can find the $occ$ occurrences of any pattern $P[1\mathinner{.\,.} m]$ in $T$ in time $O(m+\log^\epsilon \gamma + occ\log^\epsilon g) \subseteq O(m+(occ+1)\log^\epsilon n)$ for any constant $\epsilon>0$. The structure is built in $O(n\log n)$ expected time and $O(n)$ space, without the need to know $\gamma$. \end{theorem} An index that is correct w.h.p.\ can be built in $O(n+g\sqrt{\log g}+g(\log\log n)^2) \subseteq O(n+g\sqrt{\log g})$ expected time. If we know $\gamma$, such an index can be built with $O(\log(n/\gamma))$ expected left-to-right passes on $T$ (to build the grammar) plus $O(\gamma\log(n/\gamma))$ main-memory space. Finally, note that if we want to report only $k < occ$ occurrences of $P$, their locating time no longer amortizes to $O(1)$ as in Section~\ref{sec:secondary}. Rather, extracting each occurrence requires climbing the grammar tree up to the root. In this case, the search time becomes $O(m+(k+1)\log n)$. \subsection{Optimal search time} \label{sec:optimal} We now explore various space/time tradeoffs for our index, culminating with a variant that achieves, for the first time, optimal search time within space bounded by an important family of repetitiveness measures. The tradeoffs are obtained by considering other data structures for the grid of Section~\ref{sec:primary} and for the perfect hash tables of Section~\ref{sec:short patterns}.
Table~\ref{tab:tradeoffs} summarizes the results in a slightly simplified form; the construction times stay as in \Cref{thm:main}. \begin{table}[t] \begin{center} \tbl{Space-time tradeoffs within attractor-bounded space; formulas are slightly simplified. \label{tab:tradeoffs}} {\begin{tabular}{l|c|c} Source & Space & Time \\ \hline Baseline \cite{NP18} & $O(\gamma\log(n/\gamma))$ & $O(m\log n + occ \log^\epsilon n)$ \\ \hline \Cref{thm:main} & $O(\gamma\log(n/\gamma))$ & $O(m + (occ+1) \log^\epsilon n)$ \\ Corollary~\ref{cor:tradeoff1} & $O(\gamma\log n)$ & $O(m + occ \log^\epsilon n)$ \\ Corollary~\ref{cor:tradeoff2} & $O(\gamma\log(n/\gamma)\log\log n)$ & $O(m + (occ+1) \log\log n)$ \\ Corollary~\ref{cor:tradeoff3} & $O(\gamma\log n\log\log n)$ & $O(m + occ \log\log n)$ \\ \Cref{thm:opt} & $O(\gamma\log(n/\gamma)\log^\epsilon n)$ & $O(m+occ)$ \\ \hline \end{tabular}} \end{center} \end{table} A first tradeoff is obtained by discarding the table $H$ of Section~\ref{sec:short patterns} and using only a compact trie $C'$, now to store the locus of a primary occurrence and the relevant split points of each substring of length up to $\ell = \log^\epsilon g \log\log g$. This adds $O(\gamma\ell)$ to the space, but it allows verifying that the short patterns actually occur in $T$ in time $O(m)$ without using the grid. As a result, the additive term $O(\log^\epsilon \gamma)$ disappears from the search time. As seen in Section~\ref{sec:constr}, the extra construction time for $C'$ is now $O(g\ell^2)$, plus $O(\gamma\ell^3)$ to compute the relevant split points. This is within the $O(g\ell^3)$ time bound obtained for \Cref{thm:main}. The construction space is $O(\gamma\ell)$, which we can assume to be $O(n)$ because it is included in the final index size; if this is larger than $n$ then the result holds trivially by using instead a suffix tree on $T$. \begin{corollary} \label{cor:tradeoff1} Let $T[1\mathinner{.\,.} n]$ have an attractor of size $\gamma$.
Then, there exists a data structure of size $g=O(\gamma(\log(n/\gamma)+\log^\epsilon(\gamma\log(n/\gamma))\log\log(\gamma\log(n/\gamma)))) \subseteq O(\gamma \log n)$ that can find the $occ$ occurrences of any pattern $P[1\mathinner{.\,.} m]$ in $T$ in time $O(m+occ\log^\epsilon g) \subseteq O(m+occ\log^\epsilon n)$ for any constant $\epsilon>0$. The structure is built in $O(n\log n)$ expected time and $O(n)$ space, without the need to know $\gamma$. \end{corollary} By using $O(g\log\log g)$ space for the grid, the range queries run in time $O(\log\log g)$ per query and per returned item \cite{DBLP:conf/compgeom/ChanLP11}. This reduces the query time to $O(m+\log m \log\log g + occ \log\log g)$, which can be further reduced with the same techniques of Section~\ref{sec:short patterns}: The additive term can be relevant only if $m = O(\ell)$ with $\ell = \log\log g \log\log\log g$. We then store in $H$ all the $\gamma \ell^2$ patterns of length up to $\ell$, with their relevant partitions, using $O(\gamma \ell^2 (\log\ell)^2) = O(\gamma (\log\log g)^2 (\log\log\log g)^4)$ bits, which is $O(\gamma)$ space. We may still need $O(\log\log g)$ time to determine that a short pattern does not occur in $T$. By storing the patterns of length $\ell' = \log\log\log(n/\gamma)$ in trie $C$, this time becomes $O(\log\log\gamma)$. The grid structure can be built in time $O(g\log g)$. The construction time for $H$ and $C$ is lower than in Section~\ref{sec:constr}, because $\ell$ and $\ell'$ are smaller here. \begin{corollary} \label{cor:tradeoff2} Let $T[1\mathinner{.\,.} n]$ have an attractor of size $\gamma$. Then, there exists a data structure of size $g=O(\gamma\log(n/\gamma)\log\log(\gamma\log(n/\gamma))) \subseteq O(\gamma \log (n/\gamma)\log\log n)$ that can find the $occ$ occurrences of any pattern $P[1\mathinner{.\,.} m]$ in $T$ in time $O(m+\log\log\gamma+occ\log\log g) \subseteq O(m+(occ+1)\log\log n)$.
The structure is built in $O(n\log n)$ expected time and $O(n)$ space, without the need to know $\gamma$. \end{corollary} By discarding $H$ and building $C'$ on the substrings of length $\ell=\log\log g \log\log\log g$, we increase the space by $O(\gamma\ell^2)$ and remove the additive term in the search time. The construction time for the grid is still $O(g\log g)$, but that of $C'$ is within the bounds of Corollary~\ref{cor:tradeoff1}, because $\ell$ is smaller here. \begin{corollary} \label{cor:tradeoff3} Let $T[1\mathinner{.\,.} n]$ have an attractor of size $\gamma$. Then, there exists a data structure of size $g=O(\gamma(\log(n/\gamma)\log\log(\gamma\log(n/\gamma))+ (\log\log(\gamma\log(n/\gamma))\log\log\log(\gamma\log(n/\gamma)))^2)) \subseteq O(\gamma \log n\log\log n)$ that can find the $occ$ occurrences of any pattern $P[1\mathinner{.\,.} m]$ in $T$ in time $O(m+occ\log\log g) \subseteq O(m+occ\log\log n)$. The structure is built in $O(n\log n)$ expected time and $O(n)$ space, without the need to know $\gamma$. \end{corollary} Finally, a larger geometric structure \cite{DBLP:conf/focs/AlstrupBR00} uses $O(g\log^\epsilon g)$ space, for any constant $\epsilon>0$, and reports in $O(\log\log g)$ time per query and $O(1)$ per result. This yields $O(m+\log m\log\log g+occ)$ search time. To remove the second term, we again index all the patterns of length $m\le\ell$, for $\ell = \log\log g \log\log\log g$, of which there are at most $\gamma \ell^2$. Just storing the relevant split points $q$ is not sufficient this time, however, because we cannot even afford the $O(\log\log g)$ time to query the nonempty areas. Still, note that the search time can be written as $O(m+\ell+occ)$. Thus, we only care about the short patterns that, in addition, occur less than $\ell$ times, since otherwise the third term, $O(occ)$, absorbs the second.
Storing all the occurrences of such patterns requires $O(\gamma\ell^2)$ space: An enriched version $C''$ of the compact trie $C$ records the number of occurrences in $T$ of each node. Only the leaves (i.e., the patterns of length exactly $\ell$) store their occurrences (if they are at most $\ell$). Since there are at most $\gamma \ell$ leaves, the total space to store those occurrences is $O(\gamma \ell^2)$, dominated by the grid size. Shorter patterns correspond to internal trie nodes, and for them we must traverse all the descendant leaves in order to collect their occurrences. To handle a pattern $P$ of length up to $\ell$, then, we traverse $C''$ and verify $P$ around its locus. If $P$ occurs in $T$, we see if the trie node indicates it occurs more than $\ell$ times. If it does, we use the normal search procedure using the geometric data structure and propagating the secondary occurrences. Otherwise, its (up to $\ell$) occurrences are obtained by traversing all the leaves descending from its trie node: if an internal node occurs less than $\ell$ times, its descendant leaves also occur less than $\ell$ times, so all the occurrences of the internal node are found in the descendant leaves. The search time is then always $O(m+occ)$. The expected construction time of the geometric structure \cite{DBLP:conf/focs/AlstrupBR00} is $O(g \log g)$, and its construction space is $O(g\log^\epsilon g)$. Note that if the construction space exceeds $O(n)$, then so does the size of our index. In this case, a suffix tree obtains linear construction time and space with the same search time. Thus, we can assume the construction space is $O(n)$. The trie $C''$ is not built in the same way $C$ is built in Section~\ref{sec:constr}, because we need to record the number of occurrences of each string of length up to $\ell$. We slide the window of length $\ell$ through the whole text $T$ instead of only around phrase boundaries. 
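The effect of this text-wide pass can be sketched as follows (a hypothetical simplification: we hash the substrings themselves, whereas the construction uses their Karp--Rabin signatures $\kappa'$; the actual bookkeeping is described next):

```python
from collections import defaultdict

def window_counts(T, ell):
    """Slide a window of length ell over T: count the occurrences of every
    distinct length-ell substring, and keep its explicit position list only
    while it holds at most ell positions (too-frequent substrings drop their
    list, mimicking the capped occurrence lists at the trie leaves)."""
    count = defaultdict(int)
    positions = {}
    for i in range(len(T) - ell + 1):
        s = T[i:i + ell]            # stand-in for the signature kappa'(s)
        count[s] += 1
        if count[s] <= ell:
            positions.setdefault(s, []).append(i)
        else:
            positions.pop(s, None)  # exceeded ell occurrences: delete the list
    return dict(count), positions
```

For instance, on `T = "aaaa"` with `ell = 2`, the substring `"aa"` occurs three times, so its position list is dropped while its counter is kept.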
We maintain the distinct signatures $\kappa'$ found in a regular hash table, with the counter of how many times they appear in $T$. When a new signature appears, its string is inserted in $C''$, a pointer from the hash table to the corresponding trie leaf is set, and the list of occurrences of the substring is initialized in the trie leaf, with its first position just found. Further occurrence positions of the string are collected at its trie leaf, until they exceed $\ell$, in which case they are deleted. Thus we spend $O(n)$ expected time in the hash table and collecting occurrences, plus $O(\gamma \ell^2)$ time inserting strings in $C''$. From the number of occurrences of each leaf we can finally propagate those counters upwards in the trie, in $O(\gamma\ell)$ additional time. \begin{theorem} \label{thm:opt} Let $T[1\mathinner{.\,.} n]$ have an attractor of size $\gamma$. Then, there exists a data structure of size $O(\gamma\log(n/\gamma)\log^\epsilon(\gamma\log(n/\gamma))) \subseteq O(\gamma\log(n/\gamma)\log^\epsilon n)$, for any constant $\epsilon>0$, that can find the $occ$ occurrences of any pattern $P[1\mathinner{.\,.} m]$ in $T$ in time $O(m+occ)$. The structure is built in $O(n\log n)$ expected time and $O(n)$ space, without the need to know $\gamma$. \end{theorem} \section{Counting Pattern Occurrences} \label{sec:counting} \citeN{Nav18} shows how an index like the one we describe in Section~\ref{sec:index} can be used for counting the number of occurrences of $P[1\mathinner{.\,.} m]$ in $T$.
First, he uses the result of \citeN{DBLP:journals/siamcomp/Chazelle88} that a $p \times p$ grid can be enhanced by associating elements of any algebraic semigroup to the points, so that later we can aggregate all the elements inside a rectangular area in time $O(\log^{2+\epsilon} p)$, for any constant $\epsilon>0$, with a structure using $O(p)$ space.\footnote{\citeN{Nav18} gives a simpler explicit construction for groups.} The structure is built in $O(p\log p)$ time and $O(p)$ space \cite{DBLP:journals/siamcomp/Chazelle88}. Then, \citeN{Nav18} shows that one can associate with a CFG the number of secondary occurrences triggered by each point in a grid analogous to that of Section~\ref{sec:primary}, so that their sums can be computed as described. We now improve upon the space and time using our RLCFG of Section~\ref{sec:index}. Three observations are in order (cf.~\cite{CNspire12,Nav18}): \begin{enumerate} \item The occurrences reported are all those derived from each point $(x,y)$ contained in the range $[x_1\mathinner{.\,.} x_2] \times [y_1\mathinner{.\,.} y_2]$ of each relevant partition $P[1\mathinner{.\,.} q] \cdot P[q+1\mathinner{.\,.} m]$. \item Even if the same point $(x,y)$ appears in distinct overlapping ranges $[x_1\mathinner{.\,.} x_2] \times [y_1\mathinner{.\,.} y_2]$, each time it corresponds to a distinct value of $q$, and thus to distinct final offsets in $T$. Therefore, all the occurrences reported are distinct. \item The number of occurrences reported by our procedure in Section~\ref{sec:secondary} depends only on the initial locus associated with the grid point $(x,y)$. This will change with run-length nodes and require special handling, as seen later. \end{enumerate} Therefore, we can associate with each point $(x,y)$ in the grid (and with the corresponding primary occurrence) the total number of occurrences triggered with the procedure of Section~\ref{sec:secondary}. 
Then, counting the number of occurrences of a partition $P = P[1\mathinner{.\,.} q] \cdot P[q+1\mathinner{.\,.} m]$ corresponds to summing up the number of occurrences of the points that lie in the appropriate range of the grid. As seen in Section~\ref{sec:pattern}, with our particular grammar there are only $O(\log m)$ partitions of $P$ that must be tried in order to recover all of its occurrences. Therefore, we use our structures of Sections~\ref{sec:primary} to \ref{sec:ztrie} to find the $O(\log m)$ relevant ranges $[x_1\mathinner{.\,.} x_2] \times [y_1\mathinner{.\,.} y_2]$, all in $O(m)$ time, and then we count the number of occurrences in each such range in time $O(\log^{2+\epsilon} p)\subseteq O(\log^{2+\epsilon}g)$. The total counting time is then $O(m+\log m \log^{2+\epsilon} g)$. When the second term dominates, that is, when $m \le \log m \log^{2+\epsilon} g$, it holds that $\log m \log^{2+\epsilon} g \in O(\log^{2+\epsilon} g \log\log g)$, which is $O(\log^{2+\epsilon} g)$ by infinitesimally adjusting $\epsilon$. Under the assumption that there are no run-length rules (we remove this assumption later), our counting time is then $O(m+\log^{2+\epsilon} g)$. This improves sharply upon the previous result \cite{Nav18} in space (because it builds the grammar on a Lempel--Ziv parse instead of on attractors) and in time (because it must consider all the $m-1$ partitions of $P$). To build the structure, we must count the number of secondary occurrences triggered from any locus $v$, and then associate it with every point $(x,y)$ having $v$ as its locus. More precisely, we will compute the number of times any node $u$ occurs in the parse tree of $T$. The process corresponds to accumulating occurrences over the DAG defined by the pointers $u.\mathit{anc}$ and $u.\mathit{next}$ of the grammar tree nodes $u$. Initially, let the counter be $c(u)=0$ for every grammar tree node $u$, except the root, where $c(root)=1$.
We now traverse all the nodes $u$ in some order, calling $\mathit{compute}(u)$ on each. Procedure $\mathit{compute}(u)$ proceeds as follows: If $c(u)>0$ then the counter is already computed, so it simply returns $c(u)$. Otherwise, it sets $c(u) = \mathit{compute}(u.\mathit{anc})+\mathit{compute}(u.\mathit{next})$, recursively computing the counters of the two nodes. Nodes $A\rightarrow A_1^s$ are special cases. If $u.\mathit{next}$ is of the form $A_1^{[s-1]}$, then the correct formula is $c(u) = s \cdot \mathit{compute}(u.\mathit{anc})+ \mathit{compute}(u.\mathit{next}.\mathit{next})$. On the other hand, we do nothing for $\mathit{compute}(u)$ if $u$ is of the form $A_1^{[s-1]}$. The total cost is the number of edges in the DAG, which is 2 per grammar tree node, $O(g)$. Finally, the counter of each point $(x,y)$ associated with locus node $v$ is the value $c(u)$, where $u$ is the parent of $v$. A special case arises, however, if $u$ corresponds to a run-length node $A \rightarrow A_1^s$, in which case the locus $v$ is $A_1$. As seen in Section~\ref{sec:secondary}, the number of times $u$ is reported is $s-\lceil (m-q)/|A_1| \rceil$, and therefore the correct counter to associate with $(x,y)$ is $(s-\lceil (m-q)/|A_1| \rceil) \cdot c(u)$. The problem is that such a formula depends on $m-q$, so each point $(x,y)$ could contribute differently for each alignment of the pattern. We then take a different approach for counting these occurrences. Associated with loci $A_1$ with parent $A \rightarrow A_1^s$, instead of $(x,y)$, we add to the grid the points $(x,y')=(\exp(A_1)^{rev},\exp(A_1))$ with weight $c(u)$ and $(x,y'')=(\exp(A_1)^{rev},\exp(A_1)^2)$ with weight $(s-2)c(u)$, extending the set $\mathcal{Y}$ so that it contains both $\exp(A_1)$ and $\exp(A_1)^2$. (Note that there could be various equal string pairs, which can be stored multiple times, or we can accumulate their counters.) We distinguish three cases. 
\begin{enumerate}[(i)] \item For the occurrences where $P[q+1\mathinner{.\,.} m]$ lies inside $\exp(A_1)$ (i.e., $m-q \le |A_1|$), the rule $A\to A_1^s$ is counted $c(u)+(s-2)c(u) = (s-1)c(u)$ times because both $(x,y')$ and $(x,y'')$ are in the range queried. \item For the occurrences where $P[q+1\mathinner{.\,.} m]$ exceeds the first $\exp(A_1)$ but does not span more than two (i.e., $|A_1| < m-q \le 2|A_1|$), the rule $A\to A_1^s$ is counted $(s-2)c(u)$ times because $(x,y'')$ is in the range queried but $(x,y')$ is not. \item For the occurrences where $P[q+1\mathinner{.\,.} m]$ spans more than two copies of $\exp(A_1)$, however, the rule $A\to A_1^s$ is not counted at all because neither $(x,y')$ nor $(x,y'')$ is in the range queried. \end{enumerate} The key to handle the third case is that, if $P[1\mathinner{.\,.} q]$ spans a suffix of $\exp(A_1)$ and $P[q+1\mathinner{.\,.} m]$ spans at least two consecutive copies of $\exp(A_1)$, then it is easy to see that $P$ is ``periodic'', $|A_1|$ being a ``period'' of $P$ \cite{Jewels}. \begin{definition} A string $P[1\mathinner{.\,.} m]$ has a {\em period} $p$ if $P$ consists of $\lfloor m/p \rfloor$ consecutive copies of $P[1\mathinner{.\,.} p]$ plus a (possibly empty) prefix of $P[1\mathinner{.\,.} p]$. Alternatively, $P[1\mathinner{.\,.} m-p]=P[p+1\mathinner{.\,.} m]$. The string $P$ is {\em periodic} if it has a period $p \le m/2$. \end{definition} We next show an important property relating periods and run-length nodes. \begin{lemma} \label{lem:period} Let there be a run-length rule $A \rightarrow A_1^s$ in our grammar. Then $|A_1|$ is the shortest period of $\exp(A)$. \end{lemma} \begin{proof} Consider an $A$-labeled node $v$ in the parse tree of $T$ and let $proj(v)=[i\mathinner{.\,.} j]$ so that $T[i\mathinner{.\,.} j]=\exp(A)$. Denote the shortest period of $\exp(A)$ by $p$ and note that $|A_1|$ is also a period of $\exp(A)=\exp(A_1)^s$. 
We conclude from the Periodicity Lemma~\cite{fine1965uniqueness} that $p = \gcd(p, |A_1|)$ and thus $d = |A_1|/p$ is an integer. For a proof by contradiction, suppose that $d>1$. Let $r$ denote the level of the run represented by $v$ (so that $A$ is a symbol in $\hat{T}_r$ and $A_1$ is a symbol in $T_r$). \begin{claim} For each level $r'\in [0\mathinner{.\,.} r]$, both $i+p-1$ and $j-p$ are level-$r'$ block boundaries. \end{claim} \begin{proof} We proceed by induction on $r'$. The base case for $r'=0$ holds trivially. Thus, consider a level $r'\in [1\mathinner{.\,.} r]$ and suppose that the claim holds for $r'-1$. By the inductive assumption, $T[i+p\mathinner{.\,.} j]=T[i\mathinner{.\,.} j-p]$ consist of full level-$(r'-1)$ blocks, so \Cref{lem:altr} yields $\hat{B}_{r'-1}(i+p,j)=\hat{B}_{r'-1}(i,j-p)$. Since $i+dp-1$ is a level-$r'$ block boundary, this set is non-empty and its minimum satisfies $\min \hat{B}_{r'-1}(i+p,j) < dp-p$. The final claim of \Cref{lem:altr} thus yields $B_{r'}(i+dp-p,j-p) = B_{r'}(i+dp,j)$. Consequently, since $i+dp-1$ is a level-$r'$ block boundary, $p-1\in B_{r'}(i+dp-p,j-p)=B_{r'}(i+dp,j)$, so $i+dp+p-1$ is also a level-$r'$ block boundary. Iterating this reasoning $d(s-1)-2$ more times, we conclude that $i+dp+2p-1, i+dp+3p-1,\ldots, j-p$ are all level-$r'$ block boundaries. Moreover, \Cref{lem:altr} applied to $T[i\mathinner{.\,.} j-dp]=T[i+dp\mathinner{.\,.} j]$, which consist of full level-$r'$ blocks, implies $p-1 \in B_{r'}(i,j-dp)=B_{r'}(i+dp,j)$, so $i+p-1$ is also a level-$r'$ block boundary. \end{proof} Note that $T[i\mathinner{.\,.} j]$ consists of $s$ full level-$r$ blocks of length $dp$ each. The claim instantiated to $r'=r$ contradicts this statement imposing blocks of length at most $p$ at the extremities. \qed \end{proof} \Cref{lem:period} implies that, in the remaining case to be handled, the length $|A_1|$ must be precisely the shortest period of $P$. 
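Before using this property, the definition of period admits a simple mechanical check (naive $O(m^2)$ reference code, for illustration only; the text computes shortest periods in $O(m)$ time with standard techniques):

```python
def has_period(P, p):
    """p is a period of P iff P[1..m-p] = P[p+1..m]
    (0-indexed: P[:m-p] == P[p:])."""
    return 0 < p <= len(P) and P[:len(P) - p] == P[p:]

def shortest_period(P):
    """Naive quadratic scan; returns the smallest period of P
    (every string has period m, so this always terminates)."""
    return next(p for p in range(1, len(P) + 1) if has_period(P, p))

def is_periodic(P):
    """P is periodic iff its shortest period is at most m/2."""
    return shortest_period(P) <= len(P) // 2
```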
\begin{lemma} Let $P$ be contained in $\exp(A)$ and contain two consecutive copies of $\exp(A_1)$, from rule $A \rightarrow A_1^s$. Then $|A_1|$ is the shortest period of $P$. \end{lemma} \begin{proof} Clearly $|A_1|$ is a period of $P$ because $P[1\mathinner{.\,.} m]$ is contained in a concatenation of strings $\exp(A_1)$; further, $|A_1|\le m/2$. Now assume $P$ has a shorter period, $p < |A_1|$. Since $|A_1|+p < m$, $P$ also has a period of length $p' = \gcd(|A_1|,p)$ \cite{fine1965uniqueness}. This period is smaller than $|A_1|$ and divides it. Since $P$ contains $\exp(A_1)$, this implies that $\exp(A_1)$, and thus $\exp(A)$, also have a period $p' < |A_1|$, contradicting Lemma~\ref{lem:period}. \end{proof} Therefore, all the run-length nonterminals $A \rightarrow A_1^s$, where $A_1$ is a locus of $P$ with offset $q$ and $m \ge 2|A_1|$, must satisfy $\exp(A_1)=P[q+1\mathinner{.\,.} q+p]$, where $p$ is the shortest period of $P$. The shortest period $p$ is easily computed in $O(m)$ time \cite[Sections~1.7 and 3.1]{Jewels}. It is therefore sufficient to compute the Karp--Rabin fingerprints $k=\kappa'(\exp(A_1))$ (which we easily retrieve from the data we store for Lemma~\ref{lem:kr}) for all the run-length rules $A \rightarrow A_1^s$, and store them in a perfect hash table with information on $A_1$. Let $s(A_1) = \{ s \ge 3 : A \rightarrow A_1^s\}$ be the different exponents associated with $A_1$. To each $s \in s(A_1)$, we associate two values \[ c(A_1,s) = \sum \{ c(A) : A \rightarrow A_1^{s'}, s'\geq s \} \quad \text{and} \quad c'(A_1,s) = \sum \{ s'\cdot c(A) : A \rightarrow A_1^{s'}, s'\geq s\}, \] where $c(A)$ refers to $c(u)$ for the (only) internal grammar tree node $u$ corresponding to nonterminal $A$. The total space to store the sets $s(A_1)$ and associated values is $O(g)$.
For each of the $O(\log m)$ relevant splits $P[1\mathinner{.\,.} q] \cdot P[q+1\mathinner{.\,.} m]$ obtained in Section~\ref{sec:pattern}, if $m-q > 2p$, then we look for $k=\kappa'(P[q+1\mathinner{.\,.} q+p])$ in the hash table. If we find it mapped to a non-terminal $A_1$, then we add $c'(A_1,s_{\min})-c(A_1,s_{\min})\lceil (m-q)/p \rceil$ to the result, where $s_{\min} = \min \{ s \in s(A_1) : (s-1)|A_1|\ge m-q\}$. This ensures that each rule $A\to A_1^{s}$ with $s\ge 3$ and $|A_1|(s-1)\ge m-q$ is counted $(s-\lceil (m-q)/p \rceil)\cdot c(A)$ times. We find $s_{\min}$ by exponential search on $s(A_1)$ in $O(\log m)$ time, which over all the splits adds up to $O(\log^2 m)$. Note that all the Karp--Rabin fingerprints for all the substrings of $P$ can be computed in $O(m)$ time (see Section~\ref{sec:ztrie}), and that we can easily rule out false positives: \Cref{lemma: z-fast} filters out any decomposition of $P$ for which $P[q+1\mathinner{.\,.} m]$ is not a prefix of any string $y\in \mathcal{Y}$. Since $\exp(A_1)^{s-1}\in \mathcal{Y}$ for every rule $A\to A_1^s$ and since $\mathcal{Y}$ consists of substrings of $T$, this guarantees that $\kappa'$ does not admit any collision between $P[q+1\mathinner{.\,.} q+p]$ and a substring of $T$. \begin{theorem} \label{thm:count} Let $T[1\mathinner{.\,.} n]$ have an attractor of size $\gamma$. Then, there exists a data structure of size $g = O(\gamma\log(n/\gamma))$ that can count the number of occurrences of any pattern $P[1\mathinner{.\,.} m]$ in $T$ in time $O(m+\log^{2+\epsilon} g) \subseteq O(m+\log^{2+\epsilon} n)$ for any constant $\epsilon>0$. The structure can be built in $O(n\log n)$ expected time and $O(n)$ space, without the need to know $\gamma$. \end{theorem} An index that is correct w.h.p.\ can be built in $O(n+g\log g)$ expected time (the structures for secondary occurrences and for short patterns, Sections~\ref{sec:secondary} and \ref{sec:short patterns}, are not needed).
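In isolation, the run-length counting step above can be sketched as follows (simplified, hypothetical code: a plain binary search stands in for the exponential search over $s(A_1)$, and suffix sums play the roles of the stored values $c(A_1,s)$ and $c'(A_1,s)$):

```python
import bisect

def runlength_count(s_to_c, A1_len, p, m, q):
    """s_to_c maps each exponent s >= 3 of a rule A -> A1^s to c(A).
    For a split P[1..q].P[q+1..m] whose shortest period p equals |A1|,
    count each rule with (s-1)*|A1| >= m-q exactly
    (s - ceil((m-q)/p)) * c(A) times, via the closed formula
    c'(A1,s_min) - c(A1,s_min) * ceil((m-q)/p)."""
    s_list = sorted(s_to_c)
    # suffix sums standing in for the precomputed c(A1,s) and c'(A1,s)
    c_suf, cp_suf, acc, accp = {}, {}, 0, 0
    for s in reversed(s_list):
        acc += s_to_c[s]
        accp += s * s_to_c[s]
        c_suf[s], cp_suf[s] = acc, accp
    need = m - q
    thr = (need + A1_len - 1) // A1_len + 1    # smallest s with (s-1)*|A1| >= m-q
    idx = bisect.bisect_left(s_list, thr)      # binary search in lieu of exponential
    if idx == len(s_list):
        return 0
    s_min = s_list[idx]
    return cp_suf[s_min] - c_suf[s_min] * ((need + p - 1) // p)
```

For example, with rules $A\to A_1^3$ (where $c(A)=2$) and $A'\to A_1^5$ (where $c(A')=1$), $|A_1|=p=2$, the result matches the direct per-rule sum for every split length.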
If we know $\gamma$, the index can be built in $O(\log(n/\gamma))$ expected left-to-right passes on $T$ plus $O(g)$ main memory space. \subsection{Optimal time} \citeN{DBLP:journals/siamcomp/Chazelle88} offers other tradeoffs for operating the elements in a range, all very similar and with the same construction cost: $O(\log^2 p \log\log p)$ time and $O(p\log\log p)$ space, or $O(\log^2 p)$ time and $O(p\log^\epsilon p)$ space. These yield, for our index, $O(m+(\log n\log\log n)^2)$ time and $O(g\log\log g)$ space, and $O(m+\log^2 n\log\log n)$ time and $O(g\log^\epsilon g)$ space. If we use $O(p\log p)$ space, however, the cost to compute the sum over a range decreases significantly, to $O(\log p)$ \cite{Wil85,DBLP:conf/focs/AlstrupBR00}. The expected construction cost becomes $O(p\log^2 p)$ \cite{DBLP:conf/focs/AlstrupBR00}. Therefore, using $O(g\log g) \subseteq O(\gamma\log(n/\gamma)\log n)$ space, we can count in time $O(m+\log m\log g) \subseteq O(m+\log g\log\log g) \subseteq O(m+\log n\log\log n)$, which is yet another tradeoff. More interesting is that we can reduce this latter time to the optimal $O(m)$. We index in a compact trie like $C''$ of Section~\ref{sec:optimal} all the text substrings of length up to $\ell=2\log n \log(n/\gamma)$, directly storing their number of occurrences (but not their occurrence lists as in $C''$). Since there are $\gamma \ell$ distinct substrings of length $\ell$, this requires $O(\gamma \log n \log(n/\gamma))$ space. Consider our counting time $O(m+\log m\log n)$. If $\log(n/\gamma) \le \log\log n$, then $\gamma \ge n/\log n$, and thus a suffix tree using space $O(n) = O(\gamma\log n)$ can count in optimal time $O(m)$. Thus, assume $\log(n/\gamma) > \log\log n$. The counting time can exceed $O(m)$ only if $m \le \log m \log n$. In this case, since $m \le \log m \log n \le \log^2 n$, we have $m \le 2\log n \log\log n \le 2\log n \log(n/\gamma) = \ell$.
All the queries for patterns of those lengths are directly answered using our variant of $C''$, in time $O(m)$, and thus our counting time is always $O(m)$. We can still apply this idea if we do not know $\gamma$. Instead, we compute $\delta$ (recall Section~\ref{sec:delta}) and use $\ell = 2\log n\log(n/\delta)$. Since there are $T(\ell) \le \delta \ell$ distinct substrings of length $\ell$ in $T$, the space for $C''$ is $O(\delta \ell) = O(\delta \log n \log(n/\delta)) \subseteq O(\gamma \log n \log(n/\gamma))$, the latter by Lemma~\ref{lem:delta}. The reasoning of the previous paragraph then applies verbatim if we replace $\gamma$ by $\delta$. The total space is then $O(g\log g + \gamma \log n \log(n/\gamma)) = O(\gamma \log n \log(n/\gamma))$. The construction cost of $C''$ is $O(n+\gamma\log^2 n \log^2(n/\gamma))$ time and $O(\gamma\log n\log(n/\gamma))$ space.\footnote{If we use $\ell=2\log n\log(n/\delta)$, then $C''$ is built in $O(\delta\log^2 n \log^2(n/\delta)) \subseteq O(\gamma\log^2 n \log^2(n/\gamma))$ time and $O(\delta\log n \log(n/\delta)) \subseteq O(\gamma\log n \log(n/\gamma))$ space, because the costs increase with $\delta$.} Alternatively, we can obtain it by pruning the suffix tree of $T$ in time and space $O(n)$. The cost to build the grid is $O(g\log^2 g) \subseteq O(g\log^2 n)$. Note that, if $\gamma\log(n/\gamma) \log n > n$, we trivially obtain the result with a suffix tree; therefore the construction time of the grid is in $O(n\log n)$. \begin{theorem} \label{thm:count2} Let $T[1\mathinner{.\,.} n]$ have an attractor of size $\gamma$. Then, there exists a data structure of size $O(\gamma\log(n/\gamma)\log n)$ that can count the number of occurrences of any pattern $P[1\mathinner{.\,.} m]$ in $T$ in time $O(m)$. The structure can be built in $O(n\log n)$ expected time and $O(n)$ space, without the need to know $\gamma$.
\end{theorem} If we know $\gamma$, then an index that is correct w.h.p.\ can be built in $O(g \log n)$ space apart from the passes on $T$, but we must build $C''$ without using a suffix tree, in additional time $O(\gamma \log^2 n \log^2(n/\gamma))$. Table~\ref{tab:count} summarizes the results. \begin{table}[t] \begin{center} \tbl{Space-time tradeoffs for counting; formulas are slightly simplified. \label{tab:count}} {\begin{tabular}{l|c|c} Source & Space & Time \\ \hline Baseline \cite{Nav18} & $O(z\log(n/z))$ & $O(m\log^{2+\epsilon} n)$ \\ \hline Theorem~\ref{thm:count} & $O(\gamma\log(n/\gamma))$ & $O(m+\log^{2+\epsilon} n)$ \\ Theorem~\ref{thm:count2} & $O(\gamma\log(n/\gamma)\log n)$ & $O(m)$ \\ \hline \end{tabular}} \end{center} \end{table} \section{Conclusions} The size $\gamma$ of the smallest string attractor of a text $T[1..n]$ is a recent measure of compressibility \cite{KP18} that is particularly well-suited to express the amount of information in repetitive text collections. It asymptotically lower-bounds many other popular dictionary-based compression measures like the size $z$ of the Lempel--Ziv parse or the size $g$ of the smallest context-free grammar generating (only) $T$, among many others. It is not known whether one can always represent $T$ in compressed form in less than $\Theta(\gamma\log(n/\gamma))$ space, but within this space it is possible to offer direct access and reasonably efficient searches on $T$ \cite{KP18,NP18}. In this article we have shown that, within $O(\gamma\log(n/\gamma))$ space, one can offer much faster searches, in time competitive with, and in most cases better than, the best existing results built on other dictionary-based compression measures, all of which use $\Omega(z\log(n/z))$ space. By building on the measure $\gamma$, our results immediately apply to any index that builds on other dictionary measures like $z$ and $g$.
Our results are even competitive with self-indexes based on statistical compression, which are much more mature: we can locate the $occ$ occurrences in $T$ of a pattern $P[1..m]$ in $O(m+(occ+1)\log^\epsilon n)$ time, and count them in $O(m+\log^{2+\epsilon} n)$ time, whereas the fastest statistically-compressed indexes obtain $O(m+occ \log^\epsilon n)$ time to locate and $O(m)$ time to count, in space proportional to the statistical entropy of $T$ \cite{Sad03,BN13}. Further, we show that our results can be obtained without even knowing an attractor or its minimum size $\gamma$. Rather, we can compute a lower bound $\delta \le \gamma$ in linear time and use it to achieve $O(\gamma\log(n/\gamma))$ space without knowing $\gamma$. This is relevant because computing $\gamma$ is NP-hard \cite{KP18}. Previous work \cite{NP18} assumed that, although they obtained indexes bounded in terms of $\gamma$, one would compute some upper bound on it, like $z$, to apply it in practice. With our result, we obtain results bounded in terms of $\gamma$ without the need to find it. Finally, we also obtain for the first time optimal search time using any index bounded by a dictionary-based compression measure. Within space $O(\gamma\log(n/\gamma)\log^\epsilon n)$, for any constant $\epsilon>0$, we can locate the occurrences in time $O(m+occ)$, and within $O(\gamma\log(n/\gamma)\log n)$ space we can count them in time $O(m)$. This is an important landmark, showing that it is possible to obtain the same optimal time reached by suffix trees in $O(n)$ space, now in space bounded in terms of a very competitive measure of repetitiveness. Such optimal time had also been obtained within space bounded by other measures that adapt to repetitiveness \cite{GNP18,BC17}, but these are weaker than $\gamma$ both in theory and in practice. Further, no statistically-compressed self-index using $o(n)$ space has obtained such optimal time.
As a byproduct, our developments yield a number of new or improved results on accessing and indexing on RLCFGs and CFGs; these are collected in Appendix~\ref{sec:rlcfgs}. \paragraph{Future work} There are still several interesting challenges ahead: \begin{itemize} \item While one can compress any text $T$ to $O(z)$ or $O(g)$ space (and even to smaller measures like $O(b)$ \cite{SS82}), it is not known whether one can compress it to $o(\gamma\log(n/\gamma))$ space. This is important to understand the nature of the concept of attractor and of measure $\gamma$. \item While one can support direct access and searches on $T$ in space $O(g)$, it is not known whether one can support those in $o(z\log(n/z))$ or $o(\gamma\log(n/\gamma))$ space. Again, determining if this is a lower bound would yield a separation between $\gamma$, $z$, and $g$ in terms of indexability. \item If we are given the size $\gamma$ of some attractor, we can build our indexes in a streaming-like mode, with $O(\log(n/\gamma))$ expected passes on $T$ plus main-memory space bounded in terms of $\gamma$, with high probability. This is relevant in practice when indexing huge text collections. It would be important to do the same when no bound on $\gamma$ is known. Right now, if we do not know $\gamma$, we need $O(n)$ extra space for a suffix tree that computes the measure $\delta \le \gamma$. \item It is not clear if we can reach optimal search time in the ``minimum'' space $O(\gamma\log(n/\gamma))$, or what is the best time we can obtain in this case. \item The measure $\delta$ is interesting on its own, as it lower-bounds $\gamma$. It is interesting to find more precise bounds in terms of $\gamma$, and whether we can compress $T$, and even offer direct access and indexed searches on it, within space $O(\delta\log(n/\delta))$. 
\item The fact that only $O(\log m)$ partitions of $P$ are needed to spot all of its occurrences, which outperforms previous results \cite{NIIBT15,DBLP:conf/soda/GawrychowskiKKL18}, was fundamental to obtain our bounds, and we applied it to counting in order to obtain optimal times as well. It is likely that this result is of even more general interest and can be used in other problems related to dictionary-compressed indexing and beyond. \item The result we obtain on counting pattern occurrences in $O(\gamma\log(n/\gamma))$ space is generalized to CFGs in Appendix~\ref{sec:rlcfgs}, but we could not generalize our result on specific RLCFGs to arbitrary ones. It is open whether this is possible or not. \end{itemize} \appendix \section{Some Results on Run-Length Context-Free Grammars} \label{sec:rlcfgs} Throughout the article we have obtained a number of results for the specific RLCFG we build. Several of those can be generalized to arbitrary RLCFGs, leading to the same state of the art that CFGs now enjoy. We believe it is interesting to explicitly state those new results in general form: not only are RLCFGs always smaller than CFGs (and the difference can be asymptotically relevant, as in the text $T = a^n$), but also our results in this article require space $O(\gamma\log(n/\gamma))$, whereas there always exists a RLCFG of size $g_{rl} = O(\gamma\log(n/\gamma))$. Indexes of size $O(g_{rl})$ thus have the potential to be smaller than those built on attractors (e.g., $T=a^n$ is generated by a RLCFG of size $O(1)$, whereas $\gamma\log(n/\gamma) = O(\log n)$). \subsection{Extracting substrings} \label{sec:rlcfg-extract} The following result exists on CFGs \cite{BLRSRW15}. They present their result on straight-line programs (SLPs, i.e., CFGs where right-hand sides are two nonterminals or one terminal symbol).
While any CFG of size $g$ can be converted into an SLP of size $O(g)$, we start by describing their structure generalized to arbitrary CFGs, which may be interesting when the grammar cannot be modified for some reason. We then show how to handle run-length rules $A \rightarrow A_1^s$ in order to generalize the result to RLCFGs. \begin{theorem} \label{thm:rlcfg-extract} Let a RLCFG of size $g_{rl}$ generate (only) $T[1\mathinner{.\,.} n]$. Then there exists a data structure of size $O(g_{rl})$ that extracts any substring $T[p\mathinner{.\,.} p+\ell-1]$ in time $O(\ell+\log n)$. \end{theorem} Consider the parse tree $\mathcal T$ of $T[1\mathinner{.\,.} n]$. A {\em heavy path} starting at a node $v \in \mathcal T$ with children $v_1,\ldots,v_s$ chooses the child $v_i$ that maximizes $|v_i|$, and continues by $v_i$ in the same way, up to reaching a leaf. We say that $v_i$ is the {\em heavy child} of $v$ and define $h(v)=v_i$. The edge connecting $v$ with its heavy child $v_i$ is said to be {\em heavy}; those connecting $v$ with its other children are {\em light}. Note that, if $v_j \neq h(v)$, then $|v_j| \le |v|/2$; otherwise $v_j$ would be the heavy child of $v$. Then, every time we descend by a light edge, the length of the node at least halves, and as a consequence no path from the root to a leaf may include more than $\log n$ light edges. A decomposition into heavy paths consists of the heavy path starting at the root of $\mathcal T$ and, recursively, all those starting at the children by light edges. \subsubsection{Accessing $T[p]$} For every internal node $v$ with children $v_1,\ldots,v_s$ we define the starting positions of its children as $p_1(v)=1$, $p_i(v)=p_{i-1}(v)+|v_{i-1}|$, for $2 \le i \le s$, and $p_{s+1}(v)=|v|+1$. We then store the set $C(v) = \{ p_1(v), p_2(v), \ldots, p_{s+1}(v) \}$. Let us define $c(v)=p_i(v)$, where $v_i=h(v)$, as the starting position of the heavy child of $v$.
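The heavy-path machinery just described is easy to exercise on a toy parse tree. The following Python sketch is ours, not code from \cite{BLRSRW15}; the nested-tuple tree representation is an arbitrary choice. It selects heavy children and checks the claimed bound of at most $\log_2 n$ light edges on any root-to-leaf path.

```python
import math

# Illustrative sketch: a parse-tree node is a tuple (symbol, children);
# leaves are (symbol, []). |v| is the length of the node's expansion,
# i.e., the number of leaves below it.
def size(v):
    _, children = v
    return 1 if not children else sum(size(c) for c in children)

def heavy_child(v):
    """Index of the child maximizing |v_i| (the heavy child h(v))."""
    _, children = v
    return max(range(len(children)), key=lambda i: size(children[i]))

def max_light_edges(v):
    """Maximum number of light edges on any path from v down to a leaf."""
    _, children = v
    if not children:
        return 0
    h = heavy_child(v)
    return max(max_light_edges(c) + (0 if i == h else 1)
               for i, c in enumerate(children))

# Parse tree of T = "abab" under the (hypothetical) rules S -> XX, X -> ab:
X = ('X', [('a', []), ('b', [])])
S = ('S', [X, X])
assert max_light_edges(S) <= math.log2(size(S))   # at most log2(n) light edges
```

Since a light child has at most half the length of its parent, the bound holds even when ties are broken arbitrarily, as in the example above.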
Then, if $v$ roots a heavy path $v=v^0, v^1, \ldots, v^k$, where $v^j=h(v^{j-1})$ for $1 \le j \le k$, and $v^k$ is a leaf, we define the starting positions in the heavy path as $s_1(v)=c(v)$ and $s_j(v)=s_{j-1}(v)-1+c(v^{j-1})$ for $2 \le j \le k$, and the ending positions as $e_j(v)=s_j(v)+|v^j|$ for $1 \le j \le k$. We then associate with $v$ the increasing set $P(v) = \{ s_1(v), s_2(v), \ldots, s_k(v), e_k(v), \ldots, e_2(v), e_1(v) \}$; note $e_k(v)=s_k(v)+1$. To find $T[p]$, we start at the root $v$ of $\mathcal T$ (so $1 \le p \le |v|$) with children $v_1,\ldots,v_s$. We make a predecessor search on $C(v)$ to determine that $p_i(v) \le p < p_{i+1}(v)$. If $v_i \neq h(v)$, we traverse the light edge to $v_i$ and continue the search from $v_i$ with $p \leftarrow p - p_i(v) + 1$. Otherwise, since $v_i=h(v)$, it holds that $p \ge p_i(v) = c(v) = s_1(v)$ and $p < p_{i+1}(v) = c(v)+|h(v)| = e_1(v)$. We then jump to the proper node in the heavy path that starts in $v$ by making a predecessor search for $p$ in $P(v)$. If we determine that $s_j(v) \le p < s_{j+1}(v)$ or that $e_{j+1}(v) \le p < e_j(v)$, we continue the search from $v^j$ and $p \leftarrow p-s_j(v)+1$. Otherwise, $p=s_k(v)$ and the answer is the terminal symbol associated with the leaf $v^k$. Note that, when we continue from $v^j$, this is not the head of a heavy path, but after searching $C(v^j)$ we are guaranteed to continue by a light edge. In each step, then, we perform two predecessor searches and traverse a light edge. \citeN{BLRSRW15} describe a predecessor data structure that, when finding the predecessor of $x$ in a universe of size $u$, takes time $O(\log(u/(x^+-x^-)))$, where $x^-$ and $x^+$ are the predecessor and successor of $x$, respectively. Thus, when finding $v_i$ in $C(v)$, this structure takes time $O(\log(|v|/|v_i|))$. If $v_i$ is a light child, we continue by $v_i$, so the sum over all the light edges traversed telescopes to $O(\log |v|)$.
When we descend to the heavy child, instead, we also find the node $v^j$ in $P(v)$, which costs $O(\log(|v|/(s_{j+1}(v)-s_j(v)+1))) = O(\log(|v|/c(v^j)))$ if $s_j(v) \le p < s_{j+1}(v)$, or $O(\log(|v|/(e_j(v)-e_{j+1}(v)+1))) = O(\log(|v|/(|v^j|-(c(v^j)+ |h(v^j)|))))$ if $e_{j+1}(v) \le p < e_j(v)$, or $O(\log |v|)$ if $p=s_k(v)$ (but this happens only once along the search). In the first two cases, we descend to $v^j$, which always starts descending by a light edge to some $v^j_i$ at cost $O(\log(|v^j|/|v^j_i|))$. Since $|v^j_i| \le c(v^j)$ (if $s_j(v) \le p < s_{j+1}(v)$) or $|v^j_i| \le |v^j|-(c(v^j)+ |h(v^j)|)$ (if $e_{j+1}(v) \le p < e_j(v)$), we can upper bound the cost to search $P(v)$ by $O(\log(|v|/|v^j_i|))$, and the cost to search $C(v^j)$ by $O(\log(|v^j|/|v^j_i|)) \subseteq O(\log(|v|/|v^j_i|))$ too, and then we continue the search from $v^j_i$. Therefore the cost also telescopes to $O(\log |v|)$ when we search a heavy path. Overall, the cost from the root of the parse tree is $O(\log n)$. The remaining problem is that the structure is of size $O(|\mathcal T|)=O(n)$, but it can be made $O(g)$ as follows. The subtrees of $\mathcal T$ rooted by all the nodes $v$ labeled with the same nonterminal $A$ are identical, so in all of them the node $h(v)$ has the same label, say the terminal or nonterminal $A_i$. \citeN{BLRSRW15} define a forest $\mathcal F$ with exactly one node $v(X) \in \mathcal F$ for each terminal or nonterminal $X$. If $v \in \mathcal T$ is labeled $A$ and $h(v) \in \mathcal T$ is labeled $A_i$, then $v(A_i)$ is the parent of $v(A)$ in $\mathcal F$. The nodes $v(a)$ for terminals $a$ are roots in $\mathcal F$. A heavy path from $v \in \mathcal T$, with $v$ labeled $A$, then corresponds to an upward path from $v(A) \in \mathcal F$. The sets $C(v)$ also depend only on the label $A$ of $v \in \mathcal T$, so we associate them with the corresponding nonterminal $A$.
The sizes of all sets $C(A)$ add up to the grammar size, because $C(A)$ has $s+1$ elements if the rule that defines $A$ is of the form $A \rightarrow A_1 \cdots A_s$.\footnote{To have the grammar size count only right-hand sides, rules $A \rightarrow \varepsilon$ must be removed or counted as size 1.} The sets $P(v)$ also depend only on the label $A$ of $v \in \mathcal T$, but they are not stored completely in $A$. Instead, each node $v(A) \in \mathcal F$, corresponding to the nodes $v\in \mathcal T$ labeled $A$, and with parent $v(A_i) \in \mathcal F$, stores the values $s(v(A))=s(v(A_i))+c(v)-1$ and $e(v(A))=e(v(A_i))+|v|-c(v)-|h(v)|+1$. For the roots $v(a) \in \mathcal F$, we set $s(v(a)) = e(v(a)) = 0$. They then build two data structures for predecessor queries on tree paths, one on the $s(\cdot)$ and one on the $e(\cdot)$ values, which obtain the same complexities as on arrays. In order to find a position $p$ from $v(A)$, we also store the position $p(A)$ in $\exp(A)$ of the root in $\mathcal F$ from where $v(A)$ descends, as well as the character $\exp(A)[p(A)]$. If $p=p(A)$, we just return that symbol and finish. Otherwise, if $p<p(A)$, we search for $p(A)-p$ in the fields $s(\cdot)$ from $v(A)$ to the root, finding $s(v(B)) \ge p(A)-p > s(v(B_i))$, with $v(B_i)$ the parent of $v(B)$ in $\mathcal{F}$. Otherwise, $p>p(A)$ and we search for $p-p(A)$ in the fields $e(\cdot)$ from $v(A)$ to the root, finding $e(v(B)) \ge p-p(A) > e(v(B_i))$, with $v(B_i)$ the parent of $v(B)$ in $\mathcal F$. In both cases, we must exit the heavy path from the node $v(B)$, adjusting $p \leftarrow p-s(v(A))+s(v(B))$. \subsubsection{Extracting $T[p\mathinner{.\,.} q]$} To extract $T[p\mathinner{.\,.} q]$ in time $O(q-p+\log n)$, we store additional information as follows. In each heavy path $v^0,\ldots,v^k$, each node $v^j$ stores a pointer $r(v^j) = h(v^t)$, where $j < t \le k$ is the smallest value for which $h(v^t)$ is not the rightmost child of $v^t$.
Similarly, $l(v^j) = h(v^t)$ for the smallest $j < t \le k$ for which $h(v^t)$ is not the leftmost child of $v^t$. At query time, we apply the procedures to retrieve $T[p]$ and $T[q]$ simultaneously until they split at a node $v^*$, where $T[p]$ descends from the child $v^*_i$ and $T[q]$ from the child $v^*_j$. Then the symbols $T[p\mathinner{.\,.} q]$ are obtained by traversing, in left-to-right order, $(1)$ the children $v_{i+1},\ldots$ of every light edge leading to $v_i$ on the way to $T[p]$; $(2)$ every sibling to the right of $r(v)$ for the nodes $v \in \{ v_1, r(v_1), r(r(v_1)), \ldots \}$ for every $v_1$ rooting a heavy path on the way to $T[p]$; $(3)$ the children $\{ v^*_{i+1},\ldots,v^*_{j-1}\}$ of $v^*$; $(4)$ the children $v_1,\ldots,v_{i-1}$ of every light edge leading to $v_i$ on the way to $T[q]$; $(5)$ every sibling to the left of $l(v)$ for the nodes $v \in \{ v_1, l(v_1), l(l(v_1)), \ldots \}$ for every $v_1$ rooting a heavy path on the way to $T[q]$. For all those nodes, we traverse their subtrees completely to obtain chunks of $T[p\mathinner{.\,.} q]$ in optimal time (unless there are unary paths in the grammar, which can be removed or skipped with the information on $r(\cdot)$ or $l(\cdot)$). The left-to-right order between nodes in $(1)$ and $(2)$, and in $(4)$ and $(5)$, is obtained as we descend to $T[p]$ or $T[q]$. Finally, $v^*$ is easily determined if it is the target of a light edge. Otherwise, if we exit a heavy path by distinct nodes $v_p$ and $v_q$, then $v^*$ is the higher of the two. \subsubsection{Extending to RLCFGs} The idea to include rules $A \rightarrow A_1^s$ is to handle them exactly as if they were $A \rightarrow A_1 \cdots A_1$, but using $O(1)$ space instead of $O(s)$. When $v$ is labeled $A$ and $A$ is defined as $A \rightarrow A_1^s$, we would have a tie in determining the heavy child $h(v)$. We then act as if we chose the first copy of $A_1$, $h(v)=v_1$; in particular $v(A_1)$ is the parent of $v(A)$ in $\mathcal F$.
If we have to descend by another child of $v$ to reach position $p$ inside $v$, we choose $v_i$ with $i = \lceil p/|v_1| \rceil$ and set $p \leftarrow p - (i-1)\cdot|v_1|$, so we do not need to store the set $C(A)$ (which would exceed our space budget). No pointer $l(v^j)$ will point to $h(v)$, but pointers $r(v^j)$ will. The pointers $r(v^j) = h(v^t)$ are actually stored as a pair $(v^t,i)$ where $v^t_i = h(v^t)$; this allows accessing preceding and following siblings easily. With this format, we can also refer to the $i$th child of a run-length node and handle it appropriately. \subsection{Extracting prefixes and suffixes} The following result also exists on CFGs \cite{DBLP:conf/dcc/GasieniecKPS05}, who use leftmost or rightmost paths instead of heavy paths. In our Lemma~\ref{lem:extract from rlcfg} we have extended it to arbitrary RLCFGs as well, without setting any restriction on the grammar. \begin{theorem} \label{thm:rlcfg-prefsuf} Let a RLCFG of size $g_{rl}$ generate (only) $T[1\mathinner{.\,.} n]$. Then there exists a data structure of size $O(g_{rl})$ that extracts any prefix or suffix of the expansion $\exp(A)$ of any nonterminal $A$ in real time. \end{theorem} \subsection{Computing fingerprints} The following result, already existing on CFGs \cite{BGCSVV17}, can also be extended to arbitrary RLCFGs. Note that it improves our Lemma~\ref{lem:kr} to $O(\log\ell)$ time, though we opted for a simpler variant in the body of the article. \begin{theorem} \label{thm:rlcfg-kr} Let a RLCFG of size $g_{rl}$ generate (only) $T[1\mathinner{.\,.} n]$. Then there exists a data structure of size $O(g_{rl})$ that computes the Karp-Rabin signature of any substring $T[p\mathinner{.\,.} q]$ in time $O(\log n)$. \end{theorem} Recall that, given the signatures $\kappa(S_1)$ and $\kappa(S_2)$, one can compute the signature of the concatenation, $\kappa(S_1 \cdot S_2) = (\kappa(S_1) + c^{|S_1|} \cdot \kappa(S_2)) \bmod \mu$.
One can also compute the signature of $S_2$ given those of $S_1$ and $S_1 \cdot S_2$, $\kappa(S_2) = ((\kappa(S_1 \cdot S_2) - \kappa(S_1)) \cdot c^{-|S_1|}) \bmod \mu$, and the signature of $S_1$ given those of $S_2$ and $S_1 \cdot S_2$, $\kappa(S_1) = (\kappa(S_1 \cdot S_2) - \kappa(S_2) \cdot c^{|S_1|}) \bmod \mu$. To have the terms $c^{\pm |S_1|}$ handy, we redefine signatures $\kappa(S)$ as triples $(\kappa(S),c^{|S|} \bmod \mu,c^{-|S|} \bmod \mu)$, which are easily maintained across the described operations. We now show how to compute a fingerprint $\kappa(T[p\mathinner{.\,.} q])$ in $O(\log n)$ time on an arbitrary RLCFG. We present the current result \cite{BGCSVV17}, extended to general CFGs, and then include run-length rules. We follow the idea of our Lemma~\ref{lem:kr}, but combine it with heavy paths. Since we can obtain $\kappa(T[p\mathinner{.\,.} q])$ from $\kappa(T[1\mathinner{.\,.} q])$ and $\kappa(T[1\mathinner{.\,.} p-1])$, we only consider computing fingerprints of text prefixes. We associate with each nonterminal $A \rightarrow A_1 \cdots A_s$ the $s$ signatures $K_i(A) = \kappa(\exp(A_1)\cdots \exp(A_{i-1}))$, for $1 \le i \le s$. We also associate signatures with the nodes $v(A)$ in $\mathcal F$, $K(v(A)) = \kappa(\exp(A)[1\mathinner{.\,.} p(A)-1])$. Those values fit in $O(g)$ space. To compute $\kappa(T[1\mathinner{.\,.} p])$ we start with $\kappa = 0$ and follow the same process as for accessing $T[p]$ in Section~\ref{sec:rlcfg-extract}. On our way, every time we descend by a light edge from $v$ to $v_i$, where $v$ is labeled $A$, we update $\kappa \leftarrow (\kappa + K_i(A) \cdot c^{|A_1|+\cdots+|A_{i-1}|}) \bmod \mu$. Note that the power of $c$ is implicitly stored together with the signature $K_i(A)$ itself.
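The concatenation and splitting identities recalled above, together with the triple representation of signatures, can be checked directly. The following Python sketch is illustrative only; the base $c=256$ and the prime $\mu=10^9+7$ are arbitrary choices, not values fixed by the paper.

```python
# Illustrative check of the Karp-Rabin identities, with kappa(S) defined
# as sum_i S[i] * C^i mod MU, so that kappa(S1.S2) = kappa(S1) + C^|S1| kappa(S2).
MU = 1_000_000_007          # an arbitrary prime modulus
C = 256                     # an arbitrary base
C_INV = pow(C, MU - 2, MU)  # modular inverse of C (MU is prime)

def kappa(S):
    """Signature triple (kappa(S), C^|S| mod MU, C^-|S| mod MU)."""
    k = sum(b * pow(C, i, MU) for i, b in enumerate(S)) % MU
    return (k, pow(C, len(S), MU), pow(C_INV, len(S), MU))

def concat(t1, t2):
    """Triple of S1.S2 from the triples of S1 and S2."""
    k1, p1, q1 = t1
    k2, p2, q2 = t2
    return ((k1 + p1 * k2) % MU, p1 * p2 % MU, q1 * q2 % MU)

S1, S2 = b"abra", b"cadabra"
t1, t2, t12 = kappa(S1), kappa(S2), kappa(S1 + S2)
assert concat(t1, t2) == t12
# Recover kappa(S2) from kappa(S1) and kappa(S1.S2):
assert (t12[0] - t1[0]) * t1[2] % MU == t2[0]
# Recover kappa(S1) from kappa(S2) and kappa(S1.S2):
assert (t12[0] - t2[0] * t1[1]) % MU == t1[0]
```

Note how the second and third components of the triples supply exactly the terms $c^{\pm|S_1|}$ needed by the splitting operations.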
Instead, when we descend from $v(A)$ to $v(B)$ because $s(v(B)) \ge p(A)-p > s(v(B_i))$ or $e(v(B)) \ge p-p(A) > e(v(B_i))$, we first compute the signature $\kappa'$ of the prefix of $\exp(A)$ that precedes $\exp(B)$, which is of length $\ell = s(v(A))-s(v(B))$, and then update $\kappa \leftarrow \kappa\cdot c^\ell + \kappa'$ so as to concatenate that prefix (again, $c^\ell$ is computed together with $\kappa'$). We compute $\kappa'$ from $K(v(B)) = \kappa(\exp(B)[1\mathinner{.\,.} p(B)-1])$ and $K(v(A)) = \kappa(\exp(A)[1\mathinner{.\,.} p(A)-1])$. Because $\exp(A)[p(A)]$ is the same symbol as $\exp(B)[p(B)]$, $\exp(B)[1\mathinner{.\,.} p(B)-1]$ is a suffix of $\exp(A)[1\mathinner{.\,.} p(A)-1]$. We then use the method to extract $\kappa(S_1)$ from $\kappa(S_1 \cdot S_2)$ and $\kappa(S_2)$. When we arrive at $T[p]$, we include that symbol and have computed $\kappa = \kappa(T[1\mathinner{.\,.} p])$. The time is the same $O(\log n)$ required to access $T[p]$. \subsubsection{Handling run-length rules} The proof of Lemma~\ref{lem:kr} already shows how to handle run-length rules $A \rightarrow A_1^s$: we again treat them as $A \rightarrow A_1 \cdots A_1$. The only complication is that now we cannot afford to store the values $K_i(A)$ used to descend by light edges, but we can compute them as $K_i(A) = \kappa(\exp(A_1)^{i-1}) = \left(\kappa(\exp(A_1))\cdot \frac{c^{|A_1|\cdot (i-1)}-1}{c^{|A_1|}-1}\right) \bmod \mu$: $c^{|A_1|} \!\!\mod \mu$ and $(c^{|A_1|}-1)^{-1} \bmod \mu$ can be stored with $A_1$, and the exponentiation can be computed in time $O(\log i) \subseteq O(\log s)$. Note that this is precisely the $O(\log(|v|/|v_i|))$ time we are allowed to spend when moving from node $v$ to its child $v_i$ by a light edge. \subsection{Locating pattern occurrences} \citeN[Cor.~1]{CNspire12} obtain a version of the following result that holds only for CFGs and offers search time $O(m^2+(m+occ)\log^\epsilon n)$.
We improve their complexity and generalize it to RLCFGs. \begin{theorem} \label{thm:rlcfg-index} Let a RLCFG of size $g_{rl}$ generate (only) $T[1\mathinner{.\,.} n]$. Then there exists a data structure of size $O(g_{rl})$ that finds the $occ$ occurrences in $T$ of any pattern $P[1\mathinner{.\,.} m]$ in time $O(m\log n + occ \log^\epsilon n)$ for any constant $\epsilon>0$. \end{theorem} This result is essentially obtained in our Section~\ref{sec:index}. In that section we use a specific RLCFG that allows us to obtain a better complexity. However, in a general RLCFG, where we must search for all the $\tau=m-1$ possible splits of $P$, the application of Lemma~\ref{lemma: z-fast} with complexities $f_e(\ell) = O(\ell)$ (Theorem~\ref{thm:rlcfg-prefsuf}) and $f_h(\ell)=O(\log n)$ (Theorem~\ref{thm:rlcfg-kr}) yields $O(m\log n)$ time to find all the $m-1$ ranges $[x_1\mathinner{.\,.} x_2] \times [y_1\mathinner{.\,.} y_2]$ to search for in the grid. Combining that result with the linear-space grid representation and the mechanism to track the secondary occurrences on the grammar tree of a RLCFG described in Section~\ref{sec:index}, the result follows immediately. \subsection{Counting pattern occurrences} While we cannot generalize our result of Section~\ref{sec:counting} to arbitrary RLCFGs, our developments let us improve the best current result on arbitrary CFGs \cite{Nav18}. \begin{theorem} \label{thm:rlcfg-count} Let a CFG of size $g$ generate (only) $T[1\mathinner{.\,.} n]$. Then there exists a data structure of size $O(g)$ that computes the number of occurrences in $T$ of any pattern $P[1\mathinner{.\,.} m]$ in time $O(m\log^{2+\epsilon} n)$ for any constant $\epsilon>0$. \end{theorem} \citeN[Thm.~4]{Nav18} showed that the number of times $P[1\mathinner{.\,.} m]$ occurs in $T[1\mathinner{.\,.} n]$ can be computed in time $O(m^2 + m\log^{2+\epsilon} n)$ within $O(g)$ space for any CFG of size $g$.
As explained in Section~\ref{sec:counting}, he uses the same grid as our Section~\ref{sec:index} for the primary occurrences, but associates with each point the number of occurrences triggered by it (which depends only on the point). Then, a linear-space geometric structure \cite{DBLP:journals/siamcomp/Chazelle88} sums all the numbers in a range in time $O(\log^{2+\epsilon} g)$. Adding over all the $m-1$ partitions of $P$, and considering the $O(m^2)$ previous time to find all the ranges \cite{CNspire12}, the final complexity is obtained. With Lemma~\ref{lemma: z-fast}, and given our new results in Theorems~\ref{thm:rlcfg-prefsuf} and \ref{thm:rlcfg-kr}, we can now improve Navarro's result to $O(m\log^{2+\epsilon} n)$ because the $O(m^2)$ term becomes $O(m\log n)$. However, this holds only for CFGs. Run-length rules introduce significant challenges; in particular, the number of secondary occurrences does not depend only on the points. We could only handle this issue for the specific RLCFG we use in Section~\ref{sec:counting}. An interesting open problem is to generalize this solution to arbitrary RLCFGs. \end{document}
\begin{document} \title{Arbitrary perfect state transfer in $d$-level spin chains} \author{Abolfazl Bayat} \affiliation{Department of Physics and Astronomy, University College London, Gower St., London WC1E 6BT, United Kingdom} \date{\today} \begin{abstract} We exploit a ferromagnetic chain of interacting $d$-level ($d>2$) particles for arbitrary perfect transfer of quantum states with $(d-1)$ levels. The presence of one extra degree of freedom in the Hilbert space of the particles, which is not used in the encoding, allows one to achieve perfect transfer even in a uniform chain through a repeated measurement procedure with consecutive single-site measurements. Apart from the first iteration, for which the time of evolution grows linearly with the size of the chain, in all other iterations the evolution times are short and do not scale with the length. The success probability of the mechanism grows with the number of repetitions, and practically after a few iterations the transfer is accomplished with high probability. \end{abstract} \pacs{03.67.-a, 03.67.Hk, 37.10.Jk} \maketitle \section{Introduction} The natural time evolution of strongly correlated many-body systems can be exploited for propagating information across distant sites in a chain \cite{bose-review,bayat-review-book}. Very recently, experimental realizations of such phenomena have been achieved in NMR \cite{state-transfer-NMR}, coupled optical wave-guides \cite{Nikolopoulos-perfect-transfer,kwek-perfect-transfer} and cold atoms in optical lattices \cite{Bloch-spin-wave,Bloch-magnon}. The idea has been generalized to higher spins in both ferromagnetic \cite{bayat-dlevel-2007} and anti-ferromagnetic \cite{Sanpera-spin1} regimes. In the simplest case of a uniform chain the quality of transport goes down with increasing size of the system, due to the natural dispersion of excitations.
To achieve perfect state transfer across a chain one idea is to engineer the Hamiltonian to have a linear dispersion relation (see Ref.~\cite{Kay-review} for a detailed review on perfect state transfer). This can be achieved by engineering either the couplings \cite{christandl} or local magnetic fields \cite{perfect-transfer-magnetic}. The engineered chains may also be combined with extra control on the boundary to avoid state initialization for perfect transfer \cite{DeFranco-perfect}. One may also engineer the two boundary couplings \cite{leonardo} of free fermionic systems in order to excite only those eigenvectors which lie in the linear zone of the spectrum and thus achieve an almost perfect transfer. A sinusoidal driving of the couplings \cite{hanggi} has also been proposed for routing information in a network of spins between any pair of nodes. A set of pulses in a system with both ferromagnetic and anti-ferromagnetic couplings \cite{Kay,Karimipour-perfect} may also be used to properly transfer quantum states across a spin network. An alternative for achieving almost perfect state transfer is engineering the spectrum of the system to create a resonance between the sender and receiver sites, by using either very weak couplings \cite{weak-coupling} or strong local magnetic fields \cite{Perturbation-magnetic}. As mentioned above, most of the proposed mechanisms for perfect state transfer are based on engineering the Hamiltonian of the system, to some degree, in order to at least approximately achieve a linear dispersion relation or bring the sender and receiver into resonance. In Ref.~\cite{burgarth-dual} a dual-rail system with uniform couplings has been used in an iterative procedure to achieve arbitrary perfect state transfer of a qubit. This idea has then been generalized to multi-rail systems \cite{burgarth-multirail} for transferring higher-level states as well.
Although in both dual- and multi-rail systems the chains are uniform and no engineering is needed, the price paid is the number of chains required, as well as the more complex encoding and decoding of the quantum states. In this paper, a mechanism is introduced for arbitrary perfect state transfer which has the same iterative spirit as the dual- and multi-rail systems \cite{burgarth-dual,burgarth-multirail} but much less complexity. According to this proposal, a ferromagnetic chain of interacting $d$-level ($d>2$) particles is used for sending quantum states with $(d-1)$ levels. The natural time evolution of the system, with consecutive single-particle projective measurements in a certain basis, allows for iterative perfect state transfer. The probability of success grows with the number of iterations and, apart from the first iteration, all others can be done within a very short time scale which does not scale with the size; thus the time needed to achieve perfect transfer remains reasonable even for long chains. The structure of the paper is as follows: in section \ref{sec2} the model is introduced, in section \ref{sec3} the mechanism for state transfer is discussed and in section \ref{sec4} the further iterations for achieving perfect transfer are analyzed. Finally, in section \ref{sec5} the results are summarized and a possible realization in optical lattices is discussed. \begin{figure} \centering \includegraphics[width=8cm,height=6cm,angle=0]{Fig1.eps} \caption{ (color online) (a) A ferromagnetic chain of interacting particles for quantum state transfer. The first spin is encoded in the state $|\phi_s\rangle$ while the rest are prepared in the state $|0\rangle$. (b) When the measurement is successful the quantum state is perfectly transferred to site $N$ and all the other spins reset to the state $|0\rangle$.
(c) When the measurement is unsuccessful the last site is projected onto $|0\rangle$ and the excitation is dispersed along the chain. } \label{fig1} \end{figure} \begin{figure*} \centering \includegraphics[width=15cm,height=5cm,angle=0]{Fig2.eps} \caption{(Color online) (a) The probability of success in the first iteration as a function of $Jt$ in a chain of length $N=20$. (b) The probability of success $P_1$ in terms of length $N$. (c) The optimal time $Jt_1$ versus length $N$. } \label{fig2} \end{figure*} \section{Introducing the model} \label{sec2} We consider a chain of $N$ particles, each of which takes $d$ different levels $\mu=0,1,...,d-1$ (for $d\geq 3$). They interact through the Hamiltonian \begin{equation}\label{H} H=-J\sum_{k=1}^{N-1} \mathbf{P}_{k,k+1}+B\sum_{k=1}^{N}S^z_k \end{equation} where $J$ is the exchange coupling, $B$ is the magnetic field, $\mathbf{P}_{k,k+1}$ is the swap operator which exchanges the quantum states of sites $k$ and $k+1$, and $S^z_k$ is the generalized Pauli operator in the $z$ direction acting on site $k$, defined as $S^z|\mu\rangle=\mu|\mu\rangle$. In the case of $J>0$ and vanishing magnetic field (i.e. $B=0$) the system is ferromagnetic and its ground state is degenerate, containing in particular the $d$ product states of the form $|\mu,\mu,...,\mu\rangle$. The two terms in the Hamiltonian commute with each other, and thus the effect of the magnetic field $B>0$ is just to lift the degeneracy and select the quantum state $|\mathbf{0}\rangle=|0,0,...,0\rangle$ as the unique ground state of the system. This makes it possible to initialize the system through a simple cooling procedure. After initialization, the magnetic field can be switched off as it has no effect on the transport of an excitation, as will be seen in the following. The above Hamiltonian is one possible generalization of the spin-$1/2$ Heisenberg interaction to higher spins \cite{bayat-dlevel-2007}.
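The ground-state statements above can be checked by exact diagonalization of a small chain. The following Python sketch is purely illustrative (it is not part of the paper); the values $d=3$, $N=3$, $J=1$ and $B=0.2$ are arbitrary small choices.

```python
import numpy as np

# Illustrative exact diagonalization of the Hamiltonian defined above
# for a small chain; d, N, J, B are arbitrary test values.
d, N, J, B = 3, 3, 1.0, 0.2

P2 = np.zeros((d * d, d * d))            # swap operator on two neighboring sites
for mu in range(d):
    for nu in range(d):
        P2[nu * d + mu, mu * d + nu] = 1.0
Sz = np.diag(np.arange(d, dtype=float))  # S^z |mu> = mu |mu>

def embed(op, k, width):
    """Place `op` (acting on `width` adjacent sites) at sites k..k+width-1."""
    return np.kron(np.kron(np.eye(d ** k), op), np.eye(d ** (N - k - width)))

H = sum(-J * embed(P2, k, 2) for k in range(N - 1)) \
  + sum(B * embed(Sz, k, 1) for k in range(N))

evals, evecs = np.linalg.eigh(H)
ground = evecs[:, 0]
# With B > 0 the unique ground state is |0,0,...,0>, with energy -J(N-1):
assert np.isclose(evals[0], -J * (N - 1))
assert evals[1] > evals[0] + 1e-9            # non-degenerate ground state
assert np.isclose(abs(ground[0]), 1.0)       # basis index 0 encodes |0,...,0>
# Each uniform product state |mu,...,mu> is an eigenstate, with B-dependent energy:
for mu in range(d):
    v = np.zeros(d ** N)
    v[mu * (d ** N - 1) // (d - 1)] = 1.0    # base-d index of |mu,...,mu>
    assert np.allclose(H @ v, (-J * (N - 1) + B * N * mu) * v)
```

At $B=0$ the states $|\mu,...,\mu\rangle$ all attain the minimal energy $-J(N-1)$, while the field term lifts this degeneracy in favor of $|\mathbf{0}\rangle$, as stated.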
In fact, the swap operator $\mathbf{P}_{k,k+1}$, for $d$-level systems, can always be written as \begin{equation}\label{Swap_spin_operators} \mathbf{P}_{k,k+1}=\sum_{p=0}^{d-1} b_p (\mathbf{S}_k.\mathbf{S}_{k+1})^p \end{equation} where the $b_p$'s are real numbers. To determine these coefficients we apply both sides of the above identity to general states of the form $|\mu \nu\rangle$, which yields $d(d+1)/2$ different equations. However, only $d$ equations are independent, namely those associated with the cases in which the difference $|\mu-\nu|$ takes the values $0,1,...,d-1$. By solving these $d$ independent equations one can uniquely determine all the $b_p$ coefficients. For instance, in the special case of spin-1 ($d=3$) one can easily show that \begin{equation}\label{H2} \mathbf{P}_{k,k+1}= -I+\mathbf{S}_k.\mathbf{S}_{k+1}+(\mathbf{S}_k.\mathbf{S}_{k+1})^2 \end{equation} where $I$ is the identity operator. The Hamiltonian $H$ has several symmetries with corresponding conserved charges, which include \begin{equation}\label{Qm} [H,Q^{(m)}]=0, \hskip 1cm Q^{(m)}=\sum_{k=1}^{N} (S^z_k)^m, \end{equation} for $m=1,2,...,d-1$. This set of conservation laws implies, for example, that a state like $|2,0,0,...,0\rangle$ cannot evolve into a state like $|1,1,0,0,...,0\rangle$. That is why we use the particular form of the Hamiltonian in Eq.~(\ref{H}), which is essential to achieve arbitrary perfect state transfer in our system. The states with only one site excited are called one-particle states and are represented as $|\mathbf{\mu}_k\rangle=|0,0,...0,\mu,0,...,0\rangle$, in which site $k$ is in state $|\mu\rangle$ and the rest are in state $|0\rangle$. The one-particle sector with $Q^{(1)}$ charge equal to $\mu$ is denoted by $V_1^{(\mu)}$ and the whole one-particle sector is \begin{equation}\label{V_1_sector} V_1=V_1^{(1)}\oplus V_1^{(2)} \oplus ... \oplus V_1^{(d-1)}.
\end{equation} The Hamiltonian $H$ in Eq.~(\ref{H}) can be analytically diagonalized in the $V_1$ subspace using the Bethe ansatz. The eigenvectors and the corresponding eigenvalues are \begin{eqnarray}\label{eigen_E} |E_\mu^m\rangle&=&\sqrt{\frac{4}{2N+1}} \sum_{k=1}^{N} \sin(\frac{ 2m+1 }{2N+1}k \pi) |\mu_k\rangle \cr E_\mu^m &=& \mu B-2J \cos(\frac{ 2m+1}{2N+1}\pi) \end{eqnarray} where $m=0,1,...,N-1$ and an irrelevant constant has been dropped from the eigenvalues. \section{Quantum state transfer} \label{sec3} The system is initially prepared in its ground state $|\mathbf{0}\rangle$. The quantum state to be transferred is then encoded on the sender site $s$ (which is assumed to be 1 throughout this paper) as the general state \begin{equation}\label{psi_s} |\phi_s\rangle=\sum_{\mu=1}^{d-1} a_\mu |\mu\rangle, \end{equation} where the $a_\mu$ are complex coefficients with the normalization constraint $\sum_{\mu=1}^{d-1} |a_\mu|^2=1$. It has to be emphasized that this general state does not include $\mu=0$ (the state in which all the other spins are prepared) and thus the dimension of its Hilbert space is $d-1$. This extra degree of freedom in the chain will then be used to achieve the {\em perfect} transfer of the quantum state in Eq.~(\ref{psi_s}). A schematic picture of the system when the initialization is accomplished is shown in Fig.~\ref{fig1}(a), in which the sender spin $1$ is prepared in the quantum state $|\phi_s\rangle$ while the rest are all in state $|0\rangle$. Thus, the initial state of the system becomes \begin{equation}\label{psi_0_1} |\Psi^{(1)}(0)\rangle=\sum_{\mu=1}^{d-1} a_\mu |\mathbf{\mu}_1\rangle.
\end{equation} Since the excitation is located at site $1$ this quantum state is not an eigenstate of the system, and hence the system evolves under the action of the Hamiltonian \begin{equation}\label{psi_t_1} |\Psi^{(1)}(t)\rangle=e^{-iHt}|\Psi^{(1)}(0)\rangle. \end{equation} Using the symmetries of Eq.~(\ref{Qm}) one can easily show that \begin{equation}\label{psi_tt_1} |\Psi^{(1)}(t)\rangle=\sum_{\mu=1}^{d-1} \sum_{k=1}^N a_\mu f_{k1}^\mu(t) |\mathbf{\mu}_k\rangle, \end{equation} where \begin{equation}\label{f_nm_mu} f_{nm}^\mu(t)=\langle \mathbf{\mu}_n|e^{-iHt}|\mathbf{\mu}_m\rangle \end{equation} form a unitary $N\times N$ matrix $f^\mu(t)$ with the elements given in Eq.~(\ref{f_nm_mu}). Using the eigenvectors of Eq.~(\ref{eigen_E}) together with the fact that in the subspace $V_1$ we have $\sum_{m=0}^{N-1}\sum_{\mu=1}^{d-1} |E_\mu^m\rangle \langle E_\mu^m|=I$, one can show that $f^\mu(t)=e^{-i\mu B t}F(t)$ where the elements of the matrix $F(t)$ are given by \begin{widetext} \begin{equation}\label{f_nm_mu2} F_{mn}(t)=\frac{4}{2N+1}\sum_{p=0}^{N-1} e^{i2Jt\cos(\frac{ 2p+1}{2N+1}\pi)} \sin(\frac{ 2p+1}{2N+1}m \pi) \sin(\frac{ 2p+1 }{2N+1}n\pi). \end{equation} \end{widetext} To extract the quantum state at a receiver site $r$ one measures the following operator at site $r$ \begin{equation}\label{O_measurement} O_r=\sum_{\mu=1}^{d-1} |\mu\rangle \langle \mu|. \end{equation} If the outcome of the measurement is $1$ (i.e. the measurement is successful) then the quantum state is perfectly (up to a local unitary rotation) transferred to site $r$, as schematically shown in Fig.~\ref{fig1}(b). Otherwise, if the outcome of the measurement is $0$ (i.e. the measurement is unsuccessful), the spin at site $r$ is projected onto the state $|0\rangle$ and the excitation is dispersed across the chain, as schematically shown in Fig.~\ref{fig1}(c).
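The unitarity of the matrix $F(t)$ defined above can also be checked numerically, since the vectors with components $\sqrt{4/(2N+1)}\sin(\frac{2p+1}{2N+1}k\pi)$ form an orthonormal basis of the one-particle sector. The following Python sketch is illustrative only (not part of the paper); $N=20$ and $J=1$ are arbitrary test values.

```python
import numpy as np

# Illustrative numerical check: the matrix F(t) defined above is unitary,
# so f^mu(t) = e^{-i mu B t} F(t) is unitary as well.
N, J = 20, 1.0
q = (2 * np.arange(N) + 1) * np.pi / (2 * N + 1)      # (2p+1) pi / (2N+1)
k = np.arange(1, N + 1)
# Columns are the normalized modes sqrt(4/(2N+1)) sin((2p+1) k pi / (2N+1)):
V = np.sqrt(4.0 / (2 * N + 1)) * np.sin(np.outer(k, q))
assert np.allclose(V.T @ V, np.eye(N))                # modes are orthonormal

def F(t):
    """The matrix F(t), built from the modes and the phases e^{2iJt cos q}."""
    return V @ np.diag(np.exp(2j * J * t * np.cos(q))) @ V.T

for t in (0.0, 3.7, 12.5):
    Ft = F(t)
    assert np.allclose(Ft.conj().T @ Ft, np.eye(N))   # F(t) is unitary
    assert abs(Ft[N - 1, 0]) ** 2 <= 1.0 + 1e-12      # |F_{N1}|^2 is a probability
```

Unitarity guarantees, in particular, that $|F_{N1}(t)|^2$ is a valid probability, which is used in the next section.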
The probability of a successful measurement on site $N$ (the receiver site $r$ is assumed to be $N$ throughout this paper) at time $t$ is \begin{equation}\label{Psuc_1} P_N^{(1)}(t)=\langle \Psi^{(1)}(t)| O_N|\Psi^{(1)}(t) \rangle = |F_{N1}(t)|^2. \end{equation} In Fig.~\ref{fig2}(a) we plot $P_N^{(1)}(t)$ as a function of time $Jt$ in a chain of length $N=20$. As is clear from the figure, the probability peaks for the first time at a particular time $t=t_1$, and its oscillations then damp over time. To maximize the probability of success we should thus perform the measurement at time $t=t_1$, which gives the probability of success as \begin{equation}\label{P_1_t1} P_1=P_N^{(1)}(t_1)=|F_{N1}(t_1)|^2. \end{equation} In Fig.~\ref{fig2}(b) the success probability $P_1$ is plotted in terms of the length $N$, and is well fitted by $P_1=1.7074N^{-0.5181}$. The optimal time $Jt_1$ is also plotted as a function of $N$ in Fig.~\ref{fig2}(c), which shows a perfect linear dependence on the length. In the case of success, the quantum state at the receiver site $N$ becomes \begin{equation}\label{psi_r1_local} | \phi_N^{(1)} \rangle = \sum_{\mu=1}^{d-1} a_\mu e^{-i\mu B t_1}|\mu\rangle, \end{equation} which differs from $|\phi_s\rangle$, given in Eq.~(\ref{psi_s}). To convert this state into $|\phi_s\rangle$ one has to apply a local unitary operator of the form $e^{iBS_zt_1}$ to site $N$, which then accomplishes the perfect state transfer. \section{Further Iterations} \label{sec4} The quantum state transfer discussed above is not always successful, as the probability of success $P_1$ is less than $1$ and decreases with increasing length $N$.
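As a cross-check of the expressions above, the propagator of Eq.~(\ref{f_nm_mu2}) and the first-iteration optimization can be reproduced numerically. The following is a minimal sketch (the function name and the scan window are our own choices, not from the paper):

```python
import numpy as np

def F_matrix(N, Jt):
    # Single-excitation propagator F_mn(t) of Eq. (f_nm_mu2):
    # spectral sum over the N modes p = 0, ..., N-1.
    p = np.arange(N)
    phase = np.exp(2j * Jt * np.cos((2 * p + 1) * np.pi / (2 * N + 1)))
    sites = np.arange(1, N + 1)
    S = np.sin(np.outer(sites, 2 * p + 1) * np.pi / (2 * N + 1))
    return (4.0 / (2 * N + 1)) * (S * phase) @ S.T

N = 20
ts = np.linspace(0.0, 2.0 * N, 4001)          # scan a window of order N/J
probs = np.array([abs(F_matrix(N, t)[N - 1, 0]) ** 2 for t in ts])
i = int(np.argmax(probs))
print(f"Jt_1 = {ts[i]:.2f}, P_1 = {probs[i]:.3f}")
```

At $Jt=0$ the matrix reduces to the identity and $F(t)F^\dagger(t)=I$ at all times, which also confirms the normalization $\sqrt{4/(2N+1)}$ of the eigenvectors in Eq.~(\ref{eigen_E}).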
In fact, if the measurement is unsuccessful, the quantum state of the whole chain is projected into \begin{equation}\label{psi_0_2} |\Psi^{(2)}(0)\rangle=\frac{1}{\sqrt{1-P_1}}\sum_{\mu=1}^{d-1} \sum_{k=1}^{N-1} a_\mu f^\mu_{k1}(t_1)|\mathbf{\mu}_k\rangle, \end{equation} where the index $k$ runs from $1$ to $N-1$ because the previous measurement was unsuccessful and thus the receiver site $N$ is inevitably projected to the state $|0\rangle$. In addition, the time argument of $|\Psi^{(2)}(0)\rangle$ is set to $0$ because a new iteration starts at this point and the system has not yet evolved in it. A very interesting feature is revealed by exploring the spatial distribution of excitations across the chain in this quantum state, obtained by computing \begin{equation}\label{P_n_distribution} P_m^{(2)}(0)=\langle \Psi^{(2)}(0)| O_m|\Psi^{(2)}(0) \rangle=\frac{|F_{m1}(t_1)|^2}{1-P_1}, \end{equation} where the index $m$ takes the values $m=1,2,...,N-1$. \begin{figure} \centering \includegraphics[width=9cm,height=4.5cm,angle=0]{Fig3.eps} \caption{(Color online) The distribution of excitations across the chain when the first measurement is unsuccessful for a chain of length (a) $N=50$ and; (b) $N=100$.} \label{fig3} \end{figure} \begin{figure} \centering \includegraphics[width=9cm,height=6cm,angle=0]{Fig4.eps} \caption{(Color online) The probability of success in each iteration $k$ in chains of length: (a) $N=20$; (b) $N=40$. The corresponding optimal times $t_k$ for each iteration $k$ in chains: (c) $N=20$; (d) $N=40$.} \label{fig4} \end{figure} In Figs.~\ref{fig3}(a)-(b) we plot the distribution function $P_m^{(2)}(0)$ in terms of the site index $m$ for two chains of length $N=50$ and $N=100$, respectively. As is evident from the figures, the distribution is more prominent near the end of the chain, in particular at $m=N-2$ and $m=N-1$.
This is indeed due to the particular optimization of $t_1$ in the first iteration, which is chosen to maximize the probability of receiving the excitation at site $N$; thus, in the case of an unsuccessful measurement the excitations are still very close to the last site of the chain. The system is then free to evolve as $|\Psi^{(2)}(t)\rangle=e^{-iHt}|\Psi^{(2)}(0)\rangle$, and just as before one has to perform the measurement of Eq.~(\ref{O_measurement}) on site $N$ and see if the outcome is $1$. The probability of success in this iteration is \begin{equation}\label{Psuc_2} P_N^{(2)}(t)=\langle \Psi^{(2)}(t)| O_N|\Psi^{(2)}(t) \rangle. \end{equation} In contrast to the first iteration, to maximize this probability we no longer need to wait for very long times since, according to the distribution $P_m^{(2)}(0)$ shown in Figs.~\ref{fig3}(a)-(b), the excitations are very close to the receiver site $N$. Indeed, the time window over which the optimization is done can be fixed, independently of $N$, for all iterations after the first one. In this paper, all time optimizations for iterations after the first one are taken in a time interval of $0\leq Jt\leq10$. The results are, in fact, hardly improved by choosing a wider time window. The measurement is performed on site $N$ at a particular time $t=t_2$ at which the probability $P_N^{(2)}(t)$ peaks. Hence, the success probability is $P_2=P_N^{(2)}(t_2)$, which can be written as \begin{equation}\label{P_2} P_2=\frac{1}{1-P_1} |\sum_{m=1}^{N-1} F_{Nm}(t_2) F_{m1}(t_1)|^2. \end{equation} In the case of a successful measurement, the quantum state of site $N$ becomes \begin{equation}\label{psi_r2_local} | \phi_N^{(2)} \rangle = \sum_{\mu=1}^{d-1} a_\mu e^{-i\mu B (t_1+t_2)}|\mu\rangle, \end{equation} which is then converted into the target state $|\phi_s\rangle$ by applying the local unitary operator $e^{iBS_z(t_1+t_2)}$.
\begin{figure} \centering \includegraphics[width=8cm,height=5cm,angle=0]{Fig5.eps} \caption{(Color online) The probability of failure after $k$ consecutive iterations versus the number of iterations $k$ in chains of different lengths.} \label{fig5} \end{figure} In the case of an unsuccessful measurement, we can repeat the process over and over until the quantum state reaches the receiver site. One can easily show that after $k$ unsuccessful iterations the quantum state of the whole system at the beginning of the $(k+1)$-th iteration is \begin{widetext} \begin{equation}\label{psi_0_k} |\Psi^{(k+1)}(0)\rangle=\frac{1}{\sqrt{(1-P_{k})(1-P_{k-1})...(1-P_1)}} \sum_{\mu=1}^{d-1} \sum_{m_k=1}^{N-1}...\sum_{m_1=1}^{N-1} a_\mu f^\mu_{m_{k} m_{k-1}}(t_{k}) ... f^\mu_{m_2 m_1}(t_2) f^\mu_{m_11}(t_1) |\mathbf{\mu}_{m_k}\rangle. \end{equation} \end{widetext} Then the system is released to evolve just as before, i.e. $|\Psi^{(k+1)}(t)\rangle=e^{-iHt}|\Psi^{(k+1)}(0)\rangle$. The probability of a successful measurement after time $t$ is $P_N^{(k+1)}(t)=\langle \Psi^{(k+1)}(t)| O_N|\Psi^{(k+1)}(t) \rangle$, from which the time $t_{k+1}$ is determined as the point of its maximum in the time interval $0\leq Jt \leq 10$, so that $P_{k+1}=P_N^{(k+1)}(t_{k+1})$. One can show that \begin{widetext} \begin{equation}\label{Psuc_k} P_{k+1}=\frac{1}{(1-P_{k})(1-P_{k-1})...(1-P_1)} | \sum_{m_k=1}^{N-1}...\sum_{m_1=1}^{N-1} F_{Nm_k}(t_{k+1}) F_{m_k m_{k-1}}(t_{k}) ... F_{m_2 m_1}(t_2) F_{m_1 1}(t_1) |^2. \end{equation} \end{widetext} It is worth mentioning that in the case of a successful measurement the received quantum state is \begin{equation}\label{psi_r_k_local} |\phi_N^{(k+1)}\rangle =\sum_{\mu=1}^{d-1} a_\mu e^{-i\mu B \sum_{m=1}^{k+1} t_m} |\mu\rangle, \end{equation} which can be converted to $|\phi_s\rangle$ by the local unitary operation $e^{iBS_z\sum_{m=1}^{k+1} t_m}$.
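Since $f^\mu(t)=e^{-i\mu Bt}F(t)$, all success probabilities are independent of $\mu$, so the whole iterative protocol can be simulated with a single amplitude vector in the one-excitation sector. A minimal sketch (helper names and grid resolution are ours; the time windows follow the choices made in the text):

```python
import numpy as np

def F_matrix(N, Jt):
    # Single-excitation propagator of Eq. (f_nm_mu2).
    p = np.arange(N)
    phase = np.exp(2j * Jt * np.cos((2 * p + 1) * np.pi / (2 * N + 1)))
    sites = np.arange(1, N + 1)
    S = np.sin(np.outer(sites, 2 * p + 1) * np.pi / (2 * N + 1))
    return (4.0 / (2 * N + 1)) * (S * phase) @ S.T

def failure_probability(N, iterations, grid=400):
    c = np.zeros(N, dtype=complex)
    c[0] = 1.0                              # excitation starts at the sender, site 1
    p_fail = 1.0
    for k in range(iterations):
        # long first wait (of order N/J), then the fixed window 0 <= Jt <= 10
        T = 2.0 * N if k == 0 else 10.0
        ts = np.linspace(0.0, T, grid)
        t_best = max(ts, key=lambda t: abs(F_matrix(N, t) @ c)[-1] ** 2)
        c = F_matrix(N, t_best) @ c
        P_k = abs(c[-1]) ** 2               # success probability of this iteration
        p_fail *= 1.0 - P_k
        c[-1] = 0.0                         # a failed measurement projects site N to |0>
        c /= np.sqrt(1.0 - P_k)
    return p_fail
```

Each extra iteration multiplies the failure probability by $(1-P_k)$, so `failure_probability(N, k)` decreases monotonically in `k`, reproducing the trend of Fig.~\ref{fig5}.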
In the case of an unsuccessful measurement, the process is repeated again. In Figs.~\ref{fig4}(a)-(b) we depict $P_k$ for different iterations $k$ in chains of length $N=20$ and $N=40$, respectively. As is clear from the figures, in some iterations the probability of success is relatively small, which is due to the limitation on the time window in which the optimization is performed. In Figs.~\ref{fig4}(c)-(d) the corresponding optimal times $Jt_k$ are shown for each iteration on the same chains. As shown in the figure, apart from $t_1$ all other times are less than $10/J$, a bound which we imposed to avoid long waiting times. It is worth mentioning that the above choice of the $t_k$'s is made just to maximize the probability of success in the shortest possible time. In fact, the algorithm works for any choice of $t_k$, including the regular waiting times $t_k=(2k-1)t_1$ used in Ref.~\cite{burgarth-dual}, which represent the oscillation of the excitation along the chain due to reflection from the boundaries. The best way to see the performance of our mechanism is to compute the probability of failure after $k$ consecutive iterations. This means that the procedure has to fail in all $k$ iterations, whose probability is \begin{equation}\label{P_fail} P_{fail}^{(k)}=\prod_{m=1}^k (1-P_m). \end{equation} In Fig.~\ref{fig5} we plot $P_{fail}^{(k)}$ as a function of the iteration $k$ for different chains. As is evident from the figure, the probability of failure decreases as the number of iterations increases. For instance, in a long chain of length $N=100$, after $10$ iterations the probability of failure is $\sim 0.35$, while for smaller chains the situation is of course much better: for example, in a chain of length $N=25$, after $10$ iterations the probability of failure is less than $0.1$. \section{Realization in Optical Lattices} Cold atoms are the most promising candidate for realizing $d$-level chains.
A Bose-Hubbard chain \cite{Jacsh-bose-hubbard} containing $^{87}$Rb or $^{23}$Na atoms in the half-filling regime can be tuned to its Mott insulator phase, where exactly one atom resides in each site. For such a system the interaction between the atoms can be described by an effective spin-1 Hamiltonian \cite{Yip-spin1-singlet} \begin{equation}\label{Heff_spin1} H=\sum_{n=1}^{N-1} J \mathbf{S}_n\cdot\mathbf{S}_{n+1}+ K (\mathbf{S}_n\cdot\mathbf{S}_{n+1})^2, \end{equation} where $J=-\frac{2t^2}{U_2}$ and $K=-\frac{2t^2}{3U_2}-\frac{4t^2}{3U_0}$, in which $t$ is the tunneling and $U_S$ ($S=0,2$) is the on-site interaction energy for two spin-1 particles with total spin $S$. By tuning $U_2=U_0$ one gets $J=K$, and thus the swap operator of Eq.~(\ref{H2}) is realized for spin-1 atoms. The quantum phases accessible to the ground state \cite{GS-cold-spin1} and the quench dynamics \cite{Quench-cold-spin1} of $S=1$ spinor atoms have already been analyzed. Higher-level atoms have also been investigated to realize spin-2 ($d=5$) \cite{Cold-spin2} and spin-3 ($d=7$) \cite{Cold-spin3} spinor gases, which can be used to realize our proposed mechanism. However, tuning the interaction to the swap form of Eq.~(\ref{H}) might be tricky and requires a more detailed analysis of the hyperfine levels and of the interaction of atoms in such chains. \section{Conclusion} \label{sec5} We proposed a mechanism for transferring a $(d-1)$-level quantum state across a $d$-level spin chain ($d>2$) through an iterative procedure whose probability of success grows continually with the number of iterations. The fidelity of the transferred quantum state is perfect, up to a local unitary rotation, while the Hamiltonian is uniform and no complicated engineering of the couplings is needed.
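A quick consistency check of the swap-operator identification used in the optical-lattice proposal above: for two spin-1 particles the permutation operator satisfies $P_{12}=\mathbf{S}_1\cdot\mathbf{S}_2+(\mathbf{S}_1\cdot\mathbf{S}_2)^2-1$, so $J=K$ in Eq.~(\ref{Heff_spin1}) indeed yields the swap Hamiltonian up to an additive constant (a sketch of ours, not code from the paper):

```python
import numpy as np

# Spin-1 operators in the S_z basis {|1>, |0>, |-1>}.
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

# Two-site Heisenberg coupling S_1 . S_2 on the 9-dimensional product space.
SS = sum(np.kron(S, S) for S in (Sx, Sy, Sz))

# Swap (permutation) operator: P |a,b> = |b,a>.
P = np.zeros((9, 9))
for a in range(3):
    for b in range(3):
        P[b * 3 + a, a * 3 + b] = 1.0

# For spin 1: P = S1.S2 + (S1.S2)^2 - 1, so tuning J = K in Eq. (Heff_spin1)
# reproduces the swap Hamiltonian up to an additive constant.
print(np.allclose(SS + SS @ SS - np.eye(9), P))   # prints True
```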
In the process, apart from the first iteration, for which the optimal time $t_1$ grows linearly with the length $N$, in all subsequent iterations the evolution time is short and does not grow with the size of the system. This is very useful, as with a fast operation time the environment has less opportunity to spoil the quality of the state transfer in the presence of decoherence. Furthermore, when the transfer is accomplished, the system automatically resets to its initial ferromagnetic state and becomes ready for reuse. The proposed mechanism is most suited for realization in the fast-growing field of trapped atoms in optical lattices. Compared to dual-rail \cite{burgarth-dual} and multi-rail \cite{burgarth-multirail} systems, our proposed mechanism is simpler to fabricate, as it only needs a single spin chain, no matter how large $d$ is. In contrast, for sending quantum states of larger $d$ one has to increase the number of chains in the multi-rail systems, which then makes both the encoding and decoding processes very complicated. In addition, it is worth mentioning that in dual-rail systems the excitations are {\em delocalized} between different parallel spin chains, which makes them vulnerable to dephasing. \\ {\em Acknowledgements:-} Discussions with Sougato Bose, Leonardo Banchi, Matteo Scala and Enrico Compagno are warmly acknowledged. This paper was supported by the EPSRC grant $EP/K004077/1$. \begin{thebibliography}{} \bibitem{bose-review} S. Bose, Contemporary Physics {\bf 48}, 13 (2007). \bibitem{bayat-review-book} G. M. Nikolopoulos, Igor Jex, {\em Quantum State Transfer and Network Engineering}, Springer (2013). \bibitem{state-transfer-NMR} K. R. Koteswara Rao, T. S. Mahesh, A. Kumar, arXiv:1307.5220. \bibitem{Nikolopoulos-perfect-transfer} M. Bellec, G. M. Nikolopoulos, and S. Tzortzakis, Optics Letters {\bf 37}, 4504 (2012). \bibitem{kwek-perfect-transfer} A. Perez-Leija, R. Keil, A. Kay, H. Moya-Cessa, S.
Nolte, L. C. Kwek, B. M. Rodríguez-Lara, A. Szameit, and D. N. Christodoulides, Phys. Rev. A {\bf 87}, 012309 (2013). \bibitem{Bloch-spin-wave} T. Fukuhara, {\em et al.}, Nature Phys. {\bf 9}, 235 (2013). \bibitem{Bloch-magnon} T. Fukuhara, {\em et al.}, Nature {\bf 502}, 76 (2013). \bibitem{bayat-dlevel-2007} A. Bayat, V. Karimipour, Phys. Rev. A {\bf 75}, 022321 (2007). \bibitem{Sanpera-spin1} O. Romero-Isart, K. Eckert, A. Sanpera, Phys. Rev. A {\bf 75}, 050303(R) (2007). \bibitem{Kay-review} A. Kay, Int. J. Quantum Inf. {\bf 8}, 641 (2010). \bibitem{christandl} M. Christandl, N. Datta, A. Ekert, and A. J. Landahl, Phys. Rev. Lett. {\bf 92}, 187902 (2004); C. Albanese, M. Christandl, N. Datta, and A. Ekert, Phys. Rev. Lett. {\bf 93}, 230502 (2004); M. Christandl, N. Datta, T. C. Dorlas, A. Ekert, A. Kay, and A. J. Landahl, Phys. Rev. A {\bf 71}, 032312 (2005). \bibitem{perfect-transfer-magnetic} T. Shi, Y. Li, Z. Song, C. P. Sun, Phys. Rev. A {\bf 71}, 032309 (2005). \bibitem{DeFranco-perfect} C. Di Franco, M. Paternostro, and M. S. Kim, Phys. Rev. Lett. {\bf 101}, 230502 (2008). \bibitem{leonardo} L. Banchi, A. Bayat, P. Verrucchi, S. Bose, Phys. Rev. Lett. {\bf 106}, 140501 (2011); L. Banchi, T. J. G. Apollaro, A. Cuccoli, R. Vaia, and P. Verrucchi, Phys. Rev. A {\bf 82}, 052321 (2010). \bibitem{hanggi} F. Galve, D. Zueco, S. Kohler, E. Lutz, and P. H\"{a}nggi, Phys. Rev. A {\bf 79}, 032332 (2009). \bibitem{Kay} P. J. Pemberton-Ross and A. Kay, Phys. Rev. Lett. {\bf 106}, 020503 (2011). \bibitem{Karimipour-perfect} V. Karimipour, M. Sarmadi Rad, M. Asoudeh, Phys. Rev. A {\bf 85}, 010302(R) (2012). \bibitem{weak-coupling} A. Wojcik, T. Luczak, P. Kurzynski, A. Grudka, T. Gdala, and M. Bednarska, Phys. Rev. A {\bf 72}, 034303 (2005); A. Wojcik, T. Luczak, P. Kurzynski, A. Grudka, T. Gdala, and M. Bednarska, Phys. Rev. A {\bf 75}, 022330 (2007); M. J. Hartmann, M. E.
Reuter, and M. B. Plenio, New J. Phys. {\bf 8}, 94 (2006); L. Campos Venuti, C. Degli Esposti Boschi, and M. Roncaglia, Phys. Rev. Lett. {\bf 99}, 060401 (2007); L. Campos Venuti, S. M. Giampaolo, F. Illuminati, and P. Zanardi, Phys. Rev. A {\bf 76}, 052328 (2007); G. Gualdi, S. M. Giampaolo, and F. Illuminati, Phys. Rev. Lett. {\bf 106}, 050501 (2011); S. Paganelli, S. Lorenzo, T. J. G. Apollaro, F. Plastina, G. L. Giorgi, Phys. Rev. A {\bf 87}, 062309 (2013). \bibitem{Perturbation-magnetic} S. Lorenzo, T. J. G. Apollaro, A. Sindona, F. Plastina, Phys. Rev. A {\bf 87}, 042313 (2013); K. Korzekwa, P. Machnikowski, P. Horodecki, arXiv:1403.7359. \bibitem{burgarth-dual} D. Burgarth and S. Bose, Phys. Rev. A {\bf 71}, 052315 (2005); K. Shizume, K. Jacobs, D. Burgarth, and S. Bose, Phys. Rev. A {\bf 75}, 062328 (2007). \bibitem{burgarth-multirail} D. Burgarth, V. Giovannetti, S. Bose, J. Phys. A: Math. Gen. {\bf 38}, 6793 (2005). \bibitem{Jacsh-bose-hubbard} D. Jaksch, C. Bruder, J. I. Cirac, C. W. Gardiner, and P. Zoller, Phys. Rev. Lett. {\bf 81}, 3108 (1998). \bibitem{Yip-spin1-singlet} S. K. Yip, Phys. Rev. Lett. {\bf 90}, 250402 (2003). \bibitem{GS-cold-spin1} L. de Forges de Parny, F. Hebert, V. G. Rousseau, and G. G. Batrouni, Phys. Rev. B {\bf 88}, 104509 (2013). \bibitem{Quench-cold-spin1} K. W. Mahmud and E. Tiesinga, Phys. Rev. A {\bf 88}, 023602 (2013). \bibitem{Cold-spin2} H. Schmaljohann, M. Erhard, J. Kronj\"{a}ger, M. Kottke, S. van Staa, L. Cacciapuoti, J. J. Arlt, K. Bongs, and K. Sengstock, Phys. Rev. Lett. {\bf 92}, 040402 (2004); M.-S. Chang, C. D. Hamley, M. D. Barrett, J. A. Sauer, K. M. Fortier, W. Zhang, L. You, and M. S. Chapman, Phys. Rev. Lett. {\bf 92}, 140403 (2004); T. Kuwamoto, K. Araki, T. Eno, and T. Hirano, Phys. Rev. A {\bf 69}, 063604 (2004). \bibitem{Cold-spin3} L. Santos and T. Pfau, Phys. Rev. Lett. {\bf 96}, 190404 (2006); B. Pasquiou, E. Marechal, G.
Bismut, P. Pedri, L. Vernac, O. Gorceix, and B. Laburthe-Tolra, Phys. Rev. Lett. {\bf 106}, 255303 (2011). \end{thebibliography} \end{document}
\begin{document} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{conj}[thm]{Conjecture} \newtheorem{lem}[thm]{Lemma} \newtheorem{Def}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \newtheorem{prob}[thm]{Problem} \newtheorem{ex}{Example}[section] \font\BBb=msbm10 at 12pt \newcommand{\Bbb}[1]{\mbox{\BBb #1}} \title{The Randers metrics of weakly isotropic scalar curvature} \author{Xinyue Cheng\footnote{supported by the National Natural Science Foundation of China (No. 11871126) and Chongqing Normal University Science Research Fund (No. 17XLB022)}, Yannian Gong} \maketitle \begin{abstract} In this paper, we study the Randers metrics of weakly isotropic scalar curvature. We prove that a Randers metric of weakly isotropic scalar curvature must be of isotropic $S$-curvature. Further, we prove that a conformally flat Randers metric of weakly isotropic scalar curvature is either Minkowskian or Riemannian.\\ {\bf Keywords:} Finsler geometry, Randers metric, Ricci curvature tensor, scalar curvature, $S$-curvature. \end{abstract} \section{Introduction} Randers metrics form a special and important class of metrics in Finsler geometry.
A Randers metric on a manifold $M$ is a Finsler metric of the following form: \[ F = \alpha +\beta, \] where $\alpha =\sqrt{a_{ij}(x)y^iy^j}$ is a Riemannian metric and $\beta = b_i(x) y^i$ is a $1$-form satisfying $\|\beta_x\|_{\alpha} < 1$ on $M$. Randers metrics were first introduced by the physicist G. Randers in 1941 from the standpoint of general relativity; there, the Riemannian metric $\alpha$ represents the gravitational field and $\beta$ an electromagnetic field. Later on, these metrics were applied to the theory of the electron microscope by R. S. Ingarden in 1957, who first named them Randers metrics. An interesting fundamental fact about Randers metrics is the following: any Randers metric can be expressed as the solution of the Zermelo navigation problem with navigation data $(h, W)$, where $h$ is a Riemannian metric and $W$ is a vector field with $h(x, -W)<1$ on $M$ (\cite{CS}). Finsler geometry is just Riemannian geometry without the quadratic restriction. The Ricci curvature in Finsler geometry is the natural extension of that in Riemannian geometry. However, there is no unified definition of the Ricci curvature tensor in Finsler geometry. Hence, one can find several different versions of the definition of scalar curvature in Finsler geometry. Here, we adopt the definitions introduced by H. Akbar-Zadeh for the Ricci curvature tensor and the scalar curvature (\cite{AZ}). For a Finsler metric $F$ on an $n$-dimensional manifold $M$, let $\bf Ric$ be the Ricci curvature of $F$. Then the scalar curvature of $F$ is defined as follows: \begin{equation} {\bf r}:=g^{ij}{\bf Ric}_{ij},\label{eqb0} \end{equation} where \begin{eqnarray*} {\bf Ric}_{ij}:=\frac {1}{2}{\bf Ric}_{y^{i}y^{j}} \end{eqnarray*} denotes the Ricci curvature tensor and $(g^{ij}):=(g_{ij})^{-1}, \ g_{ij}:=\frac {1}{2}[F^{2}]_{y^{i}y^{j}}$.
We say that $F$ is of weakly isotropic scalar curvature if there exist a 1-form $\theta:=\theta_{i}(x)y^{i}$ and a scalar function $\mu (x)$ on $M$ such that \begin{equation} {\bf r}=n(n-1)\left[\frac{\theta}{F}+\mu(x)\right]. \label{wisc} \end{equation} In particular, when $\theta=0$, that is, ${\bf r}=n(n-1)\mu(x)$, we say that $F$ is of isotropic scalar curvature. One can find many Finsler metrics of weakly isotropic scalar curvature which are not of isotropic scalar curvature (see Example \ref{ex1} below). The $S$-curvature ${\bf S} = {\bf S}(x, y)$ is an important non-Riemannian quantity in Finsler geometry, first introduced by Z. Shen in his study of volume comparison in Riemann-Finsler geometry (\cite{Sh}). Shen proved that the Bishop-Gromov volume comparison holds for Finsler manifolds with vanishing $S$-curvature. Recent studies show that the $S$-curvature plays a very important role in Finsler geometry. In 2014, the first author and M. Yuan verified that a Randers metric of isotropic scalar curvature must be of isotropic $S$-curvature (see {\cite{CY}}). In this paper, we mainly study the Randers metrics of weakly isotropic scalar curvature. Firstly, we obtain the following theorem, which generalizes the related result in \cite{CY} mentioned above. \begin{thm} \label{SCS} \ Let $F = \alpha +\beta$ be a Randers metric on an $n$-dimensional manifold $M$. If $F$ is of weakly isotropic scalar curvature, ${\bf r}=n(n-1)\left[\frac{\theta}{F}+\mu(x)\right]$, then $F$ is of isotropic $S$-curvature. \end{thm} The following is an example of a Randers metric of weakly isotropic scalar curvature, which arises from \cite{CS1}.
\begin{ex}{\rm (\cite{CS1})}\label{ex1} Let us consider the following Randers metric: \begin{eqnarray*} F&=& \frac{\sqrt{\left(1-|a|^{2}|x|^{4}\right)|y|^{2}+\left(|x|^{2}\langle a, y\rangle- 2\langle a, x\rangle\langle x, y\rangle\right)^{2}}}{1-|a|^{2}|x|^{4}} \\ && -\frac{|x|^{2}\langle a, y\rangle- 2\langle a, x\rangle\langle x, y\rangle}{1-|a|^{2}|x|^{4}}, \end{eqnarray*} where $a$ is a constant vector in ${\bf R}^{n}$ and $\langle ~, \rangle$ denotes the standard inner product in ${\bf R}^{n}$. By direct computation, one can easily verify that $F$ is of weakly isotropic scalar curvature. Precisely, we have \begin{equation} {\bf r}=n(n-1)\left(\frac{\theta}{F}+\mu(x)\right), \ \ \ \theta =\frac{3(n+1)c_{m}y^{m}}{2n}, \label{ex1eq} \end{equation} where $c=\langle a, x\rangle$, $\mu = 3\langle a, x\rangle^{2}-2|a|^{2}|x|^{2}$ and $c_{m}:=c_{x^{m}}$. Further, one can also prove that $F$ is of isotropic $S$-curvature, \[ {\bf S}=(n+1) c F. \] Actually, in this case, $F$ is of weakly isotropic flag curvature, \[ \mathbf{K}=\frac{3 c_{m} y^{m}}{F}+ \mu (x). \] Then we can get (\ref{ex1eq}) by Lemma \ref{wEisc}. \end{ex} The study of conformal geometry has a long and venerable history. From the beginning, conformal geometry has played an important role in differential geometry and physical theories. The Weyl theorem shows that the conformal and projective properties of a Finsler space determine the properties of the metric completely (see {\cite{Kn}}, {\cite{Ru}} and \cite{BC}). Undoubtedly, Finsler conformal geometry is an important part of Finsler geometry. We say two Finsler metrics $F$ and $\bar{F}$ are conformally related if there is a scalar function $\sigma (x)$ on the manifold such that $F=e^{\sigma(x)}\bar{F}$. Further, if $\bar{F}$ is Minkowskian, the Finsler metric $F$ is called a conformally flat Finsler metric.
It is an important topic in Finsler geometry to reveal and characterize in depth the geometric structures and properties of conformally flat Finsler metrics. G. Chen and the first author have proved that a conformally flat weak Einstein $(\alpha, \beta)$-metric must be either a locally Minkowski metric or a Riemannian metric on a manifold $M$ of dimension $n \geq 3$ ({\cite {CC}}). Chen-He-Shen proved that a conformally flat $(\alpha, \beta)$-metric with constant flag curvature is either a locally Minkowskian or a Riemannian metric (\cite{GQZ}). On the other hand, Cheng-Yuan proved that a conformally flat non-Riemannian Randers metric of isotropic scalar curvature must be locally Minkowskian when the dimension $n \geq 3$ (\cite{CY}). Further, B. Chen and K. Xia studied a class of conformally flat polynomial $(\alpha, \beta)$-metrics of the form $F=\alpha\left(1+\sum^m_{j=2}a_{j}(\frac {\beta}{\alpha})^{j}\right)$ with $m\geq 2$. They proved that, if such a conformally flat $(\alpha, \beta)$-metric $F$ is of weakly isotropic scalar curvature, then it must have zero scalar curvature. Moreover, if $a_{m-1}a_{m}\neq 0$, then $F$ must be either locally Minkowskian or Riemannian when the dimension $n\geq 3$ (\cite{BK}). When $m=1$, that is, when $F$ is a Randers metric, Chen-Xia did not confirm whether the same conclusion still holds. Therefore, it is a natural problem to characterize conformally flat Randers metrics of weakly isotropic scalar curvature. We have obtained the following theorem. \begin{thm} \label{CWS}\ Let $F = \alpha +\beta$ be a conformally flat non-Riemannian Randers metric on an $n$-dimensional manifold $M$ with $n\geq 2$. If $F$ is of weakly isotropic scalar curvature, that is, ${\bf r}=n(n-1)[\frac{\theta}{F}+\mu(x)]$, then $F$ must be locally Minkowskian.
\end{thm} \section{Preliminaries} Let $M$ be an $n$-dimensional smooth manifold and let $(x^{i}, y^{i})$ denote the local coordinates of a point $(x,y)$ on the tangent bundle $TM$ with $y=y^{i}\frac{\partial}{\partial x^{i}}\in T_{x}M$. Let $F$ be a Finsler metric on $M$ and $g_{y}=g_{kl}(x, y)dx^{k}\otimes dx^{l}$ be the fundamental tensor of $F$, where $g_{kl}:=\frac {1}{2}[F^{2}]_{y^{k}y^{l}}$. The geodesic coefficients of $F$ are given by \begin{equation} G^{k}=\frac {1}{4}g^{kl}\{[F^{2}]_{x^{m}y^{l}}y^{m}-[F^{2}]_{x^{l}}\}, \end{equation} where $(g^{kl}):=(g_{kl})^{-1}$. For any $x\in M$ and $y\in T_xM\backslash \left\{0\right\}$, the Riemann curvature ${\bf R}_{y}:=R^{i}_{\ k}(x, y)\frac {\partial}{\partial x^{i}}$ $\otimes dx^{k}$ is defined by \begin{equation} R^{i}_{\ k}(x, y):=2G^{i}_{x^{k}}-G^{i}_{x^jy^k}y^{j}+2G^{j}G^{i}_{y^{j}y^{k}}-G^{i}_{y^{j}}G^{j}_{y^{k}}. \end{equation} The Ricci curvature of a Finsler metric $F$ is defined as the trace of the Riemann curvature, that is, \begin{equation} {\bf Ric}(x, y):=R^{m}_{\ m}(x, y). \end{equation} It is not difficult to see that the Ricci curvature is a positively homogeneous function of degree two in $y$. Further, the Ricci curvature tensor is given by \begin{equation} {\bf Ric}_{ij}:=\frac {1}{2}{\bf Ric}_{y^{i}y^{j}}.\label{Ricci def} \end{equation} One gets ${\bf Ric}(x, y)={\bf Ric}_{ij}y^{i}y^{j}$ by the homogeneity of ${\bf Ric}$. A Finsler metric $F$ is called a weak Einstein metric if there exist a 1-form $\xi= \xi_{i}(x)y^{i}$ and a scalar function $\mu=\mu(x)$ on $M$ such that \begin{equation} {\bf Ric}=(n-1)\left(\frac {3 \xi}{F}+\mu\right)F^{2} . \label{wE} \end{equation} In particular, if $\xi =0$, that is, ${\bf Ric}=(n-1)\mu F^{2}$, $F$ is called an Einstein metric. The scalar curvature of a Finsler metric $F$ introduced by Akbar-Zadeh is defined by (\ref{eqb0}), that is, ${\bf r}:=g^{ij}{\bf Ric}_{ij}$.
The following lemma is natural and important. \begin{lem}\label{wEisc} Assume that $F$ is a weak Einstein Finsler metric satisfying (\ref{wE}). Then $F$ must be of weakly isotropic scalar curvature satisfying \begin{equation} {\bf r}= n(n-1)\left(\frac{\theta}{F}+\mu\right), \ \ \theta = \frac{3(n+1)}{2n}\xi. \label{wEWIS} \end{equation} \end{lem} The distortion $\tau$ of a Finsler metric $F$ is defined by \[ \tau(x, y):=\ln\frac {\sqrt{\det(g_{ij}(x, y))}}{\sigma_{BH}(x)}, \] where \[ \sigma_{BH}:=\frac{{\rm Vol}({\bf B}^{n}(1))}{{\rm Vol}\{(y^{i})\in {\bf R}^{n}\ |\ F(x, y^{i}\frac{\partial}{\partial x^{i}})<1\}} \] is the Busemann-Hausdorff volume coefficient. $\tau=0$ if and only if the Finsler metric $F$ is Riemannian (\cite{SSZ}). The $S$-curvature ${\bf S}$ of $F$ characterizes the rate of change of the distortion $\tau$ along geodesics, that is, \[ {\bf S}(x,y):=\tau_{|m}(x, y)y^{m}. \] In a local coordinate system, the $S$-curvature of $F$ can be expressed as \[ {\bf S}(x, y)=\frac{\partial G^{m}}{\partial y^{m}}-y^{m}\frac{\partial}{\partial x^{m}}\left[\ln\sigma_{BH}(x)\right]. \] We say that a Finsler metric $F$ is of isotropic $S$-curvature if there is a scalar function $c(x)$ on $M$ such that \begin{equation} {\bf S}(x, y)=(n-1)c(x)F(x, y). \end{equation} The mean Cartan torsion ${\bf I}_{y}=I_{i}(x, y) d x^{i}: T_{x} M \rightarrow {\bf R}$ is defined by $$ I_{i}:=g^{j k} C_{i j k}, $$ where $C_{ijk}$ denotes the Cartan torsion of $F$. It is easy to check that $ I_{i}=\tau_{y^{i}}$. \vskip 2mm For a Randers metric $F=\alpha+\beta$, we have the following: \begin{equation}\label{gij} g^{ij}=\frac {\alpha}{F}a^{ij}-\frac {\alpha}{F^{2}}(b^{i}y^{j}+b^{j}y^{i})+\frac {b^{2}\alpha+\beta}{F^{3}}y^{i}y^{j}, \end{equation} where $b:=\|\beta\|_{\alpha}$ denotes the norm of $\beta$ with respect to $\alpha$ (\cite{SSZ}).
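Formula (\ref{gij}) is easy to verify numerically by comparing it with the inverse of a finite-difference Hessian $g_{ij}=\frac{1}{2}[F^{2}]_{y^{i}y^{j}}$. The following sketch does this for explicitly chosen data $(a_{ij}, b_{i}, y)$ with $\|\beta\|_{\alpha}<1$ (all helper names are ours):

```python
import numpy as np

# Explicit Randers data: SPD matrix a_{ij}, small 1-form b_i, base vector y.
a = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.5, 0.2],
              [0.1, 0.2, 1.8]])
ainv = np.linalg.inv(a)
b = np.array([0.2, -0.1, 0.15])           # ||beta||_alpha < 1
y = np.array([0.7, -0.4, 1.1])
n = 3

F = lambda v: np.sqrt(v @ a @ v) + b @ v  # Randers metric F = alpha + beta

# g_{ij} = (1/2) d^2 (F^2) / dy^i dy^j by central finite differences.
h = 1e-4
g = np.empty((n, n))
for i in range(n):
    for j in range(n):
        ei, ej = h * np.eye(n)[i], h * np.eye(n)[j]
        g[i, j] = (F(y + ei + ej)**2 - F(y + ei - ej)**2
                   - F(y - ei + ej)**2 + F(y - ei - ej)**2) / (8 * h * h)

# Closed-form inverse metric of formula (gij).
alpha, beta = np.sqrt(y @ a @ y), b @ y
Fv = alpha + beta
bi = ainv @ b                             # b^i = a^{ij} b_j
b2 = b @ ainv @ b                         # b^2
ginv = (alpha / Fv) * ainv \
       - (alpha / Fv**2) * (np.outer(bi, y) + np.outer(y, bi)) \
       + ((b2 * alpha + beta) / Fv**3) * np.outer(y, y)

print(np.allclose(np.linalg.inv(g), ginv, atol=1e-5))
```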
Let \[ r_{ij}:=\frac {1}{2}(b_{i;j}+b_{j;i}), \ \ \ \ s_{ij}:=\frac {1}{2}(b_{i;j}-b_{j;i}), \] \[ e_{ij}:=r_{ij}+s_{i}b_{j}+s_{j}b_{i}, \] \[ w_{ij}:=r_{im}r^{m}\!_{j},\ \ t_{ij}:=s_{im}s^{m}\!_{j},\ \ q_{ij}:=r_{im}s^{m}\!_{j}, \] \[ r^{i}_{\ j}:=a^{im}r_{mj},\ s^{i}_{\ j}:=a^{im}s_{mj}, \ t^{i}_{\ j}:=a^{im}t_{mj},\ q^{i}_{\ j}:=a^{im}q_{mj}, \] \[ r_{i}:=b^{m}r_{mi},\ s_{i}:=b^{m}s_{mi},\ t_{i}:=b^{m}t_{mi},\ q_{i}:=b^{m}q_{mi}, \] \[ r:=b^{i}b^{j}r_{ij},\ \ \ t:=b^{i}t_{i},\ \ \ p_{i}:=r_{im}s^{m}, \] where `` ; " denotes the covariant derivative with respect to $\alpha$. Besides, put $ r_{00}:=r_{ij}y^{i}y^{j},\ e_{00}:=e_{ij}y^{i}y^{j},\ q_{00}:=q_{ij}y^{i}y^{j},\ s_{0}:=s_{i}y^{i}$, etc. Further, the Ricci curvature of the Randers metric $F=\alpha +\beta$ is given by \begin{equation} {\bf Ric}={}^{\alpha}{\bf Ric}+(2\alpha s^{m}\!_{0;m}-2t_{00}-\alpha^{2}t^{m}_{\ m})+(n-1)\Xi,\label{Ricci curvature} \end{equation} where ${}^{\alpha}{\bf Ric}$ denotes the Ricci curvature of $\alpha$ and \begin{equation} \Xi:=\frac {2\alpha}{F}(q_{00}-\alpha t_{0})+\frac {3}{4F^{2}}(r_{00}-2\alpha s_{0})^{2}-\frac {1}{2F}(r_{00;0}-2\alpha s_{0;0}). \label{Xi} \end{equation} Besides, the mean Cartan tensor ${\bf I}=I_{i}dx^{i}$ of $F=\alpha +\beta$ is given by \begin{equation}\label{Ii} I_{i}=\frac {n+1}{2F}\Big(b_{i}-\frac {\beta y_{i}}{\alpha^{2}}\Big), \end{equation} where $y_{i}:=a_{ij}y^{j}$. For related details, see \cite{CS} or \cite{SSZ}. The following lemma is very important for the proof of Theorem \ref{SCS}. \begin{lem}{\rm (\cite{CS})}\label{e00} Let $F = \alpha +\beta$ be a Randers metric on an $n$-dimensional manifold $M$. Then $F$ is of isotropic $S$-curvature, ${\bf S}=(n+1)c(x)F$, if and only if \begin{equation} e_{00}=2c(x)(\alpha^{2}-\beta^{2}), \end{equation} where $c=c(x)$ is a scalar function on $M$. \end{lem} \section{The scalar curvature of Randers metrics}\label{section3} In \cite{CY}, the first author and M. Yuan obtained a formula for the scalar curvature of Randers metrics. In this section, we further improve and optimize this formula. For the readers' convenience, we derive the formula for the scalar curvature of Randers metrics step by step. By (\ref{gij}), (\ref{Ricci curvature}) and the definition of the scalar curvature, we can get \begin{equation} {\bf r}:=g^{ij}{\bf Ric}_{ij}= {}^{\alpha}{\bf Ric}_{ij}g^{ij}+\frac {1}{2}E_{ij}g^{ij}+\frac {1}{2}(n-1)\Xi_{ij}g^{ij},\label{scRanders} \end{equation} where ${}^{\alpha}{\bf Ric}_{ij}$ denotes the Ricci curvature tensor of $\alpha$ and \begin{eqnarray*} && E:= 2\alpha s^{m}_{\ 0;m}-2t_{00}-\alpha^{2}t^{m}_{\ m},\\ && E_{ij}:= E_{y^{i}y^{j}},\ \ \ \ \Xi_{ij}:=\Xi_{y^{i}y^{j}}. \end{eqnarray*} Next, we compute each term on the right-hand side of (\ref {scRanders}). Firstly, we get the following \begin{equation}\label{term 1} {}^{\alpha}{\bf Ric}_{ij}g^{ij}=\frac {\alpha}{F}{\bf r}_{\alpha}-\frac{2\alpha}{F^{2}}\ {}^{\alpha}{\bf Ric}_{ij}b^{i}y^{j}+\frac {b^{2}\alpha+\beta}{F^{3}}\ {}^{\alpha}{\bf Ric}, \end{equation} where ${\bf r}_{\alpha}$ denotes the scalar curvature of $\alpha$. Moreover, we obtain the following \begin{eqnarray} E_{ij}g^{ij}&=& \frac{2\alpha}{F}\left[\frac{n+1}{\alpha}s^{m}_{\ 0;m}-(n+2)t^{m}_{\ m}\right] \nonumber\\ &&-\frac{4\alpha}{F^{2}}\big[s^{m}_{\ 0;m}s+\alpha b^{i} s^{m}_{\ i;m}-2t_{0}-\beta t^{m}_{\ m}\big]+2\frac {b^{2}\alpha+\beta}{F^{3}}E, \label{Eij} \end{eqnarray} where $s:=\beta / \alpha$.
In order to obtain $\Xi_{ij}g^{ij}$, we rewrite (\ref{Xi}) as \begin{equation} \Xi:=\Xi^{1}+\Xi^{2}+\Xi^{3}, \label{reXi} \end{equation} where \[ \Xi^{1}:=2\left(\frac {D}{F}\right), \ \ \Xi^{2}:=\frac {3}{4}\left(\frac {A^{2}}{F^{2}}\right), \ \ \Xi^{3}:= -\frac {1}{2}\left(\frac {B}{F}\right), \] and \[ A:=r_{00}-2\alpha s_{0},\ \ \ B:=r_{00;0}-2\alpha s_{0;0}, \] \[ D:=\alpha D_{1}=\alpha(q_{00}-\alpha t_{0}), \ \ \ D_{1}:= q_{00}-\alpha t_{0}. \] From (\ref{reXi}), one can write \begin{equation} \Xi_{ij}:=\Xi^{1}_{ij}+\Xi^{2}_{ij}+\Xi^{3}_{ij},\label{XXi} \end{equation} where \begin{eqnarray*} \Xi^{1}_{ij}&:=& \Big[2\frac{\alpha}{F}(q_{00}-\alpha t_{0})\Big]_{y^{i}y^{j}}=2\Big(\frac{D}{F}\Big)_{y^{i}y^{j}},\\ \Xi^{2}_{ij}&:=& \Big[\frac{3}{4F^{2}}(r_{00}-2\alpha s_{0})^{2}\Big]_{y^{i}y^{j}}=\Big(\frac{3}{4}\Big)\Big(\frac{A^{2}}{F^{2}}\Big)_{y^{i}y^{j}},\\ \Xi^{3}_{ij}&:=& \Big[-\frac{1}{2F}(r_{00;0}-2\alpha s_{0;0})\Big]_{y^{i}y^{j}}=-\Big(\frac{1}{2}\Big)\Big(\frac{B}{F}\Big)_{y^{i}y^{j}}. \end{eqnarray*} By a series of direct computations, we can get the following \begin{eqnarray} \Xi^{1}_{ij}g^{ij}&=& \frac{2}{F}\left \{\frac{\alpha}{F}\Big[(n+3)\frac{D_{1}}{\alpha}+2\alpha q^{m}\!_{m}-(n+1)t_{0}\Big]\right. \nonumber \\ && \left.-\frac{4\alpha}{F^{2}}\big[sD_{1}+\alpha (q_{00\cdot i}b^{i}-st_{0}-\alpha t)\big]+6D\frac{b^{2}\alpha+\beta}{F^{3}}\right\} \nonumber\\ && -\frac{4}{F^{2}}\left\{\frac{\alpha}{F}\big[(3+s)D_{1}+\alpha (q_{00\cdot i}b^{i}-st_{0}-\alpha t)\big]\right. \nonumber\\ && \left. -\frac{\alpha}{F^{2}}\big[ F(sD_{1}+\alpha (q_{00\cdot i}b^{i}-st_{0}-\alpha t))+3D(b^{2}+s)\big]+ 3D\frac{b^{2}\alpha+\beta}{F^{2}}\right\} \nonumber\\ && -2(n-1)\frac{D}{F^{3}}+4\frac{D}{F^{3}}\Big[\frac{\alpha}{F}(1-b^{2})+\frac{b^{2}\alpha+\beta}{F}\Big] \nonumber\\ & =& \frac{1}{2F^{5}}\left\{4F^{3}\big[(n+3)D_{1}-(n+1)\alpha t_{0}+2\alpha^{2}q^{m}_{\ m}\big]\right. \nonumber\\ && -16F^{2}\big[\beta D_{1}+\alpha^{2}q_{00\cdot i}b^{i}-\alpha\beta t_{0}-\alpha^{3}t\big]- 4F^{2}(n+3)D \nonumber\\ && \left.+24F(b^{2}\alpha+\beta)D\right\}, \label{Xieq1} \end{eqnarray} \begin{eqnarray} \Xi^{2}_{ij}g^{ij}&=&\frac{6}{F^{2}}\left\{\frac{\alpha}{F}\Big[w_{00}-2\frac{r_{00}s_{0}}{\alpha}-2\alpha p_{0}+3s^{2}_{0}-\alpha^{2}t\Big]\right.\nonumber\\ && \left.-\frac{2\alpha}{F^{2}}A(r_{0}-ss_{0})+\frac{b^{2}\alpha+\beta}{F^{3}}A^{2}\right\}+\frac{3A}{F^{2}}\left\{\frac{\alpha}{F}\big[r^{m}_{\ m}-(n+1)\frac{s_{0}}{\alpha}\big]\right. \nonumber\\ && \left.-\frac{2\alpha}{F^{2}}(r_{0}-ss_{0})+\frac{b^{2}\alpha+\beta}{F^{3}}A\right\}-\frac{12A}{F^{3}}\left\{\frac{\alpha}{F}\Big[\frac{r_{00}}{\alpha}+r_{0}-(2+s)s_{0}\Big] \right. \nonumber\\ && \left.-\frac{\alpha}{F^{2}}\Big[F(r_{0}-ss_{0})+A(s+b^{2})\Big]+\frac{b^{2}\alpha+\beta}{F^{2}}A\right\}-\frac{3(n-1)A^{2}}{2F^{4}} \nonumber\\ && +\frac{9A^{2}}{2F^{4}}\Big[\frac{\alpha}{F}(1-b^{2})+\frac{b^{2}\alpha+\beta}{F}\Big] \nonumber\\ &=& \frac{1}{2F^{5}}\left\{12F^{2}(\alpha w_{00}-2r_{00}s_{0}-2\alpha^{2}p_{0}+3\alpha s^{2}_{0}-\alpha^{3}t)\right.\nonumber\\ && +6F^{2}A\big[\alpha r^{m}\!_{m}-(n+1)s_{0}\big]-12FA(\alpha r_{0}-\beta s_{0}) \nonumber\\ && -24FA\big[r_{00}+\alpha r_{0}-(2\alpha+\beta)s_{0}\big]-3(n-4)FA^{2} \nonumber\\ && \left.+18A^{2}(b^{2}\alpha+\beta)\right\} \label{Xieq2} \end{eqnarray} and \begin{eqnarray} \Xi^{3}_{ij}g^{ij}&=&\frac{1}{2F}\left\{-\frac{2\alpha}{F}\big[2r^{m}_{\ 0;m}+r^{m}_{\ m;0}-(n+3)\frac{s_{0;0}}{\alpha}-2\alpha s^{m}_{\ ;m}\big]\right. \nonumber\\ && \left.+\frac{4\alpha}{F^{2}}\big[r_{00;0\cdot i}b^{i}-2ss_{0;0}-2\alpha s_{0;0\cdot i}b^{i}\big]-2\frac{b^{2}\alpha+\beta}{F^{3}}B\right\} \nonumber\\ && +\frac{1}{F^{2}}\left\{\frac{\alpha}{F}\big[\frac{3r_{00;0}}{\alpha}-2(3+s)s_{0;0}+r_{00;0\cdot i}b^{i}-2\alpha s_{0;0\cdot i}b^{i}\big]\right. \nonumber\\ && \left.-\frac{\alpha}{F^{2}}\big[F(r_{00;0\cdot i}b^{i}-2ss_{0;0}-2\alpha s_{0;0\cdot i}b^{i})+3B(s+b^{2})\big]\right\} \nonumber\\ && +\frac{n-1}{2F^{3}}B-\frac{ B}{F^{4}}\alpha(1-b^{2})\nonumber\\ &=&\frac{1}{2F^{5}}\left\{-2F^{3}\big[\alpha r^{m}_{\ m;0}+2\alpha r^{m}_{\ 0;m}-2\alpha^{2}s^{m}_{\ ;m}-(3+n)s_{0;0}\big]\right.\nonumber\\ && +F^{2}\big[4\alpha r_{00;0\cdot i}b^{i}-8\alpha^{2}s_{0;0\cdot i}b^{i}-8\beta s_{0;0}+(n+3)B \big] \nonumber\\ && \left.-6F(b^{2}\alpha+\beta)B\right\}, \label{Xieq3} \end{eqnarray} where `` $\cdot i$ " denotes the partial derivative with respect to $y^{i}$. Now, plugging (\ref{term 1}), (\ref{Eij}) and (\ref{Xieq1})-(\ref{Xieq3}) into (\ref{scRanders}), one obtains the following formula for the scalar curvature of the Randers metric $F=\alpha +\beta$: \begin{equation} {\bf r}=\frac {\alpha}{F}{\bf r}_{\alpha}+\frac {1}{4F^{5}}\Big\{\Sigma_{1}+\alpha\Sigma_{2}\Big\}, \label{r1} \end{equation} where $\Sigma_{1}$ and $\Sigma_{2}$ are both polynomials in $y$.
Concretely, we have the following expressions: \[ \begin{aligned} \Sigma_{1}:=&\left\{\big[-4(2b^{2}+4n+7)t^{m}_{\ m}-24b^{i}s^{m}_{\ i;m}+4(n-1)(6q^{m}_{\ m}+3s^{m}_{\ ;m}+2t)\big]\beta\right.\\ &-8\ {}^{\alpha}{\bf Ric}_{ij}b^{i}y^{j}+4(2b^{2}+n+1)s^{m}_{\ 0;m}-4(6b^{2}n-6b^{2}+n^{2}-5)t_{0}\\ &\left.-2(n-1)(6r^{m}_{\ m}s_{0}+8q_{00\cdot i}b^{i}+r^{m}_{\ m;0}+2r^{m}_{\ 0;m}+4s_{0;0\cdot i}b^{i}+12p_{0})\right\}\alpha^{4}\\ &+\left\{-4\big[(3+4n)t^{m}_{\ m}+2b^{i}s^{m}_{\ i;m}-(n-1)(2q^{m}_{\ m}+s^{m}_{\ ;m})\big]\beta^{3}\right.\\ &+\big[-24 {}^{\alpha}{\bf Ric}_{ij}b^{i}y^{j}+8(b^{2}+3n+2)s^{m}_{\ 0;m}-4(n+1)(5n-11)t_{0}\\ &-2(n-1)(6r^{m}_{\ m}s_{0}+8q_{00\cdot i}b^{i}+3r^{m}_{\ m;0}+6r^{m}_{\ 0;m}+4s_{0;0\cdot i}b^{i}+12p_{0})\big]\beta^{2}\\ &+\big[4(2b^{2}+1)~ {}^{\alpha}{\bf Ric}-8(2b^{2}+1)t_{00}+4(n-1)(6b^{2}+n+5)q_{00}\\ &+12(n-1)(3n-4)s_{0}^{2}+2(n-1)(6b^{2}+n+5)s_{0;0}+4(n-1)(3r^{m}_{\ m}r_{00}\\ &+18r_{0}s_{0}+2r_{00;0\cdot i}b^{i}+6w_{00})\big]\beta-6(n-1)\big[(12b^{2}+3n-19)s_{0}+6r_{0}\big] r_{00} \\ & \left.-(n-1)(6b^{2}-n-3)r_{00;0} \right\}\alpha^{2}+4(n-1)s^{m}_{\ 0;m}\beta^{4}+\big[4 {}^{\alpha}{\bf Ric}-8t_{00} \\ &+2(n-1)^{2}(2q_{00}+s_{0;0})\big]\beta^{3}+(n-1)\big[-6(n-1)s_{0}r_{00}+(n-3)r_{00;0}\big]\beta^{2}\\ &+3(n-1)(n-6)r_{00}^{2}\beta \end{aligned} \] and \[ \begin{aligned} \Sigma_{2}:=&\big[-4(b^{2}+n+2)t^{m}_{\ m}-8b^{i}s^{m}_{\ i;m}+4(n-1)(2q^{m}_{\ m}+s^{m}_{\ ;m}+t)\big]\alpha^{4}\\ &+\left\{[-4(b^{2}+6n+8)t^{m}_{\ m}-24b^{i}s^{m}_{\ i;m}+4(n-1)(6q^{m}_{\ m}+3s^{m}_{\ ;m}+t)]\beta^{2}\right.\\ &+[-24 {}^{\alpha}{\bf Ric}_{ij}b^{i}y^{j}+16(b^{2}+n+1)s^{m}_{\ 0;m}-2(n-1)(12r^{m}_{\ m}s_{0}+24p_{0}\\ &+16q_{00\cdot i}b^{i}+3r^{m}_{\ m;0}+6r^{m}_{\ 0;m}+8s_{0;0\cdot i}b^{i})-8(3b^{2}n-3b^{2}+2n^{2}-8)t_{0}\big]\beta\\ &+4\ {}^{\alpha}{\bf Ric}\,b^{2}-8b^{2}t_{00}+2(n-1)\big[12(3b^{2}+n-4)s_{0}^{2}+36s_{0}r_{0}+6b^{2}s_{0;0}\\ &\left.+12b^{2}q_{00}+3r^{m}_{\ m}r_{00}+2r_{00;0\cdot i}b^{i}+6w_{00}\big]\right\}\alpha^{2}-4nt^{m}_{\ m}\beta^{4}+2\big[8ns^{m}_{\ 0;m}\\ &+(n-1)(2r^{m}_{\ 0;m}+r^{m}_{\ m;0})-4 {}^{\alpha}{\bf Ric}_{ij}b^{i}y^{j}-4n(n-3)t_{0}\big]\beta^{3}\\ &+\left\{4(b^{2}+2)({}^{\alpha}{\bf Ric}-2t_{00})+2(n-1)\big[6(n-2)s_{0}^{2}+2(n+2)s_{0;0}+6w_{00} \right. \\ &\left.+3r^{m}_{\ m}r_{00}+4(n+4)q_{00}+2r_{00;0\cdot i}b^{i}\big]\right\}\beta^{2}-2(n-1)\left\{3b^{2}r_{00;0}-nr_{00;0}\right.\\ &\left.+6[2(n-2)s_{0}-3r_{0}]r_{00}\right\}\beta+3(n-1)(6b^2+n-12)r_{00}^{2}. \end{aligned} \] It is obvious that $\Sigma_{1}$ and $\Sigma_{2}$ are homogeneous polynomials of degree 5 and 4 in $y$, respectively. \begin{rem} We have corrected some errors that occurred in the formulas of $\Sigma_{1}$ and $\Sigma_{2}$ in \cite{CY}. We must point out that, in the proofs of the main theorems in \cite{CY}, the authors only used the fact that $\Sigma_{1}$ and $\Sigma_{2}$ are homogeneous polynomials of degree 5 and 4 in $y$, respectively. Hence, the main results in \cite{CY} are still true. \end{rem} \section{Proof of the Theorems} In this section, we are going to prove Theorem \ref{SCS} and Theorem \ref{CWS}. \vskip 2mm {\bf Proof of Theorem \ref{SCS}.} \ Let $F=\alpha+\beta$ be a Randers metric on an $n$-dimensional manifold $M$ with weakly isotropic scalar curvature. Firstly, note that \[ e_{ij}:=r_{ij}+b_{i}s_{j}+b_{j}s_{i}. \] We have \begin{equation} r_{00}=e_{00}-2\beta s_{0} \label{r00} \end{equation} and \begin{equation} r_{00;0}=e_{00;0}-2(\beta s_{0;0}+s_{0}e_{00}-2\beta s_{0}^{2}). \label{r00;0} \end{equation} Then plugging (\ref{r00}) and (\ref{r00;0}) into (\ref{r1}) and multiplying ($\ref{r1}$) by $4F^{5}$, one gets \begin{equation} 4F^{5}{\bf r}=\Gamma_{1}+\alpha\Gamma_{2},\label{r2} \end{equation} where $\Gamma_{1}$ and $\Gamma_{2}$ are homogeneous polynomials of degree 5 and 4 in $y$, respectively, which have the following expressions: \begin{eqnarray} \Gamma_{1}&:=&\left\{\big[16{\bf r}_{\alpha}-4(2b^{2}+4n+7)t^{m}_{\ m}-24b^{i}s^{m}_{\ i;m}+4(n-1)(6q^{m}_{\ m}+3s^{m}_{\ ;m}+2t)\big]\beta\right.\nonumber\\ && -8 {}^{\alpha}{\bf Ric}_{ij}b^{i}y^{j}+4(2b^{2}+n+1)s^{m}_{\ 0;m}-2(n-1)\big[6r^{m}_{\ m}s_{0}+8q_{00\cdot i}b^{i} \nonumber\\ && \left. +r^{m}_{\ m;0}+2r^{m}_{\ 0;m}+4s_{0;0\cdot i}b^{i}+12p_{0}\big]-4(6b^{2}n-6b^{2}+n^{2}-5)t_{0}\right\}\alpha^4 \nonumber\\ && + \left\{\big[16{\bf r}_{\alpha}-4(4n+3)t^{m}_{\ m}-8b^{i}s^{m}_{\ i;m}+4(n-1)(2q^{m}_{\ m}+s^{m}_{\ ;m})\big]\beta^3\right. \nonumber\\ && +\big[-2(n-1)(18r^{m}_{\ m}s_{0}+ 8q_{00\cdot i}b^{i}+3r^{m}_{\ m;0}+6r^{m}_{\ 0;m}+4s_{0;0\cdot i}b^{i}+12 p_{0}) \nonumber\\ && -24 {}^{\alpha}{\bf Ric}_{ij}b^{i}y^{j}+8(b^{2}+3n+2)s^{m}_{\ 0;m}-4(n+1)(5n-11)t_{0}\big]\beta^2 \nonumber \\ && +\Big[4(2b^2+1)({}^{\alpha}{\bf Ric}-2t_{00})+4(n-1)\big[(6b^{2}+n+5)q_{00}+2r_{00;0\cdot i}b^{i} \nonumber\\ && +(6b^{2}+1)s_{0;0}+3r^{m}_{\ m}e_{00}+36s_{0}r_{0}+(30b^{2}+19n-66)s_{0}^{2}+6w_{00}\big]\Big]\beta \nonumber\\ && \left. -(n-1)(6b^{2}-n-3)e_{00;0}-4(n-1)\big[9r_{0}+(15b^{2}+5n-27)s_{0}\big]e_{00}\right\}\alpha^{2} \nonumber\\ && +4(n-1)s^{m}_{\ 0;m}\beta^{4}+\left\{4 {}^{\alpha}{\bf Ric}-8t_{00}+4(n-1)\big[(n-1)q_{00}+s_{0;0}\right. \nonumber\\ && \left.+(7n-24)s_{0}^{2}\big]\right\}\beta^{3}+(n-1)\big[(n-3)e_{00;0}-4(5n-21)s_{0}e_{00}\big]\beta^{2} \nonumber\\ && +3(n-1)(n-6)e_{00}^{2}\beta \label{Gma1} \end{eqnarray} and \begin{eqnarray} \Gamma_{2}&:=&\left\{4{\bf r}_{\alpha}-4(b^{2}+n+2)t^{m}_{\ m}-8b^{i}s^{m}_{\ i;m}+4(n-1)(2q^{m}_{\ m}+s^{m}_{\ ;m}+t)\right\}\alpha^{4} \nonumber\\ && +\left\{\big[24{\bf r}_{\alpha} -4(b^{2}+6n+8)t^{m}_{\ m}-24b^{i}s^{m}_{\ i;m} +4(n-1)(6q^{m}_{\ m}+t +3s^{m}_{\ ;m})\big]\beta^2 \right. \nonumber\\ &&+\big[-24 {}^{\alpha}{\bf Ric}_{ij}b^{i}y^{j}+16(b^{2}+n+1)s^{m}_{\ 0;m} \nonumber\\ && -2(n-1)(18r^{m}_{\ m}s_{0}+16q_{00\cdot i}b^{i}+3r^{m}_{\ m;0}+6r^{m}_{\ 0;m}+8s_{0;0\cdot i}b^{i}+24p_{0}) \nonumber\\ && -8(3b^{2}n-3b^{2}+2n^{2}-8)t_{0}\big]\beta+4 b^{2}\ {}^{\alpha}{\bf Ric}- 8b^{2}t_{00}+2(n-1)\big[12b^{2}q_{00} \nonumber\\ && \left. +2r_{00;0\cdot i}b^{i}+6b^{2}s_{0;0}+3r^{m}_{\ m}e_{00}+36s_{0}r_{0}+12(3b^{2}+n-4)s_{0}^{2} +6w_{00}\big]\right\}\alpha^{2} \nonumber\\ && -4(nt^{m}_{\ m}-{\bf r}_{\alpha})\beta^{4}+\big[-2(n-1)(6r^{m}_{\ m}s_{0}+r^{m}_{\ m;0}+2r^{m}_{\ 0;m}) \nonumber\\ && -8{}^{\alpha}{\bf Ric}_{ij}b^{i}y^{j}+16ns^{m}_{\ 0;m}-8n(n-3)t_{0}\big]\beta^{3}+\left\{4(b^{2}+2)({}^{\alpha}{\bf Ric}-2t_{00})\right. \nonumber\\ && +2(n-1)\big[4(n+2)q_{00}+2r_{00;0\cdot i}b^{i}+2(3b^{2}+2)s_{0;0}+3r^{m}_{\ m}e_{00}+3s_{0}r_{0} \nonumber\\ && \left.+4(6b^{2}+10n-33)s_{0}^{2}+6w_{00}\big]\right\}\beta^{2}- 2(n-1)\left\{(3b^{2}-n)e_{00;0}\right. \nonumber\\ && \left.+\big[18r_{0}+2(15b^{2}+10n-48)s_{0}\big] e_{00}\right\}\beta+3(n-1)(6b^{2}+n-12)e_{00}^{2}.\label{Gma2} \end{eqnarray} By (\ref{Gma1}) and (\ref{Gma2}), we can find that \begin{equation} \Gamma_{2}\beta-\Gamma_{1}=-18(1-b^{2})\beta e_{00}^{2}+(\alpha^{2}-\beta^{2})K_{000},\label{G2-G1} \end{equation} where $K_{000}$ is a homogeneous polynomial of degree 3 in $y$. On the other hand, by assumption, the Randers metric $F=\alpha +\beta$ is of weakly isotropic scalar curvature, that is, \begin{equation} {\bf r}=n(n-1)\left[\frac{\theta}{F}+\mu(x)\right]. \label{r22} \end{equation} Then, multiplying ($\ref{r22}$) by $4F^{5}$, one has \begin{equation} \label{r3} 4F^{5}{\bf r}=4n(n-1)(\Pi_{1}+\alpha\Pi_{2}), \end{equation} where $\Pi_{1}$ and $\Pi_{2}$ are homogeneous polynomials of degree 5 and 4 in $y$, respectively, which are expressed as \begin{eqnarray*} \Pi_{1}&:=&(5\mu\beta+\theta)\alpha^{4}+(10\mu\beta^{3}+6\theta\beta^{2})\alpha^{2}+\mu\beta^{5}+\theta\beta^{4},\\ \Pi_{2}&:=& \mu\alpha^{4}+(10\mu\beta^{2}+4\theta\beta)\alpha^{2}+5\mu\beta^{4}+4\theta\beta^{3}. \end{eqnarray*} Further, we can get \begin{equation} \Pi_{2}\beta-\Pi_{1}=-(\alpha^{2}-\beta^{2})(4\mu\beta\alpha^{2}+4\mu\beta^{3}+\theta\alpha^{2}+3\theta\beta^{2}). \label{r5} \end{equation} Comparing (${\ref {r2}}$) and (${\ref {r3}}$), we obtain the following \begin{equation} \label{r4} \Gamma_{1}=4n(n-1)\Pi_{1}, \ \ \Gamma_{2}=4n(n-1)\Pi_{2}. \end{equation} Therefore, $\Gamma _{2}\beta - \Gamma _{1}= 4n(n-1)(\Pi_{2}\beta-\Pi_{1})$.
From (${\ref {G2-G1}}$) and (${\ref {r5}}$), one obtains \begin{eqnarray*} && -18(1-b^{2})\beta e_{00}^{2}+(\alpha^{2}-\beta^{2})K_{000}\\ &&= -4n(n-1)(\alpha^{2}-\beta^{2})(4\mu\beta\alpha^{2}+4\mu\beta^{3}+\theta\alpha^{2}+3\theta\beta^{2}), \end{eqnarray*} which is equivalent to \begin{equation} 18(1-b^{2})\beta e_{00}^{2}=(\alpha^{2}-\beta^{2})[K_{000}+4n(n-1)(4\mu\beta\alpha^{2}+4\mu\beta^{3}+\theta\alpha^{2}+3\theta\beta^{2})]. \end{equation} Because $\alpha^{2}-\beta^{2}$ is an irreducible polynomial in $y$ and $1-b^{2}>0$, we know that $e_{00}$ must be divisible by $\alpha^{2}-\beta^{2}$. That is, there exists a scalar function $c(x)$ on $M$ such that \begin{equation} e_{00}=2c(x)(\alpha^{2}-\beta^{2}). \end{equation} By Lemma {\ref {e00}}, $F$ is of isotropic $S$-curvature. \hspace*{\fill}Q.E.D. \vskip 2mm In order to prove Theorem \ref{CWS}, we first prove the following lemma. \begin{lem}\label{lmS} Let $F = \alpha +\beta$ be a non-Riemannian Randers metric with isotropic $S$-curvature on an $n$-dimensional ($n\geq 2$) manifold $M$. If there is a scalar function $\sigma =\sigma (x)$ on $M$ such that $\bar {F}:=e^{\sigma (x)}F$ is of weakly isotropic scalar curvature, then $\sigma$ must be a constant. \end{lem} {\bf Proof.} \ By the assumption, $\bar{F}$ is conformally related to $F$, $\bar{F} =e^{\sigma(x)}F$. Then we have the following equality ({\cite {BC}}): \begin{equation} \bar{\bf S} ={\bf S}+F^{2}\sigma^{r}I_{r}, \label{S1} \end{equation} where $\sigma^{r}:=g^{rm}\sigma_{x^{m}}$ and $I_{r}$ is the mean Cartan tensor of $F$. Also, by the assumption, the Randers metric $F$ is of isotropic $S$-curvature, that is, there is a scalar function $\lambda(x)$ on $M$ such that \begin{equation} {\bf S}=(n+1)\lambda(x)F. \label{S2} \end{equation} On the other hand, by the assumption that the Randers metric $\bar{F}:=e^{\sigma(x)}F$ is of weakly isotropic scalar curvature and by Theorem \ref{SCS}, $\bar{F}$ must be of isotropic $S$-curvature, \begin{equation} {\bar{\bf S}}=(n+1){\bar\lambda}(x){\bar F}, \label{S3} \end{equation} where $\bar{\lambda}(x)$ is a scalar function on $M$. Plugging ($\ref{gij}$), ($\ref{Ii}$), ($\ref{S2}$) and ($\ref{S3}$) into ($\ref{S1}$), and then arguing as in the proof of Theorem 1.2 in {\cite{CY}}, we can conclude that $\sigma(x)$ is a constant. \hspace*{\fill}Q.E.D. \vskip 2mm Lemma \ref{lmS} shows that, if $F$ is a non-Riemannian Randers metric with isotropic $S$-curvature on an $n$-dimensional manifold $M$ ($n\geq 3$), then there is no non-constant scalar function $\sigma=\sigma(x)$ such that $\bar{F}:=e^{\sigma} F$ is of weakly isotropic scalar curvature. Now, we are in a position to prove Theorem \ref{CWS}. {\bf Proof of Theorem \ref{CWS}.} \ By the assumption, $F=\alpha+\beta$ is a conformally flat metric, that is, there exists a scalar function $\kappa(x)$ on $M$ such that \begin{equation} F=e^{\kappa(x)}\bar{F}, \end{equation} where $\bar{F}$ is a Minkowski metric. Obviously, $\bar{F}$ is of isotropic $S$-curvature, $\bar{\bf S}=0$. Further, since $F=e^{\kappa(x)}\bar{F}$ is of weakly isotropic scalar curvature, by Lemma \ref{lmS}, $\kappa(x)$ must be a constant. Hence, $F$ is a Minkowski metric. This completes the proof. \hspace*{\fill}Q.E.D. \vskip 8mm \begin{thebibliography}{Ma} \bibitem{AZ} Akbar-Zadeh H.: Sur les espaces de Finsler \`a courbures sectionnelles constantes, {\it Acad. Roy. Belg. Bull. Cl. Sci.}, {\bf 74}, 281-322(1988). \bibitem{BC} B\'{a}cs\'{o} S., Cheng X.: Finsler conformal transformations and the curvature invariances, {\it Publicationes Mathematicae-Debrecen}, {\bf 70}(1-2), 221-231(2007).
\bibitem{BK} Chen B., Xia K.: On conformally flat $(\alpha, \beta)$-metrics with weakly isotropic scalar curvature, {\it J. Korean Math. Soc.}, {\bf 56}(2), 329-352(2019). \bibitem{CC} Chen G., Cheng X.: An important class of conformally flat weak Einstein Finsler metrics, {\it International Journal of Mathematics}, {\bf 24}(1), 1350003(15 pages)(2013). \bibitem{GQZ} Chen G., He Q., Shen Z.: On conformally flat $(\alpha, \beta)$-metrics with constant flag curvature, {\it Publicationes Mathematicae-Debrecen}, {\bf 86}(3-4), 387-400(2015). \bibitem{CS} Cheng X., Shen Z.: Finsler Geometry - An Approach via Randers Spaces, Springer and Science Press, Heidelberg and Beijing, 2012. \bibitem{CS1} Cheng X., Shen Z.: Randers metrics of scalar flag curvature, {\it Journal of the Australian Mathematical Society}, {\bf 87}(3), 359-370(2009). \bibitem{CY} Cheng X., Yuan M.: On Randers metrics of isotropic scalar curvature, {\it Publicationes Mathematicae-Debrecen}, {\bf 84}(1-2), 63-74(2014). \bibitem{SSZ} Chern S.S., Shen Z.: Riemann-Finsler Geometry, World Scientific, Singapore, 2005. \bibitem{Kn} Knebelman M.S.: Conformal geometry of generalised metric spaces, {\it Proc. Natl. Acad. Sci. USA}, {\bf 15}, 376-379(1929). \bibitem{Ru} Rund H.: The Differential Geometry of Finsler Spaces, Springer-Verlag, Berlin, 1959. \bibitem{Sh} Shen Z.: Volume comparison and its applications in Riemann-Finsler geometry, {\it Advances in Mathematics}, {\bf 128}, 306-328(1997). \end{thebibliography} \vskip 10mm \noindent Xinyue Cheng \\ School of Mathematical Sciences \\ Chongqing Normal University \\ Chongqing 401331, P. R. China \\ [email protected] \vskip 5mm \noindent Yannian Gong \\ School of Sciences \\ Chongqing University of Technology \\ Chongqing 400054, P. R. China \\ [email protected] \end{document}
\begin{document} \title{Electromechanics of Suspended Spiral Capacitors and Inductors} \author{ Sina Khorasani } \affiliation{ School of Electrical Engineering, Sharif University of Technology, P. O. Box 11365-9363, Tehran, Iran\\ \'{E}cole Polytechnique F\'{e}d\'{e}rale de Lausanne (EPFL), CH-1015, Lausanne, Switzerland } \email{[email protected]; [email protected]} \begin{abstract} Most electromechanical devices are based on two-dimensional metallic drums under high tensile stress, which increases the mechanical frequency and quality factor. However, high mechanical frequencies lead to small zero-point displacements $x_{\rm zp}$, which limits the single-photon interaction rate $g_0$. For applications which demand a large $g_0$, any design with increased $x_{\rm zp}$ is desirable. It is shown that a drum patterned into a spiral shape can resolve this difficulty; the improvement is obtained by a reduction of the mechanical frequency while the motional mass is kept almost constant. An order-of-magnitude increase in $g_0$ is obtained, and agreement between simulations and interferometric measurements is observed. \end{abstract} \maketitle The various applications of electromechanics cover the classical and quantum regimes, such as sensing and cryogenic superconducting circuits. Usually, some force such as radiation pressure, acceleration, or gravity is responsible for the deformation of a mechanical moving body, or for shifting its resonance frequency $\Omega$. In either case, a nonlinear interaction between the mechanical motion of a parallel-plate capacitor and the electric component of an oscillating electromagnetic field is developed. The single-photon interaction rate $g_0$, a quantity having units of frequency, defines the strength of such nonlinear electromechanical interactions \cite{1,2}. Typical values of $g_0$ depend on the application.
For optomechanical devices and phoxonic crystals it is on the order of 10MHz or more, while for molecular optomechanics it can take on extremely high values. However, for superconducting electromechanics with micro-drum capacitors, where the cavity frequency is a few GHz, $g_0$ is typically in the range of $2\pi\times 20{\rm Hz}$ to $2\pi\times 60{\rm Hz}$. In principle, any method to enhance $g_0$ is favorable and very useful from a practical point of view. All other major applications, including mass and force sensing, also rely on $g_0$, so that a larger $g_0$ would directly translate into an increased sensitivity, simply because \cite{2} the single-photon cooperativity $\mathcal{C}_0$ is proportional to $g_0^2$. However, the field-enhanced cooperativity $\mathcal{C}=\mathcal{C}_0 \bar{n}_{\rm cav}$ may not change significantly, since $\bar{n}_{\rm cav}\propto\Omega$, where $\bar{n}_{\rm cav}$ is the equilibrium cavity occupation, implying that $\mathcal{C}$ is independent of $\Omega$. Therefore, the ultimate theoretical side-band cooling limit will remain unchanged, unless squeezed-light \cite{2a} or feedback-control \cite{2b} schemes are used. The motivation here is to propose a cost-effective, simple, and efficient method to enhance $g_0$ for a given fabrication process. The trick is to suspend a spiral electromechanical element, letting it vibrate more freely compared to the constrained devices grown on fixed substrates. Here, one may etch a spiral pattern into a drum capacitor, which is shown to be quite feasible. This could be thought of as a rolled cantilever; however, cantilevers are one-dimensional (1D) structures, while this is effectively a two-dimensional (2D) element with a sensitivity exceeding that of a 1D cantilever. Hence, the cantilever approximation cannot be used, since it would yield incorrect results. One may also consider suspended inductors.
While spiral capacitors need to be fixed at one end, suspended inductors should be fixed at both ends to let current flow. In spiral capacitors the electrostatic field is responsible for the mechanical deformation, while the magnetic field of the electrical current causes mechanical deformation and pinching of suspended spiral inductors. A major advantage of using suspended inductors is access to the second quadrature of the electromagnetic radiation field through the electrical current, while the first quadrature, due to the electrical voltage, interacts with spiral capacitors. There is otherwise no known method of accessing both quadratures of microwave radiation in a superconducting circuit so easily and in such a straightforward manner. However, suspended inductors have a very small $g_0$, typically ranging from 10mHz to 1Hz, as discussed in the Supplementary Information. While apparently too small, these figures are still measurable. \begin{figure} \caption{ Illustration of a uniform spiral. For a spiral capacitor this is the top electrode with one fixed end. For a suspended inductor both ends must be suspended.\label{Fig1}} \end{figure} A spiral structure with uniform spacing and strip width is illustrated in Fig. \ref{Fig1}, with $b$, $h$, and $t$ being respectively the strip width, thickness, and gap, as shown sideways in Fig. \ref{Fig2}. The external radius of a suspended capacitor or inductor at microwave frequencies is typically of the order of $10{\rm \mu m}$ and 1mm, respectively, and the required number of turns $N$ is normally under 20. We find that roughly $g_0\propto \sqrt{N}$ holds if the external parameters are unchanged. Spiral capacitors must be suspended very close to a conducting bottom electrode, at a gap not exceeding 100nm-150nm. The fundamental mechanical mode of a suspended capacitor should clearly be out-of-plane, with one maximum at the center. For this to happen, one should select $b>h$.
A strong violation of this condition, however, significantly influences the fundamental mode, making it in-plane, much like clock spiral springs. This design is obviously favorable for the suspended inductor. By cutting through the spiral and undercutting the capacitor, the initial tensile stress is suddenly removed, and thus the spiral is expected to contract horizontally after release. This may lead to difficulties in fabrication, and it is at first not quite obvious that this structure can actually be made. \begin{figure} \caption{ Spiral capacitor viewed schematically sideways. Blue color represents Aluminum conducting layers; cyan color is a thin insulating ${\rm SiO_2}$ layer.\label{Fig2}} \end{figure} The other issue could be the Al grain size, which puts a practical limit on the achievable minimum $b$. Atomic Layer Deposition (ALD) of Al has been reported \cite{3}, but is not customary, and Al deposited by evaporation is not crystalline. Alternative superconducting metals which could be grown in crystalline form are not known. This implies that for the time being, and while not having access to crystalline growth of Aluminum (through ALD, MBE, etc.), one would need to find a practical solution to demonstrate the feasibility of the process. This puts severe restrictions on the fabrication. At first, it is rather hard to imagine that the spiral capacitors could survive the undercut. An evaluation of the idea would suggest that the undercut and released spiral would collapse, buckle, break because of van der Waals attraction, or at least significantly deform out of plane because of thermal-expansion mismatch. It could be so fragile that it would break during transport. None of these happened, contrary to normal expectations, and we can show here that the spiral capacitor with a moderate number of turns can be successfully fabricated and suspended. We have furthermore measured the mechanical response and observed complete agreement with the design.
While we cannot yet satisfactorily explain why the structure survives the fabrication and undercut, possible explanations are, first, that the Focused Ion Beam (FIB) process could fuse and crystallize the Al grains, making the grown layer, in terms of mechanical properties, effectively much like a single crystal. Secondly, the suspended Al is bounded by vacuum on both the top and bottom sides after undercut, and is too thin (100nm) to develop any significant residual stress gradient during growth. Hence, it does not buckle up or down, in contrast to what always happens to multilayer or very thick cantilevers, which are highly deformed after release and undercut. Table \ref{Table1} summarizes various spiral geometries on the same structure. Calculations are done using the polar deformation profile $\Delta\rho(\theta)$ fed from COMSOL, as discussed in the supplementary material. The first row corresponds to the simple membrane of the unpatterned micro-drum capacitor, which exhibits a much larger mechanical frequency due to the residual tensile stress. The second row corresponds to what has been fabricated with $N=5$, whose COMSOL simulation is illustrated in Fig. \ref{Fig3}. By carving only 5 and 10 turns, $g_0$ increases respectively 7- and 12-fold, showing a rough dependence $g_0\propto\sqrt{N}$. For inductor simulations in COMSOL, both ends should be fixed and the first in-plane displacement mode is investigated. By choosing a sufficient thickness, the fundamental mode becomes in-plane polarized. For spiral capacitors, a small $N$ is quite sufficient to obtain a large enhancement of $g_0$. The deformation profile can also be estimated theoretically within the thin-wire approximation, as detailed in the Supplementary Information. But that would mostly cause an underestimation of $g_0$.
\begin{table} \begin{center} \begin{tabular}{c c c c c c c c} \hline\hline $b$ & $h$ & $t$ & $d$ & $L_0 ({\rm nH})$ & $f ({\rm kHz})$ & $N$ & $g_0 ({\rm Hz})$ \\ \hline $-$ & 100 & $-$ & 100 & 70 & $6.2\times10^3$ & 0 & $2\pi\times60$ \\ 2000 & 100 & 200 & 100 & 70 & 20.96 & 5 & $2\pi\times418$ \\ 1000 & 100 & 200 & 100 & 70 & 10.5 & 10 & $2\pi\times701$ \\ 1000 & 100 & 100 & 100 & 70 & 1.63 & 20 & $2\pi\times941$ \\ \hline\hline \end{tabular} \end{center} \caption{ Typical $g_0$ for various spiral capacitor configurations \cite{4}. The first row corresponds to the unpatterned structure under tensile stress. The second row corresponds to what has been fabricated. Dimensions of $b$, $h$, $t$, and $d$ are given in nm. \label{Table1} } \end{table} \begin{figure} \caption{COMSOL simulation of the spiral with $N=5$. The fundamental frequency is found to be $f=20.96\,{\rm kHz}$.} \label{Fig3} \end{figure} In the superconducting state there is no parasitic resistance in the lumped equivalent circuit of the spiral capacitor; for non-superconducting states, it can be easily estimated by simple geometrical considerations. The parasitic inductance \cite{5} at the microwave frequencies of interest is also not a matter of concern, since the typical wavelength is orders of magnitude larger than the spiral diameter. Hence, the spiral is essentially so small that it remains equipotential everywhere and any parasitic inductance can be neglected. We did not attempt fabrication of the suspended inductor, despite its easier fabrication due to its much larger size. For spiral capacitors, the motional mass $m$ is roughly $2/3$ of the total mass. One can estimate $\Omega$ from a 1D cantilever equivalent \cite{6,7,8,9} with identical $b$, $h$, and curve length. However, the result is normally off from the correct value by two orders of magnitude, or even more. Hence, the deformation profiles and $\Omega$ must be found numerically for good accuracy, as discussed extensively in the supplementary material.
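The rough $g_0\propto\sqrt{N}$ scaling can be checked directly against the values of Table \ref{Table1}; a minimal sketch in Python (the numbers below are taken from the table, in units of $2\pi\times{\rm Hz}$; the check itself is ours, not part of the original analysis):

```python
import math

# g0 / (2*pi), in Hz, for the spiral rows of Table 1 (N turns -> g0)
g0 = {5: 418.0, 10: 701.0, 20: 941.0}
baseline = 60.0   # unpatterned drum, N = 0

# the 7- and 12-fold enhancements quoted in the text for N = 5 and N = 10
print(round(g0[5] / baseline), round(g0[10] / baseline))   # -> 7 12

# if g0 ~ sqrt(N), then g0 / sqrt(N) should be roughly constant across rows
for N, g in g0.items():
    print(N, round(g / math.sqrt(N)))   # all within ~10% of a common value
```

The three ratios cluster around a common value, consistent with the rough square-root dependence claimed in the text.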
Now, $x_{\rm zp}=\sqrt{\hbar/2m\Omega}$ can be found, which yields $g_0=x_{\rm zp}(\partial\omega/\partial x)$. \begin{figure} \caption{Fabrication process flow, based on the drum capacitor of an earlier study \cite{10}.} \label{Fig4} \end{figure} It is possible to redesign and remake all masks by EBL, so that the additional FIB step could be eliminated. Should the masks need to be redesigned for EBL, macros such as the one in the Supplementary Information can easily place a spiral with the desired shape parameters into the mask design. However, UV lithography would then no longer be possible, and the compatibility of E-Beam photoresists with the present process has yet to be investigated. The accuracy of UV lithography is good enough to support fabrication of suspended inductors if necessary. However, $h$ probably needs to be much more than 100nm to provide sufficient mechanical strength under the pinching force of the magnetic field. We are also not completely sure that FIB is irrelevant to the successful release; this has yet to be investigated in a much deeper study. Therefore, unless a rigorous process is to be developed from scratch, probably the most straightforward way to fabricate a spiral capacitor is to take the already available micro-drum capacitors \cite{10,11} before undercut and release, pattern them in the FIB, and then carry out the structure release and undercut at the end. The fabrication process flow used for this device is presented in Fig. \ref{Fig4}. \begin{figure} \caption{(a) SEM after FIB patterning of the second sample; (b) ion-beam image after FIB of the second sample; (c) normal SEM after the ${\rm XeF}_2$ undercut.} \label{Fig5} \end{figure} The FIB machine provides both SEM and ion-beam images at once while patterning. Figures \ref{Fig5}a and \ref{Fig5}b respectively show these images of the fabricated spiral. After the FIB, the sample was taken to the undercutting process with the gaseous ${\rm XeF}_2$ etch.
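The magnitudes entering $g_0=x_{\rm zp}(\partial\omega/\partial x)$ can be illustrated with a short sketch; the motional mass used below is an assumed, purely illustrative value, not a measured one:

```python
import math

hbar = 1.0545718e-34  # J*s

def x_zp(m_kg, f_hz):
    """Zero-point motion x_zp = sqrt(hbar / (2 m Omega)), with Omega = 2*pi*f."""
    return math.sqrt(hbar / (2.0 * m_kg * 2.0 * math.pi * f_hz))

# illustrative: an ASSUMED motional mass of 1 ng at the measured f = 21.6 kHz
x = x_zp(1e-12, 21.6e3)
print(f"x_zp = {x:.2e} m")   # of order 1e-14 m for these assumed values

# g0 then follows by multiplying x_zp with the cavity pull d(omega)/dx
```

The cavity pull $\partial\omega/\partial x$ depends on the circuit parameters and is obtained from the COMSOL deformation profiles as described in the supplementary material.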
It can be seen in the SEM photo in Fig. \ref{Fig5}c that the structure survived the undercut very well. It was possible to carry it around afterwards quite safely to the SEM zone and take a few additional SEM photos at normal and oblique incidence. It should be added that since $t<200{\rm nm}$, the spiral is totally invisible under an optical microscope. \begin{figure} \caption{Measurements of the mechanical response using optical interferometry under high vacuum and at room temperature: (a) natural response; (b) driven response around the resonance with $Q=148$ and $f=21.5\,{\rm kHz}$.} \label{Fig6} \end{figure} The mechanical response of the fabricated spiral capacitor was tested at room temperature with the sample placed in a high-vacuum chamber with a transparent quartz window, mounted on an isolated optical table. The reflection of a continuous red laser from the spiral surface at normal incidence was fed into an interferometric setup, allowing precise observation of the mechanical motion. The sample holder is mounted on a piezo-electric actuator which can be biased and excited by a sinusoidal signal. Measurements were done in two modes: (i) the natural response due to thermal fluctuations, with no external mechanical excitation, and (ii) the driven, or forced, response under sinusoidal excitation of the piezo-electric actuator with a tunable drive frequency. The driven measurement is needed to identify the right resonance peak, since the natural response of the mechanical structure exhibits many spurious modes resulting from the substrate and sample mount, along with the presence of $1/f$ noise. This is illustrated in Fig. \ref{Fig6}a. The driven measurement lets us identify a clear and unmistakable resonance, as shown in Fig. \ref{Fig6}b. However, the resulting $\Omega$ and quality factor $Q$ are not accurate, because the large mechanical power delivered to the sample shifts them from their original values.
Once the actual resonance is found by driving, the accurate $\Omega$ and $Q$ can be cleanly derived from the natural response. The results of measurements across the fundamental resonance are displayed in Figs. \ref{Fig6}a,b. Quite obviously, the two measurement modes are not exactly the same, and both $\Omega=2\pi\times f$ and $Q$ differ. While the driven response yields roughly $Q=148$ and $f=21.5{\rm kHz}$, the natural response exhibits roughly $Q=3.6\times10^3$ and $f=21.6{\rm kHz}$ at room temperature. As discussed above, we adopt the latter values. Interestingly, COMSOL simulation of the natural, or free, response predicts $f_0=20.96{\rm kHz}$, in very good agreement with both measurement modes. This confirms the accuracy of the numerical simulations as well as the successful fabrication and levitation of the spiral despite the very narrow gap to the substrate. It is extremely difficult to estimate $Q$ of spirals theoretically, and perhaps the most straightforward way is to fabricate them and measure their response. Nevertheless, the known mechanisms which limit $Q$ are quite diverse, including viscous loss from the residual pressure of the chamber ambient, phonon tunneling \cite{12} from coupling of the spiral tail to the mount, friction loss between Al grain boundaries, the finite electrical conductivity of non-superconducting Al coupled to the mechanical motion \cite{13,14,15} at higher temperatures, and ultimately the quantum electrodynamical friction of vacuum \cite{16,17}. It is a well-known, yet not theoretically explained, experimental fact that measurements on superconducting mechanical oscillators below the critical transition temperature usually show a typical four- to ten-fold increase in $Q$. Hence, it can be expected that at temperatures at which Al superconducts, $Q$ would increase to much higher values.
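For reference, $Q$ of the natural response follows from the ratio of resonance frequency to linewidth, $Q=f_0/\Delta f$; the quoted $f=21.6\,{\rm kHz}$ and $Q=3.6\times10^3$ imply a linewidth of about 6 Hz. A minimal sketch, with a synthetic Lorentzian standing in for the measured spectrum:

```python
import numpy as np

f0, Q = 21.6e3, 3.6e3        # natural-response values quoted in the text
fwhm = f0 / Q                # expected linewidth, ~6 Hz

# synthetic displacement power spectrum: a Lorentzian of width fwhm
f = np.linspace(f0 - 50.0, f0 + 50.0, 20001)
S = 1.0 / (1.0 + ((f - f0) / (fwhm / 2.0)) ** 2)

# read Q back off the half-maximum crossings of the peak
half = f[S >= 0.5]
Q_est = f0 / (half[-1] - half[0])
print(Q_est)                 # ~3.6e3, recovering the quoted quality factor
```

On real data the same half-maximum estimate is applied to the measured spectrum after subtracting the noise floor.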
Unfortunately, the present experimental setup of our interferometric measurement is not cryogenically cooled and operates only at room temperature, preventing further investigation of this effect. However, similar observations have been made on amorphous silica \cite{18}, which reveal a significant increase in $Q$, of up to three orders of magnitude, at cryogenic temperatures. The presented designs are only for uniform spirals. Non-uniform spirals could possibly lead to further improved $g_0$ without degrading the noise performance, which has been shown to be true for inductors in the Supplementary Information. Finally, suspended inductors might be fabricated along with spiral capacitors, to permit access to both quadratures of the electromagnetic field. The capacitor should be remade, put in an LC circuit, and tested to make sure that it is not short-circuited inside. Its noise performance should be carefully investigated, since a lower $\Omega$ implies a larger phonon occupation number. In conclusion, a method to enhance $g_0$ was presented for electromechanical quantum superconducting circuits as well as sensing applications. A detailed theoretical model and numerical simulation were developed. The effects of various parameters were studied, and it was demonstrated that FIB can provide an easy means for rapid prototyping, without any need to redesign or optimize the earlier fabrication masks and steps. It was shown that the spirals can survive the ${\rm XeF}_2$ undercut. The results of this study open up new possibilities and applications in sensing, electromechanics, quantum circuits \cite{19}, and other sorts of electromechanical systems. \section*{Supplementary Material} See supplementary material for details of the developed thin-wire formalism, design of spiral inductors, spiral capacitors, numerical COMSOL simulations, and fabrication process flow. \begin{acknowledgments} Discussions with Prof. Guillermo Villanueva, Dr. Cyrille Hibert, Dr. Philippe Langlet, Dr. Christophe Galland, Dr.
Alexey Feofanov, and Mr. Amir Hossein Ghadimi are appreciated. The initial unpatterned micro-drum capacitor was provided by Daniel T\'{o}th. Fabrication was done at the Center for Microtechnology (CMi) of EPFL, with the help of Ryan Schilling, Cl\'{e}ment Javerzac-Galy, Dr. Joffrey Pernollet, and Ms. Nahid Hosseini. Interferometric characterization was done by Dr. Nils Johan Engelsen. This work has been supported by the Laboratory of Photonics and Quantum Measurements (LPQM) at EPFL and the Research Deputy of Sharif University of Technology. \end{acknowledgments} \foreach \x in {1,...,21} { \includepdf[pages={\x}]{SpiralSuppl} } \end{document}
\begin{document} \title{Controlling entanglement by direct quantum feedback} \author{A. R. R. Carvalho}\author{A. J. S. Reid} \affiliation{Department of Physics, Faculty of Science, The Australian National University, ACT 0200, Australia} \author{J. J. Hope} \affiliation{Australian Centre for Quantum-Atom Optics, Department of Physics, Faculty of Science, The Australian National University, ACT 0200, Australia} \date{\today} \begin{abstract} We discuss the generation of entanglement between electronic states of two atoms in a cavity using direct quantum feedback schemes. We compare the effects of different control Hamiltonians and detection processes on the performance of entanglement production and show that the quantum-jump-based feedback proposed by us in Phys. Rev. A {\bf 76} 010301(R) (2007) can protect highly entangled states against decoherence. We provide analytical results that explain the robustness of jump feedback, and also analyse the prospects for experimental implementation by scrutinising the effects of imperfections and approximations in our model. \end{abstract} \pacs{03.67.Mn,42.50Lc,03.65.Yz} \maketitle \section{Introduction} Recent experimental advances have enabled individual systems to be monitored and manipulated at the quantum level in real time~\cite{hood98,julsgard01,lu03,geremia04,ottl05,puppe07}. As new advances continue to emerge, and the development of control strategies for quantum systems becomes essential, quantum technology inevitably approaches the well-developed classical control theory, borrowing its concepts and extending them to the quantum realm. An important example is feedback control, which consists of the manipulation of the system according to the information acquired through measurement.
Feedback control has been applied to quantum systems~\cite{belav,wise_milb93,wiseman94,doherty99}, including its implementation in a variety of experimental setups~\cite{geremia04,reiner04,morrow02,bushev06}. Relying on the ability to produce specific states and perform controlled operations on them, quantum information is undoubtedly an area that would benefit from further advances in quantum control. Of particular interest in this context is the preparation of entangled states, indispensable for quantum information processing. Notwithstanding the considerable number of experiments carried out on the generation of entanglement~\cite{bouwmeester99,rauschenbeutel00,sackett00, panPRL01, roos}, and the effort to protect the system against undesirable imperfections and interactions, entanglement decay due to uncontrolled coupling with the environment remains a major problem yet to be overcome~\cite{roosPRL04,eberly_04,arrc_mpd}. In this scenario, quantum feedback emerges as a possible route to develop strategies to circumvent entanglement deterioration. In fact, quantum feedback control has recently been used to improve the creation of steady-state entanglement in open quantum systems. Markovian, or direct, feedback~\cite{wise_milb93,wiseman94} was used in~\cite{mancini05,wang05} to show that states with entanglement corresponding to one third of the maximum possible value can be produced. Maximally entangled states could be achieved~\cite{stockton04} in an idealised situation by the use of Bayesian~\cite{doherty99}, or state-estimation, feedback. This improvement, however, comes at the cost of increased experimental complexity, due to the need for real-time estimation of the quantum state in the latter method, as compared to the simple feedback directly proportional to the measurement signal proposed by the former strategy.
Despite being simpler to implement, direct feedback still exhibits a multitude of possibilities due to the arbitrariness of the choices of control Hamiltonian and measurement scheme. In a recent Rapid Communication~\cite{jumpfeedback}, we have shown that an appropriate selection of the feedback Hamiltonian and detection strategy leads to the robust production of highly entangled states of two atoms in a cavity. In this contribution, we will further explore the richness of feedback strategies by comparing the jump-based scheme proposed in~\cite{jumpfeedback} with a strategy based on homodyne measurements~\cite{wang05}, taking into account the unavoidable imperfections that may occur in realistic experimental implementations. The paper is organized as follows. In the first part of Section~\ref{sec:model} we introduce the model of the system and its master equation in the absence of feedback. In the second part we describe the introduction of feedback and the corresponding changes in the master equation for two different choices of measurement strategy, namely, photodetection and homodyning. Entanglement dynamics is examined in Section~\ref{sec:entang}, and the role of feedback in entanglement generation is analysed. Robustness issues are discussed in Section~\ref{sec:robust}, where the effects of detection inefficiencies and spontaneous emission on steady-state entanglement are considered. Section~\ref{sec:exp} discusses different experimental regimes and explores the non-adiabatic limit of the model. Section~\ref{sec:conc} concludes with a discussion of our main results. \section{The model}\label{sec:model} The system consists of a pair of two-level atoms coupled resonantly to a single cavity mode, with equal coupling strength $g$, and simultaneously driven by a laser field with Rabi frequency $\Omega$. The cavity mode is damped with decay rate $\kappa$ and the atoms can spontaneously decay with rates $\gamma_1$ and $\gamma_2$.
Feedback is applied conditioned on the measurement of photons leaving the cavity, as schematically shown in Fig.~\ref{fig1}. \begin{figure} \includegraphics[width=6.0cm]{figura1b.eps} \caption{Schematic view of the model. The system consists of a pair of two-level atoms coupled to a cavity, which is driven by a laser field and damped. Conditioned on the measurement of the output of the leaky cavity, a Hamiltonian is applied to the atoms, completing the feedback scheme.} \label{fig1} \end{figure} In the absence of feedback, the master equation describing the system is given by \begin{eqnarray} \dot \rho=-i \Omega \left[(J_+ + J_-),\rho \right] -i g \left[(J_+ a + J_- a^\dagger),\rho \right] \nonumber \\ + \kappa {\cal D}[a]\rho + \sum_i\gamma_i{\cal D}[\sigma_i]\rho. \label{eq:nofb_full} \end{eqnarray} The first and second terms represent the Hamiltonian evolution induced by the laser driving and the atom-cavity coupling, respectively. The superoperator \begin{equation} {\cal D}[c]\rho \equiv c \rho c^{\dagger}-\frac{1}{2}\left(c^{\dagger}c \rho+\rho c^{\dagger}c\right) \end{equation} describes the cavity and atomic decays, given in terms of $a$, the annihilation operator of photons in the cavity, and $\sigma_i=\ket{g_i}\bra{e_i}$, the lowering operator for the $i$-th two-level atom, respectively. The angular momentum operators are defined as \begin{eqnarray} \label{J_operators} J_{-}=\sigma_1+\sigma_2,\\ J_+=\sigma_1^{+}+\sigma_2^{+}, \end{eqnarray} with $\sigma_i^{+}=\ket{e_i}\bra{g_i}$ the raising operator for the atomic electronic levels.
In the limit where the cavity decay rate $\kappa$ is much larger than the other relevant frequencies of the problem, the cavity mode can be adiabatically eliminated and one obtains a master equation just for the atomic degrees of freedom~\cite{wang05} \begin{equation}\label{eq:me_total} \dot \rho=-i \Omega \left[(J_+ + J_-),\rho \right] + \Gamma {\cal D}[J_-]\rho + \sum_i \gamma_i{\cal D}[\sigma_i]\rho, \end{equation} where $\Gamma=g^2/\kappa$ is the effective collective decay rate. Furthermore, the Dicke model~\cite{agarwal74} can be recovered under the assumption that the collective decay rate is much larger than the spontaneous emission rates, $\Gamma \gg \gamma_1,\, \gamma_2$, \begin{equation} \label{eq:me_dicke} \dot \rho= {\cal L} \rho=-i\Omega\left[(J_+ + J_-),\rho \right] + \Gamma {\cal D}[J_-]\rho. \end{equation} Later we shall relax the approximations used to obtain Eqs.~(\ref{eq:me_total}) and~(\ref{eq:me_dicke}): The non-adiabatic regime is important when considering experimental situations where high quality cavities are used, so it will be discussed in Section~\ref{sec:exp}. Section~\ref{sec:robust} will show that even small spontaneous emission rates can have a drastic deleterious effect on the final amount of entanglement produced in this system. For the moment, Eq.~(\ref{eq:me_dicke}) will be the starting point to introduce the description of the measurement scheme and the feedback mechanism. Here we will consider the direct, or Markovian, feedback introduced by Wiseman and Milburn~\cite{wise_milb93,wiseman94}, where the control Hamiltonian is proportional to the measurement signal. This control mechanism has the advantage of being simple to apply in practice, since it avoids the challenge of real time state estimation required in Bayesian, or state-based, feedback~\cite{doherty99}. 
The idea is depicted in Fig.~\ref{fig1}: the cavity output is measured by a detection scheme $M$ whose signal $I(t)$ provides the input to the application of the control Hamiltonian $H_{\rm fb}=I(t)\,F$. Here, we will consider the measurement stage $M$ to be either a homodyne or a direct photodetection of the output field. In the homodyne-based scheme, the detector registers a continuous photocurrent, and the feedback Hamiltonian is constantly applied to the system. Conversely, in the photocounting-based strategy, the absence of signal predominates and the control is only triggered after a detection click, {\it i.e.} a quantum jump, occurs. This is reflected in the different forms of the equations representing the dynamics of a feedback-controlled system under either of the measurement schemes. In the homodyne case, Eq.~(\ref{eq:me_dicke}) becomes~\cite{wise_milb93,wiseman94} \begin{eqnarray} \label{eq:hd} \dot \rho= {\cal L}_h \rho=-\frac{i}{\hbar}\Omega\left[(J_+ + J_-),\rho \right] + \Gamma {\cal D}[J_-]\rho+ \frac{1}{\Gamma}{\cal D}[F]\rho \nonumber \\-\frac{i}{\hbar}\left[F,-iJ_-\rho+i\rho J_+\right]. \end{eqnarray} This equation was used by Wang, Wiseman and Milburn (WWM)~\cite{wang05} to show that the amount of steady state entanglement in this atom-cavity model can be enhanced above that generated by the uncontrolled dynamics~\cite{schneider02}. More recently, the same equation was used to explore the effect of different feedback Hamiltonians on entanglement generation~\cite{li08}. In a measurement scenario based on photodetections, the unconditioned dynamics, including feedback, reads~\cite{wiseman94} \begin{equation} \label{eq:pd} \dot \rho={\cal L}_p \rho=-\frac{i}{\hbar}\Omega\left[(J_+ + J_-),\rho \right] + \Gamma {\cal D}[U J_-]\rho.
\end{equation} The manifestation of the abrupt character of the jump feedback becomes clear when one writes the last term of Eq.~(\ref{eq:pd}) explicitly: ${\cal D}[U J_-]\rho= U J_- \rho J_+ U^{\dagger}-(J_+ J_- \rho+\rho J_+ J_-)/2$. The unitary transformation $U=\exp\left[-i F \delta t/\hbar \right]$, representing the finite amount of evolution imposed by the control Hamiltonian on the system, acts only immediately after a detection event, which is described by the first term of the superoperator ${\cal D}$. Note that this is equivalent to replacing the jump operator $J_-$ in the master equation by $UJ_-$. In this way, one can engineer the reservoir dynamics~\cite{poyatos,arrc_sp} via the transformation $U$ to generate highly entangled states. From the differences between Eqs.~(\ref{eq:hd}) and (\ref{eq:pd}) it is evident that the resulting dynamics should strongly depend on the choice of measurement strategy. Here, we will explore this fact, together with the possibility of changing the feedback Hamiltonian, to show how they affect the dynamics of the system and, in particular, its asymptotic entanglement. \section{Steady state entanglement and feedback} \label{sec:entang} \subsection{Dynamics without feedback} \label{sec:nocontrol} The steady state solutions for the Dicke model, Eq.~(\ref{eq:me_dicke}), were obtained many years ago~\cite{puri79,drummond80}, but only recently were their entanglement properties brought to attention~\cite{schneider02} and explored in the context of quantum information. The first important feature of Eq.~(\ref{eq:me_dicke}) is the fact that it is symmetric with respect to exchange of the atoms.
This suggests that, instead of using the two-qubit basis $\{ \ket{gg},\,\ket{ge},\,\ket{eg},\,\ket{ee}\}$, one should use angular momentum states, and analyse the system in terms of the symmetric ($j=1$) \begin{equation} \label{symm} \ket{1}=\ket{gg}, \:\:\: \ket{2}=\frac{\ket{ge}+\ket{eg}}{\sqrt{2}}, \:\:\: \ket{3}=\ket{ee}, \end{equation} and anti-symmetric ($j=0$) \begin{equation} \label{asymm} \ket{4}=\frac{\ket{ge}-\ket{eg}}{\sqrt{2}} \end{equation} subspaces. The anti-symmetric subspace is a decoherence-free subspace~\cite{lidar98,zanardi97} and the state $\ket{4}$ is therefore a steady state solution of Eq.~(\ref{eq:me_dicke}). This is an interesting case, despite its trivial dynamics, since the asymptotic state in this subspace is a pure, maximally entangled one. In fact, this situation was explored in a recent proposal for producing Werner states in a system of atoms inside a cavity~\cite{agarwal06} and in a probabilistic scheme to generate the singlet state via quantum-jump detection~\cite{plenio99}. In terms of dynamics, however, the symmetric case is more interesting: while in the anti-symmetric subspace an initially prepared Bell state $\ket{4}$ does not evolve at all, entanglement can be dynamically generated from any symmetric initial condition, even from initially separable states. However, even for optimal parameters, the amount of entanglement in this case is only about $10 \%$ of the Bell state's value~\cite{schneider02}. For a general asymmetric initial condition the situation becomes more complicated, as both symmetric and anti-symmetric components are present. This is exemplified in Fig.~\ref{fig2}, where the steady state entanglement, measured by the concurrence~\cite{wot98}, is shown as a function of the ratio $\Omega/\Gamma$ for an asymmetric ($\ket{ge}$, solid line) and a symmetric ($\ket{gg}$, dashed line) initial state.
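That $\ket{4}$ is annihilated by both the collective decay and the driving term, and hence stationary, can be verified numerically; a minimal sketch of the right-hand side of Eq.~(\ref{eq:me_dicke}) (parameter values are illustrative):

```python
import numpy as np

# single-atom lowering operator sigma = |g><e| in the basis (|e>, |g>)
e = np.array([1, 0], dtype=complex)
g = np.array([0, 1], dtype=complex)
sm = np.outer(g, e)
I2 = np.eye(2, dtype=complex)

# collective operators for the two atoms
s1, s2 = np.kron(sm, I2), np.kron(I2, sm)
Jm = s1 + s2
Jx = Jm + Jm.conj().T

def dicke_rhs(rho, Omega=0.38, Gamma=1.0):
    """Right-hand side of the Dicke master equation (eq:me_dicke)."""
    comm = Jx @ rho - rho @ Jx
    diss = Jm @ rho @ Jm.conj().T - 0.5 * (Jm.conj().T @ Jm @ rho
                                           + rho @ Jm.conj().T @ Jm)
    return -1j * Omega * comm + Gamma * diss

# anti-symmetric Bell state |4> = (|ge> - |eg>)/sqrt(2)
singlet = (np.kron(g, e) - np.kron(e, g)) / np.sqrt(2)
rho4 = np.outer(singlet, singlet.conj())

print(np.abs(dicke_rhs(rho4)).max())   # ~0: |4><4| is stationary
```

Since $J_-\ket{4}=J_+\ket{4}=0$, both the dissipator and the commutator vanish on $\ket{4}\bra{4}$ for any $\Omega$ and $\Gamma$, consistent with the decoherence-free-subspace argument.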
For zero driving, the symmetric part of the asymmetric state evolves towards the ground state ($\ket{gg}$) and the entanglement is entirely due to the anti-symmetric component ($\ket{4}$). Tuning the parameters to obtain the maximum entanglement for the symmetric component ($\Omega/\Gamma \approx 0.38$) leads to a smaller amount of entanglement in the asymmetric case. This happens because in the latter case the solutions arising from the different subspaces interfere, eventually resulting in a less entangled state. \begin{figure} \includegraphics[width=9.0cm]{figure2.eps} \caption{Steady state concurrence as a function of the ratio $\Omega/\Gamma$ for the initial states $\ket{ge}$ (solid line) and $\ket{gg}$ (dashed line). In the anti-symmetric subspace there is no evolution and the system is always in the maximally entangled state $\ket{4}$ (not shown), while for the symmetric part the maximum entanglement is reached for $\Omega/\Gamma \approx 0.38$~\cite{schneider02}. The steady state for the asymmetric initial condition strongly depends on the interplay between the symmetric and anti-symmetric solutions.} \label{fig2} \end{figure} \subsection{Dynamics with feedback} \label{sec:control} Inclusion of Markovian feedback can significantly increase the amount of asymptotic entanglement as compared to the situation described in Fig.~\ref{fig2}. This was first shown in~\cite{wang05} for a homodyne-based feedback, where the steady state entanglement was approximately 3 times larger than in the non-controlled case. Also in the case of homodyne detection, a steady-state entanglement of $80\%$ of the maximal possible value was obtained~\cite{li08} using the local asymmetric feedback law proposed in~\cite{jumpfeedback}. In this section we shall show how to improve these results by designing different feedback schemes. \subsubsection{The role of measurement: homodyne vs.
photodetection-based feedback} \label{sec:measure} We begin our analysis by recalling the WWM results~\cite{wang05}. Their scheme fits the general scenario depicted in Fig.~\ref{fig1} when the measurement stage $M$ is a homodyne detection and the feedback Hamiltonian is $F=\lambda J_x=\lambda \left(J_-+J_+\right)$. In fact, the form of the control Hamiltonian coincides with the one appearing in Eq.~(\ref{eq:me_dicke}), and it was proposed in~\cite{wang05} to be implemented via a modulation of the field driving the cavity. With this choice of control, the final equation, Eq.~(\ref{eq:hd}), retains the symmetry properties with respect to exchange of atoms. Assuming a symmetric initial condition, the system will remain in the subspace given by Eq.~(\ref{symm}) and an analytical solution for the steady state can be found~\cite{wang05}. Figure~\ref{fig3}a shows the entanglement of these solutions as a function of the driving and feedback strengths, with a maximum concurrence $c \approx 0.3$ obtained for $\Omega/\Gamma \approx \pm 0.4$ and $\lambda/\Gamma\approx -0.8$. \begin{figure} \includegraphics[width=8.0cm]{figure3.eps} \caption{(Color online) Steady state concurrence as a function of driving and feedback strengths (left) for homodyne (a) and photo-detection feedback (c). The homodyne case reproduces the result of~\cite{wang05} with a maximum concurrence $c\approx 0.31$. Using a jump feedback strategy, the maximum entanglement increases to $c\approx 0.49$. On the right, the absolute value of the density matrix elements (in the angular momentum basis) for the parameters corresponding to the maximum entanglement in the left panel.} \label{fig3} \end{figure} Now, we change the measurement process from homodyne to photo-detection and investigate the effects of this replacement on entanglement generation. For the moment, we keep the same form for the control Hamiltonian in order to focus on the changes induced solely by the alteration of the detection scheme.
Note that, as mentioned before, the feedback is now applied only when a detection occurs and is described by the action of the operator $U=\exp\left[-i F \delta t/\hbar \right]\equiv \exp[-i \tilde \lambda J_x]$. The asymptotic solution in the symmetric subspace can be calculated from Eq.~(\ref{eq:pd}) and the corresponding entanglement is shown in Fig.~\ref{fig3}c. Now, the maximum concurrence, $c\approx 0.49$, occurs at $\Omega/\Gamma \approx -0.0023$ and $\tilde \lambda\approx \pm 1.57$, exceeding the value obtained via homodyne-based feedback. The sharp differences between the two feedback schemes are evidenced by the contrasting plots of Figs.~\ref{fig3} (a) and (c) and by the form of the steady states (at the peaks) depicted in Figs.~\ref{fig3} (b) and (d), where the absolute values of the density matrix elements in the angular momentum basis, Eqs.~(\ref{symm}) and~(\ref{asymm}), are shown. In the jump feedback case, the final state at the peak concurrence is close to a mixture of $\ket{3}$ and the Bell state $\ket{2}$, the latter being the state responsible for the entanglement. In the homodyne case, the abundance of different off-diagonal elements suggests a more complicated structure. Understanding these differences will be important for the analysis of spontaneous emission effects in Section~\ref{sec:robust}. \subsubsection{The role of the control Hamiltonian} \label{sec:local} A change in the feedback Hamiltonian to improve the control introduces an enormous range of new possibilities, even when considering the limitations imposed by constraints on experimental feasibility. In this section, however, we will restrict the discussion to the case of a local feedback, {\it i.e.} when a control Hamiltonian acting on just one of the atoms is employed, as proposed in~\cite{jumpfeedback}.
The reason for this choice is that a local feedback breaks the symmetry of the system, making it possible to explore the interplay between the symmetric and anti-symmetric subspaces from a dynamical point of view. Under the symmetric control described previously, all symmetric initial states lead to the same final stationary solution, while the anti-symmetric state is itself stationary. There are, however, infinitely many stationary states mixing both subspaces, which are unambiguously determined by the choice of initial conditions. The first important point to address is then how the relaxation properties of the feedback master equations change under the new asymmetric control law. Our discussion here will be based on the coherence-vector formalism as described by Lendi in~\cite{alicki_87}. In this approach, the original master equation is transformed into a linear system of equations in real space, {\it i.e.} $\dot \rho(t)={\cal L} \rho(t) \rightarrow {\dot{\vec v}}(t) = G \vec v(t) + \vec k$, and the stationarity properties are encoded in the matrix $G$. A well-known example of such a transformation is the description of two-level systems in terms of the Bloch equations. For our purposes it suffices to note that if ${\rm det} (G) \neq 0$ then the system admits a unique stationary solution given by $\vec v_{ss}=-G^{-1} \vec k$. If ${\rm det} (G) = 0$ then two situations may arise: either the equation $G \vec v(t) + \vec k = 0$ has no solution at all, or it has infinitely many, determined by the initial condition. The matrix $G$ can be straightforwardly calculated from the feedback master equations, and it is obvious from our previous discussion of the symmetric control $J_x$ that, in that case, $G$ is singular, since there was more than one stationary solution.
However, for a general local control of the form $U=U_1\otimes \mathbbm 1$, the matrix is non-singular (except for the trivial cases of $U_1 = \mathbbm 1$ and $\Omega = 0$) and there is a unique solution. Moreover, note that, in the case of jump-based feedback, the anti-symmetric Bell state $\ket{4}$ remains stationary for {\it any} choice of $U$ and is therefore the only steady state of the system. In the case of homodyne feedback, the steady state solution is also unique, but depends on the form of $U_1$ and on the parameters $\lambda$ and $\Omega$. Figure~\ref{figurelocal} illustrates the effect of local control on the steady state entanglement for the particular choice \begin{equation} \label{localham} F=\lambda \, \sigma_x \otimes \mathbbm{1}, \end{equation} with $\sigma_x= \sigma_1 + \sigma_1^+$, for homodyne (a) and photo-detection (c) based feedback. In the homodyne case, there is a substantial increase in the asymptotic entanglement as compared to the $J_x$ control of Fig.~\ref{fig3}. The final state with the largest amount of entanglement now corresponds to a combination of the anti-symmetric component $\ket{4}$ with the symmetric ones, $\ket{1}$ and $\ket{2}$ (see Fig.~\ref{figurelocal}b), with $c=0.81$ for $\lambda/ \Gamma=\pm 0.01$ and $\Omega/\Gamma=\pm 0.07$. As discussed in the previous paragraph, for a photo-detection feedback based on a local control law, a maximally entangled Bell state is generated from any initial condition for all non-trivial parameters, as shown by the plateaux in Fig.~\ref{figurelocal}c and the density matrix elements in Fig.~\ref{figurelocal}d. \begin{figure} \includegraphics[width=8.0cm]{figurelocal.eps} \caption{(Color online) Steady state concurrence for a homodyne (a) and photo-detection (c) feedback with local control $\sigma_x$. There is a huge improvement as compared to the situation depicted in Fig.~\ref{fig3}.
For a local control, a pure anti-symmetric Bell state is obtained for all parameters (except in the cases with zero driving $\Omega=0$ or no feedback $\tilde \lambda = 2 \pi m$), and all possible forms of $U_1$. On the right, the absolute value of the density matrix elements for the peaks on the left plots. } \label{figurelocal} \end{figure} Since this steady state can also be obtained directly from the non-controlled system, one could, at first sight, question the relevance of the use of feedback in this case. Indeed, if one considers the model described by Eq.~(\ref{eq:me_dicke}) without any kind of imperfection, then the jump feedback with local control would represent no advantage over the non-controlled case. Nonetheless, imperfections are inherent to any real physical system, and should be considered. Without control, for example, the asymmetric Bell state has to be produced beforehand to be unaffected by the dynamics, and any symmetric component introduced by a non-ideal preparation would spoil the final entanglement (see Section~\ref{sec:nocontrol}). Conversely, with feedback the system naturally evolves to the pure entangled state $\ket{4}$ for any initial condition. However, besides non-ideal initial state preparation, other sorts of imperfection, such as detection inefficiency or errors in the production of the feedback Hamiltonian, may arise from the feedback implementation, and will be investigated in the next section. \section{Robustness of control} \label{sec:robust} \subsection{Spontaneous emission effects} All our previous discussions were based on the application of different kinds of feedback control on the model described by Eq.~(\ref{eq:me_dicke}), where spontaneous emission effects were neglected. However, even if all other sources of imperfection were surmounted, spontaneous emission would still be the fundamental limiting factor for the existence of entanglement in a system of atomic qubits.
Consequently, the ultimate goal for the feedback schemes investigated here would be the production of steady state entanglement under a model that includes this effect. The starting point is therefore Eq.~(\ref{eq:me_total}). The new feedback equations are similar to Eqs.~(\ref{eq:hd}) and~(\ref{eq:pd}), differing only by the addition of the last two terms of Eq.~(\ref{eq:me_total}). The first thing to note is that those terms break the decoherence-free condition for the anti-symmetric subspace and the advantages of using feedback will become even more evident. Analytical solutions for the steady states can still be found but, being generally cumbersome, will not be presented here in their complete form. Instead, we will analyse the regime where spontaneous emission effects can be considered as a perturbation to the system, and focus on the qualitative understanding of how these effects influence the feedback schemes discussed in the previous section. Let us consider first the scheme with symmetric control where, in the absence of atomic decay, symmetric and anti-symmetric subspaces were decoupled and the system presented infinitely many stationary solutions. Spontaneous emission, however, introduces a coupling between the subspaces as it pushes the system towards its ground state. During this process, the anti-symmetric component $\ket{4}$ is produced, breaking the symmetry of the system. This has a huge impact on the new steady state of the system, which is now unique and, for most of the parameters, has no resemblance to the corresponding state for $\gamma=0$. The corresponding steady state entanglement, shown in Fig.~\ref{figureSE}c, is drastically reduced as compared to the situation of Fig.~\ref{fig3}b, despite the small value of the atomic decay rate. The scenario changes drastically when the local feedback Hamiltonian~(\ref{localham}) is considered. 
In this case, even without spontaneous decay, the system could explore the whole Hilbert space as the original symmetry was already broken by the control Hamiltonian. Moreover, the steady states, for both homodyne and jump-based feedback, were unique and highly entangled. The results for the steady state entanglement for $\gamma=0.01 \, \Gamma$ are shown in Figs.~\ref{figureSE}b and \ref{figureSE}d for homodyne and jump feedback, respectively. Differently from the $J_x$ control case, spontaneous emission has indeed only a perturbative effect, slightly reducing the maximum value of steady state entanglement (compare with Fig.~\ref{figurelocal}). The figure also indicates that, for the local jump control, the steady state remains close to the anti-symmetric Bell state for most values of $\lambda$ and $\Omega$, preserving the plateau structure shown in Fig.~\ref{figurelocal}c (see Fig.~\ref{figureSE}d). In fact, this can be shown directly from the analytical solution including atomic decay. The full solution is complicated but, expanding in powers of $\gamma/\Gamma$ and keeping only the lowest order term, one obtains \begin{eqnarray} \label{sss_jumplocal} \rho_{44} &\approx& 1-O(\gamma/\Gamma), \nonumber \\ \rho_{ij} &\approx& O(\gamma/\Gamma), \: i,j\neq 4. \end{eqnarray} The final state is therefore close to the anti-symmetric Bell state. The first order terms depend on the parameters $\Omega$ and $\tilde \lambda$, and this dependence is responsible for the small deviations from the plateau structure of Fig.~\ref{figurelocal}c. The major reason for this robustness of a local control strategy using both detection schemes lies in the uniqueness of the steady state without decay. For any initial condition, the system is driven to the target state and, therefore, as soon as spontaneous decay forces the system away from this state, the feedback dynamics counteracts this tendency.
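The expansion (\ref{sss_jumplocal}) can also be checked by brute force: include the local decay terms in the vectorized generator, extract the now unique steady state from its kernel, and inspect the population of $\ket{4}$. A sketch with illustrative parameters ($\Gamma=1$, $\gamma=0.01$, and a driving/feedback pair chosen arbitrarily on the plateau; not the authors' code):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
Jm = np.kron(sm, I2) + np.kron(I2, sm)
H  = Jm.conj().T + Jm                               # (J_+ + J_-)

def liouvillian(H, collapse_ops):
    """rho -> -i[H,rho] + sum_c D[c]rho as a matrix on row-major vec(rho)."""
    d = H.shape[0]
    Id = np.eye(d)
    L = -1j * (np.kron(H, Id) - np.kron(Id, H.T))
    for c in collapse_ops:
        cdc = c.conj().T @ c
        L += np.kron(c, c.conj()) - 0.5 * np.kron(cdc, Id) - 0.5 * np.kron(Id, cdc.T)
    return L

Omega, lam, gamma = 0.5, 1.0, 0.01                  # illustrative, Gamma = 1
U = np.kron(np.cos(lam) * I2 - 1j * np.sin(lam) * sx, I2)
cs = [U @ Jm,                                       # feedback-modified collective jump
      np.sqrt(gamma) * np.kron(sm, I2),             # spontaneous emission, atom 1
      np.sqrt(gamma) * np.kron(I2, sm)]             # spontaneous emission, atom 2
L = liouvillian(Omega * H, cs)

# Steady state = kernel of L (unique once gamma > 0 couples the subspaces)
_, _, Vh = np.linalg.svd(L)
rho = Vh[-1].conj().reshape(4, 4)
rho = rho / np.trace(rho)                           # fix arbitrary phase and scale
rho = (rho + rho.conj().T) / 2

# Population of the anti-symmetric Bell state |4> = (|01> - |10>)/sqrt(2);
# per Eq. (sss_jumplocal) it should stay close to 1 for gamma/Gamma = 0.01.
psi4 = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho44 = np.real(psi4.conj() @ rho @ psi4)
```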
This competition between feedback and decay dynamics determines how far the perturbed stationary state will be from the original one. Consequently, as long as the ratio between atomic and collective decay rates remains small, the final state will remain highly entangled. These results are not restricted to a feedback Hamiltonian of the form Eq.~(\ref{localham}). In fact, similar steady state entanglement is obtained for other local control laws, though less entanglement can also be observed in particular regions of parameters. Therefore, there is still room for improvement in the results shown in Fig.~\ref{figureSE} as one can, in principle, optimise over the forms of $U$ to get even larger final entanglement. \begin{figure} \includegraphics[width=8.0cm]{figureSE3d_new.eps} \caption{(Color online) Spontaneous emission effects on the steady state concurrence for the cases corresponding to Fig.~\ref{fig3} (left, $J_x$ control) and Fig.~\ref{figurelocal} (right, $\sigma_x$ control). Even for a small decay rate ($\gamma/\Gamma=0.01$), the effects for a $J_x$ control are pronounced for both homodyne (top) and jump (bottom) detection schemes. In the local control case (right) the shape seen in Fig.~\ref{figurelocal} is left basically unaltered with only a small decrease in the maximum value of concurrence. Remarkably, for the local control Eq.~(\ref{localham}) with photo-detection (d), a highly entangled state is stabilised for almost all parameter space.} \label{figureSE} \end{figure} \subsection{Effects of detection inefficiencies} Since feedback relies on the manipulation of the system based on information gained by a measurement, evaluation of the effects of inefficiencies in the detection process is important. Finite efficiency can be introduced both for photo-detection~\cite{wise_milb_jump93} and homodyne measurements~\cite{wise_milb_homo93}, and this has been explicitly considered for homodyne feedback schemes~\cite{wise_milb93,wiseman02,wang01}.
From now on, we will consider only the jump-based feedback, since it is the scheme with the best performance for entanglement generation. In this case, the extension of Eq.~(\ref{eq:pd}) to allow for non-unit-efficiency detection can be done by identifying two distinct situations when a jump occurs: in the first the detector clicks and the feedback transformation $U$ acts on the system, in the second the detector fails to click and no control is applied. The corresponding equation reads \begin{eqnarray} \label{eq:pd_eta} \dot \rho &=& -\frac{i}{\hbar}\Omega\left[(J_+ + J_-),\rho \right] + \Gamma \eta {\cal D}[U J_-]\rho \nonumber \\ && + \Gamma \left(1-\eta\right) {\cal D}[J_-]\rho + \sum_i \gamma_i{\cal D}[\sigma_i]\rho . \end{eqnarray} When the detector efficiency $\eta$ is zero, no information is extracted from the measurement and the equation reduces to the equation without feedback, Eq.~(\ref{eq:me_dicke}). Evidently, for a unit-efficiency detector Eq.~(\ref{eq:pd}) is regained, and, for a local control, a maximally entangled steady state is reached. In the intermediate case where $0<\eta<1$, one would expect imperfect knowledge gain to lead to inefficient control. Note however that, neglecting spontaneous emission, the anti-symmetric Bell state is a steady state of both Eqs.~(\ref{eq:me_dicke}) and~(\ref{eq:pd}), and one can show, proceeding exactly as in the unit-efficiency case, that this also holds true for Eq.~(\ref{eq:pd_eta}) for any $\eta >0$. This is a peculiar feature of this dynamics: while the feedback term tries to move the system to the state $\ket{4}$, the term corresponding to a missed click does not affect this anti-symmetric component, imposing only a delay in the time taken to reach the steady state~\cite{jumpfeedback}. This is no longer true if atomic decay is taken into account, as undetected events imply missed opportunities to apply the feedback, and hence an effectively weaker feedback as compared to spontaneous emission.
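The claim that $\ket{4}$ stays stationary under Eq.~(\ref{eq:pd_eta}) for any $\eta>0$ (with $\gamma_i=0$) is easy to verify numerically: build the right-hand side with both the detected (feedback) and undetected jump channels and apply it to $\ket{4}\bra{4}$. A sketch with illustrative parameters ($\Gamma=1$; not the authors' code):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
Jm = np.kron(sm, I2) + np.kron(I2, sm)
H  = Jm.conj().T + Jm                       # (J_+ + J_-)

def lindblad_rhs(H, collapse_ops, rho):
    """rho_dot = -i[H, rho] + sum_c ( c rho c^+ - {c^+ c, rho}/2 )."""
    out = -1j * (H @ rho - rho @ H)
    for c in collapse_ops:
        cdc = c.conj().T @ c
        out += c @ rho @ c.conj().T - 0.5 * (cdc @ rho + rho @ cdc)
    return out

Omega, lam = 0.5, 1.0                       # illustrative values, Gamma = 1
U = np.kron(np.cos(lam) * I2 - 1j * np.sin(lam) * sx, I2)

psi4 = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho4 = np.outer(psi4, psi4.conj())

# Eq. (eq:pd_eta) with gamma_i = 0: detected jumps trigger U, undetected do not
residuals = []
for eta in (0.25, 0.5, 1.0):
    cs = [np.sqrt(eta) * U @ Jm, np.sqrt(1 - eta) * Jm]
    residuals.append(np.linalg.norm(lindblad_rhs(Omega * H, cs, rho4)))
```

The residual vanishes because $J_-\ket{4}=0$ and $(J_++J_-)\ket{4}=0$, so every term annihilates $\ket{4}\bra{4}$ regardless of $U$ and $\eta$.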
Mathematically this is clear from Eq.~(\ref{eq:pd_eta}) as the feedback term is scaled by $\eta$. The combined effect of detector inefficiencies and spontaneous emission is summarised in Fig.~\ref{figetagamma}, where the maximum steady state concurrence is plotted as a function of $\eta$ and $\gamma/\Gamma$. The most noticeable feature is the weak dependence of the entanglement on the detection efficiency in the limit $\gamma \ll \Gamma$ (unless $\eta$ is close to zero), which confirms the robustness of a local jump-based feedback in this regime~\cite{jumpfeedback}. For a ratio $\gamma/\Gamma=0.002$, for example, one still has concurrence above $0.9$ for a detection efficiency as low as $\eta=0.1$. \begin{figure} \includegraphics[width=8.0cm]{figure_eta_gamma.eps} \caption{(Color online) Maximum steady state concurrence as a function of the ratio $\gamma/\Gamma$ and detection efficiency $\eta$. For small $\gamma/\Gamma$ the system is almost insensitive to detection inefficiencies, decaying abruptly when $\eta$ approaches zero (for $\gamma/\Gamma=0$ entanglement is maximum unless $\eta=0$).} \label{figetagamma} \end{figure} \section{Analysis of the adiabatic approximation} \label{sec:exp} The analysis of the previous sections shows that entanglement production can be vastly improved by the use of feedback strategies and that, in the best case scenario, highly entangled states can be protected against decoherence using a quantum-jump-based feedback with a local control Hamiltonian. The robustness of the feedback scheme depends on a single quantity, the ratio $C=\Gamma/\gamma=g^2/\kappa \gamma$, also known as the cooperativity parameter~\cite{bonifacio78}. The higher the cooperativity, the better the performance of the feedback procedure: not only does the steady state entanglement increase, but the sensitivity of the scheme to detection inefficiencies is also reduced.
Interestingly, large cooperativity, which corresponds to the strong coupling limit, is the regime used by many of the current cavity QED experiments in the optical domain~\cite{hood00,puppe07,sauer04}. This is achieved either by increasing the atom-cavity coupling or by decreasing the cavity decay rate, since the spontaneous emission rate is fixed by the choice of atomic species. The use of high quality cavities, however, affects the feedback model based on Eq.~(\ref{eq:me_total}), which is obtained in the bad cavity limit after adiabatically eliminating the cavity mode. To gain a better understanding of how our model would work within current experimental conditions, and also to determine the most favourable region of parameters for future experiments, in this section we will relax the adiabatic condition and analyse the performance of the local jump feedback strategy in this situation. \subsection{The non-adiabatic model}\label{sec:nonadiabaticmodel} Equation~(\ref{eq:nofb_full}) describes the whole system, including the cavity mode dynamics, without control. The introduction of photo-detection feedback follows exactly the same reasoning used to obtain Eq.~(\ref{eq:pd}). Whenever a photon is detected, the control Hamiltonian is applied and the system undergoes an evolution given by the operator $U$. This operator now enters in the term that describes the monitoring of the cavity output, {\it i.e.} ${\cal D}[a] \rho$. The addition of detection inefficiency also follows directly from our previous discussion and the full master equation for the system is \begin{eqnarray} \label{eq:fullfeedback} \dot \rho=-i \Omega \left[(J_+ + J_-),\rho \right] -i g \left[(J_+ a + J_- a^\dagger),\rho \right] \nonumber \\ + \eta \kappa {\cal D}[Ua]\rho + (1 - \eta ) \kappa {\cal D}[a]\rho + \sum_i\gamma_i{\cal D}[\sigma_i]\rho, \end{eqnarray} which transforms back to Eq.~(\ref{eq:pd_eta}) if an adiabatic elimination is performed.
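In a numerical treatment of Eq.~(\ref{eq:fullfeedback}) the cavity mode must be kept explicitly, i.e. the two-qubit space is tensored with a truncated Fock space. A sketch of the operator construction (truncation dimension and coupling values are illustrative, not taken from the figures):

```python
import numpy as np

Nf = 10                                             # Fock-space truncation
a_c = np.diag(np.sqrt(np.arange(1, Nf)), k=1)       # cavity annihilation operator
I2, Ic = np.eye(2), np.eye(Nf)
sm = np.array([[0, 1], [0, 0]], dtype=complex)

# Operators on the full (2 x 2 x Nf)-dimensional space: atoms (x) cavity
Jm = np.kron(np.kron(sm, I2), Ic) + np.kron(np.kron(I2, sm), Ic)
Jp = Jm.conj().T
A  = np.kron(np.kron(I2, I2), a_c)

g, Omega = 2.0, 1.0                                 # illustrative couplings
H = Omega * (Jp + Jm) + g * (Jp @ A + Jm @ A.conj().T)

# Truncation artefact to keep in mind: [a, a^+] = 1 holds only below the
# cutoff; the top Fock level picks up a spurious -(Nf - 1) entry.
comm = a_c @ a_c.conj().T - a_c.conj().T @ a_c
```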
To start with, instead of solving the full master equation, Eq.~(\ref{eq:fullfeedback}), we will use a quantum trajectory method~\cite{carmichael,molmer96} to follow the dynamics conditioned on the measurement results. This approach, which provides a useful picture of single runs of an actual experiment, combines continuous evolution under an effective Hamiltonian and the random occurrence of quantum jumps. In our problem there are four possible jumps, corresponding to the four decoherence terms in Eq.~(\ref{eq:fullfeedback}): a detected photon leaving the cavity that triggers the feedback (third term), an undetected photon (fourth term), and spontaneous emission from the atoms (two jumps represented by the sum in the fifth term). For simulation purposes, we assume that the environment is perfectly monitored so that the state remains pure throughout the evolution. Obviously this is not true for undetected photons, but we can take a different perspective and consider feedback inefficiencies as detected photons where the feedback Hamiltonian fails to apply. Note that for the average behavior given by Eq.~(\ref{eq:fullfeedback}) those points of view are equivalent. Figure~\ref{singleshots} shows the concurrence (top panels) and the average number of photons in the cavity (bottom panels) for single stochastic trajectories as a function of time. The cooperativity parameter is set to $C=100$ (as in Fig.~\ref{figureSE}), the detection efficiency is $\eta=1$, and we assume that the atoms are initially in the ground state and the cavity in the vacuum. Figure~\ref{singleshots}a corresponds to a situation close to the adiabatic regime with $\kappa=400 \gamma$ and $g=200 \gamma$, while $g=40 \gamma$ and $\kappa=16 \gamma$ in Fig.~\ref{singleshots}b. All other parameters are selected to give the maximum concurrence for the chosen $g$ and $\kappa$. The average behavior, given by the solution of Eq.~(\ref{eq:fullfeedback}), is shown by the dashed lines for comparison.
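For readers wishing to reproduce the unravelling, the Monte Carlo wave-function recipe can be sketched in a few lines: propagate under a non-Hermitian effective Hamiltonian between jumps, and apply a collapse operator with probability $\delta p$ at each step. The minimal example below uses a single driven, decaying qubit rather than the full four-channel atom-cavity system, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Driven, decaying qubit: H = Omega*sx, one collapse operator c = sqrt(gamma)*sm
Omega, gamma, dt, steps = 1.0, 0.5, 1e-3, 5000
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
c  = np.sqrt(gamma) * sm
Heff = Omega * sx - 0.5j * c.conj().T @ c   # non-Hermitian effective Hamiltonian

psi = np.array([1, 0], dtype=complex)       # start in the ground state
jumps = 0
for _ in range(steps):
    dp = dt * np.real(psi.conj() @ (c.conj().T @ c) @ psi)  # jump probability
    if rng.random() < dp:                   # quantum jump (photon detected)
        psi = c @ psi
        jumps += 1
    else:                                   # no-jump evolution, first order in dt
        psi = psi - 1j * dt * (Heff @ psi)
    psi = psi / np.linalg.norm(psi)         # renormalise in both branches
```

In the full simulation of the text the single channel is replaced by the four channels of Eq.~(\ref{eq:fullfeedback}), with the feedback unitary applied after each detected-photon jump.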
\begin{figure} \includegraphics[width=8.5cm]{singleshots.eps} \caption{Concurrence (top panels) and mean cavity photon number (bottom panels) as a function of time for single quantum trajectories. In (a) the system is in the adiabatic regime with $\kappa=400 \gamma$, $g=200 \gamma$, $\Omega=40 \gamma$ and $\tilde \lambda=-18 \gamma$. In (b), the effects of non-adiabatic dynamics become more evident ($\kappa= 16\gamma$, $g=40 \gamma$, $\Omega=20.8 \gamma$ and $\tilde \lambda=-10.0528 \gamma$). In both plots, the cooperativity is $C=100$, and the concurrence for the time-evolved density matrix is shown for comparison (dashed lines).} \label{singleshots} \end{figure} In the case of Fig.~\ref{singleshots}a, the feedback dynamics drives the system to the target state in a short time. In this particular realization, two detected photons, and consequently two control pulses, at $t\approx 0.15$ are enough to steer the trajectory to the state $\ket{4}$ with no photons in the cavity. The absence of photons in the cavity output for certain periods of time is therefore an indication that the atoms are in the anti-symmetric Bell state. This state can only be perturbed by the occurrence of a spontaneous emission jump (in this case at $\gamma\,t\approx 0.62, 1.15, 2.3, 2.8, 2.9, \,{\rm and} \, 3.5$), which destroys entanglement. As soon as the system moves away from the dark state, the cavity is repopulated and control pulses are applied when photons are detected, reestablishing the maximally entangled state. In the adiabatic regime, since $\kappa \gg \gamma$, this happens quickly during short windows of control jumps (there are 7 of them in Fig.~\ref{singleshots}a). As a result, the system spends most of the time in the entangled state and, on average, entanglement will be high.
This is another way of understanding the perturbative nature of the spontaneous emission in the adiabatic limit: the larger the cooperativity, the larger the fraction of time spent in the target state, and hence the higher the entanglement. However, this picture is only valid in the adiabatic limit, where the cooperativity is the only important parameter. Fig.~\ref{singleshots}b shows the dynamics outside the adiabatic regime but for the {\it same} cooperativity. Now, with a smaller $\kappa$, the windows of control are larger and the system takes longer to reach the target state. Moreover, the ratio between the probabilities of having spontaneous emission and control events increases. Consequently, the system spends less time in the state $\ket{4}$, resulting in a less entangled steady state ($c \approx 0.6$). This may be explained by the reliance of the control scheme on the information transmitted by the photons that leak from the cavity. Just as a low detection efficiency limits the achievable concurrence because opportunities for control are lost, a very low cavity decay rate means that photons emitted into the cavity have a higher probability of being reabsorbed by the atoms than of escaping and triggering a control pulse. One can therefore conclude that the major aspect responsible for the robustness of the local jump-based feedback scheme is the balance between control and decay dynamics. In the adiabatic case, the feedback time scale is dictated by the effective rate $\Gamma$ and the robustness depends only on the cooperativity $\Gamma/\gamma$. In the non-adiabatic case, the feedback rate is set by the cavity decay $\kappa$ and the feedback performance will crucially depend on the ratio $\kappa/\gamma$. This indicates that the adiabatic regime is the best one in which to achieve a robust generation of highly entangled states. Note, however, that the ratio $\kappa/\gamma$ is not the only important parameter in the non-adiabatic case.
The laser can excite the atoms, which can then emit photons into the cavity mode. These photons can finally escape the cavity, triggering the feedback. The dynamics depends, in an intricate way, on how the rates at which these processes occur are related, which contributes significantly to the behavior of the steady state entanglement. Figure~\ref{fig_structures} shows the stationary concurrence as a function of $\Omega$ and $\tilde \lambda$ for the same values of $g$ and $\kappa$ as in Fig.~\ref{singleshots}b, but for a smaller detection efficiency $\eta=0.5$. The simple plateau feature of the adiabatic case (Fig.~\ref{figureSE}d) disappears, giving place to a structure that highlights the strong dependence of the final entanglement on the different parameters of the problem. \begin{figure} \includegraphics[width=8.0cm]{noadiabatic_structure.eps} \caption{(Color online) Steady state concurrence for jump-based feedback as a function of driving and feedback strengths for $g=40 \gamma$, $\kappa = 16 \gamma$, and $\eta = 0.5$. The maximum concurrence ($c\approx 0.5$) occurs for specific values of the parameters, in contrast with the flat structure observed in the adiabatic regime (see Fig.~\ref{figureSE}d). } \label{fig_structures} \end{figure} \section{Conclusions}\label{sec:conc} In this paper we have investigated the improvement in the generation of steady state entanglement via quantum Markovian feedback. Using the Dicke model, we have explored the influence of the choice of detection scheme on the success of the control, showing that a photo-detection feedback strategy outperforms the one based on homodyne measurements. The effects of different feedback Hamiltonians were also analysed: local control schemes, {\it i.e.} feedback applied to only one of the atoms, produced larger amounts of steady state entanglement than a collective interaction.
We identified the strategy of jump-based feedback with local control~\cite{jumpfeedback} as the best for the robust preparation of highly entangled states. For this scheme, we extended the numerical analysis given in~\cite{jumpfeedback}, and presented a justification for the robustness against spontaneous emission and detection inefficiencies based on analytical considerations. Motivated by the parameters used in most of the recent optical cavity QED experiments, we also investigated the role of the adiabatic elimination in our feedback scheme and showed that the adiabatic regime gives the best performance in terms of entanglement generation and robustness. An important issue is whether our feedback could be realized experimentally. The setup of two atoms equally coupled to a cavity mode with the possibility of individual addressing has already been demonstrated in~\cite{nussmann_05}. The remaining constraints for an efficient feedback procedure are: i) a large cooperativity parameter ($\Gamma \gg \gamma$) and ii) a high detection rate as compared to the decay rate ($\kappa \gg \gamma$). Although the latter condition can certainly be achieved in experiments with low quality cavities, and the former has been obtained in a variety of recent experiments~\cite{hood01,khudaverdyan08,boozer06,maunz05}, the difficulty lies in combining all those ingredients in a single experimental setup. The usual strategy to fulfill (i) is to improve the quality of the cavity, which would be detrimental to condition (ii). The solution would be to design a cavity with high transmissivity (ii), yet with a large coupling strength $g$, which would involve a compromise between a small mode volume and the requirement of individual addressing of the atoms. \begin{acknowledgments} The authors thank M. Hush for helpful discussions and comments on the manuscript. A. R. R. C. also thanks D. Meschede for enlightening discussions about experimental issues.
\end{acknowledgments} \end{document}
\begin{document} \title[New general integral inequalities]{New general integral inequalities for $(\alpha ,m)$--GA-convex functions via Hadamard fractional integrals} \author{\.{I}mdat \.{I}\c{s}can} \address{Department of Mathematics, Faculty of Sciences and Arts, Giresun University, Giresun, Turkey} \email{[email protected]} \author{Mehmet Kunt} \address{Department of Mathematics, Faculty of Sciences, Karadeniz Technical University, Trabzon, Turkey} \subjclass[2000]{ 26A51, 26A33, 26D15. } \keywords{Hermite--Hadamard type inequality, Ostrowski type inequality, Simpson type inequality, $(\alpha ,m)$-GA-convex function.} \begin{abstract} In this paper, the authors give a new identity for Hadamard fractional integrals. Using this identity, the authors obtain new estimates on generalizations of Hadamard, Ostrowski and Simpson type inequalities for $(\alpha ,m)$-GA-convex functions via Hadamard fractional integrals. \end{abstract} \maketitle \section{Introduction} Let a real function $f$ be defined on some nonempty interval $I$ of the real line $\mathbb{R}$. The function $f$ is said to be convex on $I$ if the inequality \begin{equation*} f(tx+(1-t)y)\leq tf(x)+(1-t)f(y) \end{equation*} holds for all $x,y\in I$ and $t\in \left[ 0,1\right] .$ The following inequalities are well known in the literature as the Hermite-Hadamard inequality, the Ostrowski inequality and the Simpson inequality, respectively: \begin{theorem} Let $f:I\subseteq \mathbb{R\rightarrow R}$ be a convex function defined on the interval $I$ of real numbers and $a,b\in I$ with $a<b$.
The following double inequality holds: \begin{equation*} f\left( \frac{a+b}{2}\right) \leq \frac{1}{b-a}\dint\limits_{a}^{b}f(x)dx \leq \frac{f(a)+f(b)}{2}\text{.} \end{equation*} \end{theorem} \begin{theorem} Let $f:I\subseteq \mathbb{R\rightarrow R}$ be a mapping differentiable in $I^{\circ },$ the interior of $I$, and let $a,b\in I^{\circ }$ with $a<b.$ If $\left\vert f^{\prime }(x)\right\vert \leq M,$ $x\in \left[ a,b\right] ,$ then the following inequality holds: \begin{equation*} \left\vert f(x)-\frac{1}{b-a}\dint\limits_{a}^{b}f(t)dt\right\vert \leq \frac{M}{b-a}\left[ \frac{\left( x-a\right) ^{2}+\left( b-x\right) ^{2}}{2}\right] \end{equation*} for all $x\in \left[ a,b\right] .$ The constant $\frac{1}{4}$ is the best possible in the sense that it cannot be replaced by a smaller one. \end{theorem} \begin{theorem} Let $f:\left[ a,b\right] \mathbb{\rightarrow R}$ be a four times continuously differentiable mapping on $\left( a,b\right) $ and $\left\Vert f^{(4)}\right\Vert _{\infty }=\underset{x\in \left( a,b\right) }{\sup }\left\vert f^{(4)}(x)\right\vert <\infty .$ Then the following inequality holds: \begin{equation*} \left\vert \frac{1}{3}\left[ \frac{f(a)+f(b)}{2}+2f\left( \frac{a+b}{2}\right) \right] -\frac{1}{b-a}\dint\limits_{a}^{b}f(x)dx\right\vert \leq \frac{1}{2880}\left\Vert f^{(4)}\right\Vert _{\infty }\left( b-a\right) ^{4}. \end{equation*} \end{theorem} The following definitions are well known in the literature. \begin{definition} \cite{10,11}. A function $f:I\subseteq \left( 0,\infty \right) \rightarrow \mathbb{R}$ is said to be GA-convex (geometric-arithmetically convex) if \begin{equation*} f(x^{t}y^{1-t})\leq tf(x)+\left( 1-t\right) f(y) \end{equation*} for all $x,y\in I$ and $t\in \left[ 0,1\right] $. \end{definition} \begin{definition} \cite{8}. Let $f:(0,b]\rightarrow \mathbb{R} ,b>0,$ and $\left( \alpha ,m\right) \in \left( 0,1\right] ^{2}$. If \begin{equation*} f(x^{t}y^{m\left( 1-t\right) })\leq t^{\alpha }f(x)+m\left( 1-t^{\alpha }\right) f(y) \end{equation*} for all $x,y\in (0,b]$ and $t\in \lbrack 0,1]$, then $f$ is said to be an $\left( \alpha ,m\right) $-GA-convex function. \end{definition} Note that for $\left( \alpha ,m\right) \in \left\{ \left( 1,m\right) ,\left( 1,1\right) ,\left( \alpha ,1\right) \right\} $ one obtains the following classes of functions, respectively: $m$-GA-convex, GA-convex, and $\alpha $-GA-convex (or GA-$s$-convex in the first sense, if we take $s$ instead of $\alpha $ (see \cite{19})). We will now give the definitions of the left-sided and right-sided Hadamard fractional integrals, which are used throughout this paper. \begin{definition} \cite{4}. Let $f\in L\left[ a,b\right] $. The left-sided and right-sided Hadamard fractional integrals $J_{a^{+}}^{\theta }f$ and $J_{b^{-}}^{\theta }f$ of order $\theta >0$ with $b>a\geq 0$ are defined by \begin{equation*} J_{a+}^{\theta }f(x)=\frac{1}{\Gamma (\theta )}\dint\limits_{a}^{x}\left( \ln \frac{x}{t}\right) ^{\theta -1}f(t)\frac{dt}{t},\ a<x<b \end{equation*} and \begin{equation*} J_{b-}^{\theta }f(x)=\frac{1}{\Gamma (\theta )}\dint\limits_{x}^{b}\left( \ln \frac{t}{x}\right) ^{\theta -1}f(t)\frac{dt}{t},\ a<x<b \end{equation*} respectively, where $\Gamma (\theta )$ is the Gamma function defined by $\Gamma (\theta )=\dint\limits_{0}^{\infty }e^{-t}t^{\theta -1}dt$. \end{definition} In \cite{20}, \.{I}\c{s}can presented Hermite-Hadamard's inequalities for GA-convex functions in fractional integral forms as follows: \begin{theorem} Let $f:I\subseteq \left( 0,\infty \right) \rightarrow \mathbb{R}$ be a function such that $f\in L[a,b]$, where $a,b\in I$ with $a<b$.
If $f$ is a GA-convex function on $[a,b]$, then the following inequalities for fractional integrals hold: \begin{equation*} f\left( \sqrt{ab}\right) \leq \frac{\Gamma (\theta +1)}{2\left( \ln \frac{b}{a}\right) ^{\theta }}\left\{ J_{a+}^{\theta }f(b)+J_{b-}^{\theta }f(a)\right\} \leq \frac{f(a)+f(b)}{2} \end{equation*} with $\theta >0$. \end{theorem} In \cite{20}, \.{I}\c{s}can gave the following identity for differentiable functions. \begin{lemma} \label{Lemma1}Let $f:I\subseteq \left( 0,\infty \right) \rightarrow \mathbb{R}$ be a differentiable function on $I^{\circ }$ such that $f^{\prime }\in L[a,b]$, where $a,b\in I$ with $a<b$. Then for all $x\in \lbrack a,b]$, $\lambda \in \left[ 0,1\right] $ and $\theta >0$ we have: \begin{eqnarray*} I_{f}\left( x,\lambda ,\theta ,a,b\right) &=&\left( 1-\lambda \right) \left[ \ln ^{\theta }\frac{x}{a}+\ln ^{\theta }\frac{b}{x}\right] f\left( x\right) +\lambda \left[ f\left( a\right) \ln ^{\theta }\frac{x}{a}+f\left( b\right) \ln ^{\theta }\frac{b}{x}\right] \\ &&-\Gamma (\theta +1)\left[ J_{x-}^{\theta }f(a)+J_{x+}^{\theta }f(b)\right] \\ &=&a\left( \ln \frac{x}{a}\right) ^{\theta +1}\dint\limits_{0}^{1}\left( t^{\theta }-\lambda \right) \left( \frac{x}{a}\right) ^{t}f^{\prime }\left( x^{t}a^{1-t}\right) dt \\ &&-b\left( \ln \frac{b}{x}\right) ^{\theta +1}\dint\limits_{0}^{1}\left( t^{\theta }-\lambda \right) \left( \frac{x}{b}\right) ^{t}f^{\prime }\left( x^{t}b^{1-t}\right) dt. \end{eqnarray*} \end{lemma} In recent years, many authors have studied error estimates for Hermite-Hadamard, Ostrowski and Simpson inequalities; for refinements, counterparts and generalizations see \cite{1,2,3,5,6,7,12,13,15,16,17,18}. In this paper, a new identity for fractional integrals is established. Using this identity, we obtain generalizations of Hadamard, Ostrowski and Simpson type inequalities for $(\alpha ,m)$-GA-convex functions via Hadamard fractional integrals.
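The fractional Hermite-Hadamard inequality above is easy to probe numerically. The sketch below (an illustration, not part of the paper) approximates the Hadamard fractional integrals by midpoint quadrature for the GA-convex test function $f(x)=x$ (GA-convex by the weighted AM-GM inequality), with the illustrative choices $a=1$, $b=e$, $\theta=3/2$:

```python
import math

def J_aplus(f, a, b, theta, n=50000):
    """J_{a+}^theta f(b) = 1/Gamma(theta) * int_a^b (ln(b/t))^(theta-1) f(t) dt/t,
    midpoint rule (which also avoids the endpoint where the kernel may blow up)."""
    h = (b - a) / n
    s = sum(math.log(b / (a + (i + 0.5) * h)) ** (theta - 1)
            * f(a + (i + 0.5) * h) / (a + (i + 0.5) * h) for i in range(n))
    return s * h / math.gamma(theta)

def J_bminus(f, a, b, theta, n=50000):
    """J_{b-}^theta f(a) = 1/Gamma(theta) * int_a^b (ln(t/a))^(theta-1) f(t) dt/t."""
    h = (b - a) / n
    s = sum(math.log((a + (i + 0.5) * h) / a) ** (theta - 1)
            * f(a + (i + 0.5) * h) / (a + (i + 0.5) * h) for i in range(n))
    return s * h / math.gamma(theta)

f = lambda x: x                      # GA-convex: x^t y^(1-t) <= t x + (1-t) y
a, b, theta = 1.0, math.e, 1.5
middle = (math.gamma(theta + 1) / (2 * math.log(b / a) ** theta)
          * (J_aplus(f, a, b, theta) + J_bminus(f, a, b, theta)))
lower, upper = f(math.sqrt(a * b)), (f(a) + f(b)) / 2
```

For these values one finds $\sqrt{e}\approx 1.649 \leq 1.714 \leq (1+e)/2 \approx 1.859$, as the theorem predicts.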
\section{Main results} Let $f:I\subseteq \left( 0,\infty \right) \rightarrow \mathbb{R} $ be a differentiable function on $I^{\circ }$, the interior of $I$; throughout this section we will take \begin{eqnarray*} K_{f}\left( \lambda ,\theta ,x^{m},a^{m},b^{m}\right) &=&\left( 1-\lambda \right) m^{\theta }\left[ \ln ^{\theta }\frac{x}{a}+\ln ^{\theta }\frac{b}{x}\right] f(x^{m}) \\ &&+\lambda m^{\theta }\left[ f(a^{m})\ln ^{\theta }\frac{x}{a}+f(b^{m})\ln ^{\theta }\frac{b}{x}\right] \\ &&-\Gamma \left( \theta +1\right) \left[ J_{x^{m}-}^{\theta }f(a^{m})+J_{x^{m}+}^{\theta }f(b^{m})\right] \end{eqnarray*} where $a,b\in I$ with $a<b$, $x\in \lbrack a,b]$, $\lambda \in \left[ 0,1\right] $, $\theta >0$ and $\Gamma $ is the Euler Gamma function. Similarly to Lemma \ref{Lemma1}, we can prove the following lemma. \begin{lemma} \label{Lemma2}Let $f:I\subseteq \left( 0,\infty \right) \rightarrow \mathbb{R} $ be a differentiable function on $I^{\circ }$ such that $f^{\prime }\in L[a^{m},b^{m}]$, where $a^{m},b\in I$ with $a<b$ and $m\in \left( 0,1\right] $. Then for all $x\in \lbrack a,b]$, $\lambda \in \left[ 0,1\right] $ and $\theta >0$ we have: \begin{eqnarray*} &&K_{f}\left( \lambda ,\theta ,x^{m},a^{m},b^{m}\right) =m^{\theta +1}a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}\dint\limits_{0}^{1}\left( t^{\theta }-\lambda \right) \left( \frac{x}{a}\right) ^{mt}f^{\prime }\left( x^{mt}a^{m\left( 1-t\right) }\right) dt \\ &&-m^{\theta +1}b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}\dint\limits_{0}^{1}\left( t^{\theta }-\lambda \right) \left( \frac{x}{b}\right) ^{mt}f^{\prime }\left( x^{mt}b^{m\left( 1-t\right) }\right) dt. \end{eqnarray*} \end{lemma} \begin{theorem} \label{Theorem5}Let $f:I\subseteq \left( 0,\infty \right) \rightarrow \mathbb{R} $ be a differentiable function on $I^{\circ }$ such that $f^{\prime }\in L[a^{m},b^{m}]$, where $a^{m},b\in I^{\circ }$ with $a<b$ and $m\in \left( 0,1\right] $.
If $|f^{\prime }|^{q}$ is $\left( \alpha ,m\right) $-GA-convex on $[a^{m},b]$ for some fixed $q\geq 1$, $x\in \lbrack a,b]$, $\lambda \in \left[ 0,1\right] $ and $\theta >0$, then the following inequality for fractional integrals holds: \begin{eqnarray} &&\left\vert K_{f}\left( \lambda ,\theta ,x^{m},a^{m},b^{m}\right) \right\vert \leq m^{\theta +1}C_{0}\left( \theta ,\lambda \right) ^{1-\frac{1}{q}} \notag \\ &&\times \left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}\left( \begin{array}{c} \left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q}C_{1}\left( x,\theta ,\lambda ,q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( a\right) \right\vert ^{q}C_{2}\left( x,\theta ,\lambda ,q,m,\alpha \right) \end{array} \right) ^{\frac{1}{q}}\right. \notag \\ &&\left. +b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}\left( \begin{array}{c} \left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q}C_{3}\left( x,\theta ,\lambda ,q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( b\right) \right\vert ^{q}C_{4}\left( x,\theta ,\lambda ,q,m,\alpha \right) \end{array} \right) ^{\frac{1}{q}}\right\} \label{2.1} \end{eqnarray} where \begin{eqnarray*} C_{0}\left( \theta ,\lambda \right) &=&\frac{2\theta \lambda ^{1+\frac{1}{\theta }}+1}{\theta +1}-\lambda , \\ C_{1}\left( x,\theta ,\lambda ,q,m,\alpha \right) &=&\dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert \left( \frac{x}{a}\right) ^{qmt}t^{\alpha }dt, \\ C_{2}\left( x,\theta ,\lambda ,q,m,\alpha \right) &=&\dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert \left( \frac{x}{a}\right) ^{qmt}\left( 1-t^{\alpha }\right) dt, \\ C_{3}\left( x,\theta ,\lambda ,q,m,\alpha \right) &=&\dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert \left( \frac{x}{b}\right) ^{qmt}t^{\alpha }dt, \\ C_{4}\left( x,\theta ,\lambda ,q,m,\alpha \right) &=&\dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert \left( \frac{x}{b}\right) ^{qmt}\left( 1-t^{\alpha }\right) dt.
\end{eqnarray*} \end{theorem} \begin{proof} From Lemma \ref{Lemma2}, the properties of the modulus and the power-mean inequality, we have \begin{eqnarray} &&\left\vert K_{f}\left( \lambda ,\theta ,x^{m},a^{m},b^{m}\right) \right\vert \leq m^{\theta +1}\left( \dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert dt\right) ^{1-\frac{1}{q}} \notag \\ &&\times \left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}\left( \dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert \left( \frac{x}{a}\right) ^{qmt}\left\vert f^{\prime }\left( x^{mt}a^{m\left( 1-t\right) }\right) \right\vert ^{q}dt\right) ^{\frac{1}{q}}\right. \notag \\ &&\left. +b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}\left( \dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert \left( \frac{x}{b}\right) ^{qmt}\left\vert f^{\prime }\left( x^{mt}b^{m\left( 1-t\right) }\right) \right\vert ^{q}dt\right) ^{\frac{1}{q}}\right\} . \label{2.2} \end{eqnarray} Since $|f^{\prime }|^{q}$ is $\left( \alpha ,m\right) $-GA-convex on $[a^{m},b]$, for all $t\in \left[ 0,1\right] $ we have \begin{equation} \left\vert f^{\prime }\left( x^{mt}a^{m\left( 1-t\right) }\right) \right\vert ^{q}\leq t^{\alpha }\left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q}+m\left( 1-t^{\alpha }\right) \left\vert f^{\prime }\left( a\right) \right\vert ^{q}, \label{2.3} \end{equation} \begin{equation} \left\vert f^{\prime }\left( x^{mt}b^{m\left( 1-t\right) }\right) \right\vert ^{q}\leq t^{\alpha }\left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q}+m\left( 1-t^{\alpha }\right) \left\vert f^{\prime }\left( b\right) \right\vert ^{q}. \label{2.4} \end{equation} By a simple computation \begin{eqnarray} \dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert dt &=&\dint\limits_{0}^{\lambda ^{1/\theta }}\left( \lambda -t^{\theta }\right) dt+\dint\limits_{\lambda ^{1/\theta }}^{1}\left( t^{\theta }-\lambda \right) dt \notag \\ &=&\frac{2\theta \lambda ^{1+\frac{1}{\theta }}+1}{\theta +1}-\lambda .
\label{2.5} \end{eqnarray} If we use $\left( \ref{2.3}\right) $, $\left( \ref{2.4}\right) $ and $\left( \ref{2.5}\right) $ in $\left( \ref{2.2}\right) $, we obtain $\left( \ref{2.1}\right) $. This completes the proof. \end{proof} \begin{corollary} Under the assumptions of Theorem \ref{Theorem5} with $q=1$, the inequality $\left( \ref{2.1}\right) $ reduces to the following inequality \begin{eqnarray*} &&\left\vert K_{f}\left( \lambda ,\theta ,x^{m},a^{m},b^{m}\right) \right\vert \leq m^{\theta +1} \\ &&\times \left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}\left( \begin{array}{c} \left\vert f^{\prime }\left( x^{m}\right) \right\vert C_{1}\left( x,\theta ,\lambda ,1,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( a\right) \right\vert C_{2}\left( x,\theta ,\lambda ,1,m,\alpha \right) \end{array} \right) \right. \\ &&\left. +b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}\left( \begin{array}{c} \left\vert f^{\prime }\left( x^{m}\right) \right\vert C_{3}\left( x,\theta ,\lambda ,1,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( b\right) \right\vert C_{4}\left( x,\theta ,\lambda ,1,m,\alpha \right) \end{array} \right) \right\} \end{eqnarray*} \end{corollary} \begin{corollary} Under the assumptions of Theorem \ref{Theorem5} with $x=\sqrt{ab}$, $\lambda =\frac{1}{3}$ from the inequality $\left( \ref{2.1}\right) $ we get the following Simpson type inequality for fractional integrals \begin{eqnarray*} &&\frac{2^{\theta -1}}{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left\vert K_{f}\left( \frac{1}{3},\theta ,\left( \sqrt{ab}\right) ^{m},a^{m},b^{m}\right) \right\vert =\left\vert \frac{1}{6}\left[ f(a^{m})+4f\left( \left( \sqrt{ab}\right) ^{m}\right) +f(b^{m})\right] \right. \\ &&-\frac{2^{\theta -1}\Gamma \left( \theta +1\right) }{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left.
\left[ J_{\left( \sqrt{ab}\right) ^{m}-}^{\theta }f(a^{m})+J_{\left( \sqrt{ab}\right) ^{m}+}^{\theta }f(b^{m})\right] \right\vert \leq \frac{m\ln \frac{b}{a}}{4}C_{0}^{1-\frac{1}{q}}\left( \theta ,\frac{1}{3}\right) \\ &&\times \left\{ a^{m}\left[ \begin{array}{c} \left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}C_{1}\left( \sqrt{ab},\theta ,\frac{1}{3},q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( a\right) \right\vert ^{q}C_{2}\left( \sqrt{ab} ,\theta ,\frac{1}{3},q,m,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right. \\ &&\left. +b^{m}\left[ \begin{array}{c} \left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}C_{3}\left( \sqrt{ab},\theta ,\frac{1}{3},q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( b\right) \right\vert ^{q}C_{4}\left( \sqrt{ab} ,\theta ,\frac{1}{3},q,m,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right\} \end{eqnarray*} \end{corollary} \begin{corollary} \label{Corollary3}Under the assumptions of Theorem \ref{Theorem5} with $x= \sqrt{ab}$,$\ \lambda =0$ from the inequality $\left( \ref{2.1}\right) $ we get the following midpoint-type inequality for fractional integrals \begin{eqnarray*} &&\frac{2^{\theta -1}}{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left\vert K_{f}\left( 0,\theta ,\left( \sqrt{ab}\right) ^{m},a^{m},b^{m}\right) \right\vert \\ &=&\left\vert f\left( \left( \sqrt{ab}\right) ^{m}\right) -\frac{2^{\theta -1}\Gamma \left( \theta +1\right) }{\left( m\ln \frac{b}{a}\right) ^{\theta } }\left[ J_{\left( \sqrt{ab}\right) ^{m}-}^{\theta }f(a^{m})+J_{\left( \sqrt{ ab}\right) ^{m}+}^{\theta }f(b^{m})\right] \right\vert \\ &\leq &\frac{m\ln \frac{b}{a}}{4}\left( \frac{1}{\theta +1}\right) ^{1-\frac{ 1}{q}}\left\{ a^{m}\left[ \begin{array}{c} \left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}C_{1}\left( \sqrt{ab},\theta ,0,q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( a\right) \right\vert ^{q}C_{2}\left( \sqrt{ab} ,\theta 
,0,q,m,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right. \\ &&\left. +b^{m}\left[ \begin{array}{c} \left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}C_{3}\left( \sqrt{ab},\theta ,0,q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( b\right) \right\vert ^{q}C_{4}\left( \sqrt{ab},\theta ,0,q,m,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right\} \end{eqnarray*} \end{corollary} \begin{remark} If we take $\theta =1$, $m=1$ in Corollary \ref{Corollary3} we have the following midpoint-type inequality for $\alpha $-GA-convex functions (or GA-$s$-convex functions in the first sense), which coincides with inequality $\left( 9\right) $ of Theorem 3.4.b in \cite{19}: \begin{eqnarray*} &&\left\vert f\left( \sqrt{ab}\right) -\frac{1}{\ln b-\ln a}\int_{a}^{b}\frac{f\left( x\right) }{x}dx\right\vert \\ &\leq &\ln \frac{b}{a}\left( \frac{1}{2}\right) ^{3-\frac{1}{q}}\left\{ a\left[ \begin{array}{c} \left\vert f^{\prime }\left( \sqrt{ab}\right) \right\vert ^{q}C_{1}\left( \sqrt{ab},1,0,q,1,\alpha \right) \\ +\left\vert f^{\prime }\left( a\right) \right\vert ^{q}C_{2}\left( \sqrt{ab},1,0,q,1,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right. \\ &&\left.
+b\left[ \begin{array}{c} \left\vert f^{\prime }\left( \sqrt{ab}\right) \right\vert ^{q}C_{3}\left( \sqrt{ab},1,0,q,1,\alpha \right) \\ +\left\vert f^{\prime }\left( b\right) \right\vert ^{q}C_{4}\left( \sqrt{ab},1,0,q,1,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right\} \text{.} \end{eqnarray*} \end{remark} \begin{remark} If we take $\theta =1$, $m=1$, $\alpha =1$ in Corollary \ref{Corollary3} we have the following midpoint-type inequality for GA-convex functions, which coincides with inequality $\left( 13\right) $ of Corollary 3.5 in \cite{19}: \begin{eqnarray*} &&\left\vert f\left( \sqrt{ab}\right) -\frac{1}{\ln b-\ln a}\int_{a}^{b}\frac{f\left( x\right) }{x}dx\right\vert \\ &\leq &\ln \frac{b}{a}\left( \frac{1}{2}\right) ^{3-\frac{1}{q}}\left\{ a\left[ \begin{array}{c} \left\vert f^{\prime }\left( \sqrt{ab}\right) \right\vert ^{q}C_{1}\left( \sqrt{ab},1,0,q,1,1\right) \\ +\left\vert f^{\prime }\left( a\right) \right\vert ^{q}C_{2}\left( \sqrt{ab},1,0,q,1,1\right) \end{array} \right] ^{\frac{1}{q}}\right. \\ &&\left.
+b\left[ \begin{array}{c} \left\vert f^{\prime }\left( \sqrt{ab}\right) \right\vert ^{q}C_{3}\left( \sqrt{ab},1,0,q,1,1\right) \\ +\left\vert f^{\prime }\left( b\right) \right\vert ^{q}C_{4}\left( \sqrt{ab},1,0,q,1,1\right) \end{array} \right] ^{\frac{1}{q}}\right\} \text{.} \end{eqnarray*} \end{remark} \begin{corollary} Under the assumptions of Theorem \ref{Theorem5} with $x=\sqrt{ab}$, $\lambda =1$ from the inequality $\left( \ref{2.1}\right) $ we get the following trapezoid-type inequality for fractional integrals \begin{eqnarray*} &&\frac{2^{\theta -1}}{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left\vert K_{f}\left( 1,\theta ,\left( \sqrt{ab}\right) ^{m},a^{m},b^{m}\right) \right\vert \\ &=&\left\vert \frac{f(a^{m})+f(b^{m})}{2}-\frac{2^{\theta -1}\Gamma \left( \theta +1\right) }{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left[ J_{\left( \sqrt{ab}\right) ^{m}-}^{\theta }f(a^{m})+J_{\left( \sqrt{ab}\right) ^{m}+}^{\theta }f(b^{m})\right] \right\vert \\ &\leq &\frac{m\ln \frac{b}{a}}{4}\left( \frac{\theta }{\theta +1}\right) ^{1-\frac{1}{q}}\left\{ a^{m}\left[ \begin{array}{c} \left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}C_{1}\left( \sqrt{ab},\theta ,1,q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( a\right) \right\vert ^{q}C_{2}\left( \sqrt{ab},\theta ,1,q,m,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right. \\ &&\left. +b^{m}\left[ \begin{array}{c} \left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}C_{3}\left( \sqrt{ab},\theta ,1,q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( b\right) \right\vert ^{q}C_{4}\left( \sqrt{ab},\theta ,1,q,m,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right\} . \end{eqnarray*} \end{corollary} \begin{corollary} Let the assumptions of Theorem \ref{Theorem5} hold.
If $\left\vert f^{\prime }(u)\right\vert \leq M$ for all $u\in \left[ a^{m},b\right] $ and $\lambda =0$, then from the inequality $\left( \ref{2.1}\right) $ we get the following Ostrowski type inequality for fractional integrals \begin{eqnarray*} &&\left\vert \left[ \left( \ln \frac{x}{a}\right) ^{\theta }+\left( \ln \frac{b}{x}\right) ^{\theta }\right] f(x^{m})-\frac{\Gamma \left( \theta +1\right) }{m^{\theta }}\left[ J_{x^{m}-}^{\theta }f(a^{m})+J_{x^{m}+}^{\theta }f(b^{m})\right] \right\vert \\ &\leq &\frac{mM}{\left( \theta +1\right) ^{1-\frac{1}{q}}}\left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}\left( \begin{array}{c} C_{1}\left( x,\theta ,0,q,m,\alpha \right) \\ +mC_{2}\left( x,\theta ,0,q,m,\alpha \right) \end{array} \right) ^{\frac{1}{q}}\right. \\ &&\left. +b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}\left( \begin{array}{c} C_{3}\left( x,\theta ,0,q,m,\alpha \right) \\ +mC_{4}\left( x,\theta ,0,q,m,\alpha \right) \end{array} \right) ^{\frac{1}{q}}\right\} \end{eqnarray*} for each $x\in \left[ a,b\right] .$ \end{corollary} \begin{theorem} \label{Theorem6}Let $f:I\subseteq \left( 0,\infty \right) \rightarrow \mathbb{R} $ be a differentiable function on $I^{\circ }$ such that $f^{\prime }\in L[a^{m},b^{m}]$, where $a^{m},b\in I^{\circ }$ with $a<b$ and $m\in \left( 0,1\right] $.
If $|f^{\prime }|^{q}$ is $\left( \alpha ,m\right) $ -GA-convex on $[a^{m},b]$ for some fixed $q>1$, $x\in \lbrack a,b]$, $ \lambda \in \left[ 0,1\right] $ and $\theta >0$ then the following inequality for fractional integrals holds \begin{eqnarray} &&\left\vert K_{f}\left( \lambda ,\theta ,x^{m},a^{m},b^{m}\right) \right\vert \leq m^{\theta +1}R_{0}^{^{\frac{1}{p}}}\left( \theta ,\lambda ,p\right) \notag \\ &&\times \left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}\left( \begin{array}{c} \left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q}R_{1}\left( x,q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( a\right) \right\vert ^{q}R_{2}\left( x,q,m,\alpha \right) \end{array} \right) ^{\frac{1}{q}}\right. \notag \\ &&\left. +b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}\left( \begin{array}{c} \left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q}R_{3}\left( x,q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( b\right) \right\vert ^{q}R_{4}\left( x,q,m,\alpha \right) \end{array} \right) ^{\frac{1}{q}}\right\} \label{2.6} \end{eqnarray} where \begin{equation*} R_{0}\left( \theta ,\lambda ,p\right) =\dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{p}dt \end{equation*} \begin{equation*} =\left\{ \begin{array}{ccc} \frac{1}{\theta p+1} & , & \lambda =0 \\ \begin{array}{c} \left\{ \frac{\lambda ^{\left( \theta p+1\right) /\theta }}{\theta }\beta \left( \frac{1}{\theta },p+1\right) +\frac{\left( 1-\lambda \right) ^{p+1}}{ \theta \left( p+1\right) }\right. \\ \left. \times \begin{array}{c} _{2}F_{1}\left( 1-\frac{1}{\theta },1;p+2;1-\lambda \right) \end{array} \right\} \end{array} & , & 0<\lambda <1 \\ \frac{1}{\theta }\beta \left( \frac{1}{\theta },p+1\right) & , & \lambda =1 \end{array} \right. 
, \end{equation*} \begin{eqnarray*} R_{1}\left( x,q,m,\alpha \right) &=&\dint\limits_{0}^{1}\left( \frac{x}{a}\right) ^{mqt}t^{\alpha }dt, \\ R_{2}\left( x,q,m,\alpha \right) &=&\dint\limits_{0}^{1}\left( \frac{x}{a}\right) ^{mqt}\left( 1-t^{\alpha }\right) dt, \\ R_{3}\left( x,q,m,\alpha \right) &=&\dint\limits_{0}^{1}\left( \frac{x}{b}\right) ^{mqt}t^{\alpha }dt, \\ R_{4}\left( x,q,m,\alpha \right) &=&\dint\limits_{0}^{1}\left( \frac{x}{b}\right) ^{mqt}\left( 1-t^{\alpha }\right) dt, \end{eqnarray*} ${}_{2}F_{1}$ is the hypergeometric function defined by \begin{eqnarray*} {}_{2}F_{1}\left( a,b;c;z\right) &=&\frac{1}{\beta \left( b,c-b\right) }\int_{0}^{1}t^{b-1}\left( 1-t\right) ^{c-b-1}\left( 1-zt\right) ^{-a}dt, \\ c &>&b>0,\ \left\vert z\right\vert <1\ \left( \text{see \cite{4}}\right) , \end{eqnarray*} $\beta $ is the Beta function defined by \begin{equation*} \beta \left( x,y\right) =\frac{\Gamma \left( x\right) \Gamma \left( y\right) }{\Gamma \left( x+y\right) }=\int_{0}^{1}t^{x-1}\left( 1-t\right) ^{y-1}dt,\text{ }x,y>0, \end{equation*} and $\frac{1}{p}+\frac{1}{q}=1$. \end{theorem} \begin{proof} From Lemma \ref{Lemma2}, the properties of the modulus and the H\"{o}lder inequality, we have \begin{eqnarray} &&\left\vert K_{f}\left( \lambda ,\theta ,x^{m},a^{m},b^{m}\right) \right\vert \leq m^{\theta +1}\left( \dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{p}dt\right) ^{\frac{1}{p}} \notag \\ &&\times \left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}\left( \dint\limits_{0}^{1}\left( \frac{x}{a}\right) ^{qmt}\left\vert f^{\prime }\left( x^{mt}a^{m\left( 1-t\right) }\right) \right\vert ^{q}dt\right) ^{\frac{1}{q}}\right. \notag \\ &&\left. +b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}\left( \dint\limits_{0}^{1}\left( \frac{x}{b}\right) ^{qmt}\left\vert f^{\prime }\left( x^{mt}b^{m\left( 1-t\right) }\right) \right\vert ^{q}dt\right) ^{\frac{1}{q}}\right\} .
\label{2.7} \end{eqnarray} By a simple computation \begin{equation*} R_{0}\left( \theta ,\lambda ,p\right) =\dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{p}dt \end{equation*} \begin{equation} =\left\{ \begin{array}{ccc} \frac{1}{\theta p+1} & , & \lambda =0 \\ \begin{array}{c} \left\{ \frac{\lambda ^{\left( \theta p+1\right) /\theta }}{\theta }\beta \left( \frac{1}{\theta },p+1\right) +\frac{\left( 1-\lambda \right) ^{p+1}}{ \theta \left( p+1\right) }\right. \\ \left. \times \begin{array}{c} _{2}F_{1}\left( 1-\frac{1}{\theta },1;p+2;1-\lambda \right) \end{array} \right\} \end{array} & , & 0<\lambda <1 \\ \frac{1}{\theta }\beta \left( \frac{1}{\theta },p+1\right) & , & \lambda =1 \end{array} \right. , \label{2.8} \end{equation} Since $|f^{\prime }|^{q}$ is $\left( \alpha ,m\right) $-GA-convex on $ [a^{m},b],$ for all $t\in \left[ 0,1\right] $, if we use $\left( \ref{2.3} \right) $, $\left( \ref{2.4}\right) $ and $\left( \ref{2.8}\right) $ in $ \left( \ref{2.7}\right) $, we obtain $\left( \ref{2.6}\right) $. This completes the proof. \end{proof} \begin{corollary} Under the assumptions of Theorem \ref{Theorem6} with $x=\sqrt{ab}$, $\lambda =\frac{1}{3}$ from the inequality $\left( \ref{2.6}\right) $ we get the following Simpson type inequality for fractional integrals \begin{eqnarray*} &&\frac{2^{\theta -1}}{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left\vert K_{f}\left( \frac{1}{3},\theta ,\left( \sqrt{ab}\right) ^{m},a^{m},b^{m}\right) \right\vert =\left\vert \frac{1}{6}\left[ f(a^{m})+4f\left( \left( \sqrt{ab}\right) ^{m}\right) +f(b^{m})\right] \right. \\ &&-\frac{2^{\theta -1}\Gamma \left( \theta +1\right) }{\left( m\ln \frac{b}{a }\right) ^{\theta }}\left. 
\left[ J_{\left( \sqrt{ab}\right) ^{m}-}^{\theta }f(a^{m})+J_{\left( \sqrt{ab}\right) ^{m}+}^{\theta }f(b^{m})\right] \right\vert \leq \frac{m\ln \frac{b}{a}}{4}R_{0}^{^{\frac{1}{p}}}\left( \theta ,\frac{1}{3},p\right) \\ &&\times \left\{ a^{m}\left[ \begin{array}{c} \left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}R_{1}\left( \sqrt{ab},q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( a\right) \right\vert ^{q}R_{2}\left( \sqrt{ab} ,q,m,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right. \\ &&\left. +b^{m}\left[ \begin{array}{c} \left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}R_{3}\left( \sqrt{ab},q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( b\right) \right\vert ^{q}R_{4}\left( \sqrt{ab} ,q,m,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right\} \end{eqnarray*} \end{corollary} \begin{corollary} \label{Corollary7}Under the assumptions of Theorem \ref{Theorem6} with $x= \sqrt{ab}$,$\ \lambda =0$ from the inequality $\left( \ref{2.6}\right) $ we get the following midpoint-type inequality for fractional integrals \end{corollary} \begin{eqnarray*} &&\frac{2^{\theta -1}}{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left\vert K_{f}\left( 0,\theta ,\left( \sqrt{ab}\right) ^{m},a^{m},b^{m}\right) \right\vert \\ &=&\left\vert f\left( \left( \sqrt{ab}\right) ^{m}\right) -\frac{2^{\theta -1}\Gamma \left( \theta +1\right) }{\left( m\ln \frac{b}{a}\right) ^{\theta } }\left[ J_{\left( \sqrt{ab}\right) ^{m}-}^{\theta }f(a^{m})+J_{\left( \sqrt{ ab}\right) ^{m}+}^{\theta }f(b^{m})\right] \right\vert \\ &\leq &\frac{m\ln \frac{b}{a}}{4}\left( \frac{1}{\theta p+1}\right) ^{^{ \frac{1}{p}}}\left\{ a^{m}\left[ \begin{array}{c} \left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}R_{1}\left( \sqrt{ab},q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( a\right) \right\vert ^{q}R_{2}\left( \sqrt{ab} ,q,m,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right. \\ &&\left. 
+b^{m}\left[ \begin{array}{c} \left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}R_{3}\left( \sqrt{ab},q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( b\right) \right\vert ^{q}R_{4}\left( \sqrt{ab},q,m,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right\} \end{eqnarray*} \begin{remark} If we take $\theta =1$, $m=1$, $p=\frac{q}{q-1}$ in Corollary \ref{Corollary7} we have the following midpoint-type inequality for $\alpha $-GA-convex functions (or GA-$s$-convex functions in the first sense), which coincides with inequality $\left( 17\right) $ of Theorem 3.7.b in \cite{19}: \begin{eqnarray*} &&\left\vert f\left( \sqrt{ab}\right) -\frac{1}{\ln b-\ln a}\int_{a}^{b}\frac{f\left( x\right) }{x}dx\right\vert \\ &\leq &\frac{\ln \frac{b}{a}}{4}\left( \frac{q-1}{2q-1}\right) ^{1-\frac{1}{q}}\left\{ a\left[ \begin{array}{c} \left\vert f^{\prime }\left( \sqrt{ab}\right) \right\vert ^{q}R_{1}\left( \sqrt{ab},q,1,\alpha \right) \\ +\left\vert f^{\prime }\left( a\right) \right\vert ^{q}R_{2}\left( \sqrt{ab},q,1,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right. \\ &&\left.
+b\left[ \begin{array}{c} \left\vert f^{\prime }\left( \sqrt{ab}\right) \right\vert ^{q}R_{3}\left( \sqrt{ab},q,1,\alpha \right) \\ +\left\vert f^{\prime }\left( b\right) \right\vert ^{q}R_{4}\left( \sqrt{ab},q,1,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right\} \text{.} \end{eqnarray*} \end{remark} \begin{remark} If we take $\theta =1$, $m=1$, $\alpha =1$, $p=\frac{q}{q-1}$ in Corollary \ref{Corollary7} we have the following midpoint-type inequality for GA-convex functions, which coincides with inequality $\left( 21\right) $ of Corollary 3.8 in \cite{19}: \begin{eqnarray*} &&\left\vert f\left( \sqrt{ab}\right) -\frac{1}{\ln b-\ln a}\int_{a}^{b}\frac{f\left( x\right) }{x}dx\right\vert \\ &\leq &\frac{\ln \frac{b}{a}}{4}\left( \frac{q-1}{2q-1}\right) ^{1-\frac{1}{q}}\left\{ a\left[ \begin{array}{c} \left\vert f^{\prime }\left( \sqrt{ab}\right) \right\vert ^{q}R_{1}\left( \sqrt{ab},q,1,1\right) \\ +\left\vert f^{\prime }\left( a\right) \right\vert ^{q}R_{2}\left( \sqrt{ab},q,1,1\right) \end{array} \right] ^{\frac{1}{q}}\right. \\ &&\left.
+b\left[ \begin{array}{c} \left\vert f^{\prime }\left( \sqrt{ab}\right) \right\vert ^{q}R_{3}\left( \sqrt{ab},q,1,1\right) \\ +\left\vert f^{\prime }\left( b\right) \right\vert ^{q}R_{4}\left( \sqrt{ab},q,1,1\right) \end{array} \right] ^{\frac{1}{q}}\right\} \text{.} \end{eqnarray*} \end{remark} \begin{corollary} Under the assumptions of Theorem \ref{Theorem6} with $x=\sqrt{ab}$, $\lambda =1$ from the inequality $\left( \ref{2.6}\right) $ we get the following trapezoid-type inequality for fractional integrals \begin{eqnarray*} &&\frac{2^{\theta -1}}{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left\vert K_{f}\left( 1,\theta ,\left( \sqrt{ab}\right) ^{m},a^{m},b^{m}\right) \right\vert \\ &=&\left\vert \frac{f(a^{m})+f(b^{m})}{2}-\frac{2^{\theta -1}\Gamma \left( \theta +1\right) }{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left[ J_{\left( \sqrt{ab}\right) ^{m}-}^{\theta }f(a^{m})+J_{\left( \sqrt{ab}\right) ^{m}+}^{\theta }f(b^{m})\right] \right\vert \\ &\leq &\frac{m\ln \frac{b}{a}}{4}\left( \frac{1}{\theta }\beta \left( \frac{1}{\theta },p+1\right) \right) ^{\frac{1}{p}}\left\{ a^{m}\left[ \begin{array}{c} \left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}R_{1}\left( \sqrt{ab},q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( a\right) \right\vert ^{q}R_{2}\left( \sqrt{ab},q,m,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right. \\ &&\left. +b^{m}\left[ \begin{array}{c} \left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}R_{3}\left( \sqrt{ab},q,m,\alpha \right) \\ +m\left\vert f^{\prime }\left( b\right) \right\vert ^{q}R_{4}\left( \sqrt{ab},q,m,\alpha \right) \end{array} \right] ^{\frac{1}{q}}\right\} \end{eqnarray*} \end{corollary} \begin{corollary} Let the assumptions of Theorem \ref{Theorem6} hold.
If $\left\vert f^{\prime }(u)\right\vert \leq M$ for all $u\in \left[ a^{m},b\right] $ and $\lambda =0$, then from the inequality $\left( \ref{2.6}\right) $ we get the following Ostrowski type inequality for fractional integrals \begin{eqnarray*} &&\left\vert \left[ \left( \ln \frac{x}{a}\right) ^{\theta }+\left( \ln \frac{b}{x}\right) ^{\theta }\right] f(x^{m})-\frac{\Gamma \left( \theta +1\right) }{m^{\theta }}\left[ J_{x^{m}-}^{\theta }f(a^{m})+J_{x^{m}+}^{\theta }f(b^{m})\right] \right\vert \\ &\leq &\frac{mM}{\left( \theta p+1\right) ^{\frac{1}{p}}}\left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}\left( \begin{array}{c} R_{1}\left( x,q,m,\alpha \right) \\ +mR_{2}\left( x,q,m,\alpha \right) \end{array} \right) ^{\frac{1}{q}}\right. \\ &&\left. +b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}\left( \begin{array}{c} R_{3}\left( x,q,m,\alpha \right) \\ +mR_{4}\left( x,q,m,\alpha \right) \end{array} \right) ^{\frac{1}{q}}\right\} \end{eqnarray*} for each $x\in \left[ a,b\right] $. \end{corollary} \begin{theorem} \label{Theorem7}Let $f:I\subseteq \left( 0,\infty \right) \rightarrow \mathbb{R} $ be a differentiable function on $I^{\circ }$ such that $f^{\prime }\in L[a^{m},b^{m}]$, where $a^{m},b\in I^{\circ }$ with $a<b$ and $m\in \left( 0,1\right] $. If $|f^{\prime }|^{q}$ is $\left( \alpha ,m\right) $-GA-convex on $[a^{m},b]$ for some fixed $q>1$, $x\in \lbrack a,b]$, $\lambda \in \left[ 0,1\right] $ and $\theta >0$, then the following inequality for fractional integrals holds: \begin{eqnarray} &&\left\vert K_{f}\left( \lambda ,\theta ,x^{m},a^{m},b^{m}\right) \right\vert \leq m^{\theta +1} \notag \\ &&\times \left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}T_{1}^{\frac{1}{p}}\left( x,\theta ,\lambda ,p,m\right) \left( \frac{\left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q}+m\alpha \left\vert f^{\prime }\left( a\right) \right\vert ^{q}}{\alpha +1}\right) ^{\frac{1}{q}}\right. \notag \\ &&\left.
+b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}T_{2}^{\frac{1}{p}}\left( x,\theta ,\lambda ,p,m\right) \left( \frac{\left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q}+m\alpha \left\vert f^{\prime }\left( b\right) \right\vert ^{q}}{\alpha +1}\right) ^{\frac{1}{q}}\right\} \label{2.9} \end{eqnarray} where \begin{eqnarray*} T_{1}\left( x,\theta ,\lambda ,p,m\right) &=&\dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{p}\left( \frac{x}{a}\right) ^{mpt}dt, \\ T_{2}\left( x,\theta ,\lambda ,p,m\right) &=&\dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{p}\left( \frac{x}{b}\right) ^{mpt}dt, \end{eqnarray*} and $\frac{1}{p}+\frac{1}{q}=1$. \end{theorem} \begin{proof} Since $|f^{\prime }|^{q}$ is $\left( \alpha ,m\right) $-GA-convex on $[a^{m},b]$, for all $t\in \left[ 0,1\right] $, using $\left( \ref{2.3}\right) $ and $\left( \ref{2.4}\right) $ we obtain \begin{eqnarray} \dint\limits_{0}^{1}\left\vert f^{\prime }\left( x^{mt}a^{m\left( 1-t\right) }\right) \right\vert ^{q}dt &\leq &\dint\limits_{0}^{1}t^{\alpha }\left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q}+m\left( 1-t^{\alpha }\right) \left\vert f^{\prime }\left( a\right) \right\vert ^{q}dt \notag \\ &=&\frac{\left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q}+m\alpha \left\vert f^{\prime }\left( a\right) \right\vert ^{q}}{\alpha +1}, \label{2.10} \end{eqnarray} \begin{eqnarray} \dint\limits_{0}^{1}\left\vert f^{\prime }\left( x^{mt}b^{m\left( 1-t\right) }\right) \right\vert ^{q}dt &\leq &\dint\limits_{0}^{1}t^{\alpha }\left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q}+m\left( 1-t^{\alpha }\right) \left\vert f^{\prime }\left( b\right) \right\vert ^{q}dt \notag \\ &=&\frac{\left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q}+m\alpha \left\vert f^{\prime }\left( b\right) \right\vert ^{q}}{\alpha +1}.
\label{2.11} \end{eqnarray} From Lemma \ref{Lemma2}, property of the modulus, $\left( \ref{2.10}\right) $ , $\left( \ref{2.11}\right) $ and using the H\"{o}lder inequality, we have \begin{eqnarray*} &&\left\vert K_{f}\left( \lambda ,\theta ,x^{m},a^{m},b^{m}\right) \right\vert \leq m^{\theta +1} \\ &&\times \left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}\left( \dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{p}\left( \frac{x}{a}\right) ^{mpt}dt\right) ^{\frac{1}{p}}\left( \dint\limits_{0}^{1}\left\vert f^{\prime }\left( x^{mt}a^{m\left( 1-t\right) }\right) \right\vert ^{q}dt\right) ^{\frac{1}{q}}\right. \\ &&\left. +b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}\left( \dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{p}\left( \frac{x}{b}\right) ^{mpt}dt\right) ^{\frac{1}{p}}\left( \dint\limits_{0}^{1}\left\vert f^{\prime }\left( x^{mt}b^{m\left( 1-t\right) }\right) \right\vert ^{q}dt\right) ^{\frac{1}{q}}\right\} \\ &\leq &m^{\theta +1}\left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}\left( \dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{p}\left( \frac{x}{a}\right) ^{mpt}dt\right) ^{\frac{1}{p}}\left( \frac{ \left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q}+m\alpha \left\vert f^{\prime }\left( a\right) \right\vert ^{q}}{\alpha +1}\right) ^{ \frac{1}{q}}\right. \\ &&\left. +b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}\left( \dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{p}\left( \frac{x}{b}\right) ^{mpt}dt\right) ^{\frac{1}{p}}\left( \frac{\left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q}+m\alpha \left\vert f^{\prime }\left( b\right) \right\vert ^{q}}{\alpha +1}\right) ^{\frac{1}{q}}\right\} \end{eqnarray*} This completes the proof. 
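The elementary moment computation behind $\left( \ref{2.10}\right) $ and $\left( \ref{2.11}\right) $, namely $\int_{0}^{1}\left[ t^{\alpha }X+m\left( 1-t^{\alpha }\right) Y\right] dt=\frac{X+m\alpha Y}{\alpha +1}$, can be confirmed numerically. In the sketch below (ours, not part of the proof), $X$ and $Y$ stand for $\left\vert f^{\prime }(x^{m})\right\vert ^{q}$ and $\left\vert f^{\prime }(a)\right\vert ^{q}$, and the sample values are arbitrary.

```python
# Midpoint-rule check of the moment identity used in (2.10)-(2.11):
#   int_0^1 [t^alpha * X + m*(1 - t^alpha) * Y] dt = (X + m*alpha*Y) / (alpha + 1).
def moment_lhs(X, Y, m, alpha, n=20000):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h  # midpoint of the i-th subinterval of [0, 1]
        total += t ** alpha * X + m * (1.0 - t ** alpha) * Y
    return total * h

def moment_rhs(X, Y, m, alpha):
    return (X + m * alpha * Y) / (alpha + 1.0)

# Arbitrary placeholder values for X, Y and admissible m in (0,1], alpha > 0.
assert abs(moment_lhs(2.0, 3.0, 0.5, 1.7) - moment_rhs(2.0, 3.0, 0.5, 1.7)) < 1e-5
```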
\end{proof} \begin{corollary} Under the assumptions of Theorem \ref{Theorem7} with $x=\sqrt{ab}$, $\lambda =\frac{1}{3}$ from the inequality $\left( \ref{2.9}\right) $ we get the following Simpson type inequality for fractional integrals \begin{eqnarray*} &&\frac{2^{\theta -1}}{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left\vert K_{f}\left( \frac{1}{3},\theta ,\left( \sqrt{ab}\right) ^{m},a^{m},b^{m}\right) \right\vert =\left\vert \frac{1}{6}\left[ f(a^{m})+4f\left( \left( \sqrt{ab}\right) ^{m}\right) +f(b^{m})\right] \right. \\ &&-\frac{2^{\theta -1}\Gamma \left( \theta +1\right) }{\left( m\ln \frac{b}{a }\right) ^{\theta }}\left. \left[ J_{\left( \sqrt{ab}\right) ^{m}-}^{\theta }f(a^{m})+J_{\left( \sqrt{ab}\right) ^{m}+}^{\theta }f(b^{m})\right] \right\vert \leq \frac{m\ln \frac{b}{a}}{4} \\ &&\times \left\{ a^{m}T_{1}^{\frac{1}{p}}\left( \sqrt{ab},\theta ,\frac{1}{3} ,p,m\right) \left( \frac{\left\vert f^{\prime }\left( \left( \sqrt{ab} \right) ^{m}\right) \right\vert ^{q}+m\alpha \left\vert f^{\prime }\left( a\right) \right\vert ^{q}}{\alpha +1}\right) ^{\frac{1}{q}}\right. \\ &&\left. 
+b^{m}T_{2}^{\frac{1}{p}}\left( \sqrt{ab},\theta ,\frac{1}{3} ,p,m\right) \left( \frac{\left\vert f^{\prime }\left( \left( \sqrt{ab} \right) ^{m}\right) \right\vert ^{q}+m\alpha \left\vert f^{\prime }\left( b\right) \right\vert ^{q}}{\alpha +1}\right) ^{\frac{1}{q}}\right\} \end{eqnarray*} \end{corollary} \begin{corollary} Under the assumptions of Theorem \ref{Theorem7} with $x=\sqrt{ab}$,$\ \lambda =0$ from the inequality $\left( \ref{2.9}\right) $ we get the following midpoint-type inequality for fractional integrals \end{corollary} \begin{eqnarray*} &&\frac{2^{\theta -1}}{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left\vert K_{f}\left( 0,\theta ,\left( \sqrt{ab}\right) ^{m},a^{m},b^{m}\right) \right\vert \\ &=&\left\vert f\left( \left( \sqrt{ab}\right) ^{m}\right) -\frac{2^{\theta -1}\Gamma \left( \theta +1\right) }{\left( m\ln \frac{b}{a}\right) ^{\theta } }\left[ J_{\left( \sqrt{ab}\right) ^{m}-}^{\theta }f(a^{m})+J_{\left( \sqrt{ ab}\right) ^{m}+}^{\theta }f(b^{m})\right] \right\vert \\ &\leq &\frac{m\ln \frac{b}{a}}{4}\left\{ a^{m}T_{1}^{\frac{1}{p}}\left( \sqrt{ab},\theta ,0,p,m\right) \left( \frac{\left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}+m\alpha \left\vert f^{\prime }\left( a\right) \right\vert ^{q}}{\alpha +1}\right) ^{\frac{1}{q} }\right. \\ &&\left. 
+b^{m}T_{2}^{\frac{1}{p}}\left( \sqrt{ab},\theta ,0,p,m\right) \left( \frac{\left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}+m\alpha \left\vert f^{\prime }\left( b\right) \right\vert ^{q}}{\alpha +1}\right) ^{\frac{1}{q}}\right\} \end{eqnarray*} \begin{corollary} Under the assumptions of Theorem \ref{Theorem7} with $x=\sqrt{ab}$, $\lambda =1$, from the inequality $\left( \ref{2.9}\right) $ we get the following trapezoid-type inequality for fractional integrals \begin{eqnarray*} &&\frac{2^{\theta -1}}{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left\vert K_{f}\left( 1,\theta ,\left( \sqrt{ab}\right) ^{m},a^{m},b^{m}\right) \right\vert \\ &=&\left\vert \frac{f(a^{m})+f(b^{m})}{2}-\frac{2^{\theta -1}\Gamma \left( \theta +1\right) }{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left[ J_{\left( \sqrt{ab}\right) ^{m}-}^{\theta }f(a^{m})+J_{\left( \sqrt{ab}\right) ^{m}+}^{\theta }f(b^{m})\right] \right\vert \\ &\leq &\frac{m\ln \frac{b}{a}}{4}\left\{ a^{m}T_{1}^{\frac{1}{p}}\left( \sqrt{ab},\theta ,1,p,m\right) \left( \frac{\left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}+m\alpha \left\vert f^{\prime }\left( a\right) \right\vert ^{q}}{\alpha +1}\right) ^{\frac{1}{q}}\right. \\ &&\left. +b^{m}T_{2}^{\frac{1}{p}}\left( \sqrt{ab},\theta ,1,p,m\right) \left( \frac{\left\vert f^{\prime }\left( \left( \sqrt{ab}\right) ^{m}\right) \right\vert ^{q}+m\alpha \left\vert f^{\prime }\left( b\right) \right\vert ^{q}}{\alpha +1}\right) ^{\frac{1}{q}}\right\} \end{eqnarray*} \end{corollary} \begin{corollary} Let the assumptions of Theorem \ref{Theorem7} hold.
If $\left\vert f^{\prime }(u)\right\vert \leq M$ for all $u\in \left[ a^{m},b\right] $ and $\lambda =0,$ then from the inequality $\left( \ref{2.9}\right) $ we get the following Ostrowski type inequality for fractional integrals \begin{eqnarray*} &&\left\vert \left[ \left( \ln \frac{x}{a}\right) ^{\theta }+\left( \ln \frac{b}{x}\right) ^{\theta }\right] f(x^{m})-\frac{\Gamma \left( \theta +1\right) }{m^{\theta }}\left[ J_{x^{m}-}^{\theta }f(a^{m})+J_{x^{m}+}^{\theta }f(b^{m})\right] \right\vert \\ &\leq &mM\left( \frac{1+m\alpha }{\alpha +1}\right) ^{\frac{1}{q}} \\ &&\times \left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}T_{1}^{\frac{1}{p}}\left( x,\theta ,0,p,m\right) +b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}T_{2}^{\frac{1}{p}}\left( x,\theta ,0,p,m\right) \right\} \end{eqnarray*} for each $x\in \left[ a,b\right] $. \end{corollary} \begin{theorem} \label{Theorem8}Let $f:I\subseteq \left( 0,\infty \right) \rightarrow \mathbb{R} $ be a differentiable function on $I^{\circ }$ such that $f^{\prime }\in L[a^{m},b^{m}]$, where $a^{m},b\in I^{\circ }$ with $a<b$ and $m\in \left( 0,1\right] $. If $|f^{\prime }|^{q}$ is $\left( \alpha ,m\right) $-GA-convex on $[a^{m},b]$ for some fixed $q>1$, $x\in \lbrack a,b]$, $\lambda \in \left[ 0,1\right] $ and $\theta >0$, then the following inequality for fractional integrals holds \begin{eqnarray} &&\left\vert K_{f}\left( \lambda ,\theta ,x^{m},a^{m},b^{m}\right) \right\vert \leq m^{\theta +1} \notag \\ &&\times \left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}V_{3}^{\frac{1}{p}}\left[ \begin{array}{c} V_{1}\left( \theta ,\lambda ,\alpha ,q\right) \left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q} \\ +mV_{2}\left( \theta ,\lambda ,\alpha ,q\right) \left\vert f^{\prime }\left( a\right) \right\vert ^{q} \end{array} \right] ^{\frac{1}{q}}\right. \notag \\ &&\left.
+b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}V_{4}^{\frac{1}{p}} \left[ \begin{array}{c} V_{1}\left( \theta ,\lambda ,\alpha ,q\right) \left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q} \\ +mV_{2}\left( \theta ,\lambda ,\alpha ,q\right) \left\vert f^{\prime }\left( b\right) \right\vert ^{q} \end{array} \right] ^{\frac{1}{q}}\right\} \label{2.12} \end{eqnarray} where \begin{equation*} V_{1}\left( \theta ,\lambda ,\alpha ,q\right) =\dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{q}t^{\alpha }dt \end{equation*} \begin{equation} =\left\{ \begin{array}{ccc} \frac{1}{\theta q+\alpha +1} & , & \lambda =0 \\ \begin{array}{c} \left\{ \frac{\lambda ^{\left( \theta q+\alpha +1\right) /\theta }}{\theta } \beta \left( \frac{\alpha +1}{\theta },q+1\right) +\frac{\left( 1-\lambda \right) ^{q+1}}{\theta \left( q+1\right) }\right. \\ \left. \times \begin{array}{c} _{2}F_{1}\left( 1-\frac{\alpha +1}{\theta },1;q+2;1-\lambda \right) \end{array} \right\} \end{array} & , & 0<\lambda <1 \\ \frac{1}{\theta }\beta \left( \frac{\alpha +1}{\theta },q+1\right) & , & \lambda =1 \end{array} \right. \label{2.13} \end{equation} \begin{equation*} V_{2}\left( \theta ,\lambda ,\alpha ,q\right) =\dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{q}\left( 1-t^{\alpha }\right) dt \end{equation*} \begin{equation} =\left\{ \begin{array}{ccc} \frac{1}{\theta q+1}-\frac{1}{\theta q+\alpha +1} & , & \lambda =0 \\ \begin{array}{c} \left\{ \frac{\lambda ^{\left( \theta q+1\right) /\theta }}{\theta }\beta \left( \frac{1}{\theta },q+1\right) -\frac{\lambda ^{\left( \theta q+\alpha +1\right) /\theta }}{\theta }\beta \left( \frac{\alpha +1}{\theta } ,q+1\right) \right. \\ \left. 
+\frac{\left( 1-\lambda \right) ^{q+1}}{\theta \left( q+1\right) } \left( \begin{array}{c} \begin{array}{c} _{2}F_{1}\left( 1-\frac{1}{\theta },1;q+2;1-\lambda \right) \end{array} \\ -_{2}F_{1}\left( 1-\frac{\alpha +1}{\theta },1;q+2;1-\lambda \right) \end{array} \right) \right\} \end{array} & , & 0<\lambda <1 \\ \frac{1}{\theta }\beta \left( \frac{1}{\theta },q+1\right) -\frac{1}{\theta } \beta \left( \frac{\alpha +1}{\theta },q+1\right) & , & \lambda =1 \end{array} \right. \label{2.14} \end{equation} \begin{equation} V_{3}=\dint\limits_{0}^{1}\left( \frac{x}{a}\right) ^{pmt}dt=\left\{ \begin{array}{ccc} \frac{\left( \frac{x}{a}\right) ^{mp}-1}{\ln \left( \frac{x}{a}\right) ^{mp}} & , & x\neq a \\ 1 & , & \text{otherwise} \end{array} \right. \label{2.15} \end{equation} \begin{equation} V_{4}=\dint\limits_{0}^{1}\left( \frac{x}{b}\right) ^{pmt}dt=\left\{ \begin{array}{ccc} \frac{\left( \frac{x}{b}\right) ^{mp}-1}{\ln \left( \frac{x}{b}\right) ^{mp}} & , & x\neq b \\ 1 & , & \text{otherwise} \end{array} \right. \label{2.16} \end{equation} and $\frac{1}{p}+\frac{1}{q}=1$. \end{theorem} \begin{proof} From Lemma \ref{Lemma2}, property of the modulus, the H\"{o}lder inequality and by using $\left( \ref{2.3}\right) $, $\left( \ref{2.4}\right) $, $\left( \ref{2.15}\right) $ and $\left( \ref{2.16}\right) $ we have \begin{equation*} \left\vert K_{f}\left( \lambda ,\theta ,x^{m},a^{m},b^{m}\right) \right\vert \leq m^{\theta +1} \end{equation*} \begin{eqnarray*} &&\times \left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}\left( \dint\limits_{0}^{1}\left( \frac{x}{a}\right) ^{pmt}dt\right) ^{\frac{1}{p} }\left( \dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{q}\left\vert f^{\prime }\left( x^{mt}a^{m\left( 1-t\right) }\right) \right\vert ^{q}dt\right) ^{\frac{1}{q}}\right. \\ &&\left. 
+b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}\left( \dint\limits_{0}^{1}\left( \frac{x}{b}\right) ^{pmt}dt\right) ^{\frac{1}{p} }\left( \dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{q}\left\vert f^{\prime }\left( x^{mt}b^{m\left( 1-t\right) }\right) \right\vert ^{q}dt\right) ^{\frac{1}{q}}\right\} \end{eqnarray*} \begin{eqnarray} &\leq &m^{\theta +1}\left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}V_{3}^{\frac{1}{p}}\right. \notag \\ &&\times \left( \dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{q}\left[ \begin{array}{c} t^{\alpha }\left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q} \\ +m\left( 1-t^{\alpha }\right) \left\vert f^{\prime }\left( a\right) \right\vert ^{q} \end{array} \right] dt\right) ^{\frac{1}{q}} \notag \\ &&+b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}V_{4}^{\frac{1}{p}} \notag \\ &&\left. \times \left( \dint\limits_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{q}\left[ \begin{array}{c} t^{\alpha }\left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q} \\ +m\left( 1-t^{\alpha }\right) \left\vert f^{\prime }\left( b\right) \right\vert ^{q} \end{array} \right] dt\right) ^{\frac{1}{q}}\right\} \label{2.17} \end{eqnarray} By a simple computation we verify $\left( \ref{2.13}\right) $ and $\left( \ref{2.14}\right) $. If we use $\left( \ref{2.13}\right) $, $\left( \ref {2.14}\right) $, $\left( \ref{2.15}\right) $ and $\left( \ref{2.16}\right) $ in $\left( \ref{2.17}\right) $ we obtain $\left( \ref{2.12}\right) $. This completes the proof. 
\end{proof} \begin{corollary} Under the assumptions of Theorem \ref{Theorem8} with $x=\sqrt{ab}$, $\lambda =\frac{1}{3}$ from the inequality $\left( \ref{2.12}\right) $ we get the following Simpson type inequality for fractional integrals \begin{eqnarray*} &&\frac{2^{\theta -1}}{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left\vert K_{f}\left( \frac{1}{3},\theta ,\left( \sqrt{ab}\right) ^{m},a^{m},b^{m}\right) \right\vert =\left\vert \frac{1}{6}\left[ f(a^{m})+4f\left( \left( \sqrt{ab}\right) ^{m}\right) +f(b^{m})\right] \right. \\ &&-\frac{2^{\theta -1}\Gamma \left( \theta +1\right) }{\left( m\ln \frac{b}{a }\right) ^{\theta }}\left. \left[ J_{\left( \sqrt{ab}\right) ^{m}-}^{\theta }f(a^{m})+J_{\left( \sqrt{ab}\right) ^{m}+}^{\theta }f(b^{m})\right] \right\vert \leq \frac{m\ln \frac{b}{a}}{4} \\ &&\times \left\{ a^{m}\left( \frac{\left( \frac{b}{a}\right) ^{\frac{mp}{2} }-1}{\ln \left( \frac{b}{a}\right) ^{\frac{mp}{2}}}\right) ^{\frac{1}{p}} \left[ \begin{array}{c} V_{1}\left( \theta ,\frac{1}{3},\alpha ,q\right) \left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q} \\ +mV_{2}\left( \theta ,\frac{1}{3},\alpha ,q\right) \left\vert f^{\prime }\left( a\right) \right\vert ^{q} \end{array} \right] ^{\frac{1}{q}}\right. \\ &&\left. 
+b^{m}\left( \frac{\left( \frac{a}{b}\right) ^{\frac{mp}{2}}-1}{\ln \left( \frac{a}{b}\right) ^{\frac{mp}{2}}}\right) ^{\frac{1}{p}}\left[ \begin{array}{c} V_{1}\left( \theta ,\frac{1}{3},\alpha ,q\right) \left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q} \\ +mV_{2}\left( \theta ,\frac{1}{3},\alpha ,q\right) \left\vert f^{\prime }\left( b\right) \right\vert ^{q} \end{array} \right] ^{\frac{1}{q}}\right\} \end{eqnarray*} \end{corollary} \begin{corollary} Under the assumptions of Theorem \ref{Theorem8} with $x=\sqrt{ab}$,$\ \lambda =0$ from the inequality $\left( \ref{2.12}\right) $ we get the following midpoint-type inequality for fractional integrals \end{corollary} \begin{eqnarray*} &&\frac{2^{\theta -1}}{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left\vert K_{f}\left( 0,\theta ,\left( \sqrt{ab}\right) ^{m},a^{m},b^{m}\right) \right\vert \\ &=&\left\vert f\left( \left( \sqrt{ab}\right) ^{m}\right) -\frac{2^{\theta -1}\Gamma \left( \theta +1\right) }{\left( m\ln \frac{b}{a}\right) ^{\theta } }\left[ J_{\left( \sqrt{ab}\right) ^{m}-}^{\theta }f(a^{m})+J_{\left( \sqrt{ ab}\right) ^{m}+}^{\theta }f(b^{m})\right] \right\vert \\ &\leq &\frac{m\ln \frac{b}{a}}{4}\left\{ a^{m}\left( \frac{\left( \frac{b}{a} \right) ^{\frac{mp}{2}}-1}{\ln \left( \frac{b}{a}\right) ^{\frac{mp}{2}}} \right) ^{\frac{1}{p}}\left[ \begin{array}{c} \frac{1}{\theta q+\alpha +1}\left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q} \\ +\left( \frac{m}{\theta q+1}-\frac{m}{\theta q+\alpha +1}\right) \left\vert f^{\prime }\left( a\right) \right\vert ^{q} \end{array} \right] ^{\frac{1}{q}}\right. \\ &&\left. 
+b^{m}\left( \frac{\left( \frac{a}{b}\right) ^{\frac{mp}{2}}-1}{\ln \left( \frac{a}{b}\right) ^{\frac{mp}{2}}}\right) ^{\frac{1}{p}}\left[ \begin{array}{c} \frac{1}{\theta q+\alpha +1}\left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q} \\ +\left( \frac{m}{\theta q+1}-\frac{m}{\theta q+\alpha +1}\right) \left\vert f^{\prime }\left( b\right) \right\vert ^{q} \end{array} \right] ^{\frac{1}{q}}\right\} \end{eqnarray*} \begin{corollary} Under the assumptions of Theorem \ref{Theorem8} with $x=\sqrt{ab}$, $\lambda =1$, from the inequality $\left( \ref{2.12}\right) $ we get the following trapezoid-type inequality for fractional integrals \begin{eqnarray*} &&\frac{2^{\theta -1}}{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left\vert K_{f}\left( 1,\theta ,\left( \sqrt{ab}\right) ^{m},a^{m},b^{m}\right) \right\vert \\ &=&\left\vert \frac{f(a^{m})+f(b^{m})}{2}-\frac{2^{\theta -1}\Gamma \left( \theta +1\right) }{\left( m\ln \frac{b}{a}\right) ^{\theta }}\left[ J_{\left( \sqrt{ab}\right) ^{m}-}^{\theta }f(a^{m})+J_{\left( \sqrt{ab}\right) ^{m}+}^{\theta }f(b^{m})\right] \right\vert \\ &\leq &\frac{m\ln \frac{b}{a}}{4}\left\{ a^{m}\left( \frac{\left( \frac{b}{a}\right) ^{\frac{mp}{2}}-1}{\ln \left( \frac{b}{a}\right) ^{\frac{mp}{2}}}\right) ^{\frac{1}{p}}\left[ \begin{array}{c} \frac{1}{\theta }\beta \left( \frac{\alpha +1}{\theta },q+1\right) \left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q} \\ +\left( \frac{1}{\theta }\beta \left( \frac{1}{\theta },q+1\right) -\frac{1}{\theta }\beta \left( \frac{\alpha +1}{\theta },q+1\right) \right) \left\vert f^{\prime }\left( a\right) \right\vert ^{q} \end{array} \right] ^{\frac{1}{q}}\right. \\ &&\left.
+b^{m}\left( \frac{\left( \frac{a}{b}\right) ^{\frac{mp}{2}}-1}{\ln \left( \frac{a}{b}\right) ^{\frac{mp}{2}}}\right) ^{\frac{1}{p}}\left[ \begin{array}{c} \frac{1}{\theta }\beta \left( \frac{\alpha +1}{\theta },q+1\right) \left\vert f^{\prime }\left( x^{m}\right) \right\vert ^{q} \\ +\left( \frac{1}{\theta }\beta \left( \frac{1}{\theta },q+1\right) -\frac{1}{\theta }\beta \left( \frac{\alpha +1}{\theta },q+1\right) \right) \left\vert f^{\prime }\left( b\right) \right\vert ^{q} \end{array} \right] ^{\frac{1}{q}}\right\} \end{eqnarray*} \end{corollary} \begin{corollary} Let the assumptions of Theorem \ref{Theorem8} hold. If $\left\vert f^{\prime }(u)\right\vert \leq M$ for all $u\in \left[ a^{m},b\right] $ and $\lambda =0,$ then from the inequality $\left( \ref{2.12}\right) $ we get the following Ostrowski type inequality for fractional integrals \begin{eqnarray*} &&\left\vert \left[ \left( \ln \frac{x}{a}\right) ^{\theta }+\left( \ln \frac{b}{x}\right) ^{\theta }\right] f(x^{m})-\frac{\Gamma \left( \theta +1\right) }{m^{\theta }}\left[ J_{x^{m}-}^{\theta }f(a^{m})+J_{x^{m}+}^{\theta }f(b^{m})\right] \right\vert \\ &\leq &mM\left[ \begin{array}{c} \frac{1}{\theta q+\alpha +1} \\ +\left( \frac{m}{\theta q+1}-\frac{m}{\theta q+\alpha +1}\right) \end{array} \right] ^{\frac{1}{q}}\left\{ a^{m}\left( \ln \frac{x}{a}\right) ^{\theta +1}\left( \frac{\left( \frac{x}{a}\right) ^{mp}-1}{\ln \left( \frac{x}{a}\right) ^{mp}}\right) ^{\frac{1}{p}}\right. \\ &&\left. +b^{m}\left( \ln \frac{b}{x}\right) ^{\theta +1}\left( \frac{\left( \frac{x}{b}\right) ^{mp}-1}{\ln \left( \frac{x}{b}\right) ^{mp}}\right) ^{\frac{1}{p}}\right\} \end{eqnarray*} for each $x\in \left[ a,b\right] $. \end{corollary} \end{document}
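The closed forms for $V_{1}$ in $\left( \ref{2.13}\right) $ at the endpoint values of $\lambda $ can be checked numerically. The following short Python sketch (our own illustration, not part of the paper; the function names are ours) compares a midpoint-rule approximation of $V_{1}\left( \theta ,\lambda ,\alpha ,q\right) =\int_{0}^{1}\left\vert t^{\theta }-\lambda \right\vert ^{q}t^{\alpha }dt$ with the stated values $1/(\theta q+\alpha +1)$ for $\lambda =0$ and $\theta ^{-1}\beta \left( (\alpha +1)/\theta ,q+1\right) $ for $\lambda =1$.

```python
import math

def beta(x, y):
    # Euler Beta function via the Gamma function (stdlib math.gamma)
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def V1_numeric(theta, lam, alpha, q, n=200000):
    # midpoint-rule approximation of V1 = \int_0^1 |t^theta - lam|^q t^alpha dt
    h = 1.0 / n
    return sum(abs(((i + 0.5) * h) ** theta - lam) ** q * ((i + 0.5) * h) ** alpha
               for i in range(n)) * h

theta, alpha, q = 1.5, 0.7, 2.0
# lambda = 0 case of (2.13): V1 = 1/(theta*q + alpha + 1)
assert abs(V1_numeric(theta, 0.0, alpha, q) - 1 / (theta * q + alpha + 1)) < 1e-4
# lambda = 1 case of (2.13): V1 = (1/theta) * Beta((alpha+1)/theta, q+1)
assert abs(V1_numeric(theta, 1.0, alpha, q)
           - beta((alpha + 1) / theta, q + 1) / theta) < 1e-4
```

For $\theta =3/2$, $\alpha =7/10$, $q=2$ both closed forms agree with the quadrature to four decimal places; the $0<\lambda <1$ branch involving $_{2}F_{1}$ is not checked here, since the hypergeometric function is not in the Python standard library.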
\begin{document} \title[Weighted Triebel-Lizorkin spaces]{ Triebel-Lizorkin spaces with general weights} \author[D. Drihem]{Douadi Drihem} \address{Douadi Drihem\\ M'sila University\\ Department of Mathematics\\ Laboratory of Functional Analysis and Geometry of Spaces\\ M'sila 28000, Algeria.} \email{[email protected], [email protected]} \thanks{ } \date{\today } \subjclass[2010]{ Primary: 42B25, 42B35; secondary: 46E35.} \begin{abstract} In this paper, the author introduces Triebel-Lizorkin spaces with general smoothness. We present the $\varphi $-transform characterization of these spaces in the sense of Frazier and Jawerth and we prove their Sobolev embeddings. Also, we establish the smooth atomic and molecular decomposition of these function spaces. To do this we need a generalization of some maximal inequalities to the case of general weights. \end{abstract} \keywords{Atom, Molecule, Triebel-Lizorkin space, Embedding, Muckenhoupt class.} \maketitle \section{Introduction} This paper is a continuation of \cite{D20}, where the author introduced Besov spaces with general smoothness and presented some of their properties, such as the $\varphi $-transform characterization in the sense of Frazier and Jawerth, the smooth atomic, molecular and wavelet decomposition, and the characterization of these function spaces in terms of difference relations. Spaces of generalized smoothness have been introduced by several authors. We refer, for instance, to Bownik \cite{M07}, Cobos and Fernandez \cite{CF88}, Goldman \cite{Go79} and \cite{Go83}, and Kalyabin \cite{Ka83}; see also Besov \cite{B03} and \cite{B05}, and Kalyabin and Lizorkin \cite{Kl87}. More general Besov spaces with variable smoothness were explicitly studied by Ansorena and Blasco \cite{AB95} and \cite{AB96}, including characterizations by differences and atomic decomposition.
The wavelet decomposition (with respect to a compactly supported wavelet basis of Daubechies type) of nonhomogeneous Besov spaces of generalized smoothness was achieved by Almeida \cite{Al}. The theory of these spaces has had a remarkable development, in part due to its usefulness in applications. For instance, they appear in the study of trace spaces on fractals, see Edmunds and Triebel \cite{ET96} and \cite{ET99}, where they introduced the spaces $B_{p,q}^{s,\Psi }$, with $\Psi $ a so-called admissible function, typically of log-type near $0$. For a complete treatment of these spaces we refer the reader to the work of Moura \cite{Mo01}. Further results on Besov spaces of variable smoothness are given in \cite{AA16}. More general function spaces of generalized smoothness can be found in Caetano and Leopold \cite{CL}, Farkas and Leopold \cite{FL06}, and the references therein. Recently, Dominguez and Tikhonov \cite{DT} gave a treatment of function spaces with logarithmic smoothness (Besov, Sobolev, Triebel-Lizorkin), including various new characterizations of Besov norms in terms of differences, sharp estimates for Besov norms of derivatives and potential operators (Riesz and Bessel potentials) in terms of norms of the functions themselves, and sharp embeddings between the Besov spaces defined by differences and by Fourier-analytical decompositions, as well as between Besov and Sobolev/Triebel-Lizorkin spaces. Tyulenev introduced in \cite{Ty15}, \cite{Ty-N-L} and \cite{Ty-151} a new family of Besov spaces of variable smoothness which covers many classes of Besov spaces, where the norm on these spaces was defined with the help of classical differences. Based on this weighted class and \cite{D20}, we introduce Triebel-Lizorkin spaces of variable smoothness, defined as follows.
Let $\mathcal{S}(\mathbb{R}^{n})$ be the set of all Schwartz functions $\varphi $ on $\mathbb{R}^{n}$, i.e., $\varphi $ is infinitely differentiable and \begin{equation*} \big\|\varphi |\mathcal{S}_{M}\big\|=\sup_{\beta \in \mathbb{N}_{0}^{n},|\beta |\leqslant M}\sup_{x\in \mathbb{R}^{n}}|\partial ^{\beta }\varphi (x)|(1+|x|)^{n+M+|\beta |}<\infty \end{equation*} for all $M\in \mathbb{N}$. Select a Schwartz function $\varphi $ such that \begin{equation*} \text{supp}(\mathcal{F}(\varphi ))\subset \big\{\xi :1/2\leqslant |\xi |\leqslant 2\big\} \end{equation*} and \begin{equation*} |\mathcal{F}(\varphi )(\xi )|\geqslant c\text{\quad if\quad }3/5\leqslant |\xi |\leqslant 5/3 \end{equation*} where $c>0$, and put $\varphi _{k}=2^{kn}\varphi (2^{k}\cdot )$, $k\in \mathbb{Z}$. Here $\mathcal{F}(\varphi )$ denotes the Fourier transform of $\varphi $, defined by \begin{equation*} \mathcal{F}(\varphi )(\xi )=(2\pi )^{-n/2}\int_{\mathbb{R}^{n}}e^{-ix\cdot \xi }\varphi (x)dx,\quad \xi \in \mathbb{R}^{n}. \end{equation*} Let \begin{equation*} \mathcal{S}_{\infty }(\mathbb{R}^{n})=\Big\{\varphi \in \mathcal{S}(\mathbb{R}^{n}):\int_{\mathbb{R}^{n}}x^{\beta }\varphi (x)dx=0\text{ for all multi-indices }\beta \in \mathbb{N}_{0}^{n}\Big\}. \end{equation*} Following Triebel \cite{T1}, we consider $\mathcal{S}_{\infty }(\mathbb{R}^{n})$ as a subspace of $\mathcal{S}(\mathbb{R}^{n})$, including the topology. Thus, $\mathcal{S}_{\infty }(\mathbb{R}^{n})$ is a complete metric space.
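Returning to the functions $\varphi _{k}$ just fixed, it may help to record explicitly (an elementary scaling computation, not stated above) how the normalization $\varphi _{k}=2^{kn}\varphi (2^{k}\cdot )$ interacts with the Fourier transform:

```latex
\begin{equation*}
\mathcal{F}(\varphi _{k})(\xi )=2^{kn}\,\mathcal{F}\big(\varphi (2^{k}\cdot )\big)(\xi )
=\mathcal{F}(\varphi )(2^{-k}\xi ),\qquad \xi \in \mathbb{R}^{n},
\end{equation*}
```

so $\text{supp}(\mathcal{F}(\varphi _{k}))\subset \big\{\xi :2^{k-1}\leqslant |\xi |\leqslant 2^{k+1}\big\}$ and $|\mathcal{F}(\varphi _{k})(\xi )|\geqslant c$ whenever $3\cdot 2^{k}/5\leqslant |\xi |\leqslant 5\cdot 2^{k}/3$; in other words, $\{\varphi _{k}\}_{k\in \mathbb{Z}}$ is a homogeneous Littlewood-Paley family covering all dyadic frequency annuli.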
Equivalently, $\mathcal{S}_{\infty }(\mathbb{R}^{n})$ can be defined as the collection of all $\varphi \in \mathcal{S}(\mathbb{R}^{n})$ such that the semi-norms \begin{equation*} \big\|\varphi \big\|_{M}=\sup_{\left\vert \beta \right\vert \leqslant M}\sup_{\xi \in \mathbb{R}^{n}}|\partial ^{\beta }\varphi (\xi )|\big(\left\vert \xi \right\vert ^{M}+\left\vert \xi \right\vert ^{-M}\big)<\infty \end{equation*} for all $M\in \mathbb{N}_{0}$, see \cite[Section 3]{BoHo06}. Let $\mathcal{S}_{\infty }^{\prime }(\mathbb{R}^{n})$ be the topological dual of $\mathcal{S}_{\infty }(\mathbb{R}^{n})$, namely, the set of all continuous linear functionals on $\mathcal{S}_{\infty }(\mathbb{R}^{n})$. Let $0<p<\infty $ and $0<q\leqslant \infty $. Let $\{t_{k}\}$ be a $p$-admissible sequence, i.e., $t_{k}\in L_{p}^{\mathrm{loc}}(\mathbb{R}^{n})$, $k\in \mathbb{Z}$. The Triebel-Lizorkin space $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ is the collection of all $f\in \mathcal{S}_{\infty }^{\prime }(\mathbb{R}^{n})$ such that \begin{equation*} \big\|f|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|=\Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}|\varphi _{k}\ast f|^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|<\infty \end{equation*} with the usual modifications if $q=\infty $. We have organized the article in three sections. First we give some preliminaries and recall some basic facts on the Muckenhoupt classes and the weighted class of Tyulenev. Also we give some key technical lemmas needed in the proofs of the main statements. In particular, we present a new version of the weighted vector-valued maximal inequality of Fefferman and Stein. In Section 3 several basic properties such as the $\varphi $-transform characterization are obtained. We extend the well-known Sobolev embeddings to these function spaces and we give the atomic and molecular decomposition of these function spaces.
In Section 4 we study the inhomogeneous spaces $F_{p,q}(\mathbb{R}^{n},\{t_{k}\})$, and we outline analogous results for these spaces. Other properties of these function spaces are given in \cite{D20.2}. \section{Maximal inequalities} Our arguments in this paper essentially rely on the weighted boundedness of the Hardy-Littlewood maximal function. In this paper we will assume that the weight sequence $\{t_{k}\}$ used to define the space $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ lies in the new weighted class $\dot{X}_{\alpha ,\sigma ,p}$ (see Definition \ref{Tyulenev-class}). Therefore we need a new version of the Hardy-Littlewood maximal inequality. Throughout this paper, we fix some notation and conventions. \subsection{Notation and conventions} We denote by $\mathbb{R}^{n}$ the $n$-dimensional real Euclidean space, $\mathbb{N}$ the collection of all natural numbers and $\mathbb{N}_{0}=\mathbb{N}\cup \{0\}$. The letter $\mathbb{Z}$ stands for the set of all integer numbers. The expression $f\lesssim g$ means that $f\leqslant c\,g$ for some independent constant $c$ (and non-negative functions $f$ and $g$), and $f\approx g$ means $f\lesssim g\lesssim f$.\vskip5pt By supp($f$) we denote the support of the function $f$, i.e., the closure of its non-zero set. If $E\subset {\mathbb{R}^{n}}$ is a measurable set, then $|E|$ stands for the (Lebesgue) measure of $E$ and $\chi _{E}$ denotes its characteristic function. By $c$ we denote generic positive constants, which may have different values at different occurrences. \vskip5pt A weight is a nonnegative locally integrable function on $\mathbb{R}^{n}$ that takes values in $(0,\infty )$ almost everywhere. For a measurable set $E\subset \mathbb{R}^{n}$ and a weight $\gamma $, $\gamma (E)$ denotes \begin{equation*} \int_{E}\gamma (x)dx.
\end{equation*} Given a measurable set $E\subset \mathbb{R}^{n}$ and $0<p\leqslant \infty $, we denote by $L_{p}(E)$ the space of all functions $f:E\rightarrow \mathbb{C}$ equipped with the quasi-norm \begin{equation*} \big\|f|L_{p}(E)\big\|=\Big(\int_{E}\left\vert f(x)\right\vert ^{p}dx\Big)^{1/p}<\infty , \end{equation*} with $0<p<\infty $, and \begin{equation*} \big\|f|L_{\infty }(E)\big\|=\underset{x\in E}{\text{ess-sup}}\left\vert f(x)\right\vert <\infty . \end{equation*} For a function $f$ in $L_{1}^{\mathrm{loc}}(\mathbb{R}^{n})$, we set \begin{equation*} M_{A}(f)=\frac{1}{|A|}\int_{A}\left\vert f(x)\right\vert dx \end{equation*} for any $A\subset \mathbb{R}^{n}$. Furthermore, we put \begin{equation*} M_{A,p}(f)=\Big(\frac{1}{|A|}\int_{A}\left\vert f(x)\right\vert ^{p}dx\Big)^{1/p}, \end{equation*} with $0<p<\infty $. Further, given a weight $\gamma $, we denote by $L_{p}(\mathbb{R}^{n},\gamma )$ the space of all functions $f:\mathbb{R}^{n}\rightarrow \mathbb{C}$ with finite quasi-norm \begin{equation*} \big\|f|L_{p}(\mathbb{R}^{n},\gamma )\big\|=\big\|f\gamma |L_{p}(\mathbb{R}^{n})\big\|. \end{equation*} If $1\leqslant p\leqslant \infty $ and $1/p+1/p^{\prime }=1$, then $p^{\prime }$ is called the conjugate exponent of $p$. Let $0<p,q\leqslant \infty $. The space $L_{p}(\ell _{q})$ is defined to be the set of all sequences $\{f_{k}\}$ of functions such that \begin{equation*} \big\|\{f_{k}\}|L_{p}(\ell _{q})\big\|=\Big\|\Big(\sum_{k=-\infty }^{\infty }|f_{k}|^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|<\infty \end{equation*} with the usual modifications if $q=\infty $, and if $\{t_{k}\}$ is a sequence of functions then \begin{equation*} \big\|\{f_{k}\}|L_{p}(\ell _{q},\{t_{k}\})\big\|=\big\|\{t_{k}f_{k}\}|L_{p}(\ell _{q})\big\|.
\end{equation*} In what follows, $Q$ will denote a cube in the space $\mathbb{R}^{n}$ with sides parallel to the coordinate axes and $l(Q)$ will denote the side length of the cube $Q$. For all cubes $Q$ and $r>0$, let $rQ$ be the cube concentric with $Q$ having side length $rl(Q)$. For $v\in \mathbb{Z}$ and $m\in \mathbb{Z}^{n}$, denote by $Q_{v,m}$ the dyadic cube \begin{equation*} Q_{v,m}=2^{-v}([0,1)^{n}+m). \end{equation*} For the collection of all such cubes we use $\mathcal{Q}=\{Q_{v,m}:v\in \mathbb{Z},m\in \mathbb{Z}^{n}\}$. For each cube $Q=Q_{v,m}$, we denote by $x_{v,m}$ its lower left corner $2^{-v}m$. \subsection{Muckenhoupt weights} The purpose of this subsection is to review some known properties of the Muckenhoupt classes. \begin{defn} Let $1<p<\infty $. We say that a weight $\gamma $ belongs to the Muckenhoupt class $A_{p}(\mathbb{R}^{n})$ if there exists a constant $C>0$ such that for every cube $Q$ the following inequality holds \begin{equation} M_{Q}(\gamma )M_{Q,p^{\prime }/p}(\gamma ^{-1})\leqslant C. \label{Ap-constant} \end{equation} \end{defn} The smallest constant $C$ for which $\mathrm{\eqref{Ap-constant}}$ holds is denoted by $A_{p}(\gamma )$. As an example, we can take \begin{equation*} \gamma (x)=|x|^{\alpha },\quad \alpha \in \mathbb{R}. \end{equation*} Then $\gamma \in A_{p}(\mathbb{R}^{n})$, $1<p<\infty $, if and only if $-n<\alpha <n(p-1)$. For $p=1$ we rewrite the above definition in the following way. \begin{defn} We say that a weight $\gamma $ belongs to the Muckenhoupt class $A_{1}(\mathbb{R}^{n})$ if there exists a constant $C>0$ such that for every cube $Q$ and for a.e.\ $y\in Q$ the following inequality holds \begin{equation} M_{Q}(\gamma )\leqslant C\gamma (y). \label{A1-constant} \end{equation} \end{defn} The smallest constant $C$ for which $\mathrm{\eqref{A1-constant}}$ holds is denoted by $A_{1}(\gamma )$.
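The power-weight example above can be made concrete by a quick numerical experiment (our own hedged illustration in Python, not from the paper; the helper `avg` is ours). In dimension $n=1$ with $p=2$ we have $p^{\prime }/p=1$, so the quantity in $\mathrm{\eqref{Ap-constant}}$ is simply $M_{Q}(\gamma )M_{Q,1}(\gamma ^{-1})$, and for $\gamma (x)=|x|^{1/2}$ (admissible, since $-1<1/2<1$) an exact computation gives the value $4/3$ for every symmetric interval $Q=[-h,h]$, uniformly in $h$.

```python
def avg(f, a, b, n=200000):
    # midpoint-rule approximation of the average (1/(b-a)) * \int_a^b f(x) dx;
    # n even keeps the midpoints away from the singularity at x = 0
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h / (b - a)

gamma = lambda x: abs(x) ** 0.5  # gamma(x) = |x|^alpha with alpha = 1/2, n = 1

# A_2 product M_Q(gamma) * M_{Q,1}(gamma^{-1}) over Q = [-h, h]: exactly 4/3
for h in (0.01, 1.0, 100.0):
    prod = avg(gamma, -h, h) * avg(lambda x: 1 / gamma(x), -h, h)
    assert abs(prod - 4 / 3) < 0.02
```

The scale invariance seen here reflects the fact that $|x|^{\alpha }$ and its dilates have the same $A_{p}(\gamma )$ constant; cubes not centered at the origin give a smaller product, so the symmetric intervals are the extremal case.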
The above classes were first studied by Muckenhoupt \cite{Mu72} and are used to characterize the boundedness of the Hardy-Littlewood maximal function on $L_{p}(\gamma )$; see the monographs \cite{GR85} and \cite{L. Graf14} for a complete account of the theory of Muckenhoupt weights. We recall a few basic properties of the class of $A_{p}(\mathbb{R}^{n})$ weights, see \cite[Chapter 7]{Du01}, \cite[Chapter 7]{L. Graf14} and \cite[Chapter 5]{St93}. \begin{lem} \label{Ap-Property}Let $1\leqslant p<\infty $.\newline $\mathrm{(i)}$ If $\gamma \in A_{p}(\mathbb{R}^{n})$, then $\gamma \in A_{q}(\mathbb{R}^{n})$ for any $q>p$.\newline $\mathrm{(ii)}$ Let $1<p<\infty $. $\gamma \in A_{p}(\mathbb{R}^{n})$ if and only if $\gamma ^{1-p^{\prime }}\in A_{p^{\prime }}(\mathbb{R}^{n})$.\newline $\mathrm{(iii)}$ Let $\gamma \in A_{p}(\mathbb{R}^{n})$. There is $C>0$ such that for any cube $Q$ and any measurable subset $E\subset Q$ \begin{equation*} \Big(\frac{|E|}{|Q|}\Big)^{p-1}M_{Q}(\gamma )\leqslant CM_{E}(\gamma ). \end{equation*} $\mathrm{(iv)}$ Suppose that $\gamma \in A_{p}(\mathbb{R}^{n})$ for some $1<p<\infty $. Then there exists $p_{1}$ with $1<p_{1}<p$ such that $\gamma \in A_{p_{1}}(\mathbb{R}^{n})$.\newline $\mathrm{(v)}$ If $\gamma \in A_{p}(\mathbb{R}^{n})$, then for any $0<\varepsilon \leqslant 1$, $\gamma ^{\varepsilon }\in A_{p}(\mathbb{R}^{n})$.\newline $\mathrm{(vi)}$ Let $1\leqslant p<\infty $ and $\gamma \in A_{p}(\mathbb{R}^{n})$. Then there exist $\delta \in (0,1)$ and $C>0$ depending only on $n$, $p$, and $A_{p}(\gamma )$ such that for any cube $Q$ and any measurable subset $S$ of $Q$ we have \begin{equation*} \frac{M_{S}(\gamma )}{M_{Q}(\gamma )}\leqslant C\Big(\frac{|S|}{|Q|}\Big)^{\delta -1}. \end{equation*} \end{lem} The following theorem gives a useful property of $A_{p}(\mathbb{R}^{n})$ weights (reverse H\"{o}lder inequality), see \cite[Chapter 7]{L. Graf14} or \cite[Chapter 1]{LuDiYa07}.
\begin{thm}
\label{reverse Holder inequality}Let $1\leq p<\infty $ and $\gamma \in A_{p}(\mathbb{R}^{n})$. Then there exist constants $C>0$ and $\varepsilon >0$ depending only on $p$ and the $A_{p}(\mathbb{R}^{n})$ constant of $\gamma $, such that for every cube $Q$,
\begin{equation*}
M_{Q,1+\varepsilon }(\gamma )\leq CM_{Q}(\gamma ).
\end{equation*}
\end{thm}
\subsection{The weight class $\dot{X}_{\protect\alpha ,\protect\sigma ,p}$}
Let $0<p\leq \infty $. A weight sequence $\{t_{k}\}$ is called $p$-admissible if $t_{k}\in L_{p}^{\mathrm{loc}}(\mathbb{R}^{n})$ for all $k\in \mathbb{Z}$. In particular,
\begin{equation*}
\int_{E}t_{k}^{p}(x)dx<c(k)
\end{equation*}
for any compact set $E\subset \mathbb{R}^{n}$. For a $p$-admissible weight sequence $\{t_{k}\}$ we set
\begin{equation*}
t_{k,m}=\big\|t_{k}|L_{p}(Q_{k,m})\big\|,\quad k\in \mathbb{Z},m\in \mathbb{Z}^{n}.
\end{equation*}
Tyulenev \cite{Ty14} introduced the following weight class and used it to study Besov spaces of variable smoothness.
\begin{defn}
\label{Tyulenev-class}Let $\alpha _{1}$, $\alpha _{2}\in \mathbb{R}$, $p,\sigma _{1}$, $\sigma _{2}$ $\in (0,+\infty )$, $\alpha =(\alpha _{1},\alpha _{2})$ and let $\sigma =(\sigma _{1},\sigma _{2})$. We let $\dot{X}_{\alpha ,\sigma ,p}=\dot{X}_{\alpha ,\sigma ,p}(\mathbb{R}^{n})$ denote the set of $p$-admissible weight sequences $\{t_{k}\}$ satisfying the following conditions. There exist numbers $C_{1},C_{2}>0$ such that for any $k\leq j$ and every cube $Q,$
\begin{eqnarray}
M_{Q,p}(t_{k})M_{Q,\sigma _{1}}(t_{j}^{-1}) &\leq &C_{1}2^{\alpha _{1}(k-j)},  \label{Asum1} \\
M_{Q,p}^{-1}(t_{k})M_{Q,\sigma _{2}}(t_{j}) &\leq &C_{2}2^{\alpha _{2}(j-k)}.  \label{Asum2}
\end{eqnarray}
\end{defn}
The constants $C_{1},C_{2}>0$ are independent of both the indexes $k$ and $j$.
\begin{rem}
$\mathrm{(i)}$ We would like to mention that if $\{t_{k}\}$ satisfies $\mathrm{\eqref{Asum1}}$ with $\sigma _{1}=r\left( p/r\right) ^{\prime }$ and $0<r<p<\infty $, then $t_{k}^{p}\in A_{p/r}(\mathbb{R}^{n})$ for any $k\in \mathbb{Z}$.\newline
$\mathrm{(ii)}$ We say that $t_{k}\in A_{p}(\mathbb{R}^{n})$, $k\in \mathbb{Z}$, $1<p<\infty ,$ have the same Muckenhoupt constant if
\begin{equation*}
A_{p}(t_{k})=c,\quad k\in \mathbb{Z},
\end{equation*}
where $c$ is independent of $k$.\newline
$\mathrm{(iii)}$ Definition \ref{Tyulenev-class} is different from the one used in \cite[Definition 2.1]{Ty14} and Definition 2.7 in \cite{Ty15}, because it is based on the boundedness of the maximal function on weighted Lebesgue spaces.
\end{rem}
\begin{ex}
\label{Example1}Let $0<r<p<\infty $, let $\omega ^{p}\in A_{\frac{p}{r}}(\mathbb{R}^{n})$ be a weight and
\begin{equation*}
\{s_{k}\}=\{2^{ks}\omega ^{p}(2^{-k}\cdot )\}_{k\in \mathbb{Z}},\quad s\in \mathbb{R}.
\end{equation*}
Clearly, $\{s_{k}\}_{k\in \mathbb{Z}}$ lies in $\dot{X}_{\alpha ,\sigma ,p}$ for $\alpha _{1}=\alpha _{2}=s$, $\sigma =(r(p/r)^{\prime },p)$.
\end{ex}
\begin{rem}
\label{Tyulenev-class-properties}Let $0<\theta \leq p<\infty $. Let $\alpha _{1}$, $\alpha _{2}\in \mathbb{R}$, $\sigma _{1},\sigma _{2}\in (0,+\infty )$, $\sigma _{2}\geq p$, $\alpha =(\alpha _{1},\alpha _{2})$ and let $\sigma =(\sigma _{1}=\theta \left( \frac{p}{\theta }\right) ^{\prime },\sigma _{2})$. Let $\{t_{k}\}\in \dot{X}_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence. Then
\begin{equation*}
\alpha _{2}\geq \alpha _{1},
\end{equation*}
see \cite{D20}.
\end{rem}
Further notation will be properly introduced whenever needed.
\subsection{Auxiliary results}
In this subsection we present some results which will be useful for us. We first recall the scalar maximal inequality and then the vector-valued maximal inequality of Fefferman and Stein \cite{FeSt71}.
As usual, we put
\begin{equation*}
\mathcal{M}(f)(x)=\sup_{Q}M_{Q}(f),\quad f\in L_{1}^{\mathrm{loc}}(\mathbb{R}^{n}),
\end{equation*}
where the supremum is taken over all cubes with sides parallel to the coordinate axes such that $x\in Q$. Also we set
\begin{equation*}
\mathcal{M}_{\sigma }(f)=\sup_{Q}M_{Q,\sigma }(f),\quad 0<\sigma <\infty .
\end{equation*}
Observe that $\mathcal{M}_{\sigma }(f)$ can be rewritten as
\begin{equation*}
\mathcal{M}_{\sigma }(f)=(\mathcal{M}(|f|^{\sigma }))^{1/\sigma },\quad 0<\sigma <\infty .
\end{equation*}
\begin{thm}
\label{Maximal}Let $1<p\leq \infty $. Then
\begin{equation*}
\big\|\mathcal{M}(f)|L_{p}(\mathbb{R}^{n})\big\|\lesssim \big\|f|L_{p}(\mathbb{R}^{n})\big\|
\end{equation*}
holds for all $f\in L_{p}(\mathbb{R}^{n})$.
\end{thm}
For the proof see \cite[Chapter 7]{L. Graf14}. Now, we state the vector-valued maximal inequality of Fefferman and Stein \cite{FeSt71}.
\begin{thm}
\label{FS-inequality}Let $0<p<\infty ,0<q\leq \infty $ and $0<\sigma <\min (p,q)$. Then
\begin{equation}
\Big\|\Big(\sum\limits_{k=-\infty }^{\infty }\big(\mathcal{M}_{\sigma }(f_{k})\big)^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|\lesssim \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }\left\vert f_{k}\right\vert ^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|  \label{Fe-St71}
\end{equation}
holds for all sequences of functions $\{f_{k}\}\in L_{p}(\ell _{q})$.
\end{thm}
We shall also require the following Fefferman-Stein inequality, see \cite{FeSt71}.
\begin{lem}
\label{FS-lemma}Let $1<p<\infty $ and let $f$ and $g$ be non-negative real-valued functions on $\mathbb{R}^{n}$. We have
\begin{equation*}
\int_{\mathbb{R}^{n}}\left( \mathcal{M}(f)(x)\right) ^{p}g(x)dx\leq c\int_{\mathbb{R}^{n}}\left( f(x)\right) ^{p}\mathcal{M}(g)(x)dx,
\end{equation*}
with $c$ independent of $f$ and $g$.
\end{lem}
For the proof see \cite[Chapter 7]{L. Graf14}.
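As a concrete illustration of the role of the restriction $p>1$ in Theorem \ref{Maximal}, we include the following standard one-dimensional computation. For $f=\chi _{(0,1)}$ on $\mathbb{R}$ one finds
\begin{equation*}
\mathcal{M}(\chi _{(0,1)})(x)=\begin{cases}
1, & x\in \lbrack 0,1], \\
1/x, & x>1, \\
1/(1-x), & x<0,
\end{cases}
\end{equation*}
since for $x>1$ the optimal interval is $(0,x)$, and symmetrically for $x<0$. Hence
\begin{equation*}
\big\|\mathcal{M}(\chi _{(0,1)})|L_{p}(\mathbb{R})\big\|^{p}=1+\frac{2}{p-1}<\infty \quad \text{for }p>1,
\end{equation*}
while $\mathcal{M}(\chi _{(0,1)})\notin L_{1}(\mathbb{R})$, so the assumption $p>1$ in Theorem \ref{Maximal} cannot be removed.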
We need the following version of the Calder\'{o}n-Zygmund covering lemma, see \cite[Lemma 3.3]{CP99}, \cite[Appendix A]{CMP11} and \cite[Chapter 7]{Ca99}.
\begin{lem}
\label{CZ-lemma}Let $f$ be a measurable function such that $M_{Q}(f)\rightarrow 0$ as $|Q|\rightarrow \infty $ and let $a>2^{n+1}$. For each $i\in \mathbb{Z}$ there exists a disjoint collection of maximal dyadic cubes $\{Q^{i,h}\}_{h}$ such that for each $h$,
\begin{equation*}
a^{i}\leq M_{Q^{i,h}}(f)\leq 2^{n}a^{i}
\end{equation*}
and
\begin{equation*}
\Omega _{i}=\{x\in \mathbb{R}^{n}:\mathcal{M}(f)(x)>4^{n}a^{i}\}\subset \cup _{h}3Q^{i,h}.
\end{equation*}
Let
\begin{equation*}
E^{i}=\cup _{h}Q^{i,h}
\end{equation*}
and
\begin{equation*}
E^{i,h}=Q^{i,h}\backslash (Q^{i,h}\cap E^{i+1}).
\end{equation*}
Then $E^{i,h}\subset Q^{i,h},$ there exists a constant $\beta >1$, depending only on $a$, such that $|Q^{i,h}|\leq \beta |E^{i,h}|$ and the sets $E^{i,h}$ are pairwise disjoint for all $i$ and $h$.
\end{lem}
Next, we recall the following Hadamard three-line theorem for subharmonic functions, see \cite[Theorem 14.15]{M83}.
\begin{thm}
\label{hadamard}Let $f$ be a nonnegative, bounded function on the strip $0\leq \mathrm{Re}(z)\leq 1$ of the complex plane such that $\log f(z)$ is subharmonic in the open strip $0<\mathrm{Re}(z)<1$ and continuous on the strip $0\leq \mathrm{Re}(z)\leq 1$. If there are positive constants $M_{1},M_{2}$, such that $f(0+iy)\leq M_{1}$ and $f(1+iy)\leq M_{2}$ for every real $y$, then
\begin{equation*}
f(\theta +iy)\leq M_{1}^{1-\theta }M_{2}^{\theta }
\end{equation*}
for every $\theta \in \lbrack 0,1]$ and any real $y$.
\end{thm}
Let $S$ be the linear space consisting of all sequences $\{f_{k}\}$ with each $f_{k}$ a simple function on $\mathbb{R}^{n}$ and $f_{k}=0$ for $|k|$ large enough.
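The bound in Theorem \ref{hadamard} is sharp; a minimal example, added here for illustration, is obtained from the modulus of an analytic function. Taking
\begin{equation*}
f(z)=\big|M_{1}^{1-z}M_{2}^{z}\big|=M_{1}^{1-\mathrm{Re}(z)}M_{2}^{\mathrm{Re}(z)},
\end{equation*}
we see that $\log f(z)=(1-\mathrm{Re}(z))\log M_{1}+\mathrm{Re}(z)\log M_{2}$ is harmonic (hence subharmonic) in the open strip, $f(0+iy)=M_{1}$ and $f(1+iy)=M_{2}$ for every real $y$, and $f(\theta +iy)=M_{1}^{1-\theta }M_{2}^{\theta }$, so equality holds in the conclusion.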
Then $S$ is dense in $L_{p}(\ell _{q}),1<p,q<\infty $, see \cite{BP61}. The main aim of the following lemma is to extend an interpolation result for sublinear operators on Lebesgue spaces obtained by Calder\'{o}n and Zygmund in \cite{CZ56} to similar operators acting on $L_{p}(\ell _{q},\{t_{k}\})$ spaces. Consider an operator $T$ which maps measurable functions on $\mathbb{R}^{n}$ into measurable functions on $\mathbb{R}^{n}$. We say that $T$ is sublinear if it satisfies:\\
(i) $T(f)$ is defined (uniquely) whenever $f=f_{1}+f_{2}$ with $T(f_{1})$ and $T(f_{2})$ defined, and
\begin{equation*}
|T(f_{1}+f_{2})|\leq |T(f_{1})|+|T(f_{2})|.
\end{equation*}
(ii) For any constant $c$, $T(cf)$ is defined if $T(f)$ is defined and $|T(cf)|=|c||T(f)|$.
\begin{lem}
\label{Calderon-Zygmund}Let $t_{k},k\in \mathbb{Z}$ be locally integrable functions on $\mathbb{R}^{n}$ and $1<q_{i}<\infty $, $1<p_{i}<\infty $, $i=0,1$, $M_{1},M_{2}>0$. Suppose that $T$ is a sublinear operator satisfying
\begin{equation*}
\big\|\{t_{k}T(f_{k})\}|L_{p_{i}}(\ell _{q_{i}})\big\|\leq M_{i+1}\big\|\{t_{k}f_{k}\}|L_{p_{i}}(\ell _{q_{i}})\big\|
\end{equation*}
for any $\{t_{k}f_{k}\}\in L_{p_{i}}(\ell _{q_{i}}),i=0,1$. Then $T$ can be extended to a bounded operator:
\begin{equation}
\big\|\{t_{k}T(f_{k})\}|L_{p}(\ell _{q})\big\|\leq M_{1}^{1-\theta }M_{2}^{\theta }\big\|\{t_{k}f_{k}\}|L_{p}(\ell _{q})\big\|  \label{CZ1956}
\end{equation}
for any $\{t_{k}f_{k}\}\in L_{p}(\ell _{q})$, where
\begin{equation*}
\frac{1}{p}=\frac{1-\theta }{p_{0}}+\frac{\theta }{p_{1}}\quad \text{and}\quad \frac{1}{q}=\frac{1-\theta }{q_{0}}+\frac{\theta }{q_{1}},\quad 0\leq \theta \leq 1.
\end{equation*}
\end{lem}
\begin{proof}
We divide the proof into two steps.

\textit{Step 1. }We prove that \eqref{CZ1956} holds for any sequence of simple functions $\{t_{k}f_{k}\}\in S$.

\textit{Substep 1.1. Preparation}.
Assume that
\begin{equation*}
\big\|\{t_{k}f_{k}\}|L_{p}(\ell _{q})\big\|=1
\end{equation*}
and put
\begin{equation*}
I=\sum\limits_{k=-\infty }^{\infty }\int_{\mathbb{R}^{n}}t_{k}(x)|T(f_{k})(x)|g_{k}(x)dx,
\end{equation*}
where $\{g_{k}\}\in S$ is such that
\begin{equation*}
\big\|\{g_{k}\}|L_{p^{\prime }}(\ell _{q^{\prime }})\big\|\leq 1.
\end{equation*}
From our assumption on the sequences of simple functions $\{t_{k}f_{k}\}$ and $\{g_{k}\}$, we clearly have that
\begin{equation*}
t_{k}f_{k}=\sum_{l=1}^{N_{1}}|a_{l,k}|c_{l,k}\chi _{E_{l,k}}\quad \text{and}\quad g_{k}=\sum_{j=1}^{N_{2}}b_{j,k}\chi _{E_{j,k}^{\prime }},
\end{equation*}
where $|c_{l,k}|=1,b_{j,k}>0,l=1,...,N_{1},j=1,...,N_{2},N_{1},N_{2}\in \mathbb{N}$ and
\begin{equation*}
t_{k}f_{k}=g_{k}=0
\end{equation*}
for $|k|\geq N$ with $N\in \mathbb{N}$ sufficiently large. For fixed $k$, the sets $E_{l,k}$ are disjoint. The same property holds for the sets $E_{j,k}^{\prime }$. For any $z\in \mathbb{C}$ we set
\begin{equation*}
\frac{1}{q(z)}=\frac{1-z}{q_{0}}+\frac{z}{q_{1}},\quad \frac{1}{p(z)}=\frac{1-z}{p_{0}}+\frac{z}{p_{1}}
\end{equation*}
and
\begin{align*}
&\Phi (z)\\
&=\sum\limits_{|k|\leq N}\int_{\mathbb{R}^{n}}t_{k}(x)\big|T\big(\omega (\cdot ,z)|f_{k}|^{q/q(z)}\mathrm{sign}f_{k}\big)(x)\big|(g_{k}(x))^{\beta (\mathrm{Re}(z))}\vartheta (x,\mathrm{Re}(z))dx,
\end{align*}
where
\begin{equation*}
\omega (\cdot ,z)=A^{\tau (z)}(\cdot )t_{k}^{-\alpha (z)}(\cdot ),\quad A(\cdot )=\big\|\{t_{k}f_{k}\}_{|k|\leq N}|\ell _{q}\big\|
\end{equation*}
and
\begin{equation*}
\vartheta (\cdot ,z)=B^{\kappa (z)-\beta (z)}(\cdot ),\quad B(\cdot )=\big\|\{g_{k}\}_{|k|\leq N}|\ell _{q^{\prime }}\big\|,
\end{equation*}
with
\begin{equation*}
\tau (z)=\frac{p}{p(z)}-\frac{q}{q(z)},\quad \alpha (z)=1-\frac{q}{q(z)},\quad \kappa (z)=\frac{1-\frac{1}{p(z)}}{1-\frac{1}{p}},\quad \beta (z)=\frac{1-\frac{1}{q(z)}}{1-\frac{1}{q}}.
\end{equation*}
The non-negative function $\Phi $ reduces to $I$ for $z=\theta $. Using the fact that $\{g_{k}\}$ is a sequence of simple functions we have
\begin{equation*}
\Phi (z)=\sum_{j=1}^{N_{2}}\sum\limits_{|k|\leq N}\int_{E_{j,k}^{\prime }}\vartheta (x,\mathrm{Re}(z))t_{k}(x)\left\vert T(\psi _{z,j,k})(x)\right\vert dx,
\end{equation*}
where
\begin{equation*}
\psi _{z,j,k}=\omega (\cdot ,z)(b_{j,k})^{\beta (\mathrm{Re}(z))}\sum_{l=1}^{N_{1}}|t_{k}^{-1}a_{l,k}|^{q/q(z)}\chi _{E_{l,k}}.
\end{equation*}
We put
\begin{equation*}
\Psi _{j,k}(z)=\int_{E_{j,k}^{\prime }}\vartheta (x,\mathrm{Re}(z))t_{k}(x)\left\vert T(\psi _{z,j,k})(x)\right\vert dx,\quad |k|\leq N,j=1,...,N_{2}.
\end{equation*}

\textit{Substep 1.2. }We prove that $\Phi $ is continuous in the strip $0\leq \mathrm{Re}(z)\leq 1$. First we prove that $\Psi _{j,k}$ is well defined. Let us consider the integrals
\begin{equation*}
I_{z,j,k}^{1}=\int_{E_{j,k}^{\prime }}t_{k}^{p_{0}}(x)\big|T\big(\omega (\cdot ,z)|f_{k}|^{q/q(z)}\mathrm{sign}f_{k}\big)(x)\big|^{p_{0}}dx
\end{equation*}
and
\begin{equation*}
I_{z,j,k}^{2}=\int_{E_{j,k}^{\prime }}|\vartheta (x,\mathrm{Re}(z))|^{p_{0}^{\prime }}(g_{k}(x))^{p_{0}^{\prime }\beta (\mathrm{Re}(z))}dx.
\end{equation*}
An easy calculation shows that the integral
\begin{equation*}
J=\int_{\mathbb{R}^{n}}\Big(\sum\limits_{|k|\leq N}t_{k}^{q_{0}}(x)\big|\omega (x,z)|f_{k}(x)|^{q/q(z)}\big|^{q_{0}}\Big)^{p_{0}/q_{0}}dx
\end{equation*}
is just
\begin{equation*}
\int_{\mathbb{R}^{n}}A^{p_{0}\tau (\mathrm{Re}(z))}(x)\Big(\sum\limits_{|k|\leq N}\big(t_{k}(x)|f_{k}(x)|\big)^{(1-\alpha (\mathrm{Re}(z)))q_{0}}\Big)^{p_{0}/q_{0}}dx.
\end{equation*}
We may assume without loss of generality that $q_{1}<q_{0}$. As a consequence we see that
\begin{equation*}
\frac{1}{q}(1-\alpha (\mathrm{Re}(z)))=\frac{1-\mathrm{Re}(z)}{q_{0}}+\frac{\mathrm{Re}(z)}{q_{1}}>\frac{1}{q_{0}}.
\end{equation*}
This yields
\begin{align*}
&\Big(\sum\limits_{|k|\leq N}t_{k}^{(1-\alpha (\mathrm{Re}(z)))q_{0}}(x)|f_{k}(x)|^{(1-\alpha (\mathrm{Re}(z)))q_{0}}\Big)^{1/q_{0}}\\
&\leq \Big(\sum\limits_{|k|\leq N}t_{k}^{q}(x)|f_{k}(x)|^{q}\Big)^{(1-\alpha (\mathrm{Re}(z)))/q},
\end{align*}
since $\ell _{r}\hookrightarrow \ell _{s},r\leq s$. Hence
\begin{equation*}
J\leq \big\|\{t_{k}f_{k}\}|L_{pp_{0}/p(\mathrm{Re}(z))}(\ell _{q})\big\|^{pp_{0}/p(\mathrm{Re}(z))}<\infty ,
\end{equation*}
because $\{t_{k}f_{k}\}$ is a sequence of simple functions. Consequently, by H\"{o}lder's inequality,
\begin{align}
\Psi _{j,k}(z) &\leq (I_{z,j,k}^{1})^{1/p_{0}}(I_{z,j,k}^{2})^{1/p_{0}^{\prime }}  \notag \\
&\leq \big\|\{t_{k}T\big(\omega (\cdot ,z)|f_{k}|^{q/q(z)}\mathrm{sign}f_{k}\big)\}|L_{p_{0}}(\ell _{q_{0}})\big\|(I_{z,j,k}^{2})^{1/p_{0}^{\prime }}.  \label{firstterm}
\end{align}
Our assumption on $T$ yields that the first term in $\mathrm{\eqref{firstterm}}$ is finite, and $I_{z,j,k}^{2}$ is finite since $\{g_{k}\}\in S$. Hence $\Phi (z)$ exists for each $z$ in the strip $0\leq \mathrm{Re}(z)\leq 1$. Now we prove the continuity of $\Phi $ in the strip $0\leq \mathrm{Re}(z)\leq 1$. We clearly have that
\begin{equation}
\Psi _{j,k}(z+\Delta z)-\Psi _{j,k}(z)=J_{1,j,k}(z,\Delta z)+J_{2,j,k}(z,\Delta z),  \label{limit}
\end{equation}
where
\begin{equation*}
J_{1,j,k}(z,\Delta z)=\int_{E_{j,k}^{\prime }}\big(\vartheta (x,\mathrm{Re}(z+\Delta z))-\vartheta (x,\mathrm{Re}(z))\big)t_{k}(x)\left\vert T(\psi _{z,j,k})(x)\right\vert dx
\end{equation*}
and
\begin{align*}
& J_{2,j,k}(z,\Delta z)\\
&=\int_{E_{j,k}^{\prime }}\vartheta (x,\mathrm{Re}(z+\Delta z))t_{k}(x)\big(\left\vert T(\psi _{z+\Delta z,j,k})(x)\right\vert -\left\vert T(\psi _{z,j,k})(x)\right\vert \big)dx.
\end{align*}
We want to find the limit of $\mathrm{\eqref{limit}}$ as $\Delta z$ approaches $0$, so we may assume that $|\Delta z|\leq 1$, where $\Delta z=\mathrm{Re}(\Delta z)+i\,\mathrm{Im}(\Delta z)$. After a simple calculation we find that
\begin{align*}
&\big|\vartheta (x,\mathrm{Re}(z+\Delta z))-\vartheta (x,\mathrm{Re}(z))\big| \\
&=\big|B^{\kappa (\mathrm{Re}(z))-\beta (\mathrm{Re}(z))}(x)\big(B^{d+h\mathrm{Re}(\Delta z)}(x)-1\big)\big| \\
&\leq B^{\kappa (\mathrm{Re}(z))-\beta (\mathrm{Re}(z))}(x)\big(B^{d+h\mathrm{Re}(\Delta z)}(x)+1\big) \\
&\leq B^{\kappa (\mathrm{Re}(z))-\beta (\mathrm{Re}(z))}(x)\big(B^{d}(x)\max (1,B^{h}(x),B^{-h}(x))+1\big),
\end{align*}
where
\begin{equation*}
h=\frac{\frac{1}{q_{1}}-\frac{1}{q_{0}}}{1-\frac{1}{q}}+\frac{\frac{1}{p_{0}}-\frac{1}{p_{1}}}{1-\frac{1}{p}},\quad d=\frac{1-\frac{1}{p_{0}}}{1-\frac{1}{p}}-\frac{1-\frac{1}{q_{0}}}{1-\frac{1}{q}}.
\end{equation*}
Recall that
\begin{equation*}
\big|T(\psi _{z,j,k})(x)\big|=(b_{j,k})^{\beta (\mathrm{Re}(z))}\big|T\big(\omega (\cdot ,z)|f_{k}|^{\frac{q}{q(z)}}\mathrm{sign}f_{k}\big)(x)\big|.
\end{equation*}
Therefore the function
\begin{equation*}
x\longrightarrow \big|\vartheta (x,\mathrm{Re}(z+\Delta z))-\vartheta (x,\mathrm{Re}(z))\big|t_{k}(x)\left\vert T(\psi _{z,j,k})(x)\right\vert
\end{equation*}
is integrable. The dominated convergence theorem yields that $J_{1,j,k}(z,\Delta z)$ tends to zero as $\Delta z$ tends to $0$. Now
\begin{equation*}
|J_{2,j,k}(z,\Delta z)|\leq \int_{E_{j,k}^{\prime }}\vartheta (x,\mathrm{Re}(z+\Delta z))t_{k}(x)\big|T(\psi _{z+\Delta z,j,k}-\psi _{z,j,k})(x)\big|dx,
\end{equation*}
since
\begin{equation*}
|T(\psi _{z+\Delta z,j,k})|-|T(\psi _{z,j,k})|\leq |T(\psi _{z+\Delta z,j,k}-\psi _{z,j,k})|,\quad j\in \{1,...,N_{2}\},|k|\leq N.
\end{equation*}
Hence
\begin{equation*}
|J_{2,j,k}(z,\Delta z)|\leq \Lambda _{j,k}\times B_{j,k},\quad j\in \{1,...,N_{2}\},|k|\leq N,
\end{equation*}
where
\begin{equation*}
\Lambda _{j,k}=(b_{j,k})^{\beta (\mathrm{Re}(z))}\Big(\int_{E_{j,k}^{\prime }}\vartheta ^{p_{0}^{\prime }}(x,\mathrm{Re}(z+\Delta z))dx\Big)^{1/p_{0}^{\prime }}
\end{equation*}
and
\begin{equation*}
B_{j,k}=\Big(\int_{E_{j,k}^{\prime }}t_{k}^{p_{0}}(x)\big|T\big((b_{j,k})^{-\beta (\mathrm{Re}(z))}(\psi _{z+\Delta z,j,k}-\psi _{z,j,k})\big)(x)\big|^{p_{0}}dx\Big)^{1/p_{0}}.
\end{equation*}
Clearly, the quantities $\Lambda _{j,k},j\in \{1,...,N_{2}\},|k|\leq N$, are bounded. To estimate $B_{j,k}$ we have
\begin{align}
& \big|\psi _{z+\Delta z,j,k}\big|  \notag \\
& =\big|\omega (\cdot ,z+\Delta z)\big|(b_{j,k})^{\beta (\mathrm{Re}(z+\Delta z))}\sum_{l=1}^{N_{1}}\Big|\big|t_{k}^{-1}a_{l,k}\big|^{q/q(z+\Delta z)}\Big|\chi _{E_{l,k}}  \notag \\
& =\big|\psi _{z,j,k}\big|A^{\sigma \mathrm{Re}(\Delta z)}(b_{j,k})^{q^{\prime }(1/q_{0}-1/q_{1})\mathrm{Re}(\Delta z)}\sum_{l=1}^{N_{1}}\big|a_{l,k}\big|^{q(1/q_{1}-1/q_{0})\mathrm{Re}(\Delta z)}\chi _{E_{l,k}},  \label{simple}
\end{align}
where
\begin{equation*}
\sigma =p\Big(\frac{1}{p_{1}}-\frac{1}{p_{0}}\Big)+q\Big(\frac{1}{q_{0}}-\frac{1}{q_{1}}\Big).
\end{equation*}
There are two cases: $0\leq \mathrm{Re}(\Delta z)\leq 1$ and $-1\leq \mathrm{Re}(\Delta z)<0$. In the first case we obtain the estimates
\begin{equation}
\sum_{l=1}^{N_{1}}|a_{l,k}|^{q(1/q_{1}-1/q_{0})\mathrm{Re}(\Delta z)}\chi _{E_{l,k}}=A^{q(1/q_{1}-1/q_{0})\mathrm{Re}(\Delta z)}  \label{ref1}
\end{equation}
and
\begin{equation}
(b_{j,k})^{q^{\prime }(1/q_{0}-1/q_{1})\mathrm{Re}(\Delta z)}\leq \max \big(1,(b_{j,k})^{q^{\prime }(1/q_{0}-1/q_{1})}\big).
\label{ref2}
\end{equation}
Indeed, the left-hand side of $\mathrm{\eqref{ref1}}$ is equal to
\begin{equation*}
\Big(\sum_{l=1}^{N_{1}}|a_{l,k}c_{l,k}|^{q}\chi _{E_{l,k}}\Big)^{(1/q_{1}-1/q_{0})\mathrm{Re}(\Delta z)}=A^{q(1/q_{1}-1/q_{0})\mathrm{Re}(\Delta z)}.
\end{equation*}
Now, if $b_{j,k}\geq 1,j\in \{1,...,N_{2}\},|k|\leq N$, then
\begin{equation*}
(b_{j,k})^{q^{\prime }(1/q_{0}-1/q_{1})\mathrm{Re}(\Delta z)}\leq 1,
\end{equation*}
since $q_{1}<q_{0}$ and $0\leq \mathrm{Re}(\Delta z)\leq 1$. While if $0<b_{j,k}<1$, then
\begin{align*}
& (b_{j,k})^{q^{\prime }(1/q_{0}-1/q_{1})\mathrm{Re}(\Delta z)}\notag\\
&=(b_{j,k})^{q^{\prime }(1/q_{0}-1/q_{1})(\mathrm{Re}(\Delta z)-1)}(b_{j,k})^{q^{\prime }(1/q_{0}-1/q_{1})}\\
&\leq (b_{j,k})^{q^{\prime }(1/q_{0}-1/q_{1})}.
\end{align*}
Thus, we obtain $\mathrm{\eqref{ref2}}$. Assume now that $-1\leq \mathrm{Re}(\Delta z)<0$. We obtain
\begin{equation*}
(b_{j,k})^{q^{\prime }(1/q_{0}-1/q_{1})\mathrm{Re}(\Delta z)}\leq \max (1,(b_{j,k})^{q^{\prime }(1/q_{1}-1/q_{0})}),
\end{equation*}
\begin{equation*}
|a_{l,k}|^{q(1/q_{1}-1/q_{0})\mathrm{Re}(\Delta z)}\leq \max (1,|a_{l,k}|^{q(1/q_{0}-1/q_{1})})
\end{equation*}
and
\begin{equation*}
A^{q(1/q_{0}-1/q_{1})\mathrm{Re}(\Delta z)}\leq \max (1,A^{q(1/q_{1}-1/q_{0})}),
\end{equation*}
\begin{equation*}
A^{\sigma \mathrm{Re}(\Delta z)}\leq \max (1,A^{\sigma },A^{-\sigma }).
\end{equation*}
Substituting these estimates in $\mathrm{\eqref{simple}}$ we find that
\begin{align*}
& |\psi _{z+\Delta z,j,k}|\\
& \leq |\psi _{z,j,k}|\max \Big(1,\max (1,A^{q(1/q_{1}-1/q_{0})}),\max (1,A^{\sigma },A^{-\sigma })\Big)\mu _{j,k},  \notag
\end{align*}
where $\{\mu _{j,k}\}\in S$.
This estimate together with
\begin{equation*}
\{t_{k}(b_{j,k})^{-\beta (\mathrm{Re}(z))}\psi _{z,j,k}\}_{|k|\leq N}\in L_{p_{0}}(\ell _{q_{0}})
\end{equation*}
guarantees that the function
\begin{align*}
& x\longrightarrow t_{k}(x)(b_{j,k})^{-\beta (\mathrm{Re}(z))}|\psi _{z,j,k}(x)|\\
& \times \max \Big(1,\max (1,A^{q(1/q_{1}-1/q_{0})}(x)),\max (1,A^{\sigma }(x),A^{-\sigma }(x))\Big)\mu _{j,k}(x)
\end{align*}
belongs to $L_{p_{0}}(\ell _{q_{0}})$, $|k|\leq N$, $j\in \{1,...,N_{2}\}$. Hence
\begin{align*}
& \Big(\int_{E_{j,k}^{\prime }}t_{k}^{p_{0}}(x)|T((b_{j,k})^{-\beta (\mathrm{Re}(z))}(\psi _{z+\Delta z,j,k}-\psi _{z,j,k}))(x)|^{p_{0}}dx\Big)^{1/p_{0}}  \notag \\
& \lesssim \Big(\int_{\mathbb{R}^{n}}\Big(\sum\limits_{|k|\leq N}t_{k}^{q_{0}}(x)\big|(b_{j,k})^{-\beta (\mathrm{Re}(z))}(\psi _{z+\Delta z,j,k}(x)-\psi _{z,j,k}(x))\big|^{q_{0}}\Big)^{p_{0}/q_{0}}dx\Big)^{1/p_{0}}.
\end{align*}
The dominated convergence theorem yields that $J_{2,j,k}(z,\Delta z)$ tends to zero as $\Delta z$ tends to $0$, so $\Psi _{j,k}$ is continuous at $z$. Hence $\Phi $ is continuous in the strip $0\leq \mathrm{Re}(z)\leq 1$.

\textit{Substep 1.3.} Here we prove that $\Phi $ is bounded in the strip $0\leq \mathrm{Re}(z)\leq 1$. From Substep 1.2, we only need to prove that $I_{z,j,k}^{2}\leq M$ for a suitable constant $M$ independent of $z$. Since
\begin{equation*}
B^{\kappa (\mathrm{Re}(z))-\beta (\mathrm{Re}(z))}(x)\leq \max (1,B^{p^{\prime }}(x))(g_{k}(x))^{-\beta (\mathrm{Re}(z))}
\end{equation*}
for any $x\in \mathbb{R}^{n}$ and $|k|\leq N$, we find that
\begin{equation*}
I_{z,j,k}^{2}\leq \int_{E_{j,k}^{\prime }}\max (1,B^{p^{\prime }}(x))dx\leq M,
\end{equation*}
and we are done.

\textit{Substep 1.4. }In this step we prove that $\log \Psi _{j,k}(z)$ is subharmonic.
We fix a harmonic function $h(z)$ and denote by $H(z)$ the analytic function whose real part is $h(z)$. We only need to prove that $\Psi _{j,k}(z)e^{h(z)}$ is subharmonic for every harmonic $h(z)$. Since the problem is local, we may consider $h$ and $H$ in a given circle. Put
\begin{equation*}
\psi _{z,j,k}^{\star }=\psi _{z,j,k}e^{H(z)}\quad \text{and}\quad \Psi _{j,k}^{\star }(z)=\Psi _{j,k}(z)e^{h(z)}.
\end{equation*}
We fix $z$, take $\varrho >0$, and denote by $z_{1},...,z_{r}$ a system of points equally spaced over the circumference of the circle with center $z$ and radius $\varrho $. Our estimates partially use techniques from \cite{Kr72}. First we prove that $\log \big|T(\psi _{z,j,k})\big|$ is subharmonic. We want to show that
\begin{equation*}
\big|T(\psi _{z,j,k}^{\star })\big|=e^{h(z)}\big|T(\psi _{z,j,k})\big|\leq \frac{1}{2\pi }\int_{0}^{2\pi }\big|T(\psi _{z+\varrho e^{it},j,k}^{\star })\big|dt.
\end{equation*}
Let us calculate the limit of
\begin{equation}
\big\|t_{k}T\big(\psi _{z,j,k}^{\star }-\frac{1}{r}\sum_{v=1}^{r}\psi _{z_{v},j,k}^{\star }\big)\big\|_{p_{1}}  \label{estimate}
\end{equation}
as $r$ tends to infinity. We estimate $|\psi _{z_{v},j,k}^{\star }|,v=1,...,r$. Clearly
\begin{equation*}
|\psi _{z_{v},j,k}^{\star }|\leq C_{2}t_{k}^{-1}A^{\tau (\mathrm{Re}(z_{v}))}\sum_{l=1}^{N_{1}}\chi _{E_{l,k}}.
\end{equation*}
We consider separately the possibilities $\tau (\mathrm{Re}(z_{v}))\geq 0$ and $\tau (\mathrm{Re}(z_{v}))<0$. In the first case
\begin{equation*}
A^{\tau (\mathrm{Re}(z_{v}))}\leq C_{3}\big\|\{|t_{k}f_{k}|^{\tau (\mathrm{Re}(z_{v}))}\}_{|k|\leq N}|\ell _{q}\big\|\leq C_{4}\big\|\big\{\sum_{l=1}^{N_{1}}\chi _{E_{l,k}}\big\}_{|k|\leq N}|\ell _{q}\big\|=C_{4}K.
\end{equation*}
If $\tau (\mathrm{Re}(z_{v}))<0$, then
\begin{equation*}
A^{\tau (\mathrm{Re}(z_{v}))}\leq |t_{0}f_{0}|^{\tau (\mathrm{Re}(z_{v}))}\leq C_{5}\sum_{l=1}^{N_{1}}\chi _{E_{l,0}}.
\end{equation*}
Therefore
\begin{equation*}
|\psi _{z_{v},j,k}^{\star }|\lesssim \max (1,K)t_{k}^{-1}g_{j,k},
\end{equation*}
where $\{g_{j,k}\}_{k\in \mathbb{Z}}\in S$. Notice that the implicit constant is independent of $r,v,j$ and $k$. This guarantees that $\mathrm{\eqref{estimate}}$ can be estimated by
\begin{equation*}
C\big\|\psi _{z,j,k}^{\star }-\frac{1}{r}\sum_{v=1}^{r}\psi _{z_{v},j,k}^{\star }\big\|_{p_{1}}.
\end{equation*}
In view of the fact that
\begin{equation*}
\psi _{z,j,k}^{\star }(x)=\lim_{r\rightarrow \infty }\frac{1}{r}\sum_{v=1}^{r}\psi _{z_{v},j,k}^{\star }(x),
\end{equation*}
which is valid since $\psi _{z,j,k}^{\star }$ is analytic in $z$ for each $x$, the dominated convergence theorem yields that the last norm tends to zero as $r$ tends to infinity. Therefore $\mathrm{\eqref{estimate}}$ tends to zero as $r$ tends to infinity. Hence there exist subsequences
\begin{equation*}
\Big\{T\big(\psi _{z,j,k}^{\star }-\frac{1}{r_{l}}\sum_{v=1}^{r_{l}}\psi _{z_{v},j,k}^{\star }\big)\Big\}_{l}
\end{equation*}
converging to zero as $r_{l}$ tends to infinity. Hence
\begin{align*}
\big|T(\psi _{z,j,k}^{\star })\big| &\leq \lim_{r_{l}\rightarrow \infty }\big|T(\frac{1}{r_{l}}\sum_{v=1}^{r_{l}}\psi _{z_{v},j,k}^{\star })\big| \\
&\leq \lim_{r_{l}\rightarrow \infty }\frac{1}{r_{l}}\sum_{v=1}^{r_{l}}\big|T(\psi _{z_{v},j,k}^{\star })\big| \\
&=\frac{1}{2\pi }\int_{0}^{2\pi }\big|T(\psi _{z+\varrho e^{it},j,k}^{\star })\big|dt.
\end{align*}
In addition, the mapping $z\rightarrow \log (\vartheta (\cdot ,\mathrm{Re}(z))t_{k}(\cdot ))$ is subharmonic. Consequently,
\begin{equation*}
z\rightarrow \log (\vartheta (\cdot ,\mathrm{Re}(z))t_{k}(\cdot )\big|T(\psi _{z,j,k})\big|)
\end{equation*}
is subharmonic.
Then
\begin{align*}
& e^{h(z)}\vartheta (x,\mathrm{Re}(z))t_{k}(x)\big|T(\psi _{z,j,k})(x)\big|\\
&\leq \frac{1}{2\pi }\int_{0}^{2\pi }\vartheta (x,\mathrm{Re}(z+\varrho e^{it}))t_{k}(x)\big|T\big(\psi _{z+\varrho e^{it},j,k}^{\star }\big)(x)\big|dt.
\end{align*}
Integrating with respect to $x$ over the set $E_{j,k}^{\prime }$, applying Fubini's theorem and observing that
\begin{equation*}
\Psi _{j,k}(z)e^{h(z)}=\int_{E_{j,k}^{\prime }}\vartheta (x,\mathrm{Re}(z))t_{k}(x)\big|T(\psi _{z,j,k}^{\star })(x)\big|dx,
\end{equation*}
we obtain
\begin{equation*}
\Psi _{j,k}^{\star }(z)\leq \frac{1}{2\pi }\int_{0}^{2\pi }\Psi _{j,k}^{\star }(z+\varrho e^{it})dt.
\end{equation*}
Hence $\Psi _{j,k}^{\star }$ is subharmonic in the strip $0\leq \mathrm{Re}(z)\leq 1$.

\textit{Substep 1.5. }We prove that \eqref{CZ1956} holds for any sequence of simple functions $\{t_{k}f_{k}\}_{k\in \mathbb{Z}}\in S$. Let $z=\mathrm{Re}(z)+i\,\mathrm{Im}(z)$ with $0\leq \mathrm{Re}(z)\leq 1$ and $\mathrm{Im}(z)\in \mathbb{R}$. From Substeps 1.2-1.4, $\Phi $ is continuous and bounded, and its logarithm is subharmonic in the strip $0\leq \mathrm{Re}(z)\leq 1$. Let us show that $\Phi (i\,\mathrm{Im}(z))\leq M_{1}$ for any $\mathrm{Im}(z)\in \mathbb{R}$. We have
\begin{equation*}
D^{\tau (0)}=D^{p/p_{0}-q/q_{0}},\quad D^{\kappa (0)-\beta (0)}=D^{p^{\prime }/p_{0}^{\prime }-q^{\prime }/q_{0}^{\prime }},\quad |D^{\alpha (i\,\mathrm{Im}(z))}|=D^{1-q/q_{0}}
\end{equation*}
for any $D\geq 0$.
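These identities follow directly from the definitions of $\tau $, $\alpha $, $\kappa $ and $\beta $ in Substep 1.1; we record the computation for the reader's convenience. Since $1/p(0)=1/p_{0}$ and $1/q(0)=1/q_{0}$, we get
\begin{equation*}
\tau (0)=\frac{p}{p_{0}}-\frac{q}{q_{0}},\qquad \kappa (0)=\frac{1-\frac{1}{p_{0}}}{1-\frac{1}{p}}=\frac{p^{\prime }}{p_{0}^{\prime }},\qquad \beta (0)=\frac{1-\frac{1}{q_{0}}}{1-\frac{1}{q}}=\frac{q^{\prime }}{q_{0}^{\prime }},
\end{equation*}
so that $\kappa (0)-\beta (0)=p^{\prime }/p_{0}^{\prime }-q^{\prime }/q_{0}^{\prime }$. Moreover, for purely imaginary $z$ the real part of $1/q(z)$ equals $1/q_{0}$, hence the real part of $\alpha (z)=1-q/q(z)$ equals $1-q/q_{0}$ and the modulus of $D^{\alpha (z)}$, $D>0$, equals $D^{1-q/q_{0}}$.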
By H\"{o}lder's inequality $\Phi (i\,\mathrm{Im}(z))$ can be estimated by
\begin{align*}
&\Big(\int_{\mathbb{R}^{n}}\Big(\sum\limits_{|k|\leq N}t_{k}^{q_{0}}(x)\big|T\big(\omega (\cdot ,i\,\mathrm{Im}(z))|f_{k}|^{q/q(i\,\mathrm{Im}(z))}\mathrm{sign}f_{k}\big)(x)\big|^{q_{0}}\Big)^{p_{0}/q_{0}}dx\Big)^{1/p_{0}} \\
&\times \Big(\int_{\mathbb{R}^{n}}\Big(\sum\limits_{|k|\leq N}\big((g_{k}(x))^{\beta (0)}\vartheta (x,0)\big)^{q_{0}^{\prime }}\Big)^{p_{0}^{\prime }/q_{0}^{\prime }}dx\Big)^{1/p_{0}^{\prime }}.
\end{align*}
The second integral is just
\begin{equation*}
\Big(\int_{\mathbb{R}^{n}}\Big(\sum\limits_{|k|\leq N}g_{k}^{q^{\prime }}(x)\Big)^{p^{\prime }/q^{\prime }}dx\Big)^{1/p_{0}^{\prime }}\leq 1,
\end{equation*}
since $\kappa (0)-\beta (0)=p^{\prime }/p_{0}^{\prime }-q^{\prime }/q_{0}^{\prime }$ and $\beta (0)=q^{\prime }/q_{0}^{\prime }$. The boundedness of $T$ on $L_{p_{0}}(\ell _{q_{0}},\{t_{k}\})$ yields that the first integral does not exceed
\begin{equation}
M_{1}\Big(\int_{\mathbb{R}^{n}}\Big(\sum\limits_{|k|\leq N}t_{k}^{q_{0}}(x)\big|\omega (x,i\,\mathrm{Im}(z))|f_{k}(x)|^{q/q(i\,\mathrm{Im}(z))}\mathrm{sign}f_{k}\big|^{q_{0}}\Big)^{p_{0}/q_{0}}dx\Big)^{1/p_{0}},  \label{caseRez=0}
\end{equation}
but \eqref{caseRez=0} is just
\begin{equation*}
M_{1}\big\|\{t_{k}f_{k}\}|L_{p}(\ell _{q})\big\|^{p/p_{0}}=M_{1}.
\end{equation*}
Similarly we obtain
\begin{equation*}
\Phi (1+i\,\mathrm{Im}(z))\leq M_{2}.
\end{equation*}
From the previous substeps, $\log \Phi (z)$ is continuous, bounded above and subharmonic in the strip $0\leq \mathrm{Re}(z)\leq 1$. In addition, $\log \Phi (z)$ does not exceed $\log M_{1}$ and $\log M_{2}$ on the lines $\mathrm{Re}(z)=0$ and $\mathrm{Re}(z)=1$, respectively.
Applying the Hadamard three-line theorem for subharmonic functions, Theorem \ref{hadamard}, we obtain
\begin{equation*}
\log \Phi (z)\leq (1-\theta )\log M_{1}+\theta \log M_{2},\quad 0<\mathrm{Re}(z)<1.
\end{equation*}
In particular
\begin{equation*}
I=\Phi (\theta )\leq M_{1}^{1-\theta }M_{2}^{\theta }.
\end{equation*}
Finally we have proved that
\begin{equation*}
\big\|\{t_{k}T(f_{k})\}|L_{p}(\ell _{q})\big\|\leq M_{1}^{1-\theta }M_{2}^{\theta }\big\|\{t_{k}f_{k}\}|L_{p}(\ell _{q})\big\|
\end{equation*}
for each sequence of simple functions $\{t_{k}f_{k}\}$.

\textit{Step 2. }We prove \eqref{CZ1956}. Assume $q_{1}<q_{0}$ and let $\{f_{k}\}\in L_{p}(\ell _{q},\{t_{k}\})$. Then $\{t_{k}f_{k}\}\in L_{p}(\ell _{q})$. There exists a sequence $\{g_{n}\}_{n\in \mathbb{N}}=\{\{g_{n}^{k}\}_{k\in \mathbb{Z}}\}_{n\in \mathbb{N}}\subset S\subset L_{p}(\ell _{q})$ such that $\{g_{n}\}_{n\in \mathbb{N}}$ converges to $\{t_{k}f_{k}\}$ in $L_{p}(\ell _{q})$. Hence
\begin{equation*}
\Big\|\{t_{k}^{-1}g_{n}^{k}-f_{k}\}|L_{p}(\ell _{q},\{t_{k}\})\Big\|
\end{equation*}
tends to zero as $n$ tends to infinity. Therefore
\begin{equation*}
\Big\|\{t_{k}^{-1}g_{n}^{k}-f_{k}\}|L_{p}(\ell _{q_{0}},\{t_{k}\})\Big\|
\end{equation*}
tends to zero as $n$ tends to infinity, since $L_{p}(\ell _{q})\hookrightarrow L_{p}(\ell _{q_{0}})$. Observe that
\begin{equation*}
\big|\big|T(f_{k})\big|-\big|T(t_{k}^{-1}g_{n}^{k})\big|\big|\leq \big|T(f_{k}-t_{k}^{-1}g_{n}^{k})\big|
\end{equation*}
for any $k\in \mathbb{Z}$ and any $n\in \mathbb{N}$.
Therefore
\begin{equation*}
\Big\|\{\big|T(f_{k})\big|\}-\{\big|T(t_{k}^{-1}g_{n}^{k})\big|\}|L_{p}(\ell _{q_{0}},\{t_{k}\})\Big\|  \label{step2}
\end{equation*}
can be estimated by
\begin{equation*}
\Big\|\{T(f_{k}-t_{k}^{-1}g_{n}^{k})\}|L_{p}(\ell _{q_{0}},\{t_{k}\})\Big\|\lesssim \Big\|\{t_{k}^{-1}g_{n}^{k}-f_{k}\}|L_{p}(\ell _{q_{0}},\{t_{k}\})\Big\|,
\end{equation*}
where this inequality follows from the fact that $\{t_{k}^{-1}g_{n}^{k}\}\in L_{p}(\ell _{q_{0}},\{t_{k}\})$, since
\begin{equation*}
\Big\|\{t_{k}^{-1}g_{n}^{k}\}|L_{p}(\ell _{q_{0}},\{t_{k}\})\Big\|=\Big\|\{g_{n}^{k}\}|L_{p}(\ell _{q_{0}})\Big\|<\infty .
\end{equation*}
This yields that
\begin{equation*}
\Big\|\{T(f_{k}-t_{k}^{-1}g_{n}^{k})\}|L_{p}(\ell _{q_{0}},\{t_{k}\})\Big\|
\end{equation*}
tends to zero as $n$ tends to infinity. Therefore $\big\{\{\big|T(t_{k}^{-1}g_{n}^{k})\big|\}\big\}_{n\in \mathbb{N}}$ converges to $\big\{\big|T(f_{k})\big|\big\}$ in the $L_{p}(\ell _{q_{0}},\{t_{k}\})$-norm, which yields that $\big\{\{t_{k}\big|T(t_{k}^{-1}g_{n}^{k})\big|\}\big\}_{n\in \mathbb{N}}$ converges to $\big\{t_{k}\big|T(f_{k})\big|\big\}$ in the $L_{p}(\ell _{q_{0}})$-norm. Hence $\big\{t_{k}\big|T(t_{k}^{-1}g_{n}^{k})\big|\big\}_{n\in \mathbb{N}}$ converges to $t_{k}\big|T(f_{k})\big|$ in $L_{p}$ for every $k\in \mathbb{Z}$. The Cantor diagonal technique gives an increasing sequence $\{\varpi _{k}(n)\}_{n\in \mathbb{N}}$ in ${\mathbb{N}}$ such that $\big\{t_{k}\big|T(t_{k}^{-1}g_{\varpi _{k}(n)}^{k})\big|\big\}_{n\in \mathbb{N}}$ converges to $t_{k}\big|T(f_{k})\big|$ for every $k\in \mathbb{Z}$.
We have \begin{align*} \Big\|\{T(f_{k})\}|L_{p}(\ell _{q},\{t_{k}\})\Big\| &=\Big\| \{t_{k}T(f_{k})\}|L_{p}(\ell _{q})\Big\| \\ &=\Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}\big|T(f_{k})\big| ^{q}\Big)^{1/q}\Big\|_{p} \\ &=\Big\|\Big(\sum\limits_{k=-\infty }^{\infty }\lim_{n\rightarrow \infty } \big(t_{k}\big|T(t_{k}^{-1}g_{\varpi _{k}(n)}^{k})\big|\big)^{q}\Big)^{1/q}\Big\|_{p} \\ &\leqslant \underset{n\longrightarrow \infty }{\lim \inf }\Big\|\Big( \sum\limits_{k=-\infty }^{\infty }\big(t_{k}\big|T(t_{k}^{-1}g_{\varpi _{k}(n)}^{k})\big|\big)^{q}\Big)^{1/q}\Big\|_{p} \end{align*} by Fatou's lemma. We have proved that the last norm is bounded by \begin{align*} & M_{1}^{1-\theta }M_{2}^{\theta }\Big\|\{t_{k}^{-1}g_{\varpi _{k}(n)}^{k}\}|L_{p}(\ell _{q},\{t_{k}\})\Big\| \\ & \leqslant M_{1}^{1-\theta }M_{2}^{\theta }\Big\|\{t_{k}^{-1}g_{\varpi _{k}(n)}^{k}-f_{k}\}|L_{p}(\ell _{q},\{t_{k}\})\Big\|+M_{1}^{1-\theta }M_{2}^{\theta }\Big\|\{f_{k}\}|L_{p}(\ell _{q},\{t_{k}\})\Big\|. \end{align*} Consequently \begin{equation*} \Big\|\{T(f_{k})\}|L_{p}(\ell _{q},\{t_{k}\})\Big\|\leqslant M_{1}^{1-\theta }M_{2}^{\theta }\Big\|\{f_{k}\}|L_{p}(\ell _{q},\{t_{k}\})\Big\| \end{equation*} for any $\{t_{k}f_{k}\}\in L_{p}(\ell _{q})$ and the lemma is proved. \end{proof} Now we state the main result of this subsection. \begin{lem} \label{key-estimate1}Let $1<\theta \leqslant p<\infty $ and $1<q<\infty $. Let $\{t_{k}\}$ be a $p$-admissible weight sequence such that $t_{k}^{p}\in A_{\frac{p}{\theta }}(\mathbb{R}^{n})$, $k\in \mathbb{Z}$. Assume that the $t_{k}^{p}$, $k\in \mathbb{Z}$, have the same Muckenhoupt constant, $A_{\frac{p}{\theta }}(t_{k}^{p})=c$, $k\in \mathbb{Z}$.
Then \begin{equation} \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}\big(\mathcal{M}(f_{k}) \big)^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|\lesssim \Big\|\Big( \sum\limits_{k=-\infty }^{\infty }t_{k}^{q}\left\vert f_{k}\right\vert ^{q} \Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\| \label{key-est} \end{equation} holds for all sequences of functions $\{t_{k}f_{k}\}\in L_{p}(\ell _{q})$. \end{lem} \begin{proof} We divide the proof into two steps. \textit{Step 1.} We prove $\mathrm{\eqref{key-est}}$ with $1<q\leqslant p<\infty $. The case $p=q$ follows, e.g., by \cite{D20}, so we assume that $1<q<p<\infty $. Let $j\in \mathbb{N}$ be such that $q^{j}\leqslant p<q^{j+1}$. \textit{Substep 1.1. }We consider the case $q^{j}<p<q^{j+1}$, which we further split into Substeps 1.1.1 and 1.1.2. \textit{Substep 1.1.1.} We consider the case $1<q\leqslant \theta ^{\frac{1}{j}}$. From Lemma \ref{Ap-Property}/(i), $t_{k}^{p}\in A_{p/q^{j}}(\mathbb{R}^{n})$. By duality the left-hand side of $\mathrm{\eqref{key-est}}$ is bounded by \begin{equation*} \sup \sum\limits_{k=-\infty }^{\infty }\int_{\mathbb{R}^{n}}t_{k}(x)\mathcal{M}(f_{k})(x)|g_{k}(x)|dx=\sup \sum\limits_{k=-\infty }^{\infty }T_{k}, \end{equation*} where the supremum is taken over all sequences of functions $\{g_{k}\}\in L_{p^{\prime }}(\ell _{q^{\prime }})$ with \begin{equation*} \big\|\{g_{k}\}|L_{p^{\prime }}(\ell _{q^{\prime }})\big\|\leqslant 1, \end{equation*} where $p^{\prime }$ and $q^{\prime }$ are the conjugate exponents of $p$ and $q$, respectively. Let $Q$ be a cube. By H\"{o}lder's inequality, \begin{equation*} M_{Q}(f_{k})\leqslant \frac{1}{|Q|}\big\|t_{k}f_{k}|L_{p}(Q)\big\|\big\| t_{k}^{-1}|L_{p^{\prime }}(Q)\big\|\leqslant \frac{c}{|Q|}\big\| t_{k}^{-1}|L_{p^{\prime }}(Q)\big\|,\quad k\in \mathbb{Z}, \end{equation*} where $c>0$ is independent of $k$.
Since $t_{k}^{p}\in A_{p/\theta }(\mathbb{R}^{n})$, $k\in \mathbb{Z}$, by Lemma \ref{Ap-Property}/(i), $t_{k}^{p}\in A_{p}(\mathbb{R}^{n})$, $k\in \mathbb{Z}$, and \begin{equation*} \frac{1}{|Q|}\big\|t_{k}^{-1}|L_{p^{\prime }}(Q)\big\|\leqslant c\big\| t_{k}|L_{p}(Q)\big\|^{-1}. \end{equation*} Moreover $\big\|t_{k}|L_{p}(Q)\big\|\rightarrow \infty $ as $|Q|\rightarrow \infty $ for any $k\in \mathbb{Z}$. Hence we can apply Lemma \ref{CZ-lemma}. Let \begin{equation*} \Omega _{k}^{i}=\{x\in \mathbb{R}^{n}:\mathcal{M}(f_{k})(x)>4^{n}\lambda ^{i}\},\quad k,i\in \mathbb{Z} \end{equation*} with $\lambda >2^{n+1}$ and \begin{equation*} H_{k}^{i}=\{x\in \mathbb{R}^{n}:4^{n}\lambda ^{i}<\mathcal{M}(f_{k})(x)\leqslant 4^{n}\lambda ^{i+1}\},\quad k,i\in \mathbb{Z}. \end{equation*} We have \begin{equation*} T_{k}=\sum_{i=-\infty }^{\infty }\int_{H_{k}^{i}}t_{k}(x)\mathcal{M}(f_{k})(x)|g_{k}(x)|dx\leqslant 4^{n}\sum_{i=-\infty }^{\infty }\lambda ^{i+1}\int_{\Omega _{k}^{i}}t_{k}(x)|g_{k}(x)|dx. \end{equation*} Let $\{Q_{k}^{i,h}\}_{h}$ be the collection of maximal dyadic cubes as in Lemma \ref{CZ-lemma} with \begin{equation*} \Omega _{k}^{i}\subset \cup _{h}3Q_{k}^{i,h}, \end{equation*} which implies that \begin{equation} T_{k}\leqslant 4^{n}\sum_{i=-\infty }^{\infty }\sum_{h=0}^{\infty }\lambda ^{i+1}\int_{3Q_{k}^{i,h}}t_{k}(x)|g_{k}(x)|dx,\quad k\in \mathbb{Z}. \label{Key-est-Tk} \end{equation} Applying H\"{o}lder's inequality, \begin{align*} \int_{3Q_{k}^{i,h}}t_{k}(x)|g_{k}(x)|dx &\leqslant \Big( \int_{3Q_{k}^{i,h}}t_{k}^{\tau }(x)dx\Big)^{1/\tau }\Big( \int_{3Q_{k}^{i,h}}|g_{k}(x)|^{\tau ^{\prime }}dx\Big)^{1/\tau ^{\prime }} \\ &=|3Q_{k}^{i,h}|M_{3Q_{k}^{i,h},\tau }(t_{k})M_{3Q_{k}^{i,h},\tau ^{\prime }}(g_{k}) \end{align*} with $\tau >1$. Put $\tau =p(1+\varepsilon )$ with $\varepsilon $ as in Theorem \ref{reverse Holder inequality}, which is possible since $t_{k}^{p}\in A_{p/\theta }(\mathbb{R}^{n})$, $k\in \mathbb{Z}$.
Obviously, we have \begin{equation*} M_{3Q_{k}^{i,h},\tau }(t_{k})=M_{3Q_{k}^{i,h},p(1+\varepsilon )}(t_{k})\leqslant c \text{ }M_{3Q_{k}^{i,h},p}(t_{k}),\quad k\in \mathbb{Z}. \end{equation*} Since the weights $t_{k}^{p}$, $k\in \mathbb{Z}$, have the same Muckenhoupt constant, the proof of \cite[Theorem 7.2.2]{L. Graf14} shows that the constant $c$ is independent of $k$. Therefore, \begin{equation*} \int_{3Q_{k}^{i,h}}t_{k}(x)|g_{k}(x)|dx\lesssim |Q_{k}^{i,h}|M_{3Q_{k}^{i,h},p}(t_{k})M_{3Q_{k}^{i,h},\tau ^{\prime }}(g_{k}). \end{equation*} We deduce from the above that \begin{equation*} \lambda ^{i}\int_{3Q_{k}^{i,h}}t_{k}(x)|g_{k}(x)|dx\lesssim |Q_{k}^{i,h}|M_{3Q_{k}^{i,h},p}(t_{k})M_{Q_{k}^{i,h}}(f_{k})M_{3Q_{k}^{i,h},\tau ^{\prime }}(g_{k}). \end{equation*} By H\"{o}lder's inequality, \begin{equation*} M_{Q_{k}^{i,h}}(f_{k})\leqslant M_{3Q_{k}^{i,h},s^{\prime }}(t_{k}^{-1})M_{3Q_{k}^{i,h},s}(t_{k}f_{k}),\quad s=\frac{p}{q^{j}} \end{equation*} and, with the help of the fact that $q^{j}>1$, \begin{equation*} M_{3Q_{k}^{i,h},s^{\prime }}(t_{k}^{-1})\leqslant \big(M_{3Q_{k}^{i,h},\frac{s^{\prime }}{s}}(t_{k}^{-p})\big)^{1/p}. \end{equation*} Hence \begin{equation*} \lambda ^{i}\int_{3Q_{k}^{i,h}}t_{k}(x)|g_{k}(x)|dx\lesssim |Q_{k}^{i,h}|M_{3Q_{k}^{i,h},s}(t_{k}f_{k})M_{3Q_{k}^{i,h},\tau ^{\prime }}(g_{k}), \end{equation*} since $t_{k}^{p}\in A_{s}(\mathbb{R}^{n})$, $k\in \mathbb{Z}$. Since $|Q_{k}^{i,h}|\leqslant \beta |E_{k}^{i,h}|$, with $E_{k}^{i,h}=Q_{k}^{i,h}\backslash (Q_{k}^{i,h}\cap (\cup _{h}Q_{k}^{i+1,h}))$, and the sets $E_{k}^{i,h}$ are pairwise disjoint, the last expression is bounded by \begin{align*} & c\int_{E_{k}^{i,h}}M_{3Q_{k}^{i,h},s}(t_{k}f_{k})M_{3Q_{k}^{i,h},\tau ^{\prime }}(g_{k})dx\\ &\lesssim \int_{\mathbb{R}^{n}}\mathcal{M}_{s}(t_{k}f_{k})(x)\mathcal{M}_{\tau ^{\prime }}(g_{k})(x)\chi _{E_{k}^{i,h}}(x)dx.
\end{align*} Therefore, the right-hand side of $\mathrm{\eqref{Key-est-Tk}}$ does not exceed \begin{align*} & c\sum_{i=-\infty }^{\infty }\sum_{h=0}^{\infty }\int_{\mathbb{R}^{n}} \mathcal{M}_{s}(t_{k}f_{k})(x)\mathcal{M}_{\tau ^{\prime }}(g_{k})(x)\chi _{E_{k}^{i,h}}(x)dx\\ &\lesssim \int_{\mathbb{R}^{n}}\mathcal{M}_{s}(t_{k}f_{k})(x)\mathcal{M}_{\tau ^{\prime }}(g_{k})(x)dx. \end{align*} This implies that \begin{equation*} \sum\limits_{k=-\infty }^{\infty }T_{k}\lesssim \int_{\mathbb{R}^{n}}\sum\limits_{k=-\infty }^{\infty }\mathcal{M}_{s}(t_{k}f_{k})(x)\mathcal{M}_{\tau ^{\prime }}(g_{k})(x)dx. \end{equation*} By H\"{o}lder's inequality the term inside the integral is bounded by \begin{equation*} \Big(\sum\limits_{k=-\infty }^{\infty }\big(\mathcal{M}_{s}\big(t_{k}f_{k}\big)(x)\big)^{q}\Big)^{1/q}\Big(\sum\limits_{k=-\infty }^{\infty }\big( \mathcal{M}_{\tau ^{\prime }}(g_{k})(x)\big)^{q^{\prime }}\Big)^{1/q^{\prime }} \end{equation*} for any $x\in \mathbb{R}^{n}$. Again by H\"{o}lder's inequality $\sum\limits_{k=-\infty }^{\infty }T_{k}$ can be estimated by \begin{align*} & c\Big\|\Big(\sum\limits_{k=-\infty }^{\infty }\big(\mathcal{M}_{s}\big( t_{k}f_{k}\big)\big)^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|\Big\|\Big( \sum\limits_{k=-\infty }^{\infty }\big(\mathcal{M}_{\tau ^{\prime }}(g_{k}) \big)^{q^{\prime }}\Big)^{1/q^{\prime }}|L_{p^{\prime }}(\mathbb{R}^{n}) \Big\| \\ & \lesssim \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}\left\vert f_{k}\right\vert ^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|, \end{align*} where in the second inequality we used the vector-valued maximal inequality of Fefferman and Stein $\mathrm{\eqref{Fe-St71}}$, because $0<s<q<p$, and the fact that \begin{align*} \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }\big(\mathcal{M}_{\tau ^{\prime }}(g_{k})\big)^{q^{\prime }}\Big)^{1/q^{\prime }}|L_{p^{\prime }}(\mathbb{R} ^{n})\Big\| &\lesssim \big\|\{g_{k}\}|L_{p^{\prime }}(\ell
_{q^{\prime }}) \big\| \\ &\lesssim 1, \end{align*} since $\tau =p(1+\varepsilon )>p>q$. \textit{Substep 1.1.2.} We consider the case $\theta ^{\frac{1}{j}}<q<p$. Let \begin{equation*} \frac{1}{q}=\frac{1-\lambda }{p}+\frac{\lambda }{\theta ^{\frac{1}{j}}},\quad 0<\lambda <1. \end{equation*} Applying Lemma \ref{Calderon-Zygmund} we obtain the desired estimate. \textit{Substep 1.2. }We have proved $\mathrm{\eqref{key-est}}$ in the case $q^{j}<p<q^{j+1}$. Now we study the case $p=q^{j}$, $j\in \mathbb{N}$. Then $j\geqslant 2$. Since $t_{k}^{p}\in A_{p}(\mathbb{R}^{n})$, $k\in \mathbb{Z}$, from Theorem \ref{reverse Holder inequality} there exists a number $\gamma _{1}>0$ such that \begin{equation*} M_{Q,1+\gamma _{1}}(t_{k}^{p})\lesssim M_{Q}(t_{k}^{p}),\quad k\in \mathbb{Z}, \end{equation*} where the implicit constant is independent of $k$, see, e.g., \cite[Theorem 7.2.5 and Corollary 7.2.6]{L. Graf14}. In addition, by Lemma \ref{Ap-Property}/(i), $t_{k}^{-p^{\prime }}\in A_{p^{\prime }}(\mathbb{R}^{n})$ and again by Theorem \ref{reverse Holder inequality} there exists a number $\gamma _{2}>0$ such that \begin{equation*} M_{Q,1+\gamma _{2}}(t_{k}^{-p^{\prime }})\lesssim M_{Q}(t_{k}^{-p^{\prime }}),\quad k\in \mathbb{Z}. \end{equation*} Let \begin{equation*} 0<\gamma <\min \Big(\gamma _{1},\gamma _{2},\frac{p-p/q}{p/q-1}\Big). \end{equation*} Then \begin{equation*} M_{Q,1+\gamma }(t_{k}^{p})\big(M_{Q,1+\gamma }(t_{k}^{-p^{\prime }})\big)^{\frac{p}{p^{\prime }}}\lesssim 1,\quad k\in \mathbb{Z}, \end{equation*} where the implicit constant is independent of $k$. Hence $t_{k}^{p(1+\gamma )}\in A_{p}(\mathbb{R}^{n})$, $k\in \mathbb{Z}$. Therefore $t_{k}^{p}\in A_{p_{1}}(\mathbb{R}^{n})$, $k\in \mathbb{Z}$, where \begin{equation*} p_{1}=\frac{p+\gamma }{1+\gamma }, \end{equation*} see again \cite[Exercise 7.1.3]{L. Graf14}.
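For completeness, we verify that this choice of $\gamma $ places $p_{1}$ in the right range; recall that here $p=q^{j}$, so $p/q=q^{j-1}$:
\begin{align*}
p_{1}=\frac{p+\gamma }{1+\gamma }<p &\iff \gamma <p\gamma ,\quad \text{which holds since }p>1, \\
p_{1}>\frac{p}{q} &\iff q(p+\gamma )>p(1+\gamma )\iff \gamma <\frac{p(q-1)}{p-q}=\frac{p-p/q}{p/q-1},
\end{align*}
and the last bound is exactly one of the restrictions imposed on $\gamma $ above.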
Observe that \begin{equation*} q^{j-1}<p_{1}<p=q^{j} \end{equation*} and $t_{k}^{p_{1}}\in A_{p_{1}}(\mathbb{R}^{n})$, $k\in \mathbb{Z}$, see Lemma \ref{Ap-Property}/(v). Substep 1.1 gives \begin{equation*} \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}\big(\mathcal{M}(f_{k})\big)^{q}\Big)^{1/q}|L_{p_{1}}(\mathbb{R}^{n})\Big\|\lesssim \Big\|\Big( \sum\limits_{k=-\infty }^{\infty }t_{k}^{q}\left\vert f_{k}\right\vert ^{q}\Big)^{1/q}|L_{p_{1}}(\mathbb{R}^{n})\Big\|. \end{equation*} The same procedure yields that $t_{k}^{p(1+\nu )}\in A_{p}(\mathbb{R}^{n})\subset A_{p(1+\nu )}(\mathbb{R}^{n})$, $k\in \mathbb{Z}$, with \begin{equation*} 0<\nu <\min \Big(\gamma _{1},\gamma _{2},q-1\Big) \end{equation*} and then $q^{j}=p<p(1+\nu )<q^{j+1}$. Substep 1.1 again gives \begin{equation*} \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}\big(\mathcal{M}(f_{k})\big)^{q}\Big)^{1/q}|L_{p(1+\nu )}(\mathbb{R}^{n})\Big\|\lesssim \Big\|\Big( \sum\limits_{k=-\infty }^{\infty }t_{k}^{q}\left\vert f_{k}\right\vert ^{q}\Big)^{1/q}|L_{p(1+\nu )}(\mathbb{R}^{n})\Big\|. \end{equation*} Again, by Lemma \ref{Calderon-Zygmund} we obtain the desired estimate. \textit{Step 2.} We shall prove $\mathrm{\eqref{key-est}}$ with $1<p<q<\infty $. Let $1<\varrho <\theta <\infty $. Again, by duality the left-hand side of $\mathrm{\eqref{key-est}}$, raised to the power $\varrho $, is just \begin{equation*} \sup \sum\limits_{k=-\infty }^{\infty }\int_{\mathbb{R}^{n}}t_{k}^{\varrho }(x)\left( \mathcal{M}(f_{k})(x)\right) ^{\varrho }|g_{k}(x)|dx=\sup V, \end{equation*} where the supremum is taken over all sequences of functions $\{g_{k}\}$ such that \begin{equation} \{g_{k}\}\in L_{(p/\varrho )^{\prime }}(\ell _{(q/\varrho )^{\prime }}),\quad \big\|\{g_{k}\}|L_{(p/\varrho )^{\prime }}(\ell _{(q/\varrho )^{\prime }})\big\|\leqslant 1.
\label{last} \end{equation} By Lemma \ref{FS-lemma} and H\"{o}lder's inequality, $V$ is bounded by \begin{align*} & \ c\int_{\mathbb{R}^{n}}\sum\limits_{k=-\infty }^{\infty }\big|f_{k}(x)\big|^{\varrho }\mathcal{M}(t_{k}^{\varrho }g_{k})(x)dx \\ & \lesssim \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}|f_{k}|^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|^{\varrho }\\ &\times \Big\|\Big( \sum\limits_{k=-\infty }^{\infty }t_{k}^{-\varrho (q/\varrho )^{\prime }}(\mathcal{M}(t_{k}^{\varrho }g_{k}))^{(q/\varrho )^{\prime }}\Big)^{1/(q/\varrho )^{\prime }}|L_{(p/\varrho )^{\prime }}(\mathbb{R}^{n})\Big\|. \end{align*} By Step 1, the second term is bounded, since $(q/\varrho )^{\prime }<(p/\varrho )^{\prime }$, \begin{equation*} t_{k}^{-\varrho (p/\varrho )^{\prime }}\in A_{(p/\varrho )^{\prime }}(\mathbb{R}^{n}),\quad k\in \mathbb{Z}, \end{equation*} and from Lemma \ref{Ap-Property}/(iv) there exists a $1<\delta <(p/\varrho )^{\prime }<\infty $ such that \begin{equation*} t_{k}^{-\varrho (p/\varrho )^{\prime }}\in A_{(p/\varrho )^{\prime }/\delta }(\mathbb{R}^{n}),\quad k\in \mathbb{Z}. \end{equation*} Thus, the desired estimate follows by Step 1 and $\mathrm{\eqref{last}}$. The proof is complete. \end{proof} \begin{rem} \label{r-estimates}$\mathrm{(i)}$ We would like to mention that the result of this lemma remains true if we assume that $t_{k}\in A_{p/\theta }(\mathbb{R}^{n})$, $k\in \mathbb{Z}$, $1<p<\infty $, with \begin{equation*} A_{p/\theta }(t_{k})\leqslant c,\quad k\in \mathbb{Z}, \end{equation*} where $c>0$ is independent of $k$.$\newline \mathrm{(ii)}$ The proof of Lemma \ref{key-estimate1} for $t_{k}^{p}=\omega $, $k\in \mathbb{Z}$, is given in \cite{AJ80} and \cite{Kok78}. $\newline \mathrm{(iii)}$ Lemma \ref{key-estimate1} with $t_{k}^{p}=\omega $, $k\in \mathbb{Z}$, see e.g., \cite{Ca02}, can be obtained by using the extrapolation theory of J. Garcia-Cuerva and J.L.
Rubio de Francia, \cite{GR85}, or by the theory of vector-valued singular integrals with operator-valued kernels, see \cite{RRT86}. $\newline \mathrm{(iv)}$ To circumvent the drawbacks of dealing with general weights, we use different techniques from those used in the papers \cite{AJ80}, \cite{GR85}, \cite{Kok78} and \cite{RRT86}. $\newline \mathrm{(v)}$ In view of Lemma \ref{Ap-Property}/(iv) we can assume that $t_{k}^{p}\in A_{p}(\mathbb{R}^{n})$, $k\in \mathbb{Z}$, $1<p<\infty $, with \begin{equation*} A_{p}(t_{k}^{p})\leqslant c,\quad k\in \mathbb{Z}, \end{equation*} where $c>0$ is independent of $k$. \end{rem} We need the following lemma, which is a discrete convolution inequality. \begin{lem} \label{lq-inequality1}\textit{Let }$0<a<1$, $1\leqslant p\leqslant \infty $, $1\leqslant r\leqslant \infty $ \textit{and }$0<q<\infty $\textit{. Let }$\left\{ f_{k}\right\} $\textit{\ and }$\left\{ g_{k}\right\} $ \textit{be two sequences of positive\ real\ functions\ and denote} \begin{equation*} \delta _{k}=\sum_{j=-\infty }^{k+v}a^{k-j}\big\|g_{k}f_{j}|L_{1}(\mathbb{R}^{n})\big\|^{1/q},\quad k,v\in \mathbb{Z} \end{equation*} and\textit{\ } \begin{equation*} \eta _{k}=\sum_{j=k+v}^{\infty }a^{j-k}\big\|g_{k}f_{j}|L_{1}(\mathbb{R}^{n})\big\|^{1/q},\quad k,v\in \mathbb{Z}. \end{equation*} Then there exists a constant $c>0\ $\textit{depending only on }$a$\textit{\ and }$q$ such that \begin{equation} \sum\limits_{k=-\infty }^{\infty }\delta _{k}^{q}+\sum\limits_{k=-\infty }^{\infty }\eta _{k}^{q}\leqslant c\Big\|\Big(\sum\limits_{k=-\infty }^{\infty }f_{k}^{r}\Big)^{1/r}|L_{p}(\mathbb{R}^{n})\Big\|\Big\|\Big( \sum\limits_{k=-\infty }^{\infty }g_{k}^{r^{\prime }}\Big)^{1/r^{\prime }}|L_{p^{\prime }}(\mathbb{R}^{n})\Big\|. \label{estimate1} \end{equation} \end{lem} \begin{proof} As the proof for $\{\eta _{k}\}$ is similar, we only consider $\{\delta _{k}\}$. We will do the proof in two steps.
\textit{Step 1.} We prove our estimate under the restriction $0<q\leqslant 1$. We have \begin{align*} \sum\limits_{k=-\infty }^{\infty }\delta _{k}^{q} &\leqslant \sum\limits_{k=-\infty }^{\infty }\sum_{j=-\infty }^{k+v}a^{(k-j)q}\big\|g_{k}f_{j}|L_{1}(\mathbb{R}^{n})\big\| \\ &=\sum\limits_{k=-\infty }^{\infty }\sum_{i=-v}^{\infty }a^{iq}\big\|g_{k}f_{k-i}|L_{1}(\mathbb{R}^{n})\big\| \\ &=\sum\limits_{i=-v}^{\infty }\sum\limits_{k=-\infty }^{\infty }a^{iq}\big\|g_{k}f_{k-i}|L_{1}(\mathbb{R}^{n})\big\|. \end{align*} We apply H\"{o}lder's inequality to estimate \begin{equation*} \sum\limits_{k=-\infty }^{\infty }\big\|g_{k}f_{k-i}|L_{1}(\mathbb{R}^{n})\big\| \end{equation*} by \begin{equation*} \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }f_{k}^{r}\Big)^{1/r}|L_{p}(\mathbb{R}^{n})\Big\|\Big\|\Big(\sum\limits_{k=-\infty }^{\infty }g_{k}^{r^{\prime }}\Big)^{1/r^{\prime }}|L_{p^{\prime }}(\mathbb{R}^{n})\Big\|. \end{equation*} Using the fact that $\sum\limits_{i=-v}^{\infty }a^{iq}\lesssim 1$ we obtain the desired estimate. \textit{Step 2.} We consider the case $1<q<\infty $. By duality, \begin{equation*} \Big(\sum\limits_{k=-\infty }^{\infty }\delta _{k}^{q}\Big)^{1/q}=\sup \sum\limits_{k=-\infty }^{\infty }\sum_{j=-\infty }^{k+v}a^{k-j}\big\|g_{k}f_{j}|L_{1}(\mathbb{R}^{n})\big\|^{1/q}h_{k}=\sup T, \end{equation*} where the supremum is taken over all sequences of positive real numbers $\{h_{k}\}\in \ell _{q^{\prime }}$ with \begin{equation*} \Big(\sum\limits_{k=-\infty }^{\infty }h_{k}^{q^{\prime }}\Big)^{1/q^{\prime }}\leqslant 1. \end{equation*} Again by H\"{o}lder's inequality, \begin{align*} T &=\sum\limits_{k=-\infty }^{\infty }\sum\limits_{i=-v}^{\infty }a^{i}\big\|g_{k}f_{k-i}|L_{1}(\mathbb{R}^{n})\big\|^{1/q}h_{k} \\ &\leqslant \sum\limits_{i=-v}^{\infty }a^{i}\Big(\sum\limits_{k=-\infty }^{\infty }\big\|g_{k}f_{k-i}|L_{1}(\mathbb{R}^{n})\big\|\Big)^{1/q}. \end{align*} As in Step 1, we obtain the desired estimate.
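For the reader's convenience, the H\"{o}lder step used above can be written out in full: by H\"{o}lder's inequality first in $k$ (with exponents $r,r^{\prime }$) and then in $x$ (with exponents $p,p^{\prime }$),
\begin{align*}
\sum\limits_{k=-\infty }^{\infty }\big\|g_{k}f_{k-i}|L_{1}(\mathbb{R}^{n})\big\| &=\int_{\mathbb{R}^{n}}\sum\limits_{k=-\infty }^{\infty }g_{k}(x)f_{k-i}(x)dx \\
&\leqslant \int_{\mathbb{R}^{n}}\Big(\sum\limits_{k=-\infty }^{\infty }f_{k-i}^{r}(x)\Big)^{1/r}\Big(\sum\limits_{k=-\infty }^{\infty }g_{k}^{r^{\prime }}(x)\Big)^{1/r^{\prime }}dx \\
&\leqslant \Big\|\Big(\sum\limits_{j=-\infty }^{\infty }f_{j}^{r}\Big)^{1/r}|L_{p}(\mathbb{R}^{n})\Big\|\Big\|\Big(\sum\limits_{k=-\infty }^{\infty }g_{k}^{r^{\prime }}\Big)^{1/r^{\prime }}|L_{p^{\prime }}(\mathbb{R}^{n})\Big\|,
\end{align*}
where the shift $k\mapsto k-i$ leaves the inner sum unchanged, $i$ being fixed.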
The proof is complete. \end{proof} Using the same type of arguments as in Lemma \ref{lq-inequality1}, it is easy to prove the following lemma. \begin{lem} \label{lq-inequality1 copy(1)}\textit{Let }$a>0$, $1\leqslant p\leqslant \infty $, $1\leqslant r\leqslant \infty $ \textit{and }$0<q<\infty $\textit{. Let }$\left\{ f_{k}\right\} $\textit{\ and }$\left\{ g_{k}\right\} $ \textit{be two sequences of positive\ real\ functions\ and denote} \begin{equation*} \delta _{k}=\sum_{j=k}^{k+v}a^{k-j}\big\|g_{k}f_{j}|L_{1}(\mathbb{R}^{n})\big\|^{1/q},\quad k\in \mathbb{Z},v\in \mathbb{N} \end{equation*} and\textit{\ } \begin{equation*} \eta _{k}=\sum_{j=k+l}^{k}a^{j-k}\big\|g_{k}f_{j}|L_{1}(\mathbb{R}^{n})\big\|^{1/q},\quad k\in \mathbb{Z},l\leqslant 0. \end{equation*} Then there exists a constant $c>0\ $\textit{depending only on }$a,v,l$\textit{\ and }$q$ such that \eqref{estimate1} holds. \end{lem} The next lemmas are important for the study of our function spaces. \begin{lem} \label{key-estimate1.1}Let $v\in \mathbb{Z}$, $K\geqslant 0$, $1<\theta \leqslant p<\infty $, $1<q<\infty $ and $\alpha =(\alpha _{1},\alpha _{2})\in \mathbb{R}^{2}$. Let $\{t_{k}\}\in \dot{X}_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p/\theta \right) ^{\prime },\sigma _{2}\geqslant p)$.
Then for all sequences of functions $\{t_{k}f_{k}\}\in L_{p}(\ell _{q})$, \begin{align} &\Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}\Big(\sum_{j=-\infty }^{k+v}2^{(j-k)K}\mathcal{M}(f_{j})\Big)^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|\notag \\ &\lesssim \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}\left\vert f_{k}\right\vert ^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\| \label{key-est1.1.1} \end{align} if $K>\alpha _{2}$ and \begin{align} & \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}\Big(\sum_{j=k+v}^{\infty }2^{(j-k)K}\mathcal{M}(f_{j})\Big)^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|\notag \\ &\lesssim \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}\left\vert f_{k}\right\vert ^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\| \label{key-est1.1.2.} \end{align} if $K<\alpha _{1}$. \end{lem} \begin{proof} We divide the proof into two steps. \textit{Step 1.} We present the proof of $\mathrm{\eqref{key-est1.1.1}}$. We separate this step into two distinct cases: $1<q\leqslant p<\infty $ and $1<p<q<\infty $. \textit{Substep 1.1.} We consider the case $1<q\leqslant p<\infty $. By duality the left-hand side of $\mathrm{\eqref{key-est1.1.1}}$ is just \begin{equation*} \sup \sum\limits_{k=-\infty }^{\infty }\int_{\mathbb{R}^{n}}t_{k}(x)\sum_{j=-\infty }^{k+v}2^{(j-k)K}\mathcal{M}(f_{j})(x)|g_{k}(x)|dx=\sup \sum\limits_{k=-\infty }^{\infty }S_{k}, \end{equation*} where the supremum is taken over all sequences of functions $\{g_{k}\}\in L_{p^{\prime }}(\ell _{q^{\prime }})$ with \begin{equation} \big\|\{g_{k}\}|L_{p^{\prime }}(\ell _{q^{\prime }})\big\|\leqslant 1. \label{con-est1} \end{equation} We easily find that \begin{equation*} S_{k}=\sum_{j=-\infty }^{k+v}2^{(j-k)K}\int_{\mathbb{R}^{n}}\mathcal{M}(f_{j})(x)t_{k}(x)|g_{k}(x)|dx=\sum_{j=-\infty }^{k+v}2^{(j-k)K}D_{k,j} \end{equation*} for any $k\in \mathbb{Z}$.
As in Lemma \ref{key-estimate1} we find that $M_{Q}(f_{j})\rightarrow 0$ as $|Q|\rightarrow \infty $ for any $j\in \mathbb{Z}$. Therefore, we can apply Lemma \ref{CZ-lemma}. Let \begin{equation*} \Omega _{j}^{i}=\{x\in \mathbb{R}^{n}:\mathcal{M}(f_{j})(x)>4^{n}\lambda ^{i}\},\quad j,i\in \mathbb{Z} \end{equation*} with $\lambda >2^{n+1}$ and \begin{equation*} H_{j}^{i}=\{x\in \mathbb{R}^{n}:4^{n}\lambda ^{i}<\mathcal{M}(f_{j})(x)\leqslant 4^{n}\lambda ^{i+1}\},\quad j,i\in \mathbb{Z}. \end{equation*} Let $\{Q_{j}^{i,h}\}_{h}$ be the collection of maximal dyadic cubes as in Lemma \ref{CZ-lemma} with \begin{equation*} \Omega _{j}^{i}\subset \cup _{h}3Q_{j}^{i,h}. \end{equation*} We find that \begin{align*} D_{k,j}& =\sum_{i=-\infty }^{\infty }\int_{H_{j}^{i}}t_{k}(x)\mathcal{M}(f_{j})(x)|g_{k}(x)|dx \\ & \lesssim \sum_{i=-\infty }^{\infty }\lambda ^{i}\int_{\Omega _{j}^{i}}t_{k}(x)|g_{k}(x)|dx \\ & \lesssim \sum_{i=-\infty }^{\infty }\sum_{h=0}^{\infty }\lambda ^{i}\int_{3Q_{j}^{i,h}}t_{k}(x)|g_{k}(x)|dx \\ & \lesssim \sum_{i=-\infty }^{\infty }\sum_{h=0}^{\infty }\frac{1}{|Q_{j}^{i,h}|}\int_{Q_{j}^{i,h}}|f_{j}(x)|dx \int_{3Q_{j}^{i,h}}t_{k}(x)|g_{k}(x)|dx \end{align*} for any $j\leqslant k+v$. Notice that, by H\"{o}lder's inequality, we find that for all $j\leqslant k+v$, \begin{equation*} \frac{1}{|3Q_{j}^{i,h}|}\int_{3Q_{j}^{i,h}}t_{k}(x)|g_{k}(x)|dx\leqslant M_{3Q_{j}^{i,h},\delta ^{\prime }}(g_{k})M_{3Q_{j}^{i,h},\delta }(t_{k}), \end{equation*} with $\delta =p(1+\varepsilon )$ and $\varepsilon $ as in Theorem \ref{reverse Holder inequality}, which is possible since $t_{k}^{p}\in A_{p/\theta }(\mathbb{R}^{n})$ for any $k\in \mathbb{Z}$. We shall distinguish two cases. Let $j\in \mathbb{Z}$ be such that $j\leqslant k$.
We obtain by $\mathrm{\eqref{Asum2}}$ \begin{equation*} M_{3Q_{j}^{i,h},\delta }(t_{k})\lesssim M_{3Q_{j}^{i,h},p}(t_{k})\lesssim 2^{\alpha _{2}(k-j)}M_{3Q_{j}^{i,h},p}(t_{j}). \end{equation*} By H\"{o}lder's inequality, \begin{equation*} 1=\Big(\frac{1}{|3Q_{j}^{i,h}|}\int_{3Q_{j}^{i,h}}t_{j}^{-\eta }(x)t_{j}^{\eta }(x)dx\Big)^{1/\eta }\leqslant M_{3Q_{j}^{i,h},\varrho }(t_{j}^{-1})M_{3Q_{j}^{i,h},\tau }(t_{j}) \end{equation*} for any $\eta >0$ and any $\varrho ,\tau >0$ with $1/\eta =1/\varrho +1/\tau $. Taking any $0<\varrho <\sigma _{1}$ and any $0<\tau <q<\infty $ we obtain \begin{equation} 1\leqslant M_{3Q_{j}^{i,h},\sigma _{1}}(t_{j}^{-1})M_{3Q_{j}^{i,h},\tau }(t_{j}), \label{estimate2} \end{equation} which together with $\mathrm{\eqref{Asum1}}$ implies that \begin{equation*} M_{3Q_{j}^{i,h},\delta }(t_{k})\lesssim 2^{\alpha _{2}(k-j)}M_{3Q_{j}^{i,h},\tau }(t_{j}), \end{equation*} which further implies that \begin{align*} M_{3Q_{j}^{i,h},\delta }(t_{k})\int_{Q_{j}^{i,h}}|f_{j}(x)|dx& \lesssim 2^{\alpha _{2}(k-j)}M_{3Q_{j}^{i,h},\tau }(t_{j})\int_{3Q_{j}^{i,h}}|f_{j}(x)|dx \\ & \lesssim 2^{\alpha _{2}(k-j)}|Q_{j}^{i,h}|M_{3Q_{j}^{i,h},\tau }(t_{j}\mathcal{M}(f_{j})). \end{align*} Thus, \begin{equation*} \frac{1}{|3Q_{j}^{i,h}|}\int_{Q_{j}^{i,h}}|f_{j}(x)|dx \int_{3Q_{j}^{i,h}}t_{k}(x)|g_{k}(x)|dx \end{equation*} is bounded by \begin{equation*} c\text{ }2^{\alpha _{2}(k-j)}|Q_{j}^{i,h}|M_{3Q_{j}^{i,h},\delta ^{\prime }}(g_{k})M_{3Q_{j}^{i,h},\tau }(t_{j}\mathcal{M}(f_{j})), \end{equation*} where the positive constant $c$ is independent of $j,h$ and $k$. Let $j\in \mathbb{Z}$ be such that $k<j\leqslant k+v$. By $\mathrm{\eqref{estimate2}}$ and $\mathrm{\eqref{Asum1}}$ we obtain \begin{equation*} M_{3Q_{j}^{i,h},\delta }(t_{k})\lesssim M_{3Q_{j}^{i,h},p}(t_{k})\lesssim 2^{\alpha _{1}(k-j)}M_{3Q_{j}^{i,h},p}(t_{j}).
\end{equation*} Consequently, \begin{equation*} D_{k,j}\lesssim \Lambda _{k,j}\sum_{i=-\infty }^{\infty }\sum_{h=0}^{\infty }|Q_{j}^{i,h}|M_{3Q_{j}^{i,h},\delta ^{\prime }}(g_{k})M_{3Q_{j}^{i,h},\tau }(t_{j}\mathcal{M}(f_{j})) \end{equation*} for any $j\leqslant k+v$, where \begin{equation*} \Lambda _{k,j}=\left\{ \begin{array}{ccc} 2^{\alpha _{2}(k-j)}, & \text{if} & j\leqslant k, \\ 2^{\alpha _{1}(k-j)}, & \text{if} & k<j\leqslant k+v. \end{array} \right. \end{equation*} Since $|Q_{j}^{i,h}|\leqslant \beta |E_{j}^{i,h}|$, with $E_{j}^{i,h}=Q_{j}^{i,h}\backslash (Q_{j}^{i,h}\cap (\cup _{h}Q_{j}^{i+1,h}))$, and the sets $E_{j}^{i,h}$ are pairwise disjoint, we find that \begin{align*} D_{k,j}& \lesssim \Lambda _{k,j}\sum_{i=-\infty }^{\infty }\sum_{h=0}^{\infty }|E_{j}^{i,h}|M_{3Q_{j}^{i,h},\delta ^{\prime }}(g_{k})M_{3Q_{j}^{i,h},\tau }(t_{j}\mathcal{M}(f_{j})) \\ & =c\text{ }\Lambda _{k,j}\sum_{i=-\infty }^{\infty }\sum_{h=0}^{\infty }\int_{E_{j}^{i,h}}M_{3Q_{j}^{i,h},\delta ^{\prime }}(g_{k})M_{3Q_{j}^{i,h},\tau }(t_{j}\mathcal{M}(f_{j}))dx \\ & \lesssim \Lambda _{k,j}\sum_{i=-\infty }^{\infty }\sum_{h=0}^{\infty }\int_{E_{j}^{i,h}}\mathcal{M}_{\delta ^{\prime }}(g_{k})(x)\mathcal{M}_{\tau }(t_{j}\mathcal{M}(f_{j}))(x)dx \\ & \lesssim \Lambda _{k,j}\int_{\mathbb{R}^{n}}\mathcal{M}_{\delta ^{\prime }}(g_{k})(x)\mathcal{M}_{\tau }(t_{j}\mathcal{M}(f_{j}))(x)dx \end{align*} for any $j\leqslant k+v$.
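One way to see how the discrete convolution lemmas enter here: since $K>\alpha _{2}$, for $j\leqslant k$ the factor multiplying $D_{k,j}$ in $S_{k}$ exhibits geometric decay,
\begin{equation*}
2^{(j-k)K}\Lambda _{k,j}=2^{(j-k)K}2^{\alpha _{2}(k-j)}=\big(2^{\alpha _{2}-K}\big)^{k-j}=a^{k-j},\quad a:=2^{\alpha _{2}-K}\in (0,1),
\end{equation*}
which is the structure required in Lemma \ref{lq-inequality1}, while the finitely many remaining terms with $k<j\leqslant k+v$ are covered by Lemma \ref{lq-inequality1 copy(1)}.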
Since $K>\alpha _{2}$, applying Lemmas \ref{lq-inequality1} and \ref{lq-inequality1 copy(1)}, $\sum\limits_{k=-\infty }^{\infty }S_{k}$ can be estimated by \begin{align*} & c\Big\|\Big(\sum\limits_{k=-\infty }^{\infty }\big(\mathcal{M}_{\tau }(t_{k}\mathcal{M}(f_{k}))\big)^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|\Big\|\Big(\sum\limits_{k=-\infty }^{\infty }\big(\mathcal{M}_{\delta ^{\prime }}(g_{k})\big)^{q^{\prime }}\Big)^{1/q^{\prime }}|L_{p^{\prime }}(\mathbb{R}^{n})\Big\| \\ & \lesssim \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}|f_{k}|^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|, \end{align*} where we used the vector-valued maximal inequality of Fefferman and Stein $\mathrm{\eqref{Fe-St71}}$, Lemma \ref{key-estimate1} and $\mathrm{\eqref{con-est1}}$. \textit{Substep 1.2.} We consider the case $1<p<q<\infty $. Again by duality the left-hand side of $\mathrm{\eqref{key-est1.1.1}}$, raised to the power $\theta $, is just \begin{equation*} \sup \sum\limits_{k=-\infty }^{\infty }\int_{\mathbb{R}^{n}}t_{k}^{\theta }(x)\Big(\sum_{j=-\infty }^{k+v}2^{(j-k)K}\mathcal{M}(f_{j})(x)\Big)^{\theta }|g_{k}(x)|dx=\sup \sum\limits_{k=-\infty }^{\infty }S_{k}, \end{equation*} where the supremum is taken over all sequences of functions $\{g_{k}\}$ such that \begin{equation} \{g_{k}\}\in L_{(p/\theta )^{\prime }}(\ell _{(q/\theta )^{\prime }}),\quad \big\|\{g_{k}\}|L_{(p/\theta )^{\prime }}(\ell _{(q/\theta )^{\prime }})\big\|\leqslant 1. \label{con-est2} \end{equation} Notice that, by the Minkowski inequality, we obtain that \begin{align*} S_{k}& =\Big\|t_{k}\sum_{j=-\infty }^{k+v}2^{(j-k)K}\mathcal{M}(f_{j})|g_{k}|^{1/\theta }|L_{\theta }(\mathbb{R}^{n})\Big\|^{\theta } \\ & \leqslant \Big(\sum_{j=-\infty }^{k+v}2^{(j-k)K}\Big(\int_{\mathbb{R}^{n}}t_{k}^{\theta }(x)\big(\mathcal{M}(f_{j})(x)\big)^{\theta }|g_{k}(x)|dx\Big)^{1/\theta }\Big)^{\theta } \end{align*} for any $k\in \mathbb{Z}$.
Using Lemma \ref{FS-lemma}, we deduce that, for any $j\leqslant k+v$, \begin{equation*} \int_{\mathbb{R}^{n}}t_{k}^{\theta }(x)\big(\mathcal{M}(f_{j})(x)\big)^{\theta }|g_{k}(x)|dx\lesssim \int_{\mathbb{R}^{n}}|f_{j}(x)|^{\theta }\mathcal{M}(t_{k}^{\theta }g_{k})(x)dx=c\text{ }D_{k,j}, \end{equation*} where the positive constant $c$ is independent of $k$ and $j$. Let $Q$ be a cube. By H\"{o}lder's inequality, \begin{equation*} \frac{1}{|Q|}\int_{Q}t_{k}^{\theta }(x)|g_{k}(x)|dx\leqslant \frac{1}{|Q|}\big\|g_{k}|L_{\left( p/\theta \right) ^{\prime }}(Q)\big\|\big\|t_{k}^{\theta }|L_{p/\theta }(Q)\big\|\leqslant \frac{c}{|Q|}\big\|t_{k}|L_{p}(Q)\big\|^{\theta }, \end{equation*} where $c>0$ is independent of $k$. Since $\{t_{k}\}$ is a $p$-admissible sequence satisfying $\mathrm{\eqref{Asum1}}$ with $\sigma _{1}=\theta \left( p/\theta \right) ^{\prime }$, we find that \begin{equation*} \frac{1}{|Q|^{\frac{1}{\theta }}}\big\|t_{k}|L_{p}(Q)\big\|\leqslant C|Q|^{1/p+1/\sigma _{1}-1/\theta }\big\|t_{k}^{-1}|L_{\sigma _{1}}(Q)\big\|^{-1}=C\big\|t_{k}^{-1}|L_{\sigma _{1}}(Q)\big\|^{-1}, \end{equation*} where $C>0$ is independent of $Q$, $\theta $ and $k$. Since $t_{k}^{p}\in A_{p/\theta }(\mathbb{R}^{n})$, it follows from Lemma \ref{Ap-Property}/(ii) that \begin{equation*} t_{k}^{-\sigma _{1}}\in A_{(p/\theta )^{\prime }}(\mathbb{R}^{n}),\quad k\in \mathbb{Z}. \end{equation*} Hence $\big\|t_{k}^{-1}|L_{\sigma _{1}}(Q)\big\|^{-1}\rightarrow 0$ as $|Q|\rightarrow \infty $ for any $k\in \mathbb{Z}$. Therefore, we can apply Lemma \ref{CZ-lemma}.
Using the same arguments as in the proof of Lemma \ref{key-estimate1}, we get \begin{align*} D_{k,j}& \lesssim \sum_{i=-\infty }^{\infty }\sum_{h=0}^{\infty }\lambda ^{i}\int_{3Q_{k}^{i,h}}|f_{j}(x)|^{\theta }dx \\ & \lesssim \sum_{i=-\infty }^{\infty }\sum_{h=0}^{\infty }\frac{1}{|Q_{k}^{i,h}|}\int_{3Q_{k}^{i,h}}|f_{j}(x)|^{\theta }dx\int_{Q_{k}^{i,h}}t_{k}^{\theta }(x)|g_{k}(x)|dx, \end{align*} where we have used the inequality \begin{equation*} \lambda ^{i}\leqslant \frac{1}{|Q_{k}^{i,h}|}\int_{Q_{k}^{i,h}}t_{k}^{\theta }(x)|g_{k}(x)|dx\leqslant 2^{n}\lambda ^{i}. \end{equation*} On the other hand, by H\"{o}lder's inequality, we obtain \begin{equation*} \int_{3Q_{k}^{i,h}}|f_{j}(x)|^{\theta }dx\leqslant |Q_{k}^{i,h}|M_{3Q_{k}^{i,h},\nu ^{\prime }}(t_{j}^{-\theta })M_{3Q_{k}^{i,h},\nu }(t_{j}^{\theta }|f_{j}|^{\theta }) \end{equation*} with $\nu ^{\prime }=\left( p/\theta \right) ^{\prime }(1+\varepsilon )$ and $\varepsilon $ as in Theorem \ref{reverse Holder inequality}, since, again, $t_{k}^{-\sigma _{1}}\in A_{(p/\theta )^{\prime }}(\mathbb{R}^{n})$ for any $k\in \mathbb{Z}$. Therefore, \begin{equation*} M_{3Q_{k}^{i,h},\nu ^{\prime }}(t_{j}^{-\theta })=\big(M_{3Q_{k}^{i,h},\sigma _{1}(1+\varepsilon )}(t_{j}^{-1})\big)^{\theta }\lesssim \big(M_{3Q_{k}^{i,h},\sigma _{1}}(t_{j}^{-1})\big)^{\theta }. \end{equation*} As before, by H\"{o}lder's inequality, \begin{equation} 1=\Big(\frac{1}{|3Q_{k}^{i,h}|}\int_{3Q_{k}^{i,h}}t_{k}^{-\varrho }(x)t_{k}^{\varrho }(x)dx\Big)^{1/\varrho }\leqslant M_{3Q_{k}^{i,h},\theta }(t_{k}^{-1})M_{3Q_{k}^{i,h},p}(t_{k}) \label{estimate6} \end{equation} for any $\varrho >0$ with $1/\varrho =1/\theta +1/p$. Again, we distinguish two cases. Let $j\in \mathbb{Z}$ be such that $j\leqslant k$. From $\mathrm{\eqref{Asum2}}$ we obtain \begin{equation*} M_{3Q_{k}^{i,h},p}(t_{k})\lesssim 2^{\alpha _{2}(k-j)}M_{3Q_{k}^{i,h},p}(t_{j}),\quad j\leqslant k.
\end{equation*} Therefore, \begin{equation*} 1\lesssim 2^{\alpha _{2}(k-j)}M_{3Q_{k}^{i,h},\theta }(t_{k}^{-1})M_{3Q_{k}^{i,h},p}(t_{j}),\quad j\leq k. \end{equation*} Multiplying by $M_{3Q_{k}^{i,h},\sigma _{1}}(t_{j}^{-1})$ and using $\mathrm{ \eqref{Asum1}}$ we get \begin{equation*} M_{3Q_{k}^{i,h},\sigma _{1}}(t_{j}^{-1})\lesssim 2^{\alpha _{2}(k-j)}M_{3Q_{k}^{i,h},\theta }(t_{k}^{-1}). \end{equation*} Now, let $j\in \mathbb{Z}$ be such that $k<j\leq k+v$. From $\mathrm{ \eqref{estimate6}}$ and $\mathrm{\eqref{Asum1}}$ we obtain \begin{equation*} M_{3Q_{k}^{i,h},\sigma _{1}}(t_{j}^{-1})\lesssim 2^{\alpha _{1}(k-j)}M_{3Q_{k}^{i,h},\theta }(t_{k}^{-1}). \end{equation*} Hence \begin{equation*} |Q_{k}^{i,h}|M_{3Q_{k}^{i,h}}(|f_{j}|^{\theta })M_{Q_{k}^{i,h}}(t_{k}^{\theta }g_{k}) \end{equation*} can be estimated by \begin{align*} & c\Lambda _{k,j}^{\theta }|Q_{k}^{i,h}|M_{Q_{k}^{i,h}}(t_{k}^{\theta }g_{k})M_{3Q_{k}^{i,h},\nu }(t_{j}^{\theta }|f_{j}|^{\theta })M_{3Q_{k}^{i,h}}(t_{k}^{-\theta }) \\ & \lesssim \Lambda _{k,j}^{\theta }|Q_{k}^{i,h}|M_{3Q_{k}^{i,h}}(t_{k}^{-\theta }\mathcal{M}(t_{k}^{\theta }g_{k}))M_{3Q_{k}^{i,h},\nu }(t_{j}^{\theta }|f_{j}|^{\theta }). \end{align*} Consequently, \begin{equation*} D_{k,j}\lesssim \Lambda _{k,j}^{\theta }\sum_{i=-\infty }^{\infty }\sum_{h=0}^{\infty }M_{3Q_{k}^{i,h},\nu }(t_{j}^{\theta }|f_{j}|^{\theta })\int_{3Q_{k}^{i,h}}t_{k}^{-\theta }(x)\mathcal{M}(t_{k}^{\theta }g_{k})(x)dx \end{equation*} for any $j\leq k+v$. Since $|Q_{k}^{i,h}|\leq \beta |E_{k}^{i,h}|$, with $ E_{k}^{i,h}=Q_{k}^{i,h}\backslash (Q_{k}^{i,h}\cap (\cup _{h}Q_{k}^{i+1,h}))$, and since the sets of the family $E_{k}^{i,h}$ are pairwise disjoint, as before we find that \begin{equation*} D_{k,j}\lesssim \Lambda _{k,j}^{\theta }\int_{\mathbb{R}^{n}}\mathcal{M} (t_{k}^{-\theta }\mathcal{M}(t_{k}^{\theta }g_{k}))(x)\mathcal{M}_{\nu }(t_{j}^{\theta }|f_{j}|^{\theta })(x)dx.
\end{equation*} Since $K>\alpha _{2}$ and $\nu <p/\theta \leq q/\theta $, applying Lemma \ref{lq-inequality1} we obtain that $\sum\limits_{k=-\infty }^{\infty }S_{k}$ can be estimated by \begin{align*} & c\Big\|\Big(\sum\limits_{k=-\infty }^{\infty }\big(\mathcal{M}_{\nu }(t_{k}^{\theta }|f_{k}|^{\theta })\big)^{q/\theta }\Big)^{\theta /q}|L_{p/\theta }(\mathbb{R}^{n})\Big\| \\ & \times \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }\big(\mathcal{M} (t_{k}^{-\theta }\mathcal{M}(t_{k}^{\theta }g_{k}))\big)^{(q/\theta )^{\prime }}\Big)^{1/(q/\theta )^{\prime }}|L_{(p/\theta )^{\prime }}(\mathbb{R}^{n})\Big\| \\ & \lesssim \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}|f_{k}|^{q} \Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|^{\theta } \\ & \times \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }\big(t_{k}^{-\theta } \mathcal{M}(t_{k}^{\theta }g_{k})\big)^{(q/\theta )^{\prime }}\Big)^{1/(q/\theta )^{\prime }}|L_{(p/\theta )^{\prime }}(\mathbb{R}^{n})\Big\| \\ & \lesssim \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}|f_{k}|^{q} \Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|^{\theta }, \end{align*} where we used the vector-valued maximal inequality of Fefferman and Stein $ \mathrm{\eqref{Fe-St71}}$, Lemma \ref{key-estimate1} and $\mathrm{ \eqref{con-est2}}$. This proves the first part of the lemma. \textit{Step 2.} We now prove $\mathrm{\eqref{key-est1.1.2.}}$, dividing the argument into Substeps 2.1 and 2.2. \textit{Substep 2.1.} We consider the case $1<q\leq p<\infty $. We employ the same notation as in Substep 1.1. We obtain \begin{align*} D_{k,j}& \lesssim \sum_{i=-\infty }^{\infty }\sum_{h=0}^{\infty }\lambda ^{i}\int_{3Q_{j}^{i,h}}t_{k}(x)|g_{k}(x)|dx \\ & \lesssim \sum_{i=-\infty }^{\infty }\sum_{h=0}^{\infty }\frac{1}{ |Q_{j}^{i,h}|}\int_{Q_{j}^{i,h}}|f_{j}(x)|dx \int_{3Q_{j}^{i,h}}t_{k}(x)|g_{k}(x)|dx. \end{align*} Let $j\geq k+v$.
Recall that \begin{equation*} M_{3Q_{j}^{i,h},\delta }(t_{k})\leq M_{3Q_{j}^{i,h},p}(t_{k})\quad \text{and} \quad 1\leq M_{3Q_{j}^{i,h},\sigma _{1}}(t_{j}^{-1})M_{3Q_{j}^{i,h},\tau }(t_{j}). \end{equation*} Using $\mathrm{\eqref{Asum1}}$, we find \begin{equation*} M_{3Q_{j}^{i,h},\delta }(t_{k})\lesssim 2^{\alpha _{1}(k-j)}M_{3Q_{j}^{i,h},p}(t_{j}),\quad j\geq k. \end{equation*} Now, assume that $k+v\leq j<k$. From $\mathrm{\eqref{Asum2}}$, we get \begin{equation*} M_{3Q_{j}^{i,h},p}(t_{k})\lesssim 2^{\alpha _{2}(k-j)}M_{3Q_{j}^{i,h},p}(t_{j}). \end{equation*} Hence \begin{align*} M_{3Q_{j}^{i,h},\delta }(t_{k})\int_{Q_{j}^{i,h}}|f_{j}(x)|dx& \lesssim \digamma _{k,j}M_{3Q_{j}^{i,h},\tau }(t_{j})\int_{Q_{j}^{i,h}}|f_{j}(x)|dx \\ & \lesssim \digamma _{k,j}|Q_{j}^{i,h}|M_{3Q_{j}^{i,h},\tau }(t_{j}\mathcal{M }(f_{j})), \end{align*} where \begin{equation*} \digamma _{k,j}=\left\{ \begin{array}{ccc} 2^{\alpha _{1}(k-j)}, & \text{if} & j\geq k, \\ 2^{\alpha _{2}(k-j)}, & \text{if} & k+v\leq j<k. \end{array} \right. \end{equation*} Repeating the same arguments as in Substep 1.1, we obtain the desired estimate. \textit{Substep 2.2.} We show $\mathrm{\eqref{key-est1.1.2.}}$ under the assumption $1<p<q<\infty $. We employ the same notation as in Substep 1.2. From $\mathrm{\eqref{estimate6}}$ we get \begin{equation*} M_{3Q_{k}^{i,h},\sigma _{1}}(t_{j}^{-1})\lesssim 2^{\alpha _{1}(k-j)}M_{3Q_{j}^{i,h},\theta }(t_{k}^{-1}),\quad j\geq k. \end{equation*} We omit the remaining details since they are essentially similar to those of Substeps 2.1 and 1.2. The proof of the lemma is complete. \end{proof} \begin{rem} \label{r-estimates copy(1)}Let $i\in \mathbb{Z}$, $1<\theta \leq p<\infty $, $1<q<\infty $ and $\alpha =(\alpha _{1},\alpha _{2})\in \mathbb{R}^{2}$.
Let $\{t_{k}\}\in \dot{X}_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p/\theta \right) ^{\prime },\sigma _{2}\geq p)$. From Lemma \ref{key-estimate1.1} we easily obtain that \begin{equation*} \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }t_{k}^{q}\big(\mathcal{M} (f_{k+i})\big)^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|\lesssim \Big\|\Big( \sum\limits_{k=-\infty }^{\infty }t_{k}^{q}\left\vert f_{k}\right\vert ^{q} \Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\| \end{equation*} holds for all sequences of functions $\{t_{k}f_{k}\}\in L_{p}(\ell _{q})$, where the implicit constant depends on $i$. Indeed, we have \begin{equation*} \mathcal{M}(f_{k+i})\leq \sum\limits_{j=-\infty }^{k+i}2^{(j-k-i)M}\mathcal{M }(f_{j}),\quad M>\alpha _{2},k\in \mathbb{Z}. \end{equation*} Lemma \ref{key-estimate1.1} then yields the desired result. \end{rem} \section{The spaces $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$} In this section we present the Fourier-analytical definition of Triebel-Lizorkin spaces of variable smoothness and we prove their basic properties in analogy to the classical Triebel-Lizorkin spaces. \subsection{The $\varphi $-transform characterization} Select a pair of Schwartz functions $\varphi $ and $\psi $ satisfying \begin{equation} \text{supp}(\mathcal{F}(\varphi ))\cup \text{supp}(\mathcal{F}(\psi ))\subset \big\{\xi :1/2\leq |\xi |\leq 2\big\}, \label{Ass1} \end{equation} \begin{equation} |\mathcal{F}(\varphi )(\xi )|,|\mathcal{F}(\psi )(\xi )|\geq c\quad \text{if} \quad 3/5\leq |\xi |\leq 5/3 \label{Ass2} \end{equation} and \begin{equation} \sum_{k=-\infty }^{\infty }\overline{\mathcal{F}(\varphi )(2^{-k}\xi )} \mathcal{F}(\psi )(2^{-k}\xi )=1\quad \text{if}\quad \xi \neq 0, \label{Ass3} \end{equation} where $c>0$. Throughout the paper we put $\tilde{\varphi}(x)=\overline{ \varphi (-x)}$, $x\in \mathbb{R}^{n}$.
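For orientation, one standard way to produce such a pair is the following Littlewood-Paley normalization; this is only a sketch (the bump $\Phi $ below is an auxiliary function, not used elsewhere in the paper):

```latex
% A sketch of a construction satisfying (Ass1)-(Ass3): take a real-valued
% \Phi\in C_c^\infty(\mathbb{R}^n) with \operatorname{supp}\Phi\subset
% \{1/2\leq|\xi|\leq 2\} and \Phi\geq c>0 on \{3/5\leq|\xi|\leq 5/3\}.
% Then 0<\sum_k \Phi(2^{-k}\xi)^2<\infty for \xi\neq 0, the sum is
% invariant under \xi\mapsto 2\xi, and one may set
\mathcal{F}(\varphi )(\xi )=\mathcal{F}(\psi )(\xi )
  =\Phi (\xi )\Big(\sum_{k=-\infty }^{\infty }\Phi (2^{-k}\xi )^{2}\Big)^{-1/2},
% so that
\sum_{k=-\infty }^{\infty }\overline{\mathcal{F}(\varphi )(2^{-k}\xi )}\,
  \mathcal{F}(\psi )(2^{-k}\xi )=1,\qquad \xi \neq 0 .
```

The dilation invariance of the normalizing sum is what makes the telescoping identity exact, while the support and lower-bound properties of $\Phi $ are inherited by $\mathcal{F}(\varphi )=\mathcal{F}(\psi )$.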
Let $\varphi \in \mathcal{S}(\mathbb{R}^{n})$ be a function satisfying $\mathrm{\eqref{Ass1}}$-$\mathrm{\eqref{Ass2}}$. We recall that there exists a function $\psi \in \mathcal{S}(\mathbb{R}^{n})$ satisfying $\mathrm{\eqref{Ass1}}$-$\mathrm{\eqref{Ass3}}$, see \cite[Lemma (6.9)]{FrJaWe01}. \begin{rem} Let $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ denote the spaces under consideration. We would like to mention that the elements of the spaces $ \dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ are not distributions but equivalence classes of distributions. Observe that $\dot{F}_{p,p}(\mathbb{R} ^{n},\{t_{k}\})$ is just the space $\dot{B}_{p,p}(\mathbb{R}^{n},\{t_{k}\})$, where the space $\dot{B}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$, $0<p,q\leq \infty $, is defined to be the set of all $f\in \mathcal{S}_{\infty }^{\prime }( \mathbb{R}^{n})$\ such that \begin{equation*} \big\|f|\dot{B}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|=\Big( \sum\limits_{k=-\infty }^{\infty }\big\|t_{k}(\varphi _{k}\ast f)|L_{p}( \mathbb{R}^{n})\big\|^{q}\Big)^{1/q}<\infty , \end{equation*} which was studied in detail in \cite{D20}. \end{rem} Using the system $\{\varphi _{k}\}$ we can define the quasi-norms \begin{equation*} \big\|f|\dot{F}_{p,q}^{s}(\mathbb{R}^{n})\big\|=\big\|\Big( \sum\limits_{k=-\infty }^{\infty }2^{ksq}|\varphi _{k}\ast f|^{q}\Big) ^{1/q}|L_{p}(\mathbb{R}^{n})\big\| \end{equation*} for constants $s\in \mathbb{R}$, $0<p<\infty $ and $0<q\leq \infty $. The Triebel-Lizorkin space $\dot{F}_{p,q}^{s}(\mathbb{R}^{n})$ consists of all distributions $f\in \mathcal{S}_{\infty }^{\prime }(\mathbb{R}^{n})$ for which \begin{equation*} \big\|f|\dot{F}_{p,q}^{s}(\mathbb{R}^{n})\big\|<\infty . \end{equation*} It is well-known that these spaces do not depend on the choice of the system $\{\varphi _{k}\}$ (up to equivalence of quasi-norms).
Further details on the classical theory of these spaces, including the nonhomogeneous case, can be found in \cite{FJ86}, \cite{FJ90}, \cite{FrJaWe01}, \cite{SchTr87}, \cite{T1}, \cite{T2} and \cite{T3}. One recognizes immediately that if $\{t_{k}\}=\{2^{sk}\}$, $s\in \mathbb{R}$, then \begin{equation} \dot{F}_{p,q}(\mathbb{R}^{n},\{2^{sk}\})=\dot{F}_{p,q}^{s}(\mathbb{R} ^{n})\notag . \end{equation} Moreover, for $\{t_{k}\}=\{2^{sk}w\}$, $s\in \mathbb{R}$, with a weight $w$ we re-obtain the weighted Triebel-Lizorkin spaces; we refer, in particular, to the papers \cite{Bui82}, \cite{IzSa12}, \cite{Ry01}, \cite{Sch981} and \cite{Sch982} for a comprehensive treatment of the weighted spaces. A basic tool in the study of the above function spaces is the following Calder\'{o}n reproducing formula, see \cite[Lemma 2.1]{YY2}. \begin{lem} \label{DW-lemma1}Suppose that $\varphi $, $\psi \in \mathcal{S}(\mathbb{R} ^{n})$ satisfy $\mathrm{\eqref{Ass1}}$ through $\mathrm{\eqref{Ass3}}$. If $f\in \mathcal{S}_{\infty }^{\prime }(\mathbb{R} ^{n})$, then \begin{equation} f=\sum_{k=-\infty }^{\infty }2^{-kn}\sum_{m\in \mathbb{Z}^{n}}\widetilde{ \varphi }_{k}\ast f(2^{-k}m)\psi _{k}(\cdot -2^{-k}m). \label{proc2} \end{equation} \end{lem} Let $\varphi $, $\psi \in \mathcal{S}(\mathbb{R}^{n})$ satisfy $\mathrm{ \eqref{Ass1}}$ through $\mathrm{\eqref{Ass3}}$. Recall that the $\varphi $-transform $S_{\varphi }$ is defined by setting \begin{equation*} (S_{\varphi}f)_{k,m}=\langle f,\varphi _{k,m}\rangle, \end{equation*} where $\varphi _{k,m}(x)=2^{k \frac{n}{2}}\varphi (2^{k}x-m)$, $m\in \mathbb{Z}^{n}$ and $k\in \mathbb{Z}$. The inverse $\varphi $-transform $T_{\psi }$ is defined by \begin{equation*} T_{\psi }\lambda =\sum_{k=-\infty }^{\infty }\sum_{m\in \mathbb{Z} ^{n}}\lambda _{k,m}\psi _{k,m}, \end{equation*} where $\lambda =\{\lambda _{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z} ^{n}}\subset \mathbb{C}$, see \cite{FJ90}.
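As a sanity check for the model sequence $\{2^{sk}\}$, one can verify the admissibility conditions directly; this is a sketch, assuming only that $\mathrm{\eqref{Asum1}}$ and $\mathrm{\eqref{Asum2}}$ compare the averages $M_{Q,p}(t_{k})$ and $M_{Q,p}(t_{j})$ with factors $2^{\alpha _{i}(k-j)}$, in the form used throughout the proofs above:

```latex
% For t_k = 2^{sk} every average is constant in x: for any cube Q,
M_{Q,p}(t_{k})=\Big(\frac{1}{|Q|}\int_{Q}2^{skp}\,dx\Big)^{1/p}=2^{sk},
% and hence
M_{Q,p}(t_{k})=2^{s(k-j)}M_{Q,p}(t_{j}),\qquad j,k\in \mathbb{Z},
% so the comparisons behind (Asum1) and (Asum2) hold with
% \alpha_1=\alpha_2=s. Moreover t_k^{-\sigma_1}\equiv 2^{-sk\sigma_1} is a
% positive constant, hence trivially an A_{(p/\theta)'} weight.
```

The same computation with $t_{k}=2^{sk}w$ reduces the conditions to the corresponding properties of the single weight $w$, which is the weighted case mentioned above.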
Now we introduce the sequence spaces corresponding to $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$. \begin{defn} \label{sequence-space}Let $0<p<\infty $ and $0<q\leq \infty $. Let $ \{t_{k}\} $ be a $p$-admissible weight sequence. Then for all complex-valued sequences $\lambda =\{\lambda _{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z} ^{n}}\subset \mathbb{C}$ we define \begin{equation*} \dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})=\Big\{\lambda :\big\|\lambda |\dot{f} _{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|<\infty \Big\}, \end{equation*} where \begin{equation*} \big\|\lambda |\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|=\Big\|\Big( \sum_{k=-\infty }^{\infty }\sum\limits_{m\in \mathbb{Z} ^{n}}2^{knq/2}t_{k}^{q}|\lambda _{k,m}|^{q}\chi _{k,m}\Big)^{1/q}|L_{p}( \mathbb{R}^{n})\Big\|, \end{equation*} with the usual modifications if $q=\infty $. \end{defn} Allowing the smoothness $t_{k}$, $k\in \mathbb{Z}$, to vary from point to point raises extra difficulties in the study of these function spaces. By the following proposition, however, the problem can be reduced to the case of fixed smoothness, see \cite{D20.2}. \begin{prop} \label{Equi-norm1}Let $0<\theta \leq p<\infty $, $0<q<\infty $ and $0<\delta \leq 1$. Assume that $\{t_{k}\}$ satisfies $\mathrm{\eqref{Asum1}}$ with $ \sigma _{1}=\theta \left( \frac{p}{\theta }\right) ^{\prime }$ and $j=k$. Then \begin{align*} & \big\|\lambda |\dot{f}_{p,q,\delta }(\mathbb{R}^{n},\{t_{k}\})\big\|^{\ast }\\ & =\Big\|\Big(\sum_{k=-\infty }^{\infty }\sum\limits_{m\in \mathbb{Z} ^{n}}2^{knq(1/2+1/\delta p)}t_{k,m,\delta }^{q}|\lambda _{k,m}|^{q}\chi _{k,m}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\| \end{align*} is an equivalent quasi-norm in $\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$, where \begin{equation*} t_{k,m,\delta }=\big\|t_{k}|L_{\delta p}(Q_{k,m})\big\|,\quad k\in \mathbb{Z} ,m\in \mathbb{Z}^{n}.
\end{equation*} \end{prop} The following important properties of the sequence spaces will be required in what follows. \begin{lem} \label{Lamda-est}Let $0<\theta \leq p<\infty $ and $0<q<\infty $. Let $ \{t_{k}\}$ be a $p$-admissible weight sequence satisfying $\mathrm{ \eqref{Asum1}}$ with $\sigma _{1}=\theta \left( p/\theta \right) ^{\prime }$ and $j=k$. Let $k\in \mathbb{Z}$, $m\in \mathbb{Z}^{n}$ and $\lambda \in \dot{f} _{p,q}(\mathbb{R}^{n},\{t_{k}\})$. Then there exists $c>0$ independent of $ k $ and $m$ such that \begin{equation*} |\lambda _{k,m}|\leq c\text{ }2^{-kn/2}t_{k,m}^{-1}\big\|\lambda |\dot{f} _{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|. \end{equation*} \end{lem} \begin{proof} Let $\lambda \in \dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$, $k\in \mathbb{Z}$ and $m\in \mathbb{Z}^{n}$. Since $\{t_{k}\}$ is a $p$-admissible sequence satisfying $\mathrm{\eqref{Asum1}}$ with $\sigma _{1}=\theta \left( p/\theta \right) ^{\prime }$, we get by H\"{o}lder's inequality \begin{align*} |\lambda _{k,m}|& =\Big(\frac{1}{|Q_{k,m}|}\int_{Q_{k,m}}|\lambda _{k,m}|^{\theta }dy\Big)^{1/\theta } \\ & \leq M_{Q_{k,m},p}(\lambda _{k,m}t_{k})M_{Q_{k,m},\sigma _{1}}(t_{k}^{-1}) \\ & \leq c\text{ }2^{-\frac{kn}{2}}t_{k,m}^{-1}\big\|\lambda |\dot{f}_{p,q}( \mathbb{R}^{n},\{t_{k}\})\big\|, \end{align*} where $c>0$ is independent of $k\in \mathbb{Z}$ and $m\in \mathbb{Z}^{n}$. \end{proof} The following lemma is a slight variant of a result in \cite{D20}. For the convenience of the reader, we give some details. \begin{lem} \label{Inv-phi-trans}Let $\alpha =(\alpha _{1},\alpha _{2})\in \mathbb{R}^{2}$, $0<\theta \leq p<\infty $ and $0<q\leq \infty $. Let $\{t_{k}\}\in \dot{X }_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p/\theta \right) ^{\prime },\sigma _{2}\geq p)$.
Let $\psi \in \mathcal{S}(\mathbb{R}^{n})$ satisfy $\mathrm{\eqref{Ass1}}$ and $\mathrm{\eqref{Ass2}}$. Then for all $\lambda \in \dot{f}_{p,q}( \mathbb{R}^{n},\{t_{k}\})$, \begin{equation*} T_{\psi }\lambda =\sum_{k=-\infty }^{\infty }\sum_{m\in \mathbb{Z} ^{n}}\lambda _{k,m}\psi _{k,m} \end{equation*} converges in $\mathcal{S}_{\infty }^{\prime }(\mathbb{R}^{n})$; moreover, $ T_{\psi }:\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\rightarrow \mathcal{S} _{\infty }^{\prime }(\mathbb{R}^{n})$ is continuous. \end{lem} \begin{proof} Let $\lambda \in \dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ and $\varphi \in \mathcal{S}_{\infty }(\mathbb{R}^{n})$. We split \begin{equation*} \sum_{k=-\infty }^{\infty }\sum_{m\in \mathbb{Z}^{n}}|\lambda _{k,m}||\langle \psi _{k,m},\varphi \rangle |=I_{1}+I_{2}, \end{equation*} where \begin{equation*} I_{1}=\sum_{k=-\infty }^{0}\sum_{m\in \mathbb{Z}^{n}}|\lambda _{k,m}||\langle \psi _{k,m},\varphi \rangle |\quad \text{and}\quad I_{2}=\sum_{k=1}^{\infty }\sum_{m\in \mathbb{Z}^{n}}|\lambda _{k,m}||\langle \psi _{k,m},\varphi \rangle |. \end{equation*} It suffices to show that both $I_{1}$ and $I_{2}$ are dominated by \begin{equation*} c\big\|\lambda |\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|. \end{equation*} \textit{Estimate of }$I_{1}$. Let us recall the following estimate, see (3.18) in \cite{M07}. For any $L>0$, there exists $M\in \mathbb{N}$ such that for all $\varphi $, $\psi \in \mathcal{S}_{\infty }( \mathbb{R}^{n})$, $i,k\in \mathbb{Z}$ and $m,h\in \mathbb{Z}^{n}$, \begin{align*} &\left\vert \langle \varphi _{k,m},\psi _{i,h}\rangle \right\vert \\ & \lesssim \big\|\varphi \big\|_{\mathcal{S}_{M+1}}\big\|\psi \big\|_{\mathcal{S}_{M+1}} \Big(1+\frac{|2^{-k}m-2^{-i}h|^{n}}{\max (2^{-kn},2^{-in})}\Big)^{-L}\min \left( 2^{(i-k)nL},2^{(k-i)nL}\right) .
\end{align*} Therefore, \begin{equation*} \left\vert \langle \psi _{k,m},\varphi \rangle \right\vert \lesssim \big\| \varphi \big\|_{\mathcal{S}_{M+1}}\big\|\psi \big\|_{\mathcal{S}_{M+1}}\Big( 1+\frac{|2^{-k}m|^{n}}{\max (1,2^{-kn})}\Big)^{-L}2^{-|k|nL}. \end{equation*} Our estimate employs partially some decomposition techniques already used in \cite{FJ90} and \cite{Ky03}. For each $j\in \mathbb{N}$ we define \begin{equation*} \Omega _{j}=\{m\in \mathbb{Z}^{n}:2^{j-1}<|m|\leq 2^{j}\}\quad \text{and} \quad \Omega _{0}=\{m\in \mathbb{Z}^{n}:|m|\leq 1\}. \end{equation*} Thus, \begin{align*} I_{1}& \lesssim \sum_{k=-\infty }^{0}2^{knL}\sum_{m\in \mathbb{Z}^{n}}\frac{ |\lambda _{k,m}|}{\big(1+|m|\big)^{nL}} \\ & =c\sum_{k=-\infty }^{0}2^{knL}\sum\limits_{j=0}^{\infty }\sum\limits_{m\in \Omega _{j}}\frac{|\lambda _{k,m}|}{\big(1+|m|\big)^{nL}} \\ & \lesssim \sum_{k=-\infty }^{0}2^{knL}\sum\limits_{j=0}^{\infty }2^{-nLj}\sum\limits_{m\in \Omega _{j}}|\lambda _{k,m}|. \end{align*} Let $0<\varrho <\min (1,\theta )$ be such that $1/\varrho =1/\tau +1/\sigma _{1}$\ with $0<\tau <\min \big(\frac{1}{1-1/\sigma _{1}},p\big)$.\ We have \begin{align*} &I_{1} \lesssim \sum_{k=-\infty }^{0}\sum\limits_{j=0}^{\infty }2^{-nL(j-k)}\Big(\sum\limits_{m\in \Omega _{j}}|\lambda _{k,m}|^{\varrho }\Big) ^{1/\varrho } \\ & =c\sum_{k=-\infty }^{0}\sum\limits_{j=0}^{\infty }2^{(1/\varrho -L)nj+knL}\Big(2^{(k-j)n}\int_{\cup _{z\in \Omega _{j}}Q_{k,z}}\sum\limits_{m\in \Omega _{j}}|\lambda _{k,m}|^{\varrho }\chi _{k,m}(y)dy\Big)^{1/\varrho }. \end{align*} Let $y\in \cup _{z\in \Omega _{j}}Q_{k,z}$ and $x\in Q_{0,0}$. Then $y\in Q_{k,z}$ for some $z\in \Omega _{j}$ and $2^{j-1}<|z|\leq 2^{j}$.
From this it follows that \begin{align*} \left\vert y-x\right\vert & \leq |y-2^{-k}z|+|x-2^{-k}z| \\ & \leq \sqrt{n}\text{ }2^{-k}+\left\vert x\right\vert +2^{-k}\left\vert z\right\vert \\ & \leq 2^{j-k+\delta _{n}},\quad \delta _{n}\in \mathbb{N}, \end{align*} which implies that $y$ is located in the ball $B(x,2^{j-k+\delta _{n}})$. In addition, from the fact that \begin{equation*} \left\vert y\right\vert \leq \left\vert y-x\right\vert +\left\vert x\right\vert \leq 2^{j-k+\delta _{n}}+1\leq 2^{j-k+c_{n}},\quad c_{n}\in \mathbb{N}, \end{equation*} we have that $y$ is located in the ball $B(0,2^{j-k+c_{n}})$. Therefore, by H{\"{o}}lder's inequality \begin{align*} & \Big(2^{(k-j)n}\int_{\cup _{z\in \Omega _{j}}Q_{k,z}}\sum\limits_{m\in \Omega _{j}}|\lambda _{k,m}|^{\varrho }\chi _{k,m}(y)dy\Big)^{1/\varrho } \\ & \leq \Big(2^{(k-j)n}\int_{B(x,2^{j-k+c_{n}})}\sum\limits_{m\in \Omega _{j}}|\lambda _{k,m}|^{\tau }t_{k}^{\tau }\chi _{k,m}(y)dy\Big)^{1/\tau }M_{B(0,2^{j-k+c_{n}}),\sigma _{1}}(t_{k}^{-1}) \\ & \lesssim \mathcal{M}_{\tau }\big(\sum\limits_{m\in \mathbb{Z} ^{n}}t_{k}\left\vert \lambda _{k,m}\right\vert \chi _{k,m}\big) (x)M_{B(0,2^{j-k+c_{n}}),\sigma _{1}}(t_{k}^{-1}). \end{align*} Since $t_{k}^{-\sigma _{1}}\in A_{(p/\theta )^{\prime }}(\mathbb{R}^{n})$, $ k\in \mathbb{Z}$, by Lemma \ref{Ap-Property} (iii), $\mathrm{\eqref{Asum1}}$ and $\mathrm{\eqref{Asum2}}$ we obtain \begin{align*} M_{B(0,2^{j-k+c_{n}}),\sigma _{1}}(t_{k}^{-1})& \lesssim 2^{(j-k)\frac{n}{p} }M_{B(0,1),\sigma _{1}}(t_{k}^{-1}) \\ & \lesssim 2^{(j-k)\frac{n}{p}}\left( M_{B(0,1),p}(t_{k})\right) ^{-1} \\ & \lesssim 2^{(j-k)\frac{n}{p}-k\alpha _{2}}\left( M_{B(0,1),\sigma _{2}}(t_{0})\right) ^{-1} \end{align*} for any $k\leq 0$ and any $j\in \mathbb{N}_{0}$.
Hence, for any $L$ large enough, \begin{equation*} I_{1}\lesssim \sum_{k=-\infty }^{0}2^{k(nL-\alpha _{2}-n/p)}\mathcal{M} _{\tau }\big(\sum\limits_{m\in \mathbb{Z}^{n}}t_{k}|\lambda _{k,m}|\chi _{k,m}\big)(x),\quad x\in Q_{0,0}. \end{equation*} The last term is bounded in the $L_{p}(Q_{0,0})$-quasi-norm by $c\big\| \lambda |\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|$ with the help of Theorem \ref{Maximal}. \textit{Estimate of }$I_{2}$. We have \begin{equation*} \left\vert \langle \psi _{k,m},\varphi \rangle \right\vert \lesssim 2^{-knL} \big\|\varphi \big\|_{\mathcal{S}_{M+1}}\big\|\psi \big\|_{\mathcal{S}_{M+1}} \big(1+2^{-kn}|m|^{n}\big)^{-L},\quad k\geq 1. \end{equation*} For each $j,k\in \mathbb{N}$, define \begin{equation*} \Omega _{j,k}=\{m\in \mathbb{Z}^{n}:2^{j+k-1}<|m|\leq 2^{j+k}\}\quad \text{and}\quad \Omega _{0,k}=\{m\in \mathbb{Z}^{n}:|m|\leq 2^{k}\}. \end{equation*} Then we find \begin{align*} &I_{2} \lesssim \sum_{k=1}^{\infty }2^{-knL}\sum_{m\in \mathbb{Z}^{n}}\frac{ |\lambda _{k,m}|}{\big(1+2^{-k}|m|\big)^{nL}} \\ & =c\sum_{k=1}^{\infty }2^{-knL}\sum\limits_{j=0}^{\infty }\sum\limits_{m\in \Omega _{j,k}}\frac{|\lambda _{k,m}|}{\big(1+2^{-k}|m|\big)^{nL}} \\ & \leq c\sum_{k=1}^{\infty }2^{-knL}\sum\limits_{j=0}^{\infty }2^{-nLj}\sum\limits_{m\in \Omega _{j,k}}|\lambda _{k,m}|. \end{align*} Let $0<\varrho <\min (1,\theta )$ be such that $1/\varrho =1/\tau +1/\sigma _{1}$ with $0<\tau <p$.
Using the embedding $\ell _{\varrho }\hookrightarrow \ell _{1}$ we find that $I_{2}$ does not exceed \begin{equation*} c \sum_{k=1}^{\infty }2^{-knL}\sum\limits_{j=0}^{\infty }2^{-nLj}\big(\sum\limits_{m\in \Omega _{j,k}}|\lambda _{k,m}|^{\varrho } \big)^{1/\varrho }, \end{equation*} which is just the term \begin{equation*} c\sum_{k=1}^{\infty }2^{-knL}\sum\limits_{j=0}^{\infty }2^{(n/\varrho -nL)j}\Big(2^{(k-j)n}\int_{\cup _{z\in \Omega _{j,k}}Q_{k,z}}\sum \limits_{m\in \Omega _{j,k}}|\lambda _{k,m}|^{\varrho }\chi _{k,m}(y)dy \Big)^{1/\varrho }. \end{equation*} Let $y\in \cup _{z\in \Omega _{j,k}}Q_{k,z}$ and $x\in Q_{0,0}$. Then $ y\in Q_{k,z}$ for some $z\in \Omega _{j,k}$ and $2^{j-1}<2^{-k}|z|\leq 2^{j}$. From this it follows that \begin{equation*} |y-x|\leq |y-2^{-k}z|+|x-2^{-k}z|\leq \sqrt{n}\text{ }2^{-k}+|x|+2^{-k}|z| \leq 2^{j+\delta _{n}},\quad \delta _{n}\in \mathbb{N}, \end{equation*} which implies that $y$ is located in the ball $B\left( x,2^{j+\delta _{n}}\right) $. In addition, from the fact that \begin{equation*} \left\vert y\right\vert \leq \left\vert y-x\right\vert +\left\vert x\right\vert \leq 2^{j+\delta _{n}}+1\leq 2^{j+c_{n}},\quad c_{n}\in \mathbb{N}, \end{equation*} we have that $y$ is located in the ball $B(0,2^{j+c_{n}})$. Therefore, \begin{align*} & \Big(2^{(k-j)n}\int_{\cup _{z\in \Omega _{j,k}}Q_{k,z}}\sum\limits_{m\in \Omega _{j,k}}|\lambda _{k,m}|^{\varrho }\chi _{k,m}(y)dy\Big)^{1/\varrho } \\ & \leq 2^{kn/\varrho }\Big(2^{-jn}\int_{B(x,2^{j+\delta _{n}})}\sum\limits_{m\in \Omega _{j,k}}|\lambda _{k,m}|^{\tau }t_{k}^{\tau }\chi _{k,m}(y)dy\Big)^{1/\tau }M_{B(0,2^{j+c_{n}}),\sigma _{1}}(t_{k}^{-1}) \\ & \lesssim 2^{kn/\varrho }\mathcal{M}_{\tau }\big(\sum\limits_{m\in \mathbb{Z }^{n}}t_{k}|\lambda _{k,m}|\chi _{k,m}\big)(x)M_{B(0,2^{j+c_{n}}),\sigma _{1}}(t_{k}^{-1}).
\end{align*} By $\mathrm{\eqref{Asum1}}$ and Lemma \ref{Ap-Property} (vi) we obtain \begin{align*} & M_{B(0,2^{j+c_{n}}),\sigma _{1}}(t_{k}^{-1})\\ &\lesssim 2^{-k\alpha _{1}}\big( M_{B(0,2^{j+c_{n}}),p}(t_{0})\big)^{-1}\\ &\lesssim 2^{j(n/p-n\delta /p)-k\alpha _{1}}\big(M_{B(0,1),p}(t_{0})\big)^{-1}. \end{align*} Therefore \begin{equation} I_{2}\lesssim \sum_{k=1}^{\infty }2^{-k(nL-n/\varrho +\alpha _{1})}\mathcal{M }_{\tau }\big(t_{k}\sum\limits_{m\in \mathbb{Z}^{n}}\lambda _{k,m}\chi _{k,m} \big)(x),\quad x\in Q_{0,0} \label{I-two} \end{equation} for any $L$ large enough. Taking the $L_{p}(Q_{0,0})$-quasi-norm on both sides of $\mathrm{\eqref{I-two}}$ and using Theorem \ref{Maximal}, we obtain \begin{equation*} I_{2}\lesssim {\big\|}\lambda |\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\}){\big\| }. \end{equation*} The proof is complete. \end{proof} For a sequence $\lambda =\{\lambda _{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z} ^{n}}\subset \mathbb{C}$, $0<r\leq \infty $ and a fixed $d>0$, set \begin{equation*} \lambda _{k,m,r,d}^{\ast }=\Big(\sum_{h\in \mathbb{Z}^{n}}\frac{|\lambda _{k,h}|^{r}}{(1+2^{k}|2^{-k}h-2^{-k}m|)^{d}}\Big)^{1/r} \end{equation*} and $\lambda _{r,d}^{\ast }:=\{\lambda _{k,m,r,d}^{\ast }\}_{k\in \mathbb{Z} ,m\in \mathbb{Z}^{n}}\subset \mathbb{C}$. Observe that $2^{k}|2^{-k}h-2^{-k}m|=|h-m|$, so that $\lambda _{k,m,r,d}^{\ast }=\big(\sum_{h\in \mathbb{Z}^{n}}|\lambda _{k,h}|^{r}(1+|h-m|)^{-d}\big)^{1/r}$. \begin{lem} \label{lamda-equi}Let $\alpha =(\alpha _{1},\alpha _{2})\in \mathbb{R}^{2}$, $0<\theta \leq p<\infty $, $0<q<\infty $, $\gamma \in \mathbb{Z}$ and $d>n$. Let $\{t_{k}\}$ be a $p$-admissible weight sequence satisfying $\mathrm{ \eqref{Asum1}}$ with $\sigma _{1}=\theta \left( p/\theta \right) ^{\prime }$ and $\alpha _{1}\in \mathbb{R}$. Then \begin{equation} \big\|\lambda _{\min(p,q),d}^{\ast }|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k-\gamma }\}) \big\|\approx \big\|\lambda |\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k-\gamma }\}) \big\|.
\label{First} \end{equation} In addition, if $\{t_{k}\}$ satisfies $\mathrm{\eqref{Asum2}}$ with $\sigma _{2}\geq p$ and $\alpha _{2}\in \mathbb{R}$, then \begin{equation} \big\|\lambda _{\min(p,q),d}^{\ast }|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k-\gamma }\}) \big\|\lesssim \big\|\lambda |\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|. \label{Second} \end{equation} \end{lem} \begin{proof} The proof is similar to that given in {\cite{D20}}. First we prove $\mathrm{ \eqref{First}}$. Obviously, \begin{equation*} \big\|\lambda |\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k-\gamma }\})\big\|\leq \big\|\lambda _{\min(p,q),d}^{\ast }|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k-\gamma }\}) \big\|. \end{equation*} Let $n\min(p,q)/d<a<\min(p,q)$, $j\in \mathbb{N}$ and $m\in \mathbb{Z}^{n}$. Define \begin{equation*} \Omega _{j,m}=\{h\in \mathbb{Z}^{n}:2^{j-1}<|h-m|\leq 2^{j}\}\quad \text{and}\quad \Omega _{0,m}=\{h\in \mathbb{Z}^{n}:|h-m|\leq 1\}. \end{equation*} Then \begin{align*} \sum_{h\in \mathbb{Z}^{n}}\frac{|\lambda _{k,h}|^{\min(p,q)}}{\big(1+|h-m|\big)^{d}} & =\sum\limits_{j=0}^{\infty }\sum\limits_{h\in \Omega _{j,m}}\frac{ |\lambda _{k,h}|^{\min(p,q)}}{\big(1+|h-m|\big)^{d}} \\ & \lesssim \sum\limits_{j=0}^{\infty }2^{-dj}\sum\limits_{h\in \Omega _{j,m}}|\lambda _{k,h}|^{\min(p,q)} \\ & \lesssim \sum\limits_{j=0}^{\infty }2^{-dj}\Big(\sum\limits_{h\in \Omega _{j,m}}|\lambda _{k,h}|^{a}\Big)^{\min(p,q)/a}. \end{align*} The last expression can be rewritten as \begin{equation} c\sum\limits_{j=0}^{\infty }2^{(n\min(p,q)/a-d)j}\Big(2^{(k-j)n}\int_{\cup _{z\in \Omega _{j,m}}Q_{k,z}}\sum\limits_{h\in \Omega _{j,m}}|\lambda _{k,h}|^{a}\chi _{k,h}(y)dy\Big)^{\min(p,q)/a}. \label{estimate-lamda} \end{equation} Let $y\in \cup _{z\in \Omega _{j,m}}Q_{k,z}$ and $x\in Q_{k,m}$. Then $y\in Q_{k,z}$ for some $z\in \Omega _{j,m}$, which implies that $2^{j-1}<|z-m|\leq 2^{j}$.
From this it follows that \begin{align*} \left\vert y-x\right\vert & \leq \left\vert y-2^{-k}z\right\vert +\left\vert x-2^{-k}z\right\vert \\ & \leq \sqrt{n}\text{ }2^{-k}+\left\vert x-2^{-k}m\right\vert +2^{-k}\left\vert z-m\right\vert \\ & \leq 2^{j-k+\delta _{n}},\quad \delta _{n}\in \mathbb{N}, \end{align*} which implies that $y$ is located in the ball $B(x,2^{j-k+\delta _{n}})$. Therefore, $\mathrm{\eqref{estimate-lamda}}$ can be estimated by \begin{equation*} c\mathcal{M}_{a}\big(\sum\limits_{h\in \mathbb{Z}^{n}}\lambda _{k,h}\chi _{k,h}\big)(x), \end{equation*} where the positive constant $c$ is independent of $k$. Consequently, \begin{equation} \big\|\lambda _{\min(p,q),d}^{\ast }|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k-\gamma }\}) \big\| \label{estimate-lamda1} \end{equation} does not exceed \begin{equation*} c\Big\|\Big(\sum_{k=-\infty }^{\infty }2^{knq/2}\big(t_{k-\gamma }\mathcal{M} _{a}\big(\sum\limits_{h\in \mathbb{Z}^{n}}\lambda _{k,h}\chi _{k,h}\big)\big) ^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|. \end{equation*} Applying Lemma \ref{key-estimate1}, we estimate \eqref{estimate-lamda1} by \begin{equation*} c\Big\|\Big(\sum_{k=-\infty }^{\infty }2^{knq/2}\sum\limits_{h\in \mathbb{Z} ^{n}}|\lambda _{k,h}|^{q}\chi _{k,h}\Big)^{1/q}|L_{p}(\mathbb{R} ^{n},t_{k-\gamma })\Big\|=c\big\|\lambda |\dot{f}_{p,q}(\mathbb{R} ^{n},\{t_{k-\gamma }\})\big\|. \end{equation*} To prove $\mathrm{\eqref{Second}}$ we use again Lemma \ref{key-estimate1} combined with Remark \ref{r-estimates copy(1)}. \end{proof} We now state the $\varphi $-transform characterization in the sense of Frazier and Jawerth, which will play an important role in the rest of the paper. \begin{thm} \label{phi-tran}Let $\alpha =(\alpha _{1},\alpha _{2})\in \mathbb{R}^{2}$, $0<\theta \leq p<\infty $ and $0<q<\infty $.
Let $\{t_{k}\}\in \dot{X}_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p/\theta \right) ^{\prime },\sigma _{2}\geq p)$. Let $\varphi $, $\psi \in \mathcal{S}(\mathbb{R}^{n})$ satisfy $\mathrm{\eqref{Ass1}}$ through $\mathrm{\eqref{Ass3}}$. The operators \begin{equation*} S_{\varphi }:\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\rightarrow \dot{f} _{p,q}(\mathbb{R}^{n},\{t_{k}\}) \end{equation*} and \begin{equation*} T_{\psi }:\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\rightarrow \dot{F}_{p,q}( \mathbb{R}^{n},\{t_{k}\}) \end{equation*} are bounded. Furthermore, $T_{\psi }\circ S_{\varphi }$ is the identity on $ \dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$. \end{thm} \begin{proof} The proof is a straightforward adaptation of \cite[Theorem 2.2]{FJ90} and \cite{D20}. For any $f\in \mathcal{S}_{\infty }^{\prime }(\mathbb{R}^{n})$ we put $\sup (f):=\{\sup_{k,m}(f)\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}$, where \begin{equation*} \sup_{k,m}(f)=2^{-kn/2}\sup_{y\in Q_{k,m}}|\widetilde{\varphi _{k}}\ast f(y)|,\quad k\in \mathbb{Z},m\in \mathbb{Z}^{n}. \end{equation*} For any $\gamma \in \mathbb{N}_{0}$, we define the sequence $ \inf_{\gamma }(f)=\{\inf_{k,m,\gamma }(f)\}_{k\in \mathbb{Z},m\in \mathbb{Z} ^{n}}$ by setting \begin{equation*} \text{inf}_{k,m,\gamma }(f)=2^{-kn/2}\max \{\inf_{y\in \tilde{Q}}|\widetilde{ \varphi _{k}}\ast f(y)|:\tilde{Q}\subset Q_{k,m},l(\tilde{Q})=2^{-k-\gamma }\}, \end{equation*} where $k\in \mathbb{Z}$, $m\in \mathbb{Z}^{n}$ and $\widetilde{\varphi _{k}}=2^{kn}\overline{\varphi (-2^{k}\cdot )}$, $k\in \mathbb{Z}$. \textit{Step 1. }{In this step we prove that \begin{equation*} \big\|\text{inf}_{\gamma }(f)|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\| \lesssim \big\|f|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|.
\end{equation*} }Define a sequence $\lambda =\{\lambda _{j,h}\}_{j{\in \mathbb{Z},h\in \mathbb{Z}^{n}}}$ by \begin{equation*} \lambda _{j,h}=2^{-jn/2}\inf_{y\in Q_{j,h}}|\widetilde{\varphi _{j-\gamma }} \ast f(y)|,\quad j{\in \mathbb{Z},h\in \mathbb{Z}^{n}.} \end{equation*} Then for all $0<r<\infty $, any $k{\in \mathbb{Z},m\in \mathbb{Z}^{n}}$ and a fixed $\lambda >n$, we have \begin{equation*} \text{inf}_{k,m,\gamma }(f)\chi _{k,m}\leqslantslantsssim 2^{\gamma (d/r+n/2)}\sum_{h\in {\mathbb{Z}^{n},Q}_{k+\gamma ,h}\subset Q_{k,m}}\lambda _{k+\gamma ,h,r,d}^{\ast }\chi _{k+\gamma ,h}. \end{equation*} Picking $r=\min(p,q)$, we obtain \begin{align*} & \big\|\text{inf}_{\gamma }(f)|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\| \\ & \leqslantslantsssim 2^{\gamma (d/r+n/2)}\Big\|\Big(\sum_{k=-\infty }^{\infty }2^{knq/2}\sum\limits_{h\in \mathbb{Z}^{n}}(t_{k}\lambda _{k+\gamma ,h,r,d}^{\ast })^{q}\chi _{k+\gamma ,h}\Big)^{1/q}|L_{p}(\mathbb{R}^{n}) \Big\| \\ & =c2^{\gamma d/r}\Big\|\Big(\sum_{k=-\infty }^{\infty }2^{knq/2}\sum\limits_{h\in \mathbb{Z}^{n}}(t_{k-\gamma }\lambda _{k,h,r,d}^{\ast })^{q}\chi _{k,h}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|. \end{align*} We apply Lemma {\ref{lamda-equi} }to estimate the last expression by \begin{align*} & c2^{\gamma d/r}\Big\|\Big(\sum_{k=-\infty }^{\infty }2^{knq/2}\sum\limits_{h\in \mathbb{Z}^{n}}(t_{k-\gamma }\lambda _{k,h})^{q}\chi _{k,h}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\| \\ & \leqslantslantsssim \Big\|\Big(\sum_{k=-\infty }^{\infty }t_{k}^{q}\big|(\widetilde{ \varphi _{k}}\ast f)\big|^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\| \\ & \leqslantslantsssim \big\|f|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|. \end{align*} \textit{Step 2.} {We will prove that } \begin{equation} \big\|\text{inf}_{\gamma }(f)|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\| \approx \big\|f|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|\approx \big\| \sup (f)|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|. 
\label{step21} \end{equation} Applying Lemma A.4 of \cite{FJ90}, see also Lemma 8.3 of \cite{M07}, to the function $(\widetilde{\varphi _{k}}\ast f)(2^{-j}x)$ we obtain \begin{equation*} \text{inf}_{\gamma }(f)_{\min(p,q),d}^{\ast }\approx \sup (f)_{\min(p,q),d}^{\ast }. \end{equation*} Hence, for $\gamma >0$ sufficiently large, we obtain by applying Lemma \ref{lamda-equi} \begin{equation*} \big\|\text{inf}_{\gamma }(f)_{\min(p,q),d}^{\ast }|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|\approx \big\|\text{inf}_{\gamma }(f)|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\| \end{equation*} and \begin{equation*} \big\|\sup (f)_{\min(p,q),d}^{\ast }|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|\approx \big\|\sup (f)|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|. \end{equation*} Therefore, \begin{equation} \big\|\text{inf}_{\gamma }(f)|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|\approx \big\|\sup (f)|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|. \label{sup-inf} \end{equation} From the definition of the spaces $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ it follows that \begin{equation*} \big\|f|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|\lesssim \big\|\sup (f)|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|. \end{equation*} Consequently $\mathrm{\eqref{sup-inf}}$ and Step 1 yield $\mathrm{\eqref{step21}}$. \textit{Step 3}. In this step we prove the boundedness of $S_{\varphi }$ and $T_{\psi }$. We have \begin{equation*} |(S_{\varphi }f)_{k,m}|=|\langle f,\varphi _{k,m}\rangle |=2^{-kn/2}|f\ast \widetilde{\varphi _{k}}(2^{-k}m)|\leq \sup_{k,m}(f). \end{equation*} Step 2 yields that \begin{equation*} \big\|S_{\varphi }f|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|\lesssim \big\|f|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|.
\end{equation*} To prove the boundedness of $T_{\psi }$ suppose $\lambda =\{\lambda _{j,h}\}_{j\in \mathbb{Z},h\in \mathbb{Z}^{n}}$ and \begin{equation*} T_{\psi }\lambda =\sum_{j=-\infty }^{\infty }\sum_{h\in \mathbb{Z}^{n}}\lambda _{j,h}\psi _{j,h}. \end{equation*} Obviously \begin{equation*} \widetilde{\varphi _{k}}\ast T_{\psi }\lambda =\sum_{j=k-1}^{k+1}\sum_{h\in \mathbb{Z}^{n}}\lambda _{j,h}\widetilde{\varphi _{k}}\ast \psi _{j,h}. \end{equation*} Since $\widetilde{\varphi }$ and $\psi $ belong to $\mathcal{S}(\mathbb{R}^{n})$ we obtain \begin{equation*} |\widetilde{\varphi _{k}}\ast \psi _{j,h}(x)|\lesssim 2^{jn/2}(1+2^{j}|x-2^{-j}h|)^{-d/\min (1,\min(p,q))},\quad d>n, \end{equation*} where the implicit constant is independent of $j,k,h$ and $x$. Therefore, if $x\in Q_{k+1,z}\subset Q_{k,m}\subset Q_{k-1,l}$, $z,l\in \mathbb{Z}^{n}$, then we obtain \begin{equation*} |\widetilde{\varphi _{k}}\ast T_{\psi }\lambda (x)|\lesssim 2^{kn/2}\sum_{j=k-1}^{k+1}\sum_{h\in \mathbb{Z}^{n}}\frac{|\lambda _{j,h}|}{(1+2^{j}|x-2^{-j}h|)^{d/\min (1,\min(p,q))}}. \end{equation*} Assume that $0<\min(p,q)\leq 1$. Using the inequality \begin{equation*} \Big(\sum_{h\in \mathbb{Z}^{n}}|a_{h}|\Big)^{\min(p,q)}\leq \sum_{h\in \mathbb{Z}^{n}}|a_{h}|^{\min(p,q)},\quad \{a_{h}\}_{h\in \mathbb{Z}^{n}}\subset \mathbb{C}, \end{equation*} we obtain \begin{equation} |\widetilde{\varphi _{k}}\ast T_{\psi }\lambda (x)|\lesssim 2^{kn/2}\sum_{j=k-1}^{k+1}\Big(\sum_{h\in \mathbb{Z}^{n}}\frac{|\lambda _{j,h}|^{\min(p,q)}}{(1+2^{j}|x-2^{-j}h|)^{d}}\Big)^{1/\min(p,q)}. \label{est-T} \end{equation} Now if $\min(p,q)>1$, then by the H\"{o}lder inequality and the fact that \begin{equation*} \sum_{h\in \mathbb{Z}^{n}}\frac{1}{(1+2^{j}|x-2^{-j}h|)^{d}}\lesssim 1, \end{equation*} we also have \eqref{est-T} with $\min(p,q)>1$.
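For the convenience of the reader, we spell out the Hölder step (a routine computation): with $r:=\min(p,q)>1$ and $\frac{1}{r}+\frac{1}{r^{\prime }}=1$, we write \begin{equation*} \frac{|\lambda _{j,h}|}{(1+2^{j}|x-2^{-j}h|)^{d}}=\frac{|\lambda _{j,h}|}{(1+2^{j}|x-2^{-j}h|)^{d/r}}\cdot \frac{1}{(1+2^{j}|x-2^{-j}h|)^{d/r^{\prime }}}, \end{equation*} so that the H\"{o}lder inequality in $h$ gives \begin{equation*} \sum_{h\in \mathbb{Z}^{n}}\frac{|\lambda _{j,h}|}{(1+2^{j}|x-2^{-j}h|)^{d}}\leq \Big(\sum_{h\in \mathbb{Z}^{n}}\frac{|\lambda _{j,h}|^{r}}{(1+2^{j}|x-2^{-j}h|)^{d}}\Big)^{1/r}\Big(\sum_{h\in \mathbb{Z}^{n}}\frac{1}{(1+2^{j}|x-2^{-j}h|)^{d}}\Big)^{1/r^{\prime }}, \end{equation*} where the second factor is uniformly bounded in $j$ and $x$.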
Hence if $x\in Q_{k+1,z}\subset Q_{k,m}\subset Q_{k-1,l}$, then we have \begin{equation*} |\widetilde{\varphi _{k}}\ast T_{\psi }\lambda (x)|\lesssim 2^{kn/2}(\lambda _{k-1,l,\min(p,q),d}^{\ast }+\lambda _{k,m,\min(p,q),d}^{\ast }+\lambda _{k+1,z,\min(p,q),d}^{\ast }). \end{equation*} Consequently \begin{equation*} \big\|T_{\psi }\lambda |\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|\lesssim \sum_{i=-1}^{1}I_{i}, \end{equation*} where \begin{equation*} I_{i}=\Big\|\Big(\sum_{k=-\infty }^{\infty }2^{knq/2}\sum\limits_{h\in \mathbb{Z}^{n}}(t_{k+i}\lambda _{k,h,\min(p,q),d}^{\ast })^{q}\chi _{k,h}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|, \end{equation*} $i=-1,0,1$. Applying \eqref{Second} we obtain \begin{equation*} \big\|T_{\psi }\lambda |\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|\lesssim \big\|\lambda |\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|. \end{equation*} The proof is complete. \end{proof} \begin{rem} This theorem can then be exploited to obtain a variety of results for the $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ spaces, since arguments can be equivalently transferred to the sequence space, which is often more convenient to handle. More precisely, under the same hypotheses as in the last theorem, \begin{equation*} \big\|\{\langle f,\varphi _{k,m}\rangle \}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|\approx \big\|f|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|. \end{equation*} \end{rem} From Theorem \ref{phi-tran}, we obtain the next important property of the function spaces $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$. \begin{cor} \label{Indpendent}Let $\alpha =(\alpha _{1},\alpha _{2})\in \mathbb{R}^{2}$, $0<\theta \leq p<\infty $ and $0<q<\infty $. Let $\{t_{k}\}\in \dot{X}_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p/\theta \right) ^{\prime },\sigma _{2}\geq p)$.
The definition of the spaces $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ is independent of the choice of $\varphi \in \mathcal{S}(\mathbb{R}^{n})$ satisfying $\mathrm{\eqref{Ass1}}$ and $\mathrm{\eqref{Ass2}}$. \end{cor} \begin{lem} Let $0<\theta \leq p<\infty $ and $0<q<\infty $. Let $\{t_{k}\}$ be a $p$-admissible weight sequence. The spaces $\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ are quasi-Banach spaces. They are Banach spaces if $1\leq p<\infty $ and $1\leq q<\infty $. \end{lem} \begin{proof} We only need to consider the case $1\leq p<\infty $ and $1\leq q<\infty $. Let $\{\lambda ^{(i)}\}_{i\in \mathbb{N}_{0}}$ be a sequence in $\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ such that \begin{equation*} \sum_{i=0}^{\infty }\big\|\lambda ^{(i)}|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|<\infty . \end{equation*} We obtain \begin{equation*} \sum_{i=0}^{\infty }\Big\|\Big(\sum_{k=-\infty }^{\infty }\sum\limits_{m\in \mathbb{Z}^{n}}2^{knq/2}t_{k}^{q}|\lambda _{k,m}^{(i)}|^{q}\chi _{k,m}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|<\infty . \end{equation*} We put \begin{equation*} A^{(i)}=\{A_{k}^{(i)}\}_{k\in \mathbb{Z}},\quad A_{k}^{(i)}=\sum\limits_{m\in \mathbb{Z}^{n}}2^{kn/2}t_{k}|\lambda _{k,m}^{(i)}|\chi _{k,m},\quad i\in \mathbb{N}_{0}. \end{equation*} It follows that $\sum_{i=0}^{\infty }A^{(i)}$ is absolutely convergent in $L_{p}(\ell _{q})$, so the sequence $\{A^{(i)}\}_{i\in \mathbb{N}_{0}}$ converges in $L_{p}(\ell _{q})$ and \begin{equation*} \Big\|\sum_{i=0}^{\infty }A^{(i)}|L_{p}(\ell _{q})\Big\|<\infty . \end{equation*} Then \begin{equation*} \sum\limits_{m\in \mathbb{Z}^{n}}t_{k}\chi _{k,m}\sum_{i=0}^{\infty }|\lambda _{k,m}^{(i)}|=\sum_{i=0}^{\infty }\sum\limits_{m\in \mathbb{Z}^{n}}t_{k}|\lambda _{k,m}^{(i)}|\chi _{k,m}<\infty ,\quad k\in \mathbb{Z},\ \text{a.e.,} \end{equation*} which yields that \begin{equation*} \sum_{i=0}^{\infty }|\lambda _{k,m}^{(i)}|<\infty ,\quad k\in \mathbb{Z},m\in \mathbb{Z}^{n}.
\end{equation*} Therefore \begin{equation*} \Big\|\sum_{i=0}^{\infty }\lambda ^{(i)}|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\Big\|\leq \sum_{i=0}^{\infty }\Big\|A^{(i)}|L_{p}(\ell _{q})\Big\|<\infty . \end{equation*} This completes the proof. \end{proof} Applying this lemma and Theorem \ref{phi-tran} we obtain the following useful properties of these function spaces, see \cite{GJN17}. \begin{thm} Let $\alpha =(\alpha _{1},\alpha _{2})\in \mathbb{R}^{2}$, $0<\theta \leq p<\infty $ and $0<q<\infty $. Let $\{t_{k}\}\in \dot{X}_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p/\theta \right) ^{\prime },\sigma _{2}\geq p)$. The spaces $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ are quasi-Banach spaces. They are Banach spaces if $1\leq p<\infty $ and $1\leq q<\infty $. \end{thm} \begin{thm} \label{embeddings-S-inf}Let $0<\theta \leq p<\infty $ and $0<q<\infty $. Let $\{t_{k}\}\in \dot{X}_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p/\theta \right) ^{\prime },\sigma _{2}\geq p)$ and $\alpha =(\alpha _{1},\alpha _{2})\in \mathbb{R}^{2}$.\newline $\mathrm{(i)}$ We have the embedding \begin{equation} \mathcal{S}_{\infty }(\mathbb{R}^{n})\hookrightarrow \dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\}). \label{embedding} \end{equation} In addition, $\mathcal{S}_{\infty }(\mathbb{R}^{n})$ is dense in $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$.\newline $\mathrm{(ii)}$ We have the embedding \begin{equation*} \dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\hookrightarrow \mathcal{S}_{\infty }^{\prime }(\mathbb{R}^{n}). \end{equation*} \end{thm} \begin{proof} The proof is a variant of that given for Besov spaces in \cite{D20}. For the convenience of the reader, we give some details.
The embedding $\mathrm{\eqref{embedding}}$ follows from \begin{equation*} \mathcal{S}_{\infty }(\mathbb{R}^{n})\hookrightarrow \dot{B}_{p,\min (p,q)}(\mathbb{R}^{n},\{t_{k}\})\hookrightarrow \dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\}), \end{equation*} see \cite{D20} for the first embedding. Now we prove the density of $\mathcal{S}_{\infty }(\mathbb{R}^{n})$ in $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$. Let $\varphi $, $\psi \in \mathcal{S}(\mathbb{R}^{n})$ satisfy $\mathrm{\eqref{Ass1}}$ through $\mathrm{\eqref{Ass3}}$ and let $f\in \dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$. Let \begin{equation*} f_{N}=\sum\limits_{k=-N}^{N}\tilde{\psi}_{k}\ast \varphi _{k}\ast f,\quad N\in \mathbb{N}. \end{equation*} Observe that \begin{equation*} \varphi _{j}\ast \tilde{\psi}_{k}=0\quad \text{if}\quad k\notin \{j-1,j,j+1\}. \end{equation*} Then, by Lemma \ref{key-estimate1}, \begin{align*} \big\|f_{N}|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|& =\big\|\Big(\sum\limits_{|k|\leq N+1}|t_{k}(\varphi _{k}\ast \tilde{\psi}_{k}\ast \bar{\varphi}_{k}\ast f)|^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\big\| \\ & \lesssim \big\|\Big(\sum\limits_{|k|\leq N+1}|t_{k}\mathcal{M}_{\tau }(\varphi _{k}\ast f)|^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\big\| \\ & \lesssim \big\|f|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|<\infty \end{align*} for any $N\in \mathbb{N}$, where $\bar{\varphi}_{k}=\varphi _{k-1}+\varphi _{k}+\varphi _{k+1}$, $k\in \mathbb{Z}$, and $0<\tau <\min (1,p,q)$. The first inequality follows by Lemma 2.4 of \cite{Dr15}.
Consequently, \begin{align*} \big\|f-f_{N}|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|& \leq \big\|\Big(\sum\limits_{|k|\geq N+1}|t_{k}(\varphi _{k}\ast \tilde{\psi}_{k}\ast \bar{\varphi}_{k}\ast f)|^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\big\| \\ & \lesssim \big\|\Big(\sum\limits_{|k|\geq N+1}|t_{k}\mathcal{M}_{\tau }(\varphi _{k}\ast f)|^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\big\| \\ & \lesssim \big\|\Big(\sum\limits_{|k|\geq N+1}|t_{k}(\varphi _{k}\ast f)|^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\big\|, \end{align*} where we used again Lemma \ref{key-estimate1}. The dominated convergence theorem implies that $f_{N}$ approximates $f$ in $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$. But $f_{N}$, $N\in \mathbb{N}$, is not necessarily an element of $\mathcal{S}_{\infty }(\mathbb{R}^{n})$, so we need to approximate $f_{N}$ in $\mathcal{S}_{\infty }\left( \mathbb{R}^{n}\right) $. Let $\omega \in \mathcal{S}\left( \mathbb{R}^{n}\right) $ with $\omega (0)=1$ and $\mathrm{supp}(\mathcal{F}(\omega ))\subset \{\xi :|\xi |\leq 1\}$. Put \begin{equation*} f_{N,\delta }:=f_{N}\omega (\delta \cdot ),\quad 0<\delta <1. \end{equation*} We have $f_{N,\delta }\in \mathcal{S}_{\infty }\left( \mathbb{R}^{n}\right) $, see \cite[Lemma 5.3]{YY1}, and \begin{equation*} f_{N}-f_{N,\delta }=\sum\limits_{k=-N}^{N}(\tilde{\psi}_{k}\ast \varphi _{k}\ast f)(1-\omega (\delta \cdot )).
\end{equation*} After a simple calculation, we obtain \begin{equation*} \varphi _{j}\ast \lbrack (\tilde{\psi}_{k}\ast \varphi _{k}\ast f)(\omega (\delta \cdot ))](x)=\int_{\mathbb{R}^{n}}\varphi _{k}\ast f(y)\,\varphi _{j}\ast (\tilde{\psi}_{k}\omega (\delta (\cdot +y)))(x-y)\,dy,\quad x\in \mathbb{R}^{n}, \end{equation*} which together with the fact that \begin{equation*} \mathrm{supp}(\mathcal{F}(\tilde{\psi}_{k}\omega (\delta (\cdot +y))))\subset \{\xi :2^{k-2}\leq |\xi |\leq 2^{k+1}\},\quad y\in \mathbb{R}^{n},\ |k|\leq N, \end{equation*} if $0<\delta <2^{-N-3}$, yields that \begin{equation*} \varphi _{j}\ast \lbrack (\tilde{\psi}_{k}\ast \varphi _{k}\ast f)(\omega (\delta \cdot ))]=0\quad \text{if}\quad |j-k|\geq 2. \end{equation*} Therefore, we obtain that \begin{equation*} \big\|f_{N}-f_{N,\delta }|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\| \end{equation*} can be estimated by \begin{align*} & \Big\|\Big(\sum\limits_{|k|\leq N+2}\big|t_{k}(\varphi _{k}\ast \sum\limits_{i=-2}^{2}[(\tilde{\psi}_{k+i}\ast \varphi _{k+i}\ast f)(1-\omega (\delta \cdot ))])\big|^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\| \\ & \lesssim \sum\limits_{i=-2}^{2}\Big\|\Big(\sum\limits_{|k|\leq N+2}\big|t_{k+i}\big((\tilde{\psi}_{k+i}\ast \varphi _{k+i}\ast f)(1-\omega (\delta \cdot ))\big)\big|^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|. \end{align*} Again, by Lebesgue's dominated convergence theorem, $f_{N,\delta }$ approximates $f_{N}$ in the spaces $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$. This proves that $\mathcal{S}_{\infty }(\mathbb{R}^{n})$ is dense in $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$. The proof of (ii) follows by the embedding \begin{equation*} \dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\hookrightarrow \dot{B}_{p,\max (p,q)}(\mathbb{R}^{n},\{t_{k}\})\hookrightarrow \mathcal{S}_{\infty }^{\prime }(\mathbb{R}^{n}), \end{equation*} where the second embedding is proved in \cite{D20}.
\end{proof} \subsection{Embeddings} For the spaces introduced above we want to establish Sobolev-type embedding theorems. We recall that a quasi-Banach space $A_{1}$ is continuously embedded in another quasi-Banach space $A_{2}$, $A_{1}\hookrightarrow A_{2}$, if $A_{1}\subset A_{2}$ and there is a $c>0$ such that \begin{equation*} \big\|f|A_{2}\big\|\leq c\big\|f|A_{1}\big\| \end{equation*} for all $f\in A_{1}$. We begin with the following elementary embeddings. \begin{thm} \label{elem-embedding}Let $0<p<\infty $ and $0<q\leq r<\infty $. Let $\{t_{k}\}\in \dot{X}_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( \frac{p}{\theta }\right) ^{\prime },\sigma _{2}\geq p)$. We have \begin{equation*} \dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\hookrightarrow \dot{F}_{p,r}(\mathbb{R}^{n},\{t_{k}\}). \end{equation*} \end{thm} \begin{proof} This is an immediate consequence of the embeddings between Lebesgue sequence spaces, namely $\ell _{q}\hookrightarrow \ell _{r}$ for $q\leq r$, applied pointwise. \end{proof} The main result of this subsection is the following Sobolev-type embedding. In the classical setting this was done in \cite{J77} and \cite{T1}. We set \begin{equation*} w_{k,Q}(p_{1})=\Big(\int_{Q}w_{k}^{p_{1}}(x)dx\Big)^{1/p_{1}}\quad \text{and}\quad t_{k,Q}(p_{0})=\Big(\int_{Q}t_{k}^{p_{0}}(x)dx\Big)^{1/p_{0}}, \end{equation*} where $Q\in \mathcal{Q}$ with $\ell (Q)=2^{-k}$, $k\in \mathbb{Z}$. \begin{thm} \label{Sobolev-embedding-sequence}Let $0<\theta \leq p_{0}<p_{1}<\infty $ and $0<q,r<\infty $. Let $\{t_{k}\}$ be a $p_{0}$-admissible weight sequence satisfying $\mathrm{\eqref{Asum1}}$ with $p=p_{0}$, $\sigma _{1}=\theta \left( p_{0}/\theta \right) ^{\prime }$ and $j=k$. Let $\{w_{k}\}$ be a $p_{1}$-admissible weight sequence satisfying $\mathrm{\eqref{Asum1}}$ with $p=p_{1}$, $\sigma _{1}=\theta \left( p_{1}/\theta \right) ^{\prime }$ and $j=k$.
If $w_{k,Q}(p_{1})\lesssim t_{k,Q}(p_{0})$ for all $Q\in \mathcal{Q}$ with $\ell (Q)=2^{-k}$, $k\in \mathbb{Z}$, then we have \begin{equation*} \dot{f}_{p_{0},q}(\mathbb{R}^{n},\{t_{k}\})\hookrightarrow \dot{f}_{p_{1},r}(\mathbb{R}^{n},\{w_{k}\}). \end{equation*} \end{thm} \begin{proof} Let $\lambda \in \dot{f}_{p_{0},q}(\mathbb{R}^{n},\{t_{k}\})$. Without loss of generality, we may assume that \begin{equation*} \big\|\lambda |\dot{f}_{p_{0},q}(\mathbb{R}^{n},\{t_{k}\})\big\|=1. \end{equation*} We set \begin{equation*} f_{k}(x)=\sum\limits_{m\in \mathbb{Z}^{n}}2^{kn/2}|Q_{k,m}|^{-1/p_{1}}t_{k,m}(p_{0})|\lambda _{k,m}|\chi _{k,m}(x),\quad x\in \mathbb{R}^{n},k\in \mathbb{Z}. \end{equation*} Using Proposition \ref{Equi-norm1} and the fact that \begin{equation*} w_{k,m}(p_{1})\lesssim t_{k,m}(p_{0}),\quad k\in \mathbb{Z},m\in \mathbb{Z}^{n}, \end{equation*} we obtain \begin{equation*} \big\|\lambda |\dot{f}_{p_{1},r}(\mathbb{R}^{n},\{w_{k}\})\big\|\lesssim \Big\|\Big(\sum\limits_{k=-\infty }^{\infty }f_{k}^{r}\Big)^{1/r}|L_{p_{1}}(\mathbb{R}^{n})\Big\|. \end{equation*} Now we prove our embedding. Let $K\in \mathbb{Z}$. By Lemma \ref{Lamda-est}, \begin{equation*} |\lambda _{k,m}|\lesssim 2^{-kn/2}t_{k,m}^{-1}(p_{0}) \end{equation*} for any $k\in \mathbb{Z}$ and $m\in \mathbb{Z}^{n}$. Therefore, \begin{equation} \sum_{k=-\infty }^{K}f_{k}^{r}(x)\lesssim \sum_{k=-\infty }^{K}2^{knr/p_{1}}\leq C2^{Knr/p_{1}}.
\label{est-sum1} \end{equation} On the other hand, it follows that \begin{align} & \sum_{k=K+1}^{\infty }f_{k}^{r}(x) \notag \\ & \lesssim \sup_{k\in \mathbb{Z}}\Big(\sum\limits_{m\in \mathbb{Z}^{n}}2^{kn/2}|Q_{k,m}|^{-1/p_{0}}t_{k,m}(p_{0})|\lambda _{k,m}|\chi _{k,m}(x)\Big)^{r}\sum_{k=K+1}^{\infty }2^{knr(1/p_{1}-1/p_{0})} \notag \\ & \lesssim \sup_{k\in \mathbb{Z}}\Big(\sum\limits_{m\in \mathbb{Z}^{n}}2^{kn/2}|Q_{k,m}|^{-1/p_{0}}t_{k,m}(p_{0})|\lambda _{k,m}|\chi _{k,m}(x)\Big)^{r}2^{Knr(1/p_{1}-1/p_{0})}. \label{est-sum2} \end{align} The identity \begin{equation*} \big\|g|L_{p_{1}}(\mathbb{R}^{n})\big\|^{p_{1}}=p_{1}\int_{0}^{\infty }y^{p_{1}-1}\left\vert \left\{ x\in \mathbb{R}^{n}:\left\vert g\left( x\right) \right\vert >y\right\} \right\vert dy \end{equation*} justifies the estimate \begin{equation*} \big\|\lambda |\dot{f}_{p_{1},r}(\mathbb{R}^{n},\{w_{k}\})\big\|^{p_{1}}\lesssim \int_{0}^{\infty }y^{p_{1}-1}\Big|\Big\{x\in \mathbb{R}^{n}:\Big(\sum\limits_{k=-\infty }^{\infty }f_{k}^{r}(x)\Big)^{1/r}>y\Big\}\Big|dy. \end{equation*} We use $\mathrm{\eqref{est-sum1}}$ with $K$ the largest integer such that \begin{equation*} C2^{Knr/p_{1}}\leq \frac{y^{r}}{2}. \end{equation*} Since $2^{Kn(1/p_{0}-1/p_{1})}y\geq c\,y^{p_{1}/p_{0}}$, using $\mathrm{\eqref{est-sum2}}$ we obtain \begin{equation*} \Big|\Big\{x\in \mathbb{R}^{n}:\sum_{k=-\infty }^{\infty }f_{k}^{r}(x)>y^{r}\Big\}\Big|\lesssim \Big|\Big\{x\in \mathbb{R}^{n}:\sum_{k=K+1}^{\infty }f_{k}^{r}(x)>\frac{y^{r}}{2}\Big\}\Big|, \end{equation*} and hence \begin{equation*} \Big|\Big\{x\in \mathbb{R}^{n}:\sum_{k=-\infty }^{\infty }f_{k}^{r}(x)>y^{r}\Big\}\Big| \end{equation*} does not exceed \begin{equation*} c\Big|\Big\{x\in \mathbb{R}^{n}:\sup_{k\in \mathbb{Z}}\big(2^{kn(1/p_{0}-1/p_{1})}f_{k}(x)\big)>c\,2^{Kn(1/p_{0}-1/p_{1})}y\Big\}\Big|.
\end{equation*} Therefore \begin{align*} & \big\|\lambda |\dot{f}_{p_{1},r}(\mathbb{R}^{n},\{w_{k}\})\big\|^{p_{1}} \\ & \lesssim \int_{0}^{\infty }y^{p_{1}-1}\Big|\Big\{x\in \mathbb{R}^{n}:\sup_{k\in \mathbb{Z}}\big(2^{kn(1/p_{0}-1/p_{1})}f_{k}(x)\big)>c\,y^{p_{1}/p_{0}}\Big\}\Big|dy. \end{align*} After a simple change of variable, we estimate the last term by \begin{align*} &c\int_{0}^{\infty }y^{p_{0}-1}\Big|\Big\{x\in \mathbb{R}^{n}:\sup_{k\in \mathbb{Z}}\big(2^{kn(1/p_{0}-1/p_{1})}f_{k}(x)\big)>y\Big\}\Big|dy\\ & \lesssim \big\|\lambda |\dot{f}_{p_{0},\infty }(\mathbb{R}^{n},\{t_{k}\})\big\|^{p_{0}}, \end{align*} where we have used again Proposition \ref{Equi-norm1}. Hence the theorem is proved. \end{proof} From Theorems \ref{phi-tran} and \ref{Sobolev-embedding-sequence}, we infer the following Sobolev-type embedding for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$. \begin{thm} \label{Sobolev-embedding}Let $0<\theta \leq p_{0}<p_{1}<\infty $ and $0<q,r<\infty $. Let $\{t_{k}\}\in \dot{X}_{\alpha _{0},\sigma ,p_{0}}$ be a $p_{0}$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p_{0}/\theta \right) ^{\prime },\sigma _{2}\geq p_{0})$ and $\alpha _{0}=(\alpha _{1,0},\alpha _{2,0})\in \mathbb{R}^{2}$. Let $\{w_{k}\}\in \dot{X}_{\alpha _{1},\sigma ,p_{1}}$ be a $p_{1}$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p_{1}/\theta \right) ^{\prime },\sigma _{2}\geq p_{1})$ and $\alpha _{1}=(\alpha _{1,1},\alpha _{2,1})\in \mathbb{R}^{2}$. Then the embedding \begin{equation*} \dot{F}_{p_{0},q}(\mathbb{R}^{n},\{t_{k}\})\hookrightarrow \dot{F}_{p_{1},r}(\mathbb{R}^{n},\{w_{k}\}) \end{equation*} holds if \begin{equation*} w_{k,Q}(p_{1})\lesssim t_{k,Q}(p_{0}) \end{equation*} for all $Q\in \mathcal{Q}$ and all $k\in \mathbb{Z}$. \end{thm} \subsection{Atomic and molecular decompositions} We will use the notation of \cite{FJ90}.
We shall say that an operator $A$ is associated with the matrix $\{a_{Q_{k,m}P_{v,h}}\}_{k,v\in \mathbb{Z},m,h\in \mathbb{Z}^{n}}$ if, for all sequences $\lambda =\{\lambda _{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}\subset \mathbb{C}$, \begin{equation*} A\lambda =\{(A\lambda )_{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}=\Big\{\sum_{v=-\infty }^{\infty }\sum_{h\in \mathbb{Z}^{n}}a_{Q_{k,m}P_{v,h}}\lambda _{v,h}\Big\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}. \end{equation*} We will use the notation \begin{equation*} J=\frac{n}{\min (1,p,q)}. \end{equation*} We say that $A$, with associated matrix $\{a_{Q_{k,m}P_{v,h}}\}_{k,v\in \mathbb{Z},m,h\in \mathbb{Z}^{n}}$, is almost diagonal on $\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ if there exists $\varepsilon >0$ such that \begin{equation*} \sup_{k,v\in \mathbb{Z},m,h\in \mathbb{Z}^{n}}\frac{|a_{Q_{k,m}P_{v,h}}|}{\omega _{Q_{k,m}P_{v,h}}(\varepsilon )}<\infty , \end{equation*} where \begin{align} &\omega _{Q_{k,m}P_{v,h}}(\varepsilon ) \notag \\ &=\Big(1+\frac{|x_{Q_{k,m}}-x_{P_{v,h}}|}{\max \big(2^{-k},2^{-v}\big)}\Big)^{-J-\varepsilon }\left\{ \begin{array}{ccc} 2^{(v-k)(\alpha _{2}+(n+\varepsilon )/2)}, & \text{if} & v\leq k, \\ 2^{(v-k)(\alpha _{1}-(n+\varepsilon )/2-J+n)}, & \text{if} & v>k. \end{array} \right. \label{omega-assumption} \end{align} Using Lemma \ref{key-estimate1.1}, the following theorem is a generalization of \cite[Theorem 3.3]{FJ90}. \begin{thm} \label{almost-diag-est}Let $\alpha _{1},\alpha _{2}\in \mathbb{R}$, $0<\theta \leq p<\infty $ and $0<q<\infty $. Let $\{t_{k}\}_{k}\in \dot{X}_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma _{1}=\theta \left( p/\theta \right) ^{\prime }$ and $\sigma _{2}\geq p$. Any almost diagonal operator $A$ on $\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ is bounded.
\end{thm} \begin{proof} We write $A\equiv A_{0}+A_{1}$ with \begin{equation*} (A_{0}\lambda )_{k,m}=\sum_{v=-\infty }^{k}\sum_{h\in \mathbb{Z}^{n}}a_{Q_{k,m}P_{v,h}}\lambda _{v,h},\quad k\in \mathbb{Z},m\in \mathbb{Z}^{n} \end{equation*} and \begin{equation*} (A_{1}\lambda )_{k,m}=\sum_{v=k+1}^{\infty }\sum_{h\in \mathbb{Z}^{n}}a_{Q_{k,m}P_{v,h}}\lambda _{v,h},\quad k\in \mathbb{Z},m\in \mathbb{Z}^{n}. \end{equation*} \textit{Estimate of }$A_{0}$. From $\mathrm{\eqref{omega-assumption}}$, we obtain \begin{align*} \big|(A_{0}\lambda )_{k,m}\big|& \leq \sum_{v=-\infty }^{k}\sum_{h\in \mathbb{Z}^{n}}2^{(v-k)(\alpha _{2}+(n+\varepsilon )/2)}\frac{|\lambda _{v,h}|}{\big(1+2^{v}|x_{k,m}-x_{v,h}|\big)^{J+\varepsilon }} \\ & =\sum_{v=-\infty }^{k}2^{(v-k)(\alpha _{2}+(n+\varepsilon )/2)}S_{k,v,m}. \end{align*} For each $j\in \mathbb{N},k\in \mathbb{Z}$ and $m\in \mathbb{Z}^{n}$ we define \begin{equation*} \Omega _{j,k,m}=\{h\in \mathbb{Z}^{n}:2^{j-1}<2^{v}|x_{k,m}-x_{v,h}|\leq 2^{j}\} \end{equation*} and \begin{equation*} \Omega _{0,k,m}=\{h\in \mathbb{Z}^{n}:2^{v}|x_{k,m}-x_{v,h}|\leq 1\}. \end{equation*} Let $n/(J+\frac{\varepsilon }{2})<\tau <\min (1,p,q)$. We rewrite $S_{k,v,m}$ as follows \begin{align*} S_{k,v,m}& =\sum\limits_{j=0}^{\infty }\sum\limits_{h\in \Omega _{j,k,m}}\frac{|\lambda _{v,h}|}{\big(1+2^{v}|x_{k,m}-x_{v,h}|\big)^{J+\varepsilon }} \\ & \leq \sum\limits_{j=0}^{\infty }2^{-(J+\varepsilon )j}\sum\limits_{h\in \Omega _{j,k,m}}|\lambda _{v,h}|.
\end{align*} By the embedding $\ell _{\tau }\hookrightarrow \ell _{1}$ we deduce that \begin{align*} S_{k,v,m} &\leq \sum\limits_{j=0}^{\infty }2^{-(J+\varepsilon )j}\big(\sum\limits_{h\in \Omega _{j,k,m}}|\lambda _{v,h}|^{\tau }\big)^{1/\tau } \\ &=\sum\limits_{j=0}^{\infty }2^{(n/\tau -J-\varepsilon )j}\Big(2^{(v-j)n}\int\limits_{\cup _{z\in \Omega _{j,k,m}}Q_{v,z}}\sum\limits_{h\in \Omega _{j,k,m}}|\lambda _{v,h}|^{\tau }\chi _{v,h}(y)dy\Big)^{1/\tau }. \end{align*} Let $y\in \cup _{z\in \Omega _{j,k,m}}Q_{v,z}$ and $x\in Q_{k,m}$. It follows that $y\in Q_{v,z}$ for some $z\in \Omega _{j,k,m}$ and $2^{j-1}<2^{v}|2^{-k}m-2^{-v}z|\leq 2^{j}$. From this we obtain that \begin{align*} \left\vert y-x\right\vert & \leq \left\vert y-2^{-k}m\right\vert +\left\vert x-2^{-k}m\right\vert \\ & \lesssim 2^{-v}+2^{j-v}+2^{-k} \\ & \leq 2^{j-v+\delta _{n}},\quad \delta _{n}\in \mathbb{N}, \end{align*} which implies that $y$ is located in the ball $B(x,2^{j-v+\delta _{n}})$. Consequently \begin{equation*} S_{k,v,m}\lesssim \mathcal{M}_{\tau }\big(\sum\limits_{h\in \mathbb{Z}^{n}}\lambda _{v,h}\chi _{v,h}\big)(x) \end{equation*} for any $x\in Q_{k,m}$ and any $v\leq k$. Applying Lemma \ref{key-estimate1.1}, we obtain that \begin{equation*} \big\|A_{0}\lambda |\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\| \end{equation*} is bounded by \begin{equation*} c\big\|\lambda |\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|. \end{equation*} \textit{Estimate of }$A_{1}$.
Again from $\mathrm{\eqref{omega-assumption}}$, we see that \begin{align*} \big|(A_{1}\lambda )_{k,m}\big|& \leq \sum_{v=k+1}^{\infty }\sum_{h\in \mathbb{Z}^{n}}2^{(v-k)(\alpha _{1}-\varepsilon /2-J+n/2)}\frac{|\lambda _{v,h}|}{\big(1+2^{k}|x_{k,m}-x_{v,h}|\big)^{J+\varepsilon }} \\ & =\sum_{v=k+1}^{\infty }2^{(v-k)(\alpha _{1}-\varepsilon /2-J+n/2)}T_{k,v,m}. \end{align*} Proceeding as in the estimate of $A_{0}$, we can prove that \begin{equation*} T_{k,v,m}\leq c2^{(v-k)n/\tau }\mathcal{M}_{\tau }\big(\sum\limits_{h\in \mathbb{Z}^{n}}\lambda _{v,h}\chi _{v,h}\big)(x),\quad v>k,x\in Q_{k,m}, \end{equation*} where $n/(J+\frac{\varepsilon }{2})<\tau <\min (1,p,q)$ and the positive constant $c$ is independent of $v$, $k$ and $m$. Again applying Lemma \ref{key-estimate1.1}, we obtain that \begin{equation*} \big\|A_{1}\lambda |\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\| \end{equation*} is bounded by \begin{equation*} c\big\|\lambda |\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|. \end{equation*} Hence the theorem is proved. \end{proof} \begin{defn} \label{Atom-Def}Let $\alpha _{1},\alpha _{2}\in \mathbb{R}$, $0<p<\infty $ and $0<q<\infty $. Let $\{t_{k}\}$ be a $p$-admissible weight sequence. Let $N=\max \{J-n-\alpha _{1},-1\}$ and $\alpha _{2}^{\ast }=\alpha _{2}-\lfloor \alpha _{2}\rfloor $.\newline $\mathrm{(i)}$ Let $k\in \mathbb{Z}$ and $m\in \mathbb{Z}^{n}$.
A function $\varrho _{Q_{k,m}}$ is called a homogeneous smooth synthesis molecule for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ supported near $Q_{k,m}$ if there exist a real number $\delta \in (\alpha _{2}^{\ast },1]$ and a real number $M\in (J,\infty )$ such that \begin{equation} \int_{\mathbb{R}^{n}}x^{\beta }\varrho _{Q_{k,m}}(x)dx=0\text{\quad if\quad }0\leq |\beta |\leq N, \label{mom-cond} \end{equation} \begin{equation} |\varrho _{Q_{k,m}}(x)|\leq 2^{\frac{kn}{2}}(1+2^{k}|x-x_{Q_{k,m}}|)^{-\max (M,M-\alpha _{1})}, \label{cond1} \end{equation} \begin{equation} |\partial ^{\beta }\varrho _{Q_{k,m}}(x)|\leq 2^{k(|\beta |+\frac{1}{2})}(1+2^{k}|x-x_{Q_{k,m}}|)^{-M}\quad \text{if}\quad |\beta |\leq \lfloor \alpha _{2}\rfloor \label{cond2} \end{equation} and \begin{align} &|\partial ^{\beta }\varrho _{Q_{k,m}}(x)-\partial ^{\beta }\varrho _{Q_{k,m}}(y)| \label{cond3} \\ &\leq 2^{k(|\beta |+\frac{1}{2}+\delta )}|x-y|^{\delta }\sup_{|z|\leq |x-y|}(1+2^{k}|x-z-x_{Q_{k,m}}|)^{-M}\text{\quad if\quad }|\beta |=\lfloor \alpha _{2}\rfloor . \notag \end{align} A collection $\{\varrho _{Q_{k,m}}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}$ is called a family of homogeneous smooth synthesis molecules for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ if each $\varrho _{Q_{k,m}}$, $k\in \mathbb{Z},m\in \mathbb{Z}^{n}$, is a homogeneous smooth synthesis molecule for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ supported near $Q_{k,m}$.\newline $\mathrm{(ii)}$ Let $k\in \mathbb{Z}$ and $m\in \mathbb{Z}^{n}$.
A function $b_{Q_{k,m}}$ is called a homogeneous smooth analysis molecule for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ supported near $Q_{k,m}$ if there exist a $\kappa \in ((J-\alpha _{2})^{\ast },1]$ and an $M\in (J,\infty )$ such that \begin{equation} \int_{\mathbb{R}^{n}}x^{\beta }b_{Q_{k,m}}(x)dx=0\text{\quad if\quad }0\leq |\beta |\leq \left\lfloor \alpha _{2}\right\rfloor , \label{mom-cond2} \end{equation} \begin{equation} |b_{Q_{k,m}}(x)|\leq 2^{\frac{kn}{2}}(1+2^{k}|x-x_{Q_{k,m}}|)^{-\max (M,M+n+\alpha _{2}-J)}, \label{cond1.1} \end{equation} \begin{equation} |\partial ^{\beta }b_{Q_{k,m}}(x)|\leq 2^{k(|\beta |+\frac{n}{2})}(1+2^{k}|x-x_{Q_{k,m}}|)^{-M}\quad \text{if}\quad |\beta |\leq N \label{cond1.2} \end{equation} and \begin{align} &|\partial ^{\beta }b_{Q_{k,m}}(x)-\partial ^{\beta }b_{Q_{k,m}}(y)| \label{cond1.3} \\ &\leq 2^{k(|\beta |+\frac{n}{2}+\kappa )}|x-y|^{\kappa }\sup_{|z|\leq |x-y|}(1+2^{k}|x-z-x_{Q_{k,m}}|)^{-M}\text{\quad if\quad }|\beta |=N. \notag \end{align} A collection $\{b_{Q_{k,m}}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}$ is called a family of homogeneous smooth analysis molecules for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ if each $b_{Q_{k,m}}$, $k\in \mathbb{Z},m\in \mathbb{Z}^{n}$, is a homogeneous smooth analysis molecule for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ supported near $Q_{k,m}$. \end{defn} We will use the notation $\{b_{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}$ instead of $\{b_{Q_{k,m}}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}$. The proof of the following lemma is given in \cite{D20}. \begin{lem} \label{matrix-est}Let $\alpha _{1},\alpha _{2},J,M,N,\delta ,\kappa ,p$ and $q$ be as in Definition \ref{Atom-Def}. Let $\{t_{k}\}$ be a $p$-admissible weight sequence.
Suppose $\{\varrho _{v,h}\}_{v\in \mathbb{Z},h\in \mathbb{Z}^{n}}$ is a family of homogeneous smooth synthesis molecules for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ and $\{b_{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}$\ is a family of homogeneous smooth analysis molecules for\ $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$. Then there exist a positive real number $\varepsilon _{1}$ and a positive constant $c$ such that
\begin{equation*}
\left\vert \langle \varrho _{v,h},b_{k,m}\rangle \right\vert \leq c\,\omega _{Q_{k,m}P_{v,h}}(\varepsilon ),\quad k,v\in \mathbb{Z},h,m\in \mathbb{Z}^{n},
\end{equation*}
whenever $\varepsilon \leq \varepsilon _{1}$.
\end{lem}

As an immediate consequence, we have the following analogue of the corresponding result in \cite[Corollary\ B.3]{FJ90}.

\begin{cor}
Let\ $\alpha _{1},\alpha _{2},J,M,N,\delta ,\kappa ,p\ $and $q$ be as in Definition {\ref{Atom-Def}}. Let $\{t_{k}\}$ be a $p$-admissible weight sequence. Let $\Phi $ and $\varphi $ satisfy, respectively, $\mathrm{\eqref{Ass1}}$ and $\mathrm{\eqref{Ass2}}$.\newline $\mathrm{(i)}$\ If $\{\varrho _{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}$ is a family of homogeneous smooth synthesis molecules for the Triebel-Lizorkin space $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$, then the operator $A$ with matrix\ $a_{Q_{k,m}P_{v,h}}=\langle \varrho _{v,h},\varphi _{k,m}\rangle $, $k,v\in \mathbb{Z},m,h\in \mathbb{Z}^{n}$, is almost diagonal.\newline $\mathrm{(ii)}$ If $\{b_{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}$ is a family of homogeneous smooth analysis molecules for the Triebel-Lizorkin space $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$, then the operator $A$ with matrix $a_{Q_{k,m}P_{v,h}}=\langle \varphi _{v,h},b_{Q_{k,m}}\rangle $, $k,v\in \mathbb{Z},m,h\in \mathbb{Z}^{n}$, is almost diagonal.
\end{cor}

Let $f\in \dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\ $and $\{b_{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}$ be a family of homogeneous\ smooth analysis molecules. To prove that $\langle f,b_{Q_{k,m}}\rangle $, $k\in \mathbb{Z},m\in \mathbb{Z}^{n}$, is well defined for all homogeneous smooth analysis molecules for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$, we need the following result, which is proved in {\cite[Lemma 5.4]{BoHo06}}. Suppose that $\Phi $ is a smooth analysis (or synthesis) molecule supported near $Q\in \mathcal{Q}$. Then there exist a sequence $\{\varphi _{k}\}_{k\in \mathbb{N}}\subset \mathcal{S}(\mathbb{R}^{n})$ and $c>0$ such that $c\varphi _{k}$ is a smooth analysis (or synthesis) molecule supported near $Q$ for every $k$, and $\varphi _{k}(x)\rightarrow \Phi (x)$ uniformly on $\mathbb{R}^{n}$ as $k\rightarrow \infty $. Now we have the following smooth molecular characterization of the spaces $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$. We refer the reader to {\cite{D20}} for the corresponding result for Besov spaces.

\begin{thm}
\label{molecules-dec}Let $\alpha _{1}$, $\alpha _{2}\in \mathbb{R},0<\theta \leq p<\infty $ and $0<q<\infty $. Let $\{t_{k}\}_{k}\in \dot{X}_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma _{1}=\theta \left( p/\theta \right) ^{\prime }$ and\ $\sigma _{2}\geq p$. Let $J,M,N,\delta $ and $\kappa $ be as in Definition {\ref{Atom-Def}}.
\newline $\mathrm{(i)}$\ If $f=\sum_{v=-\infty }^{\infty }\sum_{h\in \mathbb{Z}^{n}}\varrho _{v,h}\lambda _{v,h}$, where $\{\varrho _{v,h}\}_{v\in \mathbb{Z},h\in \mathbb{Z}^{n}}$ is a family of homogeneous smooth synthesis molecules for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$, then for all $\lambda \in \dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$
\begin{equation*}
{\big\|}f|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\}){\big\|}\lesssim {\big\|}\lambda |\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\}){\big\|}.
\end{equation*}
$\mathrm{(ii)}$\ Let $\{b_{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}$ be a family of homogeneous\ smooth analysis molecules.\ Then for all\ $f\in \dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$
\begin{equation*}
{\big\|}\{\langle f,b_{k,m}\rangle \}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\}){\big\|}\lesssim {\big\|}f|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|.
\end{equation*}
\end{thm}

\begin{proof}
The proof is a slight variant of \cite{FJ90} and \cite{D20}. For the convenience of the reader, we give some details.

\textit{Step 1. Proof of }\textrm{(i)}. By\ $\mathrm{\eqref{proc2}}$ we can write
\begin{equation*}
\varrho _{v,h}=\sum_{k=-\infty }^{\infty }2^{-kn}\sum_{m\in \mathbb{Z}^{n}}\widetilde{\varphi }_{k}\ast \varrho _{v,h}(2^{-k}m)\psi _{k}(\cdot -2^{-k}m)
\end{equation*}
for\ any\ $v\in \mathbb{Z},h\in \mathbb{Z}^{n}$.\ Therefore,
\begin{equation*}
f=\sum_{k=-\infty }^{\infty }\sum_{m\in \mathbb{Z}^{n}}S_{k,m}\psi _{k,m}=T_{\psi }S,
\end{equation*}
where $S=\{S_{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}$, with
\begin{equation*}
S_{k,m}=2^{-kn/2}\sum_{v=-\infty }^{\infty }\sum_{h\in \mathbb{Z}^{n}}\widetilde{\varphi }_{k}\ast \varrho _{v,h}(2^{-k}m)\lambda _{v,h}.
\end{equation*}
From Theorem \ref{phi-tran}, we have
\begin{equation*}
{\big\|}f|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\}){\big\|}={\big\|}T_{\psi }S|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|\lesssim {\big\|}S|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|.
\end{equation*}
But
\begin{equation*}
S_{k,m}=\sum_{v=-\infty }^{\infty }\sum_{h\in \mathbb{Z}^{n}}a_{Q_{k,m}P_{v,h}}\lambda _{v,h},
\end{equation*}
with
\begin{equation*}
a_{Q_{k,m}P_{v,h}}=\langle \varrho _{v,h},\widetilde{\varphi }_{k,m}\rangle ,\quad k,v\in \mathbb{Z},m,h\in \mathbb{Z}^{n}.
\end{equation*}
Applying Lemma \ref{matrix-est} and Theorem \ref{almost-diag-est} we find that
\begin{equation*}
{\big\|}S|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\}){\big\|}\lesssim {\big\|}\lambda |\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\}){\big\|}.
\end{equation*}

\textit{Step 2. Proof of }$\mathit{\mathrm{(ii)}}$\textit{.} We have
\begin{align*}
\langle f,b_{k,m}\rangle &=\sum_{v=-\infty }^{\infty }2^{-vn}\sum_{h\in \mathbb{Z}^{n}}\langle \psi _{v}(\cdot -2^{-v}h),b_{k,m}\rangle \widetilde{\varphi }_{v}\ast f(2^{-v}h) \\
&=\sum_{v=-\infty }^{\infty }\sum_{h\in \mathbb{Z}^{n}}\langle \psi _{v,h},b_{k,m}\rangle \lambda _{v,h} \\
&=\sum_{v=-\infty }^{\infty }\sum_{h\in \mathbb{Z}^{n}}a_{Q_{k,m}P_{v,h}}\lambda _{v,h},
\end{align*}
where
\begin{equation*}
a_{Q_{k,m}P_{v,h}}=\langle \psi _{v,h},b_{k,m}\rangle ,\quad \lambda _{v,h}=2^{-vn/2}\widetilde{\varphi }_{v}\ast f(2^{-v}h).
\end{equation*}
Again by Lemma \ref{matrix-est} and Theorem \ref{almost-diag-est} we find that
\begin{align*}
{\big\|\{\langle f,b_{k,m}\rangle \}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|} &\lesssim {\big\|\{\lambda _{v,h}\}_{v\in \mathbb{Z},h\in \mathbb{Z}^{n}}|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|} \\
&=c{\big\|\{(S_{\varphi })_{v,h}\}_{v\in \mathbb{Z},h\in \mathbb{Z}^{n}}|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|.}
\end{align*}
Applying Theorem \ref{phi-tran} we find that
\begin{equation*}
{\big\|}\{\langle f,b_{k,m}\rangle \}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\}){\big\|}\lesssim {\big\|}f|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\}){\big\|}.
\end{equation*}
The proof is complete.
\end{proof}

\begin{defn}
Let $\alpha _{1},\alpha _{2}\in \mathbb{R},0<p<\infty ,0<q<\infty $ and\ $N=\max \{J-n-\alpha _{1},-1\}$. Let $\{t_{k}\}$ be a $p$-admissible weight sequence. A function $a_{Q_{k,m}}$ is called a homogeneous smooth atom for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ supported near $Q_{k,m}$, $k\in \mathbb{Z}$ and $m\in \mathbb{Z}^{n}$, if
\begin{equation}
\mathrm{supp}(a_{Q_{k,m}})\subseteq 3Q_{k,m},  \label{supp-cond}
\end{equation}
\begin{equation}
|\partial ^{\beta }a_{Q_{k,m}}(x)|\leq 2^{k(|\beta |+n/2)}\text{\quad if\quad }0\leq |\beta |\leq \max (0,1+\lfloor \alpha _{2}\rfloor ),\quad x\in \mathbb{R}^{n}  \label{diff-cond}
\end{equation}
and if
\begin{equation}
\int_{\mathbb{R}^{n}}x^{\beta }a_{Q_{k,m}}(x)dx=0\text{\quad if\quad }0\leq |\beta |\leq N\text{ and }k\in \mathbb{Z}.
\label{mom-cond1}
\end{equation}
\end{defn}

A collection $\{a_{Q_{k,m}}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}$\ is called a family of homogeneous smooth atoms for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ if each $a_{Q_{k,m}}$ is a homogeneous smooth atom for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ supported near $Q_{k,m}$. We point out that the moment condition $\mathrm{\eqref{mom-cond1}}$ can be strengthened to
\begin{equation*}
\int_{\mathbb{R}^{n}}x^{\beta }a_{Q_{k,m}}(x)dx=0\text{\quad if\quad }0\leq |\beta |\leq \tilde{N}\text{ and }k\in \mathbb{Z},
\end{equation*}
and the regularity condition $\mathrm{\eqref{diff-cond}}$ can be strengthened to
\begin{equation*}
|\partial ^{\beta }a_{Q_{k,m}}(x)|\leq 2^{k(|\beta |+n/2)}\text{\quad if\quad }0\leq |\beta |\leq \tilde{K},\quad x\in \mathbb{R}^{n},
\end{equation*}
where $\tilde{K}$ and $\tilde{N}$ are arbitrarily fixed integers satisfying $\tilde{K}\geq \max (0,1+\lfloor \alpha _{2}\rfloor )$ and $\tilde{N}\geq \max \{J-n-\alpha _{1},-1\}$. If an atom $a$ is supported near $Q_{v,m}$, then we denote it by $a_{v,m}$.\vskip5pt

Now we come to the atomic decomposition theorem; see {\cite{D20}} for Besov spaces, and the same arguments are valid for Triebel-Lizorkin spaces.

\begin{thm}
\label{atomic-dec}Let $\alpha _{1}$, $\alpha _{2}\in \mathbb{R}$, $0<\theta \leq p<\infty $, $0<q<\infty $. Let $\{t_{k}\}_{k}\in \dot{X}_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma _{1}=\theta \left( p/\theta \right) ^{\prime }$ and\ $\sigma _{2}\geq p$.
Then for each $f\in \dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$, there exist a family\ $\{\varrho _{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}$ of homogeneous smooth atoms for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ and $\lambda =\{\lambda _{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}\in {\dot{f}}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ such that
\begin{equation*}
f=\sum\limits_{k=-\infty }^{\infty }\sum\limits_{m\in \mathbb{Z}^{n}}\lambda _{k,m}\varrho _{k,m},\text{\quad converging in }\mathcal{S}_{\infty }^{\prime }(\mathbb{R}^{n}),
\end{equation*}
and
\begin{equation*}
{\big\|}\{\lambda _{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\}){\big\|}\lesssim {\big\|}f|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|.
\end{equation*}
Conversely, for any family of homogeneous smooth atoms for $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ and any $\lambda =\{\lambda _{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}\in {\dot{f}}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$,
\begin{equation*}
{\big\|}\sum\limits_{k=-\infty }^{\infty }\sum\limits_{m\in \mathbb{Z}^{n}}\lambda _{k,m}\varrho _{k,m}|\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})\big\|\lesssim {\big\|}\{\lambda _{k,m}\}_{k\in \mathbb{Z},m\in \mathbb{Z}^{n}}|\dot{f}_{p,q}(\mathbb{R}^{n},\{t_{k}\}){\big\|}.
\end{equation*}
\end{thm}

\begin{rem}
$\mathrm{(i)}$ Further results, concerning, for instance, characterizations via oscillations, box splines and tensor-product B-spline representations,\ are given\ in {\cite{D7}.}\newline $\mathrm{(ii)}$ We mention that the techniques of {\cite{HN07}} are incapable of dealing with spaces of variable smoothness. Also, our assumptions on the weight sequence $\{t_{k}\}$\ play an essential role in the paper. \newline $\mathrm{(iii)}$ We draw the reader's attention to the paper {\cite{LSTDW10}}, where generalized Besov-type and Triebel-Lizorkin-type spaces are studied.
They assume that the weight sequence $\{t_{k}\}$ lies in a class different from the class $\dot{X}_{\alpha ,\sigma ,p}$.
\end{rem}

\section{The non-homogeneous space $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$}

In this section, we present the inhomogeneous versions of the results given above. Let $\Phi ,\psi ,\varphi $ and $\Psi $ satisfy
\begin{equation}
\Phi ,\Psi ,\varphi ,\psi \in \mathcal{S}(\mathbb{R}^{n}),  \label{Ass1.1}
\end{equation}
\begin{equation}
\text{supp}(\mathcal{F}(\Phi ))\cup \text{supp}(\mathcal{F}(\Psi ))\subset \overline{B(0,2)},\text{\quad }|\mathcal{F}(\Phi )(\xi )|,|\mathcal{F}(\Psi )(\xi )|\geq c,  \label{Ass2.1}
\end{equation}
if $|\xi |\leq 5/3$, and
\begin{equation}
\text{supp}(\mathcal{F}(\varphi ))\cup \text{supp}(\mathcal{F}(\psi ))\subset \overline{B(0,2)}\backslash B(0,1/2),\text{\quad }|\mathcal{F}(\varphi )(\xi )|,|\mathcal{F}(\psi )(\xi )|\geq c,  \label{Ass3.1}
\end{equation}
if $3/5\leq |\xi |\leq 5/3$, such that
\begin{equation}
\overline{\mathcal{F}(\Phi )(\xi )}\mathcal{F}(\Psi )(\xi )+\sum_{k=1}^{\infty }\overline{\mathcal{F}(\varphi )(2^{-k}\xi )}\mathcal{F}(\psi )(2^{-k}\xi )=1,\quad \xi \in \mathbb{R}^{n},  \label{Ass4.1}
\end{equation}
where $c>0$. Let $\Phi ,\varphi \in \mathcal{S}(\mathbb{R}^{n})$\ satisfy, respectively, $\mathrm{\eqref{Ass2.1}}$ and $\mathrm{\eqref{Ass3.1}}$. We recall that by \cite[pp. 130--131]{FJ90} or \cite[Lemma 6.9]{FrJaWe01}, there exist functions $\Psi \in \mathcal{S}(\mathbb{R}^{n})$ satisfying $\mathrm{\eqref{Ass2.1}}$ and $\psi \in \mathcal{S}(\mathbb{R}^{n})$ satisfying $\mathrm{\eqref{Ass3.1}}$ such that $\mathrm{\eqref{Ass4.1}}$ holds.
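In particular, $\mathrm{\eqref{Ass4.1}}$ encodes a Calderón-type reproducing formula (this is standard, see \cite{FJ90}): setting $\widetilde{\Phi }(x)=\overline{\Phi (-x)}$ and $\widetilde{\varphi }(x)=\overline{\varphi (-x)}$, so that $\mathcal{F}(\widetilde{\Phi })=\overline{\mathcal{F}(\Phi )}$ and $\mathcal{F}(\widetilde{\varphi })=\overline{\mathcal{F}(\varphi )}$, one has
\begin{equation*}
f=\widetilde{\Phi }\ast \Psi \ast f+\sum_{k=1}^{\infty }\widetilde{\varphi }_{k}\ast \psi _{k}\ast f\quad \text{in }\mathcal{S}^{\prime }(\mathbb{R}^{n})
\end{equation*}
for every $f\in \mathcal{S}^{\prime }(\mathbb{R}^{n})$, since the Fourier transform of the right-hand side equals the left-hand side of $\mathrm{\eqref{Ass4.1}}$ multiplied by $\mathcal{F}(f)$.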
The $\varphi $-transform $S_{\varphi }$ is defined by setting
\begin{equation*}
(S_{\varphi }f)_{0,m}=\langle f,\Psi _{m}\rangle ,
\end{equation*}
with $\Psi _{m}(x)=\Psi (x-m)$, and
\begin{equation*}
(S_{\varphi }f)_{k,m}=\langle f,\varphi _{k,m}\rangle ,
\end{equation*}
where $\varphi _{k,m}(x)=2^{kn/2}\varphi (2^{k}x-m)$, $k\in \mathbb{N}$ and $m\in \mathbb{Z}^{n}$. The inverse $\varphi $-transform $T_{\psi }$ is defined by
\begin{equation*}
T_{\psi }\lambda =\sum_{m\in \mathbb{Z}^{n}}\lambda _{0,m}\Psi _{m}+\sum_{k=1}^{\infty }\sum_{m\in \mathbb{Z}^{n}}\lambda _{k,m}\psi _{k,m},
\end{equation*}
with $\lambda =\{\lambda _{k,m}\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}\subset \mathbb{C}$; see again \cite{FJ90}. Now we present the inhomogeneous version of Definition \ref{Tyulenev-class}.

\begin{defn}
\label{Tyulenev-class-inho}Let $\alpha _{1}$, $\alpha _{2}\in \mathbb{R}$, $\sigma _{1}$, $\sigma _{2}$ $\in (0,+\infty ]$, $\alpha =(\alpha _{1},\alpha _{2})$\ and let $\sigma =(\sigma _{1},\sigma _{2})$. We let $X_{\alpha ,\sigma ,p}=X_{\alpha ,\sigma ,p}(\mathbb{R}^{n})$ denote the set of $p$-admissible weight sequences $\{t_{k}\}_{k\in \mathbb{N}_{0}}$ satisfying $\mathrm{\eqref{Asum1}}$ and $\mathrm{\eqref{Asum2}}$ for any $0\leq k\leq j$, with constants $C_{1},C_{2}>0$ independent of both indices $k$ and $j$.
\end{defn}

\begin{ex}
A sequence $\{\gamma _{j}\}_{j\in \mathbb{N}_{0}}$ of positive real numbers is said to be admissible if there exist two positive constants $d_{0}$ and $d_{1}$ such that
\begin{equation}
d_{0}\gamma _{j}\leq \gamma _{j+1}\leq d_{1}\gamma _{j},\quad j\in \mathbb{N}_{0}.
\label{Farkas-Leop}
\end{equation}
For an admissible sequence $\{\gamma _{j}\}_{j\in \mathbb{N}_{0}}$, let
\begin{equation*}
\underline{\gamma }_{j}=\inf_{k\geq 0}\frac{\gamma _{j+k}}{\gamma _{k}}\quad \text{and}\quad \overline{\gamma }_{j}=\sup_{k\geq 0}\frac{\gamma _{j+k}}{\gamma _{k}},\quad j\in \mathbb{N}_{0}.
\end{equation*}
Let
\begin{equation*}
\alpha _{\gamma }=\lim_{j\longrightarrow \infty }\frac{\log \overline{\gamma }_{j}}{j}\quad \text{and}\quad \beta _{\gamma }=\lim_{j\longrightarrow \infty }\frac{\log \underline{\gamma }_{j}}{j}
\end{equation*}
be the upper and lower Boyd indices of the given sequence $\{\gamma _{j}\}_{j\in \mathbb{N}_{0}}$, respectively. Then
\begin{equation*}
\underline{\gamma }_{j}\gamma _{k}\leq \gamma _{j+k}\leq \overline{\gamma }_{j}\gamma _{k},\quad j,k\in \mathbb{N}_{0},
\end{equation*}
and, for each $\varepsilon >0$,
\begin{equation*}
c_{1}2^{(\beta _{\gamma }-\varepsilon )j}\leq \underline{\gamma }_{j}\leq \overline{\gamma }_{j}\leq c_{2}2^{(\alpha _{\gamma }+\varepsilon )j},\quad j\in \mathbb{N}_{0},
\end{equation*}
for some constants $c_{1}=c_{1}(\varepsilon )>0$ and $c_{2}=c_{2}(\varepsilon )>0$. Also, $\underline{\gamma }_{1}$ and $\overline{\gamma }_{1}$ are the best possible constants $d_{0}$ and $d_{1}$ in \eqref{Farkas-Leop}, respectively. \newline Clearly the sequence $\{\gamma _{j}\}_{j\in \mathbb{N}_{0}}$ lies in $X_{\alpha ,\sigma ,p}$ for$\ \alpha _{1}=\beta _{\gamma }-\varepsilon ,\alpha _{2}=\alpha _{\gamma }+\varepsilon $ and $0<p,\sigma _{1},\sigma _{2}\leq \infty $.\newline Admissible sequences of this type are used in \cite{FL06} to study Besov and Triebel-Lizorkin spaces of generalized smoothness, see also \cite{HaS08}.\newline Let us consider some examples of admissible sequences.
The sequence $\{\gamma _{j}\}_{j\in \mathbb{N}_{0}}$,
\begin{equation*}
\gamma _{j}=2^{sj}(1+j)^{b}(1+\log (1+j))^{c},\quad j\in \mathbb{N}_{0},
\end{equation*}
with arbitrary fixed real numbers $s,b$ and $c$, is an admissible sequence with
\begin{equation*}
\beta _{\gamma }=\alpha _{\gamma }=s.
\end{equation*}
\end{ex}

\begin{ex}
\label{Example1 copy(1)}Let $0<r<p<\infty $, let $\omega $ be a weight with $\omega ^{p}\in A_{\frac{p}{r}}(\mathbb{R}^{n})$, and let
\begin{equation*}
\{s_{k}\}=\{2^{ks}\omega ^{p}(2^{-k})\}_{k\in \mathbb{N}_{0}},\quad s\in \mathbb{R}.
\end{equation*}
Obviously, $\{s_{k}\}_{k\in \mathbb{N}_{0}}$ lies in $X_{\alpha ,\sigma ,p}$ for $\alpha _{1}=\alpha _{2}=s$, $\sigma =(r(p/r)^{\prime },p)$.
\end{ex}

Now, we define the spaces under consideration.

\begin{defn}
\label{B-F-def-inh}Let $0<p<\infty $ and $0<q\leq \infty $. Let $\{t_{k}\}_{k\in \mathbb{N}_{0}}$ be a $p$-admissible weight sequence. Let $\Phi ,\varphi \in \mathcal{S}(\mathbb{R}^{n})$\ satisfy $\mathrm{\eqref{Ass2.1}}$ and $\mathrm{\eqref{Ass3.1}}$, respectively, and put $\varphi _{k}=2^{kn}\varphi (2^{k}\cdot ),k\in \mathbb{N}_{0}$. The Triebel-Lizorkin space $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$\ is the collection of all $f\in \mathcal{S}^{\prime }(\mathbb{R}^{n})$\ such that
\begin{equation*}
\big\|f|F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})\big\|=\Big\|\Big(\sum\limits_{k=0}^{\infty }t_{k}^{q}|\varphi _{k}\ast f|^{q}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\Big\|<\infty ,
\end{equation*}
with the usual modification if $q=\infty $, where $\varphi _{0}$ is replaced by $\Phi $.
\end{defn}

Now we introduce the inhomogeneous sequence spaces $f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$. Let $0<p<\infty $ and $0<q\leq \infty $. Let $\{t_{k}\}_{k\in \mathbb{N}_{0}}$ be a $p$-admissible weight sequence.
Then for all complex-valued sequences $\lambda =\{\lambda _{k,m}\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}\subset \mathbb{C}$ we define
\begin{equation*}
f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})=\Big\{\lambda :\big\|\lambda |f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})\big\|<\infty \Big\},
\end{equation*}
where
\begin{equation*}
\big\|\lambda |f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})\big\|=\Big\|\Big(\sum_{k=0}^{\infty }2^{knq/2}\sum\limits_{m\in \mathbb{Z}^{n}}t_{k}^{q}|\lambda _{k,m}|^{q}\chi _{k,m}\Big)^{1/q}|L_{p}(\mathbb{R}^{n})\big\|.
\end{equation*}
We have the following analogue of Theorem \ref{phi-tran}.

\begin{thm}
\label{phi-tran-inho}Let $\alpha =(\alpha _{1},\alpha _{2})\in \mathbb{R}^{2},0<\theta \leq p<\infty $ and$\ 0<q<\infty $. Let $\{t_{k}\}_{k\in \mathbb{N}_{0}}\in X_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p/\theta \right) ^{\prime },\sigma _{2}\geq p)$.\ Let $\varphi $, $\psi $ satisfy $\mathrm{\eqref{Ass1.1}}$\ through\ $\mathrm{\eqref{Ass4.1}}$. The operators
\begin{equation*}
S_{\varphi }:F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})\rightarrow f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})
\end{equation*}
and
\begin{equation*}
T_{\psi }:f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})\rightarrow F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})
\end{equation*}
are bounded. Furthermore, $T_{\psi }\circ S_{\varphi }$ is the identity on $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$.
\end{thm}

As a consequence, the analogue of Corollary \ref{Indpendent} is now clear. We obtain the following useful properties of these function spaces.

\begin{thm}
Let $\alpha =(\alpha _{1},\alpha _{2})\in \mathbb{R}^{2},0<\theta \leq p<\infty $ and $0<q<\infty $.
Let $\{t_{k}\}_{k\in \mathbb{N}_{0}}\in X_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p/\theta \right) ^{\prime },\sigma _{2}\geq p)$. Then the spaces $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ are quasi-Banach spaces. They are Banach spaces if $1\leq p<\infty $ and $1\leq q<\infty $.
\end{thm}

Let $0<\theta \leq p<\infty $ and $0<q<\infty $. Let\ $\{t_{k}\}_{k\in \mathbb{N}_{0}}\in X_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p/\theta \right) ^{\prime },\sigma _{2}\geq p)$ and $\alpha =(\alpha _{1},\alpha _{2})\in \mathbb{R}^{2}$.\ As in Theorem \ref{embeddings-S-inf} we have the embedding
\begin{equation*}
\mathcal{S}(\mathbb{R}^{n})\hookrightarrow F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}}).
\end{equation*}
In addition, $\mathcal{S}(\mathbb{R}^{n})$ is dense in $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$. Also, if $0<\theta \leq p<\infty $ and $0<q<\infty $, then
\begin{equation*}
F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})\hookrightarrow \mathcal{S}^{\prime }(\mathbb{R}^{n}).
\end{equation*}
All the results in Subsection 3.2 remain true in the inhomogeneous case. We begin with the following elementary embedding, whose proof can be obtained by using the properties of sequence Lebesgue spaces.

\begin{thm}
\label{elem-embedding copy(1)}Let $0<\theta \leq p<\infty $ and $0<q\leq r<\infty $. Let $\{t_{k}\}_{k\in \mathbb{N}_{0}}\in X_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p/\theta \right) ^{\prime },\sigma _{2}\geq p)$. We have
\begin{equation*}
F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})\hookrightarrow F_{p,r}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}}).
\end{equation*}
\end{thm}

As in Subsection 3.2, we obtain the following Sobolev-type embedding. We set
\begin{equation*}
w_{k,Q}(p_{1})=\Big(\int_{Q}w_{k}^{p_{1}}(x)dx\Big)^{1/p_{1}}\quad \text{and\quad }t_{k,Q}(p_{0})=\Big(\int_{Q}t_{k}^{p_{0}}(x)dx\Big)^{1/p_{0}},
\end{equation*}
where $Q\in \mathcal{Q}$ with $\ell (Q)=2^{-k},k\in \mathbb{N}_{0}$.

\begin{thm}
\label{Sobolev-embedding-sequence copy(1)}Let $0<\theta \leq p_{0}<p_{1}<\infty $ and $0<q,r<\infty $. Let\ $\{t_{k}\}_{k\in \mathbb{N}_{0}}$ be a $p_{0}$-admissible weight sequence satisfying $\mathrm{\eqref{Asum1}}$\ with $p=p_{0}$, $\sigma _{1}=\theta \left( p_{0}/\theta \right) ^{\prime }$ and $j=k\geq 0$. Let\ $\{w_{k}\}$ be a $p_{1}$-admissible weight sequence satisfying $\mathrm{\eqref{Asum1}}$\ with $p=p_{1}$, $\sigma _{1}=\theta \left( p_{1}/\theta \right) ^{\prime }$ and $j=k\geq 0$. If $w_{k,Q}(p_{1})\lesssim t_{k,Q}(p_{0})$ for all $Q\in \mathcal{Q}$ with $\ell (Q)=2^{-k},k\in \mathbb{N}_{0}$, then we have
\begin{equation*}
F_{p_{0},q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})\hookrightarrow F_{p_{1},r}(\mathbb{R}^{n},\{w_{k}\}_{k\in \mathbb{N}_{0}}).
\end{equation*}
\end{thm}

From Theorems {\ref{phi-tran-inho}} and \ref{Sobolev-embedding-sequence copy(1)}, we obtain the following Sobolev-type embedding for $F_{p,q}(\mathbb{R}^{n},\{t_{k}\})$.

\begin{thm}
\label{Sobolev-embedding copy(1)}Let $0<\theta \leq p_{0}<p_{1}<\infty $ and $0<q,r<\infty $. Let $\{t_{k}\}_{k\in \mathbb{N}_{0}}\in X_{\alpha _{0},\sigma ,p_{0}}$ be a $p_{0}$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p_{0}/\theta \right) ^{\prime },\sigma _{2}\geq p_{0})$ and $\alpha _{0}=(\alpha _{1,0},\alpha _{2,0})\in \mathbb{R}^{2}$.
Let\ $\{w_{k}\}_{k\in \mathbb{N}_{0}}\in X_{\alpha _{1},\sigma ,p_{1}}$ be a $p_{1}$-admissible weight sequence with $\sigma =(\sigma _{1}=\theta \left( p_{1}/\theta \right) ^{\prime },\sigma _{2}\geq p_{1})$ and $\alpha _{1}=(\alpha _{1,1},\alpha _{2,1})\in \mathbb{R}^{2}$. Then the embedding
\begin{equation*}
F_{p_{0},q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})\hookrightarrow F_{p_{1},r}(\mathbb{R}^{n},\{w_{k}\}_{k\in \mathbb{N}_{0}})
\end{equation*}
holds if
\begin{equation*}
w_{k,Q}(p_{1})\lesssim t_{k,Q}(p_{0})
\end{equation*}
for all $Q\in \mathcal{Q}$ with $\ell (Q)=2^{-k}$, $k\in \mathbb{N}_{0}$.
\end{thm}

In the sequel, we shall say that an operator $A$ is associated with the matrix
\begin{equation*}
\{a_{Q_{k,m}P_{v,h}}\}_{k,v\in \mathbb{N}_{0},m,h\in \mathbb{Z}^{n}}
\end{equation*}
if for all sequences $\lambda =\{\lambda _{k,m}\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}\subset \mathbb{C}$,
\begin{equation*}
A\lambda =\{(A\lambda )_{k,m}\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}=\Big\{\sum_{v=0}^{\infty }\sum_{h\in \mathbb{Z}^{n}}a_{Q_{k,m}P_{v,h}}\lambda _{v,h}\Big\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}.
\end{equation*}
We say that $A$, with associated matrix $\{a_{Q_{k,m}P_{v,h}}\}_{k,v\in \mathbb{N}_{0},m,h\in \mathbb{Z}^{n}}$, is almost diagonal on $f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ if there exists $\varepsilon >0$ such that
\begin{equation*}
\sup_{k,v\in \mathbb{N}_{0},m,h\in \mathbb{Z}^{n}}\frac{|a_{Q_{k,m}P_{v,h}}|}{\omega _{Q_{k,m}P_{v,h}}(\varepsilon )}<\infty ,
\end{equation*}
where $\omega _{Q_{k,m}P_{v,h}}(\varepsilon )$ is as in Section 5. Let $\alpha _{1},\alpha _{2}\in \mathbb{R},0<\theta \leq p<\infty $ and $0<q\leq \infty $.
Let $\{t_{k}\}_{k\in \mathbb{N}_{0}}\in X_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma _{1}=\theta \left( p/\theta \right) ^{\prime }$ and\ $\sigma _{2}\geq p$. It is obvious that an operator $A$ on $f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ given by an almost diagonal matrix is bounded. Let $J$ be defined as in Section 5. We now present the inhomogeneous version of Definition \ref{Atom-Def}.

\begin{defn}
\label{Atom-Def-inho}Let\ $\alpha _{1},\alpha _{2}\in \mathbb{R},0<p<\infty $ and $0<q\leq \infty $. Let $\{t_{k}\}_{k\in \mathbb{N}_{0}}$ be a $p$-admissible weight sequence. Let $N=\max \{J-n-\alpha _{1},-1\}$ and $\alpha _{2}^{\ast }=\alpha _{2}-\lfloor \alpha _{2}\rfloor $.\newline $\mathrm{(i)}$\ We say that $\varrho _{Q_{k,m}}$, $k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}$, is an inhomogeneous smooth synthesis molecule for $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$\ supported near $Q_{k,m}$ if it satisfies, for some real number $\delta \in (\alpha _{2}^{\ast },1]$ and some real number $M\in (J,\infty )$, conditions $\mathrm{\eqref{mom-cond}}$, $\mathrm{\eqref{cond1}}$, $\mathrm{\eqref{cond2}}$ and $\mathrm{\eqref{cond3}}$ if $k\in \mathbb{N}$. If $k=0$ we assume $\mathrm{\eqref{cond2}}$, $\mathrm{\eqref{cond3}}$ and
\begin{equation*}
|\varrho _{Q_{0,m}}(x)|\leq (1+|x-x_{Q_{0,m}}|)^{-M}.
\end{equation*}
A collection $\{\varrho _{Q_{k,m}}\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}$ is called a family of inhomogeneous smooth synthesis molecules for $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ if each $\varrho _{Q_{k,m}}$ is an inhomogeneous smooth synthesis molecule for $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ supported near $Q_{k,m}$.
\newline $\mathrm{(ii)}$\ We say that $b_{Q_{k,m}}$, $k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}$, is an inhomogeneous smooth analysis molecule for $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ supported near $Q_{k,m}$ if it satisfies, for some $\kappa \in ((J-\alpha _{2})^{\ast },1]$ and some $M\in (J,\infty )$, conditions $\mathrm{\eqref{mom-cond2}}$, $\mathrm{\eqref{cond1.1}}$, $\mathrm{\eqref{cond1.2}}$ and $\mathrm{\eqref{cond1.3}}$ if $k\in \mathbb{N}$. If $k=0$ we assume $\mathrm{\eqref{cond1.2}}$, $\mathrm{\eqref{cond1.3}}$ and
\begin{equation*}
|b_{Q_{0,m}}(x)|\leq (1+|x-x_{Q_{0,m}}|)^{-M}.
\end{equation*}
A collection $\{b_{Q_{k,m}}\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}$ is called a family of inhomogeneous smooth analysis molecules for $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ if each $b_{Q_{k,m}}$ is an inhomogeneous smooth analysis molecule for $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ supported near $Q_{k,m}$.
\end{defn}

As a consequence, we formulate the inhomogeneous counterpart of Theorem \ref{molecules-dec}.

\begin{thm}
\label{molecules-dec-inho}Let $\alpha _{1}$, $\alpha _{2}\in \mathbb{R},0<\theta \leq p<\infty $ and $0<q<\infty $. Let $\{t_{k}\}_{k\in \mathbb{N}_{0}}\in X_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma _{1}=\theta \left( p/\theta \right) ^{\prime }$ and\ $\sigma _{2}\geq p$. Let $J,M,N,\delta $ and $\kappa $ be as in Definition {\ref{Atom-Def-inho}}.
\newline $\mathrm{(i)}$\ If $f=\sum_{k=0}^{\infty }\sum_{m\in \mathbb{Z}^{n}}\varrho _{k,m}\lambda _{k,m}$, where $\{\varrho _{k,m}\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}$ is a family of inhomogeneous smooth synthesis molecules for $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$, then for all $\lambda \in f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$
\begin{equation*}
{\big\|}f|F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}}){\big\|}\lesssim {\big\|}\lambda |f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}}){\big\|}.
\end{equation*}
$\mathrm{(ii)}$\ Let $\{b_{k,m}\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}$ be a family of inhomogeneous\ smooth analysis molecules.\ Then for all\ $f\in F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$
\begin{equation*}
{\big\|}\{\langle f,b_{k,m}\rangle \}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}|f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}}){\big\|}\lesssim {\big\|}f|F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})\big\|.
\end{equation*}
\end{thm}

Now we present the analogue of the smooth atomic decomposition. First we need the definition of inhomogeneous smooth atoms.

\begin{defn}
Let $\alpha _{1},\alpha _{2}\in \mathbb{R},0<p<\infty ,0<q\leq \infty $ and\ $N=\max \{J-n-\alpha _{1},-1\}$. Let $\{t_{k}\}_{k\in \mathbb{N}_{0}}$ be a $p$-admissible weight sequence. A function $a_{Q_{k,m}}$ is called an inhomogeneous smooth atom for $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ supported near $Q_{k,m}$, $k\in \mathbb{N}_{0}$ and $m\in \mathbb{Z}^{n}$, if it satisfies $\mathrm{\eqref{supp-cond}}$, $\mathrm{\eqref{diff-cond}}$\ and $\mathrm{\eqref{mom-cond1}}\ $if $k\in \mathbb{N}$.
If $k=0$ we assume $\mathrm{\eqref{supp-cond}}$ and $\mathrm{\eqref{diff-cond}}$.
\end{defn}

A collection $\{a_{Q_{k,m}}\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}$ is called a family of inhomogeneous smooth atoms for $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ if each $a_{Q_{k,m}}$ is an inhomogeneous smooth atom for $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ supported near $Q_{k,m}$. Now we come to the atomic decomposition theorem.

\begin{thm}
\label{atomic-dec copy(1)}Let $\alpha _{1}$, $\alpha _{2}\in \mathbb{R}$, $0<\theta \leq p<\infty $, $0<q<\infty $. Let $\{t_{k}\}_{k\in \mathbb{N}_{0}}\in X_{\alpha ,\sigma ,p}$ be a $p$-admissible weight sequence with $\sigma _{1}=\theta \left( p/\theta \right) ^{\prime }$ and\ $\sigma _{2}\geq p$. Then for each $f\in F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$, there exist a family\ $\{\varrho _{k,m}\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}$ of inhomogeneous smooth atoms for the space $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ and $\lambda =\{\lambda _{k,m}\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}\in f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ such that
\begin{equation*}
f=\sum\limits_{k=0}^{\infty }\sum\limits_{m\in \mathbb{Z}^{n}}\lambda _{k,m}\varrho _{k,m},\text{\quad converging in }\mathcal{S}^{\prime }(\mathbb{R}^{n}),
\end{equation*}
and
\begin{equation*}
{\big\|}\{\lambda _{k,m}\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}|f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}}){\big\|}\lesssim {\big\|}f|F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})\big\|.
\end{equation*}
Conversely, for any family of inhomogeneous smooth atoms for the space $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ and any $\lambda =\{\lambda _{k,m}\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}\in f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ we have
\begin{align*}
&
{\big\|}\sum\limits_{k=0}^{\infty }\sum\limits_{m\in \mathbb{Z}^{n}}\lambda _{k,m}\varrho _{k,m}|F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}}){\big\|}\\ & \lesssim {\big\|}\{\lambda _{k,m}\}_{k\in \mathbb{N}_{0},m\in \mathbb{Z}^{n}}|f_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}}){\big\|}. \end{align*} \end{thm} \begin{rem} One of the applications of the atomic and molecular decompositions of the spaces $\dot{F}_{p,q}(\mathbb{R}^{n},\{t_{k}\})$ and $F_{p,q}(\mathbb{R}^{n},\{t_{k}\}_{k\in \mathbb{N}_{0}})$ is the study of the continuity of singular integral operators of non-convolution type on such function spaces, where it is enough to show that the operator maps every family of smooth atoms into a family of smooth molecules. For classical Triebel-Lizorkin spaces we refer the reader to, e.g., {\cite{FJHW88} and \cite{Torres91}.}\\ It is also interesting to study other properties and characterizations of these function spaces, such as the wavelet characterization; see, e.g., Lemari\'{e} and Meyer {\cite{LM88}}, and Triebel {\cite{T4}}. \end{rem} \subsection*{Acknowledgment} The author would like to thank W. Sickel and A. Tyulenev for valuable discussions and suggestions. \\ We thank the referees for carefully reading our paper and for their several useful suggestions and comments, which improved the exposition of the paper substantially.\\ This work is funded by the General Direction of Higher Education and Training under Grant No. C00L03UN280120220004 and by The General Directorate of Scientific Research and Technological Development, Algeria. \end{document}
\begin{document} \widetext \title{Enhanced inequalities about arithmetic and geometric means} \author{Fang Dai$^{1}$, Li-Gang Xia \\ 1. Futian middle school, Ji'an City, Jiangxi Province, China, 343000} \begin{abstract} For $n$ positive numbers ($a_k$, $1\leq k \leq n$), enhanced inequalities about the arithmetic mean ($A_n \equiv \frac{\sum_ka_k}{n}$) and the geometric mean ($G_n\equiv \sqrt[n]{\Pi_ka_k}$) are found if some numbers are known, namely, \begin{equation} \frac{G_n}{A_n} \leq (\frac{n-\sum_{k=1}^mr_k}{n-m})^{1-\frac{m}{n}}(\Pi_{k=1}^mr_k)^{\frac{1}{n}}\leq 1 \:, \nonumber \end{equation} if we know $a_k=A_nr_k$ ($1\leq k\leq m\leq n$) for instance, and \begin{equation} \frac{G_n}{A_n} \leq \frac{1}{(1-\frac{m}{n})\Pi_{k=1}^mr_k^{\frac{-1}{n-m}}+\frac{1}{n}\sum_{k=1}^mr_k}\leq 1 \: ,\nonumber \end{equation} if we know $a_k=G_nr_k$ ($1\leq k\leq m \leq n$) for instance. These bounds are better than those derived from S.~H.~Tung's work~\cite{Tung}. \end{abstract} \maketitle \section{Introduction} Let $a_1,a_2,\ldots,a_n$ denote $n$ positive numbers. Let $A_n$ be their arithmetic mean, $\frac{\sum_ka_k}{n}$, and $G_n$ be their geometric mean, $\sqrt[n]{\Pi_ka_k}$. We shall prove the following inequalities. \begin{equation}\label{eq:xia1} \frac{G_n}{A_n} \leq (\frac{n-\sum_{k=1}^mr_k}{n-m})^{1-\frac{m}{n}}(\Pi_{k=1}^mr_k)^{\frac{1}{n}}\leq 1 \: . \end{equation} if we know $a_k=A_nr_k$ ($1\leq k\leq m\leq n$) for instance. \begin{equation}\label{eq:xia2} \frac{G_n}{A_n} \leq \frac{1}{(1-\frac{m}{n})\Pi_{k=1}^mr_k^{\frac{-1}{n-m}}+\frac{1}{n}\sum_{k=1}^mr_k} \leq 1 \end{equation} if we know $a_k=G_nr_k$ ($1\leq k\leq m \leq n$) for instance. S. H. Tung~\cite{Tung} obtained the following lower bound for $A_n-G_n$ in terms of the smallest value, $a$, and the largest value, $A$. \begin{equation}\label{eq:Tung0} A_n-G_n \geq \frac{(\sqrt{A}-\sqrt{a})^2}{n} \: . \end{equation} We will see that our results are better than this bound. 
\section{Proof of the inequalities} Suppose we know the first $m$ numbers, $a_1,a_2,\ldots,a_m$. We can construct the following inequality. \begin{eqnarray} && \sum_{k=1}^m\lambda_k^{n-m}a_k + \frac{1}{\Pi_{i=1}^m\lambda_i}\sum_{k=m+1}^n a_k \\ = && \sum_{k=1}^m(\lambda_k^{n-m}-\frac{1}{\Pi_{i=1}^m\lambda_i})a_k + \frac{1}{\Pi_{i=1}^m\lambda_i}\sum_{k=1}^n a_k \label{eq:step2}\\ \geq && n\sqrt[n]{\Pi_ka_k} \end{eqnarray} The last step is the AM-GM inequality applied to the $n$ positive numbers $\lambda_k^{n-m}a_k$ ($1\leq k\leq m$) and $a_k/\Pi_{i=1}^m\lambda_i$ ($m<k\leq n$), whose product equals $\Pi_ka_k$ because the $\lambda$ factors cancel: $\Pi_{k=1}^m\lambda_k^{n-m}\cdot(\Pi_{i=1}^m\lambda_i)^{-(n-m)}=1$. Suppose we know $a_k=A_nr_k$ ($k=1,2,\ldots,m$). Inserting them into Eq.~\ref{eq:step2}, we obtain an upper bound of $G_n/A_n$ as a function of $\lambda_1,\lambda_2,\ldots,\lambda_m$. \begin{equation} \frac{G_n}{A_n} \leq \frac{1}{n}\sum_{k=1}^m\lambda_k^{n-m}r_k+\frac{1}{\Pi_{k=1}^m\lambda_k}\frac{n-\sum_{k=1}^mr_k}{n} \equiv f(\lambda_1,\lambda_2,\ldots,\lambda_m) \: . \end{equation} $\frac{\partial f}{\partial \lambda_i}=0$ ($i=1,2,\ldots,m$) gives the best choice, namely, \begin{eqnarray} && \lambda_i = \left(\frac{n-\sum_{k=1}^mr_k}{n-m}\Pi_{k=1}^mr_k^{\frac{1}{n-m}}\right)^{\frac{1}{n}}r_i^{\frac{-1}{n-m}} \: , \: (i=1,2,\ldots,m) \end{eqnarray} and hence the bound in Ineq.~\ref{eq:xia1}. Similarly suppose we know $a_k=G_nr_k$ ($k=1,2,\ldots,m$). Inserting them into Eq.~\ref{eq:step2}, we obtain an upper bound of $G_n/A_n$ as a function of $\lambda_1,\lambda_2,\ldots,\lambda_m$. \begin{equation} \frac{G_n}{A_n} \leq \frac{1}{\Pi_{k=1}^m\lambda_k(1-\frac{1}{n}\sum_{k=1}^mr_k\lambda_k^{n-m})+\frac{1}{n}\sum_{k=1}^mr_k} \equiv g(\lambda_1,\lambda_2,\ldots,\lambda_m) \: . \end{equation} $\frac{\partial g}{\partial \lambda_i}=0$ ($i=1,2,\ldots,m$) gives the best choice, namely, \begin{eqnarray} && \lambda_i = r_i^{\frac{-1}{n-m}} \: , \: (i=1,2,\ldots,m) \end{eqnarray} and hence the bound in Ineq.~\ref{eq:xia2}.
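Both bounds are easy to check numerically. The following sketch (plain Python; the helper names and the sample sizes $n=10$, $m=3$ are our illustrative choices, not taken from the paper) draws random positive numbers, forms $r_k$ as in each case, and verifies the two chains $G_n/A_n \leq \text{bound} \leq 1$ of Ineqs.~\ref{eq:xia1} and \ref{eq:xia2}.

```python
import math
import random

def means(a):
    """Arithmetic and geometric means of a list of positive numbers."""
    n = len(a)
    A = sum(a) / n
    G = math.exp(sum(math.log(x) for x in a) / n)
    return A, G

def bound1(n, r):
    """Upper bound of G_n/A_n when a_k = A_n r_k for k = 1..m (Ineq. 1)."""
    m = len(r)
    return ((n - sum(r)) / (n - m)) ** (1 - m / n) * math.prod(r) ** (1 / n)

def bound2(n, r):
    """Upper bound of G_n/A_n when a_k = G_n r_k for k = 1..m (Ineq. 2)."""
    m = len(r)
    prod = math.prod(rk ** (-1 / (n - m)) for rk in r)
    return 1 / ((1 - m / n) * prod + sum(r) / n)

def check(n=10, m=3, trials=500, seed=1):
    """Verify G_n/A_n <= bound <= 1 for both bounds on random data."""
    rng = random.Random(seed)
    eps = 1e-12
    for _ in range(trials):
        a = [rng.uniform(0.1, 10.0) for _ in range(n)]
        A, G = means(a)
        b1 = bound1(n, [a[k] / A for k in range(m)])  # case a_k = A_n r_k
        b2 = bound2(n, [a[k] / G for k in range(m)])  # case a_k = G_n r_k
        assert G / A <= b1 + eps <= 1 + 2 * eps
        assert G / A <= b2 + eps <= 1 + 2 * eps
    return True
```

As a single-instance comparison with Tung's form, taking $n=10$, $m=2$, $r_1=5$, $r_2=0.5$ in the first case gives a bound of about $0.69$, against $1-\frac{1}{10}(\sqrt{5}-\sqrt{0.5})^2\approx 0.77$, consistent with the claim that the new bound is sharper.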
For comparison, Tung's inequality can be written in the following form, \begin{equation} \frac{G_n}{A_n} \leq 1 - \frac{1}{n}(\sqrt{r_1}-\sqrt{r_2})^2 \:, \label{eq:tung1} \end{equation} if we know $A=A_nr_1$ and $a=A_nr_2$ with $0<r_2\leq 1\leq r_1 \leq n-r_2$, or \begin{equation} \frac{G_n}{A_n} \leq \frac{1}{1+\frac{1}{n}(\sqrt{r_1}-\sqrt{r_2})^2} \:, \label{eq:tung2} \end{equation} if we know $A=G_nr_1$ and $a=G_nr_2$ with $0<r_2\leq 1\leq r_1$. We can show that our results are better than these bounds using simple calculus. For illustration, different upper bounds of $G_n/A_n$ as a function of $r_2$ with $r_1=5$ and $n=10$ are compared in Fig.~\ref{fig:comparison}. \begin{figure} \caption{Different upper bounds of $G_n/A_n$ as a function of $r_2$, with $r_1=5$ and $n=10$.} \label{fig:comparison} \end{figure} \end{document}
\begin{document} \title{Proposed Experiment to Test the Bounds of Quantum Correlations} \author{Ad\'{a}n Cabello} \email{[email protected]} \affiliation{Departamento de F\'{\i}sica Aplicada II, Universidad de Sevilla, 41012 Sevilla, Spain} \date{\today} \begin{abstract} The combination of quantum correlations appearing in the Clauser-Horne-Shimony-Holt inequality can give values between the classical bound, $2$, and Tsirelson's bound, $2 \sqrt 2$. However, for a given set of local observables, there are values in this range which no quantum state can attain. We provide the analytical expression for the corresponding bound for a parametrization of the local observables introduced by Filipp and Svozil, and describe how to experimentally trace it using a source of singlet states. Such an experiment will be useful to identify the origin of the experimental errors in Bell's inequality-type experiments and could be modified to detect hypothetical correlations beyond those predicted by quantum mechanics. \end{abstract} \pacs{03.65.Ud, 03.65.Ta} \maketitle Quantum mechanics is the most accurate and complete description of the world known. This belief is supported by thousands of experiments. Particularly, it is widely agreed that, leaving aside some loopholes~\cite{Aspect99,Grangier01}, no local-realistic theory of the type suggested by Einstein, Podolsky, and Rosen~\cite{EPR35} is compatible with the experimental results showing violations of Bell's inequalities~\cite{Bell64} and ``good agreement'' with the predictions of quantum mechanics~\cite{ADR82,SA88,OM88,OPKP92,TRO94,KMWZSS95,TBZG98,WJSHZ98,RKVSIMW01}. On the other hand, current technology allows us to perform Bell's inequality-type experiments with relative ease. Here we propose using this possibility for a systematic test of the bounds of quantum correlations. 
The benefits of this proposal, apart from confirming quantum mechanics, would be to help us to identify and discriminate between two sources of possible errors in Bell's inequality-type experiments and to describe a method for searching for hypothetical correlations beyond those predicted by quantum mechanics. To introduce the bounds of quantum correlations, let us consider a high number of copies of systems of two distant particles prepared in an unspecified way. Let $A$ and $a$ ($B$ and $b$) be physical observables taking values $-1$ or $1$ referring to local experiments on particle $I$ ($II$). Here we shall initially assume that the particles have spin-$\frac{1}{2}$, and that $A$ is a spin measurement along the direction represented by the unit ray $\vec A$, etc. Fine~\cite{Fine82a} proved (see also~\cite{GM82,Fine82b,WW01}) that a set of four correlation functions $X_0 = \langle AB \rangle$, $X_1 = \langle Ab \rangle$, $X_2 = \langle aB \rangle$, $X_3 = \langle ab \rangle$ can be attained by a local-realistic theory (i.e., a theory in which the local variables of a particle determine the results of local experiments on this particle) if and only if they satisfy the following eight Clauser-Horne-Shimony-Holt (CHSH) inequalities~\cite{CHSH69}: \begin{equation} -2 \le X_i + X_{(i+1){\rm mod}\,4} + X_{(i+2){\rm mod}\,4} - X_{(i+3){\rm mod}\,4} \le 2, \label{CHSH} \end{equation} where $i=0,1,2,3$ and $(p+q){\rm mod}\,4$ means addition of $p$ and $q$ modulo $4$. On the other hand, Tsirelson~\cite{Tsirelson80,Landau87} showed that, for {\em any} quantum state $\rho$, the corresponding quantum correlations, $x_0 = \langle AB \rangle_\rho$, $x_1 = \langle Ab \rangle_\rho$, $x_2 = \langle aB \rangle_\rho$, $x_3 = \langle ab \rangle_\rho$ must satisfy \begin{eqnarray} -1 & \le & x_i \le 1, \nonumber \\ -2 \sqrt{2} & \le & x_i + x_{(i+1){\rm mod}\,4} + x_{(i+2){\rm mod}\,4} \nonumber \\ & & - x_{(i+3){\rm mod}\,4} \le 2 \sqrt{2}. 
\label{Tsirelson} \end{eqnarray} Quantum mechanics predicts violations of the CHSH inequalities (\ref{CHSH}) up to $2 \sqrt{2}$. Such violations can be obtained with pure~\cite{CHSH69} or mixed states~\cite{BMR92}. However, inequalities~(\ref{Tsirelson}) are only a necessary but not sufficient condition for the correlations to be attainable by quantum mechanics. To illustrate this point, let us consider a set of four numbers $Y_i$, such that they satisfy~(\ref{Tsirelson}) but not~(\ref{CHSH}); that is, their CHSH combinations lie between Bell's classical bound and Tsirelson's bound. The question is: Do there always exist a quantum state $\rho$ and four local observables $A$, $a$, $B$, and $b$ such that $Y_i=x_i$? The answer is no; certain sets of correlations {\em cannot} be reached by any quantum state and any set of local observables. If quantum mechanics is correct, this means that certain sets of expectation values will never be found experimentally. Therefore, the notion of superquantum correlations (i.e., correlations beyond those predicted by quantum mechanics) is not only restricted to sets of correlations such that the value of their CHSH operator is between $2 \sqrt{2}$ and $4$ (the maximum possible value for the CHSH operator)~\cite{PR94}, but it also covers some sets of correlations whose CHSH operator is between $2$ and $2 \sqrt{2}$. Uffink's quadratic inequalities~\cite{Uffink02} provide a more restrictive necessary (but still not sufficient) condition for the correlations to be attainable by quantum mechanics. The necessary and sufficient condition for a set of four numbers to be reached by quantum mechanics was found by Landau~\cite{Landau88} and Tsirelson~\cite{Tsirelson93}, and has been rediscovered by Masanes~\cite{Masanes03}. Four numbers $y_i$ can be reached by a quantum state and some local observables (i.e., $y_i=x_i$) if and only if they satisfy the following eight inequalities: \begin{eqnarray} \!\!\!\!-\pi\! & \le &\!
\arcsin{y_i} + \arcsin{y_{(i+1){\rm mod}\,4}} \nonumber \\ \!\!\!\!& &\! + \arcsin{y_{(i+2){\rm mod}\,4}} - \arcsin{y_{(i+3){\rm mod}\,4}} \le \pi. \label{Masanes} \end{eqnarray} The inequalities (\ref{Masanes}) define the whole set of quantum correlations but do not provide a practical characterization of its bounds. A different approach has been proposed by Filipp and Svozil~\cite{FS03}. They define the quantum bounds as follows: Let us choose several particular sets of local observables $\{A_j,a_j,B_j,b_j\}$; let us use a computer to randomly generate a high number of arbitrary quantum states $\{\rho_k\}$, and calculate for all of them the value of the CHSH operator defined as \begin{equation} {\rm CHSH} = \langle AB \rangle_\rho + \langle Ab \rangle_\rho + \langle aB \rangle_\rho - \langle ab \rangle_\rho. \label{CHSHoperator} \end{equation} The maximal and minimal values obtained are a numerical estimation of the bounds of the quantum correlations. Given the way in which the bounds have been constructed, for a given set of local observables no quantum state (and, presumably, no other preparation of physical systems) gives values outside these bounds. A suitable parametrization of both the set of local observables and the set of initial states yields an analytical expression of the bound of quantum correlations. However, Filipp and Svozil's results are limited to a computer exploration of these bounds. They conclude by saying that ``the exact analytical geometries of quantum bounds remain unknown''~\cite{FS03}. In this Letter, we provide the analytical expression of the bounds of the quantum correlations using Filipp and Svozil's parametrization for the local observables. We then use this analytical expression to describe how to experimentally trace this bound. This experimental verification will require a set of Bell's inequality-type tests, each of them using a particular set of local observables and a particular initial state.
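As an aside, Fine's characterization and the arcsin criterion~(\ref{Masanes}) lend themselves to a direct consistency check. The sketch below (plain Python; a mere illustration, not part of the proposed experiment) enumerates the 16 deterministic local assignments, verifies that all eight CHSH combinations stay in $[-2,2]$, and confirms that the Tsirelson point $x=(1/\sqrt{2},1/\sqrt{2},1/\sqrt{2},-1/\sqrt{2})$ violates a CHSH inequality while saturating the arcsin condition.

```python
import itertools
import math

def chsh_sums(X):
    """S_i = X_i + X_{i+1} + X_{i+2} - X_{i+3}, indices mod 4; the eight
    CHSH inequalities read -2 <= S_i <= 2 for i = 0..3."""
    return [X[i] + X[(i + 1) % 4] + X[(i + 2) % 4] - X[(i + 3) % 4]
            for i in range(4)]

def lr_vertices():
    """Correlations produced by the 16 deterministic local assignments."""
    for A, a, B, b in itertools.product([-1, 1], repeat=4):
        yield (A * B, A * b, a * B, a * b)

# Every local-realistic vertex satisfies all eight CHSH inequalities ...
classical_ok = all(abs(s) <= 2 for X in lr_vertices() for s in chsh_sums(X))

# ... while the Tsirelson point violates one of them (S_0 = 2*sqrt(2))
# yet saturates the Landau-Tsirelson-Masanes arcsin criterion.
r = 1 / math.sqrt(2)
x = (r, r, r, -r)
s0 = chsh_sums(x)[0]
arcsin_sum = (math.asin(x[0]) + math.asin(x[1])
              + math.asin(x[2]) - math.asin(x[3]))
```

Running the check gives `classical_ok == True`, `s0 == 2*sqrt(2)`, and `arcsin_sum == pi` up to rounding, as expected.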
Both the local observables and the initial state will depend on a single parameter $\theta$. A suitable parametrization of the local observables and the initial states should reflect the essential features of the bound of quantum correlations: For every possible value of the parameters, the CHSH operator should give values in the range $[2,2 \sqrt{2}]$ (that is, between the classical Bell's bound and Tsirelson's bound), but it will cover only a subset of the whole. The ratio of the area of this subset to the area of the whole set should reflect the ratio of two hypervolume differences: that between the convex set of quantum correlations defined by (\ref{Masanes}) and the classical correlation polytope (a four-dimensional octahedron)~\cite{Pitowsky86,Pitowsky89} defined by (\ref{CHSH}), and that between the set defined by (\ref{Tsirelson}) (which is the intersection of a bigger four-dimensional octahedron and a four-dimensional cube) and the classical correlation polytope. Another important property is that quantum bounds can always be attained using a suitably chosen maximally entangled state. For practical reasons, it would be desirable that the parametrization use as few parameters as possible. Filipp and Svozil~\cite{FS03} choose the following set of local observables which depends on a single parameter, \begin{eqnarray} A & = & \cos (2 \theta) \sigma_z + \sin (2 \theta) \sigma_x, \label{A} \\ B & = & \cos (\theta) \sigma_z + \sin (\theta) \sigma_x, \label{B} \\ a & = & \sigma_z, \label{a} \\ b & = & \cos (3 \theta) \sigma_z + \sin (3 \theta) \sigma_x, \label{b} \end{eqnarray} where $0 \le \theta \le \pi$, and $\sigma_z$ and $\sigma_x$ are the usual Pauli matrices.
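Assuming NumPy, this parametrization can be checked by brute force against the bound $F(\theta)$ derived below: the sketch builds the CHSH operator for the observables (\ref{A})--(\ref{b}) and maximizes its expectation over the one-parameter family of maximally entangled states $|\varphi(\xi)\rangle$ of Eq.~(\ref{frontierstate}). The grid size and the test angles are arbitrary choices.

```python
import numpy as np

sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

# Bell states |phi+> = (|00>+|11>)/sqrt(2) and |psi-> = (|01>-|10>)/sqrt(2)
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def obs(angle):
    """Spin observable cos(angle) sigma_z + sin(angle) sigma_x."""
    return np.cos(angle) * sz + np.sin(angle) * sx

def chsh_operator(theta):
    """CHSH operator AB + Ab + aB - ab for the Filipp-Svozil observables."""
    A, B, a, b = obs(2 * theta), obs(theta), sz, obs(3 * theta)
    return (np.kron(A, B) + np.kron(A, b)
            + np.kron(a, B) - np.kron(a, b))

def max_over_family(theta, n_grid=4001):
    """Maximum CHSH expectation over |phi(xi)> = cos(xi)|phi+> + sin(xi)|psi->."""
    C = chsh_operator(theta)
    return max((np.cos(xi) * phi_plus + np.sin(xi) * psi_minus) @ C
               @ (np.cos(xi) * phi_plus + np.sin(xi) * psi_minus)
               for xi in np.linspace(0.0, 2.0 * np.pi, n_grid))

def F_upper(theta):
    """Analytical quantum upper bound F(theta) (upper sign), with
    g(theta) = +1 for theta < pi/2 and -1 otherwise."""
    g = 1.0 if theta < np.pi / 2 else -1.0
    return 2.0 * ((1.0 + np.sin(2 * theta) ** 2) ** -0.5
                  + g * np.sin(2 * theta)
                  * np.sqrt(1.0 + 2.0 / (np.cos(4 * theta) - 3.0)))
```

At $\theta=\pi/4$ both the grid maximum and $F(\theta)$ give Tsirelson's bound $2\sqrt{2}$, and at $\theta=\pi/2$ both give the classical value $2$.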
\begin{figure} \caption{Values of the CHSH operator for several states $|\varphi (\xi) \rangle$ as a function of $\theta$; the quantum bound $F(\theta)$ is represented by a thick line.} \label{FS02} \end{figure} To obtain the analytical expression of the corresponding quantum bound $F(\theta)$, it is useful to remember that, for each $\theta$, the bound of quantum correlations can be reached by a maximally entangled state. For the Filipp and Svozil parametrization, a suitable set of maximally entangled states turns out to be \begin{equation} |\varphi (\xi) \rangle = \cos\xi \, |\phi^+\rangle+\sin\xi \, |\psi^-\rangle, \label{frontierstate} \end{equation} where $0 \le \xi < 2 \pi$. In Fig.~\ref{FS02} we show the values of the CHSH operator for several of these states. If, for a given $\theta$, we calculate the maximum and minimum values of the CHSH operator for the states $|\varphi (\xi) \rangle$, we obtain the analytical expression for the bound numerically estimated in~\cite{FS03}. The analytical expression of the bound of quantum correlations is \begin{eqnarray} F(\theta) & = & \pm 2 \left\{ \left[1+\sin^2(2 \theta)\right]^{-1/2} \right. \nonumber \\ & & \left. +g(\theta) \sin(2 \theta) \left[1+{2 \over \cos(4 \theta)-3}\right]^{1/2}\right\}, \label{quantumbound} \end{eqnarray} where \begin{equation} g(\theta)=\left\{ {\matrix{ {1} & {\rm if} & {0 \le \theta < \pi/2} \cr {-1} & {\rm if} & {\pi/2 \le \theta \le \pi.} } } \right. \end{equation} This bound is represented by a thick line in Fig.~\ref{FS02}. The next problem is how to experimentally achieve this bound. Given a particular set of local measurements (i.e., a particular value of $\theta$), which state should we prepare to obtain the quantum upper and lower bounds? It can be easily seen that the quantum upper bound is reached by the maximally entangled states $|\varphi (\xi) \rangle$ given by (\ref{frontierstate}), taking \begin{equation} \xi = \frac{1}{2} \left( \theta-g(\theta) \arccos \left\{\left[1+\sin^2 (2 \theta)\right]^{-1/2}\right\}\right).
\label{xi} \end{equation} The quantum lower bound is obtained just by introducing a minus sign inside the arc cosine in (\ref{xi}). No other quantum state gives values outside these bounds. For practical purposes, it is useful to realize that the required initial states can be prepared using a source of singlet states \begin{equation} |\psi^-\rangle = {1 \over \sqrt{2}} \left( |01\rangle-|10\rangle \right), \end{equation} and applying a suitable unitary transformation $U(\theta)$ to particle $I$. This follows from the fact that, for any $\xi$, \begin{equation} |\varphi (\xi) \rangle = U(\xi) \otimes \id \, |\psi^-\rangle, \end{equation} where \begin{equation} U(\xi) = \left( {\matrix{ {\sin\xi}&{-\cos\xi}\cr {\cos\xi}&{\sin\xi}\cr }} \right), \label{U} \end{equation} and $\id$ is the identity matrix. Therefore, the setup required to test the quantum bound $F(\theta)$ is illustrated in Fig.~\ref{Setup02}. It consists of a source of two-qubit singlet states $|\psi^-\rangle$, a unitary operation $U(\theta)$ [given by (\ref{U}) and (\ref{xi})] on qubit $I$, and the local measurements $A(\theta)$ [given by (\ref{A})] and (alternatively) $a(\theta)$ [given by (\ref{a})] on qubit $I$, and $B(\theta)$ [given by (\ref{B})] and (alternatively) $b(\theta)$ [given by (\ref{b})] on qubit $II$. A systematic test of the bounds of quantum correlations can be achieved by performing $N$ Bell's inequality-type tests, each for a particular value of $\theta$ (i.e., for a particular choice of local observables $A(\theta)$, $a(\theta)$, $B(\theta)$, and $b(\theta)$, and a particular state $|\varphi(\theta)\rangle$), covering the range $0 \le \theta \le \pi$. \begin{figure} \caption{Setup required to test the quantum bound $F(\theta)$.} \label{Setup02} \end{figure} In order to make the CHSH inequality (\ref{CHSH}) useful for real experiments, it is common practice to translate it into the language of joint probabilities.
This leads to the Clauser-Horne (CH) inequality~\cite{CH74,Mermin95}: \begin{eqnarray} -1 & \le & P\left(A=1,B=1\right) - P\left(A=1,b=-1\right) \nonumber \\ & & +P\left(a=1,B=1\right) + P\left(a=1,b=-1\right) \nonumber \\ & & -P\left(a=1\right) - P\left(B=1\right) \le 0. \label{CH} \end{eqnarray} It can be easily seen that the bounds $l$ of the CHSH inequality (\ref{CHSH}) are transformed into the bounds $(l-2)/4$ of the CH inequality (\ref{CH}). Therefore, the local-realistic bound in the CH inequality is 0, and Tsirelson's bound is $(\sqrt{2}-1)/2$. Analogously, if we calculate the values of the CH operator \begin{eqnarray} {\rm CH} & = & P_\rho \left(A=1,B=1\right) - P_\rho \left(A=1,b=-1\right) \nonumber \\ & & +P_\rho \left(a=1,B=1\right) + P_\rho \left(a=1,b=-1\right) \nonumber \\ & & -P_\rho \left(a=1\right) - P_\rho \left(B=1\right), \label{CHoperator} \end{eqnarray} for the states $|\varphi(\xi)\rangle$, we obtain a figure which looks like Fig.~\ref{FS02} but has a different scale in the vertical axis \{i.e., instead of points with coordinates $(\theta, {\rm CHSH})$, we obtain points with coordinates $\left[\theta, ({\rm CHSH}-2)/4\right]$\}. This systematic test of the bounds of quantum correlations can be performed with current technology. A physical system particularly suitable for its implementation consists of pairs of photons entangled in polarization produced by degenerate type-II parametric down-conversion~\cite{KMWZSS95,WJSHZ98}. In this system, the role of spin observables is played by polarization observables which are particularly adequate due to the availability of high efficiency polarization-control elements and the relative insensitivity of most materials to birefringent thermally induced drifts. 
An essential advantage of this system is that it allows Bell's inequality-type tests under strict space-like separations~\cite{WJSHZ98} (however, current detector efficiencies do not allow these experiments to elude the so-called detection loophole~\cite{Grangier01}). Apart from providing a systematic way to experimentally verify a set of extreme nonclassical predictions of quantum mechanics, two kinds of benefits are expected from the proposed experiment. {\em Identifying the source of experimental errors in tests of Bell's inequalities.---}Since the Hilbert space structure of quantum mechanics is not used in the derivation of Bell's inequalities, the main conclusion of an experimental violation of a Bell's inequality is clear: The experimental results are incompatible with local realism. The role of quantum mechanics in a test of Bell's inequalities is to tell us which physical system we should prepare and in which directions we should orientate our polarizers or Stern-Gerlach devices. However, quantum mechanics does not only tell us this; it also predicts a specific result for the experiment. The point is that this specific prediction relies on some additional assumptions. Some of these assumptions are related to the inefficiencies of our preparations and detectors. Other assumptions concern the adequacy of the quantum-mechanical description of the experiment. The failure of each of these two kinds of assumptions has a different effect on the results. For instance, if the state we have prepared is a Werner state~\cite{Werner89} such as $\rho=(1-\epsilon) |\psi^-\rangle \langle \psi^-|+\epsilon \id/4$ with $0 < \epsilon \ll 1$, instead of $|\psi^-\rangle$, then the quantum prediction for the proposed experiment is not~$F(\theta)$ given by (\ref{quantumbound}), but a curve very close to $F(\theta)$ comprised between the quantum bounds.
In other words, in this case the distance between the theoretical prediction assuming the state $|\psi^-\rangle$ and the experimental result is not significantly sensitive to the value of the parameter $\theta$. However, if we have assumed that the measured local observables are accurately described by a two-dimensional Hilbert space, but a more adequate description would require a higher-dimensional Hilbert space, then, even if both quantum predictions were similar for some value of $\theta$ and both are comprised between the quantum bounds, the distance between them will be very sensitive to the value of $\theta$. {\em Searching for correlations beyond those predicted by quantum mechanics.---}The proposed experiment can be modified to search for hypothetical correlations beyond those predicted by quantum mechanics (i.e., superquantum correlations in the extended sense mentioned above). We do not have any plausible theory which predicts these correlations and helps us design an experiment showing violations of the inequalities~(\ref{Masanes}). However, by the very definition of the bounds $F(\theta)$, for a given set of local observables, no quantum state can give values outside the bounds $F(\theta)$. To verify this for any fixed set of alternative local observables, we can randomly modify the state emitted by the source. Quantum mechanics predicts that there are no results outside these bounds. The existence of experimental results outside these bounds would mean that there are procedures for preparing physical systems which are not described by any quantum state and, therefore, that quantum mechanics is incomplete.\\ I thank M.~Bourennane, L.~Masanes, and H.~Weinfurter for helpful discussions, I.~Pitowsky and B.~S.~Tsirelson for references, and the Spanish Ministerio de Ciencia y Tecnolog\'{\i}a Grant No.~BFM2002-02815 and the Junta de Andaluc\'{\i}a Grant No.~FQM-239 for support. \end{document}
\begin{document} \title {Emergence of atom-light-mirror entanglement inside an optical cavity} \author{C. Genes, D. Vitali and P. Tombesi} \affiliation{Dipartimento di Fisica, Universit\`{a} di Camerino, I-62032 Camerino (MC), Italy} \begin{abstract} We propose a scheme for the realization of a hybrid, strongly quantum-correlated system formed of an atomic ensemble surrounded by a high-finesse optical cavity with a vibrating mirror. We show that the steady state of the system shows tripartite and bipartite continuous variable entanglement in experimentally accessible parameter regimes, which is robust against temperature. \end{abstract} \pacs{03.67.Mn, 85.85.+j,42.50.Wk,42.50.Lc} \maketitle Recently there has been an increasing convergence between condensed matter physics and quantum optics, which has manifested itself in different ways. On one hand, systems of cold trapped atoms \cite{bloch}, ions \cite{cirac} and electrons \cite{ciara} may realize quantum simulators able to reproduce and study condensed matter concepts such as Fermi surfaces and Heisenberg models in a controllable and tunable way. On the other hand, circuit cavity QED \cite{wallraff} provides an example where nano- and micro-structured condensed matter systems are specifically designed in order to reproduce the phenomena and control of quantum coherence typical of quantum optics systems. Alternatively, one can design schemes in which one has a direct, strong coupling between an atomic degree of freedom and a condensed matter system. Examples of this latter kind are ion-nanomechanical oscillator \cite{tian1}, or ion-Cooper-pair box \cite{tian2} systems, or a Bose-Einstein condensate coupled to a cantilever via a magnetic tip \cite{Reichel07}.
A further important example is provided by cavity optomechanical systems, for which strong coupling between an optical cavity mode and a vibrational mode by radiation pressure has already been demonstrated \cite{cohadon99,karrai04,vahala1,gigan06,arcizet06,arcizet06b,bouwm,vahalacool,mavalvala,rugar,harris}, and for which schemes able to show quantum entanglement \cite{Manc02,pinard05,pir06,prl07} and even quantum teleportation \cite{prltelep} have already been proposed. In these systems, the radiation pressure interaction can be made considerably large so that genuine quantum effects can be realized when microcavities and extremely light acoustic resonators \cite{vahala1,gigan06,arcizet06,arcizet06b,bouwm,vahalacool,harris} are used. \begin{figure} \caption{(Color online) (a) The cavity is driven by a laser at frequency $\omega_l$ and the moving mirror at frequency $\omega_m$ scatters photons on the two sidebands at frequency $\omega_l \pm \omega_m$. (b) If the cavity, with frequency $\omega_c$ and bandwidth $\kappa$, is put into resonance with the anti-Stokes sideband (blue), outgoing cavity photons cool the mirror vibrational mode. If the atoms are off-resonance with the cavity but resonantly coupled to the red sideband, an entangled tripartite atom-field-mirror system emerges.} \label{scheme} \end{figure} In this Letter, we propose a hybrid system formed by an atomic ensemble placed within an optical Fabry-Perot cavity, in which a micromechanical resonator represents one of the mirrors [see Fig. \ref{scheme}(a)]. The atoms are indirectly coupled to the mechanical oscillator via the common interaction with the intracavity field. As a first step towards quantum state engineering of mechanical oscillators and quantum state transfer between atoms and mirrors, we show that using state-of-the-art technology it is possible to generate stationary and robust continuous variable (CV) tripartite entanglement in the field-atoms-mirror system.
To this purpose, we consider $N_{a}$ two-level atoms placed in an optical cavity under weak-coupling conditions and far from the cavity main resonance $\omega _{c}$. CV tripartite entanglement can be generated by choosing as working point for the optical cavity with vibrating mirror, the parameter regime corresponding to the ground state cooling of the mechanical resonator \cite{kippenberg07,girvin07,genes07,dantan07}. In fact, preferential scattering of cavity light into a higher frequency motional sideband of the driving laser is responsible for cooling of the mechanical system. It has been shown in \cite{prl07} that in this cooling regime, field-mirror entanglement can be generated, which can be explained in terms of sideband scattering because such an entanglement is mostly carried by the Stokes sideband. All these facts are at the basis of the robust tripartite atom-resonator-field entanglement reported here. In fact, if the laser anti-Stokes sideband is resonant with the cavity, the mechanical resonator is cooled by photon leakage and if then the atomic frequency matches the red (Stokes) sideband frequency, a resonant atoms-mirror coupling mediated by the cavity field is established. We shall see that in such a regime robust CV tripartite and bipartite entanglement is generated. \textit{Description of the system}. We consider an optical cavity with a fixed input mirror and a second oscillating mirror, which is driven by a laser at frequency $\omega_l$. An ensemble of two-level atoms is placed inside the cavity and it is off-resonantly coupled by a collective Tavis-Cummings type interaction to the optical field \cite{Tavis}. Mirror vibrational motion can be modeled by a harmonic oscillator of frequency $\omega _{m}$ and decay rate $\gamma _{m}$. 
In the absence of dissipation and fluctuations the total Hamiltonian of the system is given by the sum of a free evolution term \begin{equation}\label{ham0} H_{0} =\hbar \omega _{c}a^{\dag }a+\frac{\hbar \omega _{a}}{2}S_{z}+\frac{\hbar \omega _{m}}{2}(q^{2}+p^{2}), \end{equation} and the interaction term \begin{eqnarray}H_{I} &=&\hbar g\left( S_{+}a+S_{-}a^{\dag }\right) -\hbar G_{0}a^{\dag }aq \nonumber \\ &&+i\hbar E_{l}\left( a^{\dag }e^{-i\omega _{l}t}-ae^{i\omega _{l}t}\right) . \label{ham1} \end{eqnarray} The laser drives significantly only a single cavity mode with frequency $\omega_c$, bandwidth $\kappa$ and annihilation operator $a$ (with $\left[ a,a^{\dag }\right] =1$). The atomic ensemble is comprised of $N_{a}$ two-level atoms with natural frequency $\omega _{a}$ each described by the $ 1/2$ spin algebra of Pauli matrices $\sigma _{+},\sigma _{-}~$\ and $\sigma _{z}$. Collective spin operators are defined as $S_{+,-,z}=\sum_{\{i\}} \sigma _{+,-,z}^{(i)}$ for $i=1,N_{a}$ and satisfy the commutation relations $\left[ S_{+},S_{-}\right] =S_{z}$ and $\left[ S_{z},S_{\pm } \right] =\pm 2 S_{\pm }$. The mechanical mode dimensionless position and momentum operators $q$ and $p$ satisfy $\left[ q,p\right] =i$. The atom-cavity coupling constant is given by $g=\mu \sqrt{\omega_c/2\hbar \epsilon_0 V}$ where $V$ is the cavity mode volume and $\mu$ is the dipole moment of the atomic transition. The radiation pressure coupling constant is instead given by $G_0=(\omega_c/L)\sqrt{\hbar/m \omega_m}$, where $m$ is the effective mass of the mechanical mode, and $L$ is the length of the cavity. The last term describes the driving of the cavity by the laser with amplitude $E_{l}$, which is related to the input power $P_{l}$ and the cavity decay rate $\kappa $ by $\left\vert E_{l}\right\vert =\sqrt{2P_{l}\kappa /\hbar \omega _{l}}$. 
The dynamics of the tripartite atom-field-mirror system can be described by a set of nonlinear Langevin equations in which dissipation and fluctuation terms are added to the Heisenberg equations of motion derived from the Hamiltonian of Eqs.~(\ref{ham0})-(\ref{ham1}) \cite{gard}. However, we consider a simplified version of such equations, which is valid in the low atomic excitation limit, i.e., when all the atoms are initially prepared in their ground state, so that $S_z \simeq \left\langle S_{z}\right\rangle \simeq -N_{a}$ and this condition is not appreciably altered by the interaction with the cavity. This is satisfied when the excitation probability of a single atom is small. In this limit the dynamics of the atomic polarization can be described in terms of bosonic operators: in fact, if one defines the atomic annihilation operator $c=S_{-}/\sqrt{\left\vert \left\langle S_{z}\right\rangle \right\vert }$, one can see that it satisfies the usual bosonic commutation relation $[c,c^{\dag }]=1$ \cite{holstein40}. In the frame rotating at the laser frequency $\omega_l$ for the atom-cavity system, the quantum Langevin equations can then be written as \begin{subequations} \label{QLEs} \begin{align} \dot{q}& =\omega _{m}p, \\ \dot{p}& =-\omega _{m}q-\gamma _{m}p+G_{0}a^{\dag }a+\xi , \\ \dot{a}& =-(\kappa +i\Delta _{f})a+iG_{0}aq-iG_{a}c+E_{l}+\sqrt{ 2\kappa }a_{in}, \\ \dot{c}& =-\left(\gamma _{a}+i\Delta _{a}\right)c-iG_{a}a+\sqrt{2\gamma_a}F_{c}, \end{align} \end{subequations} where $\Delta _{f}=\omega _{c}-\omega _{l}$ and $\Delta _{a}=\omega _{a}-\omega _{l}$ are, respectively, the cavity and atomic detunings with respect to the laser, $G_a=g\sqrt{N_a}$, and $2\gamma_a$ is the decay rate of the atomic excited level.
The Langevin noise operators affecting the system have zero mean value; the Hermitian Brownian noise operator $\xi$ has correlation function $ \langle \xi(t) \xi(t') \rangle = (\gamma_m/2\pi\omega_m) \int d\omega e^{-i\omega(t-t')} \omega [\coth (\hbar \omega/2k_BT)+1] $ ($k_B$ is the Boltzmann constant and $T$ the temperature of the mechanical oscillator reservoir) \cite{GIOV01}, while the only nonvanishing correlation function of the noises affecting atoms and cavity field is $\langle a_{in}\left( t\right) a_{in}^{\dagger }\left( t^{\prime }\right) \rangle =\langle F_{c}\left( t\right) F_{c}^{\dagger }\left( t^{\prime }\right) \rangle=\delta \left( t-t^{\prime }\right) $. We now assume that the cavity is intensely driven, so that at the steady state the intracavity field has a large amplitude $\alpha_s$, with $|\alpha_s| \gg 1$. However, the single-atom excitation probability is $g^2|\alpha_s|^2/(\Delta_a^2+\gamma_a^2)$; since this probability has to be much smaller than one for the bosonic description of the atomic polarization to be valid, this imposes an upper bound on $|\alpha_s|$. Therefore, the two conditions are simultaneously satisfied only if \emph{the atoms are weakly coupled to the cavity}, $g^2 \ll \Delta_a^2 +\gamma_a^2$. In the strong-driving limit, one has a semiclassical steady state; the corresponding mean values can be determined by setting the time derivatives to zero and factorizing the averages in Eqs.~(\ref{QLEs}), and then solving the corresponding set of nonlinear algebraic equations. The resulting stationary values are $p_{s}=0$, $q_{s}=G_{0}\left\vert \alpha _{s}\right\vert ^{2}/\omega _{m}$, $c_{s}=-iG_a\alpha _{s}/\left( \gamma _{a}+i\Delta _{a}\right) $, where the stationary intracavity field is the solution of the nonlinear equation $\alpha _{s}\left[ \kappa +i\Delta_f-iG_0^2 |\alpha_s|^2/\omega_m +G_{a}^{2}/(\gamma_a +i\Delta _{a}) \right] =E_l $.
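As a minimal numerical sketch (ours, with illustrative rather than physical parameter values), the nonlinear equation for $\alpha_s$ can be solved by fixed-point iteration on the intracavity photon number $|\alpha_s|^2$:

```python
import numpy as np

# Solve  alpha_s * [kappa + i*Delta_f - i*G_0^2*|alpha_s|^2/omega_m
#                   + G_a^2/(gamma_a + i*Delta_a)] = E_l
# by iterating on n_phot = |alpha_s|^2.  All values below are illustrative.
kappa, Delta_f = 1.0, 1.0
G_0, omega_m = 1e-4, 1.0
G_a, gamma_a, Delta_a = 0.6, 0.5, -1.0
E_l = 1e3

n_phot = 0.0
for _ in range(200):
    denom = (kappa + 1j*(Delta_f - G_0**2 * n_phot / omega_m)
             + G_a**2 / (gamma_a + 1j*Delta_a))
    n_phot = abs(E_l / denom)**2

alpha_s = E_l / denom                                # stationary intracavity field
q_s = G_0 * n_phot / omega_m                         # stationary mirror displacement
c_s = -1j * G_a * alpha_s / (gamma_a + 1j*Delta_a)   # stationary atomic amplitude
```

The iteration converges quickly here because the Kerr-like frequency shift $G_0^2|\alpha_s|^2/\omega_m$ is small compared with $\kappa$; in a strongly bistable regime a proper root finder on the cubic for $|\alpha_s|^2$ would be needed instead.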
We are interested in establishing the presence of quantum correlations among atoms, field and mirror, at the steady state. This can be done by analyzing the dynamics of the quantum fluctuations of the system around the steady state. It is convenient to consider the vector of quadrature fluctuations $u=\left( \delta q,\delta p,\delta X,\delta Y,\delta x,\delta y\right) ^{\intercal }$, where $\delta X\equiv(\delta a+\delta a^{\dag})/\sqrt{2}$, $\delta Y\equiv(\delta a-\delta a^{\dag})/i\sqrt{2}$, $\delta x\equiv(\delta c+\delta c^{\dag})/\sqrt{2}$, and $\delta y\equiv(\delta c-\delta c^{\dag})/i\sqrt{2}$, and linearize the quantum Langevin equations (\ref{QLEs}) around the steady state values. The resulting evolution equation for the fluctuation vector is \begin{equation} \dot{u}=Au+n, \end{equation} where the drift matrix $A$ is given by \begin{equation} A= \begin{pmatrix} 0 & \omega _{m} & 0 & 0 & 0 & 0 \\ -\omega _{m} & -\gamma _{m} & G_{m} & 0 & 0 & 0 \\ 0 & 0 & -\kappa & \Delta & 0 & G_{a} \\ G_{m} & 0 & -\Delta & -\kappa & -G_{a} & 0 \\ 0 & 0 & 0 & G_{a} & -\gamma _{a} & \Delta _{a} \\ 0 & 0 & -G_{a} & 0 & -\Delta _{a} & -\gamma _{a} \end{pmatrix} \end{equation} with the effective optomechanical coupling $G_{m}=G_{0}\alpha _{s}\sqrt{2}$ (we have chosen the phase reference so that $\alpha_s$ can be taken real) and the effective cavity detuning $\Delta=\Delta_f-G_m^2/2\omega_m$. The vector of noises $n$ is given by $n=\left( 0,\xi ,\sqrt{2\kappa }X_{in},\sqrt{ 2\kappa }Y_{in},\sqrt{2\gamma _{a}}x_{in},\sqrt{2\gamma _{a}}y_{in}\right) ^{\intercal }$, where $X_{in}=( a_{in}+a_{in}^{\dag }) /\sqrt{2},$ $Y_{in}=( a_{in}-a_{in}^{\dag }) /i\sqrt{2}$, $x_{in}=( F_{c}+F_{c}^{\dag }) /\sqrt{2}$ and $y_{in}=( F_{c}-F_{c}^{\dag }) /i\sqrt{2}$. 
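For reference, the drift matrix above can be transcribed directly into code (a plain transcription of the matrix as printed, with no additional assumptions):

```python
import numpy as np

def drift_matrix(omega_m, gamma_m, kappa, Delta, G_m, G_a, gamma_a, Delta_a):
    """6x6 drift matrix A for the fluctuation vector
    u = (dq, dp, dX, dY, dx, dy), transcribed from the text."""
    return np.array([
        [0.0,      omega_m,  0.0,    0.0,    0.0,      0.0],
        [-omega_m, -gamma_m, G_m,    0.0,    0.0,      0.0],
        [0.0,      0.0,      -kappa, Delta,  0.0,      G_a],
        [G_m,      0.0,      -Delta, -kappa, -G_a,     0.0],
        [0.0,      0.0,      0.0,    G_a,    -gamma_a, Delta_a],
        [0.0,      0.0,      -G_a,   0.0,    -Delta_a, -gamma_a],
    ])
```

Stability of the steady state (all eigenvalues of $A$ with negative real part) should be checked for any chosen parameter set before using it.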
Owing to the Gaussian nature of the quantum noise terms $\xi $, $a_{in}$ and $F_{c}$, and to the linearization of the dynamics, the steady state of the quantum fluctuations of the system is a CV tripartite Gaussian state, which is completely determined by the $6\times 6$ correlation matrix (CM) $V_{ij}=\langle u_{i}( \infty ) u_{j}( \infty ) +u_{j}( \infty ) u_{i}( \infty ) \rangle /2 $. The Brownian noise $\xi(t)$ is not delta-correlated and therefore does not describe a Markovian process~\cite{GIOV01}. However, entanglement can be achieved only with a large mechanical quality factor, ${\cal Q}=\omega_m /\gamma_m \gg 1$. In this limit, $\xi(t)$ becomes delta-correlated~\cite{benguria}, $ \left \langle \xi(t) \xi(t')+\xi(t') \xi(t)\right \rangle/2 \simeq \gamma_m \left(2\bar{n}+1\right) \delta(t-t')$, where $\bar{n}=\left(\exp\{\hbar \omega_m/k_BT\}-1\right)^{-1}$ is the mean vibrational number. In this Markovian limit, the steady state CM can be derived from the following equation \cite{prl07,parks} \begin{equation} AV+VA^{\intercal }=-D, \label{Lyapunov} \end{equation} where $D={\rm Diag}\left[ 0,\gamma _{m}\left( 2\bar{n}+1\right) ,\kappa ,\kappa ,\gamma _{a},\gamma _{a}\right] $ is the diffusion matrix stemming from the noise correlations. We have solved Eq.~(\ref{Lyapunov}) for the CM $V$ in a wide range of the parameters $G_m$, $G_{a}$, $\Delta $ and $\Delta _{a}$. We have first studied the stationary entanglement of the three possible bipartite subsystems, quantifying it in terms of the logarithmic negativity \cite{vidal02} of bimodal Gaussian states. We denote the logarithmic negativities for the mirror-atom, atom-field and mirror-field bimodal partitions by $E_{ma}$, $E_{af}$ and $E_{mf}$, respectively. \begin{figure} \caption{(Color online) (a) Logarithmic negativity of the mirror-field subsystem versus the normalized cavity detuning in the absence of the atoms.
Entanglement is maximized around the optimal cooling regime (shown in the inset), namely around $\Delta \simeq \omega _{m}$.} \label{Plotnew} \end{figure} The results on the behavior of the bipartite entanglement are shown in Fig.~2. We have considered experimentally feasible parameters \cite{gigan06,arcizet06b}, i.e., an oscillator with $\omega _{m}/2\pi =10^{7}$ Hz, $\mathcal{Q}=10^{5}$ and $m=10$ ng coupled to a cavity driven by a laser of power $P=35$ mW at $\lambda _{l}=1064$ nm (corresponding to $G_m/2\pi =8\times 10^{6}$ Hz), with length $L=1$ mm and finesse $ \mathcal{F}=3\times 10^{4}$. The properties of the chosen working point of the cavity system are illustrated in Fig.~2a, which shows the mirror-cavity mode logarithmic negativity and, in the inset, the effective mean excitation number of the mechanical oscillator, $n_{eff}$, \emph{in the absence of the atoms}, versus the normalized cavity detuning. The inset shows that we are close to ground-state cavity cooling of the mirror vibrational mode because $n_{eff}$ is decreased from the initial value $ \bar{n}=1250$ (corresponding to a reservoir temperature $T_0=0.6$ K) to $n_{eff}\simeq 0.2$ when $\Delta=\omega_m$, i.e., when the cavity is resonant with the anti-Stokes sideband of the laser. This cooling regime simultaneously allows one to reach a significant optomechanical entanglement. This can be understood in view of the results of \cite{prltelep,pirandola03}, where the entanglement between a vibrating mirror and the scattered optical sidebands is analyzed; when the mirror effective temperature is low enough, one can have strong mirror-Stokes sideband entanglement. This latter entanglement is then exploited when the atomic ensemble is placed within the cavity. In Fig.~\ref{Plotnew}(b)-(c), the logarithmic negativity of the three bipartite cases is plotted versus the normalized atomic detuning for $\gamma _{a}/2\pi =5\times 10^{6}$ Hz and $G_{a}/2\pi =6\times 10^{6}$ Hz.
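The two computational steps just described, solving the Lyapunov equation (\ref{Lyapunov}) for the CM and evaluating the logarithmic negativity of a reduced $4\times4$ CM, can be sketched as follows. This is a hedged sketch: the helper names are ours, and the two-mode formula assumes the quadrature convention of the text, in which the vacuum CM is $I/2$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def steady_state_cm(A, D):
    """Solve A V + V A^T = -D for the steady-state correlation matrix V."""
    return solve_continuous_lyapunov(A, -D)

def log_negativity(V2):
    """Logarithmic negativity E_N = max(0, -ln 2*nu_minus) of a two-mode
    Gaussian state with 4x4 CM V2, where nu_minus is the smallest symplectic
    eigenvalue of the partially transposed CM (vacuum CM = I/2)."""
    A, B, C = V2[:2, :2], V2[2:, 2:], V2[:2, 2:]
    delta = np.linalg.det(A) + np.linalg.det(B) - 2.0*np.linalg.det(C)
    nu_minus = np.sqrt((delta - np.sqrt(delta**2 - 4.0*np.linalg.det(V2)))/2.0)
    return max(0.0, -np.log(2.0*nu_minus))
```

Given the full $6\times6$ CM $V$, a quantity such as $E_{mf}$ is then obtained by passing the $4\times4$ submatrix of the mirror and field quadratures (indices 0-3) to `log_negativity`.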
It is evident that one has a sort of entanglement sharing: due to the presence of the atoms, the initial cavity-mirror entanglement is partially redistributed to the atom-mirror and atom-cavity subsystems, and this effect is predominant when the atoms are resonant with the Stokes sideband ($\Delta_a=-\omega_m$). It is remarkable that, in the chosen parameter regime, the largest stationary entanglement is the one between atoms and mirror, which are only indirectly coupled. Moreover, the nonzero atom-cavity entanglement appears only thanks to the effect of the mirror dynamics, because in the bosonic approximation we are considering and with a fixed mirror, there would be no direct atom-cavity entanglement. We also notice that atom-mirror entanglement is instead absent at $\Delta _{a}=\omega _{m}$. This is due to the fact that the cavity-mirror entanglement is mostly carried by the Stokes sideband and that, when $\Delta _{a}=\omega _{m}$, cavity cooling of the mirror is disturbed by the anti-Stokes photons being recycled in the cavity by the absorbing atoms. Fig.~\ref{Plotnew}(c) shows the same plot but at a higher temperature, $T=5T_0=3$ K, showing that the three bipartite entanglements are quite robust with respect to thermal noise. This is studied in more detail in Fig.~\ref{Plotnew}(d), where the atom-mirror entanglement at $\Delta_a=-\omega_m$ is plotted versus the reservoir temperature: such an entanglement vanishes only around $20$ K. The simultaneous presence of all three possible instances of bipartite entanglement witnesses the strong correlation between the atoms, the intracavity field, and the mechanical resonator at the steady state. This is also confirmed by the fact that such a state is a fully inseparable tripartite CV entangled state in the parameter regime of Fig.~2, for a wide range of atomic detunings ($-3\omega_m < \Delta_a < 3 \omega_m$) and up to temperatures of about $30$ K.
This has been checked by applying the results of Ref.~\cite{Giedke02}, which provide a necessary and sufficient criterion for the determination of the entanglement class of a tripartite CV Gaussian state. We notice that the chosen parameters correspond to a small cavity mode volume ($V \simeq 10^{-12}$ m$^3$), implying that, for a dipole transition, $g$ is not small. Therefore the assumed weak-coupling condition $g^2 \ll \Delta_a^2 +\gamma_a^2$ can be satisfied only if $g$ represents a much smaller, \emph{time averaged}, coupling constant. This holds, for example, for an atomic vapor cell much larger than the cavity mode: if the (hot) atoms move in a cylindrical cell with axis orthogonal to the cavity axis, with diameter $\sim 0.5$ mm and height $\sim 1$ cm, they will spend roughly only one-thousandth of their time within the cavity mode region. This yields an effective $g \sim 10^4$ Hz, so that the assumptions made here hold, and the chosen value $G_{a}/2\pi =6\times 10^{6}$ Hz can be obtained with $N_a \sim 10^7$. An alternative solution would be to choose a cold atomic ensemble and a dipole-forbidden transition. The entanglement properties of the steady state of the tripartite system can be verified by experimentally measuring the corresponding CM. This can be done by combining existing experimental techniques. The cavity field quadratures can be measured directly by homodyning the cavity output, while the mechanical position and momentum can be measured with the setup proposed in \cite{prl07}, in which, by adjusting the detuning and bandwidth of an additional adjacent cavity, both position and momentum of the mirror can be measured by homodyning the output of this second cavity.
Finally, the atomic polarization quadratures $x$ and $y$ (proportional to $S_x$ and $S_y$) can be measured by adopting the same scheme as in Ref.~\cite{sherson}, i.e., by making a Stokes parameter measurement of a laser beam, shone transversely to the cavity and to the cell and tuned off-resonantly to another atomic transition. In conclusion, we have proposed a scheme for the realization of a hybrid quantum correlated tripartite system formed by a cavity mode, an atomic ensemble inside it, and a vibrational mode of one cavity mirror. We have shown that, in an experimentally accessible parameter regime, the steady state of the system shows both tripartite and bipartite CV entanglement. The realization of such a scheme will open new perspectives for the realization of quantum interfaces and memories for CV quantum information processing, and also for quantum-limited displacement measurements. This work was supported by the European Commission (programs QAP and SCALA), and by the Italian Ministry for University and Research (PRIN-2005 2005024254). \end{document}
\begin{document} \title{A weighted extremal function and equilibrium measure} \author{Len Bos, Norman Levenberg, Sione Ma`u and Federico Piazzon} \maketitle \begin{abstract} Let $K=\mathbb{R}^n\subset \mathbb{C}^n$ and $Q(x):=\frac{1}{2}\log (1+x^2)$ where $x=(x_1,...,x_n)$ and $x^2 = x_1^2+\cdots +x_n^2$. Utilizing extremal functions for convex bodies in $\mathbb{R}^n\subset \mathbb{C}^n$ and Sadullaev's characterization of algebraicity for complex analytic subvarieties of $\mathbb{C}^n$ we prove the following explicit formula for the weighted extremal function $V_{K,Q}$: $$V_{K,Q}(z)=\frac{1}{2}\log \bigl( [1+|z|^2] + \{ [1+|z|^2]^2-|1+z^2|^2\}^{1/2}\bigr)$$ where $z=(z_1,...,z_n)$ and $z^2 = z_1^2+\cdots +z_n^2$. As a corollary, we find that the Alexander capacity $T_{\omega}(\mathbb{R} \mathbb P^n)$ of $\mathbb{R} \mathbb P^n$ is $1/\sqrt 2$. We also compute the Monge-Amp\`ere measure of $V_{K,Q}$: $$(dd^cV_{K,Q})^n = n!\frac{1}{(1+x^2)^{\frac{n+1}{2}}}dx.$$ \end{abstract} \section{Introduction} For $K\subset \mathbb{C}^n$ compact, define the usual Siciak-Zaharjuta {\it extremal function} \begin{equation}\label{vk} V_K(z) := \max \left\{ 0 , \sup _p \left\{ \frac{1}{deg(p)} \log|p(z)|: p \ \hbox{poly.}, \ ||p||_K:=\max_{z\in K} |p(z)| \leq 1\right\} \right\}, \end{equation} where the supremum is taken over (non-constant) holomorphic polynomials $p$, and let $V_K^*(z):= \limsup_{\zeta \to z} V_K(\zeta)$ be its upper semicontinuous (usc) regularization. If $K\subset \mathbb{C}^n$ is closed, a nonnegative upper semicontinuous function $w:K\to [0, \infty)$ with $\{z\in K: w(z)=0\}$ pluripolar is called a weight function on $K$ and $Q(z):=-\log w(z)$ is the {\it potential} of $w$. The associated {\it weighted extremal function} is $$V_{K,Q}(z):=\sup \{\frac{1}{deg(p)}\log |p(z)|: p \ \hbox{poly.}, \ ||pe^{-deg(p)Q}||_K\leq 1\}.$$ Note $V_K=V_{K,0}$. For unbounded $K$, the potential $Q$ is required to grow at least like $\log |z|$.
If, e.g., $$\liminf_{z\in K, \ |z|\to +\infty}\bigl(Q(z)-\log |z|\bigr) > -\infty $$ (we call $Q$ {\it weakly admissible}), then the Monge-Amp\`ere measure $(dd^cV_{K,Q}^*)^n$ may or may not have compact support. A priori these extremal functions may be defined in terms of upper envelopes of {\it Lelong class} functions: we write $L(\mathbb{C}^n)$ for the set of all plurisubharmonic (psh) functions $u$ on $\mathbb{C}^n$ with the property that $u(z) - \log |z| = O(1), \ |z| \to \infty$ and $$ L^+(\mathbb{C}^n):=\{u\in L(\mathbb{C}^n): u(z)\geq \log^+|z| + C\}$$ where $C$ is a constant depending on $u$. For $K$ compact, either $V_K^*\in L^+(\mathbb{C}^n)$ or $V_K^*\equiv \infty$, this latter case occurring when $K$ is pluripolar; i.e., there exists $u\not \equiv -\infty$ psh on a neighborhood of $K$ with $K\subset \{u=-\infty\}$. In the setting of weakly admissible $Q$ it is a result of \cite{bs} that, provided the function $$\sup \{u(z): u\in L(\mathbb{C}^n), \ u\leq Q \ \hbox{on} \ K\}$$ is continuous, it coincides with $V_{K,Q}(z)$. If we let $X=\mathbb P^n$ with the usual K\"ahler form $\omega$ normalized so that $\int_{\mathbb P^n} \omega^n =1$, we can define the class of {\it $\omega-$psh functions} (cf., \cite{GZ}) $$PSH(X,\omega) :=\{\phi \in L^1(X): \phi \ \hbox{usc}, \ dd^c\phi +\omega \geq 0\}.$$ Let ${\bf z}:=[z_0:z_1:\cdots :z_n]$ be homogeneous coordinates on $X=\mathbb P^n$.
Identifying $\mathbb{C}^n$ with the affine subset of $\mathbb P^n$ given by $\{[1:z_1:\cdots:z_n]\}$, we can identify the $\omega-$psh functions with the Lelong class $L(\mathbb{C}^n)$, i.e., $$PSH(X,\omega) \approx L(\mathbb{C}^n),$$ and the bounded (from below) $\omega-$psh functions coincide with the subclass $L^+(\mathbb{C}^n)$: if $\phi \in PSH(X,\omega)$, then $$u(z)=u(z_1,...,z_n):= \phi ([1:z_1:\cdots:z_n])+\frac{1}{2}\log (1+|z|^2)\in L(\mathbb{C}^n);$$ if $u\in L(\mathbb{C}^n)$, define $\phi \in PSH(X,\omega)$ via $$\phi ([1:z_1:\cdots:z_n])=u(z)-\frac{1}{2}\log (1+|z|^2) \ \hbox{and}$$ $$\phi ([0:z_1:\cdots:z_n])=\limsup_{|t|\to \infty, \ t\in \mathbb{C}}[u(tz)-\frac{1}{2}\log (1+|tz|^2)].$$ Abusing notation, we write $u= \phi +u_0$ where $u_0(z):=\frac{1}{2}\log (1+|z|^2)$. Given a closed subset $K\subset \mathbb P^n$ and a function $q$ on $K$, we can define a {\it weighted $\omega-$psh extremal function} $$v_{K,q}({\bf z}):=\sup \{ \phi({\bf z}): \phi \in PSH(X,\omega), \ \phi \leq q \ \hbox{on} \ K\}.$$ Thus if $K\subset \mathbb{C}^n \subset \mathbb P^n$, for $[1:z_1:\cdots:z_n]=[1:z]\in \mathbb{C}^n$ we have \begin{equation}\label{wtdrel} v_{K,q}([1:z])=\sup \{u(z): u\in L(\mathbb{C}^n), \ u\leq u_0 +q \ \hbox{on} \ K\} -u_0(z)=V_{K,u_0+q}(z)-u_0(z).\end{equation} If $q=0$, the {\it Alexander capacity} $T_{\omega}(K)$ of $K\subset \mathbb P^n$ was defined in \cite{GZ} as $$T_{\omega}(K):=\exp {[-\sup_{\mathbb P^n} v_{K,0}]}.$$ This notion has applications in complex dynamics; cf., \cite{DS}. These extremal psh and $\omega-$psh functions $V_K, V_{K,Q}$ and $v_{K,0}, v_{K,q}$, as well as the homogeneous extremal psh function $H_E$ of $E\subset \mathbb{C}^n$ (whose definition we recall in the next section), are very difficult to compute explicitly. Even when an explicit formula exists, computation of the associated Monge-Amp\`ere measure is problematic. 
Our main goal in this paper is to utilize a novel approach to explicitly compute $V_{K,Q}$ and $(dd^cV_{K,Q})^n$ for the closed set $K=\mathbb{R}^n\subset \mathbb{C}^n$ and the weight $w(z)=|f(z)|=|\frac{1}{(1+z^2)^{1/2}}|$ where $z^2 =z_1^2 +\cdots + z_n^2$ (see (\ref{magic}) or Theorem \ref{magic1}, and (\ref{monge})). Note the potential $Q(z)$ in this case is the standard K\"ahler potential $u_0(z)$ restricted to $\mathbb{R}^n$. As an application we can calculate the Alexander capacity $T_{\omega}(\mathbb{R} \mathbb P^n)$ of $\mathbb{R} \mathbb P^n$ (Corollary \ref{magiccor}). We offer several methods to explicitly compute $V_{K,Q}$. For the first one, we relate this weighted extremal function to: \begin{enumerate} \item the extremal function $V_{B_{n+1}}$ of the {\it real $(n+1)-$ball} $$B_{n+1}=\{(u_0,...,u_n)\in \mathbb{R}^{n+1}: \sum_{j=0}^n u_j^2\leq1\}$$ in $\mathbb{R}^{n+1}\subset \mathbb{C}^{n+1}$ as well as \item the extremal function $V_{\widetilde K}$ of the {\it real $n-$sphere} $$\widetilde K = \{ (u_0,...,u_n)\in \mathbb{R}^{n+1}: \sum_{j=0}^n u_j^2=1\}$$ in $\mathbb{R}^{n+1}$ {\it considered as a compact subset of the complexified $n-$sphere} $$A := \{ (W_0,...,W_n)\in \mathbb{C}^{n+1}: \sum_{j=0}^n W_j^2=1\}$$ in $\mathbb{C}^{n+1}$. This function is the {\it Grauert tube function} of $\widetilde K$ in $A$; cf., \cite{Z}. \end{enumerate} A similar (perhaps simpler) idea is a relation between $V_{K,Q}$ and \begin{enumerate} \item the extremal function $V_{B_{n}}$ of the {\it real $n-$ball} $$B_n:= \{(u_1,...,u_n)\in \mathbb{R}^{n}: \sum_{j=1}^n u_j^2\leq1\}$$ in $\mathbb{R}^n\subset \mathbb{C}^{n}$ and \item the homogeneous extremal function $H_S$ of the {\it real $n-$upper hemisphere} $$S:=\{(u_0,...,u_n)\in \mathbb{R}^{n+1}: \sum_{j=0}^n u_j^2\leq1, \ u_0 >0\}$$ in $\mathbb{R}^{n+1}$ considered as a subset of $A$ \end{enumerate} \noindent obtained by projecting $S$ onto $B_n$. 
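Before developing these relations, we record an elementary consistency check of the formula from the abstract (added here for the reader's convenience): on $K=\mathbb{R}^n$ the candidate function reduces to the potential $Q$, as it must, since a weighted extremal function satisfies $V_{K,Q}\leq Q$ on $K$.

```latex
For $x\in\mathbb{R}^n$ one has $x^2=|x|^2$, hence $|1+x^2|=1+|x|^2$ and the
square root in the formula vanishes:
\[
V_{K,Q}(x)=\frac{1}{2}\log\Bigl([1+|x|^2]+\bigl\{[1+|x|^2]^2-|1+x^2|^2\bigr\}^{1/2}\Bigr)
=\frac{1}{2}\log(1+x^2)=Q(x).
\]
```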
In both cases we appeal to two well-known and highly non-trivial results: \begin{enumerate} \item using Theorem \ref{blmthm} (or \cite{bar}) we have a foliation of $\mathbb{C}^n\setminus B_n$ (and $\mathbb{C}^{n+1} \setminus B_{n+1}$) by complex ellipses on which $V_{B_n}$ ($V_{B_{n+1}})$ is harmonic; and \item using Theorem \ref{sadthm} we have $V_{\widetilde K}$ (and $H_S$) is locally bounded on $A$ and is maximal on $A\setminus \widetilde K$ (on $A\setminus S$). \end{enumerate} \noindent See the next section for statements of Theorems \ref{blmthm} and \ref{sadthm} and section 4 for details of these relations. Bloom (cf., \cite{BL} and \cite{Bloomtams}) introduced a technique to switch back and forth between certain pluripotential-theoretic notions in $\mathbb{C}^{n+1}$ and their weighted counterparts in $\mathbb{C}^n$; we recall this in the next section. In section 3, we discuss a modification of Bloom's technique suitable for special weights $w$ and we use this modification in section 4 to construct a formula for $V_{K,Q}$ on a neighborhood of $\mathbb{R}^n$ for the set $K=\mathbb{R}^n\subset \mathbb{C}^n$ and weight $w(z)=|f(z)|=|\frac{1}{(1+z^2)^{1/2}}|$. This formula gives an explicit candidate $u\in L(\mathbb{C}^n)$ for $V_{K,Q}$. In section 5 we give another ``geometric'' interpretation of $u$ by observing a relationship with the Lie ball $$L_n :=\{z=(z_1,...,z_n)\in \mathbb{C}^n: |z|^2 +\{ |z|^4 - |z^2|^2\}^{1/2} \leq 1\}$$ which we use to explicitly compute that $(dd^cu)^n=0$ on $\mathbb{C}^n\setminus \mathbb{R}^n$, verifying that $u=V_{K,Q}$. As a corollary, we compute the Alexander capacity $T_{\omega}(\mathbb{R} \mathbb P^n)$ of $\mathbb{R} \mathbb P^n$. Finally, section 6 utilizes results from \cite{blmr} to compute an explicit formula for the Monge-Amp\`ere measure $(dd^cV_{K,Q})^n$. \section{Known results on extremal functions} In this section, we list some results and connections about extremal functions, all of which will be utilized. 
One particular situation where we know much information about $V_K$ is when $K$ is a convex body in $\mathbb{R}^n$; i.e., $K\subset \mathbb{R}^n$ is compact, convex and int$_{\mathbb{R}^n}K\not =\emptyset$. \begin{theorem}\label{blmthm} Let $K\subset \mathbb{R}^n$ be a convex body. Through every point $z\in\mathbb{C}^n\setminus K$ there is either a complex ellipse $E$ with $z\in E$ such that $V_K$ restricted to $E$ is harmonic on $E\setminus K$, or there is a complexified real line $L$ with $z\in L$ such that $V_K$ is harmonic on $L\setminus K$. For such $E$, $E\cap K$ is a real ellipse inscribed in $K$ with the property that for its given eccentricity and orientation, it is the ellipse with largest area completely contained in $K$; for such $L$, $L\cap K$ is the longest line segment (for its given direction) completely contained in $K$. \end{theorem} We refer the reader to Theorem 5.2 and Section 6 of \cite{blm2}; see also \cite{blm}. The ellipses and lines in Theorem \ref{blmthm} have parametrizations of the form $$ F(\zeta) = a + c\zeta + \frac{\bar c}{\zeta}, $$ $a\in\mathbb{R}^n$, $c\in\mathbb{C}^n$, $\zeta \in \mathbb{C}$ with $V_K(F(\zeta))=\log^+|\zeta|$ ($\bar c$ denotes the component-wise complex conjugate of $c$). These are higher dimensional analogs of the classical Joukowski function $\zeta\mapsto \frac{1}{2}(\zeta + \frac{1}{\zeta})$. For $K = B_n$, the real unit ball in $\mathbb{R}^n \subset \mathbb{C}^n$, the real ellipses $E\cap B_n$ and lines $L\cap B_n$ in Theorem \ref{blmthm} are symmetric with respect to the origin and, other than great circles in the real boundary of $B_n$, each $E\cap B_n$ and $L\cap B_n$ hits this real boundary at exactly two antipodal points. 
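To make the Joukowski analogy concrete, here is a short verification (ours, not in the original text) that for $n=1$ and $K=[-1,1]$ this parametrization recovers the classical Green function. Write $w=z+\sqrt{z^2-1}$ for the branch with $|w|\geq 1$, and let $h$ denote the inverse Joukowski map, $h(\frac{1}{2}(t+\frac{1}{t}))=t$ for $t\geq 1$.

```latex
Since $w+\frac{1}{w}=2z$ and $w-\frac{1}{w}=2\sqrt{z^2-1}$, the parallelogram
law gives
\[
2\Bigl(|w|^2+\frac{1}{|w|^2}\Bigr)
=\Bigl|w+\frac{1}{w}\Bigr|^2+\Bigl|w-\frac{1}{w}\Bigr|^2
=4|z|^2+4|z^2-1|,
\]
so $|z|^2+|z^2-1|=\frac{1}{2}\bigl(|w|^2+|w|^{-2}\bigr)$ and
$h\bigl(|z|^2+|z^2-1|\bigr)=|w|^2$. Hence
\[
\frac{1}{2}\log h\bigl(|z|^2+|z^2-1|\bigr)
=\log\bigl|z+\sqrt{z^2-1}\bigr|=V_{[-1,1]}(z),
\]
the classical Green function of the segment with pole at infinity.
```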
Lundin proved \cite{lunpre}, \cite{bar} that \begin{equation} \label{eq:realball} V_K(z) =\frac{1}{2} \log h(|z|^2 + |z^2 - 1|), \end{equation} where $|z|^2 = \sum |z_j|^2, \ z^2 = \sum z_j^2,$ and $h$ is the inverse Joukowski map $h(\frac{1}{2}(t + \frac{1}{t})) = t$ for $1 \leq t \in \mathbb{R}$. In this example, the Monge-Amp\`ere measure $(dd^cV_K)^n$ has the explicit form $$(dd^cV_K)^n = n! \ vol(K) \ \frac{dx}{(1-|x|^2)^{\frac{1}{2}}}:=n! \ vol(K) \ \frac{dx_1 \wedge \cdots \wedge dx_n}{(1- |x|^2)^{\frac{1}{2}}} $$ (see also (\ref{eq:monge})). We may consider the class $$H:=\{u \in L(\mathbb{C}^n): \ u(tz) =\log {|t|} +u(z), \ t\in \mathbb{C}, \ z \in \mathbb{C}^n \} $$ of {\it logarithmically homogeneous} psh functions and, for $E\subset \mathbb{C}^n$, the {\it homogeneous extremal function of $E$} denoted by $H_E^*$ where $$H_E(z):=\max [0,\sup \{u(z):u \in H, \ u\leq 0 \ \hbox{on} \ E\}]. $$ Note that $H_E(z)\leq V_E(z)$. If $E$ is compact, we have $$H_E(z)=\max [0,\sup \{\frac{1}{deg (h)}\log {|h(z)|}: h \ \hbox{homogeneous polynomial}, \ ||h||_E\leq 1\}]. $$ The $H-$principle of Siciak (cf., \cite{Kl}) gives a one-to-one correspondence between \begin{enumerate} \item homogeneous polynomials $H_d(t,z)$ of a fixed degree $d$ in $\mathbb{C}_t\times \mathbb{C}^n_z$ and polynomials $p_d(z)=H_d(1,z)$ of degree $d$ in $\mathbb{C}^n_z$ via $$H_d(t,z):=t^dp_d(z/t); $$ \item psh functions $h(t,z)$ in $H(\mathbb{C}_t\times \mathbb{C}^n_z)$ and psh functions $u(z)=h(1,z)$ in $L(\mathbb{C}^n_z)$ via $$h(t,z)=\log {|t|} +u(z/t) \ \hbox{if} \ t\not = 0; \ h(0,z):=\limsup_{(t,\zeta)\to (0,z)}h(t,\zeta); $$ \item extremal functions $V_E$ of $E\subset \mathbb{C}^n_z$ and homogeneous extremal functions $H_{1\times E}$ via 2.; i.e., \begin{equation}\label{easyfcn}V_E(z)=H_{1\times E}(1,z).
\end{equation} \end{enumerate} \noindent To expand upon 3., given a compact set $E\subset \mathbb{C}^n$, if one forms the circled set ($S$ is circled means $z\in S \iff e^{i\theta}z\in S$ for all $\theta\in\mathbb{R}$) $$Z(E):=\{(t,tz)\in \mathbb{C}^{n+1}: z\in E, \ |t|=1\} \subset \mathbb{C}^{n+1},$$ then $$H_{Z(E)}(1,z) = V_E(z);$$ indeed, for $t\not = 0$, $$H_{Z(E)}(t,z) = V_E(z/t)+\log |t|.$$ Note that $Z(E)$ is the ``circling'' of the set $\{1\}\times E\subset \mathbb{C}^{n+1}$. In general, if $E\subset \mathbb{C}^n$, the set $$E_c:=\{e^{i\theta}z: z\in E, \ \theta \in \mathbb{R}\}$$ is the smallest circled set containing $E$. If $E$ is compact, then $\widehat E_c$, the polynomial hull of $E_c$, is given by $$\widehat E_c=\{tz: \ z\in E, \ |t|\leq 1\}$$ which coincides with the {\it homogeneous polynomial hull} of $E$: $$\widehat E_{hom}:=\{z\in \mathbb{C}^n: |p(z)|\leq ||p||_E \ \hbox{for all homogeneous polynomials} \ p\}.$$ We have $H_{E_c}=V_{E_c}$. For future use we remark that if $E\subset F$ with $H_E=H_F=V_F$, it is {\it not} necessarily true that $V_E=H_E$. As a simple example, we can take $E=B_n$, the real unit ball, and $F=\widehat E_c=\widehat E_{hom}$. Then $F=L_n$, the {\it Lie ball} $$L_n =\{z=(z_1,...,z_n)\in \mathbb{C}^n: |z|^2 +\{ |z|^4 - |z^2|^2\}^{1/2} \leq 1\}$$ (see section 5). Here, $V_{B_n}\not = V_{L_n}$. More generally, if $K\subset \mathbb{C}^n$ is closed and $w$ is a weight function on $K$, we can form the circled set $$Z(K,Q):= \{(t,tz)\in \mathbb{C}^{n+1}: z\in K, \ |t|=w(z)\}$$ and then $$H_{Z(K,Q)} (1,z) = V_{K,Q}(z);$$ indeed, for $t\not = 0$, $$H_{Z(K,Q)} (t,z) = V_{K,Q}(z/t)+\log |t|.$$ This is the device utilized by Bloom (cf., \cite{BL} and \cite{Bloomtams}) alluded to in the introduction. Finally, we mention the following beautiful result of Sadullaev \cite{Sad}. \begin{theorem}\label{sadthm} Let $A$ be a pure $m-$dimensional, irreducible analytic subvariety of $\mathbb{C}^n$ where $1\leq m \leq n-1$.
Then $A$ is algebraic if and only if for some (all) $K\subset A$ compact and nonpluripolar in $A$, $V_K$ in (\ref{vk}) is locally bounded on $A$. \end{theorem} \noindent Note that $A$ and hence $K$ is pluripolar in $\mathbb{C}^n$ so $V_K^*\equiv \infty$; moreover, $V_K=\infty$ on $\mathbb{C}^n\setminus A$. In this setting, $V_K|_A$ (precisely, its usc regularization in $A$) is maximal on the regular points $A^{reg}$ of $A$ outside of $K$; i.e., $(dd^cV_K|_A)^m=0$ there, and $V_K|_A \in L(A)$. Here $L(A)$ is the set of psh functions $u$ on $A$ ($u$ is psh on $A^{reg}$ and locally bounded above on $A$) with the property that $u(z) - \log |z| = O(1)$ as $|z| \to \infty$ through points in $A$, see \cite{Sad}. \section{Relating extremal functions} Let $K\subset \mathbb{C}^n$ be closed and let $f$ be holomorphic on a neighborhood $\Omega$ of $K$. We define $F:\Omega \subset \mathbb{C}^n\to \mathbb{C}^{n+1}$ as $$F(z):=(f(z),zf(z))=W=(W_0,W')=(W_0,W_1,...,W_n)$$ where $W'=(W_1,...,W_n)$. Thus $$W_0= f(z), \ W_1 = z_1f(z), ..., \ W_n=z_nf(z).$$ Moreover we assume there exists a polynomial $P=P(z_0,z)$ in $\mathbb{C}^{n+1}$ with $P(f(z),z)=0$ for $z\in \Omega$; i.e., $f$ is {\it algebraic}. Taking such a polynomial $P$ of minimal degree, let \begin{equation}\label{variety} A:=\{W\in \mathbb{C}^{n+1}:P(W_0,W'/W_0)=P(W_0,W_1/W_0,...,W_n/W_0)=0 \}.\end{equation} Note that writing $P(W_0,W'/W_0)=\widetilde P(W_0,W')/W_0^s$ where $\widetilde P$ is a polynomial in $\mathbb{C}^{n+1}$ and $s$ is the degree of $P(z_0,z)$ in $z$ we see that $A$ differs from the algebraic variety $$\widetilde A:=\{W\in \mathbb{C}^{n+1}:\widetilde P(W_0,W')=0\}$$ by at most the set of points in $A$ where $W_0=0$, which is pluripolar in $A$. Thus we can apply Sadullaev's Theorem \ref{sadthm} to nonpluripolar subsets of $A$.
Now $P(f(z),z)=0$ for $z\in \Omega$ says that $$F(\Omega)=\{(f(z),zf(z)): z \in \Omega\}\subset A.$$ We can define a weight function $w(z):=|f(z)|$ which is well defined on all of $\Omega$ and in particular on $K$; as usual, we set \begin{equation}\label{need2}Q(z):=-\log w(z) = -\log |f(z)|.\end{equation} We will need our potentials defined in (\ref{need2}) to satisfy \begin{equation}\label{need}Q(z):=\max \{-\log |W_0|: W\in A, \ W'/W_0=z\}\end{equation} and we mention that (\ref{need}) can give an a priori definition of a potential for those $z\in \mathbb{C}^n$ at which there exist $W\in A$ with $W'/W_0=z$. We observe that for $K\subset \Omega$, we have two natural associated subsets of $A$: \begin{enumerate} \item $\widetilde K:= \{W\in A: W'/W_0\in K\}$ and \item $F(K)=\{W=F(z)\in A: z \in K\}$. \end{enumerate} \noindent Note that $F(K)\subset \widetilde K$ and the inclusion can be strict. \begin{proposition}\label{exfcn} Let $K\subset \mathbb{C}^n$ be closed with $Q$ in (\ref{need2}) satisfying (\ref{need}). If $F(K)$ is nonpluripolar in $A$, $$V_{K,Q}(z)-Q(z)\leq H_{F(K)}(W) \ \hbox{for} \ z\in \Omega \ \hbox{with} \ f(z)\not =0 $$ where the inequality is valid for $W=F(z)\in F(\Omega)$. \end{proposition} \noindent This reduces to (\ref{easyfcn}) if $w(z)\equiv 1$ ($Q(z)\equiv 0$) in which case $F(K)= \{1\}\times K$. \begin{remark} In general, Proposition \ref{exfcn} only gives estimates for $V_{K,Q}(z)$ if $z\in \Omega$ and $f(z)\not =0$. We will use this and Lemma \ref{lowerest} in the next section to get a formula for $V_{K,Q}(z)$ when $K=\mathbb{R}^n\subset \mathbb{C}^n$ and the weight $w(z)=|f(z)|=|\frac{1}{(1+z^2)^{1/2}}|$ for $z$ in a neighborhood $\Omega$ of $\mathbb{R}^n$ and in section 5 we will verify that this formula is valid on all of $\mathbb{C}^n$. 
\end{remark} \begin{proof} First note that for $z\in K$ and $W=F(z)\in F(K)$, given a polynomial $p$ in $\mathbb{C}^n$, $$|w(z)^{deg p}p(z)|=|f(z)|^{deg p}|p(z)|= |W_0^{deg p} p(W'/W_0)|=|\widetilde p(W)|$$ where $\widetilde p$ is the homogenization of $p$. Thus $||w^{deg p}p||_K\leq 1$ implies $|\widetilde p|\leq 1$ on $F(K)$. Now fix $z\in \Omega$ at which $f(z)\not =0$ (so $Q(z)<\infty$) and fix $\epsilon >0$. Choose a polynomial $p=p(z)$ with $||w^{deg p}p||_K\leq 1$ and $$\frac{1}{deg p} \log |p(z)|\geq V_{K,Q}(z) -\epsilon.$$ Thus $$V_{K,Q}(z) -\epsilon -Q(z)\leq \frac{1}{deg p} \log |p(z)|-Q(z).$$ For $W\in A$ with $W_0\not =0$ and $W'/W_0=z$, the above inequality reads: $$V_{K,Q}(z) -\epsilon -Q(z)\leq \frac{1}{deg p} \log |p(W'/W_0)|-Q(W'/W_0)\leq \frac{1}{deg p} \log |p(W'/W_0)|+\log |W_0|$$ from (\ref{need}). But $$\frac{1}{deg p} \log |p(W'/W_0)|+\log |W_0|=\frac{1}{deg p} \log |W_0^{deg p}p(W'/W_0)|=\frac{1}{deg \widetilde p} \log |\widetilde p(W)|.$$ This shows that $$V_{K,Q}(z) -\epsilon -Q(z)\leq \sup \{\frac{1}{deg \widetilde p} \log |\widetilde p(W)|: |\widetilde p|\leq 1 \ \hbox{on} \ F(K)\}\leq H_{F(K)}(W).$$ \end{proof} Next we prove a lower bound involving $\widetilde K$ which will be applicable in our special case. \begin{definition} \label{three4} \rm Let $A\subset\mathbb{C}^{n+1}$ be an algebraic hypersurface. We say that $A$ is \emph{bounded on lines through the origin} if there exists a uniform constant $c\geq 1$ such that for all $W\in A$, if $\alpha W\in A$ also holds for some $\alpha\in\mathbb{C}$, then $|\alpha|\leq c$. \end{definition} \begin{example} \label{three5} \rm A simple example of a hypersurface bounded on lines through the origin is one given by an equation of the form $p(W)=1$, where $p$ is a homogeneous polynomial. In this case, if $\alpha W\in A$ then $$1=p(\alpha W)=\alpha^{deg p}p(W)=\alpha^{deg p},$$ so $\alpha$ must be a root of unity. Hence we may take $c=1$. 
\end{example} In order to get a lower bound on $V_{K,Q}-Q$ we need to be able to extend $Q$ to a function in $L(\mathbb{C}^n)$. \begin{lemma}\label{lowerest} Let $K\subset\mathbb{C}^n$ and let $Q(z)=-\log |f(z)|$ with $f$ defined and holomorphic on $\Omega\supset K$. Define $A$ as in (\ref{variety}) and assume $Q$ satisfies (\ref{need}). We suppose $A$ is bounded on lines through the origin, $\widetilde K$ is a nonpluripolar subset of $A$, and that $Q$ has an extension to $\mathbb{C}^n$ (which we still call $Q$) satisfying (\ref{need}) such that $Q\in L(\mathbb{C}^n)$. Then given $z\in\mathbb{C}^n$, $$ H_{\widetilde K}(W)\leq V_{\widetilde K}(W)\leq V_{K,Q}(z)-Q(z) $$ for all $W=(W_0,W')\in A$ with $W'/W_0=z$. \end{lemma} \begin{proof} The left-hand inequality $H_{\widetilde K}(W)\leq V_{\widetilde K}(W)$ is immediate. For the right-hand inequality, we first note that $V_{\widetilde K}(W)\in L(A)$ if $\widetilde K$ is nonpluripolar in $A$. Hence there exists a constant $C\in\mathbb{R}$ such that $$ V_{\widetilde K}(W) \leq \log|W| + C = \log|W_0| + \frac{1}{2}\log(1+|W'/W_0|^2) + C $$ for all $W\in A$ with $W_0\not = 0$. Define the function $$ U(z):= \max\{V_{\widetilde K}(W): W\in A, W'/W_0=z\} + Q(z). $$ Note that the right-hand side is a locally finite maximum since $A$ is an algebraic hypersurface. Away from the singular points $A^{sing}$ of $A$ one can write $V_{\widetilde K}(W)$ as a psh function in $z$ by composing it with a local inverse of the map $A\ni W\mapsto z=W'/W_0\in\mathbb{C}^n$. Hence $U$ is psh off the pluripolar set $$\{z\in\mathbb{C}^n: z=W'/W_0 \ \hbox{ for some } \ W\in A^{sing}\},$$ and hence psh everywhere since it is clearly locally bounded above on $\mathbb{C}^n$. Also, since $V_{\widetilde K}=0$ on $\widetilde K$ it follows that $U\leq Q$ on $K$. We now verify that $U\in L(\mathbb{C}^n)$ by checking its growth. 
By the definitions of $U$ and $Q$ and (\ref{need}), given $z\in\mathbb{C}^n$ there exist $W,V\in A$, with $z=W'/W_0= V'/V_0$, such that $$ U(z) = V_{\widetilde K}(W) + Q(z) \ \hbox{ and } \ Q(z)=-\log|V_0|. $$ Note that $W=\alpha V$ for some $\alpha\in\mathbb{C}$ (both points lie on the complex line $\{(t,tz):t\in\mathbb{C}\}$), and since $A$ is bounded on lines through the origin, there is a uniform constant $c$ (independent of $W,V$) such that $|\alpha|\leq c$. We then compute \begin{eqnarray*} U(z) = V_{\widetilde K}(W) - \log|V_0| &\leq& V_{\widetilde K}(W) - \log|W_0| + \log c \\ &\leq& \log|W| + C - \log|W_0| + \log c \\ &=& \log|W/W_0| + C +\log c = \tfrac{1}{2}\log(1+|z|^2) + C +\log c \end{eqnarray*} where $C>0$ exists since $V_{\widetilde K}\in L(A)$. Hence $U\in L(\mathbb{C}^n)$, and since $U\leq Q$ on $K$ this means that $U(z)\leq V_{K,Q}(z)$. By the definition of $U$, $$ V_{\widetilde K}(W)+Q(z)\leq V_{K,Q}(z) $$ for all $W\in A$ such that $W'/W_0=z$, which completes the proof. \end{proof} The situation of Lemma \ref{lowerest} will be the setting of our example in the next section. \section{The weight $w(z)=|\frac{1}{(1+z^2)^{1/2}}|$ and $K=\mathbb{R}^n$} We consider the closed set $K=\mathbb{R}^n\subset \mathbb{C}^n$ and the weight $w(z)=|f(z)|=|\frac{1}{(1+z^2)^{1/2}}|$ where $z^2 =z_1^2 +\cdots + z_n^2$. Note that $f(z)\not =0$ and we may extend $Q(z)=-\log |f(z)|$ to all of $\mathbb{C}^n$ as $Q(z)=\frac{1}{2}\log |1+z^2|\in L(\mathbb{C}^n)$. Since $$(1+z^2)\cdot f(z)^2 -1 =0,$$ we take $$P(z_0, z) = (1+z^2)z_0^2 -1.$$ Here, $$A = \{W:P(W_0,W'/W_0)=(1+W'^2/W_0^2)W_0^2-1= W_0^2+W'^2-1=0\}$$ is the complexified sphere in $\mathbb{C}^{n+1}$. From Definition \ref{three4} and Example \ref{three5}, $A$ is bounded on lines through the origin. Note that $f$ is clearly holomorphic in a neighborhood of $\mathbb{R}^n$; thus we can take, e.g., $\Omega=\{z= x+iy \in \mathbb{C}^n: y^2=y_1^2 +\cdots + y_n^2 < s <1\}$ in Proposition \ref{exfcn} and Lemma \ref{lowerest} where $z_j=x_j+iy_j$.
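In this example everything is explicit, and the basic identities can be sanity-checked numerically (a sketch, not part of the argument; the sample point is arbitrary): $F(z)$ lands on the complexified sphere $A$, and $|W|^2=(1+|z|^2)/|1+z^2|$ for $W=F(z)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
z = rng.normal(size=n) + 1j*rng.normal(size=n)
z2 = np.sum(z*z)                      # z^2 = z_1^2 + ... + z_n^2 (complex)
f = 1/np.sqrt(1 + z2)                 # one branch of (1+z^2)^{-1/2}
W = np.concatenate(([f], z*f))        # W = F(z) = (f(z), z f(z))
# F(z) lies on the complexified sphere A = {W_0^2 + W'^2 = 1}
assert np.isclose(np.sum(W*W), 1)
# the identity |W|^2 = (1+|z|^2)/|1+z^2| used to evaluate the extremal function
assert np.isclose(np.sum(np.abs(W)**2), (1 + np.sum(np.abs(z)**2))/abs(1 + z2))
```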
Condition (\ref{need}) holds for $Q(z)=\frac{1}{2}\log |1+z^2|\in L(\mathbb{C}^n)$ at $z\in \mathbb{C}^n$ for which there exist $W\in A$ with $W'/W_0=z$ since $W=(W_0,W')\in A$ implies $W_0=\pm \sqrt {1-(W')^2}$ so that $|W_0|$ is the same for each choice of $W_0$. We have $$F(K)= \{(f(z),zf(z)): z=(z_1,...,z_n) \in K=\mathbb{R}^n\}=\{(\frac{1}{(1+x^2)^{1/2}},\frac{x}{(1+x^2)^{1/2}}):x\in \mathbb{R}^n\}.$$ Writing $u_j = {\rm Re} W_j$, we see that $$F(K)=\{ (u_0,...,u_n)\in \mathbb{R}^{n+1}: \sum_{j=0}^n u_j^2=1, \ u_0 >0\}.$$ On the other hand, $$\widetilde K = \{W\in A: W'/W_0\in K\}= \{ (u_0,...,u_n)\in \mathbb{R}^{n+1}: \sum_{j=0}^n u_j^2=1\}.$$ Clearly $\widetilde K$ is nonpluripolar in $A$ which completes the verification that Lemma \ref{lowerest} is applicable. We also observe that since for any homogeneous polynomial $h=h(W_0,...,W_n)$ we have $$|h(-u_0,u_1,...,u_n)|=|h(u_0,-u_1,...,-u_n)|,$$ the homogeneous polynomial hulls of $\widetilde K$ and $\overline {F(K)}$ in $\mathbb{C}^{n+1}$ coincide so that $H_{\widetilde K} = H_{\overline{F(K)}}$ in $A$. Since $$\overline{F(K)}\setminus F(K)=\{ (u_0,...,u_n)\in \mathbb{R}^{n+1}: \sum_{j=0}^n u_j^2=1, \ u_0 =0\}\subset A\cap \{ W_0=0\}$$ is a pluripolar subset of $A$, \begin{equation}\label{hulleq} H_{\widetilde K} = H_{F(K)}\end{equation} on $A\setminus P$ where $P\subset A$ is pluripolar in $A$. Combining (\ref{hulleq}) with Proposition \ref{exfcn} and Lemma \ref{lowerest}, we have \begin{equation}\label{fullin} H_{\widetilde K} (W)=V_{\widetilde K}(W) = V_{K,Q}(z) -Q(z)= H_{F(K)}(W)\end{equation} for $z\in \widetilde \Omega :=\Omega \setminus \widetilde P$ and $W=F(z)$ where $\widetilde P$ is pluripolar in $\mathbb{C}^n$. To compute the extremal functions in this example, we first consider $V_{\widetilde K}$ in $A$. Let $$B:=B_{n+1}=\{(u_0,...,u_n)\in \mathbb{R}^{n+1}: \sum_{j=0}^n u_j^2\leq1\}$$ be the real $(n+1)-$ball in $\mathbb{C}^{n+1}$. 
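The descriptions of $F(K)$ as the open upper hemisphere, and the symmetry $|h(-u_0,u')|=|h(u_0,-u')|$ for homogeneous $h$, can be verified directly; a minimal numeric sketch (the homogeneous polynomial $h$ below is an arbitrary example chosen for this check):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2
x = rng.normal(size=n)                              # a point of K = R^n
W = np.concatenate(([1.0], x))/np.sqrt(1 + x @ x)   # F(x), real for real x
# F(K) sits on the upper unit hemisphere: sum u_j^2 = 1 with u_0 > 0
assert np.isclose(W @ W, 1) and W[0] > 0
# for homogeneous h, |h(-u0,u')| = |h(u0,-u')| since (-u0,u') = -(u0,-u')
h = lambda u: u[0]**2*u[1] + 3*u[1]*u[2]**2         # homogeneous of degree 3
u = rng.normal(size=3)
assert np.isclose(abs(h([-u[0], u[1], u[2]])), abs(h([u[0], -u[1], -u[2]])))
```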
\begin{proposition} We have $$V_B(W)=V_{\widetilde K}(W)$$ for $W\in A$. \end{proposition} \begin{proof} Clearly $V_B|_A \leq V_{\widetilde K}$. To show equality holds, the idea is that if we consider the complexified extremal ellipses $L_{\alpha}$ as in Theorem \ref{blmthm} for $B$ whose real points $S_{\alpha}$ are great circles on $\widetilde K$, the boundary of $B$ in $\mathbb{R}^{n+1}$, then the union of these varieties fills out $A$: $\cup_{\alpha} L_{\alpha} =A$. Since $V_B|_{L_{\alpha}}$ is {\it harmonic}, we must have $V_B|_{L_{\alpha}} \geq V_{\widetilde K}|_{L_{\alpha}}$ so that $V_B|_A = V_{\widetilde K}$. To see that $\cup_{\alpha} L_{\alpha} =A$, we first show $A\subset \cup_{\alpha} L_{\alpha}$. If $W\in A\setminus \widetilde K$, then $W$ lies on {\it some} complexified extremal ellipse $L$ whose real points $E$ are an inscribed ellipse in $B$ with boundary in $\widetilde K$ (and $V_B|_L$ is harmonic). If $L\not = L_{\alpha}$ for every $\alpha$, then $E\cap \widetilde K$ consists of two antipodal points $\pm p$. By rotating coordinates we may assume $\pm p = (\pm 1,0,...,0)$ and $$E\subset \{(u_0,...,u_n): \ u_2=\cdots =u_n=0\}.$$ We have two cases: \begin{enumerate} \item $E=\{(u_0,...,u_n): |u_0|\leq 1, \ u_1=0, \ u_2=\cdots =u_n=0\}$, a real interval: \noindent In this case $$L=\{(W_0,0,...,0): W_0\in \mathbb{C} \}.$$ But then $L\cap A =\{(W_0,0,...,0): W_0=\pm 1 \}=\{\pm p\} \subset \widetilde K$, contradicting $W\in A\setminus \widetilde K$. \item $E=\{(u_0,...,u_n): u_0^2+u_1^2/r^2=1, \ u_2=\cdots =u_n=0\}$ where $0<r<1$, a nondegenerate ellipse: \noindent In this case, $$L:=\{(W_0,...,W_n): W_0^2+W_1^2/r^2=1, \ W_2=\cdots =W_n=0\}.$$ But then if $W\in L\cap A$ we have $$ W_0^2+W_1^2/r^2=1= W_0^2+W_1^2$$ so that $W_1=\cdots =W_n=0$ and $W_0^2=1$; i.e., $L\cap A =\{\pm p\} \subset \widetilde K$ which again contradicts $W\in A\setminus \widetilde K$.
\end{enumerate} For the reverse inclusion, recall that the variety $A$ is defined by $\sum_{j=0}^nW_j^2=1$. If $W=u+iv$ with $u,v\in \mathbb{R}^{n+1}$, we have $$\sum_{j=0}^nW_j^2 = \sum_{j=0}^n[u_j^2-v_j^2]+ 2i\sum_{j=0}^nu_jv_j.$$ Thus for $W=u+iv \in A$, we have $$\sum_{j=0}^nu_jv_j=0.$$ If we take an orthogonal transformation $T$ on $\mathbb{R}^{n+1}$, then, by definition, $T$ preserves Euclidean lengths in $\mathbb{R}^{n+1}$; i.e., $\sum_{j=0}^n (T(u)_j)^2=\sum_{j=0}^n u_j^2$. Moreover, if $u,v$ are orthogonal; i.e., $\sum_{j=0}^nu_jv_j=0$, then $\sum_{j=0}^n(T(u))_j\cdot (T(v))_j =0$. Extending $T$ to a complex-linear map on $\mathbb{C}^{n+1}$ via $$T(W)=T(u+iv):= T(u) +iT(v),$$ we see that if $W\in A$, then $\sum_{j=0}^n(T(u))_j\cdot (T(v))_j =0$ so that $$\sum_{j=0}^n(T(W)_j)^2 = \sum_{j=0}^n[(T(u)_j)^2-(T(v)_j)^2]=\sum_{j=0}^n[u_j^2-v_j^2]=1.$$ Thus $T$ preserves $A$. Clearly the ellipse $$L_0:=\{(W_0,...,W_n): W_0^2+W_1^2=1, \ W_2=\cdots =W_n=0\}$$ corresponding to the great circle $S_0:=\{(u_0,...,u_n): u_0^2+u_1^2=1, \ u_2=\cdots =u_n=0\}$ lies in $A$ and any other great circle $S_{\alpha}$ can be mapped to $S_0$ via an orthogonal transformation $T_{\alpha}$. From the previous paragraph, we conclude that $ \cup_{\alpha} L_{\alpha}\subset A$. \end{proof} \noindent We use the Lundin formula for $V_B$ in (\ref{eq:realball}): $$V_B(W)=\frac{1}{2}\log h\bigl( |W|^2+|W^2-1|\bigr)$$ where $h(t)=t+\sqrt{t^2-1}$ for $t\in \mathbb{C}\setminus [-1,1]$.
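As a numerical sanity check of Lundin's formula (a sketch, not part of the argument; the sample points are arbitrary): it vanishes on the real ball and has logarithmic growth at infinity.

```python
import math, cmath

def h(t):
    return t + cmath.sqrt(t*t - 1)

def VB(W):
    # Lundin's formula V_B(W) = (1/2) log h(|W|^2 + |W^2 - 1|)
    t = sum(abs(w)**2 for w in W) + abs(sum(w*w for w in W) - 1)
    return 0.5*math.log(abs(h(t)))

# V_B vanishes on the real ball B: there |W|^2 + |W^2 - 1| = 1 and h(1) = 1
assert abs(VB([0.3, -0.4, 0.5])) < 1e-12
# V_B(W) - log|W| = O(1): the scaled values stabilize as the scale grows
W = [1.0 + 0.5j, -0.7j, 2.0]
g = [VB([s*w for w in W]) - math.log(s) for s in (1e2, 1e3)]
assert abs(g[0] - g[1]) < 1e-3
```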
Now the formula for $V_{\widetilde K}$ can only be valid on $A$; and indeed, since $W^2=1$ on $A$, by the previous proposition we obtain $$V_{\widetilde K}(W)=\frac{1}{2}\log h (|W|^2), \ W\in A.$$ Note that since the real sphere $\widetilde K$ and the complexified sphere $A$ are invariant under real rotations, the Monge-Amp\`ere measure $$(dd^cV_{\widetilde K} (W))^n=(dd^c \frac{1}{2}\log h( |W|^2))^n$$ must be invariant under real rotations as well and hence is normalized surface area measure on the real sphere $\widetilde K$. This can also be seen as a consequence of $V_{\widetilde K}$ being the {\it Grauert tube function} for $\widetilde K$ in $A$ as $(dd^cV_{\widetilde K} (W))^n$ gives the volume form $dV_g$ on $\widetilde K$ corresponding to the standard Riemannian metric $g$ there (cf., \cite{Z}). Getting back to the calculation of $V_{K,Q}$, note that since $W=(\frac{1}{(1+z^2)^{1/2}},\frac{z}{(1+z^2)^{1/2}})$, $$|W|^2:=|W_0|^2+|W_1|^2+\cdots +|W_n|^2=\frac{1+|z|^2}{|1+z^2|}.$$ Plugging in to (\ref{fullin}) $$V_{\widetilde K}(W)= V_B(W) =V_{K,Q}(z)-Q(z)=V_{K,Q}(z)- \frac{1}{2}\log |1+z^2|$$ gives \begin{equation}\label{magic}V_{K,Q}(z)=\frac{1}{2}\log \bigl( [1+|z|^2] + \{ [1+|z|^2]^2-|1+z^2|^2\}^{1/2}\bigr)\end{equation} for $z\in \widetilde \Omega$. We show in section 5 that this formula does indeed give us the extremal function $V_{K,Q}(z)$ for all $z\in \mathbb{C}^n$. A similar observation leads to another derivation of the above formula. Consider $\overline {F(K)}$ as the upper hemisphere $$S:=\{(u_0,...,u_n)\in \mathbb{R}^{n+1}: \sum_{j=0}^n u_j^2 =1, \ u_0 \geq 0\}$$ in $\mathbb{R}^{n+1}$ and let $\pi: \mathbb{R}^{n+1}\to \mathbb{R}^n$ be the projection $\pi(u_0,...,u_n)=(u_1,...,u_n)$ which we extend to $\pi: \mathbb{C}^{n+1}\to \mathbb{C}^n$ via $\pi(W_0,...,W_n)=(W_1,...,W_n)$. Then $$\pi(S) = B_n:= \{(u_1,...,u_n)\in \mathbb{R}^{n}: \sum_{j=1}^n u_j^2\leq1\}$$ is the real $n-$ball in $\mathbb{C}^{n}$. 
Each great semicircle $C_{\alpha}$ in $S$ -- these are simply halves of the great circles $S_{\alpha}$ from before -- projects to half of an inscribed ellipse $E_{\alpha}$ in $B_n$, while the other half of $E_{\alpha}$ is the projection of the great semicircle given by the negative $u_1,...,u_n$ coordinates of $C_{\alpha}$ (still in $F(K)$, i.e., with $u_0>0$). As before, the complexifications $E^*_{\alpha}$ of the ellipses $E_{\alpha}$ correspond to complexifications of the great circles. \begin{proposition} \label{semi} We have $$H_{F(K)}(W_0,...,W_n)=V_{B_n}(\pi(W))=V_{B_n}(W_1,...,W_n)=V_{B_n}(W')\leq V_{\widetilde K}(W_0,...,W_n)$$ for $W=(W_0,...,W_n)=(W_0,W')\in A$. \end{proposition} \begin{proof} Clearly $V_{B_n}(\pi(W))\leq V_{\widetilde K}(W)$. For the inequality $H_{F(K)}(W)\leq V_{B_n}(\pi(W))$, note that for $W\in A$ with $W=(W_0,W')$, we have $\pi^{-1}(W')=(\pm W_0,W')\in A$ but the value of $H_{F(K)}$ is the same at both of these points. Thus $W'\to H_{F(K)}(\pi^{-1}(W'))$ is a well-defined function of $W'$ for $W\in A$ which is clearly in $L(\mathbb{C}^n)$ (in the $W'$ variables) and nonpositive if $W'\in B_n$; hence $H_{F(K)}(\pi^{-1}(W'))\leq V_{B_n}(W')$. \end{proof} From (\ref{fullin}), $$ H_{\widetilde K} (W)=V_{\widetilde K}(W) = V_{K,Q}(z) -Q(z)= H_{F(K)}(W)$$ for $z\in \widetilde \Omega$ and $W=F(z)$ so that we have equality for such $W$ in Proposition \ref{semi} and an alternate way of computing $V_{K,Q}$. From the Lundin formula, for $(W_0,W')\in A$ we have $W_0^2+W'^2=1$ so $$V_{B_n}(W')=\frac{1}{2}\log h\bigl( |W'|^2+|W'^2-1|\bigr)=\frac{1}{2}\log h( |W|^2),$$ and we get the same formula (\ref{magic}) \[V_{K,Q}(z)=\frac{1}{2}\log \bigl( [1+|z|^2] + \{ [1+|z|^2]^2-|1+z^2|^2\}^{1/2}\bigr)\] for $z\in \widetilde \Omega$. \begin{remark}\label{onevarem} Note that for $n=1$, it is easy to see that \begin{equation}\label{onevar} V_{K,Q}(z)=\max [\log |z-i|, \log |z+i|]\end{equation} which agrees with formula (\ref{magic}).
\end{remark} \section{Relation with Lie ball and maximality of $V_{K,Q}$} One way of describing the Lie ball $L_n\subset \mathbb{C}^n$ is that it is the homogeneous polynomial hull $\widehat{(B_n)}_{hom}$ of the real ball $$B_n:=\{x=(x_1,...,x_n)\in \mathbb{R}^n: x^2 = x_1^2+\cdots +x_n^2\leq 1\}.$$ A formula for $L_n$ is given by $$L_n =\{z=(z_1,...,z_n)\in \mathbb{C}^n: |z|^2 +\{ |z|^4 - |z^2|^2\}^{1/2} \leq 1\}.$$ Note that (by definition) $L_n$ is circled. Writing $Z:=(z_0,z)=(z_0,z_1,...,z_n)\in \mathbb{C}^{n+1}$, $$L_{n+1}=\{Z\in \mathbb{C}^{n+1}: |Z|^2 +\{ |Z|^4 - |Z^2|^2\}^{1/2} \leq 1\}.$$ The (homogeneous) Siciak-Zaharjuta extremal function of this (circled) set is $$H_{L_{n+1}}(Z)=V_{L_{n+1}}(Z)= \frac{1}{2}\log^+ \bigl(|Z|^2 +\{ |Z|^4 - |Z^2|^2\}^{1/2}\bigr).$$ Thus $$V_{L_{n+1}}(1,z)= \frac{1}{2}\log \bigl([1+|z|^2] +\{ [1+|z|^2]^2 - |1+z^2|^2\}^{1/2}\bigr)$$ so that from (\ref{magic}) $$V_{K,Q}(z)= V_{L_{n+1}}(1,z)$$ for $z\in \widetilde \Omega$. The extremal function $V_{L_{n+1}}(Z)$ for the Lie ball in $\mathbb{C}^{n+1}$ is maximal outside $L_{n+1}$ and, since $$V_{L_{n+1}}(\lambda Z)= \log |\lambda| + V_{L_{n+1}}(Z)$$ for $Z\in \partial L_{n+1}$ and $\lambda \in \mathbb{C}$ with $|\lambda|>1$, we see that $V_{L_{n+1}}$ is harmonic on complex lines through the origin (in the complement of $L_{n+1}$). Thus for each $Z\not \in L_{n+1}$, the vector $Z$ is an eigenvector of the complex Hessian of $V_{L_{n+1}}$ at $Z$ with eigenvalue $0$. We will use this to show: {\sl for $z\not \in \mathbb{R}^n$, the vector ${\rm Im} z$ is an eigenvector of the complex Hessian of the function $V_{K,Q}(z)$ defined in (\ref{magic}) at $z$ with eigenvalue $0$}.
To this end, let $u:\mathbb{C}^n\to\mathbb{R}$ denote our candidate function for $V_{K,Q}$ where $K=\mathbb{R}^n\subset \mathbb{C}^n$ and the weight $w(z)=|f(z)|=|\frac{1}{(1+z^2)^{1/2}}|$, i.e., for $z\in \mathbb{C}^n$, define \[u(z):=\frac{1}{2}\log \bigl( [1+|z|^2] + \{ [1+|z|^2]^2-|1+z^2|^2\}^{1/2}\bigr).\] Let $U:\mathbb{C}^{n+1}\to \mathbb{R}$ denote its homogenization, i.e., \[U(Z)=\frac{1}{2}\log \bigl(|Z|^2+ \{|Z|^4-|Z^2|^2\}^{1/2}\bigr)\] with $Z:=(z_0,z)\in \mathbb{C}^{n+1},$ so that $u(z)=U(1,z)$. From above, $\max[0,U(Z)]$ is the extremal function for the Lie ball $L_{n+1}$, and since $U(Z)$ is psh, so is $u(z)$. Also, $U$ is symmetric as a function of its arguments and has the property that $U(\overline{Z})=U(Z)$; in particular it follows that \[\frac{\partial^2U}{\partial Z_j\partial \overline{Z}_k}(\overline{Z})= \frac{\partial^2U}{\partial Z_j\partial \overline{Z}_k}(Z).\] Now, for any function $v$, let $H_v(z)$ denote the complex Hessian of $v$ evaluated at the point $z.$ For any fixed $Z\in\mathbb{C}^{n+1}$ and $\lambda\in\mathbb{C},$ \[U(\lambda Z)=U(Z)+\log|\lambda|,\] which is harmonic as a function of $\lambda$ for $\lambda \not = 0$. It follows that \begin{equation} \label{atz} H_U(Z)Z=0\in \mathbb{C}^{n+1},\quad \forall Z\in \mathbb{C}^{n+1}\setminus \{0\} \end{equation} and that \begin{equation} \label{atzbar} H_U(Z)\overline{Z}=H_U(\overline{Z})\overline{Z}= 0\in \mathbb{C}^{n+1},\quad \forall Z\in \mathbb{C}^{n+1}\setminus \{0\}.
\end{equation} Equivalently, equation \eqref{atz} says that, for $0\le j\le n,$ \[\sum_{k=0}^{n} \frac{\partial^2U}{\partial Z_j\partial \overline{Z}_k}(Z) \times Z_k=0.\] But then, for $1\le j\le n,$ we have \[\sum_{k=1}^{n} \frac{\partial^2U}{\partial Z_j\partial \overline{Z}_k}(Z) \times Z_k=-\frac{\partial^2U}{\partial Z_j\partial \overline{Z}_{0}}(Z)\times Z_{0}.\] Evaluating at $Z=(1,z)$ we obtain \[\sum_{k=1}^{n} \frac{\partial^2U}{\partial Z_j\partial \overline{Z}_k}(1,z) \times z_k=-\frac{\partial^2U}{\partial Z_j\partial \overline{Z}_{0}}(1,z)\times 1,\] i.e., \[\sum_{k=1}^{n} \frac{\partial^2u}{\partial z_j\partial \overline{z}_k}(z) \times z_k=-\frac{\partial^2U}{\partial Z_j\partial \overline{Z}_{0}}(1,z).\] Similarly, from \eqref{atzbar} we obtain, for $1\le j\le n,$ \[\sum_{k=1}^{n} \frac{\partial^2U}{\partial Z_j\partial \overline{Z}_k}(Z) \times \overline{Z}_k=-\frac{\partial^2U}{\partial Z_j\partial \overline{Z}_{0}}(Z)\times \overline{Z}_{0}\] so that evaluating at $Z=(1,z)$ gives \[\sum_{k=1}^{n} \frac{\partial^2u}{\partial z_j\partial \overline{z}_k}(z) \times \overline{z}_k=-\frac{\partial^2U}{\partial Z_j\partial \overline{Z}_{0}}(1,z).\] Consequently, \[H_u(z)z=H_u(z)\overline{z}, \,\,\hbox{i.e.,}\,\, H_u(z)(z-\overline{z})=0.\] In particular, for $z\neq \overline{z},$ i.e., $z\notin \mathbb{R}^n,$ ${\rm det}(H_u(z))=0,$ i.e., $(dd^c u)^n=0$ (note as $u$ is psh, $H_u(z)$ is a positive semi-definite matrix). 
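The conclusion $H_u(z)(z-\overline{z})=0$ can also be checked numerically by finite differences (a sketch, not part of the argument; the step size, sample point and tolerance are ad hoc choices):

```python
import numpy as np

def u(z):
    # the candidate function (formula (magic))
    t = 1 + np.sum(np.abs(z)**2)
    s = abs(1 + np.sum(z*z))
    return 0.5*np.log(t + np.sqrt(t*t - s*s))

def complex_hessian(z, h=1e-3):
    # d^2 u / dz_j dzbar_k via central finite differences in real coordinates
    n = len(z)
    v0 = np.concatenate((z.real, z.imag))
    ureal = lambda v: u(v[:n] + 1j*v[n:])
    e = np.eye(2*n)
    def d2(a, b):  # mixed second partial d^2 ureal / dv_a dv_b
        return (ureal(v0 + h*e[a] + h*e[b]) - ureal(v0 + h*e[a] - h*e[b])
                - ureal(v0 - h*e[a] + h*e[b]) + ureal(v0 - h*e[a] - h*e[b]))/(4*h*h)
    H = np.empty((n, n), dtype=complex)
    for j in range(n):
        for k in range(n):
            H[j, k] = 0.25*(d2(j, k) + d2(n + j, n + k)
                            + 1j*(d2(j, n + k) - d2(n + j, k)))
    return H

z = np.array([0.3 + 0.5j, -0.7 + 0.2j])
H = complex_hessian(z)
# Im z spans a null direction of the complex Hessian off R^n
assert np.linalg.norm(H @ (z - z.conj())) < 1e-4
```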
Since the function $u$ is maximal on $\mathbb{C}^n\setminus \mathbb{R}^n$ and $u(x)= Q(x)=\frac{1}{2}\log (1+x^2)$ for $x\in \mathbb{R}^n$ we have proved the following: \begin{theorem} \label{magic1} For $K=\mathbb{R}^n\subset \mathbb{C}^n$ and weight $w(z)=|f(z)|=|\frac{1}{(1+z^2)^{1/2}}|$, $$V_{K,Q}(z)=\frac{1}{2}\log \bigl( [1+|z|^2] + \{ [1+|z|^2]^2-|1+z^2|^2\}^{1/2}\bigr), \ z\in \mathbb{C}^n.$$ \end{theorem} Note that from (\ref{wtdrel}), since the K\"ahler potential $u_0(x)=Q(x)$ for $x\in K=\mathbb{R}^n$, $$V_{K,Q}(z)= u_0(z)+ v_{K,0}([1:z]).$$ Thus we have found a formula for the (unweighted) extremal function of $\mathbb{R} \mathbb P^n$, the real points of $\mathbb P^n$. \begin{corollary} \label{magiccor}The unweighted $\omega-$psh extremal function of $\mathbb{R} \mathbb P^n$ is given by $$v_{\mathbb{R} \mathbb P^n,0}([1:z])=\frac{1}{2}\log \bigl( [1+|z|^2] + \{ [1+|z|^2]^2-|1+z^2|^2\}^{1/2}\bigr)-u_0(z)$$ \begin{equation}\label{extrwt} =\frac{1}{2}\log \bigl( 1+[1-\frac{|1+z^2|^2}{(1+|z|^2)^2}]^{1/2}\bigr)\end{equation} for $[1:z]\in \mathbb{C}^n$ and \begin{equation}\label{extrwt2}v_{\mathbb{R} \mathbb P^n,0}([0:z])=\frac{1}{2}\log \bigl( 1+[1-\frac{|z^2|^2}{(|z|^2)^2}]^{1/2}\bigr).\end{equation} \end{corollary} Since $|1+z^2|\leq 1+|z|^2$ (and $|z^2|\leq |z|^2$), we see that, e.g., upon taking $z=i(1/\sqrt n,...,1/\sqrt n)$ in (\ref{extrwt}) or letting $z\to 0$ in (\ref{extrwt2}), $$\sup_{{\bf z}\in \mathbb P^n}v_{\mathbb{R} \mathbb P^n,0}({\bf z})= \frac{1}{2}\log 2.$$ This gives the exact value of the Alexander capacity $T_{\omega}(\mathbb{R} \mathbb P^n)$ of $\mathbb{R} \mathbb P^n$ in Example 5.12 of \cite{GZ}: $$ T_{\omega}(\mathbb{R} \mathbb P^n)=1/\sqrt 2.$$ We remark that Dinh and Sibony had observed that the value of the Alexander capacity $T_{\omega}(\mathbb{R} \mathbb P^n)$ was independent of $n$ (Proposition A.6 in \cite{DS}). 
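The value of the supremum is easy to confirm numerically from \eqref{extrwt} (a sanity check, not a proof; the random sampling is an ad hoc choice):

```python
import math
import numpy as np

def v(z):
    # right-hand side of (extrwt)
    t = 1 + np.sum(np.abs(z)**2)
    s = abs(1 + np.sum(z*z))
    return 0.5*math.log(1 + math.sqrt(max(0.0, 1 - (s/t)**2)))

n = 3
z = 1j*np.ones(n)/math.sqrt(n)        # the maximizing point z = i(1/sqrt(n),...)
assert abs(v(z) - 0.5*math.log(2)) < 1e-12
rng = np.random.default_rng(2)
for _ in range(100):                  # |1+z^2| >= 0 forces v <= (1/2) log 2
    w = rng.normal(size=n) + 1j*rng.normal(size=n)
    assert v(w) <= 0.5*math.log(2) + 1e-12
```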
\section{Calculation of $(dd^cV_{K,Q})^n$ with $V_{K,Q}$ in (\ref{magic})} We will compute $(dd^cV_{K,Q})^n$ for $V_{K,Q}$ in (\ref{magic}) after discussing some differential geometry. Let $\delta(x;y)$ be a Finsler metric where $x\in \mathbb{R}^n$ and $y\in \mathbb{R}^n$ is a tangent vector at $x$. The Busemann density associated to this Finsler metric is $$\omega(x)=\frac{vol(\hbox{Euclidean unit ball in $\mathbb{R}^n$})}{vol(B_x)}$$ where $$B_x:= \{y: \delta(x;y)\leq 1\}.$$ The Holmes-Thompson density associated to $\delta(x;y)$ is $$\widetilde \omega(x)=\frac{vol(B_x^*)}{vol(\hbox{Euclidean unit ball in $\mathbb{R}^n$})}$$ where $$B_x^*:=\{y: \delta(x;y)\leq 1\}^*=\{w\in\mathbb{R}^n: w\cdot y= w^ty\leq 1 \ \hbox{for all} \ y\in B_x\}$$ is the dual unit ball. Here $w^t$ denotes the transpose of the (column) vector $w$. Finsler metrics arise naturally in pluripotential theory in the following setting: if $K=\bar \Omega$ where $\Omega$ is a bounded domain in $\mathbb{R}^n\subset \mathbb{C}^n$, the quantity \begin{equation}\label{baran}\delta_B(x;y):=\limsup_{t\to 0^+}\frac{V_K(x+ity)}{t}=\limsup_{t\to 0^+}\frac{V_K(x+ity)-V_K(x)}{t} \end{equation} for $x\in K$ and $y\in \mathbb{R}^n$ defines a Finsler metric called the {\it Baran pseudometric} (cf., \cite{blw}). It is generally not Riemannian; when it is, one obtains more information on these densities. \begin{proposition} \label{ball} Suppose $$\delta(x;y)^2=y^tG(x)y$$ is a Riemannian metric; i.e., $G(x)$ is a positive definite matrix.
Then, normalizing the volume of the Euclidean unit ball to be one, $$vol(B_x^*)\cdot vol(B_x)=1 \ \hbox{and} \ vol(B_x^*)=\sqrt {\mathop{\mathrm{det}}\nolimits G(x)}.$$ \end{proposition} \begin{proof} Writing $G(x)=H^t(x)H(x)$, we have $$\delta(x;y)^2=y^tG(x)y=y^tH^t(x)H(x)y.$$ Letting $||\cdot||_2$ denote the standard Euclidean ($l^2$) norm, we then have $$B_x=\{y\in \mathbb{R}^n: ||H(x)y||_2\leq 1\}=H^{-1}(x)\bigl( \hbox{unit ball in $l^2-$norm})$$ and $$B_x^*=H(x)^t\bigl( \hbox{unit ball in $l^2-$norm}).$$ Hence $vol(B_x^*)\cdot vol(B_x)=1$ and $$vol\bigl(\{y:\delta(x;y)\leq 1\}^*\bigr)=vol(B_x^*)=\mathop{\mathrm{det}}\nolimits H(x)=\sqrt {\mathop{\mathrm{det}}\nolimits G(x)}.$$ \end{proof} Motivated by (\ref{baran}) and Theorem \ref{main} below, for $u(z)=V_{K,Q}(z)$ in (\ref{magic}), we will show that the limit $$\delta_u(x;y):=\lim_{t\to 0^+}\frac{u(x+ity)-u(x)}{t}$$ exists. Fixing $x\in \mathbb{R}^n$ and $y\in \mathbb{R}^n$, let $$F(t):=u(x+ity)=\frac{1}{2}\log \{ (1+x^2+t^2y^2) + 2 [t^2y^2+t^2x^2y^2-(x\cdot ty)^2]^{1/2}\}$$ $$=\frac{1}{2}\log \{ (1+x^2+t^2y^2) + 2 t[y^2+x^2y^2-(x\cdot y)^2]^{1/2}\}.$$ It follows that $$\delta_u(x;y)=F'(0)=\frac{1}{2}\frac{2 [y^2+x^2y^2-(x\cdot y)^2]^{1/2}}{1+x^2}=\frac{ [y^2+x^2y^2-(x\cdot y)^2]^{1/2}}{1+x^2}.$$ We write $$\delta_u^2(x;y)=\frac{ y^2+x^2y^2-(x\cdot y)^2}{(1+x^2)^2}=y^tG(x)y$$ where $$G(x):=\frac{(1+x^2)I -xx^t}{(1+x^2)^2}.$$ Since this matrix is positive definite, $\delta_u(x;y)$ defines a Riemannian metric. We analyze this further. The eigenvalues of the rank one matrix $xx^t\in \mathbb{R}^{n\times n}$ are $x^2,0,\ldots,0$ since $$(xx^t)x = x(x^tx) = x^2\cdot x;$$ and clearly $v\perp x$ implies $(xx^t)v=x(x^tv)=0$.
The eigenvalues of $(1+x^2)I -xx^t$ are then $$(1+x^2)-x^2, \ (1+x^2)-0, \ldots, \ (1+x^2)-0 \ = \ 1,1+x^2,\ldots, 1+x^2$$ and the eigenvalues of $G(x)$ are $$\frac{1}{(1+x^2)^2}, \frac{1}{1+x^2},\ldots, \frac{1}{1+x^2}.$$ This shows $G(x)$ is, indeed, positive definite (it is clearly symmetric) and $$\mathop{\mathrm{det}}\nolimits G(x)=\frac{1}{(1+x^2)^{n+1}}.$$ From Proposition \ref{ball}, $$vol(B_x^*) = \sqrt {\mathop{\mathrm{det}}\nolimits G(x)}=\frac{1}{(1+x^2)^{\frac{n+1}{2}}}=\frac{1}{vol(B_x)}.$$ In particular, the Busemann and Holmes-Thompson densities associated to $\delta_u(x;y)$ are \begin{equation}\label{dense}\frac{1}{(1+x^2)^{\frac{n+1}{2}}}\end{equation} up to normalization. Note from (\ref{onevar}) in Remark \ref{onevarem} this agrees with the density of $\Delta V_{K,Q}$ with respect to Lebesgue measure $dx$ on $\mathbb{R}$ if $n=1$ and this will be the case for the density of $ (dd^cV_{K,Q})^n$ with respect to Lebesgue measure $dx$ on $\mathbb{R}^n$ for $n>1$ as well. For motivation, we recall the main result of \cite{blmr} (see \cite{bt} for the symmetric case $K=-K$): \begin{theorem} \label{main} Let $K$ be a convex body and $V_K$ its Siciak-Zaharjuta extremal function. The limit \begin{equation} \label{dblim} \delta (x;y):=\lim_{t\to 0^+}{V_K(x+ity)\over t} \end{equation} exists for each $x\in {\rm int}_{\mathbb{R}^n}K$ and $y\in \mathbb{R}^n$ and \begin{equation} \label{eq:monge} (dd^cV_K)^n=\lambda(x)dx \ \hbox{where} \ \lambda(x)=n!vol (\{y: \delta (x;y)\leq 1\}^*)=n!vol(B_x^*). \end{equation} \end{theorem} \noindent The conclusion of Theorem \ref{main} required Proposition 4.4 of \cite{blmr}: \begin{proposition} \label{propbaran} Let $D\subset \mathbb{C}^n$ and let $\Omega :=D\cap \mathbb{R}^n$. Let $v$ be a nonnegative locally bounded psh function on $D$ which satisfies: $\begin{array}{rl} i. & \Omega=\{v=0\}; \\ ii. & (dd^cv)^n=0 \; \hbox{on} \; D\setminus \Omega; \\ iii. & (dd^cv)^n=\lambda(x)dx \; \hbox{on} \; \Omega; \\ iv.
& \hbox{for all} \ x\in \Omega, \ y \in \mathbb{R}^n, \ \hbox{the limit} \end{array}$ $$h(x,y):=\lim_{t\to 0^+} {v(x+ity)\over t} \ \hbox{exists and is continuous on} \ \Omega \times i\mathbb{R}^n;$$ $\begin{array}{rl} v. & \hbox{for all} \ x\in \Omega, \ y\mapsto h(x,y) \ \hbox{is a norm.} \end{array}$ Then $$\lambda(x)=n! {\rm vol} \{y:h(x,y)\leq 1\}^*$$ and $\lambda(x)$ is a continuous function on $\Omega$. \end{proposition} \begin{theorem} For $V_{K,Q}$ in (\ref{magic}), \begin{equation} \label{monge} (dd^cV_{K,Q})^n=n! \frac{1}{(1+x^2)^{\frac{n+1}{2}}}dx.\end{equation} \end{theorem} \begin{proof} Recall we extended $Q(x)=\frac{1}{2}\log (1+x^2)$ on $\mathbb{R}^n$ to all of $\mathbb{C}^n$ as $$Q(z)=\frac{1}{2}\log |1+z^2|\in L(\mathbb{C}^n).$$ With this extension of $Q$, and writing $u:=V_{K,Q}$, we claim \begin{enumerate} \item $Q$ is pluriharmonic on $\mathbb{C}^n\setminus V$ where $V=\{z\in \mathbb{C}^n: 1+z^2=0\}$; \item $u-Q\geq 0$ in $\mathbb{C}^n$; and $\mathbb{R}^n=\{z\in \mathbb{C}^n: u(z)-Q(z)=0\}$; \item for each $x,y\in \mathbb{R}^n$ $$\lim_{t\to 0^+}\frac{Q(x+ity)-Q(x)}{t} =0.$$ \end{enumerate} Item 1. is clear; 2. may be verified by direct calculation (the inequality also follows from the observation that $Q\in L(\mathbb{C}^n)$ and $Q$ equals $u$ on $\mathbb{R}^n$); and for 3., observe that $$|1+(x+ity)^2|^2 = (1+x^2-t^2y^2)^2+4t^2(x\cdot y)^2=(1+x^2)^2+O(t^2)$$ so that $$Q(x+ity)-Q(x)=\frac{1}{2}\log |1+(x+ity)^2|-\frac{1}{2}\log (1+x^2)$$ $$=\frac{1}{4} \log \frac{(1+x^2)^2+O(t^2)}{(1+x^2)^2}\approx \frac{1}{4}\frac{O(t^2)}{(1+x^2)^2} \ \hbox{as} \ t\to 0. $$ Thus 1. and 2.
imply that $v:=u-Q$ defines a nonnegative plurisubharmonic function in $\mathbb{C}^n\setminus V$, in particular, on a neighborhood $D\subset \mathbb{C}^n$ of $\mathbb{R}^n$; from 1., \begin{equation} \label{maeq} (dd^cv)^n = (dd^cu)^n \ \hbox{on} \ D;\end{equation} and from 3., for each $x,y\in \mathbb{R}^n$ $$\lim_{t\to 0^+}\frac{v(x+ity)-v(x)}{t} =\lim_{t\to 0^+}\frac{u(x+ity)-Q(x+ity)-u(x)+Q(x)}{t}$$ $$=\lim_{t\to 0^+}\frac{u(x+ity)-u(x)}{t}- \lim_{t\to 0^+}\frac{Q(x+ity)-Q(x)}{t}=\delta_u(x;y).$$ Then (\ref{maeq}), (\ref{dense}) and Proposition \ref{propbaran} give (\ref{monge}). \end{proof} {\bf Authors:}\\[\baselineskip] L. Bos, [email protected]\\ University of Verona, Verona, ITALY \\[\baselineskip] N. Levenberg, [email protected]\\ Indiana University, Bloomington, IN, USA\\ \\[\baselineskip] S. Ma`u, [email protected]\\ University of Auckland, Auckland, NEW ZEALAND\\ \\[\baselineskip] F. Piazzon, [email protected]\\ University of Padua, Padua, ITALY \\[\baselineskip] \end{document}
\begin{document} \title[Fisher-KPP equation with time delay] {Propagation dynamics of Fisher-KPP equation with time delay and free boundaries} \begin{abstract} Incorporating a free boundary into time-delayed reaction-diffusion equations yields a compatible condition that guarantees the well-posedness of the initial value problem. With the KPP type nonlinearity we then establish a vanishing-spreading dichotomy result. Further, when spreading happens, we show that the spreading speed and spreading profile are nonlinearly determined by a delay-induced nonlocal semi-wave problem. It turns out that time delay slows down the spreading speed. \end{abstract} \maketitle \section{Introduction}\label{sec:intr} In the pioneering work of Fisher \cite{Fisher}, and Kolmogorov, Petrovski and Piskunov \cite{KPP}, it was shown that \begin{equation}\label{eq:KPP} u_t =u_{xx}+f(u), \quad x\in\mathbb{R} \end{equation} with \begin{equation}\label{eq:KPP-cond} f\in C^1(\mathbb{R},\mathbb{R}),\quad f(0)=0=f(1),\quad f(s)\leqslant f'(0)s,\ s\geqslant 0, \end{equation} admits traveling wave solutions of the form $u(t,x)=\phi(x-ct)$ satisfying $\phi(-\infty)=1$ and $\phi(+\infty)=0$ if and only if $c\geqslant c_0:=2\sqrt{f'(0)}$. In the 1970s, Aronson and Weinberger \cite{AW2} proved that the minimal wave speed $c_0$ is also the asymptotic speed of spread (spreading speed for short) in the sense that \begin{equation} \lim_{t\to\infty} \sup_{|x|\geqslant (c_0+\epsilon)t}u(t,x)=0,\quad \lim_{t\to\infty} \inf_{|x|\leqslant (c_0-\epsilon)t}u(t,x)=1 \end{equation} for any small $\epsilon>0$ provided that the initial function $u(0,x)$ is compactly supported. These works have stimulated a large body of research on the propagation dynamics of various types of evolution systems. Among them, two lines of study are of particular interest here: the Fisher-KPP equation \eqref{eq:KPP}-\eqref{eq:KPP-cond} with time delay, and with free boundary.
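The classical value $c_0=2\sqrt{f'(0)}$ can be recovered by minimizing the linear dispersion relation at the leading edge; a minimal numeric sketch (grid and tolerance are ad hoc choices):

```python
import math

fp0 = 1.0                      # f'(0) for the logistic nonlinearity u(1-u)
# linearizing at u = 0 with the ansatz u ~ e^{-lam(x - ct)} gives
# lam^2 - c*lam + f'(0) = 0, i.e. c(lam) = lam + f'(0)/lam for lam > 0
c0 = min(lam + fp0/lam for lam in (k/100 for k in range(1, 1000)))
assert abs(c0 - 2*math.sqrt(fp0)) < 1e-3
```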
Schaaf \cite{Sc} studied the following delayed reaction-diffusion equation \begin{equation}\label{eq:Schaaf} u_t(t,x)=u_{xx}(t,x)+f(u(t,x),u(t-\tau,x)),\quad x\in\mathbb{R},\ t>0, \end{equation} where $\tau>0$ is the time delay. With the Fisher-KPP condition on $\tilde{f}(s):=f(s,s)$ and the quasi-monotone condition $\partial_2 f\geqslant 0$, it was shown that the minimal wave speed $c_0=c_0(\tau)$ exists and is determined by the system of two transcendental equations \begin{equation}\label{eq:speed} F(c,\lambda)=0, \quad \frac{\partial F}{\partial\lambda}(c,\lambda)=0, \end{equation} where \begin{equation} F(c,\lambda)=\lambda^2+c\lambda+\partial_1 f(0,0)+\partial_2 f(0,0)e^{-\lambda\tau}. \end{equation} The delay-induced spatial non-locality was brought to attention by So, Wu and Zou \cite{SWZ}, who derived the following time-delayed reaction-diffusion model equation with nonlocal response for the study of age-structured populations \begin{equation}\label{eq:SWZ} u_t=u_{xx}-d u+\gamma \int_\mathbb{R} b(u(t-\tau,x-y))k(y)dy,\quad x\in\mathbb{R},\ t>0, \end{equation} where $u$ represents the density of the mature population, $\tau>0$ is the maturation age, $d$ is the death rate, $b$ is the birth rate function, $\gamma$ is the survival rate from newborn to being mature, and $k$ is the redistribution kernel during the maturation period. As such, introducing time delay into a diffusive equation usually gives rise to spatial non-locality due to the interaction of the time lag (for maturation) and the diffusion of the immature population. In the extreme case where the immature population does not diffuse, the kernel $k$ becomes the Dirac measure, and hence \eqref{eq:SWZ} reduces to \eqref{eq:Schaaf}. We refer to the survey article \cite{GW} for the delay-induced nonlocal reaction-diffusion problems.
In \cite{SWZ}, the authors obtained the minimal wave speed $c_0(\tau)$, determined by a system similar to \eqref{eq:speed}, provided that $b$ is nondecreasing and $f(s):=-ds+b(s)$ is of Fisher-KPP type. Wang, Li and Ruan \cite{WLR} proved that $c_0(\tau)$ is decreasing in $\tau$. Liang and Zhao \cite{LZ} showed that $c_0(\tau)$ is also the spreading speed for the solutions satisfying the following initial condition \begin{equation} \text{$u(\theta,x)$ is continuous and compactly supported in $\theta\in [-\tau,0]$ and $x\in\mathbb{R}$.} \end{equation} Similar to the classical Fisher-KPP equation, the spreading speed $c_0(\tau)$ for delayed reaction-diffusion equations is still linearly determined for both local and nonlocal problems thanks to the Fisher-KPP type condition. We refer to \cite{MS} for more properties that are induced by time delay in reaction-diffusion equations, including the well-posedness of initial value problems as well as the role of the quasi-monotone condition in the comparison principle, and \cite{FangZhao14, FangZhao15} for the delay-induced weak compactness of time-$t$ solution maps when $t\in(0,\tau]$ as well as its role in the study of wave propagation. Recently, Du and Lin \cite{DuLin} proposed a Stefan type free boundary condition for the Fisher-KPP equation \begin{equation}\label{freeb} \left\{ \begin{array}{ll} u_t =u_{xx}+u(1-u), & g(t)< x<h(t),\; t>0,\\ u(t,g(t))=0,\ \ g'(t)=-\mu u_x(t, g(t)), & t>0,\\ u(t,h(t))=0,\ \ h'(t)=-\mu u_x (t, h(t)) , & t>0, \end{array} \right. \end{equation} where the free boundaries $x=g(t)$ and $x=h(t)$ represent the spreading fronts, which are determined jointly by the gradient at the fronts and the coefficient $\mu$ in the Stefan condition. For more background on such free boundary conditions, we refer to \cite{DuLin, BDK}.
It was proved in \cite{DuLin} that the unique global solution $(u,g,h)$ has a spreading-vanishing dichotomy property as $t\to\infty$: either $(g(t),h(t))$ expands to $\mathbb{R}$ and $u\to1$ (spreading case), or $g(t)\to g_\infty$, $h(t)\to h_\infty$ with $h_\infty-g_\infty\leqslant \pi$, and $u\to 0$ (vanishing case). Moreover, it was also proved that when spreading happens, there is a constant $k_0>0$ such that $-g(t)$ and $h(t)$ behave like the straight line $k_0t$ for large time, where $k_0$ is called the asymptotic speed of spread (spreading speed for short). Different from the classical Fisher-KPP speed, $k_0$ is the unique value of $c$ such that the following nonlinear half-line problem is solvable: \begin{equation}\label{k0} \left\{ \begin{array}{ll} q'' - cq'+q(1-q)=0, & z>0,\\ q(\infty)=1, \ \ \mu q_+'(0)=c,\ \ q(z)>0, & z> 0,\\ q(z)=0, & z\leqslant 0, \end{array} \right. \end{equation} where $q_+'(0)$ is the right derivative of $q(z)$ at $0$. In particular, as $\mu$ increases to infinity, $k_0$ increases to the classical Fisher-KPP speed $2\sqrt{f'(0)}$. Later on, Du and Lou \cite{DuLou} obtained a rather complete characterization of the asymptotic behavior of solutions for \eqref{freeb} with some general nonlinear terms. For further related work on free boundary problems, we refer to \cite{DuGuo, DGP, DMZ} and the references therein. In this paper, we aim to explore how to incorporate time delay and free boundary into the Fisher-KPP equation \eqref{eq:KPP}-\eqref{eq:KPP-cond} so that the problem is well-posed, and then to study their joint influence on the propagation dynamics. To keep the presentation smooth, we write down here the problem of interest and leave the derivation details to the next section, including the emergence of the compatible condition \eqref{CC} for the well-posedness of the initial value problem.
\begin{equation}\label{p} \left\{ \begin{array}{ll} u_t(t,x) =u_{xx}(t,x)- d u(t,x) +f(u(t-\tau,x)), & x\in(g(t),h(t)),\; t>0,\\ u(t,g(t))=0,\ \ g'(t)=-\mu u_x(t, g(t)), & t>0,\\ u(t,h(t))=0,\ \ h'(t)=-\mu u_x (t, h(t)) , & t>0,\\ u(\theta,x) =\phi (\theta,x),& g(\theta) \leqslant x \leqslant h(\theta),\; \theta\in[-\tau,0], \end{array} \right. \tag{$P$} \end{equation} where $d$ and $\tau$ are two positive constants, the nonlinear function $f$ satisfies \[ \bf{(H)}\ \hskip 16mm \left\{ \begin{array}{l} f(s)\in C^{1+\tilde{\nu}}([0,\infty))\ \mbox{ for some } \tilde{\nu}\in(0,1),\ \ f(0)=0,\ \ f'(0)-d>0;\\ f(s)-d s=0 \mbox{ has a unique positive constant root } u^*;\\ f(s) \mbox{ is monotonically increasing in } s \in[0,u^*];\\ \frac{f(s)}{s} \mbox{ is monotonically decreasing in } s \in[0,u^*] \end{array} \right. \ \hskip 15mm \] and the initial data $(\phi(\theta,x), g(\theta), h(\theta))$ satisfies \begin{equation}\label{def:X} \left\{ \begin{array}{ll} \phi(\theta,x) \in C^{1,2} ([-\tau,0]\times[g(\theta),h(\theta)]),\\ 0<\phi(\theta,x)\leqslant u^*\ \mbox{ for } (\theta,x)\in[-\tau,0]\times(g(\theta), h(\theta)),\\ \phi(\theta,x) \equiv 0\ \ \mbox{ for } \theta\in[-\tau,0],\; x\not\in(g(\theta),h(\theta)) \end{array} \right. \end{equation} as well as the compatible condition \begin{equation}\label{CC} [g(\theta), h(\theta)]\subset [g(0), h(0)]\ \ \ \mbox{ for }\ \theta\in[-\tau,0]. \end{equation} Assumption {\bf (H)} ensures the Fisher-KPP structure as well as the comparison principle. Due to the nature of delay differential equations, the initial value, including the initial domain, has to be imposed over the history period $[-\tau,0]$, as in \eqref{def:X}. The interaction of time delay and free boundary gives rise to the compatible condition \eqref{CC}, which is essential for the well-posedness of the problem. If $\tau=0$, then the compatible condition \eqref{CC} becomes trivial and problem \eqref{p} reduces to \eqref{freeb} (with $u(1-u)$ replaced by $f(u)-du$).
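As a concrete example (standard in the population dynamics literature, cf. the birth function in \cite{SWZ}; it is not needed for our results), one may take the Nicholson type birth function $b(s)=pse^{-as}$ with $p,a>0$, so that $f(s)=e^{-d_I\tau}b(s)=\bar p\, s e^{-as}$ with $\bar p:=pe^{-d_I\tau}$. A sketch of checking {\bf (H)} in this case:

```latex
\begin{align*}
& f(0)=0,\qquad f'(0)=\bar{p}>d \quad\text{(assumed)};\\
& f(s)=ds \iff s=u^*:=\tfrac{1}{a}\ln\tfrac{\bar{p}}{d}
  \quad\text{(unique positive root)};\\
& f'(s)=\bar{p}(1-as)e^{-as}\geqslant 0 \ \text{ on } [0,u^*]
  \iff u^*\leqslant \tfrac{1}{a} \iff \bar{p}\leqslant ed;\\
& \tfrac{f(s)}{s}=\bar{p}\,e^{-as} \ \text{ is decreasing on } (0,u^*].
\end{align*}
```

Hence {\bf (H)} holds for this birth function precisely when $d<\bar p\leqslant ed$.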
\begin{thm}\label{wellposedness} {\rm(Well-posedness)} For any initial data $(\phi(\theta,x), g(\theta), h(\theta))$ satisfying \eqref{def:X} and \eqref{CC}, there exists a unique triple $(u, g, h)$ solving \eqref{p} with $u\in C^{1,2}((0,\infty) \times[g(t),h(t)])$ and $g,\, h\in C^1([0,\infty))$. \end{thm} With the compatible condition \eqref{CC} we can cast the problem into a fixed boundary problem and then apply the Schauder fixed point theorem to establish the local existence of solutions. The extension to all positive time is based on some a priori estimates\footnote{We sincerely thank Professor Avner Friedman for his valuable comments and suggestions on the proof of the well-posedness.}. From the maximum principle and {\bf (H)}, it follows that when $t>0$ the solution satisfies $u>0$ for $x\in (g(t),h(t))$, $u_x(t,g(t))>0$ and $u_x(t,h(t))<0$, and hence $g'(t)<0<h'(t)$ for all $t>0$. Therefore, we can denote $$ g_{\infty}:=\lim_{t\to\infty}g(t)\ \ \mbox{ and }\ \ h_{\infty}:= \lim_{t\to\infty}h(t). $$ \begin{thm}{\rm (Spreading-vanishing dichotomy)}\label{thm:asy be} Let $(u,g,h)$ be the solution of \eqref{p}. Then the following alternative holds: Either {\rm (i) Spreading:} $(g_\infty, h_\infty)=\mathbb{R}$ and \[ \lim_{t\to\infty}u(t,x)=u^* \mbox{ locally uniformly in $\mathbb{R}$}, \] or {\rm (ii) Vanishing:} $(g_\infty, h_\infty)$ is a finite interval with length no bigger than $\frac{\pi}{\sqrt{f'(0)-d}}$ and \[ \lim_{t\to\infty}\max_{g(t)\leqslant x\leqslant h(t)} u(t,x)=0. \] \end{thm} When spreading happens, we characterize the spreading speed and profile of the solutions. The nonlinear and nonlocal semi-wave problem \begin{equation}\label{sw11} \left\{ \begin{array}{ll} q'' - cq'-d q+ f( q(z-c\tau))=0, & z>0,\\ q(\infty)=u^*, \ \ \mu q_+'(0)=c,\ \ q(z)>0, & z> 0,\\ q(z)=0, & z\leqslant 0 \end{array} \right. \end{equation} will play an important role. If $\tau=0$ then \eqref{sw11} reduces to a local problem of the form \eqref{k0}.
\begin{thm}\label{thm:semiwave} Problem \eqref{sw11} admits a unique solution $(c^*, q_{c^*})$ and $c^*=c^*(\tau)$ is decreasing in the delay $\tau\geqslant 0$. \end{thm} Due to the presence of time delay, the proof of Theorem \ref{thm:semiwave} relies heavily on the distribution of complex solutions of the following transcendental equation \begin{equation} \lambda^2-c\lambda-d+f'(0)e^{-\lambda c\tau}=0. \end{equation} We refer to Lemma \ref{lem:eigen} and Proposition \ref{prop:qoan1}, which are of independent interest. With the semi-wave established above, we can construct various super- and subsolutions to estimate the spreading fronts $h(t),g(t)$ and the spreading profile as $t\to\infty$. \begin{thm}\label{thm:profile of spreading sol} {\rm(Spreading profile)} Let $u$ be a solution satisfying Theorem \ref{thm:asy be}(i). Then there exist two constants $H_1$ and $G_1$ such that \[ \lim\limits_{t\to\infty}[h(t)- c^*t] = H_1 ,\quad \ \lim\limits_{t\to\infty} h'(t)=c^*, \] \[ \lim\limits_{t\to\infty}[g(t) + c^*t] = G_1 ,\quad\ \lim\limits_{t\to\infty} g'(t)=-c^*, \] \begin{equation}\label{profile convergence 1} \lim\limits_{t\to\infty} \left\| u(t,\cdot)- q_{c^*}(c^*t+ H_1-\cdot) \right\|_{L^\infty ( [0, h(t)])}=0, \end{equation} \begin{equation}\label{profile convergence 1-left} \lim\limits_{t\to\infty} \left\| u(t,\cdot)- q_{c^*}(c^*t- G_1+\cdot )\right\|_{L^\infty ([g(t), 0])} =0, \end{equation} where $(c^*,q_{c^*})$ is the unique solution of \eqref{sw11}. \end{thm} The rest of this paper is organized as follows. In Section 2 we derive the compatible condition \eqref{CC}, with which we formulate problem \eqref{p} and then establish the well-posedness as well as the comparison principle. Section 3 is devoted to the study of the semi-wave problem \eqref{sw11}. In Section 4, we establish the spreading-vanishing dichotomy result. Finally, in Section 5, we characterize the spreading speed and profile of spreading solutions of \eqref{p}.
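At the linearized level, the delay-induced slowdown asserted in Theorem \ref{thm:semiwave} can already be observed numerically from the transcendental equation above. The following sketch (ours, not part of the paper; the parameters $d=1$ and $f'(0)=2$ are illustrative choices satisfying $f'(0)-d>0$) computes, by bisection, the smallest $c>0$ for which $\lambda^2-c\lambda-d+f'(0)e^{-\lambda c\tau}=0$ has a positive real root $\lambda$:

```python
import math

def linear_speed(tau, d=1.0, fp0=2.0):
    """Smallest c > 0 for which lam^2 - c*lam - d + fp0*exp(-lam*c*tau) = 0
    has a positive real root lam (bisection on c over a lambda grid)."""
    def has_real_root(c):
        # At lam = 0 the left-hand side equals fp0 - d > 0, and it tends to
        # +infinity as lam -> infinity, so a positive real root exists
        # exactly when the expression dips to <= 0 somewhere in between.
        lam = 0.005
        while lam <= 30.0:
            if lam * lam - c * lam - d + fp0 * math.exp(-lam * c * tau) <= 0.0:
                return True
            lam += 0.005
        return False

    lo, hi = 0.0, 10.0  # no root at c = 0; a root exists at c = 10
    for _ in range(48):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if has_real_root(mid) else (mid, hi)
    return hi

# tau = 0 recovers the classical value 2*sqrt(fp0 - d) = 2,
# and the speed strictly decreases as the delay grows.
speeds = [linear_speed(t) for t in (0.0, 0.5, 1.0)]
```

For $\tau>0$ and $\lambda,c>0$ one has $e^{-\lambda c\tau}<1$, so the expression is smaller than its undelayed counterpart and real roots appear at smaller $c$: the computed speeds are strictly decreasing in $\tau$, starting from $2\sqrt{f'(0)-d}=2$ at $\tau=0$. (The free boundary speed $c^*$ of Theorem \ref{thm:semiwave} is determined nonlinearly and is smaller still.)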
\section{The compatible condition, well-posedness and comparison principle}\label{sec:basic} \subsection{The compatible condition} To formulate problem \eqref{p}, we start from the age-structured population growth law \begin{equation}\label{ase} p_t+p_a=D(a)p_{xx}-d(a)p, \end{equation} where $p=p(t,x;a)$ denotes the density of species of age $a$ at time $t$ and location $x$, and $D(a)$ and $d(a)$ denote the diffusion rate and death rate of species of age $a$, respectively. Next we consider the scenario in which the species has the following biological characteristics. \begin{itemize} \item[(A1)] The species can be classified into two stages by age: mature and immature. An individual at time $t$ belongs to the mature class if and only if its age exceeds the maturation time $\tau>0$. Within each stage, all individuals share the same behavior. \item[(A2)] The immature population does not move in space. \end{itemize} The total mature population $u$ at time $t$ and location $x$ can be represented by the integral \begin{equation}\label{mimuv} u(t,x)=\int_\tau^\infty p(t,x;a)da. \end{equation} We assume that the mature population $u$ lives in the habitat $[g(t),h(t)]$, vanishes on the boundary \begin{equation}\label{vc} u(t,g(t))=0=u(t,h(t)),\quad t>0 \end{equation} and extends the habitat by obeying the Stefan type moving boundary conditions: \begin{equation}\label{fbc} h'(t)=-\mu u_x(t,h(t)),\ \ g'(t)=-\mu u_x(t,g(t)), \quad t>0, \end{equation} where $\mu$ is a given positive constant. Note that the immature population does not contribute to the extension of the habitat due to its immobility, as assumed in (A2). According to (A1) we may assume that \[ D(a)=\left\{ \begin{array}{ll} 1,& a\geqslant \tau ,\\ 0, & 0\leqslant a<\tau, \end{array} \right. \ \ \ \ d(a)=\left\{ \begin{array}{ll} d ,& a\geqslant \tau ,\\ d_I, & 0\leqslant a<\tau, \end{array} \right. \] where $d$ and $d_I$ are two positive constants.
Differentiating both sides of \eqref{mimuv} in time and using \eqref{ase} yields \begin{eqnarray}\label{diff-u} u_t & = &\int_\tau^\infty p_t da = \int_\tau^\infty [-p_a+ p_{xx}-d p]da\nonumber\\ & = & u_{xx}-d u+p(t,x;\tau) -p(t,x;\infty).\label{du1} \end{eqnarray} Since no individual lives forever, it is natural to assume that \begin{equation}\label{infinity} p(t,x;\infty)=0. \end{equation} To obtain a closed form of the model, one then needs to express $p(t,x;\tau)$ by $u$ in a certain way. Indeed, $p(t,x;\tau)$ denotes the newly matured population at time $t$, and it is the evolution result of newborns at time $t-\tau$. In other words, there is an evolution relation between the quantities $p(t,x;\tau)$ and $p(t-\tau,x;0)$. This relation is governed by the growth law \eqref{ase} for $0<a<\tau$, where $D=0$ and $d=d_I$; setting $q(s,x):=p(t-\tau+s,x;s)$, it is given by the time-$\tau$ solution map of the following equation \begin{equation}\label{ast} \left\{ \begin{array}{ll} q_s=-d_Iq, & x\in\mathbb{R},\ 0\leqslant s\leqslant \tau,\\ q(0,x)=p(t-\tau,x;0), & x\in\mathbb{R}. \end{array} \right. \end{equation} Therefore, $p(t,x;\tau)=q(\tau,x)=e^{-d_I\tau}p(t-\tau,x;0)$. Further, the newborn population is given by the birth rate: $p(t-\tau,x;0)=b(u(t-\tau,x))$, where $b$ is the birth rate function with $b(0)=0$. Consequently, \begin{equation}\label{pptst} p(t,x;\tau)=e^{-d_I\tau}b(u(t-\tau,x)). \end{equation} Combining \eqref{vc}-\eqref{infinity} and \eqref{pptst}, we are led to the following system: \begin{equation}\label{p-nonsim} \left\{ \begin{array}{lll} u_t(t,x) =u_{xx}(t,x)- d u(t,x) +e^{-d_I\tau}b(u(t-\tau,x)), & t>0,\ x\in[g(t-\tau),h(t-\tau)],\\ u_t(t,x) =u_{xx}(t,x)- d u(t,x), & t>0,\ x\in[g(t),h(t)]\setminus [g(t-\tau),h(t-\tau)],\\ u(t,g(t))=0=u(t,h(t)), &t>0,\\ h'(t)=-\mu u_x(t,h(t)),\ \ g'(t)=-\mu u_x(t,g(t)), & t>0. \end{array} \right. \end{equation} For $t>0$, outside the habitat $(g(t),h(t))$ the mature population does not exist, that is, \begin{equation}\label{uhhgg} u(t,x)\equiv0 \ \ \ \mbox{ for } \ t>0,\; x\not\in(g(t),h(t)).
\end{equation} Clearly, since the habitat is expanding for $t>0$, we have \begin{equation}\label{habitat} [g(t-\tau),h(t-\tau)]\subset [g(t),h(t)],\quad t\geqslant \tau. \end{equation} Hence, the first two equations in \eqref{p-nonsim} can be written as the following single one \begin{equation} u_t(t,x) =u_{xx}(t,x)- d u(t,x) +e^{-d_I\tau}b(u(t-\tau,x)), \quad t>0, x\in[g(t),h(t)] \end{equation} provided that \eqref{habitat} holds for $t\geqslant 0$. As such, in view of \eqref{habitat} we need the additional condition \begin{equation}\label{AC} [g(t-\tau),h(t-\tau)]\subset [g(t),h(t)], \quad t\in[0,\tau). \end{equation} Note that $[g(0),h(0)]\subset[g(t),h(t)]$ for $t>0$, and as the coefficient $\mu\to0^+$ we have $[g(t),h(t)]\to [g(0),h(0)]$ uniformly for $t\in [0,\tau]$. Therefore, to guarantee \eqref{AC} regardless of the influence of $\mu$, we strengthen it to \[ [g(\theta), h(\theta)]\subset [g(0), h(0)]\ \ \ \mbox{ for }\ \theta\in[-\tau,0], \] which is the aforementioned compatible condition \eqref{CC}. Setting $f(s):=e^{-d_I\tau}b(s)$ in \eqref{p-nonsim}, we obtain problem \eqref{p}. \subsection{Well-posedness} We employ the Schauder fixed point theorem to establish the local existence of solutions to \eqref{p}, prove the uniqueness, and then extend the solution to all positive time by an estimate on the free boundary. \begin{thm}\label{thm:local} Suppose {\bf(H)} holds. For any $\alpha\in (0,1)$, there is a $T>0$ such that problem \eqref{p} admits a solution $$(u, g, h)\in C^{(1+\alpha)/2, 1+\alpha}([0,T]\times[g(t),h(t)])\times C^{1+\alpha/2}([0,T])\times C^{1+\alpha/2}([0,T]).$$ \end{thm} \begin{proof} We divide the proof into three steps. $Step\ 1$. We use a change of variables to transform problem \eqref{p} into a fixed boundary problem with a more complicated equation, a technique used in \cite{CF, DuLin}. Denote $l_1=g(0)$ and $l_2=h(0)$ for convenience, and set $h_0=\frac{1}{2}(l_2-l_1)$.
Let $\xi_{1}(y)$ and $\xi_{2}(y)$ be two nonnegative functions in $C^{3}(\mathbb{R})$ such that \[ \xi_{1}(y)=1\ \mbox{ if}\ | y-l_2|< \frac{h_{0}}{4},\ \xi_{1}(y)=0\ \mbox{ if} \ |y-l_2| > \frac{h_{0}}{2},\ |\xi_{1}'(y)|<\frac{6}{h_{0}}\ \mbox{for}\ y\in \mathbb{R}; \] \[ \xi_{2}(y)=1\ \mbox{ if}\ | y-l_1| < \frac{h_{0}}{4},\ \xi_{2}(y)=0\ \mbox{ if}\ | y-l_1| > \frac{h_{0}}{2},\ |\xi_{2}'(y)| < \frac{6}{h_{0}}\ \mbox{for}\ y\in \mathbb{R}. \] Define $y= y(t,x)$ through the identity \begin{align*} &x=y+\xi_{1}(y)(h(t)-l_2)+\xi_{2}(y)(g(t)-l_1)\ \ \ \mbox{ for } t>0,\\ &x\equiv y\ \ \ \mbox{ for } -\tau\leqslant t\leqslant 0, \end{align*} and set \begin{align*} &w(t,y):=u(t,y+\xi_{1}(y)(h(t)-l_2)+\xi_{2}(y)(g(t)-l_1))=u(t,x)\ \ \ \mbox{ for } t>0,\\ &w(\theta,y):=\phi(\theta,y)\ \ \ \mbox{ for } -\tau\leqslant \theta\leqslant 0. \end{align*} Then the free boundary problem \eqref{p} becomes \begin{equation}\label{lin1} \left\{ \begin{array}{ll} w_t -A(g,h,y)w_{yy} + B(g,h,y)w_{y}=f(w(t-\tau,y))- d w, & y\in(l_1, l_2),\ t>0,\\ w(t,l_i)=0, & t>0,\ i=1, 2,\\ w(\theta,y) =\phi(\theta,y),& y\in[l_1,l_2],\ \theta\in[-\tau,0], \end{array} \right. \end{equation} and \begin{equation}\label{lghin1} g'(t)=-\mu\, w_y(t,l_1), \ \ h'(t) = -\mu w_{y}(t,l_2),\ \ \ t>0, \end{equation} with $f(w(t-\tau,y))=f(u(t-\tau,y))$ and $A(g,h,y)=[1+\xi_1'(y)(h(t)-l_2)+\xi_2'(y)(g(t)-l_1)]^{-2}$, \[ B(g,h,y)=[\xi_1''(y)(h(t)-l_2)+\xi_2''(y)(g(t)-l_1)]A(g,h,y)^{\frac {3}{2}}-[\xi_1(y)h'(t)+\xi_2(y)g'(t)]A(g,h,y)^{\frac {1}{2}}. \] Denote $ h_{1}=-\mu \phi_{y}(0,l_2)$ and $h_{2}=\mu \phi_{y}(0,l_1)$. For $ 0<T\leqslant \min\big\{\frac{h_{0}}{16(1+ h_{1} +h_{2})},\ \tau\big\}$, we define $\Omega_{T}:=[0,T]\times[l_1,l_2]$, \begin{align*} &\mathcal{D}^{h}_{T}=\{h\in C^{1}([0,T]):\ h(0)=l_2,\ h'(0)=h_{1},\ \| h'-h_{1}\|_{C([0,T])} \leqslant 1\},\\ &\mathcal{D}^{g}_{T}=\{g\in C^{1}([0,T]):\ g(0)=l_1,\ g'(0)=-h_{2},\ \| g'+h_{2}\|_{C([0,T])} \leqslant 1\}.
\end{align*} Clearly, $\mathcal{D}:=\mathcal{D}^{g}_{T}\times\mathcal{D}^{h}_{T}$ is a bounded and closed convex set of $C^1([0,T])\times C^1([0,T])$. Noting the restriction on $T$, it is easy to see that the transformation $(t,y)\rightarrow(t,x)$ is well defined. By a similar argument as in \cite{W}, applying the standard $L^p$ theory and the Sobolev embedding theorem, we can deduce that for any given $(g,h)\in \mathcal{D}$, problem \eqref{lin1} admits a unique solution $w(t,y;g,h)\in W^{1,2}_p(\Omega_{T})\hookrightarrow C^{\frac{1+\alpha}{2},{1+\alpha}}(\Omega_{T})$, which satisfies \begin{equation}\label{eq1} \|w\| _{W^{1,2}_p(\Omega_{T})}+\|w\| _{C^{\frac{1+\alpha}{2},{1+\alpha}}(\Omega_{T})}\leqslant C_{1}, \end{equation} where $p>1$ and $C_{1}$ is a constant depending on $g(\theta)$, $h(\theta)$, $\alpha$, $p$ and $\| \phi\|_{C^{1,2}([-\tau,0]\times[g(\theta),h(\theta)])}$. Defining $\hat{h}$ and $\hat{g}$ by $\hat{h}(t)=l_2-\int_0^t \mu w_{y}(s, l_2)ds$ and $\hat{g}(t)=l_1-\int_0^t \mu w_{y}(s, l_1)ds$, respectively, we then have \[ \hat{h}'(t)=-\mu w_{y}(t, l_2),\ \hat{h}(0)=l_2,\ \hat{h}'(0)=-\mu w_{y}(0, l_2)=h_1, \] and thus $\hat{h}'\in C^{\frac{\alpha}{2}}([0,T])$, which satisfies \begin{equation}\label{eq2} \|\hat{h}'\| _{C^{\frac{\alpha}{2}}([0,T])}\leqslant \mu C_{1}=:C_{2}. \end{equation} Similarly, $\hat{g}'\in C^{\frac{\alpha}{2}}([0,T])$, which satisfies \begin{equation}\label{eq3} \|\hat{g}'\| _{C^{\frac{\alpha}{2}}([0,T])}\leqslant \mu C_{1}=:C_{2}. \end{equation} $ Step\ 2$. For any given pair $(g,h)\in \mathcal{D}$, we define an operator $ \mathcal{F}$ by \[ \mathcal{F}(g,h)=(\hat{g},\hat{h}). \] Clearly, $\mathcal{F}$ is continuous in $\mathcal{D}$, and $(g,h)\in \mathcal{D}$ is a fixed point of $\mathcal{F}$ if and only if $(w,g,h)$ solves \eqref{lin1} and \eqref{lghin1}. We will show that if $ T>0$ is small enough, then $\mathcal{F}$ has a fixed point via the Schauder fixed point theorem.
Firstly, it follows from \eqref{eq2} and \eqref{eq3} that \[ \|\hat{h}'-h_{1}\|_{C([0,T])}\leqslant C_{2}T^{\frac{\alpha}{2}},\ \|\hat{g}'+h_{2}\|_{C([0,T])}\leqslant C_{2}T^{\frac{\alpha}{2}}. \] Thus if we choose $T\leqslant \min\big\{\frac{h_{0}}{16(1+ h_{1} +h_{2})},\ \tau, \ C^{-\frac{2}{\alpha}}_{2}\big\}$, then $\mathcal{F}$ maps $\mathcal{D}$ into itself. Consequently, $\mathcal{F}$ has at least one fixed point by the Schauder fixed point theorem, which implies that \eqref{lin1} and \eqref{lghin1} have at least one solution $(w,g,h)$ defined in $[0,T]$. Moreover, by the Schauder estimates, we have additional regularity for $(w, g, h)$ as a solution of \eqref{lin1} and \eqref{lghin1}, namely, \[ (w,g,h)\in C^{1+\alpha/2,2+\alpha}((0,T]\times[l_1,l_2])\times C^{1+\alpha/2}((0,T]) \times C^{1+\alpha/2}((0,T]) \] and for any given $0<\varepsilon<T$, there holds \[ \|w\|_{C^{1+\alpha/2,2+\alpha}([\varepsilon,T]\times[l_1,l_2])}\leqslant C_3, \] where $C_3$ is a constant depending on $\varepsilon$, $ g(\theta)$, $h(\theta)$, $\alpha$ and $\| \phi\|_{C^{1,2}}$. Thus we deduce a local classical solution $(u,g,h)$ of \eqref{p} from $(w,g,h)$, and $u\in C^{1+\alpha/2,2+\alpha}((0,T]\times[g(t),h(t)])$ satisfies \[ \|u\|_{C^{1+\alpha/2,2+\alpha}([\varepsilon,T]\times[g(t),h(t)])}\leqslant C_3. \] $ Step\ 3$. We will prove the uniqueness of solutions of \eqref{p}. Let $(u_i,g_i,h_i)$, $i=1,2$, be two solutions of \eqref{p} and set \[ w_i(t,y):=u_i(t,y+\xi_{1}(y)(h_i(t)-l_2)+\xi_{2}(y)(g_i(t)-l_1)). \] Then it follows from \eqref{eq1}, \eqref{eq2} and \eqref{eq3} that \[ \|w_i\| _{W^{1,2}_p(\Omega_{T})}+\|w_i\| _{C^{\frac{1+\alpha}{2},{1+\alpha}}(\Omega_{T})}\leqslant C_{1},\ \ \|h'_i\| _{C^{\frac{\alpha}{2}}([0,T])}\leqslant C_{2}, \ \ \|g'_i\| _{C^{\frac{\alpha}{2}}([0,T])}\leqslant C_{2}.
\] Set \[ \tilde{w}(t,y):=w_{1}(t,y)-w_{2}(t,y), \ \ \tilde{g}(t):=g_1(t)-g_2(t),\ \mbox{ and }\ \tilde{h}(t):=h_1(t)-h_2(t); \] then $\tilde{w}(t,y)$ satisfies \begin{equation}\label{p1} \left\{ \begin{array}{ll} \tilde{w}_{t} -A_{2}(t,y)\tilde{w}_{yy} + B_{2}(t,y)\tilde{w}_{y}=\tilde{f}(t,y), & y\in(l_1, l_2),\ t\in(0, T),\\ \tilde{w}(t,l_1)=\tilde{w}(t,l_2)= 0, & t\in(0, T),\\ \tilde{w}(\theta,y) =0 ,& y\in[l_1, l_2],\ \theta\in[-\tau, 0], \end{array} \right. \end{equation} where \[ \tilde{f}(t,y)=(A_{1}-A_{2})(w_{1})_{yy}-(B_{1}-B_{2})(w_{1})_{y}+f(w_1(t-\tau,y))-f(w_2(t-\tau,y))- d \tilde{w}, \] and $A_i$ and $B_i$ are the coefficients of problem \eqref{lin1} with $(w_i,g_i,h_i)$ instead of $(w,g,h)$. Recalling that $T\leqslant \tau$, we have $f(w_1(t-\tau,y))-f(w_2(t-\tau,y))=0$ for all $(t,y)\in \Omega_{T}$, and thus \[ \tilde{f}(t,y)=(A_{1}-A_{2})(w_{1})_{yy}-(B_{1}-B_{2})(w_{1})_{y}- d \tilde{w}. \] Thanks to this, we can apply the $L^p$ estimates for parabolic equations to deduce that \begin{equation} \|\tilde{w}\|_{W^{1,2}_p(\Omega_{T})}\leqslant C_4 (\| \tilde{g}\|_{C^1([0,T])}+\| \tilde{h}\|_{C^1([0,T])}) \end{equation} with $C_4$ depending on $C_1$ and $C_2$. By a similar argument as in \cite{W}, we obtain that \[ \| \tilde{w}\|_{C^{\frac{1+\alpha}{2},{1+\alpha}}(\Omega_{T})}\leqslant C \|\tilde{w}\|_{W^{1,2}_p(\Omega_{T})} \] for some positive constant $C$ independent of $T^{-1}$. Thus \begin{equation}\label{sobem} \| \tilde{w}\|_{C^{\frac{1+\alpha}{2},{1+\alpha}}(\Omega_{T})}\leqslant C C_4 (\| \tilde{g}\|_{C^1([0,T])}+\| \tilde{h}\|_{C^1([0,T])}). \end{equation} Since $\tilde{h}'(0)=h'_{1}(0)-h'_{2}(0)=0$, we have \[ \| \tilde{h}'\|_{C^{\frac{\alpha}{2}}([0,T])}\leqslant\mu \| \tilde{w}_{y}\|_{C^{\frac{\alpha}{2},0}(\Omega_{T})}\leqslant \mu \| \tilde{w}\|_{C^{\frac{1+\alpha}{2},{1+\alpha}}(\Omega_{T})}.
\] This, together with \eqref{sobem}, implies that \[ \| \tilde{h}\|_{C^1([0,T])}\leqslant 2T^{\frac{\alpha}{2}} \| \tilde{h}'\|_{C^{\frac{\alpha}{2}}([0,T])}\leqslant C_5T^{\frac{\alpha}{2}} (\| \tilde{g}\|_{C^1([0,T])}+\| \tilde{h}\|_{C^1([0,T])}), \] where $C_5=2\mu C C_4$. Similarly, we have \[ \| \tilde{g}\|_{C^1([0,T])}\leqslant C_5T^{\frac{\alpha}{2}} (\| \tilde{g}\|_{C^1([0,T])}+\| \tilde{h}\|_{C^1([0,T])}). \] As a consequence, we deduce that \[ \| \tilde{g}\|_{C^1([0,T])}+\| \tilde{h}\|_{C^1([0,T])} \leqslant 2C_5 T^{\frac{\alpha}{2}} (\| \tilde{g}\|_{C^1([0,T])}+\| \tilde{h}\|_{C^1([0,T])}). \] Hence for \[ T:=\min\Big\{\frac{h_{0}}{ 16(1+h_{1}+h_{2})},\ \tau,\ C^{-\frac{2}{\alpha}}_{2},\ (4C_5)^{-\frac{2}{\alpha}}\Big\}, \] we have \[ \| \tilde{g}\|_{C^1([0,T])}+\| \tilde{h}\|_{C^1([0,T])} \leqslant \frac{1}{2} (\| \tilde{g}\|_{C^1([0,T])}+\| \tilde{h}\|_{C^1([0,T])}). \] This shows that $\tilde{g}\equiv 0 \equiv \tilde{h}$ for $0\leqslant t\leqslant T$, and thus $\tilde{w}\equiv0$ in $[0,T]\times[l_1,l_2]$. Consequently, the uniqueness of the solution of \eqref{p} is established, which completes the proof of this theorem. \end{proof} \begin{lem}\label{lem:global} Assume that {\bf(H)} holds. Then the solution $(u, g, h)$ of problem \eqref{p} exists and is unique for all $t\in(0, \infty)$. \end{lem} \begin{proof} Let $[0, T_{max})$ be the maximal time interval in which the solution exists. In view of Theorem \ref{thm:local}, it remains to show that $T_{max}=\infty$. We proceed by a contradiction argument and assume that $T_{max}<\infty$. Thanks to the choice of the initial data, the comparison principle implies that $u(t,x)\leqslant u^*$ for $(t,x)\in(0,T_{max})\times[g(t),h(t)]$.
Construct the auxiliary function \[ \bar{u}(t,x)=u^*\big[2M(h(t)-x)-M^{2}(h(t)-x)^{2}\big],\ \ \ t\in[-\tau,T_{max}),\ x\in[h(t)-M^{-1},h(t)] \] where \[ M:=\max\Big\{\sqrt{d},\ \frac{2}{h(-\tau)-g(-\tau)},\ \frac{4}{3u^*}\max_{-\tau\leqslant \theta\leqslant 0}\|\phi(\theta,\cdot)\|_{C^1([g(\theta),h(\theta)])}\Big\}. \] Following the proof of \cite[Lemma 2.2]{DuLin}, one can show that there is a constant $C_0$ independent of $T_{max}$ such that $h'(t)\leqslant C_0$ for $t\in (0, T_{max})$. The proof of $-g'(t)\leqslant C_0$ for $t\in (0, T_{max})$ is parallel. Let us now fix $\epsilon\in(0,T_{max})$. Similar to the proof of Theorem \ref{thm:local}, by the standard $L^p$ estimate, the Sobolev embedding theorem and the H\"{o}lder estimates for parabolic equations, we can find $C_1>0$ depending only on $\epsilon$, $T_{max}$, $u^*$, $ h_0$, $\| \phi\|_{C^{1,2}([-\tau,0]\times[g(\theta),h(\theta)])}$ and $C_0$ such that \[ \|u\|_{C^{1+\alpha/2,2+\alpha}([\epsilon,T_{max}]\times[g(t), h(t)])}\leqslant C_1. \] This implies that $(u,g,h)$ exists on $[0,T_{max}]$. Choosing $t_n\in(0,T_{max})$ with $t_n\nearrow T_{max}$, and regarding $(u(t_n-\theta, x), g(t_n-\theta), h(t_n-\theta))$ for $\theta\in[0,\tau]$ as the initial data, it then follows from the proof of Theorem \ref{thm:local} that there exists $s_0>0$ depending on $C_0$, $C_1$ and $u^*$ but independent of $n$ such that problem \eqref{p} has a unique solution $(u, g, h)$ in $[t_n, t_n+s_0]$. This yields that the solution $(u,g,h)$ of \eqref{p} can be extended uniquely to $[0,t_n+s_0)$. Hence $t_n+s_0>T_{max}$ when $n$ is large, which contradicts the definition of $T_{max}$ and ends the proof of this lemma. \end{proof} \noindent {\bf Proof of Theorem \ref{wellposedness}:} Combining Theorem \ref{thm:local} and Lemma \ref{lem:global}, we complete the proof.{ $\Box$} \subsection{Comparison Principle}\label{subsec:cp} In this subsection, we establish the comparison principle, which will be used in the rest of this paper.
Let us start with the following result. \begin{lem} \label{lem:comp1} Suppose that {\bf{(H)}} holds, $T\in (0,\infty)$, $\overline g,\ \overline h\in C^1([-\tau,T])$, $\overline u\in C(\overline D_T) \cap C^{1,2}(D_T)$ satisfies $\overline u \leqslant u^*$ in $\overline D_T$ with $D_T=\{(t,x)\in\mathbb{R}^2: -\tau<t\leqslant T,\ \overline g(t)<x<\overline h(t)\}$, and \begin{eqnarray*} \left\{ \begin{array}{lll} \overline u_{t} \geqslant \overline u_{xx} -d \overline u+f(\overline u(t-\tau, x)),\; & 0<t \leqslant T,\ \overline g(t)<x<\overline h(t), \\ \overline u= 0,\quad \overline g'(t)\leqslant -\mu \overline u_x,\quad & 0<t \leqslant T, \ x=\overline g(t),\\ \overline u= 0,\quad \overline h'(t)\geqslant -\mu \overline u_x,\quad &0<t \leqslant T, \ x=\overline h(t). \end{array} \right. \end{eqnarray*} If $[g(\theta), h(\theta)]\subseteq [\overline g(\theta), \overline h(\theta)]$ for $\theta\in[-\tau,0]$ and $\overline u(\theta,x)\in C^{1,2}([-\tau,0]\times[\overline g(\theta), \overline h(\theta)])$ satisfies \[ \phi(\theta,x)\leqslant \overline u(\theta,x) \leqslant u^*\ \ \mbox{ in } [-\tau,0]\times[g(\theta),h(\theta)], \] then the solution $(u,g, h)$ of problem \eqref{p} satisfies $g(t)\geqslant \overline g(t)$, $h(t)\leqslant \overline h(t)$ in $(0,T]$, and \begin{align*} u(t,x)\leqslant \overline u(t,x)\ \ \mbox{ for } (t,x)\in (0, T]\times(g(t), h(t)). \end{align*} \end{lem} \begin{proof} We integrate the ideas of \cite[Lemma 5.7]{DuLin} and \cite[Corollary 5]{MS} to deal with the free boundary and the time delay.
Firstly, for small $\epsilon>0$, let $(u_\epsilon,g_\epsilon,h_\epsilon)$ denote the unique solution of \eqref{p} with $g(\theta)$ and $h(\theta)$ replaced by $g_\epsilon(\theta):=g(\theta)(1-\epsilon)$ and $h_\epsilon(\theta):=h(\theta)(1-\epsilon)$ for $\theta\in[-\tau,0]$, respectively, with $\mu$ replaced by $\mu_\epsilon:=\mu(1-\epsilon)$, and with $\phi(\theta,x)$ replaced by some $\phi_\epsilon(\theta,x)\in C^{1,2}([-\tau,0]\times[g_\epsilon(\theta),h_\epsilon(\theta)])$ satisfying \[ 0<\phi_\epsilon(\theta,x)\leqslant \phi(\theta,x),\ \ \ \phi_\epsilon(\theta,g_\epsilon(\theta))=\phi_\epsilon(\theta,h_\epsilon(\theta))=0 \ \ \mbox{ for }\ \theta\in[-\tau,0],\ x\in[g_\epsilon(\theta),h_\epsilon(\theta)], \] and, for any fixed $\theta\in[-\tau,0]$, $\phi_\epsilon(\theta,x)\to \phi(\theta,x)$ in the $C^2([g(\theta),h(\theta)])$ norm as $\epsilon\to 0$. We claim that $h_\epsilon(t)<\overline h(t)$, $g_\epsilon(t)> \overline g(t)$ and $u_\epsilon(t,x)<\overline u(t,x)$ for all $t\in[0,T]$ and $x\in[g_\epsilon(t),h_\epsilon(t)]$. Obviously, this is true for all small $t>0$. Suppose by way of contradiction that the claim does not hold. Then there exists a first $t^*\in(0,T]$ such that \begin{align*} u_\epsilon(t,x)< \overline{u}(t,x)\ \ \mbox{ for }\ t\in [0, t^*),\ x\in [g_\epsilon(t), h_\epsilon(t)]\subset (\overline {g}(t), \overline {h}(t)), \end{align*} and there is some $x^*\in[g_\epsilon(t^*),h_\epsilon(t^*)]$ such that $u_\epsilon(t^*,x^*)=\overline{u}(t^*,x^*)$. Next, let us compare $u_\epsilon$ and $\overline u$ over the region \[ \Omega_{t^*}:=\{(t,x)\in\mathbb{R}^2: 0<t\leqslant t^*,\ g_\epsilon(t)< x < h_\epsilon(t)\}.
\] A direct computation shows that for $(t,x)\in \Omega_{t^*}$, \[ (\overline{u}-u_\epsilon)_t- (\overline{u}-u_\epsilon)_{xx}+d (\overline{u}-u_\epsilon)\geqslant f(\overline{u}(t-\tau,x))-f(u_\epsilon(t-\tau,x))\geqslant 0; \] it then follows from the strong maximum principle that \begin{equation}\label{mao1} u_\epsilon(t,x)<\overline{u}(t,x)\ \ \mbox{ in } \Omega_{t^*}. \end{equation} Thus either $x^*=h_\epsilon(t^*)$ or $x^*=g_\epsilon(t^*)$. Without loss of generality we may assume that $x^*=h_\epsilon(t^*)$; then $\overline{u}(t^*,h_\epsilon(t^*))=u_\epsilon(t^*,h_\epsilon(t^*))=0$. This, together with \eqref{mao1}, implies that $\overline{u}_x(t^*,h_\epsilon(t^*))\leqslant (u_\epsilon)_x(t^*,h_\epsilon(t^*))$, from which we obtain that \begin{equation}\label{uhq} h'_\epsilon(t^*)=-\mu_\epsilon(u_\epsilon)_x(t^*,h_\epsilon(t^*))<-\mu \overline{u}_x(t^*,h_\epsilon(t^*))\leqslant \overline{h}'(t^*). \end{equation} As $h_\epsilon(t)< \overline{h}(t)$ for $t\in[0,t^*)$ and $h_\epsilon(t^*)=\overline{h}(t^*)$, we have $h'_\epsilon(t^*)\geqslant \overline{h}'(t^*)$, which contradicts \eqref{uhq}. This proves our claim. Finally, since the unique solution of \eqref{p} depends continuously on the parameters in \eqref{p}, as $\epsilon \to 0$, $(u_\epsilon,g_\epsilon,h_\epsilon)$ converges to $(u,g,h)$, the unique solution of \eqref{p}. The desired result then follows by letting $\epsilon\to 0$ in the inequalities $u_\epsilon< \overline{u},\ g_\epsilon> \overline{g}$ and $h_\epsilon< \overline{h}$. \end{proof} By slightly modifying the proof of Lemma \ref{lem:comp1}, we obtain the following variant of Lemma \ref{lem:comp1}.
\begin{lem} \label{lem:comp2} Suppose that {\bf {(H)}} holds, $T\in (0,\infty)$, $\overline g,\, \overline h\in C^1([-\tau,T])$, $\overline u\in C(\overline D_T)\cap C^{1,2}(D_T)$ satisfies $\overline u \leqslant u^*$ in $\overline D_T$ with $D_T=\{(t,x)\in\mathbb{R}^2: -\tau<t\leqslant T,\ \overline g(t)<x<\overline h(t)\}$, and \begin{eqnarray*} \left\{ \begin{array}{lll} \overline u_{t} \geqslant \overline u_{xx} -d \overline u+f(\overline u(t-\tau, x)),\; &0<t \leqslant T,\ \overline g(t)<x<\overline h(t), \\ \overline u\geqslant u, &0<t \leqslant T, \ x= \overline g(t),\\ \overline u= 0,\quad \overline h'(t)\geqslant -\mu \overline u_x,\quad &0<t \leqslant T, \ x=\overline h(t), \end{array} \right. \end{eqnarray*} with $\overline g(t)\geqslant g(t)$ in $[0,T]$, $h(\theta)\leqslant \overline h(\theta)$ and $\phi(\theta,x)\leqslant \overline u(\theta,x)$ for $\theta\in[-\tau,0]$ and $x\in[\overline g(\theta),h(\theta)]$, where $(u,g, h)$ is a solution to \eqref{p}. Then \[ \mbox{ $h(t)\leqslant \overline h(t)$ in $(0, T]$,\quad $u(t,x)\leqslant \overline u(t,x)$ for $(t,x)\in (0, T]\times(g(t),h(t))$.} \] \end{lem} \begin{remark} \label{rem5.8}\rm The function $\overline u$, or the triple $(\overline u,\overline g,\overline h)$, in Lemmas \ref{lem:comp1} and \ref{lem:comp2} is often called a supersolution to \eqref{p}. A subsolution can be defined analogously by reversing all the inequalities. There is a symmetric version of Lemma~\ref{lem:comp2}, in which the conditions on the left and right boundaries are interchanged. We also have corresponding comparison results for subsolutions in each case.
\end{remark} \section{Semi-waves}\label{subsec:semiwave} This section is devoted to proving the existence and uniqueness of a semi-wave $q(z)$ of \eqref{sw11}, which will be used to construct suitable sub- and supersolutions to study the asymptotic profiles of spreading solutions of \eqref{p}. Let us consider the following nonlocal elliptic problem \begin{equation}\label{semiwave} \left\{ \begin{array}{ll} q'' - cq'-d q+ f( q(z-c\tau))=0, & z>0,\\ q(z)=0, & z\leqslant 0,\\ \end{array} \right. \end{equation} where $c\geqslant 0$ is a constant. If $z$ is understood as the time variable, then we may regard problem \eqref{semiwave} as a time-delayed dynamical system in the phase space $C([-c\tau,0],\mathbb{R}^2)$. When $c\tau=0$, the phase space reduces to $\mathbb{R}^2$, and it follows from a phase plane analysis that \eqref{semiwave} admits a unique positive solution $q_0(z)$, which is increasing in $z$ and satisfies $q_0(z)\to u^*$ as $z\to \infty$. When $c\tau>0$, the phase space is infinite dimensional, and the positivity and boundedness of the unique solution are not clear. \begin{prop}\label{prop:semiwave} Suppose {\bf{(H)}} holds. For any given constant $c> 0$, problem \eqref{semiwave} has a maximal nonnegative solution $q_c$. Moreover, either $q_c(z)\equiv 0$ or $q_c(z)> 0$ in $(0,\infty)$. Furthermore, if $q_c>0$, then it is the unique positive solution of \eqref{semiwave}, $q_c'(z)>0$ in $(0,\infty)$ and $q_c(z)\to u^*$ as $z\to\infty$; in addition, for any given constant $c_1<c$, one has $q_c(z)<q_{c_1}(z)$ for $z\in(0,\infty)$ and $q'_c(0)<q'_{c_1}(0)$. \end{prop} \begin{proof} We divide the proof into four steps. $ Step \ 1$. Problem \eqref{semiwave} always has a maximal nonnegative solution $\overline{q}$, and it satisfies \[ \overline{q}\leqslant u^*\ \ \mbox{ for } z\in[0,\infty). \] Clearly, $0$ is a nonnegative solution of \eqref{semiwave}.
For any $l>0$, consider the following problem: \begin{equation}\label{semiwavel} \left\{ \begin{array}{ll} w'' - cw'-d w+ f( w(z-c\tau))=0, & 0<z<l,\\ w(l)=u^*,\ \ \ w(z)=0, \ \ z\leqslant 0. \end{array} \right. \end{equation} It is well known that problem \eqref{semiwavel} admits a unique solution $w^l(z)>0$ for $z\in(0,l]$. Applying the maximum principle, we can deduce that $w^l(z)\leqslant u^*$ for $z\in[0,l]$. Moreover, it is easy to check that $w^l(z)$ is decreasing in $l>0$ and increasing in $z\in[0,l]$, and \[ w^l(z)\to W(z)\ \ \mbox{ as } l\to\infty, \] where $W(z)$ is a nonnegative solution of problem \eqref{semiwave} satisfying $W(z)\leqslant u^*$ for $z\in[0,\infty)$. In what follows, we prove that $W$ is the maximal nonnegative solution of \eqref{semiwave}. Let $q$ be an arbitrary nonnegative solution of \eqref{semiwave}; then $q(z)\leqslant u^*$ for $z\in[0,\infty)$. If $q\equiv 0$, then $q\leqslant W$. Suppose now $q\geqslant 0$ and $q\not\equiv 0$; then $q>0$ in $(0,\infty)$. Let us show $q(z)\leqslant W(z)$ for $z\in[0,\infty)$. Firstly, for any fixed $l>0$ we can find $M>0$ large such that $Mw^l(z)\geqslant q(z)$ for $z\in[0,l]$. We claim that the above inequality holds with $M=1$. Otherwise, define \[ M_0:=\inf\{M>0:\ Mw^l(z)\geqslant q(z)\ \ \mbox{ for } z\in[0,l]\}; \] then $M_0>1$, $M_0w^l(z)\geqslant q(z)$ for $z\in[0,l]$ and $M_0w^l\not\equiv q$. Thanks to the monotonicity of $w^l(z)$ in $z\in[0,l]$, there is $z_0\in(0,l)$ such that $M_0w^l(z_0)=u^*$ and $M_0w^l(z)<u^*$ for $z\in[0,z_0)$. It is easy to check that $q(z_0)<u^*$. Then the strong maximum principle yields that $M_0 (w^l)'(0)>q'(0)$ and $M_0w^l(z)>q(z)$ for $z\in(0,z_0]$. Thus we can find a constant $0<\epsilon\ll1$ such that \begin{equation}\label{Mqw} M_1:=M_0(1+\epsilon)^{-1}>1, \ \ M_1w^l(z)>q(z)\ \ \mbox{ for } z\in(0,z_0], \end{equation} and $M_1w^l(z_0+\tilde{z})> u^*$, where $\tilde{z}=\min\{c\tau,\ l-z_0\}$.
So there is $z_1\in(0,\tilde{z}]$ such that $M_1w^l(z_0+z_1)= u^*$ and $M_1w^l(z_0+z)> u^*$ for $z\in (z_1,l-z_0]$. Next, we prove that $M_1 w^l(z)>q(z)$ for all $z\in(z_0,l]$. In view of the definition of $z_1$, we only need to prove $M_1 w^l(z)\geqslant q(z)$ for all $z\in(z_0,z_0+z_1]$. Note that $M_1 w^l(z)\geqslant q(z)$ for $z=z_0+z_1$ and $z=z_0$, and for $z\in(z_0,z_0+z_1)$, \begin{eqnarray*} && \big(M_1w^l-q\big)''-c\big(M_1w^l-q\big)'-d \big(M_1 w^l-q\big)\\ &=& f(q(z-c\tau))-M_1f\big( w^l(z-c\tau)\big)\\ & \leqslant& f(q(z-c\tau))-f\big(M_1 w^l(z-c\tau)\big) \leqslant 0, \end{eqnarray*} where the monotonicity of $f(v)$ in $v\in[0,u^*]$ and the fact that $M_1w^l(z-c\tau)\geqslant q(z-c\tau)$ for $z\leqslant z_0+z_1$ are used. The comparison principle yields that $M_1 w^l(z)\geqslant q(z)$ for all $z\in[z_0,z_0+z_1]$. This, together with the definition of $z_1$ and \eqref{Mqw}, yields that $M_1 w^l(z)\geqslant q(z)$ for all $z\in(0,l]$, which contradicts the definition of $M_0$. Thus we have proved that $w^l(z)\geqslant q(z)$ for $z\in[0,l]$. Finally, letting $l\to\infty$, we deduce that \[ W(z)\geqslant q(z)\ \ \mbox{ for } z\in[0,\infty), \] as we wanted. Thus Step 1 is proved. $ Step \ 2$. For any $c\geqslant 0$, if $q$ is a positive solution of \eqref{semiwave}, then $q_+'(0)>0$, $q'(z)>0$ for $z\in(0,\infty)$, and $q(z)\to u^*$ as $z\to\infty$. Since $q>0$ for $z>0$, the Hopf lemma gives $q_+'(0)>0$; it follows that $q'(z)>0$ for all small $z>0$. Set \[ \gamma^*:=\sup\{\gamma>0:\ q(2\gamma-z)>q(z)\ \mbox{ for } z\in[0,\gamma),\ \ q'(z)>0\ \mbox{ for } z\in(0,\gamma]\}. \] We shall show that $\gamma^*=\infty$. Suppose by way of contradiction that $\gamma^*\in(0,\infty)$; then \[ q(2\gamma^*-z)\geqslant q(z), \ \mbox{ and }\ q'(z)\geqslant 0\ \ \mbox{ for } z\in[0,\gamma^*].
\] Define $\tilde{q}(z)=q(2\gamma^*-z)$ for $z\in[\gamma^*,2\gamma^*]$; then \[ \tilde{q}''-c\tilde{q}'-d\tilde{q}+f(\tilde{q}(z-c\tau))=-2cq_\xi,\ \ \ \xi=2\gamma^*-z\in[0,\gamma^*]. \] Let us set \[ Q(z;\gamma^*)=Q(z)=\tilde{q}(z)-q(z)=q(\xi)-q(2\gamma^*-\xi). \] Then $Q\leqslant 0$ for $z\in[\gamma^*,2\gamma^*]$ and it satisfies \begin{equation}\label{Qqq} \left\{ \begin{array}{ll} Q'' - cQ'-d Q=f(q(z-c\tau))-f(\tilde{q}(z-c\tau))-2cq_\xi\leqslant 0, & \gamma^*\leqslant z\leqslant 2\gamma^*,\\ Q(\gamma^*)=0,\ \ \ Q(2\gamma^*)=-q(2\gamma^*)<0. \end{array} \right. \end{equation} The strong maximum principle and the Hopf lemma imply that \[ Q(z)<0,\ \ \ z\in(\gamma^*,2\gamma^*],\ \ \ Q'(\gamma^*)<0. \] It follows by continuity that for all small $\varepsilon> 0$, \[ Q'(\gamma^*+\varepsilon;\gamma^*+\varepsilon)<0,\ \ \ Q(z;\gamma^*+\varepsilon)<0\ \ \mbox{ for } z\in(\gamma^*+\varepsilon,2\gamma^*+2\varepsilon], \] which implies that $q(2\gamma^*+2\varepsilon-\xi)>q(\xi)$ for $\xi\in[0,\gamma^*+\varepsilon)$. Moreover, since $Q'(\gamma^*+\varepsilon;\gamma^*+\varepsilon)=-2q'(\gamma^*+\varepsilon)$, it follows that $q'(\gamma^*+\varepsilon)>0$. But these facts contradict the definition of $\gamma^*$. Thus the monotonicity of positive solutions of \eqref{semiwave} is established. Next, we consider the asymptotic behavior of a positive solution $q$ of \eqref{semiwave}. The monotonicity of $q$ implies that there is a constant $a>0$ such that $\lim_{z\to\infty} q(z)=a$. We claim that $a=u^*$. For any sequence $\{z_n\}$ with $z_n\to\infty$ as $n\to\infty$, define $q_n(z)=q(z+z_n)$. Then $q_n$ solves the same equation as $q$ but over $(-z_n,\infty)$. Since $q_n\leqslant u^*$, there is a subsequence of $\{q_n\}$ (still denoted by $\{q_n\}$) such that $q_n\to \hat{q}$ in $C^2_{loc}(\mathbb{R})$ as $n\to\infty$, and $\hat{q}$ is a solution of \[ v''-cv'-d v+f(v(z-c\tau))=0,\ \ \ z\in\mathbb{R}.
\] On the other hand, it follows from $\lim_{z\to\infty}q(z)=a$ that $ \hat{q}\equiv a$, which implies that $a=u^*$, as we wanted. This completes the proof of Step 2. $ Step \ 3$. We show that problem \eqref{semiwave} has at most one positive solution. Suppose problem \eqref{semiwave} has two positive solutions $q_1$ and $q_2$; then $0<q_i<u^*$ in $(0,\infty)$, and $q_i(z)\to u^*$ as $z\to\infty$ for $i=1,\ 2$. Define \[ \rho^*:=\inf\left\{\frac{q_1(z)}{q_2(z)}:z>0\right\}. \] From Step 2 we have $(q_i)_+'(0)>0$, $i=1, 2$. Then by L'H\^{o}pital's rule we obtain $\lim_{z\downarrow 0}\frac{q_1(z)}{q_2(z)}>0$, which together with $\lim_{z\to+\infty}\frac{q_1(z)}{q_2(z)}=1$ implies that $\rho^*\in (0,1]$. Next we show $\rho^*=1$. Indeed, assume for the sake of contradiction that $\rho^*\in (0,1)$. Define \[ w(z):=q_1(z)-\rho^*q_2(z). \] Then $w(z)\geqslant 0$ for $z\geqslant 0$, $w(0)=0$, $w(+\infty)=(1-\rho^*)u^*>0$ and \[ w''-cw'-dw=-f(q_1(z-c\tau))+\rho^*f(q_2(z-c\tau))\leqslant 0, \] where the sub-linearity and monotonicity of $f(v)$ for $v\in(0,u^*)$ are used. By Hopf's lemma, we see that $0<w'(0)=(q_1)_+'(0)-\rho^* (q_2)_+'(0)$, which implies that $\lim_{z\downarrow 0}\frac{q_1(z)}{q_2(z)}>\rho^*$. Thus, in view of the definition of $\rho^*$, there exists $z_0\in (0,+\infty)$ such that $w(z_0)=0$. By the elliptic strong maximum principle, we infer that $w(z)\equiv 0$ for $z>0$, a contradiction to $w(+\infty)>0$. Therefore, $\rho^*=1$, and hence $q_1(z)\geqslant q_2(z)$. Exchanging the roles of $q_1$ and $q_2$ and repeating the above arguments, we obtain $q_2(z)\geqslant q_1(z)$. The uniqueness is proved. $ Step \ 4$. Let us consider the monotonicity of positive solutions in $c$. Assume that $q_c$ is a positive solution of \eqref{semiwave}. Choose $c_1<c$ and let $q_{c_1}$ be the maximal nonnegative solution of \eqref{semiwave} with $c=c_1$.
Since $u^*$ is a supersolution of \eqref{semiwave} and, by Step 2, $q_c$ is a subsolution of \eqref{semiwave} with $c=c_1$, in view of the uniqueness of the positive solution of this problem we see that $q_{c_1}(z)\geqslant q_c(z)$ for $z\in[0,\infty)$. It thus follows from the maximum principle and the Hopf lemma that \[ q_{c_1}(z)>q_c(z)\ \ \mbox{ for } z\in(0,\infty), \ \ \mbox{ and }\ \ q'_{c_1}(0)>q'_c(0). \] The proof of the proposition is now complete. \end{proof} Next we give a necessary and sufficient condition for the existence of a positive solution of \eqref{semiwave}. For this purpose, we need the following property on the distribution of complex solutions to a transcendental equation. \begin{lem}\label{lem:eigen} Let $c>0$ and $\tau>0$. Define \begin{equation} \Delta_c(\lambda,\tau)=\lambda^2-c\lambda-d+f'(0)e^{-\lambda c\tau}. \end{equation} Then there exists $c_0(\tau)\in (0,2\sqrt{f'(0)-d})$ such that the following statements hold: \begin{enumerate} \item[(i)] $\Delta_c(\lambda,\tau)=0$ has a positive solution if and only if $c\geqslant c_0(\tau)$; \item[(ii)] $\Delta_c(\lambda,\tau)=0$ has a complex solution in the domain \begin{equation}\label{def-Omega} \Omega:=\left\{\lambda\in \mathbb{C}: \mathrm{Re}\, \lambda>0,\ \mathrm{Im}\, \lambda \in \left(0,\frac{\pi}{c\tau}\right) \right\} \end{equation} provided that $c\in (0,c_0(\tau))$. \end{enumerate} \end{lem} Before the proof, we note that if $\tau=0$ then $\Delta_c(\lambda,\tau)=0$ reduces to a polynomial equation of order $2$. It admits at least one positive solution if and only if $c\geqslant 2\sqrt{f'(0)-d}$, and exactly one complex solution in $\Omega$ when $c\in (0,2\sqrt{f'(0)-d})$. \begin{proof} (i) Note that $\Delta_c(\lambda,\tau)$ is convex in $\lambda$, decreasing in $c>0$ when $\lambda>0$, $\Delta_0(\lambda,\tau)>0$, and $\Delta_c(\lambda,\tau)$ is negative for some $\lambda>0$ when $c$ is sufficiently large. Therefore, such $c_0(\tau)$ exists.
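Part (i) also suggests a simple numerical scheme: since $\Delta_c(\cdot,\tau)$ is convex in $\lambda$ and decreasing in $c$, the threshold $c_0(\tau)$ is the smallest $c$ with $\min_{\lambda>0}\Delta_c(\lambda,\tau)\leqslant 0$, and can be bracketed by bisection. A minimal sketch (the values $f'(0)=1$, $d=1/4$ are illustrative only, not from the paper):

```python
import math

FP0, D = 1.0, 0.25   # toy values for f'(0) and d, with f'(0) > d as in (H)

def Delta(lam, c, tau):
    # characteristic function Delta_c(lambda, tau)
    return lam * lam - c * lam - D + FP0 * math.exp(-lam * c * tau)

def min_Delta(c, tau):
    # Delta_c(., tau) is convex in lambda, so a fine grid minimum is a good proxy
    return min(Delta(0.01 * k, c, tau) for k in range(1, 1001))

def c0_tau(tau):
    # smallest c admitting a positive real root; c_0(tau) lies in (0, 2*sqrt(f'(0)-d)]
    lo, hi = 0.0, 2.0 * math.sqrt(FP0 - D)
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if min_Delta(mid, tau) <= 0.0:
            hi = mid
        else:
            lo = mid
    return hi

print(c0_tau(0.0))                # 1.732... = 2*sqrt(f'(0)-d), the undelayed threshold
print(c0_tau(1.0) < c0_tau(0.0))  # True: the delay lowers the threshold speed
```

As a sanity check, $c_0(0)$ recovers the classical threshold $2\sqrt{f'(0)-d}$, while the computed $c_0(\tau)$ stays strictly below it, consistently with $c_0(\tau)\in(0,2\sqrt{f'(0)-d})$.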
(ii) We employ a continuation method with $\tau$ being the parameter. From the proof of \cite[Theorem 2.1]{RuanWei2003}, we can infer that the solutions of $\Delta_c(\lambda,\tau)=0$ are continuous in $\tau>0$. We write $\lambda=\alpha(\tau)+i\beta(\tau)$, where $\alpha(\tau)$ and $\beta(\tau)$ are continuous in $\tau>0$. Separating the real and imaginary parts of $\Delta_c(\lambda,\tau)=0$ yields \begin{equation}\label{s1} \begin{cases} F_1(\alpha,\beta,\tau):=\alpha^2-\beta^2-c\alpha-d+f'(0)e^{-c\tau \alpha}\cos c\tau\beta=0\\ F_2(\alpha,\beta,\tau):=2\alpha\beta-c\beta-f'(0)e^{-c\tau \alpha}\sin c\tau\beta=0. \end{cases} \end{equation} We proceed in four steps. $ Step \ 1$. If $\tau$ is small enough, then there is a solution in $\Omega$. Indeed, at $\tau=0$, \eqref{s1} admits the solution $(\alpha,\beta)=\left(\frac{c}{2},\ \frac{\sqrt{4(f'(0)-d)-c^2}}{2}\right)$. Note that \begin{equation} \det \left( \begin{matrix} \partial_\alpha F_1 & \partial_\beta F_1\\ \partial_\alpha F_2 & \partial_\beta F_2 \end{matrix}\right)|_{\tau=0 } =\det \left( \begin{matrix} 2\alpha-c &-2\beta\\ 2\beta & 2\alpha+c \end{matrix}\right) >0. \end{equation} It then follows from the implicit function theorem that for small $\tau$, $\Delta_c(\lambda,\tau)=0$ admits a complex solution near $\frac{c}{2}+i\frac{\sqrt{4(f'(0)-d)-c^2}}{2}$, and hence in the open domain $\Omega$. $ Step \ 2$. For any $\tau>0$, $\Delta_c(\lambda,\tau)=0$ admits no solution with $\beta=0$ or $\beta=\frac{\pi}{c\tau}$ when $c\tau>0$. It follows from statement (i) that there is no solution with $\beta=0$ when $c<c_0(\tau)$. If $\beta$ equals $\frac{\pi}{c\tau}$, then from the second equation of \eqref{s1} we can infer that $\alpha=\frac{c}{2}$. Substituting $\alpha=\frac{c}{2}$ and $\beta=\frac{\pi}{c\tau}$ into the first equation of \eqref{s1}, we obtain $0=-\frac{1}{4}c^2-\left(\frac{\pi}{c\tau}\right)^2-d-f'(0)e^{-c^2\tau/2}$, a contradiction. $ Step \ 3$.
If a solution $\alpha(\tau)+i\beta(\tau)$ touches the imaginary axis at some $\tau=\tau^*>0$, then $\alpha'(\tau^*)>0$. We use the implicit function theorem. By direct computations, we have \begin{eqnarray*} &&\det \left( \begin{matrix} \partial_\alpha F_1 & \partial_\beta F_1\\ \partial_\alpha F_2 & \partial_\beta F_2 \end{matrix}\right)|_{\tau=\tau^*}\\ =&&\det \left( \begin{matrix} -c-c\tau f'(0)\cos c\tau\beta &-2\beta-c\tau f'(0)\sin c\tau\beta\\ 2\beta+c\tau f'(0)\sin c\tau\beta& -c-c\tau f'(0)\cos c\tau\beta \end{matrix}\right)\\ =&& [-c-c\tau f'(0)\cos c\tau\beta]^2+ [2\beta+c\tau f'(0)\sin c\tau\beta]^2 \geqslant 0, \end{eqnarray*} where the equality holds if and only if $-c-c\tau f'(0)\cos c\tau\beta=0$ and $2\beta+c\tau f'(0)\sin c\tau\beta=0$. Substituting these two relations into \eqref{s1} with $\alpha=0$, we obtain \begin{equation} \begin{cases} -\beta^2-d-\frac{1}{\tau}=0\\ -c\beta+\frac{2\beta}{c\tau}=0, \end{cases} \end{equation} which is not solvable for $\beta$. Therefore, \[ \det \left( \begin{matrix} \partial_\alpha F_1 & \partial_\beta F_1\\ \partial_\alpha F_2 & \partial_\beta F_2 \end{matrix}\right)|_{\tau=\tau^*}>0.
\] On the other hand, \[ \left( \begin{matrix} \partial_\tau F_1\\ \partial_\tau F_2 \end{matrix}\right)|_{\tau=\tau^*} =-c\beta f'(0)\left( \begin{matrix} \sin c\tau\beta \\ \cos c\tau\beta \end{matrix}\right). \] Consequently, by the implicit function theorem we have \[ \left( \begin{matrix} \alpha'(\tau^*)\\ \beta'(\tau^*) \end{matrix}\right)|_{\tau=\tau^*} =-\left( \begin{matrix} \partial_\alpha F_1 & \partial_\beta F_1\\ \partial_\alpha F_2 & \partial_\beta F_2 \end{matrix}\right)^{-1}|_{\tau=\tau^*}\left( \begin{matrix} \partial_\tau F_1\\ \partial_\tau F_2 \end{matrix}\right)|_{\tau=\tau^*}, \] from which we compute \begin{equation} \alpha'(\tau^*)=\frac{(2\beta^4+2d\beta^2+c^2)c}{\det \left( \begin{matrix} \partial_\alpha F_1 & \partial_\beta F_1\\ \partial_\alpha F_2 & \partial_\beta F_2 \end{matrix}\right)|_{\tau=\tau^*}}>0. \end{equation} $ Step \ 4$. Completion of the proof. In Steps 2 and 3, we have verified that the perturbed solution obtained in Step 1 cannot escape $\Omega$ as $\tau$ increases from $0$ to $\infty$. Therefore, it always stays in $\Omega$. \end{proof} Based on the above results, we are ready to give the following necessary and sufficient condition for \eqref{semiwave} to have a unique positive solution. \begin{prop}\label{prop:qoan1} Suppose {\bf{(H)}} holds. Problem \eqref{semiwave} has a unique positive solution $q\in C^2([0,\infty))$ if and only if $c\in[0,c_0(\tau))$, where $c_0(\tau)$ is given in Lemma \ref{lem:eigen}. \end{prop} \begin{proof} Firstly, let us show that problem \eqref{semiwave} admits a unique positive solution when $c\in[0,c_0(\tau))$. We employ the super- and subsolution method. The case $c\tau=0$ is trivial and the proof is omitted. Fix $c\in (0,c_0(\tau))$.
By Lemma \ref{lem:eigen} we can infer that there exists $\gamma>0$ such that \begin{equation} \widetilde{\Delta}_c(\lambda)=\lambda^2-c\lambda-d+(1-\gamma) f'(0)e^{-\lambda c\tau}=0 \end{equation} has a solution $\lambda=\alpha+i\beta$ in $\Omega$. {\bf Claim.} The function \begin{equation} \underline{v}(x):= \begin{cases} \delta e^{\alpha x} \cos \beta x, & \beta x\in (\frac{3\pi}{2}, \frac{5\pi}{2}),\\ 0, & \text{elsewhere}, \end{cases} \end{equation} is a subsolution provided that $\delta$ is small enough. Indeed, for $\beta x\in (\frac{3\pi}{2}, \frac{5\pi}{2})$, we have \begin{eqnarray*} L[\underline{v}](x):=&&\underline{v}''(x)-c\underline{v}'(x)-d\underline{v}(x)+f(\underline{v}(x-c\tau))\\ =&& \underline{v}(x) \left[ \alpha^2-\beta^2-c\alpha -d- [2\alpha\beta -c\beta]\tan\beta x \right] +f(\underline{v}(x-c\tau))\\ =&& -\underline{v}(x) \frac{1}{\cos \beta x}(1-\gamma)f'(0)e^{-c\tau\alpha}\cos(\beta(x-c\tau))+f(\underline{v}(x-c\tau))\\ =&& -(1-\gamma) f'(0)\delta e^{\alpha(x-c\tau)}\cos\beta(x-c\tau)+f(\underline{v}(x-c\tau)). \end{eqnarray*} Choose $\delta>0$ sufficiently small such that \[ f(\underline{v}(x-c\tau)) \geqslant (1-\gamma) f'(0) \underline{v}(x-c\tau), \] with which we obtain \[ L[\underline{v}](x)\geqslant (1-\gamma)f'(0) [\underline{v}(x-c\tau)-\delta e^{\alpha(x-c\tau)}\cos\beta(x-c\tau)],\quad \beta x\in \left(\frac{3\pi}{2}, \frac{5\pi}{2}\right). \] Clearly, if $\beta (x-c\tau) \in \left(\frac{3\pi}{2}, \frac{5\pi}{2}\right)$, then $\underline{v}(x-c\tau)=\delta e^{\alpha(x-c\tau)}\cos\beta(x-c\tau)$, and hence, $L[\underline{v}](x)\geqslant 0$. If $\beta (x-c\tau) \not\in \left(\frac{3\pi}{2}, \frac{5\pi}{2}\right)$, then $\underline{v}(x-c\tau)=0$, and hence, \[ L[\underline{v}](x)\geqslant -(1-\gamma)f'(0)\delta e^{\alpha(x-c\tau)}\cos\beta(x-c\tau) \] with $\beta (x-c\tau)\in \left(\frac{3\pi}{2}-\beta c\tau, \frac{5\pi}{2}-\beta c\tau\right)\setminus \left(\frac{3\pi}{2}, \frac{5\pi}{2}\right)$.
Since $\beta c\tau< \pi$ (as $\lambda\in\Omega$ by Lemma \ref{lem:eigen}), we obtain $\cos\beta(x-c\tau)\leqslant 0$ when $\beta (x-c\tau)\in \left(\frac{3\pi}{2}-\beta c\tau, \frac{5\pi}{2}-\beta c\tau\right)\setminus \left(\frac{3\pi}{2}, \frac{5\pi}{2}\right)$. To summarize, $L[\underline{v}](x)\geqslant 0$ for $\beta x\in \left(\frac{3\pi}{2}, \frac{5\pi}{2}\right)$ and $L[\underline{v}](x)= 0$ for $\beta x\not \in \left[\frac{3\pi}{2}, \frac{5\pi}{2}\right]$. The claim is proved. Having such a subsolution, we can infer that \eqref{semiwave} admits a positive solution. The uniqueness of the solution of \eqref{semiwave} follows from Proposition \ref{prop:semiwave}. Next we show that \eqref{semiwave} does not admit a positive solution when $c\geqslant c_0(\tau)$. We employ a sliding argument. Assume for the sake of contradiction that there is a solution $q(z)$. Since $c\geqslant c_0(\tau)$, $\Delta_c(\lambda,\tau)=0$ admits a positive solution $\lambda_1$. Define $w(z)=le^{\lambda_1 z}-q(z)$ for $l>0$. Since $q(0)=0$ and $q(+\infty)=u^*$, we may choose $l$ such that $w(z)\geqslant 0$ for $z\geqslant 0$ and $w(z)$ vanishes at some $z\in (0,+\infty)$. Note that $f(u)\leqslant f'(0) u$. It then follows that \begin{equation} w''(z)-cw'(z)-dw(z)=-f'(0)w(z-c\tau)+[f(q(z-c\tau))-f'(0)q(z-c\tau)]\leqslant 0, \quad z \geqslant 0. \end{equation} By the elliptic strong maximum principle, we obtain $w(z)\equiv 0$ for $z\geqslant 0$, a contradiction. The nonexistence is proved. \end{proof} Based on the above results, we obtain the solvability of \eqref{sw11}. \begin{thm}\label{waves} For any given $\tau>0$, let $c_0(\tau)$ be given in Lemma \ref{lem:eigen}. For each $\mu>0$, there exists a unique $c^*=c^*_\mu(\tau)\in (0, c_0(\tau))$ such that $(q_{c^*})'_+(0)=\frac{c^*}{\mu}$, where $q_{c^*}(z)$ is the unique positive solution of \eqref{semiwave} with $c$ replaced by $c^*$. Moreover, $c^*_\mu(\tau)$ is increasing in $\mu$ with \[ \lim_{\mu\to\infty}c^*_\mu(\tau)=c_0(\tau).
\] \end{thm} \begin{proof} From Propositions \ref{prop:semiwave} and \ref{prop:qoan1}, it is known that for each $c\in [0,c_0(\tau))$, problem \eqref{semiwave} admits a unique positive solution $q_c$, with $q_c(z)>0$ for $z>0$, and for any $0\leqslant c_1<c_2< c_0(\tau)$, $q_{c_1}(z)>q_{c_2}(z)$ in $(0,\infty)$. Define \begin{equation}\label{def-P} P(0;c,\tau):=(q_c)_+'(0). \end{equation} Then $P(0;c,\tau)>0$ for all $c\in[0,c_0(\tau))$ and it decreases continuously in $c\in [0, c_0(\tau))$. Let $c_n\uparrow c_0(\tau)$. For each $c_n$, problem \eqref{semiwave} admits a unique positive solution $q_{c_n}(z)$. By the monotonicity in $c$ and standard elliptic estimates, $q_{c_n}$ converges to some $q^*$ and $(q_{c_n})'$ converges to $(q^*)'$ locally uniformly in $z\in[0,+\infty)$, and $q^*$ solves \eqref{semiwave} with $c=c_0(\tau)$. By the nonexistence established in Proposition \ref{prop:qoan1} we obtain $q^*\equiv 0$. In particular, \begin{equation} \lim_{c\uparrow c_0(\tau)} (q_c)_+'(0)=(q^*)_+'(0)=0. \end{equation} We now consider the continuous function \[ \eta(c;\tau)=\eta_\mu(c;\tau):=P(0;c,\tau)-\frac{c}{\mu}\ \ \mbox{ for }\ c\in [0,c_0(\tau)). \] By the above discussion we know that $\eta(c;\tau)$ is strictly decreasing in $c\in[0, c_0(\tau))$. Moreover, $\eta(0;\tau)=P(0;0,\tau)>0$ and $\lim_{c\uparrow c_0(\tau)}\eta(c;\tau)=-c_0(\tau)/\mu<0$. Thus there exists a unique $c^*=c^*_\mu(\tau)\in (0, c_0(\tau))$ such that $\eta(c^*;\tau)=0$, which means that \[ (q_{c^*})_+'(0)=\frac{c^*}{\mu}. \] Viewing $(c^*_\mu, c_\mu^*/\mu)$ as the unique intersection point of the decreasing curve $y=P(0;c,\tau)$ and the increasing line $y=c/\mu$ in the $cy$-plane, it is clear that $c^*_\mu(\tau)$ increases to $c_0(\tau)$ as $\mu$ increases to $\infty$. The proof is complete. \end{proof} \begin{remark}\label{tau0}\rm In \cite{DuLou}, the authors considered the case $\tau=0$.
They obtained that for each $\mu>0$, there is a unique $c^*=c^*_\mu(0)\in (0, c_0(0))$ such that $(q_{c^*})'_+(0)=\frac{c^*}{\mu}$, where $q_{c^*}(z)$ is the unique positive solution of \eqref{semiwave} with $\tau=0$ and $c=c^*$, and $c_0(0)=2\sqrt{f'(0)-d}$. Moreover, $c^*_\mu(0)$ is increasing in $\mu$ with \[ \lim_{\mu\to\infty}c^*_\mu(0)=c_0(0). \] \end{remark} In the rest of this section, we study the monotonicity of $c^*_\mu(\tau)$ in $\tau$. For any given $\tau\geqslant 0$, the unique positive solution of \eqref{semiwave} with $c\in[0,c_0(\tau))$ will be denoted by $q_c(z;\tau)$. Now we give the proof of Theorem \ref{thm:semiwave}. \noindent {\bf Proof of Theorem \ref{thm:semiwave}:} For $\tau\geqslant 0$ and $\mu>0$, let $c^*_\mu(\tau)$ be given in Theorem \ref{waves} and Remark \ref{tau0} for $\tau>0$ and $\tau=0$, respectively. By Propositions \ref{prop:semiwave} and \ref{prop:qoan1}, we see that for $\tau\geqslant 0$ and $c\in (0,c_0(\tau))$, problem \eqref{semiwave} admits a unique positive solution $q_c(z;\tau)$. Moreover, $q_c(z;\tau)$ is increasing in $z>0$ and decreasing in $c\in (0,c_0(\tau))$. Let $P(0;c,\tau)$ be defined as in \eqref{def-P}. {\bf Claim.} For $0\leqslant \tau_1<\tau_2$, $P(0;c,\tau_1)>P(0;c,\tau_2)$ when $c\in (0,c_0(\tau_2))$. We postpone the proof of the claim and first complete the argument. Note that $c^*_\mu(\tau)$ is the unique positive solution of $P(0;c,\tau)-\frac{c}{\mu}=0$. In view of $\lim_{c\uparrow c_0(\tau_2)} P(0;c,\tau_2)=0$, we have $c^*_\mu(\tau_2)\in (0,c_0(\tau_2))$. If $c^*_\mu(\tau_1)\geqslant c_0(\tau_2)$, then we are done, since $c^*_\mu(\tau_2)<c_0(\tau_2)$. Otherwise, $c^*_\mu(\tau_1)\in (0,c_0(\tau_2))$, which, together with the claim, implies that \[ \frac{c^*_\mu(\tau_1)}{\mu}=P(0;c^*_\mu(\tau_1),\tau_1)>P(0;c^*_\mu(\tau_1),\tau_2). \] This further implies that $c^*_\mu(\tau_1)>c^*_\mu(\tau_2)$, due to the monotonicity of $P(0;c,\tau_2)-\frac{c}{\mu}$ in $c\in (0,c_0(\tau_2))$. Thus, $c^*_\mu(\tau)$ is decreasing in $\tau\geqslant 0$.
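The defining relation $(q_{c^*})_+'(0)=c^*/\mu$ characterizes $c^*_\mu$ as the intersection of the decreasing curve $y=P(0;c,\tau)$ with the increasing line $y=c/\mu$, so once $P$ is computable, $c^*_\mu$ follows by bisection on $\eta$. A toy sketch, where the linear `P` below is a hypothetical stand-in for $c\mapsto P(0;c,\tau)$ (only its monotonicity and vanishing at $c_0$ matter):

```python
def c_star(mu, P, c0):
    # unique zero of eta(c) = P(c) - c/mu on (0, c0):
    # eta is strictly decreasing, eta(0) > 0 and eta(c) -> -c0/mu < 0 as c -> c0
    lo, hi = 0.0, c0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if P(mid) - mid / mu > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

c0 = 1.7                       # stand-in for c_0(tau)
P = lambda c: 0.9 * (c0 - c)   # hypothetical decreasing curve with P(c) -> 0 as c -> c0

print(c_star(1.0, P, c0) < c_star(5.0, P, c0))   # True: c*_mu is increasing in mu
print(abs(c_star(1e3, P, c0) - c0) < 1e-2)       # True: c*_mu -> c0 as mu -> infinity
```

The two printed checks mirror the conclusions of Theorem \ref{waves}: monotonicity of $c^*_\mu$ in $\mu$ and the limit $c^*_\mu\to c_0$ as $\mu\to\infty$.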
{\it Proof of the claim.} Since $c_0(\tau)$ is decreasing in $\tau\geqslant 0$, we see that $P(0;c,\tau_1)$ is well-defined when $c\in (0,c_0(\tau_2))$. By the monotonicity of $q_c(z;\tau_2)$ in $z>0$, we have $q_c(z-c\tau_2;\tau_2)<q_c(z-c\tau_1;\tau_2)$. This, together with the monotonicity of $f(v)$ in $v$, implies that $f(q_c(z-c\tau_2;\tau_2))< f(q_c(z-c\tau_1;\tau_2))$. Consequently, \[ q_c''(z;\tau_2)-cq_c'(z;\tau_2)-dq_c(z;\tau_2)+f(q_c(z-c\tau_1;\tau_2))> 0,\quad z>0. \] Consider the initial value problem \begin{equation} \begin{cases} v_t=v_{zz}-cv_z-dv+f(v(t,z-c\tau_1)),& t>0,\ z>0,\\ v(t,z)=0, &t>0,\ z\leqslant 0,\\ v(0,z)=q_c(z;\tau_2). \end{cases} \end{equation} By the maximum principle we know that $v(t,z)$ is nondecreasing in $t\geqslant 0$ and its limit $v^*(z)$ as $t\to\infty$ satisfies \eqref{semiwave} with $\tau=\tau_1$. By the uniqueness established in Proposition \ref{prop:semiwave}, we obtain $v^*(z)=q_c(z;\tau_1)$. Therefore, \begin{equation} q_c(z;\tau_2)=v(0,z)\leqslant v(t,z)\leqslant v(+\infty,z)=v^*(z)=q_c(z;\tau_1). \end{equation} The claim is proved. { $\Box$} \section{Long time behavior of the solutions}\label{seclo} In this section we study the asymptotic behavior of solutions of \eqref{p}. Firstly, we give some sufficient conditions for vanishing and spreading. Next, based on these results, we prove the spreading-vanishing dichotomy for \eqref{p}. Let us start this section with the following equivalent conditions for vanishing. \begin{lem}\label{lemvansmall} Assume that {\bf(H)} holds. Let $(u,g,h)$ be a solution of \eqref{p}. Then the following three assertions are equivalent: $$ {\rm (i)}\ h_\infty \mbox{ or } g_\infty \mbox{ is finite};\qquad {\rm (ii)}\ h_\infty-g_\infty\leqslant \pi/\sqrt{f'(0)- d }; \qquad {\rm (iii)}\ \lim_{t\to\infty}\|u(t,\cdot)\|_{L^\infty ([g(t),h(t)])}= 0. $$ \end{lem} \begin{proof} ``(i)$\Rightarrow$(ii)".
Without loss of generality we assume $h_\infty < \infty$ and prove (ii) by contradiction. Assume that $h_\infty-g_\infty > \pi/\sqrt{f'(0)- d }$; then there exists $t_1 \gg 1$ such that \[ h(t_1) - g(t_1) > \frac{\pi}{\sqrt{f'(0)- d }}. \] Let us consider the following auxiliary problem: \begin{equation}\label{subso} \left\{ \begin{array}{ll} v_t = v_{xx} - d v +f(v(t-\tau,x)), & t> t_1,\ x\in (g(t_1), \xi(t)),\\ v(t, \xi(t)) = 0,\quad \xi'(t)= -\mu v_x(t, \xi(t)),& t>t_1,\\ v (t,g(t_1))=0, & t> t_1,\\ \xi(t_1) = h(t_1),\ \ v(s, x)= u(s, x), & s\in[t_1-\tau, t_1],\ x\in [g(s), h(s)]. \end{array} \right. \end{equation} It is easy to check that $v$ is a subsolution of \eqref{p}, so $\xi(t)\leqslant h(t)$ and $\xi(\infty)<\infty$ by our assumption. Using a similar argument as in \cite[Lemma 3.3]{DGP} one can show that $$ \|v(t,\cdot)- V(\cdot)\|_{C^2([g(t_1),\xi(t)])} \to 0,\quad \mbox{as } t\to\infty, $$ where $V(x)$ is the unique positive solution of the problem \[ V''- d V +f(V)=0\ \ \mbox{ for}\ \ x\in(g(t_1),\xi(\infty)),\ \ \ \ V(g(t_1))=V(\xi(\infty))=0. \] Thus, \[ \lim_{t\to\infty} \xi'(t)=-\mu \lim_{t\to\infty}v_x (t,\xi(t)) =-\mu V'(\xi(\infty)) = \delta \] for some $\delta>0$, which contradicts the fact that $\xi(\infty) < \infty$. ``(ii)$\Rightarrow$(iii)". It follows from the assumption and \cite[Proposition 2.9]{YCW} that the unique positive solution of the following problem \begin{equation}\label{upbsoper} \left\{ \begin{array}{ll} v_t=v_{xx}- d v+f(v(t-\tau,x)), & t>0,\ x\in[g_\infty,h_\infty],\\ v(t,g_\infty)= v(t,h_\infty)=0, & t>0,\\ v(\theta,x)\geqslant 0, & \theta\in[-\tau,0],\ x\in[g_\infty,h_\infty], \end{array} \right. \end{equation} with $v(\theta,x)\geqslant \phi(\theta,x)$ in $[-\tau,0]\times[g(\theta),h(\theta)]$, satisfies $v\to0$ uniformly for $x\in[g_\infty,h_\infty]$ as $t\to\infty$. Then the conclusion (iii) follows easily from the comparison principle.
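The critical length $\pi/\sqrt{f'(0)-d}$ appearing in (ii) is exactly where the principal Dirichlet eigenvalue $\lambda_1(l)=(\pi/l)^2+d-f'(0)$ of $-\varphi_{xx}+d\varphi-f'(0)\varphi$ on an interval of length $l$ changes sign, which drives both implications above. A quick check with illustrative values $f'(0)=1$, $d=1/4$ (not from the paper):

```python
import math

FP0, D = 1.0, 0.25
L_CRIT = math.pi / math.sqrt(FP0 - D)   # critical habitat length pi/sqrt(f'(0)-d)

def lambda1(l):
    # principal Dirichlet eigenvalue of -phi'' + d*phi - f'(0)*phi on (0, l)
    return (math.pi / l) ** 2 + D - FP0

print(abs(lambda1(L_CRIT)) < 1e-12)   # True: the eigenvalue vanishes at the critical length
print(lambda1(1.1 * L_CRIT) < 0.0)    # True: longer interval, negative eigenvalue
print(lambda1(0.9 * L_CRIT) > 0.0)    # True: shorter interval, positive eigenvalue
```

A negative $\lambda_1$ is what allows the small positive stationary subsolution $\epsilon\varphi$ used in the direction (iii)$\Rightarrow$(ii) below.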
``(iii)$\Rightarrow$(ii)": Suppose by way of contradiction that for some small $\varepsilon>0$ there exists $t_2\gg 1$ such that $h(t)-g(t)>\frac{\pi}{\sqrt{f'(0)- d }}+ 3\varepsilon$ for all $t>t_2-\tau$. Let $l_1:=\pi/\sqrt{f'(0)- d }+ \varepsilon$. It is well known that the following eigenvalue problem $$ \left\{ \begin{array}{ll} -\varphi_{xx} + d \varphi- f'(0)\varphi=\lambda_1\varphi, & 0<x<l_1,\\ \varphi(0)=\varphi(l_1)=0, \end{array} \right. $$ has a negative principal eigenvalue $\lambda_1$, whose corresponding eigenfunction $\varphi$ can be chosen positive and normalized by $\|\varphi\|_{L^{\infty}}=1$. Set \[ w(t,x) :=\epsilon\varphi(x)\ \mbox{ for } x\in[0,l_1], \] with $\epsilon>0$ small such that \[ f(\epsilon\varphi)\geqslant f'(0)\epsilon\varphi+ \frac{1}{2}\lambda_1\epsilon\varphi\ \ \mbox{ in }[0, l_1]. \] It is easy to compute that for $x\in[0, l_1]$, $$ w_t-w_{xx}+ d w-f(w(t-\tau,x))= \epsilon\varphi [f'(0)+\lambda_1 ] -f(\epsilon\varphi) \leqslant 0. $$ Moreover, one can see that \[ 0\leqslant w(x) = \epsilon \varphi(x) < u(t_2+s, x +g(t_2+s)+\varepsilon),\quad x\in [0, l_1],\ s\in[-\tau,0] \] provided that $\epsilon$ is sufficiently small. Then we can apply the comparison principle to deduce $$ u(t+t_2,x +g(t_2) +\varepsilon) \geqslant w(x)>0,\quad (t,x)\in[0,\infty)\times(0, l_1), $$ contradicting (iii). ``(ii)$\Rightarrow$(i)". When (ii) holds, (i) is obvious. This proves the lemma. \end{proof} Next, we give a sufficient condition for vanishing, which indicates that if the initial domain and the initial function are both small, then the species eventually dies out. \begin{lem}\label{vfsma} Assume that {\bf(H)} holds. Let $(u,g,h)$ be a solution of \eqref{p}. Then vanishing happens provided that $h(0)-g(0)<\frac{\pi}{\sqrt{f'(0)- d }}$ and $\|\phi\|_{L^\infty([-\tau,0]\times[g(\theta),h(\theta)])}$ is sufficiently small.
\end{lem} \begin{proof} Set \[ h_0=\frac{h(0)-g(0)}{2}, \] then $h_0<\pi/(2\sqrt{f'(0)- d })$, so there exists a small $\varepsilon >0$ such that \begin{equation}\label{choice of delta} \frac{\pi^2}{4 (1+\varepsilon)^2 h^2_0} - (f'(0)+\varepsilon)e^{\varepsilon\tau} + d \geqslant \varepsilon. \end{equation} For such $\varepsilon$, we can find a small positive constant $\delta$ such that $$ \pi \mu \delta \leqslant \varepsilon^2 h^2_0, \qquad f(v) \leqslant (f'(0) + \varepsilon) v \quad \mbox{for } v\in [0,\delta]. $$ Define \begin{align*} & k(t) := h_0 \Big( 1+\varepsilon - \frac{\varepsilon}{2} e^{-\varepsilon t} \Big), \quad w(t,x):= \delta e^{-\varepsilon t} \cos\Big( \frac{\pi x}{2 k (t)}\Big),\ \ t>0,\ x\in[-k(t),k(t)],\\ & k(\theta)\equiv k_0 := h_0 \Big( 1+\frac{\varepsilon}{2} \Big), \quad w(\theta,x)\equiv w_0(x):= \delta \cos\Big(\frac{\pi x}{h_0(2+\varepsilon)}\Big),\ \ \theta\in[-\tau,0],\ x\in[-k_0,k_0]. \end{align*} and extend $w(t,x)$ by $0$ for $t\in[-\tau,\infty)$, $x\in(-\infty, -k(t)]\cup [k(t),\infty)$. A direct calculation shows that for $t>0$, $x\in(-k(t),k(t))$ \begin{eqnarray*} && w_t - w_{xx} + d w - f(w(t-\tau,x))\\ &=& \left[ \frac{\pi^2}{4k^2(t)}-\varepsilon + d -(f'(0)+\varepsilon)\frac{w(t-\tau,x)}{w(t,x)} +\frac{\pi x k'(t)}{2k^2(t)}\tan \Big( \frac{\pi x}{2 k (t)}\Big)\right] w\\ & \geqslant& \left[ -\varepsilon + \frac{\pi^2}{4k^2(t)}+ d -(f'(0)+\varepsilon)\frac{w(t-\tau,x)}{w(t,x)} \right] w, \end{eqnarray*} where we have used $k'(t)>0$, $k(t)>0$ for $t>0$ and $y\tan y\geqslant 0$ for $y\in(-\frac{\pi}{2},\frac{\pi}{2})$. 
When $t\geqslant \tau$ and $x\in(-k(t),k(t))$, it is easy to check that \begin{eqnarray*} \mathcal{A} & := & -\varepsilon + \frac{\pi^2}{4k^2(t)}+ d -(f'(0)+\varepsilon)\frac{w(t-\tau,x)}{w(t,x)} \\ & \geqslant & -\varepsilon + \frac{\pi^2}{4h_0^2(1+\varepsilon)^2}+ d -(f'(0)+\varepsilon)e^{\varepsilon\tau} \geqslant 0, \end{eqnarray*} where the fact that $\cos\Big(\frac{\pi x}{2 k (t-\tau)}\Big)\leqslant \cos\Big(\frac{\pi x}{2 k (t)}\Big)$ for $(t,x)\in[\tau,\infty)\times[-k(t),k(t)]$ and the monotonicity of $k(t)$ in $t\in[0,\infty)$ are used. If $t\in[0, \tau)$ and $x\in(-k(t),k(t))$, we have that \begin{eqnarray*} \mathcal{A} & \geqslant & -\varepsilon + \frac{\pi^2}{4h_0^2(1+\varepsilon)^2}+ d -(f'(0)+\varepsilon)e^{\varepsilon t}\frac{\cos\Big(\frac{\pi x}{h_0(2+\varepsilon)}\Big)}{\cos\Big(\frac{\pi x}{2k(t)}\Big)} \\ & \geqslant & -\varepsilon + \frac{\pi^2}{4h_0^2(1+\varepsilon)^2}+ d -(f'(0)+\varepsilon)e^{\varepsilon\tau} \geqslant 0. \end{eqnarray*} Thus we have $$ w_t - w_{xx} + d w - f(w(t-\tau,x)) \geqslant 0\ \ \mbox{ in }\ (0,\infty)\times(-k(t),k(t)). $$ On the other hand, $$ k'(t)=\frac{\varepsilon^2 h_0}{2} e^{-\varepsilon t}\geqslant \frac{\pi \mu \delta}{2h_0 } e^{-\varepsilon t}\geqslant \frac{\pi \mu \delta}{2k(t)} e^{-\varepsilon t} \geqslant - \mu w_x(t, k(t)) =\mu w_x(t, -k(t)). $$ As a consequence, $(w(t,x), -k(t), k(t))$ will be a supersolution of \eqref{p} if $w(\theta,x)\geqslant \phi (\theta,x)$ in $[-\tau,0]\times[g(\theta),h(\theta)]$. Indeed, choose $\sigma_1 := \delta\cos \frac{\pi}{2+\varepsilon}$, which depends only on $\mu, h_0, d $ and $f$. Then when $\|\phi\|_{L^\infty([-\tau,0]\times[g(\theta),h(\theta)])} \leqslant \sigma_1$ we have $\phi(\theta,x)\leqslant \sigma_1 \leqslant w(\theta,x)$ in $[-\tau,0]\times[g(\theta), h(\theta)]$, since $h_0 < k(0)= h_0 (1+\frac{\varepsilon}{2})$. It follows from the comparison principle that $$ h(t)\leqslant k(t) \leqslant h_0 (1+\varepsilon),\; h_\infty<\infty. 
$$ This, together with the previous lemma, implies that vanishing happens. \end{proof} \begin{remark}\rm When $\tau=0$, the proof of Lemma \ref{vfsma} reduces to that of \cite[Theorem 3.2(i)]{DuLou}. \end{remark} We now present a sufficient condition for spreading, which reads as follows. \begin{lem}\label{lemuto1} Assume that {\bf(H)} holds. If $h(0)-g(0)\geqslant \pi/\sqrt{f'(0)- d }$, then spreading happens for every positive solution $(u, g, h)$ of \eqref{p}. \end{lem} \begin{proof} Since $g'(t)<0<h'(t)$ for $t>0$, we have $h(t)-g(t)>\pi/\sqrt{f'(0)- d }$ for any $t>0$. So the conclusion $-g_\infty = h_\infty =\infty$ follows from Lemma \ref{lemvansmall}. In what follows we prove \begin{equation}\label{utoPt} \lim_{t\to\infty}u(t,x)=u^* \mbox{ locally uniformly in $\mathbb{R}$}. \end{equation} First, it is well known that for any $L>\pi/(2\sqrt{f'(0)- d })$, the following problem \[ W_{xx}- d W+f(W)=0,\ \ \ x\in(-L,L),\ \ \ W(\pm L)= 0, \] admits a unique positive solution $W_L$, which is increasing in $L$ and satisfies \begin{equation}\label{WL1} \lim_{L\to\infty}W_L(x)=u^* \mbox{ locally uniformly in $\mathbb{R}$}. \end{equation} Moreover, we can find an increasing sequence of positive numbers $L_n$ with $L_n\to\infty$ as $n\to\infty$ such that $L_n>\pi/\sqrt{f'(0)- d }$ for all $n\geqslant1$. Since $-g_\infty = h_\infty =\infty$, we can choose $t_n$ such that $h(t)\geqslant L_n$ and $g(t)\leqslant-L_n$ for $t\geqslant t_n$. It then follows from \cite{YCW} that the following problem \[ \left\{ \begin{array}{ll} w_t =w_{xx}- d w +f(w(t-\tau,x)), & t\geqslant t_n+\tau,\ x\in[-L_n,L_n],\\ w(t,\pm L_n)= 0, & t\geqslant t_n+\tau,\\ w(s,x)=u(s,x), & s\in[t_n, t_n+\tau],\ x\in[-L_n,L_n], \end{array} \right. \] has a unique positive solution $w_n(t,x)$, which satisfies \[ w_n(t,x)\to W_{L_n}(x) \ \mbox{ uniformly for } x\in[-L_n,L_n]\ \mbox{ as } t\to\infty.
\] Applying the comparison principle we have $w_n(t,x)\leqslant u(t,x)$ for all $t\geqslant t_n+\tau$, $x\in [-L_n,L_n]$. This, together with \eqref{WL1}, yields that \begin{equation}\label{uin1} \liminf_{t\to\infty} u(t ,x) \geqslant u^*\ \mbox{ locally uniformly for } x\in\mathbb{R}. \end{equation} Next, since the initial data $u_0(s,x)$ satisfies $0\leqslant u_0(s,x)\leqslant u^*$ for $(s,x)\in[-\tau,0]\times[g(s),h(s)]$, it follows from the comparison principle that \[ \limsup_{t\to\infty} u(t ,x) \leqslant u^*\ \mbox{ locally uniformly for } x\in\mathbb{R}. \] Combining this with \eqref{uin1}, one easily obtains \eqref{utoPt}, which ends the proof of this lemma. \end{proof} Now we are ready to give the proof of Theorem \ref{thm:asy be}. \noindent {\bf Proof of Theorem \ref{thm:asy be}}. It is easy to see that there are two possibilities: (i) $h_\infty-g_\infty\leqslant \pi/\sqrt{f'(0)- d }$; (ii) $h_\infty-g_\infty>\pi/\sqrt{f'(0)- d }$. In case (i), it follows from Lemma \ref{lemvansmall} that $\lim_{t\to\infty} \|u(t,\cdot)\|_{L^\infty([g(t),h(t)])}=0$. In case (ii), it follows from Lemma \ref{lemuto1} and its proof that $(g_\infty, h_\infty)=\mathbb{R}$ and $u(t,x)\to u^*$ as $t\to\infty$ locally uniformly in $\mathbb{R}$, which ends the proof. $\square$ \section{Asymptotic profiles of spreading solutions}\label{sec:asybeh} Throughout this section we assume that {\bf(H)} holds and that $(u,g,h)$ is a solution of \eqref{p} for which spreading happens. In order to determine the spreading speed, we will construct suitable sub- and supersolutions based on semi-waves. Let $c^*$ and $q_{c^*}(z)$ be given in Theorem \ref{waves}. The first subsection covers the proof of the boundedness of $|h(t)-c^*t|$ and $|g(t)+c^*t|$. Based on these results, we prove Theorem \ref{thm:profile of spreading sol} in the second subsection. \subsection{Boundedness for $|h(t)-c^*t|$ and $|g(t)+c^*t|$.}\label{sub51} Let us begin this subsection with the following estimate.
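As a numerical aside to the vanishing/spreading dichotomy just established (illustration only, not part of the proofs), the critical length $\pi/\sqrt{f'(0)-d}$ is exactly where the principal eigenvalue of the undelayed linearization $w\mapsto w''+(f'(0)-d)w$, with Dirichlet conditions on an interval of length $L$, namely $\lambda_1(L)=(f'(0)-d)-(\pi/L)^2$, changes sign. The sketch below checks this with a finite-difference discretization; the values $f'(0)=2$, $d=1$ are assumed sample data, not from the text.

```python
import numpy as np

# Sign of the principal eigenvalue of w'' + (f'(0)-d) w on (0, L) with
# Dirichlet boundary conditions; it changes at L = pi / sqrt(f'(0)-d).
fp0, d = 2.0, 1.0                      # assumed sample values, f'(0) > d
Lc = np.pi / np.sqrt(fp0 - d)          # critical length (= pi here)

def principal_eig(L, n=400):
    """Largest eigenvalue of the discretized operator d^2/dx^2 + (fp0-d)."""
    h = L / (n + 1)
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.eigvalsh(lap + (fp0 - d) * np.eye(n)).max()

print(principal_eig(0.9 * Lc) < 0)     # subcritical length: decay (vanishing)
print(principal_eig(1.1 * Lc) > 0)     # supercritical length: growth (spreading)
```

The discrete principal eigenvalue agrees with the closed form $\lambda_1(L)=(f'(0)-d)-(\pi/L)^2$ up to $O(h^2)$, so both printed comparisons come out `True`.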
\begin{lem}\label{lem:u-to-1} Let $(u, g, h)$ be a solution of \eqref{p} for which spreading happens. Then for any $c\in (0,c^*)$, there exist small $\beta^*\in (0, d -f'(u^*))$, $T>0$ and $M>0$ such that for $t\geqslant T$, \begin{itemize} \item[\rm (i)] $ [g(t), h(t)]\supset [-ct, ct]; $ \item[\rm (ii)] $ u(t,x)\geqslant u^*\big(1-M e^{-\beta^* t}\big)\quad \mbox{for } x\in [-ct, ct]; $ \item[\rm (iii)] $ u(t,x) \leqslant u^*\big(1+M e^{-\beta^* t}\big) \quad \mbox{for } x \in [g(t), h(t)]. $ \end{itemize} \end{lem} \begin{proof} In order to prove conclusions (i) and (ii), inspired by \cite{FM}, we will use the semi-wave $q_{c^*}$ to construct a suitable subsolution. Here we mainly use the monotonicity of $q_{c^*}$ and its exponential convergence. (i)\ Since $q_{c^*}(z)$ is the unique positive solution of \begin{equation}\label{semiwave112} \left\{ \begin{array}{ll} q_{c^*}'' - c^*q_{c^*}'- d q_{c^*}+ f( q_{c^*}(z-c^*\tau))=0,\ \ \ q_{c^*}'(z)>0, & z>0,\\ q_{c^*}(z)=0, & z\leqslant 0,\\ \mu q_{c^*}'(0)=c^*,\ \ q_{c^*}(\infty)=u^*, \end{array} \right. \end{equation} it is easy to check that $q_{c^*}''(0)> 0$. Since $q_{c^*}'(z)> 0$ for $z\geqslant0$ and $q_{c^*}(z)\to u^*$ as $z\to\infty$, there is $z_0\gg 1$ such that $q_{c^*}''(z)<0$ for $z\geqslant z_0$. Hence there exists $\hat{z} \in (0,\infty)$ such that $q_{c^*}''(\hat{z})=0$ and $q_{c^*}''(z)>0 $ for $z\in[0,\hat{z})$. This means that $q_{c^*}'(z)$ is increasing in $z\in[0,\hat{z})$. Let $\hat{p}_0 \in (0,q_{c^*}(\hat{z}))$ be small. Define \[ G(u,p)=\left\{ \begin{array}{ll} d +[f(u-p)-f(u)]/p ,& p>0 ,\\ d -f'(u), & p=0, \end{array} \right. \] for $0\leqslant p<u$. Then $G(u,p)$ is continuous for $0 \leqslant p \leqslant \hat{p}_0$, with $G(u^*,p)>0$ and $G(u^*,0)= d -f'(u^*)>0$; thus there exists $0<\gamma\ll d $ such that $G(u^*,p) \geqslant 2\gamma$ for $0\leqslant p\leqslant \hat{p}_0 $.
By continuity, there exists $\rho>0$ small such that $G(u,p) \geqslant \gamma $ for $u^*-\rho \leqslant u\leqslant u^*$, $0\leqslant p\leqslant \hat{p}_0$. Furthermore, as $f(u^*)= d u^*$, there is a constant $b>0$ such that \begin{equation}\label{fub1} f(v)- d v\leqslant b(u^*-v)\ \ \mbox{ for }\ v\in[u^*-\rho, u^*]. \end{equation} Inspired by \cite{FM}, let us construct the following function: \[ \underline{u}(t,x):= \max\{0,\ q_{c^*}(x+c^*t+\xi(t))+q_{c^*}(c^*t-x+\xi(t))-u^*-p(t)\},\ \ t>0, \] and let $\underline{g}(t)$ and $\underline{h}(t)$ denote the zero points of $\underline{u}(t,\cdot)$ for $t>0$, that is, \[ \underline{u}(t,\underline{g}(t))=\underline{u}(t,\underline{h}(t))=0. \] In the following, we will show that $(\underline{u},\underline{g}, \underline{h})$ is a subsolution of problem \eqref{p}. We only prove the case where $x\geqslant 0$, since the other case is analogous. For any function $J$ depending on $t$, we write $J_{\tau}(t):=J(t-\tau)$ if no confusion arises. For simplicity of notation, we will write \[ \zeta^-(t):=-x+c^*t+\xi(t), \ \zeta^+(t):=x+c^*t+\xi(t),\ \ \zeta^-_\tau:=\zeta^-(t-\tau), \ \zeta^+_\tau:=\zeta^+(t-\tau). \] Firstly, a direct calculation shows that for $(t,x)\in(\tau,\infty)\times[0, \underline{h}(t)]$, \begin{align*} \mathcal{N}[\underline{u}]:&=\underline{u}_t-\underline{u}_{xx}+ d \underline{u}-f(\underline{u}(t-\tau,x))\\ &=\xi'[q'_{c^*}(\zeta^-)+q'_{c^*}(\zeta^+)]+f(q_{c^*}(\zeta^-_\tau))+f(q_{c^*}(\zeta^+_\tau))\\ &\ \ \ -f(q_{c^*}(\zeta^-_\tau)+q_{c^*}(\zeta^+_\tau)-u^*-p_{\tau})- d (u^*+p) -p'. \end{align*} Assume that $\xi'(t) \leqslant 0$, and choose $\xi$ large such that $u^*-\frac{\rho}{2}\leqslant q_{c^*}(\zeta^+_\tau)\leqslant u^*$ in $(\tau,\infty) \times[0, \underline{h}(t)]$.
The monotonicity of $q_{c^*}$ and its exponential rate of convergence to $u^*$ at $\infty$ imply that if we choose $\xi$ sufficiently large, then there exist positive constants $\nu$, $K_0$ and $K$ such that \[ u^*-q_{c^*}(\zeta^+_\tau)\leqslant K_0e^{-\nu \zeta^+_\tau}\leqslant Ke^{-\nu(\xi(t)+c^*t)}. \] Set $p(t)=p_0e^{-\beta t}$ with $p_0:=\frac{1}{2}\min\{\hat{p}_0,\ \frac{\rho}{2}\}$ and $\beta:=\frac{1}{2}\min\{\nu c^*,\ \alpha_0\}$, where $\alpha_0$ is the unique positive root of \[ d (e^{\tau y}-1)-\gamma e^{\tau y}+y=0. \] Thus, when $q_{c^*}(\zeta^-_\tau)\in[u^*-\rho, u^*]$ and $(t,x)\in(\tau,\infty)\times[0, \underline{h}(t)]$, since $q_{c^*}'(z) \geqslant 0$ we have \begin{align*} \mathcal{N}[\underline{u}]&=\xi'[q'_{c^*}(\zeta^-)+q'_{c^*}(\zeta^+)]+f(q_{c^*}(\zeta^-_\tau))+f(q_{c^*}(\zeta^+_\tau))\\ &\ \ \ -f(q_{c^*}(\zeta^-_\tau)+q_{c^*}(\zeta^+_\tau)-u^*-p_{\tau})- d (u^*+p) -p'\\ &\leqslant \gamma [q_{c^*}(\zeta^+_\tau)-u^*-p_{\tau}]+b[u^*-q_{c^*}(\zeta^+_\tau)]+ d (p_{\tau}-p)-p'\\ &\leqslant b[u^*-q_{c^*}(\zeta^+_\tau)]+ d (p_{\tau}-p)-p'-\gamma p_{\tau}\\ &\leqslant Kbe^{-\nu(\xi(t)+c^*t)}+p_0e^{-\beta t}\big[ d \big(e^{\beta \tau}-1\big)-\gamma e^{\beta \tau}+\beta\big]\leqslant 0, \end{align*} provided that $\xi$ is sufficiently large.
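The constant $\alpha_0$ used above is well defined: for $0<\gamma<d$ the function $F(y)=d(e^{\tau y}-1)-\gamma e^{\tau y}+y$ satisfies $F(0)=-\gamma<0$ and $F'(y)=\tau(d-\gamma)e^{\tau y}+1>0$, so $F$ has a unique positive zero. A short bisection sketch (illustration only; the values $d=1$, $\gamma=1/2$, $\tau=1$ are assumed sample data):

```python
import math

# Locate the unique positive root alpha_0 of
#   F(y) = d (e^{tau y} - 1) - gamma e^{tau y} + y,
# which exists since F(0) = -gamma < 0 and F is strictly increasing
# when 0 < gamma < d.  Sample (assumed) parameters:
d, gamma, tau = 1.0, 0.5, 1.0

def F(y):
    return d * (math.exp(tau * y) - 1.0) - gamma * math.exp(tau * y) + y

lo, hi = 0.0, 1.0
while F(hi) < 0:              # grow the bracket until F changes sign
    hi *= 2.0
for _ in range(60):           # plain bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
alpha0 = 0.5 * (lo + hi)
print(alpha0 > 0 and abs(F(alpha0)) < 1e-12)
```

With these sample values the root is approximately $0.315$; any standard root finder would do equally well, bisection is used only because the monotonicity of $F$ makes it transparent.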
In the case $ q_{c^*}(\zeta^-_\tau)\in[0,u^*-\rho]$, for $(t,x)\in(\tau,\infty)\times[0, \underline{h}(t)]$ and sufficiently large $\xi$, there are two positive constants $d_1<1$ and $d_2$ such that $q'_{c^*}(\zeta^-)+q'_{c^*}(\zeta^+)\geqslant d_1$, and \[ f\big(q_{c^*}(\zeta^-_\tau)\big)-f\big(q_{c^*}(\zeta^-_\tau)+q_{c^*}(\zeta^+_\tau)-u^*-p_{\tau}\big)+ d [q_{c^*}(\zeta^+_\tau)-u^*-p_{\tau}]\leqslant d_2[u^*+p_{\tau}-q_{c^*}(\zeta^+_\tau)], \] thus we have \begin{align*} \mathcal{N}[\underline{u}]&=\xi'[q'_{c^*}(\zeta^-)+q'_{c^*}(\zeta^+)]+f(q_{c^*}(\zeta^-_\tau))+f(q_{c^*}(\zeta^+_\tau))\\ &\ \ \ -f(q_{c^*}(\zeta^-_\tau)+q_{c^*}(\zeta^+_\tau)-u^*-p_{\tau})- d (u^*+p) -p'\\ &\leqslant d_1\xi'+d_2 [u^*+p_{\tau}-q_{c^*}(\zeta^+_\tau)]+b[u^*-q_{c^*}(\zeta^+_\tau)]+ d (p_{\tau}-p)-p'\\ &\leqslant d_1\xi'+(d_2+b)Ke^{-\nu(\xi+c^*t)} +p_0e^{-\beta t}\big[d_2e^{\beta \tau}+ d \big(e^{\beta \tau}-1\big)+\beta\big]\\ &\leqslant d_1\xi'+p_0e^{-\beta t}\big[d_2e^{\beta \tau}+ d (e^{\beta \tau}-1)+2\beta\big]. \end{align*} Now let us choose $\xi$ to satisfy \[ d_1\xi'+\kappa p_0e^{-\beta t}=0 \] with $\xi(0)=\xi_0$ sufficiently large, where $\kappa:=d_2e^{\beta \tau}+ d \big(e^{\beta \tau}-1\big)+2\beta$; then $\xi'(t)\leqslant 0$. Hence from the above we obtain that $\mathcal{N}[\underline{u}]\leqslant 0$ in this case as well. Next, let us check the free boundary condition. When $x=\underline{h}(t)$, we set $\zeta_1(t)=-\underline{h}(t)+c^*t+\xi(t)$ and $\zeta_2(t)=\underline{h}(t)+c^*t+\xi(t)$, then \begin{equation}\label{qq1} q_{c^*}(\zeta_1(t))+q_{c^*}(\zeta_2(t))=u^*+p(t). \end{equation} We differentiate \eqref{qq1} with respect to $t$ to obtain \begin{equation}\label{hf1} \big[q_{c^*}'(\zeta_2)-q_{c^*}'(\zeta_1)\big]\big(\underline{h}'(t)-c^*\big)= p'-2c^*q_{c^*}'(\zeta_2)-\big[q_{c^*}'(\zeta_2)+q_{c^*}'(\zeta_1)\big]\xi'.
\end{equation} By shrinking $p_0$ and enlarging $\xi_0$ if necessary, we can see that $\zeta_2(t)\gg1$, and $q_{c^*}(\zeta_2(t))\approx u^*$. This, together with \eqref{qq1}, yields that $q_{c^*}(\zeta_1(t))\approx p(t)$. Since $q''_{c^*}(z)>0>q''_{c^*}(y)$ for $0\leqslant z\ll 1$ and $y\gg 1$, and $q'_{c^*}(z)\searrow 0$ as $z\to\infty$, we have \begin{equation}\label{q1q21} 0<q_{c^*}'(\zeta_2)< q_{c^*}'(0)< q_{c^*}'(\zeta_1). \end{equation} Thanks to the choice of $\xi(t)$, we can compute that \begin{equation}\label{q1q22} p'-2c^*q_{c^*}'(\zeta_2)-[q_{c^*}'(\zeta_2)+q_{c^*}'(\zeta_1)]\xi'\geqslant \big(\frac{\kappa q_{c^*}'(0)}{d_1}-\beta\big) p_0e^{-\beta t}-2c^*K_1e^{-\nu(\xi(t)+c^*t)}\geqslant 0, \end{equation} where $K_1$ is a positive constant, $\kappa:=d_2e^{\beta \tau}+ d \big(e^{\beta \tau}-1\big)+2\beta>2\beta$, and we have used that, by shrinking $d_1$ if necessary, $\kappa q_{c^*}'(0)>\beta d_1$. It follows from \eqref{hf1}, \eqref{q1q21}, \eqref{q1q22} and the monotonicity of $q_{c^*}'(z)$ in $z$ that \[ \underline{h}'(t)\leqslant c^*=\mu q_{c^*}'(0)\leqslant \mu[q_{c^*}'(\zeta_1)-q_{c^*}'(\zeta_2)]=-\mu \underline{u}_x(t,\underline{h}(t)). \] Using \eqref{qq1} again, it is easy to see that $\zeta_1(t)$ is decreasing for $t\geqslant T_1$, thus for all $t\geqslant T_1$, \begin{equation}\label{huh} \underline{h}(t)-c^*t\geqslant \tilde{C}_0:=\underline{h}(T_1)-c^*T_1+\xi(\infty)-\xi(0). \end{equation} Since $(u,g,h)$ is a spreading solution of \eqref{p}, there exists $T_2>0$ such that \begin{align*} & u(T_1+T_2+\tilde{s},x)\geqslant \underline{u}(T_1+\tau,x)\ \mbox{ for }\ \tilde{s}\in[0,\tau],\ x\in[\underline{g}(\tau),\underline{h}(\tau)],\\ & g(T_1+T_2)\leqslant \underline{g}(T_1+\tau)\ \ \mbox{and }\ h(T_1+T_2)\geqslant \underline{h}(T_1+\tau).
\end{align*} Consequently, $(\underline{u},\underline{g}, \underline{h})$ is a subsolution of problem \eqref{p}, so we can apply the comparison principle to conclude that $u(t+T_1+T_2,x)\geqslant \underline{u}(t+T_1,x)$, $h(t+T_1+T_2)\geqslant \underline{h}(t+T_1)$ for $t>0$, $x\in[0,\underline{h}(t)]$. This, together with \eqref{huh}, implies that \[ h(t)-c^*t\geqslant -C_1\ \ \ \mbox{ for } t>0, \] with $C_1:=|\tilde{C}_0|+h(T_1+T_2+\tau)+c^*(T_1+T_2+\tau)$. Similarly, by enlarging $C_1$ if necessary, we can have $g(t)+c^*t\leqslant C_1$ for $t>0$. Thus result (i) holds for large $T$. (ii)\ From the proof of (i), it is easy to see that $u(t+T_2,x)\geqslant \underline{u}(t,x)$ for $t>T_1$. The monotonicity of $q_{c^*}$ and its exponential rate of convergence to $u^*$ at $\infty$ can be used again to conclude that for any $c\in(0,c^*)$ there exist constants $\nu$, $K>0$ such that for any $x\in[0,ct]$ and $t>0$, \begin{align*} & u^*-q_{c^*}(x+c^*t+\xi(t))\leqslant u^*-q_{c^*}(c^*t+\xi(t))\leqslant K e^{-\nu(c^*t+\xi(t))},\\ & q_{c^*}(-x+c^*t+\xi(t))\geqslant q_{c^*}((c^*-c)t+\xi(t))\geqslant u^*-K e^{-\nu[(c^*-c)t+\xi(t)]}. \end{align*} Based on the above estimates, we can find $T_3>T_1+T_2$ large such that for $t>T_3$ and $x\in[0,ct]$, \begin{align*} u(t,x)&\geqslant q_{c^*}(x+c^*(t-T_2)+\xi(t-T_2))+q_{c^*}(-x+c^*(t-T_2)+\xi(t-T_2))-u^*-p_0e^{-\beta (t-T_2)}\\ & \geqslant u^* -2K e^{-\nu\big[(c^*-c)(t-T_2)+\xi(t-T_2)\big]}-p_0e^{-\beta (t-T_2)} \geqslant u^*-M u^*e^{-\beta^* t}, \end{align*} where $M>0$ is sufficiently large and $\beta^*:=\frac{1}{2}\min\big\{\nu(c^*-c),\ \beta,\ d -f'(u^*)\big\}$. The case where $x\in[-ct,0]$ can be proved by a similar argument. The proof of (ii) is now complete. (iii)\ Thanks to the choice of the initial data, we know that for any given $\beta^*>0$ and $M>0$, \[ u(t,x) \leqslant u^*+ Mu^* e^{-\beta^* t}\ \ \ \mbox{ for }\ (t,x)\in[0,\infty)\times[g(t), h(t)]. \] This completes the proof.
\end{proof} Next we prove the boundedness of $h(t)-c^*t$ and show that $u(t,\cdot) \approx u^*$ in the domain $[0, h(t)-Z]$, where $Z>0$ is a large number. \begin{prop}\label{pro:sigma01} Assume that spreading happens for the solution $(u,g,h)$. Then \begin{itemize} \item[(i)] there exists $C>0$ such that \begin{equation}\label{hghg1} |h(t)-c^*t |\leqslant C \ \ \mbox{ for all } t\geqslant0 ; \end{equation} \item[(ii)] for any small $\varepsilon>0$, there exist $Z_\varepsilon>0$ and $T_\varepsilon >0$ such that \begin{equation}\label{ughu1} \|u(t,\cdot ) - u^* \|_{L^\infty ([0, h(t) -Z_\varepsilon])} \leqslant u^*\varepsilon \ \ \mbox{ for } t> T_\varepsilon. \end{equation} \end{itemize} \end{prop} \begin{proof} In order to prove the conclusions in this proposition, inspired by \cite{DMZ}, we will use the semi-wave $q_{c^*}$ to construct suitable sub- and supersolutions. Compared with \cite{DMZ}, our problem deals with the case $\tau>0$. Due to the time delay, some space-translations of the semi-wave $q_{c^*}$ appear, which makes the problem more difficult to handle. To overcome this difficulty, we mainly use the monotonicity of $q_{c^*}$ and its exponential convergence. This idea will also be used in Lemma \ref{limn21}. For clarity we divide the proof into several steps. $Step\ 1$. Upper bounds for $h(t)$ and $u(t,x)$. Fix $c\in(0,c^*)$. It follows from Lemma \ref{lem:u-to-1} that there exist $\beta^*\in(0, d -f'(u^*))$, $M >0$, and $T> 0$ such that for $t \geqslant T$, (i), (ii) and (iii) in Lemma \ref{lem:u-to-1} hold. Thanks to {\bf (H)}, by shrinking $\beta^*$ if necessary, we can find $\rho>0$ small such that \begin{equation}\label{vu1} d -f'(v)e^{\beta^* \tau}\geqslant \beta^*\ \ \ \mbox{ for } v\in[u^*-\rho,u^*+\rho]. \end{equation} For any $T_*>T+\tau$ large satisfying $Mu^* e^{-\beta^* (T_*-\tau)}<\frac{\rho}{2}$, there is $M' > M$ such that $M'u^*e^{-\beta^* (T_*-\tau)}< \rho$.
Since $q_{c^*}(z)\to u^*$ as $z\to\infty$, we can find $Z_0 >0$ such that \begin{equation}\label{U1a1} \big(1+M'e^{-\beta^* (T_*+\tau) }\big)q_{c^*}(Z_0 )\geqslant u^*. \end{equation} Now we construct a supersolution $(\bar{u} ,g, \bar{h})$ to \eqref{p} as follows: \begin{align*} & \bar{h} (t): =c^*(t - T_*)+ h (T_*+\tau )+ K M'\big(e^{-\beta^* T_* }-e^{-\beta^* t}\big)+Z_0\ \ \ \mbox{ for }\ t\geqslant T_* ,\\ & \bar{u}(t,x):=\min\big\{\big(1+M'e^{-\beta^* t}\big)q_{c^*}\big(\bar{h} (t)-x\big),\ u^*\big\}\ \ \ \mbox{ for }\ t\geqslant T_* ,\ x\leqslant \bar{h} (t), \end{align*} where $K$ is a positive constant to be determined below. Clearly, for all $t\geqslant T_*$, $\bar{u} (t, g(t))>0= u(t, g(t))$, $\bar{u}\big(t, \bar{h} (t)\big)=0$, and \begin{eqnarray*} -\mu \bar{u} _x(t,\bar{h}(t))& = & \mu \big(1+M'e^{-\beta^* t}\big)q_{c^*}'(0)=\big(1+M'e^{-\beta^* t}\big)c^*, \\ & < & c^*+M' K \beta^* e^{-\beta^* t} = \bar{h}'(t), \end{eqnarray*} if we choose $K$ with $K\beta^* > c^*$. By the definition of $\bar{h}$ we have $h (T_*+s )<\bar{h}(T_* +s)$ for $s\in[0,\tau]$. It then follows from \eqref{U1a1} that for $(s,x)\in[0,\tau]\times[ g (T_*+s ),h (T_* +s)]$, \[ \big(1+M'e^{-\beta^* (T_*+s) }\big)q_{c^*}\big(\bar{h} (T_*+s)-x\big) \geqslant\big(1+M'e^{-\beta^* (T_*+\tau) }\big)q_{c^*}(Z_0)\geqslant u^*, \] which yields that $\bar{u}(T_*+s,x)=u^*\geqslant u(T_*+s,x)$ for $(s,x)\in[0,\tau]\times[g(T_*+s),h(T_*+s)]$. We now show that \begin{equation}\label{u+ upper} \mathcal{N} [\bar{u}] := \bar{u}_t - \bar{u}_{xx} + d \bar{u}-f(\bar{u}(t-\tau,x)) \geqslant 0,\quad x\in [g(t), \bar{h}(t)],\ t> T_*+\tau . 
\end{equation} Thanks to the definition of $\bar{u}(t,x)$ and the monotonicity of $q_{c^*}(z)$ in $z$, we can find a decreasing function $\eta(t)<\bar{h}(t)$ for $t>T_*$ such that \[ \big(1+M'e^{-\beta^* t}\big)q_{c^*}\big(\bar{h}(t)-x\big)\left\{\begin{array}{ll} > u^*, & x<\eta(t),\\ = u^*, & x=\eta(t),\\ < u^*, & x\in\big(\eta(t),\bar{h}(t)\big], \end{array} \right. \] which implies that \[ \bar{u}(t,x)=u^*\ \mbox{ for } x \leqslant \eta(t), \ \mbox{ and }\ \bar{u}(t,x)=\big(1+M'e^{-\beta^* t}\big)q_{c^*}\big(\bar{h}(t)-x\big)\ \mbox{ for } x\in\big[\eta(t),\bar{h}(t)\big]. \] Since $\mathcal{N}[u^*]=0$, in what follows we only consider the case $ x\in\big[\eta(t),\bar{h}(t)\big]$. Set $q_\tau:=q_{c^*}\big(\bar{h}_\tau-x\big)$ for convenience. A direct calculation shows that, for $t>T_*+\tau$, \begin{align*} \mathcal{N} [\bar{u}] :& = \bar{u}_t - \bar{u}_{xx} + d \bar{u}-f(\bar{u}(t-\tau,x))\\ & = -\beta^* M'e^{-\beta^* t} q_{c^*}+\big(1+M'e^{-\beta^* t}\big) \{K\beta^* M'e^{-\beta^* t} q_{c^*}'+f(q_{\tau})\} -f\big((1+M'e^{-\beta^* (t-\tau)})q_{\tau}\big)\\ & = M'e^{-\beta^* t}\Big\{f(q_{\tau})+K \beta^*\big(1+M'e^{-\beta^* t}\big)q_{c^*}' -\beta^*q_{c^*} \Big\} + f(q_\tau)- f\big((1+M'e^{-\beta^* (t-\tau)})q_\tau\big)\\ & \geqslant M'e^{-\beta^* t}\Big\{K \beta^* \big(1+M'e^{-\beta^* t}\big)q_{c^*}'-\big[\big(f'\big((1+ \theta M'e^{-\beta^* (t-\tau)})q_\tau\big)e^{\beta^* \tau}- d \big)q_\tau-\beta^*q_{c^*}\big]\Big\}, \end{align*} for some $\theta \in (0,1)$.
Since \begin{equation}\label{qcto1} q_{c^*}(z)\to u^*\ \mbox{ and } \frac{(q_{c^*}(z)-u^*)'}{q_{c^*}(z)-u^*}\to k^*\ \ \mbox{ as } z\to \infty \end{equation} where $k^*:=c^*-\sqrt{(c^*)^2+4( d -f'(u^*))}<0$, there are $z_0>0$ and $k_1>0$ such that \begin{equation}\label{qqq1} q_{c^*}''(z)<0,\ \ \ q_{c^*}(z)\geqslant u^*-\rho\ \ \mbox{ and }\ \ q_{c^*}'(z-2c^*\tau) \leqslant k_1 q_{c^*}'(z) \ \mbox{ for } \ z>z_0, \end{equation} Moreover, we can compute that \begin{align*} \triangle \bar{h}(t) := \bar{h}(t)-\bar{h}_\tau(t)= c^*\tau+KM'e^{-\beta^*t}(e^{\beta^*\tau}-1). \end{align*} For any given $K>0$, by enlarging $T_*$ if necessary, we have that \begin{equation}\label{deh} \triangle \bar{h}(t)\in[c^*\tau,2c^*\tau]\ \ \mbox{ for }\ t \geqslant T_*. \end{equation} When $\bar{h}_\tau-x>z_0$ and $t> T_*+\tau$, it then follows that \begin{align*} \mathcal{B} :& = K \beta^* \big(1+M'e^{-\beta^* t}\big)q_{c^*}'-\big[\big(f'\big((1+ \theta M'e^{-\beta^* (t-\tau)})q_\tau\big)e^{\beta^* \tau}- d \big)q_\tau-\beta^*q_{c^*}\big]\\ & \geqslant \big[ d -f'\big(\big(1+ \theta M'e^{-\beta^* (t-\tau)}\big)q_\tau\big)e^{\beta^* \tau}-\beta^*\big]q_\tau+ K\beta^* q_{c^*}'+\beta^* (q_\tau-q_{c^*})\\ & \geqslant K\beta^* q_{c^*}'(\bar{h}(t)-x)-\beta^* q_{c^*}'(\bar{h}(t)-x-\tilde{\theta}\triangle\bar{h}(t))\triangle\bar{h}(t)\ \ \ (\mbox{with } \tilde{\theta}\in(0,1))\\ & \geqslant (K-2k_1c^*\tau) \beta^*q_{c^*}'(\bar{h}(t)-x)\geqslant 0 \end{align*} provided that $K$ is sufficiently large, and we have used $M'e^{-\beta^* (t-\tau)} u^*\leqslant\rho$ for $t> T_* $, $q_{c^*}'(z)>0$ for $z>0$, \eqref{vu1}, \eqref{qqq1} and \eqref{deh}. Thus $\mathcal{N} [\bar{u}]\geqslant 0$ in this case. 
When $0\leqslant \bar{h}_\tau-x\leqslant z_0$ and $t> T_*+\tau $, for sufficiently large $K$, we have $$ \mathcal{N}[\bar{u}] \geqslant M'e^{-\beta^* t}\big[K \beta^* D_1 - D_2u^* e^{\beta^* \tau}-\beta^*u^*\big]\geqslant 0, $$ where $D_1:=\min_{z\in[0,z_0+2c^*\tau]}q_{c^*}'(z)>0$, $D_2:=\max_{v\in[0,2u^*]}f'( v)$, and \eqref{deh} are used. Summarizing the above results, we see that $(\bar{u}, g, \bar{h})$ is a supersolution of \eqref{p}. Thus we can apply the comparison principle to deduce $$ h(t) \leqslant \bar{h}(t) \quad \mbox{and} \quad u(t,x)\leqslant \bar{u}(t,x) \leqslant u^*+M' u^*e^{-\beta^* t}\quad \mbox{ for } x\in [g(t), h(t)],\ t>T_*. $$ By the definition of $\bar{h}$ we see that, for $C_r := h(T_*+\tau)+Z_0 +KM'$, we have \begin{equation}\label{hbd} h(t)< c^*t +C_r\ \ \ \mbox{ for all } t\geqslant 0. \end{equation} For any $\varepsilon>0$, if we choose $T_1(\varepsilon) >T_*$ large such that $M' e^{-\beta^* T_1(\varepsilon)} < \varepsilon$, then we have \begin{equation}\label{v<1+epsilon/P} u(t,x)\leqslant \bar{u}(t,x) \leqslant u^*(1 +\varepsilon) ,\quad x\in [g(t), h(t)],\ t> T_1(\varepsilon), \end{equation} which ends the proof of Step 1. $Step\ 2$. Lower bounds for $h(t)$ and $u(t,x)$. Let $c$, $M$, $T$ and $\beta^*$ be as before. By shrinking $c$ if necessary, we can find $T^*>T+\tau$ large such that \begin{equation}\label{vu2} M u^* e^{-\beta^* (t-\tau)}\leqslant \frac{\rho}{2}\ \ \ \ \mbox{ for }\ t\geqslant T^*\ \mbox{ and }\ \ \ h(T^*)-cT^*\geqslant c^*\tau. \end{equation} Define the following functions: \begin{align*} & \underline{g}(t)=ct,\ \ \underline{h}(t)=c^*(t-T^*)+cT^*-\sigma M(e^{-\beta^*T^*}-e^{-\beta^*t}),\ \ \ t\geqslant T^*,\\ & \underline{u}(t,x)=\big(1-Me^{-\beta^* t}\big)q_{c^*}(\underline{h}(t)-x),\ \ \ t\geqslant T^*,\ \ x\in[\underline{g}(t),\underline{h}(t)], \end{align*} where $\sigma$ is a positive constant to be determined later.
We will prove that $(\underline{u},\underline{g},\underline{h})$ is a subsolution to \eqref{p} for $t>T^*$. Firstly, for $t\geqslant T^*$, \[ \underline{u}\big(t,\underline{g}(t)\big)=\underline{u}(t,ct)\leqslant u^*-M u^* e^{-\beta^* t}\leqslant u(t,ct)=u\big(t,\underline{g}(t)\big). \] Next, we check that $\underline{h}$ and $\underline{u}$ satisfy the required conditions at $x=\underline{h}(t)$. It is obvious that $\underline{u}(t,\underline{h}(t))=0$. If we choose $\sigma$ with $\sigma\beta^*\geqslant c^*$, then \begin{eqnarray*} -\mu \underline{u}_x(t,\underline{h}(t))& = & \mu\big(1-Me^{-\beta^* t}\big)q_{c^*}'(0)=c^*\big(1-Me^{-\beta^* t}\big), \\ & > & c^*-\sigma M \beta^* e^{-\beta^* t} = \underline{h}'(t). \end{eqnarray*} Then, let us check the initial conditions. From Lemma \ref{lem:u-to-1}, it is easy to see that \begin{align*} &\underline{h}(T^*+s)\leqslant cT^*+c^*\tau \leqslant h(T^*+s),\\ &\underline{u}(T^*+s,x)\leqslant u^*\big(1-Me^{-\beta^* (T^*+s)}\big)\leqslant u(T^*+s,x), \end{align*} for $s\in[0,\tau]$ and $x\in [\underline{g}(T^*+s),\underline{h}(T^*+s)]$. Finally, we will prove that $\underline{u}_t-\underline{u}_{xx}+ d \underline{u}-f(\underline{u}(t-\tau,x))\leqslant 0$ for $t\geqslant T^*+\tau$. Put $z=\underline{h}(t)-x$ and $q_\tau=q_{c^*}(\underline{h}(t-\tau)-x)$. It is easy to check that \begin{align*} \mathcal{N}[\underline{u}]:&=\underline{u}_t-\underline{u}_{xx}+ d \underline{u}-f(\underline{u}(t-\tau,x))\\ &\leqslant M e^{-\beta^* t}\Big\{\beta^*q_{c^*}-\sigma \beta^*\big(1-M e^{-\beta^* t}\big)q_{c^*}'+\big[f'\big(\big(1-\theta_1M e^{-\beta^* (t-\tau)}\big)q_\tau\big)e^{\beta^*\tau}- d \big]q_\tau\Big\} \end{align*} for some $\theta_1\in(0,1)$.
It follows from \eqref{qcto1} that there are two constants $z_1>0$, $k_2>0$ such that \begin{equation}\label{qqq12} q_{c^*}''(z)<0,\ \ \ q_{c^*}(z)\geqslant u^*-\frac{\rho}{2}\ \ \mbox{ and }\ \ q_{c^*}'(z-c^*\tau) \leqslant k_2 q_{c^*}'(z) \ \mbox{ for } \ z>z_1, \end{equation} Moreover, we can compute that \begin{align*} \triangle \underline{h}(t) : = \underline{h}(t)-\underline{h}_\tau(t) = c^*\tau-\sigma M e^{-\beta^*t}(e^{\beta^*\tau}-1). \end{align*} For any given $\sigma>0$, by enlarging $T^*$ if necessary, we have that \begin{equation}\label{deh2} \triangle \underline{h}(t)\in[0,c^*\tau]\ \ \mbox{ for }\ t \geqslant T^*. \end{equation} When $\underline{h}_\tau-x>z_1$ and $t\geqslant T^*+\tau$, it then follows that \begin{align*} \mathcal{C} :& = \beta^*q_{c^*}-\sigma \beta^*\big(1-M e^{-\beta^* t}\big)q_{c^*}'+\big[f'\big(\big(1-\theta_1M e^{-\beta^* (t-\tau)}\big)q_\tau\big)e^{\beta^*\tau}- d \big]q_\tau\\ & \leqslant \big[f'\big(\big(1-\theta_1M e^{-\beta^* (t-\tau)}\big)q_\tau\big)e^{\beta^*\tau}- d +\beta^*\big]q_\tau -\sigma\beta^* q_{c^*}'+\beta^*(q_{c^*}-q_{\tau})\\ & \leqslant -\sigma\beta^* q_{c^*}'(\underline{h}(t)-x)+\beta^* q_{c^*}'(\underline{h}(t)-x-\tilde{\theta}_1\triangle \underline{h}(t))\triangle \underline{h}(t) \ \ \ (\mbox{with } \tilde{\theta}_1\in(0,1))\\ & \leqslant (k_2c^*\tau-\sigma) \beta^*q_{c^*}'(\underline{h}(t)-x)\leqslant 0 \end{align*} provided that $\sigma$ is sufficiently large, and we have used $\big(1-\theta_1 M e^{-\beta^* (t-\tau)}\big)q_\tau\in[u^*-\rho,u^*]$ and \eqref{vu2} for $t\geqslant T^*$, and \eqref{vu1}, \eqref{qqq12}, \eqref{deh2}. Thus $\mathcal{N} [\underline{u}]\leqslant 0$ in this case. 
When $0\leqslant \underline{h}_\tau-x\leqslant z_1$ and $t\geqslant T^*+\tau $, for sufficiently large $\sigma$, we have $$ \mathcal{N} [\underline{u}] \leqslant M e^{-\beta^*t} \Big[\beta^*u^*-\sigma \beta^* \Big(1-\frac{\rho}{2u^*} e^{-\beta^*\tau}\Big)D'_1 + D'_2u^* e^{\beta^* \tau}\Big]\leqslant 0, $$ where $D'_1:=\min_{z\in[0,z_1+c^*\tau]}q_{c^*}'(z)>0$, $D'_2:=\max_{v\in[0,2u^*]}f'( v)$ and \eqref{deh2} are used. Consequently, $(\underline{u},\underline{g}, \underline{h})$ is a subsolution to \eqref{p}, then the comparison principle implies that \[ \underline{h}(t)\leqslant h(t),\ \ \underline{u}(t,x)\leqslant u(t,x)\ \ \mbox{ for }\ t\geqslant T^*,\ x\in[\underline{g}(t), \underline{h}(t)], \] which yields that \begin{equation}\label{hbd2} h(t)\geqslant \underline{h}(t) - \max_{t\in[0,T^*]}|h(t)-\underline{h}(t)| \geqslant c^*t -C_l \ \ \mbox{ for all } t\geqslant 0, \end{equation} where $C_l = \max_{t\in[0,T^*]}|h(t)-\underline{h}(t)|+c^*T^* +\sigma M$. Combining with \eqref{hbd} we obtain \eqref{hghg1}. On the other hand, for any $\varepsilon>0$, since $q_{c^*}(\infty) =u^*$, there exists $Z_1(\varepsilon)>0$ such that $$ q_{c^*}(z)> u^*\Big(1- \frac{\varepsilon}{2}\Big)\ \ \mbox{ for } z\geqslant Z_1(\varepsilon). $$ It follows from \eqref{hbd2} and \eqref{hbd} that $$ \underline{h}(t) -x \geqslant c^*t -C_l -x \geqslant h(t) - C_r -C_l -x \geqslant Z_1(\varepsilon)\ \mbox{ for }\ t>T^*, $$ which yields that for $(t,x)\in \Phi_1 := \{ (t,x) : ct\leqslant x\leqslant h(t) -C_r -C_l -Z_1(\varepsilon),\ t>T^*\}$, $$ u(t,x) \geqslant \underline{u} (t,x) \geqslant \big(1-M e^{-\beta^* t} \big) q_{c^*}\big(Z_1(\varepsilon)\big) \geqslant u^*\big(1-M e^{-\beta^* t} \big) \Big( 1- \frac{\varepsilon}{2} \Big). 
$$ Moreover, if we choose $T_2(\varepsilon) >T^*$ such that $2 M e^{-\beta^* T_2(\varepsilon)} <\varepsilon$, then \begin{equation}\label{v>1-epsilonP} u(t,x)\geqslant u^*\Big( 1- \frac{\varepsilon}{2}\Big)^2 > u^*(1- \varepsilon)\ \ \mbox{ for } (t,x)\in \Phi_1\ \mbox{ and }\ t>T_2(\varepsilon), \end{equation} which completes the proof of Step 2. $Step\ 3$. Completion of the proof of \eqref{ughu1}. Denote $T_\varepsilon :=T_1(\varepsilon)+T_2(\varepsilon)$ and $Z_\varepsilon := C_r + C_l +Z_1(\varepsilon)$; then by \eqref{v<1+epsilon/P} and \eqref{v>1-epsilonP} we have $$ |u(t,x)-u^*| \leqslant u^*\varepsilon\ \ \mbox{ for } 0\leqslant x\leqslant h(t) -Z_\varepsilon,\ t>T_\varepsilon. $$ This yields the estimate in \eqref{ughu1}, which completes the proof of this proposition. \end{proof} Using a similar argument as above, we can obtain the following result. \begin{prop}\label{pro:sigma12} Assume that spreading happens for the solution $(u,g,h)$. Then \begin{itemize} \item[(i)] there exists $C'>0$ such that \begin{equation}\label{hhgg1} |g(t)+c^*t |\leqslant C' \ \ \mbox{for all } t\geqslant0 ; \end{equation} \item[(ii)] for any small $\varepsilon>0$, there exist $Z'_\varepsilon>0$ and $T'_\varepsilon >0$ such that \begin{equation}\label{uugghh1} \|u(t,\cdot ) - u^* \|_{L^\infty ([g(t)+Z'_\varepsilon,0])} \leqslant u^*\varepsilon \ \ \mbox{ for } t> T'_\varepsilon. \end{equation} \end{itemize} \end{prop} \subsection{Asymptotic profiles of the spreading solutions}\label{sub52} This subsection is devoted to the proof of Theorem \ref{thm:profile of spreading sol}, which we will establish through a series of lemmas. Firstly, it follows from Proposition \ref{pro:sigma01} that there exists a positive constant $C$ such that \[ -C\leqslant h(t)-c^*t\leqslant C\ \ \mbox{ for } t\geqslant 0.
\] Let us use the moving coordinate $y :=x-c^*t+2C$ and set $$ \begin{array}{l} h_1(t):= h(t)-c^*t+2C, \quad g_1(t) :=g(t)-c^*t+2C\ \ \mbox{ for } t\geqslant 0,\\ \mbox{and } u_1(t,y):= u (t, y+c^*t-2C)\ \ \mbox{ for } y\in[g_1(t),h_1(t)],\ t\geqslant 0. \end{array} $$ Then $({u_1},{g_1},{h_1})$ solves \begin{equation}\label{pWH} \left\{ \begin{array}{ll} (u_1)_t =(u_1)_{yy}+c^*(u_1)_y- d u_1+ f(u_1(t-\tau,y+c^*\tau)), & {g_1}(t)<y<{h_1}(t),\ t>0,\\ {u_1}(t, y)= 0,\ {g'_1}(t)=-\mu (u_1)_y(t,y)-c^*, & y={g_1}(t),\ t>0,\\ {u_1}(t, y)= 0,\ {h'_1}(t)=-\mu (u_1)_y(t,y)-c^*, & y={h_1}(t),\ t>0. \end{array} \right. \end{equation} Let $t_n\to\infty$ be an arbitrary sequence satisfying $t_n>\tau$ for $n\geqslant1$. Define \[ v_n(t,y)=u_1(t+t_n,y),\ \ H_n(t)=h_1(t+t_n), \ \ k_n(t)=g_1(t+t_n). \] \begin{lem}\label{limn1} Up to a subsequence, \begin{equation}\label{vhgt1} H_n(t)\to H\ \ \mbox{in} \ C^{1+\frac{\nu}{2}}_{loc}(\mathbb{R})\ \ \ \mbox{and} \ \ \ \|v_n-V\|_{C^{\frac{1+\nu}{2},1+ \nu}_{loc}(\Omega_n)}\to 0, \end{equation} where $\nu\in(0,1)$, $\Omega_n=\{(t,y)\in\Omega: \ y\leqslant H_n(t)\}$, $\Omega=\{(t,y): \ -\infty<y\leqslant H(t), t\in\mathbb{R}\}$, and $(V(t,y),H(t))$ satisfies \begin{equation}\label{VGHQ} \left\{ \begin{array}{ll} V_t =V_{yy}+c^*V_{y}- d V+ f(V(t-\tau,y+c^*\tau)), & (t,y)\in\Omega,\\ V (t, H(t))= 0,\ H'(t)=-\mu V_{y} (t,H(t))-c^*, & t\in\mathbb{R}. \end{array} \right. \end{equation} \end{lem} \begin{proof} It follows from the proof of Lemma \ref{lem:global} that there is $C_0>0$ such that $0<h'(t)\leqslant C_0$ for all $t>0$. One can then deduce that \[ -c^*<H_n'(t)\leqslant C_0 \ \ \mbox{ for } t+t_n\ \mbox{ large and every } n\geqslant1.
\] Define \[ z=\frac{y}{H_n(t)},\ \ \ w_n(t,z)=v_n(t,y), \] and direct computations yield that \[(w_n)_t =\frac{1}{H^2_n(t)}(w_n)_{zz}+\frac{c^*+zH'_n(t)}{H_n(t)}(w_n)_z- d w_n+ f\Big(w_n\Big(t-\tau,\frac{H_n(t)z+c^*\tau}{H_n(t-\tau)}\Big)\Big)\] for $\frac{k_n(t)}{H_n(t)} <z<1$, $t>\tau-t_n$, and \[ w_n(t,1)=0,\ \ H_n'(t)=-\mu \frac{(w_n)_z(t,1)}{H_n(t)}-c^*,\ \ t>\tau-t_n. \] Since $w_n\leqslant u^*$, the term $f\Big(w_n\Big(t-\tau,\frac{H_n(t)z+c^*\tau}{H_n(t-\tau)}\Big)\Big)$ is bounded. For any given $Z>0$ and $T_0\in\mathbb{R}$, using the parabolic interior-boundary $L^p$ estimates and the Sobolev embedding theorem (see \cite{DMZ2, Fr}), for any $\nu'\in(0,1)$ we obtain \[ \|w_n\|_{C^{\frac{1+\nu'}{2},1+ \nu'}([T_0,\infty)\times[-Z,1])}\leqslant C_Z\ \ \mbox{ for all large } n, \] where $C_Z$ is a positive constant depending on $Z$ and $\nu'$ but independent of $n$ and $T_0$. Thanks to this, we have \[ \|H_n\|_{C^{1+\frac{\nu'}{2}}([T_0,\infty))}\leqslant C_1\ \ \mbox{ for all large } n, \] where $C_1$ is a positive constant independent of $n$ and $T_0$. Hence by passing to a subsequence we may assume that, as $n\to\infty$, \[ w_n\to W\ \ \mbox{ in } C_{loc}^{\frac{1+\nu}{2},1+ \nu}(\mathbb{R}\times(-\infty,1]),\ \ \ H_n\to H\ \ \mbox{ in } C_{loc}^{1+\frac{\nu}{2}}(\mathbb{R}), \] where $\nu\in(0,\nu')$. Based on the above results, we see that $(W,H)$ satisfies \[ \left\{ \begin{array}{ll} W_t =\frac{W_{zz}}{H^2(t)}+\frac{c^*+zH'(t)}{H(t)}W_{z}- d W+ f\Big(W\Big(t-\tau,\frac{H(t)z+c^*\tau}{H(t-\tau)}\Big)\Big), & (t,z)\in\mathbb{R}\times(-\infty,1],\\ W (t, 1)= 0,\ \ \ \ H'(t)=-\mu \frac{W_{z} (t,1)}{H(t)}-c^*, & t\in\mathbb{R}. \end{array} \right. \] Define $V(t,y)=W\big(t, \frac{y}{H(t)}\big)$. It is easy to check that $(V,H)$ satisfies \eqref{VGHQ} and that \eqref{vhgt1} holds. \end{proof} Next, we show through a sequence of lemmas that $H(t)\equiv H_0$ is a constant and hence \[ V(t,y)=q_{c^*}(H_0-y).
\] Since $C\leqslant h(t)-c^*t+2C\leqslant 3C$ for all $t\geqslant0$, we have $C\leqslant H(t)\leqslant 3C$ for $t\in\mathbb{R}$. Denote \[ \phi(z):=q_{c^*}(-z)\ \ \mbox{ for } z\in\mathbb{R}. \] It follows from the proof of Proposition \ref{pro:sigma01} that for $y\in[(c-c^*)(t+t_n),H_n(t)]$ and $t+t_n$ large, \[ \big(1-Me^{-\beta^* (t+t_n)}\big)\phi(y-C)\leqslant v_n(t,y)\leqslant \min\Big\{\big(1+M'e^{-\beta^* (t+t_n)}\big)\phi(y-3C),\ u^*\Big\}. \] Letting $n\to\infty$ we have \[ \phi(y-C)\leqslant V(t,y)\leqslant \phi(y-3C) \ \ \ \mbox{for all } t\in\mathbb{R},\ y<H(t). \] Define \[ X^*:=\inf\{X:\ V(t,y)\leqslant \phi(y-X)\ \ \mbox{ for all } (t,y)\in D\} \] and \[ X_*:=\sup\{X:\ V(t,y)\geqslant \phi(y-X)\ \ \mbox{ for all } (t,y)\in D\}. \] Then \[\phi(y-X_*)\leqslant V(t,y)\leqslant \phi(y-X^*)\ \ \mbox{ for all } (t,y)\in D,\] and \[ C\leqslant X_*\leqslant\inf_{t\in\mathbb{R}} H(t)\leqslant \sup_{t\in\mathbb{R}} H(t)\leqslant X^*\leqslant 3C. \] By a similar argument as in \cite{DMZ2}, we have the following result. \begin{lem}\label{limn2} $X^*=\sup_{t\in\mathbb{R}} H(t)$, $X_*=\inf_{t\in\mathbb{R}} H(t)$, and there exist two sequences $\{s_n\}$, $\{\tilde{s}_n\}\subset \mathbb{R}$ such that \[ H(t+s_n)\to X^*,\ \ V(t+s_n,y)\to \phi(y-X^*)\ \ \mbox{ as } n\to\infty \] uniformly for $(t,y)$ in compact subsets of $\mathbb{R}\times(-\infty,X^*]$, and \[ H(t+\tilde{s}_n)\to X_*,\ \ V(t+\tilde{s}_n,y)\to \phi(y-X_*)\ \ \mbox{ as } n\to\infty \] uniformly for $(t,y)$ in compact subsets of $\mathbb{R}\times(-\infty, X_*]$. \end{lem} Based on Lemma \ref{limn2}, we have the following lemma. \begin{lem}\label{limn21} $X^*=X_*$, and hence $H(t)\equiv H_0$ is a constant, which yields $V(t,y)=\phi (y-H_0)$. \end{lem} \begin{proof} Arguing indirectly, we may assume that $X_*<X^*$. Choose $\epsilon=(X^*-X_*)/4$.
We will show next that there is $T_\epsilon>0$ such that \begin{equation}\label{HGH1} H(t)-X^*\geqslant -\epsilon\ \ \mbox{ and }\ \ H(t)-X_*\leqslant \epsilon\ \ \mbox{ for } t\geqslant T_\epsilon, \end{equation} which implies that $X^*-X_*\leqslant 2\epsilon$, a contradiction that completes the proof. Thus it suffices to show that, for the given $\epsilon=(X^*-X_*)/4$, there exist $n_1(\epsilon)$ and $n_2(\epsilon)$ such that \[ H(t)-X^*\geqslant -\epsilon\ \ (\forall t\geqslant s_{n_1}),\ \ \ H(t)-X_*\leqslant \epsilon\ \ (\forall t\geqslant \tilde{s}_{n_2}). \] It follows from $\phi(y-X_*)\leqslant V(t,y)\leqslant \phi(y-X^*)$ that there exist $C_1>0$ and $\beta_1>0$ such that \[ |u^*-V(t,y)|\leqslant C_1e^{\beta_1y}. \] By Lemma \ref{limn2}, for any $\varepsilon>0$, there exist $K>0$, $T>0$ such that for $\tilde{s}_n>T+\tau$ and $s\in[0,\tau]$, \begin{equation}\label{UHG1} \sup_{y\in(-\infty,K]}|V(\tilde{s}_n+s,y)-\phi(y-X_*)|<\varepsilon. \end{equation} Set $G(t)=H(t)+c^*t$ and $U(t,y)=V(t,y-c^*t)$; then $(U,G)$ satisfies \begin{equation}\label{UGu} \left\{ \begin{array}{ll} U_t =U_{yy}- d U+ f(U(t-\tau,y)), & t\in\mathbb{R},\ y\leqslant G(t),\\ U (t, G(t))= 0,\ \ \ \ G'(t)=-\mu U_y (t,G(t)), & t\in\mathbb{R}. \end{array} \right. \end{equation} It follows from Lemma \ref{limn2} and \eqref{UHG1} that there is $n_1=n_1(\varepsilon)$ such that for $n\geqslant n_1$, \begin{eqnarray} & H(\tilde{s}_n+s)\leqslant X_*+\varepsilon\ \ \mbox{ for } s\in[0,\tau], \label{GR1}\\ & V(\tilde{s}_n+s,y)\leqslant \phi(y-X_*-\varepsilon)+\varepsilon\ \ \mbox{ for } s\in[0,\tau],\ y\leqslant X_*.
\label{UR1} \end{eqnarray} Thanks to {\bf (H)}, for $\beta_0\in(0,\beta^*)$ small, where $\beta^*$ is given in the proof of Proposition \ref{pro:sigma01}, there is $\eta>0$ small such that \begin{equation}\label{vuf1} d -f'(v)e^{\beta_0 \tau}\geqslant \beta_0\ \ \ \mbox{ for } v\in[u^*-\eta,u^*+\eta], \end{equation} and we can find $N>1$ independent of $\varepsilon$ such that \[ \phi(y-X_*-\varepsilon)+\varepsilon \leqslant \big(1+N\varepsilon e^{-\beta_0\tau}\big)\phi(y-X_*-N\varepsilon)\ \ \mbox{ for } y\leqslant X_*+\varepsilon. \] Let us construct the following supersolution of problem \eqref{UGu}: $$ \begin{array}{l} \bar{G}(t):= X_*+N\varepsilon+c^*t+N\sigma\varepsilon\big(1-e^{-\beta_0(t-\tilde{s}_n)}\big),\\ \bar{U}(t,y):=\min\big\{\big(1+N\varepsilon e^{-\beta_0(t-\tilde{s}_n)}\big)\phi\big(y-\bar{G}(t)\big),\ u^*\big\}. \end{array} $$ Since $\lim_{y\to-\infty}\big(1+N\varepsilon e^{-\beta_0(t-\tilde{s}_n)}\big)\phi\big(y-\bar{G}(t)\big)>u^*$, there is a smooth function $\bar{K}(t)$ of $t\geqslant \tilde{s}_n$ such that $\bar{K}(t)\to-\infty$ as $t\to\infty$ and $\big(1+N\varepsilon e^{-\beta_0(t-\tilde{s}_n)}\big)\phi\big(\bar{K}(t)-\bar{G}(t)\big)=u^*$. We will check that $(\bar{U},\bar{K},\bar{G})$ is a supersolution for $t\geqslant \tilde{s}_n+\tau$ and $y\in[\bar{K}(t),\bar{G}(t)]$. We note that \[ \bar{U}(t,y)=\big(1+N\varepsilon e^{-\beta_0(t-\tilde{s}_n)}\big)\phi\big(y-\bar{G}(t)\big)\ \mbox{ when }\ y\in[\bar{K}(t),\bar{G}(t)]. \] Firstly, it follows from \eqref{GR1} that for $s\in[0,\tau]$, \[ G(\tilde{s}_n+s)\leqslant X_*+\varepsilon+c^*(\tilde{s}_n+s)\leqslant X_*+N\varepsilon+c^*(\tilde{s}_n+s)\leqslant \bar{G}(\tilde{s}_n+s).
\] In view of \eqref{UR1}, we have \begin{align*} \bar{U}(\tilde{s}_n+s,y)&=\big(1+N\varepsilon e^{-\beta_0s}\big)\phi\big(y-\bar{G}(\tilde{s}_n+s)\big)\\ &\geqslant\big(1+N\varepsilon e^{-\beta_0\tau}\big)\phi\big(y-X_*-N\varepsilon-c^*(\tilde{s}_n+s)\big)\\ &\geqslant\phi\big(y-X_*-\varepsilon-c^*(\tilde{s}_n+s)\big)+\varepsilon\\ &\geqslant V\big(\tilde{s}_n+s,y-c^*(\tilde{s}_n+s)\big)=U(\tilde{s}_n+s,y) \end{align*} for $s\in[0,\tau]$ and $y\leqslant G(\tilde{s}_n+s)$. By definition $\bar{U}(t,\bar{G}(t))=0$, and a direct computation yields \begin{eqnarray*} -\mu \bar{U} _y(t,\bar{G}(t))& = & c^*\big(1+N\varepsilon e^{-\beta_0( t-\tilde{s}_n)}\big) \\ & < & c^*+N\varepsilon \sigma \beta_0 e^{-\beta_0 ( t-\tilde{s}_n)} = \bar{G}'(t), \end{eqnarray*} if we choose $\sigma$ with $\sigma\beta_0 > c^*$. Since $U\leqslant u^*$, it then follows from the definition of $\bar{K}(t)$ that $\bar{U}(t,\bar{K}(t))=u^*\geqslant U(t,\bar{K}(t))$. Finally, let us show \begin{equation}\label{uhupper} \mathcal{N} [\bar{U}] := \bar{U}_t - \bar{U}_{yy} + d \bar{U}-f(\bar{U}(t-\tau,y)) \geqslant 0,\quad y\in [\bar{K}(t), \bar{G}(t)],\ t>\tilde{s}_n+\tau. \end{equation} Put $z:=y-\bar{G}(t)$, $\zeta(t):=N\varepsilon e^{-\beta_0( t-\tilde{s}_n)}$ and $\phi_\tau:=\phi\big(y-\bar{G}(t-\tau)\big)$. It is easy to compute that \begin{align*} \mathcal{N} [\bar{U}]&=\zeta\Big\{f(\phi_\tau)-\beta_0\phi-\sigma\beta_0(1+\zeta)\phi'-f'\big(\big(1+\theta_2\zeta e^{\beta_0\tau}\big)\phi_\tau\big)e^{\beta_0\tau}\phi_\tau\Big\}\ \ \ (\mbox{with } \theta_2\in(0,1))\\ &\geqslant\zeta\Big\{-\sigma\beta_0(1+\zeta)\phi'-\big[f'\big(\big(1+\theta_2\zeta e^{\beta_0\tau}\big)\phi_\tau\big)e^{\beta_0\tau}- d \big]\phi_\tau-\beta_0\phi\Big\}.
\end{align*} Since \[ \phi(z)\to u^*\ \mbox{ and } \frac{(\phi(z)-u^*)'}{\phi(z)-u^*}\to k^*\ \ \mbox{ as } z\to -\infty, \] where $k^*:=c^*-\sqrt{(c^*)^2+4( d -f'(u^*))}<0$, there are two constants $z_\eta<0$ and $k_0$ such that \begin{equation}\label{qqq122} \phi''(z)>0,\ \ \ \phi(z)\geqslant u^*-\eta\ \ \mbox{ and }\ \ \phi'(z-2c^*\tau) \geqslant k_0 \phi'(z) \ \mbox{ for } \ z<z_\eta. \end{equation} Moreover, we can compute that \begin{align*} \triangle \bar{G}(t) :& = \bar{G}(t)-\bar{G}(t-\tau) = c^*\tau+N\sigma \varepsilon e^{-\beta_0(t-\tilde{s}_n)}(e^{\beta_0\tau}-1). \end{align*} For any given $\sigma>0$, by shrinking $\varepsilon$ if necessary, we have that \begin{equation}\label{deh22} \triangle \bar{G}(t)\in[c^*\tau,2c^*\tau]\ \ \mbox{ for }\ t > \tilde{s}_n+\tau. \end{equation} For $y-\bar{G}(t-\tau)\leqslant z_\eta$ and $t>\tilde{s}_n+\tau$, direct calculation implies \begin{align*} \mathcal{N} [\bar{U}] &\geqslant\zeta\big\{-\sigma\beta_0(1+\zeta)\phi'-\big[f'\big(\big(1+\theta_2\zeta e^{\beta_0\tau}\big)\phi_\tau\big)e^{\beta_0\tau}- d \big]\phi_\tau-\beta_0\phi\big\}\\ & \geqslant \zeta\big\{\big[d-f'\big(\big(1+\theta_2\zeta e^{\beta_0\tau}\big)\phi_\tau\big)e^{\beta_0\tau}-\beta_0\big]\phi_\tau-\sigma\beta_0\phi'+\beta_0(\phi_\tau-\phi)\big\}\\ &\geqslant \zeta\big[\beta_0 \phi'(y-\bar{G}(t)+\tilde{\theta}_2\triangle\bar{G}(t))\triangle\bar{G}(t)-\sigma\beta_0\phi'(y-\bar{G}(t))\big]\ \ \ (\mbox{with } \tilde{\theta}_2\in(0,1))\\ & \geqslant \zeta (2k_0c^*\tau-\sigma) \beta_0\phi'(y-\bar{G}(t)) \geqslant 0 \end{align*} provided that $\sigma$ is sufficiently large, and we have used $\big(1+ \theta_2 \zeta e^{\beta_0\tau}\big)\phi_\tau\in[u^*-\eta,u^*+\eta]$ for $t>\tilde{s}_n+\tau$, \eqref{vuf1}, $\phi'(z)\leqslant0$ for $z\leqslant z_\eta$, \eqref{qqq122} and \eqref{deh22}.
When $z_\eta\leqslant y-\bar{G}(t-\tau)\leqslant 0$ and $t>\tilde{s}_n+\tau$, for sufficiently large $\sigma$, we have \[ \mathcal{N} [\bar{U}]\geqslant\zeta \big[-\sigma\beta_0 C_z-u^*e^{\beta_0\tau}C_f -\beta_0u^* \big]\geqslant0, \] where $C_z:=\max_{z\in[0,z_\eta+2c^*\tau]}\phi'(z)<0$, $C_f:=\max_{v\in[0,2u^*]}f'(v)$, and \eqref{deh22} are used. Thus \eqref{uhupper} holds, and we can apply the comparison principle to conclude that \[ U(t,y)\leqslant \bar{U}(t,y),\ \ \ G(t)\leqslant \bar{G}(t)\ \ \mbox{ for }\ y\in[\bar{K}(t),\bar{G}(t)]\ \mbox{and } t>\tilde{s}_n+\tau. \] This, together with the definition of $H(t)$, yields that $H(t)\leqslant X_*+N\varepsilon(1+\sigma)$ for $t>\tilde{s}_n+\tau$. By shrinking $\varepsilon$ if necessary, we obtain \begin{equation}\label{Hsu1} H(t)\leqslant X_*+\epsilon\ \ \ \mbox{ for }\ t>\tilde{s}_n+\tau\ \mbox{and } n>n_1. \end{equation} In the following, we show $H(t)\geqslant X^*-\epsilon$ for all large $t$. As in the construction of the supersolution, for any $\varepsilon>0$, there exists $n_2=n_2(\varepsilon)$ such that, for $n\geqslant n_2$, \begin{eqnarray} & H(s_n+s)\geqslant X^*-\varepsilon\ \ \mbox{ for } s\in[0,\tau], \label{GRH1}\\ & V(s_n+s,y)\geqslant \phi(y-X^*+\varepsilon)-\varepsilon\ \ \mbox{ for } s\in[0,\tau],\ y\leqslant X^*-\varepsilon. \label{URH1} \end{eqnarray} We can also find $N_0>1$ independent of $\varepsilon$ such that \[ \phi(y-X^*+\varepsilon)-\varepsilon \geqslant (1-N_0\varepsilon e^{-\beta_0\tau})\phi(y-X^*+N_0\varepsilon)\ \ \mbox{ for } y\leqslant X^*-\varepsilon. \] We can define a subsolution as follows: $$ \begin{array}{l} \underline{G}(t):= X^*-N_0\varepsilon+c^*t-N_0\sigma\varepsilon\big(1-e^{-\beta_0(t-s_n)}\big),\\ \underline{U}(t,y):=\big(1-N_0\varepsilon e^{-\beta_0(t-s_n)}\big)\phi\big(y-\underline{G}(t)\big).
\end{array} $$ Since $V(t,y)\geqslant\phi(y-X_*)$, there are $C_0$ and $\alpha>0$ such that $V(t,y)\geqslant u^*-C_0e^{\alpha y}$ for all $y\leqslant0$, which implies that $U(t,y)\geqslant u^*-C_0e^{\alpha (y-c^*t)}$. Let us fix $c\in(0,c^*)$ such that $\beta_0\leqslant \alpha(c+c^*)$. By enlarging $n$ if necessary we may assume that $C_0\leqslant u^*N_0\varepsilon e^{\beta_0 s_n}$. Denote $\underline{K}(t)\equiv-ct$. By a similar argument as above and in Step 2 of Proposition \ref{pro:sigma01}, we can show that $(\underline{U},\underline{G},\underline{K})$ is a subsolution of problem \eqref{UGu} by taking $\sigma>0$ sufficiently large. The comparison principle can be used to conclude that \[ \underline{U}(t,y)\leqslant U(t,y),\ \ \ \ \underline{G}\leqslant G(t)\ \ \ \mbox{ for } t\geqslant s_n+\tau,\ y\in[-ct,\underline{G}(t)], \] which implies that $H(t)\geqslant X^*-N_0\varepsilon(1+\sigma)$ for $t\geqslant s_n+\tau$. By shrinking $\varepsilon$ if necessary, we have \[ X^*-\epsilon\leqslant H(t)\ \ \ \mbox{ for } t\geqslant s_n+\tau\ \mbox{ and }\ n\geqslant n_2. \] This completes the proof of the lemma. \end{proof} \begin{thm}\label{thm:WHG} Assume that {\bf (H)} holds and that spreading happens. Then there exists $H_1\in\mathbb{R}$ such that \begin{equation}\label{HWt1} \lim_{t\to\infty}[h(t) - c^*t ]= H_1,\ \ \ \ \ \lim_{t\to\infty} h'(t) =c^*, \end{equation} \begin{equation}\label{WHt1} \lim\limits_{t\to\infty} \| u(t,\cdot)- q_{c^*}(c^*t+H_1-\cdot)\| _{L^\infty ( [0, h(t)])}=0, \end{equation} where $(c^*, q_{c^*})$ is the unique solution of \eqref{sw11}. \end{thm} \begin{proof} It follows from Lemmas \ref{limn1} and \ref{limn21} that for any $t_n\to\infty$, by passing to a subsequence, $h(t+t_n)-c^*(t+t_n)\to H_1:=H_0-2C$ in $C_{loc}^{1+\frac{\nu}{2}}(\mathbb{R})$. The arbitrariness of $\{t_n\}$ implies that $h(t)-c^*t\to H_1$ and $h'(t)\to c^*$ as $t\to\infty$, which proves \eqref{HWt1}.
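The moving-coordinate substitutions used in this proof (and again below, with $z:=x-h(t)$) rest on an elementary chain-rule identity: passing to a frame moving with speed $c$ only adds the advection term $c\,u_y$ to the equation. This can be checked numerically; the sketch below (in Python, with an arbitrary smooth test function and a step size of our own choosing, not taken from the text) verifies the identity by finite differences.

```python
import math

# u(t, x): an arbitrary smooth test function (it need not solve the equation).
def u(t, x):
    return math.exp(-t) * math.sin(x) + x * x * t

c, d, h = 0.7, 1.3, 1e-4          # frame speed, decay rate, difference step


def u1(t, y):                      # u viewed in the frame moving with speed c
    return u(t, y + c * t)


def residual(t, y):
    # (u1_t - u1_yy - c u1_y + d u1) minus (u_t - u_xx + d u) at x = y + c t:
    # the moving frame only contributes the advection term c u1_y, so the
    # difference should vanish up to discretization error.
    u1_t = (u1(t + h, y) - u1(t - h, y)) / (2 * h)
    u1_y = (u1(t, y + h) - u1(t, y - h)) / (2 * h)
    u1_yy = (u1(t, y + h) - 2 * u1(t, y) + u1(t, y - h)) / h**2
    x = y + c * t
    u_t = (u(t + h, x) - u(t - h, x)) / (2 * h)
    u_xx = (u(t, x + h) - 2 * u(t, x) + u(t, x - h)) / h**2
    lhs = u1_t - u1_yy - c * u1_y + d * u1(t, y)
    rhs = u_t - u_xx + d * u(t, x)
    return lhs - rhs

assert abs(residual(0.3, -1.2)) < 1e-4
```

The same identity, applied with the delayed argument shifted by $c^*\tau$, is what turns the fixed-frame problem into \eqref{pWH} and \eqref{UGu}.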
In what follows, we use the moving coordinate $z:= x-h(t)$ to prove \eqref{WHt1}. Set $$ g_2(t) := g(t)-h(t), \ \ \ \ \ \ u_2(t,z) := u(t, z+h(t))\ \ \mbox{ for }\ z\in [g_2 (t), 0],\ t\geqslant \tau, $$ \[ \tilde{g}_n(t)=g(t+t_n)-h(t+t_n),\ \ \ \tilde{h}_n(t)=h(t+t_n),\ \ \ \ \tilde{u}_n(t,z)=u_2(t+t_n,z). \] Then the pair $(\tilde{u}_n, \tilde{g}_n,\tilde{h}_n)$ solves \begin{equation}\label{p u2} \left\{ \begin{array}{ll} (\tilde{u}_n)_t =(\tilde{u}_n)_{zz}+\tilde{h}_n'(\tilde{u}_n)_z+ f(\tilde{u}_n(t-\tau,z+\tilde{h}_n(t)-\tilde{h}_n(t-\tau)))- d \tilde{u}_n, & z\in(\tilde{g}_n(t),0),\ t>\tau,\\ {\tilde{u}_n}(t, z)= 0,\ \ {\tilde{g}'_n}(t)=- \mu (\tilde{u}_n)_z (t,z)-\tilde{h}_n'(t), & z={\tilde{g}_n}(t),\ t>\tau,\\ {\tilde{u}_n}(t, 0)= 0,\ \ \tilde{h}_n'(t) = -\mu (\tilde{u}_n)_z(t,0), & t>\tau. \end{array} \right. \end{equation} By the same reasoning as in the proof of Lemma \ref{limn1}, parabolic regularity applied to \eqref{p u2} together with the Sobolev embedding theorem can be used to conclude that, by passing to a further subsequence if necessary, as $n\to\infty$, $\tilde{u}_n\to W$ in $\ C_{loc}^{\frac{1+\nu}{2},1+\nu}(\mathbb{R}\times(-\infty,0])$, and $W$ satisfies, in view of $\tilde{h}'_n(t)\to c^*$, \[ \left\{ \begin{array}{ll} W_t =W_{zz}+c^*W_z- d W+ f(W(t-\tau,z+c^*\tau)), & -\infty<z<0,\ t\in \mathbb{R},\\ W (t, 0)= 0, \ c^*=-\mu W_z (t,0), & t\in \mathbb{R}. \end{array} \right. \] This is equivalent to \eqref{VGHQ} with $V=W$ and $H=0$. Hence we can conclude \[ W(t,z)\equiv \phi(z)\ \ \ \mbox{ for } (t,z)\in\mathbb{R}\times(-\infty,0]. \] Thus we have proved that, as $n\to\infty$, \[ u(t+t_n,z+h(t+t_n))-q_{c^*}(-z)\to0 \ \ \ \mbox{ in } C_{loc}^{\frac{1+\nu}{2},1+\nu}(\mathbb{R}\times(-\infty,0]). \] This, together with the arbitrariness of $\{t_n\}$, yields that \[ \lim_{t\to\infty}[u(t,z+h(t))-q_{c^*}(-z)]=0\ \ \mbox{ uniformly for } z \mbox{ in compact subsets of } (-\infty,0].
\] Then, for any $L>0$, \[ \|u(t,\cdot)-q_{c^*}(h(t)-\cdot)\|_{L^\infty ([h(t)-L,h(t)])}\to0 \ \quad \mbox{ as } t\to \infty. \] Using the limit $h(t)-c^*t\to H_1$ as $t\to\infty$ we obtain \begin{equation}\label{u to U near h(t)} \|u(t, \cdot) - q_{c^*}(c^*t+H_1 -\cdot)\|_{L^\infty ([h(t)-L,h(t)])} \to 0\ \quad \mbox{ as } t\to \infty. \end{equation} Finally we prove \eqref{WHt1}. For any given small $\varepsilon >0$, it follows from \eqref{ughu1} in Proposition \ref{pro:sigma01} that there exist two positive constants $Z_\varepsilon$ and $T_\varepsilon$ such that $$ |u(t,x) - u^*| \leqslant u^*\varepsilon \ \quad \mbox{ for }\ 0\leqslant x\leqslant h(t) - Z_\varepsilon,\ t>T_\varepsilon. $$ Since $q_{c^*}(z)\to u^*$ as $z\to\infty$, there exists $Z^*_\varepsilon > Z_\varepsilon$ such that $$ |q_{c^*}(c^*t +H_1 -x) -u^*| \leqslant u^*\varepsilon\ \quad \mbox{ for }\ x\leqslant c^*t+ 2H_1 -Z^*_\varepsilon. $$ Taking $T^*_\varepsilon >T_\varepsilon$ large such that $h(t) <c^*t+2H_1$ for $t>T^*_\varepsilon$, we obtain $$ |u(t,x) - q_{c^*}(c^*t +H_1 -x) | \leqslant 2u^*\varepsilon\ \quad \mbox{ for }\ 0\leqslant x\leqslant h(t)-Z^*_\varepsilon, \ t> T^*_\varepsilon. $$ Taking $L= Z^*_\varepsilon$ in \eqref{u to U near h(t)} we see that for some $T^{**}_\varepsilon >T^*_\varepsilon$, we have $$ |u(t, x) - q_{c^*}(c^*t +H_1 -x)| \leqslant u^*\varepsilon\ \quad \mbox{ for }\ h(t) -Z^*_\varepsilon \leqslant x \leqslant h(t),\ t>T^{**}_\varepsilon. $$ This completes the proof of \eqref{WHt1}. \end{proof} Using a similar argument one can obtain the following result. \begin{thm}\label{thm:WGH} Assume that {\bf (H)} holds and that spreading happens.
Then there exists $G_1\in\mathbb{R}$ such that \begin{equation}\label{HWgt1} \lim_{t\to\infty}[g(t) + c^*t ]= G_1,\ \ \ \ \ \lim_{t\to\infty} g'(t) =-c^*, \end{equation} \begin{equation}\label{WHtg1} \lim\limits_{t\to\infty} \| u(t,\cdot)- q_{c^*}(c^*t-G_1+\cdot)\| _{L^\infty ( [g(t), 0])}=0, \end{equation} where $(c^*, q_{c^*})$ is the unique solution of \eqref{sw11}. \end{thm} \noindent {\bf Proof of Theorem \ref{thm:profile of spreading sol}}. The results in Theorem \ref{thm:profile of spreading sol} follow from Theorems \ref{thm:WHG} and \ref{thm:WGH}. $\square$ \end{document}
\begin{document} \title{On Universal Cycles of Labeled Graphs} \begin{tabular}{p{3in}l} \small\textsc{Greg Brockman} & \small\textsc{Bill Kay}\\[-5pt] \small\textsc{Harvard University} & \small\textsc{University of South Carolina}\\[-5pt] \small\textsc{Cambridge, MA 02138} & \small\textsc{Columbia, SC 29208}\\[-5pt] \small\textsc{United States} & \small\textsc{United States}\\[-5pt] \small\verb|[email protected]| & \small \verb|[email protected]|\\ \small\textsc{Emma E. Snively}\\[-5pt] \small\textsc{Rose-Hulman Institute of Technology}\\[-5pt] \small\textsc{Terre Haute, IN 47803}\\[-5pt] \small\textsc{United States}\\[-5pt] \small\verb|[email protected]| \end{tabular} \begin{abstract} A universal cycle is a compact listing of a class of combinatorial objects. In this paper, we prove the existence of universal cycles of classes of labeled graphs, including simple graphs, trees, graphs with $m$ edges, graphs with loops, graphs with multiple edges (with up to $m$ duplications of each edge), directed graphs, hypergraphs, and $k$-uniform hypergraphs. \end{abstract} \section{Introduction} A simple example of a \textit{universal cycle} (U-cycle) is the cyclic string $11101000$, which contains every 3-letter word on a binary alphabet precisely once. We obtain these words by taking substrings of length 3; it is useful to imagine that we are looking at the string through a ``window'' of length 3, and we shift the window to transition from one word to the next, allowing the window to wrap if necessary. Universal cycles have been shown to exist for words of any length and for any alphabet size. (For the special case of a binary alphabet, such strings are also known as \textit{de Bruijn cycles}).
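This opening example can be checked mechanically; a short sketch in Python (the variable names are ours):

```python
# The cyclic string from the text, read through a length-3 window that is
# allowed to wrap around the end of the string.
s = "11101000"
k = 3
windows = [(s + s[:k - 1])[i:i + k] for i in range(len(s))]

# Every 3-letter binary word appears as a window exactly once.
assert sorted(windows) == sorted(f"{i:03b}" for i in range(8))
```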
The concept easily lends itself to extension, and universal cycles for permutations, partitions, and certain classes of functions are well-studied in the literature (see Chung, Diaconis, Graham~\cite{chungdiaconisgraham} for an overview of previous work in the field). In all cases, the distinguishing feature of a universal cycle is that by shifting a window through a cyclic string (or in some generalizations, an array), all objects in a given class are represented precisely once. In this paper we generalize the notion of universal cycles. In particular, we show that these cycles exist for certain classes of labeled graphs. In order to define a universal cycle of graphs, we must first extend the notion of a ``window.'' \begin{definition} Given a labeled graph $G$ having vertex set $V(G)=\{v_1,v_2,\ldots,v_n\}$ with vertices labeled by the rule ${v_j \mapsto j}$ and an integer $0\leq k \leq n$, define a \textit{$k$-window} of $G$ to be the subgraph of $G$ induced by the vertex set $V = \{v_i, v_{i+1}, \ldots, v_{i+k-1}\}$ for some $i$, where vertex subscripts are reduced modulo $n$ as appropriate, and vertices are relabeled such that $v_i\mapsto 1, v_{i+1}\mapsto 2, \ldots, v_{i+k-1}\mapsto k$. For each value of $i$ such that $1\leq i\leq n$, we denote the corresponding \textit{$i^\text{th}$ $k$-window} of $G$ as $W_{G,k}(i)$. If $G$ is clear from context, we abbreviate our window as $W_k(i)$. \end{definition} \begin{figure} \caption{A 3-window of an 8 vertex graph.} \label{fig:kwindow} \end{figure} \begin{definition} Given $\script F$, a family of labeled graphs on $k$ vertices, a \textit{universal cycle (U-cycle) of $\script F$} is a labeled graph $G$ such that the sequence of $k$-windows of $G$ contains each graph in $\script F$ precisely once. That is, $\{W_k(i) | 1 \leq i \leq n\} = \script F$, and $W_k(i) = W_k(j) \implies i = j$.
(Note that the vertex sets of the $k$-windows and of the elements of $\script{F}$ may differ; however, we regard two labeled graphs as equal if they differ only by a bijection between their vertex sets.) \end{definition} \begin{example} Note that the full 8 vertex graph in Figure \ref{fig:kwindow} is a U-cycle of simple labeled graphs (graphs without loops or multiple edges) on 3 vertices. \end{example} \section{Universal cycles of simple labeled graphs} \label{simple graphs} We begin our investigation by considering only simple graphs; that is, those without loops or multiple edges. Our result will be that U-cycles of simple labeled graphs on $k$ vertices exist for all $k \geq 0$, $k\neq 2$. Our proof employs two common notions from the study of U-cycles: the transition graph and the arc digraph. The transition graph $T$ of a family $\script{F}$ of combinatorial objects is a directed graph with vertex set $V(T) = \script{F}$. If $A,B\in\script{F}$, there is an edge from $A$ to $B$ in $T$ if and only if $B$ can follow $A$ in one window shift of a U-cycle. If $\script{F}$ is a family of graphs on $k$ vertices, this means that the subgraph induced by the vertices labeled $2, 3, \ldots, k$ in $A$ is equal to that induced by the vertices $1, 2, \ldots, k-1$ in $B$. It should be clear that a U-cycle of $\script F$ corresponds to a Hamiltonian circuit in $T$ (a directed cycle passing through every vertex exactly once). Unfortunately, finding Hamiltonian circuits in graphs is an NP-hard problem; however, in our case the problem can be further reduced. Let $D$ be the graph with $E(D) = \script{F}$ such that two edges $A,B$ in $D$ are consecutive (the head of $A$ equals the tail of $B$) if and only if $B$ can follow $A$ in a U-cycle. Note that $V(D)$ is arbitrary. Call $D$ the \textit{arc digraph} of \script{F}.
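For small $k$ the arc digraph can be generated explicitly. The sketch below (in Python; the edge-set encoding is a choice of ours, not fixed by the text) realizes each $k$-vertex graph as an arc from the $(k-1)$-vertex graph obtained by deleting its last vertex to the one obtained by deleting its first vertex, and confirms the degree count $2^{k-1}$ that appears in the lemma below.

```python
from itertools import combinations
from collections import Counter

def all_graphs(k):
    """All simple labeled graphs on vertices 1..k, as frozensets of edge pairs."""
    pairs = list(combinations(range(1, k + 1), 2))
    return [frozenset(s)
            for r in range(len(pairs) + 1)
            for s in combinations(pairs, r)]

def drop_first(g):
    """Delete vertex 1 and relabel i -> i-1 (the head of the arc g)."""
    return frozenset((a - 1, b - 1) for (a, b) in g if a != 1)

def drop_last(g, k):
    """Delete vertex k (the tail of the arc g)."""
    return frozenset(e for e in g if k not in e)

k = 3
arcs = all_graphs(k)                       # each k-vertex graph is one arc
out_deg = Counter(drop_last(g, k) for g in arcs)
in_deg = Counter(drop_first(g) for g in arcs)

assert in_deg == out_deg                   # in-degree equals out-degree
assert all(d == 2 ** (k - 1) for d in out_deg.values())
```

An Eulerian circuit of this digraph, read arc by arc, spells out a U-cycle of the family.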
Now finding a U-cycle of \script{F} is equivalent to finding an Eulerian circuit in $D$ (a directed cycle passing through every edge exactly once); such circuits are easy to detect. In particular, a graph has an Eulerian circuit if and only if each of its vertices has equal in-degree and out-degree and the graph is strongly connected (for any two vertices $x,y$, there is a directed path from $x$ to $y$). For convenience, we often choose the vertices in $V(D)$ to be equal to the ``overlap'' between consecutive edges. In the following, we construct the transition graph only as a guide for constructing the arc digraph. \begin{figure} \caption{(Left) A partial sketch of the transition graph of simple graphs on 3 vertices, and (right) the full transition graph. We provide the left figure for clarity.} \label{fig:transition graph} \end{figure} \begin{lemma} \label{simple graph de bruijn} The arc digraph $D$ of simple labeled graphs on $k$ vertices, $k\geq 3$, has the following properties: \begin{enumerate} \item \label{degrees equal} For each $\script X\in V(D)$, the in-degree of $\script X$ equals the out-degree of $\script X$. \item \label{strong connectedness} The graph $D$ is strongly connected (there is a directed path from $\script X$ to $\script Y$ for any $\script X\neq\script Y\in V(D)$). \end{enumerate} \end{lemma} \begin{proof} Fix $k\geq 3$. Let $\script{F}$ be the set of simple graphs on $k$ vertices. We begin by constructing the transition graph of $\script{F}$. The vertices of this graph are the elements of $\script{F}$. As an example, Figure \ref{fig:transition graph} contains the transition graph for the case $k=3$. \begin{figure} \caption{The arc digraph of the set of simple labeled graphs on 3 vertices.} \label{fig:arc digraph} \end{figure} Consider $A,B\in\script{F}$. 
Let $u$ be the subgraph induced by the vertices labeled $2,3,\ldots,k$ in $A$ (with its vertices relabeled to $1,2,\ldots,k-1$, preserving order) and let $v$ be the subgraph induced by the vertices labeled $1,2,\ldots,k-1$ in $B$. We draw an edge from $A$ to $B$ if and only if $B$ can follow $A$ in a U-cycle, which is equivalent to $u = v$. Thus the transition graph has an edge from $A$ to $B$ if and only if removing the first vertex from $A$ yields the same graph as removing the last vertex from $B$. We now construct the arc digraph corresponding to this transition graph. Its edge set will be the vertex set of our transition graph. In accordance with the convention mentioned earlier, we use as its vertex set the set of graphs on $k-1$ vertices. By the previous paragraph, the head of an edge $A\in\script{F}$ is the vertex equal to the induced subgraph resulting from removing $A$'s first vertex. Similarly its tail is the vertex equal to the induced subgraph resulting from removing its last vertex. See Figure \ref{fig:arc digraph} for the arc digraph in the case $k=3$. \textbf{Proof of \ref{degrees equal}:} Let $\script X\in V(D)$ be a vertex in the arc digraph. Since an edge $A\in E(D)$ points into $\script X$ if and only if removing the first vertex of $A$ yields $\script X$, the in-degree of $\script X$ must equal $2^{k-1}$, since the first vertex can arbitrarily be adjacent to or not adjacent to each vertex in $V(\script X)$. Similarly, the out-degree of $\script X$ is also $2^{k-1}$ since an edge $B\in E(D)$ points out of $\script X$ precisely when deleting its last vertex yields $\script X$, and again we have two choices for each vertex in $V(\script X)$. \textbf{Proof of \ref{strong connectedness}:} Consider any two vertices of $D$, $\script X$ and $\script Y$. Let $G$ be the (labeled) disjoint union of $\script X$ and $\script Y$, after incrementing the label on each of $\script Y$'s vertices by $k-1$, as exemplified in Figure \ref{fig:concat}.
Now consider the sequence of $k$-vertex graphs $W_{G,k}(1), W_{G,k}(2), \ldots, W_{G,k}(k-1)$. Deleting the first vertex of $W_{G,k}(i)$ yields $W_{G,k-1}(i+1)$, as does deleting the last vertex of $W_{G,k}(i+1)$. Thus $W_{G,k}(i)$ and $W_{G,k}(i+1)$ are consecutive in $D$. Furthermore, deleting the last vertex of $W_{G,k}(1)$ yields $\script X$ and deleting the first vertex of $W_{G,k}(k-1)$ yields $\script Y$; that is, the tail of $W_{G,k}(1)$ is $\script X$ and the head of $W_{G,k}(k-1)$ is $\script Y$, and hence there is a path in $D$ from $\script{X}$ to $\script{Y}$, as desired. \begin{figure} \caption{An example of taking the labeled disjoint union of $\script X$ and $\script Y$, where each of $\script X$ and $\script Y$ is a graph on four vertices.} \label{fig:concat} \end{figure} \end{proof} \begin{theorem} For each $k \geq 0$, $k\neq 2$, there exists a universal cycle of simple labeled graphs on $k$ vertices. \end{theorem} \begin{proof} When $k=0$ or $k=1$ the result is trivial. For $k\geq 3$, Lemma \ref{simple graph de bruijn} implies that the arc digraph of simple labeled graphs on $k$ vertices has an Eulerian cycle, and hence a U-cycle of them exists. \end{proof} Note that for $k=2$ we can modify our definition of a window in order to recognize two distinct windows on two vertices, as shown in Figure \ref{fig:k=2}. Also notice that in addition to showing existence, our results provide quick algorithms for constructing the relevant U-cycles. \begin{figure} \caption{An illustration of a U-cycle using a modified window for $k = 2$. The left window is the complete graph while the right window is the empty graph. Note that an edge is considered to be in a window only if it is not ``cut'' by the window.} \label{fig:k=2} \end{figure} \section{General Strategies} The results of the previous section can be generalized to many classes of graphs, as we show here. Throughout this section, we suppose that all graphs in a given family have $k$ vertices for some fixed $k$. Since our results will equally well apply to hypergraphs, we will consider hypergraphs to be a class of graphs.
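The cyclic $k$-windows used throughout can be transcribed directly from the definition in the previous section; a sketch in Python, with graphs encoded as sets of edge pairs (an encoding of our choosing):

```python
def k_window(edges, n, k, i):
    """The i-th k-window of a labeled graph on vertices 1..n, given as a set
    of edge pairs: vertices i, i+1, ..., i+k-1 (labels taken mod n) induce
    the window, and they are relabeled 1..k in order."""
    verts = [(i - 1 + s) % n + 1 for s in range(k)]
    relabel = {v: s + 1 for s, v in enumerate(verts)}
    return frozenset(
        (min(relabel[a], relabel[b]), max(relabel[a], relabel[b]))
        for (a, b) in edges
        if a in relabel and b in relabel
    )

# A 5-cycle 1-2-3-4-5-1 plus the chord (1,3); its 3rd 3-window sees
# vertices 3, 4, 5 and keeps only the path edges among them.
G = {(1, 2), (2, 3), (3, 4), (4, 5), (1, 5), (1, 3)}
assert k_window(G, 5, 3, 3) == frozenset({(1, 2), (2, 3)})
```

The same function, run over $i=1,\ldots,n$, recovers the full window sequence of a candidate U-cycle.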
\begin{definition} Let $\rot X$ be the \textit{rotation class of $X$}, or the set of labeled graphs that differ from $X$ only by a cyclic rotation of vertex labels. \end{definition} \begin{lemma} \label{in equals out} Let $\script F$ be a family of labeled graphs (possibly including non-simple graphs or even hypergraphs) such that if $X\in\script F$, then $\rot X\subseteq \script F$. Then in the arc digraph of $\script F$, for every vertex $V$, the in-degree of $V$ equals the out-degree of $V$. \end{lemma} \begin{proof} Let $V$ be a vertex of the arc digraph of $\script F$. Let the set of edges pointing into $V$ be denoted by $I(V)$, and let the set of edges leaving $V$ be denoted $O(V)$. We provide a bijection $f: I(V) \longrightarrow O(V)$, thus proving our lemma. Let $I$ be an edge pointing into $V$ (recall that edges in our arc digraph are elements of $\script F$). If $I$ has $k$ vertices, define $f(I)$ as the graph obtained by cyclically relabeling $I$ as follows: $1\mapsto k, 2\mapsto 1, 3\mapsto 2,\ldots, k\mapsto k-1$. Then we see that $f(I) \in\script F$, since $f(I)$ is a rotation of $I$, and furthermore $f(I)$ is an edge leaving $V$. \begin{figure} \caption{The cyclic relabeling of the vertices to create an isomorphic graph} \label{fig:pushpins} \end{figure} Injectivity of $f$ is clear. Now consider any edge $J$ leaving $V$. Let $I$ be the graph obtained by cyclically relabeling $J$ as follows: $1 \mapsto 2, 2 \mapsto 3, \ldots, k-1 \mapsto k, k \mapsto 1$. Again, $I$ is isomorphic to $J$, so $I \in\script F$. Furthermore, $I$ is an edge pointing into $V$. Thus $J$ has a preimage under $f$, and $f$ is surjective. \end{proof} Lemma \ref{in equals out} implies that if some class $\script F$ of labeled graphs is closed under rotation, then in order to show that a U-cycle of \script{F} exists we need only show that the arc digraph is strongly connected (save for isolated vertices).
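The map $f$ in the proof of Lemma \ref{in equals out} is just a cyclic relabeling, and iterating it $k$ times recovers the original graph; written out (with graphs encoded as sets of edge pairs, a convention of ours):

```python
def rotate(g, k):
    """The relabeling 1 -> k, 2 -> 1, ..., k -> k-1 applied to a k-vertex
    labeled graph given as a set of edge pairs (a, b) with a < b."""
    r = lambda v: k if v == 1 else v - 1
    return frozenset((min(r(a), r(b)), max(r(a), r(b))) for (a, b) in g)

g = frozenset({(1, 2), (2, 3)})
assert rotate(g, 3) == frozenset({(1, 3), (1, 2)})

# Applying the rotation k times returns the original labeling, so the map
# is a bijection on any rotation-closed family.
h = g
for _ in range(3):
    h = rotate(h, 3)
assert h == g
```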
That is, we must show that given two edges $I$ and $J$ in the arc digraph, there exists a directed path in the arc digraph beginning with $I$ and ending with $J$. In terms of U-cycles, this is equivalent to showing the existence of a graph $G$ such that $W_{G,k}(i) = I, W_{G,k}(j) = J$ and $W_{G,k}(h) \in \script{F}$ for $i \leq h \leq j$. Or alternatively, we can picture walking on the arc digraph from $I$ to $J$, taking a series of ``moves'' along consecutive edges, always following the directed arrows. We now apply these ideas to prove the existence of U-cycles of various classes of graphs. \begin{theorem} \label{extensions of results} For each $k \neq 2$, U-cycles exist for the following classes of graphs on $k$ vertices: graphs with loops, graphs with multiple edges (with up to $m$ duplications of each edge), directed graphs, hypergraphs, and $j$-uniform hypergraphs. \end{theorem} \begin{proof} The cases $k = 0,1$ are trivial. If $k \geq 3$, we proceed in analogy to Part \ref{strong connectedness} of Lemma \ref{simple graph de bruijn}. Take \script{F} to be any of the desired classes of graphs. Pick two graphs $I$ and $J$ from \script{F}. Let $G$ be the labeled disjoint union of $I$ and $J$. The graph $I$ is the first $k$-window of $G$, and the graph $J$ is the $(k+1)^\text{st}$ $k$-window. Further, each $k$-window $W_{G,k}(i)$, $1 \leq i \leq k+1$, is a graph in $\script{F}$. Thus these $k$-windows represent a series of legal edge moves in our arc digraph. \end{proof} The extensions from Theorem \ref{extensions of results} followed readily because the relevant graph classes were unrestricted; connectedness of the arc digraph was trivial. Notice that our proof also applies to some restricted classes of graphs, such as forests. We now turn our attention to U-cycles of two types of restricted classes of simple graphs on $k$ vertices. \begin{theorem} U-cycles exist for trees on $k$ vertices for $k \geq 3$. \end{theorem} \begin{proof} Let $I$, $J$ be trees. 
Let $G$ be the labeled disjoint union of $I$ and $J$. As we read the $k$-windows starting from $I$, let $M$ be the first non-tree window that we encounter. Define $v_M$ to be the vertex of highest label in $M$. Since none of $G$'s subgraphs contain cycles, we see that there must be one or more components of $M$ that are not connected to the component of $v_M$. Draw edges from $v_M$ to each of these components; note that the resulting window is now a tree. Furthermore, these edges did not create any cycles in any of $G$'s $k$-windows, since there are no edges between vertices with label higher than that of $v_M$ and those with lower label. Also note that we still have $W_{G,k}(1) = I, W_{G,k}(k+1) = J$. We then iterate this process until we arrive at a graph that gives us a sequence of $k$-windows, all of which are trees, starting at $I$ and ending at $J$. \end{proof} \begin{theorem} U-cycles exist for graphs with precisely $m$ edges. \end{theorem} \begin{proof} For any graph $G$, let $d(G)$ be the degree sequence of $G$. For two graphs $G,H$ having $m$ edges, define $d(G) < d(H)$ if $d(G)$ comes before $d(H)$ lexicographically. We show that for any graph $I$ having exactly $m$ edges, there is a sequence of moves that takes $I$ to the (unique) graph $L$ having $m$ edges and having least degree sequence of graphs with $m$ edges. Thereupon, using the bijection we created in Lemma \ref{in equals out}, for any graph $J$ having exactly $m$ edges we can reverse its path to $L$ to arrive at a path from $L$ to $J$. This will complete our proof. Let $I$ be a graph with $m$ edges. If $I$ has least degree sequence, we are done. Otherwise, let $d(I) = (d_1,d_2,\ldots,d_{k})$ and $d(L) = (L_1,L_2,\ldots,L_{k})$. Since $d(I)$ is not minimal, there must be some $i$ such that $d_i > L_i$. But then there must exist some $j > i$ such that $d_j < L_j$.
Now consider a sequence of moves where at each step we rotate $I$'s vertices according to the relabeling $1\mapsto k, 2\mapsto 1,\ldots, k\mapsto k-1$ until the vertex formerly labeled $i$ attains label 1; let $i'$ be any vertex adjacent to it. Rotate the vertex set once more, but this time connect the vertex formerly labeled $i'$ to the vertex formerly labeled $j$ instead of to $i$. After rotating all of the vertices back to their original positions, we obtain a graph with a smaller degree sequence than $I$. By infinite descent, we see that we can eventually move to $L$ via a sequence of moves all of which are graphs having $m$ edges, thus completing the proof of the theorem. \end{proof} \section{Conclusions and Future Directions} In this paper, we have presented a beginning theory of universal cycles of graphs. We have shown the existence of U-cycles of various classes of labeled graphs on $k$ vertices, including simple graphs, multigraphs, graphs with $m$ edges, directed graphs, trees, hypergraphs, and $j$-uniform hypergraphs. Our work in this field is far from complete. There exist many other classes of graphs for which U-cycles may conceivably exist. However, perhaps the most obvious gap concerns U-cycles of unlabeled graphs. The canonical result would be to prove the existence of U-cycles of isomorphism classes of graphs. In such a U-cycle, no two $k$-windows are isomorphic and every isomorphism class is represented as a $k$-window. It is easy to find a U-cycle of isomorphism classes of graphs on 3 vertices. It is difficult, but still possible, to find a U-cycle of isomorphism classes of graphs on 4 vertices; one such cycle is exhibited in Figure \ref{fig:unlabeled 4 cycle}. These results lead us to conjecture the following. \begin{figure} \caption{A U-cycle of isomorphism classes of graphs on 4 vertices.} \label{fig:unlabeled 4 cycle} \end{figure} \begin{conjecture} For each $k \neq 2$, there exists a U-cycle of isomorphism classes of graphs on $k$ vertices.
\end{conjecture} We also note that U-cycles have potential in theorem proving, as demonstrated by the following result. \begin{definition} We say that an integer-valued graph theoretic function $f$ is \textit{window-Lipschitz} if, for all graphs $G$ and $H$ which are one window shift apart in a U-cycle, $|f(G) - f(H)| \leq 1$. \end{definition} Some examples of window-Lipschitz functions are the chromatic number and the clique number. \begin{lemma} Let $U$ be a U-cycle of some family $\script F$ of graphs, and let $f$ be a window-Lipschitz function defined on these graphs. Then for each integer $\displaystyle\min_{G\in\script F} f(G) < i < \max_{G\in\script F} f(G)$ there exist at least two distinct graphs $G\in\script F$ such that $f(G) = i$. \label{lipschitz} \end{lemma} \begin{proof} By the definition of a window-Lipschitz function, under a single window shift the value of $f$ can change by at most 1. Hence, during the sequence of window shifts from the graph with minimal $f$-value to the graph with maximal $f$-value, every value of $f$ between the minimum and maximum is attained. Similarly, during the sequence of window shifts from the graph with maximal $f$-value back to that with minimal $f$-value, every such value is again attained. This completes our proof. \end{proof} Finally, we note that it is possible to reduce finding a U-cycle of a set of labeled graphs to finding a U-cycle of an appropriately defined set of equivalence classes of words. For example, let $\script F$ be the set of simple labeled graphs on $k$ vertices, and let $\script G$ be the set of words of length $k-1$ on the alphabet $\{0,1,\ldots,2^{k-1}-1\}$. Define $f:\script G\to\script F$ such that $f(x_1x_2\ldots x_{k-1})$ is the graph where, for $1\leq i < j \leq k$, there is an edge from $i$ to $j$ if and only if the $j$th bit of $x_i$ is 1.
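This word-to-graph encoding is easy to make concrete. The sketch below is illustrative; in particular, the bit-numbering convention is our assumption (the text does not fix one), so the decoding of any particular word, such as the one in the figure, may differ under a different convention. Here we take the "$j$th bit" of $x_i$ to be bit $(k-j)$ of its $k$-bit binary expansion:

```python
# Sketch of the encoding f: words over {0, ..., 2^(k-1)-1} -> labeled graphs.
# Assumed convention: the "j-th bit" of x_i is (x_i >> (k - j)) & 1.

def decode(word, k):
    """Map a word x_1 x_2 ... x_{k-1} to the edge set of a graph on 1..k."""
    edges = set()
    for i, x in enumerate(word, start=1):
        for j in range(i + 1, k + 1):      # only pairs with i < j are read
            if (x >> (k - j)) & 1:
                edges.add(frozenset({i, j}))
    return edges

# Under this convention, the word (5, 3, 0) on k = 4 vertices decodes to:
E = decode([5, 3, 0], k=4)
assert E == {frozenset({1, 2}), frozenset({1, 4}),
             frozenset({2, 3}), frozenset({2, 4})}
```

Since only the bits addressing $j > i$ are ever read, distinct words can decode to the same graph, which is exactly why the reduction works with equivalence classes of words rather than individual words.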
Now define two words in $\script G$ to be equivalent when their image under $f$ is equal, and define the bijection $f'$ to map an equivalence class to the image under $f$ of any member of that equivalence class. For an example, see Figure \ref{fig:encoding}. \begin{figure} \caption{A graph and an element of its corresponding equivalence class of words, 530.} \label{fig:encoding} \end{figure} It is not hard to show that a U-cycle of these equivalence classes (a string whose sequence of $(k-1)$-windows contains exactly one representative from each equivalence class) exists precisely when a U-cycle of $\script F$ exists. This reduction allows one to think of U-cycles of graphs in the more traditional context of U-cycles of a restricted class of words. Similar reductions apply to other classes of labeled graphs. \end{document}
\begin{document} \title{EDITS: Modeling and Mitigating Data Bias \\ for Graph Neural Networks} \author{Yushun Dong$^1$, Ninghao Liu$^2$, Brian Jalaian$^3$, Jundong Li$^1$} \affiliation{ \institution{$^1$University of Virginia \country{USA}} \institution{$^2$University of Georgia \country{USA}} \institution{$^3$U.S. Army Research Laboratory \country{USA}}} \email{{yd6eb,jundong}@virginia.edu,[email protected],[email protected]} \renewcommand{\shortauthors}{Yushun Dong, Ninghao Liu, Brian Jalaian, \& Jundong Li} \begin{abstract} Graph Neural Networks (GNNs) have shown superior performance in analyzing attributed networks in various web-based applications such as social recommendation and web search. Nevertheless, in high-stakes decision-making scenarios such as online fraud detection, there is an increasing societal concern that GNNs could make discriminatory decisions towards certain demographic groups. Despite recent explorations of fair GNNs, existing works are tailored to specific GNN models. However, myriads of GNN variants have been proposed for different applications, and it is costly to fine-tune existing debiasing algorithms for each specific GNN architecture. Different from existing works that debias GNN models, we aim to debias the input attributed network to achieve fairer GNNs by feeding GNNs less biased data. Specifically, we propose novel definitions and metrics to measure the bias in an attributed network, which leads to the optimization objective to mitigate bias. We then develop a framework EDITS to mitigate the bias in attributed networks while maintaining the performance of GNNs in downstream tasks. EDITS works in a model-agnostic manner, i.e., it is independent of any specific GNN. Experiments demonstrate the validity of the proposed bias metrics and the superiority of EDITS on both bias mitigation and utility maintenance. Open-source implementation: https://github.com/yushundong/EDITS.
\end{abstract} \begin{CCSXML} <ccs2012> <concept> <concept_id>10010147.10010257</concept_id> <concept_desc>Computing methodologies~Machine learning</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> \end{CCSXML} \ccsdesc[500]{Computing methodologies~Machine learning} \keywords{graph neural networks, algorithmic fairness, data bias} \maketitle {\fontsize{8pt}{8pt} \selectfont \textbf{ACM Reference Format:}\\ Yushun Dong, Ninghao Liu, Brian Jalaian, Jundong Li. 2022. EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks. In \textit{Proceedings of ACM Web Conference 2022 (WWW ’22), April 25–29, 2022, Virtual Event, Lyon, France.} ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3485447.\\3512173} \section{Introduction} \label{intro} Attributed networks are ubiquitous in a plethora of web-related applications including online social networking~\cite{tang2009relational}, web advertising~\cite{yang2022wtagraph}, and news recommendation~\cite{qian2020interaction}. To better understand these networks, various graph mining algorithms have been proposed. In particular, the recently emerged Graph Neural Networks (GNNs) have demonstrated superior capability of analyzing attributed networks in various tasks, such as node classification~\cite{kipf2016semi,velickovic2017graph} and link prediction~\cite{DBLP:conf/nips/ZhangC18,kipf2016variational}. Despite the superior performance of GNNs, they usually do not consider fairness issues in the learning process~\cite{dai2020say}. Extensive research efforts have shown that many recently proposed GNNs~\cite{dai2020say,shumovskaia2021linking,xu2021towards} could make biased decisions towards certain demographic groups determined by sensitive attributes such as gender~\cite{ekstrand2021exploring} and political ideology~\cite{pariser2011filter}. 
For example, e-commerce platforms generate a huge amount of user activity data, and such data is often constructed as a large attributed network in which entities (e.g., buyers, sellers, and products) are nodes while activities between entities (e.g., purchasing and reviewing) are edges. To prevent potential losses, fraud entities (e.g., manipulated reviews and fake buyers) need to be identified on these platforms, and GNNs have become the prevalent solution to achieve this goal~\cite{dou2020enhancing,liu2021pick}. Nevertheless, GNNs run the risk of using sensitive information (e.g., race and gender) to identify fraud entities, yielding inevitable discrimination. Therefore, it is a crucial problem to mitigate bias in these network-based applications. Various efforts have been made to mitigate the bias exhibited in graph mining algorithms. For example, in online social networks, random walk algorithms can be modified by improving the appearance rate of minorities~\cite{rahman2019fairwalk, burke2017balanced}; adversarial learning is another popular approach, which aims to learn node embeddings that are not distinguishable on sensitive attributes~\cite{DBLP:conf/icml/BoseH19, masrour2020bursting}. Some recent efforts have also been made to mitigate bias in the outcome of GNNs. For example, adversarial learning can also be adapted to GNNs for outcome bias mitigation~\cite{dai2020say}. Nevertheless, existing approaches to debias GNN outcomes are tailored for a specific GNN model on a certain downstream task. In practical scenarios, different applications could adopt different GNN variants~\cite{kipf2016semi,hamilton2017inductive}, and it is costly to train and fine-tune the debiasing approaches based on diverse GNN backbones. As a consequence, to mitigate bias more efficiently for different GNNs and tasks, developing a one-size-fits-all approach becomes highly desirable.
Then the question is: how can we perform debiasing regardless of specific GNNs and downstream tasks? Considering that a model trained on biased datasets also tends to be biased~\cite{zemel2013learning,dai2020say,beutel2017data}, directly debiasing the dataset itself can be a straightforward solution. There are already debiasing approaches that modify the original datasets by perturbing data distributions or reweighting the data points in the dataset~\cite{wang2019repairing,kamiran2012data,calders2009building}. These approaches obtain less biased datasets, which help to mitigate bias in learning algorithms. In this regard, considering that debiasing for different GNNs is costly, it is also highly desirable to mitigate the bias in attributed networks before they are fed into GNNs. Nevertheless, to the best of our knowledge, despite its fundamental importance, no existing literature has taken such a step forward. In this paper, we make an initial investigation into debiasing attributed networks towards fairer GNNs. Specifically, we tackle the following challenges. (1) \emph{\textbf{Data Bias Modeling}}. Traditionally, bias modeling is coupled with the outcome of a specific GNN~\cite{dai2020say}. Based on the GNN outcome, bias can be modeled via different fairness notions, e.g., \textit{Statistical Parity}~\cite{dwork2012fairness} and \textit{Equality of Opportunity}~\cite{DBLP:conf/nips/HardtPNS16}, to determine whether the outcome is discriminatory towards some specific demographic groups. Nevertheless, if debiasing is carried out directly based on the input attributed networks instead of the GNN outcome, the first and foremost challenge is how to appropriately model such data bias. (2) \emph{\textbf{Multi-Modality Debiasing}}. In fact, attributed networks contain both graph structure and node attribute information. Correspondingly, bias may exist in diverse formats across different data modalities.
In this regard, how to debias attributed networks that have different data modalities is the second challenge that needs to be tackled. (3) \emph{\textbf{Model-Agnostic Debiasing}}. Existing GNN debiasing approaches require the outcome of a specific GNN for objective function optimization during training. Different from these approaches, model-agnostic debiasing for GNNs should not rely on any specific GNN, as our goal is to develop a one-size-fits-all data debiasing approach to benefit various GNNs. Clearly, such model-agnostic debiasing could have better generalization capability but becomes much more difficult compared with the model-oriented GNN debiasing approaches. Nevertheless, the ultimate goal of debiasing is still to ensure the GNN outcome does not exhibit any discrimination. Such a contradiction poses the challenge of how to properly formulate a debiasing objective that can be universally applied to different GNNs in downstream tasks. To tackle the challenges above, we present novel data bias modeling approaches and a principled debiasing framework named EDITS (mod\underline{E}ling an\underline{D} m\underline{I}tigating da\underline{T}a bia\underline{S}) to achieve model-agnostic attributed network debiasing for GNNs. Specifically, we first carry out preliminary analysis to illustrate how bias exists in the two data modalities of an attributed network (i.e., node attributes and network structure) and affects each other in the information propagation of GNNs. Then, we formally define \textit{attribute bias} and \textit{structural bias}, together with the corresponding metrics for data bias modeling. Besides, we formulate the problem of debiasing attributed networks for GNNs, and propose a novel framework named EDITS for bias mitigation. It is worth mentioning that EDITS is model-agnostic for GNNs. In other words, our goal is to obtain less biased attributed networks for the input of any GNNs. 
Finally, empirical evaluations on both synthetic and real-world datasets corroborate the validity of the proposed bias metrics and the effectiveness of EDITS. Our contributions are summarized as: \begin{itemize}[topsep=0pt] \item \textbf{Problem Formulation.} We formulate and make an initial investigation into a novel research problem: debiasing attributed networks for GNNs based on the analysis of the information propagation mechanism. \item \textbf{Metric and Algorithm Design.} We design novel bias metrics for attributed network data, and propose a model-agnostic debiasing framework named EDITS to mitigate the bias in attributed networks before they are fed into any GNNs. \item \textbf{Experimental Evaluation.} We conduct comprehensive experiments on both synthetic and real-world datasets to verify the validity of the proposed bias metrics and the effectiveness of the proposed framework EDITS. \end{itemize} \section{Preliminary Analysis} \label{investigation} We provide two exemplary cases to show how the two data modalities of an attributed network (i.e., node attributes and network structure) introduce bias in information propagation -- the most common operation in GNNs. These two cases also bring insights into tackling the three challenges mentioned in Sec.~\ref{intro}. Specifically, two synthetic datasets are generated with either biased node attributes or a biased network structure, and then attributes are propagated across the network structure to show how bias is introduced in GNNs. Here we regard the attribute distribution difference between demographic groups as the bias in attributes, and the difference between demographic groups in the group-membership distribution of a node's neighbors as the bias in network structure. Such bias in attributes and structure can be regarded as the bias existing in the two data modalities of an attributed network.
It should be noted that using distribution difference to define the level of bias is consistent with many algorithmic fairness studies~\cite{dwork2012fairness,zemel2013learning}. Now we explain how the synthetic datasets are generated. We assume the \textit{sensitive attribute} is gender, and 1,000 nodes are generated with half males (blue) and half females (orange) for both cases. In addition to the sensitive attribute, each node carries an extra two-dimensional attribute vector, which will be initialized and fed as input for information propagation. To introduce bias to either of the data modalities, different strategies are adopted to generate the attribute vector and the network structure. To study how the two data modalities introduce bias in information propagation, we compare the distribution difference of attributes between groups before and after the propagation mechanism in GCN~\cite{kipf2016semi}. \begin{figure} \caption{Two exemplary cases illustrating how bias in the two data modalities of an attributed network introduces bias in GNN information propagation. Here (c) is the node attribute distribution after propagation with biased node attributes (a) and unbiased network structure (b); while (f) is the attribute distribution after propagation with unbiased node attributes (d) and biased network structure (e).} \label{dis_1_1} \label{st_1} \label{dis_1_2} \label{dis_2_1} \label{st_2} \label{dis_2_2} \label{bias_incorporating} \end{figure} \noindent \textbf{Case 1: Biased attributes and unbiased structure.} In this case, we generate biased two-dimensional attribute vectors for nodes from the two groups (i.e., males and females) and an unbiased network structure. Specifically, biased attributes are generated independently at each dimension with Gaussian distribution $\mathcal{N}$(-1.5, $1^2$) for females and $\mathcal{N}$(1.5, $1^2$) for males. The distributions are shown in Fig.~(\ref{dis_1_1}). We then introduce how an unbiased network structure is generated.
For each node in an unbiased network structure, the expected membership ratio of any group in its neighbor node set should be independent of the membership of the node itself. In this regard, we generate an unbiased network structure via the \textit{random graph} model with edge formation probability $2 \times 10^{-3}$. The visualization of the network is presented in Fig.~(\ref{st_1}). The attribute distribution after information propagation according to the network structure is shown in Fig.~(\ref{dis_1_2}). Comparing Fig.~(\ref{dis_1_1}) (attribute distribution before propagation) with (\ref{dis_1_2}) (attribute distribution after propagation), we can see that the unbiased structure helps mitigate the original attribute bias after attributes are propagated according to the network structure. This not only implies that the attribute distribution difference between groups is a vital source of bias, but also demonstrates that unbiased structure helps mitigate bias in attributes after the information propagation process. \noindent \textbf{Case 2: Unbiased attributes and biased structure.} In this case, unbiased attributes are generated independently at each dimension with $\mathcal{N}$(0, $1^2$) for both males and females. The distributions are shown in Fig.~(\ref{dis_2_1}). The biased network structure is generated as follows. For each node, we sum up its attribute values. Then, we rank all nodes in descending order according to the summation of attribute values. After that, given a threshold integer $t$, for the top-ranked $t$ males and bottom-ranked $t$ females, we assume that they form two separate communities. The two communities are shown as the bottom right community (males) and the upper left community (females) in Fig.~(\ref{st_2}). We generate edges via the \textit{random graph} model with edge formation probability $5 \times 10^{-2}$ within each community.
Similarly, the remaining nodes form a third community via the \textit{random graph} model with edge formation probability $1 \times 10^{-2}$. We also generate edges between nodes from the male (or female) community and the third community with probability $2 \times 10^{-4}$. In this way, we introduce bias into the network structure. The final network is presented in Fig.~(\ref{st_2}). The attribute distribution after propagation according to the network structure is shown in Fig.~(\ref{dis_2_2}). Comparing Fig.~(\ref{dis_2_1}) with (\ref{dis_2_2}), we find that even if the original attributes are unbiased, the biased structure still turns the attributes into biased ones after information propagation. This implies that the bias contained in the network structure is also a significant source of bias. Based on these observations, we draw three preliminary conclusions to help us tackle the challenges in Sec.~\ref{intro}. (1) For \emph{\textbf{Data Bias Modeling}}, bias in attributes can be modeled based on the difference of attribute distribution between two groups. Also, bias in network structure can be modeled based on the difference of attribute distribution between two groups after information propagation. (2) For \emph{\textbf{Multi-Modality Debiasing}} in an attributed network, at least two debiasing processes should be carried out targeting the two data modalities (i.e., attributes and structure). (3) For \emph{\textbf{Model-Agnostic Debiasing}}, if the attribute distributions between groups can be made less biased both before and after information propagation, the learned node representations tend to be indistinguishable between groups. Then GNNs trained on such data could also be less biased. \section{Modeling Data Bias for GNNs} \label{metrics} In this section, we define \textit{attribute bias} and \textit{structural bias} in attributed networks together with their metrics.
For the sake of simplicity, we focus on a binary sensitive attribute and leave the generalization to non-binary cases to the Appendix. Theoretical analysis of our proposed metrics is also presented in the Appendix. \subsection{Preliminaries} In this paper, without further specification, bold uppercase letters (e.g., $\mathbf{X}$), bold lowercase letters (e.g., $\mathbf{x}$), and normal lowercase letters (e.g., $x$) represent matrices, vectors, and scalars, respectively. For any matrix, e.g., $\mathbf{X}$, we use $\mathbf{X}_i$ to denote its $i$-th row. Let $\mathcal{G}$ = ($\mathbf{A}$, $\mathbf{X}$) be an undirected attributed network. Here $\mathbf{A} \in \mathbb{R}^{N \times N}$ is the adjacency matrix, and $\mathbf{X} \in \mathbb{R}^{N \times M}$ is the node attribute matrix, where $N$ is the number of nodes and $M$ is the attribute dimension. Let a diagonal matrix $\mathbf{D}$ be the degree matrix of $\mathbf{A}$, where its ($i$,$i$)-th entry $\mathbf{D}_{i,i} = \sum_{j} \mathbf{A}_{i,j}$, and $\mathbf{D}_{i,j} = 0$ ($i\neq j$). $\mathbf{L} = \mathbf{D} - \mathbf{A}$ is the graph Laplacian matrix. Denote the normalized adjacency matrix and the normalized Laplacian matrix as $\mathbf{A}_{\text{norm}} = \mathbf{D}^{- \frac{1}{2}} \mathbf{A} \mathbf{D}^{- \frac{1}{2}}$ and $\mathbf{L}_{\text{norm}} = \mathbf{D}^{- \frac{1}{2}} \mathbf{L} \mathbf{D}^{- \frac{1}{2}}$, respectively. $|\cdot|$ is the absolute value operator. \subsection{Definitions of Bias} \label{definitions} We consider two types of bias on attributed networks, i.e., attribute bias and structural bias. We first define attribute bias as follows. \begin{myDef}\label{defn:attribute_bias} \textbf{Attribute bias.} Given an undirected attributed network $\mathcal{G}$ = ($\mathbf{A}$, $\mathbf{X}$) and the group indicator (w.r.t. the sensitive attribute) for each node $\mathbf{s} = [s_1, s_2, ..., s_N]$, where $s_i \in \{0, 1\}$ ($1 \leq i \leq N$).
For any attribute, if its value distributions between different demographic groups are different, then attribute bias exists in $\mathcal{G}$. \end{myDef} Besides, as shown in the second example in Sec.~\ref{investigation}, bias can also emerge after attributes are propagated in the network even when the original attributes are unbiased. Therefore, an intuitive idea to identify structural bias is to check whether information propagation in the network introduces or exacerbates bias~\cite{jalali2020information}. Formally, we define structural bias on attributed networks as follows. \begin{myDef}\label{defn:structural_bias} \textbf{Structural bias.} Given an undirected attributed network $\mathcal{G}$ = ($\mathbf{A}$, $\mathbf{X}$) and the corresponding group indicator (w.r.t. the sensitive attribute) for each node $\mathbf{s} = [s_1, s_2, ..., s_N]$, where $s_i \in \{0, 1\}$ ($1 \leq i \leq N$). For the attribute values propagated w.r.t. $\mathbf{A}$, if their distributions between different demographic groups are different at any attribute dimension, then structural bias exists in $\mathcal{G}$. \end{myDef} Apart from these definitions, it is also necessary to quantitatively measure attribute bias and structural bias. In the sequel, we introduce our proposed metrics for the two types of bias. \subsection{Bias Metrics} \label{metrics} Here we take the first step to define metrics for both \textit{attribute bias} and \textit{structural bias} for an undirected attributed network $\mathcal{G}$. \noindent \textbf{Attribute bias metric.} Let $\mathbf{X}_{\text{norm}} \in \mathbb{R}^{N \times M}$ be the normalized attribute matrix. For the $m$-th dimension ($1 \leq m \leq M$) of $\mathbf{X}_{\text{norm}}$, we use $\mathcal{X}^{0}_m$ and $\mathcal{X}^{1}_m$ to denote the attribute value sets for nodes with $s_{i}=0$ and $s_{i}=1$ ($1 \leq i \leq N$), respectively.
Then, attributes of all nodes can be divided into tuples: $\mathcal{X}_{total} = \{ (\mathcal{X}^{0}_1, \mathcal{X}^{1}_1), (\mathcal{X}^{0}_2, \mathcal{X}^{1}_2), ..., (\mathcal{X}^{0}_M, \mathcal{X}^{1}_M)\}$. We measure attribute bias with the Wasserstein-1 distance~\cite{villani2021topics} between the distributions of the two groups: \begin{align} \label{bias_1_formulation} b_{\text{attr}} = \frac{1}{M} \sum_{m} W (pdf(\mathcal{X}^{0}_m), pdf(\mathcal{X}^{1}_m)). \end{align} Here $pdf (\cdot)$ is the probability density function for a set of values, and $W(\cdot, \cdot)$ is the Wasserstein distance between two distributions. Intuitively, $b_{\text{attr}}$ describes the average Wasserstein-1 distance between attribute distributions of different groups across all dimensions. It should be noted that taking the distribution difference between demographic groups as the indication of bias is in line with many existing algorithmic fairness studies~\cite{zemel2013learning,DBLP:conf/icml/BoseH19,dai2020say}. \begin{figure*} \caption{An illustration of EDITS with $H=2$.} \label{framework} \end{figure*} \noindent \textbf{Structural bias metric.} As illustrated in Sec.~\ref{investigation}, the key mechanism of GNNs is information propagation, during which structural bias could be introduced. Let $\mathbf{P}_{\text{norm}} = \alpha \textbf{A}_{\text{norm}} + (1 - \alpha) \textbf{I}$. Here $\mathbf{P}_{\text{norm}}$ can be regarded as a normalized adjacency matrix with re-weighted self-loops, where $\alpha \in [0,1]$ is a hyper-parameter. Before measuring structural bias, we define the \textit{propagation matrix} $\mathbf{M}_H \in \mathbb{R}^{N \times N}$ as: \begin{align}\label{eq:Mh} \mathbf{M}_{H} = \beta_1 \mathbf{P}_{\text{norm}} + \beta_2 \mathbf{P}_{\text{norm}}^2 + ... + \beta_H \mathbf{P}_{\text{norm}}^H, \end{align} where the $\beta_h$ ($1 \leq h \leq H$) are re-weighting parameters.
The rationale behind the formulation above is to measure the aggregated reaching likelihood from each node to other nodes within a distance of $H$. To achieve a localized effect for each node, a desired choice is to let $\beta_1 \geq \beta_2 \geq ... \geq \beta_H$, i.e., emphasizing short-distance terms and reducing the weights of long-distance terms. For example, assume $H=3$; then the value $(\mathbf{M}_{3})_{i,j}$ is the aggregated reaching likelihood from node $i$ to node $j$ within 3 hops with re-weighting parameters $\beta_1$, $\beta_2$ and $\beta_3$. Also, given attributes $\mathbf{X}_{\text{norm}}$, we define the \textit{reachability matrix} $\mathbf{R} \in \mathbb{R}^{N \times M}$ as $\mathbf{R} = \mathbf{M}_{H} \mathbf{X}_{\text{norm}}$. Intuitively, $\mathbf{R}_{i, m}$ is the aggregated reachable attribute value for attribute $m$ of node $i$. We utilize $\mathcal{R}^{0}_m$ and $\mathcal{R}^{1}_m$ to represent the sets of values of the $m$-th dimension in $\mathbf{R}$ for nodes with $s_{i}=0$ and $s_{i}=1$ ($1 \leq i \leq N$). The entries in $\mathbf{R}$ can also be divided into tuples according to attribute dimensions: $\mathcal{R}_{total} = \{ (\mathcal{R}^{0}_1, \mathcal{R}^{1}_1), (\mathcal{R}^{0}_2, \mathcal{R}^{1}_2), ..., (\mathcal{R}^{0}_M, \mathcal{R}^{1}_M)\}$. We define structural bias as: \begin{align} \label{bias_2_formulation} b_{\text{stru}} = \frac{1}{M} \sum_{m} W (pdf(\mathcal{R}^{0}_m), pdf(\mathcal{R}^{1}_m)). \end{align} Here $b_{\text{stru}}$ is defined in a similar way as $b_{\text{attr}}$, except that the former uses $\mathcal{R}^{0}_m$ and $\mathcal{R}^{1}_m$ instead of $\mathcal{X}^{0}_m$ and $\mathcal{X}^{1}_m$. In this way, structural bias $b_{\text{stru}}$ describes the average difference between aggregated attribute distributions of different groups after several rounds of propagation.
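Both metrics are straightforward to compute from their definitions. The following is an illustrative sketch (not the paper's implementation): the graph, attributes, and hyper-parameters are hypothetical, the empirical Wasserstein-1 distance is computed with `scipy.stats.wasserstein_distance`, and for brevity we apply the metrics to the raw attribute matrix rather than the normalized $\mathbf{X}_{\text{norm}}$:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def bias_metrics(A, X, s, alpha=0.5, betas=(0.6, 0.4)):
    """Sketch of b_attr and b_stru; H = len(betas), with beta_1 >= beta_2 >= ..."""
    n = len(s)
    d = np.maximum(A.sum(1), 1)                          # guard isolated nodes
    A_norm = A / np.sqrt(np.outer(d, d))                 # D^{-1/2} A D^{-1/2}
    P = alpha * A_norm + (1 - alpha) * np.eye(n)         # P_norm
    M = sum(b * np.linalg.matrix_power(P, h + 1)         # M_H = sum_h beta_h P^h
            for h, b in enumerate(betas))
    R = M @ X                                            # reachability matrix
    b_attr = np.mean([wasserstein_distance(X[s == 0, m], X[s == 1, m])
                      for m in range(X.shape[1])])
    b_stru = np.mean([wasserstein_distance(R[s == 0, m], R[s == 1, m])
                      for m in range(X.shape[1])])
    return b_attr, b_stru

# Hypothetical data: biased attributes, group-independent random structure.
rng = np.random.default_rng(1)
n = 100
s = np.repeat([0, 1], n // 2)
X = rng.normal(0, 1, (n, 2)); X[s == 1] += 2.0
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T

b_attr, b_stru = bias_metrics(A, X, s)
```

On such data $b_{\text{attr}}$ sits near the group-mean gap, while the group-independent structure mixes the two groups during propagation, so $b_{\text{stru}} < b_{\text{attr}}$, mirroring Case 1 of Sec. 2.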
\subsection{Problem Statement} Based on the definitions and metrics in Sec.~\ref{definitions} and \ref{metrics}, we argue that if both $b_{\text{attr}}$ and $b_{\text{stru}}$ are reduced, bias in an attributed network can be mitigated. As a result, if GNNs are trained on such data, the bias issues in downstream tasks could also be alleviated. Formally, we define the debiasing problem as follows. \begin{myDef1} \label{problem} \textbf{Debiasing attributed networks for GNNs.} Given an attributed network $\mathcal{G}$ = ($\mathbf{A}$, $\mathbf{X}$), our goal is to debias $\mathcal{G}$ by reducing $b_{\text{attr}}$ and $b_{\text{stru}}$ to obtain $\mathcal{\tilde{G}}$ = ($\mathbf{\tilde{A}}$, $\mathbf{\tilde{X}}$), so that the bias of GNNs trained on $\mathcal{\tilde{G}}$ is mitigated. The debiasing is independent of any specific GNN. \end{myDef1} \section{Mitigating Data Bias for GNNs} \label{framework_intro} In this section, we discuss how to tackle Problem~\ref{problem} with our proposed framework EDITS. We focus on the binary sensitive attribute for the sake of simplicity and discuss the extension later. We first present an overview of EDITS, followed by the formulation of the objective function. Finally, we present the optimization process. \subsection{Framework Overview} An overview of the proposed framework EDITS is shown in Fig.~(\ref{framework}). Specifically, EDITS consists of three modules, whose parameters are optimized alternately during training. \begin{itemize} \item \textbf{\emph{Attribute Debiasing.}} This module learns a debiasing function $g_{\bm{\theta}}$ with learnable parameters $\bm{\theta} \in \mathbb{R}^{M}$. The debiased version of $\mathbf{X}$ is obtained as output, where $\mathbf{\tilde{X}} = g_{\bm{\theta}} (\mathbf{X})$. \item \textbf{\emph{Structural Debiasing.}} This module outputs $\mathbf{\tilde{A}}$ as the debiased $\mathbf{A}$. Specifically, $\mathbf{\tilde{A}}$ is initialized with $\mathbf{A}$ at the beginning of the optimization process.
The entries in $\mathbf{\tilde{A}}$ are optimized via gradient descent with clipping and binarization. \item \textbf{\emph{Wasserstein Distance Approximator.}} This module learns a function $f$ for each attribute dimension. Here $f$ is utilized to estimate the Wasserstein distance between the distributions of different groups for any attribute dimension. \end{itemize} \subsection{Objective Function} In this subsection, we introduce the details of our framework. Following Definitions~\ref{defn:attribute_bias} and~\ref{defn:structural_bias}, our goal is to reduce $b_{\text{attr}}$ and $b_{\text{stru}}$ simultaneously. For ease of understanding, we first consider only the $m$-th attribute dimension as an example, and then extend it to all $M$ dimensions to obtain our final objective function. Let $P_{0,m}$ and $P_{1,m}$ be the value distributions at the $m$-th attribute dimension in $\mathbf{X}$ for nodes with sensitive attribute $s=0$ and $s=1$, respectively. Denote $x_{0,m} \sim P_{0,m}$ and $x_{1,m} \sim P_{1,m}$ as two random variables drawn from the two distributions. Assume that we have a function $g_{\theta_m}: \mathbb{R} \rightarrow \mathbb{R}$ to mitigate attribute bias, where $1 \leq m \leq M$. For the $m$-th dimension, we denote $x_{0,m}^{(0)} = g_{\theta_m}(x_{0,m}) \sim P_{0,m}^{(0)}$ and $x_{1,m}^{(0)} = g_{\theta_m}(x_{1,m}) \sim P_{1,m}^{(0)}$ as the debiasing results for $x_{0,m}$ and $x_{1,m}$, respectively. Here the superscript $(0)$ indicates that no information propagation is performed in the debiasing process. Correspondingly, when this operation is extended to all $M$ dimensions, we obtain the debiased attribute matrix $\mathbf{\tilde{X}}$. Apart from the goal of mitigating attribute bias, we also want to mitigate structural bias.
Let $\mathbf{\tilde{A}}$ be the adjacency matrix from the debiased network structure, and let $\mathbf{\tilde{P}}_{\text{norm}}$ denote the normalized $\mathbf{\tilde{A}}$ with re-weighted self-loops. Information propagation with $h$ hops using the debiased adjacency matrix can be expressed as $\mathbf{\tilde{P}}_{\text{norm}}^{h} \mathbf{\tilde{X}}$, where $1 \leq h \leq H$. Let $P_{0,m}^{(h)}$ and $P_{1,m}^{(h)}$ be the value distributions at the $m$-th column of $\mathbf{\tilde{P}}_{\text{norm}}^{h} \mathbf{\tilde{X}}$ for nodes with sensitive attribute $s=0$ and $s=1$, respectively. Denote $x_{0,m}^{(h)} \sim P_{0,m}^{(h)}$ and $x_{1,m}^{(h)} \sim P_{1,m}^{(h)}$ as two random variables drawn from the two distributions. We expect the optimized $\mathbf{\tilde{A}}$ to mitigate structural bias. We combine attribute and structural debiasing as follows. Based on the random variables $x_{0,m}^{(0)}$ to $x_{0,m}^{(H)}$ and $x_{1,m}^{(0)}$ to $x_{1,m}^{(H)}$, we have $(H+1)$-dimensional vectors $\mathbf{x}_{0,m} = [x_{0,m}^{(0)}, x_{0,m}^{(1)}, ..., x_{0,m}^{(H)}]$ and $\mathbf{x}_{1,m} = [x_{1,m}^{(0)}, x_{1,m}^{(1)}, ..., x_{1,m}^{(H)}]$ following the joint distributions $P_{0,m}^{Joint}$ and $P_{1,m}^{Joint}$, respectively. To reduce both $b_{\text{attr}}$ and $b_{\text{stru}}$ at the $m$-th dimension, our goal is to minimize the Wasserstein distance between $P_{0,m}^{Joint}$ and $P_{1,m}^{Joint}$, which is formulated as $\min_{\theta_m, \mathbf{\tilde{A}}} W(P_{0,m}^{Joint}, P_{1,m}^{Joint})$. $W(P_{0,m}^{Joint}, P_{1,m}^{Joint})$ can be expressed as \begin{align} \label{wd_2} W(P_{0,m}^{Joint}, &P_{1,m}^{Joint}) = \\ \notag &\inf_{\gamma \in \Pi(P_{0,m}^{Joint}, P_{1,m}^{Joint})} \mathbb{E}_{(\mathbf{x}_{0,m}, \mathbf{x}_{1,m}) \sim \gamma}[\|\mathbf{x}_{0,m}-\mathbf{x}_{1,m}\|_{1}].
\end{align} Here $\Pi(P_{0,m}^{Joint}, P_{1,m}^{Joint})$ represents the set of all joint distributions $\gamma (\mathbf{x}_{0,m},\mathbf{x}_{1,m})$ whose marginals are $P_{0,m}^{Joint}$ and $P_{1,m}^{Joint}$, respectively. After considering all the $M$ dimensions, the overall objective is \begin{align} \label{wd_3} \min_{\bm{\theta}, \mathbf{\tilde{A}}} \frac{1}{M} \sum_{1 \leq m \leq M} W(P_{0,m}^{Joint}, P_{1,m}^{Joint}). \end{align} It is non-trivial to optimize Eq. (\ref{wd_3}) as the infimum is intractable. Therefore, in the next subsection, we show how to convert it into a tractable optimization problem through approximation, which enables end-to-end gradient-based optimization. \subsection{Framework Optimization} \label{optimization} In this subsection, we introduce our optimization algorithm. For simplicity, we first use the $m$-th attribute dimension in $\mathbf{X}$ to illustrate the idea. Since the infimum in the Wasserstein distance computation is intractable, we apply the Kantorovich-Rubinstein duality~\cite{villani2008optimal} to convert Eq. (\ref{wd_2}) into: \begin{align} \label{wd_4} W(&P_{0,m}^{Joint}, P_{1,m}^{Joint}) = \\ \notag &\sup_{\|f\|_{L} \leq 1} \mathbb{E}_{\mathbf{x}_{0,m} \sim P_{0,m}^{Joint}}[f(\mathbf{x}_{0,m})] - \mathbb{E}_{\mathbf{x}_{1,m} \sim P_{1,m}^{Joint}}[f(\mathbf{x}_{1,m})]. \end{align} Here $\|f\|_{L} \leq 1$ denotes that the supremum is taken over all 1-Lipschitz functions $f$ : $\mathbb{R}^{H+1} \rightarrow \mathbb{R}$. The problem can be solved by learning a neural network as $f$. Nevertheless, the 1-Lipschitz constraint is difficult to enforce exactly during optimization. Therefore, here we relax $\|f\|_{L} \leq 1$ to $\|f\|_{L} \leq k$ ($k$ is a constant). In this case, the left-hand side of Eq. (\ref{wd_4}) also changes to $k W(P_{0,m}^{Joint}, P_{1,m}^{Joint})$.
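As a concrete illustration, the construction of the $(H+1)$-dimensional joint samples and one clipped gradient-ascent step on the dual objective can be sketched in numpy as follows. The helper names and the bare linear critic are our own simplifications for illustration, not the EDITS implementation; weight clipping is one common way to bound the critic's Lipschitz constant.

```python
import numpy as np

def propagation_stack(P_norm, X_tilde, m, H):
    """Stack the m-th column of P_norm^h @ X_tilde for h = 0, ..., H, so that
    every node contributes one (H+1)-dimensional sample of the joint distribution."""
    cols = [X_tilde[:, m]]
    Z = X_tilde
    for _ in range(H):
        Z = P_norm @ Z                      # one more hop of information propagation
        cols.append(Z[:, m])
    return np.stack(cols, axis=1)           # shape: (n_nodes, H + 1)

def critic_step(w, x0, x1, lr=0.01, c=0.1):
    """One gradient-ascent step on E[f(x0)] - E[f(x1)] for a linear critic
    f(x) = x @ w, then clip the weights to [-c, c] so that the Lipschitz
    constant of f stays bounded."""
    grad = x0.mean(axis=0) - x1.mean(axis=0)
    w = np.clip(w + lr * grad, -c, c)
    estimate = x0.mean(axis=0) @ w - x1.mean(axis=0) @ w
    return w, estimate
```

Iterating `critic_step` on the stacked samples of the two groups drives `estimate` toward the Wasserstein distance up to a multiplicative constant, which is exactly the quantity minimized over $\theta_m$ and $\mathbf{\tilde{A}}$.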
Then, the Wasserstein distance between $P_{0,m}^{Joint}$ and $P_{1,m}^{Joint}$ up to a multiplicative constant can be attained via: \begin{align} \label{wd_5} &\max_{f_m \in \mathcal{F}} \mathbb{E}_{\mathbf{x}_{0,m} \sim P_{0,m}^{Joint}}[f_{m}(\mathbf{x}_{0,m})] - \mathbb{E}_{\mathbf{x}_{1,m} \sim P_{1,m}^{Joint}}[f_{m}(\mathbf{x}_{1,m})], \end{align} where $\mathcal{F}$ denotes the set of all $k$-Lipschitz functions (i.e., $\|f_{m}\|_{L} \leq k$, $f_m \in \mathcal{F}$). Then, extending Eq. (\ref{wd_5}) to all $M$ dimensions leads to our final objective function: \begin{align} \label{wd_6} \mathscr{L}_1 = \sum_{1 \leq m \leq M} \{ \mathbb{E}_{\mathbf{x}_{0,m} \sim P_{0,m}^{Joint}}[f_{m}(\mathbf{x}_{0,m})] - \mathbb{E}_{\mathbf{x}_{1,m} \sim P_{1,m}^{Joint}}[f_{m}(\mathbf{x}_{1,m})] \}, \end{align} where $\{f_m: 1\leq m \leq M\} \subset \mathcal{F}$. To model the function $f$ in Eq. (\ref{wd_6}), a single-layer neural network serves as each of the \textit{Wasserstein Distance Approximators} in Fig.~(\ref{framework}) to approximate $f_m$ ($1 \leq m \leq M$), where the objective can be formulated as: \begin{align} \label{wd_7} \max_{\{f_m: 1 \leq m \leq M\} \subset \mathcal{F}} \mathscr{L}_1 \,\,. \end{align} The weights of the neural networks are clipped within $[-c, c]$ ($c$ is a pre-defined constant), which has been shown to be a simple but effective way to enforce the Lipschitz constraint for every $f_{m}$~\cite{DBLP:journals/corr/ArjovskyCB17}. For the \textit{Attribute Debiasing} module in Fig.~(\ref{framework}), we choose a linear function, i.e., $g_{\theta_m}(x_{s,m}) = \theta_m x_{s,m}$ ($s \in \{0, 1\}$). One advantage is that it plays the role of feature re-weighting by assigning a weight to each attribute dimension, which enables better interpretability of the debiased result.
Letting $\bm{\Theta}$ be a diagonal matrix with the $m$-th diagonal entry being $\theta_m$, in matrix form we have $\mathbf{\tilde{X}} = g_{\bm{\theta}} (\mathbf{X}) = \mathbf{X} \bm{\Theta}$. Then the optimization goal for \textit{attribute debiasing} is: \begin{align} \label{wd_8} \min_{\bm{\Theta}}\, \mathscr{L}_1 + \mu_1 \|\mathbf{\tilde{X}} - \mathbf{X}\|_{F}^{2} + \mu_2 \|\bm{\Theta}\|_{1}, \end{align} where $\mu_1$ and $\mu_2$ are hyper-parameters. The second term ensures that the debiased attributes after feature re-weighting are close to the original ones (i.e., preserve as much information as possible). The third term controls the sparsity of the re-weighting parameters. For the \textit{Structural Debiasing} module in Fig.~(\ref{framework}), $\mathbf{\tilde{A}}$ is optimized through: \begin{align} \label{wd_9} \min_{\mathbf{\tilde{A}}}\, \mathscr{L}_1 + \mu_3 \|\mathbf{\tilde{A}} - \mathbf{A}\|_{F}^{2} + \mu_4 \|\mathbf{\tilde{A}} \|_{1} \;\;\; \text{s.t. } \mathbf{\tilde{A}} = \mathbf{\tilde{A}}^{\top}, \end{align} where $\mu_3$ and $\mu_4$ are hyper-parameters. The second term ensures the debiased result $\mathbf{\tilde{A}}$ is close to the original structure $\mathbf{A}$. The third term enforces that the debiased network structure is also sparse, which aligns with the characteristics of real-world networks~\cite{jin2020graph}. \noindent \textbf{Optimization Strategy.} To optimize function $f$, parameter $\bm{\Theta}$, and $\mathbf{\tilde{A}}$, we propose a gradient-based alternating optimization approach, summarized in Algorithm~\ref{algorithm} in the Appendix. First, for the optimization of $f$ w.r.t. Eq. (\ref{wd_7}), we directly utilize Stochastic Gradient Descent (SGD). Second, for the optimization of parameter $\bm{\Theta}$ w.r.t. Eq. (\ref{wd_8}), we adopt Proximal Gradient Descent (PGD). In the projection operation of PGD, we clip the parameters in $\bm{\Theta}$ within $[0, 1]$.
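A minimal numpy sketch of one such PGD update for the diagonal of $\bm{\Theta}$ (our own illustrative helper, assuming the gradient of the smooth part of the objective is given): a gradient step, the soft-thresholding proximal operator for the $\ell_1$ term, and the projection that clips the entries into $[0, 1]$.

```python
import numpy as np

def pgd_step_theta(theta_diag, grad_smooth, lr, mu2):
    """One proximal gradient step for  min  L_smooth(Theta) + mu2 * ||Theta||_1,
    followed by projecting (clipping) the diagonal entries into [0, 1]."""
    z = theta_diag - lr * grad_smooth                           # gradient step on the smooth part
    z = np.sign(z) * np.maximum(np.abs(z) - lr * mu2, 0.0)      # L1 prox: soft-thresholding
    return np.clip(z, 0.0, 1.0)                                 # projection onto [0, 1]
```

The same pattern applies to the update of $\mathbf{\tilde{A}}$, with the additional symmetrization constraint.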
Moreover, to remove the most biased attribute channels, the $z$ smallest weights in the diagonal of $\bm{\Theta}$ are masked with 0, where $z$ is a pre-assigned hyper-parameter for attribute debiasing. Third, for the optimization of parameter $\mathbf{\tilde{A}}$ w.r.t. Eq. (\ref{wd_9}), we also adopt PGD with a similar clipping strategy to that used for $\bm{\Theta}$. Finally, Algorithm~\ref{algorithm} outputs $\mathbf{\tilde{X}}$ and $\mathbf{\tilde{A}}$ after multiple epochs of optimization. \noindent \textbf{Edge Binarization.} Here we introduce how we binarize the elements in $\mathbf{\tilde{A}}$ to indicate the existence of edges. The basic intuition is to set a numerical threshold to determine the edge existence based on the entry-wise value change between $\mathbf{\tilde{A}}$ and $\mathbf{A}$. Specifically, for the ``0'' entries in $\mathbf{A}$, if the corresponding weight of any entry in $\mathbf{\tilde{A}}$ exceeds $r \cdot \max (\mathbf{\tilde{A}} - \mathbf{A})$, then we flip such an entry from 0 to 1. Here $r$ is a pre-set threshold for binarization, and $\max (\cdot)$ outputs the largest entry of a matrix. Similarly, for the ``1'' entries in $\mathbf{A}$, if the corresponding weight of any entry in $\mathbf{\tilde{A}}$ is reduced by a number exceeding $r \cdot | \min (\mathbf{\tilde{A}} - \mathbf{A})|$, then such an entry is flipped to 0. Here $\min (\cdot)$ gives the smallest entry of a matrix. To summarize, this operation aims to directly flip the entries with significant value changes, and maintain the other entries at their original values. Finally, the binarized matrix is assigned to $\mathbf{\tilde{A}}$ as the final outcome. \section{Experimental Evaluations} We conduct experiments on both real-world and synthetic datasets to evaluate the effectiveness of EDITS. In particular, we answer the following research questions.
\textbf{RQ1:} How well can EDITS mitigate the bias in attributed networks, as well as in the outcome of different GNN variants on the downstream task? \textbf{RQ2:} How well can EDITS balance utility maximization and bias mitigation compared with other debiasing baselines tailored for a specific GNN? \subsection{Downstream Task and Datasets} \label{data} \textbf{Downstream Task.} We choose the widely adopted \textit{node classification} task to assess the effectiveness of our proposed framework. \noindent \textbf{Datasets.} We use two types of datasets in our experiments, including six real-world datasets and two synthetic datasets. Statistics of the real-world datasets can be found in Table~\ref{datasets} of the Appendix. We elaborate more details as follows: (1) \textit{Real-world Datasets.} We use six real-world datasets, namely Pokec-z, Pokec-n~\cite{takac2012data,dai2020say}, UCSD34~\cite{traud2012social}, German Credit, Credit Defaulter, and Recidivism~\cite{agarwal2021towards}. We first introduce the three web-related networks. \textit{Pokec-z} and \textit{Pokec-n} are collected from a popular social network in Slovakia. Here a node represents a user, and an edge denotes the friendship relation between two users~\cite{takac2012data}. We take ``region'' as the sensitive attribute, and the task is to predict the users' working fields. UCSD34 is a Facebook friendship network of the University of California San Diego~\cite{traud2012social}. Each node denotes a user, and edges represent the friendship relations between nodes. We take ``gender'' as the sensitive attribute, and the task is to predict whether a user belongs to a specific major. Users with incomplete information (e.g., missing attribute values) are filtered out from the three web networks above. Besides, we also adopt three networks beyond web-related data. In \textit{German Credit}, nodes represent clients in a German bank, and edges are formed between clients if their credit accounts are similar.
With ``gender'' being the sensitive attribute, the task is to classify the credit risk of the clients as high or low. In \textit{Recidivism}, nodes are defendants released on bail during 1990-2009. Nodes are connected based on the similarity of past criminal records and demographics. The task is to classify defendants into bail vs. no bail, with ``race'' being the sensitive attribute. In \textit{Credit Defaulter}, nodes are credit card users, and they are connected based on the pattern similarity of their purchases and payments. Here ``age'' is the sensitive attribute, and the task is to predict whether a user will default on credit card payment. (2) \textit{Synthetic Datasets.} For the ablation study of EDITS, we use the two datasets generated in Sec.~\ref{investigation}. One network has biased attributes and an unbiased structure, while the other is the opposite. In addition to the two original attribute dimensions, we add eight extra attribute dimensions to both datasets. The attribute values in the extra attribute dimensions are generated uniformly between 0 and 1. For labels, we compute the sum of the first two extra attribute dimensions. Then, we add Gaussian noise to the sum values, and rank the nodes by these values in descending order. Labels of the top-ranked 50\% individuals are set as 1, while the labels of the other 50\% are set as 0. The task is to predict the labels. \subsection{Experimental Settings} \noindent \textbf{GNN Models.} Here we adopt three popular GNN variants in our experiments: GCN~\cite{kipf2016semi}, GraphSAGE~\cite{hamilton2017inductive}, and GIN~\cite{xu2018powerful}. \noindent \textbf{Baselines.} Since there is no existing work directly debiasing network data for GNNs, here we choose two state-of-the-art GNN-based debiasing approaches for comparison, namely FairGNN~\cite{dai2020say} and NIFTY~\cite{agarwal2021towards}. (1) \textit{FairGNN}. It is a debiasing method based on adversarial training.
A discriminator is trained to distinguish the representations of different demographic groups. The goal of FairGNN is to train a GNN that fools the discriminator for bias mitigation. (2) \textit{NIFTY}. It is a recently proposed GNN-based debiasing framework. With counterfactual perturbation on the sensitive attribute, bias is mitigated via learning node representations that are invariant to the sensitive attribute. It should be noted that both of them take GNNs as their backbones in the downstream task. EDITS, on the other hand, attempts to debias attributed networks without referring to the output of downstream GNN models (i.e., EDITS is model-agnostic). The hyper-parameters of EDITS are tuned only based on our proposed bias metrics. Consequently, the debiasing performed by EDITS generalizes better but is also more challenging than that of the model-oriented baselines. \noindent \textbf{Evaluation Metrics.} We evaluate model performance from two perspectives: model utility and bias mitigation. Good performance means low bias and high model utility. We introduce the adopted metrics for model utility and bias mitigation: (1) \textit{Model Utility Metrics}. For node classification, we use the area under the receiver operating characteristic curve (AUC) and F1 score as the indicators of model utility; (2) \textit{Bias Mitigation Metrics}. We use two widely-adopted metrics $\Delta_{SP}$ and $\Delta_{EO}$ to show to what extent the bias in the output of different GNNs is mitigated~\cite{louizos2015variational,beutel2017data,dai2020say}. For both metrics, a lower value means better bias mitigation performance. \begin{table}[] \scriptsize \centering \caption{Attribute and structural bias comparison between original networks and debiased ones from EDITS (in scale of \textbf{$\times 10^{-3}$}). The lower, the better.
Best ones are marked in bold.} \label{bias_results} \begin{tabular}{cccccc} \hline & \multicolumn{2}{c}{\textbf{Attribute Bias}} & & \multicolumn{2}{c}{\textbf{Structural Bias}} \\ \cline{2-3} \cline{5-6} & \textbf{Vanilla} & \textbf{EDITS} & & \textbf{Vanilla} & \textbf{EDITS} \\ \hline \textbf{Pokec-z} & 0.43 &\textbf{0.33} ($-23.3\%$) & &0.83 & \textbf{0.75} ($-9.64\%$) \\ \hline \textbf{Pokec-n} & 0.54 &\textbf{0.42} ($-22.2\%$) & &1.03 & \textbf{0.89} ($-13.6\%$) \\ \hline \textbf{UCSD34} & 0.53 &\textbf{0.48} ($-9.43\%$) & &0.68 & \textbf{0.63} ($-7.35\%$) \\ \hline \textbf{German} & 6.33 &\textbf{2.38} ($-62.4\%$) & &10.4 & \textbf{3.54} ($-66.0\%$) \\ \hline \textbf{Credit} & 2.46 &\textbf{0.56} ($-77.2\%$) & &4.45 & \textbf{2.36} ($-47.0\%$) \\ \hline \textbf{Recidivism} & 0.95 &\textbf{0.39} ($-58.9\%$) & &1.10 & \textbf{0.52} ($-52.7\%$) \\ \hline \end{tabular} \end{table} \begin{table*}[] \centering \caption{Comparison on utility and bias mitigation between GNNs with original networks (denoted as Vanilla) and debiased networks (denoted as EDITS) as input. $\uparrow$ denotes the larger, the better; $\downarrow$ denotes the opposite. 
Best ones are in \textbf{bold}.} \label{results} \footnotesize \begin{tabular}{crcccccccc} \hline \multicolumn{2}{c}{} & \multicolumn{2}{c}{\textbf{GCN}} & & \multicolumn{2}{c}{\textbf{GraphSAGE}} & & \multicolumn{2}{c}{\textbf{GIN}} \\ \cline{3-4} \cline{6-7} \cline{9-10} \multicolumn{2}{c}{} & \textbf{Vanilla} & \textbf{EDITS} & & \textbf{Vanilla} & \textbf{EDITS} & & \textbf{Vanilla} & \textbf{EDITS} \\ \hline \multirow{4}{*}{\textbf{Pokec-z}} & \textbf{AUC} $\uparrow$ & \textbf{67.83 $\pm$ 0.7\%} & 67.38 $\pm$ 0.3\% & & \textbf{68.00 $\pm$ 0.3\%} & 66.37 $\pm$ 0.7\% & & \textbf{66.74 $\pm$ 0.8\%} & 65.64 $\pm$ 0.5\% \\ \cline{2-10} & \textbf{F1} $\uparrow$ & \textbf{61.95 $\pm$ 0.6\%} & 61.91 $\pm$ 0.1\% & & \textbf{61.58 $\pm$ 1.3\%} & 60.62 $\pm$ 0.6\% & & \textbf{61.55 $\pm$ 0.5\%} & 60.65 $\pm$ 1.2\% \\ \cline{2-10} & $\bm{\Delta_{SP}}$ $\downarrow$ & 5.70 $\pm$ 1.2\% & \textbf{2.74 $\pm$ 0.9\%} & & 7.10 $\pm$ 1.2\% & \textbf{2.89 $\pm$ 0.4\%} & & 5.20 $\pm$ 1.0\% & \textbf{1.90 $\pm$ 1.3\%} \\ \cline{2-10} & $\bm{\Delta_{EO}}$ $\downarrow$ & 4.88 $\pm$ 1.3\% & \textbf{2.87 $\pm$ 1.0\%} & & 6.37 $\pm$ 0.8\% & \textbf{2.54 $\pm$ 0.7\%} & & 4.65 $\pm$ 1.1\% & \textbf{2.09 $\pm$ 1.1\%} \\ \hline \multirow{4}{*}{\textbf{Pokec-n}} & \textbf{AUC} $\uparrow$ & \textbf{63.24 $\pm$ 0.5\%} & 61.82 $\pm$ 0.9\% & & \textbf{64.07 $\pm$ 0.4\%} & 62.05 $\pm$ 0.6\% & & \textbf{62.53 $\pm$ 1.4\%} & 61.60 $\pm$ 1.4\% \\ \cline{2-10} & \textbf{F1} $\uparrow$ & \textbf{54.32 $\pm$ 0.4\%} & 52.84 $\pm$ 0.3\% & & \textbf{53.45 $\pm$ 1.2\%} & 52.53 $\pm$ 0.1\% & & \textbf{52.62 $\pm$ 1.2\%} & 52.56 $\pm$ 1.0\% \\ \cline{2-10} & $\bm{\Delta_{SP}}$ $\downarrow$ & 3.36 $\pm$ 0.4\% & \textbf{0.91 $\pm$ 0.87\%} & & 3.85 $\pm$ 0.2\% & \textbf{2.08 $\pm$ 1.2\%} & & 5.90 $\pm$ 2.5\% & \textbf{0.96 $\pm$ 0.5\%} \\ \cline{2-10} & $\bm{\Delta_{EO}}$ $\downarrow$ & 3.97 $\pm$ 1.6\% & \textbf{1.10 $\pm$ 1.0\%} & & 2.64 $\pm$ 0.3\% & \textbf{1.82 $\pm$ 0.9\%} & & 4.47 $\pm$ 3.7\% & 
\textbf{0.47 $\pm$ 0.4\%} \\ \hline \multirow{4}{*}{\textbf{UCSD34}} & \textbf{AUC} $\uparrow$ & \textbf{63.33 $\pm$ 0.3\%} & 62.43 $\pm$ 0.9\% & & 62.62 $\pm$ 1.0\% & \textbf{62.82 $\pm$ 2.4\%} & & 62.57 $\pm$ 0.7\% & \textbf{64.50 $\pm$ 0.9\%} \\ \cline{2-10} & \textbf{F1} $\uparrow$ & 94.16 $\pm$ 0.3\% & \textbf{94.69 $\pm$ 0.1\%} & & 94.00 $\pm$ 0.2\% & \textbf{94.55 $\pm$ 0.1\%} & & 92.24 $\pm$ 1.6\% & \textbf{92.48 $\pm$ 0.5\%} \\ \cline{2-10} & $\bm{\Delta_{SP}}$ $\downarrow$ & 1.27 $\pm$ 0.4\% & \textbf{0.27 $\pm$ 0.1\%} & & 1.27 $\pm$ 0.5\% & \textbf{0.35 $\pm$ 0.3\%} & & 2.11 $\pm$ 1.3\% & \textbf{0.36 $\pm$ 0.1\%} \\ \cline{2-10} & $\bm{\Delta_{EO}}$ $\downarrow$ & 1.40 $\pm$ 0.4\% & \textbf{0.39 $\pm$ 0.1\%} & & 1.40 $\pm$ 0.4\% & \textbf{0.25 $\pm$ 0.3\%} & & 2.32 $\pm$ 1.6\% & \textbf{0.47 $\pm$ 0.4\%} \\ \hline \multirow{4}{*}{\textbf{German}} & \textbf{AUC} $\uparrow$ & \textbf{74.46 $\pm$ 0.7\%} & 71.01 $\pm$ 1.3\% & & \textbf{75.28 $\pm$ 2.1\%} & 73.21 $\pm$ 0.5\% & & 71.35 $\pm$ 1.7\% & \textbf{71.51 $\pm$ 0.6\%} \\ \cline{2-10} & \textbf{F1} $\uparrow$ & 81.54 $\pm$ 0.9\% & \textbf{82.43 $\pm$ 0.69\%} & & \textbf{81.52 $\pm$ 1.0\%} & 80.62 $\pm$ 1.5\% & & 83.08 $\pm$ 0.9\% & \textbf{83.78 $\pm$ 0.4\%} \\ \cline{2-10} & $\bm{\Delta_{SP}}$ $\downarrow$ & 43.14 $\pm$ 2.5\% & \textbf{2.04 $\pm$ 1.3\%} & & 26.83 $\pm$ 0.5\% & \textbf{8.30 $\pm$ 3.1\%} & & 18.55 $\pm$ 2.0\% & \textbf{1.26 $\pm$ 0.7\%} \\ \cline{2-10} & $\bm{\Delta_{EO}}$ $\downarrow$ & 33.75 $\pm$ 0.4\% & \textbf{0.63 $\pm$ 0.39\%} & & 20.66 $\pm$ 3.0\% & \textbf{3.75 $\pm$ 3.3\%} & & 11.27 $\pm$ 3.5\% & \textbf{2.87 $\pm$ 1.4\%} \\ \hline \multirow{4}{*}{\textbf{Credit}} & \textbf{AUC} $\uparrow$ & \textbf{73.62 $\pm$ 0.3\%} & 70.16 $\pm$ 0.6\% & & 74.99 $\pm$ 0.2\% & \textbf{75.28 $\pm$ 0.5\%} & & \textbf{73.82 $\pm$ 0.4\%} & 72.06 $\pm$ 0.9\% \\ \cline{2-10} & \textbf{F1} $\uparrow$ & \textbf{81.86 $\pm$ 0.1\%} & 81.44 $\pm$ 0.2\% & & 82.31 $\pm$ 0.7\% & \textbf{83.39 $\pm$ 0.3\%} 
& & 82.11 $\pm$ 0.1\% & \textbf{85.10 $\pm$ 0.7\%} \\ \cline{2-10} & $\bm{\Delta_{SP}}$ $\downarrow$ & 12.93 $\pm$ 0.1\% & \textbf{9.13 $\pm$ 1.2\%} & & 17.03 $\pm$ 3.3\% & \textbf{12.25 $\pm$ 0.2\%} & & 12.18 $\pm$ 0.3\% & \textbf{8.79 $\pm$ 5.6\%} \\ \cline{2-10} & $\bm{\Delta_{EO}}$ $\downarrow$ & 10.65 $\pm$ 0.0\% & \textbf{7.88 $\pm$ 1.0\%} & & 15.31 $\pm$ 4.0\% & \textbf{9.58 $\pm$ 0.1\%} & & 9.48 $\pm$ 0.3\% & \textbf{7.19 $\pm$ 3.8\%} \\ \hline \multirow{4}{*}{\textbf{Recidivism}} & \textbf{AUC} $\uparrow$ & \textbf{86.91 $\pm$ 0.4\%} & 85.96 $\pm$ 0.3\% & & 88.12 $\pm$ 1.4\% & \textbf{88.15 $\pm$ 0.9\%} & & \textbf{82.40 $\pm$ 0.8\%} & 81.55 $\pm$ 1.5\% \\ \cline{2-10} & \textbf{F1} $\uparrow$ & \textbf{78.30 $\pm$ 1.0\%} & 75.80 $\pm$ 0.5\% & & 76.23 $\pm$ 2.8\% & \textbf{76.30 $\pm$ 1.4\%} & & 70.36 $\pm$ 1.9\% & \textbf{71.09 $\pm$ 2.3\%} \\ \cline{2-10} & $\bm{\Delta_{SP}}$ $\downarrow$ & 7.89 $\pm$ 0.3\% & \textbf{5.39 $\pm$ 0.2\%} & & 2.42 $\pm$ 1.2\% & \textbf{0.79 $\pm$ 0.5\%} & & 9.97 $\pm$ 0.7\% & \textbf{4.98 $\pm$ 0.9\%} \\ \cline{2-10} & $\bm{\Delta_{EO}}$ $\downarrow$ & 5.58 $\pm$ 0.2\% & \textbf{3.36 $\pm$ 0.3\%} & & 2.98 $\pm$ 2.2\% & \textbf{1.01 $\pm$ 0.5\%} & & 6.10 $\pm$ 1.2\% & \textbf{5.47 $\pm$ 0.7\%} \\ \hline \end{tabular} \end{table*} \subsection{Debiasing Attributed Network for GNNs} To answer \textbf{RQ1}, we first evaluate the effectiveness of EDITS in reducing the bias measured by the two proposed metrics and traditional bias metrics with different GNN backbones. The attribute and structural bias of the six real-world datasets before and after being debiased by EDITS are shown in Table~\ref{bias_results}. The comparison on $\Delta_{SP}$ and $\Delta_{EO}$ between GNNs trained on debiased networks from EDITS and original networks is presented in Table~\ref{results}. 
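The two bias metrics $\Delta_{SP}$ and $\Delta_{EO}$ reported in Table~\ref{results} follow their standard definitions for binary predictions, which can be sketched as below (helper names are our own):

```python
import numpy as np

def delta_sp(y_hat, s):
    """Statistical parity difference: |P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)|."""
    return abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())

def delta_eo(y_hat, y, s):
    """Equal opportunity difference, i.e. the true positive rate gap:
    |P(y_hat = 1 | y = 1, s = 0) - P(y_hat = 1 | y = 1, s = 1)|."""
    pos = y == 1
    return abs(y_hat[pos & (s == 0)].mean() - y_hat[pos & (s == 1)].mean())
```

Lower values of both metrics indicate better bias mitigation, matching the $\downarrow$ columns in Table~\ref{results}.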
We make the following observations: (1) From the perspective of bias mitigation in the attributed network, EDITS demonstrates significant advantages over the vanilla approach as indicated by Table~\ref{bias_results}. This verifies the effectiveness of EDITS in reducing the bias existing in the attributed network data. (2) From the perspective of bias mitigation in the downstream task, we observe from Table~\ref{results} that EDITS achieves desirable bias mitigation performance with little utility sacrifice in all cases compared with GNNs taking the original network as input (i.e., the vanilla one). This verifies that attributed networks debiased by EDITS can generally mitigate the bias in the outcome of different GNNs. (3) When comparing the bias mitigation performance indicated by Table~\ref{bias_results} and Table~\ref{results}, we find that the bias in the outcome of GNNs is also mitigated after EDITS mitigates attribute bias and structural bias in the attributed networks. Such consistency verifies the validity of our proposed metrics in measuring the bias existing in the attributed networks. \subsection{Comparison with Other Debiasing Models} To answer \textbf{RQ2}, we then compare the balance between model utility and bias mitigation with other baselines based on a given GNN. Here we present the comparison of AUC and $\Delta_{SP}$ based on GCN in Fig.~(\ref{baseline_both}). Similar results can be obtained for other GNNs, which are omitted due to the space limit. Experimental results include the performance of baselines and EDITS on the six real-world datasets. The following observations can be made: (1) From the perspective of model utility (indicated by Fig.~(\ref{baseline_auc1}) and Fig.~(\ref{baseline_auc2})), EDITS and the baselines achieve results comparable with the vanilla GCN. This implies that the debiasing process of EDITS preserves as much useful information for the downstream task as the original attributed network.
(2) From the perspective of bias mitigation (indicated by Fig.~(\ref{baseline_sp1}) and Fig.~(\ref{baseline_sp2})), all baselines achieve effective bias mitigation. Compared with debiasing in downstream tasks, debiasing the attributed network is more difficult due to the lack of supervision signals from GNN prediction. We observe that the debiasing performance of EDITS is similar to or even better than that of the adopted baselines. This verifies the superior performance of EDITS on debiasing attributed networks for fairer GNNs. (3) From the perspective of balancing the model utility and bias mitigation, EDITS achieves comparable model utility to the alternatives but exhibits better bias mitigation performance. Consequently, we argue that EDITS achieves superior performance in balancing model utility and bias mitigation over the other baselines. \begin{figure} \caption{Performance comparison between EDITS and baselines on utility (AUC) and bias mitigation ($\bm{\Delta_{SP}}$).} \label{baseline_auc1} \label{baseline_auc2} \label{baseline_sp1} \label{baseline_sp2} \label{baseline_both} \end{figure} \begin{figure*} \label{ablation1} \label{ablation2} \label{ablation3} \label{ablation4} \label{ablation} \end{figure*} \subsection{Ablation Study} To evaluate the effectiveness of the two debiasing modules (i.e., attribute debiasing module and structural debiasing module) in EDITS, here we investigate how each of them individually contributes to bias mitigation under our proposed bias metrics and the traditional bias metrics in the downstream task. We choose GCN as the GNN model in our downstream task. For better visualization purposes, the two datasets showing large attribute bias and structural bias (i.e., \textit{German} and \textit{Credit}) are selected for experiments.
Besides, to better demonstrate the functionality of the two debiasing modules, we also adopt the two synthetic datasets we mentioned in Sec.~\ref{investigation} (i.e., the network with only biased attributes and the network with only biased structure), which are further modified according to Sec.~\ref{data}. Based on the four selected datasets, four different variants of EDITS are tested, namely EDITS with both debiasing modules, EDITS without the structural debiasing module (i.e., *w/o-SD), EDITS without the attribute debiasing module (i.e., *w/o-AD), and the vanilla GCN model without debiased input (i.e., Vanilla). We present their performance in terms of attribute bias, structural bias, AUC, and $\Delta_{SP}$ on the four datasets in Fig.~(\ref{ablation}). We make the following observations: (1) The value of $\textit{attribute bias}$ can be reduced with the attribute debiasing module of EDITS, which maintains the model utility (i.e., AUC) but reduces $\Delta_{SP}$ in the downstream task. (2) The value of $\textit{structural bias}$ can be reduced with both attribute debiasing and structural debiasing modules. With only structural debiasing, EDITS still maintains comparable model utility but reduces $\Delta_{SP}$ in the downstream task. (3) Although both attribute debiasing and structural debiasing modules help mitigate $\textit{structural bias}$, only debiasing the network structure achieves better bias mitigation performance on all four datasets compared with only debiasing the attributes, as implied by Fig.~(\ref{ablation4}). This demonstrates the indispensability of the structural debiasing module in EDITS. \section{Related Work} \noindent \textbf{Mitigating Bias in Machine Learning.} Bias can be defined from a variety of perspectives in machine learning algorithms~\cite{du2019fairness, DBLP:conf/nips/HardtPNS16, barocas2017fairness, wu2019counterfactual,mehrabi2019survey,li2021dyadic}.
Commonly used algorithmic bias notions can be broadly categorized into \textit{group fairness} and \textit{individual fairness}~\cite{DBLP:conf/innovations/DworkHPRZ12}. Group fairness emphasizes that algorithms should not yield discriminatory outcomes for any specific demographic group~\cite{DBLP:conf/innovations/DworkHPRZ12}. Such groups are usually determined by sensitive attributes, e.g., gender or race~\cite{kamiran2012data}. Existing debiasing approaches work in one of the three data flow stages, i.e., the pre-processing, processing, and post-processing stages. In the pre-processing stage, a common method is to re-weight training samples from different groups to mitigate bias before model training~\cite{kamiran2012data}. Perturbing data distributions between groups is another popular approach to debias the data in the pre-processing stage~\cite{wang2019repairing}. In the processing stage, a popular method is to add regularization terms to disentangle the outcome from the sensitive attribute~\cite{ross2017right,liu2019incorporating} or minimize the outcome difference between groups~\cite{agarwal2018reductions}. Besides, utilizing adversarial learning to remove sensitive information from representations is also widely adopted~\cite{elazar2018adversarial}. In the post-processing stage, bias in outcomes is usually mitigated by constraining the outcome to follow a less biased distribution~\cite{zhao2017men,DBLP:conf/nips/HardtPNS16,pleiss2017fairness,laclau2021all,krasanakis2020applying}. Usually, all the above-mentioned approaches are evaluated by measuring how much a certain fairness notion is violated. \textit{Statistical Parity}~\cite{DBLP:conf/innovations/DworkHPRZ12}, \textit{Equality of Opportunity}, \textit{Equality of Odds}~\cite{DBLP:conf/nips/HardtPNS16} and \textit{Counterfactual Fairness}~\cite{kusner2017counterfactual} are commonly studied fairness notions.
Different from group fairness, individual fairness focuses on treating similar individuals similarly~\cite{DBLP:conf/innovations/DworkHPRZ12,zemel2013learning}. The similarity can be given by oracle similarity scores from domain experts~\cite{lahoti2019operationalizing}. Most existing debiasing methods based on individual fairness work in the processing stage. For example, constraints can be added to enforce similar predictions for similar instances~\cite{lahoti2019operationalizing,jung2019eliciting}. \textit{Consistency} is a popular metric for individual fairness evaluation~\cite{lahoti2019operationalizing,lahoti2019ifair}. \noindent \textbf{Mitigating Bias in Graph Mining.} Efforts have been made to mitigate bias in graph mining algorithms; these works can be broadly categorized as focusing on either \textit{group fairness} or \textit{individual fairness}. For group fairness, adversarial learning can be adopted to learn less biased node representations that fool the discriminator~\cite{DBLP:conf/icml/BoseH19,dai2020say}. Rebalancing between groups is also a popular approach to mitigate bias~\cite{burke2017balanced,DBLP:journals/corr/abs-1809-09030,DBLP:conf/ijcai/RahmanS0019,tang2020investigating,li2021dyadic}. For example, Rahman et al. mitigate bias via rebalancing the appearance rate of minority groups in random walks~\cite{rahman2019fairwalk}. Projecting the embeddings onto a hyperplane orthogonal to the hyperplane of sensitive attributes is another approach for bias mitigation~\cite{DBLP:journals/corr/abs-1909-11793}. Compared with the vast number of works on group fairness, only a few works promote individual fairness in graphs. To the best of our knowledge, Kang et al.~\cite{kang2020inform} first propose to systematically debias multiple graph mining algorithms based on individual fairness.
Dong et al.~\cite{dong2021individual} argue that for each individual, if the similarity ranking of others in the GNN outcome follows the same order as an oracle ranking given by domain experts, then people can get a stronger sense of fairness. Different from the above approaches, this paper proposes to directly debias the attributed networks in a model-agnostic way. \section{Conclusion} GNNs are increasingly critical in various applications. Nevertheless, there is increasing societal concern that GNNs could yield discriminatory decisions towards certain demographic groups. Existing debiasing approaches are mainly tailored for a specific GNN. Adapting these methods to different GNNs can be costly, as they need to be fine-tuned. Different from them, in this paper, we propose to debias the attributed network for GNNs. By analyzing the sources of bias in different data modalities, we define two kinds of bias with corresponding metrics, and formulate a novel problem of debiasing attributed networks for GNNs. To tackle this problem, we then propose a principled framework EDITS for model-agnostic debiasing. Experiments demonstrate the effectiveness of EDITS in mitigating the bias and maintaining the model utility. \section{ACKNOWLEDGMENTS} This material is supported by the National Science Foundation (NSF) under grant $\#$2006844 and the Cisco Faculty Research Award. \appendix \section{Appendix} \subsection{Datasets Statistics} \label{statistics} The detailed statistics of the six real-world datasets can be found in Table~\ref{datasets}. \begin{table*}[] \caption{The statistics and basic information about the six real-world datasets adopted for experimental evaluation. Sens.
represents the semantic meaning of sensitive attribute.} \label{datasets} \footnotesize \centering \begin{tabular}{lcccccc} \hline \textbf{Dataset} & \textbf{Pokec-z} & \textbf{Pokec-n} & \textbf{UCSD34} & \textbf{German Credit} & \textbf{Recidivism} & \textbf{Credit Defaulter} \\ \hline \textbf{\# Nodes} & 7,659 & 6,185 & 4,132 & 1,000 & 18,876 & 30,000 \\ \textbf{\# Edges} & 29,476 & 21,844 & 108,383 & 22,242 & 321,308 & 1,436,858 \\ \textbf{\# Attributes} & 59 & 59 & 7 & 27 & 18 & 13 \\ \textbf{Avg. degree} & 7.70 & 7.06 & 52.5 & 44.5 & 34.0 & 95.8 \\ \textbf{Sens.} & Region & Region & Gender & Gender & Race & Age \\ \textbf{Label} & Working field & Working field & Student major & Credit status & Bail decision & Future default \\ \hline \end{tabular} \end{table*} \subsection{Algorithm} \label{algorithm} We present the optimization algorithm for EDITS in Algorithm~\ref{algorithm}. \begin{algorithm}[H] \scriptsize \caption{The Optimization Algorithm for EDITS} \label{algorithm} \begin{algorithmic}[1] \REQUIRE ~~\\ $\mathbf{A}$: Adjacency matrix; $\mathbf{X}$: Attribute matrix; $\alpha$, $\mu_1$ to $\mu_4$: Hyper-parameters in objectives; $c$: Threshold enforcing Lipschitz; $z$: Threshold for attribute masking; $r$: Threshold factor for adjacency matrix binarization; \ENSURE ~~\\ Debiased adjacency matrix $\mathbf{\tilde{A}}$ and attribute matrix $\mathbf{\tilde{X}}$; \\ \STATE $\mathbf{\tilde{A}} \gets \mathbf{A}$; $\bm{\Theta} \gets \mathbf{I}$; \WHILE{\textit{epoch} $\leq$ \textit{epoch\_max}} \STATE Compute $\mathscr{L}_1$ following Eq. (\ref{wd_6}); \STATE Update the weights of $f$ by SGD following Eq. (\ref{wd_7}); \STATE Clip the weights of $f$ within [-$c$, $c$]; \STATE Update $\bm{\Theta}$ by PGD following Eq. (\ref{wd_8}), $\mathbf{\tilde{X}} \gets \mathbf{X} \bm{\Theta}$; \STATE Update $\mathbf{\tilde{A}}$ by PGD following Eq. 
(\ref{wd_9}), $\mathbf{\tilde{A}} \gets \frac{1}{2} (\mathbf{\tilde{A}} + \mathbf{\tilde{A}}^{\top})$; \ENDWHILE \STATE Mask the $z$ smallest entries with 0 in $diag(\bm{\Theta})$, $\mathbf{\tilde{X}} \gets \mathbf{X} \bm{\Theta}$; \STATE Binarize $\mathbf{\tilde{A}}$ w.r.t. the threshold $r$; \RETURN $\mathbf{\tilde{A}}$ and $\mathbf{\tilde{X}}$; \end{algorithmic} \end{algorithm} \subsection{Theoretical Analysis} \label{theo} Here we present theoretical analysis for the two proposed metrics to gain a deeper understanding of debiasing attributed networks for GNNs. For attribute bias, it is straightforward that if the Wasserstein distance of the attribute value distribution between the two groups is zero for every dimension, then there would be no clue to distinguish between the two groups. Consequently, here we mainly focus on the theoretical analysis of the structural bias metric. Specifically, we perform theoretical analysis from the perspective of Spectral Graph Theory~\cite{chung1997spectral}. Usually, an undirected attributed network is regarded as a signal composed of different frequency components in Graph Signal Processing (GSP). If an operation preserves lower frequency components more than higher frequency components of a graph signal, this operation can be regarded as low-pass filtering the input graph signal. \textbf{Theorem 1.} \textit{Let $\lambda_{\text{max}}$ be the largest eigenvalue of $\mathbf{L}_{\text{norm}}$. Multiplying $\mathbf{X}$ by the propagation matrix $\mathbf{M}_{H}$ can be regarded as low-pass filtering $\mathbf{X}$ when $\alpha = \frac{1}{\lambda_{max}}$ and $\beta_i > 0$ ($1 \leq i \leq H$).} \begin{proof} We present the proof based on Laplacian graph spectrum. 
By replacing $\alpha$ with $\frac{1}{\lambda_{\text{max}}}$, we have \begin{align} \label{new_p} \mathbf{P}_{\text{norm}} = \frac{1}{\lambda_{\text{max}}} \mathbf{A}_{\text{norm}} + (1 - \frac{1}{\lambda_{\text{max}}}) \mathbf{I} = \mathbf{I} - \frac{\mathbf{L}_{\text{norm}}}{\lambda_{\text{max}}} . \end{align} Then, by combining Eq. (\ref{eq:Mh}) and Eq. (\ref{new_p}), we get \begin{small} \begin{align} \mathbf{M}_{H} = \beta_1 (\mathbf{I} - \frac{\mathbf{L}_{\text{norm}}}{\lambda_{\text{max}}}) + \beta_2 (\mathbf{I} - \frac{\mathbf{L}_{\text{norm}}}{\lambda_{\text{max}}})^2 + ... + \beta_H (\mathbf{I} - \frac{\mathbf{L}_{\text{norm}}}{\lambda_{\text{max}}})^H . \label{l_in_proof} \end{align} \end{small}Since $\mathbf{L}_{\text{norm}}$ is a symmetric real matrix, it can be decomposed as $\mathbf{L}_{\text{norm}} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^{\top}$, so Eq. (\ref{l_in_proof}) can be rewritten as \begin{small} \begin{align} \label{frequency} \mathbf{M}_{H} &= \mathbf{U} \big( \beta_1 (\mathbf{I} - \frac{\mathbf{\Lambda}}{\lambda_{\text{max}}}) + \beta_2 (\mathbf{I} - \frac{\mathbf{\Lambda}}{\lambda_{\text{max}}})^2 + ... + \beta_H (\mathbf{I} - \frac{\mathbf{\Lambda}}{\lambda_{\text{max}}})^H \big) \mathbf{U}^{\top}. \end{align} \end{small}Here $\mathbf{\Lambda}$ is the diagonal eigenvalue matrix of $\mathbf{L}_{\text{norm}}$, and the $h$-th term ($1 \leq h \leq H$) in Eq. (\ref{frequency}) corresponds to the frequency response function $(1 - \frac{\lambda_{i}}{\lambda_{\text{max}}})^h$. For any $\lambda_{i}$ ($1 \leq i \leq N$), $ \frac{\lambda_{i}}{\lambda_{\text{max}}} \leq 1$ holds. Consequently, the frequency response of each term in Eq. (\ref{frequency}) monotonically decreases w.r.t. $\lambda_i$. This indicates that, for each term, when it is multiplied by a graph signal, the higher frequency components of the graph signal are attenuated more than the lower frequency ones. Therefore, according to Eq.
(\ref{frequency}), $\mathbf{M}_{H}$ can be regarded as a graph filter whose frequency response is composed of $H$ low-pass filters. In conclusion, multiplying the propagation matrix $\mathbf{M}_{H}$ with any graph signal is equivalent to low-pass filtering it when $\alpha = \frac{1}{\lambda_{\text{max}}}$ and all $\beta_i > 0$. The graph signal is the attribute matrix $\mathbf{X}$ in the proposed structural bias metric. \end{proof} Based on Theorem 1, we propose the corollary below to build connections between attribute bias and structural bias. \textbf{Corollary 1.} \textit{The attribute bias contained in the low frequency components of an attributed network is equivalent to structural bias.} From the proof of Theorem 1, we can observe that $\mathbf{M}_{H} \mathbf{X}_{\text{norm}}$ is equivalent to low-pass filtering the attribute matrix $\mathbf{X}_{\text{norm}}$. Then Corollary 1 is self-evident based on Definition~\ref{defn:structural_bias}. At the same time, the frequencies and the corresponding basis of a network change when $\mathbf{A}$ is optimized into $\mathbf{\tilde{A}}$. The basic goal of EDITS can thus also be interpreted as: \textit{debiasing the full spectrum of a graph signal, and learning better frequencies together with the corresponding basis to further mitigate the bias existing in the lower frequency components of the graph signal}. \subsection{Implementation Details} EDITS is implemented using PyTorch~\cite{paszke2017automatic} and optimized with the RMSprop optimizer~\cite{hinton2012neural}. In the training of EDITS, we set the number of training epochs to 100 for Recidivism and 500 for the other datasets. The learning rate is set to $3 \times 10^{-3}$ for epochs under 400 and $1 \times 10^{-3}$ for those above. $\alpha$ is set to 0.5 considering that $\lambda_{\text{max}}=2$~\cite{chung1997spectral}.
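The low-pass claim of Theorem 1 can be checked numerically. The following sketch is purely illustrative (it uses a small random graph, not one of the paper's datasets, and uniform weights $\beta_h = 1/H$): it builds $\mathbf{M}_H$ with $\alpha = 1/\lambda_{\text{max}}$ and verifies that its frequency response is non-increasing in the Laplacian eigenvalues.

```python
import numpy as np

# Small random undirected graph; illustrative only, not one of the paper's datasets.
rng = np.random.default_rng(0)
n = 20
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T                                        # symmetric adjacency, no self-loops

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1.0)))
L_norm = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized Laplacian

lam, U = np.linalg.eigh(L_norm)                    # eigenvalues in ascending order
lam_max = lam[-1]

# M_H with alpha = 1/lambda_max and (arbitrary) positive weights beta_h = 1/H.
H = 3
P = np.eye(n) - L_norm / lam_max
M_H = sum(np.linalg.matrix_power(P, h) for h in range(1, H + 1)) / H

# Frequency response of M_H at each Laplacian eigenvalue: since
# M_H = U f(Lambda) U^T, the diagonal of U^T M_H U is f(lambda_i).
resp = np.diag(U.T @ M_H @ U)

# Non-increasing in lambda: high-frequency components are attenuated more,
# i.e. M_H acts as a low-pass filter, as Theorem 1 states.
assert np.all(np.diff(resp) <= 1e-8)
```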
To train GNNs, we fix the number of training epochs to 1,000 with the Adam optimizer~\cite{kingma2014adam} and a learning rate of $1 \times 10^{-3}$. The dropout rate and the number of hidden channels are set to 0.05 and 16, respectively. \subsection{Extension to Non-Binary Sensitive Attributes} Here, we show how our proposed framework EDITS can be generalized to handle non-binary sensitive attributes. More specifically, we use a synthetic dataset to showcase the extension. \noindent \textbf{Synthetic Dataset Generation.} Our goal here is to generate a synthetic attributed network with both biased node attributes and biased network structure, where nodes come from at least three different groups based on the sensitive attribute. We elaborate more details from three perspectives: biased network structure generation, biased node attribute generation, and node label generation. (1) \textit{Biased Network Structure Generation.} We adopt an approach similar to that presented in Fig.~(\ref{bias_incorporating}) to generate three communities with dense intra-community links but sparse inter-community links. (2) \textit{Biased Node Attributes Generation.} We generate a ten-dimensional attribute vector for each node. The values at the first two dimensions are generated independently with Gaussian distribution $\mathcal{N}$(-1, $1^2$), $\mathcal{N}$(0, $1^2$), and $\mathcal{N}$(1, $1^2$) for the nodes in the three communities, respectively. The attribute values for all other dimensions are generated with independent Gaussian distribution $\mathcal{N}$(0, $1^2$). Besides, we generate a ternary variable $s \in \{0,1,2\}$ based on the node community membership for all nodes as an extra attribute dimension. Here the community membership is regarded as the sensitive attribute of nodes in this network. (3) \textit{Node Label Generation.} We sum up the values at the first two unbiased attribute dimensions for all nodes, and then add Gaussian noise to the summation values.
The summation values with noise are ranked in descending order. Labels of the top-ranked 50\% nodes are set to 1, while the labels of the other 50\% nodes are set to 0. The task is to predict the generated labels. \noindent \textbf{Framework Extension.} To extend the proposed framework EDITS to handle non-binary sensitive attributes, the basic rationale is to encourage the function $f_m$ introduced in Section~\ref{optimization} to help approximate the squared Wasserstein distance sum between all group pairs based on the ternary sensitive attribute. Therefore, we modify the $\mathscr{L}_1$ in Eq.~(\ref{wd_6}) as \begin{align} \label{non-binary-l1} \mathscr{\tilde{L}}_1 = \sum_{i,j} \sum_{m} \{ \mathbb{E}_{\mathbf{x}_{i,m}}&[f_{m}(\mathbf{x}_{i,m})] - \mathbb{E}_{\mathbf{x}_{j,m}}[f_{m}(\mathbf{x}_{j,m})] \}^2. \end{align} Here $1 \leq m \leq M$, and $ i,j \in \{0,1,2\}$ ($i < j$). $\mathbf{x}_{i,m}$ and $\mathbf{x}_{j,m}$ follow $P_{i,m}^{Joint}$ and $P_{j,m}^{Joint}$, respectively. The $\mathscr{L}_1$ in Eq.~(\ref{wd_7}),~(\ref{wd_8}), and~(\ref{wd_9}) is replaced with $\mathscr{\tilde{L}}_1$. This enables EDITS to approximate and minimize the squared Wasserstein distance sum between all group pairs. \noindent \textbf{Research Questions.} Here we aim to answer two research questions. \textbf{RQ1:} Can EDITS mitigate the bias in the network dataset with ternary sensitive attributes? \textbf{RQ2:} Can EDITS achieve a good balance between mitigating bias and maintaining utility for GNN predictions with ternary sensitive attributes? \noindent \textbf{Evaluation Metrics.} We introduce the metrics following the two research questions above. (1) For RQ1, to measure the bias in the network dataset, we adopt the $b_{\text{attr}}$ and $b_{\text{stru}}$ introduced in Sec.~\ref{metrics}. (2) For RQ2, to measure the bias exhibited in GNN predictions, we adopt two traditional fairness metrics: $\Delta_{SP}$ and $\Delta_{EO}$.
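The pairwise quantity that the extended objective $\mathscr{\tilde{L}}_1$ lets the critics $f_m$ approximate can be illustrated directly: for one-dimensional empirical distributions with equally many samples, the Wasserstein-1 distance has a closed sorted-sample form. The sketch below is a hedged illustration (all names are invented; it replaces the learned critics with this closed form) using group means matching the synthetic dataset above.

```python
import numpy as np

def w1_1d(x, y):
    """Closed-form Wasserstein-1 distance between two 1-D empirical
    distributions with the same number of samples (sorted-sample form)."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

def pairwise_sq_w1_sum(groups):
    """Sum of squared W1 distances over all group pairs (i < j), mirroring
    the extended objective for a ternary sensitive attribute."""
    total = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            total += w1_1d(groups[i], groups[j]) ** 2
    return total

rng = np.random.default_rng(0)
# One attribute dimension for three sensitive groups; the biased version uses
# the community means -1, 0, 1 from the synthetic dataset description.
biased = [rng.normal(m, 1.0, 500) for m in (-1.0, 0.0, 1.0)]
debiased = [rng.normal(0.0, 1.0, 500) for _ in range(3)]

b_biased = pairwise_sq_w1_sum(biased)
b_debiased = pairwise_sq_w1_sum(debiased)
assert b_debiased < b_biased   # aligning the group distributions shrinks the sum
```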
Considering that these two metrics are designed only for binary sensitive attributes, $\Delta_{SP}$ and $\Delta_{EO}$ for each pair of groups are utilized to evaluate the fairness level of GNN predictions. Besides, AUC and F1 are adopted to evaluate the utility of GNN predictions. \noindent \textbf{Results Analysis.} Results based on GCN are presented in Fig.~(\ref{sp_eo_ternary}) and Table~\ref{b_in_non_binary}, and similar observations can also be found on other GNN backbones. We evaluate the performance of EDITS from two perspectives. (1) \textit{RQ1: the fairness level of the network dataset}. As presented in Table~\ref{b_in_non_binary}, $b_{\text{attr}}$ and $b_{\text{stru}}$ of the dataset are clearly reduced with EDITS. This verifies the effectiveness of EDITS in debiasing the attributed network data. (2) \textit{RQ2: the balance between fairness and utility for GNN predictions}. As presented in Fig.~(\ref{sp_eo_ternary}), $\Delta_{SP}$ and $\Delta_{EO}$ for every group pair are reduced. This corroborates the effectiveness of EDITS in achieving fairer GNN predictions. At the same time, Table~\ref{b_in_non_binary} indicates that the GNN with debiased input data still maintains similar utility performance compared with the GNN with vanilla input. This indicates that EDITS achieves a good balance between fairness and utility for GNN predictions. \begin{figure} \caption{Comparison of $\bm{\Delta_{SP}}$ and $\bm{\Delta_{EO}}$ between vanilla and debiased data for each group pair under the ternary sensitive attribute.} \label{sp_ternary} \label{eo_ternary} \label{sp_eo_ternary} \end{figure} \begin{table}[] \centering \footnotesize \caption{Parameter study for $\mu_1$ and $\mu_3$.
The values of $b_{\text{attr}}$ and $b_{\text{stru}}$ are in the scale of \textbf{$\times 10^{-3}$}.} \label{param_study} \begin{tabular}{cccc||cccc} \hline $\mu_1$ & $b_{attr}$ & F1(\%) & $\Delta_{SP}$(\%) & $\mu_3$ & $b_{stru}$ & F1(\%) & $\Delta_{SP}$(\%) \\ \hline \textbf{1e2} & 6.33 & 81.69 & 35.3 & \textbf{1e2} & 10.2 & 82.26 & 34.2 \\ \textbf{1e1} & 5.02 & 80.69 & 19.9 & \textbf{1e1} & 9.97 & 80.89 & 25.0 \\ \textbf{1e0} & 3.74 & 80.28 & 7.76 & \textbf{1e0} & 9.81 & 79.77 & 14.1 \\ \textbf{1e-1} & 2.38 & 80.00 & 4.58 & \textbf{1e-1} & 4.89 & 79.46 & 3.96 \\ \textbf{1e-2} & 2.34 & 79.95 & 4.08 & \textbf{1e-2} & 3.53 & 78.93 & 3.26 \\ \textbf{1e-3} & 2.35 & 79.46 & 3.96 & \textbf{1e-3} & 3.34 & 78.89 & 2.76 \\ \textbf{1e-4} & 2.34 & 79.03 & 3.29 & \textbf{1e-4} & 3.29 & 78.37 & 2.06 \\ \textbf{1e-5} & 2.34 & 76.22 & 2.86 & \textbf{1e-5} & 3.22 & 78.06 & 2.00 \\ \hline \end{tabular} \end{table} \subsection{Parameter Study} Here we aim to study the sensitivity of EDITS w.r.t. hyper-parameters. Specifically, we show the parameter study of $\mu_1$ and $\mu_3$ on the German dataset, but similar observations can also be found on other datasets. Here $\mu_1$ and $\mu_3$ control how much information should be preserved from the original attributes and the original graph structure, respectively. We first vary $\mu_1$ in the range of \{1e2, 1e1, 1e0, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5\} while fixing the other parameters at $\mu_2$=1e-4, $\mu_3$=1e-1, $\mu_4$=1e-4; then we vary $\mu_3$ in the same range with $\mu_1$=1e-3, $\mu_2$=1e-4, $\mu_4$=1e-4. The results in Table~\ref{param_study} indicate that the trade-off between debiasing and utility performance is stable when $\mu_1$ and $\mu_3$ lie in the wide range between 1e-3 and 1e-1. Therefore, it is safe to say that we can tune these parameters in a wide range without greatly affecting the fairness and model utility.
\begin{table}[] \centering \footnotesize \caption{Comparison of fairness level and utility between the original synthetic network and the debiased one based on the ternary sensitive attributes. The values of $b_{\text{attr}}$ and $b_{\text{stru}}$ are in scale of \textbf{$\times 10^{-3}$}. Best ones are marked in bold.} \label{b_in_non_binary} \begin{tabular}{ccccccccc} \hline \hline \multicolumn{9}{c}{\textbf{Attribute Bias \& Structural Bias Comparison}} \\ \hline & \multicolumn{2}{c}{\textbf{Group 0 v.s. 1}} & & \multicolumn{2}{c}{\textbf{Group 0 v.s. 2}} & & \multicolumn{2}{c}{\textbf{Group 1 v.s. 2}} \\ \cline{2-3} \cline{5-6} \cline{8-9} & $b_{\text{attr}}$ & $b_{\text{stru}}$ & & $b_{\text{attr}}$ & $b_{\text{stru}}$ & & $b_{\text{attr}}$ & $b_{\text{stru}}$ \\ \hline \textbf{Vanilla} & 13.7 & 25.5 & & 26.5 & 48.8 & & 11.0 & 20.4 \\ \textbf{EDITS} & \textbf{5.33} & \textbf{9.63} & & \textbf{13.4} & \textbf{24.1} & & \textbf{4.73} & \textbf{8.73} \\ \hline \hline \multicolumn{9}{c}{\textbf{Utility Comparison}} \\ \hline & \multicolumn{4}{c}{\textbf{AUC}} & \multicolumn{4}{c}{\textbf{F1}} \\ \hline \textbf{Vanilla} & \multicolumn{4}{c}{\textbf{67.09 $\pm$ 0.3\%}} & \multicolumn{4}{c}{\textbf{64.50 $\pm$ 0.6\%}} \\ \textbf{EDITS} & \multicolumn{4}{c}{67.05 $\pm$ 0.2\%} & \multicolumn{4}{c}{62.91 $\pm$ 0.8\%} \\ \hline \end{tabular} \end{table} \end{document}
\begin{document} \title{Characteristic Formulae for Session Types (extended version)} \begin{abstract} Subtyping is a crucial ingredient of session type theory and its applications, notably to programming language implementations. In this paper, we study effective ways to check whether a session type is a subtype of another by applying a characteristic formulae approach to the problem. Our core contribution is an algorithm to generate a modal $\mu$-calculus formula that characterises all the supertypes (or subtypes) of a given type. Subtyping checks can then be off-loaded to model checkers, thus incidentally yielding an efficient algorithm to check safety of session types, soundly and completely. We have implemented our theory and compared its cost with other classical subtyping algorithms. \end{abstract} \section{Introduction} \paragraph{\bf Motivations} Session types~\cite{THK94,HVK98,betty-survey} have emerged as a fundamental theory to reason about concurrent programs, whereby not only the data aspects of programs are typed, but also their \emph{behaviours} wrt.\ communication. Recent applications of session types to the reverse-engineering of large and complex distributed systems~\cite{zdlc,LTY15} have led to the need of handling potentially large and complex session types. Analogously to the current trend of modern compilers to rely on external tools such as SMT-solvers to solve complex constraints and offer strong guarantees~\cite{haskell-smt,haskell-measures,LY12,Leino10}, state-of-the-art model checkers can be used to off-load expensive tasks from session type tools such as~\cite{scribble,LTY15,YHNN2013}. A typical use case for session types in software (reverse-) engineering is to compare the type of an existing program with a candidate replacement, so to ensure that both are ``compatible''. 
In this context, a crucial ingredient of session type theory is the notion of \emph{subtyping}~\cite{GH99,DemangeonH11,CDY2014} which plays a key role to guarantee safety of concurrent programs while allowing for the refinement of specifications and implementations. Subtyping for session types relates to many classical theories such as simulations and pre-orders in automata and process algebra theories; but also to subtyping for recursive types in the $\lambda$-calculus~\cite{AC93}. The characteristic formulae approach~\cite{GS86,si94,steffen89,ails12,ai97,ai07,cs91}, which has been studied since the late eighties as a method to compute simulation-like relations in process algebra and automata, appears then as an evident link between subtyping in session type theory and model checking theories. In this paper, we make the first formal connection between session type and model checking theories, to the best of our knowledge. We introduce a novel approach to session types subtyping based on characteristic formulae; and thus establish that subtyping for session types can be decided in quadratic time wrt.\ the size of the types. This improves significantly on the classical algorithm~\cite{GH05}. Subtyping can then be reduced to a model checking problem and thus be discharged to powerful model checkers. Consequently, any advance in model checking technology has an impact on subtyping. \paragraph{\bf Example} Let us illustrate what session types are and what subtyping covers. Consider a simple protocol between a server and a client, from the point of view of the server. The client sends a message of type $\msg{request}$ to the server who decides whether or not the request can be processed by replying $\msg{ok}$ or $\msg{ko}$, respectively. If the request is rejected, the client is offered another chance to send another request, and so on. This may be described by the \emph{session type} below \begin{equation}\label{ex:intro-example-u-1} U_1 = \rec{\var{x}} \rcv{request} . 
\{ \snd{ok} . \mathtt{end} \;\, \inchoicetop \; \snd{ko} . \var{x} \, \} \end{equation} where $\recND{\var{x}}$ binds variable $\var{x}$ in the rest of the type, $\rcv{msg}$ (resp.\ $\snd{msg}$) specifies the reception (resp.\ emission) of a message $\msg{msg}$, $\inchoicetop$ indicates an \emph{internal choice} between two behaviours, and $\mathtt{end}$ signifies the termination of the conversation. An implementation of a server can then be \emph{type-checked} against $U_1$. The client's perspective of the protocol may be specified by the \emph{dual} of $U_1$: \begin{equation}\label{ex:intro-example-u-2} \dual{U}_1 = U_2 = \rec{\var{x}} \snd{request} . \{ \rcv{ok} . \mathtt{end} \;\, \outchoicetop \; \rcv{ko} . \var{x} \, \} \end{equation} where $\outchoicetop$ indicates an \emph{external choice}, i.e., the client expects two possible behaviours from the server. A classical result in session type theory essentially says that if the types of two programs are \emph{dual} of each other, then their parallel composition is free of errors (e.g., deadlock). Generally, when we say that $\mathtt{integer}$ is a subtype of $\mathtt{float}$, we mean that one can safely use an $\mathtt{integer}$ when a $\mathtt{float}$ is required. Similarly, in session type theory, if $T$ is a \emph{subtype} of a type $U$ (written $T \subtype U$), then $T$ can be used whenever $U$ is required. Intuitively, a type $T$ is a \emph{subtype} of a type $U$ if $T$ is ready to receive no fewer messages than $U$, and $T$ may not send more messages than $U$~\cite{DemangeonH11,CDY2014}. For instance, we have \begin{equation}\label{ex:intro-example-subs} \begin{array}{l} T_1 = \rcv{request} . \snd{ok} . \mathtt{end} \; \subtype \; U_1 \\ T_2 = \rec{\var{x}} \snd{request} . \{ \rcv{ok} . \mathtt{end} \, \outchoicetop \rcv{ko} . \var{x} \, \outchoicetop \rcv{error} . 
\mathtt{end} \, \} \; \subtype \; U_2 \end{array} \end{equation} A server of type $T_1$ can be used whenever a server of type $U_1$~\eqref{ex:intro-example-u-1} is required ($T_1$ is a more refined version of $U_1$, which always accepts the request). A client of type $T_2$ can be used whenever a client of type $U_2$~\eqref{ex:intro-example-u-2} is required since $T_2$ is a type that can deal with (strictly) more messages than $U_2$. In Section~\ref{subsec:CF}, we will see that a session type can be naturally transformed into a $\mu$-calculus formula that characterises all its subtypes. The transformation notably relies on the diamond modality to make some branches mandatory, and the box modality to allow some branches to be made optional; see Example~\ref{ex:char-formula}. \paragraph{\bf Contribution \& synopsis} In \S~\ref{sec:session-type-theory} we recall session types and give a new abstract presentation of subtyping. In \S~\ref{sec:mucal-char-formu} we present a fragment of the modal $\mu$-calculus and, following~\cite{steffen89}, we give a simple algorithm to generate a $\mu$-calculus formula from a session type that characterises either all its subtypes or all its supertypes. In \S~\ref{sec:safety}, building on results from~\cite{CDY2014}, we give a sound and complete model-checking characterisation of safety for session types. In \S~\ref{sec:algos}, we present two other subtyping algorithms for session types: Gay and Hole's classical algorithm~\cite{GH05} based on inference rules that unfold types explicitly; and an adaptation of Kozen et al.'s automata-theoretic algorithm~\cite{KPS95}. In \S~\ref{sec:tool}, we evaluate the cost of our approach by comparing its performances against the two algorithms from \S~\ref{sec:algos}. Our performance analysis is notably based on a tool that generates arbitrary well-formed session types. We conclude and discuss related works in \S~\ref{sec:related}. 
Due to lack of space, full proofs are relegated to Appendix~\ref{app:proofs} (also available online~\cite{appendix}). Our tool and detailed benchmark results are available online~\cite{tool}. \section{Session types and subtyping}\label{sec:session-type-theory} Session types are abstractions of the behaviour of a program wrt.\ the communication of this program on a given \emph{session} (or conversation), through which it interacts with another program (or component). \subsection{Session types}\label{sub:session-types} We use a two-party version of the multiparty session types in~\cite{DY13}. For the sake of simplicity, we focus on first order session types (that is, types that carry only simple types (sorts) or values and not other session types). We discuss how to lift this restriction in Section~\ref{sec:conc}. Let $\mathcal{V}$ be a countable set of variables (ranged over by $\var{x}, \var{y}$, etc.); let $\mathbb{A}$ be a (finite) alphabet, ranged over by $a$, $b$, etc.; and $\mathcal{A}$ be the set defined as $\{ \snd{a} \, \mid \, a \in \mathbb{A} \} \cup \{ \rcv{a} \, \mid \, a \in \mathbb{A} \}$. We let $\Op$ range over elements of $\{ !, ? \}$, so that ${\Op \! a}$ ranges over $\mathcal{A}$. The syntax of session types is given by \[ T \coloneqq \mathtt{end} \;\mid\; \inchoice \;\mid\; \outchoice \;\mid\; \rec{\var{x}} T \;\mid\; \var{x} \] where $I \neq \emptyset$ is finite, $a_i \in \mathbb{A}$ for all $i \in I$, $\msg{a}_i \neq \msg{a}_j$ for $i \neq j$, and $\var{x} \in \mathcal{V}$. Type $\mathtt{end}$ indicates the end of a session. Type $\inchoice$ specifies an \emph{internal} choice, indicating that the program chooses to send one of the $\msg{a}_i$ messages, then behaves as $T_i$. Type $\outchoice$ specifies an \emph{external} choice, saying that the program waits to receive one of the $\msg{a}_i$ messages, then behaves as $T_i$. Types $\rec{\var{x}} T$ and $\var{x}$ are used to specify recursive behaviours. We often write, e.g., $\{\snd{a}_1. 
T_1 \inchoicetop {\ldots} \inchoicetop \snd{a}_k . T_k \}$ for $\inchoiceop_{1 \leq i \leq k} { \snd{a}_i . T_i}$, write $\snd{a_1} . T_1$ when $k =1$, similarly for $\outchoice$, and omit trailing occurrences of $\mathtt{end}$. The sets of free and bound variables of a type $T$ are defined as usual (the unique binder is the recursion operator $\rec{\var{x}} T$). For each type $T$, we assume that two distinct occurrences of a recursion operator bind different variables, and that no variable has both free and bound occurrences. In coinductive definitions, we take an equi-recursive view of types, not distinguishing between a type $\rec{\var{x}} T$ and its unfolding $T \subs{\rec{\var{x}} T}{\var{x}}$. We assume that each type $T$ is \emph{contractive}~\cite{piercebook02}, e.g., $\rec{\var{x}} \var{x}$ is not a type. Let $\mathcal{T}$ be the set of all (contractive) session types and $\mathcal{T}C \subseteq \mathcal{T}$ the set of all closed session types (i.e., which do not contain free variables). \begin{figure} \caption{LTS for session types in $\mathcal{T} \label{fig:lts-types} \end{figure} A session type $T \in \mathcal{T}C$ induces a (finite) \emph{labelled transition system} (LTS) according to the rules in Figure~\ref{fig:lts-types}. We write $T \semarrow{{\Op \! a}}$ if there is $T' \in \mathcal{T}$ such that $T \semarrow{{\Op \! a}} T'$ and write $T \!\!\nrightarrow$ if $\phill {\Op \! a} \in \mathcal{A} \, : \, \neg (T \semarrow{{\Op \! a}} )$. \subsection{Subtyping for session types} Subtyping for session types was first studied in~\cite{GH99} and further studied in~\cite{DemangeonH11,CDY2014}. It is a crucial notion for practical applications of session types, as it allows for programs to be \emph{refined} while preserving safety. 
We give a definition of subtyping which is parameterised wrt.\ operators $\inchoicetop$ and $\outchoicetop$, so to allow us to give a common characteristic formula construction for both the subtype and the supertype relations, cf.\ Section~\ref{subsec:CF}. Below, we let $\bigtOp$ range over $\{\inchoicetop, \outchoicetop \}$. When writing $\choice$, we take the convention that $\Op$ refers to $!$ iff $\bigtOp$ refers to $\inchoicetop$ (and vice-versa for $?$ and $\outchoicetop$). We define the (idempotent) duality operator $\dual{\phantom{\outchoiceop}}$ as follows: $\dual{\vphantom{\outchoiceop}\inchoicetop} \defi \outchoicetop$, $\dual{\outchoicetop} \defi \inchoicetop$, $\dual{!} \defi ?$, and $\dual{?} \defi !$. \begin{mydef}[Subtyping]\label{def:ab-subtype} Fix $\bigtOp \in \{\inchoicetop, \outchoicetop \}$; $\absub{\bigOp} \subseteq \mathcal{T}C \times \mathcal{T}C$ is the \emph{largest} relation that contains the rules: \[ \resizebox{\textwidth}{!}{$ \coinference{S-\ensuremath{{\bigtOp}}} { I \subseteq J & \phill i \in I \, : \, T_i \absub{\bigOp} U_i } { \choice \absub{\bigOp} \choiceSet{j}{J}{U} } \quad \coinference{S-end} {} {\mathtt{end} \absub{\bigOp} \mathtt{end}} \quad \coinference{S-\ensuremath{\dual{\bigtOp}}} { J \subseteq I & \phill j \in J \, : \, T_j \absub{\bigOp} U_j } { \cochoice \absub{\bigOp} \cochoiceSet{j}{J}{U} } $} \] The double line in the rules indicates that the rules should be interpreted \emph{coinductively}. Recall that we are assuming an equi-recursive view of types. \end{mydef} We comment on Definition~\ref{def:ab-subtype} assuming that $\bigtOp$ is set to $\inchoicetop$. Rule $\inferrule{S-\ensuremath{{\bigOp}}}$ says that a type $\inchoiceSet{j}{J}{U}$ can be replaced by a type that offers no more messages, e.g., $\snd{a} \absub{\,\inchoicetop} \snd{a} \inchoicetop \snd{b}$.
Rule $\inferrule{S-\ensuremath{\dual{\bigOp}}}$ says that a type $\outchoiceSet{j}{J}{U}$ can be replaced by a type that is ready to receive at least the same messages, e.g., $\rcv{a} \outchoicetop \rcv{b} \absub{\,\inchoicetop} \rcv{a}$. Rule $\inferrule{S-end}$ is trivial. It is easy to see that $\absub{\,\inchoicetop} = ( \absub{\,\outchoicetop} )^{-1}$. In fact, we can recover the subtyping of~\cite{DemangeonH11,CDY2014} (resp.~\cite{GH99,GH05}) from $\absub{\bigOp}$, by instantiating $\bigtOp$ to $\inchoicetop$ (resp.\ $\outchoicetop$). \begin{example} For the session types from~\eqref{ex:intro-example-subs}, we have $T_1 \absub{\,\inchoicetop} U_1$, $U_1 \absub{\,\outchoicetop} T_1$, $T_2 \absub{\,\inchoicetop} U_2$, and $U_2 \absub{\,\outchoicetop} T_2$. \end{example} Hereafter, we will write $\subtype$ (resp.\ $\supertype$) for the pre-order $\absub{\,\inchoicetop}$ (resp.\ $\absub{\,\outchoicetop}$). \section{Characteristic formulae for subtyping}\label{sec:mucal-char-formu} We give the core construction of this paper: a function that given a (closed) session type $T$ returns a modal $\mu$-calculus formula~\cite{Kozen83} that characterises either all the supertypes of $T$ or all its subtypes. Technically, we ``translate'' a session type $T$ into a modal $\mu$-calculus formula $\phi$, so that $\phi$ characterises all the supertypes of $T$ (resp.\ all its subtypes). Doing so, checking whether $T$ is a subtype (resp.\ supertype) of $U$ can be reduced to checking whether $U$ is a model of $\phi$, i.e., whether $U \models \phi$ holds. The constructions presented here follow the theory first established in~\cite{steffen89}, which gives a characteristic formulae approach for (bi-)simulation-like relations over finite-state processes, notably for CCS processes.
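The coinductive subtyping definition above can also be decided directly, by unfolding recursive types on the fly and treating already-visited pairs as coinductive hypotheses. The following is a minimal illustrative sketch (the AST encoding and all names are ours, not the paper's tool), assuming contractive types, for the $\subtype$ direction:

```python
# Hypothetical AST for session types; contractive types assumed (no mu x. x).
End = ("end",)

def internal(branches):   # internal choice  (+){ !a_i . T_i }
    return ("internal", tuple(sorted(branches.items())))

def external(branches):   # external choice  &{ ?a_i . T_i }
    return ("external", tuple(sorted(branches.items())))

def rec(x, body): return ("rec", x, body)
def var(x): return ("var", x)

def subst(t, x, s):
    tag = t[0]
    if tag == "end": return t
    if tag == "var": return s if t[1] == x else t
    if tag == "rec": return t if t[1] == x else ("rec", t[1], subst(t[2], x, s))
    return (tag, tuple((a, subst(ti, x, s)) for a, ti in t[1]))

def unfold(t):
    # equi-recursive view: mu x.T is identified with T[mu x.T / x]
    while t[0] == "rec":
        t = subst(t[2], t[1], t)
    return t

def subtype(t, u, seen=frozenset()):
    """T <: U: internal choices may offer fewer branches (rule S-(+)),
    external choices may offer more (rule S-&); `seen` holds the
    coinductive hypotheses."""
    t, u = unfold(t), unfold(u)
    if (t, u) in seen:
        return True
    seen = seen | {(t, u)}
    if t[0] == "end" and u[0] == "end":
        return True
    if t[0] == "internal" and u[0] == "internal":
        bu = dict(u[1])   # T's branch set must be included in U's
        return all(a in bu and subtype(ti, bu[a], seen) for a, ti in t[1])
    if t[0] == "external" and u[0] == "external":
        bt = dict(t[1])   # U's branch set must be included in T's
        return all(a in bt and subtype(bt[a], ui, seen) for a, ui in u[1])
    return False

# U_1 and T_1 from the running example: T_1 <: U_1 but not U_1 <: T_1.
U1 = rec("x", external({"request": internal({"ok": End, "ko": var("x")})}))
T1 = external({"request": internal({"ok": End})})
print(subtype(T1, U1), subtype(U1, T1))  # True False
```

Note that memoising visited pairs is what makes the check terminate on recursive types; the quadratic bound discussed in the introduction comes from the characteristic-formula route instead.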
\subsection{Modal $\mu$-calculus}\label{sub:mucal} In order to encode subtyping for session types as a model checking problem, it is enough to consider the fragment of the modal $\mu$-calculus below: \[ \phi \; \coloneqq \; \truek \;\mid\; \falsek \;\mid\; \phi \land \phi \;\mid\; \phi \lor \phi \;\mid\; \mmbox{{\Op \! a}}{\phi} \;\mid\; \mmdiamond{{\Op \! a}}{\phi} \;\mid\; \mmnu{\var{x}} \phi \;\mid\; \var{x} \] Modal operators $\mmbox{{\Op \! a}}{}$ and $\mmdiamond{{\Op \! a}}{}$ have precedence over Boolean binary operators $\land$ and $\lor$; the greatest fixpoint operator $\mmnuND{\var{x}}$ has the lowest precedence (and its scope extends as far to the right as possible). Let $\mathcal{F}$ be the set of all (contractive) modal $\mu$-calculus formulae and $\mathcal{F}C \subseteq \mathcal{F}$ be the set of all closed formulae. Given a set of actions $A \subseteq \mathcal{A}$, we write $\compset{A}$ for $\mathcal{A} \setminus A$, and $\mmbox{A}{\phi}$ for $\bigwedge_{{\Op \! a} \in A}\mmbox{{\Op \! a}}{\phi}$. The $n^{th}$ approximation of a fixpoint formula is defined as follows: \[ \approxi{\mmnu{\var{x}} \phi}{0} \; \defi \; \truek \qquad\qquad\quad \approxi{\mmnu{\var{x}} \phi}{n} \; \defi \; \phi \subs{\approxi{\mmnu{\var{x}} \phi}{n-1}}{\var{x}} \qquad \text{if } n >0 \] A \emph{closed} formula $\phi$ is interpreted on the labelled transition system induced by a session type $T$. The satisfaction relation $\models$ between session types and formulae is inductively defined as follows: \[ \begin{array} { l@{\quad \mathit{iff} \quad} l@{\quad} l@{\quad \mathit{iff} \quad} l} \multicolumn{2}{l}{T \models \truek \vphantom{ T \semarrow{{\Op \! a}} } } \\ T \models \phi_1 \!\land\! \phi_2 & T \models \phi_1 \text{ and } T \models \phi_2 \\ T \models \phi_1 \!\lor\! \phi_2 & T \models \phi_1 \text{ or } T \models \phi_2 \vphantom{ T \semarrow{{\Op \! a}} } \\ T \models \mmbox{{\Op \!
a}}{\phi} & \phill T' \in \mathcal{T}C \, : \, \text{if } T \semarrow{{\Op \! a}} T' \text{ then } T' \models \phi \\ T \models \mmdiamond{{\Op \! a}}{\phi} & \exists T' \in \mathcal{T}C \, : \, T \semarrow{{\Op \! a}} T' \text{ and } T' \models \phi \\ T \models \mmnu{\var{x}} \phi & \phill n \geq 0 \, : \, T \models \approxi{ \mmnu{\var{x}} \phi }{n} \vphantom{ T \semarrow{{\Op \! a}} } \\ \end{array} \] Intuitively, $\truek$ holds for every $T$ (while $\falsek$ never holds). Formula $\phi_1 \land \phi_2$ (resp.\ $\phi_1 \lor \phi_2$) holds if both components (resp.\ at least one component) of the formula hold in $T$. The construct $\mmbox{{\Op \! a}}{\phi}$ is a \emph{modal} operator that is satisfied if for each $\msg{{\Op \! a}}$-derivative $T'$ of $T$, the formula $\phi$ holds in $T'$. The dual modality is $\mmdiamond{{\Op \! a}}{\phi}$, which holds if there is an $\msg{{\Op \! a}}$-derivative $T'$ of $T$ such that $\phi$ holds in $T'$. Construct $\mmnu{\var{x}} \phi$ is the \emph{greatest} fixpoint operator (binding $\var{x}$ in $\phi$). \subsection{Characteristic formulae} \label{subsec:CF} We now construct a $\mu$-calculus formula from a (closed) session type, parameterised wrt.\ a constructor $\bigtOp$. This construction is somewhat reminiscent of the \emph{characteristic functional} of~\cite{steffen89}.
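The satisfaction relation above can be computed directly on a finite LTS. The sketch below is illustrative Python (not part of the paper's tool-chain): it evaluates a formula to its set of satisfying states, computing each $\nu$-fixpoint by the approximation scheme just defined; the states, actions, and `step` function of the example LTS are hypothetical.

```python
# Formulae as tuples: ("tt",), ("ff",), ("and", p, q), ("or", p, q),
# ("box", a, p), ("dia", a, p), ("nu", x, p), ("var", x).

def sat(phi, states, step, env=None):
    """States of a finite LTS satisfying phi.
    step(s, a) returns the set of a-derivatives of state s."""
    env = env or {}
    tag = phi[0]
    if tag == "tt":
        return set(states)
    if tag == "ff":
        return set()
    if tag == "and":
        return sat(phi[1], states, step, env) & sat(phi[2], states, step, env)
    if tag == "or":
        return sat(phi[1], states, step, env) | sat(phi[2], states, step, env)
    if tag == "box":   # all a-derivatives satisfy the body
        body = sat(phi[2], states, step, env)
        return {s for s in states if step(s, phi[1]) <= body}
    if tag == "dia":   # some a-derivative satisfies the body
        body = sat(phi[2], states, step, env)
        return {s for s in states if step(s, phi[1]) & body}
    if tag == "nu":    # greatest fixpoint: iterate downwards from all states
        cur = set(states)
        while True:
            nxt = sat(phi[2], states, step, {**env, phi[1]: cur})
            if nxt == cur:
                return cur
            cur = nxt
    return env[phi[1]]  # ("var", x)

# A two-state loop: 0 --?request--> 1 --!ok--> 0 (hypothetical LTS).
edges = {(0, "?request"): {1}, (1, "!ok"): {0}}
step = lambda s, a: edges.get((s, a), set())
```

For instance, $\mmnu{\var{x}} \mmdiamond{\rcv{request}}{\mmdiamond{\snd{ok}}{\var{x}}}$ holds at state $0$ of this loop but would fail on its acyclic variant, illustrating why recursion is read as a greatest fixpoint.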
\begin{mydef}[Characteristic formulae] \label{def:char-formula} The characteristic formula of $T \in \mathcal{T}C$ on $\bigtOp$ is given by the function $\charforname : \mathcal{T}C \times \{ \inchoicetop , \outchoicetop \} \rightarrow \mathcal{F}C$, defined as: \[ \Anyform{T}{\bigtOp} \defi \begin{cases} \bigwedge_{i \in I} \, \mmdiamond{\opfun{\bigOp}{a_i}}{\Anyform{T_i}{\bigtOp}} & \text{if } T = \choice \\ \bigwedge_{i \in I} \, \mmbox{\opfun{\bigOp}{a_i}}{\Anyform{T_i}{\bigtOp}} & \text{if } T = \cochoice \\ \quad \land \, \bigvee_{i \in I} \, \mmdiamond{\opfun{\bigOp}{a_i}}{\truek} \, \land \, \mmbox{ \compset{ \{ \opfun{\bigOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \\ \mmbox{\mathcal{A}}{\falsek} & \text{if } T = \mathtt{end} \\ \mmnu{\var{x}} \Anyform{T'}{\bigtOp} & \text{if } T = \rec{\var{x}} T' \\ \var{x} & \text{if } T = \var{x} \xqedhere{59pt}{\qed} \end{cases} \] \renewcommand{\qed}{} \end{mydef} Given $T \in \mathcal{T}C$, $\Anyform{T}{\inchoicetop}$ is a $\mu$-calculus formula that characterises all the \emph{supertypes} of $T$; while $\Anyform{T}{\outchoicetop}$ characterises all its \emph{subtypes}. For the sake of clarity, we comment on Definition~\ref{def:char-formula} assuming that $\bigtOp$ is set to $\inchoicetop$. The first case of the definition makes every branch \emph{mandatory}. If $T = \inchoice$, then every internal choice branch that $T$ can select must also be offered by a supertype, and the relation must hold after each selection. The second case makes every branch \emph{optional} but requires at least one branch to be implemented. If $T = \outchoice$, then ($i$) for each $\rcv{a_i}$-branch offered by a supertype, the relation must hold in its $\rcv{a_i}$-derivative, ($ii$) a supertype must offer at least one of the $\rcv{a_i}$ branches, and ($iii$) a supertype cannot offer anything else but the $\rcv{a_i}$ branches.
If $T = \mathtt{end}$, then a supertype cannot offer any behaviour (recall that $\falsek$ does not hold for any type). Recursive types are mapped to greatest fixpoint constructions. Lemma~\ref{lem:compo} below states the compositionality of the construction, while Theorem~\ref{thm:main-theorem}, our main result, reduces subtyping checking to a model checking problem. A consequence of Theorem~\ref{thm:main-theorem} is that the characteristic formula of a session type precisely specifies the set of its subtypes or supertypes. \begin{restatable}{lemma}{lemcompo} \label{lem:compo} $\Anyform{T \subs{U}{\var{x}}}{\bigtOp} = \Anyform{T}{\bigtOp} \subs{ \Anyform{U}{\bigtOp} }{\var{x}}$ \end{restatable} \begin{proof} By structural induction, see Appendix~\ref{proof:lemcompo}. \end{proof} \begin{restatable}{theorem}{thmmaintheorem} \label{thm:main-theorem} $\phill T, U \in \mathcal{T}C \, : \, T \absub{\bigOp} U \iff U \models \Anyform{T}{\bigtOp}$ \end{restatable} \begin{proof} The proof essentially follows the techniques of~\cite{steffen89}, see Appendix~\ref{proof:thmmaintheorem}. 
\end{proof} \begin{corollary}\label{cor:TsubU} The following holds: \[ \begin{array}{c@{\qquad\quad}c} \begin{array}{l@{\;\,}l} (a) & T \subtype U \iff U \models \subform{T} \\ (b) & U \supertype T \iff T \models \SUPform{U} \end{array} & \begin{array}{l@{\;\,}l} (c) & U \models \subform{T} \iff T \models \SUPform{U} \end{array} \end{array} \] \end{corollary} \begin{proof} By Theorem~\ref{thm:main-theorem} and $\subtype = \absub{\,\inchoicetop}$, $\supertype = \absub{\,\outchoicetop}$, $\subtype = \supertype^{-1}$, and $\absub{\,\inchoicetop} = (\absub{\,\outchoicetop} )^{-1}$. \end{proof} \begin{proposition}\label{prop:char-complexity} For all $T, U \in \mathcal{T}C$, deciding whether or not $U \models \Anyform{T}{\bigtOp}$ holds can be done in time $\bigo{\lvert T \rvert \times \lvert U \rvert}$, in the worst case; where $\lvert T \rvert$ stands for the number of states in the LTS induced by $T$. \end{proposition} \begin{proof} Follows from~\cite{cs91}, since the size of $\Anyform{T}{\bigtOp}$ increases linearly with $\lvert T \rvert$. \end{proof} \begin{example}\label{ex:char-formula} Consider session types $T_1$ and $U_1$ from~\eqref{ex:intro-example-u-1} and~\eqref{ex:intro-example-subs} and fix $\mathcal{A} = \{ \rcv{request}, \snd{ok}, \snd{ko} \}$.
Following Definition~\ref{def:char-formula}, we obtain: \[ \begin{array}{rcll} \subform{T_1} & = & \multicolumn{2}{l}{ \mmbox{\rcv{request}}{ \mmdiamond{\snd{ok}}{ \mmbox{\mathcal{A}}{\falsek} } } \;\, \land \;\, \mmdiamond{\rcv{request}}{\truek} \;\, \land \;\, \mmbox{ \neg \{ \rcv{request} \}}{\falsek} } \\ \SUPform{U_1} & = & \mmnu{\var{x}} \mmdiamond{\rcv{request}}{} \big( & \left( \mmbox{\snd{ok}}{ \mmbox{\mathcal{A}}{\falsek} } \; \land \;\, \mmbox{\snd{ko}}{ \var{x} } \right) \\ && & \land \; \, \left( \mmdiamond{\snd{ok}}{ \truek } \lor \mmdiamond{\snd{ko}}{ \truek } \right) \;\, \land \;\, \mmbox{ \neg \{ \snd{ok}, \snd{ko} \}}{\falsek} \big) \end{array} \] We have $U_1 \models \subform{T_1}$ and $T_1 \models \SUPform{U_1}$, as expected (recall that $T_1 \subtype U_1$). \end{example} \section{Safety and duality in session types}\label{sec:safety} A key ingredient of session type theory is the notion of \emph{duality} between types. In this section, we study the relation between duality of session types, characteristic formulae, and safety (i.e., error freedom). In particular, building on recent work~\cite{CDY2014}, which studies the preciseness of subtyping for session types, we show how characteristic formulae can be used to guarantee safety. \label{subsec:safety} A system (of session types) is a pair of session types $T$ and $U$ that interact with each other by synchronising over messages. We write $T \spar U$ for a system consisting of $T$ and $U$, and let $S$ range over systems of session types. \begin{mydef}[Synchronous semantics]\label{def:synch-semantics} The \emph{synchronous} semantics of a \emph{system} of session types $T \spar U$ is given by the rule below, in conjunction with the rules of Figure~\ref{fig:lts-types}. \[ \inference {s-com} { T \semarrow{\anydir{a}} T' & U \semarrow{\coanydir{a}} U' } { T \spar U \semarrow{} T' \spar U' } \] We write $\semarrow{}^{\ast}$ for the reflexive transitive closure of $\semarrow{}$.
\end{mydef} Definition~\ref{def:synch-semantics} says that two types interact whenever they fire dual operations. \begin{example} Consider the following execution of system $T_1 \spar U_2$, from~\eqref{ex:intro-example-subs}: \begin{equation}\label{eq:good-exec} \begin{array}{ccll} T_1 \spar U_2 & \; = \; & \rcv{request} . \snd{ok} . \mathtt{end} \; \spar \; \rec{\var{x}} \snd{request} . \{ \ldots \} \\ & \; \semarrow{\; \; \;} \; & \snd{ok} . \mathtt{end} \; \spar \; \{ \rcv{ok} . \mathtt{end} \; \outchoicetop \, \rcv{ko} . \rec{\var{x}} \snd{request} . \{ \ldots \} \} \; \semarrow{\; \; \;} \; & \mathtt{end} \; \spar \; \mathtt{end} \end{array} \end{equation} \end{example} \begin{mydef}[Error~\cite{CDY2014} and safety] A system $T_1 \spar T_2$ is an \emph{error} if either: \begin{enumerate}[label=(\emph{\alph*})] \item \label{en:error-same} $T_1 = \choice$ and $T_2 = \choiceSet{j}{J}{U}$, with $\bigtOp$ fixed; \item \label{en:error-miss} $T_h = \inchoice$ and $T_g = \outchoiceSet{j}{J}{U}$; and $\exists i \in I \, : \, \phill j \in J \, : \, a_i \neq a_j$, with $h \neq g \in \{1,2\}$; or \item \label{en:error-end} $T_h = \mathtt{end}$ and $T_g = \choice$, with $h \neq g \in \{1,2\}$. \end{enumerate} We say that $S = T \spar U$ is \emph{safe} if for all $S' \, : \, S \semarrow{}^{\ast} S'$, $S'$ is not an error. \end{mydef} A system of the form~\ref{en:error-same} is an error since both types are attempting to send (resp.\ receive) messages. An error of type~\ref{en:error-miss} indicates that some of the messages cannot be received by one of the types. An error of type~\ref{en:error-end} indicates a system where one of the types has terminated while the other still expects to send or receive messages.
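The error cases and rule $\inference{s-com}{}{}$ combine into a small reachability check. The sketch below is illustrative Python (the tuple encoding of types is an assumption of this sketch, not the paper's): it explores all synchronous executions of $T \spar U$ and reports whether an error configuration is reachable.

```python
# Types as tuples: ("end",), ("in", ((lab, cont), ...)) for external choices (&),
# ("out", ((lab, cont), ...)) for internal choices (+), ("rec", x, body), ("var", x).

def subst(t, x, s):
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "rec":
        return t if t[1] == x else ("rec", t[1], subst(t[2], x, s))
    if tag in ("in", "out"):
        return (tag, tuple((a, subst(c, x, s)) for a, c in t[1]))
    return t

def unfold(t):
    while t[0] == "rec":
        t = subst(t[2], t[1], t)
    return t

def is_error(t, u):
    """The three error cases (a)-(c) of the definition above."""
    t, u = unfold(t), unfold(u)
    if t[0] == u[0] and t[0] in ("in", "out"):
        return True                                      # (a) same-direction choices
    for a, b in ((t, u), (u, t)):
        if a[0] == "out" and b[0] == "in" and \
           not set(dict(a[1])) <= set(dict(b[1])):
            return True                                  # (b) unreceivable message
        if a[0] == "end" and b[0] in ("in", "out"):
            return True                                  # (c) one side terminated
    return False

def steps(t, u):
    """Successors of t | u under rule [s-com] (synchronisation on dual actions)."""
    t, u = unfold(t), unfold(u)
    succ = []
    if t[0] == "out" and u[0] == "in":
        ub = dict(u[1])
        succ += [(c, ub[a]) for a, c in t[1] if a in ub]
    if u[0] == "out" and t[0] == "in":
        tb = dict(t[1])
        succ += [(tb[a], c) for a, c in u[1] if a in tb]
    return succ

def safe(t, u, seen=frozenset()):
    """True iff no error configuration is reachable from t | u."""
    if (t, u) in seen:
        return True
    if is_error(t, u):
        return False
    return all(safe(t2, u2, seen | {(t, u)}) for t2, u2 in steps(t, u))
```

On $T_1 = \rcv{request}.\snd{ok}.\mathtt{end}$, the partner $\snd{request}.\rcv{ok}.\mathtt{end}$ yields a safe system, while a partner expecting $\rcv{ko}$ instead of $\rcv{ok}$ reaches an error of type~\ref{en:error-miss}.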
\begin{mydef}[Duality]\label{def:duality} The dual of a formula $\phi \in \mathcal{F}$, written $\dual{\phi}$ (resp.\ of a session type $T \in \mathcal{T}$, written $\dual{T}$), is defined recursively as follows: \[ \begin{array}{c@{\quad}c} \dual{\phi} \defi \begin{cases} \dual{\phi_1} \land \dual{\phi_2} & \text{if } \phi = \phi_1 \land \phi_2 \\ \dual{\phi_1} \lor \dual{\phi_2} & \text{if } \phi = \phi_1 \lor \phi_2 \\ \mmbox{\coanydir{a}}{\dual{\phi'}} & \text{if } \phi = \mmbox{{\Op \! a}}{\phi'} \\ \mmdiamond{\coanydir{a}}{\dual{\phi'}} & \text{if } \phi = \mmdiamond{{\Op \! a}}{\phi'} \\ \mmnu{\var{x}} \dual{\phi'} & \text{if } \phi = \mmnu{\var{x}} \phi' \\ \phi & \text{if } \phi = \truek, \falsek, \text{ or } \var{x} \xqedhere{196pt}{\qed} \end{cases} & \dual{T} \defi \begin{cases} \cocochoiceSetNoIdx{i}{I}{\dual{T_i}} & \text{if } T = \choice \\ \rec{\var{x}} \dual{T'} & \text{if } T = \rec{\var{x}} T' \\ \var{x} & \text{if } T = \var{x} \\ \mathtt{end} & \text{if } T = \mathtt{end} \end{cases} \end{array} \] \renewcommand{\qed}{} \end{mydef} In Definition~\ref{def:duality}, notice that the dual of a formula only renames labels. \begin{lemma} For all $T \in \mathcal{T}C$ and $\phi \in \mathcal{F}C$, $T \models \phi \iff \dual{T} \models \dual{\phi}$. \end{lemma} \begin{proof} Direct from the definitions of $\dual{T}$ and $\dual{\phi}$ (labels are renamed uniformly). \end{proof} \begin{restatable}{theorem}{thmcharformduality} \label{thm:char-form-duality} For all $T \in \mathcal{T} \, : \, \dual{\Anyform{T}{\bigtOp}} = \Anyform{\dual{T}}{\dual{\bigtOp}}$. \end{restatable} \begin{proof} By structural induction on $T$, see Appendix~\ref{proof:thmcharformduality}. \end{proof} Theorem~\ref{thm:safety} follows straightforwardly from~\cite{CDY2014} and allows us to obtain a sound and complete model-checking based condition for safety, cf.\ Theorem~\ref{thm:safety-statements}.
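Definition~\ref{def:duality} is mechanical enough to transcribe directly. In the illustrative Python sketch below (actions are encoded as strings prefixed with "?" or "!", an assumption of this encoding), dualising a type flips each choice constructor and message direction, while dualising a formula only renames the labels in its modalities.

```python
def co(a):
    """Dual action: ?a <-> !a (directions encoded as string prefixes)."""
    return ("!" if a[0] == "?" else "?") + a[1:]

def dual_type(t):
    """Flip choice constructors and message directions; "in" receives, "out" sends."""
    tag = t[0]
    if tag in ("in", "out"):
        flip = "out" if tag == "in" else "in"
        return (flip, tuple((co(a), dual_type(c)) for a, c in t[1]))
    if tag == "rec":
        return ("rec", t[1], dual_type(t[2]))
    return t  # end, var

def dual_formula(phi):
    """Rename each modality's label by its co-action; the shape is unchanged."""
    tag = phi[0]
    if tag in ("and", "or"):
        return (tag, dual_formula(phi[1]), dual_formula(phi[2]))
    if tag in ("box", "dia"):
        return (tag, co(phi[1]), dual_formula(phi[2]))
    if tag == "nu":
        return ("nu", phi[1], dual_formula(phi[2]))
    return phi  # tt, ff, var
```

Both functions are involutive, reflecting that duality satisfies $\dual{\dual{T}} = T$.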
\begin{restatable}[Safety]{theorem}{thmsafety} \label{thm:safety} $T \spar U$ is safe $\iff$ $(T \subtype \dual{U} \lor \dual{T} \subtype U)$. \end{restatable} \begin{proof} Direction $(\Longrightarrow)$ follows from~\cite[Table 7]{CDY2014} and direction $(\Longleftarrow)$ is by coinduction on the derivations of $T \subtype \dual{U}$ and $\dual{T} \subtype U$. See Appendix~\ref{app:safety} for details. \end{proof} \noindent Finally, we obtain: \begin{theorem}\label{thm:safety-statements} The following statements are equivalent: $\; (a) {\;\,} T \spar U \text{ is safe }$ \[ \begin{array}{l@{\;\,}l@{\qquad\qquad}l@{\;\,}l} (b) & \dual{U} \models \Anyform{T}{\inchoicetop} \lor U \models \Anyform{\dual{T}}{\inchoicetop} & (d) & U \models \Anyform{\dual{T}}{\outchoicetop} \lor \dual{U} \models \Anyform{T}{\outchoicetop} \\ (c) & T \models \Anyform{\dual{U}}{\outchoicetop} \lor \dual{T} \models \Anyform{{U}}{\outchoicetop} & (e) & \dual{T} \models \Anyform{{U}}{\inchoicetop} \lor {T} \models \Anyform{\dual{U}}{\inchoicetop} \end{array} \] \end{theorem} \begin{proof} By direct applications of Theorem~\ref{thm:safety}, then Corollary~\ref{cor:TsubU} and Theorem~\ref{thm:char-form-duality}. \end{proof} \section{Alternative algorithms for subtyping}\label{sec:algos} In order to compare the cost of checking the subtyping relation via characteristic formulae to that of other approaches, we present two other algorithms: the original algorithm as given by Gay and Hole in~\cite{GH05} and an adaptation of Kozen, Palsberg, and Schwartzbach's algorithm~\cite{KPS95} for recursive subtyping for the $\lambda$-calculus. \subsection{Gay and Hole's algorithm} \label{sub:gay-hole-algo} \begin{figure} \caption{Algorithmic subtyping rules~\cite{GH05}} \label{fig:gay-hole-algo} \end{figure} The inference rules of Gay and Hole's algorithm are given in Figure~\ref{fig:gay-hole-algo} (adapted to our setting).
The rules essentially follow those of Definition~\ref{def:ab-subtype}, but deal explicitly with recursion. They use judgments $\judge{\Gamma}{T}{U}$ in which $T$ and $U$ are (closed) session types and $\Gamma$ is a sequence of assumed instances of the subtyping relation, i.e., $\Gamma = T_1 \subtype_{c} U_1 , {\scriptstyle \ldots}, T_k \subtype_{c} U_k$, saying that each pair $T_i \subtype_{c} U_i$ has been visited. To guarantee termination, rule $\inferrule{Assump}$ should always be used if it is applicable. \begin{theorem}[Correspondence{~\cite[Corollary 2]{GH05}}] \label{thm:gay-hole-algo} $T \subtype U$ if and only if $\judge{\emptyset}{T}{U}$ is derivable from the rules in Figure~\ref{fig:gay-hole-algo}. \end{theorem} \noindent Proposition~\ref{prop:gay-complexity}, a contribution of this paper, states the algorithm's complexity. \begin{proposition}\label{prop:gay-complexity} For all $T, U \in \mathcal{T}C$, the problem of deciding whether or not $\judge{\emptyset}{T}{U}$ is derivable has an $\bigo{n^{2^n}}$ time complexity, in the worst case; where $n$ is the number of nodes in the parsing tree of $T$ or $U$ (whichever is bigger). \end{proposition} \begin{proof} Assume the bigger session type is $T$ and its size is $n$ (the number of nodes in its parsing tree). Observe that the algorithm in Figure~\ref{fig:gay-hole-algo} needs to visit every node of $T$ and relies on explicit unfolding of recursive types. Given a type of size $n$, its unfolding is of size $\bigo{n^2}$, in the worst case. Hence, we have a chain $\bigo{n} + \bigo{n^2} + \bigo{n^4} + \ldots$, or $\bigo{\sum_{1 \leq i \leq k} n^{2^i}}$, where $k$ is a bound on the number of derivations needed for the algorithm to terminate. According to~\cite[Lemma 10]{GH05}, the number of derivations is bounded by the number of sub-terms of $T$, which is $\bigo{n}$. Thus, we obtain a worst-case time complexity of $\bigo{n^{2^n}}$.
\end{proof} \subsection{Kozen, Palsberg, and Schwartzbach's algorithm} \label{sub:kozen-et-al} Considering that the results of~\cite{KPS95} ``\emph{generalise to an arbitrary signature of type constructors (\ldots)}'', we adapt Kozen et al.'s algorithm, originally designed for subtyping recursive types in the $\lambda$-calculus. Intuitively, the algorithm reduces the problem of subtyping to checking the language emptiness of an automaton given by the product of two (session) types. The intuition behind the algorithm is that ``\emph{two types are ordered if no common path detects a counterexample}''. We give the details of our instantiation below. The set of type constructors over $\mathcal{A}$, written $\mathfrak{C}_\mathcal{A}$, is defined as follows: \[ \mathfrak{C}_\mathcal{A} \defi \{ \mathtt{end} \} \cup \{ \choicecons{\inchoicetop}{A} \, \mid \, \emptyset \subset A \subseteq \mathcal{A} \} \cup \{ \choicecons{\outchoicetop}{A} \, \mid \, \emptyset \subset A \subseteq \mathcal{A} \} \] \begin{mydef}[Term automata]\label{def:term-automaton} A term automaton over $\mathcal{A}$ is a tuple $ \mathcal{M} = (Q, \, \mathfrak{C}_\mathcal{A}, \, q_0, \, \delta, \, \ell) $ where \begin{itemize} \item $Q$ is a (finite) set of states, \item $q_0 \in Q$ is the initial state, \item $\delta : Q \times \mathcal{A} \rightarrow Q$ is a (partial) function (the \emph{transition function}), and \item $\ell : Q \rightarrow \mathfrak{C}_\mathcal{A}$ is a (total) labelling function \end{itemize} such that for any $q \in Q$, if $\ell(q) \in \{\choicecons{\inchoicetop}{A} , \choicecons{\outchoicetop}{A} \} $, then $\delta(q, {\Op \! a})$ is defined for all ${\Op \! a} \in A$; and for any $q \in Q$ such that $\ell(q) = \mathtt{end}$, $\delta(q, {\Op \! a})$ is undefined for all ${\Op \! a} \in \mathcal{A}$.
We decorate $Q$, $\delta$, etc.\ with a superscript, e.g., $\mathcal{M}$, where necessary. \end{mydef} We assume that session types have been ``translated'' to term automata; the transformation is straightforward (see~\cite{DY13} for a similar transformation). Given a session type $T \in \mathcal{T}C$, we write $\autof{T}$ for its corresponding term automaton. \begin{mydef}[Subtyping]\label{def:kozen-sub} $\sqsubseteq$ is the smallest binary relation on $\mathfrak{C}_\mathcal{A}$ such that: \[ \mathtt{end} \sqsubseteq \mathtt{end} \qquad \choicecons{\inchoicetop}{A} \sqsubseteq \choicecons{\inchoicetop}{B} \iff A \subseteq B \qquad \choicecons{\outchoicetop}{A} \sqsubseteq \choicecons{\outchoicetop}{B} \iff B \subseteq A \qedhere \] \end{mydef} Definition~\ref{def:kozen-sub} essentially maps the rules of Definition~\ref{def:ab-subtype} to type constructors. The order $\sqsubseteq$ is used in the product automaton to identify final states, see below. \begin{mydef}[Product automaton]\label{def:product-automaton} Given two term automata $\automaton$ and $\mathcal{N}$ over $\mathcal{A}$, their product automaton $\autoprod{\automaton}{\mathcal{N}} = (P, \, p_0, \, \Delta, \, F)$ is such that \begin{itemize} \item $P = Q^{\automaton} \times Q^{\mathcal{N}}$ are the states of $\autoprod{\automaton}{\mathcal{N}}$, \item $p_0 = (q_0^{\automaton}, q_0^{\mathcal{N}})$ is the initial state, \item $\Delta : P \times \mathcal{A} \rightarrow P$ is the partial function which for $q_1 \in Q^{\automaton}$ and $q_2 \in Q^{\mathcal{N}}$ gives \[ \Delta( ( q_1, q_2 ), {\Op \! a} ) = ( \delta^{\automaton}(q_1, {\Op \! a} ) , \delta^{\mathcal{N}}(q_2, {\Op \! a} ) ) \] \item $F \subseteq P$ is the set of \emph{accepting} states: $ F = \{ \, ( q_1, q_2 ) \, \mid \, \ell^{\automaton}(q_1) \nsqsubseteq \ell^{\mathcal{N}}(q_2) \, \} $ \end{itemize} Note that $\Delta( ( q_1, q_2 ), {\Op \! a} )$ is defined iff $\delta^{\automaton}(q_1, {\Op \! a})$ and $\delta^{\mathcal{N}}(q_2, {\Op \! a})$ are defined. \end{mydef} Following~\cite{KPS95}, we obtain Theorem~\ref{def:kozen-correspondence}. \begin{theorem}\label{def:kozen-correspondence} Let $T, U \in \mathcal{T}C$: $ T \subtype U$ iff the language of $\autoprod{\autof{T}}{\autof{U}}$ is empty. \end{theorem} Theorem~\ref{def:kozen-correspondence} essentially says that $T \subtype U$ iff one cannot find a ``common path'' in $T$ and $U$ that leads to nodes whose labels are not related by $\sqsubseteq$, i.e., one cannot find a counterexample for them \emph{not} being in the subtyping relation. \begin{example} Below we show the constructions for $T_1$~\eqref{ex:intro-example-u-1} and $U_1$~\eqref{ex:intro-example-subs}.
\[ \begin{array}{c@{\;\,\,}c@{\;\,\,}c@{\;\,\,}c} \begin{tikzpicture}[mycfsm] \node[state, fill=gray!15] (q0) {$\choicecons{\outchoicetop}{\{ \rcv{request} \}}$}; \node[state, below of=q0] (q1) {$\choicecons{\inchoicetop}{ \{ \snd{ok} \} }$}; \node[state, below of=q1] (q2) {$\mathtt{end}$}; \node[below of=q2,yshift=0.5cm,font=\tiny] {$\autof{T_1}$}; \path (q0) edge node [right] {$\rcv{request}$} (q1) (q1) edge node [left] {$\snd{ok}$} (q2) ; \end{tikzpicture} & \begin{tikzpicture}[mycfsm] \node[state, fill=gray!15] (q0) {$\choicecons{\outchoicetop}{\{ \rcv{request} \}}$}; \node[state, below of=q0] (q1) {$\choicecons{\inchoicetop}{ \{ \snd{ok} , \snd{ko} \} }$}; \node[state, below of=q1] (q2) {$\mathtt{end}$}; \node[below of=q2,yshift=0.5cm,font=\tiny] {$\autof{U_1}$}; \path (q0) edge [bend right] node [left] {$\rcv{request}$} (q1) (q1) edge node [left] {$\snd{ok}$} (q2) (q1) edge [bend right] node [right] {$\snd{ko}$} (q0) ; \end{tikzpicture} & \begin{tikzpicture}[mycfsm] \node[state, fill=gray!15] (q0) {$ \choicecons{\outchoicetop}{\{ \rcv{request} \}} \sqsubseteq \choicecons{\outchoicetop}{\{ \rcv{request} \}} $}; \node[state, below of=q0] (q1) {$ \choicecons{\inchoicetop}{ \{ \snd{ok} \} } \sqsubseteq \choicecons{\inchoicetop}{ \{ \snd{ok} , \snd{ko} \} } $}; \node[state, below of=q1] (q2) {$ \mathtt{end} \sqsubseteq \mathtt{end} $}; \node[below of=q2,yshift=0.5cm,font=\tiny] {$\autoprod{\autof{T_1}}{\autof{U_1}}$ }; \path (q0) edge node [left] {$\rcv{request}$} (q1) (q1) edge node [left] {$\snd{ok}$} (q2) ; \end{tikzpicture} & \begin{tikzpicture}[mycfsm] \node[state, fill=gray!15] (q0) {$ \choicecons{\outchoicetop}{\{ \rcv{request} \}} \sqsubseteq \choicecons{\outchoicetop}{\{ \rcv{request} \}} $}; \node[state, below of=q0, accepting] (q1) {$ \choicecons{\inchoicetop}{ \{ \snd{ok} , \snd{ko} \} } \nsqsubseteq \choicecons{\inchoicetop}{ \{ \snd{ok} \} } $}; \node[state, below of=q1] (q2) {$ \mathtt{end} \sqsubseteq \mathtt{end} $}; \node[below 
of=q2,yshift=0.5cm,font=\tiny] {$ \autoprod{\autof{U_1}}{\autof{T_1}}$}; \path (q0) edge node [left] {$\rcv{request}$} (q1) (q1) edge node [left] {$\snd{ok}$} (q2) ; \end{tikzpicture} \end{array} \] Initial states are shaded and accepting states are denoted by a double line. Note that the language of $\autoprod{\autof{T_1}}{\autof{U_1}}$ is empty (no accepting states). \end{example} \begin{proposition}\label{prop:kozen-complexity} For all $T, U \in \mathcal{T}C$, the problem of deciding whether or not the language of $\autoprod{\autof{T}}{\autof{U}}$ is empty has a worst-case time complexity of $\bigo{\lvert T \rvert \times \lvert U \rvert}$; where $\lvert T \rvert$ stands for the number of states in the term automaton $\autof{T}$. \end{proposition} \begin{proof} Follows from the fact that the algorithm in~\cite{KPS95} has a complexity of $\bigo{n^2}$, see~\cite[Theorem 18]{KPS95}. This complexity result also applies to our instantiation, assuming that checking membership of $\sqsubseteq$ is relatively inexpensive, i.e., $\lvert A \rvert \ll \lvert Q^\mathcal{M} \rvert$ for each $q$ such that $\ell^\mathcal{M}(q) \in \{\choicecons{\inchoicetop}{A} , \choicecons{\outchoicetop}{A} \}$. \end{proof} \section{Experimental evaluation} \label{sec:tool} Proposition~\ref{prop:gay-complexity} states that Gay and Hole's classical algorithm has an exponential complexity, while the other approaches have a quadratic complexity (Propositions~\ref{prop:char-complexity} and~\ref{prop:kozen-complexity}). The rest of this section presents several experiments that give a better perspective of the \emph{practical} cost of these approaches. \begin{figure} \caption{Benchmarks (1)} \label{fig:plots-1} \end{figure} \subsection{Implementation overview and metrics} We have implemented three different approaches to checking whether two given session types are in the subtyping relation given in Definition~\ref{def:ab-subtype}.
The tool~\cite{tool}, written in Haskell, consists of three main parts: ($i$) a module that translates session types to the mCRL2 specification language~\cite{groote2014modeling} and generates characteristic ($\mu$-calculus) formulae (cf.\ Definition~\ref{def:char-formula}); ($ii$) a module implementing the algorithm of~\cite{GH05} (see Section~\ref{sub:gay-hole-algo}), which relies on the Haskell $\mathtt{bound}$ library to make session type unfolding as efficient as possible; and ($iii$) a module implementing our adaptation of Kozen et al.'s algorithm~\cite{KPS95}, see Section~\ref{sub:kozen-et-al}. Additionally, we have developed an accessory tool which generates arbitrary session types using Haskell's QuickCheck library~\cite{quickcheck}. The tool invokes the mCRL2 toolset~\cite{mcrl2} (release version {\tt 201409.1}) to check the validity of a $\mu$-calculus formula on a given model. We experimented with invoking mCRL2 using several parameters and concluded that the default parameters gave us the best performance overall. Following discussions with the mCRL2 developers, we have notably experimented with a parameter that pre-processes the $\mu$-calculus formula to ``insert dummy fixpoints in modal operators''. This parameter gave us better performance in some cases, but dramatic losses for ``super-recursive'' session types. Instead, adding ``dummy fixpoints'' while generating the characteristic formulae gave us the best results overall.\footnote{This optimisation was first suggested on the mCRL2 mailing list.} The tool is thus based on a slight modification of Definition~\ref{def:char-formula} where a modal operator $\mmbox{{\Op \! a}}{\phi}$ becomes $\mmbox{{\Op \! a}}{\mmnu{\var{t}}\phi}$ (with $\var{t}$ fresh and unused), and similarly for $\mmdiamond{{\Op \! a}}{\phi}$. Note that this modification does not change the semantics of the generated formulae. We use the following functions to measure the size of a session type.
\[ \resizebox{\textwidth}{!}{$ \begin{array}{ll} \nummsg{T} \, \defi & \unfold{T} \, \defi \\ \quad \left\{ \! \begin{array}{ll} 0 & \text{if } T = \mathtt{end} \; \text{or} \; T = \var{x} \\ \nummsg{T'} & \text{if } T = \rec{x} T' \\ \lvert I \rvert \!+\! \sum_{i \in I} \nummsg{T_i} & \text{if } T = {\choice} \end{array} \right. \; & \quad \left\{ \! \begin{array}{ll} 0 & \text{if } T = \mathtt{end} \; \text{or} \; T = \var{x} \\ (1 \!+\! \varocc{T'}{\var{x}}) \!\times\! \unfold{T'} & \text{if } T = \rec{x} T' \\ \lvert I \rvert \!+\! \sum_{i \in I} \unfold{T_i} & \text{if } T = {\choice} \end{array} \right. \end{array} $} \] Function $\nummsg{T}$ returns the \emph{number of messages} in $T$. Letting $\varocc{T}{\var{x}}$ be the number of times variable $\var{x}$ appears \emph{free} in session type $T$, function $\unfold{T}$ returns the number of messages in the unfolding of $T$. Function $\unfold{T}$ takes into account the structure of a type wrt.\ recursive definitions and calls (by unfolding once every recursion variable). \begin{figure} \caption{Benchmarks (2)} \label{fig:plots-2} \end{figure} \subsection{Benchmark results} The first set of benchmarks compares the performances of the three approaches when the two given types are identical, i.e., we measure the time it takes for an algorithm to check whether $T \subtype T$ holds. The second set of benchmarks considers types that are ``unfolded'', so that the two types have different sizes. Note that checking whether two equal types are in the subtyping relation is one of the most costly cases of subtyping, since every branch of a choice must be visited.
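The two measures above can be computed by a direct recursion on the tuple encoding used in our earlier sketches. The illustrative Python below (function names are hypothetical) implements $\nummsg{T}$, the free-occurrence count $\varocc{T}{\var{x}}$, and $\unfold{T}$.

```python
# Types as tuples: ("end",), ("var", x), ("rec", x, body),
# ("in"/"out", ((label, cont), ...)) for external/internal choices.

def nummsg(t):
    """Number of messages occurring in t."""
    tag = t[0]
    if tag in ("end", "var"):
        return 0
    if tag == "rec":
        return nummsg(t[2])
    return len(t[1]) + sum(nummsg(c) for _, c in t[1])

def varocc(t, x):
    """Number of free occurrences of variable x in t."""
    tag = t[0]
    if tag == "var":
        return 1 if t[1] == x else 0
    if tag == "rec":
        return 0 if t[1] == x else varocc(t[2], x)
    if tag in ("in", "out"):
        return sum(varocc(c, x) for _, c in t[1])
    return 0

def unfold_count(t):
    """Messages in the once-unfolded type (each recursion variable unfolded once)."""
    tag = t[0]
    if tag in ("end", "var"):
        return 0
    if tag == "rec":
        return (1 + varocc(t[2], t[1])) * unfold_count(t[2])
    return len(t[1]) + sum(unfold_count(c) for _, c in t[1])
```

For $U_1$ from~\eqref{ex:intro-example-subs}, i.e., $\rec{\var{x}} \rcv{request}.\{\snd{ok}.\mathtt{end} \inchoicetop \snd{ko}.\var{x}\}$, this gives $\nummsg{U_1} = 3$ and $\unfold{U_1} = 6$.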
Our results below show the performances of four algorithms: ($i$) our Haskell implementation of Gay and Hole's algorithm (GH), ($ii$) our implementation of Kozen, Palsberg, and Schwartzbach's algorithm (KPS), ($iii$) an invocation of mCRL2 to check whether $U \models \subform{T}$ holds, and ($iv$) an invocation of mCRL2 to check whether $T \models \SUPform{U}$ holds. All the benchmarks were conducted on a 3.40GHz Intel i7 computer with 16GB of RAM. Unless specified otherwise, the tests have been executed with a timeout set to $2$ hours ($7200$ seconds). A gap appears in the plots whenever an algorithm reached the timeout. Times ($y$-axis) are plotted on a \emph{logarithmic} scale; the scale used for the size of types ($x$-axis) is specified below each plot. {\bf Arbitrary session types} Plots (a) and (b) in Figure~\ref{fig:plots-1} show how the algorithms perform with arbitrary session types (randomly generated by our tool). Plot (a) shows clearly that the execution time of KPS, $T \models \SUPform{T}$, and $T \models \subform{T}$ mostly depends on $\nummsg{T}$; while plot (b) shows that GH is mostly affected by the number of messages in the unfolding of a type ($\unfold{T}$). Unsurprisingly, GH performs better for smaller session types, but starts reaching the timeout when $\nummsg{T} \approx 700$. The other three algorithms have roughly similar performances, with the model checking based ones performing slightly better for large session types. Note that both $T \models \SUPform{T}$ and $T \models \subform{T}$ have roughly the same execution time.
{\bf Non-recursive arbitrary session types} Plot (c) in Figure~\ref{fig:plots-1} shows how the algorithms perform with arbitrary types that do \emph{not} feature any recursive definition (randomly generated by our tool), i.e., the types are of the form: \[ \textstyle T \coloneqq \mathtt{end} \;\mid\; \inchoice \;\mid\; \outchoice \] The plot shows that GH performs much better than the other three algorithms (terminating under $1$s for each invocation). Indeed, this set of benchmarks is the best-case scenario for GH: there is no recursion, hence no need to unfold types. Observe that the model checking based algorithms perform better than KPS for large session types. Again, $T \models \SUPform{T}$ and $T \models \subform{T}$ behave similarly. {\bf Handcrafted session types} Plots (d) and (e) in Figure~\ref{fig:plots-2} show how the algorithms deal with ``super-recursive'' types, i.e., types of the form: \[ \textstyle T \coloneqq \recND{x}_1 . {\Op \! a}_1 . \ldots \recND{x}_k . {\Op \! a}_k \left\{ \; \bigOp_{1 \leq i \leq k} {\Op \! a}_i . \{ \bigOp_{1 \leq j \leq k} {\Op \! a}_j . \var{x}_j \} \; \right\} \] where $\nummsg{T} = k(k+2)$ for each $T$. Plot (d) shows the results of experiments with $\bigtOp$ set to $\inchoicetop$ and $\Op$ to $\sendop$; while $\bigtOp$ is set to $\outchoicetop$ and $\Op$ to $\rcvop$ in plot (e). The exponential time complexity of GH appears clearly in both plots: GH starts reaching the timeout when $\nummsg{T} = 80$ ($k=8$). However, the other three algorithms deal well with larger session types of this form. Interestingly, due to the nature of these session types (consisting of either only \emph{internal} choices or only \emph{external} choices), the two model checking based algorithms perform slightly differently. This is explained by Definition~\ref{def:char-formula}, where the formula generated with $\SUPform{T}$ for an internal choice is larger than for an external choice, and vice-versa for $\subform{T}$.
Observe that $T \models \subform{T}$ (resp.\ $T \models \SUPform{T}$) performs better than KPS for large session types in plot (d) (resp.\ plot (e)). {\bf Unfolded types} The last set of benchmarks evaluates the performances of the four algorithms when checking whether $T = \rec{\var{x}} {V} \; \subtype \; \rec{\var{x}} \left( {V} \subs{{V}}{\var{x}} \right) = U$ holds, where $\var{x}$ is fixed and ${V}$ (randomly generated) is of the form: \[ \textstyle {V} \coloneqq \inchoiceSet{i}{I}{{V}} \;\mid\; \outchoiceSet{i}{I}{{V}} \;\mid\; \var{x} \] Plot (f) in Figure~\ref{fig:plots-2} shows the results of our experiments (we have set the timeout to $6$ hours for these tests). Observe that $U \models \subform{T}$ starts reaching the timeout quickly. In this case, the model (i.e., $U$) is generally much larger than the formula (i.e., $\subform{T}$). After discussing with the mCRL2 team, this discrepancy seems to originate from internal optimisations of the model checker that can be diminished (or exacerbated) by tweaking the parameters of the tool-set. The other three algorithms have similar performances. Note that the good performance of GH in this case can be explained by the fact that there is only one recursion variable in these types; hence the size of their unfolding does not grow very fast. \section{Related work and conclusions}\label{sec:conc} \label{sec:related} \paragraph{\bf Related work} Subtyping for recursive types has been studied for many years. Amadio and Cardelli~\cite{AC93} introduced the first subtyping algorithm for recursive types for the $\lambda$-calculus. Kozen et al.\ gave a quadratic subtyping algorithm in~\cite{KPS95}, which we have adapted for session types, cf.\ Section~\ref{sub:kozen-et-al}. A good introduction to the theory and history of the field is in~\cite{GLP02}.
Pierce and Sangiorgi~\cite{PS96} introduced subtyping for IO types in the $\pi$-calculus, which later became a foundation for the algorithm of Gay and Hole, who first introduced subtyping for session types in the $\pi$-calculus in~\cite{GH05}. The paper~\cite{DemangeonH11} studied an abstract encoding between linear types and session types, with a focus on subtyping. Chen et al.~\cite{CDY2014} studied the notion of \emph{preciseness} of subtyping relations for session types. The present work is the first to study the algorithmic aspect of the problem. Characteristic formulae for finite processes were first studied in~\cite{GS86}, then in~\cite{steffen89} for finite-state processes. Since then the theory has been studied extensively~\cite{si94,ai97,ai07,cs91,FS05,MOlm98,ails12} for most of van Glabbeek's spectrum~\cite{Glabbeek90} and in different settings (e.g., timed~\cite{AIPP00} and probabilistic~\cite{SZ12}). See~\cite{ails12,ai07} for a detailed historical account of the field. This is the first time characteristic formulae are applied to the field of session types. A recent work~\cite{ails12} proposes a general framework to obtain characteristic formula constructions for simulation-like relations ``for free''. We chose to follow~\cite{steffen89} as it is a better fit for session types: it allows for a straightforward inductive construction of a characteristic formula. Moreover,~\cite{steffen89} uses the standard $\mu$-calculus, which allowed us to integrate our theory with an existing model checker. \paragraph{\bf Conclusions} In this paper, we gave a first connection between session types and model checking, through a characteristic formulae approach based on the $\mu$-calculus. We gave three new algorithms for subtyping: two are based on model checking and one is an instantiation of an algorithm for the $\lambda$-calculus~\cite{KPS95}. All of them have quadratic complexity in the worst case and behave well in practice.
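To make the formula construction concrete, the clause shapes of Definition~\ref{def:char-formula} can be sketched over a toy AST; the encoding, ASCII operator syntax, and function name below are ours and are not part of our tool-chain or of mCRL2's input language:

```python
# Sketch: render a characteristic-formula-like string from a toy session-type
# AST, mirroring the clause shapes of Definition char-formula: a diamond
# conjunction for one polarity of choice, and boxes plus "some branch exists"
# and "no other label" clauses for the other polarity.

def phi(t):
    tag = t[0]
    if tag == "end":
        return "[A]ff"                          # end allows no transition
    if tag == "var":
        return t[1]
    if tag == "rec":                            # rec x.T  ~>  nu x. phi(T)
        return f"nu {t[1]}.{phi(t[2])}"
    if tag == "in":                             # one polarity: diamond conjunction
        return " & ".join(f"<!{a}>{phi(u)}" for a, u in t[1])
    if tag == "ex":                             # other polarity: boxes + exhaustiveness
        boxes = " & ".join(f"[?{a}]{phi(u)}" for a, u in t[1])
        some = " | ".join(f"<?{a}>tt" for a, _ in t[1])
        other = ",".join(f"?{a}" for a, _ in t[1])
        return f"{boxes} & ({some}) & [^{{{other}}}]ff"

t = ("rec", "x", ("in", [("a", ("var", "x")), ("b", ("end",))]))
assert phi(t) == "nu x.<!a>x & <!b>[A]ff"
```

The asymmetry between the two choice clauses is what makes $\SUPform{T}$ and $\subform{T}$ behave slightly differently on the purely internal and purely external benchmarks of Section~\ref{fig:plots-2}.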
Our approach can easily be: ($i$) adapted to types for the $\lambda$-calculus (see Appendix~\ref{sec:lambda}) and ($ii$) extended to session types that carry other (\emph{closed}) session types, e.g., see~\cite{GH05,CDY2014}, by simply applying the algorithm recursively on the carried types. For instance, to check $ \snd{a\langle \rcv{c} \outchoicetop \rcv{d} \rangle } \; \subtype \; \snd{a\langle \rcv{c} \rangle } \, \inchoicetop \, \snd{b \langle \mathtt{end} \rangle } $ one can check the subtyping for the outer-most types, while building constraints, i.e., $\{ \rcv{\!c} \, \outchoicetop \rcv{d} \, \subtype \, \rcv{c} \}$, to be checked later on, by re-applying the algorithm. The present work paves the way for new connections between session types and modal fixpoint logic or model checking theories. It is a basis for upcoming connections between model checking and classical problems of session types, such as the asynchronous subtyping of~\cite{CDY2014} and multiparty compatibility checking~\cite{LTY15, DY13}. We are also considering applying model checking approaches to session types with probabilistic, logical~\cite{BHTY10}, or time~\cite{BLY15,BYY14} annotations. Finally, we remark that~\cite{CDY2014} also establishes that subtyping (cf.\ Definition~\ref{def:ab-subtype}) is \emph{sound} (but not complete) wrt.\ the \emph{asynchronous} semantics of session types, which models programs that communicate through FIFO buffers. Thus, our new conditions (items $(b)$-$(e)$ of Theorem~\ref{thm:safety-statements}) also imply safety $(a)$ in the asynchronous setting. \paragraph{\bf Acknowledgements} We would like to thank Luca Aceto, Laura Bocchi, and Alceste Scalas for their invaluable comments on earlier versions of this work. This work is partially supported by UK EPSRC projects EP/K034413/1, EP/K011715/1, and EP/L00058X/1; and by EU 7FP project under grant agreement 612985 (UPSCALE).
\appendix \section{Appendix: Proofs}\label{app:proofs} \subsection{Compositionality} \lemcompo* \begin{proof}\label{proof:lemcompo} By induction on the structure of $T$. \begin{enumerate} \item If $T = \mathtt{end}$, then \begin{itemize} \item $T \subs{U}{\var{x}} = \mathtt{end}$ and $\Anyform{T \subs{U}{\var{x}}}{\bigtOp} = \mmbox{\mathcal{A}}{\falsek}$, and \item $\Anyform{T}{\bigtOp} = \mmbox{\mathcal{A}}{\falsek} = \mmbox{\mathcal{A}}{\falsek} \subs{ \Anyform{U}{\bigtOp} }{\var{x}}$. \end{itemize} \item If $T = \var{x}$, then \begin{itemize} \item $T \subs{U}{\var{x}} = U$, hence $\Anyform{T \subs{U}{\var{x}}}{\bigtOp} = \Anyform{U}{\bigtOp}$, and \item $\Anyform{T}{\bigtOp} = \var{x}$, hence $\Anyform{T}{\bigtOp} \subs{ \Anyform{U}{\bigtOp} }{\var{x}} = \Anyform{U}{\bigtOp}$. \end{itemize} \item If $T = \var{y} ( \neq \var{x})$, then \begin{itemize} \item $\Anyform{\var{y} \subs{U}{\var{x}}}{\bigtOp} = \Anyform{\var{y}}{\bigtOp} = \var{y}$, and \item $\Anyform{\var{y}}{\bigtOp} \subs{ \Anyform{U}{\bigtOp} }{\var{x}} = \var{y} \subs{ \Anyform{U}{\bigtOp} }{\var{x}} = \var{y}$.
\end{itemize} \item If $T = \choice$, then \begin{align*} \Anyform{T \subs{U}{\var{x}}}{\bigtOp} & = \Anyform{ \choiceSetNoIdx{i}{I}{ T_i \subs{U}{\var{x}} }}{\bigtOp} \\ & = \bigwedge_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\Anyform{T_i \subs{U}{\var{x}} }{\bigtOp}} \\ \text{\tiny\it (I.H.)} \quad & = \bigwedge_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{ \left(\Anyform{T_i }{\bigtOp} \subs{ \Anyform{U}{\bigtOp}}{\var{x}} \right)} \\ & = \left( \bigwedge_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\Anyform{T_i }{\bigtOp}} \right) \subs{ \Anyform{U}{\bigtOp}}{\var{x}} \\ &= \Anyform{T}{\bigtOp} \subs{ \Anyform{U}{\bigtOp} }{\var{x}} \end{align*} \item If $T = \cochoice$, then \begin{align*} \Anyform{T \subs{U}{\var{x}}}{\bigtOp} & = \Anyform{ \cochoiceSetNoIdx{i}{I}{ T_i \subs{U}{\var{x}} }}{\bigtOp} \\ & = \bigwedge_{i \in I} \mmbox{\opfun{\bigtOp}{a_i}}{\Anyform{ T_i \subs{U}{\var{x}} }{\bigtOp}} \, \land \, \\ & \qquad \bigvee_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\truek} \, \land \, \mmbox{ \compset{ \{ \opfun{\bigtOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \\ \text{\tiny\it (I.H.)} \quad & = \bigwedge_{i \in I} \mmbox{\opfun{\bigtOp}{a_i}}{ \left( \Anyform{ T_i }{\bigtOp} \subs{\Anyform{U}{\bigtOp}}{\var{x}} \right) } \, \land \, \\ & \qquad \bigvee_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\truek} \, \land \, \mmbox{ \compset{ \{ \opfun{\bigtOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \\ & = \left( \bigwedge_{i \in I} \mmbox{\opfun{\bigtOp}{a_i}}{ \Anyform{ T_i }{\bigtOp} } \, \land \, \right. \\ & \qquad \left. 
\bigvee_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\truek} \, \land \, \mmbox{ \compset{ \{ \opfun{\bigtOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \right) \subs{\Anyform{U}{\bigtOp}}{\var{x}} \\ & = \Anyform{T}{\bigtOp} \subs{ \Anyform{U}{\bigtOp} }{\var{x}} \end{align*} \item If $ T = \rec{\var{y}} T'$ ($\var{x} \neq \var{y}$), we have \begin{align*} \Anyform{ \rec{\var{y}} T' \subs{U}{\var{x}}}{\bigtOp} & = \mmnu{\var{y}} \Anyform{T' \subs{U}{\var{x}} }{\bigtOp} \\ \text{\tiny\it (I.H.)} \quad & = \mmnu{\var{y}} \Anyform{T'}{\bigtOp} \subs{\Anyform{U}{\bigtOp}}{\var{x}} \\ & = \Anyform{T}{\bigtOp} \subs{ \Anyform{U}{\bigtOp} }{\var{x}} \end{align*} \end{enumerate} \end{proof} \subsection{Extensions and approximations} The proofs in this section follow closely the proof techniques in~\cite{steffen89}. \begin{mydef}[Extended subtyping] Let $T, U \in \mathcal{T}$, $\phi \in \mathcal{F}$, and $\vec{\var{x}} = (\var{x}_1 , \ldots , \var{x}_n)$ be a vector containing all the free variables in $T$, $U$, or $\phi$. We define the \emph{extended subtyping} $\subtype_e$ and the \emph{extended satisfaction relation}, $\models_e$, by \begin{enumerate} \item $T \subtype_e U \iff \phill \vec{V} \in \mathcal{T}^n \, : \, T\subs{\vec{V}}{\vec{\var{x}}} \subtype U\subs{\vec{V}}{\vec{\var{x}}} $ \item $ T \models_e \phi \iff \phill \vec{V} \in \mathcal{T}^n \phill \vec{\psi} \in \mathcal{F}^n \, : \, \vec{V} \models \vec{\psi} \implies T\subs{\vec{V}}{\vec{\var{x}}} \models \phi \subs{\vec{\psi}}{\vec{\var{x}}} $ \end{enumerate} where $ \vec{V} \models \vec{\psi}$ is understood component wise. \end{mydef} \begin{mydef}[Subtyping approximations]\label{def:subtype-approx} Let $T, U \in \mathcal{T}$ and $\vec{\var{x}} = (\var{x}_1 , \ldots , \var{x}_n)$ be a vector containing all the free variables in $T$ or $U$. 
The extended $k$-limited subtyping $\subtype_{e,k}$ is defined inductively on $k$ as follows: $T \subtype_{e,0} U$ always holds; if $k \geq 1$, then $T \subtype_{e,k} U$ holds iff for all $\vec{V} \in \mathcal{T}^n$, $ T\subs{\vec{V}}{\vec{\var{x}}} \subtype_{e,k} U\subs{\vec{V}}{\vec{\var{x}}} $ can be derived from the following rules: \[ \begin{array}{c} \inference{S-out} { I \subseteq J & \phill i \in I \, : \, T_i \absub{\bigOp}_{e,k-1} U_i } { \choice \absub{\bigOp}_{e,k} \choiceSet{j}{J}{U} } \qquad \inference{S-in} { J \subseteq I & \phill j \in J \, : \, T_j \absub{\bigOp}_{e,k-1} U_j } { \cochoice \absub{\bigOp}_{e,k} \cochoiceSet{j}{J}{U} } \\[1pc] \inference{S-end} {} {\mathtt{end} \absub{\bigOp}_{e,k} \mathtt{end}} \end{array} \] Recall that we are assuming an equi-recursive view of types. \end{mydef} \begin{restatable}{lemma}{lemextsubtype} \label{lem:ext-subtype} $ T \absub{\bigOp}_e U \iff \phill k \, : \, T \absub{\bigOp}_{e,k} U$ \end{restatable} \begin{proof} The ($\Rightarrow$) direction is straightforward, while the converse follows from the fact that the session types we consider have only a finite number of states. \end{proof} \begin{mydef}[Semantics approximations] Let $T \in \mathcal{T}$ and $\vec{\var{x}} = (\var{x}_1 , \ldots , \var{x}_n)$ be a vector containing all the free variables in $T$. The extended $k$-limited satisfaction relation $\models_{e,k}$ is defined inductively on $k$ as follows: $T \models_{e,0} \phi$ always holds; if $k \geq 1$, then $\models_{e,k}$ is given by: \[ \begin{array}{l@{\quad \mathit{iff} \quad}l} \multicolumn{1}{l}{T \models_{e,k} \truek} \\ T \models_{e,k} \phi_1 \land \phi_2 & T \models_{e,k} \phi_1 \text{ and } T \models_{e,k} \phi_2 \\ T \models_{e,k} \phi_1 \lor \phi_2 & T \models_{e,k} \phi_1 \text{ or } T \models_{e,k} \phi_2 \\ T \models_{e,k} \mmbox{{\Op \! a}}{\phi} & \phill \vec{V} \in \mathcal{T}^n \; \phill T' \, : \, \text{if } T\subs{\vec{V}}{\vec{\var{x}}} \semarrow{{\Op \!
a}} T' \text{ then } T' \models_{e,k-1} \phi \\ T \models_{e,k} \mmdiamond{{\Op \! a}}{\phi} & \phill \vec{V} \in \mathcal{T}^n \; \exists T' \, : \, T\subs{\vec{V}}{\vec{\var{x}}} \semarrow{{\Op \! a}} T' \text{ and } T' \models_{e,k-1} \phi \\ T \models_{e,k} \mmnu{\var{x}} \phi & \phill n \, : \, T \models_{e,k} \approxi{ \mmnu{\var{x}} \phi }{n} \ensuremath{$\qedhere$} \end{array} \] \end{mydef} \begin{restatable}{lemma}{lemextmodels} \label{lem:ext-models} $ T \models_e \phi \iff \phill k \geq 0 \, : \, T \models_{e,k} \phi$ \end{restatable} \begin{proof} The $(\Rightarrow)$ direction is straightforward, while the $(\Leftarrow)$ direction follows from the fact that a session type induces a finite LTS. \end{proof} \begin{restatable}[Fixpoint properties]{lemma}{lemfixpointprop} \label{lem:fix-point-prop} Let $T \in \mathcal{T}$ and $\phi \in \mathcal{F}$; then we have: \begin{enumerate} \item \label{enum:fix-mu} $T \models_{e,k} \mmnu{\var{x}} \phi \iff T \models_{e,k} \phi \subs{\mmnu{\var{x}} \phi}{\var{x}}$ \item \label{enum:fix-rec} $\rec{\var{x}} T \models_{e,k} \phi \iff T\subs{\rec{\var{x}}T}{\var{x}} \models_{e,k} \phi$ \item \label{enum:fix-tr} $ \rec{\var{x}} T \; \absub{\bigOp}_{e,k} \; T \subs{\rec{\var{x}} T}{\var{x}} \; \absub{\bigOp}_{e,k} \; \rec{\var{x}} T $ \end{enumerate} \end{restatable} \begin{proof} The first property is a direct consequence of the definition of $\models_{e,k}$, while the last two properties follow from the equi-recursive view of types. \end{proof} \subsection{Main results} \thmmaintheorem* \begin{proof}\label{proof:thmmaintheorem} Direct consequence of Lemma~\ref{lem:main-lemma}.
\end{proof} \begin{restatable}[Main lemma]{lemma}{lemmainlemma} \label{lem:main-lemma} $\phill T, U \in \mathcal{T} \, : \, T \absub{\bigOp}_{e} U \iff U \models_e \Anyform{T}{\bigtOp}$ \end{restatable} \begin{proof} According to Lemmas~\ref{lem:ext-subtype} and~\ref{lem:ext-models}, it is enough to show that \begin{equation}\label{eq:main-equi} \phill k \geq 0 \, : \, \phill U, T \in \mathcal{T} \, : \, T \absub{\bigOp}_{e,k} U \; \iff \; U \models_{e,k} \Anyform{T}{\bigtOp} \end{equation} We show this by induction on $k$. If $k = 0$, the result holds trivially; let us show that it also holds for $k \geq 1$. We distinguish five cases according to the structure of $T$. \begin{enumerate} \item If $T = \var{x}$, then we must have $U = \var{x}$, by definition of $\absub{\bigOp}_e$ and $\models_e$. \item If $T = \rec{\var{x}} T'$, then by Lemma~\ref{lem:fix-point-prop}, we have \begin{enumerate} \item $ U \models_{e,k} \Anyform{T}{\bigtOp} \iff U \models_{e,k} \Anyform{T'}{\bigtOp} \subs{\Anyform{T}{\bigtOp}}{\var{x}} $ \item $ T \; \absub{\bigtOp}_{e,k} \; T' \subs{T}{\var{x}} \; \absub{\bigtOp}_{e,k} \; T $ \end{enumerate} Applying Lemma~\ref{lem:compo}, it is enough to show that: \[ \phill T, U \in \mathcal{T} \, : \, T' \subs{\rec{\var{x}} T'}{\var{x}} \absub{\bigtOp}_{e,k} U \; \iff \; U \models_{e,k} \Anyform{T' \subs{\rec{\var{x}} T'}{\var{x}}}{\bigtOp} \] Hence, since we have assumed that the types are guarded, we only have to deal with the cases where $T = \choice$, $T = \cochoice$, and $T = \mathtt{end}$. On the other hand, considering both sides of the equivalence~\eqref{eq:main-equi}, we notice that $U$ cannot be a variable.
Thus, let us assume that $U = \rec{\var{x}} U'$; by Lemma~\ref{lem:fix-point-prop}, we have \begin{enumerate} \item $ U \models_{e,k} \Anyform{T}{\bigtOp} \iff U' \subs{U}{\var{x}} \models_{e,k} \Anyform{T}{\bigtOp} $ \item $ U \; \absub{\bigtOp}_{e,k} \; U' \subs{U}{\var{x}} \; \absub{\bigtOp}_{e,k} \; U $ \end{enumerate} Hence, applying Lemma~\ref{lem:compo} again, this case reduces to the cases where $U$ is of the form: $\choiceSet{j}{J}{U}$, $\cochoiceSet{j}{J}{U}$, or $\mathtt{end}$. \item $T = \mathtt{end}$ \begin{itemize} \item $(\Rightarrow)$ Assume $\mathtt{end} = T \absub{\bigtOp}_{e,k} U$, then by Definition~\ref{def:subtype-approx}, we have $U = \mathtt{end}$. By Definition~\ref{def:char-formula}, we have $\Anyform{\mathtt{end}}{\bigtOp} = \mmbox{\mathcal{A}}{\falsek}$, and we have $\mathtt{end} \models_{e,k} \mmbox{\mathcal{A}}{\falsek}$ since $ U = \mathtt{end} \!\!\nrightarrow$. \item $(\Leftarrow)$ Assume $U \models_{e,k} \Anyform{\mathtt{end}}{\bigtOp}$. By Definition~\ref{def:char-formula}, we have $U \models_{e,k} \mmbox{\mathcal{A}}{\falsek}$, which holds iff $U \!\!\nrightarrow$, hence we must have $U = \mathtt{end}$. Finally, by Definition~\ref{def:subtype-approx}, we have $\mathtt{end} \absub{\bigOp}_{e,k} \mathtt{end}$. \end{itemize} \item $T = \choice$ \begin{itemize} \item $(\Rightarrow)$ Assume $\choice \absub{\bigtOp}_{e,k} U$. By Definition~\ref{def:subtype-approx}, $ U = \choiceSet{j}{J}{U}$ with $I \subseteq J$ (note that $\emptyset \neq I$ by assumption) and $\phill i \in I \, : \, T_i \absub{\bigtOp}_{e,k-1} U_i$. Hence, $\phill i \in I \, : \, U \semarrow{\opfun{\bigtOp}{a_i}} U_i$, and by induction hypothesis, we have $U_i \models_{e,k-1} \Anyform{T_i}{\bigtOp}$, for all $i \in I$. By Definition~\ref{def:char-formula}, we have $ \Anyform{T}{\bigtOp} = \bigwedge_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\Anyform{T_i}{\bigtOp}} $.
Thus we have to show that for all $i \in I$, $U \semarrow{\opfun{\bigtOp}{a_i}} U_i$ and $U_i \models_{e,k-1} \Anyform{T_i}{\bigtOp}$, which follows from the above. \item $(\Leftarrow)$ Assume $U \models_{e,k} \Anyform{\choice}{\bigtOp}$. From Definition~\ref{def:char-formula}, we have \[ \Anyform{T}{\bigtOp} = \bigwedge_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\Anyform{T_i}{\bigtOp}} \] Hence, $\phill i \in I \, : \, U \semarrow{\opfun{\bigtOp}{a_i}} U_i$, and $U_i \models_{e,k-1} \Anyform{T_i}{\bigtOp}$, for all $i \in I$. Hence, we must have $U = \choiceSet{j}{J}{U}$ with $I \subseteq J$ and by induction hypothesis, this implies that $T_i \absub{\bigtOp}_{e,k-1} U_i$ for all $i \in I$. \end{itemize} \item $T = \cochoice$ \begin{itemize} \item $(\Rightarrow)$ Assume $\cochoice \absub{\bigtOp}_{e,k} U$. By Definition~\ref{def:subtype-approx}, $ U = \cochoiceSet{j}{J}{U}$, with $J \subseteq I$ and $\phill j \in J \, : \, T_j \absub{\bigtOp}_{e,k-1} U_j$. Hence, by induction hypothesis, we have $U_j \models_{e,k-1} \Anyform{T_j}{\bigtOp}$, for all $j \in J$. By Definition~\ref{def:char-formula}, we have \begin{equation}\label{eq:formula-cochoice-1} {\Anyform{T}{\bigtOp}} \; = \; \bigwedge_{i \in I} \mmbox{\opfun{\bigtOp}{a_i}}{\Anyform{T_i}{\bigtOp}} \; \land \; \bigvee_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\truek} \; \land \; \mmbox{ \compset{ \{ \opfun{\bigtOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \end{equation} We must show that $U \models_{e,k} {\Anyform{T}{\bigtOp}} $. Since $J \subseteq I$, we have that $\phill i \in I \, : \, T \semarrow{\opfun{\bigtOp}{a_i}} T_i \implies U \semarrow{\opfun{\bigtOp}{a_i}} U_i$, hence the first conjunct of~\eqref{eq:formula-cochoice-1} holds (using the induction hypothesis, cf.\ above). The second conjunct of~\eqref{eq:formula-cochoice-1} holds by the assumption that $ \emptyset \neq J$.
Finally, the third conjunct of~\eqref{eq:formula-cochoice-1} is false only if $U \semarrow{\opfun{\bigtOp}{a_n}}$ with $n \notin I$, which contradicts $J \subseteq I$. \item $(\Leftarrow)$ Assume $U \models_{e,k} \Anyform{\cochoice}{\bigtOp}$. From Definition~\ref{def:char-formula}, we have \begin{equation}\label{eq:formula-cochoice-2} {\Anyform{T}{\bigtOp}} \; = \; \bigwedge_{i \in I} \mmbox{\opfun{\bigtOp}{a_i}}{\Anyform{T_i}{\bigtOp}} \; \land \; \bigvee_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\truek} \; \land \; \mmbox{ \compset{ \{ \opfun{\bigtOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \end{equation} Hence, we must have $U = \cochoiceSet{j}{J}{U}$. It follows straightforwardly that $\emptyset \neq J \subseteq I$. Finally, the fact that for all $j \in J \, : \, T_j \absub{\bigtOp}_{e,k-1} U_j$, follows from the induction hypothesis. \qedhere \end{itemize} \end{enumerate} \end{proof} \subsection{Duality and safety in session types}\label{app:safety} \thmcharformduality* \begin{proof} \label{proof:thmcharformduality} By straightforward induction on the structure of $T$. \begin{enumerate} \item The result follows trivially if $T = \mathtt{end}$ or $T = \var{x}$. \item If $T = \rec{\var{x}} T'$, then we have $\dual{\Anyform{\rec{\var{x}} T'}{\bigtOp}} = \mmnu{\var{x}} \dual{ \Anyform{T'}{\bigtOp}}$, and $ \Anyform{\dual{T}}{\dual{\bigtOp}} = \mmnu{\var{x}} \Anyform{\dual{T'}}{\dual{\bigtOp}} $. The result follows by induction hypothesis. 
\item If $T = \choice$, then we have \begin{align*} \dual{\Anyform{T}{\bigtOp}} & = \dual { \bigwedge_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\Anyform{T_i}{\bigtOp}} } \\ & = \bigwedge_{i \in I} \mmdiamond{\opfun{\bigcoOp}{a_i}}{ \dual { \Anyform{T_i}{\bigtOp} } } \\ \text{\tiny\it (I.H.)} \quad & = \bigwedge_{i \in I} \mmdiamond{\opfun{\bigcoOp}{a_i}}{ \Anyform{\dual{T_i}}{\dual{\bigtOp}} } = \Anyform{\dual{T}}{\dual{\bigtOp}} \end{align*} \item If $T = \cochoice$, then we have \begin{align*} \dual{\Anyform{T}{\bigtOp}} & = \dual{ \bigwedge_{i \in I} \mmbox{\opfun{\bigtOp}{a_i}}{ { \Anyform{T_i}{\bigtOp}} } \; \land \; \bigvee_{i \in I} \mmdiamond{\opfun{\bigtOp}{a_i}}{\truek} \; \land \; \mmbox{ \compset{ \{ \opfun{\bigtOp}{a_i} \, \mid \, i \in I\} }}{\falsek} } \\ & = \bigwedge_{i \in I} \mmbox{\opfun{\bigcoOp}{a_i}}{ \dual{ \Anyform{T_i}{\bigtOp}} } \; \land \; \bigvee_{i \in I} \mmdiamond{\opfun{\bigcoOp}{a_i}}{\truek} \; \land \; \mmbox{ \compset{ \{ \opfun{\bigcoOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \\ \text{\tiny\it (I.H.)} \quad & = \bigwedge_{i \in I} \mmbox{\opfun{\bigcoOp}{a_i}}{ \Anyform{ \dual{T_i}}{ \dual{\bigtOp}} } \; \land \; \bigvee_{i \in I} \mmdiamond{\opfun{\bigcoOp}{a_i}}{\truek} \; \land \; \mmbox{ \compset{ \{ \opfun{\bigcoOp}{a_i} \, \mid \, i \in I\} }}{\falsek} \\ & = \Anyform{\dual{T}}{\dual{\bigtOp}} \end{align*} \end{enumerate} \end{proof} \thmsafety* \begin{proof} $(\Longleftarrow)$ We prove that if $T \subtype \dual{U}$ then $T \spar U$ is safe by coinduction on the derivation of $T \subtype \dual{U}$ (recall that $\subtype$ stands for $\absub{\,\inchoicetop}$). \\[1mm] {\bf Case [S-end]} Obvious since $T =\dual{U}=\mathtt{end}$ and $T \spar U \not \semarrow{}$. \\[1mm] {\bf Case [S-$\bigtOp$]} Suppose $T = \inchoice$. Then $\dual{U}=\inchoiceSet{j}{J}{\dual{U}}$ such that $I \subseteq J$ and $T_i \subtype \dual{U}_i$ for all $i \in I$. 
For all $a_i$ such that $i \in I$, $T \semarrow{!a_i} T_i$ implies $U \semarrow{?a_i} U_i$. Hence by [S-COM], we have $T \spar U\semarrow{} T_i \spar U_i$. Then by coinduction hypothesis, $T_i \spar U_i$ is safe. \\[1mm] {\bf Case [S-$\dual{\bigtOp}$]} Similar to the above case.\\[1mm] $(\Longrightarrow)$ We prove $(\neg (T \subtype \dual{U}) \land \neg (\dual{T} \subtype U))$ implies $T \spar U$ has an error. Since the error rule coincides with the negation rules of subtyping in~\cite[Table 7]{CDY2014}, we conclude this direction. \end{proof} \newcommand{\mathtt{bot}}{\mathtt{bot}} \newcommand{\mathtt{top}}{\mathtt{top}} \newcommand{\tsemarrow}[1]{\xhookrightarrow{\, #1 \,}} \newcommand{\delta}{\delta} \newcommand{\lamsub}[2]{{{\Lambda}}(#1,#2)} \newcommand{\mathcal{T}_R}{\mathcal{T}_R} \newcommand{\trec}[1]{\mathtt{rec}\, {#1} . } \newcommand{\tmmnu}[1]{\nu {#1} .\,} \section{Appendix: Recursive types for the $\lambda$-calculus} \label{sec:lambda} \subsection{Recursive types and subtyping} We consider recursive types for the $\lambda$-calculus below: \[ t \coloneqq \mathtt{top} \;\mid\; \mathtt{bot} \;\mid\; t_0 \rightarrow t_1 \;\mid\; \trec{v} t \;\mid\; v \] Let $\mathcal{T}_R$ be the set of all closed recursive types. A type $t$ induces an LTS according to the rules below: \[ \begin{array}{c@{\qquad\qquad}c} \inference{top} {} {\mathtt{top} \tsemarrow{\mathtt{top}} \mathtt{top} } & \inference{bot} {} {\mathtt{bot} \tsemarrow{\mathtt{bot}} \mathtt{top}} \\[0.5cm] \inference{arrow} {i \in \{0,1\}} {t_0 \rightarrow t_1 \; \tsemarrow{i} \; t_i } & \inference {rec} {t \subs{\trec{v} t}{v} \tsemarrow{a} t'} {\trec{v} t \tsemarrow{a} t'} \end{array} \] where we let $a$ range over $\{ 0, 1, \mathtt{bot}, \mathtt{top} \}$. 
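The transition rules above admit a direct implementation; the following is a minimal sketch of the induced LTS over a tuple-based AST of our own choosing, unfolding recursive types on the fly:

```python
# Sketch of the LTS induced by the rules [top], [bot], [arrow], [rec] above;
# the AST encoding is ours, and types are assumed closed.

def subst(t, v, u):
    """Substitution t[u/v], sufficient for the closed types built here."""
    tag = t[0]
    if tag == "var":
        return u if t[1] == v else t
    if tag == "arrow":
        return ("arrow", subst(t[1], v, u), subst(t[2], v, u))
    if tag == "rec":
        return t if t[1] == v else ("rec", t[1], subst(t[2], v, u))
    return t  # top, bot

def step(t, a):
    """Return the a-successor of t, or None if no rule applies."""
    tag = t[0]
    if tag == "top":                      # [top]: top --top--> top
        return ("top",) if a == "top" else None
    if tag == "bot":                      # [bot]: bot --bot--> top
        return ("top",) if a == "bot" else None
    if tag == "arrow":                    # [arrow]: t0 -> t1 --i--> ti
        return t[1 + a] if a in (0, 1) else None
    if tag == "rec":                      # [rec]: unfold, then step
        return step(subst(t[2], t[1], t), a)

stream = ("rec", "v", ("arrow", ("bot",), ("var", "v")))
assert step(stream, 0) == ("bot",)        # the domain component
assert step(stream, 1) == stream          # the codomain loops back to the type
```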
\begin{definition}[Subtyping for recursive types] $\subtype \subseteq \mathcal{T}_R \times \mathcal{T}_R$ is the \emph{largest} relation that contains the rules: \[ \coinference{S-bot} {t \in \mathcal{T}_R} {\mathtt{bot} \subtype t} \qquad \coinference{S-top} {t \in \mathcal{T}_R} {t \subtype \mathtt{top}} \qquad \coinference{S-arrow} { t'_0 \subtype t_0 & t_1 \subtype t'_1 } {t_0 \rightarrow t_1 \subtype t'_0 \rightarrow t'_1} \] Recall that we are assuming an equi-recursive view of types. The double line in the rules indicates that the rules should be interpreted \emph{coinductively}. \end{definition} \subsection{Characteristic formulae for recursive types} We assume the same fragment of the modal $\mu$-calculus as in Section~\ref{sub:mucal} but for ($i$) omitting the direction $\Op$ on labels, i.e., we consider modalities: $\mmbox{a}{\phi}$ and $\mmdiamond{a}{\phi}$; and ($ii$) using $v$ to range over recursion variables. Let $\delta \in \{ \mathtt{top} , \mathtt{bot} \}$, and $\dual{\mathtt{bot}} = \mathtt{top}$, $\dual{\mathtt{top}} = \mathtt{bot}$. \[ \lamsub{t}{\delta} \defi \begin{cases} \mmdiamond{\delta}\truek & \text{if } t = \delta \\ \truek & \text{if } t = \dual{\delta} \\ \mmdiamond{0} \, \lamsub{t_0}{ \dual{\delta} } \; \land \; \mmdiamond{1} \, \lamsub{t_1}{\delta} & \text{if } t = t_0 \rightarrow t_1 \\ \tmmnu{v} \lamsub{t'}{\delta} & \text{if } t = \trec{v} t' \\ v & \text{if } t = v \end{cases} \] \begin{theorem} The following holds: \begin{itemize} \item $t \subtype t' \iff t' \models \lamsub{t}{\, \mathtt{top}}$ \item $t \subtype t' \iff t \; \models \lamsub{t'}{\, \mathtt{bot}}$ \end{itemize} \end{theorem} \begin{proof} We show only the $(\Leftarrow)$ direction here. \begin{enumerate} \item \label{en:lambda-top} We show $t \subtype t' \Leftarrow t' \models \lamsub{t}{\mathtt{top}}$ by induction on $t$. \begin{itemize} \item If $t = \mathtt{top}$, then $\lamsub{t}{\mathtt{top}} = \mmdiamond{\mathtt{top}}\truek$, hence $t' = \mathtt{top}$. 
\item If $t = \mathtt{bot}$, then $\lamsub{t}{\mathtt{top}} = \truek$ hence $t'$ can be any type, as expected. \item If $t = t_0 \rightarrow t_1$, then \[ \lamsub{t}{\mathtt{top}} = \mmdiamond{0} \, \lamsub{t_0}{ \mathtt{bot} } \; \land \; \mmdiamond{1} \, \lamsub{t_1}{\mathtt{top}} \] hence we must have $t' = t'_0 \rightarrow t'_1$ with $t'_0 \models \lamsub{t_0}{ \mathtt{bot} }$ (hence $t'_0 \subtype t_0$, by IH, see below) and $t'_1 \models \lamsub{t_1}{\mathtt{top}}$ (hence $t_1 \subtype t'_1$ by IH). \end{itemize} \item \label{en:lambda-bot} We show $t \subtype t' \Leftarrow t \models \lamsub{t'}{\mathtt{bot}}$ by induction on $t'$. \begin{itemize} \item If $t' = \mathtt{bot}$, then $\lamsub{t'}{\mathtt{bot}} = \mmdiamond{\mathtt{bot}}\truek$ and $t = \mathtt{bot}$. \item if $t' = \mathtt{top}$, then $\lamsub{t'}{\mathtt{bot}} = \truek$ and $t$ can be any type, as expected. \item If $t' = t'_0 \rightarrow t'_1$, then \[ \lamsub{t'}{\mathtt{bot}} = \mmdiamond{0} \, \lamsub{t'_0}{ \mathtt{top} } \; \land \; \mmdiamond{1} \, \lamsub{t'_1}{\mathtt{bot}} \] hence we must have $t = t_0 \rightarrow t_1$ with $t_0 \models \lamsub{t'_0}{ \mathtt{top} }$ (hence $t'_0 \subtype t_0$, by IH, see above) and $t_1 \models \lamsub{t'_1}{\mathtt{bot}}$ (hence $t_1 \subtype t'_1$ by IH). \end{itemize} \end{enumerate} The other direction is similar to the above, while the recursive step is similar to the proof of Theorem~\ref{thm:main-theorem}. \end{proof} \end{document}
\begin{document} \title{Twisted Alexander polynomials of 2-bridge knots associated to metabelian representations} \pagestyle{myheadings} \markboth{ Twisted Alexander polynomials associated to metabelian representations } {M. Hirasawa \& K. Murasugi } \begin{abstract} Suppose the knot group $G(K)$ of a knot $K$ has a non-abelian representation $\rho$ on $A_4 \subset GL(4,\ZZ)$. We conjecture that the twisted Alexander polynomial of $K$ associated to $\rho$ is of the form: $\displaystyle{\left[\dfrac{\Delta_K(t)}{1-t}\right]\varphi(t^3)}$, where $\varphi(t^3)$ is an integer polynomial in $t^3$. We prove the conjecture for $2$-bridge knots $K$ whose group $G(K)$ can be mapped onto a free product $\ZZ/2*\ZZ/3$. Later, we discuss more general metabelian representations of the knot groups and propose a similar conjecture on the form of the twisted Alexander polynomials. \end{abstract} \keywords{ Alexander polynomial, $2$-bridge knot, knot group, metabelian representation, twisted Alexander polynomial, continued fraction.} \ccode{Mathematics Subject Classification 2000: 57M25, 57M27} \section{Introduction} In this paper, which is a sequel to \cite{HM1}, \cite{HM2}, we consider non-abelian representations of the knot group $G(K)$ of a knot $K$ on metabelian groups. First we study a representation of $2$-bridge knot groups on the alternating group $A_4$ of order $12$, the simplest metabelian group, which we call an $A_4$-representation. It is shown in \cite{R} and \cite{Ha} that $G(K)$ has an $A_4$-representation if and only if $\Delta_K(\omega) \Delta_K (\omega^2) \equiv 0$ (mod $2$), where $\Delta_K(t)$ is the Alexander polynomial of $K$ and $\omega$ is a primitive cube root~of~$1$. For a $2$-bridge knot $K(r)$, Heusner gives a nice criterion for $G(K(r))$ to have an $A_4$-representation in terms of the degree of $\Delta_K(t)$ \cite{He}. In the last section, we briefly discuss the general metabelian representation of the knot groups.
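As a quick illustration of the criterion above (a sanity check of ours, not part of the paper's argument): for the trefoil $K(1/3)$ one has $\Delta_K(t)=t^2-t+1$, and computing in $\ZZ[\omega]$, with elements written $a+b\omega$ and $\omega^2=-1-\omega$, gives $\Delta_K(\omega)\Delta_K(\omega^2)=4\equiv 0$ (mod $2$):

```python
# Sketch: verify the criterion for the trefoil K(1/3), Delta(t) = t^2 - t + 1,
# working in Z[w]/(w^2 + w + 1) with pairs (a, b) standing for a + b*w.

def mul(p, q):
    a, b = p
    c, d = q
    # (a + b*w)(c + d*w) = ac + (ad + bc)w + bd*w^2, with w^2 = -1 - w
    return (a * c - b * d, a * d + b * c - b * d)

def poly_eval(coeffs, x):
    """Evaluate sum_i coeffs[i] * x^i with integer coeffs, x in Z[w]."""
    acc, power = (0, 0), (1, 0)
    for c in coeffs:
        acc = (acc[0] + c * power[0], acc[1] + c * power[1])
        power = mul(power, x)
    return acc

w, w2 = (0, 1), (-1, -1)                 # w and w^2 = -1 - w
delta = [1, -1, 1]                       # Delta(t) = 1 - t + t^2
prod = mul(poly_eval(delta, w), poly_eval(delta, w2))
assert prod == (4, 0)                    # Delta(w)*Delta(w^2) = 4 = 0 (mod 2)
```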
Now, let $\rho: G(K) \rightarrow A_4$ be an $A_4$-representation of $G(K)$. We may assume that one meridian generator maps to $(123)$. Let $K(r)$, $r \in \QQ$, $0 < r < 1$, be a $2$-bridge knot with a Wirtinger presentation \begin{equation} G(K(r))=\langle x, y|R\rangle, R=WxW^{-1}y^{-1}. \end{equation} Suppose that $G(K(r))$ has an $A_4$-representation $\rho$ such that \begin{equation} \rho(x)=(123)\ {\rm and}\ \rho(y)=(142). \end{equation} Let $\xi:A_4 \rightarrow GL(4,\ZZ)$ be the permutation representation of $A_4$. Then $\xi$ is equivalent to $\widetilde{\xi}$: \begin{equation} \widetilde{\xi}(123)= \left(\begin{array}{rrrr} 1&-1&0&0\\ 0&-1&1&0\\ 0&-1&0&0\\ 0&-1&0&1 \end{array} \right) \ {\rm and}\ \widetilde{\xi}(142)= \left( \begin{array}{rrrr} 1&0&0&-1\\ 0&0&0&-1\\ 0&0&1&-1\\ 0&1&0&-1 \end{array} \right) \end{equation} and hence, $\xi_0: A_4 \longrightarrow GL(3,\ZZ)$ given by \begin{equation} \xi_0(123)=\left( \begin{array}{rrr} -1&1&0\\ -1&0&0\\ -1&0&1 \end{array}\right) \ {\rm and}\ \xi_0(142)= \left( \begin{array}{rrr} 0&0&-1\\ 0&1&-1\\ 1&0&-1 \end{array}\right) \end{equation} defines an irreducible representation of $A_4$. Combining them, we have an $A_4$-representation of $G(K(r))$, $\rho_0=\rho \circ \xi_0 : G(K(r)) \longrightarrow GL(3, \ZZ)$, and the twisted Alexander polynomial $\widetilde{\Delta}_{\rho_0, K(r)} (t)$ of $K(r)$ associated to $\rho_0$ is defined. From the forms of $\widetilde{\xi}$ and $\xi_0$, we see immediately that the twisted Alexander polynomial $\widetilde{\Delta}_{\rho,K}(t)$ is the product of $\widetilde{\Delta}_{\iota,K}(t)$ and $\widetilde{\Delta}_{\rho_0, K}(t)$, where $\iota$ is the trivial representation.
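These matrices can be checked mechanically; the following sketch (our own verification, not part of the paper) confirms that $\xi_0(123)$ and $\xi_0(142)$ displayed above have order $3$, that their product has order $3$, and that they satisfy the braid relation $xyx=yxy$, the relation underlying the relator $R_0=xyxy^{-1}x^{-1}y^{-1}$ of the trefoil group $G(K(1/3))$ used later:

```python
# Sketch: verify X^3 = Y^3 = (XY)^3 = 1 and the braid relation XYX = YXY
# for X = xi_0(123) and Y = xi_0(142), with plain 3x3 integer matrices.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def power(A, n):
    R = [[int(i == j) for j in range(3)] for i in range(3)]
    for _ in range(n):
        R = matmul(R, A)
    return R

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
X = [[-1, 1, 0], [-1, 0, 0], [-1, 0, 1]]   # xi_0(123)
Y = [[0, 0, -1], [0, 1, -1], [1, 0, -1]]   # xi_0(142)

assert power(X, 3) == I3 and power(Y, 3) == I3
assert power(matmul(X, Y), 3) == I3
assert matmul(matmul(X, Y), X) == matmul(matmul(Y, X), Y)  # xyx = yxy
```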
Since $\widetilde{\Delta}_{\iota,K}(t)=\Delta_{K}(t)/(1-t)$, the conjecture stated in the abstract is rephrased as follows: \noindent {\bf Conjecture A.} {\it For a 2-bridge knot $K(r)$ with an $A_4$-representation, $\widetilde{\Delta}_{\rho_0, K(r)} (t)$ is an integer polynomial in $t^3$ (up to $\pm t^k$ ), namely, \begin{equation} \widetilde{\Delta}_{\rho_0, K(r)} (t) = \pm t^k \varphi(t^3),\ {\it for\ some\ integer\ polynomial}\ \varphi(t). \end{equation} } We prove this conjecture for $K(r)$ in $H(3)$, where $H(3)$ is the set of $K(r)$ such that $G(K(r))$ maps onto a non-trivial free product $\ZZ/2 * \ZZ/3$. The proof will be given in Section 2 through Section 5. Since our proof is similar to those given in the previous two papers \cite{HM1} and \cite{HM2}, we will skip some details in our argument. In the last Section 6, we discuss general metabelian representations and state a similar conjecture on the form of the twisted Alexander polynomials. We give several examples that justify our conjecture. For convenience, we draw a diagram below consisting of various groups and homomorphisms. \begin{center} $ \begin{array}{crccc} & & & & GL(4,\ZZ)\\ & & &\nearrow \mbox{\large $\xi$} &\\ G(K)& \underset{\mbox{\large $\rho$}}{\longrightarrow}& A_4&\underset{\mbox{\large $\xi_0$}}{\longrightarrow}&GL(3,\ZZ)\\ & &\downarrow& &\\ & & \ZZ A_4&\underset{\mbox{\large $\widetilde{\xi}_0$}}{\longrightarrow}&M_{3,3}(\ZZ)\\ & & & & \\ \ZZ F(x,y)&\underset{\mbox{\large $\widetilde{\rho}$}}{\longrightarrow}& \widetilde{A}(x,y)& &\\ \downarrow\nu^*& & & &\\ \mbox{$[\ZZ F(x,y)][t^{\pm 1}]$}& \underset{\mbox{\large $\rho^*$}}{\longrightarrow}& \widetilde{A}(x,y)[t^{\pm 1}]& \underset{\mbox{\large $\eta$}}{\longrightarrow}& GL(3,\ZZ [t^{\pm 1}])\\ \end{array} $ \end{center} Here, (1) $\rho_0=\rho \circ \xi_0$, (2) $\nu^*(\prod_{i=1}^m x^{k_i} y^{\ell_i})= (\prod_{i=1}^m x^{k_i} y^{\ell_i}) t^q$, $q=\sum_{i=1}^m k_i + \sum_{i=1}^m \ell_i$. 
\section{$\ZZ$-algebra $\widetilde{A}(x,y)$} Denote $x=(123)$ and $y=(142)$. Let $X= \xi_0 (x)$ and $Y= \xi_0 (y)$. We define an algebra $\widetilde{A}(x,y)$ using the group algebra $\ZZ A_4$ as follows. Let $\widetilde{\xi}_0: \ZZ A_4 \longrightarrow M_{3,3}(\ZZ)$ be a linear extension of $\xi_0$ and $S =(\widetilde{\xi}_0)^{-1}(0)$ be the kernel of $\widetilde{\xi}_0$. Then $\ZZ A_4/S$ is a non-commutative $\ZZ$-algebra, denoted by $\widetilde{A}(x,y)$. \begin{prop}\label{prop:2.1} In $\widetilde{A}(x,y)$, the following formulas hold. \begin{align} &(1)\ x^3 = y^3 =(xy)^3 = 1,\nonumber\\ &(2)\ xyx = yxy,\nonumber\\ &(3)\ (xy^{-1})^2 = 1,\nonumber\\ &(4)\ xyx = x^{-1} y^{-1} x^{-1},\nonumber\\ &(5)\ (x+y)^2 = (x^{-1} + y^{-1})^2 = 0,\nonumber\\ &(6)\ xyx(x+y) = (x+y)xyx = -(x+y),\nonumber\\ &(7)\ xyx(x^{-1} + y^{-1}) = (x^{-1} + y^{-1})xyx = - (x^{-1} + y^{-1}),\nonumber\\ &(8)\ (x +y)(x^{-1} + y^{-1}) + (x^{-1} + y^{-1})(x+y) = 2(1-xyx),\nonumber\\ &(9)\ xy + yx = -(x^{-1} + y^{-1})\ {\rm and}\ x^{-1} y^{-1} + y^{-1} x^{-1} =- (x+y). \end{align} \end{prop} Only a few formulas below are needed to prove Proposition \ref{prop:2.1}, and details are omitted. \qed \arraycolsep=0.14em \begin{equation} X + Y = \left[\begin{array}{rrr} -1&1&-1\\ -1&1&-1\\ 0&0&0 \end{array} \right], X^{-1} + Y^{-1}= \left[\begin{array}{rrr} -1&-1&1\\ 0&0&0\\ -1&-1&1 \end{array} \right]\ {\rm and}\ XYX=\left[ \begin{array}{rrr} -1&0&0\\ -1&0&1\\ -1&1&0 \end{array} \right] \end{equation} \arraycolsep=0.5em Now, let $L(t)$ be the set of Laurent polynomials in $t$ with coefficients in $\widetilde{A}(x,y)$. Write $f(t)= {\displaystyle \sum_{- \infty < j < \infty}} d_j t^j,\ d_j \in \widetilde{A} (x,y)$. 
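Since $\widetilde{A}(x,y) = \ZZ A_4 / S$ embeds in $M_{3,3}(\ZZ)$ via $\widetilde{\xi}_0$, the identities of Proposition~\ref{prop:2.1} are equivalent to identities among the matrices above; the following is a quick mechanical check of (5), (6) and (9) (our own script, writing $x^{-1}=X^2$ and $y^{-1}=Y^2$):

```python
# Sketch: check formulas (5), (6) and (9) of Proposition 2.1 on the matrix
# side, with X = xi_0(123), Y = xi_0(142), X^{-1} = X^2 and Y^{-1} = Y^2.

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]

def neg(A):
    return [[-v for v in row] for row in A]

Z = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
X = [[-1, 1, 0], [-1, 0, 0], [-1, 0, 1]]
Y = [[0, 0, -1], [0, 1, -1], [1, 0, -1]]
Xi, Yi = mm(X, X), mm(Y, Y)            # x^{-1}, y^{-1}
S, Si = add(X, Y), add(Xi, Yi)         # x + y and x^{-1} + y^{-1}
XYX = mm(mm(X, Y), X)

assert mm(S, S) == Z and mm(Si, Si) == Z                # (5)
assert mm(XYX, S) == neg(S) and mm(S, XYX) == neg(S)    # (6)
assert add(mm(X, Y), mm(Y, X)) == neg(Si)               # (9), first half
assert add(mm(Xi, Yi), mm(Yi, Xi)) == neg(S)            # (9), second half
```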
\begin{dfn} A polynomial $f(t)$ in $L(t)$ is called {\it twin} if $f(t)$ satisfies the following conditions: \begin{align} &(1)\ {\rm If}\ j \equiv 0\ ({\rm mod}\ 3),\ {\rm then}\ d_j = c_j + c_j^{\prime} xyx,\ {\rm where}\ c_j , c_j^{\prime} \in \ZZ.\nonumber\\ &(2)\ {\rm If}\ j \equiv 1\ ({\rm mod}\ 3),\ {\rm then}\ d_j = a_j (x+y),\ a_j \in \ZZ.\nonumber\\ &(3)\ {\rm If}\ j \equiv 2\ ({\rm mod}\ 3),\ {\rm then}\ d_j = b_j (x^{-1} + y^{-1}),\ b_j\in \ZZ. \ {\rm Furthermore},\nonumber\\ &(4)\ {\rm for\ any}\ j, a_{3j+1} = b_{3j+2}. \end{align} The set of twin polynomials is denoted by $T(t)$. \end{dfn} \begin{prop}\label{prop:2.3} $T(t)$ is a non-commutative subring of $L(t)$. \end{prop} \noindent {\it Proof.} Let $f(t)$ and $g(t)$ be twin polynomials. Obviously, $f(t) \pm g(t)$ is twin. To show $f(t)g(t)$ is twin, it is enough to show that for any $m \in \ZZ$, \begin{align} &(1)\ f(t) t^{3m} \in T(t)\ {\rm and}\ f(t)xyx t^{3m} \in T(t).\nonumber\\ &(2)\ \left\{(x+y) t^{3m+1} + (x^{-1} + y^{-1}) t^{3m+2} \right\}\left\{(x+y) t^{3\ell+1} + (x^{-1} + y^{-1}) t^{3\ell+2}\right\} \in T(t). \end{align} These formulas follow from (2.1) (5)-(8). \qed \section{Main Theorem.} Let $K(r)$ be a 2-bridge knot in $H(3)$. Then the continued fraction of $r = \beta / \alpha,\ \alpha \equiv \beta \equiv 1$ (mod $2$), is of the following form. (See \cite{GR}, \cite{ORS}) \begin{equation*} r =[3k_1,2m_1, 3k_2, 2m_2, \cdots, 2m_{q-1}, 3k_q]. \end{equation*} Consider a Wirtinger presentation of $G(K(r))$: \begin{equation*} G(K(r)) = \langle x,y | R \rangle, R = W x W^{-1} y^{-1}. \end{equation*} \noindent Since $K(r) \in H(3)$, $R$ is a product of conjugates of $R_0 =xyxy^{-1}x^{-1}y^{-1}$, namely \begin{equation} R = \prod_{j=1}^{m}u_j R_0^{\epsilon_j} u_j^{-1}, \end{equation} where $u_j$ are words in the free group $F(x,y)$ freely generated by $x$ and $y$, and $\epsilon_j = \pm 1$. 
Therefore, \begin{equation*} \frac{\partial R}{\partial x} =\sum_{j=1}^{m} \epsilon_j u_j \frac{\partial{R_0}}{\partial x}, \ {\rm where}\ \frac{\partial}{\partial x}\ {\rm denotes\ the\ free\ derivative}. \end{equation*} Let $\widetilde{\rho}: \ZZ F(x,y) \longrightarrow \widetilde{A} (x,y)$ be the algebra homomorphism defined by \begin{equation*} \widetilde{\rho} (x)=x\ {\rm and}\ \widetilde{\rho} (y) = y. \end{equation*} We write $\lambda (r) = \widetilde{\rho} (\sum_{j} \epsilon_j u_j) \in \widetilde{A} (x,y)$ and $\lambda^* (r) =(\nu^* \circ \rho^*)(\lambda (r)) \in \widetilde{A} (x,y)[t^{\pm 1}]$. Then the following is shown in \cite{HM1}: \begin{equation} \widetilde{\Delta}_{\rho_0,K(r)} (t) =\left\{\det \widetilde{\xi}_0(\lambda^*(r))\right\} \widetilde{\Delta}_{\rho_0,K(1/3)} (t). \end{equation} We note that $\widetilde{\Delta}_{\rho_0,K(1/3)} (t) = 1-t^3$. Now our main theorem is the following: \begin{thm}\label{thm:3.1} If $K(r) \in H(3)$, then $\widetilde{\Delta}_{\rho_0,K(r)} (t) = \varphi (t^{\pm 3})$, where $\varphi (t)$ is an integer polynomial. \end{thm} Since $\widetilde{\Delta}_{\rho_0,K(1/3)} (t)=1-t^3$, the theorem is a consequence of Proposition \ref{prop:3.2} below: \begin{prop}\label{prop:3.2} Under the same conditions as in Theorem \ref{thm:3.1}, we have: \begin{equation*} \det[ \widetilde{\xi}_0 (\lambda^* (r))] = \varphi_0 (t^{\pm 3}),\ {\it for\ some\ integer\ polynomial}\ \varphi_0 (t). \end{equation*} \end{prop} \section{Proof of Proposition 3.2. (I)} In this section, we prove some basic formulas needed to prove the main theorem. However, since these formulas are mostly technical, we omit some details. (See \cite{HM2}.)
Now, we denote: \begin{align} &(1)\ Q_0 (t) = 1,\nonumber\\ &(2)\ {\rm For}\ m \geq 1,\nonumber\\ &\ \ \ (a)\ Q_m (t) = 1 +(yx) t^2 + (yx)^2 t^4 + \cdots + (yx)^m t^{2m},\nonumber\\ &\ \ \ (b)\ Q_{-m} (t) = (yx)^{-m} t^{-2m} Q_{m-1}(t) = (x^{-1} y^{-1}) t^{-2} + (x^{-1} y^{-1})^2 t^{-4} + \nonumber\\ &\ \ \ \ \ \ \ \ \cdots + (x^{-1} y^{-1})^m t^{-2m}. \end{align} We claim first the following: \begin{prop}\label{prop:4.1} Let $k \geq 0$. Then we have: \begin{align} &(1)\ y^{-1} t^{-1}[(1-yt) Q_{3k+1}(t) yt + (yx)^{3k+2} t^{6k+4}](1-xt) \in T(t).\nonumber\\ &(2)\ y^{-1}t^{-1}(1-yt) Q_{3k+2}(t) yt (1-xt) \in T(t).\nonumber\\ &(3)\ y^{-1} t^{-1} [(1-yt) Q_{-(3k+1)}(t) yt - (x^{-1} y^{-1})^{3k+1} t^{-(6k+2)}](1-xt) \in T(t). \nonumber\\ &(4)\ y^{-1} t^{-1} (1-yt) Q_{-(3k+3)}(t) yt (1-xt) \in T(t). \end{align} \end{prop} \noindent {\it Proof.} For simplicity, we denote ${\bf a} = x+y$ and ${\bf b} = x^{-1} +y^{-1}$. The proof is by induction on $k$. First, straightforward computations prove the initial cases. \begin{align*} (1)& \ \ y^{-1} t^{-1}\{(1-yt) Q_1(t) yt + (yx)^2 t^4\}(1-xt)\\ &=1 -{\bf a} t - {\bf b} t^2 - xyx t^3 \in T(t).\\ (2)& \ \ y^{-1} t^{-1} (1-yt) Q_2 (t) yt (1-xt)\\ &= 1 -{\bf a} t - {\bf b} t^2 +2 xyx t^3 - {\bf a} t^4 - {\bf b} t^5 + t^6 \in T(t).\\ (3)& \ \ y^{-1} t^{-1} \{(1-yt) Q_{-1} (t) yt - (x^{-1} y^{-1}) t^{-2}\}(1-xt)\\ &=- xyx t^{-3} - {\bf a} t^{-2} - {\bf b} t^{-1} + 1 \in T(t).\\ (4)&\ \ y^{-1} t^{-1} (1-yt) Q_{-3} (t) yt (1-xt)\\ &= t^{-6} - {\bf a} t^{-5} - {\bf b} t^{-4} - 2xyx t^{-3} - {\bf a} t^{-2} - {\bf b} t^{-1} +1 \in T(t). \end{align*} Now suppose the formulas hold for $k$; we prove them for $k+1$. {\it Proof of (1)}.
Since $Q_{3k+4}(t) = Q_{3k+1} + (yx)^{3k+2} t^{6k+4} Q_2(t)$, we see \begin{align*} &\ \ \ \ y^{-1} t^{-1} \{(1-yt) Q_{3k+4}(t) yt + (yx)^{3k+5} t^{6k+10}\}(1-xt)\\ &= y^{-1} t^{-1} \{(1-yt) Q_{3k+1} (t) yt + (yx)^{3k+2} t^{6k+4} + (1-yt) (yx)^{3k+2} t^{6k+4} Q_2 (t) yt \\ &\ \ \ \ -(yx)^{3k+2} t^{6k+4} +(yx)^{3k+5} t^{6k+10} \}(1-xt). \end{align*} Therefore, by the induction hypothesis, it suffices to show that \begin{align} &y^{-1} t^{-1}\{(1-yt) (yx)^{3k+2} t^{6k+4} Q_2 (t) yt \nonumber\\ &\ \ \ -(yx)^{3k+2} t^{6k+4} +(yx)^{3k+5} t^{6k+10}\}(1-xt) \in T(t), \end{align} or equivalently \begin{align} y^{-1}\{(1-yt) (yx)^2 Q_2 (t) yt - (yx)^2 + (yx)^2 t^6 \}(1-xt) \in T(t). \end{align} However, it is straightforward to show that \begin{align*} &\{y^{-1} (1-yt) (yx)^2 Q_2 (t) yt - xyx + xyx t^6\}(1-xt)\\ = & -xyx - {\bf a} t - {\bf b} t^2 +2 t^3 - {\bf a} t^4 - {\bf b} t^5 - xyx t^6 \in T(t). \end{align*} Since the other cases can be handled in the same way, details will be omitted. \qed Now, to prove Proposition 3.2, we need explicit recursion formulas for $\lambda (r)$. To obtain such formulas, we write \begin{equation*} r_q=[3k_1, 2m_1,3k_2, 2m_2, \dots, 2m_{q-1}, 3k_q]. \end{equation*} Then $\lambda (r_q)$ is exactly $w_{2q-1}$ in \cite{HM1}. Using the formulas there, we can compute $\lambda (r_q)$ inductively. We only state the results, without proof. \begin{prop}\label{prop:4.2} Let $r_q = [3k_1, 2m_1,3k_2, 2m_2, \dots, 2m_{q-1},3k_q]$. Case 1. $k_q > 0$. (1) If $k_q = 2s$, $s \geq 1$, then \begin{align*} \lambda (r_q)= &(1-y) Q_{3s-1} y \{\sum_{j=1}^{q-1} m_j (x-1) y^{-1} \lambda (r_j)\} \\ &+ (yx)^{3s} \lambda (r_{q-1}) - \sum_{j=1}^s (yx)^{3s-3j+2} + \sum_{j=1}^s (yx)^{3s-3j} y. \end{align*} (2) If $k_q = 2s -1$, $s \geq 1$, then \begin{align*} \lambda (r_q)= &\{(1-y) Q_{3s-2} y + (yx)^{3s-1}\} \sum_{j=1}^{q-1} m_j (x-1) y^{-1} \lambda (r_j) \\ &- (yx)^{3s-1} y^{-1} \lambda (r_{q-1}) + \sum_{j=1}^s (yx)^{3s-3j} y - \sum_{j=1}^{s-1} (yx)^{3s-3j-1}. \end{align*} Case 2.
$k_q < 0$. (1) If $k_q = -2s$, $s \geq 1$, then \begin{align*} \lambda (r_q) =& (y-1) Q_{-3s} y \sum_{j=1}^{q-1} m_j (x-1) y^{-1} \lambda (r_j) + (x^{-1} y^{-1})^{3s} \lambda (r_{q-1})\\ & -\sum_{j=1}^s (x^{-1} y^{-1})^{3s-3j+2} x^{-1} + \sum_{j=1}^s (x^{-1} y^{-1})^{3s-3j+1}. \end{align*} (2) If $k_q = -(2s +1)$, $s \geq 0$, then \begin{align*} \lambda(r_q)=& \{(y-1) Q_{-(3s+1)}y + (x^{-1} y^{-1})^{3s+1}\} \sum_{j=1}^{q-1} m_j (x-1) y^{-1} \lambda (r_j)\\ & -(x^{-1}y^{-1})^{3s+1} y^{-1} \lambda (r_{q-1}) + \sum_{j=0}^s (x^{-1} y^{-1})^{3s-3j+1} \\ &- \sum_{j=0}^s (x^{-1} y^{-1})^{3s-3j+2} x^{-1}. \end{align*} \end{prop} Using Proposition \ref{prop:4.2}, we can show the following key proposition. \begin{prop}\label{prop:4.3} For any $r_q$, $q \geq 1$, $y^{-1} t^{-1}\lambda^*(r_q)$ is twin. \end{prop} \noindent {\it Proof.} Since the other cases can be proven in the same way, we prove only one case: $k_q = 2s$, $s \geq 1$. We use an induction argument. Let $q = 1$. Then $r=[6s]$, $s \geq 1$, and \begin{equation*} \lambda (r_1)= - \sum_{j=1}^s (yx)^{3s-3j+2} + \sum_{j=1}^s (yx)^{3s-3j} y. \end{equation*} Therefore \begin{align*} y^{-1}t^{-1}\lambda^*(r_1)&= y^{-1} t^{-1}\{ - \sum_{j=1}^s (yx)^2 t^{6s-6j+4} + \sum_{j=1}^s y t^{6s-6j+1}\}\\ &=- \sum_{j=1}^s xyx t^{6s-6j+3} + \sum_{j=1}^s t^{6s-6j}, \end{align*} which is obviously twin.\\ Now suppose $y^{-1} t^{-1} \lambda^* (r_j)$ is twin for $j=1,2,\dots,q-1$. Then \begin{align*} y^{-1} t^{-1} \lambda^* (r_q) &=y^{-1} t^{-1} (1-yt) Q_{3s-1}(t) yt \left\{ \sum_{j=1}^{q-1} m_j (xt-1) y^{-1} t^{-1} \lambda^* (r_j)\right\}\\ &\ \ \ + y^{-1} t^{-1} (yx)^{3s} t^{6s} \lambda^*(r_{q-1}) - y^{-1} t^{-1} \sum_{j=1}^s (yx)^{3s-3j+2} t^{6s-6j+4}\\ &\ \ \ +y^{-1} t^{-1} \sum_{j=1}^s (yx)^{3s-3j} y t^{6s-6j+1}\\ &=\sum_{j=1}^{q-1} m_j \{ y^{-1} t^{-1} (1-yt) Q_{3s-1}(t) yt (xt-1)\} y^{-1} t^{-1} \lambda^*(r_j)\\ &\ \ \ + t^{6s} \{y^{-1} t^{-1} \lambda^*(r_{q-1})\} - \sum_{j=1}^s xyx t^{6s-6j+3} + \sum_{j=1}^s t^{6s-6j}.
\end{align*} Since both $y^{-1} t^{-1} \lambda^* (r_j)$ and $y^{-1} t^{-1} (1-yt) Q_{3s-1} (t) yt (xt-1)$ are twin by (4.2)(2), Proposition \ref{prop:4.3} follows for this case. \qed \section{Proof of Proposition 3.2.(II)} To show $\det[\widetilde{\xi}_0(\lambda^*(r))] = \varphi_0(t^{\pm 3})$, it suffices to show $\det[\widetilde{\xi}_0(y^{-1}t^{-1} \lambda^*(r))] =\varphi_1(t^{\pm 3})$. Now, since $y^{-1}t^{-1} \lambda^*(r)$ is twin, we can write \begin{align*} y^{-1}t^{-1} \lambda^*(r)&=\ \sum_{-\infty < j <\infty}(c_j + c_j^{\prime} xyx ) t^{3j} \\ &\ \ + \sum_{-\infty < j < \infty} a_j(x+y) t^{3j+1}\\ &\ \ + \sum_{-\infty < j < \infty} b_j(x^{-1} + y^{-1}) t^{3j+2}, \end{align*} where $a_j, b_j, c_j$ and $c_j^{\prime}$ are integers and $a_k = b_k$ for all $k$.\\ Denote $A=\sum_{j} a_j t^{3j+1}$, $C=\sum_{j}c_j t^{3j}$ and $C^{\prime}=\sum_{j}c_j^{\prime} t^{3j}$. Then, \begin{equation*} y^{-1} t^{-1} \lambda^*(r)= C +C^{\prime} xyx +A(x+y) + A(x^{-1} + y^{-1})t. \end{equation*} Using (2.2), we see that \begin{align*} D=&\det[ \widetilde{\xi}_0 (y^{-1} t^{-1} \lambda^* (r))]\\ =&\det\left(\begin{array}{ccc} -A - At+C - C^{\prime}&\ A-At&\ -A+At \\ -A - C^{\prime}&\ A+C&\ -A+C^{\prime} \\ -At -C^{\prime}&\ -At+C^{\prime}&\ At+C \end{array}\right). \end{align*} First add the second column to the third, then subtract the second row from the third. The third column of the resulting matrix has $C+C^{\prime}$ in the second row as its only non-zero entry, so expanding along it we have: \begin{align*} D& =-(C+C^{\prime}) \{(-A -At+C -C^{\prime}) (-A-At -C +C^{\prime}) - (A-At)^2\}\\ &=(C +C^{\prime}) \{(C-C^{\prime})^2 - 4A^2 t\}. \end{align*} Since $A=t \sum_{j} a_j t^{3j}$, we see $A^2=t^2 \{\sum_{j} a_jt^{3j}\}^2$ and thus \begin{equation*} D=\left\{ \sum_{j} (c_j + c_j^{\prime}) t^{3j}\right\} \left\{ ( \sum_{j}(c_j - c_j^{\prime})t^{3j})^2 - 4t^3( \sum_{j} a_j t^{3j})^2\right\}. \end{equation*} Therefore $D$ is a polynomial in $t^{\pm 3}$. \\ This proves Proposition \ref{prop:3.2}, and the proof of Theorem \ref{thm:3.1} is now complete.
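The factorization of $D$ can be confirmed symbolically (the overall sign of $D$ is immaterial here, since the twisted Alexander polynomial is only defined up to $\pm t^k$). The following Python/SymPy sketch rebuilds the $3\times 3$ matrix from the matrices of $x+y$, $x^{-1}+y^{-1}$ and $xyx$ in (2.2) and checks the closed form of its determinant.

```python
from sympy import Matrix, eye, symbols, expand

A, C, Cp, t = symbols("A C Cp t")  # Cp stands for C'

# Matrices of x+y, x^{-1}+y^{-1} and xyx under xi_0, copied from (2.2).
a = Matrix([[-1, 1, -1], [-1, 1, -1], [0, 0, 0]])
b = Matrix([[-1, -1, 1], [0, 0, 0], [-1, -1, 1]])
w = Matrix([[-1, 0, 0], [-1, 0, 1], [-1, 1, 0]])

# y^{-1} t^{-1} lambda^*(r) = C + C' xyx + A (x+y) + A t (x^{-1}+y^{-1})
M = C*eye(3) + Cp*w + A*a + A*t*b

D = M.det()
# D = (C + C'){(C - C')^2 - 4 A^2 t}; substituting the t^{3j}-series for
# C, C' and A = t * (series in t^3) exhibits D as a polynomial in t^{+-3}.
assert expand(D - (C + Cp)*((C - Cp)**2 - 4*A**2*t)) == 0
```

Since $C$, $C^{\prime}$ and $A/t$ are series in $t^3$, each factor on the right is manifestly a Laurent polynomial in $t^3$, which is the content of Proposition 3.2.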
\begin{ex} The following examples justify our main theorem.\\ (1) For $r=1/9,\widetilde{\Delta}_{\rho_0,K(r)} (t) =(1-t^3)(1-t^3 + t^6 )(1 + t^3 + t^6)^2$.\\ (2) For $r = 5/27, \widetilde{\Delta}_{\rho_0,K(r)} (t) =(1-t^3)(4+7t^3 +4t^6)$.\\ (3) For $r = 7/39, \widetilde{\Delta}_{ \rho_0,K(r)} (t) =(1-t^3)(1-3t^3 + t^6)(1+t^3+ t^6)^2$.\\ (4) For $r=29/75, \widetilde{\Delta}_{\rho_0,K(r)} (t) =(1-t^3)(4-t^3)(1-4t^3)$.\\ (5) For $r=227/777$, \begin{align*} \widetilde{\Delta}_{\rho_0,K(r)} (t) &=(1-t^3)(1-3t^3 +t^6)(1+t^3 +t^6)(2-3t^3 +2t^6)\\ &\ \ \times(4- 36t^3-35t^6 -71t^9 -35t^{12}-36t^{15}+4t^{18}). \end{align*} \end{ex} \section{Metabelian representations} In this section, we discuss general metabelian representations of knot groups. First we define metabelian groups on which the knot groups map \cite{Ha}. Let $p$ be a prime and let $\Phi_n$ be the $n$-th cyclotomic polynomial. We assume that $(n,p)=1$ and $\Phi_n$ is irreducible over $\ZZ/p$. Let $k$ be the degree of $\Phi_n$. Let $A(p,k)$ denote the elementary abelian $p$-group of order $p^k$, i.e. the direct product of $\ZZ/p$ $k$-times. We define a semi-direct product $\ZZ/n {\small \marusen} A(p,k)$ on which the knot groups map. For convenience, we denote $M(n|p,k) = \ZZ/n {\small \marusen} A(p,k)$. Consider $A(p,k)$ as a $k$-dimensional vector space over $\ZZ/p$ and take a basis for $A(p,k)$, say $\{b_1, b_2, \cdots, b_k\}$. Let $T$ be the companion matrix of $\Phi_n$ over $\ZZ/p$. Now, let $s$ be a generator of $\ZZ/n$ and fix it. An element $g$ of $M(n|p,k)$ is written as $g=s^{\ell} a_g$, where $a_g=b_1^{\lambda_1} b_2^{\lambda_2} \cdots b_k^{\lambda_k}$, $0 \leq {\ell}< n$ , and $0 \leq {\lambda_j} < p$, $1 \leq j \leq k$ . 
We define the action of $\ZZ/n$ on $A(p,k)$ by conjugation: \begin{equation} s a_g s^{-1}=a_h,\ {\rm where\ } a_h=b_1^{\mu_1} b_2^{\mu_2} \cdots b_k^{\mu_k}\ {\rm is\ given\ by} \end{equation} \begin{equation} (\lambda_1, \lambda_2, \cdots, \lambda_k ) T =( \mu_1,\mu_2, \cdots, \mu_k). \end{equation} To define the twisted Alexander polynomial $\widetilde{\Delta}_{\rho,K}(t)$ associated to a metabelian representation $\rho$, we need a faithful representation of $M(n|p,k)$ in $GL(m,\ZZ)$ for some $m$. Now let $N= \{1,s,s^2, \cdots, s^{n-1}\}$ be the subgroup of $M(n|p,k)$ generated by $s$ and let $\widehat{N} = \{N_1=N, N_2, \cdots, N_{p^k}\}$ be the set of all right cosets of $M(n|p,k)$ mod $N$. Using (6.1), we see that the right multiplication of $g \in M(n|p,k)$ on $\widehat{N}$ induces a permutation representation $\sigma$ of $M(n|p,k)$ in $S_{p^k}$, and hence, via permutation matrices, a representation in $GL(p^k, \ZZ)$ that is denoted by $\xi$. Suppose that there is a homomorphism $f$ from $G(K)$ onto $M(n|p,k)$ for some $n,p$ and $k$. (It is known that if such a homomorphism exists, then $p$ divides $\prod{\Delta_K(\omega_j)}$, where the product runs over all primitive $n$-th roots $\omega_j$ of $1$.) Then $f$ induces a representation $\rho =f \circ \sigma \circ \xi: G(K)\longrightarrow M(n|p,k) \longrightarrow S_{p^k} \longrightarrow GL(p^k ,\ZZ)$. We may assume without loss of generality that for one meridian generator $x$, \begin{equation} f(x)=s. \end{equation} Let $\widetilde{\Delta}_{\rho, K}(t)$ be the twisted Alexander polynomial of a knot $K$ associated to $\rho$. Then we propose the following conjecture: \begin{yosou}\label{conj:6.1} $\widetilde{\Delta}_{\rho, K}(t)$ is of the form: \begin{equation} \widetilde{\Delta}_{\rho, K}(t) = \left[\frac{\Delta_{K}(t)}{1-t}\right] \varphi (t),\ {\rm where\ } \varphi (t)\ {\rm is\ an\ integer\ polynomial\ in\ } t^n. \end{equation} \end{yosou} In the rest of this section, we discuss several examples that support this conjecture.
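As a small sanity check of this construction, the Python sketch below realizes $M(n|p,k)$ for $(n,p,k)=(4,3,2)$ in the normal form $a_v s^{\ell}$, with $s$ acting on $A(3,2)$ through the companion matrix $T$ of $\Phi_4 = 1+z^2$ over $\ZZ/3$ as in (6.1)--(6.2) (one standard choice of $T$ is hard-coded below), and verifies the defining relations of the presentation of $M(4|3,2)$ used in Example 6.3 below, with $a = b_1$ and $b = b_2$.

```python
from itertools import product

# M(n|p,k) for n = 4, p = 3, k = 2: elements are pairs (v, l) standing
# for a_v s^l, v in A(p,k) = (Z/p)^k, l in Z/n.
n, p, k = 4, 3, 2
T = [[0, 2], [1, 0]]  # companion matrix of Phi_4 = z^2 + 1 over Z/3

def actT(v, times):
    """v -> v T^times over Z/p, following (6.2)."""
    for _ in range(times):
        v = tuple(sum(v[i] * T[i][j] for i in range(k)) % p for j in range(k))
    return v

def mult(g, h):
    """(a_v s^l)(a_w s^m) = a_{v + w T^l} s^{l+m}, using s a_w s^{-1} = a_{wT}."""
    (v, l), (w, m) = g, h
    wl = actT(w, l)
    return (tuple((x + y) % p for x, y in zip(v, wl)), (l + m) % n)

def power(g, m):
    r = ((0,) * k, 0)
    for _ in range(m):
        r = mult(r, g)
    return r

def inv(g):
    """Brute-force inverse in this finite group of order n * p^k = 36."""
    ident = ((0,) * k, 0)
    for v in product(range(p), repeat=k):
        for l in range(n):
            if mult(g, (tuple(v), l)) == ident:
                return (tuple(v), l)

e = ((0, 0), 0)
s = ((0, 0), 1)
a = ((1, 0), 0)   # the basis element b_1 of A(3,2)
b = ((0, 1), 0)   # the basis element b_2 of A(3,2)

# Defining relations of M(4|3,2):
assert power(s, 4) == e and power(a, 3) == e and power(b, 3) == e
assert mult(a, b) == mult(b, a)
assert mult(mult(s, a), inv(s)) == power(b, 2)  # s a s^{-1} = b^{-1}
assert mult(mult(s, b), inv(s)) == a            # s b s^{-1} = a
```

The same skeleton, with $T$ replaced by the companion matrix of the relevant $\Phi_n$ over $\ZZ/p$, produces each of the groups $M(n|p,k)$ appearing in the examples below.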
\begin{ex} Consider a torus knot $K(1/p)$, $p$ being an odd prime. Let $G(K(1/p))=\langle x,y| Wx=yW\rangle$, $W=(xy)^{\frac{p-1}{2}}$, be a Wirtinger presentation. Since $\Phi_p = 1+z+z^2+ \cdots +z^{p-1}$ is irreducible over $\ZZ/2$, $G(K(1/p))$ maps onto $M(p|2,p-1)$ by a mapping $f(x)=s$ and $f(y)=sb_1$, and hence we obtain a metabelian representation $\rho_p$ of $G(K(1/p))$, $\rho_p : G(K(1/p)) \rightarrow M(p|2,p-1) \rightarrow S_{2^{p-1}} \rightarrow GL(2^{p-1},\ZZ)$. For example, if $p=3$, then $M(3|2,2)$ is the alternating group $A_4$ and $\rho_3$ coincides with $\rho$ defined in (1.2). Let $\widetilde{\Delta}_{\rho_p, K(1/p)}(t)$ be the twisted Alexander polynomial of $K(1/p)$ associated to $\rho_p$. Then computations show that \begin{align} &(1) \widetilde{\Delta}_{\rho_3, K(1/3)}(t)= \left[\frac{\Delta_{K(1/3)}(t)}{1-t}\right] (1-t^3).\nonumber\\ &(2) \widetilde{\Delta}_{\rho_5, K(1/5)}(t)= \left[\frac{\Delta_{K(1/5)}(t)}{1-t}\right] (1-t^5)^5 (1+t^5)^4. \end{align} It is quite likely that for any odd prime $p$, \begin{equation} \widetilde{\Delta}_{\rho_p, K(1/p)}(t) = \left[\frac{\Delta_{K(1/p)}(t)}{1-t}\right] (1-t^p)^m (1+t^p)^{m-1},\ {\rm where\ }m=2^{p-2}-\left[\frac{2^{p-1}-1}{p}\right]. \end{equation} \end{ex} \begin{ex} Consider $M(4|3,2)=\ZZ/4 {\small \marusen} (\ZZ/3 \oplus \ZZ/3)$. Since $\Phi_4 = 1+z^2$ is irreducible over $\ZZ/3$, the group has the following presentation: \begin{equation} M(4|3,2)= \langle s, a, b| s^4 = a^3 = b^3 =1, ab=ba, sa s^{-1} = b^{-1}, sb s^{-1} = a\rangle. \end{equation} Let $N = \{1,s , s^2, s^3\}$ be the subgroup of $M(4|3,2)$ generated by $s$. Let $\widehat N = \{N_1=N, N_2, \cdots, N_9\}$ be the set of all right cosets of $M(4|3,2)$ mod $N$. Let $\sigma$ be the permutation representation of $M(4|3,2)$ in $S_9$. For example, $\sigma(s)= (1)(2435)(6897)$ and $\sigma(sa)=(1264)(5378)(9)$. Now let $G(K(r)) =\langle x,y| R_r = 1\rangle$ be a Wirtinger presentation of $G(K(r))$.
Then the groups of $2$-bridge knots $K(3/5), K(3/7), K(5/13), K(11/17)$ and $K(13/23)$ map onto $M(4|3,2)$ by $f(x) =s$ and $f(y)=sa$, and the homomorphism $\rho$ given by $\rho(x) = \xi(\sigma(s))$ and $\rho(y)= \xi(\sigma(sa))$ is a metabelian representation of $G(K(r))$. The twisted Alexander polynomials $\widetilde{\Delta}_{\rho, K(r)}(t)$ associated to $\rho$ are of the form $\left[\frac{\Delta_{K(r)}(t)}{1-t}\right] \varphi_{K(r)} (t)$, where $\varphi_{K(r)} (t)$ is given as follows. \begin{align} &(1)\ \varphi_{K(3/5)} (t) = (1-t^4)^2,\nonumber\\ &(2)\ \varphi_{K(3/7)} (t)=4(1-t^4)^2, \nonumber\\ &(3)\ \varphi_{K(5/13)} (t)= (1-t^{12})^2,\nonumber\\ &(4)\ \varphi_{K(11/17)} (t) = (1-t^4)^4 (1+t^4+t^8)^3\ {\rm and}\nonumber\\ &(5)\ \varphi_{K(13/23)} (t) = (1-t^4)^2 (4-13t^4-9t^8-13t^{12}+4t^{16}). \end{align} \end{ex} \begin{ex} Consider $M(3|5,2) = \ZZ/3 {\small \marusen} (\ZZ/5 \oplus \ZZ/5)$. Since $\Phi_3=1+z+z^2$ is irreducible over $\ZZ/5$, the group has a presentation: $\langle s, a, b| s^3 = a^5 = b^5 =1, ab=ba, sa s^{-1} = b^{-1}, sb s^{-1}=a b^{-1}\rangle$. \end{ex} As in Example 6.3, $M(3|5,2)$ can be represented in $S_{25} \subset GL(25,\ZZ)$. The groups of $2$-bridge knots $K(3/7), K(7/11), K(9/23)$ and $K(9/31)$ map onto $M(3|5,2)$ by $f(x) = s$ and $f(y) = sa$.
The twisted Alexander polynomials associated to $\rho=f \circ \sigma \circ \xi$ : $G(K(r)) \longrightarrow GL(25,\ZZ)$ are of the form: $\left[\frac{\Delta_{K(r)}(t)}{1-t}\right] \varphi_{K(r)} (t)$ and we have \begin{align} &(1)\ \varphi_{K(3/7)} (t) = 16(1-t^3)^8,\nonumber\\ &(2)\ \varphi_{K(7/11)} (t) = (1-t^3)^8 (1-3t^3-2t^6-6t^9-5t^{12}-6t^{15}-2t^{18}-3t^{21}+t^{24})^2,\nonumber\\ &(3)\ \varphi_{K(9/23)} (t) = \nonumber\\ &\ \ (1-t^3)^8 (1-5t^6-20t^9 -28t^{12}-20t^{15}-5t^{18}+t^{24})^2\nonumber\\ &\hspace*{1.5cm}\times(1-5t^3+10t^6-10t^9+7t^{12}-10t^{15}+10t^{18}-5t^{21}+t^{24})^2\ {\rm and}\nonumber\\ &(4)\ \varphi_{K(9/31)} (t) = \nonumber\\ &\ \ (1-t^3)^8 (1+3t^3-6t^6+15t^9-15t^{12}+15t^{15}-6t^{18}+3t^{21}+t^{24})^2\nonumber\\ &\ \ \times (4+12t^3+36t^6+30t^9+35t^{12}+30t^{15}+36t^{18}+12t^{21}+4t^{24})^2. \end{align} \begin{ex} Consider $M(4|5,2) = \langle s, a, b| s^4 = a^5 = b^5 =1, ab=ba, sa s^{-1} = b^{-1}, sb s^{-1} =a\rangle$. Since $\Phi_4$ is reducible over $\ZZ/5$, we cannot apply the previous argument, but we see that $M(4|5,2)$ can be represented in $S_{25} \subset GL(25,\ZZ)$ and the group of a $2$-bridge knot $K(5/9)$ maps onto $M(4|5,2)$ by $f(x)=s$ and $f(y)=sa$, and hence $G(K(5/9))$ has a metabelian representation $\rho$ in $GL(25,\ZZ)$. The twisted Alexander polynomial of $K(5/9)$ associated to $\rho$ is given by \begin{equation} \widetilde{\Delta}_{\rho, K(5/9)}(t)= \left[\frac{\Delta_{K(5/9)}(t)}{1-t}\right] 16(1-t^4)^6. \end{equation} \end{ex} In the last three examples below, we consider a non-rational knot $K =8_5$ and non-alternating knots $10_{145}$ and $10_{159}$ in the Rolfsen table, and show that Conjecture \ref{conj:6.1} holds for these knots. \begin{ex} Let $K$ be the knot $8_5$.
The group $G(K)$ has a Wirtinger presentation \cite[Example 10.5]{HM2}: \begin{align*} &G(K)=\langle x,y,z|R_1,R_2\rangle,\ {\rm where}\\ &R_1 =(x^{-1}y^{-1}zyxy^{-1}x^{-1}y^{-1})x(yxyx^{-1}y^{-1}z^{-1}yx)y^{-1},\ {\rm and}\\ &R_2 = (xzyxy^{-1})z(yx^{-1}y^{-1}z^{-1}x^{-1})y^{-1}. \end{align*} We see that $G(K)$ maps onto the metabelian group $M(3|2,2)$, which is the alternating group $A_4$. (We note that $\Delta_K(t) =(1-t+t^2)(1-2t+t^2-2t^3+t^4)$.) A mapping $f(x)=f(z)=(123)$ and $f(y)=(142)$ induces an $A_4$-representation $\rho$ of $G(K)$ into $GL(4,\ZZ)$. The twisted Alexander polynomial is given by \begin{equation} \widetilde{\Delta}_{\rho, K}(t) =\left[\frac{\Delta_K(t)}{1-t}\right](1-t^3)(1-8t^3-6t^6-8t^9+t^{12}). \end{equation} \end{ex} \begin{ex} Let $K$ be the knot $10_{145}$. The Alexander polynomial $\Delta_K(t)$ is $1+t-3t^2+t^3+t^4$. The group $G(K)$ has a Wirtinger presentation: \begin{align*} &G(K)=\langle x,y,z|R_1,R_2\rangle,\ {\rm where}\\ &R_1 = (y^{-1}z x^{-1}z^{-1}y z x y^{-1}x^{-1}z^{-1})y(zxyx^{-1}z^{-1}y^{-1}zxz^{-1}y)z^{-1},\ {\rm and}\\ &R_2 =(z^{-1}y^{-1}zx^{-1}z^{-1}yzxy^{-1})z(yx^{-1}z^{-1}y^{-1}zxz^{-1}yz)x^{-1}. \end{align*} Now we see that $G(K)$ maps onto the group $M(5|2,4) = \ZZ/5 {\small \marusen} (\ZZ/2)^4$. This group has a presentation: \begin{align} \langle s,b_1,b_2,b_3,b_4| &s^5=b_i^2 = 1, b_i b_j=b_j b_i,1 \leq i, j \leq 4, \nonumber\\ &s b_1 s^{-1} = b_4^{-1}, s b_2 s^{-1} = b_1 b_4^{-1},\nonumber\\ &s b_3 s^{-1} = b_2 b_4^{-1},s b_4 s^{-1} = b_3b_4^{-1}\rangle. \end{align} We see easily that a mapping $f(x)=sb_1b_2 b_3 b_4$, $f(y)=s b_1$ and $f(z) = s$ is, in fact, a homomorphism and we can represent $M(5|2,4)$ in $GL(16,\ZZ)$. Then the twisted Alexander polynomial of $K$ associated to a representation $\rho : G(K) \longrightarrow GL(16,\ZZ)$ is given as follows. \begin{equation} \widetilde{\Delta}_{\rho, K}(t) = \left[\frac{\Delta_K(t)}{1-t}\right] (1-t^5) (1+14t^5 + t^{10}) (1+3t^5+t^{10}).
\end{equation} \end{ex} \begin{ex} Let $K$ be the knot $10_{159}$. The Alexander polynomial $\Delta_K(t)$ is $(1-t+t^2)(1-3t+5t^2-3t^3+t^4)$. The group $G(K)$ has a Wirtinger presentation: $G(K)=\langle x,y,z|R_1,R_2\rangle$, where $R_1 = (xzx^{-1}z^{-1}y^{-1}zy)x(y^{-1}z^{-1}yzxz^{-1}x^{-1})y^{-1}$, and $R_2 = (x^{-1}yxzx^{-1}z^{-1}x^{-1}y^{-1}z^{-1})y(zyxzxz^{-1}x^{-1}y^{-1}x)z^{-1}$. We see that $G(K)$ maps onto two metabelian groups $M(3|2,2)$ and $M(5|2,4)$. The first group is the alternating group $A_4$, and a mapping $f(x)=f(y)=(123)$ and $f(z)=(142)$ induces an $A_4$-representation $\rho$ of $G(K)$ into $GL(4,\ZZ)$. The twisted Alexander polynomial is given by \begin{equation} \widetilde{\Delta}_{\rho, K}(t) = \left[\frac{\Delta_K(t)}{1-t}\right] (1-t^3)(1-3t^3-3t^6-3t^9+t^{12}). \end{equation} Now consider the second group $M(5|2,4) =\ZZ/5 {\small \marusen} (\ZZ/2)^4$. This group has a presentation (6.12). We see easily that a mapping $f(x)=s, f(y)=s b_1b_4$ and $f(z)=s b_1$ is a homomorphism. As before, we represent $M(5|2,4)$ in $GL(16,\ZZ)$. Then the twisted Alexander polynomial of $K$ associated to a representation $\rho : G(K) \longrightarrow GL(16,\ZZ)$ is given as follows. \begin{equation} \widetilde{\Delta}_{\rho, K}(t) = \left[\frac{\Delta_K(t)}{1-t}\right] \varphi(t),\ {\rm where\ } \end{equation} $\varphi(t) = (1-t^5)(1+3t^5+t^{10})(1-31t^5+12t^{10}-31t^{15}+t^{20})(1+5t^5+52t^{10}+5t^{15}+t^{20}). $ \end{ex} \noindent {\bf Acknowledgements. } The first author is partially supported by MEXT, Grant-in-Aid for Young Scientists (B) 18740035, and the second author is partially supported by NSERC Grant~A~4034. \end{document}
\begin{document} \title{Platform-Mediated Competition} \begin{center} \Large \textcolor{blue}{\href{https://northwestern.box.com/s/ggdovah9pxagfys6e38kecl31lb42j2p}{Click here for the latest version}} \end{center} \begin{abstract} Cross-group externalities and network effects in two-sided platform markets shape market structure and competition policy, and are the subject of extensive study. Less understood are the within-group externalities that arise when the platform designs many-to-many matchings: the value to agent $i$ of matching with agent $j$ may depend on the set of agents with which $j$ is matched. These effects are present in a wide range of settings in which firms compete for individuals' custom or attention. I characterize platform-optimal matchings in a general model of many-to-many matching with within-group externalities. I prove a set of comparative statics results for optimal matchings, and show how these can be used to analyze the welfare effects of various changes, including vertical integration by the platform, horizontal mergers between firms on one side of the market, and changes in the platform's information structure. I then explore market structure and regulation in two in-depth applications. The first is monopolistic competition between firms on a retail platform such as Amazon. The second is a multi-channel video program distributor (MVPD) negotiating transfer fees with television channels and bundling these to sell to individuals. \end{abstract} Multi-sided platforms play a large and growing role in the economy. The core businesses of some of the world's largest companies, including Amazon, Alibaba, Facebook, and Google, fall into this category. The defining feature of platforms is that they match different agents. In the case of multi-sided platforms, agents can be divided into distinct groups, and matches occur across groups.
Increasing data availability and developments in matching technology, both facilitated in many cases by the internet, have fueled the rise in multi-sided platform businesses. An important feature of the environments in which many platforms operate is the existence of cross-group externalities. A U.S. based retailer derives benefits from contracting with a Chinese manufacturer, and the manufacturer benefits from selling products to the retailer. In general, neither party appropriates the full surplus from their transaction. This would not be a problem if it were easy for the retailer to find an appropriate manufacturer, or vice-versa. However when searching for a partner is costly, the presence of externalities means that agents will generally under-invest in search. This explains the existence of a platform such as Alibaba. The platform facilitates matches, and internalizes the matching externalities through fees charged to agents on one or both sides of the market. In addition to cross-group externalities, platforms often take advantage of network effects. The benefit that one side of the market derives from the platform's services depends on the set of agents on the other side with whom they may be matched. Software developers would like to create programs for operating systems that have a large number of users, and users prefer operating systems which support many programs. Modern platforms generally engage in more sophisticated matching than simply granting all-or-nothing access to a network. Search engines prioritize certain results, and curate results based on user preferences. Cable providers allow customers to choose between many different packages consisting of different bundles of channels. By doing so, platforms fine-tune the network effects within the platform. Cross-group externalities and network effects have long been central to the literature on multi-sided platform design and regulation.
However significantly less attention has been paid to \textit{within-group} externalities in multi-sided settings. Consider again Alibaba's role matching retailers and manufacturers. Cross-group network effects are present; retailers would like to search on a platform on which many manufacturers are available, and manufacturers would like access to the largest set of potential customers. However manufacturers are also competitors. Fixing the set of retailers using the platform, a manufacturer would prefer to compete with as few other manufacturers as possible. As in this example, within-group externalities often work in the opposite direction from network effects. Within-group externalities have important implications for the regulation of platforms. As noted above, platforms add value by internalizing cross-group externalities. Indeed, when network effects are large, it has been argued that the efficiency gains provided by large networks are sufficient to justify the existence of a monopolistic platform \citep{evans2003antitrust}. However the platform also internalizes, to some degree, the effects of competition between firms. That is, the platform internalizes within-group externalities. Thus the platform will have an incentive to reduce competition between firms beyond the socially optimal level. Within-group externalities also affect the welfare implications of vertical integration by the platform into the firm side of the market, as well as of horizontal mergers between firms. This paper studies the implications of within-group externalities for the design and regulation of platforms. I consider a monopolistic platform whose role is to match each agent on one side of the market with a set of agents on the other (so called many-to-many matching). For example, Google's ad platform matches advertisers and websites. Each advertiser's ad may be shown on multiple websites, and websites display multiple ads.
Matches are reciprocal; $i$ is matched with $j$ if and only if $j$ is matched with $i$. I will refer to one side of the market as firms and the other side as individuals, although the analysis applies equally well to a wide range of business-to-business activities. There are no within-side externalities for individuals; an individual's payoff depends only on the set of firms with which they are matched. A firm's payoffs, however, depend not only on the set of individuals with which it is matched, but also on the set of firms with which each of these individuals is matched. In the web advertising example, the ``individuals'' are the websites, whose payoff depends only on which ads are being displayed on their site. The firms are the advertisers, who care not only about which sites display their ads, but also, potentially, about how many other ads are shown on the same sites (due perhaps to viewers' limited attention), and whether these ads are from their competitors. The model accommodates both vertical and horizontal differentiation between agents. An agent's vertical type relates to their own marginal value for better matches, whereas their horizontal type captures their attractiveness to the other side of the market. I show that when payoffs are suitably supermodular, optimal matches have a natural threshold structure, whereby agents are matched with those on the other side of the market who have high enough vertical types. Using this characterization, I prove a set of general comparative statics results on how the matching changes when payoffs shift in a way that makes some agents relatively more important. I show how in a broad class of problems, including those in which one set of agents is privately informed about their type, these comparative statics results can be used to perform welfare analyses. Within-group externalities are an essential component of competition analysis.
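The threshold logic can be illustrated with a deliberately simple numerical example (my own construction, not the model analyzed in this paper): suppose the platform's surplus from matching individual $i$ with a set $S$ of firms is $\sum_{j \in S} x_i y_j - \gamma |S|(|S|-1)$, where $x_i$ and $y_j$ are vertical types and the quadratic term proxies for within-group competition among matched firms. Brute-force optimization then exhibits both features of the characterization: each matching set is an upper set in firm type, and higher-type individuals receive weakly larger sets.

```python
from itertools import combinations

# Toy instance: illustrative types and congestion parameter, chosen freely.
xs = [1.0, 2.0, 3.0]          # individual vertical types
ys = [0.5, 1.0, 1.5, 2.0]     # firm vertical types
gamma = 0.6                   # strength of within-group externality

def best_set(x):
    """Brute-force the surplus-maximizing set of firms for type-x individual."""
    firms = range(len(ys))
    best, best_val = set(), float("-inf")
    for r in range(len(ys) + 1):
        for S in combinations(firms, r):
            val = sum(x * ys[j] for j in S) - gamma * len(S) * (len(S) - 1)
            if val > best_val:
                best, best_val = set(S), val
    return best

matches = [best_set(x) for x in xs]

# Threshold structure: each matching set is the set of firms whose type
# clears an individual-specific cutoff ...
for S in matches:
    if S:
        cutoff = min(ys[j] for j in S)
        assert all(j in S for j in range(len(ys)) if ys[j] >= cutoff)

# ... and higher-type individuals get (weakly) larger matching sets.
assert matches[0] <= matches[1] <= matches[2]
```

Raising $\gamma$ in this sketch shrinks matching sets from the bottom, which is the within-group analogue of the "raising rivals' costs" effect discussed below.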
For example, I show that when these effects are not present, vertical mergers between the platform and firms will unambiguously benefit individuals in most cases. I explore two applications in depth. I modify the canonical Dixit-Stiglitz model of monopolistic competition by giving a platform control over the set of firms that each individual has access to. This model applies to many settings; for example, Amazon mediating interactions between customers and vendors. I characterize the types of mergers between the platform and firms, and the types of information acquisitions by the platform, that make individuals better or worse off. I also study a multi-channel video program distributor (MVPD) negotiating transfer fees with television channels and bundling these to sell to individuals who are privately informed about their value for programming. I show that horizontal mergers between channels which are included in the basic cable package will make all individuals worse off, even if the merger does nothing but create cost synergies and has no direct anti-competitive effects. On the other hand, all individuals will be made better off if the merger is between channels only offered in the premium packages. Similar results obtain for vertical mergers between the MVPD and channels. The general intuition for a number of the results can be illustrated by focusing on the effect of the platform acquiring a firm, denoted by $j$. Assume that firms like matches with individuals, but dislike competition; their payoff from a match with individual $i$ is decreasing in the number of other firms that $i$ is matched with. Suppose that before the merger the platform was not able to capture the full surplus enjoyed by firm $j$. This will be the case when firms have bargaining power or private information. After the merger, on the other hand, the platform will internalize $j$'s surplus completely. This change has two effects on the matching structure.
First, the platform will want to increase the payoff to firm $j$ by matching more individuals with $j$. This effect is analogous to ``eliminating markups'', as discussed in the classical vertical integration literature.\footnote{See \cite{riordan2005competitive}.} Second, the platform wants to reduce the competition faced by $j$ (analogous to ``raising rivals' costs''). To do this, the platform matches the individuals who are matched with $j$ with fewer additional firms. When there are no within-group externalities only the first effect is present. In this case the merger results in larger matching sets for all individuals, which I show implies higher payoffs for all individuals.\footnote{This conclusion holds even when individuals make monetary payments to the firm, in which case it is an implication of the envelope condition for the platform's revenue maximizing mechanism.} The second effect, however, which is driven by competition between firms, shrinks individual matching sets. The overall welfare effect depends on which of the two dominates. Using the characterization of optimal matches, I am able to identify cases in which the welfare effect is unambiguous. In general, an acquisition by the platform of a low-type firm will benefit individuals, and an acquisition of a high-type firm will harm individuals. The literature on competition policy for multi-sided platforms is extensive, and will not be summarized in full here. For a recent review see \cite{jullien2020economics}. Much of this literature focuses on competition between platforms, and ignores within-group externalities. Seminal theoretical contributions in this area were made by \cite{rochet2003platform}, \cite{caillaud2003chicken}, and \cite{armstrong2006competition}. My primary interest is in the implications of platform mediation for competition between firms. I therefore focus on a monopolistic platform, but introduce competition effects.
\cite{pouyet2016vertical} study vertical integration in a model with competing platforms, but without within-side competition on a given platform. More recently, \cite{tan2020effects} incorporate within-group externalities into a model of competition between platforms. In \cite{tan2020effects} platforms have a membership structure, and do not engage in more sophisticated matching, whereas in the current paper I allow the platform to flexibly design the matching and transfer schedule. The authors show that increasing the number of platforms can adversely affect consumer welfare. More closely related to the current paper is \cite{de2014integration}, which studies a search engine matching users and publishers. Publishers in turn make revenue by selling space to an advertiser. The search engine also sells ad space directly to the advertiser. This generates competition between the publishers' and search engine's web pages. While the structure of the model is quite different from that considered here, the authors identify broadly similar effects of vertical integration. Integration by the search engine into publishing can induce bias into search results, analogous to the bias in matching sets that I find. However, as in my setting, there is a countervailing effect; integration may also reduce the quantity of ads seen by users, which benefits them. The net effect on user welfare remains ambiguous. The key ingredients to the model are \textit{i}) a platform that controls the interactions between agents on different sides of the market via many-to-many matchings, and \textit{ii}) within-side competition effects on one side. While these features appear separately in the literature, they have not been previously considered together. This paper builds on the matching design and price discrimination literature. The model of the platform is similar to that of \cite{gomes2016many}, who also consider the design of many-to-many matchings by a platform.
However, their model does not allow for within-side competition. As in \cite{gomes2016many}, the platform in this paper may engage in price discrimination by offering a menu of matching sets and fees to each side of the market. The platform can flexibly design both the matching sets and fees. This is in contrast to the literature on two-sided markets, in which platforms sell access to a single network, or to different mutually exclusive networks (see \cite{rysman2009economics} for a survey and \cite{weyl2010price}, \cite{white2016insulated} for more recent contributions). The remainder of this paper is organized as follows. Section \ref{sec:model} presents the basic model, characterizes optimal matchings, and discusses the main comparative statics results. Section \ref{sec:extension} discusses an extension of the model, which is useful in applications. Section \ref{sec:applications} presents the two applications mentioned earlier. \section{The basic model}\label{sec:model} The model generalizes that of \cite{gomes2016many}. There is a unit mass of individuals (side $I$) and a set $\mathcal{F}$ of firms (side $F$). Depending on the setting, I will consider either finite $\mathcal{F}$ or $\mathcal{F} = [0,1]$. In what follows $\lambda$ will be used to denote either Lebesgue measure, in the case of a continuum of firms, or the measure placing mass $1$ on each firm when $\mathcal{F}$ is finite. Competition effects are present only on the firm side, and take a form that will be specified below.\footnote{It is also interesting to consider markets with competition effects on both sides. For now I will focus on the one-sided case because most of the applications that I have in mind are of this form.} Matchings are reciprocal: individual $i$ is matched with firm $j$ if and only if $j$ is matched with $i$. \subsection{Payoffs} I present here the baseline model. Alternative payoff structures are explored in the extensions in Section \ref{sec:extension}.
Agent $\ell \in [0,1]$ on side $K \in \{F,I\}$ is characterized by a \textit{vertical type} $v_K^{\ell}$ and a \textit{horizontal type} $\sigma_K^{\ell} \geq 0$. The platform's objective function will be of the form \begin{equation}\label{eq:objective} \int_F U^F(v_F^j, |s_F(j)|_{S_I}) d\lambda(j) + \int_I U^I(v_I^i, |s_I(i)|) d \lambda(i). \end{equation} The components of (\ref{eq:objective}) will be discussed below. What is important to note at present is that the platform's payoff depends on the sum of some aggregate payoffs coming from the firm side and the individual side. If the platform's objective is utilitarian welfare maximization then $U^F$ and $U^I$ will correspond to the utilities of firms and individuals respectively. However, there are many other objectives of the form given in (\ref{eq:objective}). One such setting, in which agents are privately informed about their vertical type, will be discussed in Section \ref{sec:privateinfo}. Further examples will be illustrated in the applications of Section \ref{sec:applications}. For simplicity, in what follows I will refer to $U^F$ and $U^I$ as firm and individual payoffs respectively. Later, when discussing individual welfare, I will be careful to differentiate between the true utilities of agents and the payoffs that are relevant for the platform's objective in (\ref{eq:objective}). The vertical type $v_K^{\ell}$ determines the value that $\ell$ attaches to matches with agents on the other side of the market, while the horizontal type $\sigma_K^{\ell}$, which I also refer to as \textit{salience}, represents how important $\ell$ is to agents on the other side. To be precise, for an individual $i$ on side $I$ the payoff of being matched with a set $s_I(i) \subseteq [0,1]$ of firms is \begin{equation*} U^I(v_I^i, |s_I(i)|) \end{equation*} where $|s_I(i)|$ is the salience-weighted size of the set $s_I(i)$, given by \begin{equation*} |s_I(i)| = \int\limits_{j \in s_I(i)} \sigma_F^j d \lambda(j).
\end{equation*} Payoffs on the firm side are similar, but account for competition effects. This means that the payoff of a firm depends on the entire matching, not just its own matching set. The payoff to firm $j$ with matching set $s_F(j)$ when individuals have matchings $S_I = \{(s_I(i),i)\}_{i\in[0,1]}$ is given by \begin{equation*} U^F(v_F^j, |s_F(j)|_{S_I}) \end{equation*} where \begin{equation*} |s_F(j)|_{S_I} = \int\limits_{i \in s_F(j)} h(|s_I(i)|, \sigma_I^i, \sigma_F^j) d \lambda(i), \end{equation*} and $h$ is non-negative. The function $h$ captures competition effects. It can be thought of as depending on the exogenous ($\sigma_I^i$) and endogenous ($|s_I(i)|$) components of individual salience.\footnote{$h$ need not be a function of $|s_I|$. The general results presented below apply as long as $h$ is continuous (in an appropriate sense) in $s_I$ and independent of the vertical types $v^j_F$ for $j \in s_I$. For example, $h$ could be a function of $\lambda(s_I)$ rather than $|s_I|$.} In most applications $h$ will be decreasing; individuals are less valuable to firms if they are matched with many other firms. However, I will not assume that this is always the case. In the model described above, neither firms nor individuals care about the vertical types of the agents they are matched with. The interpretation is that an agent's vertical type is a private taste parameter that describes their value for matches. I will discuss settings in which this assumption is natural. On the other hand, in some situations we may derive the vertical type from characteristics of the agent that may be relevant for agents on the other side of the market, for example the cost function of a firm that is also choosing prices. In Section \ref{sec:monopcomp} I explore payoffs of this form. The main qualitative conclusions of the model will be the same in both cases.
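To fix ideas, the salience-weighted match sizes defined above can be computed directly in a finite example. The following Python sketch is purely illustrative: the matching, the salience values, and the particular decreasing function $h$ are hypothetical choices, not primitives of the model.

```python
# Illustrative discrete computation of the salience-weighted match sizes
# |s_I(i)| and |s_F(j)|_{S_I}. All numbers and the function h below are
# hypothetical choices, not part of the model's primitives.

def match_size_individual(s_I_i, sigma_F):
    # |s_I(i)|: sum of the saliences of the firms matched with individual i
    return sum(sigma_F[j] for j in s_I_i)

def match_size_firm(j, s_F, s_I, sigma_I, sigma_F, h):
    # |s_F(j)|_{S_I}: each matched individual i is weighted by
    # h(|s_I(i)|, sigma_I[i], sigma_F[j]), capturing competition effects
    return sum(
        h(match_size_individual(s_I[i], sigma_F), sigma_I[i], sigma_F[j])
        for i in s_F[j]
    )

# A hypothetical decreasing h: an individual matched with many salient
# firms is less valuable to each firm matched with them.
def h(size_i, sig_i, sig_j):
    return sig_i / (1.0 + size_i)

sigma_F = {0: 1.0, 1: 2.0}   # firm saliences
sigma_I = {0: 1.0, 1: 1.0}   # individual saliences
s_I = {0: {0, 1}, 1: {1}}    # individual -> matched firms
s_F = {0: {0}, 1: {0, 1}}    # firm -> matched individuals (reciprocal)

print(match_size_individual(s_I[0], sigma_F))             # 3.0
print(match_size_firm(1, s_F, s_I, sigma_I, sigma_F, h))  # 0.25 + 1/3
```

In this toy matching, individual $0$ is matched with both firms, so both firms weight them by the same, relatively low, value of $h$; this is the competition effect that the general analysis exploits.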
\subsection{Optimal matchings} For the time being, assume that there is no horizontal differentiation; $\sigma_I^i = \sigma_I^k$ for all $i,k$ and $\sigma_F^j = \sigma_F^l$ for all $j,l$. If there is horizontal differentiation, the results of this section will hold conditional on horizontal types. \noindent \textbf{Supermodularity.} Let $v''>v'$, $x'' > x'$. Then $U^K(v'', x'') + U^K(v',x') \geq U^K(v'', x') + U^K(v',x'')$ for $K \in \{F,I\}$. \noindent \textbf{Order-Supermodularity.} There exists a complete order on the agents on side $K$ such that Supermodularity holds with respect to this order. That is, if type $\hat{v}$ is higher in this order than type $\tilde{v}$ (not necessarily $\hat{v} > \tilde{v}$) and $x'' \geq x'$, then $U^K(\hat{v}, x'') + U^K(\tilde{v},x') \geq U^K(\hat{v}, x') + U^K(\tilde{v},x'')$ for $K \in \{F,I\}$. I will refer to the order specified in the definition of Order-Supermodularity as the supermodular order. Clearly, Supermodularity is a special case of Order-Supermodularity in which the order is given by type. \begin{lemma}\label{lem:monotonicity} Under Order-Supermodularity, optimal matchings are monotone in the supermodular order: higher type individuals receive larger matching sets and higher type firms receive higher quality matching sets. \end{lemma} \begin{proof} Without loss, let the order be given by type. Consider first individual side monotonicity. Let $v^I_j \geq v^I_k$ and suppose $|s_I(k)| \geq |s_I(j)|$. Then switch the matching sets. By Supermodularity, payoffs on the individual side increase. Moreover, payoffs on the firm side are unchanged, since any firm that was matched with $j$ is now matched with individual $k$, who has the same matching set that $j$ had. The same switching argument works on the firm side. \end{proof} For the remainder of this section assume that Supermodularity holds. All results extend immediately to other supermodular orders.
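The switching argument in the proof of Lemma \ref{lem:monotonicity} can be checked numerically. A minimal sketch, using the hypothetical supermodular payoff $U(v,x) = vx$ (not a payoff specification used elsewhere in the paper):

```python
# Numerical illustration (hypothetical payoff) of the switching argument:
# with a supermodular U, assigning the larger matching set to the higher
# vertical type weakly increases the sum of payoffs.

def U(v, x):
    return v * x  # supermodular: U(v'',x'') + U(v',x') >= U(v'',x') + U(v',x'')

v_hi, v_lo = 2.0, 1.0      # vertical types of two individuals
x_big, x_small = 5.0, 3.0  # salience-weighted sizes of two matching sets

sorted_total = U(v_hi, x_big) + U(v_lo, x_small)   # monotone assignment
swapped_total = U(v_hi, x_small) + U(v_lo, x_big)  # switched assignment
print(sorted_total, swapped_total)  # 13.0 11.0
```

The switch leaves the collection of matching sets, and hence firm-side payoffs, unchanged; only the assignment of sets to types differs, which is why the comparison above settles the question.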
I now turn to establishing the aforementioned threshold structure of matching sets. One might be tempted to apply a similar proof as that of Lemma \ref{lem:monotonicity}. To see why this does not work, suppose $U^F(v,\cdot)$ is increasing and concave. Fix an individual, and suppose that they are matched with a firm with type $v'$ but not with a type $v''$ firm, where $v'' > v'$. Simply dropping the low type firm from, and adding the high type firm to, the individual's set clearly does not change the individual's payoff, but will not necessarily improve payoffs on the firm side. This is because, despite having a higher vertical type, the $v''$ firm will also have a higher quality matching set, by Lemma \ref{lem:monotonicity}. By concavity, this means precisely that the marginal change in match quality for the $v''$ firm is lower than for the $v'$ firm. The proof of the following proposition modifies the switching argument to accommodate this case. \begin{proposition}\label{prop1.1} For an individual with type $v_I$ there is a threshold $v^*_F(v_I)$ such that the individual is matched with a firm if and only if the firm's type is above $v^*_F(v_I)$. \end{proposition} \begin{proof} First I claim that the optimal matching should be characterized by a threshold in the marginal firm utility $U^F_2(v_F^j, |s_F(j)|_{S_I})$ (or the discrete analog). If this were not the case then switching out low marginal utility firms for high marginal utility firms would not change the size of the individual's matching set, and thus would have no effect on the individual's payoffs or their endogenous salience. Moreover, it would increase firm side payoffs.\footnote{I am using here the fact that the monotonicity constraint does not bind. If it did, I would have to guarantee that this switch does not violate monotonicity. This could be handled, but I do not need to do so here.} Firm marginal utilities are of course endogenous objects.
The result will follow if, at the optimal matching, the firm marginal utilities are increasing in $v_F$, which is what I now show. Let $\{S_F, S_I \}$ be the optimal matching. A necessary condition for $\{S_F, S_I \}$ to be optimal is that each individual's match be characterized by a threshold in the marginal firm utilities induced by $\{S_F, S_I \}$, as discussed above. Suppose that $U^F_2(v_F^j, |s_F(j)|_{S_I})$ is not increasing in firm type, that is, there are firms $j,k$ with $v_F^j > v_F^k$ such that \begin{equation}\label{eq2.1} U^F_2(v_F^k, |s_F(k)|_{S_I}) > U^F_2(v_F^j, |s_F(j)|_{S_I}). \end{equation} For this to hold there must be a positive measure of individuals for whom the marginal utility threshold defining their matching set falls strictly above $U^F_2(v_F^j, |s_F(j)|_{S_I})$ and weakly below $U^F_2(v_F^k, |s_F(k)|_{S_I})$ (otherwise $|s_F(j)|_{S_I} = |s_F(k)|_{S_I}$ and so (\ref{eq2.1}) does not hold). Then $s_F(j) \subsetneq s_F(k)$. Since $h \geq 0$, this implies $|s_F(k)|_{S_I} > |s_F(j)|_{S_I}$, contradicting monotonicity from Lemma \ref{lem:monotonicity}.\footnote{If $U^F(v,\cdot)$ is concave then we need not appeal to monotonicity: $|s_F(k)|_{S_I} > |s_F(j)|_{S_I}$ directly contradicts (\ref{eq2.1}). The argument can be modified to accommodate negative values of $h$.} \end{proof} \begin{corollary}\label{cor2.1} The threshold $v_F^*(v_I)$ is decreasing. \end{corollary} \begin{proof} Immediate from Lemma \ref{lem:monotonicity} and Proposition \ref{prop1.1}. \end{proof} \begin{corollary}\label{cor2.2} Firm matchings are characterized by a threshold $v_I^*(v_F)$. Moreover, $v_I^*(v_F)$ is decreasing. \end{corollary} \begin{proof} Immediate from Corollary \ref{cor2.1}. \end{proof} \subsection{Optimal matchings with horizontal differentiation} The previous section established that optimal matchings have a threshold structure when there is no horizontal differentiation.
With horizontal differentiation, the same structure continues to hold conditional on horizontal types; each individual $i$'s matching set will be characterized by a threshold function $v^*_F(\sigma_F, i)$ such that the individual matches with a firm of type $v_F$ and salience $\sigma_F$ if and only if $v_F \geq v^*_F(\sigma_F,i)$. Similarly for firms. In this section I will be interested in the features of optimal threshold functions. In particular, I will identify when these functions are increasing/decreasing. The platform's problem simplifies greatly when $U_F(v, \cdot)$ is affine. In this case the firm's payoffs can be written as $U_F(v,x) = a^F(v) + b^F(v) \cdot x$. If supermodularity holds then we can make a change of variables so that without loss of generality $b^F(v) = v$. Then the platform's objective function can be written as \begin{equation*} \int_I \left(U^I(v_i^I, |s_I(i)|) + \int_{s_I(i)} v^F_j \cdot h(|s_I(i)|, \sigma^I_i, \sigma^F_j) \ d\lambda(j) \right) d\lambda(i). \end{equation*} This integral can be maximized point-wise for each individual. This is true even when match qualities are constrained to be monotone in agent type, as will be the case when types are private information, since Lemma \ref{lem:monotonicity} tells us that these constraints will not bind. For simplicity, assume all primitive functions are differentiable. Consider the objective \begin{equation*} U^I(v_i^I, |s_I(i)|) + \int_{s_I(i)} v^F_j\cdot h(|s_I(i)|, \sigma^I_i, \sigma^F_j) \ d\lambda(j). \end{equation*} The marginal effect of adding a firm of type $v^F$ and salience $\sigma^F$ to the matching set of individual $i$ is \begin{equation}\label{eq:horizontal_foc} v^F \cdot h(|s_I(i)|, \sigma^I_i, \sigma^F) + \sigma^F \cdot\left( U^I_2(v_i^I, |s_I(i)|) + \int_{s_I(i)} v^F_j \cdot h_1(|s_I(i)|, \sigma^I_i, \sigma^F_j) \ d\lambda(j)\right).
\end{equation} The function $v^*_F(\sigma_F, i)$ will be decreasing (increasing) if for $\sigma_F'' > \sigma_F'$ the following single crossing property holds: if the marginal benefit in (\ref{eq:horizontal_foc}) is positive (negative) for $\sigma_F'$ then it is positive (negative) for $\sigma_F''$. Assume $h$ is decreasing in its first and third arguments (if $h$ is not decreasing in $\sigma^F$ then the threshold functions may be non-monotone). The term in parentheses in (\ref{eq:horizontal_foc}), $U^I_2(v_i^I, |s_I(i)|) + \int_{s_I(i)} v^F_j \cdot h_1(|s_I(i)|, \sigma^I_i, \sigma^F_j) \ d\lambda(j)$, is the key determinant of the slope of $v^*_F(\sigma_F, i)$. The first part of this expression is the marginal benefit to the individual of increasing the size of their matching set. The second part is the inframarginal cost to all firms matched with this individual of increasing the size of the matching set. If the sum of these two is positive, it means that the benefit to the individual outweighs the externality imposed on firms. But then this individual should be matched with all firms that have positive values $v^F_j$. Moreover, the individual should be matched with firms with negative values only if $\sigma^F_j$ is large enough, so that the benefit to the individual is sufficient to outweigh both the cost to the new firm and the inframarginal cost to all other firms. If on the other hand $U^I_2(v_i^I, |s_I(i)|) + \int_{s_I(i)} v^F_j \cdot h_1(|s_I(i)|, \sigma^I_i, \sigma^F_j) \ d\lambda(j) < 0$ then the individual should never match with negative value firms, and should only match with positive value firms if their salience is not too high. These observations are summarized in the following Lemma.
\begin{lemma}\label{lem:horiz_structure} Assume $h$ is decreasing in its first and third arguments (match size and firm salience).\footnote{Just assuming that $h$ is decreasing in match size, we can conclude that if $U^I_2(v_i^I, |s_I(i)|) + \int_{s_I(i)} v^F_j \cdot h_1(|s_I(i)|, \sigma^I_i, \sigma^F_j) \ d\lambda(j) \geq 0$ then the individual will be matched with all positive value firms.} Then the matching sets of individuals have the following structure: \begin{itemize} \item There exists a threshold $v^{**}$ such that an individual $i$'s matching set contains firms with types $v^F_j < 0$ if and only if $v^I_i > v^{**}$. \item The individual's threshold function $v^*_F(\sigma_F,i)$ is downward sloping if $v^I_i > v^{**}$ and upward sloping otherwise. \end{itemize} \end{lemma} Lemma \ref{lem:horiz_structure} has some interesting implications. If any firms have positive values then high-value, low-salience firms will be matched with the largest set of individuals (in the set inclusion sense). If no firms have positive values then the largest matching sets will instead go to high-value, high-salience firms. Suppose that firms' vertical types are their private information, while horizontal types are known to the platform. For example, the vertical type may reflect a firm's marginal cost, while its horizontal type is the attractiveness of its product to consumers. Below, I will discuss more extensively the platform's problem when types are private information. Suppose that the platform makes monetary transfers with firms, and that firms' payoffs are quasi-linear in money. In this context it is easy to see, from the usual \cite{myerson1981optimal} argument, that the objective of a revenue maximizing platform will look the same as (\ref{eq:objective}), except that the firms' vertical types $v^F_j$ will be replaced with their ``virtual values'' $\varphi(v^F_j, \sigma^F_j) = v^F_j - \frac{1 - Q_F(v^F_j| \sigma^F_j)}{q_F(v^F_j| \sigma^F_j)}$.
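The virtual-value adjustment can be made concrete numerically. The following sketch computes $\varphi$ under the purely illustrative assumption that firm values are Uniform$[0,1]$ conditional on salience, so that $Q_F(v|\sigma) = v$, $q_F(v|\sigma) = 1$, and $\varphi(v) = 2v - 1$:

```python
# Myerson virtual values phi(v) = v - (1 - Q(v)) / q(v), computed under the
# hypothetical assumption of Uniform[0,1] values, so Q(v) = v, q(v) = 1,
# and phi(v) = 2v - 1.

def virtual_value(v, Q, q):
    return v - (1.0 - Q(v)) / q(v)

Q = lambda v: v      # CDF of Uniform[0, 1]
q = lambda v: 1.0    # density of Uniform[0, 1]

for v in (0.2, 0.5, 0.9):
    print(v, virtual_value(v, Q, q))  # -0.6, 0.0, 0.8
```

Note that $\varphi$ is negative on the bottom half of the type space even though every true value is positive, which drives the first-best/second-best contrast discussed next.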
Virtual values may be negative even if all true values are positive. As a result, the second best matching, in which firm types must be elicited, and the first best matching, in which they are known, may be qualitatively different: in the first best, conditional on values, low-salience firms will always be matched with a superset of the individuals matched with high-salience firms. In the second best the reverse may hold: for low enough values, high-salience firms are matched with a superset of the individuals that low-salience firms with the same value are matched with. \subsection{Comparative statics} Throughout this section, assume that there is no horizontal differentiation on either side. I will be interested in how the optimal matching changes when some of the firms become ``more prominent'', in a sense I will make precise. Intuitively, there are two effects of a platform placing greater weight on the payoffs of firm $j$. For one, it should increase the size of firm $j$'s matching set. This effect is easy to see. On the other hand, it should change the other firms' matching sets to increase the value of firm $j$'s matching. If $h$ is decreasing this means reducing the size of the other firms' matching sets. How these two effects interact is not obvious in general. However, there will be cases in which the effect on the matching structure can be identified. I explore such cases in this section. By ``more prominent'' I mean that the marginal value of increasing the quality of a firm's matching increases. \noindent \textbf{Increasing differences change.} A change in the payoffs of firm $j$ from $U^F$ to $\hat{U}^F$ is an \textit{increasing differences change} if for $x'' > x'$, $\hat{U}^F(v^F_j, x'') - \hat{U}^F(v^F_j, x') \geq U^F(v^F_j, x'') - U^F(v^F_j, x')$. \begin{lemma}\label{lem6.1} The quality of firm $j$'s matching set increases following an increasing differences change in its payoffs.
\end{lemma} \begin{proof} This follows from the resulting single-crossing property of the objective. \end{proof} To further understand the changes in the matching, it is important to first clarify the necessary conditions for optimality of a matching. Assume that there are $N < \infty$ firms and a continuum of individuals. The argument can be easily modified for the case of finitely many individuals. For notational simplicity, assume that Supermodularity holds (all results go through under Order-supermodularity). Label the firms in order of types, with firm 1 being the highest type and firm $N$ the lowest. If there is no horizontal differentiation, $h(n)$ is the endogenous salience of an individual matched with $n$ firms. Let $v_I^*(j)$ be the threshold type for firm $j$'s matching set. If firm $j$ marginally increases the size of its matching set by lowering the cut-off, it benefits from a larger matching set, but affects the endogenous salience of the newly added individuals, who are included in the matching sets of all higher types. Suppose that firm $j$ adds a marginal individual to its matching set. Assume that there is not another firm with the exact same matching set as $j$, and that $U^I$ is continuous in its first argument. Using the fact that the optimal matching has the threshold structure described in Proposition \ref{prop1.1}, the FOC for such a change is given by\footnote{If another firm has the same matching set we just need to modify the term $[h(j-1) - h(j)]$ in equation (\ref{eq:foc}).} \begin{equation}\label{eq:foc} \begin{split} U^F_2(v^F_j, |s_F(j)|_{S_I}) \cdot h(j) - &\sum_{k = 1}^{j-1} U^F_2(v^F_k, |s_F(k)|_{S_I}) \cdot [h(j-1) - h(j)] \\ &+ U^I(v_I^*(j), j) - U^I(v_I^*(j), j-1) \geq 0 \end{split} \end{equation} with equality if $v^*_I(j)$ is interior.
The first term in (\ref{eq:foc}) is the marginal benefit to firm $j$, the second is the inframarginal cost to all firms matched with the newly added individual, and the last is the direct benefit to the new individual. \begin{proposition}\label{prop:comp_stat1} Suppose there is an increasing differences change in the payoffs of firm $k$, and that the supermodular order remains unchanged. If $U^F(v, \cdot)$ is concave for all $v$ and $h$ is decreasing then all firms $j > k$ receive smaller matching sets and firm $k$ receives a larger matching set. \end{proposition} \begin{proof} Throughout this proof $s$ and $S$ will refer to the original matchings and $\tilde{s}$, $\tilde{S}$ will refer to the new matching. If $k=N$ then the proposition follows immediately from Lemma \ref{lem6.1}, so assume $k < N$. \textit{Claim 1. There cannot exist an $\ell$ such that firms $N, N-1, \dots,\ell$ receive larger matching sets after the change.} First, suppose that not all firms receive larger matching sets. Let $j$ be the lowest value firm that receives a strictly smaller matching set (Lemma \ref{lem6.1} implies that $j \neq k$). Since $j$ receives a smaller matching set, $\tilde{v}^*_I(j)$ must be larger than $v^*_I(j)$. Then Supermodularity implies that \begin{equation*} U^I(\tilde{v}_I^*(j), j) - U^I(\tilde{v}_I^*(j), j-1) > U^I(v_I^*(j), j) - U^I(v_I^*(j), j-1). \end{equation*} Since all lower value firms receive larger matching sets and $h$ is decreasing, firm $j$'s matching set is of lower quality. By concavity, $U^F_2(v^F_j, |\tilde{s}(j)|_{\tilde{S}_I}) > U^F_2(v^F_j, |s(j)|_{S_I})$. Then the FOC for $j$ holds only if \begin{equation*} \sum_{t = 1}^{j-1} U^F_2(v^F_t, |\tilde{s}_F(t)|_{\tilde{S}_I}) \cdot [h(j-1) - h(j)] \end{equation*} is larger as well.
But then \begin{equation}\label{eq6.1} \sum_{t = 1}^j U^F_2(v^F_t, |\tilde{s}_F(t)|_{\tilde{S}_I}) \cdot [h(j-1) - h(j)] > \sum_{t = 1}^j U^F_2(v^F_t, |s_F(t)|_{S_I}) \cdot [h(j-1) - h(j)]. \end{equation} Since firm $j+1$ received a larger matching set, $\tilde{v}^*_I(j+1)$ is smaller. Then Supermodularity implies that \begin{equation*} U^I(\tilde{v}_I^*(j+1), j+1) - U^I(\tilde{v}_I^*(j+1), j) < U^I(v_I^*(j+1), j+1) - U^I(v_I^*(j+1), j). \end{equation*} Then the FOC for $j+1$ and (\ref{eq6.1}) imply that $U^F_2(v^F_{j+1}, |\tilde{s}_F(j)|_{\tilde{S}_I})$ is larger (or $\hat{U}^F_2(v^F_{j+1}, |\tilde{s}_F(j)|_{\tilde{S}_I})$ if $j+1 = k$). Proceeding in this way we conclude that $U^F_2(v^F_{N}, |\tilde{s}_F(N)|_{\tilde{S}_I})$ must be larger. But if firm $N$ received a larger matching set then concavity implies that $U^F_2(v^F_{N}, |\tilde{s}_F(N)|_{\tilde{S}_I})$ is strictly smaller after the change (since $k < N$ by assumption). Thus we have a contradiction. Now suppose all firms receive larger matching sets. If firm $1$ receives a larger matching set then (\ref{eq:foc}) implies that its marginal value $U^F_2(v^F_1, |\tilde{s}(1)|_{\tilde{S}_I})$ must be higher. Since firm $2$ receives a larger matching set, $\tilde{v}^*_I(2)$ must be smaller. Then (\ref{eq:foc}) implies that $U^F_2(v^F_2, |\tilde{s}(2)|_{\tilde{S}_I})$ must be larger. Proceeding in this way we conclude that $U^F_2(v^F_N, |\tilde{s}(N)|_{\tilde{S}_I})$ is larger after the change, which we have already noted is a contradiction. \textit{Claim 2. No firm $j > k$ can receive a larger matching set after the change.} Let $j$ be the lowest value firm that receives a larger matching set, and suppose $j > k$. By Claim 1, $j < N$. Since $j$ receives a larger matching set, $\tilde{v}^*_I(j)$ is smaller, so Supermodularity implies that $U^I(\tilde{v}_I^*(j), j) - U^I(\tilde{v}_I^*(j), j-1)$ is smaller.
Since all firms $m > j$ receive smaller matching sets, concavity implies that $U^F_2(v^F_j, |\tilde{s}(j)|_{\tilde{S}_I})$ is smaller. Then the FOC for $j$ implies that \begin{equation*} \sum_{t = 1}^{j-1} U^F_2(v^F_t, |\tilde{s}_F(t)|_{\tilde{S}_I}) \cdot [h(j-1) - h(j)] \end{equation*} is smaller. But then we have that \begin{equation} \sum_{t = 1}^j U^F_2(v^F_t, |\tilde{s}_F(t)|_{\tilde{S}_I}) \cdot [h(j-1) - h(j)] < \sum_{t = 1}^j U^F_2(v^F_t, |s_F(t)|_{S_I}) \cdot [h(j-1) - h(j)]. \end{equation} Since $j+1$ receives a smaller matching set, $\tilde{v}^*_I(j+1)$ is larger. But then the FOC for $j+1$ implies that $U^F_2(v^F_{j+1}, |\tilde{s}(j+1)|_{\tilde{S}_I})$ is smaller. Proceeding in this way we conclude that $U^F_2(v^F_N, |\tilde{s}(N)|_{\tilde{S}_I})$ is smaller. But this contradicts the assumption that $N$ receives a smaller (and thus worse) matching set, given concavity. \textit{Claim 3. Firm $k$ receives a larger matching set.} Suppose $k$ receives a smaller set. If some firm with a higher value than $k$ receives a larger set, let $j$ be the lowest value such firm. Then using the same proof as in Claim 2 we can arrive at a contradiction, so no firm can receive a larger set. If firm 1 receives a smaller matching set then the FOC for 1 implies that $U^F_2(v^F_1, |\tilde{s}(1)|_{\tilde{S}_I})$ (or $\hat{U}^F_2(v^F_1, |\tilde{s}(1)|_{\tilde{S}_I})$ if $k=1$) must be smaller. Then again we can use the proof of Claim 2 to arrive at a contradiction. \end{proof} The assumption of concavity of $U_F(v,\cdot)$ is used throughout the proof, but it is not necessary. This can be seen most easily by appealing to continuity of the objective function. The strict version of the comparative statics result of Proposition \ref{prop:comp_stat1} holds when $U_F(v,\cdot)$ is affine, and given appropriate continuity assumptions Berge's theorem implies that it will continue to hold if $U_F(v,\cdot)$ is perturbed slightly to be strictly convex.
Thus it is not clear exactly what the role of concavity is in the result. To try to build some intuition, consider an increasing differences change to the payoffs of firm 1. At the new optimal matching, the sum of payoffs for all other firms must be smaller (otherwise it would have been worthwhile to change their matchings before the change in firm 1's payoffs). Roughly speaking, concavity of $U_F(v,\cdot)$ implies that it is optimal to spread the reduction in payoffs across all these firms. This intuition is incomplete, and it would be valuable to know how far convexity of $U_F(v,\cdot)$ can be pushed. Let $v^*_I(k)$ be the threshold individual type defining firm $k$'s matching set. If all firms with lower types than $k$ receive smaller sets then all individuals with types higher than $v^*_I(k)$ receive smaller matching sets. \begin{corollary}\label{cor1.1} Under the conditions of Proposition \ref{prop:comp_stat1}, all individuals with types higher than $v^*_I(k)$ receive (weakly) smaller matching sets. \end{corollary} If an individual's utility is given entirely by $U^I$, and is increasing in match size, then Corollary \ref{cor1.1} implies that all individuals with types above $v^*_I(k)$ are weakly worse off following the change in $k$'s payoffs. Even if this is not the case, for example if there are transfers between individuals and the platform, it may be possible to determine the welfare effects of such a change. I return to this question in the next section. It is easy to see from the proof of Proposition \ref{prop:comp_stat1} that the result can be extended to increasing differences changes in the payoffs of multiple firms. \begin{lemma}\label{lem:compstat_multi} Under the conditions of Proposition \ref{prop:comp_stat1}, if there is an increasing differences change in the payoffs of a set $C$ of firms, then all firms $j > \max C$ receive smaller matching sets.
\end{lemma} Proposition \ref{prop:comp_stat1} does not specify what happens to the matching sets of higher types than the one for which there is an increasing differences change. This will in general depend on the function $h$. This is because it is not clear what the effect of the described changes is on the value of the original matching sets for such types. The fact that all $j > k$ receive smaller matching sets benefits types $m < k$. However, these types are also hurt by the fact that $k$ receives a larger matching set. If $U^F(v,\cdot)$ is affine then comparative statics are much easier to identify. In this case, we can write $U^F(v,x) = \alpha^F(v) + \beta^F(v)\cdot x$. We can then look at the problem entirely from the point of view of an individual, and the problem separates across individuals. In other words, the objective function can be written as \begin{equation*} \int_I\left( U^I(v^I_i, |s_I(i)|) + h(|s_I(i)|) \cdot \int_{s_I(i)} \beta^F(v^F_j) d \lambda(j) \right)d\lambda(i). \end{equation*} An increasing differences change in payoffs here corresponds to an increase in $\beta^F(v)$. The following lemma is immediate. \begin{lemma}\label{lem:affine_welfare} If $U_F(v,\cdot)$ is affine then the size of firm $j$'s matching set increases following an increasing differences change in payoffs. \end{lemma} In this case we can also identify what happens when there is an increasing differences change to the payoffs of the lowest type. The following result is immediate given the threshold structure of matching sets. \begin{lemma}\label{lem:compstat_affine} Suppose there is an increasing differences change in the payoffs of firm $N$, and that the supermodular order remains unchanged. If $U_F(v,\cdot)$ is affine then either firm $N$ is added to an individual's matching set or the individual's matching set remains unchanged.
\end{lemma} \subsection{Individuals with private information}\label{sec:privateinfo} Propositions \ref{prop1.1} and \ref{prop:comp_stat1} will be particularly interesting when there is a continuum of individuals who have private information about their type, which must be elicited by the platform. Assume that Supermodularity holds on side $I$ (the arguments extend directly to Order-supermodularity). In this setting the usual \cite{myerson1981optimal} argument implies that in any incentive compatible mechanism the payoff of individual $i$ can be written as \begin{equation}\label{eq:envelope} V(v^I_i) = V(\underline{v}^I) + \int\limits_{\underline{v}^I}^{v^I_i} U^I_1(v, |s_I(v)|) dv. \end{equation} Suppose that the conditions of Proposition \ref{prop:comp_stat1} are satisfied, and that there is an increasing differences change in the payoffs of firm $1$. Corollary \ref{cor1.1} says that all individuals with types above $v^*_I(1)$ receive smaller matching sets. Under Supermodularity, $U^I_1(v,x)$ is increasing in $x$. Then if $V(v^*_I(1))$ does not increase, the envelope condition in (\ref{eq:envelope}) implies that all individuals will be worse off. This will be the case if $v^*_I(1) = \bar{v}^I$, which in turn will hold whenever both $U^F(v^F_1, \cdot)$ and $U^I(\underline{v},\cdot)$ (or the variant of payoffs used in the platform's problem, if it is not the same as (\ref{eq:objective})) are increasing. \begin{lemma}\label{lem:welfare} When individuals' types are private information and $v^*_I(1)= \bar{v}^I$, all individuals are worse off when all receive smaller matching sets, and better off when all receive larger matching sets. \end{lemma} In order to apply Lemma \ref{lem:welfare} it need not be the case that the platform's objective exactly corresponds to that in (\ref{eq:objective}); it is enough that the platform's objective satisfies the conditions of Proposition \ref{prop:comp_stat1}.
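The envelope representation in (\ref{eq:envelope}) is what allows expected transfers to be expressed through virtual values. The following sketch verifies this equivalence numerically under purely illustrative primitives (uniform types on $[0,1]$, $U^I(v,x) = v\cdot x$, and a hypothetical monotone match-size rule $x(v) = v$; none of these are taken from the model above):

```python
import numpy as np

# Illustrative primitives (assumptions for this check, not from the model):
# types v ~ U[0,1], U^I(v, x) = v * x, and a monotone match-size rule x(v) = v.
v = np.linspace(0.0, 1.0, 20_001)
x = v

def trap(y, grid):
    # simple trapezoidal integral
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(grid)))

# Transfers pinned down by the envelope condition with V(0) = 0:
# t(v) = v * x(v) - \int_0^v x(s) ds
cum = np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) / 2.0 * np.diff(v))))
t = v * x - cum
expected_transfers = trap(t, v)                # density q(v) = 1

# Virtual-value expression: \int (v - (1 - Q(v)) / q(v)) x(v) dQ(v)
phi = v - (1.0 - v)                            # uniform: Q(v) = v, q(v) = 1
virtual_revenue = trap(phi * x, v)

# Both equal 1/6 for these primitives.
assert abs(expected_transfers - virtual_revenue) < 1e-6
```

The agreement of the two numbers is just Myerson's lemma; the same identity underlies the payment formula used below.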
In many settings individual payoffs are quasi-linear and the platform wants to maximize transfers from individuals. Normalize the outside option of the lowest type, $U(\underline{v}, 0)$, to zero. If individual payoffs are given by $u^I(v,|s_I(v)|) - t(v)$ then the sum of transfers from all individuals is given by \begin{equation}\label{eq:payments} \int\limits_{\underline{v}}^{\bar{v}}\left[ u^I(v,|s_I(v)|) - \dfrac{1- Q_I(v)}{q_I(v)} u_1^I(v, |s_I(v)|) \right] q_I(v) dv. \end{equation} Under Supermodularity, the matching mechanism is incentive compatible for individuals if and only if individual match sizes are increasing in types and payments are constructed to satisfy the envelope condition in (\ref{eq:envelope}). Proposition \ref{prop:comp_stat1} will hold provided the integrand in (\ref{eq:payments}) is Supermodular. Moreover, under this condition the monotonicity constraint will not bind.\footnote{Without this condition Proposition \ref{prop1.1} will hold, i.e. the optimal matching will still have a threshold structure, but the proof of Proposition \ref{prop:comp_stat1} does not go through.} \section{Extension: preferences over vertical types}\label{sec:extension} One extension of the model, which arises naturally in many applications, is that individuals and/or firms may care about the vertical types of their matches. In the MVPD example explored above, we might think that channels prefer individuals with high viewing propensity, or individuals prefer channels with high quality programming (which in this case we interpret as high $v_F$). The main results will continue to hold when agents on one side prefer matches with higher type agents on the other side.
This reinforces the optimality of giving better quality match sets to high type individuals, which is the key driver of the threshold structure, and thus of the results that follow.\footnote{We could interpret the model in which agents care about the vertical types of those on the other side as a special case of the model presented above, in which the horizontal types are perfectly correlated with the vertical types. However, under this interpretation the results of the previous section become trivial: a threshold structure of matchings conditional on horizontal type is meaningless when there is a single vertical type for each horizontal type. Moreover, we will be interested in settings in which vertical types are private information. The case of privately known horizontal types was not studied above.} Another natural extension is if the endogenous salience of an individual depends on the vertical types of the firms they are matched with. Again, if the endogenous salience is increasing in the types of an individual's matches then there will be no complications to the previous results. However, in many cases this is not the direction we expect. An example, which I will explore in detail later on, is that monopolistically competitive firms may care not only about how many other firms their customers have access to, but also about whether these are high or low cost firms. Low cost (high type) firms are more damaging competitors because they are able to charge a lower price. Nonetheless, the general welfare conclusions will continue to hold; broadly speaking, increasing differences payoff changes for high type firms will hurt individuals, while such changes for low type firms will benefit individuals. In order to gain tractability when this holds, I assume that $U^F$ is affine in match quality.
As we will see, this does not immediately imply that the platform's problem will separate across individuals, as would be the case if vertical types were not payoff relevant for agents on the other side of the market; when agents' types are their private information the monotonicity constraint for incentive compatible mechanisms may bind. In this section I will provide conditions under which the problem is separable, characterize the optimal matching, and explore comparative statics. These results will be useful in applications, as illustrated in Section \ref{sec:monopcomp}. For simplicity, consider the model without horizontal differentiation. The payoffs of a type $v_I$ individual are given by $U^I(v_I,V_I(s_I))$ where \begin{equation*} V_I(s_I) = \int\limits_{s_I} v_j d\lambda(j). \end{equation*} $U^I$ is strictly increasing in its second argument. The payoffs of a type $v_F$ firm are given by $\beta^F(v_F)\cdot V_F(s_F|S_I)$, where \begin{equation*} V_F(s_F|S_I) = \int\limits_{s_F} h(v_I(i), V_I(s_I(i))) d\lambda(i) \end{equation*} and $h \geq 0$. The platform's objective can be written as \begin{equation}\label{eq:general_obj} \int\left[ U^I(v_I, V_I(s_I(v_I))) + h(v_I,V_I(s_I(v_I))) \cdot \int\limits_{s_I(v_I)} \beta(v_F) dQ_F(v_F)\right]dQ_I(v_I). \end{equation} I will refer to maximizing the integrand in (\ref{eq:general_obj}) separately for each individual as maximizing pointwise. Denote the integrand by $\Pi(s_I|v_I)$. The solution to the platform's problem may not be given by pointwise maximization, in particular if the platform is subject to a monotonicity constraint imposed by incentive compatibility. We cannot appeal to Proposition \ref{prop1.1} to establish the threshold structure of optimal matchings in this setting, which would imply that the monotonicity constraint does not bind. This is because the proof of Lemma \ref{lem:monotonicity}, which says that higher type firms receive larger matching sets, does not go through.
The reason is that higher type firms impose a larger negative externality on other firms by setting lower prices. However, if the problem can be solved pointwise the solution is easy to characterize. This characterization also reveals the relevant assumptions needed to guarantee that the matchings have a threshold structure in vertical types. \begin{lemma}\label{lem:general_threshold} Pointwise maximization of (\ref{eq:general_obj}) yields matchings for each individual type $v_I$ that are characterized by a threshold in $\beta(v_F)/v_F$: a firm is included if and only if $\beta(v_F)/v_F$ is high enough. \end{lemma} \begin{proof} The integrand in (\ref{eq:general_obj}) only depends on the matching through the terms $V_I(s_I(v_I))$ and $\int_{s_I(v_I)} \beta(v_F)dQ_F(v_F)$. Consider the problem of maximizing $\int_{s_I(v_I)} \beta(v_F)dQ_F(v_F)$ subject to $V_I(s_I(v_I)) = \bar{V}$. Since the objective and the constraint are linear, this amounts to including a firm in the matching set if and only if $\beta(v_F)/v_F$ is high enough. \end{proof} An immediate implication of Lemma \ref{lem:general_threshold} is that $V_F(s_F|S_I)$ is increasing in $\beta(v_F)/v_F$. However, it says nothing about the comparison of matching sets for different individuals. Monotonicity properties of individual side matchings can be derived from the usual conditions (single crossing, interval order dominance, etc.) on the integrand. I will explore an application in detail, and delay further discussion of such results until then. Instead, I will present a useful comparative statics result on the matching set of a given individual that makes use of the threshold structure identified in Lemma \ref{lem:general_threshold}. As we will see, it will be interesting to compare the optimal matchings under different firm payoffs, captured by different functions $\beta$ and $\tilde{\beta}$. Assume $v \mapsto \beta(v)/v$ and $v \mapsto \tilde{\beta}(v)/v$ are increasing.
This is not necessary to obtain the types of results that follow, but it simplifies the notation, and will, in any case, be satisfied in many applications.\footnote{If these functions are not increasing then we can attain similar comparative statics results with respect to the order they induce.} If this holds then each individual's optimal matching set will be defined by a threshold in firm type. Let $\Pi(s_I|v_I,\beta)$ be the integrand of (\ref{eq:general_obj}) under $\beta$. Let $v^*(s_I)$ and $\tilde{v}^*(s_I)$ be the threshold types that maximize $\Pi(s_I|v_I,\beta)$ and $\Pi(s_I|v_I,\tilde{\beta})$ respectively; these define the optimal matchings under pointwise maximization. The following comparative statics result is straightforward, but will be useful in applications. \begin{lemma}\label{lem:B_compstat_simple} Let $h(s_I,\cdot)$ be strictly decreasing. Assume $v \mapsto \beta(v)/v$ and $v \mapsto \tilde{\beta}(v)/v$ are increasing, and $\tilde{\beta} \geq \beta$. If $\tilde{\beta}(v) = \beta(v)$ for all $v \leq v^*(s_I)$ then $\tilde{v}^*(s_I) \geq v^*(s_I)$. If $\tilde{\beta}(v) = \beta(v)$ for all $v \leq v^*(s_I) + \varepsilon$ for some $\varepsilon > 0$ and $\tilde{\beta}(v) \geq \beta(v)$ for all $v > v^*(s_I) + \varepsilon$, with strict inequality on a positive measure set, then $\tilde{v}^*(s_I) > v^*(s_I)$. \end{lemma} \begin{proof} Abusing notation, for a matching set $s_I$ defined by a threshold $v^*$, denote the integrand by $\Pi(v^*|v_I,\beta)$. Suppose $\tilde{\beta}(v) = \beta(v)$ for all $v \leq v^*(s_I)$. For any $v^* \leq v^*(s_I)$ we have $\Pi(v^*|v_I,\tilde{\beta}) - \Pi(v^*|v_I,\beta) = h(v_I, V_I(v^*))\int_{v^*(s_I)}^{\bar{v}}(\tilde{\beta}(v) - \beta(v))dQ_F(v)$, which is increasing in $v^*$ because $h$ is decreasing, $V_I(v^*)$ is decreasing in $v^*$, and $\tilde{\beta} \geq \beta$. Hence $\Pi(v^*|v_I,\tilde{\beta}) - \Pi(v^*(s_I)|v_I,\tilde{\beta}) \leq \Pi(v^*|v_I,\beta) - \Pi(v^*(s_I)|v_I,\beta) \leq 0$ for all $v^* < v^*(s_I)$. This implies that $\tilde{v}^*(s_I) \geq v^*(s_I)$. The second part of the lemma, the strict inequality, follows from the fact that $h$ is strictly decreasing and $\int_{v^*}^{\bar{v}} \tilde{\beta}(v) dQ_F(v) > \int_{v^*}^{\bar{v}} \beta(v) dQ_F(v)$ under the stated assumptions.
\end{proof} If, on the other hand, the increase in $\beta$ occurs for firms below $v^*(s_I)$, then the opposite conclusion holds. The proof is symmetric to that of Lemma \ref{lem:B_compstat_simple}. \begin{lemma}\label{lem:B_compstat_simple2} Let $h(s_I,\cdot)$ be strictly decreasing. Assume $v \mapsto \beta(v)/v$ and $v \mapsto \tilde{\beta}(v)/v$ are increasing, and $\tilde{\beta} \geq \beta$. If $\tilde{\beta}(v) = \beta(v)$ for all $v > v^*(s_I)$ then $\tilde{v}^*(s_I) \leq v^*(s_I)$. If $\tilde{\beta}(v) = \beta(v)$ for all $v \geq v^*(s_I) - \varepsilon$ for some $\varepsilon > 0$ and $\tilde{\beta}(v) \geq \beta(v)$ for all $v < v^*(s_I) - \varepsilon$, with strict inequality on a positive measure set, then $\tilde{v}^*(s_I) < v^*(s_I)$. \end{lemma} Suppose that there is no direct individual-side component to platform payoffs, so $U^I = 0$. Consider, for example, an online retail platform that generates revenue by charging fees to sellers but allows customers free access to the site. Under this assumption we can draw stronger comparative statics conclusions. Proposition \ref{prop:B_compstat} is relevant for considering the effects of technological change on the matching. \begin{proposition}\label{prop:B_compstat} Assume $h(v_I,\cdot)$ is decreasing and differentiable for all $v_I$, $U^I = 0$, $\beta(v)/v$ is increasing, and $\int_{\underline{v}}^{\bar{v}} \beta(v)dv \geq 0$.\footnote{A similar result holds if $\beta(v)/v$ is not increasing, we just have to define $\alpha$ to be increasing in the order on types induced by $\beta(v)/v$.} Suppose there exists an increasing and strictly positive function $\alpha$ such that $\tilde{\beta}(v) = \alpha(v) \beta(v)$. Then under pointwise maximization all individuals and all firms receive smaller matching sets under $\tilde{\beta}$ than under $\beta.$ \end{proposition} The proof of Proposition \ref{prop:B_compstat} makes use of the following result (see \cite{quah2009comparative} for a proof).
\begin{lemma}\label{lem:integral_ineq} Suppose $[x',x'']$ is a compact interval of $\mathbb{R}$ and that $\alpha$ and $h$ are real valued functions on $[x',x'']$, with $h$ integrable and $\alpha$ increasing (and thus integrable as well). If $\int_x^{x''} h(t)dt \geq 0$ for all $x \in [x',x'']$ then \begin{equation*} \int\limits_{x'}^{x''} \alpha(t)h(t)dt \geq \alpha(x')\int\limits_{x'}^{x''}h(t)dt. \end{equation*} \end{lemma} Proposition \ref{prop:B_compstat} follows from Proposition 2 and Theorem 1 of \cite{quah2009comparative}, and Lemma \ref{lem:integral_ineq}. \begin{proof}\textit{Proposition \ref{prop:B_compstat}}. Consider an individual with type $v_I$. Lemma \ref{lem:general_threshold} means that the individual's matching set will be defined by a threshold firm type $v^*$. Then, abusing the definition of $V_I$, we can write \begin{equation*} \Pi(v^*|\tilde{\beta}) = h(v_I, V_I(v^*)) \cdot \int\limits_{v^*}^{\bar{v}_F} \tilde{\beta}(v) q_F(v) dv \end{equation*} where $V_I(v^*) = \int_{v^*}^{\bar{v}_F} v q_F(v)dv$. The derivative of the objective with respect to $v^*$ is \begin{align*} \Pi'(v^*|\tilde{\beta}) &= -h_2(v_I,V_I(v^*))v^*q_F(v^*) \cdot \int\limits_{v^*}^{\bar{v}_F} \tilde{\beta}(v) q_F(v) dv - h(v_I,V_I(v^*)) \cdot \tilde{\beta}(v^*)q_F(v^*) \\ & \geq -h_2(v_I,V_I(v^*))v^*q_F(v^*)\alpha(v^*) \cdot \int\limits_{v^*}^{\bar{v}_F} \beta(v) q_F(v) dv - h(v_I,V_I(v^*)) \cdot\alpha(v^*) \beta(v^*)q_F(v^*) \\ & = \alpha(v^*)\Pi'(v^*|\beta), \end{align*} where the inequality uses $\tilde{\beta} = \alpha\beta$, Lemma \ref{lem:integral_ineq}, and $h_2 \leq 0$. Proposition 2 and Theorem 1 of \cite{quah2009comparative} then imply that the optimal threshold is higher for $\tilde{\beta}$ than $\beta$, which means that all individuals receive smaller matching sets. Then all firms also receive smaller matching sets.
\end{proof} \section{Applications}\label{sec:applications} \subsection{Cable packages} A monopolistic multi-channel video program distributor (MVPD), such as DirecTV or Comcast, faces a population of viewers with unknown values for programming. The MVPD offers a menu of packages of different channels to viewers. At the same time, the MVPD negotiates carriage fees with channels (referred to as video programmers in the IO literature). Video programmers benefit from viewers through advertising revenue, but may differ in terms of their cost of producing programming or their attractiveness to advertisers. From a programmer's perspective, a viewer who has access to a large number of additional channels is less valuable, as they are likely to spend less time watching a given channel. The objective of the MVPD is to maximize revenue. The payoff of individual $i$ with type $v^I_i$ who purchases a package consisting of a set $s$ of channels at a price of $p$ is given by $v^I_i \cdot g_I(|s|) - p$, where $g_I$ is increasing and $g_I(0) = 0$. Implicit in this formalization is the assumption that viewers like all channels equally, and only care about the number of channels they have access to. I will discuss this assumption later on. As discussed above, the total revenue to the platform from viewer fees generated by a direct mechanism $s: [\underline{v}_I, \bar{v}_I] \to 2^F$, where $F$ is the set of channels, is given by \begin{equation}\label{eq:revenue} \int\limits_{\underline{v}_I}^{\bar{v}_I} \left(v - \dfrac{1- Q_I(v)}{q_I(v)} \right) g_I(|s(v)|) dQ_I(v). \end{equation} Assume $Q_I$ is regular in the sense of \cite{myerson1981optimal}, so that the virtual values $\varphi(v) = v - (1-Q_I(v))/q_I(v)$ are increasing. Lemma \ref{lem:welfare} applies in this setting. After showing that the conditions of Propositions \ref{prop1.1} and \ref{prop:comp_stat1} are satisfied, we will identify changes to the environment which make all individuals worse off.
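The regularity assumption can be verified directly for common specifications. A small numerical sketch, using two illustrative distributions on $[0,1]$ (choices made here for the check, not imposed by the model), confirms that the virtual values are increasing:

```python
import numpy as np

# Regularity check: phi(v) = v - (1 - Q(v)) / q(v) should be increasing.
v = np.linspace(0.0, 1.0, 10_001)

# Uniform on [0, 1]: Q(v) = v, q(v) = 1  =>  phi(v) = 2v - 1.
phi_unif = v - (1.0 - v) / 1.0

# Exponential truncated to [0, 1]: Q(v) = (1 - exp(-v)) / (1 - exp(-1)).
q_exp = np.exp(-v) / (1.0 - np.exp(-1.0))
Q_exp = (1.0 - np.exp(-v)) / (1.0 - np.exp(-1.0))
phi_exp = v - (1.0 - Q_exp) / q_exp            # simplifies to v - 1 + e^{v-1}

assert np.all(np.diff(phi_unif) > 0)
assert np.all(np.diff(phi_exp) > 0)
```

Both checks pass because each hazard rate $q/(1-Q)$ is increasing; a distribution failing regularity would require ironing, which is outside the scope of this section.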
The payoff of channel $j$ matched with a set $s_F(j)$ of viewers is $U^F(v^F_j, |s_F(j)|_{S_I})$. I assume that the function $h$ determining individuals' endogenous salience is decreasing; the more channels an individual has access to, the less valuable they are. Assume that $U^F$ is supermodular and concave in its second argument. The platform bargains with channels over a payment to be made for the right to carry their programming. In this industry, negotiations are generally over a monthly per subscriber ``affiliate fee'' that the MVPD pays the channel for every subscriber who has access to the channel, whether the subscriber watches it or not. Assume that the platform is able to commit to a set of cable packages it will offer before negotiations over affiliate fees take place. The outcome of the multilateral bargaining game between the MVPD and channels is described by the Nash-in-Nash solution concept. Assume that if negotiations between a channel and the MVPD break down then the channel receives a payoff of zero. In this scenario the MVPD simply drops this channel from the existing packages. This results in a new allocation for consumers. If the original allocation had a threshold structure then this new allocation remains implementable: monotonicity is preserved when one of the channels is dropped, holding fixed the remaining channels offered to each type. The total number of consumers who have access to each channel does not change, and so the MVPD's revenue from affiliate fees from other channels does not change. Thus the only effect of such a breakdown is the change it induces on the payments made by individuals for their packages. Thus we can think of negotiations as being simply over the total payment made from the MVPD to the channel, given the menu of bundles it plans to offer individuals.
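The negotiated payment derived next is the standard asymmetric Nash bargaining solution. As a quick sanity check with purely hypothetical numbers, the maximizer of a Nash product $(p-d)^{\beta}(U-p)^{1-\beta}$, where $U$ plays the role of the channel's surplus and $d$ the platform's disagreement gap, coincides with the closed form $\beta U + (1-\beta)d$:

```python
import numpy as np

# Hypothetical values: U is the counterparty's surplus from agreement, d the
# proposer's disagreement gap, beta the bargaining weight on the first factor.
beta, U, d = 0.4, 5.0, 1.5

p = np.linspace(d, U, 400_001)
nash_product = (p - d) ** beta * (U - p) ** (1.0 - beta)
p_grid = p[np.argmax(nash_product)]            # numerical argmax over the grid
p_closed = beta * U + (1.0 - beta) * d         # first-order condition solution

assert abs(p_grid - p_closed) < 1e-4
```

The first-order condition $\beta/(p-d) = (1-\beta)/(U-p)$ gives the linear split directly, which is what allows the firm-side revenue below to be written in closed form.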
The price negotiated with channel $j$ is given by \begin{align*} p^*_j &= \arg\max_{p} \left( p + R(s) - R^O_j(s) \right)^{\beta}\left(U^F(v^F_j, |s_F(j)|_{S_I}) - p\right)^{1-\beta} \\ & = \beta U^F(v^F_j, |s_F(j)|_{S_I}) + (1-\beta)(R^O_j(s) - R(s)) \end{align*} where $R(s)$ is the individual-side revenue given in (\ref{eq:revenue}) and $R^O_j(s)$ is the individual-side revenue if channel $j$ is dropped. The difference between the two is given by \begin{equation*} R^O_j(s) - R(s) = \int\limits_{s_F(j)} \varphi(v) \cdot\left[g_I(|s_I(v)| - 1) - g_I(|s_I(v)|) \right]dQ_I(v). \end{equation*} Using the expression for $p^*_j$, we can write the MVPD firm-side revenue as a function of the matching as \begin{align*} \sum_j p^*_j &= \beta \sum_j U^F(v^F_j, |s_F(j)|_{S_I}) + (1-\beta) \sum_j \int\limits_{s_F(j)} \varphi(v) \cdot\left[g_I(|s_I(v)| - 1) - g_I(|s_I(v)|) \right]dQ_I(v)\\ & = \beta \sum_j U^F(v^F_j, |s_F(j)|_{S_I}) + (1-\beta)\int\limits_{v^*_I(1|s)}^{\bar{v}_I} \varphi(v)|s_I(v)|\cdot \left[g_I(|s_I(v)| - 1) - g_I(|s_I(v)|) \right]dQ_I(v) \end{align*} where $v^*_I(1|s)$ is the threshold type for firm $1$'s matching set, which is also the lowest type individual that purchases a non-empty package. Then total MVPD revenue is given by \begin{equation}\label{eq:revenue2} \begin{split} \beta \sum_j U^F(&v^F_j, |s_F(j)|_{S_I}) \\ &+ \int_{\underline{v}_I}^{\bar{v}_I} \varphi(v)\big( g_I(|s_I(v)|) + (1-\beta)|s_I(v)|\cdot \left[g_I(|s_I(v)| - 1) - g_I(|s_I(v)|) \right]\big)dQ_I(v). \end{split} \end{equation} It is worth pointing out that if $g_I(x) = ax + b$ then the MVPD's objective simplifies to \begin{equation*} \beta \sum_j U^F(v^F_j, |s_F(j)|_{S_I}) + \int\limits_{v^*_I(1|s)}^{\bar{v}_I} \varphi(v)\big( \beta a \cdot |s_I(v)| + b\big)dQ_I(v).
\end{equation*} In the special case of $a = 1$ and $b = 1$ maximizing this objective is the same as maximizing total (second-best) welfare, given by \begin{equation*} \sum_j U^F(v^F_j, |s_F(j)|_{S_I}) + \int\limits_{\underline{v}_I}^{\bar{v}_I} \varphi(v)\cdot g_I(|s_I(v)|) dQ_I(v). \end{equation*} The general form of the objective in (\ref{eq:revenue2}) will satisfy the conditions of Propositions \ref{prop1.1} and \ref{prop:comp_stat1} if $U^F(v,\cdot)$ is concave and the integrand in the second term of (\ref{eq:revenue2}) is supermodular. This second requirement will be satisfied if and only if $g_I(x) + (1-\beta)x\cdot \left[g_I(x - 1) - g_I(x) \right]$ is increasing. A sufficient condition for this, given that $g_I$ is increasing, is that $g_I$ is concave. \begin{lemma}\label{lem6} If $g_I$ is concave then $g_I(x) + (1-\beta)x\cdot \left[g_I(x - 1) - g_I(x) \right]$ is increasing for $x \geq 1$. \end{lemma} \begin{proof} For $x'' > x'$ we want to show \begin{equation*} g_I(x'') - g_I(x') + (1-\beta)x''\cdot \left[g_I(x'' - 1) - g_I(x'') \right] - (1-\beta)x'\cdot \left[g_I(x' - 1) - g_I(x') \right] \geq 0. \end{equation*} By concavity, $g_I(x' - 1) - g_I(x') \geq g_I(x'' - 1) - g_I(x'')$. If $x''\cdot \left[g_I(x'' - 1) - g_I(x'') \right] \geq x'\cdot \left[g_I(x' - 1) - g_I(x') \right]$ then we are done, so suppose this does not hold. Under this assumption, since $(1-\beta) < 1$ it suffices to show \begin{equation*} g_I(x'') - g_I(x') + x''\cdot \left[g_I(x'' - 1) - g_I(x'') \right] - x'\cdot \left[g_I(x' - 1) - g_I(x') \right] \geq 0. \end{equation*} Adding and subtracting $x'[g_I(x''- 1) - g_I(x'')]$, we can rewrite the left hand side as \begin{equation*} g_I(x'') - g_I(x') + (x''- x')\cdot \big[g_I(x'' - 1) - g_I(x'') \big] - x'\cdot \big[g_I(x' - 1) - g_I(x') - [g_I(x'' - 1) - g_I(x'')]\big]. \end{equation*} If $g_I$ is concave, $g_I(x'') - g_I(x'' - 1) \leq [g_I(x'' - 1) - g_I(x' -1)]/(x'' - x')$.
Using this inequality, we obtain \begin{align*} &g_I(x'') - g_I(x') + (x''- x')\cdot \big[g_I(x'' - 1) - g_I(x'') \big] - x'\cdot \big[g_I(x' - 1) - g_I(x') - [g_I(x'' - 1) - g_I(x'')]\big] \\ &\geq g_I(x'') - g_I(x'' - 1) - [g_I(x') - g_I(x'-1)] - x'\cdot \big[g_I(x' - 1) - g_I(x') - [g_I(x'' - 1) - g_I(x'')]\big]\\ &= (1-x')[g_I(x'-1) - g_I(x')] - (1 - x')\cdot[g_I(x'' - 1) - g_I(x'')]\\ & \geq 0 \end{align*} where the first inequality follows from $g_I(x'') - g_I(x'' - 1) \leq [g_I(x'' - 1) - g_I(x' -1)]/(x'' - x')$, and the final inequality from $x' \geq 1$ and concavity of $g_I$. \end{proof} The discussion thus far of the MVPD problem is summarized in the following proposition. \begin{proposition} Assume $g_I$ is concave and increasing and $U^F$ is supermodular and concave in its second argument. Then the MVPD's objective in (\ref{eq:revenue2}) satisfies the conditions of Propositions \ref{prop1.1} and \ref{prop:comp_stat1}. \end{proposition} If we assume that $U^F(v,\cdot)$ is increasing for all $v$ then a sufficient condition for no individuals to be excluded is that $\varphi(\underline{v}_I) >0 $. If this holds then all individuals will be made worse off by increasing differences shifts in the payoffs of the highest type channels. The assumption that viewers care only about the number of channels they have access to may seem unnatural. High type channels are those best able to convert viewers who have access to their channel into profits, which are generated by ad revenue. A natural interpretation is that these channels attract a larger portion of the available viewers than low type channels, but this interpretation suggests non-trivial viewer preferences. Viewers with the same ordinal preferences over channels can be accommodated, especially if the ordering corresponds to the ordering on channel types. However, if viewers have different ordinal preferences over channels then threshold matchings may not be optimal.
Nonetheless, conditional on the MVPD choosing to offer threshold matchings, all viewers will prefer larger matching sets. In reality, MVPDs almost always offer a menu of nested bundles. This may be because there is uncertainty about which programs channels will offer, so that viewers do not in fact have strong preferences over channels ex-ante, although they may develop such preferences after purchasing a bundle. It could also be because the distribution of preferences conditional on vertical types $v_I$ is such that profitable screening along this dimension is not possible. In this sense, the model seems to be a reasonable approximation of reality. I will now discuss situations in which increasing differences changes in firm payoffs make all individuals worse off. \noindent \textit{Changes in viewing patterns.} The most obvious cause of an increasing differences change in firm payoffs is technological change: direct shifts in firm payoffs. In this context, technological change could mean better programming or changes in viewing patterns that make some channels relatively more attractive to advertisers than others. A ``high type'' channel in this setting is one that most effectively converts viewers who have access to the channel into ad revenue. This could be because of the demographic composition of their viewers or the broadness of their appeal. High type channels can be identified as those offered in the basic cable packages, such as CBS and FOX, whereas low type channels have more niche appeal, such as Animal Planet or HBO. Suppose viewers become less interested in watching the nightly news on NBC, and more interested in watching serial shows such as those offered by HBO. This can be interpreted as an increasing differences change in the payoffs of low type channels such as HBO relative to NBC.
Assuming $U_F$ is affine in its second argument, by Lemmas \ref{lem:affine_welfare} and \ref{lem:welfare}, we would expect such a change to lead to more packages that include HBO, with prices adjusting in a way that makes consumers better off. \noindent \textit{Horizontal mergers.} Suppose there is a merger between two channels. Mergers are often executed because of ``synergies'' between the firms involved. Moreover, firms will often defend the proposed merger against anti-trust challenges by arguing that synergies, in particular those that reduce costs, will benefit consumers. As an example to illustrate mergers with cost synergies in the context of this model, suppose that given a matching set $s$, a channel chooses the quality $q$ of its programming to maximize $(r(q) - c(v,q)) \cdot |s|_{S_I}$. Here $|s|_{S_I}$ is interpreted as the channel's viewership potential, $r(q)$ is the realized ad revenue per potential viewer, and $c$ is the cost per-viewer of producing programming.\footnote{An alternative formulation for channel profits would be $r(q)g(|s|_{S_I}) - c(v,q)$, i.e. zero marginal costs. In this case the value function is supermodular, but it will only be concave if $g$ is sufficiently concave.} Assume $r$ is increasing and $c$ is submodular (decreasing differences in $v,q$). The firm's optimal choice of program quality is independent of its viewership, and increasing in $v$, so $U^F(v,x) = \max_q\, (r(q) - c(v,q)) \cdot x$ is supermodular and linear in its second argument. Cost synergies entail a reduction in the marginal cost of production for the firms involved. In this context, suppose firms with values $v''$ and $v'$ merge. Because of cost synergies, they will each have new cost functions $\hat{c}$ that are pointwise lower than their original costs. Then their margins will increase, i.e. $\max_q r(q) - \hat{c}(v,q)$ will be larger. This is an increasing differences shift in their payoffs.
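A minimal numerical sketch of this construction, under assumed functional forms $r(q) = \sqrt{q}$ and $c(v,q) = q/v$ (chosen here only because they satisfy the stated monotonicity and submodularity properties), illustrates both that margins increase in type and that a uniform cost cut is an increasing differences shift:

```python
import numpy as np

# Assumed forms: r(q) = sqrt(q), c(v, q) = q / v (submodular: cross partial < 0).
# Analytically q*(v) = v^2/4 and the per-viewer margin is v/4.
q = np.linspace(0.0, 10.0, 100_001)

def margin(v, cost_scale=1.0):
    # optimal per-viewer margin max_q r(q) - cost_scale * c(v, q), by grid search
    return np.max(np.sqrt(q) - cost_scale * q / v)

v_types = np.linspace(0.5, 3.0, 50)
m = np.array([margin(v) for v in v_types])
m_synergy = np.array([margin(v, cost_scale=0.8) for v in v_types])  # cost cut

assert np.all(np.diff(m) > 0)              # margin increasing in type v
assert np.all(m_synergy > m)               # cost synergy raises every margin
assert np.all(np.diff(m_synergy - m) > 0)  # the gain itself is increasing in v:
                                           # an increasing differences shift
```

Because $U^F(v,x)$ is the margin times $x$, the rise in margins translates directly into an increasing differences shift in the merged firms' payoffs, which is the object the comparative statics results act on.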
Suppose there is a merger between the lowest-value channels. Then Lemma \ref{lem:compstat_multi} implies that all cable packages will become larger. If, on the other hand, the two highest value channels merge then all cable packages will shrink. If there was no exclusion of individuals to begin with then all individuals will be made worse off. One could argue that this welfare analysis is incomplete, since the quality of programming increases following the merger. The positive conclusion regarding mergers of low value firms is unchanged by this consideration. In the case of high value firm mergers, the net effect will depend on how much individuals value higher quality programming. \noindent \textit{Vertical mergers.} Suppose the MVPD purchases a channel. This has two effects on its matching problem. One is that the MVPD will now appropriate the entire surplus generated by the purchased channel. This will have the effect of an increasing differences shift in this channel's payoffs. However, the acquisition also affects the negotiations with other firms. If a channel fails to reach an agreement with the MVPD and is dropped from the packages, some viewers who would have had access to both this channel and the MVPD-owned channel will migrate to the latter. This benefits the MVPD, whereas before the merger the MVPD appropriated none of this additional benefit. This change in the outside option of the MVPD is termed the ``bargaining leverage over rivals'' (BLR) effect by \cite{Rogerson2019}. The effect on the structure of bundling will depend on the magnitude of the BLR effect.
If the MVPD purchases firm 1 then the price negotiated with channel $j$ is given by \begin{small} \begin{align*} \tilde{p}^*_j &= \arg\max_{p} \left(p + R(s) - R^O_j(s) + \theta \left(U^F(v^F_1, |s_F(1)|_{S_I}) - U^F(v^F_1, |s_F(1)|_{\tilde{S}^j_I})\right) \right)^{\beta}\left(U^F(v^F_j, |s_F(j)|_{S_I}) - p\right)^{(1-\beta)} \\ & = \beta U^F(v^F_j, |s_F(j)|_{S_I}) + (1-\beta)\left(R^O_j(s) - R(s) + \theta\left( U^F(v^F_1, |s_F(1)|_{\tilde{S}^j_I}) - U^F(v^F_1, |s_F(1)|_{S_I})\right) \right) \end{align*} \end{small} where $|s_F(1)|_{\tilde{S}^j_I}$ is the quality of channel 1's matching set when channel $j$ is dropped from all matchings. The parameter $\theta \in [0,1]$ captures the degree to which the MVPD internalizes the change in the viewership of the purchased channel should negotiations break down with channel $j$. If $\theta = 0$ then there is no BLR effect. The MVPD's total revenue is \begin{small} \begin{equation*} \sum_{j=2}^N \tilde{p}^*_j + U^F(v^F_1, |s_F(1)|_{S_I}) + R(s) = \sum_{j = 2}^N p^*_j + (1- \beta)\theta \sum_{j=2}^N \left( U^F(v^F_1, |s_F(1)|_{\tilde{S}^j_I}) - U^F(v^F_1, |s_F(1)|_{S_I}) \right) + R(s) \end{equation*} \end{small} which can be re-written as \begin{equation}\label{eq:blr_revenue} \begin{split} \beta \sum_{j=2}^N U^F(&v^F_j, |s_F(j)|_{S_I}) + U^F(v^F_1, |s_F(1)|_{S_I}) \\ &+ \int_{\underline{v}_I}^{\bar{v}_I} \varphi(v)\big( g_I(|s_I(v)|) + (1-\beta)|s_I(v)|\cdot \left[g_I(|s_I(v)| - 1) - g_I(|s_I(v)|) \right]\big)dQ_I(v) \\ &- (1-\beta)\int\limits_{s_F(1)} \varphi(v) \cdot\left[g_I(|s_I(v)| - 1) - g_I(|s_I(v)|) \right]dQ_I(v) \\ &+ (1- \beta)\theta \sum_{j=2}^N \left( U^F(v^F_1, |s_F(1)|_{\tilde{S}^j_I})- U^F(v^F_1, |s_F(1)|_{S_I}) \right). \end{split} \end{equation} The first two lines of (\ref{eq:blr_revenue}) are the same as the original MVPD revenue in (\ref{eq:revenue2}), except that the weight on channel 1's revenue is now 1 instead of $\beta$. The third and fourth lines complicate the analysis.
The third line reflects the fact that the individual side payoffs in the event of a breakdown in negotiations with firm 1 need no longer be considered. The fourth line is the BLR effect. Here payoffs do not have the same form as discussed above, and we cannot directly apply our previous results. Under some additional assumptions, however, we can draw welfare conclusions.\footnote{Just assuming that $U^F(v,\cdot)$ is affine should be enough to do comparative statics, but this will require additional work.} First, assume that there is no BLR effect, so $\theta = 0$ and the fourth line of (\ref{eq:blr_revenue}) disappears. Second, suppose $g_I$ is affine with slope $a$. Then the third line of (\ref{eq:blr_revenue}) reduces to \begin{equation*} (1-\beta) a \int\limits_{s_F(1)} \varphi(v)dQ_I(v). \end{equation*} The only effect of this on the MVPD's problem is to increase the marginal benefit of adding viewers to the matching set of firm 1. Thus the result of Proposition \ref{prop:comp_stat1} will continue to hold. \begin{lemma} Assume that there is no BLR effect, $g_I$ is affine, and $U^F(v, \cdot)$ is concave. Then if there is no exclusion of viewers and the MVPD purchases the highest type channel, all individuals will be worse off. If instead $U^F(v, \cdot)$ is affine and the MVPD purchases the lowest type channel then all individuals will be better off. \end{lemma} The BLR effect reduces the price that the platform pays to all channels. However, it is not immediately clear how this affects the marginal benefit of increasing bundle size. Analyzing the merger when the BLR effect is present is an interesting objective, but one which I leave for future work. \subsection{Platform-mediated monopolistic competition}\label{sec:monopcomp} Retailers, particularly online retailers such as Amazon, exert considerable control over the set of products to which consumers have access.
This influence may take the form of placement of products in stores or in search results pages, advertising, or product recommendations, among others. Firms care about how many customers see their product and how many other products these customers see. Additionally, a firm's payoffs will depend on the prices offered by the competing firms to which its customers also have access. The retailer may exert varying levels of control over firm pricing decisions. A grocery store may have a high degree of control, whereas Amazon may have little direct say in the prices set by firms. I will consider a version of the classic \cite{dixit1977monopolistic} model of monopolistic competition. There is a continuum of firms and individuals. For simplicity, assume that individuals are identical. I will discuss later how individual heterogeneity can be accommodated. The driving force here will be the desire of the platform to screen firms that have different marginal costs, and thus value access to consumers differently. The platform cannot directly control the prices set by firms, although it may do so indirectly through the choice of the matching sets. This assumption fits best with a model of an online platform such as Amazon. I will show that platform payoffs in this setting reduce to the form studied in Section \ref{sec:extension}. \subsubsection{Customer side} I first describe the demand of a customer who is matched with a set $s$ of firms, where firm $j$ has set price $p(j)$. The individual chooses a quantity $q(j)$ to purchase from firm $j$ and how much money $m$ to hold.
The individual's choice solves \begin{equation}\label{eq:custobjective} \max_{q(\cdot),m} \ m + \dfrac{1}{\theta}\left( \int_s q(j)^{\frac{\sigma - 1}{\sigma}} dj \right)^{\frac{\theta \sigma}{\sigma -1}} \ \ \text{s.t.} \ \ \int_s p(j) q(j) dj + m \leq w \end{equation} where $w$ is the individual's wealth, $\sigma > 1$, $\theta \in (0,1)$ and $\sigma(1-\theta) > 1$.\footnote{Preferences of the form in (\ref{eq:custobjective}) have been used by \cite{bagwell2015trade}, \cite{helpman2010labour}, and \cite{helpman1989trade} among others.} Assume that wealth is high enough that individuals hold a positive amount of money. Define the price index for matching set $s$ as \begin{equation*} P(s) = \left(\int_s p(j)^{1-\sigma} dj\right)^{\frac{1}{1 - \sigma}}. \end{equation*} Then demand for money can be written as \begin{equation*} m(s) = w - P(s)^{\frac{\theta}{\theta - 1}} \end{equation*} and demand for the product of firm $j$ is given by\footnote{Derivations can be found in Appendix \ref{sec:pmmc}.} \begin{equation*} q(j|s) = p(j)^{-\sigma} \cdot P(s)^{\frac{\sigma(1 - \theta) - 1}{1 - \theta}}. \end{equation*} The indirect utility of an individual is given by \begin{equation*} \dfrac{1-\theta}{\theta} P(s)^{\frac{\theta}{\theta -1}} + w. \end{equation*} \subsubsection{Firm side} A firm matched with a set $s_F(j)$ of customers faces an aggregate demand given by \begin{equation*} p(j)^{-\sigma} \cdot \int\limits_{s_F(j)} P(s_I(i))^{\frac{\sigma(1 - \theta) - 1}{1 - \theta}} di. \end{equation*} Define $h(i) \equiv P(s_I(i))^{\frac{\sigma(1 - \theta) - 1}{1 - \theta}}$. Assume that marginal costs are constant for all firms, but differ across firms. Marginal costs are the firm's private information, which the platform would like to elicit. For simplicity, assume that firms have no fixed cost.
\footnote{Adding fixed costs, even if they differ across firms, does not change anything as long as all firms want to continue operating.} The firm's price setting problem can be stated as \begin{equation*} \max_p \ p^{1-\sigma} \cdot \int\limits_{s_F(j)} h(i) di - c_j p^{-\sigma} \cdot \int\limits_{s_F(j)} h(i) di. \end{equation*} The solution to this problem is to set $p(j) = \frac{\sigma}{\sigma - 1} c_j$. The fact that the price does not depend on the matching set or the prices of other firms is the main benefit of assuming CES utility and constant marginal costs. Define $v_j \equiv c_j^{1-\sigma}$ (recall that $\sigma > 1$). The firm's profits from matching set $s_F(j)$ are then given by \begin{equation*} v_j \gamma \cdot \int\limits_{s_F(j)} h(i)di \end{equation*} where $\gamma = \left(\frac{\sigma}{\sigma-1}\right)^{1-\sigma} - \left(\frac{\sigma}{\sigma-1}\right)^{-\sigma} > 0$. Using the firm pricing decision, payoffs of an individual with matching set $s_I$ are given by \begin{equation}\label{eq:custwelfare} w + \left( \frac{1-\theta}{\theta} \right) \left( \frac{\sigma}{\sigma-1} \right)^{\frac{\theta}{\theta - 1}} \left( \int\limits_{s_I} v_j dj \right)^{\frac{\theta}{(\theta - 1)(1 - \sigma)}}. \end{equation} This can be written as $w + g_I(s_I)$. \subsubsection{The platform} There are a number of platform objectives that can be entertained in this setting. I will assume that the goal of the platform is to maximize a weighted sum of customer surplus and revenue collected from firms. The platform may care about customer surplus because it wants to attract customers. Alternatively, if the platform were also screening on the individual side, we have seen that the individual-side objective looks like welfare maximization, with virtual values replacing true values. I could also consider a platform that collects transaction fees from users, and thus wants to maximize a weighted sum of customer transactions and revenues from firms.
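Both sides of the market admit a quick numerical sanity check in a discrete analogue of the model (sums replacing integrals); all parameter values below are illustrative and satisfy $\sigma > 1$, $\theta \in (0,1)$, $\sigma(1-\theta) > 1$:

```python
# Discrete check of the CES demand system and the firm's markup rule.
# All parameter values are illustrative.
sigma, theta = 2.0, 0.4        # sigma > 1, theta in (0,1), sigma*(1-theta) > 1

# --- customer side: q(j|s) = p(j)^{-sigma} * P(s)^{(sigma(1-theta)-1)/(1-theta)} ---
prices = [1.0, 1.5, 2.0]
P = sum(p ** (1 - sigma) for p in prices) ** (1 / (1 - sigma))
q = [p ** (-sigma) * P ** ((sigma * (1 - theta) - 1) / (1 - theta)) for p in prices]

# the consumer's FOC p(k) = q(k)^{-1/sigma} * X^{theta*sigma/(sigma-1)-1} holds...
X = sum(qj ** ((sigma - 1) / sigma) for qj in q)
for p, qj in zip(prices, q):
    assert abs(p - qj ** (-1 / sigma) * X ** (theta * sigma / (sigma - 1) - 1)) < 1e-9

# ...and spending on goods equals P^{theta/(theta-1)}, matching m(s) = w - P^{theta/(theta-1)}
assert abs(sum(p * qj for p, qj in zip(prices, q)) - P ** (theta / (theta - 1))) < 1e-9

# --- firm side: profit p^{1-sigma}*H - c*p^{-sigma}*H is maximized at p = sigma*c/(sigma-1) ---
c, H = 0.8, 3.0                # illustrative marginal cost and aggregate demand term
profit = lambda p: p ** (1 - sigma) * H - c * p ** (-sigma) * H
p_star = sigma / (sigma - 1) * c
grid = [0.01 * k for k in range(1, 2001)]
assert abs(max(grid, key=profit) - p_star) < 0.011

# optimal profit equals v * gamma * H with v = c^{1-sigma}
gamma = (sigma / (sigma - 1)) ** (1 - sigma) - (sigma / (sigma - 1)) ** (-sigma)
assert abs(profit(p_star) - c ** (1 - sigma) * gamma * H) < 1e-9
```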
The results discussed here would not change. Payoffs for the firm and individual take the form studied in Section \ref{sec:extension}: the endogenous salience of individual $i$, given by $h(i)$, depends on the types of the firms in $s_I(i)$. Using the firm's pricing decision, we can rewrite $P(s_I(i))$ in terms of firm types: \begin{equation*} h(s_I(i)) = P(s_I(i))^{\frac{\sigma(1 - \theta) - 1}{1 - \theta}} = \psi \cdot \left( \int\limits_{s_I(i)} v_j dj \right)^{\kappa} \end{equation*} where $\kappa = \frac{\sigma(1-\theta) - 1}{(1-\theta)(1-\sigma)} \in (-1,0)$ and $\psi= \left(\frac{\sigma}{\sigma-1}\right)^{\kappa(1-\sigma)} > 0$. The platform wants to screen firms based on marginal costs. Firm payoffs are of the form $v_j\cdot g_F(s_F(j))$, where $g_F$ is linear. Assuming that the distribution of $v_j$ is regular, the platform will solve for the optimal matching by replacing $v_j$ with $\varphi_F(v_j) = v_j - \frac{1 - Q_F(v_j)}{q_F(v_j)}$, subject to monotonicity of firm side match quality in type. Ignoring individual wealth, which is fixed, the platform payoff can be written as \begin{equation}\label{eq:amazonobj} \int\limits_{\underline{v}_I}^{\bar{v}_I} \left[g_I(s_I(v)) + \gamma h(s_I(v)) \cdot \int\limits_{s_I(v)} \varphi_F(v_j) dj\right]dQ_I(v) \end{equation} where $g_I(s_I) = \left(\frac{1-\theta}{\theta} \right) P(s_I)^{\frac{\theta}{\theta - 1}}$. The platform chooses $s_I(\cdot)$ to maximize (\ref{eq:amazonobj}), subject to monotonicity of firm match quality. If the platform cannot discriminate between customers, meaning that all customers must receive the same matching set, then monotonicity of the firm side matching implies that individual matching sets must be characterized by a threshold in $v_F$. If the platform is allowed to offer different matching sets to different individuals, however, the problem is not necessarily separable across customers due to the firm-side monotonicity constraint.
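The rewriting of the endogenous salience in terms of firm types can be checked directly: with $p_j = \frac{\sigma}{\sigma-1}c_j$ and $v_j = c_j^{1-\sigma}$, the identity $P(s)^{\frac{\sigma(1-\theta)-1}{1-\theta}} = \psi\left(\int_s v_j dj\right)^\kappa$ holds term by term. A numeric sketch with illustrative costs:

```python
# Check that the endogenous salience written in prices equals its rewriting in
# firm types: P(s)^{(sigma(1-theta)-1)/(1-theta)} = psi * (sum_j v_j)^kappa,
# with p_j = sigma*c_j/(sigma-1) and v_j = c_j^{1-sigma}. Illustrative numbers.
sigma, theta = 2.0, 0.4
costs = [0.5, 0.8, 1.2]

markup = sigma / (sigma - 1)
P = sum((markup * c) ** (1 - sigma) for c in costs) ** (1 / (1 - sigma))

mu = (sigma * (1 - theta) - 1) / (1 - theta)   # exponent on the price index
kappa = mu / (1 - sigma)
psi = markup ** (kappa * (1 - sigma))
total_v = sum(c ** (1 - sigma) for c in costs)

assert abs(P ** mu - psi * total_v ** kappa) < 1e-9
assert -1 < kappa < 0 and psi > 0
```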
Given the separable structure of the objective, Lemma \ref{lem:general_threshold} implies that the threshold structure will hold, under a slight strengthening of regularity. \begin{lemma}\label{lem:threshold_amazon} Maximizing the integrand in (\ref{eq:amazonobj}), i.e. solving the problem separately for each individual, yields matchings characterized by a threshold in $\varphi_F(v)/v$; a firm with type $v$ is matched with the individual if and only if $\varphi_F(v)/v$ is high enough. \end{lemma} Under the assumption that $\varphi_F$ is increasing, $\varphi_F(v)/v$ will always be increasing over the set of $v$ such that $\varphi_F(v) < 0$. This means that if we know that all firms with positive virtual values are matched with all individuals then under regularity we can conclude that the individuals' matching sets are characterized by a threshold in $v_F$. In general however we can only conclude this when $\varphi_F(v)/v$ is increasing. \begin{corollary} If $\varphi_F(v)/v$ is increasing then maximizing (\ref{eq:amazonobj}) pointwise yields matching sets characterized by a threshold in $v_F$. \end{corollary} If $\varphi_F(v)/v$ is increasing, or if the platform is not able to discriminate between individuals, then the problem is separable across individuals, and can be solved by maximizing the integrand in (\ref{eq:amazonobj}). Lemma \ref{lem:threshold_amazon} implies that the solution to this problem is unique, so all customers will receive the same matching set, and we can talk about the ``representative customer''. Even when the problem is separable, comparative statics in this setting are complicated by the fact that firm types enter into both the endogenous salience of individuals and the profitability of firms. This is not an issue, however, if we consider changes that affect the virtual values without changing the true firm types, in which case we can apply Lemma \ref{lem:B_compstat_simple}.
I will consider two such changes: the platform receiving more precise information about firm types, and the platform purchasing a subset of high type firms. The following lemma is the immediate implication of Lemma \ref{lem:B_compstat_simple} in this setting. \begin{lemma}\label{lem:amazoncompstat} Assume $\varphi_F(v)/v$ is increasing. Consider a set of firms, all of which are matched with the representative customer. If the virtual values of these firms increase then the representative customer will be worse off and receive a smaller matching set. \end{lemma} Suppose the platform receives some information about the vertical types of firms. For some partition $\tau$ of the set $[\underline{v}_F, \bar{v}_F]$ of potential firm types the platform learns to which partition cell each firm belongs. The platform therefore only needs to worry about firms deviating to types that are in the same cell; the mechanism design problem on the firm side separates completely across cells. Thus the virtual values are computed cell-by-cell; for a firm $v$ in cell $[v_{**}, v^{**}]$ the virtual value is $v - \frac{Q_F(v^{**}) - Q_F(v)}{q_F(v)}$. Moreover, the monotonicity constraint need only be satisfied within each cell. Say that a partition cell is \textit{included} if all firms in that cell are matched with the representative customer. The following are corollaries of Lemma \ref{lem:amazoncompstat}. \begin{corollary}\label{cor:amazonmerger} Assume $\varphi_F(v)/v$ is increasing within each cell. If the platform purchases an included partition cell of firms then the customer receives a smaller matching set and is worse off. \end{corollary} \begin{proof} If the platform purchases an included partition cell it replaces the virtual values of each firm in this cell with the true values, which are higher. There is no change in the virtual values of firms in other cells. \end{proof} Assume $\varphi_F(v)/v$ is increasing.
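The cell-by-cell virtual values drive these comparative statics and can be illustrated with $Q_F$ uniform on $[0,1]$ (an illustrative choice): for a firm of type $v$ in cell $[v_{**}, v^{**}]$ the virtual value is $v - (v^{**} - v) = 2v - v^{**}$, so refining the partition lowers the relevant upper bound and raises virtual values, and replacing virtual values by true values (a purchase) raises them as well:

```python
# Cell-by-cell virtual values for Q_F uniform on [0,1] (illustrative distribution):
# for a firm of type v in cell [lo, hi], phi(v) = v - (Q(hi) - Q(v))/q(v) = 2v - hi.
def phi_cell(v, hi):
    return 2 * v - hi

vs = [0.05 * k for k in range(1, 20)]  # sample types in (0, 1)

# refining the partition: cell [0, 1] split into [0, 0.5] and [0.5, 1]
for v in vs:
    hi_coarse = 1.0
    hi_fine = 0.5 if v <= 0.5 else 1.0
    assert phi_cell(v, hi_fine) >= phi_cell(v, hi_coarse)

# purchasing a cell: the true value v replaces phi(v), and v >= 2v - hi whenever v <= hi
for v in vs:
    hi = 0.5 if v <= 0.5 else 1.0
    assert v >= phi_cell(v, hi)
```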
Corollary \ref{cor:amazonmerger} and Lemma \ref{lem:threshold_amazon} imply that if the inverse hazard rate is decreasing then the customer receives a smaller matching set. Even without this assumption, the merger makes the customer worse off, as the platform seeks to divert customers to the firms that it owns. Similarly, if the platform gets better information about the values of firms that are already included, it will be able to extract more surplus from these firms, and would thus like to divert customers to them. \begin{corollary}\label{cor:amazoninfo} Assume $\varphi_F(v)/v$ is increasing within each cell. If the platform's partition over a set of included firms becomes finer then the customer receives a smaller matching set and is worse off. \end{corollary} \begin{proof} If a given partition cell $[v_{**}, v^{**}]$ is subdivided then any firm in $[v_{**}, v^{**}]$ will be in a new cell with an upper bound that is below $v^{**}$, and strictly so for some firms. Then the virtual values of these firms will be higher. \end{proof} The natural counterpoints to Corollaries \ref{cor:amazonmerger} and \ref{cor:amazoninfo} also obtain: if the merger or information acquisition concerns cells of excluded firms, firms with which the customer is not matched, then the customer receives larger matching sets and is better off. \subsubsection{Customer heterogeneity} A natural dimension of customer heterogeneity is the degree to which customers value the goods sold on the platform relative to money. With quasi-linear preferences wealth heterogeneity does not affect the purchase decisions. In some sense, the trade-off between goods and money is captured by the parameter $\theta$, so we can consider heterogeneity in this dimension. However, higher $\theta$ does not exactly capture an intuitive notion of valuing goods relatively more. The quality of a customer's matching set is given by $\int_{s_I} v_j dj$.
Customer preferences are supermodular in $\theta$ and match quality if and only if $P \geq 1$. In fact, if $P < 1$ it is easy to show that customer demand may be decreasing in $\theta$.\footnote{There is no technical problem with considering heterogeneity in $\theta$, although it does complicate the analysis when we consider the case in which the type is an individual's private information and there are transfers between the platform and individuals. The issue is one of interpretation.} Thus, I will consider a slight modification of customer preferences which admits a more natural interpretation of the individual type: let preferences for a type $v_I$ customer be represented by \begin{equation*} m + \frac{v_I^{1-\theta}}{\theta}\left( \int\limits_{s} q(j)^{\frac{\sigma - 1}{\sigma}} dj \right)^{\frac{\theta \sigma}{\sigma - 1}}. \end{equation*} It is easy to show, following the same steps as before, that demand for good $j$ is given by \begin{equation*} q(j|s,v_I) = v_I \cdot p(j)^{-\sigma}\cdot P(s)^{\frac{\sigma(1-\theta) - 1}{1-\theta}} \end{equation*} and customers' indirect utility is given by \begin{equation*} v_I \left(\frac{1-\theta}{\theta}\right) P(s)^{\frac{\theta}{\theta-1}}. \end{equation*} Define $h(s|v_I) \equiv v_I \cdot P(s)^{\frac{\sigma(1-\theta) - 1}{1-\theta}}$. The remainder of the derivations are as before. If the platform can observe individuals' types, which may not be an unreasonable assumption for online platforms with access to detailed information about their customers, not much changes in the above analysis. Lemma \ref{lem:general_threshold} applies, so if $\varphi_F(v)/v$ is increasing then matchings for each individual have a threshold structure and the firm-side monotonicity constraint does not bind.
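The modified demand can be verified the same way as the homogeneous case: with the displayed $q(j|s,v_I)$, the first-order condition $p(k) = v_I^{1-\theta} q(k)^{-1/\sigma} X^{\theta\sigma/(\sigma-1)-1}$ of the modified problem holds, where $X = \int_s q(j)^{(\sigma-1)/\sigma} dj$. A discrete numeric sketch with illustrative parameters:

```python
# Check of the modified demand for a type-v_I customer: with
# q(j|s,v_I) = v_I * p(j)^{-sigma} * P(s)^{(sigma(1-theta)-1)/(1-theta)},
# the FOC p(k) = v_I^{1-theta} * q(k)^{-1/sigma} * X^{theta*sigma/(sigma-1)-1}
# holds, where X = sum_j q(j)^{(sigma-1)/sigma}. Parameter values are illustrative.
sigma, theta, v_I = 2.0, 0.4, 1.7
prices = [1.0, 1.5, 2.0]

P = sum(p ** (1 - sigma) for p in prices) ** (1 / (1 - sigma))
q = [v_I * p ** (-sigma) * P ** ((sigma * (1 - theta) - 1) / (1 - theta)) for p in prices]
X = sum(qj ** ((sigma - 1) / sigma) for qj in q)

for p, qj in zip(prices, q):
    rhs = v_I ** (1 - theta) * qj ** (-1 / sigma) * X ** (theta * sigma / (sigma - 1) - 1)
    assert abs(p - rhs) < 1e-9
```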
Then the conclusions of Corollaries \ref{cor:amazonmerger} and \ref{cor:amazoninfo} continue to hold for each individual: each individual is matched with a smaller set of firms if the platform purchases or gains more precise information about an included cell of firms. This makes individuals worse off if the platform is maximizing the weighted sum of customer welfare and firm-side revenue. Of course, if the platform can use transfers to extract surplus from individuals then individuals receive zero surplus in either case. We can also consider the situation in which an individual's type is their private information. If the platform can only control individual payoffs through the matching set, for example if monetary transfers are not possible, then again all individuals will receive the same matching set. Lemmas \ref{lem:threshold_amazon} and \ref{lem:amazoncompstat} and Corollaries \ref{cor:amazonmerger} and \ref{cor:amazoninfo} continue to hold for all individuals. The primary case of interest is when individuals are privately informed about their type, and transfers can be made between individuals and the platform. The necessary and sufficient conditions for customer-side incentive compatibility in this setting are monotonicity of match quality in type and payments that satisfy the envelope condition. Let $g_I$ be defined as above. The platform may still seek to maximize the sum of total individual-side welfare and net revenue, meaning the sum of transfers from firms and customers. Aggregate individual-side welfare can be written as \begin{equation*}\label{eq:amazon_welfareobj} U(\underline{v}_I) + \int\limits_{\underline{v}_I}^{\bar{v}_I} g_I(s_I(v)) (1 - Q_I(v)) dv. \end{equation*} The platform does not benefit from transferring money to individuals since its payoffs are linear in money, so it is without loss to set $U(\underline{v}_I) = 0$.
The platform objective is to maximize \begin{equation}\label{eq:amazonobjhetero} \int\limits_{\underline{v}_I}^{\bar{v}_I} \left[g_I(s_I(v)) (1 - Q_I(v)) + v \gamma h(s_I(v)) \cdot \int\limits_{s_I(v)} \varphi_F(v_j) dj\right]dQ_I(v) \end{equation} subject to monotonicity of firm and individual match qualities. Lemma \ref{lem:threshold_amazon} applies here, so if $\varphi_F(v)/v$ is increasing then maximizing the integrand in (\ref{eq:amazonobjhetero}) yields threshold matching sets for each individual. If we ignored the firm side, maximizing individual-side welfare would imply full ironing since $1-Q_I(v)$ is decreasing. All individuals would receive the same quality matching set, and thus the same set given that threshold matchings are optimal. In fact, the firm side reinforces this effect. The platform faces a trade-off between higher customer welfare and greater firm-side revenue. Since firm side revenue scales with $v_I$, while $1-Q_I(v_I)$ is decreasing, the platform will prioritize firm side revenue when dealing with higher type individuals. \begin{lemma}\label{lem:amazonpooling} Assume $\varphi_F(v)/v$ is increasing. If the platform's objective is to maximize (\ref{eq:amazonobjhetero}), the sum of individual-side welfare and revenue, then there is full pooling: the optimal matching is the same for all individuals. \end{lemma} \begin{proof} Suppose the platform maximized its objective separately for each individual, ignoring the monotonicity constraint. I want to show that higher type individuals would get worse matching sets. The objective for an individual with type $v_I$ can be written as \begin{equation*} v_I\left(g_I(s_I)\frac{(1 - Q_I(v_I))}{v_I} + \gamma\psi \left(\int\limits_{s_I} v_j dj\right)^{\kappa} \cdot \int\limits_{s_I} \varphi_F(v_j) dj\right). \end{equation*} This objective satisfies single-crossing in $v_I$ and $- \int_{s_I} v_j dj$, so higher types get worse matching sets.
Since point-wise maximization yields a decreasing allocation there is full ironing: the optimal matching subject to monotonicity has the same quality for all individuals. Since $\varphi_F(v)/v$ is increasing this implies that all matching sets will be the same. \end{proof} Lemma \ref{lem:amazonpooling} is consistent with some observed patterns of platform structure. Early stage platforms, which are focused on attracting new users, often offer the same service to all customers. Only later, when the user base has been established, do platforms begin to discriminate between customers. As we will see below, such discrimination arises when the platform seeks to translate individual-side users into revenue. Given Lemma \ref{lem:amazonpooling} we can again talk about the ``representative customer'' who in this case has a type given by the average type in the population. Lemma \ref{lem:amazoncompstat} and Corollaries \ref{cor:amazonmerger} and \ref{cor:amazoninfo} continue to apply. Finally, suppose the platform also wants to maximize firm-side and individual-side revenue. Individual side revenue can be written as \begin{equation*}\label{eq:amazon_revobj} U(\underline{v}_I) + \int\limits_{\underline{v}_I}^{\bar{v}_I} \varphi_I(v)\cdot g_I(s_I(v)) dQ_I(v) \end{equation*} where $\varphi_I(v) = v - \frac{1-Q_I(v)}{q_I(v)}$. The platform can extract full surplus from the lowest type, so $U(\underline{v}_I) = 0$. The platform objective is to maximize \begin{equation}\label{eq:amazonrevobjhetero} \int\limits_{\underline{v}_I}^{\bar{v}_I} \left[\varphi_I(v)\cdot g_I(s_I(v)) + v \gamma h(s_I(v)) \cdot \int\limits_{s_I(v)} \varphi_F(v_j) dj\right]dQ_I(v) \end{equation} subject to individual and firm side monotonicity. We can rewrite the integrand as \begin{equation*} v \left[\frac{\varphi_I(v)}{v} g_I(s_I(v)) + \gamma h(s_I(v)) \cdot \int\limits_{s_I(v)} \varphi_F(v_j) dj\right].
\end{equation*} This objective satisfies single-crossing in $\varphi_I(v)/v$ and $\int_{s_I} v_j dj$, which has the following implication. \begin{lemma}\label{lem:amazon_indivmon} Point-wise maximization of total revenue, given by (\ref{eq:amazonrevobjhetero}), yields individual match qualities that are increasing in $\varphi_I(v)/v$. \end{lemma} An immediate implication of Lemma \ref{lem:amazon_indivmon} is that point-wise maximization will yield monotone allocations if $\varphi_I(v)/v$ is increasing. This claim has a partial converse: if $\varphi_I(v)/v$ is not increasing then point-wise maximization will violate monotonicity, unless the violations of increasing $\varphi_I(v)/v$ happen to occur for individuals that are either excluded or fully matched. Assume that both $\varphi_I(v)/v$ and $\varphi_F(v)/v$ are increasing. Then monotonicity constraints do not bind for either individuals or firms. Thus we can apply Lemma \ref{lem:general_threshold}, and Corollaries \ref{cor:amazonmerger} and \ref{cor:amazoninfo} apply to each individual: each individual is matched with a smaller set of firms if the platform purchases or gains more precise information about an included cell of firms. Using the envelope expression for individual payoffs we have the following welfare conclusions. \begin{corollary}\label{cor:amazonmergerhetero} Assume $\varphi_F(v)/v$ is increasing within each cell. If the platform purchases a cell of firms that is included for all customers then all customers are worse off. \end{corollary} \begin{corollary}\label{cor:amazoninfohetero} Assume $\varphi_F(v)/v$ is increasing within each cell. If the platform's partition over a set of firms that are included for all customers becomes finer then all customers are worse off.
\end{corollary} \section*{Appendix} \appendix \section{Platform-mediated monopolistic competition}\label{sec:pmmc} The FOC for the customer's problem in (\ref{eq:custobjective}) gives \begin{equation}\label{eq:custfoc} p(k) = q(k)^{-\frac{1}{\sigma}} \left( \int_s q(j)^{\frac{\sigma - 1}{\sigma}} dj \right)^{\frac{\theta \sigma}{\sigma-1} - 1}. \end{equation} Multiplying both sides by $q(k)$ and integrating over $s$ yields \begin{equation}\label{eq15} \int_s p(j)q(j)dj = \left( \int_s q(j)^{\frac{\sigma - 1}{\sigma}} dj \right)^{\frac{\theta \sigma}{\sigma-1}}. \end{equation} We can also rearrange (\ref{eq:custfoc}) to obtain \begin{equation}\label{eq16} p(k)q(k) = p(k)^{1-\sigma} \left( \int_s q(j)^{\frac{\sigma - 1}{\sigma}} dj \right)^{\sigma\left(\frac{\theta \sigma}{\sigma-1} - 1\right)}. \end{equation} Integrating both sides over $s$ yields \begin{equation}\label{eq17} \int_s p(j)q(j)dj = P(s)^{1-\sigma} \left( \int_s q(j)^{\frac{\sigma - 1}{\sigma}} dj \right)^{\sigma\left(\frac{\theta \sigma}{\sigma-1} - 1\right)}. \end{equation} Combining (\ref{eq15}) and (\ref{eq17}) we obtain \begin{equation*} \int_s p(j)q(j)dj = P(s)^{\frac{\theta}{\theta-1}} \end{equation*} which is what allows us to write demands as a function of $P(s)$. \end{document}
\begin{document} \title[]{Dynamical Sampling on Finite Index Sets} \author[C. Cabrelli]{Carlos Cabrelli} \author[U. Molter]{Ursula Molter} \author[V. Paternostro]{Victoria Paternostro} \address[C. Cabrelli, U. Molter and V. Paternostro]{Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Matem\'atica, Buenos Aires, Argentina, and CONICET-Universidad de Buenos Aires, Instituto de Investigaciones Matem\'aticas Luis A. Santal\'o (IMAS), Buenos Aires, Argentina} \email[C. Cabrelli]{[email protected]} \urladdr[C. Cabrelli]{http://mate.dm.uba.ar/~cabrelli/} \email[U. Molter]{[email protected]} \urladdr[U. Molter]{http://mate.dm.uba.ar/~umolter/} \email[V. Paternostro]{[email protected]} \author[F. Philipp]{Friedrich Philipp} \address[F. Philipp]{KU Eichst\"att-Ingolstadt, Mathematisch-Geographische Fakult\"at, Ostenstra\ss e 26, Kollegiengeb\"aude I Bau B, 85072 Eichst\"att, Germany} \email{[email protected]} \urladdr{http://www.ku.de/?fmphilipp} \thanks{The research of the authors was partially supported by UBACyT under grants 20020130100403BA, 20020130100422BA, and 20020150200110BA, by CONICET (PIP 11220110101018), and MinCyT Argentina under grant PICT-2014-1480.} \begin{abstract} We consider bounded operators $A$ acting iteratively on a finite set of vectors $\{f_i:i\in I\}$ in a Hilbert space $\calH$ and address the problem of providing necessary and sufficient conditions for the collection of iterates $\{A^nf_i: i\in I, n=0,1,2,\dots\}$ to form a frame for the space $\calH$. For normal operators $A$ we completely solve the problem by proving a characterization theorem. Our proof incorporates techniques from different areas of mathematics, such as operator theory, spectral theory, harmonic analysis, and complex analysis in the unit disk. In the second part of the paper we drop the strong condition that $A$ be normal.
Despite this quite general setting, we are able to prove a characterization which allows us to infer many strong necessary conditions on the operator $A$. For example, $A$ needs to be similar to a contraction of a very special kind. We also prove a characterization theorem for the finite-dimensional case. --- These results provide a theoretical solution to the so-called Dynamical Sampling problem where a signal $f$ that is evolving in time through iterates of an operator $A$ is spatially sub-sampled at various times and one seeks to reconstruct the signal $f$ from these spatial-temporal samples. \end{abstract} \subjclass[2010]{94A20, 42C15, 30J99, 47A53} \keywords{Sampling Theory, Dynamical Sampling, Frame, Normal Operator, Semi-Fredholm, Strongly stable, Contraction} \maketitle \thispagestyle{empty} \section{Introduction} Given a system of vectors $\{f_i\}_{i\in I}$ from some Hilbert space $\calH$ and a normal operator $A$, we consider the collection of iterates $\calA=\{A^nf_i: i\in I, n = 0, 1, \dots, l_i\}.$ We are interested in the special structure of this set. The relevant questions are when the set $\mathcal{A}$ is complete in $\calH$, when it is a basis, when it is a Bessel sequence, or when it forms a frame for $\calH$. In particular, one seeks conditions on the operator $A$, the vectors $\{f_i\}$ and the number of iterations $l_i$ in order to guarantee the desired properties of the system $\calA$. These questions are in general of a very difficult nature. Their answers require the use of notions and techniques of different areas of mathematics such as operator theory, spectral theory, harmonic analysis, and complex analysis in the unit disk. The results are most of the time unexpected. Just to mention some examples, it was proved in \cite{acmt} that if $A$ is a diagonal operator in $\ell^2(\ensuremath{\mathbb N})$, the collection $\calA$ can never be a basis of $\calH$.
It was also shown in \cite{acmt} that for these kinds of operators the orbit $(A^nf)_{n\in\ensuremath{\mathbb N}}$ of one vector $f\in\ell^2(\ensuremath{\mathbb N})$ is a frame for $\ell^2(\ensuremath{\mathbb N})$ if and only if the sequence of eigenvalues of $A$ is a set of interpolation for the Hardy space $H^2(\ensuremath{\mathbb D})$ of the unit disk $\ensuremath{\mathbb D}$ together with some boundedness condition on the vector $f$. In signal processing this problem constitutes an instance of the so-called Dynamical Sampling problem. In Dynamical Sampling a signal $f$ that is evolving in time through an operator $A$ is spatially sub-sampled at multiple times and one seeks to reconstruct the signal $f$ from these spatial-temporal samples, thereby exploiting time evolution (see, e.g., \cite{aadp,accmp,acmt,adk,ap,lv,rclv}). Obviously, the task of reconstructing the signal is an inverse problem. In this paper we give necessary and sufficient conditions for its well-posedness. In the following, we shall introduce the reader to the motivation, the ideas, and the details of Dynamical Sampling, describe the current state of research, and expose our contribution in this paper. \vspace*{.2cm} \subsection{Motivation and idea of Dynamical Sampling} Let us assume that we are given the task of spatially sampling (i.e., evaluating) a signal $f$ from a function space $\calH$ in such a way that $f$ can later be recovered from these samples. The first idea is, of course, to sample the function $f$ at many convenient positions $x_i$ -- hoping that the knowledge of the properties of the functions in $\calH$ suffices to recover $f$ from the samples $f(x_i)$. However, in real-world scenarios there are typically many restrictions that one has to deal with. For example, the access to some of the required places $x_i$ might be prohibited.
Another problem is that sensors are usually very expensive so that the installation of a great number of them in order to guarantee a high-accuracy recovery becomes a crucial financial problem. However, in many situations the signal $f$ also varies in time and the evolution law is known. The idea of Dynamical Sampling is to avoid the above-mentioned obstacles by reducing the number of positions $x_i$ and to sample $f$ not only at one but at various times, thereby exploiting the knowledge of the evolution law. This idea was first considered by Lu et al. (see \cite{lv,rclv}), where the authors investigated signals obeying the heat equation. Therefore, in our model let us add a time entry to $f$ and assume that $f(t,\cdot)$ remains in $\calH$ for each $t\ge 0$ and that $f(t,x)$ is a solution to a dynamical system. In the simplest case, where this dynamical system is homogeneous and linear, the function $u(t) = f(t,\cdot)$, $t\ge 0$, maps $[0,\infty)$ to $\calH$ and satisfies $\dot u(t) = Bu(t)$, where $B$ is a generator of a semigroup $(T_t)_{t\ge 0}$ of operators. The solution of this Cauchy problem is then given by $u(t) = T_tu_0$, where $u_0 = u(0)$ is our original signal. If we sample uniformly in time and at fixed positions, the samples are of the following form: $$ f(nt_0,x_i) = u(nt_0)(x_i) = [T_{t_0}^nu_0](x_i), \qquad n = 0,1,2,\ldots,n_i,\;i\in I. $$ If $\calH$ is in fact a reproducing kernel Hilbert space (RKHS) with kernel $K$, we have $$ f(nt_0,x_i) = \left\langle T_{t_0}^nu_0,K_{x_i}\right\rangle = \left\langle u_0,A^nK_{x_i}\right\rangle, $$ where $A := T_{t_0}^*$. Since the original task was to recover $u_0$ from the retrieved information, the question now becomes: ``{\it Is $(A^nK_{x_i})_{n,i}$ complete in $\calH$?}''. If one requires the recovery to be a stable process, the question is ``{\it Is $(A^nK_{x_i})_{n,i}$ a frame for $\calH$?}''.
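The passage from time samples to iterates of the adjoint is plain adjointness, and it can be illustrated in finite dimensions, with $\mathbb R^d$ standing in for the RKHS and standard basis vectors standing in for the kernels $K_{x_i}$ (the evolution matrix and signal below are illustrative):

```python
# Finite-dimensional sketch of the sampling identity <T^n u0, e_i> = <u0, (T*)^n e_i>:
# in R^d with the standard inner product, T* is the transpose, and e_i plays the
# role of the reproducing kernel K_{x_i} (evaluation at coordinate i).
def matvec(M, x):
    return [sum(M[r][c] * x[c] for c in range(len(x))) for r in range(len(M))]

def transpose(M):
    return [list(col) for col in zip(*M)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

T = [[0.5, 0.2, 0.0],
     [0.1, 0.4, 0.3],
     [0.0, 0.2, 0.6]]          # illustrative evolution operator
u0 = [1.0, -2.0, 0.5]          # illustrative initial signal
A = transpose(T)               # A = T^* in the real case

for i in range(3):
    e_i = [1.0 if k == i else 0.0 for k in range(3)]
    u, kern = list(u0), list(e_i)
    for n in range(4):         # check the identity for n = 0, 1, 2, 3
        assert abs(dot(u, e_i) - dot(u0, kern)) < 1e-9
        u = matvec(T, u)       # advance the signal: T^{n+1} u0
        kern = matvec(A, kern) # advance the kernel: A^{n+1} e_i
```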
In the general Dynamical Sampling problem (see, e.g., \cite{acmt,accmp}), the $K_{x_i}$ in an RKHS are replaced by vectors $f_i$ from an arbitrary Hilbert space $\calH$. The question is now the following: \begin{quote} {\it For which operators $A$, which sets $I,N_i\subset\ensuremath{\mathbb N}$, and which vectors $f_i\in\calH$, $i\in I$, is the system $(A^nf_i)_{i\in I,\,n\in N_i}$ complete in $\calH$ or a frame for $\calH$?} \end{quote} \noindent In this paper, we focus on the case where the iteration sets $N_i$ do not depend on $i\in I$ and equal $N_i := N := \{0,1,\ldots,\dim\calH - 1\}$. In particular, $N = \ensuremath{\mathbb N}$ if $\calH$ is infinite-dimensional. \vspace*{.2cm} \subsection{Previous works on the topic and our contribution} The history of Dynamical Sampling is fairly young. The papers \cite{lv,rclv} of Vetterli et al.\ can be seen as the first works on Dynamical Sampling. They consider the sampling of signals under diffusion evolution. The next series of papers, written by Aldroubi et al.\ (see, e.g., \cite{aadp,adk}), dealt with the special class of convolution operators $A$. The first paper on the above-mentioned problem in its most general form was \cite{acmt}, in which the authors considered both the finite-dimensional and the infinite-dimensional case. They proved that if $A\in\ensuremath{\mathbb C}^{d\times d}$ is diagonalizable, then $(A^nf_i)_{i\in I,\,n\in N}$ is a frame for $\calH = \ensuremath{\mathbb C}^d$ if and only if for each eigenprojection $P$ of $A$ we have that $(Pf_i)_{i\in I}$ is complete in $P\calH$.
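The finite-dimensional criterion just quoted is easy to illustrate numerically. In the sketch below (our illustration; the eigenvalues and test vectors are arbitrary choices), $A$ is diagonal with distinct eigenvalues, so the eigenprojections are the coordinate projections, and the orbit of a single vector $f$ spans $\ensuremath{\mathbb C}^d$ exactly when no coordinate of $f$ vanishes.

```python
import numpy as np

# Illustration (ours) of the diagonalizable criterion: for
# A = diag(lambda_1, ..., lambda_d) with distinct eigenvalues, the orbit
# (A^n f), 0 <= n < d, spans C^d iff every entry of f is nonzero,
# i.e. iff P f != 0 for every eigenprojection P.
d = 5
A = np.diag([0.1, 0.3, 0.5, 0.7, 0.9])   # arbitrary distinct eigenvalues

def orbit_rank(f):
    # Krylov matrix with columns f, Af, ..., A^(d-1) f
    K = np.column_stack([np.linalg.matrix_power(A, n) @ f for n in range(d)])
    return np.linalg.matrix_rank(K)

f_good = np.ones(d)                          # P_j f != 0 for every j
f_bad = np.array([1.0, 1.0, 0.0, 1.0, 1.0])  # one vanishing eigenprojection
```

Here `orbit_rank(f_good)` equals $d$, while `orbit_rank(f_bad)` drops to $d-1$: the zero coordinate kills one eigenprojection, exactly as the criterion predicts.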
If the operator $A$ is not diagonalizable, the above statement can be generalized, using the Jordan canonical form: If $A\in\ensuremath{\mathbb C}^{d\times d}$ and $\calF = \{f_i : i\in I\}$, then $(A^nf_i)_{i\in I,\,n\in N}$ is a frame for $\calH = \ensuremath{\mathbb C}^d$ if and only if for each eigenvalue $\lambda$ the image $Q_\lambda\calF$ of $\calF$ under the projection $Q_\lambda$ onto the cyclic Jordan vectors for the eigenvalue $\lambda$ along the image of $A-\lambda$ is complete in $Q_\lambda\calH$. The drawback of that approach is that it practically requires the knowledge of the entire Jordan structure of $A$. Here, we provide another necessary and sufficient condition which is easier to check (cf.\ Theorem \ref{t:main_findim}). In fact, the projection $Q_\lambda$ from above can be replaced by {\em any} projection onto a complementary subspace of the image of $A-\lambda$. Hence, $Q_\lambda$ can be replaced by the orthogonal projection onto $\operatorname{ker}(A^*-\overline\lambda)$. Concerning the infinite-dimensional situation, the most interesting result in \cite{acmt} addresses the one-vector problem (i.e., $|I|=1$). This result was further improved in \cite{accmp} and \cite{ap}. Its final version reads as follows: {\it If $A$ is a normal operator, the system $(A^nf)_{n\in\ensuremath{\mathbb N}}$ is a frame for $\calH$ if and only if {\rm (a)} $A = \sum_{j\in\ensuremath{\mathbb N}}\lambda_j\langle\,\cdot\,,e_j\rangle e_j$ with an ONB $(e_j)_{j\in\ensuremath{\mathbb N}}$, {\rm (b)} the sequence $(\lambda_j)_{j\in\ensuremath{\mathbb N}}$ is uniformly separated in the unit disk \braces{cf.\ page \pageref{p:us}}, and {\rm (c)} the sequence $(|\langle f,e_j\rangle|^2/(1-|\lambda_j|^2))_{j\in\ensuremath{\mathbb N}}$ is bounded from below and above.} One of the aims of this paper is to generalize this result to arbitrary finite index sets $I$.
However, this problem turns out to be more difficult to tackle than one might think at first glance -- the attempt to use the same techniques as in the case $|I|=1$ fails badly. Nevertheless, we find the right methods to deal with the new situation (see Theorem \ref{t:characterization}). Three conditions in our characterization are generalizations of the conditions (a)--(c) above in the one-vector case. However, one has to add a fourth condition which is trivially satisfied when $|I|=1$. Very little is known about the Dynamical Sampling problem for general non-normal bounded operators $A$. It was only proved in \cite{ap} that for $(A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$ to be a frame for $\calH$ it is necessary that $A^*$ be strongly stable, i.e., $(A^*)^nf\to 0$ as $n\to\infty$ for each $f\in\calH$. Here, we complete this condition to a characterizing set of three conditions (cf.\ Theorem \ref{t:charac}). Using this theorem, we completely characterize the class of all operators $A$ for which there exists some finite set $\{f_i : i\in I\}$ such that $(A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$ is a frame for $\calH$. In fact, these are the operators that are similar to a strongly stable contraction $T$ for which $\operatorname{Id} - TT^*$ has finite rank. We also characterize the Riesz bases of the form $(A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$ when $I$ is finite. In this case, the operator $A$ has to be similar to the $|I|$-th power of the unilateral shift in $\ell^2(\ensuremath{\mathbb N})$. \vspace*{.2cm} \subsection{Outline} The present paper is organized as follows. Section \ref{s:normal} contains our main results concerning Dynamical Sampling on finite index sets with normal operators, including the above-mentioned characterization consisting of four conditions. In Section \ref{s:general} we drop the requirement that $A$ be normal and provide our results, summarized above, for this much more general setting.
In Section \ref{s:findim} we deal with the finite-dimensional situation and prove a characterization result whose condition can be checked very easily. \vspace*{.2cm} \subsection{Notation} We conclude this Introduction by fixing the notation that we shall use throughout this paper. By $\ensuremath{\mathbb N}$ we denote the set of the natural numbers {\it including zero}. The unit circle and the open unit disk in $\ensuremath{\mathbb C}$ are denoted by $\ensuremath{\mathbb T}$ and $\ensuremath{\mathbb D}$, respectively, i.e., $$ \ensuremath{\mathbb T} = \{z\in\ensuremath{\mathbb C} : |z|=1\}\qquad\text{and}\qquad\ensuremath{\mathbb D} = \{z\in\ensuremath{\mathbb C} : |z| < 1\}. $$ The $p$-th Hardy space on the unit disk, $1\le p\le\infty$, is denoted by $H^p(\ensuremath{\mathbb D})$. Recall that, in particular, $H^2(\ensuremath{\mathbb D})$ consists of those functions that have a representation $\varphi(z) = \sum_{n=0}^\infty c_nz^n$, $z\in\ensuremath{\mathbb D}$, where $c = (c_n)_{n\in\ensuremath{\mathbb N}}\in\ell^2(\ensuremath{\mathbb N})$, and that $\|\varphi\|_{H^2(\ensuremath{\mathbb D})} = \|c\|_2$. Throughout, $\calH$ stands for a separable Hilbert space. If $\calK$ is another Hilbert space, by $L(\calH,\calK)$ we denote the set of all bounded linear operators from $\calH$ to $\calK$ which are defined on all of $\calH$. As usual, we set $L(\calH) := L(\calH,\calH)$. The kernel (i.e., the null-space) and the range (i.e., the image) of $T\in L(\calH)$ are denoted by $\operatorname{ker} T$ and $\operatorname{ran} T$, respectively. \section{Dynamical Sampling with normal operators}\label{s:normal} In this section we investigate sequences of the form $(A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$, where $A$ is a bounded normal operator in $\calH$, $I$ is an at most countable index set, and $(f_i)_{i\in I}\subset\calH$. The spectral measure of $A$ will be denoted by $E$.
Throughout, we set \begin{equation}\label{e:calA} \calA := \calA(A,(f_i)_{i\in I}) := (A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}. \end{equation} In the sequel, we will often be dealing with diagonal operators -- a special class of normal operators -- which we define as follows. \begin{defn} A {\em diagonal operator} in $\calH$ is of the form $A = \sum_{j\in J}\lambda_jP_j$ (the series converging in the strong operator topology), where $J$ is a finite or countable index set, $(\lambda_j)_{j\in J}\subset\ensuremath{\mathbb C}$ is a bounded sequence of scalars, and $(P_j)_{j\in J}$ is a sequence of orthogonal projections with $P_jP_k = 0$ for $j\neq k$. The series $\sum_{j\in J}\lambda_jP_j$ is called a {\em normal form} of $A$ if $\lambda_j\neq\lambda_k$ for $j\neq k$ and $\sum_{j\in J}P_j = \operatorname{Id}$. The {\em multiplicity} of a diagonal operator $A$ is defined by $$ \operatorname{mult}(A) := \max\{\dim P_j\calH : j\in J\}, $$ where $(P_j)_{j\in J}$ is the sequence of orthogonal projections in a normal form of $A$. If the maximum does not exist, we set $\operatorname{mult}(A) := \infty$ and say that $A$ has infinite multiplicity. \end{defn} The normal form of a diagonal operator is obviously unique up to permutations of $J$. Moreover, it is clear that the $\lambda_j$ in the normal form of $A$ are the distinct eigenvalues of $A$ and that $P_j$ projects onto the eigenspace $\operatorname{ker}(A - \lambda_j)$. Note that every diagonal operator is bounded and normal. If $f\in\calH$ and $E$ denotes the spectral measure of $A$, the measure $\mu_f := \|E(\cdot)f\|^2$ obviously takes the form \begin{equation}\label{e:mu_f} \mu_f = \sum_{j\in J}\delta_{\lambda_j}\|P_jf\|^2. \end{equation} Recall that the {\em pseudo-hyperbolic metric} $\varrho$ on the open unit disk is defined by \begin{equation*} \varrho(z,w) := \left|\frac{z-w}{1-z\overline w}\right|,\quad z,w\in\ensuremath{\mathbb D}.
\end{equation*} Since \begin{equation}\label{e:hyp_identity} |1-z\overline w|^2 = |z-w|^2 + (1-|z|^2)(1-|w|^2), \end{equation} we always have $\varrho(z,w) < 1$. It is well known that $\varrho$ is indeed a metric on $\ensuremath{\mathbb D}$. For $z\in\ensuremath{\mathbb D}$ and $r > 0$ we denote by $\mathbb B_r(z)$ the pseudo-hyperbolic ball ($\varrho$-ball) in $\ensuremath{\mathbb D}$ of radius $r$ and center $z$, i.e., $$ \mathbb B_r(z) = \{\lambda\in\ensuremath{\mathbb D} : \varrho(\lambda,z) < r\}. $$ Note that $\mathbb B_r(z) = \{\lambda\in\ensuremath{\mathbb D} : |\lambda-z'| < r'\}$ with certain $r'< r$ and $z' = tz$, where $t < 1$. A sequence $\Lambda = (\lambda_j)_{j\in\ensuremath{\mathbb N}}$ in the open unit disk $\ensuremath{\mathbb D}$ is called {\em separated} if $$ \inf_{j\neq k}\varrho(\lambda_j,\lambda_k) > 0. $$ The sequence $\Lambda$ is called {\em uniformly separated} if\label{p:us} $$ \inf_{k\in\ensuremath{\mathbb N}}\,\prod_{j\neq k}\varrho(\lambda_j,\lambda_k)\,>\,0. $$ Obviously, a uniformly separated sequence is separated. We refer to Appendix \ref{a:sequences} for more detailed relationships between these notions. The next theorem was proved in \cite[Theorem 3.14]{acmt}. \begin{thm}\label{t:one} Let $A = \sum_{j=0}^\infty\lambda_jP_j$ be a diagonal operator in normal form and $f\in\calH$. Then $(A^nf)_{n\in\ensuremath{\mathbb N}}$ is a frame for $\calH$ if and only if the following statements hold: \begin{enumerate} \item[{\rm (i)}] $(\lambda_j)_{j\in\ensuremath{\mathbb N}}$ is a uniformly separated sequence in $\ensuremath{\mathbb D}$. \item[{\rm (ii)}] $\dim P_j\calH = 1$ for all $j\in\ensuremath{\mathbb N}$. \item[{\rm (iii)}] There exist $\alpha,\beta > 0$ such that $$ \alpha\,\le\,\frac{\|P_jf\|^2}{1 - |\lambda_j|^2}\,\le\,\beta\quad\text{for all $j\in\ensuremath{\mathbb N}$}.
$$ \end{enumerate} \end{thm} Note that the system $(A^nf)_{n\in\ensuremath{\mathbb N}}$ in Theorem \ref{t:one} corresponds to systems of the form $\calA$ in \eqref{e:calA} with the index set $I$ being a singleton, i.e., $|I|=1$. In this section it is our aim to generalize Theorem \ref{t:one} to arbitrary finite index sets $I$ (see Theorem \ref{t:characterization} below). Although this might seem to be a trivial task, our treatment shows that this is not the case. In order to formulate our main result Theorem \ref{t:characterization}, it is necessary to introduce a few more notions concerning sequences in the unit disk. For a sequence $\Lambda = (\lambda_j)_{j\in\ensuremath{\mathbb N}}\subset\ensuremath{\mathbb D}$ in the open unit disk we agree to write \begin{equation}\label{e:veps} \varepsilon_j := \sqrt{1 - |\lambda_j|^2},\qquad j\in\ensuremath{\mathbb N}. \end{equation} Although $\varepsilon_j$ depends on $\Lambda$, it will always be clear which $\Lambda$ it refers to. We also define the linear {\em evaluation operator} \begin{equation}\label{e:eval1} T_\Lambda : H^2(\ensuremath{\mathbb D})\supset D(T_\Lambda)\to\ell^2(\ensuremath{\mathbb N}),\qquad T_\Lambda\varphi := (\varepsilon_j\varphi(\lambda_j))_{j\in\ensuremath{\mathbb N}}, \end{equation} on its natural domain \begin{equation}\label{e:eval2} D(T_\Lambda) := \left\{\varphi\in H^2(\ensuremath{\mathbb D}) : (\varepsilon_j\varphi(\lambda_j))_{j\in\ensuremath{\mathbb N}}\in\ell^2(\ensuremath{\mathbb N})\right\}. \end{equation} Note that $\varepsilon_j\varphi(\lambda_j) = \langle\varphi,K_{\lambda_j}\rangle_{H^2(\ensuremath{\mathbb D})}$, where \begin{equation}\label{e:K} K_\lambda(z) = \sqrt{1 - |\lambda|^2}\sum_{n=0}^\infty\overline\lambda^nz^n = \frac{\sqrt{1 - |\lambda|^2}}{1 - \overline\lambda z},\qquad\lambda,z\in\ensuremath{\mathbb D}, \end{equation} is the normalized reproducing kernel of $H^2(\ensuremath{\mathbb D})$.
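Since the $H^2(\ensuremath{\mathbb D})$ inner product is the $\ell^2$ inner product of Taylor coefficients, the identity $\varepsilon_j\varphi(\lambda_j) = \langle\varphi,K_{\lambda_j}\rangle_{H^2(\ensuremath{\mathbb D})}$ can be checked directly on truncated coefficient sequences. A small numerical sanity check (ours; the particular $\lambda$ and $\varphi$ are arbitrary choices):

```python
import numpy as np

# Check (ours) of eps * phi(lambda) = <phi, K_lambda> in H^2(D): the inner
# product of two H^2 functions is the l^2 inner product of their Taylor
# coefficients, and K_lambda has coefficients eps * conj(lambda)^n.
lam = 0.4 + 0.3j
eps = np.sqrt(1 - abs(lam) ** 2)

N = 200                              # series truncation; |lam|^N is negligible
n = np.arange(N)
c = 1.0 / (n + 1.0)                  # coefficients of some phi in H^2(D)

phi_at_lam = np.sum(c * lam ** n)    # phi(lambda) = sum c_n lambda^n
k = eps * np.conj(lam) ** n          # coefficients of K_lambda
inner = np.sum(c * np.conj(k))       # <phi, K_lambda> = sum c_n conj(k_n)
```

Up to the truncation error, `inner` agrees with `eps * phi_at_lam`, as the geometric series behind \eqref{e:K} predicts.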
Hence, the operator $T_\Lambda$ is the analysis operator corresponding to the sequence $(K_{\lambda_j})_{j\in\ensuremath{\mathbb N}}$. As every analysis operator is closed on its natural domain, it follows from the closed graph theorem that $T_\Lambda$ is a bounded operator from $H^2(\ensuremath{\mathbb D})$ to $\ell^2(\ensuremath{\mathbb N})$ if and only if $D(T_\Lambda) = H^2(\ensuremath{\mathbb D})$. The next theorem is the main result in this section. For a Bessel sequence $E$ in $\calH$ we let $C_E$ denote the analysis operator of $E$. \begin{thm}\label{t:characterization} If $|I|$ is finite, then the system $\calA = (A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$ is a frame for $\calH$ if and only if the following conditions are satisfied: \begin{enumerate} \item[{\rm (i)}] $A = \sum_{j=0}^\infty\lambda_jP_j$ is a diagonal operator \braces{in normal form} having multiplicity $\operatorname{mult}(A)\le |I|$. \item[{\rm (ii)}] $\Lambda = (\lambda_j)_{j\in\ensuremath{\mathbb N}}$ is a union of $|I|$ uniformly separated sequences in $\ensuremath{\mathbb D}$. \item[{\rm (iii)}] There exist $\alpha,\beta > 0$ such that for all $j\in\ensuremath{\mathbb N}$ and all $h\in P_j\calH$ we have \begin{equation}\label{e:fs} \alpha(1-|\lambda_j|^2)\|h\|^2\,\le\,\sum_{i\in I}|\langle h,P_jf_i\rangle|^2\,\le\,\beta(1-|\lambda_j|^2)\|h\|^2. \end{equation} \item[{\rm (iv)}] $(\operatorname{ran} T_{\Lambda})^{|I|} + \operatorname{ker} C_E^* = \ell^2(I\times\ensuremath{\mathbb N})$, where $E := ((1-|\lambda_j|^2)^{-1/2}P_jf_i)_{j\in\ensuremath{\mathbb N},\,i\in I}$. \end{enumerate} \end{thm} Before we head towards the proof of Theorem \ref{t:characterization}, let us first make a few remarks. \begin{rem}\label{r:jojo} (a) The necessity of condition (i) in Theorem \ref{t:characterization} for $\calA$ to be a frame for $\calH$ was already proved in \cite{ap}.
\smallskip (b) Condition (iii) means that for each $j\in\ensuremath{\mathbb N}$ the finite system $((1-|\lambda_j|^2)^{-1/2}P_jf_i)_{i\in I}$ is a frame for $P_j\calH$ with frame bounds $\alpha$ and $\beta$. Since the frame bounds are independent of $j\in\ensuremath{\mathbb N}$, condition (iii) is equivalent to saying that the system $E = ((1-|\lambda_j|^2)^{-1/2}P_jf_i)_{j\in\ensuremath{\mathbb N},\,i\in I}$ is a frame for $\calH$ with frame bounds $\alpha$ and $\beta$. \smallskip (c) Here and in the sequel, we will make use of the following notation: \begin{equation}\label{e:TLaI} T_{\Lambda,I} := \bigoplus_{i\in I}T_\Lambda. \end{equation} That is, $T_{\Lambda,I}$ is a closed linear operator mapping from $D(T_{\Lambda,I}) = D(T_\Lambda)^{|I|}\subset H_I^2$ (here, $H_I^2 = (H^2(\ensuremath{\mathbb D}))^{|I|}$) to $(\ell^2(\ensuremath{\mathbb N}))^{|I|} = \ell^2(I\times\ensuremath{\mathbb N})$. Hence, $(\operatorname{ran} T_\Lambda)^{|I|} = \operatorname{ran} T_{\Lambda,I}$. Since $E$ is a frame for $\calH$, the relation $\operatorname{ran} T_{\Lambda,I} + \operatorname{ker} C_E^* = \ell^2(I\times\ensuremath{\mathbb N})$ in (iv) can be equivalently replaced by $$ C_E^*\operatorname{ran} T_{\Lambda,I} = \calH. $$ Indeed, assume that the relation in (iv) holds. As $E$ is a frame for $\calH$, for any $h\in\calH$ there exists $c\in\ell^2(I\times\ensuremath{\mathbb N})$ such that $C_E^*c = h$. Now, $c = c_1+c_2$ with $c_1\in\operatorname{ran} T_{\Lambda,I}$ and $c_2\in\operatorname{ker} C_E^*$. Hence, $h = C_E^*c_1\in C_E^*\operatorname{ran} T_{\Lambda,I}$. Conversely, if $C_E^*\operatorname{ran} T_{\Lambda,I} = \calH$ and $c\in\ell^2(I\times\ensuremath{\mathbb N})$, then $C_E^*c = C_E^*T_{\Lambda,I}\phi$ for some $\phi\in H_I^2$. Thus, $c - T_{\Lambda,I}\phi\in\operatorname{ker} C_E^*$, i.e., $c\in\operatorname{ran} T_{\Lambda,I} + \operatorname{ker} C_E^*$.
\smallskip (d) If (ii) holds and $\lambda_i\neq\lambda_j$ for $i\neq j$ (which follows from (i)), then $(\operatorname{ran} T_{\Lambda})^{|I|}$ in (iv) is dense in $\ell^2(I\times\ensuremath{\mathbb N})$. Indeed, since (ii) implies that the operator $T_\Lambda$ is bounded and everywhere defined on $H^2(\ensuremath{\mathbb D})$ (cf.\ Theorem \ref{t:finite_union}), the claim follows from Lemma \ref{l:dense}. \smallskip (e) Note that (ii) does not prevent $\Lambda$ from being a union of fewer than $|I|$ uniformly separated sequences, because each subsequence of a uniformly separated sequence is also uniformly separated. Hence, $\Lambda$ might even be uniformly separated itself. In this case, we know that $\operatorname{ran} T_\Lambda = \ell^2(\ensuremath{\mathbb N})$ (see Theorem \ref{t:interpolating}), so that condition (iv) is trivially satisfied. \smallskip (f) As noted in the last remark, (iv) follows from (ii) if $\Lambda$ is uniformly separated. In particular, (iv) need not be stated in the case $|I|=1$ (cf.\ Theorem \ref{t:one}). However, if $|I|>1$, condition (iv) does not in general follow from (i)--(iii). As an example, choose a sequence $\Lambda = (\lambda_j)_{j\in\ensuremath{\mathbb N}}$ which is a union of no more than $|I|$ uniformly separated sequences, but is not uniformly separated itself. In addition, choose orthogonal projections $P_j$ such that $\sum_{j=0}^\infty P_j = \operatorname{Id}$ (in the strong sense) and $\dim P_j\calH = |I|$ for each $j\in\ensuremath{\mathbb N}$ as well as orthonormal bases $(g_{ij})_{i\in I}$ of $P_j\calH$, $j\in\ensuremath{\mathbb N}$. Now, define $A := \sum_{j=0}^\infty\lambda_j P_j$ and $f_i := \sum_{j=0}^\infty\varepsilon_jg_{ij}$, $i\in I$, where $\varepsilon_j = \sqrt{1-|\lambda_j|^2}$.
Then conditions (i)--(iii) in Theorem \ref{t:characterization} are satisfied, but (iv) is not, as $E = (\varepsilon_j^{-1}P_jf_i)_{j\in\ensuremath{\mathbb N},\,i\in I} = (g_{ij})_{j\in\ensuremath{\mathbb N},\,i\in I}$ is an orthonormal basis of $\calH$ (and thus $\operatorname{ker} C_E^* = \{0\}$) and $\Lambda$ is not uniformly separated (i.e., $\operatorname{ran} T_\Lambda\neq\ell^2(\ensuremath{\mathbb N})$). \end{rem} \ \\ For an at most countable index set $I$ we define $H^2_I := \bigoplus_{i\in I}H^2(\ensuremath{\mathbb D})$. This is the space of tuples $\phi = (\varphi_i)_{i\in I}$, where $\varphi_i\in H^2(\ensuremath{\mathbb D})$ for each $i\in I$, such that $\sum_{i\in I}\|\varphi_i\|_{H^2(\ensuremath{\mathbb D})}^2 < \infty$. One defines $\|\phi\|_{H^2_I} := (\sum_{i\in I}\|\varphi_i\|_{H^2(\ensuremath{\mathbb D})}^2)^{1/2}$. In addition, we shall write the tensor product of a sequence $\vec y = (y_i)_{i\in I}\subset\ensuremath{\mathbb C}$ and a function $\varphi\in H^2(\ensuremath{\mathbb D})$ as $\vec y\varphi$ (i.e., $(\vec y\varphi)(z) = (y_i\varphi(z))_{i\in I}$, $z\in\ensuremath{\mathbb D}$). The result is an element of $H^2_I$ if and only if $\vec y\in\ell^2(I)$. In this case, \begin{equation}\label{e:prodnorm} \|\vec y\varphi\|_{H^2_I} = \|\vec y\|_2\|\varphi\|_{H^2(\ensuremath{\mathbb D})}. \end{equation} The following theorem will be used in the proof of Theorem \ref{t:characterization}. However, it might be of independent interest. Here, the index set $I$ is allowed to be infinite. \begin{thm}\label{t:riesz} Let $A = \sum_{j=0}^\infty\lambda_jP_j$ be a diagonal operator in normal form with $(\lambda_j)_{j\in\ensuremath{\mathbb N}}\subset\ensuremath{\mathbb D}$ and let $f_i\in\calH$, $i\in I$. For $j\in\ensuremath{\mathbb N}$ we put $n_j := \dim P_j\calH$ \braces{where possibly $n_j = \infty$} and $L_j := \{1,\ldots,n_j\}$.
Moreover, let $(e_{jl})_{l=1}^{n_j}$ be an orthonormal basis of $P_j\calH$ and for $l\in L_j$ define $\vec y_{jl} := \varepsilon_j^{-1}(\langle e_{jl},f_i\rangle)_{i\in I}$ as well as $\phi_{jl} := \vec y_{jl}K_{\lambda_j}$. Then the following statements hold. \begin{enumerate} \item[{\rm (i)}] $\calA$ is a Bessel sequence in $\calH$ if and only if $\vec y_{jl}\in\ell^2(I)$ \braces{i.e., $\phi_{jl}\in H_I^2$} for each $j\in\ensuremath{\mathbb N}$ and each $l\in L_j$ and $\Phi = (\phi_{jl})_{j\in\ensuremath{\mathbb N},\,l\in L_j}$ is a Bessel sequence in $H_I^2$. In this case, the Bessel bounds of $\calA$ and $\Phi$ coincide. \item[{\rm (ii)}] $\calA$ is a frame for $\calH$ if and only if $\vec y_{jl}\in\ell^2(I)$ for each $j\in\ensuremath{\mathbb N}$ and each $l\in L_j$ and $\Phi = (\phi_{jl})_{j\in\ensuremath{\mathbb N},\,l\in L_j}$ is a Riesz sequence in $H_I^2$. In this case, the frame bounds of $\calA$ coincide with the Riesz bounds of $\Phi$. \end{enumerate} \end{thm} \begin{proof} In the following we will often make use of the unitary operator $U : \ell^2(I\times\ensuremath{\mathbb N})\to H_I^2$, defined by $Ux = (\varphi_i)_{i\in I}$, where $\varphi_i(z) = \sum_{n\in\ensuremath{\mathbb N}}x_{in}z^n$, $z\in\ensuremath{\mathbb D}$. Assume that $\calA$ is a Bessel sequence in $\calH$ with Bessel bound $\beta > 0$. Then for $h\in\calH$ we have that $$ \sum_{n=0}^\infty\sum_{i\in I}|\langle h,A^nf_i\rangle|^2\,\le\,\beta\|h\|^2. $$ Let $j\in\ensuremath{\mathbb N}$ and $l\in L_j$. Since for $h = e_{jl}$ we have $$ \sum_{n=0}^\infty\sum_{i\in I}|\langle h,A^nf_i\rangle|^2 = \sum_{n=0}^\infty\sum_{i\in I}|\lambda_j|^{2n}|\langle e_{jl},f_i\rangle|^2 = \sum_{i\in I}\frac{|\langle e_{jl},f_i\rangle|^2}{1-|\lambda_j|^2}, $$ it follows that $\sum_{i\in I}\varepsilon_j^{-2}|\langle e_{jl},f_i\rangle|^2\le\beta$, i.e., $\vec y_{jl}\in\ell^2(I)$. Let $\psi = (\psi_i)_{i\in I}\in H^2_I$ and put $x := U^{-1}\psi\in\ell^2(I\times\ensuremath{\mathbb N})$.
Denote the synthesis operator of $\calA$ by $T$. Then for each $j\in\ensuremath{\mathbb N}$ and $l\in L_j$ we have \begin{align*} \langle Tx,e_{jl}\rangle &=\left\langle\sum_{i\in I}\sum_{n=0}^\infty x_{in}A^nf_i,e_{jl}\right\rangle = \sum_{i\in I}\sum_{n=0}^\infty x_{in}\lambda_j^n\langle f_i,e_{jl}\rangle\\ &= \sum_{i\in I}\psi_i(\lambda_j)\langle f_i,e_{jl}\rangle = \sum_{i\in I}\frac{\langle f_i,e_{jl}\rangle}{\varepsilon_j}\left\langle\psi_i,K_{\lambda_j}\right\rangle\\ &= \sum_{i\in I}\left\langle\psi_i,\frac{\langle e_{jl},f_i\rangle}{\varepsilon_j}K_{\lambda_j}\right\rangle = \langle\psi,\phi_{jl}\rangle. \end{align*} Thus, $\sum_{j=0}^\infty\sum_{l=1}^{n_j}|\langle\psi,\phi_{jl}\rangle|^2 = \sum_{j=0}^\infty\sum_{l=1}^{n_j}|\langle Tx,e_{jl}\rangle|^2 = \|Tx\|^2$, which implies that the sequence $(\phi_{jl})_{j\in\ensuremath{\mathbb N},\,l\in L_j}$ is a Bessel sequence in $H^2_I$. Let $C$ denote its analysis operator. Then the above relation shows that $T = CU$. In particular, the Bessel bounds of both sequences coincide. Moreover, if $\calA$ is a frame for $\calH$, then $C = TU^*$ is onto, meaning that $(\phi_{jl})_{j\in\ensuremath{\mathbb N},\,l\in L_j}$ is indeed a Riesz sequence and that its lower Riesz bound coincides with the lower frame bound of $(A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$. Assume conversely that $\vec y_{jl}\in \ell^2(I)$ for each $j\in\ensuremath{\mathbb N}$ and $l\in L_j$ and that $(\phi_{jl})_{j\in\ensuremath{\mathbb N},\,l\in L_j}$ is a Bessel sequence in $H_I^2$ with Bessel bound $\beta > 0$. If $x\in\ell^2(I\times\ensuremath{\mathbb N})$ has only finitely many non-zero entries and $\psi = Ux$, then $\langle Tx,e_{jl}\rangle = \langle\psi,\phi_{jl}\rangle$, that is, $\|Tx\|^2 = \sum_{j=0}^\infty\sum_{l=1}^{n_j}|\langle\psi,\phi_{jl}\rangle|^2\le\beta\|\psi\|^2 = \beta\|x\|^2$. Thus, $\calA$ is a Bessel sequence.
If, in addition, $(\phi_{jl})_{j\in\ensuremath{\mathbb N},\,l\in L_j}$ is a Riesz sequence, then $T = CU$ is onto, which means that $\calA$ is a frame for $\calH$. \end{proof} If the index set $I$ only contains one element, the system $\calA$ has the form $(A^nf)_{n\in\ensuremath{\mathbb N}}$, where $f\in\calH$. In the next lemma we formulate a characterization from \cite{p} for the case of a diagonal operator. \begin{lem}\label{l:bessel} Let $A = \sum_{j=0}^\infty\lambda_jP_j$ be a diagonal operator in normal form such that $\lambda_j\in\ensuremath{\mathbb D}$ for all $j\in\ensuremath{\mathbb N}$ and let $f\in\calH$. Then the following statements are equivalent. \begin{enumerate} \item[{\rm (i)}] The sequence $(A^nf)_{n\in\ensuremath{\mathbb N}}$ is a Bessel sequence in $\calH$. \item[{\rm (ii)}] There exists a constant $C > 0$ such that $$ \sum_{j=0}^\infty\left|\varphi(\lambda_j)\right|^2\|P_jf\|^2\,\le\,C\|\varphi\|_{H^2(\ensuremath{\mathbb D})}^2\qquad\text{for all }\varphi\in H^2(\ensuremath{\mathbb D}). $$ \end{enumerate} \end{lem} \begin{proof} Since $A$ is a diagonal operator, we have that $\mu_f = \sum_{j=0}^\infty\delta_{\lambda_j}\|P_jf\|^2$ (see \eqref{e:mu_f}). Therefore, for every measurable function $\varphi : \ensuremath{\mathbb C}\to\ensuremath{\mathbb C}$ we have $$ \int|\varphi|^2\,d\mu_f = \sum_{j=0}^\infty|\varphi(\lambda_j)|^2\|P_jf\|^2. $$ Hence, (ii) exactly means that $H^2(\ensuremath{\mathbb D})$ is continuously embedded in $L^2(\mu_f)$. By \cite[Theorem 4.3]{p}, the latter is equivalent to (i). \end{proof} In order to prove Theorem \ref{t:characterization} we need one more definition. \begin{defn}\label{d:index} Let $\Lambda = (\lambda_j)_{j\in\ensuremath{\mathbb N}}$ be a sequence in $\ensuremath{\mathbb D}$. If $\Lambda$ is not a finite union of separated sequences, we set $\operatorname{ind}(\Lambda) := \infty$.
Otherwise, we define \begin{equation*} \operatorname{ind}(\Lambda) := \min\left\{n\in\ensuremath{\mathbb N} : \ensuremath{\mathbb N} = \bigcup_{k=1}^n J_k,\;(\lambda_j)_{j\in J_k}\text{ is separated for each $k=1,\ldots,n$}\right\}. \end{equation*} The value $\operatorname{ind}(\Lambda)$ will be called the {\em index} of the sequence $\Lambda$. \end{defn} \begin{proof}[Proof of Theorem \rmref{t:characterization}] Suppose that $\calA$ is a frame for $\calH$. By \cite[Corollary 1]{ap}, $A$ is a diagonal operator with multiplicity $\operatorname{mult}(A)\le|I|$ having all its eigenvalues in $\ensuremath{\mathbb D}$. Let $A = \sum_{j=0}^\infty\lambda_jP_j$ be its normal form as in (i) and let $\alpha,\beta > 0$ be the frame bounds of $\calA$. That is, \begin{equation}\label{e:frame} \alpha\|h\|^2\,\le\,\sum_{i\in I}\sum_{n=0}^\infty|\langle h,A^nf_i\rangle|^2\,\le\,\beta\|h\|^2,\qquad h\in\calH. \end{equation} Fix $j\in\ensuremath{\mathbb N}$. If $h\in P_j\calH$, then $A^*h = \overline{\lambda_j}h$ and hence we have $|\langle h,A^nf_i\rangle| = |\langle(A^*)^nh,f_i\rangle| = |\lambda_j|^n|\langle h,f_i\rangle|$. Therefore, $$ \sum_{n=0}^\infty\sum_{i\in I}|\langle h,A^nf_i\rangle|^2 = \sum_{n=0}^\infty\sum_{i\in I}|\lambda_j|^{2n}|\langle h,f_i\rangle|^2 = \frac{\sum_{i\in I}|\langle h,f_i\rangle|^2}{1-|\lambda_j|^2}. $$ Together with \eqref{e:frame}, this proves (iii). From (iii) we moreover conclude that \begin{equation} \sum_{i\in I}\|P_jf_i\|^2\,\ge\,\alpha\varepsilon_j^2(\dim P_j\calH)\,\ge\,\alpha\varepsilon_j^2 \end{equation} for each $j\in\ensuremath{\mathbb N}$, where (cf.\ \eqref{e:veps}) $\varepsilon_j := \sqrt{1-|\lambda_j|^2}$, $j\in\ensuremath{\mathbb N}$. For this, simply choose an orthonormal basis of $P_j\calH$ and plug its vectors into $h$ in \eqref{e:fs}.
Thus, for $\varphi\in H^2(\ensuremath{\mathbb D})$ we obtain for the evaluation operator $T_\Lambda$ from \eqref{e:eval1}--\eqref{e:eval2} that \begin{equation} \|T_\Lambda\varphi\|_2^2 = \sum_{j=0}^\infty\varepsilon_j^2|\varphi(\lambda_j)|^2\,\le\,\alpha^{-1}\sum_{i\in I}\sum_{j=0}^\infty|\varphi(\lambda_j)|^2\|P_jf_i\|^2. \end{equation} By Lemma \ref{l:bessel}, the latter expression is bounded from above by $C\|\varphi\|_{H^2(\ensuremath{\mathbb D})}^2$, where $C$ is some positive constant. Thus, the operator $T_\Lambda$ is everywhere defined and bounded. Due to Theorem \ref{t:finite_union} this means that the sequence $\Lambda$ is a finite union of uniformly separated sequences. To prove (iv), we start by noticing that $T_{\Lambda,I}$ is bounded since $T_\Lambda$ is. Let $h\in\calH$ be arbitrary. Then there exists $(c_{in})_{i\in I,\,n\in\ensuremath{\mathbb N}}\in\ell^2(I\times\ensuremath{\mathbb N})$ such that $h = \sum_{i\in I}\sum_{n=0}^\infty c_{in}A^nf_i$. Define $\varphi_i\in H^2(\ensuremath{\mathbb D})$ by $\varphi_i(z) = \sum_{n=0}^\infty c_{in}z^n$ for $i\in I$ and put $\phi := (\varphi_i)_{i\in I}\in H_I^2$. Then $$ h = \sum_{i\in I}\sum_{j=0}^\infty P_j\sum_{n=0}^\infty c_{in}A^nf_i = \sum_{i\in I}\sum_{j=0}^\infty\sum_{n=0}^\infty c_{in}\lambda_j^nP_jf_i = \sum_{i\in I}\sum_{j=0}^\infty\varepsilon_j\varphi_i(\lambda_j)(\varepsilon_j^{-1}P_jf_i), $$ and thus $h = C_E^*(T_\Lambda\varphi_i)_{i\in I} = C_E^*T_{\Lambda,I}\phi$, where $T_{\Lambda,I}$ is the operator in \eqref{e:TLaI}. Hence, we have $C_E^*\operatorname{ran} T_{\Lambda,I} = \calH$, which implies (iv), see Remark \ref{r:jojo}(c). It remains to complete the proof of (ii), i.e., to show that $\Lambda$ is a union of $m:=|I|$ (or fewer) uniformly separated sequences. Taking into account Theorem \ref{t:durenschuster}, it is sufficient to split $\Lambda$ into $m$ separated sequences, that is, to show that $\operatorname{ind}(\Lambda)\le m$.
For this, we fix some positive number $r < \sqrt{\alpha(8\beta|I|)^{-1}}$ and prove that every pseudo-hyperbolic ball $\mathbb B_r(z)$, $z\in\ensuremath{\mathbb D}$, contains at most $m$ elements of the sequence $\Lambda$. Then the claim follows from Lemma \ref{l:index}. Towards a contradiction, suppose that some ball $\mathbb B_r(z_0)$ contains $m+1$ elements of $\Lambda$. Without loss of generality, we may assume that these elements are $\lambda_1,\ldots,\lambda_{m+1}$. Since $\varrho$ is a metric on $\ensuremath{\mathbb D}$, we have $\varrho(\lambda_j,\lambda_k) < 2r$ for all $j,k=1,\ldots,m+1$. Using the notation of Theorem \ref{t:riesz}, let $\vec y_j := \vec y_{j,1} = \varepsilon_j^{-1}(\langle e_{j,1},f_i\rangle)_{i\in I}\in\ensuremath{\mathbb C}^m$, $j\in\ensuremath{\mathbb N}$. Since $\vec y_1,\ldots,\vec y_{m+1}$ are $m+1$ vectors in $\ensuremath{\mathbb C}^m$, there exists some $c\in\ensuremath{\mathbb C}^{m+1}$ such that $\|c\|_2 = 1$ and $\sum_{j=1}^{m+1}c_j\vec y_j = 0$. By Theorem \ref{t:riesz}, $(\vec y_jK_{\lambda_j})_{j\in\ensuremath{\mathbb N}}$ is a Riesz sequence with Riesz bounds $\alpha$ and $\beta$. In particular, we have $\|\vec y_j\|_2^2\le\beta$ for all $j\in\ensuremath{\mathbb N}$. Using this, the Cauchy--Schwarz inequality, and Lemma \ref{l:lipschitz}, we obtain \begin{align*} \alpha &\le\Bigg\|\sum_{j=1}^{m+1}c_j\vec y_jK_{\lambda_j}\Bigg\|_{H_I^2}^2 = \Bigg\|\sum_{j=1}^{m}c_j\vec y_j\left(K_{\lambda_j} - K_{\lambda_{m+1}}\right)\Bigg\|_{H_I^2}^2\\ &\le \sum_{j=1}^{m}\left\|\vec y_j\left(K_{\lambda_j} - K_{\lambda_{m+1}}\right)\right\|_{H_I^2}^2\le 2\beta\sum_{j=1}^{m}\varrho(\lambda_j,\lambda_{m+1})^2\,\le\,8\beta mr^2 < \alpha, \end{align*} which is the desired contradiction. Here, we used that $\|\vec y\varphi\|_{H^2_I} = \|\vec y\|_2\|\varphi\|_{H^2(\ensuremath{\mathbb D})}$ for $\vec y\in\ensuremath{\mathbb C}^m$ and $\varphi\in H^2(\ensuremath{\mathbb D})$, cf.\ \eqref{e:prodnorm}. Conversely, let the conditions (i)--(iv) be satisfied.
Let us first prove that for each $i\in I$ the system $(A^nf_i)_{n\in\ensuremath{\mathbb N}}$ is a Bessel sequence. For this, fix $i\in I$ and deduce from (iii) that $\|P_jf_i\|^2\le\beta\varepsilon_j^2$ holds for each $j\in\ensuremath{\mathbb N}$. Note that the evaluation operator $T_\Lambda$ from \eqref{e:eval1}--\eqref{e:eval2} is everywhere defined and bounded by (ii) and Theorem \ref{t:finite_union}. Thus, for every $\varphi\in H^2(\ensuremath{\mathbb D})$ we have \begin{equation*} \sum_{j=0}^\infty|\varphi(\lambda_j)|^2\|P_jf_i\|^2\,\le\,\beta\sum_{j=0}^\infty|\varepsilon_j\varphi(\lambda_j)|^2 = \beta\|T_\Lambda\varphi\|_2^2\,\le\,\beta\|T_\Lambda\|^2\|\varphi\|_{H^2(\ensuremath{\mathbb D})}^2. \end{equation*} Hence, condition (ii) in Lemma \ref{l:bessel} is satisfied, so that $(A^nf_i)_{n\in\ensuremath{\mathbb N}}$ is indeed a Bessel sequence. Let $h\in\calH$ be arbitrary. By (iii) and (iv) (see also Remark \ref{r:jojo}(c)), there exists $\phi\in H_I^2$, $\phi = (\varphi_i)_{i\in I}$, such that $C_E^*T_{\Lambda,I}\phi = h$. For $i\in I$, let $(c_{in})_{n\in\ensuremath{\mathbb N}}\in\ell^2(\ensuremath{\mathbb N})$ be such that $\varphi_i(z) = \sum_{n=0}^\infty c_{in}z^n$ for $z\in\ensuremath{\mathbb D}$. Then $$ \sum_{i\in I}\sum_{n=0}^\infty c_{in}A^nf_i = \sum_{i\in I}\sum_{j=0}^\infty\sum_{n=0}^\infty c_{in}\lambda_j^nP_jf_i = \sum_{i\in I}\sum_{j=0}^\infty\varepsilon_j\varphi_i(\lambda_j)(\varepsilon_j^{-1}P_jf_i) = h. $$ Hence, the synthesis operator of $\calA = (A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$ is onto, which means that $\calA$ is a frame for $\calH$. \end{proof} From Theorem \ref{t:characterization} we deduce the following two corollaries. \begin{cor}\label{c:charac_us} Let $A = \sum_{j=0}^\infty\lambda_jP_j$ be a diagonal operator in normal form such that $(\lambda_j)_{j\in\ensuremath{\mathbb N}}\subset\ensuremath{\mathbb D}$ is uniformly separated and let $(f_i)_{i\in I}$ be a finite sequence of vectors in $\calH$.
Then $\calA$ is a frame for $\calH$ if and only if there exist $\alpha,\beta > 0$ such that \begin{equation*} \alpha(1-|\lambda_j|^2)\|h\|^2\,\le\,\sum_{i\in I}|\langle h,f_i\rangle|^2\,\le\,\beta(1-|\lambda_j|^2)\|h\|^2,\quad j\in\ensuremath{\mathbb N},\,h\in P_j\calH. \end{equation*} \end{cor} \begin{proof} Since $(\lambda_j)_{j\in\ensuremath{\mathbb N}}$ is assumed to be uniformly separated, the conditions (ii) and (iv) from Theorem \ref{t:characterization} are trivially satisfied (see Theorem \ref{t:interpolating}). Thus, $\calA$ is a frame for $\calH$ if and only if condition (iii) from Theorem \ref{t:characterization} holds. \end{proof} \begin{cor}\label{c:nownew} Assume that $A = \sum_{j=0}^\infty\lambda_jP_j$ is a diagonal operator in normal form and that $\dim P_j\calH = |I| < \infty$ for all but a finite number of $j\in\ensuremath{\mathbb N}$. Then $\calA$ is a frame for $\calH$ if and only if the following two statements hold: \begin{enumerate} \item[{\rm (a)}] $(\lambda_j)_{j\in\ensuremath{\mathbb N}}$ is a uniformly separated sequence in $\ensuremath{\mathbb D}$. \item[{\rm (b)}] There exist $\alpha,\beta > 0$ such that for all $j\in\ensuremath{\mathbb N}$ and all $h\in P_j\calH$ we have $$ \alpha(1-|\lambda_j|^2)\|h\|^2\,\le\,\sum_{i\in I}|\langle h,f_i\rangle|^2\,\le\,\beta(1-|\lambda_j|^2)\|h\|^2. $$ \end{enumerate} \end{cor} \begin{proof} If (a) and (b) hold, then $\calA$ is a frame for $\calH$ by Corollary \ref{c:charac_us}. Conversely, let $\calA$ be a frame for $\calH$. Then (b) follows from Theorem \ref{t:characterization}. For the proof of (a) let us first assume that $\dim P_j\calH = |I|$ for {\it all} $j\in\ensuremath{\mathbb N}$. Since $(\varepsilon_j^{-1}P_jf_i)_{i\in I}$ is a frame for $P_j\calH$ for every $j\in\ensuremath{\mathbb N}$ with frame bounds $\alpha$ and $\beta$ (see Remark \ref{r:jojo}), we conclude that it is even a Riesz basis of $P_j\calH$ with Riesz bounds $\alpha$ and $\beta$.
Hence, $E = (\varepsilon_j^{-1}P_jf_i)_{j\in\ensuremath{\mathbb N},\,i\in I}$ is a Riesz basis of $\calH$. Thus, the synthesis operator $C_E^*$ is one-to-one, i.e., $\operatorname{ker} C_E^* = \{0\}$. Therefore, it is a consequence of Theorem \ref{t:characterization} that $T_\Lambda : H^2(\ensuremath{\mathbb D})\to\ell^2(\ensuremath{\mathbb N})$ is onto, which is equivalent to $(\lambda_j)_{j\in\ensuremath{\mathbb N}}$ being uniformly separated (see Theorem \ref{t:interpolating}). For the general case, let $J := \{j\in\ensuremath{\mathbb N} : \dim P_j\calH = |I|\}$ and let $P := \sum_{j\in J}P_j$. Then $A_0 := A|P\calH = \sum_{j\in J}\lambda_j(P_j|P\calH)$ and $(A_0^nPf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$ is a frame for $P\calH$. Thus, $(\lambda_j)_{j\in J}$ is uniformly separated by the first part of the proof. The claim now follows from Corollary \ref{c:finunifsep}. \end{proof} \begin{rem}\label{r:counterex} For every given finite index set $I$ there exist a diagonal operator $A$ and vectors $f_i$, $i\in I$, such that $(A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$ is a frame for $\calH$ and $\operatorname{ind}(\Lambda) = |I|$ (where $\Lambda$ is the sequence of the distinct eigenvalues of $A$). As an example, let $m:=|I|$ and let $\Lambda = (\lambda_j)_{j\in\ensuremath{\mathbb N}}$ be a union of $m$ uniformly separated subsequences $\Lambda_i$, $i=1,\ldots,m$, such that $\lambda_j\neq\lambda_k$ for $j\neq k$ and $\operatorname{ind}(\Lambda) = m$. Note that each $\Lambda_i$ is necessarily infinite by Corollary \ref{c:finunifsep}. For $i=1,\ldots,m$ let $\Lambda_i = (\lambda_{j,i})_{j\in\ensuremath{\mathbb N}}$ and put $A_i := \sum_{j=0}^\infty\lambda_{j,i}\langle\cdot,e_j\rangle e_j$, where $(e_j)_{j\in\ensuremath{\mathbb N}}$ is an orthonormal basis of a Hilbert space $H$. Choose vectors $g_1,\ldots,g_m\in H$ such that $(A_i^ng_i)_{n\in\ensuremath{\mathbb N}}$ is a frame for $H$, $i=1,\ldots,m$. This is possible by Theorem \ref{t:one}.
Now, put $\calH := \bigoplus_{i=1}^m H$ and $A := \bigoplus_{i=1}^m A_i$, and let $f_i$ be the vector in $\calH$ with $(f_i)_i = g_i$ and $(f_i)_k = 0$ for $k\in\{1,\ldots,m\}\setminus\{i\}$. Then $(A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$ is a frame for $\calH$, since for $h = h_1\oplus\ldots\oplus h_m\in\calH$ we have \begin{align*} \sum_{i=1}^m\sum_{n=0}^\infty\left|\left\langle h,A^nf_i\right\rangle\right|^2 = \sum_{i=1}^m\sum_{n=0}^\infty\left|\left\langle h_i,A_i^ng_i\right\rangle\right|^2, \end{align*} from which it is easily seen that the claim is true. \end{rem} \section{Dynamical Sampling with general bounded operators}\label{s:general} In this section, we drop the requirement that $A\in L(\calH)$ be normal. As before, we fix $A\in L(\calH)$, an at most countable index set $I$, and vectors $f_i\in\calH$, $i\in I$, and define \begin{equation}\label{e:calA2} \calA := \calA(A,(f_i)_{i\in I}) := (A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}. \end{equation} The {\em spectrum} of an operator $T\in L(\calH)$ (i.e., the set of all $\lambda\in\ensuremath{\mathbb C}$ for which $T - \lambda := T - \lambda\operatorname{Id}$ is not boundedly invertible) is denoted by $\sigma(T)$, the set of all eigenvalues of $T$ (usually called the {\em point spectrum} of $T$) by $\sigma_p(T)$. The continuous spectrum of $T$ is the set of all $\lambda\in\sigma(T)\setminus\sigma_p(T)$ for which $\operatorname{ran}(T - \lambda)$ is dense in $\calH$. It is denoted by $\sigma_c(T)$. The spectral radius of $T$ will be denoted by $r(T)$, i.e., $r(T) := \sup\{|\lambda| : \lambda\in\sigma(T)\}$. Recall that an operator $T\in L(\calH)$ is said to be {\em strongly stable} if $T^nf\to 0$ as $n\to\infty$ holds for each $f\in\calH$. In this case, it follows from the Banach-Steinhaus theorem that $T$ is {\em power-bounded}, i.e., $\sup_{n\in\ensuremath{\mathbb N}}\|T^n\| < \infty$.
Consequently, we infer from Gelfand's formula for the spectral radius that $r(T) = \lim_{n\to\infty}\|T^n\|^{1/n}\,\le\,1$. Hence, the spectrum of a strongly stable operator $T$ is contained in the closed unit disk. It is, moreover, quite easily shown that neither $T$ nor $T^*$ can have eigenvalues on the unit circle $\ensuremath{\mathbb T}$. The first statement of the following lemma was proved in \cite{ap}. \begin{lem}\label{l:armenak} If $\calA$ is a frame for $\calH$, then $A^*$ is strongly stable. In particular, we have $$ \sigma(A)\subset\overline\ensuremath{\mathbb D},\qquad \sigma(A)\cap\ensuremath{\mathbb T}\,\subset\,\sigma_c(A) \qquad\text{and}\qquad \sigma(A^*)\cap\ensuremath{\mathbb T}\,\subset\,\sigma_c(A^*). $$ \end{lem} The next theorem completes the necessary condition from Lemma \ref{l:armenak} to a characterizing set of conditions. \begin{thm}\label{t:charac} The system $\calA = (A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$ is a frame for $\calH$ if and only if the following conditions are satisfied. \begin{enumerate} \item[{\rm (i)}] $A^*$ is strongly stable. \item[{\rm (ii)}] $(f_i)_{i\in I}$ is a Bessel sequence. \item[{\rm (iii)}] There exists a boundedly invertible operator $S\in L(\calH)$ such that \begin{equation}\label{e:identity} ASA^* = S - F, \end{equation} where $F$ is the frame operator of $(f_i)_{i\in I}$. \end{enumerate} If the conditions {\rm (i)--(iii)} are satisfied, then the operator $S$ in {\rm (iii)} is necessarily the frame operator of $\calA$. \end{thm} \begin{proof} Assume that $\calA$ is a frame for $\calH$ and let $S$ be its frame operator. As a subsequence of the frame $\calA$, $(f_i)_{i\in I}$ is a Bessel sequence.
Furthermore, for $f\in\calH$ we have \begin{align*} ASA^*f &= \sum_{i\in I}\sum_{n=0}^\infty\langle A^*f,A^nf_i\rangle A^{n+1}f_i = \sum_{i\in I}\sum_{n=1}^\infty\langle f,A^nf_i\rangle A^nf_i\\ &= \sum_{i\in I}\sum_{n=0}^\infty\langle f,A^nf_i\rangle A^nf_i - \sum_{i\in I}\langle f,f_i\rangle f_i = Sf - Ff, \end{align*} which proves \eqref{e:identity}. For the converse statement, assume that (i)--(iii) are satisfied. From (iii) (and (ii)) we conclude that $$ A^2S(A^*)^2 = A(ASA^*)A^* = A(S-F)A^* = ASA^* - AFA^* = S - (F + AFA^*). $$ For $n\in\ensuremath{\mathbb N}$, $n\ge 1$, we obtain by induction $$ A^nS(A^*)^n = S - \sum_{k=0}^{n-1}A^kF(A^*)^k = S - \sum_{k=0}^{n-1}\sum_{i\in I}\left\langle\cdot\,,A^kf_i\right\rangle A^kf_i. $$ Hence, for $f\in\calH$ we have (using (i)) $$ \sum_{k=0}^{\infty}\sum_{i\in I}\left|\left\langle f,A^kf_i\right\rangle\right|^2 = \langle Sf,f\rangle - \lim_{n\to\infty}\left\langle S(A^*)^nf,(A^*)^nf\right\rangle = \langle Sf,f\rangle. $$ Therefore, $S$ is selfadjoint and non-negative. The claim now follows from the fact that $S$ is boundedly invertible. Moreover, if $S_0$ denotes the frame operator of $\calA$, then $\langle(S - S_0)f,f\rangle = 0$ for $f\in\calH$, which implies $S = S_0$. \end{proof} The next theorem describes the ``admissible'' operators $A\in L(\calH)$ for which there exists a (finite or infinite) sequence $(f_i)_{i\in I}$ such that $\calA$ becomes a frame for $\calH$. It shows that these are similar to certain contractions. \begin{thm}\label{t:admissible} For $A\in L(\calH)$ the following statements hold. \begin{enumerate} \item[{\rm (i)}] There exists a Bessel family $\{f_i : i\in I\}\subset\calH$ such that $\calA$ in \eqref{e:calA2} is a frame for $\calH$ if and only if $A^*$ is similar to a strongly stable contraction.
\item[{\rm (ii)}] There exists a finite set $\{f_i : i\in I\}\subset\calH$ such that $\calA$ in \eqref{e:calA2} is a frame for $\calH$ if and only if $A^*$ is similar to a strongly stable contraction $T\in L(\calH)$ such that $\operatorname{ran}(\operatorname{Id} - T^*T)$ is finite-dimensional. \end{enumerate} If the conditions in {\rm (i)} or {\rm (ii)} are satisfied, then a contraction as in {\rm (i)} or {\rm (ii)} is given by $$ T = S^{1/2}A^*S^{-1/2}, $$ where $S$ is the frame operator of $\calA$. \end{thm} \begin{proof} Assume that there exists a Bessel family $\calF = \{f_i : i\in I\}\subset\calH$ such that $\calA$ is a frame for $\calH$. By Theorem \ref{t:charac}, $A^*$ is strongly stable and $ASA^* = S - F$, where $S$ and $F$ are the frame operators of $\calA$ and $\calF$, respectively. Define $T := S^{1/2}A^*S^{-1/2}$. Then $T$ is strongly stable and, by \eqref{e:identity}, $T^*T = S^{-1/2}ASA^*S^{-1/2} = S^{-1/2}(S - F)S^{-1/2} = \operatorname{Id} - F'$, where $F' = S^{-1/2}FS^{-1/2}$. Note that $F'$ is a non-negative selfadjoint operator (since $F$ is). Therefore, for $f\in\calH$ we have $$ \|Tf\|^2 = \langle T^*Tf,f\rangle = \|f\|^2 - \langle F'f,f\rangle\,\le\,\|f\|^2, $$ which shows that $T$ is a contraction. If $\calF$ is finite, then $\operatorname{ran}(\operatorname{Id} - T^*T) = \operatorname{ran} F'$ is finite-dimensional. Conversely, assume that $A^*$ is similar to a strongly stable contraction $T\in L(\calH)$. Then the operator $G := \operatorname{Id} - T^*T$ is selfadjoint and non-negative. Hence, if $U : \ell^2(\ensuremath{\mathbb N})\to\calH$ is any unitary operator and we put $g_i := G^{1/2}Ue_i$ ($e_i$ being the $i$-th standard basis vector of $\ell^2(\ensuremath{\mathbb N})$), we have that $(g_i)_{i\in\ensuremath{\mathbb N}}$ is a Bessel sequence and $G = \sum_{i\in\ensuremath{\mathbb N}}\langle\,\cdot\,,g_i\rangle g_i$ is its frame operator. By Theorem \ref{t:charac}, $(T^{*n}g_i)_{n,i\in\ensuremath{\mathbb N}}$ is a frame for $\calH$.
Hence, if $A = LT^*L^{-1}$ with a boundedly invertible $L\in L(\calH)$, then $A^nLg_i = LT^{*n}g_i$, so that $\calA$ is a frame for $\calH$ with $f_i = Lg_i$, $i\in I := \ensuremath{\mathbb N}$. If $\operatorname{ran} G$ is finite-dimensional, we can choose $U$ such that $Ue_i\in\operatorname{ker} G^{1/2}$ for $i\ge m := \dim\operatorname{ran} G^{1/2}$. Then $g_i = 0$ for $i\ge m$ and hence $(A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$ is a frame for $\calH$, where $I := \{0,\ldots,m-1\}$. \end{proof} \begin{rem} The operators that are similar to a contraction have been found by V.I.\ Paulsen in \cite[Cor.\ 3.5]{pa} to be exactly those operators which are {\it completely polynomially bounded}. For the definition of this term and more details we refer to \cite{pa}. \end{rem} In what follows, we will mainly focus on the situation in which $I$ is a finite index set -- or at least the frame operator of $(f_i)_{i\in I}$ is a compact operator. In this case, it is clear that $(f_i)_{i\in I}$ itself cannot be a frame for $\calH$ unless $\dim\calH < \infty$. The next proposition is key to most of our observations below. For the notion {\em semi-Fredholm} and the corresponding results used below we refer the reader to Appendix \ref{s:semi-fredholm}. Recall that an operator $T\in L(\calH)$ is said to be {\em finite-dimensional} or of {\em finite rank} if $\dim\operatorname{ran} T < \infty$. \begin{prop}\label{p:fredholm} Assume that $\calA$ is a frame for $\calH$. If the frame operator $F$ of $(f_i)_{i\in I}$ is compact, then for each $\lambda\in\ensuremath{\mathbb D}$ the operator $A^* - \lambda$ is upper semi-Fredholm. If $|I|$ is finite \braces{in which case $F$ is even finite-dimensional\,}, then \begin{equation}\label{e:fd} \operatorname{nul}(A^* - \lambda)\,\le\,|I|,\qquad\lambda\in\ensuremath{\mathbb D}. \end{equation} \end{prop} \begin{proof} We derive the claim from the identity \eqref{e:identity}.
For $\lambda\in\ensuremath{\mathbb D}$ we have $$ AS(A^*-\lambda) = ASA^* - \lambda AS = S - F - \lambda AS = (\operatorname{Id} - \lambda A)S - F. $$ For all $\lambda\in\ensuremath{\mathbb D}$ the operator $B_\lambda := \operatorname{Id} - \lambda A$ is boundedly invertible. This is clear for $\lambda = 0$, and for $\lambda\neq 0$ we have $B_\lambda = \lambda(\lambda^{-1} - A)$, which is boundedly invertible as $\sigma(A)\subset\overline\ensuremath{\mathbb D}$. Thus, \begin{equation}\label{e:nice} B_\lambda^{-1}AS(A^* - \lambda) = S - B_\lambda^{-1}F. \end{equation} By Theorem \ref{t:compi}, the operator on the right hand side is Fredholm, and so $A^* - \lambda$ is upper semi-Fredholm by Lemma \ref{l:komisch}. Now, let $|I|$ be finite and let $\lambda\in\ensuremath{\mathbb D}$ be an eigenvalue of $A^*$. If $f$ is a corresponding eigenvector, then \eqref{e:nice} yields $f = S^{-1}B_\lambda^{-1}Ff$. Hence, $\operatorname{ker}(A^*-\lambda)\subset S^{-1}B_\lambda^{-1}\operatorname{ran} F$, which implies \eqref{e:fd} as $\dim\operatorname{ran} F\le |I|$. \end{proof} In the proof of the next theorem we heavily make use of the punctured neighborhood theorem, Theorem \ref{t:pnt}. \begin{thm}\label{t:alt} If $\calA$ is a frame for $\calH$ and the frame operator of $(f_i)_{i\in I}$ is compact, then $\operatorname{ind}(A^*-\lambda) = \operatorname{ind}(A^*)$ for each $\lambda\in\ensuremath{\mathbb D}$ and exactly one of the following cases holds: \begin{enumerate} \item[{\bf (i)}] $\sigma(A^*) = \overline\ensuremath{\mathbb D}$ and $\sigma_p(A^*) = \ensuremath{\mathbb D}$. \item[{\bf (ii)}] $\sigma(A^*) = \overline\ensuremath{\mathbb D}$ and $\sigma_p(A^*)$ is discrete in $\ensuremath{\mathbb D}$. \item[{\bf (iii)}] $\sigma(A^*)$ is discrete in $\ensuremath{\mathbb D}$.
\end{enumerate} In the case {\bf (i)}, each $\lambda\in\ensuremath{\mathbb D}$ is an eigenvalue of $A^*$ with infinite algebraic multiplicity, whereas in the cases {\bf (ii)} and {\bf (iii)} the eigenvalues of $A^*$ in $\ensuremath{\mathbb D}$ have finite algebraic multiplicities. If case {\bf (iii)} holds, then we have $\operatorname{ind}(A^*) = 0$. \end{thm} \begin{proof} By Proposition \ref{p:fredholm}, $A^*-\lambda$ is upper semi-Fredholm for each $\lambda\in\ensuremath{\mathbb D}$. Hence it follows from the punctured neighborhood theorem, Theorem \ref{t:pnt}, and a compactness argument that $\operatorname{ind}(A^*-\lambda)$ is constant on $\ensuremath{\mathbb D}$. Similarly, one sees that $\operatorname{nul}(A^*-\lambda)$ is constant on $\ensuremath{\mathbb D}\setminus\Delta$, where $\Delta$ is a discrete subset of $\ensuremath{\mathbb D}$. Denote this constant value by $n_0$. Then it is immediate that case {\bf (i)} is satisfied exactly if $n_0 > 0$. If $n_0 = 0$, then case {\bf (iii)} occurs if and only if $\operatorname{ind}(A^*) = 0$. Let $\lambda_0\in\ensuremath{\mathbb D}$ be an eigenvalue of $A^*$. For $\lambda\neq\lambda_0$ close to $\lambda_0$ we have $\lambda\in\ensuremath{\mathbb D}\setminus\Delta$ and hence $n_0 = \operatorname{nul}(A^*-\lambda) = \operatorname{nul}(A^* - \lambda_0) - k$, where (see Theorem \ref{t:pnt}) $$ k = \dim\big(\operatorname{ker}(A^*-\lambda_0)/\left(\operatorname{ker}(A^* - \lambda_0)\cap R_\infty(A^*-\lambda_0)\right)\big). $$ Hence, cases {\bf (ii)} and {\bf (iii)} occur exactly when $\operatorname{nul}(A^*-\lambda_0) = k$. This happens if and only if $\operatorname{ker}(A^* - \lambda_0)\cap R_\infty(A^*-\lambda_0) = \{0\}$. But the latter means that the algebraic multiplicity of $\lambda_0$ as an eigenvalue of $A^*$ is finite.
\end{proof} \begin{cor}\label{c:r=1} If $\dim\calH = \infty$, $\calA$ is a frame for $\calH$ with frame operator $S$, and the frame operator of $(f_i)_{i\in I}$ is compact, then $$ r(A) = \big\|S^{-1/2}AS^{1/2}\big\| = 1. $$ In particular, $\|A^n\|\ge 1$ for all $n\in\ensuremath{\mathbb N}$. \end{cor} \begin{proof} It follows from Theorem \ref{t:alt} that $r(A) = 1$. By Theorem \ref{t:admissible}, the operator $B := S^{-1/2}AS^{1/2}$ is a contraction. Since $B$ is similar to $A$, we have $\sigma(B) = \sigma(A)$ and therefore $1 = r(A) = r(B)\le\|B\|\le 1$. \end{proof} We define the {\em essential spectrum} $\sigma_{\rm ess}(T)$ of $T\in L(\calH)$ as the set of all $\lambda\in\ensuremath{\mathbb C}$ for which $T - \lambda$ is not semi-Fredholm. \begin{cor} Assume that $\calA$ is a frame for $\calH$ and the frame operator of $(f_i)_{i\in I}$ is compact. Then $$ \sigma_{\rm ess}(A^*) = \sigma_c(A^*) = \sigma(A^*)\cap\ensuremath{\mathbb T}. $$ If, in addition, $\operatorname{ind}(A^*)\neq 0$, then $$ \sigma_{\rm ess}(A^*) = \sigma_c(A^*) = \ensuremath{\mathbb T}. $$ \end{cor} \begin{proof} $\sigma_c(A^*)\subset\sigma_{\rm ess}(A^*)$ holds by definition and $\sigma_{\rm ess}(A^*)\subset\sigma(A^*)\cap\ensuremath{\mathbb T}$ is a direct consequence of Proposition \ref{p:fredholm}. The remaining inclusion $\sigma(A^*)\cap\ensuremath{\mathbb T}\subset\sigma_c(A^*)$ holds due to Lemma \ref{l:armenak}. If $\operatorname{ind}(A^*)\neq 0$, then either case {\bf (i)} or case {\bf (ii)} holds. In these cases, we have $\sigma(A^*) = \overline\ensuremath{\mathbb D}$ and hence, clearly, $\sigma(A^*)\cap\ensuremath{\mathbb T} = \ensuremath{\mathbb T}$. \end{proof} In the proof of Theorem \ref{t:alt} we have not used that the operator $A^*$ is strongly stable. We incorporate this in the proof of the next theorem, where we make use of a theorem from \cite{tu}. \begin{thm}\label{t:tu} Let $I$ be finite and assume that $\calA$ is a frame for $\calH$.
Then $\operatorname{def}(A^*-\lambda) = 0$ for all $\lambda\in\ensuremath{\mathbb D}\setminus\Delta$, where $\Delta\subset\ensuremath{\mathbb D}$ is discrete in $\ensuremath{\mathbb D}$. In particular, either case {\bf (i)} or case {\bf (iii)} occurs. Case {\bf (iii)} holds if and only if $\operatorname{ind}(A^*) = 0$. In this case, also $A$ is strongly stable. \end{thm} \begin{proof} Let $S$ and $F$ be the frame operators of $\calA$ and $(f_i)_{i\in I}$, respectively, and define $T := S^{1/2}A^*S^{-1/2}$. By Theorem \ref{t:admissible}, the operator $T$ is a strongly stable contraction and $T^*T = \operatorname{Id} - F_1$, where $F_1 = S^{-1/2}FS^{-1/2}$. Since $T$ and $A^*$ are similar, it suffices to prove the corresponding statements for the operator $T$. Let us show the first part of the theorem. To this end, we shall use techniques from the proof of \cite[Lemma 1.3]{uch}. The key in this proof is a triangulation of the contraction $T$ of the form (see \cite[Theorem II.4.1]{nfbk}) $$ T = \mat{T_{01}}C0{T_{00}} $$ with respect to a decomposition $\calH = \calH_{01}\oplus\calH_{00}$. Here, $T_{01}\in C_{01}$ (that is, $\inf\{\|(T_{01}^*)^nf\| : n\in\ensuremath{\mathbb N}\} > 0$ for each $f\in\calH_{01}\setminus\{0\}$) and $T_{00}\in C_{00}$ (i.e., both $T_{00}$ and $T_{00}^*$ are strongly stable). We have $$ \operatorname{Id} - T^*T = \operatorname{Id} - \mat{T_{01}^*}0{C^*}{T_{00}^*}\mat{T_{01}}C0{T_{00}} = \mat{\operatorname{Id} - T_{01}^*T_{01}}{-T_{01}^*C}{-C^*T_{01}}{\operatorname{Id} - C^*C - T_{00}^*T_{00}}. $$ Hence, all entries in the latter operator matrix are of finite rank. In particular, $T_{01}$ is upper semi-Fredholm (see Theorem \ref{t:compi} and Lemma \ref{l:komisch}). Since $T_{01}\in C_{01}$, the operator $T_{01}^*$ is injective and thus it has a bounded left-inverse. Hence, as $T_{01}^*C$ is of finite rank we infer that also $C$ is of finite rank.
Thus, $\operatorname{Id} - T_{00}^*T_{00}$ is of finite rank. Since $T_{00}\in C_{00}$, this yields that $T_{00}$ is a so-called $C_0$-contraction (see \cite{tu}). Consequently, the spectrum of $T_{00}$ in $\ensuremath{\mathbb D}$ is discrete (cf.\ \cite[Theorem III.5.1]{nfbk}). Let $\lambda\in\ensuremath{\mathbb D}\setminus\sigma(T_{00})$. Then $(T^*-\overline\lambda)f = 0$ implies $(T_{01}^*-\overline\lambda)g = 0$ and $C^*g + (T_{00}^*-\overline\lambda)h = 0$, where $f = g+h$, $g\in\calH_{01}$, $h\in\calH_{00}$. But as $T_{01}^*-\overline\lambda$ is injective (due to $T_{01}\in C_{01}$), we conclude that $g = 0$ and therefore also $h = 0$ as $\overline\lambda\in\rho(T_{00}^*)$. Hence, for $\lambda\in\ensuremath{\mathbb D}\setminus\sigma(T_{00})$ we have that $\operatorname{def}(T-\lambda) = \operatorname{nul}(T^*-\overline\lambda) = 0$. This also implies that case {\bf (ii)} cannot occur and that case {\bf (iii)} holds if and only if $\operatorname{ind}(T) = 0$. Assume that $\operatorname{ind}(T) = 0$. In order to show that also $T^*$ is strongly stable, due to \cite[Theorem 2]{tu} it suffices to prove that $\operatorname{Id} - TT^*$ is of finite rank. To see this, we observe that there exists a representation $T = U|T|$ of $T$, where $|T| = (T^*T)^{1/2}$ and $U$ is a unitary operator in $\calH$, see \cite[Lemma 2.9]{dp}. We have $F_1 = \operatorname{Id} - T^*T = \operatorname{Id} - |T|^2$ and thus $|T| = \operatorname{Id} - F_1(\operatorname{Id}+|T|)^{-1}$. Therefore, $T = U|T| = U - F_2$ with some finite rank operator $F_2$, and consequently $$ TT^* = \operatorname{Id} - \left[F_2U^* + (U - F_2)F_2^*\right]. $$ Thus, $\operatorname{Id} - TT^*$ is of finite rank. \end{proof} \begin{rem} It follows from the proof of Theorem \ref{t:tu} and the references used therein that the claim of the theorem remains valid if we replace the condition that $I$ be finite by the requirement that the frame operator of $(f_i)_{i\in I}$ be of trace class.
\end{rem} By $R$ and $L$ we denote the right-shift and the left-shift on $\ell^2(\ensuremath{\mathbb N})$. That is, $R,L\in L(\ell^2(\ensuremath{\mathbb N}))$, $(Lf)(j) = f(j+1)$ for $j\in\ensuremath{\mathbb N}$ and $(Rf)(0) = 0$, as well as $(Rf)(j) = f(j-1)$ for $j\ge 1$. Moreover, let $e_k$ denote the $k$-th standard basis vector of $\ell^2(\ensuremath{\mathbb N})$, $k\in\ensuremath{\mathbb N}$. The following example shows in particular that case {\bf (i)} in Theorem \ref{t:alt} cannot be neglected as a possibility for an operator generating a frame. \begin{ex}\label{ex:shift} Let $\calH = \ell^2(\ensuremath{\mathbb N})$, $m\in\ensuremath{\mathbb N}\setminus\{0\}$, and $I := \{0,\ldots,m-1\}$. If we put $A := R^m$, then we have $$ \calA = \left(A^ne_i\right)_{n\in\ensuremath{\mathbb N},\,i\in I} = \left((R^m)^ne_i\right)_{n\in\ensuremath{\mathbb N},\,i\in I} = (e_{i + nm})_{n\in\ensuremath{\mathbb N},\,i\in I} = (e_k)_{k\in\ensuremath{\mathbb N}}. $$ Hence, $(A^ne_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$ is an orthonormal basis of $\calH$ and it is easily seen that every $\lambda\in\ensuremath{\mathbb D}$ is an eigenvalue of $A^*$. Thus, we are in the situation of case {\bf (i)}. \end{ex} The next theorem shows that the orthonormal bases in Example \ref{ex:shift} are the prototype of all Riesz bases of the form $\calA$, in the following sense. \begin{thm}\label{t:riesz_sequence} Let $A\in L(\calH)$ and $f_i\in\calH$, $i\in I$, where $I = \{0,\ldots,m-1\}$. Then the following statements are equivalent. \begin{enumerate} \item[{\rm (i)}] $\calA$ is a Riesz basis of $\calH$. \item[{\rm (ii)}] There exists a boundedly invertible operator $V\in L(\ell^2(\ensuremath{\mathbb N}),\calH)$ such that $$ A = VR^mV^{-1} \qquad\text{and}\qquad f_i = Ve_i,\;i\in I. $$ \end{enumerate} \end{thm} \begin{proof} It is clear that (ii) implies (i). So, assume that $(A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$ is a Riesz basis of $\calH$.
Then there exists a boundedly invertible operator $V\in L(\ell^2(\ensuremath{\mathbb N}),\calH)$ such that $A^nf_i = Ve_{i+nm}$, $n\in\ensuremath{\mathbb N}$, $i\in I$. In particular, for $i\in I$ we have $f_i = Ve_i$. Also, for $n\in\ensuremath{\mathbb N}$ and $i\in I$ we have $$ AVe_{i+nm} = A^{n+1}f_i = Ve_{i + nm + m} = VR^me_{i+nm}, $$ and therefore $AV = VR^m$. \end{proof} Finally, we turn back to the motivation of Dynamical Sampling in the Introduction, where $A^*$ was an instance of an operator semigroup. Recall that a semigroup of operators is a collection $(T_t)_{t\ge 0}\subset L(\calH)$ satisfying $T_{s+t} = T_sT_t$ for all $s,t\in [0,\infty)$. \begin{prop}\label{p:not(i)} Let $A^* = T_{t_0}$, $t_0 > 0$, be an instance of a semigroup $(T_t)_{t\ge 0}$ of operators and let the frame operator of $(f_i)_{i\in I}$ be compact. Then if $\calA$ is a frame for $\calH$, either case {\bf (ii)} or case {\bf (iii)} occurs. In case {\bf (ii)} we have $\operatorname{ind}(A^*) = -\infty$. \end{prop} \begin{proof} Let $B_m := T_{t_0/m}^*$, $m\in\ensuremath{\mathbb N}$, $m\ge 1$. Then we have $B_m^m = A$ for each $m$. Let $\lambda\in\ensuremath{\mathbb D}$ be arbitrary. Then $A^*-\lambda^m = (B_m^*)^m - \lambda^m = P(B_m^*,\lambda)(B_m^*-\lambda)$, where $P(B_m^*,\lambda)$ is a polynomial in $B_m^*$ and $\lambda$. This and Lemma \ref{l:komisch} imply that $B_m^* - \lambda$ is upper semi-Fredholm. Moreover, by the index formula \eqref{e:indexformel} we have that $\operatorname{ind}(A^*) = m\cdot\operatorname{ind}(B_m^*)$. In particular, $\operatorname{ind}(A^*)$ is divisible by each $m\in\ensuremath{\mathbb N}$, $m\ge 2$. Thus, $\operatorname{ind}(A^*)\in\{0,-\infty\}$. Note that $\operatorname{ind}(A^*) = +\infty$ is not possible since $\operatorname{nul}(A^*) < \infty$.
Now, let $\lambda\in\ensuremath{\mathbb D}\setminus\{0\}$, $\lambda = re^{it}$, $r\in (0,1)$, $t\in [0,2\pi)$, let $n = \operatorname{nul}(A^*-\lambda)+1$, and put $\lambda_k := \sqrt[n]{r}\exp(\frac{i}{n}(t+2k\pi))$, $k=0,\ldots,n-1$. Suppose that each $\lambda_k$ is an eigenvalue of $B_n^*$ with eigenvector $g_k$, $k=0,\ldots,n-1$. Then each $g_k$ is an eigenvector of $A^*$ with respect to $\lambda$ as $A^*g_k = (B_n^*)^ng_k = \lambda_k^ng_k = \lambda g_k$. But as the $g_k$ are linearly independent, this is a contradiction to the choice of $n$. Thus, the eigenvalues of $B_n^*$ do not fill the open unit disk. In turn, $\sigma_p(B_n^*)$ is discrete in $\ensuremath{\mathbb D}$ and hence the same holds for $\sigma_p(A^*)$. \end{proof} Note that we have not used any continuity properties of the semigroup in the proof above. In fact, if $(T_t)_{t\ge 0}$ is a strongly continuous semigroup, it can be shown that under the conditions of Proposition \ref{p:not(i)}, $\operatorname{ker}(T_t) = \{0\}$ for each $t\ge 0$, which in particular excludes case {\bf (i)}. We conclude this section with the following corollary, which directly follows from Proposition \ref{p:not(i)}, Theorem \ref{t:riesz_sequence}, and Theorem \ref{t:tu}. \begin{cor}\label{c:neverriesz} Let $A^* = T_{t_0}$, $t_0 > 0$, be an instance of a semigroup $(T_t)_{t\ge 0}$ of operators. Then, for any finite sequence $(f_i)_{i\in I}$ of vectors in $\calH$, the system $(A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$ is never a Riesz basis of $\calH$. Moreover, if $(A^nf_i)_{n\in\ensuremath{\mathbb N},\,i\in I}$ is a frame for $\calH$ and $I$ is finite, then case {\bf\rm (iii)} holds and both $A$ and $A^*$ are strongly stable.
\end{cor} \section{Dynamical Sampling in finite dimensions}\label{s:findim} In this section we let $\calH = \calH_d$ be a $d$-dimensional Hilbert space and consider the question for which linear operators $A\in L(\calH_d)$ and which sets of vectors $\{f_i : i\in I\}\subset\calH_d$ the system \begin{equation}\label{e:calA_findim} \calA := (A^nf_i)_{n\in N,\,i\in I} \end{equation} is a frame for $\calH_d$ (or, equivalently, complete in $\calH_d$). Here, we let $$ N := \{0,\ldots,d-1\} \qquad\text{and}\qquad I := \{1,\ldots,m\},\;\;m\in\ensuremath{\mathbb N}\setminus\{0\}. $$ The main result in this section is the following characterization theorem. Here by $P_M$ we denote the orthogonal projection in $\calH_d$ onto the subspace $M\subset\calH_d$ and by $\dotplus$ the direct sum of subspaces. \begin{thm}\label{t:main_findim} Let $A\in L(\calH_d)$, $f_1,\ldots,f_m\in\calH_d$, and set $\calF := \operatorname{span}\{f_1,\ldots,f_m\}$. Moreover, for each $\lambda\in\sigma(A)$ choose a subspace $V_\lambda$ such that $\calH_d = V_\lambda\dotplus\operatorname{ran}(A-\lambda)$ and denote the projection onto $V_\lambda$ with respect to this decomposition by $Q_{V_\lambda}$. Then the following statements are equivalent: \begin{enumerate} \item[{\rm (i)}] The system $\calA$ in \eqref{e:calA_findim} is a frame for $\calH_d$. \item[{\rm (ii)}] For each $\lambda\in\sigma(A)$ we have $Q_{V_\lambda}\calF = V_\lambda$. \item[{\rm (iii)}] For each $\lambda\in\sigma(A^*)$ we have $P_{\operatorname{ker}(A^*-\lambda)}\calF = \operatorname{ker}(A^*-\lambda)$. \end{enumerate} \end{thm} In the following proof we deal with root subspaces of linear operators. The {\em root subspace} of an operator $T\in L(\calH_d)$ at $\lambda\in\sigma(T)$ is defined by $$ \calL_\lambda(T) := \bigcup_{n=0}^d \operatorname{ker}\left((T-\lambda)^n\right). $$ It is obviously invariant under $T$.
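For instance, if $T\in L(\calH_d)$ is a single Jordan block with eigenvalue $\lambda$, i.e.\ $T = \lambda\operatorname{Id} + N$ with a nilpotent $N$ satisfying $N^{d-1}\neq 0 = N^d$, then $\dim\operatorname{ker}\left((T-\lambda)^n\right) = n$ for $0\le n\le d$, so that $\calL_\lambda(T) = \calH_d$, whereas $\operatorname{ker}(T-\lambda)$ is only one-dimensional.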
It is well known that if $\sigma(T) = \{\lambda_1,\ldots,\lambda_m\}$, then \begin{equation}\label{e:decomp} \calH_d = \calL_{\lambda_1}(T)\,\dotplus\,\ldots\,\dotplus\,\calL_{\lambda_m}(T). \end{equation} \begin{proof}[Proof of Theorem \rmref{t:main_findim}] (i)$\Rightarrow$(ii). Let $\lambda\in\sigma(A)$ and define a scalar product $(\cdot\,,\cdot)$ on $\calH_d$ such that $V_\lambda$ and $\operatorname{ran}(A-\lambda)$ are $(\cdot\,,\cdot)$-orthogonal to each other. By $A^\star$ and $R_M$ we denote the adjoint of $A$ and the orthogonal projection onto a subspace $M\subset\calH_d$, respectively, both with respect to the inner product $(\cdot\,,\cdot)$. Then $V_\lambda = \operatorname{ker}(A^\star-\overline\lambda)$ and $Q_{V_\lambda} = R_{\operatorname{ker}(A^\star-\overline\lambda)}$. Now, let $f\in V_\lambda$ be such that $(f,Q_{V_\lambda}f_i) = 0$ for all $i\in I$. Then for each $i\in I$ and $n\in\{0,\ldots,d-1\}$ we have $$ \left(f,A^{n}f_i\right) = \left(A^{\star n}f,f_i\right) = \overline\lambda^n\left(f,f_i\right) = \overline\lambda^n\left(f,Q_{V_\lambda}f_i\right) = 0. $$ Hence, (i) implies $f = 0$. (ii)$\Rightarrow$(iii). Let $\lambda\in\sigma(A)$ be arbitrary. Then we have $$ Q_{V_\lambda}P_{\operatorname{ker}(A^*-\overline\lambda)} = Q_{V_\lambda}(\operatorname{Id} - P_{\operatorname{ran}(A-\lambda)}) = Q_{V_\lambda} - Q_{V_\lambda}P_{\operatorname{ran}(A-\lambda)} = Q_{V_\lambda}. $$ Therefore, if $Q_{V_\lambda}\calF = V_\lambda$, then $V_\lambda = Q_{V_\lambda}P_{\operatorname{ker}(A^*-\overline\lambda)}\calF$, which implies that the dimension of $P_{\operatorname{ker}(A^*-\overline\lambda)}\calF$ cannot be less than the dimension of $V_\lambda$. But $\dim V_\lambda = \dim\operatorname{ker}(A^*-\overline\lambda)$ and $P_{\operatorname{ker}(A^*-\overline\lambda)}\calF = \operatorname{ker}(A^*-\overline\lambda)$ follows. (iii)$\Rightarrow$(i).
Towards a contradiction, suppose that there exists some $f\in\calH_d$, $f\neq 0$, such that $\langle f,A^nf_i\rangle = 0$ for all $n=0,\ldots,d-1$ and all $i\in I$. In other words, we have that $\langle q(A^*)f,f_i\rangle = 0$ for all $i\in I$ and each polynomial $q$ of degree at most $d-1$. Denote by $p$ the minimal polynomial of $A^*$ and let $\lambda_1,\ldots,\lambda_M$ be the distinct eigenvalues of $A^*$. Then $p(\lambda) = (\lambda-\lambda_1)^{k_1}\dots(\lambda-\lambda_M)^{k_M}$ with some $k_j\in\ensuremath{\mathbb N}$, $j\in [M] := \{1,\ldots,M\}$. Clearly, we have $k_1+\dots+k_M\le d$. By \eqref{e:decomp} we can write $f = \sum_{j=1}^Mh_j$, where $h_j\in\calL_{\lambda_j}(A^*)$, $j\in [M]$. As $p(A^*) = 0$ and each $\calL_{\lambda_j}(A^*)$ is $A^*$-invariant, we have $(A^*-\lambda_j)^{k_j}h_j = 0$ for all $j\in [M]$. Since $f\neq 0$, there exists at least one $j$ for which $h_j\neq 0$ and we fix it for the rest of the proof. Let $\ell_j$ be the minimum of all $\ell\le k_j$ with $(A^*-\lambda_j I)^\ell h_j = 0$ and define the polynomial $$ q(\lambda) := (\lambda-\lambda_j)^{\ell_j-1}\cdot\!\!\!\prod_{\ell\in [M]\setminus\{j\}}(\lambda-\lambda_\ell)^{k_\ell}. $$ We obviously have $q(A^*)h_r = 0$ for $r\neq j$ and hence $q(A^*)f = q(A^*)h_j$. Now, let $g_j := (A^*-\lambda_j I)^{\ell_j-1}h_j$. Then $g_j\in \operatorname{ker}(A^*-\lambda_j)$, $g_j\neq 0$ (by the definition of $\ell_j$), and thus $$ q(A^*)f = q(A^*)h_j = \prod_{\ell\in [M]\setminus\{j\}}(A^*-\lambda_\ell)^{k_\ell}g_j = c_jg_j, $$ where $c_j = \prod_{\ell\in [M]\setminus\{j\}}(\lambda_j-\lambda_\ell)^{k_\ell}\neq 0$. Since $\deg(q)\le d-1$, we obtain for all $i=1,\ldots,m$, $$ \langle g_j,f_i\rangle = c_j^{-1}\langle q(A^*)f,f_i\rangle = 0. $$ But as $g_j\in\operatorname{ker}(A^*-\lambda_j)$ and $\{P_{\operatorname{ker}(A^*-\lambda_j)}f_i\}_{i=1}^m$ is complete in $\operatorname{ker}(A^*-\lambda_j)$ by (iii), it follows that $g_j = 0$, which is the desired contradiction.
\end{proof} \begin{rem} Note that in the proof of (ii)$\Rightarrow$(iii) we actually proved that for any fixed subspace $W$ of $\calH_d$ and any pair $V,V'$ of subspaces complementary to $W$ the following holds: For each subspace $\calF$ of $\calH_d$ we have $Q_V\calF = V$ if and only if $Q_{V'}\calF = V'$. \end{rem} The first characterization for $\calA$ to be a frame for $\calH_d$ was proved in \cite{acmt}. To formulate it here, let us introduce the notion of subspaces of cyclic vectors. For this, let $\lambda\in\sigma(T)$, where $T\in L(\calH_d)$. A subspace $W_\lambda$ will be called a {\em subspace of cyclic vectors} for $T$ at $\lambda\in\sigma(T)$ if \begin{equation}\label{e:soitis} \calL_\lambda(T) = W_\lambda\dotplus(T-\lambda)\calL_\lambda(T). \end{equation} For such a subspace $W_\lambda$, we set $Q_{W_\lambda} := Q_\lambda P_\lambda$, where $P_\lambda$ is the projection onto $\calL_\lambda(T)$ with respect to the decomposition \eqref{e:decomp} and $Q_\lambda$ is the projection in $\calL_\lambda(T)$ onto $W_\lambda$ with respect to \eqref{e:soitis}. \begin{thm}[{\cite[Theorem 2.11]{acmt}}]\label{t:acmt} Let $A\in L(\calH_d)$, $f_1,\ldots,f_m\in\calH_d$, and fix subspaces of cyclic vectors $W_\lambda$ for $A$, $\lambda\in\sigma(A)$. Then $\calA$ in \eqref{e:calA_findim} is a frame for $\calH_d$ if and only if for any $\lambda\in\sigma(A)$ we have $Q_{W_{\lambda}}\calF = W_{\lambda}$, where $\calF := \operatorname{span}\{f_1,\ldots,f_m\}$. \end{thm} Theorem \ref{t:acmt} is in fact a consequence of Theorem \ref{t:main_findim} because for each subspace $W_\lambda$ of cyclic vectors for $A$ at $\lambda$ we have $\calH_d = W_\lambda\dotplus\operatorname{ran}(A-\lambda)$ and $Q_{W_\lambda}$ is precisely the projection onto $W_\lambda$ along $\operatorname{ran}(A-\lambda)$.
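The equivalence (i)$\Leftrightarrow$(iii) of Theorem \ref{t:main_findim} is easy to test numerically. The following sketch is ours, not part of the paper: it assumes NumPy, and the helper names `is_ds_frame` and `condition_iii` are ad hoc. It compares completeness of $(A^nf_i)$ with condition (iii) on a $3\times 3$ example containing a Jordan block.

```python
import numpy as np

def is_ds_frame(A, fs):
    # In C^d, the finite system {A^n f_i : 0 <= n <= d-1} is a frame
    # if and only if it spans C^d, i.e. the column stack has full rank.
    d = A.shape[0]
    cols = [np.linalg.matrix_power(A, n) @ f for f in fs for n in range(d)]
    return np.linalg.matrix_rank(np.column_stack(cols)) == d

def condition_iii(A, fs, tol=1e-6):
    # Condition (iii): for every eigenvalue lam of A*, the projections of
    # the f_i onto ker(A* - lam) span that kernel.
    As = A.conj().T
    d = A.shape[0]
    F = np.column_stack(fs)
    for lam in np.unique(np.round(np.linalg.eigvals(As), 8)):
        # Orthonormal basis of ker(A* - lam) from the small singular values.
        _, s, Vh = np.linalg.svd(As - lam * np.eye(d))
        K = Vh[np.sum(s > tol):].conj().T
        if K.shape[1] and np.linalg.matrix_rank(K @ (K.conj().T @ F), tol=1e-7) < K.shape[1]:
            return False
    return True

# Jordan block at 1 plus a simple eigenvalue 2; here ker(A^T - 1) = span{e_2}
# and ker(A^T - 2) = span{e_3}, so a single f must "see" both e_2 and e_3.
A = np.array([[1., 1., 0.], [0., 1., 0.], [0., 0., 2.]])
good, bad = np.array([0., 1., 1.]), np.array([1., 1., 0.])
```

For `good` both tests succeed; for `bad`, whose projection onto $\operatorname{ker}(A^*-2)$ vanishes, both fail, in accordance with the theorem.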
\appendix \vspace*{.2cm} \section{Semi-Fredholm operators}\label{s:semi-fredholm} An operator $T\in L(\calH)$ is said to be {\em upper semi-Fredholm} if $\operatorname{ker} T$ is finite-dimensional and $\operatorname{ran} T$ is closed. The operator $T$ is called {\em lower semi-Fredholm} if $\operatorname{codim}\operatorname{ran} T < \infty$ (in this case, the range of $T$ is automatically closed). $T$ is called {\em semi-Fredholm} if it is upper or lower semi-Fredholm and {\em Fredholm} if it is both upper and lower semi-Fredholm. In all cases, one defines the {\em nullity} and {\em deficiency} of $T$ by $$ \operatorname{nul}(T) := \dim\operatorname{ker} T\qquad\text{and}\qquad\operatorname{def}(T) := \operatorname{codim}\operatorname{ran} T. $$ The {\em index} of $T$ is defined by $$ \operatorname{ind}(T) := \operatorname{nul}(T) - \operatorname{def}(T). $$ This value may be any integer or $\pm\infty$. It is, moreover, easily seen that $T$ is upper semi-Fredholm if and only if $T^*$ is lower semi-Fredholm. We have $\operatorname{def}(T^*) = \operatorname{nul}(T)$ and $\operatorname{nul}(T^*) = \operatorname{def}(T)$ and thus $\operatorname{ind}(T^*) = -\operatorname{ind}(T)$. The next theorem shows that the semi-Fredholm property of operators is stable under compact perturbations. \begin{thm}[{\cite[Theorem IV.5.26]{k}}]\label{t:compi} If $K\in L(\calH)$ is compact and $T\in L(\calH)$ is upper \braces{lower\,} semi-Fredholm, then $T + K$ is upper \braces{lower, respectively\,} semi-Fredholm with $\operatorname{ind}(T+K) = \operatorname{ind}(T)$. \end{thm} For a proof of the following lemma we refer the reader to \cite[Theorems III.16.5, III.16.6, and III.16.12]{m}. \begin{lem}\label{l:komisch} Let $S,T\in L(\calH)$. Then the following statements hold. \begin{enumerate} \item[{\rm (i)}] If $ST$ is upper semi-Fredholm, then so is $T$.
\item[{\rm (ii)}] If $S$ and $T$ are upper semi-Fredholm, then so is $ST$ and \begin{equation}\label{e:indexformel} \operatorname{ind}(ST) = \operatorname{ind}(S) + \operatorname{ind}(T). \end{equation} \end{enumerate} \end{lem} While Theorem \ref{t:compi} deals with compact perturbations, the next theorem, also known as the extended {\em punctured neighborhood theorem} (see \cite[Theorems 4.1 and 4.2]{ka}), is concerned with perturbations of the type $\lambda\operatorname{Id}$, where $|\lambda|$ is small. \begin{thm}\label{t:pnt} Let $T\in L(\calH)$ be upper semi-Fredholm and put $$ k := \dim\left[\operatorname{ker} T/(\operatorname{ker} T\cap R_\infty(T))\right], $$ where $R_\infty(T) := \bigcap_{n=0}^\infty\operatorname{ran}(T^n)$. Then there exists $\varepsilon > 0$ such that for $0 < |\lambda| < \varepsilon$ the following statements hold. \begin{enumerate} \item[{\rm (i)}] $T-\lambda$ is upper semi-Fredholm. \item[{\rm (ii)}] $\operatorname{ind}(T-\lambda) = \operatorname{ind}(T)$. \item[{\rm (iii)}] $\operatorname{nul}(T-\lambda) = \operatorname{nul}(T) - k$. \item[{\rm (iv)}] $\operatorname{def}(T-\lambda) = \operatorname{def}(T) - k$. \end{enumerate} \end{thm} \vspace*{.1cm} \section{Harmonic Analysis in the unit disk}\label{a:sequences} In this section of the Appendix we collect some results on complex sequences in the unit disk. Recall the definition of the evaluation operator $T_\Lambda$ in \eqref{e:eval1}--\eqref{e:eval2}. \begin{lem}\label{l:dense} If $\mathbbm{1}\in D(T_\Lambda)$, then $\operatorname{id}^n\in D(T_\Lambda)$ for all $n\in\ensuremath{\mathbb N}$. In particular, $T_\Lambda$ is densely defined. If, in addition, $\lambda_i\neq\lambda_j$ for $i\neq j$, then $\operatorname{ran} T_\Lambda$ is dense in $\ell^2(\ensuremath{\mathbb N})$. \end{lem} \begin{proof} First, $\mathbbm{1}\in D(T_\Lambda)$ means that $\Lambda$ is a Blaschke sequence, i.e., $(\varepsilon_j)_{j\in\ensuremath{\mathbb N}}\in\ell^2(\ensuremath{\mathbb N})$.
Thus, for $n\in\ensuremath{\mathbb N}$ we have that $\sum_{j=0}^\infty\varepsilon_j^2|\lambda_j|^{2n}\le\sum_{j=0}^\infty\varepsilon_j^2 < \infty$. That is, $\operatorname{id}^n\in D(T_\Lambda)$. If $\lambda_i\neq\lambda_j$ for $i\neq j$, for fixed $i\in\ensuremath{\mathbb N}$ let $B_i$ be the Blaschke product of $(\lambda_j)_{j\neq i}$, i.e., $$ B_i(z) = z^k\prod_{j\neq i}\frac{\lambda_j-z}{1 - \overline\lambda_jz}\frac{|\lambda_j|}{\lambda_j}\,, $$ where $k\in\{0,1\}$ and $k=0$ iff $\lambda_j\neq 0$ for all $j\neq i$; if some $\lambda_j = 0$, the corresponding factor in the product is omitted. Set $f_i := (\varepsilon_iB_i(\lambda_i))^{-1}B_i$, $i\in\ensuremath{\mathbb N}$. Then $f_i\in H^\infty(\ensuremath{\mathbb D})\subset H^2(\ensuremath{\mathbb D})$ (see, e.g., \cite[Theorem 15.21]{r}). Moreover, $f_i\in D(T_\Lambda)$ and $T_\Lambda f_i$ is the $i$-th standard basis vector of $\ell^2(\ensuremath{\mathbb N})$. Hence, $\operatorname{ran} T_\Lambda$ is dense in $\ell^2(\ensuremath{\mathbb N})$. \end{proof} The following theorem is due to Shapiro and Shields \cite{ss}. \begin{thm}\label{t:interpolating} For a sequence $\Lambda = (\lambda_k)_{k\in\ensuremath{\mathbb N}}\subset\ensuremath{\mathbb D}$ the following statements are equivalent. \begin{enumerate} \item[{\rm (i)}] The evaluation operator $T_\Lambda$ is defined on $H^2(\ensuremath{\mathbb D})$ and is onto. \item[{\rm (ii)}] The sequence $(K_{\lambda_j})_{j\in\ensuremath{\mathbb N}}$ is a Riesz sequence in $H^2(\ensuremath{\mathbb D})$. \item[{\rm (iii)}] $\Lambda$ is uniformly separated. \end{enumerate} \end{thm} The equivalence of (i) and (ii) in Theorem \ref{t:interpolating} simply follows from the fact that $T_\Lambda$ is the analysis operator of the sequence $(K_{\lambda_j})_{j\in\ensuremath{\mathbb N}}$. The following two theorems can be found in \cite{ds}. \begin{thm}\label{t:finite_union} Let $\Lambda = (\lambda_j)_{j\in\ensuremath{\mathbb N}}\subset\ensuremath{\mathbb D}$. Then the following conditions are equivalent.
\begin{enumerate} \item[{\rm (i)}] $\Lambda$ is a finite union of uniformly separated sequences. \item[{\rm (ii)}] $D(T_\Lambda) = H^2(\ensuremath{\mathbb D})$. \end{enumerate} \end{thm} \begin{thm}\label{t:durenschuster} Let $\Lambda = (\lambda_j)_{j\in\ensuremath{\mathbb N}}\subset\ensuremath{\mathbb D}$. Then the following statements are equivalent. \begin{enumerate} \item[{\rm (i)}] $\Lambda$ is uniformly separated. \item[{\rm (ii)}] $\Lambda$ is separated and $D(T_\Lambda) = H^2(\ensuremath{\mathbb D})$. \end{enumerate} \end{thm} Note that Theorem \ref{t:durenschuster} is not formulated as a theorem in \cite{ds}, but is hidden in the proof of the implication (iii)$\Rightarrow$(i) of the main theorem. It immediately implies the next corollary. \begin{cor}\label{c:finunifsep} Let $(\lambda_j)_{j\in\ensuremath{\mathbb N}}$ be a sequence in $\ensuremath{\mathbb D}$ such that $\lambda_j\neq\lambda_k$ for $j\neq k$. If $(\lambda_j)_{j\ge n}$ is uniformly separated for some $n\in\ensuremath{\mathbb N}$, then also $(\lambda_j)_{j\in\ensuremath{\mathbb N}}$ is uniformly separated. \end{cor} \begin{lem}\label{l:index} Let $\Lambda = (\lambda_j)_{j\in\ensuremath{\mathbb N}}$ be a sequence in $\ensuremath{\mathbb D}$. For $r\in (0,1)$ and $z\in\ensuremath{\mathbb D}$ define $$ J(r,z) := \{j\in\ensuremath{\mathbb N} : \lambda_j\in\mathbb B_r(z)\} \qquad\text{and}\qquad m_r := \sup_{z\in\ensuremath{\mathbb D}}\left|J(r,z)\right|, $$ and assume $m_{r_0} < \infty$ for some $r_0\in (0,1)$. Then $\Lambda$ is a union of $m_{r_0}$ separated sequences \braces{or less\,} and \begin{equation}\label{e:index} \operatorname{ind}(\Lambda) = \inf\{m_r : r\in (0,r_0]\}. \end{equation} \end{lem} \begin{proof} For $J\subset\ensuremath{\mathbb N}$ define $\Lambda_J := (\lambda_j)_{j\in J}$ and set $M := m_{r_0}$. We will define subsets $J_1,\ldots,J_m$, $m\le M$, recursively such that $\ensuremath{\mathbb N} = \bigcup_{k=1}^m J_k$ and each $\Lambda_{J_k}$ is separated.
For the definition of $J_1$, put $r := r_0$, set $j_0 := 0$ and, once $j_0,\ldots,j_s$ are chosen, pick $$ j_{s+1} := \min\left\{j > j_s : \lambda_j\notin\bigcup_{i=0}^{s}\mathbb B_r(\lambda_{j_i})\right\}. $$ Note that $j_{s+1}$ is well defined since $M < \infty$: each ball $\mathbb B_r(\lambda_{j_i})$ contains at most $M$ elements of the sequence, so the finitely many balls cannot contain the whole (infinite) sequence. In other words, $\lambda_{j_{s+1}}$ is the first element of the sequence $\Lambda$ that does not belong to any of the balls of radius $r$ around the previously chosen elements. Put $J_1 := \{j_s : s\in\ensuremath{\mathbb N}\}$. It is clear that $\Lambda_{J_1}$ is separated (by $r$) and that all the $\lambda_j$ that have not been chosen in this process belong to some $\mathbb B_r(\lambda_{j_s})$. If $\ensuremath{\mathbb N}\setminus J_1$ is finite, we are finished. If not, proceed as before with $\ensuremath{\mathbb N}\setminus J_1$ instead of $\ensuremath{\mathbb N}$ to find an infinite set $J_2\subset\ensuremath{\mathbb N}\setminus J_1$ such that $\Lambda_{J_2}$ is separated (by $r$). Continuing in this way, the process either terminates after $m < M-1$ steps (in which case we are done) or we obtain $M-1$ separated sequences $\Lambda_{J_1},\ldots,\Lambda_{J_{M-1}}$. In this case, put $$ J_{M} := \ensuremath{\mathbb N}\setminus\bigcup_{k=1}^{M-1}J_k. $$ Let us prove that $\Lambda_{J_{M}}$ is separated. For this, let $j\in J_{M}$ be arbitrary. Then, as a result of the construction process, for each $k=1,\ldots,M-1$ there is some $i_k\in J_k$ with $\lambda_j\in\mathbb B_r(\lambda_{i_k})$. Thus $j,i_1,\ldots,i_{M-1}\in J(r,\lambda_j)$, and since $|J(r,\lambda_j)|\le M$, in fact $J(r,\lambda_j) = \{j,i_1,\ldots,i_{M-1}\}$. Therefore, $\varrho(\lambda_j,\lambda_l)\ge r$ for all $l\in J_{M}$, $l\neq j$. Hence, $\Lambda_{J_{M}}$ is indeed separated. It remains to prove the relation \eqref{e:index}. For this, put $m_0 := \inf\{m_r : r\in (0,r_0]\}$.
It is clear that $m_0 = m_{r_1}$ for some $r_1\le r_0$ (note that $r\mapsto m_r$ is non-decreasing) and that, therefore, $n := \operatorname{ind}(\Lambda)\le m_0$. There exist $J_1,\ldots,J_n\subset\ensuremath{\mathbb N}$ with $\ensuremath{\mathbb N} = \bigcup_{k=1}^nJ_k$ such that $\Lambda_{J_k}$ is separated for each $k=1,\ldots,n$. Without loss of generality, let each $\Lambda_{J_k}$ be separated by $r_1$. From $m_0 = m_{r_1/2}$ we conclude that there exists some $z\in\ensuremath{\mathbb D}$ such that $|J(r_1/2,z)| = m_0$. Suppose that $n < m_0$. Then there exists some $J_k$ that contains at least two of the $m_0$ elements of $J(r_1/2,z)$, say, $j_1$ and $j_2$. But then $\varrho(\lambda_{j_1},\lambda_{j_2})\le\varrho(\lambda_{j_1},z) + \varrho(z,\lambda_{j_2}) < r_1$, contradicting the fact that $\Lambda_{J_k}$ is separated by $r_1$. \end{proof} We shall also make use of the following lemma which in particular shows that the map $(\ensuremath{\mathbb D},\varrho)\to H^2(\ensuremath{\mathbb D})$, $z\mapsto K_z$, is Lipschitz continuous. Here, $K$ is the normalized reproducing kernel of $H^2(\ensuremath{\mathbb D})$ defined in \eqref{e:K}. \begin{lem}\label{l:lipschitz} For $z\in\ensuremath{\mathbb D}$ put $s_z = \sqrt{1-|z|^2}$. Then for $z,w\in\ensuremath{\mathbb D}$ the following relation holds: $$ \|K_z - K_w\|_{H^2(\ensuremath{\mathbb D})}^2 = (2 - s_zs_w)\varrho(z,w)^2 - \left(1 - \varrho(z,w)^2\right)\frac{(s_z-s_w)^2}{s_zs_w}. $$ In particular, $$ \|K_z - K_w\|_{H^2(\ensuremath{\mathbb D})}\,\le\,\sqrt 2\,\varrho(z,w).
$$ \end{lem} \begin{proof} Using $1-\varrho(z,w)^2 = s_z^2s_w^2/|1-\overline zw|^2$ (see \eqref{e:hyp_identity}), we see that \begin{align*} \|K_z - K_w\|_{H^2(\ensuremath{\mathbb D})}^2 &= 2 - 2\operatorname{Re}\langle K_z,K_w\rangle = 2 - 2\operatorname{Re}\frac{s_zs_w}{1 - \overline zw}\\ &= 2 - \,2\frac{s_zs_w}{|1-\overline zw|^2}(1 - \operatorname{Re}(\overline zw))\\ &= 2 - \,\frac{s_zs_w}{|1-\overline zw|^2}\left(2 - 2\operatorname{Re}(\overline zw) - 2s_zs_w\right) - 2\left(1-\varrho(z,w)^2\right)\\ &= 2\varrho(z,w)^2 - \,\frac{s_zs_w}{|1-\overline zw|^2}\left(2 + |z-w|^2 - |z|^2 - |w|^2 - 2s_zs_w\right)\\ &= 2\varrho(z,w)^2 - \,\frac{s_zs_w}{|1-\overline zw|^2}\left((s_z-s_w)^2 + |z-w|^2\right). \end{align*} Since $|z-w|^2/|1-\overline zw|^2 = \varrho(z,w)^2$ and $s_zs_w/|1-\overline zw|^2 = (1-\varrho(z,w)^2)/(s_zs_w)$, the last line equals $(2-s_zs_w)\varrho(z,w)^2 - (1-\varrho(z,w)^2)(s_z-s_w)^2/(s_zs_w)$, which proves the claim. \end{proof} \vspace*{.5cm}\noindent {\bf Acknowledgements.} The authors would like to thank D. Su\'arez for sharing his knowledge on sequences in the unit disk. \end{document}
\begin{document} \title{ On quantum perfect state transfer in weighted join graphs } \begin{abstract} We study perfect state transfer on quantum networks represented by weighted graphs. Our focus is on graphs constructed from the join and related graph operators. Some specific results we prove include: \begin{itemize} \item The join of a weighted two-vertex graph with any regular graph has perfect state transfer. This generalizes a result of Casaccino {\it et al.~} \cite{clms09} where the regular graph is a complete graph or a complete graph with a missing link. In contrast, the half-join of a weighted two-vertex graph with any weighted regular graph has no perfect state transfer. This implies that adding weights in a complete bipartite graph does not help in achieving perfect state transfer. \item A Hamming graph has perfect state transfer between each pair of its vertices. This is obtained using a closure property on weighted Cartesian products of perfect state transfer graphs. Moreover, on the hypercube, we show that perfect state transfer occurs between uniform superpositions on pairs of arbitrary subcubes. This generalizes results of Bernasconi {\it et al.~} \cite{bgs08} and Moore and Russell \cite{mr02}. \end{itemize} Our techniques rely heavily on the spectral properties of graphs built using the join and Cartesian product operators. \par\noindent{\em Keywords}: Perfect state transfer, quantum networks, weighted graphs, join. \end{abstract} \section{Introduction} Recently, the notion of perfect state transfer in quantum networks modeled by graphs has received considerable attention in quantum information \cite{cdel04,cddekl05,sss07,bgs08,bcms09,bp09,clms09}. A main goal in this line of research is to find and characterize graph structures which exhibit perfect state transfer between pairs of vertices in the graph. This is a useful property of quantum networks since it facilitates information transfer between locations.
We may conveniently view the perfect state transfer problem in the context of quantum walks on graphs \cite{fg98,k06}. In this setting, the initial state of the quantum system is described by a unit vector on some initial vertex $a$. To achieve perfect transfer to a target vertex $b$ at time $t$, the quantum walk amplitude of the system at time $t$ on vertex $b$ must be of unit magnitude. In other words, we require that $|\bra{b}e^{-itA_{G}}\ket{a}| = 1$, where $A_{G}$ is the adjacency matrix of the underlying graph $G$ that describes the quantum network. Christandl {\it et al.~} \cite{cdel04} observed that the Cartesian products of paths of length three (two-link hypercubes) admit perfect state transfer between antipodal vertices. They also noted that paths of length four or larger do not possess perfect state transfer unless their edges are weighted in a specific manner (see \cite{cddekl05}). In fact, this weighting scheme corresponds closely to the hypercube structure. This crucially shows that edge weights can be useful in achieving perfect state transfer on graphs which are known not to possess the property. It is known that complete graphs do not have perfect state transfer. But surprisingly, Casaccino {\it et al.~} \cite{clms09} observed that adding weighted self-loops on two vertices in a complete graph helps create perfect state transfer between the two vertices. We generalize their observation by considering the join of a weighted two-vertex graph with an arbitrary regular graph. We prove that adding weights also helps for perfect state transfer in this more general case. On the other hand, we show that the half-join between a weighted two-vertex graph with a weighted self-join of an arbitrary regular graph, where each vertex of the two-vertex graph is connected to exactly half of the join graph, has no perfect state transfer for any set of weights. 
This implies that weights provably do not help in achieving perfect state transfer in complete bipartite graphs. The full connection that is available in the standard join seems crucial in achieving perfect state transfer. Bernasconi {\it et al.~} \cite{bgs08} gave a complete characterization of perfect state transfer on the hypercubes. They proved that perfect state transfer is possible at time $t = \pi/2$ between any pair of vertices. We will refer to this stronger property as universal perfect state transfer. Previously known results on perfect state transfer on other graphs, such as integral circulants \cite{bp09} and two-link hypercubes \cite{cdel04}, only allow perfect state transfer between antipodal vertices (which are vertices at maximum distance from each other). Recent results on integral circulants and other graphs (see \cite{anoprt09}) have exhibited perfect state transfer between non-antipodal vertices, but most of these graphs still lack the universal perfect state transfer property. We show that weights are useful for universal perfect state transfer in the family of Hamming graphs, which is a generalization of the hypercube family. We prove this result by extending the observation of Christandl {\it et al.~} \cite{cdel04} to perfect state transfer on weighted Cartesian products. For a weighted $n$-cube, we prove a stronger universal perfect state transfer property. We show that perfect state transfer occurs between uniform superpositions over two arbitrary subcubes of the $n$-cube. This generalizes the results of both Bernasconi {\it et al.~} \cite{bgs08} mentioned above and also of Moore and Russell \cite{mr02} on the uniform mixing of a quantum walk on the $n$-cube. We note that Bernasconi {\it et al.~} \cite{bgs08} proved universal perfect state transfer on the $n$-cube by {\em dynamically} changing the underlying hypercubic structure of the graph.
In contrast, our scheme is based on {\em static} weights which can be interpreted dynamically with time. Note that if we allow zero edge weights then universal perfect state transfer becomes trivial. Assuming that the source and target vertices are connected, find a path connecting them, assign the hypercubic weights to the edges on this path (as in Christandl {\it et al.~} \cite{cdel04}) and zero weights to the other edges. This shows that universal perfect state transfer can be achieved if zero edge weights are allowed. Our work exploits the machinery developed in \cite{anoprt09} and its extensions to weighted graphs. These include the join theorem for regular graphs and the closure property for Cartesian products of perfect state transfer graphs. \section{Preliminaries} For a logical statement $\mathcal{S}$, the Iversonian notation $\iverson{\mathcal{S}}$ is $1$ if $\mathcal{S}$ is true and $0$ otherwise (see Graham, Knuth and Patashnik \cite{gkp94}). As is standard, we use $I_{n}$ and $J_{n}$ to denote the $n \times n$ identity and all-one matrices, respectively; we drop the subscript $n$ whenever the context is clear. The graphs $G=(V,E)$ we study are finite, mostly simple, undirected, and connected. The adjacency matrix $A_{G}$ of a graph $G$ is defined as $A_{G}[u,v] = \iverson{(u,v) \in E}$. A graph $G$ is called $k$-regular if each vertex has $k$ neighbors. That is, the neighbor set $\{v \in V : (u,v) \in E\}$ of $u$ has cardinality $k$ for each vertex $u \in V$. In most cases, we also require $G$ to be vertex-transitive, that is, for any $a,b \in V$, there is an automorphism $\pi \in Aut(G)$ with $\pi(a)=b$. In this paper, we also consider edge-weighted graphs $\widetilde{G}=(V,E,w)$, where $w: E \rightarrow \mathbb{R}$ is a function that assigns weights to edges.
In the simplest case, we take an unweighted graph $G=(V,E)$ and add self-loops with weight $\alpha$ to all vertices and assign a weight of $\beta$ to all edges; we denote such a graph by $\widetilde{G}(\alpha,\beta)$. Note that the adjacency matrix of $\widetilde{G}$ is given by $\alpha I + \beta A_{G}$. Unless otherwise stated, most of our weighted graphs will be of this form. We denote the complete graph on $n$ vertices by $K_{n}$. The Cartesian product $G \oplus H$ of graphs $G$ and $H$ is a graph whose adjacency matrix is $I \otimes A_{H} + A_{G} \otimes I$ (see Lov\'{a}sz \cite{lovasz}, page 617). The binary $n$-dimensional hypercube $Q_{n}$ may be defined recursively as $Q_{n} = K_{2} \oplus Q_{n-1}$, for $n \ge 2$, and $Q_{1} = K_{2}$. Similarly, the Hamming graph $H(q,n)$ is defined as $K_{q}^{\oplus n}$; this may be viewed as a $q$-ary $n$-dimensional hypercube. The {\em join} $G + H$ of graphs $G$ and $H$ is defined as $\overline{G+H} = \overline{G} \cup \overline{H}$; that is, we take a copy of $G$ and a copy of $H$ and connect all vertices of $G$ with all vertices of $H$ (see \cite{sw78}). We will also consider the weighted join $G +_{\rho} H$ where we assign a weight of $\rho$ to the edges that connect $G$ and $H$; more specifically, the adjacency matrix of $G +_{\rho} H$ is given by \begin{equation} \begin{bmatrix} A_{G} & \rho J \\ \rho J & A_{H} \end{bmatrix}, \end{equation} with the appropriate dimensions on the two all-one $J$ matrices. A {\em cone} on a graph $G$ is the graph $K_{1} + G$. Similarly, a connected {\em double} cone on a graph $G$ is the graph $K_{2} + G$, while a disconnected double cone is the graph $\overline{K}_{2} + G$. When $G$ is the empty graph, the connected double-cone is simply the complete graph whereas the disconnected double-cone is the complete graph with a missing edge (see \cite{bcms09,clms09}).
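These matrix descriptions translate directly into code. The sketch below is ours (it assumes NumPy; the helper names `cartesian` and `weighted_join` are ad hoc): it builds $A_{G\oplus H} = I\otimes A_{H} + A_{G}\otimes I$ and the block matrix of $G +_{\rho} H$, and recovers small instances such as $Q_{2} = K_{2}\oplus K_{2}$.

```python
import numpy as np

def cartesian(Ag, Ah):
    # Adjacency matrix of the Cartesian product: I (x) A_H + A_G (x) I.
    return (np.kron(np.eye(Ag.shape[0]), Ah)
            + np.kron(Ag, np.eye(Ah.shape[0])))

def weighted_join(Ag, Ah, rho):
    # Adjacency matrix of G +_rho H: every cross edge carries weight rho.
    n, m = Ag.shape[0], Ah.shape[0]
    return np.block([[Ag, rho * np.ones((n, m))],
                     [rho * np.ones((m, n)), Ah]])

K2 = np.array([[0., 1.], [1., 0.]])
Q2 = cartesian(K2, K2)                  # hypercube Q_2, i.e. the 4-cycle
Q3 = cartesian(K2, cartesian(K2, K2))   # Q_3 = H(2,3), a 3-regular Hamming graph
cone = weighted_join(np.zeros((1, 1)), K2, 1.0)  # K_1 + K_2 = K_3
```

With the vertex order $00,01,10,11$, `Q2` has rows $(0,1,1,0)$, $(1,0,0,1)$, $(1,0,0,1)$, $(0,1,1,0)$, the adjacency matrix of the $4$-cycle, and `cone` equals $A_{K_{3}}$.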
On the other hand, a connected (or disconnected) double {\em half-cone} on a graph $G$ is formed by taking $K_{2}$ (or $\overline{K}_{2}$) and $G + G$ and connecting each vertex of the two-vertex graph to exactly one copy of $G$ in the join $G+G$. When $G$ is the empty graph, the double half-cone simply yields a complete bipartite graph. For more background on algebraic graph theory, we refer the reader to the monograph by Biggs \cite{biggs}. For a graph $G=(V,E)$, let $\ket{\psi(t)} \in \mathbb{C}^{|V|}$ be a time-dependent amplitude vector over $V$. The continuous-time quantum walk on $G$ is defined using Schr\"{o}dinger's equation as \begin{equation} \ket{\psi(t)} = e^{-it A_{G}} \ket{\psi(0)}, \end{equation} where $\ket{\psi(0)}$ is the initial amplitude vector (see \cite{fg98}). Further background on quantum walks on graphs can be found in the survey by Kendon \cite{k06}. We say $G$ has {\em perfect state transfer} (PST) from vertex $a$ to vertex $b$ at time $t^{\star}$ if \begin{equation} \label{eqn:pst} |\bra{b}e^{-it^{\star} A_{G}}\ket{a}| = 1, \end{equation} where $\ket{a}$, $\ket{b}$ denote the unit vectors corresponding to the vertices $a$ and $b$, respectively. The graph $G$ has perfect state transfer if there exist distinct vertices $a$ and $b$ in $G$ and a time $t^{\star} \in \mathbb{R}^{+}$ so that (\ref{eqn:pst}) is true. We say that $G$ has {\em universal} perfect state transfer if (\ref{eqn:pst}) occurs between all distinct pairs of vertices $a$ and $b$ of $G$. \subsection{Example: Triangle} We begin by describing an explicit example of the role of weights for perfect state transfer in a triangle, or $K_{3}$, which is the complete graph on three vertices. The eigenvalues of $K_{3}$ are $2$ (simple) and $-1$ (with multiplicity two) with eigenvectors $\ket{F_{k}}$, where $\ket{F_{k}}$ are the columns of the Fourier matrix, with $\braket{j}{F_{k}} = \omega_{3}^{jk}/\sqrt{3}$, for $j,k \in \{0,1,2\}$ (see Biggs \cite{biggs}). 
The quantum walk on $K_{3}$ yields \begin{equation} \bra{1}e^{-itK_{3}}\ket{0} = \bra{1}\left\{\sum_{k=0}^{2} e^{-it\lambda_{k}}\ket{F_{k}}\bra{F_{k}}\right\}\ket{0} = -\frac{2}{3}ie^{-it/2}\sin(3t/2). \end{equation} \begin{figure} \caption{Weighted joins of $K_{2}$ with regular graphs.} \end{figure} So, it is clear that there is no perfect state transfer on $K_{3}$ (see \cite{abtw03,clms09}). Now, consider adding self-loops on vertices $0$ and $1$ with weight $\mu$ and putting a weight of $\eta$ on the edge connecting $0$ and $1$. The adjacency matrix of this weighted $\widetilde{K}_{3}$ is \begin{equation} \widetilde{K}_{3} = \begin{bmatrix} \mu & \eta & 1 \\ \eta & \mu & 1 \\ 1 & 1 & 0 \end{bmatrix}. \end{equation} The spectrum of $\widetilde{K}_{3}$ is given by the eigenvalues $\lambda_{0}=\mu-\eta$ and $\lambda_{\pm} = 2\alpha_{\pm}$, where $\alpha_{\pm} = \frac{1}{4}(\delta \pm \Delta)$, $\delta = \mu+\eta$ and $\Delta = \sqrt{\delta^{2}+8}$, with corresponding orthonormal eigenvectors \begin{equation} \ket{v_{0}} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}, \ \ \ \ket{v_{\pm}} = \frac{1}{\sqrt{2\alpha^{2}_{\pm}+1}}\begin{bmatrix} \alpha_{\pm} \\ \alpha_{\pm} \\ 1 \end{bmatrix}. \end{equation} The perfect state transfer amplitude between the two vertices with weighted self-loops is given by \begin{eqnarray} \bra{1}e^{-it\widetilde{K}_{3}}\ket{0} & = & \bra{1}e^{-it\widetilde{K}_{2}}\ket{0} + \frac{1}{2}e^{-it\delta}\left\{ e^{it\delta/2} \left[\cos\left(\frac{\Delta}{2}t\right) - i\frac{\delta}{\Delta}\sin\left(\frac{\Delta}{2}t\right)\right] - 1 \right\}, \end{eqnarray} where $\widetilde{K}_{2}$ is $\widetilde{K}_{2}(\mu,\eta)$. Recall that the amplitude $\bra{1}e^{-itK_{2}}\ket{0}$ on the (unweighted) $K_{2}$ is given by $-i\sin(t)$.
Thus, the weighted $\widetilde{K}_{2}$ has perfect state transfer at time $t^{\star} = (2k+1)\pi/(2\eta)$, $k\in\mathbb{Z}$, since the self-loop weight $\mu$ disappears into an irrelevant phase factor and the edge weight $\eta$ translates into a time-scaling. So, to achieve perfect state transfer on $\widetilde{K}_{3}$, it suffices to have \begin{equation} \cos\left( \frac{\delta}{4\eta} \pi \right)\cos\left( \frac{\Delta}{4\eta} \pi \right) = 1. \end{equation} Equivalently, we require that: \begin{enumerate} \item $A \stackrel{.}{=} \delta/4\eta$ be an integer; \item $B \stackrel{.}{=} \Delta/4\eta$ be an integer; and \item $A \equiv B \pmod{2}$, that is, $A$ and $B$ have the same parity. \end{enumerate} From the first two conditions, we require that $\delta/\Delta$ be a rational number $p/q < 1$ with $\gcd(p,q)=1$. Restating this last condition on $p$ and $q$ and simplifying, we get that \begin{equation} \delta \ = \ p \ \sqrt{\frac{8}{q^{2}-p^{2}}}, \hspace{.5in} \Delta \ = \ q \ \sqrt{\frac{8}{q^{2}-p^{2}}}. \end{equation} So, we may choose \begin{equation} \eta \ = \ \frac{1}{4}\sqrt{\frac{8}{q^{2}-p^{2}}} \end{equation} so that $A = \delta/4\eta = p$ and $B = \Delta/4\eta = q$ are integers. Therefore, we choose odd integers $p$ and $q$ satisfying $\gcd(p,q)=1$; this satisfies all three conditions stated above. This shows that there are infinitely many weights $\mu$ and $\eta$ (via infinitely many choices of odd integers $p$ and $q$) which allow perfect state transfer on $\widetilde{K}_{3}$. We generalize this example in our join theorem for arbitrary regular weighted graphs. This example complements a result of Casaccino {\it et al.~} \cite{clms09} which showed the power of weighted self-loops on complete graphs. Our analysis above shows that perfect state transfer is achieved through edge weights instead.
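The smallest admissible choice in this family, $p=1$ and $q=3$, gives $\eta = \frac{1}{4}$, $\delta = 1$, and hence $\mu = \delta - \eta = \frac{3}{4}$, with transfer time $t^{\star} = \pi/(2\eta) = 2\pi$. The following numerical check is ours (it assumes NumPy, and the helper `amplitude` is ad hoc): it confirms perfect state transfer for these weights, while the amplitude on the unweighted $K_{3}$ never exceeds $2/3$.

```python
import numpy as np

def amplitude(A, t, a=0, b=1):
    # <b| exp(-itA) |a> via the spectral decomposition of the symmetric matrix A.
    w, V = np.linalg.eigh(A)
    return (V @ np.diag(np.exp(-1j * t * w)) @ V.T)[b, a]

mu, eta = 0.75, 0.25          # p = 1, q = 3: eta = (1/4)sqrt(8/(q^2-p^2)), mu = delta - eta
Ktilde3 = np.array([[mu, eta, 1.0],
                    [eta, mu, 1.0],
                    [1.0, 1.0, 0.0]])
t_star = np.pi / (2 * eta)    # = 2*pi

# Closed-form PST condition: cos(delta*pi/(4 eta)) * cos(Delta*pi/(4 eta)) = 1.
delta = mu + eta
Delta = np.sqrt(delta**2 + 8)
```

Here `abs(amplitude(Ktilde3, t_star))` evaluates to $1$ up to machine precision, whereas $|\bra{1}e^{-itK_{3}}\ket{0}| = \frac{2}{3}|\sin(3t/2)|\le\frac{2}{3}$ for the unweighted triangle.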
\section{Join of weighted regular graphs} In this section, we prove that the existence of perfect state transfer in a join of two arbitrary regular weighted graphs can be reduced to perfect state transfer in one of the graphs. In fact, since we add weights to our graphs in a particular way, this is a reduction onto the unweighted version of one of the graphs. This allows us to analyze the double-cone on any regular graph; that is, the join of $K_{2}$ with an arbitrary regular graph. The next theorem is a generalization of a similar join theorem given in \cite{anoprt09}. \begin{theorem} \label{thm:weighted-binary-join} For $j \in \{1,2\}$, let $\widetilde{G}_{j}(\mu_{j},\eta_{j})$ be a $k_{j}$-regular graph on $n_{j}$ vertices, where each vertex has a self-loop with weight $\mu_{j}$ and each edge has weight $\eta_{j}$. Also, for $j \in \{1,2\}$, let \begin{equation} \kappa_{j} = \mu_{j}+\eta_{j}k_{j}. \end{equation} Suppose that $a$ and $b$ are two vertices in $\widetilde{G}_{1}$. Let $\mathcal{G} = \widetilde{G}_{1}(\mu_{1},\eta_{1}) + \widetilde{G}_{2}(\mu_{2},\eta_{2})$ be the join of the weighted graphs. Then, \begin{equation} \label{eqn:pst-join-reduction} \bra{b}e^{-it A_{\mathcal{G}}}\ket{a} = \bra{b} e^{-it A_{\widetilde{G}_{1}}} \ket{a} + \frac{e^{-it \kappa_{1}}}{n_{1}} \left\{e^{it\delta/2} \left[ \cos\left(\frac{\Delta t}{2}\right) - i\left(\frac{\delta}{\Delta}\right)\sin\left(\frac{\Delta t}{2}\right) \right] - 1 \right\} \end{equation} where $\delta = \kappa_{1}-\kappa_{2}$ and $\Delta = \sqrt{\delta^{2} + 4n_{1}n_{2}}$. \end{theorem} \Probf Let $G_{j}$ be the simple and unweighted version of $\widetilde{G}_{j}$, for $j \in \{1,2\}$; that is, $G_{j} = \widetilde{G}_{j}(0,1)$. Whenever it is clear from context, we denote $\widetilde{G}_{j}(\mu_{j},\eta_{j})$ as simply $\widetilde{G}_{j}$. 
If $\lambda_{k}$ and $\ket{u_{k}}$ are the eigenvalues and eigenvectors of $A_{G_{1}}$, for $k=0,\ldots,n_{1}-1$, then \begin{equation} \bra{b}e^{-it A_{G_{1}}}\ket{a} = \bra{b}\left\{\sum_{k=0}^{n_{1}-1} \ket{u_{k}}\bra{u_{k}} e^{-it\lambda_{k}}\right\} \ket{a}. \end{equation} Here, we assume $\ket{u_{0}}$ is the all-one eigenvector (that is orthogonal to the other eigenvectors) with eigenvalue $\lambda_{0} = k_{1}$. By the same token, let $\theta_{\ell}$ and $\ket{v_{\ell}}$ be the eigenvalues and eigenvectors of $A_{G_{2}}$, for $\ell = 0,\ldots,n_{2}-1$. Also, let $\ket{v_{0}}$ be the all-one eigenvector (with eigenvalue $\theta_{0} = k_{2}$) which is orthogonal to the other eigenvectors $\ket{v_{\ell}}$, $\ell \neq 0$. Let $\mathcal{G} = \widetilde{G}_{1} + \widetilde{G}_{2}$. Note that the adjacency matrix of $\mathcal{G}$ is \begin{equation} A_{\mathcal{G}} = \begin{bmatrix} \mu_{1}I+\eta_{1}A_{G_{1}} & J_{n_{1} \times n_{2}} \\ J_{n_{2} \times n_{1}} & \mu_{2}I+\eta_{2}A_{G_{2}} \end{bmatrix}. \end{equation} Let $\delta = \kappa_{1}-\kappa_{2}$, where $\kappa_{j} = \mu_{j}+\eta_{j}k_{j}$, for $j \in \{1,2\}$. The eigenvalues and eigenvectors of $A_{\mathcal{G}}$ are given by the following three sets: \begin{itemize} \item For $k=1,\ldots,n_{1}-1$, let $\ket{u_{k},0_{n_{2}}}$ be a column vector formed by concatenating the column vector $\ket{u_{k}}$ with the zero vector of length $n_{2}$. Then, $\ket{u_{k},0_{n_{2}}}$ is an eigenvector with eigenvalue $\widetilde{\lambda}_{k} = \mu_{1}+\eta_{1}\lambda_{k}$. Note that $\widetilde{\lambda}_{0} = \kappa_{1}$. \item For $\ell=1,\ldots,n_{2}-1$, let $\ket{0_{n_{1}},v_{\ell}}$ be a column vector formed by concatenating the zero vector of length $n_{1}$ with the column vector $\ket{v_{\ell}}$. Then, $\ket{0_{n_{1}},v_{\ell}}$ is an eigenvector with eigenvalue $\widetilde{\theta}_{\ell} = \mu_{2}+\eta_{2}\theta_{\ell}$. 
\item Let $\ket{\pm} = \frac{1}{\sqrt{L_{\pm}}}\ket{\alpha_{\pm},1_{n_{2}}}$ be a column vector formed by concatenating the vector $\alpha_{\pm}\ket{1_{n_{1}}}$ with the vector $\ket{1_{n_{2}}}$, where $\ket{1_{n_{1}}}$, $\ket{1_{n_{2}}}$ denote the all-one vectors of length $n_{1}$, $n_{2}$, respectively. Then, $\ket{\pm}$ is an eigenvector with eigenvalue $\widetilde{\lambda}_{\pm} = n_{1}\alpha_{\pm} + \kappa_{2}$. Here, we have \begin{equation} \alpha_{\pm} = \frac{1}{2n_{1}}(\delta \pm \Delta), \ \ \ \Delta^{2} = \delta^{2} + 4n_{1}n_{2}, \ \ \ L_{\pm} = n_{1}(\alpha_{\pm})^{2} + n_{2}. \end{equation} \end{itemize} In what follows, we will abuse notation by using $\ket{a}$, $\ket{b}$ for both $\widetilde{G}_{1}$ and $\widetilde{G}_{1}+\widetilde{G}_{2}$; their dimensions differ in the two cases, but it will be clear from context which version is meant. The quantum walk amplitude from $a$ to $b$ is given by \begin{eqnarray} \bra{b}e^{-it A_{\mathcal{G}}}\ket{a} & = & \bra{b} e^{-it A_{\mathcal{G}}} \left\{\sum_{k=1}^{n_{1}-1}\braket{u_{k},0_{n_{2}}}{a} \ket{u_{k},0_{n_{2}}} + \sum_{\pm} \frac{\alpha_{\pm}}{\sqrt{L_{\pm}}} \ket{\pm} \right\} \\ & = & \bra{b} \left\{\sum_{k=1}^{n_{1}-1}\braket{u_{k}}{a} e^{-it\widetilde{\lambda}_{k}}\ket{u_{k},0_{n_{2}}} + \sum_{\pm} \frac{\alpha_{\pm}}{\sqrt{L_{\pm}}} e^{-it\widetilde{\lambda}_{\pm}} \ket{\pm} \right\} \\ & = & \sum_{k=1}^{n_{1}-1}\braket{b}{u_{k}}\braket{u_{k}}{a} e^{-it\widetilde{\lambda}_{k}} + \sum_{\pm} \frac{\alpha_{\pm}^{2}}{L_{\pm}} e^{-it\widetilde{\lambda}_{\pm}}.
\end{eqnarray} This shows that \begin{eqnarray} \bra{b}e^{-it A_{\mathcal{G}}}\ket{a} \label{eqn:pst-join} & = & \bra{b}\left\{\sum_{k=0}^{n_{1}-1}\ket{u_{k}}\bra{u_{k}} e^{-it\widetilde{\lambda}_{k}}\right\} \ket{a} -\frac{e^{-it \kappa_{1}}}{n_{1}} + \sum_{\pm} \frac{\alpha_{\pm}^{2}}{L_{\pm}} e^{-it\widetilde{\lambda}_{\pm}} \\ \label{eqn:pst-join2} & = & \bra{b} e^{-it A_{\widetilde{G}_{1}}} \ket{a} + \sum_{\pm} \frac{\alpha_{\pm}^{2}}{L_{\pm}} e^{-it\widetilde{\lambda}_{\pm}} -\frac{e^{-it \kappa_{1}}}{n_{1}}. \end{eqnarray} To analyze the second term, we use the following identities, which follow easily from the definitions of $\alpha_{\pm}$, $L_{\pm}$, $\delta$ and $\Delta$: \begin{eqnarray} \alpha_{+}\alpha_{-} & = & -(n_{2}/n_{1}) \\ \alpha_{+} + \alpha_{-} & = & \delta/n_{1} \\ L_{+}L_{-} & = & (n_{2}/n_{1})\Delta^{2} \\ L_{+} + L_{-} & = & \Delta^{2}/n_{1} \\ (\alpha_{\pm})^{2}L_{\mp} & = & (n_{2}/n_{1})L_{\pm} \\ \label{eqn:lambda-pm} \widetilde{\lambda}_{\pm} & = & (\hat{\delta} \pm \Delta)/2 \end{eqnarray} where $\hat{\delta} = \kappa_{1} + \kappa_{2}$. Therefore, the sum in (\ref{eqn:pst-join2}) is given by \begin{eqnarray} \sum_{\pm} \frac{\alpha_{\pm}^{2}}{L_{\pm}} e^{-it\widetilde{\lambda}_{\pm}} & = & \frac{1}{n_{1}} e^{-it\hat{\delta}/2} \left[ \cos\left(\frac{\Delta t}{2}\right) - i\left(\frac{\delta}{\Delta}\right)\sin\left(\frac{\Delta t}{2}\right) \right]. \end{eqnarray} This yields \begin{equation} \bra{b}e^{-it A_{\mathcal{G}}}\ket{a} = \bra{b} e^{-it A_{\widetilde{G}_{1}}} \ket{a} + \frac{e^{-it \kappa_{1}}}{n_{1}} \left\{e^{it\delta/2} \left[ \cos\left(\frac{\Delta t}{2}\right) - i\left(\frac{\delta}{\Delta}\right)\sin\left(\frac{\Delta t}{2}\right) \right] - 1 \right\} \end{equation} which proves the claim. \qed\\ \par\noindent We describe several applications of Theorem \ref{thm:weighted-binary-join} to the weighted double-cone $\widetilde{K}_{2} + G$, for any regular graph $G$.
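As a numerical sanity check of equation (\ref{eqn:pst-join-reduction}) (outside the proof itself), one can compare both sides on a small example; the choice $\widetilde{G}_{1} = C_{4}$, $\widetilde{G}_{2} = C_{3}$ and the weight values below are arbitrary, and NumPy/SciPy are assumed:

```python
import numpy as np
from scipy.linalg import expm

def cycle(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

# arbitrary illustrative weights
mu1, eta1, mu2, eta2 = 0.2, 1.3, -0.5, 0.8
n1, n2, k1, k2 = 4, 3, 2, 2
A1, A2 = cycle(n1), cycle(n2)             # C_4 and C_3, both 2-regular
W1 = mu1 * np.eye(n1) + eta1 * A1         # weighted G_1
W2 = mu2 * np.eye(n2) + eta2 * A2         # weighted G_2
AG = np.block([[W1, np.ones((n1, n2))],
               [np.ones((n2, n1)), W2]])  # adjacency matrix of the join
kappa1, kappa2 = mu1 + eta1 * k1, mu2 + eta2 * k2
delta = kappa1 - kappa2
Delta = np.sqrt(delta**2 + 4 * n1 * n2)
a, b = 0, 2                               # two vertices of G_1
for t in (0.9, 2.3):
    lhs = expm(-1j * t * AG)[b, a]
    rhs = expm(-1j * t * W1)[b, a] + (np.exp(-1j * t * kappa1) / n1) * (
        np.exp(1j * t * delta / 2)
        * (np.cos(Delta * t / 2) - 1j * (delta / Delta) * np.sin(Delta * t / 2))
        - 1)
    assert abs(lhs - rhs) < 1e-10
```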
For notational simplicity, let $K_{2}^{b}$ denote $K_{2}$ if $b=1$ and $\overline{K}_{2}$ if $b=0$. \\ \par\noindent{\em Remark}: The next corollary complements the observation made by Casaccino {\it et al.~} \cite{clms09} on $K_{2} + K_{m}$ where each vertex of $K_{2}$ has a weighted self-loop. They show that perfect state transfer occurs in this weighted graph in contrast to the unweighted version. \begin{corollary} \label{cor:weighted-double-cone} For any $k$-regular graph $G$ on $n$ vertices and any $b \in \{0,1\}$, there exist weights $\mu,\eta \in \mathbb{R}^{+}$ so that the double-cone $\widetilde{K}^{b}_{2}(\mu,\eta)+G$ has perfect state transfer between the two vertices of $\widetilde{K}^{b}_{2}$. \end{corollary} \Probf Consider the weighted double-cone $\widetilde{K}^{b}_{2}(\mu,\eta) + \widetilde{G}(0,1)$, where $\widetilde{G}(0,1)$ is simply the unweighted graph $G$. We know that $\widetilde{K}^{b}_{2}(\mu,\eta)$ has perfect state transfer for $b\eta t^{\star} = (2\mathbb{Z}+1)\pi/2$. Note that when $b=0$, the perfect state transfer time is $\infty$ or non-existent. Let $\delta = (\mu+b\eta)-k$ and $\Delta^{2} = \delta^{2} +8n$. By Theorem \ref{thm:weighted-binary-join}, it suffices to have \begin{equation} \cos\left(\frac{\delta}{2}t^{\star}\right)\cos\left(\frac{\Delta}{2}t^{\star}\right) = \cos\left(\frac{\delta}{4\eta}\pi\right) \cos\left(\frac{\Delta}{4\eta}\pi\right) = (-1)^{1-b}. \end{equation} So, we require that: \begin{enumerate} \item $A \stackrel{.}{=} \delta/4\eta$ be an integer; \item $B \stackrel{.}{=} \Delta/4\eta$ be an integer; and \item $\iverson{A \equiv B\pmod{2}} = b$; that is, $A$ and $B$ have the same parity if and only if $b=1$. \end{enumerate} From the first two conditions, we require that $\delta/\Delta$ be a rational number $p/q < 1$ with $\gcd(p,q)=1$.
Restating this condition in terms of $p$ and $q$ and simplifying, we get that \begin{equation} \delta \ = \ p \ \sqrt{\frac{8n}{q^{2}-p^{2}}}, \hspace{.5in} \Delta \ = \ q \ \sqrt{\frac{8n}{q^{2}-p^{2}}}. \end{equation} So, we may choose \begin{equation} \eta \ = \ \frac{1}{4}\sqrt{\frac{8n}{q^{2}-p^{2}}} \end{equation} so that both $\delta/4\eta$ and $\Delta/4\eta$ are integers. Therefore, we choose integers $p$ and $q$ satisfying $\gcd(p,q)=1$ and $\iverson{p \equiv q\pmod{2}} = b$; this will satisfy all three conditions stated above. Finally, we may choose $\mu = \delta + k - b\eta$ to complete the weight parameters. \qed\\ \subsection{Double half-cones} In this section, we consider graphs obtained by taking a half-join between $K_{2}$ and $G+G$, for some arbitrary $k$-regular graph $G$, where each vertex of $K_{2}$ is connected to only one copy of $G$ in the join $G+G$. When $G = \overline{K}_{n}$, this half-join is obtained by selecting two adjacent vertices in the complete bipartite graph $K_{n+1,n+1}$. In contrast to complete graphs, we show that weights are not helpful in complete bipartite graphs for achieving perfect state transfer. In fact, we prove a stronger result where perfect state transfer still does not exist even if weights are added to some of the other sets of edges. \begin{figure} \caption{Weighted half-join between $K_{2}$ and $G+G$.} \end{figure} \begin{theorem} \label{thm:half-join} Let $G$ be a $k$-regular graph on $n$ vertices. Let $\mathcal{G}(\mu,\eta;\kappa,\tau,\rho;\varepsilon)$ be a graph obtained from $\widetilde{K}_{2}(\mu,\eta)$ and $\widetilde{G}(\kappa,\tau) +_{\rho} \widetilde{G}(\kappa,\tau)$ by connecting each vertex of $\widetilde{K}_{2}(\mu,\eta)$ to exactly one copy of $\widetilde{G}(\kappa,\tau)$ in the weighted join $\widetilde{G}(\kappa,\tau) +_{\rho} \widetilde{G}(\kappa,\tau)$ and assigning a weight $\varepsilon$ to each of these connecting edges.
Then, there are no non-zero real-valued weights $\mu$, $\eta$, $\kappa$, $\tau$, $\rho$ or $\varepsilon$ for which $\mathcal{G}(\mu,\eta;\kappa,\tau,\rho;\varepsilon)$ has perfect state transfer between the two vertices of $\widetilde{K}_{2}(\mu,\eta)$. \end{theorem} \par\noindent{\em Remark}: Note that if $\varepsilon = 0$, then we have perfect state transfer in $\mathcal{G}$ trivially.\\ \Probf The adjacency matrix of $\mathcal{G}$ is given by \begin{equation} A_{\mathcal{G}} = \begin{bmatrix} \mu & \eta & \varepsilon \mathbf{1}_{n}^{T} & \mathbf{0}_{n}^{T} \\ \eta & \mu & \mathbf{0}_{n}^{T} & \varepsilon \mathbf{1}_{n}^{T} \\ \varepsilon \mathbf{1}_{n} & \mathbf{0}_{n} & \kappa I_{n} + \tau A_{G} & \rho J_{n} \\ \mathbf{0}_{n} & \varepsilon \mathbf{1}_{n} & \rho J_{n} & \kappa I_{n} + \tau A_{G} \end{bmatrix} \end{equation} where $\mathbf{0}_{n}$ and $\mathbf{1}_{n}$ denote the all-zero and all-one column vectors of dimension $n$, respectively. Let $\lambda_{j}$ and $\ket{u_{j}}$, for $0 \le j \le n-1$, be the eigenvalues and eigenvectors of $A_{G}$, so that $A_{G}\ket{u_{j}} = \lambda_{j}\ket{u_{j}}$, with $\ket{u_{0}}$ being the all-one eigenvector with $\lambda_{0} = k$. Then, the spectrum of $A_{\mathcal{G}}$ is given by the following sets: \begin{enumerate} \item The eigenvectors $\ket{0,0,0_{n},u_{j}}$ and $\ket{0,0,u_{j},0_{n}}$ both share the eigenvalue $\kappa + \tau\lambda_{j}$, for $1 \le j \le n-1$. \item Let \begin{equation} \alpha_{\pm} = \frac{1}{2\varepsilon}(\delta_{\alpha} \pm \Delta_{\alpha}), \end{equation} where $\delta_{\alpha} = (\mu+\eta)-(\kappa + \tau k + \rho n)$ and $\Delta_{\alpha}^{2} = \delta_{\alpha}^{2}+4\varepsilon^{2}n$. Then, the two eigenvectors \begin{equation} \ket{\alpha_{\pm}} = \frac{1}{\sqrt{L^{\alpha}_{\pm}}} \begin{bmatrix} \alpha_{\pm} & \alpha_{\pm} & 1_{n} & 1_{n} \end{bmatrix}^{T} \end{equation} have $\lambda_{\pm} = \varepsilon\alpha_{\pm}+(\kappa + \tau k + \rho n)$ as eigenvalues.
Here $L^{\alpha}_{\pm} = 2(\alpha_{\pm})^{2}+2n$ is the normalization constant. \item Let \begin{equation} \beta_{\pm} = \frac{1}{2\varepsilon}(\delta_{\beta} \pm \Delta_{\beta}), \end{equation} where $\delta_{\beta} = (\mu-\eta)-(\kappa + \tau k -\rho n)$ and $\Delta_{\beta}^{2} = \delta_{\beta}^{2}+4\varepsilon^{2}n$. Then, the two eigenvectors \begin{equation} \ket{\beta_{\pm}} = \frac{1}{\sqrt{L^{\beta}_{\pm}}} \begin{bmatrix} \beta_{\pm} & -\beta_{\pm} & 1_{n} & -1_{n} \end{bmatrix}^{T} \end{equation} have $\theta_{\pm} = \varepsilon\beta_{\pm}+(\kappa + \tau k - \rho n)$ as eigenvalues. Here $L^{\beta}_{\pm} = 2(\beta_{\pm})^{2}+2n$ is the normalization constant. \end{enumerate} The following identities can be verified easily: for $\xi \in \{\alpha,\beta\}$, we have \begin{eqnarray} L^{\xi}_{+}L^{\xi}_{-} & = & 4n\Delta^{2}_{\xi}/\varepsilon^{2} \\ \xi_{+}\xi_{-} & = & -n \\ \xi_{\pm}^{2}L^{\xi}_{\mp} & = & nL^{\xi}_{\pm} \end{eqnarray} Using these, the quantum walk on $\mathcal{G}$ starting at $a$ and ending at $b$ is given by: \begin{eqnarray} \bra{b}e^{-itA_{\mathcal{G}}}\ket{a} & = & \left\{\sum_{\pm} e^{-it\lambda_{\pm}}\frac{\alpha^{2}_{\pm}}{L^{\alpha}_{\pm}}\right\} - \left\{\sum_{\pm} e^{-it\theta_{\pm}}\frac{\beta^{2}_{\pm}}{L^{\beta}_{\pm}}\right\} \end{eqnarray} After simplifications, we obtain \begin{eqnarray} \bra{b}e^{-itA_{\mathcal{G}}}\ket{a} & = & \frac{e^{-i(\kappa + \tau k)t}}{2} e^{-i(\rho n)t}e^{-i\delta_{\alpha}t/2} \left[ \cos\left(\frac{\Delta_{\alpha}}{2}t\right) - i\frac{\delta_{\alpha}}{\Delta_{\alpha}}\sin\left(\frac{\Delta_{\alpha}}{2}t\right) \right] \\ & - & \frac{e^{-i(\kappa + \tau k)t}}{2} e^{i(\rho n)t}e^{-i\delta_{\beta}t/2} \left[ \cos\left(\frac{\Delta_{\beta}}{2}t\right) - i\frac{\delta_{\beta}}{\Delta_{\beta}}\sin\left(\frac{\Delta_{\beta}}{2}t\right) \right] \end{eqnarray} Ignoring the irrelevant phase factor $e^{-i(\kappa + \tau k)t}$ and noting that, since $|\delta_{\xi}/\Delta_{\xi}| < 1$, a unit-magnitude amplitude forces the sine terms to vanish, we get
\begin{eqnarray} \bra{b}e^{-itA_{\mathcal{G}}}\ket{a} & = & \frac{e^{-i(\rho n)t}}{2} \cos\left(\frac{\delta_{\alpha}}{2}t\right) \cos\left(\frac{\Delta_{\alpha}}{2}t\right) - \frac{e^{i(\rho n)t}}{2} \cos\left(\frac{\delta_{\beta}}{2}t\right) \cos\left(\frac{\Delta_{\beta}}{2}t\right) \end{eqnarray} We choose $t^{\star}$ so that $e^{-i(\rho n)t^{\star}} = 1$, which implies that $t^{\star} = 2\mathbb{Z}\pi/\rho n$. This simplifies the above expression to \begin{eqnarray} \bra{b}e^{-it^{\star} A_{\mathcal{G}}}\ket{a} & = & \frac{1}{2} \cos\left(\frac{\delta_{\alpha}}{2}t^{\star}\right) \cos\left(\frac{\Delta_{\alpha}}{2}t^{\star}\right) - \frac{1}{2} \cos\left(\frac{\delta_{\beta}}{2}t^{\star}\right) \cos\left(\frac{\Delta_{\beta}}{2}t^{\star}\right) \end{eqnarray} For simplicity, define \begin{eqnarray} Z_{\alpha} & = & \cos\left(\frac{\delta_{\alpha}}{2}t^{\star}\right) \cos\left(\frac{\Delta_{\alpha}}{2}t^{\star}\right) = \cos\left(\frac{\delta_{\alpha}}{\rho n}\pi\right) \cos\left(\frac{\Delta_{\alpha}}{\rho n}\pi\right) \\ Z_{\beta} & = & \cos\left(\frac{\delta_{\beta}}{2}t^{\star}\right) \cos\left(\frac{\Delta_{\beta}}{2}t^{\star}\right) = \cos\left(\frac{\delta_{\beta}}{\rho n}\pi\right) \cos\left(\frac{\Delta_{\beta}}{\rho n}\pi\right) \end{eqnarray} Let \begin{equation} \widetilde{P}_{\alpha} = \frac{\delta_{\alpha}}{\rho n}, \hspace{.25in} \widetilde{Q}_{\alpha} = \frac{\Delta_{\alpha}}{\rho n}, \hspace{.25in} \widetilde{P}_{\beta} = \frac{\delta_{\beta}}{\rho n}, \hspace{.25in} \widetilde{Q}_{\beta} = \frac{\Delta_{\beta}}{\rho n}. \end{equation} To achieve perfect state transfer, we require that $Z_{\alpha}Z_{\beta} = -1$. 
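The exact two-term expression for $\bra{b}e^{-itA_{\mathcal{G}}}\ket{a}$ obtained above (before any phase factors are discarded) can be checked numerically. The sketch below uses $G = C_{4}$ with arbitrary nonzero parameters and assumes NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import expm

def cycle(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

# arbitrary nonzero parameters (illustrative only)
mu, eta, kappa, tau, rho, eps = 0.3, 0.7, 0.2, 0.5, 0.4, 1.3
n, k = 4, 2
G = cycle(n)                                   # G = C_4, 2-regular
A = np.zeros((2 + 2 * n, 2 + 2 * n))
A[0, 0] = A[1, 1] = mu                         # self-loops on the K_2 vertices
A[0, 1] = A[1, 0] = eta                        # the K_2 edge
A[0, 2:2 + n] = A[2:2 + n, 0] = eps            # a -> first copy of G
A[1, 2 + n:] = A[2 + n:, 1] = eps              # b -> second copy of G
A[2:2 + n, 2:2 + n] = kappa * np.eye(n) + tau * G
A[2 + n:, 2 + n:] = kappa * np.eye(n) + tau * G
A[2:2 + n, 2 + n:] = A[2 + n:, 2:2 + n] = rho  # rho-join between the copies
s = kappa + tau * k
d_a = (mu + eta) - (s + rho * n)
d_b = (mu - eta) - (s - rho * n)
D_a = np.sqrt(d_a**2 + 4 * eps**2 * n)
D_b = np.sqrt(d_b**2 + 4 * eps**2 * n)
t = 0.9
lhs = expm(-1j * t * A)[1, 0]

def branch(d, D, sign):
    return (np.exp(-1j * sign * rho * n * t) * np.exp(-1j * d * t / 2)
            * (np.cos(D * t / 2) - 1j * (d / D) * np.sin(D * t / 2)))

rhs = 0.5 * np.exp(-1j * s * t) * (branch(d_a, D_a, +1) - branch(d_b, D_b, -1))
assert abs(lhs - rhs) < 1e-10
```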
For example, if we require $Z_{\alpha} = -1$ and $Z_{\beta} = 1$, then it suffices to impose the following {\em integrality} and {\em parity} conditions: \begin{eqnarray} \widetilde{P}_{\alpha}, \widetilde{Q}_{\alpha} & \in & \mathbb{Z}, \hspace{.5in} \widetilde{P}_{\alpha} \not\equiv \widetilde{Q}_{\alpha}\hspace{-0.1in}\pmod{2} \\ \widetilde{P}_{\beta}, \widetilde{Q}_{\beta} & \in & \mathbb{Z}, \hspace{.5in} \widetilde{P}_{\beta} \equiv \widetilde{Q}_{\beta}\hspace{-0.1in}\pmod{2}. \end{eqnarray} We will show that there is no $\rho$ which can satisfy all the above conditions. Suppose that, for $\xi \in \{\alpha, \beta\}$, we have \begin{equation} \frac{\delta_{\xi}}{\Delta_{\xi}} \ = \ \frac{p_{\xi}}{q_{\xi}} \ \in \ \mathbb{Q}, \end{equation} where $p_{\xi}$ and $q_{\xi}$ are integers with $\gcd(p_{\xi},q_{\xi})=1$; moreover, since $\Delta_{\xi}^{2} = \delta_{\xi}^{2} + 4\varepsilon^{2}n$, we get \begin{equation} \delta_{\xi} = p_{\xi} \sqrt{\frac{4\varepsilon^{2}n}{q_{\xi}^{2} - p_{\xi}^{2}}}, \hspace{.5in} \Delta_{\xi} = q_{\xi} \sqrt{\frac{4\varepsilon^{2}n}{q_{\xi}^{2} - p_{\xi}^{2}}}. \end{equation} \par\noindent Consider $\widetilde{P}_{\xi}$ and $\widetilde{Q}_{\xi}$, for $\xi \in \{\alpha,\beta\}$. Letting $\Lambda = 2\varepsilon/(\rho\sqrt{n})$, we have \begin{equation} \label{eqn:PQ} \widetilde{P}_{\xi} = p_{\xi} \frac{\Lambda}{\sqrt{q_{\xi}^{2}-p_{\xi}^{2}}}, \hspace{.5in} \widetilde{Q}_{\xi} = q_{\xi} \frac{\Lambda}{\sqrt{q_{\xi}^{2}-p_{\xi}^{2}}}. \end{equation} \par Since $\widetilde{P}_{\alpha}\equiv\widetilde{P}_{\alpha}^2\pmod{2}$, we know $\widetilde{P}_{\alpha}^{2}\not\equiv\widetilde{Q}_{\alpha}^{2}\pmod{2}$ is equivalent to $\widetilde{P}_{\alpha}\not\equiv\widetilde{Q}_{\alpha}\pmod{2}$. Likewise, $\widetilde{P}_{\beta}^{2}\equiv\widetilde{Q}_{\beta}^{2}\pmod{2}$ is equivalent to $\widetilde{P}_{\beta}\equiv\widetilde{Q}_{\beta}\pmod{2}$.
This changes (\ref{eqn:PQ}) to \begin{equation} \label{eqn:PQ-squared} \widetilde{P}_{\xi}^{2} = p_{\xi}^{2} \frac{\Lambda^{2}}{q_{\xi}^{2}-p_{\xi}^{2}}, \hspace{.5in} \widetilde{Q}_{\xi}^{2} = q_{\xi}^{2} \frac{\Lambda^{2}}{q_{\xi}^{2}-p_{\xi}^{2}}. \end{equation} Since we require that $\widetilde{P}_{\xi}$ and $\widetilde{Q}_{\xi}$ be integers, we have $(q_{\xi}^{2}-p_{\xi}^{2}) \ | \ q_{\xi}^{2}\Lambda^{2}$ and $(q_{\xi}^{2}-p_{\xi}^{2}) \ | \ p_{\xi}^{2}\Lambda^{2}$. However, $\gcd(p_{\xi},q_{\xi})=1$ implies that $\gcd(p_{\xi}^{2},q_{\xi}^{2})=1$. This gives us $(q_{\xi}^{2}-p_{\xi}^{2}) \ | \ \Lambda^{2}$. Suppose now that $p_{\beta}^{2}\equiv{q_{\beta}^{2}}\pmod{2}$. Then $q_{\beta}^{2}-p_{\beta}^{2}$ is even. This forces $\Lambda^{2}$ to be even. Similarly, suppose $p_{\beta}^{2}\not\equiv{q_{\beta}^{2}}\pmod{2}$. Then $q_{\beta}^{2}-p_{\beta}^{2}$ is odd. However, since $\widetilde{P}_{\beta}^{2}\equiv\widetilde{Q}_{\beta}^{2}\pmod{2}$ and one of $p_{\beta}^{2}$, $q_{\beta}^{2}$ is odd, $\Lambda^{2}$ must be even. In both cases, $\Lambda^{2}$ is even. Allowing $p_{\alpha}^{2}\equiv{q_{\alpha}^{2}}\pmod{2}$ guarantees $\widetilde{P}_{\alpha}^{2}\equiv\widetilde{Q}_{\alpha}^{2}\pmod{2}$. Letting $p_{\alpha}^{2}\not\equiv{q_{\alpha}^{2}}\pmod{2}$ makes $q_{\alpha}^{2}-p_{\alpha}^{2}$ odd. This again forces $\widetilde{P}_{\alpha}^{2}\equiv\widetilde{Q}_{\alpha}^{2}\pmod{2}$. Both instances contradict our requirement that $\widetilde{P}_{\alpha}^{2}\not\equiv\widetilde{Q}_{\alpha}^{2}\pmod{2}$. The case when we require that $Z_{\alpha} = 1$ and $Z_{\beta} = -1$, that is, where $\widetilde{P}_{\alpha}$ is even and $\widetilde{Q}_{\alpha}$, $\widetilde{P}_{\beta}$, $\widetilde{Q}_{\beta}$ are odd, may be treated similarly. \qed\\ \begin{corollary} For any $n \ge 2$, consider the complete bipartite graph $K_{n,n}$. Let $a$ and $b$ be two arbitrary adjacent vertices in $K_{n,n}$.
Then, there are no self-loop weights $\mu$ on $a$ and $b$ and edge weight $\eta$ on the edge $(a,b)$ for which there is perfect state transfer from vertex $a$ to vertex $b$ in this weighted version of $K_{n,n}$. \end{corollary} \Probf We apply Theorem \ref{thm:half-join} with $G=\overline{K}_{n-1}$ set to the empty graph on $n-1$ vertices, that is, $A_{G}$ is the all-zero matrix and hence $k = 0$. Also, we set $\varepsilon = 1$, $\kappa = 0$, and let $\tau$ be arbitrary. In the proof of Theorem \ref{thm:half-join}, setting $\kappa = 0$ does not affect perfect state transfer since the term $\kappa + k\tau$ may be ignored due to its contribution as a global phase factor. Setting $\varepsilon = 1$ does not affect perfect state transfer since it is ``factored out'' through $\Lambda$. Thus, these specific settings of values do not affect the conclusions of Theorem \ref{thm:half-join}. \qed \section{Hamming graphs} We show that weighted Hamming graphs exhibit perfect state transfer between any two of their vertices. First, we prove the following closure result on the Cartesian product of graphs. This is an adaptation of a similar theorem for the unweighted case (see \cite{anoprt09}). \begin{theorem} \label{thm:weighted-cartesian} For $j=1,\ldots,m$, the graph $G_{j}$ has perfect state transfer from $a_{j}$ to $b_{j}$ at time $t_{j}$ if and only if $\mathcal{G} = \bigoplus_{j=1}^{m} \widetilde{G}_{j}(\mu_{j},\eta_{j})$ has perfect state transfer from $(a_{1},\ldots,a_{m})$ to $(b_{1},\ldots,b_{m})$ at time $t^{\star}$, whenever $\eta_{j}t^{\star} = t_{j}$ for all $j$. This holds independently of the choice of the self-loop weights $\mu_{j}$. \end{theorem} \Probf We prove the claim for $m=2$. Suppose that the unweighted graph $G_{j}$ has perfect state transfer from $a_{j}$ to $b_{j}$ at time $t_{j}$. Consider the quantum walk on $\widetilde{G}_{1}(\mu_{1},\eta_{1}) \oplus \widetilde{G}_{2}(\mu_{2},\eta_{2})$.
For shorthand, we denote each graph simply as $\widetilde{G}_{j}$: \begin{eqnarray} \bra{b_{1},b_{2}} e^{-itA_{\widetilde{G}_{1} \oplus \widetilde{G}_{2}}}\ket{a_{1},a_{2}} & = & \bra{b_{1}}\bra{b_{2}} e^{-it(I \otimes A_{\widetilde{G}_{2}})}e^{-it(A_{\widetilde{G}_{1}} \otimes I)} \ket{a_{1}}\ket{a_{2}} \\ & = & \bra{b_{1}}\bra{b_{2}} (I \otimes e^{-it A_{\widetilde{G}_{2}}}) (e^{-it A_{\widetilde{G}_{1}}} \otimes I) \ket{a_{1}}\ket{a_{2}} \\ & = & \bra{b_{1}}e^{-it A_{\widetilde{G}_{1}}}\ket{a_{1}} \bra{b_{2}}e^{-it A_{\widetilde{G}_{2}}}\ket{a_{2}}. \end{eqnarray} Since $A_{\widetilde{G}(\mu,\eta)} = \mu I + \eta A_{G}$, we have \begin{equation} \bra{b}e^{-it A_{\widetilde{G}}}\ket{a} = e^{-i\mu t}\bra{b}e^{-i\eta t A_{G}}\ket{a}. \end{equation} Therefore, the quantum walk on the weighted Cartesian product yields \begin{eqnarray} \bra{b_{1},b_{2}} e^{-itA_{\widetilde{G}_{1} \oplus \widetilde{G}_{2}}}\ket{a_{1},a_{2}} & = & e^{-i(\mu_{1}+\mu_{2})t}\bra{b_{1}}e^{-i\eta_{1}t A_{G_{1}}}\ket{a_{1}} \bra{b_{2}}e^{-i\eta_{2}t A_{G_{2}}}\ket{a_{2}}. \end{eqnarray} This shows that $\widetilde{G}_{1} \oplus \widetilde{G}_{2}$ has perfect state transfer from $(a_{1},a_{2})$ to $(b_{1},b_{2})$ at time $t$ if and only if $G_{1}$ has perfect state transfer from $a_{1}$ to $b_{1}$ at time $\eta_{1}t$ {\em and} $G_{2}$ has perfect state transfer from $a_{2}$ to $b_{2}$ at time $\eta_{2}t$. So, if the weights $\eta_{j}$ satisfy $\eta_{j}t^{\star} = t_{j}$, for all $j$, then $\widetilde{G}_{1} \oplus \widetilde{G}_{2}$ has perfect state transfer at time $t^{\star}$. The general claim follows by induction. \qed\\ \begin{figure} \caption{Hamming graphs: (a) $H(2,3)$ (b) $H(3,2)$. Perfect state transfer occurs between any pair of vertices with the help of weighted self-loops and edges.} \end{figure} \begin{theorem} \label{thm:weighted-hamming} The class $\widetilde{H}(q,n)$ of weighted Hamming graphs has universal perfect state transfer at an arbitrarily chosen time. 
\end{theorem} \Probf Recall that $H(q,n) = K_{q}^{\oplus n}$. Let $a=(a_{1},\ldots,a_{n})$ and $b=(b_{1},\ldots,b_{n})$ be two vertices of $\widetilde{H}(q,n)$. By Corollary \ref{cor:weighted-double-cone}, we know that $\widetilde{K}_{q}$ has perfect state transfer between any two of its vertices for a suitable choice of weights. For each dimension $j \in \{1,\ldots,n\}$, fix a set of weights so that $\widetilde{K}_{q}^{(j)}$ has perfect state transfer from $a_{j}$ to $b_{j}$. Then, by Theorem \ref{thm:weighted-cartesian}, $\bigoplus_{j=1}^{n} \widetilde{K}_{q}^{(j)}$ has perfect state transfer from $a$ to $b$. \qed\\ \subsection{Hypercubes} In this section, we show that a weighted hypercube has the universal perfect state transfer property. In fact, we prove a stronger statement as given in the next theorem. But first, we need to define a particular notion of uniform superposition over the $n$-cube. \begin{fact} \label{fact:hypercube} {\em (Moore-Russell \cite{mr02}, Bernasconi {\it et al.~} \cite{bgs08})}\\ The following facts are known about a quantum walk on the hypercube $Q_{n}$ at times $t \in \{\pi/4,\pi/2\}$, up to a global phase factor: \begin{equation} \bra{b}e^{-itQ_{n}}\ket{a} = \left\{\begin{array}{ll} (-i)^{|a \oplus b|}/\sqrt{2^{n}} & \mbox{ if $t = \pi/4$ } \\ \iverson{a \oplus b = 1_{n}} & \mbox{ if $t = \pi/2$ } \end{array}\right. \end{equation} \end{fact} \par\noindent We say that a superposition $\ket{\varrho_{n}}$ over $Q_{n}$ is in {\em normal form} if \begin{equation} \ket{\varrho_{n}} = \frac{1}{\sqrt{2^{n}}} \sum_{a \in \{0,1\}^{n}} (-i)^{|a|}\ket{a}. \end{equation} Note that $\ket{\varrho_{n}}$ is the uniform superposition of a quantum walk on $Q_{n}$ from $0_{n}$ at time $\pi/4$; that is, $\ket{\varrho_{n}} = \exp(-i(\pi/4) Q_{n})\ket{0_{n}}$.
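Fact \ref{fact:hypercube} can be verified numerically by building $Q_{n}$ as a Kronecker sum; this check (assuming NumPy/SciPy) is illustrative only:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

X, I = np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2)

def hypercube(n):
    # adjacency matrix of Q_n as a Kronecker sum of Pauli-X factors
    return sum(reduce(np.kron, [X if j == i else I for j in range(n)])
               for i in range(n))

n = 3
U = expm(-1j * (np.pi / 4) * hypercube(n))
for a in range(2**n):  # amplitude (-i)^{|a xor 0|}/sqrt(2^n) at t = pi/4
    assert abs(U[a, 0] - (-1j)**bin(a).count("1") / np.sqrt(2**n)) < 1e-10
V = expm(-1j * (np.pi / 2) * hypercube(n))
# at t = pi/2 all amplitude sits on the antipodal vertex, up to a phase
assert abs(abs(V[2**n - 1, 0]) - 1) < 1e-10
```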
\begin{theorem} \label{thm:weighted-cube} For any $n \ge 1$, given any two distinct subcubes $B_{1}$ and $B_{2}$ of $Q_{n}$, there is a set of edge weights $w$ so that $Q_{n}^{w}$ has perfect state transfer between uniform superpositions in normal form on $B_{1}$ and $B_{2}$. \end{theorem} \Probf First, we show that the hypercube $Q_{n}$ has perfect state transfer from any vertex to any subcube. Since $Q_{n}$ is vertex-transitive, it suffices to show perfect state transfer from vertex $0_{n}$ to the subcube $B = (1_{k}0_{\ell}\star_{m})$, where $m = n-k-\ell$. Define the adjacency matrix of $\widetilde{Q}_{n}$ as \begin{equation} \widetilde{Q}_{n} = Q_{k} \otimes I_{2^{n-k}} + \mbox{\small $\frac{1}{2}$} I_{2^{k+\ell}} \otimes Q_{m}, \end{equation} which is a sum of two commuting matrices. Then, letting $t^{\star} = \pi/2$, we have \begin{equation} \bra{1_{k}0_{\ell}}\bra{\varrho_{m}}\exp\left(-it^{\star}\widetilde{Q}_{n}\right)\ket{0_{k}0_{\ell}0_{m}} = \bra{1_{k}0_{\ell}}\bra{\varrho_{m}} \exp\left(-i\frac{t^{\star}}{2} I_{2^{k+\ell}} \otimes Q_{m}\right) \ket{1_{k}0_{\ell}0_{m}}. \end{equation} The equality (up to a global phase) and the fact that the last expression has unit magnitude follow from Fact \ref{fact:hypercube}. To show perfect state transfer between two arbitrary subcubes, note that we just showed that $\ket{B} = e^{-it^{\star}\widetilde{Q}_{n}}\ket{0_{n}}$. Thus, we also have $\ket{0_{n}} = e^{-it^{\star}(-\widetilde{Q}_{n})}\ket{B}$. This proves the claim. \qed\\ \par\noindent We recover the result of Bernasconi {\it et al.~} \cite{bgs08}, which we restate in the next corollary, via the use of explicit edge weights on the hypercube. \begin{corollary} For any $n \ge 1$, given any two distinct vertices $a$ and $b$ of the hypercube $Q_{n}$, there is a set of edge weights $w$ so that $Q_{n}^{w}$ has perfect state transfer from $a$ to $b$ at time $t^{\star} = \pi/2$.
\end{corollary} \par\noindent{\em Remark}: We note that Bernasconi {\it et al.~} \cite{bgs08} proved universal perfect state transfer for the $n$-cube by {\em dynamically} changing the underlying hypercubic structure of the graph. In contrast, our scheme is based on using {\em static} weights which can be interpreted dynamically with time. In both schemes, it is possible to route information through a Hamiltonian path which visits each vertex exactly once. We believe that this Hamiltonian property might be of interest in further applications of perfect state transfer. \section{Conclusion} We studied perfect state transfer on quantum networks represented by weighted graphs. Our goal was to understand the role of weights in achieving perfect state transfer in graphs. First, we proved a join theorem for weighted regular graphs and derived, as a corollary, that a weighted double-cone on any regular graph has perfect state transfer. This implies as a corollary a result of Casaccino {\it et al.~} \cite{clms09} where the regular graph is a complete graph. In contrast, we also showed that weights do not help in achieving perfect state transfer in complete bipartite graphs. This is obtained as part of a more general result on graphs constructed from a half-join of $K_{2}$ and $G+G$, for an arbitrary regular graph $G$. We found it curious that the full join connection seemed crucial for weights to have a positive effect in achieving perfect state transfer. We leave the case of complete multipartite graphs and strongly regular graphs as an open question. Second, we observed that Hamming graphs have the universal perfect state transfer property. This is a stronger requirement than the standard perfect state transfer property: perfect state transfer must occur between any pair of vertices. Prior to this work, the only known family of graphs with universal perfect state transfer was the (unweighted) hypercubic graphs \cite{bgs08}.
We proved our result on the Hamming graphs by showing a closure result for a weighted Cartesian product of perfect state transfer graphs, even when the graph components have different perfect state transfer times. The unweighted version of this closure result, as shown in \cite{anoprt09}, requires a global common perfect state transfer time for all graphs in the Cartesian product. For the hypercubes, we showed a stronger universal perfect state transfer property, where perfect state transfer occurs between uniform superpositions of two arbitrary subcubes. We imposed a mild condition on the uniform superpositions which exhibit perfect state transfer. We remark that if zero weights are allowed, then universal perfect state transfer is trivial. Simply take any path connecting the two vertices and assign the hypercubic weights to the edges on the path (as in Christandl {\it et al.~} \cite{cdel04}) and zero weights to all other edges. If zero weights are not allowed, then we conjecture that near-perfect state transfer is possible by assigning weights that tend to zero (for the edges which require zero weights). \begin{figure} \caption{Existence of universal near-perfect state transfer on any weighted graph. (a) $Q_{n}$.} \end{figure} \section*{Acknowledgments} This research was supported in part by the National Science Foundation grant DMS-0646847 and also by the National Security Agency grant H98230-09-1-0098. \end{document}
\begin{document} \baselineskip16pt \begin{abstract} The aim of this article is to prove an \lq\lq almost\rq\rq \, global existence result for some semilinear wave equations in the plane outside a bounded convex obstacle with the Neumann boundary condition. \end{abstract} \maketitle \thispagestyle{empty} \section{Introduction} Let ${\mathcal O}$ be an open bounded convex domain with smooth boundary in ${\mathbb R}^2$ and put $\Omega:={\mathbb R}^2 \setminus \overline{\mathcal O}$. Let $\partial_\nu$ denote the outer normal derivative on $\partial \Omega$. We consider the mixed problem for semilinear wave equations in $\Omega$ with the Neumann boundary condition: \begin{equation}\label{eq.PMN} \begin{array}{ll} (\partial_t^2-\Delta) u =G(\partial_t u, \nabla_x u), & (t,x) \in (0,\infty)\times \Omega,\\ \partial_\nu u(t,x)=0, &(t,x) \in (0,\infty)\times \partial\Omega,\\ u(0,x)=\phi(x), &x\in \Omega,\\ \partial_t u(0,x)=\psi(x), & x\in \Omega,\\ \end{array} \end{equation} where $\phi$ and $\psi$ are ${\mathcal C}^\infty$-functions compactly supported in $\overline\Omega$, and $G: \mathbb R^3\to \mathbb R$ is a nonlinear function. We will study the case of the cubic nonlinearity with small initial data and obtain an estimate from below for the lifespan of the solution in terms of the size of the initial data. Here by the expression \lq\lq small initial data'' we mean that there exist $m\in \mathbb N$, $s\in \mathbb R$ and a small number $\varepsilon>0$ such that \begin{equation*} \|\phi\|_{H^{m+1,s}(\Omega)}+\|\psi\|_{H^{m,s}(\Omega)}\le \varepsilon, \end{equation*} where the weighted Sobolev space $H^{m,s}(\Omega)$ is endowed with the norm \begin{equation}\label{eq.datanorm} \|\varphi\|_{H^{m,s}(\Omega)}^2:=\sum_{|\alpha|\le m}\int_{\Omega} (1+|x|^2)^s |\partial_x^\alpha \varphi(x)|^2 d x.
\end{equation}
A large number of works has been devoted to the study of the mixed problem for nonlinear wave equations in an exterior domain $\Omega \subset \mathbb R^n$ for $n\ge 3$, mostly with the Dirichlet boundary condition. To our knowledge, very few results deal with the global existence or the lifespan estimate for exterior mixed problems of nonlinear wave equations in $2$D: in \cite{SSW} the global existence for the case of the Dirichlet boundary condition and nonlinear terms depending only on $u$ is considered; in \cite{KP} one of the authors obtained an almost global existence result for small initial data under the assumptions that $|G(\partial u)|\simeq (\partial u)^3$, the obstacle is star-shaped and the boundary condition is of Dirichlet type (see Remark~\ref{Rem1.4} below for details). Here we will treat the problem with the Neumann boundary condition in $2$D and obtain a result analogous to \cite{KP}. However, because we have a weaker decay property for the solution to the Neumann exterior problem for linear wave equations in $2$D (see Secchi and Shibata \cite{SeSh03}), we will obtain a slightly worse lifespan estimate than in the Dirichlet case. For simplicity, we assume that the nonlinear function $G$ in \eqref{eq.PMN} is a homogeneous polynomial of cubic order. Equivalently, writing $\partial u=(\partial_t u, \nabla_x u)$, this means that
\begin{equation}\label{eq.semiG} G(\partial u)=\sum_{0\le \alpha\le \beta\le \gamma\le 2} g_{\alpha,\beta,\gamma}(\partial_\alpha u)(\partial_\beta u)(\partial_\gamma u) \end{equation}
with $g_{\alpha,\beta,\gamma}\in \mathbb R$ and $(\partial_0,\partial_1,\partial_2):=(\partial_t, \partial_{x_1},\partial_{x_2})$. As usual, to consider smooth solutions to the mixed problem, we need some compatibility conditions (see \cite{KaKu08}).
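As a quick combinatorial sanity check of \eqref{eq.semiG} (illustrative only, not part of the paper): the ordered index set $0\le\alpha\le\beta\le\gamma\le 2$ carries exactly one coefficient $g_{\alpha,\beta,\gamma}$ per cubic monomial in the three components of $\partial u$. A short Python sketch:

```python
from itertools import combinations_with_replacement

# Index set {(alpha, beta, gamma) : 0 <= alpha <= beta <= gamma <= 2}
# of the coefficients g_{alpha,beta,gamma} in the cubic nonlinearity G.
idx = [(a, b, g) for a in range(3) for b in range(3) for g in range(3)
       if a <= b <= g]

# One coefficient per degree-3 monomial in the 3 components of du:
# the ordered triples are exactly the size-3 multisets drawn from {0,1,2}.
assert len(idx) == 10
assert set(idx) == set(combinations_with_replacement(range(3), 3))
```

Thus a general homogeneous cubic polynomial in $(\partial_0 u,\partial_1 u,\partial_2 u)$ has ten independent coefficients, which is the content of \eqref{eq.semiG}.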
Note that, for a nonnegative integer $k$ and a smooth function $u=u(t,x)$ on $[0,T)\times \Omega$, we have
\begin{equation} \label{CC0} \partial_t^k\left(G(\partial u)\right)=G^{(k)}[u, \partial_t u, \ldots, \partial_t^{k+1}u], \end{equation}
where for $\mathcal C^1$ functions $(p_0, p_1, \ldots, p_{k+1})$ we put
\begin{align*} G^{(k)}[p_0, p_1, \ldots, p_{k+1}]=&\sum_{k_1+k_2+k_3=k} g_{0, 0, 0} p_{k_1+1}p_{k_2+1}p_{k_3+1}+ \sum_{k_1+k_2+k_3=k} \sum_{\gamma=1}^2 g_{0,0,\gamma}p_{k_1+1}p_{k_2+1}(\partial_\gamma p_{k_3})\\ &{}+\sum_{k_1+k_2+k_3=k} \sum_{1\le \beta\le \gamma\le 2} g_{0,\beta,\gamma} p_{k_1+1}(\partial_\beta p_{k_2})(\partial_\gamma p_{k_3})\\ &{}+\sum_{k_1+k_2+k_3=k} \sum_{1\le \alpha\le \beta\le \gamma\le 2} g_{\alpha,\beta,\gamma} (\partial_\alpha p_{k_1})(\partial_\beta p_{k_2})(\partial_\gamma p_{k_3}). \end{align*}
\begin{definition}\label{CCN} To the mixed problem \eqref{eq.PMN} we can associate the recurrence sequence $\{v_j\}_{j\in\mathbb N^*}$ with $v_j:\overline{\Omega}\to \mathbb R$ such that
$$ \begin{array}{l} v_0= \phi,\\ v_1= \psi,\\ v_j=\Delta v_{j-2}+G^{(j-2)}[v_0, v_1, \ldots, v_{j-1}], \quad j\ge 2, \end{array} $$
where $\mathbb N^*$ denotes the set of nonnegative integers and $G^{(k)}$ is defined as above {\rm(}cf.~\eqref{CC0}{\rm)}. We say that $(\phi, \psi, G)$ satisfies the compatibility condition of infinite order in $\Omega$ for \eqref{eq.PMN} if $\phi,\psi\in {\mathcal C}^\infty(\overline{\Omega})$, and one has
$$ \partial_\nu v_j(x)=0, \quad x\in \partial \Omega $$
for all $j\in \mathbb N^*$. \end{definition} Our aim is to prove the following result. \begin{theorem}\label{thm.mainsemi} Let ${\mathcal O}$ be a convex obstacle.
Consider the semilinear mixed problem \eqref{eq.PMN} with given compactly supported initial data $(\phi,\psi)\in \mathcal C^\infty(\overline{\Omega})\times {\mathcal C}^\infty(\overline{\Omega})$ and a given nonlinear term $G(\partial u)$ which is a homogeneous polynomial of cubic order as in \eqref{eq.semiG}. Assume that $(\phi, \psi, G)$ satisfies the compatibility condition of infinite order in $\Omega$ for \eqref{eq.PMN}. Under these assumptions, there exist $\varepsilon_0>0$, $m\in \mathbb N$ and $s\in \mathbb R$ such that, if $\varepsilon \in (0,\varepsilon_0]$ and
\begin{equation}\label{eq.smalldata.semi} \|\phi\|_{H^{m+1,s}(\Omega)}+\|\psi\|_{H^{m,s}(\Omega)} \le \varepsilon, \end{equation}
then the mixed problem \eqref{eq.PMN} admits a unique solution $u \in {\mathcal C}^\infty([0,T_\varepsilon)\times \Omega)$ with
\begin{equation}\label{eq.lifespanT1.semi} T_\varepsilon \ge \exp(C \varepsilon^{-1}), \end{equation}
where $C>0$ is a suitable constant which is uniform with respect to $\varepsilon\in (0,\varepsilon_0]$. \end{theorem}
\begin{rem} \normalfont The only point where we require the obstacle ${\mathcal O}$ to be convex is to gain the local energy decay (see Lemma~\ref{LocalEnergyDecay} below). In general, one can treat any obstacle for which Lemma~\ref{LocalEnergyDecay} holds. Unfortunately, for Neumann problems in $2$D, to our knowledge it is not known whether there exist non-convex obstacles satisfying such a local energy decay. \end{rem}
\begin{rem} \normalfont One can ask whether it is possible to obtain a global existence result while maintaining our assumption on the growth of $G$. In general the answer to this question is negative, since blow-up in finite time occurs for $G=(\partial_t u)^3$ when $n=2$. Indeed, it was proved in \cite{God93} that for any $R>0$ we can find initial data such that the blow-up for the corresponding Cauchy problem occurs in the region $|x|>t+R$.
This result shows the blow-up for the exterior problem with any boundary condition if we choose sufficiently large $R$, because the solution in $|x|>t+R$ is not affected by the obstacle and the boundary condition, thanks to the finite propagation property (see \cite{KaKu12} for the corresponding discussion in $3$D). In order to look for global solutions, one could investigate the exterior problem with a suitable nonlinearity satisfying the so-called {\it null condition}. \end{rem}
\begin{rem}\label{Rem1.4} \normalfont If we consider the Cauchy problem in $\mathbb R^2$, or the Dirichlet problem in a domain exterior to a star-shaped obstacle in $2$D, an analogous result to Theorem~$\ref{thm.mainsemi}$ holds with
\begin{equation} \label{sharplife} T_\varepsilon \ge \exp (C\varepsilon^{-2}), \end{equation}
and this lifespan estimate is known to be sharp (see \cite{God93} for the Cauchy problem and \cite{KP} for the Dirichlet problem). The loss of one logarithmic factor in the decay estimates causes this difference between the lifespan estimates \eqref{eq.lifespanT1.semi} and \eqref{sharplife} (see Theorem~\ref{main} and Remark~\ref{Rem83} below). It is an interesting problem whether our lower bound \eqref{eq.lifespanT1.semi} is sharp for the Neumann problem. \end{rem}
\section{Preliminaries} In this section we introduce some notation which will be used throughout this paper, as well as some basic lemmas for the proof of Theorem \ref{thm.mainsemi}. Throughout the paper we shall assume $0\in {\mathcal O}$, so that we have $|x|\ge c_0$ for $x\in \Omega$ for some positive constant $c_0$. We shall also assume that $\overline{\mathcal O}\subset B_1$, where $B_r$ stands for the open ball with radius $r$ centered at the origin of ${\mathbb R}^2$. Thus a function $v=v(x)$ on $\Omega$ vanishing for $|x|\le 1$ can be naturally regarded as a function on $\mathbb R^2$. \subsection{Notation} Let us start with some standard notation.
\begin{itemize}
\item We put $\langle y\rangle :=\sqrt{1+|y|^2}$ for $y\in \mathbb R^d$ with $d\in \mathbb N$.
\item Let $A=A(y)$ and $B=B(y)$ be two positive functions of some variable $y$, such as $y=(t,x)$ or $y=x$, on suitable domains. We write $A\lesssim B$ if there exists a positive constant $C$ such that $A(y)\le C B(y)$ for all $y$ in the intersection of the domains of $A$ and $B$.
\item The $L^2(\Omega)$ norm is denoted by $\|\cdot\|_{L^2_{\Omega}}$, while the norm $\| \cdot \|_{L^2}$ without any other index stands for $\|\cdot \|_{L^2(\mathbb R^2)}$. Similar notation will be used for the $L^\infty$ norms.
\item For a time-space dependent function $u$ satisfying $u(t,\cdot)\in X$ for $0\le t< T$ with a Banach space $X$, we put $\|u\|_{L^\infty_TX}:=\sup_{0\le t< T}\|u(t,\cdot)\|_X$. For brevity, we sometimes use the expression $\|h(s,y)\|_{L^\infty_tL^\infty_\Omega}$ with dummy variables $(s,y)$ for a function $h$ on $[0,t)\times \Omega$, which means $\sup_{0\le s<t}\|h(s, \cdot)\|_{L^\infty_\Omega}$.
\item For $m\in \mathbb N$ and $s\in \mathbb R$, by $H^{m,s}(\Omega)$ we denote the weighted Sobolev space with norm defined by \eqref{eq.datanorm}. Moreover $H^m(\Omega)$ and $H^m(\mathbb R^2)$ are the standard Sobolev spaces.
\item We denote by $\mathcal C_0^\infty(\overline\Omega)$ the set of smooth functions defined on $\overline{\Omega}$ which vanish outside $B_R$ for some $R>1$.
\end{itemize}
Let $\nu\in \mathbb R$. We put
\begin{equation*} w_\nu(t,x)=\langle x \rangle^{-1/2} \langle t-|x|\rangle^{-\nu}+\langle t+|x|\rangle^{-1/2}\langle t-|x|\rangle^{-1/2}. \end{equation*}
This weight function $w_\nu$ will be used repeatedly in the {\it a priori} estimates of the solution $u$ to \eqref{eq.PMN}. We shall often use the following inequality:
\begin{equation}\label{eq.ome} w_\nu(t,x)\lesssim \langle t+|x|\rangle^{-1/2}(\min\{\langle x\rangle, \langle t-|x| \rangle\})^{-1/2}, \quad \nu \ge 1/2.
\end{equation}
For $\nu$, $\kappa>0$ we put
$$ W_{\nu,\kappa}(t,x)= \langle t+|x|\rangle^\nu \left(\min\{\langle x\rangle, \langle t-|x|\rangle\}\right)^\kappa. $$
Finally, for $a\ge 1$ we set
$$ \Omega_a=\Omega \cap B_a. $$
Since $\overline{\mathcal O}\subset B_1$, we see that $\Omega_a\not= \emptyset$ for any $a\ge 1$. \subsection{Vector fields associated with the wave operator} We introduce the vector fields:
$$ \Gamma_0:=\partial_0=\partial_t, \quad \Gamma_1:=\partial_1=\partial_{x_1},\quad \Gamma_2:=\partial_2=\partial_{x_2}, \quad \Gamma_3:=\Lambda:=x_1 \partial_2-x_2\partial_1. $$
Denoting $[A,B]:=AB-BA$, we have
\begin{equation}\label{eq.commute} [\Gamma_i, \partial_{t}^2-\Delta]=0, \quad i=0,\dots,3, \end{equation}
and also
$$ \begin{array}{ll} [\Gamma_i,\Gamma_j]=0, & i,j=0,1,2,\cr [\Gamma_0,\Gamma_3]=0, &\ \cr [\Gamma_1,\Gamma_3]=\Gamma_2, &\cr [\Gamma_2,\Gamma_3]=-\Gamma_1. \end{array} $$
Hence, for $i,j=0,1,2,3$, we have $[\Gamma_i, \Gamma_j]=\sum_{k=0}^3 c_{ij}^k\,\Gamma_k$ with suitable constants $c_{ij}^k$. Moreover, for $i=0,1,2$ and $j=0,1,2,3$ we also have $[\partial_i, \Gamma_j]=\sum_{k=1}^2 d_{ij}^k \partial_k$ with suitable constants $d_{ij}^k$. We put $\partial=(\partial_0,\partial_1, \partial_2)$, $\partial_x=(\partial_1, \partial_2)$, $\Gamma=(\Gamma_0, \Gamma_1, \Gamma_2, \Gamma_3)=(\partial, \Lambda)$ and $\widetilde{\Gamma}=(\Gamma_1, \Gamma_2, \Gamma_3)=(\partial_x, \Lambda)= (\nabla_x, \Lambda)$. The standard multi-index notation will be used for these sets of vector fields, such as $\partial^\alpha=\partial_0^{\alpha_0}\partial_1^{\alpha_1}\partial_2^{\alpha_2}$ with $\alpha=(\alpha_0,\alpha_1,\alpha_2)$ and $\Gamma^\gamma=\Gamma_0^{\gamma_0} \cdots \Gamma_3^{\gamma_3}$ with $\gamma=(\gamma_0, \dots, \gamma_3)$.
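The commutation relations above can be checked mechanically. A minimal SymPy sketch (illustrative only; the symbol `u` stands for an arbitrary smooth function):

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')
u = sp.Function('u')(t, x1, x2)

def Lam(f):
    # rotation field Lambda = x1*d_2 - x2*d_1
    return x1*sp.diff(f, x2) - x2*sp.diff(f, x1)

def Box(f):
    # wave operator d_t^2 - Delta in two space dimensions
    return sp.diff(f, t, 2) - sp.diff(f, x1, 2) - sp.diff(f, x2, 2)

# [Lambda, Box]u = 0: Gamma_3 commutes with the wave operator
assert sp.simplify(Lam(Box(u)) - Box(Lam(u))) == 0
# [d_1, Lambda]u = d_2 u  and  [d_2, Lambda]u = -d_1 u
assert sp.simplify(sp.diff(Lam(u), x1) - Lam(sp.diff(u, x1)) - sp.diff(u, x2)) == 0
assert sp.simplify(sp.diff(Lam(u), x2) - Lam(sp.diff(u, x2)) + sp.diff(u, x1)) == 0
```

The translations $\Gamma_0,\Gamma_1,\Gamma_2$ commute with $\partial_t^2-\Delta$ trivially, since the wave operator has constant coefficients.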
For $\rho\ge 0$, $k\in \mathbb N$ and functions $v_0=v_0(x)$ and $v_1=v_1(x)$, we put
\begin{eqnarray*} \mathcal A_{\rho, k}[v_0,v_1]:= \sum_{|\gamma|\le k}\big( \|\langle \cdot \rangle^\rho \widetilde{\Gamma}^\gamma v_0\|_{L^\infty_\Omega} +\|\langle \cdot \rangle^\rho\widetilde{\Gamma}^\gamma \nabla_x v_0\|_{L^\infty_\Omega} +\|\langle \cdot \rangle^\rho\widetilde{\Gamma}^\gamma v_1\|_{L^\infty_\Omega} \big);\\ \mathcal B_{\rho, k}[v_0,v_1]:= \sum_{|\gamma|\le k}\big( \|\langle \cdot \rangle^\rho \widetilde{\Gamma}^\gamma v_0\|_{L^\infty} +\|\langle \cdot \rangle^\rho\widetilde{\Gamma}^\gamma \nabla_x v_0\|_{L^\infty} +\|\langle \cdot \rangle^\rho\widetilde{\Gamma}^\gamma v_1\|_{L^\infty} \big). \end{eqnarray*}
These quantities will be used to control the influence of the initial data on the $L^\infty$ norms of the solution. Using the vector fields in $\widetilde{\Gamma}$, we obtain the following Sobolev-type inequality. \begin{lemma}\label{KlainermanSobolev}\ Let $v \in C_0^2(\overline{\Omega})$. Then we have
\begin{eqnarray*} \sup_{x \in \Omega} |x|^{1/2}|v(x)| \lesssim \sum_{\substack{|\alpha|+\beta\le 2 \\ \beta \ne 2}} \|\partial_x^\alpha \Lambda^\beta v\|_{L^2(\Omega)}. \end{eqnarray*}
\end{lemma} \noindent{\it Proof.}\ It is well known that for $w \in C_0^2(\mathbb R^2)$ we have
\begin{equation} |x|^{1/2}|w(x)| \lesssim \sum_{\substack{|\alpha|+\beta \le 2\\ \beta\not=2}}\|\partial_x^\alpha \Lambda^\beta w\|_{L^2(\mathbb R^2)}, \quad x\in \mathbb R^2 \label{KlainermanIneq} \end{equation}
(see Klainerman \cite{kl0} for the proof). Let $\chi=\chi(x)$ be a nonnegative smooth function satisfying $\chi(x)\equiv 0$ for $|x|\le 1$ and $\chi(x)\equiv 1$ for $|x|\ge 2$.
If we rewrite $v$ as $v=\chi v+(1-\chi) v$, then we have $\chi v\in C^2_0(\mathbb R^2)$ and \eqref{KlainermanIneq} leads to
$$ \sup_{x \in \Omega} |x|^{1/2}|v(x)|\lesssim \sum_{\substack{|\alpha|+\beta \le 2\\ \beta\ne 2}} \|\partial_x^\alpha \Lambda^\beta (\chi v)\|_{L^2(\mathbb R^2)}+\|(1-\chi)v\|_{L^\infty(\Omega)}. $$
By using the Sobolev embedding to estimate the last term, we arrive at
$$ \sup_{x \in \Omega} |x|^{1/2}|v(x)|\lesssim \sum_{\substack{|\alpha|+\beta \le 2\\ \beta\not=2}} \|\partial_x^\alpha \Lambda^\beta v\|_{L^2(\Omega)} +\sum_{|\alpha| \le 2} \|\partial_x^\alpha v\|_{L^2(\Omega)}. $$
This completes the proof. $\qed$ \subsection{Elliptic estimates} The following elliptic estimates will be used in the energy estimates. \begin{lemma}\label{elliptic2}\ Let $R>1$, let $m$ be an integer with $m \ge 2$, and let $v \in H^m(\Omega)$ be such that $\partial_\nu v=0$ on $\partial \Omega$. Then we have
\begin{eqnarray}\label{eq.ap1} \|\partial^\alpha_x v\|_{L^2(\Omega)} \lesssim \|\Delta v\|_{H^{|\alpha|-2}(\Omega)}+\|v\|_{H^{|\alpha|-1}(\Omega_{R+1})} \end{eqnarray}
for $2\le |\alpha| \le m$. \end{lemma}
\begin{proof} Let $\chi$ be a $C^\infty_0({\mathbb R}^2)$ function such that $\chi(x) \equiv 1$ for $|x|\le R$ and $\chi(x)\equiv 0$ for $|x|\ge R+1$. We set $v_1=\chi v$ and $v_2=(1-\chi) v$, so that $v=v_1+v_2$. If we put $h=\Delta v_1$, the function $v_1$ solves the elliptic problem
\begin{equation*} \left\{ \begin{array}{ll} \Delta v_1= h & \text{ on } \Omega_{R+1}, \\ \partial_{\nu} v_1=0 & \text{ on } \partial \Omega, \\ v_1=0 & \text{ on } \partial B_{R+1}. \end{array} \right. \end{equation*}
From Theorem 15.2 of \cite{ADN}, we have
\begin{equation}\label{eq.ADN} \|v_1\|_{H^{l}(\Omega_{R+1})}\lesssim \|h\|_{H^{l-2}(\Omega_{R+1})}+\|v_1\|_{L^{2}(\Omega_{R+1})}= \|\Delta v_1 \|_{H^{l-2}(\Omega_{R+1})}+\|v_1\|_{L^{2}(\Omega_{R+1})} \end{equation}
for $l\ge 2$.
Hence
\begin{eqnarray*} \|\partial^\alpha_x v_1\|_{L^2(\Omega)} & \lesssim & \|\Delta v\|_{H^{|\alpha|-2}(\Omega_{R+1})} +\|\nabla v\|_{H^{|\alpha|-2}(\Omega_{R+1})}+\|v\|_{H^{|\alpha|-2}(\Omega_{R+1})}\\ & \lesssim & \|\Delta v\|_{H^{|\alpha|-2}(\Omega_{R+1})} +\|v\|_{H^{|\alpha|-1}(\Omega_{R+1})}. \end{eqnarray*}
Now we consider $v_2$. Note that $v_2$ can be regarded as a function on $\mathbb R^2$ and we can write $\|\partial^\alpha_x v_2\|_{L^2(\Omega)}=\| \partial^\alpha_x v_2\|_{L^2(\mathbb R^2)}$. Let us recall that $\|\partial^\beta_x w\|_{L^2({\mathbb R}^n)}\lesssim \|\Delta w\|_{L^2({\mathbb R}^n)}$ for any $w \in H^2({\mathbb R}^n)$ and $|\beta|=2$. Writing $\alpha=\beta+\gamma$ with $|\beta|=2$ and $|\gamma|=|\alpha|-2$, we have
\begin{eqnarray*}\nonumber \|\partial^\alpha_x v_2\|_{L^2(\Omega)}&\lesssim & \|\Delta \partial^\gamma_x v_2\|_{L^2(\mathbb R^2)} \lesssim \|\Delta v_2\|_{ H^{|\alpha|-2}(\mathbb R^2)}\\ &\lesssim &\|\Delta v\|_{ H^{|\alpha|-2}(\Omega)}+\|v\|_{ H^{|\alpha|-1}(\Omega_{R+1})}. \end{eqnarray*}
Combining this inequality with the estimate for $v_1$, we find \eqref{eq.ap1}. \end{proof}
\subsection{Decay estimates for the linear wave equation with the Neumann boundary condition}\label{LinearNeumann} Given $T>0$, we consider the mixed problem
\begin{equation}\label{eq.PMfT} \begin{array}{ll} (\partial_t^2-\Delta) u =f, & (t,x) \in(0,T)\times \Omega,\\ \partial_\nu u(t,x)=0, & (t,x) \in(0,T)\times \partial\Omega,\\ u(0,x)=u_0(x), & x\in \Omega,\\ (\partial_t u)(0,x)=u_1(x), & x\in \Omega.
\end{array} \end{equation}
It is known that for $u_0\in H^2(\Omega)$, $u_1\in H^1(\Omega)$ and $f\in {\mathcal C}^1\bigl([0,T); L^2(\Omega)\bigr)$, the mixed problem \eqref{eq.PMfT} admits a unique solution
$$ u\in \bigcap_{j=0}^2 {\mathcal C}^j\bigl([0,T); H^{2-j}(\Omega)\bigr), $$
provided that $(u_0, u_1, f)$ satisfies the compatibility condition of order $0$, that is to say,
\begin{equation} \label{CCo0} \partial_\nu u_0(x)=0,\quad x\in \partial \Omega \end{equation}
(see \cite{I68} for instance). Under these assumptions, for $\vec{u}_0:=(u_0,u_1)$, the solution $u$ of \eqref{eq.PMfT} will be denoted by $S[\vec{u}_0,f](t,x)$. We write $K[\vec{u}_0](t,x)$ for the solution of \eqref{eq.PMfT} with $f\equiv 0$ and $L[f](t,x)$ for the solution of \eqref{eq.PMfT} with $\vec{u}_0\equiv (0,0)$; in other words we put
$$ K[\vec{u}_0](t,x):=S[\vec{u}_0, 0](t,x), \quad L[f](t,x):= S[(0,0),f](t,x), $$
so that we get
$$ S[\vec{u}_0,f](t,x)=K[\vec{u}_0](t,x)+L[f](t,x), $$
where $K[\vec{u}_0]$ and $L[f]$ are well defined because both $(u_0, u_1, 0)$ and $(0,0,f)$ satisfy the compatibility condition of order $0$. In order to obtain a smooth solution to \eqref{eq.PMfT}, we need the compatibility condition of infinite order. \begin{definition}\label{def.complinear} Suppose that $u_0$, $u_1$ and $f$ are smooth. Define $u_j$ for $j\ge 2$ inductively by
$$ u_j(x)=\Delta u_{j-2}(x)+(\partial_t^{j-2}f)(0,x),\quad j\ge 2. $$
We say that $(u_0, u_1, f)$ satisfies the compatibility condition of infinite order in $\Omega$ for \eqref{eq.PMfT} if one has
\begin{equation*} \partial_\nu u_j=0 \quad\text{on}\quad \partial\Omega \end{equation*}
for every nonnegative integer $j$.
\end{definition}
We say that $(u_0, u_1, f)\in X(T)$ if the following three conditions are satisfied:
\begin{itemize}
\item $(u_0,u_1)\in \mathcal C_0^{\infty}(\overline{\Omega}) \times \mathcal C_0^{\infty}(\overline{\Omega})$,
\item $f\in C^{\infty}([0,T) \times \overline{\Omega})$ and, moreover, $f(t,\cdot)\in \mathcal C_0^{\infty}(\overline{\Omega})$ for any $t\in [0, T)$,
\item $(u_0, u_1, f)$ satisfies the compatibility condition of infinite order.
\end{itemize}
It is known that if $(u_0, u_1, f)\in X(T)$, then we have $S[\vec{u}_0,f]\in {\mathcal C}^\infty\bigl([0,T)\times \overline{\Omega}\bigr)$ (see \cite{I68} for instance). The following decay estimates play important roles in our proof of the main theorem. \begin{theorem}\ \label{main} Let ${\mathcal O}$ be a convex set and let $k$ be a nonnegative integer. Suppose that $\Xi=(\vec{u}_0,f)=({u}_0, {u}_1,f) \in X(T)$.
\noindent {\rm (i)}\ Let $\mu>0$. Then we have
\begin{eqnarray}\label{ba3} \sum_{|\delta|\le k} |\Gamma^\delta S[\Xi](t,x)| \lesssim {\mathcal A}_{2+\mu,3+k} [\vec{u}_0] + \log(e+t)\sum_{|\delta|\le 3+k}\| |y|^{1/2}W_{1,1+\mu}(s,y)\Gamma^\delta f(s,y)\|_{L^\infty_{t}L^\infty_\Omega} \end{eqnarray}
for $(t,x)\in [0,T) \times {\overline{\Omega}}$.
\noindent {\rm (ii)}\ Let $0<\eta<1/2$ and $\mu>0$.
Then we have
\begin{eqnarray} && w_{(1/2)-\eta}^{-1}(t,x) \sum_{|\delta|\le k} |\Gamma^\delta \partial S[\Xi](t,x)| \nonumber\\ && \quad \lesssim \mathcal A_{2+\mu,k+4}[\vec{u}_0]+ \log^2(e+t+|x|) \sum_{|\delta|\le k+4}\||y|^{1/2}W_{1,1}(s,y)\Gamma^\delta f(s,y)\|_{L^\infty_{t}L^\infty_\Omega}, \label{ba4weak} \\ && w_{1/2}^{-1}(t,x) \sum_{|\delta|\le k} |\Gamma^\delta \partial S[\Xi](t,x)| \nonumber\\ && \quad \lesssim \mathcal A_{2+\mu,k+4}[\vec{u}_0]+ \log^2(e+t+|x|) \sum_{|\delta|\le k+4}\||y|^{1/2}W_{1,1+\mu}(s,y)\Gamma^\delta f(s,y)\|_{L^\infty_{t}L^\infty_\Omega} \label{ba4} \end{eqnarray}
for $(t,x)\in [0,T)\times {\overline{\Omega}}$.
\noindent {\rm (iii)}\ Let $0<\eta<1$ and $\mu>0$. Then we have
\begin{eqnarray} && w_{1-\eta}^{-1}(t,x) \sum_{|\delta|\le k} |\Gamma^\delta \partial\partial_t S[\Xi](t,x)| \nonumber\\ && \quad \lesssim {\mathcal A}_{2+\mu,k+5}[\vec{u}_0]+ \log^{2}(e+t+|x|)\sum_{|\delta|\le k+5}\||y|^{1/2}W_{1,1}(s,y) \Gamma^\delta f(s,y)\|_{L^\infty_{t}L^\infty_\Omega} \label{ba4t} \end{eqnarray}
for $(t,x)\in [0,T)\times {\overline{\Omega}}$. \end{theorem}
We will prove Theorem~\ref{main} in Section~\ref{sec.pointwise} below, by using the so-called cut-off method to combine the corresponding decay estimates for the Cauchy problem with the local energy decay.
\section{The abstract argument for the proof of the main theorem}\label{AAPMT} Since the local existence of smooth solutions for the mixed problem \eqref{eq.PMN} has been shown in \cite{SN89} (see also the Appendix), what we need to do to show the large-time existence of the solution is to derive suitable {\it a priori} estimates: following \cite{SN89}, we need to control $\|u(t)\|_{H^9(\Omega)}+\|\partial_t u(t)\|_{H^8(\Omega)}$ for the solution $u$. Let $u$ be the local solution of \eqref{eq.PMN}, assuming that \eqref{eq.smalldata.semi} holds for sufficiently large $m\in \mathbb N$ and $s>0$.
Let $T^*$ be the supremum of the set of $T$ such that \eqref{eq.PMN} admits a (unique) classical solution on $[0,T)\times \overline{\Omega}$. For $0<T\le T^*$, a small $\eta>0$, and nonnegative integers $H$ and $K$ we define
\begin{align*} {\mathcal E}_{H,K}(T)\equiv & \sum_{|\gamma|\le H-1} \|w_{1/2}^{-1} \Gamma^\gamma \partial u\|_{L^\infty_TL^\infty_\Omega} +\sum_{1\le j+ |\alpha|\le K}\|\partial_t^j \partial_x^\alpha u\|_{L^\infty_T L^2_{\Omega}} \\ & +\sum_{|\delta|\leq K-2} \| \jb{s}^{-1/2} \Gamma^\delta \partial u(s,y)\|_{L^\infty_T L^2_{\Omega}} +\sum_{|\delta|\leq K-8} \| \jb{s}^{-(1/4)-\eta} \Gamma^\delta \partial u(s,y)\|_{L^\infty_T L^2_{\Omega}} \\ & +\sum_{|\delta|\leq K-14} \| \jb{s}^{-2\eta} \Gamma^\delta \partial u(s,y)\|_{L^\infty_T L^2_{\Omega}} +\sum_{|\delta|\leq K-20} \| \Gamma^\delta \partial u\|_{L^\infty_T L^2_{\Omega}}. \end{align*}
We neglect the first sum when $H=0$; similarly, we neglect the summations taken over the empty set as $K$ varies. We also put
$$ {\mathcal E}_{H,K}(0)=\lim_{T\to 0^+}{\mathcal E}_{H,K}(T). $$
Observe that ${\mathcal E}_{H,K}(0)$ is determined only by $\phi$, $\psi$ and $G$, and that we have
$$ {\mathcal E}_{H,K}(0)\lesssim \|\phi\|_{H^{m+1,s}(\Omega)} +\|\psi\|_{H^{m,s}(\Omega)} $$
for suitably large $m \in\mathbb N$ and $s>0$ depending on $H$ and $K$. From \eqref{eq.smalldata.semi} for such $m\in \mathbb N$ and $s>0$, we see that ${\mathcal E}_{H,K}(0)$ is finite. The previous inequality can be obtained by combining the embedding $H^r(\Omega)\hookrightarrow L^\infty(\Omega)$ for $r>1$ with the trivial inequality $|\Gamma_3 f|\le \langle x\rangle|\partial_1 f|+\langle x\rangle|\partial_2 f|$ and the equivalence between $\sum_{|\alpha|\le m}\|\langle \cdot \rangle^s \partial_x^\alpha f\|_{L^2_\Omega}$ and $\|\langle \cdot \rangle^s f\|_{H^m(\Omega)}$.
In order to optimize $m$ or $s$, it is possible to use the sharpest embedding theorems in weighted Sobolev spaces, proved for example in \cite{GeLu04}. Our goal is to show the following claim.
\begin{claim}\label{claim} \normalfont We can take suitable $H$ and $K$ and sufficiently large $m$ and $s$ so that there exist positive numbers $C_1$, $P$ and $Q$ and a strictly increasing continuous function ${\mathcal R}:[0,\infty)\to[0,\infty)$ with ${\mathcal R}(0)=0$ such that, if ${\mathcal E}_{H,K}(T) \le 1$, then
\begin{equation}\label{eq.energy} {\mathcal E}_{H,K}(T)\le C_1\varepsilon + {\mathcal R}\left({\mathcal E}_{H,K}^{P}(T)\log^{Q}(e+T)\right) (\varepsilon+{\mathcal E}_{H,K}(T)), \end{equation}
provided that \eqref{eq.smalldata.semi} holds with $\varepsilon\le 1$. Here $C_1$, $P$, $Q$ and ${\mathcal R}$ are independent of $\varepsilon$ and $T$. \end{claim}
Let us explain how the lifespan estimate follows from \eqref{eq.energy}. Suppose that the above claim is true. If we assume \eqref{eq.smalldata.semi} for some $m$ and $s$ which are sufficiently large, then, as we have mentioned, there exists $C_*>0$ such that ${\mathcal E}_{H,K}(0)< 2C_*\varepsilon$. We may assume $C_*\ge \max\{C_1,1\}$. We set $\varepsilon_0=\min \{(2C_*)^{-1}, 1\}$ and suppose that $0<\varepsilon\le \varepsilon_0$, so that we have $\varepsilon\le 1$ and $2C_*\varepsilon\le 1$. We put
$$ T_*(\varepsilon):=\sup \left\{T\in [0, T^*):\, {\mathcal E}_{H,K}(T)\le 2C_*\varepsilon\right\}. $$
In particular, for any $T\le T_*(\varepsilon)$, we have ${\mathcal E}_{H,K}(T)\le 1$. From \eqref{eq.energy} with $T=T_*(\varepsilon)$, we get
\begin{equation*} {\mathcal E}_{H,K}\left(T_*(\varepsilon)\right)\le C_*\varepsilon + {\mathcal R}\left((2C_* \varepsilon)^{P} \log^{Q}\left(e+T_*(\varepsilon)\right)\right) (3C_* \varepsilon).
\end{equation*}
We are going to prove
\begin{equation} \label{eq.lifespan00} {\mathcal R}\left((2C_* \varepsilon)^{P} \log^{Q}\left(e+T^*\right) \right) >\frac{1}{4} \end{equation}
by contradiction. Suppose that $T^*$ satisfies
\begin{equation} \label{eq.lifespan} {\mathcal R}\left((2C_* \varepsilon)^{P} \log^{Q}\left(e+T^*\right) \right) \le \frac{1}{4}. \end{equation}
Since $T_*(\varepsilon)\le T^*$ and $\mathcal R$ is an increasing function, we obtain
\begin{equation*} {\mathcal E}_{H,K}\left(T_*(\varepsilon)\right) \le \frac{7}{4} C_* \varepsilon <2C_* \varepsilon. \end{equation*}
Therefore we get $T_*(\varepsilon)=T^*$, because otherwise the continuity of ${\mathcal E}_{H,K}(T)$ would imply that there exists $\widetilde{T}>T_*(\varepsilon)$ satisfying ${\mathcal E}_{H,K}(\widetilde{T})\le 2C_*\varepsilon$, which contradicts the definition of $T_*(\varepsilon)$. However, if $T_*(\varepsilon)=T^*$ and $H$, $K$ are sufficiently large, we can prove
\begin{align} \|u\|_{L^\infty_{T^*}H^9(\Omega)}+\|\partial_tu\|_{L^\infty_{T^*}H^8(\Omega)} & \lesssim \varepsilon + (1+T^*) {\mathcal E}_{H,K}(T^*) \label{eq.stima98}\\ &=\varepsilon + (1+T_*(\varepsilon)) {\mathcal E}_{H,K}\left(T_*(\varepsilon)\right) \lesssim \varepsilon + (1+T_*(\varepsilon)) 2C_*\varepsilon, \nonumber \end{align}
and we can extend the solution beyond the time $T^*$ by the local existence theorem, which contradicts the definition of $T^*$. Therefore \eqref{eq.lifespan} is not true, and we obtain \eqref{eq.lifespan00}. This means that there exists $\tilde{C}>0$ such that, for any $\varepsilon \le \varepsilon_0$,
\begin{equation}\label{eq.lifePQ} T^*>\exp\{\tilde C\varepsilon^{-P/Q}\}. \end{equation}
It remains to show \eqref{eq.stima98}. It is evident that
$$ \|u\|_{L^\infty_{T^*}H^9(\Omega)}+\|\partial_tu\|_{L^\infty_{T^*}H^8(\Omega)}\lesssim \|u\|_{L^\infty_{T^*}L^2_\Omega}+{\mathcal E}_{0,9}(T^*).
$$
In order to estimate $\|u\|_{L^\infty_{T^*}L^2_\Omega}$ we will use the expression
$$ u(t,x)=u(0,x) +\int_0^t \partial_t u(\tau, x)\, d\tau, $$
which leads to
$$ \|u\|_{L^\infty_{T^*}L^2_\Omega}\lesssim \varepsilon +T^*{\mathcal E}_{0,1}(T^*). $$
In conclusion, we obtain \eqref{eq.lifespanT1.semi} once we can show that Claim~\ref{claim} is true with $P=Q=1$. This will be done in the next three sections.
\section{Energy estimates for the standard derivatives} In this section we are going to estimate $\|\partial_t^j\partial_x^\alpha u\|_{L^\infty_TL^2_\Omega}$ for $j+|\alpha|\ge 1$. In the first subsection, we consider the case where $|\alpha|\le 1$. This can be done directly through the standard energy inequalities. In the second subsection, the case where $j\ge 1$ and $|\alpha|\ge 2$ will be treated with the help of the elliptic estimate, Lemma~\ref{elliptic2}. In the third subsection, we consider the case where $j=0$ and $|\alpha|\ge 2$. Lemma~\ref{elliptic2} will be used again, but this time we need the estimate of $\|u\|_{L^\infty_TL^2(\Omega_{R+1})}$ for some $R>0$, which is not included in the definition of ${\mathcal E}_{H,K}(T)$. Since we are considering the $2$D Neumann problem, it seems difficult to use an embedding theorem to estimate $\|u\|_{L^\infty_TL^2(\Omega_{R+1})}$ by $\|\nabla_x u \|_{L^\infty_TH^k(\Omega)}$ for some positive integer $k$. Instead, we will employ the $L^\infty$ estimate, Theorem~\ref{main}, for this purpose. \subsection{On the energy estimates for the derivatives in time} First we set
\begin{equation*} E(v;t)=\frac12 \int_{\Omega} \{|\partial_t v(t,x)|^2+|\nabla_x v(t,x)|^2\}\, dx \end{equation*}
for a smooth function $v=v(t,x)$. Let $j$ be a nonnegative integer. Since $\partial_t$ commutes with the restriction of the function to $\partial \Omega$, we have $\partial_\nu \partial_t^j u(t,x)=0$ for all $(t,x)\in (0,T) \times \partial \Omega$.
Therefore, by the standard energy method, we find
\begin{equation*} \frac{d}{dt} E(\partial_t^j u;t)= \int_{\Omega} \partial_t^j( G(\partial u))(t,x)\,\partial_t^{j+1} u(t,x)\, dx. \end{equation*}
Recalling the definition of ${\mathcal E}_{H,K}(T)$, for $j+|\alpha|\ge 1$ we have
\begin{equation}\label{eq.nu} |\partial^j_t\partial_x^\alpha u(t,x)|\le w_{1/2}(t,x)\, {\mathcal E}_{j+|\alpha|, 0}(T), \quad x\in \Omega,\ t\in [0,T). \end{equation}
Applying \eqref{eq.nu} and the Leibniz rule, we find
$$ \frac{d}{dt} E(\partial_t^j u;t)\lesssim \|w_{1/2}(t)\|_{L^\infty_\Omega}^2{\mathcal E}_{[j/2]+1,0}^2(T)\sum_{h=0}^{j} \int_{\Omega} |\partial_t^h \partial u(t,x)|\,|\partial_t^{j+1} u(t,x)|\, dx. $$
It is also clear that if $j+|\alpha| \ge 1$, one has
$$ \|\partial_t^j\partial_x^\alpha u(t)\|_{L^2_{\Omega}} \le {\mathcal E}_{0,j+|\alpha|}(T), \quad t\in [0,T). $$
This gives
$$ \frac{d}{dt} E(\partial_t^j u;t)\lesssim \|w_{1/2}(t)\|_{L^\infty_\Omega}^2 {\mathcal E}^2_{[j/2]+1,0}(T) {\mathcal E}^2_{0,j+1}(T). $$
Since ${\mathcal E}_{H,K}(T)$ is increasing in $H$ and $K$, we get
$$ \frac{d}{dt} E(\partial_t^j u;t)\lesssim \|w_{1/2}(t)\|_{L^\infty_\Omega}^2 {\mathcal E}^4_{[j/2]+1,j+1}(T). $$
As a trivial consequence of \eqref{eq.ome}, we find $w_{1/2}(t,x)\lesssim \langle t\rangle^{-1/2}$, so that
\begin{equation*} \frac{d}{dt} E(\partial_t^j u;t)\lesssim \langle t \rangle^{-1}{\mathcal E}^4_{[j/2]+1,j+1}(T). \end{equation*}
After integration in time, this gives
\begin{eqnarray}\label{eq.ap9} \sum_{l=0}^{j} \|\partial_t^{l+1} u(t)\|_{L^2_\Omega}+ \sum_{l=0}^{j} \|\partial_t^{l} \nabla_x u(t)\|_{L^2_\Omega} \lesssim {\mathcal E}_{j+1}(0)+{\mathcal E}^2_{j+1}(T)\log^{1/2}(e+t) \end{eqnarray}
for any $j\ge 0$ and $t \in [0,T)$, where
$$ {\mathcal E}_s(T)={\mathcal E}_{[(s-1)/2]+1,s}(T) $$
for any integer $s\ge 0$.
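Two elementary ingredients behind \eqref{eq.ap9} can be checked mechanically (an illustrative sketch, not part of the proof): the pointwise identity $\partial_t\bigl(\tfrac12((\partial_t v)^2+(\partial_x v)^2)\bigr)=\partial_t v\,(\partial_t^2 v-\partial_x^2 v)+\partial_x(\partial_t v\,\partial_x v)$ underlying the energy method (written in one space variable for brevity; the divergence term integrates to zero by the Neumann condition), and the bound $\int_0^t\langle s\rangle^{-1}\,ds=\operatorname{arcsinh} t\le 2\log(e+t)$, which produces the $\log^{1/2}$ factor after taking square roots:

```python
import math
import sympy as sp

t, x = sp.symbols('t x')
v = sp.Function('v')(t, x)

# d/dt of the energy density = (d_t v)*(wave operator applied to v)
# plus a spatial divergence that vanishes upon integration (Neumann condition).
e = (sp.diff(v, t)**2 + sp.diff(v, x)**2) / 2
flux = sp.diff(v, t) * sp.diff(v, x)
assert sp.simplify(sp.diff(e, t)
                   - sp.diff(v, t)*(sp.diff(v, t, 2) - sp.diff(v, x, 2))
                   - sp.diff(flux, x)) == 0

# int_0^t <s>^{-1} ds = arcsinh(t) <= 2 log(e + t): the logarithmic growth in (eq.ap9)
for T in [0.0, 1.0, 10.0, 1e3, 1e6]:
    assert math.asinh(T) <= 2*math.log(math.e + T)
```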
\subsection{On the energy estimates for the space-time derivatives}\label{sec.32} Since the spatial derivatives do not preserve the Neumann boundary condition, we need to use elliptic regularity results. We shall show that for $j\ge 1$ and $k\ge 0$ it holds that
\begin{eqnarray}\label{eq.ap11} \sum_{|\alpha|=k} \|\partial_t^{j} \partial_x^\alpha u(t)\|_{L^2_\Omega} \lesssim {\mathcal E}_{j+k}(0)+{\mathcal E}^2_{j+k}(T)\log^{1/2}(e+T)+{\mathcal E}^3_{j+k-1}(T) \end{eqnarray}
with ${\mathcal E}_s(T)={\mathcal E}_{[(s-1)/2]+1, s}(T)$ as before. It is clear that \eqref{eq.ap11} follows from \eqref{eq.ap9} when $j\ge 1$ and $k=0,1$. Next we suppose that \eqref{eq.ap11} holds for $j\ge 1$ and $k\le l$ with some positive integer $l$. Let $|\alpha|=l+1$ and $j \ge 1$. Since $|\alpha|\ge 2$, we apply to $\partial_t^{j} u$ the elliptic estimate (Lemma~\ref{elliptic2}) and we obtain
$$ \|\partial_x^\alpha \partial_t^j u(t)\|_{L^2_\Omega} \lesssim \|\Delta \partial_t^{j} u(t)\|_{H^{l-1}(\Omega)} +\|\partial_t^{j} u(t)\|_{H^{l}(\Omega)}. $$
By \eqref{eq.ap11} for $k\le l$, we see that the second term has the desired bound. On the other hand, using the fact that $u$ is a solution to \eqref{eq.PMN}, for the first term we have
\begin{eqnarray}\nonumber \|\Delta \partial_t^{j} u(t)\|_{H^{l-1}(\Omega)}\lesssim \|\partial_t^{j+2} u(t)\|_{H^{l-1}(\Omega)} + \|\partial_t^{j} (G(\partial u))(t)\|_{H^{l-1}(\Omega)}. \end{eqnarray}
Since $(j+2)+(l-1)=j+l+1$, it follows from \eqref{eq.ap11} for $k=l-1$ with $j$ replaced by $j+2$ that
$$ \|\partial_t^{j+2} u(t)\|_{H^{l-1}(\Omega)}\lesssim {\mathcal E}_{j+l+1}(0)+{\mathcal E}^2_{j+l+1}(T)\log^{1/2}(e+T)+{\mathcal E}^3_{j+l}(T), $$
which is the desired bound.
Finally, observing that $w_{1/2}(t,x)\le 1$, we get \begin{align*} \|\partial^j_t G(\partial u)(t) \|_{H^{l-1}(\Omega)}\lesssim \sum_{1\le |\beta|\le [(j+l-1)/2]+1}\|\partial^\beta u(t)\|_{L^\infty_\Omega}^2 \sum_{1\le |\gamma|\le j+l}\| \partial^\gamma u(t)\|_{L^2_\Omega} \lesssim {\mathcal E}^3_{j+l}(T). \end{align*} Combining these estimates, we obtain \eqref{eq.ap11} for $j\ge 1$ and $k=l+1$. This completes the proof of \eqref{eq.ap11} for $j\ge 1$ and $k\ge 0$. \subsection{On the energy estimates for the space derivatives}\label{sec.spaceder} Our aim here is to estimate $\|\partial_x^\alpha u\|_{L^\infty_TL^2_{\Omega}}$ for $|\alpha|=k\ge 1$. The estimate for $k=1$ is included in \eqref{eq.ap9}. Let us consider the case $|\alpha|=k\ge 2$ and fix $R>1$. The elliptic estimate \eqref{eq.ap1} gives \begin{eqnarray*} \sum_{|\alpha|=k} \|\partial_x^\alpha u\|_{L^\infty_T L^2_{\Omega}} &\lesssim & \|\Delta u\|_{L^\infty_T H^{k-2}(\Omega)}+\|u\|_{L^\infty_TH^{k-1}(\Omega_{R+1})}\\ &\lesssim & \|\partial_{t}^2 u\|_{L^\infty_T H^{k-2}(\Omega)}+\|G(\partial u)\|_{L^\infty_T H^{k-2}(\Omega)}+ \|u\|_{L^\infty_T H^{k-1}(\Omega_{R+1})}. \end{eqnarray*} The first term can be estimated by \eqref{eq.ap11}, and we get $$ \|\partial_t^2 u\|_{L^\infty_T H^{k-2}(\Omega)} \lesssim {\mathcal E}_{k}(0)+{\mathcal E}^2_{k}(T)\log^{1/2}(e+T)+{\mathcal E}^3_{k-1}(T). $$ For the second term, we obtain the following inequality as before: $$ \|G(\partial u)\|_{L^\infty_T H^{k-2}(\Omega)} \lesssim {\mathcal E}_{k-1}^3(T). $$ As for the third term, we get \begin{align} \|u\|_{L^\infty_TH^{k-1}(\Omega_{R+1})}\lesssim & \sum_{1\le |\beta|\le k-1}\|\partial_x^\beta u\|_{L^\infty_TL^2(\Omega_{R+1})} +\|u\|_{L^\infty_TL^2(\Omega_{R+1})} \nonumber\\ \lesssim & \sum_{1\le |\beta|\le k-1}\|\partial_x^\beta u\|_{L^\infty_TL^2_\Omega} +\|u\|_{L^\infty_TL^\infty(\Omega_{R+1})}.
\nonumber \end{align} Now we fix $\mu\in (0, 1/2)$ and use \eqref{ba3} with $k=0$ to obtain \begin{align} \|u\|_{L^\infty_TL^\infty_\Omega} \lesssim & {\mathcal A}_{2+\mu, 3}[\phi, \psi] +\log(e+T) \sum_{|\delta|\le 3}\left\|\jb{y}^{1/2}W_{1,1+\mu}(s,y) \Gamma^\delta \left(G(\partial u)\right)(s,y)\right\|_{L^\infty_TL^\infty_\Omega}. \label{eq.infty11a} \end{align} By using \eqref{eq.ome}, for any $s\in [0,T)$ we have $$ \sum_{|\delta|\le 3} |\Gamma^{\delta}G(\partial u)( s,y)| \lesssim \langle s+|y|\rangle^{-3/2}\left(\min\{ \langle y \rangle, \langle |y|- s \rangle\}\right)^{-3/2} {\mathcal E}^3_{4,0}(T). $$ This implies $$ \sum_{|\delta|\le 3}\left\| |y|^{1/2} W_{1,1+\mu}(s,y) \Gamma^\delta \left(G(\partial u)\right)(s,y) \right\|_{L^\infty_T L^\infty_{\Omega}} \lesssim {\mathcal E}^3_{4,0}(T), $$ and \eqref{eq.infty11a} gives \begin{equation} \|u\|_{L^\infty_TL^\infty_\Omega} \lesssim \mathcal A_{2+\mu, 3}[\phi, \psi]+{\mathcal E}^3_{4,0}(T)\log (e+T). \label{eq.inftyOK} \end{equation} Summing up the estimates above, for $|\alpha|=k\ge 2$ we get \begin{align*} \sum_{|\alpha|=k} \|\partial_x^\alpha u\|_{L^\infty_TL^2_\Omega} \lesssim &{\mathcal A}_{2+\mu,3}[\phi, \psi] {}+{\mathcal E}_{k}(0)+{\mathcal E}_{k}^2(T) \log^{1/2}(e+T) {}+{\mathcal E}_{k-1}^3(T)+{\mathcal E}_{4,0}^3(T)\log(e+T)\\ & {}+\sum_{1\le |\alpha|\le k-1} \|\partial_x^\alpha u\|_{L^\infty_TL^2_\Omega}. \end{align*} Finally, we inductively obtain $$ \sum_{|\alpha|=k} \|\partial_x^\alpha u\|_{L^\infty_TL^2_\Omega} \lesssim {\mathcal A}_{2+\mu,3}[\phi, \psi] {}+{\mathcal E}_{k}(0)+{\mathcal E}_{k}^2(T) \log^{1/2}(e+T) {}+{\mathcal E}_{k-1}^3(T)+{\mathcal E}_{4,0}^3(T)\log(e+T) $$ for $k\ge 1$.
\subsection{Conclusion for the energy estimates of the standard derivatives} If $m$ and $s$ are sufficiently large, \eqref{eq.smalldata.semi} and the Sobolev embedding theorem lead to $$ {\mathcal A}_{2+\mu,3}[\phi,\psi]+{\mathcal E}_{K}(0) \lesssim \|\phi\|_{H^{m+1,s}(\Omega)}+\|\psi\|_{H^{m,s}(\Omega)} \lesssim \varepsilon. $$ Summing up the estimates in this section, we get \begin{equation} \sum_{1\le j+|\alpha|\le K} \|\partial_t^j \partial_x^\alpha u\|_{L^\infty_TL^2_\Omega} \lesssim \varepsilon+ {\mathcal E}_{K}^2(T) \log^{1/2}(e+T) {}+{\mathcal E}_{K}^3(T)\log(e+T) \label{Concl01} \end{equation} for each $K\ge 7$. \section{On the energy estimates for the generalized derivatives}\label{sec.eegdI} Throughout this section and the next one, we suppose that $K$ is sufficiently large and that ${\mathcal E}_K(T)\le 1$. \subsection{Direct energy estimates for the generalized derivatives} Let $|\delta|\le K-2$. Recalling \eqref{eq.commute}, it follows that \begin{eqnarray} \frac{d}{dt} E(\Gamma^\delta u;t) &=& \int_{\Omega} \Gamma^\delta G(\partial u)(t,x)\,\partial_t \Gamma^\delta u(t,x)\, dx \nonumber\\ &&+\int_{\partial \Omega} \nu\cdot \nabla_x \Gamma^\delta u(t,x)\,\partial_t \Gamma^\delta u(t,x)\, dS=:I_{\delta}(t)+I\!I_{\delta}(t), \label{eq.eegd} \end{eqnarray} where $\nu=\nu(x)$ is the unit outer normal vector at $x \in \partial \Omega$ and $dS$ is the surface measure on $\partial \Omega$. Since $G(\partial u)$ is a homogeneous polynomial of degree three, we can say that \begin{equation}\label{eq.eegdn} |\Gamma^\delta G(\partial u)\,\partial_t \Gamma^\delta u| \lesssim \sum_{|\delta_1|\le [|\delta|/2]}|\Gamma^{\delta_1} \partial u|^2 \sum_{|\delta_2|\le |\delta|}|\Gamma^{\delta_2} \partial u(t,x)|^2.
\end{equation} Applying the H\"older inequality and taking the $L^\infty$ norm of the first factor, we arrive at \begin{equation}\label{eq.eegdI} |I_{\delta}(t)| \lesssim \langle t\rangle^{-1}{\mathcal E}^2_{ [|\delta|/2]+1, 0}(T)\, \jb{t}\, {\mathcal E}^2_{0, K }(T) \lesssim {\mathcal E}^4_{K}(T), \end{equation} since $|\delta|\le K-2$. Now we treat the boundary term by means of the trace theorem. Since $\partial \Omega \subset B_{1}$, the norms of the generalized derivatives on $\partial \Omega$ are equivalent to the norms of the standard derivatives. Hence for all $t\in (0,T)$ we have $$ |I\!I_{\delta}(t)|\lesssim \sum_{1\le |\gamma|+k \le |\delta|+1} \|\partial_t^{k} \partial_x^\gamma u(t)\|^2_{L^2(\partial \Omega)}. $$ Moreover, by the trace theorem and \eqref{Concl01}, we see that $$ |I\!I_{\delta}(t)| \lesssim \sum_{1\le |\gamma|+k \le |\delta|+2} \|\partial_t^{k} \partial_x^\gamma u(t)\|_{L^2_{\Omega}}^2 \lesssim \left( \varepsilon+{\mathcal R}_0( {\mathcal E}_{K}(T) \log^{1/2}(e+T)) {\mathcal E}_K(T) \right)^2, $$ because of the assumption $|\delta|\le K-2$. Here we put $${\mathcal R}_0(s)=s+s^2.$$ Summarizing the above estimates, for any $K\ge 7$ and $|\delta|\le K-2$, it holds \begin{align*} \frac{d}{dt} E(\Gamma^\delta u;t) &\lesssim \left( \varepsilon+{\mathcal R}_0( {\mathcal E}_{K}(T) \log^{1/2}(e+T)) {\mathcal E}_K(T) \right)^2 +{\mathcal E}_{K}^4(T) \\ & \lesssim \left( \varepsilon+{\mathcal R}_0( {\mathcal E}_{K}(T) \log^{1/2}(e+T)) {\mathcal E}_K(T) \right)^2. \end{align*} For the last inequality, we recall that ${\mathcal E}_K(T)\le 1$.
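The passage from boundary norms to interior norms in the estimate of $I\!I_\delta$ rests on the trace inequality $\|v\|_{L^2(\partial\Omega)}\lesssim \|v\|_{H^1(\Omega)}$; schematically,
\begin{align*}
\|\partial_t^{k} \partial_x^\gamma u(t)\|_{L^2(\partial \Omega)}^2
 &\lesssim \|\partial_t^{k} \partial_x^\gamma u(t)\|_{H^1(\Omega)}^2
 \lesssim \sum_{1\le |\gamma'|+k' \le |\gamma|+k+1}
   \|\partial_t^{k'} \partial_x^{\gamma'} u(t)\|_{L^2_\Omega}^2,
\end{align*}
so summing over $1\le|\gamma|+k\le|\delta|+1$ raises the number of derivatives to $|\delta|+2\le K$, which is exactly where the restriction $|\delta|\le K-2$ enters so that \eqref{Concl01} applies.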
After integration, this gives \begin{align} \sum_{|\delta| \le K-2} \| \Gamma^\delta \partial u(t)\|_{L^2_\Omega} & \lesssim {\mathcal E}_{K}(0)+ t^{1/2} \left(\varepsilon+{\mathcal R}_0( {\mathcal E}_{K}(T) \log^{1/2}(e+T)) {\mathcal E}_K(T)\right) \notag \\ & \lesssim \jb{t}^{1/2} \left(\varepsilon+{\mathcal R}_0( {\mathcal E}_{K}(T) \log^{1/2}(e+T)) {\mathcal E}_K(T)\right). \label{eq.seceegdI} \end{align} \subsection{Refinement of the energy estimates for the generalized derivatives} Let $1\le |\delta|\le K-8$. Since $\partial \Omega$ is a bounded set, it follows from \eqref{eq.eegd} that \begin{align*} |I\!I_{\delta}(t)|\lesssim & \| \Gamma^\delta \partial_t u(t)\|_{L^2(\partial \Omega)} \sum_{|\gamma|\le |\delta|} \| \Gamma^\gamma \nabla_x u(t)\|_{L^2(\partial \Omega)}\\ \lesssim & \sum_{1\le |\gamma|\le |\delta|} \|\partial^\gamma\partial_t u(t)\|_{L^\infty(\partial \Omega)} \sum_{|\gamma|\le |\delta|} \| \partial^\gamma \nabla_x u(t)\|_{L^\infty(\partial \Omega)}. \end{align*} Since $|x|\le 1$ for $x\in \partial \Omega$, we get $\langle |x|+t\rangle {} \simeq {}\langle t\rangle \simeq \langle |x|-t\rangle$ for $x\in \partial\Omega$. In particular, we get $\sup_{x\in \partial\Omega} w_\nu(t,x) \lesssim \langle t\rangle^{-\nu}$ for $0<\nu \le 1$. We fix sufficiently small positive constants $\eta\in(0,1/4)$ and $\mu>0$. Applying the pointwise estimates \eqref{ba4weak} and \eqref{ba4t} in Theorem~\ref{main}, we get $$ |I\!I_{\delta}(t)|\lesssim \langle t\rangle^{-(3/2)+\eta} \log^4 (e+t) \left(\mathcal A^2_{2+\mu,|\delta|+4}[\phi,\psi]+A^{2}_{|\delta|+4}(t)\right), $$ where $$ A_{s}(t)=\sum_{|\gamma|\le s}\left\|\,|y|^{1/2}W_{1,1}(s,y)\Gamma^\gamma \left(G(\partial u)\right)(s,y) \right\|_{L^\infty_tL^\infty_\Omega}.
$$ If $m$ and $s$ are sufficiently large, by the Sobolev embedding theorem we have ${\mathcal A}_{2+\mu,|\delta|+4}[\phi,\psi]\lesssim \varepsilon$, and we obtain \begin{equation}\label{eq.IIA1} |I\!I_{\delta}(t)|\lesssim \langle t\rangle^{-(3/2)+\eta} \log^4 (e+t) \left(\varepsilon^2+A^2_{|\delta|+4}(t)\right). \end{equation} In order to estimate $A_{|\delta|+4}(t)$, we argue as in \eqref{eq.eegdn}, so that $$ \sum_{|\gamma|\le |\delta|+4} |\Gamma^\gamma G(\partial u)(s,y)| \lesssim w_{1/2}^2(s,y) {\mathcal E}^2_{[(|\delta|+4)/2]+1,0}(T) \sum_{|\gamma'|\le |\delta|+4} |\Gamma^{\gamma'} \partial u(s,y)|. $$ Now using \eqref{eq.ome} and applying Lemma~\ref{KlainermanSobolev} to estimate $|\Gamma^{\gamma'} \partial u|$, we obtain $$ \sum_{|\gamma|\le|\delta|+4} |\Gamma^\gamma G(\partial u)(s,y)|\lesssim |y|^{-1/2} W_{1,1}^{-1}(s,y){\mathcal E}^2_{[(|\delta|+4)/2]+1,0}(T) \sum_{|\gamma|\le |\delta|+6} {\|\Gamma^{\gamma} \partial u(s,\cdot)\|_{L^2_\Omega}}, $$ which yields \begin{equation} \label{Est_A_1} A_{|\delta|+4}(t) \lesssim {\mathcal E}^2_{K}(T) \sum_{|\gamma|\le |\delta|+6} \|\Gamma^{\gamma} \partial u(s,y)\|_{L^\infty_t{L^2_\Omega}} \end{equation} because we have $[(|\delta|+4)/2]\le [(K-1)/2]$ for $|\delta|\le K-8$. Observing that $$ \sum_{|\gamma|\le |\delta|+6} \|\Gamma^{\gamma} \partial u(s,y)\|_{L^\infty_t{L^2_\Omega}} \lesssim \jb{t}^{1/2} {\mathcal E}_{K}(T) $$ for $|\delta|\le K-8$, we see from \eqref{eq.IIA1} and \eqref{Est_A_1} that $$ |I\!I_{\delta}(t)| \lesssim \langle t\rangle^{-(1/2)+2\eta} \left(\varepsilon^2+{\mathcal E}^6_{K}(T)\right). $$ Moreover, for $|\delta|\le K-8$ the inequality \eqref{eq.eegdI} can be improved as $$ |I_{\delta}(t)| \lesssim \langle t\rangle^{-1}{\mathcal E}^2_{ [|\delta|/2]+1, 0}(T) \left(\jb{t}^{1/4+\eta} {\mathcal E}_{0,K} (T)\right)^2 \lesssim \langle t\rangle^{-(1/2)+2\eta} {\mathcal E}^4_{K}(T).
$$ Coming back to \eqref{eq.eegd}, one can conclude from the assumption ${\mathcal E}_K(T)\le 1$ that \begin{eqnarray} \sum_{1 \le|\delta| \le K-8} \| \Gamma^\delta \partial u(t)\|_{L^2_\Omega} &\lesssim& {\mathcal E}_{K}(0) +\jb{t}^{(1/4)+\eta} \left(\varepsilon+{\mathcal E}^2_{K}(T)\right) \nonumber \\ \label{eq.inftyeegd} &\lesssim & \jb{t}^{(1/4)+\eta} \left(\varepsilon+{\mathcal E}^2_{K}(T)\right). \end{eqnarray} The next step is to improve this estimate for lower $|\delta|$, in order to avoid the polynomial growth in $t$. Let $1\le|\delta|\le K-14$. From \eqref{Est_A_1} and the definition of ${\mathcal E}_K(T)$, we get $$ A_{|\delta|+4}(t)\lesssim {\mathcal E}^3_K(T) \jb{t}^{(1/4)+\eta}. $$ From \eqref{eq.IIA1}, it follows that \begin{eqnarray*} |I\!I_{\delta}(t)| &\lesssim & \langle t \rangle^{-(3/2)+\eta} \log^{4} (e+t) \left(\varepsilon^2 + \langle t\rangle^{(1/2)+2\eta} {\mathcal E}^6_K(T)\right) \\ &\lesssim& \langle t \rangle^{-1+4\eta} \left(\varepsilon^2+{\mathcal E}^6_K(T)\right). \end{eqnarray*} On the other hand, for $|\delta|\le K-14$ it holds $$ |I_{\delta}(t)| \lesssim \langle t\rangle^{-1}{\mathcal E}^2_{ [|\delta|/2]+1, 0}(T) \left(\jb{t}^{2\eta} {\mathcal E}_{0,K}(T)\right)^2 \lesssim \langle t\rangle^{-1+4\eta} {\mathcal E}^4_{K}(T). $$ Summing up these estimates and integrating \eqref{eq.eegd}, we get \begin{equation}\label{eq.seceegdIIj00} \sum_{1\le |\delta| \le K-14} \| \Gamma^\delta \partial u(t)\|_{L^2_\Omega} \lesssim \jb{t}^{2\eta} \left(\varepsilon +{\mathcal E}^2_K(T) \right). \end{equation} We repeat the above procedure once again with $1\le|\delta|\le K-20$. Since $|\delta|+6\le K-14$, from \eqref{Est_A_1} we have $A_{|\delta|+4}(t)\lesssim \jb{t}^{2\eta}{\mathcal E}^3_K(T)$.
In turn this implies \begin{eqnarray*} |I\!I_{\delta}(t)| & \lesssim & \langle t \rangle^{-(3/2)+\eta} \log^{4} (e+t) \left(\varepsilon^2 + \langle t\rangle^{4\eta} {\mathcal E}^6_K(T)\right)\\ & \lesssim & \jb{t}^{-(3/2)+6\eta} \left(\varepsilon^2+{\mathcal E}_K^6(T)\right). \end{eqnarray*} In this case $|I_{\delta}(t)|\lesssim \langle t \rangle^{-1}{\mathcal E}_K^4(T)$. After integration we get \begin{eqnarray} \sum_{ 1\le|\delta| \le K-20} \| \Gamma^\delta \partial u(t)\|_{L^2_\Omega} &\lesssim & {\mathcal E}_{K}(0)+{\mathcal E}^2_{K}(T)\log^{1/2}(e+t)+\varepsilon+{\mathcal E}^3_K(T) \nonumber\\ &\lesssim &\varepsilon + {\mathcal E}^2_K(T)\log^{1/2}(e+t). \label{eq.eegdfinal} \end{eqnarray} This estimate is the best we can obtain with our methods, due to the estimate of $I_{\delta}(t)$. \section{Boundedness of the $L^\infty$ norm and the conclusion of the proof of Theorem~\ref{thm.mainsemi}} Summarizing \eqref{Concl01}, \eqref{eq.seceegdI}, \eqref{eq.inftyeegd}, \eqref{eq.seceegdIIj00} and \eqref{eq.eegdfinal}, we have \begin{equation}\label{eq.e0K} {\mathcal E}_{0,K}(T)\lesssim \varepsilon +\mathcal R_0({\mathcal E}_{[(K-1)/2]+1,K}(T) \log^{1/2}(e+T)){\mathcal E}_{[(K-1)/2]+1,K}(T) \end{equation} with $K\ge 20$ and $\mathcal R_0(s)=s+s^2$. If ${\mathcal E}_{H,0}(T)$ with $H=[(K-1)/2]+1$ had the same bound as the one for ${\mathcal E}_{0,K}(T)$ given in \eqref{eq.e0K}, then we could conclude that the estimate \eqref{eq.energy} in Claim~\ref{claim} holds with $P=1$ and $Q=1/2$, and hence $T^* \ge \exp(\tilde C \varepsilon^{-2})$. However, $\mathcal R_0$ (and hence $Q$) will be changed by the following argument; such a modification yields a worse estimate for the lifespan. Since we assume $\phi,\psi \in \mathcal{C}_0^\infty(\overline{\Omega})$, there is a positive constant $M$ such that $|x|\le t+M$ on $\supp u(t,\cdot)$ for $t\ge 0$. Hence we have $\log(e+t+|x|)\lesssim \log(e+t)$ on $\supp u(t, \cdot)$.
From \eqref{Est_A_1} and the definition of ${\mathcal E}_K(T)$, it follows that $A_{|\delta|+4}(t)\lesssim {\mathcal E}^3_K(T)$ for $K\ge 26$ and $|\delta|\le K-26$. Let $\mu>0$. Then we have ${\mathcal A}_{2+\mu, K-22}[\phi,\psi]\lesssim \varepsilon$ if $m$ and $s$ are sufficiently large. For fixed $0<\eta<1/2$, by \eqref{ba4weak} we obtain \[ \sum_{|\gamma|\le K-26}| \Gamma^\gamma \partial u(t, x)| \lesssim \mathcal B(\varepsilon,t) \,w_{(1/2)-\eta}(t,x), \] where \[ \mathcal B(\varepsilon,t) :=\varepsilon +\log^{2}(e+t){\mathcal E}_K^3(T). \] Using this estimate, we obtain \[ \sum_{|\gamma|\le K-26}|\Gamma^\gamma G(\partial u)(t,x)|\lesssim w_{1/2}^2(t,x){\mathcal E}^2_{[(K-1)/2]+1,0}(T)w_{(1/2)-\eta}(t,x)\mathcal B(\varepsilon,t). \] Since $|y|^{1/2} w_{(1/2)-\eta}\lesssim 1$, this implies \[ A_{|\delta|+4}(t)\lesssim {\mathcal E}_K^2(T)\mathcal B(\varepsilon,t) \] for any $|\delta|+4\le K-26$. Therefore, \eqref{ba4} in Theorem~\ref{main} yields \begin{eqnarray*} \sum_{|\gamma|\le K-30}| \Gamma^\gamma \partial u(t, x)|\lesssim \left( \varepsilon+ \mathcal B(\varepsilon,t) {\mathcal E}^2_{K}(T)\log^2(e+t) \right) w_{1/2}(t,x). \end{eqnarray*} For $K\ge 61$ we have $[(K-1)/2]+1 \le K-30$, and we conclude that \begin{equation} \label{Concl02} \sum_{|\gamma|\le [(K-1)/2]+1} \|w_{1/2}^{-1} \Gamma^\gamma \partial u\|_{L^\infty_TL^\infty_\Omega}\lesssim \varepsilon+ \mathcal B(\varepsilon,T) {\mathcal E}_K^2(T)\log^2(e+T). \end{equation} Finally, we combine \eqref{eq.e0K} and \eqref{Concl02} to obtain \begin{align*} {\mathcal E}_K(T) \lesssim & {} \varepsilon+ (\varepsilon+{\mathcal E}_K(T))\times\\ &\times \left( {\mathcal E}_K(T)\log^{1/2}(e+T)+{\mathcal E}_K^2(T)\log^2(e+T)+{\mathcal E}_K^4(T)\log^{4}(e+T) \right).
\end{align*} In order to find \begin{align*} {\mathcal E}_K(T)\le C_1 \varepsilon+{\mathcal R}\left({\mathcal E}^P_K(T)\log^Q(e+T)\right) (\varepsilon+{\mathcal E}_K(T)) \end{align*} with $P/Q$ as large as possible, we take $$ \mathcal R(\tau):=C_2 (\tau+\tau^2+\tau^4) $$ and $P=Q=1$. Recalling the discussion in Section~\ref{AAPMT}, we obtain Theorem~\ref{thm.mainsemi}. \section{Proof of the pointwise estimates}\label{sec.pointwise} In this section, we go back to the Neumann problem~\eqref{eq.PMfT} and prove Theorem~\ref{main} by combining the decay estimates for the Cauchy problem in $\mathbb R^2$ with the local energy decay estimate, through a cut-off argument. \subsection{Decomposition of solutions} Recall the definitions of $X(T)$ and $S[\vec{u}_0, f](t,x)$, $K[\vec{u}_0](t,x)$, $L[f](t,x)$ in Subsection~\ref{LinearNeumann}. In the same manner, the solution of the Cauchy problem \begin{equation}\label{eq.PCgT} \begin{array}{ll} (\partial_t^2-\Delta) v = g, & (t,x) \in (0,T)\times {\mathbb R}^2,\\ v(0,x)=v_0(x), & x\in \mathbb R^2,\\ (\partial_t v)(0,x)=v_1(x), & x\in {\mathbb R}^2, \end{array} \end{equation} will be denoted by $S_0[\vec{v}_0, g](t,x)$ with $\vec{v}_0=(v_0, v_1)$. Then we have $$ S_0[\vec{v}_0,g](t,x)= K_0[\vec{v}_0](t,x)+L_0[g](t,x), $$ where $K_0[\vec{v}_0](t,x)$ and $L_0[g](t,x)$ are the solutions of \eqref{eq.PCgT} with $g=0$ and with $\vec{v}_{0}=(0,0)$, respectively. In other words, $K_0[\vec{v}_0](t,x)=S_0[\vec{v}_0, 0](t,x)$ and $L_0[g](t,x)=S_0[(0,0), g](t,x)$. Now we proceed to introduce the cut-off argument. For $a >0$, we denote by $\psi_a$ a smooth radially symmetric function on ${\mathbb R}^2$ satisfying \begin{equation}\label{eq.psia} \begin{cases} \psi_a(x)=0, & |x| \le a, \\ \psi_a(x)=1, & |x| \ge a+1. \end{cases} \end{equation} \begin{lemma}\label{lemma.decomposition} Fix $a\ge 1$.
Let $(u_0, u_1, f)\in X(T)$. Assume that for any $t\in (0,T)$ one has $$ \text{supp}\,f(t,\cdot) \subset \overline{\Omega_{t+a}}\quad \text{and}\quad \text{supp}\,u_0 \subset \overline{\Omega_a}, \ \text{supp}\,u_1 \subset \overline{\Omega_a}. $$ Then we have \begin{eqnarray}\label{eq.omo} S[\vec{u}_0, f](t,x)=\psi_a(x) S_0[ \psi_{2a} \vec{u}_0, \psi_{2a}f](t,x)+\sum_{i=1}^4 S_i[ \vec{u}_0, f](t,x), \end{eqnarray} where \begin{eqnarray}\label{eq.S1} && S_1[\vec{u}_0,f](t,x)=(1-\psi_{2a}(x))L[\,[\psi_a,-\Delta]S_0[\psi_{2a} \vec{u}_0, \psi_{2a}f] ](t,x), \\ \label{eq.S2} && S_2[\vec{u}_0,f](t,x)=-L_0[\,[\psi_{2a},-\Delta]L[\,[\psi_a,-\Delta]S_0[\psi_{2a} \vec{u}_0, \psi_{2a}f] ] ](t,x), \\ \label{eq.S3} && S_3[\vec{u}_0, f](t,x)=(1-\psi_{3a} (x)) S[(1-\psi_{2a}) \vec{u}_0, (1-\psi_{2a})f](t,x), \\ \label{eq.S4} && S_4[\vec{u}_0, f](t,x)=-L_0[\,[\psi_{3a},-\Delta] S[(1-\psi_{2a}) \vec{u}_0, (1-\psi_{2a})f] ](t,x). \end{eqnarray} \end{lemma} For the proof, we refer to \cite{Kub06}. Observe that the first term on the right-hand side of \eqref{eq.omo} can be evaluated by applying the decay estimates for the whole-space case. In contrast, the local energy decay estimates for the mixed problem work well in estimating $S_j[\vec{u}_0, f]$ for $1\le j\le 4$, because we always have some localized factor in front of the operators $L$, $S$ and in their arguments. \subsection{Known estimates for the $2$D linear Cauchy problem} In this subsection we recall the decay estimates for solutions of the homogeneous wave equation. Since $\Lambda K_0[v_0,v_1]=K_0[\Lambda v_0, \Lambda v_1]$ by \eqref{eq.commute}, we find that Proposition 2.1 of \cite{k93} leads to the following. \begin{lemma} Let $m\in\mathbb N$.
For any $(v_0,v_1) \in \mathcal C^\infty_0(\mathbb R^2)\times \mathcal C^\infty_0(\mathbb R^2)$, it holds that \begin{equation}\label{eq.kubota} \langle t+|x| \rangle^{1/2}\log^{-1}\left(e+ \frac{\langle t+|x|\rangle}{\langle t-|x|\rangle}\right)\sum_{|\beta|\le m}|\Gamma^\beta K_0[v_0,v_1](t,x)| \lesssim \mathcal B_{3/2,m}[v_0,v_1]. \end{equation} Under the same assumption, for any $\mu>0$ we have \begin{equation}\label{eq.kubota2} \langle t+|x|\rangle^{1/2}\langle t-|x|\rangle^{1/2}\sum_{|\beta|\le m}|\Gamma^\beta K_0[v_0,v_1](t,x)|\lesssim \mathcal B_{2+\mu,m}[v_0,v_1]. \end{equation} \end{lemma} For $\kappa\ge 1$ and $\tau\ge 0$, we define $$ \Psi_\kappa(\tau):=\begin{cases} 1, & \kappa>1, \\ \log(e+\tau), & \kappa=1. \end{cases} $$ The following two lemmas are proved for $m=0$ in \cite{DiF03}; for the general case, see \cite{KP}. \begin{lemma} Let $\kappa \ge 1$ and $m\in \mathbb N$. Then we have \begin{eqnarray}\label{eq.decayMGbis} \sum_{|\delta|\le m}|\Gamma^\delta L_0[g](t,x)| \lesssim \Psi_{\kappa}(t+|x|)\sum_{|\delta|\le m}\|\langle y\rangle^{1/2}W_{1/2,\kappa}(s,y) \Gamma^\delta g(s,y)\|_{L^\infty_tL^\infty}, \end{eqnarray} and \begin{eqnarray} \label{eq.decayMG} &&\langle t+|x| \rangle^{1/2}\log^{-1}\left(e+ \frac{\langle t+|x|\rangle}{\langle t-|x|\rangle}\right) \sum_{|\delta|\le m}|\Gamma^\delta L_0[g](t,x)|\lesssim\nonumber \\&&\quad \lesssim \Psi_{\kappa}(t+|x|) \sum_{|\delta|\le m}\|\langle y\rangle^{1/2}W_{1,\kappa}(s,y) \Gamma^\delta g(s,y)\|_{L^\infty_tL^\infty} \end{eqnarray} for any $(t,x)\in [0,T)\times \mathbb R^2$. \end{lemma} \begin{lemma} Let $0<\sigma<3/2$, $\kappa>1$, $\mu\ge 0$, $0<\eta<1$ and $m\in \mathbb N$.
Then, for any $(t,x)\in [0,T)\times \mathbb R^2$, one has \begin{eqnarray} &&\sum_{|\delta|\le m}|\Gamma^\delta \partial L_0[g](t,x)| \lesssim \nonumber\\ &&\quad \lesssim w_{\sigma}(t,x) \Psi_{\mu+1}(t+|x|) \sum_{|\delta|\le m+1}\|\langle y\rangle^{1/2+\kappa}\langle s+ |y| \rangle^{\sigma+\mu} \Gamma^\delta g(s,y)\|_{L^\infty_tL^\infty}, \label{eq.decayMG2} \\ &&\sum_{|\delta|\le m}|\Gamma^\delta \partial L_0[g](t,x)| \lesssim \nonumber\\ &&\quad \lesssim w_{1-\eta}(t,x)\log(e+t+|x|) \sum_{|\delta|\le m+1}\|\langle y\rangle^{1/2}W_{1,1}(s,y) \Gamma^\delta g(s,y)\|_{L^\infty_tL^\infty}. \label{eq.decayMG4} \end{eqnarray} \end{lemma} \subsection{The local energy decay estimates} We come back to the linear problem \eqref{eq.PMfT}. Let $X_a(T)$ be the set of all $({u}_0, {u}_1,f)\in X(T)$ such that \begin{align}\label{eq.Xa} & {u}_0(x)={u}_1(x)=0 \text{ for } |x|\ge a, \\ & f(t,x)=0 \text{ for } |x|\ge a, \; t\in [0,T).\label{eq.Xaa} \end{align} The following local energy decay estimate will be used in the proof of the pointwise estimates. \begin{lemma}\label{LocalEnergyDecay} Assume that ${\mathcal O}$ is convex. Let $a, b>1$, $\gamma\in (0, 1]$ and $m \in {\mathbb N}$. If $\Xi=({u}_0, {u}_1, f) \in X_{a}(T)$, then for any $t\in [0,T)$ one has \begin{eqnarray} && \sum_{|\alpha| \le m} \jb{t}^\gamma \| \partial^\alpha S[\Xi](t)\|_{L^2(\Omega_b)}\lesssim \nonumber\\ &&\quad \lesssim \| {u}_0 \|_{{H}^{m}(\Omega)} +\| {u}_1 \|_{{H}^{m-1}(\Omega)} +\log(e+t)\sum_{|\alpha|\le m-1}\| \jb{s}^{\gamma} (\partial^\alpha f)(s,y) \|_{L^\infty_t L^2_\Omega}.
\label{eq.LE} \end{eqnarray} \end{lemma} \noindent{\it Proof.} For $a,b>1$, it is known (with implicit constants depending on $a$ and $b$) that \begin{align} \int_{\Omega_b}(|\partial_t K[\vec \phi_0](t,x)|^2+ |\nabla_x K[\vec \phi_0](t,x)|^2+& |K[\vec \phi_0](t,x)|^2) \,dx \lesssim \nonumber\\ & \lesssim \jb{t}^{-2}\left(\|\phi_0\|^2_{H^1(\Omega)}+ \|\phi_1\|^2_{L^2(\Omega)}\right) \label{obstacle} \end{align} for any $\vec \phi_0=(\phi_0, \phi_1)\in H^{2}(\Omega)\times H^1(\Omega)$ satisfying $\phi_0(x)=\phi_1(x)\equiv 0$ for $|x|\ge a$ and satisfying also the compatibility condition of order $0$, that is to say, $\partial_\nu \phi_0(x)=0$ for $x\in \partial \Omega$ (see for instance Lemma~2.1 of \cite{SeSh03}; see also Morawetz \cite{Mor75} and Vainberg \cite{Vai75}). Now let $({u}_0, {u}_1, f)\in X_{a}(T)$ with some $a>1$. Let $u_j$ for $j\ge 2$ be defined as in Definition~\ref{def.complinear}. Then, by Duhamel's principle, it follows that \begin{align} & \partial_t^j S[({u}_0, {u}_1,f)](t,x) \nonumber\\ & \qquad =K[({u}_j, {u}_{j+1})](t,x)+ \int_0^t K\bigl[\bigl(0,(\partial_t^j f)(s) \bigr) \bigr](t-s,x)\, ds \label{DP} \end{align} for any integer $j\ge 0$ and any $(t,x) \in [0,T) \times \Omega$. Observe that $(u_j, u_{j+1}, 0)$ satisfies the compatibility condition of order $0$, because $(u_0, u_1, f)\in X(T)$ implies $\partial_\nu u_j=0$ on $\partial \Omega$; the compatibility condition of order $0$ is also trivially satisfied by $\bigl(0,(\partial_t^j f)(s),0\bigr)$ for all $s\ge 0$.
\noindent Therefore, by \eqref{obstacle} we have \begin{eqnarray*} \sum_{|\alpha|\le 1} \|\partial^\alpha K[{u}_j, {u}_{j+1}](t)\|_{L^2({\Omega_b})} & \lesssim & \jb{t}^{-1} \left(\|{u}_j\|_{H^1(\Omega)}+\|{u}_{j+1}\|_{L^2(\Omega)}\right) \\ & \lesssim & \jb{t}^{-1} \bigl(\|{u}_0\|_{H^{j+1}(\Omega)}+\|{u}_1\|_{H^{j}(\Omega)} +\sum_{k=0}^{j-1} \|(\partial_t^k f)(0)\|_{L^2(\Omega)}\bigr) \end{eqnarray*} and \begin{eqnarray*} \sum_{|\alpha|\le 1}\int_0^t \|\partial^\alpha K[(0,(\partial_t^j f)(s))](t-s)\|_{L^2({\Omega_b})}\, ds &\lesssim &\int_0^t \jb{t-s}^{-1} \,\|(\partial_t^j f)(s)\|_{L^2(\Omega)}\, ds \\ \qquad &\lesssim &\jb{t}^{-\gamma} \log(e+t) \sup_{0\le s \le t} \jb{s}^\gamma \|(\partial_t^j f)(s)\|_{L^2(\Omega)} \end{eqnarray*} for any $\gamma \in (0,1]$. In conclusion, for any integer $j\ge 0$, we have \begin{eqnarray} && \sum_{|\alpha|\le 1}\| \partial^\alpha \partial^j_{t} S[({u}_0, {u}_1, f)](t)\|_{L^2(\Omega_b)}\lesssim \nonumber\\ && \quad \lesssim \jb{t}^{-\gamma} \bigl( \|{u}_0\|_{H^{j+1}(\Omega)}+\|{u}_1\|_{H^{j}(\Omega)} +\sum_{k=0}^j\log(e+t) \sup_{0\le s \le t} \jb{s}^\gamma\|(\partial_t^k f)(s)\|_{L^2(\Omega)}\bigr). \label{LE1} \end{eqnarray} In order to evaluate $\partial^\alpha S[\Xi]$ for $2\le |\alpha| \le m$, we have only to combine \eqref{LE1} with a variant of \eqref{eq.ap1}: \begin{equation}\label{LE2} \|\varphi\|_{H^m(\Omega_b)} \lesssim \|\Delta_x \varphi\|_{H^{m-2}(\Omega_{b^\prime})}+\|\varphi\|_{H^{m-1}(\Omega_{b^\prime})}, \end{equation} where $1<b<b^\prime$ and $\varphi \in H^m(\Omega)$ with $m \ge 2$; we can easily obtain \eqref{LE2} from \eqref{eq.ap1} by cutting off $\varphi$ for $|x|\ge b'$. In order to complete the proof, one has to apply this inequality recalling the equation $\Delta S[\Xi]=\partial_t^2 S[\Xi]-f$. Invoking \eqref{LE1}, we finally get the desired estimate \eqref{eq.LE}.
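For completeness, the logarithmic factor in the second estimate comes from the elementary convolution bound, obtained by splitting the integral at $s=t/2$:
\begin{align*}
\int_0^t \jb{t-s}^{-1}\jb{s}^{-\gamma}\,ds
 &\lesssim \jb{t}^{-1}\int_0^{t/2}\jb{s}^{-\gamma}\,ds
        + \jb{t}^{-\gamma}\int_{t/2}^{t}\jb{t-s}^{-1}\,ds
 \lesssim \jb{t}^{-\gamma}\log(e+t),
\end{align*}
valid for $\gamma\in(0,1]$: on $[0,t/2]$ one has $\jb{t-s}\simeq\jb{t}$ and $\int_0^{t/2}\jb{s}^{-\gamma}\,ds\lesssim \jb{t}^{1-\gamma}\log(e+t)$ (the logarithm being needed only when $\gamma=1$), while on $[t/2,t]$ one has $\jb{s}\simeq\jb{t}$ and the remaining integral contributes the factor $\log(e+t)$.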
$\qed$ \subsection{Proof of Theorem~\ref{main}} The following lemma is the main tool for the proof of Theorem~\ref{main}. \begin{lemma}\label{KataLem} Let ${\mathcal O}$ be a convex set. Let $a,b>1$, $0<\rho\le 1$, $m\in \mathbb N^*$ and $\kappa \ge 1$. \noindent {\rm (i)} Suppose that $\chi$ is a smooth function on $\mathbb R^2$ satisfying ${\rm supp}\, \chi \subset B_b$. If $\Xi=({u}_0, {u}_1,f) \in X_{a}(T)$, then \begin{eqnarray} && \langle t \rangle^\rho \sum_{|\delta|\le m}|\Gamma^\delta (\chi S[ \Xi ])(t,x)|\lesssim \nonumber\\ &&\lesssim \| {u}_0 \|_{{H}^{m+2}(\Omega)}+\| {u}_1 \|_{{H}^{m+1}(\Omega)} + \log(e+t) \sum_{|\beta|\le m+1} \|\langle s\rangle^{\rho} \partial^\beta f(s,y)\|_{L^\infty_tL^\infty({\Omega_a})} \label{KataL01} \end{eqnarray} for $(t,x)\in[0, T)\times \overline{\Omega}$. \noindent {\rm (ii)} Let $g\in \mathcal C^{\infty}([0,T)\times \mathbb R^2)$ be such that $\supp g (t,\cdot)\subset \overline{B_a\setminus B_1}$ for any $t\in [0,T)$. Then \begin{eqnarray} \label{KataL02} \sum_{|\delta|\le m}|\Gamma^\delta L_0[g](t,x)| \lesssim \sum_{|\beta|\le m} \|\langle s\rangle^{1/2} \partial^\beta g(s,y)\|_{L^\infty_tL^\infty({\Omega_a})}, \end{eqnarray} and for any $0\le \eta<\rho$ we have \begin{align} \label{KataL03} & w^{-1}_{\rho-\eta}(t,x) \sum_{|\delta|\le m} |\Gamma^\delta \partial L_0[g](t,x)| \lesssim \Psi_{\eta+1}(t+|x|) \sum_{|\beta|\le m+1} \|\langle s\rangle^{\rho} \partial^\beta g(s,y)\|_{L^\infty_tL^\infty({\Omega_a})} \end{align} for $(t,x)\in [0,T)\times \overline{\Omega}$. \noindent {\rm (iii)} Let $(v_0,v_1,g)\in \mathcal C^{\infty}(\mathbb R^2)\times \mathcal C^{\infty}(\mathbb R^2)\times \mathcal C^{\infty}([0,T)\times \mathbb R^2)$.
If $v_0(x)=v_1(x)=g(t,x)=0$ for any $x\in B_1$ and $t\in [0,T)$, then \begin{eqnarray} && \langle t \rangle^{1/2} \sum_{|\beta|\le m} |\Gamma^\beta S_0[v_0,v_1, g ](t,x)| \lesssim \nonumber\\ &&\quad \lesssim {\mathcal A}_{3/2, m}[v_0,v_1] +\Psi_\kappa(t+|x|) \sum_{|\beta|\le m} \|\langle y\rangle^{1/2} W_{1,\kappa}(s,y)\Gamma^\beta g(s,y)\|_{L^\infty_tL^\infty({\Omega})} \label{KataL04} \end{eqnarray} for $(t,x)\in [0,T)\times {\overline{\Omega}_b}$. \end{lemma} \noindent{\it Proof.}\ \ First we note that for any smooth function $h:[0,T)\times \overline{\Omega}\to \mathbb R$ such that ${\rm supp}\, h(t, \cdot)\subset B_R$ for any $t\in [0,T)$ and suitable $R>1$, it holds that \begin{equation} \sum_{|\beta|\le m}|\Gamma^\beta h(t,x)|\lesssim \sum_{|\beta|\le m} |\partial^\beta h(t,x)|. \label{KataM01} \end{equation} Clearly the same estimate holds for $h:[0,T)\times \mathbb R^2\to \mathbb R$. We start with the proof of \eqref{KataL01}. Let $\Xi \in X_{a}(T)$ and $0<\rho\le 1$. For $(t,x)\in [0, T)\times \overline{\Omega}$, combining \eqref{KataM01} with the standard Sobolev inequality and then applying the local energy decay \eqref{eq.LE}, we get \begin{eqnarray*} &&\langle t \rangle^\rho \sum_{|\beta|\le m}|\Gamma^{\beta}(\chi S[ \Xi ]) (t,x)| \lesssim \langle t\rangle^\rho \!\!\! \sum_{|\beta|\le m+2} \|{\partial^\beta S[ \Xi ](t)}\|_{L^2(\Omega_b)} \\ && \quad \lesssim \| {u}_0 \|_{{H}^{m+2}(\Omega)}+\| {u}_1 \|_{{H}^{m+1}(\Omega)} +\log(e+t) \sum_{|\beta|\le m+1} \| \jb{s}^{\rho} \partial^\beta f(s,y) \|_{L^\infty_t L^2_\Omega}. \end{eqnarray*} Since $\text{supp}\,f(t,\cdot) \subset \overline{\Omega}_{a}$ implies $\|{\partial^\beta f(s)}\|_{L^2(\Omega)}\lesssim \|{\partial^\beta f(s)}\|_{L^\infty({\Omega}_a)}$, we obtain \eqref{KataL01}. Next we prove \eqref{KataL02} with the aid of the decay estimates for the linear Cauchy problem.
By \eqref{eq.decayMGbis} for some $\kappa>1$, we find \begin{eqnarray*} \sum_{|\delta|\le m}|\Gamma^\delta L_0[g](t,x)| \lesssim \sum_{|\delta|\le m} \|\langle y\rangle^{1/2}W_{1/2,\kappa}(s,y)\Gamma^\delta g(s,y)\|_{L^\infty_tL^\infty}. \end{eqnarray*} Using the assumption ${\rm supp}\, g(t,\cdot) \subset \overline{B_a\setminus B_1}\subset {\overline{\Omega}_{a}}$, we obtain \eqref{KataL02}. Similarly, if we use \eqref{eq.decayMG2} (with $\sigma$ replaced by $\rho-\eta$ and $\mu$ by $\eta$) instead of \eqref{eq.decayMGbis}, then we get \eqref{KataL03}. Finally, we prove \eqref{KataL04} by using \eqref{eq.kubota} and \eqref{eq.decayMG}. It follows that \begin{eqnarray*} && \langle t+|x| \rangle^{1/2} \log^{-1}\left(e+\frac{\langle t+|x|\rangle }{\langle t-|x|\rangle }\right) \sum_{|\beta|\le m} |\Gamma^\beta S_0[\vec{v}_0, g ](t,x)|\lesssim \\&& \quad \lesssim {\mathcal B}_{3/2, m}[\vec{v}_0] + \Psi_\kappa(t+|x|) \sum_{|\beta|\le m} \|\langle y\rangle^{1/2}W_{1,\kappa}(s,y) \Gamma^\beta g(s,y)\|_{L^\infty_tL^\infty} \end{eqnarray*} for $(t,x)\in [0, T)\times \mathbb R^2$. Observe that the logarithmic factor on the left-hand side is comparable to a constant when $x \in \overline{\Omega_b}$. Thus we get \eqref{KataL04}, because our assumptions ensure that the supports of the data and of $g(t,\cdot)$ are contained in $\Omega$. This completes the proof. $\qed$ Now we are in a position to prove Theorem~\ref{main}. \begin{proof}[Proof of Theorem~$\ref{main}$] According to Lemma \ref{lemma.decomposition} with $a=1$, we can write \begin{equation}\label{decomposition} S[{\Xi}](t,x)=\psi_1(x) S_0[\psi_2 \Xi](t,x) {}+\sum_{i=1}^4 S_i[\Xi](t,x) \end{equation} for $(t,x)\in [0,T)\times {\overline{\Omega}}$, where $\psi_a$ is defined by \eqref{eq.psia} and $S_i[\Xi]$ for $1\le i\le 4$ are defined by \eqref{eq.S1}--\eqref{eq.S4} with $a=1$.
It is easy to check that \begin{equation}\label{eq.compsi} [\psi_a,-\Delta]h(t,x)= h(t,x) \Delta \psi_a(x)+2\nabla_{\!x}\,h(t,x) \cdot \nabla_{\!x}\, \psi_a(x) \end{equation} for $(t, x) \in [0,T)\times {\overline{\Omega}}$, $a \ge 1$ and any smooth function $h$. Note that this identity implies \begin{equation}\label{eq.comX} (0,0, [\psi_a, -\Delta]h)\in X_{a+1}(T) \end{equation} because ${\rm supp}\, \nabla_x \psi_a\cup {\rm supp}\, \Delta \psi_a\subset \overline{B_{a+1}\setminus B_a}$. First we prove \eqref{ba3}. Applying \eqref{eq.kubota} and \eqref{eq.decayMG}, we have \begin{eqnarray*} && \jb{t+|x|}^{1/2} \log^{-1}\left(e+\frac{\langle t+|x|\rangle}{\langle t-|x|\rangle}\right) \sum_{|\delta|\le k}\left|\Gamma^\delta S_0[\psi_2\Xi](t,x)\right|\lesssim \\ && \quad \lesssim {\mathcal B}_{3/2,k}[\psi_2\vec{u}_0] + \sum_{|\delta|\le k}\|\langle y\rangle^{1/2}W_{1,1+\mu}(s,y)\Gamma^\delta (\psi_2 f)(s,y)\|_{L^\infty_tL^\infty} \\ && \quad \lesssim {\mathcal A}_{3/2,k}[\vec{u}_0] +\sum_{|\delta|\le k} \| |y|^{1/2} W_{1,1+\mu}(s,y) \Gamma^\delta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}, \end{eqnarray*} so that \begin{align} & \jb{t+|x|}^{1/2}\log^{-1}\left(e+\frac{\jb{t+|x|}}{\jb{t-|x|}}\right) \sum_{|\delta|\le k}\left| \Gamma^{\delta}\bigl(\psi_1(x) S_0[\psi_2\Xi](t,x)\bigr)\right| \lesssim \nonumber\\ & \qquad\qquad \lesssim {\mathcal A}_{3/2,k}[\vec{u}_0] + \sum_{|\delta|\le k}\| |y|^{1/2} W_{1,1+\mu}(s,y) \Gamma^\delta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \label{eq.finalS0} \end{align} Now we write $$ S_1[\Xi]=(1-\psi_2)L[[\psi_1,-\Delta]K_0[\psi_2 \vec{u}_0]]+(1-\psi_2)L[[\psi_1,-\Delta]L_0[\psi_2 f]]=: S_{1,1}[\Xi]+S_{1,2}[\Xi]. $$ We can apply \eqref{KataL01} to estimate $S_{1,2}[\Xi]$, because we have $L[h]=S[0,0,h]$ and ${\rm supp}(1-\psi_2)\subset B_3$, and because \eqref{eq.comX} guarantees $(0,0,[\psi_1,-\Delta]L_0[\psi_2 f]) \in X_{2}(T)$.
Therefore we get \begin{eqnarray*} \langle t \rangle^{1/2}\sum_{|\delta|\le k} |\Gamma^\delta S_{1,2}[\Xi](t,x)|&\lesssim& \log(e+t)\sum_{|\beta|\le k+1} \bigl\|\langle s\rangle^{1/2} \partial^\beta \bigl([\psi_1,-\Delta]L_0[\psi_2 f] \bigr)(s,x)\bigr\|_{L^\infty_tL^\infty(\Omega_2)}\\ &\lesssim & \log(e+t)\sum_{|\beta|\le k+2} \|\langle s\rangle^{1/2} \partial^\beta L_0[\psi_2 f](s,x)\|_{L^\infty_tL^\infty(\Omega_2)}, \end{eqnarray*} where we have used \eqref{eq.compsi} to obtain the second line. Recalling that $L_0[h]=S_0[0,0,h]$ and noting that $\psi_2 f(t,x)=0$ if $|x|\le 2$, we can use \eqref{KataL04} to obtain \begin{equation}\label{eq.stimaS12} \langle t \rangle^{1/2} \sum_{|\delta|\le k}|\Gamma^\delta S_{1,2}[\Xi](t,x)| \lesssim \log(e+t) \sum_{|\beta|\le k+2}\| |y|^{1/2} W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}} \end{equation} for $(t,x)\in [0,T)\times {\overline{\Omega}}$. In order to estimate $S_{1,1}[\Xi]$, we combine the Sobolev embedding and the local energy decay estimate \eqref{eq.LE} with $\gamma=1$. Then we get \begin{eqnarray*} \sum_{|\delta|\le k}|\Gamma^\delta S_{1,1}[\Xi](t,x)| &\lesssim& \|(1-\psi_2) L[[\psi_1,-\Delta]K_0[\psi_2 \vec{u}_0]](t,\cdot)\|_{H^{2+k}(\Omega)}\\ &\lesssim& \|S[0,0,[\psi_1,-\Delta]K_0[\psi_2 \vec{u}_0]](t,\cdot)\|_{H^{2+k}(\Omega_3)} \\ &\lesssim& \langle t \rangle^{-1} \log(e+t) \sum_{|\delta|\le k+1} \|\langle s\rangle\partial^\delta\bigl( [\psi_1,-\Delta] K_0[\psi_2 \vec{u}_0]\bigr)(s,y) \|_{L^\infty_{t} L^2_{\Omega} } \\ &\lesssim& \langle t \rangle^{-1} \log(e+t)\sum_{|\beta|\le k+2}\| \langle s\rangle \partial^\beta K_0 [\psi_2 \vec{u}_0](s,y) \|_{L^\infty_{t} L^\infty(\Omega_2)}.
\end{eqnarray*} Then we use \eqref{eq.kubota2}; recalling that we are in a bounded $y$-domain, for any $\mu>0$ we get \begin{equation}\label{eq.stimaS11} \jb{t}^{1/2} \jb{t+|x|}^{1/2} \log^{-1} (e+t) \sum_{|\delta|\le k}|\Gamma^\delta S_{1,1}[\Xi](t,x)|\lesssim \mathcal B_{2+\mu,2+k}[\psi_2\vec{u}_0] \lesssim \mathcal A_{2+\mu,2+k}[\vec{u}_0] \end{equation} for any $(t,x)\in [0,T)\times {\overline{\Omega}}$. Now we proceed to estimate $S_3[\Xi]$. Because $(1-\psi_2)\Xi\in X_{3}(T)$ for any $\Xi\in X(T)$, taking $\rho=1-\mu$ in \eqref{KataL01} we get \begin{eqnarray} && \langle t\rangle^{1/2} \sum_{|\delta|\le k}|\Gamma^\delta S_3[\Xi](t,x)|\lesssim \label{eq.stimaS3}\\ &&\quad \lesssim \langle t\rangle^{-1/2+\mu} \Big( \|{u}_0\|_{H^{k+2}(\Omega_{3})}+\|{u}_1\|_{H^{k+1}(\Omega_{3})}+ \log(e+t) \sum_{|\beta|\le k+1} \|\langle s\rangle^{1-\mu} \partial^\beta f(s,y)\|_{L^\infty_tL^\infty(\Omega_3)} \Big) \nonumber \end{eqnarray} for $(t,x)\in [0,T)\times {\overline{\Omega}}$. By using the trivial inequality $\langle s\rangle^{1-\mu}\lesssim |y|^{1/2} W_{1, 1}(s,y)$ in $[0,T)\times\Omega_3$, from \eqref{eq.stimaS12}, \eqref{eq.stimaS11} and \eqref{eq.stimaS3} we can conclude that \begin{align} &\langle t\rangle^{1/2}\sum_{|\delta|\le k}|\Gamma^\delta S_1[\Xi]|+\langle t\rangle^{1/2}\sum_{|\delta|\le k}|\Gamma^\delta S_3[\Xi]|\lesssim \nonumber\\ & \lesssim \jb{t}^{-(1/2)+\mu} \mathcal A_{2+\mu,2+k}[\vec{u}_0]+ \log(e+t)\sum_{|\beta|\le 2+k}\||y|^{1/2}W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \label{eq.finalS1S3log} \end{align} Finally we consider the terms $S_2[\Xi]$, $S_4[\Xi]$. Let us set $g_j[\Xi]=(\partial_t^2-\Delta) S_j[\Xi]$ for $j=2, 4$. Recalling the definition of $L_0$, we find \begin{eqnarray*} g_2[\Xi]&=& -[\psi_2,-\Delta] L\bigl[\,[\psi_1,-\Delta] S_0[\psi_2 \Xi]\bigr];\\ g_4[\Xi]&=& -[\psi_3,-\Delta] S[(1-\psi_2)\Xi].
\end{eqnarray*} Having in mind \eqref{eq.compsi}, we see that $g_2$ and $g_4$ have the same structure as $S_1$ and $S_3$, but contain one more derivative. Therefore, arguing similarly to the derivation of \eqref{eq.finalS1S3log}, we arrive at \begin{align} &\langle t\rangle^{1/2}\sum_{|\delta|\le k}|\Gamma^\delta g_2[\Xi]|+\langle t\rangle^{1/2}\sum_{|\delta|\le k}|\Gamma^\delta g_4[\Xi]|\lesssim \nonumber\\ &\lesssim \jb{t}^{-(1/2)+\mu} \mathcal A_{2+\mu,3+k}[\vec{u}_0]+ \log(e+t)\sum_{|\beta|\le 3+k}\| |y|^{1/2} W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)\|_{L^\infty_tL^\infty_{{\Omega}}}. \label{Star01} \end{align} On the other hand, we have $S_i[\Xi]= L_0[g_i]$ for $i=2,4$. Thus, since $g_2$ and $g_4$ are supported on $\overline{B_4\setminus B_2}$, we are in a position to apply \eqref{KataL02} and we get \begin{eqnarray} && \sum_{|\delta|\le k}\left(|\Gamma^\delta S_2[\Xi]|+|\Gamma^\delta S_4[\Xi]|\right)(t,x) \lesssim\nonumber\\ &&\quad \lesssim \mathcal A_{2+\mu,3+k}[\vec{u}_0]+\log(e+t) \sum_{|\beta|\le 3+k}\| |y|^{1/2} W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)\|_{L^\infty_tL^\infty_\Omega}. \label{eq.finalS2S4} \end{eqnarray} Now \eqref{ba3} follows from \eqref{eq.finalS0}, \eqref{eq.finalS1S3log} and \eqref{eq.finalS2S4}. Next we prove \eqref{ba4}. Trivially one has \begin{eqnarray*} && \sum_{|\delta|\le k} |\Gamma^\delta \partial (\psi_1(x) S_0[\psi_2 \Xi](t,x))|\lesssim \\ && \quad \lesssim \sum_{|\delta|\le k} |\Gamma^\delta \partial S_0[\psi_2 \Xi](t,x)| +\sum_{|\delta|\le k} |\Gamma^\delta \nabla_x \psi_1(x)| |\Gamma^{\delta} S_0[\psi_2 \Xi](t,x)|.
\end{eqnarray*} Since in $\Omega$ one has $|y|\simeq \langle y\rangle$, by \eqref{eq.kubota2} and \eqref{eq.decayMG4} with $\eta=1/2$, we see that \begin{eqnarray*} &&\sum_{|\delta|\le k} |\Gamma^\delta \partial S_0[\psi_2 \Xi](t,x)| \lesssim \jb{t+|x|}^{-1/2}\jb{t-|x|}^{-1/2} {\mathcal A}_{2+\mu,k+1}[\vec{u}_0]+ \\ && \quad +w_{1/2}(t,x) \log (e+t+|x|) \sum_{|\delta|\le k+1}\| |y|^{1/2}W_{1,1}(s,y) \Gamma^\delta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \end{eqnarray*} On the other hand, by \eqref{eq.kubota} and \eqref{eq.decayMG} with $\kappa=1$, we have \begin{eqnarray*} && \jb{t+|x|}^{1/2} \log^{-1}\left(e+\frac{\langle t+|x|\rangle}{\langle t-|x|\rangle}\right) \sum_{|\delta|\le k}\left|\Gamma^\delta S_0[\psi_2\Xi](t,x)\right|\lesssim \\ && \quad \lesssim {\mathcal A}_{3/2,k}[\vec{u}_0]+\log (e+t+|x|) \sum_{|\delta|\le k}\| |y|^{1/2}W_{1,1}(s,y) \Gamma^\delta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \end{eqnarray*} Since the logarithmic term on the left-hand side does not appear when $x \in \Omega_{2}$, we get \begin{eqnarray} && w_{1/2}^{-1}(t,x) \sum_{|\delta|\le k}\left| \Gamma^{\delta} \partial \bigl(\psi_1(x) S_0[\psi_2\Xi]\bigr)(t,x)\right| \nonumber\\ && \quad \lesssim {\mathcal A}_{2+\mu,k+1}[\vec{u}_0]+\log (e+t+|x|) \sum_{|\delta|\le k+1} \| |y|^{1/2}W_{1,1}(s,y) \Gamma^{\delta} f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \label{eq.finalS0bis} \end{eqnarray} Therefore, $\partial(\psi_1S_0[\psi_2\Xi])$ has the desired bound. Let us recall that $|x|$ is bounded on ${\rm supp}\, S_1[\Xi](t,\cdot)\cup {\rm supp}\, S_3[\Xi](t,\cdot)$. In particular we get $ w_{1/2}^{-1}(t,x)\lesssim \langle t\rangle^{1/2}$.
From \eqref{eq.finalS1S3log} we deduce \begin{eqnarray} && \sum_{|\delta|\le k} w_{1/2}^{-1}(t,x) \left( |\Gamma^\delta \partial S_1[\Xi](t,x)|+|\Gamma^\delta \partial S_3[\Xi](t,x)| \right) \lesssim \nonumber \\ && \quad \lesssim \mathcal A_{2+\mu,3+k}[\vec{u}_0]+ \log(e+t) \sum_{|\beta|\le 3+k}\| |y|^{1/2}W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y) \|_{L^\infty_tL^\infty_{\Omega}}. \label{eq.finalS1S3logder} \end{eqnarray} As for $S_4[\Xi]$, we use an estimate similar to \eqref{eq.stimaS3}, with $k$ replaced by $k+1$; that is, \begin{align} & \langle t\rangle^{1-\mu} \sum_{|\delta|\le k+1}|\Gamma^\delta g_4[\Xi](t,x)|\lesssim \nonumber\\ & \lesssim \mathcal A_{2+\mu,k+4}[\vec{u}_0]+ \log(e+t) \sum_{|\beta|\le k+3}\| |y|^{1/2} W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \label{g2g4} \end{align} Applying \eqref{KataL03} with $\rho=1-\mu$ and $\eta=\mu$~($0<\mu\le 1/4$), we find that \begin{eqnarray} && \sum_{|\delta|\le k} w_{1-2\mu}^{-1}(t,x) |\Gamma^\delta \partial S_4[\Xi]|(t,x) \lesssim \nonumber\\ &&\quad \lesssim \mathcal A_{2+\mu,k+4}[\vec{u}_0]+ \log(e+t) \sum_{|\beta|\le k+3}\| |y|^{1/2} W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \label{eq.finalS2S4der} \end{eqnarray} To treat $S_2[\Xi]$, we decompose $g_2[\Xi]$ into $g_{2,1}[\Xi]$ and $g_{2,2}[\Xi]$ as was done for $S_1[\Xi]$. Then $L_0[g_{2,1}]$ can be estimated in the same way as $S_4[\Xi]$. On the other hand, using \eqref{KataL03} with $\rho=1/2$ and $\eta=0$ for $L_0[g_{2,2}]$, we arrive at \begin{eqnarray} && \sum_{|\delta|\le k} w_{1/2}^{-1}(t,x) |\Gamma^\delta \partial S_2[\Xi]|(t,x) \lesssim \nonumber\\ &&\quad \lesssim \mathcal A_{2+\mu,4+k}[\vec{u}_0]+ \log^2(e+t+|x|) \sum_{|\beta|\le 4+k}\| |y|^{1/2} W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}.
\label{eq.finalS2S4derBis} \end{eqnarray} Thus we obtain \eqref{ba4} from \eqref{eq.finalS0bis}, \eqref{eq.finalS1S3logder}, \eqref{eq.finalS2S4der}, and \eqref{eq.finalS2S4derBis}. In order to show \eqref{ba4weak}, we remark that $w_{1/2}\le w_{(1/2)-\eta}$, so that in \eqref{eq.finalS0bis} we can replace $w_{1/2}$ with $w_{(1/2)-\eta}$. Moreover, \eqref{eq.finalS1S3logder} and \eqref{g2g4} hold with $\mu=0$ if we replace $\log(e+t)$ by $\log^2(e+t)$, thanks to \eqref{KataL04} with $\kappa=1$. Therefore, the application of \eqref{KataL03} with $\rho=1/2$ and $0<\eta<1/2$ leads to \eqref{eq.finalS2S4der} with $w_{1/2}^{-1}$ replaced by $w^{-1}_{(1/2)-\eta}$ and $\mu=0$ in the second term of the right-hand side. Hence we get \eqref{ba4weak}. Finally, we prove \eqref{ba4t}. We put $\eta'=\eta/2$. By \eqref{eq.kubota2} and \eqref{eq.decayMG4}, we see that \begin{eqnarray*} &&\sum_{|\delta|\le k+1} |\Gamma^\delta \partial_t (\psi_1(x) S_0[\psi_2 \Xi](t,x))| \lesssim \sum_{|\delta|\le k+1} |\Gamma^\delta \partial_t S_0[\psi_2\Xi](t,x)|\lesssim\\ && \quad \lesssim \jb{t+|x|}^{-1/2}\jb{t-|x|}^{-1/2} {\mathcal A}_{2+\mu,k+2}[\vec{u}_0]+ \\ && \qquad +w_{1-\eta'}(t,x) \log (e+t+|x|) \sum_{|\delta|\le k+2}\| |y|^{1/2}W_{1,1}(s,y) \Gamma^\delta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \end{eqnarray*} Therefore, $\partial_t\bigl(\psi_1S_0[\psi_2\Xi] \bigr)$ has the desired bound because $w_{1-\eta'}\le w_{1-\eta}$. Combining this estimate with \eqref{KataL01}, we obtain the estimate for $S_1[\Xi]$. Indeed, for $0<\eta<1$ we have \begin{eqnarray*} \langle t \rangle^{1-\eta'} \sum_{|\delta|\le k+1}|\Gamma^\delta \partial_t S_1[ \Xi ](t,x)|\lesssim \log(e+t) \sum_{|\beta|\le k+2} \bigl\|\langle s\rangle^{1-\eta'} \partial^\beta \partial_t \bigl([\psi_1, -\Delta]S_0[\psi_2\Xi] \bigr)(s,y)\bigr\|_{L^\infty_tL^\infty({\Omega_2})}.
\end{eqnarray*} Recalling \eqref{eq.compsi}, we can use the estimate of $\partial_t (\psi_1 S_0[\psi_2 \Xi])$ at the cost of two more derivatives. In conclusion, we have \begin{eqnarray*} && \langle t \rangle^{1-\eta'} \sum_{|\delta|\le k+1} |\Gamma^\delta \partial_t S_1[\Xi](t,x)|\lesssim \Theta_{\mu, k+4}(t) \end{eqnarray*} for $(t,x)\in [0,T)\times {\overline{\Omega}}$, where $$ \Theta_{\mu, m}(t):= {\mathcal A}_{2+\mu, m}[\vec{u}_0]+ \log^2(e+t) \sum_{|\delta|\le m}\| |y|^{1/2}W_{1,1}(s,y) \Gamma^\delta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. $$ Since we have $(1-\psi_2)\Xi\in X_{3}(T)$ for any $\Xi\in X(T)$, by using \eqref{KataL01} with $\rho=1-\eta'$ we have $$ \jb{t}^{1-\eta'}\sum_{|\delta|\le k+1} |\Gamma^\delta \partial_t S_3[\Xi](t,x)|\lesssim \Theta_{\mu, k+3}(t). $$ In order to treat $S_2[\Xi]$ and $S_4[\Xi]$, we set $g_j[\Xi]=(\partial_t^2-\Delta) S_j[\Xi]$ for $j=2, 4$ as before. Arguing along the same lines as for $S_1[\Xi]$ and $S_3[\Xi]$, with one more derivative, we arrive at \begin{eqnarray*} \langle t\rangle^{1-\eta'} \sum_{|\delta|\le k+1}|\Gamma^\delta \partial_t g_2[\Xi]|+\langle t\rangle^{1-\eta'} \sum_{|\delta|\le k+1}|\Gamma^\delta \partial_t g_4[\Xi]|\lesssim \Theta_{\mu, k+5}(t). \end{eqnarray*} Let us recall that $g_2$ and $g_4$ are supported on $\overline{B_4\setminus B_2}$ and $ \partial_t S_i[\Xi]= L_0[{\partial_t}g_i]$ for $i=2,4$. We are in a position to apply \eqref{KataL03} (with $\rho=1-\eta'$, and $\eta$ replaced by $\eta'$) and obtain $$ w_{1-\eta}^{-1}(t,x)\sum_{|\delta|\le k} \sum_{i=2, 4} |\Gamma^\delta \partial \partial_t S_i[\Xi](t,x)| \lesssim \sum_{i=2,4}\sum_{|\delta|\le k+1}\|\jb{s}^{1-\eta'}\partial^\delta\partial_t g_i(s,y)\|_{L^\infty_tL^\infty(\Omega_4)} \lesssim \Theta_{\mu, k+5}(t). $$ The proof of Theorem~\ref{main} is complete.
\end{proof} \begin{rem}\label{Rem83} \normalfont The main difference between the Dirichlet and the Neumann boundary cases lies in the logarithmic loss in the local energy decay estimate \eqref{eq.LE}. Due to this term, comparing our result with the one in \cite{KP}, we see that the estimates for $S_2[\Xi]$ and $S_4[\Xi]$ are worse in the Neumann case. \end{rem} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} \renewcommand{\thelemma}{A.\arabic{lemma}} \setcounter{lemma}{0} \renewcommand{\thetheorem}{A.\arabic{theorem}} \setcounter{theorem}{0} \section*{Appendix: A local existence theorem of smooth solutions} Here we sketch a proof of the following local existence theorem for the semilinear case (for the general case, see \cite{SN89}). We underline that the convexity assumption on the obstacle is not necessary for the local existence result. \begin{theorem}\label{LE} Let ${\mathcal O}$ be a bounded obstacle with $\mathcal C^\infty$ boundary and $\Omega=\mathbb R^2\setminus \mathcal O$. For any $\phi$, $\psi\in {\mathcal C}^\infty_0(\overline{\Omega})$ satisfying the compatibility condition of infinite order and \begin{equation} \label{Ori-init} \|\phi\|_{H^{5}(\Omega)}+\|\psi\|_{H^{4}(\Omega)}\le R, \end{equation} there exists a positive constant $T=T(R)$, depending only on $R$, such that the mixed problem \eqref{eq.PMN} admits a unique solution $u\in C^\infty\bigl([0,T)\times \overline{\Omega}\bigr)$. \end{theorem} For a nonnegative integer $s$, we put $$ Y^s_T:=\bigcap_{j=0}^{s} {\mathcal C}^{j}\bigl([0,T]; H^{s-j}(\Omega)\bigr) $$ and $$ \|h\|_{Y^s_T}:=\sum_{j=0}^s \sup_{t\in [0,T]} \|\partial_t^j h(t,\cdot)\|_{H^{s-j}(\Omega)}. $$ Let $v_j$ for $j\ge 0$ be given as in Definition~\ref{CCN}. First we show the following result. \begin{lemma}\label{LEw} Let $m\ge 2$.
Suppose that $(\phi,\psi)\in H^{m+2}(\Omega)\times H^{m+1}(\Omega)$ satisfies the compatibility condition of order $m+1$, that is to say, $\left.\partial_\nu v_j\right|_{\partial\Omega}=0$ for $j\in \{0,1,\ldots, m+1\}$, and \begin{equation} \|\phi\|_{H^{m+2}(\Omega)}+\|\psi\|_{H^{m+1}(\Omega)}\le M. \label{init} \end{equation} Then\footnote{The assumption on the initial data here is just for simplicity; in fact, we can prove the same result for initial data satisfying the compatibility condition of order $m$.} there exists a positive constant $T=T(m, M)$, depending only on $m$ and $M$, such that the mixed problem \eqref{eq.PMN} admits a unique solution $u\in Y_{T}^{m+2}$. \end{lemma} \begin{proof} To begin with, we note that the Sobolev embedding theorem implies \begin{equation} \label{Sob01} \sum_{|\beta|\le [(m+1)/2]+1} \|\partial^\beta h(t,\cdot)\|_{L^\infty_\Omega} \lesssim \sum_{|\beta|\le [(m+1)/2]+3}\|\partial^\beta h(t,\cdot)\|_{L^2_\Omega} \le \sum_{|\beta|\le m+2}\|\partial^\beta h(t,\cdot)\|_{L^2_\Omega} \end{equation} for $m\ge 2$. We show the existence of $u$ by constructing an approximate sequence $\bigl\{u^{(n)}\bigr\}\subset Y_T^{m+2}$ and proving its convergence for suitably small $T>0$. Throughout this proof, $C_M$ denotes a positive constant depending on $M$ but independent of $T$. In order to keep the compatibility condition, we need to choose an appropriate function for the first step: for the moment, we suppose that we can choose a function $u^{(0)}\in Y_T^{m+2}$ satisfying $(\partial_t^j u^{(0)})(0,x)=v_j$ for all $j\in \{0,1,\ldots, m+2\}$. For $n\ge 1$ we inductively define $u^{(n)}$ by \begin{equation} u^{(n)}=S\bigl[\phi, \psi, G\bigl(\partial u^{(n-1)}\bigr) \bigr]. \end{equation} We have to check that $u^{(n)}$ is well defined. Let $v_0^{(n)}:=\phi$, $v_1^{(n)}:=\psi$, and $v_j^{(n)}:=\Delta v_{j-2}^{(n)} +\partial_t^{j-2} G\bigl(\partial u^{(n-1)}\bigr)\bigr|_{t=0}$ for $j\ge 2$.
Suppose that $u^{(n-1)}\in Y_T^{m+2}$ with $(\partial_t^j u^{(n-1)})(0)=v_j$ for $0\le j\le m+2$. Then we can see that $v_j^{(n)}=v_j$ for $0\le j\le m+2$, and consequently the compatibility condition of order $m+1$ is satisfied for the equation of $u^{(n)}$. Since \eqref{Sob01} implies $G(\partial u^{(n-1)})\in Y_T^{m+1}$, the linear theory (see \cite{I68}) shows that $u^{(n)}\in Y_T^{m+2}$. Therefore, by induction with respect to $n$, we see that $\{u^{(n)}\}\subset Y_T^{m+2}$ is well defined, and that $(\partial_t^j u^{(n)})(0)=v_j^{(n)}=v_j$ for $0\le j\le m+2$ and $n\ge 0$. Now we explain how to construct $u^{(0)}$. We can show that $v_j\in H^{m+2-j}(\Omega)$ for $0\le j\le m+2$ by its definition and \eqref{Sob01}. By the well-known extension theorem, there is $V_j\in H^{m+2-j}(\mathbb R^2)$ such that $\left. V_j\right|_\Omega=v_j$ and $\|V_j\|_{H^{m+2-j}(\mathbb R^2)}\lesssim \|v_j\|_{H^{m+2-j}(\Omega)}$. Let $(a_{kl})_{0\le k, l\le m+2}$ be the inverse matrix of $(i^k(l+1)^k)_{0\le k, l\le m+2}$, where $i=\sqrt{-1}$. We put $$ \widehat{V} (t, \xi)=\sum_{k,l=0}^{m+2} \exp(i(k+1)\jb{\xi}t)a_{kl}\widehat{V_l}(\xi)\jb{\xi}^{-l}, $$ where $\widehat{V_l}$ is the Fourier transform of $V_l$. We set $u^{(0)}(t)=\left. V(t) \right|_{\Omega}$ with the inverse Fourier transform $V(t)$ of $\widehat{V}(t)$. One can check that $u^{(0)}(t)$ has the desired property and that $\|u^{(0)}\|_{Y_T^{m+2}}\le C_M$ (see \cite{SN89}, where this kind of function is used to reduce the problem to the case of zero data). Now we are in a position to show that $u^{(n)}$ converges to a local solution of \eqref{eq.PMN} on $[0,T]$ for appropriately chosen $T$. For simplicity of notation, we put $$ \Norm{h(t)}_k=\sum_{j=0}^{m+2-k}\|\partial_t^j h(t)\|_{H^k(\Omega)} $$ for $0\le k\le m+2$. Note that we have $\|h\|_{Y_T^{m+2}}\lesssim \sup_{t\in[0,T]}\sum_{k=0}^{m+2}\Norm{h(t)}_k$. We also set $G_n(t,x)=G\bigl(\partial u^{(n)}(t,x)\bigr)$ for $n\ge 0$.
Combining the elementary inequality $$ \|h(t)\|_{L^2_\Omega}\le \|h(0)\|_{L^2_\Omega}+\int_0^t \|(\partial_t h)(\tau)\|_{L^2_\Omega}d\tau $$ with the standard energy inequality for $\partial_t^j u^{(n)}$ with $0\le j\le m+1$, we get $$ \Norm{u^{(n)}(t)}_0+\Norm{u^{(n)}(t)}_1\le (1+T)\left(C_M+C\sum_{j=0}^{m+1}\int_0^t \|(\partial_t^j G_{n-1})(\tau)\|_{L^2_\Omega}d\tau\right). $$ Writing $$ \Delta \partial^\beta u^{(n)}(t,x)=\partial_t^2 \partial^\beta u^{(n)}-(\partial^\beta G_{n-1})(0,x) {}-\int_0^t (\partial_t\partial^\beta G_{n-1})(\tau,x)d\tau $$ for a multi-index $\beta$ and using the elliptic estimate given in Lemma \ref{elliptic2}, we have $$ \Norm{u^{(n)}(t)}_k\le C \left(\Norm{u^{(n)}(t)}_{k-2}+\Norm{u^{(n)}(t)}_{k-1}+C_M+\sum_{|\alpha|\le k-1}\int_0^t \|(\partial^\alpha G_{n-1})(\tau)\|_{L^2_\Omega}d\tau\right) $$ for $2\le k\le m+2$. By induction we get control of $\Norm{u^{(n)}(t)}_k$ for $0\le k\le m+2$, and obtain \begin{equation} \label{EnergyLocal} \sum_{k=0}^{m+2} \Norm{u^{(n)}(t)}_k\le (1+T)\left(C_M+C\sum_{|\alpha|\le m+1}\int_0^t \|(\partial^\alpha G_{n-1})(\tau)\|_{L^2_\Omega}d\tau\right). \end{equation} It follows from \eqref{Sob01} that \begin{equation} \label{EstNonlinearity01} \sum_{|\alpha|\le m+1} \|(\partial^\alpha G_{n-1})(\tau)\|_{L^2_\Omega} \le C\|u^{(n-1)}\|_{Y^{m+2}_T}^3, \quad 0\le \tau\le T, \end{equation} and \eqref{EnergyLocal} implies $\|u^{(n)}\|_{Y_T^{m+2}}\le (1+T)\left(C_M+CT\|u^{(n-1)}\|_{Y_T^{m+2}}^3\right)$ for $n\ge 1$. From this, choosing appropriate constants $N_M$ and $T_M$ determined by $M$, we can show that $\|u^{(n)}\|_{Y_T^{m+2}}\le N_M$ for all $n\ge 0$, provided that $T\le T_M$. In the same manner, we can also show that there is some $T_M'(\le T_M)$ such that $$ \|u^{(n+1)}-u^{(n)}\|_{Y_T^{m+2}}\le \frac{1}{2}\|u^{(n)}-u^{(n-1)}\|_{Y_T^{m+2}} $$ for all $n\ge 1$, provided that $T\le T_M'$.
Now we see that if $T\le T_M'$, then $\{u^{(n)}\}$ is a Cauchy sequence in $Y_T^{m+2}$, and there is $u\in Y_T^{m+2}$ such that $\lim_{n\to\infty}\|u^{(n)}-u\|_{Y_T^{m+2}}=0$. It is not difficult to see that this $u$ is the desired solution to \eqref{eq.PMN}. Uniqueness is easily obtained from the energy inequality. \end{proof} Theorem~\ref{LE} is a corollary of Lemma~\ref{LEw}. \begin{proof}[Proof of Theorem~\ref{LE}] The assumption on the initial data guarantees that for each $m\ge 3$, there is a positive constant $M_m$ such that $\|\phi\|_{H^{m+2}(\Omega)}+\|\psi\|_{H^{m+1}(\Omega)}\le M_m$. Hence, by Lemma~\ref{LEw}, there is $T_m=T(m, M_m)>0$ such that \eqref{eq.PMN} admits a unique solution $u\in Y_{T_m}^{m+2}$. Note that we may take $T_3=T(3,R)$. We put \begin{equation}\label{eq.bi} C_0:=\|u\|_{Y_{T_3}^{3+2}}. \end{equation} Our aim is to prove that \eqref{eq.PMN} admits a solution $u\in \bigcap_{m\ge 3}Y_{T_3}^{m+2}$. Then the Sobolev embedding theorem implies that $u\in {\mathcal C}^\infty\left([0,T_3]\times \overline{\Omega}\right)$, which is the desired result. For this purpose, we are going to prove the following {\it a priori} estimate: for each $m\ge 3$, if $u\in Y^{m+2}_T$ is a solution to \eqref{eq.PMN} with some $T\in (0, T_3]$, then there is a positive constant $C_m$, independent of $T$, such that \begin{equation} \label{LW_conclusion} \|u\|_{Y_T^{m+2}}\le C_m. \end{equation} Once we obtain this estimate, by applying Lemma~\ref{LEw} repeatedly, we can see that $u\in Y_{T_3}^{m+2}$ for each $m\ge 3$, which concludes the proof of Theorem~\ref{LE}. Now we show \eqref{LW_conclusion} by induction. For $m=3$, \eqref{LW_conclusion} follows immediately from \eqref{eq.bi}. Suppose that we have \eqref{LW_conclusion} for some $m=l\ge 3$.
If we put $$ \Norm{h(t)}_k=\sum_{j=0}^{l+3-k}\|\partial_t^j h(t)\|_{H^k(\Omega)}, $$ then, similarly to \eqref{EnergyLocal}, we obtain $$ \sum_{k=0}^{l+3} \Norm{u(t)}_k\le (1+T_3)\left(C+C\sum_{|\alpha|\le l+2}\int_0^t \bigl\|\partial^\alpha \bigl(G\bigl(\partial u(\tau)\bigr)\bigr)\bigr\|_{L^2_\Omega}d\tau\right). $$ Since $[(m+1)/2]+3\le m+1$ for $m\ge 4$, we have \begin{equation} \label{Sob02} \sum_{|\beta|\le [(m+1)/2]+1} \|\partial^\beta h(t,\cdot)\|_{L^\infty_\Omega} \le C \sum_{|\beta|\le m+1}\|\partial^\beta h(t,\cdot)\|_{L^2_\Omega},\quad m\ge 4, \end{equation} in place of \eqref{Sob01}. Combining this estimate for $m=l+1$ with the inductive assumption, we get $$ \sum_{|\alpha|\le l+2}\bigl\|\partial^\alpha\bigl(G(\partial u(\tau))\bigr)\bigr\|_{L^2_\Omega}\le CC_l^2 \sum_{k=0}^{l+3}\Norm{u(\tau)}_k, $$ which yields $$ \sum_{k=0}^{l+3} \Norm{u(t)}_k\le (1+T_3)\left(C+CC_l^2\int_0^t \sum_{k=0}^{l+3} \Norm{u(\tau)}_kd\tau\right). $$ Now the Gronwall lemma implies $\sum_{k=0}^{l+3} \Norm{u(t)}_k\le C(1+T_3)\exp\bigl(C C_l^2(1+T_3)T_3\bigr)=:C_{l+1}$ for $0\le t\le T(\le T_3)$, which implies $\|u\|_{Y_T^{l+3}}\le C_{l+1}$ for $0\le T\le T_3$. This completes the proof of \eqref{LW_conclusion}. \end{proof} \begin{center} {\bf Acknowledgments} \end{center} The first author is partially supported by Grant-in-Aid for Scientific Research (C) (No. 23540241), JSPS. The second author is partially supported by Grant-in-Aid for Scientific Research (B) (No. 24340024), JSPS. The third author is partially supported by GNAMPA Projects 2010 and 2011, coordinated by Prof. P. D'Ancona. \begin{thebibliography}{abc99} \bibitem[ADN59]{ADN} {\sc S. Agmon, A. Douglis, and L. Nirenberg}, {\it Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions I}, Comm. Pure Appl. Math. {\bf 12} (1959), 623--737. \bibitem[D03]{DiF03} {\sc M.
Di Flaviano}, {\it Lower bounds of the life span of classical solutions to a system of semilinear wave equations in two space dimensions}, J. Math. Anal. Appl. {\bf 281} (2003), 22--45. \bibitem[GL04]{GeLu04} {\sc V. Georgiev and S. Lucente}, {\it Decay for nonlinear Klein-Gordon equations}, NoDEA {\bf 11} (2004), 529--555. \bibitem[G93]{God93} {\sc P. Godin}, {\it Lifespan of solutions of semilinear wave equations in two space dimensions}, Comm. Partial Differential Equations {\bf 18} (1993), 895--916. \bibitem[I68]{I68} {\sc M. Ikawa}, {\it Mixed problems for hyperbolic equations of second order}, J. Math. Soc. Japan {\bf 20} (1968), 580--608. \bibitem[KK08]{KaKu08} {\sc S. Katayama and H. Kubo}, {\it An elementary proof of global existence for nonlinear wave equations in an exterior domain}, J. Math. Soc. Japan {\bf 60} (2008), 1135--1170. \bibitem[KK12]{KaKu12} {\sc S. Katayama and H. Kubo}, {\it Lower bound of the lifespan of solutions to semilinear wave equations in an exterior domain}, arXiv:1009.1188. \bibitem[Kl85]{kl0} {\sc S. Klainerman}, {\it Uniform decay estimates and the Lorentz invariance of the classical wave equation}, Comm. Pure Appl. Math. {\bf 38} (1985), 321--332. \bibitem[K07]{Kub06} {\sc H. Kubo}, {\it Uniform decay estimates for the wave equation in an exterior domain}, \lq\lq Asymptotic analysis and singularities\rq\rq, pp.~31--54, Advanced Studies in Pure Mathematics 47-1, Math. Soc. of Japan, 2007. \bibitem[K12]{KP} {\sc H. Kubo}, {\it Global existence for nonlinear wave equations in an exterior domain in 2D}, arXiv:1204.3725v2. \bibitem[Ku93]{k93} {\sc K. Kubota}, {\it Existence of a global solution to a semi-linear wave equation with initial data of non-compact support in low space dimensions}, Hokkaido Math. J. {\bf 22} (1993), 123--180. \bibitem[M75]{Mor75} {\sc C. S.~Morawetz}, {\it Decay for solutions of the exterior problem for the wave equation}, Comm. Pure Appl. Math. {\bf 28} (1975), 229--264. \bibitem[SS03]{SeSh03} {\sc P.
Secchi and Y. Shibata}, {\it On the decay of solutions to the 2D Neumann exterior problem for the wave equation}, J. Differential Equations {\bf 194} (2003), 221--236. \bibitem[SN89]{SN89} {\sc Y. Shibata and G. Nakamura}, {\it On a local existence theorem of Neumann problem for some quasilinear hyperbolic systems of 2nd order}, Math. Z. {\bf 202} (1989), 1--64. \bibitem[SSW11]{SSW} {\sc H. F. Smith, C. D. Sogge, and C. Wang}, {\it Strichartz estimates for Dirichlet-wave equations in two dimensions with applications}, Trans. Amer. Math. Soc. {\bf 364} (2012), 3329--3347. \bibitem[V75]{Vai75} {\sc B. R. Vainberg}, {\it The short-wave asymptotic behavior of the solutions of stationary problems, and the asymptotic behavior as $t\rightarrow \infty$ of the solutions of nonstationary problems} (Russian), Uspehi Mat. Nauk {\bf 30} (1975), 3--55. \end{thebibliography} \end{document}
\begin{document} \title[Acceleration of Clenshaw-Curtis quadrature]{Convergence rate and acceleration of Clenshaw-Curtis quadrature for functions with endpoint singularities} \author{Haiyong Wang} \address{School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, P. R. China} \email{[email protected]} \thanks{The author was supported by the National Science Foundation of China (No.~11301200).} \subjclass[2010]{Primary 65D32, 41A25, 65B05.} \keywords{Clenshaw-Curtis quadrature, rate of convergence, endpoint singularities, asymptotic expansion, extrapolation acceleration} \begin{abstract} In this paper, we study the rate of convergence of Clenshaw-Curtis quadrature for functions with endpoint singularities in $X^s$, where $X^s$ denotes the space of functions whose Chebyshev coefficients decay asymptotically as $a_k = \mathcal{O}(k^{-s-1})$ for some positive $s$. For such a subclass of $X^s$, we show that the convergence rate of $(n+1)$-point Clenshaw-Curtis quadrature is $\mathcal{O}(n^{-s-2})$. Furthermore, an asymptotic error expansion for Clenshaw-Curtis quadrature is presented which enables us to employ some extrapolation techniques to accelerate its convergence. Numerical examples are provided to confirm our analysis. \end{abstract} \maketitle \section{Introduction} The evaluation of the definite integral \begin{equation} I[f] := \int_{-1}^{1} f(x) dx, \end{equation} is one of the fundamental research topics in numerical analysis \cite{davis1984quadrature}. Given a set of distinct nodes $\{ x_j \}_{j=0}^{n}$, an interpolatory quadrature rule of the form \begin{equation}\label{eq:quadrature} Q_n[f] := \sum_{j=0}^{n} w_j f(x_j), \end{equation} can be constructed to approximate the above integral by requiring $I[f] = Q_n[f]$ whenever $f(x)$ is a polynomial of degree $n$ or less.
In order to obtain a stable quadrature rule, quadrature nodes distributed according to the Chebyshev density $\mu(x) = 1/\sqrt{1-x^2}$ are preferable. Ideal candidates are the roots or extrema of classical orthogonal polynomials such as Chebyshev and Legendre polynomials. The Clenshaw-Curtis quadrature rule, which is the interpolatory quadrature formula based on the extrema of Chebyshev polynomials, has attracted considerable attention in the past few decades. Let $\{ x_j \}_{j=0}^{n}$ be the Clenshaw-Curtis points or the Chebyshev-Lobatto points \begin{equation}\label{eq:chebyshev points} x_j = \cos\left( \frac{j \pi}{n} \right), \quad j=0,\ldots,n. \end{equation} Then the Clenshaw-Curtis quadrature rule is \begin{equation} I_{n}^{C}[f] := \sum_{j=0}^{n} w_j f(x_j), \end{equation} where the quadrature weights are given explicitly by \cite[p.~86]{davis1984quadrature} \begin{equation} w_j = \frac{4\delta_j}{n} \sum_{k=0}^{ [\frac{n}{2}] } \frac{\delta_{2k} }{1-4k^2} \cos\left( \frac{2jk\pi}{n} \right), \end{equation} and the coefficients $\delta_j$ are defined as \begin{equation} \delta_j = \left\{\begin{array}{cc} 1/2, & \mbox{$\textstyle j=0$ or $j=n$},\\ [5pt] 1, & \mbox{otherwise}. \end{array} \right. \end{equation} Here $[\cdot]$ denotes the integer part. It is well known that the quadrature weights are all positive and can be computed in only $\mathcal{O}(n \log n)$ operations by the inverse Fourier transform \cite{waldvogel2006clenshaw}. The $(n+1)$-point Clenshaw-Curtis quadrature rule is exact for polynomials of degree less than or equal to $n$. However, its performance for differentiable functions is comparable with that of the classical Gauss-Legendre quadrature, which is exact for polynomials of degree up to $2n+1$.
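As an illustration (our own sketch, not part of the paper), the explicit cosine-sum formula for the weights $w_j$ above can be implemented directly; the helper name `clenshaw_curtis` and the exactness check are ours:

```python
import numpy as np

def clenshaw_curtis(n):
    """Nodes and weights of the (n+1)-point Clenshaw-Curtis rule on [-1, 1],
    computed from the explicit cosine-sum formula for w_j."""
    j = np.arange(n + 1)
    x = np.cos(j * np.pi / n)                      # Chebyshev-Lobatto points
    delta = np.ones(n + 1)
    delta[0] = delta[n] = 0.5                      # delta_0 = delta_n = 1/2
    k = np.arange(n // 2 + 1)
    d2k = np.ones(k.size)
    d2k[0] = 0.5
    if n % 2 == 0:
        d2k[-1] = 0.5                              # delta_{2k} = 1/2 when 2k = n
    # w_j = (4 delta_j / n) * sum_k delta_{2k} cos(2 j k pi / n) / (1 - 4 k^2)
    w = (4 * delta / n) * (np.cos(2 * np.pi * np.outer(j, k) / n)
                           @ (d2k / (1 - 4 * k**2)))
    return x, w

x, w = clenshaw_curtis(8)
# Interpolatory rule: exact for all polynomials of degree <= n.
for p in range(9):
    exact = 2.0 / (p + 1) if p % 2 == 0 else 0.0   # integral of x^p on [-1, 1]
    assert abs(np.dot(w, x**p) - exact) < 1e-12
```

For large $n$ one would compute the weights with an FFT as in \cite{waldvogel2006clenshaw}; the direct summation above is only meant to make the formula concrete.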
This remarkable accuracy makes it extraordinarily attractive and many studies have been done on the error behaviour of the Clenshaw-Curtis quadrature (see, for example, \cite{mason2003chebyshev,OHara1968ccquad,Riess1972ccquadrature,trefethen2008gausscc,trefethen2013atap,xiang2012clenshawcurtis}). In particular, Trefethen in \cite{trefethen2008gausscc} presented a comprehensive comparison of error bounds of Gauss and Clenshaw-Curtis quadrature rules for analytic and differentiable functions. For the latter, an $\mathcal{O}(n^{-s})$ bound was established for functions belonging to $X^s$, where $X^s$ denotes the space of functions whose Chebyshev coefficients decay asymptotically as $a_k = \mathcal{O}(k^{-s-1})$ for some positive $s$. More recently, Xiang and Bornemann in \cite{xiang2012clenshawcurtis} presented a more accurate estimate and showed that the optimal rate of convergence of the Clenshaw-Curtis quadrature rule for $f \in X^s$ is $\mathcal{O}(n^{-s-1})$. In this work, we are interested in the rate of convergence of Clenshaw-Curtis quadrature for the integrals $\int_{-1}^{1} f(x) dx$, where the integrands $f(x)$ have singularities at one or both endpoints. More specifically, we assume that \begin{equation}\label{eq:algebraic functions} f(x) = (1 - x)^{\alpha} (1 + x)^{\beta} g(x), \end{equation} where $\alpha, \beta \geq 0$ are not integers simultaneously and $g(x) \in C^{\infty}[-1,1]$. Note that the assumption $\alpha, \beta \geq 0$ is due to the fact that Clenshaw-Curtis quadrature needs to evaluate the values of the integrand $f(x)$ at both endpoints. When such functions belong to the space $X^s$, where $s$ is determined by the strength of the singularities of $f$, we will show that the optimal rate of convergence of Clenshaw-Curtis quadrature for evaluating the integrals $\int_{-1}^{1} f(x) dx$ is $\mathcal{O}(n^{-s-2})$, which is one power of $n$ better than that given in \cite{xiang2012clenshawcurtis}.
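As a quick numerical illustration of the claimed rate (a sketch under the assumptions above, not part of the formal development), one can apply the rule to $f(x)=\sqrt{1-x}$, for which $\alpha=\tfrac{1}{2}$, $\beta=0$ and hence $s=2\alpha=1$: the predicted rate $\mathcal{O}(n^{-3})$ means doubling $n$ should reduce the error by a factor of roughly $2^3 = 8$.

```python
import numpy as np

def cc_quad(f, n):
    # (n+1)-point Clenshaw-Curtis rule built from the explicit weight formula
    j = np.arange(n + 1)
    x = np.cos(j * np.pi / n)
    delta = np.ones(n + 1)
    delta[0] = delta[n] = 0.5
    k = np.arange(n // 2 + 1)
    dk = np.ones(n // 2 + 1)
    dk[0] = 0.5
    if n % 2 == 0:
        dk[-1] = 0.5
    w = (4.0 * delta / n) * (np.cos(2.0 * np.outer(j, k) * np.pi / n)
                             @ (dk / (1.0 - 4.0 * k**2)))
    return w @ f(x)

f = lambda x: np.sqrt(1.0 - x)      # alpha = 1/2, beta = 0, so s = 1
I = 4.0 * np.sqrt(2.0) / 3.0        # exact value of the integral
errs = {n: abs(cc_quad(f, n) - I) for n in (32, 64, 128)}
# if E_n = O(n^{-s-2}) = O(n^{-3}), both ratios should be close to 8
print(errs[32] / errs[64], errs[64] / errs[128])
```

The observed ratios being clearly larger than $4 = 2^2$ distinguishes the $\mathcal{O}(n^{-3})$ rate from the generic $\mathcal{O}(n^{-s-1}) = \mathcal{O}(n^{-2})$ bound for $X^1$.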
Furthermore, we also extend our analysis to functions with algebraic-logarithmic endpoint singularities of the form \begin{equation} f(x) = (1 - x)^{\alpha} (1 + x)^{\beta} \log(1 - x) g(x), \end{equation} where $\alpha$ is a positive integer, $\beta \geq 0$ and $g(x) \in C^{\infty}[-1,1]$. Similarly, we show that the optimal rate of convergence of Clenshaw-Curtis quadrature is also $\mathcal{O}(n^{-s-2})$ if $f(x)$ belongs to $X^s$. Apart from the close connection with the FFT, another particularly significant advantage of Clenshaw-Curtis quadrature is that its quadrature nodes are nested. This means that it is possible to accelerate the convergence of Clenshaw-Curtis quadrature by using some extrapolation schemes. In Section \ref{sec:acceleration}, we shall explore the asymptotic expansion of the error of Clenshaw-Curtis quadrature for functions with endpoint singularities. An asymptotic series in negative powers of $n$ is derived for even $n$, which allows us to employ some extrapolation schemes, such as the Richardson extrapolation approach, to accelerate the convergence of Clenshaw-Curtis quadrature. Thus, compared with Gauss-Legendre quadrature, Clenshaw-Curtis quadrature is a more attractive scheme for computing the integrals whose integrands have endpoint singularities. The rate of convergence of Gauss-Legendre quadrature for functions with endpoint singularities has been investigated extensively in the past decades (see \cite{chawla1968gauss,lubinsky1984gauss,rabinowitz1968gauss,rabinowitz1986gauss,sidi2009variable,sidi2009gauss,verlinden1997gauss} and references therein).
For example, for functions like $f(x) = (1-x)^{\alpha} g(x)$ where $\alpha>-1$ is not an integer and $g(x)$ is sufficiently smooth, Rabinowitz in \cite{rabinowitz1968gauss,rabinowitz1986gauss} and Lubinsky and Rabinowitz in \cite{lubinsky1984gauss} have shown that the asymptotic error estimate of the $n$-point Gauss-Legendre quadrature is $\mathcal{O}(n^{-2\alpha-2})$ as $n \rightarrow \infty$. On the other hand, Verlinden in \cite{verlinden1997gauss} and Sidi in \cite{sidi2009gauss} further studied the asymptotic expansion of the error of the Gauss-Legendre quadrature for functions with algebraic and algebraic-logarithmic endpoint singularities. Although the rate of convergence and asymptotic error expansion of Gauss-Legendre quadrature for functions with endpoint singularities have been extensively explored, we are still unable to find the corresponding result for the Clenshaw-Curtis quadrature in the literature. This motivates the author to conduct the current research. The rest of the paper is organized as follows. In the next section, we shall show that the rate of convergence of Clenshaw-Curtis quadrature can be improved to $\mathcal{O}(n^{-s-2})$ if the Chebyshev coefficients of functions in $X^s$ satisfy a more specific condition; see Theorem \ref{thm:superconvergence of cc} for details. In Section \ref{sec:asymptotic of Chebyshev coeff} we discuss the asymptotic behaviour of Chebyshev coefficients of functions with endpoint singularities, including algebraic and algebraic-logarithmic singularities. An asymptotic error expansion for Clenshaw-Curtis quadrature is presented in Section \ref{sec:acceleration}. This allows us to use some extrapolation schemes for convergence acceleration. We present some numerical examples in Section \ref{sec:example} and give some concluding remarks in Section \ref{sec:conclusion}.
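For readers who want a concrete picture of the extrapolation step alluded to above, here is a generic one-step Richardson sketch on a synthetic model sequence (the leading exponent $p$ is assumed known; the exponents appropriate for Clenshaw-Curtis are derived in Section \ref{sec:acceleration}).

```python
def richardson(Q, n, p):
    """One Richardson step: eliminate the leading c*n^{-p} error term of
    a quantity Q(n) = I + c*n^{-p} + o(n^{-p}) by combining n and 2n."""
    return (2**p * Q(2 * n) - Q(n)) / (2**p - 1)

# synthetic model sequence with limit 2 and leading error term 3/n
Q = lambda n: 2.0 + 3.0 / n + 5.0 / n**2
n = 100
print(abs(Q(n) - 2.0))                   # ~3e-2: raw error, dominated by 3/n
print(abs(richardson(Q, n, 1) - 2.0))    # ~2.5e-4: 1/n term removed
```

The same combination with the appropriate $p$ applies verbatim to nested Clenshaw-Curtis values $I_n^{C}[f]$ and $I_{2n}^{C}[f]$, which reuse all $n+1$ function evaluations.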
\section{Conditions for enhanced convergence rate}\label{sec:improved rate of clenshaw} In this section, we establish sufficient conditions under which the rate of convergence of Clenshaw-Curtis quadrature for functions in $X^s$ can be further enhanced. We commence our analysis with a helpful lemma. \begin{lemma}\label{lem:estimate H} For each $k \geq 1$, we have \begin{equation}\label{eq:estimate H} \sum_{r=1}^{n} \frac{r^{2k}}{4r^2 - 1} = \frac{1}{4^{k-1}} \frac{n(n+1)}{2(2n+1)} + \sum_{j=1}^{2k-1} \nu_j^{k} n^{2k-j} , \end{equation} where \begin{equation}\label{eq:odd coeff H} \nu_{2j+1}^{k} = \frac{1}{\Gamma(2k-2j)} \sum_{p=1}^{j+1} \frac{\Gamma(2k-2p+1) }{ \Gamma(2j-2p+3) } \frac{B_{2j-2p+2} }{ 4^{p}} , \quad 0 \leq j \leq k-2, \end{equation} and \begin{equation}\label{eq:last coeff H} \nu_{2k-1}^{k} = \sum_{p=1}^{k-1} \frac{1}{4^{p} } B_{2k-2p}. \end{equation} Here $B_j$ denotes the $j$-th Bernoulli number ($B_0 = 1, B_2 = \frac{1}{6}, \ldots$). Moreover, \begin{equation}\label{eq:expansion coeff H} \nu_{2j}^{k} = \frac{1}{2^{2j+1}} , \quad 1 \leq j \leq k-1. \end{equation} \end{lemma} \begin{proof} Let $H(n,k)$ denote the sum on the left hand side of \eqref{eq:estimate H}. It is easy to derive the following recurrence relation \begin{equation}\label{eq:rec H} 4 H(n,j+1) = H(n,j) + \sum_{r=1}^{n} r^{2j}. \end{equation} Let $S(n,j)$ denote the last sum on the right hand side of the above equation. Multiplying both sides of the above equation by $4^{j-1}$ and summing over $j$ from $1$ to $k-1$, we obtain \begin{equation}\label{eq:formula H} H(n,k) = \frac{1}{4^{k-1}} H(n,1) + \sum_{j=1}^{k-1} \frac{1}{4^{k-j}} S(n,j), \end{equation} where the sum on the right hand side vanishes when $k=1$. For $H(n,1)$, straightforward computation gives \begin{equation}\label{eq:H one} H(n,1) = \frac{n(n+1)}{2(2n+1)}.
\end{equation} Moreover, using Faulhaber's formula \cite[Corollary~3.4]{javed2013trapezoidal}, we have \begin{equation}\label{eq:faulhaber} S(n,k) = \frac{n^{2k+1}}{2k+1} + \frac{n^{2k}}{2} + \sum_{j=1}^{k} \frac{\Gamma(2k+1) B_{2j}}{\Gamma(2j+1) \Gamma(2k-2j+2)} n^{2k-2j+1}. \end{equation} Substituting \eqref{eq:H one} and \eqref{eq:faulhaber} into \eqref{eq:formula H} gives the desired result. \end{proof} In the following we shall present sufficient conditions for the enhanced rate of convergence of Clenshaw-Curtis quadrature. \begin{theorem}\label{thm:superconvergence of cc} Suppose that $f \in X^s$ and that the Chebyshev coefficients of $f(x)$ decay asymptotically as \begin{equation}\label{eq:cheb coefficients constant sign} a_{m} = \frac{c(s)}{ m^{s+1}} + \mathcal{O}(m^{-s-2}), \quad m\geq m_0, \end{equation} or \begin{equation}\label{eq:cheb coefficients alternate sign} a_m = (-1)^m \frac{c(s)}{m^{s+1}} + \mathcal{O}(m^{-s-2}), \quad m\geq m_0, \end{equation} where $c(s)$ is independent of $m$. Then, for $n\geq \max\{m_0, 2\}$, the rate of convergence of the Clenshaw-Curtis quadrature rule can be improved to \begin{equation}\label{eq:rate of cc} E_n^{C}(f) = \mathcal{O}(n^{-s-2}). \end{equation} \end{theorem} \begin{proof} In \cite{xiang2012clenshawcurtis}, the authors have presented a simple and elegant proof of the rate of convergence of Clenshaw-Curtis quadrature. For the sake of clarity, we shall briefly describe their idea and then give the key observation that leads to \eqref{eq:rate of cc}. Define \begin{equation} \Delta(n) = \{ m~ |~ m = 2jn + 2r,~ j\geq 1, ~ 1-n \leq 2r \leq n \}. \end{equation} Note that the Clenshaw-Curtis rule is exact for polynomials of degree $n$ and $E_n^{C}(f) = 0$ for odd functions $f$. The error of the Clenshaw-Curtis quadrature rule can be written as \begin{equation}\label{eq:error of cc} E_n^{C}(f) = \sum_{m \in \Delta(n) } a_m E_n^{C}(T_m), \end{equation} where $T_j(x)$ denotes the Chebyshev polynomial of degree $j$.
Moreover, using the aliasing condition, we have that \begin{equation}\label{eq:aliasing} E_n^{C}(T_{m}) = \frac{2}{ 1 - m^2 } - \frac{2}{1-4r^2}, \quad m\in \Delta(n). \end{equation} Substituting this into the remainder $E_n^{C}(f)$ yields \[ E_n^{C}(f) = S_1 + S_2, \] where \begin{equation} S_1 = \sum_{m \in \Delta(n)} \frac{2a_{m} }{ 1 - m^2 }, \quad S_2 = \sum_{m \in \Delta(n)} \frac{2a_{m} }{ 4r^2 - 1 }. \end{equation} From the assumption that $f\in X^s$, it is easy to deduce that $S_1 = \mathcal{O}(n^{-s-2})$. The remaining task is to give an accurate estimate of $S_2$. Using the following identities \begin{equation} \sum_{r=-\infty}^{\infty} \frac{1}{|4r^2-1|} = 2, \quad \sum_{j=1}^{\infty} \frac{1}{j^{s+1}} = \zeta(s+1), \end{equation} where $\zeta(\cdot)$ is the Riemann zeta function, Xiang and Bornemann in \cite{xiang2012clenshawcurtis} deduced that \begin{eqnarray}\label{eq:estimate s2} | S_2 | & \leq & \sum_{m \in \Delta(n)} \frac{2 |a_{m} | }{ | 4r^2 - 1 | } \nonumber \\ & = & \sum_{j=1}^{\infty} \sum_{ 1-n \leq 2r \leq n} \frac{2 | a_{2j n + 2r} | }{ |4r^2 - 1| } = \mathcal{O}(n^{-s-1}). \end{eqnarray} Hence, they proved that the convergence rate of the Clenshaw-Curtis quadrature for $f \in X^s$ is $\mathcal{O}(n^{-s-1})$. In the following, we shall show that if $f\in X^s$ and (\ref{eq:cheb coefficients constant sign}) or (\ref{eq:cheb coefficients alternate sign}) is satisfied, the rate of convergence of the Clenshaw-Curtis quadrature can be further improved. The key observation is that the estimate of $S_2$ can be further improved to $\mathcal{O}(n^{-s-2})$. Here we only discuss the case (\ref{eq:cheb coefficients constant sign}); the case (\ref{eq:cheb coefficients alternate sign}) can be analyzed similarly.
For $n \geq \max\{m_0, 2\}$, substituting the asymptotic of $a_m$ into $S_2$, we have \begin{eqnarray} S_2 & = & \sum_{ m\in \Delta(n) } \frac{2a_{m} }{ 4r^2 - 1 } \nonumber \\ & = & \sum_{ m\in \Delta(n) } \frac{2 c(s) }{ (4r^2 - 1) m^{s+1} } + \sum_{ m\in \Delta(n) } \frac{2 }{ (4r^2 - 1)} \mathcal{O}(m^{-s-2}). \end{eqnarray} In analogy to the estimate of \eqref{eq:estimate s2}, it is easy to deduce that the last sum in the above equation is $\mathcal{O}(n^{-s-2})$, and thus we get \begin{align}\label{eq:expansion of s2} S_2 & = \sum_{ m\in \Delta(n) } \frac{2 c(s) }{ (4r^2 - 1) m^{s+1} } + \mathcal{O}(n^{-s-2}) \nonumber \\ & = \sum_{j=1}^{\infty} \sum_{ 1-n \leq 2r \leq n} \frac{ 2 c(s) }{(4r^2 - 1)(2jn+2r)^{s+1}} + \mathcal{O}(n^{-s-2}) \nonumber \\ & = \frac{2c(s)}{(2n)^{s+1}} \sum_{j=1}^{\infty} \frac{1}{j^{s+1}} \sum_{ 1-n \leq 2r \leq n} \frac{1}{ 4r^2 - 1 } \left(1 + \frac{r}{ j n } \right)^{-s-1} + \mathcal{O}(n^{-s-2}). \end{align} We now consider the asymptotic behaviour of the double sum in the above equation. First, we consider the case that $n$ is even. Rearranging the inner sum, we obtain \begin{align}\label{eq:rearrange sum even} & \sum_{ 1-n \leq 2r \leq n} \frac{1}{ 4r^2 - 1} \left(1 + \frac{r}{ j n } \right)^{-s-1} \nonumber \\ & = -1 + \sum_{k=1}^{n/2} \frac{1}{4k^2 - 1} \left[ \left( 1 + \frac{k}{jn} \right)^{-s-1} + \left( 1 - \frac{k}{jn} \right)^{-s-1} \right] \\ &~~~~~~~~~~~ - \frac{1}{n^2 - 1} \left( 1 - \frac{1}{2j} \right)^{-s-1}. 
\nonumber \end{align} Utilizing the following binomial series expansion \begin{equation}\label{eq:binomial expansion} (1+x)^{-\beta} = \sum_{k=0}^{\infty} (-1)^k \frac{ (\beta)_k }{k!} x^k, \quad |x|<1, \end{equation} where $(z)_n$ is the Pochhammer symbol, we further get \begin{align} \sum_{ 1-n \leq 2r \leq n} \frac{1}{ 4r^2 - 1 } \left(1 + \frac{r}{ j n } \right)^{-s-1} & = -1 + \sum_{k=1}^{n/2} \frac{2}{4k^2 - 1} \sum_{q=0}^{\infty} \frac{(s+1)_{2q}}{(2q)!} \left( \frac{k}{jn} \right)^{2q} \nonumber \\ &~~~~~~~~~~~ - \frac{1}{n^2 - 1} \left( 1 - \frac{1}{2j} \right)^{-s-1}. \end{align} This together with the following identities \begin{equation} \sum_{k=1}^{n/2} \frac{2}{4k^2 - 1} = \frac{n}{n+1}, \quad \sum_{j=1}^{\infty} \left( j-\frac{1}{2} \right)^{-s-1} = (2^{s+1} - 1) \zeta(s+1), \end{equation} gives \begin{align} S_2 & =\frac{2c(s)}{(2n)^{s+1}} \bigg( - \left( \frac{2^{s+1}-1}{n^2-1} + \frac{1}{n+1} \right)\zeta(s+1) \nonumber \\ &~~~~~~ + \sum_{j=1}^{\infty} \frac{1}{j^{s+1}} \sum_{k=1}^{n/2} \frac{2}{4k^2 - 1} \sum_{q=1}^{\infty} \frac{(s+1)_{2q}}{(2q)!} \left( \frac{k}{jn} \right)^{2q} \bigg) + \mathcal{O}(n^{-s-2}). \end{align} Next, we explore the asymptotic behaviour of the last term inside the bracket. By the results of Lemma \ref{lem:estimate H}, we have \begin{align} &~~~~~ \sum_{j=1}^{\infty} \frac{1}{j^{s+1}} \sum_{k=1}^{n/2} \frac{2}{4k^2 - 1} \sum_{q=1}^{\infty} \frac{(s+1)_{2q}}{(2q)!} \left( \frac{k}{jn} \right)^{2q} \nonumber \\ & = 2 \sum_{j=1}^{\infty} \frac{1}{j^{s+1}} \sum_{q=1}^{\infty} \frac{(s+1)_{2q}}{(2q)! (jn)^{2q} } \left( \frac{n(n+2)}{4^q 2(n+1) } + \sum_{k=1}^{2q-1} \nu_{k}^{q} \left( \frac{n}{2} \right)^{2q-k} \right) \nonumber \\ & = \frac{n(n+2)}{n+1} \sum_{j=1}^{\infty} \frac{1}{j^{s+1}}\sum_{q=1}^{\infty} \frac{(s+1)_{2q}}{(2q)!(2jn)^{2q}} \nonumber \\ &~~~ + 2\sum_{j=1}^{\infty} \frac{1}{j^{s+1}} \sum_{q=1}^{\infty} \frac{(s+1)_{2q}}{(2q)!
(jn)^{2q}} \left( \sum_{k=1}^{q-1} \nu_{2k}^{q} \left( \frac{n}{2} \right)^{2q-2k} + \sum_{k=1}^{q} \nu_{2k-1}^{q} \left( \frac{n}{2} \right)^{2q-2k+1} \right) . \nonumber \end{align} Now using the explicit expression of $\nu_{k}^{q}$ and after some elementary computations, we arrive at \begin{align} &~~ \sum_{j=1}^{\infty} \frac{1}{j^{s+1}} \sum_{k=1}^{n/2} \frac{2}{4k^2 - 1} \sum_{q=1}^{\infty} \frac{(s+1)_{2q}}{(2q)!} \left( \frac{k}{jn} \right)^{2q} \nonumber \\ & = \frac{1}{n^2-1} \sum_{j=1}^{\infty} \frac{1}{j^{s+1}} \sum_{q=1}^{\infty} \frac{ (s+1)_{2q} }{ (2q)! (2j)^{2q} } \nonumber \\ &~~~ + \left( \frac{n(n+2)}{n+1} - \frac{n^2}{n^2 - 1} \right) \sum_{k = 1}^{\infty} \frac{(s+1)_{2k} \zeta(s+2k+1)}{(2k)! (2n)^{2k}} \\ & ~~~ + \sum_{k=0}^{\infty} \frac{1}{n^{2k+1}} \sum_{j=1}^{\infty} \frac{1}{j^{s+1}} \sum_{\ell=0}^{\infty} \frac{ \nu_{2k+1}^{k+\ell+1} (s+1)_{2\ell+2k+2} }{2^{2\ell} (2\ell+2k+2)! j^{2\ell+2k+2} } \nonumber \\ & = \mathcal{O}(n^{-1}). \nonumber \end{align} Hence, we immediately deduce that \begin{align} S_2 & = \mathcal{O}(n^{-s-2}), \quad n \rightarrow\infty. \end{align} Thus, the desired result follows. For the case that $n$ is odd, similar to \eqref{eq:rearrange sum even}, rearranging the summation yields \begin{align} & \sum_{ 1-n \leq 2r \leq n} \frac{1}{ 4r^2 - 1 } \left(1 + \frac{r}{ j n } \right)^{-s-1} \nonumber \\ & = -1 + \sum_{k=1}^{\frac{n-1}{2}} \frac{1}{4k^2 - 1} \left[ \left( 1 + \frac{k}{jn} \right)^{-s-1} + \left( 1 - \frac{k}{jn} \right)^{-s-1} \right]. \end{align} The remaining argument proceeds similarly to the case where $n$ is even, and we omit the details. This proves the theorem. \end{proof} \begin{remark} Functions satisfying \eqref{eq:cheb coefficients constant sign} or \eqref{eq:cheb coefficients alternate sign} form only a subclass of $X^s$. We will show in the next section that typical examples are functions with endpoint singularities.
\end{remark} \begin{remark}\label{eq:superconv of cc additional term} If additional terms of the form \begin{equation} b_{m,s} = \pm\frac{d(s)}{m^{s+1+\mu}},~~~ \mbox{ or }~~~ \pm (-1)^m \frac{d(s)}{m^{s+1+\mu}}, \end{equation} where $d(s)$ is independent of $m$ and $0 < \mu < 1$, are added to \eqref{eq:cheb coefficients constant sign} or \eqref{eq:cheb coefficients alternate sign}, then, similarly to the estimate of $S_2$, we can deduce that \begin{equation} \sum_{ m\in \Delta(n) } \frac{2b_{m,s} }{ 4r^2 - 1 } = \mathcal{O}(n^{-s-2-\mu}). \end{equation} Hence, the rate of convergence of the Clenshaw-Curtis quadrature rule is also $E_n^{C}(f) = \mathcal{O}(n^{-s-2})$. \end{remark} \section{Asymptotics of Chebyshev coefficients of functions with endpoint singularities}\label{sec:asymptotic of Chebyshev coeff} We have shown that the rate of convergence of Clenshaw-Curtis quadrature can be improved if the Chebyshev coefficients of $f(x)$ satisfy \eqref{eq:cheb coefficients constant sign} or \eqref{eq:cheb coefficients alternate sign}. It is natural to raise the following question: what kind of functions satisfy these conditions? In this section we shall give some typical examples, including functions with algebraic and algebraic-logarithmic singularities. Moreover, for each class of functions, we also establish the corresponding rate of convergence of Clenshaw-Curtis quadrature. \subsection{Functions with algebraic singularities} Elliott in \cite{elliott1965chebyshev} and Tuan and Elliott in \cite{tuan1972spectral} have investigated the asymptotics of the Chebyshev coefficients of the following singular functions \begin{equation} f(x) = ( 1 \pm x)^{\alpha} g(x), \end{equation} where $\alpha>0$ is not an integer and $g(x)$ is analytic in a region containing the interval $[-1,1]$.
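Such coefficient asymptotics are easy to probe numerically. As a sketch (the helper `cheb_coeffs` is introduced here purely for this check), the coefficients of $f(x) = (1-x)^{1/2}$ are compared against the leading term $-\frac{2^{1-\alpha} g(1)\sin(\alpha\pi)\Gamma(2\alpha+1)}{\pi n^{2\alpha+1}}$ quoted below from \cite{tuan1972spectral}:

```python
import numpy as np
from math import gamma, pi, sin

def cheb_coeffs(f, N):
    # Coefficients of the degree-N Chebyshev interpolant at the Lobatto
    # points; for n << N these closely approximate the true coefficients.
    j = np.arange(N + 1)
    v = f(np.cos(j * pi / N))
    trap = np.where((j == 0) | (j == N), 0.5, 1.0)   # halve the end terms
    a = (2.0 / N) * (np.cos(pi * np.outer(j, j) / N) @ (v * trap))
    a[0] /= 2.0
    a[N] /= 2.0
    return a

alpha = 0.5
a = cheb_coeffs(lambda x: (1.0 - x)**alpha, N=512)   # f = (1-x)^{1/2}, g = 1
n = 64
predicted = (-2**(1 - alpha) * sin(alpha * pi) * gamma(2 * alpha + 1)
             / (pi * n**(2 * alpha + 1)))
print(a[n] / predicted)      # ratio approaches 1 as n grows (with n << N)
```

For this particular $f$ the exact coefficients $a_n = \sqrt{2}/(\pi(\tfrac14 - n^2))$ are known in closed form, so the agreement can also be verified by hand.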
For example, for $f(x) = ( 1 - x)^{\alpha} g(x)$, it was shown that its Chebyshev coefficients satisfy \cite[Eqn.~(4.13)]{tuan1972spectral} \begin{equation} a_n = - \frac{2^{1-\alpha} g(1) \sin(\alpha\pi) }{\pi n^{2\alpha+1}} \Gamma(2\alpha+1) + \mathcal{O}(n^{-2\alpha-3}). \end{equation} Obviously, functions of this kind satisfy the conditions of Theorem \ref{thm:superconvergence of cc}. For functions with endpoint singularities of the following general form \begin{equation}\label{eq:f both endpoint singularities} f(x) = (1 - x)^{\alpha} (1 + x)^{\beta} g(x), \end{equation} where $\alpha, \beta$ are positive real numbers that are not integers and $g(x)$ is analytic in a region containing both endpoints, Tuan and Elliott in \cite{tuan1972spectral} proposed a complicated technique to separate the singularities with the aid of auxiliary functions and then derived the asymptotics of the Chebyshev coefficients. For more details, we refer the reader to \cite{tuan1972spectral}. In the following we shall present a simpler approach to analyze the asymptotics of the Chebyshev coefficients of $f(x) = (1 - x)^{\alpha} (1 + x)^{\beta} g(x)$, where $\alpha, \beta > - \frac{1}{2}$ are not integers simultaneously. Meanwhile, for the sake of simplicity, we always assume that $g(x) \in C^{\infty}[-1,1]$. However, the generalization to the case $g(x) \in C^m[-1,1]$ for some positive integer $m$ is mathematically straightforward. Note that the assumptions we consider here are more general than those considered in \cite{elliott1965chebyshev,tuan1972spectral}. Before commencing our analysis, we give a useful lemma. \begin{lemma}\label{lem:asymp for singular oscill integrals} Suppose that \begin{align*} f(x)=(x-a)^{\gamma} (b-x)^{\delta} h(x) \end{align*} with $\gamma,\delta > -1$ and $h(x)$ is $m$ times continuously differentiable for $x \in [a,b]$. Furthermore, define \begin{align*} \phi(x) = (x-a)^{\gamma} h(x), \quad \psi(x)=(b-x)^{\delta} h(x).
\end{align*} Then for large $\lambda$, \begin{align} \int_{a}^{b} f(x) e^{i\lambda x} dx & \sim e^{i \lambda a} \sum_{k=0}^{m-1}\frac{\psi^{(k)}(a) e^{i \frac{\pi}{2} (k+\gamma+1)} \Gamma(k+\gamma+1)}{\lambda^{k+\gamma+1}k!} \nonumber \\ &~~~~~~ - e^{i \lambda b} \sum_{k=0}^{m-1}\frac{\phi^{(k)}(b) e^{i \frac{\pi}{2} (k-\delta+1)} \Gamma(k+\delta+1)}{\lambda^{k+\delta+1}k!} \nonumber \\ &~~~ + \mathcal{O}(\lambda^{-m-1-\min\{\gamma,\delta\}}), \quad \lambda \rightarrow \infty. \end{align} \end{lemma} \begin{proof} The first proof of this result was given by Erd\'{e}lyi in \cite{erdelyi1955fourier}. The idea was based on neutralizer functions together with integration by parts \cite[Thm.~3]{erdelyi1955fourier}. If $h(x)$ is analytic in a neighborhood of the interval $[a,b]$, an alternative proof based on contour integration was given by Lyness \cite[Thm.~1.12]{lyness1972fourier}. \end{proof} We now give the asymptotics of the Chebyshev coefficients of functions with algebraic endpoint singularities. \begin{theorem}\label{thm:asymptotic albebraic singularities} For the function $f(x) = (1 - x)^{\alpha} (1 + x)^{\beta} g(x)$, where $\alpha, \beta > -\frac{1}{2}$ are not integers simultaneously and $g(x) \in C^{\infty}[-1,1]$, the Chebyshev coefficients satisfy \begin{align}\label{eq:asymptotic expansion Cheby coeff} a_n & \sim \frac{2^{\alpha+\beta+1}}{\pi} \bigg\{- \sin(\alpha\pi) \sum_{k=0}^{\infty} \frac{ (-1)^k \hat{\psi}^{(2k)}(0) \Gamma(2k+2\alpha+1) }{ n^{2k+2\alpha+1} (2k)! } \nonumber\\ & - (-1)^n \sin(\beta\pi) \sum_{k=0}^{\infty} \frac{ (-1)^k \hat{\phi}^{(2k)}(\pi) \Gamma(2k+2\beta+1) }{ n^{2k+2\beta+1} (2k)!
} \bigg\}, \quad n \rightarrow \infty, \end{align} where \begin{equation}\label{eq:auxiliary functions} \hat{\psi}(t) = (\pi - t )^{2\beta} \hat{g}( t) , \quad \hat{\phi}(t) = t^{2\alpha} \hat{g}(t), \end{equation} and \begin{equation}\label{eq:hat g} \hat{g}(t) = \left( t^{-1} \sin(t/2)\right)^{2\alpha} \left( (\pi - t)^{-1} \cos(t/2) \right)^{2\beta} g(\cos(t)). \end{equation} The values $\hat{\psi}^{(2s)}(0)$ and $\hat{\phi}^{(2s)}(\pi)$ can be calculated explicitly using L'H\^{o}pital's rule. Here we give the first several values \begin{equation} \hat{\psi}(0) = \frac{g(1)}{2^{2\alpha}}, \quad \hat{\phi}(\pi) = \frac{g(-1)}{2^{2\beta}}, \end{equation} and \begin{equation} \hat{\psi}{''}(0) = - \frac{g(1)}{2^{2\alpha+1}} \left( \frac{\alpha}{3} + \beta \right) - \frac{ g'(1)}{2^{2\alpha}} , \quad \hat{\phi}{''}(\pi) = - \frac{g(-1)}{2^{2\beta+1}} \left( \alpha + \frac{\beta}{3} \right) + \frac{ g'(-1)}{2^{2\beta}}. \end{equation} \end{theorem} \begin{proof} First, making the change of variable $x = \cos t$, we have \begin{align}\label{eq:transformed cheby coefficients} a_n &= \frac{2}{\pi} \int_{0}^{\pi} f(\cos t) \cos(nt) dt \nonumber \\ & = \frac{2^{\alpha+\beta+1}}{\pi} \int_{0}^{\pi} t^{2\alpha} (\pi - t)^{2\beta} \hat{g}(t) \cos(nt) dt, \end{align} where $ \hat{g}(t) $ is defined as in \eqref{eq:hat g}. It is easy to see that $\hat{g}(t) \in C^{\infty}[0, \pi]$. On the other hand, we observe that $\hat{\psi}(t)$ defined in \eqref{eq:auxiliary functions} is infinitely differentiable at $t=0$ while $\hat{\phi}(t)$ is infinitely differentiable at $t=\pi$, and \[ \hat{\psi}(-t) = \hat{\psi}(t), \quad \hat{\phi}(\pi + t) = \hat{\phi}(\pi - t). \] Hence, it holds that \begin{equation} \hat{\psi}^{(2k+1)}(0) = 0, \quad \hat{\phi}^{(2k+1)}(\pi) = 0, \quad k \geq 0. \end{equation} The desired result then follows from applying Lemma \ref{lem:asymp for singular oscill integrals} with $m = \infty$ to the integral \eqref{eq:transformed cheby coefficients}.
\end{proof} \begin{remark} The assumption $\alpha, \beta > - \frac{1}{2}$ cannot be relaxed to $\alpha, \beta > -1$ since the Chebyshev coefficients $a_n$ of the function $f(x) = (1-x)^{\alpha} (1+x)^{\beta} g(x)$ will be divergent if one of $\alpha$ and $\beta$ is less than or equal to $-\frac{1}{2}$. \end{remark} \begin{corollary} Under the same assumptions as in Theorem \ref{thm:asymptotic albebraic singularities}, if neither $\alpha$ nor $\beta$ is a nonnegative integer, then the leading term of the Chebyshev coefficients of $f(x)$ is given by \begin{equation}\label{eq:asymptotic one} a_n = \left\{\begin{array}{ccc} {\displaystyle (-1)^{n+1} \frac{2^{\alpha-\beta+1} g(-1) \sin(\beta\pi)}{\pi n^{2\beta+1}}\Gamma(2\beta+1) + \mathcal{O}(n^{- \min\{ 2\alpha+1, 2\beta+3 \}}) }, & \mbox{$\alpha>\beta$}, \\ [10pt] {\displaystyle - \frac{2 \sin(\alpha\pi) \Gamma(2\alpha+1)}{\pi n^{2\alpha+1}} \left( g(1) + (-1)^n g(-1) \right) + \mathcal{O}(n^{-2\alpha-3})}, & \mbox{$\alpha = \beta$}, \\ [10pt] {\displaystyle - \frac{2^{\beta-\alpha+1} g(1) \sin(\alpha\pi)}{\pi n^{2\alpha+1}}\Gamma(2\alpha+1) + \mathcal{O}(n^{- \min\{ 2\alpha+3, 2\beta+1 \}})}, & \mbox{$\alpha < \beta$}. \end{array} \right. \end{equation} Further, if one of $\alpha$ and $\beta$ is an integer, then \begin{equation}\label{eq:asymptotic two} a_n = \left\{\begin{array}{cc} {\displaystyle (-1)^{n+1} \frac{2^{\alpha-\beta+1} g(-1) \sin(\beta\pi)}{\pi n^{2\beta+1}}\Gamma(2\beta+1) + \mathcal{O}(n^{- 2\beta-3 })}, & \mbox{if $\alpha$ is an integer}, \\ [10pt] {\displaystyle - \frac{2^{\beta-\alpha+1} g(1) \sin(\alpha\pi)}{\pi n^{2\alpha+1}}\Gamma(2\alpha+1) + \mathcal{O}(n^{-2\alpha-3})}, & \mbox{if $\beta$ is an integer}. \end{array} \right. \end{equation} \end{corollary} \begin{proof} It follows immediately from Theorem \ref{thm:asymptotic albebraic singularities} by taking the leading term of \eqref{eq:asymptotic expansion Cheby coeff}.
\end{proof} Having derived the leading term of the asymptotics of the Chebyshev coefficients, we can define the parameter $s$ such that $f$ belongs to the space $X^s$. Note that our aim is to establish the rate of convergence of Clenshaw-Curtis quadrature for the integral $\int_{-1}^{1} f(x) dx$. Thus we restrict our attention to the case $\alpha, \beta \geq 0$. \begin{definition}\label{def: s one} For the function $f(x) = (1-x)^{\alpha} (1+x)^{\beta} g(x)$, where $\alpha, \beta \geq 0$ are not integers simultaneously and $g(x) \in C^{\infty}[-1,1]$, define \begin{equation}\label{eq:class s} s = \left\{\begin{array}{ccc} {\displaystyle 2\min\{ \alpha, \beta \} }, & \mbox{if $\alpha,\beta$ are not integers}, \\ [10pt] {\displaystyle 2\alpha}, & \mbox{if $\beta$ is an integer}, \\ [10pt] {\displaystyle 2\beta}, & \mbox{if $\alpha$ is an integer}. \end{array} \right. \end{equation} Then, from equations \eqref{eq:asymptotic one} and \eqref{eq:asymptotic two} we can deduce immediately that $f \in X^s$. \end{definition} \begin{theorem}\label{thm:rate algebraic endpoint} Let $f(x)$ satisfy the assumptions of Definition \ref{def: s one}. Then, the rate of convergence of $(n+1)$-point Clenshaw-Curtis quadrature for the integral $\int_{-1}^{1} f(x) dx$ is \begin{equation} E_n^{C}[f] = \mathcal{O}(n^{-s-2}), \end{equation} where $s$ is defined as in \eqref{eq:class s}. \end{theorem} \begin{proof} If one of $\alpha$ and $\beta$ is an integer, then we observe from equation \eqref{eq:asymptotic two} that the Chebyshev coefficients satisfy the condition of Theorem \ref{thm:superconvergence of cc}; therefore the desired result holds. If neither $\alpha$ nor $\beta$ is a nonnegative integer, then the desired result holds when $\alpha = \beta$ due to the second equation of \eqref{eq:asymptotic one}. We now consider the case $\alpha>\beta$: if $\alpha \geq \beta + 1$, then the desired result follows by noting the first equation of \eqref{eq:asymptotic one}.
If $\beta < \alpha < \beta + 1$, using Theorem \ref{thm:asymptotic albebraic singularities} we find that \begin{align*} a_n & = (-1)^{n+1} \frac{2^{\alpha-\beta+1} g(-1) \sin(\beta\pi)}{\pi n^{2\beta+1}}\Gamma(2\beta+1) \\ &~~~~~~~~~~ - \frac{2^{\beta-\alpha+1} g(1) \sin(\alpha\pi)}{\pi n^{2\alpha+1}}\Gamma(2\alpha+1) + \mathcal{O}(n^{-2\beta-3}). \end{align*} This together with Remark \ref{eq:superconv of cc additional term} gives the desired result. Thus, the proof is completed since the argument in the case $\alpha < \beta$ is similar. \end{proof} \begin{corollary} For functions of the form $f(x) = (1 \pm x)^{\alpha} g(x)$, where $\alpha > 0$ is not an integer and $g(x) \in C^{\infty}[-1,1]$, we see from Theorem \ref{thm:rate algebraic endpoint} that the convergence rate of Clenshaw-Curtis quadrature is $E_n^{C}[f] = \mathcal{O}(n^{-2\alpha-2})$, which is the same as that of Gauss-Legendre quadrature. \end{corollary} \begin{remark} Not all functions with algebraic endpoint singularities can be expressed in the form $f(x) = (1 - x)^{\alpha} (1+x)^{\beta} g(x)$. Typical examples are $f(x) = \log( 1 + \sin\sqrt{1 - x})$ and $f(x) = \arccos(x^{2m})$ where $m$ is a positive integer. The latter function has square root singularities at $x = \pm 1$. However, if we formally write $f(x) = \sqrt{1 - x^2} g(x)$ with $g(x) = \arccos(x^{2m})/\sqrt{1 - x^2}$, then it is easy to verify that $g(x) \in C^{\infty}[-1,1]$. Thus, the result of Theorem \ref{thm:rate algebraic endpoint} still holds for this function; see Example \ref{example: arecos} for details. \end{remark} \subsection{Functions with algebraic-logarithmic singularities} In this subsection we consider the asymptotics of the Chebyshev coefficients for functions of the following form \begin{equation}\label{eq:log singularity} f(x) = (1 - x)^{\alpha} (1 + x)^{\beta} \log(1 - x) g(x), \end{equation} where $\alpha >0 $, $\beta > -\frac{1}{2}$ and $g(x)\in C^{\infty}[-1,1]$.
\begin{lemma}\label{lem:asymp for logarithm oscillartory integrals} Suppose that \begin{align*} f(x)=(x-a)^{\gamma}(b-x)^{\delta} \log(x-a) h(x) \end{align*} with $\gamma > 0, \delta > -1$ and $h(x)$ is $m$ times continuously differentiable for $x \in [a,b]$. Define \begin{align*} \phi(x) = (x-a)^{\gamma} \log(x-a) h(x), \quad \psi(x)=(b-x)^{\delta} h(x). \end{align*} Then for large $\lambda$, \begin{align} \int_{a}^{b} f(x) e^{i\lambda x} dx & = e^{i \lambda a} \sum_{k=0}^{m-1}\frac{\psi^{(k)}(a) e^{i \frac{\pi}{2} (k+\gamma+1)} \Gamma(k+\gamma+1)}{\lambda^{k+\gamma+1}k!} \left( \tilde{\psi}(k+\gamma+1) - \log\lambda + \frac{\pi}{2} i \right) \nonumber \\ &~~~ - e^{i \lambda b} \sum_{k=0}^{m-1}\frac{\phi^{(k)}(b) e^{i \frac{\pi}{2} (k-\delta+1)} \Gamma(k+\delta+1)}{\lambda^{k+\delta+1}k!} \nonumber\\ &~~~~~ + \mathcal{O}( \lambda^{-m-\gamma-1} \log\lambda ) + \mathcal{O}( \lambda^{-m-\delta-1}), \nonumber \end{align} where $\tilde{\psi}(x)$ is the digamma function. \end{lemma} \begin{proof} The idea of Erd\'{e}lyi's proof can be extended to the current setting in a straightforward way; see \cite{erdelyi1956fourier} for details. If $h(x)$ is analytic, the desired result can be derived by using the technique of contour integration \cite[Appendix]{lyness1972fourier}. \end{proof} Using the above Lemma, we obtain the following. 
\begin{theorem}\label{thm:asymptotic logarithmic singularity} For the function $f(x) = (1 - x)^{\alpha} (1 + x)^{\beta} \log(1 - x) g(x)$ with $\alpha > 0$, $\beta > -\frac{1}{2}$ and $g(x) \in C^{\infty}[-1,1]$, its Chebyshev coefficients are given asymptotically by \begin{align}\label{eq:left logarithm} a_n &\sim - \frac{2^{\alpha+\beta+1}}{\pi} \sin(\alpha\pi)\sum_{s = 0}^{\infty} \frac{ (-1)^s \psi_1^{(2s)}(0) \Gamma(2s+2\alpha+1) }{n^{2s+2\alpha+1} (2s)!} \nonumber \\ &~~~~~ -(-1)^n \frac{2^{\alpha+\beta+1}}{\pi} \sin(\beta\pi) \sum_{s = 0}^{\infty} \frac{(-1)^s \phi_1^{(2s)}(\pi) \Gamma(2s+2\beta+1) }{n^{2s+2\beta+1} (2s)!} \nonumber \\ &~~~~~ - \frac{2^{\alpha+\beta+2}}{\pi} \sum_{s = 0}^{\infty} \frac{ (-1)^s \hat{\psi}^{(2s)}(0) \Gamma(2s+2\alpha+1) }{n^{2s+2\alpha+1} (2s)!} \bigg( \sin(\alpha\pi) (\tilde{\psi}(2s+2\alpha+1) - \log n ) \nonumber \\ &~~~~~ + \frac{\pi}{2}\cos(\alpha\pi) \bigg), \quad n\rightarrow\infty, \end{align} where \begin{equation}\label{def:psi and phi one} \psi_1(t) = \hat{\psi}(t) \log\left( 2(t^{-1} \sin(t/2) )^2 \right), \quad \phi_1(t) = \hat{\phi}(t) \log\left( 2 (\sin(t/2))^2 \right). \end{equation} \end{theorem} \begin{proof} The idea is similar to the proof of Theorem \ref{thm:asymptotic albebraic singularities}. The change of variable $x = \cos t$ results in \begin{align} a_n &= \frac{2}{\pi} \int_{0}^{\pi} f(\cos t) \cos(nt) dt \nonumber \\ &= \frac{2}{\pi} \int_{0}^{\pi} (1 - \cos t)^{\alpha} (1 + \cos t)^{\beta} \log(1 - \cos t) g(\cos t) \cos(nt) dt \nonumber \\ & = \frac{2^{\alpha+\beta+2}}{\pi} \int_{0}^{\pi} t^{2\alpha} (\pi - t)^{2\beta} \log(t) \hat{g}(t) \cos(nt) dt \nonumber \\ &~~~~~~~~~~~ + \frac{2^{\alpha+\beta+1}}{\pi} \int_{0}^{\pi} t^{2\alpha} (\pi - t)^{2\beta} \tilde{g}(t) \cos(nt) dt, \end{align} where $\hat{g}(t)$ is defined as in \eqref{eq:hat g} and \begin{equation} \tilde{g}(t) = \hat{g}(t) \log\left( 2(t^{-1} \sin(t/2) )^2 \right), \end{equation} which is infinitely differentiable on $[0,\pi]$. 
Since $\psi_1(t)$ defined in \eqref{def:psi and phi one} is an even function, we have \begin{equation} \psi_1^{(2k+1)}(0) = 0, \quad k \geq 0. \end{equation} This together with Lemmas \ref{lem:asymp for logarithm oscillartory integrals} and \ref{lem:asymp for singular oscill integrals} gives \begin{align}\label{eq:asymptotic algebraic logarithm} a_n &\sim \frac{2^{\alpha+\beta+1}}{\pi} \bigg\{ - \sin(\alpha\pi)\sum_{k = 0}^{\infty} \frac{ (-1)^k \psi_1^{(2k)}(0) \Gamma(2k+2\alpha+1) }{n^{2k+2\alpha+1} (2k)!} \nonumber \\ &~~~~~ -(-1)^n \sum_{k = 0}^{\infty} \frac{ \phi_2^{(k)}(\pi) \Gamma(k+2\beta+1) }{n^{k+2\beta+1} k!} \sin\left( \beta\pi - \frac{k}{2}\pi \right) \bigg\} \nonumber \\ &~~~~~ + \frac{2^{\alpha+\beta+2}}{\pi} \bigg\{ - \sum_{k = 0}^{\infty} \frac{ (-1)^k \hat{\psi}^{(2k)}(0) \Gamma(2k+2\alpha+1) }{n^{2k+2\alpha+1} (2k)!} \bigg( \sin(\alpha\pi) (\tilde{\psi}(2k+2\alpha+1) - \log n ) \nonumber \\ &~~~~~ + \frac{\pi}{2}\cos(\alpha\pi) \bigg) -(-1)^n \sum_{k = 0}^{\infty} \frac{\phi_3^{(k)}(\pi) \Gamma(k+2\beta+1) }{n^{k+2\beta+1} k!} \sin\left(\beta\pi - \frac{k}{2}\pi \right) \bigg\}, \end{align} where \begin{align} \phi_2(t) = \hat{\phi}(t) \log\left( 2(t^{-1} \sin(t/2) )^2 \right), \quad \phi_3(t) = \hat{\phi}(t) \log t. \end{align} By \eqref{def:psi and phi one}, we get $\phi_1(t) = \phi_2(t) + 2 \phi_3(t)$. On the other hand, for $k \geq 0$, we have \begin{align*} \phi_1^{(2k+1)}(\pi) &= \left( \hat{\phi}(t) \log\left( 2 (\sin(t/2))^2 \right) \right)^{(2k+1)}_{t = \pi} \\ & = \sum_{j=0}^{2k+1} \binom{2k+1}{j} \hat{\phi}^{(j)}(\pi) \left( \log\left( 2 (\sin(t/2))^2 \right) \right)^{(2k+1-j)}_{t = \pi} \\ & = 0, \end{align*} where we have used the fact that \[ \hat{\phi}^{(2j+1)}(\pi) = 0, \quad \left( \log\left( 2 (\sin(t/2))^2 \right) \right)^{(2j+1)}(\pi) = 0, \quad j \geq 0. \] Hence the second and the last sums on the right-hand side of \eqref{eq:asymptotic algebraic logarithm} combine into a single sum over $\phi_1^{(k)}(\pi)$, in which the odd-order terms vanish; for $k = 2s$ we have $\sin(\beta\pi - s\pi) = (-1)^s \sin(\beta\pi)$, which gives the desired result.
\end{proof} \begin{corollary} Under the same assumptions as in Theorem \ref{thm:asymptotic logarithmic singularity}, the following hold. If $\alpha$ is a positive integer and $\beta$ is a nonnegative integer, then \begin{align}\label{eq:logarithm one} a_n &\sim - 2^{\alpha+\beta+1} \cos(\alpha\pi) \sum_{k = 0}^{\infty} \frac{ (-1)^k \hat{\psi}^{(2k)}(0) \Gamma(2k+2\alpha+1) }{n^{2k+2\alpha+1} (2k)!}. \end{align} If $\alpha$ is a positive integer and $\beta$ is not a nonnegative integer, then \begin{equation}\label{eq:logarithm two} a_n = \left\{\begin{array}{ccc} {\displaystyle \frac{ (-1)^{n+1} 2^{\alpha-\beta+1} g(-1) \sin(\beta\pi)}{\pi n^{2\beta+1}}\Gamma(2\beta+1) \log2 + \mathcal{O}(n^{- \min\{ 2\alpha+1, 2\beta+3 \}}) }, & \mbox{$\alpha > \beta$}, \\ [10pt] {\displaystyle - \frac{2^{\beta-\alpha+1} g(1) \cos(\alpha\pi)}{ n^{2\alpha+1}}\Gamma(2\alpha+1) + \mathcal{O}(n^{- \min\{ 2\alpha+3, 2\beta+1 \}})}, & \mbox{$\alpha < \beta$}. \end{array} \right. \end{equation} \end{corollary} \begin{proof} It follows directly from Theorem \ref{thm:asymptotic logarithmic singularity}. \end{proof} Again, we define the parameter $s$ such that $f \in X^s$. Meanwhile, we restrict our attention to the case $\alpha > 0$ and $\beta \geq 0$ since our aim is to derive the optimal rate of convergence of Clenshaw-Curtis quadrature for the integral $\int_{-1}^{1} f(x) dx$. \begin{definition}\label{def: s two} Let $f(x) = (1 - x)^{\alpha} (1 + x)^{\beta} \log(1 - x) g(x)$ with $\alpha$ a positive integer, $\beta \geq 0$, and $g(x) \in C^{\infty}[-1,1]$. Define \begin{equation}\label{def:s logarithm} s = \left\{\begin{array}{ccc} {\displaystyle 2\alpha}, & \mbox{if $\beta$ is an integer}, \\ [10pt] {\displaystyle 2 \min\{\alpha, \beta \} }, & \mbox{otherwise}. \end{array} \right. \end{equation} From the above corollary we see that $f \in X^s$. \end{definition} \begin{theorem} Let $f(x)$ satisfy the assumptions of Definition \ref{def: s two}.
Then, the rate of convergence of $(n+1)$-point Clenshaw-Curtis quadrature for the integral $\int_{-1}^{1} f(x) dx$ is \begin{equation} E_n^{C}[f] = \mathcal{O}(n^{-s-2}), \end{equation} where $s$ is defined as in \eqref{def:s logarithm}. \end{theorem} \begin{proof} The proof is similar to the proof of Theorem \ref{thm:rate algebraic endpoint}. \end{proof} \begin{remark} For the case that $\alpha$ is not a positive integer, from \eqref{eq:left logarithm} we observe that there exists a factor $\log n$ in the third summation. Therefore, it is reasonable to expect that the convergence rate of Clenshaw-Curtis quadrature would be slightly slower than $\mathcal{O}(n^{-s-2})$. \end{remark} \section{Extrapolation methods for accelerating Clenshaw-Curtis quadrature}\label{sec:acceleration} Compared with Gauss quadrature, an essential feature of Clenshaw-Curtis quadrature is that its quadrature nodes are nested. This implies that the previous function values can be stored and reused when the number of quadrature nodes is doubled. Therefore, Clenshaw-Curtis quadrature is a particularly suitable candidate for implementing an automatic quadrature rule in practical computations. In this section, we shall extend our analysis in Section 2 and show that it is possible to accelerate the rate of convergence of Clenshaw-Curtis quadrature for functions with endpoint singularities. Asymptotic expansions of the error of Gauss-Legendre quadrature for functions with endpoint singularities have been investigated in \cite{sidi2009gauss,verlinden1997gauss}.
For example, when $f(x) = (1 - x)^{\alpha} g(x)$ with $\Re(\alpha) > -1$ and $g(x)$ analytic in a region containing the interval $[-1,1]$, Verlinden proved that the error of the $n$-point Gauss-Legendre quadrature admits the following asymptotic expansion \cite[Thm.~1]{verlinden1997gauss} \begin{equation} E_{n}^{G}[f] \sim \sum_{k = 1}^{\infty} c_k h^{k+\alpha}, \quad n \rightarrow\infty, \end{equation} where $h = (n + 1/2)^{-2}$ and $c_k$ are constants independent of $n$. Furthermore, some extrapolation schemes were proposed to accelerate the convergence of Gauss quadrature. Moreover, Verlinden's results reveal an important connection between Gauss quadrature and extrapolation schemes. However, accelerating Gauss quadrature is expensive since its quadrature nodes are completely distinct whenever $n$ is changed. In the following, we shall show that the error of Clenshaw-Curtis quadrature also admits a similar expansion. This together with the nested property of Clenshaw-Curtis points implies that Clenshaw-Curtis quadrature is more advantageous than its Gauss-Legendre counterpart. Since the Clenshaw-Curtis points are nested when $n$ is doubled, we restrict our attention to the case of even $n$. \begin{theorem}\label{thm:expansion error clenshaw} Suppose that the Chebyshev coefficients of $f(x)$ satisfy \begin{equation}\label{eq:asymp expan cheby coeff 1} a_n \sim \sum_{k = 0}^{\infty} \frac{\mu_k}{ n^{d_k} }, \quad n\rightarrow\infty, \end{equation} or \begin{equation}\label{eq:asymp expan cheby coeff 2} a_n \sim (-1)^n \sum_{k = 0}^{\infty} \frac{\mu_k}{ n^{d_k} }, \quad n\rightarrow\infty, \end{equation} where $\mu_k$ are constants independent of $n$ and $0 < d_0 < d_1 < \cdots$.
Then, for even $n$, the error of the Clenshaw-Curtis quadrature can be expanded as \begin{equation}\label{eq:expansion error clenshaw} E_{n}^{C}[f] \sim \sum_{k = 0}^{\infty} \sum_{j=0}^{\infty} \frac{\varsigma_{j,k} }{n^{d_k+2j+1}}, \end{equation} where $\varsigma_{j,k}$ are constants independent of $n$. \end{theorem} \begin{proof} We only prove the case \eqref{eq:asymp expan cheby coeff 1} since the case \eqref{eq:asymp expan cheby coeff 2} can be proved similarly. According to \eqref{eq:error of cc} and \eqref{eq:asymp expan cheby coeff 1}, we have \begin{align} E_n^{C}[f] &= \sum_{m \in \Delta(n) } a_m E_n^{C}(T_m) \nonumber \\ & \sim \sum_{m \in \Delta(n) } \left( \sum_{k = 0}^{\infty} \frac{\mu_k}{ m^{d_k} } \right) E_n^{C}(T_m) \nonumber \\ & = \sum_{k=0}^{\infty} \mu_k \sum_{m \in \Delta(n) } \frac{1}{m^{d_k}} E_n^{C}(T_m). \end{align} Moreover, from \eqref{eq:aliasing} we have \begin{align} \sum_{m \in \Delta(n) } \frac{1}{m^{d_k}} E_n^{C}(T_m) = \sum_{m \in \Delta(n) } \frac{2}{m^{d_k} (1 - m^2) } + \sum_{m \in \Delta(n) } \frac{2}{m^{d_k}(4r^2 - 1)}. \end{align} In the following, we analyze the asymptotic behavior of the two sums on the right-hand side of the above equation. For the first sum, it is easy to see that \begin{align} \sum_{m \in \Delta(n) } \frac{2}{m^{d_k} (1 - m^2) } & = -2 \sum_{j = 0}^{\infty} \sum_{m \in \Delta(n) } \frac{1}{m^{d_k+2j+2} } \nonumber \\ & = \frac{2}{n^{d_k} (n^2 - 1)} - \sum_{j= 0}^{\infty} \frac{1}{2^{d_k+2j+1}} \zeta\left(d_k+2j+2, \frac{n}{2}\right) , \end{align} where $\zeta(s,a)$ is the Hurwitz zeta function.
Recalling the asymptotic expansion of $\zeta(s,a)$ \cite[p.~25]{magnus1966formulas}, \begin{equation*} \zeta(s,a) \sim \frac{1}{(s-1)a^{s-1}} + \frac{1}{2a^{s}} + \frac{1}{\Gamma(s)} \sum_{\ell=1}^{\infty} \frac{B_{2\ell}}{(2\ell)!} \frac{\Gamma(s+2\ell-1)}{ a^{2\ell+s-1}},\quad a \rightarrow\infty, \end{equation*} it follows that \begin{align}\label{eq:asymptotic error part one} \sum_{m \in \Delta(n) } \frac{2}{m^{d_k} (1 - m^2) } &\sim \frac{1}{n^{d_k} (n^2 - 1)} - \sum_{j=0}^{\infty} \sum_{\ell=0}^{\infty} \frac{4^{\ell} B_{2\ell} \Gamma(2\ell+2j+d_k+1) }{ (2\ell)! \Gamma(d_k+2j+2) n^{2\ell+2j+d_k+1} }. \end{align} For the second sum, by means of the estimate of $S_2$ with $c(s) = 1$ and $s+1$ replaced by $d_k$, we see that \begin{align} \sum_{m \in \Delta(n) } \frac{2}{m^{d_k}(4r^2 - 1)} &= \frac{2}{(2n)^{d_k}} \sum_{j=1}^{\infty} \frac{1}{j^{d_k}} \sum_{ 1-n \leq 2r \leq n} \frac{1}{ 4r^2 - 1} \left( 1 + \frac{r}{jn} \right)^{-d_k} \nonumber \\ & = \frac{2}{(2n)^{d_k}} \bigg\{ - \left( \frac{2^{d_k}-1}{n^2-1} + \frac{1}{n+1} \right)\zeta(d_k) \nonumber \\ &~~~ + \frac{1}{n^2-1} \sum_{j=1}^{\infty} \frac{1}{j^{d_k}} \sum_{q=1}^{\infty} \frac{ (d_k)_{2q} }{ (2q)! (2j)^{2q} } \nonumber \\ &~~~ + \left( \frac{n(n+2)}{n+1} - \frac{n^2}{n^2 - 1} \right) \sum_{\ell = 1}^{\infty} \frac{(d_k)_{2\ell} \zeta(2\ell+d_k)}{(2\ell)! (2n)^{2\ell}} \nonumber \\ & ~~~ + \sum_{i=0}^{\infty} \frac{1}{n^{2i+1}} \sum_{j=1}^{\infty} \frac{1}{j^{d_k}} \sum_{\ell=0}^{\infty} \frac{ \nu_{2i+1}^{i+\ell+1} (d_k)_{2\ell+2i+2} }{2^{2\ell} (2\ell+2i+2)! j^{2\ell+2i+2} } \bigg\}. \nonumber \end{align} Observe that \begin{align} \sum_{j=1}^{\infty} \frac{1}{j^{d_k}} \sum_{q=1}^{\infty} \frac{ (d_k)_{2q} }{ (2q)! (2j)^{2q} } &= \sum_{j=1}^{\infty} \frac{1}{j^{d_k}} \sum_{q=0}^{\infty} \frac{ (d_k)_{2q} }{ (2q)!
(2j)^{2q} } - \zeta(d_k) \nonumber \\ & = \frac{1}{2} \sum_{j=1}^{\infty} \frac{1}{j^{d_k}} \left( \left(1 + \frac{1}{2j} \right)^{-d_k} + \left(1 - \frac{1}{2j} \right)^{-d_k} \right) - \zeta(d_k) \nonumber \\ & = 2^{d_k}\zeta(d_k) - 2\zeta(d_k) - 2^{d_k - 1}. \end{align} Consequently, \begin{align}\label{eq:asymptotic error part two} \sum_{m \in \Delta(n) } \frac{2}{m^{d_k}(4r^2 - 1)} &= \frac{2}{(2n)^{d_k}} \bigg\{ - \left( \frac{1}{n+1} + \frac{1}{n^2 - 1} \right) \zeta(d_k) - \frac{2^{d_k - 1}}{n^2 - 1} \nonumber \\ &~~~ + \left( \frac{n(n+2)}{n+1} - \frac{n^2}{n^2 - 1} \right) \sum_{\ell = 1}^{\infty} \frac{(d_k)_{2\ell} \zeta(2\ell+d_k)}{(2\ell)! (2n)^{2\ell}} \nonumber \\ & ~~~ + \sum_{i=0}^{\infty} \frac{1}{n^{2i+1}} \sum_{j=1}^{\infty} \frac{1}{j^{d_k}} \sum_{\ell=0}^{\infty} \frac{ \nu_{2i+1}^{i+\ell+1} (d_k)_{2\ell+2i+2} }{2^{2\ell} (2\ell+2i+2)! j^{2\ell+2i+2} } \bigg\}. \end{align} Combining this with \eqref{eq:asymptotic error part one} gives \begin{align}\label{eq:asymptotic error expansion} \sum_{m \in \Delta(n) } \frac{1}{m^{d_k}} E_n^{C}(T_m) & \sim - \sum_{j=0}^{\infty} \sum_{\ell=0}^{\infty} \frac{4^{\ell} B_{2\ell} \Gamma(2\ell+2j+d_k+1) }{ (2\ell)! \Gamma(d_k+2j+2) n^{2\ell+2j+d_k+1} } \nonumber \\ &~~~ - \frac{2^{1-d_k}}{n^{d_k}} \left( \frac{1}{n+1} + \frac{1}{n^2 - 1} \right) \zeta(d_k) \nonumber \\ &~~~ + \frac{2^{1-d_k}}{n^{d_k}} \left( \frac{n(n+2)}{n+1} - \frac{n^2}{n^2 - 1} \right) \sum_{\ell = 1}^{\infty} \frac{(d_k)_{2\ell} \zeta(2\ell+d_k)}{(2\ell)! (2n)^{2\ell}} \nonumber \\ &~~~ + \frac{2^{1-d_k}}{n^{d_k}} \sum_{i=0}^{\infty} \frac{1}{n^{2i+1}} \sum_{j=1}^{\infty} \frac{1}{j^{d_k}} \sum_{\ell=0}^{\infty} \frac{ \nu_{2i+1}^{i+\ell+1} (d_k)_{2\ell+2i+2} }{2^{2\ell} (2\ell+2i+2)! j^{2\ell+2i+2} }. \end{align} Note that \begin{equation*} \frac{1}{n+1} + \frac{1}{n^2 - 1} = \sum_{j=0}^{\infty} \frac{1}{n^{2j+1}}, \quad \frac{n(n+2)}{n+1} - \frac{n^2}{n^2 - 1} = n - \sum_{j=0}^{\infty} \frac{1}{n^{2j+1}}.
\end{equation*} Thus, we can deduce that the asymptotic series on the right-hand side of \eqref{eq:asymptotic error expansion} consists of negative powers of $n$ with exponents $d_k + 2j + 1$ for $j, k \geq 0$. This completes the proof. \end{proof} \begin{corollary}\label{cor: mixed asymptotic error} Suppose that the Chebyshev coefficients of $f(x)$ satisfy \begin{equation}\label{eq: mixed asymptotic} a_n \sim \sum_{k = 0}^{\infty} \frac{\mu_k}{ n^{d_k} } + (-1)^n \sum_{k = 0}^{\infty} \frac{\gamma_k}{ n^{\zeta_k} }, \quad n \rightarrow\infty, \end{equation} where $\mu_k, \gamma_k$ are constants independent of $n$ and $\{ d_k \}_{k=0}^{\infty}$ and $\{ \zeta_k \}_{k=0}^{\infty}$ are positive and strictly increasing sequences. Then, we have \begin{equation}\label{eq:mixed error expansion clenshaw} E_{n}^{C}[f] \sim \sum_{k = 0}^{\infty} \sum_{j=0}^{\infty} \frac{\varsigma_{j,k} }{n^{\xi_k+2j+1}}, \quad n \rightarrow\infty, \end{equation} where $\varsigma_{j,k}$ are constants independent of $n$ and $\{ \xi_k \}_{k = 0}^{\infty}$ is the strictly increasing sequence with $\{ \xi_k \}_{k=0}^{\infty} = \{ d_k \}_{k=0}^{\infty} \cup \{ \zeta_k \}_{k=0}^{\infty} $. \end{corollary} \begin{proof} It follows from Theorem \ref{thm:expansion error clenshaw}. \end{proof} \begin{remark}\label{eq:d0 and s} A direct consequence of Theorem \ref{thm:expansion error clenshaw} is that the rate of convergence of Clenshaw-Curtis quadrature is $\mathcal{O}(n^{-d_0-1})$. For example, if $f \in X^s$, then $d_0 = s + 1$, and we deduce immediately that the rate of convergence of Clenshaw-Curtis quadrature is $\mathcal{O}(n^{-s-2})$.
\end{remark} For functions $f(x) = (1-x)^{\alpha}(1+x)^{\beta}g(x)$, where $\alpha, \beta \geq 0$ are not both integers and $g(x) \in C^{\infty}[-1,1]$, from Theorem \ref{thm:asymptotic albebraic singularities} we know that their Chebyshev coefficients admit an asymptotic expansion of the form \eqref{eq:asymp expan cheby coeff 1} or \eqref{eq:asymp expan cheby coeff 2} if $\beta$ or $\alpha$, respectively, is a nonnegative integer. If neither $\alpha$ nor $\beta$ is a nonnegative integer, then their Chebyshev coefficients admit an asymptotic expansion of the form \eqref{eq: mixed asymptotic}. Similarly, consider functions with algebraic-logarithmic singularities of the form \eqref{eq:log singularity} with $\alpha$ a positive integer. If $\beta$ is a nonnegative integer, then from \eqref{eq:logarithm one} we see that the asymptotic expansion of their Chebyshev coefficients has the form \eqref{eq:asymp expan cheby coeff 1}. If $\beta$ is not a nonnegative integer, then the asymptotic expansion of their Chebyshev coefficients has the form \eqref{eq: mixed asymptotic}. Therefore, in all these cases, the error of the Clenshaw-Curtis quadrature has an asymptotic expansion of the form \eqref{eq:expansion error clenshaw} or \eqref{eq:mixed error expansion clenshaw}. An error of the form \eqref{eq:expansion error clenshaw} or \eqref{eq:mixed error expansion clenshaw} is especially suitable for convergence acceleration techniques such as Richardson extrapolation and the $\epsilon$-algorithm. In particular, the previous function evaluations can be reused in the process of convergence acceleration when $n$ is doubled. In the following we only consider the form \eqref{eq:expansion error clenshaw} since the form \eqref{eq:mixed error expansion clenshaw} can be dealt with in a similar way.
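To illustrate how an error expansion in odd negative powers of $n$ enables acceleration, the following Python sketch computes the Clenshaw-Curtis rule via the FFT and performs a single Richardson step. This is a minimal illustration only: the function names are ours, and the leading exponent $d_0+1$ is assumed known in advance.

```python
import numpy as np

def clenshaw_curtis(f, n):
    """(n+1)-point Clenshaw-Curtis approximation of int_{-1}^{1} f(x) dx (n even)."""
    x = np.cos(np.pi * np.arange(n + 1) / n)      # nested Chebyshev points
    fx = f(x)
    v = np.concatenate([fx, fx[-2:0:-1]])         # even extension of f(cos t)
    a = np.real(np.fft.fft(v)) / n                # Chebyshev coefficients a_k
    a[0] /= 2
    a[n] /= 2
    k = np.arange(0, n + 1, 2)                    # int T_k dx = 2/(1-k^2), k even
    return np.sum(2.0 * a[k] / (1.0 - k ** 2))

def richardson_step(f, n, d0):
    """One Richardson step: eliminates the leading n^{-d0-1} error term,
    reusing all n+1 nodes of the coarse rule inside the 2n+1-node rule."""
    r = 2.0 ** (d0 + 1)
    return (r * clenshaw_curtis(f, 2 * n) - clenshaw_curtis(f, n)) / (r - 1)
```

For instance, for $f(x)=\sqrt{1-x}$ (so $\alpha=\tfrac12$, $\beta=0$, and $d_0=2$), `richardson_step(f, n, 2)` removes the leading $\mathcal{O}(n^{-3})$ term of the plain rule, leaving an $\mathcal{O}(n^{-5})$ error.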
In Algorithm $1$ we outline the main steps of the convergence acceleration of Clenshaw-Curtis quadrature by using Richardson extrapolation: \begin{algorithm}\label{Richardson extrapolation} \caption{Richardson extrapolation for Clenshaw-Curtis quadrature} \begin{algorithmic}[1] \STATE {\mbox{Input parameters $n$ and $q$}} \FOR{$k=0:q$} \STATE {\mbox{Compute $R(0,2^kn) = I_{2^kn}^{C}[f]$ by FFT;}} \ENDFOR \FOR{$j = 0: q-1 $} \FOR{ $ k = 0:q-1-j $} \STATE { \mbox{ Evaluate ${\displaystyle R(j+1, 2^{k}n) = \frac{ 2^{d_j + 1} R(j, 2^{k+1}n) - R(j,2^{k}n) }{2^{d_j + 1} - 1}; }$ } } \ENDFOR \ENDFOR \STATE{\mbox{Return $R(q,n)$. }} \end{algorithmic} \end{algorithm} The term $R(q,n)$ achieves a higher order of convergence. More precisely, from the standard theory of Richardson extrapolation we have the following estimate \begin{equation} I[f] - R(q,n) = \mathcal{O}(n^{-d_q-1}) . \end{equation} Note that the Richardson extrapolation scheme $R(q,n)$ reduces to Clenshaw-Curtis quadrature when $q = 0$. \begin{corollary}\label{cor: order by exrapolation} When using Algorithm 1, the sequence $\{ d_k \}_{k=0}^{\infty}$ can be defined as follows: For functions $f(x) = (1-x)^{\alpha}(1+x)^{\beta}g(x)$, where $\alpha, \beta \geq 0$ are not both integers and $g(x) \in C^{\infty}[-1,1]$, we can define \begin{align}\label{eq:order algebraic singularities} \{ d_k \}_{k=0}^{\infty} = \left\{\begin{array}{ccc} {\displaystyle \{ 2\alpha+2j+1 \}_{j = 0}^{\infty} \cup \{ 2\beta+2j+1 \}_{j = 0}^{\infty} }, & \mbox{if $\alpha,\beta$ are not integers}, \\ [10pt] {\displaystyle \{ 2\alpha+2j+1 \}_{j = 0}^{\infty} }, & \mbox{if $\beta$ is an integer}, \\ [10pt] {\displaystyle \{ 2\beta+2j+1 \}_{j = 0}^{\infty} }, & \mbox{if $\alpha$ is an integer}. \end{array} \right.
\end{align} For functions $f(x) = (1 - x)^{\alpha} (1 + x)^{\beta} \log(1 - x) g(x)$, where $\alpha$ is a positive integer, $\beta \geq 0$ and $g(x) \in C^{\infty}[-1,1]$, we can define \begin{align}\label{eq:order logarithmic singularities} \{ d_k \}_{k=0}^{\infty} = \left\{\begin{array}{ccc} {\displaystyle \{ 2\alpha+2j+1 \}_{j = 0}^{\infty} }, & \mbox{if $\beta$ is an integer}, \\ [10pt] {\displaystyle \{ 2\alpha+2j+1 \}_{j = 0}^{\infty} \cup \{ 2\beta+2j+1 \}_{j = 0}^{\infty} }, & \mbox{otherwise}. \end{array} \right. \end{align} \end{corollary} \begin{example} Consider $f(x) = (1 - x)^{\alpha} g(x)$, where $\alpha>0$ is not an integer. From \eqref{eq:class s} we know that $f \in X^s$ with $s = 2\alpha$. On the other hand, from Corollary \ref{cor: order by exrapolation} we see immediately that $d_j = 2j + 2\alpha + 1$ for $j \geq 0$. Thus, the convergence rate of the Richardson extrapolation scheme $R(q,n)$ is \begin{equation} I[f] - R(q,n) = \mathcal{O}(n^{-2q - s - 2}), \quad q \geq 0. \end{equation} This higher order convergence rate is confirmed by numerical experiments in the next section. \end{example} \begin{remark}\label{eq:interior singularities} Suppose that $f(x)$ has an interior singularity inside the interval $(-1,1)$; for example, \begin{equation} f(x) = (1 - x)^{\alpha} (1 + x)^{\beta} |x - x_0|^\delta g(x), \end{equation} where $x_0 \in (-1,1)$ and $\delta \geq 0$ is not an integer. Then, we can first divide the interval $[-1,1]$ into two parts at $x = x_0$ and then apply Clenshaw-Curtis quadrature or its extrapolation acceleration scheme to the resulting two integrals. \end{remark} \section{Numerical experiments}\label{sec:example} In this section we present some concrete examples to show the convergence rates of Clenshaw-Curtis quadrature and the Richardson extrapolation approach for functions with endpoint singularities. We use ``Acceleration one'' and ``Acceleration two'' to denote $R(1,n)$ and $R(2,n)$, respectively.
For comparison, we also add the rate of convergence of Gauss-Legendre quadrature to the following examples. \begin{example} Consider the following function \begin{equation}\label{eq:test fun 1} f(x) = (1 - x)^{\alpha} (1 + x)^{\beta} e^x, \end{equation} where $\alpha, \beta \geq 0$ are not both integers. Obviously, Theorem \ref{thm:asymptotic albebraic singularities} implies that $f(x) \in X^s$, where $s$ is defined as in \eqref{eq:class s}. From Remark \ref{eq:d0 and s} and Corollary \ref{cor: order by exrapolation} we know that the rate of convergence of Clenshaw-Curtis quadrature is $\mathcal{O}(n^{-d_0 - 1})$ with $d_0 = s+1$, while the rate of convergence of the Richardson extrapolation scheme $R(q,n)$ is $I[f] - R(q,n) = \mathcal{O}(n^{-d_q-1})$, where $d_q$ is defined as in \eqref{eq:order algebraic singularities}. Numerical results are illustrated in Figure \ref{fig:branch} with two different choices of $\alpha$ and $\beta$. The left graph of Figure \ref{fig:branch} demonstrates the case $\alpha = \frac{1}{2}$ and $\beta = 0$, which implies $d_j = 2j+2$ for $j \geq 0$. The right graph of Figure \ref{fig:branch} demonstrates the case $\alpha = \frac{3}{4}$ and $\beta = \frac{1}{4}$. From \eqref{eq:order algebraic singularities} we can deduce that $d_j = j + \frac{3}{2}$ for $j \geq 0$. It can be observed clearly from Figure \ref{fig:branch} that the rate of convergence of Clenshaw-Curtis quadrature is $\mathcal{O}(n^{-s-2})$ and the rate of convergence of the extrapolation scheme $R(q,n)$ is $\mathcal{O}(n^{-d_q - 1})$ for $q = 1,2$, which coincides with our analysis. \end{example} \begin{figure} \caption{Convergence rates of $(n+1)$-point Clenshaw-Curtis and Gauss quadrature rules for $f(x) = (1-x)^{\alpha}(1+x)^{\beta}e^{x}$.}\label{fig:branch} \end{figure} \begin{example} Consider the function \begin{equation} f(x) = (1-x)^{\alpha} (1 + x)^{\beta}\log(1-x) \cos(x+1), \end{equation} where $\alpha$ is a positive integer and $\beta \geq 0$.
Clearly, $f \in X^s$, where $s$ is defined as in Definition \ref{def: s two}. In Figure \ref{fig:logrithm} we demonstrate the convergence rates of the Clenshaw-Curtis and Gauss-Legendre quadrature rules and the Richardson extrapolation schemes $R(1,n)$ and $R(2,n)$. The left graph of Figure \ref{fig:logrithm} demonstrates the case $\alpha = 1$ and $\beta = 0$. In this case, we have from Definition \ref{def: s two} and Corollary \ref{cor: order by exrapolation} that $s = 2$ and $d_j = 2j + 2\alpha + 1$ for $j \geq 0$. The right graph of Figure \ref{fig:logrithm} demonstrates the case $\alpha = 1$ and $\beta = \frac{1}{2}$. In this case, we have from \eqref{eq:order logarithmic singularities} that $s = 1$ and $d_j = j+2$ for $j \geq 0$. The numerical results shown in Figure \ref{fig:logrithm} are consistent with our theoretical results. \end{example} \begin{figure} \caption{Convergence rates of $(n+1)$-point Clenshaw-Curtis and Gauss quadrature rules for $f(x) = (1-x)^{\alpha}(1+x)^{\beta}\log(1-x)\cos(x+1)$.}\label{fig:logrithm} \end{figure} \begin{example}\label{example: arecos} Finally, consider the function \begin{equation}\label{eq:arccos} f(x) = \arccos(x^{2m}), \end{equation} where $m$ is a positive integer. Using repeated integration by parts, we obtain the asymptotic expansion of its Chebyshev coefficients \begin{equation*} a_{2n} \sim \sum_{j=0}^{\infty} \frac{\mu_j}{n^{d_j}}, \end{equation*} where $d_j = 2j+2$ for $j\geq 0$ and the $\mu_j$ are constants depending on $m$. Here we give explicit expressions for the first three coefficients $\mu_j$: \begin{equation} \mu_0 = - \frac{\sqrt{2m}}{\pi}, ~~~~~ \mu_1 = - \frac{\sqrt{2m}}{4\pi}\left(m - \frac{1}{2}\right), ~~~~~ \mu_2 = -\frac{\sqrt{2m}}{16\pi}\left( m^2 - 5m + \frac{9}{4}\right). \end{equation} Moreover, $a_{2n+1} = 0$ for $n\geq0$ since the function $f(x)$ is even. Obviously, $f \in X^1$ and it satisfies the condition of Theorem \ref{thm:superconvergence of cc}.
Thus, the convergence rate of the $(n+1)$-point Clenshaw-Curtis quadrature is $\mathcal{O}(n^{-3})$. Figure \ref{fig:arccos} shows the convergence rates of the Clenshaw-Curtis and Gauss-Legendre quadrature rules and the Richardson extrapolation schemes $R(q,n)$ for the function \eqref{eq:arccos} with two different values of $m$. Clearly, we can see that the convergence rates of both quadrature rules are $\mathcal{O}(n^{-3})$. Moreover, the convergence rate of $R(q,n)$ is $\mathcal{O}(n^{-d_q-1})$. \end{example} \begin{figure} \caption{Convergence rates of $(n+1)$-point Clenshaw-Curtis and Gauss quadrature rules and the extrapolation schemes $R(q,n)$ for $f(x) = \arccos(x^{2m})$.}\label{fig:arccos} \end{figure} \begin{remark} From these examples, we can observe that for large $n$ the rate of convergence of Gauss quadrature is almost indistinguishable from that of Clenshaw-Curtis quadrature for functions with endpoint singularities. \end{remark} \section{Conclusion}\label{sec:conclusion} In this paper, we have analyzed the rate of convergence of Clenshaw-Curtis quadrature for functions in $X^s$ which have algebraic or algebraic-logarithmic endpoint singularities. For such functions, we show that the rate of convergence can be further improved to $\mathcal{O}(n^{-s-2})$, which is one power of $n$ better than the optimal estimate given in \cite{xiang2012clenshawcurtis}. Furthermore, an asymptotic error expansion for Clenshaw-Curtis quadrature was obtained, based on which extrapolation schemes such as Richardson extrapolation were applied to accelerate the convergence of Clenshaw-Curtis quadrature. In contrast to Gauss-Legendre quadrature, Clenshaw-Curtis quadrature is a more powerful scheme for integrating functions with endpoint singularities since its nodes are nested and its quadrature weights can be evaluated efficiently by the inverse fast Fourier transform. \end{document}
\begin{document} \title{HS-integral and Eisenstein integral mixed Cayley graphs over abelian groups} \begin{center}{\textbf{Abstract}}\end{center} \noindent A mixed graph is called \emph{second kind Hermitian integral} (or \emph{HS-integral}) if the eigenvalues of its Hermitian-adjacency matrix of second kind are integers. A mixed graph is called \emph{Eisenstein integral} if the eigenvalues of its $(0,1)$-adjacency matrix are Eisenstein integers. Let $\Gamma$ be an abelian group. We characterize the set $S$ for which a mixed Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral. We also show that a mixed Cayley graph is Eisenstein integral if and only if it is HS-integral. \vspace*{0.3cm} \noindent \textbf{Keywords.} Hermitian adjacency matrix of second kind; mixed Cayley graph; HS-integral mixed graph; Eisenstein integral mixed graph \\ \textbf{Mathematics Subject Classifications:} 05C50, 05C25 \section{Introduction} A \emph{mixed graph} $G$ is a pair $(V(G),E(G))$, where $V(G)$ and $E(G)$ are the vertex set and the edge set of $G$, respectively. Here $E(G)\subseteq V(G) \times V(G)\setminus \{(u,u)~|~u\in V(G)\}$. If $G$ is a mixed graph, then $(u,v)\in E(G)$ need not imply that $(v,u)\in E(G)$. An edge $(u,v)$ of a mixed graph $G$ is called \textit{undirected} if both $(u,v)$ and $(v,u)$ belong to $E(G)$. An edge $(u,v)$ of a mixed graph $G$ is called \textit{directed} if $(u,v)\in E(G)$ but $(v,u)\notin E(G)$. A mixed graph can have both undirected and directed edges. A mixed graph $G$ is said to be a \textit{simple graph} if all the edges of $G$ are undirected. A mixed graph $G$ is said to be an \textit{oriented graph} if all the edges of $G$ are directed.
For a mixed graph $G$ on $n$ vertices, its $(0,1)$-\textit{adjacency matrix} and \textit{Hermitian-adjacency matrix of second kind} are denoted by $\mathcal{A}(G)=(a_{uv})_{n\times n}$ and $\mathcal{H}(G)=(h_{uv})_{n\times n}$, respectively, where \[a_{uv} = \left\{ \begin{array}{rl} 1 &\mbox{ if } (u,v)\in E \\ 0 &\textnormal{ otherwise,} \end{array}\right. ~~~~~\text{ and }~~~~~~ h_{uv} = \left\{ \begin{array}{cl} 1 &\mbox{ if } (u,v)\in E \textnormal{ and } (v,u)\in E \\ \frac{1+i\sqrt{3}}{2} & \mbox{ if } (u,v)\in E \textnormal{ and } (v,u)\not\in E \\ \frac{1-i\sqrt{3}}{2} & \mbox{ if } (u,v)\not\in E \textnormal{ and } (v,u)\in E\\ 0 &\textnormal{ otherwise.} \end{array}\right.\] The Hermitian-adjacency matrix of second kind was introduced by Bojan Mohar~\cite{mohar2020new}. Let $G$ be a mixed graph. By an \emph{HS-eigenvalue} of $G$, we mean an eigenvalue of $\mathcal{H}(G)$. By an \emph{eigenvalue} of $G$, we mean an eigenvalue of $\mathcal{A}(G)$. Similarly, the \emph{HS-spectrum} of $G$, denoted $Sp_H(G)$, is the multi-set of the HS-eigenvalues of $G$, and the \emph{spectrum} of $G$, denoted $Sp(G)$, is the multi-set of the eigenvalues of $G$. Note that the Hermitian-adjacency matrix of second kind of a mixed graph is a Hermitian matrix, and so its HS-eigenvalues are real numbers. However, if a mixed graph $G$ contains at least one directed edge, then $\mathcal{A}(G)$ is non-symmetric. Accordingly, the eigenvalues of $G$ need not be real numbers. The matrix obtained by replacing $\frac{1+i\sqrt{3}}{2}$ and $\frac{1-i\sqrt{3}}{2}$ by $i$ and $-i$, respectively, in $\mathcal{H}(G)$, is called the \emph{Hermitian adjacency} matrix of $G$. The Hermitian adjacency matrix of mixed graphs was introduced in~\cite{2017mixed, 2015mixed}. A mixed graph is called \textit{H-integral} if the eigenvalues of its Hermitian adjacency matrix are integers. A mixed graph $G$ is said to be \textit{HS-integral} if all the HS-eigenvalues of $G$ are integers.
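To make the definition of $\mathcal{H}(G)$ concrete, the following numpy sketch (the helper name \texttt{hs\_adjacency} is our own choice, not notation from the paper) assembles the Hermitian-adjacency matrix of second kind from an arc list. For the oriented triangle, the HS-spectrum works out to the integers $\{-2, 1, 1\}$, so that graph is HS-integral.

```python
import numpy as np

def hs_adjacency(n, arcs):
    """Hermitian-adjacency matrix of the second kind of a mixed graph
    on vertices 0..n-1 with arc set `arcs` (a set of ordered pairs)."""
    w = (1 + 1j * np.sqrt(3)) / 2          # the sixth root of unity e^{i*pi/3}
    H = np.zeros((n, n), dtype=complex)
    for u, v in arcs:
        if (v, u) in arcs:                 # undirected edge: both arcs present
            H[u, v] = 1
        else:                              # directed edge u -> v
            H[u, v] = w
            H[v, u] = np.conj(w)
    return H

# oriented triangle: arcs 0 -> 1 -> 2 -> 0
H = hs_adjacency(3, {(0, 1), (1, 2), (2, 0)})
spectrum = np.sort(np.linalg.eigvalsh(H))  # HS-spectrum: [-2, 1, 1]
```

Since $H$ is Hermitian, `eigvalsh` returns real eigenvalues, matching the remark above that HS-eigenvalues are always real.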
A mixed graph $G$ is said to be \textit{Eisenstein integral} if all the eigenvalues of $G$ are Eisenstein integers. Recall that complex numbers of the form $a+b\omega_3$, where $a,b\in \mathbb{Z}$ and $\omega_3=\frac{-1+i\sqrt{3}}{2}$, are called \emph{Eisenstein} integers. An HS-integral simple graph is called an \emph{integral} graph. Note that $\mathcal{A}(G)=\mathcal{H}(G)$ for a simple graph $G$. Therefore, in the case of a simple graph $G$, the notions of HS-eigenvalue, HS-spectrum and HS-integrality of $G$ coincide with those of eigenvalue, spectrum and integrality of $G$, respectively. Integrality of simple graphs has been extensively studied in the past. Integral graphs were first defined by Harary and Schwenk~\cite{harary1974graphs} in 1974, who also proposed a classification of integral graphs. See \cite{balinska2002survey} for a survey on integral graphs. Watanabe and Schwenk \cite{watanabe1979note,watanabe1979integral} proved several interesting results on integral trees in 1979. Csikvari \cite{csikvari2010integral} constructed integral trees with arbitrarily large diameters in 2010. Further research on integral trees can be found in \cite{brouwer2008integral,brouwer2008small,wang2000some, wang2002integral}. In $2009$, Ahmadi et al. \cite{ahmadi2009graphs} proved that only a $2^{-\Omega (n)}$ fraction of the graphs on $n$ vertices have an integral spectrum. Bussemaker et al. \cite{bussemaker1976there} proved that there are exactly $13$ connected cubic integral graphs. Stevanovi{\'c} \cite{stevanovic20034} studied the $4$-regular integral graphs avoiding $\pm3$ in the spectrum, and Lepovi{\'c} et al. \cite{lepovic2005there} proved that there are $93$ non-regular, bipartite integral graphs with maximum degree four. Let $S$ be a subset, not containing the identity element, of a group $\Gamma$. The set $S$ is said to be \textit{symmetric} (resp. \textit{skew-symmetric}) if $S$ is closed under inverses (resp. $a^{-1} \not\in S$ for all $a\in S$).
Define $\overline{S}= \{u\in S: u^{-1}\not\in S \}$. Clearly, $S\setminus \overline{S}$ is symmetric and $\overline{S}$ is skew-symmetric. The \textit{mixed Cayley graph} $G=\text{Cay}(\Gamma,S)$ is a mixed graph, where $V(G)=\Gamma$ and $E(G)=\{ (a,b): a^{-1}b\in S , a,b\in \Gamma\}$. If $S$ is symmetric, then $G$ is a \textit{simple Cayley graph}. If $S$ is skew-symmetric, then $G$ is an \textit{oriented Cayley graph}. In 1982, Bridges and Mena \cite{bridges1982rational} introduced a characterization of integral Cayley graphs over abelian groups. Later on, the same characterization was rediscovered by Wasin So \cite{2006integral} for cyclic groups in 2005. In 2009, Abdollahi and Vatandoost \cite{abdollahi2009cayley} proved that there are exactly seven connected cubic integral Cayley graphs. In the same year, Klotz and Sander \cite{klotz2010integral} proved that if a Cayley graph $\text{Cay}(\Gamma,S)$ over an abelian group $\Gamma$ is integral then $S$ belongs to the Boolean algebra $\mathbb{B}(\Gamma)$ generated by the subgroups of $\Gamma$. Moreover, they conjectured that the converse is also true, which was proved by Alperin and Peterson \cite{alperin2012integral}. In 2015, Ku et al. \cite{ku2015Cayley} proved that normal Cayley graphs over the symmetric groups are integral. In 2017, Lu et al. \cite{lu2018integral} gave a necessary and sufficient condition for the integrality of Cayley graphs over the dihedral group $D_n$. In particular, they completely determined all integral Cayley graphs over the dihedral group $D_p$ for a prime $p$. In 2019, Cheng et al. \cite{cheng2019integral} obtained several simple sufficient conditions for the integrality of Cayley graphs over the dicyclic group $T_{4n}= \langle a,b| a^{2n}=1, a^n=b^2,b^{-1}ab=a^{-1} \rangle $. In particular, they also completely determined all integral Cayley graphs over the dicyclic group $T_{4p}$ for a prime $p$.
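The mixed Cayley graph construction above is easy to realize over $\mathbb{Z}_n$. The numpy sketch below (the helper name \texttt{cayley\_adjacency} is ours) builds the $(0,1)$-adjacency matrix of the oriented Cayley graph $\text{Cay}(\mathbb{Z}_3,\{1\})$, whose eigenvalues are the cube roots of unity $1$, $\omega_3$, $\omega_3^2$; all three are Eisenstein integers.

```python
import numpy as np

def cayley_adjacency(n, S):
    """(0,1)-adjacency matrix of the mixed Cayley graph Cay(Z_n, S):
    (a, b) is an arc iff b - a (mod n) lies in the connection set S."""
    A = np.zeros((n, n))
    for a in range(n):
        for s in S:
            A[a, (a + s) % n] = 1
    return A

A = cayley_adjacency(3, {1})     # {1} is skew-symmetric in Z_3, so Cay(Z_3, {1}) is oriented
evals = np.linalg.eigvals(A)     # cube roots of unity: 1, omega_3, omega_3^2
```

Each eigenvalue satisfies $\lambda^3 = 1$, and $\omega_3^2 = -1-\omega_3$ is again of the form $a + b\omega_3$ with $a, b \in \mathbb{Z}$, so this graph is Eisenstein integral.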
In 2014, Godsil \emph{et al.} \cite{godsil2014rationality} characterized integral normal Cayley graphs. Xu \emph{et al.} \cite{xu2011gaussian} and Li \cite{li2013circulant} characterized the sets $S$ for which the mixed circulant graph $\text{Cay}(\mathbb{Z}_n, S)$ is Gaussian integral. In 2006, So \cite{2006integral} gave a characterization of integral circulant graphs. In \cite{kadyan2021integralNormal}, the authors provide an alternative proof of the characterization obtained in~\cite{li2013circulant,xu2011gaussian}. H-integral mixed circulant graphs, H-integral mixed Cayley graphs over abelian groups, H-integral normal Cayley graphs and HS-integral mixed circulant graphs have been characterized in \cite{kadyan2021integral}, \cite{kadyan2021integralAbelian}, \cite{kadyan2021integralNormal} and \cite{kadyan2021Secintegral}, respectively. Throughout this paper, we consider Cayley graphs over abelian groups. The paper is organized as follows. In Section~\ref{prelAbelian}, some preliminary concepts and results are discussed. In particular, we express the HS-eigenvalues of a mixed Cayley graph as a sum of HS-eigenvalues of a simple Cayley graph and an oriented Cayley graph. In Section~\ref{sec3}, we obtain a sufficient condition on the connection set for the HS-integrality of an oriented Cayley graph. In Section~\ref{sec4}, we first characterize HS-integrality of oriented Cayley graphs by proving the necessity of the condition obtained in Section~\ref{sec3}. After that, we extend this characterization to mixed Cayley graphs. In Section~\ref{sec5}, we prove that a mixed Cayley graph is Eisenstein integral if and only if it is HS-integral. \section{Preliminaries}\label{prelAbelian} A \textit{representation} of a finite group $\Gamma$ is a homomorphism $\rho : \Gamma \to GL(V)$, where $GL(V)$ is the group of automorphisms of a finite dimensional vector space $V$ over the complex field $\mathbb{C}$. The dimension of $V$ is called the \textit{degree} of $\rho$.
Two representations $\rho_1$ and $\rho_2$ of $\Gamma$ on $V_1$ and $V_2$, respectively, are \textit{equivalent} if there is an isomorphism $T:V_1 \to V_2$ such that $T\rho_1(g)=\rho_2(g)T$ for all $g\in \Gamma$. Let $\rho : \Gamma \to GL(V)$ be a representation. The \textit{character} $\chi_{\rho}: \Gamma \to \mathbb{C}$ of $\rho$ is defined by setting $\chi_{\rho}(g)=Tr(\rho(g))$ for $g\in \Gamma$, where $Tr(\rho(g))$ is the trace of the representation matrix of $\rho(g)$. By the degree of $\chi_{\rho}$ we mean the degree of $\rho$, which is simply $\chi_{\rho}(1)$. If $W$ is a $\rho(g)$-invariant subspace of $V$ for each $g\in \Gamma$, then we call $W$ a $\rho(\Gamma)$-invariant subspace of $V$. If the only $\rho(\Gamma)$-invariant subspaces of $V$ are $\{ 0\}$ and $V$, we call $\rho$ an \textit{irreducible representation} of $\Gamma$, and the corresponding character $\chi_{\rho}$ an \textit{irreducible character} of $\Gamma$. For a group $\Gamma$, we denote by $\text{IRR}(\Gamma)$ and $\text{Irr}(\Gamma)$ the complete set of non-equivalent irreducible representations of $\Gamma$ and the complete set of non-equivalent irreducible characters of $\Gamma$, respectively. Throughout this paper, we consider $\Gamma$ to be an abelian group of order $n$. Let $S$ be a subset of $\Gamma$ with $0\not\in S$, where $0$ is the additive identity of $\Gamma$. Then $\Gamma$ is isomorphic to a direct product of cyclic groups of prime power order, i.e. $$\Gamma\cong \mathbb{Z}_{n_1} \times \cdots \times \mathbb{Z}_{n_k},$$ where $n=n_1 \cdots n_k$, and $n_j$ is a power of a prime number for each $j=1,...,k $. We identify $\Gamma$ with $\mathbb{Z}_{n_1} \times \cdots \times \mathbb{Z}_{n_k}$ and regard the elements $x\in \Gamma $ as elements of the Cartesian product, i.e. $$x=(x_1,x_2,...,x_k), \mbox{ where } x_j \in \mathbb{Z}_{n_j} \mbox{ for all } 1\leq j \leq k.
$$ Addition in $\Gamma$ is done coordinate-wise modulo $n_j$. For a positive integer $k$ and $a\in \Gamma$, we denote by $ka$ or $a^k$ the $k$-fold sum of $a$ with itself; we also set $(-k)a=k(-a)$ and $0a=0$, and denote the inverse of $a$ by $-a$. \begin{lema}\label{lemma1}\cite{steinberg2009representation} Let $\mathbb{Z}_n=\{ 0,1,...,n-1\}$ be a cyclic group of order $n$. Then $\text{IRR}(\mathbb{Z}_n)=\{ \phi_k: 0\leq k \leq n-1\}$, where $\phi_k(j)=\omega_n^{jk}$ for all $0\leq j,k \leq n-1$, and $\omega_n=\exp(\frac{2\pi i}{n})$. \end{lema} \begin{lema}\label{lemma2}\cite{steinberg2009representation} Let $\Gamma_1$,$\Gamma_2$ be abelian groups of order $m,n$, respectively. Let $\text{IRR}(\Gamma_1)=\{ \phi_1,...,\phi_m\}$, and $\text{IRR}(\Gamma_2)=\{ \rho_1,...,\rho_n\}$. Then $$\text{IRR}(\Gamma_1 \times \Gamma_2)=\{ \psi_{kl} : 1\leq k \leq m, 1\leq l \leq n \},$$ where $\psi_{kl}: \Gamma_1 \times \Gamma_2 \to \mathbb{C}^* \mbox{ and } \psi_{kl}(g_1,g_2)=\phi_k(g_1)\rho_l(g_2)$ for all $g_1\in \Gamma_1, g_2\in \Gamma_2$. \end{lema} Consider $\Gamma = \mathbb{Z}_{n_1}\times \mathbb{Z}_{n_2}\times ...\times \mathbb{Z}_{n_k}$. By Lemma \ref{lemma1} and Lemma \ref{lemma2}, $\text{IRR}(\Gamma)=\{ \psi_{\alpha}: \alpha \in \Gamma\}$, where \begin{equation} \psi_{\alpha}(x)=\prod_{j=1}^{k}\omega_{n_j}^{\alpha_j x_j} \textnormal{ for all $\alpha=( \alpha_1,...,\alpha_k),x=(x_1,...,x_k) \in \Gamma$},\label{character} \end{equation} and $\omega_{n_j}=\exp\left(\frac{2\pi i}{n_j}\right)$. Since $\Gamma$ is an abelian group, every irreducible representation of $\Gamma$ is $1$-dimensional and can thus be identified with its character. Hence $\text{IRR}(\Gamma)=\text{Irr}(\Gamma)$. For $x\in \Gamma$, let $\text{ord}(x)$ denote the order of $x$. The following lemma can be easily proved. \begin{lema}\label{Basic} Let $\Gamma$ be an abelian group and $\text{Irr}(\Gamma)=\{\psi_{\alpha} : \alpha \in \Gamma \}$. Then the following statements are true.
\begin{enumerate}[label=(\roman*)] \item $\psi_{\alpha}(x)=\psi_x({\alpha})$ for all $x,\alpha \in \Gamma$. \item $(\psi_{\alpha}(x))^{\text{ord}(x)}=(\psi_{\alpha}(x))^{\text{ord}(\alpha)}=1$ for all $x,\alpha \in \Gamma$. \end{enumerate} \end{lema} Let $f : \Gamma \to \mathbb{C}$ be a function. The \textit{Cayley color digraph} of $\Gamma$ with \textit{connection function} $f$, denoted by $\text{Cay}(\Gamma, f)$, is defined to be the directed graph with vertex set $\Gamma$ and arc set $\{ (x,y): x,y \in \Gamma\}$ such that each arc $(x,y)$ is colored by $f(x^{-1}y)$. The \textit{adjacency matrix} of $\text{Cay}(\Gamma, f)$ is defined to be the matrix whose rows and columns are indexed by the elements of $\Gamma$, and the $(x,y)$-entry is equal to $f(x^{-1}y)$. The eigenvalues of $\text{Cay}( \Gamma, f)$ are simply the eigenvalues of its adjacency matrix. \begin{theorem}\cite{babai1979spectra}\label{EigNorColCayMix} Let $\Gamma$ be a finite abelian group and $\text{Irr}(\Gamma)=\{\psi_{\alpha} : \alpha \in \Gamma \}$. Then the spectrum of the Cayley color digraph $\text{Cay}(\Gamma, f)$ is $\{ \gamma_\alpha : \alpha\in \Gamma \},$ where $$\gamma_{\alpha} = \sum_{y\in \Gamma} f(y)\psi_{\alpha}(y) \hspace{0.2cm} \textnormal{ for all } \alpha \in \Gamma.$$ \end{theorem} For a subset $S$ of an abelian group $\Gamma$, let $S^{-1}=\{s^{-1}~:~s\in S\}$. \begin{lema}\cite{babai1979spectra}\label{imcgoa3} Let $\Gamma$ be an abelian group and $\text{Irr}(\Gamma)=\{\psi_{\alpha} : \alpha \in \Gamma \}$. 
Then the HS-spectrum of the mixed Cayley graph $\text{Cay}(\Gamma, S)$ is $\{ \gamma_\alpha : \alpha \in \Gamma \}$, where $\gamma_{\alpha}=\lambda_{\alpha}+\mu_{\alpha}$ and $$\lambda_{\alpha}=\sum_{s\in S\setminus \overline{S}} \psi_{\alpha}(s),\hspace{0.2cm} \mu_{\alpha}=\sum_{s\in\overline{S}}\left( \omega_6 \psi_{\alpha}(s)+ \omega_6^5\psi_{\alpha}(-s)\right) \textnormal{ for all } \alpha \in \Gamma.$$ \end{lema} \begin{proof} Define $f_S: \Gamma \to \{0,1,\omega_6,\omega_6^5\}$ such that $$f_S(s)= \left\{ \begin{array}{rl} 1 & \mbox{if } s\in S \setminus \overline{S} \\ \omega_6 & \mbox{if } s\in \overline{S}\\ \omega_6^5 & \mbox{if } s\in \overline{S}^{-1}\\ 0 & \mbox{otherwise}. \end{array}\right.$$ The adjacency matrix of the Cayley color digraph $\text{Cay}(\Gamma, f_S)$ agrees with the Hermitian adjacency matrix of the mixed Cayley graph $\text{Cay}(\Gamma, S)$. Thus the result follows from Theorem~\ref{EigNorColCayMix}. \end{proof} The next two corollaries are special cases of Lemma~\ref{imcgoa3}. \begin{corollary}\cite{klotz2010integral}\label{simpleAbelianEigBabai} Let $\Gamma$ be an abelian group and $\text{Irr}(\Gamma)=\{\psi_{\alpha} : \alpha \in \Gamma \}$. Then the spectrum of the Cayley graph $\text{Cay}(\Gamma, S)$ is $\{ \lambda_\alpha : \alpha \in \Gamma \}$, where $\lambda_\alpha=\lambda_{-\alpha}$ and $$\lambda_{\alpha}=\sum_{s\in S} \psi_{\alpha}(s) \mbox{ for all } \alpha \in \Gamma.$$ \end{corollary} \begin{corollary}\label{OriEig} Let $\Gamma$ be an abelian group and $\text{Irr}(\Gamma)=\{\psi_{\alpha} : \alpha \in \Gamma \}$. Then the HS-spectrum of the oriented Cayley graph $\text{Cay}(\Gamma, S)$ is $\{ \mu_\alpha : \alpha \in \Gamma \}$, where $$\mu_{\alpha}=\sum_{s\in S}\left(\omega_6 \psi_{\alpha}(s)+ \omega_6^5 \psi_{\alpha}(-s)\right) \mbox{ for all } \alpha \in \Gamma.$$ \end{corollary} Let $n\geq 2$ be a fixed positive integer. Define $G_n(d)=\{k: 1\leq k\leq n-1, \gcd(k,n)=d \}$. It is clear that $G_n(d)=dG_{\frac{n}{d}}(1)$.
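As a quick numerical sanity check of Corollary~\ref{OriEig} (purely illustrative, not part of the formal development; all identifiers in the snippet are ad hoc), the following Python sketch compares the HS-spectrum of the oriented Cayley graph $\text{Cay}(\mathbb{Z}_3\times\mathbb{Z}_3, S)$ with $S=\{(0,1),(2,0)\}$, computed once from the Hermitian adjacency matrix and once from the eigenvalue formula:

```python
import itertools
import numpy as np

w3 = np.exp(2j * np.pi / 3)
w6 = np.exp(2j * np.pi / 6)

G = list(itertools.product(range(3), range(3)))  # the group Z_3 x Z_3
S = [(0, 1), (2, 0)]                             # a skew-symmetric connection set

def neg(x):                      # inverse of x in Z_3 x Z_3
    return ((-x[0]) % 3, (-x[1]) % 3)

def sub(x, y):                   # the difference y - x in Z_3 x Z_3
    return ((y[0] - x[0]) % 3, (y[1] - x[1]) % 3)

def psi(a, x):                   # the characters of Eq. (character)
    return w3 ** (a[0] * x[0] + a[1] * x[1])

Sinv = [neg(s) for s in S]

# Hermitian adjacency matrix: omega_6 on arcs of S, omega_6^5 on reversed arcs
H = np.array([[w6 if sub(x, y) in S else (w6 ** 5 if sub(x, y) in Sinv else 0)
               for y in G] for x in G])

# HS-eigenvalues predicted by the formula mu_alpha of Corollary OriEig
mu = [sum(w6 * psi(a, s) + w6 ** 5 * psi(a, neg(s)) for s in S) for a in G]

spec_matrix = sorted(np.linalg.eigvalsh(H))   # H is Hermitian, so the spectrum is real
spec_formula = sorted(m.real for m in mu)
assert np.allclose(spec_matrix, spec_formula)
assert np.allclose(spec_formula, [-4, -1, -1, -1, -1, 2, 2, 2, 2])
```

The resulting spectrum consists of integers, consistent with the sufficient condition proved in Section~\ref{sec3}.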
Alperin and Peterson \cite{alperin2012integral} considered a Boolean algebra generated by a class of subgroups of a group in order to determine the integrality of Cayley graphs over abelian groups. Suppose $\Gamma$ is a finite group, and $\mathcal{F}_{\Gamma}$ is the family of all subgroups of $\Gamma$. The Boolean algebra $\mathbb{B}(\Gamma)$ generated by $\mathcal{F}_{\Gamma}$ is the set whose elements are obtained by arbitrary finite intersections, unions, and complements of the elements in the family $\mathcal{F}_{\Gamma}$. The minimal non-empty elements of this algebra are called \textit{atoms}. Thus each element of $\mathbb{B}(\Gamma)$ is the union of some atoms. Consider the equivalence relation $\sim$ on $\Gamma$ such that $x\sim y$ if and only if $y=x^k$ for some $k\in G_m(1)$, where $m=\text{ord}(x)$. \begin{lema}\cite{alperin2012integral} The equivalence classes of $\sim$ are the atoms of $\mathbb{B}(\Gamma)$. \end{lema} For $x\in \Gamma$, let $[x]$ denote the equivalence class of $x$ with respect to the relation $\sim$. Also, let $\langle x \rangle$ denote the cyclic group generated by $x$. \begin{lema}\label{atomsboolean} \cite{alperin2012integral} The atoms of the Boolean algebra $\mathbb{B}(\Gamma)$ are the sets $[x]=\{ y: \langle y \rangle = \langle x \rangle \}$. \end{lema} By Lemma \ref{atomsboolean}, each element of $\mathbb{B}(\Gamma)$ is a union of some sets of the form $[x]=\{ y: \langle y \rangle = \langle x \rangle \}$. Thus, for all $S\in \mathbb{B}(\Gamma)$, we have $S=[x_1]\cup...\cup [x_k]$ for some $x_1,...,x_k\in \Gamma$. The next result provides a complete characterization of integral Cayley graphs over an abelian group $\Gamma$ in terms of the atoms of $\mathbb{B}(\Gamma)$. \begin{theorem}\label{Cayint} (\cite{alperin2012integral}, \cite{bridges1982rational}) Let $\Gamma$ be an abelian group. The Cayley graph $\text{Cay}(\Gamma, S)$ is integral if and only if $S\in \mathbb{B}(\Gamma)$. 
\end{theorem} Define $\Gamma(3)$ to be the set of all $x\in \Gamma$ satisfying $\text{ord}(x)\equiv 0 \pmod 3$. For all $x\in \Gamma(3)$ and $r\in \{0,1,2\}$, define $$M_r(x):=\{x^k: 1\leq k \leq \text{ord}(x) , k \equiv r \Mod 3 \}.$$ For all $a\in \Gamma$ and $S\subseteq \Gamma$, define $a+S:= \{ a+s: s\in S\}$ and $-S:=\{ -s: s\in S \}$. Note that $-s$ denotes the inverse of $s$, that is $-s=s^{m-1}$, where $m=\text{ord}(s)$. \begin{lema} Let $\Gamma$ be an abelian group and $x\in \Gamma(3)$. Then the following statements are true. \begin{enumerate}[label=(\roman*)] \item $\bigcup\limits_{r=0}^{2}M_r(x)= \langle x \rangle$. \item Both $M_1(x)$ and $M_2(x)$ are skew-symmetric subsets of $\Gamma$. \item $-M_1(x)=M_2(x)$ and $-M_2(x)=M_1(x)$. \item $a+M_1(x)=M_1(x)$ and $a+M_2(x)=M_2(x)$ for all $a\in M_0(x)$. \end{enumerate} \end{lema} \begin{proof} \begin{enumerate}[label=(\roman*)] \item It follows from the definitions of $M_r(x)$ and $\langle x \rangle$. \item Let $\text{ord}(x)=m$. If $x^k\in M_1(x)$ then $-x^k=x^{m-k} \not\in M_1(x)$, as $k\equiv 1 \pmod 3$ gives $m-k\equiv 2 \pmod 3$. Thus $M_1(x)$ is a skew-symmetric subset of $\Gamma$. Similarly, $M_2(x)$ is also a skew-symmetric subset of $\Gamma$. \item Let $\text{ord}(x)=m$. As $k\equiv 1 \pmod 3$ if and only if $m-k\equiv 2 \pmod 3$, and $-x^k=x^{m-k}$, we get $-M_1(x)=M_2(x)$ and $-M_2(x)=M_1(x)$. \item Let $a\in M_0(x)$ and $y\in a+M_1(x)$. Then $a=x^{k_1}$ and $y=x^{k_1}+x^{k_2}=x^{k_1+k_2}$, where $k_1\equiv 0 \pmod 3$ and $k_2 \equiv 1 \pmod 3$. Since $k_1+k_2\equiv 1 \pmod 3$, we have $y\in M_1(x)$, implying that $a+M_1(x)\subseteq M_1(x)$. As translation by $a$ is injective, $|a+M_1(x)|=|M_1(x)|$, and hence $a+M_1(x)=M_1(x)$. Similarly, $a+M_2(x)=M_2(x)$ for all $a\in M_0(x)$.\qedhere \end{enumerate} \end{proof} Let $m \equiv 0 \Mod 3$.
For $r\in \{1,2\}$ and $g \in \mathbb{Z}$, define the following: \begin{align*} & G_{m,3}^r(1)=\{ k: 1\leq k \leq m-1 , \gcd(k,m )= 1,k\equiv r \Mod 3 \},\\ & D_{g,3}= \{ k: k \text{ divides } g, k \not\equiv 0 \Mod 3\},~\text{ and}\\ & D_{g,3}^r=\{ k: k \text{ divides } g, k \equiv r \Mod 3\} . \end{align*} It is clear that $D_{g,3}=D_{g,3}^1 \cup D_{g,3}^2$. Define an equivalence relation $\approx$ on $\Gamma(3)$ such that $x \approx y$ if and only if $y=x^k$ for some $k\in G_{m,3}^1(1)$, where $m=\text{ord}(x)$. Observe that if $x,y\in \Gamma(3)$ and $x \approx y$ then $x \sim y$, but the converse need not be true. For example, consider $x=5\pmod {12}$, $y=7\pmod {12}$ in $\mathbb{Z}_{12}$. Here $x,y\in \mathbb{Z}_{12}(3)$ and $x \sim y$ but $x \not\approx y$. For $x\in \Gamma(3)$, let $\langle\!\langle x \rangle\!\rangle$ denote the equivalence class of $x$ with respect to the relation $\approx$. \begin{lema}\label{lemanecc} Let $\Gamma$ be an abelian group, $x\in \Gamma(3)$ and $m=\text{ord}(x)$. Then the following are true. \begin{enumerate}[label=(\roman*)] \item $\langle\!\langle x \rangle\!\rangle= \{ x^k: k \in G_{m,3}^1(1) \}$. \item $\langle\!\langle -x \rangle\!\rangle= \{ x^k: k \in G_{m,3}^2(1) \}$. \item $\langle\!\langle x \rangle\!\rangle \cap \langle\!\langle -x \rangle\!\rangle=\emptyset$. \item $[x]=\langle\!\langle x \rangle\!\rangle \cup \langle\!\langle -x \rangle\!\rangle$. \end{enumerate} \end{lema} \begin{proof} \begin{enumerate}[label=(\roman*)] \item Let $y\in \langle\!\langle x \rangle\!\rangle$. Then $x \approx y$, and so $\text{ord}(x)=\text{ord}(y)=m$ and there exists $k\in G_{m,3}^1(1)$ such that $y=x^k$. Thus $\langle\!\langle x \rangle\!\rangle\subseteq \{ x^k: k \in G_{m,3}^1(1) \}$. On the other hand, let $z=x^k$ for some $k\in G_{m,3}^1(1)$. Then $\text{ord}(x)=\text{ord}(z)$ and so $x \approx z$. Thus $ \{ x^k: k \in G_{m,3}^1(1) \} \subseteq \langle\!\langle x \rangle\!\rangle$.
\item Note that $-x=x^{m-1}$ and $m-1\equiv 2 \pmod 3$. By Part $(i)$, \begin{align*} \langle\!\langle -x \rangle\!\rangle = \{ (-x)^k: k \in G_{m,3}^1(1) \} = \{ x^{(m-1)k}: k \in G_{m,3}^1(1) \}&= \{ x^{-k}: k \in G_{m,3}^1(1) \}\\ &= \{ x^{k}: k \in G_{m,3}^2(1) \}. \end{align*} \item Since $G_{m,3}^1(1)\cap G_{m,3}^2(1)=\emptyset$, so by Part $(i)$ and Part $(ii)$, $\langle\!\langle x \rangle\!\rangle \cap \langle\!\langle -x \rangle\!\rangle=\emptyset$ holds. \item Since $[x]= \{ x^k: k \in G_m(1) \}$ and $G_m(1)$ is a disjoint union of $G_{m,3}^1(1)$ and $ G_{m,3}^2(1)$, by Part $(i)$ and Part $(ii)$, $[x]=\langle\!\langle x \rangle\!\rangle \cup \langle\!\langle -x \rangle\!\rangle$ holds. \qedhere \end{enumerate} \end{proof} \begin{lema}\label{imcgoa4} Let $\Gamma$ be an abelian group, $x\in \Gamma(3)$, $m=\text{ord}(x)$ and $g=\frac{m}{3}$. Then the following are true. \begin{enumerate}[label=(\roman*)] \item $M_1(x) \cup M_2(x)=\bigcup\limits_{h\in D_{g,3}} [x^h] $. \item $M_1(x)= \bigcup\limits_{h\in D_{g,3}^1} \langle\!\langle x^h \rangle\!\rangle \cup \bigcup\limits_{h\in D_{g,3}^2} \langle\!\langle -x^h \rangle\!\rangle$. \item $M_2(x)=\bigcup\limits_{h\in D_{g,3}^1} \langle\!\langle -x^h \rangle\!\rangle \cup \bigcup\limits_{h\in D_{g,3}^2} \langle\!\langle x^h \rangle\!\rangle $. \end{enumerate} \end{lema} \begin{proof} \begin{enumerate}[label=(\roman*)] \item Let $x^k \in M_1(x) \cup M_2(x)$, where $k \equiv 1 \text{ or } 2 \pmod 3$. To show that $x^k \in \bigcup\limits_{h\in D_{g,3}} [x^h]$, it is enough to show $x^k \sim x^h$ for some $h \in D_{g,3}$. Let $h=\gcd (k,g) \in D_{g,3}$. Since $3\nmid k$, we have $\gcd(m,k)=\gcd(g,k)$, and so $$\text{ord}(x^k)=\frac{m}{\gcd(m,k)}=\frac{m}{\gcd(g,k)}=\frac{m}{h}=\text{ord}(x^h).$$ Also, as $h=\gcd(k,m)$, we have $\langle x^k \rangle = \langle x^h \rangle$, and so $x^k=x^{hj}$ for some $j\in G_q(1)$, where $q=\text{ord}(x^h)=\frac{m}{h}$. Thus $x^k\sim x^h$ where $h=\gcd (k,g) \in D_{g,3}$.
Conversely, let $z\in \bigcup\limits_{h\in D_{g,3}} [x^h]$. Then there exists $h\in D_{g,3}$ such that $z=x^{hj}$ where $j\in G_q(1)$ and $q= \frac{m}{\gcd(m,h)}$. Now $h\in D_{g,3}$ and $q\equiv 0\pmod 3$ imply that $hj\equiv 1 \text{ or } 2 \pmod 3$, and so $\bigcup\limits_{h\in D_{g,3}} [x^h] \subseteq M_1(x) \cup M_2(x)$. Hence $M_1(x) \cup M_2(x)=\bigcup\limits_{h\in D_{g,3}} [x^h] $. \item Let $x^k \in M_1(x)$, where $k\equiv 1 \pmod 3$. By Part $(i)$, there exist $h\in D_{g,3}$ and $j\in G_q(1)$ such that $x^k=x^{hj}$, where $q=\frac{m}{\gcd(m,h)}$. Note that $k=jh$. If $h\equiv 1 \pmod 3$ then $j\in G_{q,3}^1(1)$, otherwise $j\in G_{q,3}^2(1)$. Thus using Parts $(i)$ and $(ii)$ of Lemma \ref{lemanecc}, if $h\equiv 1 \pmod 3$ then $x^k \approx x^h$, otherwise $x^k \approx -x^h$. Hence $M_1(x) \subseteq \bigcup\limits_{h\in D_{g,3}^1} \langle\!\langle x^h \rangle\!\rangle \cup \bigcup\limits_{h\in D_{g,3}^2} \langle\!\langle -x^h \rangle\!\rangle$. Conversely, assume that $z\in \bigcup\limits_{h\in D_{g,3}^1} \langle\!\langle x^h \rangle\!\rangle \cup \bigcup\limits_{h\in D_{g,3}^2} \langle\!\langle -x^h \rangle\!\rangle$. This gives $z\in \langle\!\langle x^h \rangle\!\rangle$ for some $h\in D_{g,3}^1$ or $z\in \langle\!\langle -x^h \rangle\!\rangle$ for some $h\in D_{g,3}^2$. In the first case, by Part $(i)$ of Lemma \ref{lemanecc}, there exists $j\in G_{q,3}^1(1)$ with $q=\frac{m}{\gcd(m,h)}$ such that $z = x^{hj}$. Similarly, in the second case, by Part $(ii)$ of Lemma \ref{lemanecc}, there exists $j\in G_{q,3}^2(1)$ with $q=\frac{m}{\gcd(m,h)}$ such that $z = x^{hj}$. In both cases, $hj \equiv 1 \pmod 3$. Thus $z\in M_1(x)$. \item The proof is similar to Part $(ii)$. \qedhere \end{enumerate} \end{proof} The \textit{cyclotomic polynomial} $\Phi_m(x)$ is the monic polynomial whose zeros are the primitive $m^{th}$ roots of unity.
That is, $$\Phi_m(x)= \prod_{a\in G_m(1)}(x-\omega_m^a).$$ Clearly, the degree of $\Phi_m(x)$ is $\varphi(m)$, where $\varphi$ denotes the Euler $\varphi$-function. It is well known that the cyclotomic polynomial $\Phi_m(x)$ is monic and irreducible in $\mathbb{Z}[x]$. See \cite{numbertheory} for more details on cyclotomic polynomials. The polynomial $\Phi_m(x)$ is irreducible over $\mathbb{Q}(\omega_3)$ if and only if $[\mathbb{Q}(\omega_3,\omega_m) : \mathbb{Q}(\omega_3)]= \varphi(m)$. Also, $ \mathbb{Q}(\omega_m)$ does not contain $\omega_3$ if and only if $m\not\equiv 0 \Mod 3$. Thus, if $m\not\equiv 0 \Mod 3$ then $[\mathbb{Q}(\omega_3,\omega_m):\mathbb{Q}(\omega_m) ]=2=[\mathbb{Q}(\omega_3) : \mathbb{Q}]$, and therefore $$[\mathbb{Q}(\omega_3,\omega_m) : \mathbb{Q}(\omega_3)]=\frac{[\mathbb{Q}(\omega_3,\omega_m) : \mathbb{Q}(\omega_m)] \times [\mathbb{Q}(\omega_m) : \mathbb{Q}]}{ [\mathbb{Q}(\omega_3) : \mathbb{Q}]}= [\mathbb{Q}(\omega_m) : \mathbb{Q}]= \varphi(m).$$ Further, if $m\equiv 0 \Mod 3$ then $ \mathbb{Q}(\omega_3,\omega_m)= \mathbb{Q}(\omega_m)$, and so $$[\mathbb{Q}(\omega_3,\omega_m) : \mathbb{Q}(\omega_3)] = \frac{[\mathbb{Q}(\omega_3,\omega_m) : \mathbb{Q}]}{[\mathbb{Q}(\omega_3) : \mathbb{Q}]}=\frac{\varphi(m)}{2}.$$ Note that $\mathbb{Q}(\omega_3)=\mathbb{Q}(\omega_6)=\mathbb{Q}(i\sqrt{3})$. Therefore $\Phi_m(x)$ is irreducible over $\mathbb{Q}(\omega_3), \mathbb{Q}(\omega_6)$ or $\mathbb{Q}(i\sqrt{3})$ if and only if $m\not\equiv 0 \Mod 3$. Let $m\equiv 0 \Mod 3$. Observe that $G_{m}(1)$ is a disjoint union of $G_{m,3}^1(1)$ and $G_{m,3}^2(1)$. Define $$\Phi_{m,3}^{1}(x)= \prod_{a\in G_{m,3}^1(1)}(x-\omega_m^a)~~ \textnormal{ and } ~~\Phi_{m,3}^2(x)= \prod_{a\in G_{m,3}^2(1)}(x-\omega_m^a).$$ It is clear from the definition that $\Phi_m(x)=\Phi_{m,3}^1(x)\Phi_{m,3}^2(x)$. \begin{theorem}\cite{kadyan2021Secintegral} Let $m\equiv 0\Mod 3$.
Then $\Phi_{m,3}^1(x)$ and $\Phi_{m,3}^2(x)$ are irreducible monic polynomials in $\mathbb{Q}(\omega_3)[x]$ of degree $\frac{\varphi(m)}{2}$. \end{theorem} \section{A sufficient condition for HS-integrality of oriented Cayley graphs over abelian groups}\label{sec3} In this section, we first prove that $S=\emptyset$ is the only connection set for an HS-integral oriented Cayley graph $\text{Cay}(\Gamma, S)$ whenever $\Gamma(3)=\emptyset$. After that, we obtain a sufficient condition on the set $S$ for which the oriented Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral. \begin{lema}\label{AbelianSqrt3ZeroSum} Let $S$ be a skew-symmetric subset of an abelian group $\Gamma$. If $\sum\limits_{s \in S} i\sqrt{3} (\psi_{\alpha}(s) -\psi_{\alpha}(-s)) =0$ for all $\alpha \in \Gamma$ then $S=\emptyset$. \end{lema} \begin{proof} Let $A_S=(a_{uv})_{n\times n}$ be the matrix whose rows and columns are indexed by the elements of $\Gamma$, where $$a_{uv} = \left\{ \begin{array}{cl} i\sqrt{3} & \mbox{ if } v-u \in S \\ -i\sqrt{3} & \mbox{ if } v-u \in S^{-1}\\ 0 &\textnormal{ otherwise.} \end{array}\right.$$ Applying Theorem~\ref{EigNorColCayMix} to the connection function taking the value $i\sqrt{3}$ on $S$, $-i\sqrt{3}$ on $S^{-1}$ and $0$ elsewhere, the eigenvalues of $A_S$ are $\lambda_{\alpha}=\sum\limits_{s\in S} i\sqrt{3} (\psi_{\alpha}(s)-\psi_{\alpha}(-s))$ for $\alpha \in \Gamma$. By hypothesis, $\lambda_{\alpha}=0$ for all $\alpha \in \Gamma$. Since $A_S$ is diagonalizable, with the characters of $\Gamma$ as eigenvectors, all its eigenvalues being zero implies that all the entries of $A_S$ are zero. Hence $S=\emptyset$. \end{proof} \begin{theorem}\label{ori4} Let $\Gamma$ be an abelian group and $\Gamma(3) = \emptyset$. Then the oriented Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral if and only if $S=\emptyset$. \end{theorem} \begin{proof} Let $G=\text{Cay}(\Gamma, S)$ and $Sp_H(G)=\{ \mu_{\alpha}: \alpha \in \Gamma \}$. Assume that $\text{Cay}(\Gamma, S)$ is HS-integral. Since $\Gamma(3)=\emptyset$, Cauchy's theorem gives $n\not\equiv 0 \Mod 3$.
By Corollary~\ref{OriEig}, $$\mu_{\alpha}=\sum_{s\in S} (\omega_6 \psi_{\alpha}(s)+ \omega_6^5\psi_{\alpha}(-s)) \in \mathbb{Z}, \textnormal{ for all } \alpha \in \Gamma.$$ Note that $\psi_{\alpha}(s)$ and $\psi_{\alpha}(-s)$ are $n^{th}$ roots of unity for all $ \alpha \in \Gamma, s\in S$. Fix a primitive $n^{th}$ root $\omega$ of unity and express $\psi_{\alpha}(s)$ in the form $\omega^j$ for some $j \in \{ 0,1,...,n-1\}$. Thus $$\mu_{\alpha}= \sum_{s\in S} (\omega_6 \psi_{\alpha}(s)+ \omega_6^5\psi_{\alpha}(-s) ) = \sum_{j=0}^{n-1} a_j \omega^j,$$ where $a_j \in \mathbb{Q}(\omega_3)$. Since $\mu_{\alpha} \in \mathbb{Z}$, we have $p(x)= \sum\limits_{j=0}^{n-1} a_j x^j- \mu_{\alpha} \in \mathbb{Q}(\omega_3)[x]$, and $\omega$ is a root of $p(x)$. Since $n\not\equiv 0 \pmod 3$, the polynomial $\Phi_n(x)$ is irreducible in $\mathbb{Q}(\omega_3)[x]$, and it is the monic irreducible polynomial over $\mathbb{Q}(\omega_3)$ having $\omega$ as a root. Therefore $\Phi_n(x)$ divides $p(x)$, and so $\omega^{-1}=\omega^{n-1}$ is also a root of $p(x)$. Note that if $\psi_{\alpha}(s)=\omega^j$ for some $j \in \{ 0,1,...,n-1\}$ then $\psi_{-\alpha}(s)=\omega^{-j}$, and hence $\mu_{-\alpha}=\sum\limits_{j=0}^{n-1} a_j \omega^{-j}$. We have \begin{align*} \sum_{s \in S} i\sqrt{3} (\psi_{\alpha}(s) - \psi_{\alpha}(-s))&=\sum_{s \in S}[(\omega_6 - \omega_6^5) \psi_{\alpha}(s) + (\omega_6^5 - \omega_6) \psi_{\alpha}(-s)]\\ &=\mu_{\alpha}-\mu_{-\alpha}=\mu_{\alpha}-\sum_{j=0}^{n-1} a_j \omega^{-j}=-p(\omega^{-1})=0 . \end{align*} By Lemma \ref{AbelianSqrt3ZeroSum}, $S=\emptyset$. Conversely, if $S=\emptyset$ then all the HS-eigenvalues of $\text{Cay}(\Gamma, S)$ are zero. Thus $\text{Cay}(\Gamma, S)$ is HS-integral. \end{proof} \begin{lema}\label{imcgoa11} Let $\Gamma$ be an abelian group and $x\in \Gamma(3)$. Then $\sum\limits_{s\in M_1(x)} \left(\omega_6 \psi_{\alpha}(s) + \omega_6^5 \psi_{\alpha}(-s)\right)$ is an integer for each $\alpha \in \Gamma$.
\end{lema} \begin{proof} Let $x\in \Gamma(3)$, $\alpha \in \Gamma$ and $\mu_{\alpha}=\sum\limits_{s\in M_1(x)}\left( \omega_6 \psi_{\alpha}(s) + \omega_6^5 \psi_{\alpha}(-s)\right).$ \noindent\textbf{Case 1.} There exists $a\in M_0(x)$ such that $\psi_{\alpha}(a)\neq 1$. Then \begin{equation*} \begin{split} \mu_{\alpha}=\sum\limits_{s\in M_1(x)}\left( \omega_6 \psi_{\alpha}(s) + \omega_6^5 \psi_{\alpha}(-s)\right) &= \sum\limits_{s\in M_1(x)} \omega_6 \psi_{\alpha}(s) + \sum\limits_{s\in M_2(x)} \omega_6^5 \psi_{\alpha}(s)\\ &= \sum\limits_{s\in a+M_1(x)} \omega_6 \psi_{\alpha}(s) + \sum\limits_{s\in a+M_2(x)} \omega_6^5 \psi_{\alpha}(s)\\ &=\psi_{\alpha}(a) \sum\limits_{s\in M_1(x)} \omega_6 \psi_{\alpha}(s) + \psi_{\alpha}(a) \sum\limits_{s\in M_2(x)} \omega_6^5 \psi_{\alpha}(s)\\ &= \psi_{\alpha}(a) \mu_{\alpha}. \end{split} \end{equation*} Hence $(1-\psi_{\alpha}(a)) \mu_{\alpha}=0$. Since $\psi_{\alpha}(a)\neq 1$, we get $\mu_{\alpha}=0\in \mathbb{Z}$. \noindent\textbf{Case 2.} Assume that $\psi_{\alpha}(a)=1$ for all $a\in M_0(x)$. Then $\psi_{\alpha}(s)=\psi_{\alpha}(x)$ for all $s\in M_1(x)$ and $\psi_{\alpha}(s)=\psi_{\alpha}(x^2)$ for all $s\in M_2(x)$. Therefore \begin{equation*} \begin{split} \mu_{\alpha}=\sum\limits_{s\in M_1(x)}\left( \omega_6 \psi_{\alpha}(s) + \omega_6^5 \psi_{\alpha}(-s)\right) &= \sum\limits_{s\in M_1(x)} \omega_6 \psi_{\alpha}(s) + \sum\limits_{s\in M_2(x)} \omega_6^5 \psi_{\alpha}(s)\\ &= |M_1(x)|(\omega_6 \psi_{\alpha}(x) + \omega_6^5 \psi_{\alpha}(x^2))\\ &= -|M_1(x)|(\omega_3^2 \psi_{\alpha}(x) + \omega_3 \psi_{\alpha}(x^2)). \end{split} \end{equation*} Since $x^3\in M_0(x)$, we have $\psi_{\alpha}(x)^3=\psi_{\alpha}(x^3)=1$, and so $\psi_{\alpha}(x)\in\{1, \omega_3, \omega_3^2\}$. If $\psi_{\alpha}(x)=\omega_3$ then $\mu_{\alpha}= -2 |M_1(x)|$; if $\psi_{\alpha}(x)=1$ or $\omega_3^2$ then $\mu_{\alpha}= |M_1(x)|$. Thus in all cases $\mu_{\alpha}$ is an integer.
\end{proof} For $x\in \Gamma(3)$ and $\alpha \in \Gamma$, define $$Z_{x}(\alpha)= \sum\limits_{s\in \langle\!\langle x \rangle\!\rangle}\left( \omega_6 \psi_{\alpha}(s) + \omega_6^5 \psi_{\alpha}(-s)\right).$$ \begin{lema}\label{integerEigenvalue} Let $\Gamma$ be an abelian group and $x\in \Gamma(3)$. Then $Z_{x}(\alpha)$ is an integer for each $\alpha \in \Gamma$. \end{lema} \begin{proof} We apply induction on $\text{ord}(x)$. If $\text{ord}(x)=3$, then $M_1(x)=\langle\!\langle x \rangle\!\rangle$. Hence by Lemma~\ref{imcgoa11}, $Z_{x}(\alpha)$ is an integer for each $\alpha \in \Gamma$. Assume that the statement holds for all $x\in \Gamma(3)$ with $\text{ord}(x)\in \{ 3,6,...,3(g-1)\}$. We prove it for $\text{ord}(x)=3g$. Lemma~\ref{imcgoa4} implies that $$M_1(x)= \bigcup\limits_{h\in D_{g,3}^1} \langle\!\langle x^h \rangle\!\rangle \cup \bigcup\limits_{h\in D_{g,3}^2} \langle\!\langle -x^h \rangle\!\rangle.$$ If $\text{ord}(x)=3g=m, h\in D_{g,3}^1\cup D_{g,3}^2$, and $h>1$ then $\text{ord}(x^h), \text{ord}(-x^h)\in \{ 3,6,...,3(g-1)\}$. By the induction hypothesis, both $Z_{x^h}(\alpha)$ and $Z_{-x^h}(\alpha)$ are integers for all $\alpha \in \Gamma$. Now we have \begin{equation*} \begin{split} \sum\limits_{s\in M_1(x)}\left( \omega_6\psi_{\alpha}(s)+ \omega_6^5\psi_{\alpha}(-s)\right) &= Z_{x}(\alpha)+ \sum_{h\in D_{g,3}^1, h> 1} Z_{x^h}(\alpha) + \sum_{h\in D_{g,3}^2, h> 1} Z_{-x^h}(\alpha). \end{split} \end{equation*} By Lemma~\ref{imcgoa11} and the induction hypothesis, \begin{equation*} \begin{split} Z_{x}(\alpha) = \sum\limits_{s\in M_1(x)}\left( \omega_6\psi_{\alpha}(s)+ \omega_6^5\psi_{\alpha}(-s)\right) - \sum_{h\in D_{g,3}^1, h> 1} Z_{x^h}(\alpha) - \sum_{h\in D_{g,3}^2, h> 1} Z_{-x^h}(\alpha) \end{split} \end{equation*} is an integer for each $\alpha \in \Gamma$.
\end{proof} For $\Gamma(3) \neq \emptyset$, define $\mathbb{E}(\Gamma)$ to be the set consisting of the empty set together with all skew-symmetric subsets of $\Gamma$ of the form $\langle\!\langle x_1 \rangle\!\rangle\cup...\cup \langle\!\langle x_k \rangle\!\rangle$ for some $x_1,...,x_k\in \Gamma(3)$. For $\Gamma(3) = \emptyset$, define $\mathbb{E}(\Gamma)=\{ \emptyset \}$. \begin{theorem}\label{OrientedChara} Let $\Gamma$ be an abelian group. If $S \in \mathbb{E}(\Gamma)$ then the oriented Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral. \end{theorem} \begin{proof} Assume that $S \in \mathbb{E}(\Gamma)$. If $S=\emptyset$ then all the HS-eigenvalues of $\text{Cay}(\Gamma, S)$ are zero, and the graph is HS-integral. Otherwise $S=\langle\!\langle x_1 \rangle\!\rangle\cup...\cup \langle\!\langle x_k \rangle\!\rangle$ for some $x_1,...,x_k\in \Gamma(3)$. We have \begin{equation*} \begin{split} \mu_{\alpha} &=\sum_{s\in S}\left( \omega_6 \psi_{\alpha}(s)+ \omega_6^5 \psi_{\alpha}(-s)\right)= \sum_{j=1}^k Z_{x_j}(\alpha). \end{split} \end{equation*} Now by Lemma~\ref{integerEigenvalue}, $\mu_{\alpha}$ is an integer for each $\alpha \in \Gamma$. Hence the oriented Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral. \end{proof} \section{Characterization of HS-integral mixed Cayley graphs over abelian groups}\label{sec4} Let $\Gamma$ be an abelian group of order $n$. Define $E$ to be the matrix of size $n\times n$, whose rows and columns are indexed by elements of $\Gamma$ such that $E_{x,y}=\psi_{x}(y)$. Note that each row of $E$ corresponds to a character of $\Gamma$ and $EE^*=nI_n$, where $E^*$ is the conjugate transpose of $E$.
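Before proceeding, here is a small numerical illustration (not part of the paper's argument; all identifiers in the snippet are ad hoc) of the matrix $E$ for $\Gamma=\mathbb{Z}_9$: it checks the identity $EE^*=nI_n$, and the integrality $Z_x(\alpha)\in\mathbb{Z}$ of Lemma~\ref{integerEigenvalue} for the class $\langle\!\langle 1 \rangle\!\rangle=\{1,4,7\}$.

```python
import numpy as np

n = 9
w9 = np.exp(2j * np.pi / n)

# character table of Z_9: E[x, y] = psi_x(y) = w9^(x*y)
E = np.array([[w9 ** (x * y) for y in range(n)] for x in range(n)])

# rows of E are the characters of Z_9, and E E^* = n I_n
assert np.allclose(E @ E.conj().T, n * np.eye(n))

# Z_x(alpha) for x = 1: <<1>> = {1, 4, 7} and <<-1>> = {2, 5, 8}
w6 = np.exp(2j * np.pi / 6)
cls = [1, 4, 7]

def Z(alpha):
    return sum(w6 * w9 ** (alpha * s) + w6 ** 5 * w9 ** (-alpha * s)
               for s in cls)

values = np.array([Z(alpha) for alpha in range(n)])

# each Z_1(alpha) is (numerically) a real integer, as Lemma integerEigenvalue asserts
assert np.allclose(values.imag, 0)
assert np.allclose(values.real, np.round(values.real))
assert np.allclose(values.real, [3, 0, 0, -6, 0, 0, 3, 0, 0])
```

For $\alpha$ not divisible by $3$ the character sum over $\{1,4,7\}$ vanishes, which is why most entries of the resulting integer vector are $0$.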
Let $v_{\langle\!\langle x \rangle\!\rangle}$ be the vector in $\mathbb{Q}(\omega_3)^n$ whose coordinates are indexed by the elements of $\Gamma$, and the $a^{th}$ coordinate of $v_{\langle\!\langle x \rangle\!\rangle}$ is given by $$v_{\langle\!\langle x \rangle\!\rangle}(a) = \left\{ \begin{array}{cl} \omega_6 &\mbox{ if } a \in \langle\!\langle x \rangle\!\rangle \\\omega_6^5 & \mbox{ if } a \in \langle\!\langle -x \rangle\!\rangle\\ 0 &\textnormal{ otherwise.} \end{array}\right.$$ By Lemma~\ref{integerEigenvalue}, we have $Ev_{\langle\!\langle x \rangle\!\rangle} \in \mathbb{Z}^n$. For $z\in \mathbb{C}$, let $\overline{z}$ denote the complex conjugate of $z$ and $\Re (z)$ (resp. $\Im (z)$) denote the real part (resp. imaginary part) of $z$. \begin{lema}\label{Ori4Nec} Let $\Gamma$ be an abelian group, $v\in \mathbb{Q}(\omega_3)^n$ and $Ev \in \mathbb{Q}^n$. Let the coordinates of $v$ be indexed by elements of $\Gamma$. Then \begin{enumerate}[label=(\roman*)] \item $\overline{v}_x=v_{-x}$ for all $x \in \Gamma$. \item $v_x=v_y$ for all $x,y \in \Gamma(3)$ satisfying $x \approx y$. \item $\Re(v_x)=\Re(v_{-x})$ and $\Im(v_x)=\Im(v_{-x})=0$ for all $x\in \Gamma \setminus \Gamma(3)$. \end{enumerate} \end{lema} \begin{proof} Let $E_x$ and $E_y$ denote the column vectors of $E$ indexed by $x$ and $y$, respectively, and assume that $u=Ev \in \mathbb{Q}^n$. \begin{enumerate}[label=(\roman*)] \item We use the fact that $\overline{\psi_x(y)}=\psi_{-x}(y)=\psi_x(-y)$ for all $x,y \in \Gamma$. 
Again \[u=Ev\Rightarrow E^*u=E^*Ev= (nI_n)v\Rightarrow \frac{1}{n}E^*u=v \in \mathbb{Q}(\omega_3)^n.\] Thus \begin{equation*} \begin{split} v_x=\frac{1}{n}(E^* u)_x=\frac{1}{n} \sum_{a\in \Gamma} E^*_{x,a}u_a = \frac{1}{n} \sum_{a\in \Gamma} \overline{ \psi_a(x)}u_a &= \frac{1}{n} \sum_{a\in \Gamma} \psi_{a}(-x)u_a \\ &= \overline{\frac{1}{n} \sum_{a\in \Gamma} \overline{ \psi_{a}(-x)}u_a}\\ &= \overline{\frac{1}{n} \sum_{a\in \Gamma} E^*_{-x,a}u_a}= \overline{\frac{1}{n}(E^* u)_{-x}}=\overline{v}_{-x}. \end{split} \end{equation*} \item If $\Gamma(3) = \emptyset$ then there is nothing to prove. Now assume that $\Gamma(3)\neq\emptyset$. Let $x,y \in \Gamma(3)$ and $x \approx y$. Then there exists $k\in G_{m,3}^1(1)$ such that $y=x^k$, where $m=\text{ord}(x)$. Assume $x\neq y$, so that $k\geq 2$. By Lemma~\ref{Basic}, the entries of the columns $E_x$ and $E_y$ are $m^{th}$ roots of unity. Fix a primitive $m^{th}$ root of unity $\omega$, and express each entry of $E_x$ and $E_y$ in the form $\omega^j$ for some $j\in \{ 0,1,...,m-1\}$. Thus $$nv_x= (E^*u)_x= \sum_{j=0}^{m-1} a_j \omega^j,$$ where $a_j\in \mathbb{Q}$ for all $j$. Thus $\omega$ is a root of the polynomial $p(x)= \sum\limits_{j=0}^{m-1} a_j x^j-nv_x \in \mathbb{Q}(\omega_3)[x]$. Therefore $p(x)$ is a multiple of the minimal polynomial of $\omega$ over $\mathbb{Q}(\omega_3)$, which is $\Phi_{m,3}^1(x)$ or $\Phi_{m,3}^2(x)$. Since $k\in G_{m,3}^1(1)$, the map $\omega_m^a \mapsto \omega_m^{ak}$ preserves the residue of the exponent modulo $3$ and therefore permutes the roots of each of these two polynomials; hence $\omega^k$ is also a root of $p(x)$. As $y=x^k$ implies that $\psi_{a}(y)=\psi_{a}(x)^k$ for all $a\in \Gamma$, we have $(E^*u)_y= \sum\limits_{j=0}^{m-1} a_j \omega^{kj}$. Hence $$0 =p(\omega^k)= \sum\limits_{j=0}^{m-1} a_j \omega^{kj}-nv_x= (E^*u)_y -nv_x=nv_y-nv_x \Rightarrow v_x=v_y.$$ \item Let $x\in \Gamma \setminus \Gamma(3)$ and $r=\text{ord}(x) \not\equiv 0 \pmod 3$. Fix a primitive $r^{th}$ root $\omega$ of unity, and express each entry of $E_x$ in the form $\omega^j$ for some $j\in \{ 0,1,...,r-1\}$. Thus $$nv_x= (E^*u)_x= \sum_{j=0}^{r-1} a_j \omega^j,$$ where $a_j\in \mathbb{Q}$ for all $j$.
Thus $\omega$ is a root of the polynomial $p(x)= \sum\limits_{j=0}^{r-1} a_j x^j-nv_x \in \mathbb{Q}(\omega_3)[x]$. Therefore, $p(x)$ is a multiple of the irreducible polynomial $\Phi_r(x)$, and so $\omega^{-1}$ is also a root of $p(x)$. Since $\psi_{a}(-x)=\psi_{a}(x)^{-1}$ for all $a\in \Gamma$, we have $(E^*u)_{-x}= \sum\limits_{j=0}^{r-1} a_j \omega^{-j}$. Hence $$0 =p(\omega^{-1})= \sum\limits_{j=0}^{r-1} a_j \omega^{-j}-nv_x= (E^*u)_{-x} -nv_x=nv_{-x}-nv_x,$$ which implies that $v_x=v_{-x}$. This together with Part $(i)$ implies that $\Re(v_x)=\Re(v_{-x})$, and that $\Im(v_x)=\Im(v_{-x})=0$ for all $x\in \Gamma \setminus \Gamma(3)$. \qedhere \end{enumerate} \end{proof} \begin{theorem}\label{neccori} Let $\Gamma$ be an abelian group. The oriented Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral if and only if $S \in \mathbb{E}(\Gamma)$. \end{theorem} \begin{proof} Assume that the oriented Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral. If $\Gamma(3) = \emptyset$ then by Theorem~\ref{ori4}, we have $S = \emptyset$, and so $S \in \mathbb{E}(\Gamma)$. Now assume that $\Gamma(3) \neq \emptyset$. Let $v$ be the vector in $\mathbb{Q}(\omega_3)^n$ whose coordinates are indexed by the elements of $\Gamma$, and the $x^{th}$ coordinate of $v$ is given by $$v_x = \left\{ \begin{array}{rl} \omega_6 &\mbox{ if } x \in S \\ \omega_6^5 & \mbox{ if } x \in S^{-1}\\ 0 &\textnormal{ otherwise.} \end{array}\right.$$ We have \[(Ev)_a=\sum\limits_{x\in \Gamma}E_{a,x}v_x =\sum\limits_{x\in S}\omega_6 E_{a,x}+ \sum\limits_{x\in S^{-1}} \omega_6^5 E_{a,x}=\sum\limits_{x\in S}\left( \omega_6 \psi_a(x)+ \omega_6^5 \psi_a(-x)\right).\] Thus $(Ev)_a$ is an HS-eigenvalue of the HS-integral oriented Cayley graph $\text{Cay}(\Gamma, S)$ for each $a\in \Gamma$. Therefore $Ev \in \mathbb{Q}^n$, and hence all three conditions of Lemma~\ref{Ori4Nec} hold.
By the third condition of Lemma~\ref{Ori4Nec}, $v_x=0$ for all $x\in \Gamma \setminus\Gamma(3)$, and so we must have $S \cup S^{-1} \subseteq \Gamma(3)$. Again, let $x \in S$, $y \in \Gamma(3)$ and $x \approx y$. The second condition of Lemma~\ref{Ori4Nec} gives $v_x=v_y$, which implies that $y\in S$. Thus $x \in S$ implies $\langle\!\langle x \rangle\!\rangle \subseteq S$. Hence $S\in \mathbb{E}(\Gamma)$. The converse part follows from Theorem~\ref{OrientedChara}. \end{proof} The following example illustrates Theorem~\ref{neccori}. \begin{ex}\label{ex1} Consider $\Gamma= \mathbb{Z}_3 \times \mathbb{Z}_3$ and $S=\{ (0,1), (2,0)\}$. The oriented graph $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ is shown in Figure~\ref{a}. We see that $\langle\!\langle (0,1) \rangle\!\rangle=\{(0,1)\}$ and $\langle\!\langle (2,0)\rangle\!\rangle=\{(2,0)\}$. Therefore $S \in \mathbb{E}(\Gamma)$. Further, using Corollary~\ref{OriEig} and Equation~\ref{character}, the HS-eigenvalues of $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ are obtained as $$\mu_\alpha= [\omega_6\psi_{\alpha}(0,1) + \omega_6^5 \psi_{\alpha}(0,2)] + [\omega_6 \psi_{\alpha}(2,0) + \omega_6^5 \psi_{\alpha}(1,0)] ~~\text{for each }\alpha \in \mathbb{Z}_3 \times \mathbb{Z}_3 ,$$ where $$\psi_{\alpha}(x) =\omega_3^{\alpha_1 x_1}\omega_3^{\alpha_2 x_2} \text{ for all } \alpha=(\alpha_1,\alpha_2),x=(x_1,x_2)\in \mathbb{Z}_3 \times \mathbb{Z}_3.$$ It can be seen that $\mu_{(0,0)}=2,\mu_{(0,1)}=-1,\mu_{(0,2)}=2,\mu_{(1,0)}=2,\mu_{(1,1)}=-1,\mu_{(1,2)}=2$, $\mu_{(2,0)}=-1$, $\mu_{(2,1)}=-4$ and $\mu_{(2,2)}=-1$. Thus $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ is HS-integral. \end{ex} \begin{figure} \caption{$S=\{ (0,1), (2,0)\}$}\label{a} \caption{$S=\{ (0,1),(1,0), (2,0)\}$}\label{b} \caption{The graph $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$}\label{main} \end{figure} \begin{lema}\label{CharaNewIntegSum} Let $S$ be a skew-symmetric subset of an abelian group $\Gamma$ and $t(\neq 0) \in \mathbb{Q}$.
If $\sum\limits_{s\in S}it\sqrt{3} (\psi_{\alpha}(s)- \psi_{\alpha}(-s))$ is an integer for each $\alpha \in \Gamma$, then $S \in \mathbb{E}(\Gamma)$. \end{lema} \begin{proof} Let $v$ be the vector, whose coordinates are indexed by the elements of $\Gamma$, defined by $$v_x= \left\{ \begin{array}{cl} it\sqrt{3} & \mbox{if } x\in S \\ -it\sqrt{3} & \mbox{if } x\in S^{-1}\\ 0 & \mbox{otherwise}. \end{array}\right.$$ Since $v\in \mathbb{Q}(\omega_3)^n$ and the $\alpha$-th coordinate of $Ev$ is $\sum\limits_{s\in S} it\sqrt{3} ( \psi_{\alpha}(s)-\psi_{\alpha}(-s))$, we have $Ev \in \mathbb{Q}^n$. By the third condition of Lemma~\ref{Ori4Nec}, $\Im(v_x)=0$, and so $v_x=0$ for all $x\in \Gamma \setminus\Gamma(3)$. Thus we must have $S \cup S^{-1} \subseteq \Gamma(3)$. Again, let $x \in S$, $y \in \Gamma(3)$ and $x \approx y$. The second condition of Lemma~\ref{Ori4Nec} gives $v_x=v_y$, which implies that $y\in S$. Thus $x \in S$ implies $\langle\!\langle x \rangle\!\rangle \subseteq S$. Hence $S\in \mathbb{E}(\Gamma)$. \end{proof} \begin{lema}\label{Sqrt3NecessIntSum} Let $S$ be a skew-symmetric subset of an abelian group $\Gamma$ and $t(\neq 0) \in \mathbb{Q}$. If $\sum\limits_{s\in S}it\sqrt{3} ( \psi_{\alpha}(s)- \psi_{\alpha}(-s))$ is an integer for each $\alpha \in \Gamma$, then $\sum\limits_{s\in S\cup S^{-1}} \psi_{\alpha}(s)$ is an integer for each $\alpha \in \Gamma$. \end{lema} \begin{proof} Assume that $\sum\limits_{s\in S}it\sqrt{3} (\psi_{\alpha}(s)-\psi_{\alpha}(-s))$ is an integer for each $\alpha \in \Gamma$. By Lemma~\ref{CharaNewIntegSum} we have $S \in \mathbb{E}(\Gamma)$, and so $S=\langle\!\langle x_1 \rangle\!\rangle\cup...\cup \langle\!\langle x_k \rangle\!\rangle$ for some $x_1,...,x_k\in \Gamma(3)$. Therefore, using Lemma~\ref{lemanecc} we get $S \cup S^{-1}=[x_1] \cup ...\cup[x_k] \in \mathbb{B}(\Gamma)$.
Thus by Theorem~\ref{Cayint}, $\text{Cay}(\Gamma, S \cup S^{-1})$ is integral, that is, $\sum\limits_{s\in S\cup S^{-1}} \psi_{\alpha}(s)$ is an integer for each $\alpha \in \Gamma$. \end{proof} \begin{lema}\label{SeperatIntegMixedGraph} Let $\Gamma$ be an abelian group. The mixed Cayley graph $\text{Cay}(\Gamma,S)$ is HS-integral if and only if $\text{Cay}(\Gamma,{S\setminus \overline{S}})$ is integral and $\text{Cay}(\Gamma, {\overline{S}})$ is HS-integral. \end{lema} \begin{proof} Assume that the mixed Cayley graph $\text{Cay}(\Gamma,S)$ is HS-integral. Let the HS-spectrum of $\text{Cay}(\Gamma,S)$ be $\{\gamma_{\alpha}: \alpha \in \Gamma\}$, where $\gamma_{\alpha}=\lambda_{\alpha} +\mu_{\alpha}$, $$\lambda_{\alpha}= \sum\limits_{s \in S\setminus \overline{S}} \psi_{\alpha}(s) \textnormal{ and } \mu_{\alpha}= \sum\limits_{s \in \overline{S}} (\omega_6 \psi_{\alpha}(s) + \omega_6^5 \psi_{\alpha}(-s)), \textnormal{ for } \alpha \in \Gamma.$$ Note that $\{\lambda_{\alpha}: \alpha \in \Gamma\}$ is the spectrum of $\text{Cay}(\Gamma, S\setminus \overline{S})$ and $\{\mu_{\alpha}: \alpha \in \Gamma\}$ is the HS-spectrum of $\text{Cay}(\Gamma,\overline{S})$. By assumption $\gamma_{\alpha} \in \mathbb{Z}$, and so $\gamma_{\alpha} - \gamma_{-\alpha}= \sum\limits_{s\in \overline{S}}i\sqrt{3}(\psi_{\alpha}(s)-\psi_{\alpha}(-s)) \in \mathbb{Z}$ for all $\alpha \in \Gamma$. By Lemma~\ref{Sqrt3NecessIntSum}, we get $\sum\limits_{ s \in \overline{S} \cup \overline{S}^{-1}}\psi_{\alpha}(s) \in \mathbb{Z}$ for all $\alpha \in \Gamma$. Note that $\mu_{\alpha}$ is an algebraic integer. Also \begin{align*} \mu_{\alpha}= \frac{1}{2} \sum\limits_{ s \in \overline{S} \cup \overline{S}^{-1}}\psi_{\alpha}(s) + \frac{1}{2} \sum\limits_{s\in \overline{S}}i\sqrt{3}(\psi_{\alpha}(s)-\psi_{\alpha}(-s))\in \mathbb{Q}. \end{align*} Hence $\mu_{\alpha}$ is an integer for each $\alpha \in \Gamma$. Thus $\text{Cay}(\Gamma,\overline{S})$ is HS-integral.
Now we have $\gamma_{\alpha}, \mu_{\alpha} \in \mathbb{Z}$, and so $\lambda_{\alpha} = \gamma_{\alpha} -\mu_{\alpha} \in \mathbb{Z}$ for each $\alpha \in \Gamma$. Hence $\text{Cay}(\Gamma,S\setminus \overline{S})$ is integral. Conversely, assume that $\text{Cay}(\Gamma,S\setminus \overline{S})$ is integral and $\text{Cay}(\Gamma, \overline{S})$ is HS-integral. Then Lemma~\ref{imcgoa3} implies that $\text{Cay}(\Gamma,S)$ is HS-integral. \end{proof} \begin{theorem}\label{CharaHSintMixed} Let $\Gamma$ be an abelian group. The mixed Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral if and only if $S \setminus \overline{S} \in \mathbb{B}(\Gamma)$ and $\overline{S} \in \mathbb{E}(\Gamma)$. \end{theorem} \begin{proof} By Lemma~\ref{SeperatIntegMixedGraph}, the mixed Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral if and only if $\text{Cay}(\Gamma, S \setminus \overline{S})$ is integral and $\text{Cay}(\Gamma, \overline{S})$ is HS-integral. Note that $S \setminus \overline{S}$ is a symmetric set and $\overline{S}$ is a skew-symmetric set. Thus by Theorem~\ref{Cayint}, $\text{Cay}(\Gamma, S \setminus \overline{S})$ is integral if and only if $S \setminus \overline{S} \in \mathbb{B}(\Gamma)$. By Theorem~\ref{neccori}, $\text{Cay}(\Gamma, \overline{S})$ is HS-integral if and only if $\overline{S} \in \mathbb{E}(\Gamma)$. Hence the result follows. \end{proof} The following example illustrates Theorem~\ref{CharaHSintMixed}. \begin{ex}\label{ex2} Consider $\Gamma= \mathbb{Z}_3 \times \mathbb{Z}_3$ and $S=\{ (0,1),(1,0), (2,0)\}$. The mixed graph $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ is shown in Figure~\ref{b}. Here $\overline{S}=\{(0,1)\}=\langle\!\langle (0,1)\rangle\!\rangle\in \mathbb{E}(\Gamma)$ and $S\setminus\overline{S}=\{(1,0),(2,0)\} =[(1,0)]\in \mathbb{B}(\Gamma)$.
Further, using Lemma~\ref{imcgoa3} and Equation~\ref{character}, the HS-eigenvalues of $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ are obtained as $$\gamma_\alpha=[\psi_{\alpha}(1,0) + \psi_{\alpha}(2,0)] + [\omega_6 \psi_{\alpha}(0,1) + \omega_6^5 \psi_{\alpha}(0,2)] ~~\text{for each }\alpha \in \mathbb{Z}_3 \times \mathbb{Z}_3.$$ One can see that $\gamma_{(0,1)}=\gamma_{(1,0)}= \gamma_{(1,2)}= \gamma_{(2,0)}=\gamma_{(2,2)}=0$, $\gamma_{(0,0)}=\gamma_{(0,2)}=3$ and $\gamma_{(1,1)} = \gamma_{(2,1)}=-3$. Thus $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ is HS-integral. \end{ex} \section{Characterization of Eisenstein integral mixed Cayley graphs over abelian groups}\label{sec5} Let $\Gamma$ be a finite abelian group of order $n$. For a subset $S\subseteq \Gamma$ with $0\notin S$, consider the function $\chi:\Gamma \rightarrow \{0,1\}$ defined by \[\chi(s)= \left\{ \begin{array}{rl} 1 & \mbox{if } s\in S \\ 0 & \mbox{otherwise}, \end{array}\right.\] in Theorem~\ref{EigNorColCayMix}. We see that $\sum\limits_{s \in S} \psi_{\alpha}(s)$ is an eigenvalue of the mixed Cayley graph $\text{Cay}(\Gamma, S)$ for all $\alpha \in \Gamma$. For $x \in \Gamma, y\in \Gamma(3)$ and $\alpha \in \Gamma$, define $$C_x(\alpha)=\sum_{s \in [x] } \psi_{\alpha}(s)~~\text{ and }~~ T_y(\alpha)= \sum_{s \in \langle\!\langle y \rangle\!\rangle} i\sqrt{3}( \psi_{\alpha}(s)-\psi_{\alpha}(-s)).$$ Note that $C_x(\alpha)$ is an eigenvalue of the mixed Cayley graph $\text{Cay}(\Gamma, [x])$ for each $\alpha \in \Gamma$. \begin{lema}\label{Tn(q)IsIntegerForAll} Let $x \in \Gamma(3)$. Then $T_x(\alpha)$ is an integer for each $\alpha \in \Gamma$.
\end{lema} \begin{proof} We have \begin{equation*} \begin{split} Z_x(\alpha) = \sum\limits_{s \in \langle\!\langle x \rangle\!\rangle}\left( \omega_6\psi_{\alpha}(s) + \omega_6^5 \psi_{\alpha}(-s)\right) &= \frac{1}{2} \sum\limits_{s\in [x]}\psi_{\alpha}(s) + \frac{i\sqrt{3}}{2} \sum\limits_{s \in \langle\!\langle x \rangle\!\rangle} (\psi_{\alpha}(s) - \psi_{\alpha}(-s))\\ &= \frac{C_x(\alpha)}{2} + \frac{T_x(\alpha)}{2}.\\ \end{split} \end{equation*} By Lemma~\ref{integerEigenvalue}, $T_x(\alpha)= 2 Z_x(\alpha) - C_x(\alpha)$ is an integer for each $\alpha \in \Gamma$. \end{proof} \begin{lema} Let $\Gamma$ be a finite abelian group and the order of $x\in \Gamma(3)$ be $3m$, with $m \not\equiv 0 \Mod 3$. Then $$x^m [x^3] = \left\{ \begin{array}{ll} \langle\!\langle x \rangle\!\rangle & \mbox{if } m \equiv 1 \Mod 3 \\ \langle\!\langle -x \rangle\!\rangle & \mbox{if } m \equiv 2 \Mod 3. \end{array}\right. $$ \end{lema} \begin{proof} Assume that $m \equiv 1 \Mod 3$. Let $y \in x^m [x^3]$. Then $y=x^{m+3r}$ for some $r\in G_m(1)$. We have $\gcd(r,m)=1$, which together with $m \equiv 1 \Mod 3$ implies that $\gcd(m+3r, 3m)=1$ and $m+3r \equiv 1 \Mod 3$. Thus $x^m [x^3] \subseteq \langle\!\langle x \rangle\!\rangle$. Since $x^m [x^3]$ and $\langle\!\langle x \rangle\!\rangle$ have the same size, $x^m [x^3] = \langle\!\langle x \rangle\!\rangle$. Similarly, if $m \equiv 2 \Mod 3$ then we have $x^m [x^3] = \langle\!\langle -x \rangle\!\rangle$. \end{proof} \begin{lema}\label{Sum3mequalRamanujan} Let $\Gamma$ be a finite abelian group and the order of $x\in \Gamma(3)$ be $3m$, with $m \not\equiv 0 \Mod 3$. Then, for all $\alpha \in \Gamma$, \begin{equation*} \begin{split} T_x(\alpha) = \left\{ \begin{array}{cl} \pm3 C_{x^3}(\alpha) & \mbox{if } \psi_{\alpha}(x^m) \neq 1 \\ 0 & \mbox{otherwise.} \end{array}\right. \end{split} \end{equation*}
\end{lema} \begin{proof} We have \begin{equation*} \begin{split} T_x(\alpha) &= i\sqrt{3} \sum\limits_{s \in \langle\!\langle x \rangle\!\rangle} (\psi_{\alpha}(s) - \psi_{\alpha}(-s)) \\ &= \left\{ \begin{array}{rl} i\sqrt{3} \sum\limits_{s \in \langle\!\langle x \rangle\!\rangle} (\psi_{\alpha}(s) - \psi_{\alpha}(-s)) & \mbox{if } m \equiv 1 \Mod 3 \\ -i\sqrt{3} \sum\limits_{s \in \langle\!\langle -x \rangle\!\rangle} (\psi_{\alpha}(s) - \psi_{\alpha}(-s)) & \mbox{if } m \equiv 2 \Mod 3 \end{array}\right.\\ &= \left\{ \begin{array}{rl} i\sqrt{3} \sum\limits_{s \in x^m [x^3]} (\psi_{\alpha}(s) - \psi_{\alpha}(-s)) & \mbox{if } m \equiv 1 \Mod 3 \\ -i\sqrt{3} \sum\limits_{s \in x^m [x^3]} (\psi_{\alpha}(s) - \psi_{\alpha}(-s)) & \mbox{if } m \equiv 2 \Mod 3 \end{array}\right.\\ &= \left\{ \begin{array}{rl} i\sqrt{3} \sum\limits_{s \in [x^3]} (\psi_{\alpha}(x^m) \psi_{\alpha}(s) - \psi_{\alpha}(-x^{m}) \psi_{\alpha}(-s)) & \mbox{if } m \equiv 1 \Mod 3 \\ -i\sqrt{3} \sum\limits_{s \in [x^3]} (\psi_{\alpha}(x^m) \psi_{\alpha}(s) - \psi_{\alpha}(-x^m) \psi_{\alpha}(-s)) & \mbox{if } m \equiv 2 \Mod 3 \end{array}\right.\\ &= \left\{ \begin{array}{rl} -2\sqrt{3} \Im(\psi_{\alpha}(x^m)) \sum\limits_{s \in [x^3]} \psi_{\alpha}(s) & \mbox{if } m \equiv 1 \Mod 3 \\ 2\sqrt{3} \Im(\psi_{\alpha}(x^m)) \sum\limits_{s \in [x^3]} \psi_{\alpha}(s) & \mbox{if } m \equiv 2 \Mod 3 \end{array}\right.\\ &=\pm 2\sqrt{3} \Im(\psi_{\alpha}(x^m)) C_{x^3}(\alpha). \end{split} \end{equation*} Since $\psi_{\alpha}(x^m)$ is a cube root of unity, $\Im(\psi_{\alpha}(x^m))=0$ or $\pm \frac{\sqrt{3}}{2}$. Thus \begin{equation*} \begin{split} T_x(\alpha) = \left\{ \begin{array}{cl} \pm3 C_{x^3}(\alpha) & \mbox{if } \psi_{\alpha}(x^m) \neq 1 \\ 0 & \mbox{otherwise.} \end{array}\right. \end{split} \end{equation*} \end{proof} \begin{lema}\label{Tn(q)SumEqualTo3TimesSum} Let $\Gamma$ be a finite abelian group and the order of $x\in \Gamma(3)$ be $k=3^tm$, with $m \not\equiv 0 \Mod 3$ and $t\geq 2$.
Then $$T_x(\alpha)= \left\{ \begin{array}{ll} 3 \sqrt{3} i \sum\limits_{r \in G_{3^{t-1}m,3}^1(1)} (\psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r)) & \mbox{if } \psi_{\alpha}(x^{\frac{k}{3}}) =1 \\ 0 & \mbox{otherwise.} \end{array}\right.$$ \end{lema} \begin{proof} Note that $ G_{k,3}^1(1)= G_{\frac{k}{3},3}^1(1) \cup \left(\frac{k}{3} + G_{\frac{k}{3},3}^1(1)\right) \cup \left(\frac{2k}{3} + G_{\frac{k}{3},3}^1(1)\right)$. Therefore \begin{equation*} \begin{split} T_x(\alpha) =& i\sqrt{3} \sum\limits_{s \in \langle\!\langle x \rangle\!\rangle} (\psi_{\alpha}(s) - \psi_{\alpha}(-s)) \\ =& i\sqrt{3} \sum\limits_{r \in G_{k,3}^1(1) } (\psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r)) \\ =& i\sqrt{3} \bigg[\sum\limits_{r \in G_{\frac{k}{3},3}^1(1)} (\psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r)) + \sum\limits_{r \in G_{\frac{k}{3},3}^1(1)} (\psi_{\alpha}(x^{\frac{k}{3}}) \psi_{\alpha}(x^r) - \psi_{\alpha}(x^{\frac{2k}{3}}) \psi_{\alpha}(-x^r)) \\ &+ \sum\limits_{r \in G_{\frac{k}{3},3}^1(1)} (\psi_{\alpha}(x^{\frac{2k}{3}}) \psi_{\alpha}(x^r) - \psi_{\alpha}(x^{\frac{k}{3}}) \psi_{\alpha}(-x^r))\bigg]\\ =& i\sqrt{3} \bigg[ \sum\limits_{r \in G_{\frac{k}{3},3}^1(1) } (\psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r)) + \psi_{\alpha}(x^{\frac{k}{3}}) \sum\limits_{r \in G_{\frac{k}{3},3}^1(1) } ( \psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r)) \\ &+\psi_{\alpha}(x^{\frac{2k}{3}}) \sum\limits_{r \in G_{\frac{k}{3},3}^1(1)} ( \psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r))\bigg]\\ =& i\sqrt{3} (1+ \psi_{\alpha}(x^{\frac{k}{3}}) + \psi_{\alpha}(x^{\frac{2k}{3}})) \sum\limits_{r \in G_{\frac{k}{3},3}^1(1)} (\psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r))\\ =& \left\{ \begin{array}{cl} 3 \sqrt{3} i \sum\limits_{r \in G_{\frac{k}{3},3}^1(1)} (\psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r)) & \mbox{if } \psi_{\alpha}(x^{\frac{k}{3}}) =1 \\ 0 & \mbox{otherwise.} \end{array}\right. \end{split} \end{equation*} \end{proof} \begin{lema}\label{3DividesTn(q)} Let $\Gamma$ be a finite abelian group and $x\in \Gamma(3)$.
Then $\frac{T_x(\alpha)}{3}$ is an integer for each $\alpha \in \Gamma$. \end{lema} \begin{proof} Let $x\in \Gamma(3)$ and let the order of $x$ be $k=3^tm$ with $m\not\equiv 0 \Mod 3$ and $t\geq 1$. If $t=1$ then by Lemma~\ref{Sum3mequalRamanujan}, $\frac{T_x(\alpha)}{3}$ is an integer for each $\alpha \in \Gamma$. Assume that $t\geq 2$. If $\psi_{\alpha}(x^{\frac{k}{3}}) \neq1$ then by Lemma~\ref{Tn(q)SumEqualTo3TimesSum}, $\frac{T_x(\alpha)}{3}$ is an integer for each $\alpha \in \Gamma$. If $\psi_{\alpha}(x^{\frac{k}{3}})=1$ then by Lemma~\ref{Tn(q)IsIntegerForAll} and Lemma~\ref{Tn(q)SumEqualTo3TimesSum}, $i\sqrt{3} \sum\limits_{r \in G_{\frac{k}{3},3}^1(1)} (\psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r))$ is a rational algebraic integer, and hence an integer for each $\alpha \in \Gamma$; since this quantity equals $\frac{T_x(\alpha)}{3}$, the claim follows in this case as well. \end{proof} \begin{lema}\label{3DividesTn(q)SameParity} Let $\Gamma$ be a finite abelian group and $x\in \Gamma(3)$. Then $C_x(\alpha)$ and $\frac{T_x(\alpha)}{3}$ are integers of the same parity for each $\alpha \in \Gamma$. \end{lema} \begin{proof} Let $x\in \Gamma(3)$ and $\alpha \in \Gamma$. By Lemma~\ref{integerEigenvalue}, $T_x(\alpha)+ C_x(\alpha)= 2 Z_x(\alpha)$ is an even integer, therefore $T_x(\alpha)$ and $C_x(\alpha)$ are integers of the same parity. By Lemma~\ref{3DividesTn(q)}, $\frac{T_x(\alpha)}{3}$ is an integer. Hence $C_x(\alpha)$ and $\frac{T_x(\alpha)}{3}$ are integers of the same parity. \end{proof} Let $S$ be a subset of $\Gamma$. For each $\alpha \in \Gamma$, define $$f_{\alpha}(S) = \sum_{s \in S \setminus \overline{S}} \psi_{\alpha}(s) \hspace{0.5cm}\textnormal{and}\hspace{0.5cm} g_{\alpha}(S)= \sum_{s \in \overline{S}}(\omega \psi_{\alpha}(s) + \overline{\omega}\psi_{\alpha}(-s)),$$ where $\omega=\frac{1}{2} - \frac{i\sqrt{3}}{6}$. It is clear that $f_{\alpha}(S)$ and $g_{\alpha}(S)$ are real numbers.
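As a quick numerical sanity check (our own illustration in Python, not part of the formal development), one can verify these definitions on the set $S=\{(0,1),(1,0),(2,0)\}\subseteq \mathbb{Z}_3\times\mathbb{Z}_3$ of Example~\ref{ex2}: the computed $f_{\alpha}(S)$ and $g_{\alpha}(S)$ come out real for every $\alpha$.

```python
import cmath

# Sanity check for f_alpha(S) and g_alpha(S) on Z_3 x Z_3 with
# S = {(0,1),(1,0),(2,0)} (the set of Example ex2 in the text).
w3 = cmath.exp(2j * cmath.pi / 3)          # omega_3, primitive cube root of unity
w = 0.5 - 1j * 3 ** 0.5 / 6                # omega = 1/2 - i*sqrt(3)/6

def psi(alpha, x):
    """Character psi_alpha(x) of Z_3 x Z_3."""
    return w3 ** ((alpha[0] * x[0] + alpha[1] * x[1]) % 3)

S = {(0, 1), (1, 0), (2, 0)}
neg = lambda x: ((-x[0]) % 3, (-x[1]) % 3)
Sbar = {s for s in S if neg(s) not in S}   # skew-symmetric part of S

for a0 in range(3):
    for a1 in range(3):
        alpha = (a0, a1)
        f = sum(psi(alpha, s) for s in S - Sbar)
        g = sum(w * psi(alpha, s) + w.conjugate() * psi(alpha, neg(s)) for s in Sbar)
        # both quantities must be real, as claimed in the text
        assert abs(f.imag) < 1e-9 and abs(g.imag) < 1e-9
```

Here $\overline{S}=\{(0,1)\}$ is detected automatically, and the assertions confirm the realness claim for all nine characters.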
We have $$\sum_{s \in S} \psi_{\alpha}(s)= f_{\alpha}(S)+ g_{\alpha}(S) + \left( \frac{-1}{2} + \frac{i\sqrt{3}}{2} \right)( g_{\alpha}(S) - g_{-\alpha}(S) ).$$ Note that $f_{\alpha}(S)=f_{-\alpha}(S)$ for each $\alpha \in \Gamma$. Therefore if $f_{\alpha}(S) + g_{\alpha}(S)$ is an integer for each $\alpha\in \Gamma$, then $g_{\alpha}(S)-g_{-\alpha}(S)= \left[ f_{\alpha}(S)+g_{\alpha}(S)\right]-\left[ f_{-\alpha}(S)+g_{-\alpha}(S)\right]$ is also an integer for each $\alpha \in \Gamma$. Hence, the mixed Cayley graph $\text{Cay}(\Gamma,S)$ is Eisenstein integral if and only if $f_{\alpha}(S) + g_{\alpha}(S)$ is an integer for each $\alpha \in \Gamma$. \begin{lema}\label{CharaEisensteinIntegral} Let $S$ be a subset of a finite abelian group $\Gamma$ with $0 \not\in S$. Then the mixed Cayley graph $\text{Cay}(\Gamma,S)$ is Eisenstein integral if and only if $2 f_{\alpha}(S)$ and $2 g_{\alpha}(S)$ are integers of the same parity for each $\alpha \in \Gamma$. \end{lema} \begin{proof} Suppose the mixed Cayley graph $\text{Cay}(\Gamma,S)$ is Eisenstein integral and $\alpha \in \Gamma$. Then $f_{\alpha}(S) + g_{\alpha}(S)$ and $g_{\alpha}(S)-g_{-\alpha}(S)= \sum\limits_{s\in \overline{S}} \frac{-1}{3}\left[i\sqrt{3}\left(\psi_{\alpha}(s)-\psi_{\alpha}(-s)\right)\right]$ are integers. By Lemma~\ref{Sqrt3NecessIntSum}, $\sum\limits_{s\in \overline{S}\cup \overline{S}^{-1}} \psi_{\alpha}(s) \in \mathbb{Z}$. Since $$2 g_{\alpha}(S)= \sum\limits_{s\in \overline{S}\cup \overline{S}^{-1}} \psi_{\alpha}(s)- \sum\limits_{s\in \overline{S}}\frac{i\sqrt{3}}{3} (\psi_{\alpha}(s)- \psi_{\alpha}(-s)),$$ we find that $2 g_{\alpha}(S)$ is an integer. Therefore, $2 f_{\alpha}(S)=2(f_{\alpha}(S)+g_{\alpha}(S))-2g_{\alpha}(S)$ is also an integer, of the same parity as $2g_{\alpha}(S)$. Conversely, assume that $2 f_{\alpha}(S)$ and $2 g_{\alpha}(S)$ are integers of the same parity for each $\alpha \in \Gamma$. Then $f_{\alpha}(S) + g_{\alpha}(S)$ is an integer for each $\alpha \in \Gamma$.
Hence the mixed Cayley graph $\text{Cay}(\Gamma,S)$ is Eisenstein integral. \end{proof} \begin{lema}\label{CharaEisensteinIntegral1} Let $S$ be a subset of a finite abelian group $\Gamma$ with $0 \not\in S$. Then the mixed Cayley graph $\text{Cay}(\Gamma,S)$ is Eisenstein integral if and only if $f_{\alpha}(S)$ and $g_{\alpha}(S)$ are integers for each $\alpha \in \Gamma$. \end{lema} \begin{proof} By Lemma~\ref{CharaEisensteinIntegral}, it is enough to show that $2 f_{\alpha}(S)$ and $2 g_{\alpha}(S)$ are integers of the same parity if and only if $f_{\alpha}(S)$ and $g_{\alpha}(S)$ are integers. If $f_{\alpha}(S)$ and $g_{\alpha}(S)$ are integers, then clearly $2 f_{\alpha}(S)$ and $2 g_{\alpha}(S)$ are even integers. Conversely, assume that $2 f_{\alpha}(S)$ and $2 g_{\alpha}(S)$ are integers of the same parity. Since $f_{\alpha}(S)$ is an algebraic integer, the integrality of $2 f_{\alpha}(S)$ implies that $f_{\alpha}(S)$ is an integer. Thus $2 f_{\alpha}(S)$ is even, and so by the assumption $2 g_{\alpha}(S)$ is also even. Hence $g_{\alpha}(S)$ is an integer. \end{proof} \begin{theorem}\label{MinCharacEisensteinInteg} Let $S$ be a subset of a finite abelian group $\Gamma$ with $0\notin S$. Then the mixed Cayley graph $\text{Cay}(\Gamma,S)$ is Eisenstein integral if and only if $\text{Cay}(\Gamma,S)$ is HS-integral. \end{theorem} \begin{proof} By Lemma~\ref{CharaEisensteinIntegral1}, it is enough to show that $f_{\alpha}(S)$ and $g_{\alpha}(S)$ are integers for each $\alpha \in \Gamma$ if and only if $\text{Cay}(\Gamma,S)$ is HS-integral. Note that $f_{\alpha}(S)$ is an eigenvalue of the Cayley graph $\text{Cay}(\Gamma,S\setminus \overline{S})$. By Theorem~\ref{Cayint}, $f_{\alpha}(S)$ is an integer for each $\alpha \in \Gamma$ if and only if $S\setminus \overline{S}\in \mathbb{B}(\Gamma)$. Assume that $f_{\alpha}(S)$ and $g_{\alpha}(S)$ are integers for each $\alpha \in \Gamma$.
Then $ \sum\limits_{s\in \overline{S}} \frac{-i\sqrt{3}}{3}( \psi_{\alpha}(s)- \psi_{\alpha}(-s))=g_{\alpha}(S)-g_{-\alpha}(S)$ is also an integer for each $\alpha \in \Gamma$. Using Theorem~\ref{Cayint} and Lemma~\ref{CharaNewIntegSum}, we see that $S \setminus \overline{S}$ and $\overline{S}$ satisfy the conditions of Theorem~\ref{CharaHSintMixed}. Hence $\text{Cay}(\Gamma,S)$ is HS-integral. Conversely, assume that $\text{Cay}(\Gamma,S)$ is HS-integral. Then $\text{Cay}(\Gamma,S\setminus \overline{S})$ is integral, and hence $f_{\alpha}(S)$ is an integer for each $\alpha \in \Gamma$. By Theorem~\ref{CharaHSintMixed}, we have $\overline{S} \in \mathbb{E}(\Gamma)$, and so $\overline{S}= \langle\!\langle x_1 \rangle\!\rangle \cup ...\cup \langle\!\langle x_k \rangle\!\rangle $ for some $x_1,...,x_k \in \Gamma (3)$. Then \begin{equation*} \begin{split} g_{\alpha}(S)&=\frac{1}{2} \sum\limits_{s\in \overline{S}\cup \overline{S}^{-1}}\psi_{\alpha}(s) -\frac{1}{6} \sum\limits_{s\in \overline{S}}i \sqrt{3}\left(\psi_{\alpha}(s)-\psi_{\alpha}(-s)\right)\\ &=\frac{1}{2} \sum\limits_{j=1}^{k} \sum\limits_{s\in [ x_j ] }\psi_{\alpha}(s) -\frac{1}{6} \sum\limits_{j=1}^{k} \sum\limits_{s\in \langle\!\langle x_j \rangle\!\rangle}i \sqrt{3}\left(\psi_{\alpha}(s)-\psi_{\alpha}(-s)\right)\\ &= \frac{1}{2} \sum\limits_{j=1}^{k} C_{x_j}(\alpha) - \frac{1}{6} \sum\limits_{j=1}^{k} T_{x_j}(\alpha)\\ &= \frac{1}{2} \sum\limits_{j=1}^{k} \left( C_{x_j}(\alpha) - \frac{1}{3} T_{x_j}(\alpha) \right). \end{split} \end{equation*} By Lemma~\ref{3DividesTn(q)SameParity}, $C_{x_j}(\alpha) - \frac{1}{3} T_{x_j}(\alpha)$ is an even integer for each $j\in \{1,\ldots,k\}$. Hence $g_{\alpha}(S)$ is an integer for each $\alpha \in \Gamma$. \end{proof} The following example illustrates Theorem~\ref{MinCharacEisensteinInteg}. \begin{ex} Consider the HS-integral graph $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ of Example~\ref{ex2}. 
By Theorem~\ref{MinCharacEisensteinInteg}, the graph $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ is Eisenstein integral. Indeed, the eigenvalues of $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ are obtained as \begin{equation*} \begin{split} \gamma_{\alpha}=\psi_{\alpha}(0,1)+\psi_{\alpha}(1,0) + \psi_{\alpha}(2,0). \end{split}\end{equation*} We have $\gamma_{(0,0)}=3, \gamma_{(0,1)}=2+ \omega_3, \gamma_{(0,2)}=1-\omega_3, \gamma_{(1,0)}=0, \gamma_{(1,1)}=-1+\omega_3,\gamma_{(1,2)}=-2-\omega_3, \gamma_{(2,0)}=0,$ $ \gamma_{(2,1)}=-1+\omega_3$ and $\gamma_{(2,2)}=-2-\omega_3$. Thus $\gamma_{\alpha}$ is an Eisenstein integer for each $\alpha \in \mathbb{Z}_3 \times \mathbb{Z}_3$. \qed \end{ex} \end{document}
\begin{document} \title{Coherent single-photon generation and trapping with practical cavity QED systems} \author{David Fattal$^{1,2}$, Raymond G.\ Beausoleil$^{1}$, and Yoshihisa Yamamoto$^{2}$} \affiliation{$^1$Hewlett-Packard Laboratories, 1501 Page Mill Road, Palo Alto, CA 94304} \affiliation{$^2$Quantum Entanglement Project, SORST, JST, Ginzton Laboratory, Stanford University, Stanford CA 94305} \email{[email protected]} \begin{abstract} We study analytically the dynamics of cavity QED nodes in a practical quantum network. Given a single 3-level $\Lambda$-type atom or quantum dot coupled to a micro-cavity, we derive several necessary and sufficient criteria for the coherent trapping and generation of a single photon pulse with a given waveform to be realizable. We prove that these processes can be performed with practical hardware --- such as cavity QED systems operating deep in the weak coupling regime --- given a set of restrictions on the single-photon pulse envelope. We systematically study the effects of spontaneous emission and spurious cavity decay on the transfer efficiency, including the case where more than one excited state participates in the dynamics. This work should open the way to very efficient optimizations of the operation of quantum networks. \end{abstract} \maketitle Photons, the elementary constituents of light, are fast and robust carriers of quantum information. Recently, techniques have been found to reversibly produce or trap photons one by one in matter. This inter-conversion capability potentially allows quantum information processing using the best of two worlds: the fast and reliable transport of quantum information via photons, and its storage and processing in matter where interactions between qubits can be made strong.
Optical cavities with the ability to concentrate the electro-magnetic field in small regions of space provide a natural interface for photonic and matter qubits. They constitute a key element of quantum networks\cite{ref:Qnet} in which cavity ``nodes'' communicate coherently via photonic channels\cite{ref:2}. The theory of photon absorption and trapping was partially worked out in \cite{ref:Sham_05}, extending the initial solution given in \cite{ref:Qnet}. This approach relies on a carefully designed classical ``control'' pulse driving a 3-level quantum system in a $\Lambda$ configuration, and numerically shows that (contrary to widely accepted belief) the photon transfer can sometimes be realized in a non-adiabatic (i.e., rapid) way. Until then, both theory\cite{ref:8} and experiments\cite{ref:3, ref:4, ref:5, ref:6,ref:7} aiming at coherent photon emission from a driven cavity-QED system were based on an adiabatic technique called Stimulated Raman Adiabatic Passage (STIRAP), requiring a slowly varying control pulse. Another widely accepted requirement for coherent photon transfer is the one of strong coupling between the cavity and the atomic system. This assumption was also challenged numerically in Ref.~\cite{ref:Imamoglu_04}, where it was shown that (adiabatic) coherent photon generation with high efficiency and indistinguishability was possible in the weak coupling regime. However, there remain a number of open questions regarding this technique: Exactly what kind of single photons can be generated and/or trapped? How fast can their envelopes vary? How far detuned from the cavity resonance can they be, and what Raman detunings can be tolerated in connected cavity nodes? How weak a coupling between cavity and atom can be tolerated, and at what expense in performance? 
Finally, how sensitive is the transfer technique to characteristics of non-ideal systems with spontaneous emission, spurious cavity decay, or where several excited states contribute to the dynamics? In this paper, we derive analytical formulas that answer all of these questions, and provide a solid basis for the optimization of quantum networks. When losses are neglected, we show that a single criterion --- given by Eq.~(\ref{crit}) below --- determines the existence condition for a control pulse achieving coherent photon transfer in a 3-level cavity-QED system. This criterion tells us which complex envelope functions $\alpha(t)$ of a single photon pulse are eligible for generation and/or trapping when the characteristics of the cavity-QED system are known. Interestingly, this criterion can be satisfied in the non-adiabatic or in the weak-coupling regime, in the presence of photon-cavity detuning, and for arbitrary Raman detuning $\Delta$ (see Fig.~\ref{fig:scheme}), suggesting that quantum networks could be operated even with highly deficient and heterogeneous nodes. When losses are included, we find another criterion for the existence of a control pulse that leaves the node and the waveguide unentangled after transfer, and we provide an analytical expression for the transfer efficiency. We show that very efficient transfer can be realized in the weak coupling regime provided the Purcell factor is sufficiently high, regardless of whether strong coupling is precluded by a low cavity $Q$ (as in solid-state systems) or by a large spontaneous emission rate (as in trapped ions and atoms) when compared to the vacuum Rabi frequency. We generalize the loss analysis to the case where other excited states participate in the dynamics of the transfer. \begin{figure}[htbp] \epsfig{file=scheme.eps, width=3in} \caption{Illustration of a node, a 3-level $\Lambda$ system in a single mode cavity.
The red arrow represents the photon trapping process, where a photon from the waveguide is absorbed by the node and induces the flip $\ket{g} \rightarrow \ket{e}$, if the correct control pulse $\Omega(t)$ is applied to the $e-r$ transition. The loss mechanisms are represented by the red wavy arrows.} \label{fig:scheme} \end{figure} Consider the system shown in Fig.~\ref{fig:scheme}, a 3-level $\Lambda$ atom with ground states $\ket{g}$ and $\ket{e}$ and excited state $\ket{r}$ encapsulated within a single-mode cavity. The cavity vacuum field and coherent control field couple levels $g-r$ and $e-r$ respectively, with Rabi frequency $g_0$ and $\frac{1}{2} \Omega(t)$, and common (Raman) detuning $\Delta$. The cavity mode with frequency $\omega_c$ leaks into a single waveguide mode, with a rate $\kappa$. The first important result of the paper is the following: in the absence of loss, given an incident single photon pulse with complex amplitude $\alpha(t)e^{-i \omega_c t}$ satisfying at all times the criterion \begin{equation} \int_{-\infty}^t |\alpha(s)|^2 ds - \frac{|\alpha(t)|^2}{\kappa} - \frac{1}{\kappa g_0^2}|\dot{\alpha}(t) - \frac{\kappa}{2} \alpha(t)|^2 > 0 , \label{crit}\end{equation} there exists a unique control pulse $\Omega(t) = |\Omega(t)| e^{i \Phi(t)}$ that achieves deterministic trapping of the photon, driving the system from initial state $\ket{g}$ to final state $\ket{e}$. Before going into the details of the proof, we study the restrictions imposed by (\ref{crit}). First, note that it is independent of the Raman detuning $\Delta$, so that changing $\Delta$ need not cause any failure as long as the control pulse is changed accordingly. In practice, the intensity and chirp of the control pulse need to increase with $\Delta$, setting experimental limitations on the amount of detuning. Criterion~(\ref{crit}) is easier to satisfy if $g_0$ is large, and if the photon pulse is slow.
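To make this trade-off concrete, the short numerical sketch below (our own illustration, not taken from the paper; the normalized hyperbolic-secant envelope $\alpha(t)=\operatorname{sech}(t/\tau)/\sqrt{2\tau}$ and the values $\kappa=g_0=1$, $\kappa\tau=10$ or $\kappa\tau=0.1$ are all assumptions) evaluates the left-hand side of criterion~(\ref{crit}) on a time grid.

```python
import math

# Evaluate the left-hand side of the trapping criterion for a normalized
# sech envelope alpha(t) = sech(t/tau)/sqrt(2*tau), for which the cumulative
# norm int_{-inf}^t |alpha|^2 ds = (1 + tanh(t/tau))/2 is known in closed form.
# kappa = g0 = 1 are assumed illustrative values, not taken from the paper.
kappa, g0 = 1.0, 1.0

def crit_lhs(t, tau):
    a = 1.0 / math.cosh(t / tau) / math.sqrt(2.0 * tau)     # alpha(t)
    adot = -math.tanh(t / tau) * a / tau                    # d alpha / dt
    integral = 0.5 * (1.0 + math.tanh(t / tau))             # int_{-inf}^t |alpha|^2 ds
    return integral - a**2 / kappa - (adot - 0.5 * kappa * a)**2 / (kappa * g0**2)

slow = min(crit_lhs(i * 1.0, tau=10.0) for i in range(-50, 51))   # kappa*tau = 10
fast = min(crit_lhs(i / 10.0, tau=0.1) for i in range(-50, 51))   # kappa*tau = 0.1
assert slow > 0 and fast < 0   # slow pulse admissible, fast pulse not
```

For the slow pulse the left-hand side stays positive over the whole grid, while for the fast pulse it drops far below zero near the pulse peak, consistent with the bandwidth restrictions just discussed.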
In most cases of interest, the bandwidth of the photon pulse should always be smaller than both $\kappa$ and $g_0^2/\kappa$ --- bearing in mind that these are estimates that can be violated for short time durations. In the strong coupling regime, this means that the maximum bandwidth of the photon pulse is $\kappa$, irrespective of the cavity coupling. In the weak coupling regime, photon transfer can still occur with efficiencies approaching unity if the photon is slowly varying enough, with a bandwidth not greater than $g_0^2/\kappa$. Finally, we note that the scheme can accommodate a photon-cavity detuning on the order of $g_0$; larger detunings would increase the negative contribution of $\dot{\alpha}(t)$ to (\ref{crit}), making the criterion difficult to satisfy in practice. The second important set of results concerns the transfer efficiency in the presence of spontaneous emission from level $\ket{r}$ (with rate $\gamma$) and spurious cavity losses (with rate $\Gamma_c$). These dissipative processes are not time-reversal symmetric, so the photon generation problem has to be handled differently from the photon trapping problem. In both cases, we find a control pulse that achieves the best transfer efficiency possible while leaving the waveguide and the cavity-QED node in a separable (unentangled) final state. Technically, such a pulse may not give the absolute highest efficiency for the transfer, but it has the key feature of minimizing the propagation of errors in the network. In the trapping case, the control pulse satisfying the disentanglement condition exists for a certain class of envelope functions $\alpha$ obeying a criterion similar to (\ref{crit}), but taking into account the modification of the coherent dynamics introduced by $\gamma$ and $\Gamma_c$.
We will show that the transfer efficiency is given by \begin{equation} \eta_\text{trap} = \left(1 - \frac{\Gamma_c}{\kappa}\right) \left(1 - \frac{\gamma(\kappa-\Gamma_c)}{4g_0^2} \right) - \frac{\gamma}{\kappa g_0^2} \int |\dot{\alpha}|^2 . \label{eta_trap}\end{equation} Whether the transfer succeeds or not, the photon has been perfectly removed from the waveguide. In the photon generation case, the eligible photon waveforms are also determined by a modified criterion, and the transfer efficiency is \begin{equation} \eta_\text{gen} = \left[\left(1+\frac{\Gamma_c}{\kappa} \right) \left(1+\frac{\gamma(\kappa+\Gamma_c)}{4g_0^2} \right) + \frac{\gamma}{\kappa g_0^2} \int |\dot{\alpha}|^2\right]^{-1} . \label{eta_gene}\end{equation} These relations indicate that efficient transfer in the presence of losses is possible if the cavity-waveguide interface is well designed ($\Gamma_c \ll \kappa$) and if the quantity $4g_0^2/\kappa\gamma$ (equal to the Purcell factor in the weak coupling regime) is significantly greater than one. We now prove the above claims and exhibit the corresponding control pulses. We make the simplifying assumption that there is no additional dephasing of the $e-r$ and $g-r$ transitions beyond $\gamma/2$, and we assume that level $\ket{r}$ decays mostly into $\ket{g}$ or outside of the system. Under these conditions it is possible to treat the state of the node as a pure state, defined as \begin{equation} \Psi(t) = g(t) \ket{g,1} + r(t) \ket{r,0} + e(t) \ket{e,0} , \end{equation} where the notation $\ket{X,n}$ stands for the node in level $X$ with $n$ photons in the cavity. Note that $||\Psi(t)||^2$ is the probability of finding the excitation in the node (either in the cavity or in the atom) at time $t$. If these conditions do not hold, the state of the system becomes mixed and a density matrix approach is required.
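For a feel for the numbers, Eqs.~(\ref{eta_trap}) and (\ref{eta_gene}) can be evaluated directly. The rates in the sketch below are illustrative assumptions (a weak-coupling node, $g_0 < \kappa$, with a clean interface $\Gamma_c \ll \kappa$ and Purcell factor $4g_0^2/\kappa\gamma = 100$), not values from a specific experiment.

```python
import numpy as np

# Evaluate the closed-form efficiencies eta_trap and eta_gen for assumed rates
# (units of kappa = 1): weak coupling, good interface, high Purcell factor.
kappa, g0, gamma, Gamma_c = 1.0, 0.5, 0.01, 0.02

# Normalized Gaussian photon envelope of duration T (int |alpha|^2 dt = 1);
# for this choice, int |alpha_dot|^2 dt = 1/(2 T^2).
T = 50.0
t = np.linspace(-300.0, 300.0, 20001)
alpha = np.exp(-t**2 / (2 * T**2))
alpha /= np.sqrt(np.trapz(alpha**2, t))
int_adot2 = np.trapz(np.gradient(alpha, t)**2, t)

eta_trap = (1 - Gamma_c / kappa) * (1 - gamma * (kappa - Gamma_c) / (4 * g0**2)) \
           - gamma / (kappa * g0**2) * int_adot2
eta_gen = 1.0 / ((1 + Gamma_c / kappa) * (1 + gamma * (kappa + Gamma_c) / (4 * g0**2))
                 + gamma / (kappa * g0**2) * int_adot2)
print(round(eta_trap, 4), round(eta_gen, 4))     # both close to 0.97
```

With a slow pulse the $\int |\dot{\alpha}|^2$ contribution is negligible, and both efficiencies are dominated by the interface term $1 - \Gamma_c/\kappa$ and the Purcell term, as stated above.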
In the quantum jump picture \cite{ref:Dalibard_quantumjumps}, the system undergoes a coherent evolution described by $\Psi(t)$, but at each time step it has some probability of collapsing to the state $\ket{e,0}$ (due to decay to level $e$) or of undergoing the change $r(t) \rightarrow -r(t)$ (due to dephasing). In this case, equations (\ref{eta_trap}) and (\ref{eta_gene}) provide a lower bound for the average fidelity of the transfer. Using the standard cavity input-output relations \cite{Walls} and the critical assumption that no more than one photon can be present in the leakage mode of the cavity, we can show that in the rotating wave approximation, the evolution of the waveguide state and of $\Psi(t)$ is given by: \begin{eqnarray} \alpha_{out} & = & \sqrt{\kappa}\, g - \alpha_{in} \label{c1}\\ \dot{g} & = & -ig_0^{\ast} \,r -\frac{\kappa + \Gamma_c}{2}g + \sqrt{\kappa}\, \alpha_{in} \label{c2}\\ \dot{r} & = & -\left(\frac{\gamma}{2}+i\Delta\right) r -ig_0\,g -i\frac{\Omega}{2}\,e \label{c3}\\ \dot{e} & = & -i\frac{\Omega^{\ast}}{2}\,r \label{c4} \end{eqnarray} To study the trapping of a photon $\alpha(t)$, we set $\alpha_{in} = \alpha$ and impose the condition $\alpha_{out}=0$. We find that \begin{equation} \frac{\Omega}{2} = \frac{i}{e} \left[ \dot{r} + \left(\frac{\gamma}{2}+i\Delta\right) r + ig_0 g \right] , \label{om}\end{equation} where the necessary and sufficient condition for the existence of $\Omega(t)$ is that $|e| > 0$ on all finite time intervals \cite{ref:laudenbach}.
In the above expression, $g = \alpha/\sqrt{\kappa}$, $r = i (\dot{g} - \frac{\kappa-\Gamma_c}{2}g)/g_0^{\ast}$, and $e$ is given by \begin{eqnarray} |e|^2 & = & \left(1 - \frac{\Gamma_c}{\kappa}\right) \left(1 - \frac{\gamma(\kappa-\Gamma_c)}{4g_0^2} \right) \int^{t} |\alpha|^2 - \frac{\gamma}{\kappa g_0^2} \int^{t} |\dot{\alpha}|^2 \nonumber\\ & & - \frac{|\alpha|^2}{\kappa}\left[1 - \frac{\gamma(\kappa-\Gamma_c)}{2g_0^2} \right] - \frac{|\dot{\alpha}-\frac{(\kappa-\Gamma_c)}{2}\alpha|^2}{\kappa g_0^2} \label{E_trap}\\ \dot{\Phi}_e & = & \frac{|r|^2 (\dot{\Phi}_r + \Delta) - |g|^2 \dot{\Phi}_g }{|e|^2} . \end{eqnarray} When the right-hand side of Eq.~(\ref{E_trap}) is strictly positive at all times, the control pulse exists, and the transfer efficiency is given by $|e|^2(t \rightarrow \infty) = \eta_\text{trap}$. When trapping occurs, the quantum mechanical amplitude $-\alpha(t)$ associated with the photon being reflected by the front mirror of the cavity exactly cancels the amplitude $\sqrt{\kappa}g(t)$ of the photon being absorbed in the node and re-emitted from it. This \textit{destructive interference} can be viewed as an impedance-matching requirement between the incoming waveguide and the receiving node that has to be ensured by a carefully designed control pulse. Note that for $\Gamma_c > \kappa$, we can never find a control pulse that leaves the node and the waveguide unentangled, irrespective of the photon waveform. In the case of photon generation, we assume that $\alpha_{in} = 0$ and we impose $\alpha_{out} = \sqrt{\eta}\, \alpha$, where $\alpha$ is the desired photon envelope normalized to 1, and $\eta$, the generation efficiency, will have to be found self-consistently.
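As an end-to-end consistency check of the trapping construction (here in the lossless limit $\gamma = \Gamma_c = \Delta = 0$, where Eq.~(\ref{E_trap}) reduces to the left-hand side of (\ref{crit}) and $\Omega$ can be taken real), one can build $\Omega(t)$ from Eq.~(\ref{om}) and then integrate Eqs.~(\ref{c2})-(\ref{c4}) with $\alpha_{in} = \alpha$. The sketch below uses assumed parameters and a real Gaussian envelope; switching $\Omega$ off while $e$ is numerically negligible is our numerical expedient, not part of the derivation.

```python
import numpy as np

# Lossless trapping check (gamma = Gamma_c = Delta = 0), assumed parameters.
kappa, g0, T = 1.0, 2.0, 10.0
t = np.linspace(-40.0, 40.0, 8001)
dt = t[1] - t[0]
alpha = np.exp(-t**2 / (2 * T**2))
alpha /= np.sqrt(np.trapz(alpha**2, t))          # int |alpha|^2 dt = 1

# Target trajectory imposed by alpha_out = 0: g = alpha/sqrt(kappa), r = i*s.
g_t = alpha / np.sqrt(kappa)
s = (np.gradient(g_t, dt) - 0.5 * kappa * g_t) / g0
F = np.cumsum(alpha**2) * dt
e2 = F - alpha**2 / kappa \
       - (np.gradient(alpha, dt) - 0.5 * kappa * alpha)**2 / (kappa * g0**2)
e_t = np.sqrt(np.clip(e2, 0.0, None))

# Control pulse Omega/2 from Eq. (om), real in this limit; kept off early on.
Om_half = np.where(e_t > 1e-3,
                   -(np.gradient(s, dt) + g0 * g_t) / np.maximum(e_t, 1e-12), 0.0)

def rhs(ti, y):
    """Right-hand side of Eqs. (c2)-(c4) with alpha_in = alpha."""
    g, r, e = y
    a = np.interp(ti, t, alpha)
    O = np.interp(ti, t, Om_half)
    return np.array([-1j * g0 * r - 0.5 * kappa * g + np.sqrt(kappa) * a,
                     -1j * g0 * g - 1j * O * e,
                     -1j * O * r])

y = np.zeros(3, dtype=complex)                   # node starts in |g>, cavity empty
for ti in t[:-1]:                                # fourth-order Runge-Kutta
    k1 = rhs(ti, y); k2 = rhs(ti + dt / 2, y + dt / 2 * k1)
    k3 = rhs(ti + dt / 2, y + dt / 2 * k2); k4 = rhs(ti + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(abs(y[2])**2 > 0.95)                       # population transferred to |e,0>
```

The integrated dynamics reproduce the imposed trajectory: nearly all of the excitation ends up in $\ket{e,0}$, with only the residue left by the early-time cutoff and grid truncation.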
The proper control pulse is still given by (\ref{om}) in this case, but with $g = \sqrt{\eta/\kappa}\,\alpha$, $r = i (\dot{g} + \frac{\kappa+\Gamma_c}{2}g)/g_0^{\ast}$, and \begin{eqnarray} \frac{1-|e|^2}{\eta} & = & \left(1+\frac{\Gamma_c}{\kappa} \right) \left(1+\frac{\gamma(\kappa+\Gamma_c)}{4g_0^2} \right) \int^{t} |\alpha|^2 + \frac{\gamma}{\kappa g_0^2} \int^{t} |\dot{\alpha}|^2 \nonumber\\ & & + \frac{|\alpha|^2}{\kappa}\left[1 + \frac{\gamma(\kappa+\Gamma_c)}{2g_0^2} \right] + \frac{|\dot{\alpha}+\frac{(\kappa+\Gamma_c)}{2}\alpha|^2}{\kappa g_0^2} \label{E_gene}\\ \dot{\Phi}_e & = & \frac{|r|^2 (\dot{\Phi}_r + \Delta) - |g|^2 \dot{\Phi}_g }{|e|^2} . \end{eqnarray} The node will be disentangled from the waveguide if and only if $e(t \rightarrow \infty) =0$, that is, $\eta = \eta_\text{gen}$. The corresponding control pulse then exists if and only if the right-hand side of Eq.~(\ref{E_gene}) stays strictly positive at all times.\\ It is worth noting that in the case of zero Raman detuning, and when the photon pulse itself has no chirp (i.e., when $\alpha(t)$ can be taken real), $\Phi(t)$ can be taken constant: no chirp is needed on the control pulse, a desirable feature for experiments. If in addition the photon pulse is slow (adiabatic regime) and the cavity coupling is strong, as in many photon generation experiments \cite{ref:3, ref:4, ref:5, ref:6,ref:7}, we obtain a particularly simple relation between control and photon pulse: \begin{equation} \left| \frac{\Omega(t)}{2} \right|^2 \sim \frac{g_0^2|\alpha(t)|^2}{\int_{-\infty}^t|\alpha(s)|^2\, ds} , \label{adiab}\end{equation} with the dual relation \begin{equation} |\alpha(t)|^2 = \frac{\kappa}{g_0^2}\left|\frac{\Omega(t)}{2}\right|^2 \, e^{\frac{\kappa}{2g_0^2} \int_t^{+\infty}\,|\frac{\Omega(s)}{2}|^2 \, ds} .
\end{equation} These expressions explain the experimental observation that a photon emitted with the STIRAP technique ``follows'' the control pulse, with some delay due to the finite value of $\kappa$. We now study the effect of having $N$ excited levels $\ket{r_k}$ contributing to the transfer dynamics. We denote by $g_k$ and $\Omega_k$ the couplings of level $\ket{r_k}$ to levels $\ket{g}$ and $\ket{e}$, respectively, and by $\Delta_k$ and $\gamma_k$ the corresponding Raman detuning and spontaneous decay rate of level $r_k$. For simplicity, we gather the quantities $r_k(t)$, $g_k$ and $\Omega_k(t)$ into vectors $R(t)$, $G$ and $\Omega(t)\, V$ of size $N$, with $V$ a vector of unit length. We also define a complex detuning matrix $\Delta = \text{diag}(\Delta_k-i \gamma_k)$. Using the notation $y \equiv \dot{g} + (\kappa+\Gamma_c) g/2 - \sqrt{\kappa}\, \alpha_{in}$, the evolution of the system is given by \begin{eqnarray} y(t) & = & -i G^{\dagger}\cdot R(t)\\ \dot{R}(t) & = & -i \Delta \cdot R(t) -i g(t)\,G - i \frac{\Omega(t)}{2} e(t) V \label{R_eq}\\ \dot{e}(t) & = & -i \frac{\Omega^{\ast}(t)}{2} V^{\dagger} \cdot R(t) \end{eqnarray} The condition for proper trapping is that $y(t) = \dot{g} - (\kappa-\Gamma_c) g/2$. To find the correct control pulse, we split the vector space into the one-dimensional subspace spanned by the vector $V$ and its orthogonal complement, with corresponding notation $\|$ and $\bot$.
With these notations, the above set of equations can be rewritten as: \begin{eqnarray} y & = & -i G_{\|}^{\ast}R_{\|} -i G_{\bot}^{\dagger}\cdot R_{\bot}\\ \dot{R}_{\|} & = & -i \Delta_{\|}^{\|} R_{\|} -i \Delta_{\bot}^{\|}\cdot R_{\bot}(t) -i g\, G_{\|} - i \frac{\Omega}{2} e\\ \dot{R}_{\bot} & = & -i \Delta_{\|}^{\bot} R_{\|} -i \Delta_{\bot}^{\bot}\cdot R_{\bot} - i g\, G_{\bot}\\ \dot{e} & = & -i \frac{\Omega^{\ast}}{2} R_{\|} \end{eqnarray} When it exists, the correct control pulse envelope function is given by \begin{equation} \frac{\Omega}{2} = \frac{i}{e} \left(\dot{R}_{\|} +i \Delta_{\|}^{\|} R_{\|} +i \Delta_{\bot}^{\|}\cdot R_{\bot} +i g\, G_{\|} \right) , \end{equation} where the components of $R$ are given by \begin{eqnarray} R_{\|} & = & \frac{i}{G_{\|}^{\ast}}\left(y + i G_{\bot}^{\dagger} R_{\bot} \right)\\ R_{\bot} & = & -i e^{-iMt} \int^t e^{iMs} \left(G_{\bot}\, g(s) +i \frac{\Delta_{\|}^{\bot}}{G_{\|}^{\ast}}\, y(s) \right) ds \\ M & = & \Delta_{\bot}^{\bot} - \frac{\Delta_{\|}^{\bot} G_{\bot}^{\dagger}}{G_{\|}^{\ast}} \end{eqnarray} and the amplitude $e$ is given by \begin{eqnarray} |e|^2 & = & \left(1-\frac{\Gamma_c}{\kappa}\right)\int^t |\alpha|^2 + 2\,\text{Im}\left(\int^t R^{\dagger}\Delta R\right)\nonumber\\ & & - \frac{|\alpha|^2}{\kappa} - R^{\dagger}R\\ \dot{\Phi}_e & = & \frac{|R_{\|}|^2 \left[\text{Re}(\Delta_{\|}^{\|})+\dot{\Phi}_{R_{\|}} \right] + \text{Re}\left[(\Delta_{\bot}^{\|}R_{\bot}+g G_{\|})R_{\|}^{\ast}\right]}{|e|^2} \end{eqnarray} where again the expression for $|e|^2$ has to be strictly positive at all times.
If $M$ has a purely real eigenvalue, the control pulse must be kept on for an infinite amount of time to prevent re-emission of the photon into the waveguide. Even then, a fraction of the population will be trapped in a ``dark'' state in the excited-state manifold. In general this situation will not happen, due to the spontaneous decay. The additional levels will only further reduce the trapping efficiency, to a value of \begin{equation} \eta_\text{trap}^{(N)} = 1 - \frac{\Gamma_c}{\kappa} + 2 \int R^{\dagger}\, \text{Im}(\Delta)\, R . \end{equation} The same formula holds for photon generation, provided we use $y(t) = \dot{g} + (\kappa + \Gamma_c) g/2$ and $g = \sqrt{\eta_\text{gen}^{(N)}/\kappa}\, \alpha$, with \begin{equation} \eta_\text{gen}^{(N)} = \left[1 + \frac{\Gamma_c}{\kappa} + 2 \int R^{\dagger}\, \text{Im}(\Delta)\, R \right]^{-1} .\end{equation} Note that the extra absorption caused by the added excited levels increases when $\alpha$ has large Fourier components at frequencies corresponding to the real parts of the eigenvalues of $M$. It could be avoided by a clever design of the photon envelope.\\ To summarize, we have derived a series of analytical formulas that clearly delimit the range of operation of cavity-QED nodes for quantum networks. With these formulas in hand, we are able to prove that nodes can operate even deeply in the weak coupling regime, at the expense of slowing down the information transfer. Nodes can be operated with an arbitrary atom-cavity detuning $\Delta$ as long as we have the experimental ability to generate a compensating chirp in the control pulses. A significant amount of detuning between the cavity photon and the cavity resonance can also be tolerated when the vacuum Rabi frequency $g_0$ is large. This ability will be key to the operation of heterogeneous cavity-QED networks, and suggests that nodes featuring the highest coupling $g_0$ will be central to the defect-tolerant operation of the network.
For non-ideal systems, we derived analytical expressions for the transfer efficiencies. Spontaneous emission from the excited state causes loss by an amount that is inversely proportional to the Purcell factor (which can be high even in the weak coupling regime). Spurious cavity decay starts to cause prohibitive losses when it becomes comparable to the cavity-waveguide coupling rate. When several excited levels participate in the node dynamics, additional losses occur as some energy becomes irremediably trapped in (and radiated from) a sub-manifold that is not accessible to the control pulse. The authors acknowledge Charles Santori for his scrupulous examination of the manuscript. This work was supported in part by JST SORST, MURI Grant No. ARMY, DAAD19-03-1-0199 and NTT-BRL.

\begin{thebibliography}{13}

\bibitem{ref:Qnet} J. Cirac, P. Zoller, H. Kimble, and H. Mabuchi, Phys. Rev. Lett. \textbf{78}, 3221 (1997).

\bibitem{ref:2} S. V. Enk, J. Cirac, and P. Zoller, Science \textbf{279}, 205 (1998).

\bibitem{ref:Sham_05} W. Yao, R.-B. Liu, and L. Sham, Phys. Rev. Lett. \textbf{92}, 30504 (2005).

\bibitem{ref:8} A. Kuhn, M. Hennrich, T. Bondo, and G. Rempe, Appl. Phys. B \textbf{69}, 373 (1999).

\bibitem{ref:3} M. Hennrich, T. Legero, A. Kuhn, and G. Rempe, Phys. Rev. Lett. \textbf{85}, 4872 (2000).

\bibitem{ref:4} A. Kuhn, M. Hennrich, and G. Rempe, Phys. Rev. Lett. \textbf{89}, 67901 (2002).

\bibitem{ref:5} J. McKeever, A. Boca, A. D. Boozer, R. Miller, J. R. Buck, A. Kuzmich, and H. J. Kimble, Science \textbf{303}, 1992 (2004).

\bibitem{ref:6} S. Brattke, B. Varcoe, and H. Walther, Phys. Rev. Lett. \textbf{86}, 3534 (2001).

\bibitem{ref:7} M. Keller, B. Lange, K. Hayasaka, W. Lange, and H. Walther, Nature \textbf{431}, 1075 (2004).

\bibitem{ref:Imamoglu_04} A. Kiraz, M. Atatüre, and A. Imamoglu, Phys. Rev. A \textbf{69}, 032305 (2004).

\bibitem{ref:Dalibard_quantumjumps} J. Dalibard, Y. Castin, and K. M\o{}lmer, Phys. Rev. Lett. \textbf{68}, 580 (1992).

\bibitem{Walls} D. Walls and G. Milburn, \emph{Quantum Optics} (Springer-Verlag, Berlin, 1994).

\bibitem{ref:laudenbach} F. Laudenbach, \emph{Calcul differentiel et integral} (Editions de l'Ecole Polytechnique, 2000).

\end{thebibliography}

\end{document}
\begin{document} \date{\today} \title{Deterministic quantum state transfer between remote qubits in cavities} \author{B. Vogell} \affiliation{Institute for Theoretical Physics, University of Innsbruck, A-6020 Innsbruck, Austria} \affiliation{Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, A-6020 Innsbruck, Austria} \author{B. Vermersch} \affiliation{Institute for Theoretical Physics, University of Innsbruck, A-6020 Innsbruck, Austria} \affiliation{Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, A-6020 Innsbruck, Austria} \author{T. E. Northup} \affiliation{Institute for Experimental Physics, University of Innsbruck, A-6020 Innsbruck, Austria} \author{B. P. Lanyon} \affiliation{Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, A-6020 Innsbruck, Austria} \author{C. A. Muschik} \affiliation{Institute for Theoretical Physics, University of Innsbruck, A-6020 Innsbruck, Austria} \affiliation{Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, A-6020 Innsbruck, Austria} \begin{abstract} Performing a faithful transfer of an unknown quantum state is a key challenge for enabling quantum networks. The realization of networks with a small number of quantum links is now actively pursued, which calls for an assessment of different state transfer methods to guide future design decisions. Here, we theoretically investigate quantum state transfer between two distant qubits, each in a cavity, connected by a waveguide, e.g., an optical fiber. We evaluate the achievable success probabilities of state transfer for two different protocols: standard wave packet shaping and adiabatic passage. The main loss sources are transmission losses in the waveguide and absorption losses in the cavities. 
While special cases studied in the literature indicate that adiabatic passages may be beneficial in this context, it remained an open question under which conditions this is the case and whether their use is advantageous in practice. We answer these questions by providing a full analysis, showing that state transfer by adiabatic passage -- in contrast to wave packet shaping -- can mitigate the effects of undesired cavity losses, far beyond the regime of coupling to a single waveguide mode and the regime of lossless waveguides considered so far. Furthermore, we show that the photon arrival probability is in fact bounded by a trade-off between losses due to non-adiabaticity and losses due to coupling to off-resonant waveguide modes. We clarify that neither protocol can avoid transmission losses and discuss how the cavity parameters should be chosen to achieve an optimal state transfer. \end{abstract} \maketitle \section{Introduction} \label{sec:introduction} The ability to faithfully transmit an unknown quantum state between remote locations is a key primitive for the development of various quantum technologies. The quest to create {\it long-distance links} that can connect multiple nodes into a quantum internet~\cite{KimbleReview,TracyReview,Manifesto} is motivated by applications such as unconditionally secure communication~\cite{Bruss2000,Lo2014}, distributed quantum computing~\cite{Beals2013}, quantum fingerprinting~\cite{QuFingerprinting1,QuFingerprinting2}, quantum credit cards~\cite{QuCreditCards}, quantum secret voting~\cite{QuSecretVoting}, quantum secret sharing~\cite{QSecretSharing}, secure quantum cloud computing~\cite{QuCloudComputing1,QuCloudComputing2}, quantum time and frequency metrology~\cite{Komar14}, and tests of the foundations of quantum physics~\cite{BellTest1,BellTest2}.
{\it Short-distance links} acting as `quantum USB cables', on the other hand, allow the connection of different types of quantum hardware and are a promising approach to scalable quantum computing architectures~\cite{ScalableArchitecture1}. \begin{figure} \caption{Basic quantum network for a deterministic state transfer. (a) Matter qubits ($|0\rangle$, $|1\rangle$) in cavities $A$ and $B$ are coupled via a waveguide. Both cavities are asymmetric, with reflectivities $\mathcal{R}_1 \gg \mathcal{R}_2$.} \label{fig:mainresults} \end{figure} A quantum state transfer can be accomplished probabilistically or deterministically. Probabilistic state transfer protocols use a two-step procedure: first, an entangled state between two network nodes is generated in a probabilistic and heralded fashion~\cite{Cabrillo99,Chou2005,Moehring07,Hofmann2012,Bernien2013}; second, this entangled link is used for transferring a quantum state by teleportation~\cite{Olmschenk2009,Bao2012,Krauter2013,Noelleke2013,Pfaff2014}. For deterministic state transfer~\cite{detST}, a qubit state at one node is mapped onto a photon wave packet, which propagates across the desired distance and is then absorbed by the qubit in the receiving cavity~\cite{Mabuchi1997,ExpDetTransfer}. Deterministic approaches, as discussed here, do not rely on the availability of entangled resource states. Hence, such protocols are particularly well suited for the implementation of time-continuous schemes for quantum information processing~\cite{ContTeleportation,Hofer2013,Vollbrecht2011} and also dispense with the photon counters typically required for probabilistic approaches, e.g.,~\cite{Duan2010}. We are interested in the simple scenario depicted in \figref{fig:mainresults}a, in which two nodes are connected by a waveguide. Each node consists of a qubit placed in a cavity.
In this context, relevant to, e.g., atoms, ions, and superconducting circuits coupled by waveguides, we study the task of transmitting quantum information \textit{deterministically} between the two nodes. We focus on evaluating the performance of two deterministic protocols. First, we consider the standard approach based on wave packet shaping~\cite{Mabuchi1997}, in which a classical qubit drive, as depicted in \figref{fig:mainresults}b, is designed such that the photon emitted by the first qubit is entirely captured by the second qubit, without reflections. The second approach uses the techniques of adiabatic passage to perform a quantum state transfer~\cite{Pellizzari1997} in the same setup but with classical driving fields applied in a counterintuitive order, in which the receiving drive is turned on before the emitting drive, as shown in \figref{fig:mainresults}c. While experiments performing state transfer by wave packet shaping have already been carried out~\cite{ExpDetTransfer}, state transfer between remote nodes by adiabatic passage has yet to be realized experimentally. The central problem for deterministic state transfer protocols is photon loss. Photon losses mainly occur either during transmission in the waveguide or locally in the cavities. Note that even for links spanning hundreds of meters, state-of-the-art cavity setups with mirror absorption losses of only a few parts per million per round-trip nevertheless operate in a `cavity loss' dominated regime (see \tabref{tab:exp}). In this article, we analyze the limitations and prospects for transferring quantum information in the presence of the aforementioned photon losses, leading to two main results: First, we show that neither wave packet shaping nor adiabatic passage can mitigate waveguide transmission losses.
It has been stated in the literature (e.g., in Refs.~\cite{Serafini2006, Yin2007,Chen2007,Ye2008,Lu2008,Zhou2009,Clader2014,Chen2015,Hua2015,Huang2016}) that waveguide losses can be avoided in the single mode or short-fiber limit~\cite{Pellizzari1997}, in which the cavities couple effectively only to a single mode of the waveguide. We show that this is incorrect since the photon arrival probability is bounded by a trade-off between losses due to non-adiabaticity and losses due to coupling to off-resonant waveguide modes. Taking this trade-off into account will be important for optimizing the experimental design parameters of future quantum networks. Second, we derive an analytical solution of the achievable state transfer success probability for adiabatic passages and provide a full numerical analysis. With this analysis, we show that, in contrast to wave packet shaping, quantum state transfer by adiabatic passage can mitigate losses due to absorption in the cavities far beyond the regime of lossless waveguides introduced in the original proposal~\cite{Pellizzari1997} and the single mode limit introduced in Ref.~\cite{vanEnk1999}. We show, however, that the single mode limit imposes far stronger constraints on the parameters of the system than is necessary: in order to mitigate cavity losses it is sufficient to be in the `long photon limit', in which the effective length of the photon is longer than the distance between the two nodes of the setup. The long photon regime is naturally reached for short transmission links and can be realized for distances up to thousands of kilometers, using slowly varying classical driving fields, by state-of-the-art experiments. The paper is organized as follows: First, we provide a brief overview of the setup and the main results in \secref{sec:mainresults}. In \secref{sec:setupprotocol} we describe the setup and the two quantum state transfer protocols under consideration in detail. 
In \secref{sec:fiberlosses} we treat the influence of waveguide losses; \secref{sec:cavitylosses} then also includes the influence of cavity losses. Finally, we discuss further experimental imperfections in \secref{sec:robustness} and give our conclusion and outlook in \secref{sec:outlook}. \section{Overview and main results} \label{sec:mainresults} In the following we provide a brief overview of the setup, the two quantum state transfer protocols considered and the main results. The rest of the paper provides the detailed explanation and derivations of these main results.\\ \\ \textbf{Setup:} We consider two emitters (matter qubits) placed in distant cavities $A$ and $B$ that are connected by a waveguide (for example, an optical fiber) of length $L$, as displayed in \figref{fig:mainresults}a and detailed in \secref{sec:basicmodel}. We consider the transfer of a quantum state encoded in the ground state levels of the emitters from cavity $A$ to cavity $B$, \begin{align} \ket{\Psi^{\text{in}}}_{A}\ket{\Psi^{\text{in}}}_{B}&=\left(a_0 |0\rangle_{A}+a_1|1\rangle_{A}\right)\ket{0}_B, \label{eq:statetransfer}\\ \ket{\Psi^{\text{out}}}_{A}\ket{\Psi^{\text{out}}}_{B}&=\ket{0}_A\left(a_0 |0\rangle_{B}+a_1|1\rangle_{B}\right),\notag \end{align} where $a_0$ and $a_1$ are the normalized amplitudes and the indices $A$ and $B$ refer to the qubits in cavities $A$ and $B$, respectively. The state of the emitter qubit is mapped to flying photonic Fock states such that $|0\rangle_A\rightarrow|0\rangle_P$, $|1\rangle_A\rightarrow|1\rangle_P$ (and conversely $|0\rangle_P\rightarrow|0\rangle_B$, $|1\rangle_P\rightarrow|1\rangle_B$)~\cite{Boozer2007}, where the quantum state with index $P$ refers to the photonic state in both cavity and waveguide. Note that this setting is not limited to the use of photonic Fock states $|0\rangle_P$, $|1\rangle_P$.
Quantum information can also be encoded in the light field using polarization qubits (our results apply to specific types of polarization encoded state transfer protocols, as explained in~\cite{polEncoding}). We assume here two identical cavities $A$ and $B$ of length $l$, with $l\ll L$. The outer mirrors (M1 and M4 in \figref{fig:mainresults}a), not coupled to the waveguide, have a reflectivity $\mathcal{R}_1$. The two inner mirrors (M2 and M3 in \figref{fig:mainresults}a), adjoined to the waveguide, have a reflectivity $\mathcal{R}_2$. The cavities are asymmetric with reflectivities $\mathcal{R}_1 \gg \mathcal{R}_2$, such that photons leave predominantly through the inner mirrors. The rate $\kappa_\textrm{cav}$ of this desired photon coupling between waveguide and either cavity is proportional to the transmission $\mathcal{T}_2$ of the interfacing mirrors (M2 and M3). We consider waveguide losses parameterized by a loss rate $\gamma_\textrm{fib}$, and cavity losses at a rate $\gamma_\textrm{cav}$. The latter rate refers to photons leaving through the outer mirrors (M1 and M4) with transmission $\mathcal{T}_1$ (with $\mathcal{T}_2 \gg \mathcal{T}_1$), and absorption and scattering losses in the mirrors at rate $\mathcal{L}$. The effect of other experimental imperfections such as spontaneous decay of the emitters, timing errors of the classical drive and in- and outcoupling losses due to imperfect coupling of cavity and waveguide will be discussed in \secref{sec:robustness}.\\ \\ \textbf{Classification of the relevant regimes:} We distinguish between two different regimes, the single mode limit and the long photon regime. To this end, we introduce two length scales: $L_{\text{eff}}$ refers to the {\it natural spatial length of a photon} emitted by a cavity (in the absence of an atom).
$L_\textrm{ph}$ refers to the {\it spatial length of a photon} emitted by a qubit driven by a classical field mediated by a cavity (see inset in \figref{fig:mainresults}a). The single mode limit refers to the parameter regime in which the cavity linewidth $\kappa$ is much smaller than the free spectral range of the waveguide $\textnormal{FSR}_\textrm{fib}=\pi c_\textrm{f}/L$ (with $c_\textrm{f}$ the speed of light in the waveguide)~\cite{Pellizzari1997}. This regime is characterized by the single mode parameter \begin{align} \mathfrak{n}\equiv\frac{2\kappa}{\textnormal{FSR}_\textrm{fib}} \ll 1, \label{eq:SMLp} \end{align} see \figref{fig:spectrum}a-b. Note that this condition is equivalent to $L\ll L_{\text{eff}}$ (short-fiber limit as in Ref.~\cite{vanEnk1999}), where the natural spatial photon length is defined by $L_{\text{eff}}=c_\textrm{f}/\kappa$. We define the long photon limit through two main conditions. First, the desired coupling rate of the cavity to the waveguide $\kappa_\textrm{cav}$ is assumed to be much larger than the effective coupling of the qubit to the cavity $G_{A/B}$ (defined in \eeqref{eq:ioncavcoupl}) such that the cavities' photon population is always much less than one. Under this assumption, the cavity can be eliminated, leading to an effective qubit-waveguide coupling rate $\gamma_{A/B}=\kappa_\textrm{cav} (G_{A/B}/\kappa)^2$ (\secref{sec:WPS}). In analogy to the natural spatial length of the photon $L_{\text{eff}}$ as defined above, the length of the photon $L_{\text{ph}}$ is defined by $L_\textrm{ph}=c_\textrm{f}/\gamma_{A/B}$. Second, the length of the photon is assumed to be larger than the link, such that $L_\textrm{ph}\gg L$. While $L_\textrm{eff}$ is a fixed quantity for a given setup (see \tabref{tab:exp} for typical values), $L_\textrm{ph}$ can be varied via the effective coupling $G_{A/B}$ between qubit and cavity.
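The equivalence between the two formulations of the single mode condition, $\mathfrak{n}\ll 1$ and $L\ll L_{\text{eff}}$, follows from $\mathfrak{n}=2\kappa/\textnormal{FSR}_\textrm{fib}=2L/(\pi L_{\text{eff}})$. As a minimal numerical sketch (the fiber length and linewidth below are illustrative placeholders, not values from a specific experiment):

```python
import math

# Relation between the single mode parameter (eq:SMLp) and the photon
# length scale L_eff = c_f/kappa. Illustrative parameter values.
c_f = 2e8                       # speed of light in the fiber [m/s] (2c/3)
L = 100.0                       # fiber length [m]
kappa = 2 * math.pi * 5e6       # cavity half-linewidth [rad/s] (kappa/2pi = 5 MHz)

FSR_fib = math.pi * c_f / L     # free spectral range of the fiber [rad/s]
n_sml = 2 * kappa / FSR_fib     # single mode parameter
L_eff = c_f / kappa             # natural spatial photon length [m]

# Both formulations coincide: n = 2*kappa/FSR_fib = 2L/(pi*L_eff),
# so n << 1 is the same condition as L << L_eff.
assert math.isclose(n_sml, 2 * L / (math.pi * L_eff), rel_tol=1e-9)
print(f"n = {n_sml:.2f}, L_eff = {L_eff:.2f} m")
```

For these placeholder values the setup is deep in the multimode regime ($\mathfrak{n}\approx 10$), illustrating that a 100\,m fiber with a MHz-linewidth cavity violates the single mode condition.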
Current experiments can access the long photon limit by choosing a small amplitude of the effective coupling $G_{A/B}$ and applying the classical driving field for a long exposure time. In particular, they can reach the regime $L_\textrm{ph}\gg L_\textrm{eff}$, in which they can operate in the long photon limit but not in the single mode limit for a given fiber length $L$. \\ \\ \textbf{Quantum state transfer by wave packet shaping:} The standard protocol for transferring a quantum state deterministically between two cavities is based on wave packet shaping~\cite{Mabuchi1997,ExpDetTransfer, Stannigel2011}. The main idea behind wave packet shaping is to choose a temporal variation of the classical driving field applied to the atoms in cavities $A$ and $B$ such that in the absence of losses, the photon emitted by the first cavity is perfectly absorbed by the second cavity. This approach avoids the reflection of the photon by the highly reflective mirror M3 of the second cavity due to a quantum interference effect, as studied in~\cite{Gorshkov2007,Fleischhauer2000,Dilley2012}. For simplicity and concreteness, we discuss a time-symmetric wave packet emitted by the first qubit~\cite{Mabuchi1997} due to a classical coupling $\Omega_A(t)$, which can be reabsorbed by the second qubit under a time-reversed coupling $\Omega_B(t)=\Omega_A(\tau-t)$. Here $\tau=L/c_\textrm{f}$ is the time delay between the first and the second coupling; see \figref{fig:mainresults}b and \secref{sec:WPS}. Note that the wave packet is not required to be symmetric: any choice of shaping pulses that avoids the reflection of the wave packet from cavity $B$ yields the limitations discussed below.\\ \\ \textbf{Quantum state transfer by adiabatic passage:} Adiabatic passage as a protocol to transfer a quantum state between two remote qubits in cavities~\cite{Pellizzari1997} uses the methods known from STIRAP in atoms~\cite{RMPAP2016} within the setup shown in \figref{fig:mainresults}a.
The principal idea is to perform a coherent transfer through a dark state with respect to the photon fields. This transfer is accomplished by temporally shaping the intensity of the classical driving fields of both atoms with a Gaussian shape in a counterintuitive order; see \figref{fig:mainresults}c and \secref{sec:AP}. Importantly, adiabatic passage state transfer has to be performed in the long photon limit.\\ \\ \textbf{Limitations of wave packet shaping:} We find that, by using the method of wave packet shaping, the maximal success probability $F$ (formally defined in \secref{sec:modelEOM}) of quantum state transfer is strictly limited by $P_1$ (below), i.e., $F\leq P_1$; see \secref{sec:fiberlosses} and \secref{sec:cavitylosses}. Here, $P_1$ is given by \begin{align} P_{1}=P_\textrm{out} \,P_{\textrm{fib}} \, P_\textrm{in}, \label{eq:Ptot} \end{align} and denotes the probability of a photon to propagate through a waveguide of length $L$ \begin{align} P_{\textrm{fib}}=e^{-\gamma_\textrm{fib} L/c_\textrm{f}}, \label{eq:Pfib} \end{align} multiplied by the probability of a photon being emitted from the cavity into the desired output mode \begin{align} P_\textrm{out}=\frac{\mathcal{T}_2}{\mathcal{T}_1+\mathcal{T}_2+\mathcal{L}}, \label{eq:Pout0} \end{align} and being absorbed by the second cavity $P_{\text{in}}$. Due to symmetry reasons, the probability for a photon to enter the second cavity equals the emitting probability, i.e., $P_{\text{out}}=P_{\text{in}}$.\\ \\ \textbf{Waveguide losses:} It has been stated~\cite{Serafini2006, Yin2007,Chen2007,Ye2008,Lu2008,Zhou2009,Clader2014,Chen2015,Hua2015,Huang2016} that limitations due to waveguide losses can be overcome in the single mode limit.
These results are based on a description that takes only a single waveguide mode into account and in which, in analogy to stimulated Raman adiabatic passages (STIRAP), the corresponding success probability of state transfer is given by \begin{align} F_{\text{STIRAP}}=\exp\left(-\frac{ \gamma_\textrm{fib}}{{g_{0}^2 T}} \frac{\pi}{2}\right), \label{eq:fidelitystirapINTRO} \end{align} as detailed in~\appref{app:atomiclimit}. The effective atom-waveguide coupling is denoted by $g_0$ (see \secref{sec:fiberanalytics}) and the pulse width of the driving laser by $T$ (see \secref{sec:AP}). In the adiabatic limit $g_{0}^2 T/\gamma_\textrm{fib} \rightarrow \infty$ the success probability $F_{\text{STIRAP}}$ reaches unity, corresponding to a perfect state transfer. We provide an analytical example that demonstrates why the coupling to far-detuned waveguide modes can in fact not be neglected. As explained in \secref{sec:fiberanalytics}, including three waveguide modes already leads to non-negligible effects, even deep in the single mode limit. The corresponding amended success probability of state transfer is given by \begin{align} F_\textnormal{fib} &=\exp\left(- \frac{\gamma_\textrm{fib} \pi}{2}\left[\frac{ 1}{{g_0^2 T}}+\frac{{g_0^2 T}}{ \textnormal{FSR}_\textrm{fib}^2} \right] \right), \label{eq:fidelity3MLINTRO} \end{align} revealing a clear trade-off (see \secref{sec:fiberanalytics} for details). While the first summand in \eeqref{eq:fidelity3MLINTRO} recovers the dependency seen in previous work~\cite{Serafini2006, Yin2007,Chen2007,Ye2008,Lu2008,Zhou2009,Clader2014,Chen2015,Hua2015,Huang2016}, the second summand in \eeqref{eq:fidelity3MLINTRO} arises due to the coupling to detuned waveguide modes. As a result, choosing the adiabatic limit as done in previous work is in fact incompatible with obtaining a high success probability of state transfer.
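The trade-off in \eeqref{eq:fidelity3MLINTRO} can be checked numerically. The following sketch maximizes $F_\textnormal{fib}$ over the adiabaticity parameter $g_0^2 T$, assuming illustrative values $\gamma_\textrm{fib}/2\pi=1.5$\,kHz and a $500$\,m link with $c_\textrm{f}=2\times 10^8$\,m/s:

```python
import math

# Trade-off in eq:fidelity3MLINTRO: too slow a transfer loses to fiber
# absorption, too fast a transfer loses to off-resonant fiber modes.
gamma_fib = 2 * math.pi * 1.5e3          # fiber loss rate [rad/s], 0.2 dB/km
FSR_fib = math.pi * 2e8 / 500.0          # free spectral range for a 500 m link

def F_fib(x):
    """Success probability as a function of x = g0^2 * T."""
    return math.exp(-0.5 * gamma_fib * math.pi * (1.0 / x + x / FSR_fib**2))

# Grid search over four decades of the adiabaticity parameter.
xs = [FSR_fib * 10 ** (k / 200.0) for k in range(-400, 401)]
x_best = max(xs, key=F_fib)

# The exponent 1/x + x/FSR^2 is minimized at x = FSR_fib, giving
# F = exp(-gamma_fib * pi / FSR_fib), i.e., the fiber transmission P_fib.
F_opt = math.exp(-gamma_fib * math.pi / FSR_fib)
assert abs(F_fib(x_best) - F_opt) < 1e-9
print(f"optimum at g0^2*T = FSR_fib, F_opt = {F_opt:.4f}")
```

For these parameters the optimized value is $\approx 0.977$, matching the telecom-fiber transmission probability quoted in \tabref{tab:exp}.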
Optimizing $F_\textnormal{fib}$ with respect to $g_0^2 T$ leads to \begin{align} F_\textnormal{fib}^\textnormal{opt}&=\exp\left(- \gamma_\textrm{fib} \pi/\textnormal{FSR}_\textrm{fib} \right)=\exp\left(- \gamma_\textrm{fib} L/c_\textrm{f} \right)= P_\textrm{fib}. \label{eq:fidelityMAXsecII} \end{align} These results are also shown numerically for an even larger parameter space, taking a large number of waveguide modes into account (see \secref{sec:fibernumerics}). We show that, in the absence of cavity losses ($P_\textnormal{out}=1$), the success probability of state transfer is strictly limited by $F \leq P_\textrm{fib}$.\\ \\\textbf{Cavity losses:} In contrast to wave packet shaping, quantum state transfer by adiabatic passage allows one to outperform limitations due to cavity losses imposed by $P_\textrm{out}$. Note that for high-finesse cavities, cavity losses can play a significant role due to the high number of round-trips of the photon. Experimental values of $P_\textnormal{out}$ for current optical setups are given in \tabref{tab:exp}. It has already been shown that limitations due to $P_{\text{out}}$ can be mitigated for perfect waveguides ($\gamma_\textrm{fib}=0$)~\cite{Pellizzari1997} and in the single mode limit for imperfect waveguides~\cite{vanEnk1999}. In this article, we show that quantum state transfer by adiabatic passage can in fact mitigate cavity losses for both $\gamma_\textrm{fib}>0$ and well beyond the single mode limit; see \secref{sec:cavitylosses} and \figref{fig:cavitylosses}. We find that the parameter regime over which cavity losses can be mitigated is determined by the long photon limit. The figure of merit determining the maximal success probability of state transfer for a given waveguide with length $L$ and loss rate $\gamma_\textrm{fib}$ is the probability of the photon to leave the cavity $P_\textrm{out}$, as demonstrated in \secref{sec:cavitylosses}.
Extending the analytics for waveguide losses only, we show that the success probability of state transfer in the presence of both cavity and waveguide losses is given by \begin{align} F_\textnormal{1}&=\exp\left(- \frac{ \pi}{2}\left[\frac{\gamma_\textrm{fib}}{{g_0^2 T}}+\frac{{\gamma_\textrm{fib} g_0^2 T }}{ \textnormal{FSR}_\textrm{fib}^2} +\frac{\tilde{\gamma}_\textrm{cav} T}{4} \right] \right),\label{eq:PAPunoptsecII} \end{align} with effective cavity decay rate $\tilde{\gamma}_\textrm{cav} $ (see \appref{app:beyondSML}). We show (\secref{sec:cavitylossesanalytics}) that the achievable success probability of state transfer by adiabatic passage can be optimized to \begin{align} F_\textnormal{AP}\equiv F_1^\textnormal{opt}=\exp\left(- P^{\text{fib}}_{\text{loss}} \sqrt{1+\frac{\pi^2}{2 P^{\text{fib}}_{\text{loss}}} \frac{1-P_\text{out}}{P_\text{out}}}\ \right), \label{eq:FAPsecII} \end{align} where $P^{\text{fib}}_{\text{loss}}=\gamma_\textrm{fib} L/c_\textrm{f}$ (see \appref{app:beyondSML}). As a result, we find (\secref{sec:cavitylossesnumerics} and \figref{fig:cavitylosses}) that the success probability of state transfer by adiabatic passage exceeds $P_1$, i.e., $F_\textnormal{AP}\geq P_1$, and thus adiabatic passage allows for better state transfer performance than wave packet shaping, which is limited by $P_1$, cf. last two columns of \tabref{tab:exp}. For adiabatic passages, the same state transfer success probability can therefore be obtained using a cm-long slowly emitting cavity with a linewidth of tens of kHz or a $\mu$m-short fast emitting cavity with a linewidth of tens of MHz, as long as their probabilities $P_\textrm{out}$ of emitting the photon into the desired output mode are equal. Experimental values for the state transfer probability $F_\textnormal{AP}^{\textnormal{exp}}$ that can be reached by adiabatic passages are given in \tabref{tab:exp}.
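The closed-form expression \eeqref{eq:FAPsecII} can be evaluated directly. The sketch below uses the Innsbruck row of \tabref{tab:exp} ($P_\textnormal{out}=15.8\%$, a $500$\,m telecom fiber at $0.2$\,dB/km) and assumes $c_\textrm{f}=2\times 10^8$\,m/s; small deviations from the tabulated percentages stem from rounding of the table entries:

```python
import math

# Evaluate eq:FAPsecII and compare with the wave-packet-shaping bound P_1
# (eq:Ptot) for the Innsbruck-like parameters of Table tab:exp.
c_f = 2e8                          # speed of light in the fiber [m/s]
gamma_fib = 2 * math.pi * 1.5e3    # fiber loss rate [rad/s] (0.2 dB/km)
L = 500.0                          # fiber length [m]
P_out = 0.158                      # photon emission probability (table value)

P_loss_fib = gamma_fib * L / c_f           # fiber loss exponent
P_fib = math.exp(-P_loss_fib)              # eq:Pfib
P_1 = P_out**2 * P_fib                     # eq:Ptot with P_in = P_out

F_AP = math.exp(-P_loss_fib * math.sqrt(
    1 + (math.pi**2 / (2 * P_loss_fib)) * (1 - P_out) / P_out))

assert F_AP >= P_1                 # adiabatic passage beats wave packet shaping
print(f"P_1 = {P_1:.3f}, F_AP = {F_AP:.3f}")
```

The result reproduces the table entries $P_1^{\textnormal{exp}}\approx 2.4\%$ and $F_\textnormal{AP}^{\textnormal{exp}}\approx 46\%$ for this row, illustrating the gap between the two protocols.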
\begin{table}[h] \caption{Overview of a selection of current experimental realizations of ions/atoms coupled to cavities. The cavity emission rate into the desired output mode $\kappa_\textrm{cav}$ is given by $\kappa_\textrm{cav}=2\kappa-\gamma_\textrm{cav}$, where $2\kappa$ is the total linewidth of the cavity and $\gamma_\textrm{cav}$ is the undesirable cavity loss rate. The latter includes mirror absorption losses $\mathcal{L}$ as well as the undesired transmission $\mathcal{T}_1$ of photons through the outer mirrors (M1 and M4 in \figref{fig:mainresults}a). $L_\textrm{eff}=c_\textrm{f}/\kappa$ is the natural spatial length of a photon leaking out of a cavity, $P_\textnormal{out}$ is the probability that a photon in the cavity is emitted into the desired output mode (into the fiber). For comparison: the transmission probability of a photon through a telecom fiber with absorption of $0.2$\,dB/km and length of $500$\,m is $P_\textrm{fib}^{0.2\textrm{dB/km},500\textnormal{m}}=97.7\%$ such that the total transmission probability of the photon is $P_1^{\textnormal{exp}}=P_\textnormal{out} P_\textrm{fib}^{0.2\textrm{dB/km},500\textnormal{m}} P_\textnormal{in}$ with $P_\textnormal{in}=P_\textnormal{out}$. The success probability of state transfer by adiabatic passage $F_\textnormal{AP}^{\textnormal{exp}}$ is defined in \eeqref{eq:FAPsecII} and evaluated for the same fiber parameters as $P_1^{\textnormal{exp}}$. Typical fiber loss rates are $\gamma_\textrm{fib}^{0.2 \textrm{dB/km}}/2 \pi =1.5$\,kHz (telecom wavelengths) and $\gamma_\textrm{fib}^{3 \textrm{dB/km}}/2 \pi= 22$\,kHz (optical wavelengths).
} \begin{center} \begin{tabular}{l |c |c | c | c | c|c} \bf{Experiment} & $\kappa_\textrm{cav}/2\pi$& $\gamma_\textrm{cav}/2\pi$& $L_\textrm{eff}$& $P_\textnormal{out}$ & $P_1^{\textnormal{exp}}$ & $F_\textnormal{AP}^{\textnormal{exp}}$ \\ & [MHz] & [MHz] & [m] & [\%] & [\%] & [\%]\\ \hline Mainz~\cite{Pfister2016} & $4.77$ & $31.5$ & $1.74$ & $13.2 $ & $1.7$ & $42.1$ \\ \hline Innsbruck~\cite{Stute2012} & $ 0.02$ & $0.08$& $ 636$ & $15.8 $ &$2.4$ & $45.9$ \\ \hline Paris~\cite{Hunger2010} & $ 19.2$ & $88.4$& $0.6$ & $17.8 $ &$3.1$ & $48.5$ \\ \hline Bonn K~\cite{Steiner2014} & $14.0$ & $29.5$ & $1.27$ & $32.3 $& $10.2$ & $61.3$ \\ \hline Caltech~\cite{Hood1998,vanEnk1999} & $38.2$ & $43.0$ & $0.8$ & $47.1 $ &$21.6$ & $69.9$ \\ \hline $\textnormal{MPQ}_1$~\cite{Hamsen2016,Chibani2016} & $2.12$ & $1.67$ & $15.9$ & $56.0$ & $30.6$ & $74.1$\\ \hline Bonn M~\cite{Gallego2016,WAlt} & $ 32.2$ & $16.9$ & $1.3$ & $65.6 $ & $41.1$ & $78.3$ \\ \hline Aarhus~\cite{Herskind2008} & $ 3.03$ & $1.22$ & $15.2$ & $71.3 $ & $49.6$ & $80.6$ \\ \hline Sussex~\cite{Begley2016,MKeller}& $0.45$ & $0.07$ & $135$ & $87.0 $ & $73.9$ & $87.6$ \\ \hline $\textnormal{MPQ}_2$~\cite{Reiserer2014} & $4.52$ & $0.5$ & $ 12.7$ & $90.2$ & $79.5$ & $89.3$ \\ \hline \end{tabular} \end{center} \label{tab:exp} \end{table} \section{Setup and Transfer Protocols} \label{sec:setupprotocol} In this section, we provide a detailed description of the setup (\secref{sec:basicmodel}) and the state transfer protocols (\secref{sec:protocols}) under consideration. In the following, we use the language of optical platforms, considering atoms as matter qubits and an optical fiber as a waveguide. Note that our derivations also apply to other platforms such as, e.g., superconducting qubits. 
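The derived columns of \tabref{tab:exp} can be cross-checked from the measured rates. A minimal sketch for two rows, assuming $c_\textrm{f}=2\times 10^8$\,m/s and allowing for the rounding of the tabulated values:

```python
import math

# Cross-check of Table tab:exp: P_out = kappa_cav/(kappa_cav + gamma_cav)
# (eq:Pout) and L_eff = c_f/kappa with 2*kappa = kappa_cav + gamma_cav
# (eq:kappagoodandbad). Row values as printed in the table.
c_f = 2e8
rows = {   # name: (kappa_cav/2pi [MHz], gamma_cav/2pi [MHz], L_eff [m], P_out)
    "Mainz":  (4.77, 31.5, 1.74, 0.132),
    "Bonn M": (32.2, 16.9, 1.30, 0.656),
}
for name, (k_cav, g_cav, L_eff_tab, P_out_tab) in rows.items():
    kappa = math.pi * (k_cav + g_cav) * 1e6     # total half-linewidth [rad/s]
    P_out = k_cav / (k_cav + g_cav)
    L_eff = c_f / kappa
    assert abs(P_out - P_out_tab) < 5e-3        # agrees up to table rounding
    assert abs(L_eff - L_eff_tab) / L_eff_tab < 0.02
    print(f"{name}: P_out = {P_out:.3f}, L_eff = {L_eff:.2f} m")
```

The same check applies to the remaining rows, with somewhat larger rounding tolerances for the entries quoted to fewer digits.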
\subsection{Basic Model} \label{sec:basicmodel} The Hamiltonian of our system consists of two parts: $H_\textnormal{sys}$ describing the coherent interactions (\secref{sec:modelhamiltonian}) and $H_\textnormal{loss}$ describing the couplings to undesired dissipative channels (\secref{sec:modeldissipation}). \subsubsection{Hamiltonian} \label{sec:modelhamiltonian} We model the cavity-fiber-cavity system as three linearly coupled cavities with the field modes being represented by independent annihilation and creation operators of the corresponding cavity or fiber mode. As explained in \appref{app:methodCFC}, we also employed an alternative description for our numerical simulations in which the system is described by the eigenmodes of the cavity-fiber-cavity system. Both descriptions yield the same results in the regime of high finesse cavities and for time scales that are long compared to the round-trip time $2\tau$ of a photon. Throughout the main text, we will use the former choice of basis states. The full Hamiltonian for the setup under consideration in \figref{fig:APM} is given by \begin{align} H=H_\textrm{cav}+H_\textrm{fib}+H_\textnormal{cav-fib}+H_\textnormal{at}+H_\textnormal{at-cav}. \label{eq:Htot} \end{align} The Hamiltonian $H_\textrm{cav}$ describes the bare evolution of both cavities $A$ and $B$ and is given by \begin{align} H_\textrm{cav}=\hbar \omega_0 \left( a^{\dag}a +b^{\dag}b\right), \label{eq:Hcav} \end{align} where the annihilation operator $a$ ($b$) refers to the cavity mode of cavity $A$ ($B$). In \eeqref{eq:Hcav} we consider only a single cavity resonance for each cavity, with frequency $\omega_0$ for both cavities. The restriction to a single cavity mode is well justified in the limit in which the cavity length is much smaller than the fiber length, $l \ll L$. \begin{figure} \caption{Quantum network setup for a deterministic state transfer. (a) The cavities $A$ and $B$ (length $l$) are coupled by a fiber $C$ (length $L$).
Each cavity contains an atom that is modeled as a three-level system with ground states $\ket{0}$ and $\ket{1}$ and an excited state $\ket{E}$.} \label{fig:APM} \end{figure} The fiber modes are described by the Hamiltonian \begin{align} H_\textrm{fib}=\hbar \sum_{n=-\infty}^{\infty} \, \omega_n \, c_n^{\dag} c_n, \label{eq:Hfib} \end{align} where the annihilation operator $c_n$ denotes the $n$th fiber mode with frequency $\omega_n=\omega_0+n\cdot \textnormal{FSR}_\textrm{fib}$. We assume the fiber mode $c_0$ with frequency $\omega_0$ to be resonant with the cavity modes $a$ and $b$, which translates into the condition $L=m\cdot l$ with integer $m$. Note that the fiber can alternatively be modeled by using spatially localized modes, allowing for a more intuitive representation of a travelling photon~\cite{Ramos2016}. The interaction Hamiltonian $H_\textnormal{cav-fib}$ is given by the coupling between cavity and fiber modes~\cite{Pellizzari1997} \begin{align} H_\textnormal{cav-fib}&=\hbar \sum_{n}\Big[g_{A} \,a^{\dag}+(-1)^n g_{B} \,b^{\dag}\Big]c_n+\textrm{h.c.} \ , \label{eq:Hcavfib} \end{align} where $\textrm{h.c.}$ is the Hermitian conjugate. The coupling strengths $g_{A}$ and $g_{B}$ of the cavity modes $a$ and $b$ to the fiber modes $c_n$ are related to the effective decay rates of the cavities $A$ or $B$ coupled to the fiber given by $\kappa_\textrm{cav}=2 \pi g_{A/B}^2/\textnormal{FSR}_\textrm{fib}$ \cite{Vermersch2016}. For optical implementations, the cavity emission rate $\kappa_\textrm{cav}$ into the desired output mode is given by $\kappa_\textrm{cav} \equiv c |\mathfrak{t}|^2/(2 l)$~\cite{vanEnk1999}.
Therefore, the coupling strengths between cavity and fiber modes are given by \begin{align} g_{A/B}=\sqrt{\frac{\kappa_\textrm{cav} \textnormal{FSR}_\textrm{fib}}{2 \pi}}=\sqrt{\frac{c \, c_\textrm{f} \, |\mathfrak{t}|^2}{4\, L\, l}}, \label{eq:gAB} \end{align} where $\sabs{\mathfrak{t}}^2=\mathcal{T}_2$ is the transmission coefficient of the identical inner mirrors (M2 and M3 in \figref{fig:mainresults}) and $c_\textrm{f}=2 c/3$ is the speed of light in the fiber. Note that the coupling $g_{A/B}$ is equally strong for all fiber modes. The phase factor $(-1)^n$ in \eeqref{eq:Hcavfib} introduces alternating signs for the coupling to even or odd modes in the fiber. As illustrated in \figref{fig:spectrum}c, even and odd fiber modes correspond to wave functions with an even or odd number of nodes in the intensity profile. \begin{figure} \caption{Cavity and fiber properties. (a) Spectra of cavity $B$ (magenta) and the fiber (blue) in the multimode limit. Here, the cavity linewidth $2\kappa$ is much larger than the free spectral range of the fiber $\textnormal{FSR}_\textrm{fib}$.} \label{fig:spectrum} \end{figure} Each atom is modeled as a three-level system with degenerate ground states $|0\rangle$ and $|1\rangle$ of equal energy and an excited state $\ket{E}$ with energy $\hbar \omega_E$ with respect to the ground states (see \figref{fig:APM}). The bare atomic Hamiltonian is hence given by \begin{align} H_\textnormal{at}=\hbar \omega_E (\ket{E}\bra{E}_A+\ket{E}\bra{E}_B). \label{eq:Hat} \end{align} The transition between the atomic ground state $|0\rangle$ and the excited state $|E\rangle$ is coupled to the cavity field $a$ ($b$) in cavity $A$ ($B$) with coupling strength $g_\textnormal{at-c}^{A}$ ($g_\textnormal{at-c}^{B}$). The transition between the atomic ground state $|1\rangle$ and the excited state $|E\rangle$ is driven by a time-dependent classical field with Rabi frequency $\Omega_{A/B}(t)$ and frequency $\omega_\textrm{L}$.
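The two expressions in \eeqref{eq:gAB} are equal once $\kappa_\textrm{cav}=c|\mathfrak{t}|^2/(2l)$ and $\textnormal{FSR}_\textrm{fib}=\pi c_\textrm{f}/L$ are inserted; a quick numerical consistency check (all parameter values below are illustrative placeholders):

```python
import math

# Consistency check of eq:gAB: sqrt(kappa_cav * FSR_fib / 2pi) equals
# sqrt(c * c_f * |t|^2 / (4 L l)) by construction.
c = 3e8
c_f = 2 * c / 3                 # speed of light in the fiber
L, l = 500.0, 0.01              # fiber and cavity length [m]
T2 = 5e-5                       # inner-mirror transmission |t|^2

kappa_cav = c * T2 / (2 * l)    # cavity emission rate into the fiber
FSR_fib = math.pi * c_f / L     # free spectral range of the fiber

g1 = math.sqrt(kappa_cav * FSR_fib / (2 * math.pi))
g2 = math.sqrt(c * c_f * T2 / (4 * L * l))
assert math.isclose(g1, g2, rel_tol=1e-9)
print(f"g_A/B = {g1:.3e} rad/s")
```

This also makes explicit that $g_{A/B}$ decreases as $1/\sqrt{L}$: a longer fiber dilutes the coupling over more closely spaced modes.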
Note that the atom-cavity coupling $g_\textnormal{at-c}^{A/B}$ depends on the mode volume of the cavity and on the position and dipole moment of the atom~\cite{AtomPhotonInt}. In order to avoid populating the excited state $|E\rangle$, which suffers from spontaneous emission at rate $\Gamma$, the classical drive and cavity are strongly detuned from the atomic transition, i.e., $\sabs{\Delta_\textnormal{at}}=\sabs{\omega_\textnormal{L} - \omega_{E}} \gg \Gamma$ such that photon loss due to atomic decay is strongly suppressed. The effect of spontaneous emission is discussed in \secref{sec:robustness}. Due to the strongly detuned laser drive, the excited state $|E\rangle$ can be eliminated such that the effective atom-cavity interaction Hamiltonian~\cite{Mabuchi1997} is given by \begin{align} H_\textnormal{at-cav}=&\hbar G_A \left( \sigma_A^+ a+ a^\dag \sigma_A^- \right) + \hbar G_B \left( \sigma_B^+ b+ b^\dag \sigma_B^- \right), \end{align} where $\sigma^+_{A/B}=|1\rangle\langle0|_{A/B}$ ($\sigma^-_{A/B}=|0\rangle\langle1|_{A/B}$) is the raising (lowering) operator for the qubit in cavity $A/B$. The effective atom-cavity coupling is given by \begin{align} G_{A/B}(t)= \frac{g_\textnormal{at-c}^{A/B} \Omega_{A/B}(t)}{ \Delta_\textnormal{at}}. \label{eq:ioncavcoupl} \end{align} After the excited state $\ket{E}$ is eliminated, the bare atomic Hamiltonian $H_\textnormal{at}$ in \eeqref{eq:Hat} vanishes. Note that eliminating the excited state $\ket{E}$ also results in effective Stark shifts for both ground states $\ket{0}$ and $\ket{1}$, which however can be compensated; see Ref.~\cite{Pellizzari1997}.
The full Hamiltonian given in \eeqref{eq:Htot} can be expressed in an interaction picture with respect to $H_0=\hbar \omega_0( a^{\dag}a + b^{\dag}b +c_0^{\dag}c_0)$ such that \begin{align} H_\textnormal{sys}= \,&\hbar\, G_A(t) \left( \sigma_A^+ a+ a^\dag \sigma_A^- \right) \label{eq:HAPM} \\ &+\hbar \, \sum_n \,n \, \textnormal{FSR}_\textrm{fib} \ c_n^{\dag}c_n \notag \\ &+\hbar \, \sum_{n}\Big[g_{A}\, a^{\dag}+(-1)^n g_{B} \, b^{\dag}\Big]c_n+\textrm{h.c.} \notag \\ &+ \hbar \, G_B(t) \left( \sigma_B^+ b+ b^\dag \sigma_B^- \right). \notag \end{align} \subsubsection{Dissipation} \label{sec:modeldissipation} Here, we discuss the two main sources of imperfection in deterministic state transfer: fiber and cavity losses. The influence of other imperfections will be discussed in \secref{sec:robustness}. The loss Hamiltonian is given by $H_\textnormal{loss}=V_{\textrm{cav},a}+V_{\textrm{cav},b}+\sum_n V_{\textrm{fib},n}$. To model losses in the fiber, we consider each fiber mode to couple in a Markovian way to a bath of bosonic modes with annihilation (creation) operators $\tilde{c}_{\omega,n}$ ($\tilde{c}_{\omega,n}^\dag$) described by the Hamiltonian \begin{align} V_{\textrm{fib},n}=\sqrt{\frac{\gamma_\textrm{fib}}{2\pi}} \int \textnormal{d}\omega \left(c_n^\dag \tilde{c}_{\omega,n}+ \tilde{c}_{\omega,n}^\dag c_{n} \right). \label{eq:Vfib} \end{align} The fiber loss rate $\gamma_\textrm{fib}=\alpha c_\textrm{f}$, where $\alpha$ is the absorption coefficient in the fiber \cite{vanEnk1999}. The absorption coefficient $\alpha$ is defined by the fraction absorbed inside a fiber of length $L$ \begin{align} &\exp(-\alpha L)= 10^{- \frac{X}{10} \cdot \frac{L}{1000}}, \label{eq:fiberloss}\\ &\, \Rightarrow \alpha = X \cdot \ln(10)10^{-4}, \notag \end{align} where $X$ is the attenuation coefficient of the fiber in decibels per kilometer.
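The conversion in \eeqref{eq:fiberloss} reproduces the quoted loss rates, as the following sketch shows (assuming $c_\textrm{f}=2\times 10^8$\,m/s; the stated 1.5\,kHz and 22\,kHz are rounded values):

```python
import math

# Attenuation X [dB/km] -> fiber loss rate gamma_fib = alpha * c_f
# (eq:fiberloss), with c_f = 2e8 m/s for the fiber.
c_f = 2e8

def gamma_fib(X):
    """Fiber loss rate [1/s] for an attenuation of X dB/km."""
    alpha = X * math.log(10) * 1e-4   # absorbed fraction per meter
    return alpha * c_f

g_telecom = gamma_fib(0.2)            # telecom wavelengths
g_optical = gamma_fib(3.0)            # optical wavelengths
assert abs(g_telecom / (2 * math.pi) - 1.5e3) < 50    # ~1.5 kHz
assert abs(g_optical / (2 * math.pi) - 22e3) < 500    # ~22 kHz

# Transmission through 500 m of telecom fiber (eq:Pfib): ~97.7 %
P_fib = math.exp(-g_telecom * 500.0 / c_f)
assert abs(P_fib - 0.977) < 1e-3
print(f"gamma/2pi = {g_telecom / (2 * math.pi):.0f} Hz, P_fib = {P_fib:.3f}")
```

The 500\,m transmission probability of $97.7\%$ matches the value used in \tabref{tab:exp}.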
For telecom wavelength fibers, a typical attenuation is $0.2$\,dB/km, yielding a fiber loss rate of $\gamma_\textrm{fib}^{0.2 \textrm{dB/km}}/2\pi=1.5$\,kHz, and for optical wavelengths, a typical attenuation is $3$\,dB/km, with rate $\gamma_\textrm{fib}^{3 \textrm{dB/km}}/2\pi=22$\,kHz. Note that frequency conversion from optical to telecom wavelengths has been achieved with efficiencies up to $80\%$, e.g., \cite{Pelc2011}. The probability of a photon to propagate through a fiber of length $L$ is given by $P_\textrm{fib}$, as defined in \eeqref{eq:Pfib}. Equivalently, we model cavity losses by considering each cavity $A$ and $B$ to decay to free space, with the interaction given by the coupling of cavity modes $a$ and $b$ to a frequency bath with annihilation (creation) operator $\tilde{a}_\omega$ ($\tilde{a}_\omega^\dag$) and $\tilde{b}_\omega$ ($\tilde{b}_\omega^\dag$): \begin{align} V_{\textrm{cav},a}&= \sqrt{\frac{\gamma_\textrm{cav}}{2\pi}}\int \textnormal{d}\omega \left(a^\dag \tilde{a}_{\omega}+ \tilde{a}_{\omega}^\dag a \right), \label{eq:Vcava}\\ V_{\textrm{cav},b}&=\sqrt{\frac{\gamma_\textrm{cav}}{2\pi}} \int \textnormal{d}\omega \left(b^\dag \tilde{b}_{\omega}+ \tilde{b}_{\omega}^\dag b \right).\label{eq:Vcavb} \end{align} Here, cavity losses at rate $\gamma_\textrm{cav}$ include the losses through the outer mirrors (M1 and M4 in \figref{fig:mainresults}a) with transmission $\mathcal{T}_1$ as well as absorption losses $\mathcal{L}$ in the cavities.
The total linewidth of the cavity $2\kappa$ consists of the rate of coupling into the fiber $ \kappa_\textrm{cav}$ as well as the total loss rate $\gamma_\textrm{cav}$ such that \begin{align} 2\kappa&= \kappa_\textrm{cav} + \gamma_\textrm{cav}, \label{eq:kappagoodandbad}\\ &\equiv \frac{c |\mathfrak{t}|^2}{2 l} + \frac{c |{\ell}|^2}{2 l}, \notag \end{align} where $\gamma_\textrm{cav}$ contains both transmission and absorption losses: $\sabs{\ell}^2=\mathcal{T}_1+\mathcal{L}$. In \tabref{tab:exp} we summarize the cavity losses of a selection of experiments. The probability of a photon to be emitted into the desired output mode $P_\textnormal{out}$ as defined in \eeqref{eq:Pout0} can be rephrased as \begin{align} P_\textnormal{out}=\frac{\kappa_\textrm{cav}}{\kappa_\textrm{cav}+\gamma_\textrm{cav}}=\frac{\sabs{\mathfrak{t}}^2}{\sabs{\mathfrak{t}}^2+\sabs{\ell}^2}, \label{eq:Pout} \end{align} which is equivalent to the probability $P_\textnormal{in}$ of the photon being absorbed by the (second) cavity.\\ Accordingly, we expect the total success probability of a photon transfer between two cavities through a fiber to be limited by \begin{align} P_1&=P_\textnormal{out} P_\textrm{fib} P_\textnormal{in} =\left(\frac{\sabs{\mathfrak{t}}^2}{\sabs{\mathfrak{t}}^2+\sabs{\ell}^2}\right)^2 \exp\left(-\gamma_\textrm{fib} \frac{L}{c_\textrm{f}}\right), \label{eq:Ploss} \end{align} cf. Ref.~\cite{vanEnk1999}. We use this limit $P_1$ later in \secref{sec:fiberlosses} and \secref{sec:cavitylosses} as a benchmark for the success probability of state transfer. \subsubsection{Equations of Motion} \label{sec:modelEOM} As we are interested in performing a quantum state transfer, we solve the dynamics of the full system according to the Hamiltonian in \eeqref{eq:HAPM}, taking into account the loss mechanisms described in \eqsref{eq:Vfib}, \eqref{eq:Vcava} and \eqref{eq:Vcavb} using a single-excitation Wigner-Weisskopf ansatz.
The wave function of the full model in this single excitation ansatz is given by \begin{align} |\Psi\rangle=&c_{A} |1\rangle_A+ c_{B} |1\rangle_B +c_a |a\rangle+c_b |b\rangle\label{eq:wavefunction}\\ &+\sum_n c_{c_n} |c_n\rangle + \int \! \textnormal{d} \omega \, \left( c_{\tilde{a}_\omega} \tilde{a}_\omega^\dag +c_{\tilde{b}_\omega} \tilde{b}_\omega^\dag\right) \ket{\textnormal{vac}} \notag \\ &+ \sum_n \int \!\textnormal{d} \omega \, c_{\tilde{c}_{\omega,n}} \tilde{c}_{\omega,n}^\dag \ket{\textnormal{vac}}+c_\textnormal{vac} \ket{\textnormal{vac}},\notag \end{align} where $|1\rangle_{A/B}$ denotes the state of the system with the excitation in the atom in cavity $A/B$ with amplitude $c_{A/B}$, $\ket{a/b}$ the state with the excitation in cavity $A/B$ with amplitude $c_{a/b}$ and $\ket{c_n}$ the state with the excitation in the $n$th fiber mode with amplitude $c_{c_n}$. The sixth and seventh terms in \eeqref{eq:wavefunction} describe the baths associated with the cavity and fiber losses as modeled in the previous section, where $\ket{\textnormal{vac}}$ is the vacuum state of the light field and $c_{\tilde{x}_\omega}$ are the amplitudes of the baths $x\in \{a,b,c_n\}$. Lastly, the amplitude $c_\textnormal{vac}$ belongs to the state of the system without an excitation, i.e., $\ket{\textnormal{vac}}$ corresponds to the state in which both atoms are in the ground state $|0\rangle_{A/B}$, while the cavities and the fiber are empty.
Starting from the Schr\"odinger equation $i\hbar\dot{\ket{\Psi}}=(H_\textnormal{sys}+H_\textnormal{loss} )\ket{\Psi}$, we obtain the time evolution of the amplitudes of the system \begin{align} i \dot{c}_{A}&= G_A(t) c_{a},\label{eq:EOM} \\ i\dot{c}_{a}&=-i \frac{\gamma_\textrm{cav}}{2}c_{a}+G_A(t) c_{A}+g_A \sum_n c_{c_n}, \notag \\ i\dot{c}_{c_{n}}&=-\left(i \frac{\gamma_\textrm{fib}}{2} -\, n \, \textnormal{FSR}_\textrm{fib} \right) c_{c_{n}}+ g_A c_{a} + g_B (-1)^n c_{b}, \notag \\ i\dot{c}_{b}&=-i \frac{\gamma_\textrm{cav}}{2}c_{b}+G_B(t) c_{B}+g_B \sum_n (-1)^n c_{c_n}, \notag \\ i\dot{c}_{B}&= G_B(t) c_{b}, \notag \end{align} where the amplitudes of the lossy channels have been integrated out~\cite{Dorner2002}. Finally, we solve \eeqref{eq:EOM} for the initial state $\ket{\Psi(t=0)}=\ket{1_A}$ to obtain the success probability of the state transfer, which we define as the probability \begin{align} F=\sabs{c_B(t\rightarrow \infty)}^2. \label{eq:Fidelity} \end{align} This probability provides a measure for a successful transmission of the photonic excitation through the setup. The error $1-F$ denotes the probability to emit a photon into an undesired channel $\tilde{a}_\omega$, $\tilde{b}_\omega$ or $\tilde{c}_{\omega,n}$. \figref{fig:APM}b illustrates the coupling scheme corresponding to \eeqref{eq:EOM}. \subsection{Quantum State Transfer Protocols} \label{sec:protocols} \subsubsection{Wave Packet Shaping} \label{sec:WPS} As explained in \secref{sec:mainresults}, one possibility to realize quantum state transfer by wave packet shaping is to produce a time-symmetric photon wave packet inside the fiber by the first combined atom-cavity system such that the back reflection of the wave packet at the inner mirror of the second cavity (M3 in \figref{fig:APM}) is prevented \cite{Mabuchi1997}.
For the wave packet shaping we consider here, it is essential that the temporal profile of the classical drive of the first atom produces a time-symmetric wave packet in the fiber. The classical drive for the second atom is then given by the time-reversed temporal profile with time delay $\tau=L/c_\textrm{f}$. We consider the regime in which the maximal coupling between atom and cavity $G_\textnormal{max}\equiv g_\textnormal{at-c}^{A/B} \Omega_{A/B}^\textnormal{max}/\Delta_\textnormal{at}$ is much smaller than the cavity decay rate $\kappa$ (and equal for both cavities). In this regime, we can effectively eliminate the cavity, which results in an effective coupling rate between the atoms and the fiber modes given by $\gamma_{A/B}(t)=\kappa_\textrm{cav} (G_{A/B}(t)/\kappa)^2$~\cite{Habraken2012}. In this case, a possible classical drive sequence for the atom-fiber coupling rate $\gamma_{A/B}$ is given in Ref.~\cite{Stannigel2011} by \begin{align} \gamma_A(t)&= \begin{cases} \gamma_\textnormal{max} \frac{\exp(\gamma_\textnormal{max} t)}{2-\exp(\gamma_\textnormal{max} t)} & \text{if } t < 0 \\ \gamma_\textnormal{max} & \text{if } t \ge 0 , \end{cases} \label{eq:WPSpulsesequence} \\ \gamma_B(t)&= \gamma_A\left(\tau-t\right), \notag \end{align} where the maximal atom-fiber coupling rate is given by $\gamma_\textnormal{max}=\kappa_\textrm{cav} (G_\textnormal{max}/\kappa)^2$. This drive sequence generates a time-symmetric wave packet with exponential shape and length $L_\textnormal{ph}$. Experimentally, the relevant parameter to vary the coupling rate is the classical laser drive $\Omega_{A/B}(t)$, as shown in \figref{fig:mainresults}b.
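A minimal numerical sketch of the drive sequence in \eeqref{eq:WPSpulsesequence} (function names and the use of dimensionless units are ours):

```python
import math

def gamma_A(t, gamma_max):
    """Atom-fiber coupling rate of the sending atom, Eq. (eq:WPSpulsesequence):
    exponential ramp for t < 0, constant gamma_max for t >= 0."""
    if t < 0:
        e = math.exp(gamma_max * t)
        return gamma_max * e / (2.0 - e)
    return gamma_max

def gamma_B(t, gamma_max, tau):
    """Receiving atom: time-reversed profile, delayed by the fiber delay tau = L/c_f."""
    return gamma_A(tau - t, gamma_max)
```

The ramp is continuous at $t=0$ and vanishes for $t\rightarrow-\infty$, producing the exponentially shaped, time-symmetric packet described above.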
This classical drive relates to the effective atom-fiber drive sequence $\gamma_{A/B}(t)$ in \eeqref{eq:WPSpulsesequence} through the expression \begin{align} \Omega_{A/B}(t)=\frac{\Delta_\textnormal{at} \kappa}{g_\textnormal{at-c}^{A/B} \sqrt{\kappa_\textrm{cav}}} \,\sqrt{ \gamma_{A/B}(t)}. \end{align} Note that the wave packet shaping approach works in both limits: the long photon limit ($L_\textnormal{ph}>L$) and the short photon limit ($L_\textnormal{ph}<L$). \subsubsection{Adiabatic Passage} \label{sec:AP} The general idea of performing quantum state transfer by adiabatic passage is to use the methods known from STIRAP \cite{RMPAP2016} for atoms to perform a coherent transfer via a dark state with respect to the photon fields~\cite{Pellizzari1997,vanEnk1999}. The time-dependent coupling of the atoms to the cavity modes $G_{A/B}(t)$ in \eeqref{eq:HAPM} is varied via the classical laser drive $\Omega_{A/B}(t)$ of the atoms. The classical laser drive of both atoms realizes a counterintuitive pulse sequence~\cite{Vitanov1997b}, in which the classical field in the receiving cavity $B$ is switched on before the driving field of the sending cavity $A$: \begin{align} \lim_{t\rightarrow - \infty} \frac{\Omega_A(t)}{\Omega_B(t)}=0, \qquad \lim_{t\rightarrow \infty} \frac{\Omega_B(t)}{\Omega_A(t)}=0. \label{eq:adiabaticcondition} \end{align} We choose the temporal profiles of both pulses to be Gaussian functions of equal maximal strength $\Omega_\textnormal{max}=\Omega_{A}^\textnormal{max}=\Omega_{B}^\textnormal{max}$ with a retardation $\tau_\textnormal{spl}$ between them \begin{align} \Omega_A(t)&= \Omega_{A}^\textnormal{max} \exp\left(-[t-\tau_\textnormal{spl}]^2/T^2\right), \label{eq:APpulsesequence} \\ \Omega_B(t)&=\Omega_{B}^\textnormal{max} \exp\left(-t^2/T^2\right), \notag \end{align} where $T$ is the pulse width, cf. \figref{fig:mainresults}c.
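The Gaussian pulse pair of \eeqref{eq:APpulsesequence} and the counterintuitive ordering of \eeqref{eq:adiabaticcondition} can be sketched as follows (function names are ours):

```python
import math

def omega_A(t, omega_max, T, tau_spl):
    """Sending-atom drive, Eq. (eq:APpulsesequence): Gaussian delayed by tau_spl."""
    return omega_max * math.exp(-((t - tau_spl) / T) ** 2)

def omega_B(t, omega_max, T):
    """Receiving-atom drive: Gaussian centered at t = 0, i.e. switched on first."""
    return omega_max * math.exp(-(t / T) ** 2)
```

For any $\tau_\textnormal{spl}>0$ the ratios $\Omega_A/\Omega_B$ and $\Omega_B/\Omega_A$ vanish at early and late times respectively, which is exactly the counterintuitive ordering required for the dark-state transfer.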
In general, the temporal separation of the pulses $\tau_\textnormal{spl}$ is on the order of the pulse width \cite{Vitanov1997b}, and we therefore introduce the relative temporal separation $x_\textnormal{spl}=\tau_\textnormal{spl}/T$. Performing a quantum state transfer by adiabatic passage requires the optimization of three parameters: the coupling strength $G^\textnormal{max}_{A/B}$, the temporal width of the classical drive $T$ and the relative temporal separation of the two pulses $x_\textnormal{spl}$. The maximal coupling strength $G^\textnormal{max}_{A/B}= g_\textnormal{at-c}^{A/B} \Omega^\textnormal{max}_{A/B}/ \Delta_\textnormal{at}$ is optimized for fixed atom-cavity coupling $g_\textnormal{at-c}^{A/B}$. In contrast to wave packet shaping, quantum state transfer by adiabatic passage only works in the long photon limit. The reason lies in the mechanism of adiabatic passage, which requires a standing wave of the photon field to perform the transfer. \section{Fiber Losses} \label{sec:fiberlosses} In this section, we evaluate and discuss the influence of fiber losses on the achievable quantum state transfer success probability by means of a numerical analysis (\secref{sec:fibernumerics}) and an analytical example (\secref{sec:fiberanalytics}). The effect of cavity losses will be addressed later in \secref{sec:cavitylosses}. \subsection{Numerical Treatment} \label{sec:fibernumerics} We numerically study a wide parameter range, including the concrete regime (single mode limit) that has been identified in the literature~\cite{Serafini2006, Yin2007,Chen2007,Ye2008,Lu2008,Zhou2009,Clader2014,Chen2015,Hua2015,Huang2016} as the regime in which the limitation $F<P_1$ can be overcome (see \secref{sec:fiberanalytics} and \appref{app:atomiclimit}). As a result, we find that even deep in the single mode limit, the success probability of the state transfer is always limited by $P_1$ (which is equal to $P_\textrm{fib}$ for $\gamma_\textrm{cav}=0$).
This result holds for both state transfer methods. \figref{fig:fiberlosses} provides an example of the state transfer success probability $F$, defined in \eeqref{eq:Fidelity}, as a function of the fiber length $L$ based on the experimental parameters in Ref.~\cite{Stute2012} for $\gamma_\textrm{cav}=0$ (the effect of cavity losses will be included in \secref{sec:cavitylosses} below). In this example, the identical cavities $A$ and $B$ have a length of $l=0.02$\,m and an inner mirror (M2 \& M3 in \figref{fig:mainresults}) with a transmissivity of $|\mathfrak{t}|^2=13$\,ppm. We consider two fibers with different loss rates $\gamma_\textrm{fib}$: first, absorption losses of $0.2$\,dB/km corresponding to fibers at telecom wavelengths and second, absorption losses of $3$\,dB/km corresponding to optical wavelengths. \begin{figure} \caption{State transfer success probability $F$ for two fiber absorption coefficients $0.2$\,dB/km (blue) and $3$\,dB/km (green) with $\gamma_\textrm{cav}=0$.} \label{fig:fiberlosses} \end{figure} In addition, in \figref{fig:fiberlosses} we compare both state transfer methods. For every simulation, we include sufficiently many fiber modes in \eeqref{eq:HAPM} to ensure that our results converge with respect to the number of fiber modes. For wave packet shaping, the state transfer success probability is optimized in the regime $\kappa_\textrm{cav} \gg G_\textnormal{max}$, in which the cavity can be eliminated~\cite{WPS}. In the case of adiabatic passage, every plot point in \figref{fig:fiberlosses} is optimized with respect to the pulse length $T$, the relative temporal separation of the pulses $x_\textnormal{spl}$ and the pulse area $\Omega_\textnormal{max} T$. In particular, the optimized values for adiabatic passage lie in the same regime as those for wave packet shaping, i.e., $\kappa_\textrm{cav} \gg G_\textnormal{max}$, in which the cavity is barely populated.
In order to reach high success probabilities, the pulse area must be large ($\Omega^2_\textnormal{max} T\gg1$), and due to the necessity of a weak coupling $G_\textnormal{max}$, this is achieved for long pulse durations $T$. The pulse length is varied in the range $T=(10 \dots 1000) / \kappa_\textrm{cav}$, which for the chosen parameters translates into pulse lengths of $T\approx10^{-5}$--$10^{-3}$\,s. The relative separation of the pulses and the coupling strength ratio are optimized in the ranges $x_\textnormal{spl}=(0.8 \dots 2.1)$ and $\Omega_\textnormal{max}/\Delta_\textnormal{at}=(0.0001 \dots 0.1)$. As a benchmark for the state transfer we plot the expected survival probability $P_1$ of a photon through a lossy fiber as given in \eeqref{eq:Ploss}. We find that the numerically optimized success probability for both state transfer methods and both fiber absorption losses is in excellent agreement with the limit given by $P_1$ (see \figref{fig:fiberlosses}). For wave packet shaping, this agreement seems natural because, in the optimal case, the photon only passes through the fiber once. In the case of adiabatic passage, the agreement implies that the state transfer success probability is limited by $P_1$ and therefore fiber losses cannot be overcome. Note that the numerical results shown in \figref{fig:fiberlosses} are obtained for a parameter set deep in the single mode limit (see \figref{fig:spectrum}), for which the single mode parameter $\mathfrak{n}$ (defined in \eeqref{eq:SMLp}) for $L=1000$\,m is $\mathfrak{n}=0.31$. Numerical results were, however, also obtained for different parameter sets ($|\mathfrak{t}|^2,|\ell|^2, l$) beyond the single mode limit. We find that in all regimes the quantum state transfer is strictly limited by $P_1$.
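The benchmark $P_1$ of \eeqref{eq:Ploss} used throughout this section is easily evaluated; a minimal sketch, where the function names, the conversion from a dB/km attenuation coefficient, and the illustrative value $\sabs{\ell}^2=5$\,ppm are our choices:

```python
import math

def p_out(t2, l2):
    """Outcoupling probability P_out = |t|^2 / (|t|^2 + |l|^2), Eq. (eq:Pout)."""
    return t2 / (t2 + l2)

def p_fib(alpha_db_per_km, L):
    """Fiber transmission for attenuation alpha in dB/km and length L in meters."""
    return 10.0 ** (-alpha_db_per_km * (L / 1000.0) / 10.0)

def p1(t2, l2, alpha_db_per_km, L):
    """Single-pass transfer limit P_1 = P_out^2 * P_fib, Eq. (eq:Ploss)."""
    return p_out(t2, l2) ** 2 * p_fib(alpha_db_per_km, L)
```

With $|\mathfrak{t}|^2=13$\,ppm, the assumed $\sabs{\ell}^2=5$\,ppm and $0.2$\,dB/km at $L=1000$\,m this gives $P_1\approx0.50$, i.e., the outcoupling penalty $P_\textnormal{out}^2$ dominates over the fiber transmission at this distance.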
\subsection{Analytical Example} \label{sec:fiberanalytics} It has been stated in the literature~\cite{Serafini2006, Yin2007,Chen2007,Ye2008,Lu2008,Zhou2009,Clader2014,Chen2015,Hua2015,Huang2016} that the Hamiltonian in \eeqref{eq:HAPM} can be simplified in the single mode limit by neglecting all fiber modes but the resonant mode $c_0$. This argument is based on the weak coupling assumption $g_{A/B}\ll \textnormal{FSR}_\textrm{fib}$, under which the fiber modes $c_{n\neq 0}$ are far detuned from the cavities. However, as we show below, these off-resonant contributions integrated over long times lead to non-negligible effects. As detailed in \appref{app:atomiclimit}, the simplified Hamiltonian that includes only a single fiber mode as used in Refs.~\cite{Serafini2006, Yin2007,Chen2007,Ye2008,Lu2008,Zhou2009,Clader2014,Chen2015,Hua2015,Huang2016} leads, in complete analogy to STIRAP in a three-level atomic system, to a success probability of state transfer by adiabatic passage that reaches unity in the adiabatic limit $g_{0}^2 T/\gamma_\textrm{fib} \rightarrow \infty$, \begin{align} F_{\text{STIRAP}}=\exp\left(-\frac{ \gamma_\textrm{fib}}{{g_{0}^2 T}} \frac{\pi}{2}\right), \label{eq:fidelitystirap} \end{align} where $g_0$ denotes the maximal coupling of atom and fiber. In the following, we give an illustrative analytical example that points out a crucial difference between adiabatic passage in atoms and adiabatic passage state transfer through a fiber. \\ We consider the regime in which the decay rate of the cavity $\kappa$ is much larger than the atom-cavity coupling $G_{A/B}$. We focus here on fiber losses and therefore assume $\gamma_\textrm{cav}=0$ throughout this section. In this regime, we can adiabatically eliminate the cavity and obtain an effectively coupled atom-fiber-atom system; see \figref{fig:APscheme}a.
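In the truncated single-mode description, \eeqref{eq:fidelitystirap} predicts that slower driving always helps; a one-line sketch (the function name is ours) makes this monotonic behavior explicit:

```python
import math

def f_stirap(g0, T, gamma_fib):
    """Single-fiber-mode STIRAP estimate, Eq. (eq:fidelitystirap): the success
    probability approaches unity in the adiabatic limit g0^2 T / gamma_fib -> inf."""
    return math.exp(-(gamma_fib / (g0 ** 2 * T)) * math.pi / 2.0)
```

This monotonic increase with $T$ is precisely what the multimode analysis of this section corrects.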
In \appref{sec:purelyphotonic} we discuss the very similar case of a purely photonic model in which the state of the atom is mapped rapidly onto the cavity, followed by a state transfer in the coupled cavity-fiber-cavity system. \begin{figure} \caption{Level scheme analogue for adiabatic passage state transfer. (a) Effectively coupled atom-fiber-atom description in which the cavities have been eliminated.} \label{fig:APscheme} \end{figure} A full description of the problem would include all fiber modes $c_n$ with $n\in(-\infty, +\infty)$. However, the importance of going beyond the single mode description (\appref{app:atomiclimit}), even in the regime in which $g_{A/B}\ll\textnormal{FSR}_\textrm{fib}$ and $\mathfrak{n}\ll 1$, is well illustrated by including the first pair of far detuned fiber modes $c_{\pm 1}$ in our analytical derivation of the success probability of the adiabatic state transfer. The equations of motion for the wave function can be derived from \eeqref{eq:EOM} by eliminating the cavity modes. We consider here three fiber modes such that the atom-fiber-atom system is described by \begin{align} \dot{c}_{A}&=-i \tilde{g}_A(t) \left( c_{c_0}+c_{c_{-1}} +c_{c_{+1}} \right), \label{eq:TMLEOM} \\ \dot{c}_{c_{-1}}&=-i \tilde{g}_A(t) c_{A}+i \tilde{g}_B(t) c_{B}-\left(\gamma_\textrm{fib}/2 -i \textnormal{FSR}_\textrm{fib} \right) c_{c_{-1}},\notag \\ \dot{c}_{c_0}&=-i \tilde{g}_A(t) c_{A}-i \tilde{g}_B(t) c_{B}-(\gamma_\textrm{fib}/2) c_{c_0},\notag \\ \dot{c}_{c_{+1}}&=-i \tilde{g}_A(t) c_{A}+i \tilde{g}_B(t) c_{B}-\left(\gamma_\textrm{fib}/2 +i \textnormal{FSR}_\textrm{fib} \right) c_{c_{+1}},\notag \\ \dot{c}_{B}&=-i \tilde{g}_B(t) \left( c_{c_0}-c_{c_{-1}} -c_{c_{+1}} \right), \notag \end{align} where $\tilde{g}_{A/B}(t)= g_{A/B} (G_{A/B}(t)/\kappa)$ is the effective atom-to-fiber coupling strength (see \figref{fig:APM}b). The level scheme analogue for this specific example is given in \figref{fig:APscheme}a.
As introduced in \secref{sec:basicmodel}, even and odd modes (\figref{fig:spectrum}c) couple in \eeqref{eq:TMLEOM} with a different sign to the cavity $B$ due to the factor $(-1)^n$ in \eeqref{eq:HAPM}. This sign is the crucial difference with respect to STIRAP in atoms. In atoms, where all excited states couple with the same strength and phase to the ground states, there exists a dark state with respect to the whole manifold of excited states. In our case, however, due to the alternating sign of the coupling in \eeqref{eq:HAPM}, it is impossible to find a dark state with respect to the whole manifold of excited states because a superposition state that is dark with respect to the even modes couples to the odd modes and vice versa~\cite{vanEnk1999}. It is thus apparent that some fiber modes (even or odd) have to be populated during the state transfer, and that consequently losses due to absorption in the fiber are unavoidable and affect the success probability of the transfer. We consider here the particular case in which the temporal profiles of the classical driving fields in cavities $A$ and $B$ are given by a sine and a cosine function, respectively (see \appref{app:beyondSML} for details). This specific choice allows us to derive a simple analytical expression for the state transfer success probability $F$ as defined in \eeqref{eq:Fidelity}. To this end, we adiabatically eliminate the far detuned modes $c_{\pm 1}$ in \eeqref{eq:TMLEOM} under the assumption $\gamma_\textrm{fib}^2/4+\textnormal{FSR}_\textrm{fib}^2\gg \tilde{g}_{A/B}^2$.
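Before eliminating $c_{\pm 1}$, the three-mode system of \eeqref{eq:TMLEOM} can also be integrated directly. The sketch below uses the drive profiles $\tilde{g}_A=g_0\sin(\pi t/2T_p)$, $\tilde{g}_B=g_0\cos(\pi t/2T_p)$ and dimensionless parameter values, both of which are our assumptions (the appendix pulse shapes are not reproduced here):

```python
import math

def simulate_ap(g0=1.0, t_p=60.0, gamma_fib=0.002, fsr=40.0, steps=12000):
    """Plain RK4 integration of Eq. (eq:TMLEOM) for the amplitudes
    (c_A, c_{-1}, c_0, c_{+1}, c_B), starting from c_A = 1."""
    def rhs(t, y):
        c_a, c_m, c_0, c_p, c_b = y
        g_a = g0 * math.sin(math.pi * t / (2.0 * t_p))  # sending drive
        g_b = g0 * math.cos(math.pi * t / (2.0 * t_p))  # receiving drive (on first)
        return [
            -1j * g_a * (c_0 + c_m + c_p),
            -1j * g_a * c_a + 1j * g_b * c_b - (gamma_fib / 2 - 1j * fsr) * c_m,
            -1j * g_a * c_a - 1j * g_b * c_b - (gamma_fib / 2) * c_0,
            -1j * g_a * c_a + 1j * g_b * c_b - (gamma_fib / 2 + 1j * fsr) * c_p,
            -1j * g_b * (c_0 - c_m - c_p),
        ]

    y, dt, t = [1.0 + 0j, 0j, 0j, 0j, 0j], t_p / steps, 0.0
    for _ in range(steps):
        k1 = rhs(t, y)
        k2 = rhs(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
        k3 = rhs(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
        k4 = rhs(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
        y = [yi + dt / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += dt
    return abs(y[4]) ** 2  # F = |c_B|^2, Eq. (eq:Fidelity)
```

For small $\gamma_\textrm{fib}$ and $g_0\ll\textnormal{FSR}_\textrm{fib}$, the dark state of the resonant-mode subsystem carries the excitation from $A$ to $B$, with small residual losses through $c_0$ and $c_{\pm1}$.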
The presence of $c_{\pm 1}$ leads to effective dynamics of the modes $c_{0}$, $c_A$, and $c_B$ that include loss terms acting on the qubits in cavities $A$ and $B$ (\figref{fig:APscheme}b), resulting in \begin{align} F_\textnormal{fib} &=\exp\left(- \frac{\gamma_\textrm{fib} \pi}{2}\left[\frac{ 1}{{g_0^2 T}}+\frac{{g_0^2 T}}{ \textnormal{FSR}_\textrm{fib}^2} \right] \right), \label{eq:fidelity3ML} \end{align} where the maximal values of the coupling strengths $\tilde{g}_{A/B}(t)$ are chosen to be equal: $ g_0\equiv \textnormal{max}(\tilde{g}_A)=\textnormal{max}(\tilde{g}_B)$. The first summand in \eeqref{eq:fidelity3ML} yields the result as obtained from STIRAP and hence the naively truncated Hamiltonian (cf. \eeqref{eq:fidelitystirap} and \appref{app:atomiclimit}), resulting in a success probability of unity in the limit $g_0^2 T\gg \gamma_\textrm{fib}$. However, the second summand, which is due to the presence of the far detuned fiber modes $c_{\pm 1}$, compensates for this effect. More specifically, increasing $g_0^2 T$ also increases the effect of the effective decay terms on the qubit ground states. Due to this trade-off, there is an optimal value $g_0^2 T = \textnormal{FSR}_\textrm{fib}$ which balances the effects of non-adiabaticity (first summand) and the effects due to the coupling to the off-resonant fiber modes (second summand), leading to \begin{align} F_\textnormal{fib}^\textrm{opt}&=\exp\left(- \gamma_\textrm{fib} \pi/\textnormal{FSR}_\textrm{fib} \right)=\exp\left(- \gamma_\textrm{fib} L/c_\textrm{f} \right), \label{eq:fidelityMAX} \end{align} which coincides with the transmission probability $P_\textrm{fib}$ of a photon through a fiber of length $L$. This result also agrees with our numerical simulations in \secref{sec:fibernumerics}.
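The trade-off in \eeqref{eq:fidelity3ML} can be checked numerically; a sketch (names ours):

```python
import math

def f_fib(g2T, gamma_fib, fsr):
    """Three-mode estimate, Eq. (eq:fidelity3ML), as a function of g0^2 T:
    a non-adiabaticity term 1/(g0^2 T) competes with the off-resonant
    term g0^2 T / FSR_fib^2 in the exponent."""
    return math.exp(-0.5 * math.pi * gamma_fib * (1.0 / g2T + g2T / fsr ** 2))
```

The exponent has the $x+1/x$ form in $x=g_0^2T/\textnormal{FSR}_\textrm{fib}$ and is therefore minimized at $g_0^2T=\textnormal{FSR}_\textrm{fib}$, reproducing \eeqref{eq:fidelityMAX}.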
\section{Cavity Losses} \label{sec:cavitylosses} In this section, we show that in contrast to the restrictions due to fiber losses, the problem of cavity losses, which limits wave packet shaping, can be overcome to a significant extent in current experimental settings by using adiabatic passages. We derive an approximate analytical solution for the state transfer success probability that can be achieved by performing adiabatic state transfers (\secref{sec:cavitylossesanalytics}) and provide a numerical analysis for both methods (\secref{sec:cavitylossesnumerics}). \subsection{Approximate Analytical Treatment for Adiabatic State Transfers} \label{sec:cavitylossesanalytics} In the following, we extend the analytical example provided in \secref{sec:fiberanalytics}, which models a quantum state transfer by adiabatic passage for $\gamma_\textrm{cav}=0$, to cover the effect of both cavity and fiber losses. To this end, we adiabatically eliminate the cavity modes in \eeqref{eq:EOM} in the limit $\kappa\gg G_{A/B}$. As in the previous section, we only include the fiber mode resonant with the cavity, $c_0$, and the first pair of detuned fiber modes $c_{\pm 1}$. The resulting equations of motion describe the interaction between the qubits and the fiber modes with an effective coupling strength $\tilde{g}_{A/B}(t)= g_{A/B} (G_{A/B}(t)/\kappa)$ as shown in \figref{fig:APM}b and are given by \eeqref{eq:TMLEOM}, with the first and last equation modified to \begin{align} \dot{c}_{A}&=-i \tilde{g}_A(t) \left( c_{c_0}+c_{c_{-1}}+c_{c_{+1}}\right) -\frac{\tilde{\gamma}^{A}_\textrm{cav}(t)}{2} c_{A}, \label{eq:TMLEOMCAV} \\ \dot{c}_{B}&=-i \tilde{g}_B(t) \left( c_{c_0}-c_{c_{-1}} -c_{c_{+1}} \right)-\frac{\tilde{\gamma}^{B}_\textrm{cav}(t)}{2} c_{B}. \notag \end{align} In this description, cavity losses lead to an effective decay that acts on the qubits with rate $\tilde{\gamma}^{A/B}_\textrm{cav}(t)=\gamma_\textrm{cav} (G_{A/B}(t)/\kappa)^2$.
Using these equations of motion, and assuming classical driving fields of sine and cosine shape (see \secref{sec:fiberanalytics} and \appref{app:beyondSML}), the initially complex problem involving time-dependent couplings and decay rates can be cast into a simpler form. This simplified description allows us to derive an approximate solution for the success probability of state transfer by adiabatic passage. As detailed in \appref{app:beyondSML}, this solution takes the form \begin{align} F_\textnormal{1}&=\exp\left(- \frac{ \pi}{2}\left[\frac{\gamma_\textrm{fib}}{{g_0^2 T}}+\frac{{\gamma_\textrm{fib} g_0^2 T }}{ \textnormal{FSR}_\textrm{fib}^2} +\frac{\tilde{\gamma}_\textrm{cav} T}{4} \right] \right).\label{eq:PAPunopt} \end{align} By using the definitions of $\tilde{\gamma}_\textrm{cav}$ and $g_0$ given above along with \eeqref{eq:gAB}, we can optimize \eeqref{eq:PAPunopt} with respect to the pulse length $T$, resulting in \begin{align}\label{eq:PAP} F_\textnormal{AP}&\equiv F_1^\textnormal{opt}=\exp\left(- \frac{ \gamma_\textrm{fib} L}{c_\textrm{f}}\sqrt{1+ \frac{\pi^2 c_\textrm{f}}{2\gamma_\textrm{fib} L} \frac{\left(1-P_\textnormal{out}\right)}{P_\textnormal{out}} } \right). \end{align} We find that this analytical expression agrees well with the full numerical simulation of the achievable state transfer success probability presented in the following \secref{sec:cavitylossesnumerics} (see \figref{fig:cavitylosses} and \figref{fig:Fall1MMLvsFall2SML}). Note that in the case of vanishing cavity losses ($\gamma_\textrm{cav}=0$, i.e., $P_\textnormal{out} =1$), \eeqref{eq:PAP} reduces to \eeqref{eq:fidelityMAX}.
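\eeqref{eq:PAP} is straightforward to evaluate against the single-pass limit of \eeqref{eq:Ploss}; a sketch, where the function names and the fiber light speed $c_\textrm{f}=2\times10^8$\,m/s are our assumptions:

```python
import math

C_F = 2e8  # assumed speed of light in the fiber, m/s

def f_ap(gamma_fib, L, p_out):
    """Optimized adiabatic-passage success probability, Eq. (eq:PAP)."""
    x = gamma_fib * L / C_F  # bare fiber-loss exponent gamma_fib * L / c_f
    return math.exp(-x * math.sqrt(1.0 + (math.pi ** 2 / (2.0 * x))
                                   * (1.0 - p_out) / p_out))

def p1(gamma_fib, L, p_out):
    """Wave-packet-shaping limit P_1 = P_out^2 exp(-gamma_fib L / c_f), Eq. (eq:Ploss)."""
    return p_out ** 2 * math.exp(-gamma_fib * L / C_F)
```

For example, with $\gamma_\textrm{fib}/2\pi=1.5$\,kHz, $L=100$\,m and $P_\textnormal{out}\approx0.72$ one finds $F_\textnormal{AP}\approx0.91$ versus $P_1\approx0.52$, and $F_\textnormal{AP}$ reduces exactly to $\exp(-\gamma_\textrm{fib}L/c_\textrm{f})$ for $P_\textnormal{out}=1$.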
In \figref{fig:Lmax} we display the maximal length $L_\text{max}$ for which the state transfer success probability achievable by adiabatic passage $F_\text{AP}$, given by \eeqref{eq:PAP}, surpasses the state transfer success probability achievable by wave packet shaping $P_1$, given by \eeqref{eq:Ploss}, by more than $5$\%, i.e., $F_\text{AP}(L_\text{max})=P_1+0.05$, as a function of $P_\text{out}$. We plot the results for the maximal length $L_\text{max}$ for two different fiber attenuation coefficients, $0.2$\,dB/km and $3$\,dB/km. \begin{figure} \caption{Fiber length $L_\text{max}$ up to which adiabatic passage surpasses the limit $P_1$ by more than $5$\%, as a function of $P_\text{out}$, for two fiber attenuation coefficients.} \label{fig:Lmax} \end{figure} \subsection{Numerical Treatment} \label{sec:cavitylossesnumerics} In the ideal case in which fiber losses are absent, adiabatic passages allow one to bypass cavity losses completely~\cite{Pellizzari1997} and to achieve a state transfer success probability $F=1$ in the absence of other imperfections. In the following, we numerically study the achievable quantum state transfer success probability under more realistic conditions by gradually increasing the relative weight of fiber losses in the overall photon loss rate (atomic losses and other imperfections will be included in \secref{sec:robustness}). To this end, we take the full photonic mode structure into account and consider the following regimes: cavity loss dominated ($\gamma_\textrm{cav} \gg \gamma_\textrm{fib}$), equal losses ($\gamma_\textrm{fib} \sim \gamma_\textrm{cav}$) and fiber loss dominated ($\gamma_\textrm{fib} \gg \gamma_\textrm{cav}$). For comparison, we also include the extremal regime in which fiber losses are absent ($\gamma_\textrm{fib}=0$). We find that in {\it all} parameter regimes, improvements of the state transfer success probability $F$, as defined in \eeqref{eq:Fidelity}, with respect to the limit $P_1$ are possible for adiabatic passages in accordance with \eeqref{eq:PAP}.
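The defining condition $F_\text{AP}(L_\text{max})=P_1+0.05$ can be solved by bisection; a sketch, where the names and the fiber light speed $c_\textrm{f}=2\times10^8$\,m/s are our assumptions:

```python
import math

C_F = 2e8  # assumed fiber light speed, m/s

def f_ap(gamma_fib, L, p_out):
    # Eq. (eq:PAP)
    x = gamma_fib * L / C_F
    return math.exp(-x * math.sqrt(1.0 + (math.pi ** 2 / (2.0 * x))
                                   * (1.0 - p_out) / p_out))

def p1(gamma_fib, L, p_out):
    # Eq. (eq:Ploss)
    return p_out ** 2 * math.exp(-gamma_fib * L / C_F)

def l_max(gamma_fib, p_out, lo=1.0, hi=1e7):
    """Length at which adiabatic passage beats P_1 by exactly 5%:
    root of f_ap(L) - p1(L) - 0.05, bracketed between lo and hi (meters)."""
    g = lambda L: f_ap(gamma_fib, L, p_out) - p1(gamma_fib, L, p_out) - 0.05
    assert g(lo) > 0 > g(hi)  # sign change brackets the root
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

At short lengths $F_\text{AP}\rightarrow1$ while $P_1\rightarrow P_\textnormal{out}^2$, and at long lengths both vanish, so a crossing exists whenever $1-P_\textnormal{out}^2>0.05$.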
The success probability of state transfer by wave packet shaping, however, is limited by $P_1$ in all regimes, i.e., $F\lesssim P_1$. Below, we illustrate this result with numerical simulations for a mirror transmission of the inner mirrors (M2 and M3 in \figref{fig:mainresults}) of $|\mathfrak{t}|^2=13$\,ppm and a cavity length of $l=0.02$\,m as in Ref.~\cite{Stute2012}. In addition, we compare the numerical results to the analytical estimate $F_\textnormal{AP}$ for the state transfer success probability by adiabatic passage, as defined in \eeqref{eq:PAP} in \secref{sec:cavitylossesanalytics}. The cavity decay rate $\gamma_\textrm{cav}$ depends on the loss coefficient $|\ell|^2$ as defined in \eeqref{eq:kappagoodandbad}. The optimization for both methods is equivalent to the one described in \secref{sec:fibernumerics}. \\ \begin{figure} \caption{State transfer success probability $F$ as a function of the fiber length $L$ for different values of the photon loss rates in the cavity and the fiber, $\gamma_\textrm{cav}$ and $\gamma_\textrm{fib}$.} \label{fig:cavitylosses} \end{figure} \\ {\bf Cavity loss dominated regime} $(\gamma_\textrm{cav} \gg \gamma_\textrm{fib})$: For most current optical realizations (see \tabref{tab:exp}) the cavity loss dominated regime is the relevant one. In \figref{fig:cavitylosses}b we depict the state transfer success probability $F$ as a function of the fiber length $L$ for both state transfer methods. We consider here a cavity loss coefficient of $|\ell|^2=5$\,ppm (per round trip) resulting in a loss rate of $\gamma_\textrm{cav}/2\pi=6$\,kHz and a fiber attenuation coefficient of $0.2$\,dB/km, i.e., $\gamma_\textrm{fib}^{0.2\,\textrm{dB/km}}/2\pi=1.5$\,kHz. \figref{fig:cavitylosses}b shows how the success probability for a state transfer by adiabatic passage (filled circles) surpasses the limit $P_1$.
\\ \\ {\bf Equal loss regime} $(\gamma_\textrm{fib} \sim \gamma_\textrm{cav})$: In this regime fiber and cavity losses are approximately balanced. In \figref{fig:cavitylosses}c we depict the success probability $F$ as a function of the fiber length $L$ for both state transfer methods. To approximately balance both rates, we choose a cavity loss coefficient of $|\ell|^2=2$\,ppm, which corresponds to $\gamma_\textrm{cav}/2\pi=2.4$\,kHz, and a fiber attenuation coefficient of $0.2$\,dB/km, i.e., $\gamma_\textrm{fib}^{0.2\,\textrm{dB/km}}/2\pi=1.5$\,kHz. As shown in \figref{fig:cavitylosses}c, the adiabatic passage state transfer (filled circles) once again surpasses the limit $P_1$ (solid line). In the limit of long fiber lengths (not shown here), the state transfer success probability $F$ converges to the limit $P_1$.\\ \\ {\bf Fiber loss dominated regime} $(\gamma_\textrm{fib} \gg \gamma_\textrm{cav})$: In this regime fiber losses dominate over cavity losses. In \figref{fig:cavitylosses}d we depict the success probability $F$ as a function of the fiber length $L$ for both state transfer methods. Here we choose a cavity loss coefficient of $|\ell|^2=2$\,ppm, which corresponds to a loss rate of $\gamma_\textrm{cav}/2\pi=2.4$\,kHz, and a fiber attenuation coefficient of $3$\,dB/km with a corresponding loss rate of $\gamma_\textrm{fib}^{3\,\textrm{dB/km}}/2\pi=22$\,kHz. \figref{fig:cavitylosses}d shows that the success probability of state transfer by adiabatic passage (filled circles) continues to exceed the limit $P_1$ (solid line) for shorter distances, while in the limit of longer fibers, the success probability converges to $P_1$. This behavior can also be seen in \figref{fig:Lmax}, in which the maximal fiber length up to which adiabatic passage exceeds the limit $P_1$ by at least $5$\% is displayed.
\\ \\ Each of the plots in \figref{fig:cavitylosses} has been obtained for a specific parameter set ($\sabs{\mathfrak{t}}^2, \sabs{\ell}^2, l$). However, our numerical results show that the achievable quantum state transfer success probability for a given fiber length $L$ and fiber loss rate $\gamma_\textrm{fib}$ is solely determined by the outcoupling probability $P_\textnormal{out}=\sabs{\mathfrak{t}}^2/(\sabs{\mathfrak{t}}^2+ \sabs{\ell}^2)$, in accordance with the analytical solution given by \eeqref{eq:PAP}. For a given value of $\gamma_\textrm{fib}$, each of the plots in \figref{fig:cavitylosses} therefore corresponds to a whole class of parameter sets ($\sabs{\mathfrak{t}}^2, \sabs{\ell}^2, l$) that is characterized by the initial drop of the success probability to $P_1$ at length $L=0$, given by $P_\textnormal{out}^2$, cf. \eeqref{eq:Ploss}.\\ For illustration, \figref{fig:Fall1MMLvsFall2SML} displays the achievable state transfer success probability for a cavity length $l_1 = 2$\,cm and for mirror transmission and loss coefficients $\sabs{\mathfrak{t}}^2 =5$\,ppm and $\sabs{\ell}^2=2$\,ppm, resulting in $P_\textnormal{out}^2\approx51 \%$. The same plot is also obtained for a much shorter cavity of length $l_2=0.5$\,mm with equal transmission and loss coefficients $\sabs{\mathfrak{t}}^2 =5$\,ppm and $\sabs{\ell}^2=2$\,ppm (and therefore equal $P_\textnormal{out}^2\approx51 \%$). Even though the cavities of lengths $l_1$ and $l_2$ have very different cavity decay rates of $\kappa_{\textrm{cav},1} /2\pi \approx 6$\,kHz and $\kappa_{\textrm{cav},2}/2\pi \approx 240$\,kHz, and despite the very different single mode parameters $\mathfrak{n}_1= 0.17 \ll 1$ and $\mathfrak{n}_2= 6.7$ for the maximally considered fiber length of $L=1000$\,m, the numerical solution of the state transfer success probability for both cavities results in the same plot depicted in \figref{fig:Fall1MMLvsFall2SML}.
\\ \begin{figure} \caption{The state transfer success probability $F$ that can be achieved by adiabatic passages is shown for two different fiber attenuation coefficients: $0.2$\,dB/km (blue circles) and $3$\,dB/km (green squares). These numerical results are compared to $P_1$ given in \eeqref{eq:Ploss}.} \label{fig:Fall1MMLvsFall2SML} \end{figure} Summarizing, we find that the single mode parameter is not a relevant figure of merit. The relevant regime for performing quantum state transfer by adiabatic passage goes {\it beyond} the single mode limit and is given by the (more general) long photon limit $L\le L_\textnormal{ph}$, in which the length of the photon is at least on the order of the fiber length. In the ideal case of a lossless fiber, increasing the photon length (which is equivalent to a slower driving of the classical fields) leads to improved state transfer success probabilities. In the presence of fiber losses, however, a trade-off exists between preventing cavity losses by slowly driving the classical fields on the one hand and avoiding fiber losses through multiple reflections of the photon in the fiber on the other, as described by \eeqref{eq:PAPunopt}. \section{Robustness} \label{sec:robustness} In the following, we discuss the influence of atomic decay (\secref{sec:robustatom}) and other imperfections (\secref{sec:robustother}) on the achievable quantum state transfer success probability. \subsection{Atomic Decay} \label{sec:robustatom} The role of atomic decay depends on the specific setup under consideration, as the spontaneous decay rate $\Gamma$ varies for different atom and ion species.
The presence of atomic losses leads to a modification of the first and last equations of motion in \eeqref{eq:EOM}, \begin{align} i \dot{c}_{A}&= G_A(t) c_{a}- i(\tilde{\Gamma}_{A}(t)/2) c_{A}, \\ i\dot{c}_{B}&= G_B(t) c_{b}- i(\tilde{\Gamma}_{B}(t)/2) c_{B}, \notag \end{align} where the effective decay rate $\tilde{\Gamma}_{A/B}(t)=\Gamma (\Omega_{A/B}(t)/\Delta_\textnormal{at})^2$ results from the elimination of the excited state $\ket{E}$ of the atom in \figref{fig:APM}, with $\Gamma$ the spontaneous emission rate of the excited state. The probability that a photon is emitted from the cavity into the desired output mode is altered accordingly in the presence of atomic losses, \begin{align} \tilde{P}_\textnormal{out}=\frac{\Gamma_{A/B}^\textnormal{des}}{\Gamma_{A/B}^\textnormal{des}+\Gamma_{A/B}^\textnormal{und}}, \label{eq:Poutatom} \end{align} where $\Gamma_{A/B}^\textnormal{des}$ is the (desired) rate of photons leaving the cavity into the fiber \begin{align} \Gamma_{A/B}^\textnormal{des}=\left(\frac{G_{A/B}}{ \kappa} \right)^2 \kappa_\textrm{cav}, \notag \end{align} and the (undesired) rate of photon loss due to atomic decay or due to cavity losses is $\Gamma_{A/B}^\textnormal{und}=\tilde{\Gamma}_{A/B}+\gamma_\textrm{cav} (G_{A/B}/ \kappa)^2$. The probability $\tilde{P}_\textnormal{out}$ in \eeqref{eq:Poutatom} can be simplified to \begin{align} \tilde{P}_\textnormal{out}=\frac{\mathcal{C}}{1/4+\mathcal{C}} P_\textnormal{out}, \notag \end{align} where we define the cooperativity of the system as \begin{align} \mathcal{C}=\frac{\left(G_{A/B}(t)\right)^2}{(2 \kappa) \, \tilde{\Gamma}(t)}=\frac{\left(g^{A/B}_\textnormal{at-c}\right)^2}{(2 \kappa) \, \Gamma}.
\notag \end{align} Note that the optimal efficiency for transferring the quantum state stored in the emitter to a photon in the desired output channel (or vice versa) is given by $\mathcal{C}/(1/4+\mathcal{C})$, independent of the retrieval (storage) technique~\cite{Gorshkov2007}. Hence, the expected limit for transferring a photon when considering cavity, fiber and atomic losses leads to a modified version of \eeqref{eq:Ploss}, \begin{align} \tilde{P}_{1}= \left(\frac{\mathcal{C}}{1/4+\mathcal{C}}\right)^2 \cdot P_1. \label{eq:Plosstilde} \end{align} In \figref{fig:atomicdecay}, we show the state transfer success probability $F$, as defined in \eeqref{eq:Fidelity}, in the presence of atomic losses for both adiabatic passage and wave packet shaping. As a concrete example, we use here the parameters considered in \figref{fig:cavitylosses}c, i.e., $\sabs{\mathfrak{t}}^2=13$\,ppm, $l=0.02$\,m, $\sabs{\ell}^2=2$\,ppm and a fiber absorption coefficient of $0.2$\,dB/km. \figref{fig:atomicdecay}a displays the state transfer success probability as a function of the fiber length for a cooperativity of $\mathcal{C}= 27$. \begin{figure} \caption{Achievable state transfer success probabilities in the presence of atomic decay. (a) State transfer success probability $F$ as a function of the fiber length $L$ for a fixed atomic decay $\Gamma$. While wave packet shaping (WPS, crosses) is limited by $\tilde{P}_1$, adiabatic passages surpass this bound. (b) State transfer success probability as a function of the cooperativity $\mathcal{C}$, evaluated at $L=10$\,m.\label{fig:atomicdecay}} \end{figure} Experiments with single atoms reach cooperativities of, e.g., $\mathcal{C}=82$~\cite{Hood2000} and $\mathcal{C}= 11$~\cite{Hamsen2016}. For experiments with atomic ensembles in cavities, the cooperativity $\mathcal{C}_N=N \mathcal{C}$ is enhanced by using a large number of atoms $N$~\cite{Gorshkov2007}. Hence, high cooperativities can be reached, e.g., $\mathcal{C}_N= 73$~\cite{Colombe2007}. The results in this work apply also to this case.
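As a purely numerical illustration of \eeqref{eq:Plosstilde} (not part of the derivation), the following short Python sketch evaluates the atomic-loss factor $\left(\mathcal{C}/(1/4+\mathcal{C})\right)^2$ for the cooperativity values quoted in the text; for simplicity we set $P_1=1$, so the printed numbers are the reduction factors themselves.

```python
# Illustration of the cooperativity-limited bound, Eq. (eq:Plosstilde).
# P_1 = 1 is assumed for simplicity; cooperativities are the values cited
# in the text (C = 11, 27, 73, 82).

def efficiency(C):
    """Optimal emitter-to-photon transfer efficiency C / (1/4 + C)."""
    return C / (0.25 + C)

def p1_tilde(C, P1=1.0):
    """Modified transfer bound: one efficiency factor per cavity."""
    return efficiency(C) ** 2 * P1

for C in (11, 27, 73, 82):
    print(f"C = {C:3d}:  efficiency = {efficiency(C):.4f},  "
          f"P1_tilde/P1 = {p1_tilde(C):.4f}")
```

As expected, the factor approaches unity for large cooperativity, so atomic decay becomes negligible in the high-$\mathcal{C}$ regime.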
As shown in \figref{fig:atomicdecay}a, the state transfer by wave packet shaping is limited by $\tilde{P}_\textnormal{1}$, while adiabatic passages provide an advantage also in the presence of atomic losses (see \figref{fig:cavitylosses}c for the corresponding state transfer success probability for $\Gamma=0$). \figref{fig:atomicdecay}b shows how the state transfer success probability of both methods improves with increasing cooperativity. More specifically, this plot displays the achievable state transfer success probability for very short distances (evaluated at $L=10$\,m, which corresponds to the first point/cross in \figref{fig:atomicdecay}a). While state transfer by wave packet shaping is always limited by $\tilde{P}_\textnormal{1}$, adiabatic passages surpass this bound with a gain in success probability $F-\tilde{P}_\textnormal{1}$ that increases with increasing cooperativity $\mathcal{C}$. \subsection{Other Imperfections} \label{sec:robustother} In the following we address in- and outcoupling losses of the fiber and timing errors of the adiabatic passage.\\ \\ {\bf In-/outcoupling losses:} In- and outcoupling losses refer to imperfect coupling of the light field between the cavities and the fiber. These imperfections can be included in our model via the fiber loss rate $\gamma_\textrm{fib}$. For optical cavities, optimized efficiencies for coupling in or out of a single-mode fiber can exceed $90\%$~\cite{Steiner2014}. For fiber-integrated cavity systems, there is an additional multiplicative factor due to imperfect overlap between the fiber and the cavity modes. This mode overlap may be as high as $90\%$~\cite{Steiner2014} but drops off for longer cavities~\cite{Hunger2010}. \\ \\ {\bf Timing errors:} State transfer by adiabatic passage depends on optimizing the temporal separation of the two pulses.
In \figref{fig:robustness} we depict as an example the success probability of a state transfer by adiabatic passage as a function of the relative temporal separation $x_\textnormal{spl}$ of the two Gaussian pulses (see \secref{sec:AP}). Here we use the same parameters as considered for \figref{fig:cavitylosses}, i.e., a mirror transmission $|\mathfrak{t}|^2=13$\,ppm and cavity length $l=0.02$\,m. Additionally, we choose a fixed fiber length of $L=400$\,m and optimize the pulse width $T$ and the atom-cavity coupling $G_{A/B}$ accordingly. \begin{figure} \caption{Robustness with respect to timing mismatch in adiabatic passage. The state transfer success probability $F$ is shown as a function of the temporal separation $x_\textnormal{spl}$ of the two pulses.\label{fig:robustness}} \end{figure} We vary the mirror losses $|\ell|^2$ from $0$\,ppm (black, solid line) to $10$\,ppm (green, blue) for the two different fiber attenuations, $0.2$\,dB/km (circles) and $3$\,dB/km (triangles). The resulting moderate increase of the timing sensitivity with increasing losses is shown in \figref{fig:robustness}. \\ \section{Conclusions and Outlook} \label{sec:outlook} We have investigated deterministic quantum state transfer between remote qubits in cavities by studying the standard method of wave packet shaping and the use of adiabatic passages. We have provided an analysis for both methods beyond the single mode limit, taking the full photonic mode structure into account. This analysis has allowed us to assess the potential and limitations of these approaches for future developments of quantum networks. We have also discussed the role of the relevant cavity parameters in view of experimental-design decisions. In particular, we have clarified that fiber transmission losses cannot be overcome using either of the two methods, and we have shown that cavity losses can be mitigated in a far greater parameter regime than previously known.
We note that the model considered is very general and not limited to optical setups and that the results can be used to evaluate the achievable performance of other experimental platforms such as superconducting qubits in microwave resonators~\cite{Wenner2014,Pfaff2016} or solid-state systems such as color centers or quantum dots coupled to photonic crystal nanocavities~\cite{Riedrich2014,Sipahigil2016,Yoshie2004,Hennessy2007,Englund2007}. Our results apply to quantum networks on both short and long length scales. Relating to the latter, we have shown for links up to $1000$\,m that adiabatic passages can substantially improve the state transfer success probability for current experimental setups by overcoming limitations due to photon losses in the cavities. Regarding photon losses during the transmission, future networking implementations can in addition resort to quantum error correction techniques that rely on the transmission of multi-photon states and allow for the deterministic detection and correction of loss errors~\cite{Grassl1997,Ofek2016,Michael2016,Vermersch2016,Xiang2017}. In future work, it would be interesting to study the application of adiabatic passages to enable time-continuous protocols~\cite{ContTeleportation,Hofer2013,Vollbrecht2011,Muschik2011}. Moreover, it will be useful to compare the potential and application range of adiabatic passages to other deterministic quantum state transfer methods such as dissipative schemes~\cite{Kraus2004}, approaches to mitigate cavity losses using quantum error correcting codes~\cite{Grassl1997,Ofek2016} and combinations of deterministic and heralded state transfer techniques. \section*{Acknowledgments} We thank F. Reiter, K. M{\o}lmer, G. Kirchmair and P.-O. Guimond for helpful discussions. Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-15-2-0060. 
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. This work was also supported by the European Research Council (ERC) Synergy Grant UQUAM, and by the Austrian Science Fund through SFB FOQUS (FWF Project No. F4016-N23). T. E. Northup acknowledges support from the Austrian Science Fund (FWF) Projects No. F4019-N23 and No. V252. B. P. Lanyon acknowledges the Austrian Science Fund (FWF) through a START grant under Project No. Y849-N20. \appendix \section{Quantum State Transfer by Adiabatic Passage Including Multiple Fiber Modes} \label{app:beyondSML} In the following, we derive the success probability $F_\text{AP}$ of state transfer by adiabatic passage in \eqsref{eq:FAPsecII} and \eqref{eq:PAP} in the presence of cavity and fiber losses by means of an analytical model. The analytical example in \secref{sec:fiberanalytics} corresponds to the special case of vanishing cavity losses, $\gamma_\textrm{cav}=0$. By adiabatically eliminating the cavity modes in the equations of motion for the full system given by \eqsref{eq:EOM} under the condition $\kappa \gg G_{A/B}$, we obtain an effective atom-fiber-atom model (cf. \figref{fig:APscheme}a for the case $\gamma_\textrm{cav}=0$). In this description, the qubits in cavities A and B are coupled to the fiber modes with an effective time-dependent coupling strength $\tilde{g}_{A/B}(t)=g_{A/B}(G_{A/B}/\kappa)$ and are subject to an effective time-dependent decay that acts with a rate $\tilde{\gamma}_\textrm{cav}^{A/B}=\gamma_\textrm{cav} (G_{A/B} /\kappa)^2$.
The equations of motion for the two qubits, the resonant fiber mode $c_0$ and the first pair of detuned fiber modes $c_{\pm 1}$ are given by \begin{align} \dot{c}_{A}&=-i \tilde{g}_A(t) \left( c_{c_0}+c_{c_{-1}}+c_{c_{+1}}\right) -\frac{\tilde{\gamma}^{A}_\textrm{cav}(t)}{2} c_{A},\notag\\ \dot{c}_{c_{-1}}&=-i \tilde{g}_A(t) c_{A}+i \tilde{g}_B(t) c_{B}-\left(\gamma_\textrm{fib}/2 -i \, \textnormal{FSR}_\textrm{fib} \right) c_{c_{-1}},\notag \\ \dot{c}_{c_0}&=-i \tilde{g}_A(t) c_{A}-i \tilde{g}_B(t) c_{B}-(\gamma_\textrm{fib}/2) c_{c_0},\notag \\ \dot{c}_{c_{+1}}&=-i \tilde{g}_A(t) c_{A}+i \tilde{g}_B(t) c_{B}-\left(\gamma_\textrm{fib}/2 +i \, \textnormal{FSR}_\textrm{fib} \right) c_{c_{+1}},\notag \\ \dot{c}_{B}&=-i \tilde{g}_B(t) \left( c_{c_0}-c_{c_{-1}} -c_{c_{+1}} \right)-\frac{\tilde{\gamma}^{B}_\textrm{cav}(t)}{2} c_{B}. \notag \end{align} We proceed by adiabatically eliminating the dynamics of the off-resonant fiber modes $c_{\pm 1}$, which evolve on a much faster time scale than the other modes for $\gamma_\textrm{fib}^2/4+\delta^2\gg \tilde{g}_{A/B}^2$. Here, we introduce the abbreviations $\delta=\textnormal{FSR}_\textrm{fib}$, $\gamma_{g_{AB}}=\frac{2 \gamma_\textrm{fib} \tilde{g}_{A} \tilde{g}_{B}}{\gamma_\textrm{fib}^2/4+\delta^2}$ and $\gamma_{g_{A/B}}=\frac{2 \gamma_\textrm{fib} \tilde{g}_{A/B}^2 }{\gamma_\textrm{fib}^2/4+\delta^2}$.
The resulting effective three mode system (atom-fiber-atom) is depicted in \figref{fig:APscheme}b and described by \begin{align} \left(\begin{array}{c} i \dot{c_A} \\ i\dot{c_0} \\ i \dot{c_B} \end{array} \right) &= \left(\begin{array}{c c c} -i \frac{\gamma_{g_A} +\tilde{\gamma}_\textrm{cav}^{A}}{2} & \tilde{g}_A & i \ \gamma_{g_{AB}} \\ \tilde{g}_A & - i \frac{\gamma_\textrm{fib}}{2}& \tilde{g}_B \\ i \gamma_{g_{AB}} & \tilde{g}_B & -i \frac{\gamma_{g_B}+\tilde{\gamma}_\textrm{cav}^{B}}{2} \end{array}\right) \left( \begin{array}{c} c_A \\ c_0 \\ c_B \end{array} \right). \label{eq:analyticmodel3M} \end{align} Note that the couplings to the fiber modes $c_+$ and $c_-$ lead to Stark shifts ($\sim \frac{i \delta \tilde{g}_{A/B}^2}{\gamma_\textrm{fib}^2/4+\delta^2}$) on the ground states with equal magnitude and opposite sign, which cancel. The same is true for the coherent couplings between the ground states that result from the coupling to these fiber modes ($\sim \frac{i \delta \tilde{g}_{A} \tilde{g}_B}{\gamma_\textrm{fib}^2/4+\delta^2}$). Furthermore, the off-diagonal corner entries of the matrix in \eeqref{eq:analyticmodel3M} have different signs with respect to the diagonal corner entries due to the alternating sign of the coupling to cavity $B$ as discussed in the main text, cf. \eeqref{eq:Hcavfib}. The Hamiltonian represented by the matrix in \eeqref{eq:analyticmodel3M} can be split into two parts: a coherent part corresponding to the Hamiltonian $H_\textrm{coh}=\hbar\left(\tilde{g}_A a^\dag+\tilde{g}_B b^\dag\right) c_0 + \textrm{h.c.}$, which involves only the two qubits and the resonant fiber mode $c_0$ (as discussed in detail in \appref{app:atomiclimit}), and a dissipative part, which will be labelled $H_\textrm{diss}$. With this, \eeqref{eq:analyticmodel3M} becomes $i \hbar \,\dot{c} = [H_\textrm{coh}+H_\textrm{diss}] c$, where $c$ denotes the vector $c=(c_A,c_0,c_B)$.
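In the lossless limit ($\gamma_\textrm{fib}=\gamma_\textrm{cav}=0$, hence also $\gamma_{g_{AB}}=\gamma_{g_{A/B}}=0$), the matrix in \eeqref{eq:analyticmodel3M} reduces to the coherent three-mode Hamiltonian, and the dark-state passage can be checked by direct numerical integration. A minimal Python sketch follows; the values of $g_0$ and $T$ are illustrative assumptions chosen only to satisfy the adiabatic condition $g_0 T \gg 1$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lossless limit of Eq. (eq:analyticmodel3M): i d/dt (c_A, c_0, c_B)^T = H(t) c,
# with couplings g_A = g0*sin(t/T), g_B = g0*cos(t/T) on t in [0, pi*T/2].
g0, T = 50.0, 1.0  # illustrative values, g0*T >> 1

def rhs(t, c):
    gA, gB = g0 * np.sin(t / T), g0 * np.cos(t / T)
    H = np.array([[0, gA, 0],
                  [gA, 0, gB],
                  [0, gB, 0]], dtype=complex)
    return -1j * (H @ c)

c_init = np.array([1, 0, 0], dtype=complex)        # excitation starts in cavity A
sol = solve_ivp(rhs, (0, np.pi * T / 2), c_init, rtol=1e-10, atol=1e-12)
p_transfer = abs(sol.y[2, -1]) ** 2                 # population of cavity B at t_fin
print(f"transfer probability = {p_transfer:.6f}")   # close to 1 in the adiabatic limit
```

At $t=0$ the dark state coincides with $c_A$ and at $t=\pi T/2$ with $c_B$ (up to a sign), so a final population near unity confirms the adiabatic following described above.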
In the absence of losses (i.e., $\gamma_\textrm{fib}=\gamma_\textrm{cav}=0$), the dissipative part $H_\textrm{diss}$ vanishes and the adiabatic instantaneous eigenstates are given by the ones of $H_\textrm{coh}$, i.e., $|\pm\rangle$ and $|D\rangle$ as defined in \eqsref{eq:SMLdark}-\eqref{eq:SMLbright2}. We transform the Hamiltonian into the basis of the adiabatic states $|\pm\rangle, |D\rangle$ with amplitudes $a_\pm$ and $a_D$ (see \cite{Vitanov1997}) and proceed by choosing the coupling functions $\tilde{g}_A=g_0 \sin(t/T)$ and $\tilde{g}_B=g_0 \cos(t/T)$. This choice renders the resulting Hamiltonian in the adiabatic representation partially time-independent and thus greatly simplifies the problem. More specifically, the Schr{\"o}dinger equation in the adiabatic representation is given by \begin{align} i \left(\begin{array}{c} \dot{a}_+ \\ \dot{a}_D \\ \dot{a}_- \end{array} \right) &= \left[H_\textrm{adia}+H_\textrm{corr}\right] \left( \begin{array}{c} a_+ \\ a_D \\ a_- \end{array} \right), \label{eq:adiabaticSE} \end{align} where the adiabatic Hamiltonian $H_\textrm{adia}$ describes the dynamics of the resonant fiber mode and is given by \begin{align} H_\textrm{adia} &= \left(\begin{array}{c c c} g_0- \frac{i (\gamma_\textrm{fib}+\tilde{\gamma}_\textrm{cav})}{4} & \frac{i}{\sqrt{2} T} & \frac{i \gamma_\textrm{fib} }{4} \\ \frac{-i}{\sqrt{2} T} & 0 & \frac{-i}{\sqrt{2} T} \\ \frac{i \gamma_\textrm{fib} }{4} & \frac{i}{\sqrt{2} T} & -g_0- \frac{i (\gamma_\textrm{fib}+\tilde{\gamma}_\textrm{cav})}{4} \end{array}\right), \notag \end{align} where $g_0=\sqrt{\tilde{g}_A^2+\tilde{g}_B^2}$ is the maximal coupling and $\tilde{\gamma}_\textrm{cav}=\tilde{\gamma}_\textrm{cav}^A=\tilde{\gamma}_\textrm{cav}^B$. Note that we set the effective cavity decay $\tilde{\gamma}_\textrm{cav}$ time-independent and equal for both cavities.
With the chosen coupling functions, the Hamiltonian $H_\textrm{adia}$ is time-independent. In contrast, the second Hamiltonian $H_\textrm{corr}$ in \eeqref{eq:adiabaticSE} is time-dependent and arises from the adiabatic elimination of the two off-resonant fiber modes $c_{\pm 1}$, representing the effect of the detuned fiber modes on the success probability of state transfer. The Hamiltonian $H_\textrm{corr}$ is given by \begin{equation} H_\textrm{corr}=\frac{-i \gamma_\textrm{fib} g_0^2}{\frac{\gamma_\textrm{fib}^2}{2}+2\delta^2} \left(\begin{array}{c c c} \cos(\frac{2t}{T})^2 & -\frac{\sin(\frac{4t}{T})}{\sqrt{2}} & \cos(\frac{2t}{T})^2 \\ -\frac{\sin(\frac{4t}{T})}{\sqrt{2}} & 2 \sin(\frac{2t}{T})^2 & -\frac{\sin(\frac{4t}{T})}{\sqrt{2} }\\ \cos(\frac{2t}{T})^2 & -\frac{\sin(\frac{4t}{T})}{\sqrt{2}} & \cos(\frac{2t}{T})^2 \end{array}\right). \notag \end{equation} In the limit $\gamma_\textrm{fib}^2/4+\delta^2\gg g_0^2$ and $g_0\gg\gamma_\textrm{fib},\tilde{\gamma}_\textrm{cav}$, the dynamics of the bright states can be eliminated, resulting in a slow effective decay of the dark state. Assuming $\delta \gg\gamma_\textrm{fib},\tilde{\gamma}_\textrm{cav}$, we obtain \begin{align} a_D(t)=\exp \left( -\frac{\gamma_\textrm{fib} t}{2 g_0^2 T^2}-\frac{\gamma_\textrm{fib} g_0^2 t}{ 2 \delta^2}-\frac{\tilde{\gamma}_\textrm{cav} t}{8} \right). \label{eq:a0oft} \end{align} Given the chosen coupling functions, we evaluate \eeqref{eq:a0oft} at the final time $t=\pi \,T/2$, so the final population of the dark state $a_D$ is given by $|a_D(\pi \,T/2)|^2$. The state transfer success probability for adiabatic passage, $F_1=|a_D(\pi \, T/2)|^2$, in the presence of cavity and fiber losses yields \eeqref{eq:PAPunoptsecII} and \eeqref{eq:PAPunopt}.
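Equation \eeqref{eq:a0oft} makes the trade-off between non-adiabatic losses ($\propto 1/T$) and fiber losses ($\propto T$) explicit. The short Python sketch below locates the optimal pulse length numerically and compares it with the stationarity condition $T^*=\delta/g_0^2$ obtained by differentiating the exponent for $\tilde{\gamma}_\textrm{cav}=0$; all parameter values are illustrative assumptions in arbitrary units.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Trade-off in Eq. (eq:a0oft): non-adiabatic losses ~ 1/T vs. fiber losses ~ T.
# Illustrative parameters; gamma_cav_tilde = 0 for simplicity.
gamma_fib, g0, delta = 0.01, 1.0, 100.0   # delta = FSR_fib

def success(T):
    """|a_D(pi*T/2)|^2 from Eq. (eq:a0oft) with gamma_cav_tilde = 0."""
    t_fin = np.pi * T / 2
    exponent = (gamma_fib * t_fin / (2 * g0**2 * T**2)
                + gamma_fib * g0**2 * t_fin / (2 * delta**2))
    return np.exp(-2 * exponent)

res = minimize_scalar(lambda T: -success(T), bounds=(1e-2, 1e4), method="bounded")
T_opt_analytic = delta / g0**2            # stationary point of the exponent
print(f"numerical optimum T = {res.x:.2f}, analytic T* = {T_opt_analytic:.2f}")
```

Balancing the two exponent terms $\frac{\gamma_\textrm{fib}\pi}{4 g_0^2 T}$ and $\frac{\gamma_\textrm{fib} g_0^2 \pi T}{4\delta^2}$ gives $T^*=\delta/g_0^2$, which the numerical optimum reproduces.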
\section{Quantum State Transfer by Adiabatic Passage for a Single Excited State} \label{app:atomiclimit} Here, we review the derivation of the success probability of an adiabatic state transfer via a decaying state in general. Subsequently, we map the derivations to a coupled atom-fiber-atom system as assumed in Refs.~\cite{Serafini2006, Yin2007,Chen2007,Ye2008,Lu2008,Zhou2009,Clader2014,Chen2015,Hua2015,Huang2016}. We consider a system consisting of two ground states $\ket{A}$ and $\ket{B}$ and an excited state $\ket{C}$. The states $\ket{A}$ and $\ket{C}$ are coupled with a time-dependent coupling strength $\mathcal{G}_A(t)$, and the states $\ket{B}$ and $\ket{C}$ are coupled with strength $\mathcal{G}_B(t)$. The state of the whole system in the standard basis is given by \begin{align} |\Psi\rangle=c_A \ket{A} +c_B \ket{B}+c_C \ket{C}, \notag \end{align} with probability amplitudes $c_A$, $c_B$ and $c_C$. In the adiabatic basis the state vector of the system is given by \begin{align} |\Psi\rangle_\textnormal{adia}=a_D \ket{D} +a_+ \ket{+}+a_- \ket{-}, \notag \end{align} with probability amplitudes $a_D$, $a_-$ and $a_+$. The adiabatic states are connected to the standard basis via \begin{align} |D\rangle&= \frac{\mathcal{G}_B}{\sqrt{\mathcal{G}_A^2+\mathcal{G}_B^2}} \ket{A}-\frac{\mathcal{G}_A}{\sqrt{\mathcal{G}_A^2+\mathcal{G}_B^2}} \ket{B}, \label{eq:SMLdark}\\ |\pm\rangle&=\frac{1}{\sqrt{2}} \left(\frac{\mathcal{G}_A \ket{A}+\mathcal{G}_B \ket{B}}{\sqrt{\mathcal{G}_A^2+\mathcal{G}_B^2}} \pm \ket{C}\right). \label{eq:SMLbright2} \end{align} $|D\rangle$ is a dark state with respect to the coupling $\mathcal{G}_{A/B}$, while $|\pm\rangle$ are so-called bright states. We are interested in performing an adiabatic state transfer from the ground state $\ket{A}$ to the ground state $\ket{B}$ using the dark state $\ket{D}$.
The success probability $F$ of this state transfer is given by the final population of the dark state at $t=t_\textnormal{fin}$, i.e., $F=\sabs{a_D(t_\textnormal{fin})}^2$. In the following, we introduce a decay of the excited state $\ket{C}$ at rate $\Gamma_C$. This decay is included in the derivation as described in \secref{sec:modeldissipation} such that the Schr{\"o}dinger equation for the probability amplitudes reads \begin{align} i \left(\begin{array}{c} \dot{c_A} \\ \dot{c_C} \\ \dot{c_B} \end{array} \right) &= \left(\begin{array}{c c c} 0 & \mathcal{G}_A (t) &0 \\ \mathcal{G}_A(t) & - i \Gamma_C/2 & \mathcal{G}_B(t) \\ 0 & \mathcal{G}_B(t) & 0 \end{array}\right) \left( \begin{array}{c} c_A \\ c_C \\ c_B \end{array} \right). \label{eq:SMLEOM} \end{align} In order to perform a state transfer by using the dark state $\ket{D}$, the coupling strengths $\mathcal{G}_A(t)$ and $\mathcal{G}_B(t)$ have to fulfil the conditions in \eeqref{eq:adiabaticcondition}. In particular, we choose the coupling strengths to be \begin{align} \mathcal{G}_A(t)= \mathcal{G} \sin(t/T), \qquad \mathcal{G}_B(t)= \mathcal{G} \cos(t/T), \label{eq:timedep} \end{align} for times $t\in{[0,T\pi/2]}$ with temporal length $T$ of the coupling and amplitude $\mathcal{G}$. We transform \eeqref{eq:SMLEOM} into the adiabatic basis, yielding the evolution of the amplitudes $a_D$, $a_-$ and $a_+$ (see Ref.~\cite{Vitanov1997}). In the limit $\mathcal{G} \gg \Gamma_C$, the evolution of the amplitudes of the bright states $a_{\pm}$ is much faster than the evolution of the dark state $a_D$. We therefore first solve for the amplitudes of the bright states $a_{\pm}$ and subsequently derive the amplitude of the dark state $a_D$. 
The success probability $F$ of state transfer is given by the population of the dark state \begin{align} F=\sabs{a_D(t_\text{fin})}^2=\exp\left(-\frac{ \Gamma_C}{{\mathcal{G}^2 T}} \frac{\pi}{2}\right) \label{eq:fidelitysml} \end{align} at time $t_\text{fin}=T\pi/2$, the final time of the coupling sequence. In the adiabatic limit $\mathcal{G}^2 T \gg \Gamma_C$, the success probability in \eeqref{eq:fidelitysml} reaches unity and a perfect state transfer can be achieved. Note that this derivation can directly be mapped to an adiabatic state transfer in atoms (STIRAP)~\cite{RMPAP2016}. \\ In Refs.~\cite{Serafini2006, Yin2007,Chen2007,Ye2008,Lu2008,Zhou2009,Clader2014,Chen2015,Hua2015,Huang2016}, the Hamiltonian in \eeqref{eq:HAPM} has been truncated, which results in a description in which all fiber modes except the resonant mode $c_0$ are neglected. This case can be mapped to the situation described above, where the ground states $\ket{A}$ and $\ket{B}$ represent the qubit states in cavities $A$ and $B$. The excited state $\ket{C}$ corresponds to the state of the fiber mode $c_0$ with associated decay rate $\Gamma_C = \gamma_\textrm{fib}$. The time-dependent coupling strengths $\mathcal{G}_{A/B}(t)$ translate into the effective atom-to-fiber coupling strengths $\tilde{g}_{A/B}(t)$ as depicted in \figref{fig:APM}b and defined in \secref{sec:fiberanalytics}. This mapping, using the truncated Hamiltonian, in which only a single fiber mode (sfm) is considered, results in the success probability \begin{align} F_{\text{sfm}}=\exp\left(-\frac{ \gamma_\textrm{fib}}{{g_0^2 T}} \frac{\pi}{2}\right), \label{eq:fidelitynth} \end{align} where $g_0$ is the maximal atom-to-fiber coupling strength. By choosing a large pulse area $g_0^2 T \gg \gamma_\textrm{fib}$, the success probability in \eeqref{eq:fidelitynth} reaches unity. In this limit, it seems that a perfect state transfer via a decaying fiber is possible.
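The approximate result \eeqref{eq:fidelitysml} can be checked against a direct numerical integration of \eeqref{eq:SMLEOM} with the pulse shapes \eeqref{eq:timedep}. The following Python sketch does this for illustrative values satisfying $\mathcal{G}\gg\Gamma_C$; the chosen numbers are assumptions for the demonstration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Direct integration of Eq. (eq:SMLEOM) with couplings from Eq. (eq:timedep),
# compared with the adiabatic-limit prediction F = exp(-pi*Gamma_C/(2*G^2*T)).
G, T, Gamma_C = 30.0, 1.0, 1.0   # illustrative values with G >> Gamma_C

def rhs(t, c):
    GA, GB = G * np.sin(t / T), G * np.cos(t / T)
    H = np.array([[0, GA, 0],
                  [GA, -0.5j * Gamma_C, GB],
                  [0, GB, 0]], dtype=complex)
    return -1j * (H @ c)

c_init = np.array([1, 0, 0], dtype=complex)           # start in ground state |A>
sol = solve_ivp(rhs, (0, np.pi * T / 2), c_init, rtol=1e-10, atol=1e-12)
F_num = abs(sol.y[2, -1]) ** 2                        # final population of |B>
F_analytic = np.exp(-np.pi * Gamma_C / (2 * G**2 * T))
print(f"F_num = {F_num:.5f}, F_analytic = {F_analytic:.5f}")
```

The two values agree up to higher-order non-adiabatic corrections, which supports the adiabatic elimination used in the derivation of \eeqref{eq:fidelitysml}.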
However, as we show in \secref{sec:fiberlosses}, the naive truncation of the Hamiltonian as done in Refs.~\cite{Serafini2006, Yin2007,Chen2007,Ye2008,Lu2008,Zhou2009,Clader2014,Chen2015,Hua2015,Huang2016} is not valid, cf. \appref{app:beyondSML}. \section{Purely Photonic Description} \label{sec:purelyphotonic} The derivation of the analytical example in \secref{sec:fiberanalytics} can also be considered in a purely photonic context. Here, we consider the same setup as in \figref{fig:APM} but in a regime in which we first map the atomic state onto the cavity field by a fast swapping laser pulse. Subsequently, we consider the purely photonic state transfer from cavity $A$ to cavity $B$ via fiber $C$. This cavity-fiber-cavity system can be described by the second line of the Hamiltonian in \eeqref{eq:HAPM}, given by \begin{align} H_\textnormal{cfc}=& \hbar \, \sum_n \,n \, \textnormal{FSR}_\textrm{fib} \ c_n^{\dag}c_n \label{eq:HPP} \\ &+\hbar \, \sum_{n}\Big[g_{A}(t)\, a^{\dag}+ g_{B}(t) (-1)^n \, b^{\dag}\Big]c_n+\textrm{h.c.} \, . \notag \end{align} In contrast to the Hamiltonian in \eeqref{eq:HAPM}, here we have introduced time-variable cavity-fiber couplings $g_{A/B}(t)$ with a time dependence in analogy to \eeqref{eq:APpulsesequence} and \figref{fig:mainresults}c. In order to map the arguments from the atom-fiber-atom system described in \secref{sec:fiberanalytics} onto the purely photonic model, we replace the effective atom-fiber coupling strengths $\tilde{g}_{A/B}(t)$ by the (now time-dependent) cavity-fiber coupling strengths $g_{A/B}(t)$ as defined in \eeqref{eq:gAB}. With this, we recover the equations of motion as in \eeqref{eq:TMLEOM} for the purely photonic model and hence also the limited success probability as given by \eeqref{eq:fidelityMAX}.
The variation of the cavity-fiber coupling $g_{A/B}(t)$ in a time-dependent coupling sequence (see \secref{sec:AP}) is not straightforward for optical cavities but has been realized for superconducting resonators~\cite{Yin2013} and photonic crystal nanocavities~\cite{Sato2012}. \section{Description of the Coupled Cavity-Fiber-Cavity System} \label{app:methodCFC} In this appendix, we discuss the choice of basis states for the electric field modes in the coupled cavity-fiber-cavity setting. In the main text, we use independent field modes for the two outer cavities $a$, $b$ and for the fiber $c_n$ that are linearly coupled as described by \eeqref{eq:Hcavfib}. In the following, we relate this approach (which is generally valid in the case of high-finesse cavities and for time scales that are long compared to the round-trip time $2\tau$ of a photon) to an alternative description that is based on the derivation of the electromagnetic field eigenmodes in second quantization for the whole cavity-fiber-cavity system (see Ref.~\cite{vanEnk1999}).\\ The eigenmodes $\bar{c}_n$ of the complete optical system consisting of two perfectly reflecting outer mirrors M1 and M4 and two identical partially transmitting mirrors M2 and M3 (see \figref{fig:mainresults}a) can be calculated as shown in Ref.~\cite{Ley1987} for the mirrors M2 and M3, together with an additional boundary condition at the positions of the outer mirrors M1 and M4.
The corresponding eigenenergies $\bar{\omega}_n$ of the whole system can be inferred by solving Eq.~(2) in Ref.~\cite{vanEnk1999}, such that the Hamiltonian in \eeqref{eq:HAPM} can be expressed as \begin{align} H_\textnormal{hyb}= &\hbar \, \sum_n \, \bar{\omega}_n \ \bar{c}_n^{\dag}\bar{c}_n +\hbar \, \sum_{n}\Big[G_A(t) \, \sigma_A^+ \sqrt{CC_n} \, \bar{c}_n+\textrm{h.c.} \Big] \label{eq:Hhyb} \\ &+ \hbar \, \sum_{n}\Big[G_B(t) (-1)^n \, \sigma_B^+ \sqrt{CC_n} \, \bar{c}_n+\textrm{h.c.} \Big],\notag \end{align} where the coupling between atom and field modes is weighted with the cavity content $CC_n$. The cavity content $CC_n$ quantifies the fraction of the population in mode $n$ that populates the cavities, as defined in Eq.~(5) in Ref.~\cite{vanEnk1999} (the fiber content of mode $n$ is given by $FC_n=1-CC_n$). The losses for the hybrid cavity-fiber-cavity modes $\bar{c}_n$ are modeled as a weighted combination of the loss processes discussed in \secref{sec:modeldissipation}, such that each hybrid mode $\bar{c}_n$ decays with \begin{align} \bar{\gamma}_n=CC_n \, \gamma_\textrm{cav} + FC_n \, \gamma_\textrm{fib}. \end{align} By calculating the eigenenergies $\bar{\omega}_n$ and cavity contents $CC_n$ using the methods mentioned above and deriving the equations of motion as described in \secref{sec:modelEOM}, the numerical results shown in the main text can be (and have been) reproduced using \eeqref{eq:Hhyb}. The eigenenergies and cavity content for the parameter set used in \figref{fig:cavitylosses} are shown in \figref{fig:phphbasis}. We illustrate the basis transformation that relates the modes $a$, $b$ and $c_n$ used in the main text and the hybrid modes $\bar{c}_n$ for the truncated mode set (involving only three fiber modes) that is used in the analytical example discussed in \secref{sec:fiberanalytics} and \secref{sec:cavitylossesanalytics}.
By diagonalizing the cavity and fiber Hamiltonians in \eeqref{eq:Hfib} and in \eeqref{eq:Hcavfib} for three fiber modes, we obtain the eigenfrequencies $\bar{\omega}_n$ of the hybrid cavity-fiber-cavity modes as \begin{align} &\bar{\omega}_0=0, \qquad \bar{\omega}_{\pm 1}=\pm \sqrt{2} g_{A/B}, \label{eq:eigenenergies}\\ & \qquad \bar{\omega}_{\pm 2}=\pm \sqrt{4 g^2_{A/B}+ \textnormal{FSR}_\textrm{fib}^2}, \notag \end{align} where $g_{A/B}$ is defined in \eeqref{eq:gAB}. \figref{fig:phphbasis} shows that even for the truncated mode set, these values (grey lines) are very close to the data points (red crosses and blue dots) that are obtained by numerically solving Eq.~(2) in Ref.~\cite{vanEnk1999} or by solving the problem using transfer matrices. The height of each data point indicates the cavity content, i.e., the fraction of that mode that is populating the cavities. The grey lines indicate the positions of the eigenenergies as derived in \eeqref{eq:eigenenergies}.
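The eigenfrequencies in \eeqref{eq:eigenenergies} can be reproduced by diagonalizing the truncated coupling Hamiltonian in the basis $(a, b, c_{+1}, c_0, c_{-1})$. The small Python sketch below assumes equal couplings $g_A=g_B=g$ and illustrative values of $g$ and $\textnormal{FSR}_\textrm{fib}$; cavity $a$ couples to every fiber mode with strength $g$ and cavity $b$ with $(-1)^n g$, as in \eeqref{eq:Hcavfib}.

```python
import numpy as np

# Truncated cavity-fiber-cavity Hamiltonian in the basis (a, b, c_{+1}, c_0, c_{-1}):
# modes c_{+/-1} carry energies +/- FSR_fib; couplings follow Eq. (eq:Hcavfib).
g, FSR = 1.0, 3.0  # illustrative values
H = np.array([[0,  0,  g,   g,  g],
              [0,  0, -g,   g, -g],
              [g, -g,  FSR, 0,  0],
              [g,  g,  0,   0,  0],
              [g, -g,  0,   0, -FSR]], dtype=float)

numeric = np.sort(np.linalg.eigvalsh(H))
analytic = np.sort([0.0,  np.sqrt(2) * g, -np.sqrt(2) * g,
                    np.sqrt(4 * g**2 + FSR**2), -np.sqrt(4 * g**2 + FSR**2)])
print(np.round(numeric, 6))   # matches the analytic eigenfrequencies
```

The symmetric cavity mode $(a+b)/\sqrt{2}$ couples only to $c_0$, giving $\pm\sqrt{2}g$, while the antisymmetric mode couples to $c_{\pm 1}$, giving $0$ and $\pm\sqrt{4g^2+\textnormal{FSR}_\textrm{fib}^2}$, in agreement with \eeqref{eq:eigenenergies}.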
The corresponding eigenstates of the hybrid modes $\bar{c}_n$ can be written as a superposition of the original basis of cavity $a$ and $b$ and fiber modes $c_n$ as used in the main text, \begin{align} \begin{psmallmatrix} \bar{c}_0 \\ \bar{c}_{+1} \\ \bar{c}_{-1} \\ \bar{c}_{+2} \\ \bar{c}_{-2} \end{psmallmatrix} &=\mathfrak{M} \begin{psmallmatrix} a \\ b \\ c_{+1}\\ c_0\\ c_{-1} \\ \end{psmallmatrix}, \end{align} with $\mathfrak{M}$ given by \begin{align} \mathcal{N} \begin{psmallmatrix} \frac{\textnormal{FSR}_\textrm{fib}}{\sqrt{2} \bar{\omega}_{\pm 1}^2} & - \frac{\textnormal{FSR}_\textrm{fib}}{\sqrt{2} \bar{\omega}_{\pm 1}^2} & -1 & 0 & 1 \\ -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}&0&1&0 \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}&0&1&0 \\ \frac{\textnormal{FSR}_\textrm{fib}+\bar{\omega}_{-2}}{\sqrt{2} \bar{\omega}_{\pm 1}^2} & \frac{-\textnormal{FSR}_\textrm{fib}+\bar{\omega}_{+2}}{\sqrt{2} \bar{\omega}_{\pm 1}^2} & 1 + \frac{\textnormal{FSR}_\textrm{fib} (\textnormal{FSR}_\textrm{fib}+\bar{\omega}_{-2})}{\bar{\omega}_{\pm 1}^2} &0&1\\ \frac{\textnormal{FSR}_\textrm{fib}+\bar{\omega}_{+2}}{\sqrt{2} \bar{\omega}_{\pm 1}^2} & \frac{-\textnormal{FSR}_\textrm{fib}+\bar{\omega}_{-2}}{\sqrt{2} \bar{\omega}_{\pm 1}^2} & 1 + \frac{\textnormal{FSR}_\textrm{fib} (\textnormal{FSR}_\textrm{fib}+\bar{\omega}_{+2})}{\bar{\omega}_{\pm 1}^2} &0&1 \end{psmallmatrix}, \end{align} where $\mathcal{N}$ indicates the proper normalization (not shown here). \begin{figure} \caption{Cavity content as a function of the eigenenergies $\bar{\omega}_n$.\label{fig:phphbasis}} \end{figure} \end{document}